public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     d28f00bbf4fb465719706fb9a629104a7ca43d33
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 14 11:39:53 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:41 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d28f00bb

proj/linux-patches: Removal of redundant patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ----
 1800_TCA-OPTIONS-sched-fix.patch | 35 -----------------------------------
 2 files changed, 39 deletions(-)

diff --git a/0000_README b/0000_README
index afaac7a..4d0ed54 100644
--- a/0000_README
+++ b/0000_README
@@ -127,10 +127,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1800_TCA-OPTIONS-sched-fix.patch
-From:   https://git.kernel.org
-Desc:   net: sched: Remove TCA_OPTIONS from policy
-
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1800_TCA-OPTIONS-sched-fix.patch b/1800_TCA-OPTIONS-sched-fix.patch
deleted file mode 100644
index f960fac..0000000
--- a/1800_TCA-OPTIONS-sched-fix.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From e72bde6b66299602087c8c2350d36a525e75d06e Mon Sep 17 00:00:00 2001
-From: David Ahern <dsahern@gmail.com>
-Date: Wed, 24 Oct 2018 08:32:49 -0700
-Subject: net: sched: Remove TCA_OPTIONS from policy
-
-Marco reported an error with hfsc:
-root@Calimero:~# tc qdisc add dev eth0 root handle 1:0 hfsc default 1
-Error: Attribute failed policy validation.
-
-Apparently a few implementations pass TCA_OPTIONS as a binary instead
-of nested attribute, so drop TCA_OPTIONS from the policy.
-
-Fixes: 8b4c3cdd9dd8 ("net: sched: Add policy validation for tc attributes")
-Reported-by: Marco Berizzi <pupilla@libero.it>
-Signed-off-by: David Ahern <dsahern@gmail.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
----
- net/sched/sch_api.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
-index 022bca98bde6..ca3b0f46de53 100644
---- a/net/sched/sch_api.c
-+++ b/net/sched/sch_api.c
-@@ -1320,7 +1320,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
- 
- const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
- 	[TCA_KIND]		= { .type = NLA_STRING },
--	[TCA_OPTIONS]		= { .type = NLA_NESTED },
- 	[TCA_RATE]		= { .type = NLA_BINARY,
- 				    .len = sizeof(struct tc_estimator) },
- 	[TCA_STAB]		= { .type = NLA_NESTED },
--- 
-cgit 1.2-0.3.lf.el7
-


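[Editorial aside: the reasoning in the TCA_OPTIONS removal above can be sketched in a few lines. The following is a simplified, hypothetical model of netlink attribute policy validation, not the kernel's actual code (which lives in lib/nlattr.c): an attribute whose policy declares it NLA_NESTED fails validation when a sender encodes it as flat binary, while dropping the attribute from the policy table makes the validator accept either encoding — which is exactly why removing TCA_OPTIONS from rtm_tca_policy fixes the hfsc failure reported in the commit message.]

```python
# Simplified model of netlink attribute policy validation (illustration
# only; names and encodings here are stand-ins, not kernel APIs).

def validate(attrs, policy):
    """Reject any attribute whose encoding conflicts with the policy.

    Attributes absent from the policy are accepted as-is, mirroring how
    the kernel skips validation for types not listed in the policy table.
    """
    for name, encoding in attrs.items():
        expected = policy.get(name)
        if expected is not None and encoding != expected:
            return "Error: Attribute failed policy validation."
    return "ok"

# Policy before the fix: TCA_OPTIONS declared as a nested attribute.
strict_policy = {"TCA_KIND": "string", "TCA_OPTIONS": "nested"}
# Policy after the fix: TCA_OPTIONS removed, so any encoding passes.
fixed_policy = {"TCA_KIND": "string"}

# hfsc (and a few other qdisc implementations) pass TCA_OPTIONS as
# flat binary data rather than a nested attribute.
hfsc_attrs = {"TCA_KIND": "string", "TCA_OPTIONS": "binary"}

print(validate(hfsc_attrs, strict_policy))  # rejected before the fix
print(validate(hfsc_attrs, fixed_policy))   # accepted after the fix
```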
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-21 12:28 Mike Pagano
From: Mike Pagano @ 2018-11-21 12:28 UTC (permalink / raw)
  To: gentoo-commits

commit:     f038ea3a40fec1a50410f3e39b1ca402f8d6543c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 21 12:28:27 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 21 12:28:27 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f038ea3a

proj/linux-patches: Linux patch 4.18.20

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1019_linux-4.18.20.patch | 4811 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4815 insertions(+)

diff --git a/0000_README b/0000_README
index 4d0ed54..805997e 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1018_linux-4.18.19.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.19
 
+Patch:  1019_linux-4.18.20.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-4.18.20.patch b/1019_linux-4.18.20.patch
new file mode 100644
index 0000000..6ea25b7
--- /dev/null
+++ b/1019_linux-4.18.20.patch
@@ -0,0 +1,4811 @@
+diff --git a/Makefile b/Makefile
+index 71642133ba22..5f6697c4dbbc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/alpha/include/asm/termios.h b/arch/alpha/include/asm/termios.h
+index 6a8c53dec57e..b7c77bb1bfd2 100644
+--- a/arch/alpha/include/asm/termios.h
++++ b/arch/alpha/include/asm/termios.h
+@@ -73,9 +73,15 @@
+ })
+ 
+ #define user_termios_to_kernel_termios(k, u) \
+-	copy_from_user(k, u, sizeof(struct termios))
++	copy_from_user(k, u, sizeof(struct termios2))
+ 
+ #define kernel_termios_to_user_termios(u, k) \
++	copy_to_user(u, k, sizeof(struct termios2))
++
++#define user_termios_to_kernel_termios_1(k, u) \
++	copy_from_user(k, u, sizeof(struct termios))
++
++#define kernel_termios_to_user_termios_1(u, k) \
+ 	copy_to_user(u, k, sizeof(struct termios))
+ 
+ #endif	/* _ALPHA_TERMIOS_H */
+diff --git a/arch/alpha/include/uapi/asm/ioctls.h b/arch/alpha/include/uapi/asm/ioctls.h
+index 3729d92d3fa8..dc8c20ac7191 100644
+--- a/arch/alpha/include/uapi/asm/ioctls.h
++++ b/arch/alpha/include/uapi/asm/ioctls.h
+@@ -32,6 +32,11 @@
+ #define TCXONC		_IO('t', 30)
+ #define TCFLSH		_IO('t', 31)
+ 
++#define TCGETS2		_IOR('T', 42, struct termios2)
++#define TCSETS2		_IOW('T', 43, struct termios2)
++#define TCSETSW2	_IOW('T', 44, struct termios2)
++#define TCSETSF2	_IOW('T', 45, struct termios2)
++
+ #define TIOCSWINSZ	_IOW('t', 103, struct winsize)
+ #define TIOCGWINSZ	_IOR('t', 104, struct winsize)
+ #define	TIOCSTART	_IO('t', 110)		/* start output, like ^Q */
+diff --git a/arch/alpha/include/uapi/asm/termbits.h b/arch/alpha/include/uapi/asm/termbits.h
+index de6c8360fbe3..4575ba34a0ea 100644
+--- a/arch/alpha/include/uapi/asm/termbits.h
++++ b/arch/alpha/include/uapi/asm/termbits.h
+@@ -26,6 +26,19 @@ struct termios {
+ 	speed_t c_ospeed;		/* output speed */
+ };
+ 
++/* Alpha has identical termios and termios2 */
++
++struct termios2 {
++	tcflag_t c_iflag;		/* input mode flags */
++	tcflag_t c_oflag;		/* output mode flags */
++	tcflag_t c_cflag;		/* control mode flags */
++	tcflag_t c_lflag;		/* local mode flags */
++	cc_t c_cc[NCCS];		/* control characters */
++	cc_t c_line;			/* line discipline (== c_cc[19]) */
++	speed_t c_ispeed;		/* input speed */
++	speed_t c_ospeed;		/* output speed */
++};
++
+ /* Alpha has matching termios and ktermios */
+ 
+ struct ktermios {
+@@ -152,6 +165,7 @@ struct ktermios {
+ #define B3000000  00034
+ #define B3500000  00035
+ #define B4000000  00036
++#define BOTHER    00037
+ 
+ #define CSIZE	00001400
+ #define   CS5	00000000
+@@ -169,6 +183,9 @@ struct ktermios {
+ #define CMSPAR	  010000000000		/* mark or space (stick) parity */
+ #define CRTSCTS	  020000000000		/* flow control */
+ 
++#define CIBAUD	07600000
++#define IBSHIFT	16
++
+ /* c_lflag bits */
+ #define ISIG	0x00000080
+ #define ICANON	0x00000100
+diff --git a/arch/arm/boot/dts/imx6ull-pinfunc.h b/arch/arm/boot/dts/imx6ull-pinfunc.h
+index fdc46bb09cc1..3c12a6fb0b61 100644
+--- a/arch/arm/boot/dts/imx6ull-pinfunc.h
++++ b/arch/arm/boot/dts/imx6ull-pinfunc.h
+@@ -14,14 +14,23 @@
+  * The pin function ID is a tuple of
+  * <mux_reg conf_reg input_reg mux_mode input_val>
+  */
++/* signals common for i.MX6UL and i.MX6ULL */
++#undef MX6UL_PAD_UART5_TX_DATA__UART5_DTE_RX
++#define MX6UL_PAD_UART5_TX_DATA__UART5_DTE_RX                    0x00BC 0x0348 0x0644 0x0 0x6
++#undef MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX
++#define MX6UL_PAD_UART5_RX_DATA__UART5_DCE_RX                    0x00C0 0x034C 0x0644 0x0 0x7
++#undef MX6UL_PAD_ENET1_RX_EN__UART5_DCE_RTS
++#define MX6UL_PAD_ENET1_RX_EN__UART5_DCE_RTS                     0x00CC 0x0358 0x0640 0x1 0x5
++#undef MX6UL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS
++#define MX6UL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS                  0x00D0 0x035C 0x0640 0x1 0x6
++#undef MX6UL_PAD_CSI_DATA02__UART5_DCE_RTS
++#define MX6UL_PAD_CSI_DATA02__UART5_DCE_RTS                      0x01EC 0x0478 0x0640 0x8 0x7
++
++/* signals for i.MX6ULL only */
+ #define MX6ULL_PAD_UART1_TX_DATA__UART5_DTE_RX                    0x0084 0x0310 0x0644 0x9 0x4
+ #define MX6ULL_PAD_UART1_RX_DATA__UART5_DCE_RX                    0x0088 0x0314 0x0644 0x9 0x5
+ #define MX6ULL_PAD_UART1_CTS_B__UART5_DCE_RTS                     0x008C 0x0318 0x0640 0x9 0x3
+ #define MX6ULL_PAD_UART1_RTS_B__UART5_DTE_RTS                     0x0090 0x031C 0x0640 0x9 0x4
+-#define MX6ULL_PAD_UART5_TX_DATA__UART5_DTE_RX                    0x00BC 0x0348 0x0644 0x0 0x6
+-#define MX6ULL_PAD_UART5_RX_DATA__UART5_DCE_RX                    0x00C0 0x034C 0x0644 0x0 0x7
+-#define MX6ULL_PAD_ENET1_RX_EN__UART5_DCE_RTS                     0x00CC 0x0358 0x0640 0x1 0x5
+-#define MX6ULL_PAD_ENET1_TX_DATA0__UART5_DTE_RTS                  0x00D0 0x035C 0x0640 0x1 0x6
+ #define MX6ULL_PAD_ENET2_RX_DATA0__EPDC_SDDO08                    0x00E4 0x0370 0x0000 0x9 0x0
+ #define MX6ULL_PAD_ENET2_RX_DATA1__EPDC_SDDO09                    0x00E8 0x0374 0x0000 0x9 0x0
+ #define MX6ULL_PAD_ENET2_RX_EN__EPDC_SDDO10                       0x00EC 0x0378 0x0000 0x9 0x0
+@@ -55,7 +64,6 @@
+ #define MX6ULL_PAD_CSI_DATA00__ESAI_TX_HF_CLK                     0x01E4 0x0470 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA01__ESAI_RX_HF_CLK                     0x01E8 0x0474 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA02__ESAI_RX_FS                         0x01EC 0x0478 0x0000 0x9 0x0
+-#define MX6ULL_PAD_CSI_DATA02__UART5_DCE_RTS                      0x01EC 0x0478 0x0640 0x8 0x7
+ #define MX6ULL_PAD_CSI_DATA03__ESAI_RX_CLK                        0x01F0 0x047C 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS                         0x01F4 0x0480 0x0000 0x9 0x0
+ #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK                        0x01F8 0x0484 0x0000 0x9 0x0
+diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
+index 200ebda47e0c..254dcad97e67 100644
+--- a/arch/arm/configs/imx_v6_v7_defconfig
++++ b/arch/arm/configs/imx_v6_v7_defconfig
+@@ -406,6 +406,7 @@ CONFIG_ZISOFS=y
+ CONFIG_UDF_FS=m
+ CONFIG_MSDOS_FS=m
+ CONFIG_VFAT_FS=y
++CONFIG_TMPFS_POSIX_ACL=y
+ CONFIG_JFFS2_FS=y
+ CONFIG_UBIFS_FS=y
+ CONFIG_NFS_FS=y
+diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
+index 6fe52819e014..339eb17c9808 100644
+--- a/arch/arm/mm/proc-v7.S
++++ b/arch/arm/mm/proc-v7.S
+@@ -112,7 +112,7 @@ ENTRY(cpu_v7_hvc_switch_mm)
+ 	hvc	#0
+ 	ldmfd	sp!, {r0 - r3}
+ 	b	cpu_v7_switch_mm
+-ENDPROC(cpu_v7_smc_switch_mm)
++ENDPROC(cpu_v7_hvc_switch_mm)
+ #endif
+ ENTRY(cpu_v7_iciallu_switch_mm)
+ 	mov	r3, #0
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index 3989876ab699..6c8bd13d64b8 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -131,6 +131,9 @@
+ 			reset-names = "stmmaceth";
+ 			clocks = <&clkmgr STRATIX10_EMAC0_CLK>;
+ 			clock-names = "stmmaceth";
++			tx-fifo-depth = <16384>;
++			rx-fifo-depth = <16384>;
++			snps,multicast-filter-bins = <256>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -144,6 +147,9 @@
+ 			reset-names = "stmmaceth";
+ 			clocks = <&clkmgr STRATIX10_EMAC1_CLK>;
+ 			clock-names = "stmmaceth";
++			tx-fifo-depth = <16384>;
++			rx-fifo-depth = <16384>;
++			snps,multicast-filter-bins = <256>;
+ 			status = "disabled";
+ 		};
+ 
+@@ -157,6 +163,9 @@
+ 			reset-names = "stmmaceth";
+ 			clocks = <&clkmgr STRATIX10_EMAC2_CLK>;
+ 			clock-names = "stmmaceth";
++			tx-fifo-depth = <16384>;
++			rx-fifo-depth = <16384>;
++			snps,multicast-filter-bins = <256>;
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+index f9b1ef12db48..fb1b9ddd9f51 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts
+@@ -76,7 +76,7 @@
+ 	phy-mode = "rgmii";
+ 	phy-handle = <&phy0>;
+ 
+-	max-frame-size = <3800>;
++	max-frame-size = <9000>;
+ 
+ 	mdio0 {
+ 		#address-cells = <1>;
+diff --git a/arch/mips/include/asm/mach-loongson64/irq.h b/arch/mips/include/asm/mach-loongson64/irq.h
+index 3644b68c0ccc..be9f727a9328 100644
+--- a/arch/mips/include/asm/mach-loongson64/irq.h
++++ b/arch/mips/include/asm/mach-loongson64/irq.h
+@@ -10,7 +10,7 @@
+ #define MIPS_CPU_IRQ_BASE 56
+ 
+ #define LOONGSON_UART_IRQ   (MIPS_CPU_IRQ_BASE + 2) /* UART */
+-#define LOONGSON_HT1_IRQ    (MIPS_CPU_IRQ_BASE + 3) /* HT1 */
++#define LOONGSON_BRIDGE_IRQ (MIPS_CPU_IRQ_BASE + 3) /* CASCADE */
+ #define LOONGSON_TIMER_IRQ  (MIPS_CPU_IRQ_BASE + 7) /* CPU Timer */
+ 
+ #define LOONGSON_HT1_CFG_BASE		loongson_sysconf.ht_control_base
+diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
+index d455363d51c3..4c07a43a3242 100644
+--- a/arch/mips/kernel/crash.c
++++ b/arch/mips/kernel/crash.c
+@@ -36,6 +36,9 @@ static void crash_shutdown_secondary(void *passed_regs)
+ 	if (!cpu_online(cpu))
+ 		return;
+ 
++	/* We won't be sent IPIs any more. */
++	set_cpu_online(cpu, false);
++
+ 	local_irq_disable();
+ 	if (!cpumask_test_cpu(cpu, &cpus_in_crash))
+ 		crash_save_cpu(regs, cpu);
+diff --git a/arch/mips/kernel/machine_kexec.c b/arch/mips/kernel/machine_kexec.c
+index 8b574bcd39ba..4b3726e4fe3a 100644
+--- a/arch/mips/kernel/machine_kexec.c
++++ b/arch/mips/kernel/machine_kexec.c
+@@ -118,6 +118,9 @@ machine_kexec(struct kimage *image)
+ 			*ptr = (unsigned long) phys_to_virt(*ptr);
+ 	}
+ 
++	/* Mark offline BEFORE disabling local irq. */
++	set_cpu_online(smp_processor_id(), false);
++
+ 	/*
+ 	 * we do not want to be bothered.
+ 	 */
+diff --git a/arch/mips/loongson64/loongson-3/irq.c b/arch/mips/loongson64/loongson-3/irq.c
+index cbeb20f9fc95..5605061f5f98 100644
+--- a/arch/mips/loongson64/loongson-3/irq.c
++++ b/arch/mips/loongson64/loongson-3/irq.c
+@@ -96,51 +96,8 @@ void mach_irq_dispatch(unsigned int pending)
+ 	}
+ }
+ 
+-static struct irqaction cascade_irqaction = {
+-	.handler = no_action,
+-	.flags = IRQF_NO_SUSPEND,
+-	.name = "cascade",
+-};
+-
+-static inline void mask_loongson_irq(struct irq_data *d)
+-{
+-	clear_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
+-	irq_disable_hazard();
+-
+-	/* Workaround: UART IRQ may deliver to any core */
+-	if (d->irq == LOONGSON_UART_IRQ) {
+-		int cpu = smp_processor_id();
+-		int node_id = cpu_logical_map(cpu) / loongson_sysconf.cores_per_node;
+-		int core_id = cpu_logical_map(cpu) % loongson_sysconf.cores_per_node;
+-		u64 intenclr_addr = smp_group[node_id] |
+-			(u64)(&LOONGSON_INT_ROUTER_INTENCLR);
+-		u64 introuter_lpc_addr = smp_group[node_id] |
+-			(u64)(&LOONGSON_INT_ROUTER_LPC);
+-
+-		*(volatile u32 *)intenclr_addr = 1 << 10;
+-		*(volatile u8 *)introuter_lpc_addr = 0x10 + (1<<core_id);
+-	}
+-}
+-
+-static inline void unmask_loongson_irq(struct irq_data *d)
+-{
+-	/* Workaround: UART IRQ may deliver to any core */
+-	if (d->irq == LOONGSON_UART_IRQ) {
+-		int cpu = smp_processor_id();
+-		int node_id = cpu_logical_map(cpu) / loongson_sysconf.cores_per_node;
+-		int core_id = cpu_logical_map(cpu) % loongson_sysconf.cores_per_node;
+-		u64 intenset_addr = smp_group[node_id] |
+-			(u64)(&LOONGSON_INT_ROUTER_INTENSET);
+-		u64 introuter_lpc_addr = smp_group[node_id] |
+-			(u64)(&LOONGSON_INT_ROUTER_LPC);
+-
+-		*(volatile u32 *)intenset_addr = 1 << 10;
+-		*(volatile u8 *)introuter_lpc_addr = 0x10 + (1<<core_id);
+-	}
+-
+-	set_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
+-	irq_enable_hazard();
+-}
++static inline void mask_loongson_irq(struct irq_data *d) { }
++static inline void unmask_loongson_irq(struct irq_data *d) { }
+ 
+  /* For MIPS IRQs which shared by all cores */
+ static struct irq_chip loongson_irq_chip = {
+@@ -183,12 +140,11 @@ void __init mach_init_irq(void)
+ 	chip->irq_set_affinity = plat_set_irq_affinity;
+ 
+ 	irq_set_chip_and_handler(LOONGSON_UART_IRQ,
+-			&loongson_irq_chip, handle_level_irq);
+-
+-	/* setup HT1 irq */
+-	setup_irq(LOONGSON_HT1_IRQ, &cascade_irqaction);
++			&loongson_irq_chip, handle_percpu_irq);
++	irq_set_chip_and_handler(LOONGSON_BRIDGE_IRQ,
++			&loongson_irq_chip, handle_percpu_irq);
+ 
+-	set_c0_status(STATUSF_IP2 | STATUSF_IP6);
++	set_c0_status(STATUSF_IP2 | STATUSF_IP3 | STATUSF_IP6);
+ }
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
+index f1e92bf743c2..3c3b1e6abb53 100644
+--- a/arch/mips/pci/pci-legacy.c
++++ b/arch/mips/pci/pci-legacy.c
+@@ -127,8 +127,12 @@ static void pcibios_scanbus(struct pci_controller *hose)
+ 	if (pci_has_flag(PCI_PROBE_ONLY)) {
+ 		pci_bus_claim_resources(bus);
+ 	} else {
++		struct pci_bus *child;
++
+ 		pci_bus_size_bridges(bus);
+ 		pci_bus_assign_resources(bus);
++		list_for_each_entry(child, &bus->children, node)
++			pcie_bus_configure_settings(child);
+ 	}
+ 	pci_bus_add_devices(bus);
+ }
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index fb96206de317..2510ff9381d0 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -244,7 +244,11 @@ cpu-as-$(CONFIG_4xx)		+= -Wa,-m405
+ cpu-as-$(CONFIG_ALTIVEC)	+= $(call as-option,-Wa$(comma)-maltivec)
+ cpu-as-$(CONFIG_E200)		+= -Wa,-me200
+ cpu-as-$(CONFIG_E500)		+= -Wa,-me500
+-cpu-as-$(CONFIG_PPC_BOOK3S_64)	+= -Wa,-mpower4
++
++# When using '-many -mpower4' gas will first try and find a matching power4
++# mnemonic and failing that it will allow any valid mnemonic that GAS knows
++# about. GCC will pass -many to GAS when assembling, clang does not.
++cpu-as-$(CONFIG_PPC_BOOK3S_64)	+= -Wa,-mpower4 -Wa,-many
+ cpu-as-$(CONFIG_PPC_E500MC)	+= $(call as-option,-Wa$(comma)-me500mc)
+ 
+ KBUILD_AFLAGS += $(cpu-as-y)
+diff --git a/arch/powerpc/boot/crt0.S b/arch/powerpc/boot/crt0.S
+index dcf2f15e6797..32dfe6d083f3 100644
+--- a/arch/powerpc/boot/crt0.S
++++ b/arch/powerpc/boot/crt0.S
+@@ -47,8 +47,10 @@ p_end:		.long	_end
+ p_pstack:	.long	_platform_stack_top
+ #endif
+ 
+-	.weak	_zimage_start
+ 	.globl	_zimage_start
++	/* Clang appears to require the .weak directive to be after the symbol
++	 * is defined. See https://bugs.llvm.org/show_bug.cgi?id=38921  */
++	.weak	_zimage_start
+ _zimage_start:
+ 	.globl	_zimage_start_lib
+ _zimage_start_lib:
+diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/mmu-8xx.h
+index 4f547752ae79..193f53116c7a 100644
+--- a/arch/powerpc/include/asm/mmu-8xx.h
++++ b/arch/powerpc/include/asm/mmu-8xx.h
+@@ -34,20 +34,12 @@
+  * respectively NA for All or X for Supervisor and no access for User.
+  * Then we use the APG to say whether accesses are according to Page rules or
+  * "all Supervisor" rules (Access to all)
+- * We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
+- * When that bit is not set access is done iaw "all user"
+- * which means no access iaw page rules.
+- * Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
+- * 0x => No access => 11 (all accesses performed as user iaw page definition)
+- * 10 => No user => 01 (all accesses performed according to page definition)
+- * 11 => User => 00 (all accesses performed as supervisor iaw page definition)
++ * Therefore, we define 2 APG groups. lsb is _PMD_USER
++ * 0 => No user => 01 (all accesses performed according to page definition)
++ * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
+  * We define all 16 groups so that all other bits of APG can take any value
+  */
+-#ifdef CONFIG_SWAP
+-#define MI_APG_INIT	0xf4f4f4f4
+-#else
+ #define MI_APG_INIT	0x44444444
+-#endif
+ 
+ /* The effective page number register.  When read, contains the information
+  * about the last instruction TLB miss.  When MI_RPN is written, bits in
+@@ -115,20 +107,12 @@
+  * Supervisor and no access for user and NA for ALL.
+  * Then we use the APG to say whether accesses are according to Page rules or
+  * "all Supervisor" rules (Access to all)
+- * We also use the 2nd APG bit for _PAGE_ACCESSED when having SWAP:
+- * When that bit is not set access is done iaw "all user"
+- * which means no access iaw page rules.
+- * Therefore, we define 4 APG groups. lsb is _PMD_USER, 2nd is _PAGE_ACCESSED
+- * 0x => No access => 11 (all accesses performed as user iaw page definition)
+- * 10 => No user => 01 (all accesses performed according to page definition)
+- * 11 => User => 00 (all accesses performed as supervisor iaw page definition)
++ * Therefore, we define 2 APG groups. lsb is _PMD_USER
++ * 0 => No user => 01 (all accesses performed according to page definition)
++ * 1 => User => 00 (all accesses performed as supervisor iaw page definition)
+  * We define all 16 groups so that all other bits of APG can take any value
+  */
+-#ifdef CONFIG_SWAP
+-#define MD_APG_INIT	0xf4f4f4f4
+-#else
+ #define MD_APG_INIT	0x44444444
+-#endif
+ 
+ /* The effective page number register.  When read, contains the information
+  * about the last instruction TLB miss.  When MD_RPN is written, bits in
+@@ -180,12 +164,6 @@
+  */
+ #define SPRN_M_TW	799
+ 
+-/* APGs */
+-#define M_APG0		0x00000000
+-#define M_APG1		0x00000020
+-#define M_APG2		0x00000040
+-#define M_APG3		0x00000060
+-
+ #ifdef CONFIG_PPC_MM_SLICES
+ #include <asm/nohash/32/slice.h>
+ #define SLICE_ARRAY_SIZE	(1 << (32 - SLICE_LOW_SHIFT - 1))
+diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
+index 5746809cfaad..bf0a02038cad 100644
+--- a/arch/powerpc/kernel/eeh.c
++++ b/arch/powerpc/kernel/eeh.c
+@@ -169,6 +169,11 @@ static size_t eeh_dump_dev_log(struct eeh_dev *edev, char *buf, size_t len)
+ 	int n = 0, l = 0;
+ 	char buffer[128];
+ 
++	if (!pdn) {
++		pr_warn("EEH: Note: No error log for absent device.\n");
++		return 0;
++	}
++
+ 	n += scnprintf(buf+n, len-n, "%04x:%02x:%02x.%01x\n",
+ 		       pdn->phb->global_number, pdn->busno,
+ 		       PCI_SLOT(pdn->devfn), PCI_FUNC(pdn->devfn));
+diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
+index 6cab07e76732..19bdc65d05b8 100644
+--- a/arch/powerpc/kernel/head_8xx.S
++++ b/arch/powerpc/kernel/head_8xx.S
+@@ -354,13 +354,14 @@ _ENTRY(ITLBMiss_cmp)
+ #if defined(ITLB_MISS_KERNEL) || defined(CONFIG_HUGETLB_PAGE)
+ 	mtcr	r12
+ #endif
+-
+-#ifdef CONFIG_SWAP
+-	rlwinm	r11, r10, 31, _PAGE_ACCESSED >> 1
+-#endif
+ 	/* Load the MI_TWC with the attributes for this "segment." */
+ 	mtspr	SPRN_MI_TWC, r11	/* Set segment attributes */
+ 
++#ifdef CONFIG_SWAP
++	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
++	and	r11, r11, r10
++	rlwimi	r10, r11, 0, _PAGE_PRESENT
++#endif
+ 	li	r11, RPN_PATTERN | 0x200
+ 	/* The Linux PTE won't go exactly into the MMU TLB.
+ 	 * Software indicator bits 20 and 23 must be clear.
+@@ -471,14 +472,22 @@ _ENTRY(DTLBMiss_jmp)
+ 	 * above.
+ 	 */
+ 	rlwimi	r11, r10, 0, _PAGE_GUARDED
+-#ifdef CONFIG_SWAP
+-	/* _PAGE_ACCESSED has to be set. We use second APG bit for that, 0
+-	 * on that bit will represent a Non Access group
+-	 */
+-	rlwinm	r11, r10, 31, _PAGE_ACCESSED >> 1
+-#endif
+ 	mtspr	SPRN_MD_TWC, r11
+ 
++	/* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set.
++	 * We also need to know if the insn is a load/store, so:
++	 * Clear _PAGE_PRESENT and load that which will
++	 * trap into DTLB Error with store bit set accordinly.
++	 */
++	/* PRESENT=0x1, ACCESSED=0x20
++	 * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5));
++	 * r10 = (r10 & ~PRESENT) | r11;
++	 */
++#ifdef CONFIG_SWAP
++	rlwinm	r11, r10, 32-5, _PAGE_PRESENT
++	and	r11, r11, r10
++	rlwimi	r10, r11, 0, _PAGE_PRESENT
++#endif
+ 	/* The Linux PTE won't go exactly into the MMU TLB.
+ 	 * Software indicator bits 24, 25, 26, and 27 must be
+ 	 * set.  All other Linux PTE bits control the behavior
+@@ -638,8 +647,8 @@ InstructionBreakpoint:
+  */
+ DTLBMissIMMR:
+ 	mtcr	r12
+-	/* Set 512k byte guarded page and mark it valid and accessed */
+-	li	r10, MD_PS512K | MD_GUARDED | MD_SVALID | M_APG2
++	/* Set 512k byte guarded page and mark it valid */
++	li	r10, MD_PS512K | MD_GUARDED | MD_SVALID
+ 	mtspr	SPRN_MD_TWC, r10
+ 	mfspr	r10, SPRN_IMMR			/* Get current IMMR */
+ 	rlwinm	r10, r10, 0, 0xfff80000		/* Get 512 kbytes boundary */
+@@ -657,8 +666,8 @@ _ENTRY(dtlb_miss_exit_2)
+ 
+ DTLBMissLinear:
+ 	mtcr	r12
+-	/* Set 8M byte page and mark it valid and accessed */
+-	li	r11, MD_PS8MEG | MD_SVALID | M_APG2
++	/* Set 8M byte page and mark it valid */
++	li	r11, MD_PS8MEG | MD_SVALID
+ 	mtspr	SPRN_MD_TWC, r11
+ 	rlwinm	r10, r10, 0, 0x0f800000	/* 8xx supports max 256Mb RAM */
+ 	ori	r10, r10, 0xf0 | MD_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
+@@ -676,8 +685,8 @@ _ENTRY(dtlb_miss_exit_3)
+ #ifndef CONFIG_PIN_TLB_TEXT
+ ITLBMissLinear:
+ 	mtcr	r12
+-	/* Set 8M byte page and mark it valid,accessed */
+-	li	r11, MI_PS8MEG | MI_SVALID | M_APG2
++	/* Set 8M byte page and mark it valid */
++	li	r11, MI_PS8MEG | MI_SVALID
+ 	mtspr	SPRN_MI_TWC, r11
+ 	rlwinm	r10, r10, 0, 0x0f800000	/* 8xx supports max 256Mb RAM */
+ 	ori	r10, r10, 0xf0 | MI_SPS16K | _PAGE_PRIVILEGED | _PAGE_DIRTY | \
+@@ -960,7 +969,7 @@ initial_mmu:
+ 	ori	r8, r8, MI_EVALID	/* Mark it valid */
+ 	mtspr	SPRN_MI_EPN, r8
+ 	li	r8, MI_PS8MEG /* Set 8M byte page */
+-	ori	r8, r8, MI_SVALID | M_APG2	/* Make it valid, APG 2 */
++	ori	r8, r8, MI_SVALID	/* Make it valid */
+ 	mtspr	SPRN_MI_TWC, r8
+ 	li	r8, MI_BOOTINIT		/* Create RPN for address 0 */
+ 	mtspr	SPRN_MI_RPN, r8		/* Store TLB entry */
+@@ -987,7 +996,7 @@ initial_mmu:
+ 	ori	r8, r8, MD_EVALID	/* Mark it valid */
+ 	mtspr	SPRN_MD_EPN, r8
+ 	li	r8, MD_PS512K | MD_GUARDED	/* Set 512k byte page */
+-	ori	r8, r8, MD_SVALID | M_APG2	/* Make it valid and accessed */
++	ori	r8, r8, MD_SVALID	/* Make it valid */
+ 	mtspr	SPRN_MD_TWC, r8
+ 	mr	r8, r9			/* Create paddr for TLB */
+ 	ori	r8, r8, MI_BOOTINIT|0x2 /* Inhibit cache -- Cort */
+diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
+index b8d61e019d06..f7b1203bdaee 100644
+--- a/arch/powerpc/kernel/module_64.c
++++ b/arch/powerpc/kernel/module_64.c
+@@ -685,7 +685,14 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
+ 
+ 		case R_PPC64_REL32:
+ 			/* 32 bits relative (used by relative exception tables) */
+-			*(u32 *)location = value - (unsigned long)location;
++			/* Convert value to relative */
++			value -= (unsigned long)location;
++			if (value + 0x80000000 > 0xffffffff) {
++				pr_err("%s: REL32 %li out of range!\n",
++				       me->name, (long int)value);
++				return -ENOEXEC;
++			}
++			*(u32 *)location = value;
+ 			break;
+ 
+ 		case R_PPC64_TOCSAVE:
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 0e17dcb48720..6bfcb5a506af 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -736,12 +736,17 @@ void machine_check_exception(struct pt_regs *regs)
+ 	if (check_io_access(regs))
+ 		goto bail;
+ 
+-	die("Machine check", regs, SIGBUS);
+-
+ 	/* Must die if the interrupt is not recoverable */
+ 	if (!(regs->msr & MSR_RI))
+ 		nmi_panic(regs, "Unrecoverable Machine check");
+ 
++	if (!nested)
++		nmi_exit();
++
++	die("Machine check", regs, SIGBUS);
++
++	return;
++
+ bail:
+ 	if (!nested)
+ 		nmi_exit();
+diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
+index cf77d755246d..5d53684c2ebd 100644
+--- a/arch/powerpc/mm/8xx_mmu.c
++++ b/arch/powerpc/mm/8xx_mmu.c
+@@ -79,7 +79,7 @@ void __init MMU_init_hw(void)
+ 	for (; i < 32 && mem >= LARGE_PAGE_SIZE_8M; i++) {
+ 		mtspr(SPRN_MD_CTR, ctr | (i << 8));
+ 		mtspr(SPRN_MD_EPN, (unsigned long)__va(addr) | MD_EVALID);
+-		mtspr(SPRN_MD_TWC, MD_PS8MEG | MD_SVALID | M_APG2);
++		mtspr(SPRN_MD_TWC, MD_PS8MEG | MD_SVALID);
+ 		mtspr(SPRN_MD_RPN, addr | flags | _PAGE_PRESENT);
+ 		addr += LARGE_PAGE_SIZE_8M;
+ 		mem -= LARGE_PAGE_SIZE_8M;
+diff --git a/arch/powerpc/mm/dump_linuxpagetables.c b/arch/powerpc/mm/dump_linuxpagetables.c
+index 876e2a3c79f2..bdf33b989f98 100644
+--- a/arch/powerpc/mm/dump_linuxpagetables.c
++++ b/arch/powerpc/mm/dump_linuxpagetables.c
+@@ -418,12 +418,13 @@ static void walk_pagetables(struct pg_state *st)
+ 	unsigned int i;
+ 	unsigned long addr;
+ 
++	addr = st->start_address;
++
+ 	/*
+ 	 * Traverse the linux pagetable structure and dump pages that are in
+ 	 * the hash pagetable.
+ 	 */
+-	for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
+-		addr = KERN_VIRT_START + i * PGDIR_SIZE;
++	for (i = 0; i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
+ 		if (!pgd_none(*pgd) && !pgd_huge(*pgd))
+ 			/* pgd exists */
+ 			walk_pud(st, pgd, addr);
+@@ -472,9 +473,14 @@ static int ptdump_show(struct seq_file *m, void *v)
+ {
+ 	struct pg_state st = {
+ 		.seq = m,
+-		.start_address = KERN_VIRT_START,
+ 		.marker = address_markers,
+ 	};
++
++	if (radix_enabled())
++		st.start_address = PAGE_OFFSET;
++	else
++		st.start_address = KERN_VIRT_START;
++
+ 	/* Traverse kernel page tables */
+ 	walk_pagetables(&st);
+ 	note_page(&st, 0, 0, 0);
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 8a9a49c13865..a84c410e5090 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -19,6 +19,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/swap.h>
+ #include <linux/swapops.h>
++#include <linux/kmemleak.h>
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+ #include <asm/tlb.h>
+@@ -112,6 +113,8 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
+ 		for (i = i - 1 ; i >= 0; i--, hpdp--)
+ 			*hpdp = __hugepd(0);
+ 		kmem_cache_free(cachep, new);
++	} else {
++		kmemleak_ignore(new);
+ 	}
+ 	spin_unlock(ptl);
+ 	return 0;
+diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
+index 205fe557ca10..4f213ba33491 100644
+--- a/arch/powerpc/mm/slice.c
++++ b/arch/powerpc/mm/slice.c
+@@ -61,6 +61,13 @@ static void slice_print_mask(const char *label, const struct slice_mask *mask) {
+ 
+ #endif
+ 
++static inline bool slice_addr_is_low(unsigned long addr)
++{
++	u64 tmp = (u64)addr;
++
++	return tmp < SLICE_LOW_TOP;
++}
++
+ static void slice_range_to_mask(unsigned long start, unsigned long len,
+ 				struct slice_mask *ret)
+ {
+@@ -70,7 +77,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
+ 	if (SLICE_NUM_HIGH)
+ 		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+ 
+-	if (start < SLICE_LOW_TOP) {
++	if (slice_addr_is_low(start)) {
+ 		unsigned long mend = min(end,
+ 					 (unsigned long)(SLICE_LOW_TOP - 1));
+ 
+@@ -78,7 +85,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
+ 			- (1u << GET_LOW_SLICE_INDEX(start));
+ 	}
+ 
+-	if ((start + len) > SLICE_LOW_TOP) {
++	if (SLICE_NUM_HIGH && !slice_addr_is_low(end)) {
+ 		unsigned long start_index = GET_HIGH_SLICE_INDEX(start);
+ 		unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
+ 		unsigned long count = GET_HIGH_SLICE_INDEX(align_end) - start_index;
+@@ -133,7 +140,7 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
+ 		if (!slice_low_has_vma(mm, i))
+ 			ret->low_slices |= 1u << i;
+ 
+-	if (high_limit <= SLICE_LOW_TOP)
++	if (slice_addr_is_low(high_limit - 1))
+ 		return;
+ 
+ 	for (i = 0; i < GET_HIGH_SLICE_INDEX(high_limit); i++)
+@@ -182,7 +189,7 @@ static bool slice_check_range_fits(struct mm_struct *mm,
+ 	unsigned long end = start + len - 1;
+ 	u64 low_slices = 0;
+ 
+-	if (start < SLICE_LOW_TOP) {
++	if (slice_addr_is_low(start)) {
+ 		unsigned long mend = min(end,
+ 					 (unsigned long)(SLICE_LOW_TOP - 1));
+ 
+@@ -192,7 +199,7 @@ static bool slice_check_range_fits(struct mm_struct *mm,
+ 	if ((low_slices & available->low_slices) != low_slices)
+ 		return false;
+ 
+-	if (SLICE_NUM_HIGH && ((start + len) > SLICE_LOW_TOP)) {
++	if (SLICE_NUM_HIGH && !slice_addr_is_low(end)) {
+ 		unsigned long start_index = GET_HIGH_SLICE_INDEX(start);
+ 		unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
+ 		unsigned long count = GET_HIGH_SLICE_INDEX(align_end) - start_index;
+@@ -303,7 +310,7 @@ static bool slice_scan_available(unsigned long addr,
+ 				 int end, unsigned long *boundary_addr)
+ {
+ 	unsigned long slice;
+-	if (addr < SLICE_LOW_TOP) {
++	if (slice_addr_is_low(addr)) {
+ 		slice = GET_LOW_SLICE_INDEX(addr);
+ 		*boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
+ 		return !!(available->low_slices & (1u << slice));
+@@ -706,7 +713,7 @@ unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+ 
+ 	VM_BUG_ON(radix_enabled());
+ 
+-	if (addr < SLICE_LOW_TOP) {
++	if (slice_addr_is_low(addr)) {
+ 		psizes = mm->context.low_slices_psize;
+ 		index = GET_LOW_SLICE_INDEX(addr);
+ 	} else {
+diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c
+index 15fe5f0c8665..ae5d568e267f 100644
+--- a/arch/powerpc/mm/tlb_nohash.c
++++ b/arch/powerpc/mm/tlb_nohash.c
+@@ -503,6 +503,9 @@ static void setup_page_sizes(void)
+ 		for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
+ 			struct mmu_psize_def *def = &mmu_psize_defs[psize];
+ 
++			if (!def->shift)
++				continue;
++
+ 			if (tlb1ps & (1U << (def->shift - 10))) {
+ 				def->flags |= MMU_PAGE_SIZE_DIRECT;
+ 
+diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
+index b99283df8584..265669002da0 100644
+--- a/arch/powerpc/platforms/powernv/memtrace.c
++++ b/arch/powerpc/platforms/powernv/memtrace.c
+@@ -119,17 +119,15 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
+ 	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
+ 			  change_memblock_state);
+ 
+-	lock_device_hotplug();
+-	remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+-	unlock_device_hotplug();
+ 
+ 	return true;
+ }
+ 
+ static u64 memtrace_alloc_node(u32 nid, u64 size)
+ {
+-	u64 start_pfn, end_pfn, nr_pages;
++	u64 start_pfn, end_pfn, nr_pages, pfn;
+ 	u64 base_pfn;
++	u64 bytes = memory_block_size_bytes();
+ 
+ 	if (!node_spanned_pages(nid))
+ 		return 0;
+@@ -142,8 +140,21 @@ static u64 memtrace_alloc_node(u32 nid, u64 size)
+ 	end_pfn = round_down(end_pfn - nr_pages, nr_pages);
+ 
+ 	for (base_pfn = end_pfn; base_pfn > start_pfn; base_pfn -= nr_pages) {
+-		if (memtrace_offline_pages(nid, base_pfn, nr_pages) == true)
++		if (memtrace_offline_pages(nid, base_pfn, nr_pages) == true) {
++			/*
++			 * Remove memory in memory block size chunks so that
++			 * iomem resources are always split to the same size and
++			 * we never try to remove memory that spans two iomem
++			 * resources.
++			 */
++			lock_device_hotplug();
++			end_pfn = base_pfn + nr_pages;
++			for (pfn = base_pfn; pfn < end_pfn; pfn += bytes>> PAGE_SHIFT) {
++				remove_memory(nid, pfn << PAGE_SHIFT, bytes);
++			}
++			unlock_device_hotplug();
+ 			return base_pfn << PAGE_SHIFT;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index 3a17107594c8..eb786f90f2d3 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -216,6 +216,8 @@ static inline int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *s
+ 
+ int mce_available(struct cpuinfo_x86 *c);
+ bool mce_is_memory_error(struct mce *m);
++bool mce_is_correctable(struct mce *m);
++int mce_usable_address(struct mce *m);
+ 
+ DECLARE_PER_CPU(unsigned, mce_exception_count);
+ DECLARE_PER_CPU(unsigned, mce_poll_count);
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 8c50754c09c1..c51b9d116be1 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -489,7 +489,7 @@ static void mce_report_event(struct pt_regs *regs)
+  * be somewhat complicated (e.g. segment offset would require an instruction
+  * parser). So only support physical addresses up to page granuality for now.
+  */
+-static int mce_usable_address(struct mce *m)
++int mce_usable_address(struct mce *m)
+ {
+ 	if (!(m->status & MCI_STATUS_ADDRV))
+ 		return 0;
+@@ -509,6 +509,7 @@ static int mce_usable_address(struct mce *m)
+ 
+ 	return 1;
+ }
++EXPORT_SYMBOL_GPL(mce_usable_address);
+ 
+ bool mce_is_memory_error(struct mce *m)
+ {
+@@ -538,7 +539,7 @@ bool mce_is_memory_error(struct mce *m)
+ }
+ EXPORT_SYMBOL_GPL(mce_is_memory_error);
+ 
+-static bool mce_is_correctable(struct mce *m)
++bool mce_is_correctable(struct mce *m)
+ {
+ 	if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED)
+ 		return false;
+@@ -548,6 +549,7 @@ static bool mce_is_correctable(struct mce *m)
+ 
+ 	return true;
+ }
++EXPORT_SYMBOL_GPL(mce_is_correctable);
+ 
+ static bool cec_add_mce(struct mce *m)
+ {
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 031082c96db8..12a67046fefb 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -20,6 +20,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
+ #include <linux/kexec.h>
++#include <linux/i8253.h>
+ #include <asm/processor.h>
+ #include <asm/hypervisor.h>
+ #include <asm/hyperv-tlfs.h>
+@@ -285,6 +286,16 @@ static void __init ms_hyperv_init_platform(void)
+ 	if (efi_enabled(EFI_BOOT))
+ 		x86_platform.get_nmi_reason = hv_get_nmi_reason;
+ 
++	/*
++	 * Hyper-V VMs have a PIT emulation quirk such that zeroing the
++	 * counter register during PIT shutdown restarts the PIT. So it
++	 * continues to interrupt @18.2 HZ. Setting i8253_clear_counter
++	 * to false tells pit_shutdown() not to zero the counter so that
++	 * the PIT really is shutdown. Generation 2 VMs don't have a PIT,
++	 * and setting this value has no effect.
++	 */
++	i8253_clear_counter_on_shutdown = false;
++
+ #if IS_ENABLED(CONFIG_HYPERV)
+ 	/*
+ 	 * Setup the hook to get control post apic initialization.
+diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
+index 8e005329648b..d805202c63cd 100644
+--- a/arch/x86/kernel/cpu/vmware.c
++++ b/arch/x86/kernel/cpu/vmware.c
+@@ -77,7 +77,7 @@ static __init int setup_vmw_sched_clock(char *s)
+ }
+ early_param("no-vmw-sched-clock", setup_vmw_sched_clock);
+ 
+-static unsigned long long vmware_sched_clock(void)
++static unsigned long long notrace vmware_sched_clock(void)
+ {
+ 	unsigned long long ns;
+ 
+diff --git a/arch/x86/um/shared/sysdep/ptrace_32.h b/arch/x86/um/shared/sysdep/ptrace_32.h
+index b94a108de1dc..ae00d22bce02 100644
+--- a/arch/x86/um/shared/sysdep/ptrace_32.h
++++ b/arch/x86/um/shared/sysdep/ptrace_32.h
+@@ -10,20 +10,10 @@
+ 
+ static inline void update_debugregs(int seq) {}
+ 
+-/* syscall emulation path in ptrace */
+-
+-#ifndef PTRACE_SYSEMU
+-#define PTRACE_SYSEMU 31
+-#endif
+-
+ void set_using_sysemu(int value);
+ int get_using_sysemu(void);
+ extern int sysemu_supported;
+ 
+-#ifndef PTRACE_SYSEMU_SINGLESTEP
+-#define PTRACE_SYSEMU_SINGLESTEP 32
+-#endif
+-
+ #define UPT_SYSCALL_ARG1(r) UPT_BX(r)
+ #define UPT_SYSCALL_ARG2(r) UPT_CX(r)
+ #define UPT_SYSCALL_ARG3(r) UPT_DX(r)
+diff --git a/arch/xtensa/boot/Makefile b/arch/xtensa/boot/Makefile
+index 53e4178711e6..8c20a7965bda 100644
+--- a/arch/xtensa/boot/Makefile
++++ b/arch/xtensa/boot/Makefile
+@@ -34,7 +34,7 @@ boot-elf boot-redboot: $(addprefix $(obj)/,$(subdir-y)) \
+ 		       $(addprefix $(obj)/,$(host-progs))
+ 	$(Q)$(MAKE) $(build)=$(obj)/$@ $(MAKECMDGOALS)
+ 
+-OBJCOPYFLAGS = --strip-all -R .comment -R .note.gnu.build-id -O binary
++OBJCOPYFLAGS = --strip-all -R .comment -R .notes -O binary
+ 
+ vmlinux.bin: vmlinux FORCE
+ 	$(call if_changed,objcopy)
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index 5b0027d4ecc0..a39cd81b741a 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -24,7 +24,11 @@
+ # error Linux requires the Xtensa Windowed Registers Option.
+ #endif
+ 
+-#define ARCH_SLAB_MINALIGN	XCHAL_DATA_WIDTH
++/* Xtensa ABI requires stack alignment to be at least 16 */
++
++#define STACK_ALIGN (XCHAL_DATA_WIDTH > 16 ? XCHAL_DATA_WIDTH : 16)
++
++#define ARCH_SLAB_MINALIGN STACK_ALIGN
+ 
+ /*
+  * User space process size: 1 GB.
+diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
+index 9c4e9433e536..3ceb76c7a4ae 100644
+--- a/arch/xtensa/kernel/head.S
++++ b/arch/xtensa/kernel/head.S
+@@ -88,9 +88,12 @@ _SetupMMU:
+ 	initialize_mmu
+ #if defined(CONFIG_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
+ 	rsr	a2, excsave1
+-	movi	a3, 0x08000000
++	movi	a3, XCHAL_KSEG_PADDR
++	bltu	a2, a3, 1f
++	sub	a2, a2, a3
++	movi	a3, XCHAL_KSEG_SIZE
+ 	bgeu	a2, a3, 1f
+-	movi	a3, 0xd0000000
++	movi	a3, XCHAL_KSEG_CACHED_VADDR
+ 	add	a2, a2, a3
+ 	wsr	a2, excsave1
+ 1:
+diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
+index 70b731edc7b8..c430c96ea723 100644
+--- a/arch/xtensa/kernel/vmlinux.lds.S
++++ b/arch/xtensa/kernel/vmlinux.lds.S
+@@ -131,6 +131,7 @@ SECTIONS
+   .fixup   : { *(.fixup) }
+ 
+   EXCEPTION_TABLE(16)
++  NOTES
+   /* Data section */
+ 
+   _sdata = .;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index f9d2e1b66e05..41b7396c8658 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -793,9 +793,8 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 * dispatch may still be in-progress since we dispatch requests
+ 	 * from more than one contexts.
+ 	 *
+-	 * No need to quiesce queue if it isn't initialized yet since
+-	 * blk_freeze_queue() should be enough for cases of passthrough
+-	 * request.
++	 * We rely on driver to deal with the race in case that queue
++	 * initialization isn't done.
+ 	 */
+ 	if (q->mq_ops && blk_queue_init_done(q))
+ 		blk_mq_quiesce_queue(q);
+diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
+index 0e89b5457cab..ceeb2eaf28cf 100644
+--- a/crypto/crypto_user.c
++++ b/crypto/crypto_user.c
+@@ -83,7 +83,7 @@ static int crypto_report_cipher(struct sk_buff *skb, struct crypto_alg *alg)
+ {
+ 	struct crypto_report_cipher rcipher;
+ 
+-	strlcpy(rcipher.type, "cipher", sizeof(rcipher.type));
++	strncpy(rcipher.type, "cipher", sizeof(rcipher.type));
+ 
+ 	rcipher.blocksize = alg->cra_blocksize;
+ 	rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+@@ -102,7 +102,7 @@ static int crypto_report_comp(struct sk_buff *skb, struct crypto_alg *alg)
+ {
+ 	struct crypto_report_comp rcomp;
+ 
+-	strlcpy(rcomp.type, "compression", sizeof(rcomp.type));
++	strncpy(rcomp.type, "compression", sizeof(rcomp.type));
+ 	if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+ 		    sizeof(struct crypto_report_comp), &rcomp))
+ 		goto nla_put_failure;
+@@ -116,7 +116,7 @@ static int crypto_report_acomp(struct sk_buff *skb, struct crypto_alg *alg)
+ {
+ 	struct crypto_report_acomp racomp;
+ 
+-	strlcpy(racomp.type, "acomp", sizeof(racomp.type));
++	strncpy(racomp.type, "acomp", sizeof(racomp.type));
+ 
+ 	if (nla_put(skb, CRYPTOCFGA_REPORT_ACOMP,
+ 		    sizeof(struct crypto_report_acomp), &racomp))
+@@ -131,7 +131,7 @@ static int crypto_report_akcipher(struct sk_buff *skb, struct crypto_alg *alg)
+ {
+ 	struct crypto_report_akcipher rakcipher;
+ 
+-	strlcpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
++	strncpy(rakcipher.type, "akcipher", sizeof(rakcipher.type));
+ 
+ 	if (nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER,
+ 		    sizeof(struct crypto_report_akcipher), &rakcipher))
+@@ -146,7 +146,7 @@ static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg)
+ {
+ 	struct crypto_report_kpp rkpp;
+ 
+-	strlcpy(rkpp.type, "kpp", sizeof(rkpp.type));
++	strncpy(rkpp.type, "kpp", sizeof(rkpp.type));
+ 
+ 	if (nla_put(skb, CRYPTOCFGA_REPORT_KPP,
+ 		    sizeof(struct crypto_report_kpp), &rkpp))
+@@ -160,10 +160,10 @@ nla_put_failure:
+ static int crypto_report_one(struct crypto_alg *alg,
+ 			     struct crypto_user_alg *ualg, struct sk_buff *skb)
+ {
+-	strlcpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
+-	strlcpy(ualg->cru_driver_name, alg->cra_driver_name,
++	strncpy(ualg->cru_name, alg->cra_name, sizeof(ualg->cru_name));
++	strncpy(ualg->cru_driver_name, alg->cra_driver_name,
+ 		sizeof(ualg->cru_driver_name));
+-	strlcpy(ualg->cru_module_name, module_name(alg->cra_module),
++	strncpy(ualg->cru_module_name, module_name(alg->cra_module),
+ 		sizeof(ualg->cru_module_name));
+ 
+ 	ualg->cru_type = 0;
+@@ -176,7 +176,7 @@ static int crypto_report_one(struct crypto_alg *alg,
+ 	if (alg->cra_flags & CRYPTO_ALG_LARVAL) {
+ 		struct crypto_report_larval rl;
+ 
+-		strlcpy(rl.type, "larval", sizeof(rl.type));
++		strncpy(rl.type, "larval", sizeof(rl.type));
+ 		if (nla_put(skb, CRYPTOCFGA_REPORT_LARVAL,
+ 			    sizeof(struct crypto_report_larval), &rl))
+ 			goto nla_put_failure;
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index 78f9de260d5f..e9fb0bf3c8d2 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -417,10 +417,6 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
+-	status = acpi_ut_add_address_range(obj_desc->region.space_id,
+-					   obj_desc->region.address,
+-					   obj_desc->region.length, node);
+-
+ 	/* Now the address and length are valid for this opregion */
+ 
+ 	obj_desc->region.flags |= AOPOBJ_DATA_VALID;
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index c0db96e8a81a..8eb123d47d54 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2835,9 +2835,9 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
+ 		return rc;
+ 
+ 	if (ars_status_process_records(acpi_desc))
+-		return -ENOMEM;
++		dev_err(acpi_desc->dev, "Failed to process ARS records\n");
+ 
+-	return 0;
++	return rc;
+ }
+ 
+ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+diff --git a/drivers/acpi/nfit/mce.c b/drivers/acpi/nfit/mce.c
+index e9626bf6ca29..d6c1b10f6c25 100644
+--- a/drivers/acpi/nfit/mce.c
++++ b/drivers/acpi/nfit/mce.c
+@@ -25,8 +25,12 @@ static int nfit_handle_mce(struct notifier_block *nb, unsigned long val,
+ 	struct acpi_nfit_desc *acpi_desc;
+ 	struct nfit_spa *nfit_spa;
+ 
+-	/* We only care about memory errors */
+-	if (!mce_is_memory_error(mce))
++	/* We only care about uncorrectable memory errors */
++	if (!mce_is_memory_error(mce) || mce_is_correctable(mce))
++		return NOTIFY_DONE;
++
++	/* Verify the address reported in the MCE is valid. */
++	if (!mce_usable_address(mce))
+ 		return NOTIFY_DONE;
+ 
+ 	/*
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 321a9579556d..a9a8440a4945 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4552,7 +4552,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	/* These specific Samsung models/firmware-revs do not handle LPM well */
+ 	{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
+ 	{ "SAMSUNG SSD PM830 mSATA *",  "CXM13D1Q", ATA_HORKAGE_NOLPM, },
+-	{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
++	{ "SAMSUNG MZ7TD256HAFV-000L9", NULL,       ATA_HORKAGE_NOLPM, },
+ 
+ 	/* devices that don't properly handle queued TRIM commands */
+ 	{ "Micron_M500IT_*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index af7cb8e618fe..363b9102ebb0 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1637,6 +1637,11 @@ static const struct attribute_group zram_disk_attr_group = {
+ 	.attrs = zram_disk_attrs,
+ };
+ 
++static const struct attribute_group *zram_disk_attr_groups[] = {
++	&zram_disk_attr_group,
++	NULL,
++};
++
+ /*
+  * Allocate and initialize new zram device. the function returns
+  * '>= 0' device_id upon success, and negative value otherwise.
+@@ -1717,24 +1722,15 @@ static int zram_add(void)
+ 
+ 	zram->disk->queue->backing_dev_info->capabilities |=
+ 			(BDI_CAP_STABLE_WRITES | BDI_CAP_SYNCHRONOUS_IO);
++	disk_to_dev(zram->disk)->groups = zram_disk_attr_groups;
+ 	add_disk(zram->disk);
+ 
+-	ret = sysfs_create_group(&disk_to_dev(zram->disk)->kobj,
+-				&zram_disk_attr_group);
+-	if (ret < 0) {
+-		pr_err("Error creating sysfs group for device %d\n",
+-				device_id);
+-		goto out_free_disk;
+-	}
+ 	strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
+ 
+ 	zram_debugfs_register(zram);
+ 	pr_info("Added device: %s\n", zram->disk->disk_name);
+ 	return device_id;
+ 
+-out_free_disk:
+-	del_gendisk(zram->disk);
+-	put_disk(zram->disk);
+ out_free_queue:
+ 	blk_cleanup_queue(queue);
+ out_free_idr:
+@@ -1763,16 +1759,6 @@ static int zram_remove(struct zram *zram)
+ 	mutex_unlock(&bdev->bd_mutex);
+ 
+ 	zram_debugfs_unregister(zram);
+-	/*
+-	 * Remove sysfs first, so no one will perform a disksize
+-	 * store while we destroy the devices. This also helps during
+-	 * hot_remove -- zram_reset_device() is the last holder of
+-	 * ->init_lock, no later/concurrent disksize_store() or any
+-	 * other sysfs handlers are possible.
+-	 */
+-	sysfs_remove_group(&disk_to_dev(zram->disk)->kobj,
+-			&zram_disk_attr_group);
+-
+ 	/* Make sure all the pending I/O are finished */
+ 	fsync_bdev(bdev);
+ 	zram_reset_device(zram);
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index 66acbd063562..894103abe8a8 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2441,7 +2441,7 @@ static int cdrom_ioctl_select_disc(struct cdrom_device_info *cdi,
+ 		return -ENOSYS;
+ 
+ 	if (arg != CDSL_CURRENT && arg != CDSL_NONE) {
+-		if ((int)arg >= cdi->capacity)
++		if (arg >= cdi->capacity)
+ 			return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/clk/at91/clk-pll.c b/drivers/clk/at91/clk-pll.c
+index 72b6091eb7b9..dc7fbc796cb6 100644
+--- a/drivers/clk/at91/clk-pll.c
++++ b/drivers/clk/at91/clk-pll.c
+@@ -133,6 +133,9 @@ static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
+ {
+ 	struct clk_pll *pll = to_clk_pll(hw);
+ 
++	if (!pll->div || !pll->mul)
++		return 0;
++
+ 	return (parent_rate / pll->div) * (pll->mul + 1);
+ }
+ 
+diff --git a/drivers/clk/clk-s2mps11.c b/drivers/clk/clk-s2mps11.c
+index d44e0eea31ec..0934d3724495 100644
+--- a/drivers/clk/clk-s2mps11.c
++++ b/drivers/clk/clk-s2mps11.c
+@@ -245,6 +245,36 @@ static const struct platform_device_id s2mps11_clk_id[] = {
+ };
+ MODULE_DEVICE_TABLE(platform, s2mps11_clk_id);
+ 
++#ifdef CONFIG_OF
++/*
++ * Device is instantiated through parent MFD device and device matching is done
++ * through platform_device_id.
++ *
++ * However if device's DT node contains proper clock compatible and driver is
++ * built as a module, then the *module* matching will be done trough DT aliases.
++ * This requires of_device_id table.  In the same time this will not change the
++ * actual *device* matching so do not add .of_match_table.
++ */
++static const struct of_device_id s2mps11_dt_match[] = {
++	{
++		.compatible = "samsung,s2mps11-clk",
++		.data = (void *)S2MPS11X,
++	}, {
++		.compatible = "samsung,s2mps13-clk",
++		.data = (void *)S2MPS13X,
++	}, {
++		.compatible = "samsung,s2mps14-clk",
++		.data = (void *)S2MPS14X,
++	}, {
++		.compatible = "samsung,s5m8767-clk",
++		.data = (void *)S5M8767X,
++	}, {
++		/* Sentinel */
++	},
++};
++MODULE_DEVICE_TABLE(of, s2mps11_dt_match);
++#endif
++
+ static struct platform_driver s2mps11_clk_driver = {
+ 	.driver = {
+ 		.name  = "s2mps11-clk",
+diff --git a/drivers/clk/hisilicon/reset.c b/drivers/clk/hisilicon/reset.c
+index 2a5015c736ce..43e82fa64422 100644
+--- a/drivers/clk/hisilicon/reset.c
++++ b/drivers/clk/hisilicon/reset.c
+@@ -109,9 +109,8 @@ struct hisi_reset_controller *hisi_reset_init(struct platform_device *pdev)
+ 		return NULL;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	rstc->membase = devm_ioremap(&pdev->dev,
+-				res->start, resource_size(res));
+-	if (!rstc->membase)
++	rstc->membase = devm_ioremap_resource(&pdev->dev, res);
++	if (IS_ERR(rstc->membase))
+ 		return NULL;
+ 
+ 	spin_lock_init(&rstc->lock);
+diff --git a/drivers/clk/meson/axg.c b/drivers/clk/meson/axg.c
+index bd4dbc696b88..cfd26fd7e404 100644
+--- a/drivers/clk/meson/axg.c
++++ b/drivers/clk/meson/axg.c
+@@ -320,6 +320,7 @@ static struct clk_regmap axg_fclk_div2 = {
+ 		.ops = &clk_regmap_gate_ops,
+ 		.parent_names = (const char *[]){ "fclk_div2_div" },
+ 		.num_parents = 1,
++		.flags = CLK_IS_CRITICAL,
+ 	},
+ };
+ 
+@@ -344,6 +345,18 @@ static struct clk_regmap axg_fclk_div3 = {
+ 		.ops = &clk_regmap_gate_ops,
+ 		.parent_names = (const char *[]){ "fclk_div3_div" },
+ 		.num_parents = 1,
++		/*
++		 * FIXME:
++		 * This clock, as fdiv2, is used by the SCPI FW and is required
++		 * by the platform to operate correctly.
++		 * Until the following condition are met, we need this clock to
++		 * be marked as critical:
++		 * a) The SCPI generic driver claims and enable all the clocks
++		 *    it needs
++		 * b) CCF has a clock hand-off mechanism to make the sure the
++		 *    clock stays on until the proper driver comes along
++		 */
++		.flags = CLK_IS_CRITICAL,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index 177fffb9ebef..902c63209785 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -523,6 +523,18 @@ static struct clk_regmap gxbb_fclk_div3 = {
+ 		.ops = &clk_regmap_gate_ops,
+ 		.parent_names = (const char *[]){ "fclk_div3_div" },
+ 		.num_parents = 1,
++		/*
++		 * FIXME:
++		 * This clock, as fdiv2, is used by the SCPI FW and is required
++		 * by the platform to operate correctly.
++		 * Until the following condition are met, we need this clock to
++		 * be marked as critical:
++		 * a) The SCPI generic driver claims and enable all the clocks
++		 *    it needs
++		 * b) CCF has a clock hand-off mechanism to make the sure the
++		 *    clock stays on until the proper driver comes along
++		 */
++		.flags = CLK_IS_CRITICAL,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/rockchip/clk-ddr.c b/drivers/clk/rockchip/clk-ddr.c
+index e8075359366b..ebce5260068b 100644
+--- a/drivers/clk/rockchip/clk-ddr.c
++++ b/drivers/clk/rockchip/clk-ddr.c
+@@ -80,16 +80,12 @@ static long rockchip_ddrclk_sip_round_rate(struct clk_hw *hw,
+ static u8 rockchip_ddrclk_get_parent(struct clk_hw *hw)
+ {
+ 	struct rockchip_ddrclk *ddrclk = to_rockchip_ddrclk_hw(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
+ 	u32 val;
+ 
+ 	val = clk_readl(ddrclk->reg_base +
+ 			ddrclk->mux_offset) >> ddrclk->mux_shift;
+ 	val &= GENMASK(ddrclk->mux_width - 1, 0);
+ 
+-	if (val >= num_parents)
+-		return -EINVAL;
+-
+ 	return val;
+ }
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index 252366a5231f..2c5426607790 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -813,22 +813,22 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 	MMC(SCLK_SDMMC_DRV, "sdmmc_drv", "clk_sdmmc",
+ 	    RK3328_SDMMC_CON0, 1),
+ 	MMC(SCLK_SDMMC_SAMPLE, "sdmmc_sample", "clk_sdmmc",
+-	    RK3328_SDMMC_CON1, 1),
++	    RK3328_SDMMC_CON1, 0),
+ 
+ 	MMC(SCLK_SDIO_DRV, "sdio_drv", "clk_sdio",
+ 	    RK3328_SDIO_CON0, 1),
+ 	MMC(SCLK_SDIO_SAMPLE, "sdio_sample", "clk_sdio",
+-	    RK3328_SDIO_CON1, 1),
++	    RK3328_SDIO_CON1, 0),
+ 
+ 	MMC(SCLK_EMMC_DRV, "emmc_drv", "clk_emmc",
+ 	    RK3328_EMMC_CON0, 1),
+ 	MMC(SCLK_EMMC_SAMPLE, "emmc_sample", "clk_emmc",
+-	    RK3328_EMMC_CON1, 1),
++	    RK3328_EMMC_CON1, 0),
+ 
+ 	MMC(SCLK_SDMMC_EXT_DRV, "sdmmc_ext_drv", "clk_sdmmc_ext",
+ 	    RK3328_SDMMC_EXT_CON0, 1),
+ 	MMC(SCLK_SDMMC_EXT_SAMPLE, "sdmmc_ext_sample", "clk_sdmmc_ext",
+-	    RK3328_SDMMC_EXT_CON1, 1),
++	    RK3328_SDMMC_EXT_CON1, 0),
+ };
+ 
+ static const char *const rk3328_critical_clocks[] __initconst = {
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+index bdbfe78fe133..0f7a0ffd3f70 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c
+@@ -224,7 +224,7 @@ static SUNXI_CCU_MP_WITH_MUX(psi_ahb1_ahb2_clk, "psi-ahb1-ahb2",
+ 			     psi_ahb1_ahb2_parents,
+ 			     0x510,
+ 			     0, 5,	/* M */
+-			     16, 2,	/* P */
++			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+@@ -233,19 +233,19 @@ static const char * const ahb3_apb1_apb2_parents[] = { "osc24M", "osc32k",
+ 						       "pll-periph0" };
+ static SUNXI_CCU_MP_WITH_MUX(ahb3_clk, "ahb3", ahb3_apb1_apb2_parents, 0x51c,
+ 			     0, 5,	/* M */
+-			     16, 2,	/* P */
++			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+ static SUNXI_CCU_MP_WITH_MUX(apb1_clk, "apb1", ahb3_apb1_apb2_parents, 0x520,
+ 			     0, 5,	/* M */
+-			     16, 2,	/* P */
++			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+ static SUNXI_CCU_MP_WITH_MUX(apb2_clk, "apb2", ahb3_apb1_apb2_parents, 0x524,
+ 			     0, 5,	/* M */
+-			     16, 2,	/* P */
++			     8, 2,	/* P */
+ 			     24, 2,	/* mux */
+ 			     0);
+ 
+diff --git a/drivers/clocksource/i8253.c b/drivers/clocksource/i8253.c
+index 9c38895542f4..d4350bb10b83 100644
+--- a/drivers/clocksource/i8253.c
++++ b/drivers/clocksource/i8253.c
+@@ -20,6 +20,13 @@
+ DEFINE_RAW_SPINLOCK(i8253_lock);
+ EXPORT_SYMBOL(i8253_lock);
+ 
++/*
++ * Handle PIT quirk in pit_shutdown() where zeroing the counter register
++ * restarts the PIT, negating the shutdown. On platforms with the quirk,
++ * platform specific code can set this to false.
++ */
++bool i8253_clear_counter_on_shutdown __ro_after_init = true;
++
+ #ifdef CONFIG_CLKSRC_I8253
+ /*
+  * Since the PIT overflows every tick, its not very useful
+@@ -109,8 +116,11 @@ static int pit_shutdown(struct clock_event_device *evt)
+ 	raw_spin_lock(&i8253_lock);
+ 
+ 	outb_p(0x30, PIT_MODE);
+-	outb_p(0, PIT_CH0);
+-	outb_p(0, PIT_CH0);
++
++	if (i8253_clear_counter_on_shutdown) {
++		outb_p(0, PIT_CH0);
++		outb_p(0, PIT_CH0);
++	}
+ 
+ 	raw_spin_unlock(&i8253_lock);
+ 	return 0;
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 8830fa601e45..0c0d2312f4a8 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -158,6 +158,10 @@ static efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
+ 			return efi_status;
+ 		}
+ 	}
++
++	/* shrink the FDT back to its minimum size */
++	fdt_pack(fdt);
++
+ 	return EFI_SUCCESS;
+ 
+ fdt_set_fail:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+index a1c78f90eadf..b1a86f99011a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+@@ -574,7 +574,7 @@ void amdgpu_vmid_mgr_init(struct amdgpu_device *adev)
+ 		/* skip over VMID 0, since it is the system VM */
+ 		for (j = 1; j < id_mgr->num_ids; ++j) {
+ 			amdgpu_vmid_reset(adev, i, j);
+-			amdgpu_sync_create(&id_mgr->ids[i].active);
++			amdgpu_sync_create(&id_mgr->ids[j].active);
+ 			list_add_tail(&id_mgr->ids[j].list, &id_mgr->ids_lru);
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 2bd56760c744..b1cd8e9649b9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -62,6 +62,7 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
+ 	amdgpu_sync_create(&(*job)->sync);
+ 	amdgpu_sync_create(&(*job)->sched_sync);
+ 	(*job)->vram_lost_counter = atomic_read(&adev->vram_lost_counter);
++	(*job)->vm_pd_addr = AMDGPU_BO_INVALID_OFFSET;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index f55f72a37ca8..c29d519fa381 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -277,6 +277,7 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type)
+ 	case CHIP_PITCAIRN:
+ 	case CHIP_VERDE:
+ 	case CHIP_OLAND:
++	case CHIP_HAINAN:
+ 		return AMDGPU_FW_LOAD_DIRECT;
+ #endif
+ #ifdef CONFIG_DRM_AMDGPU_CIK
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index c31fff32a321..eb0ae9726cf7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -631,7 +631,8 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, bool need_
+ 	}
+ 
+ 	gds_switch_needed &= !!ring->funcs->emit_gds_switch;
+-	vm_flush_needed &= !!ring->funcs->emit_vm_flush;
++	vm_flush_needed &= !!ring->funcs->emit_vm_flush  &&
++			job->vm_pd_addr != AMDGPU_BO_INVALID_OFFSET;
+ 	pasid_mapping_needed &= adev->gmc.gmc_funcs->emit_pasid_mapping &&
+ 		ring->funcs->emit_wreg;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 644b2187507b..a6348bbb6fc7 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1045,9 +1045,6 @@ static enum surface_update_type get_plane_info_update_type(const struct dc_surfa
+ 		 */
+ 		update_flags->bits.bpp_change = 1;
+ 
+-	if (u->gamma && dce_use_lut(u->plane_info->format))
+-		update_flags->bits.gamma_change = 1;
+-
+ 	if (memcmp(&u->plane_info->tiling_info, &u->surface->tiling_info,
+ 			sizeof(union dc_tiling_info)) != 0) {
+ 		update_flags->bits.swizzle_change = 1;
+@@ -1064,7 +1061,6 @@ static enum surface_update_type get_plane_info_update_type(const struct dc_surfa
+ 	if (update_flags->bits.rotation_change
+ 			|| update_flags->bits.stereo_format_change
+ 			|| update_flags->bits.pixel_format_change
+-			|| update_flags->bits.gamma_change
+ 			|| update_flags->bits.bpp_change
+ 			|| update_flags->bits.bandwidth_change
+ 			|| update_flags->bits.output_tf_change)
+@@ -1154,13 +1150,26 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
+ 	if (u->coeff_reduction_factor)
+ 		update_flags->bits.coeff_reduction_change = 1;
+ 
++	if (u->gamma) {
++		enum surface_pixel_format format = SURFACE_PIXEL_FORMAT_GRPH_BEGIN;
++
++		if (u->plane_info)
++			format = u->plane_info->format;
++		else if (u->surface)
++			format = u->surface->format;
++
++		if (dce_use_lut(format))
++			update_flags->bits.gamma_change = 1;
++	}
++
+ 	if (update_flags->bits.in_transfer_func_change) {
+ 		type = UPDATE_TYPE_MED;
+ 		elevate_update_type(&overall_type, type);
+ 	}
+ 
+ 	if (update_flags->bits.input_csc_change
+-			|| update_flags->bits.coeff_reduction_change) {
++			|| update_flags->bits.coeff_reduction_change
++			|| update_flags->bits.gamma_change) {
+ 		type = UPDATE_TYPE_FULL;
+ 		elevate_update_type(&overall_type, type);
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+index eee0dfad6962..7d9fea6877bc 100644
+--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
++++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+@@ -968,10 +968,14 @@ static void build_evenly_distributed_points(
+ 	struct dividers dividers)
+ {
+ 	struct gamma_pixel *p = points;
+-	struct gamma_pixel *p_last = p + numberof_points - 1;
++	struct gamma_pixel *p_last;
+ 
+ 	uint32_t i = 0;
+ 
++	// This function should not gets called with 0 as a parameter
++	ASSERT(numberof_points > 0);
++	p_last = p + numberof_points - 1;
++
+ 	do {
+ 		struct fixed31_32 value = dc_fixpt_from_fraction(i,
+ 			numberof_points - 1);
+@@ -982,7 +986,7 @@ static void build_evenly_distributed_points(
+ 
+ 		++p;
+ 		++i;
+-	} while (i != numberof_points);
++	} while (i < numberof_points);
+ 
+ 	p->r = dc_fixpt_div(p_last->r, dividers.divider1);
+ 	p->g = dc_fixpt_div(p_last->g, dividers.divider1);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 617557bd8c24..b813e77d8e93 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1222,14 +1222,17 @@ static int smu8_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ 
+ static int smu8_dpm_powerdown_uvd(struct pp_hwmgr *hwmgr)
+ {
+-	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating))
++	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating)) {
++		smu8_nbdpm_pstate_enable_disable(hwmgr, true, true);
+ 		return smum_send_msg_to_smc(hwmgr, PPSMC_MSG_UVDPowerOFF);
++	}
+ 	return 0;
+ }
+ 
+ static int smu8_dpm_powerup_uvd(struct pp_hwmgr *hwmgr)
+ {
+ 	if (PP_CAP(PHM_PlatformCaps_UVDPowerGating)) {
++		smu8_nbdpm_pstate_enable_disable(hwmgr, false, true);
+ 		return smum_send_msg_to_smc_with_parameter(
+ 			hwmgr,
+ 			PPSMC_MSG_UVDPowerON,
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+index 2d4ec8ac3a08..0b3ea7e9b805 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/ci_smumgr.c
+@@ -2303,11 +2303,13 @@ static uint32_t ci_get_offsetof(uint32_t type, uint32_t member)
+ 		case DRAM_LOG_BUFF_SIZE:
+ 			return offsetof(SMU7_SoftRegisters, DRAM_LOG_BUFF_SIZE);
+ 		}
++		break;
+ 	case SMU_Discrete_DpmTable:
+ 		switch (member) {
+ 		case LowSclkInterruptThreshold:
+ 			return offsetof(SMU7_Discrete_DpmTable, LowSclkInterruptT);
+ 		}
++		break;
+ 	}
+ 	pr_debug("can't get the offset of type %x member %x\n", type, member);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
+index 53df9405f43a..bb616a530d3c 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/fiji_smumgr.c
+@@ -2372,6 +2372,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t member)
+ 		case DRAM_LOG_BUFF_SIZE:
+ 			return offsetof(SMU73_SoftRegisters, DRAM_LOG_BUFF_SIZE);
+ 		}
++		break;
+ 	case SMU_Discrete_DpmTable:
+ 		switch (member) {
+ 		case UvdBootLevel:
+@@ -2383,6 +2384,7 @@ static uint32_t fiji_get_offsetof(uint32_t type, uint32_t member)
+ 		case LowSclkInterruptThreshold:
+ 			return offsetof(SMU73_Discrete_DpmTable, LowSclkInterruptThreshold);
+ 		}
++		break;
+ 	}
+ 	pr_warn("can't get the offset of type %x member %x\n", type, member);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
+index 415f691c3fa9..c15e15e657b8 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/iceland_smumgr.c
+@@ -2246,11 +2246,13 @@ static uint32_t iceland_get_offsetof(uint32_t type, uint32_t member)
+ 		case DRAM_LOG_BUFF_SIZE:
+ 			return offsetof(SMU71_SoftRegisters, DRAM_LOG_BUFF_SIZE);
+ 		}
++		break;
+ 	case SMU_Discrete_DpmTable:
+ 		switch (member) {
+ 		case LowSclkInterruptThreshold:
+ 			return offsetof(SMU71_Discrete_DpmTable, LowSclkInterruptThreshold);
+ 		}
++		break;
+ 	}
+ 	pr_warn("can't get the offset of type %x member %x\n", type, member);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
+index 782b19fc2e70..a5b7a4484700 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/tonga_smumgr.c
+@@ -2667,6 +2667,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, uint32_t member)
+ 		case DRAM_LOG_BUFF_SIZE:
+ 			return offsetof(SMU72_SoftRegisters, DRAM_LOG_BUFF_SIZE);
+ 		}
++		break;
+ 	case SMU_Discrete_DpmTable:
+ 		switch (member) {
+ 		case UvdBootLevel:
+@@ -2678,6 +2679,7 @@ static uint32_t tonga_get_offsetof(uint32_t type, uint32_t member)
+ 		case LowSclkInterruptThreshold:
+ 			return offsetof(SMU72_Discrete_DpmTable, LowSclkInterruptThreshold);
+ 		}
++		break;
+ 	}
+ 	pr_warn("can't get the offset of type %x member %x\n", type, member);
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+index 2de48959ac93..52834334bd53 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
+@@ -2267,6 +2267,7 @@ static uint32_t vegam_get_offsetof(uint32_t type, uint32_t member)
+ 		case DRAM_LOG_BUFF_SIZE:
+ 			return offsetof(SMU75_SoftRegisters, DRAM_LOG_BUFF_SIZE);
+ 		}
++		break;
+ 	case SMU_Discrete_DpmTable:
+ 		switch (member) {
+ 		case UvdBootLevel:
+@@ -2278,6 +2279,7 @@ static uint32_t vegam_get_offsetof(uint32_t type, uint32_t member)
+ 		case LowSclkInterruptThreshold:
+ 			return offsetof(SMU75_Discrete_DpmTable, LowSclkInterruptThreshold);
+ 		}
++		break;
+ 	}
+ 	pr_warn("can't get the offset of type %x member %x\n", type, member);
+ 	return 0;
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 658830620ca3..9c166621e920 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -1274,6 +1274,9 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
+ 	mutex_lock(&mgr->lock);
+ 	mstb = mgr->mst_primary;
+ 
++	if (!mstb)
++		goto out;
++
+ 	for (i = 0; i < lct - 1; i++) {
+ 		int shift = (i % 2) ? 0 : 4;
+ 		int port_num = (rad[i / 2] >> shift) & 0xf;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index fe9c6c731e87..ee4a5e1221f1 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -30,6 +30,12 @@ struct drm_dmi_panel_orientation_data {
+ 	int orientation;
+ };
+ 
++static const struct drm_dmi_panel_orientation_data acer_s1003 = {
++	.width = 800,
++	.height = 1280,
++	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data asus_t100ha = {
+ 	.width = 800,
+ 	.height = 1280,
+@@ -67,7 +73,13 @@ static const struct drm_dmi_panel_orientation_data lcd800x1280_rightside_up = {
+ };
+ 
+ static const struct dmi_system_id orientation_data[] = {
+-	{	/* Asus T100HA */
++	{	/* Acer One 10 (S1003) */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Acer"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "One S1003"),
++		},
++		.driver_data = (void *)&acer_s1003,
++	}, {	/* Asus T100HA */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100HAN"),
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+index 50d6b88cb7aa..5944f319c78b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+@@ -93,7 +93,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
+ 	 * If the GPU managed to complete this jobs fence, the timout is
+ 	 * spurious. Bail out.
+ 	 */
+-	if (fence_completed(gpu, submit->out_fence->seqno))
++	if (dma_fence_is_signaled(submit->out_fence))
+ 		return;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
+index b92595c477ef..8bd29075ae4e 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
+@@ -122,6 +122,7 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
+ 	hi_fbdev->fb = hibmc_framebuffer_init(priv->dev, &mode_cmd, gobj);
+ 	if (IS_ERR(hi_fbdev->fb)) {
+ 		ret = PTR_ERR(hi_fbdev->fb);
++		hi_fbdev->fb = NULL;
+ 		DRM_ERROR("failed to initialize framebuffer: %d\n", ret);
+ 		goto out_release_fbi;
+ 	}
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
+index 97e62647418a..5040bcd430d2 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.h
++++ b/drivers/gpu/drm/i915/gvt/gtt.h
+@@ -35,7 +35,6 @@
+ #define _GVT_GTT_H_
+ 
+ #define I915_GTT_PAGE_SHIFT         12
+-#define I915_GTT_PAGE_MASK		(~(I915_GTT_PAGE_SIZE - 1))
+ 
+ struct intel_vgpu_mm;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 17c5097721e8..4b77325d135a 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -1112,11 +1112,7 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj,
+ 	offset = offset_in_page(args->offset);
+ 	for (idx = args->offset >> PAGE_SHIFT; remain; idx++) {
+ 		struct page *page = i915_gem_object_get_page(obj, idx);
+-		int length;
+-
+-		length = remain;
+-		if (offset + length > PAGE_SIZE)
+-			length = PAGE_SIZE - offset;
++		unsigned int length = min_t(u64, remain, PAGE_SIZE - offset);
+ 
+ 		ret = shmem_pread(page, offset, length, user_data,
+ 				  page_to_phys(page) & obj_do_bit17_swizzling,
+@@ -1562,11 +1558,7 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj,
+ 	offset = offset_in_page(args->offset);
+ 	for (idx = args->offset >> PAGE_SHIFT; remain; idx++) {
+ 		struct page *page = i915_gem_object_get_page(obj, idx);
+-		int length;
+-
+-		length = remain;
+-		if (offset + length > PAGE_SIZE)
+-			length = PAGE_SIZE - offset;
++		unsigned int length = min_t(u64, remain, PAGE_SIZE - offset);
+ 
+ 		ret = shmem_pwrite(page, offset, length, user_data,
+ 				   page_to_phys(page) & obj_do_bit17_swizzling,
+diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+index 22df17c8ca9b..b43bc767ec3d 100644
+--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+@@ -449,7 +449,7 @@ eb_validate_vma(struct i915_execbuffer *eb,
+ 	 * any non-page-aligned or non-canonical addresses.
+ 	 */
+ 	if (unlikely(entry->flags & EXEC_OBJECT_PINNED &&
+-		     entry->offset != gen8_canonical_addr(entry->offset & PAGE_MASK)))
++		     entry->offset != gen8_canonical_addr(entry->offset & I915_GTT_PAGE_MASK)))
+ 		return -EINVAL;
+ 
+ 	/* pad_to_size was once a reserved field, so sanitize it */
+diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
+index aec4f73574f4..69f53faab644 100644
+--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
++++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
+@@ -49,6 +49,8 @@
+ #define I915_GTT_PAGE_SIZE I915_GTT_PAGE_SIZE_4K
+ #define I915_GTT_MAX_PAGE_SIZE I915_GTT_PAGE_SIZE_2M
+ 
++#define I915_GTT_PAGE_MASK -I915_GTT_PAGE_SIZE
++
+ #define I915_GTT_MIN_ALIGNMENT I915_GTT_PAGE_SIZE
+ 
+ #define I915_FENCE_REG_NONE -1
+@@ -625,20 +627,20 @@ int i915_gem_gtt_insert(struct i915_address_space *vm,
+ 			u64 start, u64 end, unsigned int flags);
+ 
+ /* Flags used by pin/bind&friends. */
+-#define PIN_NONBLOCK		BIT(0)
+-#define PIN_MAPPABLE		BIT(1)
+-#define PIN_ZONE_4G		BIT(2)
+-#define PIN_NONFAULT		BIT(3)
+-#define PIN_NOEVICT		BIT(4)
+-
+-#define PIN_MBZ			BIT(5) /* I915_VMA_PIN_OVERFLOW */
+-#define PIN_GLOBAL		BIT(6) /* I915_VMA_GLOBAL_BIND */
+-#define PIN_USER		BIT(7) /* I915_VMA_LOCAL_BIND */
+-#define PIN_UPDATE		BIT(8)
+-
+-#define PIN_HIGH		BIT(9)
+-#define PIN_OFFSET_BIAS		BIT(10)
+-#define PIN_OFFSET_FIXED	BIT(11)
++#define PIN_NONBLOCK		BIT_ULL(0)
++#define PIN_MAPPABLE		BIT_ULL(1)
++#define PIN_ZONE_4G		BIT_ULL(2)
++#define PIN_NONFAULT		BIT_ULL(3)
++#define PIN_NOEVICT		BIT_ULL(4)
++
++#define PIN_MBZ			BIT_ULL(5) /* I915_VMA_PIN_OVERFLOW */
++#define PIN_GLOBAL		BIT_ULL(6) /* I915_VMA_GLOBAL_BIND */
++#define PIN_USER		BIT_ULL(7) /* I915_VMA_LOCAL_BIND */
++#define PIN_UPDATE		BIT_ULL(8)
++
++#define PIN_HIGH		BIT_ULL(9)
++#define PIN_OFFSET_BIAS		BIT_ULL(10)
++#define PIN_OFFSET_FIXED	BIT_ULL(11)
+ #define PIN_OFFSET_MASK		(-I915_GTT_PAGE_SIZE)
+ 
+ #endif
+diff --git a/drivers/gpu/drm/i915/intel_audio.c b/drivers/gpu/drm/i915/intel_audio.c
+index 3ea566f99450..391ad4123953 100644
+--- a/drivers/gpu/drm/i915/intel_audio.c
++++ b/drivers/gpu/drm/i915/intel_audio.c
+@@ -134,6 +134,9 @@ static const struct {
+ /* HDMI N/CTS table */
+ #define TMDS_297M 297000
+ #define TMDS_296M 296703
++#define TMDS_594M 594000
++#define TMDS_593M 593407
++
+ static const struct {
+ 	int sample_rate;
+ 	int clock;
+@@ -154,6 +157,20 @@ static const struct {
+ 	{ 176400, TMDS_297M, 18816, 247500 },
+ 	{ 192000, TMDS_296M, 23296, 281250 },
+ 	{ 192000, TMDS_297M, 20480, 247500 },
++	{ 44100, TMDS_593M, 8918, 937500 },
++	{ 44100, TMDS_594M, 9408, 990000 },
++	{ 48000, TMDS_593M, 5824, 562500 },
++	{ 48000, TMDS_594M, 6144, 594000 },
++	{ 32000, TMDS_593M, 5824, 843750 },
++	{ 32000, TMDS_594M, 3072, 445500 },
++	{ 88200, TMDS_593M, 17836, 937500 },
++	{ 88200, TMDS_594M, 18816, 990000 },
++	{ 96000, TMDS_593M, 11648, 562500 },
++	{ 96000, TMDS_594M, 12288, 594000 },
++	{ 176400, TMDS_593M, 35672, 937500 },
++	{ 176400, TMDS_594M, 37632, 990000 },
++	{ 192000, TMDS_593M, 23296, 562500 },
++	{ 192000, TMDS_594M, 24576, 594000 },
+ };
+ 
+ /* get AUD_CONFIG_PIXEL_CLOCK_HDMI_* value for mode */
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 00486c744f24..b1e4f460f403 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -12509,17 +12509,12 @@ static void intel_atomic_commit_tail(struct drm_atomic_state *state)
+ 			intel_check_cpu_fifo_underruns(dev_priv);
+ 			intel_check_pch_fifo_underruns(dev_priv);
+ 
+-			if (!new_crtc_state->active) {
+-				/*
+-				 * Make sure we don't call initial_watermarks
+-				 * for ILK-style watermark updates.
+-				 *
+-				 * No clue what this is supposed to achieve.
+-				 */
+-				if (INTEL_GEN(dev_priv) >= 9)
+-					dev_priv->display.initial_watermarks(intel_state,
+-									     to_intel_crtc_state(new_crtc_state));
+-			}
++			/* FIXME unify this for all platforms */
++			if (!new_crtc_state->active &&
++			    !HAS_GMCH_DISPLAY(dev_priv) &&
++			    dev_priv->display.initial_watermarks)
++				dev_priv->display.initial_watermarks(intel_state,
++								     to_intel_crtc_state(new_crtc_state));
+ 		}
+ 	}
+ 
+@@ -14383,7 +14378,7 @@ static int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
+ 	     fb->height < SKL_MIN_YUV_420_SRC_H ||
+ 	     (fb->width % 4) != 0 || (fb->height % 4) != 0)) {
+ 		DRM_DEBUG_KMS("src dimensions not correct for NV12\n");
+-		return -EINVAL;
++		goto err;
+ 	}
+ 
+ 	for (i = 0; i < fb->format->num_planes; i++) {
+@@ -15216,13 +15211,9 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc,
+ 			   I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
+ 	}
+ 
+-	/* restore vblank interrupts to correct state */
+-	drm_crtc_vblank_reset(&crtc->base);
+ 	if (crtc->active) {
+ 		struct intel_plane *plane;
+ 
+-		drm_crtc_vblank_on(&crtc->base);
+-
+ 		/* Disable everything but the primary plane */
+ 		for_each_intel_plane_on_crtc(dev, crtc, plane) {
+ 			const struct intel_plane_state *plane_state =
+@@ -15549,7 +15540,6 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
+ 			     struct drm_modeset_acquire_ctx *ctx)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(dev);
+-	enum pipe pipe;
+ 	struct intel_crtc *crtc;
+ 	struct intel_encoder *encoder;
+ 	int i;
+@@ -15560,15 +15550,23 @@ intel_modeset_setup_hw_state(struct drm_device *dev,
+ 	/* HW state is read out, now we need to sanitize this mess. */
+ 	get_encoder_power_domains(dev_priv);
+ 
+-	intel_sanitize_plane_mapping(dev_priv);
++	/*
++	 * intel_sanitize_plane_mapping() may need to do vblank
++	 * waits, so we need vblank interrupts restored beforehand.
++	 */
++	for_each_intel_crtc(&dev_priv->drm, crtc) {
++		drm_crtc_vblank_reset(&crtc->base);
+ 
+-	for_each_intel_encoder(dev, encoder) {
+-		intel_sanitize_encoder(encoder);
++		if (crtc->active)
++			drm_crtc_vblank_on(&crtc->base);
+ 	}
+ 
+-	for_each_pipe(dev_priv, pipe) {
+-		crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
++	intel_sanitize_plane_mapping(dev_priv);
++
++	for_each_intel_encoder(dev, encoder)
++		intel_sanitize_encoder(encoder);
+ 
++	for_each_intel_crtc(&dev_priv->drm, crtc) {
+ 		intel_sanitize_crtc(crtc, ctx);
+ 		intel_dump_pipe_config(crtc, crtc->config,
+ 				       "[setup_hw_state]");
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 8e465095fe06..5d6517d37236 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -387,6 +387,22 @@ static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
+ 	return true;
+ }
+ 
++static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
++						     int link_rate,
++						     uint8_t lane_count)
++{
++	const struct drm_display_mode *fixed_mode =
++		intel_dp->attached_connector->panel.fixed_mode;
++	int mode_rate, max_rate;
++
++	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
++	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
++	if (mode_rate > max_rate)
++		return false;
++
++	return true;
++}
++
+ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
+ 					    int link_rate, uint8_t lane_count)
+ {
+@@ -396,9 +412,23 @@ int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
+ 				    intel_dp->num_common_rates,
+ 				    link_rate);
+ 	if (index > 0) {
++		if (intel_dp_is_edp(intel_dp) &&
++		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
++							      intel_dp->common_rates[index - 1],
++							      lane_count)) {
++			DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n");
++			return 0;
++		}
+ 		intel_dp->max_link_rate = intel_dp->common_rates[index - 1];
+ 		intel_dp->max_link_lane_count = lane_count;
+ 	} else if (lane_count > 1) {
++		if (intel_dp_is_edp(intel_dp) &&
++		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
++							      intel_dp_max_common_rate(intel_dp),
++							      lane_count >> 1)) {
++			DRM_DEBUG_KMS("Retrying Link training for eDP with same parameters\n");
++			return 0;
++		}
+ 		intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
+ 		intel_dp->max_link_lane_count = lane_count >> 1;
+ 	} else {
+@@ -4842,19 +4872,13 @@ intel_dp_long_pulse(struct intel_connector *connector,
+ 		 */
+ 		status = connector_status_disconnected;
+ 		goto out;
+-	} else {
+-		/*
+-		 * If display is now connected check links status,
+-		 * there has been known issues of link loss triggering
+-		 * long pulse.
+-		 *
+-		 * Some sinks (eg. ASUS PB287Q) seem to perform some
+-		 * weird HPD ping pong during modesets. So we can apparently
+-		 * end up with HPD going low during a modeset, and then
+-		 * going back up soon after. And once that happens we must
+-		 * retrain the link to get a picture. That's in case no
+-		 * userspace component reacted to intermittent HPD dip.
+-		 */
++	}
++
++	/*
++	 * Some external monitors do not signal loss of link synchronization
++	 * with an IRQ_HPD, so force a link status check.
++	 */
++	if (!intel_dp_is_edp(intel_dp)) {
+ 		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
+ 
+ 		intel_dp_retrain_link(encoder, ctx);
+diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
+index 3fcaa98b9055..667397541f10 100644
+--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
+@@ -335,22 +335,14 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
+ 	return;
+ 
+  failure_handling:
+-	/* Dont fallback and prune modes if its eDP */
+-	if (!intel_dp_is_edp(intel_dp)) {
+-		DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
+-			      intel_connector->base.base.id,
+-			      intel_connector->base.name,
+-			      intel_dp->link_rate, intel_dp->lane_count);
+-		if (!intel_dp_get_link_train_fallback_values(intel_dp,
+-							     intel_dp->link_rate,
+-							     intel_dp->lane_count))
+-			/* Schedule a Hotplug Uevent to userspace to start modeset */
+-			schedule_work(&intel_connector->modeset_retry_work);
+-	} else {
+-		DRM_ERROR("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
+-			  intel_connector->base.base.id,
+-			  intel_connector->base.name,
+-			  intel_dp->link_rate, intel_dp->lane_count);
+-	}
++	DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
++		      intel_connector->base.base.id,
++		      intel_connector->base.name,
++		      intel_dp->link_rate, intel_dp->lane_count);
++	if (!intel_dp_get_link_train_fallback_values(intel_dp,
++						     intel_dp->link_rate,
++						     intel_dp->lane_count))
++		/* Schedule a Hotplug Uevent to userspace to start modeset */
++		schedule_work(&intel_connector->modeset_retry_work);
+ 	return;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
+index 5890500a3a8b..80566cc347ee 100644
+--- a/drivers/gpu/drm/i915/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/intel_dp_mst.c
+@@ -38,11 +38,11 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
+ 	struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
+ 	struct intel_digital_port *intel_dig_port = intel_mst->primary;
+ 	struct intel_dp *intel_dp = &intel_dig_port->dp;
+-	struct intel_connector *connector =
+-		to_intel_connector(conn_state->connector);
++	struct drm_connector *connector = conn_state->connector;
++	void *port = to_intel_connector(connector)->port;
+ 	struct drm_atomic_state *state = pipe_config->base.state;
+ 	int bpp;
+-	int lane_count, slots;
++	int lane_count, slots = 0;
+ 	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+ 	int mst_pbn;
+ 	bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc,
+@@ -70,17 +70,23 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
+ 
+ 	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
+ 
+-	if (drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, connector->port))
++	if (drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, port))
+ 		pipe_config->has_audio = true;
+ 
+ 	mst_pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock, bpp);
+ 	pipe_config->pbn = mst_pbn;
+ 
+-	slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
+-					      connector->port, mst_pbn);
+-	if (slots < 0) {
+-		DRM_DEBUG_KMS("failed finding vcpi slots:%d\n", slots);
+-		return false;
++	/* Zombie connectors can't have VCPI slots */
++	if (READ_ONCE(connector->registered)) {
++		slots = drm_dp_atomic_find_vcpi_slots(state,
++						      &intel_dp->mst_mgr,
++						      port,
++						      mst_pbn);
++		if (slots < 0) {
++			DRM_DEBUG_KMS("failed finding vcpi slots:%d\n",
++				      slots);
++			return false;
++		}
+ 	}
+ 
+ 	intel_link_compute_m_n(bpp, lane_count,
+@@ -307,9 +313,8 @@ static int intel_dp_mst_get_ddc_modes(struct drm_connector *connector)
+ 	struct edid *edid;
+ 	int ret;
+ 
+-	if (!intel_dp) {
++	if (!READ_ONCE(connector->registered))
+ 		return intel_connector_update_modes(connector, NULL);
+-	}
+ 
+ 	edid = drm_dp_mst_get_edid(connector, &intel_dp->mst_mgr, intel_connector->port);
+ 	ret = intel_connector_update_modes(connector, edid);
+@@ -324,9 +329,10 @@ intel_dp_mst_detect(struct drm_connector *connector, bool force)
+ 	struct intel_connector *intel_connector = to_intel_connector(connector);
+ 	struct intel_dp *intel_dp = intel_connector->mst_port;
+ 
+-	if (!intel_dp)
++	if (!READ_ONCE(connector->registered))
+ 		return connector_status_disconnected;
+-	return drm_dp_mst_detect_port(connector, &intel_dp->mst_mgr, intel_connector->port);
++	return drm_dp_mst_detect_port(connector, &intel_dp->mst_mgr,
++				      intel_connector->port);
+ }
+ 
+ static void
+@@ -366,7 +372,7 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
+ 	int bpp = 24; /* MST uses fixed bpp */
+ 	int max_rate, mode_rate, max_lanes, max_link_clock;
+ 
+-	if (!intel_dp)
++	if (!READ_ONCE(connector->registered))
+ 		return MODE_ERROR;
+ 
+ 	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+@@ -398,7 +404,7 @@ static struct drm_encoder *intel_mst_atomic_best_encoder(struct drm_connector *c
+ 	struct intel_dp *intel_dp = intel_connector->mst_port;
+ 	struct intel_crtc *crtc = to_intel_crtc(state->crtc);
+ 
+-	if (!intel_dp)
++	if (!READ_ONCE(connector->registered))
+ 		return NULL;
+ 	return &intel_dp->mst_encoders[crtc->pipe]->base.base;
+ }
+@@ -458,6 +464,10 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
+ 	if (!intel_connector)
+ 		return NULL;
+ 
++	intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
++	intel_connector->mst_port = intel_dp;
++	intel_connector->port = port;
++
+ 	connector = &intel_connector->base;
+ 	ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs,
+ 				 DRM_MODE_CONNECTOR_DisplayPort);
+@@ -468,10 +478,6 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
+ 
+ 	drm_connector_helper_add(connector, &intel_dp_mst_connector_helper_funcs);
+ 
+-	intel_connector->get_hw_state = intel_dp_mst_get_hw_state;
+-	intel_connector->mst_port = intel_dp;
+-	intel_connector->port = port;
+-
+ 	for_each_pipe(dev_priv, pipe) {
+ 		struct drm_encoder *enc =
+ 			&intel_dp->mst_encoders[pipe]->base.base;
+@@ -510,7 +516,6 @@ static void intel_dp_register_mst_connector(struct drm_connector *connector)
+ static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ 					   struct drm_connector *connector)
+ {
+-	struct intel_connector *intel_connector = to_intel_connector(connector);
+ 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+ 
+ 	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n", connector->base.id, connector->name);
+@@ -519,10 +524,6 @@ static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ 	if (dev_priv->fbdev)
+ 		drm_fb_helper_remove_one_connector(&dev_priv->fbdev->helper,
+ 						   connector);
+-	/* prevent race with the check in ->detect */
+-	drm_modeset_lock(&connector->dev->mode_config.connection_mutex, NULL);
+-	intel_connector->mst_port = NULL;
+-	drm_modeset_unlock(&connector->dev->mode_config.connection_mutex);
+ 
+ 	drm_connector_unreference(connector);
+ }
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index cdf19553ffac..5d5336fbe7b0 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -297,8 +297,10 @@ void intel_lpe_audio_teardown(struct drm_i915_private *dev_priv)
+ 	lpe_audio_platdev_destroy(dev_priv);
+ 
+ 	irq_free_desc(dev_priv->lpe_audio.irq);
+-}
+ 
++	dev_priv->lpe_audio.irq = -1;
++	dev_priv->lpe_audio.platdev = NULL;
++}
+ 
+ /**
+  * intel_lpe_audio_notify() - notify lpe audio event
+diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
+index 7c4c8fb1dae4..0328ee704ee5 100644
+--- a/drivers/gpu/drm/i915/intel_lrc.c
++++ b/drivers/gpu/drm/i915/intel_lrc.c
+@@ -425,7 +425,8 @@ static u64 execlists_update_context(struct i915_request *rq)
+ 
+ 	reg_state[CTX_RING_TAIL+1] = intel_ring_set_tail(rq->ring, rq->tail);
+ 
+-	/* True 32b PPGTT with dynamic page allocation: update PDP
++	/*
++	 * True 32b PPGTT with dynamic page allocation: update PDP
+ 	 * registers and point the unallocated PDPs to scratch page.
+ 	 * PML4 is allocated during ppgtt init, so this is not needed
+ 	 * in 48-bit mode.
+@@ -433,6 +434,17 @@ static u64 execlists_update_context(struct i915_request *rq)
+ 	if (ppgtt && !i915_vm_is_48bit(&ppgtt->base))
+ 		execlists_update_context_pdps(ppgtt, reg_state);
+ 
++	/*
++	 * Make sure the context image is complete before we submit it to HW.
++	 *
++	 * Ostensibly, writes (including the WCB) should be flushed prior to
++	 * an uncached write such as our mmio register access, the empirical
++	 * evidence (esp. on Braswell) suggests that the WC write into memory
++	 * may not be visible to the HW prior to the completion of the UC
++	 * register write and that we may begin execution from the context
++	 * before its image is complete leading to invalid PD chasing.
++	 */
++	wmb();
+ 	return ce->lrc_desc;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
+index 8f19349a6055..72007d634359 100644
+--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
++++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
+@@ -91,6 +91,7 @@ static int
+ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
+ {
+ 	u32 cmd, *cs;
++	int i;
+ 
+ 	/*
+ 	 * read/write caches:
+@@ -127,12 +128,45 @@ gen4_render_ring_flush(struct i915_request *rq, u32 mode)
+ 			cmd |= MI_INVALIDATE_ISP;
+ 	}
+ 
+-	cs = intel_ring_begin(rq, 2);
++	i = 2;
++	if (mode & EMIT_INVALIDATE)
++		i += 20;
++
++	cs = intel_ring_begin(rq, i);
+ 	if (IS_ERR(cs))
+ 		return PTR_ERR(cs);
+ 
+ 	*cs++ = cmd;
+-	*cs++ = MI_NOOP;
++
++	/*
++	 * A random delay to let the CS invalidate take effect? Without this
++	 * delay, the GPU relocation path fails as the CS does not see
++	 * the updated contents. Just as important, if we apply the flushes
++	 * to the EMIT_FLUSH branch (i.e. immediately after the relocation
++	 * write and before the invalidate on the next batch), the relocations
++	 * still fail. This implies that it is a delay following invalidation
++	 * that is required to reset the caches as opposed to a delay to
++	 * ensure the memory is written.
++	 */
++	if (mode & EMIT_INVALIDATE) {
++		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
++		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
++			PIPE_CONTROL_GLOBAL_GTT;
++		*cs++ = 0;
++		*cs++ = 0;
++
++		for (i = 0; i < 12; i++)
++			*cs++ = MI_FLUSH;
++
++		*cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
++		*cs++ = i915_ggtt_offset(rq->engine->scratch) |
++			PIPE_CONTROL_GLOBAL_GTT;
++		*cs++ = 0;
++		*cs++ = 0;
++	}
++
++	*cs++ = cmd;
++
+ 	intel_ring_advance(rq, cs);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 17d0506d058c..48960c1d92bc 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -477,8 +477,7 @@ static int adreno_get_legacy_pwrlevels(struct device *dev)
+ 	struct device_node *child, *node;
+ 	int ret;
+ 
+-	node = of_find_compatible_node(dev->of_node, NULL,
+-		"qcom,gpu-pwrlevels");
++	node = of_get_compatible_child(dev->of_node, "qcom,gpu-pwrlevels");
+ 	if (!node) {
+ 		dev_err(dev, "Could not find the GPU powerlevels\n");
+ 		return -ENXIO;
+@@ -499,6 +498,8 @@ static int adreno_get_legacy_pwrlevels(struct device *dev)
+ 			dev_pm_opp_add(dev, val, 0);
+ 	}
+ 
++	of_node_put(node);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index c3c8c84da113..c1e4aab9932e 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -818,22 +818,16 @@ nv50_mstc_atomic_best_encoder(struct drm_connector *connector,
+ {
+ 	struct nv50_head *head = nv50_head(connector_state->crtc);
+ 	struct nv50_mstc *mstc = nv50_mstc(connector);
+-	if (mstc->port) {
+-		struct nv50_mstm *mstm = mstc->mstm;
+-		return &mstm->msto[head->base.index]->encoder;
+-	}
+-	return NULL;
++
++	return &mstc->mstm->msto[head->base.index]->encoder;
+ }
+ 
+ static struct drm_encoder *
+ nv50_mstc_best_encoder(struct drm_connector *connector)
+ {
+ 	struct nv50_mstc *mstc = nv50_mstc(connector);
+-	if (mstc->port) {
+-		struct nv50_mstm *mstm = mstc->mstm;
+-		return &mstm->msto[0]->encoder;
+-	}
+-	return NULL;
++
++	return &mstc->mstm->msto[0]->encoder;
+ }
+ 
+ static enum drm_mode_status
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index 408b955e5c39..6dd72bc32897 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -116,7 +116,7 @@ nv40_backlight_init(struct drm_connector *connector)
+ 				       &nv40_bl_ops, &props);
+ 
+ 	if (IS_ERR(bd)) {
+-		if (bl_connector.id > 0)
++		if (bl_connector.id >= 0)
+ 			ida_simple_remove(&bl_ida, bl_connector.id);
+ 		return PTR_ERR(bd);
+ 	}
+@@ -249,7 +249,7 @@ nv50_backlight_init(struct drm_connector *connector)
+ 				       nv_encoder, ops, &props);
+ 
+ 	if (IS_ERR(bd)) {
+-		if (bl_connector.id > 0)
++		if (bl_connector.id >= 0)
+ 			ida_simple_remove(&bl_ida, bl_connector.id);
+ 		return PTR_ERR(bd);
+ 	}
+diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+index f92fe205550b..e884183c018a 100644
+--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
++++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+@@ -285,6 +285,17 @@ static int dmm_txn_commit(struct dmm_txn *txn, bool wait)
+ 	}
+ 
+ 	txn->last_pat->next_pa = 0;
++	/* ensure that the written descriptors are visible to DMM */
++	wmb();
++
++	/*
++	 * NOTE: the wmb() above should be enough, but there seems to be a bug
++	 * in OMAP's memory barrier implementation, which in some rare cases may
++	 * cause the writes not to be observable after wmb().
++	 */
++
++	/* read back to ensure the data is in RAM */
++	readl(&txn->last_pat->next_pa);
+ 
+ 	/* write to PAT_DESCR to clear out any pending transaction */
+ 	dmm_write(dmm, 0x0, reg[PAT_DESCR][engine->id]);
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+index f0bc7cc0e913..fb46df56f0c4 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+@@ -516,12 +516,22 @@ int rcar_du_modeset_init(struct rcar_du_device *rcdu)
+ 
+ 	dev->mode_config.min_width = 0;
+ 	dev->mode_config.min_height = 0;
+-	dev->mode_config.max_width = 4095;
+-	dev->mode_config.max_height = 2047;
+ 	dev->mode_config.normalize_zpos = true;
+ 	dev->mode_config.funcs = &rcar_du_mode_config_funcs;
+ 	dev->mode_config.helper_private = &rcar_du_mode_config_helper;
+ 
++	if (rcdu->info->gen < 3) {
++		dev->mode_config.max_width = 4095;
++		dev->mode_config.max_height = 2047;
++	} else {
++		/*
++		 * The Gen3 DU uses the VSP1 for memory access, and is limited
++		 * to frame sizes of 8190x8190.
++		 */
++		dev->mode_config.max_width = 8190;
++		dev->mode_config.max_height = 8190;
++	}
++
+ 	rcdu->num_crtcs = hweight8(rcdu->info->channels_mask);
+ 
+ 	ret = rcar_du_properties_init(rcdu);
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+index f814d37b1db2..05368fa4f956 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+@@ -442,6 +442,11 @@ static int rockchip_drm_platform_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static void rockchip_drm_platform_shutdown(struct platform_device *pdev)
++{
++	rockchip_drm_platform_remove(pdev);
++}
++
+ static const struct of_device_id rockchip_drm_dt_ids[] = {
+ 	{ .compatible = "rockchip,display-subsystem", },
+ 	{ /* sentinel */ },
+@@ -451,6 +456,7 @@ MODULE_DEVICE_TABLE(of, rockchip_drm_dt_ids);
+ static struct platform_driver rockchip_drm_platform_driver = {
+ 	.probe = rockchip_drm_platform_probe,
+ 	.remove = rockchip_drm_platform_remove,
++	.shutdown = rockchip_drm_platform_shutdown,
+ 	.driver = {
+ 		.name = "rockchip-drm",
+ 		.of_match_table = rockchip_drm_dt_ids,
+diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
+index e88c01961948..52918185578a 100644
+--- a/drivers/hwmon/hwmon.c
++++ b/drivers/hwmon/hwmon.c
+@@ -631,8 +631,10 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 				if (info[i]->config[j] & HWMON_T_INPUT) {
+ 					err = hwmon_thermal_add_sensor(dev,
+ 								hwdev, j);
+-					if (err)
+-						goto free_device;
++					if (err) {
++						device_unregister(hdev);
++						goto ida_remove;
++					}
+ 				}
+ 			}
+ 		}
+@@ -640,8 +642,6 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
+ 
+ 	return hdev;
+ 
+-free_device:
+-	device_unregister(hdev);
+ free_hwmon:
+ 	kfree(hwdev);
+ ida_remove:
+diff --git a/drivers/input/touchscreen/wm97xx-core.c b/drivers/input/touchscreen/wm97xx-core.c
+index 2566b4d8b342..73856c2a8ac0 100644
+--- a/drivers/input/touchscreen/wm97xx-core.c
++++ b/drivers/input/touchscreen/wm97xx-core.c
+@@ -929,7 +929,8 @@ static int __init wm97xx_init(void)
+ 
+ static void __exit wm97xx_exit(void)
+ {
+-	driver_unregister(&wm97xx_driver);
++	if (IS_BUILTIN(CONFIG_AC97_BUS))
++		driver_unregister(&wm97xx_driver);
+ 	platform_driver_unregister(&wm97xx_mfd_driver);
+ }
+ 
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 805bd9c65940..8b450fc53202 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -901,9 +901,6 @@ static int tvp5150_set_selection(struct v4l2_subdev *sd,
+ 
+ 	/* tvp5150 has some special limits */
+ 	rect.left = clamp(rect.left, 0, TVP5150_MAX_CROP_LEFT);
+-	rect.width = clamp_t(unsigned int, rect.width,
+-			     TVP5150_H_MAX - TVP5150_MAX_CROP_LEFT - rect.left,
+-			     TVP5150_H_MAX - rect.left);
+ 	rect.top = clamp(rect.top, 0, TVP5150_MAX_CROP_TOP);
+ 
+ 	/* Calculate height based on current standard */
+@@ -917,9 +914,16 @@ static int tvp5150_set_selection(struct v4l2_subdev *sd,
+ 	else
+ 		hmax = TVP5150_V_MAX_OTHERS;
+ 
+-	rect.height = clamp_t(unsigned int, rect.height,
++	/*
++	 * alignments:
++	 *  - width = 2 due to UYVY colorspace
++	 *  - height, image = no special alignment
++	 */
++	v4l_bound_align_image(&rect.width,
++			      TVP5150_H_MAX - TVP5150_MAX_CROP_LEFT - rect.left,
++			      TVP5150_H_MAX - rect.left, 1, &rect.height,
+ 			      hmax - TVP5150_MAX_CROP_TOP - rect.top,
+-			      hmax - rect.top);
++			      hmax - rect.top, 0, 0);
+ 
+ 	tvp5150_write(sd, TVP5150_VERT_BLANKING_START, rect.top);
+ 	tvp5150_write(sd, TVP5150_VERT_BLANKING_STOP,
+diff --git a/drivers/media/pci/cx23885/altera-ci.c b/drivers/media/pci/cx23885/altera-ci.c
+index 70aec9bb7e95..07bf20c6c6fc 100644
+--- a/drivers/media/pci/cx23885/altera-ci.c
++++ b/drivers/media/pci/cx23885/altera-ci.c
+@@ -665,6 +665,10 @@ static int altera_hw_filt_init(struct altera_ci_config *config, int hw_filt_nr)
+ 		}
+ 
+ 		temp_int = append_internal(inter);
++		if (!temp_int) {
++			ret = -ENOMEM;
++			goto err;
++		}
+ 		inter->filts_used = 1;
+ 		inter->dev = config->dev;
+ 		inter->fpga_rw = config->fpga_rw;
+@@ -699,6 +703,7 @@ err:
+ 		     __func__, ret);
+ 
+ 	kfree(pid_filt);
++	kfree(inter);
+ 
+ 	return ret;
+ }
+@@ -733,6 +738,10 @@ int altera_ci_init(struct altera_ci_config *config, int ci_nr)
+ 		}
+ 
+ 		temp_int = append_internal(inter);
++		if (!temp_int) {
++			ret = -ENOMEM;
++			goto err;
++		}
+ 		inter->cis_used = 1;
+ 		inter->dev = config->dev;
+ 		inter->fpga_rw = config->fpga_rw;
+@@ -801,6 +810,7 @@ err:
+ 	ci_dbg_print("%s: Cannot initialize CI: Error %d.\n", __func__, ret);
+ 
+ 	kfree(state);
++	kfree(inter);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index c7631e117dd3..1ae15d4ec5ed 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1719,7 +1719,8 @@ static int coda_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		break;
+ 	case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
+ 		/* TODO: switch between baseline and constrained baseline */
+-		ctx->params.h264_profile_idc = 66;
++		if (ctx->inst_type == CODA_INST_ENCODER)
++			ctx->params.h264_profile_idc = 66;
+ 		break;
+ 	case V4L2_CID_MPEG_VIDEO_H264_LEVEL:
+ 		/* nothing to do, this is set by the encoder */
+diff --git a/drivers/mtd/devices/Kconfig b/drivers/mtd/devices/Kconfig
+index 57b02c4b3f63..6072ea9a84f2 100644
+--- a/drivers/mtd/devices/Kconfig
++++ b/drivers/mtd/devices/Kconfig
+@@ -207,7 +207,7 @@ comment "Disk-On-Chip Device Drivers"
+ config MTD_DOCG3
+ 	tristate "M-Systems Disk-On-Chip G3"
+ 	select BCH
+-	select BCH_CONST_PARAMS
++	select BCH_CONST_PARAMS if !MTD_NAND_BCH
+ 	select BITREVERSE
+ 	---help---
+ 	  This provides an MTD device driver for the M-Systems DiskOnChip
+diff --git a/drivers/mtd/spi-nor/cadence-quadspi.c b/drivers/mtd/spi-nor/cadence-quadspi.c
+index d7e10b36a0b9..0f0126901ac5 100644
+--- a/drivers/mtd/spi-nor/cadence-quadspi.c
++++ b/drivers/mtd/spi-nor/cadence-quadspi.c
+@@ -1000,7 +1000,7 @@ static int cqspi_direct_read_execute(struct spi_nor *nor, u_char *buf,
+ err_unmap:
+ 	dma_unmap_single(nor->dev, dma_dst, len, DMA_DEV_TO_MEM);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static ssize_t cqspi_read(struct spi_nor *nor, loff_t from,
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 2b01180be834..29661d45c6d0 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3118,13 +3118,13 @@ static int bond_slave_netdev_event(unsigned long event,
+ 	case NETDEV_CHANGE:
+ 		/* For 802.3ad mode only:
+ 		 * Getting invalid Speed/Duplex values here will put slave
+-		 * in weird state. So mark it as link-down for the time
++		 * in weird state. So mark it as link-fail for the time
+ 		 * being and let link-monitoring (miimon) set it right when
+ 		 * correct speeds/duplex are available.
+ 		 */
+ 		if (bond_update_speed_duplex(slave) &&
+ 		    BOND_MODE(bond) == BOND_MODE_8023AD)
+-			slave->link = BOND_LINK_DOWN;
++			slave->link = BOND_LINK_FAIL;
+ 
+ 		if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 			bond_3ad_adapter_speed_duplex_changed(slave);
+diff --git a/drivers/of/of_numa.c b/drivers/of/of_numa.c
+index 27d9b4bba535..2411ed3c7303 100644
+--- a/drivers/of/of_numa.c
++++ b/drivers/of/of_numa.c
+@@ -115,9 +115,14 @@ static int __init of_numa_parse_distance_map_v1(struct device_node *map)
+ 		distance = of_read_number(matrix, 1);
+ 		matrix++;
+ 
++		if ((nodea == nodeb && distance != LOCAL_DISTANCE) ||
++		    (nodea != nodeb && distance <= LOCAL_DISTANCE)) {
++			pr_err("Invalid distance[node%d -> node%d] = %d\n",
++			       nodea, nodeb, distance);
++			return -EINVAL;
++		}
++
+ 		numa_set_distance(nodea, nodeb, distance);
+-		pr_debug("distance[node%d -> node%d] = %d\n",
+-			 nodea, nodeb, distance);
+ 
+ 		/* Set default distance of node B->A same as A->B */
+ 		if (nodeb > nodea)
+diff --git a/drivers/rtc/hctosys.c b/drivers/rtc/hctosys.c
+index e79f2a181ad2..b9ec4a16db1f 100644
+--- a/drivers/rtc/hctosys.c
++++ b/drivers/rtc/hctosys.c
+@@ -50,8 +50,10 @@ static int __init rtc_hctosys(void)
+ 	tv64.tv_sec = rtc_tm_to_time64(&tm);
+ 
+ #if BITS_PER_LONG == 32
+-	if (tv64.tv_sec > INT_MAX)
++	if (tv64.tv_sec > INT_MAX) {
++		err = -ERANGE;
+ 		goto err_read;
++	}
+ #endif
+ 
+ 	err = do_settimeofday64(&tv64);
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 7a3744006419..c492af5bcd95 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -4410,9 +4410,9 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ 	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+ 	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+ 
+-	rspsz = sizeof(struct ct_sns_gpnft_rsp) +
+-		((vha->hw->max_fibre_devices - 1) *
+-		    sizeof(struct ct_sns_gpn_ft_data));
++	rspsz = sp->u.iocb_cmd.u.ctarg.rsp_size;
++	memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
++	memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
+ 
+ 	ct_sns = (struct ct_sns_pkt *)sp->u.iocb_cmd.u.ctarg.req;
+ 	/* CT_IU preamble  */
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 75d34def2361..f77e470152d0 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1742,25 +1742,15 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ 		cid.b.rsvd_1 = 0;
+ 
+ 		ql_dbg(ql_dbg_disc, vha, 0x20ec,
+-		    "%s %d %8phC LoopID 0x%x in use post gnl\n",
++		    "%s %d %8phC lid %#x in use with pid %06x post gnl\n",
+ 		    __func__, __LINE__, ea->fcport->port_name,
+-		    ea->fcport->loop_id);
++		    ea->fcport->loop_id, cid.b24);
+ 
+-		if (IS_SW_RESV_ADDR(cid)) {
+-			set_bit(ea->fcport->loop_id, vha->hw->loop_id_map);
+-			ea->fcport->loop_id = FC_NO_LOOP_ID;
+-		} else {
+-			qla2x00_clear_loop_id(ea->fcport);
+-		}
++		set_bit(ea->fcport->loop_id, vha->hw->loop_id_map);
++		ea->fcport->loop_id = FC_NO_LOOP_ID;
+ 		qla24xx_post_gnl_work(vha, ea->fcport);
+ 		break;
+ 	case MBS_PORT_ID_USED:
+-		ql_dbg(ql_dbg_disc, vha, 0x20ed,
+-		    "%s %d %8phC NPortId %02x%02x%02x inuse post gidpn\n",
+-		    __func__, __LINE__, ea->fcport->port_name,
+-		    ea->fcport->d_id.b.domain, ea->fcport->d_id.b.area,
+-		    ea->fcport->d_id.b.al_pa);
+-
+ 		lid = ea->iop[1] & 0xffff;
+ 		qlt_find_sess_invalidate_other(vha,
+ 		    wwn_to_u64(ea->fcport->port_name),
+@@ -4501,6 +4491,7 @@ qla2x00_alloc_fcport(scsi_qla_host_t *vha, gfp_t flags)
+ 	fcport->loop_id = FC_NO_LOOP_ID;
+ 	qla2x00_set_fcport_state(fcport, FCS_UNCONFIGURED);
+ 	fcport->supported_classes = FC_COS_UNSPECIFIED;
++	fcport->fp_speed = PORT_SPEED_UNKNOWN;
+ 
+ 	fcport->ct_desc.ct_sns = dma_alloc_coherent(&vha->hw->pdev->dev,
+ 		sizeof(struct ct_sns_pkt), &fcport->ct_desc.ct_sns_dma,
+@@ -6515,7 +6506,7 @@ qla2x00_abort_isp(scsi_qla_host_t *vha)
+ 					 * The next call disables the board
+ 					 * completely.
+ 					 */
+-					ha->isp_ops->reset_adapter(vha);
++					qla2x00_abort_isp_cleanup(vha);
+ 					vha->flags.online = 0;
+ 					clear_bit(ISP_ABORT_RETRY,
+ 					    &vha->dpc_flags);
+@@ -6972,7 +6963,6 @@ qla24xx_nvram_config(scsi_qla_host_t *vha)
+ 	}
+ 	icb->firmware_options_2 &= cpu_to_le32(
+ 	    ~(BIT_3 | BIT_2 | BIT_1 | BIT_0));
+-	vha->flags.process_response_queue = 0;
+ 	if (ha->zio_mode != QLA_ZIO_DISABLED) {
+ 		ha->zio_mode = QLA_ZIO_MODE_6;
+ 
+@@ -6983,7 +6973,6 @@ qla24xx_nvram_config(scsi_qla_host_t *vha)
+ 		icb->firmware_options_2 |= cpu_to_le32(
+ 		    (uint32_t)ha->zio_mode);
+ 		icb->interrupt_delay_timer = cpu_to_le16(ha->zio_timer);
+-		vha->flags.process_response_queue = 1;
+ 	}
+ 
+ 	if (rval) {
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index 667055cbe155..3e94f15ce1cf 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -1526,12 +1526,6 @@ qla24xx_start_scsi(srb_t *sp)
+ 
+ 	/* Set chip new ring index. */
+ 	WRT_REG_DWORD(req->req_q_in, req->ring_index);
+-	RD_REG_DWORD_RELAXED(&ha->iobase->isp24.hccr);
+-
+-	/* Manage unprocessed RIO/ZIO commands in response queue. */
+-	if (vha->flags.process_response_queue &&
+-		rsp->ring_ptr->signature != RESPONSE_PROCESSED)
+-		qla24xx_process_response_queue(vha, rsp);
+ 
+ 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ 	return QLA_SUCCESS;
+@@ -1725,12 +1719,6 @@ qla24xx_dif_start_scsi(srb_t *sp)
+ 
+ 	/* Set chip new ring index. */
+ 	WRT_REG_DWORD(req->req_q_in, req->ring_index);
+-	RD_REG_DWORD_RELAXED(&ha->iobase->isp24.hccr);
+-
+-	/* Manage unprocessed RIO/ZIO commands in response queue. */
+-	if (vha->flags.process_response_queue &&
+-	    rsp->ring_ptr->signature != RESPONSE_PROCESSED)
+-		qla24xx_process_response_queue(vha, rsp);
+ 
+ 	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+ 
+@@ -1880,11 +1868,6 @@ qla2xxx_start_scsi_mq(srb_t *sp)
+ 	/* Set chip new ring index. */
+ 	WRT_REG_DWORD(req->req_q_in, req->ring_index);
+ 
+-	/* Manage unprocessed RIO/ZIO commands in response queue. */
+-	if (vha->flags.process_response_queue &&
+-		rsp->ring_ptr->signature != RESPONSE_PROCESSED)
+-		qla24xx_process_response_queue(vha, rsp);
+-
+ 	spin_unlock_irqrestore(&qpair->qp_lock, flags);
+ 	return QLA_SUCCESS;
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index f0ec13d48bf3..f301621e39d7 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -3716,10 +3716,7 @@ qla2x00_set_idma_speed(scsi_qla_host_t *vha, uint16_t loop_id,
+ 	mcp->mb[0] = MBC_PORT_PARAMS;
+ 	mcp->mb[1] = loop_id;
+ 	mcp->mb[2] = BIT_0;
+-	if (IS_CNA_CAPABLE(vha->hw))
+-		mcp->mb[3] = port_speed & (BIT_5|BIT_4|BIT_3|BIT_2|BIT_1|BIT_0);
+-	else
+-		mcp->mb[3] = port_speed & (BIT_2|BIT_1|BIT_0);
++	mcp->mb[3] = port_speed & (BIT_5|BIT_4|BIT_3|BIT_2|BIT_1|BIT_0);
+ 	mcp->mb[9] = vha->vp_idx;
+ 	mcp->out_mb = MBX_9|MBX_3|MBX_2|MBX_1|MBX_0;
+ 	mcp->in_mb = MBX_3|MBX_1|MBX_0;
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index c5a963c2c86e..d0dc425f33f5 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -604,7 +604,7 @@ void qla_nvme_abort(struct qla_hw_data *ha, struct srb *sp, int res)
+ {
+ 	int rval;
+ 
+-	if (!test_bit(ABORT_ISP_ACTIVE, &sp->vha->dpc_flags)) {
++	if (ha->flags.fw_started) {
+ 		rval = ha->isp_ops->abort_command(sp);
+ 		if (!rval && !qla_nvme_wait_on_command(sp))
+ 			ql_log(ql_log_warn, NULL, 0x2112,
+@@ -657,9 +657,6 @@ void qla_nvme_delete(struct scsi_qla_host *vha)
+ 		    __func__, fcport);
+ 
+ 		nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0);
+-		init_completion(&fcport->nvme_del_done);
+-		nvme_fc_unregister_remoteport(fcport->nvme_remote_port);
+-		wait_for_completion(&fcport->nvme_del_done);
+ 	}
+ 
+ 	if (vha->nvme_local_port) {
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 6dc1b1bd8069..27fbd437f412 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1257,7 +1257,8 @@ void qlt_schedule_sess_for_deletion(struct fc_port *sess)
+ 	qla24xx_chk_fcp_state(sess);
+ 
+ 	ql_dbg(ql_dbg_tgt, sess->vha, 0xe001,
+-	    "Scheduling sess %p for deletion\n", sess);
++	    "Scheduling sess %p for deletion %8phC\n",
++	    sess, sess->port_name);
+ 
+ 	INIT_WORK(&sess->del_work, qla24xx_delete_sess_fn);
+ 	WARN_ON(!queue_work(sess->vha->hw->wq, &sess->del_work));
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 7732e9336d43..edfcb98aa4ef 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -718,10 +718,6 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
+ 	cmd->sg_cnt = 0;
+ 	cmd->offset = 0;
+ 	cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
+-	if (cmd->trc_flags & TRC_XMIT_STATUS) {
+-		pr_crit("Multiple calls for status = %p.\n", cmd);
+-		dump_stack();
+-	}
+ 	cmd->trc_flags |= TRC_XMIT_STATUS;
+ 
+ 	if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 41e9ac9fc138..4fd92b1802cd 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -696,6 +696,12 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
+ 		 */
+ 		scsi_mq_uninit_cmd(cmd);
+ 
++		/*
++		 * queue is still alive, so grab the ref for preventing it
++		 * from being cleaned up during running queue.
++		 */
++		percpu_ref_get(&q->q_usage_counter);
++
+ 		__blk_mq_end_request(req, error);
+ 
+ 		if (scsi_target(sdev)->single_lun ||
+@@ -703,6 +709,8 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
+ 			kblockd_schedule_work(&sdev->requeue_work);
+ 		else
+ 			blk_mq_run_hw_queues(q, true);
++
++		percpu_ref_put(&q->q_usage_counter);
+ 	} else {
+ 		unsigned long flags;
+ 
+diff --git a/drivers/soc/ti/knav_qmss.h b/drivers/soc/ti/knav_qmss.h
+index 3efc47e82973..bd040c29c4bf 100644
+--- a/drivers/soc/ti/knav_qmss.h
++++ b/drivers/soc/ti/knav_qmss.h
+@@ -329,8 +329,8 @@ struct knav_range_ops {
+ };
+ 
+ struct knav_irq_info {
+-	int	irq;
+-	u32	cpu_map;
++	int		irq;
++	struct cpumask	*cpu_mask;
+ };
+ 
+ struct knav_range_info {
+diff --git a/drivers/soc/ti/knav_qmss_acc.c b/drivers/soc/ti/knav_qmss_acc.c
+index 316e82e46f6c..2f7fb2dcc1d6 100644
+--- a/drivers/soc/ti/knav_qmss_acc.c
++++ b/drivers/soc/ti/knav_qmss_acc.c
+@@ -205,18 +205,18 @@ static int knav_range_setup_acc_irq(struct knav_range_info *range,
+ {
+ 	struct knav_device *kdev = range->kdev;
+ 	struct knav_acc_channel *acc;
+-	unsigned long cpu_map;
++	struct cpumask *cpu_mask;
+ 	int ret = 0, irq;
+ 	u32 old, new;
+ 
+ 	if (range->flags & RANGE_MULTI_QUEUE) {
+ 		acc = range->acc;
+ 		irq = range->irqs[0].irq;
+-		cpu_map = range->irqs[0].cpu_map;
++		cpu_mask = range->irqs[0].cpu_mask;
+ 	} else {
+ 		acc = range->acc + queue;
+ 		irq = range->irqs[queue].irq;
+-		cpu_map = range->irqs[queue].cpu_map;
++		cpu_mask = range->irqs[queue].cpu_mask;
+ 	}
+ 
+ 	old = acc->open_mask;
+@@ -239,8 +239,8 @@ static int knav_range_setup_acc_irq(struct knav_range_info *range,
+ 			acc->name, acc->name);
+ 		ret = request_irq(irq, knav_acc_int_handler, 0, acc->name,
+ 				  range);
+-		if (!ret && cpu_map) {
+-			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
++		if (!ret && cpu_mask) {
++			ret = irq_set_affinity_hint(irq, cpu_mask);
+ 			if (ret) {
+ 				dev_warn(range->kdev->dev,
+ 					 "Failed to set IRQ affinity\n");
+diff --git a/drivers/soc/ti/knav_qmss_queue.c b/drivers/soc/ti/knav_qmss_queue.c
+index 6755f2af5619..ef36acc0e708 100644
+--- a/drivers/soc/ti/knav_qmss_queue.c
++++ b/drivers/soc/ti/knav_qmss_queue.c
+@@ -118,19 +118,17 @@ static int knav_queue_setup_irq(struct knav_range_info *range,
+ 			  struct knav_queue_inst *inst)
+ {
+ 	unsigned queue = inst->id - range->queue_base;
+-	unsigned long cpu_map;
+ 	int ret = 0, irq;
+ 
+ 	if (range->flags & RANGE_HAS_IRQ) {
+ 		irq = range->irqs[queue].irq;
+-		cpu_map = range->irqs[queue].cpu_map;
+ 		ret = request_irq(irq, knav_queue_int_handler, 0,
+ 					inst->irq_name, inst);
+ 		if (ret)
+ 			return ret;
+ 		disable_irq(irq);
+-		if (cpu_map) {
+-			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
++		if (range->irqs[queue].cpu_mask) {
++			ret = irq_set_affinity_hint(irq, range->irqs[queue].cpu_mask);
+ 			if (ret) {
+ 				dev_warn(range->kdev->dev,
+ 					 "Failed to set IRQ affinity\n");
+@@ -1262,9 +1260,19 @@ static int knav_setup_queue_range(struct knav_device *kdev,
+ 
+ 		range->num_irqs++;
+ 
+-		if (IS_ENABLED(CONFIG_SMP) && oirq.args_count == 3)
+-			range->irqs[i].cpu_map =
+-				(oirq.args[2] & 0x0000ff00) >> 8;
++		if (IS_ENABLED(CONFIG_SMP) && oirq.args_count == 3) {
++			unsigned long mask;
++			int bit;
++
++			range->irqs[i].cpu_mask = devm_kzalloc(dev,
++							       cpumask_size(), GFP_KERNEL);
++			if (!range->irqs[i].cpu_mask)
++				return -ENOMEM;
++
++			mask = (oirq.args[2] & 0x0000ff00) >> 8;
++			for_each_set_bit(bit, &mask, BITS_PER_LONG)
++				cpumask_set_cpu(bit, range->irqs[i].cpu_mask);
++		}
+ 	}
+ 
+ 	range->num_irqs = min(range->num_irqs, range->num_queues);
+diff --git a/drivers/staging/iio/adc/ad7606.c b/drivers/staging/iio/adc/ad7606.c
+index 25b9fcd5e3a4..ce3351832fb1 100644
+--- a/drivers/staging/iio/adc/ad7606.c
++++ b/drivers/staging/iio/adc/ad7606.c
+@@ -26,9 +26,12 @@
+ 
+ #include "ad7606.h"
+ 
+-/* Scales are computed as 2.5/2**16 and 5/2**16 respectively */
++/*
++ * Scales are computed as 5000/32768 and 10000/32768 respectively,
++ * so that when applied to the raw values they provide mV values
++ */
+ static const unsigned int scale_avail[2][2] = {
+-	{0, 38147}, {0, 76294}
++	{0, 152588}, {0, 305176}
+ };
+ 
+ static int ad7606_reset(struct ad7606_state *st)
+diff --git a/drivers/staging/most/video/video.c b/drivers/staging/most/video/video.c
+index cf342eb58e10..ad7e28ab9a4f 100644
+--- a/drivers/staging/most/video/video.c
++++ b/drivers/staging/most/video/video.c
+@@ -530,7 +530,7 @@ static int comp_disconnect_channel(struct most_interface *iface,
+ 	return 0;
+ }
+ 
+-static struct core_component comp_info = {
++static struct core_component comp = {
+ 	.name = "video",
+ 	.probe_channel = comp_probe_channel,
+ 	.disconnect_channel = comp_disconnect_channel,
+@@ -565,7 +565,7 @@ static void __exit comp_exit(void)
+ 	}
+ 	spin_unlock_irq(&list_lock);
+ 
+-	most_deregister_component(&comp_info);
++	most_deregister_component(&comp);
+ 	BUG_ON(!list_empty(&video_devices));
+ }
+ 
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 6ab982309e6a..441778100887 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -1102,8 +1102,9 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev)
+ 	mutex_unlock(&thermal_list_lock);
+ 
+ 	ida_simple_remove(&thermal_cdev_ida, cdev->id);
+-	device_unregister(&cdev->device);
++	device_del(&cdev->device);
+ 	thermal_cooling_device_destroy_sysfs(cdev);
++	put_device(&cdev->device);
+ }
+ EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister);
+ 
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 243c96025053..47b41159a8bc 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -657,7 +657,7 @@ static void sc16is7xx_handle_tx(struct uart_port *port)
+ 		uart_write_wakeup(port);
+ }
+ 
+-static void sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
++static bool sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
+ {
+ 	struct uart_port *port = &s->p[portno].port;
+ 
+@@ -666,7 +666,7 @@ static void sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
+ 
+ 		iir = sc16is7xx_port_read(port, SC16IS7XX_IIR_REG);
+ 		if (iir & SC16IS7XX_IIR_NO_INT_BIT)
+-			break;
++			return false;
+ 
+ 		iir &= SC16IS7XX_IIR_ID_MASK;
+ 
+@@ -688,16 +688,23 @@ static void sc16is7xx_port_irq(struct sc16is7xx_port *s, int portno)
+ 					    port->line, iir);
+ 			break;
+ 		}
+-	} while (1);
++	} while (0);
++	return true;
+ }
+ 
+ static void sc16is7xx_ist(struct kthread_work *ws)
+ {
+ 	struct sc16is7xx_port *s = to_sc16is7xx_port(ws, irq_work);
+-	int i;
+ 
+-	for (i = 0; i < s->devtype->nr_uart; ++i)
+-		sc16is7xx_port_irq(s, i);
++	while (1) {
++		bool keep_polling = false;
++		int i;
++
++		for (i = 0; i < s->devtype->nr_uart; ++i)
++			keep_polling |= sc16is7xx_port_irq(s, i);
++		if (!keep_polling)
++			break;
++	}
+ }
+ 
+ static irqreturn_t sc16is7xx_irq(int irq, void *dev_id)
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 3c55600a8236..671dc6a25e6c 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -3045,6 +3045,7 @@ static struct uart_driver sci_uart_driver = {
+ static int sci_remove(struct platform_device *dev)
+ {
+ 	struct sci_port *port = platform_get_drvdata(dev);
++	unsigned int type = port->port.type;	/* uart_remove_... clears it */
+ 
+ 	sci_ports_in_use &= ~BIT(port->port.line);
+ 	uart_remove_one_port(&sci_uart_driver, &port->port);
+@@ -3055,8 +3056,7 @@ static int sci_remove(struct platform_device *dev)
+ 		sysfs_remove_file(&dev->dev.kobj,
+ 				  &dev_attr_rx_fifo_trigger.attr);
+ 	}
+-	if (port->port.type == PORT_SCIFA || port->port.type == PORT_SCIFB ||
+-	    port->port.type == PORT_HSCIF) {
++	if (type == PORT_SCIFA || type == PORT_SCIFB || type == PORT_HSCIF) {
+ 		sysfs_remove_file(&dev->dev.kobj,
+ 				  &dev_attr_rx_fifo_timeout.attr);
+ 	}
+diff --git a/drivers/tty/tty_baudrate.c b/drivers/tty/tty_baudrate.c
+index 3e827a3d48d5..b7dc2196f9d7 100644
+--- a/drivers/tty/tty_baudrate.c
++++ b/drivers/tty/tty_baudrate.c
+@@ -77,7 +77,7 @@ speed_t tty_termios_baud_rate(struct ktermios *termios)
+ 		else
+ 			cbaud += 15;
+ 	}
+-	return baud_table[cbaud];
++	return cbaud >= n_baud_table ? 0 : baud_table[cbaud];
+ }
+ EXPORT_SYMBOL(tty_termios_baud_rate);
+ 
+@@ -113,7 +113,7 @@ speed_t tty_termios_input_baud_rate(struct ktermios *termios)
+ 		else
+ 			cbaud += 15;
+ 	}
+-	return baud_table[cbaud];
++	return cbaud >= n_baud_table ? 0 : baud_table[cbaud];
+ #else
+ 	return tty_termios_baud_rate(termios);
+ #endif
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 31d06f59c4e4..da45120d9453 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -408,7 +408,7 @@ struct tty_driver *tty_find_polling_driver(char *name, int *line)
+ 	mutex_lock(&tty_mutex);
+ 	/* Search through the tty devices to look for a match */
+ 	list_for_each_entry(p, &tty_drivers, tty_drivers) {
+-		if (strncmp(name, p->name, len) != 0)
++		if (!len || strncmp(name, p->name, len) != 0)
+ 			continue;
+ 		stp = str;
+ 		if (*stp == ',')
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 17fcd3b2e686..fe7914dffd8f 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -964,7 +964,8 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ 				prot_bytes = vhost32_to_cpu(vq, v_req_pi.pi_bytesin);
+ 			}
+ 			/*
+-			 * Set prot_iter to data_iter, and advance past any
++			 * Set prot_iter to data_iter and truncate it to
++			 * prot_bytes, and advance data_iter past any
+ 			 * preceeding prot_bytes that may be present.
+ 			 *
+ 			 * Also fix up the exp_data_len to reflect only the
+@@ -973,6 +974,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
+ 			if (prot_bytes) {
+ 				exp_data_len -= prot_bytes;
+ 				prot_iter = data_iter;
++				iov_iter_truncate(&prot_iter, prot_bytes);
+ 				iov_iter_advance(&data_iter, prot_bytes);
+ 			}
+ 			tag = vhost64_to_cpu(vq, v_req_pi.tag);
+diff --git a/drivers/video/fbdev/aty/mach64_accel.c b/drivers/video/fbdev/aty/mach64_accel.c
+index 2541a0e0de76..3ad46255f990 100644
+--- a/drivers/video/fbdev/aty/mach64_accel.c
++++ b/drivers/video/fbdev/aty/mach64_accel.c
+@@ -127,7 +127,7 @@ void aty_init_engine(struct atyfb_par *par, struct fb_info *info)
+ 
+ 	/* set host attributes */
+ 	wait_for_fifo(13, par);
+-	aty_st_le32(HOST_CNTL, 0, par);
++	aty_st_le32(HOST_CNTL, HOST_BYTE_ALIGN, par);
+ 
+ 	/* set pattern attributes */
+ 	aty_st_le32(PAT_REG0, 0, par);
+@@ -233,7 +233,8 @@ void atyfb_copyarea(struct fb_info *info, const struct fb_copyarea *area)
+ 		rotation = rotation24bpp(dx, direction);
+ 	}
+ 
+-	wait_for_fifo(4, par);
++	wait_for_fifo(5, par);
++	aty_st_le32(DP_PIX_WIDTH, par->crtc.dp_pix_width, par);
+ 	aty_st_le32(DP_SRC, FRGD_SRC_BLIT, par);
+ 	aty_st_le32(SRC_Y_X, (sx << 16) | sy, par);
+ 	aty_st_le32(SRC_HEIGHT1_WIDTH1, (width << 16) | area->height, par);
+@@ -269,7 +270,8 @@ void atyfb_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
+ 		rotation = rotation24bpp(dx, DST_X_LEFT_TO_RIGHT);
+ 	}
+ 
+-	wait_for_fifo(3, par);
++	wait_for_fifo(4, par);
++	aty_st_le32(DP_PIX_WIDTH, par->crtc.dp_pix_width, par);
+ 	aty_st_le32(DP_FRGD_CLR, color, par);
+ 	aty_st_le32(DP_SRC,
+ 		    BKGD_SRC_BKGD_CLR | FRGD_SRC_FRGD_CLR | MONO_SRC_ONE,
+@@ -284,7 +286,7 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ {
+ 	struct atyfb_par *par = (struct atyfb_par *) info->par;
+ 	u32 src_bytes, dx = image->dx, dy = image->dy, width = image->width;
+-	u32 pix_width_save, pix_width, host_cntl, rotation = 0, src, mix;
++	u32 pix_width, rotation = 0, src, mix;
+ 
+ 	if (par->asleep)
+ 		return;
+@@ -296,8 +298,7 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ 		return;
+ 	}
+ 
+-	pix_width = pix_width_save = aty_ld_le32(DP_PIX_WIDTH, par);
+-	host_cntl = aty_ld_le32(HOST_CNTL, par) | HOST_BYTE_ALIGN;
++	pix_width = par->crtc.dp_pix_width;
+ 
+ 	switch (image->depth) {
+ 	case 1:
+@@ -345,7 +346,7 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ 		 * since Rage 3D IIc we have DP_HOST_TRIPLE_EN bit
+ 		 * this hwaccelerated triple has an issue with not aligned data
+ 		 */
+-		if (M64_HAS(HW_TRIPLE) && image->width % 8 == 0)
++		if (image->depth == 1 && M64_HAS(HW_TRIPLE) && image->width % 8 == 0)
+ 			pix_width |= DP_HOST_TRIPLE_EN;
+ 	}
+ 
+@@ -370,19 +371,18 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ 		mix = FRGD_MIX_D_XOR_S | BKGD_MIX_D;
+ 	}
+ 
+-	wait_for_fifo(6, par);
+-	aty_st_le32(DP_WRITE_MASK, 0xFFFFFFFF, par);
++	wait_for_fifo(5, par);
+ 	aty_st_le32(DP_PIX_WIDTH, pix_width, par);
+ 	aty_st_le32(DP_MIX, mix, par);
+ 	aty_st_le32(DP_SRC, src, par);
+-	aty_st_le32(HOST_CNTL, host_cntl, par);
++	aty_st_le32(HOST_CNTL, HOST_BYTE_ALIGN, par);
+ 	aty_st_le32(DST_CNTL, DST_Y_TOP_TO_BOTTOM | DST_X_LEFT_TO_RIGHT | rotation, par);
+ 
+ 	draw_rect(dx, dy, width, image->height, par);
+ 	src_bytes = (((image->width * image->depth) + 7) / 8) * image->height;
+ 
+ 	/* manual triple each pixel */
+-	if (info->var.bits_per_pixel == 24 && !(pix_width & DP_HOST_TRIPLE_EN)) {
++	if (image->depth == 1 && info->var.bits_per_pixel == 24 && !(pix_width & DP_HOST_TRIPLE_EN)) {
+ 		int inbit, outbit, mult24, byte_id_in_dword, width;
+ 		u8 *pbitmapin = (u8*)image->data, *pbitmapout;
+ 		u32 hostdword;
+@@ -415,7 +415,7 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ 				}
+ 			}
+ 			wait_for_fifo(1, par);
+-			aty_st_le32(HOST_DATA0, hostdword, par);
++			aty_st_le32(HOST_DATA0, le32_to_cpu(hostdword), par);
+ 		}
+ 	} else {
+ 		u32 *pbitmap, dwords = (src_bytes + 3) / 4;
+@@ -424,8 +424,4 @@ void atyfb_imageblit(struct fb_info *info, const struct fb_image *image)
+ 			aty_st_le32(HOST_DATA0, get_unaligned_le32(pbitmap), par);
+ 		}
+ 	}
+-
+-	/* restore pix_width */
+-	wait_for_fifo(1, par);
+-	aty_st_le32(DP_PIX_WIDTH, pix_width_save, par);
+ }
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index 03c9e325bfbc..3a2f37ad1f89 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -204,6 +204,14 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
+ 			break;
+ 		if (schedule_timeout_interruptible(P9_LOCK_TIMEOUT) != 0)
+ 			break;
++		/*
++		 * p9_client_lock_dotl overwrites flock.client_id with the
++		 * server message, free and reuse the client name
++		 */
++		if (flock.client_id != fid->clnt->name) {
++			kfree(flock.client_id);
++			flock.client_id = fid->clnt->name;
++		}
+ 	}
+ 
+ 	/* map 9p status to VFS status */
+@@ -235,6 +243,8 @@ out_unlock:
+ 		locks_lock_file_wait(filp, fl);
+ 		fl->fl_type = fl_type;
+ 	}
++	if (flock.client_id != fid->clnt->name)
++		kfree(flock.client_id);
+ out:
+ 	return res;
+ }
+@@ -269,7 +279,7 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
+ 
+ 	res = p9_client_getlock_dotl(fid, &glock);
+ 	if (res < 0)
+-		return res;
++		goto out;
+ 	/* map 9p lock type to os lock type */
+ 	switch (glock.type) {
+ 	case P9_LOCK_TYPE_RDLCK:
+@@ -290,7 +300,9 @@ static int v9fs_file_getlock(struct file *filp, struct file_lock *fl)
+ 			fl->fl_end = glock.start + glock.length - 1;
+ 		fl->fl_pid = -glock.proc_id;
+ 	}
+-	kfree(glock.client_id);
++out:
++	if (glock.client_id != fid->clnt->name)
++		kfree(glock.client_id);
+ 	return res;
+ }
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 891b1aab3480..2012eaf80da5 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4404,13 +4404,23 @@ static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
+ 	unpin = pinned_extents;
+ again:
+ 	while (1) {
++		/*
++		 * The btrfs_finish_extent_commit() may get the same range as
++		 * ours between find_first_extent_bit and clear_extent_dirty.
++		 * Hence, hold the unused_bg_unpin_mutex to avoid double unpin
++		 * the same extent range.
++		 */
++		mutex_lock(&fs_info->unused_bg_unpin_mutex);
+ 		ret = find_first_extent_bit(unpin, 0, &start, &end,
+ 					    EXTENT_DIRTY, NULL);
+-		if (ret)
++		if (ret) {
++			mutex_unlock(&fs_info->unused_bg_unpin_mutex);
+ 			break;
++		}
+ 
+ 		clear_extent_dirty(unpin, start, end);
+ 		btrfs_error_unpin_extent_range(fs_info, start, end);
++		mutex_unlock(&fs_info->unused_bg_unpin_mutex);
+ 		cond_resched();
+ 	}
+ 
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index dc0f9d089b19..3e6c1baddda3 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1537,12 +1537,11 @@ out_check:
+ 	}
+ 	btrfs_release_path(path);
+ 
+-	if (cur_offset <= end && cow_start == (u64)-1) {
++	if (cur_offset <= end && cow_start == (u64)-1)
+ 		cow_start = cur_offset;
+-		cur_offset = end;
+-	}
+ 
+ 	if (cow_start != (u64)-1) {
++		cur_offset = end;
+ 		ret = cow_file_range(inode, locked_page, cow_start, end, end,
+ 				     page_started, nr_written, 1, NULL);
+ 		if (ret)
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index c972920701a3..ec021bd947ba 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3499,6 +3499,8 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,
+ 			const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;
+ 
+ 			len = round_down(i_size_read(src), sz) - loff;
++			if (len == 0)
++				return 0;
+ 			olen = len;
+ 		}
+ 	}
+@@ -4291,9 +4293,17 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
+ 		goto out_unlock;
+ 	if (len == 0)
+ 		olen = len = src->i_size - off;
+-	/* if we extend to eof, continue to block boundary */
+-	if (off + len == src->i_size)
++	/*
++	 * If we extend to eof, continue to block boundary if and only if the
++	 * destination end offset matches the destination file's size, otherwise
++	 * we would be corrupting data by placing the eof block into the middle
++	 * of a file.
++	 */
++	if (off + len == src->i_size) {
++		if (!IS_ALIGNED(len, bs) && destoff + len < inode->i_size)
++			goto out_unlock;
+ 		len = ALIGN(src->i_size, bs) - off;
++	}
+ 
+ 	if (len == 0) {
+ 		ret = 0;
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index a866be999216..4b1eda26480b 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1135,8 +1135,12 @@ static struct dentry *splice_dentry(struct dentry *dn, struct inode *in)
+ 	if (IS_ERR(realdn)) {
+ 		pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n",
+ 		       PTR_ERR(realdn), dn, in, ceph_vinop(in));
+-		dput(dn);
+-		dn = realdn; /* note realdn contains the error */
++		dn = realdn;
++		/*
++		 * Caller should release 'dn' in the case of error.
++		 * If 'req->r_dentry' is passed to this function,
++		 * caller should leave 'req->r_dentry' untouched.
++		 */
+ 		goto out;
+ 	} else if (realdn) {
+ 		dout("dn %p (%d) spliced with %p (%d) "
+diff --git a/fs/configfs/symlink.c b/fs/configfs/symlink.c
+index 78ffc2699993..a5c54af861f7 100644
+--- a/fs/configfs/symlink.c
++++ b/fs/configfs/symlink.c
+@@ -64,7 +64,7 @@ static void fill_item_path(struct config_item * item, char * buffer, int length)
+ 
+ 		/* back up enough to print this bus id with '/' */
+ 		length -= cur;
+-		strncpy(buffer + length,config_item_name(p),cur);
++		memcpy(buffer + length, config_item_name(p), cur);
+ 		*(buffer + --length) = '/';
+ 	}
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 2276137d0083..fc05c7f7bbcf 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5763,9 +5763,10 @@ int ext4_mark_iloc_dirty(handle_t *handle,
+ {
+ 	int err = 0;
+ 
+-	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb))))
++	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {
++		put_bh(iloc->bh);
+ 		return -EIO;
+-
++	}
+ 	if (IS_I_VERSION(inode))
+ 		inode_inc_iversion(inode);
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 377d516c475f..ffa25753e929 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -126,6 +126,7 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode,
+ 	if (!is_dx_block && type == INDEX) {
+ 		ext4_error_inode(inode, func, line, block,
+ 		       "directory leaf block found instead of index block");
++		brelse(bh);
+ 		return ERR_PTR(-EFSCORRUPTED);
+ 	}
+ 	if (!ext4_has_metadata_csum(inode->i_sb) ||
+@@ -2811,7 +2812,9 @@ int ext4_orphan_add(handle_t *handle, struct inode *inode)
+ 			list_del_init(&EXT4_I(inode)->i_orphan);
+ 			mutex_unlock(&sbi->s_orphan_lock);
+ 		}
+-	}
++	} else
++		brelse(iloc.bh);
++
+ 	jbd_debug(4, "superblock will point to %lu\n", inode->i_ino);
+ 	jbd_debug(4, "orphan inode %lu will point to %d\n",
+ 			inode->i_ino, NEXT_ORPHAN(inode));
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index ebbc663d0798..a5efee34415f 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -459,16 +459,18 @@ static int set_flexbg_block_bitmap(struct super_block *sb, handle_t *handle,
+ 
+ 		BUFFER_TRACE(bh, "get_write_access");
+ 		err = ext4_journal_get_write_access(handle, bh);
+-		if (err)
++		if (err) {
++			brelse(bh);
+ 			return err;
++		}
+ 		ext4_debug("mark block bitmap %#04llx (+%llu/%u)\n",
+ 			   first_cluster, first_cluster - start, count2);
+ 		ext4_set_bits(bh->b_data, first_cluster - start, count2);
+ 
+ 		err = ext4_handle_dirty_metadata(handle, NULL, bh);
++		brelse(bh);
+ 		if (unlikely(err))
+ 			return err;
+-		brelse(bh);
+ 	}
+ 
+ 	return 0;
+@@ -605,7 +607,6 @@ handle_bb:
+ 		bh = bclean(handle, sb, block);
+ 		if (IS_ERR(bh)) {
+ 			err = PTR_ERR(bh);
+-			bh = NULL;
+ 			goto out;
+ 		}
+ 		overhead = ext4_group_overhead_blocks(sb, group);
+@@ -618,9 +619,9 @@ handle_bb:
+ 		ext4_mark_bitmap_end(EXT4_B2C(sbi, group_data[i].blocks_count),
+ 				     sb->s_blocksize * 8, bh->b_data);
+ 		err = ext4_handle_dirty_metadata(handle, NULL, bh);
++		brelse(bh);
+ 		if (err)
+ 			goto out;
+-		brelse(bh);
+ 
+ handle_ib:
+ 		if (bg_flags[i] & EXT4_BG_INODE_UNINIT)
+@@ -635,18 +636,16 @@ handle_ib:
+ 		bh = bclean(handle, sb, block);
+ 		if (IS_ERR(bh)) {
+ 			err = PTR_ERR(bh);
+-			bh = NULL;
+ 			goto out;
+ 		}
+ 
+ 		ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb),
+ 				     sb->s_blocksize * 8, bh->b_data);
+ 		err = ext4_handle_dirty_metadata(handle, NULL, bh);
++		brelse(bh);
+ 		if (err)
+ 			goto out;
+-		brelse(bh);
+ 	}
+-	bh = NULL;
+ 
+ 	/* Mark group tables in block bitmap */
+ 	for (j = 0; j < GROUP_TABLE_COUNT; j++) {
+@@ -685,7 +684,6 @@ handle_ib:
+ 	}
+ 
+ out:
+-	brelse(bh);
+ 	err2 = ext4_journal_stop(handle);
+ 	if (err2 && !err)
+ 		err = err2;
+@@ -873,6 +871,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode,
+ 	err = ext4_handle_dirty_metadata(handle, NULL, gdb_bh);
+ 	if (unlikely(err)) {
+ 		ext4_std_error(sb, err);
++		iloc.bh = NULL;
+ 		goto exit_inode;
+ 	}
+ 	brelse(dind);
+@@ -924,6 +923,7 @@ static int add_new_gdb_meta_bg(struct super_block *sb,
+ 				     sizeof(struct buffer_head *),
+ 				     GFP_NOFS);
+ 	if (!n_group_desc) {
++		brelse(gdb_bh);
+ 		err = -ENOMEM;
+ 		ext4_warning(sb, "not enough memory for %lu groups",
+ 			     gdb_num + 1);
+@@ -939,8 +939,6 @@ static int add_new_gdb_meta_bg(struct super_block *sb,
+ 	kvfree(o_group_desc);
+ 	BUFFER_TRACE(gdb_bh, "get_write_access");
+ 	err = ext4_journal_get_write_access(handle, gdb_bh);
+-	if (unlikely(err))
+-		brelse(gdb_bh);
+ 	return err;
+ }
+ 
+@@ -1124,8 +1122,10 @@ static void update_backups(struct super_block *sb, sector_t blk_off, char *data,
+ 			   backup_block, backup_block -
+ 			   ext4_group_first_block_no(sb, group));
+ 		BUFFER_TRACE(bh, "get_write_access");
+-		if ((err = ext4_journal_get_write_access(handle, bh)))
++		if ((err = ext4_journal_get_write_access(handle, bh))) {
++			brelse(bh);
+ 			break;
++		}
+ 		lock_buffer(bh);
+ 		memcpy(bh->b_data, data, size);
+ 		if (rest)
+@@ -2023,7 +2023,7 @@ retry:
+ 
+ 	err = ext4_alloc_flex_bg_array(sb, n_group + 1);
+ 	if (err)
+-		return err;
++		goto out;
+ 
+ 	err = ext4_mb_alloc_groupinfo(sb, n_group + 1);
+ 	if (err)
+@@ -2059,6 +2059,10 @@ retry:
+ 		n_blocks_count_retry = 0;
+ 		free_flex_gd(flex_gd);
+ 		flex_gd = NULL;
++		if (resize_inode) {
++			iput(resize_inode);
++			resize_inode = NULL;
++		}
+ 		goto retry;
+ 	}
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 8d91d50ccf42..8b8c351fa9c5 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4053,6 +4053,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->s_groups_count = blocks_count;
+ 	sbi->s_blockfile_groups = min_t(ext4_group_t, sbi->s_groups_count,
+ 			(EXT4_MAX_BLOCK_FILE_PHYS / EXT4_BLOCKS_PER_GROUP(sb)));
++	if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) !=
++	    le32_to_cpu(es->s_inodes_count)) {
++		ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu",
++			 le32_to_cpu(es->s_inodes_count),
++			 ((u64)sbi->s_groups_count * sbi->s_inodes_per_group));
++		ret = -EINVAL;
++		goto failed_mount;
++	}
+ 	db_count = (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) /
+ 		   EXT4_DESC_PER_BLOCK(sb);
+ 	if (ext4_has_feature_meta_bg(sb)) {
+@@ -4072,14 +4080,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 		ret = -ENOMEM;
+ 		goto failed_mount;
+ 	}
+-	if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) !=
+-	    le32_to_cpu(es->s_inodes_count)) {
+-		ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu",
+-			 le32_to_cpu(es->s_inodes_count),
+-			 ((u64)sbi->s_groups_count * sbi->s_inodes_per_group));
+-		ret = -EINVAL;
+-		goto failed_mount;
+-	}
+ 
+ 	bgl_lock_init(sbi->s_blockgroup_lock);
+ 
+@@ -4488,6 +4488,7 @@ failed_mount6:
+ 	percpu_counter_destroy(&sbi->s_freeinodes_counter);
+ 	percpu_counter_destroy(&sbi->s_dirs_counter);
+ 	percpu_counter_destroy(&sbi->s_dirtyclusters_counter);
++	percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
+ failed_mount5:
+ 	ext4_ext_release(sb);
+ 	ext4_release_system_zone(sb);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index f36fc5d5b257..4380c8630539 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1388,6 +1388,12 @@ retry:
+ 		bh = ext4_getblk(handle, ea_inode, block, 0);
+ 		if (IS_ERR(bh))
+ 			return PTR_ERR(bh);
++		if (!bh) {
++			WARN_ON_ONCE(1);
++			EXT4_ERROR_INODE(ea_inode,
++					 "ext4_getblk() return bh = NULL");
++			return -EFSCORRUPTED;
++		}
+ 		ret = ext4_journal_get_write_access(handle, bh);
+ 		if (ret)
+ 			goto out;
+@@ -2276,8 +2282,10 @@ static struct buffer_head *ext4_xattr_get_block(struct inode *inode)
+ 	if (!bh)
+ 		return ERR_PTR(-EIO);
+ 	error = ext4_xattr_check_block(inode, bh);
+-	if (error)
++	if (error) {
++		brelse(bh);
+ 		return ERR_PTR(error);
++	}
+ 	return bh;
+ }
+ 
+@@ -2397,6 +2405,8 @@ retry_inode:
+ 			error = ext4_xattr_block_set(handle, inode, &i, &bs);
+ 		} else if (error == -ENOSPC) {
+ 			if (EXT4_I(inode)->i_file_acl && !bs.s.base) {
++				brelse(bs.bh);
++				bs.bh = NULL;
+ 				error = ext4_xattr_block_find(inode, &i, &bs);
+ 				if (error)
+ 					goto cleanup;
+@@ -2617,6 +2627,8 @@ out:
+ 	kfree(buffer);
+ 	if (is)
+ 		brelse(is->iloc.bh);
++	if (bs)
++		brelse(bs->bh);
+ 	kfree(is);
+ 	kfree(bs);
+ 
+@@ -2696,7 +2708,6 @@ int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ 			       struct ext4_inode *raw_inode, handle_t *handle)
+ {
+ 	struct ext4_xattr_ibody_header *header;
+-	struct buffer_head *bh;
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	static unsigned int mnt_count;
+ 	size_t min_offs;
+@@ -2737,13 +2748,17 @@ retry:
+ 	 * EA block can hold new_extra_isize bytes.
+ 	 */
+ 	if (EXT4_I(inode)->i_file_acl) {
++		struct buffer_head *bh;
++
+ 		bh = sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl);
+ 		error = -EIO;
+ 		if (!bh)
+ 			goto cleanup;
+ 		error = ext4_xattr_check_block(inode, bh);
+-		if (error)
++		if (error) {
++			brelse(bh);
+ 			goto cleanup;
++		}
+ 		base = BHDR(bh);
+ 		end = bh->b_data + bh->b_size;
+ 		min_offs = end - base;
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 4a9ace7280b9..97f15787cfeb 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -391,12 +391,19 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	if (test_bit(FR_BACKGROUND, &req->flags)) {
+ 		spin_lock(&fc->lock);
+ 		clear_bit(FR_BACKGROUND, &req->flags);
+-		if (fc->num_background == fc->max_background)
++		if (fc->num_background == fc->max_background) {
+ 			fc->blocked = 0;
+-
+-		/* Wake up next waiter, if any */
+-		if (!fc->blocked && waitqueue_active(&fc->blocked_waitq))
+ 			wake_up(&fc->blocked_waitq);
++		} else if (!fc->blocked) {
++			/*
++			 * Wake up next waiter, if any.  It's okay to use
++			 * waitqueue_active(), as we've already synced up
++			 * fc->blocked with waiters with the wake_up() call
++			 * above.
++			 */
++			if (waitqueue_active(&fc->blocked_waitq))
++				wake_up(&fc->blocked_waitq);
++		}
+ 
+ 		if (fc->num_background == fc->congestion_threshold && fc->sb) {
+ 			clear_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
+@@ -1311,12 +1318,14 @@ static ssize_t fuse_dev_do_read(struct fuse_dev *fud, struct file *file,
+ 		goto out_end;
+ 	}
+ 	list_move_tail(&req->list, &fpq->processing);
+-	spin_unlock(&fpq->lock);
++	__fuse_get_request(req);
+ 	set_bit(FR_SENT, &req->flags);
++	spin_unlock(&fpq->lock);
+ 	/* matches barrier in request_wait_answer() */
+ 	smp_mb__after_atomic();
+ 	if (test_bit(FR_INTERRUPTED, &req->flags))
+ 		queue_interrupt(fiq, req);
++	fuse_put_request(fc, req);
+ 
+ 	return reqsize;
+ 
+@@ -1715,8 +1724,10 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode,
+ 	req->in.args[1].size = total_len;
+ 
+ 	err = fuse_request_send_notify_reply(fc, req, outarg->notify_unique);
+-	if (err)
++	if (err) {
+ 		fuse_retrieve_end(fc, req);
++		fuse_put_request(fc, req);
++	}
+ 
+ 	return err;
+ }
+@@ -1875,16 +1886,20 @@ static ssize_t fuse_dev_do_write(struct fuse_dev *fud,
+ 
+ 	/* Is it an interrupt reply? */
+ 	if (req->intr_unique == oh.unique) {
++		__fuse_get_request(req);
+ 		spin_unlock(&fpq->lock);
+ 
+ 		err = -EINVAL;
+-		if (nbytes != sizeof(struct fuse_out_header))
++		if (nbytes != sizeof(struct fuse_out_header)) {
++			fuse_put_request(fc, req);
+ 			goto err_finish;
++		}
+ 
+ 		if (oh.error == -ENOSYS)
+ 			fc->no_interrupt = 1;
+ 		else if (oh.error == -EAGAIN)
+ 			queue_interrupt(&fc->iq, req);
++		fuse_put_request(fc, req);
+ 
+ 		fuse_copy_finish(cs);
+ 		return nbytes;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index aa23749a943b..2162771ce7d5 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -2912,10 +2912,12 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	}
+ 
+ 	if (io->async) {
++		bool blocking = io->blocking;
++
+ 		fuse_aio_complete(io, ret < 0 ? ret : 0, -1);
+ 
+ 		/* we have a non-extending, async request, so return */
+-		if (!io->blocking)
++		if (!blocking)
+ 			return -EIOCBQUEUED;
+ 
+ 		wait_for_completion(&wait);
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index fd5bea55fd60..9c418249734d 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1652,10 +1652,16 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length)
+ 			if (ret < 0)
+ 				goto out;
+ 
+-			/* issue read-ahead on metadata */
+-			if (mp.mp_aheight > 1) {
+-				for (; ret > 1; ret--) {
+-					metapointer_range(&mp, mp.mp_aheight - ret,
++			/* On the first pass, issue read-ahead on metadata. */
++			if (mp.mp_aheight > 1 && strip_h == ip->i_height - 1) {
++				unsigned int height = mp.mp_aheight - 1;
++
++				/* No read-ahead for data blocks. */
++				if (mp.mp_aheight - 1 == strip_h)
++					height--;
++
++				for (; height >= mp.mp_aheight - ret; height--) {
++					metapointer_range(&mp, height,
+ 							  start_list, start_aligned,
+ 							  end_list, end_aligned,
+ 							  &start, &end);
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index b86249ebde11..1d62526738c4 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -714,6 +714,7 @@ void gfs2_clear_rgrpd(struct gfs2_sbd *sdp)
+ 
+ 		if (gl) {
+ 			glock_clear_object(gl, rgd);
++			gfs2_rgrp_brelse(rgd);
+ 			gfs2_glock_put(gl);
+ 		}
+ 
+@@ -1136,7 +1137,7 @@ static u32 count_unlinked(struct gfs2_rgrpd *rgd)
+  * @rgd: the struct gfs2_rgrpd describing the RG to read in
+  *
+  * Read in all of a Resource Group's header and bitmap blocks.
+- * Caller must eventually call gfs2_rgrp_relse() to free the bitmaps.
++ * Caller must eventually call gfs2_rgrp_brelse() to free the bitmaps.
+  *
+  * Returns: errno
+  */
+diff --git a/fs/namespace.c b/fs/namespace.c
+index bd2f4c68506a..e65254003cad 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -780,9 +780,6 @@ static struct mountpoint *lookup_mountpoint(struct dentry *dentry)
+ 
+ 	hlist_for_each_entry(mp, chain, m_hash) {
+ 		if (mp->m_dentry == dentry) {
+-			/* might be worth a WARN_ON() */
+-			if (d_unlinked(dentry))
+-				return ERR_PTR(-ENOENT);
+ 			mp->m_count++;
+ 			return mp;
+ 		}
+@@ -796,6 +793,9 @@ static struct mountpoint *get_mountpoint(struct dentry *dentry)
+ 	int ret;
+ 
+ 	if (d_mountpoint(dentry)) {
++		/* might be worth a WARN_ON() */
++		if (d_unlinked(dentry))
++			return ERR_PTR(-ENOENT);
+ mountpoint:
+ 		read_seqlock_excl(&mount_lock);
+ 		mp = lookup_mountpoint(dentry);
+@@ -1625,8 +1625,13 @@ static int do_umount(struct mount *mnt, int flags)
+ 
+ 	namespace_lock();
+ 	lock_mount_hash();
+-	event++;
+ 
++	/* Recheck MNT_LOCKED with the locks held */
++	retval = -EINVAL;
++	if (mnt->mnt.mnt_flags & MNT_LOCKED)
++		goto out;
++
++	event++;
+ 	if (flags & MNT_DETACH) {
+ 		if (!list_empty(&mnt->mnt_list))
+ 			umount_tree(mnt, UMOUNT_PROPAGATE);
+@@ -1640,6 +1645,7 @@ static int do_umount(struct mount *mnt, int flags)
+ 			retval = 0;
+ 		}
+ 	}
++out:
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+ 	return retval;
+@@ -1730,7 +1736,7 @@ int ksys_umount(char __user *name, int flags)
+ 		goto dput_and_out;
+ 	if (!check_mnt(mnt))
+ 		goto dput_and_out;
+-	if (mnt->mnt.mnt_flags & MNT_LOCKED)
++	if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */
+ 		goto dput_and_out;
+ 	retval = -EPERM;
+ 	if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
+@@ -1813,8 +1819,14 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ 		for (s = r; s; s = next_mnt(s, r)) {
+ 			if (!(flag & CL_COPY_UNBINDABLE) &&
+ 			    IS_MNT_UNBINDABLE(s)) {
+-				s = skip_mnt_tree(s);
+-				continue;
++				if (s->mnt.mnt_flags & MNT_LOCKED) {
++					/* Both unbindable and locked. */
++					q = ERR_PTR(-EPERM);
++					goto out;
++				} else {
++					s = skip_mnt_tree(s);
++					continue;
++				}
+ 			}
+ 			if (!(flag & CL_COPY_MNT_NS_FILE) &&
+ 			    is_mnt_ns_file(s->mnt.mnt_root)) {
+@@ -1867,7 +1879,7 @@ void drop_collected_mounts(struct vfsmount *mnt)
+ {
+ 	namespace_lock();
+ 	lock_mount_hash();
+-	umount_tree(real_mount(mnt), UMOUNT_SYNC);
++	umount_tree(real_mount(mnt), 0);
+ 	unlock_mount_hash();
+ 	namespace_unlock();
+ }
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 3c18c12a5c4c..b8615a4f5316 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2553,11 +2553,12 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ 		nfs4_clear_state_manager_bit(clp);
+ 		/* Did we race with an attempt to give us more work? */
+ 		if (clp->cl_state == 0)
+-			break;
++			return;
+ 		if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+-			break;
++			return;
+ 	} while (refcount_read(&clp->cl_count) > 1);
+-	return;
++	goto out_drain;
++
+ out_error:
+ 	if (strlen(section))
+ 		section_sep = ": ";
+@@ -2565,6 +2566,7 @@ out_error:
+ 			" with error %d\n", section_sep, section,
+ 			clp->cl_hostname, -status);
+ 	ssleep(1);
++out_drain:
+ 	nfs4_end_drain_session(clp);
+ 	nfs4_clear_state_manager_bit(clp);
+ }
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 0dded931f119..7c78d10a58a0 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1048,6 +1048,9 @@ nfsd4_verify_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ {
+ 	__be32 status;
+ 
++	if (!cstate->save_fh.fh_dentry)
++		return nfserr_nofilehandle;
++
+ 	status = nfs4_preprocess_stateid_op(rqstp, cstate, &cstate->save_fh,
+ 					    src_stateid, RD_STATE, src, NULL);
+ 	if (status) {
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 302cd7caa4a7..7578bd507c70 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -2412,8 +2412,16 @@ static int ocfs2_dio_end_io(struct kiocb *iocb,
+ 	/* this io's submitter should not have unlocked this before we could */
+ 	BUG_ON(!ocfs2_iocb_is_rw_locked(iocb));
+ 
+-	if (bytes > 0 && private)
+-		ret = ocfs2_dio_end_io_write(inode, private, offset, bytes);
++	if (bytes <= 0)
++		mlog_ratelimited(ML_ERROR, "Direct IO failed, bytes = %lld",
++				 (long long)bytes);
++	if (private) {
++		if (bytes > 0)
++			ret = ocfs2_dio_end_io_write(inode, private, offset,
++						     bytes);
++		else
++			ocfs2_dio_free_write_ctx(inode, private);
++	}
+ 
+ 	ocfs2_iocb_clear_rw_locked(iocb);
+ 
+diff --git a/fs/ocfs2/cluster/masklog.h b/fs/ocfs2/cluster/masklog.h
+index 308ea0eb35fd..a396096a5099 100644
+--- a/fs/ocfs2/cluster/masklog.h
++++ b/fs/ocfs2/cluster/masklog.h
+@@ -178,6 +178,15 @@ do {									\
+ 			      ##__VA_ARGS__);				\
+ } while (0)
+ 
++#define mlog_ratelimited(mask, fmt, ...)				\
++do {									\
++	static DEFINE_RATELIMIT_STATE(_rs,				\
++				      DEFAULT_RATELIMIT_INTERVAL,	\
++				      DEFAULT_RATELIMIT_BURST);		\
++	if (__ratelimit(&_rs))						\
++		mlog(mask, fmt, ##__VA_ARGS__);				\
++} while (0)
++
+ #define mlog_errno(st) ({						\
+ 	int _st = (st);							\
+ 	if (_st != -ERESTARTSYS && _st != -EINTR &&			\
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index b048d4fa3959..c121abbdfc7d 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -1897,8 +1897,7 @@ static int ocfs2_dir_foreach_blk_el(struct inode *inode,
+ 				/* On error, skip the f_pos to the
+ 				   next block. */
+ 				ctx->pos = (ctx->pos | (sb->s_blocksize - 1)) + 1;
+-				brelse(bh);
+-				continue;
++				break;
+ 			}
+ 			if (le64_to_cpu(de->inode)) {
+ 				unsigned char d_type = DT_UNKNOWN;
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index da9b3ccfde23..f1dffd70a1c0 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -461,6 +461,10 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode,
+ 	if (IS_ERR(upper))
+ 		goto out_unlock;
+ 
++	err = -ESTALE;
++	if (d_is_negative(upper) || !IS_WHITEOUT(d_inode(upper)))
++		goto out_dput;
++
+ 	newdentry = ovl_create_temp(workdir, cattr);
+ 	err = PTR_ERR(newdentry);
+ 	if (IS_ERR(newdentry))
+@@ -661,6 +665,11 @@ static int ovl_link(struct dentry *old, struct inode *newdir,
+ 	if (err)
+ 		goto out_drop_write;
+ 
++	err = ovl_copy_up(new->d_parent);
++	if (err)
++		goto out_drop_write;
++
++
+ 	err = ovl_nlink_start(old, &locked);
+ 	if (err)
+ 		goto out_drop_write;
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index c2229f02389b..1531f81037b9 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -441,8 +441,10 @@ int ovl_verify_set_fh(struct dentry *dentry, const char *name,
+ 
+ 	fh = ovl_encode_real_fh(real, is_upper);
+ 	err = PTR_ERR(fh);
+-	if (IS_ERR(fh))
++	if (IS_ERR(fh)) {
++		fh = NULL;
+ 		goto fail;
++	}
+ 
+ 	err = ovl_verify_fh(dentry, name, fh);
+ 	if (set && err == -ENODATA)
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 74b13347cd94..e557d1317d0e 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -613,14 +613,11 @@ static int udf_remount_fs(struct super_block *sb, int *flags, char *options)
+ 	struct udf_options uopt;
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	int error = 0;
+-	struct logicalVolIntegrityDescImpUse *lvidiu = udf_sb_lvidiu(sb);
++
++	if (!(*flags & SB_RDONLY) && UDF_QUERY_FLAG(sb, UDF_FLAG_RW_INCOMPAT))
++		return -EACCES;
+ 
+ 	sync_filesystem(sb);
+-	if (lvidiu) {
+-		int write_rev = le16_to_cpu(lvidiu->minUDFWriteRev);
+-		if (write_rev > UDF_MAX_WRITE_VERSION && !(*flags & SB_RDONLY))
+-			return -EACCES;
+-	}
+ 
+ 	uopt.flags = sbi->s_flags;
+ 	uopt.uid   = sbi->s_uid;
+@@ -1317,6 +1314,7 @@ static int udf_load_partdesc(struct super_block *sb, sector_t block)
+ 			ret = -EACCES;
+ 			goto out_bh;
+ 		}
++		UDF_SET_FLAG(sb, UDF_FLAG_RW_INCOMPAT);
+ 		ret = udf_load_vat(sb, i, type1_idx);
+ 		if (ret < 0)
+ 			goto out_bh;
+@@ -2215,10 +2213,12 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 				UDF_MAX_READ_VERSION);
+ 			ret = -EINVAL;
+ 			goto error_out;
+-		} else if (minUDFWriteRev > UDF_MAX_WRITE_VERSION &&
+-			   !sb_rdonly(sb)) {
+-			ret = -EACCES;
+-			goto error_out;
++		} else if (minUDFWriteRev > UDF_MAX_WRITE_VERSION) {
++			if (!sb_rdonly(sb)) {
++				ret = -EACCES;
++				goto error_out;
++			}
++			UDF_SET_FLAG(sb, UDF_FLAG_RW_INCOMPAT);
+ 		}
+ 
+ 		sbi->s_udfrev = minUDFWriteRev;
+@@ -2236,10 +2236,12 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent)
+ 	}
+ 
+ 	if (sbi->s_partmaps[sbi->s_partition].s_partition_flags &
+-			UDF_PART_FLAG_READ_ONLY &&
+-	    !sb_rdonly(sb)) {
+-		ret = -EACCES;
+-		goto error_out;
++			UDF_PART_FLAG_READ_ONLY) {
++		if (!sb_rdonly(sb)) {
++			ret = -EACCES;
++			goto error_out;
++		}
++		UDF_SET_FLAG(sb, UDF_FLAG_RW_INCOMPAT);
+ 	}
+ 
+ 	if (udf_find_fileset(sb, &fileset, &rootdir)) {
+diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
+index 9dd3e1b9619e..f8e0d200271d 100644
+--- a/fs/udf/udf_sb.h
++++ b/fs/udf/udf_sb.h
+@@ -30,6 +30,8 @@
+ #define UDF_FLAG_LASTBLOCK_SET	16
+ #define UDF_FLAG_BLOCKSIZE_SET	17
+ #define UDF_FLAG_INCONSISTENT	18
++#define UDF_FLAG_RW_INCOMPAT	19	/* Set when we find RW incompatible
++					 * feature */
+ 
+ #define UDF_PART_FLAG_UNALLOC_BITMAP	0x0001
+ #define UDF_PART_FLAG_UNALLOC_TABLE	0x0002
+diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
+index 49c93b9308d7..68bb09c29ce8 100644
+--- a/include/linux/ceph/libceph.h
++++ b/include/linux/ceph/libceph.h
+@@ -81,7 +81,13 @@ struct ceph_options {
+ 
+ #define CEPH_MSG_MAX_FRONT_LEN	(16*1024*1024)
+ #define CEPH_MSG_MAX_MIDDLE_LEN	(16*1024*1024)
+-#define CEPH_MSG_MAX_DATA_LEN	(16*1024*1024)
++
++/*
++ * Handle the largest possible rbd object in one message.
++ * There is no limit on the size of cephfs objects, but it has to obey
++ * rsize and wsize mount options anyway.
++ */
++#define CEPH_MSG_MAX_DATA_LEN	(32*1024*1024)
+ 
+ #define CEPH_AUTH_NAME_DEFAULT   "guest"
+ 
+diff --git a/include/linux/i8253.h b/include/linux/i8253.h
+index e6bb36a97519..8336b2f6f834 100644
+--- a/include/linux/i8253.h
++++ b/include/linux/i8253.h
+@@ -21,6 +21,7 @@
+ #define PIT_LATCH	((PIT_TICK_RATE + HZ/2) / HZ)
+ 
+ extern raw_spinlock_t i8253_lock;
++extern bool i8253_clear_counter_on_shutdown;
+ extern struct clock_event_device i8253_clockevent;
+ extern void clockevent_i8253_init(bool oneshot);
+ 
+diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h
+index abe975c87b90..78b86dea2f29 100644
+--- a/include/linux/mtd/nand.h
++++ b/include/linux/mtd/nand.h
+@@ -324,9 +324,8 @@ static inline unsigned int nanddev_ntargets(const struct nand_device *nand)
+  */
+ static inline unsigned int nanddev_neraseblocks(const struct nand_device *nand)
+ {
+-	return (u64)nand->memorg.luns_per_target *
+-	       nand->memorg.eraseblocks_per_lun *
+-	       nand->memorg.pages_per_eraseblock;
++	return nand->memorg.ntargets * nand->memorg.luns_per_target *
++	       nand->memorg.eraseblocks_per_lun;
+ }
+ 
+ /**
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index b8d868d23e79..50d143995338 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -113,6 +113,8 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
+ void watchdog_nmi_stop(void);
+ void watchdog_nmi_start(void);
+ int watchdog_nmi_probe(void);
++int watchdog_nmi_enable(unsigned int cpu);
++void watchdog_nmi_disable(unsigned int cpu);
+ 
+ /**
+  * touch_nmi_watchdog - restart NMI watchdog timeout.
+diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
+index fd18c974a619..f6e798d42069 100644
+--- a/include/xen/xen-ops.h
++++ b/include/xen/xen-ops.h
+@@ -41,7 +41,7 @@ int xen_setup_shutdown_event(void);
+ 
+ extern unsigned long *xen_contiguous_bitmap;
+ 
+-#ifdef CONFIG_XEN_PV
++#if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
+ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+ 				unsigned int address_bits,
+ 				dma_addr_t *dma_handle);
+diff --git a/kernel/debug/kdb/kdb_bt.c b/kernel/debug/kdb/kdb_bt.c
+index 6ad4a9fcbd6f..7921ae4fca8d 100644
+--- a/kernel/debug/kdb/kdb_bt.c
++++ b/kernel/debug/kdb/kdb_bt.c
+@@ -179,14 +179,14 @@ kdb_bt(int argc, const char **argv)
+ 				kdb_printf("no process for cpu %ld\n", cpu);
+ 				return 0;
+ 			}
+-			sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
++			sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
+ 			kdb_parse(buf);
+ 			return 0;
+ 		}
+ 		kdb_printf("btc: cpu status: ");
+ 		kdb_parse("cpu\n");
+ 		for_each_online_cpu(cpu) {
+-			sprintf(buf, "btt 0x%p\n", KDB_TSK(cpu));
++			sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu));
+ 			kdb_parse(buf);
+ 			touch_nmi_watchdog();
+ 		}
+diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
+index 2ddfce8f1e8f..f338d23b112b 100644
+--- a/kernel/debug/kdb/kdb_main.c
++++ b/kernel/debug/kdb/kdb_main.c
+@@ -1192,7 +1192,7 @@ static int kdb_local(kdb_reason_t reason, int error, struct pt_regs *regs,
+ 	if (reason == KDB_REASON_DEBUG) {
+ 		/* special case below */
+ 	} else {
+-		kdb_printf("\nEntering kdb (current=0x%p, pid %d) ",
++		kdb_printf("\nEntering kdb (current=0x%px, pid %d) ",
+ 			   kdb_current, kdb_current ? kdb_current->pid : 0);
+ #if defined(CONFIG_SMP)
+ 		kdb_printf("on processor %d ", raw_smp_processor_id());
+@@ -1208,7 +1208,7 @@ static int kdb_local(kdb_reason_t reason, int error, struct pt_regs *regs,
+ 		 */
+ 		switch (db_result) {
+ 		case KDB_DB_BPT:
+-			kdb_printf("\nEntering kdb (0x%p, pid %d) ",
++			kdb_printf("\nEntering kdb (0x%px, pid %d) ",
+ 				   kdb_current, kdb_current->pid);
+ #if defined(CONFIG_SMP)
+ 			kdb_printf("on processor %d ", raw_smp_processor_id());
+@@ -2048,7 +2048,7 @@ static int kdb_lsmod(int argc, const char **argv)
+ 		if (mod->state == MODULE_STATE_UNFORMED)
+ 			continue;
+ 
+-		kdb_printf("%-20s%8u  0x%p ", mod->name,
++		kdb_printf("%-20s%8u  0x%px ", mod->name,
+ 			   mod->core_layout.size, (void *)mod);
+ #ifdef CONFIG_MODULE_UNLOAD
+ 		kdb_printf("%4d ", module_refcount(mod));
+@@ -2059,7 +2059,7 @@ static int kdb_lsmod(int argc, const char **argv)
+ 			kdb_printf(" (Loading)");
+ 		else
+ 			kdb_printf(" (Live)");
+-		kdb_printf(" 0x%p", mod->core_layout.base);
++		kdb_printf(" 0x%px", mod->core_layout.base);
+ 
+ #ifdef CONFIG_MODULE_UNLOAD
+ 		{
+@@ -2341,7 +2341,7 @@ void kdb_ps1(const struct task_struct *p)
+ 		return;
+ 
+ 	cpu = kdb_process_cpu(p);
+-	kdb_printf("0x%p %8d %8d  %d %4d   %c  0x%p %c%s\n",
++	kdb_printf("0x%px %8d %8d  %d %4d   %c  0x%px %c%s\n",
+ 		   (void *)p, p->pid, p->parent->pid,
+ 		   kdb_task_has_cpu(p), kdb_process_cpu(p),
+ 		   kdb_task_state_char(p),
+@@ -2354,7 +2354,7 @@ void kdb_ps1(const struct task_struct *p)
+ 		} else {
+ 			if (KDB_TSK(cpu) != p)
+ 				kdb_printf("  Error: does not match running "
+-				   "process table (0x%p)\n", KDB_TSK(cpu));
++				   "process table (0x%px)\n", KDB_TSK(cpu));
+ 		}
+ 	}
+ }
+@@ -2692,7 +2692,7 @@ int kdb_register_flags(char *cmd,
+ 	for_each_kdbcmd(kp, i) {
+ 		if (kp->cmd_name && (strcmp(kp->cmd_name, cmd) == 0)) {
+ 			kdb_printf("Duplicate kdb command registered: "
+-				"%s, func %p help %s\n", cmd, func, help);
++				"%s, func %px help %s\n", cmd, func, help);
+ 			return 1;
+ 		}
+ 	}
+diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
+index 990b3cc526c8..987eb73284d2 100644
+--- a/kernel/debug/kdb/kdb_support.c
++++ b/kernel/debug/kdb/kdb_support.c
+@@ -40,7 +40,7 @@
+ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
+ {
+ 	if (KDB_DEBUG(AR))
+-		kdb_printf("kdbgetsymval: symname=%s, symtab=%p\n", symname,
++		kdb_printf("kdbgetsymval: symname=%s, symtab=%px\n", symname,
+ 			   symtab);
+ 	memset(symtab, 0, sizeof(*symtab));
+ 	symtab->sym_start = kallsyms_lookup_name(symname);
+@@ -88,7 +88,7 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+ 	char *knt1 = NULL;
+ 
+ 	if (KDB_DEBUG(AR))
+-		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%p\n", addr, symtab);
++		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
+ 	memset(symtab, 0, sizeof(*symtab));
+ 
+ 	if (addr < 4096)
+@@ -149,7 +149,7 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+ 		symtab->mod_name = "kernel";
+ 	if (KDB_DEBUG(AR))
+ 		kdb_printf("kdbnearsym: returns %d symtab->sym_start=0x%lx, "
+-		   "symtab->mod_name=%p, symtab->sym_name=%p (%s)\n", ret,
++		   "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
+ 		   symtab->sym_start, symtab->mod_name, symtab->sym_name,
+ 		   symtab->sym_name);
+ 
+@@ -887,13 +887,13 @@ void debug_kusage(void)
+ 		   __func__, dah_first);
+ 	if (dah_first) {
+ 		h_used = (struct debug_alloc_header *)debug_alloc_pool;
+-		kdb_printf("%s: h_used %p size %d\n", __func__, h_used,
++		kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
+ 			   h_used->size);
+ 	}
+ 	do {
+ 		h_used = (struct debug_alloc_header *)
+ 			  ((char *)h_free + dah_overhead + h_free->size);
+-		kdb_printf("%s: h_used %p size %d caller %p\n",
++		kdb_printf("%s: h_used %px size %d caller %px\n",
+ 			   __func__, h_used, h_used->size, h_used->caller);
+ 		h_free = (struct debug_alloc_header *)
+ 			  (debug_alloc_pool + h_free->next);
+@@ -902,7 +902,7 @@ void debug_kusage(void)
+ 		  ((char *)h_free + dah_overhead + h_free->size);
+ 	if ((char *)h_used - debug_alloc_pool !=
+ 	    sizeof(debug_alloc_pool_aligned))
+-		kdb_printf("%s: h_used %p size %d caller %p\n",
++		kdb_printf("%s: h_used %px size %d caller %px\n",
+ 			   __func__, h_used, h_used->size, h_used->caller);
+ out:
+ 	spin_unlock(&dap_lock);
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 6b71860f3998..4a8f3780aae5 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -71,9 +71,23 @@ static nokprobe_inline bool trace_kprobe_within_module(struct trace_kprobe *tk,
+ 	return strncmp(mod->name, name, len) == 0 && name[len] == ':';
+ }
+ 
+-static nokprobe_inline bool trace_kprobe_is_on_module(struct trace_kprobe *tk)
++static nokprobe_inline bool trace_kprobe_module_exist(struct trace_kprobe *tk)
+ {
+-	return !!strchr(trace_kprobe_symbol(tk), ':');
++	char *p;
++	bool ret;
++
++	if (!tk->symbol)
++		return false;
++	p = strchr(tk->symbol, ':');
++	if (!p)
++		return true;
++	*p = '\0';
++	mutex_lock(&module_mutex);
++	ret = !!find_module(tk->symbol);
++	mutex_unlock(&module_mutex);
++	*p = ':';
++
++	return ret;
+ }
+ 
+ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
+@@ -520,19 +534,13 @@ static int __register_trace_kprobe(struct trace_kprobe *tk)
+ 	else
+ 		ret = register_kprobe(&tk->rp.kp);
+ 
+-	if (ret == 0)
++	if (ret == 0) {
+ 		tk->tp.flags |= TP_FLAG_REGISTERED;
+-	else {
+-		if (ret == -ENOENT && trace_kprobe_is_on_module(tk)) {
+-			pr_warn("This probe might be able to register after target module is loaded. Continue.\n");
+-			ret = 0;
+-		} else if (ret == -EILSEQ) {
+-			pr_warn("Probing address(0x%p) is not an instruction boundary.\n",
+-				tk->rp.kp.addr);
+-			ret = -EINVAL;
+-		}
++	} else if (ret == -EILSEQ) {
++		pr_warn("Probing address(0x%p) is not an instruction boundary.\n",
++			tk->rp.kp.addr);
++		ret = -EINVAL;
+ 	}
+-
+ 	return ret;
+ }
+ 
+@@ -595,6 +603,11 @@ static int register_trace_kprobe(struct trace_kprobe *tk)
+ 
+ 	/* Register k*probe */
+ 	ret = __register_trace_kprobe(tk);
++	if (ret == -ENOENT && !trace_kprobe_module_exist(tk)) {
++		pr_warn("This probe might be able to register after target module is loaded. Continue.\n");
++		ret = 0;
++	}
++
+ 	if (ret < 0)
+ 		unregister_kprobe_event(tk);
+ 	else
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index 59fee96c29a0..e4162f59a81c 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -427,8 +427,7 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
+ EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds);
+ 
+ 
+-void __noreturn
+-__ubsan_handle_builtin_unreachable(struct unreachable_data *data)
++void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+ {
+ 	unsigned long flags;
+ 
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5b38fbef9441..bf15bd78846b 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3240,7 +3240,7 @@ static int is_hugetlb_entry_hwpoisoned(pte_t pte)
+ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ 			    struct vm_area_struct *vma)
+ {
+-	pte_t *src_pte, *dst_pte, entry;
++	pte_t *src_pte, *dst_pte, entry, dst_entry;
+ 	struct page *ptepage;
+ 	unsigned long addr;
+ 	int cow;
+@@ -3268,15 +3268,30 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ 			break;
+ 		}
+ 
+-		/* If the pagetables are shared don't copy or take references */
+-		if (dst_pte == src_pte)
++		/*
++		 * If the pagetables are shared don't copy or take references.
++		 * dst_pte == src_pte is the common case of src/dest sharing.
++		 *
++		 * However, src could have 'unshared' and dst shares with
++		 * another vma.  If dst_pte !none, this implies sharing.
++		 * Check here before taking page table lock, and once again
++		 * after taking the lock below.
++		 */
++		dst_entry = huge_ptep_get(dst_pte);
++		if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
+ 			continue;
+ 
+ 		dst_ptl = huge_pte_lock(h, dst, dst_pte);
+ 		src_ptl = huge_pte_lockptr(h, src, src_pte);
+ 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+ 		entry = huge_ptep_get(src_pte);
+-		if (huge_pte_none(entry)) { /* skip none entry */
++		dst_entry = huge_ptep_get(dst_pte);
++		if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
++			/*
++			 * Skip if src entry none.  Also, skip in the
++			 * unlikely case dst entry !none as this implies
++			 * sharing with another vma.
++			 */
+ 			;
+ 		} else if (unlikely(is_hugetlb_entry_migration(entry) ||
+ 				    is_hugetlb_entry_hwpoisoned(entry))) {
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 785252397e35..03fd2d08c361 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -587,6 +587,7 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
+ 	for (i = 0; i < sections_to_remove; i++) {
+ 		unsigned long pfn = phys_start_pfn + i*PAGES_PER_SECTION;
+ 
++		cond_resched();
+ 		ret = __remove_section(zone, __pfn_to_section(pfn), map_offset,
+ 				altmap);
+ 		map_offset = 0;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 01f1a14facc4..73fd00d2df8c 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2046,8 +2046,36 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
+ 		nmask = policy_nodemask(gfp, pol);
+ 		if (!nmask || node_isset(hpage_node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = __alloc_pages_node(hpage_node,
+-						gfp | __GFP_THISNODE, order);
++			/*
++			 * We cannot invoke reclaim if __GFP_THISNODE
++			 * is set. Invoking reclaim with
++			 * __GFP_THISNODE set, would cause THP
++			 * allocations to trigger heavy swapping
++			 * despite there may be tons of free memory
++			 * (including potentially plenty of THP
++			 * already available in the buddy) on all the
++			 * other NUMA nodes.
++			 *
++			 * At most we could invoke compaction when
++			 * __GFP_THISNODE is set (but we would need to
++			 * refrain from invoking reclaim even if
++			 * compaction returned COMPACT_SKIPPED because
++			 * there wasn't not enough memory to succeed
++			 * compaction). For now just avoid
++			 * __GFP_THISNODE instead of limiting the
++			 * allocation path to a strict and single
++			 * compaction invocation.
++			 *
++			 * Supposedly if direct reclaim was enabled by
++			 * the caller, the app prefers THP regardless
++			 * of the node it comes from so this would be
++			 * more desiderable behavior than only
++			 * providing THP originated from the local
++			 * node in such case.
++			 */
++			if (!(gfp & __GFP_DIRECT_RECLAIM))
++				gfp |= __GFP_THISNODE;
++			page = __alloc_pages_node(hpage_node, gfp, order);
+ 			goto out;
+ 		}
+ 	}
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 18185ae4f223..f8b846b5108c 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2837,7 +2837,7 @@ static struct swap_info_struct *alloc_swap_info(void)
+ 	unsigned int type;
+ 	int i;
+ 
+-	p = kzalloc(sizeof(*p), GFP_KERNEL);
++	p = kvzalloc(sizeof(*p), GFP_KERNEL);
+ 	if (!p)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -2848,7 +2848,7 @@ static struct swap_info_struct *alloc_swap_info(void)
+ 	}
+ 	if (type >= MAX_SWAPFILES) {
+ 		spin_unlock(&swap_lock);
+-		kfree(p);
++		kvfree(p);
+ 		return ERR_PTR(-EPERM);
+ 	}
+ 	if (type >= nr_swapfiles) {
+@@ -2862,7 +2862,7 @@ static struct swap_info_struct *alloc_swap_info(void)
+ 		smp_wmb();
+ 		nr_swapfiles++;
+ 	} else {
+-		kfree(p);
++		kvfree(p);
+ 		p = swap_info[type];
+ 		/*
+ 		 * Do not memset this entry: a racing procfs swap_next()
+diff --git a/net/9p/protocol.c b/net/9p/protocol.c
+index 931ea00c4fed..ce7c221ca18b 100644
+--- a/net/9p/protocol.c
++++ b/net/9p/protocol.c
+@@ -46,10 +46,15 @@ p9pdu_writef(struct p9_fcall *pdu, int proto_version, const char *fmt, ...);
+ void p9stat_free(struct p9_wstat *stbuf)
+ {
+ 	kfree(stbuf->name);
++	stbuf->name = NULL;
+ 	kfree(stbuf->uid);
++	stbuf->uid = NULL;
+ 	kfree(stbuf->gid);
++	stbuf->gid = NULL;
+ 	kfree(stbuf->muid);
++	stbuf->muid = NULL;
+ 	kfree(stbuf->extension);
++	stbuf->extension = NULL;
+ }
+ EXPORT_SYMBOL(p9stat_free);
+ 
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 3d5280425027..2d3723606fb0 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -929,19 +929,22 @@ static unsigned int early_drop_list(struct net *net,
+ 	return drops;
+ }
+ 
+-static noinline int early_drop(struct net *net, unsigned int _hash)
++static noinline int early_drop(struct net *net, unsigned int hash)
+ {
+-	unsigned int i;
++	unsigned int i, bucket;
+ 
+ 	for (i = 0; i < NF_CT_EVICTION_RANGE; i++) {
+ 		struct hlist_nulls_head *ct_hash;
+-		unsigned int hash, hsize, drops;
++		unsigned int hsize, drops;
+ 
+ 		rcu_read_lock();
+ 		nf_conntrack_get_ht(&ct_hash, &hsize);
+-		hash = reciprocal_scale(_hash++, hsize);
++		if (!i)
++			bucket = reciprocal_scale(hash, hsize);
++		else
++			bucket = (bucket + 1) % hsize;
+ 
+-		drops = early_drop_list(net, &ct_hash[hash]);
++		drops = early_drop_list(net, &ct_hash[bucket]);
+ 		rcu_read_unlock();
+ 
+ 		if (drops) {
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 30afbd236656..b53cc0960b5d 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -639,11 +639,10 @@ void xdr_truncate_encode(struct xdr_stream *xdr, size_t len)
+ 		WARN_ON_ONCE(xdr->iov);
+ 		return;
+ 	}
+-	if (fraglen) {
++	if (fraglen)
+ 		xdr->end = head->iov_base + head->iov_len;
+-		xdr->page_ptr--;
+-	}
+ 	/* (otherwise assume xdr->end is already set) */
++	xdr->page_ptr--;
+ 	head->iov_len = len;
+ 	buf->len = len;
+ 	xdr->p = head->iov_base + head->iov_len;
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 4680a217d0fa..46ec7be75d4b 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -5306,6 +5306,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ 	addr_buf = address;
+ 
+ 	while (walk_size < addrlen) {
++		if (walk_size + sizeof(sa_family_t) > addrlen)
++			return -EINVAL;
++
+ 		addr = addr_buf;
+ 		switch (addr->sa_family) {
+ 		case AF_UNSPEC:
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 02580f3ded1a..0b88ec9381e7 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -779,7 +779,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 
+ 		if (!is_arm_pmu_core(name)) {
+ 			pname = pe->pmu ? pe->pmu : "cpu";
+-			if (strncmp(pname, name, strlen(pname)))
++			if (strcmp(pname, name))
+ 				continue;
+ 		}
+ 
+diff --git a/tools/testing/selftests/powerpc/tm/tm-tmspr.c b/tools/testing/selftests/powerpc/tm/tm-tmspr.c
+index 2bda81c7bf23..df1d7d4b1c89 100644
+--- a/tools/testing/selftests/powerpc/tm/tm-tmspr.c
++++ b/tools/testing/selftests/powerpc/tm/tm-tmspr.c
+@@ -98,7 +98,7 @@ void texasr(void *in)
+ 
+ int test_tmspr()
+ {
+-	pthread_t 	thread;
++	pthread_t	*thread;
+ 	int	   	thread_num;
+ 	unsigned long	i;
+ 
+@@ -107,21 +107,28 @@ int test_tmspr()
+ 	/* To cause some context switching */
+ 	thread_num = 10 * sysconf(_SC_NPROCESSORS_ONLN);
+ 
++	thread = malloc(thread_num * sizeof(pthread_t));
++	if (thread == NULL)
++		return EXIT_FAILURE;
++
+ 	/* Test TFIAR and TFHAR */
+-	for (i = 0 ; i < thread_num ; i += 2){
+-		if (pthread_create(&thread, NULL, (void*)tfiar_tfhar, (void *)i))
++	for (i = 0; i < thread_num; i += 2) {
++		if (pthread_create(&thread[i], NULL, (void *)tfiar_tfhar,
++				   (void *)i))
+ 			return EXIT_FAILURE;
+ 	}
+-	if (pthread_join(thread, NULL) != 0)
+-		return EXIT_FAILURE;
+-
+ 	/* Test TEXASR */
+-	for (i = 0 ; i < thread_num ; i++){
+-		if (pthread_create(&thread, NULL, (void*)texasr, (void *)i))
++	for (i = 1; i < thread_num; i += 2) {
++		if (pthread_create(&thread[i], NULL, (void *)texasr, (void *)i))
+ 			return EXIT_FAILURE;
+ 	}
+-	if (pthread_join(thread, NULL) != 0)
+-		return EXIT_FAILURE;
++
++	for (i = 0; i < thread_num; i++) {
++		if (pthread_join(thread[i], NULL) != 0)
++			return EXIT_FAILURE;
++	}
++
++	free(thread);
+ 
+ 	if (passed)
+ 		return 0;


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     4c43869360643e3f80b1ebd018e1524787e16a3f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 13 21:16:56 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:41 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4c438693

proj/linux-patches: Linux patch 4.18.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1018_linux-4.18.19.patch | 15151 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 15155 insertions(+)

diff --git a/0000_README b/0000_README
index bdc7ee9..afaac7a 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-4.18.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.18
 
+Patch:  1018_linux-4.18.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-4.18.19.patch b/1018_linux-4.18.19.patch
new file mode 100644
index 0000000..40499cf
--- /dev/null
+++ b/1018_linux-4.18.19.patch
@@ -0,0 +1,15151 @@
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 48b424de85bb..cfbc18f0d9c9 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -191,21 +191,11 @@ Currently, the following pairs of encryption modes are supported:
+ 
+ - AES-256-XTS for contents and AES-256-CTS-CBC for filenames
+ - AES-128-CBC for contents and AES-128-CTS-CBC for filenames
+-- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames
+ 
+ It is strongly recommended to use AES-256-XTS for contents encryption.
+ AES-128-CBC was added only for low-powered embedded devices with
+ crypto accelerators such as CAAM or CESA that do not support XTS.
+ 
+-Similarly, Speck128/256 support was only added for older or low-end
+-CPUs which cannot do AES fast enough -- especially ARM CPUs which have
+-NEON instructions but not the Cryptography Extensions -- and for which
+-it would not otherwise be feasible to use encryption at all.  It is
+-not recommended to use Speck on CPUs that have AES instructions.
+-Speck support is only available if it has been enabled in the crypto
+-API via CONFIG_CRYPTO_SPECK.  Also, on ARM platforms, to get
+-acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled.
+-
+ New encryption modes can be added relatively easily, without changes
+ to individual filesystems.  However, authenticated encryption (AE)
+ modes are not currently supported because of the difficulty of dealing
+diff --git a/Documentation/media/uapi/cec/cec-ioc-receive.rst b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+index e964074cd15b..b25e48afaa08 100644
+--- a/Documentation/media/uapi/cec/cec-ioc-receive.rst
++++ b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+@@ -16,10 +16,10 @@ CEC_RECEIVE, CEC_TRANSMIT - Receive or transmit a CEC message
+ Synopsis
+ ========
+ 
+-.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg \*argp )
+     :name: CEC_RECEIVE
+ 
+-.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg \*argp )
+     :name: CEC_TRANSMIT
+ 
+ Arguments
+@@ -272,6 +272,19 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The transmit failed after one or more retries. This status bit is
+ 	mutually exclusive with :ref:`CEC_TX_STATUS_OK <CEC-TX-STATUS-OK>`.
+ 	Other bits can still be set to explain which failures were seen.
++    * .. _`CEC-TX-STATUS-ABORTED`:
++
++      - ``CEC_TX_STATUS_ABORTED``
++      - 0x40
++      - The transmit was aborted due to an HDMI disconnect, or the adapter
++        was unconfigured, or a transmit was interrupted, or the driver
++	returned an error when attempting to start a transmit.
++    * .. _`CEC-TX-STATUS-TIMEOUT`:
++
++      - ``CEC_TX_STATUS_TIMEOUT``
++      - 0x80
++      - The transmit timed out. This should not normally happen and this
++	indicates a driver problem.
+ 
+ 
+ .. tabularcolumns:: |p{5.6cm}|p{0.9cm}|p{11.0cm}|
+@@ -300,6 +313,14 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The message was received successfully but the reply was
+ 	``CEC_MSG_FEATURE_ABORT``. This status is only set if this message
+ 	was the reply to an earlier transmitted message.
++    * .. _`CEC-RX-STATUS-ABORTED`:
++
++      - ``CEC_RX_STATUS_ABORTED``
++      - 0x08
++      - The wait for a reply to an earlier transmitted message was aborted
++        because the HDMI cable was disconnected, the adapter was unconfigured
++	or the :ref:`CEC_TRANSMIT <CEC_RECEIVE>` that waited for a
++	reply was interrupted.
+ 
+ 
+ 
+diff --git a/Documentation/media/uapi/v4l/biblio.rst b/Documentation/media/uapi/v4l/biblio.rst
+index 1cedcfc04327..386d6cf83e9c 100644
+--- a/Documentation/media/uapi/v4l/biblio.rst
++++ b/Documentation/media/uapi/v4l/biblio.rst
+@@ -226,16 +226,6 @@ xvYCC
+ 
+ :author:    International Electrotechnical Commission (http://www.iec.ch)
+ 
+-.. _adobergb:
+-
+-AdobeRGB
+-========
+-
+-
+-:title:     Adobe© RGB (1998) Color Image Encoding Version 2005-05
+-
+-:author:    Adobe Systems Incorporated (http://www.adobe.com)
+-
+ .. _oprgb:
+ 
+ opRGB
+diff --git a/Documentation/media/uapi/v4l/colorspaces-defs.rst b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+index 410907fe9415..f24615544792 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-defs.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - See :ref:`col-rec709`.
+     * - ``V4L2_COLORSPACE_SRGB``
+       - See :ref:`col-srgb`.
+-    * - ``V4L2_COLORSPACE_ADOBERGB``
+-      - See :ref:`col-adobergb`.
++    * - ``V4L2_COLORSPACE_OPRGB``
++      - See :ref:`col-oprgb`.
+     * - ``V4L2_COLORSPACE_BT2020``
+       - See :ref:`col-bt2020`.
+     * - ``V4L2_COLORSPACE_DCI_P3``
+@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - Use the Rec. 709 transfer function.
+     * - ``V4L2_XFER_FUNC_SRGB``
+       - Use the sRGB transfer function.
+-    * - ``V4L2_XFER_FUNC_ADOBERGB``
+-      - Use the AdobeRGB transfer function.
++    * - ``V4L2_XFER_FUNC_OPRGB``
++      - Use the opRGB transfer function.
+     * - ``V4L2_XFER_FUNC_SMPTE240M``
+       - Use the SMPTE 240M transfer function.
+     * - ``V4L2_XFER_FUNC_NONE``
+diff --git a/Documentation/media/uapi/v4l/colorspaces-details.rst b/Documentation/media/uapi/v4l/colorspaces-details.rst
+index b5d551b9cc8f..09fabf4cd412 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-details.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-details.rst
+@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
+ 170M/BT.601. The Y'CbCr quantization is limited range.
+ 
+ 
+-.. _col-adobergb:
++.. _col-oprgb:
+ 
+-Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
++Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
+ ===============================================
+ 
+-The :ref:`adobergb` standard defines the colorspace used by computer
+-graphics that use the AdobeRGB colorspace. This is also known as the
+-:ref:`oprgb` standard. The default transfer function is
+-``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
++The :ref:`oprgb` standard defines the colorspace used by computer
++graphics that use the opRGB colorspace. The default transfer function is
++``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
+ ``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
+ range.
+ 
+@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
+ 
+ .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
+ 
+-.. flat-table:: Adobe RGB Chromaticities
++.. flat-table:: opRGB Chromaticities
+     :header-rows:  1
+     :stub-columns: 0
+     :widths:       1 1 2
+diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
+index a5cb0a8686ac..8813ff9c42b9 100644
+--- a/Documentation/media/videodev2.h.rst.exceptions
++++ b/Documentation/media/videodev2.h.rst.exceptions
+@@ -56,7 +56,8 @@ replace symbol V4L2_MEMORY_USERPTR :c:type:`v4l2_memory`
+ # Documented enum v4l2_colorspace
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_BG :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_M :c:type:`v4l2_colorspace`
+-replace symbol V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
++replace symbol V4L2_COLORSPACE_OPRGB :c:type:`v4l2_colorspace`
++replace define V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_BT2020 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DCI_P3 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DEFAULT :c:type:`v4l2_colorspace`
+@@ -69,7 +70,8 @@ replace symbol V4L2_COLORSPACE_SRGB :c:type:`v4l2_colorspace`
+ 
+ # Documented enum v4l2_xfer_func
+ replace symbol V4L2_XFER_FUNC_709 :c:type:`v4l2_xfer_func`
+-replace symbol V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
++replace symbol V4L2_XFER_FUNC_OPRGB :c:type:`v4l2_xfer_func`
++replace define V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DCI_P3 :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DEFAULT :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_NONE :c:type:`v4l2_xfer_func`
+diff --git a/Makefile b/Makefile
+index 7b35c1ec0427..71642133ba22 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index a0ddf497e8cd..2cb45ddd2ae3 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -354,7 +354,7 @@
+ 				ti,hwmods = "pcie1";
+ 				phys = <&pcie1_phy>;
+ 				phy-names = "pcie-phy0";
+-				ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
++				ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 962af97c1883..aff5d66ae058 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -78,6 +78,22 @@
+ 			compatible = "arm,cortex-a7";
+ 			reg = <1>;
+ 			clock-frequency = <1000000000>;
++			clocks = <&cmu CLK_ARM_CLK>;
++			clock-names = "cpu";
++			#cooling-cells = <2>;
++
++			operating-points = <
++				1000000 1150000
++				900000  1112500
++				800000  1075000
++				700000  1037500
++				600000  1000000
++				500000  962500
++				400000  925000
++				300000  887500
++				200000  850000
++				100000  850000
++			>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-origen.dts b/arch/arm/boot/dts/exynos4210-origen.dts
+index 2ab99f9f3d0a..dd9ec05eb0f7 100644
+--- a/arch/arm/boot/dts/exynos4210-origen.dts
++++ b/arch/arm/boot/dts/exynos4210-origen.dts
+@@ -151,6 +151,8 @@
+ 		reg = <0x66>;
+ 		interrupt-parent = <&gpx0>;
+ 		interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&max8997_irq>;
+ 
+ 		max8997,pmic-buck1-dvs-voltage = <1350000>;
+ 		max8997,pmic-buck2-dvs-voltage = <1100000>;
+@@ -288,6 +290,13 @@
+ 	};
+ };
+ 
++&pinctrl_1 {
++	max8997_irq: max8997-irq {
++		samsung,pins = "gpx0-3", "gpx0-4";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
++};
++
+ &sdhci_0 {
+ 	bus-width = <4>;
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index 88fb47cef9a8..b6091c27f155 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -55,6 +55,19 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0x901>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			clock-latency = <160000>;
++
++			operating-points = <
++				1200000 1250000
++				1000000 1150000
++				800000	1075000
++				500000	975000
++				400000	975000
++				200000	950000
++			>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4412.dtsi b/arch/arm/boot/dts/exynos4412.dtsi
+index 7b43c10c510b..51f72f0327e5 100644
+--- a/arch/arm/boot/dts/exynos4412.dtsi
++++ b/arch/arm/boot/dts/exynos4412.dtsi
+@@ -49,21 +49,30 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA01>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a02 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA02>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a03 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA03>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index 2daf505b3d08..f04adf72b80e 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -54,36 +54,106 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <0>;
+-			clock-frequency = <1700000000>;
+ 			clocks = <&clock CLK_ARM_CLK>;
+ 			clock-names = "cpu";
+-			clock-latency = <140000>;
+-
+-			operating-points = <
+-				1700000 1300000
+-				1600000 1250000
+-				1500000 1225000
+-				1400000 1200000
+-				1300000 1150000
+-				1200000 1125000
+-				1100000 1100000
+-				1000000 1075000
+-				 900000 1050000
+-				 800000 1025000
+-				 700000 1012500
+-				 600000 1000000
+-				 500000  975000
+-				 400000  950000
+-				 300000  937500
+-				 200000  925000
+-			>;
++			operating-points-v2 = <&cpu0_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 		cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <1>;
+-			clock-frequency = <1700000000>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
++		};
++	};
++
++	cpu0_opp_table: opp_table0 {
++		compatible = "operating-points-v2";
++		opp-shared;
++
++		opp-200000000 {
++			opp-hz = /bits/ 64 <200000000>;
++			opp-microvolt = <925000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-300000000 {
++			opp-hz = /bits/ 64 <300000000>;
++			opp-microvolt = <937500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-400000000 {
++			opp-hz = /bits/ 64 <400000000>;
++			opp-microvolt = <950000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-500000000 {
++			opp-hz = /bits/ 64 <500000000>;
++			opp-microvolt = <975000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-600000000 {
++			opp-hz = /bits/ 64 <600000000>;
++			opp-microvolt = <1000000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-700000000 {
++			opp-hz = /bits/ 64 <700000000>;
++			opp-microvolt = <1012500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-800000000 {
++			opp-hz = /bits/ 64 <800000000>;
++			opp-microvolt = <1025000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-900000000 {
++			opp-hz = /bits/ 64 <900000000>;
++			opp-microvolt = <1050000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1000000000 {
++			opp-hz = /bits/ 64 <1000000000>;
++			opp-microvolt = <1075000>;
++			clock-latency-ns = <140000>;
++			opp-suspend;
++		};
++		opp-1100000000 {
++			opp-hz = /bits/ 64 <1100000000>;
++			opp-microvolt = <1100000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1200000000 {
++			opp-hz = /bits/ 64 <1200000000>;
++			opp-microvolt = <1125000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1300000000 {
++			opp-hz = /bits/ 64 <1300000000>;
++			opp-microvolt = <1150000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1400000000 {
++			opp-hz = /bits/ 64 <1400000000>;
++			opp-microvolt = <1200000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1500000000 {
++			opp-hz = /bits/ 64 <1500000000>;
++			opp-microvolt = <1225000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1600000000 {
++			opp-hz = /bits/ 64 <1600000000>;
++			opp-microvolt = <1250000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1700000000 {
++			opp-hz = /bits/ 64 <1700000000>;
++			opp-microvolt = <1300000>;
++			clock-latency-ns = <140000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 791ca15c799e..bd1985694bca 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -601,7 +601,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdr: sdr@ffc25000 {
++		sdr: sdr@ffcfb100 {
+ 			compatible = "altr,sdr-ctl", "syscon";
+ 			reg = <0xffcfb100 0x80>;
+ 		};
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 925d1364727a..b8e69fe282b8 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
+ 	select CRYPTO_BLKCIPHER
+ 	select CRYPTO_CHACHA20
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
+index 8de542c48ade..bd5bceef0605 100644
+--- a/arch/arm/crypto/Makefile
++++ b/arch/arm/crypto/Makefile
+@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
+ obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
+ obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+ 
+ ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
+ ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
+@@ -54,7 +53,6 @@ ghash-arm-ce-y	:= ghash-ce-core.o ghash-ce-glue.o
+ crct10dif-arm-ce-y	:= crct10dif-ce-core.o crct10dif-ce-glue.o
+ crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+ 
+ ifdef REGENERATE_ARM_CRYPTO
+ quiet_cmd_perl = PERL    $@
+diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
+deleted file mode 100644
+index 57caa742016e..000000000000
+--- a/arch/arm/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,434 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-	.fpu		neon
+-
+-	// arguments
+-	ROUND_KEYS	.req	r0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	r1	// int nrounds
+-	DST		.req	r2	// void *dst
+-	SRC		.req	r3	// const void *src
+-	NBYTES		.req	r4	// unsigned int nbytes
+-	TWEAK		.req	r5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	X0		.req	q0
+-	X0_L		.req	d0
+-	X0_H		.req	d1
+-	Y0		.req	q1
+-	Y0_H		.req	d3
+-	X1		.req	q2
+-	X1_L		.req	d4
+-	X1_H		.req	d5
+-	Y1		.req	q3
+-	Y1_H		.req	d7
+-	X2		.req	q4
+-	X2_L		.req	d8
+-	X2_H		.req	d9
+-	Y2		.req	q5
+-	Y2_H		.req	d11
+-	X3		.req	q6
+-	X3_L		.req	d12
+-	X3_H		.req	d13
+-	Y3		.req	q7
+-	Y3_H		.req	d15
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	q8
+-	ROUND_KEY_L	.req	d16
+-	ROUND_KEY_H	.req	d17
+-
+-	// index vector for vtbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	d18
+-
+-	// multiplication table for updating XTS tweaks
+-	GF128MUL_TABLE	.req	d19
+-	GF64MUL_TABLE	.req	d19
+-
+-	// current XTS tweak value(s)
+-	TWEAKV		.req	q10
+-	TWEAKV_L	.req	d20
+-	TWEAKV_H	.req	d21
+-
+-	TMP0		.req	q12
+-	TMP0_L		.req	d24
+-	TMP0_H		.req	d25
+-	TMP1		.req	q13
+-	TMP2		.req	q14
+-	TMP3		.req	q15
+-
+-	.align		4
+-.Lror64_8_table:
+-	.byte		1, 2, 3, 4, 5, 6, 7, 0
+-.Lror32_8_table:
+-	.byte		1, 2, 3, 0, 5, 6, 7, 4
+-.Lrol64_8_table:
+-	.byte		7, 0, 1, 2, 3, 4, 5, 6
+-.Lrol32_8_table:
+-	.byte		3, 0, 1, 2, 7, 4, 5, 6
+-.Lgf128mul_table:
+-	.byte		0, 0x87
+-	.fill		14
+-.Lgf64mul_table:
+-	.byte		0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
+-	.fill		12
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- *
+- * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
+- * the vtbl approach is faster on some processors and the same speed on others.
+- */
+-.macro _speck_round_128bytes	n
+-
+-	// x = ror(x, 8)
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-
+-	// x += y
+-	vadd.u\n	X0, Y0
+-	vadd.u\n	X1, Y1
+-	vadd.u\n	X2, Y2
+-	vadd.u\n	X3, Y3
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// y = rol(y, 3)
+-	vshl.u\n	TMP0, Y0, #3
+-	vshl.u\n	TMP1, Y1, #3
+-	vshl.u\n	TMP2, Y2, #3
+-	vshl.u\n	TMP3, Y3, #3
+-	vsri.u\n	TMP0, Y0, #(\n - 3)
+-	vsri.u\n	TMP1, Y1, #(\n - 3)
+-	vsri.u\n	TMP2, Y2, #(\n - 3)
+-	vsri.u\n	TMP3, Y3, #(\n - 3)
+-
+-	// y ^= x
+-	veor		Y0, TMP0, X0
+-	veor		Y1, TMP1, X1
+-	veor		Y2, TMP2, X2
+-	veor		Y3, TMP3, X3
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n
+-
+-	// y ^= x
+-	veor		TMP0, Y0, X0
+-	veor		TMP1, Y1, X1
+-	veor		TMP2, Y2, X2
+-	veor		TMP3, Y3, X3
+-
+-	// y = ror(y, 3)
+-	vshr.u\n	Y0, TMP0, #3
+-	vshr.u\n	Y1, TMP1, #3
+-	vshr.u\n	Y2, TMP2, #3
+-	vshr.u\n	Y3, TMP3, #3
+-	vsli.u\n	Y0, TMP0, #(\n - 3)
+-	vsli.u\n	Y1, TMP1, #(\n - 3)
+-	vsli.u\n	Y2, TMP2, #(\n - 3)
+-	vsli.u\n	Y3, TMP3, #(\n - 3)
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// x -= y
+-	vsub.u\n	X0, Y0
+-	vsub.u\n	X1, Y1
+-	vsub.u\n	X2, Y2
+-	vsub.u\n	X3, Y3
+-
+-	// x = rol(x, 8);
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-.endm
+-
+-.macro _xts128_precrypt_one	dst_reg, tweak_buf, tmp
+-
+-	// Load the next source block
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current tweak in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next source block with the current tweak
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #63
+-	vshl.u64	TWEAKV, #1
+-	veor		TWEAKV_H, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV_L, \tmp\()_H
+-.endm
+-
+-.macro _xts64_precrypt_two	dst_reg, tweak_buf, tmp
+-
+-	// Load the next two source blocks
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current two tweaks in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next two source blocks with the current two tweaks
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #62
+-	vshl.u64	TWEAKV, #2
+-	vtbl.8		\tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV, \tmp
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, decrypting
+-	push		{r4-r7}
+-	mov		r7, sp
+-
+-	/*
+-	 * The first four parameters were passed in registers r0-r3.  Load the
+-	 * additional parameters, which were passed on the stack.
+-	 */
+-	ldr		NBYTES, [sp, #16]
+-	ldr		TWEAK, [sp, #20]
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
+-	sub		ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
+-	sub		ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for vtbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		r12, =.Lrol\n\()_8_table
+-.else
+-	ldr		r12, =.Lror\n\()_8_table
+-.endif
+-	vld1.8		{ROTATE_TABLE}, [r12:64]
+-
+-	// One-time XTS preparation
+-
+-	/*
+-	 * Allocate stack space to store 128 bytes worth of tweaks.  For
+-	 * performance, this space is aligned to a 16-byte boundary so that we
+-	 * can use the load/store instructions that declare 16-byte alignment.
+-	 * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
+-	 */
+-	sub		r12, sp, #128
+-	bic		r12, #0xf
+-	mov		sp, r12
+-
+-.if \n == 64
+-	// Load first tweak
+-	vld1.8		{TWEAKV}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		r12, =.Lgf128mul_table
+-	vld1.8		{GF128MUL_TABLE}, [r12:64]
+-.else
+-	// Load first tweak
+-	vld1.8		{TWEAKV_L}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		r12, =.Lgf64mul_table
+-	vld1.8		{GF64MUL_TABLE}, [r12:64]
+-
+-	// Calculate second tweak, packing it together with the first
+-	vshr.u64	TMP0_L, TWEAKV_L, #63
+-	vtbl.u8		TMP0_L, {GF64MUL_TABLE}, TMP0_L
+-	vshl.u64	TWEAKV_H, TWEAKV_L, #1
+-	veor		TWEAKV_H, TMP0_L
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	/*
+-	 * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
+-	 * values, and save the tweaks on the stack for later.  Then
+-	 * de-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	mov		r12, sp
+-.if \n == 64
+-	_xts128_precrypt_one	X0, r12, TMP0
+-	_xts128_precrypt_one	Y0, r12, TMP0
+-	_xts128_precrypt_one	X1, r12, TMP0
+-	_xts128_precrypt_one	Y1, r12, TMP0
+-	_xts128_precrypt_one	X2, r12, TMP0
+-	_xts128_precrypt_one	Y2, r12, TMP0
+-	_xts128_precrypt_one	X3, r12, TMP0
+-	_xts128_precrypt_one	Y3, r12, TMP0
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	_xts64_precrypt_two	X0, r12, TMP0
+-	_xts64_precrypt_two	Y0, r12, TMP0
+-	_xts64_precrypt_two	X1, r12, TMP0
+-	_xts64_precrypt_two	Y1, r12, TMP0
+-	_xts64_precrypt_two	X2, r12, TMP0
+-	_xts64_precrypt_two	Y2, r12, TMP0
+-	_xts64_precrypt_two	X3, r12, TMP0
+-	_xts64_precrypt_two	Y3, r12, TMP0
+-	vuzp.32		Y0, X0
+-	vuzp.32		Y1, X1
+-	vuzp.32		Y2, X2
+-	vuzp.32		Y3, X3
+-.endif
+-
+-	// Do the cipher rounds
+-
+-	mov		r12, ROUND_KEYS
+-	mov		r6, NROUNDS
+-
+-.Lnext_round_\@:
+-.if \decrypting
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]
+-	sub		r12, #8
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
+-	sub		r12, #4
+-.endif
+-	_speck_unround_128bytes	\n
+-.else
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]!
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
+-.endif
+-	_speck_round_128bytes	\n
+-.endif
+-	subs		r6, r6, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-.if \n == 64
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	vzip.32		Y0, X0
+-	vzip.32		Y1, X1
+-	vzip.32		Y2, X2
+-	vzip.32		Y3, X3
+-.endif
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks we saved earlier
+-	mov		r12, sp
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X0, TMP0
+-	veor		Y0, TMP1
+-	veor		X1, TMP2
+-	veor		Y1, TMP3
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X2, TMP0
+-	veor		Y2, TMP1
+-	veor		X3, TMP2
+-	veor		Y3, TMP3
+-
+-	// Store the ciphertext in the destination buffer
+-	vst1.8		{X0, Y0}, [DST]!
+-	vst1.8		{X1, Y1}, [DST]!
+-	vst1.8		{X2, Y2}, [DST]!
+-	vst1.8		{X3, Y3}, [DST]!
+-
+-	// Continue if there are more 128-byte chunks remaining, else return
+-	subs		NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak
+-.if \n == 64
+-	vst1.8		{TWEAKV}, [TWEAK]
+-.else
+-	vst1.8		{TWEAKV_L}, [TWEAK]
+-.endif
+-
+-	mov		sp, r7
+-	pop		{r4-r7}
+-	bx		lr
+-.endm
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm/crypto/speck-neon-glue.c b/arch/arm/crypto/speck-neon-glue.c
+deleted file mode 100644
+index f012c3ea998f..000000000000
+--- a/arch/arm/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,288 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Note: the NIST recommendation for XTS only specifies a 128-bit block size,
+- * but a 64-bit version (needed for Speck64) is fairly straightforward; the math
+- * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
+- * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
+- * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
+- * OCB and PMAC"), represented as 0x1B.
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_NEON))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index 67dac595dc72..3989876ab699 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -327,7 +327,7 @@
+ 
+ 		sysmgr: sysmgr@ffd12000 {
+ 			compatible = "altr,sys-mgr", "syscon";
+-			reg = <0xffd12000 0x1000>;
++			reg = <0xffd12000 0x228>;
+ 		};
+ 
+ 		/* Local timer */
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index e3fdb0fd6f70..d51944ff9f91 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS
+ 	select CRYPTO_AES_ARM64
+ 	select CRYPTO_SIMD
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
+index bcafd016618e..7bc4bda6d9c6 100644
+--- a/arch/arm64/crypto/Makefile
++++ b/arch/arm64/crypto/Makefile
+@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+ 
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+-
+ obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
+ aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
+ 
+diff --git a/arch/arm64/crypto/speck-neon-core.S b/arch/arm64/crypto/speck-neon-core.S
+deleted file mode 100644
+index b14463438b09..000000000000
+--- a/arch/arm64/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,352 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-
+-	// arguments
+-	ROUND_KEYS	.req	x0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	w1	// int nrounds
+-	NROUNDS_X	.req	x1
+-	DST		.req	x2	// void *dst
+-	SRC		.req	x3	// const void *src
+-	NBYTES		.req	w4	// unsigned int nbytes
+-	TWEAK		.req	x5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	// (underscores avoid a naming collision with ARM64 registers x0-x3)
+-	X_0		.req	v0
+-	Y_0		.req	v1
+-	X_1		.req	v2
+-	Y_1		.req	v3
+-	X_2		.req	v4
+-	Y_2		.req	v5
+-	X_3		.req	v6
+-	Y_3		.req	v7
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	v8
+-
+-	// index vector for tbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	v9
+-	ROTATE_TABLE_Q	.req	q9
+-
+-	// temporary registers
+-	TMP0		.req	v10
+-	TMP1		.req	v11
+-	TMP2		.req	v12
+-	TMP3		.req	v13
+-
+-	// multiplication table for updating XTS tweaks
+-	GFMUL_TABLE	.req	v14
+-	GFMUL_TABLE_Q	.req	q14
+-
+-	// next XTS tweak value(s)
+-	TWEAKV_NEXT	.req	v15
+-
+-	// XTS tweaks for the blocks currently being encrypted/decrypted
+-	TWEAKV0		.req	v16
+-	TWEAKV1		.req	v17
+-	TWEAKV2		.req	v18
+-	TWEAKV3		.req	v19
+-	TWEAKV4		.req	v20
+-	TWEAKV5		.req	v21
+-	TWEAKV6		.req	v22
+-	TWEAKV7		.req	v23
+-
+-	.align		4
+-.Lror64_8_table:
+-	.octa		0x080f0e0d0c0b0a090007060504030201
+-.Lror32_8_table:
+-	.octa		0x0c0f0e0d080b0a090407060500030201
+-.Lrol64_8_table:
+-	.octa		0x0e0d0c0b0a09080f0605040302010007
+-.Lrol32_8_table:
+-	.octa		0x0e0d0c0f0a09080b0605040702010003
+-.Lgf128mul_table:
+-	.octa		0x00000000000000870000000000000001
+-.Lgf64mul_table:
+-	.octa		0x0000000000000000000000002d361b00
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- * 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64.
+- */
+-.macro _speck_round_128bytes	n, lanes
+-
+-	// x = ror(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-
+-	// x += y
+-	add		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	add		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	add		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	add		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// y = rol(y, 3)
+-	shl		TMP0.\lanes, Y_0.\lanes, #3
+-	shl		TMP1.\lanes, Y_1.\lanes, #3
+-	shl		TMP2.\lanes, Y_2.\lanes, #3
+-	shl		TMP3.\lanes, Y_3.\lanes, #3
+-	sri		TMP0.\lanes, Y_0.\lanes, #(\n - 3)
+-	sri		TMP1.\lanes, Y_1.\lanes, #(\n - 3)
+-	sri		TMP2.\lanes, Y_2.\lanes, #(\n - 3)
+-	sri		TMP3.\lanes, Y_3.\lanes, #(\n - 3)
+-
+-	// y ^= x
+-	eor		Y_0.16b, TMP0.16b, X_0.16b
+-	eor		Y_1.16b, TMP1.16b, X_1.16b
+-	eor		Y_2.16b, TMP2.16b, X_2.16b
+-	eor		Y_3.16b, TMP3.16b, X_3.16b
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n, lanes
+-
+-	// y ^= x
+-	eor		TMP0.16b, Y_0.16b, X_0.16b
+-	eor		TMP1.16b, Y_1.16b, X_1.16b
+-	eor		TMP2.16b, Y_2.16b, X_2.16b
+-	eor		TMP3.16b, Y_3.16b, X_3.16b
+-
+-	// y = ror(y, 3)
+-	ushr		Y_0.\lanes, TMP0.\lanes, #3
+-	ushr		Y_1.\lanes, TMP1.\lanes, #3
+-	ushr		Y_2.\lanes, TMP2.\lanes, #3
+-	ushr		Y_3.\lanes, TMP3.\lanes, #3
+-	sli		Y_0.\lanes, TMP0.\lanes, #(\n - 3)
+-	sli		Y_1.\lanes, TMP1.\lanes, #(\n - 3)
+-	sli		Y_2.\lanes, TMP2.\lanes, #(\n - 3)
+-	sli		Y_3.\lanes, TMP3.\lanes, #(\n - 3)
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// x -= y
+-	sub		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	sub		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	sub		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	sub		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x = rol(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-.endm
+-
+-.macro _next_xts_tweak	next, cur, tmp, n
+-.if \n == 64
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	sshr		\tmp\().2d, \cur\().2d, #63
+-	and		\tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b
+-	shl		\next\().2d, \cur\().2d, #1
+-	ext		\tmp\().16b, \tmp\().16b, \tmp\().16b, #8
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.else
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	ushr		\tmp\().2d, \cur\().2d, #62
+-	shl		\next\().2d, \cur\().2d, #2
+-	tbl		\tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.endif
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, lanes, decrypting
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-	mov		NROUNDS, NROUNDS	/* zero the high 32 bits */
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3
+-	sub		ROUND_KEYS, ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2
+-	sub		ROUND_KEYS, ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for tbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		ROTATE_TABLE_Q, .Lrol\n\()_8_table
+-.else
+-	ldr		ROTATE_TABLE_Q, .Lror\n\()_8_table
+-.endif
+-
+-	// One-time XTS preparation
+-.if \n == 64
+-	// Load first tweak
+-	ld1		{TWEAKV0.16b}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf128mul_table
+-.else
+-	// Load first tweak
+-	ld1		{TWEAKV0.8b}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf64mul_table
+-
+-	// Calculate second tweak, packing it together with the first
+-	ushr		TMP0.2d, TWEAKV0.2d, #63
+-	shl		TMP1.2d, TWEAKV0.2d, #1
+-	tbl		TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b
+-	eor		TMP0.8b, TMP0.8b, TMP1.8b
+-	mov		TWEAKV0.d[1], TMP0.d[0]
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	// Calculate XTS tweaks for next 128 bytes
+-	_next_xts_tweak	TWEAKV1, TWEAKV0, TMP0, \n
+-	_next_xts_tweak	TWEAKV2, TWEAKV1, TMP0, \n
+-	_next_xts_tweak	TWEAKV3, TWEAKV2, TMP0, \n
+-	_next_xts_tweak	TWEAKV4, TWEAKV3, TMP0, \n
+-	_next_xts_tweak	TWEAKV5, TWEAKV4, TMP0, \n
+-	_next_xts_tweak	TWEAKV6, TWEAKV5, TMP0, \n
+-	_next_xts_tweak	TWEAKV7, TWEAKV6, TMP0, \n
+-	_next_xts_tweak	TWEAKV_NEXT, TWEAKV7, TMP0, \n
+-
+-	// Load the next source blocks into {X,Y}[0-3]
+-	ld1		{X_0.16b-Y_1.16b}, [SRC], #64
+-	ld1		{X_2.16b-Y_3.16b}, [SRC], #64
+-
+-	// XOR the source blocks with their XTS tweaks
+-	eor		TMP0.16b, X_0.16b, TWEAKV0.16b
+-	eor		Y_0.16b,  Y_0.16b, TWEAKV1.16b
+-	eor		TMP1.16b, X_1.16b, TWEAKV2.16b
+-	eor		Y_1.16b,  Y_1.16b, TWEAKV3.16b
+-	eor		TMP2.16b, X_2.16b, TWEAKV4.16b
+-	eor		Y_2.16b,  Y_2.16b, TWEAKV5.16b
+-	eor		TMP3.16b, X_3.16b, TWEAKV6.16b
+-	eor		Y_3.16b,  Y_3.16b, TWEAKV7.16b
+-
+-	/*
+-	 * De-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	uzp2		X_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp1		Y_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp2		X_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp1		Y_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp2		X_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp1		Y_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp2		X_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-	uzp1		Y_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-
+-	// Do the cipher rounds
+-	mov		x6, ROUND_KEYS
+-	mov		w7, NROUNDS
+-.Lnext_round_\@:
+-.if \decrypting
+-	ld1r		{ROUND_KEY.\lanes}, [x6]
+-	sub		x6, x6, #( \n / 8 )
+-	_speck_unround_128bytes	\n, \lanes
+-.else
+-	ld1r		{ROUND_KEY.\lanes}, [x6], #( \n / 8 )
+-	_speck_round_128bytes	\n, \lanes
+-.endif
+-	subs		w7, w7, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-	zip1		TMP0.\lanes, Y_0.\lanes, X_0.\lanes
+-	zip2		Y_0.\lanes,  Y_0.\lanes, X_0.\lanes
+-	zip1		TMP1.\lanes, Y_1.\lanes, X_1.\lanes
+-	zip2		Y_1.\lanes,  Y_1.\lanes, X_1.\lanes
+-	zip1		TMP2.\lanes, Y_2.\lanes, X_2.\lanes
+-	zip2		Y_2.\lanes,  Y_2.\lanes, X_2.\lanes
+-	zip1		TMP3.\lanes, Y_3.\lanes, X_3.\lanes
+-	zip2		Y_3.\lanes,  Y_3.\lanes, X_3.\lanes
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks calculated earlier
+-	eor		X_0.16b, TMP0.16b, TWEAKV0.16b
+-	eor		Y_0.16b, Y_0.16b,  TWEAKV1.16b
+-	eor		X_1.16b, TMP1.16b, TWEAKV2.16b
+-	eor		Y_1.16b, Y_1.16b,  TWEAKV3.16b
+-	eor		X_2.16b, TMP2.16b, TWEAKV4.16b
+-	eor		Y_2.16b, Y_2.16b,  TWEAKV5.16b
+-	eor		X_3.16b, TMP3.16b, TWEAKV6.16b
+-	eor		Y_3.16b, Y_3.16b,  TWEAKV7.16b
+-	mov		TWEAKV0.16b, TWEAKV_NEXT.16b
+-
+-	// Store the ciphertext in the destination buffer
+-	st1		{X_0.16b-Y_1.16b}, [DST], #64
+-	st1		{X_2.16b-Y_3.16b}, [DST], #64
+-
+-	// Continue if there are more 128-byte chunks remaining
+-	subs		NBYTES, NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak and return
+-.if \n == 64
+-	st1		{TWEAKV_NEXT.16b}, [TWEAK]
+-.else
+-	st1		{TWEAKV_NEXT.8b}, [TWEAK]
+-.endif
+-	ret
+-.endm
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm64/crypto/speck-neon-glue.c b/arch/arm64/crypto/speck-neon-glue.c
+deleted file mode 100644
+index 6e233aeb4ff4..000000000000
+--- a/arch/arm64/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,282 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- * (64-bit version; based on the 32-bit version)
+- *
+- * Copyright (c) 2018 Google, Inc
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_ASIMD))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index e4103b718a7c..b687c80a9c10 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -847,15 +847,29 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
+ }
+ 
+ static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_IDC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_IDC_SHIFT);
+ }
+ 
+ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_DIC_SHIFT);
+ }
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 28ad8799406f..b0db91eefbde 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -599,7 +599,7 @@ el1_undef:
+ 	inherit_daif	pstate=x23, tmp=x2
+ 	mov	x0, sp
+ 	bl	do_undefinstr
+-	ASM_BUG()
++	kernel_exit 1
+ el1_dbg:
+ 	/*
+ 	 * Debug exception handling
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index d399d459397b..9fa3d69cceaa 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -310,10 +310,12 @@ static int call_undef_hook(struct pt_regs *regs)
+ 	int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
+ 	void __user *pc = (void __user *)instruction_pointer(regs);
+ 
+-	if (!user_mode(regs))
+-		return 1;
+-
+-	if (compat_thumb_mode(regs)) {
++	if (!user_mode(regs)) {
++		__le32 instr_le;
++		if (probe_kernel_address((__force __le32 *)pc, instr_le))
++			goto exit;
++		instr = le32_to_cpu(instr_le);
++	} else if (compat_thumb_mode(regs)) {
+ 		/* 16-bit Thumb instruction */
+ 		__le16 instr_le;
+ 		if (get_user(instr_le, (__le16 __user *)pc))
+@@ -407,6 +409,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+ 		return;
+ 
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
++	BUG_ON(!user_mode(regs));
+ }
+ 
+ void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
+diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
+index 137710f4dac3..5105bb044aa5 100644
+--- a/arch/arm64/lib/Makefile
++++ b/arch/arm64/lib/Makefile
+@@ -12,7 +12,7 @@ lib-y		:= bitops.o clear_user.o delay.o copy_from_user.o	\
+ # when supported by the CPU. Result and argument registers are handled
+ # correctly, based on the function prototype.
+ lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
+-CFLAGS_atomic_ll_sc.o	:= -fcall-used-x0 -ffixed-x1 -ffixed-x2		\
++CFLAGS_atomic_ll_sc.o	:= -ffixed-x1 -ffixed-x2        		\
+ 		   -ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6		\
+ 		   -ffixed-x7 -fcall-saved-x8 -fcall-saved-x9		\
+ 		   -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12	\
+diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
+index a874e54404d1..4d4c76ab0bac 100644
+--- a/arch/m68k/configs/amiga_defconfig
++++ b/arch/m68k/configs/amiga_defconfig
+@@ -650,7 +650,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
+index 8ce39e23aa42..0fd006c19fa3 100644
+--- a/arch/m68k/configs/apollo_defconfig
++++ b/arch/m68k/configs/apollo_defconfig
+@@ -609,7 +609,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
+index 346c4e75edf8..9343e8d5cf60 100644
+--- a/arch/m68k/configs/atari_defconfig
++++ b/arch/m68k/configs/atari_defconfig
+@@ -631,7 +631,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
+index fca9c7aa71a3..a10fff6e7b50 100644
+--- a/arch/m68k/configs/bvme6000_defconfig
++++ b/arch/m68k/configs/bvme6000_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
+index f9eab174915c..db81d8ea9d03 100644
+--- a/arch/m68k/configs/hp300_defconfig
++++ b/arch/m68k/configs/hp300_defconfig
+@@ -611,7 +611,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
+index b52e597899eb..2546617a1147 100644
+--- a/arch/m68k/configs/mac_defconfig
++++ b/arch/m68k/configs/mac_defconfig
+@@ -633,7 +633,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
+index 2a84eeec5b02..dc9b0d885e8b 100644
+--- a/arch/m68k/configs/multi_defconfig
++++ b/arch/m68k/configs/multi_defconfig
+@@ -713,7 +713,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
+index 476e69994340..0d815a375ba0 100644
+--- a/arch/m68k/configs/mvme147_defconfig
++++ b/arch/m68k/configs/mvme147_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
+index 1477cda9146e..0cb8109b4c9e 100644
+--- a/arch/m68k/configs/mvme16x_defconfig
++++ b/arch/m68k/configs/mvme16x_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
+index b3a543dc48a0..e91a1c28bba7 100644
+--- a/arch/m68k/configs/q40_defconfig
++++ b/arch/m68k/configs/q40_defconfig
+@@ -624,7 +624,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
+index d543ed5dfa96..3b2f0914c34f 100644
+--- a/arch/m68k/configs/sun3_defconfig
++++ b/arch/m68k/configs/sun3_defconfig
+@@ -602,7 +602,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
+index a67e54246023..e4365ef4f5ed 100644
+--- a/arch/m68k/configs/sun3x_defconfig
++++ b/arch/m68k/configs/sun3x_defconfig
+@@ -603,7 +603,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+index 75108ec669eb..6c79e8a16a26 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
+ void (*cvmx_override_ipd_port_setup) (int ipd_port);
+ 
+ /* Port count per interface */
+-static int interface_port_count[5];
++static int interface_port_count[9];
+ 
+ /**
+  * Return the number of interfaces the chip has. Each interface
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index fac26ce64b2f..e76e88222a4b 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -262,9 +262,11 @@
+ 	 nop
+ 
+ .Lsmall_fixup\@:
++	.set		reorder
+ 	PTR_SUBU	a2, t1, a0
++	PTR_ADDIU	a2, 1
+ 	jr		ra
+-	 PTR_ADDIU	a2, 1
++	.set		noreorder
+ 
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 1b4732e20137..843825a7e6e2 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -185,7 +185,7 @@
+ 	bv,n	0(%r3)
+ 	nop
+ 	.word	0		/* checksum (will be patched) */
+-	.word	PA(os_hpmc)	/* address of handler */
++	.word	0		/* address of handler */
+ 	.word	0		/* length of handler */
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
+index 781c3b9a3e46..fde654115564 100644
+--- a/arch/parisc/kernel/hpmc.S
++++ b/arch/parisc/kernel/hpmc.S
+@@ -85,7 +85,7 @@ END(hpmc_pim_data)
+ 
+ 	.import intr_save, code
+ 	.align 16
+-ENTRY_CFI(os_hpmc)
++ENTRY(os_hpmc)
+ .os_hpmc:
+ 
+ 	/*
+@@ -302,7 +302,6 @@ os_hpmc_6:
+ 	b .
+ 	nop
+ 	.align 16	/* make function length multiple of 16 bytes */
+-ENDPROC_CFI(os_hpmc)
+ .os_hpmc_end:
+ 
+ 
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 4309ad31a874..2cb35e1e0099 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -827,7 +827,8 @@ void __init initialize_ivt(const void *iva)
+ 	 *    the Length/4 words starting at Address is zero.
+ 	 */
+ 
+-	/* Compute Checksum for HPMC handler */
++	/* Setup IVA and compute checksum for HPMC handler */
++	ivap[6] = (u32)__pa(os_hpmc);
+ 	length = os_hpmc_size;
+ 	ivap[7] = length;
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 2607d2d33405..db6cd857c8c0 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -495,12 +495,8 @@ static void __init map_pages(unsigned long start_vaddr,
+ 						pte = pte_mkhuge(pte);
+ 				}
+ 
+-				if (address >= end_paddr) {
+-					if (force)
+-						break;
+-					else
+-						pte_val(pte) = 0;
+-				}
++				if (address >= end_paddr)
++					break;
+ 
+ 				set_pte(pg_table, pte);
+ 
+diff --git a/arch/powerpc/include/asm/mpic.h b/arch/powerpc/include/asm/mpic.h
+index fad8ddd697ac..0abf2e7fd222 100644
+--- a/arch/powerpc/include/asm/mpic.h
++++ b/arch/powerpc/include/asm/mpic.h
+@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
+ #define	MPIC_REGSET_TSI108		MPIC_REGSET(1)	/* Tsi108/109 PIC */
+ 
+ /* Get the version of primary MPIC */
++#ifdef CONFIG_MPIC
+ extern u32 fsl_mpic_primary_get_version(void);
++#else
++static inline u32 fsl_mpic_primary_get_version(void)
++{
++	return 0;
++}
++#endif
+ 
+ /* Allocate the controller structure and setup the linux irq descs
+  * for the range if interrupts passed in. No HW initialization is
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index 38c5b4764bfe..a74ffd5ad15c 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -97,6 +97,13 @@ static void flush_and_reload_slb(void)
+ 
+ static void flush_erat(void)
+ {
++#ifdef CONFIG_PPC_BOOK3S_64
++	if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
++		flush_and_reload_slb();
++		return;
++	}
++#endif
++	/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
+ 	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 225bc5f91049..03dd2f9d60cf 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -242,13 +242,19 @@ static void cpu_ready_for_interrupts(void)
+ 	}
+ 
+ 	/*
+-	 * Fixup HFSCR:TM based on CPU features. The bit is set by our
+-	 * early asm init because at that point we haven't updated our
+-	 * CPU features from firmware and device-tree. Here we have,
+-	 * so let's do it.
++	 * Set HFSCR:TM based on CPU features:
++	 * In the special case of TM no suspend (P9N DD2.1), Linux is
++	 * told TM is off via the dt-ftrs but told to (partially) use
++	 * it via OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM]
++	 * will be off from dt-ftrs but we need to turn it on for the
++	 * no suspend case.
+ 	 */
+-	if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
+-		mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	if (cpu_has_feature(CPU_FTR_HVMODE)) {
++		if (cpu_has_feature(CPU_FTR_TM_COMP))
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);
++		else
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	}
+ 
+ 	/* Set IR and DR in PACA MSR */
+ 	get_paca()->kernel_msr = MSR_KERNEL;
+diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
+index 1d049c78c82a..2e45e5fbad5b 100644
+--- a/arch/powerpc/mm/hash_native_64.c
++++ b/arch/powerpc/mm/hash_native_64.c
+@@ -115,6 +115,8 @@ static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
+ 	tlbiel_hash_set_isa300(0, is, 0, 2, 1);
+ 
+ 	asm volatile("ptesync": : :"memory");
++
++	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ void hash__tlbiel_all(unsigned int action)
+@@ -140,8 +142,6 @@ void hash__tlbiel_all(unsigned int action)
+ 		tlbiel_all_isa206(POWER7_TLB_SETS, is);
+ 	else
+ 		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
+-
+-	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ static inline unsigned long  ___tlbie(unsigned long vpn, int psize,
+diff --git a/arch/s390/defconfig b/arch/s390/defconfig
+index f40600eb1762..5134c71a4937 100644
+--- a/arch/s390/defconfig
++++ b/arch/s390/defconfig
+@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_DEFLATE=m
+diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c
+index 0859cde36f75..888cc2f166db 100644
+--- a/arch/s390/kernel/sthyi.c
++++ b/arch/s390/kernel/sthyi.c
+@@ -183,17 +183,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
+ static void fill_stsi_mac(struct sthyi_sctns *sctns,
+ 			  struct sysinfo_1_1_1 *sysinfo)
+ {
++	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
++	if (*(u64 *)sctns->mac.infmname != 0)
++		sctns->mac.infmval1 |= MAC_NAME_VLD;
++
+ 	if (stsi(sysinfo, 1, 1, 1))
+ 		return;
+ 
+-	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
+-
+ 	memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
+ 	memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
+ 	memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
+ 	memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
+ 
+-	sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
++	sctns->mac.infmval1 |= MAC_ID_VLD;
+ }
+ 
+ static void fill_stsi_par(struct sthyi_sctns *sctns,
+diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
+index d4e6cd4577e5..bf0e82400358 100644
+--- a/arch/x86/boot/tools/build.c
++++ b/arch/x86/boot/tools/build.c
+@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
+ 		die("Unable to mmap '%s': %m", argv[2]);
+ 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+ 	sys_size = (sz + 15 + 4) / 16;
++#ifdef CONFIG_EFI_STUB
++	/*
++	 * COFF requires minimum 32-byte alignment of sections, and
++	 * adding a signature is problematic without that alignment.
++	 */
++	sys_size = (sys_size + 1) & ~1;
++#endif
+ 
+ 	/* Patch the setup code with the appropriate size parameters */
+ 	buf[0x1f1] = setup_sectors-1;
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index acbe7e8336d8..e4b78f962874 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 	/* Linearize assoc, if not already linear */
+ 	if (req->src->length >= assoclen && req->src->length &&
+ 		(!PageHighMem(sg_page(req->src)) ||
+-			req->src->offset + req->src->length < PAGE_SIZE)) {
++			req->src->offset + req->src->length <= PAGE_SIZE)) {
+ 		scatterwalk_start(&assoc_sg_walk, req->src);
+ 		assoc = scatterwalk_map(&assoc_sg_walk);
+ 	} else {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 64aaa3f5f36c..c8ac84e90d0f 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -220,6 +220,7 @@
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+ #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
++#define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0722b7745382..ccc23203b327 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -176,6 +176,7 @@ enum {
+ 
+ #define DR6_BD		(1 << 13)
+ #define DR6_BS		(1 << 14)
++#define DR6_BT		(1 << 15)
+ #define DR6_RTM		(1 << 16)
+ #define DR6_FIXED_1	0xfffe0ff0
+ #define DR6_INIT	0xffff0ff0
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index f6f6c63da62f..e7c8086e570e 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -215,6 +215,7 @@ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_RETPOLINE_GENERIC,
+ 	SPECTRE_V2_RETPOLINE_AMD,
+ 	SPECTRE_V2_IBRS,
++	SPECTRE_V2_IBRS_ENHANCED,
+ };
+ 
+ /* The Speculative Store Bypass disable variants */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 0af97e51e609..6f293d9a0b07 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
+  */
+ static inline void __flush_tlb_all(void)
+ {
++	/*
++	 * This is to catch users with preemption enabled and the PGE feature
++	 * so they don't trigger the warning in __native_flush_tlb().
++	 */
++	VM_WARN_ON_ONCE(preemptible());
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		__flush_tlb_global();
+ 	} else {
+diff --git a/arch/x86/kernel/check.c b/arch/x86/kernel/check.c
+index 33399426793e..cc8258a5378b 100644
+--- a/arch/x86/kernel/check.c
++++ b/arch/x86/kernel/check.c
+@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_period config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
+ 	char *end;
+ 	unsigned size;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_size config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size = memparse(arg, &end);
+ 
+ 	if (*end == '\0')
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 4891a621a752..91e5e086606c 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ 
+-/*
+- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+- * writes to SPEC_CTRL contain whatever reserved bits have been set.
+- */
+-u64 __ro_after_init x86_spec_ctrl_base;
++/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
++u64 x86_spec_ctrl_base;
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
++static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /*
+  * The vendor and possibly platform specific bits which can be modified in
+@@ -141,6 +139,7 @@ static const char *spectre_v2_strings[] = {
+ 	[SPECTRE_V2_RETPOLINE_MINIMAL_AMD]	= "Vulnerable: Minimal AMD ASM retpoline",
+ 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+ 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
++	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+ };
+ 
+ #undef pr_fmt
+@@ -324,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static bool stibp_needed(void)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_NONE)
++		return false;
++
++	if (!boot_cpu_has(X86_FEATURE_STIBP))
++		return false;
++
++	return true;
++}
++
++static void update_stibp_msr(void *info)
++{
++	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++}
++
++void arch_smt_update(void)
++{
++	u64 mask;
++
++	if (!stibp_needed())
++		return;
++
++	mutex_lock(&spec_ctrl_mutex);
++	mask = x86_spec_ctrl_base;
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		mask |= SPEC_CTRL_STIBP;
++	else
++		mask &= ~SPEC_CTRL_STIBP;
++
++	if (mask != x86_spec_ctrl_base) {
++		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
++				cpu_smt_control == CPU_SMT_ENABLED ?
++				"Enabling" : "Disabling");
++		x86_spec_ctrl_base = mask;
++		on_each_cpu(update_stibp_msr, NULL, 1);
++	}
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -343,6 +382,13 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++			mode = SPECTRE_V2_IBRS_ENHANCED;
++			/* Force it so VMEXIT will restore correctly */
++			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++			goto specv2_set_mode;
++		}
+ 		if (IS_ENABLED(CONFIG_RETPOLINE))
+ 			goto retpoline_auto;
+ 		break;
+@@ -380,6 +426,7 @@ retpoline_auto:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ 	}
+ 
++specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -402,12 +449,22 @@ retpoline_auto:
+ 
+ 	/*
+ 	 * Retpoline means the kernel is safe because it has no indirect
+-	 * branches. But firmware isn't, so use IBRS to protect that.
++	 * branches. Enhanced IBRS protects firmware too, so enable restricted
++	 * speculation around firmware calls only when Enhanced IBRS isn't
++	 * supported.
++	 *
++	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
++	 * the user might select retpoline on the kernel command line and if
++	 * the CPU supports Enhanced IBRS, the kernel might unintentionally not
++	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS)) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
++
++	/* Enable STIBP if appropriate */
++	arch_smt_update();
+ }
+ 
+ #undef pr_fmt
+@@ -798,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
++	int ret;
++
+ 	if (!boot_cpu_has_bug(bug))
+ 		return sprintf(buf, "Not affected\n");
+ 
+@@ -815,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+ 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ 			       spectre_v2_module_string());
++		return ret;
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 1ee8ea36af30..79561bfcfa87 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1015,6 +1015,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
++	if (ia32_cap & ARCH_CAP_IBRS_ALL)
++		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
++
+ 	if (x86_match_cpu(cpu_no_meltdown))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+index 749856a2e736..bc3801985d73 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+@@ -2032,6 +2032,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
+ {
+ 	if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
+ 		seq_puts(seq, ",cdp");
++
++	if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
++		seq_puts(seq, ",cdpl2");
++
++	if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
++		seq_puts(seq, ",mba_MBps");
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 23f1691670b6..61a949d84dfa 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		 * thread's fpu state, reconstruct fxstate from the fsave
+ 		 * header. Validate and sanitize the copied state.
+ 		 */
+-		struct fpu *fpu = &tsk->thread.fpu;
+ 		struct user_i387_ia32_struct env;
+ 		int err = 0;
+ 
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 203d398802a3..1467f966cfec 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ 		opt_pre_handler(&op->kp, regs);
+ 		__this_cpu_write(current_kprobe, NULL);
+ 	}
+-	preempt_enable_no_resched();
++	preempt_enable();
+ }
+ NOKPROBE_SYMBOL(optimized_callback);
+ 
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9efe130ea2e6..9fcc3ec3ab78 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3160,10 +3160,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
+ 		}
+ 	} else {
+ 		if (vmcs12->exception_bitmap & (1u << nr)) {
+-			if (nr == DB_VECTOR)
++			if (nr == DB_VECTOR) {
+ 				*exit_qual = vcpu->arch.dr6;
+-			else
++				*exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
++				*exit_qual ^= DR6_RTM;
++			} else {
+ 				*exit_qual = 0;
++			}
+ 			return 1;
+ 		}
+ 	}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 8d6c34fe49be..800de88208d7 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -2063,9 +2063,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ 
+ 	/*
+ 	 * We should perform an IPI and flush all tlbs,
+-	 * but that can deadlock->flush only current cpu:
++	 * but that can deadlock->flush only current cpu.
++	 * Preemption needs to be disabled around __flush_tlb_all() due to
++	 * CR3 reload in __native_flush_tlb().
+ 	 */
++	preempt_disable();
+ 	__flush_tlb_all();
++	preempt_enable();
+ 
+ 	arch_flush_lazy_mmu_mode();
+ }
+diff --git a/arch/x86/platform/olpc/olpc-xo1-rtc.c b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+index a2b4efddd61a..8e7ddd7e313a 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-rtc.c
++++ b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+@@ -16,6 +16,7 @@
+ 
+ #include <asm/msr.h>
+ #include <asm/olpc.h>
++#include <asm/x86_init.h>
+ 
+ static void rtc_wake_on(struct device *dev)
+ {
+@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
+ 	if (r)
+ 		return r;
+ 
++	x86_platform.legacy.rtc = 0;
++
+ 	device_init_wakeup(&xo1_rtc_device.dev, 1);
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index c85d1a88f476..f7f77023288a 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -75,7 +75,7 @@ static void __init init_pvh_bootparams(void)
+ 	 * Version 2.12 supports Xen entry point but we will use default x86/PC
+ 	 * environment (i.e. hardware_subarch 0).
+ 	 */
+-	pvh_bootparams.hdr.version = 0x212;
++	pvh_bootparams.hdr.version = (2 << 8) | 12;
+ 	pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
+ 
+ 	x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
+diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
+index 33a783c77d96..184b36922397 100644
+--- a/arch/x86/xen/platform-pci-unplug.c
++++ b/arch/x86/xen/platform-pci-unplug.c
+@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
+ {
+ 	int r;
+ 
++	/* PVH guests don't have emulated devices. */
++	if (xen_pvh_domain())
++		return;
++
+ 	/* user explicitly requested no unplug */
+ 	if (xen_emul_unplug & XEN_UNPLUG_NEVER)
+ 		return;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index cd97a62394e7..a970a2aa4456 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -9,6 +9,7 @@
+ #include <linux/log2.h>
+ #include <linux/gfp.h>
+ #include <linux/slab.h>
++#include <linux/atomic.h>
+ 
+ #include <asm/paravirt.h>
+ #include <asm/qspinlock.h>
+@@ -21,6 +22,7 @@
+ 
+ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+ static DEFINE_PER_CPU(char *, irq_name);
++static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
+ static bool xen_pvspin = true;
+ 
+ static void xen_qlock_kick(int cpu)
+@@ -40,33 +42,24 @@ static void xen_qlock_kick(int cpu)
+ static void xen_qlock_wait(u8 *byte, u8 val)
+ {
+ 	int irq = __this_cpu_read(lock_kicker_irq);
++	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
+ 
+ 	/* If kicker interrupts not initialized yet, just spin */
+-	if (irq == -1)
++	if (irq == -1 || in_nmi())
+ 		return;
+ 
+-	/* clear pending */
+-	xen_clear_irq_pending(irq);
+-	barrier();
+-
+-	/*
+-	 * We check the byte value after clearing pending IRQ to make sure
+-	 * that we won't miss a wakeup event because of the clearing.
+-	 *
+-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
+-	 * So it is effectively a memory barrier for x86.
+-	 */
+-	if (READ_ONCE(*byte) != val)
+-		return;
++	/* Detect reentry. */
++	atomic_inc(nest_cnt);
+ 
+-	/*
+-	 * If an interrupt happens here, it will leave the wakeup irq
+-	 * pending, which will cause xen_poll_irq() to return
+-	 * immediately.
+-	 */
++	/* If the irq is pending already and this is not a nested call, clear it. */
++	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
++		xen_clear_irq_pending(irq);
++	} else if (READ_ONCE(*byte) == val) {
++		/* Block until irq becomes pending (or a spurious wakeup) */
++		xen_poll_irq(irq);
++	}
+ 
+-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+-	xen_poll_irq(irq);
++	atomic_dec(nest_cnt);
+ }
+ 
+ static irqreturn_t dummy_handler(int irq, void *dev_id)
+diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
+index ca2d3b2bf2af..58722a052f9c 100644
+--- a/arch/x86/xen/xen-pvh.S
++++ b/arch/x86/xen/xen-pvh.S
+@@ -181,7 +181,7 @@ canary:
+ 	.fill 48, 1, 0
+ 
+ early_stack:
+-	.fill 256, 1, 0
++	.fill BOOT_STACK_SIZE, 1, 0
+ early_stack_end:
+ 
+ 	ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 4498c43245e2..681498e5d40a 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1178,10 +1178,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
+ 	st = bfq_entity_service_tree(entity);
+ 	is_in_service = entity == sd->in_service_entity;
+ 
+-	if (is_in_service) {
+-		bfq_calc_finish(entity, entity->service);
++	bfq_calc_finish(entity, entity->service);
++
++	if (is_in_service)
+ 		sd->in_service_entity = NULL;
+-	}
++	else
++		/*
++		 * Non in-service entity: nobody will take care of
++		 * resetting its service counter on expiration. Do it
++		 * now.
++		 */
++		entity->service = 0;
+ 
+ 	if (entity->tree == &st->active)
+ 		bfq_active_extract(st, entity);
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index d1b9dd03da25..1f196cf0aa5d 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	struct bio *bio = *biop;
+-	unsigned int granularity;
+ 	unsigned int op;
+-	int alignment;
+ 	sector_t bs_mask;
+ 
+ 	if (!q)
+@@ -54,38 +52,15 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 	if ((sector | nr_sects) & bs_mask)
+ 		return -EINVAL;
+ 
+-	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+-	granularity = max(q->limits.discard_granularity >> 9, 1U);
+-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
+-
+ 	while (nr_sects) {
+-		unsigned int req_sects;
+-		sector_t end_sect, tmp;
++		unsigned int req_sects = nr_sects;
++		sector_t end_sect;
+ 
+-		/*
+-		 * Issue in chunks of the user defined max discard setting,
+-		 * ensuring that bi_size doesn't overflow
+-		 */
+-		req_sects = min_t(sector_t, nr_sects,
+-					q->limits.max_discard_sectors);
+ 		if (!req_sects)
+ 			goto fail;
+-		if (req_sects > UINT_MAX >> 9)
+-			req_sects = UINT_MAX >> 9;
++		req_sects = min(req_sects, bio_allowed_max_sectors(q));
+ 
+-		/*
+-		 * If splitting a request, and the next starting sector would be
+-		 * misaligned, stop the discard at the previous aligned sector.
+-		 */
+ 		end_sect = sector + req_sects;
+-		tmp = end_sect;
+-		if (req_sects < nr_sects &&
+-		    sector_div(tmp, granularity) != alignment) {
+-			end_sect = end_sect - alignment;
+-			sector_div(end_sect, granularity);
+-			end_sect = end_sect * granularity + alignment;
+-			req_sects = end_sect - sector;
+-		}
+ 
+ 		bio = next_bio(bio, 0, gfp_mask);
+ 		bio->bi_iter.bi_sector = sector;
+@@ -186,7 +161,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Ensure that max_write_same_sectors doesn't overflow bi_size */
+-	max_write_same_sectors = UINT_MAX >> 9;
++	max_write_same_sectors = bio_allowed_max_sectors(q);
+ 
+ 	while (nr_sects) {
+ 		bio = next_bio(bio, 1, gfp_mask);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index aaec38cc37b8..2e042190a4f1 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -27,7 +27,8 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
+ 	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+ 	granularity = max(q->limits.discard_granularity >> 9, 1U);
+ 
+-	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
++	max_discard_sectors = min(q->limits.max_discard_sectors,
++			bio_allowed_max_sectors(q));
+ 	max_discard_sectors -= max_discard_sectors % granularity;
+ 
+ 	if (unlikely(!max_discard_sectors)) {
+diff --git a/block/blk.h b/block/blk.h
+index a8f0f7986cfd..a26a8fb257a4 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -326,6 +326,16 @@ static inline unsigned long blk_rq_deadline(struct request *rq)
+ 	return rq->__deadline & ~0x1UL;
+ }
+ 
++/*
++ * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size
++ * is defined as 'unsigned int', and it has to be aligned to the logical
++ * block size, which is the minimum unit accepted by hardware.
++ */
++static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
++{
++	return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
++}
++
+ /*
+  * Internal io_context interface
+  */
+diff --git a/block/bounce.c b/block/bounce.c
+index fd31347b7836..5849535296b9 100644
+--- a/block/bounce.c
++++ b/block/bounce.c
+@@ -31,6 +31,24 @@
+ static struct bio_set bounce_bio_set, bounce_bio_split;
+ static mempool_t page_pool, isa_page_pool;
+ 
++static void init_bounce_bioset(void)
++{
++	static bool bounce_bs_setup;
++	int ret;
++
++	if (bounce_bs_setup)
++		return;
++
++	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
++	BUG_ON(ret);
++	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
++		BUG_ON(1);
++
++	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
++	BUG_ON(ret);
++	bounce_bs_setup = true;
++}
++
+ #if defined(CONFIG_HIGHMEM)
+ static __init int init_emergency_pool(void)
+ {
+@@ -44,14 +62,7 @@ static __init int init_emergency_pool(void)
+ 	BUG_ON(ret);
+ 	pr_info("pool size: %d pages\n", POOL_SIZE);
+ 
+-	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
+-	BUG_ON(ret);
+-	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
+-		BUG_ON(1);
+-
+-	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
+-	BUG_ON(ret);
+-
++	init_bounce_bioset();
+ 	return 0;
+ }
+ 
+@@ -86,6 +97,8 @@ static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
+ 	return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
+ }
+ 
++static DEFINE_MUTEX(isa_mutex);
++
+ /*
+  * gets called "every" time someone init's a queue with BLK_BOUNCE_ISA
+  * as the max address, so check if the pool has already been created.
+@@ -94,14 +107,20 @@ int init_emergency_isa_pool(void)
+ {
+ 	int ret;
+ 
+-	if (mempool_initialized(&isa_page_pool))
++	mutex_lock(&isa_mutex);
++
++	if (mempool_initialized(&isa_page_pool)) {
++		mutex_unlock(&isa_mutex);
+ 		return 0;
++	}
+ 
+ 	ret = mempool_init(&isa_page_pool, ISA_POOL_SIZE, mempool_alloc_pages_isa,
+ 			   mempool_free_pages, (void *) 0);
+ 	BUG_ON(ret);
+ 
+ 	pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE);
++	init_bounce_bioset();
++	mutex_unlock(&isa_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index f3e40ac56d93..59e32623a7ce 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
+ 
+ 	  If unsure, say N.
+ 
+-config CRYPTO_SPECK
+-	tristate "Speck cipher algorithm"
+-	select CRYPTO_ALGAPI
+-	help
+-	  Speck is a lightweight block cipher that is tuned for optimal
+-	  performance in software (rather than hardware).
+-
+-	  Speck may not be as secure as AES, and should only be used on systems
+-	  where AES is not fast enough.
+-
+-	  See also: <https://eprint.iacr.org/2013/404.pdf>
+-
+-	  If unsure, say N.
+-
+ config CRYPTO_TEA
+ 	tristate "TEA, XTEA and XETA cipher algorithms"
+ 	select CRYPTO_ALGAPI
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 6d1d40eeb964..f6a234d08882 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
+ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
+ obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
+ obj-$(CONFIG_CRYPTO_SEED) += seed.o
+-obj-$(CONFIG_CRYPTO_SPECK) += speck.o
+ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
+ obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
+ obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
+diff --git a/crypto/aegis.h b/crypto/aegis.h
+index f1c6900ddb80..405e025fc906 100644
+--- a/crypto/aegis.h
++++ b/crypto/aegis.h
+@@ -21,7 +21,7 @@
+ 
+ union aegis_block {
+ 	__le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)];
+-	u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)];
++	__le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)];
+ 	u8 bytes[AEGIS_BLOCK_SIZE];
+ };
+ 
+@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union aegis_block *dst,
+ 				const union aegis_block *src,
+ 				const union aegis_block *key)
+ {
+-	u32 *d = dst->words32;
+ 	const u8  *s  = src->bytes;
+-	const u32 *k  = key->words32;
+ 	const u32 *t0 = crypto_ft_tab[0];
+ 	const u32 *t1 = crypto_ft_tab[1];
+ 	const u32 *t2 = crypto_ft_tab[2];
+ 	const u32 *t3 = crypto_ft_tab[3];
+ 	u32 d0, d1, d2, d3;
+ 
+-	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0];
+-	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1];
+-	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2];
+-	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3];
++	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]];
++	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]];
++	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]];
++	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]];
+ 
+-	d[0] = d0;
+-	d[1] = d1;
+-	d[2] = d2;
+-	d[3] = d3;
++	dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0];
++	dst->words32[1] = cpu_to_le32(d1) ^ key->words32[1];
++	dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2];
++	dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3];
+ }
+ 
+ #endif /* _CRYPTO_AEGIS_H */
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 954a7064a179..7657bebd060c 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -143,7 +143,12 @@ static inline int get_index128(be128 *block)
+ 		return x + ffz(val);
+ 	}
+ 
+-	return x;
++	/*
++	 * If we get here, then x == 128 and we are incrementing the counter
++	 * from all ones to all zeros. This means we must return index 127, i.e.
++	 * the one corresponding to key2*{ 1,...,1 }.
++	 */
++	return 127;
+ }
+ 
+ static int post_crypt(struct skcipher_request *req)
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 6180b2557836..8f1952d96ebd 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struct morus1280_state *state,
+ 				   struct morus1280_block *tag_xor,
+ 				   u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+ 	struct morus1280_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le64(assocbits);
+-	tmp.words[1] = cpu_to_le64(cryptbits);
++	tmp.words[0] = assoclen * 8;
++	tmp.words[1] = cryptlen * 8;
+ 	tmp.words[2] = 0;
+ 	tmp.words[3] = 0;
+ 
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index 5eede3749e64..6ccb901934c3 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct morus640_state *state,
+ 				  struct morus640_block *tag_xor,
+ 				  u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+-	u32 assocbits_lo = (u32)assocbits;
+-	u32 assocbits_hi = (u32)(assocbits >> 32);
+-	u32 cryptbits_lo = (u32)cryptbits;
+-	u32 cryptbits_hi = (u32)(cryptbits >> 32);
+-
+ 	struct morus640_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le32(assocbits_lo);
+-	tmp.words[1] = cpu_to_le32(assocbits_hi);
+-	tmp.words[2] = cpu_to_le32(cryptbits_lo);
+-	tmp.words[3] = cpu_to_le32(cryptbits_hi);
++	tmp.words[0] = lower_32_bits(assoclen * 8);
++	tmp.words[1] = upper_32_bits(assoclen * 8);
++	tmp.words[2] = lower_32_bits(cryptlen * 8);
++	tmp.words[3] = upper_32_bits(cryptlen * 8);
+ 
+ 	for (i = 0; i < MORUS_BLOCK_WORDS; i++)
+ 		state->s[4].words[i] ^= state->s[0].words[i];
+diff --git a/crypto/speck.c b/crypto/speck.c
+deleted file mode 100644
+index 58aa9f7f91f7..000000000000
+--- a/crypto/speck.c
++++ /dev/null
+@@ -1,307 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Speck: a lightweight block cipher
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Speck has 10 variants, including 5 block sizes.  For now we only implement
+- * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
+- * Speck64/128.   Speck${B}/${K} denotes the variant with a block size of B bits
+- * and a key size of K bits.  The Speck128 variants are believed to be the most
+- * secure variants, and they use the same block size and key sizes as AES.  The
+- * Speck64 variants are less secure, but on 32-bit processors are usually
+- * faster.  The remaining variants (Speck32, Speck48, and Speck96) are even less
+- * secure and/or not as well suited for implementation on either 32-bit or
+- * 64-bit processors, so are omitted.
+- *
+- * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * In a correspondence, the Speck designers have also clarified that the words
+- * should be interpreted in little-endian format, and the words should be
+- * ordered such that the first word of each block is 'y' rather than 'x', and
+- * the first key word (rather than the last) becomes the first round key.
+- */
+-
+-#include <asm/unaligned.h>
+-#include <crypto/speck.h>
+-#include <linux/bitops.h>
+-#include <linux/crypto.h>
+-#include <linux/init.h>
+-#include <linux/module.h>
+-
+-/* Speck128 */
+-
+-static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
+-{
+-	*x = ror64(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol64(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
+-{
+-	*y ^= *x;
+-	*y = ror64(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol64(*x, 8);
+-}
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck128_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
+-
+-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck128_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
+-
+-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	u64 l[3];
+-	u64 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK128_128_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		ctx->nrounds = SPECK128_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[0], &k, i);
+-		}
+-		break;
+-	case SPECK128_192_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		ctx->nrounds = SPECK128_192_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK128_256_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		l[2] = get_unaligned_le64(key + 24);
+-		ctx->nrounds = SPECK128_256_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
+-
+-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Speck64 */
+-
+-static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
+-{
+-	*x = ror32(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol32(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
+-{
+-	*y ^= *x;
+-	*y = ror32(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol32(*x, 8);
+-}
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck64_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
+-
+-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck64_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
+-
+-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	u32 l[3];
+-	u32 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK64_96_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		ctx->nrounds = SPECK64_96_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK64_128_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		l[2] = get_unaligned_le32(key + 12);
+-		ctx->nrounds = SPECK64_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
+-
+-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Algorithm definitions */
+-
+-static struct crypto_alg speck_algs[] = {
+-	{
+-		.cra_name		= "speck128",
+-		.cra_driver_name	= "speck128-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK128_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck128_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK128_128_KEY_SIZE,
+-				.cia_max_keysize	= SPECK128_256_KEY_SIZE,
+-				.cia_setkey		= speck128_setkey,
+-				.cia_encrypt		= speck128_encrypt,
+-				.cia_decrypt		= speck128_decrypt
+-			}
+-		}
+-	}, {
+-		.cra_name		= "speck64",
+-		.cra_driver_name	= "speck64-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK64_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck64_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK64_96_KEY_SIZE,
+-				.cia_max_keysize	= SPECK64_128_KEY_SIZE,
+-				.cia_setkey		= speck64_setkey,
+-				.cia_encrypt		= speck64_encrypt,
+-				.cia_decrypt		= speck64_decrypt
+-			}
+-		}
+-	}
+-};
+-
+-static int __init speck_module_init(void)
+-{
+-	return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_module_exit(void)
+-{
+-	crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_module_init);
+-module_exit(speck_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (generic)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("speck128");
+-MODULE_ALIAS_CRYPTO("speck128-generic");
+-MODULE_ALIAS_CRYPTO("speck64");
+-MODULE_ALIAS_CRYPTO("speck64-generic");
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index d5bcdd905007..ee4f2a175bda 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1097,6 +1097,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
+ 			break;
+ 		}
+ 
++		if (speed[i].klen)
++			crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
++
+ 		pr_info("test%3u "
+ 			"(%5u byte blocks,%5u bytes per update,%4u updates): ",
+ 			i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 11e45352fd0b..1ed03bf6a977 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -3000,18 +3000,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(sm4_tv_template)
+ 		}
+-	}, {
+-		.alg = "ecb(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_tv_template)
+-		}
+-	}, {
+-		.alg = "ecb(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_tv_template)
+-		}
+ 	}, {
+ 		.alg = "ecb(tea)",
+ 		.test = alg_test_skcipher,
+@@ -3539,18 +3527,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(serpent_xts_tv_template)
+ 		}
+-	}, {
+-		.alg = "xts(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_xts_tv_template)
+-		}
+-	}, {
+-		.alg = "xts(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_xts_tv_template)
+-		}
+ 	}, {
+ 		.alg = "xts(twofish)",
+ 		.test = alg_test_skcipher,
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index b950aa234e43..36572c665026 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -10141,744 +10141,6 @@ static const struct cipher_testvec sm4_tv_template[] = {
+ 	}
+ };
+ 
+-/*
+- * Speck test vectors taken from the original paper:
+- * "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * Note that the paper does not make byte and word order clear.  But it was
+- * confirmed with the authors that the intended orders are little endian byte
+- * order and (y, x) word order.  Equivalently, the printed test vectors, when
+- * looking at only the bytes (ignoring the whitespace that divides them into
+- * words), are backwards: the left-most byte is actually the one with the
+- * highest memory address, while the right-most byte is actually the one with
+- * the lowest memory address.
+- */
+-
+-static const struct cipher_testvec speck128_tv_template[] = {
+-	{ /* Speck128/128 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+-		.klen	= 16,
+-		.ptext	= "\x20\x6d\x61\x64\x65\x20\x69\x74"
+-			  "\x20\x65\x71\x75\x69\x76\x61\x6c",
+-		.ctext	= "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
+-			  "\x65\x32\x78\x79\x51\x98\x5d\xa6",
+-		.len	= 16,
+-	}, { /* Speck128/192 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17",
+-		.klen	= 24,
+-		.ptext	= "\x65\x6e\x74\x20\x74\x6f\x20\x43"
+-			  "\x68\x69\x65\x66\x20\x48\x61\x72",
+-		.ctext	= "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
+-			  "\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
+-		.len	= 16,
+-	}, { /* Speck128/256 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+-		.klen	= 32,
+-		.ptext	= "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
+-			  "\x49\x6e\x20\x74\x68\x6f\x73\x65",
+-		.ctext	= "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
+-			  "\x3e\xf5\xc0\x05\x04\x01\x09\x41",
+-		.len	= 16,
+-	},
+-};
+-
+-/*
+- * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck128 as the cipher
+- */
+-static const struct cipher_testvec speck128_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+-			  "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+-			  "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
+-			  "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
+-			  "\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
+-			  "\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
+-			  "\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x21\x52\x84\x15\xd1\xf7\x21\x55"
+-			  "\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
+-			  "\xda\x63\xb2\xf1\x82\xb0\x89\x59"
+-			  "\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
+-			  "\x53\xd0\xed\x2d\x30\xc1\x20\xef"
+-			  "\x70\x67\x5e\xff\x09\x70\xbb\xc1"
+-			  "\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
+-			  "\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
+-			  "\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
+-			  "\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
+-			  "\x19\xc5\x58\x84\x63\xb9\x12\x68"
+-			  "\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
+-			  "\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
+-			  "\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
+-			  "\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
+-			  "\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
+-			  "\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
+-			  "\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
+-			  "\xa7\x49\xa0\x0e\x09\x33\x85\x50"
+-			  "\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
+-			  "\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
+-			  "\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
+-			  "\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
+-			  "\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
+-			  "\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
+-			  "\xa5\x20\xac\x24\x1c\x73\x59\x73"
+-			  "\x58\x61\x3a\x87\x58\xb3\x20\x56"
+-			  "\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
+-			  "\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
+-			  "\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
+-			  "\x09\x35\x71\x50\x65\xac\x92\xe3"
+-			  "\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
+-			  "\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
+-			  "\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
+-			  "\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
+-			  "\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
+-			  "\x2a\x26\xcc\x49\x14\x6d\x55\x01"
+-			  "\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
+-			  "\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
+-			  "\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
+-			  "\x24\xa9\x60\xa4\x97\x85\x86\x2a"
+-			  "\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
+-			  "\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
+-			  "\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
+-			  "\x87\x30\xac\xd5\xea\x73\x49\x10"
+-			  "\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
+-			  "\x66\x02\x35\x3d\x60\x06\x36\x4f"
+-			  "\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
+-			  "\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
+-			  "\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
+-			  "\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
+-			  "\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
+-			  "\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
+-			  "\xbd\x48\x50\xcd\x75\x70\xc4\x62"
+-			  "\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
+-			  "\x51\x66\x02\x69\x04\x97\x36\xd4"
+-			  "\x75\xae\x0b\xa3\x42\xf8\xca\x79"
+-			  "\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
+-			  "\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
+-			  "\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
+-			  "\x03\xe7\x05\x39\xf5\x05\x26\xee"
+-			  "\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
+-			  "\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
+-			  "\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
+-			  "\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
+-			  "\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
+-			  "\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95"
+-			  "\x02\x88\x41\x97\x16\x93\x99\x37"
+-			  "\x51\x05\x82\x09\x74\x94\x45\x92",
+-		.klen	= 64,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
+-			  "\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
+-			  "\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
+-			  "\x92\x99\xde\xd3\x76\xed\xcd\x63"
+-			  "\x64\x3a\x22\x57\xc1\x43\x49\xd4"
+-			  "\x79\x36\x31\x19\x62\xae\x10\x7e"
+-			  "\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
+-			  "\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
+-			  "\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
+-			  "\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
+-			  "\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
+-			  "\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
+-			  "\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
+-			  "\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
+-			  "\x85\x3c\x4f\x26\x64\x85\xbc\x68"
+-			  "\xb0\xe0\x86\x5e\x26\x41\xce\x11"
+-			  "\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
+-			  "\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
+-			  "\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
+-			  "\x14\x4d\xf0\x74\x37\xfd\x07\x25"
+-			  "\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
+-			  "\x1b\x83\x4d\x15\x83\xac\x57\xa0"
+-			  "\xac\xa5\xd0\x38\xef\x19\x56\x53"
+-			  "\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
+-			  "\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
+-			  "\xed\x22\x34\x1c\x5d\xed\x17\x06"
+-			  "\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
+-			  "\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
+-			  "\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
+-			  "\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
+-			  "\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
+-			  "\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
+-			  "\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
+-			  "\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
+-			  "\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
+-			  "\x19\xf5\x94\xf9\xd2\x00\x33\x37"
+-			  "\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
+-			  "\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
+-			  "\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
+-			  "\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
+-			  "\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
+-			  "\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
+-			  "\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
+-			  "\xad\xb7\x73\xf8\x78\x12\xc8\x59"
+-			  "\x17\x80\x4c\x57\x39\xf1\x6d\x80"
+-			  "\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
+-			  "\xec\xce\xb7\xc8\x02\x8a\xed\x53"
+-			  "\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
+-			  "\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
+-			  "\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
+-			  "\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
+-			  "\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
+-			  "\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
+-			  "\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
+-			  "\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
+-			  "\xcc\x1f\x48\x49\x65\x47\x75\xe9"
+-			  "\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
+-			  "\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
+-			  "\xa1\x51\x89\x3b\xeb\x96\x42\xac"
+-			  "\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
+-			  "\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
+-			  "\x66\x8d\x13\xca\xe0\x59\x2a\x00"
+-			  "\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
+-			  "\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+-static const struct cipher_testvec speck64_tv_template[] = {
+-	{ /* Speck64/96 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13",
+-		.klen	= 12,
+-		.ptext	= "\x65\x61\x6e\x73\x20\x46\x61\x74",
+-		.ctext	= "\x6c\x94\x75\x41\xec\x52\x79\x9f",
+-		.len	= 8,
+-	}, { /* Speck64/128 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13\x18\x19\x1a\x1b",
+-		.klen	= 16,
+-		.ptext	= "\x2d\x43\x75\x74\x74\x65\x72\x3b",
+-		.ctext	= "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
+-		.len	= 8,
+-	},
+-};
+-
+-/*
+- * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
+- */
+-static const struct cipher_testvec speck64_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
+-			  "\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
+-			  "\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
+-			  "\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x12\x56\x73\xcd\x15\x87\xa8\x59"
+-			  "\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
+-			  "\xb3\x12\x69\x7e\x36\xeb\x52\xff"
+-			  "\x62\xdd\xba\x90\xb3\xe1\xee\x99",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
+-			  "\x27\x36\xc0\xbf\x5d\xea\x36\x37"
+-			  "\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
+-			  "\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
+-			  "\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
+-			  "\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
+-			  "\x11\xc7\x39\x96\xd0\x95\xf4\x56"
+-			  "\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
+-			  "\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
+-			  "\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
+-			  "\xd5\xd2\x13\x86\x94\x34\xe9\x62"
+-			  "\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
+-			  "\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
+-			  "\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
+-			  "\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
+-			  "\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
+-			  "\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
+-			  "\x29\xe5\xbe\x54\x30\xcb\x46\x95"
+-			  "\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
+-			  "\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
+-			  "\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
+-			  "\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
+-			  "\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
+-			  "\x82\xc0\x37\x27\xfc\x91\xa7\x05"
+-			  "\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
+-			  "\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
+-			  "\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
+-			  "\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
+-			  "\x07\xff\xf3\x72\x74\x48\xb5\x40"
+-			  "\x50\xb5\xdd\x90\x43\x31\x18\x15"
+-			  "\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a"
+-			  "\x29\x93\x90\x8b\xda\x07\xf0\x35"
+-			  "\x6d\x90\x88\x09\x4e\x83\xf5\x5b"
+-			  "\x94\x12\xbb\x33\x27\x1d\x3f\x23"
+-			  "\x51\xa8\x7c\x07\xa2\xae\x77\xa6"
+-			  "\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f"
+-			  "\x66\xdd\xcd\x75\x24\x8b\x33\xf7"
+-			  "\x20\xdb\x83\x9b\x4f\x11\x63\x6e"
+-			  "\xcf\x37\xef\xc9\x11\x01\x5c\x45"
+-			  "\x32\x99\x7c\x3c\x9e\x42\x89\xe3"
+-			  "\x70\x6d\x15\x9f\xb1\xe6\xb6\x05"
+-			  "\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc"
+-			  "\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d"
+-			  "\xa0\xa8\x89\x3b\x73\x39\xa5\x94"
+-			  "\x4c\xa4\xa6\xbb\xa7\x14\x46\x89"
+-			  "\x10\xff\xaf\xef\xca\xdd\x4f\x80"
+-			  "\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7"
+-			  "\x33\xca\x00\x8b\x8b\x3f\xea\xec"
+-			  "\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f"
+-			  "\x22\x31\xe1\x0e\xfe\x5a\x04\xd5"
+-			  "\x64\xa3\xf1\x1a\x76\x28\xcc\x35"
+-			  "\x36\xa7\x0a\x74\xf7\x1c\x44\x9b"
+-			  "\xc7\x1b\x53\x17\x02\xea\xd1\xad"
+-			  "\x13\x51\x73\xc0\xa0\xb2\x05\x32"
+-			  "\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19"
+-			  "\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d"
+-			  "\x59\xda\xee\x1a\x22\x18\xda\x0d"
+-			  "\x88\x0f\x55\x8b\x72\x62\xfd\xc1"
+-			  "\x69\x13\xcd\x0d\x5f\xc1\x09\x52"
+-			  "\xee\xd6\xe3\x84\x4d\xee\xf6\x88"
+-			  "\xaf\x83\xdc\x76\xf4\xc0\x93\x3f"
+-			  "\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54"
+-			  "\x7d\x69\x8d\x00\x62\x77\x0d\x14"
+-			  "\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3"
+-			  "\x50\xf7\x5f\xf4\xc2\xca\x41\x97"
+-			  "\x37\xbe\x75\x74\xcd\xf0\x75\x6e"
+-			  "\x25\x23\x94\xbd\xda\x8d\xb0\xd4",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27",
+-		.klen	= 32,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x55\xed\x71\xd3\x02\x8e\x15\x3b"
+-			  "\xc6\x71\x29\x2d\x3e\x89\x9f\x59"
+-			  "\x68\x6a\xcc\x8a\x56\x97\xf3\x95"
+-			  "\x4e\x51\x08\xda\x2a\xf8\x6f\x3c"
+-			  "\x78\x16\xea\x80\xdb\x33\x75\x94"
+-			  "\xf9\x29\xc4\x2b\x76\x75\x97\xc7"
+-			  "\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b"
+-			  "\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee"
+-			  "\xad\x3c\x76\x7c\xe6\x27\xa2\x2a"
+-			  "\xe4\x66\xe1\xab\xa2\x39\xfc\x7c"
+-			  "\xf5\xec\x32\x74\xa3\xb8\x03\x88"
+-			  "\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f"
+-			  "\x84\x5e\x46\xed\x20\x89\xb6\x44"
+-			  "\x8d\xd0\xed\x54\x47\x16\xbe\x95"
+-			  "\x8a\xb3\x6b\x72\xc4\x32\x52\x13"
+-			  "\x1b\xb0\x82\xbe\xac\xf9\x70\xa6"
+-			  "\x44\x18\xdd\x8c\x6e\xca\x6e\x45"
+-			  "\x8f\x1e\x10\x07\x57\x25\x98\x7b"
+-			  "\x17\x8c\x78\xdd\x80\xa7\xd9\xd8"
+-			  "\x63\xaf\xb9\x67\x57\xfd\xbc\xdb"
+-			  "\x44\xe9\xc5\x65\xd1\xc7\x3b\xff"
+-			  "\x20\xa0\x80\x1a\xc3\x9a\xad\x5e"
+-			  "\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d"
+-			  "\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65"
+-			  "\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a"
+-			  "\x09\x3c\x3d\x71\x7f\x0c\x84\x2a"
+-			  "\xc8\x48\x52\x1a\xc2\xd5\xd6\x78"
+-			  "\x92\x1e\xa0\x90\x2e\xea\xf0\xf3"
+-			  "\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e"
+-			  "\x35\x10\x30\x82\x0d\xe7\xc5\x9b"
+-			  "\xde\x44\x18\xbd\x9f\xd1\x45\xa9"
+-			  "\x7b\x7a\x4a\xad\x35\x65\x27\xca"
+-			  "\xb2\xc3\xd4\x9b\x71\x86\x70\xee"
+-			  "\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf"
+-			  "\xfc\x42\xc8\x31\x59\xbe\x16\x60"
+-			  "\x4f\xf9\xfa\x12\xea\xd0\xa7\x14"
+-			  "\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef"
+-			  "\x52\x7f\x29\x51\x94\x20\x67\x3c"
+-			  "\xd1\xaf\x77\x9f\x22\x5a\x4e\x63"
+-			  "\xe7\xff\x73\x25\xd1\xdd\x96\x8a"
+-			  "\x98\x52\x6d\xf3\xac\x3e\xf2\x18"
+-			  "\x6d\xf6\x0a\x29\xa6\x34\x3d\xed"
+-			  "\xe3\x27\x0d\x9d\x0a\x02\x44\x7e"
+-			  "\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad"
+-			  "\x91\xe6\x4d\x81\x8c\x5c\x59\xaa"
+-			  "\xfb\xeb\x56\x53\xd2\x7d\x4c\x81"
+-			  "\x65\x53\x0f\x41\x11\xbd\x98\x99"
+-			  "\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d"
+-			  "\x84\x98\xf9\x34\xed\x33\x2a\x1f"
+-			  "\x82\xed\xc1\x73\x98\xd3\x02\xdc"
+-			  "\xe6\xc2\x33\x1d\xa2\xb4\xca\x76"
+-			  "\x63\x51\x34\x9d\x96\x12\xae\xce"
+-			  "\x83\xc9\x76\x5e\xa4\x1b\x53\x37"
+-			  "\x17\xd5\xc0\x80\x1d\x62\xf8\x3d"
+-			  "\x54\x27\x74\xbb\x10\x86\x57\x46"
+-			  "\x68\xe1\xed\x14\xe7\x9d\xfc\x84"
+-			  "\x47\xbc\xc2\xf8\x19\x4b\x99\xcf"
+-			  "\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d"
+-			  "\x7b\x4f\x38\x55\x36\x71\x64\xc1"
+-			  "\xfc\x5c\x75\x52\x33\x02\x18\xf8"
+-			  "\x17\xe1\x2b\xc2\x43\x39\xbd\x76"
+-			  "\x9b\x63\x76\x32\x2f\x19\x72\x10"
+-			  "\x9f\x21\x0c\xf1\x66\x50\x7f\xa5"
+-			  "\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+ /* Cast6 test vectors from RFC 2612 */
+ static const struct cipher_testvec cast6_tv_template[] = {
+ 	{
+diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
+index cf4fc0161164..e43cb71b6972 100644
+--- a/drivers/acpi/acpi_lpit.c
++++ b/drivers/acpi/acpi_lpit.c
+@@ -117,11 +117,17 @@ static void lpit_update_residency(struct lpit_residency_info *info,
+ 		if (!info->iomem_addr)
+ 			return;
+ 
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_system_residency_us.attr,
+ 					"cpuidle");
+ 	} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_cpu_residency_us.attr,
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index bf64cfa30feb..969bf8d515c0 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -327,9 +327,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
+ 	{ "INT33FC", },
+ 
+ 	/* Braswell LPSS devices */
++	{ "80862286", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+ 	{ "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
+ 	{ "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
++	{ "808622C0", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
+ 
+ 	/* Broadwell LPSS devices */
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 449d86d39965..fc447410ae4d 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -643,7 +643,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 
+ 	status = acpi_get_type(handle, &acpi_type);
+ 	if (ACPI_FAILURE(status))
+-		return false;
++		return status;
+ 
+ 	switch (acpi_type) {
+ 	case ACPI_TYPE_PROCESSOR:
+@@ -663,11 +663,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 	}
+ 
+ 	processor_validated_ids_update(uid);
+-	return true;
++	return AE_OK;
+ 
+ err:
++	/* Exit on error, but don't abort the namespace walk */
+ 	acpi_handle_info(handle, "Invalid processor object\n");
+-	return false;
++	return AE_OK;
+ 
+ }
+ 
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index e9fb0bf3c8d2..78f9de260d5f 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
++	status = acpi_ut_add_address_range(obj_desc->region.space_id,
++					   obj_desc->region.address,
++					   obj_desc->region.length, node);
++
+ 	/* Now the address and length are valid for this opregion */
+ 
+ 	obj_desc->region.flags |= AOPOBJ_DATA_VALID;
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 0f0bdc9d24c6..314276779f57 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -417,6 +417,7 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 	union acpi_parse_object *op = NULL;	/* current op */
+ 	struct acpi_parse_state *parser_state;
+ 	u8 *aml_op_start = NULL;
++	u8 opcode_length;
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ps_parse_loop, walk_state);
+ 
+@@ -540,8 +541,19 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 						    "Skip parsing opcode %s",
+ 						    acpi_ps_get_opcode_name
+ 						    (walk_state->opcode)));
++
++					/*
++					 * Determine the opcode length before skipping the opcode.
++					 * An opcode can be 1 byte or 2 bytes in length.
++					 */
++					opcode_length = 1;
++					if ((walk_state->opcode & 0xFF00) ==
++					    AML_EXTENDED_OPCODE) {
++						opcode_length = 2;
++					}
+ 					walk_state->parser_state.aml =
+-					    walk_state->aml + 1;
++					    walk_state->aml + opcode_length;
++
+ 					walk_state->parser_state.aml =
+ 					    acpi_ps_get_next_package_end
+ 					    (&walk_state->parser_state);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7c479002e798..c0db96e8a81a 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2456,7 +2456,8 @@ static int ars_get_cap(struct acpi_nfit_desc *acpi_desc,
+ 	return cmd_rc;
+ }
+ 
+-static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa)
++static int ars_start(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa, enum nfit_ars_state req_type)
+ {
+ 	int rc;
+ 	int cmd_rc;
+@@ -2467,7 +2468,7 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa
+ 	memset(&ars_start, 0, sizeof(ars_start));
+ 	ars_start.address = spa->address;
+ 	ars_start.length = spa->length;
+-	if (test_bit(ARS_SHORT, &nfit_spa->ars_state))
++	if (req_type == ARS_REQ_SHORT)
+ 		ars_start.flags = ND_ARS_RETURN_PREV_DATA;
+ 	if (nfit_spa_type(spa) == NFIT_SPA_PM)
+ 		ars_start.type = ND_ARS_PERSISTENT;
+@@ -2524,6 +2525,15 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_region *nd_region = nfit_spa->nd_region;
+ 	struct device *dev;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++	/*
++	 * Only advance the ARS state for ARS runs initiated by the
++	 * kernel, ignore ARS results from BIOS initiated runs for scrub
++	 * completion tracking.
++	 */
++	if (acpi_desc->scrub_spa != nfit_spa)
++		return;
++
+ 	if ((ars_status->address >= spa->address && ars_status->address
+ 				< spa->address + spa->length)
+ 			|| (ars_status->address < spa->address)) {
+@@ -2543,23 +2553,13 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	} else
+ 		return;
+ 
+-	if (test_bit(ARS_DONE, &nfit_spa->ars_state))
+-		return;
+-
+-	if (!test_and_clear_bit(ARS_REQ, &nfit_spa->ars_state))
+-		return;
+-
++	acpi_desc->scrub_spa = NULL;
+ 	if (nd_region) {
+ 		dev = nd_region_dev(nd_region);
+ 		nvdimm_region_notify(nd_region, NVDIMM_REVALIDATE_POISON);
+ 	} else
+ 		dev = acpi_desc->dev;
+-
+-	dev_dbg(dev, "ARS: range %d %s complete\n", spa->range_index,
+-			test_bit(ARS_SHORT, &nfit_spa->ars_state)
+-			? "short" : "long");
+-	clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-	set_bit(ARS_DONE, &nfit_spa->ars_state);
++	dev_dbg(dev, "ARS: range %d complete\n", spa->range_index);
+ }
+ 
+ static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
+@@ -2840,46 +2840,55 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
+ 	return 0;
+ }
+ 
+-static int ars_register(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa,
+-		int *query_rc)
++static int ars_register(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa)
+ {
+-	int rc = *query_rc;
++	int rc;
+ 
+-	if (no_init_ars)
++	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ 
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+-	set_bit(ARS_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+ 
+-	switch (rc) {
++	switch (acpi_nfit_query_poison(acpi_desc)) {
+ 	case 0:
+ 	case -EAGAIN:
+-		rc = ars_start(acpi_desc, nfit_spa);
+-		if (rc == -EBUSY) {
+-			*query_rc = rc;
++		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
++		/* shouldn't happen, try again later */
++		if (rc == -EBUSY)
+ 			break;
+-		} else if (rc == 0) {
+-			rc = acpi_nfit_query_poison(acpi_desc);
+-		} else {
++		if (rc) {
+ 			set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 			break;
+ 		}
+-		if (rc == -EAGAIN)
+-			clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-		else if (rc == 0)
+-			ars_complete(acpi_desc, nfit_spa);
++		clear_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++		rc = acpi_nfit_query_poison(acpi_desc);
++		if (rc)
++			break;
++		acpi_desc->scrub_spa = nfit_spa;
++		ars_complete(acpi_desc, nfit_spa);
++		/*
++		 * If ars_complete() says we didn't complete the
++		 * short scrub, we'll try again with a long
++		 * request.
++		 */
++		acpi_desc->scrub_spa = NULL;
+ 		break;
+ 	case -EBUSY:
++	case -ENOMEM:
+ 	case -ENOSPC:
++		/*
++		 * BIOS was using ARS, wait for it to complete (or
++		 * resources to become available) and then perform our
++		 * own scrubs.
++		 */
+ 		break;
+ 	default:
+ 		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		break;
+ 	}
+ 
+-	if (test_and_clear_bit(ARS_DONE, &nfit_spa->ars_state))
+-		set_bit(ARS_REQ, &nfit_spa->ars_state);
+-
+ 	return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ }
+ 
+@@ -2901,6 +2910,8 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 	struct device *dev = acpi_desc->dev;
+ 	struct nfit_spa *nfit_spa;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++
+ 	if (acpi_desc->cancel)
+ 		return 0;
+ 
+@@ -2924,21 +2935,49 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 
+ 	ars_complete_all(acpi_desc);
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
++		enum nfit_ars_state req_type;
++		int rc;
++
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+-		if (test_bit(ARS_REQ, &nfit_spa->ars_state)) {
+-			int rc = ars_start(acpi_desc, nfit_spa);
+-
+-			clear_bit(ARS_DONE, &nfit_spa->ars_state);
+-			dev = nd_region_dev(nfit_spa->nd_region);
+-			dev_dbg(dev, "ARS: range %d ARS start (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			if (rc == 0 || rc == -EBUSY)
+-				return 1;
+-			dev_err(dev, "ARS: range %d ARS failed (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			set_bit(ARS_FAILED, &nfit_spa->ars_state);
++
++		/* prefer short ARS requests first */
++		if (test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state))
++			req_type = ARS_REQ_SHORT;
++		else if (test_bit(ARS_REQ_LONG, &nfit_spa->ars_state))
++			req_type = ARS_REQ_LONG;
++		else
++			continue;
++		rc = ars_start(acpi_desc, nfit_spa, req_type);
++
++		dev = nd_region_dev(nfit_spa->nd_region);
++		dev_dbg(dev, "ARS: range %d ARS start %s (%d)\n",
++				nfit_spa->spa->range_index,
++				req_type == ARS_REQ_SHORT ? "short" : "long",
++				rc);
++		/*
++		 * Hmm, we raced someone else starting ARS? Try again in
++		 * a bit.
++		 */
++		if (rc == -EBUSY)
++			return 1;
++		if (rc == 0) {
++			dev_WARN_ONCE(dev, acpi_desc->scrub_spa,
++					"scrub start while range %d active\n",
++					acpi_desc->scrub_spa->spa->range_index);
++			clear_bit(req_type, &nfit_spa->ars_state);
++			acpi_desc->scrub_spa = nfit_spa;
++			/*
++			 * Consider this spa last for future scrub
++			 * requests
++			 */
++			list_move_tail(&nfit_spa->list, &acpi_desc->spas);
++			return 1;
+ 		}
++
++		dev_err(dev, "ARS: range %d ARS failed (%d)\n",
++				nfit_spa->spa->range_index, rc);
++		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	}
+ 	return 0;
+ }
+@@ -2994,6 +3033,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_cmd_ars_cap ars_cap;
+ 	int rc;
+ 
++	set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	memset(&ars_cap, 0, sizeof(ars_cap));
+ 	rc = ars_get_cap(acpi_desc, &ars_cap, nfit_spa);
+ 	if (rc < 0)
+@@ -3010,16 +3050,14 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	nfit_spa->clear_err_unit = ars_cap.clear_err_unit;
+ 	acpi_desc->max_ars = max(nfit_spa->max_ars, acpi_desc->max_ars);
+ 	clear_bit(ARS_FAILED, &nfit_spa->ars_state);
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+ }
+ 
+ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ {
+ 	struct nfit_spa *nfit_spa;
+-	int rc, query_rc;
++	int rc;
+ 
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
+-		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+@@ -3028,20 +3066,12 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Reap any results that might be pending before starting new
+-	 * short requests.
+-	 */
+-	query_rc = acpi_nfit_query_poison(acpi_desc);
+-	if (query_rc == 0)
+-		ars_complete_all(acpi_desc);
+-
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+ 			/* register regions and kick off initial ARS run */
+-			rc = ars_register(acpi_desc, nfit_spa, &query_rc);
++			rc = ars_register(acpi_desc, nfit_spa);
+ 			if (rc)
+ 				return rc;
+ 			break;
+@@ -3236,7 +3266,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ 	return 0;
+ }
+ 
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type)
+ {
+ 	struct device *dev = acpi_desc->dev;
+ 	int scheduled = 0, busy = 0;
+@@ -3256,13 +3287,10 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+ 
+-		if (test_and_set_bit(ARS_REQ, &nfit_spa->ars_state))
++		if (test_and_set_bit(req_type, &nfit_spa->ars_state))
+ 			busy++;
+-		else {
+-			if (test_bit(ARS_SHORT, &flags))
+-				set_bit(ARS_SHORT, &nfit_spa->ars_state);
++		else
+ 			scheduled++;
+-		}
+ 	}
+ 	if (scheduled) {
+ 		sched_ars(acpi_desc);
+@@ -3448,10 +3476,11 @@ static void acpi_nfit_update_notify(struct device *dev, acpi_handle handle)
+ static void acpi_nfit_uc_error_notify(struct device *dev, acpi_handle handle)
+ {
+ 	struct acpi_nfit_desc *acpi_desc = dev_get_drvdata(dev);
+-	unsigned long flags = (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON) ?
+-			0 : 1 << ARS_SHORT;
+ 
+-	acpi_nfit_ars_rescan(acpi_desc, flags);
++	if (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON)
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG);
++	else
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_SHORT);
+ }
+ 
+ void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event)
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index a97ff42fe311..02c10de50386 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -118,9 +118,8 @@ enum nfit_dimm_notifiers {
+ };
+ 
+ enum nfit_ars_state {
+-	ARS_REQ,
+-	ARS_DONE,
+-	ARS_SHORT,
++	ARS_REQ_SHORT,
++	ARS_REQ_LONG,
+ 	ARS_FAILED,
+ };
+ 
+@@ -197,6 +196,7 @@ struct acpi_nfit_desc {
+ 	struct device *dev;
+ 	u8 ars_start_flags;
+ 	struct nd_cmd_ars_status *ars_status;
++	struct nfit_spa *scrub_spa;
+ 	struct delayed_work dwork;
+ 	struct list_head list;
+ 	struct kernfs_node *scrub_count_state;
+@@ -251,7 +251,8 @@ struct nfit_blk {
+ 
+ extern struct list_head acpi_descs;
+ extern struct mutex acpi_desc_lock;
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags);
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type);
+ 
+ #ifdef CONFIG_X86_MCE
+ void nfit_mce_register(void);
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 8df9abfa947b..ed73f6fb0779 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -617,15 +617,18 @@ void acpi_os_stall(u32 us)
+ }
+ 
+ /*
+- * Support ACPI 3.0 AML Timer operand
+- * Returns 64-bit free-running, monotonically increasing timer
+- * with 100ns granularity
++ * Support ACPI 3.0 AML Timer operand. Returns a 64-bit free-running,
++ * monotonically increasing timer with 100ns granularity. Do not use
++ * ktime_get() to implement this function because this function may get
++ * called after timekeeping has been suspended. Note: calling this function
++ * after timekeeping has been suspended may lead to unexpected results
++ * because when timekeeping is suspended the jiffies counter is not
++ * incremented. See also timekeeping_suspend().
+  */
+ u64 acpi_os_get_timer(void)
+ {
+-	u64 time_ns = ktime_to_ns(ktime_get());
+-	do_div(time_ns, 100);
+-	return time_ns;
++	return (get_jiffies_64() - INITIAL_JIFFIES) *
++		(ACPI_100NSEC_PER_SEC / HZ);
+ }
+ 
+ acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index d1e26cb599bf..da031b1df6f5 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -338,9 +338,6 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
+ 	return found;
+ }
+ 
+-/* total number of attributes checked by the properties code */
+-#define PPTT_CHECKED_ATTRIBUTES 4
+-
+ /**
+  * update_cache_properties() - Update cacheinfo for the given processor
+  * @this_leaf: Kernel cache info structure being updated
+@@ -357,25 +354,15 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 				    struct acpi_pptt_cache *found_cache,
+ 				    struct acpi_pptt_processor *cpu_node)
+ {
+-	int valid_flags = 0;
+-
+ 	this_leaf->fw_token = cpu_node;
+-	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
+ 		this_leaf->size = found_cache->size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) {
++	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
+ 		this_leaf->coherency_line_size = found_cache->line_size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) {
++	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
+ 		this_leaf->number_of_sets = found_cache->number_of_sets;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
+ 		this_leaf->ways_of_associativity = found_cache->associativity;
+-		valid_flags++;
+-	}
+ 	if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) {
+ 		switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
+ 		case ACPI_PPTT_CACHE_POLICY_WT:
+@@ -402,11 +389,17 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 		}
+ 	}
+ 	/*
+-	 * If the above flags are valid, and the cache type is NOCACHE
+-	 * update the cache type as well.
++	 * If cache type is NOCACHE, then the cache hasn't been specified
++	 * via other mechanisms.  Update the type if a cache type has been
++	 * provided.
++	 *
++	 * Note, we assume such caches are unified based on conventional system
++	 * design and known examples.  Significant work is required elsewhere to
++	 * fully support data/instruction only type caches which are only
++	 * specified in PPTT.
+ 	 */
+ 	if (this_leaf->type == CACHE_TYPE_NOCACHE &&
+-	    valid_flags == PPTT_CHECKED_ATTRIBUTES)
++	    found_cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)
+ 		this_leaf->type = CACHE_TYPE_UNIFIED;
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 99bf0c0394f8..321a9579556d 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4552,6 +4552,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	/* These specific Samsung models/firmware-revs do not handle LPM well */
+ 	{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
+ 	{ "SAMSUNG SSD PM830 mSATA *",  "CXM13D1Q", ATA_HORKAGE_NOLPM, },
++	{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
+ 
+ 	/* devices that don't properly handle queued TRIM commands */
+ 	{ "Micron_M500IT_*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index dfb2c2622e5a..822e3060d834 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
+ 		unit[i].disk = alloc_disk(1);
+ 		if (!unit[i].disk)
+ 			goto Enomem;
++
++		unit[i].disk->queue = blk_init_queue(do_fd_request,
++						     &ataflop_lock);
++		if (!unit[i].disk->queue)
++			goto Enomem;
+ 	}
+ 
+ 	if (UseTrackbuffer < 0)
+@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
+ 		sprintf(unit[i].disk->disk_name, "fd%d", i);
+ 		unit[i].disk->fops = &floppy_fops;
+ 		unit[i].disk->private_data = &unit[i];
+-		unit[i].disk->queue = blk_init_queue(do_fd_request,
+-					&ataflop_lock);
+-		if (!unit[i].disk->queue)
+-			goto Enomem;
+ 		set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
+ 		add_disk(unit[i].disk);
+ 	}
+@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
+ 
+ 	return 0;
+ Enomem:
+-	while (i--) {
+-		struct request_queue *q = unit[i].disk->queue;
++	do {
++		struct gendisk *disk = unit[i].disk;
+ 
+-		put_disk(unit[i].disk);
+-		if (q)
+-			blk_cleanup_queue(q);
+-	}
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(unit[i].disk);
++		}
++	} while (i--);
+ 
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+ 	return -ENOMEM;
+diff --git a/drivers/block/swim.c b/drivers/block/swim.c
+index 0e31884a9519..cbe909c51847 100644
+--- a/drivers/block/swim.c
++++ b/drivers/block/swim.c
+@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
+ 
+ exit_put_disks:
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-	while (drive--)
+-		put_disk(swd->unit[drive].disk);
++	do {
++		struct gendisk *disk = swd->unit[drive].disk;
++
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(disk);
++		}
++	} while (drive--);
+ 	return err;
+ }
+ 
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index b5cedccb5d7d..144df6830b82 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1911,6 +1911,7 @@ static int negotiate_mq(struct blkfront_info *info)
+ 			      GFP_KERNEL);
+ 	if (!info->rinfo) {
+ 		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
++		info->nr_rings = 0;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2475,6 +2476,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
+ 
+ 	dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
+ 
++	if (!info)
++		return 0;
++
+ 	blkif_free(info, 0);
+ 
+ 	mutex_lock(&info->mutex);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 99cde1f9467d..e3e4d929e74f 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -324,6 +324,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ 	{ 0x4103, "BCM4330B1"	},	/* 002.001.003 */
+ 	{ 0x410e, "BCM43341B0"	},	/* 002.001.014 */
+ 	{ 0x4406, "BCM4324B3"	},	/* 002.004.006 */
++	{ 0x6109, "BCM4335C0"	},	/* 003.001.009 */
+ 	{ 0x610c, "BCM4354"	},	/* 003.001.012 */
+ 	{ 0x2122, "BCM4343A0"	},	/* 001.001.034 */
+ 	{ 0x2209, "BCM43430A1"  },	/* 001.002.009 */
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 265d6a6583bc..e33fefd6ceae 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -606,8 +606,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			return;
+ 		}
+@@ -939,8 +940,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->retries_left = SSIF_RECV_RETRIES;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_PART_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_PART_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		}
+ 	}
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 3a3a7a548a85..e8822b3d10e1 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -664,7 +664,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
+ 		return len;
+ 
+ 	err = be32_to_cpu(header->return_code);
+-	if (err != 0 && desc)
++	if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
++	    && desc)
+ 		dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
+ 			desc);
+ 	if (err)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 911475d36800..b150f87f38f5 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -264,7 +264,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
+ 		return -ENOMEM;
+ 	}
+ 
+-	rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
++	rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
+ 	if (rv < 0)
+ 		return rv;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 0a9ebf00be46..e58bfcb1169e 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -32,6 +32,7 @@ struct private_data {
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
++	bool have_static_opps;
+ };
+ 
+ static struct freq_attr *cpufreq_dt_attr[] = {
+@@ -204,6 +205,15 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 		}
+ 	}
+ 
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		ret = -ENOMEM;
++		goto out_put_regulator;
++	}
++
++	priv->reg_name = name;
++	priv->opp_table = opp_table;
++
+ 	/*
+ 	 * Initialize OPP tables for all policy->cpus. They will be shared by
+ 	 * all CPUs which have marked their CPUs shared with OPP bindings.
+@@ -214,7 +224,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 *
+ 	 * OPPs might be populated at runtime, don't check for error here
+ 	 */
+-	dev_pm_opp_of_cpumask_add_table(policy->cpus);
++	if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
++		priv->have_static_opps = true;
+ 
+ 	/*
+ 	 * But we need OPP table to function so if it is not there let's
+@@ -240,19 +251,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 				__func__, ret);
+ 	}
+ 
+-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv) {
+-		ret = -ENOMEM;
+-		goto out_free_opp;
+-	}
+-
+-	priv->reg_name = name;
+-	priv->opp_table = opp_table;
+-
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+-		goto out_free_priv;
++		goto out_free_opp;
+ 	}
+ 
+ 	priv->cpu_dev = cpu_dev;
+@@ -282,10 +284,11 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 
+ out_free_cpufreq_table:
+ 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+-out_free_priv:
+-	kfree(priv);
+ out_free_opp:
+-	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	kfree(priv);
++out_put_regulator:
+ 	if (name)
+ 		dev_pm_opp_put_regulators(opp_table);
+ out_put_clk:
+@@ -300,7 +303,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+ 		dev_pm_opp_put_regulators(priv->opp_table);
+ 
+diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
+index f20f20a77d4d..4268f87e99fc 100644
+--- a/drivers/cpufreq/cpufreq_conservative.c
++++ b/drivers/cpufreq/cpufreq_conservative.c
+@@ -80,8 +80,10 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	 * changed in the meantime, so fall back to current frequency in that
+ 	 * case.
+ 	 */
+-	if (requested_freq > policy->max || requested_freq < policy->min)
++	if (requested_freq > policy->max || requested_freq < policy->min) {
+ 		requested_freq = policy->cur;
++		dbs_info->requested_freq = requested_freq;
++	}
+ 
+ 	freq_step = get_freq_step(cs_tuners, policy);
+ 
+@@ -92,7 +94,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	if (policy_dbs->idle_periods < UINT_MAX) {
+ 		unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
+ 
+-		if (requested_freq > freq_steps)
++		if (requested_freq > policy->min + freq_steps)
+ 			requested_freq -= freq_steps;
+ 		else
+ 			requested_freq = policy->min;
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index 4fb91ba39c36..ce3f9ad7120f 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -70,22 +70,22 @@
+ extern bool caam_little_end;
+ extern bool caam_imx;
+ 
+-#define caam_to_cpu(len)				\
+-static inline u##len caam##len ## _to_cpu(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return le##len ## _to_cpu(val);		\
+-	else						\
+-		return be##len ## _to_cpu(val);		\
++#define caam_to_cpu(len)						\
++static inline u##len caam##len ## _to_cpu(u##len val)			\
++{									\
++	if (caam_little_end)						\
++		return le##len ## _to_cpu((__force __le##len)val);	\
++	else								\
++		return be##len ## _to_cpu((__force __be##len)val);	\
+ }
+ 
+-#define cpu_to_caam(len)				\
+-static inline u##len cpu_to_caam##len(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return cpu_to_le##len(val);		\
+-	else						\
+-		return cpu_to_be##len(val);		\
++#define cpu_to_caam(len)					\
++static inline u##len cpu_to_caam##len(u##len val)		\
++{								\
++	if (caam_little_end)					\
++		return (__force u##len)cpu_to_le##len(val);	\
++	else							\
++		return (__force u##len)cpu_to_be##len(val);	\
+ }
+ 
+ caam_to_cpu(16)
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 85820a2d69d4..987899610b46 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -761,6 +761,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int i, ret;
+ 
++	if (!dev->of_node) {
++		dev_err(dev, "This driver must be probed from devicetree\n");
++		return -EINVAL;
++	}
++
+ 	jzdma = devm_kzalloc(dev, sizeof(*jzdma), GFP_KERNEL);
+ 	if (!jzdma)
+ 		return -ENOMEM;
+diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
+index 4fa4c06c9edb..21a5708985bc 100644
+--- a/drivers/dma/ioat/init.c
++++ b/drivers/dma/ioat/init.c
+@@ -1205,8 +1205,15 @@ static void ioat_shutdown(struct pci_dev *pdev)
+ 
+ 		spin_lock_bh(&ioat_chan->prep_lock);
+ 		set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
+-		del_timer_sync(&ioat_chan->timer);
+ 		spin_unlock_bh(&ioat_chan->prep_lock);
++		/*
++		 * Synchronization rule for del_timer_sync():
++		 *  - The caller must not hold locks which would prevent
++		 *    completion of the timer's handler.
++		 * So prep_lock cannot be held before calling it.
++		 */
++		del_timer_sync(&ioat_chan->timer);
++
+ 		/* this should quiesce then reset */
+ 		ioat_reset_hw(ioat_chan);
+ 	}
+diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
+index 4cf0d4d0cecf..25610286979f 100644
+--- a/drivers/dma/ppc4xx/adma.c
++++ b/drivers/dma/ppc4xx/adma.c
+@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct device_driver *dev, const char *buf,
+ }
+ static DRIVER_ATTR_RW(enable);
+ 
+-static ssize_t poly_store(struct device_driver *dev, char *buf)
++static ssize_t poly_show(struct device_driver *dev, char *buf)
+ {
+ 	ssize_t size = 0;
+ 	u32 reg;
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 18aeabb1d5ee..e2addb2bca29 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_types[] = {
+ 			.dbam_to_cs		= f17_base_addr_to_cs_size,
+ 		}
+ 	},
++	[F17_M10H_CPUS] = {
++		.ctl_name = "F17h_M10h",
++		.f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0,
++		.f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6,
++		.ops = {
++			.early_channel_count	= f17_early_channel_count,
++			.dbam_to_cs		= f17_base_addr_to_cs_size,
++		}
++	},
+ };
+ 
+ /*
+@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ 		break;
+ 
+ 	case 0x17:
++		if (pvt->model >= 0x10 && pvt->model <= 0x2f) {
++			fam_type = &family_types[F17_M10H_CPUS];
++			pvt->ops = &family_types[F17_M10H_CPUS].ops;
++			break;
++		}
+ 		fam_type	= &family_types[F17_CPUS];
+ 		pvt->ops	= &family_types[F17_CPUS].ops;
+ 		break;
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index 1d4b74e9a037..4242f8e39c18 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -115,6 +115,8 @@
+ #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582
+ #define PCI_DEVICE_ID_AMD_17H_DF_F0	0x1460
+ #define PCI_DEVICE_ID_AMD_17H_DF_F6	0x1466
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
+ 
+ /*
+  * Function 1 - Address Map
+@@ -281,6 +283,7 @@ enum amd_families {
+ 	F16_CPUS,
+ 	F16_M30H_CPUS,
+ 	F17_CPUS,
++	F17_M10H_CPUS,
+ 	NUM_FAMILIES,
+ };
+ 
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8e120bf60624..f1d19504a028 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ 	u32 errnum = find_first_bit(&error, 32);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv)
+ 			tp_event = HW_EVENT_ERR_FATAL;
+ 		else
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 4a89c8093307..498d253a3b7e 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2881,6 +2881,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ 		recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/edac/skx_edac.c b/drivers/edac/skx_edac.c
+index fae095162c01..4ba92f1dd0f7 100644
+--- a/drivers/edac/skx_edac.c
++++ b/drivers/edac/skx_edac.c
+@@ -668,7 +668,7 @@ sad_found:
+ 			break;
+ 		case 2:
+ 			lchan = (addr >> shift) % 2;
+-			lchan = (lchan << 1) | ~lchan;
++			lchan = (lchan << 1) | !lchan;
+ 			break;
+ 		case 3:
+ 			lchan = ((addr >> shift) % 2) << 1;
+@@ -959,6 +959,7 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ 	recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
+index 19db5709ae28..898bb9abc41f 100644
+--- a/drivers/firmware/google/coreboot_table.c
++++ b/drivers/firmware/google/coreboot_table.c
+@@ -110,7 +110,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 	if (strncmp(header.signature, "LBIO", sizeof(header.signature))) {
+ 		pr_warn("coreboot_table: coreboot table missing or corrupt!\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto out;
+ 	}
+ 
+ 	ptr_entry = (void *)ptr_header + header.header_bytes;
+@@ -137,7 +138,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 		ptr_entry += entry.size;
+ 	}
+-
++out:
++	iounmap(ptr);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(coreboot_table_init);
+@@ -146,7 +148,6 @@ int coreboot_table_exit(void)
+ {
+ 	if (ptr_header) {
+ 		bus_unregister(&coreboot_bus_type);
+-		iounmap(ptr_header);
+ 		ptr_header = NULL;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
+index 16c7f9f49416..af936dcca659 100644
+--- a/drivers/gpio/gpio-brcmstb.c
++++ b/drivers/gpio/gpio-brcmstb.c
+@@ -664,6 +664,18 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 		struct brcmstb_gpio_bank *bank;
+ 		struct gpio_chip *gc;
+ 
++		/*
++		 * If bank_width is 0, then there is an empty bank in the
++		 * register block. Special handling for this case.
++		 */
++		if (bank_width == 0) {
++			dev_dbg(dev, "Width 0 found: Empty bank @ %d\n",
++				num_banks);
++			num_banks++;
++			gpio_base += MAX_GPIO_PER_BANK;
++			continue;
++		}
++
+ 		bank = devm_kzalloc(dev, sizeof(*bank), GFP_KERNEL);
+ 		if (!bank) {
+ 			err = -ENOMEM;
+@@ -740,9 +752,6 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 			goto fail;
+ 	}
+ 
+-	dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n",
+-			num_banks, priv->gpio_base, gpio_base - 1);
+-
+ 	if (priv->parent_wake_irq && need_wakeup_event)
+ 		pm_wakeup_event(dev, 0);
+ 
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 895741e9cd7d..52ccf1c31855 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -173,6 +173,11 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
+ 		state->crtcs[i].state = NULL;
+ 		state->crtcs[i].old_state = NULL;
+ 		state->crtcs[i].new_state = NULL;
++
++		if (state->crtcs[i].commit) {
++			drm_crtc_commit_put(state->crtcs[i].commit);
++			state->crtcs[i].commit = NULL;
++		}
+ 	}
+ 
+ 	for (i = 0; i < config->num_total_plane; i++) {
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 81e32199d3ef..abca95b970ea 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1384,15 +1384,16 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
+ void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
+ 					  struct drm_atomic_state *old_state)
+ {
+-	struct drm_crtc_state *new_crtc_state;
+ 	struct drm_crtc *crtc;
+ 	int i;
+ 
+-	for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
+-		struct drm_crtc_commit *commit = new_crtc_state->commit;
++	for (i = 0; i < dev->mode_config.num_crtc; i++) {
++		struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
+ 		int ret;
+ 
+-		if (!commit)
++		crtc = old_state->crtcs[i].ptr;
++
++		if (!crtc || !commit)
+ 			continue;
+ 
+ 		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
+@@ -1906,6 +1907,9 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
+ 		drm_crtc_commit_get(commit);
+ 
+ 		commit->abort_completion = true;
++
++		state->crtcs[i].commit = commit;
++		drm_crtc_commit_get(commit);
+ 	}
+ 
+ 	for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 98a36e6c69ad..bd207857a964 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -560,9 +560,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	struct drm_mode_crtc *crtc_req = data;
+ 	struct drm_crtc *crtc;
+ 	struct drm_plane *plane;
+-	struct drm_connector **connector_set = NULL, *connector;
+-	struct drm_framebuffer *fb = NULL;
+-	struct drm_display_mode *mode = NULL;
++	struct drm_connector **connector_set, *connector;
++	struct drm_framebuffer *fb;
++	struct drm_display_mode *mode;
+ 	struct drm_mode_set set;
+ 	uint32_t __user *set_connectors_ptr;
+ 	struct drm_modeset_acquire_ctx ctx;
+@@ -591,6 +591,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	mutex_lock(&crtc->dev->mode_config.mutex);
+ 	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
+ retry:
++	connector_set = NULL;
++	fb = NULL;
++	mode = NULL;
++
+ 	ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
+ 	if (ret)
+ 		goto out;
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 59a11026dceb..45a8ba42c8f4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -1446,8 +1446,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	}
+ 
+ 	/* The CEC module handles HDMI hotplug detection */
+-	cec_np = of_find_compatible_node(np->parent, NULL,
+-					 "mediatek,mt8173-cec");
++	cec_np = of_get_compatible_child(np->parent, "mediatek,mt8173-cec");
+ 	if (!cec_np) {
+ 		dev_err(dev, "Failed to find CEC node\n");
+ 		return -EINVAL;
+@@ -1457,8 +1456,10 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	if (!cec_pdev) {
+ 		dev_err(hdmi->dev, "Waiting for CEC device %pOF\n",
+ 			cec_np);
++		of_node_put(cec_np);
+ 		return -EPROBE_DEFER;
+ 	}
++	of_node_put(cec_np);
+ 	hdmi->cec_dev = &cec_pdev->dev;
+ 
+ 	/*
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index 23872d08308c..a746017fac17 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+ 			if (cmd == HIDIOCGCOLLECTIONINDEX) {
+ 				if (uref->usage_index >= field->maxusage)
+ 					goto inval;
++				uref->usage_index =
++					array_index_nospec(uref->usage_index,
++							   field->maxusage);
+ 			} else if (uref->usage_index >= field->report_count)
+ 				goto inval;
+ 		}
+ 
+-		if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) &&
+-		    (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
+-		     uref->usage_index + uref_multi->num_values > field->report_count))
+-			goto inval;
++		if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) {
++			if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
++			    uref->usage_index + uref_multi->num_values >
++			    field->report_count)
++				goto inval;
++
++			uref->usage_index =
++				array_index_nospec(uref->usage_index,
++						   field->report_count -
++						   uref_multi->num_values);
++		}
+ 
+ 		switch (cmd) {
+ 		case HIDIOCGUSAGE:
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index ad7afa74d365..ff9a1d8e90f7 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3335,6 +3335,7 @@ static void wacom_setup_intuos(struct wacom_wac *wacom_wac)
+ 
+ void wacom_setup_device_quirks(struct wacom *wacom)
+ {
++	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ 	struct wacom_features *features = &wacom->wacom_wac.features;
+ 
+ 	/* The pen and pad share the same interface on most devices */
+@@ -3464,6 +3465,24 @@ void wacom_setup_device_quirks(struct wacom *wacom)
+ 
+ 	if (features->type == REMOTE)
+ 		features->device_type |= WACOM_DEVICETYPE_WL_MONITOR;
++
++	/* HID descriptor for DTK-2451 / DTH-2452 claims to report lots
++	 * of things it shouldn't. Let's fix up the damage...
++	 */
++	if (wacom->hdev->product == 0x382 || wacom->hdev->product == 0x37d) {
++		features->quirks &= ~WACOM_QUIRK_TOOLSERIAL;
++		__clear_bit(BTN_TOOL_BRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_PENCIL, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_AIRBRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(ABS_Z, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_DISTANCE, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_X, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_Y, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_WHEEL, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_MISC, wacom_wac->pen_input->absbit);
++		__clear_bit(MSC_SERIAL, wacom_wac->pen_input->mscbit);
++		__clear_bit(EV_MSC, wacom_wac->pen_input->evbit);
++	}
+ }
+ 
+ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0f0e091c117c..c4a1ebcfffb6 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -606,16 +606,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	bool perf_chn = vmbus_devs[dev_type].perf_device;
+ 	struct vmbus_channel *primary = channel->primary_channel;
+ 	int next_node;
+-	struct cpumask available_mask;
++	cpumask_var_t available_mask;
+ 	struct cpumask *alloced_mask;
+ 
+ 	if ((vmbus_proto_version == VERSION_WS2008) ||
+-	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
++	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
++	    !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
+ 		/*
+ 		 * Prior to win8, all channel interrupts are
+ 		 * delivered on cpu 0.
+ 		 * Also if the channel is not a performance critical
+ 		 * channel, bind it to cpu 0.
++		 * In case alloc_cpumask_var() fails, bind it to cpu 0.
+ 		 */
+ 		channel->numa_node = 0;
+ 		channel->target_cpu = 0;
+@@ -653,7 +655,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 		cpumask_clear(alloced_mask);
+ 	}
+ 
+-	cpumask_xor(&available_mask, alloced_mask,
++	cpumask_xor(available_mask, alloced_mask,
+ 		    cpumask_of_node(primary->numa_node));
+ 
+ 	cur_cpu = -1;
+@@ -671,10 +673,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	}
+ 
+ 	while (true) {
+-		cur_cpu = cpumask_next(cur_cpu, &available_mask);
++		cur_cpu = cpumask_next(cur_cpu, available_mask);
+ 		if (cur_cpu >= nr_cpu_ids) {
+ 			cur_cpu = -1;
+-			cpumask_copy(&available_mask,
++			cpumask_copy(available_mask,
+ 				     cpumask_of_node(primary->numa_node));
+ 			continue;
+ 		}
+@@ -704,6 +706,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 
+ 	channel->target_cpu = cur_cpu;
+ 	channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
++
++	free_cpumask_var(available_mask);
+ }
+ 
+ static void vmbus_wait_for_unload(void)
+diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
+index 7718e58dbda5..7688dab32f6e 100644
+--- a/drivers/hwmon/pmbus/pmbus.c
++++ b/drivers/hwmon/pmbus/pmbus.c
+@@ -118,6 +118,8 @@ static int pmbus_identify(struct i2c_client *client,
+ 		} else {
+ 			info->pages = 1;
+ 		}
++
++		pmbus_clear_faults(client);
+ 	}
+ 
+ 	if (pmbus_check_byte_register(client, 0, PMBUS_VOUT_MODE)) {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 82c3754e21e3..2e2b5851139c 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2015,7 +2015,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 	if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK))
+ 		client->flags |= I2C_CLIENT_PEC;
+ 
+-	pmbus_clear_faults(client);
++	if (data->info->pages)
++		pmbus_clear_faults(client);
++	else
++		pmbus_clear_fault_page(client, -1);
+ 
+ 	if (info->identify) {
+ 		ret = (*info->identify)(client, info);
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 7838af58f92d..9d611dd268e1 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -290,9 +290,19 @@ static int pwm_fan_remove(struct platform_device *pdev)
+ static int pwm_fan_suspend(struct device *dev)
+ {
+ 	struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
++	struct pwm_args args;
++	int ret;
++
++	pwm_get_args(ctx->pwm, &args);
++
++	if (ctx->pwm_value) {
++		ret = pwm_config(ctx->pwm, 0, args.period);
++		if (ret < 0)
++			return ret;
+ 
+-	if (ctx->pwm_value)
+ 		pwm_disable(ctx->pwm);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 320d29df17e1..8c1d53f7af83 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -147,6 +147,10 @@ static int etb_enable(struct coresight_device *csdev, u32 mode)
+ 	if (val == CS_MODE_PERF)
+ 		return -EBUSY;
+ 
++	/* Don't let perf disturb sysFS sessions */
++	if (val == CS_MODE_SYSFS && mode == CS_MODE_PERF)
++		return -EBUSY;
++
+ 	/* Nothing to do, the tracer is already enabled. */
+ 	if (val == CS_MODE_SYSFS)
+ 		goto out;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 3c1c817f6968..e152716bf07f 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -812,8 +812,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
+ 				     num * adap->timeout);
+-	if (!time_left) {
++
++	/* cleanup DMA if it couldn't complete properly due to an error */
++	if (priv->dma_direction != DMA_NONE)
+ 		rcar_i2c_cleanup_dma(priv);
++
++	if (!time_left) {
+ 		rcar_i2c_init(priv);
+ 		ret = -ETIMEDOUT;
+ 	} else if (priv->flags & ID_NACK) {
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 44b516863c9d..75d2f73582a3 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *idev = pf->indio_dev;
+ 	struct at91_adc_state *st = iio_priv(idev);
++	struct iio_chan_spec const *chan;
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < idev->masklength; i++) {
+ 		if (!test_bit(i, idev->active_scan_mask))
+ 			continue;
+-		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i));
++		chan = idev->channels + i;
++		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel));
+ 		j++;
+ 	}
+ 
+@@ -279,6 +281,8 @@ static void handle_adc_eoc_trigger(int irq, struct iio_dev *idev)
+ 		iio_trigger_poll(idev->trig);
+ 	} else {
+ 		st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
++		/* Needed to ACK the DRDY interrupt */
++		at91_adc_readl(st, AT91_ADC_LCDR);
+ 		st->done = true;
+ 		wake_up_interruptible(&st->wq_data_avail);
+ 	}
+diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
+index ea264fa9e567..929c617db364 100644
+--- a/drivers/iio/adc/fsl-imx25-gcq.c
++++ b/drivers/iio/adc/fsl-imx25-gcq.c
+@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 		ret = of_property_read_u32(child, "reg", &reg);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to get reg property\n");
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (reg >= MX25_NUM_CFGS) {
+ 			dev_err(dev,
+ 				"reg value is greater than the number of available configuration registers\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			if (IS_ERR(priv->vref[refp])) {
+ 				dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.",
+ 					mx25_gcq_refp_names[refp]);
++				of_node_put(child);
+ 				return PTR_ERR(priv->vref[refp]);
+ 			}
+ 			priv->channel_vref_mv[reg] =
+@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			break;
+ 		default:
+ 			dev_err(dev, "Invalid positive reference %d\n", refp);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 
+ 		if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) {
+ 			dev_err(dev, "Invalid fsl,adc-refp property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 		if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) {
+ 			dev_err(dev, "Invalid fsl,adc-refn property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
+index bf4fc40ec84d..2f98cb2a3b96 100644
+--- a/drivers/iio/dac/ad5064.c
++++ b/drivers/iio/dac/ad5064.c
+@@ -808,6 +808,40 @@ static int ad5064_set_config(struct ad5064_state *st, unsigned int val)
+ 	return ad5064_write(st, cmd, 0, val, 0);
+ }
+ 
++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev)
++{
++	unsigned int i;
++	int ret;
++
++	for (i = 0; i < ad5064_num_vref(st); ++i)
++		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++
++	if (!st->chip_info->internal_vref)
++		return devm_regulator_bulk_get(dev, ad5064_num_vref(st),
++					       st->vref_reg);
++
++	/*
++	 * This assumes that when the regulator has an internal VREF
++	 * there is only one external VREF connection, which is
++	 * currently the case for all supported devices.
++	 */
++	st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref");
++	if (!IS_ERR(st->vref_reg[0].consumer))
++		return 0;
++
++	ret = PTR_ERR(st->vref_reg[0].consumer);
++	if (ret != -ENODEV)
++		return ret;
++
++	/* If no external regulator was supplied use the internal VREF */
++	st->use_internal_vref = true;
++	ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
++	if (ret)
++		dev_err(dev, "Failed to enable internal vref: %d\n", ret);
++
++	return ret;
++}
++
+ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 			const char *name, ad5064_write_func write)
+ {
+@@ -828,22 +862,11 @@ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 	st->dev = dev;
+ 	st->write = write;
+ 
+-	for (i = 0; i < ad5064_num_vref(st); ++i)
+-		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++	ret = ad5064_request_vref(st, dev);
++	if (ret)
++		return ret;
+ 
+-	ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st),
+-		st->vref_reg);
+-	if (ret) {
+-		if (!st->chip_info->internal_vref)
+-			return ret;
+-		st->use_internal_vref = true;
+-		ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
+-		if (ret) {
+-			dev_err(dev, "Failed to enable internal vref: %d\n",
+-				ret);
+-			return ret;
+-		}
+-	} else {
++	if (!st->use_internal_vref) {
+ 		ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 31c7efaf8e7a..63406cd212a7 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -516,7 +516,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
+ 	ret = get_perf_mad(p->ibdev, p->port_num, tab_attr->attr_id, &data,
+ 			40 + offset / 8, sizeof(data));
+ 	if (ret < 0)
+-		return sprintf(buf, "N/A (no PMA)\n");
++		return ret;
+ 
+ 	switch (width) {
+ 	case 4:
+@@ -1061,10 +1061,12 @@ static int add_port(struct ib_device *device, int port_num,
+ 		goto err_put;
+ 	}
+ 
+-	p->pma_table = get_counter_table(device, port_num);
+-	ret = sysfs_create_group(&p->kobj, p->pma_table);
+-	if (ret)
+-		goto err_put_gid_attrs;
++	if (device->process_mad) {
++		p->pma_table = get_counter_table(device, port_num);
++		ret = sysfs_create_group(&p->kobj, p->pma_table);
++		if (ret)
++			goto err_put_gid_attrs;
++	}
+ 
+ 	p->gid_group.name  = "gids";
+ 	p->gid_group.attrs = alloc_group_attrs(show_port_gid, attr.gid_tbl_len);
+@@ -1177,7 +1179,8 @@ err_free_gid:
+ 	p->gid_group.attrs = NULL;
+ 
+ err_remove_pma:
+-	sysfs_remove_group(&p->kobj, p->pma_table);
++	if (p->pma_table)
++		sysfs_remove_group(&p->kobj, p->pma_table);
+ 
+ err_put_gid_attrs:
+ 	kobject_put(&p->gid_attr_group->kobj);
+@@ -1289,7 +1292,9 @@ static void free_port_list_attributes(struct ib_device *device)
+ 			kfree(port->hw_stats);
+ 			free_hsag(&port->kobj, port->hw_stats_ag);
+ 		}
+-		sysfs_remove_group(p, port->pma_table);
++
++		if (port->pma_table)
++			sysfs_remove_group(p, port->pma_table);
+ 		sysfs_remove_group(p, &port->pkey_group);
+ 		sysfs_remove_group(p, &port->gid_group);
+ 		sysfs_remove_group(&port->gid_attr_group->kobj,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 6ad0d46ab879..249efa0a6aba 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -360,7 +360,8 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ 	}
+ 
+ 	/* Make sure the HW is stopped! */
+-	bnxt_qplib_nq_stop_irq(nq, true);
++	if (nq->requested)
++		bnxt_qplib_nq_stop_irq(nq, true);
+ 
+ 	if (nq->bar_reg_iomem)
+ 		iounmap(nq->bar_reg_iomem);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 2852d350ada1..6637df77d236 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -309,8 +309,17 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		rcfw->aeq_handler(rcfw, qp_event, qp);
+ 		break;
+ 	default:
+-		/* Command Response */
+-		spin_lock_irqsave(&cmdq->lock, flags);
++		/*
++		 * Command Response
++		 * cmdq->lock needs to be acquired to synchronize
++		 * the command send and completion reaping. This function
++		 * is always called with creq->lock held. Using
++		 * the nested variant of spin_lock.
++		 *
++		 */
++
++		spin_lock_irqsave_nested(&cmdq->lock, flags,
++					 SINGLE_DEPTH_NESTING);
+ 		cookie = le16_to_cpu(qp_event->cookie);
+ 		mcookie = qp_event->cookie;
+ 		blocked = cookie & RCFW_CMD_IS_BLOCKING;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 73339fd47dd8..addd432f3f38 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -691,7 +691,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 		init_completion(&ent->compl);
+ 		INIT_WORK(&ent->work, cache_work_func);
+ 		INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
+-		queue_work(cache->wq, &ent->work);
+ 
+ 		if (i > MR_CACHE_LAST_STD_ENTRY) {
+ 			mlx5_odp_init_mr_cache_entry(ent);
+@@ -711,6 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 			ent->limit = dev->mdev->profile->mr_cache[i].limit;
+ 		else
+ 			ent->limit = 0;
++		queue_work(cache->wq, &ent->work);
+ 	}
+ 
+ 	err = mlx5_mr_cache_debugfs_init(dev);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 01eae67d5a6e..e260f6a156ed 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3264,7 +3264,9 @@ static bool modify_dci_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state new
+ 	int req = IB_QP_STATE;
+ 	int opt = 0;
+ 
+-	if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
++	if (new_state == IB_QPS_RESET) {
++		return is_valid_mask(attr_mask, req, opt);
++	} else if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ 		req |= IB_QP_PKEY_INDEX | IB_QP_PORT;
+ 		return is_valid_mask(attr_mask, req, opt);
+ 	} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 5b57de30dee4..b8104d50b1a0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -682,6 +682,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		rxe_advance_resp_resource(qp);
+ 
+ 		res->type		= RXE_READ_MASK;
++		res->replay		= 0;
+ 
+ 		res->read.va		= qp->resp.va;
+ 		res->read.va_org	= qp->resp.va;
+@@ -752,7 +753,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		state = RESPST_DONE;
+ 	} else {
+ 		qp->resp.res = NULL;
+-		qp->resp.opcode = -1;
++		if (!res->replay)
++			qp->resp.opcode = -1;
+ 		if (psn_compare(res->cur_psn, qp->resp.psn) >= 0)
+ 			qp->resp.psn = res->cur_psn;
+ 		state = RESPST_CLEANUP;
+@@ -814,6 +816,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 
+ 	/* next expected psn, read handles this separately */
+ 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
++	qp->resp.ack_psn = qp->resp.psn;
+ 
+ 	qp->resp.opcode = pkt->opcode;
+ 	qp->resp.status = IB_WC_SUCCESS;
+@@ -1060,7 +1063,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 					  struct rxe_pkt_info *pkt)
+ {
+ 	enum resp_states rc;
+-	u32 prev_psn = (qp->resp.psn - 1) & BTH_PSN_MASK;
++	u32 prev_psn = (qp->resp.ack_psn - 1) & BTH_PSN_MASK;
+ 
+ 	if (pkt->mask & RXE_SEND_MASK ||
+ 	    pkt->mask & RXE_WRITE_MASK) {
+@@ -1103,6 +1106,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 			res->state = (pkt->psn == res->first_psn) ?
+ 					rdatm_res_state_new :
+ 					rdatm_res_state_replay;
++			res->replay = 1;
+ 
+ 			/* Reset the resource, except length. */
+ 			res->read.va_org = iova;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index af1470d29391..332a16dad2a7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -171,6 +171,7 @@ enum rdatm_res_state {
+ 
+ struct resp_res {
+ 	int			type;
++	int			replay;
+ 	u32			first_psn;
+ 	u32			last_psn;
+ 	u32			cur_psn;
+@@ -195,6 +196,7 @@ struct rxe_resp_info {
+ 	enum rxe_qp_state	state;
+ 	u32			msn;
+ 	u32			psn;
++	u32			ack_psn;
+ 	int			opcode;
+ 	int			drop_msg;
+ 	int			goto_error;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index a620701f9d41..1ac2bbc84671 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1439,11 +1439,15 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+ 		netif_tx_unlock_bh(dev);
+ 
+-		if (skb->protocol == htons(ETH_P_IP))
++		if (skb->protocol == htons(ETH_P_IP)) {
++			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
++		}
+ #if IS_ENABLED(CONFIG_IPV6)
+-		else if (skb->protocol == htons(ETH_P_IPV6))
++		else if (skb->protocol == htons(ETH_P_IPV6)) {
++			memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++		}
+ #endif
+ 		dev_kfree_skb_any(skb);
+ 
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 5349e22b5c78..29646004a4a7 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -469,6 +469,9 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ 	bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
+ 	void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	if (stage1) {
+ 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+ 
+@@ -510,6 +513,9 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
+ 	struct arm_smmu_domain *smmu_domain = cookie;
+ 	void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
+ }
+ 
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index b1b47a40a278..faa7d61b9d6c 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -124,6 +124,7 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+ 		pdc_type = PDC_EDGE_DUAL;
++		type = IRQ_TYPE_EDGE_RISING;
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pdc_type = PDC_LEVEL_HIGH;
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index ed9cc977c8b3..f6427e805150 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -1538,13 +1538,14 @@ struct pblk_line *pblk_line_replace_data(struct pblk *pblk)
+ 	struct pblk_line *cur, *new = NULL;
+ 	unsigned int left_seblks;
+ 
+-	cur = l_mg->data_line;
+ 	new = l_mg->data_next;
+ 	if (!new)
+ 		goto out;
+-	l_mg->data_line = new;
+ 
+ 	spin_lock(&l_mg->free_lock);
++	cur = l_mg->data_line;
++	l_mg->data_line = new;
++
+ 	pblk_line_setup_metadata(new, l_mg, &pblk->lm);
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index d83466b3821b..958bda8a69b7 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -956,12 +956,14 @@ next:
+ 		}
+ 	}
+ 
+-	spin_lock(&l_mg->free_lock);
+ 	if (!open_lines) {
++		spin_lock(&l_mg->free_lock);
+ 		WARN_ON_ONCE(!test_and_clear_bit(meta_line,
+ 							&l_mg->meta_bitmap));
++		spin_unlock(&l_mg->free_lock);
+ 		pblk_line_replace_data(pblk);
+ 	} else {
++		spin_lock(&l_mg->free_lock);
+ 		/* Allocate next line for preparation */
+ 		l_mg->data_next = pblk_line_get(pblk);
+ 		if (l_mg->data_next) {
+@@ -969,8 +971,8 @@ next:
+ 			l_mg->data_next->type = PBLK_LINETYPE_DATA;
+ 			is_next = 1;
+ 		}
++		spin_unlock(&l_mg->free_lock);
+ 	}
+-	spin_unlock(&l_mg->free_lock);
+ 
+ 	if (is_next)
+ 		pblk_line_erase(pblk, l_mg->data_next);
+diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
+index 88a0a7c407aa..432f7d94d369 100644
+--- a/drivers/lightnvm/pblk-sysfs.c
++++ b/drivers/lightnvm/pblk-sysfs.c
+@@ -262,8 +262,14 @@ static ssize_t pblk_sysfs_lines(struct pblk *pblk, char *page)
+ 		sec_in_line = l_mg->data_line->sec_in_line;
+ 		meta_weight = bitmap_weight(&l_mg->meta_bitmap,
+ 							PBLK_DATA_LINES);
+-		map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
++
++		spin_lock(&l_mg->data_line->lock);
++		if (l_mg->data_line->map_bitmap)
++			map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
+ 							lm->sec_per_line);
++		else
++			map_weight = 0;
++		spin_unlock(&l_mg->data_line->lock);
+ 	}
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
+index f353e52941f5..89ac60d4849e 100644
+--- a/drivers/lightnvm/pblk-write.c
++++ b/drivers/lightnvm/pblk-write.c
+@@ -417,12 +417,11 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
+ 			rqd->ppa_list[i] = addr_to_gen_ppa(pblk, paddr, id);
+ 	}
+ 
++	spin_lock(&l_mg->close_lock);
+ 	emeta->mem += rq_len;
+-	if (emeta->mem >= lm->emeta_len[0]) {
+-		spin_lock(&l_mg->close_lock);
++	if (emeta->mem >= lm->emeta_len[0])
+ 		list_del(&meta_line->list);
+-		spin_unlock(&l_mg->close_lock);
+-	}
++	spin_unlock(&l_mg->close_lock);
+ 
+ 	pblk_down_page(pblk, rqd->ppa_list, rqd->nr_ppas);
+ 
+@@ -491,14 +490,15 @@ static struct pblk_line *pblk_should_submit_meta_io(struct pblk *pblk,
+ 	struct pblk_line *meta_line;
+ 
+ 	spin_lock(&l_mg->close_lock);
+-retry:
+ 	if (list_empty(&l_mg->emeta_list)) {
+ 		spin_unlock(&l_mg->close_lock);
+ 		return NULL;
+ 	}
+ 	meta_line = list_first_entry(&l_mg->emeta_list, struct pblk_line, list);
+-	if (meta_line->emeta->mem >= lm->emeta_len[0])
+-		goto retry;
++	if (meta_line->emeta->mem >= lm->emeta_len[0]) {
++		spin_unlock(&l_mg->close_lock);
++		return NULL;
++	}
+ 	spin_unlock(&l_mg->close_lock);
+ 
+ 	if (!pblk_valid_meta_ppa(pblk, meta_line, data_rqd))
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 311e91b1a14f..256f18b67e8a 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -461,8 +461,11 @@ static int __init acpi_pcc_probe(void)
+ 	count = acpi_table_parse_entries_array(ACPI_SIG_PCCT,
+ 			sizeof(struct acpi_table_pcct), proc,
+ 			ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES);
+-	if (count == 0 || count > MAX_PCC_SUBSPACES) {
+-		pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
++	if (count <= 0 || count > MAX_PCC_SUBSPACES) {
++		if (count < 0)
++			pr_warn("Error parsing PCC subspaces from PCCT\n");
++		else
++			pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 547c9eedc2f4..d681524f82a4 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2380,7 +2380,7 @@ static int refill_keybuf_fn(struct btree_op *op, struct btree *b,
+ 	struct keybuf *buf = refill->buf;
+ 	int ret = MAP_CONTINUE;
+ 
+-	if (bkey_cmp(k, refill->end) >= 0) {
++	if (bkey_cmp(k, refill->end) > 0) {
+ 		ret = MAP_DONE;
+ 		goto out;
+ 	}
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index ae67f5fa8047..9d2fa1359029 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -843,7 +843,7 @@ static void cached_dev_read_done_bh(struct closure *cl)
+ 
+ 	bch_mark_cache_accounting(s->iop.c, s->d,
+ 				  !s->cache_missed, s->iop.bypass);
+-	trace_bcache_read(s->orig_bio, !s->cache_miss, s->iop.bypass);
++	trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
+ 
+ 	if (s->iop.status)
+ 		continue_at_nobarrier(cl, cached_dev_read_error, bcache_wq);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index fa4058e43202..6e5220554220 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1131,11 +1131,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
+ 	}
+ 
+ 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
+-		bch_sectors_dirty_init(&dc->disk);
+ 		atomic_set(&dc->has_dirty, 1);
+ 		bch_writeback_queue(dc);
+ 	}
+ 
++	bch_sectors_dirty_init(&dc->disk);
++
+ 	bch_cached_dev_run(dc);
+ 	bcache_device_link(&dc->disk, c, "bdev");
+ 
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 225b15aa0340..34819f2c257d 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -263,6 +263,7 @@ STORE(__cached_dev)
+ 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+ 	d_strtoul(writeback_rate_i_term_inverse);
+ 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
++	d_strtoul_nonzero(writeback_rate_minimum);
+ 
+ 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+ 
+@@ -389,6 +390,7 @@ static struct attribute *bch_cached_dev_files[] = {
+ 	&sysfs_writeback_rate_update_seconds,
+ 	&sysfs_writeback_rate_i_term_inverse,
+ 	&sysfs_writeback_rate_p_term_inverse,
++	&sysfs_writeback_rate_minimum,
+ 	&sysfs_writeback_rate_debug,
+ 	&sysfs_errors,
+ 	&sysfs_io_error_limit,
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index b810ea77e6b1..f666778ad237 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1720,8 +1720,7 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_fla
+ }
+ 
+ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
+-		       int ioctl_flags,
+-		       struct dm_ioctl **param, int *param_flags)
++		       int ioctl_flags, struct dm_ioctl **param, int *param_flags)
+ {
+ 	struct dm_ioctl *dmi;
+ 	int secure_data;
+@@ -1762,18 +1761,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 
+ 	*param_flags |= DM_PARAMS_MALLOC;
+ 
+-	if (copy_from_user(dmi, user, param_kernel->data_size))
+-		goto bad;
++	/* Copy from param_kernel (which was already copied from user) */
++	memcpy(dmi, param_kernel, minimum_data_size);
+ 
+-data_copied:
+-	/*
+-	 * Abort if something changed the ioctl data while it was being copied.
+-	 */
+-	if (dmi->data_size != param_kernel->data_size) {
+-		DMERR("rejecting ioctl: data size modified while processing parameters");
++	if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
++			   param_kernel->data_size - minimum_data_size))
+ 		goto bad;
+-	}
+-
++data_copied:
+ 	/* Wipe the user buffer so we do not return it to userspace */
+ 	if (secure_data && clear_user(user, param_kernel->data_size))
+ 		goto bad;
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 969954915566..fa68336560c3 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -99,7 +99,7 @@ struct dmz_mblock {
+ 	struct rb_node		node;
+ 	struct list_head	link;
+ 	sector_t		no;
+-	atomic_t		ref;
++	unsigned int		ref;
+ 	unsigned long		state;
+ 	struct page		*page;
+ 	void			*data;
+@@ -296,7 +296,7 @@ static struct dmz_mblock *dmz_alloc_mblock(struct dmz_metadata *zmd,
+ 
+ 	RB_CLEAR_NODE(&mblk->node);
+ 	INIT_LIST_HEAD(&mblk->link);
+-	atomic_set(&mblk->ref, 0);
++	mblk->ref = 0;
+ 	mblk->state = 0;
+ 	mblk->no = mblk_no;
+ 	mblk->data = page_address(mblk->page);
+@@ -339,10 +339,11 @@ static void dmz_insert_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk)
+ }
+ 
+ /*
+- * Lookup a metadata block in the rbtree.
++ * Lookup a metadata block in the rbtree. If the block is found, increment
++ * its reference count.
+  */
+-static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+-					    sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_fast(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+ 	struct rb_root *root = &zmd->mblk_rbtree;
+ 	struct rb_node *node = root->rb_node;
+@@ -350,8 +351,17 @@ static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+ 
+ 	while (node) {
+ 		mblk = container_of(node, struct dmz_mblock, node);
+-		if (mblk->no == mblk_no)
++		if (mblk->no == mblk_no) {
++			/*
++			 * If this is the first reference to the block,
++			 * remove it from the LRU list.
++			 */
++			mblk->ref++;
++			if (mblk->ref == 1 &&
++			    !test_bit(DMZ_META_DIRTY, &mblk->state))
++				list_del_init(&mblk->link);
+ 			return mblk;
++		}
+ 		node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
+ 	}
+ 
+@@ -382,32 +392,47 @@ static void dmz_mblock_bio_end_io(struct bio *bio)
+ }
+ 
+ /*
+- * Read a metadata block from disk.
++ * Read an uncached metadata block from disk and add it to the cache.
+  */
+-static struct dmz_mblock *dmz_fetch_mblock(struct dmz_metadata *zmd,
+-					   sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+-	struct dmz_mblock *mblk;
++	struct dmz_mblock *mblk, *m;
+ 	sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no;
+ 	struct bio *bio;
+ 
+-	/* Get block and insert it */
++	/* Get a new block and a BIO to read it */
+ 	mblk = dmz_alloc_mblock(zmd, mblk_no);
+ 	if (!mblk)
+ 		return NULL;
+ 
+-	spin_lock(&zmd->mblk_lock);
+-	atomic_inc(&mblk->ref);
+-	set_bit(DMZ_META_READING, &mblk->state);
+-	dmz_insert_mblock(zmd, mblk);
+-	spin_unlock(&zmd->mblk_lock);
+-
+ 	bio = bio_alloc(GFP_NOIO, 1);
+ 	if (!bio) {
+ 		dmz_free_mblock(zmd, mblk);
+ 		return NULL;
+ 	}
+ 
++	spin_lock(&zmd->mblk_lock);
++
++	/*
++	 * Make sure that another context did not start reading
++	 * the block already.
++	 */
++	m = dmz_get_mblock_fast(zmd, mblk_no);
++	if (m) {
++		spin_unlock(&zmd->mblk_lock);
++		dmz_free_mblock(zmd, mblk);
++		bio_put(bio);
++		return m;
++	}
++
++	mblk->ref++;
++	set_bit(DMZ_META_READING, &mblk->state);
++	dmz_insert_mblock(zmd, mblk);
++
++	spin_unlock(&zmd->mblk_lock);
++
++	/* Submit read BIO */
+ 	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+ 	bio_set_dev(bio, zmd->dev->bdev);
+ 	bio->bi_private = mblk;
+@@ -484,7 +509,8 @@ static void dmz_release_mblock(struct dmz_metadata *zmd,
+ 
+ 	spin_lock(&zmd->mblk_lock);
+ 
+-	if (atomic_dec_and_test(&mblk->ref)) {
++	mblk->ref--;
++	if (mblk->ref == 0) {
+ 		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ 			rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 			dmz_free_mblock(zmd, mblk);
+@@ -508,18 +534,12 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
+ 
+ 	/* Check rbtree */
+ 	spin_lock(&zmd->mblk_lock);
+-	mblk = dmz_lookup_mblock(zmd, mblk_no);
+-	if (mblk) {
+-		/* Cache hit: remove block from LRU list */
+-		if (atomic_inc_return(&mblk->ref) == 1 &&
+-		    !test_bit(DMZ_META_DIRTY, &mblk->state))
+-			list_del_init(&mblk->link);
+-	}
++	mblk = dmz_get_mblock_fast(zmd, mblk_no);
+ 	spin_unlock(&zmd->mblk_lock);
+ 
+ 	if (!mblk) {
+ 		/* Cache miss: read the block from disk */
+-		mblk = dmz_fetch_mblock(zmd, mblk_no);
++		mblk = dmz_get_mblock_slow(zmd, mblk_no);
+ 		if (!mblk)
+ 			return ERR_PTR(-ENOMEM);
+ 	}
+@@ -753,7 +773,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ 
+ 		spin_lock(&zmd->mblk_lock);
+ 		clear_bit(DMZ_META_DIRTY, &mblk->state);
+-		if (atomic_read(&mblk->ref) == 0)
++		if (mblk->ref == 0)
+ 			list_add_tail(&mblk->link, &zmd->mblk_lru_list);
+ 		spin_unlock(&zmd->mblk_lock);
+ 	}
+@@ -2308,7 +2328,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 		mblk = list_first_entry(&zmd->mblk_dirty_list,
+ 					struct dmz_mblock, link);
+ 		dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
++			     (u64)mblk->no, mblk->ref);
+ 		list_del_init(&mblk->link);
+ 		rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 		dmz_free_mblock(zmd, mblk);
+@@ -2326,8 +2346,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 	root = &zmd->mblk_rbtree;
+ 	rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
+ 		dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
+-		atomic_set(&mblk->ref, 0);
++			     (u64)mblk->no, mblk->ref);
++		mblk->ref = 0;
+ 		dmz_free_mblock(zmd, mblk);
+ 	}
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 994aed2f9dff..71665e2c30eb 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -455,10 +455,11 @@ static void md_end_flush(struct bio *fbio)
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -512,10 +513,11 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ 	rcu_read_unlock();
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -5907,14 +5909,6 @@ static void __md_stop(struct mddev *mddev)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+ 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-}
+-
+-void md_stop(struct mddev *mddev)
+-{
+-	/* stop the array and free an attached data structures.
+-	 * This is called from dm-raid
+-	 */
+-	__md_stop(mddev);
+ 	if (mddev->flush_bio_pool) {
+ 		mempool_destroy(mddev->flush_bio_pool);
+ 		mddev->flush_bio_pool = NULL;
+@@ -5923,6 +5917,14 @@ void md_stop(struct mddev *mddev)
+ 		mempool_destroy(mddev->flush_pool);
+ 		mddev->flush_pool = NULL;
+ 	}
++}
++
++void md_stop(struct mddev *mddev)
++{
++	/* stop the array and free an attached data structures.
++	 * This is called from dm-raid
++	 */
++	__md_stop(mddev);
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+ }
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 8e05c1092aef..c9362463d266 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1736,6 +1736,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	 */
+ 	if (rdev->saved_raid_disk >= 0 &&
+ 	    rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		first = last = rdev->saved_raid_disk;
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 8c93d44a052c..e555221fb75b 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1808,6 +1808,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		first = last = rdev->raid_disk;
+ 
+ 	if (rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->geo.raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		mirror = rdev->saved_raid_disk;
+ 	else
+diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
+index b7fad0ec5710..fecba7ddcd00 100644
+--- a/drivers/media/cec/cec-adap.c
++++ b/drivers/media/cec/cec-adap.c
+@@ -325,7 +325,7 @@ static void cec_data_completed(struct cec_data *data)
+  *
+  * This function is called with adap->lock held.
+  */
+-static void cec_data_cancel(struct cec_data *data)
++static void cec_data_cancel(struct cec_data *data, u8 tx_status)
+ {
+ 	/*
+ 	 * It's either the current transmit, or it is a pending
+@@ -340,13 +340,11 @@ static void cec_data_cancel(struct cec_data *data)
+ 	}
+ 
+ 	if (data->msg.tx_status & CEC_TX_STATUS_OK) {
+-		/* Mark the canceled RX as a timeout */
+ 		data->msg.rx_ts = ktime_get_ns();
+-		data->msg.rx_status = CEC_RX_STATUS_TIMEOUT;
++		data->msg.rx_status = CEC_RX_STATUS_ABORTED;
+ 	} else {
+-		/* Mark the canceled TX as an error */
+ 		data->msg.tx_ts = ktime_get_ns();
+-		data->msg.tx_status |= CEC_TX_STATUS_ERROR |
++		data->msg.tx_status |= tx_status |
+ 				       CEC_TX_STATUS_MAX_RETRIES;
+ 		data->msg.tx_error_cnt++;
+ 		data->attempts = 0;
+@@ -374,15 +372,15 @@ static void cec_flush(struct cec_adapter *adap)
+ 	while (!list_empty(&adap->transmit_queue)) {
+ 		data = list_first_entry(&adap->transmit_queue,
+ 					struct cec_data, list);
+-		cec_data_cancel(data);
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 	}
+ 	if (adap->transmitting)
+-		cec_data_cancel(adap->transmitting);
++		cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED);
+ 
+ 	/* Cancel the pending timeout work. */
+ 	list_for_each_entry_safe(data, n, &adap->wait_queue, list) {
+ 		if (cancel_delayed_work(&data->work))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_OK);
+ 		/*
+ 		 * If cancel_delayed_work returned false, then
+ 		 * the cec_wait_timeout function is running,
+@@ -458,12 +456,13 @@ int cec_thread_func(void *_adap)
+ 			 * so much traffic on the bus that the adapter was
+ 			 * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s).
+ 			 */
+-			dprintk(1, "%s: message %*ph timed out\n", __func__,
++			pr_warn("cec-%s: message %*ph timed out\n", adap->name,
+ 				adap->transmitting->msg.len,
+ 				adap->transmitting->msg.msg);
+ 			adap->tx_timeouts++;
+ 			/* Just give up on this. */
+-			cec_data_cancel(adap->transmitting);
++			cec_data_cancel(adap->transmitting,
++					CEC_TX_STATUS_TIMEOUT);
+ 			goto unlock;
+ 		}
+ 
+@@ -498,9 +497,11 @@ int cec_thread_func(void *_adap)
+ 		if (data->attempts) {
+ 			/* should be >= 3 data bit periods for a retry */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_RETRY;
+-		} else if (data->new_initiator) {
++		} else if (adap->last_initiator !=
++			   cec_msg_initiator(&data->msg)) {
+ 			/* should be >= 5 data bit periods for new initiator */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;
++			adap->last_initiator = cec_msg_initiator(&data->msg);
+ 		} else {
+ 			/*
+ 			 * should be >= 7 data bit periods for sending another
+@@ -514,7 +515,7 @@ int cec_thread_func(void *_adap)
+ 		/* Tell the adapter to transmit, cancel on error */
+ 		if (adap->ops->adap_transmit(adap, data->attempts,
+ 					     signal_free_time, &data->msg))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+ unlock:
+ 		mutex_unlock(&adap->lock);
+@@ -685,9 +686,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 			struct cec_fh *fh, bool block)
+ {
+ 	struct cec_data *data;
+-	u8 last_initiator = 0xff;
+-	unsigned int timeout;
+-	int res = 0;
+ 
+ 	msg->rx_ts = 0;
+ 	msg->tx_ts = 0;
+@@ -797,23 +795,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	data->adap = adap;
+ 	data->blocking = block;
+ 
+-	/*
+-	 * Determine if this message follows a message from the same
+-	 * initiator. Needed to determine the free signal time later on.
+-	 */
+-	if (msg->len > 1) {
+-		if (!(list_empty(&adap->transmit_queue))) {
+-			const struct cec_data *last;
+-
+-			last = list_last_entry(&adap->transmit_queue,
+-					       const struct cec_data, list);
+-			last_initiator = cec_msg_initiator(&last->msg);
+-		} else if (adap->transmitting) {
+-			last_initiator =
+-				cec_msg_initiator(&adap->transmitting->msg);
+-		}
+-	}
+-	data->new_initiator = last_initiator != cec_msg_initiator(msg);
+ 	init_completion(&data->c);
+ 	INIT_DELAYED_WORK(&data->work, cec_wait_timeout);
+ 
+@@ -829,48 +810,23 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	if (!block)
+ 		return 0;
+ 
+-	/*
+-	 * If we don't get a completion before this time something is really
+-	 * wrong and we time out.
+-	 */
+-	timeout = CEC_XFER_TIMEOUT_MS;
+-	/* Add the requested timeout if we have to wait for a reply as well */
+-	if (msg->timeout)
+-		timeout += msg->timeout;
+-
+ 	/*
+ 	 * Release the lock and wait, retake the lock afterwards.
+ 	 */
+ 	mutex_unlock(&adap->lock);
+-	res = wait_for_completion_killable_timeout(&data->c,
+-						   msecs_to_jiffies(timeout));
++	wait_for_completion_killable(&data->c);
++	if (!data->completed)
++		cancel_delayed_work_sync(&data->work);
+ 	mutex_lock(&adap->lock);
+ 
+-	if (data->completed) {
+-		/* The transmit completed (possibly with an error) */
+-		*msg = data->msg;
+-		kfree(data);
+-		return 0;
+-	}
+-	/*
+-	 * The wait for completion timed out or was interrupted, so mark this
+-	 * as non-blocking and disconnect from the filehandle since it is
+-	 * still 'in flight'. When it finally completes it will just drop the
+-	 * result silently.
+-	 */
+-	data->blocking = false;
+-	if (data->fh)
+-		list_del(&data->xfer_list);
+-	data->fh = NULL;
++	/* Cancel the transmit if it was interrupted */
++	if (!data->completed)
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+-	if (res == 0) { /* timed out */
+-		/* Check if the reply or the transmit failed */
+-		if (msg->timeout && (msg->tx_status & CEC_TX_STATUS_OK))
+-			msg->rx_status = CEC_RX_STATUS_TIMEOUT;
+-		else
+-			msg->tx_status = CEC_TX_STATUS_MAX_RETRIES;
+-	}
+-	return res > 0 ? 0 : res;
++	/* The transmit completed (possibly with an error) */
++	*msg = data->msg;
++	kfree(data);
++	return 0;
+ }
+ 
+ /* Helper function to be used by drivers and this framework. */
+@@ -1028,6 +984,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	mutex_lock(&adap->lock);
+ 	dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
+ 
++	adap->last_initiator = 0xff;
++
+ 	/* Check if this message was for us (directed or broadcast). */
+ 	if (!cec_msg_is_broadcast(msg))
+ 		valid_la = cec_has_log_addr(adap, msg_dest);
+@@ -1490,6 +1448,8 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ 	}
+ 
+ 	mutex_lock(&adap->devnode.lock);
++	adap->last_initiator = 0xff;
++
+ 	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
+ 	    adap->ops->adap_enable(adap, true)) {
+ 		mutex_unlock(&adap->devnode.lock);
+diff --git a/drivers/media/cec/cec-api.c b/drivers/media/cec/cec-api.c
+index 10b67fc40318..0199765fbae6 100644
+--- a/drivers/media/cec/cec-api.c
++++ b/drivers/media/cec/cec-api.c
+@@ -101,6 +101,23 @@ static long cec_adap_g_phys_addr(struct cec_adapter *adap,
+ 	return 0;
+ }
+ 
++static int cec_validate_phys_addr(u16 phys_addr)
++{
++	int i;
++
++	if (phys_addr == CEC_PHYS_ADDR_INVALID)
++		return 0;
++	for (i = 0; i < 16; i += 4)
++		if (phys_addr & (0xf << i))
++			break;
++	if (i == 16)
++		return 0;
++	for (i += 4; i < 16; i += 4)
++		if ((phys_addr & (0xf << i)) == 0)
++			return -EINVAL;
++	return 0;
++}
++
+ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 				 bool block, __u16 __user *parg)
+ {
+@@ -112,7 +129,7 @@ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 	if (copy_from_user(&phys_addr, parg, sizeof(phys_addr)))
+ 		return -EFAULT;
+ 
+-	err = cec_phys_addr_validate(phys_addr, NULL, NULL);
++	err = cec_validate_phys_addr(phys_addr);
+ 	if (err)
+ 		return err;
+ 	mutex_lock(&adap->lock);
+diff --git a/drivers/media/cec/cec-edid.c b/drivers/media/cec/cec-edid.c
+index ec72ac1c0b91..f587e8eaefd8 100644
+--- a/drivers/media/cec/cec-edid.c
++++ b/drivers/media/cec/cec-edid.c
+@@ -10,66 +10,6 @@
+ #include <linux/types.h>
+ #include <media/cec.h>
+ 
+-/*
+- * This EDID is expected to be a CEA-861 compliant, which means that there are
+- * at least two blocks and one or more of the extensions blocks are CEA-861
+- * blocks.
+- *
+- * The returned location is guaranteed to be < size - 1.
+- */
+-static unsigned int cec_get_edid_spa_location(const u8 *edid, unsigned int size)
+-{
+-	unsigned int blocks = size / 128;
+-	unsigned int block;
+-	u8 d;
+-
+-	/* Sanity check: at least 2 blocks and a multiple of the block size */
+-	if (blocks < 2 || size % 128)
+-		return 0;
+-
+-	/*
+-	 * If there are fewer extension blocks than the size, then update
+-	 * 'blocks'. It is allowed to have more extension blocks than the size,
+-	 * since some hardware can only read e.g. 256 bytes of the EDID, even
+-	 * though more blocks are present. The first CEA-861 extension block
+-	 * should normally be in block 1 anyway.
+-	 */
+-	if (edid[0x7e] + 1 < blocks)
+-		blocks = edid[0x7e] + 1;
+-
+-	for (block = 1; block < blocks; block++) {
+-		unsigned int offset = block * 128;
+-
+-		/* Skip any non-CEA-861 extension blocks */
+-		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
+-			continue;
+-
+-		/* search Vendor Specific Data Block (tag 3) */
+-		d = edid[offset + 2] & 0x7f;
+-		/* Check if there are Data Blocks */
+-		if (d <= 4)
+-			continue;
+-		if (d > 4) {
+-			unsigned int i = offset + 4;
+-			unsigned int end = offset + d;
+-
+-			/* Note: 'end' is always < 'size' */
+-			do {
+-				u8 tag = edid[i] >> 5;
+-				u8 len = edid[i] & 0x1f;
+-
+-				if (tag == 3 && len >= 5 && i + len <= end &&
+-				    edid[i + 1] == 0x03 &&
+-				    edid[i + 2] == 0x0c &&
+-				    edid[i + 3] == 0x00)
+-					return i + 4;
+-				i += len + 1;
+-			} while (i < end);
+-		}
+-	}
+-	return 0;
+-}
+-
+ u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+ 			   unsigned int *offset)
+ {
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+index 3a3dc23c560c..a4341205c197 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+@@ -602,14 +602,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -658,14 +658,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -714,14 +714,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -770,14 +770,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][5] = { 2599, 901, 909 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][6] = { 991, 0, 2966 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][1] = { 2989, 3120, 1180 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][2] = { 1913, 3011, 3009 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][3] = { 1836, 3099, 1105 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][4] = { 2627, 413, 2966 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][5] = { 2576, 943, 951 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][6] = { 1026, 0, 2942 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][1] = { 2989, 3120, 1180 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][2] = { 1913, 3011, 3009 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][3] = { 1836, 3099, 1105 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][4] = { 2627, 413, 2966 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][5] = { 2576, 943, 951 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][6] = { 1026, 0, 2942 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2879, 3022, 874 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][2] = { 1688, 2903, 2901 },
+@@ -826,14 +826,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][5] = { 3001, 800, 799 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3071 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][2] = { 1068, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][3] = { 1068, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][4] = { 2977, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][5] = { 2977, 851, 851 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][2] = { 1068, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][3] = { 1068, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][4] = { 2977, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][5] = { 2977, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 423 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][2] = { 749, 2926, 2926 },
+@@ -882,14 +882,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -922,62 +922,62 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1812, 886, 886 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1812 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 1828, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 1828, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 2633, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 2633, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][2] = { 1828, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][3] = { 1828, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][4] = { 2633, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][5] = { 2633, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][1] = { 2877, 2923, 1058 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][2] = { 1837, 2840, 2916 },
+@@ -994,14 +994,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][5] = { 2517, 1159, 900 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][6] = { 1042, 870, 2917 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][1] = { 2976, 3018, 1315 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][2] = { 2024, 2942, 3011 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][3] = { 1930, 2926, 1256 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][4] = { 2563, 1227, 2916 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][5] = { 2494, 1183, 943 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][6] = { 1073, 916, 2894 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][1] = { 2976, 3018, 1315 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][2] = { 2024, 2942, 3011 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][3] = { 1930, 2926, 1256 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][4] = { 2563, 1227, 2916 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][5] = { 2494, 1183, 943 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][6] = { 1073, 916, 2894 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][1] = { 2864, 2910, 1024 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][2] = { 1811, 2826, 2903 },
+@@ -1050,14 +1050,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][5] = { 2880, 998, 902 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][6] = { 816, 823, 2940 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][1] = { 3029, 3028, 1255 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][2] = { 1406, 2988, 3011 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][3] = { 1398, 2983, 1190 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][4] = { 2860, 1050, 2939 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][5] = { 2857, 1033, 945 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][6] = { 866, 873, 2916 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][1] = { 3029, 3028, 1255 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][2] = { 1406, 2988, 3011 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][3] = { 1398, 2983, 1190 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][4] = { 2860, 1050, 2939 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][5] = { 2857, 1033, 945 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][6] = { 866, 873, 2916 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][1] = { 2923, 2921, 957 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][2] = { 1125, 2877, 2902 },
+@@ -1128,7 +1128,7 @@ static const double rec709_to_240m[3][3] = {
+ 	{ 0.0016327, 0.0044133, 0.9939540 },
+ };
+ 
+-static const double rec709_to_adobergb[3][3] = {
++static const double rec709_to_oprgb[3][3] = {
+ 	{ 0.7151627, 0.2848373, -0.0000000 },
+ 	{ 0.0000000, 1.0000000, 0.0000000 },
+ 	{ -0.0000000, 0.0411705, 0.9588295 },
+@@ -1195,7 +1195,7 @@ static double transfer_rec709_to_rgb(double v)
+ 	return (v < 0.081) ? v / 4.5 : pow((v + 0.099) / 1.099, 1.0 / 0.45);
+ }
+ 
+-static double transfer_rgb_to_adobergb(double v)
++static double transfer_rgb_to_oprgb(double v)
+ {
+ 	return pow(v, 1.0 / 2.19921875);
+ }
+@@ -1251,8 +1251,8 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 	case V4L2_COLORSPACE_470_SYSTEM_M:
+ 		mult_matrix(r, g, b, rec709_to_ntsc1953);
+ 		break;
+-	case V4L2_COLORSPACE_ADOBERGB:
+-		mult_matrix(r, g, b, rec709_to_adobergb);
++	case V4L2_COLORSPACE_OPRGB:
++		mult_matrix(r, g, b, rec709_to_oprgb);
+ 		break;
+ 	case V4L2_COLORSPACE_BT2020:
+ 		mult_matrix(r, g, b, rec709_to_bt2020);
+@@ -1284,10 +1284,10 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 		*g = transfer_rgb_to_srgb(*g);
+ 		*b = transfer_rgb_to_srgb(*b);
+ 		break;
+-	case V4L2_XFER_FUNC_ADOBERGB:
+-		*r = transfer_rgb_to_adobergb(*r);
+-		*g = transfer_rgb_to_adobergb(*g);
+-		*b = transfer_rgb_to_adobergb(*b);
++	case V4L2_XFER_FUNC_OPRGB:
++		*r = transfer_rgb_to_oprgb(*r);
++		*g = transfer_rgb_to_oprgb(*g);
++		*b = transfer_rgb_to_oprgb(*b);
+ 		break;
+ 	case V4L2_XFER_FUNC_DCI_P3:
+ 		*r = transfer_rgb_to_dcip3(*r);
+@@ -1321,7 +1321,7 @@ int main(int argc, char **argv)
+ 		V4L2_COLORSPACE_470_SYSTEM_BG,
+ 		0,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		0,
+ 		V4L2_COLORSPACE_DCI_P3,
+@@ -1336,7 +1336,7 @@ int main(int argc, char **argv)
+ 		"V4L2_COLORSPACE_470_SYSTEM_BG",
+ 		"",
+ 		"V4L2_COLORSPACE_SRGB",
+-		"V4L2_COLORSPACE_ADOBERGB",
++		"V4L2_COLORSPACE_OPRGB",
+ 		"V4L2_COLORSPACE_BT2020",
+ 		"",
+ 		"V4L2_COLORSPACE_DCI_P3",
+@@ -1345,7 +1345,7 @@ int main(int argc, char **argv)
+ 		"",
+ 		"V4L2_XFER_FUNC_709",
+ 		"V4L2_XFER_FUNC_SRGB",
+-		"V4L2_XFER_FUNC_ADOBERGB",
++		"V4L2_XFER_FUNC_OPRGB",
+ 		"V4L2_XFER_FUNC_SMPTE240M",
+ 		"V4L2_XFER_FUNC_NONE",
+ 		"V4L2_XFER_FUNC_DCI_P3",
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index abd4c788dffd..f40ab5704bf0 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -1770,7 +1770,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
+ 				pos[7] = (chr & (0x01 << 0) ? fg : bg);	\
+ 			} \
+ 	\
+-			pos += (tpg->hflip ? -8 : 8) / hdiv;	\
++			pos += (tpg->hflip ? -8 : 8) / (int)hdiv;	\
+ 		}	\
+ 	}	\
+ } while (0)
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+index 5731751d3f2a..cd6e7372ef9c 100644
+--- a/drivers/media/i2c/adv7511.c
++++ b/drivers/media/i2c/adv7511.c
+@@ -1355,10 +1355,10 @@ static int adv7511_set_fmt(struct v4l2_subdev *sd,
+ 	state->xfer_func = format->format.xfer_func;
+ 
+ 	switch (format->format.colorspace) {
+-	case V4L2_COLORSPACE_ADOBERGB:
++	case V4L2_COLORSPACE_OPRGB:
+ 		c = HDMI_COLORIMETRY_EXTENDED;
+-		ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 :
+-			 HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB;
++		ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
++			 HDMI_EXTENDED_COLORIMETRY_OPRGB;
+ 		break;
+ 	case V4L2_COLORSPACE_SMPTE170M:
+ 		c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index cac2081e876e..2437f72f7caf 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2284,8 +2284,10 @@ static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+ 		state->aspect_ratio.numerator = 16;
+ 		state->aspect_ratio.denominator = 9;
+ 
+-		if (!state->edid.present)
++		if (!state->edid.present) {
+ 			state->edid.blocks = 0;
++			cec_phys_addr_invalidate(state->cec_adap);
++		}
+ 
+ 		v4l2_dbg(2, debug, sd, "%s: clear EDID pad %d, edid.present = 0x%x\n",
+ 				__func__, edid->pad, state->edid.present);
+@@ -2474,7 +2476,7 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
+ 		"YCbCr Bt.601 (16-235)", "YCbCr Bt.709 (16-235)",
+ 		"xvYCC Bt.601", "xvYCC Bt.709",
+ 		"YCbCr Bt.601 (0-255)", "YCbCr Bt.709 (0-255)",
+-		"sYCC", "Adobe YCC 601", "AdobeRGB", "invalid", "invalid",
++		"sYCC", "opYCC 601", "opRGB", "invalid", "invalid",
+ 		"invalid", "invalid", "invalid"
+ 	};
+ 	static const char * const rgb_quantization_range_txt[] = {
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index fddac32e5051..ceca6be13ca9 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -786,8 +786,10 @@ static int edid_write_hdmi_segment(struct v4l2_subdev *sd, u8 port)
+ 	/* Disable I2C access to internal EDID ram from HDMI DDC ports */
+ 	rep_write_and_or(sd, 0x77, 0xf3, 0x00);
+ 
+-	if (!state->hdmi_edid.present)
++	if (!state->hdmi_edid.present) {
++		cec_phys_addr_invalidate(state->cec_adap);
+ 		return 0;
++	}
+ 
+ 	pa = cec_get_edid_phys_addr(edid, 256, &spa_loc);
+ 	err = cec_phys_addr_validate(pa, &pa, NULL);
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 3474ef832c1e..480edeebac60 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1810,17 +1810,24 @@ static int ov7670_probe(struct i2c_client *client,
+ 			info->pclk_hb_disable = true;
+ 	}
+ 
+-	info->clk = devm_clk_get(&client->dev, "xclk");
+-	if (IS_ERR(info->clk))
+-		return PTR_ERR(info->clk);
+-	ret = clk_prepare_enable(info->clk);
+-	if (ret)
+-		return ret;
++	info->clk = devm_clk_get(&client->dev, "xclk"); /* optional */
++	if (IS_ERR(info->clk)) {
++		ret = PTR_ERR(info->clk);
++		if (ret == -ENOENT)
++			info->clk = NULL;
++		else
++			return ret;
++	}
++	if (info->clk) {
++		ret = clk_prepare_enable(info->clk);
++		if (ret)
++			return ret;
+ 
+-	info->clock_speed = clk_get_rate(info->clk) / 1000000;
+-	if (info->clock_speed < 10 || info->clock_speed > 48) {
+-		ret = -EINVAL;
+-		goto clk_disable;
++		info->clock_speed = clk_get_rate(info->clk) / 1000000;
++		if (info->clock_speed < 10 || info->clock_speed > 48) {
++			ret = -EINVAL;
++			goto clk_disable;
++		}
+ 	}
+ 
+ 	ret = ov7670_init_gpio(client, info);
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 393bbbbbaad7..865639587a97 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1243,9 +1243,9 @@ static int tc358743_log_status(struct v4l2_subdev *sd)
+ 	u8 vi_status3 =  i2c_rd8(sd, VI_STATUS3);
+ 	const int deep_color_mode[4] = { 8, 10, 12, 16 };
+ 	static const char * const input_color_space[] = {
+-		"RGB", "YCbCr 601", "Adobe RGB", "YCbCr 709", "NA (4)",
++		"RGB", "YCbCr 601", "opRGB", "YCbCr 709", "NA (4)",
+ 		"xvYCC 601", "NA(6)", "xvYCC 709", "NA(8)", "sYCC601",
+-		"NA(10)", "NA(11)", "NA(12)", "Adobe YCC 601"};
++		"NA(10)", "NA(11)", "NA(12)", "opYCC 601"};
+ 
+ 	v4l2_info(sd, "-----Chip status-----\n");
+ 	v4l2_info(sd, "Chip ID: 0x%02x\n",
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 76e6bed5a1da..805bd9c65940 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1534,7 +1534,7 @@ static int tvp5150_probe(struct i2c_client *c,
+ 			27000000, 1, 27000000);
+ 	v4l2_ctrl_new_std_menu_items(&core->hdl, &tvp5150_ctrl_ops,
+ 				     V4L2_CID_TEST_PATTERN,
+-				     ARRAY_SIZE(tvp5150_test_patterns),
++				     ARRAY_SIZE(tvp5150_test_patterns) - 1,
+ 				     0, 0, tvp5150_test_patterns);
+ 	sd->ctrl_handler = &core->hdl;
+ 	if (core->hdl.error) {
+diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
+index 477c80a4d44c..cd4c8230563c 100644
+--- a/drivers/media/platform/vivid/vivid-core.h
++++ b/drivers/media/platform/vivid/vivid-core.h
+@@ -111,7 +111,7 @@ enum vivid_colorspace {
+ 	VIVID_CS_170M,
+ 	VIVID_CS_709,
+ 	VIVID_CS_SRGB,
+-	VIVID_CS_ADOBERGB,
++	VIVID_CS_OPRGB,
+ 	VIVID_CS_2020,
+ 	VIVID_CS_DCI_P3,
+ 	VIVID_CS_240M,
+diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
+index 6b0bfa091592..e1185f0f6607 100644
+--- a/drivers/media/platform/vivid/vivid-ctrls.c
++++ b/drivers/media/platform/vivid/vivid-ctrls.c
+@@ -348,7 +348,7 @@ static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		V4L2_COLORSPACE_SMPTE170M,
+ 		V4L2_COLORSPACE_REC709,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		V4L2_COLORSPACE_DCI_P3,
+ 		V4L2_COLORSPACE_SMPTE240M,
+@@ -729,7 +729,7 @@ static const char * const vivid_ctrl_colorspace_strings[] = {
+ 	"SMPTE 170M",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"BT.2020",
+ 	"DCI-P3",
+ 	"SMPTE 240M",
+@@ -752,7 +752,7 @@ static const char * const vivid_ctrl_xfer_func_strings[] = {
+ 	"Default",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"SMPTE 240M",
+ 	"None",
+ 	"DCI-P3",
+diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
+index 51fec66d8d45..50248e2176a0 100644
+--- a/drivers/media/platform/vivid/vivid-vid-out.c
++++ b/drivers/media/platform/vivid/vivid-vid-out.c
+@@ -413,7 +413,7 @@ int vivid_try_fmt_vid_out(struct file *file, void *priv,
+ 		mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+ 	} else if (mp->colorspace != V4L2_COLORSPACE_SMPTE170M &&
+ 		   mp->colorspace != V4L2_COLORSPACE_REC709 &&
+-		   mp->colorspace != V4L2_COLORSPACE_ADOBERGB &&
++		   mp->colorspace != V4L2_COLORSPACE_OPRGB &&
+ 		   mp->colorspace != V4L2_COLORSPACE_BT2020 &&
+ 		   mp->colorspace != V4L2_COLORSPACE_SRGB) {
+ 		mp->colorspace = V4L2_COLORSPACE_REC709;
+diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+index 1aa88d94e57f..e28bd8836751 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+@@ -31,6 +31,7 @@ MODULE_PARM_DESC(disable_rc, "Disable inbuilt IR receiver.");
+ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
+ 
+ struct dvbsky_state {
++	struct mutex stream_mutex;
+ 	u8 ibuf[DVBSKY_BUF_LEN];
+ 	u8 obuf[DVBSKY_BUF_LEN];
+ 	u8 last_lock;
+@@ -67,17 +68,18 @@ static int dvbsky_usb_generic_rw(struct dvb_usb_device *d,
+ 
+ static int dvbsky_stream_ctrl(struct dvb_usb_device *d, u8 onoff)
+ {
++	struct dvbsky_state *state = d_to_priv(d);
+ 	int ret;
+-	static u8 obuf_pre[3] = { 0x37, 0, 0 };
+-	static u8 obuf_post[3] = { 0x36, 3, 0 };
++	u8 obuf_pre[3] = { 0x37, 0, 0 };
++	u8 obuf_post[3] = { 0x36, 3, 0 };
+ 
+-	mutex_lock(&d->usb_mutex);
+-	ret = dvb_usbv2_generic_rw_locked(d, obuf_pre, 3, NULL, 0);
++	mutex_lock(&state->stream_mutex);
++	ret = dvbsky_usb_generic_rw(d, obuf_pre, 3, NULL, 0);
+ 	if (!ret && onoff) {
+ 		msleep(20);
+-		ret = dvb_usbv2_generic_rw_locked(d, obuf_post, 3, NULL, 0);
++		ret = dvbsky_usb_generic_rw(d, obuf_post, 3, NULL, 0);
+ 	}
+-	mutex_unlock(&d->usb_mutex);
++	mutex_unlock(&state->stream_mutex);
+ 	return ret;
+ }
+ 
+@@ -606,6 +608,8 @@ static int dvbsky_init(struct dvb_usb_device *d)
+ 	if (ret)
+ 		return ret;
+ 	*/
++	mutex_init(&state->stream_mutex);
++
+ 	state->last_lock = 0;
+ 
+ 	return 0;
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index ff5e41ac4723..98d6c8fcd262 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -2141,13 +2141,13 @@ const struct em28xx_board em28xx_boards[] = {
+ 		.input           = { {
+ 			.type     = EM28XX_VMUX_COMPOSITE,
+ 			.vmux     = TVP5150_COMPOSITE1,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 
+ 		}, {
+ 			.type     = EM28XX_VMUX_SVIDEO,
+ 			.vmux     = TVP5150_SVIDEO,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 		} },
+ 	},
+@@ -3041,6 +3041,9 @@ static int em28xx_hint_board(struct em28xx *dev)
+ 
+ static void em28xx_card_setup(struct em28xx *dev)
+ {
++	int i, j, idx;
++	bool duplicate_entry;
++
+ 	/*
+ 	 * If the device can be a webcam, seek for a sensor.
+ 	 * If sensor is not found, then it isn't a webcam.
+@@ -3197,6 +3200,32 @@ static void em28xx_card_setup(struct em28xx *dev)
+ 	/* Allow override tuner type by a module parameter */
+ 	if (tuner >= 0)
+ 		dev->tuner_type = tuner;
++
++	/*
++	 * Dynamically generate a list of valid audio inputs for this
++	 * specific board, mapping them via enum em28xx_amux.
++	 */
++
++	idx = 0;
++	for (i = 0; i < MAX_EM28XX_INPUT; i++) {
++		if (!INPUT(i)->type)
++			continue;
++
++		/* Skip already mapped audio inputs */
++		duplicate_entry = false;
++		for (j = 0; j < idx; j++) {
++			if (INPUT(i)->amux == dev->amux_map[j]) {
++				duplicate_entry = true;
++				break;
++			}
++		}
++		if (duplicate_entry)
++			continue;
++
++		dev->amux_map[idx++] = INPUT(i)->amux;
++	}
++	for (; idx < MAX_EM28XX_INPUT; idx++)
++		dev->amux_map[idx] = EM28XX_AMUX_UNUSED;
+ }
+ 
+ void em28xx_setup_xc3028(struct em28xx *dev, struct xc2028_ctrl *ctl)
+diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
+index 68571bf36d28..3bf98ac897ec 100644
+--- a/drivers/media/usb/em28xx/em28xx-video.c
++++ b/drivers/media/usb/em28xx/em28xx-video.c
+@@ -1093,6 +1093,8 @@ int em28xx_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	em28xx_videodbg("%s\n", __func__);
+ 
++	dev->v4l2->field_count = 0;
++
+ 	/*
+ 	 * Make sure streaming is not already in progress for this type
+ 	 * of filehandle (e.g. video, vbi)
+@@ -1471,9 +1473,9 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+ 
+ 	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+ 	if (!fmt) {
+-		em28xx_videodbg("Fourcc format (%08x) invalid.\n",
+-				f->fmt.pix.pixelformat);
+-		return -EINVAL;
++		fmt = &format[0];
++		em28xx_videodbg("Fourcc format (%08x) invalid. Using default (%08x).\n",
++				f->fmt.pix.pixelformat, fmt->fourcc);
+ 	}
+ 
+ 	if (dev->board.is_em2800) {
+@@ -1666,6 +1668,7 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ {
+ 	struct em28xx *dev = video_drvdata(file);
+ 	unsigned int       n;
++	int j;
+ 
+ 	n = i->index;
+ 	if (n >= MAX_EM28XX_INPUT)
+@@ -1685,6 +1688,12 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ 	if (dev->is_webcam)
+ 		i->capabilities = 0;
+ 
++	/* Dynamically generates an audioset bitmask */
++	i->audioset = 0;
++	for (j = 0; j < MAX_EM28XX_INPUT; j++)
++		if (dev->amux_map[j] != EM28XX_AMUX_UNUSED)
++			i->audioset |= 1 << j;
++
+ 	return 0;
+ }
+ 
+@@ -1710,11 +1719,24 @@ static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
+ 	return 0;
+ }
+ 
+-static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++static int em28xx_fill_audio_input(struct em28xx *dev,
++				   const char *s,
++				   struct v4l2_audio *a,
++				   unsigned int index)
+ {
+-	struct em28xx *dev = video_drvdata(file);
++	unsigned int idx = dev->amux_map[index];
++
++	/*
++	 * With msp3400, almost all mappings use the default (amux = 0).
++	 * The only one may use a different value is WinTV USB2, where it
++	 * can also be SCART1 input.
++	 * As it is very doubtful that we would see new boards with msp3400,
++	 * let's just reuse the existing switch.
++	 */
++	if (dev->has_msp34xx && idx != EM28XX_AMUX_UNUSED)
++		idx = EM28XX_AMUX_LINE_IN;
+ 
+-	switch (a->index) {
++	switch (idx) {
+ 	case EM28XX_AMUX_VIDEO:
+ 		strcpy(a->name, "Television");
+ 		break;
+@@ -1739,32 +1761,79 @@ static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
+ 	case EM28XX_AMUX_PCM_OUT:
+ 		strcpy(a->name, "PCM");
+ 		break;
++	case EM28XX_AMUX_UNUSED:
+ 	default:
+ 		return -EINVAL;
+ 	}
+-
+-	a->index = dev->ctl_ainput;
++	a->index = index;
+ 	a->capability = V4L2_AUDCAP_STEREO;
+ 
++	em28xx_videodbg("%s: audio input index %d is '%s'\n",
++			s, a->index, a->name);
++
+ 	return 0;
+ }
+ 
++static int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++
++	if (a->index >= MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	return em28xx_fill_audio_input(dev, __func__, a, a->index);
++}
++
++static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++	int i;
++
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (dev->ctl_ainput == dev->amux_map[i])
++			return em28xx_fill_audio_input(dev, __func__, a, i);
++
++	/* Should never happen! */
++	return -EINVAL;
++}
++
+ static int vidioc_s_audio(struct file *file, void *priv,
+ 			  const struct v4l2_audio *a)
+ {
+ 	struct em28xx *dev = video_drvdata(file);
++	int idx, i;
+ 
+ 	if (a->index >= MAX_EM28XX_INPUT)
+ 		return -EINVAL;
+-	if (!INPUT(a->index)->type)
++
++	idx = dev->amux_map[a->index];
++
++	if (idx == EM28XX_AMUX_UNUSED)
+ 		return -EINVAL;
+ 
+-	dev->ctl_ainput = INPUT(a->index)->amux;
+-	dev->ctl_aoutput = INPUT(a->index)->aout;
++	dev->ctl_ainput = idx;
++
++	/*
++	 * FIXME: This is wrong, as different inputs at em28xx_cards
++	 * may have different audio outputs. So, the right thing
++	 * to do is to implement VIDIOC_G_AUDOUT/VIDIOC_S_AUDOUT.
++	 * With the current board definitions, this would work fine,
++	 * as, currently, all boards fit.
++	 */
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (idx == dev->amux_map[i])
++			break;
++	if (i == MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	dev->ctl_aoutput = INPUT(i)->aout;
+ 
+ 	if (!dev->ctl_aoutput)
+ 		dev->ctl_aoutput = EM28XX_AOUT_MASTER;
+ 
++	em28xx_videodbg("%s: set audio input to %d\n", __func__,
++			dev->ctl_ainput);
++
+ 	return 0;
+ }
+ 
+@@ -2302,6 +2371,7 @@ static const struct v4l2_ioctl_ops video_ioctl_ops = {
+ 	.vidioc_try_fmt_vbi_cap     = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_s_fmt_vbi_cap       = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_enum_framesizes     = vidioc_enum_framesizes,
++	.vidioc_enumaudio           = vidioc_enumaudio,
+ 	.vidioc_g_audio             = vidioc_g_audio,
+ 	.vidioc_s_audio             = vidioc_s_audio,
+ 
+diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
+index 953caac025f2..a551072e62ed 100644
+--- a/drivers/media/usb/em28xx/em28xx.h
++++ b/drivers/media/usb/em28xx/em28xx.h
+@@ -335,6 +335,9 @@ enum em28xx_usb_audio_type {
+ /**
+  * em28xx_amux - describes the type of audio input used by em28xx
+  *
++ * @EM28XX_AMUX_UNUSED:
++ *	Used only on em28xx dev->map field, in order to mark an entry
++ *	as unused.
+  * @EM28XX_AMUX_VIDEO:
+  *	On devices without AC97, this is the only value that it is currently
+  *	allowed.
+@@ -369,7 +372,8 @@ enum em28xx_usb_audio_type {
+  * same time, via the alsa mux.
+  */
+ enum em28xx_amux {
+-	EM28XX_AMUX_VIDEO,
++	EM28XX_AMUX_UNUSED = -1,
++	EM28XX_AMUX_VIDEO = 0,
+ 	EM28XX_AMUX_LINE_IN,
+ 
+ 	/* Some less-common mixer setups */
+@@ -692,6 +696,8 @@ struct em28xx {
+ 	unsigned int ctl_input;	// selected input
+ 	unsigned int ctl_ainput;// selected audio input
+ 	unsigned int ctl_aoutput;// selected audio output
++	enum em28xx_amux amux_map[MAX_EM28XX_INPUT];
++
+ 	int mute;
+ 	int volume;
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index c81faea96fba..c7c600c1f63b 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -837,9 +837,9 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 		switch (avi->colorimetry) {
+ 		case HDMI_COLORIMETRY_EXTENDED:
+ 			switch (avi->extended_colorimetry) {
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+@@ -908,10 +908,10 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+ 				c.xfer_func = V4L2_XFER_FUNC_SRGB;
+ 				break;
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c
+index 29b7164a823b..d28ebe7ecd21 100644
+--- a/drivers/mfd/menelaus.c
++++ b/drivers/mfd/menelaus.c
+@@ -1094,6 +1094,7 @@ static void menelaus_rtc_alarm_work(struct menelaus_chip *m)
+ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ {
+ 	int	alarm = (m->client->irq > 0);
++	int	err;
+ 
+ 	/* assume 32KDETEN pin is pulled high */
+ 	if (!(menelaus_read_reg(MENELAUS_OSC_CTRL) & 0x80)) {
+@@ -1101,6 +1102,12 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		return;
+ 	}
+ 
++	m->rtc = devm_rtc_allocate_device(&m->client->dev);
++	if (IS_ERR(m->rtc))
++		return;
++
++	m->rtc->ops = &menelaus_rtc_ops;
++
+ 	/* support RTC alarm; it can issue wakeups */
+ 	if (alarm) {
+ 		if (menelaus_add_irq_work(MENELAUS_RTCALM_IRQ,
+@@ -1125,10 +1132,8 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		menelaus_write_reg(MENELAUS_RTC_CTRL, m->rtc_control);
+ 	}
+ 
+-	m->rtc = rtc_device_register(DRIVER_NAME,
+-			&m->client->dev,
+-			&menelaus_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(m->rtc)) {
++	err = rtc_register_device(m->rtc);
++	if (err) {
+ 		if (alarm) {
+ 			menelaus_remove_irq_work(MENELAUS_RTCALM_IRQ);
+ 			device_init_wakeup(&m->client->dev, 0);
+diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h
+index 1c3967f10f55..1f94fb436c3c 100644
+--- a/drivers/misc/genwqe/card_base.h
++++ b/drivers/misc/genwqe/card_base.h
+@@ -408,7 +408,7 @@ struct genwqe_file {
+ 	struct file *filp;
+ 
+ 	struct fasync_struct *async_queue;
+-	struct task_struct *owner;
++	struct pid *opener;
+ 	struct list_head list;		/* entry in list of open files */
+ 
+ 	spinlock_t map_lock;		/* lock for dma_mappings */
+diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
+index 0dd6b5ef314a..66f222f24da3 100644
+--- a/drivers/misc/genwqe/card_dev.c
++++ b/drivers/misc/genwqe/card_dev.c
+@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ {
+ 	unsigned long flags;
+ 
+-	cfile->owner = current;
++	cfile->opener = get_pid(task_tgid(current));
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_add(&cfile->list, &cd->file_list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_del(&cfile->list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
++	put_pid(cfile->opener);
+ 
+ 	return 0;
+ }
+@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct genwqe_dev *cd, int sig)
+ 	return files;
+ }
+ 
+-static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
++static int genwqe_terminate(struct genwqe_dev *cd)
+ {
+ 	unsigned int files = 0;
+ 	unsigned long flags;
+@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
+ 
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_for_each_entry(cfile, &cd->file_list, list) {
+-		force_sig(sig, cfile->owner);
++		kill_pid(cfile->opener, SIGKILL, 1);
+ 		files++;
+ 	}
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -1357,7 +1358,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
+ 		dev_warn(&pci_dev->dev,
+ 			 "[%s] send SIGKILL and wait ...\n", __func__);
+ 
+-		rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
++		rc = genwqe_terminate(cd);
+ 		if (rc) {
+ 			/* Give kill_timout more seconds to end processes */
+ 			for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
+diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
+index 2e30de9c694a..57a6bb1fd3c9 100644
+--- a/drivers/misc/ocxl/config.c
++++ b/drivers/misc/ocxl/config.c
+@@ -280,7 +280,9 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
+ 	u32 val;
+ 	int rc, templ_major, templ_minor, len;
+ 
+-	pci_write_config_word(dev, fn->dvsec_afu_info_pos, afu_idx);
++	pci_write_config_byte(dev,
++			fn->dvsec_afu_info_pos + OCXL_DVSEC_AFU_INFO_AFU_IDX,
++			afu_idx);
+ 	rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
+index d7eaf1eb11e7..003bfba40758 100644
+--- a/drivers/misc/vmw_vmci/vmci_driver.c
++++ b/drivers/misc/vmw_vmci/vmci_driver.c
+@@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+ MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+-MODULE_VERSION("1.1.5.0-k");
++MODULE_VERSION("1.1.6.0-k");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 1ab6e8737a5f..da1ee2e1ba99 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -57,7 +57,8 @@ static struct vmci_resource *vmci_resource_lookup(struct vmci_handle handle,
+ 
+ 		if (r->type == type &&
+ 		    rid == handle.resource &&
+-		    (cid == handle.context || cid == VMCI_INVALID_ID)) {
++		    (cid == handle.context || cid == VMCI_INVALID_ID ||
++		     handle.context == VMCI_INVALID_ID)) {
+ 			resource = r;
+ 			break;
+ 		}
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 32321bd596d8..c61109f7b793 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -76,6 +76,7 @@ struct sdhci_acpi_slot {
+ 	size_t		priv_size;
+ 	int (*probe_slot)(struct platform_device *, const char *, const char *);
+ 	int (*remove_slot)(struct platform_device *);
++	int (*free_slot)(struct platform_device *pdev);
+ 	int (*setup_host)(struct platform_device *pdev);
+ };
+ 
+@@ -756,6 +757,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ err_cleanup:
+ 	sdhci_cleanup_host(c->host);
+ err_free:
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 	return err;
+ }
+@@ -777,6 +781,10 @@ static int sdhci_acpi_remove(struct platform_device *pdev)
+ 
+ 	dead = (sdhci_readl(c->host, SDHCI_INT_STATUS) == ~0);
+ 	sdhci_remove_host(c->host, dead);
++
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 555970a29c94..34326d95d254 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -367,6 +367,9 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
+ 		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch);
+ 		break;
+ 	case PCI_DEVICE_ID_O2_SEABIRD0:
++		if (chip->pdev->revision == 0x01)
++			chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
++		/* fall through */
+ 	case PCI_DEVICE_ID_O2_SEABIRD1:
+ 		/* UnLock WP */
+ 		ret = pci_read_config_byte(chip->pdev,
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index e686fe73159e..a1fd6f6f5414 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2081,6 +2081,10 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
+ 	nand_np = dev->of_node;
+ 	nfc_np = of_find_compatible_node(dev->of_node, NULL,
+ 					 "atmel,sama5d3-nfc");
++	if (!nfc_np) {
++		dev_err(dev, "Could not find device node for sama5d3-nfc\n");
++		return -ENODEV;
++	}
+ 
+ 	nc->clk = of_clk_get(nfc_np, 0);
+ 	if (IS_ERR(nc->clk)) {
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index c502075e5721..ff955f085351 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -28,6 +28,7 @@
+ MODULE_LICENSE("GPL");
+ 
+ #define DENALI_NAND_NAME    "denali-nand"
++#define DENALI_DEFAULT_OOB_SKIP_BYTES	8
+ 
+ /* for Indexed Addressing */
+ #define DENALI_INDEXED_CTRL	0x00
+@@ -1106,12 +1107,17 @@ static void denali_hw_init(struct denali_nand_info *denali)
+ 		denali->revision = swab16(ioread32(denali->reg + REVISION));
+ 
+ 	/*
+-	 * tell driver how many bit controller will skip before
+-	 * writing ECC code in OOB, this register may be already
+-	 * set by firmware. So we read this value out.
+-	 * if this value is 0, just let it be.
++	 * Set how many bytes should be skipped before writing data in OOB.
++	 * If a non-zero value has already been set (by firmware or something),
++	 * just use it.  Otherwise, set the driver default.
+ 	 */
+ 	denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
++	if (!denali->oob_skip_bytes) {
++		denali->oob_skip_bytes = DENALI_DEFAULT_OOB_SKIP_BYTES;
++		iowrite32(denali->oob_skip_bytes,
++			  denali->reg + SPARE_AREA_SKIP_BYTES);
++	}
++
+ 	denali_detect_max_banks(denali);
+ 	iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
+ 	iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index c88588815ca1..a3477cbf6115 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -691,7 +691,7 @@ static irqreturn_t marvell_nfc_isr(int irq, void *dev_id)
+ 
+ 	marvell_nfc_disable_int(nfc, st & NDCR_ALL_INT);
+ 
+-	if (!(st & (NDSR_RDDREQ | NDSR_WRDREQ | NDSR_WRCMDREQ)))
++	if (st & (NDSR_RDY(0) | NDSR_RDY(1)))
+ 		complete(&nfc->complete);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c
+index 7d9620c7ff6c..1ff3430f82c8 100644
+--- a/drivers/mtd/spi-nor/fsl-quadspi.c
++++ b/drivers/mtd/spi-nor/fsl-quadspi.c
+@@ -478,6 +478,7 @@ static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
+ {
+ 	switch (cmd) {
+ 	case SPINOR_OP_READ_1_1_4:
++	case SPINOR_OP_READ_1_1_4_4B:
+ 		return SEQID_READ;
+ 	case SPINOR_OP_WREN:
+ 		return SEQID_WREN;
+@@ -543,6 +544,9 @@ fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
+ 
+ 	/* trigger the LUT now */
+ 	seqid = fsl_qspi_get_seqid(q, cmd);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, (seqid << QUADSPI_IPCR_SEQID_SHIFT) | len,
+ 			base + QUADSPI_IPCR);
+ 
+@@ -671,7 +675,7 @@ static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
+  * causes the controller to clear the buffer, and use the sequence pointed
+  * by the QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
+  */
+-static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
++static int fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ {
+ 	void __iomem *base = q->iobase;
+ 	int seqid;
+@@ -696,8 +700,13 @@ static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ 
+ 	/* Set the default lut sequence for AHB Read. */
+ 	seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
+ 		q->iobase + QUADSPI_BFGENCR);
++
++	return 0;
+ }
+ 
+ /* This function was used to prepare and enable QSPI clock */
+@@ -805,9 +814,7 @@ static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
+ 	fsl_qspi_init_lut(q);
+ 
+ 	/* Init for AHB read */
+-	fsl_qspi_init_ahb_read(q);
+-
+-	return 0;
++	return fsl_qspi_init_ahb_read(q);
+ }
+ 
+ static const struct of_device_id fsl_qspi_dt_ids[] = {
+diff --git a/drivers/mtd/spi-nor/intel-spi-pci.c b/drivers/mtd/spi-nor/intel-spi-pci.c
+index c0976f2e3dd1..872b40922608 100644
+--- a/drivers/mtd/spi-nor/intel-spi-pci.c
++++ b/drivers/mtd/spi-nor/intel-spi-pci.c
+@@ -65,6 +65,7 @@ static void intel_spi_pci_remove(struct pci_dev *pdev)
+ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x18e0), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0x19e0), (unsigned long)&bxt_info },
++	{ PCI_VDEVICE(INTEL, 0x34a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa224), (unsigned long)&bxt_info },
+ 	{ },
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 46af8052e535..152a65d46e0b 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -110,6 +110,9 @@ int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
+ 	err = mv88e6xxx_phy_page_get(chip, phy, page);
+ 	if (!err) {
+ 		err = mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
++		if (!err)
++			err = mv88e6xxx_phy_write(chip, phy, reg, val);
++
+ 		mv88e6xxx_phy_page_put(chip, phy);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 34af5f1569c8..de0e24d912fe 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -342,7 +342,7 @@ static struct device_node *bcmgenet_mii_of_find_mdio(struct bcmgenet_priv *priv)
+ 	if (!compat)
+ 		return NULL;
+ 
+-	priv->mdio_dn = of_find_compatible_node(dn, NULL, compat);
++	priv->mdio_dn = of_get_compatible_child(dn, compat);
+ 	kfree(compat);
+ 	if (!priv->mdio_dn) {
+ 		dev_err(kdev, "unable to find MDIO bus node\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9d69621f5ab4..542f16074dc9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1907,6 +1907,7 @@ static int is_valid_clean_head(struct hns3_enet_ring *ring, int h)
+ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ {
+ 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
++	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	struct netdev_queue *dev_queue;
+ 	int bytes, pkts;
+ 	int head;
+@@ -1953,7 +1954,8 @@ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ 		 * sees the new next_to_clean.
+ 		 */
+ 		smp_mb();
+-		if (netif_tx_queue_stopped(dev_queue)) {
++		if (netif_tx_queue_stopped(dev_queue) &&
++		    !test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
+ 			netif_tx_wake_queue(dev_queue);
+ 			ring->stats.restart_queue++;
+ 		}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 11620e003a8e..967a625c040d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -310,7 +310,7 @@ static void hns3_self_test(struct net_device *ndev,
+ 			h->flags & HNAE3_SUPPORT_MAC_LOOPBACK;
+ 
+ 	if (if_running)
+-		dev_close(ndev);
++		ndev->netdev_ops->ndo_stop(ndev);
+ 
+ #if IS_ENABLED(CONFIG_VLAN_8021Q)
+ 	/* Disable the vlan filter for selftest does not support it */
+@@ -348,7 +348,7 @@ static void hns3_self_test(struct net_device *ndev,
+ #endif
+ 
+ 	if (if_running)
+-		dev_open(ndev);
++		ndev->netdev_ops->ndo_open(ndev);
+ }
+ 
+ static int hns3_get_sset_count(struct net_device *netdev, int stringset)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 955f0e3d5c95..b4c0597a392d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -79,6 +79,7 @@ static int hclge_ieee_getets(struct hnae3_handle *h, struct ieee_ets *ets)
+ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 			      u8 *tc, bool *changed)
+ {
++	bool has_ets_tc = false;
+ 	u32 total_ets_bw = 0;
+ 	u8 max_tc = 0;
+ 	u8 i;
+@@ -106,13 +107,14 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 
+ 			total_ets_bw += ets->tc_tx_bw[i];
+-		break;
++			has_ets_tc = true;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+ 	}
+ 
+-	if (total_ets_bw != BW_PERCENT)
++	if (has_ets_tc && total_ets_bw != BW_PERCENT)
+ 		return -EINVAL;
+ 
+ 	*tc = max_tc + 1;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 13f43b74fd6d..9f2bea64c522 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1669,11 +1669,13 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev,
+ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 				struct hclge_pkt_buf_alloc *buf_alloc)
+ {
+-	u32 rx_all = hdev->pkt_buf_size;
++#define HCLGE_BUF_SIZE_UNIT	128
++	u32 rx_all = hdev->pkt_buf_size, aligned_mps;
+ 	int no_pfc_priv_num, pfc_priv_num;
+ 	struct hclge_priv_buf *priv;
+ 	int i;
+ 
++	aligned_mps = round_up(hdev->mps, HCLGE_BUF_SIZE_UNIT);
+ 	rx_all -= hclge_get_tx_buff_alloced(buf_alloc);
+ 
+ 	/* When DCB is not supported, rx private
+@@ -1692,13 +1694,13 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 		if (hdev->hw_tc_map & BIT(i)) {
+ 			priv->enable = 1;
+ 			if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+-				priv->wl.low = hdev->mps;
+-				priv->wl.high = priv->wl.low + hdev->mps;
++				priv->wl.low = aligned_mps;
++				priv->wl.high = priv->wl.low + aligned_mps;
+ 				priv->buf_size = priv->wl.high +
+ 						HCLGE_DEFAULT_DV;
+ 			} else {
+ 				priv->wl.low = 0;
+-				priv->wl.high = 2 * hdev->mps;
++				priv->wl.high = 2 * aligned_mps;
+ 				priv->buf_size = priv->wl.high;
+ 			}
+ 		} else {
+@@ -1730,11 +1732,11 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 
+ 		if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+ 			priv->wl.low = 128;
+-			priv->wl.high = priv->wl.low + hdev->mps;
++			priv->wl.high = priv->wl.low + aligned_mps;
+ 			priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;
+ 		} else {
+ 			priv->wl.low = 0;
+-			priv->wl.high = hdev->mps;
++			priv->wl.high = aligned_mps;
+ 			priv->buf_size = priv->wl.high;
+ 		}
+ 	}
+@@ -2396,6 +2398,9 @@ static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
+ 	int mac_state;
+ 	int link_stat;
+ 
++	if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
++		return 0;
++
+ 	mac_state = hclge_get_mac_link_status(hdev);
+ 
+ 	if (hdev->hw.mac.phydev) {
+@@ -3789,6 +3794,8 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	struct hclge_dev *hdev = vport->back;
+ 	int i;
+ 
++	set_bit(HCLGE_STATE_DOWN, &hdev->state);
++
+ 	del_timer_sync(&hdev->service_timer);
+ 	cancel_work_sync(&hdev->service_task);
+ 	clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
+@@ -4679,9 +4686,17 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
+ 			"Add vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+ 	} else {
++#define HCLGE_VF_VLAN_DEL_NO_FOUND	1
+ 		if (!req0->resp_code)
+ 			return 0;
+ 
++		if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) {
++			dev_warn(&hdev->pdev->dev,
++				 "vlan %d filter is not in vf vlan table\n",
++				 vlan);
++			return 0;
++		}
++
+ 		dev_err(&hdev->pdev->dev,
+ 			"Kill vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+@@ -4725,6 +4740,9 @@ static int hclge_set_vlan_filter_hw(struct hclge_dev *hdev, __be16 proto,
+ 	u16 vport_idx, vport_num = 0;
+ 	int ret;
+ 
++	if (is_kill && !vlan_id)
++		return 0;
++
+ 	ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id,
+ 				       0, proto);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 12aa1f1b99ef..6090a7cd83e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -299,6 +299,9 @@ void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+ 
+ 	client = handle->client;
+ 
++	link_state =
++		test_bit(HCLGEVF_STATE_DOWN, &hdev->state) ? 0 : link_state;
++
+ 	if (link_state != hdev->hw.mac.link) {
+ 		client->ops->link_status_change(handle, !!link_state);
+ 		hdev->hw.mac.link = link_state;
+@@ -1439,6 +1442,8 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 	int i, queue_id;
+ 
++	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
++
+ 	for (i = 0; i < hdev->num_tqps; i++) {
+ 		/* Ring disable */
+ 		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index ed071ea75f20..ce12824a8325 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -39,9 +39,9 @@
+ extern const char ice_drv_ver[];
+ #define ICE_BAR0		0
+ #define ICE_DFLT_NUM_DESC	128
+-#define ICE_MIN_NUM_DESC	8
+-#define ICE_MAX_NUM_DESC	8160
+ #define ICE_REQ_DESC_MULTIPLE	32
++#define ICE_MIN_NUM_DESC	ICE_REQ_DESC_MULTIPLE
++#define ICE_MAX_NUM_DESC	8160
+ #define ICE_DFLT_TRAFFIC_CLASS	BIT(0)
+ #define ICE_INT_NAME_STR_LEN	(IFNAMSIZ + 16)
+ #define ICE_ETHTOOL_FWVER_LEN	32
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 62be72fdc8f3..e783976c401d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -518,22 +518,31 @@ shutdown_sq_out:
+ 
+ /**
+  * ice_aq_ver_check - Check the reported AQ API version.
+- * @fw_branch: The "branch" of FW, typically describes the device type
+- * @fw_major: The major version of the FW API
+- * @fw_minor: The minor version increment of the FW API
++ * @hw: pointer to the hardware structure
+  *
+  * Checks if the driver should load on a given AQ API version.
+  *
+  * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+  */
+-static bool ice_aq_ver_check(u8 fw_branch, u8 fw_major, u8 fw_minor)
++static bool ice_aq_ver_check(struct ice_hw *hw)
+ {
+-	if (fw_branch != EXP_FW_API_VER_BRANCH)
+-		return false;
+-	if (fw_major != EXP_FW_API_VER_MAJOR)
+-		return false;
+-	if (fw_minor != EXP_FW_API_VER_MINOR)
++	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
++		/* Major API version is newer than expected, don't load */
++		dev_warn(ice_hw_to_dev(hw),
++			 "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+ 		return false;
++	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
++		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
++		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	} else {
++		/* Major API version is older than expected, log a warning */
++		dev_info(ice_hw_to_dev(hw),
++			 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	}
+ 	return true;
+ }
+ 
+@@ -588,8 +597,7 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	if (status)
+ 		goto init_ctrlq_free_rq;
+ 
+-	if (!ice_aq_ver_check(hw->api_branch, hw->api_maj_ver,
+-			      hw->api_min_ver)) {
++	if (!ice_aq_ver_check(hw)) {
+ 		status = ICE_ERR_FW_API_VER;
+ 		goto init_ctrlq_free_rq;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index c71a9b528d6d..9d6754f65a1a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -478,9 +478,11 @@ ice_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	ring->tx_max_pending = ICE_MAX_NUM_DESC;
+ 	ring->rx_pending = vsi->rx_rings[0]->count;
+ 	ring->tx_pending = vsi->tx_rings[0]->count;
+-	ring->rx_mini_pending = ICE_MIN_NUM_DESC;
++
++	/* Rx mini and jumbo rings are not supported */
+ 	ring->rx_mini_max_pending = 0;
+ 	ring->rx_jumbo_max_pending = 0;
++	ring->rx_mini_pending = 0;
+ 	ring->rx_jumbo_pending = 0;
+ }
+ 
+@@ -498,14 +500,23 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	    ring->tx_pending < ICE_MIN_NUM_DESC ||
+ 	    ring->rx_pending > ICE_MAX_NUM_DESC ||
+ 	    ring->rx_pending < ICE_MIN_NUM_DESC) {
+-		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d]\n",
++		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+ 			   ring->tx_pending, ring->rx_pending,
+-			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC);
++			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC,
++			   ICE_REQ_DESC_MULTIPLE);
+ 		return -EINVAL;
+ 	}
+ 
+ 	new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_tx_cnt != ring->tx_pending)
++		netdev_info(netdev,
++			    "Requested Tx descriptor count rounded up to %d\n",
++			    new_tx_cnt);
+ 	new_rx_cnt = ALIGN(ring->rx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_rx_cnt != ring->rx_pending)
++		netdev_info(netdev,
++			    "Requested Rx descriptor count rounded up to %d\n",
++			    new_rx_cnt);
+ 
+ 	/* if nothing to do return success */
+ 	if (new_tx_cnt == vsi->tx_rings[0]->count &&
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index da4322e4daed..add124e0381d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -676,6 +676,9 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
+ 	} else {
+ 		struct tx_sa tsa;
+ 
++		if (adapter->num_vfs)
++			return -EOPNOTSUPP;
++
+ 		/* find the first unused index */
+ 		ret = ixgbe_ipsec_find_empty_idx(ipsec, false);
+ 		if (ret < 0) {
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 59416eddd840..ce28d474b929 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -3849,6 +3849,10 @@ static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
+ 		skb_checksum_help(skb);
+ 		goto no_csum;
+ 	}
++
++	if (first->protocol == htons(ETH_P_IP))
++		type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++
+ 	/* update TX checksum flag */
+ 	first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ 	vlan_macip_lens = skb_checksum_start_offset(skb) -
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
+index 4a6d2db75071..417fbcc64f00 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
+@@ -314,12 +314,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ 
+ 	switch (off) {
+ 	case offsetof(struct iphdr, daddr):
+-		set_ip_addr->ipv4_dst_mask = mask;
+-		set_ip_addr->ipv4_dst = exact;
++		set_ip_addr->ipv4_dst_mask |= mask;
++		set_ip_addr->ipv4_dst &= ~mask;
++		set_ip_addr->ipv4_dst |= exact & mask;
+ 		break;
+ 	case offsetof(struct iphdr, saddr):
+-		set_ip_addr->ipv4_src_mask = mask;
+-		set_ip_addr->ipv4_src = exact;
++		set_ip_addr->ipv4_src_mask |= mask;
++		set_ip_addr->ipv4_src &= ~mask;
++		set_ip_addr->ipv4_src |= exact & mask;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -333,11 +335,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ }
+ 
+ static void
+-nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
++nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
+ 		      struct nfp_fl_set_ipv6_addr *ip6)
+ {
+-	ip6->ipv6[idx % 4].mask = mask;
+-	ip6->ipv6[idx % 4].exact = exact;
++	ip6->ipv6[word].mask |= mask;
++	ip6->ipv6[word].exact &= ~mask;
++	ip6->ipv6[word].exact |= exact & mask;
+ 
+ 	ip6->reserved = cpu_to_be16(0);
+ 	ip6->head.jump_id = opcode_tag;
+@@ -350,6 +353,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	       struct nfp_fl_set_ipv6_addr *ip_src)
+ {
+ 	__be32 exact, mask;
++	u8 word;
+ 
+ 	/* We are expecting tcf_pedit to return a big endian value */
+ 	mask = (__force __be32)~tcf_pedit_mask(action, idx);
+@@ -358,17 +362,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	if (exact & ~mask)
+ 		return -EOPNOTSUPP;
+ 
+-	if (off < offsetof(struct ipv6hdr, saddr))
++	if (off < offsetof(struct ipv6hdr, saddr)) {
+ 		return -EOPNOTSUPP;
+-	else if (off < offsetof(struct ipv6hdr, daddr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr)) {
++		word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
+ 				      exact, mask, ip_src);
+-	else if (off < offsetof(struct ipv6hdr, daddr) +
+-		       sizeof(struct in6_addr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr) +
++		       sizeof(struct in6_addr)) {
++		word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
+ 				      exact, mask, ip_dst);
+-	else
++	} else {
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index db463e20a876..e9a4179e7e48 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -96,6 +96,7 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	if (count < 2)
+@@ -114,8 +115,12 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index,
+-				    eth_port.port_lanes / count);
++	/* Special case the 100G CXP -> 2x40G split */
++	lanes = eth_port.port_lanes / count;
++	if (eth_port.lanes == 10 && count == 2)
++		lanes = 8 / count;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+@@ -128,6 +133,7 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	mutex_lock(&pf->lock);
+@@ -143,7 +149,12 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index, eth_port.port_lanes);
++	/* Special case the 100G CXP -> 2x40G unsplit */
++	lanes = eth_port.port_lanes;
++	if (eth_port.port_lanes == 8)
++		lanes = 10;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index b48f76182049..10b075bc5959 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
+ 
+ 	qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
+ 	ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
+-	ql_write_nvram_reg(qdev, spir,
+-			   ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
+ }
+ 
+ /*
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index f18087102d40..41bcbdd355f0 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,20 +7539,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	switch (tp->mac_version) {
+-	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
++	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-		break;
+-	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
+-		/* This version was reported to have issues with resume
+-		 * from suspend when using MSI-X
+-		 */
+-		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+-		break;
+-	default:
++	} else {
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index e080d3e7c582..4d7d53fbc0ef 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -945,6 +945,9 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
+ 	dring->head = 0;
+ 	dring->tail = 0;
+ 	dring->pkt_cnt = 0;
++
++	if (id == NETSEC_RING_TX)
++		netdev_reset_queue(priv->ndev);
+ }
+ 
+ static void netsec_free_dring(struct netsec_priv *priv, int id)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index f9a61f90cfbc..0f660af01a4b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -714,8 +714,9 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		return -ENODEV;
+ 	}
+ 
+-	mdio_internal = of_find_compatible_node(mdio_mux, NULL,
++	mdio_internal = of_get_compatible_child(mdio_mux,
+ 						"allwinner,sun8i-h3-mdio-internal");
++	of_node_put(mdio_mux);
+ 	if (!mdio_internal) {
+ 		dev_err(priv->device, "Cannot get internal_mdio node\n");
+ 		return -ENODEV;
+@@ -729,13 +730,20 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		gmac->rst_ephy = of_reset_control_get_exclusive(iphynode, NULL);
+ 		if (IS_ERR(gmac->rst_ephy)) {
+ 			ret = PTR_ERR(gmac->rst_ephy);
+-			if (ret == -EPROBE_DEFER)
++			if (ret == -EPROBE_DEFER) {
++				of_node_put(iphynode);
++				of_node_put(mdio_internal);
+ 				return ret;
++			}
+ 			continue;
+ 		}
+ 		dev_info(priv->device, "Found internal PHY node\n");
++		of_node_put(iphynode);
++		of_node_put(mdio_internal);
+ 		return 0;
+ 	}
++
++	of_node_put(mdio_internal);
+ 	return -ENODEV;
+ }
+ 
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index 4f390fa557e4..8ec02f1a3be8 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -602,6 +602,9 @@ static int net_failover_slave_unregister(struct net_device *slave_dev,
+ 	primary_dev = rtnl_dereference(nfo_info->primary_dev);
+ 	standby_dev = rtnl_dereference(nfo_info->standby_dev);
+ 
++	if (WARN_ON_ONCE(slave_dev != primary_dev && slave_dev != standby_dev))
++		return -ENODEV;
++
+ 	vlan_vids_del_by_dev(slave_dev, failover_dev);
+ 	dev_uc_unsync(slave_dev, failover_dev);
+ 	dev_mc_unsync(slave_dev, failover_dev);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 5827fccd4f29..44a0770de142 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -907,6 +907,9 @@ void phylink_start(struct phylink *pl)
+ 		    phylink_an_mode_str(pl->link_an_mode),
+ 		    phy_modes(pl->link_config.interface));
+ 
++	/* Always set the carrier off */
++	netif_carrier_off(pl->netdev);
++
+ 	/* Apply the link configuration to the MAC when starting. This allows
+ 	 * a fixed-link to start with the correct parameters, and also
+ 	 * ensures that we set the appropriate advertisement for Serdes links.
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 725dd63f8413..546081993ecf 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2304,6 +2304,8 @@ static void tun_setup(struct net_device *dev)
+ static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
+ 			struct netlink_ext_ack *extack)
+ {
++	if (!data)
++		return 0;
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 2319f79b34f0..e6d23b6895bd 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -1869,6 +1869,12 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	if (ret)
+ 		dev_kfree_skb_any(skb);
+ 
++	if (ret == -EAGAIN) {
++		ath10k_warn(ar, "wmi command %d timeout, restarting hardware\n",
++			    cmd_id);
++		queue_work(ar->workqueue, &ar->restart_work);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+index d8b79cb72b58..e7584b842dce 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+@@ -77,6 +77,8 @@ static u16 d11ac_bw(enum brcmu_chan_bw bw)
+ 		return BRCMU_CHSPEC_D11AC_BW_40;
+ 	case BRCMU_CHAN_BW_80:
+ 		return BRCMU_CHSPEC_D11AC_BW_80;
++	case BRCMU_CHAN_BW_160:
++		return BRCMU_CHSPEC_D11AC_BW_160;
+ 	default:
+ 		WARN_ON(1);
+ 	}
+@@ -190,8 +192,38 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch)
+ 			break;
+ 		}
+ 		break;
+-	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	case BRCMU_CHSPEC_D11AC_BW_160:
++		switch (ch->sb) {
++		case BRCMU_CHAN_SB_LLL:
++			ch->control_ch_num -= CH_70MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LLU:
++			ch->control_ch_num -= CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUL:
++			ch->control_ch_num -= CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUU:
++			ch->control_ch_num -= CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULL:
++			ch->control_ch_num += CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULU:
++			ch->control_ch_num += CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUL:
++			ch->control_ch_num += CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUU:
++			ch->control_ch_num += CH_70MHZ_APART;
++			break;
++		default:
++			WARN_ON_ONCE(1);
++			break;
++		}
++		break;
++	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+index 7b9a77981df1..75b2a0438cfa 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+@@ -29,6 +29,8 @@
+ #define CH_UPPER_SB			0x01
+ #define CH_LOWER_SB			0x02
+ #define CH_EWA_VALID			0x04
++#define CH_70MHZ_APART			14
++#define CH_50MHZ_APART			10
+ #define CH_30MHZ_APART			6
+ #define CH_20MHZ_APART			4
+ #define CH_10MHZ_APART			2
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 866c91c923be..dd674dcf1a0a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -669,8 +669,12 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
+ 	enabled = !!(wifi_pkg->package.elements[1].integer.value);
+ 	n_profiles = wifi_pkg->package.elements[2].integer.value;
+ 
+-	/* in case of BIOS bug */
+-	if (n_profiles <= 0) {
++	/*
++	 * Check the validity of n_profiles.  The EWRD profiles start
++	 * from index 1, so the maximum value allowed here is
++	 * ACPI_SAR_PROFILE_NUM - 1.
++	 */
++	if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index a6e072234398..da45dc972889 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1232,12 +1232,15 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ 	iwl_mvm_del_aux_sta(mvm);
+ 
+ 	/*
+-	 * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()
+-	 * won't be called in this case).
++	 * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
++	 * hw (as restart_complete() won't be called in this case) and mac80211
++	 * won't execute the restart.
+ 	 * But make sure to cleanup interfaces that have gone down before/during
+ 	 * HW restart was requested.
+ 	 */
+-	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
++	    test_and_clear_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++			       &mvm->status))
+ 		ieee80211_iterate_interfaces(mvm->hw, 0,
+ 					     iwl_mvm_cleanup_iterator, mvm);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 642da10b0b7f..fccb3a4f9d57 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1218,7 +1218,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	    !(info->flags & IEEE80211_TX_STAT_AMPDU))
+ 		return;
+ 
+-	rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate);
++	if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band,
++				    &tx_resp_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+ 	/* Disable last tx check if we are debugging with fixed rate but
+@@ -1269,7 +1273,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	 */
+ 	table = &lq_sta->lq;
+ 	lq_hwrate = le32_to_cpu(table->rs_table[0]);
+-	rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);
++	if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	/* Here we actually compare this rate to the latest LQ command */
+ 	if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {
+@@ -1371,8 +1378,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		/* Collect data for each rate used during failed TX attempts */
+ 		for (i = 0; i <= retries; ++i) {
+ 			lq_hwrate = le32_to_cpu(table->rs_table[i]);
+-			rs_rate_from_ucode_rate(lq_hwrate, info->band,
+-						&lq_rate);
++			if (rs_rate_from_ucode_rate(lq_hwrate, info->band,
++						    &lq_rate)) {
++				WARN_ON_ONCE(1);
++				return;
++			}
++
+ 			/*
+ 			 * Only collect stats if retried rate is in the same RS
+ 			 * table as active/search.
+@@ -3241,7 +3252,10 @@ static void rs_build_rates_table_from_fixed(struct iwl_mvm *mvm,
+ 	for (i = 0; i < num_rates; i++)
+ 		lq_cmd->rs_table[i] = ucode_rate_le32;
+ 
+-	rs_rate_from_ucode_rate(ucode_rate, band, &rate);
++	if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	if (is_mimo(&rate))
+ 		lq_cmd->mimo_delim = num_rates - 1;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index cf2591f2ac23..2d35b70de2ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1385,6 +1385,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 	while (!skb_queue_empty(&skbs)) {
+ 		struct sk_buff *skb = __skb_dequeue(&skbs);
+ 		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++		struct ieee80211_hdr *hdr = (void *)skb->data;
+ 		bool flushed = false;
+ 
+ 		skb_freed++;
+@@ -1429,11 +1430,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 			info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
+ 		info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+ 
+-		/* W/A FW bug: seq_ctl is wrong when the status isn't success */
+-		if (status != TX_STATUS_SUCCESS) {
+-			struct ieee80211_hdr *hdr = (void *)skb->data;
++		/* W/A FW bug: seq_ctl is wrong upon failure / BAR frame */
++		if (ieee80211_is_back_req(hdr->frame_control))
++			seq_ctl = 0;
++		else if (status != TX_STATUS_SUCCESS)
+ 			seq_ctl = le16_to_cpu(hdr->seq_ctrl);
+-		}
+ 
+ 		if (unlikely(!seq_ctl)) {
+ 			struct ieee80211_hdr *hdr = (void *)skb->data;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index d15f5ba2dc77..cb5631c85d16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1050,6 +1050,14 @@ void iwl_pcie_rx_free(struct iwl_trans *trans)
+ 	kfree(trans_pcie->rxq);
+ }
+ 
++static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
++					  struct iwl_rb_allocator *rba)
++{
++	spin_lock(&rba->lock);
++	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
++	spin_unlock(&rba->lock);
++}
++
+ /*
+  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
+  *
+@@ -1081,9 +1089,7 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
+ 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
+ 		/* Move the 2 RBDs to the allocator ownership.
+ 		 Allocator has another 6 from pool for the request completion*/
+-		spin_lock(&rba->lock);
+-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-		spin_unlock(&rba->lock);
++		iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 
+ 		atomic_inc(&rba->req_pending);
+ 		queue_work(rba->alloc_wq, &rba->rx_alloc);
+@@ -1261,10 +1267,18 @@ restart:
+ 		IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
+ 
+ 	while (i != r) {
++		struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 		struct iwl_rx_mem_buffer *rxb;
+-
+-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
++		/* number of RBDs still waiting for page allocation */
++		u32 rb_pending_alloc =
++			atomic_read(&trans_pcie->rba.req_pending) *
++			RX_CLAIM_REQ_ALLOC;
++
++		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
++			     !emergency)) {
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 			emergency = true;
++		}
+ 
+ 		if (trans->cfg->mq_rx_supported) {
+ 			/*
+@@ -1307,17 +1321,13 @@ restart:
+ 			iwl_pcie_rx_allocator_get(trans, rxq);
+ 
+ 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
+-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
+-
+ 			/* Add the remaining empty RBDs for allocator use */
+-			spin_lock(&rba->lock);
+-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-			spin_unlock(&rba->lock);
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 		} else if (emergency) {
+ 			count++;
+ 			if (count == 8) {
+ 				count = 0;
+-				if (rxq->used_count < rxq->queue_size / 3)
++				if (rb_pending_alloc < rxq->queue_size / 3)
+ 					emergency = false;
+ 
+ 				rxq->read = i;
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index ffea610f67e2..10ba94c2b35b 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
+ 			  cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
+ 	if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
+ 		lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index 8985446570bd..190c699d6e3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -725,8 +725,7 @@ __mt76x2_mac_set_beacon(struct mt76x2_dev *dev, u8 bcn_idx, struct sk_buff *skb)
+ 	if (skb) {
+ 		ret = mt76_write_beacon(dev, beacon_addr, skb);
+ 		if (!ret)
+-			dev->beacon_data_mask |= BIT(bcn_idx) &
+-						 dev->beacon_mask;
++			dev->beacon_data_mask |= BIT(bcn_idx);
+ 	} else {
+ 		dev->beacon_data_mask &= ~BIT(bcn_idx);
+ 		for (i = 0; i < beacon_len; i += 4)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 6ce6b754df12..45a1b86491b6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -266,15 +266,17 @@ static void rsi_rx_done_handler(struct urb *urb)
+ 	if (urb->status)
+ 		goto out;
+ 
+-	if (urb->actual_length <= 0) {
+-		rsi_dbg(INFO_ZONE, "%s: Zero length packet\n", __func__);
++	if (urb->actual_length <= 0 ||
++	    urb->actual_length > rx_cb->rx_skb->len) {
++		rsi_dbg(INFO_ZONE, "%s: Invalid packet length = %d\n",
++			__func__, urb->actual_length);
+ 		goto out;
+ 	}
+ 	if (skb_queue_len(&dev->rx_q) >= RSI_MAX_RX_PKTS) {
+ 		rsi_dbg(INFO_ZONE, "Max RX packets reached\n");
+ 		goto out;
+ 	}
+-	skb_put(rx_cb->rx_skb, urb->actual_length);
++	skb_trim(rx_cb->rx_skb, urb->actual_length);
+ 	skb_queue_tail(&dev->rx_q, rx_cb->rx_skb);
+ 
+ 	rsi_set_event(&dev->rx_thread.event);
+@@ -308,6 +310,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 	if (!skb)
+ 		return -ENOMEM;
+ 	skb_reserve(skb, MAX_DWORD_ALIGN_BYTES);
++	skb_put(skb, RSI_MAX_RX_USB_PKT_SIZE - MAX_DWORD_ALIGN_BYTES);
+ 	dword_align_bytes = (unsigned long)skb->data & 0x3f;
+ 	if (dword_align_bytes > 0)
+ 		skb_push(skb, dword_align_bytes);
+@@ -319,7 +322,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 			  usb_rcvbulkpipe(dev->usbdev,
+ 			  dev->bulkin_endpoint_addr[ep_num - 1]),
+ 			  urb->transfer_buffer,
+-			  RSI_MAX_RX_USB_PKT_SIZE,
++			  skb->len,
+ 			  rsi_rx_done_handler,
+ 			  rx_cb);
+ 
+diff --git a/drivers/nfc/nfcmrvl/uart.c b/drivers/nfc/nfcmrvl/uart.c
+index 91162f8e0366..9a22056e8d9e 100644
+--- a/drivers/nfc/nfcmrvl/uart.c
++++ b/drivers/nfc/nfcmrvl/uart.c
+@@ -73,10 +73,9 @@ static int nfcmrvl_uart_parse_dt(struct device_node *node,
+ 	struct device_node *matched_node;
+ 	int ret;
+ 
+-	matched_node = of_find_compatible_node(node, NULL, "marvell,nfc-uart");
++	matched_node = of_get_compatible_child(node, "marvell,nfc-uart");
+ 	if (!matched_node) {
+-		matched_node = of_find_compatible_node(node, NULL,
+-						       "mrvl,nfc-uart");
++		matched_node = of_get_compatible_child(node, "mrvl,nfc-uart");
+ 		if (!matched_node)
+ 			return -ENODEV;
+ 	}
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 8aae6dcc839f..9148015ed803 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -488,6 +488,8 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
+ 		put_device(dev);
+ 	}
+ 	put_device(dev);
++	if (dev->parent)
++		put_device(dev->parent);
+ }
+ 
+ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+@@ -507,6 +509,8 @@ void __nd_device_register(struct device *dev)
+ 	if (!dev)
+ 		return;
+ 	dev->bus = &nvdimm_bus_type;
++	if (dev->parent)
++		get_device(dev->parent);
+ 	get_device(dev);
+ 	async_schedule_domain(nd_async_device_register, dev,
+ 			&nd_async_domain);
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 8b1fd7f1a224..2245cfb8c6ab 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -393,9 +393,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		addr = devm_memremap_pages(dev, &pmem->pgmap);
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
+-	} else
++	} else {
+ 		addr = devm_memremap(dev, pmem->phys_addr,
+ 				pmem->size, ARCH_MEMREMAP_PMEM);
++		memcpy(&bb_res, &nsio->res, sizeof(bb_res));
++	}
+ 
+ 	/*
+ 	 * At release time the queue must be frozen before
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index c30d5af02cc2..63cb01ef4ef0 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -545,10 +545,17 @@ static ssize_t region_badblocks_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev);
++	ssize_t rc;
+ 
+-	return badblocks_show(&nd_region->bb, buf, 0);
+-}
++	device_lock(dev);
++	if (dev->driver)
++		rc = badblocks_show(&nd_region->bb, buf, 0);
++	else
++		rc = -ENXIO;
++	device_unlock(dev);
+ 
++	return rc;
++}
+ static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL);
+ 
+ static ssize_t resource_show(struct device *dev,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index bf65501e6ed6..f1f375fb362b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3119,8 +3119,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 	}
+ 
+ 	mutex_lock(&ns->ctrl->subsys->lock);
+-	nvme_mpath_clear_current_path(ns);
+ 	list_del_rcu(&ns->siblings);
++	nvme_mpath_clear_current_path(ns);
+ 	mutex_unlock(&ns->ctrl->subsys->lock);
+ 
+ 	down_write(&ns->ctrl->namespaces_rwsem);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 514d1dfc5630..122b52d0ebfd 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -518,11 +518,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 			goto err_device_del;
+ 	}
+ 
+-	if (config->cells)
+-		nvmem_add_cells(nvmem, config->cells, config->ncells);
++	if (config->cells) {
++		rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
++		if (rval)
++			goto err_teardown_compat;
++	}
+ 
+ 	return nvmem;
+ 
++err_teardown_compat:
++	if (config->compat)
++		device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
+ err_device_del:
+ 	device_del(&nvmem->dev);
+ err_put_device:
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 7af0ddec936b..20988c426650 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -425,6 +425,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
+ 		dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
+ 			count, pstate_count);
+ 		ret = -ENOENT;
++		_dev_pm_opp_remove_table(opp_table, dev, false);
+ 		goto put_opp_table;
+ 	}
+ 
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 345aab56ce8b..78ed6cc8d521 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+ };
+ 
+ /*
+- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
++ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
+  * @dra7xx: the dra7xx device where the workaround should be applied
+  *
+  * Access to the PCIe slave port that are not 32-bit aligned will result
+@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+  *
+  * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
+  */
+-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
++static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
+ {
+ 	int ret;
+ 	struct device_node *np = dev->of_node;
+@@ -704,6 +704,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_RC);
++
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
++		if (ret)
++			dev_err(dev, "WA for Errata i870 not applied\n");
++
+ 		ret = dra7xx_add_pcie_port(dra7xx, pdev);
+ 		if (ret < 0)
+ 			goto err_gpio;
+@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_EP);
+ 
+-		ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
+ 		if (ret)
+ 			goto err_gpio;
+ 
+diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c
+index e3fe4124e3af..a67dc91261f5 100644
+--- a/drivers/pci/controller/pcie-cadence-ep.c
++++ b/drivers/pci/controller/pcie-cadence-ep.c
+@@ -259,7 +259,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 				     u8 intx, bool is_asserted)
+ {
+ 	struct cdns_pcie *pcie = &ep->pcie;
+-	u32 r = ep->max_regions - 1;
+ 	u32 offset;
+ 	u16 status;
+ 	u8 msg_code;
+@@ -269,8 +268,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
+ 							     ep->irq_phys_addr);
+ 		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
+ 		ep->irq_pci_fn = fn;
+@@ -348,8 +347,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region(pcie, fn, 0,
+ 					      false,
+ 					      ep->irq_phys_addr,
+ 					      pci_addr & ~pci_addr_mask,
+@@ -510,6 +509,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
+ 		goto free_epc_mem;
+ 	}
+ 	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
++	/* Reserve region 0 for IRQs */
++	set_bit(0, &ep->ob_region_map);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 861dda69f366..c5ff6ca65eab 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -337,6 +337,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
+ {
+ 	struct mtk_pcie *pcie = bus->sysdata;
+ 	struct mtk_pcie_port *port;
++	struct pci_dev *dev = NULL;
++
++	/*
++	 * Walk the bus hierarchy to get the devfn value
++	 * of the port in the root bus.
++	 */
++	while (bus && bus->number) {
++		dev = bus->self;
++		bus = dev->bus;
++		devfn = dev->devfn;
++	}
+ 
+ 	list_for_each_entry(port, &pcie->ports, list)
+ 		if (port->slot == PCI_SLOT(devfn))
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 942b64fc7f1f..fd2dbd7eed7b 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
+ 	int i, best = 1;
+ 	unsigned long flags;
+ 
+-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
++	if (vmd->msix_count == 1)
+ 		return &vmd->irqs[0];
+ 
++	/*
++	 * White list for fast-interrupt handlers. All others will share the
++	 * "slow" interrupt vector.
++	 */
++	switch (msi_desc_to_pci_dev(desc)->class) {
++	case PCI_CLASS_STORAGE_EXPRESS:
++		break;
++	default:
++		return &vmd->irqs[0];
++	}
++
+ 	raw_spin_lock_irqsave(&list_lock, flags);
+ 	for (i = 1; i < vmd->msix_count; i++)
+ 		if (vmd->irqs[i].count < vmd->irqs[best].count)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 4d88afdfc843..f7b7cb7189eb 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
+ 			}
+ 		}
+ 	}
+-	WARN_ON(!!dev->msix_enabled);
+ 
+ 	/* Check whether driver already requested for MSI irq */
+ 	if (dev->msi_enabled) {
+@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (!pci_msi_supported(dev, minvec))
+ 		return -EINVAL;
+ 
+-	WARN_ON(!!dev->msi_enabled);
+-
+ 	/* Check whether driver already requested MSI-X irqs */
+ 	if (dev->msix_enabled) {
+ 		pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
+@@ -1039,6 +1036,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msi_enabled))
++		return -EINVAL;
++
+ 	nvec = pci_msi_vec_count(dev);
+ 	if (nvec < 0)
+ 		return nvec;
+@@ -1087,6 +1087,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msix_enabled))
++		return -EINVAL;
++
+ 	for (;;) {
+ 		if (affd) {
+ 			nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 5d1698265da5..d2b04ab37308 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -779,19 +779,33 @@ static void pci_acpi_setup(struct device *dev)
+ 		return;
+ 
+ 	device_set_wakeup_capable(dev, true);
++	/*
++	 * For bridges that can do D3 we enable wake automatically (as
++	 * we do for the power management itself in that case). The
++	 * reason is that the bridge may have additional methods such as
++	 * _DSW that need to be called.
++	 */
++	if (pci_dev->bridge_d3)
++		device_wakeup_enable(dev);
++
+ 	acpi_pci_wakeup(pci_dev, false);
+ }
+ 
+ static void pci_acpi_cleanup(struct device *dev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
++	struct pci_dev *pci_dev = to_pci_dev(dev);
+ 
+ 	if (!adev)
+ 		return;
+ 
+ 	pci_acpi_remove_pm_notifier(adev);
+-	if (adev->wakeup.flags.valid)
++	if (adev->wakeup.flags.valid) {
++		if (pci_dev->bridge_d3)
++			device_wakeup_disable(dev);
++
+ 		device_set_wakeup_capable(dev, false);
++	}
+ }
+ 
+ static bool pci_acpi_bus_match(struct device *dev)
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index c687c817b47d..6322c3f446bc 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 	 * All PCIe functions are in one slot, remove one function will remove
+ 	 * the whole slot, so just wait until we are the last function left.
+ 	 */
+-	if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
++	if (!list_empty(&parent->subordinate->devices))
+ 		goto out;
+ 
+ 	link = parent->link_state;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d1e2d175c10b..a4d11d14b196 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3177,7 +3177,11 @@ static void disable_igfx_irq(struct pci_dev *dev)
+ 
+ 	pci_iounmap(dev, regs);
+ }
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
+ 
+diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
+index 5e3d0dced2b8..b08945a7bbfd 100644
+--- a/drivers/pci/remove.c
++++ b/drivers/pci/remove.c
+@@ -26,9 +26,6 @@ static void pci_stop_dev(struct pci_dev *dev)
+ 
+ 		pci_dev_assign_added(dev, false);
+ 	}
+-
+-	if (dev->bus->self)
+-		pcie_aspm_exit_link_state(dev);
+ }
+ 
+ static void pci_destroy_dev(struct pci_dev *dev)
+@@ -42,6 +39,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
+ 	list_del(&dev->bus_list);
+ 	up_write(&pci_bus_sem);
+ 
++	pcie_aspm_exit_link_state(dev);
+ 	pci_bridge_d3_update(dev);
+ 	pci_free_resources(dev);
+ 	put_device(&dev->dev);
+diff --git a/drivers/pcmcia/ricoh.h b/drivers/pcmcia/ricoh.h
+index 01098c841f87..8ac7b138c094 100644
+--- a/drivers/pcmcia/ricoh.h
++++ b/drivers/pcmcia/ricoh.h
+@@ -119,6 +119,10 @@
+ #define  RL5C4XX_MISC_CONTROL           0x2F /* 8 bit */
+ #define  RL5C4XX_ZV_ENABLE              0x08
+ 
++/* Misc Control 3 Register */
++#define RL5C4XX_MISC3			0x00A2 /* 16 bit */
++#define  RL5C47X_MISC3_CB_CLKRUN_DIS	BIT(1)
++
+ #ifdef __YENTA_H
+ 
+ #define rl_misc(socket)		((socket)->private[0])
+@@ -156,6 +160,35 @@ static void ricoh_set_zv(struct yenta_socket *socket)
+         }
+ }
+ 
++static void ricoh_set_clkrun(struct yenta_socket *socket, bool quiet)
++{
++	u16 misc3;
++
++	/*
++	 * RL5C475II likely has this setting, too, however no datasheet
++	 * is publicly available for this chip
++	 */
++	if (socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C476 &&
++	    socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C478)
++		return;
++
++	if (socket->dev->revision < 0x80)
++		return;
++
++	misc3 = config_readw(socket, RL5C4XX_MISC3);
++	if (misc3 & RL5C47X_MISC3_CB_CLKRUN_DIS) {
++		if (!quiet)
++			dev_dbg(&socket->dev->dev,
++				"CLKRUN feature already disabled\n");
++	} else if (disable_clkrun) {
++		if (!quiet)
++			dev_info(&socket->dev->dev,
++				 "Disabling CLKRUN feature\n");
++		misc3 |= RL5C47X_MISC3_CB_CLKRUN_DIS;
++		config_writew(socket, RL5C4XX_MISC3, misc3);
++	}
++}
++
+ static void ricoh_save_state(struct yenta_socket *socket)
+ {
+ 	rl_misc(socket) = config_readw(socket, RL5C4XX_MISC);
+@@ -172,6 +205,7 @@ static void ricoh_restore_state(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_16BIT_IO_0, rl_io(socket));
+ 	config_writew(socket, RL5C4XX_16BIT_MEM_0, rl_mem(socket));
+ 	config_writew(socket, RL5C4XX_CONFIG, rl_config(socket));
++	ricoh_set_clkrun(socket, true);
+ }
+ 
+ 
+@@ -197,6 +231,7 @@ static int ricoh_override(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_CONFIG, config);
+ 
+ 	ricoh_set_zv(socket);
++	ricoh_set_clkrun(socket, false);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index ab3da2262f0f..ac6a3f46b1e6 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -26,7 +26,8 @@
+ 
+ static bool disable_clkrun;
+ module_param(disable_clkrun, bool, 0444);
+-MODULE_PARM_DESC(disable_clkrun, "If PC card doesn't function properly, please try this option");
++MODULE_PARM_DESC(disable_clkrun,
++		 "If PC card doesn't function properly, please try this option (TI and Ricoh bridges only)");
+ 
+ static bool isa_probe = 1;
+ module_param(isa_probe, bool, 0444);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index 6556dbeae65e..ac251c62bc66 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -319,6 +319,8 @@ static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function,
+ 	pad->function = function;
+ 
+ 	ret = pmic_mpp_write_mode_ctl(state, pad);
++	if (ret < 0)
++		return ret;
+ 
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+@@ -343,13 +345,12 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN;
++		if (pad->pullup != PMIC_MPP_PULL_UP_OPEN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		switch (pad->pullup) {
+-		case PMIC_MPP_PULL_UP_OPEN:
+-			arg = 0;
+-			break;
+ 		case PMIC_MPP_PULL_UP_0P6KOHM:
+ 			arg = 600;
+ 			break;
+@@ -364,13 +365,17 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		}
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+@@ -382,7 +387,9 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pad->amux_input;
+ 		break;
+ 	case PMIC_MPP_CONF_PAIRED:
+-		arg = pad->paired;
++		if (!pad->paired)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = pad->drive_strength;
+@@ -455,7 +462,7 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			pad->dtest = arg;
+ 			break;
+ 		case PIN_CONFIG_DRIVE_STRENGTH:
+-			arg = pad->drive_strength;
++			pad->drive_strength = arg;
+ 			break;
+ 		case PMIC_MPP_CONF_AMUX_ROUTE:
+ 			if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4)
+@@ -502,6 +509,10 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_SINK_CTL, pad->drive_strength);
++	if (ret < 0)
++		return ret;
++
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+ 	return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val);
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+index f53e32a9d8fc..0e153bae322e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+@@ -260,22 +260,32 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_NP;
++		if (pin->bias != PM8XXX_GPIO_BIAS_NP)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_PD;
++		if (pin->bias != PM8XXX_GPIO_BIAS_PD)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30;
++		if (pin->bias > PM8XXX_GPIO_BIAS_PU_1P5_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PM8XXX_QCOM_PULL_UP_STRENGTH:
+ 		arg = pin->pull_up_strength;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = pin->disable;
++		if (!pin->disable)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pin->mode == PM8XXX_GPIO_MODE_INPUT;
++		if (pin->mode != PM8XXX_GPIO_MODE_INPUT)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		if (pin->mode & PM8XXX_GPIO_MODE_OUTPUT)
+@@ -290,10 +300,14 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pin->output_strength;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = !pin->open_drain;
++		if (pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pin->open_drain;
++		if (!pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 4d9bf9b3e9f3..26ebedc1f6d3 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -1079,10 +1079,9 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 	 * We suppose that we won't have any more functions than pins,
+ 	 * we'll reallocate that later anyway
+ 	 */
+-	pctl->functions = devm_kcalloc(&pdev->dev,
+-				       pctl->ngroups,
+-				       sizeof(*pctl->functions),
+-				       GFP_KERNEL);
++	pctl->functions = kcalloc(pctl->ngroups,
++				  sizeof(*pctl->functions),
++				  GFP_KERNEL);
+ 	if (!pctl->functions)
+ 		return -ENOMEM;
+ 
+@@ -1133,8 +1132,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 
+ 			func_item = sunxi_pinctrl_find_function_by_name(pctl,
+ 									func->name);
+-			if (!func_item)
++			if (!func_item) {
++				kfree(pctl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!func_item->groups) {
+ 				func_item->groups =
+@@ -1142,8 +1143,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 						     func_item->ngroups,
+ 						     sizeof(*func_item->groups),
+ 						     GFP_KERNEL);
+-				if (!func_item->groups)
++				if (!func_item->groups) {
++					kfree(pctl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			func_grp = func_item->groups;
+diff --git a/drivers/power/supply/twl4030_charger.c b/drivers/power/supply/twl4030_charger.c
+index bbcaee56db9d..b6a7d9f74cf3 100644
+--- a/drivers/power/supply/twl4030_charger.c
++++ b/drivers/power/supply/twl4030_charger.c
+@@ -996,12 +996,13 @@ static int twl4030_bci_probe(struct platform_device *pdev)
+ 	if (bci->dev->of_node) {
+ 		struct device_node *phynode;
+ 
+-		phynode = of_find_compatible_node(bci->dev->of_node->parent,
+-						  NULL, "ti,twl4030-usb");
++		phynode = of_get_compatible_child(bci->dev->of_node->parent,
++						  "ti,twl4030-usb");
+ 		if (phynode) {
+ 			bci->usb_nb.notifier_call = twl4030_bci_usb_ncb;
+ 			bci->transceiver = devm_usb_get_phy_by_node(
+ 				bci->dev, phynode, &bci->usb_nb);
++			of_node_put(phynode);
+ 			if (IS_ERR(bci->transceiver)) {
+ 				ret = PTR_ERR(bci->transceiver);
+ 				if (ret == -EPROBE_DEFER)
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 6437bbeebc91..e026a7817013 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1114,8 +1114,10 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ 	channel->edge = edge;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
+-	if (!channel->name)
+-		return ERR_PTR(-ENOMEM);
++	if (!channel->name) {
++		ret = -ENOMEM;
++		goto free_channel;
++	}
+ 
+ 	spin_lock_init(&channel->tx_lock);
+ 	spin_lock_init(&channel->recv_lock);
+@@ -1165,6 +1167,7 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ free_name_and_channel:
+ 	kfree(channel->name);
++free_channel:
+ 	kfree(channel);
+ 
+ 	return ERR_PTR(ret);
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index cd3a2411bc2f..df0c5776d49b 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -50,6 +50,7 @@
+ /* this is for "generic access to PC-style RTC" using CMOS_READ/CMOS_WRITE */
+ #include <linux/mc146818rtc.h>
+ 
++#ifdef CONFIG_ACPI
+ /*
+  * Use ACPI SCI to replace HPET interrupt for RTC Alarm event
+  *
+@@ -61,6 +62,18 @@
+ static bool use_acpi_alarm;
+ module_param(use_acpi_alarm, bool, 0444);
+ 
++static inline int cmos_use_acpi_alarm(void)
++{
++	return use_acpi_alarm;
++}
++#else /* !CONFIG_ACPI */
++
++static inline int cmos_use_acpi_alarm(void)
++{
++	return 0;
++}
++#endif
++
+ struct cmos_rtc {
+ 	struct rtc_device	*rtc;
+ 	struct device		*dev;
+@@ -167,9 +180,9 @@ static inline int hpet_unregister_irq_handler(irq_handler_t handler)
+ #endif
+ 
+ /* Don't use HPET for RTC Alarm event if ACPI Fixed event is used */
+-static int use_hpet_alarm(void)
++static inline int use_hpet_alarm(void)
+ {
+-	return is_hpet_enabled() && !use_acpi_alarm;
++	return is_hpet_enabled() && !cmos_use_acpi_alarm();
+ }
+ 
+ /*----------------------------------------------------------------*/
+@@ -340,7 +353,7 @@ static void cmos_irq_enable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_set_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(cmos->dev);
+ 	}
+@@ -358,7 +371,7 @@ static void cmos_irq_disable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_mask_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(cmos->dev);
+ 	}
+@@ -980,7 +993,7 @@ static int cmos_suspend(struct device *dev)
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	if ((tmp & RTC_AIE) && !use_acpi_alarm) {
++	if ((tmp & RTC_AIE) && !cmos_use_acpi_alarm()) {
+ 		cmos->enabled_wake = 1;
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(dev);
+@@ -1031,7 +1044,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACPI RTC wake event is cleared after resume from STR,
+ 	 * ACK the rtc irq here
+ 	 */
+-	if (t_now >= cmos->alarm_expires && use_acpi_alarm) {
++	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 		return;
+ 	}
+@@ -1053,7 +1066,7 @@ static int __maybe_unused cmos_resume(struct device *dev)
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+ 	unsigned char tmp;
+ 
+-	if (cmos->enabled_wake && !use_acpi_alarm) {
++	if (cmos->enabled_wake && !cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(dev);
+ 		else
+@@ -1132,7 +1145,7 @@ static u32 rtc_handler(void *context)
+ 	 * Or else, ACPI SCI is enabled during suspend/resume only,
+ 	 * update rtc irq in that case.
+ 	 */
+-	if (use_acpi_alarm)
++	if (cmos_use_acpi_alarm())
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 	else {
+ 		/* Fix me: can we use cmos_interrupt() here as well? */
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index e9ec4160d7f6..83fa875b89cd 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1372,7 +1372,6 @@ static void ds1307_clks_register(struct ds1307 *ds1307)
+ static const struct regmap_config regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+-	.max_register = 0x9,
+ };
+ 
+ static int ds1307_probe(struct i2c_client *client,
+diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
+index c3fc34b9964d..9e5d3f7d29ae 100644
+--- a/drivers/scsi/esp_scsi.c
++++ b/drivers/scsi/esp_scsi.c
+@@ -1338,6 +1338,7 @@ static int esp_data_bytes_sent(struct esp *esp, struct esp_cmd_entry *ent,
+ 
+ 	bytes_sent = esp->data_dma_len;
+ 	bytes_sent -= ecount;
++	bytes_sent -= esp->send_cmd_residual;
+ 
+ 	/*
+ 	 * The am53c974 has a DMA 'pecularity'. The doc states:
+diff --git a/drivers/scsi/esp_scsi.h b/drivers/scsi/esp_scsi.h
+index 8163dca2071b..a77772777a30 100644
+--- a/drivers/scsi/esp_scsi.h
++++ b/drivers/scsi/esp_scsi.h
+@@ -540,6 +540,8 @@ struct esp {
+ 
+ 	void			*dma;
+ 	int			dmarev;
++
++	u32			send_cmd_residual;
+ };
+ 
+ /* A front-end driver for the ESP chip should do the following in
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index a94fb9f8bb44..3b3af1459008 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4140,9 +4140,17 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 	}
+ 	lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
+-	lpfc_cmd->pCmd = NULL;
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++	/* If pCmd was set to NULL from abort path, do not call scsi_done */
++	if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) {
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
++				 "0711 FCP cmd already NULL, sid: 0x%06x, "
++				 "did: 0x%06x, oxid: 0x%04x\n",
++				 vport->fc_myDID,
++				 (pnode) ? pnode->nlp_DID : 0,
++				 phba->sli_rev == LPFC_SLI_REV4 ?
++				 lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff);
++		return;
++	}
+ 
+ 	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
+ 	cmd->scsi_done(cmd);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 6f3c00a233ec..4f8d459d2378 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -3790,6 +3790,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 	struct hbq_dmabuf *dmabuf;
+ 	struct lpfc_cq_event *cq_event;
+ 	unsigned long iflag;
++	int count = 0;
+ 
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
+@@ -3811,16 +3812,22 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 			if (irspiocbq)
+ 				lpfc_sli_sp_handle_rspiocb(phba, pring,
+ 							   irspiocbq);
++			count++;
+ 			break;
+ 		case CQE_CODE_RECEIVE:
+ 		case CQE_CODE_RECEIVE_V1:
+ 			dmabuf = container_of(cq_event, struct hbq_dmabuf,
+ 					      cq_event);
+ 			lpfc_sli4_handle_received_buffer(phba, dmabuf);
++			count++;
+ 			break;
+ 		default:
+ 			break;
+ 		}
++
++		/* Limit the number of events to 64 to avoid soft lockups */
++		if (count == 64)
++			break;
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c
+index eb551f3cc471..71879f2207e0 100644
+--- a/drivers/scsi/mac_esp.c
++++ b/drivers/scsi/mac_esp.c
+@@ -427,6 +427,8 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
+ 			scsi_esp_cmd(esp, ESP_CMD_TI);
+ 		}
+ 	}
++
++	esp->send_cmd_residual = esp_count;
+ }
+ 
+ static int mac_esp_irq_pending(struct esp *esp)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 8e84e3fb648a..2d6f6414a2a2 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -7499,6 +7499,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
+ 		get_user(user_sense_off, &cioc->sense_off))
+ 		return -EFAULT;
+ 
++	if (local_sense_off != user_sense_off)
++		return -EINVAL;
++
+ 	if (local_sense_len) {
+ 		void __user **sense_ioc_ptr =
+ 			(void __user **)((u8 *)((unsigned long)&ioc->frame.raw) + local_sense_off);
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 397081d320b1..83f71c266c66 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1677,8 +1677,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
+ 
+ 	hba->clk_gating.state = REQ_CLKS_OFF;
+ 	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+-	schedule_delayed_work(&hba->clk_gating.gate_work,
+-			msecs_to_jiffies(hba->clk_gating.delay_ms));
++	queue_delayed_work(hba->clk_gating.clk_gating_workq,
++			   &hba->clk_gating.gate_work,
++			   msecs_to_jiffies(hba->clk_gating.delay_ms));
+ }
+ 
+ void ufshcd_release(struct ufs_hba *hba)
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index 8a3678c2e83c..97bb5989aa21 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -212,6 +212,11 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+ 		goto remove_cdev;
+ 	} else if (!ret) {
++		if (!qcom_scm_is_available()) {
++			ret = -EPROBE_DEFER;
++			goto remove_cdev;
++		}
++
+ 		perms[0].vmid = QCOM_SCM_VMID_HLOS;
+ 		perms[0].perm = QCOM_SCM_PERM_RW;
+ 		perms[1].vmid = vmid;
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 2d6f3fcf3211..ed71a4c9c8b2 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1288,7 +1288,7 @@ static void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
+ 	if (!pmc->soc->has_tsense_reset)
+ 		return;
+ 
+-	np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
++	np = of_get_child_by_name(pmc->dev->of_node, "i2c-thermtrip");
+ 	if (!np) {
+ 		dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
+ 		return;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 8612525fa4e3..584bcb018a62 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -89,7 +89,7 @@
+ #define BSPI_BPP_MODE_SELECT_MASK		BIT(8)
+ #define BSPI_BPP_ADDR_SELECT_MASK		BIT(16)
+ 
+-#define BSPI_READ_LENGTH			512
++#define BSPI_READ_LENGTH			256
+ 
+ /* MSPI register offsets */
+ #define MSPI_SPCR0_LSB				0x000
+@@ -355,7 +355,7 @@ static int bcm_qspi_bspi_set_flex_mode(struct bcm_qspi *qspi,
+ 	int bpc = 0, bpp = 0;
+ 	u8 command = op->cmd.opcode;
+ 	int width  = op->cmd.buswidth ? op->cmd.buswidth : SPI_NBITS_SINGLE;
+-	int addrlen = op->addr.nbytes * 8;
++	int addrlen = op->addr.nbytes;
+ 	int flex_mode = 1;
+ 
+ 	dev_dbg(&qspi->pdev->dev, "set flex mode w %x addrlen %x hp %d\n",
+diff --git a/drivers/spi/spi-ep93xx.c b/drivers/spi/spi-ep93xx.c
+index f1526757aaf6..79fc3940245a 100644
+--- a/drivers/spi/spi-ep93xx.c
++++ b/drivers/spi/spi-ep93xx.c
+@@ -246,6 +246,19 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+ 	return -EINPROGRESS;
+ }
+ 
++static enum dma_transfer_direction
++ep93xx_dma_data_to_trans_dir(enum dma_data_direction dir)
++{
++	switch (dir) {
++	case DMA_TO_DEVICE:
++		return DMA_MEM_TO_DEV;
++	case DMA_FROM_DEVICE:
++		return DMA_DEV_TO_MEM;
++	default:
++		return DMA_TRANS_NONE;
++	}
++}
++
+ /**
+  * ep93xx_spi_dma_prepare() - prepares a DMA transfer
+  * @master: SPI master
+@@ -257,7 +270,7 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+  */
+ static struct dma_async_tx_descriptor *
+ ep93xx_spi_dma_prepare(struct spi_master *master,
+-		       enum dma_transfer_direction dir)
++		       enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct spi_transfer *xfer = master->cur_msg->state;
+@@ -277,9 +290,9 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 		buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 
+ 	memset(&conf, 0, sizeof(conf));
+-	conf.direction = dir;
++	conf.direction = ep93xx_dma_data_to_trans_dir(dir);
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		buf = xfer->rx_buf;
+ 		sgt = &espi->rx_sgt;
+@@ -343,7 +356,8 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 	if (!nents)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK);
++	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, conf.direction,
++				      DMA_CTRL_ACK);
+ 	if (!txd) {
+ 		dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir);
+ 		return ERR_PTR(-ENOMEM);
+@@ -360,13 +374,13 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+  * unmapped.
+  */
+ static void ep93xx_spi_dma_finish(struct spi_master *master,
+-				  enum dma_transfer_direction dir)
++				  enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_chan *chan;
+ 	struct sg_table *sgt;
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		sgt = &espi->rx_sgt;
+ 	} else {
+@@ -381,8 +395,8 @@ static void ep93xx_spi_dma_callback(void *callback_param)
+ {
+ 	struct spi_master *master = callback_param;
+ 
+-	ep93xx_spi_dma_finish(master, DMA_MEM_TO_DEV);
+-	ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++	ep93xx_spi_dma_finish(master, DMA_TO_DEVICE);
++	ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 
+ 	spi_finalize_current_transfer(master);
+ }
+@@ -392,15 +406,15 @@ static int ep93xx_spi_dma_transfer(struct spi_master *master)
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_async_tx_descriptor *rxd, *txd;
+ 
+-	rxd = ep93xx_spi_dma_prepare(master, DMA_DEV_TO_MEM);
++	rxd = ep93xx_spi_dma_prepare(master, DMA_FROM_DEVICE);
+ 	if (IS_ERR(rxd)) {
+ 		dev_err(&master->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd));
+ 		return PTR_ERR(rxd);
+ 	}
+ 
+-	txd = ep93xx_spi_dma_prepare(master, DMA_MEM_TO_DEV);
++	txd = ep93xx_spi_dma_prepare(master, DMA_TO_DEVICE);
+ 	if (IS_ERR(txd)) {
+-		ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++		ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 		dev_err(&master->dev, "DMA TX failed: %ld\n", PTR_ERR(txd));
+ 		return PTR_ERR(txd);
+ 	}
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 3b518ead504e..b82b47152b18 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -282,9 +282,11 @@ static int spi_gpio_request(struct device *dev,
+ 	spi_gpio->miso = devm_gpiod_get_optional(dev, "miso", GPIOD_IN);
+ 	if (IS_ERR(spi_gpio->miso))
+ 		return PTR_ERR(spi_gpio->miso);
+-	if (!spi_gpio->miso)
+-		/* HW configuration without MISO pin */
+-		*mflags |= SPI_MASTER_NO_RX;
++	/*
++	 * No setting SPI_MASTER_NO_RX here - if there is only a MOSI
++	 * pin connected the host can still do RX by changing the
++	 * direction of the line.
++	 */
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+ 	if (IS_ERR(spi_gpio->sck))
+@@ -408,7 +410,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 	spi_gpio->bitbang.master = master;
+ 	spi_gpio->bitbang.chipselect = spi_gpio_chipselect;
+ 
+-	if ((master_flags & (SPI_MASTER_NO_TX | SPI_MASTER_NO_RX)) == 0) {
++	if ((master_flags & SPI_MASTER_NO_TX) == 0) {
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_txrx_word_mode0;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_txrx_word_mode1;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_txrx_word_mode2;
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index 990770dfa5cf..ec0c24e873cd 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -328,10 +328,25 @@ EXPORT_SYMBOL_GPL(spi_mem_exec_op);
+ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
+ {
+ 	struct spi_controller *ctlr = mem->spi->controller;
++	size_t len;
++
++	len = sizeof(op->cmd.opcode) + op->addr.nbytes + op->dummy.nbytes;
+ 
+ 	if (ctlr->mem_ops && ctlr->mem_ops->adjust_op_size)
+ 		return ctlr->mem_ops->adjust_op_size(mem, op);
+ 
++	if (!ctlr->mem_ops || !ctlr->mem_ops->exec_op) {
++		if (len > spi_max_transfer_size(mem->spi))
++			return -EINVAL;
++
++		op->data.nbytes = min3((size_t)op->data.nbytes,
++				       spi_max_transfer_size(mem->spi),
++				       spi_max_message_size(mem->spi) -
++				       len);
++		if (!op->data.nbytes)
++			return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
+diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c
+index 3be9519654e5..cf3fad2cb871 100644
+--- a/drivers/tc/tc.c
++++ b/drivers/tc/tc.c
+@@ -2,7 +2,7 @@
+  *	TURBOchannel bus services.
+  *
+  *	Copyright (c) Harald Koerfgen, 1998
+- *	Copyright (c) 2001, 2003, 2005, 2006  Maciej W. Rozycki
++ *	Copyright (c) 2001, 2003, 2005, 2006, 2018  Maciej W. Rozycki
+  *	Copyright (c) 2005  James Simmons
+  *
+  *	This file is subject to the terms and conditions of the GNU
+@@ -10,6 +10,7 @@
+  *	directory of this archive for more details.
+  */
+ #include <linux/compiler.h>
++#include <linux/dma-mapping.h>
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+@@ -92,6 +93,11 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus)
+ 		tdev->dev.bus = &tc_bus_type;
+ 		tdev->slot = slot;
+ 
++		/* TURBOchannel has 34-bit DMA addressing (16GiB space). */
++		tdev->dma_mask = DMA_BIT_MASK(34);
++		tdev->dev.dma_mask = &tdev->dma_mask;
++		tdev->dev.coherent_dma_mask = DMA_BIT_MASK(34);
++
+ 		for (i = 0; i < 8; i++) {
+ 			tdev->firmware[i] =
+ 				readb(module + offset + TC_FIRM_VER + 4 * i);
+diff --git a/drivers/thermal/da9062-thermal.c b/drivers/thermal/da9062-thermal.c
+index dd8dd947b7f0..01b0cb994457 100644
+--- a/drivers/thermal/da9062-thermal.c
++++ b/drivers/thermal/da9062-thermal.c
+@@ -106,7 +106,7 @@ static void da9062_thermal_poll_on(struct work_struct *work)
+ 					   THERMAL_EVENT_UNSPECIFIED);
+ 
+ 		delay = msecs_to_jiffies(thermal->zone->passive_delay);
+-		schedule_delayed_work(&thermal->work, delay);
++		queue_delayed_work(system_freezable_wq, &thermal->work, delay);
+ 		return;
+ 	}
+ 
+@@ -125,7 +125,7 @@ static irqreturn_t da9062_thermal_irq_handler(int irq, void *data)
+ 	struct da9062_thermal *thermal = data;
+ 
+ 	disable_irq_nosync(thermal->irq);
+-	schedule_delayed_work(&thermal->work, 0);
++	queue_delayed_work(system_freezable_wq, &thermal->work, 0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index e77e63070e99..5844e26bd372 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -465,6 +465,7 @@ static int rcar_thermal_remove(struct platform_device *pdev)
+ 
+ 	rcar_thermal_for_each_priv(priv, common) {
+ 		rcar_thermal_irq_disable(priv);
++		cancel_delayed_work_sync(&priv->work);
+ 		if (priv->chip->use_of_thermal)
+ 			thermal_remove_hwmon_sysfs(priv->zone);
+ 		else
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index b4ba2b1dab76..f4d0ef695225 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -130,6 +130,11 @@ static void kgdboc_unregister_kbd(void)
+ 
+ static int kgdboc_option_setup(char *opt)
+ {
++	if (!opt) {
++		pr_err("kgdboc: config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdboc: config string too long\n");
+ 		return -ENOSPC;
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 6c58ad1abd7e..d5b2efae82fc 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -275,6 +275,8 @@ static struct class uio_class = {
+ 	.dev_groups = uio_groups,
+ };
+ 
++bool uio_class_registered;
++
+ /*
+  * device functions
+  */
+@@ -877,6 +879,9 @@ static int init_uio_class(void)
+ 		printk(KERN_ERR "class_register failed for uio\n");
+ 		goto err_class_register;
+ 	}
++
++	uio_class_registered = true;
++
+ 	return 0;
+ 
+ err_class_register:
+@@ -887,6 +892,7 @@ exit:
+ 
+ static void release_uio_class(void)
+ {
++	uio_class_registered = false;
+ 	class_unregister(&uio_class);
+ 	uio_major_cleanup();
+ }
+@@ -913,6 +919,9 @@ int __uio_register_device(struct module *owner,
+ 	struct uio_device *idev;
+ 	int ret = 0;
+ 
++	if (!uio_class_registered)
++		return -EPROBE_DEFER;
++
+ 	if (!parent || !info || !info->name || !info->version)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/usb/chipidea/otg.h b/drivers/usb/chipidea/otg.h
+index 7e7428e48bfa..4f8b8179ec96 100644
+--- a/drivers/usb/chipidea/otg.h
++++ b/drivers/usb/chipidea/otg.h
+@@ -17,7 +17,8 @@ void ci_handle_vbus_change(struct ci_hdrc *ci);
+ static inline void ci_otg_queue_work(struct ci_hdrc *ci)
+ {
+ 	disable_irq_nosync(ci->irq);
+-	queue_work(ci->wq, &ci->work);
++	if (queue_work(ci->wq, &ci->work) == false)
++		enable_irq(ci->irq);
+ }
+ 
+ #endif /* __DRIVERS_USB_CHIPIDEA_OTG_H */
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 6e2cdd7b93d4..05a68f035d19 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4394,6 +4394,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 	struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
+ 	struct usb_bus *bus = hcd_to_bus(hcd);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	dev_dbg(hsotg->dev, "DWC OTG HCD START\n");
+ 
+@@ -4409,6 +4410,13 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	dwc2_hcd_reinit(hsotg);
+ 
++	/* enable external vbus supply before resuming root hub */
++	spin_unlock_irqrestore(&hsotg->lock, flags);
++	ret = dwc2_vbus_supply_init(hsotg);
++	if (ret)
++		return ret;
++	spin_lock_irqsave(&hsotg->lock, flags);
++
+ 	/* Initialize and connect root hub if one is not already attached */
+ 	if (bus->root_hub) {
+ 		dev_dbg(hsotg->dev, "DWC OTG HCD Has Root Hub\n");
+@@ -4418,7 +4426,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	spin_unlock_irqrestore(&hsotg->lock, flags);
+ 
+-	return dwc2_vbus_supply_init(hsotg);
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index 17147b8c771e..8f267be1745d 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -2017,6 +2017,8 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
+ 
+ 	udc->errata = match->data;
+ 	udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9g45-pmc");
++	if (IS_ERR(udc->pmc))
++		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9rl-pmc");
+ 	if (IS_ERR(udc->pmc))
+ 		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9x5-pmc");
+ 	if (udc->errata && IS_ERR(udc->pmc))
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 5b5f1c8b47c9..104b80c28636 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2377,6 +2377,9 @@ static ssize_t renesas_usb3_b_device_write(struct file *file,
+ 	else
+ 		usb3->forced_b_device = false;
+ 
++	if (usb3->workaround_for_vbus)
++		usb3_disconnect(usb3);
++
+ 	/* Let this driver call usb3_connect() anyway */
+ 	usb3_check_id(usb3);
+ 
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index e98673954020..ec6739ef3129 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -551,6 +551,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ 		pdata->overcurrent_pin[i] =
+ 			devm_gpiod_get_index_optional(&pdev->dev, "atmel,oc",
+ 						      i, GPIOD_IN);
++		if (!pdata->overcurrent_pin[i])
++			continue;
+ 		if (IS_ERR(pdata->overcurrent_pin[i])) {
+ 			err = PTR_ERR(pdata->overcurrent_pin[i]);
+ 			dev_err(&pdev->dev, "unable to claim gpio \"overcurrent\": %d\n", err);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index a4b95d019f84..1f7eeee2ebca 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -900,6 +900,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				set_bit(wIndex, &bus_state->resuming_ports);
+ 				bus_state->resume_done[wIndex] = timeout;
+ 				mod_timer(&hcd->rh_timer, timeout);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			}
+ 		/* Has resume been signalled for USB_RESUME_TIME yet? */
+ 		} else if (time_after_eq(jiffies,
+@@ -940,6 +941,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				clear_bit(wIndex, &bus_state->rexit_ports);
+ 			}
+ 
++			usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 			bus_state->suspended_ports &= ~(1 << wIndex);
+ 		} else {
+@@ -962,6 +964,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 	    (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
+ 		bus_state->resume_done[wIndex] = 0;
+ 		clear_bit(wIndex, &bus_state->resuming_ports);
++		usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 	}
+ 
+ 
+@@ -1337,6 +1340,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					goto error;
+ 
+ 				set_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 						    XDEV_RESUME);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -1345,6 +1349,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 							XDEV_U0);
+ 				clear_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			}
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f0a99aa0ac58..cd4659703647 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1602,6 +1602,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 			set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[hcd_portnum]);
++			usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
+ 			bogus_port_status = true;
+ 		}
+ 	}
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index d1d20252bad8..a7e231ccb0a1 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -1383,8 +1383,8 @@ static enum pdo_err tcpm_caps_err(struct tcpm_port *port, const u32 *pdo,
+ 				if (pdo_apdo_type(pdo[i]) != APDO_TYPE_PPS)
+ 					break;
+ 
+-				if (pdo_pps_apdo_max_current(pdo[i]) <
+-				    pdo_pps_apdo_max_current(pdo[i - 1]))
++				if (pdo_pps_apdo_max_voltage(pdo[i]) <
++				    pdo_pps_apdo_max_voltage(pdo[i - 1]))
+ 					return PDO_ERR_PPS_APDO_NOT_SORTED;
+ 				else if (pdo_pps_apdo_min_voltage(pdo[i]) ==
+ 					  pdo_pps_apdo_min_voltage(pdo[i - 1]) &&
+@@ -4018,6 +4018,9 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down operating current to align with PPS valid steps */
++	op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.op_curr = op_curr;
+ 	port->pps_status = 0;
+@@ -4071,6 +4074,9 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down output voltage to align with PPS valid steps */
++	out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.out_volt = out_volt;
+ 	port->pps_status = 0;
+diff --git a/drivers/usb/usbip/vudc_main.c b/drivers/usb/usbip/vudc_main.c
+index 3fc22037a82f..390733e6937e 100644
+--- a/drivers/usb/usbip/vudc_main.c
++++ b/drivers/usb/usbip/vudc_main.c
+@@ -73,6 +73,10 @@ static int __init init(void)
+ cleanup:
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
+ 		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+@@ -89,7 +93,11 @@ static void __exit cleanup(void)
+ 
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
+-		platform_device_unregister(udc_dev->pdev);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
++		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+ 	platform_driver_unregister(&vudc_driver);
+diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
+index 38716eb50408..8a3e8f61b991 100644
+--- a/drivers/video/hdmi.c
++++ b/drivers/video/hdmi.c
+@@ -592,10 +592,10 @@ hdmi_extended_colorimetry_get_name(enum hdmi_extended_colorimetry ext_col)
+ 		return "xvYCC 709";
+ 	case HDMI_EXTENDED_COLORIMETRY_S_YCC_601:
+ 		return "sYCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-		return "Adobe YCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-		return "Adobe RGB";
++	case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++		return "opYCC 601";
++	case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++		return "opRGB";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM:
+ 		return "BT.2020 Constant Luminance";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020:
+diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
+index 83fc9aab34e8..3099052e1243 100644
+--- a/drivers/w1/masters/omap_hdq.c
++++ b/drivers/w1/masters/omap_hdq.c
+@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platform_device *pdev)
+ 	/* remove module dependency */
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	w1_remove_master_device(&omap_w1_master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
+index df1ed37c3269..de01a6d0059d 100644
+--- a/drivers/xen/privcmd-buf.c
++++ b/drivers/xen/privcmd-buf.c
+@@ -21,15 +21,9 @@
+ 
+ MODULE_LICENSE("GPL");
+ 
+-static unsigned int limit = 64;
+-module_param(limit, uint, 0644);
+-MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by "
+-			"the privcmd-buf device per open file");
+-
+ struct privcmd_buf_private {
+ 	struct mutex lock;
+ 	struct list_head list;
+-	unsigned int allocated;
+ };
+ 
+ struct privcmd_buf_vma_private {
+@@ -60,13 +54,10 @@ static void privcmd_buf_vmapriv_free(struct privcmd_buf_vma_private *vma_priv)
+ {
+ 	unsigned int i;
+ 
+-	vma_priv->file_priv->allocated -= vma_priv->n_pages;
+-
+ 	list_del(&vma_priv->list);
+ 
+ 	for (i = 0; i < vma_priv->n_pages; i++)
+-		if (vma_priv->pages[i])
+-			__free_page(vma_priv->pages[i]);
++		__free_page(vma_priv->pages[i]);
+ 
+ 	kfree(vma_priv);
+ }
+@@ -146,8 +137,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+-	if (!(vma->vm_flags & VM_SHARED) || count > limit ||
+-	    file_priv->allocated + count > limit)
++	if (!(vma->vm_flags & VM_SHARED))
+ 		return -EINVAL;
+ 
+ 	vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *),
+@@ -155,19 +145,15 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	if (!vma_priv)
+ 		return -ENOMEM;
+ 
+-	vma_priv->n_pages = count;
+-	count = 0;
+-	for (i = 0; i < vma_priv->n_pages; i++) {
++	for (i = 0; i < count; i++) {
+ 		vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ 		if (!vma_priv->pages[i])
+ 			break;
+-		count++;
++		vma_priv->n_pages++;
+ 	}
+ 
+ 	mutex_lock(&file_priv->lock);
+ 
+-	file_priv->allocated += count;
+-
+ 	vma_priv->file_priv = file_priv;
+ 	vma_priv->users = 1;
+ 
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index a6f9ba85dc4b..aa081f806728 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ 	*/
+ 	flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	/* On ARM this function returns an ioremap'ped virtual address for
+ 	 * which virt_to_phys doesn't return the corresponding physical
+ 	 * address. In fact on ARM virt_to_phys only works for kernel direct
+@@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ 	 * physical address */
+ 	phys = xen_bus_to_phys(dev_addr);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	if (((dev_addr + size - 1 <= dma_mask)) ||
+ 	    range_straddles_page_boundary(phys, size))
+ 		xen_destroy_contiguous_region(phys, order);
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index 294f35ce9e46..cf8ef8cee5a0 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -75,12 +75,15 @@ static void watch_target(struct xenbus_watch *watch,
+ 
+ 	if (!watch_fired) {
+ 		watch_fired = true;
+-		err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
+-				   &static_max);
+-		if (err != 1)
+-			static_max = new_target;
+-		else
++
++		if ((xenbus_scanf(XBT_NIL, "memory", "static-max",
++				  "%llu", &static_max) == 1) ||
++		    (xenbus_scanf(XBT_NIL, "memory", "memory_static_max",
++				  "%llu", &static_max) == 1))
+ 			static_max >>= PAGE_SHIFT - 10;
++		else
++			static_max = new_target;
++
+ 		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 4bc326df472e..4a7ae216977d 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1054,9 +1054,26 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 	if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)
+ 		parent_start = parent->start;
+ 
++	/*
++	 * If we are COWing a node/leaf from the extent, chunk or device trees,
++	 * make sure that we do not finish block group creation of pending block
++	 * groups. We do this to avoid a deadlock.
++	 * COWing can result in allocation of a new chunk, and flushing pending
++	 * block groups (btrfs_create_pending_block_groups()) can be triggered
++	 * when finishing allocation of a new chunk. Creation of a pending block
++	 * group modifies the extent, chunk and device trees, therefore we could
++	 * deadlock with ourselves since we are holding a lock on an extent
++	 * buffer that btrfs_create_pending_block_groups() may try to COW later.
++	 */
++	if (root == fs_info->extent_root ||
++	    root == fs_info->chunk_root ||
++	    root == fs_info->dev_root)
++		trans->can_flush_pending_bgs = false;
++
+ 	cow = btrfs_alloc_tree_block(trans, root, parent_start,
+ 			root->root_key.objectid, &disk_key, level,
+ 			search_start, empty_size);
++	trans->can_flush_pending_bgs = true;
+ 	if (IS_ERR(cow))
+ 		return PTR_ERR(cow);
+ 
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index d20b244623f2..e129a595f811 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -445,6 +445,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 		break;
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED:
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED:
++		ASSERT(0);
+ 		ret = BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED;
+ 		goto leave;
+ 	}
+@@ -487,6 +488,10 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		btrfs_dev_replace_write_lock(dev_replace);
++		dev_replace->replace_state =
++			BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED;
++		dev_replace->srcdev = NULL;
++		dev_replace->tgtdev = NULL;
+ 		goto leave;
+ 	}
+ 
+@@ -508,8 +513,6 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ 
+ leave:
+-	dev_replace->srcdev = NULL;
+-	dev_replace->tgtdev = NULL;
+ 	btrfs_dev_replace_write_unlock(dev_replace);
+ 	btrfs_destroy_dev_replace_tgtdev(fs_info, tgt_device);
+ 	return ret;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 4ab0bccfa281..e67de6a9805b 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2490,6 +2490,9 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
+ 					   insert_reserved);
+ 	else
+ 		BUG();
++	if (ret && insert_reserved)
++		btrfs_pin_extent(trans->fs_info, node->bytenr,
++				 node->num_bytes, 1);
+ 	return ret;
+ }
+ 
+@@ -3034,7 +3037,6 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
+ 	struct btrfs_delayed_ref_head *head;
+ 	int ret;
+ 	int run_all = count == (unsigned long)-1;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+ 	/* We'll clean this up in btrfs_cleanup_transaction */
+ 	if (trans->aborted)
+@@ -3051,7 +3053,6 @@ again:
+ #ifdef SCRAMBLE_DELAYED_REFS
+ 	delayed_refs->run_delayed_start = find_middle(&delayed_refs->root);
+ #endif
+-	trans->can_flush_pending_bgs = false;
+ 	ret = __btrfs_run_delayed_refs(trans, count);
+ 	if (ret < 0) {
+ 		btrfs_abort_transaction(trans, ret);
+@@ -3082,7 +3083,6 @@ again:
+ 		goto again;
+ 	}
+ out:
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
+ 	return 0;
+ }
+ 
+@@ -4664,6 +4664,7 @@ again:
+ 			goto out;
+ 	} else {
+ 		ret = 1;
++		space_info->max_extent_size = 0;
+ 	}
+ 
+ 	space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+@@ -4685,11 +4686,9 @@ out:
+ 	 * the block groups that were made dirty during the lifetime of the
+ 	 * transaction.
+ 	 */
+-	if (trans->can_flush_pending_bgs &&
+-	    trans->chunk_bytes_reserved >= (u64)SZ_2M) {
++	if (trans->chunk_bytes_reserved >= (u64)SZ_2M)
+ 		btrfs_create_pending_block_groups(trans);
+-		btrfs_trans_release_chunk_metadata(trans);
+-	}
++
+ 	return ret;
+ }
+ 
+@@ -6581,6 +6580,7 @@ static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,
+ 		space_info->bytes_readonly += num_bytes;
+ 	cache->reserved -= num_bytes;
+ 	space_info->bytes_reserved -= num_bytes;
++	space_info->max_extent_size = 0;
+ 
+ 	if (delalloc)
+ 		cache->delalloc_bytes -= num_bytes;
+@@ -7412,6 +7412,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_block_group_cache *block_group = NULL;
+ 	u64 search_start = 0;
+ 	u64 max_extent_size = 0;
++	u64 max_free_space = 0;
+ 	u64 empty_cluster = 0;
+ 	struct btrfs_space_info *space_info;
+ 	int loop = 0;
+@@ -7707,8 +7708,8 @@ unclustered_alloc:
+ 			spin_lock(&ctl->tree_lock);
+ 			if (ctl->free_space <
+ 			    num_bytes + empty_cluster + empty_size) {
+-				if (ctl->free_space > max_extent_size)
+-					max_extent_size = ctl->free_space;
++				max_free_space = max(max_free_space,
++						     ctl->free_space);
+ 				spin_unlock(&ctl->tree_lock);
+ 				goto loop;
+ 			}
+@@ -7877,6 +7878,8 @@ loop:
+ 	}
+ out:
+ 	if (ret == -ENOSPC) {
++		if (!max_extent_size)
++			max_extent_size = max_free_space;
+ 		spin_lock(&space_info->lock);
+ 		space_info->max_extent_size = max_extent_size;
+ 		spin_unlock(&space_info->lock);
+@@ -8158,21 +8161,14 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	path = btrfs_alloc_path();
+-	if (!path) {
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
++	if (!path)
+ 		return -ENOMEM;
+-	}
+ 
+ 	path->leave_spinning = 1;
+ 	ret = btrfs_insert_empty_item(trans, fs_info->extent_root, path,
+ 				      &extent_key, size);
+ 	if (ret) {
+ 		btrfs_free_path(path);
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
+ 		return ret;
+ 	}
+ 
+@@ -8301,6 +8297,19 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 	if (IS_ERR(buf))
+ 		return buf;
+ 
++	/*
++	 * Extra safety check in case the extent tree is corrupted and extent
++	 * allocator chooses to use a tree block which is already used and
++	 * locked.
++	 */
++	if (buf->lock_owner == current->pid) {
++		btrfs_err_rl(fs_info,
++"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
++			buf->start, btrfs_header_owner(buf), current->pid);
++		free_extent_buffer(buf);
++		return ERR_PTR(-EUCLEAN);
++	}
++
+ 	btrfs_set_header_generation(buf, trans->transid);
+ 	btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
+ 	btrfs_tree_lock(buf);
+@@ -8938,15 +8947,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 	if (eb == root->node) {
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = eb->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(eb));
++		else if (root->root_key.objectid != btrfs_header_owner(eb))
++			goto owner_mismatch;
+ 	} else {
+ 		if (wc->flags[level + 1] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = path->nodes[level + 1]->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(path->nodes[level + 1]));
++		else if (root->root_key.objectid !=
++			 btrfs_header_owner(path->nodes[level + 1]))
++			goto owner_mismatch;
+ 	}
+ 
+ 	btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
+@@ -8954,6 +8962,11 @@ out:
+ 	wc->refs[level] = 0;
+ 	wc->flags[level] = 0;
+ 	return 0;
++
++owner_mismatch:
++	btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
++		     btrfs_header_owner(eb), root->root_key.objectid);
++	return -EUCLEAN;
+ }
+ 
+ static noinline int walk_down_tree(struct btrfs_trans_handle *trans,
+@@ -9007,6 +9020,8 @@ static noinline int walk_up_tree(struct btrfs_trans_handle *trans,
+ 			ret = walk_up_proc(trans, root, path, wc);
+ 			if (ret > 0)
+ 				return 0;
++			if (ret < 0)
++				return ret;
+ 
+ 			if (path->locks[level]) {
+ 				btrfs_tree_unlock_rw(path->nodes[level],
+@@ -9772,6 +9787,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
+ 
+ 		block_group = btrfs_lookup_first_block_group(info, last);
+ 		while (block_group) {
++			wait_block_group_cache_done(block_group);
+ 			spin_lock(&block_group->lock);
+ 			if (block_group->iref)
+ 				break;
+@@ -10184,15 +10200,19 @@ error:
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	struct btrfs_block_group_cache *block_group, *tmp;
++	struct btrfs_block_group_cache *block_group;
+ 	struct btrfs_root *extent_root = fs_info->extent_root;
+ 	struct btrfs_block_group_item item;
+ 	struct btrfs_key key;
+ 	int ret = 0;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+-	trans->can_flush_pending_bgs = false;
+-	list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
++	if (!trans->can_flush_pending_bgs)
++		return;
++
++	while (!list_empty(&trans->new_bgs)) {
++		block_group = list_first_entry(&trans->new_bgs,
++					       struct btrfs_block_group_cache,
++					       bg_list);
+ 		if (ret)
+ 			goto next;
+ 
+@@ -10214,7 +10234,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ next:
+ 		list_del_init(&block_group->bg_list);
+ 	}
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
++	btrfs_trans_release_chunk_metadata(trans);
+ }
+ 
+ int btrfs_make_block_group(struct btrfs_trans_handle *trans,
+@@ -10869,14 +10889,16 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * We don't want a transaction for this since the discard may take a
+  * substantial amount of time.  We don't require that a transaction be
+  * running, but we do need to take a running transaction into account
+- * to ensure that we're not discarding chunks that were released in
+- * the current transaction.
++ * to ensure that we're not discarding chunks that were released or
++ * allocated in the current transaction.
+  *
+  * Holding the chunks lock will prevent other threads from allocating
+  * or releasing chunks, but it won't prevent a running transaction
+  * from committing and releasing the memory that the pending chunks
+  * list head uses.  For that, we need to take a reference to the
+- * transaction.
++ * transaction and hold the commit root sem.  We only need to hold
++ * it while performing the free space search since we have already
++ * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 				   u64 minlen, u64 *trimmed)
+@@ -10886,6 +10908,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 	*trimmed = 0;
+ 
++	/* Discard not supported = nothing to do. */
++	if (!blk_queue_discard(bdev_get_queue(device->bdev)))
++		return 0;
++
+ 	/* Not writeable = nothing to do. */
+ 	if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
+ 		return 0;
+@@ -10903,9 +10929,13 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 		ret = mutex_lock_interruptible(&fs_info->chunk_mutex);
+ 		if (ret)
+-			return ret;
++			break;
+ 
+-		down_read(&fs_info->commit_root_sem);
++		ret = down_read_killable(&fs_info->commit_root_sem);
++		if (ret) {
++			mutex_unlock(&fs_info->chunk_mutex);
++			break;
++		}
+ 
+ 		spin_lock(&fs_info->trans_lock);
+ 		trans = fs_info->running_transaction;
+@@ -10913,13 +10943,17 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			refcount_inc(&trans->use_count);
+ 		spin_unlock(&fs_info->trans_lock);
+ 
++		if (!trans)
++			up_read(&fs_info->commit_root_sem);
++
+ 		ret = find_free_dev_extent_start(trans, device, minlen, start,
+ 						 &start, &len);
+-		if (trans)
++		if (trans) {
++			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
++		}
+ 
+ 		if (ret) {
+-			up_read(&fs_info->commit_root_sem);
+ 			mutex_unlock(&fs_info->chunk_mutex);
+ 			if (ret == -ENOSPC)
+ 				ret = 0;
+@@ -10927,7 +10961,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		}
+ 
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+-		up_read(&fs_info->commit_root_sem);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 		if (ret)
+@@ -10947,6 +10980,15 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 	return ret;
+ }
+ 
++/*
++ * Trim the whole filesystem by:
++ * 1) trimming the free space in each block group
++ * 2) trimming the unallocated space on each device
++ *
++ * This will also continue trimming even if a block group or device encounters
++ * an error.  The return value will be the last error, or 0 if nothing bad
++ * happens.
++ */
+ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ {
+ 	struct btrfs_block_group_cache *cache = NULL;
+@@ -10956,18 +10998,14 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	u64 start;
+ 	u64 end;
+ 	u64 trimmed = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
++	u64 bg_failed = 0;
++	u64 dev_failed = 0;
++	int bg_ret = 0;
++	int dev_ret = 0;
+ 	int ret = 0;
+ 
+-	/*
+-	 * try to trim all FS space, our block group may start from non-zero.
+-	 */
+-	if (range->len == total_bytes)
+-		cache = btrfs_lookup_first_block_group(fs_info, range->start);
+-	else
+-		cache = btrfs_lookup_block_group(fs_info, range->start);
+-
+-	while (cache) {
++	cache = btrfs_lookup_first_block_group(fs_info, range->start);
++	for (; cache; cache = next_block_group(fs_info, cache)) {
+ 		if (cache->key.objectid >= (range->start + range->len)) {
+ 			btrfs_put_block_group(cache);
+ 			break;
+@@ -10981,13 +11019,15 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 			if (!block_group_cache_done(cache)) {
+ 				ret = cache_block_group(cache, 0);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 				ret = wait_block_group_cache_done(cache);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 			}
+ 			ret = btrfs_trim_block_group(cache,
+@@ -10998,28 +11038,40 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 
+ 			trimmed += group_trimmed;
+ 			if (ret) {
+-				btrfs_put_block_group(cache);
+-				break;
++				bg_failed++;
++				bg_ret = ret;
++				continue;
+ 			}
+ 		}
+-
+-		cache = next_block_group(fs_info, cache);
+ 	}
+ 
++	if (bg_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu block group(s), last error %d",
++			bg_failed, bg_ret);
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+-	devices = &fs_info->fs_devices->alloc_list;
+-	list_for_each_entry(device, devices, dev_alloc_list) {
++	devices = &fs_info->fs_devices->devices;
++	list_for_each_entry(device, devices, dev_list) {
+ 		ret = btrfs_trim_free_extents(device, range->minlen,
+ 					      &group_trimmed);
+-		if (ret)
++		if (ret) {
++			dev_failed++;
++			dev_ret = ret;
+ 			break;
++		}
+ 
+ 		trimmed += group_trimmed;
+ 	}
+ 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 
++	if (dev_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu device(s), last error %d",
++			dev_failed, dev_ret);
+ 	range->len = trimmed;
+-	return ret;
++	if (bg_ret)
++		return bg_ret;
++	return dev_ret;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 51e77d72068a..22c2f38cd9b3 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -534,6 +534,14 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
+ 
+ 	end_of_last_block = start_pos + num_bytes - 1;
+ 
++	/*
++	 * The pages may have already been dirty, clear out old accounting so
++	 * we can set things up properly
++	 */
++	clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
++			 EXTENT_DIRTY | EXTENT_DELALLOC |
++			 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached);
++
+ 	if (!btrfs_is_free_space_inode(BTRFS_I(inode))) {
+ 		if (start_pos >= isize &&
+ 		    !(BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC)) {
+@@ -1504,18 +1512,27 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ 		}
+ 		if (ordered)
+ 			btrfs_put_ordered_extent(ordered);
+-		clear_extent_bit(&inode->io_tree, start_pos, last_pos,
+-				 EXTENT_DIRTY | EXTENT_DELALLOC |
+-				 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
+-				 0, 0, cached_state);
++
+ 		*lockstart = start_pos;
+ 		*lockend = last_pos;
+ 		ret = 1;
+ 	}
+ 
++	/*
++	 * It's possible the pages are dirty right now, but we don't want
++	 * to clean them yet because copy_from_user may catch a page fault
++	 * and we might have to fall back to one page at a time.  If that
++	 * happens, we'll unlock these pages and we'd have a window where
++	 * reclaim could sneak in and drop the once-dirty page on the floor
++	 * without writing it.
++	 *
++	 * We have the pages locked and the extent range locked, so there's
++	 * no way someone can start IO on any dirty pages in this range.
++	 *
++	 * We'll call btrfs_dirty_pages() later on, and that will flip around
++	 * delalloc bits and dirty the pages as required.
++	 */
+ 	for (i = 0; i < num_pages; i++) {
+-		if (clear_page_dirty_for_io(pages[i]))
+-			account_page_redirty(pages[i]);
+ 		set_page_extent_mapped(pages[i]);
+ 		WARN_ON(!PageLocked(pages[i]));
+ 	}
+@@ -2065,6 +2082,14 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		goto out;
+ 
+ 	inode_lock(inode);
++
++	/*
++	 * We take the dio_sem here because the tree log stuff can race with
++	 * lockless dio writes and get an extent map logged for an extent we
++	 * never waited on.  We need it this high up for lockdep reasons.
++	 */
++	down_write(&BTRFS_I(inode)->dio_sem);
++
+ 	atomic_inc(&root->log_batch);
+ 	full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ 			     &BTRFS_I(inode)->runtime_flags);
+@@ -2116,6 +2141,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		ret = start_ordered_ops(inode, start, end);
+ 	}
+ 	if (ret) {
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2171,6 +2197,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		 * checked called fsync.
+ 		 */
+ 		ret = filemap_check_wb_err(inode->i_mapping, file->f_wb_err);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2189,6 +2216,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2210,6 +2238,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 * file again, but that will end up using the synchronization
+ 	 * inside btrfs_sync_log to keep things safe.
+ 	 */
++	up_write(&BTRFS_I(inode)->dio_sem);
+ 	inode_unlock(inode);
+ 
+ 	/*
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index d5f80cb300be..a5f18333aa8c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -10,6 +10,7 @@
+ #include <linux/math64.h>
+ #include <linux/ratelimit.h>
+ #include <linux/error-injection.h>
++#include <linux/sched/mm.h>
+ #include "ctree.h"
+ #include "free-space-cache.h"
+ #include "transaction.h"
+@@ -47,6 +48,7 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	struct btrfs_free_space_header *header;
+ 	struct extent_buffer *leaf;
+ 	struct inode *inode = NULL;
++	unsigned nofs_flag;
+ 	int ret;
+ 
+ 	key.objectid = BTRFS_FREE_SPACE_OBJECTID;
+@@ -68,7 +70,13 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	btrfs_disk_key_to_cpu(&location, &disk_key);
+ 	btrfs_release_path(path);
+ 
++	/*
++	 * We are often under a trans handle at this point, so we need to make
++	 * sure NOFS is set to keep us from deadlocking.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	inode = btrfs_iget(fs_info->sb, &location, root, NULL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	if (is_bad_inode(inode)) {
+@@ -1686,6 +1694,8 @@ static inline void __bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ 	bitmap_clear(info->bitmap, start, count);
+ 
+ 	info->bytes -= bytes;
++	if (info->max_extent_size > ctl->unit)
++		info->max_extent_size = 0;
+ }
+ 
+ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+@@ -1769,6 +1779,13 @@ static int search_bitmap(struct btrfs_free_space_ctl *ctl,
+ 	return -1;
+ }
+ 
++static inline u64 get_max_extent_size(struct btrfs_free_space *entry)
++{
++	if (entry->bitmap)
++		return entry->max_extent_size;
++	return entry->bytes;
++}
++
+ /* Cache the size of the max extent in bytes */
+ static struct btrfs_free_space *
+ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+@@ -1790,8 +1807,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 	for (node = &entry->offset_index; node; node = rb_next(node)) {
+ 		entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 		if (entry->bytes < *bytes) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1809,8 +1826,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 		}
+ 
+ 		if (entry->bytes < *bytes + align_off) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1822,8 +1839,10 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 				*offset = tmp;
+ 				*bytes = size;
+ 				return entry;
+-			} else if (size > *max_extent_size) {
+-				*max_extent_size = size;
++			} else {
++				*max_extent_size =
++					max(get_max_extent_size(entry),
++					    *max_extent_size);
+ 			}
+ 			continue;
+ 		}
+@@ -2447,6 +2466,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 	struct rb_node *n;
+ 	int count = 0;
+ 
++	spin_lock(&ctl->tree_lock);
+ 	for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
+ 		info = rb_entry(n, struct btrfs_free_space, offset_index);
+ 		if (info->bytes >= bytes && !block_group->ro)
+@@ -2455,6 +2475,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 			   info->offset, info->bytes,
+ 		       (info->bitmap) ? "yes" : "no");
+ 	}
++	spin_unlock(&ctl->tree_lock);
+ 	btrfs_info(fs_info, "block group has cluster?: %s",
+ 	       list_empty(&block_group->cluster_list) ? "no" : "yes");
+ 	btrfs_info(fs_info,
+@@ -2683,8 +2704,8 @@ static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
+ 
+ 	err = search_bitmap(ctl, entry, &search_start, &search_bytes, true);
+ 	if (err) {
+-		if (search_bytes > *max_extent_size)
+-			*max_extent_size = search_bytes;
++		*max_extent_size = max(get_max_extent_size(entry),
++				       *max_extent_size);
+ 		return 0;
+ 	}
+ 
+@@ -2721,8 +2742,9 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
+ 
+ 	entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 	while (1) {
+-		if (entry->bytes < bytes && entry->bytes > *max_extent_size)
+-			*max_extent_size = entry->bytes;
++		if (entry->bytes < bytes)
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 
+ 		if (entry->bytes < bytes ||
+ 		    (!entry->bitmap && entry->offset < min_start)) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d3736fbf6774..dc0f9d089b19 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -507,6 +507,7 @@ again:
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+ 			/* just bail out to the uncompressed code */
++			nr_pages = 0;
+ 			goto cont;
+ 		}
+ 
+@@ -2950,6 +2951,7 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 	bool truncated = false;
+ 	bool range_locked = false;
+ 	bool clear_new_delalloc_bytes = false;
++	bool clear_reserved_extent = true;
+ 
+ 	if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 	    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags) &&
+@@ -3053,10 +3055,12 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 						logical_len, logical_len,
+ 						compress_type, 0, 0,
+ 						BTRFS_FILE_EXTENT_REG);
+-		if (!ret)
++		if (!ret) {
++			clear_reserved_extent = false;
+ 			btrfs_release_delalloc_bytes(fs_info,
+ 						     ordered_extent->start,
+ 						     ordered_extent->disk_len);
++		}
+ 	}
+ 	unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
+ 			   ordered_extent->file_offset, ordered_extent->len,
+@@ -3117,8 +3121,13 @@ out:
+ 		 * wrong we need to return the space for this ordered extent
+ 		 * back to the allocator.  We only free the extent in the
+ 		 * truncated case if we didn't write out the extent at all.
++		 *
++		 * If we made it past insert_reserved_file_extent before we
++		 * errored out then we don't need to do this as the accounting
++		 * has already been done.
+ 		 */
+ 		if ((ret || !logical_len) &&
++		    clear_reserved_extent &&
+ 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 		    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
+ 			btrfs_free_reserved_extent(fs_info,
+@@ -5293,11 +5302,13 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		struct extent_state *cached_state = NULL;
+ 		u64 start;
+ 		u64 end;
++		unsigned state_flags;
+ 
+ 		node = rb_first(&io_tree->state);
+ 		state = rb_entry(node, struct extent_state, rb_node);
+ 		start = state->start;
+ 		end = state->end;
++		state_flags = state->state;
+ 		spin_unlock(&io_tree->lock);
+ 
+ 		lock_extent_bits(io_tree, start, end, &cached_state);
+@@ -5310,7 +5321,7 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		 *
+ 		 * Note, end is the bytenr of last byte, so we need + 1 here.
+ 		 */
+-		if (state->state & EXTENT_DELALLOC)
++		if (state_flags & EXTENT_DELALLOC)
+ 			btrfs_qgroup_free_data(inode, NULL, start, end - start + 1);
+ 
+ 		clear_extent_bit(io_tree, start, end,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index ef7159646615..c972920701a3 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -496,7 +496,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	struct fstrim_range range;
+ 	u64 minlen = ULLONG_MAX;
+ 	u64 num_devices = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
+ 	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+@@ -520,11 +519,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 		return -EOPNOTSUPP;
+ 	if (copy_from_user(&range, arg, sizeof(range)))
+ 		return -EFAULT;
+-	if (range.start > total_bytes ||
+-	    range.len < fs_info->sb->s_blocksize)
++
++	/*
++	 * NOTE: Don't truncate the range using super->total_bytes.  Bytenr of
++	 * block group is in the logical address space, which can be any
++	 * sectorsize aligned bytenr in  the range [0, U64_MAX].
++	 */
++	if (range.len < fs_info->sb->s_blocksize)
+ 		return -EINVAL;
+ 
+-	range.len = min(range.len, total_bytes - range.start);
+ 	range.minlen = max(range.minlen, minlen);
+ 	ret = btrfs_trim_fs(fs_info, &range);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index c25dc47210a3..7407f5a5d682 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2856,6 +2856,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
+ 		qgroup->rfer_cmpr = 0;
+ 		qgroup->excl = 0;
+ 		qgroup->excl_cmpr = 0;
++		qgroup_dirty(fs_info, qgroup);
+ 	}
+ 	spin_unlock(&fs_info->qgroup_lock);
+ }
+@@ -3065,6 +3066,10 @@ static int __btrfs_qgroup_release_data(struct inode *inode,
+ 	int trace_op = QGROUP_RELEASE;
+ 	int ret;
+ 
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED,
++		      &BTRFS_I(inode)->root->fs_info->flags))
++		return 0;
++
+ 	/* In release case, we shouldn't have @reserved */
+ 	WARN_ON(!free && reserved);
+ 	if (free && reserved)
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index d60dd06445ce..cad73ed7aebc 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -261,6 +261,8 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
+ static inline void btrfs_qgroup_free_delayed_ref(struct btrfs_fs_info *fs_info,
+ 						 u64 ref_root, u64 num_bytes)
+ {
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
++		return;
+ 	trace_btrfs_qgroup_free_delayed_ref(fs_info, ref_root, num_bytes);
+ 	btrfs_qgroup_free_refroot(fs_info, ref_root, num_bytes,
+ 				  BTRFS_QGROUP_RSV_DATA);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index be94c65bb4d2..5ee49b796815 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,7 +1321,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	if (rc) {
++	if (rc && root->node) {
+ 		spin_lock(&rc->reloc_root_tree.lock);
+ 		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ 				      root->node->start);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index ff5f6c719976..9ee0aca134fc 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1930,6 +1930,9 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
++	btrfs_trans_release_metadata(trans);
++	trans->block_rsv = NULL;
++
+ 	/* make a pass through all the delayed refs we have so far
+ 	 * any runnings procs may add more while we are here
+ 	 */
+@@ -1939,9 +1942,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
+-	btrfs_trans_release_metadata(trans);
+-	trans->block_rsv = NULL;
+-
+ 	cur_trans = trans->transaction;
+ 
+ 	/*
+@@ -2281,15 +2281,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	kmem_cache_free(btrfs_trans_handle_cachep, trans);
+ 
+-	/*
+-	 * If fs has been frozen, we can not handle delayed iputs, otherwise
+-	 * it'll result in deadlock about SB_FREEZE_FS.
+-	 */
+-	if (current != fs_info->transaction_kthread &&
+-	    current != fs_info->cleaner_kthread &&
+-	    !test_bit(BTRFS_FS_FROZEN, &fs_info->flags))
+-		btrfs_run_delayed_iputs(fs_info);
+-
+ 	return ret;
+ 
+ scrub_continue:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 84b00a29d531..8b3f14a1adf0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -258,6 +258,13 @@ struct walk_control {
+ 	/* what stage of the replay code we're currently in */
+ 	int stage;
+ 
++	/*
++	 * Ignore any items from the inode currently being processed. Needs
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
++	 * the LOG_WALK_REPLAY_INODES stage.
++	 */
++	bool ignore_cur_inode;
++
+ 	/* the root we are currently replaying */
+ 	struct btrfs_root *replay_dest;
+ 
+@@ -2492,6 +2499,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 			inode_item = btrfs_item_ptr(eb, i,
+ 					    struct btrfs_inode_item);
++			/*
++			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
++			 * and never got linked before the fsync, skip it, as
++			 * replaying it is pointless since it would be deleted
++			 * later. We skip logging tmpfiles, but it's always
++			 * possible we are replaying a log created with a kernel
++			 * that used to log tmpfiles.
++			 */
++			if (btrfs_inode_nlink(eb, inode_item) == 0) {
++				wc->ignore_cur_inode = true;
++				continue;
++			} else {
++				wc->ignore_cur_inode = false;
++			}
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+@@ -2529,16 +2550,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 					     root->fs_info->sectorsize);
+ 				ret = btrfs_drop_extents(wc->trans, root, inode,
+ 							 from, (u64)-1, 1);
+-				/*
+-				 * If the nlink count is zero here, the iput
+-				 * will free the inode.  We bump it to make
+-				 * sure it doesn't get freed until the link
+-				 * count fixup is done.
+-				 */
+ 				if (!ret) {
+-					if (inode->i_nlink == 0)
+-						inc_nlink(inode);
+-					/* Update link count and nbytes. */
++					/* Update the inode's nbytes. */
+ 					ret = btrfs_update_inode(wc->trans,
+ 								 root, inode);
+ 				}
+@@ -2553,6 +2566,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 				break;
+ 		}
+ 
++		if (wc->ignore_cur_inode)
++			continue;
++
+ 		if (key.type == BTRFS_DIR_INDEX_KEY &&
+ 		    wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
+ 			ret = replay_one_dir_item(wc->trans, root, path,
+@@ -3209,9 +3225,12 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+ 	};
+ 
+ 	ret = walk_log_tree(trans, log, &wc);
+-	/* I don't think this can happen but just in case */
+-	if (ret)
+-		btrfs_abort_transaction(trans, ret);
++	if (ret) {
++		if (trans)
++			btrfs_abort_transaction(trans, ret);
++		else
++			btrfs_handle_fs_error(log->fs_info, ret, NULL);
++	}
+ 
+ 	while (1) {
+ 		ret = find_first_extent_bit(&log->dirty_log_pages,
+@@ -4505,7 +4524,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
+ 
+ 	INIT_LIST_HEAD(&extents);
+ 
+-	down_write(&inode->dio_sem);
+ 	write_lock(&tree->lock);
+ 	test_gen = root->fs_info->last_trans_committed;
+ 	logged_start = start;
+@@ -4586,7 +4604,6 @@ process:
+ 	}
+ 	WARN_ON(!list_empty(&extents));
+ 	write_unlock(&tree->lock);
+-	up_write(&inode->dio_sem);
+ 
+ 	btrfs_release_path(path);
+ 	if (!ret)
+@@ -4784,7 +4801,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 			ASSERT(len == i_size ||
+ 			       (len == fs_info->sectorsize &&
+ 				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE));
++				BTRFS_COMPRESS_NONE) ||
++			       (len < i_size && i_size < fs_info->sectorsize));
+ 			return 0;
+ 		}
+ 
+@@ -5718,9 +5736,33 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans,
+ 
+ 			dir_inode = btrfs_iget(fs_info->sb, &inode_key,
+ 					       root, NULL);
+-			/* If parent inode was deleted, skip it. */
+-			if (IS_ERR(dir_inode))
+-				continue;
++			/*
++			 * If the parent inode was deleted, return an error to
++			 * fallback to a transaction commit. This is to prevent
++			 * getting an inode that was moved from one parent A to
++			 * a parent B, got its former parent A deleted and then
++			 * it got fsync'ed, from existing at both parents after
++			 * a log replay (and the old parent still existing).
++			 * Example:
++			 *
++			 * mkdir /mnt/A
++			 * mkdir /mnt/B
++			 * touch /mnt/B/bar
++			 * sync
++			 * mv /mnt/B/bar /mnt/A/bar
++			 * mv -T /mnt/A /mnt/B
++			 * fsync /mnt/B/bar
++			 * <power fail>
++			 *
++			 * If we ignore the old parent B which got deleted,
++			 * after a log replay we would have file bar linked
++			 * at both parents and the old parent B would still
++			 * exist.
++			 */
++			if (IS_ERR(dir_inode)) {
++				ret = PTR_ERR(dir_inode);
++				goto out;
++			}
+ 
+ 			if (ctx)
+ 				ctx->log_new_dentries = false;
+@@ -5794,7 +5836,13 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		goto end_no_trans;
+ 
+-	if (btrfs_inode_in_log(inode, trans->transid)) {
++	/*
++	 * Skip already logged inodes or inodes corresponding to tmpfiles
++	 * (since logging them is pointless, a link count of 0 means they
++	 * will never be accessible).
++	 */
++	if (btrfs_inode_in_log(inode, trans->transid) ||
++	    inode->vfs_inode.i_nlink == 0) {
+ 		ret = BTRFS_NO_LOG_SYNC;
+ 		goto end_no_trans;
+ 	}
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index b20297988fe0..c1261b7fd292 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,9 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		atomic_set(&tcpSesReconnectCount, 0);
++		atomic_set(&tconInfoReconnectCount, 0);
++
+ 		spin_lock(&GlobalMid_Lock);
+ 		GlobalMaxActiveXid = 0;
+ 		GlobalCurrentXid = 0;
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index b611fc2e8984..7f01c6e60791 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo)
+ 		sprintf(dp, ";sec=krb5");
+ 	else if (server->sec_mskerberos)
+ 		sprintf(dp, ";sec=mskrb5");
+-	else
+-		goto out;
++	else {
++		cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
++		sprintf(dp, ";sec=krb5");
++	}
+ 
+ 	dp = description + strlen(description);
+ 	sprintf(dp, ";uid=0x%x",
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index d279fa5472db..334b2b3d21a3 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -779,7 +779,15 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ 	} else if (rc == -EREMOTE) {
+ 		cifs_create_dfs_fattr(&fattr, sb);
+ 		rc = 0;
+-	} else if (rc == -EACCES && backup_cred(cifs_sb)) {
++	} else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
++		   (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
++		      == 0)) {
++			/*
++			 * For SMB2 and later the backup intent flag is already
++			 * sent if needed on open and there is no path based
++			 * FindFirst operation to use to retry with
++			 */
++
+ 			srchinf = kzalloc(sizeof(struct cifs_search_info),
+ 						GFP_KERNEL);
+ 			if (srchinf == NULL) {
+diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
+index f408994fc632..6e000392e4a4 100644
+--- a/fs/cramfs/inode.c
++++ b/fs/cramfs/inode.c
+@@ -202,7 +202,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
+ 			continue;
+ 		blk_offset = (blocknr - buffer_blocknr[i]) << PAGE_SHIFT;
+ 		blk_offset += offset;
+-		if (blk_offset + len > BUFFER_SIZE)
++		if (blk_offset > BUFFER_SIZE ||
++		    blk_offset + len > BUFFER_SIZE)
+ 			continue;
+ 		return read_buffers[i] + blk_offset;
+ 	}
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 39c20ef26db4..79debfc9cef9 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -83,10 +83,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
+ 	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+ 		return true;
+ 
+-	if (contents_mode == FS_ENCRYPTION_MODE_SPECK128_256_XTS &&
+-	    filenames_mode == FS_ENCRYPTION_MODE_SPECK128_256_CTS)
+-		return true;
+-
+ 	return false;
+ }
+ 
+diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
+index e997ca51192f..7874c9bb2fc5 100644
+--- a/fs/crypto/keyinfo.c
++++ b/fs/crypto/keyinfo.c
+@@ -174,16 +174,6 @@ static struct fscrypt_mode {
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 16,
+ 	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_XTS] = {
+-		.friendly_name = "Speck128/256-XTS",
+-		.cipher_str = "xts(speck128)",
+-		.keysize = 64,
+-	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_CTS] = {
+-		.friendly_name = "Speck128/256-CTS-CBC",
+-		.cipher_str = "cts(cbc(speck128))",
+-		.keysize = 32,
+-	},
+ };
+ 
+ static struct fscrypt_mode *
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index aa1ce53d0c87..7fcc11fcbbbd 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1387,7 +1387,8 @@ struct ext4_sb_info {
+ 	u32 s_min_batch_time;
+ 	struct block_device *journal_bdev;
+ #ifdef CONFIG_QUOTA
+-	char *s_qf_names[EXT4_MAXQUOTAS];	/* Names of quota files with journalled quota */
++	/* Names of quota files with journalled quota */
++	char __rcu *s_qf_names[EXT4_MAXQUOTAS];
+ 	int s_jquota_fmt;			/* Format of quota to use */
+ #endif
+ 	unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 7b4736022761..9c4bac18cc6c 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -863,7 +863,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
+ 	handle_t *handle;
+ 	struct page *page;
+ 	struct ext4_iloc iloc;
+-	int retries;
++	int retries = 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret)
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index a7074115d6f6..0edee31913d1 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -67,7 +67,6 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	ei1 = EXT4_I(inode1);
+ 	ei2 = EXT4_I(inode2);
+ 
+-	swap(inode1->i_flags, inode2->i_flags);
+ 	swap(inode1->i_version, inode2->i_version);
+ 	swap(inode1->i_blocks, inode2->i_blocks);
+ 	swap(inode1->i_bytes, inode2->i_bytes);
+@@ -85,6 +84,21 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	i_size_write(inode2, isize);
+ }
+ 
++static void reset_inode_seed(struct inode *inode)
++{
++	struct ext4_inode_info *ei = EXT4_I(inode);
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	__le32 inum = cpu_to_le32(inode->i_ino);
++	__le32 gen = cpu_to_le32(inode->i_generation);
++	__u32 csum;
++
++	if (!ext4_has_metadata_csum(inode->i_sb))
++		return;
++
++	csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)&inum, sizeof(inum));
++	ei->i_csum_seed = ext4_chksum(sbi, csum, (__u8 *)&gen, sizeof(gen));
++}
++
+ /**
+  * Swap the information from the given @inode and the inode
+  * EXT4_BOOT_LOADER_INO. It will basically swap i_data and all other
+@@ -102,10 +116,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	struct inode *inode_bl;
+ 	struct ext4_inode_info *ei_bl;
+ 
+-	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode))
++	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++	    ext4_has_inline_data(inode))
+ 		return -EINVAL;
+ 
+-	if (!inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
++	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO);
+@@ -120,13 +137,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	 * that only 1 swap_inode_boot_loader is running. */
+ 	lock_two_nondirectories(inode, inode_bl);
+ 
+-	truncate_inode_pages(&inode->i_data, 0);
+-	truncate_inode_pages(&inode_bl->i_data, 0);
+-
+ 	/* Wait for all existing dio workers */
+ 	inode_dio_wait(inode);
+ 	inode_dio_wait(inode_bl);
+ 
++	truncate_inode_pages(&inode->i_data, 0);
++	truncate_inode_pages(&inode_bl->i_data, 0);
++
+ 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ 	if (IS_ERR(handle)) {
+ 		err = -EINVAL;
+@@ -159,6 +176,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 
+ 	inode->i_generation = prandom_u32();
+ 	inode_bl->i_generation = prandom_u32();
++	reset_inode_seed(inode);
++	reset_inode_seed(inode_bl);
+ 
+ 	ext4_discard_preallocations(inode);
+ 
+@@ -169,6 +188,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			inode->i_ino, err);
+ 		/* Revert all changes: */
+ 		swap_inode_data(inode, inode_bl);
++		ext4_mark_inode_dirty(handle, inode);
+ 	} else {
+ 		err = ext4_mark_inode_dirty(handle, inode_bl);
+ 		if (err < 0) {
+@@ -178,6 +198,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			/* Revert all changes: */
+ 			swap_inode_data(inode, inode_bl);
+ 			ext4_mark_inode_dirty(handle, inode);
++			ext4_mark_inode_dirty(handle, inode_bl);
+ 		}
+ 	}
+ 	ext4_journal_stop(handle);
+@@ -339,19 +360,14 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (projid_eq(kprojid, EXT4_I(inode)->i_projid))
+ 		return 0;
+ 
+-	err = mnt_want_write_file(filp);
+-	if (err)
+-		return err;
+-
+ 	err = -EPERM;
+-	inode_lock(inode);
+ 	/* Is it quota file? Do not allow user to mess with it */
+ 	if (ext4_is_quota_file(inode))
+-		goto out_unlock;
++		return err;
+ 
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+-		goto out_unlock;
++		return err;
+ 
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 	if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) {
+@@ -359,20 +375,20 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 					      EXT4_SB(sb)->s_want_extra_isize,
+ 					      &iloc);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 	} else {
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	dquot_initialize(inode);
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+-	if (IS_ERR(handle)) {
+-		err = PTR_ERR(handle);
+-		goto out_unlock;
+-	}
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
+ 
+ 	err = ext4_reserve_inode_write(handle, inode, &iloc);
+ 	if (err)
+@@ -400,9 +416,6 @@ out_dirty:
+ 		err = rc;
+ out_stop:
+ 	ext4_journal_stop(handle);
+-out_unlock:
+-	inode_unlock(inode);
+-	mnt_drop_write_file(filp);
+ 	return err;
+ }
+ #else
+@@ -626,6 +639,30 @@ group_add_out:
+ 	return err;
+ }
+ 
++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa)
++{
++	/*
++	 * Project Quota ID state is only allowed to change from within the init
++	 * namespace. Enforce that restriction only if we are trying to change
++	 * the quota ID state. Everything else is allowed in user namespaces.
++	 */
++	if (current_user_ns() == &init_user_ns)
++		return 0;
++
++	if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid)
++		return -EINVAL;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) {
++		if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT))
++			return -EINVAL;
++	} else {
++		if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT)
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -1025,19 +1062,19 @@ resizefs_out:
+ 			return err;
+ 
+ 		inode_lock(inode);
++		err = ext4_ioctl_check_project(inode, &fa);
++		if (err)
++			goto out;
+ 		flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
+ 			 (flags & EXT4_FL_XFLAG_VISIBLE);
+ 		err = ext4_ioctl_setflags(inode, flags);
+-		inode_unlock(inode);
+-		mnt_drop_write_file(filp);
+ 		if (err)
+-			return err;
+-
++			goto out;
+ 		err = ext4_ioctl_setproject(filp, fa.fsx_projid);
+-		if (err)
+-			return err;
+-
+-		return 0;
++out:
++		inode_unlock(inode);
++		mnt_drop_write_file(filp);
++		return err;
+ 	}
+ 	case EXT4_IOC_SHUTDOWN:
+ 		return ext4_shutdown(sb, arg);
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 8e17efdcbf11..887353875060 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -518,9 +518,13 @@ mext_check_arguments(struct inode *orig_inode,
+ 			orig_inode->i_ino, donor_inode->i_ino);
+ 		return -EINVAL;
+ 	}
+-	if (orig_eof < orig_start + *len - 1)
++	if (orig_eof <= orig_start)
++		*len = 0;
++	else if (orig_eof < orig_start + *len - 1)
+ 		*len = orig_eof - orig_start;
+-	if (donor_eof < donor_start + *len - 1)
++	if (donor_eof <= donor_start)
++		*len = 0;
++	else if (donor_eof < donor_start + *len - 1)
+ 		*len = donor_eof - donor_start;
+ 	if (!*len) {
+ 		ext4_debug("ext4 move extent: len should not be 0 "
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a7a0fffc3ae8..8d91d50ccf42 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -895,6 +895,18 @@ static inline void ext4_quota_off_umount(struct super_block *sb)
+ 	for (type = 0; type < EXT4_MAXQUOTAS; type++)
+ 		ext4_quota_off(sb, type);
+ }
++
++/*
++ * This is a helper function which is used in the mount/remount
++ * codepaths (which holds s_umount) to fetch the quota file name.
++ */
++static inline char *get_qf_name(struct super_block *sb,
++				struct ext4_sb_info *sbi,
++				int type)
++{
++	return rcu_dereference_protected(sbi->s_qf_names[type],
++					 lockdep_is_held(&sb->s_umount));
++}
+ #else
+ static inline void ext4_quota_off_umount(struct super_block *sb)
+ {
+@@ -946,7 +958,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
+ #ifdef CONFIG_QUOTA
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(sbi->s_qf_names[i]);
++		kfree(get_qf_name(sb, sbi, i));
+ #endif
+ 
+ 	/* Debugging code just in case the in-memory inode orphan list
+@@ -1511,11 +1523,10 @@ static const char deprecated_msg[] =
+ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *qname;
++	char *qname, *old_qname = get_qf_name(sb, sbi, qtype);
+ 	int ret = -1;
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		!sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && !old_qname) {
+ 		ext4_msg(sb, KERN_ERR,
+ 			"Cannot change journaled "
+ 			"quota options when quota turned on");
+@@ -1532,8 +1543,8 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"Not enough memory for storing quotafile name");
+ 		return -1;
+ 	}
+-	if (sbi->s_qf_names[qtype]) {
+-		if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
++	if (old_qname) {
++		if (strcmp(old_qname, qname) == 0)
+ 			ret = 1;
+ 		else
+ 			ext4_msg(sb, KERN_ERR,
+@@ -1546,7 +1557,7 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"quotafile must be on filesystem root");
+ 		goto errout;
+ 	}
+-	sbi->s_qf_names[qtype] = qname;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], qname);
+ 	set_opt(sb, QUOTA);
+ 	return 1;
+ errout:
+@@ -1558,15 +1569,16 @@ static int clear_qf_name(struct super_block *sb, int qtype)
+ {
+ 
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *old_qname = get_qf_name(sb, sbi, qtype);
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && old_qname) {
+ 		ext4_msg(sb, KERN_ERR, "Cannot change journaled quota options"
+ 			" when quota turned on");
+ 		return -1;
+ 	}
+-	kfree(sbi->s_qf_names[qtype]);
+-	sbi->s_qf_names[qtype] = NULL;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], NULL);
++	synchronize_rcu();
++	kfree(old_qname);
+ 	return 1;
+ }
+ #endif
+@@ -1941,7 +1953,7 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 int is_remount)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *p;
++	char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	int token;
+ 
+@@ -1972,11 +1984,13 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 "Cannot enable project quota enforcement.");
+ 		return 0;
+ 	}
+-	if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) {
+-		if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
++	usr_qf_name = get_qf_name(sb, sbi, USRQUOTA);
++	grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA);
++	if (usr_qf_name || grp_qf_name) {
++		if (test_opt(sb, USRQUOTA) && usr_qf_name)
+ 			clear_opt(sb, USRQUOTA);
+ 
+-		if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
++		if (test_opt(sb, GRPQUOTA) && grp_qf_name)
+ 			clear_opt(sb, GRPQUOTA);
+ 
+ 		if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) {
+@@ -2010,6 +2024,7 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ {
+ #if defined(CONFIG_QUOTA)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *usr_qf_name, *grp_qf_name;
+ 
+ 	if (sbi->s_jquota_fmt) {
+ 		char *fmtname = "";
+@@ -2028,11 +2043,14 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ 		seq_printf(seq, ",jqfmt=%s", fmtname);
+ 	}
+ 
+-	if (sbi->s_qf_names[USRQUOTA])
+-		seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
+-
+-	if (sbi->s_qf_names[GRPQUOTA])
+-		seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
++	rcu_read_lock();
++	usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]);
++	grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]);
++	if (usr_qf_name)
++		seq_show_option(seq, "usrjquota", usr_qf_name);
++	if (grp_qf_name)
++		seq_show_option(seq, "grpjquota", grp_qf_name);
++	rcu_read_unlock();
+ #endif
+ }
+ 
+@@ -5081,6 +5099,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	int err = 0;
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
++	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 
+@@ -5097,8 +5116,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	old_opts.s_jquota_fmt = sbi->s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		if (sbi->s_qf_names[i]) {
+-			old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
+-							 GFP_KERNEL);
++			char *qf_name = get_qf_name(sb, sbi, i);
++
++			old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL);
+ 			if (!old_opts.s_qf_names[i]) {
+ 				for (j = 0; j < i; j++)
+ 					kfree(old_opts.s_qf_names[j]);
+@@ -5327,9 +5347,12 @@ restore_opts:
+ #ifdef CONFIG_QUOTA
+ 	sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++) {
+-		kfree(sbi->s_qf_names[i]);
+-		sbi->s_qf_names[i] = old_opts.s_qf_names[i];
++		to_free[i] = get_qf_name(sb, sbi, i);
++		rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]);
+ 	}
++	synchronize_rcu();
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(to_free[i]);
+ #endif
+ 	kfree(orig_data);
+ 	return err;
+@@ -5520,7 +5543,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+  */
+ static int ext4_quota_on_mount(struct super_block *sb, int type)
+ {
+-	return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type],
++	return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type),
+ 					EXT4_SB(sb)->s_jquota_fmt, type);
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b61954d40c25..e397515261dc 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -80,7 +80,8 @@ static void __read_end_io(struct bio *bio)
+ 		/* PG_error was set if any post_read step failed */
+ 		if (bio->bi_status || PageError(page)) {
+ 			ClearPageUptodate(page);
+-			SetPageError(page);
++			/* will re-read again later */
++			ClearPageError(page);
+ 		} else {
+ 			SetPageUptodate(page);
+ 		}
+@@ -453,12 +454,16 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
+-	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+-	__submit_bio(fio->sbi, bio, fio->type);
++	if (fio->io_wbc && !is_read_io(fio->op))
++		wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
++
++	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+ 	if (!is_read_io(fio->op))
+ 		inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page));
++
++	__submit_bio(fio->sbi, bio, fio->type);
+ 	return 0;
+ }
+ 
+@@ -580,6 +585,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
++	ClearPageError(page);
+ 	__submit_bio(F2FS_I_SB(inode), bio, DATA);
+ 	return 0;
+ }
+@@ -1524,6 +1530,7 @@ submit_and_realloc:
+ 		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+ 			goto submit_and_realloc;
+ 
++		ClearPageError(page);
+ 		last_block_in_bio = block_nr;
+ 		goto next_page;
+ set_error_page:
+@@ -2494,10 +2501,6 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
+-	/* don't remain PG_checked flag which was set during GC */
+-	if (is_cold_data(page))
+-		clear_cold_data(page);
+-
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 231b77ef5a53..a70cd2580eae 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -308,14 +308,13 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ 	return count - atomic_read(&et->node_cnt);
+ }
+ 
+-static void __drop_largest_extent(struct inode *inode,
++static void __drop_largest_extent(struct extent_tree *et,
+ 					pgoff_t fofs, unsigned int len)
+ {
+-	struct extent_info *largest = &F2FS_I(inode)->extent_tree->largest;
+-
+-	if (fofs < largest->fofs + largest->len && fofs + len > largest->fofs) {
+-		largest->len = 0;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++	if (fofs < et->largest.fofs + et->largest.len &&
++			fofs + len > et->largest.fofs) {
++		et->largest.len = 0;
++		et->largest_updated = true;
+ 	}
+ }
+ 
+@@ -416,12 +415,11 @@ out:
+ 	return ret;
+ }
+ 
+-static struct extent_node *__try_merge_extent_node(struct inode *inode,
++static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct extent_node *prev_ex,
+ 				struct extent_node *next_ex)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_node *en = NULL;
+ 
+ 	if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei)) {
+@@ -443,7 +441,7 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	spin_lock(&sbi->extent_lock);
+ 	if (!list_empty(&en->list)) {
+@@ -454,12 +452,11 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	return en;
+ }
+ 
+-static struct extent_node *__insert_extent_tree(struct inode *inode,
++static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct rb_node **insert_p,
+ 				struct rb_node *insert_parent)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct rb_node **p;
+ 	struct rb_node *parent = NULL;
+ 	struct extent_node *en = NULL;
+@@ -476,7 +473,7 @@ do_insert:
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	/* update in global extent list */
+ 	spin_lock(&sbi->extent_lock);
+@@ -497,6 +494,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	struct rb_node **insert_p = NULL, *insert_parent = NULL;
+ 	unsigned int end = fofs + len;
+ 	unsigned int pos = (unsigned int)fofs;
++	bool updated = false;
+ 
+ 	if (!et)
+ 		return;
+@@ -517,7 +515,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	 * drop largest extent before lookup, in case it's already
+ 	 * been shrunk from extent tree
+ 	 */
+-	__drop_largest_extent(inode, fofs, len);
++	__drop_largest_extent(et, fofs, len);
+ 
+ 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
+ 	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
+@@ -550,7 +548,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 				set_extent_info(&ei, end,
+ 						end - dei.fofs + dei.blk,
+ 						org_end - end);
+-				en1 = __insert_extent_tree(inode, et, &ei,
++				en1 = __insert_extent_tree(sbi, et, &ei,
+ 							NULL, NULL);
+ 				next_en = en1;
+ 			} else {
+@@ -570,7 +568,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 		}
+ 
+ 		if (parts)
+-			__try_update_largest_extent(inode, et, en);
++			__try_update_largest_extent(et, en);
+ 		else
+ 			__release_extent_node(sbi, et, en);
+ 
+@@ -590,15 +588,16 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (blkaddr) {
+ 
+ 		set_extent_info(&ei, fofs, blkaddr, len);
+-		if (!__try_merge_extent_node(inode, et, &ei, prev_en, next_en))
+-			__insert_extent_tree(inode, et, &ei,
++		if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en))
++			__insert_extent_tree(sbi, et, &ei,
+ 						insert_p, insert_parent);
+ 
+ 		/* give up extent_cache, if split and small updates happen */
+ 		if (dei.len >= 1 &&
+ 				prev.len < F2FS_MIN_EXTENT_LEN &&
+ 				et->largest.len < F2FS_MIN_EXTENT_LEN) {
+-			__drop_largest_extent(inode, 0, UINT_MAX);
++			et->largest.len = 0;
++			et->largest_updated = true;
+ 			set_inode_flag(inode, FI_NO_EXTENT);
+ 		}
+ 	}
+@@ -606,7 +605,15 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (is_inode_flag_set(inode, FI_NO_EXTENT))
+ 		__free_extent_tree(sbi, et);
+ 
++	if (et->largest_updated) {
++		et->largest_updated = false;
++		updated = true;
++	}
++
+ 	write_unlock(&et->lock);
++
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
+@@ -705,6 +712,7 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
++	bool updated = false;
+ 
+ 	if (!f2fs_may_extent_tree(inode))
+ 		return;
+@@ -713,8 +721,13 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ 
+ 	write_lock(&et->lock);
+ 	__free_extent_tree(sbi, et);
+-	__drop_largest_extent(inode, 0, UINT_MAX);
++	if (et->largest.len) {
++		et->largest.len = 0;
++		updated = true;
++	}
+ 	write_unlock(&et->lock);
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ void f2fs_destroy_extent_tree(struct inode *inode)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b6f2dc8163e1..181aade161e8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -556,6 +556,7 @@ struct extent_tree {
+ 	struct list_head list;		/* to be used by sbi->zombie_list */
+ 	rwlock_t lock;			/* protect extent info rb-tree */
+ 	atomic_t node_cnt;		/* # of extent node in rb-tree*/
++	bool largest_updated;		/* largest extent updated */
+ };
+ 
+ /*
+@@ -736,12 +737,12 @@ static inline bool __is_front_mergeable(struct extent_info *cur,
+ }
+ 
+ extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
+-static inline void __try_update_largest_extent(struct inode *inode,
+-			struct extent_tree *et, struct extent_node *en)
++static inline void __try_update_largest_extent(struct extent_tree *et,
++						struct extent_node *en)
+ {
+ 	if (en->ei.len > et->largest.len) {
+ 		et->largest = en->ei;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++		et->largest_updated = true;
+ 	}
+ }
+ 
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index cf0f944fcaea..4a2e75bce36a 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -287,6 +287,12 @@ static int do_read_inode(struct inode *inode)
+ 	if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
+ 		__recover_inline_status(inode, node_page);
+ 
++	/* try to recover cold bit for non-dir inode */
++	if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) {
++		set_cold_node(node_page, false);
++		set_page_dirty(node_page);
++	}
++
+ 	/* get rdev by using inline_info */
+ 	__get_inode_rdev(inode, ri);
+ 
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 52ed02b0327c..ec22e7c5b37e 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2356,7 +2356,7 @@ retry:
+ 	if (!PageUptodate(ipage))
+ 		SetPageUptodate(ipage);
+ 	fill_node_footer(ipage, ino, ino, 0, true);
+-	set_cold_node(page, false);
++	set_cold_node(ipage, false);
+ 
+ 	src = F2FS_INODE(page);
+ 	dst = F2FS_INODE(ipage);
+@@ -2379,6 +2379,13 @@ retry:
+ 			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
+ 								i_projid))
+ 			dst->i_projid = src->i_projid;
++
++		if (f2fs_sb_has_inode_crtime(sbi->sb) &&
++			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
++							i_crtime_nsec)) {
++			dst->i_crtime = src->i_crtime;
++			dst->i_crtime_nsec = src->i_crtime_nsec;
++		}
+ 	}
+ 
+ 	new_ni = old_ni;
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index ad70e62c5da4..a69a2c5c6682 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -221,6 +221,7 @@ static void recover_inode(struct inode *inode, struct page *page)
+ 	inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
+ 
+ 	F2FS_I(inode)->i_advise = raw->i_advise;
++	F2FS_I(inode)->i_flags = le32_to_cpu(raw->i_flags);
+ 
+ 	recover_inline_flags(inode, raw);
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 742147cbe759..a3e90e6f72a8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1820,7 +1820,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ 	if (!inode || !igrab(inode))
+ 		return dquot_quota_off(sb, type);
+ 
+-	f2fs_quota_sync(sb, type);
++	err = f2fs_quota_sync(sb, type);
++	if (err)
++		goto out_put;
+ 
+ 	err = dquot_quota_off(sb, type);
+ 	if (err || f2fs_sb_has_quota_ino(sb))
+@@ -1839,9 +1841,20 @@ out_put:
+ void f2fs_quota_off_umount(struct super_block *sb)
+ {
+ 	int type;
++	int err;
++
++	for (type = 0; type < MAXQUOTAS; type++) {
++		err = f2fs_quota_off(sb, type);
++		if (err) {
++			int ret = dquot_quota_off(sb, type);
+ 
+-	for (type = 0; type < MAXQUOTAS; type++)
+-		f2fs_quota_off(sb, type);
++			f2fs_msg(sb, KERN_ERR,
++				"Fail to turn off disk quota "
++				"(type: %d, err: %d, ret:%d), Please "
++				"run fsck to fix it.", type, err, ret);
++			set_sbi_flag(F2FS_SB(sb), SBI_NEED_FSCK);
++		}
++	}
+ }
+ 
+ static int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index c2469833b4fb..6b84ef6ccff3 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1333,6 +1333,9 @@ static struct dentry *gfs2_mount_meta(struct file_system_type *fs_type,
+ 	struct path path;
+ 	int error;
+ 
++	if (!dev_name || !*dev_name)
++		return ERR_PTR(-EINVAL);
++
+ 	error = kern_path(dev_name, LOOKUP_FOLLOW, &path);
+ 	if (error) {
+ 		pr_warn("path_lookup on %s returned error %d\n",
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index c125d662777c..26f8d7e46462 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -251,8 +251,8 @@ restart:
+ 		bh = jh2bh(jh);
+ 
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+@@ -333,8 +333,8 @@ restart2:
+ 		jh = transaction->t_checkpoint_io_list;
+ 		bh = jh2bh(jh);
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 87bdf0f4cba1..902a7dd10e5c 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -285,10 +285,8 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_fs_info = c;
+ 
+ 	ret = jffs2_parse_options(c, data);
+-	if (ret) {
+-		kfree(c);
++	if (ret)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Initialize JFFS2 superblock locks, the further initialization will
+ 	 * be done later */
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index d35cd6be0675..93fb7cf0b92b 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -341,7 +341,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
+ 	};
+ 	struct lockd_net *ln = net_generic(net, lockd_net_id);
+ 
+-	dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
++	dprintk("lockd: %s(host='%.*s', vers=%u, proto=%s)\n", __func__,
+ 			(int)hostname_len, hostname, rqstp->rq_vers,
+ 			(rqstp->rq_prot == IPPROTO_UDP ? "udp" : "tcp"));
+ 
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index d7124fb12041..5df68d79d661 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -935,10 +935,10 @@ EXPORT_SYMBOL_GPL(nfs4_set_ds_client);
+ 
+ /*
+  * Session has been established, and the client marked ready.
+- * Set the mount rsize and wsize with negotiated fore channel
+- * attributes which will be bound checked in nfs_server_set_fsinfo.
++ * Limit the mount rsize, wsize and dtsize using negotiated fore
++ * channel attributes.
+  */
+-static void nfs4_session_set_rwsize(struct nfs_server *server)
++static void nfs4_session_limit_rwsize(struct nfs_server *server)
+ {
+ #ifdef CONFIG_NFS_V4_1
+ 	struct nfs4_session *sess;
+@@ -951,9 +951,11 @@ static void nfs4_session_set_rwsize(struct nfs_server *server)
+ 	server_resp_sz = sess->fc_attrs.max_resp_sz - nfs41_maxread_overhead;
+ 	server_rqst_sz = sess->fc_attrs.max_rqst_sz - nfs41_maxwrite_overhead;
+ 
+-	if (!server->rsize || server->rsize > server_resp_sz)
++	if (server->dtsize > server_resp_sz)
++		server->dtsize = server_resp_sz;
++	if (server->rsize > server_resp_sz)
+ 		server->rsize = server_resp_sz;
+-	if (!server->wsize || server->wsize > server_rqst_sz)
++	if (server->wsize > server_rqst_sz)
+ 		server->wsize = server_rqst_sz;
+ #endif /* CONFIG_NFS_V4_1 */
+ }
+@@ -1000,12 +1002,12 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 			(unsigned long long) server->fsid.minor);
+ 	nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
+ 
+-	nfs4_session_set_rwsize(server);
+-
+ 	error = nfs_probe_fsinfo(server, mntfh, fattr);
+ 	if (error < 0)
+ 		goto out;
+ 
++	nfs4_session_limit_rwsize(server);
++
+ 	if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
+ 		server->namelen = NFS4_MAXNAMLEN;
+ 
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 67d19cd92e44..7e6425791388 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1110,6 +1110,20 @@ static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
+ 	return ret;
+ }
+ 
++static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
++{
++	u32 midx;
++	struct nfs_pgio_mirror *mirror;
++
++	if (!desc->pg_error)
++		return;
++
++	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
++		mirror = &desc->pg_mirrors[midx];
++		desc->pg_completion_ops->error_cleanup(&mirror->pg_list);
++	}
++}
++
+ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			   struct nfs_page *req)
+ {
+@@ -1160,25 +1174,11 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 	return 1;
+ 
+ out_failed:
+-	/*
+-	 * We might have failed before sending any reqs over wire.
+-	 * Clean up rest of the reqs in mirror pg_list.
+-	 */
+-	if (desc->pg_error) {
+-		struct nfs_pgio_mirror *mirror;
+-		void (*func)(struct list_head *);
+-
+-		/* remember fatal errors */
+-		if (nfs_error_is_fatal(desc->pg_error))
+-			nfs_context_set_write_error(req->wb_context,
+-						    desc->pg_error);
+-
+-		func = desc->pg_completion_ops->error_cleanup;
+-		for (midx = 0; midx < desc->pg_mirror_count; midx++) {
+-			mirror = &desc->pg_mirrors[midx];
+-			func(&mirror->pg_list);
+-		}
+-	}
++	/* remember fatal errors */
++	if (nfs_error_is_fatal(desc->pg_error))
++		nfs_context_set_write_error(req->wb_context,
++						desc->pg_error);
++	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+ }
+ 
+@@ -1250,6 +1250,8 @@ void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
+ 	for (midx = 0; midx < desc->pg_mirror_count; midx++)
+ 		nfs_pageio_complete_mirror(desc, midx);
+ 
++	if (desc->pg_error < 0)
++		nfs_pageio_error_cleanup(desc);
+ 	if (desc->pg_ops->pg_cleanup)
+ 		desc->pg_ops->pg_cleanup(desc);
+ 	nfs_pageio_cleanup_mirroring(desc);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 4a17fad93411..18fa7fd3bae9 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4361,7 +4361,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 
+ 	fl = nfs4_alloc_init_lease(dp, NFS4_OPEN_DELEGATE_READ);
+ 	if (!fl)
+-		goto out_stid;
++		goto out_clnt_odstate;
+ 
+ 	status = vfs_setlease(fp->fi_deleg_file, fl->fl_type, &fl, NULL);
+ 	if (fl)
+@@ -4386,7 +4386,6 @@ out_unlock:
+ 	vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
+ out_clnt_odstate:
+ 	put_clnt_odstate(dp->dl_clnt_odstate);
+-out_stid:
+ 	nfs4_put_stid(&dp->dl_stid);
+ out_delegees:
+ 	put_deleg_file(fp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index ababdbfab537..f43ea1aad542 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -96,6 +96,9 @@ void fsnotify_unmount_inodes(struct super_block *sb)
+ 
+ 	if (iput_inode)
+ 		iput(iput_inode);
++	/* Wait for outstanding inode references from connectors */
++	wait_var_event(&sb->s_fsnotify_inode_refs,
++		       !atomic_long_read(&sb->s_fsnotify_inode_refs));
+ }
+ 
+ /*
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 61f4c5fa34c7..75394ae96673 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -161,15 +161,18 @@ static void fsnotify_connector_destroy_workfn(struct work_struct *work)
+ 	}
+ }
+ 
+-static struct inode *fsnotify_detach_connector_from_object(
+-					struct fsnotify_mark_connector *conn)
++static void *fsnotify_detach_connector_from_object(
++					struct fsnotify_mark_connector *conn,
++					unsigned int *type)
+ {
+ 	struct inode *inode = NULL;
+ 
++	*type = conn->type;
+ 	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE) {
+ 		inode = conn->inode;
+ 		rcu_assign_pointer(inode->i_fsnotify_marks, NULL);
+ 		inode->i_fsnotify_mask = 0;
++		atomic_long_inc(&inode->i_sb->s_fsnotify_inode_refs);
+ 		conn->inode = NULL;
+ 		conn->type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	} else if (conn->type == FSNOTIFY_OBJ_TYPE_VFSMOUNT) {
+@@ -193,10 +196,29 @@ static void fsnotify_final_mark_destroy(struct fsnotify_mark *mark)
+ 	fsnotify_put_group(group);
+ }
+ 
++/* Drop object reference originally held by a connector */
++static void fsnotify_drop_object(unsigned int type, void *objp)
++{
++	struct inode *inode;
++	struct super_block *sb;
++
++	if (!objp)
++		return;
++	/* Currently only inode references are passed to be dropped */
++	if (WARN_ON_ONCE(type != FSNOTIFY_OBJ_TYPE_INODE))
++		return;
++	inode = objp;
++	sb = inode->i_sb;
++	iput(inode);
++	if (atomic_long_dec_and_test(&sb->s_fsnotify_inode_refs))
++		wake_up_var(&sb->s_fsnotify_inode_refs);
++}
++
+ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ {
+ 	struct fsnotify_mark_connector *conn;
+-	struct inode *inode = NULL;
++	void *objp = NULL;
++	unsigned int type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	bool free_conn = false;
+ 
+ 	/* Catch marks that were actually never attached to object */
+@@ -216,7 +238,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	conn = mark->connector;
+ 	hlist_del_init_rcu(&mark->obj_list);
+ 	if (hlist_empty(&conn->list)) {
+-		inode = fsnotify_detach_connector_from_object(conn);
++		objp = fsnotify_detach_connector_from_object(conn, &type);
+ 		free_conn = true;
+ 	} else {
+ 		__fsnotify_recalc_mask(conn);
+@@ -224,7 +246,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	mark->connector = NULL;
+ 	spin_unlock(&conn->lock);
+ 
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ 
+ 	if (free_conn) {
+ 		spin_lock(&destroy_lock);
+@@ -702,7 +724,8 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ {
+ 	struct fsnotify_mark_connector *conn;
+ 	struct fsnotify_mark *mark, *old_mark = NULL;
+-	struct inode *inode;
++	void *objp;
++	unsigned int type;
+ 
+ 	conn = fsnotify_grab_connector(connp);
+ 	if (!conn)
+@@ -728,11 +751,11 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ 	 * mark references get dropped. It would lead to strange results such
+ 	 * as delaying inode deletion or blocking unmount.
+ 	 */
+-	inode = fsnotify_detach_connector_from_object(conn);
++	objp = fsnotify_detach_connector_from_object(conn, &type);
+ 	spin_unlock(&conn->lock);
+ 	if (old_mark)
+ 		fsnotify_put_mark(old_mark);
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ }
+ 
+ /*
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index dfd73a4616ce..3437da437099 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -767,6 +767,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 	smaps_walk.private = mss;
+ 
+ #ifdef CONFIG_SHMEM
++	/* In case of smaps_rollup, reset the value from previous vma */
++	mss->check_shmem_swap = false;
+ 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
+ 		/*
+ 		 * For shared or readonly shmem mappings we know that all
+@@ -782,7 +784,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 
+ 		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+ 					!(vma->vm_flags & VM_WRITE)) {
+-			mss->swap = shmem_swapped;
++			mss->swap += shmem_swapped;
+ 		} else {
+ 			mss->check_shmem_swap = true;
+ 			smaps_walk.pte_hole = smaps_pte_hole;
+diff --git a/include/crypto/speck.h b/include/crypto/speck.h
+deleted file mode 100644
+index 73cfc952d405..000000000000
+--- a/include/crypto/speck.h
++++ /dev/null
+@@ -1,62 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Common values for the Speck algorithm
+- */
+-
+-#ifndef _CRYPTO_SPECK_H
+-#define _CRYPTO_SPECK_H
+-
+-#include <linux/types.h>
+-
+-/* Speck128 */
+-
+-#define SPECK128_BLOCK_SIZE	16
+-
+-#define SPECK128_128_KEY_SIZE	16
+-#define SPECK128_128_NROUNDS	32
+-
+-#define SPECK128_192_KEY_SIZE	24
+-#define SPECK128_192_NROUNDS	33
+-
+-#define SPECK128_256_KEY_SIZE	32
+-#define SPECK128_256_NROUNDS	34
+-
+-struct speck128_tfm_ctx {
+-	u64 round_keys[SPECK128_256_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keysize);
+-
+-/* Speck64 */
+-
+-#define SPECK64_BLOCK_SIZE	8
+-
+-#define SPECK64_96_KEY_SIZE	12
+-#define SPECK64_96_NROUNDS	26
+-
+-#define SPECK64_128_KEY_SIZE	16
+-#define SPECK64_128_NROUNDS	27
+-
+-struct speck64_tfm_ctx {
+-	u32 round_keys[SPECK64_128_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keysize);
+-
+-#endif /* _CRYPTO_SPECK_H */
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index a57a8aa90ffb..2b0d02458a18 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -153,6 +153,17 @@ struct __drm_planes_state {
+ struct __drm_crtcs_state {
+ 	struct drm_crtc *ptr;
+ 	struct drm_crtc_state *state, *old_state, *new_state;
++
++	/**
++	 * @commit:
++	 *
++	 * A reference to the CRTC commit object that is kept for use by
++	 * drm_atomic_helper_wait_for_flip_done() after
++	 * drm_atomic_helper_commit_hw_done() is called. This ensures that a
++	 * concurrent commit won't free a commit object that is still in use.
++	 */
++	struct drm_crtc_commit *commit;
++
+ 	s32 __user *out_fence_ptr;
+ 	u64 last_vblank_count;
+ };
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index c68acc47da57..47041c7fed28 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -103,6 +103,9 @@ typedef struct compat_sigaltstack {
+ 	compat_size_t			ss_size;
+ } compat_stack_t;
+ #endif
++#ifndef COMPAT_MINSIGSTKSZ
++#define COMPAT_MINSIGSTKSZ	MINSIGSTKSZ
++#endif
+ 
+ #define compat_jiffies_to_clock_t(x)	\
+ 		(((unsigned long)(x) * COMPAT_USER_HZ) / HZ)
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index e73363bd8646..cf23c128ac46 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1416,6 +1416,9 @@ struct super_block {
+ 	/* Number of inodes with nlink == 0 but still referenced */
+ 	atomic_long_t s_remove_count;
+ 
++	/* Pending fsnotify inode refs */
++	atomic_long_t s_fsnotify_inode_refs;
++
+ 	/* Being remounted read-only */
+ 	int s_readonly_remount;
+ 
+diff --git a/include/linux/hdmi.h b/include/linux/hdmi.h
+index d271ff23984f..4f3febc0f971 100644
+--- a/include/linux/hdmi.h
++++ b/include/linux/hdmi.h
+@@ -101,8 +101,8 @@ enum hdmi_extended_colorimetry {
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_601,
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_709,
+ 	HDMI_EXTENDED_COLORIMETRY_S_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB,
++	HDMI_EXTENDED_COLORIMETRY_OPYCC_601,
++	HDMI_EXTENDED_COLORIMETRY_OPRGB,
+ 
+ 	/* The following EC values are only defined in CEA-861-F. */
+ 	HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM,
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index 3c5200137b24..42ba31da534f 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -36,7 +36,7 @@ enum siginfo_layout {
+ 	SIL_SYS,
+ };
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code);
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
+ 
+ /*
+  * Define some primitives to manipulate sigset_t.
+diff --git a/include/linux/tc.h b/include/linux/tc.h
+index f92511e57cdb..a60639f37963 100644
+--- a/include/linux/tc.h
++++ b/include/linux/tc.h
+@@ -84,6 +84,7 @@ struct tc_dev {
+ 					   device. */
+ 	struct device	dev;		/* Generic device interface. */
+ 	struct resource	resource;	/* Address space of this device. */
++	u64		dma_mask;	/* DMA addressable range. */
+ 	char		vendor[9];
+ 	char		name[9];
+ 	char		firmware[9];
+diff --git a/include/media/cec.h b/include/media/cec.h
+index 580ab1042898..71cc0272b053 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -63,7 +63,6 @@ struct cec_data {
+ 	struct delayed_work work;
+ 	struct completion c;
+ 	u8 attempts;
+-	bool new_initiator;
+ 	bool blocking;
+ 	bool completed;
+ };
+@@ -174,6 +173,7 @@ struct cec_adapter {
+ 	bool is_configuring;
+ 	bool is_configured;
+ 	bool cec_pin_is_high;
++	u8 last_initiator;
+ 	u32 monitor_all_cnt;
+ 	u32 monitor_pin_cnt;
+ 	u32 follower_cnt;
+@@ -451,4 +451,74 @@ static inline void cec_phys_addr_invalidate(struct cec_adapter *adap)
+ 	cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);
+ }
+ 
++/**
++ * cec_get_edid_spa_location() - find location of the Source Physical Address
++ *
++ * @edid: the EDID
++ * @size: the size of the EDID
++ *
++ * This EDID is expected to be CEA-861 compliant, which means that there
++ * are at least two blocks and one or more of the extension blocks are
++ * CEA-861 blocks.
++ *
++ * The returned location is guaranteed to be <= size-2.
++ *
++ * This is an inline function since it is used by both CEC and V4L2.
++ * Ideally this would go in a module shared by both, but it is overkill to do
++ * that for just a single function.
++ */
++static inline unsigned int cec_get_edid_spa_location(const u8 *edid,
++						     unsigned int size)
++{
++	unsigned int blocks = size / 128;
++	unsigned int block;
++	u8 d;
++
++	/* Sanity check: at least 2 blocks and a multiple of the block size */
++	if (blocks < 2 || size % 128)
++		return 0;
++
++	/*
++	 * If the EDID reports fewer extension blocks than its size implies,
++	 * then update 'blocks'. Reporting more blocks than fit is allowed,
++	 * since some hardware can only read e.g. 256 bytes of the EDID, even
++	 * though more blocks are present. The first CEA-861 extension block
++	 * should normally be in block 1 anyway.
++	 */
++	if (edid[0x7e] + 1 < blocks)
++		blocks = edid[0x7e] + 1;
++
++	for (block = 1; block < blocks; block++) {
++		unsigned int offset = block * 128;
++
++		/* Skip any non-CEA-861 extension blocks */
++		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
++			continue;
++
++		/* search Vendor Specific Data Block (tag 3) */
++		d = edid[offset + 2] & 0x7f;
++		/* Check if there are Data Blocks */
++		if (d <= 4)
++			continue;
++		if (d > 4) {
++			unsigned int i = offset + 4;
++			unsigned int end = offset + d;
++
++			/* Note: 'end' is always < 'size' */
++			do {
++				u8 tag = edid[i] >> 5;
++				u8 len = edid[i] & 0x1f;
++
++				if (tag == 3 && len >= 5 && i + len <= end &&
++				    edid[i + 1] == 0x03 &&
++				    edid[i + 2] == 0x0c &&
++				    edid[i + 3] == 0x00)
++					return i + 4;
++				i += len + 1;
++			} while (i < end);
++		}
++	}
++	return 0;
++}
++
+ #endif /* _MEDIA_CEC_H */
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 6c003995347a..59185fbbd202 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -1296,21 +1296,27 @@ struct ib_qp_attr {
+ };
+ 
+ enum ib_wr_opcode {
+-	IB_WR_RDMA_WRITE,
+-	IB_WR_RDMA_WRITE_WITH_IMM,
+-	IB_WR_SEND,
+-	IB_WR_SEND_WITH_IMM,
+-	IB_WR_RDMA_READ,
+-	IB_WR_ATOMIC_CMP_AND_SWP,
+-	IB_WR_ATOMIC_FETCH_AND_ADD,
+-	IB_WR_LSO,
+-	IB_WR_SEND_WITH_INV,
+-	IB_WR_RDMA_READ_WITH_INV,
+-	IB_WR_LOCAL_INV,
+-	IB_WR_REG_MR,
+-	IB_WR_MASKED_ATOMIC_CMP_AND_SWP,
+-	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++	/* These are shared with userspace */
++	IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE,
++	IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM,
++	IB_WR_SEND = IB_UVERBS_WR_SEND,
++	IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM,
++	IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
++	IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
++	IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
++	IB_WR_LSO = IB_UVERBS_WR_TSO,
++	IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
++	IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
++	IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV,
++	IB_WR_MASKED_ATOMIC_CMP_AND_SWP =
++		IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
++	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
++		IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++
++	/* These are kernel only and can not be issued by userspace */
++	IB_WR_REG_MR = 0x20,
+ 	IB_WR_REG_SIG_MR,
++
+ 	/* reserve values for low level drivers' internal use.
+ 	 * These values will not be used at all in the ib core layer.
+ 	 */
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index 20fe091b7e96..bc2a1b98d9dd 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -152,10 +152,13 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ #define CEC_TX_STATUS_LOW_DRIVE		(1 << 3)
+ #define CEC_TX_STATUS_ERROR		(1 << 4)
+ #define CEC_TX_STATUS_MAX_RETRIES	(1 << 5)
++#define CEC_TX_STATUS_ABORTED		(1 << 6)
++#define CEC_TX_STATUS_TIMEOUT		(1 << 7)
+ 
+ #define CEC_RX_STATUS_OK		(1 << 0)
+ #define CEC_RX_STATUS_TIMEOUT		(1 << 1)
+ #define CEC_RX_STATUS_FEATURE_ABORT	(1 << 2)
++#define CEC_RX_STATUS_ABORTED		(1 << 3)
+ 
+ static inline int cec_msg_status_is_ok(const struct cec_msg *msg)
+ {
+diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
+index 73e01918f996..a441ea1bfe6d 100644
+--- a/include/uapi/linux/fs.h
++++ b/include/uapi/linux/fs.h
+@@ -279,8 +279,8 @@ struct fsxattr {
+ #define FS_ENCRYPTION_MODE_AES_256_CTS		4
+ #define FS_ENCRYPTION_MODE_AES_128_CBC		5
+ #define FS_ENCRYPTION_MODE_AES_128_CTS		6
+-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7
+-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8
++#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7 /* Removed, do not use. */
++#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8 /* Removed, do not use. */
+ 
+ struct fscrypt_policy {
+ 	__u8 version;
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 7e27070b9440..2f2c43d633c5 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -128,37 +128,31 @@ enum {
+ 
+ static inline const char *nvdimm_bus_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_ARS_CAP] = "ars_cap",
+-		[ND_CMD_ARS_START] = "ars_start",
+-		[ND_CMD_ARS_STATUS] = "ars_status",
+-		[ND_CMD_CLEAR_ERROR] = "clear_error",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_ARS_CAP:		return "ars_cap";
++	case ND_CMD_ARS_START:		return "ars_start";
++	case ND_CMD_ARS_STATUS:		return "ars_status";
++	case ND_CMD_CLEAR_ERROR:	return "clear_error";
++	case ND_CMD_CALL:		return "cmd_call";
++	default:			return "unknown";
++	}
+ }
+ 
+ static inline const char *nvdimm_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_SMART] = "smart",
+-		[ND_CMD_SMART_THRESHOLD] = "smart_thresh",
+-		[ND_CMD_DIMM_FLAGS] = "flags",
+-		[ND_CMD_GET_CONFIG_SIZE] = "get_size",
+-		[ND_CMD_GET_CONFIG_DATA] = "get_data",
+-		[ND_CMD_SET_CONFIG_DATA] = "set_data",
+-		[ND_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
+-		[ND_CMD_VENDOR_EFFECT_LOG] = "effect_log",
+-		[ND_CMD_VENDOR] = "vendor",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_SMART:			return "smart";
++	case ND_CMD_SMART_THRESHOLD:		return "smart_thresh";
++	case ND_CMD_DIMM_FLAGS:			return "flags";
++	case ND_CMD_GET_CONFIG_SIZE:		return "get_size";
++	case ND_CMD_GET_CONFIG_DATA:		return "get_data";
++	case ND_CMD_SET_CONFIG_DATA:		return "set_data";
++	case ND_CMD_VENDOR_EFFECT_LOG_SIZE:	return "effect_size";
++	case ND_CMD_VENDOR_EFFECT_LOG:		return "effect_log";
++	case ND_CMD_VENDOR:			return "vendor";
++	case ND_CMD_CALL:			return "cmd_call";
++	default:				return "unknown";
++	}
+ }
+ 
+ #define ND_IOCTL 'N'
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 600877be5c22..082dc1439a50 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -225,8 +225,8 @@ enum v4l2_colorspace {
+ 	/* For RGB colorspaces such as produces by most webcams. */
+ 	V4L2_COLORSPACE_SRGB          = 8,
+ 
+-	/* AdobeRGB colorspace */
+-	V4L2_COLORSPACE_ADOBERGB      = 9,
++	/* opRGB colorspace */
++	V4L2_COLORSPACE_OPRGB         = 9,
+ 
+ 	/* BT.2020 colorspace, used for UHDTV. */
+ 	V4L2_COLORSPACE_BT2020        = 10,
+@@ -258,7 +258,7 @@ enum v4l2_xfer_func {
+ 	 *
+ 	 * V4L2_COLORSPACE_SRGB, V4L2_COLORSPACE_JPEG: V4L2_XFER_FUNC_SRGB
+ 	 *
+-	 * V4L2_COLORSPACE_ADOBERGB: V4L2_XFER_FUNC_ADOBERGB
++	 * V4L2_COLORSPACE_OPRGB: V4L2_XFER_FUNC_OPRGB
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE240M: V4L2_XFER_FUNC_SMPTE240M
+ 	 *
+@@ -269,7 +269,7 @@ enum v4l2_xfer_func {
+ 	V4L2_XFER_FUNC_DEFAULT     = 0,
+ 	V4L2_XFER_FUNC_709         = 1,
+ 	V4L2_XFER_FUNC_SRGB        = 2,
+-	V4L2_XFER_FUNC_ADOBERGB    = 3,
++	V4L2_XFER_FUNC_OPRGB       = 3,
+ 	V4L2_XFER_FUNC_SMPTE240M   = 4,
+ 	V4L2_XFER_FUNC_NONE        = 5,
+ 	V4L2_XFER_FUNC_DCI_P3      = 6,
+@@ -281,7 +281,7 @@ enum v4l2_xfer_func {
+  * This depends on the colorspace.
+  */
+ #define V4L2_MAP_XFER_FUNC_DEFAULT(colsp) \
+-	((colsp) == V4L2_COLORSPACE_ADOBERGB ? V4L2_XFER_FUNC_ADOBERGB : \
++	((colsp) == V4L2_COLORSPACE_OPRGB ? V4L2_XFER_FUNC_OPRGB : \
+ 	 ((colsp) == V4L2_COLORSPACE_SMPTE240M ? V4L2_XFER_FUNC_SMPTE240M : \
+ 	  ((colsp) == V4L2_COLORSPACE_DCI_P3 ? V4L2_XFER_FUNC_DCI_P3 : \
+ 	   ((colsp) == V4L2_COLORSPACE_RAW ? V4L2_XFER_FUNC_NONE : \
+@@ -295,7 +295,7 @@ enum v4l2_ycbcr_encoding {
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_470_SYSTEM_M,
+ 	 * V4L2_COLORSPACE_470_SYSTEM_BG, V4L2_COLORSPACE_SRGB,
+-	 * V4L2_COLORSPACE_ADOBERGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
++	 * V4L2_COLORSPACE_OPRGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
+ 	 *
+ 	 * V4L2_COLORSPACE_REC709 and V4L2_COLORSPACE_DCI_P3: V4L2_YCBCR_ENC_709
+ 	 *
+@@ -382,6 +382,17 @@ enum v4l2_quantization {
+ 	 (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
+ 	 V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE))
+ 
++/*
++ * Deprecated names for opRGB colorspace (IEC 61966-2-5)
++ *
++ * WARNING: Please don't use these deprecated defines in your code, as
++ * there is a chance we have to remove them in the future.
++ */
++#ifndef __KERNEL__
++#define V4L2_COLORSPACE_ADOBERGB V4L2_COLORSPACE_OPRGB
++#define V4L2_XFER_FUNC_ADOBERGB  V4L2_XFER_FUNC_OPRGB
++#endif
++
+ enum v4l2_priority {
+ 	V4L2_PRIORITY_UNSET       = 0,  /* not initialized */
+ 	V4L2_PRIORITY_BACKGROUND  = 1,
+diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
+index 4f9991de8e3a..8345ca799ad8 100644
+--- a/include/uapi/rdma/ib_user_verbs.h
++++ b/include/uapi/rdma/ib_user_verbs.h
+@@ -762,10 +762,28 @@ struct ib_uverbs_sge {
+ 	__u32 lkey;
+ };
+ 
++enum ib_uverbs_wr_opcode {
++	IB_UVERBS_WR_RDMA_WRITE = 0,
++	IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1,
++	IB_UVERBS_WR_SEND = 2,
++	IB_UVERBS_WR_SEND_WITH_IMM = 3,
++	IB_UVERBS_WR_RDMA_READ = 4,
++	IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5,
++	IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6,
++	IB_UVERBS_WR_LOCAL_INV = 7,
++	IB_UVERBS_WR_BIND_MW = 8,
++	IB_UVERBS_WR_SEND_WITH_INV = 9,
++	IB_UVERBS_WR_TSO = 10,
++	IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
++	IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
++	IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
++	/* Review enum ib_wr_opcode before modifying this */
++};
++
+ struct ib_uverbs_send_wr {
+ 	__aligned_u64 wr_id;
+ 	__u32 num_sge;
+-	__u32 opcode;
++	__u32 opcode;		/* see enum ib_uverbs_wr_opcode */
+ 	__u32 send_flags;
+ 	union {
+ 		__be32 imm_data;
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index c373e887c066..9795d75b09b2 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -13,7 +13,7 @@
+ #include <linux/log2.h>
+ #include <linux/spinlock_types.h>
+ 
+-void foo(void)
++int main(void)
+ {
+ 	/* The enum constants to put into include/generated/bounds.h */
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+@@ -23,4 +23,6 @@ void foo(void)
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
++
++	return 0;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index a31a1ba0f8ea..0f5d2e66cd6b 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -683,6 +683,17 @@ err_put:
+ 	return err;
+ }
+ 
++static void maybe_wait_bpf_programs(struct bpf_map *map)
++{
++	/* Wait for any running BPF programs to complete so that
++	 * userspace, when we return to it, knows that all programs
++	 * that could be running use the new map value.
++	 */
++	if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
++	    map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
++		synchronize_rcu();
++}
++
+ #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
+ 
+ static int map_update_elem(union bpf_attr *attr)
+@@ -769,6 +780,7 @@ static int map_update_elem(union bpf_attr *attr)
+ 	}
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ free_value:
+ 	kfree(value);
+@@ -821,6 +833,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ 	rcu_read_unlock();
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ 	kfree(key);
+ err_put:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b000686fa1a1..d565ec6af97c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -553,7 +553,9 @@ static void __mark_reg_not_init(struct bpf_reg_state *reg);
+  */
+ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
+ {
+-	reg->id = 0;
++	/* Clear id, off, and union(map_ptr, range) */
++	memset(((u8 *)reg) + sizeof(reg->type), 0,
++	       offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
+ 	reg->var_off = tnum_const(imm);
+ 	reg->smin_value = (s64)imm;
+ 	reg->smax_value = (s64)imm;
+@@ -572,7 +574,6 @@ static void __mark_reg_known_zero(struct bpf_reg_state *reg)
+ static void __mark_reg_const_zero(struct bpf_reg_state *reg)
+ {
+ 	__mark_reg_known(reg, 0);
+-	reg->off = 0;
+ 	reg->type = SCALAR_VALUE;
+ }
+ 
+@@ -683,9 +684,12 @@ static void __mark_reg_unbounded(struct bpf_reg_state *reg)
+ /* Mark a register as having a completely unknown (scalar) value. */
+ static void __mark_reg_unknown(struct bpf_reg_state *reg)
+ {
++	/*
++	 * Clear type, id, off, and union(map_ptr, range) and
++	 * padding between 'type' and union
++	 */
++	memset(reg, 0, offsetof(struct bpf_reg_state, var_off));
+ 	reg->type = SCALAR_VALUE;
+-	reg->id = 0;
+-	reg->off = 0;
+ 	reg->var_off = tnum_unknown;
+ 	reg->frameno = 0;
+ 	__mark_reg_unbounded(reg);
+@@ -1726,9 +1730,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			else
+ 				mark_reg_known_zero(env, regs,
+ 						    value_regno);
+-			regs[value_regno].id = 0;
+-			regs[value_regno].off = 0;
+-			regs[value_regno].range = 0;
+ 			regs[value_regno].type = reg_type;
+ 		}
+ 
+@@ -2549,7 +2550,6 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
+ 		regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
+ 		/* There is no offset yet applied, variable or fixed */
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].off = 0;
+ 		/* remember map_ptr, so that check_map_access()
+ 		 * can check 'value_size' boundary of memory access
+ 		 * to map element returned from bpf_map_lookup_elem()
+diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
+index b3c557476a8d..c98501a04742 100644
+--- a/kernel/bpf/xskmap.c
++++ b/kernel/bpf/xskmap.c
+@@ -191,11 +191,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
+ 	sock_hold(sock->sk);
+ 
+ 	old_xs = xchg(&m->xsk_map[i], xs);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	sockfd_put(sock);
+ 	return 0;
+@@ -211,11 +208,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
+ 		return -EINVAL;
+ 
+ 	old_xs = xchg(&m->xsk_map[k], NULL);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 517907b082df..3ec5a37e3068 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2033,6 +2033,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
+ 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+ }
+ 
++/*
++ * Architectures that need SMT-specific errata handling during SMT hotplug
++ * should override this.
++ */
++void __weak arch_smt_update(void) { };
++
+ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ {
+ 	int cpu, ret = 0;
+@@ -2059,8 +2065,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 		 */
+ 		cpuhp_offline_cpu_device(cpu);
+ 	}
+-	if (!ret)
++	if (!ret) {
+ 		cpu_smt_control = ctrlval;
++		arch_smt_update();
++	}
+ 	cpu_maps_update_done();
+ 	return ret;
+ }
+@@ -2071,6 +2079,7 @@ static int cpuhp_smt_enable(void)
+ 
+ 	cpu_maps_update_begin();
+ 	cpu_smt_control = CPU_SMT_ENABLED;
++	arch_smt_update();
+ 	for_each_present_cpu(cpu) {
+ 		/* Skip online CPUs and CPUs on offline nodes */
+ 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
+index d987dcd1bd56..54a33337680f 100644
+--- a/kernel/dma/contiguous.c
++++ b/kernel/dma/contiguous.c
+@@ -49,7 +49,11 @@ static phys_addr_t limit_cmdline;
+ 
+ static int __init early_cma(char *p)
+ {
+-	pr_debug("%s(%s)\n", __func__, p);
++	if (!p) {
++		pr_err("Config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size_cmdline = memparse(p, &p);
+ 	if (*p != '@')
+ 		return 0;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 9a8b7ba9aa88..c4e31f44a0ff 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -920,6 +920,9 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 
+ 	local_bh_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	local_bh_enable();
+ 	return ret;
+@@ -936,6 +939,9 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
+ 	irqreturn_t ret;
+ 
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	return ret;
+ }
+@@ -1013,8 +1019,6 @@ static int irq_thread(void *data)
+ 		irq_thread_check_affinity(desc, action);
+ 
+ 		action_ret = handler_fn(desc, action);
+-		if (action_ret == IRQ_HANDLED)
+-			atomic_inc(&desc->threads_handled);
+ 		if (action_ret == IRQ_WAKE_THREAD)
+ 			irq_wake_secondary(desc, action);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f3183ad10d96..07f912b765db 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -700,9 +700,10 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
+ }
+ 
+ /* Cancel unoptimizing for reusing */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
+ 	struct optimized_kprobe *op;
++	int ret;
+ 
+ 	BUG_ON(!kprobe_unused(ap));
+ 	/*
+@@ -714,8 +715,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+-	BUG_ON(!kprobe_optready(ap));
++	ret = kprobe_optready(ap);
++	if (ret)
++		return ret;
++
+ 	optimize_kprobe(ap);
++	return 0;
+ }
+ 
+ /* Remove optimized instructions */
+@@ -940,11 +945,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
+ #define kprobe_disarmed(p)			kprobe_disabled(p)
+ #define wait_for_kprobe_optimizer()		do {} while (0)
+ 
+-/* There should be no unused kprobes can be reused without optimization */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
++	/*
++	 * If the optimized kprobe is NOT supported, the aggr kprobe is
++	 * released at the same time that the last aggregated kprobe is
++	 * unregistered.
++	 * Thus there should be no chance to reuse unused kprobe.
++	 */
+ 	printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
+-	BUG_ON(kprobe_unused(ap));
++	return -EINVAL;
+ }
+ 
+ static void free_aggr_kprobe(struct kprobe *p)
+@@ -1343,9 +1353,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
+ 			goto out;
+ 		}
+ 		init_aggr_kprobe(ap, orig_p);
+-	} else if (kprobe_unused(ap))
++	} else if (kprobe_unused(ap)) {
+ 		/* This probe is going to die. Rescue it */
+-		reuse_unused_kprobe(ap);
++		ret = reuse_unused_kprobe(ap);
++		if (ret)
++			goto out;
++	}
+ 
+ 	if (kprobe_gone(ap)) {
+ 		/*
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 5fa4d3138bf1..aa6ebb799f16 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -4148,7 +4148,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+@@ -4168,7 +4168,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 1d1513215c22..72de8cc5a13e 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1047,7 +1047,12 @@ static void __init log_buf_len_update(unsigned size)
+ /* save requested log_buf_len since it's too early to process it */
+ static int __init log_buf_len_setup(char *str)
+ {
+-	unsigned size = memparse(str, &str);
++	unsigned int size;
++
++	if (!str)
++		return -EINVAL;
++
++	size = memparse(str, &str);
+ 
+ 	log_buf_len_update(size);
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index b27b9509ea89..9e4f550e4797 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4321,7 +4321,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	 * put back on, and if we advance min_vruntime, we'll be placed back
+ 	 * further than we started -- ie. we'll be penalized.
+ 	 */
+-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
++	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
+ 		update_min_vruntime(cfs_rq);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 8d8a940422a8..dce9859f6547 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1009,7 +1009,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 
+ 	result = TRACE_SIGNAL_IGNORED;
+ 	if (!prepare_signal(sig, t,
+-			from_ancestor_ns || (info == SEND_SIG_FORCED)))
++			from_ancestor_ns || (info == SEND_SIG_PRIV) || (info == SEND_SIG_FORCED)))
+ 		goto ret;
+ 
+ 	pending = group ? &t->signal->shared_pending : &t->pending;
+@@ -2804,7 +2804,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, compat_sigset_t __user *, uset,
+ }
+ #endif
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code)
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code)
+ {
+ 	enum siginfo_layout layout = SIL_KILL;
+ 	if ((si_code > SI_USER) && (si_code < SI_KERNEL)) {
+@@ -3417,7 +3417,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
+ }
+ 
+ static int
+-do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
++do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
++		size_t min_ss_size)
+ {
+ 	struct task_struct *t = current;
+ 
+@@ -3447,7 +3448,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
+ 			ss_size = 0;
+ 			ss_sp = NULL;
+ 		} else {
+-			if (unlikely(ss_size < MINSIGSTKSZ))
++			if (unlikely(ss_size < min_ss_size))
+ 				return -ENOMEM;
+ 		}
+ 
+@@ -3465,7 +3466,8 @@ SYSCALL_DEFINE2(sigaltstack,const stack_t __user *,uss, stack_t __user *,uoss)
+ 	if (uss && copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+ 	err = do_sigaltstack(uss ? &new : NULL, uoss ? &old : NULL,
+-			      current_user_stack_pointer());
++			      current_user_stack_pointer(),
++			      MINSIGSTKSZ);
+ 	if (!err && uoss && copy_to_user(uoss, &old, sizeof(stack_t)))
+ 		err = -EFAULT;
+ 	return err;
+@@ -3476,7 +3478,8 @@ int restore_altstack(const stack_t __user *uss)
+ 	stack_t new;
+ 	if (copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+-	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer());
++	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer(),
++			     MINSIGSTKSZ);
+ 	/* squash all but EFAULT for now */
+ 	return 0;
+ }
+@@ -3510,7 +3513,8 @@ static int do_compat_sigaltstack(const compat_stack_t __user *uss_ptr,
+ 		uss.ss_size = uss32.ss_size;
+ 	}
+ 	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss,
+-			     compat_user_stack_pointer());
++			     compat_user_stack_pointer(),
++			     COMPAT_MINSIGSTKSZ);
+ 	if (ret >= 0 && uoss_ptr)  {
+ 		compat_stack_t old;
+ 		memset(&old, 0, sizeof(old));
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 6c78bc2b7fff..b3482eed270c 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1072,8 +1072,10 @@ static int create_synth_event(int argc, char **argv)
+ 		event = NULL;
+ 		ret = -EEXIST;
+ 		goto out;
+-	} else if (delete_event)
++	} else if (delete_event) {
++		ret = -ENOENT;
+ 		goto out;
++	}
+ 
+ 	if (argc < 2) {
+ 		ret = -EINVAL;
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index e5222b5fb4fe..923414a246e9 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -974,10 +974,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
+ 		goto out;
+ 
+-	ret = sort_idmaps(&new_map);
+-	if (ret < 0)
+-		goto out;
+-
+ 	ret = -EPERM;
+ 	/* Map the lower ids from the parent user namespace to the
+ 	 * kernel global id space.
+@@ -1004,6 +1000,14 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 		e->lower_first = lower_first;
+ 	}
+ 
++	/*
++	 * If we want to use binary search for lookup, this clones the extent
++	 * array and sorts both copies.
++	 */
++	ret = sort_idmaps(&new_map);
++	if (ret < 0)
++		goto out;
++
+ 	/* Install the map */
+ 	if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
+ 		memcpy(map->extent, new_map.extent,
+diff --git a/lib/debug_locks.c b/lib/debug_locks.c
+index 96c4c633d95e..124fdf238b3d 100644
+--- a/lib/debug_locks.c
++++ b/lib/debug_locks.c
+@@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
+  */
+ int debug_locks_off(void)
+ {
+-	if (__debug_locks_off()) {
++	if (debug_locks && __debug_locks_off()) {
+ 		if (!debug_locks_silent) {
+ 			console_verbose();
+ 			return 1;
+diff --git a/mm/hmm.c b/mm/hmm.c
+index f9d1d89dec4d..49e3db686348 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 	spin_lock_init(&hmm->lock);
+ 	hmm->mm = mm;
+ 
+-	/*
+-	 * We should only get here if hold the mmap_sem in write mode ie on
+-	 * registration of first mirror through hmm_mirror_register()
+-	 */
+-	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
+-	if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) {
+-		kfree(hmm);
+-		return NULL;
+-	}
+-
+ 	spin_lock(&mm->page_table_lock);
+ 	if (!mm->hmm)
+ 		mm->hmm = hmm;
+@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 		cleanup = true;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	if (cleanup) {
+-		mmu_notifier_unregister(&hmm->mmu_notifier, mm);
+-		kfree(hmm);
+-	}
++	if (cleanup)
++		goto error;
++
++	/*
++	 * We should only get here if we hold the mmap_sem in write mode,
++	 * i.e. on registration of the first mirror through hmm_mirror_register()
++	 */
++	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
++	if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
++		goto error_mm;
+ 
+ 	return mm->hmm;
++
++error_mm:
++	spin_lock(&mm->page_table_lock);
++	if (mm->hmm == hmm)
++		mm->hmm = NULL;
++	spin_unlock(&mm->page_table_lock);
++error:
++	kfree(hmm);
++	return NULL;
+ }
+ 
+ void hmm_mm_destroy(struct mm_struct *mm)
+@@ -275,12 +280,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
+ 	if (!should_unregister || mm == NULL)
+ 		return;
+ 
++	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
++
+ 	spin_lock(&mm->page_table_lock);
+ 	if (mm->hmm == hmm)
+ 		mm->hmm = NULL;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
+ 	kfree(hmm);
+ }
+ EXPORT_SYMBOL(hmm_mirror_unregister);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index f469315a6a0f..5b38fbef9441 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3678,6 +3678,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
+ 		return err;
+ 	ClearPagePrivate(page);
+ 
++	/*
++	 * set page dirty so that it will not be removed from cache/file
++	 * by non-hugetlbfs specific code paths.
++	 */
++	set_page_dirty(page);
++
+ 	spin_lock(&inode->i_lock);
+ 	inode->i_blocks += blocks_per_huge_page(h);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index ae3c2a35d61b..11df03e71288 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
+ 			if (!is_swap_pte(*pvmw->pte))
+ 				return false;
+ 		} else {
+-			if (!pte_present(*pvmw->pte))
++			/*
++			 * We get here when we are trying to unmap a private
++			 * device page from the process address space. Such
++			 * a page is not CPU accessible and thus is mapped as
++			 * a special swap entry; nonetheless it still counts
++			 * as a valid regular mapping for the page (and is
++			 * accounted as such in the page maps count).
++			 *
++			 * So handle this special case as if it were a normal
++			 * page mapping, i.e. lock the CPU page table and
++			 * return true.
++			 *
++			 * For more details on device private memory see HMM
++			 * (include/linux/hmm.h or mm/hmm.c).
++			 */
++			if (is_swap_pte(*pvmw->pte)) {
++				swp_entry_t entry;
++
++				/* Handle un-addressable ZONE_DEVICE memory */
++				entry = pte_to_swp_entry(*pvmw->pte);
++				if (!is_device_private_entry(entry))
++					return false;
++			} else if (!pte_present(*pvmw->pte))
+ 				return false;
+ 		}
+ 	}
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 5e4f04004a49..7bf833598615 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -106,6 +106,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		iterate_fd(p->files, 0, update_classid_sock,
+ 			   (void *)(unsigned long)cs->classid);
+ 		task_unlock(p);
++		cond_resched();
+ 	}
+ 	css_task_iter_end(&it);
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 82178cc69c96..777fa3b7fb13 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const struct cipso_v4_doi *doi_def,
+  *
+  * Description:
+  * Parse the packet's IP header looking for a CIPSO option.  Returns a pointer
+- * to the start of the CIPSO option on success, NULL if one if not found.
++ * to the start of the CIPSO option on success, NULL if one is not found.
+  *
+  */
+ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 	int optlen;
+ 	int taglen;
+ 
+-	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) {
++	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) {
+ 		switch (optptr[0]) {
+-		case IPOPT_CIPSO:
+-			return optptr;
+ 		case IPOPT_END:
+ 			return NULL;
+ 		case IPOPT_NOOP:
+@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 		default:
+ 			taglen = optptr[1];
+ 		}
++		if (!taglen || taglen > optlen)
++			return NULL;
++		if (optptr[0] == IPOPT_CIPSO)
++			return optptr;
++
+ 		optlen -= taglen;
+ 		optptr += taglen;
+ 	}
+diff --git a/net/netfilter/xt_nat.c b/net/netfilter/xt_nat.c
+index 8af9707f8789..ac91170fc8c8 100644
+--- a/net/netfilter/xt_nat.c
++++ b/net/netfilter/xt_nat.c
+@@ -216,6 +216,8 @@ static struct xt_target xt_nat_target_reg[] __read_mostly = {
+ 	{
+ 		.name		= "DNAT",
+ 		.revision	= 2,
++		.checkentry	= xt_nat_checkentry,
++		.destroy	= xt_nat_destroy,
+ 		.target		= xt_dnat_target_v2,
+ 		.targetsize	= sizeof(struct nf_nat_range2),
+ 		.table		= "nat",
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 57f71765febe..ce852f8c1d27 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1306,7 +1306,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 5185efb9027b..83ccd0221c98 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -989,7 +989,7 @@ static void call_xpt_users(struct svc_xprt *xprt)
+ 	spin_lock(&xprt->xpt_lock);
+ 	while (!list_empty(&xprt->xpt_users)) {
+ 		u = list_first_entry(&xprt->xpt_users, struct svc_xpt_user, list);
+-		list_del(&u->list);
++		list_del_init(&u->list);
+ 		u->callback(u);
+ 	}
+ 	spin_unlock(&xprt->xpt_lock);
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index a68180090554..b9827665ff35 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -248,6 +248,7 @@ static void
+ xprt_rdma_bc_close(struct rpc_xprt *xprt)
+ {
+ 	dprintk("svcrdma: %s: xprt %p\n", __func__, xprt);
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ static void
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 143ce2579ba9..98cbc7b060ba 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -468,6 +468,12 @@ xprt_rdma_close(struct rpc_xprt *xprt)
+ 		xprt->reestablish_timeout = 0;
+ 	xprt_disconnect_done(xprt);
+ 	rpcrdma_ep_disconnect(ep, ia);
++
++	/* Prepare @xprt for the next connection by reinitializing
++	 * its credit grant to one (see RFC 8166, Section 3.3.3).
++	 */
++	r_xprt->rx_buf.rb_credits = 1;
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ /**
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 4e937cd7c17d..661504042d30 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -744,6 +744,8 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ 	sk->sk_destruct = xsk_destruct;
+ 	sk_refcnt_debug_inc(sk);
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 	xs = xdp_sk(sk);
+ 	mutex_init(&xs->mutex);
+ 	spin_lock_init(&xs->tx_completion_lock);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 526e6814ed4b..1d2e0a90c0ca 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -625,9 +625,9 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 				break;
+ 		}
+ 		if (newpos)
+-			hlist_add_behind(&policy->bydst, newpos);
++			hlist_add_behind_rcu(&policy->bydst, newpos);
+ 		else
+-			hlist_add_head(&policy->bydst, chain);
++			hlist_add_head_rcu(&policy->bydst, chain);
+ 	}
+ 
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+@@ -766,9 +766,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ 			break;
+ 	}
+ 	if (newpos)
+-		hlist_add_behind(&policy->bydst, newpos);
++		hlist_add_behind_rcu(&policy->bydst, newpos);
+ 	else
+-		hlist_add_head(&policy->bydst, chain);
++		hlist_add_head_rcu(&policy->bydst, chain);
+ 	__xfrm_policy_link(policy, dir);
+ 
+ 	/* After previous checking, family can either be AF_INET or AF_INET6 */
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index ae9d5c766a3c..cfb8cc3b975e 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -42,14 +42,14 @@ static int __init default_canonical_fmt_setup(char *str)
+ __setup("ima_canonical_fmt", default_canonical_fmt_setup);
+ 
+ static int valid_policy = 1;
+-#define TMPBUFLEN 12
++
+ static ssize_t ima_show_htable_value(char __user *buf, size_t count,
+ 				     loff_t *ppos, atomic_long_t *val)
+ {
+-	char tmpbuf[TMPBUFLEN];
++	char tmpbuf[32];	/* greater than largest 'long' string value */
+ 	ssize_t len;
+ 
+-	len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val));
++	len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val));
+ 	return simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
+ }
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 2b5ee5fbd652..4680a217d0fa 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -1509,6 +1509,11 @@ static int selinux_genfs_get_sid(struct dentry *dentry,
+ 		}
+ 		rc = security_genfs_sid(&selinux_state, sb->s_type->name,
+ 					path, tclass, sid);
++		if (rc == -ENOENT) {
++			/* No match in policy, mark as unlabeled. */
++			*sid = SECINITSID_UNLABELED;
++			rc = 0;
++		}
+ 	}
+ 	free_page((unsigned long)buffer);
+ 	return rc;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8b6cd5a79bfa..a81d815c81f3 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -420,6 +420,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	struct smk_audit_info ad, *saip = NULL;
+ 	struct task_smack *tsp;
+ 	struct smack_known *tracer_known;
++	const struct cred *tracercred;
+ 
+ 	if ((mode & PTRACE_MODE_NOAUDIT) == 0) {
+ 		smk_ad_init(&ad, func, LSM_AUDIT_DATA_TASK);
+@@ -428,7 +429,8 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	}
+ 
+ 	rcu_read_lock();
+-	tsp = __task_cred(tracer)->security;
++	tracercred = __task_cred(tracer);
++	tsp = tracercred->security;
+ 	tracer_known = smk_of_task(tsp);
+ 
+ 	if ((mode & PTRACE_MODE_ATTACH) &&
+@@ -438,7 +440,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 			rc = 0;
+ 		else if (smack_ptrace_rule == SMACK_PTRACE_DRACONIAN)
+ 			rc = -EACCES;
+-		else if (capable(CAP_SYS_PTRACE))
++		else if (smack_privileged_cred(CAP_SYS_PTRACE, tracercred))
+ 			rc = 0;
+ 		else
+ 			rc = -EACCES;
+@@ -1840,6 +1842,7 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ {
+ 	struct smack_known *skp;
+ 	struct smack_known *tkp = smk_of_task(tsk->cred->security);
++	const struct cred *tcred;
+ 	struct file *file;
+ 	int rc;
+ 	struct smk_audit_info ad;
+@@ -1853,8 +1856,12 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ 	skp = file->f_security;
+ 	rc = smk_access(skp, tkp, MAY_DELIVER, NULL);
+ 	rc = smk_bu_note("sigiotask", skp, tkp, MAY_DELIVER, rc);
+-	if (rc != 0 && has_capability(tsk, CAP_MAC_OVERRIDE))
++
++	rcu_read_lock();
++	tcred = __task_cred(tsk);
++	if (rc != 0 && smack_privileged_cred(CAP_MAC_OVERRIDE, tcred))
+ 		rc = 0;
++	rcu_read_unlock();
+ 
+ 	smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
+ 	smk_ad_setfield_u_tsk(&ad, tsk);
+diff --git a/sound/pci/ca0106/ca0106.h b/sound/pci/ca0106/ca0106.h
+index 04402c14cb23..9847b669cf3c 100644
+--- a/sound/pci/ca0106/ca0106.h
++++ b/sound/pci/ca0106/ca0106.h
+@@ -582,7 +582,7 @@
+ #define SPI_PL_BIT_R_R		(2<<7)	/* right channel = right */
+ #define SPI_PL_BIT_R_C		(3<<7)	/* right channel = (L+R)/2 */
+ #define SPI_IZD_REG		2
+-#define SPI_IZD_BIT		(1<<4)	/* infinite zero detect */
++#define SPI_IZD_BIT		(0<<4)	/* infinite zero detect */
+ 
+ #define SPI_FMT_REG		3
+ #define SPI_FMT_BIT_RJ		(0<<0)	/* right justified mode */
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index a68e75b00ea3..53c3cd28bc99 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -160,6 +160,7 @@ struct azx {
+ 	unsigned int msi:1;
+ 	unsigned int probing:1; /* codec probing phase */
+ 	unsigned int snoop:1;
++	unsigned int uc_buffer:1; /* non-cached pages for stream buffers */
+ 	unsigned int align_buffer_size:1;
+ 	unsigned int region_requested:1;
+ 	unsigned int disabled:1; /* disabled by vga_switcheroo */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 28dc5e124995..6f6703e53a05 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -410,7 +410,7 @@ static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool
+ #ifdef CONFIG_SND_DMA_SGBUF
+ 	if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
+ 		struct snd_sg_buf *sgbuf = dmab->private_data;
+-		if (chip->driver_type == AZX_DRIVER_CMEDIA)
++		if (!chip->uc_buffer)
+ 			return; /* deal with only CORB/RIRB buffers */
+ 		if (on)
+ 			set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
+@@ -1636,6 +1636,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		dev_info(chip->card->dev, "Force to %s mode by module option\n",
+ 			 snoop ? "snoop" : "non-snoop");
+ 		chip->snoop = snoop;
++		chip->uc_buffer = !snoop;
+ 		return;
+ 	}
+ 
+@@ -1656,8 +1657,12 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		snoop = false;
+ 
+ 	chip->snoop = snoop;
+-	if (!snoop)
++	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
++		/* C-Media requires non-cached pages only for CORB/RIRB */
++		if (chip->driver_type != AZX_DRIVER_CMEDIA)
++			chip->uc_buffer = true;
++	}
+ }
+ 
+ static void azx_probe_work(struct work_struct *work)
+@@ -2096,7 +2101,7 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+ #ifdef CONFIG_X86
+ 	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
+ 	struct azx *chip = apcm->chip;
+-	if (!azx_snoop(chip) && chip->driver_type != AZX_DRIVER_CMEDIA)
++	if (chip->uc_buffer)
+ 		area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
+ #endif
+ }
+@@ -2215,8 +2220,12 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */
+ 	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x1028, 0x0497, "Dell Precision T3600", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	/* Note the P55A-UD3 and Z87-D3HP share the subsys id for the HDA dev */
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte P55A-UD3 / Z87-D3HP", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
+ 	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 1a8a2d440fbd..7d6c3cebb0e3 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -980,6 +980,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
++	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 08b6369f930b..23dd4bb026d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6799,6 +6799,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1a, 0x02a11040},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++		{0x14, 0x90170110},
++		{0x19, 0x02a11030},
++		{0x1a, 0x02a11040},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x02a11020},
+@@ -7690,6 +7696,8 @@ enum {
+ 	ALC662_FIXUP_ASUS_Nx50,
+ 	ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	ALC668_FIXUP_ASUS_Nx51,
++	ALC668_FIXUP_MIC_COEF,
++	ALC668_FIXUP_ASUS_G751,
+ 	ALC891_FIXUP_HEADSET_MODE,
+ 	ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
+ 	ALC662_FIXUP_ACER_VERITON,
+@@ -7959,6 +7967,23 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	},
++	[ALC668_FIXUP_MIC_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0xc3 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4000 },
++			{}
++		},
++	},
++	[ALC668_FIXUP_ASUS_G751] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x0421101f }, /* HP */
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_COEF
++	},
+ 	[ALC891_FIXUP_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode,
+@@ -8032,6 +8057,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
++	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+diff --git a/sound/soc/codecs/sta32x.c b/sound/soc/codecs/sta32x.c
+index d5035f2f2b2b..ce508b4cc85c 100644
+--- a/sound/soc/codecs/sta32x.c
++++ b/sound/soc/codecs/sta32x.c
+@@ -879,6 +879,9 @@ static int sta32x_probe(struct snd_soc_component *component)
+ 	struct sta32x_priv *sta32x = snd_soc_component_get_drvdata(component);
+ 	struct sta32x_platform_data *pdata = sta32x->pdata;
+ 	int i, ret = 0, thermal = 0;
++
++	sta32x->component = component;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(sta32x->supplies),
+ 				    sta32x->supplies);
+ 	if (ret != 0) {
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index fcdc716754b6..bde2effde861 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -2458,6 +2458,7 @@ static int skl_tplg_get_token(struct device *dev,
+ 
+ 	case SKL_TKN_U8_CORE_ID:
+ 		mconfig->core_id = tkn_elem->value;
++		break;
+ 
+ 	case SKL_TKN_U8_MOD_TYPE:
+ 		mconfig->m_type = tkn_elem->value;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 67b042738ed7..986151732d68 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -831,7 +831,7 @@ ifndef NO_JVMTI
+     JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
+   else
+     ifneq (,$(wildcard /usr/sbin/alternatives))
+-      JDIR=$(shell alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
++      JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
+     endif
+   endif
+   ifndef JDIR
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index c04dc7b53797..82a3c8be19ee 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -981,6 +981,7 @@ int cmd_report(int argc, const char **argv)
+ 			.id_index	 = perf_event__process_id_index,
+ 			.auxtrace_info	 = perf_event__process_auxtrace_info,
+ 			.auxtrace	 = perf_event__process_auxtrace,
++			.event_update	 = perf_event__process_event_update,
+ 			.feature	 = process_feature_event,
+ 			.ordered_events	 = true,
+ 			.ordering_requires_timestamps = true,
+diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+index d40498f2cb1e..635c09fda1d9 100644
+--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+@@ -188,7 +188,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -199,7 +199,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -210,7 +210,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -221,7 +221,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -232,7 +232,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -243,7 +243,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -254,7 +254,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -265,7 +265,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+index 16034bfd06dd..8755693d86c6 100644
+--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+@@ -187,7 +187,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -198,7 +198,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -209,7 +209,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -220,7 +220,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -231,7 +231,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -242,7 +242,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -253,7 +253,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -264,7 +264,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 3013ac8f83d0..cab7b0aea6ea 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -48,7 +48,7 @@ trace_libc_inet_pton_backtrace() {
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+-		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
+ 	esac
+ 
+diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
+index 0c8ecf0c78a4..6f3db78efe39 100644
+--- a/tools/perf/util/event.c
++++ b/tools/perf/util/event.c
+@@ -1074,6 +1074,7 @@ void *cpu_map_data__alloc(struct cpu_map *map, size_t *size, u16 *type, int *max
+ 	}
+ 
+ 	*size += sizeof(struct cpu_map_data);
++	*size = PERF_ALIGN(*size, sizeof(u64));
+ 	return zalloc(*size);
+ }
+ 
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 6324afba8fdd..86ad1389ff5a 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1078,6 +1078,9 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		attr->exclude_user   = 1;
+ 	}
+ 
++	if (evsel->own_cpus)
++		evsel->attr.read_format |= PERF_FORMAT_ID;
++
+ 	/*
+ 	 * Apply event specific term settings,
+ 	 * it overloads any global configuration.
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 3ba6a1742f91..02580f3ded1a 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -936,13 +936,14 @@ static void pmu_format_value(unsigned long *format, __u64 value, __u64 *v,
+ 
+ static __u64 pmu_format_max_value(const unsigned long *format)
+ {
+-	__u64 w = 0;
+-	int fbit;
+-
+-	for_each_set_bit(fbit, format, PERF_PMU_FORMAT_BITS)
+-		w |= (1ULL << fbit);
++	int w;
+ 
+-	return w;
++	w = bitmap_weight(format, PERF_PMU_FORMAT_BITS);
++	if (!w)
++		return 0;
++	if (w < 64)
++		return (1ULL << w) - 1;
++	return -1;
+ }
+ 
+ /*
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index 09d6746e6ec8..e767c4a9d4d2 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -85,6 +85,9 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ 	struct symbol *inline_sym;
+ 	char *demangled = NULL;
+ 
++	if (!funcname)
++		funcname = "??";
++
+ 	if (dso) {
+ 		demangled = dso__demangle_sym(dso, 0, funcname);
+ 		if (demangled)
+diff --git a/tools/perf/util/strbuf.c b/tools/perf/util/strbuf.c
+index 3d1cf5bf7f18..9005fbe0780e 100644
+--- a/tools/perf/util/strbuf.c
++++ b/tools/perf/util/strbuf.c
+@@ -98,19 +98,25 @@ static int strbuf_addv(struct strbuf *sb, const char *fmt, va_list ap)
+ 
+ 	va_copy(ap_saved, ap);
+ 	len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap);
+-	if (len < 0)
++	if (len < 0) {
++		va_end(ap_saved);
+ 		return len;
++	}
+ 	if (len > strbuf_avail(sb)) {
+ 		ret = strbuf_grow(sb, len);
+-		if (ret)
++		if (ret) {
++			va_end(ap_saved);
+ 			return ret;
++		}
+ 		len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
+ 		va_end(ap_saved);
+ 		if (len > strbuf_avail(sb)) {
+ 			pr_debug("this should not happen, your vsnprintf is broken");
++			va_end(ap_saved);
+ 			return -EINVAL;
+ 		}
+ 	}
++	va_end(ap_saved);
+ 	return strbuf_setlen(sb, sb->len + len);
+ }
+ 
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index 7b0ca7cbb7de..8ad8e755127b 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -531,12 +531,14 @@ struct tracing_data *tracing_data_get(struct list_head *pattrs,
+ 			 "/tmp/perf-XXXXXX");
+ 		if (!mkstemp(tdata->temp_file)) {
+ 			pr_debug("Can't make temp file");
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+ 		temp_fd = open(tdata->temp_file, O_RDWR);
+ 		if (temp_fd < 0) {
+ 			pr_debug("Can't read '%s'", tdata->temp_file);
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
+index 40b425949aa3..2d50e4384c72 100644
+--- a/tools/perf/util/trace-event-read.c
++++ b/tools/perf/util/trace-event-read.c
+@@ -349,9 +349,12 @@ static int read_event_files(struct pevent *pevent)
+ 		for (x=0; x < count; x++) {
+ 			size = read8(pevent);
+ 			ret = read_event_file(pevent, sys, size);
+-			if (ret)
++			if (ret) {
++				free(sys);
+ 				return ret;
++			}
+ 		}
++		free(sys);
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/power/cpupower/utils/cpufreq-info.c b/tools/power/cpupower/utils/cpufreq-info.c
+index df43cd45d810..ccd08dd00996 100644
+--- a/tools/power/cpupower/utils/cpufreq-info.c
++++ b/tools/power/cpupower/utils/cpufreq-info.c
+@@ -200,6 +200,8 @@ static int get_boost_mode(unsigned int cpu)
+ 		printf(_("    Boost States: %d\n"), b_states);
+ 		printf(_("    Total States: %d\n"), pstate_no);
+ 		for (i = 0; i < pstate_no; i++) {
++			if (!pstates[i])
++				continue;
+ 			if (i < b_states)
+ 				printf(_("    Pstate-Pb%d: %luMHz (boost state)"
+ 					 "\n"), i, pstates[i]);
+diff --git a/tools/power/cpupower/utils/helpers/amd.c b/tools/power/cpupower/utils/helpers/amd.c
+index bb41cdd0df6b..9607ada5b29a 100644
+--- a/tools/power/cpupower/utils/helpers/amd.c
++++ b/tools/power/cpupower/utils/helpers/amd.c
+@@ -33,7 +33,7 @@ union msr_pstate {
+ 		unsigned vid:8;
+ 		unsigned iddval:8;
+ 		unsigned idddiv:2;
+-		unsigned res1:30;
++		unsigned res1:31;
+ 		unsigned en:1;
+ 	} fam17h_bits;
+ 	unsigned long long val;
+@@ -119,6 +119,11 @@ int decode_pstates(unsigned int cpu, unsigned int cpu_family,
+ 		}
+ 		if (read_msr(cpu, MSR_AMD_PSTATE + i, &pstate.val))
+ 			return -1;
++		if ((cpu_family == 0x17) && (!pstate.fam17h_bits.en))
++			continue;
++		else if (!pstate.bits.en)
++			continue;
++
+ 		pstates[i] = get_cof(cpu_family, pstate);
+ 	}
+ 	*no = i;
+diff --git a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+index 1893d0f59ad7..059b7e81b922 100755
+--- a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
++++ b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+@@ -143,6 +143,10 @@ echo "Import devices from localhost - should work"
+ src/usbip attach -r localhost -b $busid;
+ echo "=============================================================="
+ 
++# Wait for sysfs file to be updated. Without this sleep, usbip port
++# shows no imported devices.
++sleep 3;
++
+ echo "List imported devices - expect to see imported devices";
+ src/usbip port;
+ echo "=============================================================="
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+index cef11377dcbd..c604438df13b 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+@@ -35,18 +35,18 @@ fi
+ 
+ reset_trigger
+ 
+-echo "Test create synthetic event with an error"
+-echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
++echo "Test remove synthetic event"
++echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' >> synthetic_events
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Created wakeup_latency synthetic event with an invalid format"
++    fail "Failed to delete wakeup_latency synthetic event"
+ fi
+ 
+ reset_trigger
+ 
+-echo "Test remove synthetic event"
+-echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' > synthetic_events
++echo "Test create synthetic event with an error"
++echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Failed to delete wakeup_latency synthetic event"
++    fail "Created wakeup_latency synthetic event with an invalid format"
+ fi
+ 
+ do_reset
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+new file mode 100644
+index 000000000000..88e6c3f43006
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+@@ -0,0 +1,80 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++# description: event trigger - test synthetic_events syntax parser
++
++do_reset() {
++    reset_trigger
++    echo > set_event
++    clear_trace
++}
++
++fail() { #msg
++    do_reset
++    echo $1
++    exit_fail
++}
++
++if [ ! -f set_event ]; then
++    echo "event tracing is not supported"
++    exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++    echo "synthetic event is not supported"
++    exit_unsupported
++fi
++
++reset_tracer
++do_reset
++
++echo "Test synthetic_events syntax parser"
++
++echo > synthetic_events
++
++# synthetic event must have a field
++! echo "myevent" >> synthetic_events
++echo "myevent u64 var1" >> synthetic_events
++
++# synthetic event must be found in synthetic_events
++grep "myevent[[:space:]]u64 var1" synthetic_events
++
++# it is not possible to add same name event
++! echo "myevent u64 var2" >> synthetic_events
++
++# Non-append open will cleanup all events and add new one
++echo "myevent u64 var2" > synthetic_events
++
++# multiple fields with different spaces
++echo "myevent u64 var1; u64 var2;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ; u64 var2 ;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ;u64 var2" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++
++# test field types
++echo "myevent u32 var" > synthetic_events
++echo "myevent u16 var" > synthetic_events
++echo "myevent u8 var" > synthetic_events
++echo "myevent s64 var" > synthetic_events
++echo "myevent s32 var" > synthetic_events
++echo "myevent s16 var" > synthetic_events
++echo "myevent s8 var" > synthetic_events
++
++echo "myevent char var" > synthetic_events
++echo "myevent int var" > synthetic_events
++echo "myevent long var" > synthetic_events
++echo "myevent pid_t var" > synthetic_events
++
++echo "myevent unsigned char var" > synthetic_events
++echo "myevent unsigned int var" > synthetic_events
++echo "myevent unsigned long var" > synthetic_events
++grep "myevent[[:space:]]unsigned long var" synthetic_events
++
++# test string type
++echo "myevent char var[10]" > synthetic_events
++grep "myevent[[:space:]]char\[10\] var" synthetic_events
++
++do_reset
++
++exit 0
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index cad14cd0ea92..b5277106df1f 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -437,14 +437,19 @@ void enable_fastopen(void)
+ 	}
+ }
+ 
+-static struct rlimit rlim_old, rlim_new;
++static struct rlimit rlim_old;
+ 
+ static  __attribute__((constructor)) void main_ctor(void)
+ {
+ 	getrlimit(RLIMIT_MEMLOCK, &rlim_old);
+-	rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
+-	rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
+-	setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++
++	if (rlim_old.rlim_cur != RLIM_INFINITY) {
++		struct rlimit rlim_new;
++
++		rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
++		rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
++		setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++	}
+ }
+ 
+ static __attribute__((destructor)) void main_dtor(void)
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+index 327fa943c7f3..dbdffa2e2c82 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+@@ -67,8 +67,8 @@ trans:
+ 		"3: ;"
+ 		: [res] "=r" (result), [texasr] "=r" (texasr)
+ 		: [gpr_1]"i"(GPR_1), [gpr_2]"i"(GPR_2), [gpr_4]"i"(GPR_4),
+-		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "r" (&a),
+-		[flt_2] "r" (&b), [flt_4] "r" (&d)
++		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "b" (&a),
++		[flt_4] "b" (&d)
+ 		: "memory", "r5", "r6", "r7",
+ 		"r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ 		"r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 04e554cae3a2..f8c2b9e7c19c 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -1244,8 +1244,6 @@ static void cpu_init_hyp_mode(void *dummy)
+ 
+ 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
+ 	__cpu_init_stage2();
+-
+-	kvm_arm_init_debug();
+ }
+ 
+ static void cpu_hyp_reset(void)
+@@ -1269,6 +1267,8 @@ static void cpu_hyp_reinit(void)
+ 		cpu_init_hyp_mode(NULL);
+ 	}
+ 
++	kvm_arm_init_debug();
++
+ 	if (vgic_present)
+ 		kvm_vgic_init_cpu_hardware();
+ }
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index fd8c88463928..fbba603caf1b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1201,8 +1201,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
+ {
+ 	kvm_pfn_t pfn = *pfnp;
+ 	gfn_t gfn = *ipap >> PAGE_SHIFT;
++	struct page *page = pfn_to_page(pfn);
+ 
+-	if (PageTransCompoundMap(pfn_to_page(pfn))) {
++	/*
++	 * PageTransCompoundMap() returns true for THP and
++	 * hugetlbfs. Make sure the adjustment is done only for THP
++	 * pages.
++	 */
++	if (!PageHuge(page) && PageTransCompoundMap(page)) {
+ 		unsigned long mask;
+ 		/*
+ 		 * The address we faulted on is backed by a transparent huge


^ permalink raw reply related	[flat|nested] 75+ messages in thread
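For readers skimming the archive, the `synthetic_events` write semantics that the new selftest above exercises (append with `>>` adds an event and rejects duplicates, truncating with `>` clears the list first, a leading `!` removes an event) can be sketched as a standalone toy model. This is illustrative only, not the tracefs implementation, and the function and variable names are invented for the sketch:

```python
# Toy model of the synthetic_events file semantics used by the selftest
# (illustrative only; not the kernel's tracefs implementation).
events = {}

def write(line, append=True):
    # append=False models "echo ... >": truncate clears all events first.
    if not append:
        events.clear()
    line = line.strip()
    # A leading "!" removes the named event; extra fields are ignored.
    if line.startswith("!"):
        events.pop(line[1:].split()[0], None)
        return
    name, _, fields = line.partition(" ")
    # A synthetic event must declare at least one field.
    if not fields.strip():
        raise ValueError("synthetic event must have a field")
    # Appending the same event name twice is rejected.
    if name in events:
        raise ValueError("duplicate event name")
    events[name] = fields.strip()

write("myevent u64 var1")                 # ">>" adds the event
dup_rejected = False
try:
    write("myevent u64 var2")             # same name again: rejected
except ValueError:
    dup_rejected = True
write("myevent u64 var2", append=False)   # ">" replaces everything
```

After these writes the model holds a single `myevent` with the replacement fields, mirroring the "Non-append open will cleanup all events and add new one" step in the script.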
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     f1ccd5f707733ba5115c78edc91afa9c0fe8745d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 11 01:51:36 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:41 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f1ccd5f7

net: sched: Remove TCA_OPTIONS from policy

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ++++
 1800_TCA-OPTIONS-sched-fix.patch | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/0000_README b/0000_README
index 6774045..bdc7ee9 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1800_TCA-OPTIONS-sched-fix.patch
+From:   https://git.kernel.org
+Desc:   net: sched: Remove TCA_OPTIONS from policy
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1800_TCA-OPTIONS-sched-fix.patch b/1800_TCA-OPTIONS-sched-fix.patch
new file mode 100644
index 0000000..f960fac
--- /dev/null
+++ b/1800_TCA-OPTIONS-sched-fix.patch
@@ -0,0 +1,35 @@
+From e72bde6b66299602087c8c2350d36a525e75d06e Mon Sep 17 00:00:00 2001
+From: David Ahern <dsahern@gmail.com>
+Date: Wed, 24 Oct 2018 08:32:49 -0700
+Subject: net: sched: Remove TCA_OPTIONS from policy
+
+Marco reported an error with hfsc:
+root@Calimero:~# tc qdisc add dev eth0 root handle 1:0 hfsc default 1
+Error: Attribute failed policy validation.
+
+Apparently a few implementations pass TCA_OPTIONS as a binary instead
+of nested attribute, so drop TCA_OPTIONS from the policy.
+
+Fixes: 8b4c3cdd9dd8 ("net: sched: Add policy validation for tc attributes")
+Reported-by: Marco Berizzi <pupilla@libero.it>
+Signed-off-by: David Ahern <dsahern@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ net/sched/sch_api.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 022bca98bde6..ca3b0f46de53 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1320,7 +1320,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+-- 
+cgit 1.2-0.3.lf.el7
+

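The behaviour the patch above relies on — a netlink attribute with no entry in the policy table is accepted unchecked, while one declared `NLA_NESTED` rejects the binary payload some qdiscs (like hfsc) send for `TCA_OPTIONS` — can be sketched in a small standalone model. This is a toy illustration with invented names, not the kernel's `nla_parse()` implementation:

```python
# Toy model of netlink policy validation (illustrative only, not the
# kernel's nla_parse()). Attribute kinds are plain strings here.
NLA_STRING, NLA_NESTED, NLA_BINARY = "string", "nested", "binary"

def validate(attrs, policy):
    for name, (kind, _payload) in attrs.items():
        rule = policy.get(name)
        # No policy entry: the attribute passes through unchecked.
        if rule is not None and rule != kind:
            raise ValueError(f"{name} failed policy validation")

# hfsc passes TCA_OPTIONS as a binary blob rather than a nested attribute.
attrs = {"TCA_KIND": (NLA_STRING, "hfsc"),
         "TCA_OPTIONS": (NLA_BINARY, b"\x00")}

# Old policy declares TCA_OPTIONS as nested -> binary payload is rejected,
# reproducing "Error: Attribute failed policy validation".
policy_old = {"TCA_KIND": NLA_STRING, "TCA_OPTIONS": NLA_NESTED}
try:
    validate(attrs, policy_old)
    old_ok = True
except ValueError:
    old_ok = False

# New policy simply omits the entry, so the payload is accepted unchecked.
policy_new = {"TCA_KIND": NLA_STRING}
validate(attrs, policy_new)
new_ok = True
```

Dropping the entry rather than changing its type matches the commit message: several implementations disagree on the payload format, so the table stops constraining it at all.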

* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     7a0e6dcd5c0523564a0bb694e792eb458ddcfa79
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 10 21:33:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:41 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7a0e6dcd

Linux patch 4.18.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-4.18.18.patch | 1206 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1210 insertions(+)

diff --git a/0000_README b/0000_README
index fcd301e..6774045 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-4.18.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.17
 
+Patch:  1017_linux-4.18.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-4.18.18.patch b/1017_linux-4.18.18.patch
new file mode 100644
index 0000000..093fbfc
--- /dev/null
+++ b/1017_linux-4.18.18.patch
@@ -0,0 +1,1206 @@
+diff --git a/Makefile b/Makefile
+index c051db0ca5a0..7b35c1ec0427 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index a38bf5a1e37a..69dcdf195b61 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu)
+ static inline void
+ switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+ {
+-	if (old_fpu->initialized) {
++	if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
+ 		if (!copy_fpregs_to_fpstate(old_fpu))
+ 			old_fpu->last_cpu = -1;
+ 		else
+diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
+index a06b07399d17..6abf3af96fc8 100644
+--- a/arch/x86/include/asm/percpu.h
++++ b/arch/x86/include/asm/percpu.h
+@@ -185,22 +185,22 @@ do {									\
+ 	typeof(var) pfo_ret__;				\
+ 	switch (sizeof(var)) {				\
+ 	case 1:						\
+-		asm(op "b "__percpu_arg(1)",%0"		\
++		asm volatile(op "b "__percpu_arg(1)",%0"\
+ 		    : "=q" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 2:						\
+-		asm(op "w "__percpu_arg(1)",%0"		\
++		asm volatile(op "w "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 4:						\
+-		asm(op "l "__percpu_arg(1)",%0"		\
++		asm volatile(op "l "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 8:						\
+-		asm(op "q "__percpu_arg(1)",%0"		\
++		asm volatile(op "q "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
+index 661583662430..71c0b01d93b1 100644
+--- a/arch/x86/kernel/pci-swiotlb.c
++++ b/arch/x86/kernel/pci-swiotlb.c
+@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
+ int __init pci_swiotlb_detect_4gb(void)
+ {
+ 	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
+-#ifdef CONFIG_X86_64
+ 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
+ 		swiotlb = 1;
+-#endif
+ 
+ 	/*
+ 	 * If SME is active then swiotlb will be set to 1 so that bounce
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 74b4472ba0a6..f32472acf66c 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1258,7 +1258,7 @@ void __init setup_arch(char **cmdline_p)
+ 	x86_init.hyper.guest_late_init();
+ 
+ 	e820__reserve_resources();
+-	e820__register_nosave_regions(max_low_pfn);
++	e820__register_nosave_regions(max_pfn);
+ 
+ 	x86_init.resources.reserve_resources();
+ 
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index be01328eb755..fddaefc51fb6 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -25,7 +25,7 @@
+ #include <asm/time.h>
+ 
+ #ifdef CONFIG_X86_64
+-__visible volatile unsigned long jiffies __cacheline_aligned = INITIAL_JIFFIES;
++__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
+ #endif
+ 
+ unsigned long profile_pc(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index a10481656d82..2f4af9598f62 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -60,7 +60,7 @@ struct cyc2ns {
+ 
+ static DEFINE_PER_CPU_ALIGNED(struct cyc2ns, cyc2ns);
+ 
+-void cyc2ns_read_begin(struct cyc2ns_data *data)
++void __always_inline cyc2ns_read_begin(struct cyc2ns_data *data)
+ {
+ 	int seq, idx;
+ 
+@@ -77,7 +77,7 @@ void cyc2ns_read_begin(struct cyc2ns_data *data)
+ 	} while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence)));
+ }
+ 
+-void cyc2ns_read_end(void)
++void __always_inline cyc2ns_read_end(void)
+ {
+ 	preempt_enable_notrace();
+ }
+@@ -123,7 +123,7 @@ static void __init cyc2ns_init(int cpu)
+ 	seqcount_init(&c2n->seq);
+ }
+ 
+-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
++static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
+ {
+ 	struct cyc2ns_data data;
+ 	unsigned long long ns;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+index ffa5dac221e4..129ebd2588fd 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
++++ b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+@@ -1434,8 +1434,16 @@ static void __init sun4i_ccu_init(struct device_node *node,
+ 		return;
+ 	}
+ 
+-	/* Force the PLL-Audio-1x divider to 1 */
+ 	val = readl(reg + SUN4I_PLL_AUDIO_REG);
++
++	/*
++	 * Force VCO and PLL bias current to lowest setting. Higher
++	 * settings interfere with sigma-delta modulation and result
++	 * in audible noise and distortions when using SPDIF or I2S.
++	 */
++	val &= ~GENMASK(25, 16);
++
++	/* Force the PLL-Audio-1x divider to 1 */
+ 	val &= ~GENMASK(29, 26);
+ 	writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG);
+ 
+diff --git a/drivers/gpio/gpio-mxs.c b/drivers/gpio/gpio-mxs.c
+index e2831ee70cdc..deb539b3316b 100644
+--- a/drivers/gpio/gpio-mxs.c
++++ b/drivers/gpio/gpio-mxs.c
+@@ -18,8 +18,6 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/gpio/driver.h>
+-/* FIXME: for gpio_get_value(), replace this by direct register read */
+-#include <linux/gpio.h>
+ #include <linux/module.h>
+ 
+ #define MXS_SET		0x4
+@@ -86,7 +84,7 @@ static int mxs_gpio_set_irq_type(struct irq_data *d, unsigned int type)
+ 	port->both_edges &= ~pin_mask;
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		val = gpio_get_value(port->gc.base + d->hwirq);
++		val = port->gc.get(&port->gc, d->hwirq);
+ 		if (val)
+ 			edge = GPIO_INT_FALL_EDGE;
+ 		else
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index c7b4481c90d7..d74d9a8cde2a 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -113,6 +113,9 @@ static const struct edid_quirk {
+ 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+ 	{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
++	{ "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
+@@ -4279,7 +4282,7 @@ static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
+ 	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+ 
+ 	dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
+-	hdmi->y420_dc_modes |= dc_mask;
++	hdmi->y420_dc_modes = dc_mask;
+ }
+ 
+ static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 2ee1eaa66188..1ebac724fe7b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1561,6 +1561,25 @@ unlock:
+ }
+ EXPORT_SYMBOL(drm_fb_helper_ioctl);
+ 
++static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1,
++				      const struct fb_var_screeninfo *var_2)
++{
++	return var_1->bits_per_pixel == var_2->bits_per_pixel &&
++	       var_1->grayscale == var_2->grayscale &&
++	       var_1->red.offset == var_2->red.offset &&
++	       var_1->red.length == var_2->red.length &&
++	       var_1->red.msb_right == var_2->red.msb_right &&
++	       var_1->green.offset == var_2->green.offset &&
++	       var_1->green.length == var_2->green.length &&
++	       var_1->green.msb_right == var_2->green.msb_right &&
++	       var_1->blue.offset == var_2->blue.offset &&
++	       var_1->blue.length == var_2->blue.length &&
++	       var_1->blue.msb_right == var_2->blue.msb_right &&
++	       var_1->transp.offset == var_2->transp.offset &&
++	       var_1->transp.length == var_2->transp.length &&
++	       var_1->transp.msb_right == var_2->transp.msb_right;
++}
++
+ /**
+  * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
+  * @var: screeninfo to check
+@@ -1571,7 +1590,6 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ {
+ 	struct drm_fb_helper *fb_helper = info->par;
+ 	struct drm_framebuffer *fb = fb_helper->fb;
+-	int depth;
+ 
+ 	if (var->pixclock != 0 || in_dbg_master())
+ 		return -EINVAL;
+@@ -1591,72 +1609,15 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ 		return -EINVAL;
+ 	}
+ 
+-	switch (var->bits_per_pixel) {
+-	case 16:
+-		depth = (var->green.length == 6) ? 16 : 15;
+-		break;
+-	case 32:
+-		depth = (var->transp.length > 0) ? 32 : 24;
+-		break;
+-	default:
+-		depth = var->bits_per_pixel;
+-		break;
+-	}
+-
+-	switch (depth) {
+-	case 8:
+-		var->red.offset = 0;
+-		var->green.offset = 0;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 15:
+-		var->red.offset = 10;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 5;
+-		var->blue.length = 5;
+-		var->transp.length = 1;
+-		var->transp.offset = 15;
+-		break;
+-	case 16:
+-		var->red.offset = 11;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 6;
+-		var->blue.length = 5;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 24:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 32:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 8;
+-		var->transp.offset = 24;
+-		break;
+-	default:
++	/*
++	 * drm fbdev emulation doesn't support changing the pixel format at all,
++	 * so reject all pixel format changing requests.
++	 */
++	if (!drm_fb_pixel_format_equal(var, &info->var)) {
++		DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n");
+ 		return -EINVAL;
+ 	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(drm_fb_helper_check_var);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_dotclock.c b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+index e36004fbe453..2a15f2f9271e 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_dotclock.c
++++ b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+@@ -81,9 +81,19 @@ static long sun4i_dclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	int i;
+ 
+ 	for (i = tcon->dclk_min_div; i <= tcon->dclk_max_div; i++) {
+-		unsigned long ideal = rate * i;
++		u64 ideal = (u64)rate * i;
+ 		unsigned long rounded;
+ 
++		/*
++		 * ideal has overflowed the max value that can be stored in an
++		 * unsigned long, and every clk operation we might do on a
++		 * truncated u64 value will give us incorrect results.
++		 * Let's just stop there since bigger dividers will result in
++		 * the same overflow issue.
++		 */
++		if (ideal > ULONG_MAX)
++			goto out;
++
+ 		rounded = clk_hw_round_rate(clk_hw_get_parent(hw),
+ 					    ideal);
+ 
+diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c
+index 9eef96dacbd7..d93a719d25c1 100644
+--- a/drivers/infiniband/core/ucm.c
++++ b/drivers/infiniband/core/ucm.c
+@@ -46,6 +46,8 @@
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/uaccess.h>
+ 
+ #include <rdma/ib.h>
+@@ -1123,6 +1125,7 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 21863ddde63e..01d68ed46c1b 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -44,6 +44,8 @@
+ #include <linux/module.h>
+ #include <linux/nsproxy.h>
+ 
++#include <linux/nospec.h>
++
+ #include <rdma/rdma_user_cm.h>
+ #include <rdma/ib_marshall.h>
+ #include <rdma/rdma_cm.h>
+@@ -1676,6 +1678,7 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index f5ae24865355..b0f9d19b3410 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1346,6 +1346,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0611", 0 },
+ 	{ "ELAN0612", 0 },
+ 	{ "ELAN0618", 0 },
++	{ "ELAN061C", 0 },
+ 	{ "ELAN061D", 0 },
+ 	{ "ELAN0622", 0 },
+ 	{ "ELAN1000", 0 },
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index f5cc517d1131..7e50e1d6f58c 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -478,6 +478,23 @@ static void at24_properties_to_pdata(struct device *dev,
+ 	if (device_property_present(dev, "no-read-rollover"))
+ 		chip->flags |= AT24_FLAG_NO_RDROL;
+ 
++	err = device_property_read_u32(dev, "address-width", &val);
++	if (!err) {
++		switch (val) {
++		case 8:
++			if (chip->flags & AT24_FLAG_ADDR16)
++				dev_warn(dev, "Override address width to be 8, while default is 16\n");
++			chip->flags &= ~AT24_FLAG_ADDR16;
++			break;
++		case 16:
++			chip->flags |= AT24_FLAG_ADDR16;
++			break;
++		default:
++			dev_warn(dev, "Bad \"address-width\" property: %u\n",
++				 val);
++		}
++	}
++
+ 	err = device_property_read_u32(dev, "size", &val);
+ 	if (!err)
+ 		chip->byte_len = val;
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index 01b0e2bb3319..2012551d93e0 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -24,6 +24,8 @@
+ #include <linux/slab.h>
+ #include <linux/timekeeping.h>
+ 
++#include <linux/nospec.h>
++
+ #include "ptp_private.h"
+ 
+ static int ptp_disable_pinfunc(struct ptp_clock_info *ops,
+@@ -248,6 +250,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		pd = ops->pin_config[pin_index];
+@@ -266,6 +269,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 84f52774810a..b61d101894ef 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -309,17 +309,17 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 
+ 		if (difference & ACM_CTRL_DSR)
+ 			acm->iocount.dsr++;
+-		if (difference & ACM_CTRL_BRK)
+-			acm->iocount.brk++;
+-		if (difference & ACM_CTRL_RI)
+-			acm->iocount.rng++;
+ 		if (difference & ACM_CTRL_DCD)
+ 			acm->iocount.dcd++;
+-		if (difference & ACM_CTRL_FRAMING)
++		if (newctrl & ACM_CTRL_BRK)
++			acm->iocount.brk++;
++		if (newctrl & ACM_CTRL_RI)
++			acm->iocount.rng++;
++		if (newctrl & ACM_CTRL_FRAMING)
+ 			acm->iocount.frame++;
+-		if (difference & ACM_CTRL_PARITY)
++		if (newctrl & ACM_CTRL_PARITY)
+ 			acm->iocount.parity++;
+-		if (difference & ACM_CTRL_OVERRUN)
++		if (newctrl & ACM_CTRL_OVERRUN)
+ 			acm->iocount.overrun++;
+ 		spin_unlock(&acm->read_lock);
+ 
+@@ -354,7 +354,6 @@ static void acm_ctrl_irq(struct urb *urb)
+ 	case -ENOENT:
+ 	case -ESHUTDOWN:
+ 		/* this urb is terminated, clean up */
+-		acm->nb_index = 0;
+ 		dev_dbg(&acm->control->dev,
+ 			"%s - urb shutting down with status: %d\n",
+ 			__func__, status);
+@@ -1642,6 +1641,7 @@ static int acm_pre_reset(struct usb_interface *intf)
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 
+ 	clear_bit(EVENT_RX_STALL, &acm->flags);
++	acm->nb_index = 0; /* pending control transfers are lost */
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index e1e0c90ce569..2e66711dac9c 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1473,8 +1473,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
+-		if (is_in)
+-			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1504,6 +1502,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 			is_in = 0;
+ 			uurb->endpoint &= ~USB_DIR_IN;
+ 		}
++		if (is_in)
++			allow_short = true;
+ 		snoop(&ps->dev->dev, "control urb: bRequestType=%02x "
+ 			"bRequest=%02x wValue=%04x "
+ 			"wIndex=%04x wLength=%04x\n",
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index acecd13dcbd9..b29620e5df83 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -222,6 +222,8 @@
+ #include <linux/usb/gadget.h>
+ #include <linux/usb/composite.h>
+ 
++#include <linux/nospec.h>
++
+ #include "configfs.h"
+ 
+ 
+@@ -3171,6 +3173,7 @@ static struct config_group *fsg_lun_make(struct config_group *group,
+ 	fsg_opts = to_fsg_opts(&group->cg_item);
+ 	if (num >= FSG_MAX_LUNS)
+ 		return ERR_PTR(-ERANGE);
++	num = array_index_nospec(num, FSG_MAX_LUNS);
+ 
+ 	mutex_lock(&fsg_opts->lock);
+ 	if (fsg_opts->refcnt || fsg_opts->common->luns[num]) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 722860eb5a91..51dd8e00c4f8 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -179,10 +179,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
++	    pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+-	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+diff --git a/drivers/usb/roles/intel-xhci-usb-role-switch.c b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+index 1fb3dd0f1dfa..277de96181f9 100644
+--- a/drivers/usb/roles/intel-xhci-usb-role-switch.c
++++ b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+@@ -161,6 +161,8 @@ static int intel_xhci_usb_remove(struct platform_device *pdev)
+ {
+ 	struct intel_xhci_usb_data *data = platform_get_drvdata(pdev);
+ 
++	pm_runtime_disable(&pdev->dev);
++
+ 	usb_role_switch_unregister(data->role_sw);
+ 	return 0;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index d11f3f8dad40..1e592ec94ba4 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -318,8 +318,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	struct vhci_hcd	*vhci_hcd;
+ 	struct vhci	*vhci;
+ 	int             retval = 0;
+-	int		rhport;
++	int		rhport = -1;
+ 	unsigned long	flags;
++	bool invalid_rhport = false;
+ 
+ 	u32 prev_port_status[VHCI_HC_PORTS];
+ 
+@@ -334,9 +335,19 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	usbip_dbg_vhci_rh("typeReq %x wValue %x wIndex %x\n", typeReq, wValue,
+ 			  wIndex);
+ 
+-	if (wIndex > VHCI_HC_PORTS)
+-		pr_err("invalid port number %d\n", wIndex);
+-	rhport = wIndex - 1;
++	/*
++	 * wIndex can be 0 for some request types (typeReq). rhport is
++	 * in valid range when wIndex >= 1 and < VHCI_HC_PORTS.
++	 *
++	 * Reference port_status[] only with valid rhport when
++	 * invalid_rhport is false.
++	 */
++	if (wIndex < 1 || wIndex > VHCI_HC_PORTS) {
++		invalid_rhport = true;
++		if (wIndex > VHCI_HC_PORTS)
++			pr_err("invalid port number %d\n", wIndex);
++	} else
++		rhport = wIndex - 1;
+ 
+ 	vhci_hcd = hcd_to_vhci_hcd(hcd);
+ 	vhci = vhci_hcd->vhci;
+@@ -345,8 +356,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 
+ 	/* store old status and compare now and old later */
+ 	if (usbip_dbg_flag_vhci_rh) {
+-		memcpy(prev_port_status, vhci_hcd->port_status,
+-			sizeof(prev_port_status));
++		if (!invalid_rhport)
++			memcpy(prev_port_status, vhci_hcd->port_status,
++				sizeof(prev_port_status));
+ 	}
+ 
+ 	switch (typeReq) {
+@@ -354,8 +366,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		usbip_dbg_vhci_rh(" ClearHubFeature\n");
+ 		break;
+ 	case ClearPortFeature:
+-		if (rhport < 0)
++		if (invalid_rhport) {
++			pr_err("invalid port number %d\n", wIndex);
+ 			goto error;
++		}
+ 		switch (wValue) {
+ 		case USB_PORT_FEAT_SUSPEND:
+ 			if (hcd->speed == HCD_USB3) {
+@@ -415,9 +429,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		break;
+ 	case GetPortStatus:
+ 		usbip_dbg_vhci_rh(" GetPortStatus port %x\n", wIndex);
+-		if (wIndex < 1) {
++		if (invalid_rhport) {
+ 			pr_err("invalid port number %d\n", wIndex);
+ 			retval = -EPIPE;
++			goto error;
+ 		}
+ 
+ 		/* we do not care about resume. */
+@@ -513,16 +528,20 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				goto error;
+ 			}
+ 
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 
+ 			vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND;
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_POWER\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3)
+ 				vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER;
+ 			else
+@@ -531,8 +550,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_BH_PORT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* Applicable only for USB3.0 hub */
+ 			if (hcd->speed != HCD_USB3) {
+ 				pr_err("USB_PORT_FEAT_BH_PORT_RESET req not "
+@@ -543,8 +564,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* if it's already enabled, disable */
+ 			if (hcd->speed == HCD_USB3) {
+ 				vhci_hcd->port_status[rhport] = 0;
+@@ -565,8 +588,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		default:
+ 			usbip_dbg_vhci_rh(" SetPortFeature: default %d\n",
+ 					  wValue);
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3) {
+ 				if ((vhci_hcd->port_status[rhport] &
+ 				     USB_SS_PORT_STAT_POWER) != 0) {
+@@ -608,7 +633,7 @@ error:
+ 	if (usbip_dbg_flag_vhci_rh) {
+ 		pr_debug("port %d\n", rhport);
+ 		/* Only dump valid port status */
+-		if (rhport >= 0) {
++		if (!invalid_rhport) {
+ 			dump_port_status_diff(prev_port_status[rhport],
+ 					      vhci_hcd->port_status[rhport],
+ 					      hcd->speed == HCD_USB3);
+@@ -618,8 +643,10 @@ error:
+ 
+ 	spin_unlock_irqrestore(&vhci->lock, flags);
+ 
+-	if ((vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0)
++	if (!invalid_rhport &&
++	    (vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0) {
+ 		usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ }
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index af2b17b21b94..95983c744164 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -343,7 +343,7 @@ try_again:
+ 	trap = lock_rename(cache->graveyard, dir);
+ 
+ 	/* do some checks before getting the grave dentry */
+-	if (rep->d_parent != dir) {
++	if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
+ 		/* the entry was probably culled when we dropped the parent dir
+ 		 * lock */
+ 		unlock_rename(cache->graveyard, dir);
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 83bfe04456b6..c550512ce335 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -70,20 +70,7 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ }
+ 
+ /*
+- * initialise an cookie jar slab element prior to any use
+- */
+-void fscache_cookie_init_once(void *_cookie)
+-{
+-	struct fscache_cookie *cookie = _cookie;
+-
+-	memset(cookie, 0, sizeof(*cookie));
+-	spin_lock_init(&cookie->lock);
+-	spin_lock_init(&cookie->stores_lock);
+-	INIT_HLIST_HEAD(&cookie->backing_objects);
+-}
+-
+-/*
+- * Set the index key in a cookie.  The cookie struct has space for a 12-byte
++ * Set the index key in a cookie.  The cookie struct has space for a 16-byte
+  * key plus length and hash, but if that's not big enough, it's instead a
+  * pointer to a buffer containing 3 bytes of hash, 1 byte of length and then
+  * the key data.
+@@ -93,20 +80,18 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ {
+ 	unsigned long long h;
+ 	u32 *buf;
++	int bufs;
+ 	int i;
+ 
+-	cookie->key_len = index_key_len;
++	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+ 	if (index_key_len > sizeof(cookie->inline_key)) {
+-		buf = kzalloc(index_key_len, GFP_KERNEL);
++		buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+ 		cookie->key = buf;
+ 	} else {
+ 		buf = (u32 *)cookie->inline_key;
+-		buf[0] = 0;
+-		buf[1] = 0;
+-		buf[2] = 0;
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+@@ -116,7 +101,8 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	 */
+ 	h = (unsigned long)cookie->parent;
+ 	h += index_key_len + cookie->type;
+-	for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++)
++
++	for (i = 0; i < bufs; i++)
+ 		h += buf[i];
+ 
+ 	cookie->key_hash = h ^ (h >> 32);
+@@ -161,7 +147,7 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	struct fscache_cookie *cookie;
+ 
+ 	/* allocate and initialise a cookie */
+-	cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL);
++	cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
+ 	if (!cookie)
+ 		return NULL;
+ 
+@@ -192,6 +178,9 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	cookie->netfs_data	= netfs_data;
+ 	cookie->flags		= (1 << FSCACHE_COOKIE_NO_DATA_YET);
+ 	cookie->type		= def->type;
++	spin_lock_init(&cookie->lock);
++	spin_lock_init(&cookie->stores_lock);
++	INIT_HLIST_HEAD(&cookie->backing_objects);
+ 
+ 	/* radix tree insertion won't use the preallocation pool unless it's
+ 	 * told it may not wait */
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index f83328a7f048..d6209022e965 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -51,7 +51,6 @@ extern struct fscache_cache *fscache_select_cache_for_object(
+ extern struct kmem_cache *fscache_cookie_jar;
+ 
+ extern void fscache_free_cookie(struct fscache_cookie *);
+-extern void fscache_cookie_init_once(void *);
+ extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *,
+ 						   const struct fscache_cookie_def *,
+ 						   const void *, size_t,
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index 7dce110bf17d..30ad89db1efc 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -143,9 +143,7 @@ static int __init fscache_init(void)
+ 
+ 	fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
+ 					       sizeof(struct fscache_cookie),
+-					       0,
+-					       0,
+-					       fscache_cookie_init_once);
++					       0, 0, NULL);
+ 	if (!fscache_cookie_jar) {
+ 		pr_notice("Failed to allocate a cookie jar\n");
+ 		ret = -ENOMEM;
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index b445b13fc59b..5444fec607ce 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -229,7 +229,7 @@ static long ioctl_file_clone(struct file *dst_file, unsigned long srcfd,
+ 	ret = -EXDEV;
+ 	if (src_file.file->f_path.mnt != dst_file->f_path.mnt)
+ 		goto fdput;
+-	ret = do_clone_file_range(src_file.file, off, dst_file, destoff, olen);
++	ret = vfs_clone_file_range(src_file.file, off, dst_file, destoff, olen);
+ fdput:
+ 	fdput(src_file);
+ 	return ret;
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index b0555d7d8200..613d2fe2dddd 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -541,7 +541,8 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ __be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
+ 		u64 dst_pos, u64 count)
+ {
+-	return nfserrno(do_clone_file_range(src, src_pos, dst, dst_pos, count));
++	return nfserrno(vfs_clone_file_range(src, src_pos, dst, dst_pos,
++					     count));
+ }
+ 
+ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index ddaddb4ce4c3..26b477f2538d 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -156,7 +156,7 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
+ 	}
+ 
+ 	/* Try to use clone_file_range to clone up within the same fs */
+-	error = vfs_clone_file_range(old_file, 0, new_file, 0, len);
++	error = do_clone_file_range(old_file, 0, new_file, 0, len);
+ 	if (!error)
+ 		goto out;
+ 	/* Couldn't clone, so now we try to copy the data */
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 153f8f690490..c9d489684335 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1818,8 +1818,8 @@ int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ }
+ EXPORT_SYMBOL(vfs_clone_file_prep_inodes);
+ 
+-int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len)
++int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			struct file *file_out, loff_t pos_out, u64 len)
+ {
+ 	struct inode *inode_in = file_inode(file_in);
+ 	struct inode *inode_out = file_inode(file_out);
+@@ -1866,6 +1866,19 @@ int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL(do_clone_file_range);
++
++int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
++			 struct file *file_out, loff_t pos_out, u64 len)
++{
++	int ret;
++
++	file_start_write(file_out);
++	ret = do_clone_file_range(file_in, pos_in, file_out, pos_out, len);
++	file_end_write(file_out);
++
++	return ret;
++}
+ EXPORT_SYMBOL(vfs_clone_file_range);
+ 
+ /*
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b25d12ef120a..e3c404833115 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -214,9 +214,9 @@ struct detailed_timing {
+ #define DRM_EDID_HDMI_DC_Y444             (1 << 3)
+ 
+ /* YCBCR 420 deep color modes */
+-#define DRM_EDID_YCBCR420_DC_48		  (1 << 6)
+-#define DRM_EDID_YCBCR420_DC_36		  (1 << 5)
+-#define DRM_EDID_YCBCR420_DC_30		  (1 << 4)
++#define DRM_EDID_YCBCR420_DC_48		  (1 << 2)
++#define DRM_EDID_YCBCR420_DC_36		  (1 << 1)
++#define DRM_EDID_YCBCR420_DC_30		  (1 << 0)
+ #define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \
+ 				    DRM_EDID_YCBCR420_DC_36 | \
+ 				    DRM_EDID_YCBCR420_DC_30)
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 38b04f559ad3..1fd6fa822d2c 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -50,6 +50,9 @@ struct bpf_reg_state {
+ 		 *   PTR_TO_MAP_VALUE_OR_NULL
+ 		 */
+ 		struct bpf_map *map_ptr;
++
++		/* Max size from any of the above. */
++		unsigned long raw;
+ 	};
+ 	/* Fixed part of pointer offset, pointer types only */
+ 	s32 off;
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index a3afa50bb79f..e73363bd8646 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1813,8 +1813,10 @@ extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
+ extern int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ 				      struct inode *inode_out, loff_t pos_out,
+ 				      u64 *len, bool is_dedupe);
++extern int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			       struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len);
++				struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+ 					 struct inode *dest, loff_t destoff,
+ 					 loff_t len, bool *is_same);
+@@ -2755,19 +2757,6 @@ static inline void file_end_write(struct file *file)
+ 	__sb_end_write(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+ }
+ 
+-static inline int do_clone_file_range(struct file *file_in, loff_t pos_in,
+-				      struct file *file_out, loff_t pos_out,
+-				      u64 len)
+-{
+-	int ret;
+-
+-	file_start_write(file_out);
+-	ret = vfs_clone_file_range(file_in, pos_in, file_out, pos_out, len);
+-	file_end_write(file_out);
+-
+-	return ret;
+-}
+-
+ /*
+  * get_write_access() gets write permission for a file.
+  * put_write_access() releases this write permission.
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 82e8edef6ea0..b000686fa1a1 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2731,7 +2731,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->umax_value = umax_ptr;
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->off = ptr_reg->off + smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  Note that off_reg->off
+@@ -2761,10 +2761,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+-			dst_reg->range = 0;
++			dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_SUB:
+@@ -2793,7 +2794,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->id = ptr_reg->id;
+ 			dst_reg->off = ptr_reg->off - smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  If the subtrahend is known
+@@ -2819,11 +2820,12 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+ 			if (smin_val < 0)
+-				dst_reg->range = 0;
++				dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_AND:
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 26526fc41f0d..b27b9509ea89 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4797,9 +4797,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
+ 
+ 	/*
+ 	 * Add to the _head_ of the list, so that an already-started
+-	 * distribute_cfs_runtime will not see us
++	 * distribute_cfs_runtime will not see us. If disribute_cfs_runtime is
++	 * not running add to the tail so that later runqueues don't get starved.
+ 	 */
+-	list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	if (cfs_b->distribute_running)
++		list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	else
++		list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
+ 
+ 	/*
+ 	 * If we're the first throttled task, make sure the bandwidth
+@@ -4943,14 +4947,16 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
+ 	 * in us over-using our runtime if it is all used during this loop, but
+ 	 * only by limited amounts in that extreme case.
+ 	 */
+-	while (throttled && cfs_b->runtime > 0) {
++	while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) {
+ 		runtime = cfs_b->runtime;
++		cfs_b->distribute_running = 1;
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		/* we can't nest cfs_b->lock while distributing bandwidth */
+ 		runtime = distribute_cfs_runtime(cfs_b, runtime,
+ 						 runtime_expires);
+ 		raw_spin_lock(&cfs_b->lock);
+ 
++		cfs_b->distribute_running = 0;
+ 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
+ 
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+@@ -5061,6 +5067,11 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 
+ 	/* confirm we're still not at a refresh boundary */
+ 	raw_spin_lock(&cfs_b->lock);
++	if (cfs_b->distribute_running) {
++		raw_spin_unlock(&cfs_b->lock);
++		return;
++	}
++
+ 	if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		return;
+@@ -5070,6 +5081,9 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 		runtime = cfs_b->runtime;
+ 
+ 	expires = cfs_b->runtime_expires;
++	if (runtime)
++		cfs_b->distribute_running = 1;
++
+ 	raw_spin_unlock(&cfs_b->lock);
+ 
+ 	if (!runtime)
+@@ -5080,6 +5094,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 	raw_spin_lock(&cfs_b->lock);
+ 	if (expires == cfs_b->runtime_expires)
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
++	cfs_b->distribute_running = 0;
+ 	raw_spin_unlock(&cfs_b->lock);
+ }
+ 
+@@ -5188,6 +5203,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
+ 	cfs_b->period_timer.function = sched_cfs_period_timer;
+ 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
++	cfs_b->distribute_running = 0;
+ }
+ 
+ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c7742dcc136c..4565c3f9ecc5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -346,6 +346,8 @@ struct cfs_bandwidth {
+ 	int			nr_periods;
+ 	int			nr_throttled;
+ 	u64			throttled_time;
++
++	bool                    distribute_running;
+ #endif
+ };
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index aae18af94c94..6c78bc2b7fff 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -747,16 +747,30 @@ static void free_synth_field(struct synth_field *field)
+ 	kfree(field);
+ }
+ 
+-static struct synth_field *parse_synth_field(char *field_type,
+-					     char *field_name)
++static struct synth_field *parse_synth_field(int argc, char **argv,
++					     int *consumed)
+ {
+ 	struct synth_field *field;
++	const char *prefix = NULL;
++	char *field_type = argv[0], *field_name;
+ 	int len, ret = 0;
+ 	char *array;
+ 
+ 	if (field_type[0] == ';')
+ 		field_type++;
+ 
++	if (!strcmp(field_type, "unsigned")) {
++		if (argc < 3)
++			return ERR_PTR(-EINVAL);
++		prefix = "unsigned ";
++		field_type = argv[1];
++		field_name = argv[2];
++		*consumed = 3;
++	} else {
++		field_name = argv[1];
++		*consumed = 2;
++	}
++
+ 	len = strlen(field_name);
+ 	if (field_name[len - 1] == ';')
+ 		field_name[len - 1] = '\0';
+@@ -769,11 +783,15 @@ static struct synth_field *parse_synth_field(char *field_type,
+ 	array = strchr(field_name, '[');
+ 	if (array)
+ 		len += strlen(array);
++	if (prefix)
++		len += strlen(prefix);
+ 	field->type = kzalloc(len, GFP_KERNEL);
+ 	if (!field->type) {
+ 		ret = -ENOMEM;
+ 		goto free;
+ 	}
++	if (prefix)
++		strcat(field->type, prefix);
+ 	strcat(field->type, field_type);
+ 	if (array) {
+ 		strcat(field->type, array);
+@@ -1018,7 +1036,7 @@ static int create_synth_event(int argc, char **argv)
+ 	struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
+ 	struct synth_event *event = NULL;
+ 	bool delete_event = false;
+-	int i, n_fields = 0, ret = 0;
++	int i, consumed = 0, n_fields = 0, ret = 0;
+ 	char *name;
+ 
+ 	mutex_lock(&synth_event_mutex);
+@@ -1070,16 +1088,16 @@ static int create_synth_event(int argc, char **argv)
+ 			goto err;
+ 		}
+ 
+-		field = parse_synth_field(argv[i], argv[i + 1]);
++		field = parse_synth_field(argc - i, &argv[i], &consumed);
+ 		if (IS_ERR(field)) {
+ 			ret = PTR_ERR(field);
+ 			goto err;
+ 		}
+-		fields[n_fields] = field;
+-		i++; n_fields++;
++		fields[n_fields++] = field;
++		i += consumed - 1;
+ 	}
+ 
+-	if (i < argc) {
++	if (i < argc && strcmp(argv[i], ";") != 0) {
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     5aa2ee37315b183ead0f3ddac8ff2eccdd693750
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 20 12:36:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5aa2ee37

Linux patch 4.18.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-4.18.16.patch | 2439 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2443 insertions(+)

diff --git a/0000_README b/0000_README
index 5676b13..52e9ca9 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-4.18.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.15
 
+Patch:  1015_linux-4.18.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-4.18.16.patch b/1015_linux-4.18.16.patch
new file mode 100644
index 0000000..9bc7017
--- /dev/null
+++ b/1015_linux-4.18.16.patch
@@ -0,0 +1,2439 @@
+diff --git a/Makefile b/Makefile
+index 968eb96a0553..034dd990b0ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 6c1b20dd76ad..7c6c97782022 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -6,34 +6,12 @@
+ # published by the Free Software Foundation.
+ #
+ 
+-ifeq ($(CROSS_COMPILE),)
+-ifndef CONFIG_CPU_BIG_ENDIAN
+-CROSS_COMPILE := arc-linux-
+-else
+-CROSS_COMPILE := arceb-linux-
+-endif
+-endif
+-
+ KBUILD_DEFCONFIG := nsim_700_defconfig
+ 
+ cflags-y	+= -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+ cflags-$(CONFIG_ISA_ARCOMPACT)	+= -mA7
+ cflags-$(CONFIG_ISA_ARCV2)	+= -mcpu=archs
+ 
+-is_700 = $(shell $(CC) -dM -E - < /dev/null | grep -q "ARC700" && echo 1 || echo 0)
+-
+-ifdef CONFIG_ISA_ARCOMPACT
+-ifeq ($(is_700), 0)
+-    $(error Toolchain not configured for ARCompact builds)
+-endif
+-endif
+-
+-ifdef CONFIG_ISA_ARCV2
+-ifeq ($(is_700), 1)
+-    $(error Toolchain not configured for ARCv2 builds)
+-endif
+-endif
+-
+ ifdef CONFIG_ARC_CURR_IN_REG
+ # For a global register defintion, make sure it gets passed to every file
+ # We had a customer reported bug where some code built in kernel was NOT using
+@@ -87,7 +65,7 @@ ldflags-$(CONFIG_CPU_BIG_ENDIAN)	+= -EB
+ # --build-id w/o "-marclinux". Default arc-elf32-ld is OK
+ ldflags-$(upto_gcc44)			+= -marclinux
+ 
+-LIBGCC	:= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
++LIBGCC	= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
+ 
+ # Modules with short calls might break for calls into builtin-kernel
+ KBUILD_CFLAGS_MODULE	+= -mlong-calls -mno-millicode
+diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
+index ff12f47a96b6..09d347b61218 100644
+--- a/arch/powerpc/kernel/tm.S
++++ b/arch/powerpc/kernel/tm.S
+@@ -175,13 +175,27 @@ _GLOBAL(tm_reclaim)
+ 	std	r1, PACATMSCRATCH(r13)
+ 	ld	r1, PACAR1(r13)
+ 
+-	/* Store the PPR in r11 and reset to decent value */
+ 	std	r11, GPR11(r1)			/* Temporary stash */
+ 
++	/*
++	 * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is
++	 * clobbered by an exception once we turn on MSR_RI below.
++	 */
++	ld	r11, PACATMSCRATCH(r13)
++	std	r11, GPR1(r1)
++
++	/*
++	 * Store r13 away so we can free up the scratch SPR for the SLB fault
++	 * handler (needed once we start accessing the thread_struct).
++	 */
++	GET_SCRATCH0(r11)
++	std	r11, GPR13(r1)
++
+ 	/* Reset MSR RI so we can take SLB faults again */
+ 	li	r11, MSR_RI
+ 	mtmsrd	r11, 1
+ 
++	/* Store the PPR in r11 and reset to decent value */
+ 	mfspr	r11, SPRN_PPR
+ 	HMT_MEDIUM
+ 
+@@ -206,11 +220,11 @@ _GLOBAL(tm_reclaim)
+ 	SAVE_GPR(8, r7)				/* user r8 */
+ 	SAVE_GPR(9, r7)				/* user r9 */
+ 	SAVE_GPR(10, r7)			/* user r10 */
+-	ld	r3, PACATMSCRATCH(r13)		/* user r1 */
++	ld	r3, GPR1(r1)			/* user r1 */
+ 	ld	r4, GPR7(r1)			/* user r7 */
+ 	ld	r5, GPR11(r1)			/* user r11 */
+ 	ld	r6, GPR12(r1)			/* user r12 */
+-	GET_SCRATCH0(8)				/* user r13 */
++	ld	r8, GPR13(r1)			/* user r13 */
+ 	std	r3, GPR1(r7)
+ 	std	r4, GPR7(r7)
+ 	std	r5, GPR11(r7)
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index b5a71baedbc2..59d07bd5374a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1204,7 +1204,9 @@ int find_and_online_cpu_nid(int cpu)
+ 	int new_nid;
+ 
+ 	/* Use associativity from first thread for all siblings */
+-	vphn_get_associativity(cpu, associativity);
++	if (vphn_get_associativity(cpu, associativity))
++		return cpu_to_node(cpu);
++
+ 	new_nid = associativity_to_nid(associativity);
+ 	if (new_nid < 0 || !node_possible(new_nid))
+ 		new_nid = first_online_node;
+diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
+new file mode 100644
+index 000000000000..c9fecd120d18
+--- /dev/null
++++ b/arch/riscv/include/asm/asm-prototypes.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_RISCV_PROTOTYPES_H
++
++#include <linux/ftrace.h>
++#include <asm-generic/asm-prototypes.h>
++
++#endif /* _ASM_RISCV_PROTOTYPES_H */
+diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
+index eaa843a52907..a480356e0ed8 100644
+--- a/arch/x86/boot/compressed/mem_encrypt.S
++++ b/arch/x86/boot/compressed/mem_encrypt.S
+@@ -25,20 +25,6 @@ ENTRY(get_sev_encryption_bit)
+ 	push	%ebx
+ 	push	%ecx
+ 	push	%edx
+-	push	%edi
+-
+-	/*
+-	 * RIP-relative addressing is needed to access the encryption bit
+-	 * variable. Since we are running in 32-bit mode we need this call/pop
+-	 * sequence to get the proper relative addressing.
+-	 */
+-	call	1f
+-1:	popl	%edi
+-	subl	$1b, %edi
+-
+-	movl	enc_bit(%edi), %eax
+-	cmpl	$0, %eax
+-	jge	.Lsev_exit
+ 
+ 	/* Check if running under a hypervisor */
+ 	movl	$1, %eax
+@@ -69,15 +55,12 @@ ENTRY(get_sev_encryption_bit)
+ 
+ 	movl	%ebx, %eax
+ 	andl	$0x3f, %eax		/* Return the encryption bit location */
+-	movl	%eax, enc_bit(%edi)
+ 	jmp	.Lsev_exit
+ 
+ .Lno_sev:
+ 	xor	%eax, %eax
+-	movl	%eax, enc_bit(%edi)
+ 
+ .Lsev_exit:
+-	pop	%edi
+ 	pop	%edx
+ 	pop	%ecx
+ 	pop	%ebx
+@@ -113,8 +96,6 @@ ENTRY(set_sev_encryption_mask)
+ ENDPROC(set_sev_encryption_mask)
+ 
+ 	.data
+-enc_bit:
+-	.int	0xffffffff
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	.balign	8
+diff --git a/drivers/clocksource/timer-fttmr010.c b/drivers/clocksource/timer-fttmr010.c
+index c020038ebfab..cf93f6419b51 100644
+--- a/drivers/clocksource/timer-fttmr010.c
++++ b/drivers/clocksource/timer-fttmr010.c
+@@ -130,13 +130,17 @@ static int fttmr010_timer_set_next_event(unsigned long cycles,
+ 	cr &= ~fttmr010->t1_enable_val;
+ 	writel(cr, fttmr010->base + TIMER_CR);
+ 
+-	/* Setup the match register forward/backward in time */
+-	cr = readl(fttmr010->base + TIMER1_COUNT);
+-	if (fttmr010->count_down)
+-		cr -= cycles;
+-	else
+-		cr += cycles;
+-	writel(cr, fttmr010->base + TIMER1_MATCH1);
++	if (fttmr010->count_down) {
++		/*
++		 * ASPEED Timer Controller will load TIMER1_LOAD register
++		 * into TIMER1_COUNT register when the timer is re-enabled.
++		 */
++		writel(cycles, fttmr010->base + TIMER1_LOAD);
++	} else {
++		/* Setup the match register forward in time */
++		cr = readl(fttmr010->base + TIMER1_COUNT);
++		writel(cr + cycles, fttmr010->base + TIMER1_MATCH1);
++	}
+ 
+ 	/* Start */
+ 	cr = readl(fttmr010->base + TIMER_CR);
+diff --git a/drivers/clocksource/timer-ti-32k.c b/drivers/clocksource/timer-ti-32k.c
+index 880a861ab3c8..713214d085e0 100644
+--- a/drivers/clocksource/timer-ti-32k.c
++++ b/drivers/clocksource/timer-ti-32k.c
+@@ -98,6 +98,9 @@ static int __init ti_32k_timer_init(struct device_node *np)
+ 		return -ENXIO;
+ 	}
+ 
++	if (!of_machine_is_compatible("ti,am43"))
++		ti_32k_timer.cs.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
++
+ 	ti_32k_timer.counter = ti_32k_timer.base;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index 0a788d76ed5f..0ec4659795f1 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -615,6 +615,7 @@ static int malidp_bind(struct device *dev)
+ 	drm->irq_enabled = true;
+ 
+ 	ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
++	drm_crtc_vblank_reset(&malidp->crtc);
+ 	if (ret < 0) {
+ 		DRM_ERROR("failed to initialise vblank\n");
+ 		goto vblank_fail;
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index c2e55e5d97f6..1cf6290d6435 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -160,6 +160,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Ice Lake PCH */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 0e5eb0f547d3..b83348416885 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2048,33 +2048,55 @@ static int modify_qp(struct ib_uverbs_file *file,
+ 
+ 	if ((cmd->base.attr_mask & IB_QP_CUR_STATE &&
+ 	    cmd->base.cur_qp_state > IB_QPS_ERR) ||
+-	    cmd->base.qp_state > IB_QPS_ERR) {
++	    (cmd->base.attr_mask & IB_QP_STATE &&
++	    cmd->base.qp_state > IB_QPS_ERR)) {
+ 		ret = -EINVAL;
+ 		goto release_qp;
+ 	}
+ 
+-	attr->qp_state		  = cmd->base.qp_state;
+-	attr->cur_qp_state	  = cmd->base.cur_qp_state;
+-	attr->path_mtu		  = cmd->base.path_mtu;
+-	attr->path_mig_state	  = cmd->base.path_mig_state;
+-	attr->qkey		  = cmd->base.qkey;
+-	attr->rq_psn		  = cmd->base.rq_psn;
+-	attr->sq_psn		  = cmd->base.sq_psn;
+-	attr->dest_qp_num	  = cmd->base.dest_qp_num;
+-	attr->qp_access_flags	  = cmd->base.qp_access_flags;
+-	attr->pkey_index	  = cmd->base.pkey_index;
+-	attr->alt_pkey_index	  = cmd->base.alt_pkey_index;
+-	attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
+-	attr->max_rd_atomic	  = cmd->base.max_rd_atomic;
+-	attr->max_dest_rd_atomic  = cmd->base.max_dest_rd_atomic;
+-	attr->min_rnr_timer	  = cmd->base.min_rnr_timer;
+-	attr->port_num		  = cmd->base.port_num;
+-	attr->timeout		  = cmd->base.timeout;
+-	attr->retry_cnt		  = cmd->base.retry_cnt;
+-	attr->rnr_retry		  = cmd->base.rnr_retry;
+-	attr->alt_port_num	  = cmd->base.alt_port_num;
+-	attr->alt_timeout	  = cmd->base.alt_timeout;
+-	attr->rate_limit	  = cmd->rate_limit;
++	if (cmd->base.attr_mask & IB_QP_STATE)
++		attr->qp_state = cmd->base.qp_state;
++	if (cmd->base.attr_mask & IB_QP_CUR_STATE)
++		attr->cur_qp_state = cmd->base.cur_qp_state;
++	if (cmd->base.attr_mask & IB_QP_PATH_MTU)
++		attr->path_mtu = cmd->base.path_mtu;
++	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
++		attr->path_mig_state = cmd->base.path_mig_state;
++	if (cmd->base.attr_mask & IB_QP_QKEY)
++		attr->qkey = cmd->base.qkey;
++	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
++		attr->rq_psn = cmd->base.rq_psn;
++	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
++		attr->sq_psn = cmd->base.sq_psn;
++	if (cmd->base.attr_mask & IB_QP_DEST_QPN)
++		attr->dest_qp_num = cmd->base.dest_qp_num;
++	if (cmd->base.attr_mask & IB_QP_ACCESS_FLAGS)
++		attr->qp_access_flags = cmd->base.qp_access_flags;
++	if (cmd->base.attr_mask & IB_QP_PKEY_INDEX)
++		attr->pkey_index = cmd->base.pkey_index;
++	if (cmd->base.attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY)
++		attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
++	if (cmd->base.attr_mask & IB_QP_MAX_QP_RD_ATOMIC)
++		attr->max_rd_atomic = cmd->base.max_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
++		attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MIN_RNR_TIMER)
++		attr->min_rnr_timer = cmd->base.min_rnr_timer;
++	if (cmd->base.attr_mask & IB_QP_PORT)
++		attr->port_num = cmd->base.port_num;
++	if (cmd->base.attr_mask & IB_QP_TIMEOUT)
++		attr->timeout = cmd->base.timeout;
++	if (cmd->base.attr_mask & IB_QP_RETRY_CNT)
++		attr->retry_cnt = cmd->base.retry_cnt;
++	if (cmd->base.attr_mask & IB_QP_RNR_RETRY)
++		attr->rnr_retry = cmd->base.rnr_retry;
++	if (cmd->base.attr_mask & IB_QP_ALT_PATH) {
++		attr->alt_port_num = cmd->base.alt_port_num;
++		attr->alt_timeout = cmd->base.alt_timeout;
++		attr->alt_pkey_index = cmd->base.alt_pkey_index;
++	}
++	if (cmd->base.attr_mask & IB_QP_RATE_LIMIT)
++		attr->rate_limit = cmd->rate_limit;
+ 
+ 	if (cmd->base.attr_mask & IB_QP_AV)
+ 		copy_ah_attr_from_uverbs(qp->device, &attr->ah_attr,
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 20b9f31052bf..85cd1a3593d6 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -78,7 +78,7 @@ static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+ /* Mutex to protect the list of bnxt_re devices added */
+ static DEFINE_MUTEX(bnxt_re_dev_lock);
+ static struct workqueue_struct *bnxt_re_wq;
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait);
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev);
+ 
+ /* SR-IOV helper functions */
+ 
+@@ -182,7 +182,7 @@ static void bnxt_re_shutdown(void *p)
+ 	if (!rdev)
+ 		return;
+ 
+-	bnxt_re_ib_unreg(rdev, false);
++	bnxt_re_ib_unreg(rdev);
+ }
+ 
+ static void bnxt_re_stop_irq(void *handle)
+@@ -251,7 +251,7 @@ static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
+ /* Driver registration routines used to let the networking driver (bnxt_en)
+  * to know that the RoCE driver is now installed
+  */
+-static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -260,14 +260,9 @@ static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		return -EINVAL;
+ 
+ 	en_dev = rdev->en_dev;
+-	/* Acquire rtnl lock if it is not invokded from netdev event */
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,
+ 						    BNXT_ROCE_ULP);
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -281,14 +276,12 @@ static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	rtnl_lock();
+ 	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
+ 						  &bnxt_re_ulp_ops, rdev);
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+-static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_free_msix(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -298,13 +291,9 @@ static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);
+ 
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -320,7 +309,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 
+ 	num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus());
+ 
+-	rtnl_lock();
+ 	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,
+ 							 rdev->msix_entries,
+ 							 num_msix_want);
+@@ -335,7 +323,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 	}
+ 	rdev->num_msix = num_msix_got;
+ done:
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -358,24 +345,18 @@ static void bnxt_re_fill_fw_msg(struct bnxt_fw_msg *fw_msg, void *msg,
+ 	fw_msg->timeout = timeout;
+ }
+ 
+-static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+-				 bool lock_wait)
++static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_ring_free_input req = {0};
+ 	struct hwrm_ring_free_output resp;
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1);
+ 	req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
+@@ -386,8 +367,6 @@ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+ 	if (rc)
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW ring:%d :%#x", req.ring_id, rc);
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -405,7 +384,6 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1);
+ 	req.enables = 0;
+ 	req.page_tbl_addr =  cpu_to_le64(dma_arr[0]);
+@@ -426,27 +404,21 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 	if (!rc)
+ 		*fw_ring_id = le16_to_cpu(resp.ring_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+-				      u32 fw_stats_ctx_id, bool lock_wait)
++				      u32 fw_stats_ctx_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_stat_ctx_free_input req = {0};
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1);
+ 	req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);
+@@ -457,8 +429,6 @@ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW stats context %#x", rc);
+ 
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -478,7 +448,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+ 	req.update_period_ms = cpu_to_le32(1000);
+@@ -490,7 +459,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 	if (!rc)
+ 		*fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -929,19 +897,19 @@ fail:
+ 	return rc;
+ }
+ 
+-static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < rdev->num_msix - 1; i++) {
+-		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id);
+ 		bnxt_qplib_free_nq(&rdev->nq[i]);
+ 	}
+ }
+ 
+-static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_res(struct bnxt_re_dev *rdev)
+ {
+-	bnxt_re_free_nq_res(rdev, lock_wait);
++	bnxt_re_free_nq_res(rdev);
+ 
+ 	if (rdev->qplib_res.dpi_tbl.max) {
+ 		bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+@@ -1219,7 +1187,7 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
+ 	return 0;
+ }
+ 
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, rc;
+ 
+@@ -1234,28 +1202,27 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		cancel_delayed_work(&rdev->worker);
+ 
+ 	bnxt_re_cleanup_res(rdev);
+-	bnxt_re_free_res(rdev, lock_wait);
++	bnxt_re_free_res(rdev);
+ 
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) {
+ 		rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to deinitialize RCFW: %#x", rc);
+-		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id,
+-					   lock_wait);
++		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ 		bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ 		bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+-		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ 		bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {
+-		rc = bnxt_re_free_msix(rdev, lock_wait);
++		rc = bnxt_re_free_msix(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to free MSI-X vectors: %#x", rc);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) {
+-		rc = bnxt_re_unregister_netdev(rdev, lock_wait);
++		rc = bnxt_re_unregister_netdev(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to unregister with netdev: %#x", rc);
+@@ -1276,6 +1243,12 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, j, rc;
+ 
++	bool locked;
++
++	/* Acquire rtnl lock throughout this function */
++	rtnl_lock();
++	locked = true;
++
+ 	/* Registered a new RoCE device instance to netdev */
+ 	rc = bnxt_re_register_netdev(rdev);
+ 	if (rc) {
+@@ -1374,12 +1347,16 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 		schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
+ 	}
+ 
++	rtnl_unlock();
++	locked = false;
++
+ 	/* Register ib dev */
+ 	rc = bnxt_re_register_ib(rdev);
+ 	if (rc) {
+ 		pr_err("Failed to register with IB: %#x\n", rc);
+ 		goto fail;
+ 	}
++	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	dev_info(rdev_to_dev(rdev), "Device registered successfully");
+ 	for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {
+ 		rc = device_create_file(&rdev->ibdev.dev,
+@@ -1395,7 +1372,6 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 			goto fail;
+ 		}
+ 	}
+-	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+ 			 &rdev->active_width);
+ 	set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags);
+@@ -1404,17 +1380,21 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 
+ 	return 0;
+ free_sctx:
+-	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true);
++	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ free_ctx:
+ 	bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ disable_rcfw:
+ 	bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+ free_ring:
+-	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true);
++	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ free_rcfw:
+ 	bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ fail:
+-	bnxt_re_ib_unreg(rdev, true);
++	if (!locked)
++		rtnl_lock();
++	bnxt_re_ib_unreg(rdev);
++	rtnl_unlock();
++
+ 	return rc;
+ }
+ 
+@@ -1567,7 +1547,7 @@ static int bnxt_re_netdev_event(struct notifier_block *notifier,
+ 		 */
+ 		if (atomic_read(&rdev->sched_count) > 0)
+ 			goto exit;
+-		bnxt_re_ib_unreg(rdev, false);
++		bnxt_re_ib_unreg(rdev);
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 		break;
+@@ -1646,7 +1626,10 @@ static void __exit bnxt_re_mod_exit(void)
+ 		 */
+ 		flush_workqueue(bnxt_re_wq);
+ 		bnxt_re_dev_stop(rdev);
+-		bnxt_re_ib_unreg(rdev, true);
++		/* Acquire the rtnl_lock as the L2 resources are freed here */
++		rtnl_lock();
++		bnxt_re_ib_unreg(rdev);
++		rtnl_unlock();
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 	}
+diff --git a/drivers/input/keyboard/atakbd.c b/drivers/input/keyboard/atakbd.c
+index f1235831283d..fdeda0b0fbd6 100644
+--- a/drivers/input/keyboard/atakbd.c
++++ b/drivers/input/keyboard/atakbd.c
+@@ -79,8 +79,7 @@ MODULE_LICENSE("GPL");
+  */
+ 
+ 
+-static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+-	[0]	 = KEY_GRAVE,
++static unsigned char atakbd_keycode[0x73] = {	/* American layout */
+ 	[1]	 = KEY_ESC,
+ 	[2]	 = KEY_1,
+ 	[3]	 = KEY_2,
+@@ -121,9 +120,9 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[38]	 = KEY_L,
+ 	[39]	 = KEY_SEMICOLON,
+ 	[40]	 = KEY_APOSTROPHE,
+-	[41]	 = KEY_BACKSLASH,	/* FIXME, '#' */
++	[41]	 = KEY_GRAVE,
+ 	[42]	 = KEY_LEFTSHIFT,
+-	[43]	 = KEY_GRAVE,		/* FIXME: '~' */
++	[43]	 = KEY_BACKSLASH,
+ 	[44]	 = KEY_Z,
+ 	[45]	 = KEY_X,
+ 	[46]	 = KEY_C,
+@@ -149,45 +148,34 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[66]	 = KEY_F8,
+ 	[67]	 = KEY_F9,
+ 	[68]	 = KEY_F10,
+-	[69]	 = KEY_ESC,
+-	[70]	 = KEY_DELETE,
+-	[71]	 = KEY_KP7,
+-	[72]	 = KEY_KP8,
+-	[73]	 = KEY_KP9,
++	[71]	 = KEY_HOME,
++	[72]	 = KEY_UP,
+ 	[74]	 = KEY_KPMINUS,
+-	[75]	 = KEY_KP4,
+-	[76]	 = KEY_KP5,
+-	[77]	 = KEY_KP6,
++	[75]	 = KEY_LEFT,
++	[77]	 = KEY_RIGHT,
+ 	[78]	 = KEY_KPPLUS,
+-	[79]	 = KEY_KP1,
+-	[80]	 = KEY_KP2,
+-	[81]	 = KEY_KP3,
+-	[82]	 = KEY_KP0,
+-	[83]	 = KEY_KPDOT,
+-	[90]	 = KEY_KPLEFTPAREN,
+-	[91]	 = KEY_KPRIGHTPAREN,
+-	[92]	 = KEY_KPASTERISK,	/* FIXME */
+-	[93]	 = KEY_KPASTERISK,
+-	[94]	 = KEY_KPPLUS,
+-	[95]	 = KEY_HELP,
++	[80]	 = KEY_DOWN,
++	[82]	 = KEY_INSERT,
++	[83]	 = KEY_DELETE,
+ 	[96]	 = KEY_102ND,
+-	[97]	 = KEY_KPASTERISK,	/* FIXME */
+-	[98]	 = KEY_KPSLASH,
++	[97]	 = KEY_UNDO,
++	[98]	 = KEY_HELP,
+ 	[99]	 = KEY_KPLEFTPAREN,
+ 	[100]	 = KEY_KPRIGHTPAREN,
+ 	[101]	 = KEY_KPSLASH,
+ 	[102]	 = KEY_KPASTERISK,
+-	[103]	 = KEY_UP,
+-	[104]	 = KEY_KPASTERISK,	/* FIXME */
+-	[105]	 = KEY_LEFT,
+-	[106]	 = KEY_RIGHT,
+-	[107]	 = KEY_KPASTERISK,	/* FIXME */
+-	[108]	 = KEY_DOWN,
+-	[109]	 = KEY_KPASTERISK,	/* FIXME */
+-	[110]	 = KEY_KPASTERISK,	/* FIXME */
+-	[111]	 = KEY_KPASTERISK,	/* FIXME */
+-	[112]	 = KEY_KPASTERISK,	/* FIXME */
+-	[113]	 = KEY_KPASTERISK	/* FIXME */
++	[103]	 = KEY_KP7,
++	[104]	 = KEY_KP8,
++	[105]	 = KEY_KP9,
++	[106]	 = KEY_KP4,
++	[107]	 = KEY_KP5,
++	[108]	 = KEY_KP6,
++	[109]	 = KEY_KP1,
++	[110]	 = KEY_KP2,
++	[111]	 = KEY_KP3,
++	[112]	 = KEY_KP0,
++	[113]	 = KEY_KPDOT,
++	[114]	 = KEY_KPENTER,
+ };
+ 
+ static struct input_dev *atakbd_dev;
+@@ -195,21 +183,15 @@ static struct input_dev *atakbd_dev;
+ static void atakbd_interrupt(unsigned char scancode, char down)
+ {
+ 
+-	if (scancode < 0x72) {		/* scancodes < 0xf2 are keys */
++	if (scancode < 0x73) {		/* scancodes < 0xf3 are keys */
+ 
+ 		// report raw events here?
+ 
+ 		scancode = atakbd_keycode[scancode];
+ 
+-		if (scancode == KEY_CAPSLOCK) {	/* CapsLock is a toggle switch key on Amiga */
+-			input_report_key(atakbd_dev, scancode, 1);
+-			input_report_key(atakbd_dev, scancode, 0);
+-			input_sync(atakbd_dev);
+-		} else {
+-			input_report_key(atakbd_dev, scancode, down);
+-			input_sync(atakbd_dev);
+-		}
+-	} else				/* scancodes >= 0xf2 are mouse data, most likely */
++		input_report_key(atakbd_dev, scancode, down);
++		input_sync(atakbd_dev);
++	} else				/* scancodes >= 0xf3 are mouse data, most likely */
+ 		printk(KERN_INFO "atakbd: unhandled scancode %x\n", scancode);
+ 
+ 	return;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index c53363443280..c2b511a16b0e 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -246,7 +246,13 @@ static u16 get_alias(struct device *dev)
+ 
+ 	/* The callers make sure that get_device_id() does not fail here */
+ 	devid = get_device_id(dev);
++
++	/* For ACPI HID devices, we simply return the devid as such */
++	if (!dev_is_pci(dev))
++		return devid;
++
+ 	ivrs_alias = amd_iommu_alias_table[devid];
++
+ 	pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
+ 
+ 	if (ivrs_alias == pci_alias)
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 2b1724e8d307..701820b39fd1 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1242,6 +1242,12 @@ err_unprepare_clocks:
+ 
+ static void rk_iommu_shutdown(struct platform_device *pdev)
+ {
++	struct rk_iommu *iommu = platform_get_drvdata(pdev);
++	int i = 0, irq;
++
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO)
++		devm_free_irq(iommu->dev, irq, iommu);
++
+ 	pm_runtime_force_suspend(&pdev->dev);
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 666d319d3d1a..1f6c1eefe389 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -402,8 +402,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			if (msg[0].addr == state->af9033_i2c_addr[1])
+ 				reg |= 0x100000;
+ 
+-			ret = af9035_wr_regs(d, reg, &msg[0].buf[3],
+-					msg[0].len - 3);
++			ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg,
++							         &msg[0].buf[3],
++							         msg[0].len - 3)
++					        : -EOPNOTSUPP;
+ 		} else {
+ 			/* I2C write */
+ 			u8 buf[MAX_XFER_SIZE];
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+index 09e38f0733bd..10b9cb2185b1 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+@@ -753,7 +753,6 @@ struct cpl_abort_req_rss {
+ };
+ 
+ struct cpl_abort_req_rss6 {
+-	WR_HDR;
+ 	union opcode_tid ot;
+ 	__u32 srqidx_status;
+ };
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 372664686309..129f4e9f38da 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -2677,12 +2677,17 @@ static int emac_init_phy(struct emac_instance *dev)
+ 		if (of_phy_is_fixed_link(np)) {
+ 			int res = emac_dt_mdio_probe(dev);
+ 
+-			if (!res) {
+-				res = of_phy_register_fixed_link(np);
+-				if (res)
+-					mdiobus_unregister(dev->mii_bus);
++			if (res)
++				return res;
++
++			res = of_phy_register_fixed_link(np);
++			dev->phy_dev = of_phy_find_device(np);
++			if (res || !dev->phy_dev) {
++				mdiobus_unregister(dev->mii_bus);
++				return res ? res : -EINVAL;
+ 			}
+-			return res;
++			emac_adjust_link(dev->ndev);
++			put_device(&dev->phy_dev->mdio.dev);
+ 		}
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
+index 1f3372c1802e..2df92dbd38e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
+@@ -240,7 +240,8 @@ static void mlx4_set_eq_affinity_hint(struct mlx4_priv *priv, int vec)
+ 	struct mlx4_dev *dev = &priv->dev;
+ 	struct mlx4_eq *eq = &priv->eq_table.eq[vec];
+ 
+-	if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask))
++	if (!cpumask_available(eq->affinity_mask) ||
++	    cpumask_empty(eq->affinity_mask))
+ 		return;
+ 
+ 	hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index e0680ce91328..09ed0ba4225a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -190,6 +190,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data)
+ 
+ static void
+ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
++		    struct qed_hwfn *p_hwfn,
+ 		    struct qed_hw_info *p_info,
+ 		    bool enable,
+ 		    u8 prio,
+@@ -206,6 +207,11 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
+ 	else
+ 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
+ 
++	/* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */
++	if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) ||
++	     test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits)))
++		p_data->arr[type].dont_add_vlan0 = true;
++
+ 	/* QM reconf data */
+ 	if (p_info->personality == personality)
+ 		p_info->offload_tc = tc;
+@@ -233,7 +239,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
+ 		personality = qed_dcbx_app_update[i].personality;
+ 		name = qed_dcbx_app_update[i].name;
+ 
+-		qed_dcbx_set_params(p_data, p_info, enable,
++		qed_dcbx_set_params(p_data, p_hwfn, p_info, enable,
+ 				    prio, tc, type, personality);
+ 	}
+ }
+@@ -956,6 +962,7 @@ static void qed_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
+ 	p_data->dcb_enable_flag = p_src->arr[type].enable;
+ 	p_data->dcb_priority = p_src->arr[type].priority;
+ 	p_data->dcb_tc = p_src->arr[type].tc;
++	p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0;
+ }
+ 
+ /* Set pf update ramrod command params */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+index 5feb90e049e0..d950d836858c 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+@@ -55,6 +55,7 @@ struct qed_dcbx_app_data {
+ 	u8 update;		/* Update indication */
+ 	u8 priority;		/* Priority */
+ 	u8 tc;			/* Traffic Class */
++	bool dont_add_vlan0;	/* Do not insert a vlan tag with id 0 */
+ };
+ 
+ #define QED_DCBX_VERSION_DISABLED       0
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index e5249b4741d0..194f4dbe57d3 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -1636,7 +1636,7 @@ static int qed_vf_start(struct qed_hwfn *p_hwfn,
+ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ {
+ 	struct qed_load_req_params load_req_params;
+-	u32 load_code, param, drv_mb_param;
++	u32 load_code, resp, param, drv_mb_param;
+ 	bool b_default_mtu = true;
+ 	struct qed_hwfn *p_hwfn;
+ 	int rc = 0, mfw_rc, i;
+@@ -1782,6 +1782,19 @@ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ 
+ 	if (IS_PF(cdev)) {
+ 		p_hwfn = QED_LEADING_HWFN(cdev);
++
++		/* Get pre-negotiated values for stag, bandwidth etc. */
++		DP_VERBOSE(p_hwfn,
++			   QED_MSG_SPQ,
++			   "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
++		drv_mb_param = 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET;
++		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
++				 DRV_MSG_CODE_GET_OEM_UPDATES,
++				 drv_mb_param, &resp, &param);
++		if (rc)
++			DP_NOTICE(p_hwfn,
++				  "Failed to send GET_OEM_UPDATES attention request\n");
++
+ 		drv_mb_param = STORM_FW_VERSION;
+ 		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+ 				 DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index 463ffa83685f..ec5de7cf1af4 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -12415,6 +12415,7 @@ struct public_drv_mb {
+ #define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
+ #define DRV_MSG_CODE_OV_UPDATE_WOL              0x38000000
+ #define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE     0x39000000
++#define DRV_MSG_CODE_GET_OEM_UPDATES            0x41000000
+ 
+ #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
+ #define DRV_MSG_CODE_NIG_DRAIN			0x30000000
+@@ -12540,6 +12541,9 @@ struct public_drv_mb {
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEB	0x1
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEPA	0x2
+ 
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK	0x1
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET	0
++
+ #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
+ #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
+ #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
+diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
+index b81f4faf7b10..1c40989479bd 100644
+--- a/drivers/net/ethernet/renesas/ravb.h
++++ b/drivers/net/ethernet/renesas/ravb.h
+@@ -431,6 +431,7 @@ enum EIS_BIT {
+ 	EIS_CULF1	= 0x00000080,
+ 	EIS_TFFF	= 0x00000100,
+ 	EIS_QFS		= 0x00010000,
++	EIS_RESERVED	= (GENMASK(31, 17) | GENMASK(15, 11)),
+ };
+ 
+ /* RIC0 */
+@@ -475,6 +476,7 @@ enum RIS0_BIT {
+ 	RIS0_FRF15	= 0x00008000,
+ 	RIS0_FRF16	= 0x00010000,
+ 	RIS0_FRF17	= 0x00020000,
++	RIS0_RESERVED	= GENMASK(31, 18),
+ };
+ 
+ /* RIC1 */
+@@ -531,6 +533,7 @@ enum RIS2_BIT {
+ 	RIS2_QFF16	= 0x00010000,
+ 	RIS2_QFF17	= 0x00020000,
+ 	RIS2_RFFF	= 0x80000000,
++	RIS2_RESERVED	= GENMASK(30, 18),
+ };
+ 
+ /* TIC */
+@@ -547,6 +550,7 @@ enum TIS_BIT {
+ 	TIS_FTF1	= 0x00000002,	/* Undocumented? */
+ 	TIS_TFUF	= 0x00000100,
+ 	TIS_TFWF	= 0x00000200,
++	TIS_RESERVED	= (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4))
+ };
+ 
+ /* ISS */
+@@ -620,6 +624,7 @@ enum GIC_BIT {
+ enum GIS_BIT {
+ 	GIS_PTCF	= 0x00000001,	/* Undocumented? */
+ 	GIS_PTMF	= 0x00000004,
++	GIS_RESERVED	= GENMASK(15, 10),
+ };
+ 
+ /* GIE (R-Car Gen3 only) */
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 0d811c02ff34..db4e306ca996 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -742,10 +742,11 @@ static void ravb_error_interrupt(struct net_device *ndev)
+ 	u32 eis, ris2;
+ 
+ 	eis = ravb_read(ndev, EIS);
+-	ravb_write(ndev, ~EIS_QFS, EIS);
++	ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);
+ 	if (eis & EIS_QFS) {
+ 		ris2 = ravb_read(ndev, RIS2);
+-		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF), RIS2);
++		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),
++			   RIS2);
+ 
+ 		/* Receive Descriptor Empty int */
+ 		if (ris2 & RIS2_QFF0)
+@@ -798,7 +799,7 @@ static bool ravb_timestamp_interrupt(struct net_device *ndev)
+ 	u32 tis = ravb_read(ndev, TIS);
+ 
+ 	if (tis & TIS_TFUF) {
+-		ravb_write(ndev, ~TIS_TFUF, TIS);
++		ravb_write(ndev, ~(TIS_TFUF | TIS_RESERVED), TIS);
+ 		ravb_get_tx_tstamp(ndev);
+ 		return true;
+ 	}
+@@ -933,7 +934,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		/* Processing RX Descriptor Ring */
+ 		if (ris0 & mask) {
+ 			/* Clear RX interrupt */
+-			ravb_write(ndev, ~mask, RIS0);
++			ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+ 			if (ravb_rx(ndev, &quota, q))
+ 				goto out;
+ 		}
+@@ -941,7 +942,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		if (tis & mask) {
+ 			spin_lock_irqsave(&priv->lock, flags);
+ 			/* Clear TX interrupt */
+-			ravb_write(ndev, ~mask, TIS);
++			ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+ 			ravb_tx_free(ndev, q, true);
+ 			netif_wake_subqueue(ndev, q);
+ 			mmiowb();
+diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c
+index eede70ec37f8..9e3222fd69f9 100644
+--- a/drivers/net/ethernet/renesas/ravb_ptp.c
++++ b/drivers/net/ethernet/renesas/ravb_ptp.c
+@@ -319,7 +319,7 @@ void ravb_ptp_interrupt(struct net_device *ndev)
+ 		}
+ 	}
+ 
+-	ravb_write(ndev, ~gis, GIS);
++	ravb_write(ndev, ~(gis | GIS_RESERVED), GIS);
+ }
+ 
+ void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev)
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 778c4f76a884..2153956a0b20 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -135,7 +135,7 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -178,7 +178,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -236,7 +236,7 @@ static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+@@ -282,7 +282,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
+index bee4e2535a61..b99d1d72dd12 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.h
++++ b/drivers/pci/controller/dwc/pcie-designware.h
+@@ -26,8 +26,7 @@
+ 
+ /* Parameters for the waiting for iATU enabled routine */
+ #define LINK_WAIT_MAX_IATU_RETRIES	5
+-#define LINK_WAIT_IATU_MIN		9000
+-#define LINK_WAIT_IATU_MAX		10000
++#define LINK_WAIT_IATU			9
+ 
+ /* Synopsys-specific PCIe configuration registers */
+ #define PCIE_PORT_LINK_CONTROL		0x710
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index b91db89eb924..d3ba867d01f0 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -348,21 +348,12 @@ static void amd_gpio_irq_enable(struct irq_data *d)
+ 	unsigned long flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+-	u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF);
+ 
+ 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 	pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
+ 	pin_reg |= BIT(INTERRUPT_ENABLE_OFF);
+ 	pin_reg |= BIT(INTERRUPT_MASK_OFF);
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+-	/*
+-	 * When debounce logic is enabled it takes ~900 us before interrupts
+-	 * can be enabled.  During this "debounce warm up" period the
+-	 * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it
+-	 * reads back as 1, signaling that interrupts are now enabled.
+-	 */
+-	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
+-		continue;
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+ 
+@@ -426,7 +417,7 @@ static void amd_gpio_irq_eoi(struct irq_data *d)
+ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ 	int ret = 0;
+-	u32 pin_reg;
++	u32 pin_reg, pin_reg_irq_en, mask;
+ 	unsigned long flags, irq_flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+@@ -495,6 +486,28 @@ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	}
+ 
+ 	pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF;
++	/*
++	 * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the
++	 * debounce registers of any GPIO will block wake/interrupt status
++	 * generation for *all* GPIOs for a length of time that depends on
++	 * WAKE_INT_MASTER_REG.MaskStsLength[11:0].  During this period the
++	 * INTERRUPT_ENABLE bit will read as 0.
++	 *
++	 * We temporarily enable irq for the GPIO whose configuration is
++	 * changing, and then wait for it to read back as 1 to know when
++	 * debounce has settled and then disable the irq again.
++	 * We do this polling with the spinlock held to ensure other GPIO
++	 * access routines do not read an incorrect value for the irq enable
++	 * bit of other GPIOs.  We keep the GPIO masked while polling to avoid
++	 * spurious irqs, and disable the irq again after polling.
++	 */
++	mask = BIT(INTERRUPT_ENABLE_OFF);
++	pin_reg_irq_en = pin_reg;
++	pin_reg_irq_en |= mask;
++	pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF);
++	writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4);
++	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
++		continue;
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+index c3a76af9f5fa..ada1ebebd325 100644
+--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+@@ -3475,11 +3475,10 @@ static int ibmvscsis_probe(struct vio_dev *vdev,
+ 		vscsi->dds.window[LOCAL].liobn,
+ 		vscsi->dds.window[REMOTE].liobn);
+ 
+-	strcpy(vscsi->eye, "VSCSI ");
+-	strncat(vscsi->eye, vdev->name, MAX_EYE);
++	snprintf(vscsi->eye, sizeof(vscsi->eye), "VSCSI %s", vdev->name);
+ 
+ 	vscsi->dds.unit_id = vdev->unit_address;
+-	strncpy(vscsi->dds.partition_name, partition_name,
++	strscpy(vscsi->dds.partition_name, partition_name,
+ 		sizeof(vscsi->dds.partition_name));
+ 	vscsi->dds.partition_num = partition_number;
+ 
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 02d65dce74e5..2e8a91341254 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -3310,6 +3310,65 @@ static void ipr_release_dump(struct kref *kref)
+ 	LEAVE;
+ }
+ 
++static void ipr_add_remove_thread(struct work_struct *work)
++{
++	unsigned long lock_flags;
++	struct ipr_resource_entry *res;
++	struct scsi_device *sdev;
++	struct ipr_ioa_cfg *ioa_cfg =
++		container_of(work, struct ipr_ioa_cfg, scsi_add_work_q);
++	u8 bus, target, lun;
++	int did_work;
++
++	ENTER;
++	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++
++restart:
++	do {
++		did_work = 0;
++		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			return;
++		}
++
++		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++			if (res->del_from_ml && res->sdev) {
++				did_work = 1;
++				sdev = res->sdev;
++				if (!scsi_device_get(sdev)) {
++					if (!res->add_to_ml)
++						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
++					else
++						res->del_from_ml = 0;
++					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++					scsi_remove_device(sdev);
++					scsi_device_put(sdev);
++					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++				}
++				break;
++			}
++		}
++	} while (did_work);
++
++	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++		if (res->add_to_ml) {
++			bus = res->bus;
++			target = res->target;
++			lun = res->lun;
++			res->add_to_ml = 0;
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			scsi_add_device(ioa_cfg->host, bus, target, lun);
++			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++			goto restart;
++		}
++	}
++
++	ioa_cfg->scan_done = 1;
++	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
++	LEAVE;
++}
++
+ /**
+  * ipr_worker_thread - Worker thread
+  * @work:		ioa config struct
+@@ -3324,13 +3383,9 @@ static void ipr_release_dump(struct kref *kref)
+ static void ipr_worker_thread(struct work_struct *work)
+ {
+ 	unsigned long lock_flags;
+-	struct ipr_resource_entry *res;
+-	struct scsi_device *sdev;
+ 	struct ipr_dump *dump;
+ 	struct ipr_ioa_cfg *ioa_cfg =
+ 		container_of(work, struct ipr_ioa_cfg, work_q);
+-	u8 bus, target, lun;
+-	int did_work;
+ 
+ 	ENTER;
+ 	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+@@ -3368,49 +3423,9 @@ static void ipr_worker_thread(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-restart:
+-	do {
+-		did_work = 0;
+-		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			return;
+-		}
++	schedule_work(&ioa_cfg->scsi_add_work_q);
+ 
+-		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-			if (res->del_from_ml && res->sdev) {
+-				did_work = 1;
+-				sdev = res->sdev;
+-				if (!scsi_device_get(sdev)) {
+-					if (!res->add_to_ml)
+-						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+-					else
+-						res->del_from_ml = 0;
+-					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-					scsi_remove_device(sdev);
+-					scsi_device_put(sdev);
+-					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-				}
+-				break;
+-			}
+-		}
+-	} while (did_work);
+-
+-	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-		if (res->add_to_ml) {
+-			bus = res->bus;
+-			target = res->target;
+-			lun = res->lun;
+-			res->add_to_ml = 0;
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			scsi_add_device(ioa_cfg->host, bus, target, lun);
+-			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-			goto restart;
+-		}
+-	}
+-
+-	ioa_cfg->scan_done = 1;
+ 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
+ 	LEAVE;
+ }
+ 
+@@ -9908,6 +9923,7 @@ static void ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
+ 	INIT_LIST_HEAD(&ioa_cfg->free_res_q);
+ 	INIT_LIST_HEAD(&ioa_cfg->used_res_q);
+ 	INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread);
++	INIT_WORK(&ioa_cfg->scsi_add_work_q, ipr_add_remove_thread);
+ 	init_waitqueue_head(&ioa_cfg->reset_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->msi_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->eeh_wait_q);
+diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
+index 93570734cbfb..a98cfd24035a 100644
+--- a/drivers/scsi/ipr.h
++++ b/drivers/scsi/ipr.h
+@@ -1568,6 +1568,7 @@ struct ipr_ioa_cfg {
+ 	u8 saved_mode_page_len;
+ 
+ 	struct work_struct work_q;
++	struct work_struct scsi_add_work_q;
+ 	struct workqueue_struct *reset_work_q;
+ 
+ 	wait_queue_head_t reset_wait_q;
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 729d343861f4..de64cbb0e3d5 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -320,12 +320,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 			localport->port_id, statep);
+ 
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
++		nrport = NULL;
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		/* local short-hand pointer. */
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+@@ -3304,6 +3304,7 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 	struct lpfc_nodelist  *ndlp;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
+ 	struct lpfc_nvme_rport *rport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ #endif
+ 
+ 	shost = lpfc_shost_from_vport(vport);
+@@ -3314,8 +3315,12 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 		if (ndlp->rport)
+ 			ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+ 		if (rport)
++			remoteport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
++		if (remoteport)
+ 			nvme_fc_set_remoteport_devloss(rport->remoteport,
+ 						       vport->cfg_devloss_tmo);
+ #endif
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 9df0c051349f..aec5b10a8c85 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -551,7 +551,7 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	unsigned char *statep;
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvmet_tgtport *tgtp;
+-	struct nvme_fc_remote_port *nrport;
++	struct nvme_fc_remote_port *nrport = NULL;
+ 	struct lpfc_nvme_rport *rport;
+ 
+ 	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+@@ -696,11 +696,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	len += snprintf(buf + len, size - len, "\tRport List:\n");
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		/* local short-hand pointer. */
++		spin_lock(&phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index cab1fb087e6a..0960dcaf1684 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2718,7 +2718,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+ 	rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	oldrport = lpfc_ndlp_get_nrport(ndlp);
++	spin_unlock_irq(&vport->phba->hbalock);
+ 	if (!oldrport)
+ 		lpfc_nlp_get(ndlp);
+ 
+@@ -2833,7 +2835,7 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvme_lport *lport;
+ 	struct lpfc_nvme_rport *rport;
+-	struct nvme_fc_remote_port *remoteport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ 
+ 	localport = vport->localport;
+ 
+@@ -2847,11 +2849,14 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	if (!lport)
+ 		goto input_err;
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	rport = lpfc_ndlp_get_nrport(ndlp);
+-	if (!rport)
++	if (rport)
++		remoteport = rport->remoteport;
++	spin_unlock_irq(&vport->phba->hbalock);
++	if (!remoteport)
+ 		goto input_err;
+ 
+-	remoteport = rport->remoteport;
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6033 Unreg nvme remoteport %p, portname x%llx, "
+ 			 "port_id x%06x, portstate x%x port type x%x\n",
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 9421d9877730..0949d3db56e7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1277,7 +1277,8 @@ static int sd_init_command(struct scsi_cmnd *cmd)
+ 	case REQ_OP_ZONE_RESET:
+ 		return sd_zbc_setup_reset_cmnd(cmd);
+ 	default:
+-		BUG();
++		WARN_ON_ONCE(1);
++		return BLKPREP_KILL;
+ 	}
+ }
+ 
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 4b5e250e8615..e5c7e1ef6318 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -899,9 +899,10 @@ static void sdw_release_master_stream(struct sdw_stream_runtime *stream)
+ 	struct sdw_master_runtime *m_rt = stream->m_rt;
+ 	struct sdw_slave_runtime *s_rt, *_s_rt;
+ 
+-	list_for_each_entry_safe(s_rt, _s_rt,
+-			&m_rt->slave_rt_list, m_rt_node)
+-		sdw_stream_remove_slave(s_rt->slave, stream);
++	list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) {
++		sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream);
++		sdw_release_slave_stream(s_rt->slave, stream);
++	}
+ 
+ 	list_del(&m_rt->bus_node);
+ }
+@@ -1112,7 +1113,7 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 				"Master runtime config failed for stream:%s",
+ 				stream->name);
+ 		ret = -ENOMEM;
+-		goto error;
++		goto unlock;
+ 	}
+ 
+ 	ret = sdw_config_stream(bus->dev, stream, stream_config, false);
+@@ -1123,11 +1124,11 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 	if (ret)
+ 		goto stream_error;
+ 
+-	stream->state = SDW_STREAM_CONFIGURED;
++	goto unlock;
+ 
+ stream_error:
+ 	sdw_release_master_stream(stream);
+-error:
++unlock:
+ 	mutex_unlock(&bus->bus_lock);
+ 	return ret;
+ }
+@@ -1141,6 +1142,10 @@ EXPORT_SYMBOL(sdw_stream_add_master);
+  * @stream: SoundWire stream
+  * @port_config: Port configuration for audio stream
+  * @num_ports: Number of ports
++ *
++ * It is expected that Slave is added before adding Master
++ * to the Stream.
++ *
+  */
+ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 		struct sdw_stream_config *stream_config,
+@@ -1186,6 +1191,12 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 	if (ret)
+ 		goto stream_error;
+ 
++	/*
++	 * Change stream state to CONFIGURED on first Slave add.
++	 * Bus is not aware of number of Slave(s) in a stream at this
++	 * point so cannot depend on all Slave(s) to be added in order to
++	 * change stream state to CONFIGURED.
++	 */
+ 	stream->state = SDW_STREAM_CONFIGURED;
+ 	goto error;
+ 
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 6ae92d4dca19..3b518ead504e 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -287,8 +287,8 @@ static int spi_gpio_request(struct device *dev,
+ 		*mflags |= SPI_MASTER_NO_RX;
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+-	if (IS_ERR(spi_gpio->mosi))
+-		return PTR_ERR(spi_gpio->mosi);
++	if (IS_ERR(spi_gpio->sck))
++		return PTR_ERR(spi_gpio->sck);
+ 
+ 	for (i = 0; i < num_chipselects; i++) {
+ 		spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs",
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1949e0939d40..bd2f4c68506a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file_inode(file)->i_sb);
++	sb_start_write(file->f_path.mnt->mnt_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file_inode(file)->i_sb);
++		sb_end_write(file->f_path.mnt->mnt_sb);
+ 	return ret;
+ }
+ 
+@@ -540,8 +540,7 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	__mnt_drop_write_file(file);
+-	sb_end_write(file_inode(file)->i_sb);
++	mnt_drop_write(file->f_path.mnt);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index a8a126259bc4..0bec79ae4c2d 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned char *vec);
+ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 			 unsigned long new_addr, unsigned long old_end,
+-			 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush);
++			 pmd_t *old_pmd, pmd_t *new_pmd);
+ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned long addr, pgprot_t newprot,
+ 			int prot_numa);
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index f833a60699ad..e60078ffb302 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -132,6 +132,7 @@ struct smap_psock {
+ 	struct work_struct gc_work;
+ 
+ 	struct proto *sk_proto;
++	void (*save_unhash)(struct sock *sk);
+ 	void (*save_close)(struct sock *sk, long timeout);
+ 	void (*save_data_ready)(struct sock *sk);
+ 	void (*save_write_space)(struct sock *sk);
+@@ -143,6 +144,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
+ 			    int offset, size_t size, int flags);
++static void bpf_tcp_unhash(struct sock *sk);
+ static void bpf_tcp_close(struct sock *sk, long timeout);
+ 
+ static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
+@@ -184,6 +186,7 @@ static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
+ 			 struct proto *base)
+ {
+ 	prot[SOCKMAP_BASE]			= *base;
++	prot[SOCKMAP_BASE].unhash		= bpf_tcp_unhash;
+ 	prot[SOCKMAP_BASE].close		= bpf_tcp_close;
+ 	prot[SOCKMAP_BASE].recvmsg		= bpf_tcp_recvmsg;
+ 	prot[SOCKMAP_BASE].stream_memory_read	= bpf_tcp_stream_read;
+@@ -217,6 +220,7 @@ static int bpf_tcp_init(struct sock *sk)
+ 		return -EBUSY;
+ 	}
+ 
++	psock->save_unhash = sk->sk_prot->unhash;
+ 	psock->save_close = sk->sk_prot->close;
+ 	psock->sk_proto = sk->sk_prot;
+ 
+@@ -305,30 +309,12 @@ static struct smap_psock_map_entry *psock_map_pop(struct sock *sk,
+ 	return e;
+ }
+ 
+-static void bpf_tcp_close(struct sock *sk, long timeout)
++static void bpf_tcp_remove(struct sock *sk, struct smap_psock *psock)
+ {
+-	void (*close_fun)(struct sock *sk, long timeout);
+ 	struct smap_psock_map_entry *e;
+ 	struct sk_msg_buff *md, *mtmp;
+-	struct smap_psock *psock;
+ 	struct sock *osk;
+ 
+-	lock_sock(sk);
+-	rcu_read_lock();
+-	psock = smap_psock_sk(sk);
+-	if (unlikely(!psock)) {
+-		rcu_read_unlock();
+-		release_sock(sk);
+-		return sk->sk_prot->close(sk, timeout);
+-	}
+-
+-	/* The psock may be destroyed anytime after exiting the RCU critial
+-	 * section so by the time we use close_fun the psock may no longer
+-	 * be valid. However, bpf_tcp_close is called with the sock lock
+-	 * held so the close hook and sk are still valid.
+-	 */
+-	close_fun = psock->save_close;
+-
+ 	if (psock->cork) {
+ 		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+@@ -379,6 +365,42 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
++}
++
++static void bpf_tcp_unhash(struct sock *sk)
++{
++	void (*unhash_fun)(struct sock *sk);
++	struct smap_psock *psock;
++
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		if (sk->sk_prot->unhash)
++			sk->sk_prot->unhash(sk);
++		return;
++	}
++	unhash_fun = psock->save_unhash;
++	bpf_tcp_remove(sk, psock);
++	rcu_read_unlock();
++	unhash_fun(sk);
++}
++
++static void bpf_tcp_close(struct sock *sk, long timeout)
++{
++	void (*close_fun)(struct sock *sk, long timeout);
++	struct smap_psock *psock;
++
++	lock_sock(sk);
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		release_sock(sk);
++		return sk->sk_prot->close(sk, timeout);
++	}
++	close_fun = psock->save_close;
++	bpf_tcp_remove(sk, psock);
+ 	rcu_read_unlock();
+ 	release_sock(sk);
+ 	close_fun(sk, timeout);
+@@ -2100,8 +2122,12 @@ static int sock_map_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
+ 	if (skops.sk->sk_type != SOCK_STREAM ||
+-	    skops.sk->sk_protocol != IPPROTO_TCP) {
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
+ 		fput(socket->file);
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -2456,6 +2482,16 @@ static int sock_hash_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
++	if (skops.sk->sk_type != SOCK_STREAM ||
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
++		fput(socket->file);
++		return -EOPNOTSUPP;
++	}
++
+ 	lock_sock(skops.sk);
+ 	preempt_disable();
+ 	rcu_read_lock();
+@@ -2544,10 +2580,22 @@ const struct bpf_map_ops sock_hash_ops = {
+ 	.map_release_uref = sock_map_release,
+ };
+ 
++static bool bpf_is_valid_sock_op(struct bpf_sock_ops_kern *ops)
++{
++	return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
++	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB;
++}
+ BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state. This checks that the sock ops triggering the update is
++	 * one indicating we are (or will be soon) in an ESTABLISHED state.
++	 */
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_map_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+@@ -2566,6 +2614,9 @@ BPF_CALL_4(bpf_sock_hash_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_hash_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index f7274e0c8bdc..3238bb2d0c93 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1778,7 +1778,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
+ 
+ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		  unsigned long new_addr, unsigned long old_end,
+-		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
++		  pmd_t *old_pmd, pmd_t *new_pmd)
+ {
+ 	spinlock_t *old_ptl, *new_ptl;
+ 	pmd_t pmd;
+@@ -1809,7 +1809,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		if (new_ptl != old_ptl)
+ 			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+ 		pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
+-		if (pmd_present(pmd) && pmd_dirty(pmd))
++		if (pmd_present(pmd))
+ 			force_flush = true;
+ 		VM_BUG_ON(!pmd_none(*new_pmd));
+ 
+@@ -1820,12 +1820,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		}
+ 		pmd = move_soft_dirty_pmd(pmd);
+ 		set_pmd_at(mm, new_addr, new_pmd, pmd);
+-		if (new_ptl != old_ptl)
+-			spin_unlock(new_ptl);
+ 		if (force_flush)
+ 			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+-		else
+-			*need_flush = true;
++		if (new_ptl != old_ptl)
++			spin_unlock(new_ptl);
+ 		spin_unlock(old_ptl);
+ 		return true;
+ 	}
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 5c2e18505f75..a9617e72e6b7 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
+ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 		unsigned long old_addr, unsigned long old_end,
+ 		struct vm_area_struct *new_vma, pmd_t *new_pmd,
+-		unsigned long new_addr, bool need_rmap_locks, bool *need_flush)
++		unsigned long new_addr, bool need_rmap_locks)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	pte_t *old_pte, *new_pte, pte;
+@@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 
+ 		pte = ptep_get_and_clear(mm, old_addr, old_pte);
+ 		/*
+-		 * If we are remapping a dirty PTE, make sure
++		 * If we are remapping a valid PTE, make sure
+ 		 * to flush TLB before we drop the PTL for the
+-		 * old PTE or we may race with page_mkclean().
++		 * PTE.
+ 		 *
+-		 * This check has to be done after we removed the
+-		 * old PTE from page tables or another thread may
+-		 * dirty it after the check and before the removal.
++		 * NOTE! Both old and new PTL matter: the old one
++		 * for racing with page_mkclean(), the new one to
++		 * make sure the physical page stays valid until
++		 * the TLB entry for the old mapping has been
++		 * flushed.
+ 		 */
+-		if (pte_present(pte) && pte_dirty(pte))
++		if (pte_present(pte))
+ 			force_flush = true;
+ 		pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
+ 		pte = move_soft_dirty_pte(pte);
+@@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 	}
+ 
+ 	arch_leave_lazy_mmu_mode();
++	if (force_flush)
++		flush_tlb_range(vma, old_end - len, old_end);
+ 	if (new_ptl != old_ptl)
+ 		spin_unlock(new_ptl);
+ 	pte_unmap(new_pte - 1);
+-	if (force_flush)
+-		flush_tlb_range(vma, old_end - len, old_end);
+-	else
+-		*need_flush = true;
+ 	pte_unmap_unlock(old_pte - 1, old_ptl);
+ 	if (need_rmap_locks)
+ 		drop_rmap_locks(vma);
+@@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ {
+ 	unsigned long extent, next, old_end;
+ 	pmd_t *old_pmd, *new_pmd;
+-	bool need_flush = false;
+ 	unsigned long mmun_start;	/* For mmu_notifiers */
+ 	unsigned long mmun_end;		/* For mmu_notifiers */
+ 
+@@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 				if (need_rmap_locks)
+ 					take_rmap_locks(vma);
+ 				moved = move_huge_pmd(vma, old_addr, new_addr,
+-						    old_end, old_pmd, new_pmd,
+-						    &need_flush);
++						    old_end, old_pmd, new_pmd);
+ 				if (need_rmap_locks)
+ 					drop_rmap_locks(vma);
+ 				if (moved)
+@@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 		if (extent > next - new_addr)
+ 			extent = next - new_addr;
+ 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
+-			  new_pmd, new_addr, need_rmap_locks, &need_flush);
++			  new_pmd, new_addr, need_rmap_locks);
+ 	}
+-	if (need_flush)
+-		flush_tlb_range(vma, old_end-len, old_addr);
+ 
+ 	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+ 
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 71c20c1d4002..9f481cfdf77d 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -241,7 +241,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ 		 * the packet to be exactly of that size to make the link
+ 		 * throughput estimation effective.
+ 		 */
+-		skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
++		skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len);
+ 
+ 		batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 			   "Sending unicast (probe) ELP packet on interface %s to %pM\n",
+@@ -268,6 +268,7 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 	struct batadv_priv *bat_priv;
+ 	struct sk_buff *skb;
+ 	u32 elp_interval;
++	bool ret;
+ 
+ 	bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
+ 	hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
+@@ -329,8 +330,11 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 		 * may sleep and that is not allowed in an rcu protected
+ 		 * context. Therefore schedule a task for that.
+ 		 */
+-		queue_work(batadv_event_workqueue,
+-			   &hardif_neigh->bat_v.metric_work);
++		ret = queue_work(batadv_event_workqueue,
++				 &hardif_neigh->bat_v.metric_work);
++
++		if (!ret)
++			batadv_hardif_neigh_put(hardif_neigh);
+ 	}
+ 	rcu_read_unlock();
+ 
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index a2de5a44bd41..58c093caf49e 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -1772,6 +1772,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ {
+ 	struct batadv_bla_backbone_gw *backbone_gw;
+ 	struct ethhdr *ethhdr;
++	bool ret;
+ 
+ 	ethhdr = eth_hdr(skb);
+ 
+@@ -1795,8 +1796,13 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 	if (unlikely(!backbone_gw))
+ 		return true;
+ 
+-	queue_work(batadv_event_workqueue, &backbone_gw->report_work);
+-	/* backbone_gw is unreferenced in the report work function function */
++	ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work);
++
++	/* backbone_gw is unreferenced in the report work function
++	 * if queue_work() call was successful
++	 */
++	if (!ret)
++		batadv_backbone_gw_put(backbone_gw);
+ 
+ 	return true;
+ }
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index 8b198ee798c9..140c61a3f1ec 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -32,6 +32,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/rculist.h>
+@@ -348,6 +349,9 @@ out:
+  * @bat_priv: the bat priv with all the soft interface information
+  * @orig_node: originator announcing gateway capabilities
+  * @gateway: announced bandwidth information
++ *
++ * Has to be called with the appropriate locks being acquired
++ * (gw.list_lock).
+  */
+ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 			       struct batadv_orig_node *orig_node,
+@@ -355,6 +359,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node;
+ 
++	lockdep_assert_held(&bat_priv->gw.list_lock);
++
+ 	if (gateway->bandwidth_down == 0)
+ 		return;
+ 
+@@ -369,10 +375,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 	gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);
+ 	gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);
+ 
+-	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	kref_get(&gw_node->refcount);
+ 	hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list);
+-	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 		   "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",
+@@ -428,11 +432,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node, *curr_gw = NULL;
+ 
++	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	gw_node = batadv_gw_node_get(bat_priv, orig_node);
+ 	if (!gw_node) {
+ 		batadv_gw_node_add(bat_priv, orig_node, gateway);
++		spin_unlock_bh(&bat_priv->gw.list_lock);
+ 		goto out;
+ 	}
++	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) &&
+ 	    gw_node->bandwidth_up == ntohl(gateway->bandwidth_up))
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index c3578444f3cb..34caf129a9bf 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -854,16 +854,27 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	spinlock_t *lock; /* Used to lock list selected by "int in_coding" */
+ 	struct list_head *list;
+ 
++	/* Select ingoing or outgoing coding node */
++	if (in_coding) {
++		lock = &orig_neigh_node->in_coding_list_lock;
++		list = &orig_neigh_node->in_coding_list;
++	} else {
++		lock = &orig_neigh_node->out_coding_list_lock;
++		list = &orig_neigh_node->out_coding_list;
++	}
++
++	spin_lock_bh(lock);
++
+ 	/* Check if nc_node is already added */
+ 	nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding);
+ 
+ 	/* Node found */
+ 	if (nc_node)
+-		return nc_node;
++		goto unlock;
+ 
+ 	nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC);
+ 	if (!nc_node)
+-		return NULL;
++		goto unlock;
+ 
+ 	/* Initialize nc_node */
+ 	INIT_LIST_HEAD(&nc_node->list);
+@@ -872,22 +883,14 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	kref_get(&orig_neigh_node->refcount);
+ 	nc_node->orig_node = orig_neigh_node;
+ 
+-	/* Select ingoing or outgoing coding node */
+-	if (in_coding) {
+-		lock = &orig_neigh_node->in_coding_list_lock;
+-		list = &orig_neigh_node->in_coding_list;
+-	} else {
+-		lock = &orig_neigh_node->out_coding_list_lock;
+-		list = &orig_neigh_node->out_coding_list;
+-	}
+-
+ 	batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n",
+ 		   nc_node->addr, nc_node->orig_node->orig);
+ 
+ 	/* Add nc_node to orig_node */
+-	spin_lock_bh(lock);
+ 	kref_get(&nc_node->refcount);
+ 	list_add_tail_rcu(&nc_node->list, list);
++
++unlock:
+ 	spin_unlock_bh(lock);
+ 
+ 	return nc_node;
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 1485263a348b..626ddca332db 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -574,15 +574,20 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 	struct batadv_softif_vlan *vlan;
+ 	int err;
+ 
++	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
++
+ 	vlan = batadv_softif_vlan_get(bat_priv, vid);
+ 	if (vlan) {
+ 		batadv_softif_vlan_put(vlan);
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -EEXIST;
+ 	}
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC);
+-	if (!vlan)
++	if (!vlan) {
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	vlan->bat_priv = bat_priv;
+ 	vlan->vid = vid;
+@@ -590,17 +595,23 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 
+ 	atomic_set(&vlan->ap_isolation, 0);
+ 
++	kref_get(&vlan->refcount);
++	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
++	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
++
++	/* batadv_sysfs_add_vlan cannot be in the spinlock section due to the
++	 * sleeping behavior of the sysfs functions and the fs_reclaim lock
++	 */
+ 	err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan);
+ 	if (err) {
+-		kfree(vlan);
++		/* ref for the function */
++		batadv_softif_vlan_put(vlan);
++
++		/* ref for the list */
++		batadv_softif_vlan_put(vlan);
+ 		return err;
+ 	}
+ 
+-	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
+-	kref_get(&vlan->refcount);
+-	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
+-	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+-
+ 	/* add a new TT local entry. This one will be marked with the NOPURGE
+ 	 * flag
+ 	 */
+diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c
+index f2eef43bd2ec..09427fc6494a 100644
+--- a/net/batman-adv/sysfs.c
++++ b/net/batman-adv/sysfs.c
+@@ -188,7 +188,8 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	return __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					_post_func, attr,		\
+-					&bat_priv->_var, net_dev);	\
++					&bat_priv->_var, net_dev,	\
++					NULL);	\
+ }
+ 
+ #define BATADV_ATTR_SIF_SHOW_UINT(_name, _var)				\
+@@ -262,7 +263,9 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	length = __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					  _post_func, attr,		\
+-					  &hard_iface->_var, net_dev);	\
++					  &hard_iface->_var,		\
++					  hard_iface->soft_iface,	\
++					  net_dev);			\
+ 									\
+ 	batadv_hardif_put(hard_iface);				\
+ 	return length;							\
+@@ -356,10 +359,12 @@ __batadv_store_bool_attr(char *buff, size_t count,
+ 
+ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 				  struct net_device *net_dev,
++				  struct net_device *slave_dev,
+ 				  const char *attr_name,
+ 				  unsigned int min, unsigned int max,
+ 				  atomic_t *attr)
+ {
++	char ifname[IFNAMSIZ + 3] = "";
+ 	unsigned long uint_val;
+ 	int ret;
+ 
+@@ -385,8 +390,11 @@ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 	if (atomic_read(attr) == uint_val)
+ 		return count;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %i to: %lu\n",
+-		    attr_name, atomic_read(attr), uint_val);
++	if (slave_dev)
++		snprintf(ifname, sizeof(ifname), "%s: ", slave_dev->name);
++
++	batadv_info(net_dev, "%s: %sChanging from: %i to: %lu\n",
++		    attr_name, ifname, atomic_read(attr), uint_val);
+ 
+ 	atomic_set(attr, uint_val);
+ 	return count;
+@@ -397,12 +405,13 @@ static ssize_t __batadv_store_uint_attr(const char *buff, size_t count,
+ 					void (*post_func)(struct net_device *),
+ 					const struct attribute *attr,
+ 					atomic_t *attr_store,
+-					struct net_device *net_dev)
++					struct net_device *net_dev,
++					struct net_device *slave_dev)
+ {
+ 	int ret;
+ 
+-	ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max,
+-				     attr_store);
++	ret = batadv_store_uint_attr(buff, count, net_dev, slave_dev,
++				     attr->name, min, max, attr_store);
+ 	if (post_func && ret)
+ 		post_func(net_dev);
+ 
+@@ -571,7 +580,7 @@ static ssize_t batadv_store_gw_sel_class(struct kobject *kobj,
+ 	return __batadv_store_uint_attr(buff, count, 1, BATADV_TQ_MAX_VALUE,
+ 					batadv_post_gw_reselect, attr,
+ 					&bat_priv->gw.sel_class,
+-					bat_priv->soft_iface);
++					bat_priv->soft_iface, NULL);
+ }
+ 
+ static ssize_t batadv_show_gw_bwidth(struct kobject *kobj,
+@@ -1090,8 +1099,9 @@ static ssize_t batadv_store_throughput_override(struct kobject *kobj,
+ 	if (old_tp_override == tp_override)
+ 		goto out;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %u.%u MBit to: %u.%u MBit\n",
+-		    "throughput_override",
++	batadv_info(hard_iface->soft_iface,
++		    "%s: %s: Changing from: %u.%u MBit to: %u.%u MBit\n",
++		    "throughput_override", net_dev->name,
+ 		    old_tp_override / 10, old_tp_override % 10,
+ 		    tp_override / 10, tp_override % 10);
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 12a2b7d21376..d21624c44665 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -1613,6 +1613,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ {
+ 	struct batadv_tt_orig_list_entry *orig_entry;
+ 
++	spin_lock_bh(&tt_global->list_lock);
++
+ 	orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node);
+ 	if (orig_entry) {
+ 		/* refresh the ttvn: the current value could be a bogus one that
+@@ -1635,11 +1637,9 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ 	orig_entry->flags = flags;
+ 	kref_init(&orig_entry->refcount);
+ 
+-	spin_lock_bh(&tt_global->list_lock);
+ 	kref_get(&orig_entry->refcount);
+ 	hlist_add_head_rcu(&orig_entry->list,
+ 			   &tt_global->orig_list);
+-	spin_unlock_bh(&tt_global->list_lock);
+ 	atomic_inc(&tt_global->orig_list_count);
+ 
+ sync_flags:
+@@ -1647,6 +1647,8 @@ sync_flags:
+ out:
+ 	if (orig_entry)
+ 		batadv_tt_orig_list_entry_put(orig_entry);
++
++	spin_unlock_bh(&tt_global->list_lock);
+ }
+ 
+ /**
+diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c
+index a637458205d1..40e69c9346d2 100644
+--- a/net/batman-adv/tvlv.c
++++ b/net/batman-adv/tvlv.c
+@@ -529,15 +529,20 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_tvlv_handler *tvlv_handler;
+ 
++	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
++
+ 	tvlv_handler = batadv_tvlv_handler_get(bat_priv, type, version);
+ 	if (tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		batadv_tvlv_handler_put(tvlv_handler);
+ 		return;
+ 	}
+ 
+ 	tvlv_handler = kzalloc(sizeof(*tvlv_handler), GFP_ATOMIC);
+-	if (!tvlv_handler)
++	if (!tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		return;
++	}
+ 
+ 	tvlv_handler->ogm_handler = optr;
+ 	tvlv_handler->unicast_handler = uptr;
+@@ -547,7 +552,6 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ 	kref_init(&tvlv_handler->refcount);
+ 	INIT_HLIST_NODE(&tvlv_handler->list);
+ 
+-	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
+ 	kref_get(&tvlv_handler->refcount);
+ 	hlist_add_head_rcu(&tvlv_handler->list, &bat_priv->tvlv.handler_list);
+ 	spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index e7de5f282722..effa87858b21 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -612,7 +612,10 @@ static void smc_connect_work(struct work_struct *work)
+ 		smc->sk.sk_err = -rc;
+ 
+ out:
+-	smc->sk.sk_state_change(&smc->sk);
++	if (smc->sk.sk_err)
++		smc->sk.sk_state_change(&smc->sk);
++	else
++		smc->sk.sk_write_space(&smc->sk);
+ 	kfree(smc->connect_info);
+ 	smc->connect_info = NULL;
+ 	release_sock(&smc->sk);
+@@ -1345,7 +1348,7 @@ static __poll_t smc_poll(struct file *file, struct socket *sock,
+ 		return EPOLLNVAL;
+ 
+ 	smc = smc_sk(sock->sk);
+-	if ((sk->sk_state == SMC_INIT) || smc->use_fallback) {
++	if (smc->use_fallback) {
+ 		/* delegate to CLC child sock */
+ 		mask = smc->clcsock->ops->poll(file, smc->clcsock, wait);
+ 		sk->sk_err = smc->clcsock->sk->sk_err;
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index ae5d168653ce..086157555ac3 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -405,14 +405,12 @@ int smc_clc_send_proposal(struct smc_sock *smc,
+ 	vec[i++].iov_len = sizeof(trl);
+ 	/* due to the few bytes needed for clc-handshake this cannot block */
+ 	len = kernel_sendmsg(smc->clcsock, &msg, vec, i, plen);
+-	if (len < sizeof(pclc)) {
+-		if (len >= 0) {
+-			reason_code = -ENETUNREACH;
+-			smc->sk.sk_err = -reason_code;
+-		} else {
+-			smc->sk.sk_err = smc->clcsock->sk->sk_err;
+-			reason_code = -smc->sk.sk_err;
+-		}
++	if (len < 0) {
++		smc->sk.sk_err = smc->clcsock->sk->sk_err;
++		reason_code = -smc->sk.sk_err;
++	} else if (len < (int)sizeof(pclc)) {
++		reason_code = -ENETUNREACH;
++		smc->sk.sk_err = -reason_code;
+ 	}
+ 
+ 	return reason_code;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 6c253343a6f9..70d18d0d39ff 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -566,7 +566,11 @@ static void test_sockmap(int tasks, void *data)
+ 	/* Test update without programs */
+ 	for (i = 0; i < 6; i++) {
+ 		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+-		if (err) {
++		if (i < 2 && !err) {
++			printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",
++			       i, sfd[i]);
++			goto out_sockmap;
++		} else if (i >= 2 && err) {
+ 			printf("Failed noprog update sockmap '%i:%i'\n",
+ 			       i, sfd[i]);
+ 			goto out_sockmap;
+@@ -727,7 +731,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Test map update elem afterwards fd lives in fd and map_fd */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY);
+ 		if (err) {
+ 			printf("Failed map_fd_rx update sockmap %i '%i:%i'\n",
+@@ -831,7 +835,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Delete the elems without programs */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_delete_elem(fd, &i);
+ 		if (err) {
+ 			printf("Failed delete sockmap %i '%i:%i'\n",
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 32a194e3e07a..0ab9423d009f 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -178,8 +178,8 @@ setup() {
+ 
+ cleanup() {
+ 	[ ${cleanup_done} -eq 1 ] && return
+-	ip netns del ${NS_A} 2 > /dev/null
+-	ip netns del ${NS_B} 2 > /dev/null
++	ip netns del ${NS_A} 2> /dev/null
++	ip netns del ${NS_B} 2> /dev/null
+ 	cleanup_done=1
+ }
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     171b3b7ec507044ac98ac85e3bdecb9ea3c96432
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Nov  4 17:33:00 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=171b3b7e

linux kernel 4.18.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1016_linux-4.18.17.patch | 4982 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4986 insertions(+)

diff --git a/0000_README b/0000_README
index 52e9ca9..fcd301e 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-4.18.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.16
 
+Patch:  1016_linux-4.18.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-4.18.17.patch b/1016_linux-4.18.17.patch
new file mode 100644
index 0000000..1e385a1
--- /dev/null
+++ b/1016_linux-4.18.17.patch
@@ -0,0 +1,4982 @@
+diff --git a/Makefile b/Makefile
+index 034dd990b0ae..c051db0ca5a0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index f03b72644902..a18371a36e03 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -977,4 +977,12 @@ config REFCOUNT_FULL
+ 	  against various use-after-free conditions that can be used in
+ 	  security flaw exploits.
+ 
++config HAVE_ARCH_COMPILER_H
++	bool
++	help
++	  An architecture can select this if it provides an
++	  asm/compiler.h header that should be included after
++	  linux/compiler-*.h in order to override macro definitions that those
++	  headers generally provide.
++
+ source "kernel/gcov/Kconfig"
+diff --git a/arch/arm/boot/dts/bcm63138.dtsi b/arch/arm/boot/dts/bcm63138.dtsi
+index 43ee992ccdcf..6df61518776f 100644
+--- a/arch/arm/boot/dts/bcm63138.dtsi
++++ b/arch/arm/boot/dts/bcm63138.dtsi
+@@ -106,21 +106,23 @@
+ 		global_timer: timer@1e200 {
+ 			compatible = "arm,cortex-a9-global-timer";
+ 			reg = <0x1e200 0x20>;
+-			interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		local_timer: local-timer@1e600 {
+ 			compatible = "arm,cortex-a9-twd-timer";
+ 			reg = <0x1e600 0x20>;
+-			interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_EDGE_RISING)>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		twd_watchdog: watchdog@1e620 {
+ 			compatible = "arm,cortex-a9-twd-wdt";
+ 			reg = <0x1e620 0x20>;
+-			interrupts = <GIC_PPI 14 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_LEVEL_HIGH)>;
+ 		};
+ 
+ 		armpll: armpll {
+@@ -158,7 +160,7 @@
+ 		serial0: serial@600 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x600 0x1b>;
+-			interrupts = <GIC_SPI 32 0>;
++			interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -167,7 +169,7 @@
+ 		serial1: serial@620 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x620 0x1b>;
+-			interrupts = <GIC_SPI 33 0>;
++			interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -180,7 +182,7 @@
+ 			reg = <0x2000 0x600>, <0xf0 0x10>;
+ 			reg-names = "nand", "nand-int-base";
+ 			status = "disabled";
+-			interrupts = <GIC_SPI 38 0>;
++			interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "nand";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx53-qsb-common.dtsi b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+index ef7658a78836..c1548adee789 100644
+--- a/arch/arm/boot/dts/imx53-qsb-common.dtsi
++++ b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+@@ -123,6 +123,17 @@
+ 	};
+ };
+ 
++&cpu0 {
++	/* CPU rated to 1GHz, not 1.2GHz as per the default settings */
++	operating-points = <
++		/* kHz   uV */
++		166666  850000
++		400000  900000
++		800000  1050000
++		1000000 1200000
++	>;
++};
++
+ &esdhc1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_esdhc1>;
+diff --git a/arch/arm/kernel/vmlinux.lds.h b/arch/arm/kernel/vmlinux.lds.h
+index ae5fdff18406..8247bc15addc 100644
+--- a/arch/arm/kernel/vmlinux.lds.h
++++ b/arch/arm/kernel/vmlinux.lds.h
+@@ -49,6 +49,8 @@
+ #define ARM_DISCARD							\
+ 		*(.ARM.exidx.exit.text)					\
+ 		*(.ARM.extab.exit.text)					\
++		*(.ARM.exidx.text.exit)					\
++		*(.ARM.extab.text.exit)					\
+ 		ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))		\
+ 		ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))		\
+ 		ARM_EXIT_DISCARD(EXIT_TEXT)				\
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index fc91205ff46c..5bf9443cfbaa 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -473,7 +473,7 @@ void pci_ioremap_set_mem_type(int mem_type)
+ 
+ int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr)
+ {
+-	BUG_ON(offset + SZ_64K > IO_SPACE_LIMIT);
++	BUG_ON(offset + SZ_64K - 1 > IO_SPACE_LIMIT);
+ 
+ 	return ioremap_page_range(PCI_IO_VIRT_BASE + offset,
+ 				  PCI_IO_VIRT_BASE + offset + SZ_64K,
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 192b3ba07075..f85be2f8b140 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -117,11 +117,14 @@ static pte_t get_clear_flush(struct mm_struct *mm,
+ 
+ 		/*
+ 		 * If HW_AFDBM is enabled, then the HW could turn on
+-		 * the dirty bit for any page in the set, so check
+-		 * them all.  All hugetlb entries are already young.
++		 * the dirty or accessed bit for any page in the set,
++		 * so check them all.
+ 		 */
+ 		if (pte_dirty(pte))
+ 			orig_pte = pte_mkdirty(orig_pte);
++
++		if (pte_young(pte))
++			orig_pte = pte_mkyoung(orig_pte);
+ 	}
+ 
+ 	if (valid) {
+@@ -340,10 +343,13 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ 	if (!pte_same(orig_pte, pte))
+ 		changed = 1;
+ 
+-	/* Make sure we don't lose the dirty state */
++	/* Make sure we don't lose the dirty or young state */
+ 	if (pte_dirty(orig_pte))
+ 		pte = pte_mkdirty(pte);
+ 
++	if (pte_young(orig_pte))
++		pte = pte_mkyoung(pte);
++
+ 	hugeprot = pte_pgprot(pte);
+ 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+ 		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 59d07bd5374a..055b211b7126 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1217,9 +1217,10 @@ int find_and_online_cpu_nid(int cpu)
+ 		 * Need to ensure that NODE_DATA is initialized for a node from
+ 		 * available memory (see memblock_alloc_try_nid). If unable to
+ 		 * init the node, then default to nearest node that has memory
+-		 * installed.
++		 * installed. Skip onlining a node if the subsystems are not
++		 * yet initialized.
+ 		 */
+-		if (try_online_node(new_nid))
++		if (!topology_inited || try_online_node(new_nid))
+ 			new_nid = first_online_node;
+ #else
+ 		/*
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 0efa5b29d0a3..dcff272aee06 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -165,7 +165,7 @@ static void __init setup_bootmem(void)
+ 	BUG_ON(mem_size == 0);
+ 
+ 	set_max_mapnr(PFN_DOWN(mem_size));
+-	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
++	max_low_pfn = memblock_end_of_DRAM();
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	setup_initrd();
+diff --git a/arch/sparc/include/asm/cpudata_64.h b/arch/sparc/include/asm/cpudata_64.h
+index 666d6b5c0440..9c3fc03abe9a 100644
+--- a/arch/sparc/include/asm/cpudata_64.h
++++ b/arch/sparc/include/asm/cpudata_64.h
+@@ -28,7 +28,7 @@ typedef struct {
+ 	unsigned short	sock_id;	/* physical package */
+ 	unsigned short	core_id;
+ 	unsigned short  max_cache_id;	/* groupings of highest shared cache */
+-	unsigned short	proc_id;	/* strand (aka HW thread) id */
++	signed short	proc_id;	/* strand (aka HW thread) id */
+ } cpuinfo_sparc;
+ 
+ DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
+diff --git a/arch/sparc/include/asm/switch_to_64.h b/arch/sparc/include/asm/switch_to_64.h
+index 4ff29b1406a9..b1d4e2e3210f 100644
+--- a/arch/sparc/include/asm/switch_to_64.h
++++ b/arch/sparc/include/asm/switch_to_64.h
+@@ -67,6 +67,7 @@ do {	save_and_clear_fpu();						\
+ } while(0)
+ 
+ void synchronize_user_stack(void);
+-void fault_in_user_windows(void);
++struct pt_regs;
++void fault_in_user_windows(struct pt_regs *);
+ 
+ #endif /* __SPARC64_SWITCH_TO_64_H */
+diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
+index d3149baaa33c..67b3e6b3ce5d 100644
+--- a/arch/sparc/kernel/perf_event.c
++++ b/arch/sparc/kernel/perf_event.c
+@@ -24,6 +24,7 @@
+ #include <asm/cpudata.h>
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
++#include <linux/sched/clock.h>
+ #include <asm/nmi.h>
+ #include <asm/pcr.h>
+ #include <asm/cacheflush.h>
+@@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
+ 			sparc_perf_event_update(cp, &cp->hw,
+ 						cpuc->current_idx[i]);
+ 			cpuc->current_idx[i] = PIC_NO_INDEX;
++			if (cp->hw.state & PERF_HES_STOPPED)
++				cp->hw.state |= PERF_HES_ARCH;
+ 		}
+ 	}
+ }
+@@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
+ 
+ 		enc = perf_event_get_enc(cpuc->events[i]);
+ 		cpuc->pcr[0] &= ~mask_for_index(idx);
+-		if (hwc->state & PERF_HES_STOPPED)
++		if (hwc->state & PERF_HES_ARCH) {
+ 			cpuc->pcr[0] |= nop_for_index(idx);
+-		else
++		} else {
+ 			cpuc->pcr[0] |= event_encoding(enc, idx);
++			hwc->state = 0;
++		}
+ 	}
+ out:
+ 	cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
+@@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
+ 
+ 		cpuc->current_idx[i] = idx;
+ 
++		if (cp->hw.state & PERF_HES_ARCH)
++			continue;
++
+ 		sparc_pmu_start(cp, PERF_EF_RELOAD);
+ 	}
+ out:
+@@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
+ 	event->hw.state = 0;
+ 
+ 	sparc_pmu_enable_event(cpuc, &event->hw, idx);
++
++	perf_event_update_userpage(event);
+ }
+ 
+ static void sparc_pmu_stop(struct perf_event *event, int flags)
+@@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
+ 	cpuc->events[n0] = event->hw.event_base;
+ 	cpuc->current_idx[n0] = PIC_NO_INDEX;
+ 
+-	event->hw.state = PERF_HES_UPTODATE;
++	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ 	if (!(ef_flags & PERF_EF_START))
+-		event->hw.state |= PERF_HES_STOPPED;
++		event->hw.state |= PERF_HES_ARCH;
+ 
+ 	/*
+ 	 * If group events scheduling transaction was started,
+@@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc;
+ 	struct pt_regs *regs;
++	u64 finish_clock;
++	u64 start_clock;
+ 	int i;
+ 
+ 	if (!atomic_read(&active_events))
+@@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 		return NOTIFY_DONE;
+ 	}
+ 
++	start_clock = sched_clock();
++
+ 	regs = args->regs;
+ 
+ 	cpuc = this_cpu_ptr(&cpu_hw_events);
+@@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 			sparc_pmu_stop(event, 0);
+ 	}
+ 
++	finish_clock = sched_clock();
++
++	perf_sample_event_took(finish_clock - start_clock);
++
+ 	return NOTIFY_STOP;
+ }
+ 
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 6c086086ca8f..59eaf6227af1 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -36,6 +36,7 @@
+ #include <linux/sysrq.h>
+ #include <linux/nmi.h>
+ #include <linux/context_tracking.h>
++#include <linux/signal.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/page.h>
+@@ -521,7 +522,12 @@ static void stack_unaligned(unsigned long sp)
+ 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *) sp, 0, current);
+ }
+ 
+-void fault_in_user_windows(void)
++static const char uwfault32[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %08lx (orig_sp %08lx) TPC %08lx O7 %08lx\n";
++static const char uwfault64[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %016lx (orig_sp %016lx) TPC %08lx O7 %016lx\n";
++
++void fault_in_user_windows(struct pt_regs *regs)
+ {
+ 	struct thread_info *t = current_thread_info();
+ 	unsigned long window;
+@@ -534,9 +540,9 @@ void fault_in_user_windows(void)
+ 		do {
+ 			struct reg_window *rwin = &t->reg_window[window];
+ 			int winsize = sizeof(struct reg_window);
+-			unsigned long sp;
++			unsigned long sp, orig_sp;
+ 
+-			sp = t->rwbuf_stkptrs[window];
++			orig_sp = sp = t->rwbuf_stkptrs[window];
+ 
+ 			if (test_thread_64bit_stack(sp))
+ 				sp += STACK_BIAS;
+@@ -547,8 +553,16 @@ void fault_in_user_windows(void)
+ 				stack_unaligned(sp);
+ 
+ 			if (unlikely(copy_to_user((char __user *)sp,
+-						  rwin, winsize)))
++						  rwin, winsize))) {
++				if (show_unhandled_signals)
++					printk_ratelimited(is_compat_task() ?
++							   uwfault32 : uwfault64,
++							   current->comm, current->pid,
++							   sp, orig_sp,
++							   regs->tpc,
++							   regs->u_regs[UREG_I7]);
+ 				goto barf;
++			}
+ 		} while (window--);
+ 	}
+ 	set_thread_wsaved(0);
+@@ -556,8 +570,7 @@ void fault_in_user_windows(void)
+ 
+ barf:
+ 	set_thread_wsaved(window + 1);
+-	user_exit();
+-	do_exit(SIGILL);
++	force_sig(SIGSEGV, current);
+ }
+ 
+ asmlinkage long sparc_do_fork(unsigned long clone_flags,
+diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
+index f6528884a2c8..29aa34f11720 100644
+--- a/arch/sparc/kernel/rtrap_64.S
++++ b/arch/sparc/kernel/rtrap_64.S
+@@ -39,6 +39,7 @@ __handle_preemption:
+ 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
+ 
+ __handle_user_windows:
++		add			%sp, PTREGS_OFF, %o0
+ 		call			fault_in_user_windows
+ 661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+ 		/* If userspace is using ADI, it could potentially pass
+@@ -84,8 +85,9 @@ __handle_signal:
+ 		ldx			[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
+ 		sethi			%hi(0xf << 20), %l4
+ 		and			%l1, %l4, %l4
++		andn			%l1, %l4, %l1
+ 		ba,pt			%xcc, __handle_preemption_continue
+-		 andn			%l1, %l4, %l1
++		 srl			%l4, 20, %l4
+ 
+ 		/* When returning from a NMI (%pil==15) interrupt we want to
+ 		 * avoid running softirqs, doing IRQ tracing, preempting, etc.
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index 44d379db3f64..4c5b3fcbed94 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -371,7 +371,11 @@ static int setup_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -501,7 +505,11 @@ static int setup_rt_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 48366e5eb5b2..e9de1803a22e 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -370,7 +370,11 @@ setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
+ 		get_sigframe(ksig, regs, sf_size);
+ 
+ 	if (invalid_frame_pointer (sf)) {
+-		do_exit(SIGILL);	/* won't return, actually */
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame: %016lx TPC %016lx O7 %016lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/systbls_64.S b/arch/sparc/kernel/systbls_64.S
+index 387ef993880a..25699462ad5b 100644
+--- a/arch/sparc/kernel/systbls_64.S
++++ b/arch/sparc/kernel/systbls_64.S
+@@ -47,9 +47,9 @@ sys_call_table32:
+ 	.word sys_recvfrom, sys_setreuid16, sys_setregid16, sys_rename, compat_sys_truncate
+ /*130*/	.word compat_sys_ftruncate, sys_flock, compat_sys_lstat64, sys_sendto, sys_shutdown
+ 	.word sys_socketpair, sys_mkdir, sys_rmdir, compat_sys_utimes, compat_sys_stat64
+-/*140*/	.word sys_sendfile64, sys_nis_syscall, compat_sys_futex, sys_gettid, compat_sys_getrlimit
++/*140*/	.word sys_sendfile64, sys_getpeername, compat_sys_futex, sys_gettid, compat_sys_getrlimit
+ 	.word compat_sys_setrlimit, sys_pivot_root, sys_prctl, sys_pciconfig_read, sys_pciconfig_write
+-/*150*/	.word sys_nis_syscall, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
++/*150*/	.word sys_getsockname, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
+ 	.word compat_sys_fcntl64, sys_inotify_rm_watch, compat_sys_statfs, compat_sys_fstatfs, sys_oldumount
+ /*160*/	.word compat_sys_sched_setaffinity, compat_sys_sched_getaffinity, sys_getdomainname, sys_setdomainname, sys_nis_syscall
+ 	.word sys_quotactl, sys_set_tid_address, compat_sys_mount, compat_sys_ustat, sys_setxattr
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index f396048a0d68..39822f611c01 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -1383,6 +1383,7 @@ int __node_distance(int from, int to)
+ 	}
+ 	return numa_latency[from][to];
+ }
++EXPORT_SYMBOL(__node_distance);
+ 
+ static int __init find_best_numa_node_for_mlgroup(struct mdesc_mlgroup *grp)
+ {
+diff --git a/arch/sparc/vdso/vclock_gettime.c b/arch/sparc/vdso/vclock_gettime.c
+index 3feb3d960ca5..75dca9aab737 100644
+--- a/arch/sparc/vdso/vclock_gettime.c
++++ b/arch/sparc/vdso/vclock_gettime.c
+@@ -33,9 +33,19 @@
+ #define	TICK_PRIV_BIT	(1ULL << 63)
+ #endif
+ 
++#ifdef	CONFIG_SPARC64
+ #define SYSCALL_STRING							\
+ 	"ta	0x6d;"							\
+-	"sub	%%g0, %%o0, %%o0;"					\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#else
++#define SYSCALL_STRING							\
++	"ta	0x10;"							\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#endif
+ 
+ #define SYSCALL_CLOBBERS						\
+ 	"f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7",			\
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 981ba5e8241b..8671de126eac 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -36,6 +36,7 @@
+ 
+ static int num_counters_llc;
+ static int num_counters_nb;
++static bool l3_mask;
+ 
+ static HLIST_HEAD(uncore_unused_list);
+ 
+@@ -209,6 +210,13 @@ static int amd_uncore_event_init(struct perf_event *event)
+ 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
+ 	hwc->idx = -1;
+ 
++	/*
++	 * SliceMask and ThreadMask need to be set for certain L3 events in
++	 * Family 17h. For other events, the two fields do not affect the count.
++	 */
++	if (l3_mask)
++		hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK);
++
+ 	if (event->cpu < 0)
+ 		return -EINVAL;
+ 
+@@ -525,6 +533,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l3";
+ 		format_attr_event_df.show = &event_show_df;
+ 		format_attr_event_l3.show = &event_show_l3;
++		l3_mask			  = true;
+ 	} else {
+ 		num_counters_nb		  = NUM_COUNTERS_NB;
+ 		num_counters_llc	  = NUM_COUNTERS_L2;
+@@ -532,6 +541,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l2";
+ 		format_attr_event_df	  = format_attr_event;
+ 		format_attr_event_l3	  = format_attr_event;
++		l3_mask			  = false;
+ 	}
+ 
+ 	amd_nb_pmu.attr_groups	= amd_uncore_attr_groups_df;
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 51d7c117e3c7..c07bee31abe8 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3061,7 +3061,7 @@ static struct event_constraint bdx_uncore_pcu_constraints[] = {
+ 
+ void bdx_uncore_cpu_init(void)
+ {
+-	int pkg = topology_phys_to_logical_pkg(0);
++	int pkg = topology_phys_to_logical_pkg(boot_cpu_data.phys_proc_id);
+ 
+ 	if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ 		bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+@@ -3931,16 +3931,16 @@ static const struct pci_device_id skx_uncore_pci_ids[] = {
+ 		.driver_data = UNCORE_PCI_DEV_FULL_DATA(21, 5, SKX_PCI_UNCORE_M2PCIE, 3),
+ 	},
+ 	{ /* M3UPI0 Link 0 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 0, SKX_PCI_UNCORE_M3UPI, 0),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 0),
+ 	},
+ 	{ /* M3UPI0 Link 1 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 1),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204E),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 2, SKX_PCI_UNCORE_M3UPI, 1),
+ 	},
+ 	{ /* M3UPI1 Link 2 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 4, SKX_PCI_UNCORE_M3UPI, 2),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 5, SKX_PCI_UNCORE_M3UPI, 2),
+ 	},
+ 	{ /* end: all zeroes */ }
+ };
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 12f54082f4c8..78241b736f2a 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -46,6 +46,14 @@
+ #define INTEL_ARCH_EVENT_MASK	\
+ 	(ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)
+ 
++#define AMD64_L3_SLICE_SHIFT				48
++#define AMD64_L3_SLICE_MASK				\
++	((0xFULL) << AMD64_L3_SLICE_SHIFT)
++
++#define AMD64_L3_THREAD_SHIFT				56
++#define AMD64_L3_THREAD_MASK				\
++	((0xFFULL) << AMD64_L3_THREAD_SHIFT)
++
+ #define X86_RAW_EVENT_MASK		\
+ 	(ARCH_PERFMON_EVENTSEL_EVENT |	\
+ 	 ARCH_PERFMON_EVENTSEL_UMASK |	\
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 930c88341e4e..1fbf38dde84c 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -90,7 +90,7 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+@@ -110,7 +110,7 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index ef772e5634d4..3e59a187fe30 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -436,14 +436,18 @@ static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
+ 
+ static inline bool svm_sev_enabled(void)
+ {
+-	return max_sev_asid;
++	return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
+ }
+ 
+ static inline bool sev_guest(struct kvm *kvm)
+ {
++#ifdef CONFIG_KVM_AMD_SEV
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 
+ 	return sev->active;
++#else
++	return false;
++#endif
+ }
+ 
+ static inline int sev_get_asid(struct kvm *kvm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 32721ef9652d..9efe130ea2e6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -819,6 +819,7 @@ struct nested_vmx {
+ 
+ 	/* to migrate it to L2 if VM_ENTRY_LOAD_DEBUG_CONTROLS is off */
+ 	u64 vmcs01_debugctl;
++	u64 vmcs01_guest_bndcfgs;
+ 
+ 	u16 vpid02;
+ 	u16 last_vpid;
+@@ -3395,9 +3396,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
+ 		VM_EXIT_SAVE_VMX_PREEMPTION_TIMER | VM_EXIT_ACK_INTR_ON_EXIT;
+ 
+-	if (kvm_mpx_supported())
+-		msrs->exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
+-
+ 	/* We support free control of debug control saving. */
+ 	msrs->exit_ctls_low &= ~VM_EXIT_SAVE_DEBUG_CONTROLS;
+ 
+@@ -3414,8 +3412,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_ENTRY_LOAD_IA32_PAT;
+ 	msrs->entry_ctls_high |=
+ 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER);
+-	if (kvm_mpx_supported())
+-		msrs->entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
+ 
+ 	/* We support free control of debug control loading. */
+ 	msrs->entry_ctls_low &= ~VM_ENTRY_LOAD_DEBUG_CONTROLS;
+@@ -10825,6 +10821,23 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
+ #undef cr4_fixed1_update
+ }
+ 
++static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
++{
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
++
++	if (kvm_mpx_supported()) {
++		bool mpx_enabled = guest_cpuid_has(vcpu, X86_FEATURE_MPX);
++
++		if (mpx_enabled) {
++			vmx->nested.msrs.entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
++		} else {
++			vmx->nested.msrs.entry_ctls_high &= ~VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
++		}
++	}
++}
++
+ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -10841,8 +10854,10 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &=
+ 			~FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
+ 
+-	if (nested_vmx_allowed(vcpu))
++	if (nested_vmx_allowed(vcpu)) {
+ 		nested_vmx_cr_fixed1_bits_update(vcpu);
++		nested_vmx_entry_exit_ctls_update(vcpu);
++	}
+ }
+ 
+ static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+@@ -11553,8 +11568,13 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+-	if (vmx_mpx_supported())
+-		vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++	if (kvm_mpx_supported()) {
++		if (vmx->nested.nested_run_pending &&
++			(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++			vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++		else
++			vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
++	}
+ 
+ 	if (enable_vpid) {
+ 		if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
+@@ -12068,6 +12088,9 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
+ 
+ 	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ 		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
++	if (kvm_mpx_supported() &&
++		!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++		vmx->nested.vmcs01_guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+ 
+ 	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ 	vmx_segment_cache_clear(vmx);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 97fcac34e007..3cd58a5eb449 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4625,7 +4625,7 @@ static void kvm_init_msr_list(void)
+ 		 */
+ 		switch (msrs_to_save[i]) {
+ 		case MSR_IA32_BNDCFGS:
+-			if (!kvm_x86_ops->mpx_supported())
++			if (!kvm_mpx_supported())
+ 				continue;
+ 			break;
+ 		case MSR_TSC_AUX:
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 6f7637b19738..e764dfdea53f 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -419,7 +419,6 @@ static unsigned int armada_3700_pm_dvfs_get_cpu_parent(struct regmap *base)
+ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ {
+ 	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
+ 	u32 val;
+ 
+ 	if (armada_3700_pm_dvfs_is_enabled(pm_cpu->nb_pm_base)) {
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 06dce16e22bb..70f0dedca59f 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1675,7 +1675,8 @@ static void gpiochip_set_cascaded_irqchip(struct gpio_chip *gpiochip,
+ 		irq_set_chained_handler_and_data(parent_irq, parent_handler,
+ 						 gpiochip);
+ 
+-		gpiochip->irq.parents = &parent_irq;
++		gpiochip->irq.parent_irq = parent_irq;
++		gpiochip->irq.parents = &gpiochip->irq.parent_irq;
+ 		gpiochip->irq.num_parents = 1;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e484d0a94bdc..5b9cc3aeaa55 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4494,12 +4494,18 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	}
+ 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
+ 
+-	/* Signal HW programming completion */
+-	drm_atomic_helper_commit_hw_done(state);
+ 
+ 	if (wait_for_vblank)
+ 		drm_atomic_helper_wait_for_flip_done(dev, state);
+ 
++	/*
++	 * FIXME:
++	 * Delay hw_done() until flip_done() is signaled. This is to block
++	 * another commit from freeing the CRTC state while we're still
++	 * waiting on flip_done.
++	 */
++	drm_atomic_helper_commit_hw_done(state);
++
+ 	drm_atomic_helper_cleanup_planes(dev, state);
+ 
+ 	/* Finally, drop a runtime PM reference for each newly disabled CRTC,
+diff --git a/drivers/gpu/drm/i2c/tda9950.c b/drivers/gpu/drm/i2c/tda9950.c
+index 3f7396caad48..ccd355d0c123 100644
+--- a/drivers/gpu/drm/i2c/tda9950.c
++++ b/drivers/gpu/drm/i2c/tda9950.c
+@@ -188,7 +188,8 @@ static irqreturn_t tda9950_irq(int irq, void *data)
+ 			break;
+ 		}
+ 		/* TDA9950 executes all retries for us */
+-		tx_status |= CEC_TX_STATUS_MAX_RETRIES;
++		if (tx_status != CEC_TX_STATUS_OK)
++			tx_status |= CEC_TX_STATUS_MAX_RETRIES;
+ 		cec_transmit_done(priv->adap, tx_status, arb_lost_cnt,
+ 				  nack_cnt, 0, err_cnt);
+ 		break;
+@@ -307,7 +308,7 @@ static void tda9950_release(struct tda9950_priv *priv)
+ 	/* Wait up to .5s for it to signal non-busy */
+ 	do {
+ 		csr = tda9950_read(client, REG_CSR);
+-		if (!(csr & CSR_BUSY) || --timeout)
++		if (!(csr & CSR_BUSY) || !--timeout)
+ 			break;
+ 		msleep(10);
+ 	} while (1);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index eee6b79fb131..ae5b72269e27 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -974,7 +974,6 @@
+ #define USB_DEVICE_ID_SIS817_TOUCH	0x0817
+ #define USB_DEVICE_ID_SIS_TS		0x1013
+ #define USB_DEVICE_ID_SIS1030_TOUCH	0x1030
+-#define USB_DEVICE_ID_SIS10FB_TOUCH	0x10fb
+ 
+ #define USB_VENDOR_ID_SKYCABLE			0x1223
+ #define	USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER	0x3F07
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 37013b58098c..d17cf6e323b2 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -47,8 +47,7 @@
+ /* quirks to control the device */
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+-#define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
+-#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(2)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -172,8 +171,6 @@ static const struct i2c_hid_quirks {
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
+ 		I2C_HID_QUIRK_NO_RUNTIME_PM },
+-	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1241,22 +1238,13 @@ static int i2c_hid_resume(struct device *dev)
+ 
+ 	/* Instead of resetting device, simply powers the device on. This
+ 	 * solves "incomplete reports" on Raydium devices 2386:3118 and
+-	 * 2386:4B33
++	 * 2386:4B33 and fixes various SIS touchscreens no longer sending
++	 * data after a suspend/resume.
+ 	 */
+ 	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Some devices need to re-send report descr cmd
+-	 * after resume, after this it will be back normal.
+-	 * otherwise it issues too many incomplete reports.
+-	 */
+-	if (ihid->quirks & I2C_HID_QUIRK_RESEND_REPORT_DESCR) {
+-		ret = i2c_hid_command(client, &hid_report_descr_cmd, NULL, 0);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	if (hid->driver && hid->driver->reset_resume) {
+ 		ret = hid->driver->reset_resume(hid);
+ 		return ret;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 308456d28afb..73339fd47dd8 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -544,6 +544,9 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 	int shrink = 0;
+ 	int c;
+ 
++	if (!mr->allocated_from_cache)
++		return;
++
+ 	c = order2idx(dev, mr->order);
+ 	if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) {
+ 		mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c);
+@@ -1647,18 +1650,19 @@ static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 		umem = NULL;
+ 	}
+ #endif
+-
+ 	clean_mr(dev, mr);
+ 
++	/*
++	 * We should unregister the DMA address from the HCA before
++	 * remove the DMA mapping.
++	 */
++	mlx5_mr_cache_free(dev, mr);
+ 	if (umem) {
+ 		ib_umem_release(umem);
+ 		atomic_sub(npages, &dev->mdev->priv.reg_pages);
+ 	}
+-
+ 	if (!mr->allocated_from_cache)
+ 		kfree(mr);
+-	else
+-		mlx5_mr_cache_free(dev, mr);
+ }
+ 
+ int mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
+index 9697977b80f0..6b9ad8673218 100644
+--- a/drivers/net/bonding/bond_netlink.c
++++ b/drivers/net/bonding/bond_netlink.c
+@@ -638,8 +638,7 @@ static int bond_fill_info(struct sk_buff *skb,
+ 				goto nla_put_failure;
+ 
+ 			if (nla_put(skb, IFLA_BOND_AD_ACTOR_SYSTEM,
+-				    sizeof(bond->params.ad_actor_system),
+-				    &bond->params.ad_actor_system))
++				    ETH_ALEN, &bond->params.ad_actor_system))
+ 				goto nla_put_failure;
+ 		}
+ 		if (!bond_3ad_get_active_agg_info(bond, &info)) {
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 1b01cd2820ba..000f0d42a710 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1580,8 +1580,6 @@ static int ena_up_complete(struct ena_adapter *adapter)
+ 	if (rc)
+ 		return rc;
+ 
+-	ena_init_napi(adapter);
+-
+ 	ena_change_mtu(adapter->netdev, adapter->netdev->mtu);
+ 
+ 	ena_refill_all_rx_bufs(adapter);
+@@ -1735,6 +1733,13 @@ static int ena_up(struct ena_adapter *adapter)
+ 
+ 	ena_setup_io_intr(adapter);
+ 
++	/* napi poll functions should be initialized before running
++	 * request_irq(), to handle a rare condition where there is a pending
++	 * interrupt, causing the ISR to fire immediately while the poll
++	 * function wasn't set yet, causing a null dereference
++	 */
++	ena_init_napi(adapter);
++
+ 	rc = ena_request_io_irq(adapter);
+ 	if (rc)
+ 		goto err_req_irq;
+@@ -2648,7 +2653,11 @@ err_disable_msix:
+ 	ena_free_mgmnt_irq(adapter);
+ 	ena_disable_msix(adapter);
+ err_device_destroy:
++	ena_com_abort_admin_commands(ena_dev);
++	ena_com_wait_for_abort_completion(ena_dev);
+ 	ena_com_admin_destroy(ena_dev);
++	ena_com_mmio_reg_read_request_destroy(ena_dev);
++	ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);
+ err:
+ 	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
+@@ -3128,15 +3137,8 @@ err_rss_init:
+ 
+ static void ena_release_bars(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
+ {
+-	int release_bars;
+-
+-	if (ena_dev->mem_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->mem_bar);
+-
+-	if (ena_dev->reg_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->reg_bar);
++	int release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 
+-	release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 	pci_release_selected_regions(pdev, release_bars);
+ }
+ 
+diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c
+index 116997a8b593..00332a1ea84b 100644
+--- a/drivers/net/ethernet/amd/declance.c
++++ b/drivers/net/ethernet/amd/declance.c
+@@ -1031,6 +1031,7 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	int i, ret;
+ 	unsigned long esar_base;
+ 	unsigned char *esar;
++	const char *desc;
+ 
+ 	if (dec_lance_debug && version_printed++ == 0)
+ 		printk(version);
+@@ -1216,19 +1217,20 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	 */
+ 	switch (type) {
+ 	case ASIC_LANCE:
+-		printk("%s: IOASIC onboard LANCE", name);
++		desc = "IOASIC onboard LANCE";
+ 		break;
+ 	case PMAD_LANCE:
+-		printk("%s: PMAD-AA", name);
++		desc = "PMAD-AA";
+ 		break;
+ 	case PMAX_LANCE:
+-		printk("%s: PMAX onboard LANCE", name);
++		desc = "PMAX onboard LANCE";
+ 		break;
+ 	}
+ 	for (i = 0; i < 6; i++)
+ 		dev->dev_addr[i] = esar[i * 4];
+ 
+-	printk(", addr = %pM, irq = %d\n", dev->dev_addr, dev->irq);
++	printk("%s: %s, addr = %pM, irq = %d\n",
++	       name, desc, dev->dev_addr, dev->irq);
+ 
+ 	dev->netdev_ops = &lance_netdev_ops;
+ 	dev->watchdog_timeo = 5*HZ;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 4241ae928d4a..34af5f1569c8 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -321,9 +321,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ 	phydev->advertising = phydev->supported;
+ 
+ 	/* The internal PHY has its link interrupts routed to the
+-	 * Ethernet MAC ISRs
++	 * Ethernet MAC ISRs. On GENETv5 there is a hardware issue
++	 * that prevents the signaling of link UP interrupts when
++	 * the link operates at 10Mbps, so fallback to polling for
++	 * those versions of GENET.
+ 	 */
+-	if (priv->internal_phy)
++	if (priv->internal_phy && !GENET_IS_V5(priv))
+ 		dev->phydev->irq = PHY_IGNORE_INTERRUPT;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index dfa045f22ef1..db568232ff3e 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2089,6 +2089,7 @@ static void macb_configure_dma(struct macb *bp)
+ 		else
+ 			dmacfg &= ~GEM_BIT(TXCOEN);
+ 
++		dmacfg &= ~GEM_BIT(ADDR64);
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 		if (bp->hw_dma_cap & HW_DMA_CAP_64B)
+ 			dmacfg |= GEM_BIT(ADDR64);
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index a19172dbe6be..c34ea385fe4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -2159,6 +2159,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_QSET_PARAMS)
++			return -EINVAL;
+ 		if (t.qset_idx >= SGE_QSETS)
+ 			return -EINVAL;
+ 		if (!in_range(t.intr_lat, 0, M_NEWTIMER) ||
+@@ -2258,6 +2260,9 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
+ 
++		if (t.cmd != CHELSIO_GET_QSET_PARAMS)
++			return -EINVAL;
++
+ 		/* Display qsets for all ports when offload enabled */
+ 		if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
+ 			q1 = 0;
+@@ -2303,6 +2308,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&edata, useraddr, sizeof(edata)))
+ 			return -EFAULT;
++		if (edata.cmd != CHELSIO_SET_QSET_NUM)
++			return -EINVAL;
+ 		if (edata.val < 1 ||
+ 			(edata.val > 1 && !(adapter->flags & USING_MSIX)))
+ 			return -EINVAL;
+@@ -2343,6 +2350,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_LOAD_FW)
++			return -EINVAL;
+ 		/* Check t.len sanity ? */
+ 		fw_data = memdup_user(useraddr + sizeof(t), t.len);
+ 		if (IS_ERR(fw_data))
+@@ -2366,6 +2375,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SETMTUTAB)
++			return -EINVAL;
+ 		if (m.nmtus != NMTUS)
+ 			return -EINVAL;
+ 		if (m.mtus[0] < 81)	/* accommodate SACK */
+@@ -2407,6 +2418,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SET_PM)
++			return -EINVAL;
+ 		if (!is_power_of_2(m.rx_pg_sz) ||
+ 			!is_power_of_2(m.tx_pg_sz))
+ 			return -EINVAL;	/* not power of 2 */
+@@ -2440,6 +2453,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EIO;	/* need the memory controllers */
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_GET_MEM)
++			return -EINVAL;
+ 		if ((t.addr & 7) || (t.len & 7))
+ 			return -EINVAL;
+ 		if (t.mem_id == MEM_CM)
+@@ -2492,6 +2507,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EAGAIN;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_TRACE_FILTER)
++			return -EINVAL;
+ 
+ 		tp = (const struct trace_params *)&t.sip;
+ 		if (t.config_tx)
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 8f755009ff38..c8445a4135a9 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -3915,8 +3915,6 @@ static int be_enable_vxlan_offloads(struct be_adapter *adapter)
+ 	netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ 				   NETIF_F_TSO | NETIF_F_TSO6 |
+ 				   NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->features |= NETIF_F_GSO_UDP_TUNNEL;
+ 
+ 	dev_info(dev, "Enabled VxLAN offloads for UDP port %d\n",
+ 		 be16_to_cpu(port));
+@@ -3938,8 +3936,6 @@ static void be_disable_vxlan_offloads(struct be_adapter *adapter)
+ 	adapter->vxlan_port = 0;
+ 
+ 	netdev->hw_enc_features = 0;
+-	netdev->hw_features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+-	netdev->features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+ }
+ 
+ static void be_calculate_vf_res(struct be_adapter *adapter, u16 num_vfs,
+@@ -5232,6 +5228,7 @@ static void be_netdev_init(struct net_device *netdev)
+ 	struct be_adapter *adapter = netdev_priv(netdev);
+ 
+ 	netdev->hw_features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6 |
++		NETIF_F_GSO_UDP_TUNNEL |
+ 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |
+ 		NETIF_F_HW_VLAN_CTAG_TX;
+ 	if ((be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS))
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index 4778b663653e..bf80855dd0dd 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -452,6 +452,10 @@ struct bufdesc_ex {
+  * initialisation.
+  */
+ #define FEC_QUIRK_MIB_CLEAR		(1 << 15)
++/* Only i.MX25/i.MX27/i.MX28 controller supports FRBR,FRSR registers,
++ * those FIFO receive registers are resolved in other platforms.
++ */
++#define FEC_QUIRK_HAS_FRREG		(1 << 16)
+ 
+ struct bufdesc_prop {
+ 	int qid;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index c729665107f5..11f90bb2d2a9 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -90,14 +90,16 @@ static struct platform_device_id fec_devtype[] = {
+ 		.driver_data = 0,
+ 	}, {
+ 		.name = "imx25-fec",
+-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
++			       FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx27-fec",
+-		.driver_data = FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx28-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+-				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC,
++				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
++				FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx6q-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+@@ -1157,7 +1159,7 @@ static void fec_enet_timeout_work(struct work_struct *work)
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+@@ -1272,7 +1274,7 @@ skb_done:
+ 
+ 		/* Since we have freed up a buffer, the ring is no longer full
+ 		 */
+-		if (netif_queue_stopped(ndev)) {
++		if (netif_tx_queue_stopped(nq)) {
+ 			entries_free = fec_enet_get_free_txdesc_num(txq);
+ 			if (entries_free >= txq->tx_wake_threshold)
+ 				netif_tx_wake_queue(nq);
+@@ -1745,7 +1747,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_restart(ndev);
+-			netif_wake_queue(ndev);
++			netif_tx_wake_all_queues(ndev);
+ 			netif_tx_unlock_bh(ndev);
+ 			napi_enable(&fep->napi);
+ 		}
+@@ -2163,7 +2165,13 @@ static void fec_enet_get_regs(struct net_device *ndev,
+ 	memset(buf, 0, regs->len);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
+-		off = fec_enet_register_offset[i] / 4;
++		off = fec_enet_register_offset[i];
++
++		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
++		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))
++			continue;
++
++		off >>= 2;
+ 		buf[off] = readl(&theregs[off]);
+ 	}
+ }
+@@ -2246,7 +2254,7 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index d3a1dd20e41d..fb6c72cf70a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -429,10 +429,9 @@ static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
+ 
+ static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
+ 					      struct mlx5_wq_cyc *wq,
+-					      u16 pi, u16 frag_pi)
++					      u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -451,15 +450,14 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 	struct mlx5e_umr_wqe *umr_wqe;
+ 	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
+-	u16 pi, frag_pi;
++	u16 pi, contig_wqebbs_room;
+ 	int err;
+ 	int i;
+ 
+ 	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-
+-	if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
++		mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ 	}
+ 
+@@ -693,43 +691,15 @@ static inline bool is_last_ethertype_ip(struct sk_buff *skb, int *network_depth)
+ 	return (ethertype == htons(ETH_P_IP) || ethertype == htons(ETH_P_IPV6));
+ }
+ 
+-static __be32 mlx5e_get_fcs(struct sk_buff *skb)
++static u32 mlx5e_get_fcs(const struct sk_buff *skb)
+ {
+-	int last_frag_sz, bytes_in_prev, nr_frags;
+-	u8 *fcs_p1, *fcs_p2;
+-	skb_frag_t *last_frag;
+-	__be32 fcs_bytes;
+-
+-	if (!skb_is_nonlinear(skb))
+-		return *(__be32 *)(skb->data + skb->len - ETH_FCS_LEN);
+-
+-	nr_frags = skb_shinfo(skb)->nr_frags;
+-	last_frag = &skb_shinfo(skb)->frags[nr_frags - 1];
+-	last_frag_sz = skb_frag_size(last_frag);
+-
+-	/* If all FCS data is in last frag */
+-	if (last_frag_sz >= ETH_FCS_LEN)
+-		return *(__be32 *)(skb_frag_address(last_frag) +
+-				   last_frag_sz - ETH_FCS_LEN);
+-
+-	fcs_p2 = (u8 *)skb_frag_address(last_frag);
+-	bytes_in_prev = ETH_FCS_LEN - last_frag_sz;
+-
+-	/* Find where the other part of the FCS is - Linear or another frag */
+-	if (nr_frags == 1) {
+-		fcs_p1 = skb_tail_pointer(skb);
+-	} else {
+-		skb_frag_t *prev_frag = &skb_shinfo(skb)->frags[nr_frags - 2];
+-
+-		fcs_p1 = skb_frag_address(prev_frag) +
+-			    skb_frag_size(prev_frag);
+-	}
+-	fcs_p1 -= bytes_in_prev;
++	const void *fcs_bytes;
++	u32 _fcs_bytes;
+ 
+-	memcpy(&fcs_bytes, fcs_p1, bytes_in_prev);
+-	memcpy(((u8 *)&fcs_bytes) + bytes_in_prev, fcs_p2, last_frag_sz);
++	fcs_bytes = skb_header_pointer(skb, skb->len - ETH_FCS_LEN,
++				       ETH_FCS_LEN, &_fcs_bytes);
+ 
+-	return fcs_bytes;
++	return __get_unaligned_cpu32(fcs_bytes);
+ }
+ 
+ static inline void mlx5e_handle_csum(struct net_device *netdev,
+@@ -762,8 +732,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 						 network_depth - ETH_HLEN,
+ 						 skb->csum);
+ 		if (unlikely(netdev->features & NETIF_F_RXFCS))
+-			skb->csum = csum_add(skb->csum,
+-					     (__force __wsum)mlx5e_get_fcs(skb));
++			skb->csum = csum_block_add(skb->csum,
++						   (__force __wsum)mlx5e_get_fcs(skb),
++						   skb->len - ETH_FCS_LEN);
+ 		stats->csum_complete++;
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index f29deb44bf3b..1e774d979c85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -287,10 +287,9 @@ dma_unmap_wqe_err:
+ 
+ static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
+ 					   struct mlx5_wq_cyc *wq,
+-					   u16 pi, u16 frag_pi)
++					   u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -345,8 +344,8 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
++	u16 headlen, ihs, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+-	u16 headlen, ihs, frag_pi;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+ 	int num_dma;
+@@ -383,9 +382,9 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
+ 	}
+ 
+@@ -629,7 +628,7 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
+-	u16 headlen, ihs, pi, frag_pi;
++	u16 headlen, ihs, pi, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+@@ -665,13 +664,14 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
++	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
+ 	}
+ 
+-	mlx5i_sq_fetch_wqe(sq, &wqe, &pi);
++	mlx5i_sq_fetch_wqe(sq, &wqe, pi);
+ 
+ 	/* fill wqe */
+ 	wi       = &sq->db.wqe_info[pi];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 406c23862f5f..01ccc8201052 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -269,7 +269,7 @@ static void eq_pf_process(struct mlx5_eq *eq)
+ 		case MLX5_PFAULT_SUBTYPE_WQE:
+ 			/* WQE based event */
+ 			pfault->type =
+-				be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24;
++				(be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7;
+ 			pfault->token =
+ 				be32_to_cpu(pf_eqe->wqe.token);
+ 			pfault->wqe.wq_num =
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+index 5645a4facad2..b8ee9101c506 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+@@ -245,7 +245,7 @@ static void *mlx5_fpga_ipsec_cmd_exec(struct mlx5_core_dev *mdev,
+ 		return ERR_PTR(res);
+ 	}
+ 
+-	/* Context will be freed by wait func after completion */
++	/* Context should be freed by the caller after completion. */
+ 	return context;
+ }
+ 
+@@ -418,10 +418,8 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP);
+ 	cmd.flags = htonl(flags);
+ 	context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd));
+-	if (IS_ERR(context)) {
+-		err = PTR_ERR(context);
+-		goto out;
+-	}
++	if (IS_ERR(context))
++		return PTR_ERR(context);
+ 
+ 	err = mlx5_fpga_ipsec_cmd_wait(context);
+ 	if (err)
+@@ -435,6 +433,7 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	}
+ 
+ out:
++	kfree(context);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+index 08eac92fc26c..0982c579ec74 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+@@ -109,12 +109,11 @@ struct mlx5i_tx_wqe {
+ 
+ static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq,
+ 				      struct mlx5i_tx_wqe **wqe,
+-				      u16 *pi)
++				      u16 pi)
+ {
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 
+-	*pi  = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	*wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
++	*wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ 	memset(*wqe, 0, sizeof(**wqe));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index d838af9539b1..9046475c531c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,11 +39,6 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+-{
+-	return wq->fbc.frag_sz_m1 + 1;
+-}
+-
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+ {
+ 	return wq->fbc.sz_m1 + 1;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 16476cc1a602..311256554520 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+@@ -140,11 +139,6 @@ static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
+ 	return ctr & wq->fbc.sz_m1;
+ }
+ 
+-static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr)
+-{
+-	return ctr & wq->fbc.frag_sz_m1;
+-}
+-
+ static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq)
+ {
+ 	return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
+@@ -160,6 +154,11 @@ static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix)
+ 	return mlx5_frag_buf_get_wqe(&wq->fbc, ix);
+ }
+ 
++static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix)
++{
++	return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1;
++}
++
+ static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
+ {
+ 	int equal   = (cc1 == cc2);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index f9c724752a32..13636a537f37 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -985,8 +985,8 @@ static int mlxsw_devlink_core_bus_device_reload(struct devlink *devlink,
+ 					     mlxsw_core->bus,
+ 					     mlxsw_core->bus_priv, true,
+ 					     devlink);
+-	if (err)
+-		mlxsw_core->reload_fail = true;
++	mlxsw_core->reload_fail = !!err;
++
+ 	return err;
+ }
+ 
+@@ -1126,8 +1126,15 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	const char *device_kind = mlxsw_core->bus_info->device_kind;
+ 	struct devlink *devlink = priv_to_devlink(mlxsw_core);
+ 
+-	if (mlxsw_core->reload_fail)
+-		goto reload_fail;
++	if (mlxsw_core->reload_fail) {
++		if (!reload)
++			/* Only the parts that were not de-initialized in the
++			 * failed reload attempt need to be de-initialized.
++			 */
++			goto reload_fail_deinit;
++		else
++			return;
++	}
+ 
+ 	if (mlxsw_core->driver->fini)
+ 		mlxsw_core->driver->fini(mlxsw_core);
+@@ -1140,9 +1147,12 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	if (!reload)
+ 		devlink_resources_unregister(devlink, NULL);
+ 	mlxsw_core->bus->fini(mlxsw_core->bus_priv);
+-	if (reload)
+-		return;
+-reload_fail:
++
++	return;
++
++reload_fail_deinit:
++	devlink_unregister(devlink);
++	devlink_resources_unregister(devlink, NULL);
+ 	devlink_free(devlink);
+ 	mlxsw_core_driver_put(device_kind);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 6cb43dda8232..9883e48d8a21 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -2307,8 +2307,6 @@ static void mlxsw_sp_switchdev_event_work(struct work_struct *work)
+ 		break;
+ 	case SWITCHDEV_FDB_DEL_TO_DEVICE:
+ 		fdb_info = &switchdev_work->fdb_info;
+-		if (!fdb_info->added_by_user)
+-			break;
+ 		mlxsw_sp_port_fdb_set(mlxsw_sp_port, fdb_info, false);
+ 		break;
+ 	case SWITCHDEV_FDB_ADD_TO_BRIDGE: /* fall through */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index 90a2b53096e2..51bbb0e5b514 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1710,7 +1710,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 
+ 		cm_info->local_ip[0] = ntohl(iph->daddr);
+ 		cm_info->remote_ip[0] = ntohl(iph->saddr);
+-		cm_info->ip_version = TCP_IPV4;
++		cm_info->ip_version = QED_TCP_IPV4;
+ 
+ 		ip_hlen = (iph->ihl) * sizeof(u32);
+ 		*payload_len = ntohs(iph->tot_len) - ip_hlen;
+@@ -1730,7 +1730,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 			cm_info->remote_ip[i] =
+ 			    ntohl(ip6h->saddr.in6_u.u6_addr32[i]);
+ 		}
+-		cm_info->ip_version = TCP_IPV6;
++		cm_info->ip_version = QED_TCP_IPV6;
+ 
+ 		ip_hlen = sizeof(*ip6h);
+ 		*payload_len = ntohs(ip6h->payload_len);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index b5ce1581645f..79424e6f0976 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -138,23 +138,16 @@ static void qed_rdma_copy_gids(struct qed_rdma_qp *qp, __le32 *src_gid,
+ 
+ static enum roce_flavor qed_roce_mode_to_flavor(enum roce_mode roce_mode)
+ {
+-	enum roce_flavor flavor;
+-
+ 	switch (roce_mode) {
+ 	case ROCE_V1:
+-		flavor = PLAIN_ROCE;
+-		break;
++		return PLAIN_ROCE;
+ 	case ROCE_V2_IPV4:
+-		flavor = RROCE_IPV4;
+-		break;
++		return RROCE_IPV4;
+ 	case ROCE_V2_IPV6:
+-		flavor = ROCE_V2_IPV6;
+-		break;
++		return RROCE_IPV6;
+ 	default:
+-		flavor = MAX_ROCE_MODE;
+-		break;
++		return MAX_ROCE_FLAVOR;
+ 	}
+-	return flavor;
+ }
+ 
+ void qed_roce_free_cid_pair(struct qed_hwfn *p_hwfn, u16 cid)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+index 8de644b4721e..77b6248ad3b9 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+@@ -154,7 +154,7 @@ qed_set_pf_update_tunn_mode(struct qed_tunnel_info *p_tun,
+ static void qed_set_tunn_cls_info(struct qed_tunnel_info *p_tun,
+ 				  struct qed_tunnel_info *p_src)
+ {
+-	enum tunnel_clss type;
++	int type;
+ 
+ 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+ 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index be6ddde1a104..c4766e4ac485 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -413,7 +413,6 @@ static int qed_vf_pf_acquire(struct qed_hwfn *p_hwfn)
+ 	}
+ 
+ 	if (!p_iov->b_pre_fp_hsi &&
+-	    ETH_HSI_VER_MINOR &&
+ 	    (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR)) {
+ 		DP_INFO(p_hwfn,
+ 			"PF is using older fastpath HSI; %02x.%02x is configured\n",
+@@ -572,7 +571,7 @@ free_p_iov:
+ static void
+ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			   struct qed_tunn_update_type *p_src,
+-			   enum qed_tunn_clss mask, u8 *p_cls)
++			   enum qed_tunn_mode mask, u8 *p_cls)
+ {
+ 	if (p_src->b_update_mode) {
+ 		p_req->tun_mode_update_mask |= BIT(mask);
+@@ -587,7 +586,7 @@ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ static void
+ qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			 struct qed_tunn_update_type *p_src,
+-			 enum qed_tunn_clss mask,
++			 enum qed_tunn_mode mask,
+ 			 u8 *p_cls, struct qed_tunn_update_udp_port *p_port,
+ 			 u8 *p_update_port, u16 *p_udp_port)
+ {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 627c5cd8f786..f18087102d40 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7044,17 +7044,15 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
+ 	struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ 	struct net_device *dev = tp->dev;
+ 	u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
+-	int work_done= 0;
++	int work_done;
+ 	u16 status;
+ 
+ 	status = rtl_get_events(tp);
+ 	rtl_ack_events(tp, status & ~tp->event_slow);
+ 
+-	if (status & RTL_EVENT_NAPI_RX)
+-		work_done = rtl_rx(dev, tp, (u32) budget);
++	work_done = rtl_rx(dev, tp, (u32) budget);
+ 
+-	if (status & RTL_EVENT_NAPI_TX)
+-		rtl_tx(dev, tp);
++	rtl_tx(dev, tp);
+ 
+ 	if (status & tp->event_slow) {
+ 		enable_mask &= ~tp->event_slow;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index 5df1a608e566..541602d70c24 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -133,7 +133,7 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
+  */
+ int stmmac_mdio_reset(struct mii_bus *bus)
+ {
+-#if defined(CONFIG_STMMAC_PLATFORM)
++#if IS_ENABLED(CONFIG_STMMAC_PLATFORM)
+ 	struct net_device *ndev = bus->priv;
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	unsigned int mii_address = priv->hw->mii.addr;
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 16ec7af6ab7b..ba9df430fca6 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -966,6 +966,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 				 sizeof(struct yamdrv_ioctl_mcs));
+ 		if (IS_ERR(ym))
+ 			return PTR_ERR(ym);
++		if (ym->cmd != SIOCYAMSMCS)
++			return -EINVAL;
+ 		if (ym->bitrate > YAM_MAXBITRATE) {
+ 			kfree(ym);
+ 			return -EINVAL;
+@@ -981,6 +983,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 		if (copy_from_user(&yi, ifr->ifr_data, sizeof(struct yamdrv_ioctl_cfg)))
+ 			 return -EFAULT;
+ 
++		if (yi.cmd != SIOCYAMSCFG)
++			return -EINVAL;
+ 		if ((yi.cfg.mask & YAM_IOBASE) && netif_running(dev))
+ 			return -EINVAL;		/* Cannot change this parameter when up */
+ 		if ((yi.cfg.mask & YAM_IRQ) && netif_running(dev))
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index e95dd12edec4..023b8d0bf175 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -607,6 +607,9 @@ int asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 9e8ad372f419..2207f7a7d1ff 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -566,6 +566,9 @@ ax88179_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_MODE_RWLC;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index aeca484a75b8..2bb3a081ff10 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1401,19 +1401,10 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	pdata->wol = 0;
+-	if (wol->wolopts & WAKE_UCAST)
+-		pdata->wol |= WAKE_UCAST;
+-	if (wol->wolopts & WAKE_MCAST)
+-		pdata->wol |= WAKE_MCAST;
+-	if (wol->wolopts & WAKE_BCAST)
+-		pdata->wol |= WAKE_BCAST;
+-	if (wol->wolopts & WAKE_MAGIC)
+-		pdata->wol |= WAKE_MAGIC;
+-	if (wol->wolopts & WAKE_PHY)
+-		pdata->wol |= WAKE_PHY;
+-	if (wol->wolopts & WAKE_ARP)
+-		pdata->wol |= WAKE_ARP;
++	if (wol->wolopts & ~WAKE_ALL)
++		return -EINVAL;
++
++	pdata->wol = wol->wolopts;
+ 
+ 	device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 1b07bb5e110d..9a55d75f7f10 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -4503,6 +4503,9 @@ static int rtl8152_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ 	if (!rtl_can_wakeup(tp))
+ 		return -EOPNOTSUPP;
+ 
++	if (wol->wolopts & ~WAKE_ANY)
++		return -EINVAL;
++
+ 	ret = usb_autopm_get_interface(tp->intf);
+ 	if (ret < 0)
+ 		goto out_set_wol;
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index b64b1ee56d2d..ec287c9741e8 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -731,6 +731,9 @@ static int smsc75xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 06b4d290784d..262e7a3c23cb 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -774,6 +774,9 @@ static int smsc95xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc95xx_priv *pdata = (struct smsc95xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 9277a0f228df..35f39f23d881 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -421,6 +421,9 @@ sr_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= SR_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2b6ec927809e..500e2d8f10bc 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2162,8 +2162,9 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
+ 	/* Make sure no work handler is accessing the device */
+ 	flush_work(&vi->config_work);
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_detach(vi->dev);
+-	netif_tx_disable(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+ 	if (netif_running(vi->dev)) {
+@@ -2199,7 +2200,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_attach(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 80e2c8595c7c..58dd217811c8 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -519,7 +519,6 @@ struct mac80211_hwsim_data {
+ 	int channels, idx;
+ 	bool use_chanctx;
+ 	bool destroy_on_close;
+-	struct work_struct destroy_work;
+ 	u32 portid;
+ 	char alpha2[2];
+ 	const struct ieee80211_regdomain *regd;
+@@ -2812,8 +2811,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 	hwsim_radios_generation++;
+ 	spin_unlock_bh(&hwsim_radio_lock);
+ 
+-	if (idx > 0)
+-		hwsim_mcast_new_radio(idx, info, param);
++	hwsim_mcast_new_radio(idx, info, param);
+ 
+ 	return idx;
+ 
+@@ -3442,30 +3440,27 @@ static struct genl_family hwsim_genl_family __ro_after_init = {
+ 	.n_mcgrps = ARRAY_SIZE(hwsim_mcgrps),
+ };
+ 
+-static void destroy_radio(struct work_struct *work)
+-{
+-	struct mac80211_hwsim_data *data =
+-		container_of(work, struct mac80211_hwsim_data, destroy_work);
+-
+-	hwsim_radios_generation++;
+-	mac80211_hwsim_del_radio(data, wiphy_name(data->hw->wiphy), NULL);
+-}
+-
+ static void remove_user_radios(u32 portid)
+ {
+ 	struct mac80211_hwsim_data *entry, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(entry, tmp, &hwsim_radios, list) {
+ 		if (entry->destroy_on_close && entry->portid == portid) {
+-			list_del(&entry->list);
++			list_move(&entry->list, &list);
+ 			rhashtable_remove_fast(&hwsim_radios_rht, &entry->rht,
+ 					       hwsim_rht_params);
+-			INIT_WORK(&entry->destroy_work, destroy_radio);
+-			queue_work(hwsim_wq, &entry->destroy_work);
++			hwsim_radios_generation++;
+ 		}
+ 	}
+ 	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(entry, tmp, &list, list) {
++		list_del(&entry->list);
++		mac80211_hwsim_del_radio(entry, wiphy_name(entry->hw->wiphy),
++					 NULL);
++	}
+ }
+ 
+ static int mac80211_hwsim_netlink_notify(struct notifier_block *nb,
+@@ -3523,6 +3518,7 @@ static __net_init int hwsim_init_net(struct net *net)
+ static void __net_exit hwsim_exit_net(struct net *net)
+ {
+ 	struct mac80211_hwsim_data *data, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(data, tmp, &hwsim_radios, list) {
+@@ -3533,17 +3529,19 @@ static void __net_exit hwsim_exit_net(struct net *net)
+ 		if (data->netgroup == hwsim_net_get_netgroup(&init_net))
+ 			continue;
+ 
+-		list_del(&data->list);
++		list_move(&data->list, &list);
+ 		rhashtable_remove_fast(&hwsim_radios_rht, &data->rht,
+ 				       hwsim_rht_params);
+ 		hwsim_radios_generation++;
+-		spin_unlock_bh(&hwsim_radio_lock);
++	}
++	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(data, tmp, &list, list) {
++		list_del(&data->list);
+ 		mac80211_hwsim_del_radio(data,
+ 					 wiphy_name(data->hw->wiphy),
+ 					 NULL);
+-		spin_lock_bh(&hwsim_radio_lock);
+ 	}
+-	spin_unlock_bh(&hwsim_radio_lock);
+ 
+ 	ida_simple_remove(&hwsim_netgroup_ida, hwsim_net_get_netgroup(net));
+ }
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 43743c26c071..39bf85d0ade0 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1317,6 +1317,10 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+ 		if (priv->fw_ready) {
++			ret = lbs_suspend(priv);
++			if (ret)
++				return ret;
++
+ 			priv->power_up_on_resume = true;
+ 			if_sdio_power_off(card);
+ 		}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 3e18a68c2b03..054e66d93ed6 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2472,6 +2472,7 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
+ 		/* start qedi context */
+ 		spin_lock_init(&qedi->hba_lock);
+ 		spin_lock_init(&qedi->task_idx_lock);
++		mutex_init(&qedi->stats_lock);
+ 	}
+ 	qedi_ops->ll2->register_cb_ops(qedi->cdev, &qedi_ll2_cb_ops, qedi);
+ 	qedi_ops->ll2->start(qedi->cdev, &params);
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index ecb22749df0b..8cc015183043 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -2729,6 +2729,9 @@ static int qman_alloc_range(struct gen_pool *p, u32 *result, u32 cnt)
+ {
+ 	unsigned long addr;
+ 
++	if (!p)
++		return -ENODEV;
++
+ 	addr = gen_pool_alloc(p, cnt);
+ 	if (!addr)
+ 		return -ENOMEM;
+diff --git a/drivers/soc/fsl/qe/ucc.c b/drivers/soc/fsl/qe/ucc.c
+index c646d8713861..681f7d4b7724 100644
+--- a/drivers/soc/fsl/qe/ucc.c
++++ b/drivers/soc/fsl/qe/ucc.c
+@@ -626,7 +626,7 @@ static u32 ucc_get_tdm_sync_shift(enum comm_dir mode, u32 tdm_num)
+ {
+ 	u32 shift;
+ 
+-	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : RX_SYNC_SHIFT_BASE;
++	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : TX_SYNC_SHIFT_BASE;
+ 	shift -= tdm_num * 2;
+ 
+ 	return shift;
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index 500911f16498..5bad9fdec5f8 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -653,14 +653,6 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	bool approved;
+ 	u64 route;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send
+-	 * XDomain connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
+ 	depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
+ 		ICM_LINK_INFO_DEPTH_SHIFT;
+@@ -950,14 +942,6 @@ icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	if (pkg->hdr.packet_id)
+ 		return;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send device
+-	 * connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	route = get_route(pkg->route_hi, pkg->route_lo);
+ 	authorized = pkg->link_info & ICM_LINK_INFO_APPROVED;
+ 	security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
+@@ -1317,19 +1301,26 @@ static void icm_handle_notification(struct work_struct *work)
+ 
+ 	mutex_lock(&tb->lock);
+ 
+-	switch (n->pkg->code) {
+-	case ICM_EVENT_DEVICE_CONNECTED:
+-		icm->device_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_DEVICE_DISCONNECTED:
+-		icm->device_disconnected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_CONNECTED:
+-		icm->xdomain_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_DISCONNECTED:
+-		icm->xdomain_disconnected(tb, n->pkg);
+-		break;
++	/*
++	 * When the domain is stopped we flush its workqueue but before
++	 * that the root switch is removed. In that case we should treat
++	 * the queued events as being canceled.
++	 */
++	if (tb->root_switch) {
++		switch (n->pkg->code) {
++		case ICM_EVENT_DEVICE_CONNECTED:
++			icm->device_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_DEVICE_DISCONNECTED:
++			icm->device_disconnected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_CONNECTED:
++			icm->xdomain_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_DISCONNECTED:
++			icm->xdomain_disconnected(tb, n->pkg);
++			break;
++		}
+ 	}
+ 
+ 	mutex_unlock(&tb->lock);
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index f5a33e88e676..2d042150e41c 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -1147,5 +1147,5 @@ static void __exit nhi_unload(void)
+ 	tb_domain_exit();
+ }
+ 
+-fs_initcall(nhi_init);
++rootfs_initcall(nhi_init);
+ module_exit(nhi_unload);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index af842000188c..a25f6ea5c784 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -576,10 +576,6 @@ static int dw8250_probe(struct platform_device *pdev)
+ 	if (!data->skip_autocfg)
+ 		dw8250_setup_port(p);
+ 
+-#ifdef CONFIG_PM
+-	uart.capabilities |= UART_CAP_RPM;
+-#endif
+-
+ 	/* If we have a valid fifosize, try hooking up DMA */
+ 	if (p->fifosize) {
+ 		data->dma.rxconf.src_maxburst = p->fifosize / 4;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 560ed8711706..c4424cbd9943 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -30,6 +30,7 @@
+ #include <linux/sched/mm.h>
+ #include <linux/sched/signal.h>
+ #include <linux/interval_tree_generic.h>
++#include <linux/nospec.h>
+ 
+ #include "vhost.h"
+ 
+@@ -1362,6 +1363,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
+ 	if (idx >= d->nvqs)
+ 		return -ENOBUFS;
+ 
++	idx = array_index_nospec(idx, d->nvqs);
+ 	vq = d->vqs[idx];
+ 
+ 	mutex_lock(&vq->mutex);
+diff --git a/drivers/video/fbdev/pxa168fb.c b/drivers/video/fbdev/pxa168fb.c
+index def3a501acd6..d059d04c63ac 100644
+--- a/drivers/video/fbdev/pxa168fb.c
++++ b/drivers/video/fbdev/pxa168fb.c
+@@ -712,7 +712,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ 	/*
+ 	 * enable controller clock
+ 	 */
+-	clk_enable(fbi->clk);
++	clk_prepare_enable(fbi->clk);
+ 
+ 	pxa168fb_set_par(info);
+ 
+@@ -767,7 +767,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ failed_free_cmap:
+ 	fb_dealloc_cmap(&info->cmap);
+ failed_free_clk:
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ failed_free_fbmem:
+ 	dma_free_coherent(fbi->dev, info->fix.smem_len,
+ 			info->screen_base, fbi->fb_start_dma);
+@@ -807,7 +807,7 @@ static int pxa168fb_remove(struct platform_device *pdev)
+ 	dma_free_wc(fbi->dev, PAGE_ALIGN(info->fix.smem_len),
+ 		    info->screen_base, info->fix.smem_start);
+ 
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ 
+ 	framebuffer_release(info);
+ 
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index f3d0bef16d78..6127f0fcd62c 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -514,6 +514,8 @@ static int afs_alloc_anon_key(struct afs_cell *cell)
+  */
+ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ {
++	struct hlist_node **p;
++	struct afs_cell *pcell;
+ 	int ret;
+ 
+ 	if (!cell->anonymous_key) {
+@@ -534,7 +536,18 @@ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ 		return ret;
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_add_tail(&cell->proc_link, &net->proc_cells);
++	for (p = &net->proc_cells.first; *p; p = &(*p)->next) {
++		pcell = hlist_entry(*p, struct afs_cell, proc_link);
++		if (strcmp(cell->name, pcell->name) < 0)
++			break;
++	}
++
++	cell->proc_link.pprev = p;
++	cell->proc_link.next = *p;
++	rcu_assign_pointer(*p, &cell->proc_link.next);
++	if (cell->proc_link.next)
++		cell->proc_link.next->pprev = &cell->proc_link.next;
++
+ 	afs_dynroot_mkdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 	return 0;
+@@ -550,7 +563,7 @@ static void afs_deactivate_cell(struct afs_net *net, struct afs_cell *cell)
+ 	afs_proc_cell_remove(cell);
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_del_init(&cell->proc_link);
++	hlist_del_rcu(&cell->proc_link);
+ 	afs_dynroot_rmdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index 174e843f0633..7de7223843cc 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -286,7 +286,7 @@ int afs_dynroot_populate(struct super_block *sb)
+ 		return -ERESTARTSYS;
+ 
+ 	net->dynroot_sb = sb;
+-	list_for_each_entry(cell, &net->proc_cells, proc_link) {
++	hlist_for_each_entry(cell, &net->proc_cells, proc_link) {
+ 		ret = afs_dynroot_mkdir(net, cell);
+ 		if (ret < 0)
+ 			goto error;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 9778df135717..270d1caa27c6 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -241,7 +241,7 @@ struct afs_net {
+ 	seqlock_t		cells_lock;
+ 
+ 	struct mutex		proc_cells_lock;
+-	struct list_head	proc_cells;
++	struct hlist_head	proc_cells;
+ 
+ 	/* Known servers.  Theoretically each fileserver can only be in one
+ 	 * cell, but in practice, people create aliases and subsets and there's
+@@ -319,7 +319,7 @@ struct afs_cell {
+ 	struct afs_net		*net;
+ 	struct key		*anonymous_key;	/* anonymous user key for this cell */
+ 	struct work_struct	manager;	/* Manager for init/deinit/dns */
+-	struct list_head	proc_link;	/* /proc cell list link */
++	struct hlist_node	proc_link;	/* /proc cell list link */
+ #ifdef CONFIG_AFS_FSCACHE
+ 	struct fscache_cookie	*cache;		/* caching cookie */
+ #endif
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index e84fe822a960..107427688edd 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -87,7 +87,7 @@ static int __net_init afs_net_init(struct net *net_ns)
+ 	timer_setup(&net->cells_timer, afs_cells_timer, 0);
+ 
+ 	mutex_init(&net->proc_cells_lock);
+-	INIT_LIST_HEAD(&net->proc_cells);
++	INIT_HLIST_HEAD(&net->proc_cells);
+ 
+ 	seqlock_init(&net->fs_lock);
+ 	net->fs_servers = RB_ROOT;
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 476dcbb79713..9101f62707af 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -33,9 +33,8 @@ static inline struct afs_net *afs_seq2net_single(struct seq_file *m)
+ static int afs_proc_cells_show(struct seq_file *m, void *v)
+ {
+ 	struct afs_cell *cell = list_entry(v, struct afs_cell, proc_link);
+-	struct afs_net *net = afs_seq2net(m);
+ 
+-	if (v == &net->proc_cells) {
++	if (v == SEQ_START_TOKEN) {
+ 		/* display header on line 1 */
+ 		seq_puts(m, "USE NAME\n");
+ 		return 0;
+@@ -50,12 +49,12 @@ static void *afs_proc_cells_start(struct seq_file *m, loff_t *_pos)
+ 	__acquires(rcu)
+ {
+ 	rcu_read_lock();
+-	return seq_list_start_head(&afs_seq2net(m)->proc_cells, *_pos);
++	return seq_hlist_start_head_rcu(&afs_seq2net(m)->proc_cells, *_pos);
+ }
+ 
+ static void *afs_proc_cells_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	return seq_list_next(v, &afs_seq2net(m)->proc_cells, pos);
++	return seq_hlist_next_rcu(v, &afs_seq2net(m)->proc_cells, pos);
+ }
+ 
+ static void afs_proc_cells_stop(struct seq_file *m, void *v)
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 3aef8630a4b9..95d2c716e0da 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -681,6 +681,7 @@ int fat_count_free_clusters(struct super_block *sb)
+ 			if (ops->ent_get(&fatent) == FAT_ENT_FREE)
+ 				free++;
+ 		} while (fat_ent_next(sbi, &fatent));
++		cond_resched();
+ 	}
+ 	sbi->free_clusters = free;
+ 	sbi->free_clus_valid = 1;
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index 7869622af22a..7a5ee145c733 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -2946,6 +2946,7 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		if (map_end & (PAGE_SIZE - 1))
+ 			to = map_end & (PAGE_SIZE - 1);
+ 
++retry:
+ 		page = find_or_create_page(mapping, page_index, GFP_NOFS);
+ 		if (!page) {
+ 			ret = -ENOMEM;
+@@ -2954,11 +2955,18 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		}
+ 
+ 		/*
+-		 * In case PAGE_SIZE <= CLUSTER_SIZE, This page
+-		 * can't be dirtied before we CoW it out.
++		 * In case PAGE_SIZE <= CLUSTER_SIZE, we do not expect a dirty
++		 * page, so write it back.
+ 		 */
+-		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize)
+-			BUG_ON(PageDirty(page));
++		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
++			if (PageDirty(page)) {
++				/*
++				 * write_on_page will unlock the page on return
++				 */
++				ret = write_one_page(page);
++				goto retry;
++			}
++		}
+ 
+ 		if (!PageUptodate(page)) {
+ 			ret = block_read_full_page(page, ocfs2_get_block);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index e373e2e10f6a..83b930988e21 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -70,7 +70,7 @@
+  */
+ #ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
+ #define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
+-#define DATA_MAIN .data .data.[0-9a-zA-Z_]*
++#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..LPBX*
+ #define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
+ #define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]*
+ #define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*
+@@ -617,8 +617,8 @@
+ 
+ #define EXIT_DATA							\
+ 	*(.exit.data .exit.data.*)					\
+-	*(.fini_array)							\
+-	*(.dtors)							\
++	*(.fini_array .fini_array.*)					\
++	*(.dtors .dtors.*)						\
+ 	MEM_DISCARD(exit.data*)						\
+ 	MEM_DISCARD(exit.rodata*)
+ 
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index a8ba6b04152c..55e4be8b016b 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -78,6 +78,18 @@ extern void __chk_io_ptr(const volatile void __iomem *);
+ #include <linux/compiler-clang.h>
+ #endif
+ 
++/*
++ * Some architectures need to provide custom definitions of macros provided
++ * by linux/compiler-*.h, and can do so using asm/compiler.h. We include that
++ * conditionally rather than using an asm-generic wrapper in order to avoid
++ * build failures if any C compilation, which will include this file via an
++ * -include argument in c_flags, occurs prior to the asm-generic wrappers being
++ * generated.
++ */
++#ifdef CONFIG_HAVE_ARCH_COMPILER_H
++#include <asm/compiler.h>
++#endif
++
+ /*
+  * Generic compiler-dependent macros required for kernel
+  * build go below this comment. Actual compiler/compiler version
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 5382b5183b7e..82a953ec5ef0 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -94,6 +94,13 @@ struct gpio_irq_chip {
+ 	 */
+ 	unsigned int num_parents;
+ 
++	/**
++	 * @parent_irq:
++	 *
++	 * For use by gpiochip_set_cascaded_irqchip()
++	 */
++	unsigned int parent_irq;
++
+ 	/**
+ 	 * @parents:
+ 	 *
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 64f450593b54..b49bfc8e68b0 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1022,6 +1022,14 @@ static inline void *mlx5_frag_buf_get_wqe(struct mlx5_frag_buf_ctrl *fbc,
+ 		((fbc->frag_sz_m1 & ix) << fbc->log_stride);
+ }
+ 
++static inline u32
++mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
++{
++	u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
++
++	return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
++}
++
+ int mlx5_cmd_init(struct mlx5_core_dev *dev);
+ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
+ void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index dd2052f0efb7..11b7b8ab0696 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -215,6 +215,8 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
+ 		break;
+ 	case NFPROTO_ARP:
+ #ifdef CONFIG_NETFILTER_FAMILY_ARP
++		if (WARN_ON_ONCE(hook >= ARRAY_SIZE(net->nf.hooks_arp)))
++			break;
+ 		hook_head = rcu_dereference(net->nf.hooks_arp[hook]);
+ #endif
+ 		break;
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 3d4930528db0..2d31e22babd8 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -159,6 +159,10 @@ struct fib6_info {
+ 	struct rt6_info * __percpu	*rt6i_pcpu;
+ 	struct rt6_exception_bucket __rcu *rt6i_exception_bucket;
+ 
++#ifdef CONFIG_IPV6_ROUTER_PREF
++	unsigned long			last_probe;
++#endif
++
+ 	u32				fib6_metric;
+ 	u8				fib6_protocol;
+ 	u8				fib6_type;
+diff --git a/include/net/sctp/sm.h b/include/net/sctp/sm.h
+index 5ef1bad81ef5..9e3d32746430 100644
+--- a/include/net/sctp/sm.h
++++ b/include/net/sctp/sm.h
+@@ -347,7 +347,7 @@ static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
+ 	__u16 size;
+ 
+ 	size = ntohs(chunk->chunk_hdr->length);
+-	size -= sctp_datahdr_len(&chunk->asoc->stream);
++	size -= sctp_datachk_len(&chunk->asoc->stream);
+ 
+ 	return size;
+ }
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4fff00e9da8a..0a774b64fc29 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -56,7 +56,6 @@ enum rxrpc_peer_trace {
+ 	rxrpc_peer_new,
+ 	rxrpc_peer_processing,
+ 	rxrpc_peer_put,
+-	rxrpc_peer_queued_error,
+ };
+ 
+ enum rxrpc_conn_trace {
+@@ -257,8 +256,7 @@ enum rxrpc_tx_fail_trace {
+ 	EM(rxrpc_peer_got,			"GOT") \
+ 	EM(rxrpc_peer_new,			"NEW") \
+ 	EM(rxrpc_peer_processing,		"PRO") \
+-	EM(rxrpc_peer_put,			"PUT") \
+-	E_(rxrpc_peer_queued_error,		"QER")
++	E_(rxrpc_peer_put,			"PUT")
+ 
+ #define rxrpc_conn_traces \
+ 	EM(rxrpc_conn_got,			"GOT") \
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index ae22d93701db..fc072b7f839d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -8319,6 +8319,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ 			goto unlock;
+ 
+ 		list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
++			if (event->cpu != smp_processor_id())
++				continue;
+ 			if (event->attr.type != PERF_TYPE_TRACEPOINT)
+ 				continue;
+ 			if (event->attr.config != entry->type)
+@@ -9436,9 +9438,7 @@ static void free_pmu_context(struct pmu *pmu)
+ 	if (pmu->task_ctx_nr > perf_invalid_context)
+ 		return;
+ 
+-	mutex_lock(&pmus_lock);
+ 	free_percpu(pmu->pmu_cpu_context);
+-	mutex_unlock(&pmus_lock);
+ }
+ 
+ /*
+@@ -9694,12 +9694,8 @@ EXPORT_SYMBOL_GPL(perf_pmu_register);
+ 
+ void perf_pmu_unregister(struct pmu *pmu)
+ {
+-	int remove_device;
+-
+ 	mutex_lock(&pmus_lock);
+-	remove_device = pmu_bus_running;
+ 	list_del_rcu(&pmu->entry);
+-	mutex_unlock(&pmus_lock);
+ 
+ 	/*
+ 	 * We dereference the pmu list under both SRCU and regular RCU, so
+@@ -9711,13 +9707,14 @@ void perf_pmu_unregister(struct pmu *pmu)
+ 	free_percpu(pmu->pmu_disable_count);
+ 	if (pmu->type >= PERF_TYPE_MAX)
+ 		idr_remove(&pmu_idr, pmu->type);
+-	if (remove_device) {
++	if (pmu_bus_running) {
+ 		if (pmu->nr_addr_filters)
+ 			device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
+ 		device_del(pmu->dev);
+ 		put_device(pmu->dev);
+ 	}
+ 	free_pmu_context(pmu);
++	mutex_unlock(&pmus_lock);
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_unregister);
+ 
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 0e4cd64ad2c0..654977862b06 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -260,7 +260,7 @@ static void test_cycle_work(struct work_struct *work)
+ {
+ 	struct test_cycle *cycle = container_of(work, typeof(*cycle), work);
+ 	struct ww_acquire_ctx ctx;
+-	int err;
++	int err, erra = 0;
+ 
+ 	ww_acquire_init(&ctx, &ww_class);
+ 	ww_mutex_lock(&cycle->a_mutex, &ctx);
+@@ -270,17 +270,19 @@ static void test_cycle_work(struct work_struct *work)
+ 
+ 	err = ww_mutex_lock(cycle->b_mutex, &ctx);
+ 	if (err == -EDEADLK) {
++		err = 0;
+ 		ww_mutex_unlock(&cycle->a_mutex);
+ 		ww_mutex_lock_slow(cycle->b_mutex, &ctx);
+-		err = ww_mutex_lock(&cycle->a_mutex, &ctx);
++		erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
+ 	}
+ 
+ 	if (!err)
+ 		ww_mutex_unlock(cycle->b_mutex);
+-	ww_mutex_unlock(&cycle->a_mutex);
++	if (!erra)
++		ww_mutex_unlock(&cycle->a_mutex);
+ 	ww_acquire_fini(&ctx);
+ 
+-	cycle->result = err;
++	cycle->result = err ?: erra;
+ }
+ 
+ static int __test_cycle(unsigned int nthreads)
+diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
+index 6a473709e9b6..7405c9d89d65 100644
+--- a/mm/gup_benchmark.c
++++ b/mm/gup_benchmark.c
+@@ -19,7 +19,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
+ 		struct gup_benchmark *gup)
+ {
+ 	ktime_t start_time, end_time;
+-	unsigned long i, nr, nr_pages, addr, next;
++	unsigned long i, nr_pages, addr, next;
++	int nr;
+ 	struct page **pages;
+ 
+ 	nr_pages = gup->size / PAGE_SIZE;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 2a55289ee9f1..f49eb9589d73 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1415,7 +1415,7 @@ retry:
+ 				 * we encounter them after the rest of the list
+ 				 * is processed.
+ 				 */
+-				if (PageTransHuge(page)) {
++				if (PageTransHuge(page) && !PageHuge(page)) {
+ 					lock_page(page);
+ 					rc = split_huge_page_to_list(page, from);
+ 					unlock_page(page);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index fc0436407471..03822f86f288 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,17 +386,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
+-
+-	/*
+-	 * Make sure we apply some minimal pressure on default priority
+-	 * even on small cgroups. Stale objects are not only consuming memory
+-	 * by themselves, but can also hold a reference to a dying cgroup,
+-	 * preventing it from being reclaimed. A dying cgroup with all
+-	 * corresponding structures like per-cpu stats and kmem caches
+-	 * can be really big, so it may lead to a significant waste of memory.
+-	 */
+-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
+-
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 8a80d48d89c4..1b9984f653dd 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2298,9 +2298,8 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	/* LE address type */
+ 	addr_type = le_addr_type(cp->addr.type);
+ 
+-	hci_remove_irk(hdev, &cp->addr.bdaddr, addr_type);
+-
+-	err = hci_remove_ltk(hdev, &cp->addr.bdaddr, addr_type);
++	/* Abort any ongoing SMP pairing. Removes ltk and irk if they exist. */
++	err = smp_cancel_and_remove_pairing(hdev, &cp->addr.bdaddr, addr_type);
+ 	if (err < 0) {
+ 		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_UNPAIR_DEVICE,
+ 					MGMT_STATUS_NOT_PAIRED, &rp,
+@@ -2314,8 +2313,6 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		goto done;
+ 	}
+ 
+-	/* Abort any ongoing SMP pairing */
+-	smp_cancel_pairing(conn);
+ 
+ 	/* Defer clearing up the connection parameters until closing to
+ 	 * give a chance of keeping them if a repairing happens.
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 3a7b0773536b..73f7211d0431 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2422,30 +2422,51 @@ unlock:
+ 	return ret;
+ }
+ 
+-void smp_cancel_pairing(struct hci_conn *hcon)
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type)
+ {
+-	struct l2cap_conn *conn = hcon->l2cap_data;
++	struct hci_conn *hcon;
++	struct l2cap_conn *conn;
+ 	struct l2cap_chan *chan;
+ 	struct smp_chan *smp;
++	int err;
++
++	err = hci_remove_ltk(hdev, bdaddr, addr_type);
++	hci_remove_irk(hdev, bdaddr, addr_type);
++
++	hcon = hci_conn_hash_lookup_le(hdev, bdaddr, addr_type);
++	if (!hcon)
++		goto done;
+ 
++	conn = hcon->l2cap_data;
+ 	if (!conn)
+-		return;
++		goto done;
+ 
+ 	chan = conn->smp;
+ 	if (!chan)
+-		return;
++		goto done;
+ 
+ 	l2cap_chan_lock(chan);
+ 
+ 	smp = chan->data;
+ 	if (smp) {
++		/* Set keys to NULL to make sure smp_failure() does not try to
++		 * remove and free already invalidated rcu list entries. */
++		smp->ltk = NULL;
++		smp->slave_ltk = NULL;
++		smp->remote_irk = NULL;
++
+ 		if (test_bit(SMP_FLAG_COMPLETE, &smp->flags))
+ 			smp_failure(conn, 0);
+ 		else
+ 			smp_failure(conn, SMP_UNSPECIFIED);
++		err = 0;
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++
++done:
++	return err;
+ }
+ 
+ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 0ff6247eaa6c..121edadd5f8d 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -181,7 +181,8 @@ enum smp_key_pref {
+ };
+ 
+ /* SMP Commands */
+-void smp_cancel_pairing(struct hci_conn *hcon);
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type);
+ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
+ 			     enum smp_key_pref key_pref);
+ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level);
+diff --git a/net/bpfilter/bpfilter_kern.c b/net/bpfilter/bpfilter_kern.c
+index f0fc182d3db7..d5dd6b8b4248 100644
+--- a/net/bpfilter/bpfilter_kern.c
++++ b/net/bpfilter/bpfilter_kern.c
+@@ -23,9 +23,11 @@ static void shutdown_umh(struct umh_info *info)
+ 
+ 	if (!info->pid)
+ 		return;
+-	tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID);
+-	if (tsk)
++	tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);
++	if (tsk) {
+ 		force_sig(SIGKILL, tsk);
++		put_task_struct(tsk);
++	}
+ 	fput(info->pipe_to_umh);
+ 	fput(info->pipe_from_umh);
+ 	info->pid = 0;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 920665dd92db..6059a47f5e0c 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1420,7 +1420,14 @@ static void br_multicast_query_received(struct net_bridge *br,
+ 		return;
+ 
+ 	br_multicast_update_query_timer(br, query, max_delay);
+-	br_multicast_mark_router(br, port);
++
++	/* Based on RFC4541, section 2.1.1 IGMP Forwarding Rules,
++	 * the arrival port for IGMP Queries where the source address
++	 * is 0.0.0.0 should not be added to router port list.
++	 */
++	if ((saddr->proto == htons(ETH_P_IP) && saddr->u.ip4) ||
++	    saddr->proto == htons(ETH_P_IPV6))
++		br_multicast_mark_router(br, port);
+ }
+ 
+ static int br_ip4_multicast_query(struct net_bridge *br,
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 9b16eaf33819..58240cc185e7 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -834,7 +834,8 @@ static unsigned int ip_sabotage_in(void *priv,
+ 				   struct sk_buff *skb,
+ 				   const struct nf_hook_state *state)
+ {
+-	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting) {
++	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting &&
++	    !netif_is_l3_master(skb->dev)) {
+ 		state->okfn(state->net, state->sk, skb);
+ 		return NF_STOLEN;
+ 	}
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 9938952c5c78..16f0eb0970c4 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -808,8 +808,9 @@ int skb_copy_and_csum_datagram_msg(struct sk_buff *skb,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE))
+-			netdev_rx_csum_fault(skb->dev);
++		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE) &&
++		    !skb->csum_complete_sw)
++			netdev_rx_csum_fault(NULL);
+ 	}
+ 	return 0;
+ fault:
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 6c04f1bf377d..548d0e615bc7 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2461,13 +2461,17 @@ roll_back:
+ 	return ret;
+ }
+ 
+-static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr)
++static int ethtool_set_per_queue(struct net_device *dev,
++				 void __user *useraddr, u32 sub_cmd)
+ {
+ 	struct ethtool_per_queue_op per_queue_opt;
+ 
+ 	if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
+ 		return -EFAULT;
+ 
++	if (per_queue_opt.sub_command != sub_cmd)
++		return -EINVAL;
++
+ 	switch (per_queue_opt.sub_command) {
+ 	case ETHTOOL_GCOALESCE:
+ 		return ethtool_get_per_queue_coalesce(dev, useraddr, &per_queue_opt);
+@@ -2838,7 +2842,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 		rc = ethtool_get_phy_stats(dev, useraddr);
+ 		break;
+ 	case ETHTOOL_PERQUEUE:
+-		rc = ethtool_set_per_queue(dev, useraddr);
++		rc = ethtool_set_per_queue(dev, useraddr, sub_cmd);
+ 		break;
+ 	case ETHTOOL_GLINKSETTINGS:
+ 		rc = ethtool_get_link_ksettings(dev, useraddr);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 18de39dbdc30..4b25fd14bc5a 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3480,6 +3480,11 @@ static int rtnl_fdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB add only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+@@ -3584,6 +3589,11 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB delete only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 3680912f056a..c45916b91a9c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1845,8 +1845,9 @@ int pskb_trim_rcsum_slow(struct sk_buff *skb, unsigned int len)
+ 	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		int delta = skb->len - len;
+ 
+-		skb->csum = csum_sub(skb->csum,
+-				     skb_checksum(skb, len, delta, 0));
++		skb->csum = csum_block_sub(skb->csum,
++					   skb_checksum(skb, len, delta, 0),
++					   len);
+ 	}
+ 	return __pskb_trim(skb, len);
+ }
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index d14d741fb05e..9d3bdce1ad8a 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -657,10 +657,14 @@ struct sk_buff *ip_check_defrag(struct net *net, struct sk_buff *skb, u32 user)
+ 	if (ip_is_fragment(&iph)) {
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+ 		if (skb) {
+-			if (!pskb_may_pull(skb, netoff + iph.ihl * 4))
+-				return skb;
+-			if (pskb_trim_rcsum(skb, netoff + len))
+-				return skb;
++			if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) {
++				kfree_skb(skb);
++				return NULL;
++			}
++			if (pskb_trim_rcsum(skb, netoff + len)) {
++				kfree_skb(skb);
++				return NULL;
++			}
+ 			memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ 			if (ip_defrag(net, skb, user))
+ 				return NULL;
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index cafb0506c8c9..33be09791c74 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -295,8 +295,6 @@ int mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb,
+ next_entry:
+ 			e++;
+ 		}
+-		e = 0;
+-		s_e = 0;
+ 
+ 		spin_lock_bh(lock);
+ 		list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index a12df801de94..2fe7e2713350 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2124,8 +2124,24 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 	/* Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 inet_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							inet_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++		 * so don't treat it as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ 
+ /* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index bcfc00e88756..f8de2482a529 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -67,6 +67,7 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/ipv4/xfrm4_mode_transport.c b/net/ipv4/xfrm4_mode_transport.c
+index 3d36644890bb..1ad2c2c4e250 100644
+--- a/net/ipv4/xfrm4_mode_transport.c
++++ b/net/ipv4/xfrm4_mode_transport.c
+@@ -46,7 +46,6 @@ static int xfrm4_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -54,8 +53,7 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 		skb->network_header = skb->transport_header;
+ 	}
+ 	ip_hdr(skb)->tot_len = htons(skb->len + ihl);
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3484c7020fd9..ac3de1aa1cd3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4930,8 +4930,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 
+ 		/* unicast address incl. temp addr */
+ 		list_for_each_entry(ifa, &idev->addr_list, if_list) {
+-			if (++ip_idx < s_ip_idx)
+-				continue;
++			if (ip_idx < s_ip_idx)
++				goto next;
+ 			err = inet6_fill_ifaddr(skb, ifa,
+ 						NETLINK_CB(cb->skb).portid,
+ 						cb->nlh->nlmsg_seq,
+@@ -4940,6 +4940,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 			if (err < 0)
+ 				break;
+ 			nl_dump_check_consistent(cb, nlmsg_hdr(skb));
++next:
++			ip_idx++;
+ 		}
+ 		break;
+ 	}
+diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
+index 547515e8450a..377717045f8f 100644
+--- a/net/ipv6/ip6_checksum.c
++++ b/net/ipv6/ip6_checksum.c
+@@ -88,8 +88,24 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
+ 	 * Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 ip6_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							ip6_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++	 * so don't treat it as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(udp6_csum_init);
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index f5b5b0574a2d..009b508127e6 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1184,10 +1184,6 @@ route_lookup:
+ 	}
+ 	skb_dst_set(skb, dst);
+ 
+-	if (encap_limit >= 0) {
+-		init_tel_txopt(&opt, encap_limit);
+-		ipv6_push_frag_opts(skb, &opt.ops, &proto);
+-	}
+ 	hop_limit = hop_limit ? : ip6_dst_hoplimit(dst);
+ 
+ 	/* Calculate max headroom for all the headers and adjust
+@@ -1202,6 +1198,11 @@ route_lookup:
+ 	if (err)
+ 		return err;
+ 
++	if (encap_limit >= 0) {
++		init_tel_txopt(&opt, encap_limit);
++		ipv6_push_frag_opts(skb, &opt.ops, &proto);
++	}
++
+ 	skb_push(skb, sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	ipv6h = ipv6_hdr(skb);
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index f60f310785fd..131440ea6b51 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2436,17 +2436,17 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml,
+ {
+ 	int err;
+ 
+-	/* callers have the socket lock and rtnl lock
+-	 * so no other readers or writers of iml or its sflist
+-	 */
++	write_lock_bh(&iml->sflock);
+ 	if (!iml->sflist) {
+ 		/* any-source empty exclude case */
+-		return ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++	} else {
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
++				iml->sflist->sl_count, iml->sflist->sl_addr, 0);
++		sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
++		iml->sflist = NULL;
+ 	}
+-	err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
+-		iml->sflist->sl_count, iml->sflist->sl_addr, 0);
+-	sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
+-	iml->sflist = NULL;
++	write_unlock_bh(&iml->sflock);
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 0ec273997d1d..673a4a932f2a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1732,10 +1732,9 @@ int ndisc_rcv(struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+-
+ 	switch (msg->icmph.icmp6_type) {
+ 	case NDISC_NEIGHBOUR_SOLICITATION:
++		memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+ 		ndisc_recv_ns(skb);
+ 		break;
+ 
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index e4d9e6976d3c..a452d99c9f52 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -585,8 +585,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ 	    fq->q.meat == fq->q.len &&
+ 	    nf_ct_frag6_reasm(fq, skb, dev))
+ 		ret = 0;
+-	else
+-		skb_dst_drop(skb);
+ 
+ out_unlock:
+ 	spin_unlock_bh(&fq->q.lock);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ed526e257da6..a243d5249b51 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -517,10 +517,11 @@ static void rt6_probe_deferred(struct work_struct *w)
+ 
+ static void rt6_probe(struct fib6_info *rt)
+ {
+-	struct __rt6_probe_work *work;
++	struct __rt6_probe_work *work = NULL;
+ 	const struct in6_addr *nh_gw;
+ 	struct neighbour *neigh;
+ 	struct net_device *dev;
++	struct inet6_dev *idev;
+ 
+ 	/*
+ 	 * Okay, this does not seem to be appropriate
+@@ -536,15 +537,12 @@ static void rt6_probe(struct fib6_info *rt)
+ 	nh_gw = &rt->fib6_nh.nh_gw;
+ 	dev = rt->fib6_nh.nh_dev;
+ 	rcu_read_lock_bh();
++	idev = __in6_dev_get(dev);
+ 	neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
+ 	if (neigh) {
+-		struct inet6_dev *idev;
+-
+ 		if (neigh->nud_state & NUD_VALID)
+ 			goto out;
+ 
+-		idev = __in6_dev_get(dev);
+-		work = NULL;
+ 		write_lock(&neigh->lock);
+ 		if (!(neigh->nud_state & NUD_VALID) &&
+ 		    time_after(jiffies,
+@@ -554,11 +552,13 @@ static void rt6_probe(struct fib6_info *rt)
+ 				__neigh_set_probe_once(neigh);
+ 		}
+ 		write_unlock(&neigh->lock);
+-	} else {
++	} else if (time_after(jiffies, rt->last_probe +
++				       idev->cnf.rtr_probe_interval)) {
+ 		work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ 	}
+ 
+ 	if (work) {
++		rt->last_probe = jiffies;
+ 		INIT_WORK(&work->work, rt6_probe_deferred);
+ 		work->target = *nh_gw;
+ 		dev_hold(dev);
+@@ -2792,6 +2792,8 @@ static int ip6_route_check_nh_onlink(struct net *net,
+ 	grt = ip6_nh_lookup_table(net, cfg, gw_addr, tbid, 0);
+ 	if (grt) {
+ 		if (!grt->dst.error &&
++		    /* ignore match if it is the default route */
++		    grt->from && !ipv6_addr_any(&grt->from->fib6_dst.addr) &&
+ 		    (grt->rt6i_flags & flags || dev != grt->dst.dev)) {
+ 			NL_SET_ERR_MSG(extack,
+ 				       "Nexthop has invalid gateway or device mismatch");
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 39d0cab919bb..4f2c7a196365 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -762,11 +762,9 @@ static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
+ 
+ 	ret = udpv6_queue_rcv_skb(sk, skb);
+ 
+-	/* a return value > 0 means to resubmit the input, but
+-	 * it wants the return to be -protocol, or 0
+-	 */
++	/* a return value > 0 means to resubmit the input */
+ 	if (ret > 0)
+-		return -ret;
++		return ret;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 841f4a07438e..9ef490dddcea 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -59,6 +59,7 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return -1;
+ 	}
+ 
+diff --git a/net/ipv6/xfrm6_mode_transport.c b/net/ipv6/xfrm6_mode_transport.c
+index 9ad07a91708e..3c29da5defe6 100644
+--- a/net/ipv6/xfrm6_mode_transport.c
++++ b/net/ipv6/xfrm6_mode_transport.c
+@@ -51,7 +51,6 @@ static int xfrm6_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -60,8 +59,7 @@ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 	}
+ 	ipv6_hdr(skb)->payload_len = htons(skb->len + ihl -
+ 					   sizeof(struct ipv6hdr));
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 5959ce9620eb..6a74080005cf 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -170,9 +170,11 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 
+ 	if (toobig && xfrm6_local_dontfrag(skb)) {
+ 		xfrm6_local_rxpmtu(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	} else if (!skb->ignore_df && toobig && skb->sk) {
+ 		xfrm_local_error(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	}
+ 
+diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
+index c0ac522b48a1..4ff89cb7c86f 100644
+--- a/net/llc/llc_conn.c
++++ b/net/llc/llc_conn.c
+@@ -734,6 +734,7 @@ void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk)
+ 	llc_sk(sk)->sap = sap;
+ 
+ 	spin_lock_bh(&sap->sk_lock);
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sap->sk_count++;
+ 	sk_nulls_add_node_rcu(sk, laddr_hb);
+ 	hlist_add_head(&llc->dev_hash_node, dev_hb);
+diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
+index ee56f18cad3f..21526630bf65 100644
+--- a/net/mac80211/mesh.h
++++ b/net/mac80211/mesh.h
+@@ -217,7 +217,8 @@ void mesh_rmc_free(struct ieee80211_sub_if_data *sdata);
+ int mesh_rmc_init(struct ieee80211_sub_if_data *sdata);
+ void ieee80211s_init(void);
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-			      struct sta_info *sta, struct sk_buff *skb);
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st);
+ void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_mesh_teardown_sdata(struct ieee80211_sub_if_data *sdata);
+ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index daf9db3c8f24..6950cd0bf594 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -295,15 +295,12 @@ int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
+ }
+ 
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-		struct sta_info *sta, struct sk_buff *skb)
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st)
+ {
+-	struct ieee80211_tx_info *txinfo = IEEE80211_SKB_CB(skb);
+-	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
++	struct ieee80211_tx_info *txinfo = st->info;
+ 	int failed;
+ 
+-	if (!ieee80211_is_data(hdr->frame_control))
+-		return;
+-
+ 	failed = !(txinfo->flags & IEEE80211_TX_STAT_ACK);
+ 
+ 	/* moving average, scaled to 100.
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index 9a6d7208bf4f..91d7c0cd1882 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -479,11 +479,6 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 	if (!skb)
+ 		return;
+ 
+-	if (dropped) {
+-		dev_kfree_skb_any(skb);
+-		return;
+-	}
+-
+ 	if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) {
+ 		u64 cookie = IEEE80211_SKB_CB(skb)->ack.cookie;
+ 		struct ieee80211_sub_if_data *sdata;
+@@ -506,6 +501,8 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 		}
+ 		rcu_read_unlock();
+ 
++		dev_kfree_skb_any(skb);
++	} else if (dropped) {
+ 		dev_kfree_skb_any(skb);
+ 	} else {
+ 		/* consumes skb */
+@@ -811,7 +808,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ 
+ 		rate_control_tx_status(local, sband, status);
+ 		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
+-			ieee80211s_update_metric(local, sta, skb);
++			ieee80211s_update_metric(local, sta, status);
+ 
+ 		if (!(info->flags & IEEE80211_TX_CTL_INJECTED) && acked)
+ 			ieee80211_frame_acked(sta, skb);
+@@ -972,6 +969,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		rate_control_tx_status(local, sband, status);
++		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
++			ieee80211s_update_metric(local, sta, status);
+ 	}
+ 
+ 	if (acked || noack_success) {
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 5cd5e6e5834e..6c647f425e05 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -16,6 +16,7 @@
+ #include "ieee80211_i.h"
+ #include "driver-ops.h"
+ #include "rate.h"
++#include "wme.h"
+ 
+ /* give usermode some time for retries in setting up the TDLS session */
+ #define TDLS_PEER_SETUP_TIMEOUT	(15 * HZ)
+@@ -1010,14 +1011,13 @@ ieee80211_tdls_prep_mgmt_packet(struct wiphy *wiphy, struct net_device *dev,
+ 	switch (action_code) {
+ 	case WLAN_TDLS_SETUP_REQUEST:
+ 	case WLAN_TDLS_SETUP_RESPONSE:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_BK);
+-		skb->priority = 2;
++		skb->priority = 256 + 2;
+ 		break;
+ 	default:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_VI);
+-		skb->priority = 5;
++		skb->priority = 256 + 5;
+ 		break;
+ 	}
++	skb_set_queue_mapping(skb, ieee80211_select_queue(sdata, skb));
+ 
+ 	/*
+ 	 * Set the WLAN_TDLS_TEARDOWN flag to indicate a teardown in progress.
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 9b3b069e418a..361f2f6cc839 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -1886,7 +1886,7 @@ static bool ieee80211_tx(struct ieee80211_sub_if_data *sdata,
+ 			sdata->vif.hw_queue[skb_get_queue_mapping(skb)];
+ 
+ 	if (invoke_tx_handlers_early(&tx))
+-		return false;
++		return true;
+ 
+ 	if (ieee80211_queue_skb(local, sdata, tx.sta, tx.skb))
+ 		return true;
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 8e67910185a0..1004fb5930de 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -1239,8 +1239,8 @@ static const struct nla_policy tcp_nla_policy[CTA_PROTOINFO_TCP_MAX+1] = {
+ #define TCP_NLATTR_SIZE	( \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))))
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)) + \
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)))
+ 
+ static int nlattr_to_tcp(struct nlattr *cda[], struct nf_conn *ct)
+ {
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 9873d734b494..8ad78b82c8e2 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -355,12 +355,11 @@ cont:
+ 
+ static void nft_rbtree_gc(struct work_struct *work)
+ {
++	struct nft_rbtree_elem *rbe, *rbe_end = NULL, *rbe_prev = NULL;
+ 	struct nft_set_gc_batch *gcb = NULL;
+-	struct rb_node *node, *prev = NULL;
+-	struct nft_rbtree_elem *rbe;
+ 	struct nft_rbtree *priv;
++	struct rb_node *node;
+ 	struct nft_set *set;
+-	int i;
+ 
+ 	priv = container_of(work, struct nft_rbtree, gc_work.work);
+ 	set  = nft_set_container_of(priv);
+@@ -371,7 +370,7 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+ 		if (nft_rbtree_interval_end(rbe)) {
+-			prev = node;
++			rbe_end = rbe;
+ 			continue;
+ 		}
+ 		if (!nft_set_elem_expired(&rbe->ext))
+@@ -379,29 +378,30 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		if (nft_set_elem_mark_busy(&rbe->ext))
+ 			continue;
+ 
++		if (rbe_prev) {
++			rb_erase(&rbe_prev->node, &priv->root);
++			rbe_prev = NULL;
++		}
+ 		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
+ 		if (!gcb)
+ 			break;
+ 
+ 		atomic_dec(&set->nelems);
+ 		nft_set_gc_batch_add(gcb, rbe);
++		rbe_prev = rbe;
+ 
+-		if (prev) {
+-			rbe = rb_entry(prev, struct nft_rbtree_elem, node);
++		if (rbe_end) {
+ 			atomic_dec(&set->nelems);
+-			nft_set_gc_batch_add(gcb, rbe);
+-			prev = NULL;
++			nft_set_gc_batch_add(gcb, rbe_end);
++			rb_erase(&rbe_end->node, &priv->root);
++			rbe_end = NULL;
+ 		}
+ 		node = rb_next(node);
+ 		if (!node)
+ 			break;
+ 	}
+-	if (gcb) {
+-		for (i = 0; i < gcb->head.cnt; i++) {
+-			rbe = gcb->elems[i];
+-			rb_erase(&rbe->node, &priv->root);
+-		}
+-	}
++	if (rbe_prev)
++		rb_erase(&rbe_prev->node, &priv->root);
+ 	write_seqcount_end(&priv->count);
+ 	write_unlock_bh(&priv->lock);
+ 
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 492ab0c36f7c..8b1ba43b1ece 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2990,7 +2990,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			 * is already present */
+ 			if (mac_proto != MAC_PROTO_NONE)
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_NONE;
++			mac_proto = MAC_PROTO_ETHERNET;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_POP_ETH:
+@@ -2998,7 +2998,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				return -EINVAL;
+ 			if (vlan_tci & htons(VLAN_TAG_PRESENT))
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_ETHERNET;
++			mac_proto = MAC_PROTO_NONE;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_PUSH_NSH:
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 59f17a2335f4..0e54ca0f4e9e 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1006,7 +1006,8 @@ static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
+ 	return ret;
+ }
+ 
+-static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
++static int rds_send_mprds_hash(struct rds_sock *rs,
++			       struct rds_connection *conn, int nonblock)
+ {
+ 	int hash;
+ 
+@@ -1022,10 +1023,16 @@ static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
+ 		 * used.  But if we are interrupted, we have to use the zero
+ 		 * c_path in case the connection ends up being non-MP capable.
+ 		 */
+-		if (conn->c_npaths == 0)
++		if (conn->c_npaths == 0) {
++			/* Cannot wait for the connection be made, so just use
++			 * the base c_path.
++			 */
++			if (nonblock)
++				return 0;
+ 			if (wait_event_interruptible(conn->c_hs_waitq,
+ 						     conn->c_npaths != 0))
+ 				hash = 0;
++		}
+ 		if (conn->c_npaths == 1)
+ 			hash = 0;
+ 	}
+@@ -1170,7 +1177,7 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 	}
+ 
+ 	if (conn->c_trans->t_mp_capable)
+-		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
++		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn, nonblock)];
+ 	else
+ 		cpath = &conn->c_path[0];
+ 
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 707630ab4713..330372c04940 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -293,7 +293,6 @@ struct rxrpc_peer {
+ 	struct hlist_node	hash_link;
+ 	struct rxrpc_local	*local;
+ 	struct hlist_head	error_targets;	/* targets for net error distribution */
+-	struct work_struct	error_distributor;
+ 	struct rb_root		service_conns;	/* Service connections */
+ 	struct list_head	keepalive_link;	/* Link in net->peer_keepalive[] */
+ 	time64_t		last_tx_at;	/* Last time packet sent here */
+@@ -304,8 +303,6 @@ struct rxrpc_peer {
+ 	unsigned int		maxdata;	/* data size (MTU - hdrsize) */
+ 	unsigned short		hdrsize;	/* header size (IP + UDP + RxRPC) */
+ 	int			debug_id;	/* debug ID for printks */
+-	int			error_report;	/* Net (+0) or local (+1000000) to distribute */
+-#define RXRPC_LOCAL_ERROR_OFFSET 1000000
+ 	struct sockaddr_rxrpc	srx;		/* remote address */
+ 
+ 	/* calculated RTT cache */
+@@ -449,8 +446,7 @@ struct rxrpc_connection {
+ 	spinlock_t		state_lock;	/* state-change lock */
+ 	enum rxrpc_conn_cache_state cache_state;
+ 	enum rxrpc_conn_proto_state state;	/* current state of connection */
+-	u32			local_abort;	/* local abort code */
+-	u32			remote_abort;	/* remote abort code */
++	u32			abort_code;	/* Abort code of connection abort */
+ 	int			debug_id;	/* debug ID for printks */
+ 	atomic_t		serial;		/* packet serial number counter */
+ 	unsigned int		hi_serial;	/* highest serial number received */
+@@ -460,8 +456,19 @@ struct rxrpc_connection {
+ 	u8			security_size;	/* security header size */
+ 	u8			security_ix;	/* security type */
+ 	u8			out_clientflag;	/* RXRPC_CLIENT_INITIATED if we are client */
++	short			error;		/* Local error code */
+ };
+ 
++static inline bool rxrpc_to_server(const struct rxrpc_skb_priv *sp)
++{
++	return sp->hdr.flags & RXRPC_CLIENT_INITIATED;
++}
++
++static inline bool rxrpc_to_client(const struct rxrpc_skb_priv *sp)
++{
++	return !rxrpc_to_server(sp);
++}
++
+ /*
+  * Flags in call->flags.
+  */
+@@ -1029,7 +1036,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
+  * peer_event.c
+  */
+ void rxrpc_error_report(struct sock *);
+-void rxrpc_peer_error_distributor(struct work_struct *);
+ void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+ 			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+ void rxrpc_peer_keepalive_worker(struct work_struct *);
+@@ -1048,7 +1054,6 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *);
+ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *);
+ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *);
+ void rxrpc_put_peer(struct rxrpc_peer *);
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *);
+ 
+ /*
+  * proc.c
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 9d1e298b784c..0e378d73e856 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -422,11 +422,11 @@ found_service:
+ 
+ 	case RXRPC_CONN_REMOTELY_ABORTED:
+ 		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
+-					  conn->remote_abort, -ECONNABORTED);
++					  conn->abort_code, conn->error);
+ 		break;
+ 	case RXRPC_CONN_LOCALLY_ABORTED:
+ 		rxrpc_abort_call("CON", call, sp->hdr.seq,
+-				 conn->local_abort, -ECONNABORTED);
++				 conn->abort_code, conn->error);
+ 		break;
+ 	default:
+ 		BUG();
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f6734d8cb01a..ed69257203c2 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -400,7 +400,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ 	rcu_assign_pointer(conn->channels[chan].call, call);
+ 
+ 	spin_lock(&conn->params.peer->lock);
+-	hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
+ 	spin_unlock(&conn->params.peer->lock);
+ 
+ 	_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index 5736f643c516..0be19132202b 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -709,8 +709,8 @@ int rxrpc_connect_call(struct rxrpc_call *call,
+ 	}
+ 
+ 	spin_lock_bh(&call->conn->params.peer->lock);
+-	hlist_add_head(&call->error_link,
+-		       &call->conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link,
++			   &call->conn->params.peer->error_targets);
+ 	spin_unlock_bh(&call->conn->params.peer->lock);
+ 
+ out:
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 3fde001fcc39..5e7c8239e703 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -126,7 +126,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 
+ 	switch (chan->last_type) {
+ 	case RXRPC_PACKET_TYPE_ABORT:
+-		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->local_abort);
++		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code);
+ 		break;
+ 	case RXRPC_PACKET_TYPE_ACK:
+ 		trace_rxrpc_tx_ack(NULL, serial, chan->last_seq, 0,
+@@ -148,13 +148,12 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+  * pass a connection-level abort onto all calls on that connection
+  */
+ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+-			      enum rxrpc_call_completion compl,
+-			      u32 abort_code, int error)
++			      enum rxrpc_call_completion compl)
+ {
+ 	struct rxrpc_call *call;
+ 	int i;
+ 
+-	_enter("{%d},%x", conn->debug_id, abort_code);
++	_enter("{%d},%x", conn->debug_id, conn->abort_code);
+ 
+ 	spin_lock(&conn->channel_lock);
+ 
+@@ -167,9 +166,11 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+ 				trace_rxrpc_abort(call->debug_id,
+ 						  "CON", call->cid,
+ 						  call->call_id, 0,
+-						  abort_code, error);
++						  conn->abort_code,
++						  conn->error);
+ 			if (rxrpc_set_call_completion(call, compl,
+-						      abort_code, error))
++						      conn->abort_code,
++						      conn->error))
+ 				rxrpc_notify_socket(call);
+ 		}
+ 	}
+@@ -202,10 +203,12 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 		return 0;
+ 	}
+ 
++	conn->error = error;
++	conn->abort_code = abort_code;
+ 	conn->state = RXRPC_CONN_LOCALLY_ABORTED;
+ 	spin_unlock_bh(&conn->state_lock);
+ 
+-	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, abort_code, error);
++	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED);
+ 
+ 	msg.msg_name	= &conn->params.peer->srx.transport;
+ 	msg.msg_namelen	= conn->params.peer->srx.transport_len;
+@@ -224,7 +227,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 	whdr._rsvd	= 0;
+ 	whdr.serviceId	= htons(conn->service_id);
+ 
+-	word		= htonl(conn->local_abort);
++	word		= htonl(conn->abort_code);
+ 
+ 	iov[0].iov_base	= &whdr;
+ 	iov[0].iov_len	= sizeof(whdr);
+@@ -235,7 +238,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 
+ 	serial = atomic_inc_return(&conn->serial);
+ 	whdr.serial = htonl(serial);
+-	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->local_abort);
++	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code);
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ 	if (ret < 0) {
+@@ -308,9 +311,10 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ 		abort_code = ntohl(wtmp);
+ 		_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);
+ 
++		conn->error = -ECONNABORTED;
++		conn->abort_code = abort_code;
+ 		conn->state = RXRPC_CONN_REMOTELY_ABORTED;
+-		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED,
+-				  abort_code, -ECONNABORTED);
++		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED);
+ 		return -ECONNABORTED;
+ 
+ 	case RXRPC_PACKET_TYPE_CHALLENGE:
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 4c77a78a252a..e0d6d0fb7426 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -99,7 +99,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+ 	k.epoch	= sp->hdr.epoch;
+ 	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
+ 
+-	if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
++	if (rxrpc_to_server(sp)) {
+ 		/* We need to look up service connections by the full protocol
+ 		 * parameter set.  We look up the peer first as an intermediate
+ 		 * step and then the connection from the peer's tree.
+@@ -214,7 +214,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
+ 	call->peer->cong_cwnd = call->cong_cwnd;
+ 
+ 	spin_lock_bh(&conn->params.peer->lock);
+-	hlist_del_init(&call->error_link);
++	hlist_del_rcu(&call->error_link);
+ 	spin_unlock_bh(&conn->params.peer->lock);
+ 
+ 	if (rxrpc_is_client_call(call))
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 608d078a4981..a81240845224 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -216,10 +216,11 @@ static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb,
+ /*
+  * Apply a hard ACK by advancing the Tx window.
+  */
+-static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
++static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 				   struct rxrpc_ack_summary *summary)
+ {
+ 	struct sk_buff *skb, *list = NULL;
++	bool rot_last = false;
+ 	int ix;
+ 	u8 annotation;
+ 
+@@ -243,15 +244,17 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = list;
+ 		list = skb;
+ 
+-		if (annotation & RXRPC_TX_ANNO_LAST)
++		if (annotation & RXRPC_TX_ANNO_LAST) {
+ 			set_bit(RXRPC_CALL_TX_LAST, &call->flags);
++			rot_last = true;
++		}
+ 		if ((annotation & RXRPC_TX_ANNO_MASK) != RXRPC_TX_ANNO_ACK)
+ 			summary->nr_rot_new_acks++;
+ 	}
+ 
+ 	spin_unlock(&call->lock);
+ 
+-	trace_rxrpc_transmit(call, (test_bit(RXRPC_CALL_TX_LAST, &call->flags) ?
++	trace_rxrpc_transmit(call, (rot_last ?
+ 				    rxrpc_transmit_rotate_last :
+ 				    rxrpc_transmit_rotate));
+ 	wake_up(&call->waitq);
+@@ -262,6 +265,8 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = NULL;
+ 		rxrpc_free_skb(skb, rxrpc_skb_tx_freed);
+ 	}
++
++	return rot_last;
+ }
+ 
+ /*
+@@ -273,23 +278,26 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 			       const char *abort_why)
+ {
++	unsigned int state;
+ 
+ 	ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags));
+ 
+ 	write_lock(&call->state_lock);
+ 
+-	switch (call->state) {
++	state = call->state;
++	switch (state) {
+ 	case RXRPC_CALL_CLIENT_SEND_REQUEST:
+ 	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
+ 		if (reply_begun)
+-			call->state = RXRPC_CALL_CLIENT_RECV_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_RECV_REPLY;
+ 		else
+-			call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
+ 		break;
+ 
+ 	case RXRPC_CALL_SERVER_AWAIT_ACK:
+ 		__rxrpc_call_completed(call);
+ 		rxrpc_notify_socket(call);
++		state = call->state;
+ 		break;
+ 
+ 	default:
+@@ -297,11 +305,10 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 	}
+ 
+ 	write_unlock(&call->state_lock);
+-	if (call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY) {
++	if (state == RXRPC_CALL_CLIENT_AWAIT_REPLY)
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_await_reply);
+-	} else {
++	else
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_end);
+-	}
+ 	_leave(" = ok");
+ 	return true;
+ 
+@@ -332,11 +339,11 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call)
+ 		trace_rxrpc_timer(call, rxrpc_timer_init_for_reply, now);
+ 	}
+ 
+-	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags))
+-		rxrpc_rotate_tx_window(call, top, &summary);
+ 	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_proto_abort("TXL", call, top);
+-		return false;
++		if (!rxrpc_rotate_tx_window(call, top, &summary)) {
++			rxrpc_proto_abort("TXL", call, top);
++			return false;
++		}
+ 	}
+ 	if (!rxrpc_end_tx_phase(call, true, "ETD"))
+ 		return false;
+@@ -616,13 +623,14 @@ static void rxrpc_input_requested_ack(struct rxrpc_call *call,
+ 		if (!skb)
+ 			continue;
+ 
++		sent_at = skb->tstamp;
++		smp_rmb(); /* Read timestamp before serial. */
+ 		sp = rxrpc_skb(skb);
+ 		if (sp->hdr.serial != orig_serial)
+ 			continue;
+-		smp_rmb();
+-		sent_at = skb->tstamp;
+ 		goto found;
+ 	}
++
+ 	return;
+ 
+ found:
+@@ -854,6 +862,16 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				  rxrpc_propose_ack_respond_to_ack);
+ 	}
+ 
++	/* Discard any out-of-order or duplicate ACKs. */
++	if (before_eq(sp->hdr.serial, call->acks_latest)) {
++		_debug("discard ACK %d <= %d",
++		       sp->hdr.serial, call->acks_latest);
++		return;
++	}
++	call->acks_latest_ts = skb->tstamp;
++	call->acks_latest = sp->hdr.serial;
++
++	/* Parse rwind and mtu sizes if provided. */
+ 	ioffset = offset + nr_acks + 3;
+ 	if (skb->len >= ioffset + sizeof(buf.info)) {
+ 		if (skb_copy_bits(skb, ioffset, &buf.info, sizeof(buf.info)) < 0)
+@@ -875,23 +893,18 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 		return;
+ 	}
+ 
+-	/* Discard any out-of-order or duplicate ACKs. */
+-	if (before_eq(sp->hdr.serial, call->acks_latest)) {
+-		_debug("discard ACK %d <= %d",
+-		       sp->hdr.serial, call->acks_latest);
+-		return;
+-	}
+-	call->acks_latest_ts = skb->tstamp;
+-	call->acks_latest = sp->hdr.serial;
+-
+ 	if (before(hard_ack, call->tx_hard_ack) ||
+ 	    after(hard_ack, call->tx_top))
+ 		return rxrpc_proto_abort("AKW", call, 0);
+ 	if (nr_acks > call->tx_top - hard_ack)
+ 		return rxrpc_proto_abort("AKN", call, 0);
+ 
+-	if (after(hard_ack, call->tx_hard_ack))
+-		rxrpc_rotate_tx_window(call, hard_ack, &summary);
++	if (after(hard_ack, call->tx_hard_ack)) {
++		if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) {
++			rxrpc_end_tx_phase(call, false, "ETA");
++			return;
++		}
++	}
+ 
+ 	if (nr_acks > 0) {
+ 		if (skb_copy_bits(skb, offset, buf.acks, nr_acks) < 0)
+@@ -900,11 +913,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				      &summary);
+ 	}
+ 
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_end_tx_phase(call, false, "ETA");
+-		return;
+-	}
+-
+ 	if (call->rxtx_annotations[call->tx_top & RXRPC_RXTX_BUFF_MASK] &
+ 	    RXRPC_TX_ANNO_LAST &&
+ 	    summary.nr_acks == call->tx_top - hard_ack &&
+@@ -926,8 +934,7 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
+ 
+ 	_proto("Rx ACKALL %%%u", sp->hdr.serial);
+ 
+-	rxrpc_rotate_tx_window(call, call->tx_top, &summary);
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags))
++	if (rxrpc_rotate_tx_window(call, call->tx_top, &summary))
+ 		rxrpc_end_tx_phase(call, false, "ETL");
+ }
+ 
+@@ -1137,6 +1144,9 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		return;
+ 	}
+ 
++	if (skb->tstamp == 0)
++		skb->tstamp = ktime_get_real();
++
+ 	rxrpc_new_skb(skb, rxrpc_skb_rx_received);
+ 
+ 	_net("recv skb %p", skb);
+@@ -1171,10 +1181,6 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	trace_rxrpc_rx_packet(sp);
+ 
+-	_net("Rx RxRPC %s ep=%x call=%x:%x",
+-	     sp->hdr.flags & RXRPC_CLIENT_INITIATED ? "ToServer" : "ToClient",
+-	     sp->hdr.epoch, sp->hdr.cid, sp->hdr.callNumber);
+-
+ 	if (sp->hdr.type >= RXRPC_N_PACKET_TYPES ||
+ 	    !((RXRPC_SUPPORTED_PACKET_TYPES >> sp->hdr.type) & 1)) {
+ 		_proto("Rx Bad Packet Type %u", sp->hdr.type);
+@@ -1183,13 +1189,13 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	switch (sp->hdr.type) {
+ 	case RXRPC_PACKET_TYPE_VERSION:
+-		if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED))
++		if (rxrpc_to_client(sp))
+ 			goto discard;
+ 		rxrpc_post_packet_to_local(local, skb);
+ 		goto out;
+ 
+ 	case RXRPC_PACKET_TYPE_BUSY:
+-		if (sp->hdr.flags & RXRPC_CLIENT_INITIATED)
++		if (rxrpc_to_server(sp))
+ 			goto discard;
+ 		/* Fall through */
+ 
+@@ -1269,7 +1275,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		call = rcu_dereference(chan->call);
+ 
+ 		if (sp->hdr.callNumber > chan->call_id) {
+-			if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED)) {
++			if (rxrpc_to_client(sp)) {
+ 				rcu_read_unlock();
+ 				goto reject_packet;
+ 			}
+@@ -1292,7 +1298,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 	}
+ 
+ 	if (!call || atomic_read(&call->usage) == 0) {
+-		if (!(sp->hdr.type & RXRPC_CLIENT_INITIATED) ||
++		if (rxrpc_to_client(sp) ||
+ 		    sp->hdr.callNumber == 0 ||
+ 		    sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
+ 			goto bad_message_unlock;
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index b493e6b62740..386dc1f20c73 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -135,10 +135,10 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 	}
+ 
+ 	switch (local->srx.transport.family) {
+-	case AF_INET:
+-		/* we want to receive ICMP errors */
++	case AF_INET6:
++		/* we want to receive ICMPv6 errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -146,19 +146,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IP_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
++		opt = IPV6_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
+-		break;
+ 
+-	case AF_INET6:
++		/* Fall through and set IPv4 options too otherwise we don't get
++		 * errors from IPv4 packets sent through the IPv6 socket.
++		 */
++
++	case AF_INET:
+ 		/* we want to receive ICMP errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -166,13 +169,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IPV6_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
++		opt = IP_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
++
++		/* We want receive timestamps. */
++		opt = 1;
++		ret = kernel_setsockopt(local->socket, SOL_SOCKET, SO_TIMESTAMPNS,
++					(char *)&opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
+ 		break;
+ 
+ 	default:
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 4774c8f5634d..6ac21bb2071d 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -124,7 +124,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	struct kvec iov[2];
+ 	rxrpc_serial_t serial;
+ 	rxrpc_seq_t hard_ack, top;
+-	ktime_t now;
+ 	size_t len, n;
+ 	int ret;
+ 	u8 reason;
+@@ -196,9 +195,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 		/* We need to stick a time in before we send the packet in case
+ 		 * the reply gets back before kernel_sendmsg() completes - but
+ 		 * asking UDP to send the packet can take a relatively long
+-		 * time, so we update the time after, on the assumption that
+-		 * the packet transmission is more likely to happen towards the
+-		 * end of the kernel_sendmsg() call.
++		 * time.
+ 		 */
+ 		call->ping_time = ktime_get_real();
+ 		set_bit(RXRPC_CALL_PINGING, &call->flags);
+@@ -206,9 +203,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	}
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+-	now = ktime_get_real();
+-	if (ping)
+-		call->ping_time = now;
+ 	conn->params.peer->last_tx_at = ktime_get_seconds();
+ 	if (ret < 0)
+ 		trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+@@ -357,8 +351,14 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 
+ 	/* If our RTT cache needs working on, request an ACK.  Also request
+ 	 * ACKs if a DATA packet appears to have been lost.
++	 *
++	 * However, we mustn't request an ACK on the last reply packet of a
++	 * service call, lest OpenAFS incorrectly send us an ACK with some
++	 * soft-ACKs in it and then never follow up with a proper hard ACK.
+ 	 */
+-	if (!(sp->hdr.flags & RXRPC_LAST_PACKET) &&
++	if ((!(sp->hdr.flags & RXRPC_LAST_PACKET) ||
++	     rxrpc_to_server(sp)
++	     ) &&
+ 	    (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) ||
+ 	     retrans ||
+ 	     call->cong_mode == RXRPC_CALL_SLOW_START ||
+@@ -384,6 +384,11 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 		goto send_fragmentable;
+ 
+ 	down_read(&conn->params.local->defrag_sem);
++
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	/* send the packet by UDP
+ 	 * - returns -EMSGSIZE if UDP would have to fragment the packet
+ 	 *   to go out of the interface
+@@ -404,12 +409,8 @@ done:
+ 	trace_rxrpc_tx_data(call, sp->hdr.seq, serial, whdr.flags,
+ 			    retrans, lost);
+ 	if (ret >= 0) {
+-		ktime_t now = ktime_get_real();
+-		skb->tstamp = now;
+-		smp_wmb();
+-		sp->hdr.serial = serial;
+ 		if (whdr.flags & RXRPC_REQUEST_ACK) {
+-			call->peer->rtt_last_req = now;
++			call->peer->rtt_last_req = skb->tstamp;
+ 			trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
+ 			if (call->peer->rtt_usage > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+@@ -448,6 +449,10 @@ send_fragmentable:
+ 
+ 	down_write(&conn->params.local->defrag_sem);
+ 
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	switch (conn->params.local->srx.transport.family) {
+ 	case AF_INET:
+ 		opt = IP_PMTUDISC_DONT;
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 4f9da2f51c69..f3e6fc670da2 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -23,6 +23,8 @@
+ #include "ar-internal.h"
+ 
+ static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *);
++static void rxrpc_distribute_error(struct rxrpc_peer *, int,
++				   enum rxrpc_call_completion);
+ 
+ /*
+  * Find the peer associated with an ICMP packet.
+@@ -194,8 +196,6 @@ void rxrpc_error_report(struct sock *sk)
+ 	rcu_read_unlock();
+ 	rxrpc_free_skb(skb, rxrpc_skb_rx_freed);
+ 
+-	/* The ref we obtained is passed off to the work item */
+-	__rxrpc_queue_peer_error(peer);
+ 	_leave("");
+ }
+ 
+@@ -205,6 +205,7 @@ void rxrpc_error_report(struct sock *sk)
+ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 			      struct sock_exterr_skb *serr)
+ {
++	enum rxrpc_call_completion compl = RXRPC_CALL_NETWORK_ERROR;
+ 	struct sock_extended_err *ee;
+ 	int err;
+ 
+@@ -255,7 +256,7 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 	case SO_EE_ORIGIN_NONE:
+ 	case SO_EE_ORIGIN_LOCAL:
+ 		_proto("Rx Received local error { error=%d }", err);
+-		err += RXRPC_LOCAL_ERROR_OFFSET;
++		compl = RXRPC_CALL_LOCAL_ERROR;
+ 		break;
+ 
+ 	case SO_EE_ORIGIN_ICMP6:
+@@ -264,48 +265,23 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 		break;
+ 	}
+ 
+-	peer->error_report = err;
++	rxrpc_distribute_error(peer, err, compl);
+ }
+ 
+ /*
+- * Distribute an error that occurred on a peer
++ * Distribute an error that occurred on a peer.
+  */
+-void rxrpc_peer_error_distributor(struct work_struct *work)
++static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
++				   enum rxrpc_call_completion compl)
+ {
+-	struct rxrpc_peer *peer =
+-		container_of(work, struct rxrpc_peer, error_distributor);
+ 	struct rxrpc_call *call;
+-	enum rxrpc_call_completion compl;
+-	int error;
+-
+-	_enter("");
+-
+-	error = READ_ONCE(peer->error_report);
+-	if (error < RXRPC_LOCAL_ERROR_OFFSET) {
+-		compl = RXRPC_CALL_NETWORK_ERROR;
+-	} else {
+-		compl = RXRPC_CALL_LOCAL_ERROR;
+-		error -= RXRPC_LOCAL_ERROR_OFFSET;
+-	}
+ 
+-	_debug("ISSUE ERROR %s %d", rxrpc_call_completions[compl], error);
+-
+-	spin_lock_bh(&peer->lock);
+-
+-	while (!hlist_empty(&peer->error_targets)) {
+-		call = hlist_entry(peer->error_targets.first,
+-				   struct rxrpc_call, error_link);
+-		hlist_del_init(&call->error_link);
++	hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) {
+ 		rxrpc_see_call(call);
+-
+-		if (rxrpc_set_call_completion(call, compl, 0, -error))
++		if (call->state < RXRPC_CALL_COMPLETE &&
++		    rxrpc_set_call_completion(call, compl, 0, -error))
+ 			rxrpc_notify_socket(call);
+ 	}
+-
+-	spin_unlock_bh(&peer->lock);
+-
+-	rxrpc_put_peer(peer);
+-	_leave("");
+ }
+ 
+ /*
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 24ec7cdcf332..ef4c2e8a35cc 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -222,8 +222,6 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ 		atomic_set(&peer->usage, 1);
+ 		peer->local = local;
+ 		INIT_HLIST_HEAD(&peer->error_targets);
+-		INIT_WORK(&peer->error_distributor,
+-			  &rxrpc_peer_error_distributor);
+ 		peer->service_conns = RB_ROOT;
+ 		seqlock_init(&peer->service_conn_lock);
+ 		spin_lock_init(&peer->lock);
+@@ -415,21 +413,6 @@ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+ 	return peer;
+ }
+ 
+-/*
+- * Queue a peer record.  This passes the caller's ref to the workqueue.
+- */
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *peer)
+-{
+-	const void *here = __builtin_return_address(0);
+-	int n;
+-
+-	n = atomic_read(&peer->usage);
+-	if (rxrpc_queue_work(&peer->error_distributor))
+-		trace_rxrpc_peer(peer, rxrpc_peer_queued_error, n, here);
+-	else
+-		rxrpc_put_peer(peer);
+-}
+-
+ /*
+  * Discard a peer record.
+  */
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index f74513a7c7a8..c855fd045a3c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -31,6 +31,8 @@
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+ 
++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
++
+ /* The list of all installed classifier types */
+ static LIST_HEAD(tcf_proto_base);
+ 
+@@ -1083,7 +1085,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ replay:
+ 	tp_created = 0;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1226,7 +1228,7 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1334,7 +1336,7 @@ static int tc_get_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	void *fh = NULL;
+ 	int err;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1488,7 +1490,8 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (nlmsg_len(cb->nlh) < sizeof(*tcm))
+ 		return skb->len;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 99cc25aae503..57f71765febe 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2052,7 +2052,8 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
+ 
+ 	if (tcm->tcm_parent) {
+ 		q = qdisc_match_from_root(root, TC_H_MAJ(tcm->tcm_parent));
+-		if (q && tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
++		if (q && q != root &&
++		    tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
+ 			return -1;
+ 		return 0;
+ 	}
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index cbe4831f46f4..4a042abf844c 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -413,7 +413,7 @@ static int gred_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (tb[TCA_GRED_PARMS] == NULL && tb[TCA_GRED_STAB] == NULL) {
+ 		if (tb[TCA_GRED_LIMIT] != NULL)
+ 			sch->limit = nla_get_u32(tb[TCA_GRED_LIMIT]);
+-		return gred_change_table_def(sch, opt);
++		return gred_change_table_def(sch, tb[TCA_GRED_DPS]);
+ 	}
+ 
+ 	if (tb[TCA_GRED_PARMS] == NULL ||
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 50ee07cd20c4..9d903b870790 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -270,11 +270,10 @@ struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id)
+ 
+ 	spin_lock_bh(&sctp_assocs_id_lock);
+ 	asoc = (struct sctp_association *)idr_find(&sctp_assocs_id, (int)id);
++	if (asoc && (asoc->base.sk != sk || asoc->base.dead))
++		asoc = NULL;
+ 	spin_unlock_bh(&sctp_assocs_id_lock);
+ 
+-	if (!asoc || (asoc->base.sk != sk) || asoc->base.dead)
+-		return NULL;
+-
+ 	return asoc;
+ }
+ 
+@@ -1940,8 +1939,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+ 		if (sp->strm_interleave) {
+ 			timeo = sock_sndtimeo(sk, 0);
+ 			err = sctp_wait_for_connect(asoc, &timeo);
+-			if (err)
++			if (err) {
++				err = -ESRCH;
+ 				goto err;
++			}
+ 		} else {
+ 			wait_connect = true;
+ 		}
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index add82b0266f3..3be95f77ec7f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -114,22 +114,17 @@ static void __smc_lgr_unregister_conn(struct smc_connection *conn)
+ 	sock_put(&smc->sk); /* sock_hold in smc_lgr_register_conn() */
+ }
+ 
+-/* Unregister connection and trigger lgr freeing if applicable
++/* Unregister connection from lgr
+  */
+ static void smc_lgr_unregister_conn(struct smc_connection *conn)
+ {
+ 	struct smc_link_group *lgr = conn->lgr;
+-	int reduced = 0;
+ 
+ 	write_lock_bh(&lgr->conns_lock);
+ 	if (conn->alert_token_local) {
+-		reduced = 1;
+ 		__smc_lgr_unregister_conn(conn);
+ 	}
+ 	write_unlock_bh(&lgr->conns_lock);
+-	if (!reduced || lgr->conns_num)
+-		return;
+-	smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_lgr_free_work(struct work_struct *work)
+@@ -238,7 +233,8 @@ out:
+ 	return rc;
+ }
+ 
+-static void smc_buf_unuse(struct smc_connection *conn)
++static void smc_buf_unuse(struct smc_connection *conn,
++			  struct smc_link_group *lgr)
+ {
+ 	if (conn->sndbuf_desc)
+ 		conn->sndbuf_desc->used = 0;
+@@ -248,8 +244,6 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ 			conn->rmb_desc->used = 0;
+ 		} else {
+ 			/* buf registration failed, reuse not possible */
+-			struct smc_link_group *lgr = conn->lgr;
+-
+ 			write_lock_bh(&lgr->rmbs_lock);
+ 			list_del(&conn->rmb_desc->list);
+ 			write_unlock_bh(&lgr->rmbs_lock);
+@@ -262,11 +256,16 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ /* remove a finished connection from its link group */
+ void smc_conn_free(struct smc_connection *conn)
+ {
+-	if (!conn->lgr)
++	struct smc_link_group *lgr = conn->lgr;
++
++	if (!lgr)
+ 		return;
+ 	smc_cdc_tx_dismiss_slots(conn);
+-	smc_lgr_unregister_conn(conn);
+-	smc_buf_unuse(conn);
++	smc_lgr_unregister_conn(conn);		/* unsets conn->lgr */
++	smc_buf_unuse(conn, lgr);		/* allow buffer reuse */
++
++	if (!lgr->conns_num)
++		smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_link_clear(struct smc_link *lnk)
+diff --git a/net/socket.c b/net/socket.c
+index d4187ac17d55..fcb18a7ed14b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2887,9 +2887,14 @@ static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+ 		    copy_in_user(&rxnfc->fs.ring_cookie,
+ 				 &compat_rxnfc->fs.ring_cookie,
+ 				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&rxnfc->rule_cnt, &compat_rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
++				 (void __user *)&rxnfc->fs.ring_cookie))
++			return -EFAULT;
++		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
++			if (put_user(rule_cnt, &rxnfc->rule_cnt))
++				return -EFAULT;
++		} else if (copy_in_user(&rxnfc->rule_cnt,
++					&compat_rxnfc->rule_cnt,
++					sizeof(rxnfc->rule_cnt)))
+ 			return -EFAULT;
+ 	}
+ 
+diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
+index 51b4b96f89db..3cfeb9df64b0 100644
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -115,7 +115,7 @@ struct sk_buff *tipc_named_withdraw(struct net *net, struct publication *publ)
+ 	struct sk_buff *buf;
+ 	struct distr_item *item;
+ 
+-	list_del(&publ->binding_node);
++	list_del_rcu(&publ->binding_node);
+ 
+ 	if (publ->scope == TIPC_NODE_SCOPE)
+ 		return NULL;
+@@ -147,7 +147,7 @@ static void named_distribute(struct net *net, struct sk_buff_head *list,
+ 			ITEM_SIZE) * ITEM_SIZE;
+ 	u32 msg_rem = msg_dsz;
+ 
+-	list_for_each_entry(publ, pls, binding_node) {
++	list_for_each_entry_rcu(publ, pls, binding_node) {
+ 		/* Prepare next buffer: */
+ 		if (!skb) {
+ 			skb = named_prepare_buf(net, PUBLICATION, msg_rem,
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 9fab8e5a4a5b..994ddc7ec9b1 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -286,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge, bool revert)
++			      bool charge)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -335,10 +335,10 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 	}
+ 
+ out:
++	if (rc)
++		iov_iter_revert(from, size - *size_used);
+ 	*size_used = size;
+ 	*pages_used = num_elem;
+-	if (revert)
+-		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -440,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true, false);
++				true);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -453,8 +453,6 @@ alloc_encrypted:
+ 
+ 			copied -= try_to_copy;
+ fallback_to_reg_send:
+-			iov_iter_revert(&msg->msg_iter,
+-					ctx->sg_plaintext_size - orig_size);
+ 			trim_sg(sk, ctx->sg_plaintext_data,
+ 				&ctx->sg_plaintext_num_elem,
+ 				&ctx->sg_plaintext_size,
+@@ -828,7 +826,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false, true);
++							 MAX_SKB_FRAGS,	false);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 733ccf867972..214f9ef79a64 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3699,6 +3699,7 @@ static bool ht_rateset_to_mask(struct ieee80211_supported_band *sband,
+ 			return false;
+ 
+ 		/* check availability */
++		ridx = array_index_nospec(ridx, IEEE80211_HT_MCS_MASK_LEN);
+ 		if (sband->ht_cap.mcs.rx_mask[ridx] & rbit)
+ 			mcs[ridx] |= rbit;
+ 		else
+@@ -10124,7 +10125,7 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+ 	s32 last, low, high;
+ 	u32 hyst;
+-	int i, n;
++	int i, n, low_index;
+ 	int err;
+ 
+ 	/* RSSI reporting disabled? */
+@@ -10161,10 +10162,19 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 		if (last < wdev->cqm_config->rssi_thresholds[i])
+ 			break;
+ 
+-	low = i > 0 ?
+-		(wdev->cqm_config->rssi_thresholds[i - 1] - hyst) : S32_MIN;
+-	high = i < n ?
+-		(wdev->cqm_config->rssi_thresholds[i] + hyst - 1) : S32_MAX;
++	low_index = i - 1;
++	if (low_index >= 0) {
++		low_index = array_index_nospec(low_index, n);
++		low = wdev->cqm_config->rssi_thresholds[low_index] - hyst;
++	} else {
++		low = S32_MIN;
++	}
++	if (i < n) {
++		i = array_index_nospec(i, n);
++		high = wdev->cqm_config->rssi_thresholds[i] + hyst - 1;
++	} else {
++		high = S32_MAX;
++	}
+ 
+ 	return rdev_set_cqm_rssi_range_config(rdev, dev, low, high);
+ }
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 2f702adf2912..24cfa2776f50 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2661,11 +2661,12 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ {
+ 	struct wiphy *wiphy = NULL;
+ 	enum reg_request_treatment treatment;
++	enum nl80211_reg_initiator initiator = reg_request->initiator;
+ 
+ 	if (reg_request->wiphy_idx != WIPHY_IDX_INVALID)
+ 		wiphy = wiphy_idx_to_wiphy(reg_request->wiphy_idx);
+ 
+-	switch (reg_request->initiator) {
++	switch (initiator) {
+ 	case NL80211_REGDOM_SET_BY_CORE:
+ 		treatment = reg_process_hint_core(reg_request);
+ 		break;
+@@ -2683,7 +2684,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 		treatment = reg_process_hint_country_ie(wiphy, reg_request);
+ 		break;
+ 	default:
+-		WARN(1, "invalid initiator %d\n", reg_request->initiator);
++		WARN(1, "invalid initiator %d\n", initiator);
+ 		goto out_free;
+ 	}
+ 
+@@ -2698,7 +2699,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 	 */
+ 	if (treatment == REG_REQ_ALREADY_SET && wiphy &&
+ 	    wiphy->regulatory_flags & REGULATORY_STRICT_REG) {
+-		wiphy_update_regulatory(wiphy, reg_request->initiator);
++		wiphy_update_regulatory(wiphy, initiator);
+ 		wiphy_all_share_dfs_chan_state(wiphy);
+ 		reg_check_channels();
+ 	}
+@@ -2867,6 +2868,7 @@ static int regulatory_hint_core(const char *alpha2)
+ 	request->alpha2[0] = alpha2[0];
+ 	request->alpha2[1] = alpha2[1];
+ 	request->initiator = NL80211_REGDOM_SET_BY_CORE;
++	request->wiphy_idx = WIPHY_IDX_INVALID;
+ 
+ 	queue_regulatory_request(request);
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d36c3eb7b931..d0e7472dd9fd 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1058,13 +1058,23 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+ 
++/*
++ * Update RX channel information based on the available frame payload
++ * information. This is mainly for the 2.4 GHz band where frames can be received
++ * from neighboring channels and the Beacon frames use the DSSS Parameter Set
++ * element to indicate the current (transmitting) channel, but this might also
++ * be needed on other bands if RX frequency does not match with the actual
++ * operating channel of a BSS.
++ */
+ static struct ieee80211_channel *
+ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+-			 struct ieee80211_channel *channel)
++			 struct ieee80211_channel *channel,
++			 enum nl80211_bss_scan_width scan_width)
+ {
+ 	const u8 *tmp;
+ 	u32 freq;
+ 	int channel_number = -1;
++	struct ieee80211_channel *alt_channel;
+ 
+ 	tmp = cfg80211_find_ie(WLAN_EID_DS_PARAMS, ie, ielen);
+ 	if (tmp && tmp[1] == 1) {
+@@ -1078,16 +1088,45 @@ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+ 		}
+ 	}
+ 
+-	if (channel_number < 0)
++	if (channel_number < 0) {
++		/* No channel information in frame payload */
+ 		return channel;
++	}
+ 
+ 	freq = ieee80211_channel_to_frequency(channel_number, channel->band);
+-	channel = ieee80211_get_channel(wiphy, freq);
+-	if (!channel)
+-		return NULL;
+-	if (channel->flags & IEEE80211_CHAN_DISABLED)
++	alt_channel = ieee80211_get_channel(wiphy, freq);
++	if (!alt_channel) {
++		if (channel->band == NL80211_BAND_2GHZ) {
++			/*
++			 * Better not allow unexpected channels when that could
++			 * be going beyond the 1-11 range (e.g., discovering
++			 * BSS on channel 12 when radio is configured for
++			 * channel 11.
++			 */
++			return NULL;
++		}
++
++		/* No match for the payload channel number - ignore it */
++		return channel;
++	}
++
++	if (scan_width == NL80211_BSS_CHAN_WIDTH_10 ||
++	    scan_width == NL80211_BSS_CHAN_WIDTH_5) {
++		/*
++		 * Ignore channel number in 5 and 10 MHz channels where there
++		 * may not be an n:1 or 1:n mapping between frequencies and
++		 * channel numbers.
++		 */
++		return channel;
++	}
++
++	/*
++	 * Use the channel determined through the payload channel number
++	 * instead of the RX channel reported by the driver.
++	 */
++	if (alt_channel->flags & IEEE80211_CHAN_DISABLED)
+ 		return NULL;
+-	return channel;
++	return alt_channel;
+ }
+ 
+ /* Returned bss is reference counted and must be cleaned up appropriately. */
+@@ -1112,7 +1151,8 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ 		    (data->signal < 0 || data->signal > 100)))
+ 		return NULL;
+ 
+-	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan);
++	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan,
++					   data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+@@ -1210,7 +1250,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 		return NULL;
+ 
+ 	channel = cfg80211_get_bss_channel(wiphy, mgmt->u.beacon.variable,
+-					   ielen, data->chan);
++					   ielen, data->chan, data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 352abca2605f..86f5afbd0a0c 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -453,6 +453,7 @@ resume:
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINHDRERROR);
+ 			goto drop;
+ 		}
++		crypto_done = false;
+ 	} while (!err);
+ 
+ 	err = xfrm_rcv_cb(skb, family, x->type->proto, 0);
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 89b178a78dc7..36d15a38ce5e 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -101,6 +101,10 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
+ 		spin_unlock_bh(&x->lock);
+ 
+ 		skb_dst_force(skb);
++		if (!skb_dst(skb)) {
++			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
++			goto error_nolock;
++		}
+ 
+ 		if (xfrm_offload(skb)) {
+ 			x->type_offload->encap(x, skb);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index a94983e03a8b..526e6814ed4b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2551,6 +2551,10 @@ int __xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ 	}
+ 
+ 	skb_dst_force(skb);
++	if (!skb_dst(skb)) {
++		XFRM_INC_STATS(net, LINUX_MIB_XFRMFWDHDRERROR);
++		return 0;
++	}
+ 
+ 	dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, XFRM_LOOKUP_QUEUE);
+ 	if (IS_ERR(dst)) {
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 33878e6e0d0a..d0672c400c2f 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -151,10 +151,16 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 	err = -EINVAL;
+ 	switch (p->family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			goto out;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			goto out;
++
+ 		break;
+ #else
+ 		err = -EAFNOSUPPORT;
+@@ -1359,10 +1365,16 @@ static int verify_newpolicy_info(struct xfrm_userpolicy_info *p)
+ 
+ 	switch (p->sel.family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			return -EINVAL;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			return -EINVAL;
++
+ 		break;
+ #else
+ 		return  -EAFNOSUPPORT;
+@@ -1443,6 +1455,9 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family)
+ 		    (ut[i].family != prev_family))
+ 			return -EINVAL;
+ 
++		if (ut[i].mode >= XFRM_MODE_MAX)
++			return -EINVAL;
++
+ 		prev_family = ut[i].family;
+ 
+ 		switch (ut[i].family) {
+diff --git a/tools/perf/Makefile b/tools/perf/Makefile
+index 225454416ed5..7902a5681fc8 100644
+--- a/tools/perf/Makefile
++++ b/tools/perf/Makefile
+@@ -84,10 +84,10 @@ endif # has_clean
+ endif # MAKECMDGOALS
+ 
+ #
+-# The clean target is not really parallel, don't print the jobs info:
++# Explicitly disable parallelism for the clean target.
+ #
+ clean:
+-	$(make)
++	$(make) -j1
+ 
+ #
+ # The build-test target is not really parallel, don't print the jobs info,
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 22dbb6612b41..b70cce40ca97 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2246,7 +2246,8 @@ static int append_inlines(struct callchain_cursor *cursor,
+ 	if (!symbol_conf.inline_name || !map || !sym)
+ 		return ret;
+ 
+-	addr = map__rip_2objdump(map, ip);
++	addr = map__map_ip(map, ip);
++	addr = map__rip_2objdump(map, addr);
+ 
+ 	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
+ 	if (!inline_node) {
+@@ -2272,7 +2273,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
+-	u64 addr;
++	u64 addr = entry->ip;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2284,7 +2285,8 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	 * Convert entry->ip from a virtual address to an offset in
+ 	 * its corresponding binary.
+ 	 */
+-	addr = map__map_ip(entry->map, entry->ip);
++	if (entry->map)
++		addr = map__map_ip(entry->map, entry->ip);
+ 
+ 	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 001be4f9d3b9..a5f9e236cc71 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -27,7 +27,7 @@ class install_lib(_install_lib):
+ 
+ cflags = getenv('CFLAGS', '').split()
+ # switch off several checks (need to be at the end of cflags list)
+-cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter' ]
++cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls' ]
+ if cc != "clang":
+     cflags += ['-Wno-cast-function-type' ]
+ 
+diff --git a/tools/testing/selftests/net/fib-onlink-tests.sh b/tools/testing/selftests/net/fib-onlink-tests.sh
+index 3991ad1a368d..864f865eee55 100755
+--- a/tools/testing/selftests/net/fib-onlink-tests.sh
++++ b/tools/testing/selftests/net/fib-onlink-tests.sh
+@@ -167,8 +167,8 @@ setup()
+ 	# add vrf table
+ 	ip li add ${VRF} type vrf table ${VRF_TABLE}
+ 	ip li set ${VRF} up
+-	ip ro add table ${VRF_TABLE} unreachable default
+-	ip -6 ro add table ${VRF_TABLE} unreachable default
++	ip ro add table ${VRF_TABLE} unreachable default metric 8192
++	ip -6 ro add table ${VRF_TABLE} unreachable default metric 8192
+ 
+ 	# create test interfaces
+ 	ip li add ${NETIFS[p1]} type veth peer name ${NETIFS[p2]}
+@@ -185,20 +185,20 @@ setup()
+ 	for n in 1 3 5 7; do
+ 		ip li set ${NETIFS[p${n}]} up
+ 		ip addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+ 	# move peer interfaces to namespace and add addresses
+ 	for n in 2 4 6 8; do
+ 		ip li set ${NETIFS[p${n}]} netns ${PEER_NS} up
+ 		ip -netns ${PEER_NS} addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+-	set +e
++	ip -6 ro add default via ${V6ADDRS[p3]/::[0-9]/::64}
++	ip -6 ro add table ${VRF_TABLE} default via ${V6ADDRS[p7]/::[0-9]/::64}
+ 
+-	# let DAD complete - assume default of 1 probe
+-	sleep 1
++	set +e
+ }
+ 
+ cleanup()
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 0d7a44fa30af..8e509cbcb209 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ #
+ # This test is for checking rtnetlink callpaths, and get as much coverage as possible.
+ #
+diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
+index 850767befa47..99e537ab5ad9 100755
+--- a/tools/testing/selftests/net/udpgso_bench.sh
++++ b/tools/testing/selftests/net/udpgso_bench.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Run a series of udpgso benchmarks


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     e2a161a0abc849834d64372bfb3cdb8e57845720
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 19 22:41:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e2a161a0

Linux patch 4.18.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-4.18.9.patch | 5298 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5302 insertions(+)

diff --git a/0000_README b/0000_README
index 597262e..6534d27 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-4.18.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.8
 
+Patch:  1008_linux-4.18.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-4.18.9.patch b/1008_linux-4.18.9.patch
new file mode 100644
index 0000000..877b17a
--- /dev/null
+++ b/1008_linux-4.18.9.patch
@@ -0,0 +1,5298 @@
+diff --git a/Makefile b/Makefile
+index 0d73431f66cd..1178348fb9ca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/boot/dts/axs10x_mb.dtsi b/arch/arc/boot/dts/axs10x_mb.dtsi
+index 47b74fbc403c..37bafd44e36d 100644
+--- a/arch/arc/boot/dts/axs10x_mb.dtsi
++++ b/arch/arc/boot/dts/axs10x_mb.dtsi
+@@ -9,6 +9,10 @@
+  */
+ 
+ / {
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	axs10x_mb {
+ 		compatible = "simple-bus";
+ 		#address-cells = <1>;
+@@ -68,7 +72,7 @@
+ 			};
+ 		};
+ 
+-		ethernet@0x18000 {
++		gmac: ethernet@0x18000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = < 0x18000 0x2000 >;
+@@ -81,6 +85,7 @@
+ 			max-speed = <100>;
+ 			resets = <&creg_rst 5>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 		};
+ 
+ 		ehci@0x40000 {
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index 006aa3de5348..d00f283094d3 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -25,6 +25,10 @@
+ 		bootargs = "earlycon=uart8250,mmio32,0xf0005000,115200n8 console=ttyS0,115200n8 debug print-fatal-signals=1";
+ 	};
+ 
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	cpus {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+@@ -163,7 +167,7 @@
+ 			#clock-cells = <0>;
+ 		};
+ 
+-		ethernet@8000 {
++		gmac: ethernet@8000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = <0x8000 0x2000>;
+@@ -176,6 +180,7 @@
+ 			phy-handle = <&phy0>;
+ 			resets = <&cgu_rst HSDK_ETH_RESET>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 
+ 			mdio {
+ 				#address-cells = <1>;
+diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig
+index a635ea972304..df848c44dacd 100644
+--- a/arch/arc/configs/axs101_defconfig
++++ b/arch/arc/configs/axs101_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig
+index aa507e423075..bcbdc0494faa 100644
+--- a/arch/arc/configs/axs103_defconfig
++++ b/arch/arc/configs/axs103_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig
+index eba07f468654..d145bce7ebdf 100644
+--- a/arch/arc/configs/axs103_smp_defconfig
++++ b/arch/arc/configs/axs103_smp_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index d496ef579859..ca46153d7915 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -98,8 +98,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
+ 	val = read_sysreg(cpacr_el1);
+ 	val |= CPACR_EL1_TTA;
+ 	val &= ~CPACR_EL1_ZEN;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val &= ~CPACR_EL1_FPEN;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cpacr_el1);
+ 
+@@ -114,8 +116,10 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
+ 
+ 	val = CPTR_EL2_DEFAULT;
+ 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val |= CPTR_EL2_TFP;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cptr_el2);
+ }
+@@ -129,7 +133,6 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
+ 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+ 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
+ 
+-	__activate_traps_fpsimd32(vcpu);
+ 	if (has_vhe())
+ 		activate_traps_vhe(vcpu);
+ 	else
+diff --git a/arch/mips/boot/dts/mscc/ocelot.dtsi b/arch/mips/boot/dts/mscc/ocelot.dtsi
+index 4f33dbc67348..7096915f26e0 100644
+--- a/arch/mips/boot/dts/mscc/ocelot.dtsi
++++ b/arch/mips/boot/dts/mscc/ocelot.dtsi
+@@ -184,7 +184,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			compatible = "mscc,ocelot-miim";
+-			reg = <0x107009c 0x36>, <0x10700f0 0x8>;
++			reg = <0x107009c 0x24>, <0x10700f0 0x8>;
+ 			interrupts = <14>;
+ 			status = "disabled";
+ 
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index 8505db478904..1d92efb82c37 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -322,6 +322,7 @@ static int __init octeon_ehci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ehci_node);
++	of_node_put(ehci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+@@ -384,6 +385,7 @@ static int __init octeon_ohci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ohci_node);
++	of_node_put(ohci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+diff --git a/arch/mips/generic/init.c b/arch/mips/generic/init.c
+index 5ba6fcc26fa7..94a78dbbc91f 100644
+--- a/arch/mips/generic/init.c
++++ b/arch/mips/generic/init.c
+@@ -204,6 +204,7 @@ void __init arch_init_irq(void)
+ 					    "mti,cpu-interrupt-controller");
+ 	if (!cpu_has_veic && !intc_node)
+ 		mips_cpu_irq_init();
++	of_node_put(intc_node);
+ 
+ 	irqchip_init();
+ }
+diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
+index cea8ad864b3f..57b34257be2b 100644
+--- a/arch/mips/include/asm/io.h
++++ b/arch/mips/include/asm/io.h
+@@ -141,14 +141,14 @@ static inline void * phys_to_virt(unsigned long address)
+ /*
+  * ISA I/O bus memory addresses are 1:1 with the physical address.
+  */
+-static inline unsigned long isa_virt_to_bus(volatile void * address)
++static inline unsigned long isa_virt_to_bus(volatile void *address)
+ {
+-	return (unsigned long)address - PAGE_OFFSET;
++	return virt_to_phys(address);
+ }
+ 
+-static inline void * isa_bus_to_virt(unsigned long address)
++static inline void *isa_bus_to_virt(unsigned long address)
+ {
+-	return (void *)(address + PAGE_OFFSET);
++	return phys_to_virt(address);
+ }
+ 
+ #define isa_page_to_bus page_to_phys
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 019035d7225c..8f845f6e5f42 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -13,6 +13,7 @@
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
++#include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+@@ -20,6 +21,7 @@
+ 
+ #include <asm/abi.h>
+ #include <asm/mips-cps.h>
++#include <asm/page.h>
+ #include <asm/vdso.h>
+ 
+ /* Kernel-provided data used by the VDSO. */
+@@ -128,12 +130,30 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	vvar_size = gic_size + PAGE_SIZE;
+ 	size = vvar_size + image->size;
+ 
++	/*
++	 * Find a region that's large enough for us to perform the
++	 * colour-matching alignment below.
++	 */
++	if (cpu_has_dc_aliases)
++		size += shm_align_mask + 1;
++
+ 	base = get_unmapped_area(NULL, 0, size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+ 	}
+ 
++	/*
++	 * If we suffer from dcache aliasing, ensure that the VDSO data page
++	 * mapping is coloured the same as the kernel's mapping of that memory.
++	 * This ensures that when the kernel updates the VDSO data userland
++	 * will observe it without requiring cache invalidations.
++	 */
++	if (cpu_has_dc_aliases) {
++		base = __ALIGN_MASK(base, shm_align_mask);
++		base += ((unsigned long)&vdso_data - gic_size) & shm_align_mask;
++	}
++
+ 	data_addr = base + gic_size;
+ 	vdso_addr = data_addr + PAGE_SIZE;
+ 
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index e12dfa48b478..a5893b2cdc0e 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -835,7 +835,8 @@ static void r4k_flush_icache_user_range(unsigned long start, unsigned long end)
+ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+@@ -871,7 +872,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+index 01ee40f11f3a..76234a14b97d 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
++++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/slab.h>
+ #include <linux/cpumask.h>
++#include <linux/kmemleak.h>
+ #include <linux/percpu.h>
+ 
+ struct vmemmap_backing {
+@@ -82,6 +83,13 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+ 			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Don't scan the PGD for pointers, it contains references to PUDs but
++	 * those references are not full pointers and so can't be recognised by
++	 * kmemleak.
++	 */
++	kmemleak_no_scan(pgd);
++
+ 	/*
+ 	 * With hugetlb, we don't clear the second half of the page table.
+ 	 * If we share the same slab cache with the pmd or pud level table,
+@@ -110,8 +118,19 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+ 
+ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+ {
+-	return kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
+-		pgtable_gfp_flags(mm, GFP_KERNEL));
++	pud_t *pud;
++
++	pud = kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
++			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Tell kmemleak to ignore the PUD, that means don't scan it for
++	 * pointers and don't consider it a leak. PUDs are typically only
++	 * referred to by their PGD, but kmemleak is not able to recognise those
++	 * as pointers, leading to false leak reports.
++	 */
++	kmemleak_ignore(pud);
++
++	return pud;
+ }
+ 
+ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 176f911ee983..7efc42538ccf 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -738,10 +738,10 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ 					      gpa, shift);
+ 		kvmppc_radix_tlbie_page(kvm, gpa, shift);
+ 		if ((old & _PAGE_DIRTY) && memslot->dirty_bitmap) {
+-			unsigned long npages = 1;
++			unsigned long psize = PAGE_SIZE;
+ 			if (shift)
+-				npages = 1ul << (shift - PAGE_SHIFT);
+-			kvmppc_update_dirty_map(memslot, gfn, npages);
++				psize = 1ul << shift;
++			kvmppc_update_dirty_map(memslot, gfn, psize);
+ 		}
+ 	}
+ 	return 0;				
+diff --git a/arch/powerpc/platforms/4xx/msi.c b/arch/powerpc/platforms/4xx/msi.c
+index 81b2cbce7df8..7c324eff2f22 100644
+--- a/arch/powerpc/platforms/4xx/msi.c
++++ b/arch/powerpc/platforms/4xx/msi.c
+@@ -146,13 +146,19 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	const u32 *sdr_addr;
+ 	dma_addr_t msi_phys;
+ 	void *msi_virt;
++	int err;
+ 
+ 	sdr_addr = of_get_property(dev->dev.of_node, "sdr-base", NULL);
+ 	if (!sdr_addr)
+-		return -1;
++		return -EINVAL;
+ 
+-	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
+-	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
++	if (!msi_data)
++		return -EINVAL;
++
++	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
++	if (!msi_mask)
++		return -EINVAL;
+ 
+ 	msi->msi_dev = of_find_node_by_name(NULL, "ppc4xx-msi");
+ 	if (!msi->msi_dev)
+@@ -160,30 +166,30 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 
+ 	msi->msi_regs = of_iomap(msi->msi_dev, 0);
+ 	if (!msi->msi_regs) {
+-		dev_err(&dev->dev, "of_iomap problem failed\n");
+-		return -ENOMEM;
++		dev_err(&dev->dev, "of_iomap failed\n");
++		err = -ENOMEM;
++		goto node_put;
+ 	}
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi register mapped 0x%x 0x%x\n",
+ 		(u32) (msi->msi_regs + PEIH_TERMADH), (u32) (msi->msi_regs));
+ 
+ 	msi_virt = dma_alloc_coherent(&dev->dev, 64, &msi_phys, GFP_KERNEL);
+-	if (!msi_virt)
+-		return -ENOMEM;
++	if (!msi_virt) {
++		err = -ENOMEM;
++		goto iounmap;
++	}
+ 	msi->msi_addr_hi = upper_32_bits(msi_phys);
+ 	msi->msi_addr_lo = lower_32_bits(msi_phys & 0xffffffff);
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi address high 0x%x, low 0x%x\n",
+ 		msi->msi_addr_hi, msi->msi_addr_lo);
+ 
++	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
++	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++
+ 	/* Progam the Interrupt handler Termination addr registers */
+ 	out_be32(msi->msi_regs + PEIH_TERMADH, msi->msi_addr_hi);
+ 	out_be32(msi->msi_regs + PEIH_TERMADL, msi->msi_addr_lo);
+ 
+-	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
+-	if (!msi_data)
+-		return -1;
+-	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
+-	if (!msi_mask)
+-		return -1;
+ 	/* Program MSI Expected data and Mask bits */
+ 	out_be32(msi->msi_regs + PEIH_MSIED, *msi_data);
+ 	out_be32(msi->msi_regs + PEIH_MSIMK, *msi_mask);
+@@ -191,6 +197,12 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	dma_free_coherent(&dev->dev, 64, msi_virt, msi_phys);
+ 
+ 	return 0;
++
++iounmap:
++	iounmap(msi->msi_regs);
++node_put:
++	of_node_put(msi->msi_dev);
++	return err;
+ }
+ 
+ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+@@ -209,7 +221,6 @@ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+ 		msi_bitmap_free(&msi->bitmap);
+ 	iounmap(msi->msi_regs);
+ 	of_node_put(msi->msi_dev);
+-	kfree(msi);
+ 
+ 	return 0;
+ }
+@@ -223,18 +234,16 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "PCIE-MSI: Setting up MSI support...\n");
+ 
+-	msi = kzalloc(sizeof(*msi), GFP_KERNEL);
+-	if (!msi) {
+-		dev_err(&dev->dev, "No memory for MSI structure\n");
++	msi = devm_kzalloc(&dev->dev, sizeof(*msi), GFP_KERNEL);
++	if (!msi)
+ 		return -ENOMEM;
+-	}
+ 	dev->dev.platform_data = msi;
+ 
+ 	/* Get MSI ranges */
+ 	err = of_address_to_resource(dev->dev.of_node, 0, &res);
+ 	if (err) {
+ 		dev_err(&dev->dev, "%pOF resource error!\n", dev->dev.of_node);
+-		goto error_out;
++		return err;
+ 	}
+ 
+ 	msi_irqs = of_irq_count(dev->dev.of_node);
+@@ -243,7 +252,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	err = ppc4xx_setup_pcieh_hw(dev, res, msi);
+ 	if (err)
+-		goto error_out;
++		return err;
+ 
+ 	err = ppc4xx_msi_init_allocator(dev, msi);
+ 	if (err) {
+@@ -256,7 +265,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 		phb->controller_ops.setup_msi_irqs = ppc4xx_setup_msi_irqs;
+ 		phb->controller_ops.teardown_msi_irqs = ppc4xx_teardown_msi_irqs;
+ 	}
+-	return err;
++	return 0;
+ 
+ error_out:
+ 	ppc4xx_of_msi_remove(dev);
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index 8cdf91f5d3a4..c773465b2c95 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -437,8 +437,9 @@ static int get_mmio_atsd_reg(struct npu *npu)
+ 	int i;
+ 
+ 	for (i = 0; i < npu->mmio_atsd_count; i++) {
+-		if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
+-			return i;
++		if (!test_bit(i, &npu->mmio_atsd_usage))
++			if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
++				return i;
+ 	}
+ 
+ 	return -ENOSPC;
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 8a4868a3964b..cb098e962ffe 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -647,6 +647,15 @@ void of_pci_parse_iov_addrs(struct pci_dev *dev, const int *indexes)
+ 	}
+ }
+ 
++static void pseries_disable_sriov_resources(struct pci_dev *pdev)
++{
++	int i;
++
++	pci_warn(pdev, "No hypervisor support for SR-IOV on this device, IOV BARs disabled.\n");
++	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
++		pdev->resource[i + PCI_IOV_RESOURCES].flags = 0;
++}
++
+ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ {
+ 	const int *indexes;
+@@ -654,10 +663,10 @@ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ 
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_set_vf_bar_size(pdev, indexes);
++	if (indexes)
++		of_pci_set_vf_bar_size(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+@@ -669,10 +678,10 @@ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+ 		return;
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_parse_iov_addrs(pdev, indexes);
++	if (indexes)
++		of_pci_parse_iov_addrs(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static resource_size_t pseries_pci_iov_resource_alignment(struct pci_dev *pdev,
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 84c89cb9636f..cbdd8341f17e 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -173,7 +173,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		return set_validity_icpt(scb_s, 0x0039U);
+ 
+ 	/* copy only the wrapping keys */
+-	if (read_guest_real(vcpu, crycb_addr + 72, &vsie_page->crycb, 56))
++	if (read_guest_real(vcpu, crycb_addr + 72,
++			    vsie_page->crycb.dea_wrapping_key_mask, 56))
+ 		return set_validity_icpt(scb_s, 0x0035U);
+ 
+ 	scb_s->ecb3 |= ecb3_flags;
+diff --git a/arch/x86/include/asm/kdebug.h b/arch/x86/include/asm/kdebug.h
+index 395c9631e000..75f1e35e7c15 100644
+--- a/arch/x86/include/asm/kdebug.h
++++ b/arch/x86/include/asm/kdebug.h
+@@ -22,10 +22,20 @@ enum die_val {
+ 	DIE_NMIUNKNOWN,
+ };
+ 
++enum show_regs_mode {
++	SHOW_REGS_SHORT,
++	/*
++	 * For when userspace crashed, but we don't think it's our fault, and
++	 * therefore don't print kernel registers.
++	 */
++	SHOW_REGS_USER,
++	SHOW_REGS_ALL
++};
++
+ extern void die(const char *, struct pt_regs *,long);
+ extern int __must_check __die(const char *, struct pt_regs *, long);
+ extern void show_stack_regs(struct pt_regs *regs);
+-extern void __show_regs(struct pt_regs *regs, int all);
++extern void __show_regs(struct pt_regs *regs, enum show_regs_mode);
+ extern void show_iret_regs(struct pt_regs *regs);
+ extern unsigned long oops_begin(void);
+ extern void oops_end(unsigned long, struct pt_regs *, int signr);
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index acebb808c4b5..0722b7745382 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1198,18 +1198,22 @@ enum emulation_result {
+ #define EMULTYPE_NO_DECODE	    (1 << 0)
+ #define EMULTYPE_TRAP_UD	    (1 << 1)
+ #define EMULTYPE_SKIP		    (1 << 2)
+-#define EMULTYPE_RETRY		    (1 << 3)
+-#define EMULTYPE_NO_REEXECUTE	    (1 << 4)
+-#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 5)
+-#define EMULTYPE_VMWARE		    (1 << 6)
++#define EMULTYPE_ALLOW_RETRY	    (1 << 3)
++#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 4)
++#define EMULTYPE_VMWARE		    (1 << 5)
+ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
+ 			    int emulation_type, void *insn, int insn_len);
+ 
+ static inline int emulate_instruction(struct kvm_vcpu *vcpu,
+ 			int emulation_type)
+ {
+-	return x86_emulate_instruction(vcpu, 0,
+-			emulation_type | EMULTYPE_NO_REEXECUTE, NULL, 0);
++	return x86_emulate_instruction(vcpu, 0, emulation_type, NULL, 0);
++}
++
++static inline int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
++						      void *insn, int insn_len)
++{
++	return x86_emulate_instruction(vcpu, 0, 0, insn, insn_len);
+ }
+ 
+ void kvm_enable_efer_bits(u64);
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index c9b773401fd8..21d1fa5eaa5f 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -422,7 +422,7 @@ static int activate_managed(struct irq_data *irqd)
+ 	if (WARN_ON_ONCE(cpumask_empty(vector_searchmask))) {
+ 		/* Something in the core code broke! Survive gracefully */
+ 		pr_err("Managed startup for irq %u, but no CPU\n", irqd->irq);
+-		return EINVAL;
++		return -EINVAL;
+ 	}
+ 
+ 	ret = assign_managed_vector(irqd, vector_searchmask);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 0624957aa068..07b5fc00b188 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -504,6 +504,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 	struct microcode_amd *mc_amd;
+ 	struct ucode_cpu_info *uci;
+ 	struct ucode_patch *p;
++	enum ucode_state ret;
+ 	u32 rev, dummy;
+ 
+ 	BUG_ON(raw_smp_processor_id() != cpu);
+@@ -521,9 +522,8 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 
+ 	/* need to apply patch? */
+ 	if (rev >= mc_amd->hdr.patch_id) {
+-		c->microcode = rev;
+-		uci->cpu_sig.rev = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	if (__apply_microcode_amd(mc_amd)) {
+@@ -531,13 +531,21 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 			cpu, mc_amd->hdr.patch_id);
+ 		return UCODE_ERROR;
+ 	}
+-	pr_info("CPU%d: new patch_level=0x%08x\n", cpu,
+-		mc_amd->hdr.patch_id);
+ 
+-	uci->cpu_sig.rev = mc_amd->hdr.patch_id;
+-	c->microcode = mc_amd->hdr.patch_id;
++	rev = mc_amd->hdr.patch_id;
++	ret = UCODE_UPDATED;
++
++	pr_info("CPU%d: new patch_level=0x%08x\n", cpu, rev);
+ 
+-	return UCODE_UPDATED;
++out:
++	uci->cpu_sig.rev = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
++
++	return ret;
+ }
+ 
+ static int install_equiv_cpu_table(const u8 *buf)
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 97ccf4c3b45b..16936a24795c 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -795,6 +795,7 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	struct microcode_intel *mc;
++	enum ucode_state ret;
+ 	static int prev_rev;
+ 	u32 rev;
+ 
+@@ -817,9 +818,8 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	 */
+ 	rev = intel_get_microcode_revision();
+ 	if (rev >= mc->hdr.rev) {
+-		uci->cpu_sig.rev = rev;
+-		c->microcode = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -848,10 +848,17 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 		prev_rev = rev;
+ 	}
+ 
++	ret = UCODE_UPDATED;
++
++out:
+ 	uci->cpu_sig.rev = rev;
+-	c->microcode = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
+ 
+-	return UCODE_UPDATED;
++	return ret;
+ }
+ 
+ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 17b02adc79aa..0c5a9fc6e36d 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -155,7 +155,7 @@ static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs,
+ 	 * they can be printed in the right context.
+ 	 */
+ 	if (!partial && on_stack(info, regs, sizeof(*regs))) {
+-		__show_regs(regs, 0);
++		__show_regs(regs, SHOW_REGS_SHORT);
+ 
+ 	} else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
+ 				       IRET_FRAME_SIZE)) {
+@@ -353,7 +353,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	oops_exit();
+ 
+ 	/* Executive summary in case the oops scrolled away */
+-	__show_regs(&exec_summary_regs, true);
++	__show_regs(&exec_summary_regs, SHOW_REGS_ALL);
+ 
+ 	if (!signr)
+ 		return;
+@@ -416,14 +416,9 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 
+ void show_regs(struct pt_regs *regs)
+ {
+-	bool all = true;
+-
+ 	show_regs_print_info(KERN_DEFAULT);
+ 
+-	if (IS_ENABLED(CONFIG_X86_32))
+-		all = !user_mode(regs);
+-
+-	__show_regs(regs, all);
++	__show_regs(regs, user_mode(regs) ? SHOW_REGS_USER : SHOW_REGS_ALL);
+ 
+ 	/*
+ 	 * When in-kernel, we also print out the stack at the time of the fault..
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 0ae659de21eb..666d1825390d 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -59,7 +59,7 @@
+ #include <asm/intel_rdt_sched.h>
+ #include <asm/proto.h>
+ 
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -85,7 +85,7 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x EFLAGS: %08lx\n",
+ 	       (u16)regs->ds, (u16)regs->es, (u16)regs->fs, gs, ss, regs->flags);
+ 
+-	if (!all)
++	if (mode != SHOW_REGS_ALL)
+ 		return;
+ 
+ 	cr0 = read_cr0();
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 4344a032ebe6..0091a733c1cf 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -62,7 +62,7 @@
+ __visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
+ 
+ /* Prints also some state that isn't saved in the pt_regs */
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -87,9 +87,17 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "R13: %016lx R14: %016lx R15: %016lx\n",
+ 	       regs->r13, regs->r14, regs->r15);
+ 
+-	if (!all)
++	if (mode == SHOW_REGS_SHORT)
+ 		return;
+ 
++	if (mode == SHOW_REGS_USER) {
++		rdmsrl(MSR_FS_BASE, fs);
++		rdmsrl(MSR_KERNEL_GS_BASE, shadowgs);
++		printk(KERN_DEFAULT "FS:  %016lx GS:  %016lx\n",
++		       fs, shadowgs);
++		return;
++	}
++
+ 	asm("movl %%ds,%0" : "=r" (ds));
+ 	asm("movl %%cs,%0" : "=r" (cs));
+ 	asm("movl %%es,%0" : "=r" (es));
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 42f1ba92622a..97d41754769e 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4960,7 +4960,7 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
+ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		       void *insn, int insn_len)
+ {
+-	int r, emulation_type = EMULTYPE_RETRY;
++	int r, emulation_type = 0;
+ 	enum emulation_result er;
+ 	bool direct = vcpu->arch.mmu.direct_map;
+ 
+@@ -4973,10 +4973,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 	r = RET_PF_INVALID;
+ 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
+ 		r = handle_mmio_page_fault(vcpu, cr2, direct);
+-		if (r == RET_PF_EMULATE) {
+-			emulation_type = 0;
++		if (r == RET_PF_EMULATE)
+ 			goto emulate;
+-		}
+ 	}
+ 
+ 	if (r == RET_PF_INVALID) {
+@@ -5003,8 +5001,19 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		return 1;
+ 	}
+ 
+-	if (mmio_info_in_cache(vcpu, cr2, direct))
+-		emulation_type = 0;
++	/*
++	 * vcpu->arch.mmu.page_fault returned RET_PF_EMULATE, but we can still
++	 * optimistically try to just unprotect the page and let the processor
++	 * re-execute the instruction that caused the page fault.  Do not allow
++	 * retrying MMIO emulation, as it's not only pointless but could also
++	 * cause us to enter an infinite loop because the processor will keep
++	 * faulting on the non-existent MMIO address.  Retrying an instruction
++	 * from a nested guest is also pointless and dangerous as we are only
++	 * explicitly shadowing L1's page tables, i.e. unprotecting something
++	 * for L1 isn't going to magically fix whatever issue cause L2 to fail.
++	 */
++	if (!mmio_info_in_cache(vcpu, cr2, direct) && !is_guest_mode(vcpu))
++		emulation_type = EMULTYPE_ALLOW_RETRY;
+ emulate:
+ 	/*
+ 	 * On AMD platforms, under certain conditions insn_len may be zero on #NPF.
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 9799f86388e7..ef772e5634d4 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -3875,8 +3875,8 @@ static int emulate_on_interception(struct vcpu_svm *svm)
+ 
+ static int rsm_interception(struct vcpu_svm *svm)
+ {
+-	return x86_emulate_instruction(&svm->vcpu, 0, 0,
+-				       rsm_ins_bytes, 2) == EMULATE_DONE;
++	return kvm_emulate_instruction_from_buffer(&svm->vcpu,
++					rsm_ins_bytes, 2) == EMULATE_DONE;
+ }
+ 
+ static int rdpmc_interception(struct vcpu_svm *svm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9869bfd0c601..d0c3be353bb6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7539,8 +7539,8 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
+ 		if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+ 			return kvm_skip_emulated_instruction(vcpu);
+ 		else
+-			return x86_emulate_instruction(vcpu, gpa, EMULTYPE_SKIP,
+-						       NULL, 0) == EMULATE_DONE;
++			return emulate_instruction(vcpu, EMULTYPE_SKIP) ==
++								EMULATE_DONE;
+ 	}
+ 
+ 	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 94cd63081471..97fcac34e007 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5810,7 +5810,10 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t cr2,
+ 	gpa_t gpa = cr2;
+ 	kvm_pfn_t pfn;
+ 
+-	if (emulation_type & EMULTYPE_NO_REEXECUTE)
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (!vcpu->arch.mmu.direct_map) {
+@@ -5898,7 +5901,10 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
+ 	 */
+ 	vcpu->arch.last_retry_eip = vcpu->arch.last_retry_addr = 0;
+ 
+-	if (!(emulation_type & EMULTYPE_RETRY))
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (x86_page_table_writing_insn(ctxt))
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index d1f1612672c7..045338ac1667 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -317,8 +317,6 @@ static noinline int vmalloc_fault(unsigned long address)
+ 	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+ 		return -1;
+ 
+-	WARN_ON_ONCE(in_nmi());
+-
+ 	/*
+ 	 * Synchronize this task's top level page-table
+ 	 * with the 'reference' page table.
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 58c6efa9f9a9..9fe5952d117d 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -275,9 +275,9 @@ static void bfqg_and_blkg_get(struct bfq_group *bfqg)
+ 
+ void bfqg_and_blkg_put(struct bfq_group *bfqg)
+ {
+-	bfqg_put(bfqg);
+-
+ 	blkg_put(bfqg_to_blkg(bfqg));
++
++	bfqg_put(bfqg);
+ }
+ 
+ /* @stats = 0 */
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 746a5eac4541..cbaca5a73f2e 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2161,9 +2161,12 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+ 	const int op = bio_op(bio);
+ 
+-	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
++	if (part->policy && op_is_write(op)) {
+ 		char b[BDEVNAME_SIZE];
+ 
++		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++			return false;
++
+ 		WARN_ONCE(1,
+ 		       "generic_make_request: Trying to write "
+ 			"to read-only block-device %s (partno %d)\n",
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index d5f2c21d8531..816923bf874d 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -402,8 +402,6 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 	if (tdepth <= tags->nr_reserved_tags)
+ 		return -EINVAL;
+ 
+-	tdepth -= tags->nr_reserved_tags;
+-
+ 	/*
+ 	 * If we are allowed to grow beyond the original size, allocate
+ 	 * a new set of tags before freeing the old one.
+@@ -423,7 +421,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		if (tdepth > 16 * BLKDEV_MAX_RQ)
+ 			return -EINVAL;
+ 
+-		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth, 0);
++		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth,
++				tags->nr_reserved_tags);
+ 		if (!new)
+ 			return -ENOMEM;
+ 		ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
+@@ -440,7 +439,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		 * Don't need (or can't) update reserved tags here, they
+ 		 * remain static and should never need resizing.
+ 		 */
+-		sbitmap_queue_resize(&tags->bitmap_tags, tdepth);
++		sbitmap_queue_resize(&tags->bitmap_tags,
++				tdepth - tags->nr_reserved_tags);
+ 	}
+ 
+ 	return 0;
+diff --git a/block/partitions/aix.c b/block/partitions/aix.c
+index 007f95eea0e1..903f3ed175d0 100644
+--- a/block/partitions/aix.c
++++ b/block/partitions/aix.c
+@@ -178,7 +178,7 @@ int aix_partition(struct parsed_partitions *state)
+ 	u32 vgda_sector = 0;
+ 	u32 vgda_len = 0;
+ 	int numlvs = 0;
+-	struct pvd *pvd;
++	struct pvd *pvd = NULL;
+ 	struct lv_info {
+ 		unsigned short pps_per_lv;
+ 		unsigned short pps_found;
+@@ -232,10 +232,11 @@ int aix_partition(struct parsed_partitions *state)
+ 				if (lvip[i].pps_per_lv)
+ 					foundlvs += 1;
+ 			}
++			/* pvd loops depend on n[].name and lvip[].pps_per_lv */
++			pvd = alloc_pvd(state, vgda_sector + 17);
+ 		}
+ 		put_dev_sector(sect);
+ 	}
+-	pvd = alloc_pvd(state, vgda_sector + 17);
+ 	if (pvd) {
+ 		int numpps = be16_to_cpu(pvd->pp_count);
+ 		int psn_part1 = be32_to_cpu(pvd->psn_part1);
+@@ -282,10 +283,14 @@ int aix_partition(struct parsed_partitions *state)
+ 				next_lp_ix += 1;
+ 		}
+ 		for (i = 0; i < state->limit; i += 1)
+-			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous)
++			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous) {
++				char tmp[sizeof(n[i].name) + 1]; // null char
++
++				snprintf(tmp, sizeof(tmp), "%s", n[i].name);
+ 				pr_warn("partition %s (%u pp's found) is "
+ 					"not contiguous\n",
+-					n[i].name, lvip[i].pps_found);
++					tmp, lvip[i].pps_found);
++			}
+ 		kfree(pvd);
+ 	}
+ 	kfree(n);
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 9706613eecf9..bf64cfa30feb 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -879,7 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev)
+ #define LPSS_GPIODEF0_DMA_LLP		BIT(13)
+ 
+ static DEFINE_MUTEX(lpss_iosf_mutex);
+-static bool lpss_iosf_d3_entered;
++static bool lpss_iosf_d3_entered = true;
+ 
+ static void lpss_iosf_enter_d3_state(void)
+ {
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 2628806c64a2..3d5277a39097 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -327,6 +327,35 @@ err_no_vma:
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+ 
++
++static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
++		struct vm_area_struct *vma)
++{
++	if (vma)
++		alloc->vma_vm_mm = vma->vm_mm;
++	/*
++	 * If we see alloc->vma is not NULL, buffer data structures set up
++	 * completely. Look at smp_rmb side binder_alloc_get_vma.
++	 * We also want to guarantee new alloc->vma_vm_mm is always visible
++	 * if alloc->vma is set.
++	 */
++	smp_wmb();
++	alloc->vma = vma;
++}
++
++static inline struct vm_area_struct *binder_alloc_get_vma(
++		struct binder_alloc *alloc)
++{
++	struct vm_area_struct *vma = NULL;
++
++	if (alloc->vma) {
++		/* Look at description in binder_alloc_set_vma */
++		smp_rmb();
++		vma = alloc->vma;
++	}
++	return vma;
++}
++
+ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 				struct binder_alloc *alloc,
+ 				size_t data_size,
+@@ -343,7 +372,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	size_t size, data_offsets_size;
+ 	int ret;
+ 
+-	if (alloc->vma == NULL) {
++	if (!binder_alloc_get_vma(alloc)) {
+ 		pr_err("%d: binder_alloc_buf, no vma\n",
+ 		       alloc->pid);
+ 		return ERR_PTR(-ESRCH);
+@@ -714,9 +743,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ 	buffer->free = 1;
+ 	binder_insert_free_buffer(alloc, buffer);
+ 	alloc->free_async_space = alloc->buffer_size / 2;
+-	barrier();
+-	alloc->vma = vma;
+-	alloc->vma_vm_mm = vma->vm_mm;
++	binder_alloc_set_vma(alloc, vma);
+ 	mmgrab(alloc->vma_vm_mm);
+ 
+ 	return 0;
+@@ -743,10 +770,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+ 	int buffers, page_count;
+ 	struct binder_buffer *buffer;
+ 
+-	BUG_ON(alloc->vma);
+-
+ 	buffers = 0;
+ 	mutex_lock(&alloc->mutex);
++	BUG_ON(alloc->vma);
++
+ 	while ((n = rb_first(&alloc->allocated_buffers))) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+ 
+@@ -889,7 +916,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+  */
+ void binder_alloc_vma_close(struct binder_alloc *alloc)
+ {
+-	WRITE_ONCE(alloc->vma, NULL);
++	binder_alloc_set_vma(alloc, NULL);
+ }
+ 
+ /**
+@@ -924,7 +951,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 
+ 	index = page - alloc->pages;
+ 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
+-	vma = alloc->vma;
++	vma = binder_alloc_get_vma(alloc);
+ 	if (vma) {
+ 		if (!mmget_not_zero(alloc->vma_vm_mm))
+ 			goto err_mmget;
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 09620c2ffa0f..704a761f94b2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -2107,7 +2107,7 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	struct ahci_host_priv *hpriv = ap->host->private_data;
+ 	void __iomem *port_mmio = ahci_port_base(ap);
+ 	struct ata_device *dev = ap->link.device;
+-	u32 devslp, dm, dito, mdat, deto;
++	u32 devslp, dm, dito, mdat, deto, dito_conf;
+ 	int rc;
+ 	unsigned int err_mask;
+ 
+@@ -2131,8 +2131,15 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		return;
+ 	}
+ 
+-	/* device sleep was already enabled */
+-	if (devslp & PORT_DEVSLP_ADSE)
++	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
++	dito = devslp_idle_timeout / (dm + 1);
++	if (dito > 0x3ff)
++		dito = 0x3ff;
++
++	dito_conf = (devslp >> PORT_DEVSLP_DITO_OFFSET) & 0x3FF;
++
++	/* device sleep was already enabled and same dito */
++	if ((devslp & PORT_DEVSLP_ADSE) && (dito_conf == dito))
+ 		return;
+ 
+ 	/* set DITO, MDAT, DETO and enable DevSlp, need to stop engine first */
+@@ -2140,11 +2147,6 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	if (rc)
+ 		return;
+ 
+-	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
+-	dito = devslp_idle_timeout / (dm + 1);
+-	if (dito > 0x3ff)
+-		dito = 0x3ff;
+-
+ 	/* Use the nominal value 10 ms if the read MDAT is zero,
+ 	 * the nominal value of DETO is 20 ms.
+ 	 */
+@@ -2162,6 +2164,8 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		deto = 20;
+ 	}
+ 
++	/* Make dito, mdat, deto bits to 0s */
++	devslp &= ~GENMASK_ULL(24, 2);
+ 	devslp |= ((dito << PORT_DEVSLP_DITO_OFFSET) |
+ 		   (mdat << PORT_DEVSLP_MDAT_OFFSET) |
+ 		   (deto << PORT_DEVSLP_DETO_OFFSET) |
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index f5e560188a18..622ab8edc035 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -416,26 +416,24 @@ static ssize_t show_valid_zones(struct device *dev,
+ 	struct zone *default_zone;
+ 	int nid;
+ 
+-	/*
+-	 * The block contains more than one zone can not be offlined.
+-	 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
+-	 */
+-	if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages, &valid_start_pfn, &valid_end_pfn))
+-		return sprintf(buf, "none\n");
+-
+-	start_pfn = valid_start_pfn;
+-	nr_pages = valid_end_pfn - start_pfn;
+-
+ 	/*
+ 	 * Check the existing zone. Make sure that we do that only on the
+ 	 * online nodes otherwise the page_zone is not reliable
+ 	 */
+ 	if (mem->state == MEM_ONLINE) {
++		/*
++		 * A block that contains more than one zone cannot be offlined.
++		 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
++		 */
++		if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages,
++					  &valid_start_pfn, &valid_end_pfn))
++			return sprintf(buf, "none\n");
++		start_pfn = valid_start_pfn;
+ 		strcat(buf, page_zone(pfn_to_page(start_pfn))->name);
+ 		goto out;
+ 	}
+ 
+-	nid = pfn_to_nid(start_pfn);
++	nid = mem->nid;
+ 	default_zone = zone_for_pfn_range(MMOP_ONLINE_KEEP, nid, start_pfn, nr_pages);
+ 	strcat(buf, default_zone->name);
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 3fb95c8d9fd8..15a5ce5bba3d 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1239,6 +1239,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_SOCK:
+ 		return nbd_add_socket(nbd, arg, false);
+ 	case NBD_SET_BLKSIZE:
++		if (!arg || !is_power_of_2(arg) || arg < 512 ||
++		    arg > PAGE_SIZE)
++			return -EINVAL;
+ 		nbd_size_set(nbd, arg,
+ 			     div_s64(config->bytesize, arg));
+ 		return 0;
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index b3f83cd96f33..01f59be71433 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -67,7 +67,7 @@
+ #include <scsi/scsi.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+-
++#include <linux/nospec.h>
+ #include <linux/uaccess.h>
+ 
+ #define DRIVER_NAME	"pktcdvd"
+@@ -2231,6 +2231,8 @@ static struct pktcdvd_device *pkt_find_dev_from_minor(unsigned int dev_minor)
+ {
+ 	if (dev_minor >= MAX_WRITERS)
+ 		return NULL;
++
++	dev_minor = array_index_nospec(dev_minor, MAX_WRITERS);
+ 	return pkt_devs[dev_minor];
+ }
+ 
+diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
+index f3c643a0473c..5f953ca8ac5b 100644
+--- a/drivers/bluetooth/Kconfig
++++ b/drivers/bluetooth/Kconfig
+@@ -159,6 +159,7 @@ config BT_HCIUART_LL
+ config BT_HCIUART_3WIRE
+ 	bool "Three-wire UART (H5) protocol support"
+ 	depends on BT_HCIUART
++	depends on BT_HCIUART_SERDEV
+ 	help
+ 	  The HCI Three-wire UART Transport Layer makes it possible to
+ 	  user the Bluetooth HCI over a serial port interface. The HCI
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 6116cd05e228..9086edc9066b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -117,7 +117,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	/* Lock the adapter for the duration of the whole sequence. */
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	if (tpm_dev.chip_type == SLB9645) {
+ 		/* use a combined read for newer chips
+@@ -192,7 +192,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	}
+ 
+ out:
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+@@ -224,7 +224,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	/* prepend the 'register address' to the buffer */
+ 	tpm_dev.buf[0] = addr;
+@@ -243,7 +243,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 		usleep_range(sleep_low, sleep_hi);
+ 	}
+ 
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+diff --git a/drivers/char/tpm/tpm_tis_spi.c b/drivers/char/tpm/tpm_tis_spi.c
+index 424ff2fde1f2..9914f6973463 100644
+--- a/drivers/char/tpm/tpm_tis_spi.c
++++ b/drivers/char/tpm/tpm_tis_spi.c
+@@ -199,6 +199,7 @@ static const struct tpm_tis_phy_ops tpm_spi_phy_ops = {
+ static int tpm_tis_spi_probe(struct spi_device *dev)
+ {
+ 	struct tpm_tis_spi_phy *phy;
++	int irq;
+ 
+ 	phy = devm_kzalloc(&dev->dev, sizeof(struct tpm_tis_spi_phy),
+ 			   GFP_KERNEL);
+@@ -211,7 +212,13 @@ static int tpm_tis_spi_probe(struct spi_device *dev)
+ 	if (!phy->iobuf)
+ 		return -ENOMEM;
+ 
+-	return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops,
++	/* If the SPI device has an IRQ then use that */
++	if (dev->irq > 0)
++		irq = dev->irq;
++	else
++		irq = -1;
++
++	return tpm_tis_core_init(&dev->dev, &phy->priv, irq, &tpm_spi_phy_ops,
+ 				 NULL);
+ }
+ 
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index bb2a6f2f5516..a985bf5e1ac6 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -38,7 +38,6 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
+ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				unsigned long *parent_rate)
+ {
+-	int step;
+ 	u64 fmin, fmax, ftmp;
+ 	struct scmi_clk *clk = to_scmi_clk(hw);
+ 
+@@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ftmp = rate - fmin;
+ 	ftmp += clk->info->range.step_size - 1; /* to round up */
+-	step = do_div(ftmp, clk->info->range.step_size);
++	do_div(ftmp, clk->info->range.step_size);
+ 
+-	return step * clk->info->range.step_size + fmin;
++	return ftmp * clk->info->range.step_size + fmin;
+ }
+ 
+ static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
+index fd49b24fd6af..99e2aace8078 100644
+--- a/drivers/dax/pmem.c
++++ b/drivers/dax/pmem.c
+@@ -105,15 +105,19 @@ static int dax_pmem_probe(struct device *dev)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_exit,
+-							&dax_pmem->ref);
+-	if (rc)
++	rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++	if (rc) {
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return rc;
++	}
+ 
+ 	dax_pmem->pgmap.ref = &dax_pmem->ref;
+ 	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
+-	if (IS_ERR(addr))
++	if (IS_ERR(addr)) {
++		devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return PTR_ERR(addr);
++	}
+ 
+ 	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
+ 							&dax_pmem->ref);
+diff --git a/drivers/firmware/google/vpd.c b/drivers/firmware/google/vpd.c
+index e9db895916c3..1aa67bb5d8c0 100644
+--- a/drivers/firmware/google/vpd.c
++++ b/drivers/firmware/google/vpd.c
+@@ -246,6 +246,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
+ 		sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
+ 		kfree(sec->raw_name);
+ 		memunmap(sec->baseaddr);
++		sec->enabled = false;
+ 	}
+ 
+ 	return 0;
+@@ -279,8 +280,10 @@ static int vpd_sections_init(phys_addr_t physaddr)
+ 		ret = vpd_section_init("rw", &rw_vpd,
+ 				       physaddr + sizeof(struct vpd_cbmem) +
+ 				       header.ro_size, header.rw_size);
+-		if (ret)
++		if (ret) {
++			vpd_section_destroy(&ro_vpd);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpio/gpio-ml-ioh.c b/drivers/gpio/gpio-ml-ioh.c
+index b23d9a36be1f..51c7d1b84c2e 100644
+--- a/drivers/gpio/gpio-ml-ioh.c
++++ b/drivers/gpio/gpio-ml-ioh.c
+@@ -496,9 +496,10 @@ static int ioh_gpio_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_gpiochip_add:
++	chip = chip_save;
+ 	while (--i >= 0) {
+-		chip--;
+ 		gpiochip_remove(&chip->gpio);
++		chip++;
+ 	}
+ 	kfree(chip_save);
+ 
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 1e66f808051c..2e33fd552899 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -241,6 +241,17 @@ int pxa_irq_to_gpio(int irq)
+ 	return irq_gpio0;
+ }
+ 
++static bool pxa_gpio_has_pinctrl(void)
++{
++	switch (gpio_type) {
++	case PXA3XX_GPIO:
++		return false;
++
++	default:
++		return true;
++	}
++}
++
+ static int pxa_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
+ {
+ 	struct pxa_gpio_chip *pchip = chip_to_pxachip(chip);
+@@ -255,9 +266,11 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	ret = pinctrl_gpio_direction_input(chip->base + offset);
+-	if (!ret)
+-		return 0;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_input(chip->base + offset);
++		if (!ret)
++			return 0;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -282,9 +295,11 @@ static int pxa_gpio_direction_output(struct gpio_chip *chip,
+ 
+ 	writel_relaxed(mask, base + (value ? GPSR_OFFSET : GPCR_OFFSET));
+ 
+-	ret = pinctrl_gpio_direction_output(chip->base + offset);
+-	if (ret)
+-		return ret;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_output(chip->base + offset);
++		if (ret)
++			return ret;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -348,8 +363,12 @@ static int pxa_init_gpio_chip(struct pxa_gpio_chip *pchip, int ngpio,
+ 	pchip->chip.set = pxa_gpio_set;
+ 	pchip->chip.to_irq = pxa_gpio_to_irq;
+ 	pchip->chip.ngpio = ngpio;
+-	pchip->chip.request = gpiochip_generic_request;
+-	pchip->chip.free = gpiochip_generic_free;
++
++	if (pxa_gpio_has_pinctrl()) {
++		pchip->chip.request = gpiochip_generic_request;
++		pchip->chip.free = gpiochip_generic_free;
++	}
++
+ #ifdef CONFIG_OF_GPIO
+ 	pchip->chip.of_node = np;
+ 	pchip->chip.of_xlate = pxa_gpio_of_xlate;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index 94396caaca75..d5d79727c55d 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -720,4 +720,4 @@ static int __init tegra_gpio_init(void)
+ {
+ 	return platform_driver_register(&tegra_gpio_driver);
+ }
+-postcore_initcall(tegra_gpio_init);
++subsys_initcall(tegra_gpio_init);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+index a576b8bbb3cd..dea40b322191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+@@ -150,7 +150,7 @@ static void dce_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	}
+ }
+ 
+-static void dce_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -261,6 +261,8 @@ static void dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	return true;
+ }
+ 
+ static bool dce_is_dmcu_initialized(struct dmcu *dmcu)
+@@ -545,24 +547,25 @@ static void dcn10_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	 *  least a few frames. Should never hit the max retry assert below.
+ 	 */
+ 	if (wait == true) {
+-	for (retryCount = 0; retryCount <= 1000; retryCount++) {
+-		dcn10_get_dmcu_psr_state(dmcu, &psr_state);
+-		if (enable) {
+-			if (psr_state != 0)
+-				break;
+-		} else {
+-			if (psr_state == 0)
+-				break;
++		for (retryCount = 0; retryCount <= 1000; retryCount++) {
++			dcn10_get_dmcu_psr_state(dmcu, &psr_state);
++			if (enable) {
++				if (psr_state != 0)
++					break;
++			} else {
++				if (psr_state == 0)
++					break;
++			}
++			udelay(500);
+ 		}
+-		udelay(500);
+-	}
+ 
+-	/* assert if max retry hit */
+-	ASSERT(retryCount <= 1000);
++		/* assert if max retry hit */
++		if (retryCount >= 1000)
++			ASSERT(0);
+ 	}
+ }
+ 
+-static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -577,7 +580,7 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* If microcontroller is not running, do nothing */
+ 	if (dmcu->dmcu_state != DMCU_RUNNING)
+-		return;
++		return false;
+ 
+ 	link->link_enc->funcs->psr_program_dp_dphy_fast_training(link->link_enc,
+ 			psr_context->psrExitLinkTrainingRequired);
+@@ -677,6 +680,11 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	/* waitDMCUReadyForCmd */
++	REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000);
++
++	return true;
+ }
+ 
+ static void dcn10_psr_wait_loop(
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+index de60f940030d..4550747fb61c 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+@@ -48,7 +48,7 @@ struct dmcu_funcs {
+ 			const char *src,
+ 			unsigned int bytes);
+ 	void (*set_psr_enable)(struct dmcu *dmcu, bool enable, bool wait);
+-	void (*setup_psr)(struct dmcu *dmcu,
++	bool (*setup_psr)(struct dmcu *dmcu,
+ 			struct dc_link *link,
+ 			struct psr_context *psr_context);
+ 	void (*get_psr_state)(struct dmcu *dmcu, uint32_t *psr_state);
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index 48685cddbad1..c73bd003f845 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -1401,6 +1401,8 @@ static int ipu_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	ipu->id = of_alias_get_id(np, "ipu");
++	if (ipu->id < 0)
++		ipu->id = 0;
+ 
+ 	if (of_device_is_compatible(np, "fsl,imx6qp-ipu") &&
+ 	    IS_ENABLED(CONFIG_DRM)) {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c7981ddd8776..e80bcd71fe1e 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -528,6 +528,7 @@
+ 
+ #define I2C_VENDOR_ID_RAYD		0x2386
+ #define I2C_PRODUCT_ID_RAYD_3118	0x3118
++#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+ 
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index ab93dd5927c3..b23c4b5854d8 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1579,6 +1579,7 @@ static struct hid_input *hidinput_allocate(struct hid_device *hid,
+ 	input_dev->dev.parent = &hid->dev;
+ 
+ 	hidinput->input = input_dev;
++	hidinput->application = application;
+ 	list_add_tail(&hidinput->list, &hid->inputs);
+ 
+ 	INIT_LIST_HEAD(&hidinput->reports);
+@@ -1674,8 +1675,7 @@ static struct hid_input *hidinput_match_application(struct hid_report *report)
+ 	struct hid_input *hidinput;
+ 
+ 	list_for_each_entry(hidinput, &hid->inputs, list) {
+-		if (hidinput->report &&
+-		    hidinput->report->application == report->application)
++		if (hidinput->application == report->application)
+ 			return hidinput;
+ 	}
+ 
+@@ -1812,6 +1812,7 @@ void hidinput_disconnect(struct hid_device *hid)
+ 			input_unregister_device(hidinput->input);
+ 		else
+ 			input_free_device(hidinput->input);
++		kfree(hidinput->name);
+ 		kfree(hidinput);
+ 	}
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 45968f7970f8..15c934ef6b18 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1167,7 +1167,8 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 				     struct hid_usage *usage,
+ 				     enum latency_mode latency,
+ 				     bool surface_switch,
+-				     bool button_switch)
++				     bool button_switch,
++				     bool *inputmode_found)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hdev);
+ 	struct mt_class *cls = &td->mtclass;
+@@ -1179,6 +1180,14 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 
+ 	switch (usage->hid) {
+ 	case HID_DG_INPUTMODE:
++		/*
++		 * Some elan panels wrongly declare 2 input mode features,
++		 * and silently ignore when we set the value in the second
++		 * field. Skip the second feature and hope for the best.
++		 */
++		if (*inputmode_found)
++			return false;
++
+ 		if (cls->quirks & MT_QUIRK_FORCE_GET_FEATURE) {
+ 			report_len = hid_report_len(report);
+ 			buf = hid_alloc_report_buf(report, GFP_KERNEL);
+@@ -1194,6 +1203,7 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 		}
+ 
+ 		field->value[index] = td->inputmode_value;
++		*inputmode_found = true;
+ 		return true;
+ 
+ 	case HID_DG_CONTACTMAX:
+@@ -1231,6 +1241,7 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 	struct hid_usage *usage;
+ 	int i, j;
+ 	bool update_report;
++	bool inputmode_found = false;
+ 
+ 	rep_enum = &hdev->report_enum[HID_FEATURE_REPORT];
+ 	list_for_each_entry(rep, &rep_enum->report_list, list) {
+@@ -1249,7 +1260,8 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 							     usage,
+ 							     latency,
+ 							     surface_switch,
+-							     button_switch))
++							     button_switch,
++							     &inputmode_found))
+ 					update_report = true;
+ 			}
+ 		}
+@@ -1476,6 +1488,9 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	 */
+ 	hdev->quirks |= HID_QUIRK_INPUT_PER_APP;
+ 
++	if (id->group != HID_GROUP_MULTITOUCH_WIN_8)
++		hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++
+ 	timer_setup(&td->release_timer, mt_expired_timeout, 0);
+ 
+ 	ret = hid_parse(hdev);
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index eae0cb3ddec6..5fd1159fc095 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -174,6 +174,8 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
++		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 658dc765753b..553adccb05d7 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -242,6 +242,10 @@ int hv_synic_alloc(void)
+ 
+ 	return 0;
+ err:
++	/*
++	 * Any memory allocations that succeeded will be freed when
++	 * the caller cleans up by calling hv_synic_free()
++	 */
+ 	return -ENOMEM;
+ }
+ 
+@@ -254,12 +258,10 @@ void hv_synic_free(void)
+ 		struct hv_per_cpu_context *hv_cpu
+ 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
+-		if (hv_cpu->synic_event_page)
+-			free_page((unsigned long)hv_cpu->synic_event_page);
+-		if (hv_cpu->synic_message_page)
+-			free_page((unsigned long)hv_cpu->synic_message_page);
+-		if (hv_cpu->post_msg_page)
+-			free_page((unsigned long)hv_cpu->post_msg_page);
++		kfree(hv_cpu->clk_evt);
++		free_page((unsigned long)hv_cpu->synic_event_page);
++		free_page((unsigned long)hv_cpu->synic_message_page);
++		free_page((unsigned long)hv_cpu->post_msg_page);
+ 	}
+ 
+ 	kfree(hv_context.hv_numa_map);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 60e4d0e939a3..715b6fdb4989 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -868,7 +868,7 @@ static int aspeed_i2c_probe_bus(struct platform_device *pdev)
+ 	if (!match)
+ 		bus->get_clk_reg_val = aspeed_i2c_24xx_get_clk_reg_val;
+ 	else
+-		bus->get_clk_reg_val = match->data;
++		bus->get_clk_reg_val = (u32 (*)(u32))match->data;
+ 
+ 	/* Initialize the I2C adapter */
+ 	spin_lock_init(&bus->lock);
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index aa726607645e..45fcf0c37a9e 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -139,6 +139,7 @@
+ 
+ #define SBREG_BAR		0x10
+ #define SBREG_SMBCTRL		0xc6000c
++#define SBREG_SMBCTRL_DNV	0xcf000c
+ 
+ /* Host status bits for SMBPCISTS */
+ #define SMBPCISTS_INTS		BIT(3)
+@@ -1396,7 +1397,11 @@ static void i801_add_tco(struct i801_priv *priv)
+ 	spin_unlock(&p2sb_spinlock);
+ 
+ 	res = &tco_res[ICH_RES_MEM_OFF];
+-	res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++	if (pci_dev->device == PCI_DEVICE_ID_INTEL_DNV_SMBUS)
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL_DNV;
++	else
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++
+ 	res->end = res->start + 3;
+ 	res->flags = IORESOURCE_MEM;
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 9a71e50d21f1..0c51c0ffdda9 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -532,6 +532,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ {
+ 	u8 rx_watermark;
+ 	struct i2c_msg *msg = i2c->rx_msg = i2c->tx_msg;
++	unsigned long flags;
+ 
+ 	/* Clear and enable Rx full interrupt. */
+ 	xiic_irq_clr_en(i2c, XIIC_INTR_RX_FULL_MASK | XIIC_INTR_TX_ERROR_MASK);
+@@ -547,6 +548,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 		rx_watermark = IIC_RX_FIFO_DEPTH;
+ 	xiic_setreg8(i2c, XIIC_RFD_REG_OFFSET, rx_watermark - 1);
+ 
++	local_irq_save(flags);
+ 	if (!(msg->flags & I2C_M_NOSTART))
+ 		/* write the address */
+ 		xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+@@ -556,6 +558,8 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 
+ 	xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+ 		msg->len | ((i2c->nmsgs == 1) ? XIIC_TX_DYN_STOP_MASK : 0));
++	local_irq_restore(flags);
++
+ 	if (i2c->nmsgs == 1)
+ 		/* very last, enable bus not busy as well */
+ 		xiic_irq_clr_en(i2c, XIIC_INTR_BNB_MASK);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index bff10ab141b0..dafcb6f019b3 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1445,9 +1445,16 @@ static bool cma_match_net_dev(const struct rdma_cm_id *id,
+ 		       (addr->src_addr.ss_family == AF_IB ||
+ 			rdma_protocol_roce(id->device, port_num));
+ 
+-	return !addr->dev_addr.bound_dev_if ||
+-	       (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
+-		addr->dev_addr.bound_dev_if == net_dev->ifindex);
++	/*
++	 * Net namespaces must match, and if the listener is listening
++	 * on a specific netdevice then the netdevice must match as well.
++	 */
++	if (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
++	    (!!addr->dev_addr.bound_dev_if ==
++	     (addr->dev_addr.bound_dev_if == net_dev->ifindex)))
++		return true;
++	else
++		return false;
+ }
+ 
+ static struct rdma_id_private *cma_find_listener(
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 63b5b3edabcb..8dc336a85128 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -494,6 +494,9 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ 			step_idx = 1;
+ 		} else if (hop_num == HNS_ROCE_HOP_NUM_0) {
+ 			step_idx = 0;
++		} else {
++			ret = -EINVAL;
++			goto err_dma_alloc_l1;
+ 		}
+ 
+ 		/* set HEM base address to hardware */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index a6e11be0ea0f..c00925ed9da8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -273,7 +273,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				ud_sq_wqe->immtdata = wr->ex.imm_data;
++				ud_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			default:
+ 				ud_sq_wqe->immtdata = 0;
+@@ -371,7 +372,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				rc_sq_wqe->immtdata = wr->ex.imm_data;
++				rc_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			case IB_WR_SEND_WITH_INV:
+ 				rc_sq_wqe->inv_key =
+@@ -1931,7 +1933,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_RDMA_WRITE_IMM:
+ 			wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND:
+ 			wc->opcode = IB_WC_RECV;
+@@ -1940,7 +1943,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_IMM:
+ 			wc->opcode = IB_WC_RECV;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_INV:
+ 			wc->opcode = IB_WC_RECV;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index d47675f365c7..7e2c740e0df5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -768,7 +768,7 @@ struct hns_roce_v2_cqe {
+ 	__le32	byte_4;
+ 	union {
+ 		__le32 rkey;
+-		__be32 immtdata;
++		__le32 immtdata;
+ 	};
+ 	__le32	byte_12;
+ 	__le32	byte_16;
+@@ -926,7 +926,7 @@ struct hns_roce_v2_cq_db {
+ struct hns_roce_v2_ud_send_wqe {
+ 	__le32	byte_4;
+ 	__le32	msg_len;
+-	__be32	immtdata;
++	__le32	immtdata;
+ 	__le32	byte_16;
+ 	__le32	byte_20;
+ 	__le32	byte_24;
+@@ -1012,7 +1012,7 @@ struct hns_roce_v2_rc_send_wqe {
+ 	__le32		msg_len;
+ 	union {
+ 		__le32  inv_key;
+-		__be32  immtdata;
++		__le32  immtdata;
+ 	};
+ 	__le32		byte_16;
+ 	__le32		byte_20;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 6709328d90f8..c7e034963738 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -822,6 +822,7 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
+ 			if (neigh && list_empty(&neigh->list)) {
+ 				kref_get(&mcast->ah->ref);
+ 				neigh->ah	= mcast->ah;
++				neigh->ah->valid = 1;
+ 				list_add_tail(&neigh->list, &mcast->neigh_list);
+ 			}
+ 		}
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index 54fe190fd4bc..48c5ccab00a0 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -1658,10 +1658,11 @@ static int mxt_parse_object_table(struct mxt_data *data,
+ 			break;
+ 		case MXT_TOUCH_MULTI_T9:
+ 			data->multitouch = MXT_TOUCH_MULTI_T9;
++			/* Only handle messages from the first T9 instance */
+ 			data->T9_reportid_min = min_id;
+-			data->T9_reportid_max = max_id;
+-			data->num_touchids = object->num_report_ids
+-						* mxt_obj_instances(object);
++			data->T9_reportid_max = min_id +
++						object->num_report_ids - 1;
++			data->num_touchids = object->num_report_ids;
+ 			break;
+ 		case MXT_SPT_MESSAGECOUNT_T44:
+ 			data->T44_address = object->start_address;
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 1d647104bccc..b73c6a7bf7f2 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -24,6 +24,7 @@
+ #include <linux/acpi_iort.h>
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
++#include <linux/crash_dump.h>
+ #include <linux/delay.h>
+ #include <linux/dma-iommu.h>
+ #include <linux/err.h>
+@@ -2211,8 +2212,12 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+ 	reg &= ~clr;
+ 	reg |= set;
+ 	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+-	return readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+-					  1, ARM_SMMU_POLL_TIMEOUT_US);
++	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
++					 1, ARM_SMMU_POLL_TIMEOUT_US);
++
++	if (ret)
++		dev_err(smmu->dev, "GBPA not responding to update\n");
++	return ret;
+ }
+ 
+ static void arm_smmu_free_msis(void *data)
+@@ -2392,8 +2397,15 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 
+ 	/* Clear CR0 and sync (disables SMMU and queue processing) */
+ 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+-	if (reg & CR0_SMMUEN)
++	if (reg & CR0_SMMUEN) {
++		if (is_kdump_kernel()) {
++			arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
++			arm_smmu_device_disable(smmu);
++			return -EBUSY;
++		}
++
+ 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
++	}
+ 
+ 	ret = arm_smmu_device_disable(smmu);
+ 	if (ret)
+@@ -2491,10 +2503,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 		enables |= CR0_SMMUEN;
+ 	} else {
+ 		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+-		if (ret) {
+-			dev_err(smmu->dev, "GBPA not responding to update\n");
++		if (ret)
+ 			return ret;
+-		}
+ 	}
+ 	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ 				      ARM_SMMU_CR0ACK);
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 09b47260c74b..feb1664815b7 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
+ 	struct io_pgtable_ops *iop;
+ 
+ 	unsigned int context_id;
+-	spinlock_t lock;			/* Protects mappings */
++	struct mutex mutex;			/* Protects mappings */
+ };
+ 
+ static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
+@@ -595,7 +595,7 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
+ 	if (!domain)
+ 		return NULL;
+ 
+-	spin_lock_init(&domain->lock);
++	mutex_init(&domain->mutex);
+ 
+ 	return &domain->io_domain;
+ }
+@@ -641,7 +641,6 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+ 	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
+ 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
+-	unsigned long flags;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+@@ -650,7 +649,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 		return -ENXIO;
+ 	}
+ 
+-	spin_lock_irqsave(&domain->lock, flags);
++	mutex_lock(&domain->mutex);
+ 
+ 	if (!domain->mmu) {
+ 		/* The domain hasn't been used yet, initialize it. */
+@@ -674,7 +673,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	} else
+ 		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
+ 
+-	spin_unlock_irqrestore(&domain->lock, flags);
++	mutex_unlock(&domain->mutex);
+ 
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 25c1ce811053..1fdd09ebb3f1 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -534,8 +534,9 @@ init_pmu(void)
+ 	int timeout;
+ 	struct adb_request req;
+ 
+-	out_8(&via[B], via[B] | TREQ);			/* negate TREQ */
+-	out_8(&via[DIRB], (via[DIRB] | TREQ) & ~TACK);	/* TACK in, TREQ out */
++	/* Negate TREQ. Set TACK to input and TREQ to output. */
++	out_8(&via[B], in_8(&via[B]) | TREQ);
++	out_8(&via[DIRB], (in_8(&via[DIRB]) | TREQ) & ~TACK);
+ 
+ 	pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask);
+ 	timeout =  100000;
+@@ -1418,8 +1419,8 @@ pmu_sr_intr(void)
+ 	struct adb_request *req;
+ 	int bite = 0;
+ 
+-	if (via[B] & TREQ) {
+-		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", via[B]);
++	if (in_8(&via[B]) & TREQ) {
++		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", in_8(&via[B]));
+ 		out_8(&via[IFR], SR_INT);
+ 		return NULL;
+ 	}
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index ce14a3d1f609..44df244807e5 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -2250,7 +2250,7 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		{0, 2, "Invalid number of cache feature arguments"},
+ 	};
+ 
+-	int r;
++	int r, mode_ctr = 0;
+ 	unsigned argc;
+ 	const char *arg;
+ 	struct cache_features *cf = &ca->features;
+@@ -2264,14 +2264,20 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 	while (argc--) {
+ 		arg = dm_shift_arg(as);
+ 
+-		if (!strcasecmp(arg, "writeback"))
++		if (!strcasecmp(arg, "writeback")) {
+ 			cf->io_mode = CM_IO_WRITEBACK;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "writethrough"))
++		else if (!strcasecmp(arg, "writethrough")) {
+ 			cf->io_mode = CM_IO_WRITETHROUGH;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "passthrough"))
++		else if (!strcasecmp(arg, "passthrough")) {
+ 			cf->io_mode = CM_IO_PASSTHROUGH;
++			mode_ctr++;
++		}
+ 
+ 		else if (!strcasecmp(arg, "metadata2"))
+ 			cf->metadata_version = 2;
+@@ -2282,6 +2288,11 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		}
+ 	}
+ 
++	if (mode_ctr > 1) {
++		*error = "Duplicate cache io_mode features requested";
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2031506a0ecd..49107c52c8e6 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4521,6 +4521,12 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
+ 			s->failed++;
+ 			if (rdev && !test_bit(Faulty, &rdev->flags))
+ 				do_recovery = 1;
++			else if (!rdev) {
++				rdev = rcu_dereference(
++				    conf->disks[i].replacement);
++				if (rdev && !test_bit(Faulty, &rdev->flags))
++					do_recovery = 1;
++			}
+ 		}
+ 
+ 		if (test_bit(R5_InJournal, &dev->flags))
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index a0d0b53c91d7..a5de65dcf784 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -897,7 +897,10 @@ static int helene_x_pon(struct helene_priv *priv)
+ 	helene_write_regs(priv, 0x99, cdata, sizeof(cdata));
+ 
+ 	/* 0x81 - 0x94 */
+-	data[0] = 0x18; /* xtal 24 MHz */
++	if (priv->xtal == SONY_HELENE_XTAL_16000)
++		data[0] = 0x10; /* xtal 16 MHz */
++	else
++		data[0] = 0x18; /* xtal 24 MHz */
+ 	data[1] = (uint8_t)(0x80 | (0x04 & 0x1F)); /* 4 x 25 = 100uA */
+ 	data[2] = (uint8_t)(0x80 | (0x26 & 0x7F)); /* 38 x 0.25 = 9.5pF */
+ 	data[3] = 0x80; /* REFOUT signal output 500mVpp */
+diff --git a/drivers/media/platform/davinci/vpif_display.c b/drivers/media/platform/davinci/vpif_display.c
+index 7be636237acf..0f324055cc9f 100644
+--- a/drivers/media/platform/davinci/vpif_display.c
++++ b/drivers/media/platform/davinci/vpif_display.c
+@@ -1114,6 +1114,14 @@ vpif_init_free_channel_objects:
+ 	return err;
+ }
+ 
++static void free_vpif_objs(void)
++{
++	int i;
++
++	for (i = 0; i < VPIF_DISPLAY_MAX_DEVICES; i++)
++		kfree(vpif_obj.dev[i]);
++}
++
+ static int vpif_async_bound(struct v4l2_async_notifier *notifier,
+ 			    struct v4l2_subdev *subdev,
+ 			    struct v4l2_async_subdev *asd)
+@@ -1255,11 +1263,6 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!pdev->dev.platform_data) {
+-		dev_warn(&pdev->dev, "Missing platform data.  Giving up.\n");
+-		return -EINVAL;
+-	}
+-
+ 	vpif_dev = &pdev->dev;
+ 	err = initialize_vpif();
+ 
+@@ -1271,7 +1274,7 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 	err = v4l2_device_register(vpif_dev, &vpif_obj.v4l2_dev);
+ 	if (err) {
+ 		v4l2_err(vpif_dev->driver, "Error registering v4l2 device\n");
+-		return err;
++		goto vpif_free;
+ 	}
+ 
+ 	while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, res_idx))) {
+@@ -1314,7 +1317,10 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 			if (vpif_obj.sd[i])
+ 				vpif_obj.sd[i]->grp_id = 1 << i;
+ 		}
+-		vpif_probe_complete();
++		err = vpif_probe_complete();
++		if (err) {
++			goto probe_subdev_out;
++		}
+ 	} else {
+ 		vpif_obj.notifier.subdevs = vpif_obj.config->asd;
+ 		vpif_obj.notifier.num_subdevs = vpif_obj.config->asd_sizes[0];
+@@ -1334,6 +1340,8 @@ probe_subdev_out:
+ 	kfree(vpif_obj.sd);
+ vpif_unregister:
+ 	v4l2_device_unregister(&vpif_obj.v4l2_dev);
++vpif_free:
++	free_vpif_objs();
+ 
+ 	return err;
+ }
+@@ -1355,8 +1363,8 @@ static int vpif_remove(struct platform_device *device)
+ 		ch = vpif_obj.dev[i];
+ 		/* Unregister video device */
+ 		video_unregister_device(&ch->video_dev);
+-		kfree(vpif_obj.dev[i]);
+ 	}
++	free_vpif_objs();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss-8x16/camss-csid.c b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+index 226f36ef7419..2bf65805f2c1 100644
+--- a/drivers/media/platform/qcom/camss-8x16/camss-csid.c
++++ b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+@@ -392,9 +392,6 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 		    !media_entity_remote_pad(&csid->pads[MSM_CSID_PAD_SINK]))
+ 			return -ENOLINK;
+ 
+-		dt = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SRC].code)->
+-								data_type;
+-
+ 		if (tg->enabled) {
+ 			/* Config Test Generator */
+ 			struct v4l2_mbus_framefmt *f =
+@@ -416,6 +413,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_0(0));
+ 
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->data_type;
++
+ 			/* 5:0 data type */
+ 			val = dt;
+ 			writel_relaxed(val, csid->base +
+@@ -425,6 +425,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			val = tg->payload_mode;
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_2(0));
++
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->decode_format;
+ 		} else {
+ 			struct csid_phy_config *phy = &csid->phy;
+ 
+@@ -439,13 +442,16 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 			writel_relaxed(val,
+ 				       csid->base + CAMSS_CSID_CORE_CTRL_1);
++
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->data_type;
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->decode_format;
+ 		}
+ 
+ 		/* Config LUT */
+ 
+ 		dt_shift = (cid % 4) * 8;
+-		df = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SINK].code)->
+-								decode_format;
+ 
+ 		val = readl_relaxed(csid->base + CAMSS_CSID_CID_LUT_VC_n(vc));
+ 		val &= ~(0xff << dt_shift);
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index daef72d410a3..dc5ae8025832 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -339,6 +339,7 @@ enum rcar_csi2_pads {
+ 
+ struct rcar_csi2_info {
+ 	int (*init_phtw)(struct rcar_csi2 *priv, unsigned int mbps);
++	int (*confirm_start)(struct rcar_csi2 *priv);
+ 	const struct rcsi2_mbps_reg *hsfreqrange;
+ 	unsigned int csi0clkfreqrange;
+ 	bool clear_ulps;
+@@ -545,6 +546,13 @@ static int rcsi2_start(struct rcar_csi2 *priv)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Confirm start */
++	if (priv->info->confirm_start) {
++		ret = priv->info->confirm_start(priv);
++		if (ret)
++			return ret;
++	}
++
+ 	/* Clear Ultra Low Power interrupt. */
+ 	if (priv->info->clear_ulps)
+ 		rcsi2_write(priv, INTSTATE_REG,
+@@ -880,6 +888,11 @@ static int rcsi2_init_phtw_h3_v3h_m3n(struct rcar_csi2 *priv, unsigned int mbps)
+ }
+ 
+ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
++{
++	return rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
++}
++
++static int rcsi2_confirm_start_v3m_e3(struct rcar_csi2 *priv)
+ {
+ 	static const struct phtw_value step1[] = {
+ 		{ .data = 0xed, .code = 0x34 },
+@@ -890,12 +903,6 @@ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
+ 		{ /* sentinel */ },
+ 	};
+ 
+-	int ret;
+-
+-	ret = rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
+-	if (ret)
+-		return ret;
+-
+ 	return rcsi2_phtw_write_array(priv, step1);
+ }
+ 
+@@ -949,6 +956,7 @@ static const struct rcar_csi2_info rcar_csi2_info_r8a77965 = {
+ 
+ static const struct rcar_csi2_info rcar_csi2_info_r8a77970 = {
+ 	.init_phtw = rcsi2_init_phtw_v3m_e3,
++	.confirm_start = rcsi2_confirm_start_v3m_e3,
+ };
+ 
+ static const struct of_device_id rcar_csi2_of_table[] = {
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index a80251ed3143..780548dd650e 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -254,24 +254,24 @@ static void s5p_mfc_handle_frame_all_extracted(struct s5p_mfc_ctx *ctx)
+ static void s5p_mfc_handle_frame_copy_time(struct s5p_mfc_ctx *ctx)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+-	struct s5p_mfc_buf  *dst_buf, *src_buf;
+-	size_t dec_y_addr;
++	struct s5p_mfc_buf *dst_buf, *src_buf;
++	u32 dec_y_addr;
+ 	unsigned int frame_type;
+ 
+ 	/* Make sure we actually have a new frame before continuing. */
+ 	frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
+ 	if (frame_type == S5P_FIMV_DECODE_FRAME_SKIPPED)
+ 		return;
+-	dec_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
++	dec_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
+ 
+ 	/* Copy timestamp / timecode from decoded src to dst and set
+ 	   appropriate flags. */
+ 	src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dec_y_addr) {
+-			dst_buf->b->timecode =
+-						src_buf->b->timecode;
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
++		if (addr == dec_y_addr) {
++			dst_buf->b->timecode = src_buf->b->timecode;
+ 			dst_buf->b->vb2_buf.timestamp =
+ 						src_buf->b->vb2_buf.timestamp;
+ 			dst_buf->b->flags &=
+@@ -307,10 +307,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+ 	struct s5p_mfc_buf  *dst_buf;
+-	size_t dspl_y_addr;
++	u32 dspl_y_addr;
+ 	unsigned int frame_type;
+ 
+-	dspl_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
++	dspl_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
+ 	if (IS_MFCV6_PLUS(dev))
+ 		frame_type = s5p_mfc_hw_call(dev->mfc_ops,
+ 			get_disp_frame_type, ctx);
+@@ -329,9 +329,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ 	/* The MFC returns address of the buffer, now we have to
+ 	 * check which videobuf does it correspond to */
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
+ 		/* Check if this is the buffer we're looking for */
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dspl_y_addr) {
++		if (addr == dspl_y_addr) {
+ 			list_del(&dst_buf->list);
+ 			ctx->dst_queue_cnt--;
+ 			dst_buf->b->sequence = ctx->sequence;
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 0d4fdd34a710..9ce8b4d79d1f 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -2101,14 +2101,12 @@ static struct dvb_usb_device_properties s6x0_properties = {
+ 	}
+ };
+ 
+-static struct dvb_usb_device_properties *p1100;
+ static const struct dvb_usb_device_description d1100 = {
+ 	"Prof 1100 USB ",
+ 	{&dw2102_table[PROF_1100], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s660;
+ static const struct dvb_usb_device_description d660 = {
+ 	"TeVii S660 USB",
+ 	{&dw2102_table[TEVII_S660], NULL},
+@@ -2127,14 +2125,12 @@ static const struct dvb_usb_device_description d480_2 = {
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *p7500;
+ static const struct dvb_usb_device_description d7500 = {
+ 	"Prof 7500 USB DVB-S2",
+ 	{&dw2102_table[PROF_7500], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s421;
+ static const struct dvb_usb_device_description d421 = {
+ 	"TeVii S421 PCI",
+ 	{&dw2102_table[TEVII_S421], NULL},
+@@ -2334,6 +2330,11 @@ static int dw2102_probe(struct usb_interface *intf,
+ 		const struct usb_device_id *id)
+ {
+ 	int retval = -ENOMEM;
++	struct dvb_usb_device_properties *p1100;
++	struct dvb_usb_device_properties *s660;
++	struct dvb_usb_device_properties *p7500;
++	struct dvb_usb_device_properties *s421;
++
+ 	p1100 = kmemdup(&s6x0_properties,
+ 			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+ 	if (!p1100)
+@@ -2402,8 +2403,16 @@ static int dw2102_probe(struct usb_interface *intf,
+ 	    0 == dvb_usb_device_init(intf, &t220_properties,
+ 			 THIS_MODULE, NULL, adapter_nr) ||
+ 	    0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
+-			 THIS_MODULE, NULL, adapter_nr))
++			 THIS_MODULE, NULL, adapter_nr)) {
++
++		/* clean up copied properties */
++		kfree(s421);
++		kfree(p7500);
++		kfree(s660);
++		kfree(p1100);
++
+ 		return 0;
++	}
+ 
+ 	retval = -ENODEV;
+ 	kfree(s421);
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 6c8438311d3b..ff5e41ac4723 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3376,7 +3376,9 @@ void em28xx_free_device(struct kref *ref)
+ 	if (!dev->disconnected)
+ 		em28xx_release_resources(dev);
+ 
+-	kfree(dev->alt_max_pkt_size_isoc);
++	if (dev->ts == PRIMARY_TS)
++		kfree(dev->alt_max_pkt_size_isoc);
++
+ 	kfree(dev);
+ }
+ EXPORT_SYMBOL_GPL(em28xx_free_device);
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index f70845e7d8c6..45b24776a695 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -655,12 +655,12 @@ int em28xx_capture_start(struct em28xx *dev, int start)
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS1_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS1_CAPTURE_ENABLE);
++						   EM2874_TS1_CAPTURE_ENABLE | EM2874_TS1_FILTER_ENABLE | EM2874_TS1_NULL_DISCARD);
+ 		else
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS2_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS2_CAPTURE_ENABLE);
++						   EM2874_TS2_CAPTURE_ENABLE | EM2874_TS2_FILTER_ENABLE | EM2874_TS2_NULL_DISCARD);
+ 	} else {
+ 		/* FIXME: which is the best order? */
+ 		/* video registers are sampled by VREF */
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index b778d8a1983e..a73faf12f7e4 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -218,7 +218,9 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ 		dvb_alt = dev->dvb_alt_isoc;
+ 	}
+ 
+-	usb_set_interface(udev, dev->ifnum, dvb_alt);
++	if (!dev->board.has_dual_ts)
++		usb_set_interface(udev, dev->ifnum, dvb_alt);
++
+ 	rc = em28xx_set_mode(dev, EM28XX_DIGITAL_MODE);
+ 	if (rc < 0)
+ 		return rc;
+diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c
+index 31112f622b88..475e5b3790ed 100644
+--- a/drivers/memory/ti-aemif.c
++++ b/drivers/memory/ti-aemif.c
+@@ -411,7 +411,7 @@ static int aemif_probe(struct platform_device *pdev)
+ 			if (ret < 0)
+ 				goto error;
+ 		}
+-	} else {
++	} else if (pdata) {
+ 		for (i = 0; i < pdata->num_sub_devices; i++) {
+ 			pdata->sub_devices[i].dev.parent = dev;
+ 			ret = platform_device_register(&pdata->sub_devices[i]);
+diff --git a/drivers/mfd/rave-sp.c b/drivers/mfd/rave-sp.c
+index 36dcd98977d6..4f545fdc6ebc 100644
+--- a/drivers/mfd/rave-sp.c
++++ b/drivers/mfd/rave-sp.c
+@@ -776,6 +776,13 @@ static int rave_sp_probe(struct serdev_device *serdev)
+ 		return ret;
+ 
+ 	serdev_device_set_baudrate(serdev, baud);
++	serdev_device_set_flow_control(serdev, false);
++
++	ret = serdev_device_set_parity(serdev, SERDEV_PARITY_NONE);
++	if (ret) {
++		dev_err(dev, "Failed to set parity\n");
++		return ret;
++	}
+ 
+ 	ret = rave_sp_get_status(sp);
+ 	if (ret) {
+diff --git a/drivers/mfd/ti_am335x_tscadc.c b/drivers/mfd/ti_am335x_tscadc.c
+index 47012c0899cd..7a30546880a4 100644
+--- a/drivers/mfd/ti_am335x_tscadc.c
++++ b/drivers/mfd/ti_am335x_tscadc.c
+@@ -209,14 +209,13 @@ static	int ti_tscadc_probe(struct platform_device *pdev)
+ 	 * The TSC_ADC_SS controller design assumes the OCP clock is
+ 	 * at least 6x faster than the ADC clock.
+ 	 */
+-	clk = clk_get(&pdev->dev, "adc_tsc_fck");
++	clk = devm_clk_get(&pdev->dev, "adc_tsc_fck");
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&pdev->dev, "failed to get TSC fck\n");
+ 		err = PTR_ERR(clk);
+ 		goto err_disable_clk;
+ 	}
+ 	clock_rate = clk_get_rate(clk);
+-	clk_put(clk);
+ 	tscadc->clk_div = clock_rate / ADC_CLK;
+ 
+ 	/* TSCADC_CLKDIV needs to be configured to the value minus 1 */
+diff --git a/drivers/misc/mic/scif/scif_api.c b/drivers/misc/mic/scif/scif_api.c
+index 7b2dddcdd46d..42f7a12894d6 100644
+--- a/drivers/misc/mic/scif/scif_api.c
++++ b/drivers/misc/mic/scif/scif_api.c
+@@ -370,11 +370,10 @@ int scif_bind(scif_epd_t epd, u16 pn)
+ 			goto scif_bind_exit;
+ 		}
+ 	} else {
+-		pn = scif_get_new_port();
+-		if (!pn) {
+-			ret = -ENOSPC;
++		ret = scif_get_new_port();
++		if (ret < 0)
+ 			goto scif_bind_exit;
+-		}
++		pn = ret;
+ 	}
+ 
+ 	ep->state = SCIFEP_BOUND;
+@@ -648,13 +647,12 @@ int __scif_connect(scif_epd_t epd, struct scif_port_id *dst, bool non_block)
+ 			err = -EISCONN;
+ 		break;
+ 	case SCIFEP_UNBOUND:
+-		ep->port.port = scif_get_new_port();
+-		if (!ep->port.port) {
+-			err = -ENOSPC;
+-		} else {
+-			ep->port.node = scif_info.nodeid;
+-			ep->conn_async_state = ASYNC_CONN_IDLE;
+-		}
++		err = scif_get_new_port();
++		if (err < 0)
++			break;
++		ep->port.port = err;
++		ep->port.node = scif_info.nodeid;
++		ep->conn_async_state = ASYNC_CONN_IDLE;
+ 		/* Fall through */
+ 	case SCIFEP_BOUND:
+ 		/*
+diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c
+index 5ec3f5a43718..14a5e9da32bd 100644
+--- a/drivers/misc/ti-st/st_kim.c
++++ b/drivers/misc/ti-st/st_kim.c
+@@ -756,14 +756,14 @@ static int kim_probe(struct platform_device *pdev)
+ 	err = gpio_request(kim_gdata->nshutdown, "kim");
+ 	if (unlikely(err)) {
+ 		pr_err(" gpio %d request failed ", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 
+ 	/* Configure nShutdown GPIO as output=0 */
+ 	err = gpio_direction_output(kim_gdata->nshutdown, 0);
+ 	if (unlikely(err)) {
+ 		pr_err(" unable to configure gpio %d", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 	/* get reference of pdev for request_firmware
+ 	 */
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index b01d15ec4c56..3e3e6a8f1abc 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -2668,8 +2668,8 @@ static bool nand_subop_instr_is_valid(const struct nand_subop *subop,
+ 	return subop && instr_idx < subop->ninstrs;
+ }
+ 
+-static int nand_subop_get_start_off(const struct nand_subop *subop,
+-				    unsigned int instr_idx)
++static unsigned int nand_subop_get_start_off(const struct nand_subop *subop,
++					     unsigned int instr_idx)
+ {
+ 	if (instr_idx)
+ 		return 0;
+@@ -2688,12 +2688,12 @@ static int nand_subop_get_start_off(const struct nand_subop *subop,
+  *
+  * Given an address instruction, returns the offset of the first cycle to issue.
+  */
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2710,14 +2710,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_addr_start_off);
+  *
+  * Given an address instruction, returns the number of address cycle to issue.
+  */
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int instr_idx)
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int instr_idx)
+ {
+ 	int start_off, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	start_off = nand_subop_get_addr_start_off(subop, instr_idx);
+ 
+@@ -2742,12 +2742,12 @@ EXPORT_SYMBOL_GPL(nand_subop_get_num_addr_cyc);
+  *
+  * Given a data instruction, returns the offset to start from.
+  */
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2764,14 +2764,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_data_start_off);
+  *
+  * Returns the length of the chunk of data to send/receive.
+  */
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int instr_idx)
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int instr_idx)
+ {
+ 	int start_off = 0, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	start_off = nand_subop_get_data_start_off(subop, instr_idx);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 82ac1d10f239..b4253d0e056b 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3196,7 +3196,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
+ 
+ 	on_each_cpu(mvneta_percpu_enable, pp, true);
+ 	mvneta_start_dev(pp);
+-	mvneta_port_up(pp);
+ 
+ 	netdev_update_features(dev);
+ 
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0c5b68e7da51..9b3167054843 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -22,7 +22,7 @@
+ #include <linux/mdio-mux.h>
+ #include <linux/delay.h>
+ 
+-#define MDIO_PARAM_OFFSET		0x00
++#define MDIO_PARAM_OFFSET		0x23c
+ #define MDIO_PARAM_MIIM_CYCLE		29
+ #define MDIO_PARAM_INTERNAL_SEL		25
+ #define MDIO_PARAM_BUS_ID		22
+@@ -30,20 +30,22 @@
+ #define MDIO_PARAM_PHY_ID		16
+ #define MDIO_PARAM_PHY_DATA		0
+ 
+-#define MDIO_READ_OFFSET		0x04
++#define MDIO_READ_OFFSET		0x240
+ #define MDIO_READ_DATA_MASK		0xffff
+-#define MDIO_ADDR_OFFSET		0x08
++#define MDIO_ADDR_OFFSET		0x244
+ 
+-#define MDIO_CTRL_OFFSET		0x0C
++#define MDIO_CTRL_OFFSET		0x248
+ #define MDIO_CTRL_WRITE_OP		0x1
+ #define MDIO_CTRL_READ_OP		0x2
+ 
+-#define MDIO_STAT_OFFSET		0x10
++#define MDIO_STAT_OFFSET		0x24c
+ #define MDIO_STAT_DONE			1
+ 
+ #define BUS_MAX_ADDR			32
+ #define EXT_BUS_START_ADDR		16
+ 
++#define MDIO_REG_ADDR_SPACE_SIZE	0x250
++
+ struct iproc_mdiomux_desc {
+ 	void *mux_handle;
+ 	void __iomem *base;
+@@ -169,6 +171,14 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ 	md->dev = &pdev->dev;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (res->start & 0xfff) {
++		/* For backward compatibility in case the
++		 * base address is specified with an offset.
++		 */
++		dev_info(&pdev->dev, "fix base address in dt-blob\n");
++		res->start &= ~0xfff;
++		res->end = res->start + MDIO_REG_ADDR_SPACE_SIZE - 1;
++	}
+ 	md->base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(md->base)) {
+ 		dev_err(&pdev->dev, "failed to ioremap register\n");
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 836e0a47b94a..747c6951b5c1 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3085,6 +3085,13 @@ static int ath10k_update_channel_list(struct ath10k *ar)
+ 			passive = channel->flags & IEEE80211_CHAN_NO_IR;
+ 			ch->passive = passive;
+ 
++			/* the firmware is ignoring the "radar" flag of the
++			 * channel and is scanning actively using Probe Requests
++			 * on "Radar detection"/DFS channels which are not
++			 * marked as "available"
++			 */
++			ch->passive |= ch->chan_radar;
++
+ 			ch->freq = channel->center_freq;
+ 			ch->band_center_freq1 = channel->center_freq;
+ 			ch->min_power = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 8c49a26fc571..21eb3a598a86 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1584,6 +1584,11 @@ static struct sk_buff *ath10k_wmi_tlv_op_gen_init(struct ath10k *ar)
+ 	cfg->keep_alive_pattern_size = __cpu_to_le32(0);
+ 	cfg->max_tdls_concurrent_sleep_sta = __cpu_to_le32(1);
+ 	cfg->max_tdls_concurrent_buffer_sta = __cpu_to_le32(1);
++	cfg->wmi_send_separate = __cpu_to_le32(0);
++	cfg->num_ocb_vdevs = __cpu_to_le32(0);
++	cfg->num_ocb_channels = __cpu_to_le32(0);
++	cfg->num_ocb_schedules = __cpu_to_le32(0);
++	cfg->host_capab = __cpu_to_le32(0);
+ 
+ 	ath10k_wmi_put_host_mem_chunks(ar, chunks);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+index 3e1e340cd834..1cb93d09b8a9 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+@@ -1670,6 +1670,11 @@ struct wmi_tlv_resource_config {
+ 	__le32 keep_alive_pattern_size;
+ 	__le32 max_tdls_concurrent_sleep_sta;
+ 	__le32 max_tdls_concurrent_buffer_sta;
++	__le32 wmi_send_separate;
++	__le32 num_ocb_vdevs;
++	__le32 num_ocb_channels;
++	__le32 num_ocb_schedules;
++	__le32 host_capab;
+ } __packed;
+ 
+ struct wmi_tlv_init_cmd {
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index e60bea4604e4..fcd9d5eeae72 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -2942,16 +2942,19 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
+ 	struct ath_regulatory *reg = ath9k_hw_regulatory(ah);
+ 	struct ieee80211_channel *channel;
+ 	int chan_pwr, new_pwr;
++	u16 ctl = NO_CTL;
+ 
+ 	if (!chan)
+ 		return;
+ 
++	if (!test)
++		ctl = ath9k_regd_get_ctl(reg, chan);
++
+ 	channel = chan->chan;
+ 	chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
+ 	new_pwr = min_t(int, chan_pwr, reg->power_limit);
+ 
+-	ah->eep_ops->set_txpower(ah, chan,
+-				 ath9k_regd_get_ctl(reg, chan),
++	ah->eep_ops->set_txpower(ah, chan, ctl,
+ 				 get_antenna_gain(ah, chan), new_pwr, test);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 7fdb152be0bb..a249ee747dc9 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -86,7 +86,8 @@ static void ath_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 	struct ieee80211_sta *sta = info->status.status_driver_data[0];
+ 
+-	if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++	if (info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
++			   IEEE80211_TX_STATUS_EOSP)) {
+ 		ieee80211_tx_status(hw, skb);
+ 		return;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 8520523b91b4..d8d8443c1c93 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1003,6 +1003,10 @@ static int iwl_pci_resume(struct device *device)
+ 	if (!trans->op_mode)
+ 		return 0;
+ 
++	/* In WOWLAN, let iwl_trans_pcie_d3_resume do the rest of the work */
++	if (test_bit(STATUS_DEVICE_ENABLED, &trans->status))
++		return 0;
++
+ 	/* reconfigure the MSI-X mapping to get the correct IRQ for rfkill */
+ 	iwl_pcie_conf_msix_hw(trans_pcie);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 7229991ae70d..a2a98087eb41 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1539,18 +1539,6 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 
+ 	iwl_pcie_enable_rx_wake(trans, true);
+ 
+-	/*
+-	 * Reconfigure IVAR table in case of MSIX or reset ict table in
+-	 * MSI mode since HW reset erased it.
+-	 * Also enables interrupts - none will happen as
+-	 * the device doesn't know we're waking it up, only when
+-	 * the opmode actually tells it after this call.
+-	 */
+-	iwl_pcie_conf_msix_hw(trans_pcie);
+-	if (!trans_pcie->msix_enabled)
+-		iwl_pcie_reset_ict(trans);
+-	iwl_enable_interrupts(trans);
+-
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+ 		    BIT(trans->cfg->csr->flag_mac_access_req));
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+@@ -1568,6 +1556,18 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Reconfigure IVAR table in case of MSIX or reset ict table in
++	 * MSI mode since HW reset erased it.
++	 * Also enables interrupts - none will happen as
++	 * the device doesn't know we're waking it up, only when
++	 * the opmode actually tells it after this call.
++	 */
++	iwl_pcie_conf_msix_hw(trans_pcie);
++	if (!trans_pcie->msix_enabled)
++		iwl_pcie_reset_ict(trans);
++	iwl_enable_interrupts(trans);
++
+ 	iwl_pcie_set_pwr(trans, false);
+ 
+ 	if (!reset) {
+diff --git a/drivers/net/wireless/ti/wlcore/rx.c b/drivers/net/wireless/ti/wlcore/rx.c
+index 0f15696195f8..078a4940bc5c 100644
+--- a/drivers/net/wireless/ti/wlcore/rx.c
++++ b/drivers/net/wireless/ti/wlcore/rx.c
+@@ -59,7 +59,7 @@ static u32 wlcore_rx_get_align_buf_size(struct wl1271 *wl, u32 pkt_len)
+ static void wl1271_rx_status(struct wl1271 *wl,
+ 			     struct wl1271_rx_descriptor *desc,
+ 			     struct ieee80211_rx_status *status,
+-			     u8 beacon)
++			     u8 beacon, u8 probe_rsp)
+ {
+ 	memset(status, 0, sizeof(struct ieee80211_rx_status));
+ 
+@@ -106,6 +106,9 @@ static void wl1271_rx_status(struct wl1271 *wl,
+ 		}
+ 	}
+ 
++	if (beacon || probe_rsp)
++		status->boottime_ns = ktime_get_boot_ns();
++
+ 	if (beacon)
+ 		wlcore_set_pending_regdomain_ch(wl, (u16)desc->channel,
+ 						status->band);
+@@ -191,7 +194,8 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length,
+ 	if (ieee80211_is_data_present(hdr->frame_control))
+ 		is_data = 1;
+ 
+-	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon);
++	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon,
++			 ieee80211_is_probe_resp(hdr->frame_control));
+ 	wlcore_hw_set_rx_csum(wl, desc, skb);
+ 
+ 	seq_num = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
+diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
+index cf0aa7cee5b0..a939e8d31735 100644
+--- a/drivers/pci/controller/pcie-mobiveil.c
++++ b/drivers/pci/controller/pcie-mobiveil.c
+@@ -23,6 +23,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ 
++#include "../pci.h"
++
+ /* register offsets and bit positions */
+ 
+ /*
+@@ -130,7 +132,7 @@ struct mobiveil_pcie {
+ 	void __iomem *config_axi_slave_base;	/* endpoint config base */
+ 	void __iomem *csr_axi_slave_base;	/* root port config base */
+ 	void __iomem *apb_csr_base;	/* MSI register base */
+-	void __iomem *pcie_reg_base;	/* Physical PCIe Controller Base */
++	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
+ 	struct irq_domain *intx_domain;
+ 	raw_spinlock_t intx_mask_lock;
+ 	int irq;
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 47cd0c037433..f96af1467984 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -14,6 +14,8 @@
+ #include <linux/poll.h>
+ #include <linux/wait.h>
+ 
++#include <linux/nospec.h>
++
+ MODULE_DESCRIPTION("Microsemi Switchtec(tm) PCIe Management Driver");
+ MODULE_VERSION("0.1");
+ MODULE_LICENSE("GPL");
+@@ -909,6 +911,8 @@ static int ioctl_port_to_pff(struct switchtec_dev *stdev,
+ 	default:
+ 		if (p.port > ARRAY_SIZE(pcfg->dsp_pff_inst_id))
+ 			return -EINVAL;
++		p.port = array_index_nospec(p.port,
++					ARRAY_SIZE(pcfg->dsp_pff_inst_id) + 1);
+ 		p.pff = ioread32(&pcfg->dsp_pff_inst_id[p.port - 1]);
+ 		break;
+ 	}
+diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c
+index d6d183e9db17..b5903fffb3d0 100644
+--- a/drivers/pinctrl/berlin/berlin.c
++++ b/drivers/pinctrl/berlin/berlin.c
+@@ -216,10 +216,8 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 	}
+ 
+ 	/* we will reallocate later */
+-	pctrl->functions = devm_kcalloc(&pdev->dev,
+-					max_functions,
+-					sizeof(*pctrl->functions),
+-					GFP_KERNEL);
++	pctrl->functions = kcalloc(max_functions,
++				   sizeof(*pctrl->functions), GFP_KERNEL);
+ 	if (!pctrl->functions)
+ 		return -ENOMEM;
+ 
+@@ -257,8 +255,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 				function++;
+ 			}
+ 
+-			if (!found)
++			if (!found) {
++				kfree(pctrl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!function->groups) {
+ 				function->groups =
+@@ -267,8 +267,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 						     sizeof(char *),
+ 						     GFP_KERNEL);
+ 
+-				if (!function->groups)
++				if (!function->groups) {
++					kfree(pctrl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			groups = function->groups;
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 1c6bb15579e1..b04edc22dad7 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -383,7 +383,7 @@ static void imx_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > pctldev->num_groups)
++	if (group >= pctldev->num_groups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 04ae139671c8..b91db89eb924 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -552,7 +552,8 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+ 		/* Each status bit covers four pins */
+ 		for (i = 0; i < 4; i++) {
+ 			regval = readl(regs + i);
+-			if (!(regval & PIN_IRQ_PENDING))
++			if (!(regval & PIN_IRQ_PENDING) ||
++			    !(regval & BIT(INTERRUPT_MASK_OFF)))
+ 				continue;
+ 			irq = irq_find_mapping(gc->irq.domain, irqnr + i);
+ 			generic_handle_irq(irq);
+diff --git a/drivers/regulator/tps65217-regulator.c b/drivers/regulator/tps65217-regulator.c
+index fc12badf3805..d84fab616abf 100644
+--- a/drivers/regulator/tps65217-regulator.c
++++ b/drivers/regulator/tps65217-regulator.c
+@@ -232,6 +232,8 @@ static int tps65217_regulator_probe(struct platform_device *pdev)
+ 	tps->strobes = devm_kcalloc(&pdev->dev,
+ 				    TPS65217_NUM_REGULATOR, sizeof(u8),
+ 				    GFP_KERNEL);
++	if (!tps->strobes)
++		return -ENOMEM;
+ 
+ 	platform_set_drvdata(pdev, tps);
+ 
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index b714a543a91d..8122807db380 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,6 +15,7 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
++#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -449,6 +450,10 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
++	err = dev_pm_domain_attach(dev, true);
++	if (err)
++		goto out;
++
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -490,6 +495,8 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
++	dev_pm_domain_detach(dev, true);
++
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 99ba4a770406..27521fc3ef5a 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2038,6 +2038,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twa_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x25, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2060,6 +2061,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = ioremap(mem_addr, mem_len);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x35, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -2067,8 +2069,10 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (twa_reset_sequence(tw_dev, 0))
++	if (twa_reset_sequence(tw_dev, 0)) {
++		retval = -ENOMEM;
+ 		goto out_iounmap;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	if ((pdev->device == PCI_DEVICE_ID_3WARE_9650SE) ||
+diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
+index cf9f2a09b47d..40c1e6e64f58 100644
+--- a/drivers/scsi/3w-sas.c
++++ b/drivers/scsi/3w-sas.c
+@@ -1594,6 +1594,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twl_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1a, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -1608,6 +1609,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_iomap(pdev, 1, 0);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -1617,6 +1619,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	/* Initialize the card */
+ 	if (twl_reset_sequence(tw_dev, 0)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Controller reset failed during probe");
++		retval = -ENOMEM;
+ 		goto out_iounmap;
+ 	}
+ 
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index f6179e3d6953..961ea6f7def8 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2280,6 +2280,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (tw_initialize_device_extension(tw_dev)) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to initialize device extension.");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2294,6 +2295,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_resource_start(pdev, 0);
+ 	if (!tw_dev->base_addr) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to get io address.");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 20b249a649dd..902004dc8dc7 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -672,7 +672,7 @@ struct lpfc_hba {
+ #define LS_NPIV_FAB_SUPPORTED 0x2	/* Fabric supports NPIV */
+ #define LS_IGNORE_ERATT       0x4	/* intr handler should ignore ERATT */
+ #define LS_MDS_LINK_DOWN      0x8	/* MDS Diagnostics Link Down */
+-#define LS_MDS_LOOPBACK      0x16	/* MDS Diagnostics Link Up (Loopback) */
++#define LS_MDS_LOOPBACK      0x10	/* MDS Diagnostics Link Up (Loopback) */
+ 
+ 	uint32_t hba_flag;	/* hba generic flags */
+ #define HBA_ERATT_HANDLED	0x1 /* This flag is set when eratt handled */
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 76a5a99605aa..d723fd1d7b26 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2687,7 +2687,7 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct lpfc_nvme_rport *oldrport;
+ 	struct nvme_fc_remote_port *remote_port;
+ 	struct nvme_fc_port_info rpinfo;
+-	struct lpfc_nodelist *prev_ndlp;
++	struct lpfc_nodelist *prev_ndlp = NULL;
+ 
+ 	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6006 Register NVME PORT. DID x%06x nlptype x%x\n",
+@@ -2736,23 +2736,29 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		spin_unlock_irq(&vport->phba->hbalock);
+ 		rport = remote_port->private;
+ 		if (oldrport) {
++			/* New remoteport record does not guarantee valid
++			 * host private memory area.
++			 */
++			prev_ndlp = oldrport->ndlp;
+ 			if (oldrport == remote_port->private) {
+-				/* Same remoteport.  Just reuse. */
++				/* Same remoteport - ndlp should match.
++				 * Just reuse.
++				 */
+ 				lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+ 						 LOG_NVME_DISC,
+ 						 "6014 Rebinding lport to "
+ 						 "remoteport %p wwpn 0x%llx, "
+-						 "Data: x%x x%x %p x%x x%06x\n",
++						 "Data: x%x x%x %p %p x%x x%06x\n",
+ 						 remote_port,
+ 						 remote_port->port_name,
+ 						 remote_port->port_id,
+ 						 remote_port->port_role,
++						 prev_ndlp,
+ 						 ndlp,
+ 						 ndlp->nlp_type,
+ 						 ndlp->nlp_DID);
+ 				return 0;
+ 			}
+-			prev_ndlp = rport->ndlp;
+ 
+ 			/* Sever the ndlp<->rport association
+ 			 * before dropping the ndlp ref from
+@@ -2786,13 +2792,13 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		lpfc_printf_vlog(vport, KERN_INFO,
+ 				 LOG_NVME_DISC | LOG_NODE,
+ 				 "6022 Binding new rport to "
+-				 "lport %p Remoteport %p  WWNN 0x%llx, "
++				 "lport %p Remoteport %p rport %p WWNN 0x%llx, "
+ 				 "Rport WWPN 0x%llx DID "
+-				 "x%06x Role x%x, ndlp %p\n",
+-				 lport, remote_port,
++				 "x%06x Role x%x, ndlp %p prev_ndlp %p\n",
++				 lport, remote_port, rport,
+ 				 rpinfo.node_name, rpinfo.port_name,
+ 				 rpinfo.port_id, rpinfo.port_role,
+-				 ndlp);
++				 ndlp, prev_ndlp);
+ 	} else {
+ 		lpfc_printf_vlog(vport, KERN_ERR,
+ 				 LOG_NVME_DISC | LOG_NODE,
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index ec550ee0108e..75d34def2361 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1074,9 +1074,12 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case PDS_PLOGI_COMPLETE:
+ 	case PDS_PRLI_PENDING:
+ 	case PDS_PRLI2_PENDING:
+-		ql_dbg(ql_dbg_disc, vha, 0x20d5, "%s %d %8phC relogin needed\n",
+-		    __func__, __LINE__, fcport->port_name);
+-		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		/* Set discovery state back to GNL for a Relogin attempt */
++		if (qla_dual_mode_enabled(vha) ||
++		    qla_ini_mode_enabled(vha)) {
++			fcport->disc_state = DSC_GNL;
++			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		}
+ 		return;
+ 	case PDS_LOGO_PENDING:
+ 	case PDS_PORT_UNAVAILABLE:
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 1027b0cb7fa3..6dc1b1bd8069 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -982,8 +982,9 @@ void qlt_free_session_done(struct work_struct *work)
+ 
+ 			logo.id = sess->d_id;
+ 			logo.cmd_count = 0;
++			if (!own)
++				qlt_send_first_logo(vha, &logo);
+ 			sess->send_els_logo = 0;
+-			qlt_send_first_logo(vha, &logo);
+ 		}
+ 
+ 		if (sess->logout_on_delete && sess->loop_id != FC_NO_LOOP_ID) {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 731ca0d8520a..9f3c263756a8 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -571,6 +571,15 @@ qla27xx_fwdt_entry_t268(struct scsi_qla_host *vha,
+ 		}
+ 		break;
+ 
++	case T268_BUF_TYPE_REQ_MIRROR:
++	case T268_BUF_TYPE_RSP_MIRROR:
++		/*
++		 * Mirror pointers are not implemented in the
++		 * driver, instead shadow pointers are used by
++		 * the driver. Skip these entries.
++		 */
++		qla27xx_skip_entry(ent, buf);
++		break;
+ 	default:
+ 		ql_dbg(ql_dbg_async, vha, 0xd02b,
+ 		    "%s: unknown buffer %x\n", __func__, ent->t268.buf_type);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ee5081ba5313..1fc87a3260cc 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -316,6 +316,7 @@ void __transport_register_session(
+ {
+ 	const struct target_core_fabric_ops *tfo = se_tpg->se_tpg_tfo;
+ 	unsigned char buf[PR_REG_ISID_LEN];
++	unsigned long flags;
+ 
+ 	se_sess->se_tpg = se_tpg;
+ 	se_sess->fabric_sess_ptr = fabric_sess_ptr;
+@@ -352,7 +353,7 @@ void __transport_register_session(
+ 			se_sess->sess_bin_isid = get_unaligned_be64(&buf[0]);
+ 		}
+ 
+-		spin_lock_irq(&se_nacl->nacl_sess_lock);
++		spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags);
+ 		/*
+ 		 * The se_nacl->nacl_sess pointer will be set to the
+ 		 * last active I_T Nexus for each struct se_node_acl.
+@@ -361,7 +362,7 @@ void __transport_register_session(
+ 
+ 		list_add_tail(&se_sess->sess_acl_list,
+ 			      &se_nacl->acl_sess_list);
+-		spin_unlock_irq(&se_nacl->nacl_sess_lock);
++		spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags);
+ 	}
+ 	list_add_tail(&se_sess->sess_list, &se_tpg->tpg_sess_list);
+ 
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index d8dc3d22051f..b8dc5efc606b 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1745,9 +1745,11 @@ static int tcmu_configure_device(struct se_device *dev)
+ 
+ 	info = &udev->uio_info;
+ 
++	mutex_lock(&udev->cmdr_lock);
+ 	udev->data_bitmap = kcalloc(BITS_TO_LONGS(udev->max_blocks),
+ 				    sizeof(unsigned long),
+ 				    GFP_KERNEL);
++	mutex_unlock(&udev->cmdr_lock);
+ 	if (!udev->data_bitmap) {
+ 		ret = -ENOMEM;
+ 		goto err_bitmap_alloc;
+@@ -1957,7 +1959,7 @@ static match_table_t tokens = {
+ 	{Opt_hw_block_size, "hw_block_size=%u"},
+ 	{Opt_hw_max_sectors, "hw_max_sectors=%u"},
+ 	{Opt_nl_reply_supported, "nl_reply_supported=%d"},
+-	{Opt_max_data_area_mb, "max_data_area_mb=%u"},
++	{Opt_max_data_area_mb, "max_data_area_mb=%d"},
+ 	{Opt_err, NULL}
+ };
+ 
+@@ -1985,13 +1987,48 @@ static int tcmu_set_dev_attrib(substring_t *arg, u32 *dev_attrib)
+ 	return 0;
+ }
+ 
++static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
++{
++	int val, ret;
++
++	ret = match_int(arg, &val);
++	if (ret < 0) {
++		pr_err("match_int() failed for max_data_area_mb=. Error %d.\n",
++		       ret);
++		return ret;
++	}
++
++	if (val <= 0) {
++		pr_err("Invalid max_data_area %d.\n", val);
++		return -EINVAL;
++	}
++
++	mutex_lock(&udev->cmdr_lock);
++	if (udev->data_bitmap) {
++		pr_err("Cannot set max_data_area_mb after it has been enabled.\n");
++		ret = -EINVAL;
++		goto unlock;
++	}
++
++	udev->max_blocks = TCMU_MBS_TO_BLOCKS(val);
++	if (udev->max_blocks > tcmu_global_max_blocks) {
++		pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
++		       val, TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
++		udev->max_blocks = tcmu_global_max_blocks;
++	}
++
++unlock:
++	mutex_unlock(&udev->cmdr_lock);
++	return ret;
++}
++
+ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 		const char *page, ssize_t count)
+ {
+ 	struct tcmu_dev *udev = TCMU_DEV(dev);
+ 	char *orig, *ptr, *opts, *arg_p;
+ 	substring_t args[MAX_OPT_ARGS];
+-	int ret = 0, token, tmpval;
++	int ret = 0, token;
+ 
+ 	opts = kstrdup(page, GFP_KERNEL);
+ 	if (!opts)
+@@ -2044,37 +2081,7 @@ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 				pr_err("kstrtoint() failed for nl_reply_supported=\n");
+ 			break;
+ 		case Opt_max_data_area_mb:
+-			if (dev->export_count) {
+-				pr_err("Unable to set max_data_area_mb while exports exist\n");
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			arg_p = match_strdup(&args[0]);
+-			if (!arg_p) {
+-				ret = -ENOMEM;
+-				break;
+-			}
+-			ret = kstrtoint(arg_p, 0, &tmpval);
+-			kfree(arg_p);
+-			if (ret < 0) {
+-				pr_err("kstrtoint() failed for max_data_area_mb=\n");
+-				break;
+-			}
+-
+-			if (tmpval <= 0) {
+-				pr_err("Invalid max_data_area %d\n", tmpval);
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			udev->max_blocks = TCMU_MBS_TO_BLOCKS(tmpval);
+-			if (udev->max_blocks > tcmu_global_max_blocks) {
+-				pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
+-				       tmpval,
+-				       TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
+-				udev->max_blocks = tcmu_global_max_blocks;
+-			}
++			ret = tcmu_set_max_blocks_param(udev, &args[0]);
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index 45fb284d4c11..e77e63070e99 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -598,7 +598,7 @@ static int rcar_thermal_probe(struct platform_device *pdev)
+ 			enr_bits |= 3 << (i * 8);
+ 	}
+ 
+-	if (enr_bits)
++	if (common->base && enr_bits)
+ 		rcar_thermal_common_write(common, ENR, enr_bits);
+ 
+ 	dev_info(dev, "%d sensor probed\n", i);
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index 11278836ed12..0bd47007c57f 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -142,6 +142,7 @@ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
+ 
+ 	INIT_LIST_HEAD(&hwmon->tz_list);
+ 	strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH);
++	strreplace(hwmon->type, '-', '_');
+ 	hwmon->device = hwmon_device_register_with_info(NULL, hwmon->type,
+ 							hwmon, NULL, NULL);
+ 	if (IS_ERR(hwmon->device)) {
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index bdd17d2aaafd..b121d8f8f3d7 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -1881,7 +1881,7 @@ static __init int register_PCI(int i, struct pci_dev *dev)
+ 	ByteIO_t UPCIRingInd = 0;
+ 
+ 	if (!dev || !pci_match_id(rocket_pci_ids, dev) ||
+-	    pci_enable_device(dev))
++	    pci_enable_device(dev) || i >= NUM_BOARDS)
+ 		return 0;
+ 
+ 	rcktpt_io_addr[i] = pci_resource_start(dev, 0);
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index f68c1121fa7c..6c58ad1abd7e 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -622,6 +622,12 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 	ssize_t retval;
+ 	s32 irq_on;
+ 
++	if (count != sizeof(s32))
++		return -EINVAL;
++
++	if (copy_from_user(&irq_on, buf, count))
++		return -EFAULT;
++
+ 	mutex_lock(&idev->info_lock);
+ 	if (!idev->info) {
+ 		retval = -EINVAL;
+@@ -633,21 +639,11 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 		goto out;
+ 	}
+ 
+-	if (count != sizeof(s32)) {
+-		retval = -EINVAL;
+-		goto out;
+-	}
+-
+ 	if (!idev->info->irqcontrol) {
+ 		retval = -ENOSYS;
+ 		goto out;
+ 	}
+ 
+-	if (copy_from_user(&irq_on, buf, count)) {
+-		retval = -EFAULT;
+-		goto out;
+-	}
+-
+ 	retval = idev->info->irqcontrol(idev->info, irq_on);
+ 
+ out:
+@@ -955,8 +951,6 @@ int __uio_register_device(struct module *owner,
+ 	if (ret)
+ 		goto err_uio_dev_add_attributes;
+ 
+-	info->uio_dev = idev;
+-
+ 	if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
+ 		/*
+ 		 * Note that we deliberately don't use devm_request_irq
+@@ -972,6 +966,7 @@ int __uio_register_device(struct module *owner,
+ 			goto err_request_irq;
+ 	}
+ 
++	info->uio_dev = idev;
+ 	return 0;
+ 
+ err_request_irq:
+diff --git a/fs/autofs/autofs_i.h b/fs/autofs/autofs_i.h
+index 9400a9f6318a..5057b9f0f846 100644
+--- a/fs/autofs/autofs_i.h
++++ b/fs/autofs/autofs_i.h
+@@ -26,6 +26,7 @@
+ #include <linux/list.h>
+ #include <linux/completion.h>
+ #include <linux/file.h>
++#include <linux/magic.h>
+ 
+ /* This is the range of ioctl() numbers we claim as ours */
+ #define AUTOFS_IOC_FIRST     AUTOFS_IOC_READY
+@@ -124,7 +125,8 @@ struct autofs_sb_info {
+ 
+ static inline struct autofs_sb_info *autofs_sbi(struct super_block *sb)
+ {
+-	return (struct autofs_sb_info *)(sb->s_fs_info);
++	return sb->s_magic != AUTOFS_SUPER_MAGIC ?
++		NULL : (struct autofs_sb_info *)(sb->s_fs_info);
+ }
+ 
+ static inline struct autofs_info *autofs_dentry_ino(struct dentry *dentry)
+diff --git a/fs/autofs/inode.c b/fs/autofs/inode.c
+index b51980fc274e..846c052569dd 100644
+--- a/fs/autofs/inode.c
++++ b/fs/autofs/inode.c
+@@ -10,7 +10,6 @@
+ #include <linux/seq_file.h>
+ #include <linux/pagemap.h>
+ #include <linux/parser.h>
+-#include <linux/magic.h>
+ 
+ #include "autofs_i.h"
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 53cac20650d8..4ab0bccfa281 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5935,7 +5935,7 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * root: the root of the parent directory
+  * rsv: block reservation
+  * items: the number of items that we need do reservation
+- * qgroup_reserved: used to return the reserved size in qgroup
++ * use_global_rsv: allow fallback to the global block reservation
+  *
+  * This function is used to reserve the space for snapshot/subvolume
+  * creation and deletion. Those operations are different with the
+@@ -5945,10 +5945,10 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * the space reservation mechanism in start_transaction().
+  */
+ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+-				     struct btrfs_block_rsv *rsv,
+-				     int items,
++				     struct btrfs_block_rsv *rsv, int items,
+ 				     bool use_global_rsv)
+ {
++	u64 qgroup_num_bytes = 0;
+ 	u64 num_bytes;
+ 	int ret;
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -5956,12 +5956,11 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 
+ 	if (test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
+ 		/* One for parent inode, two for dir entries */
+-		num_bytes = 3 * fs_info->nodesize;
+-		ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
++		qgroup_num_bytes = 3 * fs_info->nodesize;
++		ret = btrfs_qgroup_reserve_meta_prealloc(root,
++				qgroup_num_bytes, true);
+ 		if (ret)
+ 			return ret;
+-	} else {
+-		num_bytes = 0;
+ 	}
+ 
+ 	num_bytes = btrfs_calc_trans_metadata_size(fs_info, items);
+@@ -5973,8 +5972,8 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 	if (ret == -ENOSPC && use_global_rsv)
+ 		ret = btrfs_block_rsv_migrate(global_rsv, rsv, num_bytes, 1);
+ 
+-	if (ret && num_bytes)
+-		btrfs_qgroup_free_meta_prealloc(root, num_bytes);
++	if (ret && qgroup_num_bytes)
++		btrfs_qgroup_free_meta_prealloc(root, qgroup_num_bytes);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index b077544b5232..f3d6be0c657b 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3463,6 +3463,25 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,
+ 
+ 		same_lock_start = min_t(u64, loff, dst_loff);
+ 		same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start;
++	} else {
++		/*
++		 * If the source and destination inodes are different, the
++		 * source's range end offset matches the source's i_size, that
++		 * i_size is not a multiple of the sector size, and the
++		 * destination range does not go past the destination's i_size,
++		 * we must round down the length to the nearest sector size
++		 * multiple. If we don't do this adjustment we end up replacing
++		 * with zeroes the bytes in the range that starts at the
++		 * deduplication range's end offset and ends at the next sector
++		 * size multiple.
++		 */
++		if (loff + olen == i_size_read(src) &&
++		    dst_loff + len < i_size_read(dst)) {
++			const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;
++
++			len = round_down(i_size_read(src), sz) - loff;
++			olen = len;
++		}
+ 	}
+ 
+ again:
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 9d02563b2147..44043f809a3c 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2523,7 +2523,7 @@ cifs_setup_ipc(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	if (tcon == NULL)
+ 		return -ENOMEM;
+ 
+-	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->serverName);
++	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->server->hostname);
+ 
+ 	/* cannot fail */
+ 	nls_codepage = load_nls_default();
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 9051b9dfd590..d279fa5472db 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -469,6 +469,8 @@ cifs_sfu_type(struct cifs_fattr *fattr, const char *path,
+ 	oparms.cifs_sb = cifs_sb;
+ 	oparms.desired_access = GENERIC_READ;
+ 	oparms.create_options = CREATE_NOT_DIR;
++	if (backup_cred(cifs_sb))
++		oparms.create_options |= CREATE_OPEN_BACKUP_INTENT;
+ 	oparms.disposition = FILE_OPEN;
+ 	oparms.path = path;
+ 	oparms.fid = &fid;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ee6c4a952ce9..5ecbc99f46e4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -626,7 +626,10 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -775,7 +778,10 @@ smb2_query_eas(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -854,7 +860,10 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_WRITE_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1460,7 +1469,10 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1735,7 +1747,10 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -3463,7 +3478,7 @@ struct smb_version_values smb21_values = {
+ struct smb_version_values smb3any_values = {
+ 	.version_string = SMB3ANY_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3484,7 +3499,7 @@ struct smb_version_values smb3any_values = {
+ struct smb_version_values smbdefault_values = {
+ 	.version_string = SMBDEFAULT_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3505,7 +3520,7 @@ struct smb_version_values smbdefault_values = {
+ struct smb_version_values smb30_values = {
+ 	.version_string = SMB30_VERSION_STRING,
+ 	.protocol_id = SMB30_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3526,7 +3541,7 @@ struct smb_version_values smb30_values = {
+ struct smb_version_values smb302_values = {
+ 	.version_string = SMB302_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3548,7 +3563,7 @@ struct smb_version_values smb302_values = {
+ struct smb_version_values smb311_values = {
+ 	.version_string = SMB311_VERSION_STRING,
+ 	.protocol_id = SMB311_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 44e511a35559..82be1dfeca33 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2179,6 +2179,9 @@ SMB2_open(const unsigned int xid, struct cifs_open_parms *oparms, __le16 *path,
+ 	if (!(server->capabilities & SMB2_GLOBAL_CAP_LEASING) ||
+ 	    *oplock == SMB2_OPLOCK_LEVEL_NONE)
+ 		req->RequestedOplockLevel = *oplock;
++	else if (!(server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING) &&
++		  (oparms->create_options & CREATE_NOT_FILE))
++		req->RequestedOplockLevel = *oplock; /* no srv lease support */
+ 	else {
+ 		rc = add_lease_context(server, iov, &n_iov,
+ 				       oparms->fid->lease_key, oplock);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 4d8b1de83143..b6f2dc8163e1 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1680,18 +1680,20 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 		sbi->total_valid_block_count -= diff;
+ 		if (!*count) {
+ 			spin_unlock(&sbi->stat_lock);
+-			percpu_counter_sub(&sbi->alloc_valid_block_count, diff);
+ 			goto enospc;
+ 		}
+ 	}
+ 	spin_unlock(&sbi->stat_lock);
+ 
+-	if (unlikely(release))
++	if (unlikely(release)) {
++		percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 		dquot_release_reservation_block(inode, release);
++	}
+ 	f2fs_i_blocks_write(inode, *count, true, true);
+ 	return 0;
+ 
+ enospc:
++	percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 	dquot_release_reservation_block(inode, release);
+ 	return -ENOSPC;
+ }
+@@ -1954,8 +1956,13 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping,
+ 						pgoff_t index, bool for_write)
+ {
+ #ifdef CONFIG_F2FS_FAULT_INJECTION
+-	struct page *page = find_lock_page(mapping, index);
++	struct page *page;
+ 
++	if (!for_write)
++		page = find_get_page_flags(mapping, index,
++						FGP_LOCK | FGP_ACCESSED);
++	else
++		page = find_lock_page(mapping, index);
+ 	if (page)
+ 		return page;
+ 
+@@ -2812,7 +2819,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+ 			struct writeback_control *wbc,
+ 			bool do_balance, enum iostat_type io_type);
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
+ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+ void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 3ffa341cf586..4c9f9bcbd2d9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1882,7 +1882,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct super_block *sb = sbi->sb;
+ 	__u32 in;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9093be6e7a7d..37ab2d10a872 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -986,7 +986,13 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 			goto next;
+ 
+ 		sum = page_address(sum_page);
+-		f2fs_bug_on(sbi, type != GET_SUM_TYPE((&sum->footer)));
++		if (type != GET_SUM_TYPE((&sum->footer))) {
++			f2fs_msg(sbi->sb, KERN_ERR, "Inconsistent segment (%u) "
++				"type [%d, %d] in SSA and SIT",
++				segno, type, GET_SUM_TYPE((&sum->footer)));
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			goto next;
++		}
+ 
+ 		/*
+ 		 * this is to avoid deadlock:
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 043830be5662..2bcb2d36f024 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -130,6 +130,16 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
+ 	if (err)
+ 		return err;
+ 
++	if (unlikely(dn->data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(dn);
++		set_sbi_flag(fio.sbi, SBI_NEED_FSCK);
++		f2fs_msg(fio.sbi->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dn->inode->i_ino, dn->data_blkaddr);
++		return -EINVAL;
++	}
++
+ 	f2fs_bug_on(F2FS_P_SB(page), PageWriteback(page));
+ 
+ 	f2fs_do_read_inline_data(page, dn->inode_page);
+@@ -363,6 +373,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+ 	if (err)
+ 		goto out;
+ 
++	if (unlikely(dn.data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(&dn);
++		set_sbi_flag(F2FS_P_SB(page), SBI_NEED_FSCK);
++		f2fs_msg(F2FS_P_SB(page)->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dir->i_ino, dn.data_blkaddr);
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	f2fs_wait_on_page_writeback(page, DATA, true);
+ 
+ 	dentry_blk = page_address(page);
+@@ -477,6 +498,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
+ 	return 0;
+ recover:
+ 	lock_page(ipage);
++	f2fs_wait_on_page_writeback(ipage, NODE, true);
+ 	memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir));
+ 	f2fs_i_depth_write(dir, 0);
+ 	f2fs_i_size_write(dir, MAX_INLINE_DATA(dir));
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index f121c864f4c0..cf0f944fcaea 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -197,6 +197,16 @@ static bool sanity_check_inode(struct inode *inode)
+ 			__func__, inode->i_ino);
+ 		return false;
+ 	}
++
++	if (f2fs_has_extra_attr(inode) &&
++			!f2fs_sb_has_extra_attr(sbi->sb)) {
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		f2fs_msg(sbi->sb, KERN_WARNING,
++			"%s: inode (ino=%lx) is with extra_attr, "
++			"but extra_attr feature is off",
++			__func__, inode->i_ino);
++		return false;
++	}
+ 	return true;
+ }
+ 
+@@ -249,6 +259,11 @@ static int do_read_inode(struct inode *inode)
+ 
+ 	get_inline_info(inode, ri);
+ 
++	if (!sanity_check_inode(inode)) {
++		f2fs_put_page(node_page, 1);
++		return -EINVAL;
++	}
++
+ 	fi->i_extra_isize = f2fs_has_extra_attr(inode) ?
+ 					le16_to_cpu(ri->i_extra_isize) : 0;
+ 
+@@ -330,10 +345,6 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ 	ret = do_read_inode(inode);
+ 	if (ret)
+ 		goto bad_inode;
+-	if (!sanity_check_inode(inode)) {
+-		ret = -EINVAL;
+-		goto bad_inode;
+-	}
+ make_now:
+ 	if (ino == F2FS_NODE_INO(sbi)) {
+ 		inode->i_mapping->a_ops = &f2fs_node_aops;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 10643b11bd59..52ed02b0327c 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1633,7 +1633,9 @@ next_step:
+ 						!is_cold_node(page)))
+ 				continue;
+ lock_node:
+-			if (!trylock_page(page))
++			if (wbc->sync_mode == WB_SYNC_ALL)
++				lock_page(page);
++			else if (!trylock_page(page))
+ 				continue;
+ 
+ 			if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
+@@ -1968,7 +1970,7 @@ static void remove_free_nid(struct f2fs_sb_info *sbi, nid_t nid)
+ 		kmem_cache_free(free_nid_slab, i);
+ }
+ 
+-static void scan_nat_page(struct f2fs_sb_info *sbi,
++static int scan_nat_page(struct f2fs_sb_info *sbi,
+ 			struct page *nat_page, nid_t start_nid)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -1986,7 +1988,10 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			break;
+ 
+ 		blk_addr = le32_to_cpu(nat_blk->entries[i].block_addr);
+-		f2fs_bug_on(sbi, blk_addr == NEW_ADDR);
++
++		if (blk_addr == NEW_ADDR)
++			return -EINVAL;
++
+ 		if (blk_addr == NULL_ADDR) {
+ 			add_free_nid(sbi, start_nid, true, true);
+ 		} else {
+@@ -1995,6 +2000,8 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			spin_unlock(&NM_I(sbi)->nid_list_lock);
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+ static void scan_curseg_cache(struct f2fs_sb_info *sbi)
+@@ -2050,11 +2057,11 @@ out:
+ 	up_read(&nm_i->nat_tree_lock);
+ }
+ 
+-static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
++static int __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						bool sync, bool mount)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+-	int i = 0;
++	int i = 0, ret;
+ 	nid_t nid = nm_i->next_scan_nid;
+ 
+ 	if (unlikely(nid >= nm_i->max_nid))
+@@ -2062,17 +2069,17 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	/* Enough entries */
+ 	if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-		return;
++		return 0;
+ 
+ 	if (!sync && !f2fs_available_free_memory(sbi, FREE_NIDS))
+-		return;
++		return 0;
+ 
+ 	if (!mount) {
+ 		/* try to find free nids in free_nid_bitmap */
+ 		scan_free_nid_bits(sbi);
+ 
+ 		if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-			return;
++			return 0;
+ 	}
+ 
+ 	/* readahead nat pages to be scanned */
+@@ -2086,8 +2093,16 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						nm_i->nat_block_bitmap)) {
+ 			struct page *page = get_current_nat_page(sbi, nid);
+ 
+-			scan_nat_page(sbi, page, nid);
++			ret = scan_nat_page(sbi, page, nid);
+ 			f2fs_put_page(page, 1);
++
++			if (ret) {
++				up_read(&nm_i->nat_tree_lock);
++				f2fs_bug_on(sbi, !mount);
++				f2fs_msg(sbi->sb, KERN_ERR,
++					"NAT is corrupt, run fsck to fix it");
++				return -EINVAL;
++			}
+ 		}
+ 
+ 		nid += (NAT_ENTRY_PER_BLOCK - (nid % NAT_ENTRY_PER_BLOCK));
+@@ -2108,13 +2123,19 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
+ 					nm_i->ra_nid_pages, META_NAT, false);
++
++	return 0;
+ }
+ 
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
+ {
++	int ret;
++
+ 	mutex_lock(&NM_I(sbi)->build_lock);
+-	__f2fs_build_free_nids(sbi, sync, mount);
++	ret = __f2fs_build_free_nids(sbi, sync, mount);
+ 	mutex_unlock(&NM_I(sbi)->build_lock);
++
++	return ret;
+ }
+ 
+ /*
+@@ -2801,8 +2822,7 @@ int f2fs_build_node_manager(struct f2fs_sb_info *sbi)
+ 	/* load free nid status from nat_bits table */
+ 	load_free_nid_bitmap(sbi);
+ 
+-	f2fs_build_free_nids(sbi, true, true);
+-	return 0;
++	return f2fs_build_free_nids(sbi, true, true);
+ }
+ 
+ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 38f25f0b193a..ad70e62c5da4 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -241,8 +241,8 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 	struct page *page = NULL;
+ 	block_t blkaddr;
+ 	unsigned int loop_cnt = 0;
+-	unsigned int free_blocks = sbi->user_block_count -
+-					valid_user_blocks(sbi);
++	unsigned int free_blocks = MAIN_SEGS(sbi) * sbi->blocks_per_seg -
++						valid_user_blocks(sbi);
+ 	int err = 0;
+ 
+ 	/* get node pages in the current segment */
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9efce174c51a..43fecd5eb252 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1643,21 +1643,30 @@ void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
+ 	unsigned int start = 0, end = -1;
+ 	unsigned int secno, start_segno;
+ 	bool force = (cpc->reason & CP_DISCARD);
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	mutex_lock(&dirty_i->seglist_lock);
+ 
+ 	while (1) {
+ 		int i;
++
++		if (need_align && end != -1)
++			end--;
+ 		start = find_next_bit(prefree_map, MAIN_SEGS(sbi), end + 1);
+ 		if (start >= MAIN_SEGS(sbi))
+ 			break;
+ 		end = find_next_zero_bit(prefree_map, MAIN_SEGS(sbi),
+ 								start + 1);
+ 
+-		for (i = start; i < end; i++)
+-			clear_bit(i, prefree_map);
++		if (need_align) {
++			start = rounddown(start, sbi->segs_per_sec);
++			end = roundup(end, sbi->segs_per_sec);
++		}
+ 
+-		dirty_i->nr_dirty[PRE] -= end - start;
++		for (i = start; i < end; i++) {
++			if (test_and_clear_bit(i, prefree_map))
++				dirty_i->nr_dirty[PRE]--;
++		}
+ 
+ 		if (!test_opt(sbi, DISCARD))
+ 			continue;
+@@ -2437,6 +2446,7 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	struct discard_policy dpolicy;
+ 	unsigned long long trimmed = 0;
+ 	int err = 0;
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	if (start >= MAX_BLKADDR(sbi) || range->len < sbi->blocksize)
+ 		return -EINVAL;
+@@ -2454,6 +2464,10 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	start_segno = (start <= MAIN_BLKADDR(sbi)) ? 0 : GET_SEGNO(sbi, start);
+ 	end_segno = (end >= MAX_BLKADDR(sbi)) ? MAIN_SEGS(sbi) - 1 :
+ 						GET_SEGNO(sbi, end);
++	if (need_align) {
++		start_segno = rounddown(start_segno, sbi->segs_per_sec);
++		end_segno = roundup(end_segno + 1, sbi->segs_per_sec) - 1;
++	}
+ 
+ 	cpc.reason = CP_DISCARD;
+ 	cpc.trim_minlen = max_t(__u64, 1, F2FS_BYTES_TO_BLK(range->minlen));
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index f18fc82fbe99..38c549d77a80 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -448,6 +448,8 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 	if (test_and_clear_bit(segno, free_i->free_segmap)) {
+ 		free_i->free_segments++;
+ 
++		if (IS_CURSEC(sbi, secno))
++			goto skip_free;
+ 		next = find_next_bit(free_i->free_segmap,
+ 				start_segno + sbi->segs_per_sec, start_segno);
+ 		if (next >= start_segno + sbi->segs_per_sec) {
+@@ -455,6 +457,7 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 				free_i->free_sections++;
+ 		}
+ 	}
++skip_free:
+ 	spin_unlock(&free_i->segmap_lock);
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 3995e926ba3a..128d489acebb 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2229,9 +2229,9 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return 1;
+ 	}
+ 
+-	if (secs_per_zone > total_sections) {
++	if (secs_per_zone > total_sections || !secs_per_zone) {
+ 		f2fs_msg(sb, KERN_INFO,
+-			"Wrong secs_per_zone (%u > %u)",
++			"Wrong secs_per_zone / total_sections (%u, %u)",
+ 			secs_per_zone, total_sections);
+ 		return 1;
+ 	}
+@@ -2282,12 +2282,17 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
+ 	unsigned int ovp_segments, reserved_segments;
+ 	unsigned int main_segs, blocks_per_seg;
++	unsigned int sit_segs, nat_segs;
++	unsigned int sit_bitmap_size, nat_bitmap_size;
++	unsigned int log_blocks_per_seg;
+ 	int i;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+ 	fsmeta = le32_to_cpu(raw_super->segment_count_ckpt);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_sit);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_nat);
++	sit_segs = le32_to_cpu(raw_super->segment_count_sit);
++	fsmeta += sit_segs;
++	nat_segs = le32_to_cpu(raw_super->segment_count_nat);
++	fsmeta += nat_segs;
+ 	fsmeta += le32_to_cpu(ckpt->rsvd_segment_count);
+ 	fsmeta += le32_to_cpu(raw_super->segment_count_ssa);
+ 
+@@ -2318,6 +2323,18 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 			return 1;
+ 	}
+ 
++	sit_bitmap_size = le32_to_cpu(ckpt->sit_ver_bitmap_bytesize);
++	nat_bitmap_size = le32_to_cpu(ckpt->nat_ver_bitmap_bytesize);
++	log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);
++
++	if (sit_bitmap_size != ((sit_segs / 2) << log_blocks_per_seg) / 8 ||
++		nat_bitmap_size != ((nat_segs / 2) << log_blocks_per_seg) / 8) {
++		f2fs_msg(sbi->sb, KERN_ERR,
++			"Wrong bitmap size: sit: %u, nat:%u",
++			sit_bitmap_size, nat_bitmap_size);
++		return 1;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
+ 		return 1;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 2e7e611deaef..bca1236fd6fa 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -9,6 +9,7 @@
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
++#include <linux/compiler.h>
+ #include <linux/proc_fs.h>
+ #include <linux/f2fs_fs.h>
+ #include <linux/seq_file.h>
+@@ -286,8 +287,10 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+ 	bool gc_entry = (!strcmp(a->attr.name, "gc_urgent") ||
+ 					a->struct_type == GC_THREAD);
+ 
+-	if (gc_entry)
+-		down_read(&sbi->sb->s_umount);
++	if (gc_entry) {
++		if (!down_read_trylock(&sbi->sb->s_umount))
++			return -EAGAIN;
++	}
+ 	ret = __sbi_store(a, sbi, buf, count);
+ 	if (gc_entry)
+ 		up_read(&sbi->sb->s_umount);
+@@ -516,7 +519,8 @@ static struct kobject f2fs_feat = {
+ 	.kset	= &f2fs_kset,
+ };
+ 
+-static int segment_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_info_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -543,7 +547,8 @@ static int segment_info_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int segment_bits_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_bits_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -567,7 +572,8 @@ static int segment_bits_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int iostat_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused iostat_info_seq_show(struct seq_file *seq,
++					       void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 5d57e818d0c3..6d049dfddb14 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -215,9 +215,9 @@ static u32 pnfs_check_callback_stateid(struct pnfs_layout_hdr *lo,
+ {
+ 	u32 oldseq, newseq;
+ 
+-	/* Is the stateid still not initialised? */
++	/* Is the stateid not initialised? */
+ 	if (!pnfs_layout_is_valid(lo))
+-		return NFS4ERR_DELAY;
++		return NFS4ERR_NOMATCHING_LAYOUT;
+ 
+ 	/* Mismatched stateid? */
+ 	if (!nfs4_stateid_match_other(&lo->plh_stateid, new))
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index a813979b5be0..cb905c0e606c 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -883,16 +883,21 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
+ 
+ 	if (hdr_arg.minorversion == 0) {
+ 		cps.clp = nfs4_find_client_ident(SVC_NET(rqstp), hdr_arg.cb_ident);
+-		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp))
++		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp)) {
++			if (cps.clp)
++				nfs_put_client(cps.clp);
+ 			goto out_invalidcred;
++		}
+ 	}
+ 
+ 	cps.minorversion = hdr_arg.minorversion;
+ 	hdr_res.taglen = hdr_arg.taglen;
+ 	hdr_res.tag = hdr_arg.tag;
+-	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
++	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0) {
++		if (cps.clp)
++			nfs_put_client(cps.clp);
+ 		return rpc_system_err;
+-
++	}
+ 	while (status == 0 && nops != hdr_arg.nops) {
+ 		status = process_op(nops, rqstp, &xdr_in,
+ 				    rqstp->rq_argp, &xdr_out, rqstp->rq_resp,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 979631411a0e..d7124fb12041 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1127,7 +1127,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	nfs_server_copy_userdata(server, parent_server);
+ 
+ 	/* Get a client representation */
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ 	rpc_set_port(data->addr, NFS_RDMA_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+ 				data->addr,
+@@ -1139,7 +1139,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 				parent_client->cl_net);
+ 	if (!error)
+ 		goto init_server;
+-#endif	/* CONFIG_SUNRPC_XPRT_RDMA */
++#endif	/* IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA) */
+ 
+ 	rpc_set_port(data->addr, NFS_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+@@ -1153,7 +1153,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	if (error < 0)
+ 		goto error;
+ 
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ init_server:
+ #endif
+ 	error = nfs_init_server_rpcclient(server, parent_server->client->cl_timeout, data->authflavor);
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 773bcb1d4044..5482dd6ae9ef 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -520,6 +520,7 @@ struct hid_input {
+ 	const char *name;
+ 	bool registered;
+ 	struct list_head reports;	/* the list of reports */
++	unsigned int application;	/* application usage for this input */
+ };
+ 
+ enum hid_type {
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 22651e124071..a590419e46c5 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -340,7 +340,7 @@ struct kioctx_table;
+ struct mm_struct {
+ 	struct vm_area_struct *mmap;		/* list of VMAs */
+ 	struct rb_root mm_rb;
+-	u32 vmacache_seqnum;                   /* per-thread vmacache */
++	u64 vmacache_seqnum;                   /* per-thread vmacache */
+ #ifdef CONFIG_MMU
+ 	unsigned long (*get_unmapped_area) (struct file *filp,
+ 				unsigned long addr, unsigned long len,
+diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
+index 5fe87687664c..d7016dcb245e 100644
+--- a/include/linux/mm_types_task.h
++++ b/include/linux/mm_types_task.h
+@@ -32,7 +32,7 @@
+ #define VMACACHE_MASK (VMACACHE_SIZE - 1)
+ 
+ struct vmacache {
+-	u32 seqnum;
++	u64 seqnum;
+ 	struct vm_area_struct *vmas[VMACACHE_SIZE];
+ };
+ 
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 3e8ec3b8a39c..87c635d6c773 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -986,14 +986,14 @@ struct nand_subop {
+ 	unsigned int last_instr_end_off;
+ };
+ 
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int op_id);
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int op_id);
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int op_id);
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int op_id);
+ 
+ /**
+  * struct nand_op_parser_addr_constraints - Constraints for address instructions
+diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
+index 5c7f010676a7..47a3441cf4c4 100644
+--- a/include/linux/vm_event_item.h
++++ b/include/linux/vm_event_item.h
+@@ -105,7 +105,6 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 		VMACACHE_FIND_CALLS,
+ 		VMACACHE_FIND_HITS,
+-		VMACACHE_FULL_FLUSHES,
+ #endif
+ #ifdef CONFIG_SWAP
+ 		SWAP_RA,
+diff --git a/include/linux/vmacache.h b/include/linux/vmacache.h
+index a5b3aa8d281f..a09b28f76460 100644
+--- a/include/linux/vmacache.h
++++ b/include/linux/vmacache.h
+@@ -16,7 +16,6 @@ static inline void vmacache_flush(struct task_struct *tsk)
+ 	memset(tsk->vmacache.vmas, 0, sizeof(tsk->vmacache.vmas));
+ }
+ 
+-extern void vmacache_flush_all(struct mm_struct *mm);
+ extern void vmacache_update(unsigned long addr, struct vm_area_struct *newvma);
+ extern struct vm_area_struct *vmacache_find(struct mm_struct *mm,
+ 						    unsigned long addr);
+@@ -30,10 +29,6 @@ extern struct vm_area_struct *vmacache_find_exact(struct mm_struct *mm,
+ static inline void vmacache_invalidate(struct mm_struct *mm)
+ {
+ 	mm->vmacache_seqnum++;
+-
+-	/* deal with overflows */
+-	if (unlikely(mm->vmacache_seqnum == 0))
+-		vmacache_flush_all(mm);
+ }
+ 
+ #endif /* __LINUX_VMACACHE_H */
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index 7363f18e65a5..813282cc8af6 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -902,13 +902,13 @@ struct ethtool_rx_flow_spec {
+ static inline __u64 ethtool_get_flow_spec_ring(__u64 ring_cookie)
+ {
+ 	return ETHTOOL_RX_FLOW_SPEC_RING & ring_cookie;
+-};
++}
+ 
+ static inline __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
+ {
+ 	return (ETHTOOL_RX_FLOW_SPEC_RING_VF & ring_cookie) >>
+ 				ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+-};
++}
+ 
+ /**
+  * struct ethtool_rxnfc - command to get or set RX flow classification rules
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index f80afc674f02..517907b082df 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -608,15 +608,15 @@ static void cpuhp_thread_fun(unsigned int cpu)
+ 	bool bringup = st->bringup;
+ 	enum cpuhp_state state;
+ 
++	if (WARN_ON_ONCE(!st->should_run))
++		return;
++
+ 	/*
+ 	 * ACQUIRE for the cpuhp_should_run() load of ->should_run. Ensures
+ 	 * that if we see ->should_run we also see the rest of the state.
+ 	 */
+ 	smp_mb();
+ 
+-	if (WARN_ON_ONCE(!st->should_run))
+-		return;
+-
+ 	cpuhp_lock_acquire(bringup);
+ 
+ 	if (st->single) {
+@@ -928,7 +928,8 @@ static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ 		if (ret) {
+ 			st->target = prev_state;
+-			undo_cpu_down(cpu, st);
++			if (st->state < prev_state)
++				undo_cpu_down(cpu, st);
+ 			break;
+ 		}
+ 	}
+@@ -981,7 +982,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+ 	 * to do the further cleanups.
+ 	 */
+ 	ret = cpuhp_down_callbacks(cpu, st, target);
+-	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
++	if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+ 		cpuhp_reset_state(st, prev_state);
+ 		__cpuhp_kick_ap(st);
+ 	}
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index f89a78e2792b..443941aa784e 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -129,19 +129,40 @@ static void inline clocksource_watchdog_unlock(unsigned long *flags)
+ 	spin_unlock_irqrestore(&watchdog_lock, *flags);
+ }
+ 
++static int clocksource_watchdog_kthread(void *data);
++static void __clocksource_change_rating(struct clocksource *cs, int rating);
++
+ /*
+  * Interval: 0.5sec Threshold: 0.0625s
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+ #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+ 
++static void clocksource_watchdog_work(struct work_struct *work)
++{
++	/*
++	 * We cannot directly run clocksource_watchdog_kthread() here, because
++	 * clocksource_select() calls timekeeping_notify() which uses
++	 * stop_machine(). One cannot use stop_machine() from a workqueue() due
++	 * lock inversions wrt CPU hotplug.
++	 *
++	 * Also, we only ever run this work once or twice during the lifetime
++	 * of the kernel, so there is no point in creating a more permanent
++	 * kthread for this.
++	 *
++	 * If kthread_run fails the next watchdog scan over the
++	 * watchdog_list will find the unstable clock again.
++	 */
++	kthread_run(clocksource_watchdog_kthread, NULL, "kwatchdog");
++}
++
+ static void __clocksource_unstable(struct clocksource *cs)
+ {
+ 	cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG);
+ 	cs->flags |= CLOCK_SOURCE_UNSTABLE;
+ 
+ 	/*
+-	 * If the clocksource is registered clocksource_watchdog_work() will
++	 * If the clocksource is registered clocksource_watchdog_kthread() will
+ 	 * re-rate and re-select.
+ 	 */
+ 	if (list_empty(&cs->list)) {
+@@ -152,7 +173,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+ 	if (cs->mark_unstable)
+ 		cs->mark_unstable(cs);
+ 
+-	/* kick clocksource_watchdog_work() */
++	/* kick clocksource_watchdog_kthread() */
+ 	if (finished_booting)
+ 		schedule_work(&watchdog_work);
+ }
+@@ -162,7 +183,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+  * @cs:		clocksource to be marked unstable
+  *
+  * This function is called by the x86 TSC code to mark clocksources as unstable;
+- * it defers demotion and re-selection to a work.
++ * it defers demotion and re-selection to a kthread.
+  */
+ void clocksource_mark_unstable(struct clocksource *cs)
+ {
+@@ -387,9 +408,7 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs)
+ 	}
+ }
+ 
+-static void __clocksource_change_rating(struct clocksource *cs, int rating);
+-
+-static int __clocksource_watchdog_work(void)
++static int __clocksource_watchdog_kthread(void)
+ {
+ 	struct clocksource *cs, *tmp;
+ 	unsigned long flags;
+@@ -414,12 +433,13 @@ static int __clocksource_watchdog_work(void)
+ 	return select;
+ }
+ 
+-static void clocksource_watchdog_work(struct work_struct *work)
++static int clocksource_watchdog_kthread(void *data)
+ {
+ 	mutex_lock(&clocksource_mutex);
+-	if (__clocksource_watchdog_work())
++	if (__clocksource_watchdog_kthread())
+ 		clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
++	return 0;
+ }
+ 
+ static bool clocksource_is_watchdog(struct clocksource *cs)
+@@ -438,7 +458,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
+ static void clocksource_select_watchdog(bool fallback) { }
+ static inline void clocksource_dequeue_watchdog(struct clocksource *cs) { }
+ static inline void clocksource_resume_watchdog(void) { }
+-static inline int __clocksource_watchdog_work(void) { return 0; }
++static inline int __clocksource_watchdog_kthread(void) { return 0; }
+ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
+ void clocksource_mark_unstable(struct clocksource *cs) { }
+ 
+@@ -672,7 +692,7 @@ static int __init clocksource_done_booting(void)
+ 	/*
+ 	 * Run the watchdog first to eliminate unstable clock sources
+ 	 */
+-	__clocksource_watchdog_work();
++	__clocksource_watchdog_kthread();
+ 	clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
+ 	return 0;
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index cc2d23e6ff61..786f8c014e7e 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1657,6 +1657,22 @@ static inline void __run_timers(struct timer_base *base)
+ 
+ 	raw_spin_lock_irq(&base->lock);
+ 
++	/*
++	 * timer_base::must_forward_clk must be cleared before running
++	 * timers so that any timer functions that call mod_timer() will
++	 * not try to forward the base. Idle tracking / clock forwarding
++	 * logic is only used with BASE_STD timers.
++	 *
++	 * The must_forward_clk flag is cleared unconditionally also for
++	 * the deferrable base. The deferrable base is not affected by idle
++	 * tracking and never forwarded, so clearing the flag is a NOOP.
++	 *
++	 * The fact that the deferrable base is never forwarded can cause
++	 * large variations in granularity for deferrable timers, but they
++	 * can be deferred for long periods due to idle anyway.
++	 */
++	base->must_forward_clk = false;
++
+ 	while (time_after_eq(jiffies, base->clk)) {
+ 
+ 		levels = collect_expired_timers(base, heads);
+@@ -1676,19 +1692,6 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h)
+ {
+ 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ 
+-	/*
+-	 * must_forward_clk must be cleared before running timers so that any
+-	 * timer functions that call mod_timer will not try to forward the
+-	 * base. idle trcking / clock forwarding logic is only used with
+-	 * BASE_STD timers.
+-	 *
+-	 * The deferrable base does not do idle tracking at all, so we do
+-	 * not forward it. This can result in very large variations in
+-	 * granularity for deferrable timers, but they can be deferred for
+-	 * long periods due to idle.
+-	 */
+-	base->must_forward_clk = false;
+-
+ 	__run_timers(base);
+ 	if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
+ 		__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
+diff --git a/mm/debug.c b/mm/debug.c
+index 38c926520c97..bd10aad8539a 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -114,7 +114,7 @@ EXPORT_SYMBOL(dump_vma);
+ 
+ void dump_mm(const struct mm_struct *mm)
+ {
+-	pr_emerg("mm %px mmap %px seqnum %d task_size %lu\n"
++	pr_emerg("mm %px mmap %px seqnum %llu task_size %lu\n"
+ #ifdef CONFIG_MMU
+ 		"get_unmapped_area %px\n"
+ #endif
+@@ -142,7 +142,7 @@ void dump_mm(const struct mm_struct *mm)
+ 		"tlb_flush_pending %d\n"
+ 		"def_flags: %#lx(%pGv)\n",
+ 
+-		mm, mm->mmap, mm->vmacache_seqnum, mm->task_size,
++		mm, mm->mmap, (long long) mm->vmacache_seqnum, mm->task_size,
+ #ifdef CONFIG_MMU
+ 		mm->get_unmapped_area,
+ #endif
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 7deb49f69e27..785252397e35 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1341,7 +1341,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
+ 			if (__PageMovable(page))
+ 				return pfn;
+ 			if (PageHuge(page)) {
+-				if (page_huge_active(page))
++				if (hugepage_migration_supported(page_hstate(page)) &&
++				    page_huge_active(page))
+ 					return pfn;
+ 				else
+ 					pfn = round_up(pfn + 1,
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3222193c46c6..65f2e6481c99 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7649,6 +7649,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ 		 * handle each tail page individually in migration.
+ 		 */
+ 		if (PageHuge(page)) {
++
++			if (!hugepage_migration_supported(page_hstate(page)))
++				goto unmovable;
++
+ 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
+ 			continue;
+ 		}
+diff --git a/mm/vmacache.c b/mm/vmacache.c
+index db7596eb6132..f1729617dc85 100644
+--- a/mm/vmacache.c
++++ b/mm/vmacache.c
+@@ -7,44 +7,6 @@
+ #include <linux/mm.h>
+ #include <linux/vmacache.h>
+ 
+-/*
+- * Flush vma caches for threads that share a given mm.
+- *
+- * The operation is safe because the caller holds the mmap_sem
+- * exclusively and other threads accessing the vma cache will
+- * have mmap_sem held at least for read, so no extra locking
+- * is required to maintain the vma cache.
+- */
+-void vmacache_flush_all(struct mm_struct *mm)
+-{
+-	struct task_struct *g, *p;
+-
+-	count_vm_vmacache_event(VMACACHE_FULL_FLUSHES);
+-
+-	/*
+-	 * Single threaded tasks need not iterate the entire
+-	 * list of process. We can avoid the flushing as well
+-	 * since the mm's seqnum was increased and don't have
+-	 * to worry about other threads' seqnum. Current's
+-	 * flush will occur upon the next lookup.
+-	 */
+-	if (atomic_read(&mm->mm_users) == 1)
+-		return;
+-
+-	rcu_read_lock();
+-	for_each_process_thread(g, p) {
+-		/*
+-		 * Only flush the vmacache pointers as the
+-		 * mm seqnum is already set and curr's will
+-		 * be set upon invalidation when the next
+-		 * lookup is done.
+-		 */
+-		if (mm == p->mm)
+-			vmacache_flush(p);
+-	}
+-	rcu_read_unlock();
+-}
+-
+ /*
+  * This task may be accessing a foreign mm via (for example)
+  * get_user_pages()->find_vma().  The vmacache is task-local and this
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 3bba8f4b08a9..253975cce943 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -775,7 +775,7 @@ static int hidp_setup_hid(struct hidp_session *session,
+ 	hid->version = req->version;
+ 	hid->country = req->country;
+ 
+-	strncpy(hid->name, req->name, sizeof(req->name) - 1);
++	strncpy(hid->name, req->name, sizeof(hid->name));
+ 
+ 	snprintf(hid->phys, sizeof(hid->phys), "%pMR",
+ 		 &l2cap_pi(session->ctrl_sock->sk)->chan->src);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index 2589a6b78aa1..013fdb6fa07a 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1786,7 +1786,7 @@ static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app,
+ 		if (itr->app.selector == app->selector &&
+ 		    itr->app.protocol == app->protocol &&
+ 		    itr->ifindex == ifindex &&
+-		    (!prio || itr->app.priority == prio))
++		    ((prio == -1) || itr->app.priority == prio))
+ 			return itr;
+ 	}
+ 
+@@ -1821,7 +1821,8 @@ u8 dcb_getapp(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio = itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+@@ -1849,7 +1850,8 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new)
+ 
+ 	spin_lock_bh(&dcb_lock);
+ 	/* Search for existing match and replace */
+-	if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) {
++	itr = dcb_app_lookup(new, dev->ifindex, -1);
++	if (itr) {
+ 		if (new->priority)
+ 			itr->app.priority = new->priority;
+ 		else {
+@@ -1882,7 +1884,8 @@ u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio |= 1 << itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 932985ca4e66..3f80a5ca4050 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1612,6 +1612,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ 	 */
+ 	if (!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS) &&
+ 	    !ieee80211_has_morefrags(hdr->frame_control) &&
++	    !is_multicast_ether_addr(hdr->addr1) &&
+ 	    (ieee80211_is_mgmt(hdr->frame_control) ||
+ 	     ieee80211_is_data(hdr->frame_control)) &&
+ 	    !(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 20a171ac4bb2..16849969c138 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3910,7 +3910,8 @@ void snd_hda_bus_reset_codecs(struct hda_bus *bus)
+ 
+ 	list_for_each_codec(codec, bus) {
+ 		/* FIXME: maybe a better way needed for forced reset */
+-		cancel_delayed_work_sync(&codec->jackpoll_work);
++		if (current_work() != &codec->jackpoll_work.work)
++			cancel_delayed_work_sync(&codec->jackpoll_work);
+ #ifdef CONFIG_PM
+ 		if (hda_codec_is_power_on(codec)) {
+ 			hda_call_codec_suspend(codec);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f6af3e1c2b93..d14b05f68d6d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6530,6 +6530,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5feae9666822..55d6c9488d8e 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1165,6 +1165,9 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 	snd_pcm_sframes_t codec_delay = 0;
+ 	int i;
+ 
++	/* clearing the previous total delay */
++	runtime->delay = 0;
++
+ 	for_each_rtdcom(rtd, rtdcom) {
+ 		component = rtdcom->component;
+ 
+@@ -1176,6 +1179,8 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 		offset = component->driver->ops->pointer(substream);
+ 		break;
+ 	}
++	/* base delay if assigned in pointer callback */
++	delay = runtime->delay;
+ 
+ 	if (cpu_dai->driver->ops->delay)
+ 		delay += cpu_dai->driver->ops->delay(substream, cpu_dai);
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index f5a3b402589e..67b042738ed7 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -905,8 +905,8 @@ bindir = $(abspath $(prefix)/$(bindir_relative))
+ mandir = share/man
+ infodir = share/info
+ perfexecdir = libexec/perf-core
+-perf_include_dir = lib/include/perf
+-perf_examples_dir = lib/examples/perf
++perf_include_dir = lib/perf/include
++perf_examples_dir = lib/perf/examples
+ sharedir = $(prefix)/share
+ template_dir = share/perf-core/templates
+ STRACE_GROUPS_DIR = share/perf-core/strace/groups
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index 6a8738f7ead3..eab66e3b0a19 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2349,6 +2349,9 @@ static int perf_c2c__browse_cacheline(struct hist_entry *he)
+ 	" s             Toggle full length of symbol and source line columns \n"
+ 	" q             Return back to cacheline list \n";
+ 
++	if (!he)
++		return 0;
++
+ 	/* Display compact version first. */
+ 	c2c.symbol_full = false;
+ 
+diff --git a/tools/perf/perf.h b/tools/perf/perf.h
+index d215714f48df..21bf7f5a3cf5 100644
+--- a/tools/perf/perf.h
++++ b/tools/perf/perf.h
+@@ -25,7 +25,9 @@ static inline unsigned long long rdclock(void)
+ 	return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
+ }
+ 
++#ifndef MAX_NR_CPUS
+ #define MAX_NR_CPUS			1024
++#endif
+ 
+ extern const char *input_name;
+ extern bool perf_host, perf_guest;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 94fce4f537e9..0d5504751cc5 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -848,6 +848,12 @@ static void apply_config_terms(struct perf_evsel *evsel,
+ 	}
+ }
+ 
++static bool is_dummy_event(struct perf_evsel *evsel)
++{
++	return (evsel->attr.type == PERF_TYPE_SOFTWARE) &&
++	       (evsel->attr.config == PERF_COUNT_SW_DUMMY);
++}
++
+ /*
+  * The enable_on_exec/disabled value strategy:
+  *
+@@ -1086,6 +1092,14 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		else
+ 			perf_evsel__reset_sample_bit(evsel, PERIOD);
+ 	}
++
++	/*
++	 * For initial_delay, a dummy event is added implicitly.
++	 * The software event will trigger -EOPNOTSUPP error out,
++	 * if BRANCH_STACK bit is set.
++	 */
++	if (opts->initial_delay && is_dummy_event(evsel))
++		perf_evsel__reset_sample_bit(evsel, BRANCH_STACK);
+ }
+ 
+ static int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
+diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c
+index b53596ad601b..2e7fd8227969 100644
+--- a/tools/testing/nvdimm/pmem-dax.c
++++ b/tools/testing/nvdimm/pmem-dax.c
+@@ -31,17 +31,21 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
+ 	if (get_nfit_res(pmem->phys_addr + offset)) {
+ 		struct page *page;
+ 
+-		*kaddr = pmem->virt_addr + offset;
++		if (kaddr)
++			*kaddr = pmem->virt_addr + offset;
+ 		page = vmalloc_to_page(pmem->virt_addr + offset);
+-		*pfn = page_to_pfn_t(page);
++		if (pfn)
++			*pfn = page_to_pfn_t(page);
+ 		pr_debug_ratelimited("%s: pmem: %p pgoff: %#lx pfn: %#lx\n",
+ 				__func__, pmem, pgoff, page_to_pfn(page));
+ 
+ 		return 1;
+ 	}
+ 
+-	*kaddr = pmem->virt_addr + offset;
+-	*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
++	if (kaddr)
++		*kaddr = pmem->virt_addr + offset;
++	if (pfn)
++		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+ 
+ 	/*
+ 	 * If badblocks are present, limit known good range to the
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 41106d9d5cc7..f9c856c8e472 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -6997,7 +6997,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7020,7 +7020,7 @@ static struct bpf_test tests[] = {
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7042,7 +7042,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+index 70952bd98ff9..13147a1f5731 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+@@ -17,7 +17,7 @@
+         "cmdUnderTest": "$TC actions add action connmark",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -41,7 +41,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pass index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pass.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pass.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -65,7 +65,7 @@
+         "cmdUnderTest": "$TC actions add action connmark drop index 100",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 100",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 drop.*index 100 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 drop.*index 100 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -89,7 +89,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pipe index 455",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 455",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe.*index 455 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe.*index 455 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -113,7 +113,7 @@
+         "cmdUnderTest": "$TC actions add action connmark reclassify index 7",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 reclassify.*index 7 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 reclassify.*index 7 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -137,7 +137,7 @@
+         "cmdUnderTest": "$TC actions add action connmark continue index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 continue.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 continue.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -161,7 +161,7 @@
+         "cmdUnderTest": "$TC actions add action connmark jump 10 index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 jump 10.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 jump 10.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -185,7 +185,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 100 pipe index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 100 pipe.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 100 pipe.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -209,7 +209,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 65536 reclassify index 21",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 65536 reclassify.*index 21 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 65536 reclassify.*index 21 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -233,7 +233,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 655 unsupp_arg pass index 2",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 2",
+-        "matchPattern": "action order [0-9]+:  connmark zone 655 unsupp_arg pass.*index 2 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 655 unsupp_arg pass.*index 2 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -258,7 +258,7 @@
+         "cmdUnderTest": "$TC actions replace action connmark zone 555 reclassify index 555",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 555",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 reclassify.*index 555 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 555 reclassify.*index 555 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -282,7 +282,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 555 pipe index 5 cookie aabbccddeeff112233445566778800a1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 5",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
++        "matchPattern": "action order [0-9]+: connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+index 6e4edfae1799..db49fd0f8445 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+@@ -44,7 +44,8 @@
+         "matchPattern": "action order [0-9]*: mirred \\(Egress Redirect to device lo\\).*index 2 ref",
+         "matchCount": "1",
+         "teardown": [
+-            "$TC actions flush action mirred"
++            "$TC actions flush action mirred",
++            "$TC actions flush action gact"
+         ]
+     },
+     {
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index c2b95a22959b..fd8c88463928 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1831,13 +1831,20 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data
+ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+ {
+ 	unsigned long end = hva + PAGE_SIZE;
++	kvm_pfn_t pfn = pte_pfn(pte);
+ 	pte_t stage2_pte;
+ 
+ 	if (!kvm->arch.pgd)
+ 		return;
+ 
+ 	trace_kvm_set_spte_hva(hva);
+-	stage2_pte = pfn_pte(pte_pfn(pte), PAGE_S2);
++
++	/*
++	 * We've moved a page around, probably through CoW, so let's treat it
++	 * just like a translation fault and clean the cache to the PoC.
++	 */
++	clean_dcache_guest_page(pfn, PAGE_SIZE);
++	stage2_pte = pfn_pte(pfn, PAGE_S2);
+ 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte);
+ }
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     623228a05488bfda3b696a03ebf1c33b49b63a47
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  5 15:30:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=623228a0

Linux patch 4.18.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-4.18.6.patch | 5123 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5127 insertions(+)

diff --git a/0000_README b/0000_README
index 8da0979..8bfc2e4 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-4.18.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.5
 
+Patch:  1005_linux-4.18.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-4.18.6.patch b/1005_linux-4.18.6.patch
new file mode 100644
index 0000000..99632b3
--- /dev/null
+++ b/1005_linux-4.18.6.patch
@@ -0,0 +1,5123 @@
+diff --git a/Makefile b/Makefile
+index a41692c5827a..62524f4d42ad 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -493,9 +493,13 @@ KBUILD_AFLAGS += $(call cc-option, -no-integrated-as)
+ endif
+ 
+ RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register
++RETPOLINE_VDSO_CFLAGS_GCC := -mindirect-branch=thunk-inline -mindirect-branch-register
+ RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk
++RETPOLINE_VDSO_CFLAGS_CLANG := -mretpoline
+ RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG)))
++RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_VDSO_CFLAGS_CLANG)))
+ export RETPOLINE_CFLAGS
++export RETPOLINE_VDSO_CFLAGS
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS	+= $(call cc-option,-fno-PIE)
+diff --git a/arch/Kconfig b/arch/Kconfig
+index d1f2ed462ac8..f03b72644902 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -354,6 +354,9 @@ config HAVE_ARCH_JUMP_LABEL
+ config HAVE_RCU_TABLE_FREE
+ 	bool
+ 
++config HAVE_RCU_TABLE_INVALIDATE
++	bool
++
+ config ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	bool
+ 
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index f6a62ae44a65..c864f6b045ba 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -238,7 +238,7 @@ static void jit_fill_hole(void *area, unsigned int size)
+ #define STACK_SIZE	ALIGN(_STACK_SIZE, STACK_ALIGNMENT)
+ 
+ /* Get the offset of eBPF REGISTERs stored on scratch space. */
+-#define STACK_VAR(off) (STACK_SIZE - off)
++#define STACK_VAR(off) (STACK_SIZE - off - 4)
+ 
+ #if __LINUX_ARM_ARCH__ < 7
+ 
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index e90cc8a08186..a8be6fe3946d 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -289,8 +289,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
+ 				break;
+ 			case KPROBE_REENTER:
+ 				/* A nested probe was hit in FIQ, it is a BUG */
+-				pr_warn("Unrecoverable kprobe detected at %p.\n",
+-					p->addr);
++				pr_warn("Unrecoverable kprobe detected.\n");
++				dump_kprobe(p);
+ 				/* fall through */
+ 			default:
+ 				/* impossible cases */
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index 14db14152909..cc237fa9b90f 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -1461,7 +1461,6 @@ fail:
+ 	print_registers(&result_regs);
+ 
+ 	if (mem) {
+-		pr_err("current_stack=%p\n", current_stack);
+ 		pr_err("expected_memory:\n");
+ 		print_memory(expected_memory, mem_size);
+ 		pr_err("result_memory:\n");
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index b8e9da15e00c..2c1aa84abeea 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0x0 0xff120000 0x0 0x100>;
+ 		interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_UART1>, <&cru PCLK_UART1>;
+-		clock-names = "sclk_uart", "pclk_uart";
++		clock-names = "baudclk", "apb_pclk";
+ 		dmas = <&dmac 4>, <&dmac 5>;
+ 		dma-names = "tx", "rx";
+ 		pinctrl-names = "default";
+diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
+index 5df5cfe1c143..5ee5bca8c24b 100644
+--- a/arch/arm64/include/asm/cache.h
++++ b/arch/arm64/include/asm/cache.h
+@@ -21,12 +21,16 @@
+ #define CTR_L1IP_SHIFT		14
+ #define CTR_L1IP_MASK		3
+ #define CTR_DMINLINE_SHIFT	16
++#define CTR_IMINLINE_SHIFT	0
+ #define CTR_ERG_SHIFT		20
+ #define CTR_CWG_SHIFT		24
+ #define CTR_CWG_MASK		15
+ #define CTR_IDC_SHIFT		28
+ #define CTR_DIC_SHIFT		29
+ 
++#define CTR_CACHE_MINLINE_MASK	\
++	(0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
++
+ #define CTR_L1IP(ctr)		(((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+ 
+ #define ICACHE_POLICY_VPIPT	0
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 8a699c708fc9..be3bf3d08916 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -49,7 +49,8 @@
+ #define ARM64_HAS_CACHE_DIC			28
+ #define ARM64_HW_DBM				29
+ #define ARM64_SSBD				30
++#define ARM64_MISMATCHED_CACHE_TYPE		31
+ 
+-#define ARM64_NCAPS				31
++#define ARM64_NCAPS				32
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 1d2b6d768efe..5d59ff9a8da9 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -65,12 +65,18 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
+ }
+ 
+ static bool
+-has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
+-				int scope)
++has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
++			  int scope)
+ {
++	u64 mask = CTR_CACHE_MINLINE_MASK;
++
++	/* Skip matching the min line sizes for cache type check */
++	if (entry->capability == ARM64_MISMATCHED_CACHE_TYPE)
++		mask ^= arm64_ftr_reg_ctrel0.strict_mask;
++
+ 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+-	return (read_cpuid_cachetype() & arm64_ftr_reg_ctrel0.strict_mask) !=
+-		(arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
++	return (read_cpuid_cachetype() & mask) !=
++	       (arm64_ftr_reg_ctrel0.sys_val & mask);
+ }
+ 
+ static void
+@@ -613,7 +619,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 	{
+ 		.desc = "Mismatched cache line size",
+ 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
+-		.matches = has_mismatched_cache_line_size,
++		.matches = has_mismatched_cache_type,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.cpu_enable = cpu_enable_trap_ctr_access,
++	},
++	{
++		.desc = "Mismatched cache type",
++		.capability = ARM64_MISMATCHED_CACHE_TYPE,
++		.matches = has_mismatched_cache_type,
+ 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ 		.cpu_enable = cpu_enable_trap_ctr_access,
+ 	},
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index c6d80743f4ed..e4103b718a7c 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -214,7 +214,7 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
+ 	 * If we have differing I-cache policies, report it as the weakest - VIPT.
+ 	 */
+ 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
+-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),	/* IminLine */
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
+ 	ARM64_FTR_END,
+ };
+ 
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index d849d9804011..22a5921562c7 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -275,7 +275,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
+ 		break;
+ 	case KPROBE_HIT_SS:
+ 	case KPROBE_REENTER:
+-		pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
++		pr_warn("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 		break;
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 9abf8a1e7b25..787e27964ab9 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -287,7 +287,11 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ #ifdef CONFIG_HAVE_ARCH_PFN_VALID
+ int pfn_valid(unsigned long pfn)
+ {
+-	return memblock_is_map_memory(pfn << PAGE_SHIFT);
++	phys_addr_t addr = pfn << PAGE_SHIFT;
++
++	if ((addr >> PAGE_SHIFT) != pfn)
++		return 0;
++	return memblock_is_map_memory(addr);
+ }
+ EXPORT_SYMBOL(pfn_valid);
+ #endif
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index e2122cca4ae2..1e98d22ec119 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -155,15 +155,11 @@ cflags-$(CONFIG_CPU_R4300)	+= -march=r4300 -Wa,--trap
+ cflags-$(CONFIG_CPU_VR41XX)	+= -march=r4100 -Wa,--trap
+ cflags-$(CONFIG_CPU_R4X00)	+= -march=r4600 -Wa,--trap
+ cflags-$(CONFIG_CPU_TX49XX)	+= -march=r4600 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R1)	+= $(call cc-option,-march=mips32,-mips32 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R2)	+= $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R1)	+= -march=mips32 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R2)	+= -march=mips32r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS32_R6)	+= -march=mips32r6 -Wa,--trap -modd-spreg
+-cflags-$(CONFIG_CPU_MIPS64_R1)	+= $(call cc-option,-march=mips64,-mips64 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS64_R2)	+= $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R1)	+= -march=mips64 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R2)	+= -march=mips64r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS64_R6)	+= -march=mips64r6 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5000)	+= -march=r5000 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5432)	+= $(call cc-option,-march=r5400,-march=r5000) \
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index af34afbc32d9..b2fa62922d88 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -141,7 +141,7 @@ struct mips_fpu_struct {
+ 
+ #define NUM_DSP_REGS   6
+ 
+-typedef __u32 dspreg_t;
++typedef unsigned long dspreg_t;
+ 
+ struct mips_dsp_state {
+ 	dspreg_t	dspr[NUM_DSP_REGS];
+@@ -386,7 +386,20 @@ unsigned long get_wchan(struct task_struct *p);
+ #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
+ #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
+ 
++#ifdef CONFIG_CPU_LOONGSON3
++/*
++ * Loongson-3's SFB (Store-Fill-Buffer) may buffer writes indefinitely when a
++ * tight read loop is executed, because reads take priority over writes & the
++ * hardware (incorrectly) doesn't ensure that writes will eventually occur.
++ *
++ * Since spin loops of any kind should have a cpu_relax() in them, force an SFB
++ * flush from cpu_relax() such that any pending writes will become visible as
++ * expected.
++ */
++#define cpu_relax()	smp_mb()
++#else
+ #define cpu_relax()	barrier()
++#endif
+ 
+ /*
+  * Return_address is a replacement for __builtin_return_address(count)
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index 9f6c3f2aa2e2..8c8d42823bda 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -856,7 +856,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c
+index 7edc629304c8..bc348d44d151 100644
+--- a/arch/mips/kernel/ptrace32.c
++++ b/arch/mips/kernel/ptrace32.c
+@@ -142,7 +142,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index 1cc306520a55..fac26ce64b2f 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -195,6 +195,7 @@
+ #endif
+ #else
+ 	 PTR_SUBU	t0, $0, a2
++	move		a2, zero		/* No remaining longs */
+ 	PTR_ADDIU	t0, 1
+ 	STORE_BYTE(0)
+ 	STORE_BYTE(1)
+@@ -231,7 +232,7 @@
+ 
+ #ifdef CONFIG_CPU_MIPSR6
+ .Lbyte_fixup\@:
+-	PTR_SUBU	a2, $0, t0
++	PTR_SUBU	a2, t0
+ 	jr		ra
+ 	 PTR_ADDIU	a2, 1
+ #endif /* CONFIG_CPU_MIPSR6 */
+diff --git a/arch/mips/lib/multi3.c b/arch/mips/lib/multi3.c
+index 111ad475aa0c..4c2483f410c2 100644
+--- a/arch/mips/lib/multi3.c
++++ b/arch/mips/lib/multi3.c
+@@ -4,12 +4,12 @@
+ #include "libgcc.h"
+ 
+ /*
+- * GCC 7 suboptimally generates __multi3 calls for mips64r6, so for that
+- * specific case only we'll implement it here.
++ * GCC 7 & older can suboptimally generate __multi3 calls for mips64r6, so for
++ * that specific case only we implement that intrinsic here.
+  *
+  * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82981
+  */
+-#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ == 7)
++#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ < 8)
+ 
+ /* multiply 64-bit values, low 64-bits returned */
+ static inline long long notrace dmulu(long long a, long long b)
+diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h
+index de11ecc99c7c..9c9970a5dfb1 100644
+--- a/arch/s390/include/asm/qdio.h
++++ b/arch/s390/include/asm/qdio.h
+@@ -262,7 +262,6 @@ struct qdio_outbuf_state {
+ 	void *user;
+ };
+ 
+-#define QDIO_OUTBUF_STATE_FLAG_NONE	0x00
+ #define QDIO_OUTBUF_STATE_FLAG_PENDING	0x01
+ 
+ #define CHSC_AC1_INITIATE_INPUTQ	0x80
+diff --git a/arch/s390/lib/mem.S b/arch/s390/lib/mem.S
+index 2311f15be9cf..40c4d59c926e 100644
+--- a/arch/s390/lib/mem.S
++++ b/arch/s390/lib/mem.S
+@@ -17,7 +17,7 @@
+ ENTRY(memmove)
+ 	ltgr	%r4,%r4
+ 	lgr	%r1,%r2
+-	bzr	%r14
++	jz	.Lmemmove_exit
+ 	aghi	%r4,-1
+ 	clgr	%r2,%r3
+ 	jnh	.Lmemmove_forward
+@@ -36,6 +36,7 @@ ENTRY(memmove)
+ .Lmemmove_forward_remainder:
+ 	larl	%r5,.Lmemmove_mvc
+ 	ex	%r4,0(%r5)
++.Lmemmove_exit:
+ 	BR_EX	%r14
+ .Lmemmove_reverse:
+ 	ic	%r0,0(%r4,%r3)
+@@ -65,7 +66,7 @@ EXPORT_SYMBOL(memmove)
+  */
+ ENTRY(memset)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemset_exit
+ 	ltgr	%r3,%r3
+ 	jnz	.Lmemset_fill
+ 	aghi	%r4,-1
+@@ -80,6 +81,7 @@ ENTRY(memset)
+ .Lmemset_clear_remainder:
+ 	larl	%r3,.Lmemset_xc
+ 	ex	%r4,0(%r3)
++.Lmemset_exit:
+ 	BR_EX	%r14
+ .Lmemset_fill:
+ 	cghi	%r4,1
+@@ -115,7 +117,7 @@ EXPORT_SYMBOL(memset)
+  */
+ ENTRY(memcpy)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemcpy_exit
+ 	aghi	%r4,-1
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -124,6 +126,7 @@ ENTRY(memcpy)
+ .Lmemcpy_remainder:
+ 	larl	%r5,.Lmemcpy_mvc
+ 	ex	%r4,0(%r5)
++.Lmemcpy_exit:
+ 	BR_EX	%r14
+ .Lmemcpy_loop:
+ 	mvc	0(256,%r1),0(%r3)
+@@ -145,9 +148,9 @@ EXPORT_SYMBOL(memcpy)
+ .macro __MEMSET bits,bytes,insn
+ ENTRY(__memset\bits)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.L__memset_exit\bits
+ 	cghi	%r4,\bytes
+-	je	.L__memset_exit\bits
++	je	.L__memset_store\bits
+ 	aghi	%r4,-(\bytes+1)
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -163,8 +166,9 @@ ENTRY(__memset\bits)
+ 	larl	%r5,.L__memset_mvc\bits
+ 	ex	%r4,0(%r5)
+ 	BR_EX	%r14
+-.L__memset_exit\bits:
++.L__memset_store\bits:
+ 	\insn	%r3,0(%r2)
++.L__memset_exit\bits:
+ 	BR_EX	%r14
+ .L__memset_mvc\bits:
+ 	mvc	\bytes(1,%r1),0(%r1)
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index e074480d3598..4cc3f06b0ab3 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -502,6 +502,8 @@ retry:
+ 	/* No reason to continue if interrupted by SIGKILL. */
+ 	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+ 		fault = VM_FAULT_SIGNAL;
++		if (flags & FAULT_FLAG_RETRY_NOWAIT)
++			goto out_up;
+ 		goto out;
+ 	}
+ 	if (unlikely(fault & VM_FAULT_ERROR))
+diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
+index 382153ff17e3..dc3cede7f2ec 100644
+--- a/arch/s390/mm/page-states.c
++++ b/arch/s390/mm/page-states.c
+@@ -271,7 +271,7 @@ void arch_set_page_states(int make_stable)
+ 			list_for_each(l, &zone->free_area[order].free_list[t]) {
+ 				page = list_entry(l, struct page, lru);
+ 				if (make_stable)
+-					set_page_stable_dat(page, 0);
++					set_page_stable_dat(page, order);
+ 				else
+ 					set_page_unused(page, order);
+ 			}
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 5f0234ec8038..d7052cbe984f 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -485,8 +485,6 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
+ 			/* br %r1 */
+ 			_EMIT2(0x07f1);
+ 		} else {
+-			/* larl %r1,.+14 */
+-			EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14);
+ 			/* ex 0,S390_lowcore.br_r1_tampoline */
+ 			EMIT4_DISP(0x44000000, REG_0, REG_0,
+ 				   offsetof(struct lowcore, br_r1_trampoline));
+diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
+index 06a80434cfe6..5bd374491f94 100644
+--- a/arch/s390/numa/numa.c
++++ b/arch/s390/numa/numa.c
+@@ -134,26 +134,14 @@ void __init numa_setup(void)
+ {
+ 	pr_info("NUMA mode: %s\n", mode->name);
+ 	nodes_clear(node_possible_map);
++	/* Initially attach all possible CPUs to node 0. */
++	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+ 	if (mode->setup)
+ 		mode->setup();
+ 	numa_setup_memory();
+ 	memblock_dump_all();
+ }
+ 
+-/*
+- * numa_init_early() - Initialization initcall
+- *
+- * This runs when only one CPU is online and before the first
+- * topology update is called for by the scheduler.
+- */
+-static int __init numa_init_early(void)
+-{
+-	/* Attach all possible CPUs to node 0 for now. */
+-	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+-	return 0;
+-}
+-early_initcall(numa_init_early);
+-
+ /*
+  * numa_init_late() - Initialization initcall
+  *
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 4902fed221c0..8a505cfdd9b9 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -421,6 +421,8 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 	hwirq = 0;
+ 	for_each_pci_msi_entry(msi, pdev) {
+ 		rc = -EIO;
++		if (hwirq >= msi_vecs)
++			break;
+ 		irq = irq_alloc_desc(0);	/* Alloc irq on node 0 */
+ 		if (irq < 0)
+ 			return -ENOMEM;
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 1ace023cbdce..abfa8c7a6d9a 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -7,13 +7,13 @@ purgatory-y := head.o purgatory.o string.o sha256.o mem.o
+ targets += $(purgatory-y) purgatory.ro kexec-purgatory.c
+ PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
+ 
+-$(obj)/sha256.o: $(srctree)/lib/sha256.c
++$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+-$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S
++$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE
+ 	$(call if_changed_rule,as_o_S)
+ 
+-$(obj)/string.o: $(srctree)/arch/s390/lib/string.c
++$(obj)/string.o: $(srctree)/arch/s390/lib/string.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib
+@@ -23,6 +23,7 @@ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
++KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+ 
+ $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
+ 		$(call if_changed,ld)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 6b8065d718bd..1aa4dd3b5687 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -179,6 +179,7 @@ config X86
+ 	select HAVE_PERF_REGS
+ 	select HAVE_PERF_USER_STACK_DUMP
+ 	select HAVE_RCU_TABLE_FREE
++	select HAVE_RCU_TABLE_INVALIDATE	if HAVE_RCU_TABLE_FREE
+ 	select HAVE_REGS_AND_STACK_ACCESS_API
+ 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && UNWINDER_FRAME_POINTER && STACK_VALIDATION
+ 	select HAVE_STACKPROTECTOR		if CC_HAS_SANE_STACKPROTECTOR
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index a08e82856563..d944b52649a4 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -180,10 +180,6 @@ ifdef CONFIG_FUNCTION_GRAPH_TRACER
+   endif
+ endif
+ 
+-ifndef CC_HAVE_ASM_GOTO
+-  $(error Compiler lacks asm-goto support.)
+-endif
+-
+ #
+ # Jump labels need '-maccumulate-outgoing-args' for gcc < 4.5.2 to prevent a
+ # GCC bug (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46226).  There's no way
+@@ -317,6 +313,13 @@ PHONY += vdso_install
+ vdso_install:
+ 	$(Q)$(MAKE) $(build)=arch/x86/entry/vdso $@
+ 
++archprepare: checkbin
++checkbin:
++ifndef CC_HAVE_ASM_GOTO
++	@echo Compiler lacks asm-goto support.
++	@exit 1
++endif
++
+ archclean:
+ 	$(Q)rm -rf $(objtree)/arch/i386
+ 	$(Q)rm -rf $(objtree)/arch/x86_64
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 261802b1cc50..9589878faf46 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,9 +72,9 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
+ 
+-$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
++$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+ #
+ # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
+@@ -138,11 +138,13 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
++KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
++KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 5f4829f10129..dfb2f7c0d019 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 
+ 	perf_callchain_store(entry, regs->ip);
+ 
+-	if (!current->mm)
++	if (!nmi_uaccess_okay())
+ 		return;
+ 
+ 	if (perf_callchain_user32(regs, entry))
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c14f2a74b2be..15450a675031 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -33,7 +33,8 @@ extern inline unsigned long native_save_fl(void)
+ 	return flags;
+ }
+ 
+-static inline void native_restore_fl(unsigned long flags)
++extern inline void native_restore_fl(unsigned long flags);
++extern inline void native_restore_fl(unsigned long flags)
+ {
+ 	asm volatile("push %0 ; popf"
+ 		     : /* no output */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 682286aca881..d53c54b842da 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -132,6 +132,8 @@ struct cpuinfo_x86 {
+ 	/* Index into per_cpu list: */
+ 	u16			cpu_index;
+ 	u32			microcode;
++	/* Address space bits used by the cache internally */
++	u8			x86_cache_bits;
+ 	unsigned		initialized : 1;
+ } __randomize_layout;
+ 
+@@ -181,9 +183,9 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
+-static inline unsigned long l1tf_pfn_limit(void)
++static inline unsigned long long l1tf_pfn_limit(void)
+ {
+-	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++	return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
+ }
+ 
+ extern void early_cpu_init(void);
+diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
+index b6dc698f992a..f335aad404a4 100644
+--- a/arch/x86/include/asm/stacktrace.h
++++ b/arch/x86/include/asm/stacktrace.h
+@@ -111,6 +111,6 @@ static inline unsigned long caller_frame_pointer(void)
+ 	return (unsigned long)frame;
+ }
+ 
+-void show_opcodes(u8 *rip, const char *loglvl);
++void show_opcodes(struct pt_regs *regs, const char *loglvl);
+ void show_ip(struct pt_regs *regs, const char *loglvl);
+ #endif /* _ASM_X86_STACKTRACE_H */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 6690cd3fc8b1..0af97e51e609 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -175,8 +175,16 @@ struct tlb_state {
+ 	 * are on.  This means that it may not match current->active_mm,
+ 	 * which will contain the previous user mm when we're in lazy TLB
+ 	 * mode even if we've already switched back to swapper_pg_dir.
++	 *
++	 * During switch_mm_irqs_off(), loaded_mm will be set to
++	 * LOADED_MM_SWITCHING during the brief interrupts-off window
++	 * when CR3 and loaded_mm would otherwise be inconsistent.  This
++	 * is for nmi_uaccess_okay()'s benefit.
+ 	 */
+ 	struct mm_struct *loaded_mm;
++
++#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
++
+ 	u16 loaded_mm_asid;
+ 	u16 next_asid;
+ 	/* last user mm's ctx id */
+@@ -246,6 +254,38 @@ struct tlb_state {
+ };
+ DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
+ 
++/*
++ * Blindly accessing user memory from NMI context can be dangerous
++ * if we're in the middle of switching the current user task or
++ * switching the loaded mm.  It can also be dangerous if we
++ * interrupted some kernel code that was temporarily using a
++ * different mm.
++ */
++static inline bool nmi_uaccess_okay(void)
++{
++	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
++	struct mm_struct *current_mm = current->mm;
++
++	VM_WARN_ON_ONCE(!loaded_mm);
++
++	/*
++	 * The condition we want to check is
++	 * current_mm->pgd == __va(read_cr3_pa()).  This may be slow, though,
++	 * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
++	 * is supposed to be reasonably fast.
++	 *
++	 * Instead, we check the almost equivalent but somewhat conservative
++	 * condition below, and we rely on the fact that switch_mm_irqs_off()
++	 * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
++	 */
++	if (loaded_mm != current_mm)
++		return false;
++
++	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
++
++	return true;
++}
++
+ /* Initialize cr4 shadow for this CPU. */
+ static inline void cr4_init_shadow(void)
+ {
+diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
+index fb856c9f0449..53748541c487 100644
+--- a/arch/x86/include/asm/vgtod.h
++++ b/arch/x86/include/asm/vgtod.h
+@@ -93,7 +93,7 @@ static inline unsigned int __getcpu(void)
+ 	 *
+ 	 * If RDPID is available, use it.
+ 	 */
+-	alternative_io ("lsl %[p],%[seg]",
++	alternative_io ("lsl %[seg],%[p]",
+ 			".byte 0xf3,0x0f,0xc7,0xf8", /* RDPID %eax/rax */
+ 			X86_FEATURE_RDPID,
+ 			[p] "=a" (p), [seg] "r" (__PER_CPU_SEG));
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 664f161f96ff..4891a621a752 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -652,6 +652,45 @@ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+ 
++/*
++ * These CPUs all support 44bits physical address space internally in the
++ * cache but CPUID can report a smaller number of physical address bits.
++ *
++ * The L1TF mitigation uses the top most address bit for the inversion of
++ * non present PTEs. When the installed memory reaches into the top most
++ * address bit due to memory holes, which has been observed on machines
++ * which report 36bits physical address bits and have 32G RAM installed,
++ * then the mitigation range check in l1tf_select_mitigation() triggers.
++ * This is a false positive because the mitigation is still possible due to
++ * the fact that the cache uses 44bit internally. Use the cache bits
++ * instead of the reported physical bits and adjust them on the affected
++ * machines to 44bit if the reported bits are less than 44.
++ */
++static void override_cache_bits(struct cpuinfo_x86 *c)
++{
++	if (c->x86 != 6)
++		return;
++
++	switch (c->x86_model) {
++	case INTEL_FAM6_NEHALEM:
++	case INTEL_FAM6_WESTMERE:
++	case INTEL_FAM6_SANDYBRIDGE:
++	case INTEL_FAM6_IVYBRIDGE:
++	case INTEL_FAM6_HASWELL_CORE:
++	case INTEL_FAM6_HASWELL_ULT:
++	case INTEL_FAM6_HASWELL_GT3E:
++	case INTEL_FAM6_BROADWELL_CORE:
++	case INTEL_FAM6_BROADWELL_GT3E:
++	case INTEL_FAM6_SKYLAKE_MOBILE:
++	case INTEL_FAM6_SKYLAKE_DESKTOP:
++	case INTEL_FAM6_KABYLAKE_MOBILE:
++	case INTEL_FAM6_KABYLAKE_DESKTOP:
++		if (c->x86_cache_bits < 44)
++			c->x86_cache_bits = 44;
++		break;
++	}
++}
++
+ static void __init l1tf_select_mitigation(void)
+ {
+ 	u64 half_pa;
+@@ -659,6 +698,8 @@ static void __init l1tf_select_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+ 		return;
+ 
++	override_cache_bits(&boot_cpu_data);
++
+ 	switch (l1tf_mitigation) {
+ 	case L1TF_MITIGATION_OFF:
+ 	case L1TF_MITIGATION_FLUSH_NOWARN:
+@@ -678,14 +719,13 @@ static void __init l1tf_select_mitigation(void)
+ 	return;
+ #endif
+ 
+-	/*
+-	 * This is extremely unlikely to happen because almost all
+-	 * systems have far more MAX_PA/2 than RAM can be fit into
+-	 * DIMM slots.
+-	 */
+ 	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
+ 	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
+ 		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
++				half_pa);
++		pr_info("However, doing so will make a part of your RAM unusable.\n");
++		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html might help you decide.\n");
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b41b72bd8bb8..1ee8ea36af30 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -919,6 +919,7 @@ void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ 	else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
+ 		c->x86_phys_bits = 36;
+ #endif
++	c->x86_cache_bits = c->x86_phys_bits;
+ }
+ 
+ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 6602941cfebf..3f0abb62161b 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -150,6 +150,9 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
+ 		return false;
+ 
++	if (c->x86 != 6)
++		return false;
++
+ 	for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ 		if (c->x86_model == spectre_bad_microcodes[i].model &&
+ 		    c->x86_stepping == spectre_bad_microcodes[i].stepping)
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 666a284116ac..17b02adc79aa 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -17,6 +17,7 @@
+ #include <linux/bug.h>
+ #include <linux/nmi.h>
+ #include <linux/sysfs.h>
++#include <linux/kasan.h>
+ 
+ #include <asm/cpu_entry_area.h>
+ #include <asm/stacktrace.h>
+@@ -91,23 +92,32 @@ static void printk_stack_address(unsigned long address, int reliable,
+  * Thus, the 2/3rds prologue and 64 byte OPCODE_BUFSIZE is just a random
+  * guesstimate in attempt to achieve all of the above.
+  */
+-void show_opcodes(u8 *rip, const char *loglvl)
++void show_opcodes(struct pt_regs *regs, const char *loglvl)
+ {
+ 	unsigned int code_prologue = OPCODE_BUFSIZE * 2 / 3;
+ 	u8 opcodes[OPCODE_BUFSIZE];
+-	u8 *ip;
++	unsigned long ip;
+ 	int i;
++	bool bad_ip;
+ 
+ 	printk("%sCode: ", loglvl);
+ 
+-	ip = (u8 *)rip - code_prologue;
+-	if (probe_kernel_read(opcodes, ip, OPCODE_BUFSIZE)) {
++	ip = regs->ip - code_prologue;
++
++	/*
++	 * Make sure userspace isn't trying to trick us into dumping kernel
++	 * memory by pointing the userspace instruction pointer at it.
++	 */
++	bad_ip = user_mode(regs) &&
++		 __chk_range_not_ok(ip, OPCODE_BUFSIZE, TASK_SIZE_MAX);
++
++	if (bad_ip || probe_kernel_read(opcodes, (u8 *)ip, OPCODE_BUFSIZE)) {
+ 		pr_cont("Bad RIP value.\n");
+ 		return;
+ 	}
+ 
+ 	for (i = 0; i < OPCODE_BUFSIZE; i++, ip++) {
+-		if (ip == rip)
++		if (ip == regs->ip)
+ 			pr_cont("<%02x> ", opcodes[i]);
+ 		else
+ 			pr_cont("%02x ", opcodes[i]);
+@@ -122,7 +132,7 @@ void show_ip(struct pt_regs *regs, const char *loglvl)
+ #else
+ 	printk("%sRIP: %04x:%pS\n", loglvl, (int)regs->cs, (void *)regs->ip);
+ #endif
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ void show_iret_regs(struct pt_regs *regs)
+@@ -356,7 +366,10 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	 * We're not going to return, but we might be on an IST stack or
+ 	 * have very little stack space left.  Rewind the stack and kill
+ 	 * the task.
++	 * Before we rewind the stack, we have to tell KASAN that we're going to
++	 * reuse the task stack and that existing poisons are invalid.
+ 	 */
++	kasan_unpoison_task_stack(current);
+ 	rewind_stack_do_exit(signr);
+ }
+ NOKPROBE_SYMBOL(oops_end);
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index da5d8ac60062..50d5848bf22e 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -338,6 +338,18 @@ static resource_size_t __init gen3_stolen_base(int num, int slot, int func,
+ 	return bsm & INTEL_BSM_MASK;
+ }
+ 
++static resource_size_t __init gen11_stolen_base(int num, int slot, int func,
++						resource_size_t stolen_size)
++{
++	u64 bsm;
++
++	bsm = read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW0);
++	bsm &= INTEL_BSM_MASK;
++	bsm |= (u64)read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW1) << 32;
++
++	return bsm;
++}
++
+ static resource_size_t __init i830_stolen_size(int num, int slot, int func)
+ {
+ 	u16 gmch_ctrl;
+@@ -498,6 +510,11 @@ static const struct intel_early_ops chv_early_ops __initconst = {
+ 	.stolen_size = chv_stolen_size,
+ };
+ 
++static const struct intel_early_ops gen11_early_ops __initconst = {
++	.stolen_base = gen11_stolen_base,
++	.stolen_size = gen9_stolen_size,
++};
++
+ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_I830_IDS(&i830_early_ops),
+ 	INTEL_I845G_IDS(&i845_early_ops),
+@@ -529,6 +546,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_CFL_IDS(&gen9_early_ops),
+ 	INTEL_GLK_IDS(&gen9_early_ops),
+ 	INTEL_CNL_IDS(&gen9_early_ops),
++	INTEL_ICL_11_IDS(&gen11_early_ops),
+ };
+ 
+ struct resource intel_graphics_stolen_res __ro_after_init = DEFINE_RES_MEM(0, 0);
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 12bb445fb98d..4344a032ebe6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -384,6 +384,7 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
+ 	start_thread_common(regs, new_ip, new_sp,
+ 			    __USER_CS, __USER_DS, 0);
+ }
++EXPORT_SYMBOL_GPL(start_thread);
+ 
+ #ifdef CONFIG_COMPAT
+ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp)
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index af8caf965baa..01d209ab5481 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -235,7 +235,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	struct kvm_vcpu *vcpu = synic_to_vcpu(synic);
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -295,11 +295,12 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	return ret;
+ }
+ 
+-static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata)
++static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata,
++			 bool host)
+ {
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	ret = 0;
+@@ -1014,6 +1015,11 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		hv->hv_tsc_emulation_status = data;
+ 		break;
++	case HV_X64_MSR_TIME_REF_COUNT:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1101,6 +1107,12 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return stimer_set_count(vcpu_to_stimer(vcpu, timer_index),
+ 					data, host);
+ 	}
++	case HV_X64_MSR_TSC_FREQUENCY:
++	case HV_X64_MSR_APIC_FREQUENCY:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1156,7 +1168,8 @@ static int kvm_hv_get_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	return 0;
+ }
+ 
+-static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
++			  bool host)
+ {
+ 	u64 data = 0;
+ 	struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
+@@ -1183,7 +1196,7 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	case HV_X64_MSR_SIMP:
+ 	case HV_X64_MSR_EOM:
+ 	case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+-		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata);
++		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata, host);
+ 	case HV_X64_MSR_STIMER0_CONFIG:
+ 	case HV_X64_MSR_STIMER1_CONFIG:
+ 	case HV_X64_MSR_STIMER2_CONFIG:
+@@ -1229,7 +1242,7 @@ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return kvm_hv_set_msr(vcpu, msr, data, host);
+ }
+ 
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	if (kvm_hv_msr_partition_wide(msr)) {
+ 		int r;
+@@ -1239,7 +1252,7 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		mutex_unlock(&vcpu->kvm->arch.hyperv.hv_lock);
+ 		return r;
+ 	} else
+-		return kvm_hv_get_msr(vcpu, msr, pdata);
++		return kvm_hv_get_msr(vcpu, msr, pdata, host);
+ }
+ 
+ static __always_inline int get_sparse_bank_no(u64 valid_bank_mask, int bank_no)
+diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
+index 837465d69c6d..d6aa969e20f1 100644
+--- a/arch/x86/kvm/hyperv.h
++++ b/arch/x86/kvm/hyperv.h
+@@ -48,7 +48,7 @@ static inline struct kvm_vcpu *synic_to_vcpu(struct kvm_vcpu_hv_synic *synic)
+ }
+ 
+ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host);
+ 
+ bool kvm_hv_hypercall_enabled(struct kvm *kvm);
+ int kvm_hv_hypercall(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f059a73f0fd0..9799f86388e7 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5580,8 +5580,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 
+ 	clgi();
+ 
+-	local_irq_enable();
+-
+ 	/*
+ 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
+ 	 * it's non-zero. Since vmentry is serialising on affected CPUs, there
+@@ -5590,6 +5588,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	 */
+ 	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
+ 
++	local_irq_enable();
++
+ 	asm volatile (
+ 		"push %%" _ASM_BP "; \n\t"
+ 		"mov %c[rbx](%[svm]), %%" _ASM_BX " \n\t"
+@@ -5712,12 +5712,12 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
+ 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ 
+-	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+-
+ 	reload_tss(vcpu);
+ 
+ 	local_irq_disable();
+ 
++	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
++
+ 	vcpu->arch.cr2 = svm->vmcb->save.cr2;
+ 	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
+ 	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index a5caa5e5480c..24c84aa87049 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2185,10 +2185,11 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.mcg_status = data;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) &&
++		    (data || !msr_info->host_initiated))
+ 			return 1;
+ 		if (data != 0 && data != ~(u64)0)
+-			return -1;
++			return 1;
+ 		vcpu->arch.mcg_ctl = data;
+ 		break;
+ 	default:
+@@ -2576,7 +2577,7 @@ int kvm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ }
+ EXPORT_SYMBOL_GPL(kvm_get_msr);
+ 
+-static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	u64 data;
+ 	u64 mcg_cap = vcpu->arch.mcg_cap;
+@@ -2591,7 +2592,7 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		data = vcpu->arch.mcg_cap;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) && !host)
+ 			return 1;
+ 		data = vcpu->arch.mcg_ctl;
+ 		break;
+@@ -2724,7 +2725,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_MCG_CTL:
+ 	case MSR_IA32_MCG_STATUS:
+ 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
+-		return get_msr_mce(vcpu, msr_info->index, &msr_info->data);
++		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
++				   msr_info->host_initiated);
+ 	case MSR_K7_CLK_CTL:
+ 		/*
+ 		 * Provide expected ramp-up count for K7. All other
+@@ -2745,7 +2747,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case HV_X64_MSR_TSC_EMULATION_CONTROL:
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		return kvm_hv_get_msr_common(vcpu,
+-					     msr_info->index, &msr_info->data);
++					     msr_info->index, &msr_info->data,
++					     msr_info->host_initiated);
+ 		break;
+ 	case MSR_IA32_BBL_CR_CTL3:
+ 		/* This legacy MSR exists but isn't fully documented in current
+diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
+index c8c6ad0d58b8..3f435d7fca5e 100644
+--- a/arch/x86/lib/usercopy.c
++++ b/arch/x86/lib/usercopy.c
+@@ -7,6 +7,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/export.h>
+ 
++#include <asm/tlbflush.h>
++
+ /*
+  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+  * nested NMI paths are careful to preserve CR2.
+@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
+ 	if (__range_not_ok(from, n, TASK_SIZE))
+ 		return n;
+ 
++	if (!nmi_uaccess_okay())
++		return n;
++
+ 	/*
+ 	 * Even though this function is typically called from NMI/IRQ context
+ 	 * disable pagefaults so that its behaviour is consistent even when
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 2aafa6ab6103..d1f1612672c7 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -838,7 +838,7 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
+ 
+ 	printk(KERN_CONT "\n");
+ 
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ static void
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index acfab322fbe0..63a6f9fcaf20 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -923,7 +923,7 @@ unsigned long max_swapfile_size(void)
+ 
+ 	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+ 		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
+-		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		unsigned long long l1tf_limit = l1tf_pfn_limit();
+ 		/*
+ 		 * We encode swap offsets also with 3 bits below those for pfn
+ 		 * which makes the usable limit higher.
+@@ -931,7 +931,7 @@ unsigned long max_swapfile_size(void)
+ #if CONFIG_PGTABLE_LEVELS > 2
+ 		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
+ #endif
+-		pages = min_t(unsigned long, l1tf_limit, pages);
++		pages = min_t(unsigned long long, l1tf_limit, pages);
+ 	}
+ 	return pages;
+ }
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index f40ab8185d94..1e95d57760cf 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -257,7 +257,7 @@ bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+ 	/* If it's real memory always allow */
+ 	if (pfn_valid(pfn))
+ 		return true;
+-	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++	if (pfn >= l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
+ 		return false;
+ 	return true;
+ }
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 6eb1f34c3c85..cd2617285e2e 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -298,6 +298,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 
+ 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+ 
++		/* Let nmi_uaccess_okay() know that we're changing CR3. */
++		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
++		barrier();
++
+ 		if (need_flush) {
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
+@@ -328,6 +332,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 		if (next != &init_mm)
+ 			this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
+ 
++		/* Make sure we write CR3 before loaded_mm. */
++		barrier();
++
+ 		this_cpu_write(cpu_tlbstate.loaded_mm, next);
+ 		this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index cc71c63df381..984b37647b2f 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -6424,6 +6424,7 @@ void ata_host_init(struct ata_host *host, struct device *dev,
+ 	host->n_tags = ATA_MAX_QUEUE;
+ 	host->dev = dev;
+ 	host->ops = ops;
++	kref_init(&host->kref);
+ }
+ 
+ void __ata_port_probe(struct ata_port *ap)
+@@ -7391,3 +7392,5 @@ EXPORT_SYMBOL_GPL(ata_cable_80wire);
+ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
++EXPORT_SYMBOL_GPL(ata_host_get);
++EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index 9e21c49cf6be..f953cb4bb1ba 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -100,8 +100,6 @@ extern int ata_port_probe(struct ata_port *ap);
+ extern void __ata_port_probe(struct ata_port *ap);
+ extern unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 				      u8 page, void *buf, unsigned int sectors);
+-extern void ata_host_get(struct ata_host *host);
+-extern void ata_host_put(struct ata_host *host);
+ 
+ #define to_ata_port(d) container_of(d, struct ata_port, tdev)
+ 
+diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
+index 8e2e4757adcb..5a42ae4078c2 100644
+--- a/drivers/base/power/clock_ops.c
++++ b/drivers/base/power/clock_ops.c
+@@ -185,7 +185,7 @@ EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);
+ int of_pm_clk_add_clks(struct device *dev)
+ {
+ 	struct clk **clks;
+-	unsigned int i, count;
++	int i, count;
+ 	int ret;
+ 
+ 	if (!dev || !dev->of_node)
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index a78b8e7085e9..66acbd063562 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2542,7 +2542,7 @@ static int cdrom_ioctl_drive_status(struct cdrom_device_info *cdi,
+ 	if (!CDROM_CAN(CDC_SELECT_DISC) ||
+ 	    (arg == CDSL_CURRENT || arg == CDSL_NONE))
+ 		return cdi->ops->drive_status(cdi, CDSL_CURRENT);
+-	if (((int)arg >= cdi->capacity))
++	if (arg >= cdi->capacity)
+ 		return -EINVAL;
+ 	return cdrom_slot_status(cdi, arg);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index e32f6e85dc6d..3a3a7a548a85 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -29,7 +29,6 @@
+ #include <linux/mutex.h>
+ #include <linux/spinlock.h>
+ #include <linux/freezer.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/tpm_eventlog.h>
+ 
+ #include "tpm.h"
+@@ -369,10 +368,13 @@ err_len:
+ 	return -EINVAL;
+ }
+ 
+-static int tpm_request_locality(struct tpm_chip *chip)
++static int tpm_request_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
+ 	if (!chip->ops->request_locality)
+ 		return 0;
+ 
+@@ -385,10 +387,13 @@ static int tpm_request_locality(struct tpm_chip *chip)
+ 	return 0;
+ }
+ 
+-static void tpm_relinquish_locality(struct tpm_chip *chip)
++static void tpm_relinquish_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return;
++
+ 	if (!chip->ops->relinquish_locality)
+ 		return;
+ 
+@@ -399,6 +404,28 @@ static void tpm_relinquish_locality(struct tpm_chip *chip)
+ 	chip->locality = -1;
+ }
+ 
++static int tpm_cmd_ready(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->cmd_ready)
++		return 0;
++
++	return chip->ops->cmd_ready(chip);
++}
++
++static int tpm_go_idle(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->go_idle)
++		return 0;
++
++	return chip->ops->go_idle(chip);
++}
++
+ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 				struct tpm_space *space,
+ 				u8 *buf, size_t bufsiz,
+@@ -423,7 +450,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 		header->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
+ 		header->return_code = cpu_to_be32(TPM2_RC_COMMAND_CODE |
+ 						  TSS2_RESMGR_TPM_RC_LAYER);
+-		return bufsiz;
++		return sizeof(*header);
+ 	}
+ 
+ 	if (bufsiz > TPM_BUFSIZE)
+@@ -449,14 +476,15 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 	/* Store the decision as chip->locality will be changed. */
+ 	need_locality = chip->locality == -1;
+ 
+-	if (!(flags & TPM_TRANSMIT_RAW) && need_locality) {
+-		rc = tpm_request_locality(chip);
++	if (need_locality) {
++		rc = tpm_request_locality(chip, flags);
+ 		if (rc < 0)
+ 			goto out_no_locality;
+ 	}
+ 
+-	if (chip->dev.parent)
+-		pm_runtime_get_sync(chip->dev.parent);
++	rc = tpm_cmd_ready(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	rc = tpm2_prepare_space(chip, space, ordinal, buf);
+ 	if (rc)
+@@ -516,13 +544,16 @@ out_recv:
+ 	}
+ 
+ 	rc = tpm2_commit_space(chip, space, ordinal, buf, &len);
++	if (rc)
++		dev_err(&chip->dev, "tpm2_commit_space: error %d\n", rc);
+ 
+ out:
+-	if (chip->dev.parent)
+-		pm_runtime_put_sync(chip->dev.parent);
++	rc = tpm_go_idle(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	if (need_locality)
+-		tpm_relinquish_locality(chip);
++		tpm_relinquish_locality(chip, flags);
+ 
+ out_no_locality:
+ 	if (chip->ops->clk_enable != NULL)
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 4426649e431c..5f02dcd3df97 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -511,9 +511,17 @@ extern const struct file_operations tpm_fops;
+ extern const struct file_operations tpmrm_fops;
+ extern struct idr dev_nums_idr;
+ 
++/**
++ * enum tpm_transmit_flags
++ *
++ * @TPM_TRANSMIT_UNLOCKED: used to lock sequence of tpm_transmit calls.
++ * @TPM_TRANSMIT_RAW: prevent recursive calls into setup steps
++ *                    (go idle, locality,..). Always use with UNLOCKED
++ *                    as it will fail on double locking.
++ */
+ enum tpm_transmit_flags {
+-	TPM_TRANSMIT_UNLOCKED	= BIT(0),
+-	TPM_TRANSMIT_RAW	= BIT(1),
++	TPM_TRANSMIT_UNLOCKED = BIT(0),
++	TPM_TRANSMIT_RAW      = BIT(1),
+ };
+ 
+ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 6122d3276f72..11c85ed8c113 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -39,7 +39,8 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
+ 	for (i = 0; i < ARRAY_SIZE(space->session_tbl); i++) {
+ 		if (space->session_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->session_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 	}
+ }
+ 
+@@ -84,7 +85,7 @@ static int tpm2_load_context(struct tpm_chip *chip, u8 *buf,
+ 	tpm_buf_append(&tbuf, &buf[*offset], body_size);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 4,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -133,7 +134,7 @@ static int tpm2_save_context(struct tpm_chip *chip, u32 handle, u8 *buf,
+ 	tpm_buf_append_u32(&tbuf, handle);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 0,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -170,7 +171,8 @@ static void tpm2_flush_space(struct tpm_chip *chip)
+ 	for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++)
+ 		if (space->context_tbl[i] && ~space->context_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 
+ 	tpm2_flush_sessions(chip, space);
+ }
+@@ -377,7 +379,8 @@ static int tpm2_map_response_header(struct tpm_chip *chip, u32 cc, u8 *rsp,
+ 
+ 	return 0;
+ out_no_slots:
+-	tpm2_flush_context_cmd(chip, phandle, TPM_TRANSMIT_UNLOCKED);
++	tpm2_flush_context_cmd(chip, phandle,
++			       TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW);
+ 	dev_warn(&chip->dev, "%s: out of slots for 0x%08X\n", __func__,
+ 		 phandle);
+ 	return -ENOMEM;
+@@ -465,7 +468,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ 			return rc;
+ 
+ 		tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-				       TPM_TRANSMIT_UNLOCKED);
++				       TPM_TRANSMIT_UNLOCKED |
++				       TPM_TRANSMIT_RAW);
+ 		space->context_tbl[i] = ~0;
+ 	}
+ 
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 34fbc6cb097b..36952ef98f90 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -132,7 +132,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+ }
+ 
+ /**
+- * crb_go_idle - request tpm crb device to go the idle state
++ * __crb_go_idle - request tpm crb device to go the idle state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -147,7 +147,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+  *
+  * Return: 0 always
+  */
+-static int crb_go_idle(struct device *dev, struct crb_priv *priv)
++static int __crb_go_idle(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -163,11 +163,20 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+ 		dev_warn(dev, "goIdle timed out\n");
+ 		return -ETIME;
+ 	}
++
+ 	return 0;
+ }
+ 
++static int crb_go_idle(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_go_idle(dev, priv);
++}
++
+ /**
+- * crb_cmd_ready - request tpm crb device to enter ready state
++ * __crb_cmd_ready - request tpm crb device to enter ready state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -181,7 +190,7 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+  *
+  * Return: 0 on success -ETIME on timeout;
+  */
+-static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
++static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -200,6 +209,14 @@ static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ 	return 0;
+ }
+ 
++static int crb_cmd_ready(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_cmd_ready(dev, priv);
++}
++
+ static int __crb_request_locality(struct device *dev,
+ 				  struct crb_priv *priv, int loc)
+ {
+@@ -401,6 +418,8 @@ static const struct tpm_class_ops tpm_crb = {
+ 	.send = crb_send,
+ 	.cancel = crb_cancel,
+ 	.req_canceled = crb_req_canceled,
++	.go_idle  = crb_go_idle,
++	.cmd_ready = crb_cmd_ready,
+ 	.request_locality = crb_request_locality,
+ 	.relinquish_locality = crb_relinquish_locality,
+ 	.req_complete_mask = CRB_DRV_STS_COMPLETE,
+@@ -520,7 +539,7 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
+ 	 * PTT HW bug w/a: wake up the device to access
+ 	 * possibly not retained registers.
+ 	 */
+-	ret = crb_cmd_ready(dev, priv);
++	ret = __crb_cmd_ready(dev, priv);
+ 	if (ret)
+ 		goto out_relinquish_locality;
+ 
+@@ -565,7 +584,7 @@ out:
+ 	if (!ret)
+ 		priv->cmd_size = cmd_size;
+ 
+-	crb_go_idle(dev, priv);
++	__crb_go_idle(dev, priv);
+ 
+ out_relinquish_locality:
+ 
+@@ -628,32 +647,7 @@ static int crb_acpi_add(struct acpi_device *device)
+ 	chip->acpi_dev_handle = device->handle;
+ 	chip->flags = TPM_CHIP_FLAG_TPM2;
+ 
+-	rc = __crb_request_locality(dev, priv, 0);
+-	if (rc)
+-		return rc;
+-
+-	rc  = crb_cmd_ready(dev, priv);
+-	if (rc)
+-		goto out;
+-
+-	pm_runtime_get_noresume(dev);
+-	pm_runtime_set_active(dev);
+-	pm_runtime_enable(dev);
+-
+-	rc = tpm_chip_register(chip);
+-	if (rc) {
+-		crb_go_idle(dev, priv);
+-		pm_runtime_put_noidle(dev);
+-		pm_runtime_disable(dev);
+-		goto out;
+-	}
+-
+-	pm_runtime_put_sync(dev);
+-
+-out:
+-	__crb_relinquish_locality(dev, priv, 0);
+-
+-	return rc;
++	return tpm_chip_register(chip);
+ }
+ 
+ static int crb_acpi_remove(struct acpi_device *device)
+@@ -663,52 +657,11 @@ static int crb_acpi_remove(struct acpi_device *device)
+ 
+ 	tpm_chip_unregister(chip);
+ 
+-	pm_runtime_disable(dev);
+-
+ 	return 0;
+ }
+ 
+-static int __maybe_unused crb_pm_runtime_suspend(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_go_idle(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_runtime_resume(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_cmd_ready(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_suspend(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = tpm_pm_suspend(dev);
+-	if (ret)
+-		return ret;
+-
+-	return crb_pm_runtime_suspend(dev);
+-}
+-
+-static int __maybe_unused crb_pm_resume(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = crb_pm_runtime_resume(dev);
+-	if (ret)
+-		return ret;
+-
+-	return tpm_pm_resume(dev);
+-}
+-
+ static const struct dev_pm_ops crb_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(crb_pm_suspend, crb_pm_resume)
+-	SET_RUNTIME_PM_OPS(crb_pm_runtime_suspend, crb_pm_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(tpm_pm_suspend, tpm_pm_resume)
+ };
+ 
+ static const struct acpi_device_id crb_device_ids[] = {
+diff --git a/drivers/clk/clk-npcm7xx.c b/drivers/clk/clk-npcm7xx.c
+index 740af90a9508..c5edf8f2fd19 100644
+--- a/drivers/clk/clk-npcm7xx.c
++++ b/drivers/clk/clk-npcm7xx.c
+@@ -558,8 +558,8 @@ static void __init npcm7xx_clk_init(struct device_node *clk_np)
+ 	if (!clk_base)
+ 		goto npcm7xx_init_error;
+ 
+-	npcm7xx_clk_data = kzalloc(sizeof(*npcm7xx_clk_data->hws) *
+-		NPCM7XX_NUM_CLOCKS + sizeof(npcm7xx_clk_data), GFP_KERNEL);
++	npcm7xx_clk_data = kzalloc(struct_size(npcm7xx_clk_data, hws,
++				   NPCM7XX_NUM_CLOCKS), GFP_KERNEL);
+ 	if (!npcm7xx_clk_data)
+ 		goto npcm7xx_init_np_err;
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index bca10d618f0a..2a8634a52856 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -631,7 +631,7 @@ static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ 	MUX(0, "clk_i2sout_src", mux_i2sch_p, CLK_SET_RATE_PARENT,
+ 			RK3399_CLKSEL_CON(31), 0, 2, MFLAGS),
+ 	COMPOSITE_NODIV(SCLK_I2S_8CH_OUT, "clk_i2sout", mux_i2sout_p, CLK_SET_RATE_PARENT,
+-			RK3399_CLKSEL_CON(30), 8, 2, MFLAGS,
++			RK3399_CLKSEL_CON(31), 2, 1, MFLAGS,
+ 			RK3399_CLKGATE_CON(8), 12, GFLAGS),
+ 
+ 	/* uart */
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index 55c0cc309198..7588a9eb0ee0 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -112,7 +112,7 @@ udl_fb_user_fb_create(struct drm_device *dev,
+ 		      struct drm_file *file,
+ 		      const struct drm_mode_fb_cmd2 *mode_cmd);
+ 
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset, u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr);
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index d5583190f3e4..8746eeeec44d 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -90,7 +90,10 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 	int bytes_identical = 0;
+ 	struct urb *urb;
+ 	int aligned_x;
+-	int bpp = fb->base.format->cpp[0];
++	int log_bpp;
++
++	BUG_ON(!is_power_of_2(fb->base.format->cpp[0]));
++	log_bpp = __ffs(fb->base.format->cpp[0]);
+ 
+ 	if (!fb->active_16)
+ 		return 0;
+@@ -125,12 +128,12 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 
+ 	for (i = y; i < y + height ; i++) {
+ 		const int line_offset = fb->base.pitches[0] * i;
+-		const int byte_offset = line_offset + (x * bpp);
+-		const int dev_byte_offset = (fb->base.width * bpp * i) + (x * bpp);
+-		if (udl_render_hline(dev, bpp, &urb,
++		const int byte_offset = line_offset + (x << log_bpp);
++		const int dev_byte_offset = (fb->base.width * i + x) << log_bpp;
++		if (udl_render_hline(dev, log_bpp, &urb,
+ 				     (char *) fb->obj->vmapping,
+ 				     &cmd, byte_offset, dev_byte_offset,
+-				     width * bpp,
++				     width << log_bpp,
+ 				     &bytes_identical, &bytes_sent))
+ 			goto error;
+ 	}
+@@ -149,7 +152,7 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ error:
+ 	atomic_add(bytes_sent, &udl->bytes_sent);
+ 	atomic_add(bytes_identical, &udl->bytes_identical);
+-	atomic_add(width*height*bpp, &udl->bytes_rendered);
++	atomic_add((width * height) << log_bpp, &udl->bytes_rendered);
+ 	end_cycles = get_cycles();
+ 	atomic_add(((unsigned int) ((end_cycles - start_cycles)
+ 		    >> 10)), /* Kcycles */
+@@ -221,7 +224,7 @@ static int udl_fb_open(struct fb_info *info, int user)
+ 
+ 		struct fb_deferred_io *fbdefio;
+ 
+-		fbdefio = kmalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
++		fbdefio = kzalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
+ 
+ 		if (fbdefio) {
+ 			fbdefio->delay = DL_DEFIO_WRITE_DELAY;
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index d518de8f496b..7e9ad926926a 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -170,18 +170,13 @@ static void udl_free_urb_list(struct drm_device *dev)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	DRM_DEBUG("Waiting for completes and freeing all render urbs\n");
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at shutdown*/
+-		ret = down_interruptible(&udl->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&udl->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&udl->urbs.lock, flags);
+ 
+@@ -205,17 +200,22 @@ static void udl_free_urb_list(struct drm_device *dev)
+ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ {
+ 	struct udl_device *udl = dev->dev_private;
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&udl->urbs.lock);
+ 
++retry:
+ 	udl->urbs.size = size;
+ 	INIT_LIST_HEAD(&udl->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&udl->urbs.limit_sem, 0);
++	udl->urbs.count = 0;
++	udl->urbs.available = 0;
++
++	while (udl->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(struct urb_node), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+@@ -231,11 +231,16 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(udl->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(udl->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				udl_free_urb_list(dev);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -246,16 +251,14 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &udl->urbs.list);
+ 
+-		i++;
++		up(&udl->urbs.limit_sem);
++		udl->urbs.count++;
++		udl->urbs.available++;
+ 	}
+ 
+-	sema_init(&udl->urbs.limit_sem, i);
+-	udl->urbs.count = i;
+-	udl->urbs.available = i;
+-
+-	DRM_DEBUG("allocated %d %d byte urbs\n", i, (int) size);
++	DRM_DEBUG("allocated %d %d byte urbs\n", udl->urbs.count, (int) size);
+ 
+-	return i;
++	return udl->urbs.count;
+ }
+ 
+ struct urb *udl_get_urb(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/udl/udl_transfer.c b/drivers/gpu/drm/udl/udl_transfer.c
+index b992644c17e6..f3331d33547a 100644
+--- a/drivers/gpu/drm/udl/udl_transfer.c
++++ b/drivers/gpu/drm/udl/udl_transfer.c
+@@ -83,12 +83,12 @@ static inline u16 pixel32_to_be16(const uint32_t pixel)
+ 		((pixel >> 8) & 0xf800));
+ }
+ 
+-static inline u16 get_pixel_val16(const uint8_t *pixel, int bpp)
++static inline u16 get_pixel_val16(const uint8_t *pixel, int log_bpp)
+ {
+-	u16 pixel_val16 = 0;
+-	if (bpp == 2)
++	u16 pixel_val16;
++	if (log_bpp == 1)
+ 		pixel_val16 = *(const uint16_t *)pixel;
+-	else if (bpp == 4)
++	else
+ 		pixel_val16 = pixel32_to_be16(*(const uint32_t *)pixel);
+ 	return pixel_val16;
+ }
+@@ -125,8 +125,9 @@ static void udl_compress_hline16(
+ 	const u8 *const pixel_end,
+ 	uint32_t *device_address_ptr,
+ 	uint8_t **command_buffer_ptr,
+-	const uint8_t *const cmd_buffer_end, int bpp)
++	const uint8_t *const cmd_buffer_end, int log_bpp)
+ {
++	const int bpp = 1 << log_bpp;
+ 	const u8 *pixel = *pixel_start_ptr;
+ 	uint32_t dev_addr  = *device_address_ptr;
+ 	uint8_t *cmd = *command_buffer_ptr;
+@@ -153,12 +154,12 @@ static void udl_compress_hline16(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
+-					(unsigned long)(pixel_end - pixel) / bpp,
+-					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) * bpp;
++		cmd_pixel_end = pixel + (min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel) >> log_bpp,
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) << log_bpp);
+ 
+ 		prefetch_range((void *) pixel, cmd_pixel_end - pixel);
+-		pixel_val16 = get_pixel_val16(pixel, bpp);
++		pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const u8 *const start = pixel;
+@@ -170,7 +171,7 @@ static void udl_compress_hline16(
+ 			pixel += bpp;
+ 
+ 			while (pixel < cmd_pixel_end) {
+-				pixel_val16 = get_pixel_val16(pixel, bpp);
++				pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 				if (pixel_val16 != repeating_pixel_val16)
+ 					break;
+ 				pixel += bpp;
+@@ -179,10 +180,10 @@ static void udl_compress_hline16(
+ 			if (unlikely(pixel > start + bpp)) {
+ 				/* go back and fill in raw pixel count */
+ 				*raw_pixels_count_byte = (((start -
+-						raw_pixel_start) / bpp) + 1) & 0xFF;
++						raw_pixel_start) >> log_bpp) + 1) & 0xFF;
+ 
+ 				/* immediately after raw data is repeat byte */
+-				*cmd++ = (((pixel - start) / bpp) - 1) & 0xFF;
++				*cmd++ = (((pixel - start) >> log_bpp) - 1) & 0xFF;
+ 
+ 				/* Then start another raw pixel span */
+ 				raw_pixel_start = pixel;
+@@ -192,14 +193,14 @@ static void udl_compress_hline16(
+ 
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+-			*raw_pixels_count_byte = ((pixel-raw_pixel_start) / bpp) & 0xFF;
++			*raw_pixels_count_byte = ((pixel - raw_pixel_start) >> log_bpp) & 0xFF;
+ 		} else {
+ 			/* undo unused byte */
+ 			cmd--;
+ 		}
+ 
+-		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) / bpp) & 0xFF;
+-		dev_addr += ((pixel - cmd_pixel_start) / bpp) * 2;
++		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) >> log_bpp) & 0xFF;
++		dev_addr += ((pixel - cmd_pixel_start) >> log_bpp) * 2;
+ 	}
+ 
+ 	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
+@@ -222,19 +223,19 @@ static void udl_compress_hline16(
+  * (that we can only write to, slowly, and can never read), and (optionally)
+  * our shadow copy that tracks what's been sent to that hardware buffer.
+  */
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset,
+ 		     u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr)
+ {
+ 	const u8 *line_start, *line_end, *next_pixel;
+-	u32 base16 = 0 + (device_byte_offset / bpp) * 2;
++	u32 base16 = 0 + (device_byte_offset >> log_bpp) * 2;
+ 	struct urb *urb = *urb_ptr;
+ 	u8 *cmd = *urb_buf_ptr;
+ 	u8 *cmd_end = (u8 *) urb->transfer_buffer + urb->transfer_buffer_length;
+ 
+-	BUG_ON(!(bpp == 2 || bpp == 4));
++	BUG_ON(!(log_bpp == 1 || log_bpp == 2));
+ 
+ 	line_start = (u8 *) (front + byte_offset);
+ 	next_pixel = line_start;
+@@ -244,7 +245,7 @@ int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
+ 
+ 		udl_compress_hline16(&next_pixel,
+ 			     line_end, &base16,
+-			     (u8 **) &cmd, (u8 *) cmd_end, bpp);
++			     (u8 **) &cmd, (u8 *) cmd_end, log_bpp);
+ 
+ 		if (cmd >= cmd_end) {
+ 			int len = cmd - (u8 *) urb->transfer_buffer;
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 17c6460ae351..577e2ede5a1a 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -105,6 +105,8 @@ static const struct tctl_offset tctl_offset_table[] = {
+ 	{ 0x17, "AMD Ryzen Threadripper 1950", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1920", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1910", 10000 },
++	{ 0x17, "AMD Ryzen Threadripper 2950X", 27000 },
++	{ 0x17, "AMD Ryzen Threadripper 2990WX", 27000 },
+ };
+ 
+ static void read_htcreg_pci(struct pci_dev *pdev, u32 *regval)
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index f9d1349c3286..b89e8379d898 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -63,6 +63,7 @@
+ #include <linux/bitops.h>
+ #include <linux/dmi.h>
+ #include <linux/io.h>
++#include <linux/nospec.h>
+ #include "lm75.h"
+ 
+ #define USE_ALTERNATE
+@@ -2689,6 +2690,7 @@ store_pwm_weight_temp_sel(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 	if (val > NUM_TEMP)
+ 		return -EINVAL;
++	val = array_index_nospec(val, NUM_TEMP + 1);
+ 	if (val && (!(data->have_temp & BIT(val - 1)) ||
+ 		    !data->temp_src[val - 1]))
+ 		return -EINVAL;
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index f7a96bcf94a6..5349e22b5c78 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -2103,12 +2103,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	if (smmu->version == ARM_SMMU_V2 &&
+-	    smmu->num_context_banks != smmu->num_context_irqs) {
+-		dev_err(dev,
+-			"found only %d context interrupt(s) but %d required\n",
+-			smmu->num_context_irqs, smmu->num_context_banks);
+-		return -ENODEV;
++	if (smmu->version == ARM_SMMU_V2) {
++		if (smmu->num_context_banks > smmu->num_context_irqs) {
++			dev_err(dev,
++			      "found only %d context irq(s) but %d required\n",
++			      smmu->num_context_irqs, smmu->num_context_banks);
++			return -ENODEV;
++		}
++
++		/* Ignore superfluous interrupts */
++		smmu->num_context_irqs = smmu->num_context_banks;
+ 	}
+ 
+ 	for (i = 0; i < smmu->num_global_irqs; ++i) {
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index 7465f17e1559..38175ebd92d4 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -312,7 +312,6 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
+ 		}
+ 	}
+ 
+-	*offset = 0;
+ 	cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
+ 	if (!cb) {
+ 		rets = -ENOMEM;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index f4a5a317d4ae..e1086a010b88 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -740,7 +740,7 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
+ 	for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
+ 		nand_read_page_op(chip, page, s * eccsize, NULL, 0);
+ 		chip->ecc.hwctl(mtd, NAND_ECC_READ);
+-		chip->read_buf(mtd, p, eccsize);
++		nand_read_data_op(chip, p, eccsize, false);
+ 
+ 		for (j = 0; j < eccbytes;) {
+ 			struct mtd_oob_region oobregion;
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index ebb1d141b900..c88588815ca1 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2677,6 +2677,21 @@ static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
+ 	return 0;
+ }
+ 
++static void marvell_nfc_reset(struct marvell_nfc *nfc)
++{
++	/*
++	 * ECC operations and interruptions are only enabled when specifically
++	 * needed. ECC shall not be activated in the early stages (fails probe).
++	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
++	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
++	 * offset in the read page and this will fail the protection.
++	 */
++	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
++		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
++	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
++	writel_relaxed(0, nfc->regs + NDECCCTRL);
++}
++
+ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ {
+ 	struct device_node *np = nfc->dev->of_node;
+@@ -2715,17 +2730,7 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ 	if (!nfc->caps->is_nfcv2)
+ 		marvell_nfc_init_dma(nfc);
+ 
+-	/*
+-	 * ECC operations and interruptions are only enabled when specifically
+-	 * needed. ECC shall not be activated in the early stages (fails probe).
+-	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
+-	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
+-	 * offset in the read page and this will fail the protection.
+-	 */
+-	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
+-		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
+-	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
+-	writel_relaxed(0, nfc->regs + NDECCCTRL);
++	marvell_nfc_reset(nfc);
+ 
+ 	return 0;
+ }
+@@ -2840,6 +2845,51 @@ static int marvell_nfc_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused marvell_nfc_suspend(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	struct marvell_nand_chip *chip;
++
++	list_for_each_entry(chip, &nfc->chips, node)
++		marvell_nfc_wait_ndrun(&chip->chip);
++
++	clk_disable_unprepare(nfc->reg_clk);
++	clk_disable_unprepare(nfc->core_clk);
++
++	return 0;
++}
++
++static int __maybe_unused marvell_nfc_resume(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_prepare_enable(nfc->core_clk);
++	if (ret < 0)
++		return ret;
++
++	if (!IS_ERR(nfc->reg_clk)) {
++		ret = clk_prepare_enable(nfc->reg_clk);
++		if (ret < 0)
++			return ret;
++	}
++
++	/*
++	 * Reset nfc->selected_chip so the next command will cause the timing
++	 * registers to be restored in marvell_nfc_select_chip().
++	 */
++	nfc->selected_chip = NULL;
++
++	/* Reset registers that have lost their contents */
++	marvell_nfc_reset(nfc);
++
++	return 0;
++}
++
++static const struct dev_pm_ops marvell_nfc_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(marvell_nfc_suspend, marvell_nfc_resume)
++};
++
+ static const struct marvell_nfc_caps marvell_armada_8k_nfc_caps = {
+ 	.max_cs_nb = 4,
+ 	.max_rb_nb = 2,
+@@ -2924,6 +2974,7 @@ static struct platform_driver marvell_nfc_driver = {
+ 	.driver	= {
+ 		.name		= "marvell-nfc",
+ 		.of_match_table = marvell_nfc_of_ids,
++		.pm		= &marvell_nfc_pm_ops,
+ 	},
+ 	.id_table = marvell_nfc_platform_ids,
+ 	.probe = marvell_nfc_probe,
+diff --git a/drivers/mtd/nand/raw/nand_hynix.c b/drivers/mtd/nand/raw/nand_hynix.c
+index d542908a0ebb..766df4134482 100644
+--- a/drivers/mtd/nand/raw/nand_hynix.c
++++ b/drivers/mtd/nand/raw/nand_hynix.c
+@@ -100,6 +100,16 @@ static int hynix_nand_reg_write_op(struct nand_chip *chip, u8 addr, u8 val)
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+ 	u16 column = ((u16)addr << 8) | addr;
+ 
++	if (chip->exec_op) {
++		struct nand_op_instr instrs[] = {
++			NAND_OP_ADDR(1, &addr, 0),
++			NAND_OP_8BIT_DATA_OUT(1, &val, 0),
++		};
++		struct nand_operation op = NAND_OPERATION(instrs);
++
++		return nand_exec_op(chip, &op);
++	}
++
+ 	chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
+ 	chip->write_byte(mtd, val);
+ 
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 6a5519f0ff25..49b4e70fefe7 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -213,6 +213,8 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+ #define QPIC_PER_CW_CMD_SGL		32
+ #define QPIC_PER_CW_DATA_SGL		8
+ 
++#define QPIC_NAND_COMPLETION_TIMEOUT	msecs_to_jiffies(2000)
++
+ /*
+  * Flags used in DMA descriptor preparation helper functions
+  * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
+@@ -245,6 +247,11 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+  * @tx_sgl_start - start index in data sgl for tx.
+  * @rx_sgl_pos - current index in data sgl for rx.
+  * @rx_sgl_start - start index in data sgl for rx.
+ * @wait_second_completion - wait for the second DMA desc completion before
+ *			     signaling NAND transfer completion.
++ * @txn_done - completion for NAND transfer.
++ * @last_data_desc - last DMA desc in data channel (tx/rx).
++ * @last_cmd_desc - last DMA desc in command channel.
+  */
+ struct bam_transaction {
+ 	struct bam_cmd_element *bam_ce;
+@@ -258,6 +265,10 @@ struct bam_transaction {
+ 	u32 tx_sgl_start;
+ 	u32 rx_sgl_pos;
+ 	u32 rx_sgl_start;
++	bool wait_second_completion;
++	struct completion txn_done;
++	struct dma_async_tx_descriptor *last_data_desc;
++	struct dma_async_tx_descriptor *last_cmd_desc;
+ };
+ 
+ /*
+@@ -504,6 +515,8 @@ alloc_bam_transaction(struct qcom_nand_controller *nandc)
+ 
+ 	bam_txn->data_sgl = bam_txn_buf;
+ 
++	init_completion(&bam_txn->txn_done);
++
+ 	return bam_txn;
+ }
+ 
+@@ -523,11 +536,33 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
+ 	bam_txn->tx_sgl_start = 0;
+ 	bam_txn->rx_sgl_pos = 0;
+ 	bam_txn->rx_sgl_start = 0;
++	bam_txn->last_data_desc = NULL;
++	bam_txn->wait_second_completion = false;
+ 
+ 	sg_init_table(bam_txn->cmd_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_CMD_SGL);
+ 	sg_init_table(bam_txn->data_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_DATA_SGL);
++
++	reinit_completion(&bam_txn->txn_done);
++}
++
++/* Callback for DMA descriptor completion */
++static void qpic_bam_dma_done(void *data)
++{
++	struct bam_transaction *bam_txn = data;
++
++	/*
++	 * A NAND data transfer generates two callbacks: one for the
++	 * command channel and one for the data channel. If the current
++	 * transaction has data descriptors (i.e. wait_second_completion
++	 * is true), clear the flag and wait for the second DMA
++	 * descriptor completion.
++	 */
++	if (bam_txn->wait_second_completion)
++		bam_txn->wait_second_completion = false;
++	else
++		complete(&bam_txn->txn_done);
+ }
+ 
+ static inline struct qcom_nand_host *to_qcom_nand_host(struct nand_chip *chip)
+@@ -756,6 +791,12 @@ static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
+ 
+ 	desc->dma_desc = dma_desc;
+ 
++	/* update last data/command descriptor */
++	if (chan == nandc->cmd_chan)
++		bam_txn->last_cmd_desc = dma_desc;
++	else
++		bam_txn->last_data_desc = dma_desc;
++
+ 	list_add_tail(&desc->node, &nandc->desc_list);
+ 
+ 	return 0;
+@@ -1273,10 +1314,20 @@ static int submit_descs(struct qcom_nand_controller *nandc)
+ 		cookie = dmaengine_submit(desc->dma_desc);
+ 
+ 	if (nandc->props->is_bam) {
++		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
++		bam_txn->last_cmd_desc->callback_param = bam_txn;
++		if (bam_txn->last_data_desc) {
++			bam_txn->last_data_desc->callback = qpic_bam_dma_done;
++			bam_txn->last_data_desc->callback_param = bam_txn;
++			bam_txn->wait_second_completion = true;
++		}
++
+ 		dma_async_issue_pending(nandc->tx_chan);
+ 		dma_async_issue_pending(nandc->rx_chan);
++		dma_async_issue_pending(nandc->cmd_chan);
+ 
+-		if (dma_sync_wait(nandc->cmd_chan, cookie) != DMA_COMPLETE)
++		if (!wait_for_completion_timeout(&bam_txn->txn_done,
++						 QPIC_NAND_COMPLETION_TIMEOUT))
+ 			return -ETIMEDOUT;
+ 	} else {
+ 		if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)
+diff --git a/drivers/net/wireless/broadcom/b43/leds.c b/drivers/net/wireless/broadcom/b43/leds.c
+index cb987c2ecc6b..87131f663292 100644
+--- a/drivers/net/wireless/broadcom/b43/leds.c
++++ b/drivers/net/wireless/broadcom/b43/leds.c
+@@ -131,7 +131,7 @@ static int b43_register_led(struct b43_wldev *dev, struct b43_led *led,
+ 	led->wl = dev->wl;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 	atomic_set(&led->state, 0);
+ 
+ 	led->led_dev.name = led->name;
+diff --git a/drivers/net/wireless/broadcom/b43legacy/leds.c b/drivers/net/wireless/broadcom/b43legacy/leds.c
+index fd4565389c77..bc922118b6ac 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/leds.c
++++ b/drivers/net/wireless/broadcom/b43legacy/leds.c
+@@ -101,7 +101,7 @@ static int b43legacy_register_led(struct b43legacy_wldev *dev,
+ 	led->dev = dev;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 
+ 	led->led_dev.name = led->name;
+ 	led->led_dev.default_trigger = default_trigger;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index ddd441b1516a..e10b0d20c4a7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -316,6 +316,14 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+ 		old_value = *dbbuf_db;
+ 		*dbbuf_db = value;
+ 
++		/*
++		 * Ensure that the doorbell is updated before reading the event
++		 * index from memory.  The controller needs to provide similar
++		 * ordering to ensure the event index is updated before reading
++		 * the doorbell.
++		 */
++		mb();
++
+ 		if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value))
+ 			return false;
+ 	}
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index c3bdd90b1422..deb7870b3d1a 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -429,7 +429,7 @@ static void imx1_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > info->ngroups)
++	if (group >= info->ngroups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 45b7cb01f410..307403decf76 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1133,10 +1133,10 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		},
+ 	},
+ 	{
+-		.ident = "Lenovo Legion Y520-15IKBN",
++		.ident = "Lenovo Legion Y520-15IKB",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKBN"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKB"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 8e3d0146ff8c..04791ea5d97b 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -895,7 +895,6 @@ static int wmi_dev_probe(struct device *dev)
+ 	struct wmi_driver *wdriver =
+ 		container_of(dev->driver, struct wmi_driver, driver);
+ 	int ret = 0;
+-	int count;
+ 	char *buf;
+ 
+ 	if (ACPI_FAILURE(wmi_method_enable(wblock, 1)))
+@@ -917,9 +916,8 @@ static int wmi_dev_probe(struct device *dev)
+ 			goto probe_failure;
+ 		}
+ 
+-		count = get_order(wblock->req_buf_size);
+-		wblock->handler_data = (void *)__get_free_pages(GFP_KERNEL,
+-								count);
++		wblock->handler_data = kmalloc(wblock->req_buf_size,
++					       GFP_KERNEL);
+ 		if (!wblock->handler_data) {
+ 			ret = -ENOMEM;
+ 			goto probe_failure;
+@@ -964,8 +962,7 @@ static int wmi_dev_remove(struct device *dev)
+ 	if (wdriver->filter_callback) {
+ 		misc_deregister(&wblock->char_dev);
+ 		kfree(wblock->char_dev.name);
+-		free_pages((unsigned long)wblock->handler_data,
+-			   get_order(wblock->req_buf_size));
++		kfree(wblock->handler_data);
+ 	}
+ 
+ 	if (wdriver->remove)
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index 28dc056eaafa..bc462d1ec963 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -241,10 +241,10 @@ static int gab_probe(struct platform_device *pdev)
+ 	struct power_supply_desc *psy_desc;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct gab_platform_data *pdata = pdev->dev.platform_data;
+-	enum power_supply_property *properties;
+ 	int ret = 0;
+ 	int chan;
+-	int index = 0;
++	int index = ARRAY_SIZE(gab_props);
++	bool any = false;
+ 
+ 	adc_bat = devm_kzalloc(&pdev->dev, sizeof(*adc_bat), GFP_KERNEL);
+ 	if (!adc_bat) {
+@@ -278,8 +278,6 @@ static int gab_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	memcpy(psy_desc->properties, gab_props, sizeof(gab_props));
+-	properties = (enum power_supply_property *)
+-			((char *)psy_desc->properties + sizeof(gab_props));
+ 
+ 	/*
+ 	 * getting channel from iio and copying the battery properties
+@@ -293,15 +291,22 @@ static int gab_probe(struct platform_device *pdev)
+ 			adc_bat->channel[chan] = NULL;
+ 		} else {
+ 			/* copying properties for supported channels only */
+-			memcpy(properties + sizeof(*(psy_desc->properties)) * index,
+-					&gab_dyn_props[chan],
+-					sizeof(gab_dyn_props[chan]));
+-			index++;
++			int index2;
++
++			for (index2 = 0; index2 < index; index2++) {
++				if (psy_desc->properties[index2] ==
++				    gab_dyn_props[chan])
++					break;	/* already known */
++			}
++			if (index2 == index)	/* really new */
++				psy_desc->properties[index++] =
++					gab_dyn_props[chan];
++			any = true;
+ 		}
+ 	}
+ 
+ 	/* none of the channels are supported so let's bail out */
+-	if (index == 0) {
++	if (!any) {
+ 		ret = -ENODEV;
+ 		goto second_mem_fail;
+ 	}
+@@ -312,7 +317,7 @@ static int gab_probe(struct platform_device *pdev)
+ 	 * as come channels may be not be supported by the device.So
+ 	 * we need to take care of that.
+ 	 */
+-	psy_desc->num_properties = ARRAY_SIZE(gab_props) + index;
++	psy_desc->num_properties = index;
+ 
+ 	adc_bat->psy = power_supply_register(&pdev->dev, psy_desc, &psy_cfg);
+ 	if (IS_ERR(adc_bat->psy)) {
+diff --git a/drivers/regulator/arizona-ldo1.c b/drivers/regulator/arizona-ldo1.c
+index f6d6a4ad9e8a..e976d073f28d 100644
+--- a/drivers/regulator/arizona-ldo1.c
++++ b/drivers/regulator/arizona-ldo1.c
+@@ -36,6 +36,8 @@ struct arizona_ldo1 {
+ 
+ 	struct regulator_consumer_supply supply;
+ 	struct regulator_init_data init_data;
++
++	struct gpio_desc *ena_gpiod;
+ };
+ 
+ static int arizona_ldo1_hc_list_voltage(struct regulator_dev *rdev,
+@@ -253,12 +255,17 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	/* We assume that high output = regulator off */
+-	config.ena_gpiod = devm_gpiod_get_optional(&pdev->dev, "wlf,ldoena",
+-						   GPIOD_OUT_HIGH);
++	/* We assume that high output = regulator off
++	 * Don't use devm here, since we need to acquire the GPIO against the
++	 * parent device, so cleanup would otherwise happen at the wrong time.
++	 */
++	config.ena_gpiod = gpiod_get_optional(parent_dev, "wlf,ldoena",
++					      GPIOD_OUT_LOW);
+ 	if (IS_ERR(config.ena_gpiod))
+ 		return PTR_ERR(config.ena_gpiod);
+ 
++	ldo1->ena_gpiod = config.ena_gpiod;
++
+ 	if (pdata->init_data)
+ 		config.init_data = pdata->init_data;
+ 	else
+@@ -276,6 +283,9 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 	of_node_put(config.of_node);
+ 
+ 	if (IS_ERR(ldo1->regulator)) {
++		if (config.ena_gpiod)
++			gpiod_put(config.ena_gpiod);
++
+ 		ret = PTR_ERR(ldo1->regulator);
+ 		dev_err(&pdev->dev, "Failed to register LDO1 supply: %d\n",
+ 			ret);
+@@ -334,8 +344,19 @@ static int arizona_ldo1_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
++static int arizona_ldo1_remove(struct platform_device *pdev)
++{
++	struct arizona_ldo1 *ldo1 = platform_get_drvdata(pdev);
++
++	if (ldo1->ena_gpiod)
++		gpiod_put(ldo1->ena_gpiod);
++
++	return 0;
++}
++
+ static struct platform_driver arizona_ldo1_driver = {
+ 	.probe = arizona_ldo1_probe,
++	.remove = arizona_ldo1_remove,
+ 	.driver		= {
+ 		.name	= "arizona-ldo1",
+ 	},
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index f4ca72dd862f..9c7d9da42ba0 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -631,21 +631,20 @@ static inline unsigned long qdio_aob_for_buffer(struct qdio_output_q *q,
+ 	unsigned long phys_aob = 0;
+ 
+ 	if (!q->use_cq)
+-		goto out;
++		return 0;
+ 
+ 	if (!q->aobs[bufnr]) {
+ 		struct qaob *aob = qdio_allocate_aob();
+ 		q->aobs[bufnr] = aob;
+ 	}
+ 	if (q->aobs[bufnr]) {
+-		q->sbal_state[bufnr].flags = QDIO_OUTBUF_STATE_FLAG_NONE;
+ 		q->sbal_state[bufnr].aob = q->aobs[bufnr];
+ 		q->aobs[bufnr]->user1 = (u64) q->sbal_state[bufnr].user;
+ 		phys_aob = virt_to_phys(q->aobs[bufnr]);
+ 		WARN_ON_ONCE(phys_aob & 0xFF);
+ 	}
+ 
+-out:
++	q->sbal_state[bufnr].flags = 0;
+ 	return phys_aob;
+ }
+ 
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index ff1d612f6fb9..41cdda7a926b 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -557,34 +557,46 @@ int sas_ata_init(struct domain_device *found_dev)
+ {
+ 	struct sas_ha_struct *ha = found_dev->port->ha;
+ 	struct Scsi_Host *shost = ha->core.shost;
++	struct ata_host *ata_host;
+ 	struct ata_port *ap;
+ 	int rc;
+ 
+-	ata_host_init(&found_dev->sata_dev.ata_host, ha->dev, &sas_sata_ops);
+-	ap = ata_sas_port_alloc(&found_dev->sata_dev.ata_host,
+-				&sata_port_info,
+-				shost);
++	ata_host = kzalloc(sizeof(*ata_host), GFP_KERNEL);
++	if (!ata_host) {
++		SAS_DPRINTK("ata host alloc failed.\n");
++		return -ENOMEM;
++	}
++
++	ata_host_init(ata_host, ha->dev, &sas_sata_ops);
++
++	ap = ata_sas_port_alloc(ata_host, &sata_port_info, shost);
+ 	if (!ap) {
+ 		SAS_DPRINTK("ata_sas_port_alloc failed.\n");
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto free_host;
+ 	}
+ 
+ 	ap->private_data = found_dev;
+ 	ap->cbl = ATA_CBL_SATA;
+ 	ap->scsi_host = shost;
+ 	rc = ata_sas_port_init(ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
+-	rc = ata_sas_tport_add(found_dev->sata_dev.ata_host.dev, ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
++	if (rc)
++		goto destroy_port;
++
++	rc = ata_sas_tport_add(ata_host->dev, ap);
++	if (rc)
++		goto destroy_port;
++
++	found_dev->sata_dev.ata_host = ata_host;
+ 	found_dev->sata_dev.ap = ap;
+ 
+ 	return 0;
++
++destroy_port:
++	ata_sas_port_destroy(ap);
++free_host:
++	ata_host_put(ata_host);
++	return rc;
+ }
+ 
+ void sas_ata_task_abort(struct sas_task *task)
+diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
+index 1ffca28fe6a8..0148ae62a52a 100644
+--- a/drivers/scsi/libsas/sas_discover.c
++++ b/drivers/scsi/libsas/sas_discover.c
+@@ -316,6 +316,8 @@ void sas_free_device(struct kref *kref)
+ 	if (dev_is_sata(dev) && dev->sata_dev.ap) {
+ 		ata_sas_tport_delete(dev->sata_dev.ap);
+ 		ata_sas_port_destroy(dev->sata_dev.ap);
++		ata_host_put(dev->sata_dev.ata_host);
++		dev->sata_dev.ata_host = NULL;
+ 		dev->sata_dev.ap = NULL;
+ 	}
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index e44c91edf92d..3c8c17c0b547 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -3284,6 +3284,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
+ 	st->cb_idx = 0xFF;
+ 	st->direct_io = 0;
+ 	atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
++	st->smid = 0;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index b8d131a455d0..f3d727076e1f 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -1489,7 +1489,7 @@ mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+ 		scmd = scsi_host_find_tag(ioc->shost, unique_tag);
+ 		if (scmd) {
+ 			st = scsi_cmd_priv(scmd);
+-			if (st->cb_idx == 0xFF)
++			if (st->cb_idx == 0xFF || st->smid == 0)
+ 				scmd = NULL;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+index 3a143bb5ca72..6c71b20af9e3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+@@ -1936,12 +1936,12 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+ 		pr_info(MPT3SAS_FMT "%s: host reset in progress!\n",
+ 		    __func__, ioc->name);
+ 		rc = -EFAULT;
+-		goto out;
++		goto job_done;
+ 	}
+ 
+ 	rc = mutex_lock_interruptible(&ioc->transport_cmds.mutex);
+ 	if (rc)
+-		goto out;
++		goto job_done;
+ 
+ 	if (ioc->transport_cmds.status != MPT3_CMD_NOT_USED) {
+ 		pr_err(MPT3SAS_FMT "%s: transport_cmds in use\n", ioc->name,
+@@ -2066,6 +2066,7 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+  out:
+ 	ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
+ 	mutex_unlock(&ioc->transport_cmds.mutex);
++job_done:
+ 	bsg_job_done(job, rc, reslen);
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 1b19b954bbae..ec550ee0108e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -382,7 +382,7 @@ qla2x00_async_adisc_sp_done(void *ptr, int res)
+ 	    "Async done-%s res %x %8phC\n",
+ 	    sp->name, res, sp->fcport->port_name);
+ 
+-	sp->fcport->flags &= ~FCF_ASYNC_SENT;
++	sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 
+ 	memset(&ea, 0, sizeof(ea));
+ 	ea.event = FCME_ADISC_DONE;
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index dd93a22fe843..667055cbe155 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2656,6 +2656,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	ql_dbg(ql_dbg_io, vha, 0x3073,
+ 	    "Enter: PLOGI portid=%06x\n", fcport->d_id.b24);
+ 
++	fcport->flags |= FCF_ASYNC_SENT;
+ 	sp->type = SRB_ELS_DCMD;
+ 	sp->name = "ELS_DCMD";
+ 	sp->fcport = fcport;
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 7943b762c12d..87ef6714845b 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -722,8 +722,24 @@ static ssize_t
+ sdev_store_delete(struct device *dev, struct device_attribute *attr,
+ 		  const char *buf, size_t count)
+ {
+-	if (device_remove_file_self(dev, attr))
+-		scsi_remove_device(to_scsi_device(dev));
++	struct kernfs_node *kn;
++
++	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++	WARN_ON_ONCE(!kn);
++	/*
++	 * Concurrent writes into the "delete" sysfs attribute may trigger
++	 * concurrent calls to device_remove_file() and scsi_remove_device().
++	 * device_remove_file() handles concurrent removal calls by
++	 * serializing these and by ignoring the second and later removal
++	 * attempts.  Concurrent calls of scsi_remove_device() are
++	 * serialized. The second and later calls of scsi_remove_device() are
++	 * ignored because the first call of that function changes the device
++	 * state into SDEV_DEL.
++	 */
++	device_remove_file(dev, attr);
++	scsi_remove_device(to_scsi_device(dev));
++	if (kn)
++		sysfs_unbreak_active_protection(kn);
+ 	return count;
+ };
+ static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index c8999e38b005..8a3678c2e83c 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -184,6 +184,7 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 	device_initialize(&rmtfs_mem->dev);
+ 	rmtfs_mem->dev.parent = &pdev->dev;
+ 	rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
++	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+ 
+ 	rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
+ 					rmtfs_mem->size, MEMREMAP_WC);
+@@ -206,8 +207,6 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		goto put_device;
+ 	}
+ 
+-	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+-
+ 	ret = of_property_read_u32(node, "qcom,vmid", &vmid);
+ 	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 99501785cdc1..68b3eb00a9d0 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -348,8 +348,7 @@ static int iscsi_login_zero_tsih_s1(
+ 		pr_err("idr_alloc() for sess_idr failed\n");
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_sess;
+ 	}
+ 
+ 	sess->creation_time = get_jiffies_64();
+@@ -365,20 +364,28 @@ static int iscsi_login_zero_tsih_s1(
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+ 		pr_err("Unable to allocate memory for"
+ 				" struct iscsi_sess_ops.\n");
+-		kfree(sess);
+-		return -ENOMEM;
++		goto remove_idr;
+ 	}
+ 
+ 	sess->se_sess = transport_init_session(TARGET_PROT_NORMAL);
+ 	if (IS_ERR(sess->se_sess)) {
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess->sess_ops);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_ops;
+ 	}
+ 
+ 	return 0;
++
++free_ops:
++	kfree(sess->sess_ops);
++remove_idr:
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++free_sess:
++	kfree(sess);
++	conn->sess = NULL;
++	return -ENOMEM;
+ }
+ 
+ static int iscsi_login_zero_tsih_s2(
+@@ -1161,13 +1168,13 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 				   ISCSI_LOGIN_STATUS_INIT_ERR);
+ 	if (!zero_tsih || !conn->sess)
+ 		goto old_sess_out;
+-	if (conn->sess->se_sess)
+-		transport_free_session(conn->sess->se_sess);
+-	if (conn->sess->session_index != 0) {
+-		spin_lock_bh(&sess_idr_lock);
+-		idr_remove(&sess_idr, conn->sess->session_index);
+-		spin_unlock_bh(&sess_idr_lock);
+-	}
++
++	transport_free_session(conn->sess->se_sess);
++
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, conn->sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++
+ 	kfree(conn->sess->sess_ops);
+ 	kfree(conn->sess);
+ 	conn->sess = NULL;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 205092dc9390..dfed08e70ec1 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -961,8 +961,9 @@ static int btree_writepages(struct address_space *mapping,
+ 
+ 		fs_info = BTRFS_I(mapping->host)->root->fs_info;
+ 		/* this is a bit racy, but that's ok */
+-		ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-					     BTRFS_DIRTY_METADATA_THRESH);
++		ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++					     BTRFS_DIRTY_METADATA_THRESH,
++					     fs_info->dirty_metadata_batch);
+ 		if (ret < 0)
+ 			return 0;
+ 	}
+@@ -4150,8 +4151,9 @@ static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
+ 	if (flush_delayed)
+ 		btrfs_balance_delayed_items(fs_info);
+ 
+-	ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-				     BTRFS_DIRTY_METADATA_THRESH);
++	ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++				     BTRFS_DIRTY_METADATA_THRESH,
++				     fs_info->dirty_metadata_batch);
+ 	if (ret > 0) {
+ 		balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
+ 	}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3d9fe58c0080..8aab7a6c1e58 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4358,7 +4358,7 @@ commit_trans:
+ 				      data_sinfo->flags, bytes, 1);
+ 	spin_unlock(&data_sinfo->lock);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ int btrfs_check_data_free_space(struct inode *inode,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index eba61bcb9bb3..071d949f69ec 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6027,32 +6027,6 @@ err:
+ 	return ret;
+ }
+ 
+-int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+-{
+-	struct btrfs_root *root = BTRFS_I(inode)->root;
+-	struct btrfs_trans_handle *trans;
+-	int ret = 0;
+-	bool nolock = false;
+-
+-	if (test_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags))
+-		return 0;
+-
+-	if (btrfs_fs_closing(root->fs_info) &&
+-			btrfs_is_free_space_inode(BTRFS_I(inode)))
+-		nolock = true;
+-
+-	if (wbc->sync_mode == WB_SYNC_ALL) {
+-		if (nolock)
+-			trans = btrfs_join_transaction_nolock(root);
+-		else
+-			trans = btrfs_join_transaction(root);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
+-		ret = btrfs_commit_transaction(trans);
+-	}
+-	return ret;
+-}
+-
+ /*
+  * This is somewhat expensive, updating the tree every time the
+  * inode changes.  But, it is most likely to find the inode in cache.
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index c47f62b19226..b75b4abaa4a5 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -100,6 +100,7 @@ struct send_ctx {
+ 	u64 cur_inode_rdev;
+ 	u64 cur_inode_last_extent;
+ 	u64 cur_inode_next_write_offset;
++	bool ignore_cur_inode;
+ 
+ 	u64 send_progress;
+ 
+@@ -5006,6 +5007,15 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	u64 len;
+ 	int ret = 0;
+ 
++	/*
++	 * A hole that starts at EOF or beyond it. Since we do not yet support
++	 * fallocate (for extent preallocation and hole punching), sending a
++	 * write of zeroes starting at EOF or beyond would later require issuing
++	 * a truncate operation which would undo the write and achieve nothing.
++	 */
++	if (offset >= sctx->cur_inode_size)
++		return 0;
++
+ 	if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
+ 		return send_update_extent(sctx, offset, end - offset);
+ 
+@@ -5799,6 +5809,9 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
+ 	int pending_move = 0;
+ 	int refs_processed = 0;
+ 
++	if (sctx->ignore_cur_inode)
++		return 0;
++
+ 	ret = process_recorded_refs_if_needed(sctx, at_end, &pending_move,
+ 					      &refs_processed);
+ 	if (ret < 0)
+@@ -5917,6 +5930,93 @@ out:
+ 	return ret;
+ }
+ 
++struct parent_paths_ctx {
++	struct list_head *refs;
++	struct send_ctx *sctx;
++};
++
++static int record_parent_ref(int num, u64 dir, int index, struct fs_path *name,
++			     void *ctx)
++{
++	struct parent_paths_ctx *ppctx = ctx;
++
++	return record_ref(ppctx->sctx->parent_root, dir, name, ppctx->sctx,
++			  ppctx->refs);
++}
++
++/*
++ * Issue unlink operations for all paths of the current inode found in the
++ * parent snapshot.
++ */
++static int btrfs_unlink_all_paths(struct send_ctx *sctx)
++{
++	LIST_HEAD(deleted_refs);
++	struct btrfs_path *path;
++	struct btrfs_key key;
++	struct parent_paths_ctx ctx;
++	int ret;
++
++	path = alloc_path_for_send();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = sctx->cur_ino;
++	key.type = BTRFS_INODE_REF_KEY;
++	key.offset = 0;
++	ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++
++	ctx.refs = &deleted_refs;
++	ctx.sctx = sctx;
++
++	while (true) {
++		struct extent_buffer *eb = path->nodes[0];
++		int slot = path->slots[0];
++
++		if (slot >= btrfs_header_nritems(eb)) {
++			ret = btrfs_next_leaf(sctx->parent_root, path);
++			if (ret < 0)
++				goto out;
++			else if (ret > 0)
++				break;
++			continue;
++		}
++
++		btrfs_item_key_to_cpu(eb, &key, slot);
++		if (key.objectid != sctx->cur_ino)
++			break;
++		if (key.type != BTRFS_INODE_REF_KEY &&
++		    key.type != BTRFS_INODE_EXTREF_KEY)
++			break;
++
++		ret = iterate_inode_ref(sctx->parent_root, path, &key, 1,
++					record_parent_ref, &ctx);
++		if (ret < 0)
++			goto out;
++
++		path->slots[0]++;
++	}
++
++	while (!list_empty(&deleted_refs)) {
++		struct recorded_ref *ref;
++
++		ref = list_first_entry(&deleted_refs, struct recorded_ref, list);
++		ret = send_unlink(sctx, ref->full_path);
++		if (ret < 0)
++			goto out;
++		fs_path_free(ref->full_path);
++		list_del(&ref->list);
++		kfree(ref);
++	}
++	ret = 0;
++out:
++	btrfs_free_path(path);
++	if (ret)
++		__free_recorded_refs(&deleted_refs);
++	return ret;
++}
++
+ static int changed_inode(struct send_ctx *sctx,
+ 			 enum btrfs_compare_tree_result result)
+ {
+@@ -5931,6 +6031,7 @@ static int changed_inode(struct send_ctx *sctx,
+ 	sctx->cur_inode_new_gen = 0;
+ 	sctx->cur_inode_last_extent = (u64)-1;
+ 	sctx->cur_inode_next_write_offset = 0;
++	sctx->ignore_cur_inode = false;
+ 
+ 	/*
+ 	 * Set send_progress to current inode. This will tell all get_cur_xxx
+@@ -5971,6 +6072,33 @@ static int changed_inode(struct send_ctx *sctx,
+ 			sctx->cur_inode_new_gen = 1;
+ 	}
+ 
++	/*
++	 * Normally we do not find inodes with a link count of zero (orphans)
++	 * because the most common case is to create a snapshot and use it
++	 * for a send operation. However other less common use cases involve
++	 * using a subvolume and send it after turning it to RO mode just
++	 * after deleting all hard links of a file while holding an open
++	 * file descriptor against it or turning a RO snapshot into RW mode,
++	 * keep an open file descriptor against a file, delete it and then
++	 * turn the snapshot back to RO mode before using it for a send
++	 * operation. So if we find such cases, ignore the inode and all its
++	 * items completely if it's a new inode, or if it's a changed inode
++	 * make sure all its previous paths (from the parent snapshot) are all
++	 * unlinked and all the other inode items are ignored.
++	 */
++	if (result == BTRFS_COMPARE_TREE_NEW ||
++	    result == BTRFS_COMPARE_TREE_CHANGED) {
++		u32 nlinks;
++
++		nlinks = btrfs_inode_nlink(sctx->left_path->nodes[0], left_ii);
++		if (nlinks == 0) {
++			sctx->ignore_cur_inode = true;
++			if (result == BTRFS_COMPARE_TREE_CHANGED)
++				ret = btrfs_unlink_all_paths(sctx);
++			goto out;
++		}
++	}
++
+ 	if (result == BTRFS_COMPARE_TREE_NEW) {
+ 		sctx->cur_inode_gen = left_gen;
+ 		sctx->cur_inode_new = 1;
+@@ -6309,15 +6437,17 @@ static int changed_cb(struct btrfs_path *left_path,
+ 	    key->objectid == BTRFS_FREE_SPACE_OBJECTID)
+ 		goto out;
+ 
+-	if (key->type == BTRFS_INODE_ITEM_KEY)
++	if (key->type == BTRFS_INODE_ITEM_KEY) {
+ 		ret = changed_inode(sctx, result);
+-	else if (key->type == BTRFS_INODE_REF_KEY ||
+-		 key->type == BTRFS_INODE_EXTREF_KEY)
+-		ret = changed_ref(sctx, result);
+-	else if (key->type == BTRFS_XATTR_ITEM_KEY)
+-		ret = changed_xattr(sctx, result);
+-	else if (key->type == BTRFS_EXTENT_DATA_KEY)
+-		ret = changed_extent(sctx, result);
++	} else if (!sctx->ignore_cur_inode) {
++		if (key->type == BTRFS_INODE_REF_KEY ||
++		    key->type == BTRFS_INODE_EXTREF_KEY)
++			ret = changed_ref(sctx, result);
++		else if (key->type == BTRFS_XATTR_ITEM_KEY)
++			ret = changed_xattr(sctx, result);
++		else if (key->type == BTRFS_EXTENT_DATA_KEY)
++			ret = changed_extent(sctx, result);
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 81107ad49f3a..bddfc28b27c0 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -2331,7 +2331,6 @@ static const struct super_operations btrfs_super_ops = {
+ 	.sync_fs	= btrfs_sync_fs,
+ 	.show_options	= btrfs_show_options,
+ 	.show_devname	= btrfs_show_devname,
+-	.write_inode	= btrfs_write_inode,
+ 	.alloc_inode	= btrfs_alloc_inode,
+ 	.destroy_inode	= btrfs_destroy_inode,
+ 	.statfs		= btrfs_statfs,
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f8220ec02036..84b00a29d531 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1291,6 +1291,46 @@ again:
+ 	return ret;
+ }
+ 
++static int btrfs_inode_ref_exists(struct inode *inode, struct inode *dir,
++				  const u8 ref_type, const char *name,
++				  const int namelen)
++{
++	struct btrfs_key key;
++	struct btrfs_path *path;
++	const u64 parent_id = btrfs_ino(BTRFS_I(dir));
++	int ret;
++
++	path = btrfs_alloc_path();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = btrfs_ino(BTRFS_I(inode));
++	key.type = ref_type;
++	if (key.type == BTRFS_INODE_REF_KEY)
++		key.offset = parent_id;
++	else
++		key.offset = btrfs_extref_hash(parent_id, name, namelen);
++
++	ret = btrfs_search_slot(NULL, BTRFS_I(inode)->root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++	if (ret > 0) {
++		ret = 0;
++		goto out;
++	}
++	if (key.type == BTRFS_INODE_EXTREF_KEY)
++		ret = btrfs_find_name_in_ext_backref(path->nodes[0],
++						     path->slots[0], parent_id,
++						     name, namelen, NULL);
++	else
++		ret = btrfs_find_name_in_backref(path->nodes[0], path->slots[0],
++						 name, namelen, NULL);
++
++out:
++	btrfs_free_path(path);
++	return ret;
++}
++
+ /*
+  * replay one inode back reference item found in the log tree.
+  * eb, slot and key refer to the buffer and key found in the log tree.
+@@ -1400,6 +1440,32 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				}
+ 			}
+ 
++			/*
++			 * If a reference item already exists for this inode
++			 * with the same parent and name, but different index,
++			 * drop it and the corresponding directory index entries
++			 * from the parent before adding the new reference item
++			 * and dir index entries, otherwise we would fail with
++			 * -EEXIST returned from btrfs_add_link() below.
++			 */
++			ret = btrfs_inode_ref_exists(inode, dir, key->type,
++						     name, namelen);
++			if (ret > 0) {
++				ret = btrfs_unlink_inode(trans, root,
++							 BTRFS_I(dir),
++							 BTRFS_I(inode),
++							 name, namelen);
++				/*
++				 * If we dropped the link count to 0, bump it so
++				 * that later the iput() on the inode will not
++				 * free it. We will fixup the link count later.
++				 */
++				if (!ret && inode->i_nlink == 0)
++					inc_nlink(inode);
++			}
++			if (ret < 0)
++				goto out;
++
+ 			/* insert our name */
+ 			ret = btrfs_add_link(trans, BTRFS_I(dir),
+ 					BTRFS_I(inode),
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index bfe999505815..991bfb271908 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -160,25 +160,41 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 	seq_printf(m, "CIFS Version %s\n", CIFS_VERSION);
+ 	seq_printf(m, "Features:");
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+-	seq_printf(m, " dfs");
++	seq_printf(m, " DFS");
+ #endif
+ #ifdef CONFIG_CIFS_FSCACHE
+-	seq_printf(m, " fscache");
++	seq_printf(m, ",FSCACHE");
++#endif
++#ifdef CONFIG_CIFS_SMB_DIRECT
++	seq_printf(m, ",SMB_DIRECT");
++#endif
++#ifdef CONFIG_CIFS_STATS2
++	seq_printf(m, ",STATS2");
++#elif defined(CONFIG_CIFS_STATS)
++	seq_printf(m, ",STATS");
++#endif
++#ifdef CONFIG_CIFS_DEBUG2
++	seq_printf(m, ",DEBUG2");
++#elif defined(CONFIG_CIFS_DEBUG)
++	seq_printf(m, ",DEBUG");
++#endif
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
++	seq_printf(m, ",ALLOW_INSECURE_LEGACY");
+ #endif
+ #ifdef CONFIG_CIFS_WEAK_PW_HASH
+-	seq_printf(m, " lanman");
++	seq_printf(m, ",WEAK_PW_HASH");
+ #endif
+ #ifdef CONFIG_CIFS_POSIX
+-	seq_printf(m, " posix");
++	seq_printf(m, ",CIFS_POSIX");
+ #endif
+ #ifdef CONFIG_CIFS_UPCALL
+-	seq_printf(m, " spnego");
++	seq_printf(m, ",UPCALL(SPNEGO)");
+ #endif
+ #ifdef CONFIG_CIFS_XATTR
+-	seq_printf(m, " xattr");
++	seq_printf(m, ",XATTR");
+ #endif
+ #ifdef CONFIG_CIFS_ACL
+-	seq_printf(m, " acl");
++	seq_printf(m, ",ACL");
+ #endif
+ 	seq_putc(m, '\n');
+ 	seq_printf(m, "Active VFS Requests: %d\n", GlobalTotalActiveXid);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index d5aa7ae917bf..69ec5427769c 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -209,14 +209,16 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 
+ 	xid = get_xid();
+ 
+-	/*
+-	 * PATH_MAX may be too long - it would presumably be total path,
+-	 * but note that some servers (includinng Samba 3) have a shorter
+-	 * maximum path.
+-	 *
+-	 * Instead could get the real value via SMB_QUERY_FS_ATTRIBUTE_INFO.
+-	 */
+-	buf->f_namelen = PATH_MAX;
++	if (le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength) > 0)
++		buf->f_namelen =
++		       le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength);
++	else
++		buf->f_namelen = PATH_MAX;
++
++	buf->f_fsid.val[0] = tcon->vol_serial_number;
++	/* are using part of create time for more randomness, see man statfs */
++	buf->f_fsid.val[1] =  (int)le64_to_cpu(tcon->vol_create_time);
++
+ 	buf->f_files = 0;	/* undefined */
+ 	buf->f_ffree = 0;	/* unlimited */
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index c923c7854027..4b45d3ef3f9d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -913,6 +913,7 @@ cap_unix(struct cifs_ses *ses)
+ 
+ struct cached_fid {
+ 	bool is_valid:1;	/* Do we have a useable root fid */
++	struct kref refcount;
+ 	struct cifs_fid *fid;
+ 	struct mutex fid_mutex;
+ 	struct cifs_tcon *tcon;
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index a2cfb33e85c1..9051b9dfd590 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1122,6 +1122,8 @@ cifs_set_file_info(struct inode *inode, struct iattr *attrs, unsigned int xid,
+ 	if (!server->ops->set_file_info)
+ 		return -ENOSYS;
+ 
++	info_buf.Pad = 0;
++
+ 	if (attrs->ia_valid & ATTR_ATIME) {
+ 		set_time = true;
+ 		info_buf.LastAccessTime =
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index de41f96aba49..2148b0f60e5e 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -396,7 +396,7 @@ smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int buf_type = CIFS_NO_BUFFER;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_II;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct smb2_file_all_info *pfile_info = NULL;
+ 
+ 	oparms.tcon = tcon;
+@@ -459,7 +459,7 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int create_options = CREATE_NOT_DIR;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_EXCLUSIVE;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct kvec iov[2];
+ 
+ 	if (backup_cred(cifs_sb))
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 8b0502cd39af..aa23c00367ec 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -398,6 +398,12 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
+ 		goto setup_ntlmv2_ret;
+ 	}
+ 	*pbuffer = kmalloc(size_of_ntlmssp_blob(ses), GFP_KERNEL);
++	if (!*pbuffer) {
++		rc = -ENOMEM;
++		cifs_dbg(VFS, "Error %d during NTLMSSP allocation\n", rc);
++		*buflen = 0;
++		goto setup_ntlmv2_ret;
++	}
+ 	sec_blob = (AUTHENTICATE_MESSAGE *)*pbuffer;
+ 
+ 	memcpy(sec_blob->Signature, NTLMSSP_SIGNATURE, 8);
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index d01ad706d7fc..1eef1791d0c4 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -120,7 +120,9 @@ smb2_open_op_close(const unsigned int xid, struct cifs_tcon *tcon,
+ 		break;
+ 	}
+ 
+-	if (use_cached_root_handle == false)
++	if (use_cached_root_handle)
++		close_shroot(&tcon->crfid);
++	else
+ 		rc = SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ 	if (tmprc)
+ 		rc = tmprc;
+@@ -281,7 +283,7 @@ smb2_set_file_info(struct inode *inode, const char *full_path,
+ 	int rc;
+ 
+ 	if ((buf->CreationTime == 0) && (buf->LastAccessTime == 0) &&
+-	    (buf->LastWriteTime == 0) && (buf->ChangeTime) &&
++	    (buf->LastWriteTime == 0) && (buf->ChangeTime == 0) &&
+ 	    (buf->Attributes == 0))
+ 		return 0; /* would be a no op, no sense sending this */
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ea92a38b2f08..ee6c4a952ce9 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -466,21 +466,36 @@ out:
+ 	return rc;
+ }
+ 
+-void
+-smb2_cached_lease_break(struct work_struct *work)
++static void
++smb2_close_cached_fid(struct kref *ref)
+ {
+-	struct cached_fid *cfid = container_of(work,
+-				struct cached_fid, lease_break);
+-	mutex_lock(&cfid->fid_mutex);
++	struct cached_fid *cfid = container_of(ref, struct cached_fid,
++					       refcount);
++
+ 	if (cfid->is_valid) {
+ 		cifs_dbg(FYI, "clear cached root file handle\n");
+ 		SMB2_close(0, cfid->tcon, cfid->fid->persistent_fid,
+ 			   cfid->fid->volatile_fid);
+ 		cfid->is_valid = false;
+ 	}
++}
++
++void close_shroot(struct cached_fid *cfid)
++{
++	mutex_lock(&cfid->fid_mutex);
++	kref_put(&cfid->refcount, smb2_close_cached_fid);
+ 	mutex_unlock(&cfid->fid_mutex);
+ }
+ 
++void
++smb2_cached_lease_break(struct work_struct *work)
++{
++	struct cached_fid *cfid = container_of(work,
++				struct cached_fid, lease_break);
++
++	close_shroot(cfid);
++}
++
+ /*
+  * Open the directory at the root of a share
+  */
+@@ -495,6 +510,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 	if (tcon->crfid.is_valid) {
+ 		cifs_dbg(FYI, "found a cached root file handle\n");
+ 		memcpy(pfid, tcon->crfid.fid, sizeof(struct cifs_fid));
++		kref_get(&tcon->crfid.refcount);
+ 		mutex_unlock(&tcon->crfid.fid_mutex);
+ 		return 0;
+ 	}
+@@ -511,6 +527,8 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 		memcpy(tcon->crfid.fid, pfid, sizeof(struct cifs_fid));
+ 		tcon->crfid.tcon = tcon;
+ 		tcon->crfid.is_valid = true;
++		kref_init(&tcon->crfid.refcount);
++		kref_get(&tcon->crfid.refcount);
+ 	}
+ 	mutex_unlock(&tcon->crfid.fid_mutex);
+ 	return rc;
+@@ -548,10 +566,15 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon)
+ 			FS_ATTRIBUTE_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_DEVICE_INFORMATION);
++	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
++			FS_VOLUME_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_SECTOR_SIZE_INFORMATION); /* SMB3 specific */
+ 	if (no_cached_open)
+ 		SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
++	else
++		close_shroot(&tcon->crfid);
++
+ 	return;
+ }
+ 
+@@ -1353,6 +1376,13 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ }
+ 
++/* GMT Token is @GMT-YYYY.MM.DD-HH.MM.SS Unicode which is 48 bytes + null */
++#define GMT_TOKEN_SIZE 50
++
++/*
++ * Input buffer contains (empty) struct smb_snapshot array with size filled in
++ * For output see struct SRV_SNAPSHOT_ARRAY in MS-SMB2 section 2.2.32.2
++ */
+ static int
+ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 		   struct cifsFileInfo *cfile, void __user *ioc_buf)
+@@ -1382,14 +1412,27 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 			kfree(retbuf);
+ 			return rc;
+ 		}
+-		if (snapshot_in.snapshot_array_size < sizeof(struct smb_snapshot_array)) {
+-			rc = -ERANGE;
+-			kfree(retbuf);
+-			return rc;
+-		}
+ 
+-		if (ret_data_len > snapshot_in.snapshot_array_size)
+-			ret_data_len = snapshot_in.snapshot_array_size;
++		/*
++		 * Check for min size, ie not large enough to fit even one GMT
++		 * token (snapshot).  On the first ioctl some users may pass in
++		 * smaller size (or zero) to simply get the size of the array
++		 * so the user space caller can allocate sufficient memory
++		 * and retry the ioctl again with larger array size sufficient
++		 * to hold all of the snapshot GMT tokens on the second try.
++		 */
++		if (snapshot_in.snapshot_array_size < GMT_TOKEN_SIZE)
++			ret_data_len = sizeof(struct smb_snapshot_array);
++
++		/*
++		 * We return struct SRV_SNAPSHOT_ARRAY, followed by
++		 * the snapshot array (of 50 byte GMT tokens) each
++		 * representing an available previous version of the data
++		 */
++		if (ret_data_len > (snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array)))
++			ret_data_len = snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array);
+ 
+ 		if (copy_to_user(ioc_buf, retbuf, ret_data_len))
+ 			rc = -EFAULT;
+@@ -3366,6 +3409,11 @@ struct smb_version_operations smb311_operations = {
+ 	.query_all_EAs = smb2_query_eas,
+ 	.set_EA = smb2_set_ea,
+ #endif /* CIFS_XATTR */
++#ifdef CONFIG_CIFS_ACL
++	.get_acl = get_smb2_acl,
++	.get_acl_by_fid = get_smb2_acl_by_fid,
++	.set_acl = set_smb2_acl,
++#endif /* CIFS_ACL */
+ 	.next_header = smb2_next_header,
+ };
+ #endif /* CIFS_SMB311 */
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 3c92678cb45b..ffce77e00a58 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -4046,6 +4046,9 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 	} else if (level == FS_SECTOR_SIZE_INFORMATION) {
+ 		max_len = sizeof(struct smb3_fs_ss_info);
+ 		min_len = sizeof(struct smb3_fs_ss_info);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		max_len = sizeof(struct smb3_fs_vol_info) + MAX_VOL_LABEL_LEN;
++		min_len = sizeof(struct smb3_fs_vol_info);
+ 	} else {
+ 		cifs_dbg(FYI, "Invalid qfsinfo level %d\n", level);
+ 		return -EINVAL;
+@@ -4090,6 +4093,11 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 		tcon->ss_flags = le32_to_cpu(ss_info->Flags);
+ 		tcon->perf_sector_size =
+ 			le32_to_cpu(ss_info->PhysicalBytesPerSectorForPerf);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		struct smb3_fs_vol_info *vol_info = (struct smb3_fs_vol_info *)
++			(offset + (char *)rsp);
++		tcon->vol_serial_number = vol_info->VolumeSerialNumber;
++		tcon->vol_create_time = vol_info->VolumeCreationTime;
+ 	}
+ 
+ qfsattr_exit:
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index a671adcc44a6..c2a4526512b5 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -1248,6 +1248,17 @@ struct smb3_fs_ss_info {
+ 	__le32 ByteOffsetForPartitionAlignment;
+ } __packed;
+ 
++/* volume info struct - see MS-FSCC 2.5.9 */
++#define MAX_VOL_LABEL_LEN	32
++struct smb3_fs_vol_info {
++	__le64	VolumeCreationTime;
++	__u32	VolumeSerialNumber;
++	__le32	VolumeLabelLength; /* includes trailing null */
++	__u8	SupportsObjects; /* True if eg like NTFS, supports objects */
++	__u8	Reserved;
++	__u8	VolumeLabel[0]; /* variable len */
++} __packed;
++
+ /* partial list of QUERY INFO levels */
+ #define FILE_DIRECTORY_INFORMATION	1
+ #define FILE_FULL_DIRECTORY_INFORMATION 2
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index 6e6a4f2ec890..c1520b48d1e1 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -68,6 +68,7 @@ extern int smb3_handle_read_data(struct TCP_Server_Info *server,
+ 
+ extern int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ 			struct cifs_fid *pfid);
++extern void close_shroot(struct cached_fid *cfid);
+ extern void move_smb2_info_to_cifs(FILE_ALL_INFO *dst,
+ 				   struct smb2_file_all_info *src);
+ extern int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 719d55e63d88..bf61c3774830 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -173,7 +173,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 	struct kvec *iov = rqst->rq_iov;
+ 	struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)iov[0].iov_base;
+ 	struct cifs_ses *ses;
+-	struct shash_desc *shash = &server->secmech.sdeschmacsha256->shash;
++	struct shash_desc *shash;
+ 	struct smb_rqst drqst;
+ 
+ 	ses = smb2_find_smb_ses(server, shdr->SessionId);
+@@ -187,7 +187,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 
+ 	rc = smb2_crypto_shash_allocate(server);
+ 	if (rc) {
+-		cifs_dbg(VFS, "%s: shah256 alloc failed\n", __func__);
++		cifs_dbg(VFS, "%s: sha256 alloc failed\n", __func__);
+ 		return rc;
+ 	}
+ 
+@@ -198,6 +198,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 		return rc;
+ 	}
+ 
++	shash = &server->secmech.sdeschmacsha256->shash;
+ 	rc = crypto_shash_init(shash);
+ 	if (rc) {
+ 		cifs_dbg(VFS, "%s: Could not init sha256", __func__);
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index aa52d87985aa..e5d6ee61ff48 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -426,9 +426,9 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot get buffer for block bitmap - "
+-			   "block_group = %u, block_bitmap = %llu",
+-			   block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot get buffer for block bitmap - "
++			     "block_group = %u, block_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index f336cbc6e932..796aa609bcb9 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -138,9 +138,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot read inode bitmap - "
+-			    "block_group = %u, inode_bitmap = %llu",
+-			    block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot read inode bitmap - "
++			     "block_group = %u, inode_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	if (bitmap_uptodate(bh))
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 2a4c25c4681d..116ff68c5bd4 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1398,6 +1398,7 @@ static struct buffer_head * ext4_find_entry (struct inode *dir,
+ 			goto cleanup_and_exit;
+ 		dxtrace(printk(KERN_DEBUG "ext4_find_entry: dx failed, "
+ 			       "falling back\n"));
++		ret = NULL;
+ 	}
+ 	nblocks = dir->i_size >> EXT4_BLOCK_SIZE_BITS(sb);
+ 	if (!nblocks) {
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index b7f7922061be..130c12974e28 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -776,26 +776,26 @@ void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct ext4_group_desc *gdp = ext4_get_group_desc(sb, group, NULL);
++	int ret;
+ 
+-	if ((flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) {
+-		percpu_counter_sub(&sbi->s_freeclusters_counter,
+-					grp->bb_free);
+-		set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
++	if (flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret)
++			percpu_counter_sub(&sbi->s_freeclusters_counter,
++					   grp->bb_free);
+ 	}
+ 
+-	if ((flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {
+-		if (gdp) {
++	if (flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret && gdp) {
+ 			int count;
+ 
+ 			count = ext4_free_inodes_count(sb, gdp);
+ 			percpu_counter_sub(&sbi->s_freeinodes_counter,
+ 					   count);
+ 		}
+-		set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
+ 	}
+ }
+ 
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index f34da0bb8f17..b970a200f20c 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -274,8 +274,12 @@ static ssize_t ext4_attr_show(struct kobject *kobj,
+ 	case attr_pointer_ui:
+ 		if (!ptr)
+ 			return 0;
+-		return snprintf(buf, PAGE_SIZE, "%u\n",
+-				*((unsigned int *) ptr));
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					le32_to_cpup(ptr));
++		else
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					*((unsigned int *) ptr));
+ 	case attr_pointer_atomic:
+ 		if (!ptr)
+ 			return 0;
+@@ -308,7 +312,10 @@ static ssize_t ext4_attr_store(struct kobject *kobj,
+ 		ret = kstrtoul(skip_spaces(buf), 0, &t);
+ 		if (ret)
+ 			return ret;
+-		*((unsigned int *) ptr) = t;
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			*((__le32 *) ptr) = cpu_to_le32(t);
++		else
++			*((unsigned int *) ptr) = t;
+ 		return len;
+ 	case attr_inode_readahead:
+ 		return inode_readahead_blks_store(sbi, buf, len);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 723df14f4084..f36fc5d5b257 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -190,6 +190,8 @@ ext4_xattr_check_entries(struct ext4_xattr_entry *entry, void *end,
+ 		struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e);
+ 		if ((void *)next >= end)
+ 			return -EFSCORRUPTED;
++		if (strnlen(e->e_name, e->e_name_len) != e->e_name_len)
++			return -EFSCORRUPTED;
+ 		e = next;
+ 	}
+ 
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index c6b88fa85e2e..4a9ace7280b9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -127,6 +127,16 @@ static bool fuse_block_alloc(struct fuse_conn *fc, bool for_background)
+ 	return !fc->initialized || (for_background && fc->blocked);
+ }
+ 
++static void fuse_drop_waiting(struct fuse_conn *fc)
++{
++	if (fc->connected) {
++		atomic_dec(&fc->num_waiting);
++	} else if (atomic_dec_and_test(&fc->num_waiting)) {
++		/* wake up aborters */
++		wake_up_all(&fc->blocked_waitq);
++	}
++}
++
+ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 				       bool for_background)
+ {
+@@ -175,7 +185,7 @@ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 	return req;
+ 
+  out:
+-	atomic_dec(&fc->num_waiting);
++	fuse_drop_waiting(fc);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -285,7 +295,7 @@ void fuse_put_request(struct fuse_conn *fc, struct fuse_req *req)
+ 
+ 		if (test_bit(FR_WAITING, &req->flags)) {
+ 			__clear_bit(FR_WAITING, &req->flags);
+-			atomic_dec(&fc->num_waiting);
++			fuse_drop_waiting(fc);
+ 		}
+ 
+ 		if (req->stolen_file)
+@@ -371,7 +381,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	struct fuse_iqueue *fiq = &fc->iq;
+ 
+ 	if (test_and_set_bit(FR_FINISHED, &req->flags))
+-		return;
++		goto put_request;
+ 
+ 	spin_lock(&fiq->waitq.lock);
+ 	list_del_init(&req->intr_entry);
+@@ -400,6 +410,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	wake_up(&req->waitq);
+ 	if (req->end)
+ 		req->end(fc, req);
++put_request:
+ 	fuse_put_request(fc, req);
+ }
+ 
+@@ -1944,12 +1955,15 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 	if (!fud)
+ 		return -EPERM;
+ 
++	pipe_lock(pipe);
++
+ 	bufs = kmalloc_array(pipe->buffers, sizeof(struct pipe_buffer),
+ 			     GFP_KERNEL);
+-	if (!bufs)
++	if (!bufs) {
++		pipe_unlock(pipe);
+ 		return -ENOMEM;
++	}
+ 
+-	pipe_lock(pipe);
+ 	nbuf = 0;
+ 	rem = 0;
+ 	for (idx = 0; idx < pipe->nrbufs && rem < len; idx++)
+@@ -2105,6 +2119,7 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 				set_bit(FR_ABORTED, &req->flags);
+ 				if (!test_bit(FR_LOCKED, &req->flags)) {
+ 					set_bit(FR_PRIVATE, &req->flags);
++					__fuse_get_request(req);
+ 					list_move(&req->list, &to_end1);
+ 				}
+ 				spin_unlock(&req->waitq.lock);
+@@ -2131,7 +2146,6 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 
+ 		while (!list_empty(&to_end1)) {
+ 			req = list_first_entry(&to_end1, struct fuse_req, list);
+-			__fuse_get_request(req);
+ 			list_del_init(&req->list);
+ 			request_end(fc, req);
+ 		}
+@@ -2142,6 +2156,11 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ }
+ EXPORT_SYMBOL_GPL(fuse_abort_conn);
+ 
++void fuse_wait_aborted(struct fuse_conn *fc)
++{
++	wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);
++}
++
+ int fuse_dev_release(struct inode *inode, struct file *file)
+ {
+ 	struct fuse_dev *fud = fuse_get_dev(file);
+@@ -2149,9 +2168,15 @@ int fuse_dev_release(struct inode *inode, struct file *file)
+ 	if (fud) {
+ 		struct fuse_conn *fc = fud->fc;
+ 		struct fuse_pqueue *fpq = &fud->pq;
++		LIST_HEAD(to_end);
+ 
++		spin_lock(&fpq->lock);
+ 		WARN_ON(!list_empty(&fpq->io));
+-		end_requests(fc, &fpq->processing);
++		list_splice_init(&fpq->processing, &to_end);
++		spin_unlock(&fpq->lock);
++
++		end_requests(fc, &to_end);
++
+ 		/* Are we the last open device? */
+ 		if (atomic_dec_and_test(&fc->dev_count)) {
+ 			WARN_ON(fc->iq.fasync != NULL);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 56231b31f806..606909ed5f21 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -355,11 +355,12 @@ static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry,
+ 	struct inode *inode;
+ 	struct dentry *newent;
+ 	bool outarg_valid = true;
++	bool locked;
+ 
+-	fuse_lock_inode(dir);
++	locked = fuse_lock_inode(dir);
+ 	err = fuse_lookup_name(dir->i_sb, get_node_id(dir), &entry->d_name,
+ 			       &outarg, &inode);
+-	fuse_unlock_inode(dir);
++	fuse_unlock_inode(dir, locked);
+ 	if (err == -ENOENT) {
+ 		outarg_valid = false;
+ 		err = 0;
+@@ -1340,6 +1341,7 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_req *req;
+ 	u64 attr_version = 0;
++	bool locked;
+ 
+ 	if (is_bad_inode(inode))
+ 		return -EIO;
+@@ -1367,9 +1369,9 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 		fuse_read_fill(req, file, ctx->pos, PAGE_SIZE,
+ 			       FUSE_READDIR);
+ 	}
+-	fuse_lock_inode(inode);
++	locked = fuse_lock_inode(inode);
+ 	fuse_request_send(fc, req);
+-	fuse_unlock_inode(inode);
++	fuse_unlock_inode(inode, locked);
+ 	nbytes = req->out.args[0].size;
+ 	err = req->out.h.error;
+ 	fuse_put_request(fc, req);
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index a201fb0ac64f..aa23749a943b 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -866,6 +866,7 @@ static int fuse_readpages_fill(void *_data, struct page *page)
+ 	}
+ 
+ 	if (WARN_ON(req->num_pages >= req->max_pages)) {
++		unlock_page(page);
+ 		fuse_put_request(fc, req);
+ 		return -EIO;
+ 	}
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 5256ad333b05..f78e9614bb5f 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -862,6 +862,7 @@ void fuse_request_send_background_locked(struct fuse_conn *fc,
+ 
+ /* Abort all requests */
+ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
++void fuse_wait_aborted(struct fuse_conn *fc);
+ 
+ /**
+  * Invalidate inode attributes
+@@ -974,8 +975,8 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ 
+ void fuse_set_initialized(struct fuse_conn *fc);
+ 
+-void fuse_unlock_inode(struct inode *inode);
+-void fuse_lock_inode(struct inode *inode);
++void fuse_unlock_inode(struct inode *inode, bool locked);
++bool fuse_lock_inode(struct inode *inode);
+ 
+ int fuse_setxattr(struct inode *inode, const char *name, const void *value,
+ 		  size_t size, int flags);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index a24df8861b40..2dbd487390a3 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -357,15 +357,21 @@ int fuse_reverse_inval_inode(struct super_block *sb, u64 nodeid,
+ 	return 0;
+ }
+ 
+-void fuse_lock_inode(struct inode *inode)
++bool fuse_lock_inode(struct inode *inode)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	bool locked = false;
++
++	if (!get_fuse_conn(inode)->parallel_dirops) {
+ 		mutex_lock(&get_fuse_inode(inode)->mutex);
++		locked = true;
++	}
++
++	return locked;
+ }
+ 
+-void fuse_unlock_inode(struct inode *inode)
++void fuse_unlock_inode(struct inode *inode, bool locked)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	if (locked)
+ 		mutex_unlock(&get_fuse_inode(inode)->mutex);
+ }
+ 
+@@ -391,9 +397,6 @@ static void fuse_put_super(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+-	fuse_send_destroy(fc);
+-
+-	fuse_abort_conn(fc, false);
+ 	mutex_lock(&fuse_mutex);
+ 	list_del(&fc->entry);
+ 	fuse_ctl_remove_conn(fc);
+@@ -1210,16 +1213,25 @@ static struct dentry *fuse_mount(struct file_system_type *fs_type,
+ 	return mount_nodev(fs_type, flags, raw_data, fuse_fill_super);
+ }
+ 
+-static void fuse_kill_sb_anon(struct super_block *sb)
++static void fuse_sb_destroy(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+ 	if (fc) {
++		fuse_send_destroy(fc);
++
++		fuse_abort_conn(fc, false);
++		fuse_wait_aborted(fc);
++
+ 		down_write(&fc->killsb);
+ 		fc->sb = NULL;
+ 		up_write(&fc->killsb);
+ 	}
++}
+ 
++static void fuse_kill_sb_anon(struct super_block *sb)
++{
++	fuse_sb_destroy(sb);
+ 	kill_anon_super(sb);
+ }
+ 
+@@ -1242,14 +1254,7 @@ static struct dentry *fuse_mount_blk(struct file_system_type *fs_type,
+ 
+ static void fuse_kill_sb_blk(struct super_block *sb)
+ {
+-	struct fuse_conn *fc = get_fuse_conn_super(sb);
+-
+-	if (fc) {
+-		down_write(&fc->killsb);
+-		fc->sb = NULL;
+-		up_write(&fc->killsb);
+-	}
+-
++	fuse_sb_destroy(sb);
+ 	kill_block_super(sb);
+ }
+ 
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index 5c13f29bfcdb..118fa197a35f 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -405,6 +405,50 @@ int sysfs_chmod_file(struct kobject *kobj, const struct attribute *attr,
+ }
+ EXPORT_SYMBOL_GPL(sysfs_chmod_file);
+ 
++/**
++ * sysfs_break_active_protection - break "active" protection
++ * @kobj: The kernel object @attr is associated with.
++ * @attr: The attribute to break the "active" protection for.
++ *
++ * With sysfs, just like kernfs, deletion of an attribute is postponed until
++ * all active .show() and .store() callbacks have finished unless this function
++ * is called. Hence this function is useful in methods that implement self
++ * deletion.
++ */
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr)
++{
++	struct kernfs_node *kn;
++
++	kobject_get(kobj);
++	kn = kernfs_find_and_get(kobj->sd, attr->name);
++	if (kn)
++		kernfs_break_active_protection(kn);
++	return kn;
++}
++EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
++
++/**
++ * sysfs_unbreak_active_protection - restore "active" protection
++ * @kn: Pointer returned by sysfs_break_active_protection().
++ *
++ * Undo the effects of sysfs_break_active_protection(). Since this function
++ * calls kernfs_put() on the kernfs node that corresponds to the 'attr'
++ * argument passed to sysfs_break_active_protection() that attribute may have
++ * been removed between the sysfs_break_active_protection() and
++ * sysfs_unbreak_active_protection() calls, it is not safe to access @kn after
++ * this function has returned.
++ */
++void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++	struct kobject *kobj = kn->parent->priv;
++
++	kernfs_unbreak_active_protection(kn);
++	kernfs_put(kn);
++	kobject_put(kobj);
++}
++EXPORT_SYMBOL_GPL(sysfs_unbreak_active_protection);
++
+ /**
+  * sysfs_remove_file_ns - remove an object attribute with a custom ns tag
+  * @kobj: object we're acting for
+diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
+index c9e5a6621b95..c44703f471b3 100644
+--- a/include/drm/i915_drm.h
++++ b/include/drm/i915_drm.h
+@@ -95,7 +95,9 @@ extern struct resource intel_graphics_stolen_res;
+ #define    I845_TSEG_SIZE_512K	(2 << 1)
+ #define    I845_TSEG_SIZE_1M	(3 << 1)
+ 
+-#define INTEL_BSM 0x5c
++#define INTEL_BSM		0x5c
++#define INTEL_GEN11_BSM_DW0	0xc0
++#define INTEL_GEN11_BSM_DW1	0xc4
+ #define   INTEL_BSM_MASK	(-(1u << 20))
+ 
+ #endif				/* _I915_DRM_H_ */
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 32f247cb5e9e..bc4f87cbe7f4 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -1111,6 +1111,8 @@ extern struct ata_host *ata_host_alloc(struct device *dev, int max_ports);
+ extern struct ata_host *ata_host_alloc_pinfo(struct device *dev,
+ 			const struct ata_port_info * const * ppi, int n_ports);
+ extern int ata_slave_link_init(struct ata_port *ap);
++extern void ata_host_get(struct ata_host *host);
++extern void ata_host_put(struct ata_host *host);
+ extern int ata_host_start(struct ata_host *host);
+ extern int ata_host_register(struct ata_host *host,
+ 			     struct scsi_host_template *sht);
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index 6d7e800affd8..3ede9f46a494 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -148,9 +148,13 @@ void early_printk(const char *s, ...) { }
+ #ifdef CONFIG_PRINTK_NMI
+ extern void printk_nmi_enter(void);
+ extern void printk_nmi_exit(void);
++extern void printk_nmi_direct_enter(void);
++extern void printk_nmi_direct_exit(void);
+ #else
+ static inline void printk_nmi_enter(void) { }
+ static inline void printk_nmi_exit(void) { }
++static inline void printk_nmi_direct_enter(void) { }
++static inline void printk_nmi_direct_exit(void) { }
+ #endif /* PRINTK_NMI */
+ 
+ #ifdef CONFIG_PRINTK
+diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
+index b8bfdc173ec0..3c12198c0103 100644
+--- a/include/linux/sysfs.h
++++ b/include/linux/sysfs.h
+@@ -237,6 +237,9 @@ int __must_check sysfs_create_files(struct kobject *kobj,
+ 				   const struct attribute **attr);
+ int __must_check sysfs_chmod_file(struct kobject *kobj,
+ 				  const struct attribute *attr, umode_t mode);
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr);
++void sysfs_unbreak_active_protection(struct kernfs_node *kn);
+ void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr,
+ 			  const void *ns);
+ bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr);
+@@ -350,6 +353,17 @@ static inline int sysfs_chmod_file(struct kobject *kobj,
+ 	return 0;
+ }
+ 
++static inline struct kernfs_node *
++sysfs_break_active_protection(struct kobject *kobj,
++			      const struct attribute *attr)
++{
++	return NULL;
++}
++
++static inline void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++}
++
+ static inline void sysfs_remove_file_ns(struct kobject *kobj,
+ 					const struct attribute *attr,
+ 					const void *ns)
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 06639fb6ab85..8eb5e5ebe136 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -43,6 +43,8 @@ struct tpm_class_ops {
+ 	u8 (*status) (struct tpm_chip *chip);
+ 	bool (*update_timeouts)(struct tpm_chip *chip,
+ 				unsigned long *timeout_cap);
++	int (*go_idle)(struct tpm_chip *chip);
++	int (*cmd_ready)(struct tpm_chip *chip);
+ 	int (*request_locality)(struct tpm_chip *chip, int loc);
+ 	int (*relinquish_locality)(struct tpm_chip *chip, int loc);
+ 	void (*clk_enable)(struct tpm_chip *chip, bool value);
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index 225ab7783dfd..3de3b10da19a 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -161,7 +161,7 @@ struct sata_device {
+ 	u8     port_no;        /* port number, if this is a PM (Port) */
+ 
+ 	struct ata_port *ap;
+-	struct ata_host ata_host;
++	struct ata_host *ata_host;
+ 	struct smp_resp rps_resp ____cacheline_aligned; /* report_phy_sata_resp */
+ 	u8     fis[ATA_RESP_FIS_SIZE];
+ };
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index ea619021d901..f3183ad10d96 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -710,9 +710,7 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	 * there is still a relative jump) and disabled.
+ 	 */
+ 	op = container_of(ap, struct optimized_kprobe, kp);
+-	if (unlikely(list_empty(&op->list)))
+-		printk(KERN_WARNING "Warning: found a stray unused "
+-			"aggrprobe@%p\n", ap->addr);
++	WARN_ON_ONCE(list_empty(&op->list));
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+@@ -985,7 +983,8 @@ static int arm_kprobe_ftrace(struct kprobe *p)
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 				   (unsigned long)p->addr, 0, 0);
+ 	if (ret) {
+-		pr_debug("Failed to arm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++		pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
++			 p->addr, ret);
+ 		return ret;
+ 	}
+ 
+@@ -1025,7 +1024,8 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
+ 
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 			   (unsigned long)p->addr, 1, 0);
+-	WARN(ret < 0, "Failed to disarm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++	WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
++		  p->addr, ret);
+ 	return ret;
+ }
+ #else	/* !CONFIG_KPROBES_ON_FTRACE */
+@@ -2169,11 +2169,12 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(enable_kprobe);
+ 
++/* Caller must NOT call this in usual path. This is only for critical case */
+ void dump_kprobe(struct kprobe *kp)
+ {
+-	printk(KERN_WARNING "Dumping kprobe:\n");
+-	printk(KERN_WARNING "Name: %s\nAddress: %p\nOffset: %x\n",
+-	       kp->symbol_name, kp->addr, kp->offset);
++	pr_err("Dumping kprobe:\n");
++	pr_err("Name: %s\nOffset: %x\nAddress: %pS\n",
++	       kp->symbol_name, kp->offset, kp->addr);
+ }
+ NOKPROBE_SYMBOL(dump_kprobe);
+ 
+@@ -2196,11 +2197,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
+ 		entry = arch_deref_entry_point((void *)*iter);
+ 
+ 		if (!kernel_text_address(entry) ||
+-		    !kallsyms_lookup_size_offset(entry, &size, &offset)) {
+-			pr_err("Failed to find blacklist at %p\n",
+-				(void *)entry);
++		    !kallsyms_lookup_size_offset(entry, &size, &offset))
+ 			continue;
+-		}
+ 
+ 		ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+ 		if (!ent)
+@@ -2428,8 +2426,16 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
+ 	struct kprobe_blacklist_entry *ent =
+ 		list_entry(v, struct kprobe_blacklist_entry, list);
+ 
+-	seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
+-		   (void *)ent->end_addr, (void *)ent->start_addr);
++	/*
++	 * If /proc/kallsyms is not showing kernel address, we won't
++	 * show them here either.
++	 */
++	if (!kallsyms_show_value())
++		seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
++			   (void *)ent->start_addr);
++	else
++		seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
++			   (void *)ent->end_addr, (void *)ent->start_addr);
+ 	return 0;
+ }
+ 
+@@ -2611,7 +2617,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!dir)
+ 		return -ENOMEM;
+ 
+-	file = debugfs_create_file("list", 0444, dir, NULL,
++	file = debugfs_create_file("list", 0400, dir, NULL,
+ 				&debugfs_kprobes_operations);
+ 	if (!file)
+ 		goto error;
+@@ -2621,7 +2627,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!file)
+ 		goto error;
+ 
+-	file = debugfs_create_file("blacklist", 0444, dir, NULL,
++	file = debugfs_create_file("blacklist", 0400, dir, NULL,
+ 				&debugfs_kprobe_blacklist_ops);
+ 	if (!file)
+ 		goto error;
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 2a7d04049af4..0f1898820cba 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -19,11 +19,16 @@
+ #ifdef CONFIG_PRINTK
+ 
+ #define PRINTK_SAFE_CONTEXT_MASK	 0x3fffffff
+-#define PRINTK_NMI_DEFERRED_CONTEXT_MASK 0x40000000
++#define PRINTK_NMI_DIRECT_CONTEXT_MASK	 0x40000000
+ #define PRINTK_NMI_CONTEXT_MASK		 0x80000000
+ 
+ extern raw_spinlock_t logbuf_lock;
+ 
++__printf(5, 0)
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args);
++
+ __printf(1, 0) int vprintk_default(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_deferred(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args);
+@@ -54,6 +59,8 @@ void __printk_safe_exit(void);
+ 		local_irq_enable();		\
+ 	} while (0)
+ 
++void defer_console_output(void);
++
+ #else
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args) { return 0; }
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 247808333ba4..1d1513215c22 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1824,28 +1824,16 @@ static size_t log_output(int facility, int level, enum log_flags lflags, const c
+ 	return log_store(facility, level, lflags, 0, dict, dictlen, text, text_len);
+ }
+ 
+-asmlinkage int vprintk_emit(int facility, int level,
+-			    const char *dict, size_t dictlen,
+-			    const char *fmt, va_list args)
++/* Must be called under logbuf_lock. */
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args)
+ {
+ 	static char textbuf[LOG_LINE_MAX];
+ 	char *text = textbuf;
+ 	size_t text_len;
+ 	enum log_flags lflags = 0;
+-	unsigned long flags;
+-	int printed_len;
+-	bool in_sched = false;
+-
+-	if (level == LOGLEVEL_SCHED) {
+-		level = LOGLEVEL_DEFAULT;
+-		in_sched = true;
+-	}
+-
+-	boot_delay_msec(level);
+-	printk_delay();
+ 
+-	/* This stops the holder of console_sem just where we want him */
+-	logbuf_lock_irqsave(flags);
+ 	/*
+ 	 * The printf needs to come first; we need the syslog
+ 	 * prefix which might be passed-in as a parameter.
+@@ -1886,8 +1874,29 @@ asmlinkage int vprintk_emit(int facility, int level,
+ 	if (dict)
+ 		lflags |= LOG_PREFIX|LOG_NEWLINE;
+ 
+-	printed_len = log_output(facility, level, lflags, dict, dictlen, text, text_len);
++	return log_output(facility, level, lflags,
++			  dict, dictlen, text, text_len);
++}
+ 
++asmlinkage int vprintk_emit(int facility, int level,
++			    const char *dict, size_t dictlen,
++			    const char *fmt, va_list args)
++{
++	int printed_len;
++	bool in_sched = false;
++	unsigned long flags;
++
++	if (level == LOGLEVEL_SCHED) {
++		level = LOGLEVEL_DEFAULT;
++		in_sched = true;
++	}
++
++	boot_delay_msec(level);
++	printk_delay();
++
++	/* This stops the holder of console_sem just where we want him */
++	logbuf_lock_irqsave(flags);
++	printed_len = vprintk_store(facility, level, dict, dictlen, fmt, args);
+ 	logbuf_unlock_irqrestore(flags);
+ 
+ 	/* If called from the scheduler, we can not call up(). */
+@@ -2878,16 +2887,20 @@ void wake_up_klogd(void)
+ 	preempt_enable();
+ }
+ 
+-int vprintk_deferred(const char *fmt, va_list args)
++void defer_console_output(void)
+ {
+-	int r;
+-
+-	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
+-
+ 	preempt_disable();
+ 	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+ 	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+ 	preempt_enable();
++}
++
++int vprintk_deferred(const char *fmt, va_list args)
++{
++	int r;
++
++	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
++	defer_console_output();
+ 
+ 	return r;
+ }
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index d7d091309054..a0a74c533e4b 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -308,24 +308,33 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 
+ void printk_nmi_enter(void)
+ {
+-	/*
+-	 * The size of the extra per-CPU buffer is limited. Use it only when
+-	 * the main one is locked. If this CPU is not in the safe context,
+-	 * the lock must be taken on another CPU and we could wait for it.
+-	 */
+-	if ((this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK) &&
+-	    raw_spin_is_locked(&logbuf_lock)) {
+-		this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+-	} else {
+-		this_cpu_or(printk_context, PRINTK_NMI_DEFERRED_CONTEXT_MASK);
+-	}
++	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+ void printk_nmi_exit(void)
+ {
+-	this_cpu_and(printk_context,
+-		     ~(PRINTK_NMI_CONTEXT_MASK |
+-		       PRINTK_NMI_DEFERRED_CONTEXT_MASK));
++	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
++}
++
++/*
++ * Marks a code that might produce many messages in NMI context
++ * and the risk of losing them is more critical than eventual
++ * reordering.
++ *
++ * It has effect only when called in NMI context. Then printk()
++ * will try to store the messages into the main logbuf directly
++ * and use the per-CPU buffers only as a fallback when the lock
++ * is not available.
++ */
++void printk_nmi_direct_enter(void)
++{
++	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
++		this_cpu_or(printk_context, PRINTK_NMI_DIRECT_CONTEXT_MASK);
++}
++
++void printk_nmi_direct_exit(void)
++{
++	this_cpu_and(printk_context, ~PRINTK_NMI_DIRECT_CONTEXT_MASK);
+ }
+ 
+ #else
+@@ -363,6 +372,20 @@ void __printk_safe_exit(void)
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ {
++	/*
++	 * Try to use the main logbuf even in NMI. But avoid calling console
++	 * drivers that might have their own locks.
++	 */
++	if ((this_cpu_read(printk_context) & PRINTK_NMI_DIRECT_CONTEXT_MASK) &&
++	    raw_spin_trylock(&logbuf_lock)) {
++		int len;
++
++		len = vprintk_store(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
++		raw_spin_unlock(&logbuf_lock);
++		defer_console_output();
++		return len;
++	}
++
+ 	/* Use extra buffer in NMI when logbuf_lock is taken or in safe mode. */
+ 	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
+ 		return vprintk_nmi(fmt, args);
+@@ -371,13 +394,6 @@ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ 	if (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK)
+ 		return vprintk_safe(fmt, args);
+ 
+-	/*
+-	 * Use the main logbuf when logbuf_lock is available in NMI.
+-	 * But avoid calling console drivers that might have their own locks.
+-	 */
+-	if (this_cpu_read(printk_context) & PRINTK_NMI_DEFERRED_CONTEXT_MASK)
+-		return vprintk_deferred(fmt, args);
+-
+ 	/* No obstacles. */
+ 	return vprintk_default(fmt, args);
+ }
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index e190d1ef3a23..067cb83f37ea 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -81,6 +81,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	unsigned long flags;
+ 	bool enabled;
+ 
++	preempt_disable();
+ 	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+@@ -90,6 +91,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 
+ 	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return enabled;
+ }
+@@ -236,13 +238,24 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
+ 	DEFINE_WAKE_Q(wakeq);
+ 	int err;
++
+ retry:
++	/*
++	 * The waking up of stopper threads has to happen in the same
++	 * scheduling context as the queueing.  Otherwise, there is a
++	 * possibility of one of the above stoppers being woken up by another
++	 * CPU, and preempting us. This will cause us to not wake up the other
++	 * stopper forever.
++	 */
++	preempt_disable();
+ 	raw_spin_lock_irq(&stopper1->lock);
+ 	raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
+ 
+-	err = -ENOENT;
+-	if (!stopper1->enabled || !stopper2->enabled)
++	if (!stopper1->enabled || !stopper2->enabled) {
++		err = -ENOENT;
+ 		goto unlock;
++	}
++
+ 	/*
+ 	 * Ensure that if we race with __stop_cpus() the stoppers won't get
+ 	 * queued up in reverse order leading to system deadlock.
+@@ -253,36 +266,30 @@ retry:
+ 	 * It can be falsely true but it is safe to spin until it is cleared,
+ 	 * queue_stop_cpus_work() does everything under preempt_disable().
+ 	 */
+-	err = -EDEADLK;
+-	if (unlikely(stop_cpus_in_progress))
+-			goto unlock;
++	if (unlikely(stop_cpus_in_progress)) {
++		err = -EDEADLK;
++		goto unlock;
++	}
+ 
+ 	err = 0;
+ 	__cpu_stop_queue_work(stopper1, work1, &wakeq);
+ 	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+-	/*
+-	 * The waking up of stopper threads has to happen
+-	 * in the same scheduling context as the queueing.
+-	 * Otherwise, there is a possibility of one of the
+-	 * above stoppers being woken up by another CPU,
+-	 * and preempting us. This will cause us to n ot
+-	 * wake up the other stopper forever.
+-	 */
+-	preempt_disable();
++
+ unlock:
+ 	raw_spin_unlock(&stopper2->lock);
+ 	raw_spin_unlock_irq(&stopper1->lock);
+ 
+ 	if (unlikely(err == -EDEADLK)) {
++		preempt_enable();
++
+ 		while (stop_cpus_in_progress)
+ 			cpu_relax();
++
+ 		goto retry;
+ 	}
+ 
+-	if (!err) {
+-		wake_up_q(&wakeq);
+-		preempt_enable();
+-	}
++	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return err;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 823687997b01..176debd3481b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8288,6 +8288,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	tracing_off();
+ 
+ 	local_irq_save(flags);
++	printk_nmi_direct_enter();
+ 
+ 	/* Simulate the iterator */
+ 	trace_init_global_iter(&iter);
+@@ -8367,7 +8368,8 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	for_each_tracing_cpu(cpu) {
+ 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
+ 	}
+- 	atomic_dec(&dump_running);
++	atomic_dec(&dump_running);
++	printk_nmi_direct_exit();
+ 	local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL_GPL(ftrace_dump);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 576d18045811..51f5a64d9ec2 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -266,7 +266,7 @@ static void __touch_watchdog(void)
+  * entering idle state.  This should only be used for scheduler events.
+  * Use touch_softlockup_watchdog() for everything else.
+  */
+-void touch_softlockup_watchdog_sched(void)
++notrace void touch_softlockup_watchdog_sched(void)
+ {
+ 	/*
+ 	 * Preemption can be enabled.  It doesn't matter which CPU's timestamp
+@@ -275,7 +275,7 @@ void touch_softlockup_watchdog_sched(void)
+ 	raw_cpu_write(watchdog_touch_ts, 0);
+ }
+ 
+-void touch_softlockup_watchdog(void)
++notrace void touch_softlockup_watchdog(void)
+ {
+ 	touch_softlockup_watchdog_sched();
+ 	wq_watchdog_touch(raw_smp_processor_id());
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index e449a23e9d59..4ece6028007a 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -29,7 +29,7 @@ static struct cpumask dead_events_mask;
+ static unsigned long hardlockup_allcpu_dumped;
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+ 
+-void arch_touch_nmi_watchdog(void)
++notrace void arch_touch_nmi_watchdog(void)
+ {
+ 	/*
+ 	 * Using __raw here because some code paths have
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 78b192071ef7..5f78c6e41796 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5559,7 +5559,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 	mod_timer(&wq_watchdog_timer, jiffies + thresh);
+ }
+ 
+-void wq_watchdog_touch(int cpu)
++notrace void wq_watchdog_touch(int cpu)
+ {
+ 	if (cpu >= 0)
+ 		per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
+diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
+index 61a6b5aab07e..15ca78e1c7d4 100644
+--- a/lib/nmi_backtrace.c
++++ b/lib/nmi_backtrace.c
+@@ -87,11 +87,9 @@ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+ 
+ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ {
+-	static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ 	int cpu = smp_processor_id();
+ 
+ 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
+-		arch_spin_lock(&lock);
+ 		if (regs && cpu_in_idle(instruction_pointer(regs))) {
+ 			pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n",
+ 				cpu, (void *)instruction_pointer(regs));
+@@ -102,7 +100,6 @@ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ 			else
+ 				dump_stack();
+ 		}
+-		arch_spin_unlock(&lock);
+ 		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
+ 		return true;
+ 	}
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index a48aaa79d352..cda186230287 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1942,6 +1942,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 		case 'F':
+ 			return device_node_string(buf, end, ptr, spec, fmt + 1);
+ 		}
++		break;
+ 	case 'x':
+ 		return pointer_string(buf, end, ptr, spec);
+ 	}
+diff --git a/mm/memory.c b/mm/memory.c
+index 0e356dd923c2..86d4329acb05 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+ 
+ 	tlb_flush(tlb);
+ 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
+-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+-	tlb_table_flush(tlb);
+-#endif
+ 	__tlb_reset_range(tlb);
+ }
+ 
+@@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+ {
+ 	struct mmu_gather_batch *batch;
+ 
++#ifdef CONFIG_HAVE_RCU_TABLE_FREE
++	tlb_table_flush(tlb);
++#endif
+ 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
+ 		free_pages_and_swap_cache(batch->pages, batch->nr);
+ 		batch->nr = 0;
+@@ -330,6 +330,21 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
+  * See the comment near struct mmu_table_batch.
+  */
+ 
++/*
++ * If we want tlb_remove_table() to imply TLB invalidates.
++ */
++static inline void tlb_table_invalidate(struct mmu_gather *tlb)
++{
++#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
++	/*
++	 * Invalidate page-table caches used by hardware walkers. Then we still
++	 * need to RCU-sched wait while freeing the pages because software
++	 * walkers can still be in-flight.
++	 */
++	tlb_flush_mmu_tlbonly(tlb);
++#endif
++}
++
+ static void tlb_remove_table_smp_sync(void *arg)
+ {
+ 	/* Simply deliver the interrupt */
+@@ -366,6 +381,7 @@ void tlb_table_flush(struct mmu_gather *tlb)
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+ 	if (*batch) {
++		tlb_table_invalidate(tlb);
+ 		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
+ 		*batch = NULL;
+ 	}
+@@ -387,11 +403,13 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
++			tlb_table_invalidate(tlb);
+ 			tlb_remove_table_one(table);
+ 			return;
+ 		}
+ 		(*batch)->nr = 0;
+ 	}
++
+ 	(*batch)->tables[(*batch)->nr++] = table;
+ 	if ((*batch)->nr == MAX_TABLE_BATCH)
+ 		tlb_table_flush(tlb);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 16161a36dc73..e8d1024dc547 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -280,7 +280,6 @@ rpcrdma_conn_upcall(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ 		++xprt->rx_xprt.connect_cookie;
+ 		connstate = -ECONNABORTED;
+ connected:
+-		xprt->rx_buf.rb_credits = 1;
+ 		ep->rep_connected = connstate;
+ 		rpcrdma_conn_func(ep);
+ 		wake_up_all(&ep->rep_connect_wait);
+@@ -755,6 +754,7 @@ retry:
+ 	}
+ 
+ 	ep->rep_connected = 0;
++	rpcrdma_post_recvs(r_xprt, true);
+ 
+ 	rc = rdma_connect(ia->ri_id, &ep->rep_remote_cma);
+ 	if (rc) {
+@@ -773,8 +773,6 @@ retry:
+ 
+ 	dprintk("RPC:       %s: connected\n", __func__);
+ 
+-	rpcrdma_post_recvs(r_xprt, true);
+-
+ out:
+ 	if (rc)
+ 		ep->rep_connected = rc;
+@@ -1171,6 +1169,7 @@ rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
+ 		list_add(&req->rl_list, &buf->rb_send_bufs);
+ 	}
+ 
++	buf->rb_credits = 1;
+ 	buf->rb_posted_receives = 0;
+ 	INIT_LIST_HEAD(&buf->rb_recv_bufs);
+ 
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 0057d8eafcc1..8f0f508a78e9 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1062,7 +1062,7 @@ sub dump_struct($$) {
+     my $x = shift;
+     my $file = shift;
+ 
+-    if ($x =~ /(struct|union)\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /(struct|union)\s+(\w+)\s*\{(.*)\}/) {
+ 	my $decl_type = $1;
+ 	$declaration_name = $2;
+ 	my $members = $3;
+@@ -1148,20 +1148,20 @@ sub dump_struct($$) {
+ 				}
+ 			}
+ 		}
+-		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)}([^\{\}\;]*)\;/$newmember/;
++		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)\}([^\{\}\;]*)\;/$newmember/;
+ 	}
+ 
+ 	# Ignore other nested elements, like enums
+-	$members =~ s/({[^\{\}]*})//g;
++	$members =~ s/(\{[^\{\}]*\})//g;
+ 
+ 	create_parameterlist($members, ';', $file, $declaration_name);
+ 	check_sections($file, $declaration_name, $decl_type, $sectcheck, $struct_actual);
+ 
+ 	# Adjust declaration for better display
+-	$declaration =~ s/([{;])/$1\n/g;
+-	$declaration =~ s/}\s+;/};/g;
++	$declaration =~ s/([\{;])/$1\n/g;
++	$declaration =~ s/\}\s+;/};/g;
+ 	# Better handle inlined enums
+-	do {} while ($declaration =~ s/(enum\s+{[^}]+),([^\n])/$1,\n$2/);
++	do {} while ($declaration =~ s/(enum\s+\{[^\}]+),([^\n])/$1,\n$2/);
+ 
+ 	my @def_args = split /\n/, $declaration;
+ 	my $level = 1;
+@@ -1171,12 +1171,12 @@ sub dump_struct($$) {
+ 		$clause =~ s/\s+$//;
+ 		$clause =~ s/\s+/ /;
+ 		next if (!$clause);
+-		$level-- if ($clause =~ m/(})/ && $level > 1);
++		$level-- if ($clause =~ m/(\})/ && $level > 1);
+ 		if (!($clause =~ m/^\s*#/)) {
+ 			$declaration .= "\t" x $level;
+ 		}
+ 		$declaration .= "\t" . $clause . "\n";
+-		$level++ if ($clause =~ m/({)/ && !($clause =~m/}/));
++		$level++ if ($clause =~ m/(\{)/ && !($clause =~m/\}/));
+ 	}
+ 	output_declaration($declaration_name,
+ 			   'struct',
+@@ -1244,7 +1244,7 @@ sub dump_enum($$) {
+     # strip #define macros inside enums
+     $x =~ s@#\s*((define|ifdef)\s+|endif)[^;]*;@@gos;
+ 
+-    if ($x =~ /enum\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /enum\s+(\w+)\s*\{(.*)\}/) {
+ 	$declaration_name = $1;
+ 	my $members = $2;
+ 	my %_members;
+@@ -1785,7 +1785,7 @@ sub process_proto_type($$) {
+     }
+ 
+     while (1) {
+-	if ( $x =~ /([^{};]*)([{};])(.*)/ ) {
++	if ( $x =~ /([^\{\};]*)([\{\};])(.*)/ ) {
+             if( length $prototype ) {
+                 $prototype .= " "
+             }
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 2fcdd84021a5..86c7805da997 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2642,7 +2642,10 @@ int wm_adsp2_preloader_get(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
++	struct soc_mixer_control *mc =
++		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 
+ 	ucontrol->value.integer.value[0] = dsp->preloaded;
+ 
+@@ -2654,10 +2657,11 @@ int wm_adsp2_preloader_put(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 	char preload[32];
+ 
+ 	snprintf(preload, ARRAY_SIZE(preload), "DSP%u Preload", mc->shift);
+diff --git a/sound/soc/sirf/sirf-usp.c b/sound/soc/sirf/sirf-usp.c
+index 77e7dcf969d0..d70fcd4a1adf 100644
+--- a/sound/soc/sirf/sirf-usp.c
++++ b/sound/soc/sirf/sirf-usp.c
+@@ -370,10 +370,9 @@ static int sirf_usp_pcm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, usp);
+ 
+ 	mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	base = devm_ioremap(&pdev->dev, mem_res->start,
+-		resource_size(mem_res));
+-	if (base == NULL)
+-		return -ENOMEM;
++	base = devm_ioremap_resource(&pdev->dev, mem_res);
++	if (IS_ERR(base))
++		return PTR_ERR(base);
+ 	usp->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+ 					    &sirf_usp_regmap_config);
+ 	if (IS_ERR(usp->regmap))
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5e7ae47a9658..5feae9666822 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1694,6 +1694,14 @@ static u64 dpcm_runtime_base_format(struct snd_pcm_substream *substream)
+ 		int i;
+ 
+ 		for (i = 0; i < be->num_codecs; i++) {
++			/*
++			 * Skip CODECs which don't support the current stream
++			 * type. See soc_pcm_init_runtime_hw() for more details
++			 */
++			if (!snd_soc_dai_stream_valid(be->codec_dais[i],
++						      stream))
++				continue;
++
+ 			codec_dai_drv = be->codec_dais[i]->driver;
+ 			if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+ 				codec_stream = &codec_dai_drv->playback;
+diff --git a/sound/soc/zte/zx-tdm.c b/sound/soc/zte/zx-tdm.c
+index dc955272f58b..389272eeba9a 100644
+--- a/sound/soc/zte/zx-tdm.c
++++ b/sound/soc/zte/zx-tdm.c
+@@ -144,8 +144,8 @@ static void zx_tdm_rx_dma_en(struct zx_tdm_info *tdm, bool on)
+ #define ZX_TDM_RATES	(SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000)
+ 
+ #define ZX_TDM_FMTBIT \
+-	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FORMAT_MU_LAW | \
+-	SNDRV_PCM_FORMAT_A_LAW)
++	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_MU_LAW | \
++	SNDRV_PCM_FMTBIT_A_LAW)
+ 
+ static int zx_tdm_dai_probe(struct snd_soc_dai *dai)
+ {
+diff --git a/tools/perf/arch/s390/util/kvm-stat.c b/tools/perf/arch/s390/util/kvm-stat.c
+index d233e2eb9592..aaabab5e2830 100644
+--- a/tools/perf/arch/s390/util/kvm-stat.c
++++ b/tools/perf/arch/s390/util/kvm-stat.c
+@@ -102,7 +102,7 @@ const char * const kvm_skip_events[] = {
+ 
+ int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid)
+ {
+-	if (strstr(cpuid, "IBM/S390")) {
++	if (strstr(cpuid, "IBM")) {
+ 		kvm->exit_reasons = sie_exit_reasons;
+ 		kvm->exit_reasons_isa = "SIE";
+ 	} else
+diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
+index bd3d57f40f1b..17cecc96f735 100644
+--- a/virt/kvm/arm/arch_timer.c
++++ b/virt/kvm/arm/arch_timer.c
+@@ -295,9 +295,9 @@ static void phys_timer_emulate(struct kvm_vcpu *vcpu)
+ 	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	/*
+-	 * If the timer can fire now we have just raised the IRQ line and we
+-	 * don't need to have a soft timer scheduled for the future.  If the
+-	 * timer cannot fire at all, then we also don't need a soft timer.
++	 * If the timer can fire now, we don't need to have a soft timer
++	 * scheduled for the future.  If the timer cannot fire at all,
++	 * then we also don't need a soft timer.
+ 	 */
+ 	if (kvm_timer_should_fire(ptimer) || !kvm_timer_irq_can_fire(ptimer)) {
+ 		soft_timer_cancel(&timer->phys_timer, NULL);
+@@ -332,10 +332,10 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
+ 	level = kvm_timer_should_fire(vtimer);
+ 	kvm_timer_update_irq(vcpu, level, vtimer);
+ 
++	phys_timer_emulate(vcpu);
++
+ 	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
+ 		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+-
+-	phys_timer_emulate(vcpu);
+ }
+ 
+ static void vtimer_save_state(struct kvm_vcpu *vcpu)
+@@ -487,6 +487,7 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ {
+ 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
++	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	if (unlikely(!timer->enabled))
+ 		return;
+@@ -502,6 +503,10 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ 
+ 	/* Set the background timer for the physical timer emulation. */
+ 	phys_timer_emulate(vcpu);
++
++	/* If the timer fired while we weren't running, inject it now */
++	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
++		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+ }
+ 
+ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 1d90d79706bd..c2b95a22959b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1015,19 +1015,35 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+ 	pmd = stage2_get_pmd(kvm, cache, addr);
+ 	VM_BUG_ON(!pmd);
+ 
+-	/*
+-	 * Mapping in huge pages should only happen through a fault.  If a
+-	 * page is merged into a transparent huge page, the individual
+-	 * subpages of that huge page should be unmapped through MMU
+-	 * notifiers before we get here.
+-	 *
+-	 * Merging of CompoundPages is not supported; they should become
+-	 * splitting first, unmapped, merged, and mapped back in on-demand.
+-	 */
+-	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
+-
+ 	old_pmd = *pmd;
+ 	if (pmd_present(old_pmd)) {
++		/*
++		 * Multiple vcpus faulting on the same PMD entry, can
++		 * lead to them sequentially updating the PMD with the
++		 * same value. Following the break-before-make
++		 * (pmd_clear() followed by tlb_flush()) process can
++		 * hinder forward progress due to refaults generated
++		 * on missing translations.
++		 *
++		 * Skip updating the page table if the entry is
++		 * unchanged.
++		 */
++		if (pmd_val(old_pmd) == pmd_val(*new_pmd))
++			return 0;
++
++		/*
++		 * Mapping in huge pages should only happen through a
++		 * fault.  If a page is merged into a transparent huge
++		 * page, the individual subpages of that huge page
++		 * should be unmapped through MMU notifiers before we
++		 * get here.
++		 *
++		 * Merging of CompoundPages is not supported; they
++		 * should become splitting first, unmapped, merged,
++		 * and mapped back in on-demand.
++		 */
++		VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
++
+ 		pmd_clear(pmd);
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {
+@@ -1102,6 +1118,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+ 	/* Create 2nd stage page table mapping - Level 3 */
+ 	old_pte = *pte;
+ 	if (pte_present(old_pte)) {
++		/* Skip page table update if there is no change */
++		if (pte_val(old_pte) == pte_val(*new_pte))
++			return 0;
++
+ 		kvm_set_pte(pte, __pte(0));
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
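Both `virt/kvm/arm/mmu.c` hunks in the patch above apply the same optimization: when multiple vCPUs fault on the same stage-2 entry, skip the break-before-make sequence (clear the entry, flush the TLB, write the new entry) entirely if the new value equals the old one, avoiding refaults on the transiently missing translation. A minimal standalone sketch of that control flow — the `pte_t` type and the flush function are hypothetical stand-ins for the kernel's types and `kvm_tlb_flush_vmid_ipa()`, not the real API:

```c
#include <stdint.h>

typedef uint64_t pte_t;          /* stand-in for the kernel's pte/pmd types */

static int tlb_flushes;          /* counts simulated TLB invalidations */

static void fake_tlb_flush(void) /* stand-in for kvm_tlb_flush_vmid_ipa() */
{
    tlb_flushes++;
}

/*
 * Install *new_pte at *slot using break-before-make, but skip the
 * update entirely when the entry is unchanged -- mirroring the
 * "Skip updating the page table if the entry is unchanged" hunks.
 * Returns 0 when the update was skipped, 1 when it was performed.
 */
static int set_stage2_entry(pte_t *slot, const pte_t *new_pte)
{
    pte_t old = *slot;

    if (old != 0) {                  /* entry present */
        if (old == *new_pte)
            return 0;                /* unchanged: no clear, no flush */
        *slot = 0;                   /* break ...                      */
        fake_tlb_flush();            /* ... flush stale translation ... */
    }
    *slot = *new_pte;                /* ... make                       */
    return 1;
}
```

The early return is what stops concurrently faulting vCPUs from repeatedly clearing and reinstalling an identical entry.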
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     9d152ea4454cd6ccffcb5738f232da5974d6a081
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 24 11:46:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d152ea4

Linux patch 4.18.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1004_linux-4.18.5.patch | 742 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 746 insertions(+)

diff --git a/0000_README b/0000_README
index c7d6cc0..8da0979 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-4.18.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.4
 
+Patch:  1004_linux-4.18.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-4.18.5.patch b/1004_linux-4.18.5.patch
new file mode 100644
index 0000000..abf70a2
--- /dev/null
+++ b/1004_linux-4.18.5.patch
@@ -0,0 +1,742 @@
+diff --git a/Makefile b/Makefile
+index ef0dd566c104..a41692c5827a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 6f84b6acc86e..8a63515f03bf 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ {
+ 	volatile unsigned int *a;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+ 	while (__ldcw(a) == 0)
+ 		while (*a == 0)
+@@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ 				local_irq_disable();
+ 			} else
+ 				cpu_relax();
+-	mb();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+ 
+ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ {
+ 	volatile unsigned int *a;
+-	mb();
++
+ 	a = __ldcw_align(x);
+-	*a = 1;
+ 	mb();
++	*a = 1;
+ }
+ 
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+@@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x)
+ 	volatile unsigned int *a;
+ 	int ret;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+         ret = __ldcw(a) != 0;
+-	mb();
+ 
+ 	return ret;
+ }
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4886a6db42e9..5f7e57fcaeef 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -629,12 +629,12 @@ cas_action:
+ 	stw	%r1, 4(%sr2,%r20)
+ #endif
+ 	/* The load and store could fail */
+-1:	ldw,ma	0(%r26), %r28
++1:	ldw	0(%r26), %r28
+ 	sub,<>	%r28, %r25, %r0
+-2:	stw,ma	%r24, 0(%r26)
++2:	stw	%r24, 0(%r26)
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ 	/* Clear thread register indicator */
+ 	stw	%r0, 4(%sr2,%r20)
+@@ -798,30 +798,30 @@ cas2_action:
+ 	ldo	1(%r0),%r28
+ 
+ 	/* 8bit CAS */
+-13:	ldb,ma	0(%r26), %r29
++13:	ldb	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-14:	stb,ma	%r24, 0(%r26)
++14:	stb	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 16bit CAS */
+-15:	ldh,ma	0(%r26), %r29
++15:	ldh	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-16:	sth,ma	%r24, 0(%r26)
++16:	sth	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 32bit CAS */
+-17:	ldw,ma	0(%r26), %r29
++17:	ldw	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-18:	stw,ma	%r24, 0(%r26)
++18:	stw	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+@@ -829,10 +829,10 @@ cas2_action:
+ 
+ 	/* 64bit CAS */
+ #ifdef CONFIG_64BIT
+-19:	ldd,ma	0(%r26), %r29
++19:	ldd	0(%r26), %r29
+ 	sub,*=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-20:	std,ma	%r24, 0(%r26)
++20:	std	%r24, 0(%r26)
+ 	copy	%r0, %r28
+ #else
+ 	/* Compare first word */
+@@ -851,7 +851,7 @@ cas2_action:
+ cas2_end:
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ 	/* Enable interrupts */
+ 	ssm	PSW_SM_I, %r0
+ 	/* Return to userspace, set no error */
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index a8b277362931..4cb8f1f7b593 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -117,25 +117,35 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
+ 
+ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
+-		return sprintf(buf, "Not affected\n");
++	struct seq_buf s;
++
++	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+-	if (barrier_nospec_enabled)
+-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
++	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
++		if (barrier_nospec_enabled)
++			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
++		else
++			seq_buf_printf(&s, "Vulnerable");
+ 
+-	return sprintf(buf, "Vulnerable\n");
++		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
++			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
++
++		seq_buf_printf(&s, "\n");
++	} else
++		seq_buf_printf(&s, "Not affected\n");
++
++	return s.len;
+ }
+ 
+ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	bool bcs, ccd, ori;
+ 	struct seq_buf s;
++	bool bcs, ccd;
+ 
+ 	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+ 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+-	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
+ 
+ 	if (bcs || ccd) {
+ 		seq_buf_printf(&s, "Mitigation: ");
+@@ -151,9 +161,6 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ 	} else
+ 		seq_buf_printf(&s, "Vulnerable");
+ 
+-	if (ori)
+-		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+-
+ 	seq_buf_printf(&s, "\n");
+ 
+ 	return s.len;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 79e409974ccc..682286aca881 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -971,6 +971,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+ 
+ extern unsigned long arch_align_stack(unsigned long sp);
+ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
++extern void free_kernel_image_pages(void *begin, void *end);
+ 
+ void default_idle(void);
+ #ifdef	CONFIG_XEN
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index bd090367236c..34cffcef7375 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, int numpages);
+ int set_memory_4k(unsigned long addr, int numpages);
+ int set_memory_encrypted(unsigned long addr, int numpages);
+ int set_memory_decrypted(unsigned long addr, int numpages);
++int set_memory_np_noalias(unsigned long addr, int numpages);
+ 
+ int set_memory_array_uc(unsigned long *addr, int addrinarray);
+ int set_memory_array_wc(unsigned long *addr, int addrinarray);
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 83241eb71cd4..acfab322fbe0 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -775,13 +775,44 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ 	}
+ }
+ 
++/*
++ * begin/end can be in the direct map or the "high kernel mapping"
++ * used for the kernel image only.  free_init_pages() will do the
++ * right thing for either kind of address.
++ */
++void free_kernel_image_pages(void *begin, void *end)
++{
++	unsigned long begin_ul = (unsigned long)begin;
++	unsigned long end_ul = (unsigned long)end;
++	unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT;
++
++
++	free_init_pages("unused kernel image", begin_ul, end_ul);
++
++	/*
++	 * PTI maps some of the kernel into userspace.  For performance,
++	 * this includes some kernel areas that do not contain secrets.
++	 * Those areas might be adjacent to the parts of the kernel image
++	 * being freed, which may contain secrets.  Remove the "high kernel
++	 * image mapping" for these freed areas, ensuring they are not even
++	 * potentially vulnerable to Meltdown regardless of the specific
++	 * optimizations PTI is currently using.
++	 *
++	 * The "noalias" prevents unmapping the direct map alias which is
++	 * needed to access the freed pages.
++	 *
++	 * This is only valid for 64bit kernels. 32bit has only one mapping
++	 * which can't be treated in this way for obvious reasons.
++	 */
++	if (IS_ENABLED(CONFIG_X86_64) && cpu_feature_enabled(X86_FEATURE_PTI))
++		set_memory_np_noalias(begin_ul, len_pages);
++}
++
+ void __ref free_initmem(void)
+ {
+ 	e820__reallocate_tables();
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long)(&__init_begin),
+-			(unsigned long)(&__init_end));
++	free_kernel_image_pages(&__init_begin, &__init_end);
+ }
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index a688617c727e..68c292cb1ebf 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1283,12 +1283,8 @@ void mark_rodata_ro(void)
+ 	set_memory_ro(start, (end-start) >> PAGE_SHIFT);
+ #endif
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(text_end)),
+-			(unsigned long) __va(__pa_symbol(rodata_start)));
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(rodata_end)),
+-			(unsigned long) __va(__pa_symbol(_sdata)));
++	free_kernel_image_pages((void *)text_end, (void *)rodata_start);
++	free_kernel_image_pages((void *)rodata_end, (void *)_sdata);
+ 
+ 	debug_checkwx();
+ 
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 29505724202a..8d6c34fe49be 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock);
+ #define CPA_FLUSHTLB 1
+ #define CPA_ARRAY 2
+ #define CPA_PAGES_ARRAY 4
++#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */
+ 
+ #ifdef CONFIG_PROC_FS
+ static unsigned long direct_pages_count[PG_LEVEL_NUM];
+@@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
+ 
+ 	/* No alias checking for _NX bit modifications */
+ 	checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX;
++	/* Has caller explicitly disabled alias checking? */
++	if (in_flag & CPA_NO_CHECK_ALIAS)
++		checkalias = 0;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, checkalias);
+ 
+@@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, int numpages)
+ 	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+ }
+ 
++int set_memory_np_noalias(unsigned long addr, int numpages)
++{
++	int cpa_flags = CPA_NO_CHECK_ALIAS;
++
++	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
++					__pgprot(_PAGE_PRESENT), 0,
++					cpa_flags, NULL);
++}
++
+ int set_memory_4k(unsigned long addr, int numpages)
+ {
+ 	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 3bb82e511eca..7d3edd713932 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -215,6 +215,7 @@ const char * const edac_mem_types[] = {
+ 	[MEM_LRDDR3]	= "Load-Reduced-DDR3-RAM",
+ 	[MEM_DDR4]	= "Unbuffered-DDR4",
+ 	[MEM_RDDR4]	= "Registered-DDR4",
++	[MEM_LRDDR4]	= "Load-Reduced-DDR4-RAM",
+ 	[MEM_NVDIMM]	= "Non-volatile-RAM",
+ };
+ EXPORT_SYMBOL_GPL(edac_mem_types);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index fc818b4d849c..a44c3d58fef4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -31,7 +31,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+-
++#include <linux/nospec.h>
+ 
+ static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
+ 
+@@ -393,6 +393,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
+ 			count = -EINVAL;
+ 			goto fail;
+ 		}
++		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+ 
+ 		amdgpu_dpm_get_pp_num_states(adev, &data);
+ 		state = data.states[idx];
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index df4e4a07db3d..14dce5c201d5 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -43,6 +43,8 @@
+ #include <linux/mdev.h>
+ #include <linux/debugfs.h>
+ 
++#include <linux/nospec.h>
++
+ #include "i915_drv.h"
+ #include "gvt.h"
+ 
+@@ -1084,7 +1086,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 	} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+ 		struct vfio_region_info info;
+ 		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+-		int i, ret;
++		unsigned int i;
++		int ret;
+ 		struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
+ 		size_t size;
+ 		int nr_areas = 1;
+@@ -1169,6 +1172,10 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 				if (info.index >= VFIO_PCI_NUM_REGIONS +
+ 						vgpu->vdev.num_regions)
+ 					return -EINVAL;
++				info.index =
++					array_index_nospec(info.index,
++							VFIO_PCI_NUM_REGIONS +
++							vgpu->vdev.num_regions);
+ 
+ 				i = info.index - VFIO_PCI_NUM_REGIONS;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 498c5e891649..ad6adefb64da 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -668,9 +668,6 @@ static int i2c_imx_dma_read(struct imx_i2c_struct *i2c_imx,
+ 	struct imx_i2c_dma *dma = i2c_imx->dma;
+ 	struct device *dev = &i2c_imx->adapter.dev;
+ 
+-	temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
+-	temp |= I2CR_DMAEN;
+-	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 
+ 	dma->chan_using = dma->chan_rx;
+ 	dma->dma_transfer_dir = DMA_DEV_TO_MEM;
+@@ -783,6 +780,7 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	int i, result;
+ 	unsigned int temp;
+ 	int block_data = msgs->flags & I2C_M_RECV_LEN;
++	int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev,
+ 		"<%s> write slave address: addr=0x%x\n",
+@@ -809,12 +807,14 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	 */
+ 	if ((msgs->len - 1) || block_data)
+ 		temp &= ~I2CR_TXAK;
++	if (use_dma)
++		temp |= I2CR_DMAEN;
+ 	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 	imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); /* dummy read */
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev, "<%s> read data\n", __func__);
+ 
+-	if (i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data)
++	if (use_dma)
+ 		return i2c_imx_dma_read(i2c_imx, msgs, is_lastmsg);
+ 
+ 	/* read data */
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 7c3b4740b94b..b8f303dea305 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -482,11 +482,16 @@ static int acpi_gsb_i2c_write_bytes(struct i2c_client *client,
+ 	msgs[0].buf = buffer;
+ 
+ 	ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
+-	if (ret < 0)
+-		dev_err(&client->adapter->dev, "i2c write failed\n");
+ 
+ 	kfree(buffer);
+-	return ret;
++
++	if (ret < 0) {
++		dev_err(&client->adapter->dev, "i2c write failed: %d\n", ret);
++		return ret;
++	}
++
++	/* 1 transfer must have completed successfully */
++	return (ret == 1) ? 0 : -EIO;
+ }
+ 
+ static acpi_status
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0fae816fba39..44604af23b3a 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -952,6 +952,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 
+ 	bus = bridge->bus;
+ 
++	pci_bus_size_bridges(bus);
+ 	pci_bus_assign_resources(bus);
+ 
+ 	list_for_each_entry(child, &bus->children, node)
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index af92fed46ab7..fd93783a87b0 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -438,8 +438,17 @@ int __pci_hp_register(struct hotplug_slot *slot, struct pci_bus *bus,
+ 	list_add(&slot->slot_list, &pci_hotplug_slot_list);
+ 
+ 	result = fs_add_slot(pci_slot);
++	if (result)
++		goto err_list_del;
++
+ 	kobject_uevent(&pci_slot->kobj, KOBJ_ADD);
+ 	dbg("Added slot %s to the list\n", name);
++	goto out;
++
++err_list_del:
++	list_del(&slot->slot_list);
++	pci_slot->hotplug = NULL;
++	pci_destroy_slot(pci_slot);
+ out:
+ 	mutex_unlock(&pci_hp_mutex);
+ 	return result;
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 5f892065585e..fca87a1a2b22 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -119,6 +119,7 @@ int pciehp_unconfigure_device(struct slot *p_slot);
+ void pciehp_queue_pushbutton_work(struct work_struct *work);
+ struct controller *pcie_init(struct pcie_device *dev);
+ int pcie_init_notification(struct controller *ctrl);
++void pcie_shutdown_notification(struct controller *ctrl);
+ int pciehp_enable_slot(struct slot *p_slot);
+ int pciehp_disable_slot(struct slot *p_slot);
+ void pcie_reenable_notification(struct controller *ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 44a6a63802d5..2ba59fc94827 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -62,6 +62,12 @@ static int reset_slot(struct hotplug_slot *slot, int probe);
+  */
+ static void release_slot(struct hotplug_slot *hotplug_slot)
+ {
++	struct slot *slot = hotplug_slot->private;
++
++	/* queued work needs hotplug_slot name */
++	cancel_delayed_work(&slot->work);
++	drain_workqueue(slot->wq);
++
+ 	kfree(hotplug_slot->ops);
+ 	kfree(hotplug_slot->info);
+ 	kfree(hotplug_slot);
+@@ -264,6 +270,7 @@ static void pciehp_remove(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl = get_service_data(dev);
+ 
++	pcie_shutdown_notification(ctrl);
+ 	cleanup_slot(ctrl);
+ 	pciehp_release_ctrl(ctrl);
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 718b6073afad..aff191b4552c 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -539,8 +539,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ 	struct controller *ctrl = (struct controller *)dev_id;
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	struct pci_bus *subordinate = pdev->subordinate;
+-	struct pci_dev *dev;
+ 	struct slot *slot = ctrl->slot;
+ 	u16 status, events;
+ 	u8 present;
+@@ -588,14 +586,9 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ 		wake_up(&ctrl->queue);
+ 	}
+ 
+-	if (subordinate) {
+-		list_for_each_entry(dev, &subordinate->devices, bus_list) {
+-			if (dev->ignore_hotplug) {
+-				ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n",
+-					 events, pci_name(dev));
+-				return IRQ_HANDLED;
+-			}
+-		}
++	if (pdev->ignore_hotplug) {
++		ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events);
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	/* Check Attention Button Pressed */
+@@ -765,7 +758,7 @@ int pcie_init_notification(struct controller *ctrl)
+ 	return 0;
+ }
+ 
+-static void pcie_shutdown_notification(struct controller *ctrl)
++void pcie_shutdown_notification(struct controller *ctrl)
+ {
+ 	if (ctrl->notification_enabled) {
+ 		pcie_disable_notification(ctrl);
+@@ -800,7 +793,7 @@ abort:
+ static void pcie_cleanup_slot(struct controller *ctrl)
+ {
+ 	struct slot *slot = ctrl->slot;
+-	cancel_delayed_work(&slot->work);
++
+ 	destroy_workqueue(slot->wq);
+ 	kfree(slot);
+ }
+@@ -893,7 +886,6 @@ abort:
+ 
+ void pciehp_release_ctrl(struct controller *ctrl)
+ {
+-	pcie_shutdown_notification(ctrl);
+ 	pcie_cleanup_slot(ctrl);
+ 	kfree(ctrl);
+ }
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 89ee6a2b6eb8..5d1698265da5 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -632,13 +632,11 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ 	/*
+ 	 * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over
+ 	 * system-wide suspend/resume confuses the platform firmware, so avoid
+-	 * doing that, unless the bridge has a driver that should take care of
+-	 * the PM handling.  According to Section 16.1.6 of ACPI 6.2, endpoint
++	 * doing that.  According to Section 16.1.6 of ACPI 6.2, endpoint
+ 	 * devices are expected to be in D3 before invoking the S3 entry path
+ 	 * from the firmware, so they should not be affected by this issue.
+ 	 */
+-	if (pci_is_bridge(dev) && !dev->driver &&
+-	    acpi_target_system_state() != ACPI_STATE_S0)
++	if (pci_is_bridge(dev) && acpi_target_system_state() != ACPI_STATE_S0)
+ 		return true;
+ 
+ 	if (!adev || !acpi_device_power_manageable(adev))
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 316496e99da9..0abe2865a3a5 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1171,6 +1171,33 @@ static void pci_restore_config_space(struct pci_dev *pdev)
+ 	}
+ }
+ 
++static void pci_restore_rebar_state(struct pci_dev *pdev)
++{
++	unsigned int pos, nbars, i;
++	u32 ctrl;
++
++	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
++	if (!pos)
++		return;
++
++	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
++		    PCI_REBAR_CTRL_NBAR_SHIFT;
++
++	for (i = 0; i < nbars; i++, pos += 8) {
++		struct resource *res;
++		int bar_idx, size;
++
++		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
++		res = pdev->resource + bar_idx;
++		size = order_base_2((resource_size(res) >> 20) | 1) - 1;
++		ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
++		ctrl |= size << 8;
++		pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
++	}
++}
++
+ /**
+  * pci_restore_state - Restore the saved state of a PCI device
+  * @dev: - PCI device that we're dealing with
+@@ -1186,6 +1213,7 @@ void pci_restore_state(struct pci_dev *dev)
+ 	pci_restore_pri_state(dev);
+ 	pci_restore_ats_state(dev);
+ 	pci_restore_vc_state(dev);
++	pci_restore_rebar_state(dev);
+ 
+ 	pci_cleanup_aer_error_status_regs(dev);
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 611adcd9c169..b2857865c0aa 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1730,6 +1730,10 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+ 
++	/* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
++	if (dev->is_virtfn)
++		return;
++
+ 	mps = pcie_get_mps(dev);
+ 	p_mps = pcie_get_mps(bridge);
+ 
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index b0e2c4847a5d..678406e0948b 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -625,7 +625,7 @@ int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags)
+ 	if (tty->driver != ptm_driver)
+ 		return -EIO;
+ 
+-	fd = get_unused_fd_flags(0);
++	fd = get_unused_fd_flags(flags);
+ 	if (fd < 0) {
+ 		retval = fd;
+ 		goto err;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index f7ab34088162..8b24d3d42cb3 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -14,6 +14,7 @@
+ #include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/backing-dev.h>
+ #include <trace/events/ext4.h>
+ 
+@@ -2140,7 +2141,8 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
+ 		 * This should tell if fe_len is exactly power of 2
+ 		 */
+ 		if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
+-			ac->ac_2order = i - 1;
++			ac->ac_2order = array_index_nospec(i - 1,
++							   sb->s_blocksize_bits + 2);
+ 	}
+ 
+ 	/* if stream allocation is enabled, use global goal */
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index ff94fad477e4..48cdfc81fe10 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -792,8 +792,10 @@ static int listxattr_filler(struct dir_context *ctx, const char *name,
+ 			return 0;
+ 		size = namelen + 1;
+ 		if (b->buf) {
+-			if (size > b->size)
++			if (b->pos + size > b->size) {
++				b->pos = -ERANGE;
+ 				return -ERANGE;
++			}
+ 			memcpy(b->buf + b->pos, name, namelen);
+ 			b->buf[b->pos + namelen] = 0;
+ 		}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a790ef4be74e..3222193c46c6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6939,9 +6939,21 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
+ 	start = (void *)PAGE_ALIGN((unsigned long)start);
+ 	end = (void *)((unsigned long)end & PAGE_MASK);
+ 	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
++		struct page *page = virt_to_page(pos);
++		void *direct_map_addr;
++
++		/*
++		 * 'direct_map_addr' might be different from 'pos'
++		 * because some architectures' virt_to_page()
++		 * work with aliases.  Getting the direct map
++		 * address ensures that we get a _writeable_
++		 * alias for the memset().
++		 */
++		direct_map_addr = page_address(page);
+ 		if ((unsigned int)poison <= 0xFF)
+-			memset(pos, poison, PAGE_SIZE);
+-		free_reserved_page(virt_to_page(pos));
++			memset(direct_map_addr, poison, PAGE_SIZE);
++
++		free_reserved_page(page);
+ 	}
+ 
+ 	if (pages && s)


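Several hunks in the 4.18.5 patch above (`amdgpu_pm.c`, `kvmgt.c`, `mballoc.c`) insert `array_index_nospec()` to clamp an attacker-influenced index after its bounds check, so a mispredicted branch cannot speculatively read past the array (Spectre v1). A standalone sketch of the branchless clamp used by the generic fallback in `<linux/nospec.h>` — function names here are illustrative, and real architectures may substitute a cheaper barrier-based implementation:

```c
#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

/*
 * Branchless mask: all-ones when index < size, zero otherwise.
 * When index >= size, (size - 1 - index) underflows and sets the
 * top bit, so the complement's sign bit is clear and the
 * arithmetic right shift produces 0.
 */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
    return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
}

/*
 * Clamp a user-controlled index so that, even under branch
 * misprediction, it selects either the intended element or
 * element 0 -- never memory past the array.
 */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
    return index & index_mask_nospec(index, size);
}
```

Because the mask is computed with plain arithmetic rather than a branch, the CPU has no prediction to speculate through; out-of-range indices collapse to 0 on every path.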
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     1570a5af8b650ddddcd2d62a20f52698ac955f8d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 15 10:12:46 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1570a5af

Linux patch 4.18.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-4.18.8.patch | 6654 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6658 insertions(+)

diff --git a/0000_README b/0000_README
index f3682ca..597262e 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-4.18.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.7
 
+Patch:  1007_linux-4.18.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-4.18.8.patch b/1007_linux-4.18.8.patch
new file mode 100644
index 0000000..8a888c7
--- /dev/null
+++ b/1007_linux-4.18.8.patch
@@ -0,0 +1,6654 @@
+diff --git a/Makefile b/Makefile
+index 711b04d00e49..0d73431f66cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/mach-rockchip/Kconfig b/arch/arm/mach-rockchip/Kconfig
+index fafd3d7f9f8c..8ca926522026 100644
+--- a/arch/arm/mach-rockchip/Kconfig
++++ b/arch/arm/mach-rockchip/Kconfig
+@@ -17,6 +17,7 @@ config ARCH_ROCKCHIP
+ 	select ARM_GLOBAL_TIMER
+ 	select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
+ 	select ZONE_DMA if ARM_LPAE
++	select PM
+ 	help
+ 	  Support for Rockchip's Cortex-A9 Single-to-Quad-Core-SoCs
+ 	  containing the RK2928, RK30xx and RK31xx series.
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index d5aeac351fc3..21a715ad8222 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -151,6 +151,7 @@ config ARCH_ROCKCHIP
+ 	select GPIOLIB
+ 	select PINCTRL
+ 	select PINCTRL_ROCKCHIP
++	select PM
+ 	select ROCKCHIP_TIMER
+ 	help
+ 	  This enables support for the ARMv8 based Rockchip chipsets,
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 16b077801a5f..a4a718dbfec6 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -92,6 +92,7 @@ extern int stop_topology_update(void);
+ extern int prrn_is_enabled(void);
+ extern int find_and_online_cpu_nid(int cpu);
+ extern int timed_topology_update(int nsecs);
++extern void __init shared_proc_topology_init(void);
+ #else
+ static inline int start_topology_update(void)
+ {
+@@ -113,6 +114,10 @@ static inline int timed_topology_update(int nsecs)
+ {
+ 	return 0;
+ }
++
++#ifdef CONFIG_SMP
++static inline void shared_proc_topology_init(void) {}
++#endif
+ #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
+ 
+ #include <asm-generic/topology.h>
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 468653ce844c..327f6112fe8e 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -250,10 +250,17 @@ do {								\
+ 	}							\
+ } while (0)
+ 
++/*
++ * This is a type: either unsigned long, if the argument fits into
++ * that type, or otherwise unsigned long long.
++ */
++#define __long_type(x) \
++	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
++
+ #define __get_user_nocheck(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
+@@ -267,7 +274,7 @@ do {								\
+ #define __get_user_check(x, ptr, size)					\
+ ({									\
+ 	long __gu_err = -EFAULT;					\
+-	unsigned long  __gu_val = 0;					\
++	__long_type(*(ptr)) __gu_val = 0;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	might_fault();							\
+ 	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+@@ -281,7 +288,7 @@ do {								\
+ #define __get_user_nosleep(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	barrier_nospec();					\
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 285c6465324a..f817342aab8f 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1526,6 +1526,8 @@ TRAMP_REAL_BEGIN(stf_barrier_fallback)
+ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1560,12 +1562,15 @@ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	rfid
+ 
+ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1600,6 +1605,7 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	hrfid
+ 
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 4794d6b4f4d2..b3142c7b9c31 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1156,6 +1156,11 @@ void __init smp_cpus_done(unsigned int max_cpus)
+ 	if (smp_ops && smp_ops->bringup_done)
+ 		smp_ops->bringup_done();
+ 
++	/*
++	 * On a shared LPAR, associativity needs to be requested.
++	 * Hence, get numa topology before dumping cpu topology
++	 */
++	shared_proc_topology_init();
+ 	dump_numa_cpu_topology();
+ 
+ 	/*
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 0c7e05d89244..35ac5422903a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1078,7 +1078,6 @@ static int prrn_enabled;
+ static void reset_topology_timer(void);
+ static int topology_timer_secs = 1;
+ static int topology_inited;
+-static int topology_update_needed;
+ 
+ /*
+  * Change polling interval for associativity changes.
+@@ -1306,11 +1305,8 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 	struct device *dev;
+ 	int weight, new_nid, i = 0;
+ 
+-	if (!prrn_enabled && !vphn_enabled) {
+-		if (!topology_inited)
+-			topology_update_needed = 1;
++	if (!prrn_enabled && !vphn_enabled && topology_inited)
+ 		return 0;
+-	}
+ 
+ 	weight = cpumask_weight(&cpu_associativity_changes_mask);
+ 	if (!weight)
+@@ -1423,7 +1419,6 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 
+ out:
+ 	kfree(updates);
+-	topology_update_needed = 0;
+ 	return changed;
+ }
+ 
+@@ -1551,6 +1546,15 @@ int prrn_is_enabled(void)
+ 	return prrn_enabled;
+ }
+ 
++void __init shared_proc_topology_init(void)
++{
++	if (lppaca_shared_proc(get_lppaca())) {
++		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
++			    nr_cpumask_bits);
++		numa_update_cpu_topology(false);
++	}
++}
++
+ static int topology_read(struct seq_file *file, void *v)
+ {
+ 	if (vphn_enabled || prrn_enabled)
+@@ -1608,10 +1612,6 @@ static int topology_update_init(void)
+ 		return -ENOMEM;
+ 
+ 	topology_inited = 1;
+-	if (topology_update_needed)
+-		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
+-					nr_cpumask_bits);
+-
+ 	return 0;
+ }
+ device_initcall(topology_update_init);
+diff --git a/arch/powerpc/platforms/85xx/t1042rdb_diu.c b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+index 58fa3d319f1c..dac36ba82fea 100644
+--- a/arch/powerpc/platforms/85xx/t1042rdb_diu.c
++++ b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+@@ -9,8 +9,10 @@
+  * option) any later version.
+  */
+ 
++#include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
++#include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ 
+@@ -150,3 +152,5 @@ static int __init t1042rdb_diu_init(void)
+ }
+ 
+ early_initcall(t1042rdb_diu_init);
++
++MODULE_LICENSE("GPL");
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 2edc673be137..99d1152ae224 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -371,7 +371,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 		int len, error_log_length;
+ 
+ 		error_log_length = 8 + rtas_error_extended_log_length(h);
+-		len = max_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
++		len = min_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
+ 		memset(global_mce_data_buf, 0, RTAS_ERROR_LOG_MAX);
+ 		memcpy(global_mce_data_buf, h, len);
+ 		errhdr = (struct rtas_error_log *)global_mce_data_buf;
+diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c
+index eb69a5186243..280e964e1aa8 100644
+--- a/arch/powerpc/sysdev/mpic_msgr.c
++++ b/arch/powerpc/sysdev/mpic_msgr.c
+@@ -196,7 +196,7 @@ static int mpic_msgr_probe(struct platform_device *dev)
+ 
+ 	/* IO map the message register block. */
+ 	of_address_to_resource(np, 0, &rsrc);
+-	msgr_block_addr = ioremap(rsrc.start, rsrc.end - rsrc.start);
++	msgr_block_addr = ioremap(rsrc.start, resource_size(&rsrc));
+ 	if (!msgr_block_addr) {
+ 		dev_err(&dev->dev, "Failed to iomap MPIC message registers");
+ 		return -EFAULT;
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index f6561b783b61..eed1c137f618 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -52,8 +52,8 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ # Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+ quiet_cmd_vdsold = VDSOLD  $@
+-      cmd_vdsold = $(CC) $(KCFLAGS) $(call cc-option, -no-pie) -nostdlib $(SYSCFLAGS_$(@F)) \
+-                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp -lgcc && \
++      cmd_vdsold = $(CC) $(KBUILD_CFLAGS) $(call cc-option, -no-pie) -nostdlib -nostartfiles $(SYSCFLAGS_$(@F)) \
++                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp && \
+                    $(CROSS_COMPILE)objcopy \
+                            $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@
+ 
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 9f5ea9d87069..9b0216d571ad 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -404,11 +404,13 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+ 	if (copy_oldmem_kernel(nt_name, addr + sizeof(note),
+ 			       sizeof(nt_name) - 1))
+ 		return NULL;
+-	if (strcmp(nt_name, "VMCOREINFO") != 0)
++	if (strcmp(nt_name, VMCOREINFO_NOTE_NAME) != 0)
+ 		return NULL;
+ 	vmcoreinfo = kzalloc_panic(note.n_descsz);
+-	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz))
++	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz)) {
++		kfree(vmcoreinfo);
+ 		return NULL;
++	}
+ 	*size = note.n_descsz;
+ 	return vmcoreinfo;
+ }
+@@ -418,15 +420,20 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+  */
+ static void *nt_vmcoreinfo(void *ptr)
+ {
++	const char *name = VMCOREINFO_NOTE_NAME;
+ 	unsigned long size;
+ 	void *vmcoreinfo;
+ 
+ 	vmcoreinfo = os_info_old_entry(OS_INFO_VMCOREINFO, &size);
+-	if (!vmcoreinfo)
+-		vmcoreinfo = get_vmcoreinfo_old(&size);
++	if (vmcoreinfo)
++		return nt_init_name(ptr, 0, vmcoreinfo, size, name);
++
++	vmcoreinfo = get_vmcoreinfo_old(&size);
+ 	if (!vmcoreinfo)
+ 		return ptr;
+-	return nt_init_name(ptr, 0, vmcoreinfo, size, "VMCOREINFO");
++	ptr = nt_init_name(ptr, 0, vmcoreinfo, size, name);
++	kfree(vmcoreinfo);
++	return ptr;
+ }
+ 
+ /*
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index e54dda8a0363..de340e41f3b2 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -122,8 +122,7 @@ archheaders:
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-generic \
+ 	            kbuild-file=$(HOST_DIR)/include/uapi/asm/Kbuild \
+ 		    obj=$(HOST_DIR)/include/generated/uapi/asm
+-	$(Q)$(MAKE) KBUILD_SRC= ARCH=$(HEADER_ARCH) archheaders
+-
++	$(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) archheaders
+ 
+ archprepare: include/generated/user_constants.h
+ 
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index 8c7b3e5a2d01..3a17107594c8 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -148,6 +148,7 @@ enum mce_notifier_prios {
+ 	MCE_PRIO_LOWEST		= 0,
+ };
+ 
++struct notifier_block;
+ extern void mce_register_decode_chain(struct notifier_block *nb);
+ extern void mce_unregister_decode_chain(struct notifier_block *nb);
+ 
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index bb035a4cbc8c..9eeb1359ec75 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -2,6 +2,8 @@
+ #ifndef _ASM_X86_PGTABLE_3LEVEL_H
+ #define _ASM_X86_PGTABLE_3LEVEL_H
+ 
++#include <asm/atomic64_32.h>
++
+ /*
+  * Intel Physical Address Extension (PAE) Mode - three-level page
+  * tables on PPro+ CPUs.
+@@ -147,10 +149,7 @@ static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
+ {
+ 	pte_t res;
+ 
+-	/* xchg acts as a barrier before the setting of the high bits */
+-	res.pte_low = xchg(&ptep->pte_low, 0);
+-	res.pte_high = ptep->pte_high;
+-	ptep->pte_high = 0;
++	res.pte = (pteval_t)arch_atomic64_xchg((atomic64_t *)ptep, 0);
+ 
+ 	return res;
+ }
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 74392d9d51e0..a10481656d82 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1343,7 +1343,7 @@ device_initcall(init_tsc_clocksource);
+ 
+ void __init tsc_early_delay_calibrate(void)
+ {
+-	unsigned long lpj;
++	u64 lpj;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_TSC))
+ 		return;
+@@ -1355,7 +1355,7 @@ void __init tsc_early_delay_calibrate(void)
+ 	if (!tsc_khz)
+ 		return;
+ 
+-	lpj = tsc_khz * 1000;
++	lpj = (u64)tsc_khz * 1000;
+ 	do_div(lpj, HZ);
+ 	loops_per_jiffy = lpj;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index a44e568363a4..42f1ba92622a 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -221,6 +221,17 @@ static const u64 shadow_acc_track_saved_bits_mask = PT64_EPT_READABLE_MASK |
+ 						    PT64_EPT_EXECUTABLE_MASK;
+ static const u64 shadow_acc_track_saved_bits_shift = PT64_SECOND_AVAIL_BITS_SHIFT;
+ 
++/*
++ * This mask must be set on all non-zero Non-Present or Reserved SPTEs in order
++ * to guard against L1TF attacks.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
++
++/*
++ * The number of high-order 1 bits to use in the mask above.
++ */
++static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -308,9 +319,13 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
+ {
+ 	unsigned int gen = kvm_current_mmio_generation(vcpu);
+ 	u64 mask = generation_mmio_spte_mask(gen);
++	u64 gpa = gfn << PAGE_SHIFT;
+ 
+ 	access &= ACC_WRITE_MASK | ACC_USER_MASK;
+-	mask |= shadow_mmio_value | access | gfn << PAGE_SHIFT;
++	mask |= shadow_mmio_value | access;
++	mask |= gpa | shadow_nonpresent_or_rsvd_mask;
++	mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
++		<< shadow_nonpresent_or_rsvd_mask_len;
+ 
+ 	trace_mark_mmio_spte(sptep, gfn, access, gen);
+ 	mmu_spte_set(sptep, mask);
+@@ -323,8 +338,14 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
+-	return (spte & ~mask) >> PAGE_SHIFT;
++	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
++		   shadow_nonpresent_or_rsvd_mask;
++	u64 gpa = spte & ~mask;
++
++	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
++	       & shadow_nonpresent_or_rsvd_mask;
++
++	return gpa >> PAGE_SHIFT;
+ }
+ 
+ static unsigned get_mmio_spte_access(u64 spte)
+@@ -381,7 +402,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+-static void kvm_mmu_clear_all_pte_masks(void)
++static void kvm_mmu_reset_all_pte_masks(void)
+ {
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+@@ -391,6 +412,18 @@ static void kvm_mmu_clear_all_pte_masks(void)
+ 	shadow_mmio_mask = 0;
+ 	shadow_present_mask = 0;
+ 	shadow_acc_track_mask = 0;
++
++	/*
++	 * If the CPU has 46 or less physical address bits, then set an
++	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
++	 * assumed that the CPU is not vulnerable to L1TF.
++	 */
++	if (boot_cpu_data.x86_phys_bits <
++	    52 - shadow_nonpresent_or_rsvd_mask_len)
++		shadow_nonpresent_or_rsvd_mask =
++			rsvd_bits(boot_cpu_data.x86_phys_bits -
++				  shadow_nonpresent_or_rsvd_mask_len,
++				  boot_cpu_data.x86_phys_bits - 1);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+@@ -5500,7 +5533,7 @@ int kvm_mmu_module_init(void)
+ {
+ 	int ret = -ENOMEM;
+ 
+-	kvm_mmu_clear_all_pte_masks();
++	kvm_mmu_reset_all_pte_masks();
+ 
+ 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
+ 					    sizeof(struct pte_list_desc),
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index bedabcf33a3e..9869bfd0c601 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -939,17 +939,21 @@ struct vcpu_vmx {
+ 	/*
+ 	 * loaded_vmcs points to the VMCS currently used in this vcpu. For a
+ 	 * non-nested (L1) guest, it always points to vmcs01. For a nested
+-	 * guest (L2), it points to a different VMCS.
++	 * guest (L2), it points to a different VMCS.  loaded_cpu_state points
++	 * to the VMCS whose state is loaded into the CPU registers that only
++	 * need to be switched when transitioning to/from the kernel; a NULL
++	 * value indicates that host state is loaded.
+ 	 */
+ 	struct loaded_vmcs    vmcs01;
+ 	struct loaded_vmcs   *loaded_vmcs;
++	struct loaded_vmcs   *loaded_cpu_state;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+ 		struct vmx_msrs guest;
+ 		struct vmx_msrs host;
+ 	} msr_autoload;
++
+ 	struct {
+-		int           loaded;
+ 		u16           fs_sel, gs_sel, ldt_sel;
+ #ifdef CONFIG_X86_64
+ 		u16           ds_sel, es_sel;
+@@ -2750,10 +2754,11 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ #endif
+ 	int i;
+ 
+-	if (vmx->host_state.loaded)
++	if (vmx->loaded_cpu_state)
+ 		return;
+ 
+-	vmx->host_state.loaded = 1;
++	vmx->loaded_cpu_state = vmx->loaded_vmcs;
++
+ 	/*
+ 	 * Set host fs and gs selectors.  Unfortunately, 22.2.3 does not
+ 	 * allow segment selectors with cpl > 0 or ti == 1.
+@@ -2815,11 +2820,14 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ 
+ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
+ {
+-	if (!vmx->host_state.loaded)
++	if (!vmx->loaded_cpu_state)
+ 		return;
+ 
++	WARN_ON_ONCE(vmx->loaded_cpu_state != vmx->loaded_vmcs);
++
+ 	++vmx->vcpu.stat.host_state_reload;
+-	vmx->host_state.loaded = 0;
++	vmx->loaded_cpu_state = NULL;
++
+ #ifdef CONFIG_X86_64
+ 	if (is_long_mode(&vmx->vcpu))
+ 		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+@@ -8115,7 +8123,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 
+ 	/* CPL=0 must be checked manually. */
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 1;
+ 	}
+ 
+@@ -8179,7 +8187,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+ {
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 0;
+ 	}
+ 
+@@ -10517,8 +10525,8 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
+ 		return;
+ 
+ 	cpu = get_cpu();
+-	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_put(vcpu);
++	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_load(vcpu, cpu);
+ 	put_cpu();
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 24c84aa87049..94cd63081471 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6506,20 +6506,22 @@ static void kvm_set_mmio_spte_mask(void)
+ 	 * Set the reserved bits and the present bit of an paging-structure
+ 	 * entry to generate page fault with PFER.RSV = 1.
+ 	 */
+-	 /* Mask the reserved physical address bits. */
+-	mask = rsvd_bits(maxphyaddr, 51);
++
++	/*
++	 * Mask the uppermost physical address bit, which would be reserved as
++	 * long as the supported physical address width is less than 52.
++	 */
++	mask = 1ull << 51;
+ 
+ 	/* Set the present bit. */
+ 	mask |= 1ull;
+ 
+-#ifdef CONFIG_X86_64
+ 	/*
+ 	 * If reserved bit is not supported, clear the present bit to disable
+ 	 * mmio page fault.
+ 	 */
+-	if (maxphyaddr == 52)
++	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+ 		mask &= ~1ull;
+-#endif
+ 
+ 	kvm_mmu_set_mmio_spte_mask(mask, mask);
+ }
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 2c30cabfda90..071d82ec9abb 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -434,14 +434,13 @@ static void xen_set_pud(pud_t *ptr, pud_t val)
+ static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+ 	trace_xen_mmu_set_pte_atomic(ptep, pte);
+-	set_64bit((u64 *)ptep, native_pte_val(pte));
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ static void xen_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+ 	trace_xen_mmu_pte_clear(mm, addr, ptep);
+-	if (!xen_batched_set_pte(ptep, native_make_pte(0)))
+-		native_pte_clear(mm, addr, ptep);
++	__xen_set_pte(ptep, native_make_pte(0));
+ }
+ 
+ static void xen_pmd_clear(pmd_t *pmdp)
+@@ -1571,7 +1570,7 @@ static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
+ 		pte = __pte_ma(((pte_val_ma(*ptep) & _PAGE_RW) | ~_PAGE_RW) &
+ 			       pte_val_ma(pte));
+ #endif
+-	native_set_pte(ptep, pte);
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ /* Early in boot, while setting up the initial pagetable, assume
+diff --git a/block/bio.c b/block/bio.c
+index 047c5dca6d90..ff94640bc734 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -156,7 +156,7 @@ out:
+ 
+ unsigned int bvec_nr_vecs(unsigned short idx)
+ {
+-	return bvec_slabs[idx].nr_vecs;
++	return bvec_slabs[--idx].nr_vecs;
+ }
+ 
+ void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned int idx)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1646ea85dade..746a5eac4541 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2159,7 +2159,9 @@ static inline bool should_fail_request(struct hd_struct *part,
+ 
+ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+-	if (part->policy && op_is_write(bio_op(bio))) {
++	const int op = bio_op(bio);
++
++	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
+ 		char b[BDEVNAME_SIZE];
+ 
+ 		WARN_ONCE(1,
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 3de0836163c2..d5f2c21d8531 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -23,6 +23,9 @@ bool blk_mq_has_free_tags(struct blk_mq_tags *tags)
+ 
+ /*
+  * If a previously inactive queue goes active, bump the active user count.
++ * We need to do this before try to allocate driver tag, then even if fail
++ * to get tag when first time, the other shared-tag users could reserve
++ * budget for it.
+  */
+ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 654b0dc7e001..2f9e14361673 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -285,7 +285,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
+ 		rq->tag = -1;
+ 		rq->internal_tag = tag;
+ 	} else {
+-		if (blk_mq_tag_busy(data->hctx)) {
++		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
+ 			rq_flags = RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data->hctx->nr_active);
+ 		}
+@@ -367,6 +367,8 @@ static struct request *blk_mq_get_request(struct request_queue *q,
+ 		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
+ 		    !(data->flags & BLK_MQ_REQ_RESERVED))
+ 			e->type->ops.mq.limit_depth(op, data);
++	} else {
++		blk_mq_tag_busy(data->hctx);
+ 	}
+ 
+ 	tag = blk_mq_get_tag(data);
+@@ -970,6 +972,7 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
+ 		.flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
+ 	};
++	bool shared;
+ 
+ 	might_sleep_if(wait);
+ 
+@@ -979,9 +982,10 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 	if (blk_mq_tag_is_reserved(data.hctx->sched_tags, rq->internal_tag))
+ 		data.flags |= BLK_MQ_REQ_RESERVED;
+ 
++	shared = blk_mq_tag_busy(data.hctx);
+ 	rq->tag = blk_mq_get_tag(&data);
+ 	if (rq->tag >= 0) {
+-		if (blk_mq_tag_busy(data.hctx)) {
++		if (shared) {
+ 			rq->rq_flags |= RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data.hctx->nr_active);
+ 		}
+diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
+index 82b6c27b3245..f6f180f3aa1c 100644
+--- a/block/cfq-iosched.c
++++ b/block/cfq-iosched.c
+@@ -4735,12 +4735,13 @@ USEC_SHOW_FUNCTION(cfq_target_latency_us_show, cfqd->cfq_target_latency);
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	if (__CONV)							\
+ 		*(__PTR) = (u64)__data * NSEC_PER_MSEC;			\
+ 	else								\
+@@ -4769,12 +4770,13 @@ STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, UINT_MAX,
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	*(__PTR) = (u64)__data * NSEC_PER_USEC;				\
+ 	return count;							\
+ }
+diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c
+index 3de794bcf8fa..69603ba52a3a 100644
+--- a/drivers/acpi/acpica/hwregs.c
++++ b/drivers/acpi/acpica/hwregs.c
+@@ -528,13 +528,18 @@ acpi_status acpi_hw_register_read(u32 register_id, u32 *return_value)
+ 
+ 		status =
+ 		    acpi_hw_read(&value64, &acpi_gbl_FADT.xpm2_control_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
+ 		break;
+ 
+ 	case ACPI_REGISTER_PM_TIMER:	/* 32-bit access */
+ 
+ 		status = acpi_hw_read(&value64, &acpi_gbl_FADT.xpm_timer_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
++
+ 		break;
+ 
+ 	case ACPI_REGISTER_SMI_COMMAND_BLOCK:	/* 8-bit access */
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 970dd87d347c..6799d00dd790 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1612,7 +1612,8 @@ static int acpi_add_single_object(struct acpi_device **child,
+ 	 * Note this must be done before the get power-/wakeup_dev-flags calls.
+ 	 */
+ 	if (type == ACPI_BUS_TYPE_DEVICE)
+-		acpi_bus_get_status(device);
++		if (acpi_bus_get_status(device) < 0)
++			acpi_set_device_status(device, 0);
+ 
+ 	acpi_bus_get_power_flags(device);
+ 	acpi_bus_get_wakeup_device_flags(device);
+@@ -1690,7 +1691,7 @@ static int acpi_bus_type_and_status(acpi_handle handle, int *type,
+ 		 * acpi_add_single_object updates this once we've an acpi_device
+ 		 * so that acpi_bus_get_status' quirk handling can be used.
+ 		 */
+-		*sta = 0;
++		*sta = ACPI_STA_DEFAULT;
+ 		break;
+ 	case ACPI_TYPE_PROCESSOR:
+ 		*type = ACPI_BUS_TYPE_PROCESSOR;
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index 2a8634a52856..5a628148f3f0 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -1523,6 +1523,7 @@ static const char *const rk3399_pmucru_critical_clocks[] __initconst = {
+ 	"pclk_pmu_src",
+ 	"fclk_cm0s_src_pmu",
+ 	"clk_timer_src_pmu",
++	"pclk_rkpwm_pmu",
+ };
+ 
+ static void __init rk3399_clk_init(struct device_node *np)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 7dcbac8af9a7..b60aa7d43cb7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1579,9 +1579,9 @@ struct amdgpu_device {
+ 	DECLARE_HASHTABLE(mn_hash, 7);
+ 
+ 	/* tracking pinned memory */
+-	u64 vram_pin_size;
+-	u64 invisible_pin_size;
+-	u64 gart_pin_size;
++	atomic64_t vram_pin_size;
++	atomic64_t visible_pin_size;
++	atomic64_t gart_pin_size;
+ 
+ 	/* amdkfd interface */
+ 	struct kfd_dev          *kfd;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 9c85a90be293..5a196ec49be8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -257,7 +257,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
+ 		return;
+ 	}
+ 
+-	total_vram = adev->gmc.real_vram_size - adev->vram_pin_size;
++	total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
+ 	used_vram = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 91517b166a3b..063f9aa96946 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -494,13 +494,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 	case AMDGPU_INFO_VRAM_GTT: {
+ 		struct drm_amdgpu_info_vram_gtt vram_gtt;
+ 
+-		vram_gtt.vram_size = adev->gmc.real_vram_size;
+-		vram_gtt.vram_size -= adev->vram_pin_size;
+-		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size;
+-		vram_gtt.vram_cpu_accessible_size -= (adev->vram_pin_size - adev->invisible_pin_size);
++		vram_gtt.vram_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
++		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		vram_gtt.gtt_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		vram_gtt.gtt_size *= PAGE_SIZE;
+-		vram_gtt.gtt_size -= adev->gart_pin_size;
++		vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
+ 		return copy_to_user(out, &vram_gtt,
+ 				    min((size_t)size, sizeof(vram_gtt))) ? -EFAULT : 0;
+ 	}
+@@ -509,17 +509,16 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		memset(&mem, 0, sizeof(mem));
+ 		mem.vram.total_heap_size = adev->gmc.real_vram_size;
+-		mem.vram.usable_heap_size =
+-			adev->gmc.real_vram_size - adev->vram_pin_size;
++		mem.vram.usable_heap_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
+ 		mem.vram.heap_usage =
+ 			amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
+ 
+ 		mem.cpu_accessible_vram.total_heap_size =
+ 			adev->gmc.visible_vram_size;
+-		mem.cpu_accessible_vram.usable_heap_size =
+-			adev->gmc.visible_vram_size -
+-			(adev->vram_pin_size - adev->invisible_pin_size);
++		mem.cpu_accessible_vram.usable_heap_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		mem.cpu_accessible_vram.heap_usage =
+ 			amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.cpu_accessible_vram.max_allocation =
+@@ -527,8 +526,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		mem.gtt.total_heap_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		mem.gtt.total_heap_size *= PAGE_SIZE;
+-		mem.gtt.usable_heap_size = mem.gtt.total_heap_size
+-			- adev->gart_pin_size;
++		mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
++			atomic64_read(&adev->gart_pin_size);
+ 		mem.gtt.heap_usage =
+ 			amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
+ 		mem.gtt.max_allocation = mem.gtt.usable_heap_size * 3 / 4;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 3526efa8960e..3873c3353020 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -50,11 +50,35 @@ static bool amdgpu_need_backup(struct amdgpu_device *adev)
+ 	return true;
+ }
+ 
++/**
++ * amdgpu_bo_subtract_pin_size - Remove BO from pin_size accounting
++ *
++ * @bo: &amdgpu_bo buffer object
++ *
++ * This function is called when a BO stops being pinned, and updates the
++ * &amdgpu_device pin_size values accordingly.
++ */
++static void amdgpu_bo_subtract_pin_size(struct amdgpu_bo *bo)
++{
++	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
++
++	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
++	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
++	}
++}
++
+ static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
+ 	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
+ 
++	if (bo->pin_count > 0)
++		amdgpu_bo_subtract_pin_size(bo);
++
+ 	if (bo->kfd_bo)
+ 		amdgpu_amdkfd_unreserve_system_memory_limit(bo);
+ 
+@@ -761,10 +785,11 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 
+ 	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+ 	if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
+-		adev->vram_pin_size += amdgpu_bo_size(bo);
+-		adev->invisible_pin_size += amdgpu_vram_mgr_bo_invisible_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_add(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
+ 	} else if (domain == AMDGPU_GEM_DOMAIN_GTT) {
+-		adev->gart_pin_size += amdgpu_bo_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->gart_pin_size);
+ 	}
+ 
+ error:
+@@ -790,12 +815,7 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
+ 	if (bo->pin_count)
+ 		return 0;
+ 
+-	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+-		adev->vram_pin_size -= amdgpu_bo_size(bo);
+-		adev->invisible_pin_size -= amdgpu_vram_mgr_bo_invisible_size(bo);
+-	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+-		adev->gart_pin_size -= amdgpu_bo_size(bo);
+-	}
++	amdgpu_bo_subtract_pin_size(bo);
+ 
+ 	for (i = 0; i < bo->placement.num_placement; i++) {
+ 		bo->placements[i].lpfn = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index a44c3d58fef4..2ec20348b983 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -1157,7 +1157,7 @@ static ssize_t amdgpu_hwmon_show_vddnb(struct device *dev,
+ 	int r, size = sizeof(vddnb);
+ 
+ 	/* only APUs have vddnb */
+-	if  (adev->flags & AMD_IS_APU)
++	if  (!(adev->flags & AMD_IS_APU))
+ 		return -EINVAL;
+ 
+ 	/* Can't get voltage when the card is off */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 9f1a5bd39ae8..5b39d1399630 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -131,6 +131,11 @@ psp_cmd_submit_buf(struct psp_context *psp,
+ 		msleep(1);
+ 	}
+ 
++	if (ucode) {
++		ucode->tmr_mc_addr_lo = psp->cmd_buf_mem->resp.fw_addr_lo;
++		ucode->tmr_mc_addr_hi = psp->cmd_buf_mem->resp.fw_addr_hi;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index 86a0715d9431..1cafe8d83a4d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -53,9 +53,8 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 						  int fd,
+ 						  enum drm_sched_priority priority)
+ {
+-	struct file *filp = fcheck(fd);
++	struct file *filp = fget(fd);
+ 	struct drm_file *file;
+-	struct pid *pid;
+ 	struct amdgpu_fpriv *fpriv;
+ 	struct amdgpu_ctx *ctx;
+ 	uint32_t id;
+@@ -63,20 +62,12 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 	if (!filp)
+ 		return -EINVAL;
+ 
+-	pid = get_pid(((struct drm_file *)filp->private_data)->pid);
++	file = filp->private_data;
++	fpriv = file->driver_priv;
++	idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
++		amdgpu_ctx_priority_override(ctx, priority);
+ 
+-	mutex_lock(&adev->ddev->filelist_mutex);
+-	list_for_each_entry(file, &adev->ddev->filelist, lhead) {
+-		if (file->pid != pid)
+-			continue;
+-
+-		fpriv = file->driver_priv;
+-		idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
+-				amdgpu_ctx_priority_override(ctx, priority);
+-	}
+-	mutex_unlock(&adev->ddev->filelist_mutex);
+-
+-	put_pid(pid);
++	fput(filp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index e5da4654b630..8b3cc6687769 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -73,7 +73,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem);
+ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
+ int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
+ 
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo);
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
+ uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
+ uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+index 08e38579af24..bdc472b6e641 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+@@ -194,6 +194,7 @@ enum AMDGPU_UCODE_ID {
+ 	AMDGPU_UCODE_ID_SMC,
+ 	AMDGPU_UCODE_ID_UVD,
+ 	AMDGPU_UCODE_ID_VCE,
++	AMDGPU_UCODE_ID_VCN,
+ 	AMDGPU_UCODE_ID_MAXIMUM,
+ };
+ 
+@@ -226,6 +227,9 @@ struct amdgpu_firmware_info {
+ 	void *kaddr;
+ 	/* ucode_size_bytes */
+ 	uint32_t ucode_size;
++	/* starting tmr mc address */
++	uint32_t tmr_mc_addr_lo;
++	uint32_t tmr_mc_addr_hi;
+ };
+ 
+ void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 1b4ad9b2a755..bee49991c1ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -111,9 +111,10 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+ 			version_major, version_minor, family_id);
+ 	}
+ 
+-	bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8)
+-		  +  AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
++	bo_size = AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
+ 		  +  AMDGPU_VCN_SESSION_SIZE * 40;
++	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
++		bo_size += AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8);
+ 	r = amdgpu_bo_create_kernel(adev, bo_size, PAGE_SIZE,
+ 				    AMDGPU_GEM_DOMAIN_VRAM, &adev->vcn.vcpu_bo,
+ 				    &adev->vcn.gpu_addr, &adev->vcn.cpu_addr);
+@@ -187,11 +188,13 @@ int amdgpu_vcn_resume(struct amdgpu_device *adev)
+ 		unsigned offset;
+ 
+ 		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
+-		offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
+-		memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
+-			    le32_to_cpu(hdr->ucode_size_bytes));
+-		size -= le32_to_cpu(hdr->ucode_size_bytes);
+-		ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
++			offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
++			memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
++				    le32_to_cpu(hdr->ucode_size_bytes));
++			size -= le32_to_cpu(hdr->ucode_size_bytes);
++			ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		}
+ 		memset_io(ptr, 0, size);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index b6333f92ba45..ef4784458800 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -97,33 +97,29 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
+ }
+ 
+ /**
+- * amdgpu_vram_mgr_bo_invisible_size - CPU invisible BO size
++ * amdgpu_vram_mgr_bo_visible_size - CPU visible BO size
+  *
+  * @bo: &amdgpu_bo buffer object (must be in VRAM)
+  *
+  * Returns:
+- * How much of the given &amdgpu_bo buffer object lies in CPU invisible VRAM.
++ * How much of the given &amdgpu_bo buffer object lies in CPU visible VRAM.
+  */
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo)
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ 	struct ttm_mem_reg *mem = &bo->tbo.mem;
+ 	struct drm_mm_node *nodes = mem->mm_node;
+ 	unsigned pages = mem->num_pages;
+-	u64 usage = 0;
++	u64 usage;
+ 
+ 	if (adev->gmc.visible_vram_size == adev->gmc.real_vram_size)
+-		return 0;
++		return amdgpu_bo_size(bo);
+ 
+ 	if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
+-		return amdgpu_bo_size(bo);
++		return 0;
+ 
+-	while (nodes && pages) {
+-		usage += nodes->size << PAGE_SHIFT;
+-		usage -= amdgpu_vram_mgr_vis_size(adev, nodes);
+-		pages -= nodes->size;
+-		++nodes;
+-	}
++	for (usage = 0; nodes && pages; pages -= nodes->size, nodes++)
++		usage += amdgpu_vram_mgr_vis_size(adev, nodes);
+ 
+ 	return usage;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index a69153435ea7..8f0ac805ecd2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3433,7 +3433,7 @@ static void gfx_v9_0_enter_rlc_safe_mode(struct amdgpu_device *adev)
+ 
+ 		/* wait for RLC_SAFE_MODE */
+ 		for (i = 0; i < adev->usec_timeout; i++) {
+-			if (!REG_GET_FIELD(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
++			if (!REG_GET_FIELD(RREG32_SOC15(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
+ 				break;
+ 			udelay(1);
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 0ff136d02d9b..02be34e72ed9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -88,6 +88,9 @@ psp_v10_0_get_fw_type(struct amdgpu_firmware_info *ucode, enum psp_gfx_fw_type *
+ 	case AMDGPU_UCODE_ID_VCE:
+ 		*type = GFX_FW_TYPE_VCE;
+ 		break;
++	case AMDGPU_UCODE_ID_VCN:
++		*type = GFX_FW_TYPE_VCN;
++		break;
+ 	case AMDGPU_UCODE_ID_MAXIMUM:
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index bfddf97dd13e..a16eebc05d12 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -1569,7 +1569,6 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_phys_funcs = {
+ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.type = AMDGPU_RING_TYPE_UVD,
+ 	.align_mask = 0xf,
+-	.nop = PACKET0(mmUVD_NO_OP, 0),
+ 	.support_64bit_ptrs = false,
+ 	.get_rptr = uvd_v6_0_ring_get_rptr,
+ 	.get_wptr = uvd_v6_0_ring_get_wptr,
+@@ -1587,7 +1586,7 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.emit_hdp_flush = uvd_v6_0_ring_emit_hdp_flush,
+ 	.test_ring = uvd_v6_0_ring_test_ring,
+ 	.test_ib = amdgpu_uvd_ring_test_ib,
+-	.insert_nop = amdgpu_ring_insert_nop,
++	.insert_nop = uvd_v6_0_ring_insert_nop,
+ 	.pad_ib = amdgpu_ring_generic_pad_ib,
+ 	.begin_use = amdgpu_uvd_ring_begin_use,
+ 	.end_use = amdgpu_uvd_ring_end_use,
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 29684c3ea4ef..700119168067 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -90,6 +90,16 @@ static int vcn_v1_0_sw_init(void *handle)
+ 	if (r)
+ 		return r;
+ 
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		const struct common_firmware_header *hdr;
++		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].ucode_id = AMDGPU_UCODE_ID_VCN;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
++		adev->firmware.fw_size +=
++			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
++		DRM_INFO("PSP loading VCN firmware\n");
++	}
++
+ 	r = amdgpu_vcn_resume(adev);
+ 	if (r)
+ 		return r;
+@@ -241,26 +251,38 @@ static int vcn_v1_0_resume(void *handle)
+ static void vcn_v1_0_mc_resume(struct amdgpu_device *adev)
+ {
+ 	uint32_t size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw->size + 4);
+-
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++	uint32_t offset;
++
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_lo));
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_hi));
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0, 0);
++		offset = 0;
++	} else {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
+ 			lower_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
+ 			upper_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
+-				AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++		offset = size;
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
++			     AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++	}
++
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE0, size);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size));
++		     lower_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size));
++		     upper_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE1, AMDGPU_VCN_HEAP_SIZE);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     lower_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     upper_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE2,
+ 			AMDGPU_VCN_STACK_SIZE + (AMDGPU_VCN_SESSION_SIZE * 40));
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 770c6b24be0b..e484d0a94bdc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1334,6 +1334,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_display_manager *dm)
+ 	struct backlight_properties props = { 0 };
+ 
+ 	props.max_brightness = AMDGPU_MAX_BL_LEVEL;
++	props.brightness = AMDGPU_MAX_BL_LEVEL;
+ 	props.type = BACKLIGHT_RAW;
+ 
+ 	snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+@@ -2123,13 +2124,8 @@ convert_color_depth_from_display_info(const struct drm_connector *connector)
+ static enum dc_aspect_ratio
+ get_aspect_ratio(const struct drm_display_mode *mode_in)
+ {
+-	int32_t width = mode_in->crtc_hdisplay * 9;
+-	int32_t height = mode_in->crtc_vdisplay * 16;
+-
+-	if ((width - height) < 10 && (width - height) > -10)
+-		return ASPECT_RATIO_16_9;
+-	else
+-		return ASPECT_RATIO_4_3;
++	/* 1-1 mapping, since both enums follow the HDMI spec. */
++	return (enum dc_aspect_ratio) mode_in->picture_aspect_ratio;
+ }
+ 
+ static enum dc_color_space
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index 52f2c01349e3..9bfb040352e9 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -98,10 +98,16 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name,
+  */
+ void amdgpu_dm_crtc_handle_crc_irq(struct drm_crtc *crtc)
+ {
+-	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+-	struct dc_stream_state *stream_state = crtc_state->stream;
++	struct dm_crtc_state *crtc_state;
++	struct dc_stream_state *stream_state;
+ 	uint32_t crcs[3];
+ 
++	if (crtc == NULL)
++		return;
++
++	crtc_state = to_dm_crtc_state(crtc->state);
++	stream_state = crtc_state->stream;
++
+ 	/* Early return if CRC capture is not enabled. */
+ 	if (!crtc_state->crc_enabled)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index 651e1fd4622f..a558bfaa0c46 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -808,6 +808,24 @@ static enum bp_result transmitter_control_v1_5(
+ 	 * (=1: 8bpp, =1.25: 10bpp, =1.5:12bpp, =2: 16bpp)
+ 	 * LVDS mode: usPixelClock = pixel clock
+ 	 */
++	if  (cntl->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
++		switch (cntl->color_depth) {
++		case COLOR_DEPTH_101010:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 30) / 24);
++			break;
++		case COLOR_DEPTH_121212:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 36) / 24);
++			break;
++		case COLOR_DEPTH_161616:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 48) / 24);
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	if (EXEC_BIOS_CMD_TABLE(UNIPHYTransmitterControl, params))
+ 		result = BP_RESULT_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 2fa521812d23..8a7890b03d97 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -728,6 +728,17 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ 			break;
+ 		case EDID_NO_RESPONSE:
+ 			DC_LOG_ERROR("No EDID read.\n");
++
++			/*
++			 * Abort detection for non-DP connectors if we have
++			 * no EDID
++			 *
++			 * DP needs to report as connected if HDP is high
++			 * even if we have no EDID in order to go to
++			 * fail-safe mode
++			 */
++			if (!dc_is_dp_signal(link->connector_signal))
++				return false;
+ 		default:
+ 			break;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 751f3ac9d921..754b4c2fc90a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -268,24 +268,30 @@ bool resource_construct(
+ 
+ 	return true;
+ }
++static int find_matching_clock_source(
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
+ 
++	int i;
++
++	for (i = 0; i < pool->clk_src_count; i++) {
++		if (pool->clock_sources[i] == clock_source)
++			return i;
++	}
++	return -1;
++}
+ 
+ void resource_unreference_clock_source(
+ 		struct resource_context *res_ctx,
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]--;
+ 
+-		break;
+-	}
+-
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count--;
+ }
+@@ -295,19 +301,31 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]++;
+-		break;
+-	}
+ 
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count++;
+ }
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
++	int i = find_matching_clock_source(pool, clock_source);
++
++	if (i > -1)
++		return res_ctx->clock_source_ref_count[i];
++
++	if (pool->dp_clock_source == clock_source)
++		return res_ctx->dp_clock_source_ref_count;
++
++	return -1;
++}
++
+ bool resource_are_streams_timing_synchronizable(
+ 	struct dc_stream_state *stream1,
+ 	struct dc_stream_state *stream2)
+@@ -330,6 +348,9 @@ bool resource_are_streams_timing_synchronizable(
+ 				!= stream2->timing.pix_clk_khz)
+ 		return false;
+ 
++	if (stream1->clamping.c_depth != stream2->clamping.c_depth)
++		return false;
++
+ 	if (stream1->phy_pix_clk != stream2->phy_pix_clk
+ 			&& (!dc_is_dp_signal(stream1->signal)
+ 			|| !dc_is_dp_signal(stream2->signal)))
+@@ -337,6 +358,20 @@ bool resource_are_streams_timing_synchronizable(
+ 
+ 	return true;
+ }
++static bool is_dp_and_hdmi_sharable(
++		struct dc_stream_state *stream1,
++		struct dc_stream_state *stream2)
++{
++	if (stream1->ctx->dc->caps.disable_dp_clk_share)
++		return false;
++
++	if (stream1->clamping.c_depth != COLOR_DEPTH_888 ||
++	    stream2->clamping.c_depth != COLOR_DEPTH_888)
++	return false;
++
++	return true;
++
++}
+ 
+ static bool is_sharable_clk_src(
+ 	const struct pipe_ctx *pipe_with_clk_src,
+@@ -348,7 +383,10 @@ static bool is_sharable_clk_src(
+ 	if (pipe_with_clk_src->stream->signal == SIGNAL_TYPE_VIRTUAL)
+ 		return false;
+ 
+-	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal))
++	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal) ||
++		(dc_is_dp_signal(pipe->stream->signal) &&
++		!is_dp_and_hdmi_sharable(pipe_with_clk_src->stream,
++				     pipe->stream)))
+ 		return false;
+ 
+ 	if (dc_is_hdmi_signal(pipe_with_clk_src->stream->signal)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 53c71296f3dd..efe155d50668 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -77,6 +77,7 @@ struct dc_caps {
+ 	bool dual_link_dvi;
+ 	bool post_blend_color_processing;
+ 	bool force_dp_tps4_for_cp2520;
++	bool disable_dp_clk_share;
+ };
+ 
+ struct dc_dcc_surface_param {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index dbe3b26b6d9e..f6ec1d3dfd0c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -919,7 +919,7 @@ void dce110_link_encoder_enable_tmds_output(
+ 	enum bp_result result;
+ 
+ 	/* Enable the PHY */
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+@@ -961,7 +961,7 @@ void dce110_link_encoder_enable_dp_output(
+ 	 * We need to set number of lanes manually.
+ 	 */
+ 	configure_encoder(enc110, link_settings);
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+index 344dd2e69e7c..aa2f03eb46fe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+@@ -884,7 +884,7 @@ static bool construct(
+ 	dc->caps.i2c_speed_in_khz = 40;
+ 	dc->caps.max_cursor_size = 128;
+ 	dc->caps.dual_link_dvi = true;
+-
++	dc->caps.disable_dp_clk_share = true;
+ 	for (i = 0; i < pool->base.pipe_count; i++) {
+ 		pool->base.timing_generators[i] =
+ 			dce100_timing_generator_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+index e2994d337044..111c4921987f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+@@ -143,7 +143,7 @@ static void wait_for_fbc_state_changed(
+ 	struct dce110_compressor *cp110,
+ 	bool enabled)
+ {
+-	uint8_t counter = 0;
++	uint16_t counter = 0;
+ 	uint32_t addr = mmFBC_STATUS;
+ 	uint32_t value;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index c29052b6da5a..7c0b1d7aa9b8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1939,7 +1939,9 @@ static void dce110_reset_hw_ctx_wrap(
+ 			pipe_ctx_old->plane_res.mi->funcs->free_mem_input(
+ 					pipe_ctx_old->plane_res.mi, dc->current_state->stream_count);
+ 
+-			if (old_clk)
++			if (old_clk && 0 == resource_get_clock_source_reference(&context->res_ctx,
++										dc->res_pool,
++										old_clk))
+ 				old_clk->funcs->cs_power_down(old_clk);
+ 
+ 			dc->hwss.disable_plane(dc, pipe_ctx_old);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+index 48a068964722..6f4992bdc9ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+@@ -902,6 +902,7 @@ static bool dce80_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1087,6 +1088,7 @@ static bool dce81_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1268,6 +1270,7 @@ static bool dce83_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 640a647f4611..abf42a7d0859 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -102,6 +102,11 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source);
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source);
++
+ bool resource_are_streams_timing_synchronizable(
+ 		struct dc_stream_state *stream1,
+ 		struct dc_stream_state *stream2);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+index c952845833d7..5e19f5977eb1 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+@@ -403,6 +403,49 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris12[] = {
+ 	{   ixDIDT_SQ_CTRL1,                   DIDT_SQ_CTRL1__MAX_POWER_MASK,                      DIDT_SQ_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__UNUSED_0_MASK,                    DIDT_SQ_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_SQ_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_SQ_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_0_MASK,                       DIDT_SQ_CTRL2__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x005a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_1_MASK,                       DIDT_SQ_CTRL2__UNUSED_1__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,       DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_2_MASK,                       DIDT_SQ_CTRL2__UNUSED_2__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,    DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT,  0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x0ebb,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__UNUSED_0_MASK,                  DIDT_SQ_STALL_CTRL__UNUSED_0__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x3153,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__UNUSED_0_MASK,                 DIDT_SQ_TUNING_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__USE_REF_CLOCK_MASK,                  DIDT_SQ_CTRL0__USE_REF_CLOCK__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__PHASE_OFFSET_MASK,                   DIDT_SQ_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_SQ_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__UNUSED_0_MASK,                       DIDT_SQ_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT0_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT0__SHIFT,                  0x000a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT1_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT1__SHIFT,                  0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT2_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT2__SHIFT,                  0x0017,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT3_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT3__SHIFT,                  0x002f,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT4_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT4__SHIFT,                  0x0046,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT5_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT5__SHIFT,                  0x005d,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT6_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT6__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT7_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT7__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MIN_POWER_MASK,                      DIDT_TD_CTRL1__MIN_POWER__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MAX_POWER_MASK,                      DIDT_TD_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__UNUSED_0_MASK,                    DIDT_TD_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
+ 	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_TD_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0x00ff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_TD_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3fff,     GPU_CONFIGREG_DIDT_IND },
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 50690c72b2ea..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -244,6 +244,7 @@ static int smu8_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
++/* convert from 8bit vid to real voltage in mV*4 */
+ static uint32_t smu8_convert_8Bit_index_to_voltage(
+ 			struct pp_hwmgr *hwmgr, uint16_t voltage)
+ {
+@@ -1702,13 +1703,13 @@ static int smu8_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ 	case AMDGPU_PP_SENSOR_VDDNB:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_NB_CURRENTVID) &
+ 			CURRENT_NB_VID_MASK) >> CURRENT_NB_VID__SHIFT;
+-		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp);
++		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp) / 4;
+ 		*((uint32_t *)value) = vddnb;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_VDDGFX:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_GFX_CURRENTVID) &
+ 			CURRENT_GFX_VID_MASK) >> CURRENT_GFX_VID__SHIFT;
+-		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp);
++		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp) / 4;
+ 		*((uint32_t *)value) = vddgfx;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_UVD_VCLK:
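For context on the smu8 hunks above: the helper's return value is in units of mV*4, which is why the sensor read path now appends `/ 4` to report millivolts. A minimal standalone sketch of that scaling, assuming an SVI2-style decode of `6200 - 25 * vid` (the formula and names here are illustrative, not taken from this patch):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: decode an 8-bit VID. Assumption: the raw decode yields units
 * of mV*4 via 6200 - 25*vid, mirroring why the hunks above divide by 4. */
static uint32_t convert_8bit_vid(uint16_t vid)
{
	return 6200 - (uint32_t)vid * 25;   /* units: mV*4 */
}

static uint32_t vid_to_millivolts(uint16_t vid)
{
	return convert_8bit_vid(vid) / 4;   /* units: mV */
}
```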
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+index c98e5de777cd..fcd2808874bf 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+@@ -490,7 +490,7 @@ static int vega12_get_number_dpm_level(struct pp_hwmgr *hwmgr,
+ static int vega12_get_dpm_frequency_by_index(struct pp_hwmgr *hwmgr,
+ 		PPCLK_e clkID, uint32_t index, uint32_t *clock)
+ {
+-	int result;
++	int result = 0;
+ 
+ 	/*
+ 	 *SMU expects the Clock ID to be in the top 16 bits.
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index a5808382bdf0..c7b4481c90d7 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -116,6 +116,9 @@ static const struct edid_quirk {
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
++	{ "SDC", 0x3652, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* Belinea 10 15 55 */
+ 	{ "MAX", 1516, EDID_QUIRK_PREFER_LARGE_60 },
+ 	{ "MAX", 0x77e, EDID_QUIRK_PREFER_LARGE_60 },
+@@ -163,8 +166,9 @@ static const struct edid_quirk {
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+ 	{ "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
+ 
+-	/* HTC Vive VR Headset */
++	/* HTC Vive and Vive Pro VR Headsets */
+ 	{ "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP },
++	{ "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP },
+ 
+ 	/* Oculus Rift DK1, DK2, and CV1 VR Headsets */
+ 	{ "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP },
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 686f6552db48..3ef440b235e5 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -799,6 +799,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 
+ free_buffer:
+ 	etnaviv_cmdbuf_free(&gpu->buffer);
++	gpu->buffer.suballoc = NULL;
+ destroy_iommu:
+ 	etnaviv_iommu_destroy(gpu->mmu);
+ 	gpu->mmu = NULL;
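The etnaviv hunk above is an instance of a general error-path pattern: after freeing a sub-allocated buffer, clear the back-pointer so a retried init or later teardown cannot act on stale state. The same idea reduced to a sketch with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

struct cmdbuf {
	void *suballoc;   /* non-NULL while the buffer is live */
};

/* Free the suballocation and clear the handle, so callers can safely
 * test buf->suballoc to decide whether cleanup already ran. */
static void cmdbuf_free(struct cmdbuf *buf)
{
	free(buf->suballoc);
	buf->suballoc = NULL;
}
```

With the pointer nulled, calling the teardown twice becomes harmless, since `free(NULL)` is a no-op.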
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 9c449b8d8eab..015f9e93419d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -919,7 +919,6 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
+ 	spin_lock_init(&dev_priv->uncore.lock);
+ 
+ 	mutex_init(&dev_priv->sb_lock);
+-	mutex_init(&dev_priv->modeset_restore_lock);
+ 	mutex_init(&dev_priv->av_mutex);
+ 	mutex_init(&dev_priv->wm.wm_mutex);
+ 	mutex_init(&dev_priv->pps_mutex);
+@@ -1560,11 +1559,6 @@ static int i915_drm_suspend(struct drm_device *dev)
+ 	pci_power_t opregion_target_state;
+ 	int error;
+ 
+-	/* ignore lid events during suspend */
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_SUSPENDED;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	disable_rpm_wakeref_asserts(dev_priv);
+ 
+ 	/* We do a lot of poking in a lot of registers, make sure they work
+@@ -1764,10 +1758,6 @@ static int i915_drm_resume(struct drm_device *dev)
+ 
+ 	intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING, false);
+ 
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_DONE;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	intel_opregion_notify_adapter(dev_priv, PCI_D0);
+ 
+ 	enable_rpm_wakeref_asserts(dev_priv);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 71e1aa54f774..7c22fac3aa04 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1003,12 +1003,6 @@ struct i915_gem_mm {
+ #define I915_ENGINE_DEAD_TIMEOUT  (4 * HZ)  /* Seqno, head and subunits dead */
+ #define I915_SEQNO_DEAD_TIMEOUT   (12 * HZ) /* Seqno dead with active head */
+ 
+-enum modeset_restore {
+-	MODESET_ON_LID_OPEN,
+-	MODESET_DONE,
+-	MODESET_SUSPENDED,
+-};
+-
+ #define DP_AUX_A 0x40
+ #define DP_AUX_B 0x10
+ #define DP_AUX_C 0x20
+@@ -1740,8 +1734,6 @@ struct drm_i915_private {
+ 
+ 	unsigned long quirks;
+ 
+-	enum modeset_restore modeset_restore;
+-	struct mutex modeset_restore_lock;
+ 	struct drm_atomic_state *modeset_restore_state;
+ 	struct drm_modeset_acquire_ctx reset_ctx;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 7720569f2024..6e048ee88e3f 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -8825,6 +8825,7 @@ enum skl_power_gate {
+ #define  TRANS_MSA_10_BPC		(2<<5)
+ #define  TRANS_MSA_12_BPC		(3<<5)
+ #define  TRANS_MSA_16_BPC		(4<<5)
++#define  TRANS_MSA_CEA_RANGE		(1<<3)
+ 
+ /* LCPLL Control */
+ #define LCPLL_CTL			_MMIO(0x130040)
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index fed26d6e4e27..e195c287c263 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -1659,6 +1659,10 @@ void intel_ddi_set_pipe_settings(const struct intel_crtc_state *crtc_state)
+ 	WARN_ON(transcoder_is_dsi(cpu_transcoder));
+ 
+ 	temp = TRANS_MSA_SYNC_CLK;
++
++	if (crtc_state->limited_color_range)
++		temp |= TRANS_MSA_CEA_RANGE;
++
+ 	switch (crtc_state->pipe_bpp) {
+ 	case 18:
+ 		temp |= TRANS_MSA_6_BPC;
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 16faea30114a..8e465095fe06 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4293,18 +4293,6 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
+ 	return !drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);
+ }
+ 
+-/*
+- * If display is now connected check links status,
+- * there has been known issues of link loss triggering
+- * long pulse.
+- *
+- * Some sinks (eg. ASUS PB287Q) seem to perform some
+- * weird HPD ping pong during modesets. So we can apparently
+- * end up with HPD going low during a modeset, and then
+- * going back up soon after. And once that happens we must
+- * retrain the link to get a picture. That's in case no
+- * userspace component reacted to intermittent HPD dip.
+- */
+ int intel_dp_retrain_link(struct intel_encoder *encoder,
+ 			  struct drm_modeset_acquire_ctx *ctx)
+ {
+@@ -4794,7 +4782,8 @@ intel_dp_unset_edid(struct intel_dp *intel_dp)
+ }
+ 
+ static int
+-intel_dp_long_pulse(struct intel_connector *connector)
++intel_dp_long_pulse(struct intel_connector *connector,
++		    struct drm_modeset_acquire_ctx *ctx)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	struct intel_dp *intel_dp = intel_attached_dp(&connector->base);
+@@ -4853,6 +4842,22 @@ intel_dp_long_pulse(struct intel_connector *connector)
+ 		 */
+ 		status = connector_status_disconnected;
+ 		goto out;
++	} else {
++		/*
++		 * If display is now connected check links status,
++		 * there has been known issues of link loss triggering
++		 * long pulse.
++		 *
++		 * Some sinks (eg. ASUS PB287Q) seem to perform some
++		 * weird HPD ping pong during modesets. So we can apparently
++		 * end up with HPD going low during a modeset, and then
++		 * going back up soon after. And once that happens we must
++		 * retrain the link to get a picture. That's in case no
++		 * userspace component reacted to intermittent HPD dip.
++		 */
++		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++
++		intel_dp_retrain_link(encoder, ctx);
+ 	}
+ 
+ 	/*
+@@ -4914,7 +4919,7 @@ intel_dp_detect(struct drm_connector *connector,
+ 				return ret;
+ 		}
+ 
+-		status = intel_dp_long_pulse(intel_dp->attached_connector);
++		status = intel_dp_long_pulse(intel_dp->attached_connector, ctx);
+ 	}
+ 
+ 	intel_dp->detect_done = false;
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index d8cb53ef4351..c8640959a7fc 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -933,8 +933,12 @@ static int intel_hdmi_hdcp_write(struct intel_digital_port *intel_dig_port,
+ 
+ 	ret = i2c_transfer(adapter, &msg, 1);
+ 	if (ret == 1)
+-		return 0;
+-	return ret >= 0 ? -EIO : ret;
++		ret = 0;
++	else if (ret >= 0)
++		ret = -EIO;
++
++	kfree(write_buf);
++	return ret;
+ }
+ 
+ static
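The intel_hdmi hunk above fixes a leak: the early `return` statements skipped `kfree(write_buf)`, so the rewrite funnels every outcome through a single exit where the buffer is freed exactly once. The same single-exit shape in standard C (names and error numbers are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the single-exit pattern: map a transfer count to a status
 * code, then release the scratch buffer on every path. */
static int write_with_scratch(int transferred, int *frees)
{
	void *scratch = malloc(64);
	int ret;

	if (!scratch)
		return -12;            /* -ENOMEM: nothing to free yet */

	if (transferred == 1)
		ret = 0;               /* full message went out */
	else if (transferred >= 0)
		ret = -5;              /* partial transfer: -EIO */
	else
		ret = transferred;     /* propagate the bus error */

	free(scratch);
	(*frees)++;                    /* instrumentation for this sketch */
	return ret;
}
```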
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index b4941101f21a..cdf19553ffac 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -127,9 +127,7 @@ lpe_audio_platdev_create(struct drm_i915_private *dev_priv)
+ 		return platdev;
+ 	}
+ 
+-	pm_runtime_forbid(&platdev->dev);
+-	pm_runtime_set_active(&platdev->dev);
+-	pm_runtime_enable(&platdev->dev);
++	pm_runtime_no_callbacks(&platdev->dev);
+ 
+ 	return platdev;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_lspcon.c b/drivers/gpu/drm/i915/intel_lspcon.c
+index 8ae8f42f430a..6b6758419fb3 100644
+--- a/drivers/gpu/drm/i915/intel_lspcon.c
++++ b/drivers/gpu/drm/i915/intel_lspcon.c
+@@ -74,7 +74,7 @@ static enum drm_lspcon_mode lspcon_wait_mode(struct intel_lspcon *lspcon,
+ 	DRM_DEBUG_KMS("Waiting for LSPCON mode %s to settle\n",
+ 		      lspcon_mode_name(mode));
+ 
+-	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 100);
++	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 400);
+ 	if (current_mode != mode)
+ 		DRM_ERROR("LSPCON mode hasn't settled\n");
+ 
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index 48f618dc9abb..63d7faa99946 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -44,8 +44,6 @@
+ /* Private structure for the integrated LVDS support */
+ struct intel_lvds_connector {
+ 	struct intel_connector base;
+-
+-	struct notifier_block lid_notifier;
+ };
+ 
+ struct intel_lvds_pps {
+@@ -454,26 +452,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
+ 	return true;
+ }
+ 
+-/*
+- * Detect the LVDS connection.
+- *
+- * Since LVDS doesn't have hotlug, we use the lid as a proxy.  Open means
+- * connected and closed means disconnected.  We also send hotplug events as
+- * needed, using lid status notification from the input layer.
+- */
+ static enum drm_connector_status
+ intel_lvds_detect(struct drm_connector *connector, bool force)
+ {
+-	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+-	enum drm_connector_status status;
+-
+-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+-		      connector->base.id, connector->name);
+-
+-	status = intel_panel_detect(dev_priv);
+-	if (status != connector_status_unknown)
+-		return status;
+-
+ 	return connector_status_connected;
+ }
+ 
+@@ -498,117 +479,6 @@ static int intel_lvds_get_modes(struct drm_connector *connector)
+ 	return 1;
+ }
+ 
+-static int intel_no_modeset_on_lid_dmi_callback(const struct dmi_system_id *id)
+-{
+-	DRM_INFO("Skipping forced modeset for %s\n", id->ident);
+-	return 1;
+-}
+-
+-/* The GPU hangs up on these systems if modeset is performed on LID open */
+-static const struct dmi_system_id intel_no_modeset_on_lid[] = {
+-	{
+-		.callback = intel_no_modeset_on_lid_dmi_callback,
+-		.ident = "Toshiba Tecra A11",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TECRA A11"),
+-		},
+-	},
+-
+-	{ }	/* terminating entry */
+-};
+-
+-/*
+- * Lid events. Note the use of 'modeset':
+- *  - we set it to MODESET_ON_LID_OPEN on lid close,
+- *    and set it to MODESET_DONE on open
+- *  - we use it as a "only once" bit (ie we ignore
+- *    duplicate events where it was already properly set)
+- *  - the suspend/resume paths will set it to
+- *    MODESET_SUSPENDED and ignore the lid open event,
+- *    because they restore the mode ("lid open").
+- */
+-static int intel_lid_notify(struct notifier_block *nb, unsigned long val,
+-			    void *unused)
+-{
+-	struct intel_lvds_connector *lvds_connector =
+-		container_of(nb, struct intel_lvds_connector, lid_notifier);
+-	struct drm_connector *connector = &lvds_connector->base.base;
+-	struct drm_device *dev = connector->dev;
+-	struct drm_i915_private *dev_priv = to_i915(dev);
+-
+-	if (dev->switch_power_state != DRM_SWITCH_POWER_ON)
+-		return NOTIFY_OK;
+-
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	if (dev_priv->modeset_restore == MODESET_SUSPENDED)
+-		goto exit;
+-	/*
+-	 * check and update the status of LVDS connector after receiving
+-	 * the LID nofication event.
+-	 */
+-	connector->status = connector->funcs->detect(connector, false);
+-
+-	/* Don't force modeset on machines where it causes a GPU lockup */
+-	if (dmi_check_system(intel_no_modeset_on_lid))
+-		goto exit;
+-	if (!acpi_lid_open()) {
+-		/* do modeset on next lid open event */
+-		dev_priv->modeset_restore = MODESET_ON_LID_OPEN;
+-		goto exit;
+-	}
+-
+-	if (dev_priv->modeset_restore == MODESET_DONE)
+-		goto exit;
+-
+-	/*
+-	 * Some old platform's BIOS love to wreak havoc while the lid is closed.
+-	 * We try to detect this here and undo any damage. The split for PCH
+-	 * platforms is rather conservative and a bit arbitrary expect that on
+-	 * those platforms VGA disabling requires actual legacy VGA I/O access,
+-	 * and as part of the cleanup in the hw state restore we also redisable
+-	 * the vga plane.
+-	 */
+-	if (!HAS_PCH_SPLIT(dev_priv))
+-		intel_display_resume(dev);
+-
+-	dev_priv->modeset_restore = MODESET_DONE;
+-
+-exit:
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-	return NOTIFY_OK;
+-}
+-
+-static int
+-intel_lvds_connector_register(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-	int ret;
+-
+-	ret = intel_connector_register(connector);
+-	if (ret)
+-		return ret;
+-
+-	lvds->lid_notifier.notifier_call = intel_lid_notify;
+-	if (acpi_lid_notifier_register(&lvds->lid_notifier)) {
+-		DRM_DEBUG_KMS("lid notifier registration failed\n");
+-		lvds->lid_notifier.notifier_call = NULL;
+-	}
+-
+-	return 0;
+-}
+-
+-static void
+-intel_lvds_connector_unregister(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-
+-	if (lvds->lid_notifier.notifier_call)
+-		acpi_lid_notifier_unregister(&lvds->lid_notifier);
+-
+-	intel_connector_unregister(connector);
+-}
+-
+ /**
+  * intel_lvds_destroy - unregister and free LVDS structures
+  * @connector: connector to free
+@@ -641,8 +511,8 @@ static const struct drm_connector_funcs intel_lvds_connector_funcs = {
+ 	.fill_modes = drm_helper_probe_single_connector_modes,
+ 	.atomic_get_property = intel_digital_connector_atomic_get_property,
+ 	.atomic_set_property = intel_digital_connector_atomic_set_property,
+-	.late_register = intel_lvds_connector_register,
+-	.early_unregister = intel_lvds_connector_unregister,
++	.late_register = intel_connector_register,
++	.early_unregister = intel_connector_unregister,
+ 	.destroy = intel_lvds_destroy,
+ 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ 	.atomic_duplicate_state = intel_digital_connector_duplicate_state,
+@@ -1108,8 +978,6 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ 	 * 2) check for VBT data
+ 	 * 3) check to see if LVDS is already on
+ 	 *    if none of the above, no panel
+-	 * 4) make sure lid is open
+-	 *    if closed, act like it's not there for now
+ 	 */
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 2121345a61af..78ce3d232c4d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -486,6 +486,31 @@ static void vop_line_flag_irq_disable(struct vop *vop)
+ 	spin_unlock_irqrestore(&vop->irq_lock, flags);
+ }
+ 
++static int vop_core_clks_enable(struct vop *vop)
++{
++	int ret;
++
++	ret = clk_enable(vop->hclk);
++	if (ret < 0)
++		return ret;
++
++	ret = clk_enable(vop->aclk);
++	if (ret < 0)
++		goto err_disable_hclk;
++
++	return 0;
++
++err_disable_hclk:
++	clk_disable(vop->hclk);
++	return ret;
++}
++
++static void vop_core_clks_disable(struct vop *vop)
++{
++	clk_disable(vop->aclk);
++	clk_disable(vop->hclk);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ 	struct vop *vop = to_vop(crtc);
+@@ -497,17 +522,13 @@ static int vop_enable(struct drm_crtc *crtc)
+ 		return ret;
+ 	}
+ 
+-	ret = clk_enable(vop->hclk);
++	ret = vop_core_clks_enable(vop);
+ 	if (WARN_ON(ret < 0))
+ 		goto err_put_pm_runtime;
+ 
+ 	ret = clk_enable(vop->dclk);
+ 	if (WARN_ON(ret < 0))
+-		goto err_disable_hclk;
+-
+-	ret = clk_enable(vop->aclk);
+-	if (WARN_ON(ret < 0))
+-		goto err_disable_dclk;
++		goto err_disable_core;
+ 
+ 	/*
+ 	 * Slave iommu shares power, irq and clock with vop.  It was associated
+@@ -519,7 +540,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ 	if (ret) {
+ 		DRM_DEV_ERROR(vop->dev,
+ 			      "failed to attach dma mapping, %d\n", ret);
+-		goto err_disable_aclk;
++		goto err_disable_dclk;
+ 	}
+ 
+ 	spin_lock(&vop->reg_lock);
+@@ -552,18 +573,14 @@ static int vop_enable(struct drm_crtc *crtc)
+ 
+ 	spin_unlock(&vop->reg_lock);
+ 
+-	enable_irq(vop->irq);
+-
+ 	drm_crtc_vblank_on(crtc);
+ 
+ 	return 0;
+ 
+-err_disable_aclk:
+-	clk_disable(vop->aclk);
+ err_disable_dclk:
+ 	clk_disable(vop->dclk);
+-err_disable_hclk:
+-	clk_disable(vop->hclk);
++err_disable_core:
++	vop_core_clks_disable(vop);
+ err_put_pm_runtime:
+ 	pm_runtime_put_sync(vop->dev);
+ 	return ret;
+@@ -599,8 +616,6 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	vop_dsp_hold_valid_irq_disable(vop);
+ 
+-	disable_irq(vop->irq);
+-
+ 	vop->is_enabled = false;
+ 
+ 	/*
+@@ -609,8 +624,7 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	rockchip_drm_dma_detach_device(vop->drm_dev, vop->dev);
+ 
+ 	clk_disable(vop->dclk);
+-	clk_disable(vop->aclk);
+-	clk_disable(vop->hclk);
++	vop_core_clks_disable(vop);
+ 	pm_runtime_put(vop->dev);
+ 	mutex_unlock(&vop->vop_lock);
+ 
+@@ -1177,6 +1191,18 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 	uint32_t active_irqs;
+ 	int ret = IRQ_NONE;
+ 
++	/*
++	 * The irq is shared with the iommu. If the runtime-pm state of the
++	 * vop-device is disabled the irq has to be targeted at the iommu.
++	 */
++	if (!pm_runtime_get_if_in_use(vop->dev))
++		return IRQ_NONE;
++
++	if (vop_core_clks_enable(vop)) {
++		DRM_DEV_ERROR_RATELIMITED(vop->dev, "couldn't enable clocks\n");
++		goto out;
++	}
++
+ 	/*
+ 	 * interrupt register has interrupt status, enable and clear bits, we
+ 	 * must hold irq_lock to avoid a race with enable/disable_vblank().
+@@ -1192,7 +1218,7 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 
+ 	/* This is expected for vop iommu irqs, since the irq is shared */
+ 	if (!active_irqs)
+-		return IRQ_NONE;
++		goto out_disable;
+ 
+ 	if (active_irqs & DSP_HOLD_VALID_INTR) {
+ 		complete(&vop->dsp_hold_completion);
+@@ -1218,6 +1244,10 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 		DRM_DEV_ERROR(vop->dev, "Unknown VOP IRQs: %#02x\n",
+ 			      active_irqs);
+ 
++out_disable:
++	vop_core_clks_disable(vop);
++out:
++	pm_runtime_put(vop->dev);
+ 	return ret;
+ }
+ 
+@@ -1596,9 +1626,6 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ 	if (ret)
+ 		goto err_disable_pm_runtime;
+ 
+-	/* IRQ is initially disabled; it gets enabled in power_on */
+-	disable_irq(vop->irq);
+-
+ 	return 0;
+ 
+ err_disable_pm_runtime:
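The rockchip VOP hunks above factor the hclk/aclk pair into `vop_core_clks_enable()`/`vop_core_clks_disable()` so the enable order and its unwind stay mirrored in one place instead of being repeated in every caller and error label. The shape of that refactor, reduced to a sketch with stubbed clocks:

```c
#include <assert.h>

static int hclk_on, aclk_on;

static int clk_enable_stub(int *clk)   { *clk = 1; return 0; }
static void clk_disable_stub(int *clk) { *clk = 0; }

/* Enable the two core clocks as a unit; if the second fails, roll back
 * the first so callers observe all-or-nothing behaviour. */
static int core_clks_enable(void)
{
	int ret = clk_enable_stub(&hclk_on);
	if (ret < 0)
		return ret;
	ret = clk_enable_stub(&aclk_on);
	if (ret < 0)
		clk_disable_stub(&hclk_on);
	return ret;
}

/* Disable in reverse order of enabling. */
static void core_clks_disable(void)
{
	clk_disable_stub(&aclk_on);
	clk_disable_stub(&hclk_on);
}
```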
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index e67f4ea28c0e..051b8be3dc0f 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -363,8 +363,10 @@ static int rockchip_lvds_bind(struct device *dev, struct device *master,
+ 		of_property_read_u32(endpoint, "reg", &endpoint_id);
+ 		ret = drm_of_find_panel_or_bridge(dev->of_node, 1, endpoint_id,
+ 						  &lvds->panel, &lvds->bridge);
+-		if (!ret)
++		if (!ret) {
++			of_node_put(endpoint);
+ 			break;
++		}
+ 	}
+ 	if (!child_count) {
+ 		DRM_DEV_ERROR(dev, "lvds port does not have any children\n");
+diff --git a/drivers/hid/hid-redragon.c b/drivers/hid/hid-redragon.c
+index daf59578bf93..73c9d4c4fa34 100644
+--- a/drivers/hid/hid-redragon.c
++++ b/drivers/hid/hid-redragon.c
+@@ -44,29 +44,6 @@ static __u8 *redragon_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	return rdesc;
+ }
+ 
+-static int redragon_probe(struct hid_device *dev,
+-	const struct hid_device_id *id)
+-{
+-	int ret;
+-
+-	ret = hid_parse(dev);
+-	if (ret) {
+-		hid_err(dev, "parse failed\n");
+-		return ret;
+-	}
+-
+-	/* do not register unused input device */
+-	if (dev->maxapplication == 1)
+-		return 0;
+-
+-	ret = hid_hw_start(dev, HID_CONNECT_DEFAULT);
+-	if (ret) {
+-		hid_err(dev, "hw start failed\n");
+-		return ret;
+-	}
+-
+-	return 0;
+-}
+ static const struct hid_device_id redragon_devices[] = {
+ 	{HID_USB_DEVICE(USB_VENDOR_ID_JESS, USB_DEVICE_ID_REDRAGON_ASURA)},
+ 	{}
+@@ -77,8 +54,7 @@ MODULE_DEVICE_TABLE(hid, redragon_devices);
+ static struct hid_driver redragon_driver = {
+ 	.name = "redragon",
+ 	.id_table = redragon_devices,
+-	.report_fixup = redragon_report_fixup,
+-	.probe = redragon_probe
++	.report_fixup = redragon_report_fixup
+ };
+ 
+ module_hid_driver(redragon_driver);
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index b8f303dea305..32affd3fa8bd 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -453,8 +453,12 @@ static int acpi_gsb_i2c_read_bytes(struct i2c_client *client,
+ 		else
+ 			dev_err(&client->adapter->dev, "i2c read %d bytes from client@%#x starting at reg %#x failed, error: %d\n",
+ 				data_len, client->addr, cmd, ret);
+-	} else {
++	/* 2 transfers must have completed successfully */
++	} else if (ret == 2) {
+ 		memcpy(data, buffer, data_len);
++		ret = 0;
++	} else {
++		ret = -EIO;
+ 	}
+ 
+ 	kfree(buffer);
+@@ -595,8 +599,6 @@ i2c_acpi_space_handler(u32 function, acpi_physical_address command,
+ 		if (action == ACPI_READ) {
+ 			status = acpi_gsb_i2c_read_bytes(client, command,
+ 					gsb->data, info->access_length);
+-			if (status > 0)
+-				status = 0;
+ 		} else {
+ 			status = acpi_gsb_i2c_write_bytes(client, command,
+ 					gsb->data, info->access_length);
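The i2c-core-acpi hunk above relies on `i2c_transfer()` returning the number of messages completed: a register read is two messages (address write plus data read), so only `ret == 2` counts as success and any other non-negative count becomes `-EIO`. That mapping in isolation, as a sketch rather than the kernel function:

```c
#include <assert.h>
#include <string.h>

/* Map an i2c_transfer()-style return (messages completed, or a negative
 * error) for a 2-message read into 0 / -EIO / passthrough error, copying
 * out the payload only on full success. */
static int finish_two_msg_read(int ret, const char *buf,
			       char *out, int len)
{
	if (ret < 0)
		return ret;            /* bus error, pass through */
	if (ret != 2)
		return -5;             /* partial transfer: -EIO */
	memcpy(out, buf, len);
	return 0;
}
```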
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index fbe7198a715a..bedd5fba33b0 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -198,7 +198,7 @@ int node_affinity_init(void)
+ 		while ((dev = pci_get_device(ids->vendor, ids->device, dev))) {
+ 			node = pcibus_to_node(dev->bus);
+ 			if (node < 0)
+-				node = numa_node_id();
++				goto out;
+ 
+ 			hfi1_per_node_cntr[node]++;
+ 		}
+@@ -206,6 +206,18 @@ int node_affinity_init(void)
+ 	}
+ 
+ 	return 0;
++
++out:
++	/*
++	 * Invalid PCI NUMA node information found, note it, and populate
++	 * our database 1:1.
++	 */
++	pr_err("HFI: Invalid PCI NUMA node. Performance may be affected\n");
++	pr_err("HFI: System BIOS may need to be upgraded\n");
++	for (node = 0; node < node_affinity.num_possible_nodes; node++)
++		hfi1_per_node_cntr[node] = 1;
++
++	return 0;
+ }
+ 
+ static void node_affinity_destroy(struct hfi1_affinity_node *entry)
+@@ -622,8 +634,14 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ 	int curr_cpu, possible, i, ret;
+ 	bool new_entry = false;
+ 
+-	if (node < 0)
+-		node = numa_node_id();
++	/*
++	 * If the BIOS does not have the NUMA node information set, select
++	 * NUMA 0 so we get consistent performance.
++	 */
++	if (node < 0) {
++		dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
++		node = 0;
++	}
+ 	dd->node = node;
+ 
+ 	local_mask = cpumask_of_node(dd->node);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index b9f2c871ff9a..e11c149da04d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -37,7 +37,7 @@
+ 
+ static int hns_roce_pd_alloc(struct hns_roce_dev *hr_dev, unsigned long *pdn)
+ {
+-	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn);
++	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn) ? -ENOMEM : 0;
+ }
+ 
+ static void hns_roce_pd_free(struct hns_roce_dev *hr_dev, unsigned long pdn)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index baaf906f7c2e..97664570c5ac 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -115,7 +115,10 @@ static int hns_roce_reserve_range_qp(struct hns_roce_dev *hr_dev, int cnt,
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 
+-	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align, base);
++	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align,
++					   base) ?
++		       -ENOMEM :
++		       0;
+ }
+ 
+ enum hns_roce_qp_state to_hns_roce_state(enum ib_qp_state state)
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 6365c1958264..3304aaaffe87 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -480,11 +480,19 @@ EXPORT_SYMBOL(input_inject_event);
+  */
+ void input_alloc_absinfo(struct input_dev *dev)
+ {
+-	if (!dev->absinfo)
+-		dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo),
+-					GFP_KERNEL);
++	if (dev->absinfo)
++		return;
+ 
+-	WARN(!dev->absinfo, "%s(): kcalloc() failed?\n", __func__);
++	dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo), GFP_KERNEL);
++	if (!dev->absinfo) {
++		dev_err(dev->dev.parent ?: &dev->dev,
++			"%s: unable to allocate memory\n", __func__);
++		/*
++		 * We will handle this allocation failure in
++		 * input_register_device() when we refuse to register input
++		 * device with ABS bits but without absinfo.
++		 */
++	}
+ }
+ EXPORT_SYMBOL(input_alloc_absinfo);
+ 
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index af4a8e7fcd27..3b05117118c3 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -550,7 +550,7 @@ static u32 *iopte_alloc(struct omap_iommu *obj, u32 *iopgd,
+ 
+ pte_ready:
+ 	iopte = iopte_offset(iopgd, da);
+-	*pt_dma = virt_to_phys(iopte);
++	*pt_dma = iopgd_page_paddr(iopgd);
+ 	dev_vdbg(obj->dev,
+ 		 "%s: da:%08x pgd:%p *pgd:%08x pte:%p *pte:%08x\n",
+ 		 __func__, da, iopgd, *iopgd, iopte, *iopte);
+@@ -738,7 +738,7 @@ static size_t iopgtable_clear_entry_core(struct omap_iommu *obj, u32 da)
+ 		}
+ 		bytes *= nent;
+ 		memset(iopte, 0, nent * sizeof(*iopte));
+-		pt_dma = virt_to_phys(iopte);
++		pt_dma = iopgd_page_paddr(iopgd);
+ 		flush_iopte_range(obj->dev, pt_dma, pt_offset, nent);
+ 
+ 		/*
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 054cd2c8e9c8..2b1724e8d307 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -521,10 +521,11 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id)
+ 	u32 int_status;
+ 	dma_addr_t iova;
+ 	irqreturn_t ret = IRQ_NONE;
+-	int i;
++	int i, err;
+ 
+-	if (WARN_ON(!pm_runtime_get_if_in_use(iommu->dev)))
+-		return 0;
++	err = pm_runtime_get_if_in_use(iommu->dev);
++	if (WARN_ON_ONCE(err <= 0))
++		return ret;
+ 
+ 	if (WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)))
+ 		goto out;
+@@ -620,11 +621,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
+ 	spin_lock_irqsave(&rk_domain->iommus_lock, flags);
+ 	list_for_each(pos, &rk_domain->iommus) {
+ 		struct rk_iommu *iommu;
++		int ret;
+ 
+ 		iommu = list_entry(pos, struct rk_iommu, node);
+ 
+ 		/* Only zap TLBs of IOMMUs that are powered on. */
+-		if (pm_runtime_get_if_in_use(iommu->dev)) {
++		ret = pm_runtime_get_if_in_use(iommu->dev);
++		if (WARN_ON_ONCE(ret < 0))
++			continue;
++		if (ret) {
+ 			WARN_ON(clk_bulk_enable(iommu->num_clocks,
+ 						iommu->clocks));
+ 			rk_iommu_zap_lines(iommu, iova, size);
+@@ -891,6 +896,7 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	struct rk_iommu *iommu;
+ 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	/* Allow 'virtual devices' (eg drm) to detach from domain */
+ 	iommu = rk_iommu_from_dev(dev);
+@@ -909,7 +915,9 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	list_del_init(&iommu->node);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (pm_runtime_get_if_in_use(iommu->dev)) {
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	WARN_ON_ONCE(ret < 0);
++	if (ret > 0) {
+ 		rk_iommu_disable(iommu);
+ 		pm_runtime_put(iommu->dev);
+ 	}
+@@ -946,7 +954,8 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
+ 	list_add_tail(&iommu->node, &rk_domain->iommus);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (!pm_runtime_get_if_in_use(iommu->dev))
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	if (!ret || WARN_ON_ONCE(ret < 0))
+ 		return 0;
+ 
+ 	ret = rk_iommu_enable(iommu);
+@@ -1152,17 +1161,6 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	if (iommu->num_mmu == 0)
+ 		return PTR_ERR(iommu->bases[0]);
+ 
+-	i = 0;
+-	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
+-		if (irq < 0)
+-			return irq;
+-
+-		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
+-				       IRQF_SHARED, dev_name(dev), iommu);
+-		if (err)
+-			return err;
+-	}
+-
+ 	iommu->reset_disabled = device_property_read_bool(dev,
+ 					"rockchip,disable-mmu-reset");
+ 
+@@ -1219,6 +1217,19 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
++	i = 0;
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
++		if (irq < 0)
++			return irq;
++
++		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
++				       IRQF_SHARED, dev_name(dev), iommu);
++		if (err) {
++			pm_runtime_disable(dev);
++			goto err_remove_sysfs;
++		}
++	}
++
+ 	return 0;
+ err_remove_sysfs:
+ 	iommu_device_sysfs_remove(&iommu->iommu);
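Several rockchip-iommu hunks above hinge on the same detail: `pm_runtime_get_if_in_use()` is tri-state, returning a negative error when runtime PM is disabled, 0 when the device is not in use (no reference taken), and a positive value when a reference was taken. Treating the return as a plain boolean, as the old code did, conflates the first two cases. The decision table as a sketch with a stubbed classifier:

```c
#include <assert.h>

enum pm_action { PM_SKIP, PM_WARN_SKIP, PM_PROCEED };

/* Classify a pm_runtime_get_if_in_use()-style return:
 *  < 0 -> runtime PM disabled: warn, do not touch the hardware
 *    0 -> device not in use: nothing to do, no reference held
 *  > 0 -> reference taken: safe to proceed (and put it later)  */
static enum pm_action classify_pm_get(int ret)
{
	if (ret < 0)
		return PM_WARN_SKIP;
	if (ret == 0)
		return PM_SKIP;
	return PM_PROCEED;
}
```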
+diff --git a/drivers/irqchip/irq-bcm7038-l1.c b/drivers/irqchip/irq-bcm7038-l1.c
+index faf734ff4cf3..0f6e30e9009d 100644
+--- a/drivers/irqchip/irq-bcm7038-l1.c
++++ b/drivers/irqchip/irq-bcm7038-l1.c
+@@ -217,6 +217,7 @@ static int bcm7038_l1_set_affinity(struct irq_data *d,
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_SMP
+ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ {
+ 	struct cpumask *mask = irq_data_get_affinity_mask(d);
+@@ -241,6 +242,7 @@ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ 	}
+ 	irq_set_affinity_locked(d, &new_affinity, false);
+ }
++#endif
+ 
+ static int __init bcm7038_l1_init_one(struct device_node *dn,
+ 				      unsigned int idx,
+@@ -293,7 +295,9 @@ static struct irq_chip bcm7038_l1_irq_chip = {
+ 	.irq_mask		= bcm7038_l1_mask,
+ 	.irq_unmask		= bcm7038_l1_unmask,
+ 	.irq_set_affinity	= bcm7038_l1_set_affinity,
++#ifdef CONFIG_SMP
+ 	.irq_cpu_offline	= bcm7038_l1_cpu_offline,
++#endif
+ };
+ 
+ static int bcm7038_l1_map(struct irq_domain *d, unsigned int virq,
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 3a7e8905a97e..880e48947576 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -602,17 +602,24 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
+ 					sizeof(struct stm32_exti_chip_data),
+ 					GFP_KERNEL);
+ 	if (!host_data->chips_data)
+-		return NULL;
++		goto free_host_data;
+ 
+ 	host_data->base = of_iomap(node, 0);
+ 	if (!host_data->base) {
+ 		pr_err("%pOF: Unable to map registers\n", node);
+-		return NULL;
++		goto free_chips_data;
+ 	}
+ 
+ 	stm32_host_data = host_data;
+ 
+ 	return host_data;
++
++free_chips_data:
++	kfree(host_data->chips_data);
++free_host_data:
++	kfree(host_data);
++
++	return NULL;
+ }
+ 
+ static struct
+@@ -664,10 +671,8 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
+ 	struct irq_domain *domain;
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	domain = irq_domain_add_linear(node, drv_data->bank_nr * IRQS_PER_BANK,
+ 				       &irq_exti_domain_ops, NULL);
+@@ -724,7 +729,6 @@ out_free_domain:
+ 	irq_domain_remove(domain);
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+@@ -751,10 +755,8 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 	}
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < drv_data->bank_nr; i++)
+ 		stm32_exti_chip_init(host_data, i, node);
+@@ -776,7 +778,6 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
+index 3c7547a3c371..d7b9cdafd1c3 100644
+--- a/drivers/md/dm-kcopyd.c
++++ b/drivers/md/dm-kcopyd.c
+@@ -487,6 +487,8 @@ static int run_complete_job(struct kcopyd_job *job)
+ 	if (atomic_dec_and_test(&kc->nr_jobs))
+ 		wake_up(&kc->destroyq);
+ 
++	cond_resched();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index 2a87b0d2f21f..a530972c5a7e 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -715,6 +715,7 @@ sm501_create_subdev(struct sm501_devdata *sm, char *name,
+ 	smdev->pdev.name = name;
+ 	smdev->pdev.id = sm->pdev_id;
+ 	smdev->pdev.dev.parent = sm->dev;
++	smdev->pdev.dev.coherent_dma_mask = 0xffffffff;
+ 
+ 	if (res_count) {
+ 		smdev->pdev.resource = (struct resource *)(smdev+1);
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index 94d7a865b135..7504f430c011 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -578,6 +578,16 @@ static int init_volumes(struct ubi_device *ubi,
+ 		vol->ubi = ubi;
+ 		reserved_pebs += vol->reserved_pebs;
+ 
++		/*
++		 * We use ubi->peb_count and not vol->reserved_pebs because
++		 * we want to keep the code simple. Otherwise we'd have to
++		 * resize/check the bitmap upon volume resize too.
++		 * Allocating a few bytes more does not hurt.
++		 */
++		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
++		if (err)
++			return err;
++
+ 		/*
+ 		 * In case of dynamic volume UBI knows nothing about how many
+ 		 * data is stored there. So assume the whole volume is used.
+@@ -620,16 +630,6 @@ static int init_volumes(struct ubi_device *ubi,
+ 			(long long)(vol->used_ebs - 1) * vol->usable_leb_size;
+ 		vol->used_bytes += av->last_data_size;
+ 		vol->last_eb_bytes = av->last_data_size;
+-
+-		/*
+-		 * We use ubi->peb_count and not vol->reserved_pebs because
+-		 * we want to keep the code simple. Otherwise we'd have to
+-		 * resize/check the bitmap upon volume resize too.
+-		 * Allocating a few bytes more does not hurt.
+-		 */
+-		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
+-		if (err)
+-			return err;
+ 	}
+ 
+ 	/* And add the layout volume */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4394c1162be4..4fdf3d33aa59 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5907,12 +5907,12 @@ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp)
+ 	return bp->hw_resc.max_cp_rings;
+ }
+ 
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max)
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp)
+ {
+-	bp->hw_resc.max_cp_rings = max;
++	return bp->hw_resc.max_cp_rings - bnxt_get_ulp_msix_num(bp);
+ }
+ 
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
++static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
+ {
+ 	struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
+ 
+@@ -8492,7 +8492,8 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+-	*max_cp = min_t(int, hw_resc->max_irqs, hw_resc->max_cp_rings);
++	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
++			hw_resc->max_irqs);
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 91575ef97c8c..ea1246a94b38 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1468,8 +1468,7 @@ int bnxt_hwrm_set_coal(struct bnxt *);
+ unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp);
+ void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max);
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp);
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+ int bnxt_reserve_rings(struct bnxt *bp);
+ void bnxt_tx_disable(struct bnxt *bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index a64910892c25..2c77004a022b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -451,7 +451,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1);
+ 
+-	vf_cp_rings = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	vf_cp_rings = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	vf_stat_ctx = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings * 2;
+@@ -544,7 +544,8 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
+ 	max_stat_ctxs = hw_resc->max_stat_ctxs;
+ 
+ 	/* Remaining rings are distributed equally amongs VF's for now */
+-	vf_cp_rings = (hw_resc->max_cp_rings - bp->cp_nr_rings) / num_vfs;
++	vf_cp_rings = (bnxt_get_max_func_cp_rings_for_en(bp) -
++		       bp->cp_nr_rings) / num_vfs;
+ 	vf_stat_ctx = (max_stat_ctxs - bp->num_stat_ctxs) / num_vfs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = (hw_resc->max_rx_rings - bp->rx_nr_rings * 2) /
+@@ -638,7 +639,7 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
+ 	 */
+ 	vfs_supported = *num_vfs;
+ 
+-	avail_cp = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	avail_cp = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	avail_stat = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	avail_cp = min_t(int, avail_cp, avail_stat);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 840f6e505f73..4209cfd73971 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -169,7 +169,6 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ 		edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+ 	}
+ 	bnxt_fill_msix_vecs(bp, ent);
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
+ 	edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	return avail_msix;
+ }
+@@ -178,7 +177,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ {
+ 	struct net_device *dev = edev->net;
+ 	struct bnxt *bp = netdev_priv(dev);
+-	int max_cp_rings, msix_requested;
+ 
+ 	ASSERT_RTNL();
+ 	if (ulp_id != BNXT_ROCE_ULP)
+@@ -187,9 +185,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ 	if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
+ 		return 0;
+ 
+-	max_cp_rings = bnxt_get_max_func_cp_rings(bp);
+-	msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
+ 	edev->ulp_tbl[ulp_id].msix_requested = 0;
+ 	edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	if (netif_running(dev)) {
+@@ -220,21 +215,6 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp)
+ 	return 0;
+ }
+ 
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id)
+-{
+-	ASSERT_RTNL();
+-	if (bnxt_ulp_registered(bp->edev, ulp_id)) {
+-		struct bnxt_en_dev *edev = bp->edev;
+-		unsigned int msix_req, max;
+-
+-		msix_req = edev->ulp_tbl[ulp_id].msix_requested;
+-		max = bnxt_get_max_func_cp_rings(bp);
+-		bnxt_set_max_func_cp_rings(bp, max - msix_req);
+-		max = bnxt_get_max_func_stat_ctxs(bp);
+-		bnxt_set_max_func_stat_ctxs(bp, max - 1);
+-	}
+-}
+-
+ static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id,
+ 			 struct bnxt_fw_msg *fw_msg)
+ {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+index df48ac71729f..d9bea37cd211 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+@@ -90,7 +90,6 @@ static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+ 
+ int bnxt_get_ulp_msix_num(struct bnxt *bp);
+ int bnxt_get_ulp_msix_base(struct bnxt *bp);
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id);
+ void bnxt_ulp_stop(struct bnxt *bp);
+ void bnxt_ulp_start(struct bnxt *bp);
+ void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index b773bc07edf7..14b49612aa86 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -186,6 +186,9 @@ struct bcmgenet_mib_counters {
+ #define UMAC_MAC1			0x010
+ #define UMAC_MAX_FRAME_LEN		0x014
+ 
++#define UMAC_MODE			0x44
++#define  MODE_LINK_STATUS		(1 << 5)
++
+ #define UMAC_EEE_CTRL			0x064
+ #define  EN_LPI_RX_PAUSE		(1 << 0)
+ #define  EN_LPI_TX_PFC			(1 << 1)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 5333274a283c..4241ae928d4a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -115,8 +115,14 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ static int bcmgenet_fixed_phy_link_update(struct net_device *dev,
+ 					  struct fixed_phy_status *status)
+ {
+-	if (dev && dev->phydev && status)
+-		status->link = dev->phydev->link;
++	struct bcmgenet_priv *priv;
++	u32 reg;
++
++	if (dev && dev->phydev && status) {
++		priv = netdev_priv(dev);
++		reg = bcmgenet_umac_readl(priv, UMAC_MODE);
++		status->link = !!(reg & MODE_LINK_STATUS);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index a6c911bb5ce2..515d96e32143 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -481,11 +481,6 @@ static int macb_mii_probe(struct net_device *dev)
+ 
+ 	if (np) {
+ 		if (of_phy_is_fixed_link(np)) {
+-			if (of_phy_register_fixed_link(np) < 0) {
+-				dev_err(&bp->pdev->dev,
+-					"broken fixed-link specification\n");
+-				return -ENODEV;
+-			}
+ 			bp->phy_node = of_node_get(np);
+ 		} else {
+ 			bp->phy_node = of_parse_phandle(np, "phy-handle", 0);
+@@ -568,7 +563,7 @@ static int macb_mii_init(struct macb *bp)
+ {
+ 	struct macb_platform_data *pdata;
+ 	struct device_node *np;
+-	int err;
++	int err = -ENXIO;
+ 
+ 	/* Enable management port */
+ 	macb_writel(bp, NCR, MACB_BIT(MPE));
+@@ -591,12 +586,23 @@ static int macb_mii_init(struct macb *bp)
+ 	dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
+ 
+ 	np = bp->pdev->dev.of_node;
+-	if (pdata)
+-		bp->mii_bus->phy_mask = pdata->phy_mask;
++	if (np && of_phy_is_fixed_link(np)) {
++		if (of_phy_register_fixed_link(np) < 0) {
++			dev_err(&bp->pdev->dev,
++				"broken fixed-link specification %pOF\n", np);
++			goto err_out_free_mdiobus;
++		}
++
++		err = mdiobus_register(bp->mii_bus);
++	} else {
++		if (pdata)
++			bp->mii_bus->phy_mask = pdata->phy_mask;
++
++		err = of_mdiobus_register(bp->mii_bus, np);
++	}
+ 
+-	err = of_mdiobus_register(bp->mii_bus, np);
+ 	if (err)
+-		goto err_out_free_mdiobus;
++		goto err_out_free_fixed_link;
+ 
+ 	err = macb_mii_probe(bp->dev);
+ 	if (err)
+@@ -606,6 +612,7 @@ static int macb_mii_init(struct macb *bp)
+ 
+ err_out_unregister_bus:
+ 	mdiobus_unregister(bp->mii_bus);
++err_out_free_fixed_link:
+ 	if (np && of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ err_out_free_mdiobus:
+@@ -1957,14 +1964,17 @@ static void macb_reset_hw(struct macb *bp)
+ {
+ 	struct macb_queue *queue;
+ 	unsigned int q;
++	u32 ctrl = macb_readl(bp, NCR);
+ 
+ 	/* Disable RX and TX (XXX: Should we halt the transmission
+ 	 * more gracefully?)
+ 	 */
+-	macb_writel(bp, NCR, 0);
++	ctrl &= ~(MACB_BIT(RE) | MACB_BIT(TE));
+ 
+ 	/* Clear the stats registers (XXX: Update stats first?) */
+-	macb_writel(bp, NCR, MACB_BIT(CLRSTAT));
++	ctrl |= MACB_BIT(CLRSTAT);
++
++	macb_writel(bp, NCR, ctrl);
+ 
+ 	/* Clear all status flags */
+ 	macb_writel(bp, TSR, -1);
+@@ -2152,7 +2162,7 @@ static void macb_init_hw(struct macb *bp)
+ 	}
+ 
+ 	/* Enable TX and RX */
+-	macb_writel(bp, NCR, MACB_BIT(RE) | MACB_BIT(TE) | MACB_BIT(MPE));
++	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(RE) | MACB_BIT(TE));
+ }
+ 
+ /* The hash address register is 64 bits long and takes up two
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index d318d35e598f..6fd7ea8074b0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3911,7 +3911,7 @@ static bool hclge_is_all_function_id_zero(struct hclge_desc *desc)
+ #define HCLGE_FUNC_NUMBER_PER_DESC 6
+ 	int i, j;
+ 
+-	for (i = 0; i < HCLGE_DESC_NUMBER; i++)
++	for (i = 1; i < HCLGE_DESC_NUMBER; i++)
+ 		for (j = 0; j < HCLGE_FUNC_NUMBER_PER_DESC; j++)
+ 			if (desc[i].data[j])
+ 				return false;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 9f7932e423b5..6315e8ad8467 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -208,6 +208,8 @@ int hclge_mac_start_phy(struct hclge_dev *hdev)
+ 	if (!phydev)
+ 		return 0;
+ 
++	phydev->supported &= ~SUPPORTED_FIBRE;
++
+ 	ret = phy_connect_direct(netdev, phydev,
+ 				 hclge_mac_adjust_link,
+ 				 PHY_INTERFACE_MODE_SGMII);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index 86478a6b99c5..c8c315eb5128 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -139,14 +139,15 @@ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      struct mlx5_wq_ctrl *wq_ctrl)
+ {
+ 	u32 sq_strides_offset;
++	u32 rq_pg_remainder;
+ 	int err;
+ 
+ 	mlx5_fill_fbc(MLX5_GET(qpc, qpc, log_rq_stride) + 4,
+ 		      MLX5_GET(qpc, qpc, log_rq_size),
+ 		      &wq->rq.fbc);
+ 
+-	sq_strides_offset =
+-		((wq->rq.fbc.frag_sz_m1 + 1) % PAGE_SIZE) / MLX5_SEND_WQE_BB;
++	rq_pg_remainder   = mlx5_wq_cyc_get_byte_size(&wq->rq) % PAGE_SIZE;
++	sq_strides_offset = rq_pg_remainder / MLX5_SEND_WQE_BB;
+ 
+ 	mlx5_fill_fbc_offset(ilog2(MLX5_SEND_WQE_BB),
+ 			     MLX5_GET(qpc, qpc, log_sq_size),
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index 4a519d8edec8..3500c79e29cd 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -433,6 +433,8 @@ mlxsw_sp_netdevice_ipip_ul_event(struct mlxsw_sp *mlxsw_sp,
+ void
+ mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan);
+ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif);
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev);
+ 
+ /* spectrum_kvdl.c */
+ int mlxsw_sp_kvdl_init(struct mlxsw_sp *mlxsw_sp);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 77b2adb29341..cb43d17097fa 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -6228,6 +6228,17 @@ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif)
+ 	mlxsw_sp_vr_put(mlxsw_sp, vr);
+ }
+ 
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev)
++{
++	struct mlxsw_sp_rif *rif;
++
++	rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev);
++	if (!rif)
++		return;
++	mlxsw_sp_rif_destroy(rif);
++}
++
+ static void
+ mlxsw_sp_rif_subport_params_init(struct mlxsw_sp_rif_params *params,
+ 				 struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index eea5666a86b2..6cb43dda8232 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -160,6 +160,24 @@ bool mlxsw_sp_bridge_device_is_offloaded(const struct mlxsw_sp *mlxsw_sp,
+ 	return !!mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev);
+ }
+ 
++static int mlxsw_sp_bridge_device_upper_rif_destroy(struct net_device *dev,
++						    void *data)
++{
++	struct mlxsw_sp *mlxsw_sp = data;
++
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	return 0;
++}
++
++static void mlxsw_sp_bridge_device_rifs_destroy(struct mlxsw_sp *mlxsw_sp,
++						struct net_device *dev)
++{
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	netdev_walk_all_upper_dev_rcu(dev,
++				      mlxsw_sp_bridge_device_upper_rif_destroy,
++				      mlxsw_sp);
++}
++
+ static struct mlxsw_sp_bridge_device *
+ mlxsw_sp_bridge_device_create(struct mlxsw_sp_bridge *bridge,
+ 			      struct net_device *br_dev)
+@@ -198,6 +216,8 @@ static void
+ mlxsw_sp_bridge_device_destroy(struct mlxsw_sp_bridge *bridge,
+ 			       struct mlxsw_sp_bridge_device *bridge_device)
+ {
++	mlxsw_sp_bridge_device_rifs_destroy(bridge->mlxsw_sp,
++					    bridge_device->dev);
+ 	list_del(&bridge_device->list);
+ 	if (bridge_device->vlan_enabled)
+ 		bridge->vlan_enabled_exists = false;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index d4c27f849f9b..c2a9e64bc57b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -227,29 +227,16 @@ done:
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ }
+ 
+-/**
+- * nfp_net_reconfig() - Reconfigure the firmware
+- * @nn:      NFP Net device to reconfigure
+- * @update:  The value for the update field in the BAR config
+- *
+- * Write the update word to the BAR and ping the reconfig queue.  The
+- * poll until the firmware has acknowledged the update by zeroing the
+- * update word.
+- *
+- * Return: Negative errno on error, 0 on success
+- */
+-int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++static void nfp_net_reconfig_sync_enter(struct nfp_net *nn)
+ {
+ 	bool cancelled_timer = false;
+ 	u32 pre_posted_requests;
+-	int ret;
+ 
+ 	spin_lock_bh(&nn->reconfig_lock);
+ 
+ 	nn->reconfig_sync_present = true;
+ 
+ 	if (nn->reconfig_timer_active) {
+-		del_timer(&nn->reconfig_timer);
+ 		nn->reconfig_timer_active = false;
+ 		cancelled_timer = true;
+ 	}
+@@ -258,14 +245,43 @@ int nfp_net_reconfig(struct nfp_net *nn, u32 update)
+ 
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ 
+-	if (cancelled_timer)
++	if (cancelled_timer) {
++		del_timer_sync(&nn->reconfig_timer);
+ 		nfp_net_reconfig_wait(nn, nn->reconfig_timer.expires);
++	}
+ 
+ 	/* Run the posted reconfigs which were issued before we started */
+ 	if (pre_posted_requests) {
+ 		nfp_net_reconfig_start(nn, pre_posted_requests);
+ 		nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+ 	}
++}
++
++static void nfp_net_reconfig_wait_posted(struct nfp_net *nn)
++{
++	nfp_net_reconfig_sync_enter(nn);
++
++	spin_lock_bh(&nn->reconfig_lock);
++	nn->reconfig_sync_present = false;
++	spin_unlock_bh(&nn->reconfig_lock);
++}
++
++/**
++ * nfp_net_reconfig() - Reconfigure the firmware
++ * @nn:      NFP Net device to reconfigure
++ * @update:  The value for the update field in the BAR config
++ *
++ * Write the update word to the BAR and ping the reconfig queue.  The
++ * poll until the firmware has acknowledged the update by zeroing the
++ * update word.
++ *
++ * Return: Negative errno on error, 0 on success
++ */
++int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++{
++	int ret;
++
++	nfp_net_reconfig_sync_enter(nn);
+ 
+ 	nfp_net_reconfig_start(nn, update);
+ 	ret = nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+@@ -3609,6 +3625,7 @@ struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev,
+  */
+ void nfp_net_free(struct nfp_net *nn)
+ {
++	WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted);
+ 	if (nn->dp.netdev)
+ 		free_netdev(nn->dp.netdev);
+ 	else
+@@ -3893,4 +3910,5 @@ void nfp_net_clean(struct nfp_net *nn)
+ 		return;
+ 
+ 	unregister_netdev(nn->dp.netdev);
++	nfp_net_reconfig_wait_posted(nn);
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+index 353f1c129af1..059ba9429e51 100644
+--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
++++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+@@ -2384,26 +2384,20 @@ static int qlge_update_hw_vlan_features(struct net_device *ndev,
+ 	return status;
+ }
+ 
+-static netdev_features_t qlge_fix_features(struct net_device *ndev,
+-	netdev_features_t features)
+-{
+-	int err;
+-
+-	/* Update the behavior of vlan accel in the adapter */
+-	err = qlge_update_hw_vlan_features(ndev, features);
+-	if (err)
+-		return err;
+-
+-	return features;
+-}
+-
+ static int qlge_set_features(struct net_device *ndev,
+ 	netdev_features_t features)
+ {
+ 	netdev_features_t changed = ndev->features ^ features;
++	int err;
++
++	if (changed & NETIF_F_HW_VLAN_CTAG_RX) {
++		/* Update the behavior of vlan accel in the adapter */
++		err = qlge_update_hw_vlan_features(ndev, features);
++		if (err)
++			return err;
+ 
+-	if (changed & NETIF_F_HW_VLAN_CTAG_RX)
+ 		qlge_vlan_mode(ndev, features);
++	}
+ 
+ 	return 0;
+ }
+@@ -4719,7 +4713,6 @@ static const struct net_device_ops qlge_netdev_ops = {
+ 	.ndo_set_mac_address	= qlge_set_mac_address,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_tx_timeout		= qlge_tx_timeout,
+-	.ndo_fix_features	= qlge_fix_features,
+ 	.ndo_set_features	= qlge_set_features,
+ 	.ndo_vlan_rx_add_vid	= qlge_vlan_rx_add_vid,
+ 	.ndo_vlan_rx_kill_vid	= qlge_vlan_rx_kill_vid,
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 9ceb34bac3a9..e5eb361b973c 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -303,6 +303,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8161), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8167), 0, 0, RTL_CFG_0 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8168), 0, 0, RTL_CFG_1 },
++	{ PCI_DEVICE(PCI_VENDOR_ID_NCUBE,	0x8168), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8169), 0, 0, RTL_CFG_0 },
+ 	{ PCI_VENDOR_ID_DLINK,			0x4300,
+ 		PCI_VENDOR_ID_DLINK, 0x4b10,		 0, 0, RTL_CFG_1 },
+@@ -5038,7 +5039,7 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 	rtl_hw_reset(tp);
+ }
+ 
+-static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
++static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+ 	/* Set DMA burst size and Interframe Gap Time */
+ 	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+@@ -5149,12 +5150,14 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_rx_tx_config_registers(tp);
++	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
++	rtl_init_rxcfg(tp);
++
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+ 	RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 76649adf8fb0..c0a855b7ab3b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -112,7 +112,6 @@ struct stmmac_priv {
+ 	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+-	bool tx_timer_armed;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ef6a8d39db2f..c579d98b9666 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3126,16 +3126,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * element in case of no SG.
+ 	 */
+ 	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames) &&
+-	    !priv->tx_timer_armed) {
++	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+ 		mod_timer(&priv->txtimer,
+ 			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-		priv->tx_timer_armed = true;
+ 	} else {
+ 		priv->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
+-		priv->tx_timer_armed = false;
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index dd1d6e115145..6d74cde68163 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -29,6 +29,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/inetdevice.h>
+ #include <linux/etherdevice.h>
++#include <linux/pci.h>
+ #include <linux/skbuff.h>
+ #include <linux/if_vlan.h>
+ #include <linux/in.h>
+@@ -1939,12 +1940,16 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ {
+ 	struct net_device *ndev;
+ 	struct net_device_context *net_device_ctx;
++	struct device *pdev = vf_netdev->dev.parent;
+ 	struct netvsc_device *netvsc_dev;
+ 	int ret;
+ 
+ 	if (vf_netdev->addr_len != ETH_ALEN)
+ 		return NOTIFY_DONE;
+ 
++	if (!pdev || !dev_is_pci(pdev) || dev_is_pf(pdev))
++		return NOTIFY_DONE;
++
+ 	/*
+ 	 * We will use the MAC address to locate the synthetic interface to
+ 	 * associate with the VF interface. If we don't find a matching
+@@ -2101,6 +2106,16 @@ static int netvsc_probe(struct hv_device *dev,
+ 
+ 	memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);
+ 
++	/* We must get rtnl lock before scheduling nvdev->subchan_work,
++	 * otherwise netvsc_subchan_work() can get rtnl lock first and wait
++	 * all subchannels to show up, but that may not happen because
++	 * netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer()
++	 * -> ... -> device_add() -> ... -> __device_attach() can't get
++	 * the device lock, so all the subchannels can't be processed --
++	 * finally netvsc_subchan_work() hangs for ever.
++	 */
++	rtnl_lock();
++
+ 	if (nvdev->num_chn > 1)
+ 		schedule_work(&nvdev->subchan_work);
+ 
+@@ -2119,7 +2134,6 @@ static int netvsc_probe(struct hv_device *dev,
+ 	else
+ 		net->max_mtu = ETH_DATA_LEN;
+ 
+-	rtnl_lock();
+ 	ret = register_netdevice(net);
+ 	if (ret != 0) {
+ 		pr_err("Unable to register netdev.\n");
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 2a58607a6aea..1b07bb5e110d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5214,8 +5214,8 @@ static int rtl8152_probe(struct usb_interface *intf,
+ 		netdev->hw_features &= ~NETIF_F_RXCSUM;
+ 	}
+ 
+-	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 &&
+-	    udev->serial && !strcmp(udev->serial, "000001000000")) {
++	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 && udev->serial &&
++	    (!strcmp(udev->serial, "000001000000") || !strcmp(udev->serial, "000002000000"))) {
+ 		dev_info(&udev->dev, "Dell TB16 Dock, disable RX aggregation");
+ 		set_bit(DELL_TB_RX_AGG_BUG, &tp->flags);
+ 	}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index b6122aad639e..7569f9af8d47 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -6926,15 +6926,15 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
+ 	cfg->d11inf.io_type = (u8)io_type;
+ 	brcmu_d11_attach(&cfg->d11inf);
+ 
+-	err = brcmf_setup_wiphy(wiphy, ifp);
+-	if (err < 0)
+-		goto priv_out;
+-
+ 	/* regulatory notifer below needs access to cfg so
+ 	 * assign it now.
+ 	 */
+ 	drvr->config = cfg;
+ 
++	err = brcmf_setup_wiphy(wiphy, ifp);
++	if (err < 0)
++		goto priv_out;
++
+ 	brcmf_dbg(INFO, "Registering custom regulatory\n");
+ 	wiphy->reg_notifier = brcmf_cfg80211_reg_notifier;
+ 	wiphy->regulatory_flags |= REGULATORY_CUSTOM_REG;
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index 23e270839e6a..f00df2384985 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1219,7 +1219,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
+ 		pcie->realio.start = PCIBIOS_MIN_IO;
+ 		pcie->realio.end = min_t(resource_size_t,
+ 					 IO_SPACE_LIMIT,
+-					 resource_size(&pcie->io));
++					 resource_size(&pcie->io) - 1);
+ 	} else
+ 		pcie->realio = pcie->io;
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index b2857865c0aa..a1a243ee36bb 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1725,7 +1725,7 @@ int pci_setup_device(struct pci_dev *dev)
+ static void pci_configure_mps(struct pci_dev *dev)
+ {
+ 	struct pci_dev *bridge = pci_upstream_bridge(dev);
+-	int mps, p_mps, rc;
++	int mps, mpss, p_mps, rc;
+ 
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+@@ -1753,6 +1753,14 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (pcie_bus_config != PCIE_BUS_DEFAULT)
+ 		return;
+ 
++	mpss = 128 << dev->pcie_mpss;
++	if (mpss < p_mps && pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT) {
++		pcie_set_mps(bridge, mpss);
++		pci_info(dev, "Upstream bridge's Max Payload Size set to %d (was %d, max %d)\n",
++			 mpss, p_mps, 128 << bridge->pcie_mpss);
++		p_mps = pcie_get_mps(bridge);
++	}
++
+ 	rc = pcie_set_mps(dev, p_mps);
+ 	if (rc) {
+ 		pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
+@@ -1761,7 +1769,7 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	}
+ 
+ 	pci_info(dev, "Max Payload Size set to %d (was %d, max %d)\n",
+-		 p_mps, mps, 128 << dev->pcie_mpss);
++		 p_mps, mps, mpss);
+ }
+ 
+ static struct hpp_type0 pci_default_type0 = {
+diff --git a/drivers/pinctrl/pinctrl-axp209.c b/drivers/pinctrl/pinctrl-axp209.c
+index a52779f33ad4..afd0b533c40a 100644
+--- a/drivers/pinctrl/pinctrl-axp209.c
++++ b/drivers/pinctrl/pinctrl-axp209.c
+@@ -316,7 +316,7 @@ static const struct pinctrl_ops axp20x_pctrl_ops = {
+ 	.get_group_pins		= axp20x_group_pins,
+ };
+ 
+-static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
++static int axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 					  unsigned int mask_len,
+ 					  struct axp20x_pinctrl_function *func,
+ 					  const struct pinctrl_pin_desc *pins)
+@@ -331,18 +331,22 @@ static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 		func->groups = devm_kcalloc(dev,
+ 					    ngroups, sizeof(const char *),
+ 					    GFP_KERNEL);
++		if (!func->groups)
++			return -ENOMEM;
+ 		group = func->groups;
+ 		for_each_set_bit(bit, &mask_cpy, mask_len) {
+ 			*group = pins[bit].name;
+ 			group++;
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+-static void axp20x_build_funcs_groups(struct platform_device *pdev)
++static int axp20x_build_funcs_groups(struct platform_device *pdev)
+ {
+ 	struct axp20x_pctl *pctl = platform_get_drvdata(pdev);
+-	int i, pin, npins = pctl->desc->npins;
++	int i, ret, pin, npins = pctl->desc->npins;
+ 
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].name = "gpio_out";
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].muxval = AXP20X_MUX_GPIO_OUT;
+@@ -366,13 +370,19 @@ static void axp20x_build_funcs_groups(struct platform_device *pdev)
+ 			pctl->funcs[i].groups[pin] = pctl->desc->pins[pin].name;
+ 	}
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_LDO],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_ADC],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
++
++	return 0;
+ }
+ 
+ static const struct of_device_id axp20x_pctl_match[] = {
+@@ -424,7 +434,11 @@ static int axp20x_pctl_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, pctl);
+ 
+-	axp20x_build_funcs_groups(pdev);
++	ret = axp20x_build_funcs_groups(pdev);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to build groups\n");
++		return ret;
++	}
+ 
+ 	pctrl_desc = devm_kzalloc(&pdev->dev, sizeof(*pctrl_desc), GFP_KERNEL);
+ 	if (!pctrl_desc)
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 136ff2b4cce5..db2af09067db 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -496,6 +496,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
++	{ KE_KEY, 0xFA, { KEY_PROG2 } },           /* Lid flip action */
+ 	{ KE_END, 0},
+ };
+ 
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index b5b890127479..b7dfe06261f1 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+ #include <linux/interrupt.h>
++#include <linux/io.h>
+ #include <linux/platform_device.h>
+ #include <asm/intel_punit_ipc.h>
+ 
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 822860b4801a..c1ed641b3e26 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -458,7 +458,6 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 				   struct meson_pwm_channel *channels)
+ {
+ 	struct device *dev = meson->chip.dev;
+-	struct device_node *np = dev->of_node;
+ 	struct clk_init_data init;
+ 	unsigned int i;
+ 	char name[255];
+@@ -467,7 +466,7 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 	for (i = 0; i < meson->chip.npwm; i++) {
+ 		struct meson_pwm_channel *channel = &channels[i];
+ 
+-		snprintf(name, sizeof(name), "%pOF#mux%u", np, i);
++		snprintf(name, sizeof(name), "%s#mux%u", dev_name(dev), i);
+ 
+ 		init.name = name;
+ 		init.ops = &clk_mux_ops;
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index bbf95b78ef5d..43e3398c9268 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1780,6 +1780,9 @@ static void dasd_eckd_uncheck_device(struct dasd_device *device)
+ 	struct dasd_eckd_private *private = device->private;
+ 	int i;
+ 
++	if (!private)
++		return;
++
+ 	dasd_alias_disconnect_device_from_lcu(device);
+ 	private->ned = NULL;
+ 	private->sneq = NULL;
+@@ -2035,8 +2038,11 @@ static int dasd_eckd_basic_to_ready(struct dasd_device *device)
+ 
+ static int dasd_eckd_online_to_ready(struct dasd_device *device)
+ {
+-	cancel_work_sync(&device->reload_device);
+-	cancel_work_sync(&device->kick_validate);
++	if (cancel_work_sync(&device->reload_device))
++		dasd_put_device(device);
++	if (cancel_work_sync(&device->kick_validate))
++		dasd_put_device(device);
++
+ 	return 0;
+ };
+ 
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index 80e5b283fd81..1391e5f35918 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -1030,8 +1030,10 @@ static int __init aic94xx_init(void)
+ 
+ 	aic94xx_transport_template =
+ 		sas_domain_attach_transport(&aic94xx_transport_functions);
+-	if (!aic94xx_transport_template)
++	if (!aic94xx_transport_template) {
++		err = -ENOMEM;
+ 		goto out_destroy_caches;
++	}
+ 
+ 	err = pci_register_driver(&aic94xx_pci_driver);
+ 	if (err)
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index e40a2c0a9543..d3da39a9f567 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -5446,11 +5446,11 @@ static int ni_E_init(struct comedi_device *dev,
+ 	/* Digital I/O (PFI) subdevice */
+ 	s = &dev->subdevices[NI_PFI_DIO_SUBDEV];
+ 	s->type		= COMEDI_SUBD_DIO;
+-	s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 	s->maxdata	= 1;
+ 	if (devpriv->is_m_series) {
+ 		s->n_chan	= 16;
+ 		s->insn_bits	= ni_pfi_insn_bits;
++		s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 
+ 		ni_writew(dev, s->state, NI_M_PFI_DO_REG);
+ 		for (i = 0; i < NUM_PFI_OUTPUT_SELECT_REGS; ++i) {
+@@ -5459,6 +5459,7 @@ static int ni_E_init(struct comedi_device *dev,
+ 		}
+ 	} else {
+ 		s->n_chan	= 10;
++		s->subdev_flags	= SDF_INTERNAL;
+ 	}
+ 	s->insn_config	= ni_pfi_insn_config;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index ed3114556fda..560ed8711706 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -951,7 +951,7 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d,
+ 	list_for_each_entry_safe(node, n, &d->pending_list, node) {
+ 		struct vhost_iotlb_msg *vq_msg = &node->msg.iotlb;
+ 		if (msg->iova <= vq_msg->iova &&
+-		    msg->iova + msg->size - 1 > vq_msg->iova &&
++		    msg->iova + msg->size - 1 >= vq_msg->iova &&
+ 		    vq_msg->type == VHOST_IOTLB_MISS) {
+ 			vhost_poll_queue(&node->vq->poll);
+ 			list_del(&node->node);
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index 2780886e8ba3..de062fb201bc 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -122,6 +122,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	struct virtqueue *vq;
+ 	u16 num;
+ 	int err;
++	u64 q_pfn;
+ 
+ 	/* Select the queue we're interested in */
+ 	iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
+@@ -141,9 +142,17 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	if (!vq)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	q_pfn = virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT;
++	if (q_pfn >> 32) {
++		dev_err(&vp_dev->pci_dev->dev,
++			"platform bug: legacy virtio-mmio must not be used with RAM above 0x%llxGB\n",
++			0x1ULL << (32 + PAGE_SHIFT - 30));
++		err = -E2BIG;
++		goto out_del_vq;
++	}
++
+ 	/* activate the queue */
+-	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+-		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++	iowrite32(q_pfn, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
+ 
+ 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
+ 
+@@ -160,6 +169,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 
+ out_deactivate:
+ 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++out_del_vq:
+ 	vring_del_virtqueue(vq);
+ 	return ERR_PTR(err);
+ }
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index b437fccd4e62..294f35ce9e46 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -81,7 +81,7 @@ static void watch_target(struct xenbus_watch *watch,
+ 			static_max = new_target;
+ 		else
+ 			static_max >>= PAGE_SHIFT - 10;
+-		target_diff = xen_pv_domain() ? 0
++		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+ 
+diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
+index a3fdb4fe967d..daf45472bef9 100644
+--- a/fs/btrfs/check-integrity.c
++++ b/fs/btrfs/check-integrity.c
+@@ -1539,7 +1539,12 @@ static int btrfsic_map_block(struct btrfsic_state *state, u64 bytenr, u32 len,
+ 	}
+ 
+ 	device = multi->stripes[0].dev;
+-	block_ctx_out->dev = btrfsic_dev_state_lookup(device->bdev->bd_dev);
++	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state) ||
++	    !device->bdev || !device->name)
++		block_ctx_out->dev = NULL;
++	else
++		block_ctx_out->dev = btrfsic_dev_state_lookup(
++							device->bdev->bd_dev);
+ 	block_ctx_out->dev_bytenr = multi->stripes[0].physical;
+ 	block_ctx_out->start = bytenr;
+ 	block_ctx_out->len = len;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index e2ba0419297a..d20b244623f2 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -676,6 +676,12 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ 
+ 	btrfs_rm_dev_replace_unblocked(fs_info);
+ 
++	/*
++	 * Increment dev_stats_ccnt so that btrfs_run_dev_stats() will
++	 * update on-disk dev stats value during commit transaction
++	 */
++	atomic_inc(&tgt_device->dev_stats_ccnt);
++
+ 	/*
+ 	 * this is again a consistent state where no dev_replace procedure
+ 	 * is running, the target device is part of the filesystem, the
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8aab7a6c1e58..53cac20650d8 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -10687,7 +10687,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		/* Don't want to race with allocators so take the groups_sem */
+ 		down_write(&space_info->groups_sem);
+ 		spin_lock(&block_group->lock);
+-		if (block_group->reserved ||
++		if (block_group->reserved || block_group->pinned ||
+ 		    btrfs_block_group_used(&block_group->item) ||
+ 		    block_group->ro ||
+ 		    list_is_singular(&block_group->list)) {
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 879b76fa881a..be94c65bb4d2 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,18 +1321,19 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	spin_lock(&rc->reloc_root_tree.lock);
+-	rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+-			      root->node->start);
+-	if (rb_node) {
+-		node = rb_entry(rb_node, struct mapping_node, rb_node);
+-		rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++	if (rc) {
++		spin_lock(&rc->reloc_root_tree.lock);
++		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
++				      root->node->start);
++		if (rb_node) {
++			node = rb_entry(rb_node, struct mapping_node, rb_node);
++			rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++		}
++		spin_unlock(&rc->reloc_root_tree.lock);
++		if (!node)
++			return;
++		BUG_ON((struct btrfs_root *)node->data != root);
+ 	}
+-	spin_unlock(&rc->reloc_root_tree.lock);
+-
+-	if (!node)
+-		return;
+-	BUG_ON((struct btrfs_root *)node->data != root);
+ 
+ 	spin_lock(&fs_info->trans_lock);
+ 	list_del_init(&root->root_list);
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index bddfc28b27c0..9b25f29d0e73 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -892,6 +892,8 @@ static int btrfs_parse_early_options(const char *options, fmode_t flags,
+ 	char *device_name, *opts, *orig, *p;
+ 	int error = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	if (!options)
+ 		return 0;
+ 
+@@ -1526,12 +1528,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 	if (!(flags & SB_RDONLY))
+ 		mode |= FMODE_WRITE;
+ 
+-	error = btrfs_parse_early_options(data, mode, fs_type,
+-					  &fs_devices);
+-	if (error) {
+-		return ERR_PTR(error);
+-	}
+-
+ 	security_init_mnt_opts(&new_sec_opts);
+ 	if (data) {
+ 		error = parse_security_options(data, &new_sec_opts);
+@@ -1539,10 +1535,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 			return ERR_PTR(error);
+ 	}
+ 
+-	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
+-	if (error)
+-		goto error_sec_opts;
+-
+ 	/*
+ 	 * Setup a dummy root and fs_info for test/set super.  This is because
+ 	 * we don't actually fill this stuff out until open_ctree, but we need
+@@ -1555,8 +1547,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_sec_opts;
+ 	}
+ 
+-	fs_info->fs_devices = fs_devices;
+-
+ 	fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	security_init_mnt_opts(&fs_info->security_opts);
+@@ -1565,7 +1555,23 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_fs_info;
+ 	}
+ 
++	mutex_lock(&uuid_mutex);
++	error = btrfs_parse_early_options(data, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	fs_info->fs_devices = fs_devices;
++
+ 	error = btrfs_open_devices(fs_devices, mode, fs_type);
++	mutex_unlock(&uuid_mutex);
+ 	if (error)
+ 		goto error_fs_info;
+ 
+@@ -2234,15 +2240,21 @@ static long btrfs_control_ioctl(struct file *file, unsigned int cmd,
+ 
+ 	switch (cmd) {
+ 	case BTRFS_IOC_SCAN_DEV:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_DEVICES_READY:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
+-		if (ret)
++		if (ret) {
++			mutex_unlock(&uuid_mutex);
+ 			break;
++		}
+ 		ret = !(fs_devices->num_devices == fs_devices->total_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_GET_SUPPORTED_FEATURES:
+ 		ret = btrfs_ioctl_get_supported_features((void __user*)arg);
+@@ -2368,7 +2380,7 @@ static __cold void btrfs_interface_exit(void)
+ 
+ static void __init btrfs_print_mod_info(void)
+ {
+-	pr_info("Btrfs loaded, crc32c=%s"
++	static const char options[] = ""
+ #ifdef CONFIG_BTRFS_DEBUG
+ 			", debug=on"
+ #endif
+@@ -2381,8 +2393,8 @@ static void __init btrfs_print_mod_info(void)
+ #ifdef CONFIG_BTRFS_FS_REF_VERIFY
+ 			", ref-verify=on"
+ #endif
+-			"\n",
+-			crc32c_impl());
++			;
++	pr_info("Btrfs loaded, crc32c=%s%s\n", crc32c_impl(), options);
+ }
+ 
+ static int __init init_btrfs_fs(void)
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 8d40e7dd8c30..d014af352ce0 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -396,9 +396,22 @@ static int check_leaf(struct btrfs_fs_info *fs_info, struct extent_buffer *leaf,
+ 	 * skip this check for relocation trees.
+ 	 */
+ 	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
++		u64 owner = btrfs_header_owner(leaf);
+ 		struct btrfs_root *check_root;
+ 
+-		key.objectid = btrfs_header_owner(leaf);
++		/* These trees must never be empty */
++		if (owner == BTRFS_ROOT_TREE_OBJECTID ||
++		    owner == BTRFS_CHUNK_TREE_OBJECTID ||
++		    owner == BTRFS_EXTENT_TREE_OBJECTID ||
++		    owner == BTRFS_DEV_TREE_OBJECTID ||
++		    owner == BTRFS_FS_TREE_OBJECTID ||
++		    owner == BTRFS_DATA_RELOC_TREE_OBJECTID) {
++			generic_err(fs_info, leaf, 0,
++			"invalid root, root %llu must never be empty",
++				    owner);
++			return -EUCLEAN;
++		}
++		key.objectid = owner;
+ 		key.type = BTRFS_ROOT_ITEM_KEY;
+ 		key.offset = (u64)-1;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 1da162928d1a..5304b8d6ceb8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -634,44 +634,48 @@ static void pending_bios_fn(struct btrfs_work *work)
+  *		devices.
+  */
+ static void btrfs_free_stale_devices(const char *path,
+-				     struct btrfs_device *skip_dev)
++				     struct btrfs_device *skip_device)
+ {
+-	struct btrfs_fs_devices *fs_devs, *tmp_fs_devs;
+-	struct btrfs_device *dev, *tmp_dev;
++	struct btrfs_fs_devices *fs_devices, *tmp_fs_devices;
++	struct btrfs_device *device, *tmp_device;
+ 
+-	list_for_each_entry_safe(fs_devs, tmp_fs_devs, &fs_uuids, fs_list) {
+-
+-		if (fs_devs->opened)
++	list_for_each_entry_safe(fs_devices, tmp_fs_devices, &fs_uuids, fs_list) {
++		mutex_lock(&fs_devices->device_list_mutex);
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			continue;
++		}
+ 
+-		list_for_each_entry_safe(dev, tmp_dev,
+-					 &fs_devs->devices, dev_list) {
++		list_for_each_entry_safe(device, tmp_device,
++					 &fs_devices->devices, dev_list) {
+ 			int not_found = 0;
+ 
+-			if (skip_dev && skip_dev == dev)
++			if (skip_device && skip_device == device)
+ 				continue;
+-			if (path && !dev->name)
++			if (path && !device->name)
+ 				continue;
+ 
+ 			rcu_read_lock();
+ 			if (path)
+-				not_found = strcmp(rcu_str_deref(dev->name),
++				not_found = strcmp(rcu_str_deref(device->name),
+ 						   path);
+ 			rcu_read_unlock();
+ 			if (not_found)
+ 				continue;
+ 
+ 			/* delete the stale device */
+-			if (fs_devs->num_devices == 1) {
+-				btrfs_sysfs_remove_fsid(fs_devs);
+-				list_del(&fs_devs->fs_list);
+-				free_fs_devices(fs_devs);
++			fs_devices->num_devices--;
++			list_del(&device->dev_list);
++			btrfs_free_device(device);
++
++			if (fs_devices->num_devices == 0)
+ 				break;
+-			} else {
+-				fs_devs->num_devices--;
+-				list_del(&dev->dev_list);
+-				btrfs_free_device(dev);
+-			}
++		}
++		mutex_unlock(&fs_devices->device_list_mutex);
++		if (fs_devices->num_devices == 0) {
++			btrfs_sysfs_remove_fsid(fs_devices);
++			list_del(&fs_devices->fs_list);
++			free_fs_devices(fs_devices);
+ 		}
+ 	}
+ }
+@@ -750,7 +754,8 @@ error_brelse:
+  * error pointer when failed
+  */
+ static noinline struct btrfs_device *device_list_add(const char *path,
+-			   struct btrfs_super_block *disk_super)
++			   struct btrfs_super_block *disk_super,
++			   bool *new_device_added)
+ {
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *fs_devices;
+@@ -764,21 +769,26 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		if (IS_ERR(fs_devices))
+ 			return ERR_CAST(fs_devices);
+ 
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add(&fs_devices->fs_list, &fs_uuids);
+ 
+ 		device = NULL;
+ 	} else {
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		device = find_device(fs_devices, devid,
+ 				disk_super->dev_item.uuid);
+ 	}
+ 
+ 	if (!device) {
+-		if (fs_devices->opened)
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EBUSY);
++		}
+ 
+ 		device = btrfs_alloc_device(NULL, &devid,
+ 					    disk_super->dev_item.uuid);
+ 		if (IS_ERR(device)) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			/* we can safely leave the fs_devices entry around */
+ 			return device;
+ 		}
+@@ -786,17 +796,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+ 		if (!name) {
+ 			btrfs_free_device(device);
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
+ 		}
+ 		rcu_assign_pointer(device->name, name);
+ 
+-		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add_rcu(&device->dev_list, &fs_devices->devices);
+ 		fs_devices->num_devices++;
+-		mutex_unlock(&fs_devices->device_list_mutex);
+ 
+ 		device->fs_devices = fs_devices;
+-		btrfs_free_stale_devices(path, device);
++		*new_device_added = true;
+ 
+ 		if (disk_super->label[0])
+ 			pr_info("BTRFS: device label %s devid %llu transid %llu %s\n",
+@@ -840,12 +849,15 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 			 * with larger generation number or the last-in if
+ 			 * generation are equal.
+ 			 */
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EEXIST);
+ 		}
+ 
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+-		if (!name)
++		if (!name) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
++		}
+ 		rcu_string_free(device->name);
+ 		rcu_assign_pointer(device->name, name);
+ 		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) {
+@@ -865,6 +877,7 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 
+ 	fs_devices->total_devices = btrfs_super_num_devices(disk_super);
+ 
++	mutex_unlock(&fs_devices->device_list_mutex);
+ 	return device;
+ }
+ 
+@@ -1146,7 +1159,8 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&uuid_mutex);
++	lockdep_assert_held(&uuid_mutex);
++
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	if (fs_devices->opened) {
+ 		fs_devices->opened++;
+@@ -1156,7 +1170,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ 		ret = open_fs_devices(fs_devices, flags, holder);
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+-	mutex_unlock(&uuid_mutex);
+ 
+ 	return ret;
+ }
+@@ -1221,12 +1234,15 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 			  struct btrfs_fs_devices **fs_devices_ret)
+ {
+ 	struct btrfs_super_block *disk_super;
++	bool new_device_added = false;
+ 	struct btrfs_device *device;
+ 	struct block_device *bdev;
+ 	struct page *page;
+ 	int ret = 0;
+ 	u64 bytenr;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	/*
+ 	 * we would like to check all the supers, but that would make
+ 	 * a btrfs mount succeed after a mkfs from a different FS.
+@@ -1245,13 +1261,14 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 		goto error_bdev_put;
+ 	}
+ 
+-	mutex_lock(&uuid_mutex);
+-	device = device_list_add(path, disk_super);
+-	if (IS_ERR(device))
++	device = device_list_add(path, disk_super, &new_device_added);
++	if (IS_ERR(device)) {
+ 		ret = PTR_ERR(device);
+-	else
++	} else {
+ 		*fs_devices_ret = device->fs_devices;
+-	mutex_unlock(&uuid_mutex);
++		if (new_device_added)
++			btrfs_free_stale_devices(path, device);
++	}
+ 
+ 	btrfs_release_disk_super(page);
+ 
+@@ -2029,6 +2046,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	cur_devices->num_devices--;
+ 	cur_devices->total_devices--;
++	/* Update total_devices of the parent fs_devices if it's seed */
++	if (cur_devices != fs_devices)
++		fs_devices->total_devices--;
+ 
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		cur_devices->missing_devices--;
+@@ -6563,10 +6583,14 @@ static int read_one_chunk(struct btrfs_fs_info *fs_info, struct btrfs_key *key,
+ 	write_lock(&map_tree->map_tree.lock);
+ 	ret = add_extent_mapping(&map_tree->map_tree, em, 0);
+ 	write_unlock(&map_tree->map_tree.lock);
+-	BUG_ON(ret); /* Tree corruption */
++	if (ret < 0) {
++		btrfs_err(fs_info,
++			  "failed to add chunk map, start=%llu len=%llu: %d",
++			  em->start, em->len, ret);
++	}
+ 	free_extent_map(em);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void fill_device_from_item(struct extent_buffer *leaf,
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 991bfb271908..b20297988fe0 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		spin_lock(&GlobalMid_Lock);
++		GlobalMaxActiveXid = 0;
++		GlobalCurrentXid = 0;
++		spin_unlock(&GlobalMid_Lock);
+ 		spin_lock(&cifs_tcp_ses_lock);
+ 		list_for_each(tmp1, &cifs_tcp_ses_list) {
+ 			server = list_entry(tmp1, struct TCP_Server_Info,
+@@ -395,6 +399,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 							  struct cifs_tcon,
+ 							  tcon_list);
+ 					atomic_set(&tcon->num_smbs_sent, 0);
++					spin_lock(&tcon->stat_lock);
++					tcon->bytes_read = 0;
++					tcon->bytes_written = 0;
++					spin_unlock(&tcon->stat_lock);
+ 					if (server->ops->clear_stats)
+ 						server->ops->clear_stats(tcon);
+ 				}
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 5df2c0698cda..9d02563b2147 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3031,11 +3031,15 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	}
+ 
+ #ifdef CONFIG_CIFS_SMB311
+-	if ((volume_info->linux_ext) && (ses->server->posix_ext_supported)) {
+-		if (ses->server->vals->protocol_id == SMB311_PROT_ID) {
++	if (volume_info->linux_ext) {
++		if (ses->server->posix_ext_supported) {
+ 			tcon->posix_extensions = true;
+ 			printk_once(KERN_WARNING
+ 				"SMB3.11 POSIX Extensions are experimental\n");
++		} else {
++			cifs_dbg(VFS, "Server does not support mounting with posix SMB3.11 extensions.\n");
++			rc = -EOPNOTSUPP;
++			goto out_fail;
+ 		}
+ 	}
+ #endif /* 311 */
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 3ff7cec2da81..239215dcc00b 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -240,6 +240,13 @@ smb2_check_message(char *buf, unsigned int len, struct TCP_Server_Info *srvr)
+ 		if (clc_len == len + 1)
+ 			return 0;
+ 
++		/*
++		 * Some windows servers (win2016) will pad also the final
++		 * PDU in a compound to 8 bytes.
++		 */
++		if (((clc_len + 7) & ~7) == len)
++			return 0;
++
+ 		/*
+ 		 * MacOS server pads after SMB2.1 write response with 3 bytes
+ 		 * of junk. Other servers match RFC1001 len to actual
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ffce77e00a58..44e511a35559 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -360,7 +360,7 @@ smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		       total_len);
+ 
+ 	if (tcon != NULL) {
+-#ifdef CONFIG_CIFS_STATS2
++#ifdef CONFIG_CIFS_STATS
+ 		uint16_t com_code = le16_to_cpu(smb2_command);
+ 		cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_sent[com_code]);
+ #endif
+@@ -1928,7 +1928,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ {
+ 	struct smb_rqst rqst;
+ 	struct smb2_create_req *req;
+-	struct smb2_create_rsp *rsp;
++	struct smb2_create_rsp *rsp = NULL;
+ 	struct TCP_Server_Info *server;
+ 	struct cifs_ses *ses = tcon->ses;
+ 	struct kvec iov[3]; /* make sure at least one for each open context */
+@@ -1943,27 +1943,31 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	char *pc_buf = NULL;
+ 	int flags = 0;
+ 	unsigned int total_len;
+-	__le16 *path = cifs_convert_path_to_utf16(full_path, cifs_sb);
+-
+-	if (!path)
+-		return -ENOMEM;
++	__le16 *utf16_path = NULL;
+ 
+ 	cifs_dbg(FYI, "mkdir\n");
+ 
++	/* resource #1: path allocation */
++	utf16_path = cifs_convert_path_to_utf16(full_path, cifs_sb);
++	if (!utf16_path)
++		return -ENOMEM;
++
+ 	if (ses && (ses->server))
+ 		server = ses->server;
+-	else
+-		return -EIO;
++	else {
++		rc = -EIO;
++		goto err_free_path;
++	}
+ 
++	/* resource #2: request */
+ 	rc = smb2_plain_req_init(SMB2_CREATE, tcon, (void **) &req, &total_len);
+-
+ 	if (rc)
+-		return rc;
++		goto err_free_path;
++
+ 
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
+-
+ 	req->ImpersonationLevel = IL_IMPERSONATION;
+ 	req->DesiredAccess = cpu_to_le32(FILE_WRITE_ATTRIBUTES);
+ 	/* File attributes ignored on open (used in create though) */
+@@ -1992,50 +1996,44 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 		req->sync_hdr.Flags |= SMB2_FLAGS_DFS_OPERATIONS;
+ 		rc = alloc_path_with_tree_prefix(&copy_path, &copy_size,
+ 						 &name_len,
+-						 tcon->treeName, path);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			return rc;
+-		}
++						 tcon->treeName, utf16_path);
++		if (rc)
++			goto err_free_req;
++
+ 		req->NameLength = cpu_to_le16(name_len * 2);
+ 		uni_path_len = copy_size;
+-		path = copy_path;
++		/* free before overwriting resource */
++		kfree(utf16_path);
++		utf16_path = copy_path;
+ 	} else {
+-		uni_path_len = (2 * UniStrnlen((wchar_t *)path, PATH_MAX)) + 2;
++		uni_path_len = (2 * UniStrnlen((wchar_t *)utf16_path, PATH_MAX)) + 2;
+ 		/* MUST set path len (NameLength) to 0 opening root of share */
+ 		req->NameLength = cpu_to_le16(uni_path_len - 2);
+ 		if (uni_path_len % 8 != 0) {
+ 			copy_size = roundup(uni_path_len, 8);
+ 			copy_path = kzalloc(copy_size, GFP_KERNEL);
+ 			if (!copy_path) {
+-				cifs_small_buf_release(req);
+-				return -ENOMEM;
++				rc = -ENOMEM;
++				goto err_free_req;
+ 			}
+-			memcpy((char *)copy_path, (const char *)path,
++			memcpy((char *)copy_path, (const char *)utf16_path,
+ 			       uni_path_len);
+ 			uni_path_len = copy_size;
+-			path = copy_path;
++			/* free before overwriting resource */
++			kfree(utf16_path);
++			utf16_path = copy_path;
+ 		}
+ 	}
+ 
+ 	iov[1].iov_len = uni_path_len;
+-	iov[1].iov_base = path;
++	iov[1].iov_base = utf16_path;
+ 	req->RequestedOplockLevel = SMB2_OPLOCK_LEVEL_NONE;
+ 
+ 	if (tcon->posix_extensions) {
+-		if (n_iov > 2) {
+-			struct create_context *ccontext =
+-			    (struct create_context *)iov[n_iov-1].iov_base;
+-			ccontext->Next =
+-				cpu_to_le32(iov[n_iov-1].iov_len);
+-		}
+-
++		/* resource #3: posix buf */
+ 		rc = add_posix_context(iov, &n_iov, mode);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			kfree(copy_path);
+-			return rc;
+-		}
++		if (rc)
++			goto err_free_req;
+ 		pc_buf = iov[n_iov-1].iov_base;
+ 	}
+ 
+@@ -2044,32 +2042,33 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	rqst.rq_iov = iov;
+ 	rqst.rq_nvec = n_iov;
+ 
+-	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags,
+-			    &rsp_iov);
+-
+-	cifs_small_buf_release(req);
+-	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
+-
+-	if (rc != 0) {
++	/* resource #4: response buffer */
++	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags, &rsp_iov);
++	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_CREATE_HE);
+ 		trace_smb3_posix_mkdir_err(xid, tcon->tid, ses->Suid,
+-				    CREATE_NOT_FILE, FILE_WRITE_ATTRIBUTES, rc);
+-		goto smb311_mkdir_exit;
+-	} else
+-		trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
+-				     ses->Suid, CREATE_NOT_FILE,
+-				     FILE_WRITE_ATTRIBUTES);
++					   CREATE_NOT_FILE,
++					   FILE_WRITE_ATTRIBUTES, rc);
++		goto err_free_rsp_buf;
++	}
++
++	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
++	trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
++				    ses->Suid, CREATE_NOT_FILE,
++				    FILE_WRITE_ATTRIBUTES);
+ 
+ 	SMB2_close(xid, tcon, rsp->PersistentFileId, rsp->VolatileFileId);
+ 
+ 	/* Eventually save off posix specific response info and timestaps */
+ 
+-smb311_mkdir_exit:
+-	kfree(copy_path);
+-	kfree(pc_buf);
++err_free_rsp_buf:
+ 	free_rsp_buf(resp_buftype, rsp);
++	kfree(pc_buf);
++err_free_req:
++	cifs_small_buf_release(req);
++err_free_path:
++	kfree(utf16_path);
+ 	return rc;
+-
+ }
+ #endif /* SMB311 */
+ 
+diff --git a/fs/dcache.c b/fs/dcache.c
+index ceb7b491d1b9..d19a0dc46c04 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -292,7 +292,8 @@ void take_dentry_name_snapshot(struct name_snapshot *name, struct dentry *dentry
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = p->name;
+ 	} else {
+-		memcpy(name->inline_name, dentry->d_iname, DNAME_INLINE_LEN);
++		memcpy(name->inline_name, dentry->d_iname,
++		       dentry->d_name.len + 1);
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = name->inline_name;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 8f931d699287..b61954d40c25 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2149,8 +2149,12 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
+ 
+ 	if (to > i_size) {
+ 		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 		truncate_pagecache(inode, i_size);
+ 		f2fs_truncate_blocks(inode, i_size, true);
++
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 	}
+ }
+@@ -2490,6 +2494,10 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
++	/* don't remain PG_checked flag which was set during GC */
++	if (is_cold_data(page))
++		clear_cold_data(page);
++
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6880c6f78d58..3ffa341cf586 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -782,22 +782,26 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	}
+ 
+ 	if (attr->ia_valid & ATTR_SIZE) {
+-		if (attr->ia_size <= i_size_read(inode)) {
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
++		bool to_smaller = (attr->ia_size <= i_size_read(inode));
++
++		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++		truncate_setsize(inode, attr->ia_size);
++
++		if (to_smaller)
+ 			err = f2fs_truncate(inode);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
+-			if (err)
+-				return err;
+-		} else {
+-			/*
+-			 * do not trim all blocks after i_size if target size is
+-			 * larger than i_size.
+-			 */
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
++		/*
++		 * do not trim all blocks after i_size if target size is
++		 * larger than i_size.
++		 */
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 
++		if (err)
++			return err;
++
++		if (!to_smaller) {
+ 			/* should convert inline inode here */
+ 			if (!f2fs_may_inline_data(inode)) {
+ 				err = f2fs_convert_inline_inode(inode);
+@@ -944,13 +948,18 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 
+ 			blk_start = (loff_t)pg_start << PAGE_SHIFT;
+ 			blk_end = (loff_t)pg_end << PAGE_SHIFT;
++
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 			truncate_inode_pages_range(mapping, blk_start,
+ 					blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+ 			f2fs_unlock_op(sbi);
++
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			up_write(&F2FS_I(inode)->i_mmap_sem);
+ 		}
+ 	}
+@@ -1295,8 +1304,6 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 	if (ret)
+ 		goto out_sem;
+ 
+-	truncate_pagecache_range(inode, offset, offset + len - 1);
+-
+ 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+ 
+@@ -1326,12 +1333,19 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 			unsigned int end_offset;
+ 			pgoff_t end;
+ 
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++			truncate_pagecache_range(inode,
++				(loff_t)index << PAGE_SHIFT,
++				((loff_t)pg_end << PAGE_SHIFT) - 1);
++
+ 			f2fs_lock_op(sbi);
+ 
+ 			set_new_dnode(&dn, inode, NULL, NULL, 0);
+ 			ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
+ 			if (ret) {
+ 				f2fs_unlock_op(sbi);
++				up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 				goto out;
+ 			}
+ 
+@@ -1340,7 +1354,9 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 
+ 			ret = f2fs_do_zero_range(&dn, index, end);
+ 			f2fs_put_dnode(&dn);
++
+ 			f2fs_unlock_op(sbi);
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+ 			f2fs_balance_fs(sbi, dn.node_changed);
+ 
+diff --git a/fs/fat/cache.c b/fs/fat/cache.c
+index e9bed49df6b7..78d501c1fb65 100644
+--- a/fs/fat/cache.c
++++ b/fs/fat/cache.c
+@@ -225,7 +225,8 @@ static inline void cache_init(struct fat_cache_id *cid, int fclus, int dclus)
+ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ {
+ 	struct super_block *sb = inode->i_sb;
+-	const int limit = sb->s_maxbytes >> MSDOS_SB(sb)->cluster_bits;
++	struct msdos_sb_info *sbi = MSDOS_SB(sb);
++	const int limit = sb->s_maxbytes >> sbi->cluster_bits;
+ 	struct fat_entry fatent;
+ 	struct fat_cache_id cid;
+ 	int nr;
+@@ -234,6 +235,12 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 
+ 	*fclus = 0;
+ 	*dclus = MSDOS_I(inode)->i_start;
++	if (!fat_valid_entry(sbi, *dclus)) {
++		fat_fs_error_ratelimit(sb,
++			"%s: invalid start cluster (i_pos %lld, start %08x)",
++			__func__, MSDOS_I(inode)->i_pos, *dclus);
++		return -EIO;
++	}
+ 	if (cluster == 0)
+ 		return 0;
+ 
+@@ -250,9 +257,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 		/* prevent the infinite loop of cluster chain */
+ 		if (*fclus > limit) {
+ 			fat_fs_error_ratelimit(sb,
+-					"%s: detected the cluster chain loop"
+-					" (i_pos %lld)", __func__,
+-					MSDOS_I(inode)->i_pos);
++				"%s: detected the cluster chain loop (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		}
+@@ -262,9 +268,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 			goto out;
+ 		else if (nr == FAT_ENT_FREE) {
+ 			fat_fs_error_ratelimit(sb,
+-				       "%s: invalid cluster chain (i_pos %lld)",
+-				       __func__,
+-				       MSDOS_I(inode)->i_pos);
++				"%s: invalid cluster chain (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		} else if (nr == FAT_ENT_EOF) {
+diff --git a/fs/fat/fat.h b/fs/fat/fat.h
+index 8fc1093da47d..a0a00f3734bc 100644
+--- a/fs/fat/fat.h
++++ b/fs/fat/fat.h
+@@ -348,6 +348,11 @@ static inline void fatent_brelse(struct fat_entry *fatent)
+ 	fatent->fat_inode = NULL;
+ }
+ 
++static inline bool fat_valid_entry(struct msdos_sb_info *sbi, int entry)
++{
++	return FAT_START_ENT <= entry && entry < sbi->max_cluster;
++}
++
+ extern void fat_ent_access_init(struct super_block *sb);
+ extern int fat_ent_read(struct inode *inode, struct fat_entry *fatent,
+ 			int entry);
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index bac10de678cc..3aef8630a4b9 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -23,7 +23,7 @@ static void fat12_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = entry + (entry >> 1);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -33,7 +33,7 @@ static void fat_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = (entry << sbi->fatent_shift);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -353,7 +353,7 @@ int fat_ent_read(struct inode *inode, struct fat_entry *fatent, int entry)
+ 	int err, offset;
+ 	sector_t blocknr;
+ 
+-	if (entry < FAT_START_ENT || sbi->max_cluster <= entry) {
++	if (!fat_valid_entry(sbi, entry)) {
+ 		fatent_brelse(fatent);
+ 		fat_fs_error(sb, "invalid access to FAT (entry 0x%08x)", entry);
+ 		return -EIO;
+diff --git a/fs/hfs/brec.c b/fs/hfs/brec.c
+index ad04a5741016..9a8772465a90 100644
+--- a/fs/hfs/brec.c
++++ b/fs/hfs/brec.c
+@@ -75,9 +75,10 @@ int hfs_brec_insert(struct hfs_find_data *fd, void *entry, int entry_len)
+ 	if (!fd->bnode) {
+ 		if (!tree->root)
+ 			hfs_btree_inc_height(tree);
+-		fd->bnode = hfs_bnode_find(tree, tree->leaf_head);
+-		if (IS_ERR(fd->bnode))
+-			return PTR_ERR(fd->bnode);
++		node = hfs_bnode_find(tree, tree->leaf_head);
++		if (IS_ERR(node))
++			return PTR_ERR(node);
++		fd->bnode = node;
+ 		fd->record = -1;
+ 	}
+ 	new_node = NULL;
+diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c
+index b5254378f011..cd017d7dbdfa 100644
+--- a/fs/hfsplus/dir.c
++++ b/fs/hfsplus/dir.c
+@@ -78,13 +78,13 @@ again:
+ 				cpu_to_be32(HFSP_HARDLINK_TYPE) &&
+ 				entry.file.user_info.fdCreator ==
+ 				cpu_to_be32(HFSP_HFSPLUS_CREATOR) &&
++				HFSPLUS_SB(sb)->hidden_dir &&
+ 				(entry.file.create_date ==
+ 					HFSPLUS_I(HFSPLUS_SB(sb)->hidden_dir)->
+ 						create_date ||
+ 				entry.file.create_date ==
+ 					HFSPLUS_I(d_inode(sb->s_root))->
+-						create_date) &&
+-				HFSPLUS_SB(sb)->hidden_dir) {
++						create_date)) {
+ 			struct qstr str;
+ 			char name[32];
+ 
+diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
+index a6c0f54c48c3..80abba550bfa 100644
+--- a/fs/hfsplus/super.c
++++ b/fs/hfsplus/super.c
+@@ -524,8 +524,10 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto out_put_root;
+ 	if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
+ 		hfs_find_exit(&fd);
+-		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER))
++		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER)) {
++			err = -EINVAL;
+ 			goto out_put_root;
++		}
+ 		inode = hfsplus_iget(sb, be32_to_cpu(entry.folder.id));
+ 		if (IS_ERR(inode)) {
+ 			err = PTR_ERR(inode);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 464db0c0f5c8..ff98e2a3f3cc 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7734,7 +7734,7 @@ static int nfs4_sp4_select_mode(struct nfs_client *clp,
+ 	}
+ out:
+ 	clp->cl_sp4_flags = flags;
+-	return 0;
++	return ret;
+ }
+ 
+ struct nfs41_exchange_id_data {
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index e64ecb9f2720..66c373230e60 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -384,8 +384,10 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
+ 		phdr->p_flags	= PF_R|PF_W|PF_X;
+ 		phdr->p_offset	= kc_vaddr_to_offset(m->addr) + dataoff;
+ 		phdr->p_vaddr	= (size_t)m->addr;
+-		if (m->type == KCORE_RAM || m->type == KCORE_TEXT)
++		if (m->type == KCORE_RAM)
+ 			phdr->p_paddr	= __pa(m->addr);
++		else if (m->type == KCORE_TEXT)
++			phdr->p_paddr	= __pa_symbol(m->addr);
+ 		else
+ 			phdr->p_paddr	= (elf_addr_t)-1;
+ 		phdr->p_filesz	= phdr->p_memsz	= m->size;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index cfb6674331fd..0651646dd04d 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -225,6 +225,7 @@ out_unlock:
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_MMU
+ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+ 			       u64 start, size_t size)
+ {
+@@ -259,6 +260,7 @@ out_unlock:
+ 	mutex_unlock(&vmcoredd_mutex);
+ 	return ret;
+ }
++#endif /* CONFIG_MMU */
+ #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+ 
+ /* Read from the ELF header and then the crash dump. On error, negative value is
+diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h
+index ae4811fecc1f..6d670bd9ab6b 100644
+--- a/fs/reiserfs/reiserfs.h
++++ b/fs/reiserfs/reiserfs.h
+@@ -271,7 +271,7 @@ struct reiserfs_journal_list {
+ 
+ 	struct mutex j_commit_mutex;
+ 	unsigned int j_trans_id;
+-	time_t j_timestamp;
++	time64_t j_timestamp; /* write-only but useful for crash dump analysis */
+ 	struct reiserfs_list_bitmap *j_list_bitmap;
+ 	struct buffer_head *j_commit_bh;	/* commit buffer head */
+ 	struct reiserfs_journal_cnode *j_realblock;
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 29502238e510..bf85e152af05 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3082,4 +3082,6 @@
+ 
+ #define PCI_VENDOR_ID_OCZ		0x1b85
+ 
++#define PCI_VENDOR_ID_NCUBE		0x10ff
++
+ #endif /* _LINUX_PCI_IDS_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index cd3ecda9386a..106e01c721e6 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2023,6 +2023,10 @@ int tcp_set_ulp_id(struct sock *sk, const int ulp);
+ void tcp_get_available_ulp(char *buf, size_t len);
+ void tcp_cleanup_ulp(struct sock *sk);
+ 
++#define MODULE_ALIAS_TCP_ULP(name)				\
++	__MODULE_INFO(alias, alias_userspace, name);		\
++	__MODULE_INFO(alias, alias_tcp_ulp, "tcp-ulp-" name)
++
+ /* Call BPF_SOCK_OPS program that returns an int. If the return value
+  * is < 0, then the BPF op failed (for example if the loaded BPF
+  * program does not support the chosen operation or there is no BPF
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 7b8c9e19bad1..910cc4334b21 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 private;
++	__s32 dh_private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 76efe9a183f5..fc5b103512e7 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -196,19 +196,21 @@ static void *map_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+ 	struct bpf_map *map = seq_file_to_map(m);
+ 	void *key = map_iter(m)->key;
++	void *prev_key;
+ 
+ 	if (map_iter(m)->done)
+ 		return NULL;
+ 
+ 	if (unlikely(v == SEQ_START_TOKEN))
+-		goto done;
++		prev_key = NULL;
++	else
++		prev_key = key;
+ 
+-	if (map->ops->map_get_next_key(map, key, key)) {
++	if (map->ops->map_get_next_key(map, prev_key, key)) {
+ 		map_iter(m)->done = true;
+ 		return NULL;
+ 	}
+ 
+-done:
+ 	++(*pos);
+ 	return key;
+ }
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index c4d75c52b4fc..58899601fccf 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -58,6 +58,7 @@ struct bpf_stab {
+ 	struct bpf_map map;
+ 	struct sock **sock_map;
+ 	struct bpf_sock_progs progs;
++	raw_spinlock_t lock;
+ };
+ 
+ struct bucket {
+@@ -89,9 +90,9 @@ enum smap_psock_state {
+ 
+ struct smap_psock_map_entry {
+ 	struct list_head list;
++	struct bpf_map *map;
+ 	struct sock **entry;
+ 	struct htab_elem __rcu *hash_link;
+-	struct bpf_htab __rcu *htab;
+ };
+ 
+ struct smap_psock {
+@@ -343,13 +344,18 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	e = psock_map_pop(sk, psock);
+ 	while (e) {
+ 		if (e->entry) {
+-			osk = cmpxchg(e->entry, sk, NULL);
++			struct bpf_stab *stab = container_of(e->map, struct bpf_stab, map);
++
++			raw_spin_lock_bh(&stab->lock);
++			osk = *e->entry;
+ 			if (osk == sk) {
++				*e->entry = NULL;
+ 				smap_release_sock(psock, sk);
+ 			}
++			raw_spin_unlock_bh(&stab->lock);
+ 		} else {
+ 			struct htab_elem *link = rcu_dereference(e->hash_link);
+-			struct bpf_htab *htab = rcu_dereference(e->htab);
++			struct bpf_htab *htab = container_of(e->map, struct bpf_htab, map);
+ 			struct hlist_head *head;
+ 			struct htab_elem *l;
+ 			struct bucket *b;
+@@ -370,6 +376,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			}
+ 			raw_spin_unlock_bh(&b->lock);
+ 		}
++		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
+ 	rcu_read_unlock();
+@@ -1644,6 +1651,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	bpf_map_init_from_attr(&stab->map, attr);
++	raw_spin_lock_init(&stab->lock);
+ 
+ 	/* make sure page count doesn't overflow */
+ 	cost = (u64) stab->map.max_entries * sizeof(struct sock *);
+@@ -1678,8 +1686,10 @@ static void smap_list_map_remove(struct smap_psock *psock,
+ 
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+-		if (e->entry == entry)
++		if (e->entry == entry) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1693,8 +1703,10 @@ static void smap_list_hash_remove(struct smap_psock *psock,
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+ 		struct htab_elem *c = rcu_dereference(e->hash_link);
+ 
+-		if (c == hash_link)
++		if (c == hash_link) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1714,14 +1726,15 @@ static void sock_map_free(struct bpf_map *map)
+ 	 * and a grace period expire to ensure psock is really safe to remove.
+ 	 */
+ 	rcu_read_lock();
++	raw_spin_lock_bh(&stab->lock);
+ 	for (i = 0; i < stab->map.max_entries; i++) {
+ 		struct smap_psock *psock;
+ 		struct sock *sock;
+ 
+-		sock = xchg(&stab->sock_map[i], NULL);
++		sock = stab->sock_map[i];
+ 		if (!sock)
+ 			continue;
+-
++		stab->sock_map[i] = NULL;
+ 		psock = smap_psock_sk(sock);
+ 		/* This check handles a racing sock event that can get the
+ 		 * sk_callback_lock before this case but after xchg happens
+@@ -1733,6 +1746,7 @@ static void sock_map_free(struct bpf_map *map)
+ 			smap_release_sock(psock, sock);
+ 		}
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
+ 	rcu_read_unlock();
+ 
+ 	sock_map_remove_complete(stab);
+@@ -1776,19 +1790,23 @@ static int sock_map_delete_elem(struct bpf_map *map, void *key)
+ 	if (k >= map->max_entries)
+ 		return -EINVAL;
+ 
+-	sock = xchg(&stab->sock_map[k], NULL);
++	raw_spin_lock_bh(&stab->lock);
++	sock = stab->sock_map[k];
++	stab->sock_map[k] = NULL;
++	raw_spin_unlock_bh(&stab->lock);
+ 	if (!sock)
+ 		return -EINVAL;
+ 
+ 	psock = smap_psock_sk(sock);
+ 	if (!psock)
+-		goto out;
+-
+-	if (psock->bpf_parse)
++		return 0;
++	if (psock->bpf_parse) {
++		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
++		write_unlock_bh(&sock->sk_callback_lock);
++	}
+ 	smap_list_map_remove(psock, &stab->sock_map[k]);
+ 	smap_release_sock(psock, sock);
+-out:
+ 	return 0;
+ }
+ 
+@@ -1824,11 +1842,9 @@ out:
+ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 				      struct bpf_sock_progs *progs,
+ 				      struct sock *sock,
+-				      struct sock **map_link,
+ 				      void *key)
+ {
+ 	struct bpf_prog *verdict, *parse, *tx_msg;
+-	struct smap_psock_map_entry *e = NULL;
+ 	struct smap_psock *psock;
+ 	bool new = false;
+ 	int err = 0;
+@@ -1901,14 +1917,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		new = true;
+ 	}
+ 
+-	if (map_link) {
+-		e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
+-		if (!e) {
+-			err = -ENOMEM;
+-			goto out_free;
+-		}
+-	}
+-
+ 	/* 3. At this point we have a reference to a valid psock that is
+ 	 * running. Attach any BPF programs needed.
+ 	 */
+@@ -1930,17 +1938,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		write_unlock_bh(&sock->sk_callback_lock);
+ 	}
+ 
+-	/* 4. Place psock in sockmap for use and stop any programs on
+-	 * the old sock assuming its not the same sock we are replacing
+-	 * it with. Because we can only have a single set of programs if
+-	 * old_sock has a strp we can stop it.
+-	 */
+-	if (map_link) {
+-		e->entry = map_link;
+-		spin_lock_bh(&psock->maps_lock);
+-		list_add_tail(&e->list, &psock->maps);
+-		spin_unlock_bh(&psock->maps_lock);
+-	}
+ 	return err;
+ out_free:
+ 	smap_release_sock(psock, sock);
+@@ -1951,7 +1948,6 @@ out_progs:
+ 	}
+ 	if (tx_msg)
+ 		bpf_prog_put(tx_msg);
+-	kfree(e);
+ 	return err;
+ }
+ 
+@@ -1961,36 +1957,57 @@ static int sock_map_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ {
+ 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+ 	struct bpf_sock_progs *progs = &stab->progs;
+-	struct sock *osock, *sock;
++	struct sock *osock, *sock = skops->sk;
++	struct smap_psock_map_entry *e;
++	struct smap_psock *psock;
+ 	u32 i = *(u32 *)key;
+ 	int err;
+ 
+ 	if (unlikely(flags > BPF_EXIST))
+ 		return -EINVAL;
+-
+ 	if (unlikely(i >= stab->map.max_entries))
+ 		return -E2BIG;
+ 
+-	sock = READ_ONCE(stab->sock_map[i]);
+-	if (flags == BPF_EXIST && !sock)
+-		return -ENOENT;
+-	else if (flags == BPF_NOEXIST && sock)
+-		return -EEXIST;
++	e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
++	if (!e)
++		return -ENOMEM;
+ 
+-	sock = skops->sk;
+-	err = __sock_map_ctx_update_elem(map, progs, sock, &stab->sock_map[i],
+-					 key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto out;
+ 
+-	osock = xchg(&stab->sock_map[i], sock);
+-	if (osock) {
+-		struct smap_psock *opsock = smap_psock_sk(osock);
++	/* psock guaranteed to be present. */
++	psock = smap_psock_sk(sock);
++	raw_spin_lock_bh(&stab->lock);
++	osock = stab->sock_map[i];
++	if (osock && flags == BPF_NOEXIST) {
++		err = -EEXIST;
++		goto out_unlock;
++	}
++	if (!osock && flags == BPF_EXIST) {
++		err = -ENOENT;
++		goto out_unlock;
++	}
+ 
+-		smap_list_map_remove(opsock, &stab->sock_map[i]);
+-		smap_release_sock(opsock, osock);
++	e->entry = &stab->sock_map[i];
++	e->map = map;
++	spin_lock_bh(&psock->maps_lock);
++	list_add_tail(&e->list, &psock->maps);
++	spin_unlock_bh(&psock->maps_lock);
++
++	stab->sock_map[i] = sock;
++	if (osock) {
++		psock = smap_psock_sk(osock);
++		smap_list_map_remove(psock, &stab->sock_map[i]);
++		smap_release_sock(psock, osock);
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
++	return 0;
++out_unlock:
++	smap_release_sock(psock, sock);
++	raw_spin_unlock_bh(&stab->lock);
+ out:
++	kfree(e);
+ 	return err;
+ }
+ 
+@@ -2353,7 +2370,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	b = __select_bucket(htab, hash);
+ 	head = &b->head;
+ 
+-	err = __sock_map_ctx_update_elem(map, progs, sock, NULL, key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto err;
+ 
+@@ -2379,8 +2396,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	}
+ 
+ 	rcu_assign_pointer(e->hash_link, l_new);
+-	rcu_assign_pointer(e->htab,
+-			   container_of(map, struct bpf_htab, map));
++	e->map = map;
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_add_tail(&e->list, &psock->maps);
+ 	spin_unlock_bh(&psock->maps_lock);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1b27babc4c78..8ed48ca2cc43 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -549,8 +549,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ 			goto out;
+ 	}
+ 	/* a new mm has just been created */
+-	arch_dup_mmap(oldmm, mm);
+-	retval = 0;
++	retval = arch_dup_mmap(oldmm, mm);
+ out:
+ 	up_write(&mm->mmap_sem);
+ 	flush_tlb_mm(oldmm);
+@@ -1417,7 +1416,9 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
+ 		return -ENOMEM;
+ 
+ 	atomic_set(&sig->count, 1);
++	spin_lock_irq(&current->sighand->siglock);
+ 	memcpy(sig->action, current->sighand->action, sizeof(sig->action));
++	spin_unlock_irq(&current->sighand->siglock);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 5f78c6e41796..0280deac392e 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2652,6 +2652,9 @@ void flush_workqueue(struct workqueue_struct *wq)
+ 	if (WARN_ON(!wq_online))
+ 		return;
+ 
++	lock_map_acquire(&wq->lockdep_map);
++	lock_map_release(&wq->lockdep_map);
++
+ 	mutex_lock(&wq->mutex);
+ 
+ 	/*
+@@ -2843,7 +2846,8 @@ reflush:
+ }
+ EXPORT_SYMBOL_GPL(drain_workqueue);
+ 
+-static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
++static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
++			     bool from_cancel)
+ {
+ 	struct worker *worker = NULL;
+ 	struct worker_pool *pool;
+@@ -2885,7 +2889,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
+ 	 * workqueues the deadlock happens when the rescuer stalls, blocking
+ 	 * forward progress.
+ 	 */
+-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
++	if (!from_cancel &&
++	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
+ 		lock_map_acquire(&pwq->wq->lockdep_map);
+ 		lock_map_release(&pwq->wq->lockdep_map);
+ 	}
+@@ -2896,6 +2901,27 @@ already_gone:
+ 	return false;
+ }
+ 
++static bool __flush_work(struct work_struct *work, bool from_cancel)
++{
++	struct wq_barrier barr;
++
++	if (WARN_ON(!wq_online))
++		return false;
++
++	if (!from_cancel) {
++		lock_map_acquire(&work->lockdep_map);
++		lock_map_release(&work->lockdep_map);
++	}
++
++	if (start_flush_work(work, &barr, from_cancel)) {
++		wait_for_completion(&barr.done);
++		destroy_work_on_stack(&barr.work);
++		return true;
++	} else {
++		return false;
++	}
++}
++
+ /**
+  * flush_work - wait for a work to finish executing the last queueing instance
+  * @work: the work to flush
+@@ -2909,18 +2935,7 @@ already_gone:
+  */
+ bool flush_work(struct work_struct *work)
+ {
+-	struct wq_barrier barr;
+-
+-	if (WARN_ON(!wq_online))
+-		return false;
+-
+-	if (start_flush_work(work, &barr)) {
+-		wait_for_completion(&barr.done);
+-		destroy_work_on_stack(&barr.work);
+-		return true;
+-	} else {
+-		return false;
+-	}
++	return __flush_work(work, false);
+ }
+ EXPORT_SYMBOL_GPL(flush_work);
+ 
+@@ -2986,7 +3001,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
+ 	 * isn't executing.
+ 	 */
+ 	if (wq_online)
+-		flush_work(work);
++		__flush_work(work, true);
+ 
+ 	clear_work_data(work);
+ 
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 994be4805cec..24c1df0d7466 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -360,9 +360,12 @@ static void debug_object_is_on_stack(void *addr, int onstack)
+ 
+ 	limit++;
+ 	if (is_on_stack)
+-		pr_warn("object is on stack, but not annotated\n");
++		pr_warn("object %p is on stack %p, but NOT annotated.\n", addr,
++			 task_stack_page(current));
+ 	else
+-		pr_warn("object is not on stack, but annotated\n");
++		pr_warn("object %p is NOT on stack %p, but annotated.\n", addr,
++			 task_stack_page(current));
++
+ 	WARN_ON(1);
+ }
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index ce95491abd6a..94af022b7f3d 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -635,7 +635,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	bool "Defer initialisation of struct pages to kthreads"
+ 	default n
+ 	depends on NO_BOOTMEM
+-	depends on !FLATMEM
++	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+diff --git a/mm/fadvise.c b/mm/fadvise.c
+index afa41491d324..2d8376e3c640 100644
+--- a/mm/fadvise.c
++++ b/mm/fadvise.c
+@@ -72,8 +72,12 @@ int ksys_fadvise64_64(int fd, loff_t offset, loff_t len, int advice)
+ 		goto out;
+ 	}
+ 
+-	/* Careful about overflows. Len == 0 means "as much as possible" */
+-	endbyte = offset + len;
++	/*
++	 * Careful about overflows. Len == 0 means "as much as possible".  Use
++	 * unsigned math because signed overflows are undefined and UBSan
++	 * complains.
++	 */
++	endbyte = (u64)offset + (u64)len;
+ 	if (!len || endbyte < len)
+ 		endbyte = -1;
+ 	else
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index ef456395645a..7fb60dd4be79 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -199,15 +199,14 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ static void p9_conn_cancel(struct p9_conn *m, int err)
+ {
+ 	struct p9_req_t *req, *rtmp;
+-	unsigned long flags;
+ 	LIST_HEAD(cancel_list);
+ 
+ 	p9_debug(P9_DEBUG_ERROR, "mux %p err %d\n", m, err);
+ 
+-	spin_lock_irqsave(&m->client->lock, flags);
++	spin_lock(&m->client->lock);
+ 
+ 	if (m->err) {
+-		spin_unlock_irqrestore(&m->client->lock, flags);
++		spin_unlock(&m->client->lock);
+ 		return;
+ 	}
+ 
+@@ -219,7 +218,6 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 	list_for_each_entry_safe(req, rtmp, &m->unsent_req_list, req_list) {
+ 		list_move(&req->req_list, &cancel_list);
+ 	}
+-	spin_unlock_irqrestore(&m->client->lock, flags);
+ 
+ 	list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
+ 		p9_debug(P9_DEBUG_ERROR, "call back req %p\n", req);
+@@ -228,6 +226,7 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 			req->t_err = err;
+ 		p9_client_cb(m->client, req, REQ_STATUS_ERROR);
+ 	}
++	spin_unlock(&m->client->lock);
+ }
+ 
+ static __poll_t
+@@ -375,8 +374,9 @@ static void p9_read_work(struct work_struct *work)
+ 		if (m->req->status != REQ_STATUS_ERROR)
+ 			status = REQ_STATUS_RCVD;
+ 		list_del(&m->req->req_list);
+-		spin_unlock(&m->client->lock);
++		/* update req->status while holding client->lock  */
+ 		p9_client_cb(m->client, m->req, status);
++		spin_unlock(&m->client->lock);
+ 		m->rc.sdata = NULL;
+ 		m->rc.offset = 0;
+ 		m->rc.capacity = 0;
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 4c2da2513c8b..2dc1c293092b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -571,7 +571,7 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 	chan->vq = virtio_find_single_vq(vdev, req_done, "requests");
+ 	if (IS_ERR(chan->vq)) {
+ 		err = PTR_ERR(chan->vq);
+-		goto out_free_vq;
++		goto out_free_chan;
+ 	}
+ 	chan->vq->vdev->priv = chan;
+ 	spin_lock_init(&chan->lock);
+@@ -624,6 +624,7 @@ out_free_tag:
+ 	kfree(tag);
+ out_free_vq:
+ 	vdev->config->del_vqs(vdev);
++out_free_chan:
+ 	kfree(chan);
+ fail:
+ 	return err;
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index 6771f1855b96..2657056130a4 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -95,23 +95,15 @@ static void __xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
+ {
+ 	struct xdp_mem_allocator *xa;
+ 	int id = xdp_rxq->mem.id;
+-	int err;
+ 
+ 	if (id == 0)
+ 		return;
+ 
+ 	mutex_lock(&mem_id_lock);
+ 
+-	xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
+-	if (!xa) {
+-		mutex_unlock(&mem_id_lock);
+-		return;
+-	}
+-
+-	err = rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params);
+-	WARN_ON(err);
+-
+-	call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
++	xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
++	if (xa && !rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
++		call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
+ 
+ 	mutex_unlock(&mem_id_lock);
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 2d8efeecf619..055f4bbba86b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -1511,11 +1511,14 @@ nla_put_failure:
+ 
+ static void erspan_setup(struct net_device *dev)
+ {
++	struct ip_tunnel *t = netdev_priv(dev);
++
+ 	ether_setup(dev);
+ 	dev->netdev_ops = &erspan_netdev_ops;
+ 	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ 	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 	ip_tunnel_setup(dev, erspan_net_id);
++	t->erspan_ver = 1;
+ }
+ 
+ static const struct nla_policy ipgre_policy[IFLA_GRE_MAX + 1] = {
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 3b2711e33e4c..488b201851d7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2516,6 +2516,12 @@ static int __net_init tcp_sk_init(struct net *net)
+ 		if (res)
+ 			goto fail;
+ 		sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
++
++		/* Please enforce IP_DF and IPID==0 for RST and
++		 * ACK sent in SYN-RECV and TIME-WAIT state.
++		 */
++		inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO;
++
+ 		*per_cpu_ptr(net->ipv4.tcp_sk, cpu) = sk;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 1dda1341a223..b690132f5da2 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -184,8 +184,9 @@ kill:
+ 				inet_twsk_deschedule_put(tw);
+ 				return TCP_TW_SUCCESS;
+ 			}
++		} else {
++			inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 		}
+-		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+ 			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index 622caa4039e0..a5995bb2eaca 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -51,7 +51,7 @@ static const struct tcp_ulp_ops *__tcp_ulp_find_autoload(const char *name)
+ #ifdef CONFIG_MODULES
+ 	if (!ulp && capable(CAP_NET_ADMIN)) {
+ 		rcu_read_unlock();
+-		request_module("%s", name);
++		request_module("tcp-ulp-%s", name);
+ 		rcu_read_lock();
+ 		ulp = tcp_ulp_find(name);
+ 	}
+@@ -129,6 +129,8 @@ void tcp_cleanup_ulp(struct sock *sk)
+ 	if (icsk->icsk_ulp_ops->release)
+ 		icsk->icsk_ulp_ops->release(sk);
+ 	module_put(icsk->icsk_ulp_ops->owner);
++
++	icsk->icsk_ulp_ops = NULL;
+ }
+ 
+ /* Change upper layer protocol for socket */
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index d212738e9d10..5516f55e214b 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -198,6 +198,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 		}
+ 	}
+ 
++	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
++
+ 	if (f6i->fib6_nh.nh_dev)
+ 		dev_put(f6i->fib6_nh.nh_dev);
+ 
+@@ -987,7 +989,10 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 					fib6_clean_expires(iter);
+ 				else
+ 					fib6_set_expires(iter, rt->expires);
+-				fib6_metric_set(iter, RTAX_MTU, rt->fib6_pmtu);
++
++				if (rt->fib6_pmtu)
++					fib6_metric_set(iter, RTAX_MTU,
++							rt->fib6_pmtu);
+ 				return -EEXIST;
+ 			}
+ 			/* If we have the same destination and the same metric,
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index cd2cfb04e5d8..7ec997fcbc43 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1776,6 +1776,7 @@ static void ip6gre_netlink_parms(struct nlattr *data[],
+ 	if (data[IFLA_GRE_COLLECT_METADATA])
+ 		parms->collect_md = true;
+ 
++	parms->erspan_ver = 1;
+ 	if (data[IFLA_GRE_ERSPAN_VER])
+ 		parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]);
+ 
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index c72ae3a4fe09..c31a7c4a9249 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -481,7 +481,7 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 	}
+ 
+ 	mtu = dst_mtu(dst);
+-	if (!skb->ignore_df && skb->len > mtu) {
++	if (skb->len > mtu) {
+ 		skb_dst_update_pmtu(skb, mtu);
+ 
+ 		if (skb->protocol == htons(ETH_P_IPV6)) {
+@@ -1102,7 +1102,8 @@ static void __net_exit vti6_destroy_tunnels(struct vti6_net *ip6n,
+ 	}
+ 
+ 	t = rtnl_dereference(ip6n->tnls_wc[0]);
+-	unregister_netdevice_queue(t->dev, list);
++	if (t)
++		unregister_netdevice_queue(t->dev, list);
+ }
+ 
+ static int __net_init vti6_init_net(struct net *net)
+@@ -1114,6 +1115,8 @@ static int __net_init vti6_init_net(struct net *net)
+ 	ip6n->tnls[0] = ip6n->tnls_wc;
+ 	ip6n->tnls[1] = ip6n->tnls_r_l;
+ 
++	if (!net_has_fallback_tunnels(net))
++		return 0;
+ 	err = -ENOMEM;
+ 	ip6n->fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6_vti0",
+ 					NET_NAME_UNKNOWN, vti6_dev_setup);
+diff --git a/net/ipv6/netfilter/ip6t_rpfilter.c b/net/ipv6/netfilter/ip6t_rpfilter.c
+index 0fe61ede77c6..c3c6b09acdc4 100644
+--- a/net/ipv6/netfilter/ip6t_rpfilter.c
++++ b/net/ipv6/netfilter/ip6t_rpfilter.c
+@@ -26,6 +26,12 @@ static bool rpfilter_addr_unicast(const struct in6_addr *addr)
+ 	return addr_type & IPV6_ADDR_UNICAST;
+ }
+ 
++static bool rpfilter_addr_linklocal(const struct in6_addr *addr)
++{
++	int addr_type = ipv6_addr_type(addr);
++	return addr_type & IPV6_ADDR_LINKLOCAL;
++}
++
+ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 				     const struct net_device *dev, u8 flags)
+ {
+@@ -48,7 +54,11 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 	}
+ 
+ 	fl6.flowi6_mark = flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
+-	if ((flags & XT_RPFILTER_LOOSE) == 0)
++
++	if (rpfilter_addr_linklocal(&iph->saddr)) {
++		lookup_flags |= RT6_LOOKUP_F_IFACE;
++		fl6.flowi6_oif = dev->ifindex;
++	} else if ((flags & XT_RPFILTER_LOOSE) == 0)
+ 		fl6.flowi6_oif = dev->ifindex;
+ 
+ 	rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 7208c16302f6..18e00ce1719a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -956,7 +956,7 @@ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->dst.error = 0;
+ 	rt->dst.output = ip6_output;
+ 
+-	if (ort->fib6_type == RTN_LOCAL) {
++	if (ort->fib6_type == RTN_LOCAL || ort->fib6_type == RTN_ANYCAST) {
+ 		rt->dst.input = ip6_input;
+ 	} else if (ipv6_addr_type(&ort->fib6_dst.addr) & IPV6_ADDR_MULTICAST) {
+ 		rt->dst.input = ip6_mc_input;
+@@ -996,7 +996,6 @@ static void ip6_rt_copy_init(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->rt6i_src = ort->fib6_src;
+ #endif
+ 	rt->rt6i_prefsrc = ort->fib6_prefsrc;
+-	rt->dst.lwtstate = lwtstate_get(ort->fib6_nh.nh_lwtstate);
+ }
+ 
+ static struct fib6_node* fib6_backtrack(struct fib6_node *fn,
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 0679dd101e72..7ca926a03b81 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -1972,13 +1972,20 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ 	if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
+ 		/* the destination server is not available */
+ 
+-		if (sysctl_expire_nodest_conn(ipvs)) {
++		__u32 flags = cp->flags;
++
++		/* when timer already started, silently drop the packet.*/
++		if (timer_pending(&cp->timer))
++			__ip_vs_conn_put(cp);
++		else
++			ip_vs_conn_put(cp);
++
++		if (sysctl_expire_nodest_conn(ipvs) &&
++		    !(flags & IP_VS_CONN_F_ONE_PACKET)) {
+ 			/* try to expire the connection immediately */
+ 			ip_vs_conn_expire_now(cp);
+ 		}
+-		/* don't restart its timer, and silently
+-		   drop the packet. */
+-		__ip_vs_conn_put(cp);
++
+ 		return NF_DROP;
+ 	}
+ 
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 20a2e37c76d1..e952eedf44b4 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -821,6 +821,21 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[])
+ #endif
+ }
+ 
++static int ctnetlink_start(struct netlink_callback *cb)
++{
++	const struct nlattr * const *cda = cb->data;
++	struct ctnetlink_filter *filter = NULL;
++
++	if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
++		filter = ctnetlink_alloc_filter(cda);
++		if (IS_ERR(filter))
++			return PTR_ERR(filter);
++	}
++
++	cb->data = filter;
++	return 0;
++}
++
+ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ {
+ 	struct ctnetlink_filter *filter = data;
+@@ -1240,19 +1255,12 @@ static int ctnetlink_get_conntrack(struct net *net, struct sock *ctnl,
+ 
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = ctnetlink_start,
+ 			.dump = ctnetlink_dump_table,
+ 			.done = ctnetlink_done,
++			.data = (void *)cda,
+ 		};
+ 
+-		if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
+-			struct ctnetlink_filter *filter;
+-
+-			filter = ctnetlink_alloc_filter(cda);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(ctnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/nfnetlink_acct.c b/net/netfilter/nfnetlink_acct.c
+index a0e5adf0b3b6..8fa8bf7c48e6 100644
+--- a/net/netfilter/nfnetlink_acct.c
++++ b/net/netfilter/nfnetlink_acct.c
+@@ -238,29 +238,33 @@ static const struct nla_policy filter_policy[NFACCT_FILTER_MAX + 1] = {
+ 	[NFACCT_FILTER_VALUE]	= { .type = NLA_U32 },
+ };
+ 
+-static struct nfacct_filter *
+-nfacct_filter_alloc(const struct nlattr * const attr)
++static int nfnl_acct_start(struct netlink_callback *cb)
+ {
+-	struct nfacct_filter *filter;
++	const struct nlattr *const attr = cb->data;
+ 	struct nlattr *tb[NFACCT_FILTER_MAX + 1];
++	struct nfacct_filter *filter;
+ 	int err;
+ 
++	if (!attr)
++		return 0;
++
+ 	err = nla_parse_nested(tb, NFACCT_FILTER_MAX, attr, filter_policy,
+ 			       NULL);
+ 	if (err < 0)
+-		return ERR_PTR(err);
++		return err;
+ 
+ 	if (!tb[NFACCT_FILTER_MASK] || !tb[NFACCT_FILTER_VALUE])
+-		return ERR_PTR(-EINVAL);
++		return -EINVAL;
+ 
+ 	filter = kzalloc(sizeof(struct nfacct_filter), GFP_KERNEL);
+ 	if (!filter)
+-		return ERR_PTR(-ENOMEM);
++		return -ENOMEM;
+ 
+ 	filter->mask = ntohl(nla_get_be32(tb[NFACCT_FILTER_MASK]));
+ 	filter->value = ntohl(nla_get_be32(tb[NFACCT_FILTER_VALUE]));
++	cb->data = filter;
+ 
+-	return filter;
++	return 0;
+ }
+ 
+ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+@@ -275,18 +279,11 @@ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
+ 			.dump = nfnl_acct_dump,
++			.start = nfnl_acct_start,
+ 			.done = nfnl_acct_done,
++			.data = (void *)tb[NFACCT_FILTER],
+ 		};
+ 
+-		if (tb[NFACCT_FILTER]) {
+-			struct nfacct_filter *filter;
+-
+-			filter = nfacct_filter_alloc(tb[NFACCT_FILTER]);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(nfnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index d0d8397c9588..aecadd471e1d 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -1178,12 +1178,7 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
+ 	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
+ 		return NULL;
+ 
+-	/* __GFP_NORETRY is not fully supported by kvmalloc but it should
+-	 * work reasonably well if sz is too large and bail out rather
+-	 * than shoot all processes down before realizing there is nothing
+-	 * more to reclaim.
+-	 */
+-	info = kvmalloc(sz, GFP_KERNEL | __GFP_NORETRY);
++	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
+ 	if (!info)
+ 		return NULL;
+ 
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index d152e48ea371..8596eed6d9a8 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -61,6 +61,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev,
+ 			 pool->fmr_attr.max_pages);
+ 	if (IS_ERR(frmr->mr)) {
+ 		pr_warn("RDS/IB: %s failed to allocate MR", __func__);
++		err = PTR_ERR(frmr->mr);
+ 		goto out_no_cigar;
+ 	}
+ 
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index 20d7d36b2fc9..005cb21348c9 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -265,10 +265,8 @@ static const char *ife_meta_id2name(u32 metaid)
+ #endif
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+-				void *val, int len, bool exists)
++static int load_metaops_and_vet(u32 metaid, void *val, int len)
+ {
+ 	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+@@ -276,13 +274,9 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ 	if (!ops) {
+ 		ret = -ENOENT;
+ #ifdef CONFIG_MODULES
+-		if (exists)
+-			spin_unlock_bh(&ife->tcf_lock);
+ 		rtnl_unlock();
+ 		request_module("ife-meta-%s", ife_meta_id2name(metaid));
+ 		rtnl_lock();
+-		if (exists)
+-			spin_lock_bh(&ife->tcf_lock);
+ 		ops = find_ife_oplist(metaid);
+ #endif
+ 	}
+@@ -299,24 +293,17 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ }
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+-			int len, bool atomic)
++static int __add_metainfo(const struct tcf_meta_ops *ops,
++			  struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			  int len, bool atomic, bool exists)
+ {
+ 	struct tcf_meta_info *mi = NULL;
+-	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+ 
+-	if (!ops)
+-		return -ENOENT;
+-
+ 	mi = kzalloc(sizeof(*mi), atomic ? GFP_ATOMIC : GFP_KERNEL);
+-	if (!mi) {
+-		/*put back what find_ife_oplist took */
+-		module_put(ops->owner);
++	if (!mi)
+ 		return -ENOMEM;
+-	}
+ 
+ 	mi->metaid = metaid;
+ 	mi->ops = ops;
+@@ -324,17 +311,49 @@ static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+ 		ret = ops->alloc(mi, metaval, atomic ? GFP_ATOMIC : GFP_KERNEL);
+ 		if (ret != 0) {
+ 			kfree(mi);
+-			module_put(ops->owner);
+ 			return ret;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	list_add_tail(&mi->metalist, &ife->metalist);
++	if (exists)
++		spin_unlock_bh(&ife->tcf_lock);
+ 
+ 	return ret;
+ }
+ 
+-static int use_all_metadata(struct tcf_ife_info *ife)
++static int add_metainfo_and_get_ops(const struct tcf_meta_ops *ops,
++				    struct tcf_ife_info *ife, u32 metaid,
++				    bool exists)
++{
++	int ret;
++
++	if (!try_module_get(ops->owner))
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, NULL, 0, true, exists);
++	if (ret)
++		module_put(ops->owner);
++	return ret;
++}
++
++static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			int len, bool exists)
++{
++	const struct tcf_meta_ops *ops = find_ife_oplist(metaid);
++	int ret;
++
++	if (!ops)
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, metaval, len, false, exists);
++	if (ret)
++		/*put back what find_ife_oplist took */
++		module_put(ops->owner);
++	return ret;
++}
++
++static int use_all_metadata(struct tcf_ife_info *ife, bool exists)
+ {
+ 	struct tcf_meta_ops *o;
+ 	int rc = 0;
+@@ -342,7 +361,7 @@ static int use_all_metadata(struct tcf_ife_info *ife)
+ 
+ 	read_lock(&ife_mod_lock);
+ 	list_for_each_entry(o, &ifeoplist, list) {
+-		rc = add_metainfo(ife, o->metaid, NULL, 0, true);
++		rc = add_metainfo_and_get_ops(o, ife, o->metaid, exists);
+ 		if (rc == 0)
+ 			installed += 1;
+ 	}
+@@ -393,7 +412,6 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 	struct tcf_meta_info *e, *n;
+ 
+ 	list_for_each_entry_safe(e, n, &ife->metalist, metalist) {
+-		module_put(e->ops->owner);
+ 		list_del(&e->metalist);
+ 		if (e->metaval) {
+ 			if (e->ops->release)
+@@ -401,6 +419,7 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 			else
+ 				kfree(e->metaval);
+ 		}
++		module_put(e->ops->owner);
+ 		kfree(e);
+ 	}
+ }
+@@ -419,7 +438,6 @@ static void tcf_ife_cleanup(struct tc_action *a)
+ 		kfree_rcu(p, rcu);
+ }
+ 
+-/* under ife->tcf_lock for existing action */
+ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			     bool exists)
+ {
+@@ -433,7 +451,7 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			val = nla_data(tb[i]);
+ 			len = nla_len(tb[i]);
+ 
+-			rc = load_metaops_and_vet(ife, i, val, len, exists);
++			rc = load_metaops_and_vet(i, val, len);
+ 			if (rc != 0)
+ 				return rc;
+ 
+@@ -531,8 +549,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ 		p->eth_type = ife_type;
+ 	}
+ 
+-	if (exists)
+-		spin_lock_bh(&ife->tcf_lock);
+ 
+ 	if (ret == ACT_P_CREATED)
+ 		INIT_LIST_HEAD(&ife->metalist);
+@@ -544,9 +560,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ metadata_parse_err:
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+@@ -561,18 +574,17 @@ metadata_parse_err:
+ 		 * as we can. You better have at least one else we are
+ 		 * going to bail out
+ 		 */
+-		err = use_all_metadata(ife);
++		err = use_all_metadata(ife, exists);
+ 		if (err) {
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	ife->tcf_action = parm->action;
+ 	if (exists)
+ 		spin_unlock_bh(&ife->tcf_lock);
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 8a925c72db5f..bad475c87688 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -109,16 +109,18 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ {
+ 	struct nlattr *keys_start = nla_nest_start(skb, TCA_PEDIT_KEYS_EX);
+ 
++	if (!keys_start)
++		goto nla_failure;
+ 	for (; n > 0; n--) {
+ 		struct nlattr *key_start;
+ 
+ 		key_start = nla_nest_start(skb, TCA_PEDIT_KEY_EX);
++		if (!key_start)
++			goto nla_failure;
+ 
+ 		if (nla_put_u16(skb, TCA_PEDIT_KEY_EX_HTYPE, keys_ex->htype) ||
+-		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd)) {
+-			nlmsg_trim(skb, keys_start);
+-			return -EINVAL;
+-		}
++		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd))
++			goto nla_failure;
+ 
+ 		nla_nest_end(skb, key_start);
+ 
+@@ -128,6 +130,9 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ 	nla_nest_end(skb, keys_start);
+ 
+ 	return 0;
++nla_failure:
++	nla_nest_cancel(skb, keys_start);
++	return -EINVAL;
+ }
+ 
+ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+@@ -395,7 +400,10 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
+ 	opt->bindcnt = p->tcf_bindcnt - bind;
+ 
+ 	if (p->tcfp_keys_ex) {
+-		tcf_pedit_key_ex_dump(skb, p->tcfp_keys_ex, p->tcfp_nkeys);
++		if (tcf_pedit_key_ex_dump(skb,
++					  p->tcfp_keys_ex,
++					  p->tcfp_nkeys))
++			goto nla_put_failure;
+ 
+ 		if (nla_put(skb, TCA_PEDIT_PARMS_EX, s, opt))
+ 			goto nla_put_failure;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index fb861f90fde6..260749956ef3 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -912,6 +912,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	struct nlattr *opt = tca[TCA_OPTIONS];
+ 	struct nlattr *tb[TCA_U32_MAX + 1];
+ 	u32 htid, flags = 0;
++	size_t sel_size;
+ 	int err;
+ #ifdef CONFIG_CLS_U32_PERF
+ 	size_t size;
+@@ -1074,8 +1075,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ 
+ 	s = nla_data(tb[TCA_U32_SEL]);
++	sel_size = struct_size(s, keys, s->nkeys);
++	if (nla_len(tb[TCA_U32_SEL]) < sel_size) {
++		err = -EINVAL;
++		goto erridr;
++	}
+ 
+-	n = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key), GFP_KERNEL);
++	n = kzalloc(offsetof(typeof(*n), sel) + sel_size, GFP_KERNEL);
+ 	if (n == NULL) {
+ 		err = -ENOBUFS;
+ 		goto erridr;
+@@ -1090,7 +1096,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ #endif
+ 
+-	memcpy(&n->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key));
++	memcpy(&n->sel, s, sel_size);
+ 	RCU_INIT_POINTER(n->ht_up, ht);
+ 	n->handle = handle;
+ 	n->fshift = s->hmask ? ffs(ntohl(s->hmask)) - 1 : 0;
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index ef5c9a82d4e8..a644292f9faf 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -215,7 +215,6 @@ static const struct seq_operations sctp_eps_ops = {
+ struct sctp_ht_iter {
+ 	struct seq_net_private p;
+ 	struct rhashtable_iter hti;
+-	int start_fail;
+ };
+ 
+ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+@@ -224,7 +223,6 @@ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+ 
+ 	sctp_transport_walk_start(&iter->hti);
+ 
+-	iter->start_fail = 0;
+ 	return sctp_transport_get_idx(seq_file_net(seq), &iter->hti, *pos);
+ }
+ 
+@@ -232,8 +230,6 @@ static void sctp_transport_seq_stop(struct seq_file *seq, void *v)
+ {
+ 	struct sctp_ht_iter *iter = seq->private;
+ 
+-	if (iter->start_fail)
+-		return;
+ 	sctp_transport_walk_stop(&iter->hti);
+ }
+ 
+@@ -264,8 +260,6 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 	epb = &assoc->base;
+ 	sk = epb->sk;
+@@ -322,8 +316,6 @@ static int sctp_remaddr_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 
+ 	list_for_each_entry_rcu(tsp, &assoc->peer.transport_addr_list,
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index ce620e878538..50ee07cd20c4 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4881,9 +4881,14 @@ struct sctp_transport *sctp_transport_get_next(struct net *net,
+ 			break;
+ 		}
+ 
++		if (!sctp_transport_hold(t))
++			continue;
++
+ 		if (net_eq(sock_net(t->asoc->base.sk), net) &&
+ 		    t->asoc->peer.primary_path == t)
+ 			break;
++
++		sctp_transport_put(t);
+ 	}
+ 
+ 	return t;
+@@ -4893,13 +4898,18 @@ struct sctp_transport *sctp_transport_get_idx(struct net *net,
+ 					      struct rhashtable_iter *iter,
+ 					      int pos)
+ {
+-	void *obj = SEQ_START_TOKEN;
++	struct sctp_transport *t;
+ 
+-	while (pos && (obj = sctp_transport_get_next(net, iter)) &&
+-	       !IS_ERR(obj))
+-		pos--;
++	if (!pos)
++		return SEQ_START_TOKEN;
+ 
+-	return obj;
++	while ((t = sctp_transport_get_next(net, iter)) && !IS_ERR(t)) {
++		if (!--pos)
++			break;
++		sctp_transport_put(t);
++	}
++
++	return t;
+ }
+ 
+ int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *),
+@@ -4958,8 +4968,6 @@ again:
+ 
+ 	tsp = sctp_transport_get_idx(net, &hti, *pos + 1);
+ 	for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {
+-		if (!sctp_transport_hold(tsp))
+-			continue;
+ 		ret = cb(tsp, p);
+ 		if (ret)
+ 			break;
+diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+index 8654494b4d0a..834eb2b9e41b 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
++++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+@@ -169,7 +169,7 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 	struct scatterlist              sg[1];
+ 	int err = -1;
+ 	u8 *checksumdata;
+-	u8 rc4salt[4];
++	u8 *rc4salt;
+ 	struct crypto_ahash *md5;
+ 	struct crypto_ahash *hmac_md5;
+ 	struct ahash_request *req;
+@@ -183,14 +183,18 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 		return GSS_S_FAILURE;
+ 	}
+ 
++	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
++	if (!rc4salt)
++		return GSS_S_FAILURE;
++
+ 	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
+ 		dprintk("%s: invalid usage value %u\n", __func__, usage);
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 	}
+ 
+ 	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
+ 	if (!checksumdata)
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 
+ 	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+ 	if (IS_ERR(md5))
+@@ -258,6 +262,8 @@ out_free_md5:
+ 	crypto_free_ahash(md5);
+ out_free_cksum:
+ 	kfree(checksumdata);
++out_free_rc4salt:
++	kfree(rc4salt);
+ 	return err ? GSS_S_FAILURE : 0;
+ }
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index bebe88cae07b..ff968c7afef6 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -980,20 +980,17 @@ int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	list_for_each_entry(dst, l, list) {
+-		if (dst->value != value)
+-			continue;
+-		return dst;
++		if (dst->node == node && dst->port == port)
++			return dst;
+ 	}
+ 	return NULL;
+ }
+ 
+ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	if (tipc_dest_find(l, node, port))
+@@ -1002,7 +999,8 @@ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ 	dst = kmalloc(sizeof(*dst), GFP_ATOMIC);
+ 	if (unlikely(!dst))
+ 		return false;
+-	dst->value = value;
++	dst->node = node;
++	dst->port = port;
+ 	list_add(&dst->list, l);
+ 	return true;
+ }
+diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
+index 0febba41da86..892bd750b85f 100644
+--- a/net/tipc/name_table.h
++++ b/net/tipc/name_table.h
+@@ -133,13 +133,8 @@ void tipc_nametbl_stop(struct net *net);
+ 
+ struct tipc_dest {
+ 	struct list_head list;
+-	union {
+-		struct {
+-			u32 port;
+-			u32 node;
+-		};
+-		u64 value;
+-	};
++	u32 port;
++	u32 node;
+ };
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 930852c54d7a..0a5fa347135e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2675,6 +2675,8 @@ void tipc_sk_reinit(struct net *net)
+ 
+ 		rhashtable_walk_stop(&iter);
+ 	} while (tsk == ERR_PTR(-EAGAIN));
++
++	rhashtable_walk_exit(&iter);
+ }
+ 
+ static struct tipc_sock *tipc_sk_lookup(struct net *net, u32 portid)
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 301f22430469..45188d920013 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -45,6 +45,7 @@
+ MODULE_AUTHOR("Mellanox Technologies");
+ MODULE_DESCRIPTION("Transport Layer Security Support");
+ MODULE_LICENSE("Dual BSD/GPL");
++MODULE_ALIAS_TCP_ULP("tls");
+ 
+ enum {
+ 	TLSV4,
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 4b4d78fffe30..da9070889223 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -679,8 +679,9 @@ int main(int argc, char **argv)
+ 		return EXIT_FAIL_OPTION;
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd[prog_num], xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
+index e4e9ba52bff0..bb278447299c 100644
+--- a/samples/bpf/xdp_rxq_info_user.c
++++ b/samples/bpf/xdp_rxq_info_user.c
+@@ -534,8 +534,9 @@ int main(int argc, char **argv)
+ 		exit(EXIT_FAIL_BPF);
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/scripts/coccicheck b/scripts/coccicheck
+index 9fedca611b7f..e04d328210ac 100755
+--- a/scripts/coccicheck
++++ b/scripts/coccicheck
+@@ -128,9 +128,10 @@ run_cmd_parmap() {
+ 	fi
+ 	echo $@ >>$DEBUG_FILE
+ 	$@ 2>>$DEBUG_FILE
+-	if [[ $? -ne 0 ]]; then
++	err=$?
++	if [[ $err -ne 0 ]]; then
+ 		echo "coccicheck failed"
+-		exit $?
++		exit $err
+ 	fi
+ }
+ 
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 999d585eaa73..e5f0aad75b96 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -15,9 +15,9 @@ if ! test -r System.map ; then
+ fi
+ 
+ if [ -z $(command -v $DEPMOD) ]; then
+-	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "Warning: 'make modules_install' requires $DEPMOD. Please install it." >&2
+ 	echo "This is probably in the kmod package." >&2
+-	exit 1
++	exit 0
+ fi
+ 
+ # older versions of depmod require the version string to start with three
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 1663fb19343a..b95cf57782a3 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -672,7 +672,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ 			if (ELF_ST_TYPE(sym->st_info) == STT_SPARC_REGISTER)
+ 				break;
+ 			if (symname[0] == '.') {
+-				char *munged = strdup(symname);
++				char *munged = NOFAIL(strdup(symname));
+ 				munged[0] = '_';
+ 				munged[1] = toupper(munged[1]);
+ 				symname = munged;
+@@ -1318,7 +1318,7 @@ static Elf_Sym *find_elf_symbol2(struct elf_info *elf, Elf_Addr addr,
+ static char *sec2annotation(const char *s)
+ {
+ 	if (match(s, init_exit_sections)) {
+-		char *p = malloc(20);
++		char *p = NOFAIL(malloc(20));
+ 		char *r = p;
+ 
+ 		*p++ = '_';
+@@ -1338,7 +1338,7 @@ static char *sec2annotation(const char *s)
+ 			strcat(p, " ");
+ 		return r;
+ 	} else {
+-		return strdup("");
++		return NOFAIL(strdup(""));
+ 	}
+ }
+ 
+@@ -2036,7 +2036,7 @@ void buf_write(struct buffer *buf, const char *s, int len)
+ {
+ 	if (buf->size - buf->pos < len) {
+ 		buf->size += len + SZ;
+-		buf->p = realloc(buf->p, buf->size);
++		buf->p = NOFAIL(realloc(buf->p, buf->size));
+ 	}
+ 	strncpy(buf->p + buf->pos, s, len);
+ 	buf->pos += len;
+diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
+index b0f9dc3f765a..1a7cec5d9cac 100644
+--- a/security/apparmor/policy_ns.c
++++ b/security/apparmor/policy_ns.c
+@@ -255,7 +255,7 @@ static struct aa_ns *__aa_create_ns(struct aa_ns *parent, const char *name,
+ 
+ 	ns = alloc_ns(parent->base.hname, name);
+ 	if (!ns)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	ns->level = parent->level + 1;
+ 	mutex_lock_nested(&ns->lock, ns->level);
+ 	error = __aafs_ns_mkdir(ns, ns_subns_dir(parent), name, dir);
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index b203f7758f97..1a68d27e72b4 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 79d3709b0671..0b66d7283b00 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -1365,13 +1365,18 @@ static int sel_make_bools(struct selinux_fs_info *fsi)
+ 
+ 		ret = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG | S_IRUGO | S_IWUSR);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		ret = -ENAMETOOLONG;
+ 		len = snprintf(page, PAGE_SIZE, "/%s/%s", BOOL_DIR_NAME, names[i]);
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE) {
++			dput(dentry);
++			iput(inode);
+ 			goto out;
++		}
+ 
+ 		isec = (struct inode_security_struct *)inode->i_security;
+ 		ret = security_genfs_sid(fsi->state, "selinuxfs", page,
+@@ -1586,8 +1591,10 @@ static int sel_make_avc_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|files[i].mode);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = files[i].ops;
+ 		inode->i_ino = ++fsi->last_ino;
+@@ -1632,8 +1639,10 @@ static int sel_make_initcon_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_initcon_ops;
+ 		inode->i_ino = i|SEL_INITCON_INO_OFFSET;
+@@ -1733,8 +1742,10 @@ static int sel_make_perm_files(char *objclass, int classvalue,
+ 
+ 		rc = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		inode->i_fop = &sel_perm_ops;
+ 		/* i+1 since perm values are 1-indexed */
+@@ -1763,8 +1774,10 @@ static int sel_make_class_dir_entries(char *classname, int index,
+ 		return -ENOMEM;
+ 
+ 	inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		return -ENOMEM;
++	}
+ 
+ 	inode->i_fop = &sel_class_ops;
+ 	inode->i_ino = sel_class_to_ino(index);
+@@ -1838,8 +1851,10 @@ static int sel_make_policycap(struct selinux_fs_info *fsi)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(fsi->sb, S_IFREG | 0444);
+-		if (inode == NULL)
++		if (inode == NULL) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_policycap_ops;
+ 		inode->i_ino = iter | SEL_POLICYCAP_INO_OFFSET;
+@@ -1932,8 +1947,10 @@ static int sel_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	ret = -ENOMEM;
+ 	inode = sel_make_inode(sb, S_IFCHR | S_IRUGO | S_IWUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		goto err;
++	}
+ 
+ 	inode->i_ino = ++fsi->last_ino;
+ 	isec = (struct inode_security_struct *)inode->i_security;
+diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
+index 8a0181a2db08..47feef30dadb 100644
+--- a/sound/soc/codecs/rt5677.c
++++ b/sound/soc/codecs/rt5677.c
+@@ -5007,7 +5007,7 @@ static const struct regmap_config rt5677_regmap = {
+ };
+ 
+ static const struct of_device_id rt5677_of_match[] = {
+-	{ .compatible = "realtek,rt5677", RT5677 },
++	{ .compatible = "realtek,rt5677", .data = (const void *)RT5677 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, rt5677_of_match);
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 7fdfdf3f6e67..14f1b0c0d286 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -2432,6 +2432,7 @@ static int wm8994_set_dai_sysclk(struct snd_soc_dai *dai,
+ 			snd_soc_component_update_bits(component, WM8994_POWER_MANAGEMENT_2,
+ 					    WM8994_OPCLK_ENA, 0);
+ 		}
++		break;
+ 
+ 	default:
+ 		return -EINVAL;
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index 1120e39c1b00..5ccfce87e693 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -194,6 +194,7 @@ struct auxtrace_record *arm_spe_recording_init(int *err,
+ 	sper->itr.read_finish = arm_spe_read_finish;
+ 	sper->itr.alignment = 0;
+ 
++	*err = 0;
+ 	return &sper->itr;
+ }
+ 
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 53d83d7e6a09..20e7d74d86cd 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -141,8 +141,10 @@ void arch__post_process_probe_trace_events(struct perf_probe_event *pev,
+ 	for (i = 0; i < ntevs; i++) {
+ 		tev = &pev->tevs[i];
+ 		map__for_each_symbol(map, sym, tmp) {
+-			if (map->unmap_ip(map, sym->start) == tev->point.address)
++			if (map->unmap_ip(map, sym->start) == tev->point.address) {
+ 				arch__fix_tev_from_maps(pev, tev, map, sym);
++				break;
++			}
+ 		}
+ 	}
+ }
+diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
+index 5be021701f34..cf8bd123cf73 100644
+--- a/tools/perf/util/namespaces.c
++++ b/tools/perf/util/namespaces.c
+@@ -139,6 +139,9 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
+ {
+ 	struct nsinfo *nnsi;
+ 
++	if (nsi == NULL)
++		return NULL;
++
+ 	nnsi = calloc(1, sizeof(*nnsi));
+ 	if (nnsi != NULL) {
+ 		nnsi->pid = nsi->pid;
+diff --git a/tools/testing/selftests/powerpc/harness.c b/tools/testing/selftests/powerpc/harness.c
+index 66d31de60b9a..9d7166dfad1e 100644
+--- a/tools/testing/selftests/powerpc/harness.c
++++ b/tools/testing/selftests/powerpc/harness.c
+@@ -85,13 +85,13 @@ wait:
+ 	return status;
+ }
+ 
+-static void alarm_handler(int signum)
++static void sig_handler(int signum)
+ {
+-	/* Jut wake us up from waitpid */
++	/* Just wake us up from waitpid */
+ }
+ 
+-static struct sigaction alarm_action = {
+-	.sa_handler = alarm_handler,
++static struct sigaction sig_action = {
++	.sa_handler = sig_handler,
+ };
+ 
+ void test_harness_set_timeout(uint64_t time)
+@@ -106,8 +106,14 @@ int test_harness(int (test_function)(void), char *name)
+ 	test_start(name);
+ 	test_set_git_version(GIT_VERSION);
+ 
+-	if (sigaction(SIGALRM, &alarm_action, NULL)) {
+-		perror("sigaction");
++	if (sigaction(SIGINT, &sig_action, NULL)) {
++		perror("sigaction (sigint)");
++		test_error(name);
++		return 1;
++	}
++
++	if (sigaction(SIGALRM, &sig_action, NULL)) {
++		perror("sigaction (sigalrm)");
+ 		test_error(name);
+ 		return 1;
+ 	}


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
From: Mike Pagano @ 2018-11-14 13:15 UTC
  To: gentoo-commits

commit:     0fd54b316d2fd526c783449dd6f1a99119ea1bab
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 13 16:32:27 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0fd54b31

Linux patch 4.18.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-4.18.14.patch | 1692 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1696 insertions(+)

diff --git a/0000_README b/0000_README
index f5bb594..6d1cb28 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-4.18.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.13
 
+Patch:  1013_linux-4.18.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-4.18.14.patch b/1013_linux-4.18.14.patch
new file mode 100644
index 0000000..742cbc9
--- /dev/null
+++ b/1013_linux-4.18.14.patch
@@ -0,0 +1,1692 @@
+diff --git a/Makefile b/Makefile
+index 4442e9ea4b6d..5274f8ae6b44 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 4674541eba3f..8ce6e7235915 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -241,6 +241,26 @@ int copy_thread(unsigned long clone_flags,
+ 		task_thread_info(current)->thr_ptr;
+ 	}
+ 
++
++	/*
++	 * setup usermode thread pointer #1:
++	 * when child is picked by scheduler, __switch_to() uses @c_callee to
++	 * populate usermode callee regs: this works (despite being in a kernel
++	 * function) since special return path for child @ret_from_fork()
++	 * ensures those regs are not clobbered all the way to RTIE to usermode
++	 */
++	c_callee->r25 = task_thread_info(p)->thr_ptr;
++
++#ifdef CONFIG_ARC_CURR_IN_REG
++	/*
++	 * setup usermode thread pointer #2:
++	 * however for this special use of r25 in kernel, __switch_to() sets
++	 * r25 for kernel needs and only in the final return path is usermode
++	 * r25 setup, from pt_regs->user_r25. So set that up as well
++	 */
++	c_regs->user_r25 = c_callee->r25;
++#endif
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
+index 8721fd004291..e1a1518a1ec7 100644
+--- a/arch/powerpc/include/asm/setup.h
++++ b/arch/powerpc/include/asm/setup.h
+@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
+ 
+ extern unsigned int rtas_data;
+ extern unsigned long long memory_limit;
++extern bool init_mem_is_free;
+ extern unsigned long klimit;
+ extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
+ 
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index e0d881ab304e..30cbcadb54d5 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -142,7 +142,7 @@ static inline int unmap_patch_area(unsigned long addr)
+ 	return 0;
+ }
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	int err;
+ 	unsigned int *patch_addr = NULL;
+@@ -182,12 +182,22 @@ out:
+ }
+ #else /* !CONFIG_STRICT_KERNEL_RWX */
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	return raw_patch_instruction(addr, instr);
+ }
+ 
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
++
++int patch_instruction(unsigned int *addr, unsigned int instr)
++{
++	/* Make sure we aren't patching a freed init section */
++	if (init_mem_is_free && init_section_contains(addr, 4)) {
++		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
++		return 0;
++	}
++	return do_patch_instruction(addr, instr);
++}
+ NOKPROBE_SYMBOL(patch_instruction);
+ 
+ int patch_branch(unsigned int *addr, unsigned long target, int flags)
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 5c8530d0c611..04ccb274a620 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -63,6 +63,7 @@
+ #endif
+ 
+ unsigned long long memory_limit;
++bool init_mem_is_free;
+ 
+ #ifdef CONFIG_HIGHMEM
+ pte_t *kmap_pte;
+@@ -396,6 +397,7 @@ void free_initmem(void)
+ {
+ 	ppc_md.progress = ppc_printk_progress;
+ 	mark_initmem_nx();
++	init_mem_is_free = true;
+ 	free_initmem_default(POISON_FREE_INITMEM);
+ }
+ 
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 9589878faf46..eb1ed9a7109d 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,7 +72,13 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  CFL += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
+ 
+ $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+@@ -144,7 +150,13 @@ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+-KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
++
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
+index f19856d95c60..e48ca3afa091 100644
+--- a/arch/x86/entry/vdso/vclock_gettime.c
++++ b/arch/x86/entry/vdso/vclock_gettime.c
+@@ -43,8 +43,9 @@ extern u8 hvclock_page
+ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*ts) :
++	     "0" (__NR_clock_gettime), "D" (clock), "S" (ts) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -52,8 +53,9 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*tv), "=m" (*tz) :
++	     "0" (__NR_gettimeofday), "D" (tv), "S" (tz) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -64,13 +66,13 @@ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[clock], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_clock_gettime), "g" (clock), "c" (ts)
++		: "=a" (ret), "=m" (*ts)
++		: "0" (__NR_clock_gettime), [clock] "g" (clock), "c" (ts)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+@@ -79,13 +81,13 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[tv], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_gettimeofday), "g" (tv), "c" (tz)
++		: "=a" (ret), "=m" (*tv), "=m" (*tz)
++		: "0" (__NR_gettimeofday), [tv] "g" (tv), "c" (tz)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 97d41754769e..d02f0390c1c1 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -232,6 +232,17 @@ static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
+  */
+ static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
+ 
++/*
++ * In some cases, we need to preserve the GFN of a non-present or reserved
++ * SPTE when we usurp the upper five bits of the physical address space to
++ * defend against L1TF, e.g. for MMIO SPTEs.  To preserve the GFN, we'll
++ * shift bits of the GFN that overlap with shadow_nonpresent_or_rsvd_mask
++ * left into the reserved bits, i.e. the GFN in the SPTE will be split into
++ * high and low parts.  This mask covers the lower bits of the GFN.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
++
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -338,9 +349,7 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
+-		   shadow_nonpresent_or_rsvd_mask;
+-	u64 gpa = spte & ~mask;
++	u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
+ 
+ 	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
+ 	       & shadow_nonpresent_or_rsvd_mask;
+@@ -404,6 +413,8 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+ static void kvm_mmu_reset_all_pte_masks(void)
+ {
++	u8 low_phys_bits;
++
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+ 	shadow_dirty_mask = 0;
+@@ -418,12 +429,17 @@ static void kvm_mmu_reset_all_pte_masks(void)
+ 	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
+ 	 * assumed that the CPU is not vulnerable to L1TF.
+ 	 */
++	low_phys_bits = boot_cpu_data.x86_phys_bits;
+ 	if (boot_cpu_data.x86_phys_bits <
+-	    52 - shadow_nonpresent_or_rsvd_mask_len)
++	    52 - shadow_nonpresent_or_rsvd_mask_len) {
+ 		shadow_nonpresent_or_rsvd_mask =
+ 			rsvd_bits(boot_cpu_data.x86_phys_bits -
+ 				  shadow_nonpresent_or_rsvd_mask_len,
+ 				  boot_cpu_data.x86_phys_bits - 1);
++		low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
++	}
++	shadow_nonpresent_or_rsvd_lower_gfn_mask =
++		GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index d0c3be353bb6..32721ef9652d 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -9826,15 +9826,16 @@ static void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+ 	if (!lapic_in_kernel(vcpu))
+ 		return;
+ 
++	if (!flexpriority_enabled &&
++	    !cpu_has_vmx_virtualize_x2apic_mode())
++		return;
++
+ 	/* Postpone execution until vmcs01 is the current VMCS. */
+ 	if (is_guest_mode(vcpu)) {
+ 		to_vmx(vcpu)->nested.change_vmcs01_virtual_apic_mode = true;
+ 		return;
+ 	}
+ 
+-	if (!cpu_need_tpr_shadow(vcpu))
+-		return;
+-
+ 	sec_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+ 	sec_exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+ 			      SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 2f9e14361673..90e8058ae557 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1596,7 +1596,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 		BUG_ON(!rq->q);
+ 		if (rq->mq_ctx != this_ctx) {
+ 			if (this_ctx) {
+-				trace_block_unplug(this_q, depth, from_schedule);
++				trace_block_unplug(this_q, depth, !from_schedule);
+ 				blk_mq_sched_insert_requests(this_q, this_ctx,
+ 								&ctx_list,
+ 								from_schedule);
+@@ -1616,7 +1616,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	 * on 'ctx_list'. Do those.
+ 	 */
+ 	if (this_ctx) {
+-		trace_block_unplug(this_q, depth, from_schedule);
++		trace_block_unplug(this_q, depth, !from_schedule);
+ 		blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list,
+ 						from_schedule);
+ 	}
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 3f68e2919dc5..a690fd400260 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1713,8 +1713,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 
+ 	dpm_wait_for_subordinate(dev, async);
+ 
+-	if (async_error)
++	if (async_error) {
++		dev->power.direct_complete = false;
+ 		goto Complete;
++	}
+ 
+ 	/*
+ 	 * If a device configured to wake up the system from sleep states
+@@ -1726,6 +1728,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 		pm_wakeup_event(dev, 0);
+ 
+ 	if (pm_wakeup_pending()) {
++		dev->power.direct_complete = false;
+ 		async_error = -EBUSY;
+ 		goto Complete;
+ 	}
+diff --git a/drivers/clocksource/timer-atmel-pit.c b/drivers/clocksource/timer-atmel-pit.c
+index ec8a4376f74f..2fab18fae4fc 100644
+--- a/drivers/clocksource/timer-atmel-pit.c
++++ b/drivers/clocksource/timer-atmel-pit.c
+@@ -180,26 +180,29 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	data->base = of_iomap(node, 0);
+ 	if (!data->base) {
+ 		pr_err("Could not map PIT address\n");
+-		return -ENXIO;
++		ret = -ENXIO;
++		goto exit;
+ 	}
+ 
+ 	data->mck = of_clk_get(node, 0);
+ 	if (IS_ERR(data->mck)) {
+ 		pr_err("Unable to get mck clk\n");
+-		return PTR_ERR(data->mck);
++		ret = PTR_ERR(data->mck);
++		goto exit;
+ 	}
+ 
+ 	ret = clk_prepare_enable(data->mck);
+ 	if (ret) {
+ 		pr_err("Unable to enable mck\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Get the interrupts property */
+ 	data->irq = irq_of_parse_and_map(node, 0);
+ 	if (!data->irq) {
+ 		pr_err("Unable to get IRQ from DT\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	/*
+@@ -227,7 +230,7 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	ret = clocksource_register_hz(&data->clksrc, pit_rate);
+ 	if (ret) {
+ 		pr_err("Failed to register clocksource\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Set up irq handler */
+@@ -236,7 +239,8 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 			  "at91_tick", data);
+ 	if (ret) {
+ 		pr_err("Unable to setup IRQ\n");
+-		return ret;
++		clocksource_unregister(&data->clksrc);
++		goto exit;
+ 	}
+ 
+ 	/* Set up and register clockevents */
+@@ -254,6 +258,10 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	clockevents_register_device(&data->clkevt);
+ 
+ 	return 0;
++
++exit:
++	kfree(data);
++	return ret;
+ }
+ TIMER_OF_DECLARE(at91sam926x_pit, "atmel,at91sam9260-pit",
+ 		       at91sam926x_pit_dt_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 23d960ec1cf2..acad2999560c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -246,6 +246,8 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ {
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vce.idle_work);
++
+ 	if (adev->vce.vcpu_bo == NULL)
+ 		return 0;
+ 
+@@ -256,7 +258,6 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ 	if (i == AMDGPU_MAX_VCE_HANDLES)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vce.idle_work);
+ 	/* TODO: suspending running encoding sessions isn't supported */
+ 	return -EINVAL;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index bee49991c1ff..2dc3d1e28f3c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -151,11 +151,11 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
+ 	unsigned size;
+ 	void *ptr;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if (adev->vcn.vcpu_bo == NULL)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vcn.idle_work);
+-
+ 	size = amdgpu_bo_size(adev->vcn.vcpu_bo);
+ 	ptr = adev->vcn.cpu_addr;
+ 
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index d638c0fb3418..23a5643a4b98 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -566,14 +566,14 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	lessee_priv->is_master = 1;
+ 	lessee_priv->authenticated = 1;
+ 
+-	/* Hook up the fd */
+-	fd_install(fd, lessee_file);
+-
+ 	/* Pass fd back to userspace */
+ 	DRM_DEBUG_LEASE("Returning fd %d id %d\n", fd, lessee->lessee_id);
+ 	cl->fd = fd;
+ 	cl->lessee_id = lessee->lessee_id;
+ 
++	/* Hook up the fd */
++	fd_install(fd, lessee_file);
++
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index d4f4ce484529..8e71da403324 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -97,6 +97,8 @@ static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
+ {
+ 	int ret;
+ 
++	WARN_ON(*fence);
++
+ 	*fence = drm_syncobj_fence_get(syncobj);
+ 	if (*fence)
+ 		return 1;
+@@ -744,6 +746,9 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 
+ 	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
+ 		for (i = 0; i < count; ++i) {
++			if (entries[i].fence)
++				continue;
++
+ 			drm_syncobj_fence_get_or_add_callback(syncobjs[i],
+ 							      &entries[i].fence,
+ 							      &entries[i].syncobj_cb,
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 5f437d1570fb..21863ddde63e 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -1759,6 +1759,8 @@ static int ucma_close(struct inode *inode, struct file *filp)
+ 		mutex_lock(&mut);
+ 		if (!ctx->closing) {
+ 			mutex_unlock(&mut);
++			ucma_put_ctx(ctx);
++			wait_for_completion(&ctx->comp);
+ 			/* rdma_destroy_id ensures that no event handlers are
+ 			 * inflight for that id before releasing it.
+ 			 */
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 69dddeab124c..5936de71883f 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -1455,8 +1455,8 @@ static int __load_mappings(struct dm_cache_metadata *cmd,
+ 		if (hints_valid) {
+ 			r = dm_array_cursor_next(&cmd->hint_cursor);
+ 			if (r) {
+-				DMERR("dm_array_cursor_next for hint failed");
+-				goto out;
++				dm_array_cursor_end(&cmd->hint_cursor);
++				hints_valid = false;
+ 			}
+ 		}
+ 
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 44df244807e5..a39ae8f45e32 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3017,8 +3017,13 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache)
+ 
+ static bool can_resize(struct cache *cache, dm_cblock_t new_size)
+ {
+-	if (from_cblock(new_size) > from_cblock(cache->cache_size))
+-		return true;
++	if (from_cblock(new_size) > from_cblock(cache->cache_size)) {
++		if (cache->sized) {
++			DMERR("%s: unable to extend cache due to missing cache table reload",
++			      cache_device_name(cache));
++			return false;
++		}
++	}
+ 
+ 	/*
+ 	 * We can't drop a dirty block when shrinking the cache.
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index d94ba6f72ff5..419362c2d8ac 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -806,19 +806,19 @@ static int parse_path_selector(struct dm_arg_set *as, struct priority_group *pg,
+ }
+ 
+ static int setup_scsi_dh(struct block_device *bdev, struct multipath *m,
+-			 const char *attached_handler_name, char **error)
++			 const char **attached_handler_name, char **error)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	int r;
+ 
+ 	if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags)) {
+ retain:
+-		if (attached_handler_name) {
++		if (*attached_handler_name) {
+ 			/*
+ 			 * Clear any hw_handler_params associated with a
+ 			 * handler that isn't already attached.
+ 			 */
+-			if (m->hw_handler_name && strcmp(attached_handler_name, m->hw_handler_name)) {
++			if (m->hw_handler_name && strcmp(*attached_handler_name, m->hw_handler_name)) {
+ 				kfree(m->hw_handler_params);
+ 				m->hw_handler_params = NULL;
+ 			}
+@@ -830,7 +830,8 @@ retain:
+ 			 * handler instead of the original table passed in.
+ 			 */
+ 			kfree(m->hw_handler_name);
+-			m->hw_handler_name = attached_handler_name;
++			m->hw_handler_name = *attached_handler_name;
++			*attached_handler_name = NULL;
+ 		}
+ 	}
+ 
+@@ -867,7 +868,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	struct pgpath *p;
+ 	struct multipath *m = ti->private;
+ 	struct request_queue *q;
+-	const char *attached_handler_name;
++	const char *attached_handler_name = NULL;
+ 
+ 	/* we need at least a path arg */
+ 	if (as->argc < 1) {
+@@ -890,7 +891,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	attached_handler_name = scsi_dh_attached_handler_name(q, GFP_KERNEL);
+ 	if (attached_handler_name || m->hw_handler_name) {
+ 		INIT_DELAYED_WORK(&p->activate_path, activate_path_work);
+-		r = setup_scsi_dh(p->path.dev->bdev, m, attached_handler_name, &ti->error);
++		r = setup_scsi_dh(p->path.dev->bdev, m, &attached_handler_name, &ti->error);
+ 		if (r) {
+ 			dm_put_device(ti, p->path.dev);
+ 			goto bad;
+@@ -905,6 +906,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 
+ 	return p;
+  bad:
++	kfree(attached_handler_name);
+ 	free_pgpath(p);
+ 	return ERR_PTR(r);
+ }
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index abf9e884386c..f57f5de54206 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -235,7 +235,7 @@ int mmc_of_parse(struct mmc_host *host)
+ 			host->caps |= MMC_CAP_NEEDS_POLL;
+ 
+ 		ret = mmc_gpiod_request_cd(host, "cd", 0, true,
+-					   cd_debounce_delay_ms,
++					   cd_debounce_delay_ms * 1000,
+ 					   &cd_gpio_invert);
+ 		if (!ret)
+ 			dev_info(host->parent, "Got CD GPIO\n");
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 2a833686784b..86803a3a04dc 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -271,7 +271,7 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+ 	if (debounce) {
+ 		ret = gpiod_set_debounce(desc, debounce);
+ 		if (ret < 0)
+-			ctx->cd_debounce_delay_ms = debounce;
++			ctx->cd_debounce_delay_ms = debounce / 1000;
+ 	}
+ 
+ 	if (gpio_invert)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 21eb3a598a86..bdaad6e93be5 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1619,10 +1619,10 @@ ath10k_wmi_tlv_op_gen_start_scan(struct ath10k *ar,
+ 	bssid_len = arg->n_bssids * sizeof(struct wmi_mac_addr);
+ 	ie_len = roundup(arg->ie_len, 4);
+ 	len = (sizeof(*tlv) + sizeof(*cmd)) +
+-	      (arg->n_channels ? sizeof(*tlv) + chan_len : 0) +
+-	      (arg->n_ssids ? sizeof(*tlv) + ssid_len : 0) +
+-	      (arg->n_bssids ? sizeof(*tlv) + bssid_len : 0) +
+-	      (arg->ie_len ? sizeof(*tlv) + ie_len : 0);
++	      sizeof(*tlv) + chan_len +
++	      sizeof(*tlv) + ssid_len +
++	      sizeof(*tlv) + bssid_len +
++	      sizeof(*tlv) + ie_len;
+ 
+ 	skb = ath10k_wmi_alloc_skb(ar, len);
+ 	if (!skb)
+diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
+index 3c4c58b9fe76..3b6fb5b3bdb2 100644
+--- a/drivers/net/xen-netback/hash.c
++++ b/drivers/net/xen-netback/hash.c
+@@ -332,20 +332,22 @@ u32 xenvif_set_hash_mapping_size(struct xenvif *vif, u32 size)
+ u32 xenvif_set_hash_mapping(struct xenvif *vif, u32 gref, u32 len,
+ 			    u32 off)
+ {
+-	u32 *mapping = &vif->hash.mapping[off];
++	u32 *mapping = vif->hash.mapping;
+ 	struct gnttab_copy copy_op = {
+ 		.source.u.ref = gref,
+ 		.source.domid = vif->domid,
+-		.dest.u.gmfn = virt_to_gfn(mapping),
+ 		.dest.domid = DOMID_SELF,
+-		.dest.offset = xen_offset_in_page(mapping),
+-		.len = len * sizeof(u32),
++		.len = len * sizeof(*mapping),
+ 		.flags = GNTCOPY_source_gref
+ 	};
+ 
+-	if ((off + len > vif->hash.size) || copy_op.len > XEN_PAGE_SIZE)
++	if ((off + len < off) || (off + len > vif->hash.size) ||
++	    len > XEN_PAGE_SIZE / sizeof(*mapping))
+ 		return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+ 
++	copy_op.dest.u.gmfn = virt_to_gfn(mapping + off);
++	copy_op.dest.offset = xen_offset_in_page(mapping + off);
++
+ 	while (len-- != 0)
+ 		if (mapping[off++] >= vif->num_queues)
+ 			return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 722537e14848..41b49716ac75 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -771,6 +771,9 @@ static void __init of_unittest_parse_interrupts(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -845,6 +848,9 @@ static void __init of_unittest_parse_interrupts_extended(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -1001,15 +1007,19 @@ static void __init of_unittest_platform_populate(void)
+ 	pdev = of_find_device_by_node(np);
+ 	unittest(pdev, "device 1 creation failed\n");
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq);
++	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq == -EPROBE_DEFER,
++			 "device deferred probe failed - %d\n", irq);
+ 
+-	/* Test that a parsing failure does not return -EPROBE_DEFER */
+-	np = of_find_node_by_path("/testcase-data/testcase-device2");
+-	pdev = of_find_device_by_node(np);
+-	unittest(pdev, "device 2 creation failed\n");
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq);
++		/* Test that a parsing failure does not return -EPROBE_DEFER */
++		np = of_find_node_by_path("/testcase-data/testcase-device2");
++		pdev = of_find_device_by_node(np);
++		unittest(pdev, "device 2 creation failed\n");
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq < 0 && irq != -EPROBE_DEFER,
++			 "device parsing error failed - %d\n", irq);
++	}
+ 
+ 	np = of_find_node_by_path("/testcase-data/platform-tests");
+ 	unittest(np, "No testcase data in device tree\n");
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 0abe2865a3a5..c97ad905e7c9 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1125,12 +1125,12 @@ int pci_save_state(struct pci_dev *dev)
+ EXPORT_SYMBOL(pci_save_state);
+ 
+ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+-				     u32 saved_val, int retry)
++				     u32 saved_val, int retry, bool force)
+ {
+ 	u32 val;
+ 
+ 	pci_read_config_dword(pdev, offset, &val);
+-	if (val == saved_val)
++	if (!force && val == saved_val)
+ 		return;
+ 
+ 	for (;;) {
+@@ -1149,25 +1149,36 @@ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+ }
+ 
+ static void pci_restore_config_space_range(struct pci_dev *pdev,
+-					   int start, int end, int retry)
++					   int start, int end, int retry,
++					   bool force)
+ {
+ 	int index;
+ 
+ 	for (index = end; index >= start; index--)
+ 		pci_restore_config_dword(pdev, 4 * index,
+ 					 pdev->saved_config_space[index],
+-					 retry);
++					 retry, force);
+ }
+ 
+ static void pci_restore_config_space(struct pci_dev *pdev)
+ {
+ 	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL) {
+-		pci_restore_config_space_range(pdev, 10, 15, 0);
++		pci_restore_config_space_range(pdev, 10, 15, 0, false);
+ 		/* Restore BARs before the command register. */
+-		pci_restore_config_space_range(pdev, 4, 9, 10);
+-		pci_restore_config_space_range(pdev, 0, 3, 0);
++		pci_restore_config_space_range(pdev, 4, 9, 10, false);
++		pci_restore_config_space_range(pdev, 0, 3, 0, false);
++	} else if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
++		pci_restore_config_space_range(pdev, 12, 15, 0, false);
++
++		/*
++		 * Force rewriting of prefetch registers to avoid S3 resume
++		 * issues on Intel PCI bridges that occur when these
++		 * registers are not explicitly written.
++		 */
++		pci_restore_config_space_range(pdev, 9, 11, 0, true);
++		pci_restore_config_space_range(pdev, 0, 8, 0, false);
+ 	} else {
+-		pci_restore_config_space_range(pdev, 0, 15, 0);
++		pci_restore_config_space_range(pdev, 0, 15, 0, false);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index aba59521ad48..31d06f59c4e4 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1264,6 +1264,7 @@ static void tty_driver_remove_tty(struct tty_driver *driver, struct tty_struct *
+ static int tty_reopen(struct tty_struct *tty)
+ {
+ 	struct tty_driver *driver = tty->driver;
++	int retval;
+ 
+ 	if (driver->type == TTY_DRIVER_TYPE_PTY &&
+ 	    driver->subtype == PTY_TYPE_MASTER)
+@@ -1277,10 +1278,14 @@ static int tty_reopen(struct tty_struct *tty)
+ 
+ 	tty->count++;
+ 
+-	if (!tty->ldisc)
+-		return tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (tty->ldisc)
++		return 0;
+ 
+-	return 0;
++	retval = tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (retval)
++		tty->count--;
++
++	return retval;
+ }
+ 
+ /**
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f8ee32d9843a..84f52774810a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1514,6 +1514,7 @@ static void acm_disconnect(struct usb_interface *intf)
+ {
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 	struct tty_struct *tty;
++	int i;
+ 
+ 	/* sibling interface is already cleaning up */
+ 	if (!acm)
+@@ -1544,6 +1545,11 @@ static void acm_disconnect(struct usb_interface *intf)
+ 
+ 	tty_unregister_device(acm_tty_driver, acm->minor);
+ 
++	usb_free_urb(acm->ctrlurb);
++	for (i = 0; i < ACM_NW; i++)
++		usb_free_urb(acm->wb[i].urb);
++	for (i = 0; i < acm->rx_buflimit; i++)
++		usb_free_urb(acm->read_urbs[i]);
+ 	acm_write_buffers_free(acm);
+ 	usb_free_coherent(acm->dev, acm->ctrlsize, acm->ctrl_buffer, acm->ctrl_dma);
+ 	acm_read_buffers_free(acm);
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 7334da9e9779..71d0d33c3286 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -642,10 +642,10 @@ static int __maybe_unused xhci_mtk_resume(struct device *dev)
+ 	xhci_mtk_host_enable(mtk);
+ 
+ 	xhci_dbg(xhci, "%s: restart port polling\n", __func__);
+-	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+-	usb_hcd_poll_rh_status(hcd);
+ 	set_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
+ 	usb_hcd_poll_rh_status(xhci->shared_hcd);
++	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
++	usb_hcd_poll_rh_status(hcd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 6372edf339d9..722860eb5a91 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -185,6 +185,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI))
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0215b70c4efc..e72ad9f81c73 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -561,6 +561,9 @@ static void option_instat_callback(struct urb *urb);
+ /* Interface is reserved */
+ #define RSVD(ifnum)	((BIT(ifnum) & 0xff) << 0)
+ 
++/* Interface must have two endpoints */
++#define NUMEP2		BIT(16)
++
+ 
+ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
+@@ -1081,8 +1084,9 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06),
+-	  .driver_info = RSVD(4) | RSVD(5) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1999,6 +2003,13 @@ static int option_probe(struct usb_serial *serial,
+ 	if (device_flags & RSVD(iface_desc->bInterfaceNumber))
+ 		return -ENODEV;
+ 
++	/*
++	 * Allow matching on bNumEndpoints for devices whose interface numbers
++	 * can change (e.g. Quectel EP06).
++	 */
++	if (device_flags & NUMEP2 && iface_desc->bNumEndpoints != 2)
++		return -ENODEV;
++
+ 	/* Store the device flags so we can use them during attach. */
+ 	usb_set_serial_data(serial, (void *)device_flags);
+ 
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 40864c2bd9dc..4d0273508043 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -84,7 +84,8 @@ DEVICE(moto_modem, MOTO_IDS);
+ 
+ /* Motorola Tetra driver */
+ #define MOTOROLA_TETRA_IDS()			\
+-	{ USB_DEVICE(0x0cad, 0x9011) }	/* Motorola Solutions TETRA PEI */
++	{ USB_DEVICE(0x0cad, 0x9011) },	/* Motorola Solutions TETRA PEI */ \
++	{ USB_DEVICE(0x0cad, 0x9012) }	/* MTP6550 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+ 
+ /* Novatel Wireless GPS driver */
+diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+index ef69273074ba..a3edb20ea4c3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
++++ b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+@@ -496,6 +496,9 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 	if (!access_ok(VERIFY_WRITE, mr->buffer, mr->buffer_size))
+ 		return -EFAULT;
+ 
++	if (mr->w > 4096 || mr->h > 4096)
++		return -EINVAL;
++
+ 	if (mr->w * mr->h * 3 > mr->buffer_size)
+ 		return -EINVAL;
+ 
+@@ -509,7 +512,7 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 			mr->x, mr->y, mr->w, mr->h);
+ 
+ 	if (r > 0) {
+-		if (copy_to_user(mr->buffer, buf, mr->buffer_size))
++		if (copy_to_user(mr->buffer, buf, r))
+ 			r = -EFAULT;
+ 	}
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 9f1c96caebda..782e7243c5c0 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -746,6 +746,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
+ 	if (crc_offset > (blk_size - sizeof(__le32))) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+ 			"invalid crc_offset: %zu", crc_offset);
+ 		return -EINVAL;
+@@ -753,6 +754,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc = cur_cp_crc(*cp_block);
+ 	if (!f2fs_crc_valid(sbi, crc, *cp_block, crc_offset)) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING, "invalid crc value");
+ 		return -EINVAL;
+ 	}
+@@ -772,14 +774,14 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_1, version);
+ 	if (err)
+-		goto invalid_cp1;
++		return NULL;
+ 	pre_version = *version;
+ 
+ 	cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_2, version);
+ 	if (err)
+-		goto invalid_cp2;
++		goto invalid_cp;
+ 	cur_version = *version;
+ 
+ 	if (cur_version == pre_version) {
+@@ -787,9 +789,8 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 		f2fs_put_page(cp_page_2, 1);
+ 		return cp_page_1;
+ 	}
+-invalid_cp2:
+ 	f2fs_put_page(cp_page_2, 1);
+-invalid_cp1:
++invalid_cp:
+ 	f2fs_put_page(cp_page_1, 1);
+ 	return NULL;
+ }
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index bbd1e357c23d..f4fd2e72add4 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -898,8 +898,22 @@ static struct platform_driver ramoops_driver = {
+ 	},
+ };
+ 
+-static void ramoops_register_dummy(void)
++static inline void ramoops_unregister_dummy(void)
+ {
++	platform_device_unregister(dummy);
++	dummy = NULL;
++
++	kfree(dummy_data);
++	dummy_data = NULL;
++}
++
++static void __init ramoops_register_dummy(void)
++{
++	/*
++	 * Prepare a dummy platform data structure to carry the module
++	 * parameters. If mem_size isn't set, then there are no module
++	 * parameters, and we can skip this.
++	 */
+ 	if (!mem_size)
+ 		return;
+ 
+@@ -932,21 +946,28 @@ static void ramoops_register_dummy(void)
+ 	if (IS_ERR(dummy)) {
+ 		pr_info("could not create platform device: %ld\n",
+ 			PTR_ERR(dummy));
++		dummy = NULL;
++		ramoops_unregister_dummy();
+ 	}
+ }
+ 
+ static int __init ramoops_init(void)
+ {
++	int ret;
++
+ 	ramoops_register_dummy();
+-	return platform_driver_register(&ramoops_driver);
++	ret = platform_driver_register(&ramoops_driver);
++	if (ret != 0)
++		ramoops_unregister_dummy();
++
++	return ret;
+ }
+ late_initcall(ramoops_init);
+ 
+ static void __exit ramoops_exit(void)
+ {
+ 	platform_driver_unregister(&ramoops_driver);
+-	platform_device_unregister(dummy);
+-	kfree(dummy_data);
++	ramoops_unregister_dummy();
+ }
+ module_exit(ramoops_exit);
+ 
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index c5466c70d620..2a82aeeacba5 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1929,6 +1929,9 @@ static struct ubi_volume_desc *open_ubi(const char *name, int mode)
+ 	int dev, vol;
+ 	char *endptr;
+ 
++	if (!name || !*name)
++		return ERR_PTR(-EINVAL);
++
+ 	/* First, try to open using the device node path method */
+ 	ubi = ubi_open_volume_path(name, mode);
+ 	if (!IS_ERR(ubi))
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 36fa6a2a82e3..4ee95d8c8413 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -140,6 +140,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 		       unsigned long addr, unsigned long sz);
+ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+ 			      int write);
+ struct page *follow_huge_pd(struct vm_area_struct *vma,
+@@ -170,6 +172,18 @@ static inline unsigned long hugetlb_total_pages(void)
+ 	return 0;
+ }
+ 
++static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
++					pte_t *ptep)
++{
++	return 0;
++}
++
++static inline void adjust_range_if_pmd_sharing_possible(
++				struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
++
+ #define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n)	({ BUG(); 0; })
+ #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
+ #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 68a5121694ef..40ad93bc9548 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2463,6 +2463,12 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
+ 	return vma;
+ }
+ 
++static inline bool range_in_vma(struct vm_area_struct *vma,
++				unsigned long start, unsigned long end)
++{
++	return (vma && vma->vm_start <= start && end <= vma->vm_end);
++}
++
+ #ifdef CONFIG_MMU
+ pgprot_t vm_get_page_prot(unsigned long vm_flags);
+ void vma_set_page_prot(struct vm_area_struct *vma);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c7b3e34811ec..ae22d93701db 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3940,6 +3940,12 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
+ 		goto out;
+ 	}
+ 
++	/* If this is a pinned event it must be running on this CPU */
++	if (event->attr.pinned && event->oncpu != smp_processor_id()) {
++		ret = -EBUSY;
++		goto out;
++	}
++
+ 	/*
+ 	 * If the event is currently on this CPU, its either a per-task event,
+ 	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 25346bd99364..571875b37453 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2929,7 +2929,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+ 	else
+ 		page_add_file_rmap(new, true);
+ 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
+-	if (vma->vm_flags & VM_LOCKED)
++	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
+ 		mlock_vma_page(new);
+ 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3103099f64fd..f469315a6a0f 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4556,12 +4556,40 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ 	/*
+ 	 * check on proper vm_flags and page table alignment
+ 	 */
+-	if (vma->vm_flags & VM_MAYSHARE &&
+-	    vma->vm_start <= base && end <= vma->vm_end)
++	if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
+ 		return true;
+ 	return false;
+ }
+ 
++/*
++ * Determine if start,end range within vma could be mapped by shared pmd.
++ * If yes, adjust start and end to cover range associated with possible
++ * shared pmd mappings.
++ */
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++	unsigned long check_addr = *start;
++
++	if (!(vma->vm_flags & VM_MAYSHARE))
++		return;
++
++	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
++		unsigned long a_start = check_addr & PUD_MASK;
++		unsigned long a_end = a_start + PUD_SIZE;
++
++		/*
++		 * If sharing is possible, adjust start/end if necessary.
++		 */
++		if (range_in_vma(vma, a_start, a_end)) {
++			if (a_start < *start)
++				*start = a_start;
++			if (a_end > *end)
++				*end = a_end;
++		}
++	}
++}
++
+ /*
+  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
+  * and returns the corresponding pte. While this is not necessary for the
+@@ -4659,6 +4687,11 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+ {
+ 	return 0;
+ }
++
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
+ #define want_pmd_share()	(0)
+ #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 8c0af0f7cab1..2a55289ee9f1 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -275,6 +275,9 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ 		if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
+ 			mlock_vma_page(new);
+ 
++		if (PageTransHuge(page) && PageMlocked(page))
++			clear_page_mlock(page);
++
+ 		/* No need to invalidate - it was non-present before */
+ 		update_mmu_cache(vma, pvmw.address, pvmw.pte);
+ 	}
+diff --git a/mm/rmap.c b/mm/rmap.c
+index eb477809a5c0..1e79fac3186b 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 	}
+ 
+ 	/*
+-	 * We have to assume the worse case ie pmd for invalidation. Note that
+-	 * the page can not be free in this function as call of try_to_unmap()
+-	 * must hold a reference on the page.
++	 * For THP, we have to assume the worse case ie pmd for invalidation.
++	 * For hugetlb, it could be much worse if we need to do pud
++	 * invalidation in the case of pmd sharing.
++	 *
++	 * Note that the page can not be free in this function as call of
++	 * try_to_unmap() must hold a reference on the page.
+ 	 */
+ 	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
++	if (PageHuge(page)) {
++		/*
++		 * If sharing is possible, start and end will be adjusted
++		 * accordingly.
++		 */
++		adjust_range_if_pmd_sharing_possible(vma, &start, &end);
++	}
+ 	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
+ 
+ 	while (page_vma_mapped_walk(&pvmw)) {
+@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+ 		address = pvmw.address;
+ 
++		if (PageHuge(page)) {
++			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++				/*
++				 * huge_pmd_unshare unmapped an entire PMD
++				 * page.  There is no way of knowing exactly
++				 * which PMDs may be cached for this mm, so
++				 * we must flush them all.  start/end were
++				 * already adjusted above to cover this range.
++				 */
++				flush_cache_range(vma, start, end);
++				flush_tlb_range(vma, start, end);
++				mmu_notifier_invalidate_range(mm, start, end);
++
++				/*
++				 * The ref count of the PMD page was dropped
++				 * which is part of the way map counting
++				 * is done for shared PMDs.  Return 'true'
++				 * here.  When there is no other sharing,
++				 * huge_pmd_unshare returns false and we will
++				 * unmap the actual page and drop map count
++				 * to zero.
++				 */
++				page_vma_mapped_walk_done(&pvmw);
++				break;
++			}
++		}
+ 
+ 		if (IS_ENABLED(CONFIG_MIGRATION) &&
+ 		    (flags & TTU_MIGRATION) &&
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 8ba0870ecddd..55a5bb1d773d 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1275,6 +1275,9 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_SMP
+ 	"nr_tlb_remote_flush",
+ 	"nr_tlb_remote_flush_received",
++#else
++	"", /* nr_tlb_remote_flush */
++	"", /* nr_tlb_remote_flush_received */
+ #endif /* CONFIG_SMP */
+ 	"nr_tlb_local_flush_all",
+ 	"nr_tlb_local_flush_one",
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index aa082b71d2e4..c6bbe5b56378 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -427,7 +427,7 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_AP_VLAN:
+ 		/* Keys without a station are used for TX only */
+-		if (key->sta && test_sta_flag(key->sta, WLAN_STA_MFP))
++		if (sta && test_sta_flag(sta, WLAN_STA_MFP))
+ 			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
+ 		break;
+ 	case NL80211_IFTYPE_ADHOC:
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 555e389b7dfa..5d22c058ae23 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1756,7 +1756,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		if (local->ops->wake_tx_queue &&
+ 		    type != NL80211_IFTYPE_AP_VLAN &&
+-		    type != NL80211_IFTYPE_MONITOR)
++		    (type != NL80211_IFTYPE_MONITOR ||
++		     (params->flags & MONITOR_FLAG_ACTIVE)))
+ 			txq_size += sizeof(struct txq_info) +
+ 				    local->hw.txq_data_size;
+ 
+diff --git a/net/rds/ib.h b/net/rds/ib.h
+index a6f4d7d68e95..83ff7c18d691 100644
+--- a/net/rds/ib.h
++++ b/net/rds/ib.h
+@@ -371,7 +371,7 @@ void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc);
+ int rds_ib_recv_init(void);
+ void rds_ib_recv_exit(void);
+ int rds_ib_recv_path(struct rds_conn_path *conn);
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic);
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp);
+ void rds_ib_recv_free_caches(struct rds_ib_connection *ic);
+ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp);
+ void rds_ib_inc_free(struct rds_incoming *inc);
+diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
+index f1684ae6abfd..6a909ea9e8fb 100644
+--- a/net/rds/ib_cm.c
++++ b/net/rds/ib_cm.c
+@@ -949,7 +949,7 @@ int rds_ib_conn_alloc(struct rds_connection *conn, gfp_t gfp)
+ 	if (!ic)
+ 		return -ENOMEM;
+ 
+-	ret = rds_ib_recv_alloc_caches(ic);
++	ret = rds_ib_recv_alloc_caches(ic, gfp);
+ 	if (ret) {
+ 		kfree(ic);
+ 		return ret;
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index b4e421aa9727..918d2e676b9b 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -98,12 +98,12 @@ static void rds_ib_cache_xfer_to_ready(struct rds_ib_refill_cache *cache)
+ 	}
+ }
+ 
+-static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
++static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache, gfp_t gfp)
+ {
+ 	struct rds_ib_cache_head *head;
+ 	int cpu;
+ 
+-	cache->percpu = alloc_percpu(struct rds_ib_cache_head);
++	cache->percpu = alloc_percpu_gfp(struct rds_ib_cache_head, gfp);
+ 	if (!cache->percpu)
+ 	       return -ENOMEM;
+ 
+@@ -118,13 +118,13 @@ static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
+ 	return 0;
+ }
+ 
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic)
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp)
+ {
+ 	int ret;
+ 
+-	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs);
++	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs, gfp);
+ 	if (!ret) {
+-		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags);
++		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags, gfp);
+ 		if (ret)
+ 			free_percpu(ic->i_cache_incs.percpu);
+ 	}
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index a2f76743c73a..82f665728382 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -185,6 +185,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 		return -ENOMEM;
+ 
+ 	buf->sk = msg->dst_sk;
++	__tipc_dump_start(&cb, msg->net);
+ 
+ 	do {
+ 		int rem;
+@@ -216,6 +217,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 	err = 0;
+ 
+ err_out:
++	tipc_dump_done(&cb);
+ 	kfree_skb(buf);
+ 
+ 	if (err == -EMSGSIZE) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index bdb4a9a5a83a..093e16d1b770 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,7 +3233,7 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+@@ -3269,8 +3269,14 @@ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
+ int tipc_dump_start(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
+-	struct net *net = sock_net(cb->skb->sk);
++	return __tipc_dump_start(cb, sock_net(cb->skb->sk));
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net)
++{
++	/* tipc_nl_name_table_dump() uses cb->args[0...3]. */
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_net *tn = tipc_net(net);
+ 
+ 	if (!iter) {
+@@ -3278,17 +3284,16 @@ int tipc_dump_start(struct netlink_callback *cb)
+ 		if (!iter)
+ 			return -ENOMEM;
+ 
+-		cb->args[0] = (long)iter;
++		cb->args[4] = (long)iter;
+ 	}
+ 
+ 	rhashtable_walk_enter(&tn->sk_rht, iter);
+ 	return 0;
+ }
+-EXPORT_SYMBOL(tipc_dump_start);
+ 
+ int tipc_dump_done(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *hti = (void *)cb->args[0];
++	struct rhashtable_iter *hti = (void *)cb->args[4];
+ 
+ 	rhashtable_walk_exit(hti);
+ 	kfree(hti);
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index d43032e26532..5e575f205afe 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -69,5 +69,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
+ int tipc_dump_start(struct netlink_callback *cb);
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net);
+ int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/tools/testing/selftests/x86/test_vdso.c b/tools/testing/selftests/x86/test_vdso.c
+index 235259011704..35edd61d1663 100644
+--- a/tools/testing/selftests/x86/test_vdso.c
++++ b/tools/testing/selftests/x86/test_vdso.c
+@@ -17,6 +17,7 @@
+ #include <errno.h>
+ #include <sched.h>
+ #include <stdbool.h>
++#include <limits.h>
+ 
+ #ifndef SYS_getcpu
+ # ifdef __x86_64__
+@@ -31,6 +32,14 @@
+ 
+ int nerrs = 0;
+ 
++typedef int (*vgettime_t)(clockid_t, struct timespec *);
++
++vgettime_t vdso_clock_gettime;
++
++typedef long (*vgtod_t)(struct timeval *tv, struct timezone *tz);
++
++vgtod_t vdso_gettimeofday;
++
+ typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
+ 
+ getcpu_t vgetcpu;
+@@ -95,6 +104,15 @@ static void fill_function_pointers()
+ 		printf("Warning: failed to find getcpu in vDSO\n");
+ 
+ 	vgetcpu = (getcpu_t) vsyscall_getcpu();
++
++	vdso_clock_gettime = (vgettime_t)dlsym(vdso, "__vdso_clock_gettime");
++	if (!vdso_clock_gettime)
++		printf("Warning: failed to find clock_gettime in vDSO\n");
++
++	vdso_gettimeofday = (vgtod_t)dlsym(vdso, "__vdso_gettimeofday");
++	if (!vdso_gettimeofday)
++		printf("Warning: failed to find gettimeofday in vDSO\n");
++
+ }
+ 
+ static long sys_getcpu(unsigned * cpu, unsigned * node,
+@@ -103,6 +121,16 @@ static long sys_getcpu(unsigned * cpu, unsigned * node,
+ 	return syscall(__NR_getcpu, cpu, node, cache);
+ }
+ 
++static inline int sys_clock_gettime(clockid_t id, struct timespec *ts)
++{
++	return syscall(__NR_clock_gettime, id, ts);
++}
++
++static inline int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
++{
++	return syscall(__NR_gettimeofday, tv, tz);
++}
++
+ static void test_getcpu(void)
+ {
+ 	printf("[RUN]\tTesting getcpu...\n");
+@@ -155,10 +183,154 @@ static void test_getcpu(void)
+ 	}
+ }
+ 
++static bool ts_leq(const struct timespec *a, const struct timespec *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_nsec <= b->tv_nsec;
++}
++
++static bool tv_leq(const struct timeval *a, const struct timeval *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_usec <= b->tv_usec;
++}
++
++static char const * const clocknames[] = {
++	[0] = "CLOCK_REALTIME",
++	[1] = "CLOCK_MONOTONIC",
++	[2] = "CLOCK_PROCESS_CPUTIME_ID",
++	[3] = "CLOCK_THREAD_CPUTIME_ID",
++	[4] = "CLOCK_MONOTONIC_RAW",
++	[5] = "CLOCK_REALTIME_COARSE",
++	[6] = "CLOCK_MONOTONIC_COARSE",
++	[7] = "CLOCK_BOOTTIME",
++	[8] = "CLOCK_REALTIME_ALARM",
++	[9] = "CLOCK_BOOTTIME_ALARM",
++	[10] = "CLOCK_SGI_CYCLE",
++	[11] = "CLOCK_TAI",
++};
++
++static void test_one_clock_gettime(int clock, const char *name)
++{
++	struct timespec start, vdso, end;
++	int vdso_ret, end_ret;
++
++	printf("[RUN]\tTesting clock_gettime for clock %s (%d)...\n", name, clock);
++
++	if (sys_clock_gettime(clock, &start) < 0) {
++		if (errno == EINVAL) {
++			vdso_ret = vdso_clock_gettime(clock, &vdso);
++			if (vdso_ret == -EINVAL) {
++				printf("[OK]\tNo such clock.\n");
++			} else {
++				printf("[FAIL]\tNo such clock, but __vdso_clock_gettime returned %d\n", vdso_ret);
++				nerrs++;
++			}
++		} else {
++			printf("[WARN]\t clock_gettime(%d) syscall returned error %d\n", clock, errno);
++		}
++		return;
++	}
++
++	vdso_ret = vdso_clock_gettime(clock, &vdso);
++	end_ret = sys_clock_gettime(clock, &end);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%09ld %llu.%09ld %llu.%09ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_nsec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_nsec,
++	       (unsigned long long)end.tv_sec, end.tv_nsec);
++
++	if (!ts_leq(&start, &vdso) || !ts_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++}
++
++static void test_clock_gettime(void)
++{
++	for (int clock = 0; clock < sizeof(clocknames) / sizeof(clocknames[0]);
++	     clock++) {
++		test_one_clock_gettime(clock, clocknames[clock]);
++	}
++
++	/* Also test some invalid clock ids */
++	test_one_clock_gettime(-1, "invalid");
++	test_one_clock_gettime(INT_MIN, "invalid");
++	test_one_clock_gettime(INT_MAX, "invalid");
++}
++
++static void test_gettimeofday(void)
++{
++	struct timeval start, vdso, end;
++	struct timezone sys_tz, vdso_tz;
++	int vdso_ret, end_ret;
++
++	if (!vdso_gettimeofday)
++		return;
++
++	printf("[RUN]\tTesting gettimeofday...\n");
++
++	if (sys_gettimeofday(&start, &sys_tz) < 0) {
++		printf("[FAIL]\tsys_gettimeofday failed (%d)\n", errno);
++		nerrs++;
++		return;
++	}
++
++	vdso_ret = vdso_gettimeofday(&vdso, &vdso_tz);
++	end_ret = sys_gettimeofday(&end, NULL);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%06ld %llu.%06ld %llu.%06ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_usec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_usec,
++	       (unsigned long long)end.tv_sec, end.tv_usec);
++
++	if (!tv_leq(&start, &vdso) || !tv_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++
++	if (sys_tz.tz_minuteswest == vdso_tz.tz_minuteswest &&
++	    sys_tz.tz_dsttime == vdso_tz.tz_dsttime) {
++		printf("[OK]\ttimezones match: minuteswest=%d, dsttime=%d\n",
++		       sys_tz.tz_minuteswest, sys_tz.tz_dsttime);
++	} else {
++		printf("[FAIL]\ttimezones do not match\n");
++		nerrs++;
++	}
++
++	/* And make sure that passing NULL for tz doesn't crash. */
++	vdso_gettimeofday(&vdso, NULL);
++}
++
+ int main(int argc, char **argv)
+ {
+ 	fill_function_pointers();
+ 
++	test_clock_gettime();
++	test_gettimeofday();
++
++	/*
++	 * Test getcpu() last so that, if something goes wrong setting affinity,
++	 * we still run the other tests.
++	 */
+ 	test_getcpu();
+ 
+ 	return nerrs ? 1 : 0;


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     28cd2e72d3781b65f44eaf350acc33df771c43ca
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 29 13:36:23 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=28cd2e72

Linux patch 4.18.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-4.18.11.patch | 2983 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2987 insertions(+)

diff --git a/0000_README b/0000_README
index a9e2bd7..cccbd63 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-4.18.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.10
 
+Patch:  1010_linux-4.18.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-4.18.11.patch b/1010_linux-4.18.11.patch
new file mode 100644
index 0000000..fe34a23
--- /dev/null
+++ b/1010_linux-4.18.11.patch
@@ -0,0 +1,2983 @@
+diff --git a/Makefile b/Makefile
+index ffab15235ff0..de0ecace693a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index acd11b3bf639..2a356b948720 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index 2071c3d1ae07..dbe8bb980da1 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128l_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index b5f2a8fd5a71..8bebda2de92f 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis256_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus1280-sse2-glue.c b/arch/x86/crypto/morus1280-sse2-glue.c
+index 95cf857d2cbb..f40244eaf14d 100644
+--- a/arch/x86/crypto/morus1280-sse2-glue.c
++++ b/arch/x86/crypto/morus1280-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS1280_DECLARE_ALGS(sse2, "morus1280-sse2", 350);
+ static int __init crypto_morus1280_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus640-sse2-glue.c b/arch/x86/crypto/morus640-sse2-glue.c
+index 615fb7bc9a32..9afaf8f8565a 100644
+--- a/arch/x86/crypto/morus640-sse2-glue.c
++++ b/arch/x86/crypto/morus640-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS640_DECLARE_ALGS(sse2, "morus640-sse2", 400);
+ static int __init crypto_morus640_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index 7d00d4ad44d4..95997e6c0696 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -478,7 +478,7 @@ static void xen_convert_regs(const struct xen_pmu_regs *xen_regs,
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ {
+ 	int err, ret = IRQ_NONE;
+-	struct pt_regs regs;
++	struct pt_regs regs = {0};
+ 	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+ 	uint8_t xenpmu_flags = get_xenpmu_flags();
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 984b37647b2f..22a2bc5f25ce 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5358,10 +5358,20 @@ void ata_qc_complete(struct ata_queued_cmd *qc)
+  */
+ int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active)
+ {
++	u64 done_mask, ap_qc_active = ap->qc_active;
+ 	int nr_done = 0;
+-	u64 done_mask;
+ 
+-	done_mask = ap->qc_active ^ qc_active;
++	/*
++	 * If the internal tag is set on ap->qc_active, then we care about
++	 * bit0 on the passed in qc_active mask. Move that bit up to match
++	 * the internal tag.
++	 */
++	if (ap_qc_active & (1ULL << ATA_TAG_INTERNAL)) {
++		qc_active |= (qc_active & 0x01) << ATA_TAG_INTERNAL;
++		qc_active ^= qc_active & 0x01;
++	}
++
++	done_mask = ap_qc_active ^ qc_active;
+ 
+ 	if (unlikely(done_mask & qc_active)) {
+ 		ata_port_err(ap, "illegal qc_active transition (%08llx->%08llx)\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index e950730f1933..5a6e7e1cb351 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -367,12 +367,14 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 				break;
+ 			case CHIP_POLARIS10:
+ 				if (type == CGS_UCODE_ID_SMU) {
+-					if ((adev->pdev->device == 0x67df) &&
+-					    ((adev->pdev->revision == 0xe0) ||
+-					     (adev->pdev->revision == 0xe3) ||
+-					     (adev->pdev->revision == 0xe4) ||
+-					     (adev->pdev->revision == 0xe5) ||
+-					     (adev->pdev->revision == 0xe7) ||
++					if (((adev->pdev->device == 0x67df) &&
++					     ((adev->pdev->revision == 0xe0) ||
++					      (adev->pdev->revision == 0xe3) ||
++					      (adev->pdev->revision == 0xe4) ||
++					      (adev->pdev->revision == 0xe5) ||
++					      (adev->pdev->revision == 0xe7) ||
++					      (adev->pdev->revision == 0xef))) ||
++					    ((adev->pdev->device == 0x6fdf) &&
+ 					     (adev->pdev->revision == 0xef))) {
+ 						info->is_kicker = true;
+ 						strcpy(fw_name, "amdgpu/polaris10_k_smc.bin");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index b0bf2f24da48..dc893076398e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -532,6 +532,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
++	{0x1002, 0x6FDF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	/* Polaris12 */
+ 	{0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ 	{0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index dec0d60921bf..00486c744f24 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -5062,10 +5062,14 @@ void hsw_disable_ips(const struct intel_crtc_state *crtc_state)
+ 		mutex_lock(&dev_priv->pcu_lock);
+ 		WARN_ON(sandybridge_pcode_write(dev_priv, DISPLAY_IPS_CONTROL, 0));
+ 		mutex_unlock(&dev_priv->pcu_lock);
+-		/* wait for pcode to finish disabling IPS, which may take up to 42ms */
++		/*
++		 * Wait for PCODE to finish disabling IPS. The BSpec specified
++		 * 42ms timeout value leads to occasional timeouts so use 100ms
++		 * instead.
++		 */
+ 		if (intel_wait_for_register(dev_priv,
+ 					    IPS_CTL, IPS_ENABLE, 0,
+-					    42))
++					    100))
+ 			DRM_ERROR("Timed out waiting for IPS disable\n");
+ 	} else {
+ 		I915_WRITE(IPS_CTL, 0);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 9bae4db84cfb..7a12d75e5157 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1098,17 +1098,21 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ 	int ret;
+ 
+ 	if (dpcd >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CTRL, &dpcd);
++		/* Even if we're enabling MST, start with disabling the
++		 * branching unit to clear any sink-side MST topology state
++		 * that wasn't set by us
++		 */
++		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, 0);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dpcd &= ~DP_MST_EN;
+-		if (state)
+-			dpcd |= DP_MST_EN;
+-
+-		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, dpcd);
+-		if (ret < 0)
+-			return ret;
++		if (state) {
++			/* Now, start initializing */
++			ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL,
++						 DP_MST_EN);
++			if (ret < 0)
++				return ret;
++		}
+ 	}
+ 
+ 	return nvif_mthd(disp, 0, &args, sizeof(args));
+@@ -1117,31 +1121,58 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ int
+ nv50_mstm_detect(struct nv50_mstm *mstm, u8 dpcd[8], int allow)
+ {
+-	int ret, state = 0;
++	struct drm_dp_aux *aux;
++	int ret;
++	bool old_state, new_state;
++	u8 mstm_ctrl;
+ 
+ 	if (!mstm)
+ 		return 0;
+ 
+-	if (dpcd[0] >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CAP, &dpcd[1]);
++	mutex_lock(&mstm->mgr.lock);
++
++	old_state = mstm->mgr.mst_state;
++	new_state = old_state;
++	aux = mstm->mgr.aux;
++
++	if (old_state) {
++		/* Just check that the MST hub is still as we expect it */
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CTRL, &mstm_ctrl);
++		if (ret < 0 || !(mstm_ctrl & DP_MST_EN)) {
++			DRM_DEBUG_KMS("Hub gone, disabling MST topology\n");
++			new_state = false;
++		}
++	} else if (dpcd[0] >= 0x12) {
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CAP, &dpcd[1]);
+ 		if (ret < 0)
+-			return ret;
++			goto probe_error;
+ 
+ 		if (!(dpcd[1] & DP_MST_CAP))
+ 			dpcd[0] = 0x11;
+ 		else
+-			state = allow;
++			new_state = allow;
++	}
++
++	if (new_state == old_state) {
++		mutex_unlock(&mstm->mgr.lock);
++		return new_state;
+ 	}
+ 
+-	ret = nv50_mstm_enable(mstm, dpcd[0], state);
++	ret = nv50_mstm_enable(mstm, dpcd[0], new_state);
+ 	if (ret)
+-		return ret;
++		goto probe_error;
++
++	mutex_unlock(&mstm->mgr.lock);
+ 
+-	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, state);
++	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, new_state);
+ 	if (ret)
+ 		return nv50_mstm_enable(mstm, dpcd[0], 0);
+ 
+-	return mstm->mgr.mst_state;
++	return new_state;
++
++probe_error:
++	mutex_unlock(&mstm->mgr.lock);
++	return ret;
+ }
+ 
+ static void
+@@ -2049,7 +2080,7 @@ nv50_disp_atomic_state_alloc(struct drm_device *dev)
+ static const struct drm_mode_config_funcs
+ nv50_disp_func = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ 	.atomic_check = nv50_disp_atomic_check,
+ 	.atomic_commit = nv50_disp_atomic_commit,
+ 	.atomic_state_alloc = nv50_disp_atomic_state_alloc,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index af68eae4c626..de4ab310ef8e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -570,12 +570,16 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+ 		nv_connector->edid = NULL;
+ 	}
+ 
+-	/* Outputs are only polled while runtime active, so acquiring a
+-	 * runtime PM ref here is unnecessary (and would deadlock upon
+-	 * runtime suspend because it waits for polling to finish).
++	/* Outputs are only polled while runtime active, so resuming the
++	 * device here is unnecessary (and would deadlock upon runtime suspend
++	 * because it waits for polling to finish). We do however, want to
++	 * prevent the autosuspend timer from elapsing during this operation
++	 * if possible.
+ 	 */
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		ret = pm_runtime_get_sync(connector->dev->dev);
++	if (drm_kms_helper_is_poll_worker()) {
++		pm_runtime_get_noresume(dev->dev);
++	} else {
++		ret = pm_runtime_get_sync(dev->dev);
+ 		if (ret < 0 && ret != -EACCES)
+ 			return conn_status;
+ 	}
+@@ -653,10 +657,8 @@ detect_analog:
+ 
+  out:
+ 
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		pm_runtime_mark_last_busy(connector->dev->dev);
+-		pm_runtime_put_autosuspend(connector->dev->dev);
+-	}
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
+ 
+ 	return conn_status;
+ }
+@@ -1120,6 +1122,26 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 	const struct nvif_notify_conn_rep_v0 *rep = notify->data;
+ 	const char *name = connector->name;
+ 	struct nouveau_encoder *nv_encoder;
++	int ret;
++
++	ret = pm_runtime_get(drm->dev->dev);
++	if (ret == 0) {
++		/* We can't block here if there's a pending PM request
++		 * running, as we'll deadlock nouveau_display_fini() when it
++		 * calls nvif_put() on our nvif_notify struct. So, simply
++		 * defer the hotplug event until the device finishes resuming
++		 */
++		NV_DEBUG(drm, "Deferring HPD on %s until runtime resume\n",
++			 name);
++		schedule_work(&drm->hpd_work);
++
++		pm_runtime_put_noidle(drm->dev->dev);
++		return NVIF_NOTIFY_KEEP;
++	} else if (ret != 1 && ret != -EACCES) {
++		NV_WARN(drm, "HPD on %s dropped due to RPM failure: %d\n",
++			name, ret);
++		return NVIF_NOTIFY_DROP;
++	}
+ 
+ 	if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) {
+ 		NV_DEBUG(drm, "service %s\n", name);
+@@ -1137,6 +1159,8 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 		drm_helper_hpd_irq_event(connector->dev);
+ 	}
+ 
++	pm_runtime_mark_last_busy(drm->dev->dev);
++	pm_runtime_put_autosuspend(drm->dev->dev);
+ 	return NVIF_NOTIFY_KEEP;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index ec7861457b84..c5b3cc17965c 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -293,7 +293,7 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
+ 
+ static const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ };
+ 
+ 
+@@ -355,8 +355,6 @@ nouveau_display_hpd_work(struct work_struct *work)
+ 	pm_runtime_get_sync(drm->dev->dev);
+ 
+ 	drm_helper_hpd_irq_event(drm->dev);
+-	/* enable polling for external displays */
+-	drm_kms_helper_poll_enable(drm->dev);
+ 
+ 	pm_runtime_mark_last_busy(drm->dev->dev);
+ 	pm_runtime_put_sync(drm->dev->dev);
+@@ -379,15 +377,29 @@ nouveau_display_acpi_ntfy(struct notifier_block *nb, unsigned long val,
+ {
+ 	struct nouveau_drm *drm = container_of(nb, typeof(*drm), acpi_nb);
+ 	struct acpi_bus_event *info = data;
++	int ret;
+ 
+ 	if (!strcmp(info->device_class, ACPI_VIDEO_CLASS)) {
+ 		if (info->type == ACPI_VIDEO_NOTIFY_PROBE) {
+-			/*
+-			 * This may be the only indication we receive of a
+-			 * connector hotplug on a runtime suspended GPU,
+-			 * schedule hpd_work to check.
+-			 */
+-			schedule_work(&drm->hpd_work);
++			ret = pm_runtime_get(drm->dev->dev);
++			if (ret == 1 || ret == -EACCES) {
++				/* If the GPU is already awake, or in a state
++				 * where we can't wake it up, it can handle
++				 * it's own hotplug events.
++				 */
++				pm_runtime_put_autosuspend(drm->dev->dev);
++			} else if (ret == 0) {
++				/* This may be the only indication we receive
++				 * of a connector hotplug on a runtime
++				 * suspended GPU, schedule hpd_work to check.
++				 */
++				NV_DEBUG(drm, "ACPI requested connector reprobe\n");
++				schedule_work(&drm->hpd_work);
++				pm_runtime_put_noidle(drm->dev->dev);
++			} else {
++				NV_WARN(drm, "Dropped ACPI reprobe event due to RPM error: %d\n",
++					ret);
++			}
+ 
+ 			/* acpi-video should not generate keypresses for this */
+ 			return NOTIFY_BAD;
+@@ -411,6 +423,11 @@ nouveau_display_init(struct drm_device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	/* enable connector detection and polling for connectors without HPD
++	 * support
++	 */
++	drm_kms_helper_poll_enable(dev);
++
+ 	/* enable hotplug interrupts */
+ 	drm_connector_list_iter_begin(dev, &conn_iter);
+ 	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+@@ -425,7 +442,7 @@ nouveau_display_init(struct drm_device *dev)
+ }
+ 
+ void
+-nouveau_display_fini(struct drm_device *dev, bool suspend)
++nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime)
+ {
+ 	struct nouveau_display *disp = nouveau_display(dev);
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
+@@ -450,6 +467,9 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
+ 	}
+ 	drm_connector_list_iter_end(&conn_iter);
+ 
++	if (!runtime)
++		cancel_work_sync(&drm->hpd_work);
++
+ 	drm_kms_helper_poll_disable(dev);
+ 	disp->fini(dev);
+ }
+@@ -618,11 +638,11 @@ nouveau_display_suspend(struct drm_device *dev, bool runtime)
+ 			}
+ 		}
+ 
+-		nouveau_display_fini(dev, true);
++		nouveau_display_fini(dev, true, runtime);
+ 		return 0;
+ 	}
+ 
+-	nouveau_display_fini(dev, true);
++	nouveau_display_fini(dev, true, runtime);
+ 
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 		struct nouveau_framebuffer *nouveau_fb;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.h b/drivers/gpu/drm/nouveau/nouveau_display.h
+index 54aa7c3fa42d..ff92b54ce448 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.h
++++ b/drivers/gpu/drm/nouveau/nouveau_display.h
+@@ -62,7 +62,7 @@ nouveau_display(struct drm_device *dev)
+ int  nouveau_display_create(struct drm_device *dev);
+ void nouveau_display_destroy(struct drm_device *dev);
+ int  nouveau_display_init(struct drm_device *dev);
+-void nouveau_display_fini(struct drm_device *dev, bool suspend);
++void nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime);
+ int  nouveau_display_suspend(struct drm_device *dev, bool runtime);
+ void nouveau_display_resume(struct drm_device *dev, bool runtime);
+ int  nouveau_display_vblank_enable(struct drm_device *, unsigned int);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c7ec86d6c3c9..c2ebe5da34d0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -629,7 +629,7 @@ nouveau_drm_unload(struct drm_device *dev)
+ 	nouveau_debugfs_fini(drm);
+ 
+ 	if (dev->mode_config.num_crtc)
+-		nouveau_display_fini(dev, false);
++		nouveau_display_fini(dev, false, false);
+ 	nouveau_display_destroy(dev);
+ 
+ 	nouveau_bios_takedown(dev);
+@@ -835,7 +835,6 @@ nouveau_pmops_runtime_suspend(struct device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	drm_kms_helper_poll_disable(drm_dev);
+ 	nouveau_switcheroo_optimus_dsm();
+ 	ret = nouveau_do_suspend(drm_dev, true);
+ 	pci_save_state(pdev);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 85c1f10bc2b6..8cf966690963 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -466,6 +466,7 @@ nouveau_fbcon_set_suspend_work(struct work_struct *work)
+ 	console_unlock();
+ 
+ 	if (state == FBINFO_STATE_RUNNING) {
++		nouveau_fbcon_hotplug_resume(drm->fbcon);
+ 		pm_runtime_mark_last_busy(drm->dev->dev);
+ 		pm_runtime_put_sync(drm->dev->dev);
+ 	}
+@@ -487,6 +488,61 @@ nouveau_fbcon_set_suspend(struct drm_device *dev, int state)
+ 	schedule_work(&drm->fbcon_work);
+ }
+ 
++void
++nouveau_fbcon_output_poll_changed(struct drm_device *dev)
++{
++	struct nouveau_drm *drm = nouveau_drm(dev);
++	struct nouveau_fbdev *fbcon = drm->fbcon;
++	int ret;
++
++	if (!fbcon)
++		return;
++
++	mutex_lock(&fbcon->hotplug_lock);
++
++	ret = pm_runtime_get(dev->dev);
++	if (ret == 1 || ret == -EACCES) {
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++
++		pm_runtime_mark_last_busy(dev->dev);
++		pm_runtime_put_autosuspend(dev->dev);
++	} else if (ret == 0) {
++		/* If the GPU was already in the process of suspending before
++		 * this event happened, then we can't block here as we'll
++		 * deadlock the runtime pmops since they wait for us to
++		 * finish. So, just defer this event for when we runtime
++		 * resume again. It will be handled by fbcon_work.
++		 */
++		NV_DEBUG(drm, "fbcon HPD event deferred until runtime resume\n");
++		fbcon->hotplug_waiting = true;
++		pm_runtime_put_noidle(drm->dev->dev);
++	} else {
++		DRM_WARN("fbcon HPD event lost due to RPM failure: %d\n",
++			 ret);
++	}
++
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
++void
++nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon)
++{
++	struct nouveau_drm *drm;
++
++	if (!fbcon)
++		return;
++	drm = nouveau_drm(fbcon->helper.dev);
++
++	mutex_lock(&fbcon->hotplug_lock);
++	if (fbcon->hotplug_waiting) {
++		fbcon->hotplug_waiting = false;
++
++		NV_DEBUG(drm, "Handling deferred fbcon HPD events\n");
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++	}
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
+ int
+ nouveau_fbcon_init(struct drm_device *dev)
+ {
+@@ -505,6 +561,7 @@ nouveau_fbcon_init(struct drm_device *dev)
+ 
+ 	drm->fbcon = fbcon;
+ 	INIT_WORK(&drm->fbcon_work, nouveau_fbcon_set_suspend_work);
++	mutex_init(&fbcon->hotplug_lock);
+ 
+ 	drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+index a6f192ea3fa6..db9d52047ef8 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+@@ -41,6 +41,9 @@ struct nouveau_fbdev {
+ 	struct nvif_object gdi;
+ 	struct nvif_object blit;
+ 	struct nvif_object twod;
++
++	struct mutex hotplug_lock;
++	bool hotplug_waiting;
+ };
+ 
+ void nouveau_fbcon_restore(void);
+@@ -68,6 +71,8 @@ void nouveau_fbcon_set_suspend(struct drm_device *dev, int state);
+ void nouveau_fbcon_accel_save_disable(struct drm_device *dev);
+ void nouveau_fbcon_accel_restore(struct drm_device *dev);
+ 
++void nouveau_fbcon_output_poll_changed(struct drm_device *dev);
++void nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon);
+ extern int nouveau_nofbaccel;
+ 
+ #endif /* __NV50_FBCON_H__ */
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index 8746eeeec44d..491f1892b50e 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -432,9 +432,11 @@ static void udl_fbdev_destroy(struct drm_device *dev,
+ {
+ 	drm_fb_helper_unregister_fbi(&ufbdev->helper);
+ 	drm_fb_helper_fini(&ufbdev->helper);
+-	drm_framebuffer_unregister_private(&ufbdev->ufb.base);
+-	drm_framebuffer_cleanup(&ufbdev->ufb.base);
+-	drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	if (ufbdev->ufb.obj) {
++		drm_framebuffer_unregister_private(&ufbdev->ufb.base);
++		drm_framebuffer_cleanup(&ufbdev->ufb.base);
++		drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	}
+ }
+ 
+ int udl_fbdev_init(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index a951ec75d01f..cf5aea1d6488 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -297,6 +297,9 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 	vc4_state->y_scaling[0] = vc4_get_scaling_mode(vc4_state->src_h[0],
+ 						       vc4_state->crtc_h);
+ 
++	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
++			       vc4_state->y_scaling[0] == VC4_SCALING_NONE);
++
+ 	if (num_planes > 1) {
+ 		vc4_state->is_yuv = true;
+ 
+@@ -312,24 +315,17 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 			vc4_get_scaling_mode(vc4_state->src_h[1],
+ 					     vc4_state->crtc_h);
+ 
+-		/* YUV conversion requires that scaling be enabled,
+-		 * even on a plane that's otherwise 1:1.  Choose TPZ
+-		 * for simplicity.
++		/* YUV conversion requires that horizontal scaling be enabled,
++		 * even on a plane that's otherwise 1:1. Looks like only PPF
++		 * works in that case, so let's pick that one.
+ 		 */
+-		if (vc4_state->x_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
+-		if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
++		if (vc4_state->is_unity)
++			vc4_state->x_scaling[0] = VC4_SCALING_PPF;
+ 	} else {
+ 		vc4_state->x_scaling[1] = VC4_SCALING_NONE;
+ 		vc4_state->y_scaling[1] = VC4_SCALING_NONE;
+ 	}
+ 
+-	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->x_scaling[1] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[1] == VC4_SCALING_NONE);
+-
+ 	/* No configuring scaling on the cursor plane, since it gets
+ 	   non-vblank-synced updates, and scaling requires requires
+ 	   LBM changes which have to be vblank-synced.
+@@ -621,7 +617,10 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 		vc4_dlist_write(vc4_state, SCALER_CSC2_ITR_R_601_5);
+ 	}
+ 
+-	if (!vc4_state->is_unity) {
++	if (vc4_state->x_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->x_scaling[1] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+ 		/* LBM Base Address. */
+ 		if (vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
+ 		    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index aef53305f1c3..d97581ae3bf9 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -1388,6 +1388,12 @@ static void flush_qp(struct c4iw_qp *qhp)
+ 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
+ 
+ 	if (qhp->ibqp.uobject) {
++
++		/* for user qps, qhp->wq.flushed is protected by qhp->mutex */
++		if (qhp->wq.flushed)
++			return;
++
++		qhp->wq.flushed = 1;
+ 		t4_set_wq_in_error(&qhp->wq);
+ 		t4_set_cq_in_error(&rchp->cq);
+ 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 5f8b583c6e41..f74166aa9a0d 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -45,6 +45,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/vmw_vmci_defs.h>
+ #include <linux/vmw_vmci_api.h>
++#include <linux/io.h>
+ #include <asm/hypervisor.h>
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
+index e84563d2067f..3463cd94a7f6 100644
+--- a/drivers/mtd/devices/m25p80.c
++++ b/drivers/mtd/devices/m25p80.c
+@@ -41,13 +41,23 @@ static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_IN(len, val, 1));
++					  SPI_MEM_OP_DATA_IN(len, NULL, 1));
++	void *scratchbuf;
+ 	int ret;
+ 
++	scratchbuf = kmalloc(len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.in = scratchbuf;
+ 	ret = spi_mem_exec_op(flash->spimem, &op);
+ 	if (ret < 0)
+ 		dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret,
+ 			code);
++	else
++		memcpy(val, scratchbuf, len);
++
++	kfree(scratchbuf);
+ 
+ 	return ret;
+ }
+@@ -58,9 +68,19 @@ static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_OUT(len, buf, 1));
++					  SPI_MEM_OP_DATA_OUT(len, NULL, 1));
++	void *scratchbuf;
++	int ret;
+ 
+-	return spi_mem_exec_op(flash->spimem, &op);
++	scratchbuf = kmemdup(buf, len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.out = scratchbuf;
++	ret = spi_mem_exec_op(flash->spimem, &op);
++	kfree(scratchbuf);
++
++	return ret;
+ }
+ 
+ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index 2a302a1d1430..c502075e5721 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -604,6 +604,12 @@ static int denali_dma_xfer(struct denali_nand_info *denali, void *buf,
+ 	}
+ 
+ 	iowrite32(DMA_ENABLE__FLAG, denali->reg + DMA_ENABLE);
++	/*
++	 * The ->setup_dma() hook kicks DMA by using the data/command
++	 * interface, which belongs to a different AXI port from the
++	 * register interface.  Read back the register to avoid a race.
++	 */
++	ioread32(denali->reg + DMA_ENABLE);
+ 
+ 	denali_reset_irq(denali);
+ 	denali->setup_dma(denali, dma_addr, page, write);
+diff --git a/drivers/net/appletalk/ipddp.c b/drivers/net/appletalk/ipddp.c
+index 9375cef22420..3d27616d9c85 100644
+--- a/drivers/net/appletalk/ipddp.c
++++ b/drivers/net/appletalk/ipddp.c
+@@ -283,8 +283,12 @@ static int ipddp_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+                 case SIOCFINDIPDDPRT:
+ 			spin_lock_bh(&ipddp_route_lock);
+ 			rp = __ipddp_find_route(&rcp);
+-			if (rp)
+-				memcpy(&rcp2, rp, sizeof(rcp2));
++			if (rp) {
++				memset(&rcp2, 0, sizeof(rcp2));
++				rcp2.ip    = rp->ip;
++				rcp2.at    = rp->at;
++				rcp2.flags = rp->flags;
++			}
+ 			spin_unlock_bh(&ipddp_route_lock);
+ 
+ 			if (rp) {
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
+index 7c791c1da4b9..bef01331266f 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.h
++++ b/drivers/net/dsa/mv88e6xxx/global1.h
+@@ -128,7 +128,7 @@
+ #define MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION		0x7000
+ #define MV88E6XXX_G1_ATU_OP_AGE_OUT_VIOLATION		BIT(7)
+ #define MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION		BIT(6)
+-#define MV88E6XXX_G1_ATU_OP_MISS_VIOLTATION		BIT(5)
++#define MV88E6XXX_G1_ATU_OP_MISS_VIOLATION		BIT(5)
+ #define MV88E6XXX_G1_ATU_OP_FULL_VIOLATION		BIT(4)
+ 
+ /* Offset 0x0C: ATU Data Register */
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+index 307410898fc9..5200e4bdce93 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+@@ -349,7 +349,7 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 		chip->ports[entry.portvec].atu_member_violation++;
+ 	}
+ 
+-	if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) {
++	if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) {
+ 		dev_err_ratelimited(chip->dev,
+ 				    "ATU miss violation for %pM portvec %x\n",
+ 				    entry.mac, entry.portvec);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4fdf3d33aa59..80b05597c5fe 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7888,7 +7888,7 @@ static int bnxt_change_mac_addr(struct net_device *dev, void *p)
+ 	if (ether_addr_equal(addr->sa_data, dev->dev_addr))
+ 		return 0;
+ 
+-	rc = bnxt_approve_mac(bp, addr->sa_data);
++	rc = bnxt_approve_mac(bp, addr->sa_data, true);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -8683,14 +8683,19 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ 	} else {
+ #ifdef CONFIG_BNXT_SRIOV
+ 		struct bnxt_vf_info *vf = &bp->vf;
++		bool strict_approval = true;
+ 
+ 		if (is_valid_ether_addr(vf->mac_addr)) {
+ 			/* overwrite netdev dev_addr with admin VF MAC */
+ 			memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
++			/* Older PF driver or firmware may not approve this
++			 * correctly.
++			 */
++			strict_approval = false;
+ 		} else {
+ 			eth_hw_addr_random(bp->dev);
+ 		}
+-		rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
++		rc = bnxt_approve_mac(bp, bp->dev->dev_addr, strict_approval);
+ #endif
+ 	}
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index 2c77004a022b..24d16d3d33a1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -1095,7 +1095,7 @@ update_vf_mac_exit:
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	struct hwrm_func_vf_cfg_input req = {0};
+ 	int rc = 0;
+@@ -1113,12 +1113,13 @@ int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
+ 	memcpy(req.dflt_mac_addr, mac, ETH_ALEN);
+ 	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ mac_done:
+-	if (rc) {
++	if (rc && strict) {
+ 		rc = -EADDRNOTAVAIL;
+ 		netdev_warn(bp->dev, "VF MAC address %pM not approved by the PF\n",
+ 			    mac);
++		return rc;
+ 	}
+-	return rc;
++	return 0;
+ }
+ #else
+ 
+@@ -1135,7 +1136,7 @@ void bnxt_update_vf_mac(struct bnxt *bp)
+ {
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+index e9b20cd19881..2eed9eda1195 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+@@ -39,5 +39,5 @@ int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs);
+ void bnxt_sriov_disable(struct bnxt *);
+ void bnxt_hwrm_exec_fwd_req(struct bnxt *);
+ void bnxt_update_vf_mac(struct bnxt *);
+-int bnxt_approve_mac(struct bnxt *, u8 *);
++int bnxt_approve_mac(struct bnxt *, u8 *, bool);
+ #endif
+diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c
+index c8c7ad2eff77..9b5a68b65432 100644
+--- a/drivers/net/ethernet/hp/hp100.c
++++ b/drivers/net/ethernet/hp/hp100.c
+@@ -2634,7 +2634,7 @@ static int hp100_login_to_vg_hub(struct net_device *dev, u_short force_relogin)
+ 		/* Wait for link to drop */
+ 		time = jiffies + (HZ / 10);
+ 		do {
+-			if (~(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
++			if (!(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
+ 				break;
+ 			if (!in_interrupt())
+ 				schedule_timeout_interruptible(1);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f7f08e3fa761..661fa5a38df2 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -61,6 +61,8 @@ static struct {
+  */
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 			     const struct phylink_link_state *state);
++static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
++			      phy_interface_t interface, struct phy_device *phy);
+ 
+ /* Queue modes */
+ #define MVPP2_QDIST_SINGLE_MODE	0
+@@ -3142,6 +3144,7 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		mvpp22_mode_reconfigure(port);
+ 
+ 	if (port->phylink) {
++		netif_carrier_off(port->dev);
+ 		phylink_start(port->phylink);
+ 	} else {
+ 		/* Phylink isn't used as of now for ACPI, so the MAC has to be
+@@ -3150,9 +3153,10 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		 */
+ 		struct phylink_link_state state = {
+ 			.interface = port->phy_interface,
+-			.link = 1,
+ 		};
+ 		mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state);
++		mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface,
++				  NULL);
+ 	}
+ 
+ 	netif_tx_start_all_queues(port->dev);
+@@ -4389,10 +4393,6 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 		return;
+ 	}
+ 
+-	netif_tx_stop_all_queues(port->dev);
+-	if (!port->has_phy)
+-		netif_carrier_off(port->dev);
+-
+ 	/* Make sure the port is disabled when reconfiguring the mode */
+ 	mvpp2_port_disable(port);
+ 
+@@ -4417,16 +4417,7 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 	if (port->priv->hw_version == MVPP21 && port->flags & MVPP2_F_LOOPBACK)
+ 		mvpp2_port_loopback_set(port, state);
+ 
+-	/* If the port already was up, make sure it's still in the same state */
+-	if (state->link || !port->has_phy) {
+-		mvpp2_port_enable(port);
+-
+-		mvpp2_egress_enable(port);
+-		mvpp2_ingress_enable(port);
+-		if (!port->has_phy)
+-			netif_carrier_on(dev);
+-		netif_tx_wake_all_queues(dev);
+-	}
++	mvpp2_port_enable(port);
+ }
+ 
+ static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 6d74cde68163..c0fc30a1f600 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2172,17 +2172,15 @@ static int netvsc_remove(struct hv_device *dev)
+ 
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+-	rcu_read_lock();
+-	nvdev = rcu_dereference(ndev_ctx->nvdev);
+-
+-	if  (nvdev)
++	rtnl_lock();
++	nvdev = rtnl_dereference(ndev_ctx->nvdev);
++	if (nvdev)
+ 		cancel_work_sync(&nvdev->subchan_work);
+ 
+ 	/*
+ 	 * Call to the vsc driver to let it know that the device is being
+ 	 * removed. Also blocks mtu and channel changes.
+ 	 */
+-	rtnl_lock();
+ 	vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
+ 	if (vf_netdev)
+ 		netvsc_unregister_vf(vf_netdev);
+@@ -2194,7 +2192,6 @@ static int netvsc_remove(struct hv_device *dev)
+ 	list_del(&ndev_ctx->list);
+ 
+ 	rtnl_unlock();
+-	rcu_read_unlock();
+ 
+ 	hv_set_drvdata(dev, NULL);
+ 
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index ce61231e96ea..62dc564b251d 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -429,6 +429,9 @@ static int pppoe_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb)
+ 		goto out;
+ 
++	if (skb_mac_header_len(skb) < ETH_HLEN)
++		goto drop;
++
+ 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
+ 		goto drop;
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index cb0cc30c3d6a..1e95d37c6e27 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1206,13 +1206,13 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1199, 0x9061, 8)},	/* Sierra Wireless Modem */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 8)},	/* Sierra Wireless EM7305 */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 10)},	/* Sierra Wireless EM7305 */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 10)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 10)},/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x011e, 4)},	/* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index c2b6aa1d485f..f49c2a60a6eb 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -907,7 +907,11 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ 			BUG_ON(pull_to <= skb_headlen(skb));
+ 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ 		}
+-		BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
++		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
++			queue->rx.rsp_cons = ++cons;
++			kfree_skb(nskb);
++			return ~0U;
++		}
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ 				skb_frag_page(nfrag),
+@@ -1044,6 +1048,8 @@ err:
+ 		skb->len += rx->status;
+ 
+ 		i = xennet_fill_frags(queue, skb, &tmpq);
++		if (unlikely(i == ~0U))
++			goto err;
+ 
+ 		if (rx->flags & XEN_NETRXF_csum_blank)
+ 			skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index f439de848658..d1e2d175c10b 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4235,11 +4235,6 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+  *
+  * 0x9d10-0x9d1b PCI Express Root port #{1-12}
+  *
+- * The 300 series chipset suffers from the same bug so include those root
+- * ports here as well.
+- *
+- * 0xa32c-0xa343 PCI Express Root port #{0-24}
+- *
+  * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
+  * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
+  * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
+@@ -4257,7 +4252,6 @@ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ 	case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
+ 	case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
+ 	case 0x9d10 ... 0x9d1b: /* 7th & 8th Gen Mobile */
+-	case 0xa32c ... 0xa343:				/* 300 series */
+ 		return true;
+ 	}
+ 
+diff --git a/drivers/platform/x86/alienware-wmi.c b/drivers/platform/x86/alienware-wmi.c
+index d975462a4c57..f10af5c383c5 100644
+--- a/drivers/platform/x86/alienware-wmi.c
++++ b/drivers/platform/x86/alienware-wmi.c
+@@ -536,6 +536,7 @@ static acpi_status alienware_wmax_command(struct wmax_basic_args *in_args,
+ 		if (obj && obj->type == ACPI_TYPE_INTEGER)
+ 			*out_data = (u32) obj->integer.value;
+ 	}
++	kfree(output.pointer);
+ 	return status;
+ 
+ }
+diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
+index fbefedb1c172..548abba2c1e9 100644
+--- a/drivers/platform/x86/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell-smbios-wmi.c
+@@ -78,6 +78,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 	dev_dbg(&wdev->dev, "result: [%08x,%08x,%08x,%08x]\n",
+ 		priv->buf->std.output[0], priv->buf->std.output[1],
+ 		priv->buf->std.output[2], priv->buf->std.output[3]);
++	kfree(output.pointer);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 8122807db380..b714a543a91d 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,7 +15,6 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
+-#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -450,10 +449,6 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
+-	err = dev_pm_domain_attach(dev, true);
+-	if (err)
+-		goto out;
+-
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -495,8 +490,6 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
+-	dev_pm_domain_detach(dev, true);
+-
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index ec395a6baf9c..9da0bc5a036c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2143,8 +2143,17 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	 */
+ 	if (ctlr->num_chipselect == 0)
+ 		return -EINVAL;
+-	/* allocate dynamic bus number using Linux idr */
+-	if ((ctlr->bus_num < 0) && ctlr->dev.of_node) {
++	if (ctlr->bus_num >= 0) {
++		/* devices with a fixed bus num must check-in with the num */
++		mutex_lock(&board_lock);
++		id = idr_alloc(&spi_master_idr, ctlr, ctlr->bus_num,
++			ctlr->bus_num + 1, GFP_KERNEL);
++		mutex_unlock(&board_lock);
++		if (WARN(id < 0, "couldn't get idr"))
++			return id == -ENOSPC ? -EBUSY : id;
++		ctlr->bus_num = id;
++	} else if (ctlr->dev.of_node) {
++		/* allocate dynamic bus number using Linux idr */
+ 		id = of_alias_get_id(ctlr->dev.of_node, "spi");
+ 		if (id >= 0) {
+ 			ctlr->bus_num = id;
+diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
+index 9518ffd8b8ba..4e680d753941 100644
+--- a/drivers/target/iscsi/iscsi_target_auth.c
++++ b/drivers/target/iscsi/iscsi_target_auth.c
+@@ -26,27 +26,6 @@
+ #include "iscsi_target_nego.h"
+ #include "iscsi_target_auth.h"
+ 
+-static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len)
+-{
+-	int j = DIV_ROUND_UP(len, 2), rc;
+-
+-	rc = hex2bin(dst, src, j);
+-	if (rc < 0)
+-		pr_debug("CHAP string contains non hex digit symbols\n");
+-
+-	dst[j] = '\0';
+-	return j;
+-}
+-
+-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
+-{
+-	int i;
+-
+-	for (i = 0; i < src_len; i++) {
+-		sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
+-	}
+-}
+-
+ static int chap_gen_challenge(
+ 	struct iscsi_conn *conn,
+ 	int caller,
+@@ -62,7 +41,7 @@ static int chap_gen_challenge(
+ 	ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+ 	if (unlikely(ret))
+ 		return ret;
+-	chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
++	bin2hex(challenge_asciihex, chap->challenge,
+ 				CHAP_CHALLENGE_LENGTH);
+ 	/*
+ 	 * Set CHAP_C, and copy the generated challenge into c_str.
+@@ -248,9 +227,16 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_R.\n");
+ 		goto out;
+ 	}
++	if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
++	if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
+ 
+ 	pr_debug("[server] Got CHAP_R=%s\n", chap_r);
+-	chap_string_to_hex(client_digest, chap_r, strlen(chap_r));
+ 
+ 	tfm = crypto_alloc_shash("md5", 0, 0);
+ 	if (IS_ERR(tfm)) {
+@@ -294,7 +280,7 @@ static int chap_server_compute_md5(
+ 		goto out;
+ 	}
+ 
+-	chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
+ 	pr_debug("[server] MD5 Server Digest: %s\n", response);
+ 
+ 	if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
+@@ -349,9 +335,7 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_C.\n");
+ 		goto out;
+ 	}
+-	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+-	challenge_len = chap_string_to_hex(challenge_binhex, challenge,
+-				strlen(challenge));
++	challenge_len = DIV_ROUND_UP(strlen(challenge), 2);
+ 	if (!challenge_len) {
+ 		pr_err("Unable to convert incoming challenge\n");
+ 		goto out;
+@@ -360,6 +344,11 @@ static int chap_server_compute_md5(
+ 		pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");
+ 		goto out;
+ 	}
++	if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) {
++		pr_err("Malformed CHAP_C\n");
++		goto out;
++	}
++	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+ 	/*
+ 	 * During mutual authentication, the CHAP_C generated by the
+ 	 * initiator must not match the original CHAP_C generated by
+@@ -413,7 +402,7 @@ static int chap_server_compute_md5(
+ 	/*
+ 	 * Convert response from binary hex to ascii hext.
+ 	 */
+-	chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, digest, MD5_SIGNATURE_SIZE);
+ 	*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
+ 			response);
+ 	*nr_out_len += 1;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index a78ad10a119b..73cdc0d633dd 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -32,6 +32,8 @@
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/kbd_kern.h>
+ #include <linux/vt_kern.h>
+ #include <linux/kbd_diacr.h>
+@@ -700,6 +702,8 @@ int vt_ioctl(struct tty_struct *tty,
+ 		if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES)
+ 			ret = -ENXIO;
+ 		else {
++			vsa.console = array_index_nospec(vsa.console,
++							 MAX_NR_CONSOLES + 1);
+ 			vsa.console--;
+ 			console_lock();
+ 			ret = vc_allocate(vsa.console);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index e2902d394f1b..f93f9881ec18 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -76,7 +76,7 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 	else if (unlikely(rlen < EXT4_DIR_REC_LEN(de->name_len)))
+ 		error_msg = "rec_len is too small for name_len";
+ 	else if (unlikely(((char *) de - buf) + rlen > size))
+-		error_msg = "directory entry across range";
++		error_msg = "directory entry overrun";
+ 	else if (unlikely(le32_to_cpu(de->inode) >
+ 			le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))
+ 		error_msg = "inode out of bounds";
+@@ -85,18 +85,16 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 
+ 	if (filp)
+ 		ext4_error_file(filp, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				error_msg, offset, le32_to_cpu(de->inode),
++				rlen, de->name_len, size);
+ 	else
+ 		ext4_error_inode(dir, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				 error_msg, offset, le32_to_cpu(de->inode),
++				 rlen, de->name_len, size);
+ 
+ 	return 1;
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 7c7123f265c2..aa1ce53d0c87 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -675,6 +675,9 @@ enum {
+ /* Max physical block we can address w/o extents */
+ #define EXT4_MAX_BLOCK_FILE_PHYS	0xFFFFFFFF
+ 
++/* Max logical block we can support */
++#define EXT4_MAX_LOGICAL_BLOCK		0xFFFFFFFF
++
+ /*
+  * Structure of an inode on the disk
+  */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 3543fe80a3c4..7b4736022761 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1753,6 +1753,7 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ {
+ 	int err, inline_size;
+ 	struct ext4_iloc iloc;
++	size_t inline_len;
+ 	void *inline_pos;
+ 	unsigned int offset;
+ 	struct ext4_dir_entry_2 *de;
+@@ -1780,8 +1781,9 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 		goto out;
+ 	}
+ 
++	inline_len = ext4_get_inline_size(dir);
+ 	offset = EXT4_INLINE_DOTDOT_SIZE;
+-	while (offset < dir->i_size) {
++	while (offset < inline_len) {
+ 		de = ext4_get_inline_entry(dir, &iloc, offset,
+ 					   &inline_pos, &inline_size);
+ 		if (ext4_check_dir_entry(dir, NULL, de,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4efe77286ecd..2276137d0083 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3412,12 +3412,16 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	unsigned int blkbits = inode->i_blkbits;
+-	unsigned long first_block = offset >> blkbits;
+-	unsigned long last_block = (offset + length - 1) >> blkbits;
++	unsigned long first_block, last_block;
+ 	struct ext4_map_blocks map;
+ 	bool delalloc = false;
+ 	int ret;
+ 
++	if ((offset >> blkbits) > EXT4_MAX_LOGICAL_BLOCK)
++		return -EINVAL;
++	first_block = offset >> blkbits;
++	last_block = min_t(loff_t, (offset + length - 1) >> blkbits,
++			   EXT4_MAX_LOGICAL_BLOCK);
+ 
+ 	if (flags & IOMAP_REPORT) {
+ 		if (ext4_has_inline_data(inode)) {
+@@ -3947,6 +3951,7 @@ static const struct address_space_operations ext4_dax_aops = {
+ 	.writepages		= ext4_dax_writepages,
+ 	.direct_IO		= noop_direct_IO,
+ 	.set_page_dirty		= noop_set_page_dirty,
++	.bmap			= ext4_bmap,
+ 	.invalidatepage		= noop_invalidatepage,
+ };
+ 
+@@ -4856,6 +4861,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		 * not initialized on a new filesystem. */
+ 	}
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext4_set_inode_flags(inode);
+ 	inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
+ 	ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
+ 	if (ext4_has_feature_64bit(sb))
+@@ -5005,7 +5011,6 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		goto bad_inode;
+ 	}
+ 	brelse(iloc.bh);
+-	ext4_set_inode_flags(inode);
+ 
+ 	unlock_new_inode(inode);
+ 	return inode;
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 638ad4743477..38e6a846aac1 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -49,7 +49,6 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
+ 	 */
+ 	sb_start_write(sb);
+ 	ext4_mmp_csum_set(sb, mmp);
+-	mark_buffer_dirty(bh);
+ 	lock_buffer(bh);
+ 	bh->b_end_io = end_buffer_write_sync;
+ 	get_bh(bh);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 116ff68c5bd4..377d516c475f 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3478,6 +3478,12 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	int credits;
+ 	u8 old_file_type;
+ 
++	if (new.inode && new.inode->i_nlink == 0) {
++		EXT4_ERROR_INODE(new.inode,
++				 "target of rename is already freed");
++		return -EFSCORRUPTED;
++	}
++
+ 	if ((ext4_test_inode_flag(new_dir, EXT4_INODE_PROJINHERIT)) &&
+ 	    (!projid_eq(EXT4_I(new_dir)->i_projid,
+ 			EXT4_I(old_dentry->d_inode)->i_projid)))
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e5fb38451a73..ebbc663d0798 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -19,6 +19,7 @@
+ 
+ int ext4_resize_begin(struct super_block *sb)
+ {
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_RESOURCE))
+@@ -29,7 +30,7 @@ int ext4_resize_begin(struct super_block *sb)
+          * because the user tools have no way of handling this.  Probably a
+          * bad time to do it anyways.
+          */
+-	if (EXT4_SB(sb)->s_sbh->b_blocknr !=
++	if (EXT4_B2C(sbi, sbi->s_sbh->b_blocknr) !=
+ 	    le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {
+ 		ext4_warning(sb, "won't resize using backup superblock at %llu",
+ 			(unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);
+@@ -1986,6 +1987,26 @@ retry:
+ 		}
+ 	}
+ 
++	/*
++	 * Make sure the last group has enough space so that it's
++	 * guaranteed to have enough space for all metadata blocks
++	 * that it might need to hold.  (We might not need to store
++	 * the inode table blocks in the last block group, but there
++	 * will be cases where this might be needed.)
++	 */
++	if ((ext4_group_first_block_no(sb, n_group) +
++	     ext4_group_overhead_blocks(sb, n_group) + 2 +
++	     sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) {
++		n_blocks_count = ext4_group_first_block_no(sb, n_group);
++		n_group--;
++		n_blocks_count_retry = 0;
++		if (resize_inode) {
++			iput(resize_inode);
++			resize_inode = NULL;
++		}
++		goto retry;
++	}
++
+ 	/* extend the last group */
+ 	if (n_group == o_group)
+ 		add = n_blocks_count - o_blocks_count;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 130c12974e28..a7a0fffc3ae8 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2126,6 +2126,8 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
+ 		SEQ_OPTS_PRINT("max_dir_size_kb=%u", sbi->s_max_dir_size_kb);
+ 	if (test_opt(sb, DATA_ERR_ABORT))
+ 		SEQ_OPTS_PUTS("data_err=abort");
++	if (DUMMY_ENCRYPTION_ENABLED(sbi))
++		SEQ_OPTS_PUTS("test_dummy_encryption");
+ 
+ 	ext4_show_quota_options(seq, sb);
+ 	return 0;
+@@ -4357,11 +4359,13 @@ no_journal:
+ 	block = ext4_count_free_clusters(sb);
+ 	ext4_free_blocks_count_set(sbi->s_es, 
+ 				   EXT4_C2B(sbi, block));
++	ext4_superblock_csum_set(sb);
+ 	err = percpu_counter_init(&sbi->s_freeclusters_counter, block,
+ 				  GFP_KERNEL);
+ 	if (!err) {
+ 		unsigned long freei = ext4_count_free_inodes(sb);
+ 		sbi->s_es->s_free_inodes_count = cpu_to_le32(freei);
++		ext4_superblock_csum_set(sb);
+ 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ 					  GFP_KERNEL);
+ 	}
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index d9ebe11c8990..1d098c3c00e0 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -342,6 +342,7 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ 				 * for this bh as it's not marked locally
+ 				 * uptodate. */
+ 				status = -EIO;
++				clear_buffer_needs_validate(bh);
+ 				put_bh(bh);
+ 				bhs[i] = NULL;
+ 				continue;
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 09e37e63bddd..6f720fdf5020 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,12 +152,6 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -190,7 +184,6 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -242,12 +235,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -269,7 +256,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -496,12 +482,6 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -521,7 +501,6 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -561,9 +540,6 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
+-	if (!host->i_nlink)
+-		return -ENOENT;
+-
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/include/net/nfc/hci.h b/include/net/nfc/hci.h
+index 316694dafa5b..008f466d1da7 100644
+--- a/include/net/nfc/hci.h
++++ b/include/net/nfc/hci.h
+@@ -87,7 +87,7 @@ struct nfc_hci_pipe {
+  * According to specification 102 622 chapter 4.4 Pipes,
+  * the pipe identifier is 7 bits long.
+  */
+-#define NFC_HCI_MAX_PIPES		127
++#define NFC_HCI_MAX_PIPES		128
+ struct nfc_hci_init_data {
+ 	u8 gate_count;
+ 	struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES];
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 70c273777fe9..32b71e5b1290 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -165,15 +165,14 @@ struct cipher_context {
+ 	char *rec_seq;
+ };
+ 
++union tls_crypto_context {
++	struct tls_crypto_info info;
++	struct tls12_crypto_info_aes_gcm_128 aes_gcm_128;
++};
++
+ struct tls_context {
+-	union {
+-		struct tls_crypto_info crypto_send;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_send_aes_gcm_128;
+-	};
+-	union {
+-		struct tls_crypto_info crypto_recv;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_recv_aes_gcm_128;
+-	};
++	union tls_crypto_context crypto_send;
++	union tls_crypto_context crypto_recv;
+ 
+ 	struct list_head list;
+ 	struct net_device *netdev;
+@@ -337,8 +336,8 @@ static inline void tls_fill_prepend(struct tls_context *ctx,
+ 	 * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
+ 	 */
+ 	buf[0] = record_type;
+-	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.version);
+-	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.version);
++	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.info.version);
++	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.info.version);
+ 	/* we can use IV for nonce explicit according to spec */
+ 	buf[3] = pkt_len >> 8;
+ 	buf[4] = pkt_len & 0xFF;
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 910cc4334b21..7b8c9e19bad1 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 dh_private;
++	__s32 private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/include/uapi/sound/skl-tplg-interface.h b/include/uapi/sound/skl-tplg-interface.h
+index f58cafa42f18..f39352cef382 100644
+--- a/include/uapi/sound/skl-tplg-interface.h
++++ b/include/uapi/sound/skl-tplg-interface.h
+@@ -10,6 +10,8 @@
+ #ifndef __HDA_TPLG_INTERFACE_H__
+ #define __HDA_TPLG_INTERFACE_H__
+ 
++#include <linux/types.h>
++
+ /*
+  * Default types range from 0~12. type can range from 0 to 0xff
+  * SST types start at higher to avoid any overlapping in future
+@@ -143,10 +145,10 @@ enum skl_module_param_type {
+ };
+ 
+ struct skl_dfw_algo_data {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 max;
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 max;
+ 	char params[0];
+ } __packed;
+ 
+@@ -163,68 +165,68 @@ enum skl_tuple_type {
+ /* v4 configuration data */
+ 
+ struct skl_dfw_v4_module_pin {
+-	u16 module_id;
+-	u16 instance_id;
++	__u16 module_id;
++	__u16 instance_id;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_fmt {
+-	u32 channels;
+-	u32 freq;
+-	u32 bit_depth;
+-	u32 valid_bit_depth;
+-	u32 ch_cfg;
+-	u32 interleaving_style;
+-	u32 sample_type;
+-	u32 ch_map;
++	__u32 channels;
++	__u32 freq;
++	__u32 bit_depth;
++	__u32 valid_bit_depth;
++	__u32 ch_cfg;
++	__u32 interleaving_style;
++	__u32 sample_type;
++	__u32 ch_map;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_caps {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 caps_size;
+-	u32 caps[HDA_SST_CFG_MAX];
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 caps_size;
++	__u32 caps[HDA_SST_CFG_MAX];
+ } __packed;
+ 
+ struct skl_dfw_v4_pipe {
+-	u8 pipe_id;
+-	u8 pipe_priority;
+-	u16 conn_type:4;
+-	u16 rsvd:4;
+-	u16 memory_pages:8;
++	__u8 pipe_id;
++	__u8 pipe_priority;
++	__u16 conn_type:4;
++	__u16 rsvd:4;
++	__u16 memory_pages:8;
+ } __packed;
+ 
+ struct skl_dfw_v4_module {
+ 	char uuid[SKL_UUID_STR_SZ];
+ 
+-	u16 module_id;
+-	u16 instance_id;
+-	u32 max_mcps;
+-	u32 mem_pages;
+-	u32 obs;
+-	u32 ibs;
+-	u32 vbus_id;
+-
+-	u32 max_in_queue:8;
+-	u32 max_out_queue:8;
+-	u32 time_slot:8;
+-	u32 core_id:4;
+-	u32 rsvd1:4;
+-
+-	u32 module_type:8;
+-	u32 conn_type:4;
+-	u32 dev_type:4;
+-	u32 hw_conn_type:4;
+-	u32 rsvd2:12;
+-
+-	u32 params_fixup:8;
+-	u32 converter:8;
+-	u32 input_pin_type:1;
+-	u32 output_pin_type:1;
+-	u32 is_dynamic_in_pin:1;
+-	u32 is_dynamic_out_pin:1;
+-	u32 is_loadable:1;
+-	u32 rsvd3:11;
++	__u16 module_id;
++	__u16 instance_id;
++	__u32 max_mcps;
++	__u32 mem_pages;
++	__u32 obs;
++	__u32 ibs;
++	__u32 vbus_id;
++
++	__u32 max_in_queue:8;
++	__u32 max_out_queue:8;
++	__u32 time_slot:8;
++	__u32 core_id:4;
++	__u32 rsvd1:4;
++
++	__u32 module_type:8;
++	__u32 conn_type:4;
++	__u32 dev_type:4;
++	__u32 hw_conn_type:4;
++	__u32 rsvd2:12;
++
++	__u32 params_fixup:8;
++	__u32 converter:8;
++	__u32 input_pin_type:1;
++	__u32 output_pin_type:1;
++	__u32 is_dynamic_in_pin:1;
++	__u32 is_dynamic_out_pin:1;
++	__u32 is_loadable:1;
++	__u32 rsvd3:11;
+ 
+ 	struct skl_dfw_v4_pipe pipe;
+ 	struct skl_dfw_v4_module_fmt in_fmt[MAX_IN_QUEUE];
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 63aaac52a265..adbe21c8876e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3132,7 +3132,7 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
+ 				 * an arbitrary scalar. Disallow all math except
+ 				 * pointer subtraction
+ 				 */
+-				if (opcode == BPF_SUB){
++				if (opcode == BPF_SUB && env->allow_ptr_leaks) {
+ 					mark_reg_unknown(env, regs, insn->dst_reg);
+ 					return 0;
+ 				}
+diff --git a/kernel/pid.c b/kernel/pid.c
+index 157fe4b19971..2ff2d8bfa4e0 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -195,7 +195,7 @@ struct pid *alloc_pid(struct pid_namespace *ns)
+ 		idr_preload_end();
+ 
+ 		if (nr < 0) {
+-			retval = nr;
++			retval = (nr == -ENOSPC) ? -EAGAIN : nr;
+ 			goto out_free;
+ 		}
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 478d9d3e6be9..26526fc41f0d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10019,7 +10019,8 @@ static inline bool vruntime_normalized(struct task_struct *p)
+ 	 * - A task which has been woken up by try_to_wake_up() and
+ 	 *   waiting for actually being woken up by sched_ttwu_pending().
+ 	 */
+-	if (!se->sum_exec_runtime || p->state == TASK_WAKING)
++	if (!se->sum_exec_runtime ||
++	    (p->state == TASK_WAKING && p->sched_remote_wakeup))
+ 		return true;
+ 
+ 	return false;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0b0b688ea166..e58fd35ff64a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1545,6 +1545,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 	tmp_iter_page = first_page;
+ 
+ 	do {
++		cond_resched();
++
+ 		to_remove_page = tmp_iter_page;
+ 		rb_inc_page(cpu_buffer, &tmp_iter_page);
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 94af022b7f3d..22e949e263f0 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -637,6 +637,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	depends on NO_BOOTMEM
+ 	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
++	depends on 64BIT
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+ 	  single thread. On very large machines this can take a considerable
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 41b9bbf24e16..8264bbdbb6a5 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2226,6 +2226,8 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
+ 			mpol_shared_policy_init(&info->policy, NULL);
+ 			break;
+ 		}
++
++		lockdep_annotate_inode_mutex_key(inode);
+ 	} else
+ 		shmem_free_inode(sb);
+ 	return inode;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 8e3fda9e725c..cb01d509d511 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1179,6 +1179,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		lladdr = neigh->ha;
+ 	}
+ 
++	/* Update confirmed timestamp for neighbour entry after we
++	 * received ARP packet even if it doesn't change IP to MAC binding.
++	 */
++	if (new & NUD_CONNECTED)
++		neigh->confirmed = jiffies;
++
+ 	/* If entry was valid and address is not changed,
+ 	   do not change entry state, if new one is STALE.
+ 	 */
+@@ -1200,15 +1206,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		}
+ 	}
+ 
+-	/* Update timestamps only once we know we will make a change to the
++	/* Update timestamp only once we know we will make a change to the
+ 	 * neighbour entry. Otherwise we risk to move the locktime window with
+ 	 * noop updates and ignore relevant ARP updates.
+ 	 */
+-	if (new != old || lladdr != neigh->ha) {
+-		if (new & NUD_CONNECTED)
+-			neigh->confirmed = jiffies;
++	if (new != old || lladdr != neigh->ha)
+ 		neigh->updated = jiffies;
+-	}
+ 
+ 	if (new != old) {
+ 		neigh_del_timer(neigh);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index e3f743c141b3..bafaa033826f 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2760,7 +2760,7 @@ int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm)
+ 	}
+ 
+ 	if (dev->rtnl_link_state == RTNL_LINK_INITIALIZED) {
+-		__dev_notify_flags(dev, old_flags, 0U);
++		__dev_notify_flags(dev, old_flags, (old_flags ^ dev->flags));
+ 	} else {
+ 		dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
+ 		__dev_notify_flags(dev, old_flags, ~0U);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b403499fdabe..0c43b050dac7 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1377,6 +1377,7 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 		if (encap)
+ 			skb_reset_inner_headers(skb);
+ 		skb->network_header = (u8 *)iph - skb->head;
++		skb_reset_mac_len(skb);
+ 	} while ((skb = skb->next));
+ 
+ out:
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 24e116ddae79..fed65bc9df86 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2128,6 +2128,28 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 							 inet_compute_pseudo);
+ }
+ 
++/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++			       struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 inet_compute_pseudo);
++
++	ret = udp_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ /*
+  *	All we need to do is get the socket, and then do a checksum.
+  */
+@@ -2174,14 +2196,9 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udp_queue_rcv_skb(sk, skb);
++		ret = udp_unicast_rcv_skb(sk, skb, uh);
+ 		sock_put(sk);
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
++		return ret;
+ 	}
+ 
+ 	if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))
+@@ -2189,22 +2206,8 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 						saddr, daddr, udptable, proto);
+ 
+ 	sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+-	if (sk) {
+-		int ret;
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 inet_compute_pseudo);
+-
+-		ret = udp_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
+-	}
++	if (sk)
++		return udp_unicast_rcv_skb(sk, skb, uh);
+ 
+ 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto drop;
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 5b3f2f89ef41..c6b75e96868c 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -115,6 +115,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 			payload_len = skb->len - nhoff - sizeof(*ipv6h);
+ 		ipv6h->payload_len = htons(payload_len);
+ 		skb->network_header = (u8 *)ipv6h - skb->head;
++		skb_reset_mac_len(skb);
+ 
+ 		if (udpfrag) {
+ 			int err = ip6_find_1stfragopt(skb, &prevhdr);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 3168847c30d1..4f607aace43c 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -219,12 +219,10 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
+ 				kfree_skb(skb);
+ 				return -ENOBUFS;
+ 			}
++			if (skb->sk)
++				skb_set_owner_w(skb2, skb->sk);
+ 			consume_skb(skb);
+ 			skb = skb2;
+-			/* skb_set_owner_w() changes sk->sk_wmem_alloc atomically,
+-			 * it is safe to call in our context (socket lock not held)
+-			 */
+-			skb_set_owner_w(skb, (struct sock *)sk);
+ 		}
+ 		if (opt->opt_flen)
+ 			ipv6_push_frag_opts(skb, opt, &proto);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 18e00ce1719a..480a79f47c52 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -946,8 +946,6 @@ static void ip6_rt_init_dst_reject(struct rt6_info *rt, struct fib6_info *ort)
+ 
+ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ {
+-	rt->dst.flags |= fib6_info_dst_flags(ort);
+-
+ 	if (ort->fib6_flags & RTF_REJECT) {
+ 		ip6_rt_init_dst_reject(rt, ort);
+ 		return;
+@@ -4670,20 +4668,31 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			 int iif, int type, u32 portid, u32 seq,
+ 			 unsigned int flags)
+ {
+-	struct rtmsg *rtm;
++	struct rt6_info *rt6 = (struct rt6_info *)dst;
++	struct rt6key *rt6_dst, *rt6_src;
++	u32 *pmetrics, table, rt6_flags;
+ 	struct nlmsghdr *nlh;
++	struct rtmsg *rtm;
+ 	long expires = 0;
+-	u32 *pmetrics;
+-	u32 table;
+ 
+ 	nlh = nlmsg_put(skb, portid, seq, type, sizeof(*rtm), flags);
+ 	if (!nlh)
+ 		return -EMSGSIZE;
+ 
++	if (rt6) {
++		rt6_dst = &rt6->rt6i_dst;
++		rt6_src = &rt6->rt6i_src;
++		rt6_flags = rt6->rt6i_flags;
++	} else {
++		rt6_dst = &rt->fib6_dst;
++		rt6_src = &rt->fib6_src;
++		rt6_flags = rt->fib6_flags;
++	}
++
+ 	rtm = nlmsg_data(nlh);
+ 	rtm->rtm_family = AF_INET6;
+-	rtm->rtm_dst_len = rt->fib6_dst.plen;
+-	rtm->rtm_src_len = rt->fib6_src.plen;
++	rtm->rtm_dst_len = rt6_dst->plen;
++	rtm->rtm_src_len = rt6_src->plen;
+ 	rtm->rtm_tos = 0;
+ 	if (rt->fib6_table)
+ 		table = rt->fib6_table->tb6_id;
+@@ -4698,7 +4707,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	rtm->rtm_scope = RT_SCOPE_UNIVERSE;
+ 	rtm->rtm_protocol = rt->fib6_protocol;
+ 
+-	if (rt->fib6_flags & RTF_CACHE)
++	if (rt6_flags & RTF_CACHE)
+ 		rtm->rtm_flags |= RTM_F_CLONED;
+ 
+ 	if (dest) {
+@@ -4706,7 +4715,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_dst_len = 128;
+ 	} else if (rtm->rtm_dst_len)
+-		if (nla_put_in6_addr(skb, RTA_DST, &rt->fib6_dst.addr))
++		if (nla_put_in6_addr(skb, RTA_DST, &rt6_dst->addr))
+ 			goto nla_put_failure;
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	if (src) {
+@@ -4714,12 +4723,12 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_src_len = 128;
+ 	} else if (rtm->rtm_src_len &&
+-		   nla_put_in6_addr(skb, RTA_SRC, &rt->fib6_src.addr))
++		   nla_put_in6_addr(skb, RTA_SRC, &rt6_src->addr))
+ 		goto nla_put_failure;
+ #endif
+ 	if (iif) {
+ #ifdef CONFIG_IPV6_MROUTE
+-		if (ipv6_addr_is_multicast(&rt->fib6_dst.addr)) {
++		if (ipv6_addr_is_multicast(&rt6_dst->addr)) {
+ 			int err = ip6mr_get_route(net, skb, rtm, portid);
+ 
+ 			if (err == 0)
+@@ -4754,7 +4763,14 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	/* For multipath routes, walk the siblings list and add
+ 	 * each as a nexthop within RTA_MULTIPATH.
+ 	 */
+-	if (rt->fib6_nsiblings) {
++	if (rt6) {
++		if (rt6_flags & RTF_GATEWAY &&
++		    nla_put_in6_addr(skb, RTA_GATEWAY, &rt6->rt6i_gateway))
++			goto nla_put_failure;
++
++		if (dst->dev && nla_put_u32(skb, RTA_OIF, dst->dev->ifindex))
++			goto nla_put_failure;
++	} else if (rt->fib6_nsiblings) {
+ 		struct fib6_info *sibling, *next_sibling;
+ 		struct nlattr *mp;
+ 
+@@ -4777,7 +4793,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 	}
+ 
+-	if (rt->fib6_flags & RTF_EXPIRES) {
++	if (rt6_flags & RTF_EXPIRES) {
+ 		expires = dst ? dst->expires : rt->expires;
+ 		expires -= jiffies;
+ 	}
+@@ -4785,7 +4801,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->fib6_flags)))
++	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt6_flags)))
+ 		goto nla_put_failure;
+ 
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index e6645cae403e..39d0cab919bb 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -748,6 +748,28 @@ static void udp6_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	}
+ }
+ 
++/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++				struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 ip6_compute_pseudo);
++
++	ret = udpv6_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		   int proto)
+ {
+@@ -799,13 +821,14 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp6_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-		sock_put(sk);
++		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
++			sock_put(sk);
++			goto report_csum_error;
++		}
+ 
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-		return 0;
++		ret = udp6_unicast_rcv_skb(sk, skb, uh);
++		sock_put(sk);
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -818,30 +841,13 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 	/* Unicast */
+ 	sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+ 	if (sk) {
+-		int ret;
+-
+-		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
+-			udp6_csum_zero_error(skb);
+-			goto csum_error;
+-		}
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 ip6_compute_pseudo);
+-
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-
+-		return 0;
++		if (!uh->check && !udp_sk(sk)->no_check6_rx)
++			goto report_csum_error;
++		return udp6_unicast_rcv_skb(sk, skb, uh);
+ 	}
+ 
+-	if (!uh->check) {
+-		udp6_csum_zero_error(skb);
+-		goto csum_error;
+-	}
++	if (!uh->check)
++		goto report_csum_error;
+ 
+ 	if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto discard;
+@@ -862,6 +868,9 @@ short_packet:
+ 			    ulen, skb->len,
+ 			    daddr, ntohs(uh->dest));
+ 	goto discard;
++
++report_csum_error:
++	udp6_csum_zero_error(skb);
+ csum_error:
+ 	__UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
+ discard:
+diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c
+index ac8030c4bcf8..19cb2e473ea6 100644
+--- a/net/nfc/hci/core.c
++++ b/net/nfc/hci/core.c
+@@ -209,6 +209,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		create_info = (struct hci_create_pipe_resp *)skb->data;
+ 
++		if (create_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		/* Save the new created pipe and bind with local gate,
+ 		 * the description for skb->data[3] is destination gate id
+ 		 * but since we received this cmd from host controller, we
+@@ -232,6 +237,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		delete_info = (struct hci_delete_pipe_noti *)skb->data;
+ 
++		if (delete_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		hdev->pipes[delete_info->pipe].gate = NFC_HCI_INVALID_GATE;
+ 		hdev->pipes[delete_info->pipe].dest_host = NFC_HCI_INVALID_HOST;
+ 		break;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 5db358497c9e..e0e334a3a6e1 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -64,7 +64,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, parm->index, est, a,
+-				     &act_sample_ops, bind, false);
++				     &act_sample_ops, bind, true);
+ 		if (ret)
+ 			return ret;
+ 		ret = ACT_P_CREATED;
+diff --git a/net/socket.c b/net/socket.c
+index 4ac3b834cce9..d4187ac17d55 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -962,7 +962,8 @@ void dlci_ioctl_set(int (*hook) (unsigned int, void __user *))
+ EXPORT_SYMBOL(dlci_ioctl_set);
+ 
+ static long sock_do_ioctl(struct net *net, struct socket *sock,
+-				 unsigned int cmd, unsigned long arg)
++			  unsigned int cmd, unsigned long arg,
++			  unsigned int ifreq_size)
+ {
+ 	int err;
+ 	void __user *argp = (void __user *)arg;
+@@ -988,11 +989,11 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 	} else {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+-		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
++		if (copy_from_user(&ifr, argp, ifreq_size))
+ 			return -EFAULT;
+ 		err = dev_ioctl(net, cmd, &ifr, &need_copyout);
+ 		if (!err && need_copyout)
+-			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
++			if (copy_to_user(argp, &ifr, ifreq_size))
+ 				return -EFAULT;
+ 	}
+ 	return err;
+@@ -1091,7 +1092,8 @@ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ 			err = open_related_ns(&net->ns, get_net_ns);
+ 			break;
+ 		default:
+-			err = sock_do_ioctl(net, sock, cmd, arg);
++			err = sock_do_ioctl(net, sock, cmd, arg,
++					    sizeof(struct ifreq));
+ 			break;
+ 		}
+ 	return err;
+@@ -2762,7 +2764,8 @@ static int do_siocgstamp(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timeval(&ktv, up);
+@@ -2778,7 +2781,8 @@ static int do_siocgstampns(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timespec(&kts, up);
+@@ -3084,7 +3088,8 @@ static int routing_ioctl(struct net *net, struct socket *sock,
+ 	}
+ 
+ 	set_fs(KERNEL_DS);
+-	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r);
++	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 
+ out:
+@@ -3197,7 +3202,8 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 	case SIOCBONDSETHWADDR:
+ 	case SIOCBONDCHANGEACTIVE:
+ 	case SIOCGIFNAME:
+-		return sock_do_ioctl(net, sock, cmd, arg);
++		return sock_do_ioctl(net, sock, cmd, arg,
++				     sizeof(struct compat_ifreq));
+ 	}
+ 
+ 	return -ENOIOCTLCMD;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a7a8f8e20ff3..9bd0286d5407 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -552,7 +552,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 		goto free_marker_record;
+ 	}
+ 
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 	switch (crypto_info->cipher_type) {
+ 	case TLS_CIPHER_AES_GCM_128:
+ 		nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
+@@ -650,7 +650,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 
+ 	ctx->priv_ctx_tx = offload_ctx;
+ 	rc = netdev->tlsdev_ops->tls_dev_add(netdev, sk, TLS_OFFLOAD_CTX_DIR_TX,
+-					     &ctx->crypto_send,
++					     &ctx->crypto_send.info,
+ 					     tcp_sk(sk)->write_seq);
+ 	if (rc)
+ 		goto release_netdev;
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index 748914abdb60..72143679d3d6 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -320,7 +320,7 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
+ 		goto free_req;
+ 
+ 	iv = buf;
+-	memcpy(iv, tls_ctx->crypto_send_aes_gcm_128.salt,
++	memcpy(iv, tls_ctx->crypto_send.aes_gcm_128.salt,
+ 	       TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+ 	aad = buf + TLS_CIPHER_AES_GCM_128_SALT_SIZE +
+ 	      TLS_CIPHER_AES_GCM_128_IV_SIZE;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 45188d920013..2ccf194c3ebb 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -245,6 +245,16 @@ static void tls_write_space(struct sock *sk)
+ 	ctx->sk_write_space(sk);
+ }
+ 
++static void tls_ctx_free(struct tls_context *ctx)
++{
++	if (!ctx)
++		return;
++
++	memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));
++	memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));
++	kfree(ctx);
++}
++
+ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+@@ -295,7 +305,7 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ #else
+ 	{
+ #endif
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ 		ctx = NULL;
+ 	}
+ 
+@@ -306,7 +316,7 @@ skip_tx_cleanup:
+ 	 * for sk->sk_prot->unhash [tls_hw_unhash]
+ 	 */
+ 	if (free_ctx)
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ }
+ 
+ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+@@ -331,7 +341,7 @@ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	/* get user crypto info */
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 
+ 	if (!TLS_CRYPTO_INFO_READY(crypto_info)) {
+ 		rc = -EBUSY;
+@@ -418,9 +428,9 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	if (tx)
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 	else
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 
+ 	/* Currently we don't support set crypto info more than one time */
+ 	if (TLS_CRYPTO_INFO_READY(crypto_info)) {
+@@ -492,7 +502,7 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	goto out;
+ 
+ err_crypto_info:
+-	memset(crypto_info, 0, sizeof(*crypto_info));
++	memzero_explicit(crypto_info, sizeof(union tls_crypto_context));
+ out:
+ 	return rc;
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index b3344bbe336b..9fab8e5a4a5b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -872,7 +872,15 @@ fallback_to_reg_recv:
+ 				if (control != TLS_RECORD_TYPE_DATA)
+ 					goto recv_end;
+ 			}
++		} else {
++			/* MSG_PEEK right now cannot look beyond current skb
++			 * from strparser, meaning we cannot advance skb here
++			 * and thus unpause strparser since we'd loose original
++			 * one.
++			 */
++			break;
+ 		}
++
+ 		/* If we have a new message from strparser, continue now. */
+ 		if (copied >= target && !ctx->recv_pkt)
+ 			break;
+@@ -989,8 +997,8 @@ static int tls_read_size(struct strparser *strp, struct sk_buff *skb)
+ 		goto read_failure;
+ 	}
+ 
+-	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.version) ||
+-	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.version)) {
++	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.info.version) ||
++	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.info.version)) {
+ 		ret = -EINVAL;
+ 		goto read_failure;
+ 	}
+@@ -1064,7 +1072,6 @@ void tls_sw_free_resources_rx(struct sock *sk)
+ 
+ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ {
+-	char keyval[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
+ 	struct tls_crypto_info *crypto_info;
+ 	struct tls12_crypto_info_aes_gcm_128 *gcm_128_info;
+ 	struct tls_sw_context_tx *sw_ctx_tx = NULL;
+@@ -1100,11 +1107,11 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 	}
+ 
+ 	if (tx) {
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 		cctx = &ctx->tx;
+ 		aead = &sw_ctx_tx->aead_send;
+ 	} else {
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 		cctx = &ctx->rx;
+ 		aead = &sw_ctx_rx->aead_recv;
+ 	}
+@@ -1184,9 +1191,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 
+ 	ctx->push_pending_record = tls_sw_push_pending_record;
+ 
+-	memcpy(keyval, gcm_128_info->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+-
+-	rc = crypto_aead_setkey(*aead, keyval,
++	rc = crypto_aead_setkey(*aead, gcm_128_info->key,
+ 				TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+ 	if (rc)
+ 		goto free_aead;
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index 1a68d27e72b4..b203f7758f97 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index 730ea91d9be8..93676354f87f 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -263,6 +263,8 @@ do_registration(struct work_struct *work)
+ error:
+ 	mutex_unlock(&devices_mutex);
+ 	snd_bebob_stream_destroy_duplex(bebob);
++	kfree(bebob->maudio_special_quirk);
++	bebob->maudio_special_quirk = NULL;
+ 	snd_card_free(bebob->card);
+ 	dev_info(&bebob->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+diff --git a/sound/firewire/bebob/bebob_maudio.c b/sound/firewire/bebob/bebob_maudio.c
+index bd55620c6a47..c266997ad299 100644
+--- a/sound/firewire/bebob/bebob_maudio.c
++++ b/sound/firewire/bebob/bebob_maudio.c
+@@ -96,17 +96,13 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	struct fw_device *device = fw_parent_device(unit);
+ 	int err, rcode;
+ 	u64 date;
+-	__le32 cues[3] = {
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE1),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE2),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE3)
+-	};
++	__le32 *cues;
+ 
+ 	/* check date of software used to build */
+ 	err = snd_bebob_read_block(unit, INFO_OFFSET_SW_DATE,
+ 				   &date, sizeof(u64));
+ 	if (err < 0)
+-		goto end;
++		return err;
+ 	/*
+ 	 * firmware version 5058 or later has date later than "20070401", but
+ 	 * 'date' is not null-terminated.
+@@ -114,20 +110,28 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	if (date < 0x3230303730343031LL) {
+ 		dev_err(&unit->device,
+ 			"Use firmware version 5058 or later\n");
+-		err = -ENOSYS;
+-		goto end;
++		return -ENXIO;
+ 	}
+ 
++	cues = kmalloc_array(3, sizeof(*cues), GFP_KERNEL);
++	if (!cues)
++		return -ENOMEM;
++
++	cues[0] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE1);
++	cues[1] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE2);
++	cues[2] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE3);
++
+ 	rcode = fw_run_transaction(device->card, TCODE_WRITE_BLOCK_REQUEST,
+ 				   device->node_id, device->generation,
+ 				   device->max_speed, BEBOB_ADDR_REG_REQ,
+-				   cues, sizeof(cues));
++				   cues, 3 * sizeof(*cues));
++	kfree(cues);
+ 	if (rcode != RCODE_COMPLETE) {
+ 		dev_err(&unit->device,
+ 			"Failed to send a cue to load firmware\n");
+ 		err = -EIO;
+ 	}
+-end:
++
+ 	return err;
+ }
+ 
+@@ -290,10 +294,6 @@ snd_bebob_maudio_special_discover(struct snd_bebob *bebob, bool is1814)
+ 		bebob->midi_output_ports = 2;
+ 	}
+ end:
+-	if (err < 0) {
+-		kfree(params);
+-		bebob->maudio_special_quirk = NULL;
+-	}
+ 	mutex_unlock(&bebob->mutex);
+ 	return err;
+ }
+diff --git a/sound/firewire/digi00x/digi00x.c b/sound/firewire/digi00x/digi00x.c
+index 1f5e1d23f31a..ef689997d6a5 100644
+--- a/sound/firewire/digi00x/digi00x.c
++++ b/sound/firewire/digi00x/digi00x.c
+@@ -49,6 +49,7 @@ static void dg00x_free(struct snd_dg00x *dg00x)
+ 	fw_unit_put(dg00x->unit);
+ 
+ 	mutex_destroy(&dg00x->mutex);
++	kfree(dg00x);
+ }
+ 
+ static void dg00x_card_free(struct snd_card *card)
+diff --git a/sound/firewire/fireface/ff-protocol-ff400.c b/sound/firewire/fireface/ff-protocol-ff400.c
+index ad7a0a32557d..64c3cb0fb926 100644
+--- a/sound/firewire/fireface/ff-protocol-ff400.c
++++ b/sound/firewire/fireface/ff-protocol-ff400.c
+@@ -146,6 +146,7 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ {
+ 	__le32 *reg;
+ 	int i;
++	int err;
+ 
+ 	reg = kcalloc(18, sizeof(__le32), GFP_KERNEL);
+ 	if (reg == NULL)
+@@ -163,9 +164,11 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ 			reg[i] = cpu_to_le32(0x00000001);
+ 	}
+ 
+-	return snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
+-				  FF400_FETCH_PCM_FRAMES, reg,
+-				  sizeof(__le32) * 18, 0);
++	err = snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
++				 FF400_FETCH_PCM_FRAMES, reg,
++				 sizeof(__le32) * 18, 0);
++	kfree(reg);
++	return err;
+ }
+ 
+ static void ff400_dump_sync_status(struct snd_ff *ff,
+diff --git a/sound/firewire/fireworks/fireworks.c b/sound/firewire/fireworks/fireworks.c
+index 71a0613d3da0..f2d073365cf6 100644
+--- a/sound/firewire/fireworks/fireworks.c
++++ b/sound/firewire/fireworks/fireworks.c
+@@ -301,6 +301,8 @@ error:
+ 	snd_efw_transaction_remove_instance(efw);
+ 	snd_efw_stream_destroy_duplex(efw);
+ 	snd_card_free(efw->card);
++	kfree(efw->resp_buf);
++	efw->resp_buf = NULL;
+ 	dev_info(&efw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 1e5b2c802635..2ea8be6c8584 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -130,6 +130,7 @@ static void oxfw_free(struct snd_oxfw *oxfw)
+ 
+ 	kfree(oxfw->spec);
+ 	mutex_destroy(&oxfw->mutex);
++	kfree(oxfw);
+ }
+ 
+ /*
+@@ -207,6 +208,7 @@ static int detect_quirks(struct snd_oxfw *oxfw)
+ static void do_registration(struct work_struct *work)
+ {
+ 	struct snd_oxfw *oxfw = container_of(work, struct snd_oxfw, dwork.work);
++	int i;
+ 	int err;
+ 
+ 	if (oxfw->registered)
+@@ -269,7 +271,15 @@ error:
+ 	snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->rx_stream);
+ 	if (oxfw->has_output)
+ 		snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->tx_stream);
++	for (i = 0; i < SND_OXFW_STREAM_FORMAT_ENTRIES; ++i) {
++		kfree(oxfw->tx_stream_formats[i]);
++		oxfw->tx_stream_formats[i] = NULL;
++		kfree(oxfw->rx_stream_formats[i]);
++		oxfw->rx_stream_formats[i] = NULL;
++	}
+ 	snd_card_free(oxfw->card);
++	kfree(oxfw->spec);
++	oxfw->spec = NULL;
+ 	dev_info(&oxfw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/tascam/tascam.c b/sound/firewire/tascam/tascam.c
+index 44ad41fb7374..d3fdc463a884 100644
+--- a/sound/firewire/tascam/tascam.c
++++ b/sound/firewire/tascam/tascam.c
+@@ -93,6 +93,7 @@ static void tscm_free(struct snd_tscm *tscm)
+ 	fw_unit_put(tscm->unit);
+ 
+ 	mutex_destroy(&tscm->mutex);
++	kfree(tscm);
+ }
+ 
+ static void tscm_card_free(struct snd_card *card)
+diff --git a/sound/pci/emu10k1/emufx.c b/sound/pci/emu10k1/emufx.c
+index de2ecbe95d6c..2c54d26f30a6 100644
+--- a/sound/pci/emu10k1/emufx.c
++++ b/sound/pci/emu10k1/emufx.c
+@@ -2540,7 +2540,7 @@ static int snd_emu10k1_fx8010_ioctl(struct snd_hwdep * hw, struct file *file, un
+ 		emu->support_tlv = 1;
+ 		return put_user(SNDRV_EMU10K1_VERSION, (int __user *)argp);
+ 	case SNDRV_EMU10K1_IOCTL_INFO:
+-		info = kmalloc(sizeof(*info), GFP_KERNEL);
++		info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 		if (!info)
+ 			return -ENOMEM;
+ 		snd_emu10k1_fx8010_info(emu, info);
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index 275677de669f..407554175282 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -157,8 +157,8 @@ static const struct snd_kcontrol_new cs4265_snd_controls[] = {
+ 	SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2,
+ 				3, 1, 0),
+ 	SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum),
+-	SOC_SINGLE("MMTLR Data Switch", 0,
+-				1, 1, 0),
++	SOC_SINGLE("MMTLR Data Switch", CS4265_SPDIF_CTL2,
++				0, 1, 0),
+ 	SOC_ENUM("Mono Channel Select", spdif_mono_select_enum),
+ 	SND_SOC_BYTES("C Data Buffer", CS4265_C_DATA_BUFF, 24),
+ };
+diff --git a/sound/soc/codecs/tas6424.c b/sound/soc/codecs/tas6424.c
+index 14999b999fd3..0d6145549a98 100644
+--- a/sound/soc/codecs/tas6424.c
++++ b/sound/soc/codecs/tas6424.c
+@@ -424,8 +424,10 @@ static void tas6424_fault_check_work(struct work_struct *work)
+ 	       TAS6424_FAULT_PVDD_UV |
+ 	       TAS6424_FAULT_VBAT_UV;
+ 
+-	if (reg)
++	if (!reg) {
++		tas6424->last_fault1 = reg;
+ 		goto check_global_fault2_reg;
++	}
+ 
+ 	/*
+ 	 * Only flag errors once for a given occurrence. This is needed as
+@@ -461,8 +463,10 @@ check_global_fault2_reg:
+ 	       TAS6424_FAULT_OTSD_CH3 |
+ 	       TAS6424_FAULT_OTSD_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_fault2 = reg;
+ 		goto check_warn_reg;
++	}
+ 
+ 	if ((reg & TAS6424_FAULT_OTSD) && !(tas6424->last_fault2 & TAS6424_FAULT_OTSD))
+ 		dev_crit(dev, "experienced a global overtemp shutdown\n");
+@@ -497,8 +501,10 @@ check_warn_reg:
+ 	       TAS6424_WARN_VDD_OTW_CH3 |
+ 	       TAS6424_WARN_VDD_OTW_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_warn = reg;
+ 		goto out;
++	}
+ 
+ 	if ((reg & TAS6424_WARN_VDD_UV) && !(tas6424->last_warn & TAS6424_WARN_VDD_UV))
+ 		dev_warn(dev, "experienced a VDD under voltage condition\n");
+diff --git a/sound/soc/codecs/wm9712.c b/sound/soc/codecs/wm9712.c
+index 953d94d50586..ade34c26ad2f 100644
+--- a/sound/soc/codecs/wm9712.c
++++ b/sound/soc/codecs/wm9712.c
+@@ -719,7 +719,7 @@ static int wm9712_probe(struct platform_device *pdev)
+ 
+ static struct platform_driver wm9712_component_driver = {
+ 	.driver = {
+-		.name = "wm9712-component",
++		.name = "wm9712-codec",
+ 	},
+ 
+ 	.probe = wm9712_probe,
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index f237002180c0..ff13189a7ee4 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -953,12 +953,23 @@ static void rsnd_soc_dai_shutdown(struct snd_pcm_substream *substream,
+ 	rsnd_dai_stream_quit(io);
+ }
+ 
++static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream,
++				struct snd_soc_dai *dai)
++{
++	struct rsnd_priv *priv = rsnd_dai_to_priv(dai);
++	struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai);
++	struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream);
++
++	return rsnd_dai_call(prepare, io, priv);
++}
++
+ static const struct snd_soc_dai_ops rsnd_soc_dai_ops = {
+ 	.startup	= rsnd_soc_dai_startup,
+ 	.shutdown	= rsnd_soc_dai_shutdown,
+ 	.trigger	= rsnd_soc_dai_trigger,
+ 	.set_fmt	= rsnd_soc_dai_set_fmt,
+ 	.set_tdm_slot	= rsnd_soc_set_dai_tdm_slot,
++	.prepare	= rsnd_soc_dai_prepare,
+ };
+ 
+ void rsnd_parse_connect_common(struct rsnd_dai *rdai,
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 6d7280d2d9be..e93032498a5b 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -283,6 +283,9 @@ struct rsnd_mod_ops {
+ 	int (*nolock_stop)(struct rsnd_mod *mod,
+ 		    struct rsnd_dai_stream *io,
+ 		    struct rsnd_priv *priv);
++	int (*prepare)(struct rsnd_mod *mod,
++		       struct rsnd_dai_stream *io,
++		       struct rsnd_priv *priv);
+ };
+ 
+ struct rsnd_dai_stream;
+@@ -312,6 +315,7 @@ struct rsnd_mod {
+  * H	0: fallback
+  * H	0: hw_params
+  * H	0: pointer
++ * H	0: prepare
+  */
+ #define __rsnd_mod_shift_nolock_start	0
+ #define __rsnd_mod_shift_nolock_stop	0
+@@ -326,6 +330,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_shift_fallback	28 /* always called */
+ #define __rsnd_mod_shift_hw_params	28 /* always called */
+ #define __rsnd_mod_shift_pointer	28 /* always called */
++#define __rsnd_mod_shift_prepare	28 /* always called */
+ 
+ #define __rsnd_mod_add_probe		0
+ #define __rsnd_mod_add_remove		0
+@@ -340,6 +345,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_add_fallback		0
+ #define __rsnd_mod_add_hw_params	0
+ #define __rsnd_mod_add_pointer		0
++#define __rsnd_mod_add_prepare		0
+ 
+ #define __rsnd_mod_call_probe		0
+ #define __rsnd_mod_call_remove		0
+@@ -354,6 +360,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_call_pointer		0
+ #define __rsnd_mod_call_nolock_start	0
+ #define __rsnd_mod_call_nolock_stop	1
++#define __rsnd_mod_call_prepare		0
+ 
+ #define rsnd_mod_to_priv(mod)	((mod)->priv)
+ #define rsnd_mod_name(mod)	((mod)->ops->name)
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 6e1166ec24a0..cf4b40d376e5 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -286,7 +286,7 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod,
+ 	if (rsnd_ssi_is_multi_slave(mod, io))
+ 		return 0;
+ 
+-	if (ssi->usrcnt > 1) {
++	if (ssi->rate) {
+ 		if (ssi->rate != rate) {
+ 			dev_err(dev, "SSI parent/child should use same rate\n");
+ 			return -EINVAL;
+@@ -431,7 +431,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 			 struct rsnd_priv *priv)
+ {
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	int ret;
+ 
+ 	if (!rsnd_ssi_is_run_mods(mod, io))
+ 		return 0;
+@@ -440,10 +439,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 
+ 	rsnd_mod_power_on(mod);
+ 
+-	ret = rsnd_ssi_master_clk_start(mod, io);
+-	if (ret < 0)
+-		return ret;
+-
+ 	rsnd_ssi_config_init(mod, io);
+ 
+ 	rsnd_ssi_register_setup(mod);
+@@ -846,6 +841,13 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
+ 	return 0;
+ }
+ 
++static int rsnd_ssi_prepare(struct rsnd_mod *mod,
++			    struct rsnd_dai_stream *io,
++			    struct rsnd_priv *priv)
++{
++	return rsnd_ssi_master_clk_start(mod, io);
++}
++
+ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.name	= SSI_NAME,
+ 	.probe	= rsnd_ssi_common_probe,
+@@ -858,6 +860,7 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.pointer = rsnd_ssi_pio_pointer,
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ static int rsnd_ssi_dma_probe(struct rsnd_mod *mod,
+@@ -934,6 +937,7 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.fallback = rsnd_ssi_fallback,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     743bbef4368f2de1a577c154f5cd943cd678790e
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 22 09:59:11 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=743bbef4

linux kernel 4.18.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1003_linux-4.18.4.patch | 817 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 821 insertions(+)

diff --git a/0000_README b/0000_README
index c313d8e..c7d6cc0 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.18.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.3
 
+Patch:  1003_linux-4.18.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.18.4.patch b/1003_linux-4.18.4.patch
new file mode 100644
index 0000000..a94a413
--- /dev/null
+++ b/1003_linux-4.18.4.patch
@@ -0,0 +1,817 @@
+diff --git a/Makefile b/Makefile
+index e2bd815f24eb..ef0dd566c104 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 5d0486f1cfcd..1a1c0718cd7a 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -338,6 +338,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+ 		},
+ 	},
++	{
++	.callback = init_nvs_save_s3,
++	.ident = "Asus 1025C",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "1025C"),
++		},
++	},
+ 	/*
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=189431
+ 	 * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
+diff --git a/drivers/isdn/i4l/isdn_common.c b/drivers/isdn/i4l/isdn_common.c
+index 7a501dbe7123..6a5b3f00f9ad 100644
+--- a/drivers/isdn/i4l/isdn_common.c
++++ b/drivers/isdn/i4l/isdn_common.c
+@@ -1640,13 +1640,7 @@ isdn_ioctl(struct file *file, uint cmd, ulong arg)
+ 			} else
+ 				return -EINVAL;
+ 		case IIOCDBGVAR:
+-			if (arg) {
+-				if (copy_to_user(argp, &dev, sizeof(ulong)))
+-					return -EFAULT;
+-				return 0;
+-			} else
+-				return -EINVAL;
+-			break;
++			return -EINVAL;
+ 		default:
+ 			if ((cmd & IIOCDRVCTL) == IIOCDRVCTL)
+ 				cmd = ((cmd >> _IOC_NRSHIFT) & _IOC_NRMASK) & ISDN_DRVIOCTL_MASK;
+diff --git a/drivers/media/usb/dvb-usb-v2/gl861.c b/drivers/media/usb/dvb-usb-v2/gl861.c
+index 9d154fdae45b..fee4b30df778 100644
+--- a/drivers/media/usb/dvb-usb-v2/gl861.c
++++ b/drivers/media/usb/dvb-usb-v2/gl861.c
+@@ -26,10 +26,14 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	if (wo) {
+ 		req = GL861_REQ_I2C_WRITE;
+ 		type = GL861_WRITE;
++		buf = kmemdup(wbuf, wlen, GFP_KERNEL);
+ 	} else { /* rw */
+ 		req = GL861_REQ_I2C_READ;
+ 		type = GL861_READ;
++		buf = kmalloc(rlen, GFP_KERNEL);
+ 	}
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	switch (wlen) {
+ 	case 1:
+@@ -42,24 +46,19 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	default:
+ 		dev_err(&d->udev->dev, "%s: wlen=%d, aborting\n",
+ 				KBUILD_MODNAME, wlen);
++		kfree(buf);
+ 		return -EINVAL;
+ 	}
+-	buf = NULL;
+-	if (rlen > 0) {
+-		buf = kmalloc(rlen, GFP_KERNEL);
+-		if (!buf)
+-			return -ENOMEM;
+-	}
++
+ 	usleep_range(1000, 2000); /* avoid I2C errors */
+ 
+ 	ret = usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0), req, type,
+ 			      value, index, buf, rlen, 2000);
+-	if (rlen > 0) {
+-		if (ret > 0)
+-			memcpy(rbuf, buf, rlen);
+-		kfree(buf);
+-	}
+ 
++	if (!wo && ret > 0)
++		memcpy(rbuf, buf, rlen);
++
++	kfree(buf);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index c5dc6095686a..679647713e36 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -407,13 +407,20 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			return ret;
++			goto err_disable_clk;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+ 		gen_pool_size(sram->pool) / 1024, sram->virt_base);
+ 
+ 	return 0;
++
++err_disable_clk:
++	if (sram->clk)
++		clk_disable_unprepare(sram->clk);
++	sram_free_partitions(sram);
++
++	return ret;
+ }
+ 
+ static int sram_remove(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 0ad2f3f7da85..82ac1d10f239 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
+ }
+ 
+ /* Main rx processing when using software buffer management */
+-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_swbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -1959,7 +1959,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2001,7 +2001,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2020,10 +2020,10 @@ err_drop_frame:
+ }
+ 
+ /* Main rx processing when using hardware buffer management */
+-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_hwbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -2085,7 +2085,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2129,7 +2129,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
+ 	if (rx_queue) {
+ 		rx_queue = rx_queue - 1;
+ 		if (pp->bm_priv)
+-			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_hwbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 		else
+-			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_swbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 	}
+ 
+ 	if (rx_done < budget) {
+@@ -4018,13 +4020,18 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 
+ 	on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_synchronize(&pcpu_port->napi);
+-		napi_disable(&pcpu_port->napi);
++			napi_synchronize(&pcpu_port->napi);
++			napi_disable(&pcpu_port->napi);
++		}
++	} else {
++		napi_synchronize(&pp->napi);
++		napi_disable(&pp->napi);
+ 	}
+ 
+ 	pp->rxq_def = pp->indir[0];
+@@ -4041,12 +4048,16 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 	mvneta_percpu_elect(pp);
+ 	spin_unlock(&pp->lock);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_enable(&pcpu_port->napi);
++			napi_enable(&pcpu_port->napi);
++		}
++	} else {
++		napi_enable(&pp->napi);
+ 	}
+ 
+ 	netif_tx_start_all_queues(pp->dev);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index eaedc11ed686..9ceb34bac3a9 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,12 +7539,20 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
++	switch (tp->mac_version) {
++	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-	} else {
++		break;
++	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
++		/* This version was reported to have issues with resume
++		 * from suspend when using MSI-X
++		 */
++		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
++		break;
++	default:
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index 408ece27131c..2a5209f23f29 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1338,7 +1338,7 @@ out:
+ 	/* setting up multiple channels failed */
+ 	net_device->max_chn = 1;
+ 	net_device->num_chn = 1;
+-	return 0;
++	return net_device;
+ 
+ err_dev_remv:
+ 	rndis_filter_device_remove(dev, net_device);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index aff04f1de3a5..af842000188c 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -293,7 +293,7 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
+ 	long rate;
+ 	int ret;
+ 
+-	if (IS_ERR(d->clk) || !old)
++	if (IS_ERR(d->clk))
+ 		goto out;
+ 
+ 	clk_disable_unprepare(d->clk);
+@@ -707,6 +707,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "APMC0D08", 0},
+ 	{ "AMD0020", 0 },
+ 	{ "AMDI0020", 0 },
++	{ "BRCM2032", 0 },
+ 	{ "HISI0031", 0 },
+ 	{ },
+ };
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 38af306ca0e8..a951511f04cf 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -433,7 +433,11 @@ static irqreturn_t exar_misc_handler(int irq, void *data)
+ 	struct exar8250 *priv = data;
+ 
+ 	/* Clear all PCI interrupts by reading INT0. No effect on IIR */
+-	ioread8(priv->virt + UART_EXAR_INT0);
++	readb(priv->virt + UART_EXAR_INT0);
++
++	/* Clear INT0 for Expansion Interface slave ports, too */
++	if (priv->board->num_ports > 8)
++		readb(priv->virt + 0x2000 + UART_EXAR_INT0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index cf541aab2bd0..5cbc13e3d316 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -90,8 +90,7 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16550A",
+ 		.fifo_size	= 16,
+ 		.tx_loadsz	= 16,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+-				  UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ 		.rxtrig_bytes	= {1, 4, 8, 14},
+ 		.flags		= UART_CAP_FIFO,
+ 	},
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 5d421d7e8904..f68c1121fa7c 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -443,13 +443,10 @@ static irqreturn_t uio_interrupt(int irq, void *dev_id)
+ 	struct uio_device *idev = (struct uio_device *)dev_id;
+ 	irqreturn_t ret;
+ 
+-	mutex_lock(&idev->info_lock);
+-
+ 	ret = idev->info->handler(irq, idev->info);
+ 	if (ret == IRQ_HANDLED)
+ 		uio_event_notify(idev->info);
+ 
+-	mutex_unlock(&idev->info_lock);
+ 	return ret;
+ }
+ 
+@@ -814,7 +811,7 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
+ 
+ out:
+ 	mutex_unlock(&idev->info_lock);
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct file_operations uio_fops = {
+@@ -969,9 +966,8 @@ int __uio_register_device(struct module *owner,
+ 		 * FDs at the time of unregister and therefore may not be
+ 		 * freed until they are released.
+ 		 */
+-		ret = request_threaded_irq(info->irq, NULL, uio_interrupt,
+-					   info->irq_flags, info->name, idev);
+-
++		ret = request_irq(info->irq, uio_interrupt,
++				  info->irq_flags, info->name, idev);
+ 		if (ret)
+ 			goto err_request_irq;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 664e61f16b6a..0215b70c4efc 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -196,6 +196,8 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5800_V2_MINICARD_VZW	0x8196  /* Novatel E362 */
+ #define DELL_PRODUCT_5804_MINICARD_ATT		0x819b  /* Novatel E371 */
+ 
++#define DELL_PRODUCT_5821E			0x81d7
++
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+ #define KYOCERA_PRODUCT_KPC680			0x180a
+@@ -1030,6 +1032,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_V2_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5804_MINICARD_ATT, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E),
++	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 5d1a1931967e..e41f725ac7aa 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -52,6 +52,8 @@ static const struct usb_device_id id_table[] = {
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC485),
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
++	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC232B),
++		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) },
+ 	{ USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index fcd72396a7b6..26965cc23c17 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -24,6 +24,7 @@
+ #define ATEN_VENDOR_ID2		0x0547
+ #define ATEN_PRODUCT_ID		0x2008
+ #define ATEN_PRODUCT_UC485	0x2021
++#define ATEN_PRODUCT_UC232B	0x2022
+ #define ATEN_PRODUCT_ID2	0x2118
+ 
+ #define IODATA_VENDOR_ID	0x04bb
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index d189f953c891..55956a638f5b 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -770,9 +770,9 @@ static void sierra_close(struct usb_serial_port *port)
+ 		kfree(urb->transfer_buffer);
+ 		usb_free_urb(urb);
+ 		usb_autopm_put_interface_async(serial->interface);
+-		spin_lock(&portdata->lock);
++		spin_lock_irq(&portdata->lock);
+ 		portdata->outstanding_urbs--;
+-		spin_unlock(&portdata->lock);
++		spin_unlock_irq(&portdata->lock);
+ 	}
+ 
+ 	sierra_stop_rx_urbs(port);
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 413b8ee49fec..8f0f9279eac9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -393,7 +393,8 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
++	    sock_flag(sk, SOCK_DEAD))
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index c37b5be7c5e4..3312a5849a97 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/tcp.h>
+ #include <linux/workqueue.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/inet_diag.h>
+ #include <linux/sock_diag.h>
+@@ -218,6 +219,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 
+ 	if (req->sdiag_family >= AF_MAX)
+ 		return -EINVAL;
++	req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+ 
+ 	if (sock_diag_handlers[req->sdiag_family] == NULL)
+ 		sock_load_diag_module(req->sdiag_family, 0);
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 3f091ccad9af..f38cb21d773d 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -438,7 +438,8 @@ static int __net_init vti_init_net(struct net *net)
+ 	if (err)
+ 		return err;
+ 	itn = net_generic(net, vti_net_id);
+-	vti_fb_tunnel_init(itn->fb_tunnel_dev);
++	if (itn->fb_tunnel_dev)
++		vti_fb_tunnel_init(itn->fb_tunnel_dev);
+ 	return 0;
+ }
+ 
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 40261cb68e83..8aaf8157da2b 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1110,7 +1110,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+ 
+ 	/* Get routing info from the tunnel socket */
+ 	skb_dst_drop(skb);
+-	skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
++	skb_dst_set(skb, sk_dst_check(sk, 0));
+ 
+ 	inet = inet_sk(sk);
+ 	fl = &inet->cork.fl;
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 47b207ef7762..7ad65daf66a4 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -111,6 +111,8 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 	if (!head)
+ 		return;
+ 
++	tcf_unbind_filter(tp, &head->res);
++
+ 	if (!tc_skip_hw(head->flags))
+ 		mall_destroy_hw_filter(tp, head, (unsigned long) head, extack);
+ 
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 32f4bbd82f35..9ccc93f257db 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -447,11 +447,6 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		tcf_bind_filter(tp, &cr.res, base);
+ 	}
+ 
+-	if (old_r)
+-		tcf_exts_change(&r->exts, &e);
+-	else
+-		tcf_exts_change(&cr.exts, &e);
+-
+ 	if (old_r && old_r != r) {
+ 		err = tcindex_filter_result_init(old_r);
+ 		if (err < 0) {
+@@ -462,12 +457,15 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 
+ 	oldp = p;
+ 	r->res = cr.res;
++	tcf_exts_change(&r->exts, &e);
++
+ 	rcu_assign_pointer(tp->root, cp);
+ 
+ 	if (r == &new_filter_result) {
+ 		struct tcindex_filter *nfp;
+ 		struct tcindex_filter __rcu **fp;
+ 
++		f->result.res = r->res;
+ 		tcf_exts_change(&f->result.exts, &r->exts);
+ 
+ 		fp = cp->h + (handle % cp->hash);
+diff --git a/net/socket.c b/net/socket.c
+index 8c24d5dc4bc8..4ac3b834cce9 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2690,8 +2690,7 @@ EXPORT_SYMBOL(sock_unregister);
+ 
+ bool sock_is_registered(int family)
+ {
+-	return family < NPROTO &&
+-		rcu_access_pointer(net_families[array_index_nospec(family, NPROTO)]);
++	return family < NPROTO && rcu_access_pointer(net_families[family]);
+ }
+ 
+ static int __init sock_init(void)
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 7f89d3c79a4b..753d5fc4b284 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -242,16 +242,12 @@ int snd_dma_alloc_pages_fallback(int type, struct device *device, size_t size,
+ 	int err;
+ 
+ 	while ((err = snd_dma_alloc_pages(type, device, size, dmab)) < 0) {
+-		size_t aligned_size;
+ 		if (err != -ENOMEM)
+ 			return err;
+ 		if (size <= PAGE_SIZE)
+ 			return -ENOMEM;
+-		aligned_size = PAGE_SIZE << get_order(size);
+-		if (size != aligned_size)
+-			size = aligned_size;
+-		else
+-			size >>= 1;
++		size >>= 1;
++		size = PAGE_SIZE << get_order(size);
+ 	}
+ 	if (! dmab->area)
+ 		return -ENOMEM;
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 5f64d0d88320..e1f44fc86885 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -203,7 +203,7 @@ odev_poll(struct file *file, poll_table * wait)
+ 	struct seq_oss_devinfo *dp;
+ 	dp = file->private_data;
+ 	if (snd_BUG_ON(!dp))
+-		return -ENXIO;
++		return EPOLLERR;
+ 	return snd_seq_oss_poll(dp, file, wait);
+ }
+ 
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 56ca78423040..6fd4b074b206 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1101,7 +1101,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
+ 
+ 	/* check client structures are in place */
+ 	if (snd_BUG_ON(!client))
+-		return -ENXIO;
++		return EPOLLERR;
+ 
+ 	if ((snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_INPUT) &&
+ 	    client->data.user.fifo) {
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 289ae6bb81d9..8ebbca554e99 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -163,6 +163,7 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 	int count, res;
+ 	unsigned char buf[32], *pbuf;
+ 	unsigned long flags;
++	bool check_resched = !in_atomic();
+ 
+ 	if (up) {
+ 		vmidi->trigger = 1;
+@@ -200,6 +201,15 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 					vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ 				}
+ 			}
++			if (!check_resched)
++				continue;
++			/* do temporary unlock & cond_resched() for avoiding
++			 * CPU soft lockup, which may happen via a write from
++			 * a huge rawmidi buffer
++			 */
++			spin_unlock_irqrestore(&substream->runtime->lock, flags);
++			cond_resched();
++			spin_lock_irqsave(&substream->runtime->lock, flags);
+ 		}
+ 	out:
+ 		spin_unlock_irqrestore(&substream->runtime->lock, flags);
+diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
+index b2efb1c71a98..218292bdace6 100644
+--- a/sound/firewire/dice/dice-alesis.c
++++ b/sound/firewire/dice/dice-alesis.c
+@@ -37,7 +37,7 @@ int snd_dice_detect_alesis_formats(struct snd_dice *dice)
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	} else {
+-		memcpy(dice->rx_pcm_chs, alesis_io26_tx_pcm_chs,
++		memcpy(dice->tx_pcm_chs, alesis_io26_tx_pcm_chs,
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	}
+diff --git a/sound/pci/cs5535audio/cs5535audio.h b/sound/pci/cs5535audio/cs5535audio.h
+index f4fcdf93f3c8..d84620a0c26c 100644
+--- a/sound/pci/cs5535audio/cs5535audio.h
++++ b/sound/pci/cs5535audio/cs5535audio.h
+@@ -67,9 +67,9 @@ struct cs5535audio_dma_ops {
+ };
+ 
+ struct cs5535audio_dma_desc {
+-	u32 addr;
+-	u16 size;
+-	u16 ctlreserved;
++	__le32 addr;
++	__le16 size;
++	__le16 ctlreserved;
+ };
+ 
+ struct cs5535audio_dma {
+diff --git a/sound/pci/cs5535audio/cs5535audio_pcm.c b/sound/pci/cs5535audio/cs5535audio_pcm.c
+index ee7065f6e162..326caec854e1 100644
+--- a/sound/pci/cs5535audio/cs5535audio_pcm.c
++++ b/sound/pci/cs5535audio/cs5535audio_pcm.c
+@@ -158,8 +158,8 @@ static int cs5535audio_build_dma_packets(struct cs5535audio *cs5535au,
+ 	lastdesc->addr = cpu_to_le32((u32) dma->desc_buf.addr);
+ 	lastdesc->size = 0;
+ 	lastdesc->ctlreserved = cpu_to_le16(PRD_JMP);
+-	jmpprd_addr = cpu_to_le32(lastdesc->addr +
+-				  (sizeof(struct cs5535audio_dma_desc)*periods));
++	jmpprd_addr = (u32)dma->desc_buf.addr +
++		sizeof(struct cs5535audio_dma_desc) * periods;
+ 
+ 	dma->substream = substream;
+ 	dma->period_bytes = period_bytes;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1ae1850b3bfd..647ae1a71e10 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2207,7 +2207,7 @@ out_free:
+  */
+ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+-	SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++	SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index f641c20095f7..1a8a2d440fbd 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -211,6 +211,7 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 	struct conexant_spec *spec = codec->spec;
+ 
+ 	switch (codec->core.vendor_id) {
++	case 0x14f12008: /* CX8200 */
+ 	case 0x14f150f2: /* CX20722 */
+ 	case 0x14f150f4: /* CX20724 */
+ 		break;
+@@ -218,13 +219,14 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 		return;
+ 	}
+ 
+-	/* Turn the CX20722 codec into D3 to avoid spurious noises
++	/* Turn the problematic codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+ 	cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+ 
+ 	snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
+ 	snd_hda_codec_write(codec, codec->core.afg, 0,
+ 			    AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
++	msleep(10);
+ }
+ 
+ static void cx_auto_free(struct hda_codec *codec)
+diff --git a/sound/pci/vx222/vx222_ops.c b/sound/pci/vx222/vx222_ops.c
+index d4298af6d3ee..c0d0bf44f365 100644
+--- a/sound/pci/vx222/vx222_ops.c
++++ b/sound/pci/vx222/vx222_ops.c
+@@ -275,7 +275,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outl(cpu_to_le32(*addr), port);
++			outl(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (u32 *)runtime->dma_area;
+@@ -285,7 +285,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outl(cpu_to_le32(*addr), port);
++		outl(*addr, port);
+ 		addr++;
+ 	}
+ 
+@@ -313,7 +313,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le32_to_cpu(inl(port));
++			*addr++ = inl(port);
+ 		addr = (u32 *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -321,7 +321,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--)
+-		*addr++ = le32_to_cpu(inl(port));
++		*addr++ = inl(port);
+ 
+ 	vx2_release_pseudo_dma(chip);
+ }
+diff --git a/sound/pcmcia/vx/vxp_ops.c b/sound/pcmcia/vx/vxp_ops.c
+index 8cde40226355..4c4ef1fec69f 100644
+--- a/sound/pcmcia/vx/vxp_ops.c
++++ b/sound/pcmcia/vx/vxp_ops.c
+@@ -375,7 +375,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outw(cpu_to_le16(*addr), port);
++			outw(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (unsigned short *)runtime->dma_area;
+@@ -385,7 +385,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outw(cpu_to_le16(*addr), port);
++		outw(*addr, port);
+ 		addr++;
+ 	}
+ 	vx_release_pseudo_dma(chip);
+@@ -417,7 +417,7 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le16_to_cpu(inw(port));
++			*addr++ = inw(port);
+ 		addr = (unsigned short *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -425,12 +425,12 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 1; count--)
+-		*addr++ = le16_to_cpu(inw(port));
++		*addr++ = inw(port);
+ 	/* Disable DMA */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMAREAD_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);
+ 	/* Read the last word (16 bits) */
+-	*addr = le16_to_cpu(inw(port));
++	*addr = inw(port);
+ 	/* Disable 16-bit accesses */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMA16_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     420b8ac4a69b8c4c52ac9151e9c5783100bb1a7b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  4 10:44:07 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=420b8ac4

Linux patch 4.18.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-4.18.12.patch | 7724 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7728 insertions(+)

diff --git a/0000_README b/0000_README
index cccbd63..ff87445 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-4.18.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.11
 
+Patch:  1011_linux-4.18.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-4.18.12.patch b/1011_linux-4.18.12.patch
new file mode 100644
index 0000000..0851ea8
--- /dev/null
+++ b/1011_linux-4.18.12.patch
@@ -0,0 +1,7724 @@
+diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx
+index 72d16f08e431..b8df81f6d6bc 100644
+--- a/Documentation/hwmon/ina2xx
++++ b/Documentation/hwmon/ina2xx
+@@ -32,7 +32,7 @@ Supported chips:
+     Datasheet: Publicly available at the Texas Instruments website
+                http://www.ti.com/
+ 
+-Author: Lothar Felten <l-felten@ti.com>
++Author: Lothar Felten <lothar.felten@gmail.com>
+ 
+ Description
+ -----------
+diff --git a/Documentation/process/2.Process.rst b/Documentation/process/2.Process.rst
+index a9c46dd0706b..51d0349c7809 100644
+--- a/Documentation/process/2.Process.rst
++++ b/Documentation/process/2.Process.rst
+@@ -134,7 +134,7 @@ and their maintainers are:
+ 	4.4	Greg Kroah-Hartman	(very long-term stable kernel)
+ 	4.9	Greg Kroah-Hartman
+ 	4.14	Greg Kroah-Hartman
+-	======  ======================  ===========================
++	======  ======================  ==============================
+ 
+ The selection of a kernel for long-term support is purely a matter of a
+ maintainer having the need and the time to maintain that release.  There
+diff --git a/Makefile b/Makefile
+index de0ecace693a..466e07af8473 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index e03495a799ce..a0ddf497e8cd 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1893,7 +1893,7 @@
+ 			};
+ 		};
+ 
+-		dcan1: can@481cc000 {
++		dcan1: can@4ae3c000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan1";
+ 			reg = <0x4ae3c000 0x2000>;
+@@ -1903,7 +1903,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		dcan2: can@481d0000 {
++		dcan2: can@48480000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan2";
+ 			reg = <0x48480000 0x2000>;
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index 8d3d123d0a5c..37f0a5afe348 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -125,10 +125,14 @@
+ 		interrupt-names = "msi";
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 0x7>;
+-		interrupt-map = <0 0 0 1 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 2 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 3 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 4 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>;
++		/*
++		 * Reference manual lists pci irqs incorrectly
++		 * Real hardware ordering is same as imx6: D+MSI, C, B, A
++		 */
++		interrupt-map = <0 0 0 1 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 2 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 3 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 4 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&clks IMX7D_PCIE_CTRL_ROOT_CLK>,
+ 			 <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>,
+ 			 <&clks IMX7D_PCIE_PHY_ROOT_CLK>;
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index c55d479971cc..f18490548c78 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -84,6 +84,7 @@
+ 			device_type = "cpu";
+ 			reg = <0xf01>;
+ 			clocks = <&clockgen 1 0>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index d1eb123bc73b..1cdc346a05e8 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -92,6 +92,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -103,6 +104,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -114,6 +116,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index e7c3c563ff8f..5f27518561c4 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -351,7 +351,7 @@
+ &mmc2 {
+ 	vmmc-supply = <&vsdio>;
+ 	bus-width = <8>;
+-	non-removable;
++	ti,non-removable;
+ };
+ 
+ &mmc3 {
+@@ -618,15 +618,6 @@
+ 		OMAP4_IOPAD(0x10c, PIN_INPUT | MUX_MODE1)	/* abe_mcbsp3_fsx */
+ 		>;
+ 	};
+-};
+-
+-&omap4_pmx_wkup {
+-	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
+-		/* gpio_wk0 */
+-		pinctrl-single,pins = <
+-		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
+-		>;
+-	};
+ 
+ 	vibrator_direction_pin: pinmux_vibrator_direction_pin {
+ 		pinctrl-single,pins = <
+@@ -641,6 +632,15 @@
+ 	};
+ };
+ 
++&omap4_pmx_wkup {
++	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
++		/* gpio_wk0 */
++		pinctrl-single,pins = <
++		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
++		>;
++	};
++};
++
+ /*
+  * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
+  * uart1 wakeirq.
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 27a78c80e5b1..73d5d72dfc3e 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -116,8 +116,8 @@ void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr)
+ 		PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu));
+ }
+ 
+-extern unsigned char mvebu_boot_wa_start;
+-extern unsigned char mvebu_boot_wa_end;
++extern unsigned char mvebu_boot_wa_start[];
++extern unsigned char mvebu_boot_wa_end[];
+ 
+ /*
+  * This function sets up the boot address workaround needed for SMP
+@@ -130,7 +130,7 @@ int mvebu_setup_boot_addr_wa(unsigned int crypto_eng_target,
+ 			     phys_addr_t resume_addr_reg)
+ {
+ 	void __iomem *sram_virt_base;
+-	u32 code_len = &mvebu_boot_wa_end - &mvebu_boot_wa_start;
++	u32 code_len = mvebu_boot_wa_end - mvebu_boot_wa_start;
+ 
+ 	mvebu_mbus_del_window(BOOTROM_BASE, BOOTROM_SIZE);
+ 	mvebu_mbus_add_window_by_id(crypto_eng_target, crypto_eng_attribute,
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 2ceffd85dd3d..cd65ea4e9c54 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -2160,6 +2160,37 @@ static int of_dev_hwmod_lookup(struct device_node *np,
+ 	return -ENODEV;
+ }
+ 
++/**
++ * omap_hwmod_fix_mpu_rt_idx - fix up mpu_rt_idx register offsets
++ *
++ * @oh: struct omap_hwmod *
++ * @np: struct device_node *
++ *
++ * Fix up module register offsets for modules with mpu_rt_idx.
++ * Only needed for cpsw with interconnect target module defined
++ * in device tree while still using legacy hwmod platform data
++ * for rev, sysc and syss registers.
++ *
++ * Can be removed when all cpsw hwmod platform data has been
++ * dropped.
++ */
++static void omap_hwmod_fix_mpu_rt_idx(struct omap_hwmod *oh,
++				      struct device_node *np,
++				      struct resource *res)
++{
++	struct device_node *child = NULL;
++	int error;
++
++	child = of_get_next_child(np, child);
++	if (!child)
++		return;
++
++	error = of_address_to_resource(child, oh->mpu_rt_idx, res);
++	if (error)
++		pr_err("%s: error mapping mpu_rt_idx: %i\n",
++		       __func__, error);
++}
++
+ /**
+  * omap_hwmod_parse_module_range - map module IO range from device tree
+  * @oh: struct omap_hwmod *
+@@ -2220,7 +2251,13 @@ int omap_hwmod_parse_module_range(struct omap_hwmod *oh,
+ 	size = be32_to_cpup(ranges);
+ 
+ 	pr_debug("omap_hwmod: %s %s at 0x%llx size 0x%llx\n",
+-		 oh->name, np->name, base, size);
++		 oh ? oh->name : "", np->name, base, size);
++
++	if (oh && oh->mpu_rt_idx) {
++		omap_hwmod_fix_mpu_rt_idx(oh, np, res);
++
++		return 0;
++	}
+ 
+ 	res->start = base;
+ 	res->end = base + size - 1;
+diff --git a/arch/arm/mach-omap2/omap_hwmod_reset.c b/arch/arm/mach-omap2/omap_hwmod_reset.c
+index b68f9c0aff0b..d5ddba00bb73 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_reset.c
++++ b/arch/arm/mach-omap2/omap_hwmod_reset.c
+@@ -92,11 +92,13 @@ static void omap_rtc_wait_not_busy(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(OMAP_RTC_KICK0_VALUE, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(OMAP_RTC_KICK1_VALUE, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+ 
+ /**
+@@ -110,9 +112,11 @@ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_lock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+index e19dcd6cb767..0a42b016f257 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+@@ -80,7 +80,7 @@
+ 
+ 	vspd3: vsp@fea38000 {
+ 		compatible = "renesas,vsp2";
+-		reg = <0 0xfea38000 0 0x8000>;
++		reg = <0 0xfea38000 0 0x5000>;
+ 		interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 620>;
+ 		power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+index d842940b2f43..91c392f879f9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+@@ -2530,7 +2530,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2541,7 +2541,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2552,7 +2552,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7796.dtsi b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+index 7c25be6b5af3..a3653f9f4627 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7796.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+@@ -2212,7 +2212,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2223,7 +2223,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2234,7 +2234,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 486aecacb22a..ca618228fce1 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -1397,7 +1397,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+@@ -1416,7 +1416,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index 98a2317a16c4..89dc4e343b7c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -776,7 +776,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77970_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index 2506f46293e8..ac9aadf2723c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -699,7 +699,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+@@ -709,7 +709,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 9256fbaaab7f..5853f5177b4b 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -440,7 +440,7 @@
+ 			};
+ 		};
+ 
+-		port@10 {
++		port@a {
+ 			reg = <10>;
+ 
+ 			adv7482_txa: endpoint {
+@@ -450,7 +450,7 @@
+ 			};
+ 		};
+ 
+-		port@11 {
++		port@b {
+ 			reg = <11>;
+ 
+ 			adv7482_txb: endpoint {
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 56a0260ceb11..d5c6bb1562d8 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -57,6 +57,45 @@ static u64 core_reg_offset_from_id(u64 id)
+ 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
+ }
+ 
++static int validate_core_offset(const struct kvm_one_reg *reg)
++{
++	u64 off = core_reg_offset_from_id(reg->id);
++	int size;
++
++	switch (off) {
++	case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
++	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
++	case KVM_REG_ARM_CORE_REG(regs.sp):
++	case KVM_REG_ARM_CORE_REG(regs.pc):
++	case KVM_REG_ARM_CORE_REG(regs.pstate):
++	case KVM_REG_ARM_CORE_REG(sp_el1):
++	case KVM_REG_ARM_CORE_REG(elr_el1):
++	case KVM_REG_ARM_CORE_REG(spsr[0]) ...
++	     KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
++		size = sizeof(__u64);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
++	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
++		size = sizeof(__uint128_t);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
++		size = sizeof(__u32);
++		break;
++
++	default:
++		return -EINVAL;
++	}
++
++	if (KVM_REG_SIZE(reg->id) == size &&
++	    IS_ALIGNED(off, size / sizeof(__u32)))
++		return 0;
++
++	return -EINVAL;
++}
++
+ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ 	/*
+@@ -76,6 +115,9 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id)))
+ 		return -EFAULT;
+ 
+@@ -98,6 +140,9 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (KVM_REG_SIZE(reg->id) > sizeof(tmp))
+ 		return -EINVAL;
+ 
+@@ -107,17 +152,25 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
+-		u32 mode = (*(u32 *)valp) & COMPAT_PSR_MODE_MASK;
++		u64 mode = (*(u64 *)valp) & COMPAT_PSR_MODE_MASK;
+ 		switch (mode) {
+ 		case COMPAT_PSR_MODE_USR:
++			if (!system_supports_32bit_el0())
++				return -EINVAL;
++			break;
+ 		case COMPAT_PSR_MODE_FIQ:
+ 		case COMPAT_PSR_MODE_IRQ:
+ 		case COMPAT_PSR_MODE_SVC:
+ 		case COMPAT_PSR_MODE_ABT:
+ 		case COMPAT_PSR_MODE_UND:
++			if (!vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
++			break;
+ 		case PSR_MODE_EL0t:
+ 		case PSR_MODE_EL1t:
+ 		case PSR_MODE_EL1h:
++			if (vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
+ 			break;
+ 		default:
+ 			err = -EINVAL;
+diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile
+index c22da16d67b8..5c7bfa8478e7 100644
+--- a/arch/mips/boot/Makefile
++++ b/arch/mips/boot/Makefile
+@@ -118,10 +118,12 @@ ifeq ($(ADDR_BITS),64)
+ 	itb_addr_cells = 2
+ endif
+ 
++targets += vmlinux.its.S
++
+ quiet_cmd_its_cat = CAT     $@
+-      cmd_its_cat = cat $^ >$@
++      cmd_its_cat = cat $(filter-out $(PHONY), $^) >$@
+ 
+-$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS))
++$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) FORCE
+ 	$(call if_changed,its_cat)
+ 
+ quiet_cmd_cpp_its_S = ITS     $@
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index f817342aab8f..53729220b48d 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1321,9 +1321,7 @@ EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100)
+ 
+ #ifdef CONFIG_PPC_DENORMALISATION
+ 	mfspr	r10,SPRN_HSRR1
+-	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
+ 	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
+-	addi	r11,r11,-4		/* HSRR0 is next instruction */
+ 	bne+	denorm_assist
+ #endif
+ 
+@@ -1389,6 +1387,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
+  */
+ 	XVCPSGNDP32(32)
+ denorm_done:
++	mfspr	r11,SPRN_HSRR0
++	subi	r11,r11,4
+ 	mtspr	SPRN_HSRR0,r11
+ 	mtcrf	0x80,r9
+ 	ld	r9,PACA_EXGEN+EX_R9(r13)
+diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
+index 936c7e2d421e..b53401334e81 100644
+--- a/arch/powerpc/kernel/machine_kexec.c
++++ b/arch/powerpc/kernel/machine_kexec.c
+@@ -188,7 +188,12 @@ void __init reserve_crashkernel(void)
+ 			(unsigned long)(crashk_res.start >> 20),
+ 			(unsigned long)(memblock_phys_mem_size() >> 20));
+ 
+-	memblock_reserve(crashk_res.start, crash_size);
++	if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
++	    memblock_reserve(crashk_res.start, crash_size)) {
++		pr_err("Failed to reserve memory for crashkernel!\n");
++		crashk_res.start = crashk_res.end = 0;
++		return;
++	}
+ }
+ 
+ int overlaps_crashkernel(unsigned long start, unsigned long size)
+diff --git a/arch/powerpc/lib/checksum_64.S b/arch/powerpc/lib/checksum_64.S
+index 886ed94b9c13..d05c8af4ac51 100644
+--- a/arch/powerpc/lib/checksum_64.S
++++ b/arch/powerpc/lib/checksum_64.S
+@@ -443,6 +443,9 @@ _GLOBAL(csum_ipv6_magic)
+ 	addc	r0, r8, r9
+ 	ld	r10, 0(r4)
+ 	ld	r11, 8(r4)
++#ifdef CONFIG_CPU_LITTLE_ENDIAN
++	rotldi	r5, r5, 8
++#endif
+ 	adde	r0, r0, r10
+ 	add	r5, r5, r7
+ 	adde	r0, r0, r11
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 35ac5422903a..b5a71baedbc2 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1452,7 +1452,8 @@ static struct timer_list topology_timer;
+ 
+ static void reset_topology_timer(void)
+ {
+-	mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
++	if (vphn_enabled)
++		mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+ }
+ 
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index 0e7810ccd1ae..c18d17d830a1 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -44,7 +44,7 @@ static void scan_pkey_feature(void)
+ 	 * Since any pkey can be used for data or execute, we will just treat
+ 	 * all keys as equal and track them as one entity.
+ 	 */
+-	pkeys_total = be32_to_cpu(vals[0]);
++	pkeys_total = vals[0];
+ 	pkeys_devtree_defined = true;
+ }
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index a2cdf358a3ac..0976049d3365 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2841,7 +2841,7 @@ static long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ 	level_shift = entries_shift + 3;
+ 	level_shift = max_t(unsigned, level_shift, PAGE_SHIFT);
+ 
+-	if ((level_shift - 3) * levels + page_shift >= 60)
++	if ((level_shift - 3) * levels + page_shift >= 55)
+ 		return -EINVAL;
+ 
+ 	/* Allocate TCE table */
+diff --git a/arch/s390/kernel/sysinfo.c b/arch/s390/kernel/sysinfo.c
+index 54f5496913fa..12f80d1f0415 100644
+--- a/arch/s390/kernel/sysinfo.c
++++ b/arch/s390/kernel/sysinfo.c
+@@ -59,6 +59,8 @@ int stsi(void *sysinfo, int fc, int sel1, int sel2)
+ }
+ EXPORT_SYMBOL(stsi);
+ 
++#ifdef CONFIG_PROC_FS
++
+ static bool convert_ext_name(unsigned char encoding, char *name, size_t len)
+ {
+ 	switch (encoding) {
+@@ -301,6 +303,8 @@ static int __init sysinfo_create_proc(void)
+ }
+ device_initcall(sysinfo_create_proc);
+ 
++#endif /* CONFIG_PROC_FS */
++
+ /*
+  * Service levels interface.
+  */
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 6ad15d3fab81..84111a43ea29 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -80,7 +80,7 @@ struct qin64 {
+ struct dcss_segment {
+ 	struct list_head list;
+ 	char dcss_name[8];
+-	char res_name[15];
++	char res_name[16];
+ 	unsigned long start_addr;
+ 	unsigned long end;
+ 	atomic_t ref_count;
+@@ -433,7 +433,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ 	memcpy(&seg->res_name, seg->dcss_name, 8);
+ 	EBCASC(seg->res_name, 8);
+ 	seg->res_name[8] = '\0';
+-	strncat(seg->res_name, " (DCSS)", 7);
++	strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ 	seg->res->name = seg->res_name;
+ 	rc = seg->vm_segtype;
+ 	if (rc == SEG_TYPE_SC ||
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index e3bd5627afef..76d89ee8b428 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -28,7 +28,7 @@ static struct ctl_table page_table_sysctl[] = {
+ 		.data		= &page_table_allocate_pgste,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= S_IRUGO | S_IWUSR,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= &page_table_allocate_pgste_min,
+ 		.extra2		= &page_table_allocate_pgste_max,
+ 	},
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 8ae7ffda8f98..0ab33af41fbd 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -92,7 +92,7 @@ END(native_usergs_sysret64)
+ .endm
+ 
+ .macro TRACE_IRQS_IRETQ_DEBUG
+-	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* interrupts off? */
+ 	jnc	1f
+ 	TRACE_IRQS_ON_DEBUG
+ 1:
+@@ -701,7 +701,7 @@ retint_kernel:
+ #ifdef CONFIG_PREEMPT
+ 	/* Interrupts are off */
+ 	/* Check if we need preemption */
+-	bt	$9, EFLAGS(%rsp)		/* were interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
+ 	jnc	1f
+ 0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
+ 	jnz	1f
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index cf372b90557e..a4170048a30b 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -346,7 +346,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = task_ctx->tos;
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < task_ctx->valid_lbrs; i++) {
+ 		lbr_idx = (tos - i) & mask;
+ 		wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
+ 		wrlbr_to  (lbr_idx, task_ctx->lbr_to[i]);
+@@ -354,6 +354,15 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++
++	for (; i < x86_pmu.lbr_nr; i++) {
++		lbr_idx = (tos - i) & mask;
++		wrlbr_from(lbr_idx, 0);
++		wrlbr_to(lbr_idx, 0);
++		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
++			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
++	}
++
+ 	wrmsrl(x86_pmu.lbr_tos, tos);
+ 	task_ctx->lbr_stack_state = LBR_NONE;
+ }
+@@ -361,7 +370,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ {
+ 	unsigned lbr_idx, mask;
+-	u64 tos;
++	u64 tos, from;
+ 	int i;
+ 
+ 	if (task_ctx->lbr_callstack_users == 0) {
+@@ -371,13 +380,17 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = intel_pmu_lbr_tos();
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < x86_pmu.lbr_nr; i++) {
+ 		lbr_idx = (tos - i) & mask;
+-		task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
++		from = rdlbr_from(lbr_idx);
++		if (!from)
++			break;
++		task_ctx->lbr_from[i] = from;
+ 		task_ctx->lbr_to[i]   = rdlbr_to(lbr_idx);
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++	task_ctx->valid_lbrs = i;
+ 	task_ctx->tos = tos;
+ 	task_ctx->lbr_stack_state = LBR_VALID;
+ }
+@@ -531,7 +544,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
+  */
+ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ {
+-	bool need_info = false;
++	bool need_info = false, call_stack = false;
+ 	unsigned long mask = x86_pmu.lbr_nr - 1;
+ 	int lbr_format = x86_pmu.intel_cap.lbr_format;
+ 	u64 tos = intel_pmu_lbr_tos();
+@@ -542,7 +555,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 	if (cpuc->lbr_sel) {
+ 		need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
+ 		if (cpuc->lbr_sel->config & LBR_CALL_STACK)
+-			num = tos;
++			call_stack = true;
+ 	}
+ 
+ 	for (i = 0; i < num; i++) {
+@@ -555,6 +568,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 		from = rdlbr_from(lbr_idx);
+ 		to   = rdlbr_to(lbr_idx);
+ 
++		/*
++		 * Read LBR call stack entries
++		 * until invalid entry (0s) is detected.
++		 */
++		if (call_stack && !from)
++			break;
++
+ 		if (lbr_format == LBR_FORMAT_INFO && need_info) {
+ 			u64 info;
+ 
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 9f3711470ec1..6b72a92069fd 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -648,6 +648,7 @@ struct x86_perf_task_context {
+ 	u64 lbr_to[MAX_LBR_ENTRIES];
+ 	u64 lbr_info[MAX_LBR_ENTRIES];
+ 	int tos;
++	int valid_lbrs;
+ 	int lbr_callstack_users;
+ 	int lbr_stack_state;
+ };
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index e203169931c7..6390bd8c141b 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -14,6 +14,16 @@
+ #ifndef _ASM_X86_FIXMAP_H
+ #define _ASM_X86_FIXMAP_H
+ 
++/*
++ * Exposed to assembly code for setting up initial page tables. Cannot be
++ * calculated in assembly code (fixmap entries are an enum), but is sanity
++ * checked in the actual fixmap C code to make sure that the fixmap is
++ * covered fully.
++ */
++#define FIXMAP_PMD_NUM	2
++/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
++#define FIXMAP_PMD_TOP	507
++
+ #ifndef __ASSEMBLY__
+ #include <linux/kernel.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 82ff20b0ae45..20127d551ab5 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -14,6 +14,7 @@
+ #include <asm/processor.h>
+ #include <linux/bitops.h>
+ #include <linux/threads.h>
++#include <asm/fixmap.h>
+ 
+ extern p4d_t level4_kernel_pgt[512];
+ extern p4d_t level4_ident_pgt[512];
+@@ -22,7 +23,7 @@ extern pud_t level3_ident_pgt[512];
+ extern pmd_t level2_kernel_pgt[512];
+ extern pmd_t level2_fixmap_pgt[512];
+ extern pmd_t level2_ident_pgt[512];
+-extern pte_t level1_fixmap_pgt[512];
++extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
+ extern pgd_t init_top_pgt[];
+ 
+ #define swapper_pg_dir init_top_pgt
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 8047379e575a..11455200ae66 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -35,6 +35,7 @@
+ #include <asm/bootparam_utils.h>
+ #include <asm/microcode.h>
+ #include <asm/kasan.h>
++#include <asm/fixmap.h>
+ 
+ /*
+  * Manage page tables very early on.
+@@ -165,7 +166,8 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	pud[511] += load_delta;
+ 
+ 	pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
+-	pmd[506] += load_delta;
++	for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
++		pmd[i] += load_delta;
+ 
+ 	/*
+ 	 * Set up the identity mapping for the switchover.  These
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 8344dd2f310a..6bc215c15ce0 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -24,6 +24,7 @@
+ #include "../entry/calling.h"
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
++#include <asm/fixmap.h>
+ 
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/asm-offsets.h>
+@@ -445,13 +446,20 @@ NEXT_PAGE(level2_kernel_pgt)
+ 		KERNEL_IMAGE_SIZE/PMD_SIZE)
+ 
+ NEXT_PAGE(level2_fixmap_pgt)
+-	.fill	506,8,0
+-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+-	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
+-	.fill	5,8,0
++	.fill	(512 - 4 - FIXMAP_PMD_NUM),8,0
++	pgtno = 0
++	.rept (FIXMAP_PMD_NUM)
++	.quad level1_fixmap_pgt + (pgtno << PAGE_SHIFT) - __START_KERNEL_map \
++		+ _PAGE_TABLE_NOENC;
++	pgtno = pgtno + 1
++	.endr
++	/* 6 MB reserved space + a 2MB hole */
++	.fill	4,8,0
+ 
+ NEXT_PAGE(level1_fixmap_pgt)
++	.rept (FIXMAP_PMD_NUM)
+ 	.fill	512,8,0
++	.endr
+ 
+ #undef PMDS
+ 
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 19afdbd7d0a7..5532d1be7687 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -12,6 +12,7 @@
+ #include <asm/setup.h>
+ #include <asm/apic.h>
+ #include <asm/param.h>
++#include <asm/tsc.h>
+ 
+ #define MAX_NUM_FREQS	9
+ 
+diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
+index 34a2a3bfde9c..22cbad56acab 100644
+--- a/arch/x86/mm/numa_emulation.c
++++ b/arch/x86/mm/numa_emulation.c
+@@ -61,7 +61,7 @@ static int __init emu_setup_memblk(struct numa_meminfo *ei,
+ 	eb->nid = nid;
+ 
+ 	if (emu_nid_to_phys[nid] == NUMA_NO_NODE)
+-		emu_nid_to_phys[nid] = nid;
++		emu_nid_to_phys[nid] = pb->nid;
+ 
+ 	pb->start += size;
+ 	if (pb->start >= pb->end) {
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index e3deefb891da..a300ffeece9b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -577,6 +577,15 @@ void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
+ {
+ 	unsigned long address = __fix_to_virt(idx);
+ 
++#ifdef CONFIG_X86_64
++       /*
++	* Ensure that the static initial page tables are covering the
++	* fixmap completely.
++	*/
++	BUILD_BUG_ON(__end_of_permanent_fixed_addresses >
++		     (FIXMAP_PMD_NUM * PTRS_PER_PTE));
++#endif
++
+ 	if (idx >= __end_of_fixed_addresses) {
+ 		BUG();
+ 		return;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 1d2106d83b4e..019da252a04f 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -239,7 +239,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+  *
+  * Returns a pointer to a PTE on success, or NULL on failure.
+  */
+-static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ 	pmd_t *pmd;
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 071d82ec9abb..2473eaca3468 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1908,7 +1908,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	/* L3_k[511] -> level2_fixmap_pgt */
+ 	convert_pfn_mfn(level3_kernel_pgt);
+ 
+-	/* L3_k[511][506] -> level1_fixmap_pgt */
++	/* L3_k[511][508-FIXMAP_PMD_NUM ... 507] -> level1_fixmap_pgt */
+ 	convert_pfn_mfn(level2_fixmap_pgt);
+ 
+ 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
+@@ -1953,7 +1953,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+-	set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
++
++	for (i = 0; i < FIXMAP_PMD_NUM; i++) {
++		set_page_prot(level1_fixmap_pgt + i * PTRS_PER_PTE,
++			      PAGE_KERNEL_RO);
++	}
+ 
+ 	/* Pin down new L4 */
+ 	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+diff --git a/block/elevator.c b/block/elevator.c
+index fa828b5bfd4b..89a48a3a8c12 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -609,7 +609,7 @@ void elv_drain_elevator(struct request_queue *q)
+ 
+ 	while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
+ 		;
+-	if (q->nr_sorted && printed++ < 10) {
++	if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
+ 		printk(KERN_ERR "%s: forced dispatching is broken "
+ 		       "(nr_sorted=%u), please report this\n",
+ 		       q->elevator->type->elevator_name, q->nr_sorted);
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index 4ee7c041bb82..8882e90e868e 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -368,6 +368,7 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+@@ -442,6 +443,7 @@ static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "givcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<built-in>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 77b5fa293f66..f93abf13b5d4 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -510,6 +510,7 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_blkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 2345a5ee2dbb..40ed3ec9fc94 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -235,9 +235,6 @@ static int acpi_lid_notify_state(struct acpi_device *device, int state)
+ 		button->last_time = ktime_get();
+ 	}
+ 
+-	if (state)
+-		acpi_pm_wakeup_event(&device->dev);
+-
+ 	ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
+ 	if (ret == NOTIFY_DONE)
+ 		ret = blocking_notifier_call_chain(&acpi_lid_notifier, state,
+@@ -366,7 +363,8 @@ int acpi_lid_open(void)
+ }
+ EXPORT_SYMBOL(acpi_lid_open);
+ 
+-static int acpi_lid_update_state(struct acpi_device *device)
++static int acpi_lid_update_state(struct acpi_device *device,
++				 bool signal_wakeup)
+ {
+ 	int state;
+ 
+@@ -374,6 +372,9 @@ static int acpi_lid_update_state(struct acpi_device *device)
+ 	if (state < 0)
+ 		return state;
+ 
++	if (state && signal_wakeup)
++		acpi_pm_wakeup_event(&device->dev);
++
+ 	return acpi_lid_notify_state(device, state);
+ }
+ 
+@@ -384,7 +385,7 @@ static void acpi_lid_initialize_state(struct acpi_device *device)
+ 		(void)acpi_lid_notify_state(device, 1);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_METHOD:
+-		(void)acpi_lid_update_state(device);
++		(void)acpi_lid_update_state(device, false);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_IGNORE:
+ 	default:
+@@ -409,7 +410,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ 			users = button->input->users;
+ 			mutex_unlock(&button->input->mutex);
+ 			if (users)
+-				acpi_lid_update_state(device);
++				acpi_lid_update_state(device, true);
+ 		} else {
+ 			int keycode;
+ 
+diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
+index 5d4b72e21161..569a4a662dcd 100644
+--- a/drivers/ata/pata_ftide010.c
++++ b/drivers/ata/pata_ftide010.c
+@@ -256,14 +256,12 @@ static struct ata_port_operations pata_ftide010_port_ops = {
+ 	.qc_issue	= ftide010_qc_issue,
+ };
+ 
+-static struct ata_port_info ftide010_port_info[] = {
+-	{
+-		.flags		= ATA_FLAG_SLAVE_POSS,
+-		.mwdma_mask	= ATA_MWDMA2,
+-		.udma_mask	= ATA_UDMA6,
+-		.pio_mask	= ATA_PIO4,
+-		.port_ops	= &pata_ftide010_port_ops,
+-	},
++static struct ata_port_info ftide010_port_info = {
++	.flags		= ATA_FLAG_SLAVE_POSS,
++	.mwdma_mask	= ATA_MWDMA2,
++	.udma_mask	= ATA_UDMA6,
++	.pio_mask	= ATA_PIO4,
++	.port_ops	= &pata_ftide010_port_ops,
+ };
+ 
+ #if IS_ENABLED(CONFIG_SATA_GEMINI)
+@@ -349,6 +347,7 @@ static int pata_ftide010_gemini_cable_detect(struct ata_port *ap)
+ }
+ 
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	struct device *dev = ftide->dev;
+@@ -373,7 +372,13 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ 
+ 	/* Flag port as SATA-capable */
+ 	if (gemini_sata_bridge_enabled(sg, is_ata1))
+-		ftide010_port_info[0].flags |= ATA_FLAG_SATA;
++		pi->flags |= ATA_FLAG_SATA;
++
++	/* This device has broken DMA, only PIO works */
++	if (of_machine_is_compatible("itian,sq201")) {
++		pi->mwdma_mask = 0;
++		pi->udma_mask = 0;
++	}
+ 
+ 	/*
+ 	 * We assume that a simple 40-wire cable is used in the PATA mode.
+@@ -435,6 +440,7 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ }
+ #else
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	return -ENOTSUPP;
+@@ -446,7 +452,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+-	const struct ata_port_info pi = ftide010_port_info[0];
++	struct ata_port_info pi = ftide010_port_info;
+ 	const struct ata_port_info *ppi[] = { &pi, NULL };
+ 	struct ftide010 *ftide;
+ 	struct resource *res;
+@@ -490,6 +496,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ 		 * are ATA0. This will also set up the cable types.
+ 		 */
+ 		ret = pata_ftide010_gemini_init(ftide,
++				&pi,
+ 				(res->start == 0x63400000));
+ 		if (ret)
+ 			goto err_dis_clk;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8871b5044d9e..7d7c698c0213 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3470,6 +3470,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ 					  (struct floppy_struct **)&outparam);
+ 		if (ret)
+ 			return ret;
++		memcpy(&inparam.g, outparam,
++				offsetof(struct floppy_struct, name));
++		outparam = &inparam.g;
+ 		break;
+ 	case FDMSGON:
+ 		UDP->flags |= FTD_MSG;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f73a27ea28cc..75947f04fc75 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -374,6 +374,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8723DE Bluetooth devices */
++	{ USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8821AE Bluetooth devices */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 80d60f43db56..4576a1268e0e 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -490,32 +490,29 @@ static int sysc_check_registers(struct sysc *ddata)
+ 
+ /**
+  * syc_ioremap - ioremap register space for the interconnect target module
+- * @ddata: deviec driver data
++ * @ddata: device driver data
+  *
+  * Note that the interconnect target module registers can be anywhere
+- * within the first child device address space. For example, SGX has
+- * them at offset 0x1fc00 in the 32MB module address space. We just
+- * what we need around the interconnect target module registers.
++ * within the interconnect target module range. For example, SGX has
++ * them at offset 0x1fc00 in the 32MB module address space. And cpsw
++ * has them at offset 0x1200 in the CPSW_WR child. Usually the
++ * the interconnect target module registers are at the beginning of
++ * the module range though.
+  */
+ static int sysc_ioremap(struct sysc *ddata)
+ {
+-	u32 size = 0;
+-
+-	if (ddata->offsets[SYSC_SYSSTATUS] >= 0)
+-		size = ddata->offsets[SYSC_SYSSTATUS];
+-	else if (ddata->offsets[SYSC_SYSCONFIG] >= 0)
+-		size = ddata->offsets[SYSC_SYSCONFIG];
+-	else if (ddata->offsets[SYSC_REVISION] >= 0)
+-		size = ddata->offsets[SYSC_REVISION];
+-	else
+-		return -EINVAL;
++	int size;
+ 
+-	size &= 0xfff00;
+-	size += SZ_256;
++	size = max3(ddata->offsets[SYSC_REVISION],
++		    ddata->offsets[SYSC_SYSCONFIG],
++		    ddata->offsets[SYSC_SYSSTATUS]);
++
++	if (size < 0 || (size + sizeof(u32)) > ddata->module_size)
++		return -EINVAL;
+ 
+ 	ddata->module_va = devm_ioremap(ddata->dev,
+ 					ddata->module_pa,
+-					size);
++					size + sizeof(u32));
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+@@ -1178,10 +1175,10 @@ static int sysc_child_suspend_noirq(struct device *dev)
+ 	if (!pm_runtime_status_suspended(dev)) {
+ 		error = pm_generic_runtime_suspend(dev);
+ 		if (error) {
+-			dev_err(dev, "%s error at %i: %i\n",
+-				__func__, __LINE__, error);
++			dev_warn(dev, "%s busy at %i: %i\n",
++				 __func__, __LINE__, error);
+ 
+-			return error;
++			return 0;
+ 		}
+ 
+ 		error = sysc_runtime_suspend(ddata->dev);
+diff --git a/drivers/clk/x86/clk-st.c b/drivers/clk/x86/clk-st.c
+index fb62f3938008..3a0996f2d556 100644
+--- a/drivers/clk/x86/clk-st.c
++++ b/drivers/clk/x86/clk-st.c
+@@ -46,7 +46,7 @@ static int st_clk_probe(struct platform_device *pdev)
+ 		clk_oscout1_parents, ARRAY_SIZE(clk_oscout1_parents),
+ 		0, st_data->base + CLKDRVSTR2, OSCOUT1CLK25MHZ, 3, 0, NULL);
+ 
+-	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_25M]->clk);
++	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_48M]->clk);
+ 
+ 	hws[ST_CLK_GATE] = clk_hw_register_gate(NULL, "oscout1", "oscout1_mux",
+ 		0, st_data->base + MISCCLKCNTL1, OSCCLKENB,
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_dev.h b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+index 9a476bb6d4c7..af596455b420 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_dev.h
++++ b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+@@ -35,6 +35,7 @@ struct nitrox_cmdq {
+ 	/* requests in backlog queues */
+ 	atomic_t backlog_count;
+ 
++	int write_idx;
+ 	/* command size 32B/64B */
+ 	u8 instr_size;
+ 	u8 qno;
+@@ -87,7 +88,7 @@ struct nitrox_bh {
+ 	struct bh_data *slc;
+ };
+ 
+-/* NITROX-5 driver state */
++/* NITROX-V driver state */
+ #define NITROX_UCODE_LOADED	0
+ #define NITROX_READY		1
+ 
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_lib.c b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+index 4fdc921ba611..9906c0086647 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_lib.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+@@ -36,6 +36,7 @@ static int cmdq_common_init(struct nitrox_cmdq *cmdq)
+ 	cmdq->head = PTR_ALIGN(cmdq->head_unaligned, PKT_IN_ALIGN);
+ 	cmdq->dma = PTR_ALIGN(cmdq->dma_unaligned, PKT_IN_ALIGN);
+ 	cmdq->qsize = (qsize + PKT_IN_ALIGN);
++	cmdq->write_idx = 0;
+ 
+ 	spin_lock_init(&cmdq->response_lock);
+ 	spin_lock_init(&cmdq->cmdq_lock);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+index deaefd532aaa..4a362fc22f62 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+@@ -42,6 +42,16 @@
+  *   Invalid flag options in AES-CCM IV.
+  */
+ 
++static inline int incr_index(int index, int count, int max)
++{
++	if ((index + count) >= max)
++		index = index + count - max;
++	else
++		index += count;
++
++	return index;
++}
++
+ /**
+  * dma_free_sglist - unmap and free the sg lists.
+  * @ndev: N5 device
+@@ -426,30 +436,29 @@ static void post_se_instr(struct nitrox_softreq *sr,
+ 			  struct nitrox_cmdq *cmdq)
+ {
+ 	struct nitrox_device *ndev = sr->ndev;
+-	union nps_pkt_in_instr_baoff_dbell pkt_in_baoff_dbell;
+-	u64 offset;
++	int idx;
+ 	u8 *ent;
+ 
+ 	spin_lock_bh(&cmdq->cmdq_lock);
+ 
+-	/* get the next write offset */
+-	offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(cmdq->qno);
+-	pkt_in_baoff_dbell.value = nitrox_read_csr(ndev, offset);
++	idx = cmdq->write_idx;
+ 	/* copy the instruction */
+-	ent = cmdq->head + pkt_in_baoff_dbell.s.aoff;
++	ent = cmdq->head + (idx * cmdq->instr_size);
+ 	memcpy(ent, &sr->instr, cmdq->instr_size);
+-	/* flush the command queue updates */
+-	dma_wmb();
+ 
+-	sr->tstamp = jiffies;
+ 	atomic_set(&sr->status, REQ_POSTED);
+ 	response_list_add(sr, cmdq);
++	sr->tstamp = jiffies;
++	/* flush the command queue updates */
++	dma_wmb();
+ 
+ 	/* Ring doorbell with count 1 */
+ 	writeq(1, cmdq->dbell_csr_addr);
+ 	/* orders the doorbell rings */
+ 	mmiowb();
+ 
++	cmdq->write_idx = incr_index(idx, 1, ndev->qlen);
++
+ 	spin_unlock_bh(&cmdq->cmdq_lock);
+ }
+ 
+@@ -459,6 +468,9 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 	struct nitrox_softreq *sr, *tmp;
+ 	int ret = 0;
+ 
++	if (!atomic_read(&cmdq->backlog_count))
++		return 0;
++
+ 	spin_lock_bh(&cmdq->backlog_lock);
+ 
+ 	list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) {
+@@ -466,7 +478,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 
+ 		/* submit until space available */
+ 		if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+-			ret = -EBUSY;
++			ret = -ENOSPC;
+ 			break;
+ 		}
+ 		/* delete from backlog list */
+@@ -491,23 +503,20 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr)
+ {
+ 	struct nitrox_cmdq *cmdq = sr->cmdq;
+ 	struct nitrox_device *ndev = sr->ndev;
+-	int ret = -EBUSY;
++
++	/* try to post backlog requests */
++	post_backlog_cmds(cmdq);
+ 
+ 	if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+ 		if (!(sr->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+-			return -EAGAIN;
+-
++			return -ENOSPC;
++		/* add to backlog list */
+ 		backlog_list_add(sr, cmdq);
+-	} else {
+-		ret = post_backlog_cmds(cmdq);
+-		if (ret) {
+-			backlog_list_add(sr, cmdq);
+-			return ret;
+-		}
+-		post_se_instr(sr, cmdq);
+-		ret = -EINPROGRESS;
++		return -EBUSY;
+ 	}
+-	return ret;
++	post_se_instr(sr, cmdq);
++
++	return -EINPROGRESS;
+ }
+ 
+ /**
+@@ -624,11 +633,9 @@ int nitrox_process_se_request(struct nitrox_device *ndev,
+ 	 */
+ 	sr->instr.fdata[0] = *((u64 *)&req->gph);
+ 	sr->instr.fdata[1] = 0;
+-	/* flush the soft_req changes before posting the cmd */
+-	wmb();
+ 
+ 	ret = nitrox_enqueue_request(sr);
+-	if (ret == -EAGAIN)
++	if (ret == -ENOSPC)
+ 		goto send_fail;
+ 
+ 	return ret;
+diff --git a/drivers/crypto/chelsio/chtls/chtls.h b/drivers/crypto/chelsio/chtls/chtls.h
+index a53a0e6ba024..7725b6ee14ef 100644
+--- a/drivers/crypto/chelsio/chtls/chtls.h
++++ b/drivers/crypto/chelsio/chtls/chtls.h
+@@ -96,6 +96,10 @@ enum csk_flags {
+ 	CSK_CONN_INLINE,	/* Connection on HW */
+ };
+ 
++enum chtls_cdev_state {
++	CHTLS_CDEV_STATE_UP = 1
++};
++
+ struct listen_ctx {
+ 	struct sock *lsk;
+ 	struct chtls_dev *cdev;
+@@ -146,6 +150,7 @@ struct chtls_dev {
+ 	unsigned int send_page_order;
+ 	int max_host_sndbuf;
+ 	struct key_map kmap;
++	unsigned int cdev_state;
+ };
+ 
+ struct chtls_hws {
+diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c
+index 9b07f9165658..f59b044ebd25 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_main.c
++++ b/drivers/crypto/chelsio/chtls/chtls_main.c
+@@ -160,6 +160,7 @@ static void chtls_register_dev(struct chtls_dev *cdev)
+ 	tlsdev->hash = chtls_create_hash;
+ 	tlsdev->unhash = chtls_destroy_hash;
+ 	tls_register_device(&cdev->tlsdev);
++	cdev->cdev_state = CHTLS_CDEV_STATE_UP;
+ }
+ 
+ static void chtls_unregister_dev(struct chtls_dev *cdev)
+@@ -281,8 +282,10 @@ static void chtls_free_all_uld(void)
+ 	struct chtls_dev *cdev, *tmp;
+ 
+ 	mutex_lock(&cdev_mutex);
+-	list_for_each_entry_safe(cdev, tmp, &cdev_list, list)
+-		chtls_free_uld(cdev);
++	list_for_each_entry_safe(cdev, tmp, &cdev_list, list) {
++		if (cdev->cdev_state == CHTLS_CDEV_STATE_UP)
++			chtls_free_uld(cdev);
++	}
+ 	mutex_unlock(&cdev_mutex);
+ }
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index d0d5c4dbe097..5762c3c383f2 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -730,7 +730,8 @@ static int altr_s10_sdram_probe(struct platform_device *pdev)
+ 			 S10_DDR0_IRQ_MASK)) {
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "Error clearing SDRAM ECC count\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err2;
+ 	}
+ 
+ 	if (regmap_update_bits(drvdata->mc_vbase, priv->ecc_irq_en_offset,
+diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
+index 7481955160a4..20374b8248f0 100644
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -1075,14 +1075,14 @@ int __init edac_mc_sysfs_init(void)
+ 
+ 	err = device_add(mci_pdev);
+ 	if (err < 0)
+-		goto out_dev_free;
++		goto out_put_device;
+ 
+ 	edac_dbg(0, "device %s created\n", dev_name(mci_pdev));
+ 
+ 	return 0;
+ 
+- out_dev_free:
+-	kfree(mci_pdev);
++ out_put_device:
++	put_device(mci_pdev);
+  out:
+ 	return err;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8ed4dd9c571b..8e120bf60624 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1177,15 +1177,14 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 	rc = device_add(pvt->addrmatch_dev);
+ 	if (rc < 0)
+-		return rc;
++		goto err_put_addrmatch;
+ 
+ 	if (!pvt->is_registered) {
+ 		pvt->chancounts_dev = kzalloc(sizeof(*pvt->chancounts_dev),
+ 					      GFP_KERNEL);
+ 		if (!pvt->chancounts_dev) {
+-			put_device(pvt->addrmatch_dev);
+-			device_del(pvt->addrmatch_dev);
+-			return -ENOMEM;
++			rc = -ENOMEM;
++			goto err_del_addrmatch;
+ 		}
+ 
+ 		pvt->chancounts_dev->type = &all_channel_counts_type;
+@@ -1199,9 +1198,18 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 		rc = device_add(pvt->chancounts_dev);
+ 		if (rc < 0)
+-			return rc;
++			goto err_put_chancounts;
+ 	}
+ 	return 0;
++
++err_put_chancounts:
++	put_device(pvt->chancounts_dev);
++err_del_addrmatch:
++	device_del(pvt->addrmatch_dev);
++err_put_addrmatch:
++	put_device(pvt->addrmatch_dev);
++
++	return rc;
+ }
+ 
+ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+@@ -1211,11 +1219,11 @@ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+ 	edac_dbg(1, "\n");
+ 
+ 	if (!pvt->is_registered) {
+-		put_device(pvt->chancounts_dev);
+ 		device_del(pvt->chancounts_dev);
++		put_device(pvt->chancounts_dev);
+ 	}
+-	put_device(pvt->addrmatch_dev);
+ 	device_del(pvt->addrmatch_dev);
++	put_device(pvt->addrmatch_dev);
+ }
+ 
+ /****************************************************************************
+diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
+index e1037582e34d..b2635326546e 100644
+--- a/drivers/gpio/gpio-menz127.c
++++ b/drivers/gpio/gpio-menz127.c
+@@ -56,9 +56,9 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
+ 		rnd = fls(debounce) - 1;
+ 
+ 		if (rnd && (debounce & BIT(rnd - 1)))
+-			debounce = round_up(debounce, MEN_Z127_DB_MIN_US);
++			debounce = roundup(debounce, MEN_Z127_DB_MIN_US);
+ 		else
+-			debounce = round_down(debounce, MEN_Z127_DB_MIN_US);
++			debounce = rounddown(debounce, MEN_Z127_DB_MIN_US);
+ 
+ 		if (debounce > MEN_Z127_DB_MAX_US)
+ 			debounce = MEN_Z127_DB_MAX_US;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index d5d79727c55d..d9e4da146227 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -323,13 +323,6 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
+-	if (ret) {
+-		dev_err(tgi->dev,
+-			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
+-		return ret;
+-	}
+-
+ 	spin_lock_irqsave(&bank->lvl_lock[port], flags);
+ 
+ 	val = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
+@@ -342,6 +335,14 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, gpio), gpio, 0);
+ 	tegra_gpio_enable(tgi, gpio);
+ 
++	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
++	if (ret) {
++		dev_err(tgi->dev,
++			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
++		tegra_gpio_disable(tgi, gpio);
++		return ret;
++	}
++
+ 	if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
+ 		irq_set_handler_locked(d, handle_level_irq);
+ 	else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 5a196ec49be8..7200eea4f918 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -975,13 +975,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
+ 		if (r)
+ 			return r;
+ 
+-		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
+-			parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+-			if (!parser->ctx->preamble_presented) {
+-				parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+-				parser->ctx->preamble_presented = true;
+-			}
+-		}
++		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
++			parser->job->preamble_status |=
++				AMDGPU_PREAMBLE_IB_PRESENT;
+ 
+ 		if (parser->job->ring && parser->job->ring != ring)
+ 			return -EINVAL;
+@@ -1206,6 +1202,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 
+ 	amdgpu_cs_post_dependencies(p);
+ 
++	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
++	    !p->ctx->preamble_presented) {
++		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
++		p->ctx->preamble_presented = true;
++	}
++
+ 	cs->out.handle = seq;
+ 	job->uf_sequence = seq;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 7aaa263ad8c7..6b5d4a20860d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -164,8 +164,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 		return r;
+ 	}
+ 
++	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (ring->funcs->emit_pipeline_sync && job &&
+ 	    ((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
++	     (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+ 	     amdgpu_vm_need_pipeline_sync(ring, job))) {
+ 		need_pipe_sync = true;
+ 		dma_fence_put(tmp);
+@@ -196,7 +198,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	}
+ 
+ 	skip_preamble = ring->current_ctx == fence_ctx;
+-	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (job && ring->funcs->emit_cntxcntl) {
+ 		if (need_ctx_switch)
+ 			status |= AMDGPU_HAVE_CTX_SWITCH;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index fdcb498f6d19..c31fff32a321 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -123,6 +123,7 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
+ 	 * is validated on next vm use to avoid fault.
+ 	 * */
+ 	list_move_tail(&base->vm_status, &vm->evicted);
++	base->moved = true;
+ }
+ 
+ /**
+@@ -303,7 +304,6 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	uint64_t addr;
+ 	int r;
+ 
+-	addr = amdgpu_bo_gpu_offset(bo);
+ 	entries = amdgpu_bo_size(bo) / 8;
+ 
+ 	if (pte_support_ats) {
+@@ -335,6 +335,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	if (r)
+ 		goto error;
+ 
++	addr = amdgpu_bo_gpu_offset(bo);
+ 	if (ats_entries) {
+ 		uint64_t ats_value;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 818874b13c99..9057a5adb31b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -5614,6 +5614,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+ 
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->enter_safe_mode(adev);
+ 	switch (adev->asic_type) {
+ 	case CHIP_CARRIZO:
+ 	case CHIP_STONEY:
+@@ -5663,7 +5668,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	default:
+ 		break;
+ 	}
+-
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->exit_safe_mode(adev);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+index 7a1e77c93bf1..d8e469c594bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+@@ -1354,8 +1354,6 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
+ 		return ret;
+ 	}
+ 
+-	kv_update_current_ps(adev, adev->pm.dpm.boot_ps);
+-
+ 	if (adev->irq.installed &&
+ 	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
+ 		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
+@@ -3061,7 +3059,7 @@ static int kv_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 5c97a3671726..606f461dce49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -6887,7 +6887,6 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 
+ 	si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ 	si_thermal_start_thermal_controller(adev);
+-	ni_update_current_ps(adev, boot_ps);
+ 
+ 	return 0;
+ }
+@@ -7764,7 +7763,7 @@ static int si_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 88b09dd758ba..ca137757a69e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -133,7 +133,7 @@ static bool calculate_fb_and_fractional_fb_divider(
+ 	uint64_t feedback_divider;
+ 
+ 	feedback_divider =
+-		(uint64_t)(target_pix_clk_khz * ref_divider * post_divider);
++		(uint64_t)target_pix_clk_khz * ref_divider * post_divider;
+ 	feedback_divider *= 10;
+ 	/* additional factor, since we divide by 10 afterwards */
+ 	feedback_divider *= (uint64_t)(calc_pll_cs->fract_fb_divider_factor);
+@@ -145,8 +145,8 @@ static bool calculate_fb_and_fractional_fb_divider(
+  * of fractional feedback decimal point and the fractional FB Divider precision
+  * is 2 then the equation becomes (ullfeedbackDivider + 5*100) / (10*100))*/
+ 
+-	feedback_divider += (uint64_t)
+-			(5 * calc_pll_cs->fract_fb_divider_precision_factor);
++	feedback_divider += 5ULL *
++			    calc_pll_cs->fract_fb_divider_precision_factor;
+ 	feedback_divider =
+ 		div_u64(feedback_divider,
+ 			calc_pll_cs->fract_fb_divider_precision_factor * 10);
+@@ -203,8 +203,8 @@ static bool calc_fb_divider_checking_tolerance(
+ 			&fract_feedback_divider);
+ 
+ 	/*Actual calculated value*/
+-	actual_calc_clk_khz = (uint64_t)(feedback_divider *
+-					calc_pll_cs->fract_fb_divider_factor) +
++	actual_calc_clk_khz = (uint64_t)feedback_divider *
++					calc_pll_cs->fract_fb_divider_factor +
+ 							fract_feedback_divider;
+ 	actual_calc_clk_khz *= calc_pll_cs->ref_freq_khz;
+ 	actual_calc_clk_khz =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index c2037daa8e66..0efbf411667a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -239,6 +239,8 @@ void dml1_extract_rq_regs(
+ 	extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), rq_param.sizing.rq_l);
+ 	if (rq_param.yuv420)
+ 		extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), rq_param.sizing.rq_c);
++	else
++		memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
+ 
+ 	rq_regs->rq_regs_l.swath_height = dml_log2(rq_param.dlg.rq_l.swath_height);
+ 	rq_regs->rq_regs_c.swath_height = dml_log2(rq_param.dlg.rq_c.swath_height);
+diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+index b42e286616b0..84da7a5b84f3 100644
+--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
++++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+@@ -37,7 +37,9 @@ static int gem_show(struct seq_file *m, void *arg)
+ 		return ret;
+ 
+ 	seq_printf(m, "All Objects:\n");
++	mutex_lock(&priv->list_lock);
+ 	omap_gem_describe_objects(&priv->obj_list, m);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	mutex_unlock(&dev->struct_mutex);
+ 
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index ef3b0e3571ec..5fcf9eaf3eaf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -540,7 +540,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ 	priv->omaprev = soc ? (unsigned int)soc->data : 0;
+ 	priv->wq = alloc_ordered_workqueue("omapdrm", 0);
+ 
+-	spin_lock_init(&priv->list_lock);
++	mutex_init(&priv->list_lock);
+ 	INIT_LIST_HEAD(&priv->obj_list);
+ 
+ 	/* Allocate and initialize the DRM device. */
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
+index 6eaee4df4559..f27c8e216adf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.h
++++ b/drivers/gpu/drm/omapdrm/omap_drv.h
+@@ -71,7 +71,7 @@ struct omap_drm_private {
+ 	struct workqueue_struct *wq;
+ 
+ 	/* lock for obj_list below */
+-	spinlock_t list_lock;
++	struct mutex list_lock;
+ 
+ 	/* list of GEM objects: */
+ 	struct list_head obj_list;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index 17a53d207978..7a029b892a37 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1001,6 +1001,7 @@ int omap_gem_resume(struct drm_device *dev)
+ 	struct omap_gem_object *omap_obj;
+ 	int ret = 0;
+ 
++	mutex_lock(&priv->list_lock);
+ 	list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
+ 		if (omap_obj->block) {
+ 			struct drm_gem_object *obj = &omap_obj->base;
+@@ -1012,12 +1013,14 @@ int omap_gem_resume(struct drm_device *dev)
+ 					omap_obj->roll, true);
+ 			if (ret) {
+ 				dev_err(dev->dev, "could not repin: %d\n", ret);
+-				return ret;
++				goto done;
+ 			}
+ 		}
+ 	}
+ 
+-	return 0;
++done:
++	mutex_unlock(&priv->list_lock);
++	return ret;
+ }
+ #endif
+ 
+@@ -1085,9 +1088,9 @@ void omap_gem_free_object(struct drm_gem_object *obj)
+ 
+ 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_del(&omap_obj->mm_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	/* this means the object is still pinned.. which really should
+ 	 * not happen.  I think..
+@@ -1206,9 +1209,9 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
+ 			goto err_release;
+ 	}
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_add(&omap_obj->mm_list, &priv->obj_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	return obj;
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 50d19605c38f..e15fa2389e3f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -283,7 +283,6 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 		remote = of_graph_get_remote_port_parent(ep);
+ 		if (!remote) {
+ 			DRM_DEBUG_DRIVER("Error retrieving the output node\n");
+-			of_node_put(remote);
+ 			continue;
+ 		}
+ 
+@@ -297,11 +296,13 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 
+ 			if (of_graph_parse_endpoint(ep, &endpoint)) {
+ 				DRM_DEBUG_DRIVER("Couldn't parse endpoint\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 
+ 			if (!endpoint.id) {
+ 				DRM_DEBUG_DRIVER("Endpoint is our panel... skipping\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 5a52fc489a9d..966688f04741 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -477,13 +477,15 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 			dev_err(dev, "Couldn't create the PHY clock\n");
+ 			goto err_put_clk_pll0;
+ 		}
++
++		clk_prepare_enable(phy->clk_phy);
+ 	}
+ 
+ 	phy->rst_phy = of_reset_control_get_shared(node, "phy");
+ 	if (IS_ERR(phy->rst_phy)) {
+ 		dev_err(dev, "Could not get phy reset control\n");
+ 		ret = PTR_ERR(phy->rst_phy);
+-		goto err_put_clk_pll0;
++		goto err_disable_clk_phy;
+ 	}
+ 
+ 	ret = reset_control_deassert(phy->rst_phy);
+@@ -514,6 +516,8 @@ err_deassert_rst_phy:
+ 	reset_control_assert(phy->rst_phy);
+ err_put_rst_phy:
+ 	reset_control_put(phy->rst_phy);
++err_disable_clk_phy:
++	clk_disable_unprepare(phy->clk_phy);
+ err_put_clk_pll0:
+ 	if (phy->variant->has_phy_clk)
+ 		clk_put(phy->clk_pll0);
+@@ -531,6 +535,7 @@ void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
+ 
+ 	clk_disable_unprepare(phy->clk_mod);
+ 	clk_disable_unprepare(phy->clk_bus);
++	clk_disable_unprepare(phy->clk_phy);
+ 
+ 	reset_control_assert(phy->rst_phy);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a043ac3aae98..26005abd9c5d 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -85,6 +85,11 @@ struct v3d_dev {
+ 	 */
+ 	struct mutex reset_lock;
+ 
++	/* Lock taken when creating and pushing the GPU scheduler
++	 * jobs, to keep the sched-fence seqnos in order.
++	 */
++	struct mutex sched_lock;
++
+ 	struct {
+ 		u32 num_allocated;
+ 		u32 pages_allocated;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b513f9189caf..269fe16379c0 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -550,6 +550,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	if (ret)
+ 		goto fail;
+ 
++	mutex_lock(&v3d->sched_lock);
+ 	if (exec->bin.start != exec->bin.end) {
+ 		ret = drm_sched_job_init(&exec->bin.base,
+ 					 &v3d->queue[V3D_BIN].sched,
+@@ -576,6 +577,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	kref_get(&exec->refcount); /* put by scheduler job completion */
+ 	drm_sched_entity_push_job(&exec->render.base,
+ 				  &v3d_priv->sched_entity[V3D_RENDER]);
++	mutex_unlock(&v3d->sched_lock);
+ 
+ 	v3d_attach_object_fences(exec);
+ 
+@@ -594,6 +596,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	return 0;
+ 
+ fail_unreserve:
++	mutex_unlock(&v3d->sched_lock);
+ 	v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);
+ fail:
+ 	v3d_exec_put(exec);
+@@ -615,6 +618,7 @@ v3d_gem_init(struct drm_device *dev)
+ 	spin_lock_init(&v3d->job_lock);
+ 	mutex_init(&v3d->bo_lock);
+ 	mutex_init(&v3d->reset_lock);
++	mutex_init(&v3d->sched_lock);
+ 
+ 	/* Note: We don't allocate address 0.  Various bits of HW
+ 	 * treat 0 as special, such as the occlusion query counters
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index cf5aea1d6488..203ddf5723e8 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -543,6 +543,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 	/* Control word */
+ 	vc4_dlist_write(vc4_state,
+ 			SCALER_CTL0_VALID |
++			VC4_SET_FIELD(SCALER_CTL0_RGBA_EXPAND_ROUND, SCALER_CTL0_RGBA_EXPAND) |
+ 			(format->pixel_order << SCALER_CTL0_ORDER_SHIFT) |
+ 			(format->hvs << SCALER_CTL0_PIXEL_FORMAT_SHIFT) |
+ 			VC4_SET_FIELD(tiling, SCALER_CTL0_TILING) |
+@@ -874,7 +875,9 @@ static bool vc4_format_mod_supported(struct drm_plane *plane,
+ 	case DRM_FORMAT_YUV420:
+ 	case DRM_FORMAT_YVU420:
+ 	case DRM_FORMAT_NV12:
++	case DRM_FORMAT_NV21:
+ 	case DRM_FORMAT_NV16:
++	case DRM_FORMAT_NV61:
+ 	default:
+ 		return (modifier == DRM_FORMAT_MOD_LINEAR);
+ 	}
+diff --git a/drivers/hid/hid-ntrig.c b/drivers/hid/hid-ntrig.c
+index 43b1c7234316..9bc6f4867cb3 100644
+--- a/drivers/hid/hid-ntrig.c
++++ b/drivers/hid/hid-ntrig.c
+@@ -955,6 +955,8 @@ static int ntrig_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	ret = sysfs_create_group(&hdev->dev.kobj,
+ 			&ntrig_attribute_group);
++	if (ret)
++		hid_err(hdev, "cannot create sysfs group\n");
+ 
+ 	return 0;
+ err_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 5fd1159fc095..64773433b947 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1004,18 +1004,18 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		return client->irq;
+ 	}
+ 
+-	ihid = kzalloc(sizeof(struct i2c_hid), GFP_KERNEL);
++	ihid = devm_kzalloc(&client->dev, sizeof(*ihid), GFP_KERNEL);
+ 	if (!ihid)
+ 		return -ENOMEM;
+ 
+ 	if (client->dev.of_node) {
+ 		ret = i2c_hid_of_probe(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else if (!platform_data) {
+ 		ret = i2c_hid_acpi_pdata(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else {
+ 		ihid->pdata = *platform_data;
+ 	}
+@@ -1128,7 +1128,6 @@ err_regulator:
+ 
+ err:
+ 	i2c_hid_free_buffers(ihid);
+-	kfree(ihid);
+ 	return ret;
+ }
+ 
+@@ -1152,8 +1151,6 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 
+ 	regulator_disable(ihid->pdata.supply);
+ 
+-	kfree(ihid);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 9ef84998c7f3..37db2eb66ed7 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -303,14 +303,18 @@ static inline u16 volt2reg(int channel, long volt, u8 bypass_attn)
+ 	return clamp_val(reg, 0, 1023) & (0xff << 2);
+ }
+ 
+-static u16 adt7475_read_word(struct i2c_client *client, int reg)
++static int adt7475_read_word(struct i2c_client *client, int reg)
+ {
+-	u16 val;
++	int val1, val2;
+ 
+-	val = i2c_smbus_read_byte_data(client, reg);
+-	val |= (i2c_smbus_read_byte_data(client, reg + 1) << 8);
++	val1 = i2c_smbus_read_byte_data(client, reg);
++	if (val1 < 0)
++		return val1;
++	val2 = i2c_smbus_read_byte_data(client, reg + 1);
++	if (val2 < 0)
++		return val2;
+ 
+-	return val;
++	return val1 | (val2 << 8);
+ }
+ 
+ static void adt7475_write_word(struct i2c_client *client, int reg, u16 val)
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index e9e6aeabbf84..71d3445ba869 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -17,7 +17,7 @@
+  * Bi-directional Current/Power Monitor with I2C Interface
+  * Datasheet: http://www.ti.com/product/ina230
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  * Thanks to Jan Volkering
+  *
+  * This program is free software; you can redistribute it and/or modify
+@@ -329,6 +329,15 @@ static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+ 	return 0;
+ }
+ 
++static ssize_t ina2xx_show_shunt(struct device *dev,
++			      struct device_attribute *da,
++			      char *buf)
++{
++	struct ina2xx_data *data = dev_get_drvdata(dev);
++
++	return snprintf(buf, PAGE_SIZE, "%li\n", data->rshunt);
++}
++
+ static ssize_t ina2xx_store_shunt(struct device *dev,
+ 				  struct device_attribute *da,
+ 				  const char *buf, size_t count)
+@@ -403,7 +412,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
+ 
+ /* shunt resistance */
+ static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
+-			  ina2xx_show_value, ina2xx_store_shunt,
++			  ina2xx_show_shunt, ina2xx_store_shunt,
+ 			  INA2XX_CALIBRATION);
+ 
+ /* update interval (ina226 only) */
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index da962aa2cef5..fc6b7f8b62fb 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -139,7 +139,8 @@ static int intel_th_remove(struct device *dev)
+ 			th->thdev[i] = NULL;
+ 		}
+ 
+-		th->num_thdevs = lowest;
++		if (lowest >= 0)
++			th->num_thdevs = lowest;
+ 	}
+ 
+ 	if (thdrv->attr_group)
+@@ -487,7 +488,7 @@ static const struct intel_th_subdevice {
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+ 			{
+-				.start	= TH_MMIO_SW,
++				.start	= 1, /* use resource[1] */
+ 				.end	= 0,
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+@@ -580,6 +581,7 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 	struct intel_th_device *thdev;
+ 	struct resource res[3];
+ 	unsigned int req = 0;
++	bool is64bit = false;
+ 	int r, err;
+ 
+ 	thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
+@@ -589,12 +591,18 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 
+ 	thdev->drvdata = th->drvdata;
+ 
++	for (r = 0; r < th->num_resources; r++)
++		if (th->resource[r].flags & IORESOURCE_MEM_64) {
++			is64bit = true;
++			break;
++		}
++
+ 	memcpy(res, subdev->res,
+ 	       sizeof(struct resource) * subdev->nres);
+ 
+ 	for (r = 0; r < subdev->nres; r++) {
+ 		struct resource *devres = th->resource;
+-		int bar = TH_MMIO_CONFIG;
++		int bar = 0; /* cut subdevices' MMIO from resource[0] */
+ 
+ 		/*
+ 		 * Take .end == 0 to mean 'take the whole bar',
+@@ -603,6 +611,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 		 */
+ 		if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
+ 			bar = res[r].start;
++			if (is64bit)
++				bar *= 2;
+ 			res[r].start = 0;
+ 			res[r].end = resource_size(&devres[bar]) - 1;
+ 		}
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 45fcf0c37a9e..2806cdeda053 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1417,6 +1417,13 @@ static void i801_add_tco(struct i801_priv *priv)
+ }
+ 
+ #ifdef CONFIG_ACPI
++static bool i801_acpi_is_smbus_ioport(const struct i801_priv *priv,
++				      acpi_physical_address address)
++{
++	return address >= priv->smba &&
++	       address <= pci_resource_end(priv->pci_dev, SMBBAR);
++}
++
+ static acpi_status
+ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 		     u64 *value, void *handler_context, void *region_context)
+@@ -1432,7 +1439,7 @@ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 	 */
+ 	mutex_lock(&priv->acpi_lock);
+ 
+-	if (!priv->acpi_reserved) {
++	if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) {
+ 		priv->acpi_reserved = true;
+ 
+ 		dev_warn(&pdev->dev, "BIOS is accessing SMBus registers\n");
+diff --git a/drivers/iio/accel/adxl345_core.c b/drivers/iio/accel/adxl345_core.c
+index 7251d0e63d74..98080e05ac6d 100644
+--- a/drivers/iio/accel/adxl345_core.c
++++ b/drivers/iio/accel/adxl345_core.c
+@@ -21,6 +21,8 @@
+ #define ADXL345_REG_DATAX0		0x32
+ #define ADXL345_REG_DATAY0		0x34
+ #define ADXL345_REG_DATAZ0		0x36
++#define ADXL345_REG_DATA_AXIS(index)	\
++	(ADXL345_REG_DATAX0 + (index) * sizeof(__le16))
+ 
+ #define ADXL345_POWER_CTL_MEASURE	BIT(3)
+ #define ADXL345_POWER_CTL_STANDBY	0x00
+@@ -47,19 +49,19 @@ struct adxl345_data {
+ 	u8 data_range;
+ };
+ 
+-#define ADXL345_CHANNEL(reg, axis) {					\
++#define ADXL345_CHANNEL(index, axis) {					\
+ 	.type = IIO_ACCEL,						\
+ 	.modified = 1,							\
+ 	.channel2 = IIO_MOD_##axis,					\
+-	.address = reg,							\
++	.address = index,						\
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),			\
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),		\
+ }
+ 
+ static const struct iio_chan_spec adxl345_channels[] = {
+-	ADXL345_CHANNEL(ADXL345_REG_DATAX0, X),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAY0, Y),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAZ0, Z),
++	ADXL345_CHANNEL(0, X),
++	ADXL345_CHANNEL(1, Y),
++	ADXL345_CHANNEL(2, Z),
+ };
+ 
+ static int adxl345_read_raw(struct iio_dev *indio_dev,
+@@ -67,7 +69,7 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct adxl345_data *data = iio_priv(indio_dev);
+-	__le16 regval;
++	__le16 accel;
+ 	int ret;
+ 
+ 	switch (mask) {
+@@ -77,12 +79,13 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 		 * ADXL345_REG_DATA(X0/Y0/Z0) contain the least significant byte
+ 		 * and ADXL345_REG_DATA(X0/Y0/Z0) + 1 the most significant byte
+ 		 */
+-		ret = regmap_bulk_read(data->regmap, chan->address, &regval,
+-				       sizeof(regval));
++		ret = regmap_bulk_read(data->regmap,
++				       ADXL345_REG_DATA_AXIS(chan->address),
++				       &accel, sizeof(accel));
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		*val = sign_extend32(le16_to_cpu(regval), 12);
++		*val = sign_extend32(le16_to_cpu(accel), 12);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = 0;
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index 0635a79864bf..d1239624187d 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/regmap.h>
++#include <linux/sched/task.h>
+ #include <linux/util_macros.h>
+ 
+ #include <linux/platform_data/ina2xx.h>
+@@ -826,6 +827,7 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ {
+ 	struct ina2xx_chip_info *chip = iio_priv(indio_dev);
+ 	unsigned int sampling_us = SAMPLING_PERIOD(chip);
++	struct task_struct *task;
+ 
+ 	dev_dbg(&indio_dev->dev, "Enabling buffer w/ scan_mask %02x, freq = %d, avg =%u\n",
+ 		(unsigned int)(*indio_dev->active_scan_mask),
+@@ -835,11 +837,17 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ 	dev_dbg(&indio_dev->dev, "Async readout mode: %d\n",
+ 		chip->allow_async_readout);
+ 
+-	chip->task = kthread_run(ina2xx_capture_thread, (void *)indio_dev,
+-				 "%s:%d-%uus", indio_dev->name, indio_dev->id,
+-				 sampling_us);
++	task = kthread_create(ina2xx_capture_thread, (void *)indio_dev,
++			      "%s:%d-%uus", indio_dev->name, indio_dev->id,
++			      sampling_us);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++
++	get_task_struct(task);
++	wake_up_process(task);
++	chip->task = task;
+ 
+-	return PTR_ERR_OR_ZERO(chip->task);
++	return 0;
+ }
+ 
+ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+@@ -848,6 +856,7 @@ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+ 
+ 	if (chip->task) {
+ 		kthread_stop(chip->task);
++		put_task_struct(chip->task);
+ 		chip->task = NULL;
+ 	}
+ 
+diff --git a/drivers/iio/counter/104-quad-8.c b/drivers/iio/counter/104-quad-8.c
+index b56985078d8c..4be85ec54af4 100644
+--- a/drivers/iio/counter/104-quad-8.c
++++ b/drivers/iio/counter/104-quad-8.c
+@@ -138,7 +138,7 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 			outb(val >> (8 * i), base_offset);
+ 
+ 		/* Reset Borrow, Carry, Compare, and Sign flags */
+-		outb(0x02, base_offset + 1);
++		outb(0x04, base_offset + 1);
+ 		/* Reset Error flag */
+ 		outb(0x06, base_offset + 1);
+ 
+diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
+index c8963e91f92a..3ee0adfb45e9 100644
+--- a/drivers/infiniband/core/rw.c
++++ b/drivers/infiniband/core/rw.c
+@@ -87,7 +87,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num,
+ 	}
+ 
+ 	ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE);
+-	if (ret < nents) {
++	if (ret < 0 || ret < nents) {
+ 		ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 583d3a10b940..0e5eb0f547d3 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2812,6 +2812,9 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources)
+ 		goto err_res;
+ 
++	if (!num_specs)
++		goto out;
++
+ 	resources->counters =
+ 		kcalloc(num_specs, sizeof(*resources->counters), GFP_KERNEL);
+ 
+@@ -2824,8 +2827,8 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources->collection)
+ 		goto err_collection;
+ 
++out:
+ 	resources->max = num_specs;
+-
+ 	return resources;
+ 
+ err_collection:
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 2094d136513d..92d8469e28f3 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -429,6 +429,7 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
+ 			list_del(&entry->obj_list);
+ 		kfree(entry);
+ 	}
++	file->ev_queue.is_closed = 1;
+ 	spin_unlock_irq(&file->ev_queue.lock);
+ 
+ 	uverbs_close_fd(filp);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 50d8f1fc98d5..e426b990c1dd 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2354,7 +2354,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		srq = qp->srq;
+ 		if (!srq)
+ 			return -EINVAL;
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2369,7 +2369,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2437,7 +2437,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		if (!srq)
+ 			return -EINVAL;
+ 
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2452,7 +2452,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2546,7 +2546,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 				"QPLIB: FP: SRQ used but not defined??");
+ 			return -EINVAL;
+ 		}
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2561,7 +2561,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 RQ wr_id ");
+ 			dev_err(&cq->hwq.pdev->dev,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 2f3f32eaa1d5..4097f3fa25c5 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -197,7 +197,7 @@ int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+ 			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+ 			struct bnxt_qplib_gid *gid)
+ {
+-	if (index > sgid_tbl->max) {
++	if (index >= sgid_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded SGID table max (%d)",
+ 			index, sgid_tbl->max);
+@@ -402,7 +402,7 @@ int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+ 		*pkey = 0xFFFF;
+ 		return 0;
+ 	}
+-	if (index > pkey_tbl->max) {
++	if (index >= pkey_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded PKEY table max (%d)",
+ 			index, pkey_tbl->max);
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 6deb101cdd43..b49351914feb 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -6733,6 +6733,7 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	struct hfi1_devdata *dd = ppd->dd;
+ 	struct send_context *sc;
+ 	int i;
++	int sc_flags;
+ 
+ 	if (flags & FREEZE_SELF)
+ 		write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK);
+@@ -6743,11 +6744,13 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	/* notify all SDMA engines that they are going into a freeze */
+ 	sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN));
+ 
++	sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ?
++					      SCF_LINK_DOWN : 0);
+ 	/* do halt pre-handling on all enabled send contexts */
+ 	for (i = 0; i < dd->num_send_contexts; i++) {
+ 		sc = dd->send_contexts[i].sc;
+ 		if (sc && (sc->flags & SCF_ENABLED))
+-			sc_stop(sc, SCF_FROZEN | SCF_HALTED);
++			sc_stop(sc, sc_flags);
+ 	}
+ 
+ 	/* Send context are frozen. Notify user space */
+@@ -10665,6 +10668,7 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
+ 		add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
+ 
+ 		handle_linkup_change(dd, 1);
++		pio_kernel_linkup(dd);
+ 
+ 		/*
+ 		 * After link up, a new link width will have been set.
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 9cac15d10c4f..81f7cd7abcc5 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -86,6 +86,7 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 	unsigned long flags;
+ 	int write = 1;	/* write sendctrl back */
+ 	int flush = 0;	/* re-read sendctrl to make sure it is flushed */
++	int i;
+ 
+ 	spin_lock_irqsave(&dd->sendctrl_lock, flags);
+ 
+@@ -95,9 +96,13 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 		reg |= SEND_CTRL_SEND_ENABLE_SMASK;
+ 	/* Fall through */
+ 	case PSC_DATA_VL_ENABLE:
++		mask = 0;
++		for (i = 0; i < ARRAY_SIZE(dd->vld); i++)
++			if (!dd->vld[i].mtu)
++				mask |= BIT_ULL(i);
+ 		/* Disallow sending on VLs not enabled */
+-		mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
+-				SEND_CTRL_UNSUPPORTED_VL_SHIFT;
++		mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
++			SEND_CTRL_UNSUPPORTED_VL_SHIFT;
+ 		reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask;
+ 		break;
+ 	case PSC_GLOBAL_DISABLE:
+@@ -921,20 +926,18 @@ void sc_free(struct send_context *sc)
+ void sc_disable(struct send_context *sc)
+ {
+ 	u64 reg;
+-	unsigned long flags;
+ 	struct pio_buf *pbuf;
+ 
+ 	if (!sc)
+ 		return;
+ 
+ 	/* do all steps, even if already disabled */
+-	spin_lock_irqsave(&sc->alloc_lock, flags);
++	spin_lock_irq(&sc->alloc_lock);
+ 	reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL));
+ 	reg &= ~SC(CTRL_CTXT_ENABLE_SMASK);
+ 	sc->flags &= ~SCF_ENABLED;
+ 	sc_wait_for_packet_egress(sc, 1);
+ 	write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg);
+-	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 
+ 	/*
+ 	 * Flush any waiters.  Once the context is disabled,
+@@ -944,7 +947,7 @@ void sc_disable(struct send_context *sc)
+ 	 * proceed with the flush.
+ 	 */
+ 	udelay(1);
+-	spin_lock_irqsave(&sc->release_lock, flags);
++	spin_lock(&sc->release_lock);
+ 	if (sc->sr) {	/* this context has a shadow ring */
+ 		while (sc->sr_tail != sc->sr_head) {
+ 			pbuf = &sc->sr[sc->sr_tail].pbuf;
+@@ -955,7 +958,8 @@ void sc_disable(struct send_context *sc)
+ 				sc->sr_tail = 0;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&sc->release_lock, flags);
++	spin_unlock(&sc->release_lock);
++	spin_unlock_irq(&sc->alloc_lock);
+ }
+ 
+ /* return SendEgressCtxtStatus.PacketOccupancy */
+@@ -1178,11 +1182,39 @@ void pio_kernel_unfreeze(struct hfi1_devdata *dd)
+ 		sc = dd->send_contexts[i].sc;
+ 		if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER)
+ 			continue;
++		if (sc->flags & SCF_LINK_DOWN)
++			continue;
+ 
+ 		sc_enable(sc);	/* will clear the sc frozen flag */
+ 	}
+ }
+ 
++/**
++ * pio_kernel_linkup() - Re-enable send contexts after linkup event
++ * @dd: valid device data
++ *
++ * When the link goes down, the freeze path is taken.  However, a link down
++ * event is different from a freeze because if the send context is re-enabled
++ * whoever is sending data will start sending data again, which will hang
++ * any QP that is sending data.
++ *
++ * The freeze path now looks at the type of event that occurs and takes this
++ * path for link down event.
++ */
++void pio_kernel_linkup(struct hfi1_devdata *dd)
++{
++	struct send_context *sc;
++	int i;
++
++	for (i = 0; i < dd->num_send_contexts; i++) {
++		sc = dd->send_contexts[i].sc;
++		if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER)
++			continue;
++
++		sc_enable(sc);	/* will clear the sc link down flag */
++	}
++}
++
+ /*
+  * Wait for the SendPioInitCtxt.PioInitInProgress bit to clear.
+  * Returns:
+@@ -1382,11 +1414,10 @@ void sc_stop(struct send_context *sc, int flag)
+ {
+ 	unsigned long flags;
+ 
+-	/* mark the context */
+-	sc->flags |= flag;
+-
+ 	/* stop buffer allocations */
+ 	spin_lock_irqsave(&sc->alloc_lock, flags);
++	/* mark the context */
++	sc->flags |= flag;
+ 	sc->flags &= ~SCF_ENABLED;
+ 	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 	wake_up(&sc->halt_wait);
+diff --git a/drivers/infiniband/hw/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h
+index 058b08f459ab..aaf372c3e5d6 100644
+--- a/drivers/infiniband/hw/hfi1/pio.h
++++ b/drivers/infiniband/hw/hfi1/pio.h
+@@ -139,6 +139,7 @@ struct send_context {
+ #define SCF_IN_FREE 0x02
+ #define SCF_HALTED  0x04
+ #define SCF_FROZEN  0x08
++#define SCF_LINK_DOWN 0x10
+ 
+ struct send_context_info {
+ 	struct send_context *sc;	/* allocated working context */
+@@ -306,6 +307,7 @@ void set_pio_integrity(struct send_context *sc);
+ void pio_reset_all(struct hfi1_devdata *dd);
+ void pio_freeze(struct hfi1_devdata *dd);
+ void pio_kernel_unfreeze(struct hfi1_devdata *dd);
++void pio_kernel_linkup(struct hfi1_devdata *dd);
+ 
+ /* global PIO send control operations */
+ #define PSC_GLOBAL_ENABLE 0
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index a3a7b33196d6..5c88706121c1 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -828,7 +828,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, unsigned maxpkts)
+ 			if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
+ 				if (++req->iov_idx == req->data_iovs) {
+ 					ret = -EFAULT;
+-					goto free_txreq;
++					goto free_tx;
+ 				}
+ 				iovec = &req->iovs[req->iov_idx];
+ 				WARN_ON(iovec->offset);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 08991874c0e2..a1040a142aac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1590,6 +1590,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	struct hfi1_pportdata *ppd;
+ 	struct hfi1_devdata *dd;
+ 	u8 sc5;
++	u8 sl;
+ 
+ 	if (hfi1_check_mcast(rdma_ah_get_dlid(ah_attr)) &&
+ 	    !(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH))
+@@ -1598,8 +1599,13 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	/* test the mapping for validity */
+ 	ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
+ 	ppd = ppd_from_ibp(ibp);
+-	sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
+ 	dd = dd_from_ppd(ppd);
++
++	sl = rdma_ah_get_sl(ah_attr);
++	if (sl >= ARRAY_SIZE(ibp->sl_to_sc))
++		return -EINVAL;
++
++	sc5 = ibp->sl_to_sc[sl];
+ 	if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
+ 		return -EINVAL;
+ 	return 0;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 68679ad4c6da..937899fea01d 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -1409,6 +1409,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 	struct vm_area_struct *vma;
+ 	struct hstate *h;
+ 
++	down_read(&current->mm->mmap_sem);
+ 	vma = find_vma(current->mm, addr);
+ 	if (vma && is_vm_hugetlb_page(vma)) {
+ 		h = hstate_vma(vma);
+@@ -1417,6 +1418,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 			iwmr->page_msk = huge_page_mask(h);
+ 		}
+ 	}
++	up_read(&current->mm->mmap_sem);
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 3b8045fd23ed..b94e33a56e97 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4047,9 +4047,9 @@ static void to_rdma_ah_attr(struct mlx4_ib_dev *ibdev,
+ 	u8 port_num = path->sched_queue & 0x40 ? 2 : 1;
+ 
+ 	memset(ah_attr, 0, sizeof(*ah_attr));
+-	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 	if (port_num == 0 || port_num > dev->caps.num_ports)
+ 		return;
++	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 
+ 	if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE)
+ 		rdma_ah_set_sl(ah_attr, ((path->sched_queue >> 3) & 0x7) |
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index cbeae4509359..85677afa6f77 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2699,7 +2699,7 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
+ 			 IPPROTO_GRE);
+ 
+ 		MLX5_SET(fte_match_set_misc, misc_params_c, gre_protocol,
+-			 0xffff);
++			 ntohs(ib_spec->gre.mask.protocol));
+ 		MLX5_SET(fte_match_set_misc, misc_params_v, gre_protocol,
+ 			 ntohs(ib_spec->gre.val.protocol));
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 9786b24b956f..2b8cc76bb77e 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -2954,7 +2954,7 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ {
+ 	struct srp_target_port *target = host_to_target(scmnd->device->host);
+ 	struct srp_rdma_ch *ch;
+-	int i;
++	int i, j;
+ 	u8 status;
+ 
+ 	shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
+@@ -2968,8 +2968,8 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ 
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+-		for (i = 0; i < target->req_ring_size; ++i) {
+-			struct srp_request *req = &ch->req_ring[i];
++		for (j = 0; j < target->req_ring_size; ++j) {
++			struct srp_request *req = &ch->req_ring[j];
+ 
+ 			srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
+ 		}
+diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
+index d91f3b1c5375..92d739649022 100644
+--- a/drivers/input/misc/xen-kbdfront.c
++++ b/drivers/input/misc/xen-kbdfront.c
+@@ -229,7 +229,7 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		}
+ 	}
+ 
+-	touch = xenbus_read_unsigned(dev->nodename,
++	touch = xenbus_read_unsigned(dev->otherend,
+ 				     XENKBD_FIELD_FEAT_MTOUCH, 0);
+ 	if (touch) {
+ 		ret = xenbus_write(XBT_NIL, dev->nodename,
+@@ -304,13 +304,13 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		if (!mtouch)
+ 			goto error_nomem;
+ 
+-		num_cont = xenbus_read_unsigned(info->xbdev->nodename,
++		num_cont = xenbus_read_unsigned(info->xbdev->otherend,
+ 						XENKBD_FIELD_MT_NUM_CONTACTS,
+ 						1);
+-		width = xenbus_read_unsigned(info->xbdev->nodename,
++		width = xenbus_read_unsigned(info->xbdev->otherend,
+ 					     XENKBD_FIELD_MT_WIDTH,
+ 					     XENFB_WIDTH);
+-		height = xenbus_read_unsigned(info->xbdev->nodename,
++		height = xenbus_read_unsigned(info->xbdev->otherend,
+ 					      XENKBD_FIELD_MT_HEIGHT,
+ 					      XENFB_HEIGHT);
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index dd85b16dc6f8..88564f729e93 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1178,6 +1178,8 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ static const char * const middle_button_pnp_ids[] = {
+ 	"LEN2131", /* ThinkPad P52 w/ NFC */
+ 	"LEN2132", /* ThinkPad P52 */
++	"LEN2133", /* ThinkPad P72 w/ NFC */
++	"LEN2134", /* ThinkPad P72 */
+ 	NULL
+ };
+ 
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 596b95c50051..d77c97fe4a23 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2405,9 +2405,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
+ 	}
+ 
+ 	if (amd_iommu_unmap_flush) {
+-		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 		domain_flush_tlb(&dma_dom->domain);
+ 		domain_flush_complete(&dma_dom->domain);
++		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 	} else {
+ 		pages = __roundup_pow_of_two(pages);
+ 		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 0d3350463a3f..9a95c9b9d0d8 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -395,20 +395,15 @@ static int msm_iommu_add_device(struct device *dev)
+ 	struct msm_iommu_dev *iommu;
+ 	struct iommu_group *group;
+ 	unsigned long flags;
+-	int ret = 0;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_link(&iommu->iommu, dev);
+ 	else
+-		ret = -ENODEV;
+-
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+-	if (ret)
+-		return ret;
++		return -ENODEV;
+ 
+ 	group = iommu_group_get_for_dev(dev);
+ 	if (IS_ERR(group))
+@@ -425,13 +420,12 @@ static void msm_iommu_remove_device(struct device *dev)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_unlink(&iommu->iommu, dev);
+ 
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+ 	iommu_group_remove_device(dev);
+ }
+ 
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 021cbf9ef1bf..1ac945f7a3c2 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_thread *thread)
+ 	while (cinfo->recovery_map) {
+ 		slot = fls64((u64)cinfo->recovery_map) - 1;
+ 
+-		/* Clear suspend_area associated with the bitmap */
+-		spin_lock_irq(&cinfo->suspend_lock);
+-		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+-			if (slot == s->slot) {
+-				list_del(&s->list);
+-				kfree(s);
+-			}
+-		spin_unlock_irq(&cinfo->suspend_lock);
+-
+ 		snprintf(str, 64, "bitmap%04d", slot);
+ 		bm_lockres = lockres_init(mddev, str, NULL, 1);
+ 		if (!bm_lockres) {
+@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_thread *thread)
+ 			pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
+ 			goto clear_bit;
+ 		}
++
++		/* Clear suspend_area associated with the bitmap */
++		spin_lock_irq(&cinfo->suspend_lock);
++		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
++			if (slot == s->slot) {
++				list_del(&s->list);
++				kfree(s);
++			}
++		spin_unlock_irq(&cinfo->suspend_lock);
++
+ 		if (hi > 0) {
+ 			if (lo < mddev->recovery_cp)
+ 				mddev->recovery_cp = lo;
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index e2550708abc8..3fdbe644648a 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -542,9 +542,19 @@ static struct ov772x_priv *to_ov772x(struct v4l2_subdev *sd)
+ 	return container_of(sd, struct ov772x_priv, subdev);
+ }
+ 
+-static inline int ov772x_read(struct i2c_client *client, u8 addr)
++static int ov772x_read(struct i2c_client *client, u8 addr)
+ {
+-	return i2c_smbus_read_byte_data(client, addr);
++	int ret;
++	u8 val;
++
++	ret = i2c_master_send(client, &addr, 1);
++	if (ret < 0)
++		return ret;
++	ret = i2c_master_recv(client, &val, 1);
++	if (ret < 0)
++		return ret;
++
++	return val;
+ }
+ 
+ static inline int ov772x_write(struct i2c_client *client, u8 addr, u8 value)
+@@ -1136,7 +1146,7 @@ static int ov772x_set_fmt(struct v4l2_subdev *sd,
+ static int ov772x_video_probe(struct ov772x_priv *priv)
+ {
+ 	struct i2c_client  *client = v4l2_get_subdevdata(&priv->subdev);
+-	u8                  pid, ver;
++	int		    pid, ver, midh, midl;
+ 	const char         *devname;
+ 	int		    ret;
+ 
+@@ -1146,7 +1156,11 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 
+ 	/* Check and show product ID and manufacturer ID. */
+ 	pid = ov772x_read(client, PID);
++	if (pid < 0)
++		return pid;
+ 	ver = ov772x_read(client, VER);
++	if (ver < 0)
++		return ver;
+ 
+ 	switch (VERSION(pid, ver)) {
+ 	case OV7720:
+@@ -1162,13 +1176,17 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 		goto done;
+ 	}
+ 
++	midh = ov772x_read(client, MIDH);
++	if (midh < 0)
++		return midh;
++	midl = ov772x_read(client, MIDL);
++	if (midl < 0)
++		return midl;
++
+ 	dev_info(&client->dev,
+ 		 "%s Product ID %0x:%0x Manufacturer ID %x:%x\n",
+-		 devname,
+-		 pid,
+-		 ver,
+-		 ov772x_read(client, MIDH),
+-		 ov772x_read(client, MIDL));
++		 devname, pid, ver, midh, midl);
++
+ 	ret = v4l2_ctrl_handler_setup(&priv->hdl);
+ 
+ done:
+@@ -1255,13 +1273,11 @@ static int ov772x_probe(struct i2c_client *client,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+-					      I2C_FUNC_PROTOCOL_MANGLING)) {
++	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) {
+ 		dev_err(&adapter->dev,
+-			"I2C-Adapter doesn't support SMBUS_BYTE_DATA or PROTOCOL_MANGLING\n");
++			"I2C-Adapter doesn't support SMBUS_BYTE_DATA\n");
+ 		return -EIO;
+ 	}
+-	client->flags |= I2C_CLIENT_SCCB;
+ 
+ 	priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+diff --git a/drivers/media/i2c/soc_camera/ov772x.c b/drivers/media/i2c/soc_camera/ov772x.c
+index 806383500313..14377af7c888 100644
+--- a/drivers/media/i2c/soc_camera/ov772x.c
++++ b/drivers/media/i2c/soc_camera/ov772x.c
+@@ -834,7 +834,7 @@ static int ov772x_set_params(struct ov772x_priv *priv,
+ 	 * set COM8
+ 	 */
+ 	if (priv->band_filter) {
+-		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, 1);
++		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, BNDF_ON_OFF);
+ 		if (!ret)
+ 			ret = ov772x_mask_set(client, BDBASE,
+ 					      0xff, 256 - priv->band_filter);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 55ba696b8cf4..a920164f53f1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -384,12 +384,17 @@ static void __isp_video_try_fmt(struct fimc_isp *isp,
+ 				struct v4l2_pix_format_mplane *pixm,
+ 				const struct fimc_fmt **fmt)
+ {
+-	*fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++	const struct fimc_fmt *__fmt;
++
++	__fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++
++	if (fmt)
++		*fmt = __fmt;
+ 
+ 	pixm->colorspace = V4L2_COLORSPACE_SRGB;
+ 	pixm->field = V4L2_FIELD_NONE;
+-	pixm->num_planes = (*fmt)->memplanes;
+-	pixm->pixelformat = (*fmt)->fourcc;
++	pixm->num_planes = __fmt->memplanes;
++	pixm->pixelformat = __fmt->fourcc;
+ 	/*
+ 	 * TODO: double check with the documentation these width/height
+ 	 * constraints are correct.
+diff --git a/drivers/media/platform/fsl-viu.c b/drivers/media/platform/fsl-viu.c
+index e41510ce69a4..0273302aa741 100644
+--- a/drivers/media/platform/fsl-viu.c
++++ b/drivers/media/platform/fsl-viu.c
+@@ -1414,7 +1414,7 @@ static int viu_of_probe(struct platform_device *op)
+ 				     sizeof(struct viu_reg), DRV_NAME)) {
+ 		dev_err(&op->dev, "Error while requesting mem region\n");
+ 		ret = -EBUSY;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* remap registers */
+@@ -1422,7 +1422,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_regs) {
+ 		dev_err(&op->dev, "Can't map register set\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* Prepare our private structure */
+@@ -1430,7 +1430,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_dev) {
+ 		dev_err(&op->dev, "Can't allocate private structure\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	viu_dev->vr = viu_regs;
+@@ -1446,16 +1446,21 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = v4l2_device_register(viu_dev->dev, &viu_dev->v4l2_dev);
+ 	if (ret < 0) {
+ 		dev_err(&op->dev, "v4l2_device_register() failed: %d\n", ret);
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	ad = i2c_get_adapter(0);
++	if (!ad) {
++		ret = -EFAULT;
++		dev_err(&op->dev, "couldn't get i2c adapter\n");
++		goto err_v4l2;
++	}
+ 
+ 	v4l2_ctrl_handler_init(&viu_dev->hdl, 5);
+ 	if (viu_dev->hdl.error) {
+ 		ret = viu_dev->hdl.error;
+ 		dev_err(&op->dev, "couldn't register control\n");
+-		goto err_vdev;
++		goto err_i2c;
+ 	}
+ 	/* This control handler will inherit the control(s) from the
+ 	   sub-device(s). */
+@@ -1471,7 +1476,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	vdev = video_device_alloc();
+ 	if (vdev == NULL) {
+ 		ret = -ENOMEM;
+-		goto err_vdev;
++		goto err_hdl;
+ 	}
+ 
+ 	*vdev = viu_template;
+@@ -1492,7 +1497,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = video_register_device(viu_dev->vdev, VFL_TYPE_GRABBER, -1);
+ 	if (ret < 0) {
+ 		video_device_release(viu_dev->vdev);
+-		goto err_vdev;
++		goto err_unlock;
+ 	}
+ 
+ 	/* enable VIU clock */
+@@ -1500,12 +1505,12 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&op->dev, "failed to lookup the clock!\n");
+ 		ret = PTR_ERR(clk);
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	ret = clk_prepare_enable(clk);
+ 	if (ret) {
+ 		dev_err(&op->dev, "failed to enable the clock!\n");
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	viu_dev->clk = clk;
+ 
+@@ -1516,7 +1521,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (request_irq(viu_dev->irq, viu_intr, 0, "viu", (void *)viu_dev)) {
+ 		dev_err(&op->dev, "Request VIU IRQ failed.\n");
+ 		ret = -ENODEV;
+-		goto err_irq;
++		goto err_clk;
+ 	}
+ 
+ 	mutex_unlock(&viu_dev->lock);
+@@ -1524,16 +1529,19 @@ static int viu_of_probe(struct platform_device *op)
+ 	dev_info(&op->dev, "Freescale VIU Video Capture Board\n");
+ 	return ret;
+ 
+-err_irq:
+-	clk_disable_unprepare(viu_dev->clk);
+ err_clk:
+-	video_unregister_device(viu_dev->vdev);
++	clk_disable_unprepare(viu_dev->clk);
+ err_vdev:
+-	v4l2_ctrl_handler_free(&viu_dev->hdl);
++	video_unregister_device(viu_dev->vdev);
++err_unlock:
+ 	mutex_unlock(&viu_dev->lock);
++err_hdl:
++	v4l2_ctrl_handler_free(&viu_dev->hdl);
++err_i2c:
+ 	i2c_put_adapter(ad);
++err_v4l2:
+ 	v4l2_device_unregister(&viu_dev->v4l2_dev);
+-err:
++err_irq:
+ 	irq_dispose_mapping(viu_irq);
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index f22cf351e3ee..ae0ef8b241a7 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -300,7 +300,7 @@ static struct clk *isp_xclk_src_get(struct of_phandle_args *clkspec, void *data)
+ static int isp_xclk_init(struct isp_device *isp)
+ {
+ 	struct device_node *np = isp->dev->of_node;
+-	struct clk_init_data init;
++	struct clk_init_data init = { 0 };
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(isp->xclks); ++i)
+diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
+index 9ab8e7ee2e1e..b1d9f3857d3d 100644
+--- a/drivers/media/platform/s3c-camif/camif-capture.c
++++ b/drivers/media/platform/s3c-camif/camif-capture.c
+@@ -117,6 +117,8 @@ static int sensor_set_power(struct camif_dev *camif, int on)
+ 
+ 	if (camif->sensor.power_count == !on)
+ 		err = v4l2_subdev_call(sensor->sd, core, s_power, on);
++	if (err == -ENOIOCTLCMD)
++		err = 0;
+ 	if (!err)
+ 		sensor->power_count += on ? 1 : -1;
+ 
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index c811fc6cf48a..3a4e545c6037 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -266,6 +266,11 @@ static int register_dvb(struct tm6000_core *dev)
+ 
+ 	ret = dvb_register_adapter(&dvb->adapter, "Trident TVMaster 6000 DVB-T",
+ 					THIS_MODULE, &dev->udev->dev, adapter_nr);
++	if (ret < 0) {
++		pr_err("tm6000: couldn't register the adapter!\n");
++		goto err;
++	}
++
+ 	dvb->adapter.priv = dev;
+ 
+ 	if (dvb->frontend) {
+diff --git a/drivers/media/v4l2-core/v4l2-event.c b/drivers/media/v4l2-core/v4l2-event.c
+index 127fe6eb91d9..a3ef1f50a4b3 100644
+--- a/drivers/media/v4l2-core/v4l2-event.c
++++ b/drivers/media/v4l2-core/v4l2-event.c
+@@ -115,14 +115,6 @@ static void __v4l2_event_queue_fh(struct v4l2_fh *fh, const struct v4l2_event *e
+ 	if (sev == NULL)
+ 		return;
+ 
+-	/*
+-	 * If the event has been added to the fh->subscribed list, but its
+-	 * add op has not completed yet elems will be 0, treat this as
+-	 * not being subscribed.
+-	 */
+-	if (!sev->elems)
+-		return;
+-
+ 	/* Increase event sequence number on fh. */
+ 	fh->sequence++;
+ 
+@@ -208,6 +200,7 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	struct v4l2_subscribed_event *sev, *found_ev;
+ 	unsigned long flags;
+ 	unsigned i;
++	int ret = 0;
+ 
+ 	if (sub->type == V4L2_EVENT_ALL)
+ 		return -EINVAL;
+@@ -225,31 +218,36 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	sev->flags = sub->flags;
+ 	sev->fh = fh;
+ 	sev->ops = ops;
++	sev->elems = elems;
++
++	mutex_lock(&fh->subscribe_lock);
+ 
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 	found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+-	if (!found_ev)
+-		list_add(&sev->list, &fh->subscribed);
+ 	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+ 	if (found_ev) {
++		/* Already listening */
+ 		kvfree(sev);
+-		return 0; /* Already listening */
++		goto out_unlock;
+ 	}
+ 
+ 	if (sev->ops && sev->ops->add) {
+-		int ret = sev->ops->add(sev, elems);
++		ret = sev->ops->add(sev, elems);
+ 		if (ret) {
+-			sev->ops = NULL;
+-			v4l2_event_unsubscribe(fh, sub);
+-			return ret;
++			kvfree(sev);
++			goto out_unlock;
+ 		}
+ 	}
+ 
+-	/* Mark as ready for use */
+-	sev->elems = elems;
++	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
++	list_add(&sev->list, &fh->subscribed);
++	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+-	return 0;
++out_unlock:
++	mutex_unlock(&fh->subscribe_lock);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_event_subscribe);
+ 
+@@ -288,6 +286,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&fh->subscribe_lock);
++
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 
+ 	sev = v4l2_event_subscribed(fh, sub->type, sub->id);
+@@ -305,6 +305,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 	if (sev && sev->ops && sev->ops->del)
+ 		sev->ops->del(sev);
+ 
++	mutex_unlock(&fh->subscribe_lock);
++
+ 	kvfree(sev);
+ 
+ 	return 0;
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 3895999bf880..c91a7bd3ecfc 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -45,6 +45,7 @@ void v4l2_fh_init(struct v4l2_fh *fh, struct video_device *vdev)
+ 	INIT_LIST_HEAD(&fh->available);
+ 	INIT_LIST_HEAD(&fh->subscribed);
+ 	fh->sequence = -1;
++	mutex_init(&fh->subscribe_lock);
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_init);
+ 
+@@ -90,6 +91,7 @@ void v4l2_fh_exit(struct v4l2_fh *fh)
+ 		return;
+ 	v4l_disable_media_source(fh->vdev);
+ 	v4l2_event_unsubscribe_all(fh);
++	mutex_destroy(&fh->subscribe_lock);
+ 	fh->vdev = NULL;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_exit);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index 50d82c3d032a..b8aaa684c397 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -273,7 +273,7 @@ static void *alloc_dma_buffer(struct vio_dev *vdev, size_t size,
+ 			      dma_addr_t *dma_handle)
+ {
+ 	/* allocate memory */
+-	void *buffer = kzalloc(size, GFP_KERNEL);
++	void *buffer = kzalloc(size, GFP_ATOMIC);
+ 
+ 	if (!buffer) {
+ 		*dma_handle = 0;
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 679647713e36..74b183baf044 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -391,23 +391,23 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (IS_ERR(sram->pool))
+ 		return PTR_ERR(sram->pool);
+ 
+-	ret = sram_reserve_regions(sram, res);
+-	if (ret)
+-		return ret;
+-
+ 	sram->clk = devm_clk_get(sram->dev, NULL);
+ 	if (IS_ERR(sram->clk))
+ 		sram->clk = NULL;
+ 	else
+ 		clk_prepare_enable(sram->clk);
+ 
++	ret = sram_reserve_regions(sram, res);
++	if (ret)
++		goto err_disable_clk;
++
+ 	platform_set_drvdata(pdev, sram);
+ 
+ 	init_func = of_device_get_match_data(&pdev->dev);
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			goto err_disable_clk;
++			goto err_free_partitions;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+@@ -415,10 +415,11 @@ static int sram_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_free_partitions:
++	sram_free_partitions(sram);
+ err_disable_clk:
+ 	if (sram->clk)
+ 		clk_disable_unprepare(sram->clk);
+-	sram_free_partitions(sram);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/misc/tsl2550.c b/drivers/misc/tsl2550.c
+index adf46072cb37..3fce3b6a3624 100644
+--- a/drivers/misc/tsl2550.c
++++ b/drivers/misc/tsl2550.c
+@@ -177,7 +177,7 @@ static int tsl2550_calculate_lux(u8 ch0, u8 ch1)
+ 		} else
+ 			lux = 0;
+ 	else
+-		return -EAGAIN;
++		return 0;
+ 
+ 	/* LUX range check */
+ 	return lux > TSL2550_MAX_LUX ? TSL2550_MAX_LUX : lux;
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index b4d7774cfe07..d95e8648e7b3 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -668,7 +668,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) produce_uva,
+ 				     produce_q->kernel_if->num_pages, 1,
+ 				     produce_q->kernel_if->u.h.header_page);
+-	if (retval < produce_q->kernel_if->num_pages) {
++	if (retval < (int)produce_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(produce_q->kernel_if->u.h.header_page,
+@@ -680,7 +680,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) consume_uva,
+ 				     consume_q->kernel_if->num_pages, 1,
+ 				     consume_q->kernel_if->u.h.header_page);
+-	if (retval < consume_q->kernel_if->num_pages) {
++	if (retval < (int)consume_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(consume_q->kernel_if->u.h.header_page,
+diff --git a/drivers/mmc/host/android-goldfish.c b/drivers/mmc/host/android-goldfish.c
+index 294de177632c..61e4e2a213c9 100644
+--- a/drivers/mmc/host/android-goldfish.c
++++ b/drivers/mmc/host/android-goldfish.c
+@@ -217,7 +217,7 @@ static void goldfish_mmc_xfer_done(struct goldfish_mmc_host *host,
+ 			 * We don't really have DMA, so we need
+ 			 * to copy from our platform driver buffer
+ 			 */
+-			sg_copy_to_buffer(data->sg, 1, host->virt_base,
++			sg_copy_from_buffer(data->sg, 1, host->virt_base,
+ 					data->sg->length);
+ 		}
+ 		host->data->bytes_xfered += data->sg->length;
+@@ -393,7 +393,7 @@ static void goldfish_mmc_prepare_data(struct goldfish_mmc_host *host,
+ 		 * We don't really have DMA, so we need to copy to our
+ 		 * platform driver buffer
+ 		 */
+-		sg_copy_from_buffer(data->sg, 1, host->virt_base,
++		sg_copy_to_buffer(data->sg, 1, host->virt_base,
+ 				data->sg->length);
+ 	}
+ }
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 5aa2c9404e92..be53044086c7 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1976,7 +1976,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 	do {
+ 		value = atmci_readl(host, ATMCI_RDR);
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
+ 
+ 			offset += 4;
+ 			nbytes += 4;
+@@ -1993,7 +1993,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 		} else {
+ 			unsigned int remaining = sg->length - offset;
+ 
+-			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			flush_dcache_page(sg_page(sg));
+@@ -2003,7 +2003,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 				goto done;
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			nbytes += offset;
+ 		}
+@@ -2042,7 +2042,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 
+ 	do {
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 
+ 			offset += 4;
+@@ -2059,7 +2059,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			unsigned int remaining = sg->length - offset;
+ 
+ 			value = 0;
+-			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			host->sg = sg = sg_next(sg);
+@@ -2070,7 +2070,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			}
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 			nbytes += offset;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 12f6753d47ae..e686fe73159e 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -129,6 +129,11 @@
+ #define DEFAULT_TIMEOUT_MS			1000
+ #define MIN_DMA_LEN				128
+ 
++static bool atmel_nand_avoid_dma __read_mostly;
++
++MODULE_PARM_DESC(avoiddma, "Avoid using DMA");
++module_param_named(avoiddma, atmel_nand_avoid_dma, bool, 0400);
++
+ enum atmel_nand_rb_type {
+ 	ATMEL_NAND_NO_RB,
+ 	ATMEL_NAND_NATIVE_RB,
+@@ -1977,7 +1982,7 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 		return ret;
+ 	}
+ 
+-	if (nc->caps->has_dma) {
++	if (nc->caps->has_dma && !atmel_nand_avoid_dma) {
+ 		dma_cap_mask_t mask;
+ 
+ 		dma_cap_zero(mask);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index a8926e97935e..c5d387be6cfe 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5705,7 +5705,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		if (t4_read_reg(adapter, LE_DB_CONFIG_A) & HASHEN_F) {
+ 			u32 hash_base, hash_reg;
+ 
+-			if (chip <= CHELSIO_T5) {
++			if (chip_ver <= CHELSIO_T5) {
+ 				hash_reg = LE_DB_TID_HASHBASE_A;
+ 				hash_base = t4_read_reg(adapter, hash_reg);
+ 				adapter->tids.hash_base = hash_base / 4;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index fa5b30f547f6..cad52bd331f7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -220,10 +220,10 @@ struct hnae_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
++	u32 page_offset;
++	u32 length;     /* length of the buffer */
+ 
+-	u16 length;     /* length of the buffer */
++	u16 reuse_flag;
+ 
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef9ef703d13a..ef994a715f93 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -530,7 +530,7 @@ static void hns_nic_reuse_page(struct sk_buff *skb, int i,
+ 	}
+ 
+ 	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
+-			size - pull_len, truesize - pull_len);
++			size - pull_len, truesize);
+ 
+ 	 /* avoid re-using remote pages,flag default unreuse */
+ 	if (unlikely(page_to_nid(desc_cb->priv) != numa_node_id()))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 3b083d5ae9ce..c84c09053640 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -290,11 +290,11 @@ struct hns3_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
+-
++	u32 page_offset;
+ 	u32 length;     /* length of the buffer */
+ 
++	u16 reuse_flag;
++
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+ };
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 40c0425b4023..11620e003a8e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -201,7 +201,9 @@ static u32 hns3_lb_check_rx_ring(struct hns3_nic_priv *priv, u32 budget)
+ 		rx_group = &ring->tqp_vector->rx_group;
+ 		pre_rx_pkt = rx_group->total_packets;
+ 
++		preempt_disable();
+ 		hns3_clean_rx_ring(ring, budget, hns3_lb_check_skb_data);
++		preempt_enable();
+ 
+ 		rcv_good_pkt_total += (rx_group->total_packets - pre_rx_pkt);
+ 		rx_group->total_packets = pre_rx_pkt;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 262c125f8137..f027fceea548 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1223,6 +1223,10 @@ static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
+ 		tx_en = true;
+ 		rx_en = true;
+ 		break;
++	case HCLGE_FC_PFC:
++		tx_en = false;
++		rx_en = false;
++		break;
+ 	default:
+ 		tx_en = true;
+ 		rx_en = true;
+@@ -1240,8 +1244,9 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (hdev->tm_info.fc_mode != HCLGE_FC_PFC)
+-		return hclge_mac_pause_setup_hw(hdev);
++	ret = hclge_mac_pause_setup_hw(hdev);
++	if (ret)
++		return ret;
+ 
+ 	/* Only DCB-supported dev supports qset back pressure and pfc cmd */
+ 	if (!hnae3_dev_dcb_supported(hdev))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a17872aab168..12aa1f1b99ef 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -648,8 +648,17 @@ static int hclgevf_unmap_ring_from_vector(
+ static int hclgevf_put_vector(struct hnae3_handle *handle, int vector)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int vector_id;
+ 
+-	hclgevf_free_vector(hdev, vector);
++	vector_id = hclgevf_get_vector_index(hdev, vector);
++	if (vector_id < 0) {
++		dev_err(&handle->pdev->dev,
++			"hclgevf_put_vector get vector index fail. ret =%d\n",
++			vector_id);
++		return vector_id;
++	}
++
++	hclgevf_free_vector(hdev, vector_id);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index b598c06af8e0..cd246f906150 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -208,7 +208,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 
+ 			/* tail the async message in arq */
+ 			msg_q = hdev->arq.msg_q[hdev->arq.tail];
+-			memcpy(&msg_q[0], req->msg, HCLGE_MBX_MAX_ARQ_MSG_SIZE);
++			memcpy(&msg_q[0], req->msg,
++			       HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ 			hclge_mbx_tail_ptr_move_arq(hdev->arq);
+ 			hdev->arq.count++;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+index bdb3f8e65ed4..2569a168334c 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+@@ -624,14 +624,14 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ 		adapter->tx_ring = tx_old;
+ 		e1000_free_all_rx_resources(adapter);
+ 		e1000_free_all_tx_resources(adapter);
+-		kfree(tx_old);
+-		kfree(rx_old);
+ 		adapter->rx_ring = rxdr;
+ 		adapter->tx_ring = txdr;
+ 		err = e1000_up(adapter);
+ 		if (err)
+ 			goto err_setup;
+ 	}
++	kfree(tx_old);
++	kfree(rx_old);
+ 
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return 0;
+@@ -644,7 +644,8 @@ err_setup_rx:
+ err_alloc_rx:
+ 	kfree(txdr);
+ err_alloc_tx:
+-	e1000_up(adapter);
++	if (netif_running(adapter->netdev))
++		e1000_up(adapter);
+ err_setup:
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 6947a2a571cb..5d670f4ce5ac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1903,7 +1903,7 @@ static void i40e_get_stat_strings(struct net_device *netdev, u8 *data)
+ 		data += ETH_GSTRING_LEN;
+ 	}
+ 
+-	WARN_ONCE(p - data != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
++	WARN_ONCE(data - p != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
+ 		  "stat strings count mismatch!");
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c944bd10b03d..5f105bc68c6a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5121,15 +5121,17 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 				       u8 *bw_share)
+ {
+ 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
++	struct i40e_pf *pf = vsi->back;
+ 	i40e_status ret;
+ 	int i;
+ 
+-	if (vsi->back->flags & I40E_FLAG_TC_MQPRIO)
++	/* There is no need to reset BW when mqprio mode is on.  */
++	if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ 		return 0;
+-	if (!vsi->mqprio_qopt.qopt.hw) {
++	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ 		if (ret)
+-			dev_info(&vsi->back->pdev->dev,
++			dev_info(&pf->pdev->dev,
+ 				 "Failed to reset tx rate for vsi->seid %u\n",
+ 				 vsi->seid);
+ 		return ret;
+@@ -5138,12 +5140,11 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+ 		bw_data.tc_bw_credits[i] = bw_share[i];
+ 
+-	ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+-				       NULL);
++	ret = i40e_aq_config_vsi_tc_bw(&pf->hw, vsi->seid, &bw_data, NULL);
+ 	if (ret) {
+-		dev_info(&vsi->back->pdev->dev,
++		dev_info(&pf->pdev->dev,
+ 			 "AQ command Config VSI BW allocation per TC failed = %d\n",
+-			 vsi->back->hw.aq.asq_last_status);
++			 pf->hw.aq.asq_last_status);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d8b5fff581e7..ed071ea75f20 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -89,6 +89,13 @@ extern const char ice_drv_ver[];
+ #define ice_for_each_rxq(vsi, i) \
+ 	for ((i) = 0; (i) < (vsi)->num_rxq; (i)++)
+ 
++/* Macros for each allocated tx/rx ring whether used or not in a VSI */
++#define ice_for_each_alloc_txq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_txq; (i)++)
++
++#define ice_for_each_alloc_rxq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_rxq; (i)++)
++
+ struct ice_tc_info {
+ 	u16 qoffset;
+ 	u16 qcount;
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 7541ec2270b3..a0614f472658 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -329,19 +329,19 @@ struct ice_aqc_vsi_props {
+ 	/* VLAN section */
+ 	__le16 pvid; /* VLANS include priority bits */
+ 	u8 pvlan_reserved[2];
+-	u8 port_vlan_flags;
+-#define ICE_AQ_VSI_PVLAN_MODE_S	0
+-#define ICE_AQ_VSI_PVLAN_MODE_M	(0x3 << ICE_AQ_VSI_PVLAN_MODE_S)
+-#define ICE_AQ_VSI_PVLAN_MODE_UNTAGGED	0x1
+-#define ICE_AQ_VSI_PVLAN_MODE_TAGGED	0x2
+-#define ICE_AQ_VSI_PVLAN_MODE_ALL	0x3
++	u8 vlan_flags;
++#define ICE_AQ_VSI_VLAN_MODE_S	0
++#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
++#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
++#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
++#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+ #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+-#define ICE_AQ_VSI_PVLAN_EMOD_S	3
+-#define ICE_AQ_VSI_PVLAN_EMOD_M	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_S		3
++#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+ 	u8 pvlan_reserved2[3];
+ 	/* ingress egress up sections */
+ 	__le32 ingress_table; /* bitmap, 3 bits per up */
+@@ -594,6 +594,7 @@ struct ice_sw_rule_lg_act {
+ #define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+ #define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+ #define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
++#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+ 
+ 	/* Action = 7 - Set Stat count */
+ #define ICE_LG_ACT_STAT_COUNT		0x7
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 71d032cc5fa7..ebd701ac9428 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -1483,7 +1483,7 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+ 	struct ice_phy_info *phy_info;
+ 	enum ice_status status = 0;
+ 
+-	if (!pi)
++	if (!pi || !link_up)
+ 		return ICE_ERR_PARAM;
+ 
+ 	phy_info = &pi->phy;
+@@ -1619,20 +1619,23 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+ 	}
+ 
+ 	/* LUT size is only valid for Global and PF table types */
+-	if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512) {
++	switch (lut_size) {
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ 		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ 			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ 			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if ((lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K) &&
+-		   (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF)) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else {
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
++		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
++			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
++				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
++				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
++			break;
++		}
++		/* fall-through */
++	default:
+ 		status = ICE_ERR_PARAM;
+ 		goto ice_aq_get_set_rss_lut_exit;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 7c511f144ed6..62be72fdc8f3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -597,10 +597,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	return 0;
+ 
+ init_ctrlq_free_rq:
+-	ice_shutdown_rq(hw, cq);
+-	ice_shutdown_sq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
+ 	return status;
+ }
+ 
+@@ -706,10 +710,14 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+ 		return;
+ 	}
+ 
+-	ice_shutdown_sq(hw, cq);
+-	ice_shutdown_rq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
+ }
+ 
+ /**
+@@ -1057,8 +1065,11 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ 
+ clean_rq_elem_out:
+ 	/* Set pending if needed, unlock and return */
+-	if (pending)
++	if (pending) {
++		/* re-read HW head to calculate actual pending messages */
++		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+ 		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
++	}
+ clean_rq_elem_err:
+ 	mutex_unlock(&cq->rq_lock);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 1db304c01d10..c71a9b528d6d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -26,7 +26,7 @@ static int ice_q_stats_len(struct net_device *netdev)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 
+-	return ((np->vsi->num_txq + np->vsi->num_rxq) *
++	return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) *
+ 		(sizeof(struct ice_q_stats) / sizeof(u64)));
+ }
+ 
+@@ -218,7 +218,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_txq(vsi, i) {
++		ice_for_each_alloc_txq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "tx-queue-%u.tx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -226,7 +226,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_rxq(vsi, i) {
++		ice_for_each_alloc_rxq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "rx-queue-%u.rx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -253,6 +253,24 @@ static int ice_get_sset_count(struct net_device *netdev, int sset)
+ {
+ 	switch (sset) {
+ 	case ETH_SS_STATS:
++		/* The number (and order) of strings reported *must* remain
++		 * constant for a given netdevice. This function must not
++		 * report a different number based on run time parameters
++		 * (such as the number of queues in use, or the setting of
++		 * a private ethtool flag). This is due to the nature of the
++		 * ethtool stats API.
++		 *
++		 * User space programs such as ethtool must make 3 separate
++		 * ioctl requests, one for size, one for the strings, and
++		 * finally one for the stats. Since these cross into
++		 * user space, changes to the number or size could result in
++		 * undefined memory access or incorrect string<->value
++		 * correlations for statistics.
++		 *
++		 * Even if it appears to be safe, changes to the size or
++		 * order of strings will suffer from race conditions and are
++		 * not safe.
++		 */
+ 		return ICE_ALL_STATS_LEN(netdev);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -280,18 +298,26 @@ ice_get_ethtool_stats(struct net_device *netdev,
+ 	/* populate per queue stats */
+ 	rcu_read_lock();
+ 
+-	ice_for_each_txq(vsi, j) {
++	ice_for_each_alloc_txq(vsi, j) {
+ 		ring = READ_ONCE(vsi->tx_rings[j]);
+-		if (!ring)
+-			continue;
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+-	ice_for_each_rxq(vsi, j) {
++	ice_for_each_alloc_rxq(vsi, j) {
+ 		ring = READ_ONCE(vsi->rx_rings[j]);
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -519,7 +545,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_txq; i++) {
++	for (i = 0; i < vsi->alloc_txq; i++) {
+ 		/* clone ring and setup updated count */
+ 		tx_rings[i] = *vsi->tx_rings[i];
+ 		tx_rings[i].count = new_tx_cnt;
+@@ -551,7 +577,7 @@ process_rx:
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_rxq; i++) {
++	for (i = 0; i < vsi->alloc_rxq; i++) {
+ 		/* clone ring and setup updated count */
+ 		rx_rings[i] = *vsi->rx_rings[i];
+ 		rx_rings[i].count = new_rx_cnt;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5299caf55a7f..27c9aa31b248 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -916,6 +916,21 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ 	return pending && (i == ICE_DFLT_IRQ_WORK);
+ }
+ 
++/**
++ * ice_ctrlq_pending - check if there is a difference between ntc and ntu
++ * @hw: pointer to hardware info
++ * @cq: control queue information
++ *
++ * returns true if there are pending messages in a queue, false if there aren't
++ */
++static bool ice_ctrlq_pending(struct ice_hw *hw, struct ice_ctl_q_info *cq)
++{
++	u16 ntu;
++
++	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
++	return cq->rq.next_to_clean != ntu;
++}
++
+ /**
+  * ice_clean_adminq_subtask - clean the AdminQ rings
+  * @pf: board private structure
+@@ -923,7 +938,6 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+-	u32 val;
+ 
+ 	if (!test_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state))
+ 		return;
+@@ -933,9 +947,13 @@ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ 
+ 	clear_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state);
+ 
+-	/* re-enable Admin queue interrupt causes */
+-	val = rd32(hw, PFINT_FW_CTL);
+-	wr32(hw, PFINT_FW_CTL, (val | PFINT_FW_CTL_CAUSE_ENA_M));
++	/* There might be a situation where new messages arrive to a control
++	 * queue between processing the last message and clearing the
++	 * EVENT_PENDING bit. So before exiting, check queue head again (using
++	 * ice_ctrlq_pending) and process new messages if any.
++	 */
++	if (ice_ctrlq_pending(hw, &hw->adminq))
++		__ice_clean_ctrlq(pf, ICE_CTL_Q_ADMIN);
+ 
+ 	ice_flush(hw);
+ }
+@@ -1295,11 +1313,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ 		qcount = numq_tc;
+ 	}
+ 
+-	/* find higher power-of-2 of qcount */
+-	pow = ilog2(qcount);
+-
+-	if (!is_power_of_2(qcount))
+-		pow++;
++	/* find the (rounded up) power-of-2 of qcount */
++	pow = order_base_2(qcount);
+ 
+ 	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+ 		if (!(vsi->tc_cfg.ena_tc & BIT(i))) {
+@@ -1352,14 +1367,15 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt)
+ 	ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE;
+ 	/* Traffic from VSI can be sent to LAN */
+ 	ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+-	/* Allow all packets untagged/tagged */
+-	ctxt->info.port_vlan_flags = ((ICE_AQ_VSI_PVLAN_MODE_ALL &
+-				       ICE_AQ_VSI_PVLAN_MODE_M) >>
+-				      ICE_AQ_VSI_PVLAN_MODE_S);
+-	/* Show VLAN/UP from packets in Rx descriptors */
+-	ctxt->info.port_vlan_flags |= ((ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH &
+-					ICE_AQ_VSI_PVLAN_EMOD_M) >>
+-				       ICE_AQ_VSI_PVLAN_EMOD_S);
++
++	/* By default bits 3 and 4 in vlan_flags are 0's which results in legacy
++	 * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all
++	 * packets untagged/tagged.
++	 */
++	ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL &
++				  ICE_AQ_VSI_VLAN_MODE_M) >>
++				 ICE_AQ_VSI_VLAN_MODE_S);
++
+ 	/* Have 1:1 UP mapping for both ingress/egress tables */
+ 	table |= ICE_UP_TABLE_TRANSLATE(0, 0);
+ 	table |= ICE_UP_TABLE_TRANSLATE(1, 1);
+@@ -2058,15 +2074,13 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)
+ skip_req_irq:
+ 	ice_ena_misc_vector(pf);
+ 
+-	val = (pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_OICR_CTL_ITR_INDX_M) |
+-	      PFINT_OICR_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
++	       PFINT_OICR_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_OICR_CTL, val);
+ 
+ 	/* This enables Admin queue Interrupt causes */
+-	val = (pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_FW_CTL_ITR_INDX_M) |
+-	      PFINT_FW_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
++	       PFINT_FW_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_FW_CTL, val);
+ 
+ 	itr_gran = hw->itr_gran_200;
+@@ -3246,8 +3260,10 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
+ 	if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
+ 		ice_dis_msix(pf);
+ 
+-	devm_kfree(&pf->pdev->dev, pf->irq_tracker);
+-	pf->irq_tracker = NULL;
++	if (pf->irq_tracker) {
++		devm_kfree(&pf->pdev->dev, pf->irq_tracker);
++		pf->irq_tracker = NULL;
++	}
+ }
+ 
+ /**
+@@ -3720,10 +3736,10 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 	enum ice_status status;
+ 
+ 	/* Here we are configuring the VSI to let the driver add VLAN tags by
+-	 * setting port_vlan_flags to ICE_AQ_VSI_PVLAN_MODE_ALL. The actual VLAN
+-	 * tag insertion happens in the Tx hot path, in ice_tx_map.
++	 * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag
++	 * insertion happens in the Tx hot path, in ice_tx_map.
+ 	 */
+-	ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_MODE_ALL;
++	ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+ 
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+@@ -3735,7 +3751,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -3757,12 +3773,15 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 	 */
+ 	if (ena) {
+ 		/* Strip VLAN tag from Rx packet and put it in the desc */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+ 	} else {
+ 		/* Disable stripping. Leave tag in packet */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_NOTHING;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ 	}
+ 
++	/* Allow all packets untagged/tagged */
++	ctxt.info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL;
++
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+ 
+@@ -3773,7 +3792,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -4098,11 +4117,12 @@ static int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ 	int err;
+ 
+-	ice_set_rx_mode(vsi->netdev);
+-
+-	err = ice_restore_vlan(vsi);
+-	if (err)
+-		return err;
++	if (vsi->netdev) {
++		ice_set_rx_mode(vsi->netdev);
++		err = ice_restore_vlan(vsi);
++		if (err)
++			return err;
++	}
+ 
+ 	err = ice_vsi_cfg_txqs(vsi);
+ 	if (!err)
+@@ -4868,7 +4888,7 @@ int ice_down(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_txq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Tx queues\n",
+@@ -4893,7 +4913,7 @@ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_rxq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Rx queues\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 723d15f1e90b..6b7ec2ae5ad6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -645,14 +645,14 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+ 	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[1] = cpu_to_le32(act);
+ 
+-	act = (7 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
++	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
++	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+ 
+ 	/* Third action Marker value */
+ 	act |= ICE_LG_ACT_GENERIC;
+ 	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+ 		ICE_LG_ACT_GENERIC_VALUE_M;
+ 
+-	act |= (0 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[2] = cpu_to_le32(act);
+ 
+ 	/* call the fill switch rule to fill the lookup tx rx structure */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 6f59933cdff7..2bc4fe475f28 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -688,8 +688,13 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
+ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ 	struct vf_data_storage *vfinfo = &adapter->vfinfo[vf];
++	u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ 	u8 num_tcs = adapter->hw_tcs;
++	u32 reg_val;
++	u32 queue;
++	u32 word;
+ 
+ 	/* remove VLAN filters beloning to this VF */
+ 	ixgbe_clear_vf_vlans(adapter, vf);
+@@ -726,6 +731,27 @@ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ 
+ 	/* reset VF api back to unknown */
+ 	adapter->vfinfo[vf].vf_api = ixgbe_mbox_api_10;
++
++	/* Restart each queue for given VF */
++	for (queue = 0; queue < q_per_pool; queue++) {
++		unsigned int reg_idx = (vf * q_per_pool) + queue;
++
++		reg_val = IXGBE_READ_REG(hw, IXGBE_PVFTXDCTL(reg_idx));
++
++		/* Re-enabling only configured queues */
++		if (reg_val) {
++			reg_val |= IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++			reg_val &= ~IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++		}
++	}
++
++	/* Clear VF's mailbox memory */
++	for (word = 0; word < IXGBE_VFMAILBOX_SIZE; word++)
++		IXGBE_WRITE_REG_ARRAY(hw, IXGBE_PFMBMEM(vf), word, 0);
++
++	IXGBE_WRITE_FLUSH(hw);
+ }
+ 
+ static int ixgbe_set_vf_mac(struct ixgbe_adapter *adapter,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 44cfb2021145..41bcbb337e83 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -2518,6 +2518,7 @@ enum {
+ /* Translated register #defines */
+ #define IXGBE_PVFTDH(P)		(0x06010 + (0x40 * (P)))
+ #define IXGBE_PVFTDT(P)		(0x06018 + (0x40 * (P)))
++#define IXGBE_PVFTXDCTL(P)	(0x06028 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAL(P)	(0x06038 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAH(P)	(0x0603C + (0x40 * (P)))
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cdd645024a32..ad6826b5f758 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -48,7 +48,7 @@
+ #include "qed_reg_addr.h"
+ #include "qed_sriov.h"
+ 
+-#define CHIP_MCP_RESP_ITER_US 10
++#define QED_MCP_RESP_ITER_US	10
+ 
+ #define QED_DRV_MB_MAX_RETRIES	(500 * 1000)	/* Account for 5 sec */
+ #define QED_MCP_RESET_RETRIES	(50 * 1000)	/* Account for 500 msec */
+@@ -183,18 +183,57 @@ int qed_mcp_free(struct qed_hwfn *p_hwfn)
+ 	return 0;
+ }
+ 
++/* Maximum of 1 sec to wait for the SHMEM ready indication */
++#define QED_MCP_SHMEM_RDY_MAX_RETRIES	20
++#define QED_MCP_SHMEM_RDY_ITER_MS	50
++
+ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+ 	struct qed_mcp_info *p_info = p_hwfn->mcp_info;
++	u8 cnt = QED_MCP_SHMEM_RDY_MAX_RETRIES;
++	u8 msec = QED_MCP_SHMEM_RDY_ITER_MS;
+ 	u32 drv_mb_offsize, mfw_mb_offsize;
+ 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
+ 
+ 	p_info->public_base = qed_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+-	if (!p_info->public_base)
+-		return 0;
++	if (!p_info->public_base) {
++		DP_NOTICE(p_hwfn,
++			  "The address of the MCP scratch-pad is not configured\n");
++		return -EINVAL;
++	}
+ 
+ 	p_info->public_base |= GRCBASE_MCP;
+ 
++	/* Get the MFW MB address and number of supported messages */
++	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
++				SECTION_OFFSIZE_ADDR(p_info->public_base,
++						     PUBLIC_MFW_MB));
++	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
++	p_info->mfw_mb_length = (u16)qed_rd(p_hwfn, p_ptt,
++					    p_info->mfw_mb_addr +
++					    offsetof(struct public_mfw_mb,
++						     sup_msgs));
++
++	/* The driver can notify that there was an MCP reset, and might read the
++	 * SHMEM values before the MFW has completed initializing them.
++	 * To avoid this, the "sup_msgs" field in the MFW mailbox is used as a
++	 * data ready indication.
++	 */
++	while (!p_info->mfw_mb_length && --cnt) {
++		msleep(msec);
++		p_info->mfw_mb_length =
++			(u16)qed_rd(p_hwfn, p_ptt,
++				    p_info->mfw_mb_addr +
++				    offsetof(struct public_mfw_mb, sup_msgs));
++	}
++
++	if (!cnt) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to get the SHMEM ready notification after %d msec\n",
++			  QED_MCP_SHMEM_RDY_MAX_RETRIES * msec);
++		return -EBUSY;
++	}
++
+ 	/* Calculate the driver and MFW mailbox address */
+ 	drv_mb_offsize = qed_rd(p_hwfn, p_ptt,
+ 				SECTION_OFFSIZE_ADDR(p_info->public_base,
+@@ -204,13 +243,6 @@ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		   "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n",
+ 		   drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
+ 
+-	/* Set the MFW MB address */
+-	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
+-				SECTION_OFFSIZE_ADDR(p_info->public_base,
+-						     PUBLIC_MFW_MB));
+-	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
+-	p_info->mfw_mb_length =	(u16)qed_rd(p_hwfn, p_ptt, p_info->mfw_mb_addr);
+-
+ 	/* Get the current driver mailbox sequence before sending
+ 	 * the first command
+ 	 */
+@@ -285,9 +317,15 @@ static void qed_mcp_reread_offsets(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_reset(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
++	u32 org_mcp_reset_seq, seq, delay = QED_MCP_RESP_ITER_US, cnt = 0;
+ 	int rc = 0;
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
++		return -EBUSY;
++	}
++
+ 	/* Ensure that only a single thread is accessing the mailbox */
+ 	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+@@ -413,14 +451,41 @@ static void __qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
+ }
+ 
++static void qed_mcp_cmd_set_blocking(struct qed_hwfn *p_hwfn, bool block_cmd)
++{
++	p_hwfn->mcp_info->b_block_cmd = block_cmd;
++
++	DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
++		block_cmd ? "Block" : "Unblock");
++}
++
++static void qed_mcp_print_cpu_info(struct qed_hwfn *p_hwfn,
++				   struct qed_ptt *p_ptt)
++{
++	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
++	u32 delay = QED_MCP_RESP_ITER_US;
++
++	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++	cpu_pc_0 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_1 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_2 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++
++	DP_NOTICE(p_hwfn,
++		  "MCP CPU info: mode 0x%08x, state 0x%08x, pc {0x%08x, 0x%08x, 0x%08x}\n",
++		  cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2);
++}
++
+ static int
+ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		       struct qed_ptt *p_ptt,
+ 		       struct qed_mcp_mb_params *p_mb_params,
+-		       u32 max_retries, u32 delay)
++		       u32 max_retries, u32 usecs)
+ {
++	u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000);
+ 	struct qed_mcp_cmd_elem *p_cmd_elem;
+-	u32 cnt = 0;
+ 	u16 seq_num;
+ 	int rc = 0;
+ 
+@@ -443,7 +508,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 			goto err;
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+-		udelay(delay);
++
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
+ 	} while (++cnt < max_retries);
+ 
+ 	if (cnt >= max_retries) {
+@@ -472,7 +541,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		 * The spinlock stays locked until the list element is removed.
+ 		 */
+ 
+-		udelay(delay);
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
++
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+ 		if (p_cmd_elem->b_is_completed)
+@@ -491,11 +564,15 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		DP_NOTICE(p_hwfn,
+ 			  "The MFW failed to respond to command 0x%08x [param 0x%08x].\n",
+ 			  p_mb_params->cmd, p_mb_params->param);
++		qed_mcp_print_cpu_info(p_hwfn, p_ptt);
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 		qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
++		if (!QED_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK))
++			qed_mcp_cmd_set_blocking(p_hwfn, true);
++
+ 		return -EAGAIN;
+ 	}
+ 
+@@ -507,7 +584,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
+ 		   p_mb_params->mcp_resp,
+ 		   p_mb_params->mcp_param,
+-		   (cnt * delay) / 1000, (cnt * delay) % 1000);
++		   (cnt * usecs) / 1000, (cnt * usecs) % 1000);
+ 
+ 	/* Clear the sequence number from the MFW response */
+ 	p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
+@@ -525,7 +602,7 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ {
+ 	size_t union_data_size = sizeof(union drv_union_data);
+ 	u32 max_retries = QED_DRV_MB_MAX_RETRIES;
+-	u32 delay = CHIP_MCP_RESP_ITER_US;
++	u32 usecs = QED_MCP_RESP_ITER_US;
+ 
+ 	/* MCP not initialized */
+ 	if (!qed_mcp_is_init(p_hwfn)) {
+@@ -533,6 +610,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EBUSY;
+ 	}
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
++			  p_mb_params->cmd, p_mb_params->param);
++		return -EBUSY;
++	}
++
+ 	if (p_mb_params->data_src_size > union_data_size ||
+ 	    p_mb_params->data_dst_size > union_data_size) {
+ 		DP_ERR(p_hwfn,
+@@ -542,8 +626,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EINVAL;
+ 	}
+ 
++	if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
++		max_retries = DIV_ROUND_UP(max_retries, 1000);
++		usecs *= 1000;
++	}
++
+ 	return _qed_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
+-				      delay);
++				      usecs);
+ }
+ 
+ int qed_mcp_cmd(struct qed_hwfn *p_hwfn,
+@@ -760,6 +849,7 @@ __qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 	mb_params.data_src_size = sizeof(load_req);
+ 	mb_params.p_data_dst = &load_rsp;
+ 	mb_params.data_dst_size = sizeof(load_rsp);
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
+ 
+ 	DP_VERBOSE(p_hwfn, QED_MSG_SP,
+ 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+@@ -981,7 +1071,8 @@ int qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 wol_param, mcp_resp, mcp_param;
++	struct qed_mcp_mb_params mb_params;
++	u32 wol_param;
+ 
+ 	switch (p_hwfn->cdev->wol_config) {
+ 	case QED_OV_WOL_DISABLED:
+@@ -999,8 +1090,12 @@ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+ 	}
+ 
+-	return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+-			   &mcp_resp, &mcp_param);
++	memset(&mb_params, 0, sizeof(mb_params));
++	mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ;
++	mb_params.param = wol_param;
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
++
++	return qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ }
+ 
+ int qed_mcp_unload_done(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+@@ -2075,31 +2170,65 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
+ 	return rc;
+ }
+ 
++/* A maximal 100 msec waiting time for the MCP to halt */
++#define QED_MCP_HALT_SLEEP_MS		10
++#define QED_MCP_HALT_MAX_RETRIES	10
++
+ int qed_mcp_halt(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 resp = 0, param = 0;
++	u32 resp = 0, param = 0, cpu_state, cnt = 0;
+ 	int rc;
+ 
+ 	rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp,
+ 			 &param);
+-	if (rc)
++	if (rc) {
+ 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
++		return rc;
++	}
+ 
+-	return rc;
++	do {
++		msleep(QED_MCP_HALT_SLEEP_MS);
++		cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++		if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED)
++			break;
++	} while (++cnt < QED_MCP_HALT_MAX_RETRIES);
++
++	if (cnt == QED_MCP_HALT_MAX_RETRIES) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to halt the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE), cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, true);
++
++	return 0;
+ }
+ 
++#define QED_MCP_RESUME_SLEEP_MS	10
++
+ int qed_mcp_resume(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 value, cpu_mode;
++	u32 cpu_mode, cpu_state;
+ 
+ 	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_STATE, 0xffffffff);
+ 
+-	value = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
+-	value &= ~MCP_REG_CPU_MODE_SOFT_HALT;
+-	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, value);
+ 	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
++	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
++	msleep(QED_MCP_RESUME_SLEEP_MS);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+ 
+-	return (cpu_mode & MCP_REG_CPU_MODE_SOFT_HALT) ? -EAGAIN : 0;
++	if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to resume the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  cpu_mode, cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, false);
++
++	return 0;
+ }
+ 
+ int qed_mcp_ov_update_current_config(struct qed_hwfn *p_hwfn,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+index 632a838f1fe3..ce2e617d2cab 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+@@ -635,11 +635,14 @@ struct qed_mcp_info {
+ 	 */
+ 	spinlock_t				cmd_lock;
+ 
++	/* Flag to indicate whether sending a MFW mailbox command is blocked */
++	bool					b_block_cmd;
++
+ 	/* Spinlock used for syncing SW link-changes and link-changes
+ 	 * originating from attention context.
+ 	 */
+ 	spinlock_t				link_lock;
+-	bool					block_mb_sending;
++
+ 	u32					public_base;
+ 	u32					drv_mb_addr;
+ 	u32					mfw_mb_addr;
+@@ -660,14 +663,20 @@ struct qed_mcp_info {
+ };
+ 
+ struct qed_mcp_mb_params {
+-	u32			cmd;
+-	u32			param;
+-	void			*p_data_src;
+-	u8			data_src_size;
+-	void			*p_data_dst;
+-	u8			data_dst_size;
+-	u32			mcp_resp;
+-	u32			mcp_param;
++	u32 cmd;
++	u32 param;
++	void *p_data_src;
++	void *p_data_dst;
++	u8 data_src_size;
++	u8 data_dst_size;
++	u32 mcp_resp;
++	u32 mcp_param;
++	u32 flags;
++#define QED_MB_FLAG_CAN_SLEEP	(0x1 << 0)
++#define QED_MB_FLAG_AVOID_BLOCK	(0x1 << 1)
++#define QED_MB_FLAGS_IS_SET(params, flag) \
++	({ typeof(params) __params = (params); \
++	   (__params && (__params->flags & QED_MB_FLAG_ ## flag)); })
+ };
+ 
+ struct qed_drv_tlv_hdr {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+index d8ad2dcad8d5..f736f70956fd 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+@@ -562,8 +562,10 @@
+ 	0
+ #define MCP_REG_CPU_STATE \
+ 	0xe05004UL
++#define MCP_REG_CPU_STATE_SOFT_HALTED	(0x1UL << 10)
+ #define MCP_REG_CPU_EVENT_MASK \
+ 	0xe05008UL
++#define MCP_REG_CPU_PROGRAM_COUNTER	0xe0501cUL
+ #define PGLUE_B_REG_PF_BAR0_SIZE \
+ 	0x2aae60UL
+ #define PGLUE_B_REG_PF_BAR1_SIZE \
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index 2e5150b0b8d5..7a14e8170e82 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -40,8 +40,11 @@ static int xgmiitorgmii_read_status(struct phy_device *phydev)
+ {
+ 	struct gmii2rgmii *priv = phydev->priv;
+ 	u16 val = 0;
++	int err;
+ 
+-	priv->phy_drv->read_status(phydev);
++	err = priv->phy_drv->read_status(phydev);
++	if (err < 0)
++		return err;
+ 
+ 	val = mdiobus_read(phydev->mdio.bus, priv->addr, XILINX_GMII2RGMII_REG);
+ 	val &= ~XILINX_GMII2RGMII_SPEED_MASK;
+@@ -81,6 +84,11 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
++	if (!priv->phy_dev->drv) {
++		dev_info(dev, "Attached phy not ready\n");
++		return -EPROBE_DEFER;
++	}
++
+ 	priv->addr = mdiodev->addr;
+ 	priv->phy_drv = priv->phy_dev->drv;
+ 	memcpy(&priv->conv_phy_drv, priv->phy_dev->drv,
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 3b96a43fbda4..18c709c484e7 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -1512,7 +1512,7 @@ ath10k_ce_alloc_src_ring_64(struct ath10k *ar, unsigned int ce_id,
+ 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ 		if (ret) {
+ 			dma_free_coherent(ar->dev,
+-					  (nentries * sizeof(struct ce_desc) +
++					  (nentries * sizeof(struct ce_desc_64) +
+ 					   CE_DESC_RING_ALIGN),
+ 					  src_ring->base_addr_owner_space_unaligned,
+ 					  base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index c72d8af122a2..4d1cd90d6d27 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -268,11 +268,12 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar)
+ 	spin_lock_bh(&htt->rx_ring.lock);
+ 	ret = ath10k_htt_rx_ring_fill_n(htt, (htt->rx_ring.fill_level -
+ 					      htt->rx_ring.fill_cnt));
+-	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	if (ret)
+ 		ath10k_htt_rx_ring_free(htt);
+ 
++	spin_unlock_bh(&htt->rx_ring.lock);
++
+ 	return ret;
+ }
+ 
+@@ -284,7 +285,9 @@ void ath10k_htt_rx_free(struct ath10k_htt *htt)
+ 	skb_queue_purge(&htt->rx_in_ord_compl_q);
+ 	skb_queue_purge(&htt->tx_fetch_ind_q);
+ 
++	spin_lock_bh(&htt->rx_ring.lock);
+ 	ath10k_htt_rx_ring_free(htt);
++	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	dma_free_coherent(htt->ar->dev,
+ 			  ath10k_htt_get_rx_ring_size(htt),
+@@ -1089,7 +1092,7 @@ static void ath10k_htt_rx_h_queue_msdu(struct ath10k *ar,
+ 	status = IEEE80211_SKB_RXCB(skb);
+ 	*status = *rx_status;
+ 
+-	__skb_queue_tail(&ar->htt.rx_msdus_q, skb);
++	skb_queue_tail(&ar->htt.rx_msdus_q, skb);
+ }
+ 
+ static void ath10k_process_rx(struct ath10k *ar, struct sk_buff *skb)
+@@ -2810,7 +2813,7 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+ 		break;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_RX_IN_ORD_PADDR_IND: {
+-		__skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
++		skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
+ 		return false;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_TX_CREDIT_UPDATE_IND:
+@@ -2874,7 +2877,7 @@ static int ath10k_htt_rx_deliver_msdu(struct ath10k *ar, int quota, int budget)
+ 		if (skb_queue_empty(&ar->htt.rx_msdus_q))
+ 			break;
+ 
+-		skb = __skb_dequeue(&ar->htt.rx_msdus_q);
++		skb = skb_dequeue(&ar->htt.rx_msdus_q);
+ 		if (!skb)
+ 			break;
+ 		ath10k_process_rx(ar, skb);
+@@ -2905,7 +2908,7 @@ int ath10k_htt_txrx_compl_task(struct ath10k *ar, int budget)
+ 		goto exit;
+ 	}
+ 
+-	while ((skb = __skb_dequeue(&htt->rx_in_ord_compl_q))) {
++	while ((skb = skb_dequeue(&htt->rx_in_ord_compl_q))) {
+ 		spin_lock_bh(&htt->rx_ring.lock);
+ 		ret = ath10k_htt_rx_in_ord_ind(ar, skb);
+ 		spin_unlock_bh(&htt->rx_ring.lock);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 747c6951b5c1..e0b9f7d0dfd3 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -4054,6 +4054,7 @@ void ath10k_mac_tx_push_pending(struct ath10k *ar)
+ 	rcu_read_unlock();
+ 	spin_unlock_bh(&ar->txqs_lock);
+ }
++EXPORT_SYMBOL(ath10k_mac_tx_push_pending);
+ 
+ /************/
+ /* Scanning */
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index d612ce8c9cff..299db8b1c9ba 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -30,6 +30,7 @@
+ #include "debug.h"
+ #include "hif.h"
+ #include "htc.h"
++#include "mac.h"
+ #include "targaddrs.h"
+ #include "trace.h"
+ #include "sdio.h"
+@@ -396,6 +397,7 @@ static int ath10k_sdio_mbox_rx_process_packet(struct ath10k *ar,
+ 	int ret;
+ 
+ 	payload_len = le16_to_cpu(htc_hdr->len);
++	skb->len = payload_len + sizeof(struct ath10k_htc_hdr);
+ 
+ 	if (trailer_present) {
+ 		trailer = skb->data + sizeof(*htc_hdr) +
+@@ -434,12 +436,14 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 	enum ath10k_htc_ep_id id;
+ 	int ret, i, *n_lookahead_local;
+ 	u32 *lookaheads_local;
++	int lookahead_idx = 0;
+ 
+ 	for (i = 0; i < ar_sdio->n_rx_pkts; i++) {
+ 		lookaheads_local = lookaheads;
+ 		n_lookahead_local = n_lookahead;
+ 
+-		id = ((struct ath10k_htc_hdr *)&lookaheads[i])->eid;
++		id = ((struct ath10k_htc_hdr *)
++		      &lookaheads[lookahead_idx++])->eid;
+ 
+ 		if (id >= ATH10K_HTC_EP_COUNT) {
+ 			ath10k_warn(ar, "invalid endpoint in look-ahead: %d\n",
+@@ -462,6 +466,7 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 			/* Only read lookahead's from RX trailers
+ 			 * for the last packet in a bundle.
+ 			 */
++			lookahead_idx--;
+ 			lookaheads_local = NULL;
+ 			n_lookahead_local = NULL;
+ 		}
+@@ -1342,6 +1347,8 @@ static void ath10k_sdio_irq_handler(struct sdio_func *func)
+ 			break;
+ 	} while (time_before(jiffies, timeout) && !done);
+ 
++	ath10k_mac_tx_push_pending(ar);
++
+ 	sdio_claim_host(ar_sdio->func);
+ 
+ 	if (ret && ret != -ECANCELED)
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index a3a7042fe13a..aa621bf50a91 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -449,7 +449,7 @@ static void ath10k_snoc_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
+ 
+ static void ath10k_snoc_rx_replenish_retry(struct timer_list *t)
+ {
+-	struct ath10k_pci *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
++	struct ath10k_snoc *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
+ 	struct ath10k *ar = ar_snoc->ar;
+ 
+ 	ath10k_snoc_rx_post(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index f97ab795cf2e..2319f79b34f0 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4602,10 +4602,6 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 
+ 	ev = (struct wmi_pdev_tpc_config_event *)skb->data;
+ 
+-	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
+-	if (!tpc_stats)
+-		return;
+-
+ 	num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+ 
+ 	if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
+@@ -4614,6 +4610,10 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
++	if (!tpc_stats)
++		return;
++
+ 	ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ 					    num_tx_chain);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+index b9672da24a9d..b24bc57ca91b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+@@ -213,7 +213,7 @@ static const s16 log_table[] = {
+ 	30498,
+ 	31267,
+ 	32024,
+-	32768
++	32767
+ };
+ 
+ #define LOG_TABLE_SIZE 32       /* log_table size */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index b49aea4da2d6..8985446570bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -439,15 +439,13 @@ mt76x2_mac_fill_tx_status(struct mt76x2_dev *dev,
+ 	if (last_rate < IEEE80211_TX_MAX_RATES - 1)
+ 		rate[last_rate + 1].idx = -1;
+ 
+-	cur_idx = rate[last_rate].idx + st->retry;
++	cur_idx = rate[last_rate].idx + last_rate;
+ 	for (i = 0; i <= last_rate; i++) {
+ 		rate[i].flags = rate[last_rate].flags;
+ 		rate[i].idx = max_t(int, 0, cur_idx - i);
+ 		rate[i].count = 1;
+ 	}
+-
+-	if (last_rate > 0)
+-		rate[last_rate - 1].count = st->retry + 1 - last_rate;
++	rate[last_rate].count = st->retry + 1 - last_rate;
+ 
+ 	info->status.ampdu_len = n_frames;
+ 	info->status.ampdu_ack_len = st->success ? n_frames : 0;
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 9935bd09db1f..d4947e3a909e 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -2928,6 +2928,8 @@ static void rndis_wlan_auth_indication(struct usbnet *usbdev,
+ 
+ 	while (buflen >= sizeof(*auth_req)) {
+ 		auth_req = (void *)buf;
++		if (buflen < le32_to_cpu(auth_req->length))
++			return;
+ 		type = "unknown";
+ 		flags = le32_to_cpu(auth_req->flags);
+ 		pairwise_error = false;
+diff --git a/drivers/net/wireless/ti/wlcore/cmd.c b/drivers/net/wireless/ti/wlcore/cmd.c
+index 761cf8573a80..f48c3f62966d 100644
+--- a/drivers/net/wireless/ti/wlcore/cmd.c
++++ b/drivers/net/wireless/ti/wlcore/cmd.c
+@@ -35,6 +35,7 @@
+ #include "wl12xx_80211.h"
+ #include "cmd.h"
+ #include "event.h"
++#include "ps.h"
+ #include "tx.h"
+ #include "hw_ops.h"
+ 
+@@ -191,6 +192,10 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 
+ 	timeout_time = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT);
+ 
++	ret = wl1271_ps_elp_wakeup(wl);
++	if (ret < 0)
++		return ret;
++
+ 	do {
+ 		if (time_after(jiffies, timeout_time)) {
+ 			wl1271_debug(DEBUG_CMD, "timeout waiting for event %d",
+@@ -222,6 +227,7 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 	} while (!event);
+ 
+ out:
++	wl1271_ps_elp_sleep(wl);
+ 	kfree(events_vector);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 34712def81b1..5251689a1d9a 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -311,7 +311,7 @@ fcloop_tgt_lsrqst_done_work(struct work_struct *work)
+ 	struct fcloop_tport *tport = tls_req->tport;
+ 	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+ 
+-	if (tport->remoteport)
++	if (!tport || tport->remoteport)
+ 		lsreq->done(lsreq, tls_req->status);
+ }
+ 
+@@ -329,6 +329,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
+ 
+ 	if (!rport->targetport) {
+ 		tls_req->status = -ECONNREFUSED;
++		tls_req->tport = NULL;
+ 		schedule_work(&tls_req->work);
+ 		return ret;
+ 	}
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index ef0b1b6ba86f..12afa7fdf77e 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -457,17 +457,18 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+ /**
+  * enable_slot - enable, configure a slot
+  * @slot: slot to be enabled
++ * @bridge: true if enable is for the whole bridge (not a single slot)
+  *
+  * This function should be called per *physical slot*,
+  * not per each slot object in ACPI namespace.
+  */
+-static void enable_slot(struct acpiphp_slot *slot)
++static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ {
+ 	struct pci_dev *dev;
+ 	struct pci_bus *bus = slot->bus;
+ 	struct acpiphp_func *func;
+ 
+-	if (bus->self && hotplug_is_native(bus->self)) {
++	if (bridge && bus->self && hotplug_is_native(bus->self)) {
+ 		/*
+ 		 * If native hotplug is used, it will take care of hotplug
+ 		 * slot management and resource allocation for hotplug
+@@ -701,7 +702,7 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
+ 					trim_stale_devices(dev);
+ 
+ 			/* configure all functions */
+-			enable_slot(slot);
++			enable_slot(slot, true);
+ 		} else {
+ 			disable_slot(slot);
+ 		}
+@@ -785,7 +786,7 @@ static void hotplug_event(u32 type, struct acpiphp_context *context)
+ 		if (bridge)
+ 			acpiphp_check_bridge(bridge);
+ 		else if (!(slot->flags & SLOT_IS_GOING_AWAY))
+-			enable_slot(slot);
++			enable_slot(slot, false);
+ 
+ 		break;
+ 
+@@ -973,7 +974,7 @@ int acpiphp_enable_slot(struct acpiphp_slot *slot)
+ 
+ 	/* configure all functions */
+ 	if (!(slot->flags & SLOT_ENABLED))
+-		enable_slot(slot);
++		enable_slot(slot, false);
+ 
+ 	pci_unlock_rescan_remove();
+ 	return 0;
+diff --git a/drivers/platform/x86/asus-wireless.c b/drivers/platform/x86/asus-wireless.c
+index 6afd011de9e5..b8e35a8d65cf 100644
+--- a/drivers/platform/x86/asus-wireless.c
++++ b/drivers/platform/x86/asus-wireless.c
+@@ -52,13 +52,12 @@ static const struct acpi_device_id device_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(acpi, device_ids);
+ 
+-static u64 asus_wireless_method(acpi_handle handle, const char *method,
+-				int param)
++static acpi_status asus_wireless_method(acpi_handle handle, const char *method,
++					int param, u64 *ret)
+ {
+ 	struct acpi_object_list p;
+ 	union acpi_object obj;
+ 	acpi_status s;
+-	u64 ret;
+ 
+ 	acpi_handle_debug(handle, "Evaluating method %s, parameter %#x\n",
+ 			  method, param);
+@@ -67,24 +66,27 @@ static u64 asus_wireless_method(acpi_handle handle, const char *method,
+ 	p.count = 1;
+ 	p.pointer = &obj;
+ 
+-	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, &ret);
++	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, ret);
+ 	if (ACPI_FAILURE(s))
+ 		acpi_handle_err(handle,
+ 				"Failed to eval method %s, param %#x (%d)\n",
+ 				method, param, s);
+-	acpi_handle_debug(handle, "%s returned %#llx\n", method, ret);
+-	return ret;
++	else
++		acpi_handle_debug(handle, "%s returned %#llx\n", method, *ret);
++
++	return s;
+ }
+ 
+ static enum led_brightness led_state_get(struct led_classdev *led)
+ {
+ 	struct asus_wireless_data *data;
+-	int s;
++	acpi_status s;
++	u64 ret;
+ 
+ 	data = container_of(led, struct asus_wireless_data, led);
+ 	s = asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-				 data->hswc_params->status);
+-	if (s == data->hswc_params->on)
++				 data->hswc_params->status, &ret);
++	if (ACPI_SUCCESS(s) && ret == data->hswc_params->on)
+ 		return LED_FULL;
+ 	return LED_OFF;
+ }
+@@ -92,10 +94,11 @@ static enum led_brightness led_state_get(struct led_classdev *led)
+ static void led_state_update(struct work_struct *work)
+ {
+ 	struct asus_wireless_data *data;
++	u64 ret;
+ 
+ 	data = container_of(work, struct asus_wireless_data, led_work);
+ 	asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-			     data->led_state);
++			     data->led_state, &ret);
+ }
+ 
+ static void led_state_set(struct led_classdev *led, enum led_brightness value)
+diff --git a/drivers/power/reset/vexpress-poweroff.c b/drivers/power/reset/vexpress-poweroff.c
+index 102f95a09460..e9e749f87517 100644
+--- a/drivers/power/reset/vexpress-poweroff.c
++++ b/drivers/power/reset/vexpress-poweroff.c
+@@ -35,6 +35,7 @@ static void vexpress_reset_do(struct device *dev, const char *what)
+ }
+ 
+ static struct device *vexpress_power_off_device;
++static atomic_t vexpress_restart_nb_refcnt = ATOMIC_INIT(0);
+ 
+ static void vexpress_power_off(void)
+ {
+@@ -99,10 +100,13 @@ static int _vexpress_register_restart_handler(struct device *dev)
+ 	int err;
+ 
+ 	vexpress_restart_device = dev;
+-	err = register_restart_handler(&vexpress_restart_nb);
+-	if (err) {
+-		dev_err(dev, "cannot register restart handler (err=%d)\n", err);
+-		return err;
++	if (atomic_inc_return(&vexpress_restart_nb_refcnt) == 1) {
++		err = register_restart_handler(&vexpress_restart_nb);
++		if (err) {
++			dev_err(dev, "cannot register restart handler (err=%d)\n", err);
++			atomic_dec(&vexpress_restart_nb_refcnt);
++			return err;
++		}
+ 	}
+ 	device_create_file(dev, &dev_attr_active);
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 6e1bc14c3304..735658ee1c60 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -718,7 +718,7 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 	}
+ 
+ 	/* Determine charge current limit */
+-	cc = (ret & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
++	cc = (val & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
+ 	cc = (cc * CHRG_CCCV_CC_LSB_RES) + CHRG_CCCV_CC_OFFSET;
+ 	info->cc = cc;
+ 
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index d21f478741c1..e85361878450 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/notifier.h>
+ #include <linux/err.h>
+@@ -140,8 +141,13 @@ static void power_supply_deferred_register_work(struct work_struct *work)
+ 	struct power_supply *psy = container_of(work, struct power_supply,
+ 						deferred_register_work.work);
+ 
+-	if (psy->dev.parent)
+-		mutex_lock(&psy->dev.parent->mutex);
++	if (psy->dev.parent) {
++		while (!mutex_trylock(&psy->dev.parent->mutex)) {
++			if (psy->removing)
++				return;
++			msleep(10);
++		}
++	}
+ 
+ 	power_supply_changed(psy);
+ 
+@@ -1082,6 +1088,7 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register_no_ws);
+ void power_supply_unregister(struct power_supply *psy)
+ {
+ 	WARN_ON(atomic_dec_return(&psy->use_cnt));
++	psy->removing = true;
+ 	cancel_work_sync(&psy->changed_work);
+ 	cancel_delayed_work_sync(&psy->deferred_register_work);
+ 	sysfs_remove_link(&psy->dev.kobj, "powers");
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6ed568b96c0e..cc1450c53fb2 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3147,7 +3147,7 @@ static inline int regulator_suspend_toggle(struct regulator_dev *rdev,
+ 	if (!rstate->changeable)
+ 		return -EPERM;
+ 
+-	rstate->enabled = en;
++	rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND;
+ 
+ 	return 0;
+ }
+@@ -4381,13 +4381,13 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ 	    !rdev->desc->fixed_uV)
+ 		rdev->is_switch = true;
+ 
++	dev_set_drvdata(&rdev->dev, rdev);
+ 	ret = device_register(&rdev->dev);
+ 	if (ret != 0) {
+ 		put_device(&rdev->dev);
+ 		goto unset_supplies;
+ 	}
+ 
+-	dev_set_drvdata(&rdev->dev, rdev);
+ 	rdev_init_debugfs(rdev);
+ 
+ 	/* try to resolve regulators supply since a new one was registered */
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 638f17d4c848..210fc20f7de7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -213,8 +213,6 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 		else if (of_property_read_bool(suspend_np,
+ 					"regulator-off-in-suspend"))
+ 			suspend_state->enabled = DISABLE_IN_SUSPEND;
+-		else
+-			suspend_state->enabled = DO_NOTHING_IN_SUSPEND;
+ 
+ 		if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+ 					  &pval))
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index a9f60d0ee02e..7c732414367f 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3127,6 +3127,7 @@ static int dasd_alloc_queue(struct dasd_block *block)
+ 	block->tag_set.nr_hw_queues = nr_hw_queues;
+ 	block->tag_set.queue_depth = queue_depth;
+ 	block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	block->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	rc = blk_mq_alloc_tag_set(&block->tag_set);
+ 	if (rc)
+diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
+index b1fcb76dd272..98f66b7b6794 100644
+--- a/drivers/s390/block/scm_blk.c
++++ b/drivers/s390/block/scm_blk.c
+@@ -455,6 +455,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
+ 	bdev->tag_set.nr_hw_queues = nr_requests;
+ 	bdev->tag_set.queue_depth = nr_requests_per_io * nr_requests;
+ 	bdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	bdev->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	ret = blk_mq_alloc_tag_set(&bdev->tag_set);
+ 	if (ret)
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 8f03a869ac98..e9e669a6c2bc 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -2727,6 +2727,8 @@ int bnx2i_map_ep_dbell_regs(struct bnx2i_endpoint *ep)
+ 					      BNX2X_DOORBELL_PCI_BAR);
+ 		reg_off = (1 << BNX2X_DB_SHIFT) * (cid_num & 0x1FFFF);
+ 		ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 4);
++		if (!ep->qp.ctx_base)
++			return -ENOMEM;
+ 		goto arm_cq;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 7052a5d45f7f..78e5a9254143 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -277,6 +277,7 @@ struct hisi_hba {
+ 
+ 	int n_phy;
+ 	spinlock_t lock;
++	struct semaphore sem;
+ 
+ 	struct timer_list timer;
+ 	struct workqueue_struct *wq;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6f562974f8f6..bfbd2fb7e69e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -914,7 +914,9 @@ static void hisi_sas_dev_gone(struct domain_device *device)
+ 
+ 		hisi_sas_dereg_device(hisi_hba, device);
+ 
++		down(&hisi_hba->sem);
+ 		hisi_hba->hw->clear_itct(hisi_hba, sas_dev);
++		up(&hisi_hba->sem);
+ 		device->lldd_dev = NULL;
+ 	}
+ 
+@@ -1364,6 +1366,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+ 		return -1;
+ 
++	down(&hisi_hba->sem);
+ 	dev_info(dev, "controller resetting...\n");
+ 	old_state = hisi_hba->hw->get_phys_state(hisi_hba);
+ 
+@@ -1378,6 +1381,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (rc) {
+ 		dev_warn(dev, "controller reset failed (%d)\n", rc);
+ 		clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
++		up(&hisi_hba->sem);
+ 		scsi_unblock_requests(shost);
+ 		goto out;
+ 	}
+@@ -1388,6 +1392,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	hisi_hba->hw->phys_init(hisi_hba);
+ 	msleep(1000);
+ 	hisi_sas_refresh_port_id(hisi_hba);
++	up(&hisi_hba->sem);
+ 
+ 	if (hisi_hba->reject_stp_links_msk)
+ 		hisi_sas_terminate_stp_reject(hisi_hba);
+@@ -2016,6 +2021,7 @@ int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
+ 	struct device *dev = hisi_hba->dev;
+ 	int i, s, max_command_entries = hisi_hba->hw->max_command_entries;
+ 
++	sema_init(&hisi_hba->sem, 1);
+ 	spin_lock_init(&hisi_hba->lock);
+ 	for (i = 0; i < hisi_hba->n_phy; i++) {
+ 		hisi_sas_phy_init(hisi_hba, i);
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 17df76f0be3c..67a2c844e30d 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -93,7 +93,7 @@ static int max_requests = IBMVSCSI_MAX_REQUESTS_DEFAULT;
+ static int max_events = IBMVSCSI_MAX_REQUESTS_DEFAULT + 2;
+ static int fast_fail = 1;
+ static int client_reserve = 1;
+-static char partition_name[97] = "UNKNOWN";
++static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
+ 
+@@ -262,7 +262,7 @@ static void gather_partition_info(void)
+ 
+ 	ppartition_name = of_get_property(of_root, "ibm,partition-name", NULL);
+ 	if (ppartition_name)
+-		strncpy(partition_name, ppartition_name,
++		strlcpy(partition_name, ppartition_name,
+ 				sizeof(partition_name));
+ 	p_number_ptr = of_get_property(of_root, "ibm,partition-no", NULL);
+ 	if (p_number_ptr)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 71d97573a667..8e84e3fb648a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6789,6 +6789,9 @@ megasas_resume(struct pci_dev *pdev)
+ 			goto fail_init_mfi;
+ 	}
+ 
++	if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS)
++		goto fail_init_mfi;
++
+ 	tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
+ 		     (unsigned long)instance);
+ 
+diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
+index 16590dfaafa4..cef307c0399c 100644
+--- a/drivers/siox/siox-core.c
++++ b/drivers/siox/siox-core.c
+@@ -715,17 +715,17 @@ int siox_master_register(struct siox_master *smaster)
+ 
+ 	dev_set_name(&smaster->dev, "siox-%d", smaster->busno);
+ 
++	mutex_init(&smaster->lock);
++	INIT_LIST_HEAD(&smaster->devices);
++
+ 	smaster->last_poll = jiffies;
+-	smaster->poll_thread = kthread_create(siox_poll_thread, smaster,
+-					      "siox-%d", smaster->busno);
++	smaster->poll_thread = kthread_run(siox_poll_thread, smaster,
++					   "siox-%d", smaster->busno);
+ 	if (IS_ERR(smaster->poll_thread)) {
+ 		smaster->active = 0;
+ 		return PTR_ERR(smaster->poll_thread);
+ 	}
+ 
+-	mutex_init(&smaster->lock);
+-	INIT_LIST_HEAD(&smaster->devices);
+-
+ 	ret = device_add(&smaster->dev);
+ 	if (ret)
+ 		kthread_stop(smaster->poll_thread);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index d01a6adc726e..47ef6b1a2e76 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -20,6 +20,7 @@
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_gpio.h>
+ #include <linux/clk.h>
+ #include <linux/sizes.h>
+ #include <linux/gpio.h>
+@@ -681,9 +682,9 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 		goto out_rel_axi_clk;
+ 	}
+ 
+-	/* Scan all SPI devices of this controller for direct mapped devices */
+ 	for_each_available_child_of_node(pdev->dev.of_node, np) {
+ 		u32 cs;
++		int cs_gpio;
+ 
+ 		/* Get chip-select number from the "reg" property */
+ 		status = of_property_read_u32(np, "reg", &cs);
+@@ -694,6 +695,44 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
++		/*
++		 * Initialize the CS GPIO:
++		 * - properly request the actual GPIO signal
++		 * - de-assert the logical signal so that all GPIO CS lines
++		 *   are inactive when probing for slaves
++		 * - find an unused physical CS which will be driven for any
++		 *   slave which uses a CS GPIO
++		 */
++		cs_gpio = of_get_named_gpio(pdev->dev.of_node, "cs-gpios", cs);
++		if (cs_gpio > 0) {
++			char *gpio_name;
++			int cs_flags;
++
++			if (spi->unused_hw_gpio == -1) {
++				dev_info(&pdev->dev,
++					"Selected unused HW CS#%d for any GPIO CSes\n",
++					cs);
++				spi->unused_hw_gpio = cs;
++			}
++
++			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++					"%s-CS%d", dev_name(&pdev->dev), cs);
++			if (!gpio_name) {
++				status = -ENOMEM;
++				goto out_rel_axi_clk;
++			}
++
++			cs_flags = of_property_read_bool(np, "spi-cs-high") ?
++				GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH;
++			status = devm_gpio_request_one(&pdev->dev, cs_gpio,
++					cs_flags, gpio_name);
++			if (status) {
++				dev_err(&pdev->dev,
++					"Can't request GPIO for CS %d\n", cs);
++				goto out_rel_axi_clk;
++			}
++		}
++
+ 		/*
+ 		 * Check if an address is configured for this SPI device. If
+ 		 * not, the MBus mapping via the 'ranges' property in the 'soc'
+@@ -740,44 +779,8 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 	if (status < 0)
+ 		goto out_rel_pm;
+ 
+-	if (master->cs_gpios) {
+-		int i;
+-		for (i = 0; i < master->num_chipselect; ++i) {
+-			char *gpio_name;
+-
+-			if (!gpio_is_valid(master->cs_gpios[i])) {
+-				continue;
+-			}
+-
+-			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+-					"%s-CS%d", dev_name(&pdev->dev), i);
+-			if (!gpio_name) {
+-				status = -ENOMEM;
+-				goto out_rel_master;
+-			}
+-
+-			status = devm_gpio_request(&pdev->dev,
+-					master->cs_gpios[i], gpio_name);
+-			if (status) {
+-				dev_err(&pdev->dev,
+-					"Can't request GPIO for CS %d\n",
+-					master->cs_gpios[i]);
+-				goto out_rel_master;
+-			}
+-			if (spi->unused_hw_gpio == -1) {
+-				dev_info(&pdev->dev,
+-					"Selected unused HW CS#%d for any GPIO CSes\n",
+-					i);
+-				spi->unused_hw_gpio = i;
+-			}
+-		}
+-	}
+-
+-
+ 	return status;
+ 
+-out_rel_master:
+-	spi_unregister_master(master);
+ out_rel_pm:
+ 	pm_runtime_disable(&pdev->dev);
+ out_rel_axi_clk:
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index 95dc4d78618d..b37de1d991d6 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -598,11 +598,13 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
+ 
+ 	ret = wait_event_interruptible_timeout(rspi->wait,
+ 					       rspi->dma_callbacked, HZ);
+-	if (ret > 0 && rspi->dma_callbacked)
++	if (ret > 0 && rspi->dma_callbacked) {
+ 		ret = 0;
+-	else if (!ret) {
+-		dev_err(&rspi->master->dev, "DMA timeout\n");
+-		ret = -ETIMEDOUT;
++	} else {
++		if (!ret) {
++			dev_err(&rspi->master->dev, "DMA timeout\n");
++			ret = -ETIMEDOUT;
++		}
+ 		if (tx)
+ 			dmaengine_terminate_all(rspi->master->dma_tx);
+ 		if (rx)
+@@ -1350,12 +1352,36 @@ static const struct platform_device_id spi_driver_ids[] = {
+ 
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int rspi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(rspi->master);
++}
++
++static int rspi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_resume(rspi->master);
++}
++
++static SIMPLE_DEV_PM_OPS(rspi_pm_ops, rspi_suspend, rspi_resume);
++#define DEV_PM_OPS	&rspi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver rspi_driver = {
+ 	.probe =	rspi_probe,
+ 	.remove =	rspi_remove,
+ 	.id_table =	spi_driver_ids,
+ 	.driver		= {
+ 		.name = "renesas_spi",
++		.pm = DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(rspi_of_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 0e74cbf9929d..37364c634fef 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -396,7 +396,8 @@ static void sh_msiof_spi_set_mode_regs(struct sh_msiof_spi_priv *p,
+ 
+ static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p)
+ {
+-	sh_msiof_write(p, STR, sh_msiof_read(p, STR));
++	sh_msiof_write(p, STR,
++		       sh_msiof_read(p, STR) & ~(STR_TDREQ | STR_RDREQ));
+ }
+ 
+ static void sh_msiof_spi_write_fifo_8(struct sh_msiof_spi_priv *p,
+@@ -1421,12 +1422,37 @@ static const struct platform_device_id spi_driver_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int sh_msiof_spi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(p->master);
++}
++
++static int sh_msiof_spi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_resume(p->master);
++}
++
++static SIMPLE_DEV_PM_OPS(sh_msiof_spi_pm_ops, sh_msiof_spi_suspend,
++			 sh_msiof_spi_resume);
++#define DEV_PM_OPS	&sh_msiof_spi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver sh_msiof_spi_drv = {
+ 	.probe		= sh_msiof_spi_probe,
+ 	.remove		= sh_msiof_spi_remove,
+ 	.id_table	= spi_driver_ids,
+ 	.driver		= {
+ 		.name		= "spi_sh_msiof",
++		.pm		= DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(sh_msiof_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 6f7b946b5ced..1427f343b39a 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1063,6 +1063,24 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 		goto exit_free_master;
+ 	}
+ 
++	/* disabled clock may cause interrupt storm upon request */
++	tspi->clk = devm_clk_get(&pdev->dev, NULL);
++	if (IS_ERR(tspi->clk)) {
++		ret = PTR_ERR(tspi->clk);
++		dev_err(&pdev->dev, "Can not get clock %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_prepare(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_enable(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock enable failed %d\n", ret);
++		goto exit_free_master;
++	}
++
+ 	spi_irq = platform_get_irq(pdev, 0);
+ 	tspi->irq = spi_irq;
+ 	ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
+@@ -1071,14 +1089,7 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+ 					tspi->irq);
+-		goto exit_free_master;
+-	}
+-
+-	tspi->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(tspi->clk)) {
+-		dev_err(&pdev->dev, "can not get clock\n");
+-		ret = PTR_ERR(tspi->clk);
+-		goto exit_free_irq;
++		goto exit_clk_disable;
+ 	}
+ 
+ 	tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+@@ -1138,6 +1149,8 @@ exit_rx_dma_free:
+ 	tegra_slink_deinit_dma_param(tspi, true);
+ exit_free_irq:
+ 	free_irq(spi_irq, tspi);
++exit_clk_disable:
++	clk_disable(tspi->clk);
+ exit_free_master:
+ 	spi_master_put(master);
+ 	return ret;
+@@ -1150,6 +1163,8 @@ static int tegra_slink_remove(struct platform_device *pdev)
+ 
+ 	free_irq(tspi->irq, tspi);
+ 
++	clk_disable(tspi->clk);
++
+ 	if (tspi->tx_dma_chan)
+ 		tegra_slink_deinit_dma_param(tspi, false);
+ 
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index d5d33e12e952..716573c21579 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -366,6 +366,12 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ 		goto out;
+ 	}
+ 
++	/* requested mapping size larger than object size */
++	if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	/* requested protection bits must match our allowed protection mask */
+ 	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask, 0)) &
+ 		     calc_vm_prot_bits(PROT_MASK, 0))) {
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index ae453fd422f0..ffeb017c73b2 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -210,6 +210,7 @@ static void prp_vb2_buf_done(struct prp_priv *priv, struct ipuv3_channel *ch)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 95d7805f3485..0e963c24af37 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -236,6 +236,7 @@ static void csi_vb2_buf_done(struct csi_priv *priv)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index 6b13d85d9d34..87555600195f 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -113,6 +113,8 @@
+ };
+ 
+ &pcie {
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie_pins>;
+ 	status = "okay";
+ };
+ 
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index eb3966b7f033..ce6b43639079 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -447,31 +447,28 @@
+ 		clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
+ 		clock-names = "pcie0", "pcie1", "pcie2";
+ 
+-		pcie0 {
++		pcie@0,0 {
+ 			reg = <0x0000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie1 {
++		pcie@1,0 {
+ 			reg = <0x0800 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie2 {
++		pcie@2,0 {
+ 			reg = <0x1000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 	};
+ };
+diff --git a/drivers/staging/mt7621-eth/mtk_eth_soc.c b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+index 2c7a2e666bfb..381d9d270bf5 100644
+--- a/drivers/staging/mt7621-eth/mtk_eth_soc.c
++++ b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+@@ -2012,8 +2012,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		mac->hw_stats = devm_kzalloc(eth->dev,
+ 					     sizeof(*mac->hw_stats),
+ 					     GFP_KERNEL);
+-		if (!mac->hw_stats)
+-			return -ENOMEM;
++		if (!mac->hw_stats) {
++			err = -ENOMEM;
++			goto free_netdev;
++		}
+ 		spin_lock_init(&mac->hw_stats->stats_lock);
+ 		mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
+ 	}
+@@ -2037,7 +2039,8 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 	err = register_netdev(eth->netdev[id]);
+ 	if (err) {
+ 		dev_err(eth->dev, "error bringing up device\n");
+-		return err;
++		err = -ENOMEM;
++		goto free_netdev;
+ 	}
+ 	eth->netdev[id]->irq = eth->irq;
+ 	netif_info(eth, probe, eth->netdev[id],
+@@ -2045,6 +2048,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		   eth->netdev[id]->base_addr, eth->netdev[id]->irq);
+ 
+ 	return 0;
++
++free_netdev:
++	free_netdev(eth->netdev[id]);
++	return err;
+ }
+ 
+ static int mtk_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c
+index b061f77dda41..94e0bfcec991 100644
+--- a/drivers/staging/pi433/pi433_if.c
++++ b/drivers/staging/pi433/pi433_if.c
+@@ -880,6 +880,7 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	int			retval = 0;
+ 	struct pi433_instance	*instance;
+ 	struct pi433_device	*device;
++	struct pi433_tx_cfg	tx_cfg;
+ 	void __user *argp = (void __user *)arg;
+ 
+ 	/* Check type and command number */
+@@ -902,9 +903,11 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 			return -EFAULT;
+ 		break;
+ 	case PI433_IOC_WR_TX_CFG:
+-		if (copy_from_user(&instance->tx_cfg, argp,
+-				   sizeof(struct pi433_tx_cfg)))
++		if (copy_from_user(&tx_cfg, argp, sizeof(struct pi433_tx_cfg)))
+ 			return -EFAULT;
++		mutex_lock(&device->tx_fifo_lock);
++		memcpy(&instance->tx_cfg, &tx_cfg, sizeof(struct pi433_tx_cfg));
++		mutex_unlock(&device->tx_fifo_lock);
+ 		break;
+ 	case PI433_IOC_RD_RX_CFG:
+ 		if (copy_to_user(argp, &device->rx_cfg,
+diff --git a/drivers/staging/rts5208/sd.c b/drivers/staging/rts5208/sd.c
+index d548bc695f9e..0421dd9277a8 100644
+--- a/drivers/staging/rts5208/sd.c
++++ b/drivers/staging/rts5208/sd.c
+@@ -4996,7 +4996,7 @@ int sd_execute_write_data(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 			goto sd_execute_write_cmd_failed;
+ 		}
+ 
+-		rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
++		retval = rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
+ 		if (retval != STATUS_SUCCESS) {
+ 			rtsx_trace(chip);
+ 			goto sd_execute_write_cmd_failed;
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 4b34f71547c6..101d62105c93 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -636,8 +636,7 @@ int iscsit_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+ 		none = strstr(buf1, NONE);
+ 		if (none)
+ 			goto out;
+-		strncat(buf1, ",", strlen(","));
+-		strncat(buf1, NONE, strlen(NONE));
++		strlcat(buf1, "," NONE, sizeof(buf1));
+ 		if (iscsi_update_param_value(param, buf1) < 0)
+ 			return -EINVAL;
+ 	}
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index e27db4d45a9d..06c9886e556c 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -904,14 +904,20 @@ struct se_device *target_find_device(int id, bool do_depend)
+ EXPORT_SYMBOL(target_find_device);
+ 
+ struct devices_idr_iter {
++	struct config_item *prev_item;
+ 	int (*fn)(struct se_device *dev, void *data);
+ 	void *data;
+ };
+ 
+ static int target_devices_idr_iter(int id, void *p, void *data)
++	 __must_hold(&device_mutex)
+ {
+ 	struct devices_idr_iter *iter = data;
+ 	struct se_device *dev = p;
++	int ret;
++
++	config_item_put(iter->prev_item);
++	iter->prev_item = NULL;
+ 
+ 	/*
+ 	 * We add the device early to the idr, so it can be used
+@@ -922,7 +928,15 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ 	if (!(dev->dev_flags & DF_CONFIGURED))
+ 		return 0;
+ 
+-	return iter->fn(dev, iter->data);
++	iter->prev_item = config_item_get_unless_zero(&dev->dev_group.cg_item);
++	if (!iter->prev_item)
++		return 0;
++	mutex_unlock(&device_mutex);
++
++	ret = iter->fn(dev, iter->data);
++
++	mutex_lock(&device_mutex);
++	return ret;
+ }
+ 
+ /**
+@@ -936,15 +950,13 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
+ 			   void *data)
+ {
+-	struct devices_idr_iter iter;
++	struct devices_idr_iter iter = { .fn = fn, .data = data };
+ 	int ret;
+ 
+-	iter.fn = fn;
+-	iter.data = data;
+-
+ 	mutex_lock(&device_mutex);
+ 	ret = idr_for_each(&devices_idr, target_devices_idr_iter, &iter);
+ 	mutex_unlock(&device_mutex);
++	config_item_put(iter.prev_item);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 334d98be03b9..b1f82d64253e 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -604,7 +604,10 @@ static int imx_init_from_nvmem_cells(struct platform_device *pdev)
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "calib", &val);
+ 	if (ret)
+ 		return ret;
+-	imx_init_calib(pdev, val);
++
++	ret = imx_init_calib(pdev, val);
++	if (ret)
++		return ret;
+ 
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "temp_grade", &val);
+ 	if (ret)
+diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c
+index 977a8307fbb1..4f2816559205 100644
+--- a/drivers/thermal/of-thermal.c
++++ b/drivers/thermal/of-thermal.c
+@@ -260,10 +260,13 @@ static int of_thermal_set_mode(struct thermal_zone_device *tz,
+ 
+ 	mutex_lock(&tz->lock);
+ 
+-	if (mode == THERMAL_DEVICE_ENABLED)
++	if (mode == THERMAL_DEVICE_ENABLED) {
+ 		tz->polling_delay = data->polling_delay;
+-	else
++		tz->passive_delay = data->passive_delay;
++	} else {
+ 		tz->polling_delay = 0;
++		tz->passive_delay = 0;
++	}
+ 
+ 	mutex_unlock(&tz->lock);
+ 
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 9963a766dcfb..c8186a05a453 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -638,8 +638,10 @@ static int serial_config(struct pcmcia_device *link)
+ 	    (link->has_func_id) &&
+ 	    (link->socket->pcmcia_pfc == 0) &&
+ 	    ((link->func_id == CISTPL_FUNCID_MULTI) ||
+-	     (link->func_id == CISTPL_FUNCID_SERIAL)))
+-		pcmcia_loop_config(link, serial_check_for_multi, info);
++	     (link->func_id == CISTPL_FUNCID_SERIAL))) {
++		if (pcmcia_loop_config(link, serial_check_for_multi, info))
++			goto failed;
++	}
+ 
+ 	/*
+ 	 * Apply any multi-port quirk.
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index 24a5f05e769b..e5389591bb4f 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1054,8 +1054,8 @@ static int poll_wait_key(char *obuf, struct uart_cpm_port *pinfo)
+ 	/* Get the address of the host memory buffer.
+ 	 */
+ 	bdp = pinfo->rx_cur;
+-	while (bdp->cbd_sc & BD_SC_EMPTY)
+-		;
++	if (bdp->cbd_sc & BD_SC_EMPTY)
++		return NO_POLL_CHAR;
+ 
+ 	/* If the buffer address is in the CPM DPRAM, don't
+ 	 * convert it.
+@@ -1090,7 +1090,11 @@ static int cpm_get_poll_char(struct uart_port *port)
+ 		poll_chars = 0;
+ 	}
+ 	if (poll_chars <= 0) {
+-		poll_chars = poll_wait_key(poll_buf, pinfo);
++		int ret = poll_wait_key(poll_buf, pinfo);
++
++		if (ret == NO_POLL_CHAR)
++			return ret;
++		poll_chars = ret;
+ 		pollp = poll_buf;
+ 	}
+ 	poll_chars--;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 51e47a63d61a..3f8d1274fc85 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -979,7 +979,8 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ 	struct circ_buf *ring = &sport->rx_ring;
+ 	int ret, nent;
+ 	int bits, baud;
+-	struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port);
++	struct tty_port *port = &sport->port.state->port;
++	struct tty_struct *tty = port->tty;
+ 	struct ktermios *termios = &tty->termios;
+ 
+ 	baud = tty_get_baud_rate(tty);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 4e853570ea80..554a69db1bca 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2350,6 +2350,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 				ret);
+ 			return ret;
+ 		}
++
++		ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
++				       dev_name(&pdev->dev), sport);
++		if (ret) {
++			dev_err(&pdev->dev, "failed to request rts irq: %d\n",
++				ret);
++			return ret;
++		}
+ 	} else {
+ 		ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ 				       dev_name(&pdev->dev), sport);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index d04b5eeea3c6..170e446a2f62 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -511,6 +511,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR);
+ 		termios->c_cflag &= CREAD | CBAUD;
+ 		termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD);
++		termios->c_cflag |= CS8;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+diff --git a/drivers/tty/serial/pxa.c b/drivers/tty/serial/pxa.c
+index eda3c7710d6a..4932b674f7ef 100644
+--- a/drivers/tty/serial/pxa.c
++++ b/drivers/tty/serial/pxa.c
+@@ -887,7 +887,8 @@ static int serial_pxa_probe(struct platform_device *dev)
+ 		goto err_clk;
+ 	if (sport->port.line >= ARRAY_SIZE(serial_pxa_ports)) {
+ 		dev_err(&dev->dev, "serial%d out of range\n", sport->port.line);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_clk;
+ 	}
+ 	snprintf(sport->name, PXA_NAME_LEN - 1, "UART%d", sport->port.line + 1);
+ 
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index c181eb37f985..3c55600a8236 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2099,6 +2099,8 @@ static void sci_shutdown(struct uart_port *port)
+ 	}
+ #endif
+ 
++	if (s->rx_trigger > 1 && s->rx_fifo_timeout > 0)
++		del_timer_sync(&s->rx_fifo_timer);
+ 	sci_free_irq(s);
+ 	sci_free_dma(port);
+ }
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 632a2bfabc08..a0d284ef3f40 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
++	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/common/roles.c b/drivers/usb/common/roles.c
+index 15cc76e22123..99116af07f1d 100644
+--- a/drivers/usb/common/roles.c
++++ b/drivers/usb/common/roles.c
+@@ -109,8 +109,15 @@ static void *usb_role_switch_match(struct device_connection *con, int ep,
+  */
+ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ {
+-	return device_connection_find_match(dev, "usb-role-switch", NULL,
+-					    usb_role_switch_match);
++	struct usb_role_switch *sw;
++
++	sw = device_connection_find_match(dev, "usb-role-switch", NULL,
++					  usb_role_switch_match);
++
++	if (!IS_ERR_OR_NULL(sw))
++		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++
++	return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+ 
+@@ -122,8 +129,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+  */
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+-	if (!IS_ERR_OR_NULL(sw))
++	if (!IS_ERR_OR_NULL(sw)) {
+ 		put_device(&sw->dev);
++		module_put(sw->dev.parent->driver->owner);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_put);
+ 
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 476dcc5f2da3..e1e0c90ce569 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1433,10 +1433,13 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	struct async *as = NULL;
+ 	struct usb_ctrlrequest *dr = NULL;
+ 	unsigned int u, totlen, isofrmlen;
+-	int i, ret, is_in, num_sgs = 0, ifnum = -1;
++	int i, ret, num_sgs = 0, ifnum = -1;
+ 	int number_of_packets = 0;
+ 	unsigned int stream_id = 0;
+ 	void *buf;
++	bool is_in;
++	bool allow_short = false;
++	bool allow_zero = false;
+ 	unsigned long mask =	USBDEVFS_URB_SHORT_NOT_OK |
+ 				USBDEVFS_URB_BULK_CONTINUATION |
+ 				USBDEVFS_URB_NO_FSBR |
+@@ -1470,6 +1473,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
++		if (is_in)
++			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1510,6 +1515,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_BULK:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		switch (usb_endpoint_type(&ep->desc)) {
+ 		case USB_ENDPOINT_XFER_CONTROL:
+ 		case USB_ENDPOINT_XFER_ISOC:
+@@ -1530,6 +1539,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		if (!usb_endpoint_xfer_int(&ep->desc))
+ 			return -EINVAL;
+  interrupt_urb:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_ISO:
+@@ -1675,14 +1688,19 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = (is_in ? URB_DIR_IN : URB_DIR_OUT);
+ 	if (uurb->flags & USBDEVFS_URB_ISO_ASAP)
+ 		u |= URB_ISO_ASAP;
+-	if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in)
++	if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
+ 		u |= URB_SHORT_NOT_OK;
+-	if (uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++	if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
+ 		u |= URB_ZERO_PACKET;
+ 	if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT)
+ 		u |= URB_NO_INTERRUPT;
+ 	as->urb->transfer_flags = u;
+ 
++	if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n");
++	if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n");
++
+ 	as->urb->transfer_buffer_length = uurb->buffer_length;
+ 	as->urb->setup_packet = (unsigned char *)dr;
+ 	dr = NULL;
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index e76e95f62f76..a1f225f077cd 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -512,7 +512,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	struct device *dev;
+ 	struct usb_device *udev;
+ 	int retval = 0;
+-	int lpm_disable_error = -ENODEV;
+ 
+ 	if (!iface)
+ 		return -ENODEV;
+@@ -533,16 +532,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 
+ 	iface->condition = USB_INTERFACE_BOUND;
+ 
+-	/* See the comment about disabling LPM in usb_probe_interface(). */
+-	if (driver->disable_hub_initiated_lpm) {
+-		lpm_disable_error = usb_unlocked_disable_lpm(udev);
+-		if (lpm_disable_error) {
+-			dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n",
+-				__func__, driver->name);
+-			return -ENOMEM;
+-		}
+-	}
+-
+ 	/* Claimed interfaces are initially inactive (suspended) and
+ 	 * runtime-PM-enabled, but only if the driver has autosuspend
+ 	 * support.  Otherwise they are marked active, to prevent the
+@@ -561,9 +550,20 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	if (device_is_registered(dev))
+ 		retval = device_bind_driver(dev);
+ 
+-	/* Attempt to re-enable USB3 LPM, if the disable was successful. */
+-	if (!lpm_disable_error)
+-		usb_unlocked_enable_lpm(udev);
++	if (retval) {
++		dev->driver = NULL;
++		usb_set_intfdata(iface, NULL);
++		iface->needs_remote_wakeup = 0;
++		iface->condition = USB_INTERFACE_UNBOUND;
++
++		/*
++		 * Unbound interfaces are always runtime-PM-disabled
++		 * and runtime-PM-suspended
++		 */
++		if (driver->supports_autosuspend)
++			pm_runtime_disable(dev);
++		pm_runtime_set_suspended(dev);
++	}
+ 
+ 	return retval;
+ }
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e77dfe5ed5ec..178d6c6063c0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -58,6 +58,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ 	quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry),
+ 			     GFP_KERNEL);
+ 	if (!quirk_list) {
++		quirk_count = 0;
+ 		mutex_unlock(&quirk_mutex);
+ 		return -ENOMEM;
+ 	}
+@@ -154,7 +155,7 @@ static struct kparam_string quirks_param_string = {
+ 	.string = quirks_param,
+ };
+ 
+-module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
++device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
+ MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks");
+ 
+ /* Lists of quirky USB devices, split in device quirks and interface quirks.
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 623be3174fb3..79d8bd7a612e 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -228,6 +228,8 @@ struct usb_host_interface *usb_find_alt_setting(
+ 	struct usb_interface_cache *intf_cache = NULL;
+ 	int i;
+ 
++	if (!config)
++		return NULL;
+ 	for (i = 0; i < config->desc.bNumInterfaces; i++) {
+ 		if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber
+ 				== iface_num) {
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index fb871eabcc10..a129d601a0c3 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -658,16 +658,6 @@ dsps_dma_controller_create(struct musb *musb, void __iomem *base)
+ 	return controller;
+ }
+ 
+-static void dsps_dma_controller_destroy(struct dma_controller *c)
+-{
+-	struct musb *musb = c->musb;
+-	struct dsps_glue *glue = dev_get_drvdata(musb->controller->parent);
+-	void __iomem *usbss_base = glue->usbss_base;
+-
+-	musb_writel(usbss_base, USBSS_IRQ_CLEARR, USBSS_IRQ_PD_COMP);
+-	cppi41_dma_controller_destroy(c);
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static void dsps_dma_controller_suspend(struct dsps_glue *glue)
+ {
+@@ -697,7 +687,7 @@ static struct musb_platform_ops dsps_ops = {
+ 
+ #ifdef CONFIG_USB_TI_CPPI41_DMA
+ 	.dma_init	= dsps_dma_controller_create,
+-	.dma_exit	= dsps_dma_controller_destroy,
++	.dma_exit	= cppi41_dma_controller_destroy,
+ #endif
+ 	.enable		= dsps_musb_enable,
+ 	.disable	= dsps_musb_disable,
+diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c
+index a31ea7e194dd..a6ebed1e0f20 100644
+--- a/drivers/usb/serial/kobil_sct.c
++++ b/drivers/usb/serial/kobil_sct.c
+@@ -393,12 +393,20 @@ static int kobil_tiocmget(struct tty_struct *tty)
+ 			  transfer_buffer_length,
+ 			  KOBIL_TIMEOUT);
+ 
+-	dev_dbg(&port->dev, "%s - Send get_status_line_state URB returns: %i. Statusline: %02x\n",
+-		__func__, result, transfer_buffer[0]);
++	dev_dbg(&port->dev, "Send get_status_line_state URB returns: %i\n",
++			result);
++	if (result < 1) {
++		if (result >= 0)
++			result = -EIO;
++		goto out_free;
++	}
++
++	dev_dbg(&port->dev, "Statusline: %02x\n", transfer_buffer[0]);
+ 
+ 	result = 0;
+ 	if ((transfer_buffer[0] & SUSBCR_GSL_DSR) != 0)
+ 		result = TIOCM_DSR;
++out_free:
+ 	kfree(transfer_buffer);
+ 	return result;
+ }
+diff --git a/drivers/usb/wusbcore/security.c b/drivers/usb/wusbcore/security.c
+index 33d2f5d7f33b..14ac8c98ac9e 100644
+--- a/drivers/usb/wusbcore/security.c
++++ b/drivers/usb/wusbcore/security.c
+@@ -217,7 +217,7 @@ int wusb_dev_sec_add(struct wusbhc *wusbhc,
+ 
+ 	result = usb_get_descriptor(usb_dev, USB_DT_SECURITY,
+ 				    0, secd, sizeof(*secd));
+-	if (result < sizeof(*secd)) {
++	if (result < (int)sizeof(*secd)) {
+ 		dev_err(dev, "Can't read security descriptor or "
+ 			"not enough data: %d\n", result);
+ 		goto out;
+diff --git a/drivers/uwb/hwa-rc.c b/drivers/uwb/hwa-rc.c
+index 9a53912bdfe9..5d3ba747ae17 100644
+--- a/drivers/uwb/hwa-rc.c
++++ b/drivers/uwb/hwa-rc.c
+@@ -873,6 +873,7 @@ error_get_version:
+ error_rc_add:
+ 	usb_put_intf(iface);
+ 	usb_put_dev(hwarc->usb_dev);
++	kfree(hwarc);
+ error_alloc:
+ 	uwb_rc_put(uwb_rc);
+ error_rc_alloc:
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 29756d88799b..6b86ca8772fb 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -396,13 +396,10 @@ static inline unsigned long busy_clock(void)
+ 	return local_clock() >> 10;
+ }
+ 
+-static bool vhost_can_busy_poll(struct vhost_dev *dev,
+-				unsigned long endtime)
++static bool vhost_can_busy_poll(unsigned long endtime)
+ {
+-	return likely(!need_resched()) &&
+-	       likely(!time_after(busy_clock(), endtime)) &&
+-	       likely(!signal_pending(current)) &&
+-	       !vhost_has_work(dev);
++	return likely(!need_resched() && !time_after(busy_clock(), endtime) &&
++		      !signal_pending(current));
+ }
+ 
+ static void vhost_net_disable_vq(struct vhost_net *n,
+@@ -434,7 +431,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
+ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 				    struct vhost_virtqueue *vq,
+ 				    struct iovec iov[], unsigned int iov_size,
+-				    unsigned int *out_num, unsigned int *in_num)
++				    unsigned int *out_num, unsigned int *in_num,
++				    bool *busyloop_intr)
+ {
+ 	unsigned long uninitialized_var(endtime);
+ 	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+@@ -443,9 +441,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 	if (r == vq->num && vq->busyloop_timeout) {
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+-		while (vhost_can_busy_poll(vq->dev, endtime) &&
+-		       vhost_vq_avail_empty(vq->dev, vq))
++		while (vhost_can_busy_poll(endtime)) {
++			if (vhost_has_work(vq->dev)) {
++				*busyloop_intr = true;
++				break;
++			}
++			if (!vhost_vq_avail_empty(vq->dev, vq))
++				break;
+ 			cpu_relax();
++		}
+ 		preempt_enable();
+ 		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ 				      out_num, in_num, NULL, NULL);
+@@ -501,20 +505,24 @@ static void handle_tx(struct vhost_net *net)
+ 	zcopy = nvq->ubufs;
+ 
+ 	for (;;) {
++		bool busyloop_intr;
++
+ 		/* Release DMAs done buffers first */
+ 		if (zcopy)
+ 			vhost_zerocopy_signal_used(net, vq);
+ 
+-
++		busyloop_intr = false;
+ 		head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
+ 						ARRAY_SIZE(vq->iov),
+-						&out, &in);
++						&out, &in, &busyloop_intr);
+ 		/* On error, stop handling until the next kick. */
+ 		if (unlikely(head < 0))
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+ 				vhost_disable_notify(&net->dev, vq);
+ 				continue;
+ 			}
+@@ -663,7 +671,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+ 
+-		while (vhost_can_busy_poll(&net->dev, endtime) &&
++		while (vhost_can_busy_poll(endtime) &&
++		       !vhost_has_work(&net->dev) &&
+ 		       !sk_has_rx_data(sk) &&
+ 		       vhost_vq_avail_empty(&net->dev, vq))
+ 			cpu_relax();
+diff --git a/fs/dax.c b/fs/dax.c
+index 641192808bb6..94f9fe002b12 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1007,21 +1007,12 @@ static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
+ {
+ 	struct inode *inode = mapping->host;
+ 	unsigned long vaddr = vmf->address;
+-	vm_fault_t ret = VM_FAULT_NOPAGE;
+-	struct page *zero_page;
+-	pfn_t pfn;
+-
+-	zero_page = ZERO_PAGE(0);
+-	if (unlikely(!zero_page)) {
+-		ret = VM_FAULT_OOM;
+-		goto out;
+-	}
++	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
++	vm_fault_t ret;
+ 
+-	pfn = page_to_pfn_t(zero_page);
+ 	dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE,
+ 			false);
+ 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
+-out:
+ 	trace_dax_load_hole(inode, vmf, ret);
+ 	return ret;
+ }
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 71635909df3b..b4e0501bcba1 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -1448,6 +1448,7 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 	}
+ 	inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext2_set_inode_flags(inode);
+ 	ei->i_faddr = le32_to_cpu(raw_inode->i_faddr);
+ 	ei->i_frag_no = raw_inode->i_frag;
+ 	ei->i_frag_size = raw_inode->i_fsize;
+@@ -1517,7 +1518,6 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 			   new_decode_dev(le32_to_cpu(raw_inode->i_block[1])));
+ 	}
+ 	brelse (bh);
+-	ext2_set_inode_flags(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+ 	
+diff --git a/fs/iomap.c b/fs/iomap.c
+index 0d0bd8845586..af6144fd4919 100644
+--- a/fs/iomap.c
++++ b/fs/iomap.c
+@@ -811,6 +811,7 @@ struct iomap_dio {
+ 	atomic_t		ref;
+ 	unsigned		flags;
+ 	int			error;
++	bool			wait_for_completion;
+ 
+ 	union {
+ 		/* used during submission and for synchronous completion: */
+@@ -914,9 +915,8 @@ static void iomap_dio_bio_end_io(struct bio *bio)
+ 		iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
+ 
+ 	if (atomic_dec_and_test(&dio->ref)) {
+-		if (is_sync_kiocb(dio->iocb)) {
++		if (dio->wait_for_completion) {
+ 			struct task_struct *waiter = dio->submit.waiter;
+-
+ 			WRITE_ONCE(dio->submit.waiter, NULL);
+ 			wake_up_process(waiter);
+ 		} else if (dio->flags & IOMAP_DIO_WRITE) {
+@@ -1131,13 +1131,12 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 	dio->end_io = end_io;
+ 	dio->error = 0;
+ 	dio->flags = 0;
++	dio->wait_for_completion = is_sync_kiocb(iocb);
+ 
+ 	dio->submit.iter = iter;
+-	if (is_sync_kiocb(iocb)) {
+-		dio->submit.waiter = current;
+-		dio->submit.cookie = BLK_QC_T_NONE;
+-		dio->submit.last_queue = NULL;
+-	}
++	dio->submit.waiter = current;
++	dio->submit.cookie = BLK_QC_T_NONE;
++	dio->submit.last_queue = NULL;
+ 
+ 	if (iov_iter_rw(iter) == READ) {
+ 		if (pos >= dio->i_size)
+@@ -1187,7 +1186,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio_warn_stale_pagecache(iocb->ki_filp);
+ 	ret = 0;
+ 
+-	if (iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
++	if (iov_iter_rw(iter) == WRITE && !dio->wait_for_completion &&
+ 	    !inode->i_sb->s_dio_done_wq) {
+ 		ret = sb_init_dio_done_wq(inode->i_sb);
+ 		if (ret < 0)
+@@ -1202,8 +1201,10 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 				iomap_dio_actor);
+ 		if (ret <= 0) {
+ 			/* magic error code to fall back to buffered I/O */
+-			if (ret == -ENOTBLK)
++			if (ret == -ENOTBLK) {
++				dio->wait_for_completion = true;
+ 				ret = 0;
++			}
+ 			break;
+ 		}
+ 		pos += ret;
+@@ -1224,7 +1225,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio->flags &= ~IOMAP_DIO_NEED_SYNC;
+ 
+ 	if (!atomic_dec_and_test(&dio->ref)) {
+-		if (!is_sync_kiocb(iocb))
++		if (!dio->wait_for_completion)
+ 			return -EIOCBQUEUED;
+ 
+ 		for (;;) {
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ec3fba7d492f..488a9e7f8f66 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -24,6 +24,7 @@
+ #include <linux/mpage.h>
+ #include <linux/user_namespace.h>
+ #include <linux/seq_file.h>
++#include <linux/blkdev.h>
+ 
+ #include "isofs.h"
+ #include "zisofs.h"
+@@ -653,6 +654,12 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ 	/*
+ 	 * What if bugger tells us to go beyond page size?
+ 	 */
++	if (bdev_logical_block_size(s->s_bdev) > 2048) {
++		printk(KERN_WARNING
++		       "ISOFS: unsupported/invalid hardware sector size %d\n",
++			bdev_logical_block_size(s->s_bdev));
++		goto out_freesbi;
++	}
+ 	opt.blocksize = sb_min_blocksize(s, opt.blocksize);
+ 
+ 	sbi->s_high_sierra = 0; /* default is iso9660 */
+diff --git a/fs/locks.c b/fs/locks.c
+index db7b6917d9c5..fafce5a8d74f 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2072,6 +2072,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+ 		return -1;
+ 	if (IS_REMOTELCK(fl))
+ 		return fl->fl_pid;
++	/*
++	 * If the flock owner process is dead and its pid has been already
++	 * freed, the translation below won't work, but we still want to show
++	 * flock owner pid number in init pidns.
++	 */
++	if (ns == &init_pid_ns)
++		return (pid_t)fl->fl_pid;
+ 
+ 	rcu_read_lock();
+ 	pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5d99e8810b85..0dded931f119 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1726,6 +1726,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 	if (status) {
+ 		op = &args->ops[0];
+ 		op->status = status;
++		resp->opcnt = 1;
+ 		goto encode_op;
+ 	}
+ 
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index ca1d2cc2cdfa..18863d56273c 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -199,47 +199,57 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ 
+ #define __declare_arg_0(a0, res)					\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
+ 	register unsigned long r1 asm("r1");				\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_1(a0, a1, res)					\
++	typeof(a1) __a1 = a1;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_2(a0, a1, a2, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_3(a0, a1, a2, a3, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
++	typeof(a3) __a3 = a3;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
+-	register typeof(a3)    r3 asm("r3") = a3
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
++	register unsigned long r3 asm("r3") = __a3
+ 
+ #define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
++	typeof(a4) __a4 = a4;						\
+ 	__declare_arg_3(a0, a1, a2, a3, res);				\
+-	register typeof(a4) r4 asm("r4") = a4
++	register unsigned long r4 asm("r4") = __a4
+ 
+ #define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
++	typeof(a5) __a5 = a5;						\
+ 	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
+-	register typeof(a5) r5 asm("r5") = a5
++	register unsigned long r5 asm("r5") = __a5
+ 
+ #define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
++	typeof(a6) __a6 = a6;						\
+ 	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
+-	register typeof(a6) r6 asm("r6") = a6
++	register unsigned long r6 asm("r6") = __a6
+ 
+ #define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
++	typeof(a7) __a7 = a7;						\
+ 	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
+-	register typeof(a7) r7 asm("r7") = a7
++	register unsigned long r7 asm("r7") = __a7
+ 
+ #define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+ #define __declare_args(count, ...)  ___declare_args(count, __VA_ARGS__)
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index cf2588d81148..147a7bb341dd 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -104,7 +104,7 @@
+ 		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+ 	})
+ 
+-extern void __compiletime_warning("value doesn't fit into mask")
++extern void __compiletime_error("value doesn't fit into mask")
+ __field_overflow(void);
+ extern void __compiletime_error("bad bitfield mask")
+ __bad_mask(void);
+@@ -121,8 +121,8 @@ static __always_inline u64 field_mask(u64 field)
+ #define ____MAKE_OP(type,base,to,from)					\
+ static __always_inline __##type type##_encode_bits(base v, base field)	\
+ {									\
+-        if (__builtin_constant_p(v) &&	(v & ~field_multiplier(field)))	\
+-			    __field_overflow();				\
++	if (__builtin_constant_p(v) && (v & ~field_mask(field)))	\
++		__field_overflow();					\
+ 	return to((v & field_mask(field)) * field_multiplier(field));	\
+ }									\
+ static __always_inline __##type type##_replace_bits(__##type old,	\
+diff --git a/include/linux/platform_data/ina2xx.h b/include/linux/platform_data/ina2xx.h
+index 9abc0ca7259b..9f0aa1b48c78 100644
+--- a/include/linux/platform_data/ina2xx.h
++++ b/include/linux/platform_data/ina2xx.h
+@@ -1,7 +1,7 @@
+ /*
+  * Driver for Texas Instruments INA219, INA226 power monitor chips
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index c85704fcdbd2..ee7e987ea1b4 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -95,8 +95,8 @@ struct k_itimer {
+ 	clockid_t		it_clock;
+ 	timer_t			it_id;
+ 	int			it_active;
+-	int			it_overrun;
+-	int			it_overrun_last;
++	s64			it_overrun;
++	s64			it_overrun_last;
+ 	int			it_requeue_pending;
+ 	int			it_sigev_notify;
+ 	ktime_t			it_interval;
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index b21c4bd96b84..f80769175c56 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -269,6 +269,7 @@ struct power_supply {
+ 	spinlock_t changed_lock;
+ 	bool changed;
+ 	bool initialized;
++	bool removing;
+ 	atomic_t use_cnt;
+ #ifdef CONFIG_THERMAL
+ 	struct thermal_zone_device *tzd;
+diff --git a/include/linux/regulator/machine.h b/include/linux/regulator/machine.h
+index 3468703d663a..a459a5e973a7 100644
+--- a/include/linux/regulator/machine.h
++++ b/include/linux/regulator/machine.h
+@@ -48,9 +48,9 @@ struct regulator;
+  * DISABLE_IN_SUSPEND	- turn off regulator in suspend states
+  * ENABLE_IN_SUSPEND	- keep regulator on in suspend states
+  */
+-#define DO_NOTHING_IN_SUSPEND	(-1)
+-#define DISABLE_IN_SUSPEND	0
+-#define ENABLE_IN_SUSPEND	1
++#define DO_NOTHING_IN_SUSPEND	0
++#define DISABLE_IN_SUSPEND	1
++#define ENABLE_IN_SUSPEND	2
+ 
+ /* Regulator active discharge flags */
+ enum regulator_active_discharge {
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 409c845d4cd3..422b1c01ee0d 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -172,7 +172,7 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
+ static __always_inline __must_check
+ size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	if (unlikely(!check_copy_size(addr, bytes, false)))
++	if (unlikely(!check_copy_size(addr, bytes, true)))
+ 		return 0;
+ 	else
+ 		return _copy_to_iter_mcsafe(addr, bytes, i);
+diff --git a/include/media/v4l2-fh.h b/include/media/v4l2-fh.h
+index ea73fef8bdc0..8586cfb49828 100644
+--- a/include/media/v4l2-fh.h
++++ b/include/media/v4l2-fh.h
+@@ -38,10 +38,13 @@ struct v4l2_ctrl_handler;
+  * @prio: priority of the file handler, as defined by &enum v4l2_priority
+  *
+  * @wait: event' s wait queue
++ * @subscribe_lock: serialise changes to the subscribed list; guarantee that
++ *		    the add and del event callbacks are orderly called
+  * @subscribed: list of subscribed events
+  * @available: list of events waiting to be dequeued
+  * @navailable: number of available events at @available list
+  * @sequence: event sequence number
++ *
+  * @m2m_ctx: pointer to &struct v4l2_m2m_ctx
+  */
+ struct v4l2_fh {
+@@ -52,6 +55,7 @@ struct v4l2_fh {
+ 
+ 	/* Events */
+ 	wait_queue_head_t	wait;
++	struct mutex		subscribe_lock;
+ 	struct list_head	subscribed;
+ 	struct list_head	available;
+ 	unsigned int		navailable;
+diff --git a/include/rdma/opa_addr.h b/include/rdma/opa_addr.h
+index 2bbb7a67e643..66d4393d339c 100644
+--- a/include/rdma/opa_addr.h
++++ b/include/rdma/opa_addr.h
+@@ -120,7 +120,7 @@ static inline bool rdma_is_valid_unicast_lid(struct rdma_ah_attr *attr)
+ 	if (attr->type == RDMA_AH_ATTR_TYPE_IB) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+ 		    rdma_ah_get_dlid(attr) >=
+-		    be32_to_cpu(IB_MULTICAST_LID_BASE))
++		    be16_to_cpu(IB_MULTICAST_LID_BASE))
+ 			return false;
+ 	} else if (attr->type == RDMA_AH_ATTR_TYPE_OPA) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index 58899601fccf..ed707b21d152 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -1430,12 +1430,15 @@ out:
+ static void smap_write_space(struct sock *sk)
+ {
+ 	struct smap_psock *psock;
++	void (*write_space)(struct sock *sk);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+ 	if (likely(psock && test_bit(SMAP_TX_RUNNING, &psock->state)))
+ 		schedule_work(&psock->tx_work);
++	write_space = psock->save_write_space;
+ 	rcu_read_unlock();
++	write_space(sk);
+ }
+ 
+ static void smap_stop_sock(struct smap_psock *psock, struct sock *sk)
+@@ -2143,7 +2146,9 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-EPERM);
+ 
+ 	/* check sanity of attributes */
+-	if (attr->max_entries == 0 || attr->value_size != 4 ||
++	if (attr->max_entries == 0 ||
++	    attr->key_size == 0 ||
++	    attr->value_size != 4 ||
+ 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -2270,8 +2275,10 @@ static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
+ 	}
+ 	l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+ 			     htab->map.numa_node);
+-	if (!l_new)
++	if (!l_new) {
++		atomic_dec(&htab->count);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	memcpy(l_new->key, key, key_size);
+ 	l_new->sk = sk;
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index 6e28d2866be5..314e2a9040c7 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -400,16 +400,35 @@ int dbg_release_bp_slot(struct perf_event *bp)
+ 	return 0;
+ }
+ 
+-static int validate_hw_breakpoint(struct perf_event *bp)
++#ifndef hw_breakpoint_arch_parse
++int hw_breakpoint_arch_parse(struct perf_event *bp,
++			     const struct perf_event_attr *attr,
++			     struct arch_hw_breakpoint *hw)
+ {
+-	int ret;
++	int err;
+ 
+-	ret = arch_validate_hwbkpt_settings(bp);
+-	if (ret)
+-		return ret;
++	err = arch_validate_hwbkpt_settings(bp);
++	if (err)
++		return err;
++
++	*hw = bp->hw.info;
++
++	return 0;
++}
++#endif
++
++static int hw_breakpoint_parse(struct perf_event *bp,
++			       const struct perf_event_attr *attr,
++			       struct arch_hw_breakpoint *hw)
++{
++	int err;
++
++	err = hw_breakpoint_arch_parse(bp, attr, hw);
++	if (err)
++		return err;
+ 
+ 	if (arch_check_bp_in_kernelspace(bp)) {
+-		if (bp->attr.exclude_kernel)
++		if (attr->exclude_kernel)
+ 			return -EINVAL;
+ 		/*
+ 		 * Don't let unprivileged users set a breakpoint in the trap
+@@ -424,19 +443,22 @@ static int validate_hw_breakpoint(struct perf_event *bp)
+ 
+ int register_perf_hw_breakpoint(struct perf_event *bp)
+ {
+-	int ret;
+-
+-	ret = reserve_bp_slot(bp);
+-	if (ret)
+-		return ret;
++	struct arch_hw_breakpoint hw;
++	int err;
+ 
+-	ret = validate_hw_breakpoint(bp);
++	err = reserve_bp_slot(bp);
++	if (err)
++		return err;
+ 
+-	/* if arch_validate_hwbkpt_settings() fails then release bp slot */
+-	if (ret)
++	err = hw_breakpoint_parse(bp, &bp->attr, &hw);
++	if (err) {
+ 		release_bp_slot(bp);
++		return err;
++	}
+ 
+-	return ret;
++	bp->hw.info = hw;
++
++	return 0;
+ }
+ 
+ /**
+@@ -464,6 +486,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	u64 old_len  = bp->attr.bp_len;
+ 	int old_type = bp->attr.bp_type;
+ 	bool modify  = attr->bp_type != old_type;
++	struct arch_hw_breakpoint hw;
+ 	int err = 0;
+ 
+ 	bp->attr.bp_addr = attr->bp_addr;
+@@ -473,7 +496,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	if (check && memcmp(&bp->attr, attr, sizeof(*attr)))
+ 		return -EINVAL;
+ 
+-	err = validate_hw_breakpoint(bp);
++	err = hw_breakpoint_parse(bp, attr, &hw);
+ 	if (!err && modify)
+ 		err = modify_bp_slot(bp, old_type);
+ 
+@@ -484,7 +507,9 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 		return err;
+ 	}
+ 
++	bp->hw.info = hw;
+ 	bp->attr.disabled = attr->disabled;
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/module.c b/kernel/module.c
+index f475f30eed8c..4a6b9c6d5f2c 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4067,7 +4067,7 @@ static unsigned long mod_find_symname(struct module *mod, const char *name)
+ 
+ 	for (i = 0; i < kallsyms->num_symtab; i++)
+ 		if (strcmp(name, symname(kallsyms, i)) == 0 &&
+-		    kallsyms->symtab[i].st_info != 'U')
++		    kallsyms->symtab[i].st_shndx != SHN_UNDEF)
+ 			return kallsyms->symtab[i].st_value;
+ 	return 0;
+ }
+@@ -4113,6 +4113,10 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 		if (mod->state == MODULE_STATE_UNFORMED)
+ 			continue;
+ 		for (i = 0; i < kallsyms->num_symtab; i++) {
++
++			if (kallsyms->symtab[i].st_shndx == SHN_UNDEF)
++				continue;
++
+ 			ret = fn(data, symname(kallsyms, i),
+ 				 mod, kallsyms->symtab[i].st_value);
+ 			if (ret != 0)
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 639321bf2e39..fa5de5e8de61 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -581,11 +581,11 @@ static void alarm_timer_rearm(struct k_itimer *timr)
+  * @timr:	Pointer to the posixtimer data struct
+  * @now:	Current time to forward the timer against
+  */
+-static int alarm_timer_forward(struct k_itimer *timr, ktime_t now)
++static s64 alarm_timer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
+ 
+-	return (int) alarm_forward(alarm, timr->it_interval, now);
++	return alarm_forward(alarm, timr->it_interval, now);
+ }
+ 
+ /**
+@@ -808,7 +808,8 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ 	/* Convert (if necessary) to absolute time */
+ 	if (flags != TIMER_ABSTIME) {
+ 		ktime_t now = alarm_bases[type].gettime();
+-		exp = ktime_add(now, exp);
++
++		exp = ktime_add_safe(now, exp);
+ 	}
+ 
+ 	ret = alarmtimer_do_nsleep(&alarm, exp, type);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 9cdf54b04ca8..294d7b65af33 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -85,7 +85,7 @@ static void bump_cpu_timer(struct k_itimer *timer, u64 now)
+ 			continue;
+ 
+ 		timer->it.cpu.expires += incr;
+-		timer->it_overrun += 1 << i;
++		timer->it_overrun += 1LL << i;
+ 		delta -= incr;
+ 	}
+ }
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index e08ce3f27447..e475012bff7e 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -283,6 +283,17 @@ static __init int init_posix_timers(void)
+ }
+ __initcall(init_posix_timers);
+ 
++/*
++ * The siginfo si_overrun field and the return value of timer_getoverrun(2)
++ * are of type int. Clamp the overrun value to INT_MAX
++ */
++static inline int timer_overrun_to_int(struct k_itimer *timr, int baseval)
++{
++	s64 sum = timr->it_overrun_last + (s64)baseval;
++
++	return sum > (s64)INT_MAX ? INT_MAX : (int)sum;
++}
++
+ static void common_hrtimer_rearm(struct k_itimer *timr)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+@@ -290,9 +301,8 @@ static void common_hrtimer_rearm(struct k_itimer *timr)
+ 	if (!timr->it_interval)
+ 		return;
+ 
+-	timr->it_overrun += (unsigned int) hrtimer_forward(timer,
+-						timer->base->get_time(),
+-						timr->it_interval);
++	timr->it_overrun += hrtimer_forward(timer, timer->base->get_time(),
++					    timr->it_interval);
+ 	hrtimer_restart(timer);
+ }
+ 
+@@ -321,10 +331,10 @@ void posixtimer_rearm(struct siginfo *info)
+ 
+ 		timr->it_active = 1;
+ 		timr->it_overrun_last = timr->it_overrun;
+-		timr->it_overrun = -1;
++		timr->it_overrun = -1LL;
+ 		++timr->it_requeue_pending;
+ 
+-		info->si_overrun += timr->it_overrun_last;
++		info->si_overrun = timer_overrun_to_int(timr, info->si_overrun);
+ 	}
+ 
+ 	unlock_timer(timr, flags);
+@@ -418,9 +428,8 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ 					now = ktime_add(now, kj);
+ 			}
+ #endif
+-			timr->it_overrun += (unsigned int)
+-				hrtimer_forward(timer, now,
+-						timr->it_interval);
++			timr->it_overrun += hrtimer_forward(timer, now,
++							    timr->it_interval);
+ 			ret = HRTIMER_RESTART;
+ 			++timr->it_requeue_pending;
+ 			timr->it_active = 1;
+@@ -524,7 +533,7 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
+ 	new_timer->it_id = (timer_t) new_timer_id;
+ 	new_timer->it_clock = which_clock;
+ 	new_timer->kclock = kc;
+-	new_timer->it_overrun = -1;
++	new_timer->it_overrun = -1LL;
+ 
+ 	if (event) {
+ 		rcu_read_lock();
+@@ -645,11 +654,11 @@ static ktime_t common_hrtimer_remaining(struct k_itimer *timr, ktime_t now)
+ 	return __hrtimer_expires_remaining_adjusted(timer, now);
+ }
+ 
+-static int common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
++static s64 common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+ 
+-	return (int)hrtimer_forward(timer, now, timr->it_interval);
++	return hrtimer_forward(timer, now, timr->it_interval);
+ }
+ 
+ /*
+@@ -789,7 +798,7 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
+ 	if (!timr)
+ 		return -EINVAL;
+ 
+-	overrun = timr->it_overrun_last;
++	overrun = timer_overrun_to_int(timr, 0);
+ 	unlock_timer(timr, flags);
+ 
+ 	return overrun;
+diff --git a/kernel/time/posix-timers.h b/kernel/time/posix-timers.h
+index 151e28f5bf30..ddb21145211a 100644
+--- a/kernel/time/posix-timers.h
++++ b/kernel/time/posix-timers.h
+@@ -19,7 +19,7 @@ struct k_clock {
+ 	void	(*timer_get)(struct k_itimer *timr,
+ 			     struct itimerspec64 *cur_setting);
+ 	void	(*timer_rearm)(struct k_itimer *timr);
+-	int	(*timer_forward)(struct k_itimer *timr, ktime_t now);
++	s64	(*timer_forward)(struct k_itimer *timr, ktime_t now);
+ 	ktime_t	(*timer_remaining)(struct k_itimer *timr, ktime_t now);
+ 	int	(*timer_try_to_cancel)(struct k_itimer *timr);
+ 	void	(*timer_arm)(struct k_itimer *timr, ktime_t expires,
+diff --git a/lib/klist.c b/lib/klist.c
+index 0507fa5d84c5..f6b547812fe3 100644
+--- a/lib/klist.c
++++ b/lib/klist.c
+@@ -336,8 +336,9 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *prev;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		prev = to_klist_node(last->n_node.prev);
+@@ -356,7 +357,7 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 		prev = to_klist_node(prev->n_node.prev);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+@@ -377,8 +378,9 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *next;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		next = to_klist_node(last->n_node.next);
+@@ -397,7 +399,7 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 		next = to_klist_node(next->n_node.next);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+diff --git a/net/6lowpan/iphc.c b/net/6lowpan/iphc.c
+index 6b1042e21656..52fad5dad9f7 100644
+--- a/net/6lowpan/iphc.c
++++ b/net/6lowpan/iphc.c
+@@ -770,6 +770,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
+ 		hdr.hop_limit, &hdr.daddr);
+ 
+ 	skb_push(skb, sizeof(hdr));
++	skb_reset_mac_header(skb);
+ 	skb_reset_network_header(skb);
+ 	skb_copy_to_linear_data(skb, &hdr, sizeof(hdr));
+ 
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 4bfff3c87e8e..e99d6afb70ef 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -95,11 +95,10 @@ struct bbr {
+ 	u32     mode:3,		     /* current bbr_mode in state machine */
+ 		prev_ca_state:3,     /* CA state on previous ACK */
+ 		packet_conservation:1,  /* use packet conservation? */
+-		restore_cwnd:1,	     /* decided to revert cwnd to old value */
+ 		round_start:1,	     /* start of packet-timed tx->ack round? */
+ 		idle_restart:1,	     /* restarting after idle? */
+ 		probe_rtt_round_done:1,  /* a BBR_PROBE_RTT round at 4 pkts? */
+-		unused:12,
++		unused:13,
+ 		lt_is_sampling:1,    /* taking long-term ("LT") samples now? */
+ 		lt_rtt_cnt:7,	     /* round trips in long-term interval */
+ 		lt_use_bw:1;	     /* use lt_bw as our bw estimate? */
+@@ -175,6 +174,8 @@ static const u32 bbr_lt_bw_diff = 4000 / 8;
+ /* If we estimate we're policed, use lt_bw for this many round trips: */
+ static const u32 bbr_lt_bw_max_rtts = 48;
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk);
++
+ /* Do we estimate that STARTUP filled the pipe? */
+ static bool bbr_full_bw_reached(const struct sock *sk)
+ {
+@@ -305,6 +306,8 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+ 		 */
+ 		if (bbr->mode == BBR_PROBE_BW)
+ 			bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT);
++		else if (bbr->mode == BBR_PROBE_RTT)
++			bbr_check_probe_rtt_done(sk);
+ 	}
+ }
+ 
+@@ -392,17 +395,11 @@ static bool bbr_set_cwnd_to_recover_or_restore(
+ 		cwnd = tcp_packets_in_flight(tp) + acked;
+ 	} else if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) {
+ 		/* Exiting loss recovery; restore cwnd saved before recovery. */
+-		bbr->restore_cwnd = 1;
++		cwnd = max(cwnd, bbr->prior_cwnd);
+ 		bbr->packet_conservation = 0;
+ 	}
+ 	bbr->prev_ca_state = state;
+ 
+-	if (bbr->restore_cwnd) {
+-		/* Restore cwnd after exiting loss recovery or PROBE_RTT. */
+-		cwnd = max(cwnd, bbr->prior_cwnd);
+-		bbr->restore_cwnd = 0;
+-	}
+-
+ 	if (bbr->packet_conservation) {
+ 		*new_cwnd = max(cwnd, tcp_packets_in_flight(tp) + acked);
+ 		return true;	/* yes, using packet conservation */
+@@ -744,6 +741,20 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
+ 		bbr_reset_probe_bw_mode(sk);  /* we estimate queue is drained */
+ }
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++	struct bbr *bbr = inet_csk_ca(sk);
++
++	if (!(bbr->probe_rtt_done_stamp &&
++	      after(tcp_jiffies32, bbr->probe_rtt_done_stamp)))
++		return;
++
++	bbr->min_rtt_stamp = tcp_jiffies32;  /* wait a while until PROBE_RTT */
++	tp->snd_cwnd = max(tp->snd_cwnd, bbr->prior_cwnd);
++	bbr_reset_mode(sk);
++}
++
+ /* The goal of PROBE_RTT mode is to have BBR flows cooperatively and
+  * periodically drain the bottleneck queue, to converge to measure the true
+  * min_rtt (unloaded propagation delay). This allows the flows to keep queues
+@@ -802,12 +813,8 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
+ 		} else if (bbr->probe_rtt_done_stamp) {
+ 			if (bbr->round_start)
+ 				bbr->probe_rtt_round_done = 1;
+-			if (bbr->probe_rtt_round_done &&
+-			    after(tcp_jiffies32, bbr->probe_rtt_done_stamp)) {
+-				bbr->min_rtt_stamp = tcp_jiffies32;
+-				bbr->restore_cwnd = 1;  /* snap to prior_cwnd */
+-				bbr_reset_mode(sk);
+-			}
++			if (bbr->probe_rtt_round_done)
++				bbr_check_probe_rtt_done(sk);
+ 		}
+ 	}
+ 	/* Restart after idle ends only once we process a new S/ACK for data */
+@@ -858,7 +865,6 @@ static void bbr_init(struct sock *sk)
+ 	bbr->has_seen_rtt = 0;
+ 	bbr_init_pacing_rate_from_rtt(sk);
+ 
+-	bbr->restore_cwnd = 0;
+ 	bbr->round_start = 0;
+ 	bbr->idle_restart = 0;
+ 	bbr->full_bw_reached = 0;
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index 82e6edf9c5d9..45f33d6dedf7 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -100,7 +100,7 @@ static int ncsi_write_package_info(struct sk_buff *skb,
+ 	bool found;
+ 	int rc;
+ 
+-	if (id > ndp->package_num) {
++	if (id > ndp->package_num - 1) {
+ 		netdev_info(ndp->ndev.dev, "NCSI: No package with id %u\n", id);
+ 		return -ENODEV;
+ 	}
+@@ -240,7 +240,7 @@ static int ncsi_pkg_info_all_nl(struct sk_buff *skb,
+ 		return 0; /* done */
+ 
+ 	hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+-			  &ncsi_genl_family, 0,  NCSI_CMD_PKG_INFO);
++			  &ncsi_genl_family, NLM_F_MULTI,  NCSI_CMD_PKG_INFO);
+ 	if (!hdr) {
+ 		rc = -EMSGSIZE;
+ 		goto err;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 2ccf194c3ebb..8015e50e8d0a 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -222,9 +222,14 @@ static void tls_write_space(struct sock *sk)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+ 
+-	/* We are already sending pages, ignore notification */
+-	if (ctx->in_tcp_sendpages)
++	/* If in_tcp_sendpages call lower protocol write space handler
++	 * to ensure we wake up any waiting operations there. For example
++	 * if do_tcp_sendpages where to call sk_wait_event.
++	 */
++	if (ctx->in_tcp_sendpages) {
++		ctx->sk_write_space(sk);
+ 		return;
++	}
+ 
+ 	if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {
+ 		gfp_t sk_allocation = sk->sk_allocation;
+diff --git a/sound/aoa/core/gpio-feature.c b/sound/aoa/core/gpio-feature.c
+index 71960089e207..65557421fe0b 100644
+--- a/sound/aoa/core/gpio-feature.c
++++ b/sound/aoa/core/gpio-feature.c
+@@ -88,8 +88,10 @@ static struct device_node *get_gpio(char *name,
+ 	}
+ 
+ 	reg = of_get_property(np, "reg", NULL);
+-	if (!reg)
++	if (!reg) {
++		of_node_put(np);
+ 		return NULL;
++	}
+ 
+ 	*gpioptr = *reg;
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 647ae1a71e10..28dc5e124995 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2535,7 +2535,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ 	/* AMD Raven */
+ 	{ PCI_DEVICE(0x1022, 0x15e3),
+-	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
++	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
++			 AZX_DCAPS_PM_RUNTIME },
+ 	/* ATI HDMI */
+ 	{ PCI_DEVICE(0x1002, 0x0002),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+diff --git a/sound/soc/codecs/rt1305.c b/sound/soc/codecs/rt1305.c
+index f4c8c45f4010..421b8fb2fa04 100644
+--- a/sound/soc/codecs/rt1305.c
++++ b/sound/soc/codecs/rt1305.c
+@@ -1066,7 +1066,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Left_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Left channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0l = 562949953421312;
++	r0l = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0l, rhl);
+ 	pr_debug("Left_r0 = 0x%llx\n", r0l);
+@@ -1083,7 +1083,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Right_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Right channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0r = 562949953421312;
++	r0r = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0r, rhl);
+ 	pr_debug("Right_r0 = 0x%llx\n", r0r);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 33065ba294a9..d2c9d7865bde 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -404,7 +404,7 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
+ 					BYT_RT5640_JD_SRC_JD1_IN4P |
+-					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_TH_1500UA |
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index 01f43218984b..69a7896cb713 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -777,7 +777,7 @@ static int q6afe_callback(struct apr_device *adev, struct apr_resp_pkt *data)
+  */
+ int q6afe_get_port_id(int index)
+ {
+-	if (index < 0 || index > AFE_PORT_MAX)
++	if (index < 0 || index >= AFE_PORT_MAX)
+ 		return -EINVAL;
+ 
+ 	return port_maps[index].port_id;
+@@ -1014,7 +1014,7 @@ int q6afe_port_stop(struct q6afe_port *port)
+ 
+ 	port_id = port->id;
+ 	index = port->token;
+-	if (index < 0 || index > AFE_PORT_MAX) {
++	if (index < 0 || index >= AFE_PORT_MAX) {
+ 		dev_err(afe->dev, "AFE port index[%d] invalid!\n", index);
+ 		return -EINVAL;
+ 	}
+@@ -1355,7 +1355,7 @@ struct q6afe_port *q6afe_port_get_from_id(struct device *dev, int id)
+ 	unsigned long flags;
+ 	int cfg_type;
+ 
+-	if (id < 0 || id > AFE_PORT_MAX) {
++	if (id < 0 || id >= AFE_PORT_MAX) {
+ 		dev_err(dev, "AFE port token[%d] invalid!\n", id);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index cf4b40d376e5..c675058b908b 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -37,6 +37,7 @@
+ #define	CHNL_4		(1 << 22)	/* Channels */
+ #define	CHNL_6		(2 << 22)	/* Channels */
+ #define	CHNL_8		(3 << 22)	/* Channels */
++#define DWL_MASK	(7 << 19)	/* Data Word Length mask */
+ #define	DWL_8		(0 << 19)	/* Data Word Length */
+ #define	DWL_16		(1 << 19)	/* Data Word Length */
+ #define	DWL_18		(2 << 19)	/* Data Word Length */
+@@ -353,21 +354,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 	struct rsnd_dai *rdai = rsnd_io_to_rdai(io);
+ 	struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	u32 cr_own;
+-	u32 cr_mode;
+-	u32 wsr;
++	u32 cr_own	= ssi->cr_own;
++	u32 cr_mode	= ssi->cr_mode;
++	u32 wsr		= ssi->wsr;
+ 	int is_tdm;
+ 
+-	if (rsnd_ssi_is_parent(mod, io))
+-		return;
+-
+ 	is_tdm = rsnd_runtime_is_ssi_tdm(io);
+ 
+ 	/*
+ 	 * always use 32bit system word.
+ 	 * see also rsnd_ssi_master_clk_enable()
+ 	 */
+-	cr_own = FORCE | SWL_32;
++	cr_own |= FORCE | SWL_32;
+ 
+ 	if (rdai->bit_clk_inv)
+ 		cr_own |= SCKP;
+@@ -377,9 +375,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		cr_own |= SDTA;
+ 	if (rdai->sys_delay)
+ 		cr_own |= DEL;
++
++	/*
++	 * We shouldn't exchange SWSP after running.
++	 * This means, parent needs to care it.
++	 */
++	if (rsnd_ssi_is_parent(mod, io))
++		goto init_end;
++
+ 	if (rsnd_io_is_play(io))
+ 		cr_own |= TRMD;
+ 
++	cr_own &= ~DWL_MASK;
+ 	switch (snd_pcm_format_width(runtime->format)) {
+ 	case 16:
+ 		cr_own |= DWL_16;
+@@ -406,7 +413,7 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		wsr	|= WS_MODE;
+ 		cr_own	|= CHNL_8;
+ 	}
+-
++init_end:
+ 	ssi->cr_own	= cr_own;
+ 	ssi->cr_mode	= cr_mode;
+ 	ssi->wsr	= wsr;
+@@ -465,15 +472,18 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ 		return -EIO;
+ 	}
+ 
+-	if (!rsnd_ssi_is_parent(mod, io))
+-		ssi->cr_own	= 0;
+-
+ 	rsnd_ssi_master_clk_stop(mod, io);
+ 
+ 	rsnd_mod_power_off(mod);
+ 
+ 	ssi->usrcnt--;
+ 
++	if (!ssi->usrcnt) {
++		ssi->cr_own	= 0;
++		ssi->cr_mode	= 0;
++		ssi->wsr	= 0;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 229c12349803..a099c3e45504 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4073,6 +4073,13 @@ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card)
+ 			continue;
+ 		}
+ 
++		/* let users know there is no DAI to link */
++		if (!dai_w->priv) {
++			dev_dbg(card->dev, "dai widget %s has no DAI\n",
++				dai_w->name);
++			continue;
++		}
++
+ 		dai = dai_w->priv;
+ 
+ 		/* ...find all widgets with the same stream and link them */
+diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
+index 1832100d1b27..6d41323be291 100644
+--- a/tools/bpf/bpftool/map_perf_ring.c
++++ b/tools/bpf/bpftool/map_perf_ring.c
+@@ -194,8 +194,10 @@ int do_event_pipe(int argc, char **argv)
+ 	}
+ 
+ 	while (argc) {
+-		if (argc < 2)
++		if (argc < 2) {
+ 			BAD_ARG();
++			goto err_close_map;
++		}
+ 
+ 		if (is_prefix(*argv, "cpu")) {
+ 			char *endptr;
+@@ -221,6 +223,7 @@ int do_event_pipe(int argc, char **argv)
+ 			NEXT_ARG();
+ 		} else {
+ 			BAD_ARG();
++			goto err_close_map;
+ 		}
+ 
+ 		do_all = false;
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index 4f5de8245b32..6631b0b8b4ab 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -385,7 +385,7 @@ static int test_and_print(struct test *t, bool force_skip, int subtest)
+ 	if (!t->subtest.get_nr)
+ 		pr_debug("%s:", t->desc);
+ 	else
+-		pr_debug("%s subtest %d:", t->desc, subtest);
++		pr_debug("%s subtest %d:", t->desc, subtest + 1);
+ 
+ 	switch (err) {
+ 	case TEST_OK:
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+index 3bb4c2ba7b14..197e769c2ed1 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+@@ -74,12 +74,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_gretap_stp()
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+index 619b469365be..1c18e332cd4f 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+@@ -62,7 +62,7 @@ full_test_span_gre_dir_vlan_ips()
+ 			  "$backward_type" "$ip1" "$ip2"
+ 
+ 	tc filter add dev $h3 ingress pref 77 prot 802.1q \
+-		flower $vlan_match ip_proto 0x2f \
++		flower $vlan_match \
+ 		action pass
+ 	mirror_test v$h1 $ip1 $ip2 $h3 77 10
+ 	tc filter del dev $h3 ingress pref 77
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index 5dbc7a08f4bd..a12274776116 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -79,12 +79,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_span_gre_forbidden_cpu()


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     4e6c97cc0a2856d9fbc6b37ab578f31a207b4f7a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep  9 11:25:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4e6c97cc

Linux patch 4.18.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-4.18.7.patch | 5658 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5662 insertions(+)

diff --git a/0000_README b/0000_README
index 8bfc2e4..f3682ca 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.18.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.6
 
+Patch:  1006_linux-4.18.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.18.7.patch b/1006_linux-4.18.7.patch
new file mode 100644
index 0000000..7ab3155
--- /dev/null
+++ b/1006_linux-4.18.7.patch
@@ -0,0 +1,5658 @@
+diff --git a/Makefile b/Makefile
+index 62524f4d42ad..711b04d00e49 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index c210a25dd6da..cff52d8ffdb1 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -530,24 +530,19 @@ SYSCALL_DEFINE4(osf_mount, unsigned long, typenr, const char __user *, path,
+ SYSCALL_DEFINE1(osf_utsname, char __user *, name)
+ {
+ 	int error;
++	char tmp[5 * 32];
+ 
+ 	down_read(&uts_sem);
+-	error = -EFAULT;
+-	if (copy_to_user(name + 0, utsname()->sysname, 32))
+-		goto out;
+-	if (copy_to_user(name + 32, utsname()->nodename, 32))
+-		goto out;
+-	if (copy_to_user(name + 64, utsname()->release, 32))
+-		goto out;
+-	if (copy_to_user(name + 96, utsname()->version, 32))
+-		goto out;
+-	if (copy_to_user(name + 128, utsname()->machine, 32))
+-		goto out;
++	memcpy(tmp + 0 * 32, utsname()->sysname, 32);
++	memcpy(tmp + 1 * 32, utsname()->nodename, 32);
++	memcpy(tmp + 2 * 32, utsname()->release, 32);
++	memcpy(tmp + 3 * 32, utsname()->version, 32);
++	memcpy(tmp + 4 * 32, utsname()->machine, 32);
++	up_read(&uts_sem);
+ 
+-	error = 0;
+- out:
+-	up_read(&uts_sem);	
+-	return error;
++	if (copy_to_user(name, tmp, sizeof(tmp)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE0(getpagesize)
+@@ -567,18 +562,21 @@ SYSCALL_DEFINE2(osf_getdomainname, char __user *, name, int, namelen)
+ {
+ 	int len, err = 0;
+ 	char *kname;
++	char tmp[32];
+ 
+-	if (namelen > 32)
++	if (namelen < 0 || namelen > 32)
+ 		namelen = 32;
+ 
+ 	down_read(&uts_sem);
+ 	kname = utsname()->domainname;
+ 	len = strnlen(kname, namelen);
+-	if (copy_to_user(name, kname, min(len + 1, namelen)))
+-		err = -EFAULT;
++	len = min(len + 1, namelen);
++	memcpy(tmp, kname, len);
+ 	up_read(&uts_sem);
+ 
+-	return err;
++	if (copy_to_user(name, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ /*
+@@ -739,13 +737,14 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	};
+ 	unsigned long offset;
+ 	const char *res;
+-	long len, err = -EINVAL;
++	long len;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	offset = command-1;
+ 	if (offset >= ARRAY_SIZE(sysinfo_table)) {
+ 		/* Digital UNIX has a few unpublished interfaces here */
+ 		printk("sysinfo(%d)", command);
+-		goto out;
++		return -EINVAL;
+ 	}
+ 
+ 	down_read(&uts_sem);
+@@ -753,13 +752,11 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	len = strlen(res)+1;
+ 	if ((unsigned long)len > (unsigned long)count)
+ 		len = count;
+-	if (copy_to_user(buf, res, len))
+-		err = -EFAULT;
+-	else
+-		err = 0;
++	memcpy(tmp, res, len);
+ 	up_read(&uts_sem);
+- out:
+-	return err;
++	if (copy_to_user(buf, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE5(osf_getsysinfo, unsigned long, op, void __user *, buffer,
+diff --git a/arch/arm/boot/dts/am571x-idk.dts b/arch/arm/boot/dts/am571x-idk.dts
+index 5bb9d68d6e90..d9a2049a1ea8 100644
+--- a/arch/arm/boot/dts/am571x-idk.dts
++++ b/arch/arm/boot/dts/am571x-idk.dts
+@@ -66,10 +66,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio5 7 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio7 22 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am572x-idk-common.dtsi b/arch/arm/boot/dts/am572x-idk-common.dtsi
+index c6d858b31011..784639ddf451 100644
+--- a/arch/arm/boot/dts/am572x-idk-common.dtsi
++++ b/arch/arm/boot/dts/am572x-idk-common.dtsi
+@@ -57,10 +57,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio3 16 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio3 26 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+index ad87f1ae904d..c9063ffca524 100644
+--- a/arch/arm/boot/dts/am57xx-idk-common.dtsi
++++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+@@ -395,8 +395,13 @@
+ 	dr_mode = "host";
+ };
+ 
++&omap_dwc3_2 {
++	extcon = <&extcon_usb2>;
++};
++
+ &usb2 {
+-	dr_mode = "peripheral";
++	extcon = <&extcon_usb2>;
++	dr_mode = "otg";
+ };
+ 
+ &mmc1 {
+diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+index 92a9740c533f..3b1db7b9ec50 100644
+--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
++++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+@@ -206,6 +206,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			reg = <0x70>;
++			reset-gpio = <&gpio TEGRA_GPIO(BB, 0) GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 42c090cf0292..3eb034189cf8 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -754,7 +754,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
+ 
+ config HOLES_IN_ZONE
+ 	def_bool y
+-	depends on NUMA
+ 
+ source kernel/Kconfig.preempt
+ source kernel/Kconfig.hz
+diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
+index b7fb5274b250..0c4fc223f225 100644
+--- a/arch/arm64/crypto/sm4-ce-glue.c
++++ b/arch/arm64/crypto/sm4-ce-glue.c
+@@ -69,5 +69,5 @@ static void __exit sm4_ce_mod_fini(void)
+ 	crypto_unregister_alg(&sm4_ce_alg);
+ }
+ 
+-module_cpu_feature_match(SM3, sm4_ce_mod_init);
++module_cpu_feature_match(SM4, sm4_ce_mod_init);
+ module_exit(sm4_ce_mod_fini);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index 5a23010af600..1e7a33592e29 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -195,9 +195,6 @@ struct fadump_crash_info_header {
+ 	struct cpumask	online_mask;
+ };
+ 
+-/* Crash memory ranges */
+-#define INIT_CRASHMEM_RANGES	(INIT_MEMBLOCK_REGIONS + 2)
+-
+ struct fad_crash_memory_ranges {
+ 	unsigned long long	base;
+ 	unsigned long long	size;
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index 2160be2e4339..b321c82b3624 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -51,17 +51,14 @@ static inline int pte_present(pte_t pte)
+ #define pte_access_permitted pte_access_permitted
+ static inline bool pte_access_permitted(pte_t pte, bool write)
+ {
+-	unsigned long pteval = pte_val(pte);
+ 	/*
+ 	 * A read-only access is controlled by _PAGE_USER bit.
+ 	 * We have _PAGE_READ set for WRITE and EXECUTE
+ 	 */
+-	unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
+-
+-	if (write)
+-		need_pte_bits |= _PAGE_WRITE;
++	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
++		return false;
+ 
+-	if ((pteval & need_pte_bits) != need_pte_bits)
++	if (write && !pte_write(pte))
+ 		return false;
+ 
+ 	return true;
+diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
+index 5ba80cffb505..3312606fda07 100644
+--- a/arch/powerpc/include/asm/pkeys.h
++++ b/arch/powerpc/include/asm/pkeys.h
+@@ -94,8 +94,6 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
+ 		__mm_pkey_is_allocated(mm, pkey));
+ }
+ 
+-extern void __arch_activate_pkey(int pkey);
+-extern void __arch_deactivate_pkey(int pkey);
+ /*
+  * Returns a positive, 5-bit key on success, or -1 on failure.
+  * Relies on the mmap_sem to protect against concurrency in mm_pkey_alloc() and
+@@ -124,11 +122,6 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
+ 	ret = ffz((u32)mm_pkey_allocation_map(mm));
+ 	__mm_pkey_allocated(mm, ret);
+ 
+-	/*
+-	 * Enable the key in the hardware
+-	 */
+-	if (ret > 0)
+-		__arch_activate_pkey(ret);
+ 	return ret;
+ }
+ 
+@@ -140,10 +133,6 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
+ 	if (!mm_pkey_is_allocated(mm, pkey))
+ 		return -EINVAL;
+ 
+-	/*
+-	 * Disable the key in the hardware
+-	 */
+-	__arch_deactivate_pkey(pkey);
+ 	__mm_pkey_free(mm, pkey);
+ 
+ 	return 0;
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 07e8396d472b..958eb5cd2a9e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -47,8 +47,10 @@ static struct fadump_mem_struct fdm;
+ static const struct fadump_mem_struct *fdm_active;
+ 
+ static DEFINE_MUTEX(fadump_mutex);
+-struct fad_crash_memory_ranges crash_memory_ranges[INIT_CRASHMEM_RANGES];
++struct fad_crash_memory_ranges *crash_memory_ranges;
++int crash_memory_ranges_size;
+ int crash_mem_ranges;
++int max_crash_mem_ranges;
+ 
+ /* Scan the Firmware Assisted dump configuration details. */
+ int __init early_init_dt_scan_fw_dump(unsigned long node,
+@@ -868,38 +870,88 @@ static int __init process_fadump(const struct fadump_mem_struct *fdm_active)
+ 	return 0;
+ }
+ 
+-static inline void fadump_add_crash_memory(unsigned long long base,
+-					unsigned long long end)
++static void free_crash_memory_ranges(void)
++{
++	kfree(crash_memory_ranges);
++	crash_memory_ranges = NULL;
++	crash_memory_ranges_size = 0;
++	max_crash_mem_ranges = 0;
++}
++
++/*
++ * Allocate or reallocate crash memory ranges array in incremental units
++ * of PAGE_SIZE.
++ */
++static int allocate_crash_memory_ranges(void)
++{
++	struct fad_crash_memory_ranges *new_array;
++	u64 new_size;
++
++	new_size = crash_memory_ranges_size + PAGE_SIZE;
++	pr_debug("Allocating %llu bytes of memory for crash memory ranges\n",
++		 new_size);
++
++	new_array = krealloc(crash_memory_ranges, new_size, GFP_KERNEL);
++	if (new_array == NULL) {
++		pr_err("Insufficient memory for setting up crash memory ranges\n");
++		free_crash_memory_ranges();
++		return -ENOMEM;
++	}
++
++	crash_memory_ranges = new_array;
++	crash_memory_ranges_size = new_size;
++	max_crash_mem_ranges = (new_size /
++				sizeof(struct fad_crash_memory_ranges));
++	return 0;
++}
++
++static inline int fadump_add_crash_memory(unsigned long long base,
++					  unsigned long long end)
+ {
+ 	if (base == end)
+-		return;
++		return 0;
++
++	if (crash_mem_ranges == max_crash_mem_ranges) {
++		int ret;
++
++		ret = allocate_crash_memory_ranges();
++		if (ret)
++			return ret;
++	}
+ 
+ 	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
+ 		crash_mem_ranges, base, end - 1, (end - base));
+ 	crash_memory_ranges[crash_mem_ranges].base = base;
+ 	crash_memory_ranges[crash_mem_ranges].size = end - base;
+ 	crash_mem_ranges++;
++	return 0;
+ }
+ 
+-static void fadump_exclude_reserved_area(unsigned long long start,
++static int fadump_exclude_reserved_area(unsigned long long start,
+ 					unsigned long long end)
+ {
+ 	unsigned long long ra_start, ra_end;
++	int ret = 0;
+ 
+ 	ra_start = fw_dump.reserve_dump_area_start;
+ 	ra_end = ra_start + fw_dump.reserve_dump_area_size;
+ 
+ 	if ((ra_start < end) && (ra_end > start)) {
+ 		if ((start < ra_start) && (end > ra_end)) {
+-			fadump_add_crash_memory(start, ra_start);
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(start, ra_start);
++			if (ret)
++				return ret;
++
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		} else if (start < ra_start) {
+-			fadump_add_crash_memory(start, ra_start);
++			ret = fadump_add_crash_memory(start, ra_start);
+ 		} else if (ra_end < end) {
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		}
+ 	} else
+-		fadump_add_crash_memory(start, end);
++		ret = fadump_add_crash_memory(start, end);
++
++	return ret;
+ }
+ 
+ static int fadump_init_elfcore_header(char *bufp)
+@@ -939,10 +991,11 @@ static int fadump_init_elfcore_header(char *bufp)
+  * Traverse through memblock structure and setup crash memory ranges. These
+  * ranges will be used create PT_LOAD program headers in elfcore header.
+  */
+-static void fadump_setup_crash_memory_ranges(void)
++static int fadump_setup_crash_memory_ranges(void)
+ {
+ 	struct memblock_region *reg;
+ 	unsigned long long start, end;
++	int ret;
+ 
+ 	pr_debug("Setup crash memory ranges.\n");
+ 	crash_mem_ranges = 0;
+@@ -953,7 +1006,9 @@ static void fadump_setup_crash_memory_ranges(void)
+ 	 * specified during fadump registration. We need to create a separate
+ 	 * program header for this chunk with the correct offset.
+ 	 */
+-	fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	ret = fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	if (ret)
++		return ret;
+ 
+ 	for_each_memblock(memory, reg) {
+ 		start = (unsigned long long)reg->base;
+@@ -973,8 +1028,12 @@ static void fadump_setup_crash_memory_ranges(void)
+ 		}
+ 
+ 		/* add this range excluding the reserved dump area. */
+-		fadump_exclude_reserved_area(start, end);
++		ret = fadump_exclude_reserved_area(start, end);
++		if (ret)
++			return ret;
+ 	}
++
++	return 0;
+ }
+ 
+ /*
+@@ -1097,6 +1156,7 @@ static int register_fadump(void)
+ {
+ 	unsigned long addr;
+ 	void *vaddr;
++	int ret;
+ 
+ 	/*
+ 	 * If no memory is reserved then we can not register for firmware-
+@@ -1105,7 +1165,9 @@ static int register_fadump(void)
+ 	if (!fw_dump.reserve_dump_area_size)
+ 		return -ENODEV;
+ 
+-	fadump_setup_crash_memory_ranges();
++	ret = fadump_setup_crash_memory_ranges();
++	if (ret)
++		return ret;
+ 
+ 	addr = be64_to_cpu(fdm.rmr_region.destination_address) + be64_to_cpu(fdm.rmr_region.source_len);
+ 	/* Initialize fadump crash info header. */
+@@ -1183,6 +1245,7 @@ void fadump_cleanup(void)
+ 	} else if (fw_dump.dump_registered) {
+ 		/* Un-register Firmware-assisted dump if it was registered. */
+ 		fadump_unregister_dump(&fdm);
++		free_crash_memory_ranges();
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9ef4aea9fffe..991d09774108 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -583,6 +583,7 @@ static void save_all(struct task_struct *tsk)
+ 		__giveup_spe(tsk);
+ 
+ 	msr_check_and_clear(msr_all_available);
++	thread_pkey_regs_save(&tsk->thread);
+ }
+ 
+ void flush_all_to_thread(struct task_struct *tsk)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index de686b340f4a..a995513573c2 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -46,6 +46,7 @@
+ #include <linux/compiler.h>
+ #include <linux/of.h>
+ 
++#include <asm/ftrace.h>
+ #include <asm/reg.h>
+ #include <asm/ppc-opcode.h>
+ #include <asm/asm-prototypes.h>
+diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
+index f3d4b4a0e561..3bb5cec03d1f 100644
+--- a/arch/powerpc/mm/mmu_context_book3s64.c
++++ b/arch/powerpc/mm/mmu_context_book3s64.c
+@@ -200,9 +200,9 @@ static void pte_frag_destroy(void *pte_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PTE_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -215,9 +215,9 @@ static void pmd_frag_destroy(void *pmd_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PMD_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
+index a4ca57612558..c9ee9e23845f 100644
+--- a/arch/powerpc/mm/mmu_context_iommu.c
++++ b/arch/powerpc/mm/mmu_context_iommu.c
+@@ -129,6 +129,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	long i, j, ret = 0, locked_entries = 0;
+ 	unsigned int pageshift;
+ 	unsigned long flags;
++	unsigned long cur_ua;
+ 	struct page *page = NULL;
+ 
+ 	mutex_lock(&mem_list_mutex);
+@@ -177,7 +178,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	}
+ 
+ 	for (i = 0; i < entries; ++i) {
+-		if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++		cur_ua = ua + (i << PAGE_SHIFT);
++		if (1 != get_user_pages_fast(cur_ua,
+ 					1/* pages */, 1/* iswrite */, &page)) {
+ 			ret = -EFAULT;
+ 			for (j = 0; j < i; ++j)
+@@ -196,7 +198,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		if (is_migrate_cma_page(page)) {
+ 			if (mm_iommu_move_page_from_cma(page))
+ 				goto populate;
+-			if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++			if (1 != get_user_pages_fast(cur_ua,
+ 						1/* pages */, 1/* iswrite */,
+ 						&page)) {
+ 				ret = -EFAULT;
+@@ -210,20 +212,21 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		}
+ populate:
+ 		pageshift = PAGE_SHIFT;
+-		if (PageCompound(page)) {
++		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
+ 			pte_t *pte;
+ 			struct page *head = compound_head(page);
+ 			unsigned int compshift = compound_order(head);
++			unsigned int pteshift;
+ 
+ 			local_irq_save(flags); /* disables as well */
+-			pte = find_linux_pte(mm->pgd, ua, NULL, &pageshift);
+-			local_irq_restore(flags);
++			pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);
+ 
+ 			/* Double check it is still the same pinned page */
+ 			if (pte && pte_page(*pte) == head &&
+-					pageshift == compshift)
+-				pageshift = max_t(unsigned int, pageshift,
++			    pteshift == compshift + PAGE_SHIFT)
++				pageshift = max_t(unsigned int, pteshift,
+ 						PAGE_SHIFT);
++			local_irq_restore(flags);
+ 		}
+ 		mem->pageshift = min(mem->pageshift, pageshift);
+ 		mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
+index 4afbfbb64bfd..78d0b3d5ebad 100644
+--- a/arch/powerpc/mm/pgtable-book3s64.c
++++ b/arch/powerpc/mm/pgtable-book3s64.c
+@@ -270,6 +270,8 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 		return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
++
+ 	ret = page_address(page);
+ 	/*
+ 	 * if we support only one fragment just return the
+@@ -285,7 +287,7 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pmd_frag)) {
+-		set_page_count(page, PMD_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+ 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -308,9 +310,10 @@ void pmd_fragment_free(unsigned long *pmd)
+ {
+ 	struct page *page = virt_to_page(pmd);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -352,6 +355,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 			return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
+ 
+ 	ret = page_address(page);
+ 	/*
+@@ -367,7 +371,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pte_frag)) {
+-		set_page_count(page, PTE_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+ 		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -390,10 +394,11 @@ void pte_fragment_free(unsigned long *table, int kernel)
+ {
+ 	struct page *page = virt_to_page(table);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		if (!kernel)
+ 			pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index e6f500fabf5e..0e7810ccd1ae 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -15,8 +15,10 @@ bool pkey_execute_disable_supported;
+ int  pkeys_total;		/* Total pkeys as per device tree */
+ bool pkeys_devtree_defined;	/* pkey property exported by device tree */
+ u32  initial_allocation_mask;	/* Bits set for reserved keys */
+-u64  pkey_amr_uamor_mask;	/* Bits in AMR/UMOR not to be touched */
++u64  pkey_amr_mask;		/* Bits in AMR not to be touched */
+ u64  pkey_iamr_mask;		/* Bits in AMR not to be touched */
++u64  pkey_uamor_mask;		/* Bits in UMOR not to be touched */
++int  execute_only_key = 2;
+ 
+ #define AMR_BITS_PER_PKEY 2
+ #define AMR_RD_BIT 0x1UL
+@@ -91,7 +93,7 @@ int pkey_initialize(void)
+ 	 * arch-neutral code.
+ 	 */
+ 	pkeys_total = min_t(int, pkeys_total,
+-			(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT));
++			((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)+1));
+ 
+ 	if (!pkey_mmu_enabled() || radix_enabled() || !pkeys_total)
+ 		static_branch_enable(&pkey_disabled);
+@@ -119,20 +121,38 @@ int pkey_initialize(void)
+ #else
+ 	os_reserved = 0;
+ #endif
+-	initial_allocation_mask = ~0x0;
+-	pkey_amr_uamor_mask = ~0x0ul;
++	initial_allocation_mask  = (0x1 << 0) | (0x1 << 1) |
++					(0x1 << execute_only_key);
++
++	/* register mask is in BE format */
++	pkey_amr_mask = ~0x0ul;
++	pkey_amr_mask &= ~(0x3ul << pkeyshift(0));
++
+ 	pkey_iamr_mask = ~0x0ul;
+-	/*
+-	 * key 0, 1 are reserved.
+-	 * key 0 is the default key, which allows read/write/execute.
+-	 * key 1 is recommended not to be used. PowerISA(3.0) page 1015,
+-	 * programming note.
+-	 */
+-	for (i = 2; i < (pkeys_total - os_reserved); i++) {
+-		initial_allocation_mask &= ~(0x1 << i);
+-		pkey_amr_uamor_mask &= ~(0x3ul << pkeyshift(i));
+-		pkey_iamr_mask &= ~(0x1ul << pkeyshift(i));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	pkey_uamor_mask = ~0x0ul;
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	/* mark the rest of the keys as reserved and hence unavailable */
++	for (i = (pkeys_total - os_reserved); i < pkeys_total; i++) {
++		initial_allocation_mask |= (0x1 << i);
++		pkey_uamor_mask &= ~(0x3ul << pkeyshift(i));
++	}
++
++	if (unlikely((pkeys_total - os_reserved) <= execute_only_key)) {
++		/*
++		 * Insufficient number of keys to support
++		 * execute only key. Mark it unavailable.
++		 * Any AMR, UAMOR, IAMR bit set for
++		 * this key is irrelevant since this key
++		 * can never be allocated.
++		 */
++		execute_only_key = -1;
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -143,8 +163,7 @@ void pkey_mm_init(struct mm_struct *mm)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 	mm_pkey_allocation_map(mm) = initial_allocation_mask;
+-	/* -1 means unallocated or invalid */
+-	mm->context.execute_only_pkey = -1;
++	mm->context.execute_only_pkey = execute_only_key;
+ }
+ 
+ static inline u64 read_amr(void)
+@@ -213,33 +232,6 @@ static inline void init_iamr(int pkey, u8 init_bits)
+ 	write_iamr(old_iamr | new_iamr_bits);
+ }
+ 
+-static void pkey_status_change(int pkey, bool enable)
+-{
+-	u64 old_uamor;
+-
+-	/* Reset the AMR and IAMR bits for this key */
+-	init_amr(pkey, 0x0);
+-	init_iamr(pkey, 0x0);
+-
+-	/* Enable/disable key */
+-	old_uamor = read_uamor();
+-	if (enable)
+-		old_uamor |= (0x3ul << pkeyshift(pkey));
+-	else
+-		old_uamor &= ~(0x3ul << pkeyshift(pkey));
+-	write_uamor(old_uamor);
+-}
+-
+-void __arch_activate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, true);
+-}
+-
+-void __arch_deactivate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, false);
+-}
+-
+ /*
+  * Set the access rights in AMR IAMR and UAMOR registers for @pkey to that
+  * specified in @init_val.
+@@ -289,9 +281,6 @@ void thread_pkey_regs_restore(struct thread_struct *new_thread,
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	/*
+-	 * TODO: Just set UAMOR to zero if @new_thread hasn't used any keys yet.
+-	 */
+ 	if (old_thread->amr != new_thread->amr)
+ 		write_amr(new_thread->amr);
+ 	if (old_thread->iamr != new_thread->iamr)
+@@ -305,9 +294,13 @@ void thread_pkey_regs_init(struct thread_struct *thread)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	thread->amr = read_amr() & pkey_amr_uamor_mask;
+-	thread->iamr = read_iamr() & pkey_iamr_mask;
+-	thread->uamor = read_uamor() & pkey_amr_uamor_mask;
++	thread->amr = pkey_amr_mask;
++	thread->iamr = pkey_iamr_mask;
++	thread->uamor = pkey_uamor_mask;
++
++	write_uamor(pkey_uamor_mask);
++	write_amr(pkey_amr_mask);
++	write_iamr(pkey_iamr_mask);
+ }
+ 
+ static inline bool pkey_allows_readwrite(int pkey)
+@@ -322,48 +315,7 @@ static inline bool pkey_allows_readwrite(int pkey)
+ 
+ int __execute_only_pkey(struct mm_struct *mm)
+ {
+-	bool need_to_set_mm_pkey = false;
+-	int execute_only_pkey = mm->context.execute_only_pkey;
+-	int ret;
+-
+-	/* Do we need to assign a pkey for mm's execute-only maps? */
+-	if (execute_only_pkey == -1) {
+-		/* Go allocate one to use, which might fail */
+-		execute_only_pkey = mm_pkey_alloc(mm);
+-		if (execute_only_pkey < 0)
+-			return -1;
+-		need_to_set_mm_pkey = true;
+-	}
+-
+-	/*
+-	 * We do not want to go through the relatively costly dance to set AMR
+-	 * if we do not need to. Check it first and assume that if the
+-	 * execute-only pkey is readwrite-disabled than we do not have to set it
+-	 * ourselves.
+-	 */
+-	if (!need_to_set_mm_pkey && !pkey_allows_readwrite(execute_only_pkey))
+-		return execute_only_pkey;
+-
+-	/*
+-	 * Set up AMR so that it denies access for everything other than
+-	 * execution.
+-	 */
+-	ret = __arch_set_user_pkey_access(current, execute_only_pkey,
+-					  PKEY_DISABLE_ACCESS |
+-					  PKEY_DISABLE_WRITE);
+-	/*
+-	 * If the AMR-set operation failed somehow, just return 0 and
+-	 * effectively disable execute-only support.
+-	 */
+-	if (ret) {
+-		mm_pkey_free(mm, execute_only_pkey);
+-		return -1;
+-	}
+-
+-	/* We got one, store it and use it from here on out */
+-	if (need_to_set_mm_pkey)
+-		mm->context.execute_only_pkey = execute_only_pkey;
+-	return execute_only_pkey;
++	return mm->context.execute_only_pkey;
+ }
+ 
+ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 70b2e1e0f23c..a2cdf358a3ac 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -3368,12 +3368,49 @@ static void pnv_pci_ioda_create_dbgfs(void)
+ #endif /* CONFIG_DEBUG_FS */
+ }
+ 
++static void pnv_pci_enable_bridge(struct pci_bus *bus)
++{
++	struct pci_dev *dev = bus->self;
++	struct pci_bus *child;
++
++	/* Empty bus ? bail */
++	if (list_empty(&bus->devices))
++		return;
++
++	/*
++	 * If there's a bridge associated with that bus enable it. This works
++	 * around races in the generic code if the enabling is done during
++	 * parallel probing. This can be removed once those races have been
++	 * fixed.
++	 */
++	if (dev) {
++		int rc = pci_enable_device(dev);
++		if (rc)
++			pci_err(dev, "Error enabling bridge (%d)\n", rc);
++		pci_set_master(dev);
++	}
++
++	/* Perform the same to child busses */
++	list_for_each_entry(child, &bus->children, node)
++		pnv_pci_enable_bridge(child);
++}
++
++static void pnv_pci_enable_bridges(void)
++{
++	struct pci_controller *hose;
++
++	list_for_each_entry(hose, &hose_list, list_node)
++		pnv_pci_enable_bridge(hose->bus);
++}
++
+ static void pnv_pci_ioda_fixup(void)
+ {
+ 	pnv_pci_ioda_setup_PEs();
+ 	pnv_pci_ioda_setup_iommu_api();
+ 	pnv_pci_ioda_create_dbgfs();
+ 
++	pnv_pci_enable_bridges();
++
+ #ifdef CONFIG_EEH
+ 	pnv_eeh_post_init();
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 5e1ef9150182..2edc673be137 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -360,7 +360,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 	}
+ 
+ 	savep = __va(regs->gpr[3]);
+-	regs->gpr[3] = savep[0];	/* restore original r3 */
++	regs->gpr[3] = be64_to_cpu(savep[0]);	/* restore original r3 */
+ 
+ 	/* If it isn't an extended log we can use the per cpu 64bit buffer */
+ 	h = (struct rtas_error_log *)&savep[1];
+diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
+index 7f3d9c59719a..452e4d080855 100644
+--- a/arch/sparc/kernel/sys_sparc_32.c
++++ b/arch/sparc/kernel/sys_sparc_32.c
+@@ -197,23 +197,27 @@ SYSCALL_DEFINE5(rt_sigaction, int, sig,
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+- 	int nlen, err;
+- 	
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
++
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	up_read(&uts_sem);
+ 
+-out:
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
++
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
+index 63baa8aa9414..274ed0b9b3e0 100644
+--- a/arch/sparc/kernel/sys_sparc_64.c
++++ b/arch/sparc/kernel/sys_sparc_64.c
+@@ -519,23 +519,27 @@ asmlinkage void sparc_breakpoint(struct pt_regs *regs)
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+-        int nlen, err;
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
++
++	up_read(&uts_sem);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
+ 
+-out:
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index e762ef417562..d27a50656aa1 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -223,34 +223,34 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	pcmpeqd TWOONE(%rip), \TMP2
+ 	pand	POLY(%rip), \TMP2
+ 	pxor	\TMP2, \TMP3
+-	movdqa	\TMP3, HashKey(%arg2)
++	movdqu	\TMP3, HashKey(%arg2)
+ 
+ 	movdqa	   \TMP3, \TMP5
+ 	pshufd	   $78, \TMP3, \TMP1
+ 	pxor	   \TMP3, \TMP1
+-	movdqa	   \TMP1, HashKey_k(%arg2)
++	movdqu	   \TMP1, HashKey_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^2<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_2(%arg2)
++	movdqu	   \TMP5, HashKey_2(%arg2)
+ # HashKey_2 = HashKey^2<<1 (mod poly)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_2_k(%arg2)
++	movdqu	   \TMP1, HashKey_2_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_3(%arg2)
++	movdqu	   \TMP5, HashKey_3(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_3_k(%arg2)
++	movdqu	   \TMP1, HashKey_3_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_4(%arg2)
++	movdqu	   \TMP5, HashKey_4(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_4_k(%arg2)
++	movdqu	   \TMP1, HashKey_4_k(%arg2)
+ .endm
+ 
+ # GCM_INIT initializes a gcm_context struct to prepare for encoding/decoding.
+@@ -271,7 +271,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	movdqu %xmm0, CurCount(%arg2) # ctx_data.current_counter = iv
+ 
+ 	PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
+-	movdqa HashKey(%arg2), %xmm13
++	movdqu HashKey(%arg2), %xmm13
+ 
+ 	CALC_AAD_HASH %xmm13, \AAD, \AADLEN, %xmm0, %xmm1, %xmm2, %xmm3, \
+ 	%xmm4, %xmm5, %xmm6
+@@ -997,7 +997,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1016,7 +1016,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1031,7 +1031,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1044,7 +1044,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1058,7 +1058,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1074,7 +1074,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1092,7 +1092,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1121,7 +1121,7 @@ aes_loop_par_enc_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1205,7 +1205,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1224,7 +1224,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1239,7 +1239,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1252,7 +1252,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1266,7 +1266,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1282,7 +1282,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1300,7 +1300,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1329,7 +1329,7 @@ aes_loop_par_dec_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1405,10 +1405,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM1, \TMP6
+ 	pshufd	  $78, \XMM1, \TMP2
+ 	pxor	  \XMM1, \TMP2
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP6       # TMP6 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM1       # XMM1 = a0*b0
+-	movdqa	  HashKey_4_k(%arg2), \TMP4
++	movdqu	  HashKey_4_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqa	  \XMM1, \XMMDst
+ 	movdqa	  \TMP2, \XMM1              # result in TMP6, XMMDst, XMM1
+@@ -1418,10 +1418,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM2, \TMP1
+ 	pshufd	  $78, \XMM2, \TMP2
+ 	pxor	  \XMM2, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM2       # XMM2 = a0*b0
+-	movdqa	  HashKey_3_k(%arg2), \TMP4
++	movdqu	  HashKey_3_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM2, \XMMDst
+@@ -1433,10 +1433,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM3, \TMP1
+ 	pshufd	  $78, \XMM3, \TMP2
+ 	pxor	  \XMM3, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM3       # XMM3 = a0*b0
+-	movdqa	  HashKey_2_k(%arg2), \TMP4
++	movdqu	  HashKey_2_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM3, \XMMDst
+@@ -1446,10 +1446,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM4, \TMP1
+ 	pshufd	  $78, \XMM4, \TMP2
+ 	pxor	  \XMM4, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1	    # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM4       # XMM4 = a0*b0
+-	movdqa	  HashKey_k(%arg2), \TMP4
++	movdqu	  HashKey_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM4, \XMMDst
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 7326078eaa7a..278cd07228dd 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -532,7 +532,7 @@ static int bzImage64_cleanup(void *loader_data)
+ static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
+ {
+ 	return verify_pefile_signature(kernel, kernel_len,
+-				       NULL,
++				       VERIFY_USE_SECONDARY_KEYRING,
+ 				       VERIFYING_KEXEC_PE_SIGNATURE);
+ }
+ #endif
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 46b428c0990e..bedabcf33a3e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -197,12 +197,14 @@ static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_
+ 
+ static const struct {
+ 	const char *option;
+-	enum vmx_l1d_flush_state cmd;
++	bool for_parse;
+ } vmentry_l1d_param[] = {
+-	{"auto",	VMENTER_L1D_FLUSH_AUTO},
+-	{"never",	VMENTER_L1D_FLUSH_NEVER},
+-	{"cond",	VMENTER_L1D_FLUSH_COND},
+-	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++	[VMENTER_L1D_FLUSH_AUTO]	 = {"auto", true},
++	[VMENTER_L1D_FLUSH_NEVER]	 = {"never", true},
++	[VMENTER_L1D_FLUSH_COND]	 = {"cond", true},
++	[VMENTER_L1D_FLUSH_ALWAYS]	 = {"always", true},
++	[VMENTER_L1D_FLUSH_EPT_DISABLED] = {"EPT disabled", false},
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED] = {"not required", false},
+ };
+ 
+ #define L1D_CACHE_ORDER 4
+@@ -286,8 +288,9 @@ static int vmentry_l1d_flush_parse(const char *s)
+ 
+ 	if (s) {
+ 		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
+-			if (sysfs_streq(s, vmentry_l1d_param[i].option))
+-				return vmentry_l1d_param[i].cmd;
++			if (vmentry_l1d_param[i].for_parse &&
++			    sysfs_streq(s, vmentry_l1d_param[i].option))
++				return i;
+ 		}
+ 	}
+ 	return -EINVAL;
+@@ -297,13 +300,13 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ {
+ 	int l1tf, ret;
+ 
+-	if (!boot_cpu_has(X86_BUG_L1TF))
+-		return 0;
+-
+ 	l1tf = vmentry_l1d_flush_parse(s);
+ 	if (l1tf < 0)
+ 		return l1tf;
+ 
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
+ 	/*
+ 	 * Has vmx_init() run already? If not then this is the pre init
+ 	 * parameter parsing. In that case just store the value and let
+@@ -323,6 +326,9 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ 
+ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
+ {
++	if (WARN_ON_ONCE(l1tf_vmx_mitigation >= ARRAY_SIZE(vmentry_l1d_param)))
++		return sprintf(s, "???\n");
++
+ 	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
+ }
+ 
+diff --git a/arch/xtensa/include/asm/cacheasm.h b/arch/xtensa/include/asm/cacheasm.h
+index 2041abb10a23..34545ecfdd6b 100644
+--- a/arch/xtensa/include/asm/cacheasm.h
++++ b/arch/xtensa/include/asm/cacheasm.h
+@@ -31,16 +31,32 @@
+  *
+  */
+ 
+-	.macro	__loop_cache_all ar at insn size line_width
+ 
+-	movi	\ar, 0
++	.macro	__loop_cache_unroll ar at insn size line_width max_immed
++
++	.if	(1 << (\line_width)) > (\max_immed)
++	.set	_reps, 1
++	.elseif	(2 << (\line_width)) > (\max_immed)
++	.set	_reps, 2
++	.else
++	.set	_reps, 4
++	.endif
++
++	__loopi	\ar, \at, \size, (_reps << (\line_width))
++	.set	_index, 0
++	.rep	_reps
++	\insn	\ar, _index << (\line_width)
++	.set	_index, _index + 1
++	.endr
++	__endla	\ar, \at, _reps << (\line_width)
++
++	.endm
++
+ 
+-	__loopi	\ar, \at, \size, (4 << (\line_width))
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	.macro	__loop_cache_all ar at insn size line_width max_immed
++
++	movi	\ar, 0
++	__loop_cache_unroll \ar, \at, \insn, \size, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -57,14 +73,9 @@
+ 	.endm
+ 
+ 
+-	.macro	__loop_cache_page ar at insn line_width
++	.macro	__loop_cache_page ar at insn line_width max_immed
+ 
+-	__loopi	\ar, \at, PAGE_SIZE, 4 << (\line_width)
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	__loop_cache_unroll \ar, \at, \insn, PAGE_SIZE, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -72,7 +83,8 @@
+ 	.macro	___unlock_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_LINE_LOCKABLE && XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -81,7 +93,8 @@
+ 	.macro	___unlock_icache_all ar at
+ 
+ #if XCHAL_ICACHE_LINE_LOCKABLE && XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE \
++		XCHAL_ICACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -90,7 +103,8 @@
+ 	.macro	___flush_invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -99,7 +113,8 @@
+ 	.macro	___flush_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -108,8 +123,8 @@
+ 	.macro	___invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at dii __stringify(DCACHE_WAY_SIZE) \
+-			 XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at dii XCHAL_DCACHE_SIZE \
++			 XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -118,8 +133,8 @@
+ 	.macro	___invalidate_icache_all ar at
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iii __stringify(ICACHE_WAY_SIZE) \
+-			 XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iii XCHAL_ICACHE_SIZE \
++			 XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -166,7 +181,7 @@
+ 	.macro	___flush_invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -175,7 +190,7 @@
+ 	.macro ___flush_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -184,7 +199,7 @@
+ 	.macro	___invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -193,7 +208,7 @@
+ 	.macro	___invalidate_icache_page ar as
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index a9e8633388f4..58c6efa9f9a9 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -913,7 +913,8 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
+ 	if (ret)
+ 		return ret;
+ 
+-	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	ret = bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	return ret ?: nbytes;
+ }
+ 
+ #ifdef CONFIG_DEBUG_BLK_CGROUP
+diff --git a/block/blk-core.c b/block/blk-core.c
+index ee33590f54eb..1646ea85dade 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -715,6 +715,35 @@ void blk_set_queue_dying(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+ 
++/* Unconfigure the I/O scheduler and dissociate from the cgroup controller. */
++void blk_exit_queue(struct request_queue *q)
++{
++	/*
++	 * Since the I/O scheduler exit code may access cgroup information,
++	 * perform I/O scheduler exit before disassociating from the block
++	 * cgroup controller.
++	 */
++	if (q->elevator) {
++		ioc_clear_queue(q);
++		elevator_exit(q, q->elevator);
++		q->elevator = NULL;
++	}
++
++	/*
++	 * Remove all references to @q from the block cgroup controller before
++	 * restoring @q->queue_lock to avoid that restoring this pointer causes
++	 * e.g. blkcg_print_blkgs() to crash.
++	 */
++	blkcg_exit_queue(q);
++
++	/*
++	 * Since the cgroup code may dereference the @q->backing_dev_info
++	 * pointer, only decrease its reference count after having removed the
++	 * association with the block cgroup controller.
++	 */
++	bdi_put(q->backing_dev_info);
++}
++
+ /**
+  * blk_cleanup_queue - shutdown a request queue
+  * @q: request queue to shutdown
+@@ -780,30 +809,7 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 */
+ 	WARN_ON_ONCE(q->kobj.state_in_sysfs);
+ 
+-	/*
+-	 * Since the I/O scheduler exit code may access cgroup information,
+-	 * perform I/O scheduler exit before disassociating from the block
+-	 * cgroup controller.
+-	 */
+-	if (q->elevator) {
+-		ioc_clear_queue(q);
+-		elevator_exit(q, q->elevator);
+-		q->elevator = NULL;
+-	}
+-
+-	/*
+-	 * Remove all references to @q from the block cgroup controller before
+-	 * restoring @q->queue_lock to avoid that restoring this pointer causes
+-	 * e.g. blkcg_print_blkgs() to crash.
+-	 */
+-	blkcg_exit_queue(q);
+-
+-	/*
+-	 * Since the cgroup code may dereference the @q->backing_dev_info
+-	 * pointer, only decrease its reference count after having removed the
+-	 * association with the block cgroup controller.
+-	 */
+-	bdi_put(q->backing_dev_info);
++	blk_exit_queue(q);
+ 
+ 	if (q->mq_ops)
+ 		blk_mq_free_queue(q);
+@@ -1180,6 +1186,7 @@ out_exit_flush_rq:
+ 		q->exit_rq_fn(q, q->fq->flush_rq);
+ out_free_flush_queue:
+ 	blk_free_flush_queue(q->fq);
++	q->fq = NULL;
+ 	return -ENOMEM;
+ }
+ EXPORT_SYMBOL(blk_init_allocated_queue);
+@@ -3763,9 +3770,11 @@ EXPORT_SYMBOL(blk_finish_plug);
+  */
+ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
+ {
+-	/* not support for RQF_PM and ->rpm_status in blk-mq yet */
+-	if (q->mq_ops)
++	/* Don't enable runtime PM for blk-mq until it is ready */
++	if (q->mq_ops) {
++		pm_runtime_disable(dev);
+ 		return;
++	}
+ 
+ 	q->dev = dev;
+ 	q->rpm_status = RPM_ACTIVE;
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index 8faa70f26fcd..d1b9dd03da25 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -68,6 +68,8 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 		 */
+ 		req_sects = min_t(sector_t, nr_sects,
+ 					q->limits.max_discard_sectors);
++		if (!req_sects)
++			goto fail;
+ 		if (req_sects > UINT_MAX >> 9)
+ 			req_sects = UINT_MAX >> 9;
+ 
+@@ -105,6 +107,14 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 
+ 	*biop = bio;
+ 	return 0;
++
++fail:
++	if (bio) {
++		submit_bio_wait(bio);
++		bio_put(bio);
++	}
++	*biop = NULL;
++	return -EOPNOTSUPP;
+ }
+ EXPORT_SYMBOL(__blkdev_issue_discard);
+ 
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 94987b1f69e1..96c7dfc04852 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -804,6 +804,21 @@ static void __blk_release_queue(struct work_struct *work)
+ 		blk_stat_remove_callback(q, q->poll_cb);
+ 	blk_stat_free_callback(q->poll_cb);
+ 
++	if (!blk_queue_dead(q)) {
++		/*
++		 * Last reference was dropped without having called
++		 * blk_cleanup_queue().
++		 */
++		WARN_ONCE(blk_queue_init_done(q),
++			  "request queue %p has been registered but blk_cleanup_queue() has not been called for that queue\n",
++			  q);
++		blk_exit_queue(q);
++	}
++
++	WARN(blkg_root_lookup(q),
++	     "request queue %p is being released but it has not yet been removed from the blkcg controller\n",
++	     q);
++
+ 	blk_free_queue_stats(q->stats);
+ 
+ 	blk_exit_rl(q, &q->root_rl);
+diff --git a/block/blk.h b/block/blk.h
+index 8d23aea96ce9..a8f0f7986cfd 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -130,6 +130,7 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
+ int blk_init_rl(struct request_list *rl, struct request_queue *q,
+ 		gfp_t gfp_mask);
+ void blk_exit_rl(struct request_queue *q, struct request_list *rl);
++void blk_exit_queue(struct request_queue *q);
+ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
+ 			struct bio *bio);
+ void blk_queue_bypass_start(struct request_queue *q);
+diff --git a/certs/system_keyring.c b/certs/system_keyring.c
+index 6251d1b27f0c..81728717523d 100644
+--- a/certs/system_keyring.c
++++ b/certs/system_keyring.c
+@@ -15,6 +15,7 @@
+ #include <linux/cred.h>
+ #include <linux/err.h>
+ #include <linux/slab.h>
++#include <linux/verification.h>
+ #include <keys/asymmetric-type.h>
+ #include <keys/system_keyring.h>
+ #include <crypto/pkcs7.h>
+@@ -230,7 +231,7 @@ int verify_pkcs7_signature(const void *data, size_t len,
+ 
+ 	if (!trusted_keys) {
+ 		trusted_keys = builtin_trusted_keys;
+-	} else if (trusted_keys == (void *)1UL) {
++	} else if (trusted_keys == VERIFY_USE_SECONDARY_KEYRING) {
+ #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+ 		trusted_keys = secondary_trusted_keys;
+ #else
+diff --git a/crypto/asymmetric_keys/pkcs7_key_type.c b/crypto/asymmetric_keys/pkcs7_key_type.c
+index e284d9cb9237..5b2f6a2b5585 100644
+--- a/crypto/asymmetric_keys/pkcs7_key_type.c
++++ b/crypto/asymmetric_keys/pkcs7_key_type.c
+@@ -63,7 +63,7 @@ static int pkcs7_preparse(struct key_preparsed_payload *prep)
+ 
+ 	return verify_pkcs7_signature(NULL, 0,
+ 				      prep->data, prep->datalen,
+-				      (void *)1UL, usage,
++				      VERIFY_USE_SECONDARY_KEYRING, usage,
+ 				      pkcs7_view_content, prep);
+ }
+ 
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index fe9d46d81750..d8b8fc2ff563 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -56,14 +56,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 	if (ACPI_FAILURE(status)) {
+ 		return_ACPI_STATUS(status);
+ 	}
+-	/*
+-	 * If the target sleep state is S5, clear all GPEs and fixed events too
+-	 */
+-	if (sleep_state == ACPI_STATE_S5) {
+-		status = acpi_hw_clear_acpi_status();
+-		if (ACPI_FAILURE(status)) {
+-			return_ACPI_STATUS(status);
+-		}
++	status = acpi_hw_clear_acpi_status();
++	if (ACPI_FAILURE(status)) {
++		return_ACPI_STATUS(status);
+ 	}
+ 	acpi_gbl_system_awake_and_running = FALSE;
+ 
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 44f35ab3347d..0f0bdc9d24c6 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -22,6 +22,7 @@
+ #include "acdispat.h"
+ #include "amlcode.h"
+ #include "acconvert.h"
++#include "acnamesp.h"
+ 
+ #define _COMPONENT          ACPI_PARSER
+ ACPI_MODULE_NAME("psloop")
+@@ -527,12 +528,18 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 				if (ACPI_FAILURE(status)) {
+ 					return_ACPI_STATUS(status);
+ 				}
+-				if (walk_state->opcode == AML_SCOPE_OP) {
++				if (acpi_ns_opens_scope
++				    (acpi_ps_get_opcode_info
++				     (walk_state->opcode)->object_type)) {
+ 					/*
+-					 * If the scope op fails to parse, skip the body of the
+-					 * scope op because the parse failure indicates that the
+-					 * device may not exist.
++					 * If the scope/device op fails to parse, skip the body of
++					 * the scope op because the parse failure indicates that
++					 * the device may not exist.
+ 					 */
++					ACPI_ERROR((AE_INFO,
++						    "Skip parsing opcode %s",
++						    acpi_ps_get_opcode_name
++						    (walk_state->opcode)));
+ 					walk_state->parser_state.aml =
+ 					    walk_state->aml + 1;
+ 					walk_state->parser_state.aml =
+@@ -540,8 +547,6 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 					    (&walk_state->parser_state);
+ 					walk_state->aml =
+ 					    walk_state->parser_state.aml;
+-					ACPI_ERROR((AE_INFO,
+-						    "Skipping Scope block"));
+ 				}
+ 
+ 				continue;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index a390c6d4f72d..af7cb8e618fe 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -337,6 +337,7 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t len)
+ {
+ 	char *file_name;
++	size_t sz;
+ 	struct file *backing_dev = NULL;
+ 	struct inode *inode;
+ 	struct address_space *mapping;
+@@ -357,7 +358,11 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		goto out;
+ 	}
+ 
+-	strlcpy(file_name, buf, len);
++	strlcpy(file_name, buf, PATH_MAX);
++	/* ignore trailing newline */
++	sz = strlen(file_name);
++	if (sz > 0 && file_name[sz - 1] == '\n')
++		file_name[sz - 1] = 0x00;
+ 
+ 	backing_dev = filp_open(file_name, O_RDWR|O_LARGEFILE, 0);
+ 	if (IS_ERR(backing_dev)) {
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index 1d50e97d49f1..6d53f7d9fc7a 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -555,12 +555,20 @@ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop);
+ 
+ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
+ {
+-	struct policy_dbs_info *policy_dbs = policy->governor_data;
++	struct policy_dbs_info *policy_dbs;
++
++	/* Protect gov->gdbs_data against cpufreq_dbs_governor_exit() */
++	mutex_lock(&gov_dbs_data_mutex);
++	policy_dbs = policy->governor_data;
++	if (!policy_dbs)
++		goto out;
+ 
+ 	mutex_lock(&policy_dbs->update_mutex);
+ 	cpufreq_policy_apply_limits(policy);
+ 	gov_update_sample_delay(policy_dbs, 0);
+-
+ 	mutex_unlock(&policy_dbs->update_mutex);
++
++out:
++	mutex_unlock(&gov_dbs_data_mutex);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 1aef60d160eb..910f8a68f58b 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -349,14 +349,12 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		 * If the tick is already stopped, the cost of possible short
+ 		 * idle duration misprediction is much higher, because the CPU
+ 		 * may be stuck in a shallow idle state for a long time as a
+-		 * result of it.  In that case say we might mispredict and try
+-		 * to force the CPU into a state for which we would have stopped
+-		 * the tick, unless a timer is going to expire really soon
+-		 * anyway.
++		 * result of it.  In that case say we might mispredict and use
++		 * the known time till the closest timer event for the idle
++		 * state selection.
+ 		 */
+ 		if (data->predicted_us < TICK_USEC)
+-			data->predicted_us = min_t(unsigned int, TICK_USEC,
+-						   ktime_to_us(delta_next));
++			data->predicted_us = ktime_to_us(delta_next);
+ 	} else {
+ 		/*
+ 		 * Use the performance multiplier and the user-configurable
+@@ -381,8 +379,33 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 			continue;
+ 		if (idx == -1)
+ 			idx = i; /* first enabled state */
+-		if (s->target_residency > data->predicted_us)
+-			break;
++		if (s->target_residency > data->predicted_us) {
++			if (data->predicted_us < TICK_USEC)
++				break;
++
++			if (!tick_nohz_tick_stopped()) {
++				/*
++				 * If the state selected so far is shallow,
++				 * waking up early won't hurt, so retain the
++				 * tick in that case and let the governor run
++				 * again in the next iteration of the loop.
++				 */
++				expected_interval = drv->states[idx].target_residency;
++				break;
++			}
++
++			/*
++			 * If the state selected so far is shallow and this
++			 * state's target residency matches the time till the
++			 * closest timer event, select this one to avoid getting
++			 * stuck in the shallow one for too long.
++			 */
++			if (drv->states[idx].target_residency < TICK_USEC &&
++			    s->target_residency <= ktime_to_us(delta_next))
++				idx = i;
++
++			goto out;
++		}
+ 		if (s->exit_latency > latency_req) {
+ 			/*
+ 			 * If we break out of the loop for latency reasons, use
+@@ -403,14 +426,13 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	 * Don't stop the tick if the selected state is a polling one or if the
+ 	 * expected idle duration is shorter than the tick period length.
+ 	 */
+-	if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+-	    expected_interval < TICK_USEC) {
++	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
++	     expected_interval < TICK_USEC) && !tick_nohz_tick_stopped()) {
+ 		unsigned int delta_next_us = ktime_to_us(delta_next);
+ 
+ 		*stop_tick = false;
+ 
+-		if (!tick_nohz_tick_stopped() && idx > 0 &&
+-		    drv->states[idx].target_residency > delta_next_us) {
++		if (idx > 0 && drv->states[idx].target_residency > delta_next_us) {
+ 			/*
+ 			 * The tick is not going to be stopped and the target
+ 			 * residency of the state to be returned is not within
+@@ -429,6 +451,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		}
+ 	}
+ 
++out:
+ 	data->last_state_idx = idx;
+ 
+ 	return data->last_state_idx;
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 6e61cc93c2b0..d7aa7d7ff102 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -679,10 +679,8 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	int ret = 0;
+ 
+ 	if (keylen != 2 * AES_MIN_KEY_SIZE  && keylen != 2 * AES_MAX_KEY_SIZE) {
+-		crypto_ablkcipher_set_flags(ablkcipher,
+-					    CRYPTO_TFM_RES_BAD_KEY_LEN);
+ 		dev_err(jrdev, "key size mismatch\n");
+-		return -EINVAL;
++		goto badkey;
+ 	}
+ 
+ 	ctx->cdata.keylen = keylen;
+@@ -715,7 +713,7 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	return ret;
+ badkey:
+ 	crypto_ablkcipher_set_flags(ablkcipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ /*
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 578ea63a3109..f26d62e5533a 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -71,8 +71,8 @@ static void rsa_priv_f2_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->d_dma, key->d_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->p_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+@@ -90,8 +90,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->dp_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->dq_dma, q_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ /* RSA Job Completion handler */
+@@ -417,13 +417,13 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 		goto unmap_p;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_q;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -451,7 +451,7 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_q:
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+ unmap_p:
+@@ -504,13 +504,13 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 		goto unmap_dq;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_qinv;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -538,7 +538,7 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_qinv:
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+ unmap_dq:
+diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
+index f4f258075b89..acdd72016ffe 100644
+--- a/drivers/crypto/caam/jr.c
++++ b/drivers/crypto/caam/jr.c
+@@ -190,7 +190,8 @@ static void caam_jr_dequeue(unsigned long devarg)
+ 		BUG_ON(CIRC_CNT(head, tail + i, JOBR_DEPTH) <= 0);
+ 
+ 		/* Unmap just-run descriptor so we can post-process */
+-		dma_unmap_single(dev, jrp->outring[hw_idx].desc,
++		dma_unmap_single(dev,
++				 caam_dma_to_cpu(jrp->outring[hw_idx].desc),
+ 				 jrp->entinfo[sw_idx].desc_size,
+ 				 DMA_TO_DEVICE);
+ 
+diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
+index 5285ece4f33a..b71895871be3 100644
+--- a/drivers/crypto/vmx/aes_cbc.c
++++ b/drivers/crypto/vmx/aes_cbc.c
+@@ -107,24 +107,23 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_encrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->enc_key, walk.iv, 1);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+@@ -147,24 +146,23 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->dec_key, walk.iv, 0);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
+index 8bd9aff0f55f..e9954a7d4694 100644
+--- a/drivers/crypto/vmx/aes_xts.c
++++ b/drivers/crypto/vmx/aes_xts.c
+@@ -116,32 +116,39 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
+ 		ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
++		blkcipher_walk_init(&walk, dst, src, nbytes);
++
++		ret = blkcipher_walk_virt(desc, &walk);
++
+ 		preempt_disable();
+ 		pagefault_disable();
+ 		enable_kernel_vsx();
+ 
+-		blkcipher_walk_init(&walk, dst, src, nbytes);
+-
+-		ret = blkcipher_walk_virt(desc, &walk);
+ 		iv = walk.iv;
+ 		memset(tweak, 0, AES_BLOCK_SIZE);
+ 		aes_p8_encrypt(iv, tweak, &ctx->tweak_key);
+ 
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			if (enc)
+ 				aes_p8_xts_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->enc_key, NULL, tweak);
+ 			else
+ 				aes_p8_xts_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->dec_key, NULL, tweak);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
+ 
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
+index 314eb1071cce..532545b9488e 100644
+--- a/drivers/dma-buf/reservation.c
++++ b/drivers/dma-buf/reservation.c
+@@ -141,6 +141,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
+ 	if (signaled) {
+ 		RCU_INIT_POINTER(fobj->shared[signaled_idx], fence);
+ 	} else {
++		BUG_ON(fobj->shared_count >= fobj->shared_max);
+ 		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
+ 		fobj->shared_count++;
+ 	}
+@@ -230,10 +231,9 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
+ 	old = reservation_object_get_list(obj);
+ 	obj->staged = NULL;
+ 
+-	if (!fobj) {
+-		BUG_ON(old->shared_count >= old->shared_max);
++	if (!fobj)
+ 		reservation_object_add_shared_inplace(obj, old, fence);
+-	} else
++	else
+ 		reservation_object_add_shared_replace(obj, old, fobj, fence);
+ }
+ EXPORT_SYMBOL(reservation_object_add_shared_fence);
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index af83ad58819c..b9d27c8fe57e 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -433,8 +433,8 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 		return index;
+ 
+ 	spin_lock_irqsave(&edev->lock, flags);
+-
+ 	state = !!(edev->state & BIT(index));
++	spin_unlock_irqrestore(&edev->lock, flags);
+ 
+ 	/*
+ 	 * Call functions in a raw notifier chain for the specific one
+@@ -448,6 +448,7 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 	 */
+ 	raw_notifier_call_chain(&edev->nh_all, state, edev);
+ 
++	spin_lock_irqsave(&edev->lock, flags);
+ 	/* This could be in interrupt handler */
+ 	prop_buf = (char *)get_zeroed_page(GFP_ATOMIC);
+ 	if (!prop_buf) {
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index ba0a092ae085..c3949220b770 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -558,11 +558,8 @@ static void reset_channel_cb(void *arg)
+ 	channel->onchannel_callback = NULL;
+ }
+ 
+-static int vmbus_close_internal(struct vmbus_channel *channel)
++void vmbus_reset_channel_cb(struct vmbus_channel *channel)
+ {
+-	struct vmbus_channel_close_channel *msg;
+-	int ret;
+-
+ 	/*
+ 	 * vmbus_on_event(), running in the per-channel tasklet, can race
+ 	 * with vmbus_close_internal() in the case of SMP guest, e.g., when
+@@ -572,6 +569,29 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	 */
+ 	tasklet_disable(&channel->callback_event);
+ 
++	channel->sc_creation_callback = NULL;
++
++	/* Stop the callback asap */
++	if (channel->target_cpu != get_cpu()) {
++		put_cpu();
++		smp_call_function_single(channel->target_cpu, reset_channel_cb,
++					 channel, true);
++	} else {
++		reset_channel_cb(channel);
++		put_cpu();
++	}
++
++	/* Re-enable tasklet for use on re-open */
++	tasklet_enable(&channel->callback_event);
++}
++
++static int vmbus_close_internal(struct vmbus_channel *channel)
++{
++	struct vmbus_channel_close_channel *msg;
++	int ret;
++
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * In case a device driver's probe() fails (e.g.,
+ 	 * util_probe() -> vmbus_open() returns -ENOMEM) and the device is
+@@ -585,16 +605,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	}
+ 
+ 	channel->state = CHANNEL_OPEN_STATE;
+-	channel->sc_creation_callback = NULL;
+-	/* Stop callback and cancel the timer asap */
+-	if (channel->target_cpu != get_cpu()) {
+-		put_cpu();
+-		smp_call_function_single(channel->target_cpu, reset_channel_cb,
+-					 channel, true);
+-	} else {
+-		reset_channel_cb(channel);
+-		put_cpu();
+-	}
+ 
+ 	/* Send a closing message */
+ 
+@@ -639,8 +649,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 		get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
+ 
+ out:
+-	/* re-enable tasklet for use on re-open */
+-	tasklet_enable(&channel->callback_event);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index ecc2bd275a73..0f0e091c117c 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -527,10 +527,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ 		struct hv_device *dev
+ 			= newchannel->primary_channel->device_obj;
+ 
+-		if (vmbus_add_channel_kobj(dev, newchannel)) {
+-			atomic_dec(&vmbus_connection.offer_in_progress);
++		if (vmbus_add_channel_kobj(dev, newchannel))
+ 			goto err_free_chan;
+-		}
+ 
+ 		if (channel->sc_creation_callback != NULL)
+ 			channel->sc_creation_callback(newchannel);
+@@ -894,6 +892,12 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
+ 		return;
+ 	}
+ 
++	/*
++	 * Before setting channel->rescind in vmbus_rescind_cleanup(), we
++	 * should make sure the channel callback is not running any more.
++	 */
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * Now wait for offer handling to complete.
+ 	 */
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 27436a937492..54b2a3a86677 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -693,7 +693,6 @@ int i2c_dw_probe(struct dw_i2c_dev *dev)
+ 	i2c_set_adapdata(adap, dev);
+ 
+ 	if (dev->pm_disabled) {
+-		dev_pm_syscore_device(dev->dev, true);
+ 		irq_flags = IRQF_NO_SUSPEND;
+ 	} else {
+ 		irq_flags = IRQF_SHARED | IRQF_COND_SUSPEND;
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 5660daf6c92e..d281d21cdd8e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -448,6 +448,9 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
++	if (i_dev->pm_disabled)
++		return 0;
++
+ 	i_dev->disable(i_dev);
+ 	i2c_dw_prepare_clk(i_dev, false);
+ 
+@@ -458,7 +461,9 @@ static int dw_i2c_plat_resume(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
+-	i2c_dw_prepare_clk(i_dev, true);
++	if (!i_dev->pm_disabled)
++		i2c_dw_prepare_clk(i_dev, true);
++
+ 	i_dev->init(i_dev);
+ 
+ 	return 0;
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index 4dceb75e3586..4964561595f5 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -797,6 +797,7 @@ static int sca3000_write_raw(struct iio_dev *indio_dev,
+ 		mutex_lock(&st->lock);
+ 		ret = sca3000_write_3db_freq(st, val);
+ 		mutex_unlock(&st->lock);
++		return ret;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/iio/frequency/ad9523.c b/drivers/iio/frequency/ad9523.c
+index ddb6a334ae68..8e8263758439 100644
+--- a/drivers/iio/frequency/ad9523.c
++++ b/drivers/iio/frequency/ad9523.c
+@@ -508,7 +508,7 @@ static ssize_t ad9523_store(struct device *dev,
+ 		return ret;
+ 
+ 	if (!state)
+-		return 0;
++		return len;
+ 
+ 	mutex_lock(&indio_dev->mlock);
+ 	switch ((u32)this_attr->address) {
+@@ -642,7 +642,7 @@ static int ad9523_read_raw(struct iio_dev *indio_dev,
+ 		code = (AD9523_CLK_DIST_DIV_PHASE_REV(ret) * 3141592) /
+ 			AD9523_CLK_DIST_DIV_REV(ret);
+ 		*val = code / 1000000;
+-		*val2 = (code % 1000000) * 10;
++		*val2 = code % 1000000;
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b3ba9a222550..cbeae4509359 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4694,7 +4694,7 @@ static void mlx5_ib_dealloc_counters(struct mlx5_ib_dev *dev)
+ 	int i;
+ 
+ 	for (i = 0; i < dev->num_ports; i++) {
+-		if (dev->port[i].cnts.set_id)
++		if (dev->port[i].cnts.set_id_valid)
+ 			mlx5_core_dealloc_q_counter(dev->mdev,
+ 						    dev->port[i].cnts.set_id);
+ 		kfree(dev->port[i].cnts.names);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index a4f1f638509f..01eae67d5a6e 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1626,7 +1626,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+ 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+ 	struct mlx5_core_dev *mdev = dev->mdev;
+-	struct mlx5_ib_create_qp_resp resp;
++	struct mlx5_ib_create_qp_resp resp = {};
+ 	struct mlx5_ib_cq *send_cq;
+ 	struct mlx5_ib_cq *recv_cq;
+ 	unsigned long flags;
+@@ -5365,7 +5365,9 @@ static int set_user_rq_size(struct mlx5_ib_dev *dev,
+ 
+ 	rwq->wqe_count = ucmd->rq_wqe_count;
+ 	rwq->wqe_shift = ucmd->rq_wqe_shift;
+-	rwq->buf_size = (rwq->wqe_count << rwq->wqe_shift);
++	if (check_shl_overflow(rwq->wqe_count, rwq->wqe_shift, &rwq->buf_size))
++		return -EINVAL;
++
+ 	rwq->log_rq_stride = rwq->wqe_shift;
+ 	rwq->log_rq_size = ilog2(rwq->wqe_count);
+ 	return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 98d470d1f3fc..83311dd07019 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -276,6 +276,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
+ 	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
+ 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
+ 		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
++			wqe->status = IB_WC_FATAL_ERR;
+ 			return COMPST_ERROR;
+ 		}
+ 		reset_retry_counters(qp);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 3081c629a7f7..8a9633e97bec 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -1833,8 +1833,7 @@ static bool srpt_close_ch(struct srpt_rdma_ch *ch)
+ 	int ret;
+ 
+ 	if (!srpt_set_ch_state(ch, CH_DRAINING)) {
+-		pr_debug("%s-%d: already closed\n", ch->sess_name,
+-			 ch->qp->qp_num);
++		pr_debug("%s: already closed\n", ch->sess_name);
+ 		return false;
+ 	}
+ 
+@@ -1940,8 +1939,8 @@ static void __srpt_close_all_ch(struct srpt_port *sport)
+ 	list_for_each_entry(nexus, &sport->nexus_list, entry) {
+ 		list_for_each_entry(ch, &nexus->ch_list, list) {
+ 			if (srpt_disconnect_ch(ch) >= 0)
+-				pr_info("Closing channel %s-%d because target %s_%d has been disabled\n",
+-					ch->sess_name, ch->qp->qp_num,
++				pr_info("Closing channel %s because target %s_%d has been disabled\n",
++					ch->sess_name,
+ 					sport->sdev->device->name, sport->port);
+ 			srpt_close_ch(ch);
+ 		}
+@@ -2087,7 +2086,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		struct rdma_conn_param rdma_cm;
+ 		struct ib_cm_rep_param ib_cm;
+ 	} *rep_param = NULL;
+-	struct srpt_rdma_ch *ch;
++	struct srpt_rdma_ch *ch = NULL;
+ 	char i_port_id[36];
+ 	u32 it_iu_len;
+ 	int i, ret;
+@@ -2234,13 +2233,15 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 						TARGET_PROT_NORMAL,
+ 						i_port_id + 2, ch, NULL);
+ 	if (IS_ERR_OR_NULL(ch->sess)) {
++		WARN_ON_ONCE(ch->sess == NULL);
+ 		ret = PTR_ERR(ch->sess);
++		ch->sess = NULL;
+ 		pr_info("Rejected login for initiator %s: ret = %d.\n",
+ 			ch->sess_name, ret);
+ 		rej->reason = cpu_to_be32(ret == -ENOMEM ?
+ 				SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES :
+ 				SRP_LOGIN_REJ_CHANNEL_LIMIT_REACHED);
+-		goto reject;
++		goto destroy_ib;
+ 	}
+ 
+ 	mutex_lock(&sport->mutex);
+@@ -2279,7 +2280,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
+ 		pr_err("rejected SRP_LOGIN_REQ because enabling RTR failed (error code = %d)\n",
+ 		       ret);
+-		goto destroy_ib;
++		goto reject;
+ 	}
+ 
+ 	pr_debug("Establish connection sess=%p name=%s ch=%p\n", ch->sess,
+@@ -2358,8 +2359,11 @@ free_ring:
+ 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
+ 			     ch->sport->sdev, ch->rq_size,
+ 			     ch->max_rsp_size, DMA_TO_DEVICE);
++
+ free_ch:
+-	if (ib_cm_id)
++	if (rdma_cm_id)
++		rdma_cm_id->context = NULL;
++	else
+ 		ib_cm_id->context = NULL;
+ 	kfree(ch);
+ 	ch = NULL;
+@@ -2379,6 +2383,15 @@ reject:
+ 		ib_send_cm_rej(ib_cm_id, IB_CM_REJ_CONSUMER_DEFINED, NULL, 0,
+ 			       rej, sizeof(*rej));
+ 
++	if (ch && ch->sess) {
++		srpt_close_ch(ch);
++		/*
++		 * Tell the caller not to free cm_id since
++		 * srpt_release_channel_work() will do that.
++		 */
++		ret = 0;
++	}
++
+ out:
+ 	kfree(rep_param);
+ 	kfree(rsp);
+@@ -2969,7 +2982,8 @@ static void srpt_add_one(struct ib_device *device)
+ 
+ 	pr_debug("device = %p\n", device);
+ 
+-	sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
++	sdev = kzalloc(struct_size(sdev, port, device->phys_port_cnt),
++		       GFP_KERNEL);
+ 	if (!sdev)
+ 		goto err;
+ 
+@@ -3023,8 +3037,6 @@ static void srpt_add_one(struct ib_device *device)
+ 			      srpt_event_handler);
+ 	ib_register_event_handler(&sdev->event_handler);
+ 
+-	WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port));
+-
+ 	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ 		sport = &sdev->port[i - 1];
+ 		INIT_LIST_HEAD(&sport->nexus_list);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
+index 2361483476a0..444dfd7281b5 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
+@@ -396,9 +396,9 @@ struct srpt_port {
+  * @sdev_mutex:	   Serializes use_srq changes.
+  * @use_srq:       Whether or not to use SRQ.
+  * @ioctx_ring:    Per-HCA SRQ.
+- * @port:          Information about the ports owned by this HCA.
+  * @event_handler: Per-HCA asynchronous IB event handler.
+  * @list:          Node in srpt_dev_list.
++ * @port:          Information about the ports owned by this HCA.
+  */
+ struct srpt_device {
+ 	struct ib_device	*device;
+@@ -410,9 +410,9 @@ struct srpt_device {
+ 	struct mutex		sdev_mutex;
+ 	bool			use_srq;
+ 	struct srpt_recv_ioctx	**ioctx_ring;
+-	struct srpt_port	port[2];
+ 	struct ib_event_handler	event_handler;
+ 	struct list_head	list;
++	struct srpt_port        port[];
+ };
+ 
+ #endif				/* IB_SRPT_H */
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 75456b5aa825..d9c748b6f9e4 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -1339,8 +1339,8 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 	qi_submit_sync(&desc, iommu);
+ }
+ 
+-void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			u64 addr, unsigned mask)
++void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask)
+ {
+ 	struct qi_desc desc;
+ 
+@@ -1355,7 +1355,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+ 		qdep = 0;
+ 
+ 	desc.low = QI_DEV_IOTLB_SID(sid) | QI_DEV_IOTLB_QDEP(qdep) |
+-		   QI_DIOTLB_TYPE;
++		   QI_DIOTLB_TYPE | QI_DEV_IOTLB_PFSID(pfsid);
+ 
+ 	qi_submit_sync(&desc, iommu);
+ }
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 115ff26e9ced..07dc938199f9 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -421,6 +421,7 @@ struct device_domain_info {
+ 	struct list_head global; /* link to global list */
+ 	u8 bus;			/* PCI bus number */
+ 	u8 devfn;		/* PCI devfn number */
++	u16 pfsid;		/* SRIOV physical function source ID */
+ 	u8 pasid_supported:3;
+ 	u8 pasid_enabled:1;
+ 	u8 pri_supported:1;
+@@ -1501,6 +1502,20 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
+ 		return;
+ 
+ 	pdev = to_pci_dev(info->dev);
++	/* For IOMMU that supports device IOTLB throttling (DIT), we assign
++	 * PFSID to the invalidation desc of a VF such that IOMMU HW can gauge
++	 * queue depth at PF level. If DIT is not set, PFSID will be treated as
++	 * reserved, which should be set to 0.
++	 */
++	if (!ecap_dit(info->iommu->ecap))
++		info->pfsid = 0;
++	else {
++		struct pci_dev *pf_pdev;
++
++		/* pdev will be returned if device is not a vf */
++		pf_pdev = pci_physfn(pdev);
++		info->pfsid = PCI_DEVID(pf_pdev->bus->number, pf_pdev->devfn);
++	}
+ 
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ 	/* The PCIe spec, in its wisdom, declares that the behaviour of
+@@ -1566,7 +1581,8 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
+ 
+ 		sid = info->bus << 8 | info->devfn;
+ 		qdep = info->ats_qdep;
+-		qi_flush_dev_iotlb(info->iommu, sid, qdep, addr, mask);
++		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
++				qdep, addr, mask);
+ 	}
+ 	spin_unlock_irqrestore(&device_domain_lock, flags);
+ }
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 40ae6e87cb88..09b47260c74b 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1081,12 +1081,19 @@ static struct platform_driver ipmmu_driver = {
+ 
+ static int __init ipmmu_init(void)
+ {
++	struct device_node *np;
+ 	static bool setup_done;
+ 	int ret;
+ 
+ 	if (setup_done)
+ 		return 0;
+ 
++	np = of_find_matching_node(NULL, ipmmu_of_ids);
++	if (!np)
++		return 0;
++
++	of_node_put(np);
++
+ 	ret = platform_driver_register(&ipmmu_driver);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/mailbox/mailbox-xgene-slimpro.c b/drivers/mailbox/mailbox-xgene-slimpro.c
+index a7040163dd43..b8b2b3533f46 100644
+--- a/drivers/mailbox/mailbox-xgene-slimpro.c
++++ b/drivers/mailbox/mailbox-xgene-slimpro.c
+@@ -195,9 +195,9 @@ static int slimpro_mbox_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, ctx);
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	mb_base = devm_ioremap(&pdev->dev, regs->start, resource_size(regs));
+-	if (!mb_base)
+-		return -ENOMEM;
++	mb_base = devm_ioremap_resource(&pdev->dev, regs);
++	if (IS_ERR(mb_base))
++		return PTR_ERR(mb_base);
+ 
+ 	/* Setup mailbox links */
+ 	for (i = 0; i < MBOX_CNT; i++) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index ad45ebe1a74b..6c33923c2c35 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -645,8 +645,10 @@ static int bch_writeback_thread(void *arg)
+ 			 * data on cache. BCACHE_DEV_DETACHING flag is set in
+ 			 * bch_cached_dev_detach().
+ 			 */
+-			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
++			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)) {
++				up_write(&dc->writeback_lock);
+ 				break;
++			}
+ 		}
+ 
+ 		up_write(&dc->writeback_lock);
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 0d7212410e21..69dddeab124c 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -363,7 +363,7 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
+ 	disk_super->version = cpu_to_le32(cmd->version);
+ 	memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
+ 	memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
+-	disk_super->policy_hint_size = 0;
++	disk_super->policy_hint_size = cpu_to_le32(0);
+ 
+ 	__copy_sm_root(cmd, disk_super);
+ 
+@@ -701,6 +701,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
+ 	disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
+ 	disk_super->policy_version[1] = cpu_to_le32(cmd->policy_version[1]);
+ 	disk_super->policy_version[2] = cpu_to_le32(cmd->policy_version[2]);
++	disk_super->policy_hint_size = cpu_to_le32(cmd->policy_hint_size);
+ 
+ 	disk_super->read_hits = cpu_to_le32(cmd->stats.read_hits);
+ 	disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
+@@ -1322,6 +1323,7 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1332,8 +1334,10 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = flags & M_DIRTY;
+ 
+-		r = fn(context, oblock, to_cblock(cb), flags & M_DIRTY,
++		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+ 			DMERR("policy couldn't load cache block %llu",
+@@ -1361,7 +1365,7 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
+-	bool dirty;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1372,8 +1376,9 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 
+-		dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index b61b069c33af..3fdec1147221 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3069,11 +3069,11 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	 */
+ 	limits->max_segment_size = PAGE_SIZE;
+ 
+-	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+-		limits->logical_block_size = cc->sector_size;
+-		limits->physical_block_size = cc->sector_size;
+-		blk_limits_io_min(limits, cc->sector_size);
+-	}
++	limits->logical_block_size =
++		max_t(unsigned short, limits->logical_block_size, cc->sector_size);
++	limits->physical_block_size =
++		max_t(unsigned, limits->physical_block_size, cc->sector_size);
++	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+ }
+ 
+ static struct target_type crypt_target = {
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 86438b2f10dd..0a8a4c2aa3ea 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -178,7 +178,7 @@ struct dm_integrity_c {
+ 	__u8 sectors_per_block;
+ 
+ 	unsigned char mode;
+-	bool suspending;
++	int suspending;
+ 
+ 	int failed;
+ 
+@@ -2210,7 +2210,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 
+ 	del_timer_sync(&ic->autocommit_timer);
+ 
+-	ic->suspending = true;
++	WRITE_ONCE(ic->suspending, 1);
+ 
+ 	queue_work(ic->commit_wq, &ic->commit_work);
+ 	drain_workqueue(ic->commit_wq);
+@@ -2220,7 +2220,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 		dm_integrity_flush_buffers(ic);
+ 	}
+ 
+-	ic->suspending = false;
++	WRITE_ONCE(ic->suspending, 0);
+ 
+ 	BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
+ 
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index b900723bbd0f..1087f6a1ac79 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2520,6 +2520,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 	case PM_WRITE:
+ 		if (old_mode != new_mode)
+ 			notify_of_pool_mode_change(pool, "write");
++		if (old_mode == PM_OUT_OF_DATA_SPACE)
++			cancel_delayed_work_sync(&pool->no_space_timeout);
+ 		pool->out_of_data_space = false;
+ 		pool->pf.error_if_no_space = pt->requested_pf.error_if_no_space;
+ 		dm_pool_metadata_read_write(pool->pmd);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 87107c995cb5..7669069005e9 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -457,7 +457,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc)
+ 		COMPLETION_INITIALIZER_ONSTACK(endio.c),
+ 		ATOMIC_INIT(1),
+ 	};
+-	unsigned bitmap_bits = wc->dirty_bitmap_size * BITS_PER_LONG;
++	unsigned bitmap_bits = wc->dirty_bitmap_size * 8;
+ 	unsigned i = 0;
+ 
+ 	while (1) {
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index b162c2fe62c3..76e6bed5a1da 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -872,7 +872,7 @@ static int tvp5150_fill_fmt(struct v4l2_subdev *sd,
+ 	f = &format->format;
+ 
+ 	f->width = decoder->rect.width;
+-	f->height = decoder->rect.height;
++	f->height = decoder->rect.height / 2;
+ 
+ 	f->code = MEDIA_BUS_FMT_UYVY8_2X8;
+ 	f->field = V4L2_FIELD_ALTERNATE;
+diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c
+index c37ccbfd52f2..96c07fa1802a 100644
+--- a/drivers/mfd/hi655x-pmic.c
++++ b/drivers/mfd/hi655x-pmic.c
+@@ -49,7 +49,7 @@ static struct regmap_config hi655x_regmap_config = {
+ 	.reg_bits = 32,
+ 	.reg_stride = HI655X_STRIDE,
+ 	.val_bits = 8,
+-	.max_register = HI655X_BUS_ADDR(0xFFF),
++	.max_register = HI655X_BUS_ADDR(0x400) - HI655X_STRIDE,
+ };
+ 
+ static struct resource pwrkey_resources[] = {
+diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
+index c1ba0d42cbc8..e0f29b8a872d 100644
+--- a/drivers/misc/cxl/main.c
++++ b/drivers/misc/cxl/main.c
+@@ -287,7 +287,7 @@ int cxl_adapter_context_get(struct cxl *adapter)
+ 	int rc;
+ 
+ 	rc = atomic_inc_unless_negative(&adapter->contexts_num);
+-	return rc >= 0 ? 0 : -EBUSY;
++	return rc ? 0 : -EBUSY;
+ }
+ 
+ void cxl_adapter_context_put(struct cxl *adapter)
+diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c
+index 88876ae8f330..a963b0a4a3c5 100644
+--- a/drivers/misc/ocxl/link.c
++++ b/drivers/misc/ocxl/link.c
+@@ -136,7 +136,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	int rc;
+ 
+ 	/*
+-	 * We need to release a reference on the mm whenever exiting this
++	 * We must release a reference on mm_users whenever exiting this
+ 	 * function (taken in the memory fault interrupt handler)
+ 	 */
+ 	rc = copro_handle_mm_fault(fault->pe_data.mm, fault->dar, fault->dsisr,
+@@ -172,7 +172,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	}
+ 	r = RESTART;
+ ack:
+-	mmdrop(fault->pe_data.mm);
++	mmput(fault->pe_data.mm);
+ 	ack_irq(spa, r);
+ }
+ 
+@@ -184,6 +184,7 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	struct pe_data *pe_data;
+ 	struct ocxl_process_element *pe;
+ 	int lpid, pid, tid;
++	bool schedule = false;
+ 
+ 	read_irq(spa, &dsisr, &dar, &pe_handle);
+ 	trace_ocxl_fault(spa->spa_mem, pe_handle, dsisr, dar, -1);
+@@ -226,14 +227,19 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	}
+ 	WARN_ON(pe_data->mm->context.id != pid);
+ 
+-	spa->xsl_fault.pe = pe_handle;
+-	spa->xsl_fault.dar = dar;
+-	spa->xsl_fault.dsisr = dsisr;
+-	spa->xsl_fault.pe_data = *pe_data;
+-	mmgrab(pe_data->mm); /* mm count is released by bottom half */
+-
++	if (mmget_not_zero(pe_data->mm)) {
++			spa->xsl_fault.pe = pe_handle;
++			spa->xsl_fault.dar = dar;
++			spa->xsl_fault.dsisr = dsisr;
++			spa->xsl_fault.pe_data = *pe_data;
++			schedule = true;
++			/* mm_users count released by bottom half */
++	}
+ 	rcu_read_unlock();
+-	schedule_work(&spa->xsl_fault.fault_work);
++	if (schedule)
++		schedule_work(&spa->xsl_fault.fault_work);
++	else
++		ack_irq(spa, ADDRESS_ERROR);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 56c6f79a5c5a..5f8b583c6e41 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -341,7 +341,13 @@ static bool vmballoon_send_start(struct vmballoon *b, unsigned long req_caps)
+ 		success = false;
+ 	}
+ 
+-	if (b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS)
++	/*
++	 * 2MB pages are only supported with batching. If batching is for some
++	 * reason disabled, do not use 2MB pages, since otherwise the legacy
++	 * mechanism is used with 2MB pages, causing a failure.
++	 */
++	if ((b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) &&
++	    (b->capabilities & VMW_BALLOON_BATCHED_CMDS))
+ 		b->supported_page_sizes = 2;
+ 	else
+ 		b->supported_page_sizes = 1;
+@@ -450,7 +456,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pfn32 = (u32)pfn;
+ 	if (pfn32 != pfn)
+-		return -1;
++		return -EINVAL;
+ 
+ 	STATS_INC(b->stats.lock[false]);
+ 
+@@ -460,7 +466,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status);
+ 	STATS_INC(b->stats.lock_fail[false]);
+-	return 1;
++	return -EIO;
+ }
+ 
+ static int vmballoon_send_batched_lock(struct vmballoon *b,
+@@ -597,11 +603,12 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 
+ 	locked = vmballoon_send_lock_page(b, page_to_pfn(page), &hv_status,
+ 								target);
+-	if (locked > 0) {
++	if (locked) {
+ 		STATS_INC(b->stats.refused_alloc[false]);
+ 
+-		if (hv_status == VMW_BALLOON_ERROR_RESET ||
+-				hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED) {
++		if (locked == -EIO &&
++		    (hv_status == VMW_BALLOON_ERROR_RESET ||
++		     hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED)) {
+ 			vmballoon_free_page(page, false);
+ 			return -EIO;
+ 		}
+@@ -617,7 +624,7 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 		} else {
+ 			vmballoon_free_page(page, false);
+ 		}
+-		return -EIO;
++		return locked;
+ 	}
+ 
+ 	/* track allocated page */
+@@ -1029,29 +1036,30 @@ static void vmballoon_vmci_cleanup(struct vmballoon *b)
+  */
+ static int vmballoon_vmci_init(struct vmballoon *b)
+ {
+-	int error = 0;
++	unsigned long error, dummy;
+ 
+-	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) != 0) {
+-		error = vmci_doorbell_create(&b->vmci_doorbell,
+-				VMCI_FLAG_DELAYED_CB,
+-				VMCI_PRIVILEGE_FLAG_RESTRICTED,
+-				vmballoon_doorbell, b);
+-
+-		if (error == VMCI_SUCCESS) {
+-			VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET,
+-					b->vmci_doorbell.context,
+-					b->vmci_doorbell.resource, error);
+-			STATS_INC(b->stats.doorbell_set);
+-		}
+-	}
++	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) == 0)
++		return 0;
+ 
+-	if (error != 0) {
+-		vmballoon_vmci_cleanup(b);
++	error = vmci_doorbell_create(&b->vmci_doorbell, VMCI_FLAG_DELAYED_CB,
++				     VMCI_PRIVILEGE_FLAG_RESTRICTED,
++				     vmballoon_doorbell, b);
+ 
+-		return -EIO;
+-	}
++	if (error != VMCI_SUCCESS)
++		goto fail;
++
++	error = VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, b->vmci_doorbell.context,
++				   b->vmci_doorbell.resource, dummy);
++
++	STATS_INC(b->stats.doorbell_set);
++
++	if (error != VMW_BALLOON_SUCCESS)
++		goto fail;
+ 
+ 	return 0;
++fail:
++	vmballoon_vmci_cleanup(b);
++	return -EIO;
+ }
+ 
+ /*
+@@ -1289,7 +1297,14 @@ static int __init vmballoon_init(void)
+ 
+ 	return 0;
+ }
+-module_init(vmballoon_init);
++
++/*
++ * Using late_initcall() instead of module_init() allows the balloon to use the
++ * VMCI doorbell even when the balloon is built into the kernel. Otherwise the
++ * VMCI is probed only after the balloon is initialized. If the balloon is used
++ * as a module, late_initcall() is equivalent to module_init().
++ */
++late_initcall(vmballoon_init);
+ 
+ static void __exit vmballoon_exit(void)
+ {
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 648eb6743ed5..6edffeed9953 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -238,10 +238,6 @@ static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
+ 	mmc_exit_request(mq->queue, req);
+ }
+ 
+-/*
+- * We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
+- * will not be dispatched in parallel.
+- */
+ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				    const struct blk_mq_queue_data *bd)
+ {
+@@ -264,7 +260,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	spin_lock_irq(q->queue_lock);
+ 
+-	if (mq->recovery_needed) {
++	if (mq->recovery_needed || mq->busy) {
+ 		spin_unlock_irq(q->queue_lock);
+ 		return BLK_STS_RESOURCE;
+ 	}
+@@ -291,6 +287,9 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		break;
+ 	}
+ 
++	/* Parallel dispatch of requests is not supported at the moment */
++	mq->busy = true;
++
+ 	mq->in_flight[issue_type] += 1;
+ 	get_card = (mmc_tot_in_flight(mq) == 1);
+ 	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
+@@ -333,9 +332,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		mq->in_flight[issue_type] -= 1;
+ 		if (mmc_tot_in_flight(mq) == 0)
+ 			put_card = true;
++		mq->busy = false;
+ 		spin_unlock_irq(q->queue_lock);
+ 		if (put_card)
+ 			mmc_put_card(card, &mq->ctx);
++	} else {
++		WRITE_ONCE(mq->busy, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
+index 17e59d50b496..9bf3c9245075 100644
+--- a/drivers/mmc/core/queue.h
++++ b/drivers/mmc/core/queue.h
+@@ -81,6 +81,7 @@ struct mmc_queue {
+ 	unsigned int		cqe_busy;
+ #define MMC_CQE_DCMD_BUSY	BIT(0)
+ #define MMC_CQE_QUEUE_FULL	BIT(1)
++	bool			busy;
+ 	bool			use_cqe;
+ 	bool			recovery_needed;
+ 	bool			in_recovery;
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index d032bd63444d..4a7991151918 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -45,14 +45,16 @@
+ /* DM_CM_RST */
+ #define RST_DTRANRST1		BIT(9)
+ #define RST_DTRANRST0		BIT(8)
+-#define RST_RESERVED_BITS	GENMASK_ULL(32, 0)
++#define RST_RESERVED_BITS	GENMASK_ULL(31, 0)
+ 
+ /* DM_CM_INFO1 and DM_CM_INFO1_MASK */
+ #define INFO1_CLEAR		0
++#define INFO1_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO1_DTRANEND1		BIT(17)
+ #define INFO1_DTRANEND0		BIT(16)
+ 
+ /* DM_CM_INFO2 and DM_CM_INFO2_MASK */
++#define INFO2_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO2_DTRANERR1		BIT(17)
+ #define INFO2_DTRANERR0		BIT(16)
+ 
+@@ -236,6 +238,12 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
+ {
+ 	struct renesas_sdhi *priv = host_to_priv(host);
+ 
++	/* Disable DMAC interrupts, we don't use them */
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO1_MASK,
++					    INFO1_MASK_CLEAR);
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO2_MASK,
++					    INFO2_MASK_CLEAR);
++
+ 	/* Each value is set to non-zero to assume "enabling" each DMA */
+ 	host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/dev.h b/drivers/net/wireless/marvell/libertas/dev.h
+index dd1ee1f0af48..469134930026 100644
+--- a/drivers/net/wireless/marvell/libertas/dev.h
++++ b/drivers/net/wireless/marvell/libertas/dev.h
+@@ -104,6 +104,7 @@ struct lbs_private {
+ 	u8 fw_ready;
+ 	u8 surpriseremoved;
+ 	u8 setup_fw_on_resume;
++	u8 power_up_on_resume;
+ 	int (*hw_host_to_card) (struct lbs_private *priv, u8 type, u8 *payload, u16 nb);
+ 	void (*reset_card) (struct lbs_private *priv);
+ 	int (*power_save) (struct lbs_private *priv);
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 2300e796c6ab..43743c26c071 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1290,15 +1290,23 @@ static void if_sdio_remove(struct sdio_func *func)
+ static int if_sdio_suspend(struct device *dev)
+ {
+ 	struct sdio_func *func = dev_to_sdio_func(dev);
+-	int ret;
+ 	struct if_sdio_card *card = sdio_get_drvdata(func);
++	struct lbs_private *priv = card->priv;
++	int ret;
+ 
+ 	mmc_pm_flag_t flags = sdio_get_host_pm_caps(func);
++	priv->power_up_on_resume = false;
+ 
+ 	/* If we're powered off anyway, just let the mmc layer remove the
+ 	 * card. */
+-	if (!lbs_iface_active(card->priv))
+-		return -ENOSYS;
++	if (!lbs_iface_active(priv)) {
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
++	}
+ 
+ 	dev_info(dev, "%s: suspend: PM flags = 0x%x\n",
+ 		 sdio_func_id(func), flags);
+@@ -1306,9 +1314,14 @@ static int if_sdio_suspend(struct device *dev)
+ 	/* If we aren't being asked to wake on anything, we should bail out
+ 	 * and let the SD stack power down the card.
+ 	 */
+-	if (card->priv->wol_criteria == EHS_REMOVE_WAKEUP) {
++	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+-		return -ENOSYS;
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
+ 	}
+ 
+ 	if (!(flags & MMC_PM_KEEP_POWER)) {
+@@ -1321,7 +1334,7 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = lbs_suspend(card->priv);
++	ret = lbs_suspend(priv);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1336,6 +1349,11 @@ static int if_sdio_resume(struct device *dev)
+ 
+ 	dev_info(dev, "%s: resume: we're back\n", sdio_func_id(func));
+ 
++	if (card->priv->power_up_on_resume) {
++		if_sdio_power_on(card);
++		wait_event(card->pwron_waitq, card->priv->fw_ready);
++	}
++
+ 	ret = lbs_resume(card->priv);
+ 
+ 	return ret;
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 27902a8799b1..8aae6dcc839f 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -812,9 +812,9 @@ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		 * overshoots the remainder by 4 bytes, assume it was
+ 		 * including 'status'.
+ 		 */
+-		if (out_field[1] - 8 == remainder)
++		if (out_field[1] - 4 == remainder)
+ 			return remainder;
+-		return out_field[1] - 4;
++		return out_field[1] - 8;
+ 	} else if (cmd == ND_CMD_CALL) {
+ 		struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *) in_field;
+ 
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index 8d348b22ba45..863cabc35215 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -536,6 +536,37 @@ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region)
+ 	return info.available;
+ }
+ 
++/**
++ * nd_pmem_max_contiguous_dpa - For the given dimm+region, return the max
++ *			   contiguous unallocated dpa range.
++ * @nd_region: constrain available space check to this reference region
++ * @nd_mapping: container of dpa-resource-root + labels
++ */
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping)
++{
++	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++	struct nvdimm_bus *nvdimm_bus;
++	resource_size_t max = 0;
++	struct resource *res;
++
++	/* if a dimm is disabled the available capacity is zero */
++	if (!ndd)
++		return 0;
++
++	nvdimm_bus = walk_to_nvdimm_bus(ndd->dev);
++	if (__reserve_free_pmem(&nd_region->dev, nd_mapping->nvdimm))
++		return 0;
++	for_each_dpa_resource(ndd, res) {
++		if (strcmp(res->name, "pmem-reserve") != 0)
++			continue;
++		if (resource_size(res) > max)
++			max = resource_size(res);
++	}
++	release_free_pmem(nvdimm_bus, nd_mapping);
++	return max;
++}
++
+ /**
+  * nd_pmem_available_dpa - for the given dimm+region account unallocated dpa
+  * @nd_mapping: container of dpa-resource-root + labels
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 28afdd668905..4525d8ef6022 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -799,7 +799,7 @@ static int merge_dpa(struct nd_region *nd_region,
+ 	return 0;
+ }
+ 
+-static int __reserve_free_pmem(struct device *dev, void *data)
++int __reserve_free_pmem(struct device *dev, void *data)
+ {
+ 	struct nvdimm *nvdimm = data;
+ 	struct nd_region *nd_region;
+@@ -836,7 +836,7 @@ static int __reserve_free_pmem(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
+-static void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
+ 		struct nd_mapping *nd_mapping)
+ {
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+@@ -1032,7 +1032,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
+ 
+ 		allocated += nvdimm_allocated_dpa(ndd, &label_id);
+ 	}
+-	available = nd_region_available_dpa(nd_region);
++	available = nd_region_allocatable_dpa(nd_region);
+ 
+ 	if (val > available + allocated)
+ 		return -ENOSPC;
+diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
+index 79274ead54fb..ac68072fb8cd 100644
+--- a/drivers/nvdimm/nd-core.h
++++ b/drivers/nvdimm/nd-core.h
+@@ -100,6 +100,14 @@ struct nd_region;
+ struct nvdimm_drvdata;
+ struct nd_mapping;
+ void nd_mapping_free_labels(struct nd_mapping *nd_mapping);
++
++int __reserve_free_pmem(struct device *dev, void *data);
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++		       struct nd_mapping *nd_mapping);
++
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping);
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region);
+ resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, resource_size_t *overlap);
+ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index ec3543b83330..c30d5af02cc2 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -389,6 +389,30 @@ resource_size_t nd_region_available_dpa(struct nd_region *nd_region)
+ 	return available;
+ }
+ 
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region)
++{
++	resource_size_t available = 0;
++	int i;
++
++	if (is_memory(&nd_region->dev))
++		available = PHYS_ADDR_MAX;
++
++	WARN_ON(!is_nvdimm_bus_locked(&nd_region->dev));
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		if (is_memory(&nd_region->dev))
++			available = min(available,
++					nd_pmem_max_contiguous_dpa(nd_region,
++								   nd_mapping));
++		else if (is_nd_blk(&nd_region->dev))
++			available += nd_blk_available_dpa(nd_region);
++	}
++	if (is_memory(&nd_region->dev))
++		return available * nd_region->ndr_mappings;
++	return available;
++}
++
+ static ssize_t available_size_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+diff --git a/drivers/pwm/pwm-omap-dmtimer.c b/drivers/pwm/pwm-omap-dmtimer.c
+index 665da3c8fbce..f45798679e3c 100644
+--- a/drivers/pwm/pwm-omap-dmtimer.c
++++ b/drivers/pwm/pwm-omap-dmtimer.c
+@@ -264,8 +264,9 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
+ 
+ 	timer_pdata = dev_get_platdata(&timer_pdev->dev);
+ 	if (!timer_pdata) {
+-		dev_err(&pdev->dev, "dmtimer pdata structure NULL\n");
+-		ret = -EINVAL;
++		dev_dbg(&pdev->dev,
++			 "dmtimer pdata structure NULL, deferring probe\n");
++		ret = -EPROBE_DEFER;
+ 		goto put;
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-tiehrpwm.c b/drivers/pwm/pwm-tiehrpwm.c
+index 4c22cb395040..f7b8a86fa5c5 100644
+--- a/drivers/pwm/pwm-tiehrpwm.c
++++ b/drivers/pwm/pwm-tiehrpwm.c
+@@ -33,10 +33,6 @@
+ #define TBCTL			0x00
+ #define TBPRD			0x0A
+ 
+-#define TBCTL_RUN_MASK		(BIT(15) | BIT(14))
+-#define TBCTL_STOP_NEXT		0
+-#define TBCTL_STOP_ON_CYCLE	BIT(14)
+-#define TBCTL_FREE_RUN		(BIT(15) | BIT(14))
+ #define TBCTL_PRDLD_MASK	BIT(3)
+ #define TBCTL_PRDLD_SHDW	0
+ #define TBCTL_PRDLD_IMDT	BIT(3)
+@@ -360,7 +356,7 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Channels polarity can be configured from action qualifier module */
+ 	configure_polarity(pc, pwm->hwpwm);
+ 
+-	/* Enable TBCLK before enabling PWM device */
++	/* Enable TBCLK */
+ 	ret = clk_enable(pc->tbclk);
+ 	if (ret) {
+ 		dev_err(chip->dev, "Failed to enable TBCLK for %s: %d\n",
+@@ -368,9 +364,6 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		return ret;
+ 	}
+ 
+-	/* Enable time counter for free_run */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_FREE_RUN);
+-
+ 	return 0;
+ }
+ 
+@@ -388,6 +381,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		aqcsfrc_mask = AQCSFRC_CSFA_MASK;
+ 	}
+ 
++	/* Update shadow register first before modifying active register */
++	ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val);
+ 	/*
+ 	 * Changes to immediate action on Action Qualifier. This puts
+ 	 * Action Qualifier control on PWM output from next TBCLK
+@@ -400,9 +395,6 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Disabling TBCLK on PWM disable */
+ 	clk_disable(pc->tbclk);
+ 
+-	/* Stop Time base counter */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_STOP_NEXT);
+-
+ 	/* Disable clock on PWM disable */
+ 	pm_runtime_put_sync(chip->dev);
+ }
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index 39086398833e..6a7b804c3074 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -861,13 +861,6 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 			goto err;
+ 	}
+ 
+-	if (rtc->is_pmic_controller) {
+-		if (!pm_power_off) {
+-			omap_rtc_power_off_rtc = rtc;
+-			pm_power_off = omap_rtc_power_off;
+-		}
+-	}
+-
+ 	/* Support ext_wakeup pinconf */
+ 	rtc_pinctrl_desc.name = dev_name(&pdev->dev);
+ 
+@@ -880,12 +873,21 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 
+ 	ret = rtc_register_device(rtc->rtc);
+ 	if (ret)
+-		goto err;
++		goto err_deregister_pinctrl;
+ 
+ 	rtc_nvmem_register(rtc->rtc, &omap_rtc_nvmem_config);
+ 
++	if (rtc->is_pmic_controller) {
++		if (!pm_power_off) {
++			omap_rtc_power_off_rtc = rtc;
++			pm_power_off = omap_rtc_power_off;
++		}
++	}
++
+ 	return 0;
+ 
++err_deregister_pinctrl:
++	pinctrl_unregister(rtc->pctldev);
+ err:
+ 	clk_disable_unprepare(rtc->clk);
+ 	device_init_wakeup(&pdev->dev, false);
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index f3dad6fcdc35..a568f35522f9 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -319,7 +319,7 @@ static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
+ 		 */
+ 		if (cdns_spi_read(xspi, CDNS_SPI_ISR) &
+ 		    CDNS_SPI_IXR_TXFULL)
+-			usleep_range(10, 20);
++			udelay(10);
+ 
+ 		if (xspi->txbuf)
+ 			cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index 577084bb911b..a02099c90c5c 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -217,7 +217,7 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	pdata = &dspi->pdata;
+ 
+ 	/* program delay transfers if tx_delay is non zero */
+-	if (spicfg->wdelay)
++	if (spicfg && spicfg->wdelay)
+ 		spidat1 |= SPIDAT1_WDEL;
+ 
+ 	/*
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 0630962ce442..f225f7c99a32 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1029,30 +1029,30 @@ static int dspi_probe(struct platform_device *pdev)
+ 		goto out_master_put;
+ 	}
+ 
++	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
++	if (IS_ERR(dspi->clk)) {
++		ret = PTR_ERR(dspi->clk);
++		dev_err(&pdev->dev, "unable to get clock\n");
++		goto out_master_put;
++	}
++	ret = clk_prepare_enable(dspi->clk);
++	if (ret)
++		goto out_master_put;
++
+ 	dspi_init(dspi);
+ 	dspi->irq = platform_get_irq(pdev, 0);
+ 	if (dspi->irq < 0) {
+ 		dev_err(&pdev->dev, "can't get platform irq\n");
+ 		ret = dspi->irq;
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+ 
+ 	ret = devm_request_irq(&pdev->dev, dspi->irq, dspi_interrupt, 0,
+ 			pdev->name, dspi);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Unable to attach DSPI interrupt\n");
+-		goto out_master_put;
+-	}
+-
+-	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
+-	if (IS_ERR(dspi->clk)) {
+-		ret = PTR_ERR(dspi->clk);
+-		dev_err(&pdev->dev, "unable to get clock\n");
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+-	ret = clk_prepare_enable(dspi->clk);
+-	if (ret)
+-		goto out_master_put;
+ 
+ 	if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ 		ret = dspi_request_dma(dspi, res->start);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 0b2d60d30f69..14f4ea59caff 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1391,6 +1391,10 @@ static const struct pci_device_id pxa2xx_spi_pci_compound_match[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x31c2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c4), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c6), LPSS_BXT_SSP },
++	/* ICL-LP */
++	{ PCI_VDEVICE(INTEL, 0x34aa), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34ab), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34fb), LPSS_CNL_SSP },
+ 	/* APL */
+ 	{ PCI_VDEVICE(INTEL, 0x5ac2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x5ac4), LPSS_BXT_SSP },
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 9c14a453f73c..80bb56facfb6 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -182,6 +182,7 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	unsigned long page;
++	unsigned long flags = 0;
+ 	int retval = 0;
+ 
+ 	if (uport->type == PORT_UNKNOWN)
+@@ -196,15 +197,18 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ 	 * Initialise and allocate the transmit and temporary
+ 	 * buffer.
+ 	 */
+-	if (!state->xmit.buf) {
+-		/* This is protected by the per port mutex */
+-		page = get_zeroed_page(GFP_KERNEL);
+-		if (!page)
+-			return -ENOMEM;
++	page = get_zeroed_page(GFP_KERNEL);
++	if (!page)
++		return -ENOMEM;
+ 
++	uart_port_lock(state, flags);
++	if (!state->xmit.buf) {
+ 		state->xmit.buf = (unsigned char *) page;
+ 		uart_circ_clear(&state->xmit);
++	} else {
++		free_page(page);
+ 	}
++	uart_port_unlock(uport, flags);
+ 
+ 	retval = uport->ops->startup(uport);
+ 	if (retval == 0) {
+@@ -263,6 +267,7 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	struct tty_port *port = &state->port;
++	unsigned long flags = 0;
+ 
+ 	/*
+ 	 * Set the TTY IO error marker
+@@ -295,10 +300,12 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ 	/*
+ 	 * Free the transmit buffer page.
+ 	 */
++	uart_port_lock(state, flags);
+ 	if (state->xmit.buf) {
+ 		free_page((unsigned long)state->xmit.buf);
+ 		state->xmit.buf = NULL;
+ 	}
++	uart_port_unlock(uport, flags);
+ }
+ 
+ /**
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 609438d2465b..9ae2fb1344de 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1704,12 +1704,12 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-static int do_unregister_framebuffer(struct fb_info *fb_info)
++static int unbind_console(struct fb_info *fb_info)
+ {
+ 	struct fb_event event;
+-	int i, ret = 0;
++	int ret;
++	int i = fb_info->node;
+ 
+-	i = fb_info->node;
+ 	if (i < 0 || i >= FB_MAX || registered_fb[i] != fb_info)
+ 		return -EINVAL;
+ 
+@@ -1724,17 +1724,29 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	unlock_fb_info(fb_info);
+ 	console_unlock();
+ 
++	return ret;
++}
++
++static int __unlink_framebuffer(struct fb_info *fb_info);
++
++static int do_unregister_framebuffer(struct fb_info *fb_info)
++{
++	struct fb_event event;
++	int ret;
++
++	ret = unbind_console(fb_info);
++
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	pm_vt_switch_unregister(fb_info->dev);
+ 
+-	unlink_framebuffer(fb_info);
++	__unlink_framebuffer(fb_info);
+ 	if (fb_info->pixmap.addr &&
+ 	    (fb_info->pixmap.flags & FB_PIXMAP_DEFAULT))
+ 		kfree(fb_info->pixmap.addr);
+ 	fb_destroy_modelist(&fb_info->modelist);
+-	registered_fb[i] = NULL;
++	registered_fb[fb_info->node] = NULL;
+ 	num_registered_fb--;
+ 	fb_cleanup_device(fb_info);
+ 	event.info = fb_info;
+@@ -1747,7 +1759,7 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-int unlink_framebuffer(struct fb_info *fb_info)
++static int __unlink_framebuffer(struct fb_info *fb_info)
+ {
+ 	int i;
+ 
+@@ -1759,6 +1771,20 @@ int unlink_framebuffer(struct fb_info *fb_info)
+ 		device_destroy(fb_class, MKDEV(FB_MAJOR, i));
+ 		fb_info->dev = NULL;
+ 	}
++
++	return 0;
++}
++
++int unlink_framebuffer(struct fb_info *fb_info)
++{
++	int ret;
++
++	ret = __unlink_framebuffer(fb_info);
++	if (ret)
++		return ret;
++
++	unbind_console(fb_info);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(unlink_framebuffer);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index f365d4862015..862e8027acf6 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -27,6 +27,7 @@
+ #include <linux/slab.h>
+ #include <linux/prefetch.h>
+ #include <linux/delay.h>
++#include <asm/unaligned.h>
+ #include <video/udlfb.h>
+ #include "edid.h"
+ 
+@@ -450,17 +451,17 @@ static void dlfb_compress_hline(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min(MAX_CMD_PIXELS + 1,
+-			min((int)(pixel_end - pixel),
+-			    (int)(cmd_buffer_end - cmd) / BPP));
++		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel),
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / BPP);
+ 
+-		prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * BPP);
++		prefetch_range((void *) pixel, (u8 *)cmd_pixel_end - (u8 *)pixel);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const uint16_t * const repeating_pixel = pixel;
+ 
+-			*cmd++ = *pixel >> 8;
+-			*cmd++ = *pixel;
++			put_unaligned_be16(*pixel, cmd);
++			cmd += 2;
+ 			pixel++;
+ 
+ 			if (unlikely((pixel < cmd_pixel_end) &&
+@@ -486,13 +487,16 @@ static void dlfb_compress_hline(
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+ 			*raw_pixels_count_byte = (pixel-raw_pixel_start) & 0xFF;
++		} else {
++			/* undo unused byte */
++			cmd--;
+ 		}
+ 
+ 		*cmd_pixels_count_byte = (pixel - cmd_pixel_start) & 0xFF;
+-		dev_addr += (pixel - cmd_pixel_start) * BPP;
++		dev_addr += (u8 *)pixel - (u8 *)cmd_pixel_start;
+ 	}
+ 
+-	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
++	if (cmd_buffer_end - MIN_RLX_CMD_BYTES <= cmd) {
+ 		/* Fill leftover bytes with no-ops */
+ 		if (cmd_buffer_end > cmd)
+ 			memset(cmd, 0xAF, cmd_buffer_end - cmd);
+@@ -610,8 +614,11 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		ret = dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -735,8 +742,11 @@ static void dlfb_dpy_deferred_io(struct fb_info *info,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -922,14 +932,6 @@ static void dlfb_free(struct kref *kref)
+ 	kfree(dlfb);
+ }
+ 
+-static void dlfb_release_urb_work(struct work_struct *work)
+-{
+-	struct urb_node *unode = container_of(work, struct urb_node,
+-					      release_urb_work.work);
+-
+-	up(&unode->dlfb->urbs.limit_sem);
+-}
+-
+ static void dlfb_free_framebuffer(struct dlfb_data *dlfb)
+ {
+ 	struct fb_info *info = dlfb->info;
+@@ -1039,10 +1041,25 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 	int result;
+ 	u16 *pix_framebuffer;
+ 	int i;
++	struct fb_var_screeninfo fvs;
++
++	/* clear the activate field because it causes spurious miscompares */
++	fvs = info->var;
++	fvs.activate = 0;
++	fvs.vmode &= ~FB_VMODE_SMOOTH_XPAN;
++
++	if (!memcmp(&dlfb->current_mode, &fvs, sizeof(struct fb_var_screeninfo)))
++		return 0;
+ 
+ 	result = dlfb_set_video_mode(dlfb, &info->var);
+ 
+-	if ((result == 0) && (dlfb->fb_count == 0)) {
++	if (result)
++		return result;
++
++	dlfb->current_mode = fvs;
++	info->fix.line_length = info->var.xres * (info->var.bits_per_pixel / 8);
++
++	if (dlfb->fb_count == 0) {
+ 
+ 		/* paint greenscreen */
+ 
+@@ -1054,7 +1071,7 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 				   info->screen_base);
+ 	}
+ 
+-	return result;
++	return 0;
+ }
+ 
+ /* To fonzi the jukebox (e.g. make blanking changes take effect) */
+@@ -1649,7 +1666,8 @@ static void dlfb_init_framebuffer_work(struct work_struct *work)
+ 	dlfb->info = info;
+ 	info->par = dlfb;
+ 	info->pseudo_palette = dlfb->pseudo_palette;
+-	info->fbops = &dlfb_ops;
++	dlfb->ops = dlfb_ops;
++	info->fbops = &dlfb->ops;
+ 
+ 	retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (retval < 0) {
+@@ -1789,14 +1807,7 @@ static void dlfb_urb_completion(struct urb *urb)
+ 	dlfb->urbs.available++;
+ 	spin_unlock_irqrestore(&dlfb->urbs.lock, flags);
+ 
+-	/*
+-	 * When using fb_defio, we deadlock if up() is called
+-	 * while another is waiting. So queue to another process.
+-	 */
+-	if (fb_defio)
+-		schedule_delayed_work(&unode->release_urb_work, 0);
+-	else
+-		up(&dlfb->urbs.limit_sem);
++	up(&dlfb->urbs.limit_sem);
+ }
+ 
+ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+@@ -1805,16 +1816,11 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at disconnect */
+-		ret = down_interruptible(&dlfb->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&dlfb->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&dlfb->urbs.lock, flags);
+ 
+@@ -1838,25 +1844,27 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 
+ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ {
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&dlfb->urbs.lock);
+ 
++retry:
+ 	dlfb->urbs.size = size;
+ 	INIT_LIST_HEAD(&dlfb->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&dlfb->urbs.limit_sem, 0);
++	dlfb->urbs.count = 0;
++	dlfb->urbs.available = 0;
++
++	while (dlfb->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(*unode), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+ 		unode->dlfb = dlfb;
+ 
+-		INIT_DELAYED_WORK(&unode->release_urb_work,
+-			  dlfb_release_urb_work);
+-
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!urb) {
+ 			kfree(unode);
+@@ -1864,11 +1872,16 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(dlfb->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(dlfb->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				dlfb_free_urb_list(dlfb);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -1879,14 +1892,12 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &dlfb->urbs.list);
+ 
+-		i++;
++		up(&dlfb->urbs.limit_sem);
++		dlfb->urbs.count++;
++		dlfb->urbs.available++;
+ 	}
+ 
+-	sema_init(&dlfb->urbs.limit_sem, i);
+-	dlfb->urbs.count = i;
+-	dlfb->urbs.available = i;
+-
+-	return i;
++	return dlfb->urbs.count;
+ }
+ 
+ static struct urb *dlfb_get_urb(struct dlfb_data *dlfb)
+diff --git a/fs/9p/xattr.c b/fs/9p/xattr.c
+index f329eee6dc93..352abc39e891 100644
+--- a/fs/9p/xattr.c
++++ b/fs/9p/xattr.c
+@@ -105,7 +105,7 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ {
+ 	struct kvec kvec = {.iov_base = (void *)value, .iov_len = value_len};
+ 	struct iov_iter from;
+-	int retval;
++	int retval, err;
+ 
+ 	iov_iter_kvec(&from, WRITE | ITER_KVEC, &kvec, 1, value_len);
+ 
+@@ -126,7 +126,9 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ 			 retval);
+ 	else
+ 		p9_client_write(fid, 0, &from, &retval);
+-	p9_client_clunk(fid);
++	err = p9_client_clunk(fid);
++	if (!retval && err)
++		retval = err;
+ 	return retval;
+ }
+ 
+diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
+index 96c1d14c18f1..c2a128678e6e 100644
+--- a/fs/lockd/clntlock.c
++++ b/fs/lockd/clntlock.c
+@@ -187,7 +187,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
+ 			continue;
+ 		if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
+ 			continue;
+-		if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)) ,fh) != 0)
++		if (nfs_compare_fh(NFS_FH(locks_inode(fl_blocked->fl_file)), fh) != 0)
+ 			continue;
+ 		/* Alright, we found a lock. Set the return status
+ 		 * and wake up the caller
+diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
+index a2c0dfc6fdc0..d20b92f271c2 100644
+--- a/fs/lockd/clntproc.c
++++ b/fs/lockd/clntproc.c
+@@ -128,7 +128,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
+ 	char *nodename = req->a_host->h_rpcclnt->cl_nodename;
+ 
+ 	nlmclnt_next_cookie(&argp->cookie);
+-	memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
++	memcpy(&lock->fh, NFS_FH(locks_inode(fl->fl_file)), sizeof(struct nfs_fh));
+ 	lock->caller  = nodename;
+ 	lock->oh.data = req->a_owner;
+ 	lock->oh.len  = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 3701bccab478..74330daeab71 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -405,8 +405,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_lock(%s/%ld, ty=%d, pi=%d, %Ld-%Ld, bl=%d)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type, lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end,
+@@ -511,8 +511,8 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_testlock(%s/%ld, ty=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -566,8 +566,8 @@ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
+ 	int	error;
+ 
+ 	dprintk("lockd: nlmsvc_unlock(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -595,8 +595,8 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
+ 	int status = 0;
+ 
+ 	dprintk("lockd: nlmsvc_cancel(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 4ec3d6e03e76..899360ba3b84 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -44,7 +44,7 @@ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ 
+ static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
+ {
+-	struct inode *inode = file_inode(file->f_file);
++	struct inode *inode = locks_inode(file->f_file);
+ 
+ 	dprintk("lockd: %s %s/%ld\n",
+ 		msg, inode->i_sb->s_id, inode->i_ino);
+@@ -414,7 +414,7 @@ nlmsvc_match_sb(void *datap, struct nlm_file *file)
+ {
+ 	struct super_block *sb = datap;
+ 
+-	return sb == file_inode(file->f_file)->i_sb;
++	return sb == locks_inode(file->f_file)->i_sb;
+ }
+ 
+ /**
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index a7efd83779d2..dec5880ac6de 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -204,7 +204,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	chunk = div_u64(offset, dev->chunk_size);
+ 	div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+ 
+-	if (chunk_idx > dev->nr_children) {
++	if (chunk_idx >= dev->nr_children) {
+ 		dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+ 			__func__, chunk_idx, offset, dev->chunk_size);
+ 		/* error, should not happen */
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 64c214fb9da6..5d57e818d0c3 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -441,11 +441,14 @@ validate_seqid(const struct nfs4_slot_table *tbl, const struct nfs4_slot *slot,
+  * a match.  If the slot is in use and the sequence numbers match, the
+  * client is still waiting for a response to the original request.
+  */
+-static bool referring_call_exists(struct nfs_client *clp,
++static int referring_call_exists(struct nfs_client *clp,
+ 				  uint32_t nrclists,
+-				  struct referring_call_list *rclists)
++				  struct referring_call_list *rclists,
++				  spinlock_t *lock)
++	__releases(lock)
++	__acquires(lock)
+ {
+-	bool status = false;
++	int status = 0;
+ 	int i, j;
+ 	struct nfs4_session *session;
+ 	struct nfs4_slot_table *tbl;
+@@ -468,8 +471,10 @@ static bool referring_call_exists(struct nfs_client *clp,
+ 
+ 		for (j = 0; j < rclist->rcl_nrefcalls; j++) {
+ 			ref = &rclist->rcl_refcalls[j];
++			spin_unlock(lock);
+ 			status = nfs4_slot_wait_on_seqid(tbl, ref->rc_slotid,
+ 					ref->rc_sequenceid, HZ >> 1) < 0;
++			spin_lock(lock);
+ 			if (status)
+ 				goto out;
+ 		}
+@@ -546,7 +551,8 @@ __be32 nfs4_callback_sequence(void *argp, void *resp,
+ 	 * related callback was received before the response to the original
+ 	 * call.
+ 	 */
+-	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
++	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists,
++				&tbl->slot_tbl_lock) < 0) {
+ 		status = htonl(NFS4ERR_DELAY);
+ 		goto out_unlock;
+ 	}
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index f6c4ccd693f4..464db0c0f5c8 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -581,8 +581,15 @@ nfs4_async_handle_exception(struct rpc_task *task, struct nfs_server *server,
+ 		ret = -EIO;
+ 	return ret;
+ out_retry:
+-	if (ret == 0)
++	if (ret == 0) {
+ 		exception->retry = 1;
++		/*
++		 * For NFS4ERR_MOVED, the client transport will need to
++		 * be recomputed after migration recovery has completed.
++		 */
++		if (errorcode == -NFS4ERR_MOVED)
++			rpc_task_release_transport(task);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 32ba2d471853..d5e4d3cd8c7f 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -61,7 +61,7 @@ EXPORT_SYMBOL_GPL(pnfs_generic_commit_release);
+ 
+ /* The generic layer is about to remove the req from the commit list.
+  * If this will make the bucket empty, it will need to put the lseg reference.
+- * Note this must be called holding i_lock
++ * Note this must be called holding nfsi->commit_mutex
+  */
+ void
+ pnfs_generic_clear_request_commit(struct nfs_page *req,
+@@ -149,9 +149,7 @@ restart:
+ 		if (list_empty(&b->written)) {
+ 			freeme = b->wlseg;
+ 			b->wlseg = NULL;
+-			spin_unlock(&cinfo->inode->i_lock);
+ 			pnfs_put_lseg(freeme);
+-			spin_lock(&cinfo->inode->i_lock);
+ 			goto restart;
+ 		}
+ 	}
+@@ -167,7 +165,7 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 	LIST_HEAD(pages);
+ 	int i;
+ 
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	for (i = idx; i < fl_cinfo->nbuckets; i++) {
+ 		bucket = &fl_cinfo->buckets[i];
+ 		if (list_empty(&bucket->committing))
+@@ -177,12 +175,12 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 		list_for_each(pos, &bucket->committing)
+ 			cinfo->ds->ncommitting--;
+ 		list_splice_init(&bucket->committing, &pages);
+-		spin_unlock(&cinfo->inode->i_lock);
++		mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 		nfs_retry_commit(&pages, freeme, cinfo, i);
+ 		pnfs_put_lseg(freeme);
+-		spin_lock(&cinfo->inode->i_lock);
++		mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	}
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ }
+ 
+ static unsigned int
+@@ -222,13 +220,13 @@ void pnfs_fetch_commit_bucket_list(struct list_head *pages,
+ 	struct list_head *pos;
+ 
+ 	bucket = &cinfo->ds->buckets[data->ds_commit_index];
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	list_for_each(pos, &bucket->committing)
+ 		cinfo->ds->ncommitting--;
+ 	list_splice_init(&bucket->committing, pages);
+ 	data->lseg = bucket->clseg;
+ 	bucket->clseg = NULL;
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 857141446d6b..4a17fad93411 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6293,7 +6293,7 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ 		return status;
+ 	}
+ 
+-	inode = file_inode(filp);
++	inode = locks_inode(filp);
+ 	flctx = inode->i_flctx;
+ 
+ 	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index ef1fe42ff7bb..cc8303a806b4 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -668,6 +668,21 @@ static int ovl_fill_real(struct dir_context *ctx, const char *name,
+ 	return orig_ctx->actor(orig_ctx, name, namelen, offset, ino, d_type);
+ }
+ 
++static bool ovl_is_impure_dir(struct file *file)
++{
++	struct ovl_dir_file *od = file->private_data;
++	struct inode *dir = d_inode(file->f_path.dentry);
++
++	/*
++	 * Only upper dir can be impure, but if we are in the middle of
++	 * iterating a lower real dir, dir could be copied up and marked
++	 * impure. We only want the impure cache if we started iterating
++	 * a real upper dir to begin with.
++	 */
++	return od->is_upper && ovl_test_flag(OVL_IMPURE, dir);
++
++}
++
+ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ {
+ 	int err;
+@@ -696,7 +711,7 @@ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ 		rdt.parent_ino = stat.ino;
+ 	}
+ 
+-	if (ovl_test_flag(OVL_IMPURE, d_inode(dir))) {
++	if (ovl_is_impure_dir(file)) {
+ 		rdt.cache = ovl_cache_get_impure(&file->f_path);
+ 		if (IS_ERR(rdt.cache))
+ 			return PTR_ERR(rdt.cache);
+@@ -727,7 +742,7 @@ static int ovl_iterate(struct file *file, struct dir_context *ctx)
+ 		 */
+ 		if (ovl_xino_bits(dentry->d_sb) ||
+ 		    (ovl_same_sb(dentry->d_sb) &&
+-		     (ovl_test_flag(OVL_IMPURE, d_inode(dentry)) ||
++		     (ovl_is_impure_dir(file) ||
+ 		      OVL_TYPE_MERGE(ovl_path_type(dentry->d_parent))))) {
+ 			return ovl_iterate_real(file, ctx);
+ 		}
+diff --git a/fs/quota/quota.c b/fs/quota/quota.c
+index 860bfbe7a07a..dac1735312df 100644
+--- a/fs/quota/quota.c
++++ b/fs/quota/quota.c
+@@ -18,6 +18,7 @@
+ #include <linux/quotaops.h>
+ #include <linux/types.h>
+ #include <linux/writeback.h>
++#include <linux/nospec.h>
+ 
+ static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
+ 				     qid_t id)
+@@ -703,6 +704,7 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
+ 
+ 	if (type >= (XQM_COMMAND(cmd) ? XQM_MAXQUOTAS : MAXQUOTAS))
+ 		return -EINVAL;
++	type = array_index_nospec(type, MAXQUOTAS);
+ 	/*
+ 	 * Quota not supported on this fs? Check this before s_quota_types
+ 	 * since they needn't be set if quota is not supported at all.
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 9da224d4f2da..e8616040bffc 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1123,8 +1123,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	struct ubifs_inode *ui;
+ 	struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	int err, len = strlen(symname);
+-	int sz_change = CALC_DENT_SIZE(len);
++	int err, sz_change, len = strlen(symname);
+ 	struct fscrypt_str disk_link;
+ 	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
+ 					.new_ino_d = ALIGN(len, 8),
+@@ -1151,6 +1150,8 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	if (err)
+ 		goto out_budg;
+ 
++	sz_change = CALC_DENT_SIZE(fname_len(&nm));
++
+ 	inode = ubifs_new_inode(c, dir, S_IFLNK | S_IRWXUGO);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 07b4956e0425..48060dc48683 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -664,6 +664,11 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ 	spin_lock(&ui->ui_lock);
+ 	ui->synced_i_size = ui->ui_size;
+ 	spin_unlock(&ui->ui_lock);
++	if (xent) {
++		spin_lock(&host_ui->ui_lock);
++		host_ui->synced_i_size = host_ui->ui_size;
++		spin_unlock(&host_ui->ui_lock);
++	}
+ 	mark_inode_clean(c, ui);
+ 	mark_inode_clean(c, host_ui);
+ 	return 0;
+@@ -1282,11 +1287,10 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
+ 			      int *new_len)
+ {
+ 	void *buf;
+-	int err, compr_type;
+-	u32 dlen, out_len, old_dlen;
++	int err, dlen, compr_type, out_len, old_dlen;
+ 
+ 	out_len = le32_to_cpu(dn->size);
+-	buf = kmalloc_array(out_len, WORST_COMPR_FACTOR, GFP_NOFS);
++	buf = kmalloc(out_len * WORST_COMPR_FACTOR, GFP_NOFS);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+@@ -1388,7 +1392,16 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
+ 		else if (err)
+ 			goto out_free;
+ 		else {
+-			if (le32_to_cpu(dn->size) <= dlen)
++			int dn_len = le32_to_cpu(dn->size);
++
++			if (dn_len <= 0 || dn_len > UBIFS_BLOCK_SIZE) {
++				ubifs_err(c, "bad data node (block %u, inode %lu)",
++					  blk, inode->i_ino);
++				ubifs_dump_node(c, dn);
++				goto out_free;
++			}
++
++			if (dn_len <= dlen)
+ 				dlen = 0; /* Nothing to do */
+ 			else {
+ 				err = truncate_data_node(c, inode, blk, dn, &dlen);
+diff --git a/fs/ubifs/lprops.c b/fs/ubifs/lprops.c
+index f5a46844340c..8ade493a423a 100644
+--- a/fs/ubifs/lprops.c
++++ b/fs/ubifs/lprops.c
+@@ -1089,10 +1089,6 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		}
+ 	}
+ 
+-	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * After an unclean unmount, empty and freeable LEBs
+ 	 * may contain garbage - do not scan them.
+@@ -1111,6 +1107,10 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		return LPT_SCAN_CONTINUE;
+ 	}
+ 
++	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	sleb = ubifs_scan(c, lnum, 0, buf, 0);
+ 	if (IS_ERR(sleb)) {
+ 		ret = PTR_ERR(sleb);
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 6f720fdf5020..09e37e63bddd 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,6 +152,12 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -184,6 +190,7 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -235,6 +242,12 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -256,6 +269,7 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -482,6 +496,12 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -501,6 +521,7 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -540,6 +561,9 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
++	if (!host->i_nlink)
++		return -ENOENT;
++
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
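Each xattr hunk above takes `host_ui->ui_mutex` and only then re-checks `host->i_nlink`, bailing out with -ENOENT if the host inode was unlinked concurrently. A userland sketch of the same check-after-lock pattern; the pthread mutex and plain struct are illustrative stand-ins for the kernel objects:

```c
#include <errno.h>
#include <pthread.h>

/* Hypothetical stand-in for the host inode: nlink drops to 0 on unlink. */
struct host_inode {
	pthread_mutex_t lock;
	int nlink;
	int xattr_cnt;
};

/* Re-validate liveness *after* acquiring the lock, as the patch does:
 * checking nlink before the lock would leave a window for a concurrent
 * unlink. Returns 0 on success, -ENOENT if the inode is already gone. */
static int add_xattr(struct host_inode *host)
{
	int err = 0;

	pthread_mutex_lock(&host->lock);
	if (!host->nlink)
		err = -ENOENT;
	else
		host->xattr_cnt += 1;
	pthread_mutex_unlock(&host->lock);
	return err;
}
```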
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 0c504c8031d3..74b13347cd94 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1570,10 +1570,16 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+  */
+ #define PART_DESC_ALLOC_STEP 32
+ 
++struct part_desc_seq_scan_data {
++	struct udf_vds_record rec;
++	u32 partnum;
++};
++
+ struct desc_seq_scan_data {
+ 	struct udf_vds_record vds[VDS_POS_LENGTH];
+ 	unsigned int size_part_descs;
+-	struct udf_vds_record *part_descs_loc;
++	unsigned int num_part_descs;
++	struct part_desc_seq_scan_data *part_descs_loc;
+ };
+ 
+ static struct udf_vds_record *handle_partition_descriptor(
+@@ -1582,10 +1588,14 @@ static struct udf_vds_record *handle_partition_descriptor(
+ {
+ 	struct partitionDesc *desc = (struct partitionDesc *)bh->b_data;
+ 	int partnum;
++	int i;
+ 
+ 	partnum = le16_to_cpu(desc->partitionNumber);
+-	if (partnum >= data->size_part_descs) {
+-		struct udf_vds_record *new_loc;
++	for (i = 0; i < data->num_part_descs; i++)
++		if (partnum == data->part_descs_loc[i].partnum)
++			return &(data->part_descs_loc[i].rec);
++	if (data->num_part_descs >= data->size_part_descs) {
++		struct part_desc_seq_scan_data *new_loc;
+ 		unsigned int new_size = ALIGN(partnum, PART_DESC_ALLOC_STEP);
+ 
+ 		new_loc = kcalloc(new_size, sizeof(*new_loc), GFP_KERNEL);
+@@ -1597,7 +1607,7 @@ static struct udf_vds_record *handle_partition_descriptor(
+ 		data->part_descs_loc = new_loc;
+ 		data->size_part_descs = new_size;
+ 	}
+-	return &(data->part_descs_loc[partnum]);
++	return &(data->part_descs_loc[data->num_part_descs++].rec);
+ }
+ 
+ 
+@@ -1647,6 +1657,7 @@ static noinline int udf_process_sequence(
+ 
+ 	memset(data.vds, 0, sizeof(struct udf_vds_record) * VDS_POS_LENGTH);
+ 	data.size_part_descs = PART_DESC_ALLOC_STEP;
++	data.num_part_descs = 0;
+ 	data.part_descs_loc = kcalloc(data.size_part_descs,
+ 				      sizeof(*data.part_descs_loc),
+ 				      GFP_KERNEL);
+@@ -1658,7 +1669,6 @@ static noinline int udf_process_sequence(
+ 	 * are in it.
+ 	 */
+ 	for (; (!done && block <= lastblock); block++) {
+-
+ 		bh = udf_read_tagged(sb, block, block, &ident);
+ 		if (!bh)
+ 			break;
+@@ -1730,13 +1740,10 @@ static noinline int udf_process_sequence(
+ 	}
+ 
+ 	/* Now handle prevailing Partition Descriptors */
+-	for (i = 0; i < data.size_part_descs; i++) {
+-		if (data.part_descs_loc[i].block) {
+-			ret = udf_load_partdesc(sb,
+-						data.part_descs_loc[i].block);
+-			if (ret < 0)
+-				return ret;
+-		}
++	for (i = 0; i < data.num_part_descs; i++) {
++		ret = udf_load_partdesc(sb, data.part_descs_loc[i].rec.block);
++		if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	return 0;
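The UDF hunks above stop indexing the descriptor table by raw `partnum` and instead record (partnum, rec) pairs, scanning for an existing entry before appending. A fixed-capacity sketch of that find-or-append logic; where this sketch returns -1 on a full table, the kernel reallocates with kcalloc and copies:

```c
/* One recorded partition descriptor: the partition number plus a
 * placeholder for the udf_vds_record payload. */
struct pd_entry {
	unsigned int partnum;
	int rec; /* stands in for struct udf_vds_record */
};

/* Return the index holding partnum, appending a new slot if absent.
 * Returns -1 when the table is full (the kernel grows it instead). */
static int pd_find_or_add(struct pd_entry *tab, unsigned int *n,
			  unsigned int cap, unsigned int partnum)
{
	unsigned int i;

	for (i = 0; i < *n; i++)
		if (tab[i].partnum == partnum)
			return (int)i;
	if (*n >= cap)
		return -1;
	i = (*n)++;
	tab[i].partnum = partnum;
	return (int)i;
}
```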
+diff --git a/fs/xattr.c b/fs/xattr.c
+index f9cb1db187b7..1bee74682513 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -539,7 +539,7 @@ getxattr(struct dentry *d, const char __user *name, void __user *value,
+ 	if (error > 0) {
+ 		if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
+ 		    (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
+-			posix_acl_fix_xattr_to_user(kvalue, size);
++			posix_acl_fix_xattr_to_user(kvalue, error);
+ 		if (size && copy_to_user(value, kvalue, error))
+ 			error = -EFAULT;
+ 	} else if (error == -ERANGE && size >= XATTR_SIZE_MAX) {
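The one-line fs/xattr.c fix above passes `error` (the number of bytes the filesystem actually returned) rather than `size` (the caller-supplied buffer capacity) to the ACL fixup helper. The bug class is generic: a post-processing step must see only the valid prefix of a buffer. A sketch with a hypothetical fixup helper, not the kernel's:

```c
#include <stddef.h>

/* Hypothetical post-processing helper: uppercase the valid bytes.
 * Passing the buffer *capacity* instead of the produced length would
 * make it touch uninitialized tail bytes -- the bug fixed above. */
static size_t fixup_valid_prefix(char *buf, size_t produced)
{
	size_t i;

	for (i = 0; i < produced; i++)
		if (buf[i] >= 'a' && buf[i] <= 'z')
			buf[i] -= 'a' - 'A';
	return produced;
}
```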
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 6c666fd7de3c..0fce47d5acb1 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -295,6 +295,23 @@ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg,
+ 	return __blkg_lookup(blkcg, q, false);
+ }
+ 
++/**
++ * blkg_root_lookup - look up blkg for the specified request queue
++ * blkg_root_lookup - look up blkg for the specified request queue
++ * @q: request_queue of interest
++ *
++ * Lookup blkg for @q at the root level. See also blkg_lookup().
++ */
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q)
++{
++	struct blkcg_gq *blkg;
++
++	rcu_read_lock();
++	blkg = blkg_lookup(&blkcg_root, q);
++	rcu_read_unlock();
++
++	return blkg;
++}
++
+ /**
+  * blkg_to_pdata - get policy private data
+  * @blkg: blkg of interest
+@@ -737,6 +754,7 @@ struct blkcg_policy {
+ #ifdef CONFIG_BLOCK
+ 
+ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg, void *key) { return NULL; }
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q) { return NULL; }
+ static inline int blkcg_init_queue(struct request_queue *q) { return 0; }
+ static inline void blkcg_drain_queue(struct request_queue *q) { }
+ static inline void blkcg_exit_queue(struct request_queue *q) { }
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 3a3012f57be4..5389012f1d25 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1046,6 +1046,8 @@ extern int vmbus_establish_gpadl(struct vmbus_channel *channel,
+ extern int vmbus_teardown_gpadl(struct vmbus_channel *channel,
+ 				     u32 gpadl_handle);
+ 
++void vmbus_reset_channel_cb(struct vmbus_channel *channel);
++
+ extern int vmbus_recvpacket(struct vmbus_channel *channel,
+ 				  void *buffer,
+ 				  u32 bufferlen,
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index ef169d67df92..7fd9fbaea5aa 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -114,6 +114,7 @@
+  * Extended Capability Register
+  */
+ 
++#define ecap_dit(e)		((e >> 41) & 0x1)
+ #define ecap_pasid(e)		((e >> 40) & 0x1)
+ #define ecap_pss(e)		((e >> 35) & 0x1f)
+ #define ecap_eafs(e)		((e >> 34) & 0x1)
+@@ -284,6 +285,7 @@ enum {
+ #define QI_DEV_IOTLB_SID(sid)	((u64)((sid) & 0xffff) << 32)
+ #define QI_DEV_IOTLB_QDEP(qdep)	(((qdep) & 0x1f) << 16)
+ #define QI_DEV_IOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
++#define QI_DEV_IOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_IOTLB_SIZE	1
+ #define QI_DEV_IOTLB_MAX_INVS	32
+ 
+@@ -308,6 +310,7 @@ enum {
+ #define QI_DEV_EIOTLB_PASID(p)	(((u64)p) << 32)
+ #define QI_DEV_EIOTLB_SID(sid)	((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd)	((u64)((qd) & 0x1f) << 4)
++#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_EIOTLB_MAX_INVS	32
+ 
+ #define QI_PGRP_IDX(idx)	(((u64)(idx)) << 55)
+@@ -453,9 +456,8 @@ extern void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid,
+ 			     u8 fm, u64 type);
+ extern void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 			  unsigned int size_order, u64 type);
+-extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			       u64 addr, unsigned mask);
+-
++extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask);
+ extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
+ 
+ extern int dmar_ir_support(void);
+diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
+index 4fd95dbeb52f..b065ef406770 100644
+--- a/include/linux/lockd/lockd.h
++++ b/include/linux/lockd/lockd.h
+@@ -299,7 +299,7 @@ int           nlmsvc_unlock_all_by_ip(struct sockaddr *server_addr);
+ 
+ static inline struct inode *nlmsvc_file_inode(struct nlm_file *file)
+ {
+-	return file_inode(file->f_file);
++	return locks_inode(file->f_file);
+ }
+ 
+ static inline int __nlm_privileged_request4(const struct sockaddr *sap)
+@@ -359,7 +359,7 @@ static inline int nlm_privileged_requester(const struct svc_rqst *rqstp)
+ static inline int nlm_compare_locks(const struct file_lock *fl1,
+ 				    const struct file_lock *fl2)
+ {
+-	return file_inode(fl1->fl_file) == file_inode(fl2->fl_file)
++	return locks_inode(fl1->fl_file) == locks_inode(fl2->fl_file)
+ 	     && fl1->fl_pid   == fl2->fl_pid
+ 	     && fl1->fl_owner == fl2->fl_owner
+ 	     && fl1->fl_start == fl2->fl_start
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 99ce070e7dcb..22651e124071 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -139,7 +139,10 @@ struct page {
+ 			unsigned long _pt_pad_1;	/* compound_head */
+ 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+ 			unsigned long _pt_pad_2;	/* mapping */
+-			struct mm_struct *pt_mm;	/* x86 pgds only */
++			union {
++				struct mm_struct *pt_mm; /* x86 pgds only */
++				atomic_t pt_frag_refcount; /* powerpc */
++			};
+ #if ALLOC_SPLIT_PTLOCKS
+ 			spinlock_t *ptl;
+ #else
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 8712ff70995f..40b48e2133cb 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -202,6 +202,37 @@
+ 
+ #endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
+ 
++/** check_shl_overflow() - Calculate a left-shifted value and check overflow
++ *
++ * @a: Value to be shifted
++ * @s: How many bits left to shift
++ * @d: Pointer to where to store the result
++ *
++ * Computes *@d = (@a << @s)
++ *
++ * Returns true if '*d' cannot hold the result or when 'a << s' doesn't
++ * make sense. Example conditions:
++ * - 'a << s' causes bits to be lost when stored in *d.
++ * - 's' is garbage (e.g. negative) or so large that the result of
++ *   'a << s' is guaranteed to be 0.
++ * - 'a' is negative.
++ * - 'a << s' sets the sign bit, if any, in '*d'.
++ *
++ * '*d' will hold the results of the attempted shift, but is not
++ * considered "safe for use" if false is returned.
++ */
++#define check_shl_overflow(a, s, d) ({					\
++	typeof(a) _a = a;						\
++	typeof(s) _s = s;						\
++	typeof(d) _d = d;						\
++	u64 _a_full = _a;						\
++	unsigned int _to_shift =					\
++		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
++	*_d = (_a_full << _to_shift);					\
++	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
++		(*_d >> _to_shift) != _a);				\
++})
++
+ /**
+  * array_size() - Calculate size of 2-dimensional array.
+  *
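The `check_shl_overflow()` macro added above folds three hazards into one predicate: a garbage shift count, bits lost off the top, and a sign-bit landing. A simplified restatement of its semantics for unsigned 32-bit operands (the kernel macro is type-generic via typeof; this helper is not):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true when (a << s) cannot be represented in *d or when s is
 * out of range; *d always receives the attempted result, matching the
 * kernel macro's contract that *d is written but may be unsafe. */
static bool shl_overflows_u32(uint32_t a, int s, uint32_t *d)
{
	unsigned int to_shift = (s >= 0 && s < 32) ? (unsigned int)s : 0;
	uint64_t full = (uint64_t)a << to_shift;

	*d = (uint32_t)full;
	return (unsigned int)s != to_shift || full > UINT32_MAX;
}
```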
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 9b11b6a0978c..73d5c4a870fa 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -156,6 +156,7 @@ int		rpc_switch_client_transport(struct rpc_clnt *,
+ 
+ void		rpc_shutdown_client(struct rpc_clnt *);
+ void		rpc_release_client(struct rpc_clnt *);
++void		rpc_task_release_transport(struct rpc_task *);
+ void		rpc_task_release_client(struct rpc_task *);
+ 
+ int		rpcb_create_local(struct net *);
+diff --git a/include/linux/verification.h b/include/linux/verification.h
+index a10549a6c7cd..cfa4730d607a 100644
+--- a/include/linux/verification.h
++++ b/include/linux/verification.h
+@@ -12,6 +12,12 @@
+ #ifndef _LINUX_VERIFICATION_H
+ #define _LINUX_VERIFICATION_H
+ 
++/*
++ * Indicate that both builtin trusted keys and secondary trusted keys
++ * should be used.
++ */
++#define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL)
++
+ /*
+  * The use to which an asymmetric key is being put.
+  */
+diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
+index bf48e71f2634..8a3432d0f0dc 100644
+--- a/include/uapi/linux/eventpoll.h
++++ b/include/uapi/linux/eventpoll.h
+@@ -42,7 +42,7 @@
+ #define EPOLLRDHUP	(__force __poll_t)0x00002000
+ 
+ /* Set exclusive wakeup mode for the target file descriptor */
+-#define EPOLLEXCLUSIVE (__force __poll_t)(1U << 28)
++#define EPOLLEXCLUSIVE	((__force __poll_t)(1U << 28))
+ 
+ /*
+  * Request the handling of system wakeup events so as to prevent system suspends
+@@ -54,13 +54,13 @@
+  *
+  * Requires CAP_BLOCK_SUSPEND
+  */
+-#define EPOLLWAKEUP (__force __poll_t)(1U << 29)
++#define EPOLLWAKEUP	((__force __poll_t)(1U << 29))
+ 
+ /* Set the One Shot behaviour for the target file descriptor */
+-#define EPOLLONESHOT (__force __poll_t)(1U << 30)
++#define EPOLLONESHOT	((__force __poll_t)(1U << 30))
+ 
+ /* Set the Edge Triggered behaviour for the target file descriptor */
+-#define EPOLLET (__force __poll_t)(1U << 31)
++#define EPOLLET		((__force __poll_t)(1U << 31))
+ 
+ /* 
+  * On x86-64 make the 64bit structure have the same alignment as the
+diff --git a/include/video/udlfb.h b/include/video/udlfb.h
+index 0cabe6b09095..6e1a2e790b1b 100644
+--- a/include/video/udlfb.h
++++ b/include/video/udlfb.h
+@@ -20,7 +20,6 @@ struct dloarea {
+ struct urb_node {
+ 	struct list_head entry;
+ 	struct dlfb_data *dlfb;
+-	struct delayed_work release_urb_work;
+ 	struct urb *urb;
+ };
+ 
+@@ -52,11 +51,13 @@ struct dlfb_data {
+ 	int base8;
+ 	u32 pseudo_palette[256];
+ 	int blank_mode; /*one of FB_BLANK_ */
++	struct fb_ops ops;
+ 	/* blit-only rendering path metrics, exposed through sysfs */
+ 	atomic_t bytes_rendered; /* raw pixel-bytes driver asked to render */
+ 	atomic_t bytes_identical; /* saved effort with backbuffer comparison */
+ 	atomic_t bytes_sent; /* to usb, after compression including overhead */
+ 	atomic_t cpu_kcycles_used; /* transpired during pixel processing */
++	struct fb_var_screeninfo current_mode;
+ };
+ 
+ #define NR_USB_REQUEST_I2C_SUB_IO 0x02
+@@ -87,7 +88,7 @@ struct dlfb_data {
+ #define MIN_RAW_PIX_BYTES	2
+ #define MIN_RAW_CMD_BYTES	(RAW_HEADER_BYTES + MIN_RAW_PIX_BYTES)
+ 
+-#define DL_DEFIO_WRITE_DELAY    5 /* fb_deferred_io.delay in jiffies */
++#define DL_DEFIO_WRITE_DELAY    msecs_to_jiffies(HZ <= 300 ? 4 : 10) /* optimal value for 720p video */
+ #define DL_DEFIO_WRITE_DISABLE  (HZ*60) /* "disable" with long delay */
+ 
+ /* remove these once align.h patch is taken into kernel */
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 3a4656fb7047..5b77a7314e01 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -678,6 +678,9 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
+ 	if (!func->old_name || !func->new_func)
+ 		return -EINVAL;
+ 
++	if (strlen(func->old_name) >= KSYM_NAME_LEN)
++		return -EINVAL;
++
+ 	INIT_LIST_HEAD(&func->stack_node);
+ 	func->patched = false;
+ 	func->transition = false;
+@@ -751,6 +754,9 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
+ 	if (!obj->funcs)
+ 		return -EINVAL;
+ 
++	if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
++		return -EINVAL;
++
+ 	obj->patched = false;
+ 	obj->mod = NULL;
+ 
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 38283363da06..cfb750105e1e 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -355,7 +355,6 @@ void __put_devmap_managed_page(struct page *page)
+ 		__ClearPageActive(page);
+ 		__ClearPageWaiters(page);
+ 
+-		page->mapping = NULL;
+ 		mem_cgroup_uncharge(page);
+ 
+ 		page->pgmap->page_free(page, page->pgmap->data);
+diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
+index e880ca22c5a5..3a6c2f87699e 100644
+--- a/kernel/power/Kconfig
++++ b/kernel/power/Kconfig
+@@ -105,6 +105,7 @@ config PM_SLEEP
+ 	def_bool y
+ 	depends on SUSPEND || HIBERNATE_CALLBACKS
+ 	select PM
++	select SRCU
+ 
+ config PM_SLEEP_SMP
+ 	def_bool y
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index a0a74c533e4b..0913b4d385de 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -306,12 +306,12 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 	return printk_safe_log_store(s, fmt, args);
+ }
+ 
+-void printk_nmi_enter(void)
++void notrace printk_nmi_enter(void)
+ {
+ 	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+-void printk_nmi_exit(void)
++void notrace printk_nmi_exit(void)
+ {
+ 	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
+ }
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index d40708e8c5d6..01b6ddeb4f05 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -472,6 +472,7 @@ retry_ipi:
+ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 				     smp_call_func_t func)
+ {
++	int cpu;
+ 	struct rcu_node *rnp;
+ 
+ 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
+@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 			continue;
+ 		}
+ 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
+-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_disable();
++		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
++		/* If all offline, queue the work on an unbound CPU. */
++		if (unlikely(cpu > rnp->grphi))
++			cpu = WORK_CPU_UNBOUND;
++		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_enable();
+ 		rnp->exp_need_flush = true;
+ 	}
+ 
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 1a3e9bddd17b..16f84142f2f4 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -190,7 +190,7 @@ static void cpuidle_idle_call(void)
+ 		 */
+ 		next_state = cpuidle_select(drv, dev, &stop_tick);
+ 
+-		if (stop_tick)
++		if (stop_tick || tick_nohz_tick_stopped())
+ 			tick_nohz_idle_stop_tick();
+ 		else
+ 			tick_nohz_idle_retain_tick();
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 38509dc1f77b..69b9a37ecf0d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1237,18 +1237,19 @@ static int override_release(char __user *release, size_t len)
+ 
+ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+ {
+-	int errno = 0;
++	struct new_utsname tmp;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof *name))
+-		errno = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!errno && override_release(name->release, sizeof(name->release)))
+-		errno = -EFAULT;
+-	if (!errno && override_architecture(name))
+-		errno = -EFAULT;
+-	return errno;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #ifdef __ARCH_WANT_SYS_OLD_UNAME
+@@ -1257,55 +1258,46 @@ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+  */
+ SYSCALL_DEFINE1(uname, struct old_utsname __user *, name)
+ {
+-	int error = 0;
++	struct old_utsname tmp;
+ 
+ 	if (!name)
+ 		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof(*name)))
+-		error = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	return error;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name)
+ {
+-	int error;
++	struct oldold_utsname tmp = {};
+ 
+ 	if (!name)
+ 		return -EFAULT;
+-	if (!access_ok(VERIFY_WRITE, name, sizeof(struct oldold_utsname)))
+-		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	error = __copy_to_user(&name->sysname, &utsname()->sysname,
+-			       __OLD_UTS_LEN);
+-	error |= __put_user(0, name->sysname + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->nodename, &utsname()->nodename,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->nodename + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->release, &utsname()->release,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->release + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->version, &utsname()->version,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->version + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->machine, &utsname()->machine,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->machine + __OLD_UTS_LEN);
++	memcpy(&tmp.sysname, &utsname()->sysname, __OLD_UTS_LEN);
++	memcpy(&tmp.nodename, &utsname()->nodename, __OLD_UTS_LEN);
++	memcpy(&tmp.release, &utsname()->release, __OLD_UTS_LEN);
++	memcpy(&tmp.version, &utsname()->version, __OLD_UTS_LEN);
++	memcpy(&tmp.machine, &utsname()->machine, __OLD_UTS_LEN);
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	return error ? -EFAULT : 0;
++	if (override_architecture(name))
++		return -EFAULT;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	return 0;
+ }
+ #endif
+ 
+@@ -1319,17 +1311,18 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->nodename, tmp, len);
+ 		memset(u->nodename + len, 0, sizeof(u->nodename) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_HOSTNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+@@ -1337,8 +1330,9 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ {
+-	int i, errno;
++	int i;
+ 	struct new_utsname *u;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+@@ -1347,11 +1341,11 @@ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ 	i = 1 + strlen(u->nodename);
+ 	if (i > len)
+ 		i = len;
+-	errno = 0;
+-	if (copy_to_user(name, u->nodename, i))
+-		errno = -EFAULT;
++	memcpy(tmp, u->nodename, i);
+ 	up_read(&uts_sem);
+-	return errno;
++	if (copy_to_user(name, tmp, i))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #endif
+@@ -1370,17 +1364,18 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+ 
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->domainname, tmp, len);
+ 		memset(u->domainname + len, 0, sizeof(u->domainname) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_DOMAINNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 987d9a9ae283..8defc6fd8c0f 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1841,6 +1841,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+ 	mutex_lock(&q->blk_trace_mutex);
+ 
+ 	if (attr == &dev_attr_enable) {
++		if (!!value == !!q->blk_trace) {
++			ret = 0;
++			goto out_unlock_bdev;
++		}
+ 		if (value)
+ 			ret = blk_trace_setup_queue(q, bdev);
+ 		else
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 176debd3481b..ddae35127571 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7628,7 +7628,9 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
+ 
+ 	if (buffer) {
+ 		mutex_lock(&trace_types_lock);
+-		if (val) {
++		if (!!val == tracer_tracing_is_on(tr)) {
++			val = 0; /* do nothing */
++		} else if (val) {
+ 			tracer_tracing_on(tr);
+ 			if (tr->current_trace->start)
+ 				tr->current_trace->start(tr);
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index bf89a51e740d..ac02fafc9f1b 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -952,7 +952,7 @@ probe_event_disable(struct trace_uprobe *tu, struct trace_event_file *file)
+ 
+ 		list_del_rcu(&link->list);
+ 		/* synchronize with u{,ret}probe_trace_func */
+-		synchronize_sched();
++		synchronize_rcu();
+ 		kfree(link);
+ 
+ 		if (!list_empty(&tu->tp.files))
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index c3d7583fcd21..e5222b5fb4fe 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -859,7 +859,16 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	unsigned idx;
+ 	struct uid_gid_extent extent;
+ 	char *kbuf = NULL, *pos, *next_line;
+-	ssize_t ret = -EINVAL;
++	ssize_t ret;
++
++	/* Only allow < page size writes at the beginning of the file */
++	if ((*ppos != 0) || (count >= PAGE_SIZE))
++		return -EINVAL;
++
++	/* Slurp in the user data */
++	kbuf = memdup_user_nul(buf, count);
++	if (IS_ERR(kbuf))
++		return PTR_ERR(kbuf);
+ 
+ 	/*
+ 	 * The userns_state_mutex serializes all writes to any given map.
+@@ -895,19 +904,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
+ 		goto out;
+ 
+-	/* Only allow < page size writes at the beginning of the file */
+-	ret = -EINVAL;
+-	if ((*ppos != 0) || (count >= PAGE_SIZE))
+-		goto out;
+-
+-	/* Slurp in the user data */
+-	kbuf = memdup_user_nul(buf, count);
+-	if (IS_ERR(kbuf)) {
+-		ret = PTR_ERR(kbuf);
+-		kbuf = NULL;
+-		goto out;
+-	}
+-
+ 	/* Parse the user data */
+ 	ret = -EINVAL;
+ 	pos = kbuf;
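`map_write()` above hoists the size check and `memdup_user_nul()` in front of the mutex acquisition, so an invalid or slow write never holds the serializing lock. A userland sketch of the reordered flow; all names here are illustrative:

```c
#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define MAX_WRITE 4096

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* Copy and validate input *before* taking state_lock, mirroring the
 * reordered map_write(): the lock now covers only the parse/commit
 * phase, not the user copy. Returns bytes consumed or -errno. */
static long map_write_sketch(const char *buf, size_t count)
{
	char *kbuf;
	long ret;

	if (count >= MAX_WRITE)
		return -EINVAL;
	kbuf = strndup(buf, count);     /* stands in for memdup_user_nul */
	if (!kbuf)
		return -ENOMEM;

	pthread_mutex_lock(&state_lock);
	ret = (long)strlen(kbuf);       /* "parse the user data" */
	pthread_mutex_unlock(&state_lock);

	free(kbuf);
	return ret;
}
```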
+diff --git a/kernel/utsname_sysctl.c b/kernel/utsname_sysctl.c
+index 233cd8fc6910..258033d62cb3 100644
+--- a/kernel/utsname_sysctl.c
++++ b/kernel/utsname_sysctl.c
+@@ -18,7 +18,7 @@
+ 
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+-static void *get_uts(struct ctl_table *table, int write)
++static void *get_uts(struct ctl_table *table)
+ {
+ 	char *which = table->data;
+ 	struct uts_namespace *uts_ns;
+@@ -26,21 +26,9 @@ static void *get_uts(struct ctl_table *table, int write)
+ 	uts_ns = current->nsproxy->uts_ns;
+ 	which = (which - (char *)&init_uts_ns) + (char *)uts_ns;
+ 
+-	if (!write)
+-		down_read(&uts_sem);
+-	else
+-		down_write(&uts_sem);
+ 	return which;
+ }
+ 
+-static void put_uts(struct ctl_table *table, int write, void *which)
+-{
+-	if (!write)
+-		up_read(&uts_sem);
+-	else
+-		up_write(&uts_sem);
+-}
+-
+ /*
+  *	Special case of dostring for the UTS structure. This has locks
+  *	to observe. Should this be in kernel/sys.c ????
+@@ -50,13 +38,34 @@ static int proc_do_uts_string(struct ctl_table *table, int write,
+ {
+ 	struct ctl_table uts_table;
+ 	int r;
++	char tmp_data[__NEW_UTS_LEN + 1];
++
+ 	memcpy(&uts_table, table, sizeof(uts_table));
+-	uts_table.data = get_uts(table, write);
++	uts_table.data = tmp_data;
++
++	/*
++	 * Buffer the value in tmp_data so that proc_dostring() can be called
++	 * without holding any locks.
++	 * We also need to read the original value in the write==1 case to
++	 * support partial writes.
++	 */
++	down_read(&uts_sem);
++	memcpy(tmp_data, get_uts(table), sizeof(tmp_data));
++	up_read(&uts_sem);
+ 	r = proc_dostring(&uts_table, write, buffer, lenp, ppos);
+-	put_uts(table, write, uts_table.data);
+ 
+-	if (write)
++	if (write) {
++		/*
++		 * Write back the new value.
++		 * Note that, since we dropped uts_sem, the result can
++		 * theoretically be incorrect if there are two parallel writes
++		 * at non-zero offsets to the same sysctl.
++		 */
++		down_write(&uts_sem);
++		memcpy(get_uts(table), tmp_data, sizeof(tmp_data));
++		up_write(&uts_sem);
+ 		proc_sys_poll_notify(table->poll);
++	}
+ 
+ 	return r;
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index de7b6bf77201..f9d1d89dec4d 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -963,6 +963,8 @@ static void hmm_devmem_free(struct page *page, void *data)
+ {
+ 	struct hmm_devmem *devmem = data;
+ 
++	page->mapping = NULL;
++
+ 	devmem->ops->free(devmem, page);
+ }
+ 
+diff --git a/mm/memory.c b/mm/memory.c
+index 86d4329acb05..f94feec6518d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -391,15 +391,6 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ {
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+-	/*
+-	 * When there's less then two users of this mm there cannot be a
+-	 * concurrent page-table walk.
+-	 */
+-	if (atomic_read(&tlb->mm->mm_users) < 2) {
+-		__tlb_remove_table(table);
+-		return;
+-	}
+-
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
+diff --git a/mm/readahead.c b/mm/readahead.c
+index e273f0de3376..792dea696d54 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -385,6 +385,7 @@ ondemand_readahead(struct address_space *mapping,
+ {
+ 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+ 	unsigned long max_pages = ra->ra_pages;
++	unsigned long add_pages;
+ 	pgoff_t prev_offset;
+ 
+ 	/*
+@@ -474,10 +475,17 @@ readit:
+ 	 * Will this read hit the readahead marker made by itself?
+ 	 * If so, trigger the readahead marker hit now, and merge
+ 	 * the resulted next readahead window into the current one.
++	 * Take care of maximum IO pages as above.
+ 	 */
+ 	if (offset == ra->start && ra->size == ra->async_size) {
+-		ra->async_size = get_next_ra_size(ra, max_pages);
+-		ra->size += ra->async_size;
++		add_pages = get_next_ra_size(ra, max_pages);
++		if (ra->size + add_pages <= max_pages) {
++			ra->async_size = add_pages;
++			ra->size += add_pages;
++		} else {
++			ra->size = max_pages;
++			ra->async_size = max_pages >> 1;
++		}
+ 	}
+ 
+ 	return ra_submit(ra, mapping, filp);
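The readahead hunk above caps the merged window at `max_pages`: the async window only grows while the total stays within the limit; otherwise the size is pinned to the maximum with half of it kept asynchronous. A direct sketch of that clamping logic:

```c
/* Minimal stand-in for the ra_pages bookkeeping in file_ra_state. */
struct ra_state { unsigned long size, async_size; };

/* Grow the readahead window by add_pages, but never past max_pages;
 * on clamp, keep half the window asynchronous, as the patch does. */
static void grow_ra(struct ra_state *ra, unsigned long add_pages,
		    unsigned long max_pages)
{
	if (ra->size + add_pages <= max_pages) {
		ra->async_size = add_pages;
		ra->size += add_pages;
	} else {
		ra->size = max_pages;
		ra->async_size = max_pages >> 1;
	}
}
```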
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 5c1343195292..2872f3dbfd86 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -958,7 +958,7 @@ static int p9_client_version(struct p9_client *c)
+ {
+ 	int err = 0;
+ 	struct p9_req_t *req;
+-	char *version;
++	char *version = NULL;
+ 	int msize;
+ 
+ 	p9_debug(P9_DEBUG_9P, ">>> TVERSION msize %d protocol %d\n",
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 588bf88c3305..ef456395645a 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -185,6 +185,8 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ 	spin_lock_irqsave(&p9_poll_lock, flags);
+ 	list_del_init(&m->poll_pending_link);
+ 	spin_unlock_irqrestore(&p9_poll_lock, flags);
++
++	flush_work(&p9_poll_work);
+ }
+ 
+ /**
+@@ -940,7 +942,7 @@ p9_fd_create_tcp(struct p9_client *client, const char *addr, char *args)
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (valid_ipaddr4(addr) < 0)
++	if (addr == NULL || valid_ipaddr4(addr) < 0)
+ 		return -EINVAL;
+ 
+ 	csocket = NULL;
+@@ -990,6 +992,9 @@ p9_fd_create_unix(struct p9_client *client, const char *addr, char *args)
+ 
+ 	csocket = NULL;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	if (strlen(addr) >= UNIX_PATH_MAX) {
+ 		pr_err("%s (%d): address too long: %s\n",
+ 		       __func__, task_pid_nr(current), addr);
+diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
+index 3d414acb7015..afaf0d65f3dd 100644
+--- a/net/9p/trans_rdma.c
++++ b/net/9p/trans_rdma.c
+@@ -644,6 +644,9 @@ rdma_create_trans(struct p9_client *client, const char *addr, char *args)
+ 	struct rdma_conn_param conn_param;
+ 	struct ib_qp_init_attr qp_attr;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	/* Parse the transport specific mount options */
+ 	err = parse_opts(args, &opts);
+ 	if (err < 0)
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 05006cbb3361..4c2da2513c8b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -188,7 +188,7 @@ static int pack_sg_list(struct scatterlist *sg, int start,
+ 		s = rest_of_page(data);
+ 		if (s > count)
+ 			s = count;
+-		BUG_ON(index > limit);
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_buf(&sg[index++], data, s);
+@@ -233,6 +233,7 @@ pack_sg_list_p(struct scatterlist *sg, int start, int limit,
+ 		s = PAGE_SIZE - data_off;
+ 		if (s > count)
+ 			s = count;
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_page(&sg[index++], pdata[i++], s, data_off);
+@@ -406,6 +407,7 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 	p9_debug(P9_DEBUG_TRANS, "virtio request\n");
+ 
+ 	if (uodata) {
++		__le32 sz;
+ 		int n = p9_get_mapped_pages(chan, &out_pages, uodata,
+ 					    outlen, &offs, &need_drop);
+ 		if (n < 0)
+@@ -416,6 +418,12 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 			memcpy(&req->tc->sdata[req->tc->size - 4], &v, 4);
+ 			outlen = n;
+ 		}
++		/* The size field of the message must include the length of the
++		 * header and the length of the data.  We didn't actually know
++		 * the length of the data until this point so add it in now.
++		 */
++		sz = cpu_to_le32(req->tc->size + outlen);
++		memcpy(&req->tc->sdata[0], &sz, sizeof(sz));
+ 	} else if (uidata) {
+ 		int n = p9_get_mapped_pages(chan, &in_pages, uidata,
+ 					    inlen, &offs, &need_drop);
+@@ -643,6 +651,9 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
+ 	int ret = -ENOENT;
+ 	int found = 0;
+ 
++	if (devname == NULL)
++		return -EINVAL;
++
+ 	mutex_lock(&virtio_9p_lock);
+ 	list_for_each_entry(chan, &virtio_chan_list, chan_list) {
+ 		if (!strncmp(devname, chan->tag, chan->tag_len) &&
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 2e2b8bca54f3..c2d54ac76bfd 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -94,6 +94,9 @@ static int p9_xen_create(struct p9_client *client, const char *addr, char *args)
+ {
+ 	struct xen_9pfs_front_priv *priv;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	read_lock(&xen_9pfs_lock);
+ 	list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ 		if (!strcmp(priv->tag, addr)) {
+diff --git a/net/ieee802154/6lowpan/tx.c b/net/ieee802154/6lowpan/tx.c
+index e6ff5128e61a..ca53efa17be1 100644
+--- a/net/ieee802154/6lowpan/tx.c
++++ b/net/ieee802154/6lowpan/tx.c
+@@ -265,9 +265,24 @@ netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *ldev)
+ 	/* We must take a copy of the skb before we modify/replace the ipv6
+ 	 * header as the header could be used elsewhere
+ 	 */
+-	skb = skb_unshare(skb, GFP_ATOMIC);
+-	if (!skb)
+-		return NET_XMIT_DROP;
++	if (unlikely(skb_headroom(skb) < ldev->needed_headroom ||
++		     skb_tailroom(skb) < ldev->needed_tailroom)) {
++		struct sk_buff *nskb;
++
++		nskb = skb_copy_expand(skb, ldev->needed_headroom,
++				       ldev->needed_tailroom, GFP_ATOMIC);
++		if (likely(nskb)) {
++			consume_skb(skb);
++			skb = nskb;
++		} else {
++			kfree_skb(skb);
++			return NET_XMIT_DROP;
++		}
++	} else {
++		skb = skb_unshare(skb, GFP_ATOMIC);
++		if (!skb)
++			return NET_XMIT_DROP;
++	}
+ 
+ 	ret = lowpan_header(skb, ldev, &dgram_size, &dgram_offset);
+ 	if (ret < 0) {
+diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
+index 7e253455f9dd..bcd1a5e6ebf4 100644
+--- a/net/mac802154/tx.c
++++ b/net/mac802154/tx.c
+@@ -63,8 +63,21 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+ 	int ret;
+ 
+ 	if (!(local->hw.flags & IEEE802154_HW_TX_OMIT_CKSUM)) {
+-		u16 crc = crc_ccitt(0, skb->data, skb->len);
++		struct sk_buff *nskb;
++		u16 crc;
++
++		if (unlikely(skb_tailroom(skb) < IEEE802154_FCS_LEN)) {
++			nskb = skb_copy_expand(skb, 0, IEEE802154_FCS_LEN,
++					       GFP_ATOMIC);
++			if (likely(nskb)) {
++				consume_skb(skb);
++				skb = nskb;
++			} else {
++				goto err_tx;
++			}
++		}
+ 
++		crc = crc_ccitt(0, skb->data, skb->len);
+ 		put_unaligned_le16(crc, skb_put(skb, 2));
+ 	}
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d839c33ae7d9..0d85425b1e07 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -965,10 +965,20 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(rpc_bind_new_program);
+ 
++void rpc_task_release_transport(struct rpc_task *task)
++{
++	struct rpc_xprt *xprt = task->tk_xprt;
++
++	if (xprt) {
++		task->tk_xprt = NULL;
++		xprt_put(xprt);
++	}
++}
++EXPORT_SYMBOL_GPL(rpc_task_release_transport);
++
+ void rpc_task_release_client(struct rpc_task *task)
+ {
+ 	struct rpc_clnt *clnt = task->tk_client;
+-	struct rpc_xprt *xprt = task->tk_xprt;
+ 
+ 	if (clnt != NULL) {
+ 		/* Remove from client task list */
+@@ -979,12 +989,14 @@ void rpc_task_release_client(struct rpc_task *task)
+ 
+ 		rpc_release_client(clnt);
+ 	}
++	rpc_task_release_transport(task);
++}
+ 
+-	if (xprt != NULL) {
+-		task->tk_xprt = NULL;
+-
+-		xprt_put(xprt);
+-	}
++static
++void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
++{
++	if (!task->tk_xprt)
++		task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
+ }
+ 
+ static
+@@ -992,8 +1004,7 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+ 
+ 	if (clnt != NULL) {
+-		if (task->tk_xprt == NULL)
+-			task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
++		rpc_task_set_transport(task, clnt);
+ 		task->tk_client = clnt;
+ 		atomic_inc(&clnt->cl_count);
+ 		if (clnt->cl_softrtry)
+@@ -1512,6 +1523,7 @@ call_start(struct rpc_task *task)
+ 		clnt->cl_program->version[clnt->cl_vers]->counts[idx]++;
+ 	clnt->cl_stats->rpccnt++;
+ 	task->tk_action = call_reserve;
++	rpc_task_set_transport(task, clnt);
+ }
+ 
+ /*
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index a3ac2c91331c..5e1dd493ce59 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -173,7 +173,7 @@ HOSTLOADLIBES_nconf	= $(shell . $(obj)/.nconf-cfg && echo $$libs)
+ HOSTCFLAGS_nconf.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ HOSTCFLAGS_nconf.gui.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ 
+-$(obj)/nconf.o: $(obj)/.nconf-cfg
++$(obj)/nconf.o $(obj)/nconf.gui.o: $(obj)/.nconf-cfg
+ 
+ # mconf: Used for the menuconfig target based on lxdialog
+ hostprogs-y	+= mconf
+@@ -184,7 +184,8 @@ HOSTLOADLIBES_mconf = $(shell . $(obj)/.mconf-cfg && echo $$libs)
+ $(foreach f, mconf.o $(lxdialog), \
+   $(eval HOSTCFLAGS_$f = $$(shell . $(obj)/.mconf-cfg && echo $$$$cflags)))
+ 
+-$(addprefix $(obj)/, mconf.o $(lxdialog)): $(obj)/.mconf-cfg
++$(obj)/mconf.o: $(obj)/.mconf-cfg
++$(addprefix $(obj)/lxdialog/, $(lxdialog)): $(obj)/.mconf-cfg
+ 
+ # qconf: Used for the xconfig target based on Qt
+ hostprogs-y	+= qconf
+diff --git a/security/apparmor/secid.c b/security/apparmor/secid.c
+index f2f22d00db18..4ccec1bcf6f5 100644
+--- a/security/apparmor/secid.c
++++ b/security/apparmor/secid.c
+@@ -79,7 +79,6 @@ int apparmor_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
+ 	struct aa_label *label = aa_secid_to_label(secid);
+ 	int len;
+ 
+-	AA_BUG(!secdata);
+ 	AA_BUG(!seclen);
+ 
+ 	if (!label)
+diff --git a/security/commoncap.c b/security/commoncap.c
+index f4c33abd9959..2e489d6a3ac8 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -388,7 +388,7 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 	if (strcmp(name, "capability") != 0)
+ 		return -EOPNOTSUPP;
+ 
+-	dentry = d_find_alias(inode);
++	dentry = d_find_any_alias(inode);
+ 	if (!dentry)
+ 		return -EINVAL;
+ 
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 31f858eceffc..83eed9d7f679 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -503,7 +503,7 @@ static int ac97_bus_remove(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	ret = adrv->remove(adev);
+@@ -511,6 +511,8 @@ static int ac97_bus_remove(struct device *dev)
+ 	if (ret == 0)
+ 		ac97_put_disable_clk(adev);
+ 
++	pm_runtime_disable(dev);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/ac97/snd_ac97_compat.c b/sound/ac97/snd_ac97_compat.c
+index 61544e0d8de4..8bab44f74bb8 100644
+--- a/sound/ac97/snd_ac97_compat.c
++++ b/sound/ac97/snd_ac97_compat.c
+@@ -15,6 +15,11 @@
+ 
+ #include "ac97_core.h"
+ 
++static void compat_ac97_release(struct device *dev)
++{
++	kfree(to_ac97_t(dev));
++}
++
+ static void compat_ac97_reset(struct snd_ac97 *ac97)
+ {
+ 	struct ac97_codec_device *adev = to_ac97_device(ac97->private_data);
+@@ -65,21 +70,31 @@ static struct snd_ac97_bus compat_soc_ac97_bus = {
+ struct snd_ac97 *snd_ac97_compat_alloc(struct ac97_codec_device *adev)
+ {
+ 	struct snd_ac97 *ac97;
++	int ret;
+ 
+ 	ac97 = kzalloc(sizeof(struct snd_ac97), GFP_KERNEL);
+ 	if (ac97 == NULL)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ac97->dev = adev->dev;
+ 	ac97->private_data = adev;
+ 	ac97->bus = &compat_soc_ac97_bus;
++
++	ac97->dev.parent = &adev->dev;
++	ac97->dev.release = compat_ac97_release;
++	dev_set_name(&ac97->dev, "%s-compat", dev_name(&adev->dev));
++	ret = device_register(&ac97->dev);
++	if (ret) {
++		put_device(&ac97->dev);
++		return ERR_PTR(ret);
++	}
++
+ 	return ac97;
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_alloc);
+ 
+ void snd_ac97_compat_release(struct snd_ac97 *ac97)
+ {
+-	kfree(ac97);
++	device_unregister(&ac97->dev);
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_release);
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d056447520a2..eeb6d1f7cfb3 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -202,6 +202,9 @@ static int auxtrace_queues__grow(struct auxtrace_queues *queues,
+ 	for (i = 0; i < queues->nr_queues; i++) {
+ 		list_splice_tail(&queues->queue_array[i].head,
+ 				 &queue_array[i].head);
++		queue_array[i].tid = queues->queue_array[i].tid;
++		queue_array[i].cpu = queues->queue_array[i].cpu;
++		queue_array[i].set = queues->queue_array[i].set;
+ 		queue_array[i].priv = queues->queue_array[i].priv;
+ 	}
+ 


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     a1bbcc3697bf4aadf04158a41d0c8bfccb901b52
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 18 18:13:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a1bbcc36

Linux patch 4.18.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1002_linux-4.18.3.patch | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/0000_README b/0000_README
index f72e2ad..c313d8e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.18.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.2
 
+Patch:  1002_linux-4.18.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.18.3.patch b/1002_linux-4.18.3.patch
new file mode 100644
index 0000000..62abf0a
--- /dev/null
+++ b/1002_linux-4.18.3.patch
@@ -0,0 +1,37 @@
+diff --git a/Makefile b/Makefile
+index fd409a0fd4e1..e2bd815f24eb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+index 44b1203ece12..a0c1525f1b6f 100644
+--- a/arch/x86/include/asm/pgtable-invert.h
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -4,9 +4,18 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
++/*
++ * A clear pte value is special, and doesn't get inverted.
++ *
++ * Note that even users that only pass a pgprot_t (rather
++ * than a full pte) won't trigger the special zero case,
++ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
++ * set. So the all zero case really is limited to just the
++ * cleared page table entry case.
++ */
+ static inline bool __pte_needs_invert(u64 val)
+ {
+-	return !(val & _PAGE_PRESENT);
++	return val && !(val & _PAGE_PRESENT);
+ }
+ 
+ /* Get a mask to xor with the page table entry to get the correct pfn. */


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     c6cb82254a2c0ba82fecc167a4fe3d20e2c2c3c2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 18 10:27:08 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c6cb8225

Linux patch 4.18.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-4.18.15.patch | 5433 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5437 insertions(+)

diff --git a/0000_README b/0000_README
index 6d1cb28..5676b13 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-4.18.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.14
 
+Patch:  1014_linux-4.18.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-4.18.15.patch b/1014_linux-4.18.15.patch
new file mode 100644
index 0000000..5477884
--- /dev/null
+++ b/1014_linux-4.18.15.patch
@@ -0,0 +1,5433 @@
+diff --git a/Documentation/devicetree/bindings/net/macb.txt b/Documentation/devicetree/bindings/net/macb.txt
+index 457d5ae16f23..3e17ac1d5d58 100644
+--- a/Documentation/devicetree/bindings/net/macb.txt
++++ b/Documentation/devicetree/bindings/net/macb.txt
+@@ -10,6 +10,7 @@ Required properties:
+   Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
+   the Cadence GEM, or the generic form: "cdns,gem".
+   Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
++  Use "atmel,sama5d3-macb" for the 10/100Mbit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
+   Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
+diff --git a/Makefile b/Makefile
+index 5274f8ae6b44..968eb96a0553 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -298,19 +298,7 @@ KERNELRELEASE = $(shell cat include/config/kernel.release 2> /dev/null)
+ KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
+ export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION
+ 
+-# SUBARCH tells the usermode build what the underlying arch is.  That is set
+-# first, and if a usermode build is happening, the "ARCH=um" on the command
+-# line overrides the setting of ARCH below.  If a native build is happening,
+-# then ARCH is assigned, getting whatever value it gets normally, and
+-# SUBARCH is subsequently ignored.
+-
+-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+-				  -e s/sun4u/sparc64/ \
+-				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
+-				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+-				  -e s/riscv.*/riscv/)
++include scripts/subarch.include
+ 
+ # Cross compiling and selecting different set of gcc/bin-utils
+ # ---------------------------------------------------------------------------
+diff --git a/arch/arm/boot/dts/sama5d3_emac.dtsi b/arch/arm/boot/dts/sama5d3_emac.dtsi
+index 7cb235ef0fb6..6e9e1c2f9def 100644
+--- a/arch/arm/boot/dts/sama5d3_emac.dtsi
++++ b/arch/arm/boot/dts/sama5d3_emac.dtsi
+@@ -41,7 +41,7 @@
+ 			};
+ 
+ 			macb1: ethernet@f802c000 {
+-				compatible = "cdns,at91sam9260-macb", "cdns,macb";
++				compatible = "atmel,sama5d3-macb", "cdns,at91sam9260-macb", "cdns,macb";
+ 				reg = <0xf802c000 0x100>;
+ 				interrupts = <35 IRQ_TYPE_LEVEL_HIGH 3>;
+ 				pinctrl-names = "default";
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index dd5b4fab114f..b7c8a718544c 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -823,6 +823,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
+ 	return 0;
+ }
+ 
++static int armv8pmu_filter_match(struct perf_event *event)
++{
++	unsigned long evtype = event->hw.config_base & ARMV8_PMU_EVTYPE_EVENT;
++	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
++}
++
+ static void armv8pmu_reset(void *info)
+ {
+ 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
+@@ -968,6 +974,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
+ 	cpu_pmu->reset			= armv8pmu_reset,
+ 	cpu_pmu->max_period		= (1LLU << 32) - 1,
+ 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
++	cpu_pmu->filter_match		= armv8pmu_filter_match;
+ 
+ 	return 0;
+ }
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index b2fa62922d88..49d6046ca1d0 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/atomic.h>
+ #include <linux/cpumask.h>
++#include <linux/sizes.h>
+ #include <linux/threads.h>
+ 
+ #include <asm/cachectl.h>
+@@ -80,11 +81,10 @@ extern unsigned int vced_count, vcei_count;
+ 
+ #endif
+ 
+-/*
+- * One page above the stack is used for branch delay slot "emulation".
+- * See dsemul.c for details.
+- */
+-#define STACK_TOP	((TASK_SIZE & PAGE_MASK) - PAGE_SIZE)
++#define VDSO_RANDOMIZE_SIZE	(TASK_IS_32BIT_ADDR ? SZ_1M : SZ_256M)
++
++extern unsigned long mips_stack_top(void);
++#define STACK_TOP		mips_stack_top()
+ 
+ /*
+  * This decides where the kernel will search for a free chunk of vm
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 9670e70139fd..1efd1798532b 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -31,6 +31,7 @@
+ #include <linux/prctl.h>
+ #include <linux/nmi.h>
+ 
++#include <asm/abi.h>
+ #include <asm/asm.h>
+ #include <asm/bootinfo.h>
+ #include <asm/cpu.h>
+@@ -38,6 +39,7 @@
+ #include <asm/dsp.h>
+ #include <asm/fpu.h>
+ #include <asm/irq.h>
++#include <asm/mips-cps.h>
+ #include <asm/msa.h>
+ #include <asm/pgtable.h>
+ #include <asm/mipsregs.h>
+@@ -644,6 +646,29 @@ out:
+ 	return pc;
+ }
+ 
++unsigned long mips_stack_top(void)
++{
++	unsigned long top = TASK_SIZE & PAGE_MASK;
++
++	/* One page for branch delay slot "emulation" */
++	top -= PAGE_SIZE;
++
++	/* Space for the VDSO, data page & GIC user page */
++	top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++	top -= PAGE_SIZE;
++	top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++	/* Space for cache colour alignment */
++	if (cpu_has_dc_aliases)
++		top -= shm_align_mask + 1;
++
++	/* Space to randomize the VDSO base */
++	if (current->flags & PF_RANDOMIZE)
++		top -= VDSO_RANDOMIZE_SIZE;
++
++	return top;
++}
++
+ /*
+  * Don't forget that the stack pointer must be aligned on a 8 bytes
+  * boundary for 32-bits ABI and 16 bytes for 64-bits ABI.
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 2c96c0c68116..6138224a96b1 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -835,6 +835,34 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	struct memblock_region *reg;
+ 	extern void plat_mem_setup(void);
+ 
++	/*
++	 * Initialize boot_command_line to an innocuous but non-empty string in
++	 * order to prevent early_init_dt_scan_chosen() from copying
++	 * CONFIG_CMDLINE into it without our knowledge. We handle
++	 * CONFIG_CMDLINE ourselves below & don't want to duplicate its
++	 * content because repeating arguments can be problematic.
++	 */
++	strlcpy(boot_command_line, " ", COMMAND_LINE_SIZE);
++
++	/* call board setup routine */
++	plat_mem_setup();
++
++	/*
++	 * Make sure all kernel memory is in the maps.  The "UP" and
++	 * "DOWN" are opposite for initdata since if it crosses over
++	 * into another memory section you don't want that to be
++	 * freed when the initdata is freed.
++	 */
++	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
++			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
++			 BOOT_MEM_RAM);
++	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
++			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
++			 BOOT_MEM_INIT_RAM);
++
++	pr_info("Determined physical RAM map:\n");
++	print_memory_map();
++
+ #if defined(CONFIG_CMDLINE_BOOL) && defined(CONFIG_CMDLINE_OVERRIDE)
+ 	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+ #else
+@@ -862,26 +890,6 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	}
+ #endif
+ #endif
+-
+-	/* call board setup routine */
+-	plat_mem_setup();
+-
+-	/*
+-	 * Make sure all kernel memory is in the maps.  The "UP" and
+-	 * "DOWN" are opposite for initdata since if it crosses over
+-	 * into another memory section you don't want that to be
+-	 * freed when the initdata is freed.
+-	 */
+-	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
+-			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
+-			 BOOT_MEM_RAM);
+-	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
+-			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
+-			 BOOT_MEM_INIT_RAM);
+-
+-	pr_info("Determined physical RAM map:\n");
+-	print_memory_map();
+-
+ 	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ 
+ 	*cmdline_p = command_line;
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 8f845f6e5f42..48a9c6b90e07 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -15,6 +15,7 @@
+ #include <linux/ioport.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
++#include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/timekeeper_internal.h>
+@@ -97,6 +98,21 @@ void update_vsyscall_tz(void)
+ 	}
+ }
+ 
++static unsigned long vdso_base(void)
++{
++	unsigned long base;
++
++	/* Skip the delay slot emulation page */
++	base = STACK_TOP + PAGE_SIZE;
++
++	if (current->flags & PF_RANDOMIZE) {
++		base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
++		base = PAGE_ALIGN(base);
++	}
++
++	return base;
++}
++
+ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ {
+ 	struct mips_vdso_image *image = current->thread.abi->vdso;
+@@ -137,7 +153,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	if (cpu_has_dc_aliases)
+ 		size += shm_align_mask + 1;
+ 
+-	base = get_unmapped_area(NULL, 0, size, 0, 0);
++	base = get_unmapped_area(NULL, vdso_base(), size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 42aafba7a308..9532dff28091 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -104,7 +104,7 @@
+  */
+ #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ /*
+  * user access blocked by key
+  */
+@@ -122,7 +122,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ 
+ #define H_PTE_PKEY  (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
+ 		     H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 7efc42538ccf..26d927bf2fdb 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -538,8 +538,8 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 				   unsigned long ea, unsigned long dsisr)
+ {
+ 	struct kvm *kvm = vcpu->kvm;
+-	unsigned long mmu_seq, pte_size;
+-	unsigned long gpa, gfn, hva, pfn;
++	unsigned long mmu_seq;
++	unsigned long gpa, gfn, hva;
+ 	struct kvm_memory_slot *memslot;
+ 	struct page *page = NULL;
+ 	long ret;
+@@ -636,9 +636,10 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 	 */
+ 	hva = gfn_to_hva_memslot(memslot, gfn);
+ 	if (upgrade_p && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+-		pfn = page_to_pfn(page);
+ 		upgrade_write = true;
+ 	} else {
++		unsigned long pfn;
++
+ 		/* Call KVM generic code to do the slow-path check */
+ 		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+ 					   writing, upgrade_p);
+@@ -652,63 +653,55 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 		}
+ 	}
+ 
+-	/* See if we can insert a 1GB or 2MB large PTE here */
+-	level = 0;
+-	if (page && PageCompound(page)) {
+-		pte_size = PAGE_SIZE << compound_order(compound_head(page));
+-		if (pte_size >= PUD_SIZE &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-			pfn &= ~((PUD_SIZE >> PAGE_SHIFT) - 1);
+-		} else if (pte_size >= PMD_SIZE &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-			pfn &= ~((PMD_SIZE >> PAGE_SHIFT) - 1);
+-		}
+-	}
+-
+ 	/*
+-	 * Compute the PTE value that we need to insert.
++	 * Read the PTE from the process' radix tree and use that
++	 * so we get the shift and attribute bits.
+ 	 */
+-	if (page) {
+-		pgflags = _PAGE_READ | _PAGE_EXEC | _PAGE_PRESENT | _PAGE_PTE |
+-			_PAGE_ACCESSED;
+-		if (writing || upgrade_write)
+-			pgflags |= _PAGE_WRITE | _PAGE_DIRTY;
+-		pte = pfn_pte(pfn, __pgprot(pgflags));
+-	} else {
+-		/*
+-		 * Read the PTE from the process' radix tree and use that
+-		 * so we get the attribute bits.
+-		 */
+-		local_irq_disable();
+-		ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
+-		pte = *ptep;
++	local_irq_disable();
++	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
++	/*
++	 * If the PTE disappeared temporarily due to a THP
++	 * collapse, just return and let the guest try again.
++	 */
++	if (!ptep) {
+ 		local_irq_enable();
+-		if (shift == PUD_SHIFT &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-		} else if (shift == PMD_SHIFT &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-		} else if (shift && shift != PAGE_SHIFT) {
+-			/* Adjust PFN */
+-			unsigned long mask = (1ul << shift) - PAGE_SIZE;
+-			pte = __pte(pte_val(pte) | (hva & mask));
+-		}
+-		pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
+-		if (writing || upgrade_write) {
+-			if (pte_val(pte) & _PAGE_WRITE)
+-				pte = __pte(pte_val(pte) | _PAGE_DIRTY);
+-		} else {
+-			pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++		if (page)
++			put_page(page);
++		return RESUME_GUEST;
++	}
++	pte = *ptep;
++	local_irq_enable();
++
++	/* Get pte level from shift/size */
++	if (shift == PUD_SHIFT &&
++	    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
++	    (hva & (PUD_SIZE - PAGE_SIZE))) {
++		level = 2;
++	} else if (shift == PMD_SHIFT &&
++		   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
++		   (hva & (PMD_SIZE - PAGE_SIZE))) {
++		level = 1;
++	} else {
++		level = 0;
++		if (shift > PAGE_SHIFT) {
++			/*
++			 * If the pte maps more than one page, bring over
++			 * bits from the virtual address to get the real
++			 * address of the specific single page we want.
++			 */
++			unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
++			pte = __pte(pte_val(pte) | (hva & rpnmask));
+ 		}
+ 	}
+ 
++	pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
++	if (writing || upgrade_write) {
++		if (pte_val(pte) & _PAGE_WRITE)
++			pte = __pte(pte_val(pte) | _PAGE_DIRTY);
++	} else {
++		pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++	}
++
+ 	/* Allocate space in the tree and write the PTE */
+ 	ret = kvmppc_create_pte(kvm, pte, gpa, level, mmu_seq);
+ 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 99fff853c944..a558381b016b 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -123,7 +123,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+ 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+ 
+ /*
+diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
+index c535c2fdea13..9bba9737ee0b 100644
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -377,5 +377,6 @@ struct kvm_sync_regs {
+ 
+ #define KVM_X86_QUIRK_LINT0_REENABLED	(1 << 0)
+ #define KVM_X86_QUIRK_CD_NW_CLEARED	(1 << 1)
++#define KVM_X86_QUIRK_LAPIC_MMIO_HOLE	(1 << 2)
+ 
+ #endif /* _ASM_X86_KVM_H */
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index b5cd8465d44f..83c4e8cc7eb9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1291,9 +1291,8 @@ EXPORT_SYMBOL_GPL(kvm_lapic_reg_read);
+ 
+ static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
+ {
+-	return kvm_apic_hw_enabled(apic) &&
+-	    addr >= apic->base_address &&
+-	    addr < apic->base_address + LAPIC_MMIO_LENGTH;
++	return addr >= apic->base_address &&
++		addr < apic->base_address + LAPIC_MMIO_LENGTH;
+ }
+ 
+ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+@@ -1305,6 +1304,15 @@ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		memset(data, 0xff, len);
++		return 0;
++	}
++
+ 	kvm_lapic_reg_read(apic, offset, len, data);
+ 
+ 	return 0;
+@@ -1864,6 +1872,14 @@ static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		return 0;
++	}
++
+ 	/*
+ 	 * APIC register must be aligned on 128-bits boundary.
+ 	 * 32/64/128 bits registers must be accessed thru 32 bits.
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 963bb0309e25..ea6238ed5c0e 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -543,6 +543,8 @@ static void hci_uart_tty_close(struct tty_struct *tty)
+ 	}
+ 	clear_bit(HCI_UART_PROTO_SET, &hu->flags);
+ 
++	percpu_free_rwsem(&hu->proto_lock);
++
+ 	kfree(hu);
+ }
+ 
+diff --git a/drivers/clk/x86/clk-pmc-atom.c b/drivers/clk/x86/clk-pmc-atom.c
+index 08ef69945ffb..d977193842df 100644
+--- a/drivers/clk/x86/clk-pmc-atom.c
++++ b/drivers/clk/x86/clk-pmc-atom.c
+@@ -55,6 +55,7 @@ struct clk_plt_data {
+ 	u8 nparents;
+ 	struct clk_plt *clks[PMC_CLK_NUM];
+ 	struct clk_lookup *mclk_lookup;
++	struct clk_lookup *ether_clk_lookup;
+ };
+ 
+ /* Return an index in parent table */
+@@ -186,13 +187,6 @@ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
+ 	pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
+ 	spin_lock_init(&pclk->lock);
+ 
+-	/*
+-	 * If the clock was already enabled by the firmware mark it as critical
+-	 * to avoid it being gated by the clock framework if no driver owns it.
+-	 */
+-	if (plt_clk_is_enabled(&pclk->hw))
+-		init.flags |= CLK_IS_CRITICAL;
+-
+ 	ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
+ 	if (ret) {
+ 		pclk = ERR_PTR(ret);
+@@ -351,11 +345,20 @@ static int plt_clk_probe(struct platform_device *pdev)
+ 		goto err_unreg_clk_plt;
+ 	}
+ 
++	data->ether_clk_lookup = clkdev_hw_create(&data->clks[4]->hw,
++						  "ether_clk", NULL);
++	if (!data->ether_clk_lookup) {
++		err = -ENOMEM;
++		goto err_drop_mclk;
++	}
++
+ 	plt_clk_free_parent_names_loop(parent_names, data->nparents);
+ 
+ 	platform_set_drvdata(pdev, data);
+ 	return 0;
+ 
++err_drop_mclk:
++	clkdev_drop(data->mclk_lookup);
+ err_unreg_clk_plt:
+ 	plt_clk_unregister_loop(data, i);
+ 	plt_clk_unregister_parents(data);
+@@ -369,6 +372,7 @@ static int plt_clk_remove(struct platform_device *pdev)
+ 
+ 	data = platform_get_drvdata(pdev);
+ 
++	clkdev_drop(data->ether_clk_lookup);
+ 	clkdev_drop(data->mclk_lookup);
+ 	plt_clk_unregister_loop(data, PMC_CLK_NUM);
+ 	plt_clk_unregister_parents(data);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 305143fcc1ce..1ac7933cccc5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -245,7 +245,7 @@ int amdgpu_amdkfd_resume(struct amdgpu_device *adev)
+ 
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr)
++			void **cpu_ptr, bool mqd_gfx9)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
+ 	struct amdgpu_bo *bo = NULL;
+@@ -261,6 +261,10 @@ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 	bp.flags = AMDGPU_GEM_CREATE_CPU_GTT_USWC;
+ 	bp.type = ttm_bo_type_kernel;
+ 	bp.resv = NULL;
++
++	if (mqd_gfx9)
++		bp.flags |= AMDGPU_GEM_CREATE_MQD_GFX9;
++
+ 	r = amdgpu_bo_create(adev, &bp, &bo);
+ 	if (r) {
+ 		dev_err(adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index a8418a3f4e9d..e3cf1c9fb3db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -129,7 +129,7 @@ bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid);
+ /* Shared API */
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr);
++			void **cpu_ptr, bool mqd_gfx9);
+ void free_gtt_mem(struct kgd_dev *kgd, void *mem_obj);
+ void get_local_mem_info(struct kgd_dev *kgd,
+ 			struct kfd_local_mem_info *mem_info);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+index ea79908dac4c..29a260e4aefe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+@@ -677,7 +677,7 @@ static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
+ 
+ 	while (true) {
+ 		temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
+-		if (temp & SDMA0_STATUS_REG__RB_CMD_IDLE__SHIFT)
++		if (temp & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
+ 			break;
+ 		if (time_after(jiffies, end_jiffies))
+ 			return -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 7ee6cec2c060..6881b5a9275f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -423,7 +423,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 
+ 	if (kfd->kfd2kgd->init_gtt_mem_allocation(
+ 			kfd->kgd, size, &kfd->gtt_mem,
+-			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)){
++			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr,
++			false)) {
+ 		dev_err(kfd_device, "Could not allocate %d bytes\n", size);
+ 		goto out;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+index c71817963eea..66c2f856d922 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+@@ -62,9 +62,20 @@ int kfd_iommu_device_init(struct kfd_dev *kfd)
+ 	struct amd_iommu_device_info iommu_info;
+ 	unsigned int pasid_limit;
+ 	int err;
++	struct kfd_topology_device *top_dev;
+ 
+-	if (!kfd->device_info->needs_iommu_device)
++	top_dev = kfd_topology_device_by_id(kfd->id);
++
++	/*
++	 * Overwrite ATS capability according to needs_iommu_device to fix
++	 * potential missing corresponding bit in CRAT of BIOS.
++	 */
++	if (!kfd->device_info->needs_iommu_device) {
++		top_dev->node_props.capability &= ~HSA_CAP_ATS_PRESENT;
+ 		return 0;
++	}
++
++	top_dev->node_props.capability |= HSA_CAP_ATS_PRESENT;
+ 
+ 	iommu_info.flags = 0;
+ 	err = amd_iommu_device_info(kfd->pdev, &iommu_info);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 684054ff02cd..8da079cc6fb9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -63,7 +63,7 @@ static int init_mqd(struct mqd_manager *mm, void **mqd,
+ 				ALIGN(sizeof(struct v9_mqd), PAGE_SIZE),
+ 			&((*mqd_mem_obj)->gtt_mem),
+ 			&((*mqd_mem_obj)->gpu_addr),
+-			(void *)&((*mqd_mem_obj)->cpu_ptr));
++			(void *)&((*mqd_mem_obj)->cpu_ptr), true);
+ 	} else
+ 		retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct v9_mqd),
+ 				mqd_mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 5e3990bb4c4b..c4de9b2baf1c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -796,6 +796,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu);
+ int kfd_topology_remove_device(struct kfd_dev *gpu);
+ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 						uint32_t proximity_domain);
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev);
+ int kfd_topology_enum_kfd_devices(uint8_t idx, struct kfd_dev **kdev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index bc95d4dfee2e..80f5db4ef75f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -63,22 +63,33 @@ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 	return device;
+ }
+ 
+-struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id)
+ {
+-	struct kfd_topology_device *top_dev;
+-	struct kfd_dev *device = NULL;
++	struct kfd_topology_device *top_dev = NULL;
++	struct kfd_topology_device *ret = NULL;
+ 
+ 	down_read(&topology_lock);
+ 
+ 	list_for_each_entry(top_dev, &topology_device_list, list)
+ 		if (top_dev->gpu_id == gpu_id) {
+-			device = top_dev->gpu;
++			ret = top_dev;
+ 			break;
+ 		}
+ 
+ 	up_read(&topology_lock);
+ 
+-	return device;
++	return ret;
++}
++
++struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++{
++	struct kfd_topology_device *top_dev;
++
++	top_dev = kfd_topology_device_by_id(gpu_id);
++	if (!top_dev)
++		return NULL;
++
++	return top_dev->gpu;
+ }
+ 
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev)
+diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+index 5733fbee07f7..f56b7553e5ed 100644
+--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
++++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+@@ -266,7 +266,7 @@ struct tile_config {
+ struct kfd2kgd_calls {
+ 	int (*init_gtt_mem_allocation)(struct kgd_dev *kgd, size_t size,
+ 					void **mem_obj, uint64_t *gpu_addr,
+-					void **cpu_ptr);
++					void **cpu_ptr, bool mqd_gfx9);
+ 
+ 	void (*free_gtt_mem)(struct kgd_dev *kgd, void *mem_obj);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 7a12d75e5157..c3c8c84da113 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -875,9 +875,22 @@ static enum drm_connector_status
+ nv50_mstc_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct nv50_mstc *mstc = nv50_mstc(connector);
++	enum drm_connector_status conn_status;
++	int ret;
++
+ 	if (!mstc->port)
+ 		return connector_status_disconnected;
+-	return drm_dp_mst_detect_port(connector, mstc->port->mgr, mstc->port);
++
++	ret = pm_runtime_get_sync(connector->dev->dev);
++	if (ret < 0 && ret != -EACCES)
++		return connector_status_disconnected;
++
++	conn_status = drm_dp_mst_detect_port(connector, mstc->port->mgr,
++					     mstc->port);
++
++	pm_runtime_mark_last_busy(connector->dev->dev);
++	pm_runtime_put_autosuspend(connector->dev->dev);
++	return conn_status;
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/pl111/pl111_vexpress.c b/drivers/gpu/drm/pl111/pl111_vexpress.c
+index a534b225e31b..5fa0441bb6df 100644
+--- a/drivers/gpu/drm/pl111/pl111_vexpress.c
++++ b/drivers/gpu/drm/pl111/pl111_vexpress.c
+@@ -111,7 +111,8 @@ static int vexpress_muxfpga_probe(struct platform_device *pdev)
+ }
+ 
+ static const struct of_device_id vexpress_muxfpga_match[] = {
+-	{ .compatible = "arm,vexpress-muxfpga", }
++	{ .compatible = "arm,vexpress-muxfpga", },
++	{}
+ };
+ 
+ static struct platform_driver vexpress_muxfpga_driver = {
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index b89e8379d898..8859f5572885 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -207,8 +207,6 @@ superio_exit(int ioreg)
+ 
+ #define NUM_FAN		7
+ 
+-#define TEMP_SOURCE_VIRTUAL	0x1f
+-
+ /* Common and NCT6775 specific data */
+ 
+ /* Voltage min/max registers for nr=7..14 are in bank 5 */
+@@ -299,8 +297,9 @@ static const u16 NCT6775_REG_PWM_READ[] = {
+ 
+ static const u16 NCT6775_REG_FAN[] = { 0x630, 0x632, 0x634, 0x636, 0x638 };
+ static const u16 NCT6775_REG_FAN_MIN[] = { 0x3b, 0x3c, 0x3d };
+-static const u16 NCT6775_REG_FAN_PULSES[] = { 0x641, 0x642, 0x643, 0x644, 0 };
+-static const u16 NCT6775_FAN_PULSE_SHIFT[] = { 0, 0, 0, 0, 0, 0 };
++static const u16 NCT6775_REG_FAN_PULSES[NUM_FAN] = {
++	0x641, 0x642, 0x643, 0x644 };
++static const u16 NCT6775_FAN_PULSE_SHIFT[NUM_FAN] = { };
+ 
+ static const u16 NCT6775_REG_TEMP[] = {
+ 	0x27, 0x150, 0x250, 0x62b, 0x62c, 0x62d };
+@@ -373,6 +372,7 @@ static const char *const nct6775_temp_label[] = {
+ };
+ 
+ #define NCT6775_TEMP_MASK	0x001ffffe
++#define NCT6775_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6775_REG_TEMP_ALTERNATE[32] = {
+ 	[13] = 0x661,
+@@ -425,8 +425,8 @@ static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 };
+ 
+ static const u16 NCT6776_REG_FAN_MIN[] = {
+ 	0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
+-static const u16 NCT6776_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++static const u16 NCT6776_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6776_REG_WEIGHT_DUTY_BASE[] = {
+ 	0x13e, 0x23e, 0x33e, 0x83e, 0x93e, 0xa3e };
+@@ -461,6 +461,7 @@ static const char *const nct6776_temp_label[] = {
+ };
+ 
+ #define NCT6776_TEMP_MASK	0x007ffffe
++#define NCT6776_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6776_REG_TEMP_ALTERNATE[32] = {
+ 	[14] = 0x401,
+@@ -501,9 +502,9 @@ static const s8 NCT6779_BEEP_BITS[] = {
+ 	30, 31 };			/* intrusion0, intrusion1 */
+ 
+ static const u16 NCT6779_REG_FAN[] = {
+-	0x4b0, 0x4b2, 0x4b4, 0x4b6, 0x4b8, 0x4ba, 0x660 };
+-static const u16 NCT6779_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++	0x4c0, 0x4c2, 0x4c4, 0x4c6, 0x4c8, 0x4ca, 0x4ce };
++static const u16 NCT6779_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6779_REG_CRITICAL_PWM_ENABLE[] = {
+ 	0x136, 0x236, 0x336, 0x836, 0x936, 0xa36, 0xb36 };
+@@ -559,7 +560,9 @@ static const char *const nct6779_temp_label[] = {
+ };
+ 
+ #define NCT6779_TEMP_MASK	0x07ffff7e
++#define NCT6779_VIRT_TEMP_MASK	0x00000000
+ #define NCT6791_TEMP_MASK	0x87ffff7e
++#define NCT6791_VIRT_TEMP_MASK	0x80000000
+ 
+ static const u16 NCT6779_REG_TEMP_ALTERNATE[32]
+ 	= { 0x490, 0x491, 0x492, 0x493, 0x494, 0x495, 0, 0,
+@@ -638,6 +641,7 @@ static const char *const nct6792_temp_label[] = {
+ };
+ 
+ #define NCT6792_TEMP_MASK	0x9fffff7e
++#define NCT6792_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6793_temp_label[] = {
+ 	"",
+@@ -675,6 +679,7 @@ static const char *const nct6793_temp_label[] = {
+ };
+ 
+ #define NCT6793_TEMP_MASK	0xbfff037e
++#define NCT6793_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6795_temp_label[] = {
+ 	"",
+@@ -712,6 +717,7 @@ static const char *const nct6795_temp_label[] = {
+ };
+ 
+ #define NCT6795_TEMP_MASK	0xbfffff7e
++#define NCT6795_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6796_temp_label[] = {
+ 	"",
+@@ -724,8 +730,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"AUXTIN4",
+ 	"SMBUSMASTER 0",
+ 	"SMBUSMASTER 1",
+-	"",
+-	"",
++	"Virtual_TEMP",
++	"Virtual_TEMP",
+ 	"",
+ 	"",
+ 	"",
+@@ -748,7 +754,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"Virtual_TEMP"
+ };
+ 
+-#define NCT6796_TEMP_MASK	0xbfff03fe
++#define NCT6796_TEMP_MASK	0xbfff0ffe
++#define NCT6796_VIRT_TEMP_MASK	0x80000c00
+ 
+ /* NCT6102D/NCT6106D specific data */
+ 
+@@ -779,8 +786,8 @@ static const u16 NCT6106_REG_TEMP_CONFIG[] = {
+ 
+ static const u16 NCT6106_REG_FAN[] = { 0x20, 0x22, 0x24 };
+ static const u16 NCT6106_REG_FAN_MIN[] = { 0xe0, 0xe2, 0xe4 };
+-static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6, 0, 0 };
+-static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4, 0, 0 };
++static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6 };
++static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4 };
+ 
+ static const u8 NCT6106_REG_PWM_MODE[] = { 0xf3, 0xf3, 0xf3 };
+ static const u8 NCT6106_PWM_MODE_MASK[] = { 0x01, 0x02, 0x04 };
+@@ -917,6 +924,11 @@ static unsigned int fan_from_reg16(u16 reg, unsigned int divreg)
+ 	return 1350000U / (reg << divreg);
+ }
+ 
++static unsigned int fan_from_reg_rpm(u16 reg, unsigned int divreg)
++{
++	return reg;
++}
++
+ static u16 fan_to_reg(u32 fan, unsigned int divreg)
+ {
+ 	if (!fan)
+@@ -969,6 +981,7 @@ struct nct6775_data {
+ 	u16 reg_temp_config[NUM_TEMP];
+ 	const char * const *temp_label;
+ 	u32 temp_mask;
++	u32 virt_temp_mask;
+ 
+ 	u16 REG_CONFIG;
+ 	u16 REG_VBAT;
+@@ -1276,11 +1289,11 @@ static bool is_word_sized(struct nct6775_data *data, u16 reg)
+ 	case nct6795:
+ 	case nct6796:
+ 		return reg == 0x150 || reg == 0x153 || reg == 0x155 ||
+-		  ((reg & 0xfff0) == 0x4b0 && (reg & 0x000f) < 0x0b) ||
++		  (reg & 0xfff0) == 0x4c0 ||
+ 		  reg == 0x402 ||
+ 		  reg == 0x63a || reg == 0x63c || reg == 0x63e ||
+ 		  reg == 0x640 || reg == 0x642 || reg == 0x64a ||
+-		  reg == 0x64c || reg == 0x660 ||
++		  reg == 0x64c ||
+ 		  reg == 0x73 || reg == 0x75 || reg == 0x77 || reg == 0x79 ||
+ 		  reg == 0x7b || reg == 0x7d;
+ 	}
+@@ -1682,9 +1695,13 @@ static struct nct6775_data *nct6775_update_device(struct device *dev)
+ 			if (data->has_fan_min & BIT(i))
+ 				data->fan_min[i] = nct6775_read_value(data,
+ 					   data->REG_FAN_MIN[i]);
+-			data->fan_pulses[i] =
+-			  (nct6775_read_value(data, data->REG_FAN_PULSES[i])
+-				>> data->FAN_PULSE_SHIFT[i]) & 0x03;
++
++			if (data->REG_FAN_PULSES[i]) {
++				data->fan_pulses[i] =
++				  (nct6775_read_value(data,
++						      data->REG_FAN_PULSES[i])
++				   >> data->FAN_PULSE_SHIFT[i]) & 0x03;
++			}
+ 
+ 			nct6775_select_fan_div(dev, data, i, reg);
+ 		}
+@@ -3639,6 +3656,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_VBAT = NCT6106_REG_VBAT;
+ 		data->REG_DIODE = NCT6106_REG_DIODE;
+@@ -3717,6 +3735,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6775_temp_label;
+ 		data->temp_mask = NCT6775_TEMP_MASK;
++		data->virt_temp_mask = NCT6775_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3789,6 +3808,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3853,7 +3873,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6779_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3861,6 +3881,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6779_temp_label;
+ 		data->temp_mask = NCT6779_TEMP_MASK;
++		data->virt_temp_mask = NCT6779_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3933,7 +3954,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6791_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3944,22 +3965,27 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		case nct6791:
+ 			data->temp_label = nct6779_temp_label;
+ 			data->temp_mask = NCT6791_TEMP_MASK;
++			data->virt_temp_mask = NCT6791_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6792:
+ 			data->temp_label = nct6792_temp_label;
+ 			data->temp_mask = NCT6792_TEMP_MASK;
++			data->virt_temp_mask = NCT6792_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6793:
+ 			data->temp_label = nct6793_temp_label;
+ 			data->temp_mask = NCT6793_TEMP_MASK;
++			data->virt_temp_mask = NCT6793_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6795:
+ 			data->temp_label = nct6795_temp_label;
+ 			data->temp_mask = NCT6795_TEMP_MASK;
++			data->virt_temp_mask = NCT6795_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6796:
+ 			data->temp_label = nct6796_temp_label;
+ 			data->temp_mask = NCT6796_TEMP_MASK;
++			data->virt_temp_mask = NCT6796_VIRT_TEMP_MASK;
+ 			break;
+ 		}
+ 
+@@ -4143,7 +4169,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		 * for each fan reflects a different temperature, and there
+ 		 * are no duplicates.
+ 		 */
+-		if (src != TEMP_SOURCE_VIRTUAL) {
++		if (!(data->virt_temp_mask & BIT(src))) {
+ 			if (mask & BIT(src))
+ 				continue;
+ 			mask |= BIT(src);
+diff --git a/drivers/i2c/busses/i2c-scmi.c b/drivers/i2c/busses/i2c-scmi.c
+index a01389b85f13..7e9a2bbf5ddc 100644
+--- a/drivers/i2c/busses/i2c-scmi.c
++++ b/drivers/i2c/busses/i2c-scmi.c
+@@ -152,6 +152,7 @@ acpi_smbus_cmi_access(struct i2c_adapter *adap, u16 addr, unsigned short flags,
+ 			mt_params[3].type = ACPI_TYPE_INTEGER;
+ 			mt_params[3].integer.value = len;
+ 			mt_params[4].type = ACPI_TYPE_BUFFER;
++			mt_params[4].buffer.length = len;
+ 			mt_params[4].buffer.pointer = data->block + 1;
+ 		}
+ 		break;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index cd620e009bad..d4b9db487b16 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -231,6 +231,7 @@ static const struct xpad_device {
+ 	{ 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
+@@ -530,6 +531,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init1),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init2),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init1),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index a39ae8f45e32..32379e0ac536 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3492,14 +3492,13 @@ static int __init dm_cache_init(void)
+ 	int r;
+ 
+ 	migration_cache = KMEM_CACHE(dm_cache_migration, 0);
+-	if (!migration_cache) {
+-		dm_unregister_target(&cache_target);
++	if (!migration_cache)
+ 		return -ENOMEM;
+-	}
+ 
+ 	r = dm_register_target(&cache_target);
+ 	if (r) {
+ 		DMERR("cache target registration failed: %d", r);
++		kmem_cache_destroy(migration_cache);
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 21d126a5078c..32aabe27b37c 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -467,7 +467,9 @@ static int flakey_iterate_devices(struct dm_target *ti, iterate_devices_callout_
+ static struct target_type flakey_target = {
+ 	.name   = "flakey",
+ 	.version = {1, 5, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
+ 	.features = DM_TARGET_ZONED_HM,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = flakey_ctr,
+ 	.dtr    = flakey_dtr,
+diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
+index d10964d41fd7..2f7c44a006c4 100644
+--- a/drivers/md/dm-linear.c
++++ b/drivers/md/dm-linear.c
+@@ -102,6 +102,7 @@ static int linear_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_REMAPPED;
+ }
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
+ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 			 blk_status_t *error)
+ {
+@@ -112,6 +113,7 @@ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 
+ 	return DM_ENDIO_DONE;
+ }
++#endif
+ 
+ static void linear_status(struct dm_target *ti, status_type_t type,
+ 			  unsigned status_flags, char *result, unsigned maxlen)
+@@ -208,12 +210,16 @@ static size_t linear_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
+ static struct target_type linear_target = {
+ 	.name   = "linear",
+ 	.version = {1, 4, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
++	.end_io = linear_end_io,
+ 	.features = DM_TARGET_PASSES_INTEGRITY | DM_TARGET_ZONED_HM,
++#else
++	.features = DM_TARGET_PASSES_INTEGRITY,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = linear_ctr,
+ 	.dtr    = linear_dtr,
+ 	.map    = linear_map,
+-	.end_io = linear_end_io,
+ 	.status = linear_status,
+ 	.prepare_ioctl = linear_prepare_ioctl,
+ 	.iterate_devices = linear_iterate_devices,
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b0dd7027848b..4ad8312d5b8d 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1153,12 +1153,14 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+ 
+ /*
+- * The zone descriptors obtained with a zone report indicate
+- * zone positions within the target device. The zone descriptors
+- * must be remapped to match their position within the dm device.
+- * A target may call dm_remap_zone_report after completion of a
+- * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained
+- * from the target device mapping to the dm device.
++ * The zone descriptors obtained with a zone report indicate zone positions
++ * within the target backing device, regardless of that device is a partition
++ * and regardless of the target mapping start sector on the device or partition.
++ * The zone descriptors start sector and write pointer position must be adjusted
++ * to match their relative position within the dm device.
++ * A target may call dm_remap_zone_report() after completion of a
++ * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained from the
++ * backing device.
+  */
+ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ {
+@@ -1169,6 +1171,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	struct blk_zone *zone;
+ 	unsigned int nr_rep = 0;
+ 	unsigned int ofst;
++	sector_t part_offset;
+ 	struct bio_vec bvec;
+ 	struct bvec_iter iter;
+ 	void *addr;
+@@ -1176,6 +1179,15 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	if (bio->bi_status)
+ 		return;
+ 
++	/*
++	 * bio sector was incremented by the request size on completion. Taking
++	 * into account the original request sector, the target start offset on
++	 * the backing device and the target mapping offset (ti->begin), the
++	 * start sector of the backing device. The partition offset is always 0
++	 * if the target uses a whole device.
++	 */
++	part_offset = bio->bi_iter.bi_sector + ti->begin - (start + bio_end_sector(report_bio));
++
+ 	/*
+ 	 * Remap the start sector of the reported zones. For sequential zones,
+ 	 * also remap the write pointer position.
+@@ -1193,6 +1205,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 		/* Set zones start sector */
+ 		while (hdr->nr_zones && ofst < bvec.bv_len) {
+ 			zone = addr + ofst;
++			zone->start -= part_offset;
+ 			if (zone->start >= start + ti->len) {
+ 				hdr->nr_zones = 0;
+ 				break;
+@@ -1204,7 +1217,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 				else if (zone->cond == BLK_ZONE_COND_EMPTY)
+ 					zone->wp = zone->start;
+ 				else
+-					zone->wp = zone->wp + ti->begin - start;
++					zone->wp = zone->wp + ti->begin - start - part_offset;
+ 			}
+ 			ofst += sizeof(struct blk_zone);
+ 			hdr->nr_zones--;
+diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
+index e11ab12fbdf2..800986a79704 100644
+--- a/drivers/mfd/omap-usb-host.c
++++ b/drivers/mfd/omap-usb-host.c
+@@ -528,8 +528,8 @@ static int usbhs_omap_get_dt_pdata(struct device *dev,
+ }
+ 
+ static const struct of_device_id usbhs_child_match_table[] = {
+-	{ .compatible = "ti,omap-ehci", },
+-	{ .compatible = "ti,omap-ohci", },
++	{ .compatible = "ti,ehci-omap", },
++	{ .compatible = "ti,ohci-omap3", },
+ 	{ }
+ };
+ 
+@@ -855,6 +855,7 @@ static struct platform_driver usbhs_omap_driver = {
+ 		.pm		= &usbhsomap_dev_pm_ops,
+ 		.of_match_table = usbhs_omap_dt_ids,
+ 	},
++	.probe		= usbhs_omap_probe,
+ 	.remove		= usbhs_omap_remove,
+ };
+ 
+@@ -864,9 +865,9 @@ MODULE_ALIAS("platform:" USBHS_DRIVER_NAME);
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("usb host common core driver for omap EHCI and OHCI");
+ 
+-static int __init omap_usbhs_drvinit(void)
++static int omap_usbhs_drvinit(void)
+ {
+-	return platform_driver_probe(&usbhs_omap_driver, usbhs_omap_probe);
++	return platform_driver_register(&usbhs_omap_driver);
+ }
+ 
+ /*
+@@ -878,7 +879,7 @@ static int __init omap_usbhs_drvinit(void)
+  */
+ fs_initcall_sync(omap_usbhs_drvinit);
+ 
+-static void __exit omap_usbhs_drvexit(void)
++static void omap_usbhs_drvexit(void)
+ {
+ 	platform_driver_unregister(&usbhs_omap_driver);
+ }
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index a0b9102c4c6e..e201ccb3fda4 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1370,6 +1370,16 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+ 		brq->data.blocks = card->host->max_blk_count;
+ 
+ 	if (brq->data.blocks > 1) {
++		/*
++		 * Some SD cards in SPI mode return a CRC error or even lock up
++		 * completely when trying to read the last block using a
++		 * multiblock read command.
++		 */
++		if (mmc_host_is_spi(card->host) && (rq_data_dir(req) == READ) &&
++		    (blk_rq_pos(req) + blk_rq_sectors(req) ==
++		     get_capacity(md->disk)))
++			brq->data.blocks--;
++
+ 		/*
+ 		 * After a read error, we redo the request one sector
+ 		 * at a time in order to accurately determine which
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 217b790d22ed..2b01180be834 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -210,6 +210,7 @@ static void bond_get_stats(struct net_device *bond_dev,
+ static void bond_slave_arr_handler(struct work_struct *work);
+ static bool bond_time_in_interval(struct bonding *bond, unsigned long last_act,
+ 				  int mod);
++static void bond_netdev_notify_work(struct work_struct *work);
+ 
+ /*---------------------------- General routines -----------------------------*/
+ 
+@@ -1177,9 +1178,27 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
+ 		}
+ 	}
+ 
+-	/* don't change skb->dev for link-local packets */
+-	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))
++	/* Link-local multicast packets should be passed to the
++	 * stack on the link they arrive as well as pass them to the
++	 * bond-master device. These packets are mostly usable when
++	 * stack receives it with the link on which they arrive
++	 * (e.g. LLDP) they also must be available on master. Some of
++	 * the use cases include (but are not limited to): LLDP agents
++	 * that must be able to operate both on enslaved interfaces as
++	 * well as on bonds themselves; linux bridges that must be able
++	 * to process/pass BPDUs from attached bonds when any kind of
++	 * STP version is enabled on the network.
++	 */
++	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) {
++		struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
++
++		if (nskb) {
++			nskb->dev = bond->dev;
++			nskb->queue_mapping = 0;
++			netif_rx(nskb);
++		}
+ 		return RX_HANDLER_PASS;
++	}
+ 	if (bond_should_deliver_exact_match(skb, slave, bond))
+ 		return RX_HANDLER_EXACT;
+ 
+@@ -1276,6 +1295,8 @@ static struct slave *bond_alloc_slave(struct bonding *bond)
+ 			return NULL;
+ 		}
+ 	}
++	INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
++
+ 	return slave;
+ }
+ 
+@@ -1283,6 +1304,7 @@ static void bond_free_slave(struct slave *slave)
+ {
+ 	struct bonding *bond = bond_get_bond_by_slave(slave);
+ 
++	cancel_delayed_work_sync(&slave->notify_work);
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		kfree(SLAVE_AD_INFO(slave));
+ 
+@@ -1304,39 +1326,26 @@ static void bond_fill_ifslave(struct slave *slave, struct ifslave *info)
+ 	info->link_failure_count = slave->link_failure_count;
+ }
+ 
+-static void bond_netdev_notify(struct net_device *dev,
+-			       struct netdev_bonding_info *info)
+-{
+-	rtnl_lock();
+-	netdev_bonding_info_change(dev, info);
+-	rtnl_unlock();
+-}
+-
+ static void bond_netdev_notify_work(struct work_struct *_work)
+ {
+-	struct netdev_notify_work *w =
+-		container_of(_work, struct netdev_notify_work, work.work);
++	struct slave *slave = container_of(_work, struct slave,
++					   notify_work.work);
++
++	if (rtnl_trylock()) {
++		struct netdev_bonding_info binfo;
+ 
+-	bond_netdev_notify(w->dev, &w->bonding_info);
+-	dev_put(w->dev);
+-	kfree(w);
++		bond_fill_ifslave(slave, &binfo.slave);
++		bond_fill_ifbond(slave->bond, &binfo.master);
++		netdev_bonding_info_change(slave->dev, &binfo);
++		rtnl_unlock();
++	} else {
++		queue_delayed_work(slave->bond->wq, &slave->notify_work, 1);
++	}
+ }
+ 
+ void bond_queue_slave_event(struct slave *slave)
+ {
+-	struct bonding *bond = slave->bond;
+-	struct netdev_notify_work *nnw = kzalloc(sizeof(*nnw), GFP_ATOMIC);
+-
+-	if (!nnw)
+-		return;
+-
+-	dev_hold(slave->dev);
+-	nnw->dev = slave->dev;
+-	bond_fill_ifslave(slave, &nnw->bonding_info.slave);
+-	bond_fill_ifbond(bond, &nnw->bonding_info.master);
+-	INIT_DELAYED_WORK(&nnw->work, bond_netdev_notify_work);
+-
+-	queue_delayed_work(slave->bond->wq, &nnw->work, 0);
++	queue_delayed_work(slave->bond->wq, &slave->notify_work, 0);
+ }
+ 
+ void bond_lower_state_changed(struct slave *slave)
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d93c790bfbe8..ad534b90ef21 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1107,7 +1107,7 @@ void b53_vlan_add(struct dsa_switch *ds, int port,
+ 		b53_get_vlan_entry(dev, vid, vl);
+ 
+ 		vl->members |= BIT(port);
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag |= BIT(port);
+ 		else
+ 			vl->untag &= ~BIT(port);
+@@ -1149,7 +1149,7 @@ int b53_vlan_del(struct dsa_switch *ds, int port,
+ 				pvid = 0;
+ 		}
+ 
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag &= ~(BIT(port));
+ 
+ 		b53_set_vlan_entry(dev, vid, vl);
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 02e8982519ce..d73204767cbe 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -698,7 +698,6 @@ static int bcm_sf2_sw_suspend(struct dsa_switch *ds)
+ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+-	unsigned int port;
+ 	int ret;
+ 
+ 	ret = bcm_sf2_sw_rst(priv);
+@@ -710,14 +709,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ 	if (priv->hw_params.num_gphy == 1)
+ 		bcm_sf2_gphy_enable_set(ds, true);
+ 
+-	for (port = 0; port < DSA_MAX_PORTS; port++) {
+-		if (dsa_is_user_port(ds, port))
+-			bcm_sf2_port_setup(ds, port, NULL);
+-		else if (dsa_is_cpu_port(ds, port))
+-			bcm_sf2_imp_setup(ds, port);
+-	}
+-
+-	bcm_sf2_enable_acb(ds);
++	ds->ops->setup(ds);
+ 
+ 	return 0;
+ }
+@@ -1168,10 +1160,10 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
+ {
+ 	struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
+ 
+-	/* Disable all ports and interrupts */
+ 	priv->wol_ports_mask = 0;
+-	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	dsa_unregister_switch(priv->dev->ds);
++	/* Disable all ports and interrupts */
++	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	bcm_sf2_mdio_unregister(priv);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index b5f1f62e8e25..d1e1a0ba8615 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -225,9 +225,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		}
+ 
+ 		/* for single fragment packets use build_skb() */
+-		if (buff->is_eop) {
++		if (buff->is_eop &&
++		    buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
+ 			skb = build_skb(page_address(buff->page),
+-					buff->len + AQ_SKB_ALIGN);
++					AQ_CFG_RX_FRAME_MAX);
+ 			if (unlikely(!skb)) {
+ 				err = -ENOMEM;
+ 				goto err_exit;
+@@ -247,18 +248,21 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 					buff->len - ETH_HLEN,
+ 					SKB_TRUESIZE(buff->len - ETH_HLEN));
+ 
+-			for (i = 1U, next_ = buff->next,
+-			     buff_ = &self->buff_ring[next_]; true;
+-			     next_ = buff_->next,
+-			     buff_ = &self->buff_ring[next_], ++i) {
+-				skb_add_rx_frag(skb, i, buff_->page, 0,
+-						buff_->len,
+-						SKB_TRUESIZE(buff->len -
+-						ETH_HLEN));
+-				buff_->is_cleaned = 1;
+-
+-				if (buff_->is_eop)
+-					break;
++			if (!buff->is_eop) {
++				for (i = 1U, next_ = buff->next,
++				     buff_ = &self->buff_ring[next_];
++				     true; next_ = buff_->next,
++				     buff_ = &self->buff_ring[next_], ++i) {
++					skb_add_rx_frag(skb, i,
++							buff_->page, 0,
++							buff_->len,
++							SKB_TRUESIZE(buff->len -
++							ETH_HLEN));
++					buff_->is_cleaned = 1;
++
++					if (buff_->is_eop)
++						break;
++				}
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index a1f60f89e059..7a03ee45840e 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1045,14 +1045,22 @@ static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv)
+ {
+ 	u32 reg;
+ 
+-	/* Stop monitoring MPD interrupt */
+-	intrl2_0_mask_set(priv, INTRL2_0_MPD);
+-
+ 	/* Clear the MagicPacket detection logic */
+ 	reg = umac_readl(priv, UMAC_MPD_CTRL);
+ 	reg &= ~MPD_EN;
+ 	umac_writel(priv, reg, UMAC_MPD_CTRL);
+ 
++	reg = intrl2_0_readl(priv, INTRL2_CPU_STATUS);
++	if (reg & INTRL2_0_MPD)
++		netdev_info(priv->netdev, "Wake-on-LAN (MPD) interrupt!\n");
++
++	if (reg & INTRL2_0_BRCM_MATCH_TAG) {
++		reg = rxchk_readl(priv, RXCHK_BRCM_TAG_MATCH_STATUS) &
++				  RXCHK_BRCM_TAG_MATCH_MASK;
++		netdev_info(priv->netdev,
++			    "Wake-on-LAN (filters 0x%02x) interrupt!\n", reg);
++	}
++
+ 	netif_dbg(priv, wol, priv->netdev, "resumed from WOL\n");
+ }
+ 
+@@ -1102,11 +1110,6 @@ static irqreturn_t bcm_sysport_rx_isr(int irq, void *dev_id)
+ 	if (priv->irq0_stat & INTRL2_0_TX_RING_FULL)
+ 		bcm_sysport_tx_reclaim_all(priv);
+ 
+-	if (priv->irq0_stat & INTRL2_0_MPD) {
+-		netdev_info(priv->netdev, "Wake-on-LAN interrupt!\n");
+-		bcm_sysport_resume_from_wol(priv);
+-	}
+-
+ 	if (!priv->is_lite)
+ 		goto out;
+ 
+@@ -2459,9 +2462,6 @@ static int bcm_sysport_suspend_to_wol(struct bcm_sysport_priv *priv)
+ 	/* UniMAC receive needs to be turned on */
+ 	umac_enable_set(priv, CMD_RX_EN, 1);
+ 
+-	/* Enable the interrupt wake-up source */
+-	intrl2_0_mask_clear(priv, INTRL2_0_MPD);
+-
+ 	netif_dbg(priv, wol, ndev, "entered WOL mode\n");
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 80b05597c5fe..33f0861057fd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1882,8 +1882,11 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
+ 			tx_pkts++;
+ 			/* return full budget so NAPI will complete. */
+-			if (unlikely(tx_pkts > bp->tx_wake_thresh))
++			if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
+ 				rx_pkts = budget;
++				raw_cons = NEXT_RAW_CMP(raw_cons);
++				break;
++			}
+ 		} else if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+ 			if (likely(budget))
+ 				rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
+@@ -1911,7 +1914,7 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		}
+ 		raw_cons = NEXT_RAW_CMP(raw_cons);
+ 
+-		if (rx_pkts == budget)
++		if (rx_pkts && rx_pkts == budget)
+ 			break;
+ 	}
+ 
+@@ -2025,8 +2028,12 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
+ 	while (1) {
+ 		work_done += bnxt_poll_work(bp, bnapi, budget - work_done);
+ 
+-		if (work_done >= budget)
++		if (work_done >= budget) {
++			if (!budget)
++				BNXT_CP_DB_REARM(cpr->cp_doorbell,
++						 cpr->cp_raw_cons);
+ 			break;
++		}
+ 
+ 		if (!bnxt_has_work(bp, cpr)) {
+ 			if (napi_complete_done(napi, work_done))
+@@ -3008,10 +3015,11 @@ static void bnxt_free_hwrm_resources(struct bnxt *bp)
+ {
+ 	struct pci_dev *pdev = bp->pdev;
+ 
+-	dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
+-			  bp->hwrm_cmd_resp_dma_addr);
+-
+-	bp->hwrm_cmd_resp_addr = NULL;
++	if (bp->hwrm_cmd_resp_addr) {
++		dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
++				  bp->hwrm_cmd_resp_dma_addr);
++		bp->hwrm_cmd_resp_addr = NULL;
++	}
+ 	if (bp->hwrm_dbg_resp_addr) {
+ 		dma_free_coherent(&pdev->dev, HWRM_DBG_REG_BUF_SIZE,
+ 				  bp->hwrm_dbg_resp_addr,
+@@ -4643,7 +4651,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
+ 				      FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
+ 		enables |= ring_grps ?
+ 			   FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
+-		enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
++		enables |= vnics ? FUNC_CFG_REQ_ENABLES_NUM_VNICS : 0;
+ 
+ 		req->num_rx_rings = cpu_to_le16(rx_rings);
+ 		req->num_hw_ring_grps = cpu_to_le16(ring_grps);
+@@ -8493,7 +8501,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+ 	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
+-			hw_resc->max_irqs);
++			hw_resc->max_irqs - bnxt_get_ulp_msix_num(bp));
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+@@ -8924,6 +8932,7 @@ init_err_cleanup_tc:
+ 	bnxt_clear_int_mode(bp);
+ 
+ init_err_pci_clean:
++	bnxt_free_hwrm_resources(bp);
+ 	bnxt_cleanup_pci(bp);
+ 
+ init_err_free:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+index d5bc72cecde3..3f896acc4ca8 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+@@ -98,13 +98,13 @@ static int bnxt_hwrm_queue_cos2bw_cfg(struct bnxt *bp, struct ieee_ets *ets,
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_COS2BW_CFG, -1, -1);
+ 	for (i = 0; i < max_tc; i++) {
+-		u8 qidx;
++		u8 qidx = bp->tc_to_qidx[i];
+ 
+ 		req.enables |= cpu_to_le32(
+-			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID << i);
++			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID <<
++			qidx);
+ 
+ 		memset(&cos2bw, 0, sizeof(cos2bw));
+-		qidx = bp->tc_to_qidx[i];
+ 		cos2bw.queue_id = bp->q_info[qidx].queue_id;
+ 		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_STRICT) {
+ 			cos2bw.tsa =
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 491bd40a254d..c4c9df029466 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -75,17 +75,23 @@ static int bnxt_tc_parse_redir(struct bnxt *bp,
+ 	return 0;
+ }
+ 
+-static void bnxt_tc_parse_vlan(struct bnxt *bp,
+-			       struct bnxt_tc_actions *actions,
+-			       const struct tc_action *tc_act)
++static int bnxt_tc_parse_vlan(struct bnxt *bp,
++			      struct bnxt_tc_actions *actions,
++			      const struct tc_action *tc_act)
+ {
+-	if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_POP) {
++	switch (tcf_vlan_action(tc_act)) {
++	case TCA_VLAN_ACT_POP:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_POP_VLAN;
+-	} else if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_PUSH) {
++		break;
++	case TCA_VLAN_ACT_PUSH:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_PUSH_VLAN;
+ 		actions->push_vlan_tci = htons(tcf_vlan_push_vid(tc_act));
+ 		actions->push_vlan_tpid = tcf_vlan_push_proto(tc_act);
++		break;
++	default:
++		return -EOPNOTSUPP;
+ 	}
++	return 0;
+ }
+ 
+ static int bnxt_tc_parse_tunnel_set(struct bnxt *bp,
+@@ -136,7 +142,9 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
+ 
+ 		/* Push/pop VLAN */
+ 		if (is_tcf_vlan(tc_act)) {
+-			bnxt_tc_parse_vlan(bp, actions, tc_act);
++			rc = bnxt_tc_parse_vlan(bp, actions, tc_act);
++			if (rc)
++				return rc;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c4d7479938e2..dfa045f22ef1 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3765,6 +3765,13 @@ static const struct macb_config at91sam9260_config = {
+ 	.init = macb_init,
+ };
+ 
++static const struct macb_config sama5d3macb_config = {
++	.caps = MACB_CAPS_SG_DISABLED
++	      | MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
++	.clk_init = macb_clk_init,
++	.init = macb_init,
++};
++
+ static const struct macb_config pc302gem_config = {
+ 	.caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE,
+ 	.dma_burst_length = 16,
+@@ -3832,6 +3839,7 @@ static const struct of_device_id macb_dt_ids[] = {
+ 	{ .compatible = "cdns,gem", .data = &pc302gem_config },
+ 	{ .compatible = "atmel,sama5d2-gem", .data = &sama5d2_config },
+ 	{ .compatible = "atmel,sama5d3-gem", .data = &sama5d3_config },
++	{ .compatible = "atmel,sama5d3-macb", .data = &sama5d3macb_config },
+ 	{ .compatible = "atmel,sama5d4-gem", .data = &sama5d4_config },
+ 	{ .compatible = "cdns,at91rm9200-emac", .data = &emac_config },
+ 	{ .compatible = "cdns,emac", .data = &emac_config },
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index a051e582d541..79d03f8ee7b1 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -84,7 +84,7 @@ static void hnae_unmap_buffer(struct hnae_ring *ring, struct hnae_desc_cb *cb)
+ 	if (cb->type == DESC_TYPE_SKB)
+ 		dma_unmap_single(ring_to_dev(ring), cb->dma, cb->length,
+ 				 ring_to_dma_dir(ring));
+-	else
++	else if (cb->length)
+ 		dma_unmap_page(ring_to_dev(ring), cb->dma, cb->length,
+ 			       ring_to_dma_dir(ring));
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index b4518f45f048..1336ec73230d 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -40,9 +40,9 @@
+ #define SKB_TMP_LEN(SKB) \
+ 	(((SKB)->transport_header - (SKB)->mac_header) + tcp_hdrlen(SKB))
+ 
+-static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+-			 int size, dma_addr_t dma, int frag_end,
+-			 int buf_num, enum hns_desc_type type, int mtu)
++static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
++			    int send_sz, dma_addr_t dma, int frag_end,
++			    int buf_num, enum hns_desc_type type, int mtu)
+ {
+ 	struct hnae_desc *desc = &ring->desc[ring->next_to_use];
+ 	struct hnae_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
+@@ -64,7 +64,7 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	desc_cb->type = type;
+ 
+ 	desc->addr = cpu_to_le64(dma);
+-	desc->tx.send_size = cpu_to_le16((u16)size);
++	desc->tx.send_size = cpu_to_le16((u16)send_sz);
+ 
+ 	/* config bd buffer end */
+ 	hnae_set_bit(rrcfv, HNSV2_TXD_VLD_B, 1);
+@@ -133,6 +133,14 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	ring_ptr_move_fw(ring, next_to_use);
+ }
+ 
++static void fill_v2_desc(struct hnae_ring *ring, void *priv,
++			 int size, dma_addr_t dma, int frag_end,
++			 int buf_num, enum hns_desc_type type, int mtu)
++{
++	fill_v2_desc_hw(ring, priv, size, size, dma, frag_end,
++			buf_num, type, mtu);
++}
++
+ static const struct acpi_device_id hns_enet_acpi_match[] = {
+ 	{ "HISI00C1", 0 },
+ 	{ "HISI00C2", 0 },
+@@ -289,15 +297,15 @@ static void fill_tso_desc(struct hnae_ring *ring, void *priv,
+ 
+ 	/* when the frag size is bigger than hardware, split this frag */
+ 	for (k = 0; k < frag_buf_num; k++)
+-		fill_v2_desc(ring, priv,
+-			     (k == frag_buf_num - 1) ?
++		fill_v2_desc_hw(ring, priv, k == 0 ? size : 0,
++				(k == frag_buf_num - 1) ?
+ 					sizeoflast : BD_MAX_SEND_SIZE,
+-			     dma + BD_MAX_SEND_SIZE * k,
+-			     frag_end && (k == frag_buf_num - 1) ? 1 : 0,
+-			     buf_num,
+-			     (type == DESC_TYPE_SKB && !k) ?
++				dma + BD_MAX_SEND_SIZE * k,
++				frag_end && (k == frag_buf_num - 1) ? 1 : 0,
++				buf_num,
++				(type == DESC_TYPE_SKB && !k) ?
+ 					DESC_TYPE_SKB : DESC_TYPE_PAGE,
+-			     mtu);
++				mtu);
+ }
+ 
+ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b8bba64673e5..3986ef83111b 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1725,7 +1725,7 @@ static void mvpp2_txq_desc_put(struct mvpp2_tx_queue *txq)
+ }
+ 
+ /* Set Tx descriptors fields relevant for CSUM calculation */
+-static u32 mvpp2_txq_desc_csum(int l3_offs, int l3_proto,
++static u32 mvpp2_txq_desc_csum(int l3_offs, __be16 l3_proto,
+ 			       int ip_hdr_len, int l4_proto)
+ {
+ 	u32 command;
+@@ -2600,14 +2600,15 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		int ip_hdr_len = 0;
+ 		u8 l4_proto;
++		__be16 l3_proto = vlan_get_protocol(skb);
+ 
+-		if (skb->protocol == htons(ETH_P_IP)) {
++		if (l3_proto == htons(ETH_P_IP)) {
+ 			struct iphdr *ip4h = ip_hdr(skb);
+ 
+ 			/* Calculate IPv4 checksum and L4 checksum */
+ 			ip_hdr_len = ip4h->ihl;
+ 			l4_proto = ip4h->protocol;
+-		} else if (skb->protocol == htons(ETH_P_IPV6)) {
++		} else if (l3_proto == htons(ETH_P_IPV6)) {
+ 			struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ 
+ 			/* Read l4_protocol from one of IPv6 extra headers */
+@@ -2619,7 +2620,7 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 		}
+ 
+ 		return mvpp2_txq_desc_csum(skb_network_offset(skb),
+-				skb->protocol, ip_hdr_len, l4_proto);
++					   l3_proto, ip_hdr_len, l4_proto);
+ 	}
+ 
+ 	return MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE;
+@@ -3055,10 +3056,12 @@ static int mvpp2_poll(struct napi_struct *napi, int budget)
+ 				   cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK);
+ 	}
+ 
+-	cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
+-	if (cause_tx) {
+-		cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
+-		mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++	if (port->has_tx_irqs) {
++		cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
++		if (cause_tx) {
++			cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
++			mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++		}
+ 	}
+ 
+ 	/* Process RX packets */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index dfbcda0d0e08..701af5ffcbc9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1339,6 +1339,9 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 
+ 			*match_level = MLX5_MATCH_L2;
+ 		}
++	} else {
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1);
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+ 	}
+ 
+ 	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 40dba9e8af92..69f356f5f8f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2000,7 +2000,7 @@ static u32 calculate_vports_min_rate_divider(struct mlx5_eswitch *esw)
+ 	u32 max_guarantee = 0;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled || evport->info.min_rate < max_guarantee)
+ 			continue;
+@@ -2020,7 +2020,7 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ 	int err;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled)
+ 			continue;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+index dae1c5c5d27c..d2f76070ea7c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+@@ -509,7 +509,7 @@ static int mlx5_hairpin_modify_sq(struct mlx5_core_dev *peer_mdev, u32 sqn,
+ 
+ 	sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx);
+ 
+-	if (next_state == MLX5_RQC_STATE_RDY) {
++	if (next_state == MLX5_SQC_STATE_RDY) {
+ 		MLX5_SET(sqc, sqc, hairpin_peer_rq, peer_rq);
+ 		MLX5_SET(sqc, sqc, hairpin_peer_vhca, peer_vhca);
+ 	}
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index 18df7d934e81..ccfcf3048cd0 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -91,7 +91,7 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 		struct sk_buff *skb;
+ 		struct net_device *dev;
+ 		u32 *buf;
+-		int sz, len;
++		int sz, len, buf_len;
+ 		u32 ifh[4];
+ 		u32 val;
+ 		struct frame_info info;
+@@ -116,14 +116,20 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 			err = -ENOMEM;
+ 			break;
+ 		}
+-		buf = (u32 *)skb_put(skb, info.len);
++		buf_len = info.len - ETH_FCS_LEN;
++		buf = (u32 *)skb_put(skb, buf_len);
+ 
+ 		len = 0;
+ 		do {
+ 			sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
+ 			*buf++ = val;
+ 			len += sz;
+-		} while ((sz == 4) && (len < info.len));
++		} while (len < buf_len);
++
++		/* Read the FCS and discard it */
++		sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
++		/* Update the statistics if part of the FCS was read before */
++		len -= ETH_FCS_LEN - sz;
+ 
+ 		if (sz < 0) {
+ 			err = sz;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index bfccc1955907..80306e4f247c 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -2068,14 +2068,17 @@ nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp,
+ 	return true;
+ }
+ 
+-static void nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
++static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
+ {
+ 	struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring;
+ 	struct nfp_net *nn = r_vec->nfp_net;
+ 	struct nfp_net_dp *dp = &nn->dp;
++	unsigned int budget = 512;
+ 
+-	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring))
++	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--)
+ 		continue;
++
++	return budget;
+ }
+ 
+ static void nfp_ctrl_poll(unsigned long arg)
+@@ -2087,9 +2090,13 @@ static void nfp_ctrl_poll(unsigned long arg)
+ 	__nfp_ctrl_tx_queued(r_vec);
+ 	spin_unlock_bh(&r_vec->lock);
+ 
+-	nfp_ctrl_rx(r_vec);
+-
+-	nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	if (nfp_ctrl_rx(r_vec)) {
++		nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	} else {
++		tasklet_schedule(&r_vec->tasklet);
++		nn_dp_warn(&r_vec->nfp_net->dp,
++			   "control message budget exceeded!\n");
++	}
+ }
+ 
+ /* Setup and Configuration
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index bee10c1781fb..463ffa83685f 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -11987,6 +11987,7 @@ struct public_global {
+ 	u32 running_bundle_id;
+ 	s32 external_temperature;
+ 	u32 mdump_reason;
++	u64 reserved;
+ 	u32 data_ptr;
+ 	u32 data_size;
+ };
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+index 81312924df14..0c443ea98479 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+@@ -1800,7 +1800,8 @@ struct qlcnic_hardware_ops {
+ 	int (*config_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*clear_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*config_promisc_mode) (struct qlcnic_adapter *, u32);
+-	void (*change_l2_filter) (struct qlcnic_adapter *, u64 *, u16);
++	void (*change_l2_filter)(struct qlcnic_adapter *adapter, u64 *addr,
++				 u16 vlan, struct qlcnic_host_tx_ring *tx_ring);
+ 	int (*get_board_info) (struct qlcnic_adapter *);
+ 	void (*set_mac_filter_count) (struct qlcnic_adapter *);
+ 	void (*free_mac_list) (struct qlcnic_adapter *);
+@@ -2064,9 +2065,10 @@ static inline int qlcnic_nic_set_promisc(struct qlcnic_adapter *adapter,
+ }
+ 
+ static inline void qlcnic_change_filter(struct qlcnic_adapter *adapter,
+-					u64 *addr, u16 id)
++					u64 *addr, u16 vlan,
++					struct qlcnic_host_tx_ring *tx_ring)
+ {
+-	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, id);
++	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, vlan, tx_ring);
+ }
+ 
+ static inline int qlcnic_get_board_info(struct qlcnic_adapter *adapter)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index 569d54ededec..a79d84f99102 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -2135,7 +2135,8 @@ out:
+ }
+ 
+ void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
+-				  u16 vlan_id)
++				  u16 vlan_id,
++				  struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	u8 mac[ETH_ALEN];
+ 	memcpy(&mac, addr, ETH_ALEN);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+index b75a81246856..73fe2f64491d 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+@@ -550,7 +550,8 @@ int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32);
+ int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
+ int qlcnic_83xx_config_hw_lro(struct qlcnic_adapter *, int);
+ int qlcnic_83xx_config_rss(struct qlcnic_adapter *, int);
+-void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *, u64 *, u16);
++void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
++				  u16 vlan, struct qlcnic_host_tx_ring *ring);
+ int qlcnic_83xx_get_pci_info(struct qlcnic_adapter *, struct qlcnic_pci_info *);
+ int qlcnic_83xx_set_nic_info(struct qlcnic_adapter *, struct qlcnic_info *);
+ void qlcnic_83xx_initialize_nic(struct qlcnic_adapter *, int);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+index 4bb33af8e2b3..56a3bd9e37dc 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+@@ -173,7 +173,8 @@ int qlcnic_82xx_napi_add(struct qlcnic_adapter *adapter,
+ 			 struct net_device *netdev);
+ void qlcnic_82xx_get_beacon_state(struct qlcnic_adapter *);
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter,
+-			       u64 *uaddr, u16 vlan_id);
++			       u64 *uaddr, u16 vlan_id,
++			       struct qlcnic_host_tx_ring *tx_ring);
+ int qlcnic_82xx_config_intr_coalesce(struct qlcnic_adapter *,
+ 				     struct ethtool_coalesce *);
+ int qlcnic_82xx_set_rx_coalesce(struct qlcnic_adapter *);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+index 84dd83031a1b..9647578cbe6a 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+@@ -268,13 +268,12 @@ static void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter,
+ }
+ 
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+-			       u16 vlan_id)
++			       u16 vlan_id, struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct cmd_desc_type0 *hwdesc;
+ 	struct qlcnic_nic_req *req;
+ 	struct qlcnic_mac_req *mac_req;
+ 	struct qlcnic_vlan_req *vlan_req;
+-	struct qlcnic_host_tx_ring *tx_ring = adapter->tx_ring;
+ 	u32 producer;
+ 	u64 word;
+ 
+@@ -301,7 +300,8 @@ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+ 
+ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 			       struct cmd_desc_type0 *first_desc,
+-			       struct sk_buff *skb)
++			       struct sk_buff *skb,
++			       struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct vlan_ethhdr *vh = (struct vlan_ethhdr *)(skb->data);
+ 	struct ethhdr *phdr = (struct ethhdr *)(skb->data);
+@@ -335,7 +335,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 		    tmp_fil->vlan_id == vlan_id) {
+ 			if (jiffies > (QLCNIC_READD_AGE * HZ + tmp_fil->ftime))
+ 				qlcnic_change_filter(adapter, &src_addr,
+-						     vlan_id);
++						     vlan_id, tx_ring);
+ 			tmp_fil->ftime = jiffies;
+ 			return;
+ 		}
+@@ -350,7 +350,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 	if (!fil)
+ 		return;
+ 
+-	qlcnic_change_filter(adapter, &src_addr, vlan_id);
++	qlcnic_change_filter(adapter, &src_addr, vlan_id, tx_ring);
+ 	fil->ftime = jiffies;
+ 	fil->vlan_id = vlan_id;
+ 	memcpy(fil->faddr, &src_addr, ETH_ALEN);
+@@ -766,7 +766,7 @@ netdev_tx_t qlcnic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+ 	}
+ 
+ 	if (adapter->drv_mac_learn)
+-		qlcnic_send_filter(adapter, first_desc, skb);
++		qlcnic_send_filter(adapter, first_desc, skb, tx_ring);
+ 
+ 	tx_ring->tx_stats.tx_bytes += skb->len;
+ 	tx_ring->tx_stats.xmit_called++;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+index 7fd86d40a337..11167abe5934 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+@@ -113,7 +113,7 @@ rmnet_map_ingress_handler(struct sk_buff *skb,
+ 	struct sk_buff *skbn;
+ 
+ 	if (skb->dev->type == ARPHRD_ETHER) {
+-		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_KERNEL)) {
++		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_ATOMIC)) {
+ 			kfree_skb(skb);
+ 			return;
+ 		}
+@@ -147,7 +147,7 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
+ 	}
+ 
+ 	if (skb_headroom(skb) < required_headroom) {
+-		if (pskb_expand_head(skb, required_headroom, 0, GFP_KERNEL))
++		if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
+ 			return -ENOMEM;
+ 	}
+ 
+@@ -189,6 +189,9 @@ rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb)
+ 	if (!skb)
+ 		goto done;
+ 
++	if (skb->pkt_type == PACKET_LOOPBACK)
++		return RX_HANDLER_PASS;
++
+ 	dev = skb->dev;
+ 	port = rmnet_get_port(dev);
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 1d1e66002232..627c5cd8f786 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -4788,8 +4788,8 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
+ 		RTL_W32(tp, RxConfig, RX_FIFO_THRESH | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_24:
+-	case RTL_GIGA_MAC_VER_34:
+-	case RTL_GIGA_MAC_VER_35:
++	case RTL_GIGA_MAC_VER_34 ... RTL_GIGA_MAC_VER_36:
++	case RTL_GIGA_MAC_VER_38:
+ 		RTL_W32(tp, RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+@@ -5041,9 +5041,14 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 
+ static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+-	/* Set DMA burst size and Interframe Gap Time */
+-	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+-		(InterFrameGap << TxInterFrameGapShift));
++	u32 val = TX_DMA_BURST << TxDMAShift |
++		  InterFrameGap << TxInterFrameGapShift;
++
++	if (tp->mac_version >= RTL_GIGA_MAC_VER_34 &&
++	    tp->mac_version != RTL_GIGA_MAC_VER_39)
++		val |= TXCFG_AUTO_FIFO;
++
++	RTL_W32(tp, TxConfig, val);
+ }
+ 
+ static void rtl_set_rx_max_size(struct rtl8169_private *tp)
+@@ -5530,7 +5535,6 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	/* Adjust EEE LED frequency */
+@@ -5562,7 +5566,6 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 	RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) | PFM_EN);
+ 	RTL_W32(tp, MISC, RTL_R32(tp, MISC) | PWM_EN);
+@@ -5607,8 +5610,6 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp)
+ 
+ static void rtl_hw_start_8168g(struct rtl8169_private *tp)
+ {
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5707,8 +5708,6 @@ static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
+ 	RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~ASPM_en);
+ 	rtl_ephy_init(tp, e_info_8168h_1, ARRAY_SIZE(e_info_8168h_1));
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5789,8 +5788,6 @@ static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
+ {
+ 	rtl8168ep_stop_cmac(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x2f, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x5f, ERIAR_EXGMAC);
+@@ -6108,7 +6105,6 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp)
+ 	/* Force LAN exit from ASPM if Rx/Tx are not idle */
+ 	RTL_W32(tp, FuncEvent, RTL_R32(tp, FuncEvent) | 0x002800);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	rtl_ephy_init(tp, e_info_8402, ARRAY_SIZE(e_info_8402));
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index 78fd0f8b8e81..a15006e2fb29 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -256,10 +256,10 @@ struct stmmac_safety_stats {
+ #define MAX_DMA_RIWT		0xff
+ #define MIN_DMA_RIWT		0x20
+ /* Tx coalesce parameters */
+-#define STMMAC_COAL_TX_TIMER	40000
++#define STMMAC_COAL_TX_TIMER	1000
+ #define STMMAC_MAX_COAL_TX_TICK	100000
+ #define STMMAC_TX_MAX_FRAMES	256
+-#define STMMAC_TX_FRAMES	64
++#define STMMAC_TX_FRAMES	25
+ 
+ /* Packets types */
+ enum packets_types {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index c0a855b7ab3b..63e1064b27a2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -48,6 +48,8 @@ struct stmmac_tx_info {
+ 
+ /* Frequently used values are kept adjacent for cache effect */
+ struct stmmac_tx_queue {
++	u32 tx_count_frames;
++	struct timer_list txtimer;
+ 	u32 queue_index;
+ 	struct stmmac_priv *priv_data;
+ 	struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
+@@ -73,7 +75,14 @@ struct stmmac_rx_queue {
+ 	u32 rx_zeroc_thresh;
+ 	dma_addr_t dma_rx_phy;
+ 	u32 rx_tail_addr;
++};
++
++struct stmmac_channel {
+ 	struct napi_struct napi ____cacheline_aligned_in_smp;
++	struct stmmac_priv *priv_data;
++	u32 index;
++	int has_rx;
++	int has_tx;
+ };
+ 
+ struct stmmac_tc_entry {
+@@ -109,14 +118,12 @@ struct stmmac_pps_cfg {
+ 
+ struct stmmac_priv {
+ 	/* Frequently used values are kept adjacent for cache effect */
+-	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+ 	bool tx_path_in_lpi_mode;
+-	struct timer_list txtimer;
+ 	bool tso;
+ 
+ 	unsigned int dma_buf_sz;
+@@ -137,6 +144,9 @@ struct stmmac_priv {
+ 	/* TX Queue */
+ 	struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES];
+ 
++	/* Generic channel for NAPI */
++	struct stmmac_channel channel[STMMAC_CH_MAX];
++
+ 	bool oldlink;
+ 	int speed;
+ 	int oldduplex;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c579d98b9666..1c6ba74e294b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -147,12 +147,14 @@ static void stmmac_verify_args(void)
+ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_disable(&rx_q->napi);
++		napi_disable(&ch->napi);
+ 	}
+ }
+ 
+@@ -163,12 +165,14 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ static void stmmac_enable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_enable(&rx_q->napi);
++		napi_enable(&ch->napi);
+ 	}
+ }
+ 
+@@ -1822,18 +1826,18 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
+  * @queue: TX queue index
+  * Description: it reclaims the transmit resources after transmission completes.
+  */
+-static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
++static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+ {
+ 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+ 	unsigned int bytes_compl = 0, pkts_compl = 0;
+-	unsigned int entry;
++	unsigned int entry, count = 0;
+ 
+-	netif_tx_lock(priv->dev);
++	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+ 
+ 	priv->xstats.tx_clean++;
+ 
+ 	entry = tx_q->dirty_tx;
+-	while (entry != tx_q->cur_tx) {
++	while ((entry != tx_q->cur_tx) && (count < budget)) {
+ 		struct sk_buff *skb = tx_q->tx_skbuff[entry];
+ 		struct dma_desc *p;
+ 		int status;
+@@ -1849,6 +1853,8 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		if (unlikely(status & tx_dma_own))
+ 			break;
+ 
++		count++;
++
+ 		/* Make sure descriptor fields are read after reading
+ 		 * the own bit.
+ 		 */
+@@ -1916,7 +1922,10 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		stmmac_enable_eee_mode(priv);
+ 		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
+ 	}
+-	netif_tx_unlock(priv->dev);
++
++	__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
++
++	return count;
+ }
+ 
+ /**
+@@ -1999,6 +2008,33 @@ static bool stmmac_safety_feat_interrupt(struct stmmac_priv *priv)
+ 	return false;
+ }
+ 
++static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
++{
++	int status = stmmac_dma_interrupt_status(priv, priv->ioaddr,
++						 &priv->xstats, chan);
++	struct stmmac_channel *ch = &priv->channel[chan];
++	bool needs_work = false;
++
++	if ((status & handle_rx) && ch->has_rx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_rx;
++	}
++
++	if ((status & handle_tx) && ch->has_tx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_tx;
++	}
++
++	if (needs_work && napi_schedule_prep(&ch->napi)) {
++		stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
++		__napi_schedule(&ch->napi);
++	}
++
++	return status;
++}
++
+ /**
+  * stmmac_dma_interrupt - DMA ISR
+  * @priv: driver private structure
+@@ -2013,57 +2049,14 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
+ 	u32 channels_to_check = tx_channel_count > rx_channel_count ?
+ 				tx_channel_count : rx_channel_count;
+ 	u32 chan;
+-	bool poll_scheduled = false;
+ 	int status[max_t(u32, MTL_MAX_TX_QUEUES, MTL_MAX_RX_QUEUES)];
+ 
+ 	/* Make sure we never check beyond our status buffer. */
+ 	if (WARN_ON_ONCE(channels_to_check > ARRAY_SIZE(status)))
+ 		channels_to_check = ARRAY_SIZE(status);
+ 
+-	/* Each DMA channel can be used for rx and tx simultaneously, yet
+-	 * napi_struct is embedded in struct stmmac_rx_queue rather than in a
+-	 * stmmac_channel struct.
+-	 * Because of this, stmmac_poll currently checks (and possibly wakes)
+-	 * all tx queues rather than just a single tx queue.
+-	 */
+ 	for (chan = 0; chan < channels_to_check; chan++)
+-		status[chan] = stmmac_dma_interrupt_status(priv, priv->ioaddr,
+-				&priv->xstats, chan);
+-
+-	for (chan = 0; chan < rx_channel_count; chan++) {
+-		if (likely(status[chan] & handle_rx)) {
+-			struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
+-
+-			if (likely(napi_schedule_prep(&rx_q->napi))) {
+-				stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+-				__napi_schedule(&rx_q->napi);
+-				poll_scheduled = true;
+-			}
+-		}
+-	}
+-
+-	/* If we scheduled poll, we already know that tx queues will be checked.
+-	 * If we didn't schedule poll, see if any DMA channel (used by tx) has a
+-	 * completed transmission, if so, call stmmac_poll (once).
+-	 */
+-	if (!poll_scheduled) {
+-		for (chan = 0; chan < tx_channel_count; chan++) {
+-			if (status[chan] & handle_tx) {
+-				/* It doesn't matter what rx queue we choose
+-				 * here. We use 0 since it always exists.
+-				 */
+-				struct stmmac_rx_queue *rx_q =
+-					&priv->rx_queue[0];
+-
+-				if (likely(napi_schedule_prep(&rx_q->napi))) {
+-					stmmac_disable_dma_irq(priv,
+-							priv->ioaddr, chan);
+-					__napi_schedule(&rx_q->napi);
+-				}
+-				break;
+-			}
+-		}
+-	}
++		status[chan] = stmmac_napi_check(priv, chan);
+ 
+ 	for (chan = 0; chan < tx_channel_count; chan++) {
+ 		if (unlikely(status[chan] & tx_hard_error_bump_tc)) {
+@@ -2193,8 +2186,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 		stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
+ 				    tx_q->dma_tx_phy, chan);
+ 
+-		tx_q->tx_tail_addr = tx_q->dma_tx_phy +
+-			    (DMA_TX_SIZE * sizeof(struct dma_desc));
++		tx_q->tx_tail_addr = tx_q->dma_tx_phy;
+ 		stmmac_set_tx_tail_ptr(priv, priv->ioaddr,
+ 				       tx_q->tx_tail_addr, chan);
+ 	}
+@@ -2212,6 +2204,13 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 	return ret;
+ }
+ 
++static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue)
++{
++	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
++
++	mod_timer(&tx_q->txtimer, STMMAC_COAL_TIMER(priv->tx_coal_timer));
++}
++
+ /**
+  * stmmac_tx_timer - mitigation sw timer for tx.
+  * @data: data pointer
+@@ -2220,13 +2219,14 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+  */
+ static void stmmac_tx_timer(struct timer_list *t)
+ {
+-	struct stmmac_priv *priv = from_timer(priv, t, txtimer);
+-	u32 tx_queues_count = priv->plat->tx_queues_to_use;
+-	u32 queue;
++	struct stmmac_tx_queue *tx_q = from_timer(tx_q, t, txtimer);
++	struct stmmac_priv *priv = tx_q->priv_data;
++	struct stmmac_channel *ch;
++
++	ch = &priv->channel[tx_q->queue_index];
+ 
+-	/* let's scan all the tx queues */
+-	for (queue = 0; queue < tx_queues_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (likely(napi_schedule_prep(&ch->napi)))
++		__napi_schedule(&ch->napi);
+ }
+ 
+ /**
+@@ -2239,11 +2239,17 @@ static void stmmac_tx_timer(struct timer_list *t)
+  */
+ static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+ {
++	u32 tx_channel_count = priv->plat->tx_queues_to_use;
++	u32 chan;
++
+ 	priv->tx_coal_frames = STMMAC_TX_FRAMES;
+ 	priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+-	timer_setup(&priv->txtimer, stmmac_tx_timer, 0);
+-	priv->txtimer.expires = STMMAC_COAL_TIMER(priv->tx_coal_timer);
+-	add_timer(&priv->txtimer);
++
++	for (chan = 0; chan < tx_channel_count; chan++) {
++		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
++
++		timer_setup(&tx_q->txtimer, stmmac_tx_timer, 0);
++	}
+ }
+ 
+ static void stmmac_set_rings_length(struct stmmac_priv *priv)
+@@ -2571,6 +2577,7 @@ static void stmmac_hw_teardown(struct net_device *dev)
+ static int stmmac_open(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 	int ret;
+ 
+ 	stmmac_check_ether_addr(priv);
+@@ -2667,7 +2674,9 @@ irq_error:
+ 	if (dev->phydev)
+ 		phy_stop(dev->phydev);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
++
+ 	stmmac_hw_teardown(dev);
+ init_error:
+ 	free_dma_desc_resources(priv);
+@@ -2687,6 +2696,7 @@ dma_desc_error:
+ static int stmmac_release(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 
+ 	if (priv->eee_enabled)
+ 		del_timer_sync(&priv->eee_ctrl_timer);
+@@ -2701,7 +2711,8 @@ static int stmmac_release(struct net_device *dev)
+ 
+ 	stmmac_disable_all_queues(priv);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
+ 
+ 	/* Free the IRQ lines */
+ 	free_irq(dev->irq, dev);
+@@ -2915,14 +2926,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	priv->xstats.tx_tso_nfrags += nfrags;
+ 
+ 	/* Manage tx mitigation */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -2971,6 +2981,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3125,14 +3136,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * This approach takes care about the fragments: desc is the first
+ 	 * element in case of no SG.
+ 	 */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -3178,6 +3188,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
++
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3298,6 +3310,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ {
+ 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	struct stmmac_channel *ch = &priv->channel[queue];
+ 	unsigned int entry = rx_q->cur_rx;
+ 	int coe = priv->hw->rx_csum;
+ 	unsigned int next_entry;
+@@ -3467,7 +3480,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			else
+ 				skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+-			napi_gro_receive(&rx_q->napi, skb);
++			napi_gro_receive(&ch->napi, skb);
+ 
+ 			priv->dev->stats.rx_packets++;
+ 			priv->dev->stats.rx_bytes += frame_len;
+@@ -3490,27 +3503,33 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+  *  Description :
+  *  To look at the incoming frames and clear the tx resources.
+  */
+-static int stmmac_poll(struct napi_struct *napi, int budget)
++static int stmmac_napi_poll(struct napi_struct *napi, int budget)
+ {
+-	struct stmmac_rx_queue *rx_q =
+-		container_of(napi, struct stmmac_rx_queue, napi);
+-	struct stmmac_priv *priv = rx_q->priv_data;
+-	u32 tx_count = priv->plat->tx_queues_to_use;
+-	u32 chan = rx_q->queue_index;
+-	int work_done = 0;
+-	u32 queue;
++	struct stmmac_channel *ch =
++		container_of(napi, struct stmmac_channel, napi);
++	struct stmmac_priv *priv = ch->priv_data;
++	int work_done = 0, work_rem = budget;
++	u32 chan = ch->index;
+ 
+ 	priv->xstats.napi_poll++;
+ 
+-	/* check all the queues */
+-	for (queue = 0; queue < tx_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (ch->has_tx) {
++		int done = stmmac_tx_clean(priv, work_rem, chan);
+ 
+-	work_done = stmmac_rx(priv, budget, rx_q->queue_index);
+-	if (work_done < budget) {
+-		napi_complete_done(napi, work_done);
+-		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++		work_done += done;
++		work_rem -= done;
++	}
++
++	if (ch->has_rx) {
++		int done = stmmac_rx(priv, work_rem, chan);
++
++		work_done += done;
++		work_rem -= done;
+ 	}
++
++	if (work_done < budget && napi_complete_done(napi, work_done))
++		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++
+ 	return work_done;
+ }
+ 
+@@ -4170,8 +4189,8 @@ int stmmac_dvr_probe(struct device *device,
+ {
+ 	struct net_device *ndev = NULL;
+ 	struct stmmac_priv *priv;
++	u32 queue, maxq;
+ 	int ret = 0;
+-	u32 queue;
+ 
+ 	ndev = alloc_etherdev_mqs(sizeof(struct stmmac_priv),
+ 				  MTL_MAX_TX_QUEUES,
+@@ -4291,11 +4310,22 @@ int stmmac_dvr_probe(struct device *device,
+ 			 "Enable RX Mitigation via HW Watchdog Timer\n");
+ 	}
+ 
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	/* Setup channels NAPI */
++	maxq = max(priv->plat->rx_queues_to_use, priv->plat->tx_queues_to_use);
+ 
+-		netif_napi_add(ndev, &rx_q->napi, stmmac_poll,
+-			       (8 * priv->plat->rx_queues_to_use));
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
++
++		ch->priv_data = priv;
++		ch->index = queue;
++
++		if (queue < priv->plat->rx_queues_to_use)
++			ch->has_rx = true;
++		if (queue < priv->plat->tx_queues_to_use)
++			ch->has_tx = true;
++
++		netif_napi_add(ndev, &ch->napi, stmmac_napi_poll,
++			       NAPI_POLL_WEIGHT);
+ 	}
+ 
+ 	mutex_init(&priv->lock);
+@@ -4341,10 +4371,10 @@ error_netdev_register:
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI)
+ 		stmmac_mdio_unregister(ndev);
+ error_mdio_register:
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		netif_napi_del(&rx_q->napi);
++		netif_napi_del(&ch->napi);
+ 	}
+ error_hw_init:
+ 	destroy_workqueue(priv->wq);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 72da77b94ecd..8a3867cec67a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -67,7 +67,7 @@ static int dwmac1000_validate_mcast_bins(int mcast_bins)
+  * Description:
+  * This function validates the number of Unicast address entries supported
+  * by a particular Synopsys 10/100/1000 controller. The Synopsys controller
+- * supports 1, 32, 64, or 128 Unicast filter entries for it's Unicast filter
++ * supports 1..32, 64, or 128 Unicast filter entries for it's Unicast filter
+  * logic. This function validates a valid, supported configuration is
+  * selected, and defaults to 1 Unicast address if an unsupported
+  * configuration is selected.
+@@ -77,8 +77,7 @@ static int dwmac1000_validate_ucast_entries(int ucast_entries)
+ 	int x = ucast_entries;
+ 
+ 	switch (x) {
+-	case 1:
+-	case 32:
++	case 1 ... 32:
+ 	case 64:
+ 	case 128:
+ 		break;
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 9263d638bd6d..f932923f7d56 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -41,6 +41,7 @@ config TI_DAVINCI_MDIO
+ config TI_DAVINCI_CPDMA
+ 	tristate "TI DaVinci CPDMA Support"
+ 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
++	select GENERIC_ALLOCATOR
+ 	---help---
+ 	  This driver supports TI's DaVinci CPDMA dma engine.
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index af4dc4425be2..5827fccd4f29 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -717,6 +717,30 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+ 	return 0;
+ }
+ 
++static int __phylink_connect_phy(struct phylink *pl, struct phy_device *phy,
++		phy_interface_t interface)
++{
++	int ret;
++
++	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
++		    (pl->link_an_mode == MLO_AN_INBAND &&
++		     phy_interface_mode_is_8023z(interface))))
++		return -EINVAL;
++
++	if (pl->phydev)
++		return -EBUSY;
++
++	ret = phy_attach_direct(pl->netdev, phy, 0, interface);
++	if (ret)
++		return ret;
++
++	ret = phylink_bringup_phy(pl, phy);
++	if (ret)
++		phy_detach(phy);
++
++	return ret;
++}
++
+ /**
+  * phylink_connect_phy() - connect a PHY to the phylink instance
+  * @pl: a pointer to a &struct phylink returned from phylink_create()
+@@ -734,31 +758,13 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+  */
+ int phylink_connect_phy(struct phylink *pl, struct phy_device *phy)
+ {
+-	int ret;
+-
+-	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
+-		    (pl->link_an_mode == MLO_AN_INBAND &&
+-		     phy_interface_mode_is_8023z(pl->link_interface))))
+-		return -EINVAL;
+-
+-	if (pl->phydev)
+-		return -EBUSY;
+-
+ 	/* Use PHY device/driver interface */
+ 	if (pl->link_interface == PHY_INTERFACE_MODE_NA) {
+ 		pl->link_interface = phy->interface;
+ 		pl->link_config.interface = pl->link_interface;
+ 	}
+ 
+-	ret = phy_attach_direct(pl->netdev, phy, 0, pl->link_interface);
+-	if (ret)
+-		return ret;
+-
+-	ret = phylink_bringup_phy(pl, phy);
+-	if (ret)
+-		phy_detach(phy);
+-
+-	return ret;
++	return __phylink_connect_phy(pl, phy, pl->link_interface);
+ }
+ EXPORT_SYMBOL_GPL(phylink_connect_phy);
+ 
+@@ -1672,7 +1678,9 @@ static void phylink_sfp_link_up(void *upstream)
+ 
+ static int phylink_sfp_connect_phy(void *upstream, struct phy_device *phy)
+ {
+-	return phylink_connect_phy(upstream, phy);
++	struct phylink *pl = upstream;
++
++	return __phylink_connect_phy(upstream, phy, pl->link_config.interface);
+ }
+ 
+ static void phylink_sfp_disconnect_phy(void *upstream)
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 740655261e5b..83060fb349f4 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -349,6 +349,7 @@ static int sfp_register_bus(struct sfp_bus *bus)
+ 	}
+ 	if (bus->started)
+ 		bus->socket_ops->start(bus->sfp);
++	bus->netdev->sfp_bus = bus;
+ 	bus->registered = true;
+ 	return 0;
+ }
+@@ -357,6 +358,7 @@ static void sfp_unregister_bus(struct sfp_bus *bus)
+ {
+ 	const struct sfp_upstream_ops *ops = bus->upstream_ops;
+ 
++	bus->netdev->sfp_bus = NULL;
+ 	if (bus->registered) {
+ 		if (bus->started)
+ 			bus->socket_ops->stop(bus->sfp);
+@@ -438,7 +440,6 @@ static void sfp_upstream_clear(struct sfp_bus *bus)
+ {
+ 	bus->upstream_ops = NULL;
+ 	bus->upstream = NULL;
+-	bus->netdev->sfp_bus = NULL;
+ 	bus->netdev = NULL;
+ }
+ 
+@@ -467,7 +468,6 @@ struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
+ 		bus->upstream_ops = ops;
+ 		bus->upstream = upstream;
+ 		bus->netdev = ndev;
+-		ndev->sfp_bus = bus;
+ 
+ 		if (bus->sfp) {
+ 			ret = sfp_register_bus(bus);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index b070959737ff..286c947cb48d 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1172,6 +1172,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		return -EBUSY;
+ 	}
+ 
++	if (dev == port_dev) {
++		NL_SET_ERR_MSG(extack, "Cannot enslave team device to itself");
++		netdev_err(dev, "Cannot enslave team device to itself\n");
++		return -EINVAL;
++	}
++
+ 	if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ 	    vlan_uses_dev(dev)) {
+ 		NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f5727baac84a..725dd63f8413 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -181,6 +181,7 @@ struct tun_file {
+ 	};
+ 	struct napi_struct napi;
+ 	bool napi_enabled;
++	bool napi_frags_enabled;
+ 	struct mutex napi_mutex;	/* Protects access to the above napi */
+ 	struct list_head next;
+ 	struct tun_struct *detached;
+@@ -312,32 +313,32 @@ static int tun_napi_poll(struct napi_struct *napi, int budget)
+ }
+ 
+ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile,
+-			  bool napi_en)
++			  bool napi_en, bool napi_frags)
+ {
+ 	tfile->napi_enabled = napi_en;
++	tfile->napi_frags_enabled = napi_en && napi_frags;
+ 	if (napi_en) {
+ 		netif_napi_add(tun->dev, &tfile->napi, tun_napi_poll,
+ 			       NAPI_POLL_WEIGHT);
+ 		napi_enable(&tfile->napi);
+-		mutex_init(&tfile->napi_mutex);
+ 	}
+ }
+ 
+-static void tun_napi_disable(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_disable(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		napi_disable(&tfile->napi);
+ }
+ 
+-static void tun_napi_del(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_del(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		netif_napi_del(&tfile->napi);
+ }
+ 
+-static bool tun_napi_frags_enabled(const struct tun_struct *tun)
++static bool tun_napi_frags_enabled(const struct tun_file *tfile)
+ {
+-	return READ_ONCE(tun->flags) & IFF_NAPI_FRAGS;
++	return tfile->napi_frags_enabled;
+ }
+ 
+ #ifdef CONFIG_TUN_VNET_CROSS_LE
+@@ -688,8 +689,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 	tun = rtnl_dereference(tfile->tun);
+ 
+ 	if (tun && clean) {
+-		tun_napi_disable(tun, tfile);
+-		tun_napi_del(tun, tfile);
++		tun_napi_disable(tfile);
++		tun_napi_del(tfile);
+ 	}
+ 
+ 	if (tun && !tfile->detached) {
+@@ -756,7 +757,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+ 		BUG_ON(!tfile);
+-		tun_napi_disable(tun, tfile);
++		tun_napi_disable(tfile);
+ 		tfile->socket.sk->sk_shutdown = RCV_SHUTDOWN;
+ 		tfile->socket.sk->sk_data_ready(tfile->socket.sk);
+ 		RCU_INIT_POINTER(tfile->tun, NULL);
+@@ -772,7 +773,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	synchronize_net();
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+-		tun_napi_del(tun, tfile);
++		tun_napi_del(tfile);
+ 		/* Drop read queue */
+ 		tun_queue_purge(tfile);
+ 		xdp_rxq_info_unreg(&tfile->xdp_rxq);
+@@ -791,7 +792,7 @@ static void tun_detach_all(struct net_device *dev)
+ }
+ 
+ static int tun_attach(struct tun_struct *tun, struct file *file,
+-		      bool skip_filter, bool napi)
++		      bool skip_filter, bool napi, bool napi_frags)
+ {
+ 	struct tun_file *tfile = file->private_data;
+ 	struct net_device *dev = tun->dev;
+@@ -864,7 +865,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file,
+ 		tun_enable_queue(tfile);
+ 	} else {
+ 		sock_hold(&tfile->sk);
+-		tun_napi_init(tun, tfile, napi);
++		tun_napi_init(tun, tfile, napi, napi_frags);
+ 	}
+ 
+ 	tun_set_real_num_queues(tun);
+@@ -1174,13 +1175,11 @@ static void tun_poll_controller(struct net_device *dev)
+ 		struct tun_file *tfile;
+ 		int i;
+ 
+-		if (tun_napi_frags_enabled(tun))
+-			return;
+-
+ 		rcu_read_lock();
+ 		for (i = 0; i < tun->numqueues; i++) {
+ 			tfile = rcu_dereference(tun->tfiles[i]);
+-			if (tfile->napi_enabled)
++			if (!tun_napi_frags_enabled(tfile) &&
++			    tfile->napi_enabled)
+ 				napi_schedule(&tfile->napi);
+ 		}
+ 		rcu_read_unlock();
+@@ -1751,7 +1750,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	int err;
+ 	u32 rxhash = 0;
+ 	int skb_xdp = 1;
+-	bool frags = tun_napi_frags_enabled(tun);
++	bool frags = tun_napi_frags_enabled(tfile);
+ 
+ 	if (!(tun->dev->flags & IFF_UP))
+ 		return -EIO;
+@@ -2576,7 +2575,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			return err;
+ 
+ 		err = tun_attach(tun, file, ifr->ifr_flags & IFF_NOFILTER,
+-				 ifr->ifr_flags & IFF_NAPI);
++				 ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -2674,7 +2674,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			      (ifr->ifr_flags & TUN_FEATURES);
+ 
+ 		INIT_LIST_HEAD(&tun->disabled);
+-		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI);
++		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			goto err_free_flow;
+ 
+@@ -2823,7 +2824,8 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
+ 		ret = security_tun_dev_attach_queue(tun->security);
+ 		if (ret < 0)
+ 			goto unlock;
+-		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI);
++		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI,
++				 tun->flags & IFF_NAPI_FRAGS);
+ 	} else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
+ 		tun = rtnl_dereference(tfile->tun);
+ 		if (!tun || !(tun->flags & IFF_MULTI_QUEUE) || tfile->detached)
+@@ -3241,6 +3243,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ 		return -ENOMEM;
+ 	}
+ 
++	mutex_init(&tfile->napi_mutex);
+ 	RCU_INIT_POINTER(tfile->tun, NULL);
+ 	tfile->flags = 0;
+ 	tfile->ifindex = 0;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 1e95d37c6e27..1bb01a9e5f92 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1234,6 +1234,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)},	/* Olivetti Olicard 500 */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0053, 4)},	/* Cinterion PHxx,PXxx */
++	{QMI_FIXED_INTF(0x1e2d, 0x0063, 10)},	/* Cinterion ALASxx (1 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 4)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 5)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 05553d252446..b64b1ee56d2d 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -1517,6 +1517,7 @@ static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	if (pdata) {
++		cancel_work_sync(&pdata->set_multicast);
+ 		netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ 		kfree(pdata);
+ 		pdata = NULL;
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index e857cb3335f6..93a6c43a2354 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -3537,6 +3537,7 @@ static size_t vxlan_get_size(const struct net_device *dev)
+ 		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LINK */
+ 		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_LOCAL{6} */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL_INHERIT */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TOS */
+ 		nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_LEARNING */
+@@ -3601,6 +3602,8 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ 	}
+ 
+ 	if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
++	    nla_put_u8(skb, IFLA_VXLAN_TTL_INHERIT,
++		       !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
+ 	    nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_LEARNING,
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index d4d4a55f09f8..c6f375e9cce7 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -89,6 +89,9 @@ static enum pci_protocol_version_t pci_protocol_version;
+ 
+ #define STATUS_REVISION_MISMATCH 0xC0000059
+ 
++/* space for 32bit serial number as string */
++#define SLOT_NAME_SIZE 11
++
+ /*
+  * Message Types
+  */
+@@ -494,6 +497,7 @@ struct hv_pci_dev {
+ 	struct list_head list_entry;
+ 	refcount_t refs;
+ 	enum hv_pcichild_state state;
++	struct pci_slot *pci_slot;
+ 	struct pci_function_description desc;
+ 	bool reported_missing;
+ 	struct hv_pcibus_device *hbus;
+@@ -1457,6 +1461,34 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
+ 	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ }
+ 
++/*
++ * Assign entries in sysfs pci slot directory.
++ *
++ * Note that this function does not need to lock the children list
++ * because it is called from pci_devices_present_work which
++ * is serialized with hv_eject_device_work because they are on the
++ * same ordered workqueue. Therefore hbus->children list will not change
++ * even when pci_create_slot sleeps.
++ */
++static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
++{
++	struct hv_pci_dev *hpdev;
++	char name[SLOT_NAME_SIZE];
++	int slot_nr;
++
++	list_for_each_entry(hpdev, &hbus->children, list_entry) {
++		if (hpdev->pci_slot)
++			continue;
++
++		slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
++		snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
++		hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
++					  name, NULL);
++		if (!hpdev->pci_slot)
++			pr_warn("pci_create slot %s failed\n", name);
++	}
++}
++
+ /**
+  * create_root_hv_pci_bus() - Expose a new root PCI bus
+  * @hbus:	Root PCI bus, as understood by this driver
+@@ -1480,6 +1512,7 @@ static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
+ 	pci_lock_rescan_remove();
+ 	pci_scan_child_bus(hbus->pci_bus);
+ 	pci_bus_assign_resources(hbus->pci_bus);
++	hv_pci_assign_slots(hbus);
+ 	pci_bus_add_devices(hbus->pci_bus);
+ 	pci_unlock_rescan_remove();
+ 	hbus->state = hv_pcibus_installed;
+@@ -1742,6 +1775,7 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		 */
+ 		pci_lock_rescan_remove();
+ 		pci_scan_child_bus(hbus->pci_bus);
++		hv_pci_assign_slots(hbus);
+ 		pci_unlock_rescan_remove();
+ 		break;
+ 
+@@ -1858,6 +1892,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	list_del(&hpdev->list_entry);
+ 	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
+ 
++	if (hpdev->pci_slot)
++		pci_destroy_slot(hpdev->pci_slot);
++
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+ 	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index a6347d487635..1321104b9b9f 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -474,7 +474,13 @@ static int armpmu_filter_match(struct perf_event *event)
+ {
+ 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
+ 	unsigned int cpu = smp_processor_id();
+-	return cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	int ret;
++
++	ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	if (ret && armpmu->filter_match)
++		return armpmu->filter_match(event);
++
++	return ret;
+ }
+ 
+ static ssize_t armpmu_cpumask_show(struct device *dev,
+diff --git a/drivers/pinctrl/intel/pinctrl-cannonlake.c b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+index 6243e7d95e7e..d36afb17f5e4 100644
+--- a/drivers/pinctrl/intel/pinctrl-cannonlake.c
++++ b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+@@ -382,7 +382,7 @@ static const struct intel_padgroup cnlh_community1_gpps[] = {
+ static const struct intel_padgroup cnlh_community3_gpps[] = {
+ 	CNL_GPP(0, 155, 178, 192),		/* GPP_K */
+ 	CNL_GPP(1, 179, 202, 224),		/* GPP_H */
+-	CNL_GPP(2, 203, 215, 258),		/* GPP_E */
++	CNL_GPP(2, 203, 215, 256),		/* GPP_E */
+ 	CNL_GPP(3, 216, 239, 288),		/* GPP_F */
+ 	CNL_GPP(4, 240, 248, CNL_NO_GPIO),	/* SPI */
+ };
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 022307dd4b54..bef6ff2e8f4f 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -636,6 +636,14 @@ static int mcp23s08_irq_setup(struct mcp23s08 *mcp)
+ 		return err;
+ 	}
+ 
++	return 0;
++}
++
++static int mcp23s08_irqchip_setup(struct mcp23s08 *mcp)
++{
++	struct gpio_chip *chip = &mcp->chip;
++	int err;
++
+ 	err =  gpiochip_irqchip_add_nested(chip,
+ 					   &mcp23s08_irq_chip,
+ 					   0,
+@@ -912,7 +920,7 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 	}
+ 
+ 	if (mcp->irq && mcp->irq_controller) {
+-		ret = mcp23s08_irq_setup(mcp);
++		ret = mcp23s08_irqchip_setup(mcp);
+ 		if (ret)
+ 			goto fail;
+ 	}
+@@ -944,6 +952,9 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 		goto fail;
+ 	}
+ 
++	if (mcp->irq)
++		ret = mcp23s08_irq_setup(mcp);
++
+ fail:
+ 	if (ret < 0)
+ 		dev_dbg(dev, "can't setup chip %d, --> %d\n", addr, ret);
+diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
+index dbe7c7ac9ac8..fd77e46eb3b2 100644
+--- a/drivers/s390/cio/vfio_ccw_cp.c
++++ b/drivers/s390/cio/vfio_ccw_cp.c
+@@ -163,7 +163,7 @@ static bool pfn_array_table_iova_pinned(struct pfn_array_table *pat,
+ 
+ 	for (i = 0; i < pat->pat_nr; i++, pa++)
+ 		for (j = 0; j < pa->pa_nr; j++)
+-			if (pa->pa_iova_pfn[i] == iova_pfn)
++			if (pa->pa_iova_pfn[j] == iova_pfn)
+ 				return true;
+ 
+ 	return false;
+diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
+index fecf96f0225c..199d3ba1916d 100644
+--- a/drivers/scsi/qla2xxx/qla_target.h
++++ b/drivers/scsi/qla2xxx/qla_target.h
+@@ -374,8 +374,8 @@ struct atio_from_isp {
+ static inline int fcpcmd_is_corrupted(struct atio *atio)
+ {
+ 	if (atio->entry_type == ATIO_TYPE7 &&
+-	    (le16_to_cpu(atio->attr_n_length & FCP_CMD_LENGTH_MASK) <
+-	    FCP_CMD_LENGTH_MIN))
++	    ((le16_to_cpu(atio->attr_n_length) & FCP_CMD_LENGTH_MASK) <
++	     FCP_CMD_LENGTH_MIN))
+ 		return 1;
+ 	else
+ 		return 0;
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index a4ecc9d77624..8e1c3cff567a 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1419,7 +1419,8 @@ static void iscsit_do_crypto_hash_buf(struct ahash_request *hash,
+ 
+ 	sg_init_table(sg, ARRAY_SIZE(sg));
+ 	sg_set_buf(sg, buf, payload_length);
+-	sg_set_buf(sg + 1, pad_bytes, padding);
++	if (padding)
++		sg_set_buf(sg + 1, pad_bytes, padding);
+ 
+ 	ahash_request_set_crypt(hash, sg, data_crc, payload_length + padding);
+ 
+@@ -3913,10 +3914,14 @@ static bool iscsi_target_check_conn_state(struct iscsi_conn *conn)
+ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ {
+ 	int ret;
+-	u8 buffer[ISCSI_HDR_LEN], opcode;
++	u8 *buffer, opcode;
+ 	u32 checksum = 0, digest = 0;
+ 	struct kvec iov;
+ 
++	buffer = kcalloc(ISCSI_HDR_LEN, sizeof(*buffer), GFP_KERNEL);
++	if (!buffer)
++		return;
++
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3924,7 +3929,6 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		 */
+ 		iscsit_thread_check_cpumask(conn, current, 0);
+ 
+-		memset(buffer, 0, ISCSI_HDR_LEN);
+ 		memset(&iov, 0, sizeof(struct kvec));
+ 
+ 		iov.iov_base	= buffer;
+@@ -3933,7 +3937,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+ 		if (ret != ISCSI_HDR_LEN) {
+ 			iscsit_rx_thread_wait_for_tcp(conn);
+-			return;
++			break;
+ 		}
+ 
+ 		if (conn->conn_ops->HeaderDigest) {
+@@ -3943,7 +3947,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			ret = rx_data(conn, &iov, 1, ISCSI_CRC_LEN);
+ 			if (ret != ISCSI_CRC_LEN) {
+ 				iscsit_rx_thread_wait_for_tcp(conn);
+-				return;
++				break;
+ 			}
+ 
+ 			iscsit_do_crypto_hash_buf(conn->conn_rx_hash, buffer,
+@@ -3967,7 +3971,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		}
+ 
+ 		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+-			return;
++			break;
+ 
+ 		opcode = buffer[0] & ISCSI_OPCODE_MASK;
+ 
+@@ -3978,13 +3982,15 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			" while in Discovery Session, rejecting.\n", opcode);
+ 			iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
+ 					  buffer);
+-			return;
++			break;
+ 		}
+ 
+ 		ret = iscsi_target_rx_opcode(conn, buffer);
+ 		if (ret < 0)
+-			return;
++			break;
+ 	}
++
++	kfree(buffer);
+ }
+ 
+ int iscsi_target_rx_thread(void *arg)
+diff --git a/drivers/video/fbdev/aty/atyfb.h b/drivers/video/fbdev/aty/atyfb.h
+index 8235b285dbb2..d09bab3bf224 100644
+--- a/drivers/video/fbdev/aty/atyfb.h
++++ b/drivers/video/fbdev/aty/atyfb.h
+@@ -333,6 +333,8 @@ extern const struct aty_pll_ops aty_pll_ct; /* Integrated */
+ extern void aty_set_pll_ct(const struct fb_info *info, const union aty_pll *pll);
+ extern u8 aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
++extern const u8 aty_postdividers[8];
++
+ 
+     /*
+      *  Hardware cursor support
+@@ -359,7 +361,6 @@ static inline void wait_for_idle(struct atyfb_par *par)
+ 
+ extern void aty_reset_engine(const struct atyfb_par *par);
+ extern void aty_init_engine(struct atyfb_par *par, struct fb_info *info);
+-extern u8   aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
+ void atyfb_copyarea(struct fb_info *info, const struct fb_copyarea *area);
+ void atyfb_fillrect(struct fb_info *info, const struct fb_fillrect *rect);
+diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
+index a9a8272f7a6e..05111e90f168 100644
+--- a/drivers/video/fbdev/aty/atyfb_base.c
++++ b/drivers/video/fbdev/aty/atyfb_base.c
+@@ -3087,17 +3087,18 @@ static int atyfb_setup_sparc(struct pci_dev *pdev, struct fb_info *info,
+ 		/*
+ 		 * PLL Reference Divider M:
+ 		 */
+-		M = pll_regs[2];
++		M = pll_regs[PLL_REF_DIV];
+ 
+ 		/*
+ 		 * PLL Feedback Divider N (Dependent on CLOCK_CNTL):
+ 		 */
+-		N = pll_regs[7 + (clock_cntl & 3)];
++		N = pll_regs[VCLK0_FB_DIV + (clock_cntl & 3)];
+ 
+ 		/*
+ 		 * PLL Post Divider P (Dependent on CLOCK_CNTL):
+ 		 */
+-		P = 1 << (pll_regs[6] >> ((clock_cntl & 3) << 1));
++		P = aty_postdividers[((pll_regs[VCLK_POST_DIV] >> ((clock_cntl & 3) << 1)) & 3) |
++		                     ((pll_regs[PLL_EXT_CNTL] >> (2 + (clock_cntl & 3))) & 4)];
+ 
+ 		/*
+ 		 * PLL Divider Q:
+diff --git a/drivers/video/fbdev/aty/mach64_ct.c b/drivers/video/fbdev/aty/mach64_ct.c
+index 74a62aa193c0..f87cc81f4fa2 100644
+--- a/drivers/video/fbdev/aty/mach64_ct.c
++++ b/drivers/video/fbdev/aty/mach64_ct.c
+@@ -115,7 +115,7 @@ static void aty_st_pll_ct(int offset, u8 val, const struct atyfb_par *par)
+  */
+ 
+ #define Maximum_DSP_PRECISION 7
+-static u8 postdividers[] = {1,2,4,8,3};
++const u8 aty_postdividers[8] = {1,2,4,8,3,5,6,12};
+ 
+ static int aty_dsp_gt(const struct fb_info *info, u32 bpp, struct pll_ct *pll)
+ {
+@@ -222,7 +222,7 @@ static int aty_valid_pll_ct(const struct fb_info *info, u32 vclk_per, struct pll
+ 		pll->vclk_post_div += (q <  64*8);
+ 		pll->vclk_post_div += (q <  32*8);
+ 	}
+-	pll->vclk_post_div_real = postdividers[pll->vclk_post_div];
++	pll->vclk_post_div_real = aty_postdividers[pll->vclk_post_div];
+ 	//    pll->vclk_post_div <<= 6;
+ 	pll->vclk_fb_div = q * pll->vclk_post_div_real / 8;
+ 	pllvclk = (1000000 * 2 * pll->vclk_fb_div) /
+@@ -513,7 +513,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		u8 mclk_fb_div, pll_ext_cntl;
+ 		pll->ct.pll_ref_div = aty_ld_pll_ct(PLL_REF_DIV, par);
+ 		pll_ext_cntl = aty_ld_pll_ct(PLL_EXT_CNTL, par);
+-		pll->ct.xclk_post_div_real = postdividers[pll_ext_cntl & 0x07];
++		pll->ct.xclk_post_div_real = aty_postdividers[pll_ext_cntl & 0x07];
+ 		mclk_fb_div = aty_ld_pll_ct(MCLK_FB_DIV, par);
+ 		if (pll_ext_cntl & PLL_MFB_TIMES_4_2B)
+ 			mclk_fb_div <<= 1;
+@@ -535,7 +535,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		xpost_div += (q <  64*8);
+ 		xpost_div += (q <  32*8);
+ 	}
+-	pll->ct.xclk_post_div_real = postdividers[xpost_div];
++	pll->ct.xclk_post_div_real = aty_postdividers[xpost_div];
+ 	pll->ct.mclk_fb_div = q * pll->ct.xclk_post_div_real / 8;
+ 
+ #ifdef CONFIG_PPC
+@@ -584,7 +584,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 			mpost_div += (q <  64*8);
+ 			mpost_div += (q <  32*8);
+ 		}
+-		sclk_post_div_real = postdividers[mpost_div];
++		sclk_post_div_real = aty_postdividers[mpost_div];
+ 		pll->ct.sclk_fb_div = q * sclk_post_div_real / 8;
+ 		pll->ct.spll_cntl2 = mpost_div << 4;
+ #ifdef DEBUG
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index a1b18082991b..b6735ae3334e 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -690,8 +690,6 @@ static void afs_process_async_call(struct work_struct *work)
+ 	}
+ 
+ 	if (call->state == AFS_CALL_COMPLETE) {
+-		call->reply[0] = NULL;
+-
+ 		/* We have two refs to release - one from the alloc and one
+ 		 * queued with the work item - and we can't just deallocate the
+ 		 * call because the work item may be queued again.
+diff --git a/fs/dax.c b/fs/dax.c
+index 94f9fe002b12..0d3f640653c0 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -558,6 +558,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
+ 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
+ 				indices)) {
++		pgoff_t nr_pages = 1;
++
+ 		for (i = 0; i < pagevec_count(&pvec); i++) {
+ 			struct page *pvec_ent = pvec.pages[i];
+ 			void *entry;
+@@ -571,8 +573,15 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 
+ 			xa_lock_irq(&mapping->i_pages);
+ 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
+-			if (entry)
++			if (entry) {
+ 				page = dax_busy_page(entry);
++				/*
++				 * Account for multi-order entries at
++				 * the end of the pagevec.
++				 */
++				if (i + 1 >= pagevec_count(&pvec))
++					nr_pages = 1UL << dax_radix_order(entry);
++			}
+ 			put_unlocked_mapping_entry(mapping, index, entry);
+ 			xa_unlock_irq(&mapping->i_pages);
+ 			if (page)
+@@ -580,7 +589,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 		}
+ 		pagevec_remove_exceptionals(&pvec);
+ 		pagevec_release(&pvec);
+-		index++;
++		index += nr_pages;
+ 
+ 		if (page)
+ 			break;
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index c0e68f903011..04da6a7c9d2d 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -412,6 +412,7 @@ struct cgroup {
+ 	 * specific task are charged to the dom_cgrp.
+ 	 */
+ 	struct cgroup *dom_cgrp;
++	struct cgroup *old_dom_cgrp;		/* used while enabling threaded */
+ 
+ 	/* per-cpu recursive resource statistics */
+ 	struct cgroup_rstat_cpu __percpu *rstat_cpu;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 3d0cc0b5cec2..3045a5cee0d8 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2420,6 +2420,13 @@ struct netdev_notifier_info {
+ 	struct netlink_ext_ack	*extack;
+ };
+ 
++struct netdev_notifier_info_ext {
++	struct netdev_notifier_info info; /* must be first */
++	union {
++		u32 mtu;
++	} ext;
++};
++
+ struct netdev_notifier_change_info {
+ 	struct netdev_notifier_info info; /* must be first */
+ 	unsigned int flags_changed;
+diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
+index ad5444491975..a2f6e178a2d7 100644
+--- a/include/linux/perf/arm_pmu.h
++++ b/include/linux/perf/arm_pmu.h
+@@ -93,6 +93,7 @@ struct arm_pmu {
+ 	void		(*stop)(struct arm_pmu *);
+ 	void		(*reset)(void *);
+ 	int		(*map_event)(struct perf_event *event);
++	int		(*filter_match)(struct perf_event *event);
+ 	int		num_events;
+ 	u64		max_period;
+ 	bool		secure_access; /* 32-bit ARM only */
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 32feac5bbd75..f62e7721cd71 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -30,6 +30,7 @@
+ 
+ #define MTL_MAX_RX_QUEUES	8
+ #define MTL_MAX_TX_QUEUES	8
++#define STMMAC_CH_MAX		8
+ 
+ #define STMMAC_RX_COE_NONE	0
+ #define STMMAC_RX_COE_TYPE1	1
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 9397628a1967..cb462f9ab7dd 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -5,6 +5,24 @@
+ #include <linux/if_vlan.h>
+ #include <uapi/linux/virtio_net.h>
+ 
++static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
++					   const struct virtio_net_hdr *hdr)
++{
++	switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
++	case VIRTIO_NET_HDR_GSO_TCPV4:
++	case VIRTIO_NET_HDR_GSO_UDP:
++		skb->protocol = cpu_to_be16(ETH_P_IP);
++		break;
++	case VIRTIO_NET_HDR_GSO_TCPV6:
++		skb->protocol = cpu_to_be16(ETH_P_IPV6);
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 					const struct virtio_net_hdr *hdr,
+ 					bool little_endian)
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 808f1d167349..a4f116f06c50 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -139,12 +139,6 @@ struct bond_parm_tbl {
+ 	int mode;
+ };
+ 
+-struct netdev_notify_work {
+-	struct delayed_work	work;
+-	struct net_device	*dev;
+-	struct netdev_bonding_info bonding_info;
+-};
+-
+ struct slave {
+ 	struct net_device *dev; /* first - useful for panic debug */
+ 	struct bonding *bond; /* our master */
+@@ -172,6 +166,7 @@ struct slave {
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ 	struct netpoll *np;
+ #endif
++	struct delayed_work notify_work;
+ 	struct kobject kobj;
+ 	struct rtnl_link_stats64 slave_stats;
+ };
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 83d5b3c2ac42..7dba2d116e8c 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -130,12 +130,6 @@ static inline int inet_request_bound_dev_if(const struct sock *sk,
+ 	return sk->sk_bound_dev_if;
+ }
+ 
+-static inline struct ip_options_rcu *ireq_opt_deref(const struct inet_request_sock *ireq)
+-{
+-	return rcu_dereference_check(ireq->ireq_opt,
+-				     refcount_read(&ireq->req.rsk_refcnt) > 0);
+-}
+-
+ struct inet_cork {
+ 	unsigned int		flags;
+ 	__be32			addr;
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 69c91d1934c1..c9b7b136939d 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -394,6 +394,7 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev);
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
+ int fib_sync_down_addr(struct net_device *dev, __be32 local);
+ int fib_sync_up(struct net_device *dev, unsigned int nh_flags);
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu);
+ 
+ #ifdef CONFIG_IP_ROUTE_MULTIPATH
+ int fib_multipath_hash(const struct net *net, const struct flowi4 *fl4,
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index c052afc27547..138e976a2ba2 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -355,6 +355,7 @@ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_stop_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_enter_link_reset(struct hdac_bus *bus);
+ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus);
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset);
+ 
+ void snd_hdac_bus_update_rirb(struct hdac_bus *bus);
+ int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index a6ce2de4e20a..be3bee1cf91f 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -410,6 +410,7 @@ int snd_soc_dapm_new_dai_widgets(struct snd_soc_dapm_context *dapm,
+ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card);
+ void snd_soc_dapm_connect_dai_link_widgets(struct snd_soc_card *card);
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 2590700237c1..138f0302692e 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -1844,7 +1844,7 @@ static int btf_check_all_metas(struct btf_verifier_env *env)
+ 
+ 	hdr = &btf->hdr;
+ 	cur = btf->nohdr_data + hdr->type_off;
+-	end = btf->nohdr_data + hdr->type_len;
++	end = cur + hdr->type_len;
+ 
+ 	env->log_type_id = 1;
+ 	while (cur < end) {
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 077370bf8964..6e052c899cab 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2833,11 +2833,12 @@ restart:
+ }
+ 
+ /**
+- * cgroup_save_control - save control masks of a subtree
++ * cgroup_save_control - save control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Save ->subtree_control and ->subtree_ss_mask to the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Save ->subtree_control, ->subtree_ss_mask and ->dom_cgrp to the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_save_control(struct cgroup *cgrp)
+ {
+@@ -2847,6 +2848,7 @@ static void cgroup_save_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp) {
+ 		dsct->old_subtree_control = dsct->subtree_control;
+ 		dsct->old_subtree_ss_mask = dsct->subtree_ss_mask;
++		dsct->old_dom_cgrp = dsct->dom_cgrp;
+ 	}
+ }
+ 
+@@ -2872,11 +2874,12 @@ static void cgroup_propagate_control(struct cgroup *cgrp)
+ }
+ 
+ /**
+- * cgroup_restore_control - restore control masks of a subtree
++ * cgroup_restore_control - restore control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Restore ->subtree_control and ->subtree_ss_mask from the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Restore ->subtree_control, ->subtree_ss_mask and ->dom_cgrp from the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_restore_control(struct cgroup *cgrp)
+ {
+@@ -2886,6 +2889,7 @@ static void cgroup_restore_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_post(dsct, d_css, cgrp) {
+ 		dsct->subtree_control = dsct->old_subtree_control;
+ 		dsct->subtree_ss_mask = dsct->old_subtree_ss_mask;
++		dsct->dom_cgrp = dsct->old_dom_cgrp;
+ 	}
+ }
+ 
+@@ -3193,6 +3197,8 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ {
+ 	struct cgroup *parent = cgroup_parent(cgrp);
+ 	struct cgroup *dom_cgrp = parent->dom_cgrp;
++	struct cgroup *dsct;
++	struct cgroup_subsys_state *d_css;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -3222,12 +3228,13 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ 	 */
+ 	cgroup_save_control(cgrp);
+ 
+-	cgrp->dom_cgrp = dom_cgrp;
++	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp)
++		if (dsct == cgrp || cgroup_is_threaded(dsct))
++			dsct->dom_cgrp = dom_cgrp;
++
+ 	ret = cgroup_apply_control(cgrp);
+ 	if (!ret)
+ 		parent->nr_threaded_children++;
+-	else
+-		cgrp->dom_cgrp = cgrp;
+ 
+ 	cgroup_finalize_control(cgrp, ret);
+ 	return ret;
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index cda186230287..8e58928e8227 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2769,7 +2769,7 @@ int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf)
+ 						copy = end - str;
+ 					memcpy(str, args, copy);
+ 					str += len;
+-					args += len;
++					args += len + 1;
+ 				}
+ 			}
+ 			if (process)
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 571875b37453..f7274e0c8bdc 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2883,9 +2883,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	if (!(pvmw->pmd && !pvmw->pte))
+ 		return;
+ 
+-	mmu_notifier_invalidate_range_start(mm, address,
+-			address + HPAGE_PMD_SIZE);
+-
+ 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
+ 	pmdval = *pvmw->pmd;
+ 	pmdp_invalidate(vma, address, pvmw->pmd);
+@@ -2898,9 +2895,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
+ 	page_remove_rmap(page, true);
+ 	put_page(page);
+-
+-	mmu_notifier_invalidate_range_end(mm, address,
+-			address + HPAGE_PMD_SIZE);
+ }
+ 
+ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 17bbf4d3e24f..080c6b9b1d65 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1410,7 +1410,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ 	if (flags & MAP_FIXED_NOREPLACE) {
+ 		struct vm_area_struct *vma = find_vma(mm, addr);
+ 
+-		if (vma && vma->vm_start <= addr)
++		if (vma && vma->vm_start < addr + len)
+ 			return -EEXIST;
+ 	}
+ 
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 0b6480979ac7..074732f3c209 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1204,6 +1204,7 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk)
+ {
+ 	if (!chunk)
+ 		return;
++	pcpu_mem_free(chunk->md_blocks);
+ 	pcpu_mem_free(chunk->bound_map);
+ 	pcpu_mem_free(chunk->alloc_map);
+ 	pcpu_mem_free(chunk);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 03822f86f288..fc0436407471 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,6 +386,17 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
++
++	/*
++	 * Make sure we apply some minimal pressure on default priority
++	 * even on small cgroups. Stale objects are not only consuming memory
++	 * by themselves, but can also hold a reference to a dying cgroup,
++	 * preventing it from being reclaimed. A dying cgroup with all
++	 * corresponding structures like per-cpu stats and kmem caches
++	 * can be really big, so it may lead to a significant waste of memory.
++	 */
++	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
++
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 55a5bb1d773d..7878da76abf2 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1286,7 +1286,6 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 	"vmacache_find_calls",
+ 	"vmacache_find_hits",
+-	"vmacache_full_flushes",
+ #endif
+ #ifdef CONFIG_SWAP
+ 	"swap_ra",
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index ae91e2d40056..3a7b0773536b 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -83,6 +83,7 @@ enum {
+ 
+ struct smp_dev {
+ 	/* Secure Connections OOB data */
++	bool			local_oob;
+ 	u8			local_pk[64];
+ 	u8			local_rand[16];
+ 	bool			debug_key;
+@@ -599,6 +600,8 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
+ 
+ 	memcpy(rand, smp->local_rand, 16);
+ 
++	smp->local_oob = true;
++
+ 	return 0;
+ }
+ 
+@@ -1785,7 +1788,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (req->oob_flag == SMP_OOB_PRESENT)
++	if (req->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	/* SMP over BR/EDR requires special treatment */
+@@ -1967,7 +1970,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (rsp->oob_flag == SMP_OOB_PRESENT)
++	if (rsp->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	smp->prsp[0] = SMP_CMD_PAIRING_RSP;
+@@ -2697,7 +2700,13 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * key was set/generated.
+ 	 */
+ 	if (test_bit(SMP_FLAG_LOCAL_OOB, &smp->flags)) {
+-		struct smp_dev *smp_dev = chan->data;
++		struct l2cap_chan *hchan = hdev->smp_data;
++		struct smp_dev *smp_dev;
++
++		if (!hchan || !hchan->data)
++			return SMP_UNSPECIFIED;
++
++		smp_dev = hchan->data;
+ 
+ 		tfm_ecdh = smp_dev->tfm_ecdh;
+ 	} else {
+@@ -3230,6 +3239,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
+ 		return ERR_CAST(tfm_ecdh);
+ 	}
+ 
++	smp->local_oob = false;
+ 	smp->tfm_aes = tfm_aes;
+ 	smp->tfm_cmac = tfm_cmac;
+ 	smp->tfm_ecdh = tfm_ecdh;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 559a91271f82..bf669e77f9f3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1754,6 +1754,28 @@ int call_netdevice_notifiers(unsigned long val, struct net_device *dev)
+ }
+ EXPORT_SYMBOL(call_netdevice_notifiers);
+ 
++/**
++ *	call_netdevice_notifiers_mtu - call all network notifier blocks
++ *	@val: value passed unmodified to notifier function
++ *	@dev: net_device pointer passed unmodified to notifier function
++ *	@arg: additional u32 argument passed to the notifier function
++ *
++ *	Call all network notifier blocks.  Parameters and return value
++ *	are as for raw_notifier_call_chain().
++ */
++static int call_netdevice_notifiers_mtu(unsigned long val,
++					struct net_device *dev, u32 arg)
++{
++	struct netdev_notifier_info_ext info = {
++		.info.dev = dev,
++		.ext.mtu = arg,
++	};
++
++	BUILD_BUG_ON(offsetof(struct netdev_notifier_info_ext, info) != 0);
++
++	return call_netdevice_notifiers_info(val, &info.info);
++}
++
+ #ifdef CONFIG_NET_INGRESS
+ static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);
+ 
+@@ -7118,14 +7140,16 @@ int dev_set_mtu(struct net_device *dev, int new_mtu)
+ 	err = __dev_set_mtu(dev, new_mtu);
+ 
+ 	if (!err) {
+-		err = call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++		err = call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						   orig_mtu);
+ 		err = notifier_to_errno(err);
+ 		if (err) {
+ 			/* setting mtu back and notifying everyone again,
+ 			 * so that they have a chance to revert changes.
+ 			 */
+ 			__dev_set_mtu(dev, orig_mtu);
+-			call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++			call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						     new_mtu);
+ 		}
+ 	}
+ 	return err;
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index e677a20180cf..6c04f1bf377d 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2623,6 +2623,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 	case ETHTOOL_GPHYSTATS:
+ 	case ETHTOOL_GTSO:
+ 	case ETHTOOL_GPERMADDR:
++	case ETHTOOL_GUFO:
+ 	case ETHTOOL_GGSO:
+ 	case ETHTOOL_GGRO:
+ 	case ETHTOOL_GFLAGS:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 963ee2e88861..0b2bd7d3220f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2334,7 +2334,8 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+-	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
++	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC | __GFP_COMP,
++			   get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index bafaa033826f..18de39dbdc30 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1848,10 +1848,8 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ 		if (tb[IFLA_IF_NETNSID]) {
+ 			netnsid = nla_get_s32(tb[IFLA_IF_NETNSID]);
+ 			tgt_net = get_target_net(skb->sk, netnsid);
+-			if (IS_ERR(tgt_net)) {
+-				tgt_net = net;
+-				netnsid = -1;
+-			}
++			if (IS_ERR(tgt_net))
++				return PTR_ERR(tgt_net);
+ 		}
+ 
+ 		if (tb[IFLA_EXT_MASK])
+@@ -2787,6 +2785,12 @@ struct net_device *rtnl_create_link(struct net *net,
+ 	else if (ops->get_num_rx_queues)
+ 		num_rx_queues = ops->get_num_rx_queues();
+ 
++	if (num_tx_queues < 1 || num_tx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
++	if (num_rx_queues < 1 || num_rx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
+ 	dev = alloc_netdev_mqs(ops->priv_size, ifname, name_assign_type,
+ 			       ops->setup, num_tx_queues, num_rx_queues);
+ 	if (!dev)
+@@ -3694,16 +3698,27 @@ static int rtnl_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 	int err = 0;
+ 	int fidx = 0;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
+-			  IFLA_MAX, ifla_policy, NULL);
+-	if (err < 0) {
+-		return -EINVAL;
+-	} else if (err == 0) {
+-		if (tb[IFLA_MASTER])
+-			br_idx = nla_get_u32(tb[IFLA_MASTER]);
+-	}
++	/* A hack to preserve kernel<->userspace interface.
++	 * Before Linux v4.12 this code accepted ndmsg since iproute2 v3.3.0.
++	 * However, ndmsg is shorter than ifinfomsg thus nlmsg_parse() bails.
++	 * So, check for ndmsg with an optional u32 attribute (not used here).
++	 * Fortunately these sizes don't conflict with the size of ifinfomsg
++	 * with an optional attribute.
++	 */
++	if (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) &&
++	    (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) +
++	     nla_attr_size(sizeof(u32)))) {
++		err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
++				  IFLA_MAX, ifla_policy, NULL);
++		if (err < 0) {
++			return -EINVAL;
++		} else if (err == 0) {
++			if (tb[IFLA_MASTER])
++				br_idx = nla_get_u32(tb[IFLA_MASTER]);
++		}
+ 
+-	brport_idx = ifm->ifi_index;
++		brport_idx = ifm->ifi_index;
++	}
+ 
+ 	if (br_idx) {
+ 		br_dev = __dev_get_by_index(net, br_idx);
+diff --git a/net/dccp/input.c b/net/dccp/input.c
+index d28d46bff6ab..85d6c879383d 100644
+--- a/net/dccp/input.c
++++ b/net/dccp/input.c
+@@ -606,11 +606,13 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ 	if (sk->sk_state == DCCP_LISTEN) {
+ 		if (dh->dccph_type == DCCP_PKT_REQUEST) {
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = inet_csk(sk)->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 			if (!acceptable)
+ 				return 1;
+ 			consume_skb(skb);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b08feb219b44..8e08cea6f178 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -493,9 +493,11 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
+ 
+ 		dh->dccph_checksum = dccp_v4_csum_finish(skb, ireq->ir_loc_addr,
+ 							      ireq->ir_rmt_addr);
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 2998b0e47d4b..0113993e9b2c 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1243,7 +1243,8 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
+ static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct netdev_notifier_changeupper_info *info;
++	struct netdev_notifier_changeupper_info *upper_info = ptr;
++	struct netdev_notifier_info_ext *info_ext = ptr;
+ 	struct in_device *in_dev;
+ 	struct net *net = dev_net(dev);
+ 	unsigned int flags;
+@@ -1278,16 +1279,19 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
+ 			fib_sync_up(dev, RTNH_F_LINKDOWN);
+ 		else
+ 			fib_sync_down_dev(dev, event, false);
+-		/* fall through */
++		rt_cache_flush(net);
++		break;
+ 	case NETDEV_CHANGEMTU:
++		fib_sync_mtu(dev, info_ext->ext.mtu);
+ 		rt_cache_flush(net);
+ 		break;
+ 	case NETDEV_CHANGEUPPER:
+-		info = ptr;
++		upper_info = ptr;
+ 		/* flush all routes if dev is linked to or unlinked from
+ 		 * an L3 master device (e.g., VRF)
+ 		 */
+-		if (info->upper_dev && netif_is_l3_master(info->upper_dev))
++		if (upper_info->upper_dev &&
++		    netif_is_l3_master(upper_info->upper_dev))
+ 			fib_disable_ip(dev, NETDEV_DOWN, true);
+ 		break;
+ 	}
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index f3c89ccf14c5..446204ca7406 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1470,6 +1470,56 @@ static int call_fib_nh_notifiers(struct fib_nh *fib_nh,
+ 	return NOTIFY_DONE;
+ }
+ 
++/* Update the PMTU of exceptions when:
++ * - the new MTU of the first hop becomes smaller than the PMTU
++ * - the old MTU was the same as the PMTU, and it limited discovery of
++ *   larger MTUs on the path. With that limit raised, we can now
++ *   discover larger MTUs
++ * A special case is locked exceptions, for which the PMTU is smaller
++ * than the minimal accepted PMTU:
++ * - if the new MTU is greater than the PMTU, don't make any change
++ * - otherwise, unlock and set PMTU
++ */
++static void nh_update_mtu(struct fib_nh *nh, u32 new, u32 orig)
++{
++	struct fnhe_hash_bucket *bucket;
++	int i;
++
++	bucket = rcu_dereference_protected(nh->nh_exceptions, 1);
++	if (!bucket)
++		return;
++
++	for (i = 0; i < FNHE_HASH_SIZE; i++) {
++		struct fib_nh_exception *fnhe;
++
++		for (fnhe = rcu_dereference_protected(bucket[i].chain, 1);
++		     fnhe;
++		     fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1)) {
++			if (fnhe->fnhe_mtu_locked) {
++				if (new <= fnhe->fnhe_pmtu) {
++					fnhe->fnhe_pmtu = new;
++					fnhe->fnhe_mtu_locked = false;
++				}
++			} else if (new < fnhe->fnhe_pmtu ||
++				   orig == fnhe->fnhe_pmtu) {
++				fnhe->fnhe_pmtu = new;
++			}
++		}
++	}
++}
++
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
++{
++	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
++	struct hlist_head *head = &fib_info_devhash[hash];
++	struct fib_nh *nh;
++
++	hlist_for_each_entry(nh, head, nh_hash) {
++		if (nh->nh_dev == dev)
++			nh_update_mtu(nh, dev->mtu, orig_mtu);
++	}
++}
++
+ /* Event              force Flags           Description
+  * NETDEV_CHANGE      0     LINKDOWN        Carrier OFF, not for scope host
+  * NETDEV_DOWN        0     LINKDOWN|DEAD   Link down, not for scope host
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 33a88e045efd..39cfa3a191d8 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -535,7 +535,8 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 	struct ip_options_rcu *opt;
+ 	struct rtable *rt;
+ 
+-	opt = ireq_opt_deref(ireq);
++	rcu_read_lock();
++	opt = rcu_dereference(ireq->ireq_opt);
+ 
+ 	flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark,
+ 			   RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
+@@ -549,11 +550,13 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 		goto no_route;
+ 	if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
+ 		goto route_err;
++	rcu_read_unlock();
+ 	return &rt->dst;
+ 
+ route_err:
+ 	ip_rt_put(rt);
+ no_route:
++	rcu_read_unlock();
+ 	__IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+ 	return NULL;
+ }
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index c0fe5ad996f2..26c36cccabdc 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -149,7 +149,6 @@ static void ip_cmsg_recv_security(struct msghdr *msg, struct sk_buff *skb)
+ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ {
+ 	struct sockaddr_in sin;
+-	const struct iphdr *iph = ip_hdr(skb);
+ 	__be16 *ports;
+ 	int end;
+ 
+@@ -164,7 +163,7 @@ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ 	ports = (__be16 *)skb_transport_header(skb);
+ 
+ 	sin.sin_family = AF_INET;
+-	sin.sin_addr.s_addr = iph->daddr;
++	sin.sin_addr.s_addr = ip_hdr(skb)->daddr;
+ 	sin.sin_port = ports[1];
+ 	memset(sin.sin_zero, 0, sizeof(sin.sin_zero));
+ 
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index c4f5602308ed..284a22154b4e 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -627,6 +627,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		    const struct iphdr *tnl_params, u8 protocol)
+ {
+ 	struct ip_tunnel *tunnel = netdev_priv(dev);
++	unsigned int inner_nhdr_len = 0;
+ 	const struct iphdr *inner_iph;
+ 	struct flowi4 fl4;
+ 	u8     tos, ttl;
+@@ -636,6 +637,14 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	__be32 dst;
+ 	bool connected;
+ 
++	/* ensure we can access the inner net header, for several users below */
++	if (skb->protocol == htons(ETH_P_IP))
++		inner_nhdr_len = sizeof(struct iphdr);
++	else if (skb->protocol == htons(ETH_P_IPV6))
++		inner_nhdr_len = sizeof(struct ipv6hdr);
++	if (unlikely(!pskb_may_pull(skb, inner_nhdr_len)))
++		goto tx_error;
++
+ 	inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
+ 	connected = (tunnel->parms.iph.daddr != 0);
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 1df6e97106d7..f80acb5f1896 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1001,21 +1001,22 @@ out:	kfree_skb(skb);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ 	struct dst_entry *dst = &rt->dst;
++	u32 old_mtu = ipv4_mtu(dst);
+ 	struct fib_result res;
+ 	bool lock = false;
+ 
+ 	if (ip_mtu_locked(dst))
+ 		return;
+ 
+-	if (ipv4_mtu(dst) < mtu)
++	if (old_mtu < mtu)
+ 		return;
+ 
+ 	if (mtu < ip_rt_min_pmtu) {
+ 		lock = true;
+-		mtu = ip_rt_min_pmtu;
++		mtu = min(old_mtu, ip_rt_min_pmtu);
+ 	}
+ 
+-	if (rt->rt_pmtu == mtu &&
++	if (rt->rt_pmtu == mtu && !lock &&
+ 	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
+ 		return;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index f9dcb29be12d..8b7294688633 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5976,11 +5976,13 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 			if (th->fin)
+ 				goto discard;
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = icsk->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 
+ 			if (!acceptable)
+ 				return 1;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 488b201851d7..d380856ba488 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -942,9 +942,11 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 	if (skb) {
+ 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
+ 
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index fed65bc9df86..a12df801de94 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1631,7 +1631,7 @@ busy_check:
+ 	*err = error;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL_GPL(__skb_recv_udp);
++EXPORT_SYMBOL(__skb_recv_udp);
+ 
+ /*
+  * 	This should be easy, if there is something there we
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f66a1cae3366..3484c7020fd9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4203,7 +4203,6 @@ static struct inet6_ifaddr *if6_get_first(struct seq_file *seq, loff_t pos)
+ 				p++;
+ 				continue;
+ 			}
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 
+@@ -4227,13 +4226,12 @@ static struct inet6_ifaddr *if6_get_next(struct seq_file *seq,
+ 		return ifa;
+ 	}
+ 
++	state->offset = 0;
+ 	while (++state->bucket < IN6_ADDR_HSIZE) {
+-		state->offset = 0;
+ 		hlist_for_each_entry_rcu(ifa,
+ 				     &inet6_addr_lst[state->bucket], addr_lst) {
+ 			if (!net_eq(dev_net(ifa->idev->dev), net))
+ 				continue;
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 5516f55e214b..cbe46175bb59 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -196,6 +196,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 				*ppcpu_rt = NULL;
+ 			}
+ 		}
++
++		free_percpu(f6i->rt6i_pcpu);
+ 	}
+ 
+ 	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 1cc9650af9fb..f5b5b0574a2d 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1226,7 +1226,7 @@ static inline int
+ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	const struct iphdr  *iph = ip_hdr(skb);
++	const struct iphdr  *iph;
+ 	int encap_limit = -1;
+ 	struct flowi6 fl6;
+ 	__u8 dsfield;
+@@ -1234,6 +1234,11 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	/* ensure we can access the full inner ip header */
++	if (!pskb_may_pull(skb, sizeof(struct iphdr)))
++		return -1;
++
++	iph = ip_hdr(skb);
+ 	memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ 
+ 	tproto = READ_ONCE(t->parms.proto);
+@@ -1297,7 +1302,7 @@ static inline int
+ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++	struct ipv6hdr *ipv6h;
+ 	int encap_limit = -1;
+ 	__u16 offset;
+ 	struct flowi6 fl6;
+@@ -1306,6 +1311,10 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
++		return -1;
++
++	ipv6h = ipv6_hdr(skb);
+ 	tproto = READ_ONCE(t->parms.proto);
+ 	if ((tproto != IPPROTO_IPV6 && tproto != 0) ||
+ 	    ip6_tnl_addr_conflict(t, ipv6h))
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index afc307c89d1a..7ef3e0a5bf86 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -650,8 +650,6 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->priority = sk->sk_priority;
+ 	skb->mark = sk->sk_mark;
+-	skb_dst_set(skb, &rt->dst);
+-	*dstp = NULL;
+ 
+ 	skb_put(skb, length);
+ 	skb_reset_network_header(skb);
+@@ -664,8 +662,14 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 
+ 	skb->transport_header = skb->network_header;
+ 	err = memcpy_from_msg(iph, msg, length);
+-	if (err)
+-		goto error_fault;
++	if (err) {
++		err = -EFAULT;
++		kfree_skb(skb);
++		goto error;
++	}
++
++	skb_dst_set(skb, &rt->dst);
++	*dstp = NULL;
+ 
+ 	/* if egress device is enslaved to an L3 master device pass the
+ 	 * skb to its handler for processing
+@@ -674,21 +678,28 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	if (unlikely(!skb))
+ 		return 0;
+ 
++	/* Acquire rcu_read_lock() in case we need to use rt->rt6i_idev
++	 * in the error path. Since skb has been freed, the dst could
++	 * have been queued for deletion.
++	 */
++	rcu_read_lock();
+ 	IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ 	err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
+ 		      NULL, rt->dst.dev, dst_output);
+ 	if (err > 0)
+ 		err = net_xmit_errno(err);
+-	if (err)
+-		goto error;
++	if (err) {
++		IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++		rcu_read_unlock();
++		goto error_check;
++	}
++	rcu_read_unlock();
+ out:
+ 	return 0;
+ 
+-error_fault:
+-	err = -EFAULT;
+-	kfree_skb(skb);
+ error:
+ 	IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++error_check:
+ 	if (err == -ENOBUFS && !np->recverr)
+ 		err = 0;
+ 	return err;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 480a79f47c52..ed526e257da6 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4314,11 +4314,6 @@ static int ip6_route_info_append(struct net *net,
+ 	if (!nh)
+ 		return -ENOMEM;
+ 	nh->fib6_info = rt;
+-	err = ip6_convert_metrics(net, rt, r_cfg);
+-	if (err) {
+-		kfree(nh);
+-		return err;
+-	}
+ 	memcpy(&nh->r_cfg, r_cfg, sizeof(*r_cfg));
+ 	list_add_tail(&nh->next, rt6_nh_list);
+ 
+diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c
+index c070dfc0190a..c92894c3e40a 100644
+--- a/net/netlabel/netlabel_unlabeled.c
++++ b/net/netlabel/netlabel_unlabeled.c
+@@ -781,7 +781,8 @@ static int netlbl_unlabel_addrinfo_get(struct genl_info *info,
+ {
+ 	u32 addr_len;
+ 
+-	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR]) {
++	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR] &&
++	    info->attrs[NLBL_UNLABEL_A_IPV4MASK]) {
+ 		addr_len = nla_len(info->attrs[NLBL_UNLABEL_A_IPV4ADDR]);
+ 		if (addr_len != sizeof(struct in_addr) &&
+ 		    addr_len != nla_len(info->attrs[NLBL_UNLABEL_A_IPV4MASK]))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index e6445d8f3f57..3237e9978c1a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2712,10 +2712,12 @@ tpacket_error:
+ 			}
+ 		}
+ 
+-		if (po->has_vnet_hdr && virtio_net_hdr_to_skb(skb, vnet_hdr,
+-							      vio_le())) {
+-			tp_len = -EINVAL;
+-			goto tpacket_error;
++		if (po->has_vnet_hdr) {
++			if (virtio_net_hdr_to_skb(skb, vnet_hdr, vio_le())) {
++				tp_len = -EINVAL;
++				goto tpacket_error;
++			}
++			virtio_net_hdr_set_proto(skb, vnet_hdr);
+ 		}
+ 
+ 		skb->destructor = tpacket_destruct_skb;
+@@ -2911,6 +2913,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (err)
+ 			goto out_free;
+ 		len += sizeof(vnet_hdr);
++		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+ 	skb_probe_transport_header(skb, reserve);
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 260749956ef3..24df95a7b9c7 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -397,6 +397,7 @@ static int u32_init(struct tcf_proto *tp)
+ 	rcu_assign_pointer(tp_c->hlist, root_ht);
+ 	root_ht->tp_c = tp_c;
+ 
++	root_ht->refcnt++;
+ 	rcu_assign_pointer(tp->root, root_ht);
+ 	tp->data = tp_c;
+ 	return 0;
+@@ -608,7 +609,7 @@ static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht,
+ 	struct tc_u_hnode __rcu **hn;
+ 	struct tc_u_hnode *phn;
+ 
+-	WARN_ON(ht->refcnt);
++	WARN_ON(--ht->refcnt);
+ 
+ 	u32_clear_hnode(tp, ht, extack);
+ 
+@@ -647,7 +648,7 @@ static void u32_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 
+ 	WARN_ON(root_ht == NULL);
+ 
+-	if (root_ht && --root_ht->refcnt == 0)
++	if (root_ht && --root_ht->refcnt == 1)
+ 		u32_destroy_hnode(tp, root_ht, extack);
+ 
+ 	if (--tp_c->refcnt == 0) {
+@@ -696,7 +697,6 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ 	}
+ 
+ 	if (ht->refcnt == 1) {
+-		ht->refcnt--;
+ 		u32_destroy_hnode(tp, ht, extack);
+ 	} else {
+ 		NL_SET_ERR_MSG_MOD(extack, "Can not delete in-use filter");
+@@ -706,11 +706,11 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ out:
+ 	*last = true;
+ 	if (root_ht) {
+-		if (root_ht->refcnt > 1) {
++		if (root_ht->refcnt > 2) {
+ 			*last = false;
+ 			goto ret;
+ 		}
+-		if (root_ht->refcnt == 1) {
++		if (root_ht->refcnt == 2) {
+ 			if (!ht_empty(root_ht)) {
+ 				*last = false;
+ 				goto ret;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 54eca685420f..99cc25aae503 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1304,6 +1304,18 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+  * Delete/get qdisc.
+  */
+ 
++const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
++	[TCA_KIND]		= { .type = NLA_STRING },
++	[TCA_OPTIONS]		= { .type = NLA_NESTED },
++	[TCA_RATE]		= { .type = NLA_BINARY,
++				    .len = sizeof(struct tc_estimator) },
++	[TCA_STAB]		= { .type = NLA_NESTED },
++	[TCA_DUMP_INVISIBLE]	= { .type = NLA_FLAG },
++	[TCA_CHAIN]		= { .type = NLA_U32 },
++	[TCA_INGRESS_BLOCK]	= { .type = NLA_U32 },
++	[TCA_EGRESS_BLOCK]	= { .type = NLA_U32 },
++};
++
+ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 			struct netlink_ext_ack *extack)
+ {
+@@ -1320,7 +1332,8 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1404,7 +1417,8 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 
+ replay:
+ 	/* Reinit, just in case something touches this. */
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1638,7 +1652,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 	idx = 0;
+ 	ASSERT_RTNL();
+ 
+-	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX,
++			  rtm_tca_policy, NULL);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1857,7 +1872,8 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 12cac85da994..033696e6f74f 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -260,6 +260,7 @@ void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk)
+ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ {
+ 	struct dst_entry *dst = sctp_transport_dst_check(t);
++	struct sock *sk = t->asoc->base.sk;
+ 	bool change = true;
+ 
+ 	if (unlikely(pmtu < SCTP_DEFAULT_MINSEGMENT)) {
+@@ -271,12 +272,19 @@ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ 	pmtu = SCTP_TRUNC4(pmtu);
+ 
+ 	if (dst) {
+-		dst->ops->update_pmtu(dst, t->asoc->base.sk, NULL, pmtu);
++		struct sctp_pf *pf = sctp_get_pf_specific(dst->ops->family);
++		union sctp_addr addr;
++
++		pf->af->from_sk(&addr, sk);
++		pf->to_sk_daddr(&t->ipaddr, sk);
++		dst->ops->update_pmtu(dst, sk, NULL, pmtu);
++		pf->to_sk_daddr(&addr, sk);
++
+ 		dst = sctp_transport_dst_check(t);
+ 	}
+ 
+ 	if (!dst) {
+-		t->af_specific->get_dst(t, &t->saddr, &t->fl, t->asoc->base.sk);
++		t->af_specific->get_dst(t, &t->saddr, &t->fl, sk);
+ 		dst = t->dst;
+ 	}
+ 
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 093e16d1b770..cdaf3534e373 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1422,8 +1422,10 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ 	/* Handle implicit connection setup */
+ 	if (unlikely(dest)) {
+ 		rc = __tipc_sendmsg(sock, m, dlen);
+-		if (dlen && (dlen == rc))
++		if (dlen && dlen == rc) {
++			tsk->peer_caps = tipc_node_get_capabilities(net, dnode);
+ 			tsk->snt_unacked = tsk_inc(tsk, dlen + msg_hdr_sz(hdr));
++		}
+ 		return rc;
+ 	}
+ 
+diff --git a/scripts/subarch.include b/scripts/subarch.include
+new file mode 100644
+index 000000000000..650682821126
+--- /dev/null
++++ b/scripts/subarch.include
+@@ -0,0 +1,13 @@
++# SUBARCH tells the usermode build what the underlying arch is.  That is set
++# first, and if a usermode build is happening, the "ARCH=um" on the command
++# line overrides the setting of ARCH below.  If a native build is happening,
++# then ARCH is assigned, getting whatever value it gets normally, and
++# SUBARCH is subsequently ignored.
++
++SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
++				  -e s/sun4u/sparc64/ \
++				  -e s/arm.*/arm/ -e s/sa110/arm/ \
++				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
++				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
++				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
++				  -e s/riscv.*/riscv/)
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 560ec0986e1a..74244d8e2909 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -40,6 +40,8 @@ static void azx_clear_corbrp(struct hdac_bus *bus)
+  */
+ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus)
+ {
++	WARN_ON_ONCE(!bus->rb.area);
++
+ 	spin_lock_irq(&bus->reg_lock);
+ 	/* CORB set up */
+ 	bus->corb.addr = bus->rb.addr;
+@@ -383,7 +385,7 @@ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus)
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_exit_link_reset);
+ 
+ /* reset codec link */
+-static int azx_reset(struct hdac_bus *bus, bool full_reset)
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ {
+ 	if (!full_reset)
+ 		goto skip_reset;
+@@ -408,7 +410,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+  skip_reset:
+ 	/* check to see if controller is ready */
+ 	if (!snd_hdac_chip_readb(bus, GCTL)) {
+-		dev_dbg(bus->dev, "azx_reset: controller not ready!\n");
++		dev_dbg(bus->dev, "controller not ready!\n");
+ 		return -EBUSY;
+ 	}
+ 
+@@ -423,6 +425,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(snd_hdac_bus_reset_link);
+ 
+ /* enable interrupts */
+ static void azx_int_enable(struct hdac_bus *bus)
+@@ -477,15 +480,17 @@ bool snd_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ 		return false;
+ 
+ 	/* reset controller */
+-	azx_reset(bus, full_reset);
++	snd_hdac_bus_reset_link(bus, full_reset);
+ 
+-	/* initialize interrupts */
++	/* clear interrupts */
+ 	azx_int_clear(bus);
+-	azx_int_enable(bus);
+ 
+ 	/* initialize the codec command I/O */
+ 	snd_hdac_bus_init_cmd_io(bus);
+ 
++	/* enable interrupts after CORB/RIRB buffers are initialized above */
++	azx_int_enable(bus);
++
+ 	/* program the position buffer */
+ 	if (bus->use_posbuf && bus->posbuf.addr) {
+ 		snd_hdac_chip_writel(bus, DPLBASE, (u32)bus->posbuf.addr);
+diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
+index 77203841c535..90df61d263b8 100644
+--- a/sound/soc/amd/acp-pcm-dma.c
++++ b/sound/soc/amd/acp-pcm-dma.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/sizes.h>
+ #include <linux/pm_runtime.h>
+ 
+@@ -184,6 +185,24 @@ static void config_dma_descriptor_in_sram(void __iomem *acp_mmio,
+ 	acp_reg_write(descr_info->xfer_val, acp_mmio, mmACP_SRBM_Targ_Idx_Data);
+ }
+ 
++static void pre_config_reset(void __iomem *acp_mmio, u16 ch_num)
++{
++	u32 dma_ctrl;
++	int ret;
++
++	/* clear the reset bit */
++	dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
++	acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	/* check the reset bit before programming configuration registers */
++	ret = readl_poll_timeout(acp_mmio + ((mmACP_DMA_CNTL_0 + ch_num) * 4),
++				 dma_ctrl,
++				 !(dma_ctrl & ACP_DMA_CNTL_0__DMAChRst_MASK),
++				 100, ACP_DMA_RESET_TIME);
++	if (ret < 0)
++		pr_err("Failed to clear reset of channel : %d\n", ch_num);
++}
++
+ /*
+  * Initialize the DMA descriptor information for transfer between
+  * system memory <-> ACP SRAM
+@@ -238,6 +257,7 @@ static void set_acp_sysmem_dma_descriptors(void __iomem *acp_mmio,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	config_acp_dma_channel(acp_mmio, ch,
+ 			       dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
+@@ -277,6 +297,7 @@ static void set_acp_to_i2s_dma_descriptors(void __iomem *acp_mmio, u32 size,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	/* Configure the DMA channel with the above descriptore */
+ 	config_acp_dma_channel(acp_mmio, ch, dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index a92586106932..f0948e84f6ae 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -519,6 +519,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case MAX98373_R2000_SW_RESET ... MAX98373_R2009_INT_FLAG3:
++	case MAX98373_R203E_AMP_PATH_GAIN:
+ 	case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
+ 	case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
+ 	case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
+@@ -728,6 +729,7 @@ static int max98373_probe(struct snd_soc_component *component)
+ 	/* Software Reset */
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 
+ 	/* IV default slot configuration */
+ 	regmap_write(max98373->regmap,
+@@ -816,6 +818,7 @@ static int max98373_resume(struct device *dev)
+ 
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 	regcache_cache_only(max98373->regmap, false);
+ 	regcache_sync(max98373->regmap);
+ 	return 0;
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index dca82dd6e3bf..32fe76c3134a 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/sigmadsp.c b/sound/soc/codecs/sigmadsp.c
+index d53680ac78e4..6df158669420 100644
+--- a/sound/soc/codecs/sigmadsp.c
++++ b/sound/soc/codecs/sigmadsp.c
+@@ -117,8 +117,7 @@ static int sigmadsp_ctrl_write(struct sigmadsp *sigmadsp,
+ 	struct sigmadsp_control *ctrl, void *data)
+ {
+ 	/* safeload loads up to 20 bytes in a atomic operation */
+-	if (ctrl->num_bytes > 4 && ctrl->num_bytes <= 20 && sigmadsp->ops &&
+-	    sigmadsp->ops->safeload)
++	if (ctrl->num_bytes <= 20 && sigmadsp->ops && sigmadsp->ops->safeload)
+ 		return sigmadsp->ops->safeload(sigmadsp, ctrl->addr, data,
+ 			ctrl->num_bytes);
+ 	else
+diff --git a/sound/soc/codecs/wm8804-i2c.c b/sound/soc/codecs/wm8804-i2c.c
+index f27464c2c5ba..79541960f45d 100644
+--- a/sound/soc/codecs/wm8804-i2c.c
++++ b/sound/soc/codecs/wm8804-i2c.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/i2c.h>
++#include <linux/acpi.h>
+ 
+ #include "wm8804.h"
+ 
+@@ -40,17 +41,29 @@ static const struct i2c_device_id wm8804_i2c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, wm8804_i2c_id);
+ 
++#if defined(CONFIG_OF)
+ static const struct of_device_id wm8804_of_match[] = {
+ 	{ .compatible = "wlf,wm8804", },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, wm8804_of_match);
++#endif
++
++#ifdef CONFIG_ACPI
++static const struct acpi_device_id wm8804_acpi_match[] = {
++	{ "1AEC8804", 0 }, /* Wolfson PCI ID + part ID */
++	{ "10138804", 0 }, /* Cirrus Logic PCI ID + part ID */
++	{ },
++};
++MODULE_DEVICE_TABLE(acpi, wm8804_acpi_match);
++#endif
+ 
+ static struct i2c_driver wm8804_i2c_driver = {
+ 	.driver = {
+ 		.name = "wm8804",
+ 		.pm = &wm8804_pm,
+-		.of_match_table = wm8804_of_match,
++		.of_match_table = of_match_ptr(wm8804_of_match),
++		.acpi_match_table = ACPI_PTR(wm8804_acpi_match),
+ 	},
+ 	.probe = wm8804_i2c_probe,
+ 	.remove = wm8804_i2c_remove,
+diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
+index f0d9793f872a..c7cdfa4a7076 100644
+--- a/sound/soc/intel/skylake/skl.c
++++ b/sound/soc/intel/skylake/skl.c
+@@ -844,7 +844,7 @@ static int skl_first_init(struct hdac_ext_bus *ebus)
+ 		return -ENXIO;
+ 	}
+ 
+-	skl_init_chip(bus, true);
++	snd_hdac_bus_reset_link(bus, true);
+ 
+ 	snd_hdac_bus_parse_capabilities(bus);
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 593f66b8622f..33bb97c0b6b6 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -933,8 +933,10 @@ static int msm_routing_probe(struct snd_soc_component *c)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < MAX_SESSIONS; i++)
++	for (i = 0; i < MAX_SESSIONS; i++) {
+ 		routing_data->sessions[i].port_id = -1;
++		routing_data->sessions[i].fedai_id = -1;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index 4672688cac32..b7c1f34ec280 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -465,6 +465,11 @@ static void rsnd_adg_get_clkout(struct rsnd_priv *priv,
+ 		goto rsnd_adg_get_clkout_end;
+ 
+ 	req_size = prop->length / sizeof(u32);
++	if (req_size > REQ_SIZE) {
++		dev_err(dev,
++			"too many clock-frequency, use top %d\n", REQ_SIZE);
++		req_size = REQ_SIZE;
++	}
+ 
+ 	of_property_read_u32_array(np, "clock-frequency", req_rate, req_size);
+ 	req_48kHz_rate = 0;
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index ff13189a7ee4..982a72e73ea9 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -482,7 +482,7 @@ static int rsnd_status_update(u32 *status,
+ 			(func_call && (mod)->ops->fn) ? #fn : "");	\
+ 		if (func_call && (mod)->ops->fn)			\
+ 			tmp = (mod)->ops->fn(mod, io, param);		\
+-		if (tmp)						\
++		if (tmp && (tmp != -EPROBE_DEFER))			\
+ 			dev_err(dev, "%s[%d] : %s error %d\n",		\
+ 				rsnd_mod_name(mod), rsnd_mod_id(mod),	\
+ 						     #fn, tmp);		\
+@@ -1550,6 +1550,14 @@ exit_snd_probe:
+ 		rsnd_dai_call(remove, &rdai->capture, priv);
+ 	}
+ 
++	/*
++	 * adg is very special mod which can't use rsnd_dai_call(remove),
++	 * and it registers ADG clock on probe.
++	 * It should be unregister if probe failed.
++	 * Mainly it is assuming -EPROBE_DEFER case
++	 */
++	rsnd_adg_remove(priv);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index ef82b94d038b..2f3f4108fda5 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -244,6 +244,10 @@ static int rsnd_dmaen_attach(struct rsnd_dai_stream *io,
+ 	/* try to get DMAEngine channel */
+ 	chan = rsnd_dmaen_request_channel(io, mod_from, mod_to);
+ 	if (IS_ERR_OR_NULL(chan)) {
++		/* Let's follow when -EPROBE_DEFER case */
++		if (PTR_ERR(chan) == -EPROBE_DEFER)
++			return PTR_ERR(chan);
++
+ 		/*
+ 		 * DMA failed. try to PIO mode
+ 		 * see
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 4663de3cf495..0b4896d411f9 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1430,7 +1430,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = codec_dai->playback_widget;
+ 	source = cpu_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+@@ -1443,7 +1443,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = cpu_dai->playback_widget;
+ 	source = codec_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index a099c3e45504..577f6178af57 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3658,6 +3658,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_dapm_path *source_p, *sink_p;
+ 	struct snd_soc_dai *source, *sink;
++	struct snd_soc_pcm_runtime *rtd = w->priv;
+ 	const struct snd_soc_pcm_stream *config = w->params + w->params_select;
+ 	struct snd_pcm_substream substream;
+ 	struct snd_pcm_hw_params *params = NULL;
+@@ -3717,6 +3718,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ 		goto out;
+ 	}
+ 	substream.runtime = runtime;
++	substream.private_data = rtd;
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_PRE_PMU:
+@@ -3901,6 +3903,7 @@ outfree_w_param:
+ }
+ 
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+@@ -3969,6 +3972,7 @@ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
+ 
+ 	w->params = params;
+ 	w->num_params = num_params;
++	w->priv = rtd;
+ 
+ 	ret = snd_soc_dapm_add_path(&card->dapm, source, w, NULL, NULL);
+ 	if (ret)
+diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
+index efcaf6cac2eb..e46f51b17513 100644
+--- a/tools/perf/scripts/python/export-to-postgresql.py
++++ b/tools/perf/scripts/python/export-to-postgresql.py
+@@ -204,14 +204,23 @@ from ctypes import *
+ libpq = CDLL("libpq.so.5")
+ PQconnectdb = libpq.PQconnectdb
+ PQconnectdb.restype = c_void_p
++PQconnectdb.argtypes = [ c_char_p ]
+ PQfinish = libpq.PQfinish
++PQfinish.argtypes = [ c_void_p ]
+ PQstatus = libpq.PQstatus
++PQstatus.restype = c_int
++PQstatus.argtypes = [ c_void_p ]
+ PQexec = libpq.PQexec
+ PQexec.restype = c_void_p
++PQexec.argtypes = [ c_void_p, c_char_p ]
+ PQresultStatus = libpq.PQresultStatus
++PQresultStatus.restype = c_int
++PQresultStatus.argtypes = [ c_void_p ]
+ PQputCopyData = libpq.PQputCopyData
++PQputCopyData.restype = c_int
+ PQputCopyData.argtypes = [ c_void_p, c_void_p, c_int ]
+ PQputCopyEnd = libpq.PQputCopyEnd
++PQputCopyEnd.restype = c_int
+ PQputCopyEnd.argtypes = [ c_void_p, c_void_p ]
+ 
+ sys.path.append(os.environ['PERF_EXEC_PATH'] + \
+diff --git a/tools/perf/scripts/python/export-to-sqlite.py b/tools/perf/scripts/python/export-to-sqlite.py
+index f827bf77e9d2..e4bb82c8aba9 100644
+--- a/tools/perf/scripts/python/export-to-sqlite.py
++++ b/tools/perf/scripts/python/export-to-sqlite.py
+@@ -440,7 +440,11 @@ def branch_type_table(*x):
+ 
+ def sample_table(*x):
+ 	if branches:
+-		bind_exec(sample_query, 18, x)
++		for xx in x[0:15]:
++			sample_query.addBindValue(str(xx))
++		for xx in x[19:22]:
++			sample_query.addBindValue(str(xx))
++		do_query_(sample_query)
+ 	else:
+ 		bind_exec(sample_query, 22, x)
+ 
+diff --git a/tools/testing/selftests/android/Makefile b/tools/testing/selftests/android/Makefile
+index 72c25a3cb658..d9a725478375 100644
+--- a/tools/testing/selftests/android/Makefile
++++ b/tools/testing/selftests/android/Makefile
+@@ -6,7 +6,7 @@ TEST_PROGS := run.sh
+ 
+ include ../lib.mk
+ 
+-all:
++all: khdr
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+diff --git a/tools/testing/selftests/android/config b/tools/testing/selftests/android/config
+new file mode 100644
+index 000000000000..b4ad748a9dd9
+--- /dev/null
++++ b/tools/testing/selftests/android/config
+@@ -0,0 +1,5 @@
++CONFIG_ANDROID=y
++CONFIG_STAGING=y
++CONFIG_ION=y
++CONFIG_ION_SYSTEM_HEAP=y
++CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/android/ion/Makefile b/tools/testing/selftests/android/ion/Makefile
+index e03695287f76..88cfe88e466f 100644
+--- a/tools/testing/selftests/android/ion/Makefile
++++ b/tools/testing/selftests/android/ion/Makefile
+@@ -10,6 +10,8 @@ $(TEST_GEN_FILES): ipcsocket.c ionutils.c
+ 
+ TEST_PROGS := ion_test.sh
+ 
++KSFT_KHDR_INSTALL := 1
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(OUTPUT)/ionapp_export: ionapp_export.c ipcsocket.c ionutils.c
+diff --git a/tools/testing/selftests/android/ion/config b/tools/testing/selftests/android/ion/config
+deleted file mode 100644
+index b4ad748a9dd9..000000000000
+--- a/tools/testing/selftests/android/ion/config
++++ /dev/null
+@@ -1,5 +0,0 @@
+-CONFIG_ANDROID=y
+-CONFIG_STAGING=y
+-CONFIG_ION=y
+-CONFIG_ION_SYSTEM_HEAP=y
+-CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 1e9e3c470561..8b644ea39725 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -89,17 +89,28 @@ int cg_read(const char *cgroup, const char *control, char *buf, size_t len)
+ int cg_read_strcmp(const char *cgroup, const char *control,
+ 		   const char *expected)
+ {
+-	size_t size = strlen(expected) + 1;
++	size_t size;
+ 	char *buf;
++	int ret;
++
++	/* Handle the case of comparing against empty string */
++	if (!expected)
++		size = 32;
++	else
++		size = strlen(expected) + 1;
+ 
+ 	buf = malloc(size);
+ 	if (!buf)
+ 		return -1;
+ 
+-	if (cg_read(cgroup, control, buf, size))
++	if (cg_read(cgroup, control, buf, size)) {
++		free(buf);
+ 		return -1;
++	}
+ 
+-	return strcmp(expected, buf);
++	ret = strcmp(expected, buf);
++	free(buf);
++	return ret;
+ }
+ 
+ int cg_read_strstr(const char *cgroup, const char *control, const char *needle)
+diff --git a/tools/testing/selftests/efivarfs/config b/tools/testing/selftests/efivarfs/config
+new file mode 100644
+index 000000000000..4e151f1005b2
+--- /dev/null
++++ b/tools/testing/selftests/efivarfs/config
+@@ -0,0 +1 @@
++CONFIG_EFIVAR_FS=y
+diff --git a/tools/testing/selftests/futex/functional/Makefile b/tools/testing/selftests/futex/functional/Makefile
+index ff8feca49746..ad1eeb14fda7 100644
+--- a/tools/testing/selftests/futex/functional/Makefile
++++ b/tools/testing/selftests/futex/functional/Makefile
+@@ -18,6 +18,7 @@ TEST_GEN_FILES := \
+ 
+ TEST_PROGS := run.sh
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(TEST_GEN_FILES): $(HEADERS)
+diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
+index 1bbb47565c55..4665cdbf1a8d 100644
+--- a/tools/testing/selftests/gpio/Makefile
++++ b/tools/testing/selftests/gpio/Makefile
+@@ -21,11 +21,8 @@ endef
+ CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/
+ LDLIBS += -lmount -I/usr/include/libmount
+ 
+-$(BINARIES): ../../../gpio/gpio-utils.o ../../../../usr/include/linux/gpio.h
++$(BINARIES):| khdr
++$(BINARIES): ../../../gpio/gpio-utils.o
+ 
+ ../../../gpio/gpio-utils.o:
+ 	make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C ../../../gpio
+-
+-../../../../usr/include/linux/gpio.h:
+-	make -C ../../../.. headers_install INSTALL_HDR_PATH=$(shell pwd)/../../../../usr/
+-
+diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
+index 15e6b75fc3a5..a3edb2c8e43d 100644
+--- a/tools/testing/selftests/kselftest.h
++++ b/tools/testing/selftests/kselftest.h
+@@ -19,7 +19,6 @@
+ #define KSFT_FAIL  1
+ #define KSFT_XFAIL 2
+ #define KSFT_XPASS 3
+-/* Treat skip as pass */
+ #define KSFT_SKIP  4
+ 
+ /* counters */
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index d9d00319b07c..bcb69380bbab 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -32,9 +32,6 @@ $(LIBKVM_OBJ): $(OUTPUT)/%.o: %.c
+ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
+ 	$(AR) crs $@ $^
+ 
+-$(LINUX_HDR_PATH):
+-	make -C $(top_srcdir) headers_install
+-
+-all: $(STATIC_LIBS) $(LINUX_HDR_PATH)
++all: $(STATIC_LIBS)
+ $(TEST_GEN_PROGS): $(STATIC_LIBS)
+-$(TEST_GEN_PROGS) $(LIBKVM_OBJ): | $(LINUX_HDR_PATH)
++$(STATIC_LIBS):| khdr
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index 17ab36605a8e..0a8e75886224 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -16,8 +16,20 @@ TEST_GEN_PROGS := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS))
+ TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED))
+ TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES))
+ 
++top_srcdir ?= ../../../..
++include $(top_srcdir)/scripts/subarch.include
++ARCH		?= $(SUBARCH)
++
+ all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES)
+ 
++.PHONY: khdr
++khdr:
++	make ARCH=$(ARCH) -C $(top_srcdir) headers_install
++
++ifdef KSFT_KHDR_INSTALL
++$(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES):| khdr
++endif
++
+ .ONESHELL:
+ define RUN_TEST_PRINT_RESULT
+ 	TEST_HDR_MSG="selftests: "`basename $$PWD`:" $$BASENAME_TEST";	\
+diff --git a/tools/testing/selftests/memory-hotplug/config b/tools/testing/selftests/memory-hotplug/config
+index 2fde30191a47..a7e8cd5bb265 100644
+--- a/tools/testing/selftests/memory-hotplug/config
++++ b/tools/testing/selftests/memory-hotplug/config
+@@ -2,3 +2,4 @@ CONFIG_MEMORY_HOTPLUG=y
+ CONFIG_MEMORY_HOTPLUG_SPARSE=y
+ CONFIG_NOTIFIER_ERROR_INJECTION=y
+ CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
++CONFIG_MEMORY_HOTREMOVE=y
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 663e11e85727..d515dabc6b0d 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -15,6 +15,7 @@ TEST_GEN_FILES += udpgso udpgso_bench_tx udpgso_bench_rx
+ TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+ TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict
+ 
++KSFT_KHDR_INSTALL := 1
+ include ../lib.mk
+ 
+ $(OUTPUT)/reuseport_bpf_numa: LDFLAGS += -lnuma
+diff --git a/tools/testing/selftests/networking/timestamping/Makefile b/tools/testing/selftests/networking/timestamping/Makefile
+index a728040edbe1..14cfcf006936 100644
+--- a/tools/testing/selftests/networking/timestamping/Makefile
++++ b/tools/testing/selftests/networking/timestamping/Makefile
+@@ -5,6 +5,7 @@ TEST_PROGS := hwtstamp_config rxtimestamp timestamping txtimestamp
+ 
+ all: $(TEST_PROGS)
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ clean:
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index fdefa2295ddc..58759454b1d0 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -25,10 +25,6 @@ TEST_PROGS := run_vmtests
+ 
+ include ../lib.mk
+ 
+-$(OUTPUT)/userfaultfd: ../../../../usr/include/linux/kernel.h
+ $(OUTPUT)/userfaultfd: LDLIBS += -lpthread
+ 
+ $(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+-
+-../../../../usr/include/linux/kernel.h:
+-	make -C ../../../.. headers_install


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     35ca78d07fc1667111b9fca7ab8c27f15f391989
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 10 11:16:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:40 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=35ca78d0

Linux patch 4.18.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-4.18.13.patch | 7273 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7277 insertions(+)

diff --git a/0000_README b/0000_README
index ff87445..f5bb594 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-4.18.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.12
 
+Patch:  1012_linux-4.18.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-4.18.13.patch b/1012_linux-4.18.13.patch
new file mode 100644
index 0000000..6c8e751
--- /dev/null
+++ b/1012_linux-4.18.13.patch
@@ -0,0 +1,7273 @@
+diff --git a/Documentation/devicetree/bindings/net/sh_eth.txt b/Documentation/devicetree/bindings/net/sh_eth.txt
+index 82a4cf2c145d..a62fe3b613fc 100644
+--- a/Documentation/devicetree/bindings/net/sh_eth.txt
++++ b/Documentation/devicetree/bindings/net/sh_eth.txt
+@@ -16,6 +16,7 @@ Required properties:
+ 	      "renesas,ether-r8a7794"  if the device is a part of R8A7794 SoC.
+ 	      "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC.
+ 	      "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC.
++	      "renesas,ether-r7s9210" if the device is a part of R7S9210 SoC.
+ 	      "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device.
+ 	      "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1
+ 	                                device.
+diff --git a/Makefile b/Makefile
+index 466e07af8473..4442e9ea4b6d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 11859287c52a..c98b59ac0612 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+ 	"1:	llock   %[orig], [%[ctr]]		\n"		\
+ 	"	" #asm_op " %[val], %[orig], %[i]	\n"		\
+ 	"	scond   %[val], [%[ctr]]		\n"		\
+-	"						\n"		\
++	"	bnz     1b				\n"		\
+ 	: [val]	"=&r"	(val),						\
+ 	  [orig] "=&r" (orig)						\
+ 	: [ctr]	"r"	(&v->counter),					\
+diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
+index 1b5e0e843c3a..7e2b3e360086 100644
+--- a/arch/arm64/include/asm/jump_label.h
++++ b/arch/arm64/include/asm/jump_label.h
+@@ -28,7 +28,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm goto("1: nop\n\t"
++	asm_volatile_goto("1: nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+@@ -42,7 +42,7 @@ l_yes:
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm goto("1: b %l[l_yes]\n\t"
++	asm_volatile_goto("1: b %l[l_yes]\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+diff --git a/arch/hexagon/include/asm/bitops.h b/arch/hexagon/include/asm/bitops.h
+index 5e4a59b3ec1b..2691a1857d20 100644
+--- a/arch/hexagon/include/asm/bitops.h
++++ b/arch/hexagon/include/asm/bitops.h
+@@ -211,7 +211,7 @@ static inline long ffz(int x)
+  * This is defined the same way as ffs.
+  * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32.
+  */
+-static inline long fls(int x)
++static inline int fls(int x)
+ {
+ 	int r;
+ 
+@@ -232,7 +232,7 @@ static inline long fls(int x)
+  * the libc and compiler builtin ffs routines, therefore
+  * differs in spirit from the above ffz (man ffs).
+  */
+-static inline long ffs(int x)
++static inline int ffs(int x)
+ {
+ 	int r;
+ 
+diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
+index 77459df34e2e..7ebe7ad19d15 100644
+--- a/arch/hexagon/kernel/dma.c
++++ b/arch/hexagon/kernel/dma.c
+@@ -60,7 +60,7 @@ static void *hexagon_dma_alloc_coherent(struct device *dev, size_t size,
+ 			panic("Can't create %s() memory pool!", __func__);
+ 		else
+ 			gen_pool_add(coherent_pool,
+-				pfn_to_virt(max_low_pfn),
++				(unsigned long)pfn_to_virt(max_low_pfn),
+ 				hexagon_coherent_pool_size, -1);
+ 	}
+ 
+diff --git a/arch/nds32/include/asm/elf.h b/arch/nds32/include/asm/elf.h
+index 56c479058802..f5f9cf7e0544 100644
+--- a/arch/nds32/include/asm/elf.h
++++ b/arch/nds32/include/asm/elf.h
+@@ -121,9 +121,9 @@ struct elf32_hdr;
+  */
+ #define ELF_CLASS	ELFCLASS32
+ #ifdef __NDS32_EB__
+-#define ELF_DATA	ELFDATA2MSB;
++#define ELF_DATA	ELFDATA2MSB
+ #else
+-#define ELF_DATA	ELFDATA2LSB;
++#define ELF_DATA	ELFDATA2LSB
+ #endif
+ #define ELF_ARCH	EM_NDS32
+ #define USE_ELF_CORE_DUMP
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index 18a009f3804d..3f771e0595e8 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -78,8 +78,9 @@ static inline void set_fs(mm_segment_t fs)
+ #define get_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_READ,  p, sizeof(*p)))) {		\
+-		__e = __get_user(x,p);					\
++	const __typeof__(*(p)) __user *__p = (p);			\
++	if(likely(access_ok(VERIFY_READ, __p, sizeof(*__p)))) {		\
++		__e = __get_user(x, __p);				\
+ 	} else								\
+ 		x = 0;							\
+ 	__e;								\
+@@ -99,10 +100,10 @@ static inline void set_fs(mm_segment_t fs)
+ 
+ #define __get_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __gu_addr = (unsigned long)(ptr);			\
++	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	unsigned long __gu_val;						\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__chk_user_ptr(__gu_addr);					\
++	switch (sizeof(*(__gu_addr))) {					\
+ 	case 1:								\
+ 		__get_user_asm("lbi",__gu_val,__gu_addr,err);		\
+ 		break;							\
+@@ -119,7 +120,7 @@ do {									\
+ 		BUILD_BUG(); 						\
+ 		break;							\
+ 	}								\
+-	(x) = (__typeof__(*(ptr)))__gu_val;				\
++	(x) = (__typeof__(*(__gu_addr)))__gu_val;			\
+ } while (0)
+ 
+ #define __get_user_asm(inst,x,addr,err)					\
+@@ -169,8 +170,9 @@ do {									\
+ #define put_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_WRITE,  p, sizeof(*p)))) {		\
+-		__e = __put_user(x,p);					\
++	__typeof__(*(p)) __user *__p = (p);				\
++	if(likely(access_ok(VERIFY_WRITE, __p, sizeof(*__p)))) {	\
++		__e = __put_user(x, __p);				\
+ 	}								\
+ 	__e;								\
+ })
+@@ -189,10 +191,10 @@ do {									\
+ 
+ #define __put_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __pu_addr = (unsigned long)(ptr);			\
+-	__typeof__(*(ptr)) __pu_val = (x);				\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__typeof__(*(ptr)) __user *__pu_addr = (ptr);			\
++	__typeof__(*(__pu_addr)) __pu_val = (x);			\
++	__chk_user_ptr(__pu_addr);					\
++	switch (sizeof(*(__pu_addr))) {					\
+ 	case 1:								\
+ 		__put_user_asm("sbi",__pu_val,__pu_addr,err);		\
+ 		break;							\
+diff --git a/arch/nds32/kernel/atl2c.c b/arch/nds32/kernel/atl2c.c
+index 0c6d031a1c4a..0c5386e72098 100644
+--- a/arch/nds32/kernel/atl2c.c
++++ b/arch/nds32/kernel/atl2c.c
+@@ -9,7 +9,8 @@
+ 
+ void __iomem *atl2c_base;
+ static const struct of_device_id atl2c_ids[] __initconst = {
+-	{.compatible = "andestech,atl2c",}
++	{.compatible = "andestech,atl2c",},
++	{}
+ };
+ 
+ static int __init atl2c_of_init(void)
+diff --git a/arch/nds32/kernel/module.c b/arch/nds32/kernel/module.c
+index 4167283d8293..1e31829cbc2a 100644
+--- a/arch/nds32/kernel/module.c
++++ b/arch/nds32/kernel/module.c
+@@ -40,7 +40,7 @@ void do_reloc16(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+@@ -70,7 +70,7 @@ void do_reloc32(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c
+index a6205fd4db52..f0e974347c26 100644
+--- a/arch/nds32/kernel/traps.c
++++ b/arch/nds32/kernel/traps.c
+@@ -137,7 +137,7 @@ static void __dump(struct task_struct *tsk, unsigned long *base_reg)
+ 		       !((unsigned long)base_reg & 0x3) &&
+ 		       ((unsigned long)base_reg >= TASK_SIZE)) {
+ 			unsigned long next_fp;
+-#if !defined(NDS32_ABI_2)
++#if !defined(__NDS32_ABI_2)
+ 			ret_addr = base_reg[0];
+ 			next_fp = base_reg[1];
+ #else
+diff --git a/arch/nds32/kernel/vmlinux.lds.S b/arch/nds32/kernel/vmlinux.lds.S
+index 288313b886ef..9e90f30a181d 100644
+--- a/arch/nds32/kernel/vmlinux.lds.S
++++ b/arch/nds32/kernel/vmlinux.lds.S
+@@ -13,14 +13,26 @@ OUTPUT_ARCH(nds32)
+ ENTRY(_stext_lma)
+ jiffies = jiffies_64;
+ 
++#if defined(CONFIG_GCOV_KERNEL)
++#define NDS32_EXIT_KEEP(x)	x
++#else
++#define NDS32_EXIT_KEEP(x)
++#endif
++
+ SECTIONS
+ {
+ 	_stext_lma = TEXTADDR - LOAD_OFFSET;
+ 	. = TEXTADDR;
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
++	.exit.text : {
++		NDS32_EXIT_KEEP(EXIT_TEXT)
++	}
+ 	INIT_TEXT_SECTION(PAGE_SIZE)
+ 	INIT_DATA_SECTION(16)
++	.exit.data : {
++		NDS32_EXIT_KEEP(EXIT_DATA)
++	}
+ 	PERCPU_SECTION(L1_CACHE_BYTES)
+ 	__init_end = .;
+ 
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index 7f3a8cf5d66f..4c08f42f6406 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -359,7 +359,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
+ 	unsigned long pp, key;
+ 	unsigned long v, orig_v, gr;
+ 	__be64 *hptep;
+-	int index;
++	long int index;
+ 	int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR);
+ 
+ 	if (kvm_is_radix(vcpu->kvm))
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index f0d2070866d4..0efa5b29d0a3 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -64,15 +64,8 @@ atomic_t hart_lottery;
+ #ifdef CONFIG_BLK_DEV_INITRD
+ static void __init setup_initrd(void)
+ {
+-	extern char __initramfs_start[];
+-	extern unsigned long __initramfs_size;
+ 	unsigned long size;
+ 
+-	if (__initramfs_size > 0) {
+-		initrd_start = (unsigned long)(&__initramfs_start);
+-		initrd_end = initrd_start + __initramfs_size;
+-	}
+-
+ 	if (initrd_start >= initrd_end) {
+ 		printk(KERN_INFO "initrd not found or empty");
+ 		goto disable;
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index a4170048a30b..17fbd07e4245 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1250,4 +1250,8 @@ void intel_pmu_lbr_init_knl(void)
+ 
+ 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ 	x86_pmu.lbr_sel_map  = snb_lbr_sel_map;
++
++	/* Knights Landing does have MISPREDICT bit */
++	if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_LIP)
++		x86_pmu.intel_cap.lbr_format = LBR_FORMAT_EIP_FLAGS;
+ }
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index ec00d1ff5098..f7151cd03cb0 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -1640,6 +1640,7 @@ static int do_open(struct inode *inode, struct file *filp)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ static int proc_apm_show(struct seq_file *m, void *v)
+ {
+ 	unsigned short	bx;
+@@ -1719,6 +1720,7 @@ static int proc_apm_show(struct seq_file *m, void *v)
+ 		   units);
+ 	return 0;
+ }
++#endif
+ 
+ static int apm(void *unused)
+ {
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index eb85cb87c40f..ec868373b11b 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -307,28 +307,11 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
+ 	}
+ }
+ 
+-static void blkg_pd_offline(struct blkcg_gq *blkg)
+-{
+-	int i;
+-
+-	lockdep_assert_held(blkg->q->queue_lock);
+-	lockdep_assert_held(&blkg->blkcg->lock);
+-
+-	for (i = 0; i < BLKCG_MAX_POLS; i++) {
+-		struct blkcg_policy *pol = blkcg_policy[i];
+-
+-		if (blkg->pd[i] && !blkg->pd[i]->offline &&
+-		    pol->pd_offline_fn) {
+-			pol->pd_offline_fn(blkg->pd[i]);
+-			blkg->pd[i]->offline = true;
+-		}
+-	}
+-}
+-
+ static void blkg_destroy(struct blkcg_gq *blkg)
+ {
+ 	struct blkcg *blkcg = blkg->blkcg;
+ 	struct blkcg_gq *parent = blkg->parent;
++	int i;
+ 
+ 	lockdep_assert_held(blkg->q->queue_lock);
+ 	lockdep_assert_held(&blkcg->lock);
+@@ -337,6 +320,13 @@ static void blkg_destroy(struct blkcg_gq *blkg)
+ 	WARN_ON_ONCE(list_empty(&blkg->q_node));
+ 	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
+ 
++	for (i = 0; i < BLKCG_MAX_POLS; i++) {
++		struct blkcg_policy *pol = blkcg_policy[i];
++
++		if (blkg->pd[i] && pol->pd_offline_fn)
++			pol->pd_offline_fn(blkg->pd[i]);
++	}
++
+ 	if (parent) {
+ 		blkg_rwstat_add_aux(&parent->stat_bytes, &blkg->stat_bytes);
+ 		blkg_rwstat_add_aux(&parent->stat_ios, &blkg->stat_ios);
+@@ -379,7 +369,6 @@ static void blkg_destroy_all(struct request_queue *q)
+ 		struct blkcg *blkcg = blkg->blkcg;
+ 
+ 		spin_lock(&blkcg->lock);
+-		blkg_pd_offline(blkg);
+ 		blkg_destroy(blkg);
+ 		spin_unlock(&blkcg->lock);
+ 	}
+@@ -1006,54 +995,21 @@ static struct cftype blkcg_legacy_files[] = {
+  * @css: css of interest
+  *
+  * This function is called when @css is about to go away and responsible
+- * for offlining all blkgs pd and killing all wbs associated with @css.
+- * blkgs pd offline should be done while holding both q and blkcg locks.
+- * As blkcg lock is nested inside q lock, this function performs reverse
+- * double lock dancing.
++ * for shooting down all blkgs associated with @css.  blkgs should be
++ * removed while holding both q and blkcg locks.  As blkcg lock is nested
++ * inside q lock, this function performs reverse double lock dancing.
+  *
+  * This is the blkcg counterpart of ioc_release_fn().
+  */
+ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+ {
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+-	struct blkcg_gq *blkg;
+ 
+ 	spin_lock_irq(&blkcg->lock);
+ 
+-	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+-		struct request_queue *q = blkg->q;
+-
+-		if (spin_trylock(q->queue_lock)) {
+-			blkg_pd_offline(blkg);
+-			spin_unlock(q->queue_lock);
+-		} else {
+-			spin_unlock_irq(&blkcg->lock);
+-			cpu_relax();
+-			spin_lock_irq(&blkcg->lock);
+-		}
+-	}
+-
+-	spin_unlock_irq(&blkcg->lock);
+-
+-	wb_blkcg_offline(blkcg);
+-}
+-
+-/**
+- * blkcg_destroy_all_blkgs - destroy all blkgs associated with a blkcg
+- * @blkcg: blkcg of interest
+- *
+- * This function is called when blkcg css is about to free and responsible for
+- * destroying all blkgs associated with @blkcg.
+- * blkgs should be removed while holding both q and blkcg locks. As blkcg lock
+- * is nested inside q lock, this function performs reverse double lock dancing.
+- */
+-static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+-{
+-	spin_lock_irq(&blkcg->lock);
+ 	while (!hlist_empty(&blkcg->blkg_list)) {
+ 		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
+-						    struct blkcg_gq,
+-						    blkcg_node);
++						struct blkcg_gq, blkcg_node);
+ 		struct request_queue *q = blkg->q;
+ 
+ 		if (spin_trylock(q->queue_lock)) {
+@@ -1065,7 +1021,10 @@ static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+ 			spin_lock_irq(&blkcg->lock);
+ 		}
+ 	}
++
+ 	spin_unlock_irq(&blkcg->lock);
++
++	wb_blkcg_offline(blkcg);
+ }
+ 
+ static void blkcg_css_free(struct cgroup_subsys_state *css)
+@@ -1073,8 +1032,6 @@ static void blkcg_css_free(struct cgroup_subsys_state *css)
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+ 	int i;
+ 
+-	blkcg_destroy_all_blkgs(blkcg);
+-
+ 	mutex_lock(&blkcg_pol_mutex);
+ 
+ 	list_del(&blkcg->all_blkcgs_node);
+@@ -1412,11 +1369,8 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ 		if (blkg->pd[pol->plid]) {
+-			if (!blkg->pd[pol->plid]->offline &&
+-			    pol->pd_offline_fn) {
++			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+-				blkg->pd[pol->plid]->offline = true;
+-			}
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 22a2bc5f25ce..99bf0c0394f8 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -7403,4 +7403,4 @@ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
+ EXPORT_SYMBOL_GPL(ata_host_get);
+-EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
++EXPORT_SYMBOL_GPL(ata_host_put);
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 0943e7065e0e..8e9213b36e31 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -209,22 +209,28 @@ static struct fw_priv *__lookup_fw_priv(const char *fw_name)
+ static int alloc_lookup_fw_priv(const char *fw_name,
+ 				struct firmware_cache *fwc,
+ 				struct fw_priv **fw_priv, void *dbuf,
+-				size_t size)
++				size_t size, enum fw_opt opt_flags)
+ {
+ 	struct fw_priv *tmp;
+ 
+ 	spin_lock(&fwc->lock);
+-	tmp = __lookup_fw_priv(fw_name);
+-	if (tmp) {
+-		kref_get(&tmp->ref);
+-		spin_unlock(&fwc->lock);
+-		*fw_priv = tmp;
+-		pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
+-		return 1;
++	if (!(opt_flags & FW_OPT_NOCACHE)) {
++		tmp = __lookup_fw_priv(fw_name);
++		if (tmp) {
++			kref_get(&tmp->ref);
++			spin_unlock(&fwc->lock);
++			*fw_priv = tmp;
++			pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
++			return 1;
++		}
+ 	}
++
+ 	tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size);
+-	if (tmp)
+-		list_add(&tmp->list, &fwc->head);
++	if (tmp) {
++		INIT_LIST_HEAD(&tmp->list);
++		if (!(opt_flags & FW_OPT_NOCACHE))
++			list_add(&tmp->list, &fwc->head);
++	}
+ 	spin_unlock(&fwc->lock);
+ 
+ 	*fw_priv = tmp;
+@@ -493,7 +499,8 @@ int assign_fw(struct firmware *fw, struct device *device,
+  */
+ static int
+ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+-			  struct device *device, void *dbuf, size_t size)
++			  struct device *device, void *dbuf, size_t size,
++			  enum fw_opt opt_flags)
+ {
+ 	struct firmware *firmware;
+ 	struct fw_priv *fw_priv;
+@@ -511,7 +518,8 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+ 		return 0; /* assigned */
+ 	}
+ 
+-	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size);
++	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size,
++				  opt_flags);
+ 
+ 	/*
+ 	 * bind with 'priv' now to avoid warning in failure path
+@@ -571,7 +579,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 		goto out;
+ 	}
+ 
+-	ret = _request_firmware_prepare(&fw, name, device, buf, size);
++	ret = _request_firmware_prepare(&fw, name, device, buf, size,
++					opt_flags);
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index efc9a7ae4857..35e81d7dd929 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -44,7 +44,7 @@ enum _msm8996_version {
+ 
+ struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
+ 
+-static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void)
++static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+ {
+ 	size_t len;
+ 	u32 *msm_id;
+@@ -221,7 +221,7 @@ static int __init qcom_cpufreq_kryo_init(void)
+ }
+ module_init(qcom_cpufreq_kryo_init);
+ 
+-static void __init qcom_cpufreq_kryo_exit(void)
++static void __exit qcom_cpufreq_kryo_exit(void)
+ {
+ 	platform_device_unregister(kryo_cpufreq_pdev);
+ 	platform_driver_unregister(&qcom_cpufreq_kryo_driver);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index d67667970f7e..ec40f991e6c6 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1553,8 +1553,8 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_TO_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+@@ -1757,8 +1757,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_FROM_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index b916c4eb608c..e5d2ac5aec40 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -367,7 +367,8 @@ static inline void dsgl_walk_init(struct dsgl_walk *walk,
+ 	walk->to = (struct phys_sge_pairs *)(dsgl + 1);
+ }
+ 
+-static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
++static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid,
++				 int pci_chan_id)
+ {
+ 	struct cpl_rx_phys_dsgl *phys_cpl;
+ 
+@@ -385,6 +386,7 @@ static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
+ 	phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR;
+ 	phys_cpl->rss_hdr_int.qid = htons(qid);
+ 	phys_cpl->rss_hdr_int.hash_val = 0;
++	phys_cpl->rss_hdr_int.channel = pci_chan_id;
+ }
+ 
+ static inline void dsgl_walk_add_page(struct dsgl_walk *walk,
+@@ -718,7 +720,7 @@ static inline void create_wreq(struct chcr_context *ctx,
+ 		FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+ 				!!lcb, ctx->tx_qidx);
+ 
+-	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
++	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->tx_chan_id,
+ 						       qid);
+ 	chcr_req->ulptx.len = htonl((DIV_ROUND_UP(len16, 16) -
+ 				     ((sizeof(chcr_req->wreq)) >> 4)));
+@@ -1339,16 +1341,23 @@ static int chcr_device_init(struct chcr_context *ctx)
+ 				    adap->vres.ncrypto_fc);
+ 		rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan;
+ 		txq_perchan = ntxq / u_ctx->lldi.nchan;
+-		rxq_idx = ctx->dev->tx_channel_id * rxq_perchan;
+-		rxq_idx += id % rxq_perchan;
+-		txq_idx = ctx->dev->tx_channel_id * txq_perchan;
+-		txq_idx += id % txq_perchan;
+ 		spin_lock(&ctx->dev->lock_chcr_dev);
+-		ctx->rx_qidx = rxq_idx;
+-		ctx->tx_qidx = txq_idx;
++		ctx->tx_chan_id = ctx->dev->tx_channel_id;
+ 		ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+ 		ctx->dev->rx_channel_id = 0;
+ 		spin_unlock(&ctx->dev->lock_chcr_dev);
++		rxq_idx = ctx->tx_chan_id * rxq_perchan;
++		rxq_idx += id % rxq_perchan;
++		txq_idx = ctx->tx_chan_id * txq_perchan;
++		txq_idx += id % txq_perchan;
++		ctx->rx_qidx = rxq_idx;
++		ctx->tx_qidx = txq_idx;
++		/* Channel Id used by SGE to forward packet to Host.
++		 * Same value should be used in cpl_fw6_pld RSS_CH field
++		 * by FW. Driver programs PCI channel ID to be used in fw
++		 * at the time of queue allocation with value "pi->tx_chan"
++		 */
++		ctx->pci_chan_id = txq_idx / txq_perchan;
+ 	}
+ out:
+ 	return err;
+@@ -2503,6 +2512,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
++	struct chcr_context *ctx = a_ctx(tfm);
+ 	u32 temp;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2512,7 +2522,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma);
+ 	temp = req->cryptlen + (reqctx->op ? -authsize : authsize);
+ 	dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, req->assoclen);
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_cipher_src_ent(struct ablkcipher_request *req,
+@@ -2544,6 +2554,8 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 			     unsigned short qid)
+ {
+ 	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req);
++	struct chcr_context *ctx = c_ctx(tfm);
+ 	struct dsgl_walk dsgl_walk;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2552,7 +2564,7 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 	reqctx->dstsg = dsgl_walk.last_sg;
+ 	reqctx->dst_ofst = dsgl_walk.last_sg_len;
+ 
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_hash_src_ent(struct ahash_request *req,
+diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
+index 54835cb109e5..0d2c70c344f3 100644
+--- a/drivers/crypto/chelsio/chcr_crypto.h
++++ b/drivers/crypto/chelsio/chcr_crypto.h
+@@ -255,6 +255,8 @@ struct chcr_context {
+ 	struct chcr_dev *dev;
+ 	unsigned char tx_qidx;
+ 	unsigned char rx_qidx;
++	unsigned char tx_chan_id;
++	unsigned char pci_chan_id;
+ 	struct __crypto_ctx crypto_ctx[0];
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index a10c418d4e5c..56bd28174f52 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -63,7 +63,7 @@ struct dcp {
+ 	struct dcp_coherent_block	*coh;
+ 
+ 	struct completion		completion[DCP_MAX_CHANS];
+-	struct mutex			mutex[DCP_MAX_CHANS];
++	spinlock_t			lock[DCP_MAX_CHANS];
+ 	struct task_struct		*thread[DCP_MAX_CHANS];
+ 	struct crypto_queue		queue[DCP_MAX_CHANS];
+ };
+@@ -349,13 +349,20 @@ static int dcp_chan_thread_aes(void *data)
+ 
+ 	int ret;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -363,11 +370,8 @@ static int dcp_chan_thread_aes(void *data)
+ 		if (arq) {
+ 			ret = mxs_dcp_aes_block_crypt(arq);
+ 			arq->complete(arq, ret);
+-			continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -409,9 +413,9 @@ static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb)
+ 	rctx->ecb = ecb;
+ 	actx->chan = DCP_CHAN_CRYPTO;
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 
+@@ -640,13 +644,20 @@ static int dcp_chan_thread_sha(void *data)
+ 	struct ahash_request *req;
+ 	int ret, fini;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -658,12 +669,8 @@ static int dcp_chan_thread_sha(void *data)
+ 			ret = dcp_sha_req_to_buf(arq);
+ 			fini = rctx->fini;
+ 			arq->complete(arq, ret);
+-			if (!fini)
+-				continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -721,9 +728,9 @@ static int dcp_sha_update_fx(struct ahash_request *req, int fini)
+ 		rctx->init = 1;
+ 	}
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 	mutex_unlock(&actx->mutex);
+@@ -997,7 +1004,7 @@ static int mxs_dcp_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, sdcp);
+ 
+ 	for (i = 0; i < DCP_MAX_CHANS; i++) {
+-		mutex_init(&sdcp->mutex[i]);
++		spin_lock_init(&sdcp->lock[i]);
+ 		init_completion(&sdcp->completion[i]);
+ 		crypto_init_queue(&sdcp->queue[i], 50);
+ 	}
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+index ba197f34c252..763c2166ee0e 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXX_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index 24ec908eb26c..613c7d5644ce 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXXIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c
+index 59a5a0df50b6..9cb832963357 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62x/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62X_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 1 : 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index b9f3e0e4fde9..278452b8ef81 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62XIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+index be5c5a988ca5..3a9708ef4ce2 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCC_PCI_DEVICE_ID:
+@@ -237,8 +238,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index 26ab17bfc6da..3da0f951cb59 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCCIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index 2a219b1261b1..49cb74f54a10 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -166,7 +166,13 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
+ 					le32_to_cpu(attr->sustained_freq_khz);
+ 		dom_info->sustained_perf_level =
+ 					le32_to_cpu(attr->sustained_perf_level);
+-		dom_info->mult_factor =	(dom_info->sustained_freq_khz * 1000) /
++		if (!dom_info->sustained_freq_khz ||
++		    !dom_info->sustained_perf_level)
++			/* CPUFreq converts to kHz, hence default 1000 */
++			dom_info->mult_factor =	1000;
++		else
++			dom_info->mult_factor =
++					(dom_info->sustained_freq_khz * 1000) /
+ 					dom_info->sustained_perf_level;
+ 		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+ 	}
+diff --git a/drivers/gpio/gpio-adp5588.c b/drivers/gpio/gpio-adp5588.c
+index 3530ccd17e04..da9781a2ef4a 100644
+--- a/drivers/gpio/gpio-adp5588.c
++++ b/drivers/gpio/gpio-adp5588.c
+@@ -41,6 +41,8 @@ struct adp5588_gpio {
+ 	uint8_t int_en[3];
+ 	uint8_t irq_mask[3];
+ 	uint8_t irq_stat[3];
++	uint8_t int_input_en[3];
++	uint8_t int_lvl_cached[3];
+ };
+ 
+ static int adp5588_gpio_read(struct i2c_client *client, u8 reg)
+@@ -173,12 +175,28 @@ static void adp5588_irq_bus_sync_unlock(struct irq_data *d)
+ 	struct adp5588_gpio *dev = irq_data_get_irq_chip_data(d);
+ 	int i;
+ 
+-	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++)
++	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) {
++		if (dev->int_input_en[i]) {
++			mutex_lock(&dev->lock);
++			dev->dir[i] &= ~dev->int_input_en[i];
++			dev->int_input_en[i] = 0;
++			adp5588_gpio_write(dev->client, GPIO_DIR1 + i,
++					   dev->dir[i]);
++			mutex_unlock(&dev->lock);
++		}
++
++		if (dev->int_lvl_cached[i] != dev->int_lvl[i]) {
++			dev->int_lvl_cached[i] = dev->int_lvl[i];
++			adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + i,
++					   dev->int_lvl[i]);
++		}
++
+ 		if (dev->int_en[i] ^ dev->irq_mask[i]) {
+ 			dev->int_en[i] = dev->irq_mask[i];
+ 			adp5588_gpio_write(dev->client, GPIO_INT_EN1 + i,
+ 					   dev->int_en[i]);
+ 		}
++	}
+ 
+ 	mutex_unlock(&dev->irq_lock);
+ }
+@@ -221,9 +239,7 @@ static int adp5588_irq_set_type(struct irq_data *d, unsigned int type)
+ 	else
+ 		return -EINVAL;
+ 
+-	adp5588_gpio_direction_input(&dev->gpio_chip, gpio);
+-	adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + bank,
+-			   dev->int_lvl[bank]);
++	dev->int_input_en[bank] |= bit;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 7a2de3de6571..5b12d6fdd448 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -726,6 +726,7 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
+ out_unregister:
+ 	dwapb_gpio_unregister(gpio);
+ 	dwapb_irq_teardown(gpio);
++	clk_disable_unprepare(gpio->clk);
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index addd9fecc198..a3e43cacd78e 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -25,7 +25,6 @@
+ 
+ struct acpi_gpio_event {
+ 	struct list_head node;
+-	struct list_head initial_sync_list;
+ 	acpi_handle handle;
+ 	unsigned int pin;
+ 	unsigned int irq;
+@@ -49,10 +48,19 @@ struct acpi_gpio_chip {
+ 	struct mutex conn_lock;
+ 	struct gpio_chip *chip;
+ 	struct list_head events;
++	struct list_head deferred_req_irqs_list_entry;
+ };
+ 
+-static LIST_HEAD(acpi_gpio_initial_sync_list);
+-static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock);
++/*
++ * For gpiochips which call acpi_gpiochip_request_interrupts() before late_init
++ * (so builtin drivers) we register the ACPI GpioInt event handlers from a
++ * late_initcall_sync handler, so that other builtin drivers can register their
++ * OpRegions before the event handlers can run.  This list contains gpiochips
++ * for which the acpi_gpiochip_request_interrupts() has been deferred.
++ */
++static DEFINE_MUTEX(acpi_gpio_deferred_req_irqs_lock);
++static LIST_HEAD(acpi_gpio_deferred_req_irqs_list);
++static bool acpi_gpio_deferred_req_irqs_done;
+ 
+ static int acpi_gpiochip_find(struct gpio_chip *gc, void *data)
+ {
+@@ -89,21 +97,6 @@ static struct gpio_desc *acpi_get_gpiod(char *path, int pin)
+ 	return gpiochip_get_desc(chip, pin);
+ }
+ 
+-static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+-static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	if (!list_empty(&event->initial_sync_list))
+-		list_del_init(&event->initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+ static irqreturn_t acpi_gpio_irq_handler(int irq, void *data)
+ {
+ 	struct acpi_gpio_event *event = data;
+@@ -186,7 +179,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 
+ 	gpiod_direction_input(desc);
+ 
+-	value = gpiod_get_value(desc);
++	value = gpiod_get_value_cansleep(desc);
+ 
+ 	ret = gpiochip_lock_as_irq(chip, pin);
+ 	if (ret) {
+@@ -229,7 +222,6 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	event->irq = irq;
+ 	event->pin = pin;
+ 	event->desc = desc;
+-	INIT_LIST_HEAD(&event->initial_sync_list);
+ 
+ 	ret = request_threaded_irq(event->irq, NULL, handler, irqflags,
+ 				   "ACPI:Event", event);
+@@ -251,10 +243,9 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	 * may refer to OperationRegions from other (builtin) drivers which
+ 	 * may be probed after us.
+ 	 */
+-	if (handler == acpi_gpio_irq_handler &&
+-	    (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
+-	     ((irqflags & IRQF_TRIGGER_FALLING) && value == 0)))
+-		acpi_gpio_add_to_initial_sync_list(event);
++	if (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
++	    ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))
++		handler(event->irq, event);
+ 
+ 	return AE_OK;
+ 
+@@ -283,6 +274,7 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	struct acpi_gpio_chip *acpi_gpio;
+ 	acpi_handle handle;
+ 	acpi_status status;
++	bool defer;
+ 
+ 	if (!chip->parent || !chip->to_irq)
+ 		return;
+@@ -295,6 +287,16 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	defer = !acpi_gpio_deferred_req_irqs_done;
++	if (defer)
++		list_add(&acpi_gpio->deferred_req_irqs_list_entry,
++			 &acpi_gpio_deferred_req_irqs_list);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
++	if (defer)
++		return;
++
+ 	acpi_walk_resources(handle, "_AEI",
+ 			    acpi_gpiochip_request_interrupt, acpi_gpio);
+ }
+@@ -325,11 +327,14 @@ void acpi_gpiochip_free_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	if (!list_empty(&acpi_gpio->deferred_req_irqs_list_entry))
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
+ 	list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) {
+ 		struct gpio_desc *desc;
+ 
+-		acpi_gpio_del_from_initial_sync_list(event);
+-
+ 		if (irqd_is_wakeup_set(irq_get_irq_data(event->irq)))
+ 			disable_irq_wake(event->irq);
+ 
+@@ -1049,6 +1054,7 @@ void acpi_gpiochip_add(struct gpio_chip *chip)
+ 
+ 	acpi_gpio->chip = chip;
+ 	INIT_LIST_HEAD(&acpi_gpio->events);
++	INIT_LIST_HEAD(&acpi_gpio->deferred_req_irqs_list_entry);
+ 
+ 	status = acpi_attach_data(handle, acpi_gpio_chip_dh, acpi_gpio);
+ 	if (ACPI_FAILURE(status)) {
+@@ -1195,20 +1201,28 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
+ 	return con_id == NULL;
+ }
+ 
+-/* Sync the initial state of handlers after all builtin drivers have probed */
+-static int acpi_gpio_initial_sync(void)
++/* Run deferred acpi_gpiochip_request_interrupts() */
++static int acpi_gpio_handle_deferred_request_interrupts(void)
+ {
+-	struct acpi_gpio_event *event, *ep;
++	struct acpi_gpio_chip *acpi_gpio, *tmp;
++
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	list_for_each_entry_safe(acpi_gpio, tmp,
++				 &acpi_gpio_deferred_req_irqs_list,
++				 deferred_req_irqs_list_entry) {
++		acpi_handle handle;
+ 
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list,
+-				 initial_sync_list) {
+-		acpi_evaluate_object(event->handle, NULL, NULL, NULL);
+-		list_del_init(&event->initial_sync_list);
++		handle = ACPI_HANDLE(acpi_gpio->chip->parent);
++		acpi_walk_resources(handle, "_AEI",
++				    acpi_gpiochip_request_interrupt, acpi_gpio);
++
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
+ 	}
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
++
++	acpi_gpio_deferred_req_irqs_done = true;
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
+ 
+ 	return 0;
+ }
+ /* We must use _sync so that this runs after the first deferred_probe run */
+-late_initcall_sync(acpi_gpio_initial_sync);
++late_initcall_sync(acpi_gpio_handle_deferred_request_interrupts);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 53a14ee8ad6d..a704d2e74421 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -31,6 +31,7 @@ static int of_gpiochip_match_node_and_xlate(struct gpio_chip *chip, void *data)
+ 	struct of_phandle_args *gpiospec = data;
+ 
+ 	return chip->gpiodev->dev.of_node == gpiospec->np &&
++				chip->of_xlate &&
+ 				chip->of_xlate(chip, gpiospec, NULL) >= 0;
+ }
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index e11a3bb03820..06dce16e22bb 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -565,7 +565,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 		if (ret)
+ 			goto out_free_descs;
+ 		lh->descs[i] = desc;
+-		count = i;
++		count = i + 1;
+ 
+ 		if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW)
+ 			set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 7200eea4f918..d9d8964a6e97 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -38,6 +38,7 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ {
+ 	struct drm_gem_object *gobj;
+ 	unsigned long size;
++	int r;
+ 
+ 	gobj = drm_gem_object_lookup(p->filp, data->handle);
+ 	if (gobj == NULL)
+@@ -49,20 +50,26 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ 	p->uf_entry.tv.shared = true;
+ 	p->uf_entry.user_pages = NULL;
+ 
+-	size = amdgpu_bo_size(p->uf_entry.robj);
+-	if (size != PAGE_SIZE || (data->offset + 8) > size)
+-		return -EINVAL;
+-
+-	*offset = data->offset;
+-
+ 	drm_gem_object_put_unlocked(gobj);
+ 
++	size = amdgpu_bo_size(p->uf_entry.robj);
++	if (size != PAGE_SIZE || (data->offset + 8) > size) {
++		r = -EINVAL;
++		goto error_unref;
++	}
++
+ 	if (amdgpu_ttm_tt_get_usermm(p->uf_entry.robj->tbo.ttm)) {
+-		amdgpu_bo_unref(&p->uf_entry.robj);
+-		return -EINVAL;
++		r = -EINVAL;
++		goto error_unref;
+ 	}
+ 
++	*offset = data->offset;
++
+ 	return 0;
++
++error_unref:
++	amdgpu_bo_unref(&p->uf_entry.robj);
++	return r;
+ }
+ 
+ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index ca53b3fba422..3e3e4e907ee5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -67,6 +67,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100),
+@@ -78,7 +79,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_vg10[] = {
+@@ -106,7 +108,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] =
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_4_2[] =
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 77779adeef28..f8e866ceda02 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = sclk_table->entries[i].clk;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = mclk_table->entries[i].clk;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 0adfc5392cd3..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
++			clocks->clock[i] = data->sys_info.display_clock[i];
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk * 10;
++			clocks->clock[i] = table->entries[i].clk;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c2ebe5da34d0..89225adaa60a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -230,7 +230,7 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 		mutex_unlock(&drm->master.lock);
+ 	}
+ 	if (ret) {
+-		NV_ERROR(drm, "Client allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Client allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+@@ -240,37 +240,37 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 			       }, sizeof(struct nv_device_v0),
+ 			       &cli->device);
+ 	if (ret) {
+-		NV_ERROR(drm, "Device allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Device allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->device.object, mmus);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MMU class\n");
++		NV_PRINTK(err, cli, "No supported MMU class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mmu_init(&cli->device.object, mmus[ret].oclass, &cli->mmu);
+ 	if (ret) {
+-		NV_ERROR(drm, "MMU allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "MMU allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, vmms);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported VMM class\n");
++		NV_PRINTK(err, cli, "No supported VMM class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nouveau_vmm_init(cli, vmms[ret].oclass, &cli->vmm);
+ 	if (ret) {
+-		NV_ERROR(drm, "VMM allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "VMM allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, mems);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MEM class\n");
++		NV_PRINTK(err, cli, "No supported MEM class\n");
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+index 32fa94a9773f..cbd33e87b799 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+@@ -275,6 +275,7 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 	struct nvkm_outp *outp, *outt, *pair;
+ 	struct nvkm_conn *conn;
+ 	struct nvkm_head *head;
++	struct nvkm_ior *ior;
+ 	struct nvbios_connE connE;
+ 	struct dcb_output dcbE;
+ 	u8  hpd = 0, ver, hdr;
+@@ -399,6 +400,19 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 			return ret;
+ 	}
+ 
++	/* Enforce identity-mapped SOR assignment for panels, which have
++	 * certain bits (ie. backlight controls) wired to a specific SOR.
++	 */
++	list_for_each_entry(outp, &disp->outp, head) {
++		if (outp->conn->info.type == DCB_CONNECTOR_LVDS ||
++		    outp->conn->info.type == DCB_CONNECTOR_eDP) {
++			ior = nvkm_ior_find(disp, SOR, ffs(outp->info.or) - 1);
++			if (!WARN_ON(!ior))
++				ior->identity = true;
++			outp->identity = true;
++		}
++	}
++
+ 	i = 0;
+ 	list_for_each_entry(head, &disp->head, head)
+ 		i = max(i, head->id + 1);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 7c5bed29ffef..6160a6158cf2 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -412,14 +412,10 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ }
+ 
+ static void
+-nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
++nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ {
+ 	struct nvkm_dp *dp = nvkm_dp(outp);
+ 
+-	/* Prevent link from being retrained if sink sends an IRQ. */
+-	atomic_set(&dp->lt.done, 0);
+-	ior->dp.nr = 0;
+-
+ 	/* Execute DisableLT script from DP Info Table. */
+ 	nvbios_init(&ior->disp->engine.subdev, dp->info.script[4],
+ 		init.outp = &dp->outp.info;
+@@ -428,6 +424,16 @@ nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ 	);
+ }
+ 
++static void
++nvkm_dp_release(struct nvkm_outp *outp)
++{
++	struct nvkm_dp *dp = nvkm_dp(outp);
++
++	/* Prevent link from being retrained if sink sends an IRQ. */
++	atomic_set(&dp->lt.done, 0);
++	dp->outp.ior->dp.nr = 0;
++}
++
+ static int
+ nvkm_dp_acquire(struct nvkm_outp *outp)
+ {
+@@ -576,6 +582,7 @@ nvkm_dp_func = {
+ 	.fini = nvkm_dp_fini,
+ 	.acquire = nvkm_dp_acquire,
+ 	.release = nvkm_dp_release,
++	.disable = nvkm_dp_disable,
+ };
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+index e0b4e0c5704e..19911211a12a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+@@ -16,6 +16,7 @@ struct nvkm_ior {
+ 	char name[8];
+ 
+ 	struct list_head head;
++	bool identity;
+ 
+ 	struct nvkm_ior_state {
+ 		struct nvkm_outp *outp;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+index f89c7b977aa5..def005dd5fda 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+@@ -501,11 +501,11 @@ nv50_disp_super_2_0(struct nv50_disp *disp, struct nvkm_head *head)
+ 	nv50_disp_super_ied_off(head, ior, 2);
+ 
+ 	/* If we're shutting down the OR's only active head, execute
+-	 * the output path's release function.
++	 * the output path's disable function.
+ 	 */
+ 	if (ior->arm.head == (1 << head->id)) {
+-		if ((outp = ior->arm.outp) && outp->func->release)
+-			outp->func->release(outp, ior);
++		if ((outp = ior->arm.outp) && outp->func->disable)
++			outp->func->disable(outp, ior);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+index be9e7f8c3b23..44df835e5473 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+@@ -93,6 +93,8 @@ nvkm_outp_release(struct nvkm_outp *outp, u8 user)
+ 	if (ior) {
+ 		outp->acquired &= ~user;
+ 		if (!outp->acquired) {
++			if (outp->func->release && outp->ior)
++				outp->func->release(outp);
+ 			outp->ior->asy.outp = NULL;
+ 			outp->ior = NULL;
+ 		}
+@@ -127,17 +129,26 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	if (proto == UNKNOWN)
+ 		return -ENOSYS;
+ 
++	/* Deal with panels requiring identity-mapped SOR assignment. */
++	if (outp->identity) {
++		ior = nvkm_ior_find(outp->disp, SOR, ffs(outp->info.or) - 1);
++		if (WARN_ON(!ior))
++			return -ENOSPC;
++		return nvkm_outp_acquire_ior(outp, user, ior);
++	}
++
+ 	/* First preference is to reuse the OR that is currently armed
+ 	 * on HW, if any, in order to prevent unnecessary switching.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->arm.outp == outp)
++		if (!ior->identity && !ior->asy.outp && ior->arm.outp == outp)
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+ 
+ 	/* Failing that, a completely unused OR is the next best thing. */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type && !ior->arm.outp &&
++		if (!ior->identity &&
++		    !ior->asy.outp && ior->type == type && !ior->arm.outp &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+@@ -146,7 +157,7 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	 * but will be released during the next modeset.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type &&
++		if (!ior->identity && !ior->asy.outp && ior->type == type &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+index ea84d7d5741a..3f932fb39c94 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+@@ -17,6 +17,7 @@ struct nvkm_outp {
+ 
+ 	struct list_head head;
+ 	struct nvkm_conn *conn;
++	bool identity;
+ 
+ 	/* Assembly state. */
+ #define NVKM_OUTP_PRIV 1
+@@ -41,7 +42,8 @@ struct nvkm_outp_func {
+ 	void (*init)(struct nvkm_outp *);
+ 	void (*fini)(struct nvkm_outp *);
+ 	int (*acquire)(struct nvkm_outp *);
+-	void (*release)(struct nvkm_outp *, struct nvkm_ior *);
++	void (*release)(struct nvkm_outp *);
++	void (*disable)(struct nvkm_outp *, struct nvkm_ior *);
+ };
+ 
+ #define OUTP_MSG(o,l,f,a...) do {                                              \
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+index b80618e35491..d65959ef0564 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+@@ -158,7 +158,8 @@ gm200_devinit_post(struct nvkm_devinit *base, bool post)
+ 	}
+ 
+ 	/* load and execute some other ucode image (bios therm?) */
+-	return pmu_load(init, 0x01, post, NULL, NULL);
++	pmu_load(init, 0x01, post, NULL, NULL);
++	return 0;
+ }
+ 
+ static const struct nvkm_devinit_func
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+index de269eb482dd..7459def78d50 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+@@ -1423,7 +1423,7 @@ nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma)
+ void
+ nvkm_vmm_part(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
+ {
+-	if (vmm->func->part && inst) {
++	if (inst && vmm->func->part) {
+ 		mutex_lock(&vmm->mutex);
+ 		vmm->func->part(vmm, inst);
+ 		mutex_unlock(&vmm->mutex);
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 25b7bd56ae11..1cb41992aaa1 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -335,7 +335,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		struct hid_field *field, struct hid_usage *usage,
+ 		unsigned long **bit, int *max)
+ {
+-	if (usage->hid == (HID_UP_CUSTOM | 0x0003)) {
++	if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
++			usage->hid == (HID_UP_MSVENDOR | 0x0003)) {
+ 		/* The fn key on Apple USB keyboards */
+ 		set_bit(EV_REP, hi->input->evbit);
+ 		hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
+@@ -472,6 +473,12 @@ static const struct hid_device_id apple_devices[] = {
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO),
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e80bcd71fe1e..eee6b79fb131 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -88,6 +88,7 @@
+ #define USB_DEVICE_ID_ANTON_TOUCH_PAD	0x3101
+ 
+ #define USB_VENDOR_ID_APPLE		0x05ac
++#define BT_VENDOR_ID_APPLE		0x004c
+ #define USB_DEVICE_ID_APPLE_MIGHTYMOUSE	0x0304
+ #define USB_DEVICE_ID_APPLE_MAGICMOUSE	0x030d
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD	0x030e
+@@ -157,6 +158,7 @@
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO   0x0256
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_JIS   0x0257
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI   0x0267
++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI   0x026c
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI	0x0290
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO	0x0291
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS	0x0292
+@@ -526,10 +528,6 @@
+ #define I2C_VENDOR_ID_HANTICK		0x0911
+ #define I2C_PRODUCT_ID_HANTICK_5288	0x5288
+ 
+-#define I2C_VENDOR_ID_RAYD		0x2386
+-#define I2C_PRODUCT_ID_RAYD_3118	0x3118
+-#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+-
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+ #define USB_DEVICE_ID_HANWANG_TABLET_LAST	0x8fff
+@@ -949,6 +947,7 @@
+ #define USB_DEVICE_ID_SAITEK_RUMBLEPAD	0xff17
+ #define USB_DEVICE_ID_SAITEK_PS1000	0x0621
+ #define USB_DEVICE_ID_SAITEK_RAT7_OLD	0x0ccb
++#define USB_DEVICE_ID_SAITEK_RAT7_CONTAGION	0x0ccd
+ #define USB_DEVICE_ID_SAITEK_RAT7	0x0cd7
+ #define USB_DEVICE_ID_SAITEK_RAT9	0x0cfa
+ #define USB_DEVICE_ID_SAITEK_MMO7	0x0cd0
+diff --git a/drivers/hid/hid-saitek.c b/drivers/hid/hid-saitek.c
+index 39e642686ff0..683861f324e3 100644
+--- a/drivers/hid/hid-saitek.c
++++ b/drivers/hid/hid-saitek.c
+@@ -183,6 +183,8 @@ static const struct hid_device_id saitek_devices[] = {
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7_CONTAGION),
++		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT9),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9),
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 50af72baa5ca..2b63487057c2 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -579,6 +579,28 @@ void sensor_hub_device_close(struct hid_sensor_hub_device *hsdev)
+ }
+ EXPORT_SYMBOL_GPL(sensor_hub_device_close);
+ 
++static __u8 *sensor_hub_report_fixup(struct hid_device *hdev, __u8 *rdesc,
++		unsigned int *rsize)
++{
++	/*
++	 * Checks if the report descriptor of Thinkpad Helix 2 has a logical
++	 * minimum for magnetic flux axis greater than the maximum.
++	 */
++	if (hdev->product == USB_DEVICE_ID_TEXAS_INSTRUMENTS_LENOVO_YOGA &&
++		*rsize == 2558 && rdesc[913] == 0x17 && rdesc[914] == 0x40 &&
++		rdesc[915] == 0x81 && rdesc[916] == 0x08 &&
++		rdesc[917] == 0x00 && rdesc[918] == 0x27 &&
++		rdesc[921] == 0x07 && rdesc[922] == 0x00) {
++		/* Sets negative logical minimum for mag x, y and z */
++		rdesc[914] = rdesc[935] = rdesc[956] = 0xc0;
++		rdesc[915] = rdesc[936] = rdesc[957] = 0x7e;
++		rdesc[916] = rdesc[937] = rdesc[958] = 0xf7;
++		rdesc[917] = rdesc[938] = rdesc[959] = 0xff;
++	}
++
++	return rdesc;
++}
++
+ static int sensor_hub_probe(struct hid_device *hdev,
+ 				const struct hid_device_id *id)
+ {
+@@ -743,6 +765,7 @@ static struct hid_driver sensor_hub_driver = {
+ 	.probe = sensor_hub_probe,
+ 	.remove = sensor_hub_remove,
+ 	.raw_event = sensor_hub_raw_event,
++	.report_fixup = sensor_hub_report_fixup,
+ #ifdef CONFIG_PM
+ 	.suspend = sensor_hub_suspend,
+ 	.resume = sensor_hub_resume,
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 64773433b947..37013b58098c 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -48,6 +48,7 @@
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+ #define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -169,13 +170,10 @@ static const struct i2c_hid_quirks {
+ 	{ USB_VENDOR_ID_WEIDA, USB_DEVICE_ID_WEIDA_8755,
+ 		I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV },
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+-		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
++		I2C_HID_QUIRK_NO_RUNTIME_PM },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1110,7 +1108,9 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		goto err_mem_free;
+ 	}
+ 
+-	pm_runtime_put(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_put(&client->dev);
++
+ 	return 0;
+ 
+ err_mem_free:
+@@ -1136,7 +1136,8 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 	struct i2c_hid *ihid = i2c_get_clientdata(client);
+ 	struct hid_device *hid;
+ 
+-	pm_runtime_get_sync(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_get_sync(&client->dev);
+ 	pm_runtime_disable(&client->dev);
+ 	pm_runtime_set_suspended(&client->dev);
+ 	pm_runtime_put_noidle(&client->dev);
+@@ -1237,11 +1238,16 @@ static int i2c_hid_resume(struct device *dev)
+ 	pm_runtime_enable(dev);
+ 
+ 	enable_irq(client->irq);
+-	ret = i2c_hid_hwreset(client);
++
++	/* Instead of resetting device, simply powers the device on. This
++	 * solves "incomplete reports" on Raydium devices 2386:3118 and
++	 * 2386:4B33
++	 */
++	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* RAYDIUM device (2386:3118) need to re-send report descr cmd
++	/* Some devices need to re-send report descr cmd
+ 	 * after resume, after this it will be back normal.
+ 	 * otherwise it issues too many incomplete reports.
+ 	 */
+diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+index 97869b7410eb..da133716bed0 100644
+--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+@@ -29,6 +29,7 @@
+ #define CNL_Ax_DEVICE_ID	0x9DFC
+ #define GLK_Ax_DEVICE_ID	0x31A2
+ #define CNL_H_DEVICE_ID		0xA37C
++#define SPT_H_DEVICE_ID		0xA135
+ 
+ #define	REVISION_ID_CHT_A0	0x6
+ #define	REVISION_ID_CHT_Ax_SI	0x0
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index a2c53ea3b5ed..c7b8eb32b1ea 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -38,6 +38,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, GLK_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_H_DEVICE_ID)},
++	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, SPT_H_DEVICE_ID)},
+ 	{0, }
+ };
+ MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index ced041899456..f4d08c8ac7f8 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -76,6 +76,7 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 					__u32 version)
+ {
+ 	int ret = 0;
++	unsigned int cur_cpu;
+ 	struct vmbus_channel_initiate_contact *msg;
+ 	unsigned long flags;
+ 
+@@ -118,9 +119,10 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 	 * the CPU attempting to connect may not be CPU 0.
+ 	 */
+ 	if (version >= VERSION_WIN8_1) {
+-		msg->target_vcpu =
+-			hv_cpu_number_to_vp_number(smp_processor_id());
+-		vmbus_connection.connect_cpu = smp_processor_id();
++		cur_cpu = get_cpu();
++		msg->target_vcpu = hv_cpu_number_to_vp_number(cur_cpu);
++		vmbus_connection.connect_cpu = cur_cpu;
++		put_cpu();
+ 	} else {
+ 		msg->target_vcpu = 0;
+ 		vmbus_connection.connect_cpu = 0;
+diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c
+index 9918bdd81619..a403e8579b65 100644
+--- a/drivers/i2c/busses/i2c-uniphier-f.c
++++ b/drivers/i2c/busses/i2c-uniphier-f.c
+@@ -401,11 +401,8 @@ static int uniphier_fi2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_fi2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c
+index bb181b088291..454f914ae66d 100644
+--- a/drivers/i2c/busses/i2c-uniphier.c
++++ b/drivers/i2c/busses/i2c-uniphier.c
+@@ -248,11 +248,8 @@ static int uniphier_i2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_i2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 4994f920a836..8653182be818 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -187,12 +187,15 @@ static int st_lsm6dsx_set_fifo_odr(struct st_lsm6dsx_sensor *sensor,
+ 
+ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ {
+-	u16 fifo_watermark = ~0, cur_watermark, sip = 0, fifo_th_mask;
++	u16 fifo_watermark = ~0, cur_watermark, fifo_th_mask;
+ 	struct st_lsm6dsx_hw *hw = sensor->hw;
+ 	struct st_lsm6dsx_sensor *cur_sensor;
+ 	int i, err, data;
+ 	__le16 wdata;
+ 
++	if (!hw->sip)
++		return 0;
++
+ 	for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) {
+ 		cur_sensor = iio_priv(hw->iio_devs[i]);
+ 
+@@ -203,14 +206,10 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ 						       : cur_sensor->watermark;
+ 
+ 		fifo_watermark = min_t(u16, fifo_watermark, cur_watermark);
+-		sip += cur_sensor->sip;
+ 	}
+ 
+-	if (!sip)
+-		return 0;
+-
+-	fifo_watermark = max_t(u16, fifo_watermark, sip);
+-	fifo_watermark = (fifo_watermark / sip) * sip;
++	fifo_watermark = max_t(u16, fifo_watermark, hw->sip);
++	fifo_watermark = (fifo_watermark / hw->sip) * hw->sip;
+ 	fifo_watermark = fifo_watermark * hw->settings->fifo_ops.th_wl;
+ 
+ 	err = regmap_read(hw->regmap, hw->settings->fifo_ops.fifo_th.addr + 1,
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index 54e383231d1e..c31b9633f32d 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -258,7 +258,6 @@ static int maxim_thermocouple_remove(struct spi_device *spi)
+ static const struct spi_device_id maxim_thermocouple_id[] = {
+ 	{"max6675", MAX6675},
+ 	{"max31855", MAX31855},
+-	{"max31856", MAX31855},
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(spi, maxim_thermocouple_id);
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index ec8fb289621f..5f437d1570fb 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -124,6 +124,8 @@ static DEFINE_MUTEX(mut);
+ static DEFINE_IDR(ctx_idr);
+ static DEFINE_IDR(multicast_idr);
+ 
++static const struct file_operations ucma_fops;
++
+ static inline struct ucma_context *_ucma_find_context(int id,
+ 						      struct ucma_file *file)
+ {
+@@ -1581,6 +1583,10 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
+ 	f = fdget(cmd.fd);
+ 	if (!f.file)
+ 		return -ENOENT;
++	if (f.file->f_op != &ucma_fops) {
++		ret = -EINVAL;
++		goto file_put;
++	}
+ 
+ 	/* Validate current fd and prevent destruction of id. */
+ 	ctx = ucma_get_ctx(f.file->private_data, cmd.id);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a76e206704d4..cb1e69bdad0b 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -844,6 +844,8 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
+ 				"Failed to destroy Shadow QP");
+ 			return rc;
+ 		}
++		bnxt_qplib_free_qp_res(&rdev->qplib_res,
++				       &rdev->qp1_sqp->qplib_qp);
+ 		mutex_lock(&rdev->qp_lock);
+ 		list_del(&rdev->qp1_sqp->list);
+ 		atomic_dec(&rdev->qp_count);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index e426b990c1dd..6ad0d46ab879 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -196,7 +196,7 @@ static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res,
+ 				       struct bnxt_qplib_qp *qp)
+ {
+ 	struct bnxt_qplib_q *rq = &qp->rq;
+-	struct bnxt_qplib_q *sq = &qp->rq;
++	struct bnxt_qplib_q *sq = &qp->sq;
+ 	int rc = 0;
+ 
+ 	if (qp->sq_hdr_buf_size && sq->hwq.max_elements) {
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index d77c97fe4a23..c53363443280 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -3073,7 +3073,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+ 		return 0;
+ 
+ 	offset_mask = pte_pgsize - 1;
+-	__pte	    = *pte & PM_ADDR_MASK;
++	__pte	    = __sme_clr(*pte & PM_ADDR_MASK);
+ 
+ 	return (__pte & ~offset_mask) | (iova & offset_mask);
+ }
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 75df4c9d8b54..1c7c1250bf75 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -29,9 +29,6 @@
+  */
+ #define	MIN_RAID456_JOURNAL_SPACE (4*2048)
+ 
+-/* Global list of all raid sets */
+-static LIST_HEAD(raid_sets);
+-
+ static bool devices_handle_discard_safely = false;
+ 
+ /*
+@@ -227,7 +224,6 @@ struct rs_layout {
+ 
+ struct raid_set {
+ 	struct dm_target *ti;
+-	struct list_head list;
+ 
+ 	uint32_t stripe_cache_entries;
+ 	unsigned long ctr_flags;
+@@ -273,19 +269,6 @@ static void rs_config_restore(struct raid_set *rs, struct rs_layout *l)
+ 	mddev->new_chunk_sectors = l->new_chunk_sectors;
+ }
+ 
+-/* Find any raid_set in active slot for @rs on global list */
+-static struct raid_set *rs_find_active(struct raid_set *rs)
+-{
+-	struct raid_set *r;
+-	struct mapped_device *md = dm_table_get_md(rs->ti->table);
+-
+-	list_for_each_entry(r, &raid_sets, list)
+-		if (r != rs && dm_table_get_md(r->ti->table) == md)
+-			return r;
+-
+-	return NULL;
+-}
+-
+ /* raid10 algorithms (i.e. formats) */
+ #define	ALGORITHM_RAID10_DEFAULT	0
+ #define	ALGORITHM_RAID10_NEAR		1
+@@ -764,7 +747,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 
+ 	mddev_init(&rs->md);
+ 
+-	INIT_LIST_HEAD(&rs->list);
+ 	rs->raid_disks = raid_devs;
+ 	rs->delta_disks = 0;
+ 
+@@ -782,9 +764,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	for (i = 0; i < raid_devs; i++)
+ 		md_rdev_init(&rs->dev[i].rdev);
+ 
+-	/* Add @rs to global list. */
+-	list_add(&rs->list, &raid_sets);
+-
+ 	/*
+ 	 * Remaining items to be initialized by further RAID params:
+ 	 *  rs->md.persistent
+@@ -797,7 +776,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	return rs;
+ }
+ 
+-/* Free all @rs allocations and remove it from global list. */
++/* Free all @rs allocations */
+ static void raid_set_free(struct raid_set *rs)
+ {
+ 	int i;
+@@ -815,8 +794,6 @@ static void raid_set_free(struct raid_set *rs)
+ 			dm_put_device(rs->ti, rs->dev[i].data_dev);
+ 	}
+ 
+-	list_del(&rs->list);
+-
+ 	kfree(rs);
+ }
+ 
+@@ -3149,6 +3126,11 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
+ 		rs_set_new(rs);
+ 	} else if (rs_is_recovering(rs)) {
++		/* Rebuild particular devices */
++		if (test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
++			set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
++			rs_setup_recovery(rs, MaxSector);
++		}
+ 		/* A recovering raid set may be resized */
+ 		; /* skip setup rs */
+ 	} else if (rs_is_reshaping(rs)) {
+@@ -3350,32 +3332,53 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_SUBMITTED;
+ }
+ 
+-/* Return string describing the current sync action of @mddev */
+-static const char *decipher_sync_action(struct mddev *mddev, unsigned long recovery)
++/* Return sync state string for @state */
++enum sync_state { st_frozen, st_reshape, st_resync, st_check, st_repair, st_recover, st_idle };
++static const char *sync_str(enum sync_state state)
++{
++	/* Has to be in above sync_state order! */
++	static const char *sync_strs[] = {
++		"frozen",
++		"reshape",
++		"resync",
++		"check",
++		"repair",
++		"recover",
++		"idle"
++	};
++
++	return __within_range(state, 0, ARRAY_SIZE(sync_strs) - 1) ? sync_strs[state] : "undef";
++};
++
++/* Return enum sync_state for @mddev derived from @recovery flags */
++static const enum sync_state decipher_sync_action(struct mddev *mddev, unsigned long recovery)
+ {
+ 	if (test_bit(MD_RECOVERY_FROZEN, &recovery))
+-		return "frozen";
++		return st_frozen;
+ 
+-	/* The MD sync thread can be done with io but still be running */
++	/* The MD sync thread can be done with io or be interrupted but still be running */
+ 	if (!test_bit(MD_RECOVERY_DONE, &recovery) &&
+ 	    (test_bit(MD_RECOVERY_RUNNING, &recovery) ||
+ 	     (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &recovery)))) {
+ 		if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
+-			return "reshape";
++			return st_reshape;
+ 
+ 		if (test_bit(MD_RECOVERY_SYNC, &recovery)) {
+ 			if (!test_bit(MD_RECOVERY_REQUESTED, &recovery))
+-				return "resync";
+-			else if (test_bit(MD_RECOVERY_CHECK, &recovery))
+-				return "check";
+-			return "repair";
++				return st_resync;
++			if (test_bit(MD_RECOVERY_CHECK, &recovery))
++				return st_check;
++			return st_repair;
+ 		}
+ 
+ 		if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+-			return "recover";
++			return st_recover;
++
++		if (mddev->reshape_position != MaxSector)
++			return st_reshape;
+ 	}
+ 
+-	return "idle";
++	return st_idle;
+ }
+ 
+ /*
+@@ -3409,6 +3412,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 				sector_t resync_max_sectors)
+ {
+ 	sector_t r;
++	enum sync_state state;
+ 	struct mddev *mddev = &rs->md;
+ 
+ 	clear_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+@@ -3419,20 +3423,14 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 		set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+ 	} else {
+-		if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) &&
+-		    !test_bit(MD_RECOVERY_INTR, &recovery) &&
+-		    (test_bit(MD_RECOVERY_NEEDED, &recovery) ||
+-		     test_bit(MD_RECOVERY_RESHAPE, &recovery) ||
+-		     test_bit(MD_RECOVERY_RUNNING, &recovery)))
+-			r = mddev->curr_resync_completed;
+-		else
++		state = decipher_sync_action(mddev, recovery);
++
++		if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery))
+ 			r = mddev->recovery_cp;
++		else
++			r = mddev->curr_resync_completed;
+ 
+-		if (r >= resync_max_sectors &&
+-		    (!test_bit(MD_RECOVERY_REQUESTED, &recovery) ||
+-		     (!test_bit(MD_RECOVERY_FROZEN, &recovery) &&
+-		      !test_bit(MD_RECOVERY_NEEDED, &recovery) &&
+-		      !test_bit(MD_RECOVERY_RUNNING, &recovery)))) {
++		if (state == st_idle && r >= resync_max_sectors) {
+ 			/*
+ 			 * Sync complete.
+ 			 */
+@@ -3440,24 +3438,20 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+ 				set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_RECOVER, &recovery)) {
++		} else if (state == st_recover)
+ 			/*
+ 			 * In case we are recovering, the array is not in sync
+ 			 * and health chars should show the recovering legs.
+ 			 */
+ 			;
+-
+-		} else if (test_bit(MD_RECOVERY_SYNC, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_resync)
+ 			/*
+ 			 * If "resync" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+ 			 * characters shall be 'a'.
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+-
+-		} else if (test_bit(MD_RECOVERY_RESHAPE, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_reshape)
+ 			/*
+ 			 * If "reshape" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+@@ -3465,7 +3459,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_check || state == st_repair)
+ 			/*
+ 			 * If "check" or "repair" is occurring, the raid set has
+ 			 * undergone an initial sync and the health characters
+@@ -3473,12 +3467,12 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else {
++		else {
+ 			struct md_rdev *rdev;
+ 
+ 			/*
+ 			 * We are idle and recovery is needed, prevent 'A' chars race
+-			 * caused by components still set to in-sync by constrcuctor.
++			 * caused by components still set to in-sync by constructor.
+ 			 */
+ 			if (test_bit(MD_RECOVERY_NEEDED, &recovery))
+ 				set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+@@ -3542,7 +3536,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
+ 		progress = rs_get_progress(rs, recovery, resync_max_sectors);
+ 		resync_mismatches = (mddev->last_sync_action && !strcasecmp(mddev->last_sync_action, "check")) ?
+ 				    atomic64_read(&mddev->resync_mismatches) : 0;
+-		sync_action = decipher_sync_action(&rs->md, recovery);
++		sync_action = sync_str(decipher_sync_action(&rs->md, recovery));
+ 
+ 		/* HM FIXME: do we want another state char for raid0? It shows 'D'/'A'/'-' now */
+ 		for (i = 0; i < rs->raid_disks; i++)
+@@ -3892,14 +3886,13 @@ static int rs_start_reshape(struct raid_set *rs)
+ 	struct mddev *mddev = &rs->md;
+ 	struct md_personality *pers = mddev->pers;
+ 
++	/* Don't allow the sync thread to work until the table gets reloaded. */
++	set_bit(MD_RECOVERY_WAIT, &mddev->recovery);
++
+ 	r = rs_setup_reshape(rs);
+ 	if (r)
+ 		return r;
+ 
+-	/* Need to be resumed to be able to start reshape, recovery is frozen until raid_resume() though */
+-	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
+-		mddev_resume(mddev);
+-
+ 	/*
+ 	 * Check any reshape constraints enforced by the personalility
+ 	 *
+@@ -3923,10 +3916,6 @@ static int rs_start_reshape(struct raid_set *rs)
+ 		}
+ 	}
+ 
+-	/* Suspend because a resume will happen in raid_resume() */
+-	set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags);
+-	mddev_suspend(mddev);
+-
+ 	/*
+ 	 * Now reshape got set up, update superblocks to
+ 	 * reflect the fact so that a table reload will
+@@ -3947,29 +3936,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	if (test_and_set_bit(RT_FLAG_RS_PRERESUMED, &rs->runtime_flags))
+ 		return 0;
+ 
+-	if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
+-		struct raid_set *rs_active = rs_find_active(rs);
+-
+-		if (rs_active) {
+-			/*
+-			 * In case no rebuilds have been requested
+-			 * and an active table slot exists, copy
+-			 * current resynchonization completed and
+-			 * reshape position pointers across from
+-			 * suspended raid set in the active slot.
+-			 *
+-			 * This resumes the new mapping at current
+-			 * offsets to continue recover/reshape without
+-			 * necessarily redoing a raid set partially or
+-			 * causing data corruption in case of a reshape.
+-			 */
+-			if (rs_active->md.curr_resync_completed != MaxSector)
+-				mddev->curr_resync_completed = rs_active->md.curr_resync_completed;
+-			if (rs_active->md.reshape_position != MaxSector)
+-				mddev->reshape_position = rs_active->md.reshape_position;
+-		}
+-	}
+-
+ 	/*
+ 	 * The superblocks need to be updated on disk if the
+ 	 * array is new or new devices got added (thus zeroed
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 72142021b5c9..20b0776e39ef 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -188,6 +188,12 @@ struct dm_pool_metadata {
+ 	unsigned long flags;
+ 	sector_t data_block_size;
+ 
++	/*
++	 * We reserve a section of the metadata for commit overhead.
++	 * All reported space does *not* include this.
++	 */
++	dm_block_t metadata_reserve;
++
+ 	/*
+ 	 * Set if a transaction has to be aborted but the attempt to roll back
+ 	 * to the previous (good) transaction failed.  The only pool metadata
+@@ -816,6 +822,20 @@ static int __commit_transaction(struct dm_pool_metadata *pmd)
+ 	return dm_tm_commit(pmd->tm, sblock);
+ }
+ 
++static void __set_metadata_reserve(struct dm_pool_metadata *pmd)
++{
++	int r;
++	dm_block_t total;
++	dm_block_t max_blocks = 4096; /* 16M */
++
++	r = dm_sm_get_nr_blocks(pmd->metadata_sm, &total);
++	if (r) {
++		DMERR("could not get size of metadata device");
++		pmd->metadata_reserve = max_blocks;
++	} else
++		pmd->metadata_reserve = min(max_blocks, div_u64(total, 10));
++}
++
+ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 					       sector_t data_block_size,
+ 					       bool format_device)
+@@ -849,6 +869,8 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 		return ERR_PTR(r);
+ 	}
+ 
++	__set_metadata_reserve(pmd);
++
+ 	return pmd;
+ }
+ 
+@@ -1820,6 +1842,13 @@ int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
+ 	down_read(&pmd->root_lock);
+ 	if (!pmd->fail_io)
+ 		r = dm_sm_get_nr_free(pmd->metadata_sm, result);
++
++	if (!r) {
++		if (*result < pmd->metadata_reserve)
++			*result = 0;
++		else
++			*result -= pmd->metadata_reserve;
++	}
+ 	up_read(&pmd->root_lock);
+ 
+ 	return r;
+@@ -1932,8 +1961,11 @@ int dm_pool_resize_metadata_dev(struct dm_pool_metadata *pmd, dm_block_t new_cou
+ 	int r = -EINVAL;
+ 
+ 	down_write(&pmd->root_lock);
+-	if (!pmd->fail_io)
++	if (!pmd->fail_io) {
+ 		r = __resize_space_map(pmd->metadata_sm, new_count);
++		if (!r)
++			__set_metadata_reserve(pmd);
++	}
+ 	up_write(&pmd->root_lock);
+ 
+ 	return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 1087f6a1ac79..b512efd4050c 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -200,7 +200,13 @@ struct dm_thin_new_mapping;
+ enum pool_mode {
+ 	PM_WRITE,		/* metadata may be changed */
+ 	PM_OUT_OF_DATA_SPACE,	/* metadata may be changed, though data may not be allocated */
++
++	/*
++	 * Like READ_ONLY, except may switch back to WRITE on metadata resize. Reported as READ_ONLY.
++	 */
++	PM_OUT_OF_METADATA_SPACE,
+ 	PM_READ_ONLY,		/* metadata may not be changed */
++
+ 	PM_FAIL,		/* all I/O fails */
+ };
+ 
+@@ -1388,7 +1394,35 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
+ 
+ static void requeue_bios(struct pool *pool);
+ 
+-static void check_for_space(struct pool *pool)
++static bool is_read_only_pool_mode(enum pool_mode mode)
++{
++	return (mode == PM_OUT_OF_METADATA_SPACE || mode == PM_READ_ONLY);
++}
++
++static bool is_read_only(struct pool *pool)
++{
++	return is_read_only_pool_mode(get_pool_mode(pool));
++}
++
++static void check_for_metadata_space(struct pool *pool)
++{
++	int r;
++	const char *ooms_reason = NULL;
++	dm_block_t nr_free;
++
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &nr_free);
++	if (r)
++		ooms_reason = "Could not get free metadata blocks";
++	else if (!nr_free)
++		ooms_reason = "No free metadata blocks";
++
++	if (ooms_reason && !is_read_only(pool)) {
++		DMERR("%s", ooms_reason);
++		set_pool_mode(pool, PM_OUT_OF_METADATA_SPACE);
++	}
++}
++
++static void check_for_data_space(struct pool *pool)
+ {
+ 	int r;
+ 	dm_block_t nr_free;
+@@ -1414,14 +1448,16 @@ static int commit(struct pool *pool)
+ {
+ 	int r;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY)
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE)
+ 		return -EINVAL;
+ 
+ 	r = dm_pool_commit_metadata(pool->pmd);
+ 	if (r)
+ 		metadata_operation_failed(pool, "dm_pool_commit_metadata", r);
+-	else
+-		check_for_space(pool);
++	else {
++		check_for_metadata_space(pool);
++		check_for_data_space(pool);
++	}
+ 
+ 	return r;
+ }
+@@ -1487,6 +1523,19 @@ static int alloc_data_block(struct thin_c *tc, dm_block_t *result)
+ 		return r;
+ 	}
+ 
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks);
++	if (r) {
++		metadata_operation_failed(pool, "dm_pool_get_free_metadata_block_count", r);
++		return r;
++	}
++
++	if (!free_blocks) {
++		/* Let's commit before we use up the metadata reserve. */
++		r = commit(pool);
++		if (r)
++			return r;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1518,6 +1567,7 @@ static blk_status_t should_error_unserviceable_bio(struct pool *pool)
+ 	case PM_OUT_OF_DATA_SPACE:
+ 		return pool->pf.error_if_no_space ? BLK_STS_NOSPC : 0;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+ 	case PM_FAIL:
+ 		return BLK_STS_IOERR;
+@@ -2481,8 +2531,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 		error_retry_list(pool);
+ 		break;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+-		if (old_mode != new_mode)
++		if (!is_read_only_pool_mode(old_mode))
+ 			notify_of_pool_mode_change(pool, "read-only");
+ 		dm_pool_metadata_read_only(pool->pmd);
+ 		pool->process_bio = process_bio_read_only;
+@@ -3420,6 +3471,10 @@ static int maybe_resize_metadata_dev(struct dm_target *ti, bool *need_commit)
+ 		DMINFO("%s: growing the metadata device from %llu to %llu blocks",
+ 		       dm_device_name(pool->pool_md),
+ 		       sb_metadata_dev_size, metadata_dev_size);
++
++		if (get_pool_mode(pool) == PM_OUT_OF_METADATA_SPACE)
++			set_pool_mode(pool, PM_WRITE);
++
+ 		r = dm_pool_resize_metadata_dev(pool->pmd, metadata_dev_size);
+ 		if (r) {
+ 			metadata_operation_failed(pool, "dm_pool_resize_metadata_dev", r);
+@@ -3724,7 +3779,7 @@ static int pool_message(struct dm_target *ti, unsigned argc, char **argv,
+ 	struct pool_c *pt = ti->private;
+ 	struct pool *pool = pt->pool;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY) {
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE) {
+ 		DMERR("%s: unable to service pool target messages in READ_ONLY or FAIL mode",
+ 		      dm_device_name(pool->pool_md));
+ 		return -EOPNOTSUPP;
+@@ -3798,6 +3853,7 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 	dm_block_t nr_blocks_data;
+ 	dm_block_t nr_blocks_metadata;
+ 	dm_block_t held_root;
++	enum pool_mode mode;
+ 	char buf[BDEVNAME_SIZE];
+ 	char buf2[BDEVNAME_SIZE];
+ 	struct pool_c *pt = ti->private;
+@@ -3868,9 +3924,10 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 		else
+ 			DMEMIT("- ");
+ 
+-		if (pool->pf.mode == PM_OUT_OF_DATA_SPACE)
++		mode = get_pool_mode(pool);
++		if (mode == PM_OUT_OF_DATA_SPACE)
+ 			DMEMIT("out_of_data_space ");
+-		else if (pool->pf.mode == PM_READ_ONLY)
++		else if (is_read_only_pool_mode(mode))
+ 			DMEMIT("ro ");
+ 		else
+ 			DMEMIT("rw ");
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 35bd3a62451b..8c93d44a052c 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4531,11 +4531,12 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
+ 		allow_barrier(conf);
+ 	}
+ 
++	raise_barrier(conf, 0);
+ read_more:
+ 	/* Now schedule reads for blocks from sector_nr to last */
+ 	r10_bio = raid10_alloc_init_r10buf(conf);
+ 	r10_bio->state = 0;
+-	raise_barrier(conf, sectors_done != 0);
++	raise_barrier(conf, 1);
+ 	atomic_set(&r10_bio->remaining, 0);
+ 	r10_bio->mddev = mddev;
+ 	r10_bio->sector = sector_nr;
+@@ -4631,6 +4632,8 @@ read_more:
+ 	if (sector_nr <= last)
+ 		goto read_more;
+ 
++	lower_barrier(conf);
++
+ 	/* Now that we have done the whole section we can
+ 	 * update reshape_progress
+ 	 */
+diff --git a/drivers/md/raid5-log.h b/drivers/md/raid5-log.h
+index a001808a2b77..bfb811407061 100644
+--- a/drivers/md/raid5-log.h
++++ b/drivers/md/raid5-log.h
+@@ -46,6 +46,11 @@ extern int ppl_modify_log(struct r5conf *conf, struct md_rdev *rdev, bool add);
+ extern void ppl_quiesce(struct r5conf *conf, int quiesce);
+ extern int ppl_handle_flush_request(struct r5l_log *log, struct bio *bio);
+ 
++static inline bool raid5_has_log(struct r5conf *conf)
++{
++	return test_bit(MD_HAS_JOURNAL, &conf->mddev->flags);
++}
++
+ static inline bool raid5_has_ppl(struct r5conf *conf)
+ {
+ 	return test_bit(MD_HAS_PPL, &conf->mddev->flags);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 49107c52c8e6..9050bfc71309 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -735,7 +735,7 @@ static bool stripe_can_batch(struct stripe_head *sh)
+ {
+ 	struct r5conf *conf = sh->raid_conf;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return false;
+ 	return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+ 		!test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+@@ -7739,7 +7739,7 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
+ 	sector_t newsize;
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	sectors &= ~((sector_t)conf->chunk_sectors - 1);
+ 	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
+@@ -7790,7 +7790,7 @@ static int check_reshape(struct mddev *mddev)
+ {
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	if (mddev->delta_disks == 0 &&
+ 	    mddev->new_layout == mddev->layout &&
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 17f12c18d225..c37deef3bcf1 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -459,7 +459,7 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu
+ 	cqe = &admin_queue->cq.entries[head_masked];
+ 
+ 	/* Go over all the completions */
+-	while ((cqe->acq_common_descriptor.flags &
++	while ((READ_ONCE(cqe->acq_common_descriptor.flags) &
+ 			ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		/* Do not read the rest of the completion entry before the
+ 		 * phase bit was validated
+@@ -637,7 +637,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
+ 
+ 	mmiowb();
+ 	for (i = 0; i < timeout; i++) {
+-		if (read_resp->req_id == mmio_read->seq_num)
++		if (READ_ONCE(read_resp->req_id) == mmio_read->seq_num)
+ 			break;
+ 
+ 		udelay(1);
+@@ -1796,8 +1796,8 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data)
+ 	aenq_common = &aenq_e->aenq_common_desc;
+ 
+ 	/* Go over all the events */
+-	while ((aenq_common->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) ==
+-	       phase) {
++	while ((READ_ONCE(aenq_common->flags) &
++		ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		pr_debug("AENQ! Group[%x] Syndrom[%x] timestamp: [%llus]\n",
+ 			 aenq_common->group, aenq_common->syndrom,
+ 			 (u64)aenq_common->timestamp_low +
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index f2af87d70594..1b01cd2820ba 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -76,7 +76,7 @@ MODULE_DEVICE_TABLE(pci, ena_pci_tbl);
+ 
+ static int ena_rss_init_default(struct ena_adapter *adapter);
+ static void check_for_admin_com_state(struct ena_adapter *adapter);
+-static void ena_destroy_device(struct ena_adapter *adapter);
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
+ static int ena_restore_device(struct ena_adapter *adapter);
+ 
+ static void ena_tx_timeout(struct net_device *dev)
+@@ -461,7 +461,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 		return -ENOMEM;
+ 	}
+ 
+-	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE,
++	dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
+ 			   DMA_FROM_DEVICE);
+ 	if (unlikely(dma_mapping_error(rx_ring->dev, dma))) {
+ 		u64_stats_update_begin(&rx_ring->syncp);
+@@ -478,7 +478,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 	rx_info->page_offset = 0;
+ 	ena_buf = &rx_info->ena_buf;
+ 	ena_buf->paddr = dma;
+-	ena_buf->len = PAGE_SIZE;
++	ena_buf->len = ENA_PAGE_SIZE;
+ 
+ 	return 0;
+ }
+@@ -495,7 +495,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring,
+ 		return;
+ 	}
+ 
+-	dma_unmap_page(rx_ring->dev, ena_buf->paddr, PAGE_SIZE,
++	dma_unmap_page(rx_ring->dev, ena_buf->paddr, ENA_PAGE_SIZE,
+ 		       DMA_FROM_DEVICE);
+ 
+ 	__free_page(page);
+@@ -916,10 +916,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
+ 	do {
+ 		dma_unmap_page(rx_ring->dev,
+ 			       dma_unmap_addr(&rx_info->ena_buf, paddr),
+-			       PAGE_SIZE, DMA_FROM_DEVICE);
++			       ENA_PAGE_SIZE, DMA_FROM_DEVICE);
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
+-				rx_info->page_offset, len, PAGE_SIZE);
++				rx_info->page_offset, len, ENA_PAGE_SIZE);
+ 
+ 		netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
+ 			  "rx skb updated. len %d. data_len %d\n",
+@@ -1900,7 +1900,7 @@ static int ena_close(struct net_device *netdev)
+ 			  "Destroy failure, restarting device\n");
+ 		ena_dump_stats_to_dmesg(adapter);
+ 		/* rtnl lock already obtained in dev_ioctl() layer */
+-		ena_destroy_device(adapter);
++		ena_destroy_device(adapter, false);
+ 		ena_restore_device(adapter);
+ 	}
+ 
+@@ -2549,12 +2549,15 @@ err_disable_msix:
+ 	return rc;
+ }
+ 
+-static void ena_destroy_device(struct ena_adapter *adapter)
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct ena_com_dev *ena_dev = adapter->ena_dev;
+ 	bool dev_up;
+ 
++	if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		return;
++
+ 	netif_carrier_off(netdev);
+ 
+ 	del_timer_sync(&adapter->timer_service);
+@@ -2562,7 +2565,8 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
+ 	adapter->dev_up_before_reset = dev_up;
+ 
+-	ena_com_set_admin_running_state(ena_dev, false);
++	if (!graceful)
++		ena_com_set_admin_running_state(ena_dev, false);
+ 
+ 	if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+ 		ena_down(adapter);
+@@ -2590,6 +2594,7 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
+ 
+ 	clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
++	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ }
+ 
+ static int ena_restore_device(struct ena_adapter *adapter)
+@@ -2634,6 +2639,7 @@ static int ena_restore_device(struct ena_adapter *adapter)
+ 		}
+ 	}
+ 
++	set_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ));
+ 	dev_err(&pdev->dev, "Device reset completed successfully\n");
+ 
+@@ -2664,7 +2670,7 @@ static void ena_fw_reset_device(struct work_struct *work)
+ 		return;
+ 	}
+ 	rtnl_lock();
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, false);
+ 	ena_restore_device(adapter);
+ 	rtnl_unlock();
+ }
+@@ -3408,30 +3414,24 @@ static void ena_remove(struct pci_dev *pdev)
+ 		netdev->rx_cpu_rmap = NULL;
+ 	}
+ #endif /* CONFIG_RFS_ACCEL */
+-
+-	unregister_netdev(netdev);
+ 	del_timer_sync(&adapter->timer_service);
+ 
+ 	cancel_work_sync(&adapter->reset_task);
+ 
+-	/* Reset the device only if the device is running. */
+-	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
+-		ena_com_dev_reset(ena_dev, adapter->reset_reason);
++	unregister_netdev(netdev);
+ 
+-	ena_free_mgmnt_irq(adapter);
++	/* If the device is running then we want to make sure the device will be
++	 * reset to make sure no more events will be issued by the device.
++	 */
++	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 
+-	ena_disable_msix(adapter);
++	rtnl_lock();
++	ena_destroy_device(adapter, true);
++	rtnl_unlock();
+ 
+ 	free_netdev(netdev);
+ 
+-	ena_com_mmio_reg_read_request_destroy(ena_dev);
+-
+-	ena_com_abort_admin_commands(ena_dev);
+-
+-	ena_com_wait_for_abort_completion(ena_dev);
+-
+-	ena_com_admin_destroy(ena_dev);
+-
+ 	ena_com_rss_destroy(ena_dev);
+ 
+ 	ena_com_delete_debug_area(ena_dev);
+@@ -3466,7 +3466,7 @@ static int ena_suspend(struct pci_dev *pdev,  pm_message_t state)
+ 			"ignoring device reset request as the device is being suspended\n");
+ 		clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 	}
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, true);
+ 	rtnl_unlock();
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+index f1972b5ab650..7c7ae56c52cf 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+@@ -355,4 +355,15 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
+ 
+ int ena_get_sset_count(struct net_device *netdev, int sset);
+ 
++/* The ENA buffer length fields is 16 bit long. So when PAGE_SIZE == 64kB the
++ * driver passas 0.
++ * Since the max packet size the ENA handles is ~9kB limit the buffer length to
++ * 16kB.
++ */
++#if PAGE_SIZE > SZ_16K
++#define ENA_PAGE_SIZE SZ_16K
++#else
++#define ENA_PAGE_SIZE PAGE_SIZE
++#endif
++
+ #endif /* !(ENA_H) */
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 515d96e32143..c4d7479938e2 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -648,7 +648,7 @@ static int macb_halt_tx(struct macb *bp)
+ 		if (!(status & MACB_BIT(TGO)))
+ 			return 0;
+ 
+-		usleep_range(10, 250);
++		udelay(250);
+ 	} while (time_before(halt_time, timeout));
+ 
+ 	return -ETIMEDOUT;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index cad52bd331f7..08a750fb60c4 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -486,6 +486,8 @@ struct hnae_ae_ops {
+ 			u8 *auto_neg, u16 *speed, u8 *duplex);
+ 	void (*toggle_ring_irq)(struct hnae_ring *ring, u32 val);
+ 	void (*adjust_link)(struct hnae_handle *handle, int speed, int duplex);
++	bool (*need_adjust_link)(struct hnae_handle *handle,
++				 int speed, int duplex);
+ 	int (*set_loopback)(struct hnae_handle *handle,
+ 			    enum hnae_loop loop_mode, int en);
+ 	void (*get_ring_bdnum_limit)(struct hnae_queue *queue,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+index bd68379d2bea..bf930ab3c2bd 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+@@ -155,6 +155,41 @@ static void hns_ae_put_handle(struct hnae_handle *handle)
+ 		hns_ae_get_ring_pair(handle->qs[i])->used_by_vf = 0;
+ }
+ 
++static int hns_ae_wait_flow_down(struct hnae_handle *handle)
++{
++	struct dsaf_device *dsaf_dev;
++	struct hns_ppe_cb *ppe_cb;
++	struct hnae_vf_cb *vf_cb;
++	int ret;
++	int i;
++
++	for (i = 0; i < handle->q_num; i++) {
++		ret = hns_rcb_wait_tx_ring_clean(handle->qs[i]);
++		if (ret)
++			return ret;
++	}
++
++	ppe_cb = hns_get_ppe_cb(handle);
++	ret = hns_ppe_wait_tx_fifo_clean(ppe_cb);
++	if (ret)
++		return ret;
++
++	dsaf_dev = hns_ae_get_dsaf_dev(handle->dev);
++	if (!dsaf_dev)
++		return -EINVAL;
++	ret = hns_dsaf_wait_pkt_clean(dsaf_dev, handle->dport_id);
++	if (ret)
++		return ret;
++
++	vf_cb = hns_ae_get_vf_cb(handle);
++	ret = hns_mac_wait_fifo_clean(vf_cb->mac_cb);
++	if (ret)
++		return ret;
++
++	mdelay(10);
++	return 0;
++}
++
+ static void hns_ae_ring_enable_all(struct hnae_handle *handle, int val)
+ {
+ 	int q_num = handle->q_num;
+@@ -399,12 +434,41 @@ static int hns_ae_get_mac_info(struct hnae_handle *handle,
+ 	return hns_mac_get_port_info(mac_cb, auto_neg, speed, duplex);
+ }
+ 
++static bool hns_ae_need_adjust_link(struct hnae_handle *handle, int speed,
++				    int duplex)
++{
++	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
++
++	return hns_mac_need_adjust_link(mac_cb, speed, duplex);
++}
++
+ static void hns_ae_adjust_link(struct hnae_handle *handle, int speed,
+ 			       int duplex)
+ {
+ 	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+ 
+-	hns_mac_adjust_link(mac_cb, speed, duplex);
++	switch (mac_cb->dsaf_dev->dsaf_ver) {
++	case AE_VERSION_1:
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		break;
++
++	case AE_VERSION_2:
++		/* chip need to clear all pkt inside */
++		hns_mac_disable(mac_cb, MAC_COMM_MODE_RX);
++		if (hns_ae_wait_flow_down(handle)) {
++			hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++			break;
++		}
++
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++		break;
++
++	default:
++		break;
++	}
++
++	return;
+ }
+ 
+ static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue,
+@@ -902,6 +966,7 @@ static struct hnae_ae_ops hns_dsaf_ops = {
+ 	.get_status = hns_ae_get_link_status,
+ 	.get_info = hns_ae_get_mac_info,
+ 	.adjust_link = hns_ae_adjust_link,
++	.need_adjust_link = hns_ae_need_adjust_link,
+ 	.set_loopback = hns_ae_config_loopback,
+ 	.get_ring_bdnum_limit = hns_ae_get_ring_bdnum_limit,
+ 	.get_pauseparam = hns_ae_get_pauseparam,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+index 74bd260ca02a..8c7bc5cf193c 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+@@ -257,6 +257,16 @@ static void hns_gmac_get_pausefrm_cfg(void *mac_drv, u32 *rx_pause_en,
+ 	*tx_pause_en = dsaf_get_bit(pause_en, GMAC_PAUSE_EN_TX_FDFC_B);
+ }
+ 
++static bool hns_gmac_need_adjust_link(void *mac_drv, enum mac_speed speed,
++				      int duplex)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	struct hns_mac_cb *mac_cb = drv->mac_cb;
++
++	return (mac_cb->speed != speed) ||
++		(mac_cb->half_duplex == duplex);
++}
++
+ static int hns_gmac_adjust_link(void *mac_drv, enum mac_speed speed,
+ 				u32 full_duplex)
+ {
+@@ -309,6 +319,30 @@ static void hns_gmac_set_promisc(void *mac_drv, u8 en)
+ 		hns_gmac_set_uc_match(mac_drv, en);
+ }
+ 
++int hns_gmac_wait_fifo_clean(void *mac_drv)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(drv, GMAC_FIFO_STATE_REG);
++		/* bit5~bit0 is not send complete pkts */
++		if ((val & 0x3f) == 0)
++			break;
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(drv->dev,
++			"hns ge %d fifo was not idle.\n", drv->mac_id);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ static void hns_gmac_init(void *mac_drv)
+ {
+ 	u32 port;
+@@ -690,6 +724,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->mac_disable = hns_gmac_disable;
+ 	mac_drv->mac_free = hns_gmac_free;
+ 	mac_drv->adjust_link = hns_gmac_adjust_link;
++	mac_drv->need_adjust_link = hns_gmac_need_adjust_link;
+ 	mac_drv->set_tx_auto_pause_frames = hns_gmac_set_tx_auto_pause_frames;
+ 	mac_drv->config_max_frame_length = hns_gmac_config_max_frame_length;
+ 	mac_drv->mac_pausefrm_cfg = hns_gmac_pause_frm_cfg;
+@@ -717,6 +752,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->get_strings = hns_gmac_get_strings;
+ 	mac_drv->update_stats = hns_gmac_update_stats;
+ 	mac_drv->set_promiscuous = hns_gmac_set_promisc;
++	mac_drv->wait_fifo_clean = hns_gmac_wait_fifo_clean;
+ 
+ 	return (void *)mac_drv;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index 9dcc5765f11f..5c6b880c3eb7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -114,6 +114,26 @@ int hns_mac_get_port_info(struct hns_mac_cb *mac_cb,
+ 	return 0;
+ }
+ 
++/**
++ *hns_mac_is_adjust_link - check is need change mac speed and duplex register
++ *@mac_cb: mac device
++ *@speed: phy device speed
++ *@duplex:phy device duplex
++ *
++ */
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
++{
++	struct mac_driver *mac_ctrl_drv;
++
++	mac_ctrl_drv = (struct mac_driver *)(mac_cb->priv.mac);
++
++	if (mac_ctrl_drv->need_adjust_link)
++		return mac_ctrl_drv->need_adjust_link(mac_ctrl_drv,
++			(enum mac_speed)speed, duplex);
++	else
++		return true;
++}
++
+ void hns_mac_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
+ {
+ 	int ret;
+@@ -430,6 +450,16 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
+ 	return 0;
+ }
+ 
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb)
++{
++	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
++
++	if (drv->wait_fifo_clean)
++		return drv->wait_fifo_clean(drv);
++
++	return 0;
++}
++
+ void hns_mac_reset(struct hns_mac_cb *mac_cb)
+ {
+ 	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
+@@ -999,6 +1029,20 @@ static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev)
+ 		return  DSAF_MAX_PORT_NUM;
+ }
+ 
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_enable(mac_cb->priv.mac, mode);
++}
++
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_disable(mac_cb->priv.mac, mode);
++}
++
+ /**
+  * hns_mac_init - init mac
+  * @dsaf_dev: dsa fabric device struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+index bbc0a98e7ca3..fbc75341bef7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+@@ -356,6 +356,9 @@ struct mac_driver {
+ 	/*adjust mac mode of port,include speed and duplex*/
+ 	int (*adjust_link)(void *mac_drv, enum mac_speed speed,
+ 			   u32 full_duplex);
++	/* need adjust link */
++	bool (*need_adjust_link)(void *mac_drv, enum mac_speed speed,
++				 int duplex);
+ 	/* config autoegotaite mode of port*/
+ 	void (*set_an_mode)(void *mac_drv, u8 enable);
+ 	/* config loopbank mode */
+@@ -394,6 +397,7 @@ struct mac_driver {
+ 	void (*get_info)(void *mac_drv, struct mac_info *mac_info);
+ 
+ 	void (*update_stats)(void *mac_drv);
++	int (*wait_fifo_clean)(void *mac_drv);
+ 
+ 	enum mac_mode mac_mode;
+ 	u8 mac_id;
+@@ -427,6 +431,7 @@ void *hns_xgmac_config(struct hns_mac_cb *mac_cb,
+ 
+ int hns_mac_init(struct dsaf_device *dsaf_dev);
+ void mac_adjust_link(struct net_device *net_dev);
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex);
+ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb,	u32 *link_status);
+ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb, u32 vmid, char *addr);
+ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
+@@ -463,5 +468,8 @@ int hns_mac_add_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ int hns_mac_rm_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ 		       const unsigned char *addr);
+ int hns_mac_clr_multicast(struct hns_mac_cb *mac_cb, int vfn);
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb);
+ 
+ #endif /* _HNS_DSAF_MAC_H */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+index 0ce07f6eb1e6..0ef6d429308f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+@@ -2733,6 +2733,35 @@ void hns_dsaf_set_promisc_tcam(struct dsaf_device *dsaf_dev,
+ 	soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX;
+ }
+ 
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port)
++{
++	u32 val, val_tmp;
++	int wait_cnt;
++
++	if (port >= DSAF_SERVICE_NW_NUM)
++		return 0;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(dsaf_dev, DSAF_VOQ_IN_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		val_tmp = dsaf_read_dev(dsaf_dev, DSAF_VOQ_OUT_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		if (val == val_tmp)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(dsaf_dev->dev, "hns dsaf clean wait timeout(%u - %u).\n",
++			val, val_tmp);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * dsaf_probe - probo dsaf dev
+  * @pdev: dasf platform device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+index 4507e8222683..0e1cd99831a6 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+@@ -44,6 +44,8 @@ struct hns_mac_cb;
+ #define DSAF_ROCE_CREDIT_CHN	8
+ #define DSAF_ROCE_CHAN_MODE	3
+ 
++#define HNS_MAX_WAIT_CNT 10000
++
+ enum dsaf_roce_port_mode {
+ 	DSAF_ROCE_6PORT_MODE,
+ 	DSAF_ROCE_4PORT_MODE,
+@@ -463,5 +465,6 @@ int hns_dsaf_rm_mac_addr(
+ 
+ int hns_dsaf_clr_mac_mc_port(struct dsaf_device *dsaf_dev,
+ 			     u8 mac_id, u8 port_num);
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port);
+ 
+ #endif /* __HNS_DSAF_MAIN_H__ */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+index 93e71e27401b..a19932aeb9d7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+@@ -274,6 +274,29 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en)
+ 	dsaf_write_dev(ppe_cb, PPE_INTEN_REG, msk_vlue & vld_msk);
+ }
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb)
++{
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(ppe_cb, PPE_CURR_TX_FIFO0_REG) & 0x3ffU;
++		if (!val)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(ppe_cb->dev, "hns ppe tx fifo clean wait timeout, still has %u pkt.\n",
++			val);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * ppe_init_hw - init ppe
+  * @ppe_cb: ppe device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+index 9d8e643e8aa6..f670e63a5a01 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+@@ -100,6 +100,7 @@ struct ppe_common_cb {
+ 
+ };
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb);
+ int hns_ppe_init(struct dsaf_device *dsaf_dev);
+ 
+ void hns_ppe_uninit(struct dsaf_device *dsaf_dev);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+index e2e28532e4dc..1e43d7a3ca86 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+@@ -66,6 +66,29 @@ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag)
+ 			"queue(%d) wait fbd(%d) clean fail!!\n", i, fbd_num);
+ }
+ 
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs)
++{
++	u32 head, tail;
++	int wait_cnt;
++
++	tail = dsaf_read_dev(&qs->tx_ring, RCB_REG_TAIL);
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		head = dsaf_read_dev(&qs->tx_ring, RCB_REG_HEAD);
++		if (tail == head)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(qs->dev->dev, "rcb wait timeout, head not equal to tail.\n");
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  *hns_rcb_reset_ring_hw - ring reset
+  *@q: ring struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+index 602816498c8d..2319b772a271 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+@@ -136,6 +136,7 @@ void hns_rcbv2_int_clr_hw(struct hnae_queue *q, u32 flag);
+ void hns_rcb_init_hw(struct ring_pair_cb *ring);
+ void hns_rcb_reset_ring_hw(struct hnae_queue *q);
+ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag);
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs);
+ u32 hns_rcb_get_rx_coalesced_frames(
+ 	struct rcb_common_cb *rcb_common, u32 port_idx);
+ u32 hns_rcb_get_tx_coalesced_frames(
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+index 886cbbf25761..74d935d82cbc 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+@@ -464,6 +464,7 @@
+ #define RCB_RING_INTMSK_TX_OVERTIME_REG		0x000C4
+ #define RCB_RING_INTSTS_TX_OVERTIME_REG		0x000C8
+ 
++#define GMAC_FIFO_STATE_REG			0x0000UL
+ #define GMAC_DUPLEX_TYPE_REG			0x0008UL
+ #define GMAC_FD_FC_TYPE_REG			0x000CUL
+ #define GMAC_TX_WATER_LINE_REG			0x0010UL
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef994a715f93..b4518f45f048 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -1212,11 +1212,26 @@ static void hns_nic_adjust_link(struct net_device *ndev)
+ 	struct hnae_handle *h = priv->ae_handle;
+ 	int state = 1;
+ 
++	/* If there is no phy, do not need adjust link */
+ 	if (ndev->phydev) {
+-		h->dev->ops->adjust_link(h, ndev->phydev->speed,
+-					 ndev->phydev->duplex);
+-		state = ndev->phydev->link;
++		/* When phy link down, do nothing */
++		if (ndev->phydev->link == 0)
++			return;
++
++		if (h->dev->ops->need_adjust_link(h, ndev->phydev->speed,
++						  ndev->phydev->duplex)) {
++			/* because Hi161X chip don't support to change gmac
++			 * speed and duplex with traffic. Delay 200ms to
++			 * make sure there is no more data in chip FIFO.
++			 */
++			netif_carrier_off(ndev);
++			msleep(200);
++			h->dev->ops->adjust_link(h, ndev->phydev->speed,
++						 ndev->phydev->duplex);
++			netif_carrier_on(ndev);
++		}
+ 	}
++
+ 	state = state && h->dev->ops->get_status(h);
+ 
+ 	if (state != priv->link) {
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index 2e14a3ae1d8b..c1e947bb852f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -243,7 +243,9 @@ static int hns_nic_set_link_ksettings(struct net_device *net_dev,
+ 	}
+ 
+ 	if (h->dev->ops->adjust_link) {
++		netif_carrier_off(net_dev);
+ 		h->dev->ops->adjust_link(h, (int)speed, cmd->base.duplex);
++		netif_carrier_on(net_dev);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 354c0982847b..372664686309 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -494,9 +494,6 @@ static u32 __emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_s
+ 	case 16384:
+ 		ret |= EMAC_MR1_RFS_16K;
+ 		break;
+-	case 8192:
+-		ret |= EMAC4_MR1_RFS_8K;
+-		break;
+ 	case 4096:
+ 		ret |= EMAC_MR1_RFS_4K;
+ 		break;
+@@ -537,6 +534,9 @@ static u32 __emac4_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_
+ 	case 16384:
+ 		ret |= EMAC4_MR1_RFS_16K;
+ 		break;
++	case 8192:
++		ret |= EMAC4_MR1_RFS_8K;
++		break;
+ 	case 4096:
+ 		ret |= EMAC4_MR1_RFS_4K;
+ 		break;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index ffe7acbeaa22..d834308adf95 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1841,11 +1841,17 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 			adapter->map_id = 1;
+ 			release_rx_pools(adapter);
+ 			release_tx_pools(adapter);
+-			init_rx_pools(netdev);
+-			init_tx_pools(netdev);
++			rc = init_rx_pools(netdev);
++			if (rc)
++				return rc;
++			rc = init_tx_pools(netdev);
++			if (rc)
++				return rc;
+ 
+ 			release_napi(adapter);
+-			init_napi(adapter);
++			rc = init_napi(adapter);
++			if (rc)
++				return rc;
+ 		} else {
+ 			rc = reset_tx_pools(adapter);
+ 			if (rc)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 62e57b05a0ae..56b31e903cc1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -3196,11 +3196,13 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
+ 		return budget;
+ 
+ 	/* all work done, exit the polling mode */
+-	napi_complete_done(napi, work_done);
+-	if (adapter->rx_itr_setting & 1)
+-		ixgbe_set_itr(q_vector);
+-	if (!test_bit(__IXGBE_DOWN, &adapter->state))
+-		ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
++	if (likely(napi_complete_done(napi, work_done))) {
++		if (adapter->rx_itr_setting & 1)
++			ixgbe_set_itr(q_vector);
++		if (!test_bit(__IXGBE_DOWN, &adapter->state))
++			ixgbe_irq_enable_queues(adapter,
++						BIT_ULL(q_vector->v_idx));
++	}
+ 
+ 	return min(work_done, budget - 1);
+ }
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 661fa5a38df2..b8bba64673e5 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4685,6 +4685,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	/* 9704 == 9728 - 20 and rounding to 8 */
+ 	dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE;
++	dev->dev.of_node = port_node;
+ 
+ 	/* Phylink isn't used w/ ACPI as of now */
+ 	if (port_node) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index 922811fb66e7..37ba7c78859d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -396,16 +396,17 @@ void mlx5_remove_dev_by_protocol(struct mlx5_core_dev *dev, int protocol)
+ 		}
+ }
+ 
+-static u16 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
++static u32 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
+ {
+-	return (u16)((dev->pdev->bus->number << 8) |
++	return (u32)((pci_domain_nr(dev->pdev->bus) << 16) |
++		     (dev->pdev->bus->number << 8) |
+ 		     PCI_SLOT(dev->pdev->devfn));
+ }
+ 
+ /* Must be called with intf_mutex held */
+ struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
+ {
+-	u16 pci_id = mlx5_gen_pci_id(dev);
++	u32 pci_id = mlx5_gen_pci_id(dev);
+ 	struct mlx5_core_dev *res = NULL;
+ 	struct mlx5_core_dev *tmp_dev;
+ 	struct mlx5_priv *priv;
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index e5eb361b973c..1d1e66002232 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -730,7 +730,7 @@ struct rtl8169_tc_offsets {
+ };
+ 
+ enum rtl_flag {
+-	RTL_FLAG_TASK_ENABLED,
++	RTL_FLAG_TASK_ENABLED = 0,
+ 	RTL_FLAG_TASK_SLOW_PENDING,
+ 	RTL_FLAG_TASK_RESET_PENDING,
+ 	RTL_FLAG_TASK_PHY_PENDING,
+@@ -5150,13 +5150,13 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
+ 	rtl_init_rxcfg(tp);
++	rtl_set_tx_config_registers(tp);
+ 
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+@@ -7125,7 +7125,8 @@ static int rtl8169_close(struct net_device *dev)
+ 	rtl8169_update_counters(tp);
+ 
+ 	rtl_lock_work(tp);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
+ 
+ 	rtl8169_down(dev);
+ 	rtl_unlock_work(tp);
+@@ -7301,7 +7302,9 @@ static void rtl8169_net_suspend(struct net_device *dev)
+ 
+ 	rtl_lock_work(tp);
+ 	napi_disable(&tp->napi);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
++
+ 	rtl_unlock_work(tp);
+ 
+ 	rtl_pll_power_down(tp);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 5614fd231bbe..6520379b390e 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -807,6 +807,41 @@ static struct sh_eth_cpu_data r8a77980_data = {
+ 	.magic		= 1,
+ 	.cexcr		= 1,
+ };
++
++/* R7S9210 */
++static struct sh_eth_cpu_data r7s9210_data = {
++	.soft_reset	= sh_eth_soft_reset,
++
++	.set_duplex	= sh_eth_set_duplex,
++	.set_rate	= sh_eth_set_rate_rcar,
++
++	.register_type	= SH_ETH_REG_FAST_SH4,
++
++	.edtrr_trns	= EDTRR_TRNS_ETHER,
++	.ecsr_value	= ECSR_ICD,
++	.ecsipr_value	= ECSIPR_ICDIP,
++	.eesipr_value	= EESIPR_TWBIP | EESIPR_TABTIP | EESIPR_RABTIP |
++			  EESIPR_RFCOFIP | EESIPR_ECIIP | EESIPR_FTCIP |
++			  EESIPR_TDEIP | EESIPR_TFUFIP | EESIPR_FRIP |
++			  EESIPR_RDEIP | EESIPR_RFOFIP | EESIPR_CNDIP |
++			  EESIPR_DLCIP | EESIPR_CDIP | EESIPR_TROIP |
++			  EESIPR_RMAFIP | EESIPR_RRFIP | EESIPR_RTLFIP |
++			  EESIPR_RTSFIP | EESIPR_PREIP | EESIPR_CERFIP,
++
++	.tx_check	= EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_TRO,
++	.eesr_err_check	= EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
++			  EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE,
++
++	.fdr_value	= 0x0000070f,
++
++	.apr		= 1,
++	.mpr		= 1,
++	.tpauser	= 1,
++	.hw_swap	= 1,
++	.rpadir		= 1,
++	.no_ade		= 1,
++	.xdfar_rw	= 1,
++};
+ #endif /* CONFIG_OF */
+ 
+ static void sh_eth_set_rate_sh7724(struct net_device *ndev)
+@@ -3131,6 +3166,7 @@ static const struct of_device_id sh_eth_match_table[] = {
+ 	{ .compatible = "renesas,ether-r8a7794", .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,gether-r8a77980", .data = &r8a77980_data },
+ 	{ .compatible = "renesas,ether-r7s72100", .data = &r7s72100_data },
++	{ .compatible = "renesas,ether-r7s9210", .data = &r7s9210_data },
+ 	{ .compatible = "renesas,rcar-gen1-ether", .data = &rcar_gen1_data },
+ 	{ .compatible = "renesas,rcar-gen2-ether", .data = &rcar_gen2_data },
+ 	{ }
+diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
+index 6b0e1ec346cb..d46d57b989ae 100644
+--- a/drivers/net/wireless/broadcom/b43/dma.c
++++ b/drivers/net/wireless/broadcom/b43/dma.c
+@@ -1518,13 +1518,15 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
+ 			}
+ 		} else {
+ 			/* More than a single header/data pair were missed.
+-			 * Report this error, and reset the controller to
++			 * Report this error. If running with open-source
++			 * firmware, then reset the controller to
+ 			 * revive operation.
+ 			 */
+ 			b43dbg(dev->wl,
+ 			       "Out of order TX status report on DMA ring %d. Expected %d, but got %d\n",
+ 			       ring->index, firstused, slot);
+-			b43_controller_restart(dev, "Out of order TX");
++			if (dev->fw.opensource)
++				b43_controller_restart(dev, "Out of order TX");
+ 			return;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index b815ba38dbdb..88121548eb9f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -877,15 +877,12 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ?
+ 			     iwl_ext_nvm_channels : iwl_nvm_channels;
+ 	struct ieee80211_regdomain *regd, *copy_rd;
+-	int size_of_regd, regd_to_copy, wmms_to_copy;
+-	int size_of_wmms = 0;
++	int size_of_regd, regd_to_copy;
+ 	struct ieee80211_reg_rule *rule;
+-	struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm;
+ 	struct regdb_ptrs *regdb_ptrs;
+ 	enum nl80211_band band;
+ 	int center_freq, prev_center_freq = 0;
+-	int valid_rules = 0, n_wmms = 0;
+-	int i;
++	int valid_rules = 0;
+ 	bool new_rule;
+ 	int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ?
+ 			 IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS;
+@@ -904,11 +901,7 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		sizeof(struct ieee80211_regdomain) +
+ 		num_of_ch * sizeof(struct ieee80211_reg_rule);
+ 
+-	if (geo_info & GEO_WMM_ETSI_5GHZ_INFO)
+-		size_of_wmms =
+-			num_of_ch * sizeof(struct ieee80211_wmm_rule);
+-
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -922,8 +915,6 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd->alpha2[0] = fw_mcc >> 8;
+ 	regd->alpha2[1] = fw_mcc & 0xff;
+ 
+-	wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+ 	for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) {
+ 		ch_flags = (u16)__le32_to_cpup(channels + ch_idx);
+ 		band = (ch_idx < NUM_2GHZ_CHANNELS) ?
+@@ -977,26 +968,10 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		    band == NL80211_BAND_2GHZ)
+ 			continue;
+ 
+-		if (!reg_query_regdb_wmm(regd->alpha2, center_freq,
+-					 &regdb_ptrs[n_wmms].token, wmm_rule)) {
+-			/* Add only new rules */
+-			for (i = 0; i < n_wmms; i++) {
+-				if (regdb_ptrs[i].token ==
+-				    regdb_ptrs[n_wmms].token) {
+-					rule->wmm_rule = regdb_ptrs[i].rule;
+-					break;
+-				}
+-			}
+-			if (i == n_wmms) {
+-				rule->wmm_rule = wmm_rule;
+-				regdb_ptrs[n_wmms++].rule = wmm_rule;
+-				wmm_rule++;
+-			}
+-		}
++		reg_query_regdb_wmm(regd->alpha2, center_freq, rule);
+ 	}
+ 
+ 	regd->n_reg_rules = valid_rules;
+-	regd->n_wmm_rules = n_wmms;
+ 
+ 	/*
+ 	 * Narrow down regdom for unused regulatory rules to prevent hole
+@@ -1005,28 +980,13 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd_to_copy = sizeof(struct ieee80211_regdomain) +
+ 		valid_rules * sizeof(struct ieee80211_reg_rule);
+ 
+-	wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms;
+-
+-	copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL);
++	copy_rd = kzalloc(regd_to_copy, GFP_KERNEL);
+ 	if (!copy_rd) {
+ 		copy_rd = ERR_PTR(-ENOMEM);
+ 		goto out;
+ 	}
+ 
+ 	memcpy(copy_rd, regd, regd_to_copy);
+-	memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd,
+-	       wmms_to_copy);
+-
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+-	for (i = 0; i < regd->n_reg_rules; i++) {
+-		if (!regd->reg_rules[i].wmm_rule)
+-			continue;
+-
+-		copy_rd->reg_rules[i].wmm_rule = d_wmm +
+-			(regd->reg_rules[i].wmm_rule - s_wmm);
+-	}
+ 
+ out:
+ 	kfree(regdb_ptrs);
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 18e819d964f1..80e2c8595c7c 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -33,6 +33,7 @@
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ #include <linux/rhashtable.h>
++#include <linux/nospec.h>
+ #include "mac80211_hwsim.h"
+ 
+ #define WARN_QUEUE 100
+@@ -2699,9 +2700,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 				IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 				IEEE80211_VHT_CAP_SHORT_GI_160 |
+ 				IEEE80211_VHT_CAP_TXSTBC |
+-				IEEE80211_VHT_CAP_RXSTBC_1 |
+-				IEEE80211_VHT_CAP_RXSTBC_2 |
+-				IEEE80211_VHT_CAP_RXSTBC_3 |
+ 				IEEE80211_VHT_CAP_RXSTBC_4 |
+ 				IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
+ 			sband->vht_cap.vht_mcs.rx_mcs_map =
+@@ -3194,6 +3192,11 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 	if (info->attrs[HWSIM_ATTR_CHANNELS])
+ 		param.channels = nla_get_u32(info->attrs[HWSIM_ATTR_CHANNELS]);
+ 
++	if (param.channels < 1) {
++		GENL_SET_ERR_MSG(info, "must have at least one channel");
++		return -EINVAL;
++	}
++
+ 	if (param.channels > CFG80211_MAX_NUM_DIFFERENT_CHANNELS) {
+ 		GENL_SET_ERR_MSG(info, "too many channels specified");
+ 		return -EINVAL;
+@@ -3227,6 +3230,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 			kfree(hwname);
+ 			return -EINVAL;
+ 		}
++
++		idx = array_index_nospec(idx,
++					 ARRAY_SIZE(hwsim_world_regdom_custom));
+ 		param.regd = hwsim_world_regdom_custom[idx];
+ 	}
+ 
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 52e0c5d579a7..1d909e5ba657 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -65,6 +65,7 @@ struct nvmet_rdma_rsp {
+ 
+ 	struct nvmet_req	req;
+ 
++	bool			allocated;
+ 	u8			n_rdma;
+ 	u32			flags;
+ 	u32			invalidate_rkey;
+@@ -166,11 +167,19 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&queue->rsps_lock, flags);
+-	rsp = list_first_entry(&queue->free_rsps,
++	rsp = list_first_entry_or_null(&queue->free_rsps,
+ 				struct nvmet_rdma_rsp, free_list);
+-	list_del(&rsp->free_list);
++	if (likely(rsp))
++		list_del(&rsp->free_list);
+ 	spin_unlock_irqrestore(&queue->rsps_lock, flags);
+ 
++	if (unlikely(!rsp)) {
++		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
++		if (unlikely(!rsp))
++			return NULL;
++		rsp->allocated = true;
++	}
++
+ 	return rsp;
+ }
+ 
+@@ -179,6 +188,11 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
+ {
+ 	unsigned long flags;
+ 
++	if (rsp->allocated) {
++		kfree(rsp);
++		return;
++	}
++
+ 	spin_lock_irqsave(&rsp->queue->rsps_lock, flags);
+ 	list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);
+ 	spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);
+@@ -702,6 +716,15 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ 	cmd->queue = queue;
+ 	rsp = nvmet_rdma_get_rsp(queue);
++	if (unlikely(!rsp)) {
++		/*
++		 * we get here only under memory pressure,
++		 * silently drop and have the host retry
++		 * as we can't even fail it.
++		 */
++		nvmet_rdma_post_recv(queue->dev, cmd);
++		return;
++	}
+ 	rsp->queue = queue;
+ 	rsp->cmd = cmd;
+ 	rsp->flags = 0;
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index ffdb78421a25..b0f0d4e86f67 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -25,6 +25,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/netdev_features.h>
+ #include <linux/skbuff.h>
++#include <linux/vmalloc.h>
+ 
+ #include <net/iucv/af_iucv.h>
+ #include <net/dsfield.h>
+@@ -4738,7 +4739,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 
+ 	priv.buffer_len = oat_data.buffer_len;
+ 	priv.response_len = 0;
+-	priv.buffer =  kzalloc(oat_data.buffer_len, GFP_KERNEL);
++	priv.buffer = vzalloc(oat_data.buffer_len);
+ 	if (!priv.buffer) {
+ 		rc = -ENOMEM;
+ 		goto out;
+@@ -4779,7 +4780,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 			rc = -EFAULT;
+ 
+ out_free:
+-	kfree(priv.buffer);
++	vfree(priv.buffer);
+ out:
+ 	return rc;
+ }
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 2487f0aeb165..3bef60ae0480 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -425,7 +425,7 @@ static int qeth_l2_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 5905dc63e256..3ea840542767 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1390,7 +1390,7 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 29bf1e60f542..39eb415987fc 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -1346,7 +1346,7 @@ struct fib {
+ struct aac_hba_map_info {
+ 	__le32	rmw_nexus;		/* nexus for native HBA devices */
+ 	u8		devtype;	/* device type */
+-	u8		reset_state;	/* 0 - no reset, 1..x - */
++	s8		reset_state;	/* 0 - no reset, 1..x - */
+ 					/* after xth TM LUN reset */
+ 	u16		qd_limit;
+ 	u32		scan_counter;
+diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c
+index a10cf25ee7f9..e4baf04ec5ea 100644
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -1512,6 +1512,46 @@ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16)
+ 	return caps32;
+ }
+ 
++/**
++ *	fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits
++ *	@caps32: a 32-bit Port Capabilities value
++ *
++ *	Returns the equivalent 16-bit Port Capabilities value.  Note that
++ *	not all 32-bit Port Capabilities can be represented in the 16-bit
++ *	Port Capabilities and some fields/values may not make it.
++ */
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32)
++{
++	fw_port_cap16_t caps16 = 0;
++
++	#define CAP32_TO_CAP16(__cap) \
++		do { \
++			if (caps32 & FW_PORT_CAP32_##__cap) \
++				caps16 |= FW_PORT_CAP_##__cap; \
++		} while (0)
++
++	CAP32_TO_CAP16(SPEED_100M);
++	CAP32_TO_CAP16(SPEED_1G);
++	CAP32_TO_CAP16(SPEED_10G);
++	CAP32_TO_CAP16(SPEED_25G);
++	CAP32_TO_CAP16(SPEED_40G);
++	CAP32_TO_CAP16(SPEED_100G);
++	CAP32_TO_CAP16(FC_RX);
++	CAP32_TO_CAP16(FC_TX);
++	CAP32_TO_CAP16(802_3_PAUSE);
++	CAP32_TO_CAP16(802_3_ASM_DIR);
++	CAP32_TO_CAP16(ANEG);
++	CAP32_TO_CAP16(FORCE_PAUSE);
++	CAP32_TO_CAP16(MDIAUTO);
++	CAP32_TO_CAP16(MDISTRAIGHT);
++	CAP32_TO_CAP16(FEC_RS);
++	CAP32_TO_CAP16(FEC_BASER_RS);
++
++	#undef CAP32_TO_CAP16
++
++	return caps16;
++}
++
+ /**
+  *      lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities
+  *      @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value
+@@ -1670,7 +1710,7 @@ csio_enable_ports(struct csio_hw *hw)
+ 			val = 1;
+ 
+ 			csio_mb_params(hw, mbp, CSIO_MB_DEFAULT_TMO,
+-				       hw->pfn, 0, 1, &param, &val, false,
++				       hw->pfn, 0, 1, &param, &val, true,
+ 				       NULL);
+ 
+ 			if (csio_mb_issue(hw, mbp)) {
+@@ -1680,16 +1720,9 @@ csio_enable_ports(struct csio_hw *hw)
+ 				return -EINVAL;
+ 			}
+ 
+-			csio_mb_process_read_params_rsp(hw, mbp, &retval, 1,
+-							&val);
+-			if (retval != FW_SUCCESS) {
+-				csio_err(hw, "FW_PARAMS_CMD(r) port:%d failed: 0x%x\n",
+-					 portid, retval);
+-				mempool_free(mbp, hw->mb_mempool);
+-				return -EINVAL;
+-			}
+-
+-			fw_caps = val;
++			csio_mb_process_read_params_rsp(hw, mbp, &retval,
++							0, NULL);
++			fw_caps = retval ? FW_CAPS16 : FW_CAPS32;
+ 		}
+ 
+ 		/* Read PORT information */
+@@ -2275,8 +2308,8 @@ bye:
+ }
+ 
+ /*
+- * Returns -EINVAL if attempts to flash the firmware failed
+- * else returns 0,
++ * Returns -EINVAL if attempts to flash the firmware failed,
++ * -ENOMEM if memory allocation failed else returns 0,
+  * if flashing was not attempted because the card had the
+  * latest firmware ECANCELED is returned
+  */
+@@ -2304,6 +2337,13 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		return -EINVAL;
+ 	}
+ 
++	/* allocate memory to read the header of the firmware on the
++	 * card
++	 */
++	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
++	if (!card_fw)
++		return -ENOMEM;
++
+ 	if (csio_is_t5(pci_dev->device & CSIO_HW_CHIP_MASK))
+ 		fw_bin_file = FW_FNAME_T5;
+ 	else
+@@ -2317,11 +2357,6 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		fw_size = fw->size;
+ 	}
+ 
+-	/* allocate memory to read the header of the firmware on the
+-	 * card
+-	 */
+-	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
+-
+ 	/* upgrade FW logic */
+ 	ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw,
+ 			 hw->fw_state, reset);
+diff --git a/drivers/scsi/csiostor/csio_hw.h b/drivers/scsi/csiostor/csio_hw.h
+index 9e73ef771eb7..e351af6e7c81 100644
+--- a/drivers/scsi/csiostor/csio_hw.h
++++ b/drivers/scsi/csiostor/csio_hw.h
+@@ -639,6 +639,7 @@ int csio_handle_intr_status(struct csio_hw *, unsigned int,
+ 
+ fw_port_cap32_t fwcap_to_fwspeed(fw_port_cap32_t acaps);
+ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16);
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32);
+ fw_port_cap32_t lstatus_to_fwcap(u32 lstatus);
+ 
+ int csio_hw_start(struct csio_hw *);
+diff --git a/drivers/scsi/csiostor/csio_mb.c b/drivers/scsi/csiostor/csio_mb.c
+index c026417269c3..6f13673d6aa0 100644
+--- a/drivers/scsi/csiostor/csio_mb.c
++++ b/drivers/scsi/csiostor/csio_mb.c
+@@ -368,7 +368,7 @@ csio_mb_port(struct csio_hw *hw, struct csio_mb *mbp, uint32_t tmo,
+ 			FW_CMD_LEN16_V(sizeof(*cmdp) / 16));
+ 
+ 	if (fw_caps == FW_CAPS16)
+-		cmdp->u.l1cfg.rcap = cpu_to_be32(fc);
++		cmdp->u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(fc));
+ 	else
+ 		cmdp->u.l1cfg32.rcap32 = cpu_to_be32(fc);
+ }
+@@ -395,8 +395,8 @@ csio_mb_process_read_port_rsp(struct csio_hw *hw, struct csio_mb *mbp,
+ 			*pcaps = fwcaps16_to_caps32(ntohs(rsp->u.info.pcap));
+ 			*acaps = fwcaps16_to_caps32(ntohs(rsp->u.info.acap));
+ 		} else {
+-			*pcaps = ntohs(rsp->u.info32.pcaps32);
+-			*acaps = ntohs(rsp->u.info32.acaps32);
++			*pcaps = be32_to_cpu(rsp->u.info32.pcaps32);
++			*acaps = be32_to_cpu(rsp->u.info32.acaps32);
+ 		}
+ 	}
+ }
+diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
+index fc3babc15fa3..a6f96b35e971 100644
+--- a/drivers/scsi/qedi/qedi.h
++++ b/drivers/scsi/qedi/qedi.h
+@@ -77,6 +77,11 @@ enum qedi_nvm_tgts {
+ 	QEDI_NVM_TGT_SEC,
+ };
+ 
++struct qedi_nvm_iscsi_image {
++	struct nvm_iscsi_cfg iscsi_cfg;
++	u32 crc;
++};
++
+ struct qedi_uio_ctrl {
+ 	/* meta data */
+ 	u32 uio_hsi_version;
+@@ -294,7 +299,7 @@ struct qedi_ctx {
+ 	void *bdq_pbl_list;
+ 	dma_addr_t bdq_pbl_list_dma;
+ 	u8 bdq_pbl_list_num_entries;
+-	struct nvm_iscsi_cfg *iscsi_cfg;
++	struct qedi_nvm_iscsi_image *iscsi_image;
+ 	dma_addr_t nvm_buf_dma;
+ 	void __iomem *bdq_primary_prod;
+ 	void __iomem *bdq_secondary_prod;
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index cff83b9457f7..3e18a68c2b03 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1346,23 +1346,26 @@ exit_setup_int:
+ 
+ static void qedi_free_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	if (qedi->iscsi_cfg)
++	if (qedi->iscsi_image)
+ 		dma_free_coherent(&qedi->pdev->dev,
+-				  sizeof(struct nvm_iscsi_cfg),
+-				  qedi->iscsi_cfg, qedi->nvm_buf_dma);
++				  sizeof(struct qedi_nvm_iscsi_image),
++				  qedi->iscsi_image, qedi->nvm_buf_dma);
+ }
+ 
+ static int qedi_alloc_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	qedi->iscsi_cfg = dma_zalloc_coherent(&qedi->pdev->dev,
+-					     sizeof(struct nvm_iscsi_cfg),
+-					     &qedi->nvm_buf_dma, GFP_KERNEL);
+-	if (!qedi->iscsi_cfg) {
++	struct qedi_nvm_iscsi_image nvm_image;
++
++	qedi->iscsi_image = dma_zalloc_coherent(&qedi->pdev->dev,
++						sizeof(nvm_image),
++						&qedi->nvm_buf_dma,
++						GFP_KERNEL);
++	if (!qedi->iscsi_image) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "Could not allocate NVM BUF.\n");
+ 		return -ENOMEM;
+ 	}
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+-		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_cfg,
++		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_image,
+ 		  qedi->nvm_buf_dma);
+ 
+ 	return 0;
+@@ -1905,7 +1908,7 @@ qedi_get_nvram_block(struct qedi_ctx *qedi)
+ 	struct nvm_iscsi_block *block;
+ 
+ 	pf = qedi->dev_info.common.abs_pf_id;
+-	block = &qedi->iscsi_cfg->block[0];
++	block = &qedi->iscsi_image->iscsi_cfg.block[0];
+ 	for (i = 0; i < NUM_OF_ISCSI_PF_SUPPORTED; i++, block++) {
+ 		flags = ((block->id) & NVM_ISCSI_CFG_BLK_CTRL_FLAG_MASK) >>
+ 			NVM_ISCSI_CFG_BLK_CTRL_FLAG_OFFSET;
+@@ -2194,15 +2197,14 @@ static void qedi_boot_release(void *data)
+ static int qedi_get_boot_info(struct qedi_ctx *qedi)
+ {
+ 	int ret = 1;
+-	u16 len;
+-
+-	len = sizeof(struct nvm_iscsi_cfg);
++	struct qedi_nvm_iscsi_image nvm_image;
+ 
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ 		  "Get NVM iSCSI CFG image\n");
+ 	ret = qedi_ops->common->nvm_get_image(qedi->cdev,
+ 					      QED_NVM_IMAGE_ISCSI_CFG,
+-					      (char *)qedi->iscsi_cfg, len);
++					      (char *)qedi->iscsi_image,
++					      sizeof(nvm_image));
+ 	if (ret)
+ 		QEDI_ERR(&qedi->dbg_ctx,
+ 			 "Could not get NVM image. ret = %d\n", ret);
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 8e223799347a..a4ecc9d77624 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4211,22 +4211,15 @@ int iscsit_close_connection(
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-	conn->conn_ops = NULL;
+-
+ 	if (conn->sock)
+ 		sock_release(conn->sock);
+ 
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-
+ 	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+ 	conn->conn_state = TARG_CONN_STATE_FREE;
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ 
+ 	spin_lock_bh(&sess->conn_lock);
+ 	atomic_dec(&sess->nconn);
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 68b3eb00a9d0..2fda5b0664fd 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -67,45 +67,10 @@ static struct iscsi_login *iscsi_login_init_conn(struct iscsi_conn *conn)
+ 		goto out_req_buf;
+ 	}
+ 
+-	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
+-	if (!conn->conn_ops) {
+-		pr_err("Unable to allocate memory for"
+-			" struct iscsi_conn_ops.\n");
+-		goto out_rsp_buf;
+-	}
+-
+-	init_waitqueue_head(&conn->queues_wq);
+-	INIT_LIST_HEAD(&conn->conn_list);
+-	INIT_LIST_HEAD(&conn->conn_cmd_list);
+-	INIT_LIST_HEAD(&conn->immed_queue_list);
+-	INIT_LIST_HEAD(&conn->response_queue_list);
+-	init_completion(&conn->conn_post_wait_comp);
+-	init_completion(&conn->conn_wait_comp);
+-	init_completion(&conn->conn_wait_rcfr_comp);
+-	init_completion(&conn->conn_waiting_on_uc_comp);
+-	init_completion(&conn->conn_logout_comp);
+-	init_completion(&conn->rx_half_close_comp);
+-	init_completion(&conn->tx_half_close_comp);
+-	init_completion(&conn->rx_login_comp);
+-	spin_lock_init(&conn->cmd_lock);
+-	spin_lock_init(&conn->conn_usage_lock);
+-	spin_lock_init(&conn->immed_queue_lock);
+-	spin_lock_init(&conn->nopin_timer_lock);
+-	spin_lock_init(&conn->response_queue_lock);
+-	spin_lock_init(&conn->state_lock);
+-
+-	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
+-		pr_err("Unable to allocate conn->conn_cpumask\n");
+-		goto out_conn_ops;
+-	}
+ 	conn->conn_login = login;
+ 
+ 	return login;
+ 
+-out_conn_ops:
+-	kfree(conn->conn_ops);
+-out_rsp_buf:
+-	kfree(login->rsp_buf);
+ out_req_buf:
+ 	kfree(login->req_buf);
+ out_login:
+@@ -310,11 +275,9 @@ static int iscsi_login_zero_tsih_s1(
+ 		return -ENOMEM;
+ 	}
+ 
+-	ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+-	if (unlikely(ret)) {
+-		kfree(sess);
+-		return ret;
+-	}
++	if (iscsi_login_set_conn_values(sess, conn, pdu->cid))
++		goto free_sess;
++
+ 	sess->init_task_tag	= pdu->itt;
+ 	memcpy(&sess->isid, pdu->isid, 6);
+ 	sess->exp_cmd_sn	= be32_to_cpu(pdu->cmdsn);
+@@ -1157,6 +1120,75 @@ iscsit_conn_set_transport(struct iscsi_conn *conn, struct iscsit_transport *t)
+ 	return 0;
+ }
+ 
++static struct iscsi_conn *iscsit_alloc_conn(struct iscsi_np *np)
++{
++	struct iscsi_conn *conn;
++
++	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	if (!conn) {
++		pr_err("Could not allocate memory for new connection\n");
++		return NULL;
++	}
++	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
++	conn->conn_state = TARG_CONN_STATE_FREE;
++
++	init_waitqueue_head(&conn->queues_wq);
++	INIT_LIST_HEAD(&conn->conn_list);
++	INIT_LIST_HEAD(&conn->conn_cmd_list);
++	INIT_LIST_HEAD(&conn->immed_queue_list);
++	INIT_LIST_HEAD(&conn->response_queue_list);
++	init_completion(&conn->conn_post_wait_comp);
++	init_completion(&conn->conn_wait_comp);
++	init_completion(&conn->conn_wait_rcfr_comp);
++	init_completion(&conn->conn_waiting_on_uc_comp);
++	init_completion(&conn->conn_logout_comp);
++	init_completion(&conn->rx_half_close_comp);
++	init_completion(&conn->tx_half_close_comp);
++	init_completion(&conn->rx_login_comp);
++	spin_lock_init(&conn->cmd_lock);
++	spin_lock_init(&conn->conn_usage_lock);
++	spin_lock_init(&conn->immed_queue_lock);
++	spin_lock_init(&conn->nopin_timer_lock);
++	spin_lock_init(&conn->response_queue_lock);
++	spin_lock_init(&conn->state_lock);
++
++	timer_setup(&conn->nopin_response_timer,
++		    iscsit_handle_nopin_response_timeout, 0);
++	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
++
++	if (iscsit_conn_set_transport(conn, np->np_transport) < 0)
++		goto free_conn;
++
++	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
++	if (!conn->conn_ops) {
++		pr_err("Unable to allocate memory for struct iscsi_conn_ops.\n");
++		goto put_transport;
++	}
++
++	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
++		pr_err("Unable to allocate conn->conn_cpumask\n");
++		goto free_mask;
++	}
++
++	return conn;
++
++free_mask:
++	free_cpumask_var(conn->conn_cpumask);
++put_transport:
++	iscsit_put_transport(conn->conn_transport);
++free_conn:
++	kfree(conn);
++	return NULL;
++}
++
++void iscsit_free_conn(struct iscsi_conn *conn)
++{
++	free_cpumask_var(conn->conn_cpumask);
++	kfree(conn->conn_ops);
++	iscsit_put_transport(conn->conn_transport);
++	kfree(conn);
++}
++
+ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 		struct iscsi_np *np, bool zero_tsih, bool new_sess)
+ {
+@@ -1210,10 +1242,6 @@ old_sess_out:
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-
+ 	if (conn->param_list) {
+ 		iscsi_release_param_list(conn->param_list);
+ 		conn->param_list = NULL;
+@@ -1231,8 +1259,7 @@ old_sess_out:
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ }
+ 
+ static int __iscsi_target_login_thread(struct iscsi_np *np)
+@@ -1262,31 +1289,16 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 	}
+ 	spin_unlock_bh(&np->np_thread_lock);
+ 
+-	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	conn = iscsit_alloc_conn(np);
+ 	if (!conn) {
+-		pr_err("Could not allocate memory for"
+-			" new connection\n");
+ 		/* Get another socket */
+ 		return 1;
+ 	}
+-	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+-	conn->conn_state = TARG_CONN_STATE_FREE;
+-
+-	timer_setup(&conn->nopin_response_timer,
+-		    iscsit_handle_nopin_response_timeout, 0);
+-	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
+-
+-	if (iscsit_conn_set_transport(conn, np->np_transport) < 0) {
+-		kfree(conn);
+-		return 1;
+-	}
+ 
+ 	rc = np->np_transport->iscsit_accept_np(np, conn);
+ 	if (rc == -ENOSYS) {
+ 		complete(&np->np_restart_comp);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
++		iscsit_free_conn(conn);
+ 		goto exit;
+ 	} else if (rc < 0) {
+ 		spin_lock_bh(&np->np_thread_lock);
+@@ -1294,17 +1306,13 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 			np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+ 			spin_unlock_bh(&np->np_thread_lock);
+ 			complete(&np->np_restart_comp);
+-			iscsit_put_transport(conn->conn_transport);
+-			kfree(conn);
+-			conn = NULL;
++			iscsit_free_conn(conn);
+ 			/* Get another socket */
+ 			return 1;
+ 		}
+ 		spin_unlock_bh(&np->np_thread_lock);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
+-		goto out;
++		iscsit_free_conn(conn);
++		return 1;
+ 	}
+ 	/*
+ 	 * Perform the remaining iSCSI connection initialization items..
+@@ -1454,7 +1462,6 @@ old_sess_out:
+ 		tpg_np = NULL;
+ 	}
+ 
+-out:
+ 	return 1;
+ 
+ exit:
+diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
+index 74ac3abc44a0..3b8e3639ff5d 100644
+--- a/drivers/target/iscsi/iscsi_target_login.h
++++ b/drivers/target/iscsi/iscsi_target_login.h
+@@ -19,7 +19,7 @@ extern int iscsi_target_setup_login_socket(struct iscsi_np *,
+ extern int iscsit_accept_np(struct iscsi_np *, struct iscsi_conn *);
+ extern int iscsit_get_login_rx(struct iscsi_conn *, struct iscsi_login *);
+ extern int iscsit_put_login_tx(struct iscsi_conn *, struct iscsi_login *, u32);
+-extern void iscsit_free_conn(struct iscsi_np *, struct iscsi_conn *);
++extern void iscsit_free_conn(struct iscsi_conn *);
+ extern int iscsit_start_kthreads(struct iscsi_conn *);
+ extern void iscsi_post_login_handler(struct iscsi_np *, struct iscsi_conn *, u8);
+ extern void iscsi_target_login_sess_out(struct iscsi_conn *, struct iscsi_np *,
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index 53a48f561458..587c5037ff07 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -1063,12 +1063,15 @@ static const struct usb_gadget_ops fotg210_gadget_ops = {
+ static int fotg210_udc_remove(struct platform_device *pdev)
+ {
+ 	struct fotg210_udc *fotg210 = platform_get_drvdata(pdev);
++	int i;
+ 
+ 	usb_del_gadget_udc(&fotg210->gadget);
+ 	iounmap(fotg210->reg);
+ 	free_irq(platform_get_irq(pdev, 0), fotg210);
+ 
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
+ 	return 0;
+@@ -1099,7 +1102,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	/* initialize udc */
+ 	fotg210 = kzalloc(sizeof(struct fotg210_udc), GFP_KERNEL);
+ 	if (fotg210 == NULL)
+-		goto err_alloc;
++		goto err;
+ 
+ 	for (i = 0; i < FOTG210_MAX_NUM_EP; i++) {
+ 		_ep[i] = kzalloc(sizeof(struct fotg210_ep), GFP_KERNEL);
+@@ -1111,7 +1114,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->reg = ioremap(res->start, resource_size(res));
+ 	if (fotg210->reg == NULL) {
+ 		pr_err("ioremap error.\n");
+-		goto err_map;
++		goto err_alloc;
+ 	}
+ 
+ 	spin_lock_init(&fotg210->lock);
+@@ -1159,7 +1162,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->ep0_req = fotg210_ep_alloc_request(&fotg210->ep[0]->ep,
+ 				GFP_KERNEL);
+ 	if (fotg210->ep0_req == NULL)
+-		goto err_req;
++		goto err_map;
+ 
+ 	fotg210_init(fotg210);
+ 
+@@ -1187,12 +1190,14 @@ err_req:
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
+ 
+ err_map:
+-	if (fotg210->reg)
+-		iounmap(fotg210->reg);
++	iounmap(fotg210->reg);
+ 
+ err_alloc:
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c1b22fc64e38..b5a14caa9297 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -152,7 +152,7 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ {
+ 	const struct xhci_plat_priv *priv_match;
+ 	const struct hc_driver	*driver;
+-	struct device		*sysdev;
++	struct device		*sysdev, *tmpdev;
+ 	struct xhci_hcd		*xhci;
+ 	struct resource         *res;
+ 	struct usb_hcd		*hcd;
+@@ -272,19 +272,24 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 		goto disable_clk;
+ 	}
+ 
+-	if (device_property_read_bool(sysdev, "usb2-lpm-disable"))
+-		xhci->quirks |= XHCI_HW_LPM_DISABLE;
++	/* imod_interval is the interrupt moderation value in nanoseconds. */
++	xhci->imod_interval = 40000;
+ 
+-	if (device_property_read_bool(sysdev, "usb3-lpm-capable"))
+-		xhci->quirks |= XHCI_LPM_SUPPORT;
++	/* Iterate over all parent nodes for finding quirks */
++	for (tmpdev = &pdev->dev; tmpdev; tmpdev = tmpdev->parent) {
+ 
+-	if (device_property_read_bool(&pdev->dev, "quirk-broken-port-ped"))
+-		xhci->quirks |= XHCI_BROKEN_PORT_PED;
++		if (device_property_read_bool(tmpdev, "usb2-lpm-disable"))
++			xhci->quirks |= XHCI_HW_LPM_DISABLE;
+ 
+-	/* imod_interval is the interrupt moderation value in nanoseconds. */
+-	xhci->imod_interval = 40000;
+-	device_property_read_u32(sysdev, "imod-interval-ns",
+-				 &xhci->imod_interval);
++		if (device_property_read_bool(tmpdev, "usb3-lpm-capable"))
++			xhci->quirks |= XHCI_LPM_SUPPORT;
++
++		if (device_property_read_bool(tmpdev, "quirk-broken-port-ped"))
++			xhci->quirks |= XHCI_BROKEN_PORT_PED;
++
++		device_property_read_u32(tmpdev, "imod-interval-ns",
++					 &xhci->imod_interval);
++	}
+ 
+ 	hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0);
+ 	if (IS_ERR(hcd->usb_phy)) {
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 1232dd49556d..6d9fd5f64903 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -413,6 +413,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 	mutex_unlock(&dev->io_mutex);
+ 
++	if (WARN_ON_ONCE(len >= sizeof(in_buffer)))
++		return -EIO;
++
+ 	return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+ }
+ 
+diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
+index d4265c8ebb22..b1357aa4bc55 100644
+--- a/drivers/xen/cpu_hotplug.c
++++ b/drivers/xen/cpu_hotplug.c
+@@ -19,15 +19,16 @@ static void enable_hotplug_cpu(int cpu)
+ 
+ static void disable_hotplug_cpu(int cpu)
+ {
+-	if (cpu_online(cpu)) {
+-		lock_device_hotplug();
++	if (!cpu_is_hotpluggable(cpu))
++		return;
++	lock_device_hotplug();
++	if (cpu_online(cpu))
+ 		device_offline(get_cpu_device(cpu));
+-		unlock_device_hotplug();
+-	}
+-	if (cpu_present(cpu))
++	if (!cpu_online(cpu) && cpu_present(cpu)) {
+ 		xen_arch_unregister_cpu(cpu);
+-
+-	set_cpu_present(cpu, false);
++		set_cpu_present(cpu, false);
++	}
++	unlock_device_hotplug();
+ }
+ 
+ static int vcpu_online(unsigned int cpu)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 08e4af04d6f2..e6c1934734b7 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -138,7 +138,7 @@ static int set_evtchn_to_irq(unsigned evtchn, unsigned irq)
+ 		clear_evtchn_to_irq_row(row);
+ 	}
+ 
+-	evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)] = irq;
++	evtchn_to_irq[row][col] = irq;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
+index c93d8ef8df34..5bb01a62f214 100644
+--- a/drivers/xen/manage.c
++++ b/drivers/xen/manage.c
+@@ -280,9 +280,11 @@ static void sysrq_handler(struct xenbus_watch *watch, const char *path,
+ 		/*
+ 		 * The Xenstore watch fires directly after registering it and
+ 		 * after a suspend/resume cycle. So ENOENT is no error but
+-		 * might happen in those cases.
++		 * might happen in those cases. ERANGE is observed when we get
++		 * an empty value (''), this happens when we acknowledge the
++		 * request by writing '\0' below.
+ 		 */
+-		if (err != -ENOENT)
++		if (err != -ENOENT && err != -ERANGE)
+ 			pr_err("Error %d reading sysrq code in control/sysrq\n",
+ 			       err);
+ 		xenbus_transaction_end(xbt, 1);
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 0c3285c8db95..476dcbb79713 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -98,13 +98,13 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 		goto inval;
+ 
+ 	args = strchr(name, ' ');
+-	if (!args)
+-		goto inval;
+-	do {
+-		*args++ = 0;
+-	} while(*args == ' ');
+-	if (!*args)
+-		goto inval;
++	if (args) {
++		do {
++			*args++ = 0;
++		} while(*args == ' ');
++		if (!*args)
++			goto inval;
++	}
+ 
+ 	/* determine command to perform */
+ 	_debug("cmd=%s name=%s args=%s", buf, name, args);
+@@ -120,7 +120,6 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 
+ 		if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+ 			afs_put_cell(net, cell);
+-		printk("kAFS: Added new cell '%s'\n", name);
+ 	} else {
+ 		goto inval;
+ 	}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 118346aceea9..663ce0518d27 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1277,6 +1277,7 @@ struct btrfs_root {
+ 	int send_in_progress;
+ 	struct btrfs_subvolume_writers *subv_writers;
+ 	atomic_t will_be_snapshotted;
++	atomic_t snapshot_force_cow;
+ 
+ 	/* For qgroup metadata reserved space */
+ 	spinlock_t qgroup_meta_rsv_lock;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index dfed08e70ec1..891b1aab3480 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1217,6 +1217,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
+ 	atomic_set(&root->log_batch, 0);
+ 	refcount_set(&root->refs, 1);
+ 	atomic_set(&root->will_be_snapshotted, 0);
++	atomic_set(&root->snapshot_force_cow, 0);
+ 	root->log_transid = 0;
+ 	root->log_transid_committed = -1;
+ 	root->last_log_commit = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 071d949f69ec..d3736fbf6774 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1275,7 +1275,7 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ 	u64 disk_num_bytes;
+ 	u64 ram_bytes;
+ 	int extent_type;
+-	int ret, err;
++	int ret;
+ 	int type;
+ 	int nocow;
+ 	int check_prev = 1;
+@@ -1407,11 +1407,8 @@ next_slot:
+ 			 * if there are pending snapshots for this root,
+ 			 * we fall into common COW way.
+ 			 */
+-			if (!nolock) {
+-				err = btrfs_start_write_no_snapshotting(root);
+-				if (!err)
+-					goto out_check;
+-			}
++			if (!nolock && atomic_read(&root->snapshot_force_cow))
++				goto out_check;
+ 			/*
+ 			 * force cow if csum exists in the range.
+ 			 * this ensure that csum for a given extent are
+@@ -1420,9 +1417,6 @@ next_slot:
+ 			ret = csum_exist_in_range(fs_info, disk_bytenr,
+ 						  num_bytes);
+ 			if (ret) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
+-
+ 				/*
+ 				 * ret could be -EIO if the above fails to read
+ 				 * metadata.
+@@ -1435,11 +1429,8 @@ next_slot:
+ 				WARN_ON_ONCE(nolock);
+ 				goto out_check;
+ 			}
+-			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
++			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr))
+ 				goto out_check;
+-			}
+ 			nocow = 1;
+ 		} else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
+ 			extent_end = found_key.offset +
+@@ -1453,8 +1444,6 @@ next_slot:
+ out_check:
+ 		if (extent_end <= start) {
+ 			path->slots[0]++;
+-			if (!nolock && nocow)
+-				btrfs_end_write_no_snapshotting(root);
+ 			if (nocow)
+ 				btrfs_dec_nocow_writers(fs_info, disk_bytenr);
+ 			goto next_slot;
+@@ -1476,8 +1465,6 @@ out_check:
+ 					     end, page_started, nr_written, 1,
+ 					     NULL);
+ 			if (ret) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1497,8 +1484,6 @@ out_check:
+ 					  ram_bytes, BTRFS_COMPRESS_NONE,
+ 					  BTRFS_ORDERED_PREALLOC);
+ 			if (IS_ERR(em)) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1537,8 +1522,6 @@ out_check:
+ 					     EXTENT_CLEAR_DATA_RESV,
+ 					     PAGE_UNLOCK | PAGE_SET_PRIVATE2);
+ 
+-		if (!nolock && nocow)
+-			btrfs_end_write_no_snapshotting(root);
+ 		cur_offset = extent_end;
+ 
+ 		/*
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f3d6be0c657b..ef7159646615 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -761,6 +761,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	struct btrfs_pending_snapshot *pending_snapshot;
+ 	struct btrfs_trans_handle *trans;
+ 	int ret;
++	bool snapshot_force_cow = false;
+ 
+ 	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+ 		return -EINVAL;
+@@ -777,6 +778,11 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		goto free_pending;
+ 	}
+ 
++	/*
++	 * Force new buffered writes to reserve space even when NOCOW is
++	 * possible. This is to avoid later writeback (running dealloc) to
++	 * fallback to COW mode and unexpectedly fail with ENOSPC.
++	 */
+ 	atomic_inc(&root->will_be_snapshotted);
+ 	smp_mb__after_atomic();
+ 	/* wait for no snapshot writes */
+@@ -787,6 +793,14 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	if (ret)
+ 		goto dec_and_free;
+ 
++	/*
++	 * All previous writes have started writeback in NOCOW mode, so now
++	 * we force future writes to fallback to COW mode during snapshot
++	 * creation.
++	 */
++	atomic_inc(&root->snapshot_force_cow);
++	snapshot_force_cow = true;
++
+ 	btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+ 
+ 	btrfs_init_block_rsv(&pending_snapshot->block_rsv,
+@@ -851,6 +865,8 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ fail:
+ 	btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ dec_and_free:
++	if (snapshot_force_cow)
++		atomic_dec(&root->snapshot_force_cow);
+ 	if (atomic_dec_and_test(&root->will_be_snapshotted))
+ 		wake_up_var(&root->will_be_snapshotted);
+ free_pending:
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 5304b8d6ceb8..1a22c0ecaf67 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4584,7 +4584,12 @@ again:
+ 
+ 	/* Now btrfs_update_device() will change the on-disk size. */
+ 	ret = btrfs_update_device(trans, device);
+-	btrfs_end_transaction(trans);
++	if (ret < 0) {
++		btrfs_abort_transaction(trans, ret);
++		btrfs_end_transaction(trans);
++	} else {
++		ret = btrfs_commit_transaction(trans);
++	}
+ done:
+ 	btrfs_free_path(path);
+ 	if (ret) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 95a3b3ac9b6e..60f81ac369b5 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -603,6 +603,8 @@ static int extra_mon_dispatch(struct ceph_client *client, struct ceph_msg *msg)
+ 
+ /*
+  * create a new fs client
++ *
++ * Success or not, this function consumes @fsopt and @opt.
+  */
+ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 					struct ceph_options *opt)
+@@ -610,17 +612,20 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 	struct ceph_fs_client *fsc;
+ 	int page_count;
+ 	size_t size;
+-	int err = -ENOMEM;
++	int err;
+ 
+ 	fsc = kzalloc(sizeof(*fsc), GFP_KERNEL);
+-	if (!fsc)
+-		return ERR_PTR(-ENOMEM);
++	if (!fsc) {
++		err = -ENOMEM;
++		goto fail;
++	}
+ 
+ 	fsc->client = ceph_create_client(opt, fsc);
+ 	if (IS_ERR(fsc->client)) {
+ 		err = PTR_ERR(fsc->client);
+ 		goto fail;
+ 	}
++	opt = NULL; /* fsc->client now owns this */
+ 
+ 	fsc->client->extra_mon_dispatch = extra_mon_dispatch;
+ 	fsc->client->osdc.abort_on_full = true;
+@@ -678,6 +683,9 @@ fail_client:
+ 	ceph_destroy_client(fsc->client);
+ fail:
+ 	kfree(fsc);
++	if (opt)
++		ceph_destroy_options(opt);
++	destroy_mount_options(fsopt);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -1042,8 +1050,6 @@ static struct dentry *ceph_mount(struct file_system_type *fs_type,
+ 	fsc = create_fs_client(fsopt, opt);
+ 	if (IS_ERR(fsc)) {
+ 		res = ERR_CAST(fsc);
+-		destroy_mount_options(fsopt);
+-		ceph_destroy_options(opt);
+ 		goto out_final;
+ 	}
+ 
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index b380e0871372..a2b2355e7f01 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -105,9 +105,6 @@ convert_sfm_char(const __u16 src_char, char *target)
+ 	case SFM_LESSTHAN:
+ 		*target = '<';
+ 		break;
+-	case SFM_SLASH:
+-		*target = '\\';
+-		break;
+ 	case SFM_SPACE:
+ 		*target = ' ';
+ 		break;
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 93408eab92e7..f5baf777564c 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -601,10 +601,15 @@ CIFSSMBNegotiate(const unsigned int xid, struct cifs_ses *ses)
+ 	}
+ 
+ 	count = 0;
++	/*
++	 * We know that all the name entries in the protocols array
++	 * are short (< 16 bytes anyway) and are NUL terminated.
++	 */
+ 	for (i = 0; i < CIFS_NUM_PROT; i++) {
+-		strncpy(pSMB->DialectsArray+count, protocols[i].name, 16);
+-		count += strlen(protocols[i].name) + 1;
+-		/* null at end of source and target buffers anyway */
++		size_t len = strlen(protocols[i].name) + 1;
++
++		memcpy(pSMB->DialectsArray+count, protocols[i].name, len);
++		count += len;
+ 	}
+ 	inc_rfc1001_len(pSMB, count);
+ 	pSMB->ByteCount = cpu_to_le16(count);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 53e8362cbc4a..6737f54d9a34 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -404,9 +404,17 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 			(struct smb_com_transaction_change_notify_rsp *)buf;
+ 		struct file_notify_information *pnotify;
+ 		__u32 data_offset = 0;
++		size_t len = srv->total_read - sizeof(pSMBr->hdr.smb_buf_length);
++
+ 		if (get_bcc(buf) > sizeof(struct file_notify_information)) {
+ 			data_offset = le32_to_cpu(pSMBr->DataOffset);
+ 
++			if (data_offset >
++			    len - sizeof(struct file_notify_information)) {
++				cifs_dbg(FYI, "invalid data_offset %u\n",
++					 data_offset);
++				return true;
++			}
+ 			pnotify = (struct file_notify_information *)
+ 				((char *)&pSMBr->hdr.Protocol + data_offset);
+ 			cifs_dbg(FYI, "dnotify on %s Action: 0x%x\n",
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5ecbc99f46e4..abb54b852bdc 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1484,7 +1484,7 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	srch_inf->entries_in_buffer = 0;
+-	srch_inf->index_of_last_entry = 0;
++	srch_inf->index_of_last_entry = 2;
+ 
+ 	rc = SMB2_query_directory(xid, tcon, fid->persistent_fid,
+ 				  fid->volatile_fid, 0, srch_inf);
+diff --git a/fs/dcache.c b/fs/dcache.c
+index d19a0dc46c04..baa89f092a2d 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -1890,7 +1890,7 @@ void d_instantiate_new(struct dentry *entry, struct inode *inode)
+ 	spin_lock(&inode->i_lock);
+ 	__d_instantiate(entry, inode);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/fs/inode.c b/fs/inode.c
+index 8c86c809ca17..a06de4454232 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -804,6 +804,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -831,6 +835,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -961,13 +969,26 @@ void unlock_new_inode(struct inode *inode)
+ 	lockdep_annotate_inode_mutex_key(inode);
+ 	spin_lock(&inode->i_lock);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+ }
+ EXPORT_SYMBOL(unlock_new_inode);
+ 
++void discard_new_inode(struct inode *inode)
++{
++	lockdep_annotate_inode_mutex_key(inode);
++	spin_lock(&inode->i_lock);
++	WARN_ON(!(inode->i_state & I_NEW));
++	inode->i_state &= ~I_NEW;
++	smp_mb();
++	wake_up_bit(&inode->i_state, __I_NEW);
++	spin_unlock(&inode->i_lock);
++	iput(inode);
++}
++EXPORT_SYMBOL(discard_new_inode);
++
+ /**
+  * lock_two_nondirectories - take two i_mutexes on non-directory objects
+  *
+@@ -1029,6 +1050,7 @@ struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
+ {
+ 	struct hlist_head *head = inode_hashtable + hash(inode->i_sb, hashval);
+ 	struct inode *old;
++	bool creating = inode->i_state & I_CREATING;
+ 
+ again:
+ 	spin_lock(&inode_hash_lock);
+@@ -1039,6 +1061,8 @@ again:
+ 		 * Use the old inode instead of the preallocated one.
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
++		if (IS_ERR(old))
++			return NULL;
+ 		wait_on_inode(old);
+ 		if (unlikely(inode_unhashed(old))) {
+ 			iput(old);
+@@ -1060,6 +1084,8 @@ again:
+ 	inode->i_state |= I_NEW;
+ 	hlist_add_head(&inode->i_hash, head);
+ 	spin_unlock(&inode->i_lock);
++	if (!creating)
++		inode_sb_list_add(inode);
+ unlock:
+ 	spin_unlock(&inode_hash_lock);
+ 
+@@ -1094,12 +1120,13 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval,
+ 	struct inode *inode = ilookup5(sb, hashval, test, data);
+ 
+ 	if (!inode) {
+-		struct inode *new = new_inode(sb);
++		struct inode *new = alloc_inode(sb);
+ 
+ 		if (new) {
++			new->i_state = 0;
+ 			inode = inode_insert5(new, hashval, test, set, data);
+ 			if (unlikely(inode != new))
+-				iput(new);
++				destroy_inode(new);
+ 		}
+ 	}
+ 	return inode;
+@@ -1128,6 +1155,8 @@ again:
+ 	inode = find_inode_fast(sb, head, ino);
+ 	spin_unlock(&inode_hash_lock);
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1165,6 +1194,8 @@ again:
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
+ 		destroy_inode(inode);
++		if (IS_ERR(old))
++			return NULL;
+ 		inode = old;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+@@ -1282,7 +1313,7 @@ struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval,
+ 	inode = find_inode(sb, head, test, data);
+ 	spin_unlock(&inode_hash_lock);
+ 
+-	return inode;
++	return IS_ERR(inode) ? NULL : inode;
+ }
+ EXPORT_SYMBOL(ilookup5_nowait);
+ 
+@@ -1338,6 +1369,8 @@ again:
+ 	spin_unlock(&inode_hash_lock);
+ 
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1421,12 +1454,17 @@ int insert_inode_locked(struct inode *inode)
+ 		}
+ 		if (likely(!old)) {
+ 			spin_lock(&inode->i_lock);
+-			inode->i_state |= I_NEW;
++			inode->i_state |= I_NEW | I_CREATING;
+ 			hlist_add_head(&inode->i_hash, head);
+ 			spin_unlock(&inode->i_lock);
+ 			spin_unlock(&inode_hash_lock);
+ 			return 0;
+ 		}
++		if (unlikely(old->i_state & I_CREATING)) {
++			spin_unlock(&old->i_lock);
++			spin_unlock(&inode_hash_lock);
++			return -EBUSY;
++		}
+ 		__iget(old);
+ 		spin_unlock(&old->i_lock);
+ 		spin_unlock(&inode_hash_lock);
+@@ -1443,7 +1481,10 @@ EXPORT_SYMBOL(insert_inode_locked);
+ int insert_inode_locked4(struct inode *inode, unsigned long hashval,
+ 		int (*test)(struct inode *, void *), void *data)
+ {
+-	struct inode *old = inode_insert5(inode, hashval, test, NULL, data);
++	struct inode *old;
++
++	inode->i_state |= I_CREATING;
++	old = inode_insert5(inode, hashval, test, NULL, data);
+ 
+ 	if (old != inode) {
+ 		iput(old);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index f174397b63a0..ababdbfab537 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -351,16 +351,9 @@ int fsnotify(struct inode *to_tell, __u32 mask, const void *data, int data_is,
+ 
+ 	iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+ 
+-	if ((mask & FS_MODIFY) ||
+-	    (test_mask & to_tell->i_fsnotify_mask)) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
+-	}
+-
+-	if (mnt && ((mask & FS_MODIFY) ||
+-		    (test_mask & mnt->mnt_fsnotify_mask))) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
++		fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	if (mnt) {
+ 		iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] =
+ 			fsnotify_first_mark(&mnt->mnt_fsnotify_marks);
+ 	}
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index aaca0949fe53..826f0567ec43 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -584,9 +584,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm,
+ 
+ 	res->last_used = 0;
+ 
+-	spin_lock(&dlm->spinlock);
++	spin_lock(&dlm->track_lock);
+ 	list_add_tail(&res->tracking, &dlm->tracking_list);
+-	spin_unlock(&dlm->spinlock);
++	spin_unlock(&dlm->track_lock);
+ 
+ 	memset(res->lvb, 0, DLM_LVB_LEN);
+ 	memset(res->refmap, 0, sizeof(res->refmap));
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index f480b1a2cd2e..da9b3ccfde23 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -601,6 +601,10 @@ static int ovl_create_object(struct dentry *dentry, int mode, dev_t rdev,
+ 	if (!inode)
+ 		goto out_drop_write;
+ 
++	spin_lock(&inode->i_lock);
++	inode->i_state |= I_CREATING;
++	spin_unlock(&inode->i_lock);
++
+ 	inode_init_owner(inode, dentry->d_parent->d_inode, mode);
+ 	attr.mode = inode->i_mode;
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index c993dd8db739..c2229f02389b 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -705,7 +705,7 @@ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper,
+ 			index = NULL;
+ 			goto out;
+ 		}
+-		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n"
++		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%.*s, err=%i);\n"
+ 				    "overlayfs: mount with '-o index=off' to disable inodes index.\n",
+ 				    d_inode(origin)->i_ino, name.len, name.name,
+ 				    err);
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 7538b9b56237..e789924e9833 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -147,8 +147,8 @@ static inline int ovl_do_setxattr(struct dentry *dentry, const char *name,
+ 				  const void *value, size_t size, int flags)
+ {
+ 	int err = vfs_setxattr(dentry, name, value, size, flags);
+-	pr_debug("setxattr(%pd2, \"%s\", \"%*s\", 0x%x) = %i\n",
+-		 dentry, name, (int) size, (char *) value, flags, err);
++	pr_debug("setxattr(%pd2, \"%s\", \"%*pE\", %zu, 0x%x) = %i\n",
++		 dentry, name, min((int)size, 48), value, size, flags, err);
+ 	return err;
+ }
+ 
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 6f1078028c66..319a7eeb388f 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -531,7 +531,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 	struct dentry *upperdentry = ovl_dentry_upper(dentry);
+ 	struct dentry *index = NULL;
+ 	struct inode *inode;
+-	struct qstr name;
++	struct qstr name = { };
+ 	int err;
+ 
+ 	err = ovl_get_index_name(lowerdentry, &name);
+@@ -574,6 +574,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 		goto fail;
+ 
+ out:
++	kfree(name.name);
+ 	dput(index);
+ 	return;
+ 
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index aaffc0c30216..bbcad104505c 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -407,6 +407,20 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
+ 	unsigned long *entries;
+ 	int err;
+ 
++	/*
++	 * The ability to racily run the kernel stack unwinder on a running task
++	 * and then observe the unwinder output is scary; while it is useful for
++	 * debugging kernel issues, it can also allow an attacker to leak kernel
++	 * stack contents.
++	 * Doing this in a manner that is at least safe from races would require
++	 * some work to ensure that the remote task can not be scheduled; and
++	 * even then, this would still expose the unwinder as local attack
++	 * surface.
++	 * Therefore, this interface is restricted to root.
++	 */
++	if (!file_ns_capable(m->file, &init_user_ns, CAP_SYS_ADMIN))
++		return -EACCES;
++
+ 	entries = kmalloc_array(MAX_STACK_TRACE_DEPTH, sizeof(*entries),
+ 				GFP_KERNEL);
+ 	if (!entries)
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 1bee74682513..c689fd5b5679 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -949,17 +949,19 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 	int err = 0;
+ 
+ #ifdef CONFIG_FS_POSIX_ACL
+-	if (inode->i_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_ACCESS);
+-		if (err)
+-			return err;
+-	}
+-	if (inode->i_default_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_DEFAULT);
+-		if (err)
+-			return err;
++	if (IS_POSIXACL(inode)) {
++		if (inode->i_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_ACCESS);
++			if (err)
++				return err;
++		}
++		if (inode->i_default_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_DEFAULT);
++			if (err)
++				return err;
++		}
+ 	}
+ #endif
+ 
+diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
+index 66d1d45fa2e1..d356f802945a 100644
+--- a/include/asm-generic/io.h
++++ b/include/asm-generic/io.h
+@@ -1026,7 +1026,8 @@ static inline void __iomem *ioremap_wt(phys_addr_t offset, size_t size)
+ #define ioport_map ioport_map
+ static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+ {
+-	return PCI_IOBASE + (port & MMIO_UPPER_LIMIT);
++	port &= IO_SPACE_LIMIT;
++	return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port;
+ }
+ #endif
+ 
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 0fce47d5acb1..5d46b83d4820 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -88,7 +88,6 @@ struct blkg_policy_data {
+ 	/* the blkg and policy id this per-policy data belongs to */
+ 	struct blkcg_gq			*blkg;
+ 	int				plid;
+-	bool				offline;
+ };
+ 
+ /*
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 805bf22898cf..a3afa50bb79f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2014,6 +2014,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+  * I_OVL_INUSE		Used by overlayfs to get exclusive ownership on upper
+  *			and work dirs among overlayfs mounts.
+  *
++ * I_CREATING		New object's inode in the middle of setting up.
++ *
+  * Q: What is the difference between I_WILL_FREE and I_FREEING?
+  */
+ #define I_DIRTY_SYNC		(1 << 0)
+@@ -2034,7 +2036,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+ #define __I_DIRTY_TIME_EXPIRED	12
+ #define I_DIRTY_TIME_EXPIRED	(1 << __I_DIRTY_TIME_EXPIRED)
+ #define I_WB_SWITCH		(1 << 13)
+-#define I_OVL_INUSE			(1 << 14)
++#define I_OVL_INUSE		(1 << 14)
++#define I_CREATING		(1 << 15)
+ 
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+@@ -2918,6 +2921,7 @@ extern void lockdep_annotate_inode_mutex_key(struct inode *inode);
+ static inline void lockdep_annotate_inode_mutex_key(struct inode *inode) { };
+ #endif
+ extern void unlock_new_inode(struct inode *);
++extern void discard_new_inode(struct inode *);
+ extern unsigned int get_next_ino(void);
+ extern void evict_inodes(struct super_block *sb);
+ 
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 1beb3ead0385..7229c186d199 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4763,8 +4763,8 @@ const char *reg_initiator_name(enum nl80211_reg_initiator initiator);
+  *
+  * Return: 0 on success. -ENODATA.
+  */
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *ptr,
+-			struct ieee80211_wmm_rule *rule);
++int reg_query_regdb_wmm(char *alpha2, int freq,
++			struct ieee80211_reg_rule *rule);
+ 
+ /*
+  * callbacks for asynchronous cfg80211 methods, notification
+diff --git a/include/net/regulatory.h b/include/net/regulatory.h
+index 60f8cc86a447..3469750df0f4 100644
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -217,15 +217,15 @@ struct ieee80211_wmm_rule {
+ struct ieee80211_reg_rule {
+ 	struct ieee80211_freq_range freq_range;
+ 	struct ieee80211_power_rule power_rule;
+-	struct ieee80211_wmm_rule *wmm_rule;
++	struct ieee80211_wmm_rule wmm_rule;
+ 	u32 flags;
+ 	u32 dfs_cac_ms;
++	bool has_wmm;
+ };
+ 
+ struct ieee80211_regdomain {
+ 	struct rcu_head rcu_head;
+ 	u32 n_reg_rules;
+-	u32 n_wmm_rules;
+ 	char alpha2[3];
+ 	enum nl80211_dfs_regions dfs_region;
+ 	struct ieee80211_reg_rule reg_rules[];
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index ed707b21d152..f833a60699ad 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -236,7 +236,7 @@ static int bpf_tcp_init(struct sock *sk)
+ }
+ 
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock);
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md);
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge);
+ 
+ static void bpf_tcp_release(struct sock *sk)
+ {
+@@ -248,7 +248,7 @@ static void bpf_tcp_release(struct sock *sk)
+ 		goto out;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+@@ -330,14 +330,14 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	close_fun = psock->save_close;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -369,7 +369,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			/* If another thread deleted this object skip deletion.
+ 			 * The refcnt on psock may or may not be zero.
+ 			 */
+-			if (l) {
++			if (l && l == link) {
+ 				hlist_del_rcu(&link->hash_node);
+ 				smap_release_sock(psock, link->sk);
+ 				free_htab_elem(htab, link);
+@@ -570,14 +570,16 @@ static void free_bytes_sg(struct sock *sk, int bytes,
+ 	md->sg_start = i;
+ }
+ 
+-static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
++static int free_sg(struct sock *sk, int start,
++		   struct sk_msg_buff *md, bool charge)
+ {
+ 	struct scatterlist *sg = md->sg_data;
+ 	int i = start, free = 0;
+ 
+ 	while (sg[i].length) {
+ 		free += sg[i].length;
+-		sk_mem_uncharge(sk, sg[i].length);
++		if (charge)
++			sk_mem_uncharge(sk, sg[i].length);
+ 		if (!md->skb)
+ 			put_page(sg_page(&sg[i]));
+ 		sg[i].length = 0;
+@@ -594,9 +596,9 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
+ 	return free;
+ }
+ 
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge)
+ {
+-	int free = free_sg(sk, md->sg_start, md);
++	int free = free_sg(sk, md->sg_start, md, charge);
+ 
+ 	md->sg_start = md->sg_end;
+ 	return free;
+@@ -604,7 +606,7 @@ static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
+ 
+ static int free_curr_sg(struct sock *sk, struct sk_msg_buff *md)
+ {
+-	return free_sg(sk, md->sg_curr, md);
++	return free_sg(sk, md->sg_curr, md, true);
+ }
+ 
+ static int bpf_map_msg_verdict(int _rc, struct sk_msg_buff *md)
+@@ -718,7 +720,7 @@ static int bpf_tcp_ingress(struct sock *sk, int apply_bytes,
+ 		list_add_tail(&r->list, &psock->ingress);
+ 		sk->sk_data_ready(sk);
+ 	} else {
+-		free_start_sg(sk, r);
++		free_start_sg(sk, r, true);
+ 		kfree(r);
+ 	}
+ 
+@@ -755,14 +757,10 @@ static int bpf_tcp_sendmsg_do_redirect(struct sock *sk, int send,
+ 		release_sock(sk);
+ 	}
+ 	smap_release_sock(psock, sk);
+-	if (unlikely(err))
+-		goto out;
+-	return 0;
++	return err;
+ out_rcu:
+ 	rcu_read_unlock();
+-out:
+-	free_bytes_sg(NULL, send, md, false);
+-	return err;
++	return 0;
+ }
+ 
+ static inline void bpf_md_init(struct smap_psock *psock)
+@@ -825,7 +823,7 @@ more_data:
+ 	case __SK_PASS:
+ 		err = bpf_tcp_push(sk, send, m, flags, true);
+ 		if (unlikely(err)) {
+-			*copied -= free_start_sg(sk, m);
++			*copied -= free_start_sg(sk, m, true);
+ 			break;
+ 		}
+ 
+@@ -848,16 +846,17 @@ more_data:
+ 		lock_sock(sk);
+ 
+ 		if (unlikely(err < 0)) {
+-			free_start_sg(sk, m);
++			int free = free_start_sg(sk, m, false);
++
+ 			psock->sg_size = 0;
+ 			if (!cork)
+-				*copied -= send;
++				*copied -= free;
+ 		} else {
+ 			psock->sg_size -= send;
+ 		}
+ 
+ 		if (cork) {
+-			free_start_sg(sk, m);
++			free_start_sg(sk, m, true);
+ 			psock->sg_size = 0;
+ 			kfree(m);
+ 			m = NULL;
+@@ -915,6 +914,8 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 
+ 	if (unlikely(flags & MSG_ERRQUEUE))
+ 		return inet_recv_error(sk, msg, len, addr_len);
++	if (!skb_queue_empty(&sk->sk_receive_queue))
++		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+@@ -925,9 +926,6 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		goto out;
+ 	rcu_read_unlock();
+ 
+-	if (!skb_queue_empty(&sk->sk_receive_queue))
+-		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+-
+ 	lock_sock(sk);
+ bytes_ready:
+ 	while (copied != len) {
+@@ -1125,7 +1123,7 @@ wait_for_memory:
+ 		err = sk_stream_wait_memory(sk, &timeo);
+ 		if (err) {
+ 			if (m && m != psock->cork)
+-				free_start_sg(sk, m);
++				free_start_sg(sk, m, true);
+ 			goto out_err;
+ 		}
+ 	}
+@@ -1467,10 +1465,16 @@ static void smap_destroy_psock(struct rcu_head *rcu)
+ 	schedule_work(&psock->gc_work);
+ }
+ 
++static bool psock_is_smap_sk(struct sock *sk)
++{
++	return inet_csk(sk)->icsk_ulp_ops == &bpf_tcp_ulp_ops;
++}
++
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock)
+ {
+ 	if (refcount_dec_and_test(&psock->refcnt)) {
+-		tcp_cleanup_ulp(sock);
++		if (psock_is_smap_sk(sock))
++			tcp_cleanup_ulp(sock);
+ 		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
+ 		write_unlock_bh(&sock->sk_callback_lock);
+@@ -1584,13 +1588,13 @@ static void smap_gc_work(struct work_struct *w)
+ 		bpf_prog_put(psock->bpf_tx_msg);
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -1897,6 +1901,10 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 	 * doesn't update user data.
+ 	 */
+ 	if (psock) {
++		if (!psock_is_smap_sk(sock)) {
++			err = -EBUSY;
++			goto out_progs;
++		}
+ 		if (READ_ONCE(psock->bpf_parse) && parse) {
+ 			err = -EBUSY;
+ 			goto out_progs;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index adbe21c8876e..82e8edef6ea0 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2865,6 +2865,15 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	u64 umin_val, umax_val;
+ 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
+ 
++	if (insn_bitness == 32) {
++		/* Relevant for 32-bit RSH: Information can propagate towards
++		 * LSB, so it isn't sufficient to only truncate the output to
++		 * 32 bits.
++		 */
++		coerce_reg_to_size(dst_reg, 4);
++		coerce_reg_to_size(&src_reg, 4);
++	}
++
+ 	smin_val = src_reg.smin_value;
+ 	smax_val = src_reg.smax_value;
+ 	umin_val = src_reg.umin_value;
+@@ -3100,7 +3109,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	if (BPF_CLASS(insn->code) != BPF_ALU64) {
+ 		/* 32-bit ALU ops are (32,32)->32 */
+ 		coerce_reg_to_size(dst_reg, 4);
+-		coerce_reg_to_size(&src_reg, 4);
+ 	}
+ 
+ 	__reg_deduce_bounds(dst_reg);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 56a0fed30c0a..505a41c42b96 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1295,7 +1295,7 @@ static void init_numa_topology_type(void)
+ 
+ 	n = sched_max_numa_distance;
+ 
+-	if (sched_domains_numa_levels <= 1) {
++	if (sched_domains_numa_levels <= 2) {
+ 		sched_numa_topology_type = NUMA_DIRECT;
+ 		return;
+ 	}
+@@ -1380,9 +1380,6 @@ void sched_init_numa(void)
+ 			break;
+ 	}
+ 
+-	if (!level)
+-		return;
+-
+ 	/*
+ 	 * 'level' contains the number of unique distances
+ 	 *
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 4d3c922ea1a1..8534ea2978c5 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -96,7 +96,7 @@ static long madvise_behavior(struct vm_area_struct *vma,
+ 		new_flags |= VM_DONTDUMP;
+ 		break;
+ 	case MADV_DODUMP:
+-		if (new_flags & VM_SPECIAL) {
++		if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) {
+ 			error = -EINVAL;
+ 			goto out;
+ 		}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9dfd145eedcc..963ee2e88861 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2272,14 +2272,21 @@ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+ 	.arg2_type      = ARG_ANYTHING,
+ };
+ 
++#define sk_msg_iter_var(var)			\
++	do {					\
++		var++;				\
++		if (var == MAX_SKB_FRAGS)	\
++			var = 0;		\
++	} while (0)
++
+ BPF_CALL_4(bpf_msg_pull_data,
+ 	   struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+-	unsigned int len = 0, offset = 0, copy = 0;
++	unsigned int len = 0, offset = 0, copy = 0, poffset = 0;
++	int bytes = end - start, bytes_sg_total;
+ 	struct scatterlist *sg = msg->sg_data;
+ 	int first_sg, last_sg, i, shift;
+ 	unsigned char *p, *to, *from;
+-	int bytes = end - start;
+ 	struct page *page;
+ 
+ 	if (unlikely(flags || end <= start))
+@@ -2289,21 +2296,22 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	i = msg->sg_start;
+ 	do {
+ 		len = sg[i].length;
+-		offset += len;
+ 		if (start < offset + len)
+ 			break;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		offset += len;
++		sk_msg_iter_var(i);
+ 	} while (i != msg->sg_end);
+ 
+ 	if (unlikely(start >= offset + len))
+ 		return -EINVAL;
+ 
+-	if (!msg->sg_copy[i] && bytes <= len)
+-		goto out;
+-
+ 	first_sg = i;
++	/* The start may point into the sg element so we need to also
++	 * account for the headroom.
++	 */
++	bytes_sg_total = start - offset + bytes;
++	if (!msg->sg_copy[i] && bytes_sg_total <= len)
++		goto out;
+ 
+ 	/* At this point we need to linearize multiple scatterlist
+ 	 * elements or a single shared page. Either way we need to
+@@ -2317,37 +2325,32 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 */
+ 	do {
+ 		copy += sg[i].length;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
+-		if (bytes < copy)
++		sk_msg_iter_var(i);
++		if (bytes_sg_total <= copy)
+ 			break;
+ 	} while (i != msg->sg_end);
+ 	last_sg = i;
+ 
+-	if (unlikely(copy < end - start))
++	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+ 	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+-	offset = 0;
+ 
+ 	i = first_sg;
+ 	do {
+ 		from = sg_virt(&sg[i]);
+ 		len = sg[i].length;
+-		to = p + offset;
++		to = p + poffset;
+ 
+ 		memcpy(to, from, len);
+-		offset += len;
++		poffset += len;
+ 		sg[i].length = 0;
+ 		put_page(sg_page(&sg[i]));
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (i != last_sg);
+ 
+ 	sg[first_sg].length = copy;
+@@ -2357,11 +2360,15 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 * had a single entry though we can just replace it and
+ 	 * be done. Otherwise walk the ring and shift the entries.
+ 	 */
+-	shift = last_sg - first_sg - 1;
++	WARN_ON_ONCE(last_sg == first_sg);
++	shift = last_sg > first_sg ?
++		last_sg - first_sg - 1 :
++		MAX_SKB_FRAGS - first_sg + last_sg - 1;
+ 	if (!shift)
+ 		goto out;
+ 
+-	i = first_sg + 1;
++	i = first_sg;
++	sk_msg_iter_var(i);
+ 	do {
+ 		int move_from;
+ 
+@@ -2378,15 +2385,13 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 		sg[move_from].page_link = 0;
+ 		sg[move_from].offset = 0;
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (1);
+ 	msg->sg_end -= shift;
+ 	if (msg->sg_end < 0)
+ 		msg->sg_end += MAX_SKB_FRAGS;
+ out:
+-	msg->data = sg_virt(&sg[i]) + start - offset;
++	msg->data = sg_virt(&sg[first_sg]) + start - offset;
+ 	msg->data_end = msg->data + bytes;
+ 
+ 	return 0;
+diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig
+index bbfc356cb1b5..d7ecae5e93ea 100644
+--- a/net/ipv4/netfilter/Kconfig
++++ b/net/ipv4/netfilter/Kconfig
+@@ -122,6 +122,10 @@ config NF_NAT_IPV4
+ 
+ if NF_NAT_IPV4
+ 
++config NF_NAT_MASQUERADE_IPV4
++	bool
++
++if NF_TABLES
+ config NFT_CHAIN_NAT_IPV4
+ 	depends on NF_TABLES_IPV4
+ 	tristate "IPv4 nf_tables nat chain support"
+@@ -131,9 +135,6 @@ config NFT_CHAIN_NAT_IPV4
+ 	  packet transformations such as the source, destination address and
+ 	  source and destination ports.
+ 
+-config NF_NAT_MASQUERADE_IPV4
+-	bool
+-
+ config NFT_MASQ_IPV4
+ 	tristate "IPv4 masquerading support for nf_tables"
+ 	depends on NF_TABLES_IPV4
+@@ -151,6 +152,7 @@ config NFT_REDIR_IPV4
+ 	help
+ 	  This is the expression that provides IPv4 redirect support for
+ 	  nf_tables.
++endif # NF_TABLES
+ 
+ config NF_NAT_SNMP_BASIC
+ 	tristate "Basic SNMP-ALG support"
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 6449a1c2283b..f0f5fedb8caa 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -947,8 +947,8 @@ static void ieee80211_rx_mgmt_deauth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	if (len < IEEE80211_DEAUTH_FRAME_LEN)
+ 		return;
+ 
+-	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM BSSID=%pM (reason: %d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, reason);
++	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (reason: %d)\n", mgmt->bssid, reason);
+ 	sta_info_destroy_addr(sdata, mgmt->sa);
+ }
+ 
+@@ -966,9 +966,9 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg);
+ 	auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
+ 
+-	ibss_dbg(sdata,
+-		 "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction);
++	ibss_dbg(sdata, "RX Auth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (auth_transaction=%d)\n",
++		 mgmt->bssid, auth_transaction);
+ 
+ 	if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1)
+ 		return;
+@@ -1175,10 +1175,10 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
+ 		rx_timestamp = drv_get_tsf(local, sdata);
+ 	}
+ 
+-	ibss_dbg(sdata,
+-		 "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n",
++	ibss_dbg(sdata, "RX beacon SA=%pM BSSID=%pM TSF=0x%llx\n",
+ 		 mgmt->sa, mgmt->bssid,
+-		 (unsigned long long)rx_timestamp,
++		 (unsigned long long)rx_timestamp);
++	ibss_dbg(sdata, "\tBCN=0x%llx diff=%lld @%lu\n",
+ 		 (unsigned long long)beacon_timestamp,
+ 		 (unsigned long long)(rx_timestamp - beacon_timestamp),
+ 		 jiffies);
+@@ -1537,9 +1537,9 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata,
+ 
+ 	tx_last_beacon = drv_tx_last_beacon(local);
+ 
+-	ibss_dbg(sdata,
+-		 "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon);
++	ibss_dbg(sdata, "RX ProbeReq SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (tx_last_beacon=%d)\n",
++		 mgmt->bssid, tx_last_beacon);
+ 
+ 	if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da))
+ 		return;
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index fb73451ed85e..66cbddd65b47 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -255,8 +255,27 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 
+ 	flush_work(&local->radar_detected_work);
+ 	rtnl_lock();
+-	list_for_each_entry(sdata, &local->interfaces, list)
++	list_for_each_entry(sdata, &local->interfaces, list) {
++		/*
++		 * XXX: there may be more work for other vif types and even
++		 * for station mode: a good thing would be to run most of
++		 * the iface type's dependent _stop (ieee80211_mg_stop,
++		 * ieee80211_ibss_stop) etc...
++		 * For now, fix only the specific bug that was seen: race
++		 * between csa_connection_drop_work and us.
++		 */
++		if (sdata->vif.type == NL80211_IFTYPE_STATION) {
++			/*
++			 * This worker is scheduled from the iface worker that
++			 * runs on mac80211's workqueue, so we can't be
++			 * scheduling this worker after the cancel right here.
++			 * The exception is ieee80211_chswitch_done.
++			 * Then we can have a race...
++			 */
++			cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work);
++		}
+ 		flush_delayed_work(&sdata->dec_tailroom_needed_wk);
++	}
+ 	ieee80211_scan_cancel(local);
+ 
+ 	/* make sure any new ROC will consider local->in_reconfig */
+@@ -470,10 +489,7 @@ static const struct ieee80211_vht_cap mac80211_vht_capa_mod_mask = {
+ 		cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_160 |
+-			    IEEE80211_VHT_CAP_RXSTBC_1 |
+-			    IEEE80211_VHT_CAP_RXSTBC_2 |
+-			    IEEE80211_VHT_CAP_RXSTBC_3 |
+-			    IEEE80211_VHT_CAP_RXSTBC_4 |
++			    IEEE80211_VHT_CAP_RXSTBC_MASK |
+ 			    IEEE80211_VHT_CAP_TXSTBC |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+@@ -1182,6 +1198,7 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	unregister_inet6addr_notifier(&local->ifa6_notifier);
+ #endif
++	ieee80211_txq_teardown_flows(local);
+ 
+ 	rtnl_lock();
+ 
+@@ -1210,7 +1227,6 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ 	skb_queue_purge(&local->skb_queue);
+ 	skb_queue_purge(&local->skb_queue_unreliable);
+ 	skb_queue_purge(&local->skb_queue_tdls_chsw);
+-	ieee80211_txq_teardown_flows(local);
+ 
+ 	destroy_workqueue(local->workqueue);
+ 	wiphy_unregister(local->hw.wiphy);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 35ad3983ae4b..daf9db3c8f24 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -572,6 +572,10 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 		forward = false;
+ 		reply = true;
+ 		target_metric = 0;
++
++		if (SN_GT(target_sn, ifmsh->sn))
++			ifmsh->sn = target_sn;
++
+ 		if (time_after(jiffies, ifmsh->last_sn_update +
+ 					net_traversal_jiffies(sdata)) ||
+ 		    time_before(jiffies, ifmsh->last_sn_update)) {
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index a59187c016e0..b046bf95eb3c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -978,6 +978,10 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 	 */
+ 
+ 	if (sdata->reserved_chanctx) {
++		struct ieee80211_supported_band *sband = NULL;
++		struct sta_info *mgd_sta = NULL;
++		enum ieee80211_sta_rx_bandwidth bw = IEEE80211_STA_RX_BW_20;
++
+ 		/*
+ 		 * with multi-vif csa driver may call ieee80211_csa_finish()
+ 		 * many times while waiting for other interfaces to use their
+@@ -986,6 +990,48 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 		if (sdata->reserved_ready)
+ 			goto out;
+ 
++		if (sdata->vif.bss_conf.chandef.width !=
++		    sdata->csa_chandef.width) {
++			/*
++			 * For managed interface, we need to also update the AP
++			 * station bandwidth and align the rate scale algorithm
++			 * on the bandwidth change. Here we only consider the
++			 * bandwidth of the new channel definition (as channel
++			 * switch flow does not have the full HT/VHT/HE
++			 * information), assuming that if additional changes are
++			 * required they would be done as part of the processing
++			 * of the next beacon from the AP.
++			 */
++			switch (sdata->csa_chandef.width) {
++			case NL80211_CHAN_WIDTH_20_NOHT:
++			case NL80211_CHAN_WIDTH_20:
++			default:
++				bw = IEEE80211_STA_RX_BW_20;
++				break;
++			case NL80211_CHAN_WIDTH_40:
++				bw = IEEE80211_STA_RX_BW_40;
++				break;
++			case NL80211_CHAN_WIDTH_80:
++				bw = IEEE80211_STA_RX_BW_80;
++				break;
++			case NL80211_CHAN_WIDTH_80P80:
++			case NL80211_CHAN_WIDTH_160:
++				bw = IEEE80211_STA_RX_BW_160;
++				break;
++			}
++
++			mgd_sta = sta_info_get(sdata, ifmgd->bssid);
++			sband =
++				local->hw.wiphy->bands[sdata->csa_chandef.chan->band];
++		}
++
++		if (sdata->vif.bss_conf.chandef.width >
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		ret = ieee80211_vif_use_reserved_context(sdata);
+ 		if (ret) {
+ 			sdata_info(sdata,
+@@ -996,6 +1042,13 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 			goto out;
+ 		}
+ 
++		if (sdata->vif.bss_conf.chandef.width <
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		goto out;
+ 	}
+ 
+@@ -1217,6 +1270,16 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ 					 cbss->beacon_interval));
+ 	return;
+  drop_connection:
++	/*
++	 * This is just so that the disconnect flow will know that
++	 * we were trying to switch channel and failed. In case the
++	 * mode is 1 (we are not allowed to Tx), we will know not to
++	 * send a deauthentication frame. Those two fields will be
++	 * reset when the disconnection worker runs.
++	 */
++	sdata->vif.csa_active = true;
++	sdata->csa_block_tx = csa_ie.mode;
++
+ 	ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work);
+ 	mutex_unlock(&local->chanctx_mtx);
+ 	mutex_unlock(&local->mtx);
+@@ -2400,6 +2463,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 	u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
++	bool tx;
+ 
+ 	sdata_lock(sdata);
+ 	if (!ifmgd->associated) {
+@@ -2407,6 +2471,8 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 		return;
+ 	}
+ 
++	tx = !sdata->csa_block_tx;
++
+ 	/* AP is probably out of range (or not reachable for another reason) so
+ 	 * remove the bss struct for that AP.
+ 	 */
+@@ -2414,7 +2480,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 
+ 	ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ 			       WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
+-			       true, frame_buf);
++			       tx, frame_buf);
+ 	mutex_lock(&local->mtx);
+ 	sdata->vif.csa_active = false;
+ 	ifmgd->csa_waiting_bcn = false;
+@@ -2425,7 +2491,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	}
+ 	mutex_unlock(&local->mtx);
+ 
+-	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true,
++	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx,
+ 				    WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY);
+ 
+ 	sdata_unlock(sdata);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index fa1f1e63a264..9b3b069e418a 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3073,27 +3073,18 @@ void ieee80211_clear_fast_xmit(struct sta_info *sta)
+ }
+ 
+ static bool ieee80211_amsdu_realloc_pad(struct ieee80211_local *local,
+-					struct sk_buff *skb, int headroom,
+-					int *subframe_len)
++					struct sk_buff *skb, int headroom)
+ {
+-	int amsdu_len = *subframe_len + sizeof(struct ethhdr);
+-	int padding = (4 - amsdu_len) & 3;
+-
+-	if (skb_headroom(skb) < headroom || skb_tailroom(skb) < padding) {
++	if (skb_headroom(skb) < headroom) {
+ 		I802_DEBUG_INC(local->tx_expand_skb_head);
+ 
+-		if (pskb_expand_head(skb, headroom, padding, GFP_ATOMIC)) {
++		if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) {
+ 			wiphy_debug(local->hw.wiphy,
+ 				    "failed to reallocate TX buffer\n");
+ 			return false;
+ 		}
+ 	}
+ 
+-	if (padding) {
+-		*subframe_len += padding;
+-		skb_put_zero(skb, padding);
+-	}
+-
+ 	return true;
+ }
+ 
+@@ -3117,8 +3108,7 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr),
+-					 &subframe_len))
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+@@ -3184,7 +3174,8 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	void *data;
+ 	bool ret = false;
+ 	unsigned int orig_len;
+-	int n = 1, nfrags;
++	int n = 2, nfrags, pad = 0;
++	u16 hdrlen;
+ 
+ 	if (!ieee80211_hw_check(&local->hw, TX_AMSDU))
+ 		return false;
+@@ -3217,9 +3208,6 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (skb->len + head->len > max_amsdu_len)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+-		goto out;
+-
+ 	nfrags = 1 + skb_shinfo(skb)->nr_frags;
+ 	nfrags += 1 + skb_shinfo(head)->nr_frags;
+ 	frag_tail = &skb_shinfo(head)->frag_list;
+@@ -3235,10 +3223,24 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (max_frags && nfrags > max_frags)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + 2,
+-					 &subframe_len))
++	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+ 		goto out;
+ 
++	/*
++	 * Pad out the previous subframe to a multiple of 4 by adding the
++	 * padding to the next one, that's being added. Note that head->len
++	 * is the length of the full A-MSDU, but that works since each time
++	 * we add a new subframe we pad out the previous one to a multiple
++	 * of 4 and thus it no longer matters in the next round.
++	 */
++	hdrlen = fast_tx->hdr_len - sizeof(rfc1042_header);
++	if ((head->len - hdrlen) & 3)
++		pad = 4 - ((head->len - hdrlen) & 3);
++
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) +
++						     2 + pad))
++		goto out_recalc;
++
+ 	ret = true;
+ 	data = skb_push(skb, ETH_ALEN + 2);
+ 	memmove(data, data + ETH_ALEN + 2, 2 * ETH_ALEN);
+@@ -3248,15 +3250,19 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	memcpy(data, &len, 2);
+ 	memcpy(data + 2, rfc1042_header, sizeof(rfc1042_header));
+ 
++	memset(skb_push(skb, pad), 0, pad);
++
+ 	head->len += skb->len;
+ 	head->data_len += skb->len;
+ 	*frag_tail = skb;
+ 
+-	flow->backlog += head->len - orig_len;
+-	tin->backlog_bytes += head->len - orig_len;
+-
+-	fq_recalc_backlog(fq, tin, flow);
++out_recalc:
++	if (head->len != orig_len) {
++		flow->backlog += head->len - orig_len;
++		tin->backlog_bytes += head->len - orig_len;
+ 
++		fq_recalc_backlog(fq, tin, flow);
++	}
+ out:
+ 	spin_unlock_bh(&fq->lock);
+ 
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index d02fbfec3783..93b5bb849ad7 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1120,7 +1120,7 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ {
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 	const struct ieee80211_reg_rule *rrule;
+-	struct ieee80211_wmm_ac *wmm_ac;
++	const struct ieee80211_wmm_ac *wmm_ac;
+ 	u16 center_freq = 0;
+ 
+ 	if (sdata->vif.type != NL80211_IFTYPE_AP &&
+@@ -1139,20 +1139,19 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ 
+ 	rrule = freq_reg_info(sdata->wdev.wiphy, MHZ_TO_KHZ(center_freq));
+ 
+-	if (IS_ERR_OR_NULL(rrule) || !rrule->wmm_rule) {
++	if (IS_ERR_OR_NULL(rrule) || !rrule->has_wmm) {
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_AP)
+-		wmm_ac = &rrule->wmm_rule->ap[ac];
++		wmm_ac = &rrule->wmm_rule.ap[ac];
+ 	else
+-		wmm_ac = &rrule->wmm_rule->client[ac];
++		wmm_ac = &rrule->wmm_rule.client[ac];
+ 	qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min);
+ 	qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max);
+ 	qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn);
+-	qparam->txop = !qparam->txop ? wmm_ac->cot / 32 :
+-		min_t(u16, qparam->txop, wmm_ac->cot / 32);
++	qparam->txop = min_t(u16, qparam->txop, wmm_ac->cot / 32);
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index f0a1c536ef15..e6d5c87f0d96 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -740,13 +740,13 @@ config NETFILTER_XT_TARGET_CHECKSUM
+ 	depends on NETFILTER_ADVANCED
+ 	---help---
+ 	  This option adds a `CHECKSUM' target, which can be used in the iptables mangle
+-	  table.
++	  table to work around buggy DHCP clients in virtualized environments.
+ 
+-	  You can use this target to compute and fill in the checksum in
+-	  a packet that lacks a checksum.  This is particularly useful,
+-	  if you need to work around old applications such as dhcp clients,
+-	  that do not work well with checksum offloads, but don't want to disable
+-	  checksum offload in your device.
++	  Some old DHCP clients drop packets because they are not aware
++	  that the checksum would normally be offloaded to hardware and
++	  thus should be considered valid.
++	  This target can be used to fill in the checksum using iptables
++	  when such packets are sent via a virtual network device.
+ 
+ 	  To compile it as a module, choose M here.  If unsure, say N.
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f5745e4c6513..77d690a87144 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4582,6 +4582,7 @@ static int nft_flush_set(const struct nft_ctx *ctx,
+ 	}
+ 	set->ndeact++;
+ 
++	nft_set_elem_deactivate(ctx->net, set, elem);
+ 	nft_trans_elem_set(trans) = set;
+ 	nft_trans_elem(trans) = *elem;
+ 	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index ea4ba551abb2..d33094f4ec41 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -233,6 +233,7 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
+ 	int err;
+ 
+ 	if (verdict == NF_ACCEPT ||
++	    verdict == NF_REPEAT ||
+ 	    verdict == NF_STOP) {
+ 		rcu_read_lock();
+ 		ct_hook = rcu_dereference(nf_ct_hook);
+diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c
+index 9f4151ec3e06..6c7aa6a0a0d2 100644
+--- a/net/netfilter/xt_CHECKSUM.c
++++ b/net/netfilter/xt_CHECKSUM.c
+@@ -16,6 +16,9 @@
+ #include <linux/netfilter/x_tables.h>
+ #include <linux/netfilter/xt_CHECKSUM.h>
+ 
++#include <linux/netfilter_ipv4/ip_tables.h>
++#include <linux/netfilter_ipv6/ip6_tables.h>
++
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Michael S. Tsirkin <mst@redhat.com>");
+ MODULE_DESCRIPTION("Xtables: checksum modification");
+@@ -25,7 +28,7 @@ MODULE_ALIAS("ip6t_CHECKSUM");
+ static unsigned int
+ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+-	if (skb->ip_summed == CHECKSUM_PARTIAL)
++	if (skb->ip_summed == CHECKSUM_PARTIAL && !skb_is_gso(skb))
+ 		skb_checksum_help(skb);
+ 
+ 	return XT_CONTINUE;
+@@ -34,6 +37,8 @@ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct xt_CHECKSUM_info *einfo = par->targinfo;
++	const struct ip6t_ip6 *i6 = par->entryinfo;
++	const struct ipt_ip *i4 = par->entryinfo;
+ 
+ 	if (einfo->operation & ~XT_CHECKSUM_OP_FILL) {
+ 		pr_info_ratelimited("unsupported CHECKSUM operation %x\n",
+@@ -43,6 +48,21 @@ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ 	if (!einfo->operation)
+ 		return -EINVAL;
+ 
++	switch (par->family) {
++	case NFPROTO_IPV4:
++		if (i4->proto == IPPROTO_UDP &&
++		    (i4->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	case NFPROTO_IPV6:
++		if ((i6->flags & IP6T_F_PROTO) &&
++		    i6->proto == IPPROTO_UDP &&
++		    (i6->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	}
++
++	pr_warn_once("CHECKSUM should be avoided.  If really needed, restrict with \"-p udp\" and only use in OUTPUT\n");
+ 	return 0;
+ }
+ 
+diff --git a/net/netfilter/xt_cluster.c b/net/netfilter/xt_cluster.c
+index dfbdbb2fc0ed..51d0c257e7a5 100644
+--- a/net/netfilter/xt_cluster.c
++++ b/net/netfilter/xt_cluster.c
+@@ -125,6 +125,7 @@ xt_cluster_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ {
+ 	struct xt_cluster_match_info *info = par->matchinfo;
++	int ret;
+ 
+ 	if (info->total_nodes > XT_CLUSTER_NODES_MAX) {
+ 		pr_info_ratelimited("you have exceeded the maximum number of cluster nodes (%u > %u)\n",
+@@ -135,7 +136,17 @@ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ 		pr_info_ratelimited("node mask cannot exceed total number of nodes\n");
+ 		return -EDOM;
+ 	}
+-	return 0;
++
++	ret = nf_ct_netns_get(par->net, par->family);
++	if (ret < 0)
++		pr_info_ratelimited("cannot load conntrack support for proto=%u\n",
++				    par->family);
++	return ret;
++}
++
++static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par)
++{
++	nf_ct_netns_put(par->net, par->family);
+ }
+ 
+ static struct xt_match xt_cluster_match __read_mostly = {
+@@ -144,6 +155,7 @@ static struct xt_match xt_cluster_match __read_mostly = {
+ 	.match		= xt_cluster_mt,
+ 	.checkentry	= xt_cluster_mt_checkentry,
+ 	.matchsize	= sizeof(struct xt_cluster_match_info),
++	.destroy	= xt_cluster_mt_destroy,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
+index 9b16402f29af..3e7d259e5d8d 100644
+--- a/net/netfilter/xt_hashlimit.c
++++ b/net/netfilter/xt_hashlimit.c
+@@ -1057,7 +1057,7 @@ static struct xt_match hashlimit_mt_reg[] __read_mostly = {
+ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 	__acquires(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket;
+ 
+ 	spin_lock_bh(&htable->lock);
+@@ -1074,7 +1074,7 @@ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 
+ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	*pos = ++(*bucket);
+@@ -1088,7 +1088,7 @@ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ static void dl_seq_stop(struct seq_file *s, void *v)
+ 	__releases(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	if (!IS_ERR(bucket))
+@@ -1130,7 +1130,7 @@ static void dl_seq_print(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1145,7 +1145,7 @@ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1160,7 +1160,7 @@ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 			    struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1174,7 +1174,7 @@ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 
+ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = (unsigned int *)v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1188,7 +1188,7 @@ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1202,7 +1202,7 @@ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+diff --git a/net/tipc/diag.c b/net/tipc/diag.c
+index aaabb0b776dd..73137f4aeb68 100644
+--- a/net/tipc/diag.c
++++ b/net/tipc/diag.c
+@@ -84,7 +84,9 @@ static int tipc_sock_diag_handler_dump(struct sk_buff *skb,
+ 
+ 	if (h->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = tipc_dump_start,
+ 			.dump = tipc_diag_dump,
++			.done = tipc_dump_done,
+ 		};
+ 		netlink_dump_start(net->diag_nlsk, skb, h, &c);
+ 		return 0;
+diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c
+index 6ff2254088f6..99ee419210ba 100644
+--- a/net/tipc/netlink.c
++++ b/net/tipc/netlink.c
+@@ -167,7 +167,9 @@ static const struct genl_ops tipc_genl_v2_ops[] = {
+ 	},
+ 	{
+ 		.cmd	= TIPC_NL_SOCK_GET,
++		.start = tipc_dump_start,
+ 		.dumpit	= tipc_nl_sk_dump,
++		.done	= tipc_dump_done,
+ 		.policy = tipc_nl_policy,
+ 	},
+ 	{
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index ac8ca238c541..bdb4a9a5a83a 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,45 +3233,69 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct net *net = sock_net(skb->sk);
+-	struct tipc_net *tn = tipc_net(net);
+-	const struct bucket_table *tbl;
+-	u32 prev_portid = cb->args[1];
+-	u32 tbl_id = cb->args[0];
+-	struct rhash_head *pos;
++	struct rhashtable_iter *iter = (void *)cb->args[0];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+-	rcu_read_lock();
+-	tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+-	for (; tbl_id < tbl->size; tbl_id++) {
+-		rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) {
+-			spin_lock_bh(&tsk->sk.sk_lock.slock);
+-			if (prev_portid && prev_portid != tsk->portid) {
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
++	rhashtable_walk_start(iter);
++	while ((tsk = rhashtable_walk_next(iter)) != NULL) {
++		if (IS_ERR(tsk)) {
++			err = PTR_ERR(tsk);
++			if (err == -EAGAIN) {
++				err = 0;
+ 				continue;
+ 			}
++			break;
++		}
+ 
+-			err = skb_handler(skb, cb, tsk);
+-			if (err) {
+-				prev_portid = tsk->portid;
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
+-				goto out;
+-			}
+-
+-			prev_portid = 0;
+-			spin_unlock_bh(&tsk->sk.sk_lock.slock);
++		sock_hold(&tsk->sk);
++		rhashtable_walk_stop(iter);
++		lock_sock(&tsk->sk);
++		err = skb_handler(skb, cb, tsk);
++		if (err) {
++			release_sock(&tsk->sk);
++			sock_put(&tsk->sk);
++			goto out;
+ 		}
++		release_sock(&tsk->sk);
++		rhashtable_walk_start(iter);
++		sock_put(&tsk->sk);
+ 	}
++	rhashtable_walk_stop(iter);
+ out:
+-	rcu_read_unlock();
+-	cb->args[0] = tbl_id;
+-	cb->args[1] = prev_portid;
+-
+ 	return skb->len;
+ }
+ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
++int tipc_dump_start(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct net *net = sock_net(cb->skb->sk);
++	struct tipc_net *tn = tipc_net(net);
++
++	if (!iter) {
++		iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++		if (!iter)
++			return -ENOMEM;
++
++		cb->args[0] = (long)iter;
++	}
++
++	rhashtable_walk_enter(&tn->sk_rht, iter);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int tipc_dump_done(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *hti = (void *)cb->args[0];
++
++	rhashtable_walk_exit(hti);
++	kfree(hti);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_done);
++
+ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
+ 			   struct tipc_sock *tsk, u32 sk_filter_state,
+ 			   u64 (*tipc_diag_gen_cookie)(struct sock *sk))
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index aff9b2ae5a1f..d43032e26532 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -68,4 +68,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 		    int (*skb_handler)(struct sk_buff *skb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
++int tipc_dump_start(struct netlink_callback *cb);
++int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 80bc986c79e5..733ccf867972 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -667,13 +667,13 @@ static int nl80211_msg_put_wmm_rules(struct sk_buff *msg,
+ 			goto nla_put_failure;
+ 
+ 		if (nla_put_u16(msg, NL80211_WMMR_CW_MIN,
+-				rule->wmm_rule->client[j].cw_min) ||
++				rule->wmm_rule.client[j].cw_min) ||
+ 		    nla_put_u16(msg, NL80211_WMMR_CW_MAX,
+-				rule->wmm_rule->client[j].cw_max) ||
++				rule->wmm_rule.client[j].cw_max) ||
+ 		    nla_put_u8(msg, NL80211_WMMR_AIFSN,
+-			       rule->wmm_rule->client[j].aifsn) ||
+-		    nla_put_u8(msg, NL80211_WMMR_TXOP,
+-			       rule->wmm_rule->client[j].cot))
++			       rule->wmm_rule.client[j].aifsn) ||
++		    nla_put_u16(msg, NL80211_WMMR_TXOP,
++			        rule->wmm_rule.client[j].cot))
+ 			goto nla_put_failure;
+ 
+ 		nla_nest_end(msg, nl_wmm_rule);
+@@ -764,9 +764,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
+ 
+ 	if (large) {
+ 		const struct ieee80211_reg_rule *rule =
+-			freq_reg_info(wiphy, chan->center_freq);
++			freq_reg_info(wiphy, MHZ_TO_KHZ(chan->center_freq));
+ 
+-		if (!IS_ERR(rule) && rule->wmm_rule) {
++		if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) {
+ 			if (nl80211_msg_put_wmm_rules(msg, rule))
+ 				goto nla_put_failure;
+ 		}
+@@ -12099,6 +12099,7 @@ static int nl80211_update_ft_ies(struct sk_buff *skb, struct genl_info *info)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (!info->attrs[NL80211_ATTR_MDID] ||
++	    !info->attrs[NL80211_ATTR_IE] ||
+ 	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE]))
+ 		return -EINVAL;
+ 
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 4fc66a117b7d..2f702adf2912 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -425,36 +425,23 @@ static const struct ieee80211_regdomain *
+ reg_copy_regd(const struct ieee80211_regdomain *src_regd)
+ {
+ 	struct ieee80211_regdomain *regd;
+-	int size_of_regd, size_of_wmms;
++	int size_of_regd;
+ 	unsigned int i;
+-	struct ieee80211_wmm_rule *d_wmm, *s_wmm;
+ 
+ 	size_of_regd =
+ 		sizeof(struct ieee80211_regdomain) +
+ 		src_regd->n_reg_rules * sizeof(struct ieee80211_reg_rule);
+-	size_of_wmms = src_regd->n_wmm_rules *
+-		sizeof(struct ieee80211_wmm_rule);
+ 
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	memcpy(regd, src_regd, sizeof(struct ieee80211_regdomain));
+ 
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)src_regd + size_of_regd);
+-	memcpy(d_wmm, s_wmm, size_of_wmms);
+-
+-	for (i = 0; i < src_regd->n_reg_rules; i++) {
++	for (i = 0; i < src_regd->n_reg_rules; i++)
+ 		memcpy(&regd->reg_rules[i], &src_regd->reg_rules[i],
+ 		       sizeof(struct ieee80211_reg_rule));
+-		if (!src_regd->reg_rules[i].wmm_rule)
+-			continue;
+ 
+-		regd->reg_rules[i].wmm_rule = d_wmm +
+-			(src_regd->reg_rules[i].wmm_rule - s_wmm) /
+-			sizeof(struct ieee80211_wmm_rule);
+-	}
+ 	return regd;
+ }
+ 
+@@ -860,9 +847,10 @@ static bool valid_regdb(const u8 *data, unsigned int size)
+ 	return true;
+ }
+ 
+-static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
++static void set_wmm_rule(struct ieee80211_reg_rule *rrule,
+ 			 struct fwdb_wmm_rule *wmm)
+ {
++	struct ieee80211_wmm_rule *rule = &rrule->wmm_rule;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+@@ -876,11 +864,13 @@ static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
+ 		rule->ap[i].aifsn = wmm->ap[i].aifsn;
+ 		rule->ap[i].cot = 1000 * be16_to_cpu(wmm->ap[i].cot);
+ 	}
++
++	rrule->has_wmm = true;
+ }
+ 
+ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			     const struct fwdb_country *country, int freq,
+-			     u32 *dbptr, struct ieee80211_wmm_rule *rule)
++			     struct ieee80211_reg_rule *rule)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+@@ -901,8 +891,6 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			wmm_ptr = be16_to_cpu(rrule->wmm_ptr) << 2;
+ 			wmm = (void *)((u8 *)db + wmm_ptr);
+ 			set_wmm_rule(rule, wmm);
+-			if (dbptr)
+-				*dbptr = wmm_ptr;
+ 			return 0;
+ 		}
+ 	}
+@@ -910,8 +898,7 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 	return -ENODATA;
+ }
+ 
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+-			struct ieee80211_wmm_rule *rule)
++int reg_query_regdb_wmm(char *alpha2, int freq, struct ieee80211_reg_rule *rule)
+ {
+ 	const struct fwdb_header *hdr = regdb;
+ 	const struct fwdb_country *country;
+@@ -925,8 +912,7 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ 	country = &hdr->country[0];
+ 	while (country->coll_ptr) {
+ 		if (alpha2_equal(alpha2, country->alpha2))
+-			return __regdb_query_wmm(regdb, country, freq, dbptr,
+-						 rule);
++			return __regdb_query_wmm(regdb, country, freq, rule);
+ 
+ 		country++;
+ 	}
+@@ -935,32 +921,13 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ }
+ EXPORT_SYMBOL(reg_query_regdb_wmm);
+ 
+-struct wmm_ptrs {
+-	struct ieee80211_wmm_rule *rule;
+-	u32 ptr;
+-};
+-
+-static struct ieee80211_wmm_rule *find_wmm_ptr(struct wmm_ptrs *wmm_ptrs,
+-					       u32 wmm_ptr, int n_wmms)
+-{
+-	int i;
+-
+-	for (i = 0; i < n_wmms; i++) {
+-		if (wmm_ptrs[i].ptr == wmm_ptr)
+-			return wmm_ptrs[i].rule;
+-	}
+-	return NULL;
+-}
+-
+ static int regdb_query_country(const struct fwdb_header *db,
+ 			       const struct fwdb_country *country)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+ 	struct ieee80211_regdomain *regdom;
+-	struct ieee80211_regdomain *tmp_rd;
+-	unsigned int size_of_regd, i, n_wmms = 0;
+-	struct wmm_ptrs *wmm_ptrs;
++	unsigned int size_of_regd, i;
+ 
+ 	size_of_regd = sizeof(struct ieee80211_regdomain) +
+ 		coll->n_rules * sizeof(struct ieee80211_reg_rule);
+@@ -969,12 +936,6 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 	if (!regdom)
+ 		return -ENOMEM;
+ 
+-	wmm_ptrs = kcalloc(coll->n_rules, sizeof(*wmm_ptrs), GFP_KERNEL);
+-	if (!wmm_ptrs) {
+-		kfree(regdom);
+-		return -ENOMEM;
+-	}
+-
+ 	regdom->n_reg_rules = coll->n_rules;
+ 	regdom->alpha2[0] = country->alpha2[0];
+ 	regdom->alpha2[1] = country->alpha2[1];
+@@ -1013,37 +974,11 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 				1000 * be16_to_cpu(rule->cac_timeout);
+ 		if (rule->len >= offsetofend(struct fwdb_rule, wmm_ptr)) {
+ 			u32 wmm_ptr = be16_to_cpu(rule->wmm_ptr) << 2;
+-			struct ieee80211_wmm_rule *wmm_pos =
+-				find_wmm_ptr(wmm_ptrs, wmm_ptr, n_wmms);
+-			struct fwdb_wmm_rule *wmm;
+-			struct ieee80211_wmm_rule *wmm_rule;
+-
+-			if (wmm_pos) {
+-				rrule->wmm_rule = wmm_pos;
+-				continue;
+-			}
+-			wmm = (void *)((u8 *)db + wmm_ptr);
+-			tmp_rd = krealloc(regdom, size_of_regd + (n_wmms + 1) *
+-					  sizeof(struct ieee80211_wmm_rule),
+-					  GFP_KERNEL);
+-
+-			if (!tmp_rd) {
+-				kfree(regdom);
+-				kfree(wmm_ptrs);
+-				return -ENOMEM;
+-			}
+-			regdom = tmp_rd;
+-
+-			wmm_rule = (struct ieee80211_wmm_rule *)
+-				((u8 *)regdom + size_of_regd + n_wmms *
+-				sizeof(struct ieee80211_wmm_rule));
++			struct fwdb_wmm_rule *wmm = (void *)((u8 *)db + wmm_ptr);
+ 
+-			set_wmm_rule(wmm_rule, wmm);
+-			wmm_ptrs[n_wmms].ptr = wmm_ptr;
+-			wmm_ptrs[n_wmms++].rule = wmm_rule;
++			set_wmm_rule(rrule, wmm);
+ 		}
+ 	}
+-	kfree(wmm_ptrs);
+ 
+ 	return reg_schedule_apply(regdom);
+ }
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 3c654cd7ba56..908bf5b6d89e 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1374,7 +1374,7 @@ bool ieee80211_chandef_to_operating_class(struct cfg80211_chan_def *chandef,
+ 					  u8 *op_class)
+ {
+ 	u8 vht_opclass;
+-	u16 freq = chandef->center_freq1;
++	u32 freq = chandef->center_freq1;
+ 
+ 	if (freq >= 2412 && freq <= 2472) {
+ 		if (chandef->width > NL80211_CHAN_WIDTH_40)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d14b05f68d6d..08b6369f930b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6455,6 +6455,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
++	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c
+index d78aed86af09..8ff8cb1a11f4 100644
+--- a/tools/hv/hv_fcopy_daemon.c
++++ b/tools/hv/hv_fcopy_daemon.c
+@@ -234,6 +234,7 @@ int main(int argc, char *argv[])
+ 			break;
+ 
+ 		default:
++			error = HV_E_FAIL;
+ 			syslog(LOG_ERR, "Unknown operation: %d",
+ 				buffer.hdr.operation);
+ 
+diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat
+index 56c4b3f8a01b..7c92545931e3 100755
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -759,13 +759,20 @@ class DebugfsProvider(Provider):
+             if len(vms) == 0:
+                 self.do_read = False
+ 
+-            self.paths = filter(lambda x: "{}-".format(pid) in x, vms)
++            self.paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
+ 
+         else:
+             self.paths = []
+             self.do_read = True
+         self.reset()
+ 
++    def _verify_paths(self):
++        """Remove invalid paths"""
++        for path in self.paths:
++            if not os.path.exists(os.path.join(PATH_DEBUGFS_KVM, path)):
++                self.paths.remove(path)
++                continue
++
+     def read(self, reset=0, by_guest=0):
+         """Returns a dict with format:'file name / field -> current value'.
+ 
+@@ -780,6 +787,7 @@ class DebugfsProvider(Provider):
+         # If no debugfs filtering support is available, then don't read.
+         if not self.do_read:
+             return results
++        self._verify_paths()
+ 
+         paths = self.paths
+         if self._pid == 0:
+@@ -1162,6 +1170,9 @@ class Tui(object):
+ 
+             return sorted_items
+ 
++        if not self._is_running_guest(self.stats.pid_filter):
++            # leave final data on screen
++            return
+         row = 3
+         self.screen.move(row, 0)
+         self.screen.clrtobot()
+@@ -1219,10 +1230,10 @@ class Tui(object):
+         (x, term_width) = self.screen.getmaxyx()
+         row = 2
+         for line in text:
+-            start = (term_width - len(line)) / 2
++            start = (term_width - len(line)) // 2
+             self.screen.addstr(row, start, line)
+             row += 1
+-        self.screen.addstr(row + 1, (term_width - len(hint)) / 2, hint,
++        self.screen.addstr(row + 1, (term_width - len(hint)) // 2, hint,
+                            curses.A_STANDOUT)
+         self.screen.getkey()
+ 
+@@ -1319,6 +1330,12 @@ class Tui(object):
+                 msg = '"' + str(val) + '": Invalid value'
+         self._refresh_header()
+ 
++    def _is_running_guest(self, pid):
++        """Check if pid is still a running process."""
++        if not pid:
++            return True
++        return os.path.isdir(os.path.join('/proc/', str(pid)))
++
+     def _show_vm_selection_by_guest(self):
+         """Draws guest selection mask.
+ 
+@@ -1346,7 +1363,7 @@ class Tui(object):
+             if not guest or guest == '0':
+                 break
+             if guest.isdigit():
+-                if not os.path.isdir(os.path.join('/proc/', guest)):
++                if not self._is_running_guest(guest):
+                     msg = '"' + guest + '": Not a running process'
+                     continue
+                 pid = int(guest)
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 20e7d74d86cd..10a44e946f77 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -22,15 +22,16 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
+ 
+ #endif
+ 
+-#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ int arch__choose_best_symbol(struct symbol *syma,
+ 			     struct symbol *symb __maybe_unused)
+ {
+ 	char *sym = syma->name;
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ 	/* Skip over any initial dot */
+ 	if (*sym == '.')
+ 		sym++;
++#endif
+ 
+ 	/* Avoid "SyS" kernel syscall aliases */
+ 	if (strlen(sym) >= 3 && !strncmp(sym, "SyS", 3))
+@@ -41,6 +42,7 @@ int arch__choose_best_symbol(struct symbol *syma,
+ 	return SYMBOL_A;
+ }
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ /* Allow matching against dot variants */
+ int arch__compare_symbol_names(const char *namea, const char *nameb)
+ {
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index f91775b4bc3c..3b05219c3ed7 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -245,8 +245,14 @@ find_target:
+ 
+ indirect_call:
+ 	tok = strchr(endptr, '*');
+-	if (tok != NULL)
+-		ops->target.addr = strtoull(tok + 1, NULL, 16);
++	if (tok != NULL) {
++		endptr++;
++
++		/* Indirect call can use a non-rip register and offset: callq  *0x8(%rbx).
++		 * Do not parse such instruction.  */
++		if (strstr(endptr, "(%r") == NULL)
++			ops->target.addr = strtoull(endptr, NULL, 16);
++	}
+ 	goto find_target;
+ }
+ 
+@@ -275,7 +281,19 @@ bool ins__is_call(const struct ins *ins)
+ 	return ins->ops == &call_ops || ins->ops == &s390_call_ops;
+ }
+ 
+-static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *ops, struct map_symbol *ms)
++/*
++ * Prevents from matching commas in the comment section, e.g.:
++ * ffff200008446e70:       b.cs    ffff2000084470f4 <generic_exec_single+0x314>  // b.hs, b.nlast
++ */
++static inline const char *validate_comma(const char *c, struct ins_operands *ops)
++{
++	if (ops->raw_comment && c > ops->raw_comment)
++		return NULL;
++
++	return c;
++}
++
++static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms)
+ {
+ 	struct map *map = ms->map;
+ 	struct symbol *sym = ms->sym;
+@@ -284,6 +302,10 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 	};
+ 	const char *c = strchr(ops->raw, ',');
+ 	u64 start, end;
++
++	ops->raw_comment = strchr(ops->raw, arch->objdump.comment_char);
++	c = validate_comma(c, ops);
++
+ 	/*
+ 	 * Examples of lines to parse for the _cpp_lex_token@@Base
+ 	 * function:
+@@ -303,6 +325,7 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 		ops->target.addr = strtoull(c, NULL, 16);
+ 		if (!ops->target.addr) {
+ 			c = strchr(c, ',');
++			c = validate_comma(c, ops);
+ 			if (c++ != NULL)
+ 				ops->target.addr = strtoull(c, NULL, 16);
+ 		}
+@@ -360,9 +383,12 @@ static int jump__scnprintf(struct ins *ins, char *bf, size_t size,
+ 		return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.sym->name);
+ 
+ 	c = strchr(ops->raw, ',');
++	c = validate_comma(c, ops);
++
+ 	if (c != NULL) {
+ 		const char *c2 = strchr(c + 1, ',');
+ 
++		c2 = validate_comma(c2, ops);
+ 		/* check for 3-op insn */
+ 		if (c2 != NULL)
+ 			c = c2;
+diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
+index a4c0d91907e6..61e0c7fd5efd 100644
+--- a/tools/perf/util/annotate.h
++++ b/tools/perf/util/annotate.h
+@@ -21,6 +21,7 @@ struct ins {
+ 
+ struct ins_operands {
+ 	char	*raw;
++	char	*raw_comment;
+ 	struct {
+ 		char	*raw;
+ 		char	*name;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 0d5504751cc5..6324afba8fdd 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -251,8 +251,9 @@ struct perf_evsel *perf_evsel__new_idx(struct perf_event_attr *attr, int idx)
+ {
+ 	struct perf_evsel *evsel = zalloc(perf_evsel__object.size);
+ 
+-	if (evsel != NULL)
+-		perf_evsel__init(evsel, attr, idx);
++	if (!evsel)
++		return NULL;
++	perf_evsel__init(evsel, attr, idx);
+ 
+ 	if (perf_evsel__is_bpf_output(evsel)) {
+ 		evsel->attr.sample_type |= (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME |
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index c85d0d1a65ed..7b0ca7cbb7de 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -377,7 +377,7 @@ out:
+ 
+ static int record_saved_cmdline(void)
+ {
+-	unsigned int size;
++	unsigned long long size;
+ 	char *path;
+ 	struct stat st;
+ 	int ret, err = 0;
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index f8cc38afffa2..32a194e3e07a 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -46,6 +46,9 @@
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+ 
++# Some systems don't have a ping6 binary anymore
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ tests="
+ 	pmtu_vti6_exception		vti6: PMTU exceptions
+ 	pmtu_vti4_exception		vti4: PMTU exceptions
+@@ -274,7 +277,7 @@ test_pmtu_vti6_exception() {
+ 	mtu "${ns_b}" veth_b 4000
+ 	mtu "${ns_a}" vti6_a 5000
+ 	mtu "${ns_b}" vti6_b 5000
+-	${ns_a} ping6 -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
++	${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
+ 
+ 	# Check that exception was created
+ 	if [ "$(route_get_dst_pmtu_from_exception "${ns_a}" ${vti6_b_addr})" = "" ]; then
+@@ -334,7 +337,7 @@ test_pmtu_vti4_link_add_mtu() {
+ 	fail=0
+ 
+ 	min=68
+-	max=$((65528 - 20))
++	max=$((65535 - 20))
+ 	# Check invalid values first
+ 	for v in $((min - 1)) $((max + 1)); do
+ 		${ns_a} ip link add vti4_a mtu ${v} type vti local ${veth4_a_addr} remote ${veth4_b_addr} key 10 2>/dev/null
+diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
+index 615252331813..4bc071525bf7 100644
+--- a/tools/testing/selftests/rseq/param_test.c
++++ b/tools/testing/selftests/rseq/param_test.c
+@@ -56,15 +56,13 @@ unsigned int yield_mod_cnt, nr_abort;
+ 			printf(fmt, ## __VA_ARGS__);	\
+ 	} while (0)
+ 
+-#if defined(__x86_64__) || defined(__i386__)
++#ifdef __i386__
+ 
+ #define INJECT_ASM_REG	"eax"
+ 
+ #define RSEQ_INJECT_CLOBBER \
+ 	, INJECT_ASM_REG
+ 
+-#ifdef __i386__
+-
+ #define RSEQ_INJECT_ASM(n) \
+ 	"mov asm_loop_cnt_" #n ", %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+@@ -76,9 +74,16 @@ unsigned int yield_mod_cnt, nr_abort;
+ 
+ #elif defined(__x86_64__)
+ 
++#define INJECT_ASM_REG_P	"rax"
++#define INJECT_ASM_REG		"eax"
++
++#define RSEQ_INJECT_CLOBBER \
++	, INJECT_ASM_REG_P \
++	, INJECT_ASM_REG
++
+ #define RSEQ_INJECT_ASM(n) \
+-	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG "\n\t" \
+-	"mov (%%" INJECT_ASM_REG "), %%" INJECT_ASM_REG "\n\t" \
++	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG_P "\n\t" \
++	"mov (%%" INJECT_ASM_REG_P "), %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+ 	"jz 333f\n\t" \
+ 	"222:\n\t" \
+@@ -86,10 +91,6 @@ unsigned int yield_mod_cnt, nr_abort;
+ 	"jnz 222b\n\t" \
+ 	"333:\n\t"
+ 
+-#else
+-#error "Unsupported architecture"
+-#endif
+-
+ #elif defined(__ARMEL__)
+ 
+ #define RSEQ_INJECT_INPUT \
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+index f03763d81617..30f9b54bd666 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+@@ -312,6 +312,54 @@
+             "$TC actions flush action police"
+         ]
+     },
++    {
++        "id": "6aaf",
++        "name": "Add police actions with conform-exceed control pass/pipe [with numeric values]",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 0/3 index 1",
++        "expExitCode": "0",
++        "verifyCmd": "$TC actions get action police index 1",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action pass/pipe",
++        "matchCount": "1",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
++    {
++        "id": "29b1",
++        "name": "Add police actions with conform-exceed control <invalid>/drop",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 10/drop index 1",
++        "expExitCode": "255",
++        "verifyCmd": "$TC actions ls action police",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action ",
++        "matchCount": "0",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
+     {
+         "id": "c26f",
+         "name": "Add police action with invalid peakrate value",
+diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
+index cce853dca691..a4c31fb2887b 100644
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -156,12 +156,6 @@ static const char * const page_flag_names[] = {
+ };
+ 
+ 
+-static const char * const debugfs_known_mountpoints[] = {
+-	"/sys/kernel/debug",
+-	"/debug",
+-	0,
+-};
+-
+ /*
+  * data structures
+  */
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index f82c2eaa859d..334b16db0ebb 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -30,8 +30,8 @@ struct slabinfo {
+ 	int alias;
+ 	int refs;
+ 	int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu;
+-	int hwcache_align, object_size, objs_per_slab;
+-	int sanity_checks, slab_size, store_user, trace;
++	unsigned int hwcache_align, object_size, objs_per_slab;
++	unsigned int sanity_checks, slab_size, store_user, trace;
+ 	int order, poison, reclaim_account, red_zone;
+ 	unsigned long partial, objects, slabs, objects_partial, objects_total;
+ 	unsigned long alloc_fastpath, alloc_slowpath;


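As an aside on the kvm_stat hunks in the patch above: they work around two Python 3 semantic changes, and a minimal standalone sketch (using hypothetical sample data, not taken from the patch) shows both pitfalls:

```python
# Two Python 3 semantics the kvm_stat hunks above account for:
# 1) filter() returns a lazy iterator, not a list, so it must be
#    materialized with list() if the result is stored or reused.
# 2) "/" always performs true (float) division; "//" is needed to keep
#    the integer result that curses screen coordinates require.

# Hypothetical debugfs-style entries, mirroring the "{pid}-" matching.
vms = ["1234-vm0", "5678-vm1", "1234-vm2"]
pid = 1234

paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
assert paths == ["1234-vm0", "1234-vm2"]

# Centering a line of text in a terminal, as in the Tui code paths.
term_width, line_len = 80, 11
start = (term_width - line_len) // 2
assert isinstance(start, int) and start == 34
```

Without the `list()` wrapper the stored iterator is exhausted after one pass, and without `//` the float result raises a TypeError when passed to `curses` `addstr()`.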
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     07e759d23ad6b020d442810f70ba68fbe623d8a6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 26 10:40:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:39 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=07e759d2

Linux patch 4.18.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-4.18.10.patch | 6974 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6978 insertions(+)

diff --git a/0000_README b/0000_README
index 6534d27..a9e2bd7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-4.18.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.9
 
+Patch:  1009_linux-4.18.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-4.18.10.patch b/1009_linux-4.18.10.patch
new file mode 100644
index 0000000..16ee162
--- /dev/null
+++ b/1009_linux-4.18.10.patch
@@ -0,0 +1,6974 @@
+diff --git a/Makefile b/Makefile
+index 1178348fb9ca..ffab15235ff0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -225,10 +225,12 @@ no-dot-config-targets := $(clean-targets) \
+ 			 cscope gtags TAGS tags help% %docs check% coccicheck \
+ 			 $(version_h) headers_% archheaders archscripts \
+ 			 kernelversion %src-pkg
++no-sync-config-targets := $(no-dot-config-targets) install %install
+ 
+-config-targets := 0
+-mixed-targets  := 0
+-dot-config     := 1
++config-targets  := 0
++mixed-targets   := 0
++dot-config      := 1
++may-sync-config := 1
+ 
+ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
+@@ -236,6 +238,16 @@ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	endif
+ endif
+ 
++ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),)
++	ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
++		may-sync-config := 0
++	endif
++endif
++
++ifneq ($(KBUILD_EXTMOD),)
++	may-sync-config := 0
++endif
++
+ ifeq ($(KBUILD_EXTMOD),)
+         ifneq ($(filter config %config,$(MAKECMDGOALS)),)
+                 config-targets := 1
+@@ -610,7 +622,7 @@ ARCH_CFLAGS :=
+ include arch/$(SRCARCH)/Makefile
+ 
+ ifeq ($(dot-config),1)
+-ifeq ($(KBUILD_EXTMOD),)
++ifeq ($(may-sync-config),1)
+ # Read in dependencies to all Kconfig* files, make sure to run syncconfig if
+ # changes are detected. This should be included after arch/$(SRCARCH)/Makefile
+ # because some architectures define CROSS_COMPILE there.
+@@ -625,8 +637,9 @@ $(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
+ include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+-# external modules needs include/generated/autoconf.h and include/config/auto.conf
+-# but do not care if they are up-to-date. Use auto.conf to trigger the test
++# External modules and some install targets need include/generated/autoconf.h
++# and include/config/auto.conf but do not care if they are up-to-date.
++# Use auto.conf to trigger the test
+ PHONY += include/config/auto.conf
+ 
+ include/config/auto.conf:
+@@ -638,7 +651,7 @@ include/config/auto.conf:
+ 	echo >&2 ;							\
+ 	/bin/false)
+ 
+-endif # KBUILD_EXTMOD
++endif # may-sync-config
+ 
+ else
+ # Dummy target needed, because used as prerequisite
+diff --git a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+index 4dc0b347b1ee..c2dc9d09484a 100644
+--- a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
++++ b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+@@ -189,6 +189,8 @@
+ 						regulator-max-microvolt = <2950000>;
+ 
+ 						regulator-boot-on;
++						regulator-system-load = <200000>;
++						regulator-allow-set-load;
+ 					};
+ 
+ 					l21 {
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index d3db306a5a70..941b0ffd9806 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -203,6 +203,7 @@ static int __init exynos_pmu_irq_init(struct device_node *node,
+ 					  NULL);
+ 	if (!domain) {
+ 		iounmap(pmu_base_addr);
++		pmu_base_addr = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/arch/arm/mach-hisi/hotplug.c b/arch/arm/mach-hisi/hotplug.c
+index a129aae72602..909bb2493781 100644
+--- a/arch/arm/mach-hisi/hotplug.c
++++ b/arch/arm/mach-hisi/hotplug.c
+@@ -148,13 +148,20 @@ static int hi3xxx_hotplug_init(void)
+ 	struct device_node *node;
+ 
+ 	node = of_find_compatible_node(NULL, NULL, "hisilicon,sysctrl");
+-	if (node) {
+-		ctrl_base = of_iomap(node, 0);
+-		id = HI3620_CTRL;
+-		return 0;
++	if (!node) {
++		id = ERROR_CTRL;
++		return -ENOENT;
+ 	}
+-	id = ERROR_CTRL;
+-	return -ENOENT;
++
++	ctrl_base = of_iomap(node, 0);
++	of_node_put(node);
++	if (!ctrl_base) {
++		id = ERROR_CTRL;
++		return -ENOMEM;
++	}
++
++	id = HI3620_CTRL;
++	return 0;
+ }
+ 
+ void hi3xxx_set_cpu(int cpu, bool enable)
+@@ -173,11 +180,15 @@ static bool hix5hd2_hotplug_init(void)
+ 	struct device_node *np;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "hisilicon,cpuctrl");
+-	if (np) {
+-		ctrl_base = of_iomap(np, 0);
+-		return true;
+-	}
+-	return false;
++	if (!np)
++		return false;
++
++	ctrl_base = of_iomap(np, 0);
++	of_node_put(np);
++	if (!ctrl_base)
++		return false;
++
++	return true;
+ }
+ 
+ void hix5hd2_set_cpu(int cpu, bool enable)
+@@ -219,10 +230,10 @@ void hip01_set_cpu(int cpu, bool enable)
+ 
+ 	if (!ctrl_base) {
+ 		np = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl");
+-		if (np)
+-			ctrl_base = of_iomap(np, 0);
+-		else
+-			BUG();
++		BUG_ON(!np);
++		ctrl_base = of_iomap(np, 0);
++		of_node_put(np);
++		BUG_ON(!ctrl_base);
+ 	}
+ 
+ 	if (enable) {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 9213c966c224..ec7ea8dca777 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0 0x11002000 0 0x400>;
+ 		interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
+ 		clocks = <&topckgen CLK_TOP_UART_SEL>,
+-			 <&pericfg CLK_PERI_UART1_PD>;
++			 <&pericfg CLK_PERI_UART0_PD>;
+ 		clock-names = "baud", "bus";
+ 		status = "disabled";
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+index 9ff848792712..78ce3979ef09 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+@@ -338,7 +338,7 @@
+ 			led@6 {
+ 				label = "apq8016-sbc:blue:bt";
+ 				gpios = <&pm8916_mpps 3 GPIO_ACTIVE_HIGH>;
+-				linux,default-trigger = "bt";
++				linux,default-trigger = "bluetooth-power";
+ 				default-state = "off";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index 0298bd0d0e1a..caf112629caa 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -58,6 +58,7 @@
+ 			clocks = <&sys_clk 32>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster0_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 
+ 		cpu2: cpu@100 {
+@@ -77,6 +78,7 @@
+ 			clocks = <&sys_clk 33>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster1_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 33147aacdafd..dd5b4fab114f 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
+ 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+ }
+ 
++static void armv8pmu_start(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Enable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
++static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Disable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
+ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ {
+ 	u32 pmovsr;
+@@ -694,6 +716,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	 */
+ 	regs = get_irq_regs();
+ 
++	/*
++	 * Stop the PMU while processing the counter overflows
++	 * to prevent skews in group events.
++	 */
++	armv8pmu_stop(cpu_pmu);
+ 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 		struct hw_perf_event *hwc;
+@@ -718,6 +745,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 		if (perf_event_overflow(event, &data, regs))
+ 			cpu_pmu->disable(event);
+ 	}
++	armv8pmu_start(cpu_pmu);
+ 
+ 	/*
+ 	 * Handle the pending perf events.
+@@ -731,28 +759,6 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Enable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Disable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
+ 				  struct perf_event *event)
+ {
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 5c338ce5a7fa..db5440339ab3 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -277,19 +277,22 @@ static int ptrace_hbp_set_event(unsigned int note_type,
+ 
+ 	switch (note_type) {
+ 	case NT_ARM_HW_BREAK:
+-		if (idx < ARM_MAX_BRP) {
+-			tsk->thread.debug.hbp_break[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_BRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_BRP);
++		tsk->thread.debug.hbp_break[idx] = bp;
++		err = 0;
+ 		break;
+ 	case NT_ARM_HW_WATCH:
+-		if (idx < ARM_MAX_WRP) {
+-			tsk->thread.debug.hbp_watch[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_WRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_WRP);
++		tsk->thread.debug.hbp_watch[idx] = bp;
++		err = 0;
+ 		break;
+ 	}
+ 
++out:
+ 	return err;
+ }
+ 
+diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
+index f206dafbb0a3..26a058d58d37 100644
+--- a/arch/mips/ath79/setup.c
++++ b/arch/mips/ath79/setup.c
+@@ -40,6 +40,7 @@ static char ath79_sys_type[ATH79_SYS_TYPE_LEN];
+ 
+ static void ath79_restart(char *command)
+ {
++	local_irq_disable();
+ 	ath79_device_reset_set(AR71XX_RESET_FULL_CHIP);
+ 	for (;;)
+ 		if (cpu_wait)
+diff --git a/arch/mips/include/asm/mach-ath79/ath79.h b/arch/mips/include/asm/mach-ath79/ath79.h
+index 441faa92c3cd..6e6c0fead776 100644
+--- a/arch/mips/include/asm/mach-ath79/ath79.h
++++ b/arch/mips/include/asm/mach-ath79/ath79.h
+@@ -134,6 +134,7 @@ static inline u32 ath79_pll_rr(unsigned reg)
+ static inline void ath79_reset_wr(unsigned reg, u32 val)
+ {
+ 	__raw_writel(val, ath79_reset_base + reg);
++	(void) __raw_readl(ath79_reset_base + reg); /* flush */
+ }
+ 
+ static inline u32 ath79_reset_rr(unsigned reg)
+diff --git a/arch/mips/jz4740/Platform b/arch/mips/jz4740/Platform
+index 28448d358c10..a2a5a85ea1f9 100644
+--- a/arch/mips/jz4740/Platform
++++ b/arch/mips/jz4740/Platform
+@@ -1,4 +1,4 @@
+ platform-$(CONFIG_MACH_INGENIC)	+= jz4740/
+ cflags-$(CONFIG_MACH_INGENIC)	+= -I$(srctree)/arch/mips/include/asm/mach-jz4740
+ load-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80010000
+-zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80600000
++zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff81000000
+diff --git a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+index f7c905e50dc4..92dc6bafc127 100644
+--- a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
++++ b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+@@ -138,7 +138,7 @@ u32 pci_ohci_read_reg(int reg)
+ 		break;
+ 	case PCI_OHCI_INT_REG:
+ 		_rdmsr(DIVIL_MSR_REG(PIC_YSEL_LOW), &hi, &lo);
+-		if ((lo & 0x00000f00) == CS5536_USB_INTR)
++		if (((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR)
+ 			conf_data = 1;
+ 		break;
+ 	default:
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 8c456fa691a5..8167ce8e0cdd 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -180,7 +180,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 		if ((tbltmp->it_page_shift <= stt->page_shift) &&
+ 				(tbltmp->it_offset << tbltmp->it_page_shift ==
+ 				 stt->offset << stt->page_shift) &&
+-				(tbltmp->it_size << tbltmp->it_page_shift ==
++				(tbltmp->it_size << tbltmp->it_page_shift >=
+ 				 stt->size << stt->page_shift)) {
+ 			/*
+ 			 * Reference the table to avoid races with
+@@ -296,7 +296,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ {
+ 	struct kvmppc_spapr_tce_table *stt = NULL;
+ 	struct kvmppc_spapr_tce_table *siter;
+-	unsigned long npages, size;
++	unsigned long npages, size = args->size;
+ 	int ret = -ENOMEM;
+ 	int i;
+ 
+@@ -304,7 +304,6 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ 		(args->offset + args->size > (ULLONG_MAX >> args->page_shift)))
+ 		return -EINVAL;
+ 
+-	size = _ALIGN_UP(args->size, PAGE_SIZE >> 3);
+ 	npages = kvmppc_tce_pages(size);
+ 	ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true);
+ 	if (ret)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index a995513573c2..2ebd5132a29f 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4562,6 +4562,8 @@ static int kvmppc_book3s_init_hv(void)
+ 			pr_err("KVM-HV: Cannot determine method for accessing XICS\n");
+ 			return -ENODEV;
+ 		}
++		/* presence of intc confirmed - node can be dropped again */
++		of_node_put(np);
+ 	}
+ #endif
+ 
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 0d539c661748..371e33ecc547 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -388,7 +388,7 @@ int opal_put_chars(uint32_t vtermno, const char *data, int total_len)
+ 		/* Closed or other error drop */
+ 		if (rc != OPAL_SUCCESS && rc != OPAL_BUSY &&
+ 		    rc != OPAL_BUSY_EVENT) {
+-			written = total_len;
++			written += total_len;
+ 			break;
+ 		}
+ 		if (rc == OPAL_SUCCESS) {
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index 80b27294c1de..ab9a0ebecc19 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -208,7 +208,7 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
+ 			      walk->dst.virt.addr, walk->src.virt.addr, n);
+ 		if (k)
+ 			ret = blkcipher_walk_done(desc, walk, nbytes - k);
+-		if (n < k) {
++		if (k < n) {
+ 			if (__cbc_paes_set_key(ctx) != 0)
+ 				return blkcipher_walk_done(desc, walk, -EIO);
+ 			memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+diff --git a/arch/x86/kernel/eisa.c b/arch/x86/kernel/eisa.c
+index f260e452e4f8..e8c8c5d78dbd 100644
+--- a/arch/x86/kernel/eisa.c
++++ b/arch/x86/kernel/eisa.c
+@@ -7,11 +7,17 @@
+ #include <linux/eisa.h>
+ #include <linux/io.h>
+ 
++#include <xen/xen.h>
++
+ static __init int eisa_bus_probe(void)
+ {
+-	void __iomem *p = ioremap(0x0FFFD9, 4);
++	void __iomem *p;
++
++	if (xen_pv_domain() && !xen_initial_domain())
++		return 0;
+ 
+-	if (readl(p) == 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24))
++	p = ioremap(0x0FFFD9, 4);
++	if (p && readl(p) == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
+ 		EISA_bus = 1;
+ 	iounmap(p);
+ 	return 0;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 946455e9cfef..1d2106d83b4e 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -177,7 +177,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ 
+ 	if (pgd_none(*pgd)) {
+ 		unsigned long new_p4d_page = __get_free_page(gfp);
+-		if (!new_p4d_page)
++		if (WARN_ON_ONCE(!new_p4d_page))
+ 			return NULL;
+ 
+ 		set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
+@@ -196,13 +196,17 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	p4d_t *p4d = pti_user_pagetable_walk_p4d(address);
++	p4d_t *p4d;
+ 	pud_t *pud;
+ 
++	p4d = pti_user_pagetable_walk_p4d(address);
++	if (!p4d)
++		return NULL;
++
+ 	BUILD_BUG_ON(p4d_large(*p4d) != 0);
+ 	if (p4d_none(*p4d)) {
+ 		unsigned long new_pud_page = __get_free_page(gfp);
+-		if (!new_pud_page)
++		if (WARN_ON_ONCE(!new_pud_page))
+ 			return NULL;
+ 
+ 		set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
+@@ -216,7 +220,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ 	}
+ 	if (pud_none(*pud)) {
+ 		unsigned long new_pmd_page = __get_free_page(gfp);
+-		if (!new_pmd_page)
++		if (WARN_ON_ONCE(!new_pmd_page))
+ 			return NULL;
+ 
+ 		set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+@@ -238,9 +242,13 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	pmd_t *pmd = pti_user_pagetable_walk_pmd(address);
++	pmd_t *pmd;
+ 	pte_t *pte;
+ 
++	pmd = pti_user_pagetable_walk_pmd(address);
++	if (!pmd)
++		return NULL;
++
+ 	/* We can't do anything sensible if we hit a large mapping. */
+ 	if (pmd_large(*pmd)) {
+ 		WARN_ON(1);
+@@ -298,6 +306,10 @@ pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
+ 		p4d_t *p4d;
+ 		pud_t *pud;
+ 
++		/* Overflow check */
++		if (addr < start)
++			break;
++
+ 		pgd = pgd_offset_k(addr);
+ 		if (WARN_ON(pgd_none(*pgd)))
+ 			return;
+@@ -355,6 +367,9 @@ static void __init pti_clone_p4d(unsigned long addr)
+ 	pgd_t *kernel_pgd;
+ 
+ 	user_p4d = pti_user_pagetable_walk_p4d(addr);
++	if (!user_p4d)
++		return;
++
+ 	kernel_pgd = pgd_offset_k(addr);
+ 	kernel_p4d = p4d_offset(kernel_pgd, addr);
+ 	*user_p4d = *kernel_p4d;
+diff --git a/arch/xtensa/platforms/iss/setup.c b/arch/xtensa/platforms/iss/setup.c
+index f4bbb28026f8..58709e89a8ed 100644
+--- a/arch/xtensa/platforms/iss/setup.c
++++ b/arch/xtensa/platforms/iss/setup.c
+@@ -78,23 +78,28 @@ static struct notifier_block iss_panic_block = {
+ 
+ void __init platform_setup(char **p_cmdline)
+ {
++	static void *argv[COMMAND_LINE_SIZE / sizeof(void *)] __initdata;
++	static char cmdline[COMMAND_LINE_SIZE] __initdata;
+ 	int argc = simc_argc();
+ 	int argv_size = simc_argv_size();
+ 
+ 	if (argc > 1) {
+-		void **argv = alloc_bootmem(argv_size);
+-		char *cmdline = alloc_bootmem(argv_size);
+-		int i;
++		if (argv_size > sizeof(argv)) {
++			pr_err("%s: command line too long: argv_size = %d\n",
++			       __func__, argv_size);
++		} else {
++			int i;
+ 
+-		cmdline[0] = 0;
+-		simc_argv((void *)argv);
++			cmdline[0] = 0;
++			simc_argv((void *)argv);
+ 
+-		for (i = 1; i < argc; ++i) {
+-			if (i > 1)
+-				strcat(cmdline, " ");
+-			strcat(cmdline, argv[i]);
++			for (i = 1; i < argc; ++i) {
++				if (i > 1)
++					strcat(cmdline, " ");
++				strcat(cmdline, argv[i]);
++			}
++			*p_cmdline = cmdline;
+ 		}
+-		*p_cmdline = cmdline;
+ 	}
+ 
+ 	atomic_notifier_chain_register(&panic_notifier_list, &iss_panic_block);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index cbaca5a73f2e..f9d2e1b66e05 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -791,9 +791,13 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 * make sure all in-progress dispatch are completed because
+ 	 * blk_freeze_queue() can only complete all requests, and
+ 	 * dispatch may still be in-progress since we dispatch requests
+-	 * from more than one contexts
++	 * from more than one contexts.
++	 *
++	 * No need to quiesce queue if it isn't initialized yet since
++	 * blk_freeze_queue() should be enough for cases of passthrough
++	 * request.
+ 	 */
+-	if (q->mq_ops)
++	if (q->mq_ops && blk_queue_init_done(q))
+ 		blk_mq_quiesce_queue(q);
+ 
+ 	/* for synchronous bio-based driver finish in-flight integrity i/o */
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 56c493c6cd90..f5745acc2d98 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -339,7 +339,8 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+ 		return e->type->ops.mq.bio_merge(hctx, bio);
+ 	}
+ 
+-	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
++	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
++			!list_empty_careful(&ctx->rq_list)) {
+ 		/* default per sw-queue merge */
+ 		spin_lock(&ctx->lock);
+ 		ret = blk_mq_attempt_merge(q, ctx, bio);
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index d1de71124656..24fff4a3d08a 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -128,7 +128,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
+ 
+ 	/* Inherit limits from component devices */
+ 	lim->max_segments = USHRT_MAX;
+-	lim->max_discard_segments = 1;
++	lim->max_discard_segments = USHRT_MAX;
+ 	lim->max_hw_sectors = UINT_MAX;
+ 	lim->max_segment_size = UINT_MAX;
+ 	lim->max_sectors = UINT_MAX;
+diff --git a/crypto/api.c b/crypto/api.c
+index 0ee632bba064..7aca9f86c5f3 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -229,7 +229,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
+ 	mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
+ 
+ 	alg = crypto_alg_lookup(name, type, mask);
+-	if (!alg) {
++	if (!alg && !(mask & CRYPTO_NOLOAD)) {
+ 		request_module("crypto-%s", name);
+ 
+ 		if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask &
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index df3e1a44707a..3aba4ad8af5c 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2809,6 +2809,9 @@ void device_shutdown(void)
+ {
+ 	struct device *dev, *parent;
+ 
++	wait_for_device_probe();
++	device_block_probing();
++
+ 	spin_lock(&devices_kset->list_lock);
+ 	/*
+ 	 * Walk the devices list backward, shutting down each in turn.
+diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
+index f6518067aa7d..f99e5c883368 100644
+--- a/drivers/block/DAC960.c
++++ b/drivers/block/DAC960.c
+@@ -21,6 +21,7 @@
+ #define DAC960_DriverDate			"21 Aug 2007"
+ 
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/miscdevice.h>
+@@ -6426,7 +6427,7 @@ static bool DAC960_V2_ExecuteUserCommand(DAC960_Controller_T *Controller,
+   return true;
+ }
+ 
+-static int dac960_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_proc_show(struct seq_file *m, void *v)
+ {
+   unsigned char *StatusMessage = "OK\n";
+   int ControllerNumber;
+@@ -6446,14 +6447,16 @@ static int dac960_proc_show(struct seq_file *m, void *v)
+   return 0;
+ }
+ 
+-static int dac960_initial_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_initial_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+ 	DAC960_Controller_T *Controller = (DAC960_Controller_T *)m->private;
+ 	seq_printf(m, "%.*s", Controller->InitialStatusLength, Controller->CombinedStatusBuffer);
+ 	return 0;
+ }
+ 
+-static int dac960_current_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_current_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+   DAC960_Controller_T *Controller = (DAC960_Controller_T *) m->private;
+   unsigned char *StatusMessage =
+diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c
+index a3397664f800..97d6856c9c0f 100644
+--- a/drivers/char/ipmi/ipmi_bt_sm.c
++++ b/drivers/char/ipmi/ipmi_bt_sm.c
+@@ -59,8 +59,6 @@ enum bt_states {
+ 	BT_STATE_RESET3,
+ 	BT_STATE_RESTART,
+ 	BT_STATE_PRINTME,
+-	BT_STATE_CAPABILITIES_BEGIN,
+-	BT_STATE_CAPABILITIES_END,
+ 	BT_STATE_LONG_BUSY	/* BT doesn't get hosed :-) */
+ };
+ 
+@@ -86,7 +84,6 @@ struct si_sm_data {
+ 	int		error_retries;	/* end of "common" fields */
+ 	int		nonzero_status;	/* hung BMCs stay all 0 */
+ 	enum bt_states	complete;	/* to divert the state machine */
+-	int		BT_CAP_outreqs;
+ 	long		BT_CAP_req2rsp;
+ 	int		BT_CAP_retries;	/* Recommended retries */
+ };
+@@ -137,8 +134,6 @@ static char *state2txt(unsigned char state)
+ 	case BT_STATE_RESET3:		return("RESET3");
+ 	case BT_STATE_RESTART:		return("RESTART");
+ 	case BT_STATE_LONG_BUSY:	return("LONG_BUSY");
+-	case BT_STATE_CAPABILITIES_BEGIN: return("CAP_BEGIN");
+-	case BT_STATE_CAPABILITIES_END:	return("CAP_END");
+ 	}
+ 	return("BAD STATE");
+ }
+@@ -185,7 +180,6 @@ static unsigned int bt_init_data(struct si_sm_data *bt, struct si_sm_io *io)
+ 	bt->complete = BT_STATE_IDLE;	/* end here */
+ 	bt->BT_CAP_req2rsp = BT_NORMAL_TIMEOUT * USEC_PER_SEC;
+ 	bt->BT_CAP_retries = BT_NORMAL_RETRY_LIMIT;
+-	/* BT_CAP_outreqs == zero is a flag to read BT Capabilities */
+ 	return 3; /* We claim 3 bytes of space; ought to check SPMI table */
+ }
+ 
+@@ -451,7 +445,7 @@ static enum si_sm_result error_recovery(struct si_sm_data *bt,
+ 
+ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ {
+-	unsigned char status, BT_CAP[8];
++	unsigned char status;
+ 	static enum bt_states last_printed = BT_STATE_PRINTME;
+ 	int i;
+ 
+@@ -504,12 +498,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		if (status & BT_H_BUSY)		/* clear a leftover H_BUSY */
+ 			BT_CONTROL(BT_H_BUSY);
+ 
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-
+-		/* Read BT capabilities if it hasn't been done yet */
+-		if (!bt->BT_CAP_outreqs)
+-			BT_STATE_CHANGE(BT_STATE_CAPABILITIES_BEGIN,
+-					SI_SM_CALL_WITHOUT_DELAY);
+ 		BT_SI_SM_RETURN(SI_SM_IDLE);
+ 
+ 	case BT_STATE_XACTION_START:
+@@ -614,37 +602,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+ 				SI_SM_CALL_WITH_DELAY);
+ 
+-	/*
+-	 * Get BT Capabilities, using timing of upper level state machine.
+-	 * Set outreqs to prevent infinite loop on timeout.
+-	 */
+-	case BT_STATE_CAPABILITIES_BEGIN:
+-		bt->BT_CAP_outreqs = 1;
+-		{
+-			unsigned char GetBT_CAP[] = { 0x18, 0x36 };
+-			bt->state = BT_STATE_IDLE;
+-			bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
+-		}
+-		bt->complete = BT_STATE_CAPABILITIES_END;
+-		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+-				SI_SM_CALL_WITH_DELAY);
+-
+-	case BT_STATE_CAPABILITIES_END:
+-		i = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
+-		bt_init_data(bt, bt->io);
+-		if ((i == 8) && !BT_CAP[2]) {
+-			bt->BT_CAP_outreqs = BT_CAP[3];
+-			bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
+-			bt->BT_CAP_retries = BT_CAP[7];
+-		} else
+-			printk(KERN_WARNING "IPMI BT: using default values\n");
+-		if (!bt->BT_CAP_outreqs)
+-			bt->BT_CAP_outreqs = 1;
+-		printk(KERN_WARNING "IPMI BT: req2rsp=%ld secs retries=%d\n",
+-			bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-		return SI_SM_CALL_WITHOUT_DELAY;
+-
+ 	default:	/* should never occur */
+ 		return error_recovery(bt,
+ 				      status,
+@@ -655,6 +612,11 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 
+ static int bt_detect(struct si_sm_data *bt)
+ {
++	unsigned char GetBT_CAP[] = { 0x18, 0x36 };
++	unsigned char BT_CAP[8];
++	enum si_sm_result smi_result;
++	int rv;
++
+ 	/*
+ 	 * It's impossible for the BT status and interrupt registers to be
+ 	 * all 1's, (assuming a properly functioning, self-initialized BMC)
+@@ -665,6 +627,48 @@ static int bt_detect(struct si_sm_data *bt)
+ 	if ((BT_STATUS == 0xFF) && (BT_INTMASK_R == 0xFF))
+ 		return 1;
+ 	reset_flags(bt);
++
++	/*
++	 * Try getting the BT capabilities here.
++	 */
++	rv = bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
++	if (rv) {
++		dev_warn(bt->io->dev,
++			 "Can't start capabilities transaction: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	smi_result = SI_SM_CALL_WITHOUT_DELAY;
++	for (;;) {
++		if (smi_result == SI_SM_CALL_WITH_DELAY ||
++		    smi_result == SI_SM_CALL_WITH_TICK_DELAY) {
++			schedule_timeout_uninterruptible(1);
++			smi_result = bt_event(bt, jiffies_to_usecs(1));
++		} else if (smi_result == SI_SM_CALL_WITHOUT_DELAY) {
++			smi_result = bt_event(bt, 0);
++		} else
++			break;
++	}
++
++	rv = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
++	bt_init_data(bt, bt->io);
++	if (rv < 8) {
++		dev_warn(bt->io->dev, "bt cap response too short: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	if (BT_CAP[2]) {
++		dev_warn(bt->io->dev, "Error fetching bt cap: %x\n", BT_CAP[2]);
++out_no_bt_cap:
++		dev_warn(bt->io->dev, "using default values\n");
++	} else {
++		bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
++		bt->BT_CAP_retries = BT_CAP[7];
++	}
++
++	dev_info(bt->io->dev, "req2rsp=%ld secs retries=%d\n",
++		 bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 51832b8a2c62..7fc9612070a1 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -3381,39 +3381,45 @@ int ipmi_register_smi(const struct ipmi_smi_handlers *handlers,
+ 
+ 	rv = handlers->start_processing(send_info, intf);
+ 	if (rv)
+-		goto out;
++		goto out_err;
+ 
+ 	rv = __bmc_get_device_id(intf, NULL, &id, NULL, NULL, i);
+ 	if (rv) {
+ 		dev_err(si_dev, "Unable to get the device id: %d\n", rv);
+-		goto out;
++		goto out_err_started;
+ 	}
+ 
+ 	mutex_lock(&intf->bmc_reg_mutex);
+ 	rv = __scan_channels(intf, &id);
+ 	mutex_unlock(&intf->bmc_reg_mutex);
++	if (rv)
++		goto out_err_bmc_reg;
+ 
+- out:
+-	if (rv) {
+-		ipmi_bmc_unregister(intf);
+-		list_del_rcu(&intf->link);
+-		mutex_unlock(&ipmi_interfaces_mutex);
+-		synchronize_srcu(&ipmi_interfaces_srcu);
+-		cleanup_srcu_struct(&intf->users_srcu);
+-		kref_put(&intf->refcount, intf_free);
+-	} else {
+-		/*
+-		 * Keep memory order straight for RCU readers.  Make
+-		 * sure everything else is committed to memory before
+-		 * setting intf_num to mark the interface valid.
+-		 */
+-		smp_wmb();
+-		intf->intf_num = i;
+-		mutex_unlock(&ipmi_interfaces_mutex);
++	/*
++	 * Keep memory order straight for RCU readers.  Make
++	 * sure everything else is committed to memory before
++	 * setting intf_num to mark the interface valid.
++	 */
++	smp_wmb();
++	intf->intf_num = i;
++	mutex_unlock(&ipmi_interfaces_mutex);
+ 
+-		/* After this point the interface is legal to use. */
+-		call_smi_watchers(i, intf->si_dev);
+-	}
++	/* After this point the interface is legal to use. */
++	call_smi_watchers(i, intf->si_dev);
++
++	return 0;
++
++ out_err_bmc_reg:
++	ipmi_bmc_unregister(intf);
++ out_err_started:
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
++ out_err:
++	list_del_rcu(&intf->link);
++	mutex_unlock(&ipmi_interfaces_mutex);
++	synchronize_srcu(&ipmi_interfaces_srcu);
++	cleanup_srcu_struct(&intf->users_srcu);
++	kref_put(&intf->refcount, intf_free);
+ 
+ 	return rv;
+ }
+@@ -3504,7 +3510,8 @@ void ipmi_unregister_smi(struct ipmi_smi *intf)
+ 	}
+ 	srcu_read_unlock(&intf->users_srcu, index);
+ 
+-	intf->handlers->shutdown(intf->send_info);
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
+ 
+ 	cleanup_smi_msgs(intf);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 90ec010bffbd..5faa917df1b6 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2083,18 +2083,9 @@ static int try_smi_init(struct smi_info *new_smi)
+ 		 si_to_str[new_smi->io.si_type]);
+ 
+ 	WARN_ON(new_smi->io.dev->init_name != NULL);
+-	kfree(init_name);
+-
+-	return 0;
+-
+-out_err:
+-	if (new_smi->intf) {
+-		ipmi_unregister_smi(new_smi->intf);
+-		new_smi->intf = NULL;
+-	}
+ 
++ out_err:
+ 	kfree(init_name);
+-
+ 	return rv;
+ }
+ 
+@@ -2227,6 +2218,8 @@ static void shutdown_smi(void *send_info)
+ 
+ 	kfree(smi_info->si_sm);
+ 	smi_info->si_sm = NULL;
++
++	smi_info->intf = NULL;
+ }
+ 
+ /*
+@@ -2240,10 +2233,8 @@ static void cleanup_one_si(struct smi_info *smi_info)
+ 
+ 	list_del(&smi_info->link);
+ 
+-	if (smi_info->intf) {
++	if (smi_info->intf)
+ 		ipmi_unregister_smi(smi_info->intf);
+-		smi_info->intf = NULL;
+-	}
+ 
+ 	if (smi_info->pdev) {
+ 		if (smi_info->pdev_registered)
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 18e4650c233b..265d6a6583bc 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -181,6 +181,8 @@ struct ssif_addr_info {
+ 	struct device *dev;
+ 	struct i2c_client *client;
+ 
++	struct i2c_client *added_client;
++
+ 	struct mutex clients_mutex;
+ 	struct list_head clients;
+ 
+@@ -1214,18 +1216,11 @@ static void shutdown_ssif(void *send_info)
+ 		complete(&ssif_info->wake_thread);
+ 		kthread_stop(ssif_info->thread);
+ 	}
+-
+-	/*
+-	 * No message can be outstanding now, we have removed the
+-	 * upper layer and it permitted us to do so.
+-	 */
+-	kfree(ssif_info);
+ }
+ 
+ static int ssif_remove(struct i2c_client *client)
+ {
+ 	struct ssif_info *ssif_info = i2c_get_clientdata(client);
+-	struct ipmi_smi *intf;
+ 	struct ssif_addr_info *addr_info;
+ 
+ 	if (!ssif_info)
+@@ -1235,9 +1230,7 @@ static int ssif_remove(struct i2c_client *client)
+ 	 * After this point, we won't deliver anything asychronously
+ 	 * to the message handler.  We can unregister ourself.
+ 	 */
+-	intf = ssif_info->intf;
+-	ssif_info->intf = NULL;
+-	ipmi_unregister_smi(intf);
++	ipmi_unregister_smi(ssif_info->intf);
+ 
+ 	list_for_each_entry(addr_info, &ssif_infos, link) {
+ 		if (addr_info->client == client) {
+@@ -1246,6 +1239,8 @@ static int ssif_remove(struct i2c_client *client)
+ 		}
+ 	}
+ 
++	kfree(ssif_info);
++
+ 	return 0;
+ }
+ 
+@@ -1648,15 +1643,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 
+  out:
+ 	if (rv) {
+-		/*
+-		 * Note that if addr_info->client is assigned, we
+-		 * leave it.  The i2c client hangs around even if we
+-		 * return a failure here, and the failure here is not
+-		 * propagated back to the i2c code.  This seems to be
+-		 * design intent, strange as it may be.  But if we
+-		 * don't leave it, ssif_platform_remove will not remove
+-		 * the client like it should.
+-		 */
++		addr_info->client = NULL;
+ 		dev_err(&client->dev, "Unable to start IPMI SSIF: %d\n", rv);
+ 		kfree(ssif_info);
+ 	}
+@@ -1676,7 +1663,8 @@ static int ssif_adapter_handler(struct device *adev, void *opaque)
+ 	if (adev->type != &i2c_adapter_type)
+ 		return 0;
+ 
+-	i2c_new_device(to_i2c_adapter(adev), &addr_info->binfo);
++	addr_info->added_client = i2c_new_device(to_i2c_adapter(adev),
++						 &addr_info->binfo);
+ 
+ 	if (!addr_info->adapter_name)
+ 		return 1; /* Only try the first I2C adapter by default. */
+@@ -1849,7 +1837,7 @@ static int ssif_platform_remove(struct platform_device *dev)
+ 		return 0;
+ 
+ 	mutex_lock(&ssif_infos_mutex);
+-	i2c_unregister_device(addr_info->client);
++	i2c_unregister_device(addr_info->added_client);
+ 
+ 	list_del(&addr_info->link);
+ 	kfree(addr_info);
+diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c
+index a5d402de5584..20724abd38bd 100644
+--- a/drivers/clk/clk-fixed-factor.c
++++ b/drivers/clk/clk-fixed-factor.c
+@@ -177,8 +177,15 @@ static struct clk *_of_fixed_factor_clk_setup(struct device_node *node)
+ 
+ 	clk = clk_register_fixed_factor(NULL, clk_name, parent_name, flags,
+ 					mult, div);
+-	if (IS_ERR(clk))
++	if (IS_ERR(clk)) {
++		/*
++		 * If parent clock is not registered, registration would fail.
++		 * Clear OF_POPULATED flag so that clock registration can be
++		 * attempted again from probe function.
++		 */
++		of_node_clear_flag(node, OF_POPULATED);
+ 		return clk;
++	}
+ 
+ 	ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 	if (ret) {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index e2ed078abd90..2d96e7966e94 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -2933,6 +2933,7 @@ struct clk *__clk_create_clk(struct clk_hw *hw, const char *dev_id,
+ 	return clk;
+ }
+ 
++/* keep in sync with __clk_put */
+ void __clk_free_clk(struct clk *clk)
+ {
+ 	clk_prepare_lock();
+@@ -3312,6 +3313,7 @@ int __clk_get(struct clk *clk)
+ 	return 1;
+ }
+ 
++/* keep in sync with __clk_free_clk */
+ void __clk_put(struct clk *clk)
+ {
+ 	struct module *owner;
+@@ -3345,6 +3347,7 @@ void __clk_put(struct clk *clk)
+ 
+ 	module_put(owner);
+ 
++	kfree_const(clk->con_id);
+ 	kfree(clk);
+ }
+ 
+diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
+index 3651c77fbabe..645d8a42007c 100644
+--- a/drivers/clk/imx/clk-imx6sll.c
++++ b/drivers/clk/imx/clk-imx6sll.c
+@@ -92,6 +92,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6sll-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	/* Do not bypass PLLs initially */
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index ba563ba50b40..9f1a40498642 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -142,6 +142,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	clks[IMX6UL_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", base + 0x00, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 44e4e27eddad..6f7637b19738 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -429,9 +429,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ 		val &= pm_cpu->mask_mux;
+ 	}
+ 
+-	if (val >= num_parents)
+-		return -EINVAL;
+-
+ 	return val;
+ }
+ 
+diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c
+index a896692b74ec..01dada561c10 100644
+--- a/drivers/clk/tegra/clk-bpmp.c
++++ b/drivers/clk/tegra/clk-bpmp.c
+@@ -586,9 +586,15 @@ static struct clk_hw *tegra_bpmp_clk_of_xlate(struct of_phandle_args *clkspec,
+ 	unsigned int id = clkspec->args[0], i;
+ 	struct tegra_bpmp *bpmp = data;
+ 
+-	for (i = 0; i < bpmp->num_clocks; i++)
+-		if (bpmp->clocks[i]->id == id)
+-			return &bpmp->clocks[i]->hw;
++	for (i = 0; i < bpmp->num_clocks; i++) {
++		struct tegra_bpmp_clk *clk = bpmp->clocks[i];
++
++		if (!clk)
++			continue;
++
++		if (clk->id == id)
++			return &clk->hw;
++	}
+ 
+ 	return NULL;
+ }
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index 051b8c6bae64..a9c85095bd56 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -38,6 +38,17 @@ static DEFINE_MUTEX(sev_cmd_mutex);
+ static struct sev_misc_dev *misc_dev;
+ static struct psp_device *psp_master;
+ 
++static int psp_cmd_timeout = 100;
++module_param(psp_cmd_timeout, int, 0644);
++MODULE_PARM_DESC(psp_cmd_timeout, " default timeout value, in seconds, for PSP commands");
++
++static int psp_probe_timeout = 5;
++module_param(psp_probe_timeout, int, 0644);
++MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
++
++static bool psp_dead;
++static int psp_timeout;
++
+ static struct psp_device *psp_alloc_struct(struct sp_device *sp)
+ {
+ 	struct device *dev = sp->dev;
+@@ -82,10 +93,19 @@ done:
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
++static int sev_wait_cmd_ioc(struct psp_device *psp,
++			    unsigned int *reg, unsigned int timeout)
+ {
+-	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
++	int ret;
++
++	ret = wait_event_timeout(psp->sev_int_queue,
++			psp->sev_int_rcvd, timeout * HZ);
++	if (!ret)
++		return -ETIMEDOUT;
++
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
++
++	return 0;
+ }
+ 
+ static int sev_cmd_buffer_len(int cmd)
+@@ -133,12 +153,15 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	if (!psp)
+ 		return -ENODEV;
+ 
++	if (psp_dead)
++		return -EBUSY;
++
+ 	/* Get the physical address of the command buffer */
+ 	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+ 	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+ 
+-	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x\n",
+-		cmd, phys_msb, phys_lsb);
++	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x timeout %us\n",
++		cmd, phys_msb, phys_lsb, psp_timeout);
+ 
+ 	print_hex_dump_debug("(in):  ", DUMP_PREFIX_OFFSET, 16, 2, data,
+ 			     sev_cmd_buffer_len(cmd), false);
+@@ -154,7 +177,18 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+ 
+ 	/* wait for command completion */
+-	sev_wait_cmd_ioc(psp, &reg);
++	ret = sev_wait_cmd_ioc(psp, &reg, psp_timeout);
++	if (ret) {
++		if (psp_ret)
++			*psp_ret = 0;
++
++		dev_err(psp->dev, "sev command %#x timed out, disabling PSP \n", cmd);
++		psp_dead = true;
++
++		return ret;
++	}
++
++	psp_timeout = psp_cmd_timeout;
+ 
+ 	if (psp_ret)
+ 		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+@@ -886,6 +920,8 @@ void psp_pci_init(void)
+ 
+ 	psp_master = sp->psp_data;
+ 
++	psp_timeout = psp_probe_timeout;
++
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 0f2245e1af2b..97d86dca7e85 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -1351,7 +1351,7 @@ err_sha_v4_algs:
+ 
+ err_sha_v3_algs:
+ 	for (j = 0; j < k; j++)
+-		crypto_unregister_ahash(&sha_v4_algs[j]);
++		crypto_unregister_ahash(&sha_v3_algs[j]);
+ 
+ err_aes_algs:
+ 	for (j = 0; j < i; j++)
+@@ -1367,7 +1367,7 @@ static void sahara_unregister_algs(struct sahara_dev *dev)
+ 	for (i = 0; i < ARRAY_SIZE(aes_algs); i++)
+ 		crypto_unregister_alg(&aes_algs[i]);
+ 
+-	for (i = 0; i < ARRAY_SIZE(sha_v4_algs); i++)
++	for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++)
+ 		crypto_unregister_ahash(&sha_v3_algs[i]);
+ 
+ 	if (dev->version > SAHARA_VERSION_3)
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 0b5b3abe054e..e26adf67e218 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -625,7 +625,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 	err = device_register(&devfreq->dev);
+ 	if (err) {
+ 		mutex_unlock(&devfreq->lock);
+-		goto err_dev;
++		put_device(&devfreq->dev);
++		goto err_out;
+ 	}
+ 
+ 	devfreq->trans_table =
+@@ -672,6 +673,7 @@ err_init:
+ 	mutex_unlock(&devfreq_list_lock);
+ 
+ 	device_unregister(&devfreq->dev);
++	devfreq = NULL;
+ err_dev:
+ 	if (devfreq)
+ 		kfree(devfreq);
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index c6589ccf1b9a..d349fedf4ab2 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -899,6 +899,8 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
+ 
+ 	platform_msi_domain_free_irqs(&pdev->dev);
+ 
++	tasklet_kill(&xor_dev->irq_tasklet);
++
+ 	clk_disable_unprepare(xor_dev->clk);
+ 
+ 	return 0;
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index de0957fe9668..bb6dfa2e1e8a 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2257,13 +2257,14 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 
+ 	pm_runtime_get_sync(pl330->ddma.dev);
+ 	spin_lock_irqsave(&pch->lock, flags);
++
+ 	spin_lock(&pl330->lock);
+ 	_stop(pch->thread);
+-	spin_unlock(&pl330->lock);
+-
+ 	pch->thread->req[0].desc = NULL;
+ 	pch->thread->req[1].desc = NULL;
+ 	pch->thread->req_running = -1;
++	spin_unlock(&pl330->lock);
++
+ 	power_down = pch->active;
+ 	pch->active = false;
+ 
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 2a2ccd9c78e4..8305a1ce8a9b 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -774,8 +774,9 @@ static void rcar_dmac_sync_tcr(struct rcar_dmac_chan *chan)
+ 	/* make sure all remaining data was flushed */
+ 	rcar_dmac_chcr_de_barrier(chan);
+ 
+-	/* back DE */
+-	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
++	/* back DE if remain data exists */
++	if (rcar_dmac_chan_read(chan, RCAR_DMATCR))
++		rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
+ }
+ 
+ static void rcar_dmac_chan_halt(struct rcar_dmac_chan *chan)
+diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
+index b5214c143fee..388a929baf95 100644
+--- a/drivers/firmware/efi/arm-init.c
++++ b/drivers/firmware/efi/arm-init.c
+@@ -259,7 +259,6 @@ void __init efi_init(void)
+ 
+ 	reserve_regions();
+ 	efi_esrt_init();
+-	efi_memmap_unmap();
+ 
+ 	memblock_reserve(params.mmap & PAGE_MASK,
+ 			 PAGE_ALIGN(params.mmap_size +
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 5889cbea60b8..4712445c3213 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -110,11 +110,13 @@ static int __init arm_enable_runtime_services(void)
+ {
+ 	u64 mapsize;
+ 
+-	if (!efi_enabled(EFI_BOOT)) {
++	if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
+ 		pr_info("EFI services will not be available.\n");
+ 		return 0;
+ 	}
+ 
++	efi_memmap_unmap();
++
+ 	if (efi_runtime_disabled()) {
+ 		pr_info("EFI runtime services will be disabled.\n");
+ 		return 0;
+diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
+index 1ab80e06e7c5..e5d80ebd72b6 100644
+--- a/drivers/firmware/efi/esrt.c
++++ b/drivers/firmware/efi/esrt.c
+@@ -326,7 +326,8 @@ void __init efi_esrt_init(void)
+ 
+ 	end = esrt_data + size;
+ 	pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end);
+-	efi_mem_reserve(esrt_data, esrt_data_size);
++	if (md.type == EFI_BOOT_SERVICES_DATA)
++		efi_mem_reserve(esrt_data, esrt_data_size);
+ 
+ 	pr_debug("esrt-init: loaded.\n");
+ }
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 2e33fd552899..99070e2ac3cd 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -665,6 +665,8 @@ static int pxa_gpio_probe(struct platform_device *pdev)
+ 	pchip->irq0 = irq0;
+ 	pchip->irq1 = irq1;
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 	gpio_reg_base = devm_ioremap(&pdev->dev, res->start,
+ 				     resource_size(res));
+ 	if (!gpio_reg_base)
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 1a8e20363861..a7e49fef73d4 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -92,7 +92,7 @@ struct acpi_gpio_info {
+ };
+ 
+ /* gpio suffixes used for ACPI and device tree lookup */
+-static const char * const gpio_suffixes[] = { "gpios", "gpio" };
++static __maybe_unused const char * const gpio_suffixes[] = { "gpios", "gpio" };
+ 
+ #ifdef CONFIG_OF_GPIO
+ struct gpio_desc *of_find_gpio(struct device *dev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+index c3744d89352c..ebe79bf00145 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+@@ -188,9 +188,9 @@ void __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
+ 	*doorbell_off = kfd->doorbell_id_offset + inx;
+ 
+ 	pr_debug("Get kernel queue doorbell\n"
+-			 "     doorbell offset   == 0x%08X\n"
+-			 "     kernel address    == %p\n",
+-		*doorbell_off, (kfd->doorbell_kernel_ptr + inx));
++			"     doorbell offset   == 0x%08X\n"
++			"     doorbell index    == 0x%x\n",
++		*doorbell_off, inx);
+ 
+ 	return kfd->doorbell_kernel_ptr + inx;
+ }
+@@ -199,7 +199,8 @@ void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr)
+ {
+ 	unsigned int inx;
+ 
+-	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
++	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr)
++		* sizeof(u32) / kfd->device_info->doorbell_size;
+ 
+ 	mutex_lock(&kfd->doorbell_mutex);
+ 	__clear_bit(inx, kfd->doorbell_available_index);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 1d80b4f7c681..4694386cc623 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -244,6 +244,8 @@ struct kfd_process *kfd_get_process(const struct task_struct *thread)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	process = find_process(thread);
++	if (!process)
++		return ERR_PTR(-EINVAL);
+ 
+ 	return process;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 8a7890b03d97..6ccd59b87403 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -497,6 +497,10 @@ static bool detect_dp(
+ 			sink_caps->signal = SIGNAL_TYPE_DISPLAY_PORT_MST;
+ 			link->type = dc_connection_mst_branch;
+ 
++			dal_ddc_service_set_transaction_type(
++							link->ddc,
++							sink_caps->transaction_type);
++
+ 			/*
+ 			 * This call will initiate MST topology discovery. Which
+ 			 * will detect MST ports and add new DRM connector DRM
+diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+index d567be49c31b..b487774d8041 100644
+--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+@@ -1020,7 +1020,7 @@ static int pp_get_display_power_level(void *handle,
+ static int pp_get_current_clocks(void *handle,
+ 		struct amd_pp_clock_info *clocks)
+ {
+-	struct amd_pp_simple_clock_info simple_clocks;
++	struct amd_pp_simple_clock_info simple_clocks = { 0 };
+ 	struct pp_clock_info hw_clocks;
+ 	struct pp_hwmgr *hwmgr = handle;
+ 	int ret = 0;
+@@ -1056,7 +1056,10 @@ static int pp_get_current_clocks(void *handle,
+ 	clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+ 	clocks->min_engine_clock_in_sr = hw_clocks.min_eng_clk;
+ 
+-	clocks->max_clocks_state = simple_clocks.level;
++	if (simple_clocks.level == 0)
++		clocks->max_clocks_state = PP_DAL_POWERLEVEL_7;
++	else
++		clocks->max_clocks_state = simple_clocks.level;
+ 
+ 	if (0 == phm_get_current_shallow_sleep_clocks(hwmgr, &hwmgr->current_ps->hardware, &hw_clocks)) {
+ 		clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+@@ -1159,6 +1162,8 @@ static int pp_get_display_mode_validation_clocks(void *handle,
+ 	if (!hwmgr || !hwmgr->pm_en ||!clocks)
+ 		return -EINVAL;
+ 
++	clocks->level = PP_DAL_POWERLEVEL_7;
++
+ 	mutex_lock(&hwmgr->smu_lock);
+ 
+ 	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index f8e866ceda02..77779adeef28 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk;
++			clocks->clock[i] = sclk_table->entries[i].clk * 10;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk;
++			clocks->clock[i] = mclk_table->entries[i].clk * 10;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 617557bd8c24..0adfc5392cd3 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i];
++			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk;
++			clocks->clock[i] = table->entries[i].clk * 10;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 963a4dba8213..9109b69cd052 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -160,7 +160,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
+ 		args.ustate = value;
+ 	}
+ 
++	ret = pm_runtime_get_sync(drm->dev);
++	if (IS_ERR_VALUE(ret) && ret != -EACCES)
++		return ret;
+ 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
++	pm_runtime_put_autosuspend(drm->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index f5d3158f0378..c7ec86d6c3c9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -908,8 +908,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+ 	get_task_comm(tmpname, current);
+ 	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+ 
+-	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL)))
+-		return ret;
++	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
++		ret = -ENOMEM;
++		goto done;
++	}
+ 
+ 	ret = nouveau_cli_init(drm, name, cli);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+index 78597da6313a..0e372a190d3f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+@@ -23,6 +23,10 @@
+ #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER
+ #include "priv.h"
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ static int
+ nvkm_device_tegra_power_up(struct nvkm_device_tegra *tdev)
+ {
+@@ -105,6 +109,15 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
+ 	unsigned long pgsize_bitmap;
+ 	int ret;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
++
++		arm_iommu_detach_device(dev);
++		arm_iommu_release_mapping(mapping);
++	}
++#endif
++
+ 	if (!tdev->func->iommu_bit)
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+index a188a3959f1a..6ad827b93ae1 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+@@ -823,7 +823,7 @@ static void s6e8aa0_read_mtp_id(struct s6e8aa0 *ctx)
+ 	int ret, i;
+ 
+ 	ret = s6e8aa0_dcs_read(ctx, 0xd1, id, ARRAY_SIZE(id));
+-	if (ret < ARRAY_SIZE(id) || id[0] == 0x00) {
++	if (ret < 0 || ret < ARRAY_SIZE(id) || id[0] == 0x00) {
+ 		dev_err(ctx->dev, "read id failed\n");
+ 		ctx->error = -EIO;
+ 		return;
+diff --git a/drivers/gpu/ipu-v3/ipu-csi.c b/drivers/gpu/ipu-v3/ipu-csi.c
+index 5450a2db1219..2beadb3f79c2 100644
+--- a/drivers/gpu/ipu-v3/ipu-csi.c
++++ b/drivers/gpu/ipu-v3/ipu-csi.c
+@@ -318,13 +318,17 @@ static int mbus_code_to_bus_cfg(struct ipu_csi_bus_config *cfg, u32 mbus_code)
+ /*
+  * Fill a CSI bus config struct from mbus_config and mbus_framefmt.
+  */
+-static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
++static int fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 				 struct v4l2_mbus_config *mbus_cfg,
+ 				 struct v4l2_mbus_framefmt *mbus_fmt)
+ {
++	int ret;
++
+ 	memset(csicfg, 0, sizeof(*csicfg));
+ 
+-	mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	switch (mbus_cfg->type) {
+ 	case V4L2_MBUS_PARALLEL:
+@@ -356,6 +360,8 @@ static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 		/* will never get here, keep compiler quiet */
+ 		break;
+ 	}
++
++	return 0;
+ }
+ 
+ int ipu_csi_init_interface(struct ipu_csi *csi,
+@@ -365,8 +371,11 @@ int ipu_csi_init_interface(struct ipu_csi *csi,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 width, height, data = 0;
++	int ret;
+ 
+-	fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	ret = fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* set default sensor frame width and height */
+ 	width = mbus_fmt->width;
+@@ -587,11 +596,14 @@ int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 temp;
++	int ret;
+ 
+ 	if (vc > 3)
+ 		return -EINVAL;
+ 
+-	mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&csi->lock, flags);
+ 
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index b10fe26c4891..c9a466be7709 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1178,6 +1178,9 @@ static ssize_t vmbus_chan_attr_show(struct kobject *kobj,
+ 	if (!attribute->show)
+ 		return -EIO;
+ 
++	if (chan->state != CHANNEL_OPENED_STATE)
++		return -EINVAL;
++
+ 	return attribute->show(chan, buf);
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 9bc04c50d45b..1d94ebec027b 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1027,7 +1027,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ 	}
+ 
+ 	pm_runtime_put(&adev->dev);
+-	dev_info(dev, "%s initialized\n", (char *)id->data);
++	dev_info(dev, "CPU%d: ETM v%d.%d initialized\n",
++		 drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);
+ 
+ 	if (boot_enable) {
+ 		coresight_enable(drvdata->csdev);
+@@ -1045,23 +1046,19 @@ err_arch_supported:
+ 	return ret;
+ }
+ 
++#define ETM4x_AMBA_ID(pid)			\
++	{					\
++		.id	= pid,			\
++		.mask	= 0x000fffff,		\
++	}
++
+ static const struct amba_id etm4_ids[] = {
+-	{       /* ETM 4.0 - Cortex-A53  */
+-		.id	= 0x000bb95d,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - Cortex-A57 */
+-		.id	= 0x000bb95e,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - A72, Maia, HiSilicon */
+-		.id = 0x000bb95a,
+-		.mask = 0x000fffff,
+-		.data = "ETM 4.0",
+-	},
+-	{ 0, 0},
++	ETM4x_AMBA_ID(0x000bb95d),		/* Cortex-A53 */
++	ETM4x_AMBA_ID(0x000bb95e),		/* Cortex-A57 */
++	ETM4x_AMBA_ID(0x000bb95a),		/* Cortex-A72 */
++	ETM4x_AMBA_ID(0x000bb959),		/* Cortex-A73 */
++	ETM4x_AMBA_ID(0x000bb9da),		/* Cortex-A35 */
++	{},
+ };
+ 
+ static struct amba_driver etm4x_driver = {
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 01b7457fe8fc..459ef930d98c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -40,8 +40,9 @@
+ 
+ /** register definition **/
+ /* FFSR - 0x300 */
+-#define FFSR_FT_STOPPED		BIT(1)
++#define FFSR_FT_STOPPED_BIT	1
+ /* FFCR - 0x304 */
++#define FFCR_FON_MAN_BIT	6
+ #define FFCR_FON_MAN		BIT(6)
+ #define FFCR_STOP_FI		BIT(12)
+ 
+@@ -86,9 +87,9 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
+ 	/* Generate manual flush */
+ 	writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
+ 	/* Wait for flush to complete */
+-	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
++	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN_BIT, 0);
+ 	/* Wait for formatter to stop */
+-	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
++	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED_BIT, 1);
+ 
+ 	CS_LOCK(drvdata->base);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index 29e834aab539..b673718952f6 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -108,7 +108,7 @@ static int coresight_find_link_inport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find inport, parent: %s, child: %s\n",
+ 		dev_name(&parent->dev), dev_name(&csdev->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_find_link_outport(struct coresight_device *csdev,
+@@ -126,7 +126,7 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find outport, parent: %s, child: %s\n",
+ 		dev_name(&csdev->dev), dev_name(&child->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
+@@ -179,6 +179,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
+ 	else
+ 		refport = 0;
+ 
++	if (refport < 0)
++		return refport;
++
+ 	if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
+ 		if (link_ops(csdev)->enable) {
+ 			ret = link_ops(csdev)->enable(csdev, inport, outport);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 715b6fdb4989..5c8ea4e9203c 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -111,22 +111,22 @@
+ #define ASPEED_I2CD_DEV_ADDR_MASK			GENMASK(6, 0)
+ 
+ enum aspeed_i2c_master_state {
++	ASPEED_I2C_MASTER_INACTIVE,
+ 	ASPEED_I2C_MASTER_START,
+ 	ASPEED_I2C_MASTER_TX_FIRST,
+ 	ASPEED_I2C_MASTER_TX,
+ 	ASPEED_I2C_MASTER_RX_FIRST,
+ 	ASPEED_I2C_MASTER_RX,
+ 	ASPEED_I2C_MASTER_STOP,
+-	ASPEED_I2C_MASTER_INACTIVE,
+ };
+ 
+ enum aspeed_i2c_slave_state {
++	ASPEED_I2C_SLAVE_STOP,
+ 	ASPEED_I2C_SLAVE_START,
+ 	ASPEED_I2C_SLAVE_READ_REQUESTED,
+ 	ASPEED_I2C_SLAVE_READ_PROCESSED,
+ 	ASPEED_I2C_SLAVE_WRITE_REQUESTED,
+ 	ASPEED_I2C_SLAVE_WRITE_RECEIVED,
+-	ASPEED_I2C_SLAVE_STOP,
+ };
+ 
+ struct aspeed_i2c_bus {
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index dafcb6f019b3..2702ead01a03 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -722,6 +722,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 	dgid = (union ib_gid *) &addr->sib_addr;
+ 	pkey = ntohs(addr->sib_pkey);
+ 
++	mutex_lock(&lock);
+ 	list_for_each_entry(cur_dev, &dev_list, list) {
+ 		for (p = 1; p <= cur_dev->device->phys_port_cnt; ++p) {
+ 			if (!rdma_cap_af_ib(cur_dev->device, p))
+@@ -748,18 +749,19 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 					cma_dev = cur_dev;
+ 					sgid = gid;
+ 					id_priv->id.port_num = p;
++					goto found;
+ 				}
+ 			}
+ 		}
+ 	}
+-
+-	if (!cma_dev)
+-		return -ENODEV;
++	mutex_unlock(&lock);
++	return -ENODEV;
+ 
+ found:
+ 	cma_attach_to_dev(id_priv, cma_dev);
+-	addr = (struct sockaddr_ib *) cma_src_addr(id_priv);
+-	memcpy(&addr->sib_addr, &sgid, sizeof sgid);
++	mutex_unlock(&lock);
++	addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
++	memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
+ 	cma_translate_ib(addr, &id_priv->id.route.addr.dev_addr);
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/cong.c b/drivers/infiniband/hw/mlx5/cong.c
+index 985fa2637390..7e4e358a4fd8 100644
+--- a/drivers/infiniband/hw/mlx5/cong.c
++++ b/drivers/infiniband/hw/mlx5/cong.c
+@@ -359,9 +359,6 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	int ret;
+ 	char lbuf[11];
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	ret = mlx5_ib_get_cc_params(param->dev, param->port_num, offset, &var);
+ 	if (ret)
+ 		return ret;
+@@ -370,11 +367,7 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (copy_to_user(buf, lbuf, ret))
+-		return -EFAULT;
+-
+-	*pos += ret;
+-	return ret;
++	return simple_read_from_buffer(buf, count, pos, lbuf, ret);
+ }
+ 
+ static const struct file_operations dbg_cc_fops = {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 90a9c461cedc..308456d28afb 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -271,16 +271,16 @@ static ssize_t size_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -310,19 +310,11 @@ static ssize_t size_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->size);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations size_fops = {
+@@ -337,16 +329,16 @@ static ssize_t limit_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -372,19 +364,11 @@ static ssize_t limit_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->limit);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations limit_fops = {
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index dfba44a40f0b..fe45d6cad6cd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -225,9 +225,14 @@ static int hdr_check(struct rxe_pkt_info *pkt)
+ 		goto err1;
+ 	}
+ 
++	if (unlikely(qpn == 0)) {
++		pr_warn_once("QP 0 not supported");
++		goto err1;
++	}
++
+ 	if (qpn != IB_MULTICAST_QPN) {
+-		index = (qpn == 0) ? port->qp_smi_index :
+-			((qpn == 1) ? port->qp_gsi_index : qpn);
++		index = (qpn == 1) ? port->qp_gsi_index : qpn;
++
+ 		qp = rxe_pool_get_index(&rxe->qp_pool, index);
+ 		if (unlikely(!qp)) {
+ 			pr_warn_ratelimited("no qp matches qpn 0x%x\n", qpn);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 6535d9beb24d..a620701f9d41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1028,12 +1028,14 @@ static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id, struct ib_cm_event *even
+ 
+ 	skb_queue_head_init(&skqueue);
+ 
++	netif_tx_lock_bh(p->dev);
+ 	spin_lock_irq(&priv->lock);
+ 	set_bit(IPOIB_FLAG_OPER_UP, &p->flags);
+ 	if (p->neigh)
+ 		while ((skb = __skb_dequeue(&p->neigh->queue)))
+ 			__skb_queue_tail(&skqueue, skb);
+ 	spin_unlock_irq(&priv->lock);
++	netif_tx_unlock_bh(p->dev);
+ 
+ 	while ((skb = __skb_dequeue(&skqueue))) {
+ 		skb->dev = p->dev;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 26cde95bc0f3..7630d5ed2b41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1787,7 +1787,8 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
+ 		goto out_free_pd;
+ 	}
+ 
+-	if (ipoib_neigh_hash_init(priv) < 0) {
++	ret = ipoib_neigh_hash_init(priv);
++	if (ret) {
+ 		pr_warn("%s failed to init neigh hash\n", dev->name);
+ 		goto out_dev_uninit;
+ 	}
+diff --git a/drivers/input/joystick/pxrc.c b/drivers/input/joystick/pxrc.c
+index 07a0dbd3ced2..cfb410cf0789 100644
+--- a/drivers/input/joystick/pxrc.c
++++ b/drivers/input/joystick/pxrc.c
+@@ -120,48 +120,51 @@ static void pxrc_close(struct input_dev *input)
+ 	mutex_unlock(&pxrc->pm_mutex);
+ }
+ 
++static void pxrc_free_urb(void *_pxrc)
++{
++	struct pxrc *pxrc = _pxrc;
++
++	usb_free_urb(pxrc->urb);
++}
++
+ static int pxrc_usb_init(struct pxrc *pxrc)
+ {
+ 	struct usb_endpoint_descriptor *epirq;
+ 	unsigned int pipe;
+-	int retval;
++	int error;
+ 
+ 	/* Set up the endpoint information */
+ 	/* This device only has an interrupt endpoint */
+-	retval = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
+-			NULL, NULL, &epirq, NULL);
+-	if (retval) {
+-		dev_err(&pxrc->intf->dev,
+-			"Could not find endpoint\n");
+-		goto error;
++	error = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
++					  NULL, NULL, &epirq, NULL);
++	if (error) {
++		dev_err(&pxrc->intf->dev, "Could not find endpoint\n");
++		return error;
+ 	}
+ 
+ 	pxrc->bsize = usb_endpoint_maxp(epirq);
+ 	pxrc->epaddr = epirq->bEndpointAddress;
+ 	pxrc->data = devm_kmalloc(&pxrc->intf->dev, pxrc->bsize, GFP_KERNEL);
+-	if (!pxrc->data) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->data)
++		return -ENOMEM;
+ 
+ 	usb_set_intfdata(pxrc->intf, pxrc);
+ 	usb_make_path(pxrc->udev, pxrc->phys, sizeof(pxrc->phys));
+ 	strlcat(pxrc->phys, "/input0", sizeof(pxrc->phys));
+ 
+ 	pxrc->urb = usb_alloc_urb(0, GFP_KERNEL);
+-	if (!pxrc->urb) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->urb)
++		return -ENOMEM;
++
++	error = devm_add_action_or_reset(&pxrc->intf->dev, pxrc_free_urb, pxrc);
++	if (error)
++		return error;
+ 
+ 	pipe = usb_rcvintpipe(pxrc->udev, pxrc->epaddr),
+ 	usb_fill_int_urb(pxrc->urb, pxrc->udev, pipe, pxrc->data, pxrc->bsize,
+ 						pxrc_usb_irq, pxrc, 1);
+ 
+-error:
+-	return retval;
+-
+-
++	return 0;
+ }
+ 
+ static int pxrc_input_init(struct pxrc *pxrc)
+@@ -197,7 +200,7 @@ static int pxrc_probe(struct usb_interface *intf,
+ 		      const struct usb_device_id *id)
+ {
+ 	struct pxrc *pxrc;
+-	int retval;
++	int error;
+ 
+ 	pxrc = devm_kzalloc(&intf->dev, sizeof(*pxrc), GFP_KERNEL);
+ 	if (!pxrc)
+@@ -207,29 +210,20 @@ static int pxrc_probe(struct usb_interface *intf,
+ 	pxrc->udev = usb_get_dev(interface_to_usbdev(intf));
+ 	pxrc->intf = intf;
+ 
+-	retval = pxrc_usb_init(pxrc);
+-	if (retval)
+-		goto error;
++	error = pxrc_usb_init(pxrc);
++	if (error)
++		return error;
+ 
+-	retval = pxrc_input_init(pxrc);
+-	if (retval)
+-		goto err_free_urb;
++	error = pxrc_input_init(pxrc);
++	if (error)
++		return error;
+ 
+ 	return 0;
+-
+-err_free_urb:
+-	usb_free_urb(pxrc->urb);
+-
+-error:
+-	return retval;
+ }
+ 
+ static void pxrc_disconnect(struct usb_interface *intf)
+ {
+-	struct pxrc *pxrc = usb_get_intfdata(intf);
+-
+-	usb_free_urb(pxrc->urb);
+-	usb_set_intfdata(intf, NULL);
++	/* All driver resources are devm-managed. */
+ }
+ 
+ static int pxrc_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/drivers/input/touchscreen/rohm_bu21023.c b/drivers/input/touchscreen/rohm_bu21023.c
+index bda0500c9b57..714affdd742f 100644
+--- a/drivers/input/touchscreen/rohm_bu21023.c
++++ b/drivers/input/touchscreen/rohm_bu21023.c
+@@ -304,7 +304,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 	msg[1].len = len;
+ 	msg[1].buf = buf;
+ 
+-	i2c_lock_adapter(adap);
++	i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	for (i = 0; i < 2; i++) {
+ 		if (__i2c_transfer(adap, &msg[i], 1) < 0) {
+@@ -313,7 +313,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 		}
+ 	}
+ 
+-	i2c_unlock_adapter(adap);
++	i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index b73c6a7bf7f2..b7076aa24d6b 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -1302,6 +1302,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+ 	q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
++	writel(q->cons, q->cons_reg);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index 50e3a9fcf43e..b5948ba6b3b3 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -192,6 +192,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ {
+ 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+ 	struct device *dev = cfg->iommu_dev;
++	phys_addr_t phys;
+ 	dma_addr_t dma;
+ 	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+ 	void *table = NULL;
+@@ -200,6 +201,10 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+ 	else if (lvl == 2)
+ 		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++	phys = virt_to_phys(table);
++	if (phys != (arm_v7s_iopte)phys)
++		/* Doesn't fit in PTE */
++		goto out_free;
+ 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+@@ -209,7 +214,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		 * address directly, so if the DMA layer suggests otherwise by
+ 		 * translating or truncating them, that bodes very badly...
+ 		 */
+-		if (dma != virt_to_phys(table))
++		if (dma != phys)
+ 			goto out_unmap;
+ 	}
+ 	kmemleak_ignore(table);
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 010a254305dd..88641b4560bc 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -237,7 +237,8 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
+ 	void *pages;
+ 
+ 	VM_BUG_ON((gfp & __GFP_HIGHMEM));
+-	p = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO, order);
++	p = alloc_pages_node(dev ? dev_to_node(dev) : NUMA_NO_NODE,
++			     gfp | __GFP_ZERO, order);
+ 	if (!p)
+ 		return NULL;
+ 
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index feb1664815b7..6e2882cda55d 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -47,6 +47,7 @@ struct ipmmu_features {
+ 	unsigned int number_of_contexts;
+ 	bool setup_imbuscr;
+ 	bool twobit_imttbcr_sl0;
++	bool reserved_context;
+ };
+ 
+ struct ipmmu_vmsa_device {
+@@ -916,6 +917,7 @@ static const struct ipmmu_features ipmmu_features_default = {
+ 	.number_of_contexts = 1, /* software only tested with one context */
+ 	.setup_imbuscr = true,
+ 	.twobit_imttbcr_sl0 = false,
++	.reserved_context = false,
+ };
+ 
+ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+@@ -924,6 +926,7 @@ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+ 	.number_of_contexts = 8,
+ 	.setup_imbuscr = false,
+ 	.twobit_imttbcr_sl0 = true,
++	.reserved_context = true,
+ };
+ 
+ static const struct of_device_id ipmmu_of_ids[] = {
+@@ -1017,6 +1020,11 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		ipmmu_device_reset(mmu);
++
++		if (mmu->features->reserved_context) {
++			dev_info(&pdev->dev, "IPMMU context 0 is reserved\n");
++			set_bit(0, mmu->ctx);
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
+index b57f764d6a16..93ebba6dcc25 100644
+--- a/drivers/lightnvm/pblk-init.c
++++ b/drivers/lightnvm/pblk-init.c
+@@ -716,10 +716,11 @@ static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
+ 
+ 		/*
+ 		 * In 1.2 spec. chunk state is not persisted by the device. Thus
+-		 * some of the values are reset each time pblk is instantiated.
++		 * some of the values are reset each time pblk is instantiated,
++		 * so we have to assume that the block is closed.
+ 		 */
+ 		if (lun_bb_meta[line->id] == NVM_BLK_T_FREE)
+-			chunk->state =  NVM_CHK_ST_FREE;
++			chunk->state =  NVM_CHK_ST_CLOSED;
+ 		else
+ 			chunk->state = NVM_CHK_ST_OFFLINE;
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index 3a5069183859..d83466b3821b 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -742,9 +742,10 @@ static int pblk_recov_check_line_version(struct pblk *pblk,
+ 		return 1;
+ 	}
+ 
+-#ifdef NVM_DEBUG
++#ifdef CONFIG_NVM_PBLK_DEBUG
+ 	if (header->version_minor > EMETA_VERSION_MINOR)
+-		pr_info("pblk: newer line minor version found: %d\n", line_v);
++		pr_info("pblk: newer line minor version found: %d\n",
++				header->version_minor);
+ #endif
+ 
+ 	return 0;
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 12decdbd722d..fc65f0dedf7f 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -99,10 +99,26 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
+ {
+ 	struct scatterlist sg;
+ 
+-	sg_init_one(&sg, data, len);
+-	ahash_request_set_crypt(req, &sg, NULL, len);
+-
+-	return crypto_wait_req(crypto_ahash_update(req), wait);
++	if (likely(!is_vmalloc_addr(data))) {
++		sg_init_one(&sg, data, len);
++		ahash_request_set_crypt(req, &sg, NULL, len);
++		return crypto_wait_req(crypto_ahash_update(req), wait);
++	} else {
++		do {
++			int r;
++			size_t this_step = min_t(size_t, len, PAGE_SIZE - offset_in_page(data));
++			flush_kernel_vmap_range((void *)data, this_step);
++			sg_init_table(&sg, 1);
++			sg_set_page(&sg, vmalloc_to_page(data), this_step, offset_in_page(data));
++			ahash_request_set_crypt(req, &sg, NULL, this_step);
++			r = crypto_wait_req(crypto_ahash_update(req), wait);
++			if (unlikely(r))
++				return r;
++			data += this_step;
++			len -= this_step;
++		} while (len);
++		return 0;
++	}
+ }
+ 
+ /*
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index f32ec7342ef0..5653e8eebe2b 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -1377,6 +1377,11 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
+ 	struct vb2_buffer *vb;
+ 	int ret;
+ 
++	if (q->error) {
++		dprintk(1, "fatal error occurred on queue\n");
++		return -EIO;
++	}
++
+ 	vb = q->bufs[index];
+ 
+ 	switch (vb->state) {
+diff --git a/drivers/media/i2c/ov5645.c b/drivers/media/i2c/ov5645.c
+index b3f762578f7f..1722cdab0daf 100644
+--- a/drivers/media/i2c/ov5645.c
++++ b/drivers/media/i2c/ov5645.c
+@@ -510,8 +510,8 @@ static const struct reg_value ov5645_setting_full[] = {
+ };
+ 
+ static const s64 link_freq[] = {
+-	222880000,
+-	334320000
++	224000000,
++	336000000
+ };
+ 
+ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+@@ -520,7 +520,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 960,
+ 		.data = ov5645_setting_sxga,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_sxga),
+-		.pixel_clock = 111440000,
++		.pixel_clock = 112000000,
+ 		.link_freq = 0 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -528,7 +528,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1080,
+ 		.data = ov5645_setting_1080p,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_1080p),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -536,7 +536,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1944,
+ 		.data = ov5645_setting_full,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_full),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ };
+@@ -1145,7 +1145,8 @@ static int ov5645_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
+-	if (xclk_freq != 23880000) {
++	/* external clock must be 24MHz, allow 1% tolerance */
++	if (xclk_freq < 23760000 || xclk_freq > 24240000) {
+ 		dev_err(dev, "external clock frequency %u is not supported\n",
+ 			xclk_freq);
+ 		return -EINVAL;
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index 0ea8dd44026c..3a06c000f97b 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1190,6 +1190,14 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 			return err;
+ 	}
+ 
++	/* Initialize vc->dev and vc->ch for the error path */
++	for (ch = 0; ch < max_channels(dev); ch++) {
++		struct tw686x_video_channel *vc = &dev->video_channels[ch];
++
++		vc->dev = dev;
++		vc->ch = ch;
++	}
++
+ 	for (ch = 0; ch < max_channels(dev); ch++) {
+ 		struct tw686x_video_channel *vc = &dev->video_channels[ch];
+ 		struct video_device *vdev;
+@@ -1198,9 +1206,6 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 		spin_lock_init(&vc->qlock);
+ 		INIT_LIST_HEAD(&vc->vidq_queued);
+ 
+-		vc->dev = dev;
+-		vc->ch = ch;
+-
+ 		/* default settings */
+ 		err = tw686x_set_standard(vc, V4L2_STD_NTSC);
+ 		if (err)
+diff --git a/drivers/mfd/88pm860x-i2c.c b/drivers/mfd/88pm860x-i2c.c
+index 84e313107233..7b9052ea7413 100644
+--- a/drivers/mfd/88pm860x-i2c.c
++++ b/drivers/mfd/88pm860x-i2c.c
+@@ -146,14 +146,14 @@ int pm860x_page_reg_write(struct i2c_client *i2c, int reg,
+ 	unsigned char zero;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xFA, 0, &zero);
+ 	read_device(i2c, 0xFB, 0, &zero);
+ 	read_device(i2c, 0xFF, 0, &zero);
+ 	ret = write_device(i2c, reg, 1, &data);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_reg_write);
+@@ -164,14 +164,14 @@ int pm860x_page_bulk_read(struct i2c_client *i2c, int reg,
+ 	unsigned char zero = 0;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xfa, 0, &zero);
+ 	read_device(i2c, 0xfb, 0, &zero);
+ 	read_device(i2c, 0xff, 0, &zero);
+ 	ret = read_device(i2c, reg, count, buf);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_bulk_read);
+diff --git a/drivers/misc/hmc6352.c b/drivers/misc/hmc6352.c
+index eeb7eef62174..38f90e179927 100644
+--- a/drivers/misc/hmc6352.c
++++ b/drivers/misc/hmc6352.c
+@@ -27,6 +27,7 @@
+ #include <linux/err.h>
+ #include <linux/delay.h>
+ #include <linux/sysfs.h>
++#include <linux/nospec.h>
+ 
+ static DEFINE_MUTEX(compass_mutex);
+ 
+@@ -50,6 +51,7 @@ static int compass_store(struct device *dev, const char *buf, size_t count,
+ 		return ret;
+ 	if (val >= strlen(map))
+ 		return -EINVAL;
++	val = array_index_nospec(val, strlen(map));
+ 	mutex_lock(&compass_mutex);
+ 	ret = compass_command(c, map[val]);
+ 	mutex_unlock(&compass_mutex);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index fb83d1375638..50d82c3d032a 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -2131,7 +2131,7 @@ static int ibmvmc_init_crq_queue(struct crq_server_adapter *adapter)
+ 	retrc = plpar_hcall_norets(H_REG_CRQ,
+ 				   vdev->unit_address,
+ 				   queue->msg_token, PAGE_SIZE);
+-	retrc = rc;
++	rc = retrc;
+ 
+ 	if (rc == H_RESOURCE)
+ 		rc = ibmvmc_reset_crq_queue(adapter);
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 0208c4b027c5..fa0236a5e59a 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -267,7 +267,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 
+ 	ret = 0;
+ 	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0);
+-	if (bytes_recv < if_version_length) {
++	if (bytes_recv < 0 || bytes_recv < if_version_length) {
+ 		dev_err(bus->dev, "Could not read IF version\n");
+ 		ret = -EIO;
+ 		goto err;
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index b1133739fb4b..692b2f9a18cb 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -505,17 +505,15 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 
+ 	cl = cldev->cl;
+ 
++	mutex_lock(&bus->device_lock);
+ 	if (cl->state == MEI_FILE_UNINITIALIZED) {
+-		mutex_lock(&bus->device_lock);
+ 		ret = mei_cl_link(cl);
+-		mutex_unlock(&bus->device_lock);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 		/* update pointers */
+ 		cl->cldev = cldev;
+ 	}
+ 
+-	mutex_lock(&bus->device_lock);
+ 	if (mei_cl_is_connected(cl)) {
+ 		ret = 0;
+ 		goto out;
+@@ -600,9 +598,8 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-out:
+ 	mei_cl_bus_module_put(cldev);
+-
++out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+ 	mei_cl_unlink(cl);
+@@ -860,12 +857,13 @@ static void mei_cl_bus_dev_release(struct device *dev)
+ 
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
++	mei_cl_unlink(cldev->cl);
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+ 
+ static const struct device_type mei_cl_device_type = {
+-	.release	= mei_cl_bus_dev_release,
++	.release = mei_cl_bus_dev_release,
+ };
+ 
+ /**
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index fe6595fe94f1..995ff1b7e7b5 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1140,15 +1140,18 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		props_res = (struct hbm_props_response *)mei_msg;
+ 
+-		if (props_res->status) {
++		if (props_res->status == MEI_HBMS_CLIENT_NOT_FOUND) {
++			dev_dbg(dev->dev, "hbm: properties response: %d CLIENT_NOT_FOUND\n",
++				props_res->me_addr);
++		} else if (props_res->status) {
+ 			dev_err(dev->dev, "hbm: properties response: wrong status = %d %s\n",
+ 				props_res->status,
+ 				mei_hbm_status_str(props_res->status));
+ 			return -EPROTO;
++		} else {
++			mei_hbm_me_cl_add(dev, props_res);
+ 		}
+ 
+-		mei_hbm_me_cl_add(dev, props_res);
+-
+ 		/* request property for the next client */
+ 		if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
+ 			return -EIO;
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 09cb89645d06..2cfec33178c1 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -517,19 +517,23 @@ static struct mmc_host_ops meson_mx_mmc_ops = {
+ static struct platform_device *meson_mx_mmc_slot_pdev(struct device *parent)
+ {
+ 	struct device_node *slot_node;
++	struct platform_device *pdev;
+ 
+ 	/*
+ 	 * TODO: the MMC core framework currently does not support
+ 	 * controllers with multiple slots properly. So we only register
+ 	 * the first slot for now
+ 	 */
+-	slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot");
++	slot_node = of_get_compatible_child(parent->of_node, "mmc-slot");
+ 	if (!slot_node) {
+ 		dev_warn(parent, "no 'mmc-slot' sub-node found\n");
+ 		return ERR_PTR(-ENOENT);
+ 	}
+ 
+-	return of_platform_device_create(slot_node, NULL, parent);
++	pdev = of_platform_device_create(slot_node, NULL, parent);
++	of_node_put(slot_node);
++
++	return pdev;
+ }
+ 
+ static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host)
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 071693ebfe18..68760d4a5d3d 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -2177,6 +2177,7 @@ static int omap_hsmmc_remove(struct platform_device *pdev)
+ 	dma_release_channel(host->tx_chan);
+ 	dma_release_channel(host->rx_chan);
+ 
++	dev_pm_clear_wake_irq(host->dev);
+ 	pm_runtime_dont_use_autosuspend(host->dev);
+ 	pm_runtime_put_sync(host->dev);
+ 	pm_runtime_disable(host->dev);
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4ffa6b173a21..8332f56e6c0d 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -22,6 +22,7 @@
+ #include <linux/sys_soc.h>
+ #include <linux/clk.h>
+ #include <linux/ktime.h>
++#include <linux/dma-mapping.h>
+ #include <linux/mmc/host.h>
+ #include "sdhci-pltfm.h"
+ #include "sdhci-esdhc.h"
+@@ -427,6 +428,11 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
+ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ {
+ 	u32 value;
++	struct device *dev = mmc_dev(host->mmc);
++
++	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
++	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
++		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+ 
+ 	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+ 	value |= ESDHC_DMA_SNOOP;
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 970d38f68939..137df06b9b6e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -334,7 +334,8 @@ static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
+ 		  SDHCI_QUIRK_NO_HISPD_BIT |
+ 		  SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ 		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_BROKEN_HS200,
+ 	.ops  = &tegra_sdhci_ops,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 1c828e0e9905..a7b5602ef6f7 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -3734,14 +3734,21 @@ int sdhci_setup_host(struct sdhci_host *host)
+ 	    mmc_gpio_get_cd(host->mmc) < 0)
+ 		mmc->caps |= MMC_CAP_NEEDS_POLL;
+ 
+-	/* If vqmmc regulator and no 1.8V signalling, then there's no UHS */
+ 	if (!IS_ERR(mmc->supply.vqmmc)) {
+ 		ret = regulator_enable(mmc->supply.vqmmc);
++
++		/* If vqmmc provides no 1.8V signalling, then there's no UHS */
+ 		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 1700000,
+ 						    1950000))
+ 			host->caps1 &= ~(SDHCI_SUPPORT_SDR104 |
+ 					 SDHCI_SUPPORT_SDR50 |
+ 					 SDHCI_SUPPORT_DDR50);
++
++		/* In eMMC case vqmmc might be a fixed 1.8V regulator */
++		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 2700000,
++						    3600000))
++			host->flags &= ~SDHCI_SIGNALING_330;
++
+ 		if (ret) {
+ 			pr_warn("%s: Failed to enable vqmmc regulator: %d\n",
+ 				mmc_hostname(mmc), ret);
+diff --git a/drivers/mtd/maps/solutionengine.c b/drivers/mtd/maps/solutionengine.c
+index bb580bc16445..c07f21b20463 100644
+--- a/drivers/mtd/maps/solutionengine.c
++++ b/drivers/mtd/maps/solutionengine.c
+@@ -59,9 +59,9 @@ static int __init init_soleng_maps(void)
+ 			return -ENXIO;
+ 		}
+ 	}
+-	printk(KERN_NOTICE "Solution Engine: Flash at 0x%08lx, EPROM at 0x%08lx\n",
+-	       soleng_flash_map.phys & 0x1fffffff,
+-	       soleng_eprom_map.phys & 0x1fffffff);
++	printk(KERN_NOTICE "Solution Engine: Flash at 0x%pap, EPROM at 0x%pap\n",
++	       &soleng_flash_map.phys,
++	       &soleng_eprom_map.phys);
+ 	flash_mtd->owner = THIS_MODULE;
+ 
+ 	eprom_mtd = do_map_probe("map_rom", &soleng_eprom_map);
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index cd67c85cc87d..02389528f622 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -160,8 +160,12 @@ static ssize_t mtdchar_read(struct file *file, char __user *buf, size_t count,
+ 
+ 	pr_debug("MTD_read\n");
+ 
+-	if (*ppos + count > mtd->size)
+-		count = mtd->size - *ppos;
++	if (*ppos + count > mtd->size) {
++		if (*ppos < mtd->size)
++			count = mtd->size - *ppos;
++		else
++			count = 0;
++	}
+ 
+ 	if (!count)
+ 		return 0;
+@@ -246,7 +250,7 @@ static ssize_t mtdchar_write(struct file *file, const char __user *buf, size_t c
+ 
+ 	pr_debug("MTD_write\n");
+ 
+-	if (*ppos == mtd->size)
++	if (*ppos >= mtd->size)
+ 		return -ENOSPC;
+ 
+ 	if (*ppos + count > mtd->size)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index cc1e4f820e64..533094233659 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -289,7 +289,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+ 	struct page *pages = NULL;
+ 	dma_addr_t pages_dma;
+ 	gfp_t gfp;
+-	int order, ret;
++	int order;
+ 
+ again:
+ 	order = alloc_order;
+@@ -316,10 +316,9 @@ again:
+ 	/* Map the pages */
+ 	pages_dma = dma_map_page(pdata->dev, pages, 0,
+ 				 PAGE_SIZE << order, DMA_FROM_DEVICE);
+-	ret = dma_mapping_error(pdata->dev, pages_dma);
+-	if (ret) {
++	if (dma_mapping_error(pdata->dev, pages_dma)) {
+ 		put_page(pages);
+-		return ret;
++		return -ENOMEM;
+ 	}
+ 
+ 	pa->pages = pages;
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 929d485a3a2f..e088dedc1747 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -493,6 +493,9 @@ static void cn23xx_pf_setup_global_output_regs(struct octeon_device *oct)
+ 	for (q_no = srn; q_no < ern; q_no++) {
+ 		reg_val = octeon_read_csr(oct, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+index 9338a0008378..1f8b7f651254 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+@@ -165,6 +165,9 @@ static void cn23xx_vf_setup_global_output_regs(struct octeon_device *oct)
+ 		reg_val =
+ 		    octeon_read_csr(oct, CN23XX_VF_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 6d7404f66f84..c9a061e707c4 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1753,7 +1753,10 @@ static int gmac_open(struct net_device *netdev)
+ 	phy_start(netdev->phydev);
+ 
+ 	err = geth_resize_freeq(port);
+-	if (err) {
++	/* It's fine if it's just busy, the other port has set up
++	 * the freeq in that case.
++	 */
++	if (err && (err != -EBUSY)) {
+ 		netdev_err(netdev, "could not resize freeq\n");
+ 		goto err_stop_phy;
+ 	}
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index ff92ab1daeb8..1e9d882c04ef 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -4500,7 +4500,7 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 				port_res->max_vfs += le16_to_cpu(pcie->num_vfs);
+ 			}
+ 		}
+-		return status;
++		goto err;
+ 	}
+ 
+ 	pcie = be_get_pcie_desc(resp->func_param, desc_count,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 25a73bb2e642..9d69621f5ab4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3081,7 +3081,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	priv->dev = &pdev->dev;
+ 	priv->netdev = netdev;
+ 	priv->ae_handle = handle;
+-	priv->ae_handle->reset_level = HNAE3_NONE_RESET;
+ 	priv->ae_handle->last_reset_time = jiffies;
+ 	priv->tx_timeout_count = 0;
+ 
+@@ -3102,6 +3101,11 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	/* Carrier off reporting is important to ethtool even BEFORE open */
+ 	netif_carrier_off(netdev);
+ 
++	if (handle->flags & HNAE3_SUPPORT_VF)
++		handle->reset_level = HNAE3_VF_RESET;
++	else
++		handle->reset_level = HNAE3_FUNC_RESET;
++
+ 	ret = hns3_get_ring_config(priv);
+ 	if (ret) {
+ 		ret = -ENOMEM;
+@@ -3418,7 +3422,7 @@ static int hns3_reset_notify_down_enet(struct hnae3_handle *handle)
+ 	struct net_device *ndev = kinfo->netdev;
+ 
+ 	if (!netif_running(ndev))
+-		return -EIO;
++		return 0;
+ 
+ 	return hns3_nic_net_stop(ndev);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 6fd7ea8074b0..13f43b74fd6d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2825,15 +2825,13 @@ static void hclge_clear_reset_cause(struct hclge_dev *hdev)
+ static void hclge_reset(struct hclge_dev *hdev)
+ {
+ 	/* perform reset of the stack & ae device for a client */
+-
++	rtnl_lock();
+ 	hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);
+ 
+ 	if (!hclge_reset_wait(hdev)) {
+-		rtnl_lock();
+ 		hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
+ 		hclge_reset_ae_dev(hdev->ae_dev);
+ 		hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
+-		rtnl_unlock();
+ 
+ 		hclge_clear_reset_cause(hdev);
+ 	} else {
+@@ -2843,6 +2841,7 @@ static void hclge_reset(struct hclge_dev *hdev)
+ 	}
+ 
+ 	hclge_notify_client(hdev, HNAE3_UP_CLIENT);
++	rtnl_unlock();
+ }
+ 
+ static void hclge_reset_event(struct hnae3_handle *handle)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 0319ed9ef8b8..f7f08e3fa761 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5011,6 +5011,12 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			(unsigned long)of_device_get_match_data(&pdev->dev);
+ 	}
+ 
++	/* multi queue mode isn't supported on PPV2.1, fallback to single
++	 * mode
++	 */
++	if (priv->hw_version == MVPP21)
++		queue_mode = MVPP2_QDIST_SINGLE_MODE;
++
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(base))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 384c1fa49081..f167f4eec3ff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -452,6 +452,7 @@ const char *mlx5_command_str(int command)
+ 	MLX5_COMMAND_STR_CASE(SET_HCA_CAP);
+ 	MLX5_COMMAND_STR_CASE(QUERY_ISSI);
+ 	MLX5_COMMAND_STR_CASE(SET_ISSI);
++	MLX5_COMMAND_STR_CASE(SET_DRIVER_VERSION);
+ 	MLX5_COMMAND_STR_CASE(CREATE_MKEY);
+ 	MLX5_COMMAND_STR_CASE(QUERY_MKEY);
+ 	MLX5_COMMAND_STR_CASE(DESTROY_MKEY);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index b994b80d5714..922811fb66e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -132,11 +132,11 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
+ 	delayed_event_start(priv);
+ 
+ 	dev_ctx->context = intf->add(dev);
+-	set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+-	if (intf->attach)
+-		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+-
+ 	if (dev_ctx->context) {
++		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
++		if (intf->attach)
++			set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
++
+ 		spin_lock_irq(&priv->ctx_lock);
+ 		list_add_tail(&dev_ctx->list, &priv->ctx_list);
+ 
+@@ -211,12 +211,17 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv
+ 	if (intf->attach) {
+ 		if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))
+ 			goto out;
+-		intf->attach(dev, dev_ctx->context);
++		if (intf->attach(dev, dev_ctx->context))
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+ 	} else {
+ 		if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
+ 			goto out;
+ 		dev_ctx->context = intf->add(dev);
++		if (!dev_ctx->context)
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 91f1209886ff..4c53957c918c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -658,6 +658,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ 	if (err)
+ 		goto miss_rule_err;
+ 
++	kvfree(flow_group_in);
+ 	return 0;
+ 
+ miss_rule_err:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6ddb2565884d..0031c510ab68 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1649,6 +1649,33 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
+ 	return version;
+ }
+ 
++static struct fs_fte *
++lookup_fte_locked(struct mlx5_flow_group *g,
++		  u32 *match_value,
++		  bool take_write)
++{
++	struct fs_fte *fte_tmp;
++
++	if (take_write)
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++	else
++		nested_down_read_ref_node(&g->node, FS_LOCK_PARENT);
++	fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value,
++					 rhash_fte);
++	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
++		fte_tmp = NULL;
++		goto out;
++	}
++
++	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
++out:
++	if (take_write)
++		up_write_ref_node(&g->node);
++	else
++		up_read_ref_node(&g->node);
++	return fte_tmp;
++}
++
+ static struct mlx5_flow_handle *
+ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 		       struct list_head *match_head,
+@@ -1671,10 +1698,6 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 	if (IS_ERR(fte))
+ 		return  ERR_PTR(-ENOMEM);
+ 
+-	list_for_each_entry(iter, match_head, list) {
+-		nested_down_read_ref_node(&iter->g->node, FS_LOCK_PARENT);
+-	}
+-
+ search_again_locked:
+ 	version = matched_fgs_get_version(match_head);
+ 	/* Try to find a fg that already contains a matching fte */
+@@ -1682,20 +1705,9 @@ search_again_locked:
+ 		struct fs_fte *fte_tmp;
+ 
+ 		g = iter->g;
+-		fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, spec->match_value,
+-						 rhash_fte);
+-		if (!fte_tmp || !tree_get_node(&fte_tmp->node))
++		fte_tmp = lookup_fte_locked(g, spec->match_value, take_write);
++		if (!fte_tmp)
+ 			continue;
+-
+-		nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+-		if (!take_write) {
+-			list_for_each_entry(iter, match_head, list)
+-				up_read_ref_node(&iter->g->node);
+-		} else {
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+-		}
+-
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte_tmp);
+ 		up_write_ref_node(&fte_tmp->node);
+@@ -1704,19 +1716,6 @@ search_again_locked:
+ 		return rule;
+ 	}
+ 
+-	/* No group with matching fte found. Try to add a new fte to any
+-	 * matching fg.
+-	 */
+-
+-	if (!take_write) {
+-		list_for_each_entry(iter, match_head, list)
+-			up_read_ref_node(&iter->g->node);
+-		list_for_each_entry(iter, match_head, list)
+-			nested_down_write_ref_node(&iter->g->node,
+-						   FS_LOCK_PARENT);
+-		take_write = true;
+-	}
+-
+ 	/* Check the ft version, for case that new flow group
+ 	 * was added while the fgs weren't locked
+ 	 */
+@@ -1728,27 +1727,30 @@ search_again_locked:
+ 	/* Check the fgs version, for case the new FTE with the
+ 	 * same values was added while the fgs weren't locked
+ 	 */
+-	if (version != matched_fgs_get_version(match_head))
++	if (version != matched_fgs_get_version(match_head)) {
++		take_write = true;
+ 		goto search_again_locked;
++	}
+ 
+ 	list_for_each_entry(iter, match_head, list) {
+ 		g = iter->g;
+ 
+ 		if (!g->node.active)
+ 			continue;
++
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++
+ 		err = insert_fte(g, fte);
+ 		if (err) {
++			up_write_ref_node(&g->node);
+ 			if (err == -ENOSPC)
+ 				continue;
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+ 			kmem_cache_free(steering->ftes_cache, fte);
+ 			return ERR_PTR(err);
+ 		}
+ 
+ 		nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
+-		list_for_each_entry(iter, match_head, list)
+-			up_write_ref_node(&iter->g->node);
++		up_write_ref_node(&g->node);
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte);
+ 		up_write_ref_node(&fte->node);
+@@ -1757,8 +1759,6 @@ search_again_locked:
+ 	}
+ 	rule = ERR_PTR(-ENOENT);
+ out:
+-	list_for_each_entry(iter, match_head, list)
+-		up_write_ref_node(&iter->g->node);
+ 	kmem_cache_free(steering->ftes_cache, fte);
+ 	return rule;
+ }
+@@ -1797,6 +1797,8 @@ search_again_locked:
+ 	if (err) {
+ 		if (take_write)
+ 			up_write_ref_node(&ft->node);
++		else
++			up_read_ref_node(&ft->node);
+ 		return ERR_PTR(err);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index d39b0b7011b2..9f39aeca863f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -331,9 +331,17 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
+ 	add_timer(&health->timer);
+ }
+ 
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev)
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
+ {
+ 	struct mlx5_core_health *health = &dev->priv.health;
++	unsigned long flags;
++
++	if (disable_health) {
++		spin_lock_irqsave(&health->wq_lock, flags);
++		set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
++		set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
++		spin_unlock_irqrestore(&health->wq_lock, flags);
++	}
+ 
+ 	del_timer_sync(&health->timer);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 615005e63819..76e6ca87db11 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -874,8 +874,10 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	priv->numa_node = dev_to_node(&dev->pdev->dev);
+ 
+ 	priv->dbg_root = debugfs_create_dir(dev_name(&pdev->dev), mlx5_debugfs_root);
+-	if (!priv->dbg_root)
++	if (!priv->dbg_root) {
++		dev_err(&pdev->dev, "Cannot create debugfs dir, aborting\n");
+ 		return -ENOMEM;
++	}
+ 
+ 	err = mlx5_pci_enable_device(dev);
+ 	if (err) {
+@@ -924,7 +926,7 @@ static void mlx5_pci_close(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	pci_clear_master(dev->pdev);
+ 	release_bar(dev->pdev);
+ 	mlx5_pci_disable_device(dev);
+-	debugfs_remove(priv->dbg_root);
++	debugfs_remove_recursive(priv->dbg_root);
+ }
+ 
+ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+@@ -1266,7 +1268,7 @@ err_cleanup_once:
+ 		mlx5_cleanup_once(dev);
+ 
+ err_stop_poll:
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, boot);
+ 	if (mlx5_cmd_teardown_hca(dev)) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+ 		goto out_err;
+@@ -1325,7 +1327,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
+ 	mlx5_free_irq_vectors(dev);
+ 	if (cleanup)
+ 		mlx5_cleanup_once(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, cleanup);
+ 	err = mlx5_cmd_teardown_hca(dev);
+ 	if (err) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+@@ -1587,7 +1589,7 @@ static int mlx5_try_fast_unload(struct mlx5_core_dev *dev)
+ 	 * with the HCA, so the health polll is no longer needed.
+ 	 */
+ 	mlx5_drain_health_wq(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, false);
+ 
+ 	ret = mlx5_cmd_force_teardown_hca(dev);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index c8c315eb5128..d838af9539b1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,9 +39,9 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+ {
+-	return (u32)wq->fbc.frag_sz_m1 + 1;
++	return wq->fbc.frag_sz_m1 + 1;
+ }
+ 
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 0b47126815b6..16476cc1a602 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,7 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+index 152283d7e59c..4a540c5e27fe 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+@@ -236,16 +236,20 @@ static int nfp_pcie_sriov_read_nfd_limit(struct nfp_pf *pf)
+ 	int err;
+ 
+ 	pf->limit_vfs = nfp_rtsym_read_le(pf->rtbl, "nfd_vf_cfg_max_vfs", &err);
+-	if (!err)
+-		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err) {
++		/* For backwards compatibility if symbol not found allow all */
++		pf->limit_vfs = ~0;
++		if (err == -ENOENT)
++			return 0;
+ 
+-	pf->limit_vfs = ~0;
+-	/* Allow any setting for backwards compatibility if symbol not found */
+-	if (err == -ENOENT)
+-		return 0;
++		nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
++		return err;
++	}
+ 
+-	nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
+-	return err;
++	err = pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err)
++		nfp_warn(pf->cpp, "Failed to set VF count in sysfs: %d\n", err);
++	return 0;
+ }
+ 
+ static int nfp_pcie_sriov_enable(struct pci_dev *pdev, int num_vfs)
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index c2a9e64bc57b..bfccc1955907 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1093,7 +1093,7 @@ static bool nfp_net_xdp_complete(struct nfp_net_tx_ring *tx_ring)
+  * @dp:		NFP Net data path struct
+  * @tx_ring:	TX ring structure
+  *
+- * Assumes that the device is stopped
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void
+ nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring)
+@@ -1295,13 +1295,18 @@ static void nfp_net_rx_give_one(const struct nfp_net_dp *dp,
+  * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable
+  * @rx_ring:	RX ring structure
+  *
+- * Warning: Do *not* call if ring buffers were never put on the FW freelist
+- *	    (i.e. device was not enabled)!
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring)
+ {
+ 	unsigned int wr_idx, last_idx;
+ 
++	/* wr_p == rd_p means ring was never fed FL bufs.  RX rings are always
++	 * kept at cnt - 1 FL bufs.
++	 */
++	if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0)
++		return;
++
+ 	/* Move the empty entry to the end of the list */
+ 	wr_idx = D_IDX(rx_ring, rx_ring->wr_p);
+ 	last_idx = rx_ring->cnt - 1;
+@@ -2524,6 +2529,8 @@ static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx)
+ /**
+  * nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP
+  * @nn:      NFP Net device to reconfigure
++ *
++ * Warning: must be fully idempotent.
+  */
+ static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
+ {
+diff --git a/drivers/net/ethernet/qualcomm/qca_7k.c b/drivers/net/ethernet/qualcomm/qca_7k.c
+index ffe7a16bdfc8..6c8543fb90c0 100644
+--- a/drivers/net/ethernet/qualcomm/qca_7k.c
++++ b/drivers/net/ethernet/qualcomm/qca_7k.c
+@@ -45,34 +45,33 @@ qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
+ {
+ 	__be16 rx_data;
+ 	__be16 tx_data;
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
++	*result = 0;
++
++	transfer[0].tx_buf = &tx_data;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = &rx_data;
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = &rx_data;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -86,35 +85,32 @@ int
+ qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
+ {
+ 	__be16 tx_data[2];
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
+ 	tx_data[1] = cpu_to_be16(value);
+ 
++	transfer[0].tx_buf = &tx_data[0];
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = &tx_data[1];
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = &tx_data[1];
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 206f0266463e..66b775d462fd 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -99,22 +99,24 @@ static u32
+ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ {
+ 	__be16 cmd;
+-	struct spi_message *msg = &qca->spi_msg2;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_message msg;
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = src;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -125,17 +127,20 @@ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
++	transfer.tx_buf = src;
++	transfer.len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != len)) {
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -146,23 +151,25 @@ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg2;
++	struct spi_message msg;
+ 	__be16 cmd;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = dst;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -173,17 +180,20 @@ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ static u32
+ qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	transfer.rx_buf = dst;
++	transfer.len = len;
+ 
+-	if (ret || (msg->actual_length != len)) {
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
++
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -195,19 +205,23 @@ static int
+ qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
+ {
+ 	__be16 tx_data;
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(cmd);
+-	transfer->len = sizeof(tx_data);
+-	transfer->tx_buf = &tx_data;
+-	transfer->rx_buf = NULL;
++	transfer.len = sizeof(cmd);
++	transfer.tx_buf = &tx_data;
++	spi_message_add_tail(&transfer, &msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -835,16 +849,6 @@ qcaspi_netdev_setup(struct net_device *dev)
+ 	qca = netdev_priv(dev);
+ 	memset(qca, 0, sizeof(struct qcaspi));
+ 
+-	memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
+-	memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
+-
+-	spi_message_init(&qca->spi_msg1);
+-	spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
+-
+-	spi_message_init(&qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
+-
+ 	memset(&qca->txr, 0, sizeof(qca->txr));
+ 	qca->txr.count = TX_RING_MAX_LEN;
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index fc4beb1b32d1..fc0e98726b36 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -83,11 +83,6 @@ struct qcaspi {
+ 	struct tx_ring txr;
+ 	struct qcaspi_stats stats;
+ 
+-	struct spi_message spi_msg1;
+-	struct spi_message spi_msg2;
+-	struct spi_transfer spi_xfer1;
+-	struct spi_transfer spi_xfer2[2];
+-
+ 	u8 *rx_buffer;
+ 	u32 buffer_size;
+ 	u8 sync;
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index 9b09c9d0d0fb..5f0366a125e2 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -192,7 +192,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 	priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param),
+ 				ALIGNMENT_OF_UCC_HDLC_PRAM);
+ 
+-	if (priv->ucc_pram_offset < 0) {
++	if (IS_ERR_VALUE(priv->ucc_pram_offset)) {
+ 		dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_bd;
+@@ -230,14 +230,14 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 
+ 	/* Alloc riptr, tiptr */
+ 	riptr = qe_muram_alloc(32, 32);
+-	if (riptr < 0) {
++	if (IS_ERR_VALUE(riptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_skbuff;
+ 	}
+ 
+ 	tiptr = qe_muram_alloc(32, 32);
+-	if (tiptr < 0) {
++	if (IS_ERR_VALUE(tiptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_riptr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 45ea32796cda..92b38a21cd10 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -660,7 +660,7 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
+ 	}
+ }
+ 
+-static inline u8 iwl_pcie_get_cmd_index(struct iwl_txq *q, u32 index)
++static inline u8 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
+ {
+ 	return index & (q->n_window - 1);
+ }
+@@ -730,9 +730,13 @@ static inline void iwl_stop_queue(struct iwl_trans *trans,
+ 
+ static inline bool iwl_queue_used(const struct iwl_txq *q, int i)
+ {
+-	return q->write_ptr >= q->read_ptr ?
+-		(i >= q->read_ptr && i < q->write_ptr) :
+-		!(i < q->read_ptr && i >= q->write_ptr);
++	int index = iwl_pcie_get_cmd_index(q, i);
++	int r = iwl_pcie_get_cmd_index(q, q->read_ptr);
++	int w = iwl_pcie_get_cmd_index(q, q->write_ptr);
++
++	return w >= r ?
++		(index >= r && index < w) :
++		!(index < r && index >= w);
+ }
+ 
+ static inline bool iwl_is_rfkill_set(struct iwl_trans *trans)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 473fe7ccb07c..11bd7ce2be8e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1225,9 +1225,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 	struct iwl_txq *txq = trans_pcie->txq[txq_id];
+ 	unsigned long flags;
+ 	int nfreed = 0;
++	u16 r;
+ 
+ 	lockdep_assert_held(&txq->lock);
+ 
++	idx = iwl_pcie_get_cmd_index(txq, idx);
++	r = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
++
+ 	if ((idx >= TFD_QUEUE_SIZE_MAX) || (!iwl_queue_used(txq, idx))) {
+ 		IWL_ERR(trans,
+ 			"%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n",
+@@ -1236,12 +1240,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 		return;
+ 	}
+ 
+-	for (idx = iwl_queue_inc_wrap(idx); txq->read_ptr != idx;
+-	     txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr)) {
++	for (idx = iwl_queue_inc_wrap(idx); r != idx;
++	     r = iwl_queue_inc_wrap(r)) {
++		txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr);
+ 
+ 		if (nfreed++ > 0) {
+ 			IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
+-				idx, txq->write_ptr, txq->read_ptr);
++				idx, txq->write_ptr, r);
+ 			iwl_force_nmi(trans);
+ 		}
+ 	}
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 9dd2ca62d84a..c2b6aa1d485f 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -87,8 +87,7 @@ struct netfront_cb {
+ /* IRQ name is queue name with "-tx" or "-rx" appended */
+ #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+ 
+-static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
+-static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
++static DECLARE_WAIT_QUEUE_HEAD(module_wq);
+ 
+ struct netfront_stats {
+ 	u64			packets;
+@@ -1331,11 +1330,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+ 	netif_carrier_off(netdev);
+ 
+ 	xenbus_switch_state(dev, XenbusStateInitialising);
+-	wait_event(module_load_q,
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateClosed &&
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateUnknown);
++	wait_event(module_wq,
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateClosed &&
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateUnknown);
+ 	return netdev;
+ 
+  exit:
+@@ -1603,14 +1602,16 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ {
+ 	unsigned short i;
+ 	int err = 0;
++	char *devid;
+ 
+ 	spin_lock_init(&queue->tx_lock);
+ 	spin_lock_init(&queue->rx_lock);
+ 
+ 	timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
+ 
+-	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+-		 queue->info->netdev->name, queue->id);
++	devid = strrchr(queue->info->xbdev->nodename, '/') + 1;
++	snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
++		 devid, queue->id);
+ 
+ 	/* Initialise tx_skbs as a free chain containing every entry. */
+ 	queue->tx_skb_freelist = 0;
+@@ -2007,15 +2008,14 @@ static void netback_changed(struct xenbus_device *dev,
+ 
+ 	dev_dbg(&dev->dev, "%s\n", xenbus_strstate(backend_state));
+ 
++	wake_up_all(&module_wq);
++
+ 	switch (backend_state) {
+ 	case XenbusStateInitialising:
+ 	case XenbusStateInitialised:
+ 	case XenbusStateReconfiguring:
+ 	case XenbusStateReconfigured:
+-		break;
+-
+ 	case XenbusStateUnknown:
+-		wake_up_all(&module_unload_q);
+ 		break;
+ 
+ 	case XenbusStateInitWait:
+@@ -2031,12 +2031,10 @@ static void netback_changed(struct xenbus_device *dev,
+ 		break;
+ 
+ 	case XenbusStateClosed:
+-		wake_up_all(&module_unload_q);
+ 		if (dev->state == XenbusStateClosed)
+ 			break;
+ 		/* Missed the backend's CLOSING state -- fallthrough */
+ 	case XenbusStateClosing:
+-		wake_up_all(&module_unload_q);
+ 		xenbus_frontend_closed(dev);
+ 		break;
+ 	}
+@@ -2144,14 +2142,14 @@ static int xennet_remove(struct xenbus_device *dev)
+ 
+ 	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+ 		xenbus_switch_state(dev, XenbusStateClosing);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosing ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateUnknown);
+ 
+ 		xenbus_switch_state(dev, XenbusStateClosed);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosed ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 66ec5985c9f3..69fb62feb833 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1741,6 +1741,8 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ 		nvme_rdma_stop_io_queues(ctrl);
+ 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ 					nvme_cancel_request, &ctrl->ctrl);
++		if (shutdown)
++			nvme_start_queues(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, shutdown);
+ 	}
+ 
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 8c42b3a8c420..64c7596a46a1 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -209,22 +209,24 @@ static void nvmet_file_execute_discard(struct nvmet_req *req)
+ {
+ 	int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
+ 	struct nvme_dsm_range range;
+-	loff_t offset;
+-	loff_t len;
+-	int i, ret;
++	loff_t offset, len;
++	u16 ret;
++	int i;
+ 
+ 	for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
+-		if (nvmet_copy_from_sgl(req, i * sizeof(range), &range,
+-					sizeof(range)))
++		ret = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
++					sizeof(range));
++		if (ret)
+ 			break;
+ 		offset = le64_to_cpu(range.slba) << req->ns->blksize_shift;
+ 		len = le32_to_cpu(range.nlb) << req->ns->blksize_shift;
+-		ret = vfs_fallocate(req->ns->file, mode, offset, len);
+-		if (ret)
++		if (vfs_fallocate(req->ns->file, mode, offset, len)) {
++			ret = NVME_SC_INTERNAL | NVME_SC_DNR;
+ 			break;
++		}
+ 	}
+ 
+-	nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
++	nvmet_req_complete(req, ret);
+ }
+ 
+ static void nvmet_file_dsm_work(struct work_struct *w)
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 466e3c8582f0..53a51c6911eb 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -118,6 +118,9 @@ void of_populate_phandle_cache(void)
+ 		if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL)
+ 			phandles++;
+ 
++	if (!phandles)
++		goto out;
++
+ 	cache_entries = roundup_pow_of_two(phandles);
+ 	phandle_cache_mask = cache_entries - 1;
+ 
+@@ -719,6 +722,31 @@ struct device_node *of_get_next_available_child(const struct device_node *node,
+ }
+ EXPORT_SYMBOL(of_get_next_available_child);
+ 
++/**
++ * of_get_compatible_child - Find compatible child node
++ * @parent:	parent node
++ * @compatible:	compatible string
++ *
++ * Lookup child node whose compatible property contains the given compatible
++ * string.
++ *
++ * Returns a node pointer with refcount incremented, use of_node_put() on it
++ * when done; or NULL if not found.
++ */
++struct device_node *of_get_compatible_child(const struct device_node *parent,
++				const char *compatible)
++{
++	struct device_node *child;
++
++	for_each_child_of_node(parent, child) {
++		if (of_device_is_compatible(child, compatible))
++			break;
++	}
++
++	return child;
++}
++EXPORT_SYMBOL(of_get_compatible_child);
++
+ /**
+  *	of_get_child_by_name - Find the child node by name for a given parent
+  *	@node:	parent node
+diff --git a/drivers/parport/parport_sunbpp.c b/drivers/parport/parport_sunbpp.c
+index 01cf1c1a841a..8de329546b82 100644
+--- a/drivers/parport/parport_sunbpp.c
++++ b/drivers/parport/parport_sunbpp.c
+@@ -286,12 +286,16 @@ static int bpp_probe(struct platform_device *op)
+ 
+ 	ops = kmemdup(&parport_sunbpp_ops, sizeof(struct parport_operations),
+ 		      GFP_KERNEL);
+-        if (!ops)
++	if (!ops) {
++		err = -ENOMEM;
+ 		goto out_unmap;
++	}
+ 
+ 	dprintk(("register_port\n"));
+-	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops)))
++	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) {
++		err = -ENOMEM;
+ 		goto out_free_ops;
++	}
+ 
+ 	p->size = size;
+ 	p->dev = &op->dev;
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index a2e88386af28..0fbf612b8ef2 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -303,6 +303,9 @@ int pcie_aer_get_firmware_first(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev))
+ 		return 0;
+ 
++	if (pcie_ports_native)
++		return 0;
++
+ 	if (!dev->__aer_firmware_first_valid)
+ 		aer_set_firmware_first(dev);
+ 	return dev->__aer_firmware_first;
+@@ -323,6 +326,9 @@ bool aer_acpi_firmware_first(void)
+ 		.firmware_first	= 0,
+ 	};
+ 
++	if (pcie_ports_native)
++		return false;
++
+ 	if (!parsed) {
+ 		apei_hest_parse(aer_hest_parse, &info);
+ 		aer_firmware_first = info.firmware_first;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7622.c b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+index 4c4740ffeb9c..3ea685634b6c 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7622.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+@@ -1537,7 +1537,7 @@ static int mtk_build_groups(struct mtk_pinctrl *hw)
+ 		err = pinctrl_generic_add_group(hw->pctrl, group->name,
+ 						group->pins, group->num_pins,
+ 						group->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register group %s\n",
+ 				group->name);
+ 			return err;
+@@ -1558,7 +1558,7 @@ static int mtk_build_functions(struct mtk_pinctrl *hw)
+ 						  func->group_names,
+ 						  func->num_group_names,
+ 						  func->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register function %s\n",
+ 				func->name);
+ 			return err;
+diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
+index 717c0f4449a0..f76edf664539 100644
+--- a/drivers/pinctrl/pinctrl-rza1.c
++++ b/drivers/pinctrl/pinctrl-rza1.c
+@@ -1006,6 +1006,7 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	const char *grpname;
+ 	const char **fngrps;
+ 	int ret, npins;
++	int gsel, fsel;
+ 
+ 	npins = rza1_dt_node_pin_count(np);
+ 	if (npins < 0) {
+@@ -1055,18 +1056,19 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	fngrps[0] = grpname;
+ 
+ 	mutex_lock(&rza1_pctl->mutex);
+-	ret = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
+-					NULL);
+-	if (ret) {
++	gsel = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
++					 NULL);
++	if (gsel < 0) {
+ 		mutex_unlock(&rza1_pctl->mutex);
+-		return ret;
++		return gsel;
+ 	}
+ 
+-	ret = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
+-					  mux_confs);
+-	if (ret)
++	fsel = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
++					   mux_confs);
++	if (fsel < 0) {
++		ret = fsel;
+ 		goto remove_group;
+-	mutex_unlock(&rza1_pctl->mutex);
++	}
+ 
+ 	dev_info(rza1_pctl->dev, "Parsed function and group %s with %d pins\n",
+ 				 grpname, npins);
+@@ -1083,15 +1085,15 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	(*map)->data.mux.group = np->name;
+ 	(*map)->data.mux.function = np->name;
+ 	*num_maps = 1;
++	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	return 0;
+ 
+ remove_function:
+-	mutex_lock(&rza1_pctl->mutex);
+-	pinmux_generic_remove_last_function(pctldev);
++	pinmux_generic_remove_function(pctldev, fsel);
+ 
+ remove_group:
+-	pinctrl_generic_remove_last_group(pctldev);
++	pinctrl_generic_remove_group(pctldev, gsel);
+ 	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	dev_info(rza1_pctl->dev, "Unable to parse function and group %s\n",
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 0e22f52b2a19..2155a30c282b 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -250,22 +250,30 @@ static int msm_config_group_get(struct pinctrl_dev *pctldev,
+ 	/* Convert register value to pinconf value */
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = arg == MSM_NO_PULL;
++		if (arg != MSM_NO_PULL)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = arg == MSM_PULL_DOWN;
++		if (arg != MSM_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_BUS_HOLD:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			return -ENOTSUPP;
+ 
+-		arg = arg == MSM_KEEPER;
++		if (arg != MSM_KEEPER)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			arg = arg == MSM_PULL_UP_NO_KEEPER;
+ 		else
+ 			arg = arg == MSM_PULL_UP;
++		if (!arg)
++			return -EINVAL;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = msm_regval_to_drive(arg);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index 3e66e0d10010..cf82db78e69e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -390,31 +390,47 @@ static int pmic_gpio_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_CMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_CMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_SOURCE:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pad->pullup == PMIC_GPIO_PULL_DOWN;
++		if (pad->pullup != PMIC_GPIO_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup = PMIC_GPIO_PULL_DISABLE;
++		if (pad->pullup != PMIC_GPIO_PULL_DISABLE)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pad->pullup == PMIC_GPIO_PULL_UP_30;
++		if (pad->pullup != PMIC_GPIO_PULL_UP_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index eef76bfa5d73..e50941c3ba54 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -34,6 +34,7 @@
+ #define TOSHIBA_ACPI_VERSION	"0.24"
+ #define PROC_INTERFACE_VERSION	1
+ 
++#include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+@@ -1682,7 +1683,7 @@ static const struct file_operations keys_proc_fops = {
+ 	.write		= keys_proc_write,
+ };
+ 
+-static int version_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused version_proc_show(struct seq_file *m, void *v)
+ {
+ 	seq_printf(m, "driver:                  %s\n", TOSHIBA_ACPI_VERSION);
+ 	seq_printf(m, "proc_interface:          %d\n", PROC_INTERFACE_VERSION);
+diff --git a/drivers/regulator/qcom_spmi-regulator.c b/drivers/regulator/qcom_spmi-regulator.c
+index 9817f1a75342..ba3d5e63ada6 100644
+--- a/drivers/regulator/qcom_spmi-regulator.c
++++ b/drivers/regulator/qcom_spmi-regulator.c
+@@ -1752,7 +1752,8 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 	const char *name;
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *node = pdev->dev.of_node;
+-	struct device_node *syscon;
++	struct device_node *syscon, *reg_node;
++	struct property *reg_prop;
+ 	int ret, lenp;
+ 	struct list_head *vreg_list;
+ 
+@@ -1774,16 +1775,19 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		syscon = of_parse_phandle(node, "qcom,saw-reg", 0);
+ 		saw_regmap = syscon_node_to_regmap(syscon);
+ 		of_node_put(syscon);
+-		if (IS_ERR(regmap))
++		if (IS_ERR(saw_regmap))
+ 			dev_err(dev, "ERROR reading SAW regmap\n");
+ 	}
+ 
+ 	for (reg = match->data; reg->name; reg++) {
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-slave", &lenp)) {
+-			continue;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-slave",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop)
++				continue;
+ 		}
+ 
+ 		vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL);
+@@ -1816,13 +1820,17 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		if (ret)
+ 			continue;
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-leader", &lenp)) {
+-			spmi_saw_ops = *(vreg->desc.ops);
+-			spmi_saw_ops.set_voltage_sel = \
+-				spmi_regulator_saw_set_voltage;
+-			vreg->desc.ops = &spmi_saw_ops;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-leader",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop) {
++				spmi_saw_ops = *(vreg->desc.ops);
++				spmi_saw_ops.set_voltage_sel =
++					spmi_regulator_saw_set_voltage;
++				vreg->desc.ops = &spmi_saw_ops;
++			}
+ 		}
+ 
+ 		config.dev = dev;
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index 2bf8e7c49f2a..e5ec59102b01 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -1370,7 +1370,6 @@ static const struct rproc_hexagon_res sdm845_mss = {
+ 	.hexagon_mba_image = "mba.mbn",
+ 	.proxy_clk_names = (char*[]){
+ 			"xo",
+-			"axis2",
+ 			"prng",
+ 			NULL
+ 	},
+diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c
+index 4db177bc89bc..fdeac1946429 100644
+--- a/drivers/reset/reset-imx7.c
++++ b/drivers/reset/reset-imx7.c
+@@ -80,7 +80,7 @@ static int imx7_reset_set(struct reset_controller_dev *rcdev,
+ {
+ 	struct imx7_src *imx7src = to_imx7_src(rcdev);
+ 	const struct imx7_src_signal *signal = &imx7_src_signals[id];
+-	unsigned int value = 0;
++	unsigned int value = assert ? signal->bit : 0;
+ 
+ 	switch (id) {
+ 	case IMX7_RESET_PCIEPHY:
+diff --git a/drivers/rtc/rtc-bq4802.c b/drivers/rtc/rtc-bq4802.c
+index d768f6747961..113493b52149 100644
+--- a/drivers/rtc/rtc-bq4802.c
++++ b/drivers/rtc/rtc-bq4802.c
+@@ -162,6 +162,10 @@ static int bq4802_probe(struct platform_device *pdev)
+ 	} else if (p->r->flags & IORESOURCE_MEM) {
+ 		p->regs = devm_ioremap(&pdev->dev, p->r->start,
+ 					resource_size(p->r));
++		if (!p->regs){
++			err = -ENOMEM;
++			goto out;
++		}
+ 		p->read = bq4802_read_mem;
+ 		p->write = bq4802_write_mem;
+ 	} else {
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index d01ac29fd986..ffdb78421a25 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -3530,13 +3530,14 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
+ 	qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
+ 	if (atomic_read(&queue->set_pci_flags_count))
+ 		qdio_flags |= QDIO_FLAG_PCI_OUT;
++	atomic_add(count, &queue->used_buffers);
++
+ 	rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags,
+ 		     queue->queue_no, index, count);
+ 	if (queue->card->options.performance_stats)
+ 		queue->card->perf_stats.outbound_do_qdio_time +=
+ 			qeth_get_micros() -
+ 			queue->card->perf_stats.outbound_do_qdio_start_time;
+-	atomic_add(count, &queue->used_buffers);
+ 	if (rc) {
+ 		queue->card->stats.tx_errors += count;
+ 		/* ignore temporary SIGA errors without busy condition */
+diff --git a/drivers/s390/net/qeth_core_sys.c b/drivers/s390/net/qeth_core_sys.c
+index c3f18afb368b..cfb659747693 100644
+--- a/drivers/s390/net/qeth_core_sys.c
++++ b/drivers/s390/net/qeth_core_sys.c
+@@ -426,6 +426,7 @@ static ssize_t qeth_dev_layer2_store(struct device *dev,
+ 	if (card->discipline) {
+ 		card->discipline->remove(card->gdev);
+ 		qeth_core_free_discipline(card);
++		card->options.layer2 = -1;
+ 	}
+ 
+ 	rc = qeth_core_load_discipline(card, newdis);
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index 3f3569ec5ce3..ddc7921ae5da 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -294,9 +294,11 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 	 * discovery, reverify or log them in.	Otherwise, log them out.
+ 	 * Skip ports which were never discovered.  These are the dNS port
+ 	 * and ports which were created by PLOGI.
++	 *
++	 * We don't need to use the _rcu variant here as the rport list
++	 * is protected by the disc mutex which is already held on entry.
+ 	 */
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
++	list_for_each_entry(rdata, &disc->rports, peers) {
+ 		if (!kref_get_unless_zero(&rdata->kref))
+ 			continue;
+ 		if (rdata->disc_id) {
+@@ -307,7 +309,6 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 		}
+ 		kref_put(&rdata->kref, fc_rport_destroy);
+ 	}
+-	rcu_read_unlock();
+ 	mutex_unlock(&disc->disc_mutex);
+ 	disc->disc_callback(lport, event);
+ 	mutex_lock(&disc->disc_mutex);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index d723fd1d7b26..cab1fb087e6a 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2976,7 +2976,7 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	struct lpfc_sli_ring  *pring;
+ 	u32 i, wait_cnt = 0;
+ 
+-	if (phba->sli_rev < LPFC_SLI_REV4)
++	if (phba->sli_rev < LPFC_SLI_REV4 || !phba->sli4_hba.nvme_wq)
+ 		return;
+ 
+ 	/* Cycle through all NVME rings and make sure all outstanding
+@@ -2985,6 +2985,9 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	for (i = 0; i < phba->cfg_nvme_io_channel; i++) {
+ 		pring = phba->sli4_hba.nvme_wq[i]->pring;
+ 
++		if (!pring)
++			continue;
++
+ 		/* Retrieve everything on the txcmplq */
+ 		while (!list_empty(&pring->txcmplq)) {
+ 			msleep(LPFC_XRI_EXCH_BUSY_WAIT_T1);
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 7271c9d885dd..5e5ec3363b44 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -402,6 +402,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
+ 
+ 		/* Process FCP command */
+ 		if (rc == 0) {
++			ctxp->rqb_buffer = NULL;
+ 			atomic_inc(&tgtp->rcv_fcp_cmd_out);
+ 			nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+ 			return;
+@@ -1116,8 +1117,17 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
+ 			 ctxp->oxid, ctxp->size, smp_processor_id());
+ 
++	if (!nvmebuf) {
++		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++				"6425 Defer rcv: no buffer xri x%x: "
++				"flg %x ste %x\n",
++				ctxp->oxid, ctxp->flag, ctxp->state);
++		return;
++	}
++
+ 	tgtp = phba->targetport->private;
+-	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
++	if (tgtp)
++		atomic_inc(&tgtp->rcv_fcp_cmd_defer);
+ 
+ 	/* Free the nvmebuf since a new buffer already replaced it */
+ 	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 70b2ee80d6bd..bf4bd71ab53f 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -364,11 +364,6 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ 	end = phdr_to_last_uncached_entry(phdr);
+ 	cached = phdr_to_last_cached_entry(phdr);
+ 
+-	if (smem->global_partition) {
+-		dev_err(smem->dev, "Already found the global partition\n");
+-		return -EINVAL;
+-	}
+-
+ 	while (hdr < end) {
+ 		if (hdr->canary != SMEM_PRIVATE_CANARY)
+ 			goto bad_canary;
+@@ -736,6 +731,11 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ 	bool found = false;
+ 	int i;
+ 
++	if (smem->global_partition) {
++		dev_err(smem->dev, "Already found the global partition\n");
++		return -EINVAL;
++	}
++
+ 	ptable = qcom_smem_get_ptable(smem);
+ 	if (IS_ERR(ptable))
+ 		return PTR_ERR(ptable);
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index f693bfe95ab9..a087464efdd7 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -485,6 +485,8 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 	dws->dma_inited = 0;
+ 	dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR);
+ 
++	spi_controller_set_devdata(master, dws);
++
+ 	ret = request_irq(dws->irq, dw_spi_irq, IRQF_SHARED, dev_name(dev),
+ 			  master);
+ 	if (ret < 0) {
+@@ -518,7 +520,6 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 		}
+ 	}
+ 
+-	spi_controller_set_devdata(master, dws);
+ 	ret = devm_spi_register_controller(dev, master);
+ 	if (ret) {
+ 		dev_err(&master->dev, "problem registering spi master\n");
+diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+index 396371728aa1..537d5bb5e294 100644
+--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
++++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+@@ -767,7 +767,7 @@ static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
+ 	for (i = 0; i < count; i++) {
+ 		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
+ 		dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
+-				 DMA_BIDIRECTIONAL);
++				 DMA_FROM_DEVICE);
+ 		skb_free_frag(vaddr);
+ 	}
+ }
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+index f0cefa1b7b0f..b20d34449ed4 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+@@ -439,16 +439,16 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ 	my_workqueue_init(alsa_stream);
+ 
+ 	ret = bcm2835_audio_open_connection(alsa_stream);
+-	if (ret) {
+-		ret = -1;
+-		goto exit;
+-	}
++	if (ret)
++		goto free_wq;
++
+ 	instance = alsa_stream->instance;
+ 	LOG_DBG(" instance (%p)\n", instance);
+ 
+ 	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
+ 		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n", instance->num_connections);
+-		return -EINTR;
++		ret = -EINTR;
++		goto free_wq;
+ 	}
+ 	vchi_service_use(instance->vchi_handle[0]);
+ 
+@@ -471,7 +471,11 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ unlock:
+ 	vchi_service_release(instance->vchi_handle[0]);
+ 	mutex_unlock(&instance->vchi_mutex);
+-exit:
++
++free_wq:
++	if (ret)
++		destroy_workqueue(alsa_stream->my_wq);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+index ce26741ae9d9..3f61d04c47ab 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+@@ -580,6 +580,7 @@ static int start_streaming(struct vb2_queue *vq, unsigned int count)
+ static void stop_streaming(struct vb2_queue *vq)
+ {
+ 	int ret;
++	unsigned long timeout;
+ 	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
+@@ -605,10 +606,10 @@ static void stop_streaming(struct vb2_queue *vq)
+ 				      sizeof(dev->capture.frame_count));
+ 
+ 	/* wait for last frame to complete */
+-	ret = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
+-	if (ret <= 0)
++	timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
++	if (timeout == 0)
+ 		v4l2_err(&dev->v4l2_dev,
+-			 "error %d waiting for frame completion\n", ret);
++			 "timed out waiting for frame completion\n");
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
+ 		 "disabling connection\n");
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+index f5b5ead6347c..51e5b04ff0f5 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+@@ -630,6 +630,7 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ {
+ 	struct mmal_msg_context *msg_context;
+ 	int ret;
++	unsigned long timeout;
+ 
+ 	/* payload size must not cause message to exceed max size */
+ 	if (payload_len >
+@@ -668,11 +669,11 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ 		return ret;
+ 	}
+ 
+-	ret = wait_for_completion_timeout(&msg_context->u.sync.cmplt, 3 * HZ);
+-	if (ret <= 0) {
+-		pr_err("error %d waiting for sync completion\n", ret);
+-		if (ret == 0)
+-			ret = -ETIME;
++	timeout = wait_for_completion_timeout(&msg_context->u.sync.cmplt,
++					      3 * HZ);
++	if (timeout == 0) {
++		pr_err("timed out waiting for sync completion\n");
++		ret = -ETIME;
+ 		/* todo: what happens if the message arrives after aborting */
+ 		release_msg_context(msg_context);
+ 		return ret;
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index bfb37f0be22f..863e86b9a424 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -124,7 +124,7 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 				dev_warn(&ofdev->dev, "unsupported reg-io-width (%d)\n",
+ 					 prop);
+ 				ret = -EINVAL;
+-				goto err_dispose;
++				goto err_unprepare;
+ 			}
+ 		}
+ 		port->flags |= UPF_IOREMAP;
+diff --git a/drivers/tty/tty_baudrate.c b/drivers/tty/tty_baudrate.c
+index 6ff8cdfc9d2a..3e827a3d48d5 100644
+--- a/drivers/tty/tty_baudrate.c
++++ b/drivers/tty/tty_baudrate.c
+@@ -157,18 +157,25 @@ void tty_termios_encode_baud_rate(struct ktermios *termios,
+ 	termios->c_ospeed = obaud;
+ 
+ #ifdef BOTHER
++	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
++		ibinput = 1;	/* An input speed was specified */
++
+ 	/* If the user asked for a precise weird speed give a precise weird
+ 	   answer. If they asked for a Bfoo speed they may have problems
+ 	   digesting non-exact replies so fuzz a bit */
+ 
+-	if ((termios->c_cflag & CBAUD) == BOTHER)
++	if ((termios->c_cflag & CBAUD) == BOTHER) {
+ 		oclose = 0;
++		if (!ibinput)
++			iclose = 0;
++	}
+ 	if (((termios->c_cflag >> IBSHIFT) & CBAUD) == BOTHER)
+ 		iclose = 0;
+-	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
+-		ibinput = 1;	/* An input speed was specified */
+ #endif
+ 	termios->c_cflag &= ~CBAUD;
++#ifdef IBSHIFT
++	termios->c_cflag &= ~(CBAUD << IBSHIFT);
++#endif
+ 
+ 	/*
+ 	 *	Our goal is to find a close match to the standard baud rate
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 75c4623ad779..f8ee32d9843a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -779,20 +779,9 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	}
+ 
+ 	if (acm->susp_count) {
+-		if (acm->putbuffer) {
+-			/* now to preserve order */
+-			usb_anchor_urb(acm->putbuffer->urb, &acm->delayed);
+-			acm->putbuffer = NULL;
+-		}
+ 		usb_anchor_urb(wb->urb, &acm->delayed);
+ 		spin_unlock_irqrestore(&acm->write_lock, flags);
+ 		return count;
+-	} else {
+-		if (acm->putbuffer) {
+-			/* at this point there is no good way to handle errors */
+-			acm_start_wb(acm, acm->putbuffer);
+-			acm->putbuffer = NULL;
+-		}
+ 	}
+ 
+ 	stat = acm_start_wb(acm, wb);
+@@ -803,66 +792,6 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	return count;
+ }
+ 
+-static void acm_tty_flush_chars(struct tty_struct *tty)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int err;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&acm->write_lock, flags);
+-
+-	cur = acm->putbuffer;
+-	if (!cur) /* nothing to do */
+-		goto out;
+-
+-	acm->putbuffer = NULL;
+-	err = usb_autopm_get_interface_async(acm->control);
+-	if (err < 0) {
+-		cur->use = 0;
+-		acm->putbuffer = cur;
+-		goto out;
+-	}
+-
+-	if (acm->susp_count)
+-		usb_anchor_urb(cur->urb, &acm->delayed);
+-	else
+-		acm_start_wb(acm, cur);
+-out:
+-	spin_unlock_irqrestore(&acm->write_lock, flags);
+-	return;
+-}
+-
+-static int acm_tty_put_char(struct tty_struct *tty, unsigned char ch)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int wbn;
+-	unsigned long flags;
+-
+-overflow:
+-	cur = acm->putbuffer;
+-	if (!cur) {
+-		spin_lock_irqsave(&acm->write_lock, flags);
+-		wbn = acm_wb_alloc(acm);
+-		if (wbn >= 0) {
+-			cur = &acm->wb[wbn];
+-			acm->putbuffer = cur;
+-		}
+-		spin_unlock_irqrestore(&acm->write_lock, flags);
+-		if (!cur)
+-			return 0;
+-	}
+-
+-	if (cur->len == acm->writesize) {
+-		acm_tty_flush_chars(tty);
+-		goto overflow;
+-	}
+-
+-	cur->buf[cur->len++] = ch;
+-	return 1;
+-}
+-
+ static int acm_tty_write_room(struct tty_struct *tty)
+ {
+ 	struct acm *acm = tty->driver_data;
+@@ -1987,8 +1916,6 @@ static const struct tty_operations acm_ops = {
+ 	.cleanup =		acm_tty_cleanup,
+ 	.hangup =		acm_tty_hangup,
+ 	.write =		acm_tty_write,
+-	.put_char =		acm_tty_put_char,
+-	.flush_chars =		acm_tty_flush_chars,
+ 	.write_room =		acm_tty_write_room,
+ 	.ioctl =		acm_tty_ioctl,
+ 	.throttle =		acm_tty_throttle,
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index eacc116e83da..ca06b20d7af9 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -96,7 +96,6 @@ struct acm {
+ 	unsigned long read_urbs_free;
+ 	struct urb *read_urbs[ACM_NR];
+ 	struct acm_rb read_buffers[ACM_NR];
+-	struct acm_wb *putbuffer;			/* for acm_tty_put_char() */
+ 	int rx_buflimit;
+ 	spinlock_t read_lock;
+ 	u8 *notification_buffer;			/* to reassemble fragmented notifications */
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a0d284ef3f40..632a2bfabc08 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_KERNEL);
++	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index 66fe1b78d952..03432467b05f 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -515,8 +515,6 @@ static int resume_common(struct device *dev, int event)
+ 				event == PM_EVENT_RESTORE);
+ 		if (retval) {
+ 			dev_err(dev, "PCI post-resume error %d!\n", retval);
+-			if (hcd->shared_hcd)
+-				usb_hc_died(hcd->shared_hcd);
+ 			usb_hc_died(hcd);
+ 		}
+ 	}
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 1a15392326fc..525ebd03cfe5 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1340,6 +1340,11 @@ void usb_enable_interface(struct usb_device *dev,
+  * is submitted that needs that bandwidth.  Some other operating systems
+  * allocate bandwidth early, when a configuration is chosen.
+  *
++ * xHCI reserves bandwidth and configures the alternate setting in
++ * usb_hcd_alloc_bandwidth(). If it fails the original interface altsetting
++ * may be disabled. Drivers cannot rely on any particular alternate
++ * setting being in effect after a failure.
++ *
+  * This call is synchronous, and may not be used in an interrupt context.
+  * Also, drivers must not change altsettings while urbs are scheduled for
+  * endpoints in that interface; all such urbs must first be completed
+@@ -1375,6 +1380,12 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate)
+ 			 alternate);
+ 		return -EINVAL;
+ 	}
++	/*
++	 * usb3 hosts configure the interface in usb_hcd_alloc_bandwidth,
++	 * including freeing dropped endpoint ring buffers.
++	 * Make sure the interface endpoints are flushed before that
++	 */
++	usb_disable_interface(dev, iface, false);
+ 
+ 	/* Make sure we have enough bandwidth for this alternate interface.
+ 	 * Remove the current alt setting and add the new alt setting.
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 097057d2eacf..e77dfe5ed5ec 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -178,6 +178,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* CBM - Flash disk */
+ 	{ USB_DEVICE(0x0204, 0x6025), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* WORLDE Controller KS49 or Prodipe MIDI 49C USB controller */
++	{ USB_DEVICE(0x0218, 0x0201), .driver_info =
++			USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ 	/* WORLDE easy key (easykey.25) MIDI controller  */
+ 	{ USB_DEVICE(0x0218, 0x0401), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+@@ -406,6 +410,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x2040, 0x7200), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
++	/* DJI CineSSD */
++	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
+index db610c56f1d6..2aacd1afd9ff 100644
+--- a/drivers/usb/dwc3/gadget.h
++++ b/drivers/usb/dwc3/gadget.h
+@@ -25,7 +25,7 @@ struct dwc3;
+ #define DWC3_DEPCFG_XFER_IN_PROGRESS_EN	BIT(9)
+ #define DWC3_DEPCFG_XFER_NOT_READY_EN	BIT(10)
+ #define DWC3_DEPCFG_FIFO_ERROR_EN	BIT(11)
+-#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(12)
++#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(13)
+ #define DWC3_DEPCFG_BINTERVAL_M1(n)	(((n) & 0xff) << 16)
+ #define DWC3_DEPCFG_STREAM_CAPABLE	BIT(24)
+ #define DWC3_DEPCFG_EP_NUMBER(n)	(((n) & 0x1f) << 25)
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index 318246d8b2e2..b02ab2a8d927 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -1545,11 +1545,14 @@ static int net2280_pullup(struct usb_gadget *_gadget, int is_on)
+ 		writel(tmp | BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+ 	} else {
+ 		writel(tmp & ~BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+-		stop_activity(dev, dev->driver);
++		stop_activity(dev, NULL);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
++	if (!is_on && dev->driver)
++		dev->driver->disconnect(&dev->gadget);
++
+ 	return 0;
+ }
+ 
+@@ -2466,8 +2469,11 @@ static void stop_activity(struct net2280 *dev, struct usb_gadget_driver *driver)
+ 		nuke(&dev->ep[i]);
+ 
+ 	/* report disconnect; the driver is already quiesced */
+-	if (driver)
++	if (driver) {
++		spin_unlock(&dev->lock);
+ 		driver->disconnect(&dev->gadget);
++		spin_lock(&dev->lock);
++	}
+ 
+ 	usb_reinit(dev);
+ }
+@@ -3341,6 +3347,8 @@ next_endpoints:
+ 		BIT(PCI_RETRY_ABORT_INTERRUPT))
+ 
+ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
++__releases(dev->lock)
++__acquires(dev->lock)
+ {
+ 	struct net2280_ep	*ep;
+ 	u32			tmp, num, mask, scratch;
+@@ -3381,12 +3389,14 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 			if (disconnect || reset) {
+ 				stop_activity(dev, dev->driver);
+ 				ep0_start(dev);
++				spin_unlock(&dev->lock);
+ 				if (reset)
+ 					usb_gadget_udc_reset
+ 						(&dev->gadget, dev->driver);
+ 				else
+ 					(dev->driver->disconnect)
+ 						(&dev->gadget);
++				spin_lock(&dev->lock);
+ 				return;
+ 			}
+ 		}
+@@ -3405,6 +3415,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 	tmp = BIT(SUSPEND_REQUEST_CHANGE_INTERRUPT);
+ 	if (stat & tmp) {
+ 		writel(tmp, &dev->regs->irqstat1);
++		spin_unlock(&dev->lock);
+ 		if (stat & BIT(SUSPEND_REQUEST_INTERRUPT)) {
+ 			if (dev->driver->suspend)
+ 				dev->driver->suspend(&dev->gadget);
+@@ -3415,6 +3426,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 				dev->driver->resume(&dev->gadget);
+ 			/* at high speed, note erratum 0133 */
+ 		}
++		spin_lock(&dev->lock);
+ 		stat &= ~tmp;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 7cf98c793e04..5b5f1c8b47c9 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -787,12 +787,15 @@ static void usb3_irq_epc_int_1_speed(struct renesas_usb3 *usb3)
+ 	switch (speed) {
+ 	case USB_STA_SPEED_SS:
+ 		usb3->gadget.speed = USB_SPEED_SUPER;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_SS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_HS:
+ 		usb3->gadget.speed = USB_SPEED_HIGH;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_FS:
+ 		usb3->gadget.speed = USB_SPEED_FULL;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	default:
+ 		usb3->gadget.speed = USB_SPEED_UNKNOWN;
+@@ -2451,7 +2454,7 @@ static int renesas_usb3_init_ep(struct renesas_usb3 *usb3, struct device *dev,
+ 			/* for control pipe */
+ 			usb3->gadget.ep0 = &usb3_ep->ep;
+ 			usb_ep_set_maxpacket_limit(&usb3_ep->ep,
+-						USB3_EP0_HSFS_MAX_PACKET_SIZE);
++						USB3_EP0_SS_MAX_PACKET_SIZE);
+ 			usb3_ep->ep.caps.type_control = true;
+ 			usb3_ep->ep.caps.dir_in = true;
+ 			usb3_ep->ep.caps.dir_out = true;
+diff --git a/drivers/usb/host/u132-hcd.c b/drivers/usb/host/u132-hcd.c
+index 032b8652910a..02f8e08b3ee8 100644
+--- a/drivers/usb/host/u132-hcd.c
++++ b/drivers/usb/host/u132-hcd.c
+@@ -2555,7 +2555,7 @@ static int u132_get_frame(struct usb_hcd *hcd)
+ 	} else {
+ 		int frame = 0;
+ 		dev_err(&u132->platform_dev->dev, "TODO: u132_get_frame\n");
+-		msleep(100);
++		mdelay(100);
+ 		return frame;
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index ef350c33dc4a..b1f27aa38b10 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1613,6 +1613,10 @@ void xhci_endpoint_copy(struct xhci_hcd *xhci,
+ 	in_ep_ctx->ep_info2 = out_ep_ctx->ep_info2;
+ 	in_ep_ctx->deq = out_ep_ctx->deq;
+ 	in_ep_ctx->tx_info = out_ep_ctx->tx_info;
++	if (xhci->quirks & XHCI_MTK_HOST) {
++		in_ep_ctx->reserved[0] = out_ep_ctx->reserved[0];
++		in_ep_ctx->reserved[1] = out_ep_ctx->reserved[1];
++	}
+ }
+ 
+ /* Copy output xhci_slot_ctx to the input xhci_slot_ctx.
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 68e6132aa8b2..c2220a7fc758 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -37,6 +37,21 @@ static unsigned long long quirks;
+ module_param(quirks, ullong, S_IRUGO);
+ MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
+ 
++static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
++{
++	struct xhci_segment *seg = ring->first_seg;
++
++	if (!td || !td->start_seg)
++		return false;
++	do {
++		if (seg == td->start_seg)
++			return true;
++		seg = seg->next;
++	} while (seg && seg != ring->first_seg);
++
++	return false;
++}
++
+ /* TODO: copied from ehci-hcd.c - can this be refactored? */
+ /*
+  * xhci_handshake - spin reading hc until handshake completes or fails
+@@ -1571,6 +1586,21 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ 		goto done;
+ 	}
+ 
++	/*
++	 * check ring is not re-allocated since URB was enqueued. If it is, then
++	 * make sure none of the ring related pointers in this URB private data
++	 * are touched, such as td_list, otherwise we overwrite freed data
++	 */
++	if (!td_on_ring(&urb_priv->td[0], ep_ring)) {
++		xhci_err(xhci, "Canceled URB td not found on endpoint ring");
++		for (i = urb_priv->num_tds_done; i < urb_priv->num_tds; i++) {
++			td = &urb_priv->td[i];
++			if (!list_empty(&td->cancelled_td_list))
++				list_del_init(&td->cancelled_td_list);
++		}
++		goto err_giveback;
++	}
++
+ 	if (xhci->xhc_state & XHCI_STATE_HALTED) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+ 				"HC halted, freeing TD manually.");
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index de9a502491c2..69822852888a 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -369,7 +369,7 @@ static unsigned char parport_uss720_frob_control(struct parport *pp, unsigned ch
+ 	mask &= 0x0f;
+ 	val &= 0x0f;
+ 	d = (priv->reg[1] & (~mask)) ^ val;
+-	if (set_1284_register(pp, 2, d, GFP_KERNEL))
++	if (set_1284_register(pp, 2, d, GFP_ATOMIC))
+ 		return 0;
+ 	priv->reg[1] = d;
+ 	return d & 0xf;
+@@ -379,7 +379,7 @@ static unsigned char parport_uss720_read_status(struct parport *pp)
+ {
+ 	unsigned char ret;
+ 
+-	if (get_1284_register(pp, 1, &ret, GFP_KERNEL))
++	if (get_1284_register(pp, 1, &ret, GFP_ATOMIC))
+ 		return 0;
+ 	return ret & 0xf8;
+ }
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 3be40eaa1ac9..1232dd49556d 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -421,13 +421,13 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ {
+ 	struct usb_yurex *dev;
+ 	int i, set = 0, retval = 0;
+-	char buffer[16];
++	char buffer[16 + 1];
+ 	char *data = buffer;
+ 	unsigned long long c, c2 = 0;
+ 	signed long timeout = 0;
+ 	DEFINE_WAIT(wait);
+ 
+-	count = min(sizeof(buffer), count);
++	count = min(sizeof(buffer) - 1, count);
+ 	dev = file->private_data;
+ 
+ 	/* verify that we actually have some data to write */
+@@ -446,6 +446,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ 		retval = -EFAULT;
+ 		goto error;
+ 	}
++	buffer[count] = 0;
+ 	memset(dev->cntl_buffer, CMD_PADDING, YUREX_BUF_SIZE);
+ 
+ 	switch (buffer[0]) {
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index eecfd0671362..d045d8458f81 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -107,8 +107,12 @@ static int mtu3_device_enable(struct mtu3 *mtu)
+ 		(SSUSB_U2_PORT_DIS | SSUSB_U2_PORT_PDN |
+ 		SSUSB_U2_PORT_HOST_SEL));
+ 
+-	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG)
++	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) {
+ 		mtu3_setbits(ibase, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_OTG_SEL);
++		if (mtu->is_u3_ip)
++			mtu3_setbits(ibase, SSUSB_U3_CTRL(0),
++				     SSUSB_U3_PORT_DUAL_MODE);
++	}
+ 
+ 	return ssusb_check_clocks(mtu->ssusb, check_clk);
+ }
+diff --git a/drivers/usb/mtu3/mtu3_hw_regs.h b/drivers/usb/mtu3/mtu3_hw_regs.h
+index 6ee371478d89..a45bb253939f 100644
+--- a/drivers/usb/mtu3/mtu3_hw_regs.h
++++ b/drivers/usb/mtu3/mtu3_hw_regs.h
+@@ -459,6 +459,7 @@
+ 
+ /* U3D_SSUSB_U3_CTRL_0P */
+ #define SSUSB_U3_PORT_SSP_SPEED	BIT(9)
++#define SSUSB_U3_PORT_DUAL_MODE	BIT(7)
+ #define SSUSB_U3_PORT_HOST_SEL		BIT(2)
+ #define SSUSB_U3_PORT_PDN		BIT(1)
+ #define SSUSB_U3_PORT_DIS		BIT(0)
+diff --git a/drivers/usb/serial/io_ti.h b/drivers/usb/serial/io_ti.h
+index e53c68261017..9bbcee37524e 100644
+--- a/drivers/usb/serial/io_ti.h
++++ b/drivers/usb/serial/io_ti.h
+@@ -173,7 +173,7 @@ struct ump_interrupt {
+ }  __attribute__((packed));
+ 
+ 
+-#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 4) - 3)
++#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 6) & 0x01)
+ #define TIUMP_GET_FUNC_FROM_CODE(c)	((c) & 0x0f)
+ #define TIUMP_INTERRUPT_CODE_LSR	0x03
+ #define TIUMP_INTERRUPT_CODE_MSR	0x04
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 6b22857f6e52..58fc7964ee6b 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1119,7 +1119,7 @@ static void ti_break(struct tty_struct *tty, int break_state)
+ 
+ static int ti_get_port_from_code(unsigned char code)
+ {
+-	return (code >> 4) - 3;
++	return (code >> 6) & 0x01;
+ }
+ 
+ static int ti_get_func_from_code(unsigned char code)
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index c267f2812a04..e227bb5b794f 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -376,6 +376,15 @@ static int queuecommand_lck(struct scsi_cmnd *srb,
+ 		return 0;
+ 	}
+ 
++	if ((us->fflags & US_FL_NO_ATA_1X) &&
++			(srb->cmnd[0] == ATA_12 || srb->cmnd[0] == ATA_16)) {
++		memcpy(srb->sense_buffer, usb_stor_sense_invalidCDB,
++		       sizeof(usb_stor_sense_invalidCDB));
++		srb->result = SAM_STAT_CHECK_CONDITION;
++		done(srb);
++		return 0;
++	}
++
+ 	/* enqueue the command and wake up the control thread */
+ 	srb->scsi_done = done;
+ 	us->srb = srb;
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 9e9de5452860..1f7b401c4d04 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -842,6 +842,27 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ 		sdev->skip_ms_page_8 = 1;
+ 		sdev->wce_default_on = 1;
+ 	}
++
++	/*
++	 * Some disks return the total number of blocks in response
++	 * to READ CAPACITY rather than the highest block number.
++	 * If this device makes that mistake, tell the sd driver.
++	 */
++	if (devinfo->flags & US_FL_FIX_CAPACITY)
++		sdev->fix_capacity = 1;
++
++	/*
++	 * Some devices don't like MODE SENSE with page=0x3f,
++	 * which is the command used for checking if a device
++	 * is write-protected.  Now that we tell the sd driver
++	 * to do a 192-byte transfer with this command the
++	 * majority of devices work fine, but a few still can't
++	 * handle it.  The sd driver will simply assume those
++	 * devices are write-enabled.
++	 */
++	if (devinfo->flags & US_FL_NO_WP_DETECT)
++		sdev->skip_ms_page_3f = 1;
++
+ 	scsi_change_queue_depth(sdev, devinfo->qdepth - 2);
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 22fcfccf453a..f7f83b21dc74 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2288,6 +2288,13 @@ UNUSUAL_DEV(  0x2735, 0x100b, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_GO_SLOW ),
+ 
++/* Reported-by: Tim Anderson <tsa@biglakesoftware.com> */
++UNUSUAL_DEV(  0x2ca3, 0x0031, 0x0000, 0x9999,
++		"DJI",
++		"CineSSD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_ATA_1X),
++
+ /*
+  * Reported by Frederic Marchal <frederic.marchal@wowcompany.com>
+  * Mio Moov 330
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 2510fa728d77..de119f11b78f 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -644,7 +644,7 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *
+  *     Valid mode specifiers for @mode_option:
+  *
+- *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m] or
++ *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][p][m] or
+  *     <name>[-<bpp>][@<refresh>]
+  *
+  *     with <xres>, <yres>, <bpp> and <refresh> decimal numbers and
+@@ -653,10 +653,10 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *      If 'M' is present after yres (and before refresh/bpp if present),
+  *      the function will compute the timings using VESA(tm) Coordinated
+  *      Video Timings (CVT).  If 'R' is present after 'M', will compute with
+- *      reduced blanking (for flatpanels).  If 'i' is present, compute
+- *      interlaced mode.  If 'm' is present, add margins equal to 1.8%
+- *      of xres rounded down to 8 pixels, and 1.8% of yres. The char
+- *      'i' and 'm' must be after 'M' and 'R'. Example:
++ *      reduced blanking (for flatpanels).  If 'i' or 'p' are present, compute
++ *      interlaced or progressive mode.  If 'm' is present, add margins equal
++ *      to 1.8% of xres rounded down to 8 pixels, and 1.8% of yres. The chars
++ *      'i', 'p' and 'm' must be after 'M' and 'R'. Example:
+  *
+  *      1024x768MR-8@60m - Reduced blank with margins at 60Hz.
+  *
+@@ -697,7 +697,8 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 		unsigned int namelen = strlen(name);
+ 		int res_specified = 0, bpp_specified = 0, refresh_specified = 0;
+ 		unsigned int xres = 0, yres = 0, bpp = default_bpp, refresh = 0;
+-		int yres_specified = 0, cvt = 0, rb = 0, interlace = 0;
++		int yres_specified = 0, cvt = 0, rb = 0;
++		int interlace_specified = 0, interlace = 0;
+ 		int margins = 0;
+ 		u32 best, diff, tdiff;
+ 
+@@ -748,9 +749,17 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 				if (!cvt)
+ 					margins = 1;
+ 				break;
++			case 'p':
++				if (!cvt) {
++					interlace = 0;
++					interlace_specified = 1;
++				}
++				break;
+ 			case 'i':
+-				if (!cvt)
++				if (!cvt) {
+ 					interlace = 1;
++					interlace_specified = 1;
++				}
+ 				break;
+ 			default:
+ 				goto done;
+@@ -819,11 +828,21 @@ done:
+ 			if ((name_matches(db[i], name, namelen) ||
+ 			     (res_specified && res_matches(db[i], xres, yres))) &&
+ 			    !fb_try_mode(var, info, &db[i], bpp)) {
+-				if (refresh_specified && db[i].refresh == refresh)
+-					return 1;
++				const int db_interlace = (db[i].vmode &
++					FB_VMODE_INTERLACED ? 1 : 0);
++				int score = abs(db[i].refresh - refresh);
++
++				if (interlace_specified)
++					score += abs(db_interlace - interlace);
++
++				if (!interlace_specified ||
++				    db_interlace == interlace)
++					if (refresh_specified &&
++					    db[i].refresh == refresh)
++						return 1;
+ 
+-				if (abs(db[i].refresh - refresh) < diff) {
+-					diff = abs(db[i].refresh - refresh);
++				if (score < diff) {
++					diff = score;
+ 					best = i;
+ 				}
+ 			}
+diff --git a/drivers/video/fbdev/goldfishfb.c b/drivers/video/fbdev/goldfishfb.c
+index 3b70044773b6..9fe7edf725c6 100644
+--- a/drivers/video/fbdev/goldfishfb.c
++++ b/drivers/video/fbdev/goldfishfb.c
+@@ -301,6 +301,7 @@ static int goldfish_fb_remove(struct platform_device *pdev)
+ 	dma_free_coherent(&pdev->dev, framesize, (void *)fb->fb.screen_base,
+ 						fb->fb.fix.smem_start);
+ 	iounmap(fb->reg_base);
++	kfree(fb);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
+index 585f39efcff6..1c75f4806ed3 100644
+--- a/drivers/video/fbdev/omap/omapfb_main.c
++++ b/drivers/video/fbdev/omap/omapfb_main.c
+@@ -958,7 +958,7 @@ int omapfb_register_client(struct omapfb_notifier_block *omapfb_nb,
+ {
+ 	int r;
+ 
+-	if ((unsigned)omapfb_nb->plane_idx > OMAPFB_PLANE_NUM)
++	if ((unsigned)omapfb_nb->plane_idx >= OMAPFB_PLANE_NUM)
+ 		return -EINVAL;
+ 
+ 	if (!notifier_inited) {
+diff --git a/drivers/video/fbdev/omap2/omapfb/Makefile b/drivers/video/fbdev/omap2/omapfb/Makefile
+index 602edfed09df..f54c3f56b641 100644
+--- a/drivers/video/fbdev/omap2/omapfb/Makefile
++++ b/drivers/video/fbdev/omap2/omapfb/Makefile
+@@ -2,5 +2,5 @@
+ obj-$(CONFIG_OMAP2_VRFB) += vrfb.o
+ obj-y += dss/
+ obj-y += displays/
+-obj-$(CONFIG_FB_OMAP2) += omapfb.o
+-omapfb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
++obj-$(CONFIG_FB_OMAP2) += omap2fb.o
++omap2fb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 76722a59f55e..dfe382e68287 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2128,8 +2128,8 @@ static int of_get_pxafb_display(struct device *dev, struct device_node *disp,
+ 		return -EINVAL;
+ 
+ 	ret = -ENOMEM;
+-	info->modes = kmalloc_array(timings->num_timings,
+-				    sizeof(info->modes[0]), GFP_KERNEL);
++	info->modes = kcalloc(timings->num_timings, sizeof(info->modes[0]),
++			      GFP_KERNEL);
+ 	if (!info->modes)
+ 		goto out;
+ 	info->num_modes = timings->num_timings;
+diff --git a/drivers/video/fbdev/via/viafbdev.c b/drivers/video/fbdev/via/viafbdev.c
+index d2f785068ef4..7bb7e90b8f00 100644
+--- a/drivers/video/fbdev/via/viafbdev.c
++++ b/drivers/video/fbdev/via/viafbdev.c
+@@ -19,6 +19,7 @@
+  * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+  */
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+@@ -1468,7 +1469,7 @@ static const struct file_operations viafb_vt1636_proc_fops = {
+ 
+ #endif /* CONFIG_FB_VIA_DIRECT_PROCFS */
+ 
+-static int viafb_sup_odev_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused viafb_sup_odev_proc_show(struct seq_file *m, void *v)
+ {
+ 	via_odev_to_seq(m, supported_odev_map[
+ 		viaparinfo->shared->chip_info.gfx_chip_name]);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 816cc921cf36..efae2fb0930a 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1751,7 +1751,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 		const struct user_regset *regset = &view->regsets[i];
+ 		do_thread_regset_writeback(t->task, regset);
+ 		if (regset->core_note_type && regset->get &&
+-		    (!regset->active || regset->active(t->task, regset))) {
++		    (!regset->active || regset->active(t->task, regset) > 0)) {
+ 			int ret;
+ 			size_t size = regset_size(t->task, regset);
+ 			void *data = kmalloc(size, GFP_KERNEL);
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index eeab81c9452f..e169e1a5fd35 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -376,8 +376,15 @@ static char *nxt_dir_entry(char *old_entry, char *end_of_smb, int level)
+ 
+ 		new_entry = old_entry + sizeof(FIND_FILE_STANDARD_INFO) +
+ 				pfData->FileNameLength;
+-	} else
+-		new_entry = old_entry + le32_to_cpu(pDirInfo->NextEntryOffset);
++	} else {
++		u32 next_offset = le32_to_cpu(pDirInfo->NextEntryOffset);
++
++		if (old_entry + next_offset < old_entry) {
++			cifs_dbg(VFS, "invalid offset %u\n", next_offset);
++			return NULL;
++		}
++		new_entry = old_entry + next_offset;
++	}
+ 	cifs_dbg(FYI, "new entry %p old entry %p\n", new_entry, old_entry);
+ 	/* validate that new_entry is not past end of SMB */
+ 	if (new_entry >= end_of_smb) {
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 82be1dfeca33..29cce842ed04 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2418,14 +2418,14 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+ 	/* We check for obvious errors in the output buffer length and offset */
+ 	if (*plen == 0)
+ 		goto ioctl_exit; /* server returned no data */
+-	else if (*plen > 0xFF00) {
++	else if (*plen > rsp_iov.iov_len || *plen > 0xFF00) {
+ 		cifs_dbg(VFS, "srv returned invalid ioctl length: %d\n", *plen);
+ 		*plen = 0;
+ 		rc = -EIO;
+ 		goto ioctl_exit;
+ 	}
+ 
+-	if (rsp_iov.iov_len < le32_to_cpu(rsp->OutputOffset) + *plen) {
++	if (rsp_iov.iov_len - *plen < le32_to_cpu(rsp->OutputOffset)) {
+ 		cifs_dbg(VFS, "Malformed ioctl resp: len %d offset %d\n", *plen,
+ 			le32_to_cpu(rsp->OutputOffset));
+ 		*plen = 0;
+@@ -3492,33 +3492,38 @@ num_entries(char *bufstart, char *end_of_buf, char **lastentry, size_t size)
+ 	int len;
+ 	unsigned int entrycount = 0;
+ 	unsigned int next_offset = 0;
+-	FILE_DIRECTORY_INFO *entryptr;
++	char *entryptr;
++	FILE_DIRECTORY_INFO *dir_info;
+ 
+ 	if (bufstart == NULL)
+ 		return 0;
+ 
+-	entryptr = (FILE_DIRECTORY_INFO *)bufstart;
++	entryptr = bufstart;
+ 
+ 	while (1) {
+-		entryptr = (FILE_DIRECTORY_INFO *)
+-					((char *)entryptr + next_offset);
+-
+-		if ((char *)entryptr + size > end_of_buf) {
++		if (entryptr + next_offset < entryptr ||
++		    entryptr + next_offset > end_of_buf ||
++		    entryptr + next_offset + size > end_of_buf) {
+ 			cifs_dbg(VFS, "malformed search entry would overflow\n");
+ 			break;
+ 		}
+ 
+-		len = le32_to_cpu(entryptr->FileNameLength);
+-		if ((char *)entryptr + len + size > end_of_buf) {
++		entryptr = entryptr + next_offset;
++		dir_info = (FILE_DIRECTORY_INFO *)entryptr;
++
++		len = le32_to_cpu(dir_info->FileNameLength);
++		if (entryptr + len < entryptr ||
++		    entryptr + len > end_of_buf ||
++		    entryptr + len + size > end_of_buf) {
+ 			cifs_dbg(VFS, "directory entry name would overflow frame end of buf %p\n",
+ 				 end_of_buf);
+ 			break;
+ 		}
+ 
+-		*lastentry = (char *)entryptr;
++		*lastentry = entryptr;
+ 		entrycount++;
+ 
+-		next_offset = le32_to_cpu(entryptr->NextEntryOffset);
++		next_offset = le32_to_cpu(dir_info->NextEntryOffset);
+ 		if (!next_offset)
+ 			break;
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 577cff24707b..39843fa7e11b 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1777,6 +1777,16 @@ void configfs_unregister_group(struct config_group *group)
+ 	struct dentry *dentry = group->cg_item.ci_dentry;
+ 	struct dentry *parent = group->cg_item.ci_parent->ci_dentry;
+ 
++	mutex_lock(&subsys->su_mutex);
++	if (!group->cg_item.ci_parent->ci_group) {
++		/*
++		 * The parent has already been unlinked and detached
++		 * due to a rmdir.
++		 */
++		goto unlink_group;
++	}
++	mutex_unlock(&subsys->su_mutex);
++
+ 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
+ 	spin_lock(&configfs_dirent_lock);
+ 	configfs_detach_prep(dentry, NULL);
+@@ -1791,6 +1801,7 @@ void configfs_unregister_group(struct config_group *group)
+ 	dput(dentry);
+ 
+ 	mutex_lock(&subsys->su_mutex);
++unlink_group:
+ 	unlink_group(group);
+ 	mutex_unlock(&subsys->su_mutex);
+ }
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 128d489acebb..742147cbe759 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3106,9 +3106,19 @@ static struct dentry *f2fs_mount(struct file_system_type *fs_type, int flags,
+ static void kill_f2fs_super(struct super_block *sb)
+ {
+ 	if (sb->s_root) {
+-		set_sbi_flag(F2FS_SB(sb), SBI_IS_CLOSE);
+-		f2fs_stop_gc_thread(F2FS_SB(sb));
+-		f2fs_stop_discard_thread(F2FS_SB(sb));
++		struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
++		set_sbi_flag(sbi, SBI_IS_CLOSE);
++		f2fs_stop_gc_thread(sbi);
++		f2fs_stop_discard_thread(sbi);
++
++		if (is_sbi_flag_set(sbi, SBI_IS_DIRTY) ||
++				!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
++			struct cp_control cpc = {
++				.reason = CP_UMOUNT,
++			};
++			f2fs_write_checkpoint(sbi, &cpc);
++		}
+ 	}
+ 	kill_block_super(sb);
+ }
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index ed6699705c13..fd5bea55fd60 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2060,7 +2060,7 @@ int gfs2_write_alloc_required(struct gfs2_inode *ip, u64 offset,
+ 	end_of_file = (i_size_read(&ip->i_inode) + sdp->sd_sb.sb_bsize - 1) >> shift;
+ 	lblock = offset >> shift;
+ 	lblock_stop = (offset + len + sdp->sd_sb.sb_bsize - 1) >> shift;
+-	if (lblock_stop > end_of_file)
++	if (lblock_stop > end_of_file && ip != GFS2_I(sdp->sd_rindex))
+ 		return 1;
+ 
+ 	size = (lblock_stop - lblock) << shift;
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 33abcf29bc05..b86249ebde11 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1686,7 +1686,8 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext,
+ 
+ 	while(1) {
+ 		bi = rbm_bi(rbm);
+-		if (test_bit(GBF_FULL, &bi->bi_flags) &&
++		if ((ip == NULL || !gfs2_rs_active(&ip->i_res)) &&
++		    test_bit(GBF_FULL, &bi->bi_flags) &&
+ 		    (state == GFS2_BLKST_FREE))
+ 			goto next_bitmap;
+ 
+diff --git a/fs/namespace.c b/fs/namespace.c
+index bd2f4c68506a..1949e0939d40 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file->f_path.mnt->mnt_sb);
++	sb_start_write(file_inode(file)->i_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file->f_path.mnt->mnt_sb);
++		sb_end_write(file_inode(file)->i_sb);
+ 	return ret;
+ }
+ 
+@@ -540,7 +540,8 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	mnt_drop_write(file->f_path.mnt);
++	__mnt_drop_write_file(file);
++	sb_end_write(file_inode(file)->i_sb);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ff98e2a3f3cc..f688338b0482 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2642,14 +2642,18 @@ static void nfs41_check_delegation_stateid(struct nfs4_state *state)
+ 	}
+ 
+ 	nfs4_stateid_copy(&stateid, &delegation->stateid);
+-	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) ||
+-		!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
+-			&delegation->flags)) {
++	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) {
+ 		rcu_read_unlock();
+ 		nfs_finish_clear_delegation_stateid(state, &stateid);
+ 		return;
+ 	}
+ 
++	if (!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
++				&delegation->flags)) {
++		rcu_read_unlock();
++		return;
++	}
++
+ 	cred = get_rpccred(delegation->cred);
+ 	rcu_read_unlock();
+ 	status = nfs41_test_and_free_expired_stateid(server, &stateid, cred);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 2bf2eaa08ca7..3c18c12a5c4c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1390,6 +1390,8 @@ int nfs4_schedule_stateid_recovery(const struct nfs_server *server, struct nfs4_
+ 
+ 	if (!nfs4_state_mark_reclaim_nograce(clp, state))
+ 		return -EBADF;
++	nfs_inode_find_delegation_state_and_recover(state->inode,
++			&state->stateid);
+ 	dprintk("%s: scheduling stateid recovery for server %s\n", __func__,
+ 			clp->cl_hostname);
+ 	nfs4_schedule_state_manager(clp);
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index a275fba93170..708342f4692f 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -1194,7 +1194,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_stateid_callback_event,
+ 		TP_fast_assign(
+ 			__entry->error = error;
+ 			__entry->fhandle = nfs_fhandle_hash(fhandle);
+-			if (inode != NULL) {
++			if (!IS_ERR_OR_NULL(inode)) {
+ 				__entry->fileid = NFS_FILEID(inode);
+ 				__entry->dev = inode->i_sb->s_dev;
+ 			} else {
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 704b37311467..fa2121f877c1 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -970,16 +970,6 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	if (err)
+ 		goto out;
+ 
+-	err = -EBUSY;
+-	if (ovl_inuse_trylock(upperpath->dentry)) {
+-		ofs->upperdir_locked = true;
+-	} else if (ofs->config.index) {
+-		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
+-		goto out;
+-	} else {
+-		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+-	}
+-
+ 	upper_mnt = clone_private_mount(upperpath);
+ 	err = PTR_ERR(upper_mnt);
+ 	if (IS_ERR(upper_mnt)) {
+@@ -990,6 +980,17 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	/* Don't inherit atime flags */
+ 	upper_mnt->mnt_flags &= ~(MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME);
+ 	ofs->upper_mnt = upper_mnt;
++
++	err = -EBUSY;
++	if (ovl_inuse_trylock(ofs->upper_mnt->mnt_root)) {
++		ofs->upperdir_locked = true;
++	} else if (ofs->config.index) {
++		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
++		goto out;
++	} else {
++		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
++	}
++
+ 	err = 0;
+ out:
+ 	return err;
+@@ -1089,8 +1090,10 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		goto out;
+ 	}
+ 
++	ofs->workbasedir = dget(workpath.dentry);
++
+ 	err = -EBUSY;
+-	if (ovl_inuse_trylock(workpath.dentry)) {
++	if (ovl_inuse_trylock(ofs->workbasedir)) {
+ 		ofs->workdir_locked = true;
+ 	} else if (ofs->config.index) {
+ 		pr_err("overlayfs: workdir is in-use by another mount, mount with '-o index=off' to override exclusive workdir protection.\n");
+@@ -1099,7 +1102,6 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+ 	}
+ 
+-	ofs->workbasedir = dget(workpath.dentry);
+ 	err = ovl_make_workdir(ofs, &workpath);
+ 	if (err)
+ 		goto out;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 951a14edcf51..0792595ebcfb 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -429,7 +429,12 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+ 	vaddr = vmap(pages, page_count, VM_MAP, prot);
+ 	kfree(pages);
+ 
+-	return vaddr;
++	/*
++	 * Since vmap() uses page granularity, we must add the offset
++	 * into the page here, to get the byte granularity address
++	 * into the mapping to represent the actual "start" location.
++	 */
++	return vaddr + offset_in_page(start);
+ }
+ 
+ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+@@ -448,6 +453,11 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+ 	else
+ 		va = ioremap_wc(start, size);
+ 
++	/*
++	 * Since request_mem_region() and ioremap() are byte-granularity
+	 * there is no need to handle anything special like we do in the
+	 * vmap() case in persistent_ram_vmap() above.
++	 */
+ 	return va;
+ }
+ 
+@@ -468,7 +478,7 @@ static int persistent_ram_buffer_map(phys_addr_t start, phys_addr_t size,
+ 		return -ENOMEM;
+ 	}
+ 
+-	prz->buffer = prz->vaddr + offset_in_page(start);
++	prz->buffer = prz->vaddr;
+ 	prz->buffer_size = size - sizeof(struct persistent_ram_buffer);
+ 
+ 	return 0;
+@@ -515,7 +525,8 @@ void persistent_ram_free(struct persistent_ram_zone *prz)
+ 
+ 	if (prz->vaddr) {
+ 		if (pfn_valid(prz->paddr >> PAGE_SHIFT)) {
+-			vunmap(prz->vaddr);
++			/* We must vunmap() at page-granularity. */
++			vunmap(prz->vaddr - offset_in_page(prz->paddr));
+ 		} else {
+ 			iounmap(prz->vaddr);
+ 			release_mem_region(prz->paddr, prz->size);
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index 6eb06101089f..e8839d3a7559 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -112,6 +112,11 @@
+  */
+ #define CRYPTO_ALG_OPTIONAL_KEY		0x00004000
+ 
++/*
++ * Don't trigger module loading
++ */
++#define CRYPTO_NOLOAD			0x00008000
++
+ /*
+  * Transform masks and values (for crt_flags).
+  */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 83957920653a..64f450593b54 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -357,7 +357,7 @@ struct mlx5_frag_buf {
+ struct mlx5_frag_buf_ctrl {
+ 	struct mlx5_frag_buf	frag_buf;
+ 	u32			sz_m1;
+-	u32			frag_sz_m1;
++	u16			frag_sz_m1;
+ 	u32			strides_offset;
+ 	u8			log_sz;
+ 	u8			log_stride;
+@@ -1042,7 +1042,7 @@ int mlx5_cmd_free_uar(struct mlx5_core_dev *dev, u32 uarn);
+ void mlx5_health_cleanup(struct mlx5_core_dev *dev);
+ int mlx5_health_init(struct mlx5_core_dev *dev);
+ void mlx5_start_health_poll(struct mlx5_core_dev *dev);
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev);
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health);
+ void mlx5_drain_health_wq(struct mlx5_core_dev *dev);
+ void mlx5_trigger_health_work(struct mlx5_core_dev *dev);
+ void mlx5_drain_health_recovery(struct mlx5_core_dev *dev);
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 4d25e4f952d9..b99a1a8c2952 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -290,6 +290,8 @@ extern struct device_node *of_get_next_child(const struct device_node *node,
+ extern struct device_node *of_get_next_available_child(
+ 	const struct device_node *node, struct device_node *prev);
+ 
++extern struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible);
+ extern struct device_node *of_get_child_by_name(const struct device_node *node,
+ 					const char *name);
+ 
+@@ -632,6 +634,12 @@ static inline bool of_have_populated_dt(void)
+ 	return false;
+ }
+ 
++static inline struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible)
++{
++	return NULL;
++}
++
+ static inline struct device_node *of_get_child_by_name(
+ 					const struct device_node *node,
+ 					const char *name)
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index c17c0c268436..dce35e16bff4 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -419,6 +419,13 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	struct path parent_path;
+ 	int h, ret = 0;
+ 
++	/*
+	 * By the time audit_add_to_parent() is called, krule->watch might
+	 * have been updated and the watch might have been freed.
+	 * So we need to keep a reference to the watch.
++	audit_get_watch(watch);
++
+ 	mutex_unlock(&audit_filter_mutex);
+ 
+ 	/* Avoid calling path_lookup under audit_filter_mutex. */
+@@ -427,8 +434,10 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	/* caller expects mutex locked */
+ 	mutex_lock(&audit_filter_mutex);
+ 
+-	if (ret)
++	if (ret) {
++		audit_put_watch(watch);
+ 		return ret;
++	}
+ 
+ 	/* either find an old parent or attach a new one */
+ 	parent = audit_find_parent(d_backing_inode(parent_path.dentry));
+@@ -446,6 +455,7 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	*list = &audit_inode_hash[h];
+ error:
+ 	path_put(&parent_path);
++	audit_put_watch(watch);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 3d83ee7df381..badabb0b435c 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -95,7 +95,7 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 				   enum bpf_attach_type type,
+ 				   struct bpf_prog_array __rcu **array)
+ {
+-	struct bpf_prog_array __rcu *progs;
++	struct bpf_prog_array *progs;
+ 	struct bpf_prog_list *pl;
+ 	struct cgroup *p = cgrp;
+ 	int cnt = 0;
+@@ -120,13 +120,12 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 					    &p->bpf.progs[type], node) {
+ 				if (!pl->prog)
+ 					continue;
+-				rcu_dereference_protected(progs, 1)->
+-					progs[cnt++] = pl->prog;
++				progs->progs[cnt++] = pl->prog;
+ 			}
+ 		p = cgroup_parent(p);
+ 	} while (p);
+ 
+-	*array = progs;
++	rcu_assign_pointer(*array, progs);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index eec2d5fb676b..c7b3e34811ec 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5948,6 +5948,7 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 		unsigned long sp;
+ 		unsigned int rem;
+ 		u64 dyn_size;
++		mm_segment_t fs;
+ 
+ 		/*
+ 		 * We dump:
+@@ -5965,7 +5966,10 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 
+ 		/* Data. */
+ 		sp = perf_user_stack_pointer(regs);
++		fs = get_fs();
++		set_fs(USER_DS);
+ 		rem = __output_copy_user(handle, (void *) sp, dump_size);
++		set_fs(fs);
+ 		dyn_size = dump_size - rem;
+ 
+ 		perf_output_skip(handle, rem);
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 42fcb7f05fac..f42cf69ef539 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1446,7 +1446,7 @@ static int rcu_torture_stall(void *args)
+ 		VERBOSE_TOROUT_STRING("rcu_torture_stall end holdoff");
+ 	}
+ 	if (!kthread_should_stop()) {
+-		stop_at = get_seconds() + stall_cpu;
++		stop_at = ktime_get_seconds() + stall_cpu;
+ 		/* RCU CPU stall is expected behavior in following code. */
+ 		rcu_read_lock();
+ 		if (stall_cpu_irqsoff)
+@@ -1455,7 +1455,8 @@ static int rcu_torture_stall(void *args)
+ 			preempt_disable();
+ 		pr_alert("rcu_torture_stall start on CPU %d.\n",
+ 			 smp_processor_id());
+-		while (ULONG_CMP_LT(get_seconds(), stop_at))
++		while (ULONG_CMP_LT((unsigned long)ktime_get_seconds(),
++				    stop_at))
+ 			continue;  /* Induce RCU CPU stall warning. */
+ 		if (stall_cpu_irqsoff)
+ 			local_irq_enable();
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 9c219f7b0970..478d9d3e6be9 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
+  * To solve this problem, we also cap the util_avg of successive tasks to
+  * only 1/2 of the left utilization budget:
+  *
+- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
++ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
+  *
+- * where n denotes the nth task.
++ * where n denotes the nth task and cpu_scale the CPU capacity.
+  *
+- * For example, a simplest series from the beginning would be like:
++ * For example, for a CPU with 1024 of capacity, a simplest series from
++ * the beginning would be like:
+  *
+  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
+  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
+@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
+ {
+ 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ 	struct sched_avg *sa = &se->avg;
+-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
++	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
++	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
+ 
+ 	if (cap > 0) {
+ 		if (cfs_rq->avg.util_avg != 0) {
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 928be527477e..a7a2aaa3026a 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -392,35 +392,36 @@ static inline bool is_kthread_should_stop(void)
+  *     if (condition)
+  *         break;
+  *
+- *     p->state = mode;				condition = true;
+- *     smp_mb(); // A				smp_wmb(); // C
+- *     if (!wq_entry->flags & WQ_FLAG_WOKEN)	wq_entry->flags |= WQ_FLAG_WOKEN;
+- *         schedule()				try_to_wake_up();
+- *     p->state = TASK_RUNNING;		    ~~~~~~~~~~~~~~~~~~
+- *     wq_entry->flags &= ~WQ_FLAG_WOKEN;		condition = true;
+- *     smp_mb() // B				smp_wmb(); // C
+- *						wq_entry->flags |= WQ_FLAG_WOKEN;
+- * }
+- * remove_wait_queue(&wq_head, &wait);
++ *     // in wait_woken()			// in woken_wake_function()
+  *
++ *     p->state = mode;				wq_entry->flags |= WQ_FLAG_WOKEN;
++ *     smp_mb(); // A				try_to_wake_up():
++ *     if (!(wq_entry->flags & WQ_FLAG_WOKEN))	   <full barrier>
++ *         schedule()				   if (p->state & mode)
++ *     p->state = TASK_RUNNING;			      p->state = TASK_RUNNING;
++ *     wq_entry->flags &= ~WQ_FLAG_WOKEN;	~~~~~~~~~~~~~~~~~~
++ *     smp_mb(); // B				condition = true;
++ * }						smp_mb(); // C
++ * remove_wait_queue(&wq_head, &wait);		wq_entry->flags |= WQ_FLAG_WOKEN;
+  */
+ long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout)
+ {
+-	set_current_state(mode); /* A */
+ 	/*
+-	 * The above implies an smp_mb(), which matches with the smp_wmb() from
+-	 * woken_wake_function() such that if we observe WQ_FLAG_WOKEN we must
+-	 * also observe all state before the wakeup.
++	 * The below executes an smp_mb(), which matches with the full barrier
++	 * executed by the try_to_wake_up() in woken_wake_function() such that
++	 * either we see the store to wq_entry->flags in woken_wake_function()
++	 * or woken_wake_function() sees our store to current->state.
+ 	 */
++	set_current_state(mode); /* A */
+ 	if (!(wq_entry->flags & WQ_FLAG_WOKEN) && !is_kthread_should_stop())
+ 		timeout = schedule_timeout(timeout);
+ 	__set_current_state(TASK_RUNNING);
+ 
+ 	/*
+-	 * The below implies an smp_mb(), it too pairs with the smp_wmb() from
+-	 * woken_wake_function() such that we must either observe the wait
+-	 * condition being true _OR_ WQ_FLAG_WOKEN such that we will not miss
+-	 * an event.
++	 * The below executes an smp_mb(), which matches with the smp_mb() (C)
++	 * in woken_wake_function() such that either we see the wait condition
++	 * being true or the store to wq_entry->flags in woken_wake_function()
++	 * follows ours in the coherence order.
+ 	 */
+ 	smp_store_mb(wq_entry->flags, wq_entry->flags & ~WQ_FLAG_WOKEN); /* B */
+ 
+@@ -430,14 +431,8 @@ EXPORT_SYMBOL(wait_woken);
+ 
+ int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key)
+ {
+-	/*
+-	 * Although this function is called under waitqueue lock, LOCK
+-	 * doesn't imply write barrier and the users expects write
+-	 * barrier semantics on wakeup functions.  The following
+-	 * smp_wmb() is equivalent to smp_wmb() in try_to_wake_up()
+-	 * and is paired with smp_store_mb() in wait_woken().
+-	 */
+-	smp_wmb(); /* C */
++	/* Pairs with the smp_store_mb() in wait_woken(). */
++	smp_mb(); /* C */
+ 	wq_entry->flags |= WQ_FLAG_WOKEN;
+ 
+ 	return default_wake_function(wq_entry, mode, sync, key);
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 3264e1873219..deacc52d7ff1 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -159,7 +159,7 @@ void bt_accept_enqueue(struct sock *parent, struct sock *sk)
+ 	BT_DBG("parent %p, sk %p", parent, sk);
+ 
+ 	sock_hold(sk);
+-	lock_sock(sk);
++	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ 	bt_sk(sk)->parent = parent;
+ 	release_sock(sk);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fb35b62af272..3680912f056a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -939,9 +939,6 @@ struct ubuf_info *sock_zerocopy_alloc(struct sock *sk, size_t size)
+ 
+ 	WARN_ON_ONCE(!in_task());
+ 
+-	if (!sock_flag(sk, SOCK_ZEROCOPY))
+-		return NULL;
+-
+ 	skb = sock_omalloc(sk, 0, GFP_KERNEL);
+ 	if (!skb)
+ 		return NULL;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 055f4bbba86b..41883c34a385 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -178,6 +178,9 @@ static void ipgre_err(struct sk_buff *skb, u32 info,
+ 
+ 	if (tpi->proto == htons(ETH_P_TEB))
+ 		itn = net_generic(net, gre_tap_net_id);
++	else if (tpi->proto == htons(ETH_P_ERSPAN) ||
++		 tpi->proto == htons(ETH_P_ERSPAN2))
++		itn = net_generic(net, erspan_net_id);
+ 	else
+ 		itn = net_generic(net, ipgre_net_id);
+ 
+@@ -328,6 +331,8 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 		ip_tunnel_rcv(tunnel, skb, tpi, tun_dst, log_ecn_error);
+ 		return PACKET_RCVD;
+ 	}
++	return PACKET_REJECT;
++
+ drop:
+ 	kfree_skb(skb);
+ 	return PACKET_RCVD;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4491faf83f4f..086201d96d54 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1186,7 +1186,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
+ 
+ 	flags = msg->msg_flags;
+ 
+-	if (flags & MSG_ZEROCOPY && size) {
++	if (flags & MSG_ZEROCOPY && size && sock_flag(sk, SOCK_ZEROCOPY)) {
+ 		if (sk->sk_state != TCP_ESTABLISHED) {
+ 			err = -EINVAL;
+ 			goto out_err;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index bdf6fa78d0d2..aa082b71d2e4 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -495,7 +495,7 @@ static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
+ 		goto out_unlock;
+ 	}
+ 
+-	ieee80211_key_free(key, true);
++	ieee80211_key_free(key, sdata->vif.type == NL80211_IFTYPE_STATION);
+ 
+ 	ret = 0;
+  out_unlock:
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index ee0d0cc8dc3b..c054ac85793c 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -656,11 +656,15 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_key *old_key;
+-	int idx, ret;
+-	bool pairwise;
+-
+-	pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
+-	idx = key->conf.keyidx;
++	int idx = key->conf.keyidx;
++	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
++	/*
++	 * We want to delay tailroom updates only for station - in that
++	 * case it helps roaming speed, but in other cases it hurts and
++	 * can cause warnings to appear.
++	 */
++	bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
++	int ret;
+ 
+ 	mutex_lock(&sdata->local->key_mtx);
+ 
+@@ -688,14 +692,14 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ 	increment_tailroom_need_count(sdata);
+ 
+ 	ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
+-	ieee80211_key_destroy(old_key, true);
++	ieee80211_key_destroy(old_key, delay_tailroom);
+ 
+ 	ieee80211_debugfs_key_add(key);
+ 
+ 	if (!local->wowlan) {
+ 		ret = ieee80211_key_enable_hw_accel(key);
+ 		if (ret)
+-			ieee80211_key_free(key, true);
++			ieee80211_key_free(key, delay_tailroom);
+ 	} else {
+ 		ret = 0;
+ 	}
+@@ -930,7 +934,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
+@@ -940,7 +945,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	mutex_unlock(&local->key_mtx);
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 5aa3a64aa4f0..48257d3a4201 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -60,11 +60,13 @@ struct rds_sock *rds_find_bound(__be32 addr, __be16 port)
+ 	u64 key = ((u64)addr << 32) | port;
+ 	struct rds_sock *rs;
+ 
+-	rs = rhashtable_lookup_fast(&bind_hash_table, &key, ht_parms);
++	rcu_read_lock();
++	rs = rhashtable_lookup(&bind_hash_table, &key, ht_parms);
+ 	if (rs && !sock_flag(rds_rs_to_sk(rs), SOCK_DEAD))
+ 		rds_sock_addref(rs);
+ 	else
+ 		rs = NULL;
++	rcu_read_unlock();
+ 
+ 	rdsdebug("returning rs %p for %pI4:%u\n", rs, &addr,
+ 		ntohs(port));
+@@ -157,6 +159,7 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 		goto out;
+ 	}
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	ret = rds_add_bound(rs, sin->sin_addr.s_addr, &sin->sin_port);
+ 	if (ret)
+ 		goto out;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 0a5fa347135e..ac8ca238c541 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -578,6 +578,7 @@ static int tipc_release(struct socket *sock)
+ 	sk_stop_timer(sk, &sk->sk_timer);
+ 	tipc_sk_remove(tsk);
+ 
++	sock_orphan(sk);
+ 	/* Reject any messages that accumulated in backlog queue */
+ 	release_sock(sk);
+ 	tipc_dest_list_purge(&tsk->cong_links);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 1f3d9789af30..b3344bbe336b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -149,6 +149,9 @@ static int alloc_encrypted_sg(struct sock *sk, int len)
+ 			 &ctx->sg_encrypted_num_elem,
+ 			 &ctx->sg_encrypted_size, 0);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_encrypted_num_elem = ARRAY_SIZE(ctx->sg_encrypted_data);
++
+ 	return rc;
+ }
+ 
+@@ -162,6 +165,9 @@ static int alloc_plaintext_sg(struct sock *sk, int len)
+ 			 &ctx->sg_plaintext_num_elem, &ctx->sg_plaintext_size,
+ 			 tls_ctx->pending_open_record_frags);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_plaintext_num_elem = ARRAY_SIZE(ctx->sg_plaintext_data);
++
+ 	return rc;
+ }
+ 
+@@ -280,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge)
++			      bool charge, bool revert)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -331,6 +337,8 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ out:
+ 	*size_used = size;
+ 	*pages_used = num_elem;
++	if (revert)
++		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -432,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true);
++				true, false);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -820,7 +828,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false);
++							 MAX_SKB_FRAGS,	false, true);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 7c5e8978aeaa..a94983e03a8b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1831,7 +1831,10 @@ xfrm_resolve_and_create_bundle(struct xfrm_policy **pols, int num_pols,
+ 	/* Try to instantiate a bundle */
+ 	err = xfrm_tmpl_resolve(pols, num_pols, fl, xfrm, family);
+ 	if (err <= 0) {
+-		if (err != 0 && err != -EAGAIN)
++		if (err == 0)
++			return NULL;
++
++		if (err != -EAGAIN)
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTPOLERROR);
+ 		return ERR_PTR(err);
+ 	}
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 86321f06461e..ed303f552f9d 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -400,3 +400,6 @@ endif
+ endef
+ #
+ ###############################################################################
++
++# delete partially updated (i.e. corrupted) files on error
++.DELETE_ON_ERROR:
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index b60524310855..c20e3142b541 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -97,7 +97,8 @@ static struct shash_desc *init_desc(char type)
+ 		mutex_lock(&mutex);
+ 		if (*tfm)
+ 			goto out;
+-		*tfm = crypto_alloc_shash(algo, 0, CRYPTO_ALG_ASYNC);
++		*tfm = crypto_alloc_shash(algo, 0,
++					  CRYPTO_ALG_ASYNC | CRYPTO_NOLOAD);
+ 		if (IS_ERR(*tfm)) {
+ 			rc = PTR_ERR(*tfm);
+ 			pr_err("Can not allocate %s (reason: %ld)\n", algo, rc);
+diff --git a/security/security.c b/security/security.c
+index 68f46d849abe..4e572b38937d 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -118,6 +118,8 @@ static int lsm_append(char *new, char **result)
+ 
+ 	if (*result == NULL) {
+ 		*result = kstrdup(new, GFP_KERNEL);
++		if (*result == NULL)
++			return -ENOMEM;
+ 	} else {
+ 		/* Check if it is the last registered name */
+ 		if (match_last_lsm(*result, new))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 19de675d4504..8b6cd5a79bfa 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3924,15 +3924,19 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	struct smack_known *skp = NULL;
+ 	int rc = 0;
+ 	struct smk_audit_info ad;
++	u16 family = sk->sk_family;
+ #ifdef CONFIG_AUDIT
+ 	struct lsm_network_audit net;
+ #endif
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	struct sockaddr_in6 sadd;
+ 	int proto;
++
++	if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
++		family = PF_INET;
+ #endif /* CONFIG_IPV6 */
+ 
+-	switch (sk->sk_family) {
++	switch (family) {
+ 	case PF_INET:
+ #ifdef CONFIG_SECURITY_SMACK_NETFILTER
+ 		/*
+@@ -3950,7 +3954,7 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 		 */
+ 		netlbl_secattr_init(&secattr);
+ 
+-		rc = netlbl_skbuff_getattr(skb, sk->sk_family, &secattr);
++		rc = netlbl_skbuff_getattr(skb, family, &secattr);
+ 		if (rc == 0)
+ 			skp = smack_from_secattr(&secattr, ssp);
+ 		else
+@@ -3963,7 +3967,7 @@ access_check:
+ #endif
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv4_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif
+@@ -3977,7 +3981,7 @@ access_check:
+ 		rc = smk_bu_note("IPv4 delivery", skp, ssp->smk_in,
+ 					MAY_WRITE, rc);
+ 		if (rc != 0)
+-			netlbl_skbuff_err(skb, sk->sk_family, rc, 0);
++			netlbl_skbuff_err(skb, family, rc, 0);
+ 		break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case PF_INET6:
+@@ -3993,7 +3997,7 @@ access_check:
+ 			skp = smack_net_ambient;
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv6_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif /* CONFIG_AUDIT */
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 44b5ae833082..a4aac948ea49 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -626,27 +626,33 @@ EXPORT_SYMBOL(snd_interval_refine);
+ 
+ static int snd_interval_refine_first(struct snd_interval *i)
+ {
++	const unsigned int last_max = i->max;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->max = i->min;
+-	i->openmax = i->openmin;
+-	if (i->openmax)
++	if (i->openmin)
+ 		i->max++;
++	/* only exclude max value if also excluded before refine */
++	i->openmax = (i->openmax && i->max >= last_max);
+ 	return 1;
+ }
+ 
+ static int snd_interval_refine_last(struct snd_interval *i)
+ {
++	const unsigned int last_min = i->min;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->min = i->max;
+-	i->openmin = i->openmax;
+-	if (i->openmin)
++	if (i->openmax)
+ 		i->min--;
++	/* only exclude min value if also excluded before refine */
++	i->openmin = (i->openmin && i->min <= last_min);
+ 	return 1;
+ }
+ 
+diff --git a/sound/isa/msnd/msnd_pinnacle.c b/sound/isa/msnd/msnd_pinnacle.c
+index 6c584d9b6c42..a19f802b2071 100644
+--- a/sound/isa/msnd/msnd_pinnacle.c
++++ b/sound/isa/msnd/msnd_pinnacle.c
+@@ -82,10 +82,10 @@
+ 
+ static void set_default_audio_parameters(struct snd_msnd *chip)
+ {
+-	chip->play_sample_size = DEFSAMPLESIZE;
++	chip->play_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->play_sample_rate = DEFSAMPLERATE;
+ 	chip->play_channels = DEFCHANNELS;
+-	chip->capture_sample_size = DEFSAMPLESIZE;
++	chip->capture_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->capture_sample_rate = DEFSAMPLERATE;
+ 	chip->capture_channels = DEFCHANNELS;
+ }
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 38e4a8515709..d00734d31e04 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -291,10 +291,6 @@ static const struct snd_soc_dapm_widget hdmi_widgets[] = {
+ 	SND_SOC_DAPM_OUTPUT("TX"),
+ };
+ 
+-static const struct snd_soc_dapm_route hdmi_routes[] = {
+-	{ "TX", NULL, "Playback" },
+-};
+-
+ enum {
+ 	DAI_ID_I2S = 0,
+ 	DAI_ID_SPDIF,
+@@ -689,9 +685,23 @@ static int hdmi_codec_pcm_new(struct snd_soc_pcm_runtime *rtd,
+ 	return snd_ctl_add(rtd->card->snd_card, kctl);
+ }
+ 
++static int hdmi_dai_probe(struct snd_soc_dai *dai)
++{
++	struct snd_soc_dapm_context *dapm;
++	struct snd_soc_dapm_route route = {
++		.sink = "TX",
++		.source = dai->driver->playback.stream_name,
++	};
++
++	dapm = snd_soc_component_get_dapm(dai->component);
++
++	return snd_soc_dapm_add_routes(dapm, &route, 1);
++}
++
+ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ 	.name = "i2s-hifi",
+ 	.id = DAI_ID_I2S,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "I2S Playback",
+ 		.channels_min = 2,
+@@ -707,6 +717,7 @@ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ static const struct snd_soc_dai_driver hdmi_spdif_dai = {
+ 	.name = "spdif-hifi",
+ 	.id = DAI_ID_SPDIF,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "SPDIF Playback",
+ 		.channels_min = 2,
+@@ -733,8 +744,6 @@ static int hdmi_of_xlate_dai_id(struct snd_soc_component *component,
+ static const struct snd_soc_component_driver hdmi_driver = {
+ 	.dapm_widgets		= hdmi_widgets,
+ 	.num_dapm_widgets	= ARRAY_SIZE(hdmi_widgets),
+-	.dapm_routes		= hdmi_routes,
+-	.num_dapm_routes	= ARRAY_SIZE(hdmi_routes),
+ 	.of_xlate_dai_id	= hdmi_of_xlate_dai_id,
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 1570b91bf018..dca82dd6e3bf 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 6b5669f3e85d..39d2c67cd064 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1696,6 +1696,13 @@ static irqreturn_t rt5651_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void rt5651_cancel_work(void *data)
++{
++	struct rt5651_priv *rt5651 = data;
++
++	cancel_work_sync(&rt5651->jack_detect_work);
++}
++
+ static int rt5651_set_jack(struct snd_soc_component *component,
+ 			   struct snd_soc_jack *hp_jack, void *data)
+ {
+@@ -2036,6 +2043,11 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 
+ 	INIT_WORK(&rt5651->jack_detect_work, rt5651_jack_detect_work);
+ 
++	/* Make sure work is stopped on probe-error / remove */
++	ret = devm_add_action_or_reset(&i2c->dev, rt5651_cancel_work, rt5651);
++	if (ret)
++		return ret;
++
+ 	ret = devm_snd_soc_register_component(&i2c->dev,
+ 				&soc_component_dev_rt5651,
+ 				rt5651_dai, ARRAY_SIZE(rt5651_dai));
+@@ -2043,15 +2055,6 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 	return ret;
+ }
+ 
+-static int rt5651_i2c_remove(struct i2c_client *i2c)
+-{
+-	struct rt5651_priv *rt5651 = i2c_get_clientdata(i2c);
+-
+-	cancel_work_sync(&rt5651->jack_detect_work);
+-
+-	return 0;
+-}
+-
+ static struct i2c_driver rt5651_i2c_driver = {
+ 	.driver = {
+ 		.name = "rt5651",
+@@ -2059,7 +2062,6 @@ static struct i2c_driver rt5651_i2c_driver = {
+ 		.of_match_table = of_match_ptr(rt5651_of_match),
+ 	},
+ 	.probe = rt5651_i2c_probe,
+-	.remove   = rt5651_i2c_remove,
+ 	.id_table = rt5651_i2c_id,
+ };
+ module_i2c_driver(rt5651_i2c_driver);
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index 5002dd05bf27..f8298be7038f 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -1180,7 +1180,7 @@ static void of_q6afe_parse_dai_data(struct device *dev,
+ 		int id, i, num_lines;
+ 
+ 		ret = of_property_read_u32(node, "reg", &id);
+-		if (ret || id > AFE_PORT_MAX) {
++		if (ret || id < 0 || id >= AFE_PORT_MAX) {
+ 			dev_err(dev, "valid dai id not found:%d\n", ret);
+ 			continue;
+ 		}
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8aac48f9c322..08aa78007020 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2875,7 +2875,8 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ 
+ #define AU0828_DEVICE(vid, pid, vname, pname) { \
+-	USB_DEVICE_VENDOR_SPEC(vid, pid), \
++	.idVendor = vid, \
++	.idProduct = pid, \
+ 	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+ 		       USB_DEVICE_ID_MATCH_INT_CLASS | \
+ 		       USB_DEVICE_ID_MATCH_INT_SUBCLASS, \
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 02b6cc02767f..dde87d64bc32 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1373,6 +1373,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;
+ 
++	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
+ 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
+ 	case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
+ 	case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
+@@ -1443,6 +1444,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	 */
+ 	switch (USB_ID_VENDOR(chip->usb_id)) {
+ 	case 0x20b1:  /* XMOS based devices */
++	case 0x152a:  /* Thesycon devices */
+ 	case 0x25ce:  /* Mytek devices */
+ 		if (fp->dsd_raw)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
+index dbf6e8bd98ba..bbb2a8ef367c 100644
+--- a/tools/hv/hv_kvp_daemon.c
++++ b/tools/hv/hv_kvp_daemon.c
+@@ -286,7 +286,7 @@ static int kvp_key_delete(int pool, const __u8 *key, int key_size)
+ 		 * Found a match; just move the remaining
+ 		 * entries up.
+ 		 */
+-		if (i == num_records) {
++		if (i == (num_records - 1)) {
+ 			kvp_file_info[pool].num_records--;
+ 			kvp_update_file(pool);
+ 			return 0;
+diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+index ef5d59a5742e..7c6eeb4633fe 100644
+--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
++++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+@@ -58,9 +58,13 @@ static int check_return_reg(int ra_regno, Dwarf_Frame *frame)
+ 	}
+ 
+ 	/*
+-	 * Check if return address is on the stack.
++	 * Check if return address is on the stack. If return address
++	 * is in a register (typically R0), it is yet to be saved on
++	 * the stack.
+ 	 */
+-	if (nops != 0 || ops != NULL)
++	if ((nops != 0 || ops != NULL) &&
++		!(nops == 1 && ops[0].atom == DW_OP_regx &&
++			ops[0].number2 == 0 && ops[0].offset == 0))
+ 		return 0;
+ 
+ 	/*
+@@ -246,7 +250,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
+ 	if (!chain || chain->nr < 3)
+ 		return skip_slot;
+ 
+-	ip = chain->ips[2];
++	ip = chain->ips[1];
+ 
+ 	thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+ 
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index dd850a26d579..4f5de8245b32 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -599,7 +599,7 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
+ 			for (subi = 0; subi < subn; subi++) {
+ 				pr_info("%2d.%1d: %-*s:", i, subi + 1, subw,
+ 					t->subtest.get_desc(subi));
+-				err = test_and_print(t, skip, subi);
++				err = test_and_print(t, skip, subi + 1);
+ 				if (err != TEST_OK && t->subtest.skip_if_fail)
+ 					skip = true;
+ 			}
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 94e513e62b34..3013ac8f83d0 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -13,11 +13,24 @@
+ libc=$(grep -w libc /proc/self/maps | head -1 | sed -r 's/.*[[:space:]](\/.*)/\1/g')
+ nm -Dg $libc 2>/dev/null | fgrep -q inet_pton || exit 254
+ 
++event_pattern='probe_libc:inet_pton(\_[[:digit:]]+)?'
++
++add_libc_inet_pton_event() {
++
++	event_name=$(perf probe -f -x $libc -a inet_pton 2>&1 | tail -n +2 | head -n -5 | \
++			grep -P -o "$event_pattern(?=[[:space:]]\(on inet_pton in $libc\))")
++
++	if [ $? -ne 0 -o -z "$event_name" ] ; then
++		printf "FAIL: could not add event\n"
++		return 1
++	fi
++}
++
+ trace_libc_inet_pton_backtrace() {
+ 
+ 	expected=`mktemp -u /tmp/expected.XXX`
+ 
+-	echo "ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)" > $expected
++	echo "ping[][0-9 \.:]+$event_name: \([[:xdigit:]]+\)" > $expected
+ 	echo ".*inet_pton\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 	case "$(uname -m)" in
+ 	s390x)
+@@ -26,6 +39,12 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "(__GI_)?getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 		echo "main\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
++	ppc64|ppc64le)
++		eventattr='max-stack=4'
++		echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		;;
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+@@ -35,7 +54,7 @@ trace_libc_inet_pton_backtrace() {
+ 
+ 	perf_data=`mktemp -u /tmp/perf.data.XXX`
+ 	perf_script=`mktemp -u /tmp/perf.script.XXX`
+-	perf record -e probe_libc:inet_pton/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
++	perf record -e $event_name/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
+ 	perf script -i $perf_data > $perf_script
+ 
+ 	exec 3<$perf_script
+@@ -46,7 +65,7 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "$line" | egrep -q "$pattern"
+ 		if [ $? -ne 0 ] ; then
+ 			printf "FAIL: expected backtrace entry \"%s\" got \"%s\"\n" "$pattern" "$line"
+-			exit 1
++			return 1
+ 		fi
+ 	done
+ 
+@@ -56,13 +75,20 @@ trace_libc_inet_pton_backtrace() {
+ 	# even if the perf script output does not match.
+ }
+ 
++delete_libc_inet_pton_event() {
++
++	if [ -n "$event_name" ] ; then
++		perf probe -q -d $event_name
++	fi
++}
++
+ # Check for IPv6 interface existence
+ ip a sh lo | fgrep -q inet6 || exit 2
+ 
+ skip_if_no_perf_probe && \
+-perf probe -q $libc inet_pton && \
++add_libc_inet_pton_event && \
+ trace_libc_inet_pton_backtrace
+ err=$?
+ rm -f ${perf_data} ${perf_script} ${expected}
+-perf probe -q -d probe_libc:inet_pton
++delete_libc_inet_pton_event
+ exit $err
+diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
+index 7798a2cc8a86..31279a7bd919 100644
+--- a/tools/perf/util/comm.c
++++ b/tools/perf/util/comm.c
+@@ -20,9 +20,10 @@ static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}
+ 
+ static struct comm_str *comm_str__get(struct comm_str *cs)
+ {
+-	if (cs)
+-		refcount_inc(&cs->refcnt);
+-	return cs;
++	if (cs && refcount_inc_not_zero(&cs->refcnt))
++		return cs;
++
++	return NULL;
+ }
+ 
+ static void comm_str__put(struct comm_str *cs)
+@@ -67,9 +68,14 @@ struct comm_str *__comm_str__findnew(const char *str, struct rb_root *root)
+ 		parent = *p;
+ 		iter = rb_entry(parent, struct comm_str, rb_node);
+ 
++		/*
++		 * If we race with comm_str__put, iter->refcnt is 0
++		 * and it will be removed within comm_str__put call
++		 * shortly, ignore it in this search.
++		 */
+ 		cmp = strcmp(str, iter->str);
+-		if (!cmp)
+-			return comm_str__get(iter);
++		if (!cmp && comm_str__get(iter))
++			return iter;
+ 
+ 		if (cmp < 0)
+ 			p = &(*p)->rb_left;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 653ff65aa2c3..5af58aac91ad 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -2587,7 +2587,7 @@ static const struct feature_ops feat_ops[HEADER_LAST_FEATURE] = {
+ 	FEAT_OPR(NUMA_TOPOLOGY,	numa_topology,	true),
+ 	FEAT_OPN(BRANCH_STACK,	branch_stack,	false),
+ 	FEAT_OPR(PMU_MAPPINGS,	pmu_mappings,	false),
+-	FEAT_OPN(GROUP_DESC,	group_desc,	false),
++	FEAT_OPR(GROUP_DESC,	group_desc,	false),
+ 	FEAT_OPN(AUXTRACE,	auxtrace,	false),
+ 	FEAT_OPN(STAT,		stat,		false),
+ 	FEAT_OPN(CACHE,		cache,		true),
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index e7b4a8b513f2..22dbb6612b41 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2272,6 +2272,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
++	u64 addr;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2279,7 +2280,13 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	if (append_inlines(cursor, entry->map, entry->sym, entry->ip) == 0)
+ 		return 0;
+ 
+-	srcline = callchain_srcline(entry->map, entry->sym, entry->ip);
++	/*
++	 * Convert entry->ip from a virtual address to an offset in
++	 * its corresponding binary.
++	 */
++	addr = map__map_ip(entry->map, entry->ip);
++
++	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+ 				       entry->map, entry->sym,
+ 				       false, NULL, 0, 0, 0, srcline);
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 89ac5b5dc218..f5431092c6d1 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -590,6 +590,13 @@ struct symbol *map_groups__find_symbol(struct map_groups *mg,
+ 	return NULL;
+ }
+ 
++static bool map__contains_symbol(struct map *map, struct symbol *sym)
++{
++	u64 ip = map->unmap_ip(map, sym->start);
++
++	return ip >= map->start && ip < map->end;
++}
++
+ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 					 struct map **mapp)
+ {
+@@ -605,6 +612,10 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 
+ 		if (sym == NULL)
+ 			continue;
++		if (!map__contains_symbol(pos, sym)) {
++			sym = NULL;
++			continue;
++		}
+ 		if (mapp != NULL)
+ 			*mapp = pos;
+ 		goto out;
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index 538db4e5d1e6..6f318b15950e 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -77,7 +77,7 @@ static int entry(u64 ip, struct unwind_info *ui)
+ 	if (__report_module(&al, ip, ui))
+ 		return -1;
+ 
+-	e->ip  = al.addr;
++	e->ip  = ip;
+ 	e->map = al.map;
+ 	e->sym = al.sym;
+ 
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index 6a11bc7e6b27..79f521a552cf 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -575,7 +575,7 @@ static int entry(u64 ip, struct thread *thread,
+ 	struct addr_location al;
+ 
+ 	e.sym = thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+-	e.ip = al.addr;
++	e.ip  = ip;
+ 	e.map = al.map;
+ 
+ 	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index e2926f72a821..94c3bdf82ff7 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -1308,7 +1308,8 @@ static void smart_init(struct nfit_test *t)
+ 			| ND_INTEL_SMART_ALARM_VALID
+ 			| ND_INTEL_SMART_USED_VALID
+ 			| ND_INTEL_SMART_SHUTDOWN_VALID
+-			| ND_INTEL_SMART_MTEMP_VALID,
++			| ND_INTEL_SMART_MTEMP_VALID
++			| ND_INTEL_SMART_CTEMP_VALID,
+ 		.health = ND_INTEL_SMART_NON_CRITICAL_HEALTH,
+ 		.media_temperature = 23 * 16,
+ 		.ctrl_temperature = 25 * 16,
+diff --git a/tools/testing/selftests/android/ion/ionapp_export.c b/tools/testing/selftests/android/ion/ionapp_export.c
+index a944e72621a9..b5fa0a2dc968 100644
+--- a/tools/testing/selftests/android/ion/ionapp_export.c
++++ b/tools/testing/selftests/android/ion/ionapp_export.c
+@@ -51,6 +51,7 @@ int main(int argc, char *argv[])
+ 
+ 	heap_size = 0;
+ 	flags = 0;
++	heap_type = ION_HEAP_TYPE_SYSTEM;
+ 
+ 	while ((opt = getopt(argc, argv, "hi:s:")) != -1) {
+ 		switch (opt) {
+diff --git a/tools/testing/selftests/timers/raw_skew.c b/tools/testing/selftests/timers/raw_skew.c
+index ca6cd146aafe..dcf73c5dab6e 100644
+--- a/tools/testing/selftests/timers/raw_skew.c
++++ b/tools/testing/selftests/timers/raw_skew.c
+@@ -134,6 +134,11 @@ int main(int argv, char **argc)
+ 	printf(" %lld.%i(act)", ppm/1000, abs((int)(ppm%1000)));
+ 
+ 	if (llabs(eppm - ppm) > 1000) {
++		if (tx1.offset || tx2.offset ||
++		    tx1.freq != tx2.freq || tx1.tick != tx2.tick) {
++			printf("	[SKIP]\n");
++			return ksft_exit_skip("The clock was adjusted externally. Shutdown NTPd or other time sync daemons\n");
++		}
+ 		printf("	[FAILED]\n");
+ 		return ksft_exit_fail();
+ 	}
+diff --git a/tools/testing/selftests/vDSO/vdso_test.c b/tools/testing/selftests/vDSO/vdso_test.c
+index 2df26bd0099c..eda53f833d8e 100644
+--- a/tools/testing/selftests/vDSO/vdso_test.c
++++ b/tools/testing/selftests/vDSO/vdso_test.c
+@@ -15,6 +15,8 @@
+ #include <sys/auxv.h>
+ #include <sys/time.h>
+ 
++#include "../kselftest.h"
++
+ extern void *vdso_sym(const char *version, const char *name);
+ extern void vdso_init_from_sysinfo_ehdr(uintptr_t base);
+ extern void vdso_init_from_auxv(void *auxv);
+@@ -37,7 +39,7 @@ int main(int argc, char **argv)
+ 	unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR);
+ 	if (!sysinfo_ehdr) {
+ 		printf("AT_SYSINFO_EHDR is not present!\n");
+-		return 0;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR));
+@@ -48,7 +50,7 @@ int main(int argc, char **argv)
+ 
+ 	if (!gtod) {
+ 		printf("Could not find %s\n", name);
+-		return 1;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	struct timeval tv;
+@@ -59,6 +61,7 @@ int main(int argc, char **argv)
+ 		       (long long)tv.tv_sec, (long long)tv.tv_usec);
+ 	} else {
+ 		printf("%s failed\n", name);
++		return KSFT_FAIL;
+ 	}
+ 
+ 	return 0;
+diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
+index 2673efce65f3..b71417913741 100644
+--- a/virt/kvm/arm/vgic/vgic-init.c
++++ b/virt/kvm/arm/vgic/vgic-init.c
+@@ -271,6 +271,10 @@ int vgic_init(struct kvm *kvm)
+ 	if (vgic_initialized(kvm))
+ 		return 0;
+ 
++	/* Are we also in the middle of creating a VCPU? */
++	if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus))
++		return -EBUSY;
++
+ 	/* freeze the number of spis */
+ 	if (!dist->nr_spis)
+ 		dist->nr_spis = VGIC_NR_IRQS_LEGACY - VGIC_NR_PRIVATE_IRQS;
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+index ffc587bf4742..64e571cc02df 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+@@ -352,6 +352,9 @@ static void vgic_mmio_write_apr(struct kvm_vcpu *vcpu,
+ 
+ 		if (n > vgic_v3_max_apr_idx(vcpu))
+ 			return;
++
++		n = array_index_nospec(n, 4);
++
+ 		/* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */
+ 		vgicv3->vgic_ap1r[n] = val;
+ 	}


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     d40a062a74bcd8b78ded7bb3cbc35664ce34671e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:28:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d40a062a

Linux patch 4.18.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1001_linux-4.18.2.patch | 1679 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1683 insertions(+)

diff --git a/0000_README b/0000_README
index ad4a3ed..c801597 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-4.18.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.1
 
+Patch:  1001_linux-4.18.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-4.18.2.patch b/1001_linux-4.18.2.patch
new file mode 100644
index 0000000..1853255
--- /dev/null
+++ b/1001_linux-4.18.2.patch
@@ -0,0 +1,1679 @@
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index ddc029734b25..005d8842a503 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -35,7 +35,7 @@ binutils               2.20             ld -v
+ flex                   2.5.35           flex --version
+ bison                  2.0              bison --version
+ util-linux             2.10o            fdformat --version
+-module-init-tools      0.9.10           depmod -V
++kmod                   13               depmod -V
+ e2fsprogs              1.41.4           e2fsck -V
+ jfsutils               1.1.3            fsck.jfs -V
+ reiserfsprogs          3.6.3            reiserfsck -V
+@@ -156,12 +156,6 @@ is not build with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
+ reproduce the Oops with that option, then you can still decode that Oops
+ with ksymoops.
+ 
+-Module-Init-Tools
+------------------
+-
+-A new module loader is now in the kernel that requires ``module-init-tools``
+-to use.  It is backward compatible with the 2.4.x series kernels.
+-
+ Mkinitrd
+ --------
+ 
+@@ -371,16 +365,17 @@ Util-linux
+ 
+ - <https://www.kernel.org/pub/linux/utils/util-linux/>
+ 
++Kmod
++----
++
++- <https://www.kernel.org/pub/linux/utils/kernel/kmod/>
++- <https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git>
++
+ Ksymoops
+ --------
+ 
+ - <https://www.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+ 
+-Module-Init-Tools
+------------------
+-
+-- <https://www.kernel.org/pub/linux/utils/kernel/module-init-tools/>
+-
+ Mkinitrd
+ --------
+ 
+diff --git a/Makefile b/Makefile
+index 5edf963148e8..fd409a0fd4e1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 493ff75670ff..8ae5d7ae4af3 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -977,12 +977,12 @@ int pmd_clear_huge(pmd_t *pmdp)
+ 	return 1;
+ }
+ 
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return pud_none(*pud);
+ }
+ 
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return pmd_none(*pmd);
+ }
+diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+index 16c4ccb1f154..d2364c55bbde 100644
+--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
++++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+@@ -265,7 +265,7 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
+ 	vpinsrd	$1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
+-	vmovd   _args_digest(state , idx, 4) , %xmm0
++	vmovd	_args_digest+4*32(state, idx, 4), %xmm1
+ 	vpinsrd	$1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
+diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
+index de27615c51ea..0c662cb6a723 100644
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -95,6 +95,11 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
+ 	} else {
+ 		for_each_cpu(cpu, cpus) {
+ 			vcpu = hv_cpu_number_to_vp_number(cpu);
++			if (vcpu == VP_INVAL) {
++				local_irq_restore(flags);
++				goto do_native;
++			}
++
+ 			if (vcpu >= 64)
+ 				goto do_native;
+ 
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index 5cdcdbd4d892..89789e8c80f6 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -3,6 +3,7 @@
+ #define _ASM_X86_I8259_H
+ 
+ #include <linux/delay.h>
++#include <asm/io.h>
+ 
+ extern unsigned int cached_irq_mask;
+ 
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index d492752f79e1..391f358ebb4c 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -394,10 +394,10 @@ extern int uv_hub_info_version(void)
+ EXPORT_SYMBOL(uv_hub_info_version);
+ 
+ /* Default UV memory block size is 2GB */
+-static unsigned long mem_block_size = (2UL << 30);
++static unsigned long mem_block_size __initdata = (2UL << 30);
+ 
+ /* Kernel parameter to specify UV mem block size */
+-static int parse_mem_block_size(char *ptr)
++static int __init parse_mem_block_size(char *ptr)
+ {
+ 	unsigned long size = memparse(ptr, NULL);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index c4f0ae49a53d..664f161f96ff 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9eda6f730ec4..b41b72bd8bb8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -905,7 +905,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 	apply_forced_caps(c);
+ }
+ 
+-static void get_cpu_address_sizes(struct cpuinfo_x86 *c)
++void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ {
+ 	u32 eax, ebx, ecx, edx;
+ 
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index e59c0ea82a33..7b229afa0a37 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -46,6 +46,7 @@ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+ 			    *const __x86_cpu_dev_end[];
+ 
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
++extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+ extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+ extern u32 get_scattered_cpuid_leaf(unsigned int level,
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 7bb6f65c79de..29505724202a 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1784,6 +1784,12 @@ int set_memory_nonglobal(unsigned long addr, int numpages)
+ 				      __pgprot(_PAGE_GLOBAL), 0);
+ }
+ 
++int set_memory_global(unsigned long addr, int numpages)
++{
++	return change_page_attr_set(&addr, numpages,
++				    __pgprot(_PAGE_GLOBAL), 0);
++}
++
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+ 	struct cpa_data cpa;
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 47b5951e592b..e3deefb891da 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -719,28 +719,50 @@ int pmd_clear_huge(pmd_t *pmd)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_X86_64
+ /**
+  * pud_free_pmd_page - Clear pud entry and free pmd page.
+  * @pud: Pointer to a PUD.
++ * @addr: Virtual address associated with pud.
+  *
+- * Context: The pud range has been unmaped and TLB purged.
++ * Context: The pud range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ *
++ * NOTE: Callers must allow a single page allocation.
+  */
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+-	pmd_t *pmd;
++	pmd_t *pmd, *pmd_sv;
++	pte_t *pte;
+ 	int i;
+ 
+ 	if (pud_none(*pud))
+ 		return 1;
+ 
+ 	pmd = (pmd_t *)pud_page_vaddr(*pud);
++	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
++	if (!pmd_sv)
++		return 0;
+ 
+-	for (i = 0; i < PTRS_PER_PMD; i++)
+-		if (!pmd_free_pte_page(&pmd[i]))
+-			return 0;
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		pmd_sv[i] = pmd[i];
++		if (!pmd_none(pmd[i]))
++			pmd_clear(&pmd[i]);
++	}
+ 
+ 	pud_clear(pud);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		if (!pmd_none(pmd_sv[i])) {
++			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
++			free_page((unsigned long)pte);
++		}
++	}
++
++	free_page((unsigned long)pmd_sv);
+ 	free_page((unsigned long)pmd);
+ 
+ 	return 1;
+@@ -749,11 +771,12 @@ int pud_free_pmd_page(pud_t *pud)
+ /**
+  * pmd_free_pte_page - Clear pmd entry and free pte page.
+  * @pmd: Pointer to a PMD.
++ * @addr: Virtual address associated with pmd.
+  *
+- * Context: The pmd range has been unmaped and TLB purged.
++ * Context: The pmd range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+  */
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	pte_t *pte;
+ 
+@@ -762,8 +785,30 @@ int pmd_free_pte_page(pmd_t *pmd)
+ 
+ 	pte = (pte_t *)pmd_page_vaddr(*pmd);
+ 	pmd_clear(pmd);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
+ 	free_page((unsigned long)pte);
+ 
+ 	return 1;
+ }
++
++#else /* !CONFIG_X86_64 */
++
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
++{
++	return pud_none(*pud);
++}
++
++/*
++ * Disable free page handling on x86-PAE. This assures that ioremap()
++ * does not update sync'd pmd entries. See vmalloc_sync_one().
++ */
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
++{
++	return pmd_none(*pmd);
++}
++
++#endif /* CONFIG_X86_64 */
+ #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index fb752d9a3ce9..946455e9cfef 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -435,6 +435,13 @@ static inline bool pti_kernel_image_global_ok(void)
+ 	return true;
+ }
+ 
++/*
++ * This is the only user for these and it is not arch-generic
++ * like the other set_memory.h functions.  Just extern them.
++ */
++extern int set_memory_nonglobal(unsigned long addr, int numpages);
++extern int set_memory_global(unsigned long addr, int numpages);
++
+ /*
+  * For some configurations, map all of kernel text into the user page
+  * tables.  This reduces TLB misses, especially on non-PCID systems.
+@@ -447,7 +454,8 @@ void pti_clone_kernel_text(void)
+ 	 * clone the areas past rodata, they might contain secrets.
+ 	 */
+ 	unsigned long start = PFN_ALIGN(_text);
+-	unsigned long end = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_clone  = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table);
+ 
+ 	if (!pti_kernel_image_global_ok())
+ 		return;
+@@ -459,14 +467,18 @@ void pti_clone_kernel_text(void)
+ 	 * pti_set_kernel_image_nonglobal() did to clear the
+ 	 * global bit.
+ 	 */
+-	pti_clone_pmds(start, end, _PAGE_RW);
++	pti_clone_pmds(start, end_clone, _PAGE_RW);
++
++	/*
++	 * pti_clone_pmds() will set the global bit in any PMDs
++	 * that it clones, but we also need to get any PTEs in
++	 * the last level for areas that are not huge-page-aligned.
++	 */
++
++	/* Set the global bit for normal non-__init kernel text: */
++	set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
+ }
+ 
+-/*
+- * This is the only user for it and it is not arch-generic like
+- * the other set_memory.h functions.  Just extern it.
+- */
+-extern int set_memory_nonglobal(unsigned long addr, int numpages);
+ void pti_set_kernel_image_nonglobal(void)
+ {
+ 	/*
+@@ -478,9 +490,11 @@ void pti_set_kernel_image_nonglobal(void)
+ 	unsigned long start = PFN_ALIGN(_text);
+ 	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+ 
+-	if (pti_kernel_image_global_ok())
+-		return;
+-
++	/*
++	 * This clears _PAGE_GLOBAL from the entire kernel image.
++	 * pti_clone_kernel_text() map put _PAGE_GLOBAL back for
++	 * areas that are mapped to userspace.
++	 */
+ 	set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
+ }
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 439a94bf89ad..c5e3f2acc7f0 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1259,6 +1259,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	get_cpu_cap(&boot_cpu_data);
+ 	x86_configure_nx();
+ 
++	/* Determine virtual and physical address sizes */
++	get_cpu_address_sizes(&boot_cpu_data);
++
+ 	/* Let's presume PV guests always boot on vCPU with id 0. */
+ 	per_cpu(xen_vcpu_id, 0) = 0;
+ 
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index d880a4897159..4ee7c041bb82 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -71,11 +71,9 @@ static inline u8 *ablkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+-						unsigned int bsize)
++static inline void ablkcipher_done_slow(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+-	unsigned int n = bsize;
+-
+ 	for (;;) {
+ 		unsigned int len_this_page = scatterwalk_pagelen(&walk->out);
+ 
+@@ -87,17 +85,13 @@ static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+ 		n -= len_this_page;
+ 		scatterwalk_start(&walk->out, sg_next(walk->out.sg));
+ 	}
+-
+-	return bsize;
+ }
+ 
+-static inline unsigned int ablkcipher_done_fast(struct ablkcipher_walk *walk,
+-						unsigned int n)
++static inline void ablkcipher_done_fast(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ static int ablkcipher_walk_next(struct ablkcipher_request *req,
+@@ -107,39 +101,40 @@ int ablkcipher_walk_done(struct ablkcipher_request *req,
+ 			 struct ablkcipher_walk *walk, int err)
+ {
+ 	struct crypto_tfm *tfm = req->base.tfm;
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW)))
+-			n = ablkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = ablkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW))) {
++		ablkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		ablkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
+-
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(req->base.flags);
+ 		return ablkcipher_walk_next(req, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != req->info)
+ 		memcpy(req->info, walk->iv, tfm->crt_ablkcipher.ivsize);
+ 	kfree(walk->iv_buffer);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(ablkcipher_walk_done);
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 01c0d4aa2563..77b5fa293f66 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -70,19 +70,18 @@ static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int blkcipher_done_slow(struct blkcipher_walk *walk,
+-					       unsigned int bsize)
++static inline void blkcipher_done_slow(struct blkcipher_walk *walk,
++				       unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+ 	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
+ 	addr = blkcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
+-	return bsize;
+ }
+ 
+-static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+-					       unsigned int n)
++static inline void blkcipher_done_fast(struct blkcipher_walk *walk,
++				       unsigned int n)
+ {
+ 	if (walk->flags & BLKCIPHER_WALK_COPY) {
+ 		blkcipher_map_dst(walk);
+@@ -96,49 +95,48 @@ static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+ 
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ int blkcipher_walk_done(struct blkcipher_desc *desc,
+ 			struct blkcipher_walk *walk, int err)
+ {
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW)))
+-			n = blkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = blkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW))) {
++		blkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		blkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(desc->flags);
+ 		return blkcipher_walk_next(desc, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != desc->info)
+ 		memcpy(desc->info, walk->iv, walk->ivsize);
+ 	if (walk->buffer != walk->page)
+ 		kfree(walk->buffer);
+ 	if (walk->page)
+ 		free_page((unsigned long)walk->page);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(blkcipher_walk_done);
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 0fe2a2923ad0..5dc8407bdaa9 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -95,7 +95,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
++static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+@@ -103,23 +103,24 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ 	addr = skcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize,
+ 			       (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+-	return 0;
+ }
+ 
+ int skcipher_walk_done(struct skcipher_walk *walk, int err)
+ {
+-	unsigned int n = walk->nbytes - err;
+-	unsigned int nbytes;
+-
+-	nbytes = walk->total - n;
+-
+-	if (unlikely(err < 0)) {
+-		nbytes = 0;
+-		n = 0;
+-	} else if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
+-					   SKCIPHER_WALK_SLOW |
+-					   SKCIPHER_WALK_COPY |
+-					   SKCIPHER_WALK_DIFF)))) {
++	unsigned int n; /* bytes processed */
++	bool more;
++
++	if (unlikely(err < 0))
++		goto finish;
++
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
++
++	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
++				    SKCIPHER_WALK_SLOW |
++				    SKCIPHER_WALK_COPY |
++				    SKCIPHER_WALK_DIFF)))) {
+ unmap_src:
+ 		skcipher_unmap_src(walk);
+ 	} else if (walk->flags & SKCIPHER_WALK_DIFF) {
+@@ -131,28 +132,28 @@ unmap_src:
+ 		skcipher_unmap_dst(walk);
+ 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+ 		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
+ 			err = -EINVAL;
+-			nbytes = 0;
+-		} else
+-			n = skcipher_done_slow(walk, n);
++			goto finish;
++		}
++		skcipher_done_slow(walk, n);
++		goto already_advanced;
+ 	}
+ 
+-	if (err > 0)
+-		err = 0;
+-
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++already_advanced:
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+ 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+ 		return skcipher_walk_next(walk);
+ 	}
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 
+ 	/* Short-circuit for the common/fast path. */
+ 	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+@@ -399,7 +400,7 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
+ 	unsigned size;
+ 	u8 *iv;
+ 
+-	aligned_bs = ALIGN(bs, alignmask);
++	aligned_bs = ALIGN(bs, alignmask + 1);
+ 
+ 	/* Minimum size to align buffer by alignmask. */
+ 	size = alignmask & ~a;
+diff --git a/crypto/vmac.c b/crypto/vmac.c
+index df76a816cfb2..bb2fc787d615 100644
+--- a/crypto/vmac.c
++++ b/crypto/vmac.c
+@@ -1,6 +1,10 @@
+ /*
+- * Modified to interface to the Linux kernel
++ * VMAC: Message Authentication Code using Universal Hashing
++ *
++ * Reference: https://tools.ietf.org/html/draft-krovetz-vmac-01
++ *
+  * Copyright (c) 2009, Intel Corporation.
++ * Copyright (c) 2018, Google Inc.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms and conditions of the GNU General Public License,
+@@ -16,14 +20,15 @@
+  * Place - Suite 330, Boston, MA 02111-1307 USA.
+  */
+ 
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
++/*
++ * Derived from:
++ *	VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
++ *	This implementation is herby placed in the public domain.
++ *	The authors offers no warranty. Use at your own risk.
++ *	Last modified: 17 APR 08, 1700 PDT
++ */
+ 
++#include <asm/unaligned.h>
+ #include <linux/init.h>
+ #include <linux/types.h>
+ #include <linux/crypto.h>
+@@ -31,9 +36,35 @@
+ #include <linux/scatterlist.h>
+ #include <asm/byteorder.h>
+ #include <crypto/scatterwalk.h>
+-#include <crypto/vmac.h>
+ #include <crypto/internal/hash.h>
+ 
++/*
++ * User definable settings.
++ */
++#define VMAC_TAG_LEN	64
++#define VMAC_KEY_SIZE	128/* Must be 128, 192 or 256			*/
++#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
++#define VMAC_NHBYTES	128/* Must 2^i for any 3 < i < 13 Standard = 128*/
++
++/* per-transform (per-key) context */
++struct vmac_tfm_ctx {
++	struct crypto_cipher *cipher;
++	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
++	u64 polykey[2*VMAC_TAG_LEN/64];
++	u64 l3key[2*VMAC_TAG_LEN/64];
++};
++
++/* per-request context */
++struct vmac_desc_ctx {
++	union {
++		u8 partial[VMAC_NHBYTES];	/* partial block */
++		__le64 partial_words[VMAC_NHBYTES / 8];
++	};
++	unsigned int partial_size;	/* size of the partial block */
++	bool first_block_processed;
++	u64 polytmp[2*VMAC_TAG_LEN/64];	/* running total of L2-hash */
++};
++
+ /*
+  * Constants and masks
+  */
+@@ -318,13 +349,6 @@ static void poly_step_func(u64 *ahi, u64 *alo,
+ 	} while (0)
+ #endif
+ 
+-static void vhash_abort(struct vmac_ctx *ctx)
+-{
+-	ctx->polytmp[0] = ctx->polykey[0] ;
+-	ctx->polytmp[1] = ctx->polykey[1] ;
+-	ctx->first_block_processed = 0;
+-}
+-
+ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ {
+ 	u64 rh, rl, t, z = 0;
+@@ -364,280 +388,209 @@ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ 	return rl;
+ }
+ 
+-static void vhash_update(const unsigned char *m,
+-			unsigned int mbytes, /* Pos multiple of VMAC_NHBYTES */
+-			struct vmac_ctx *ctx)
++/* L1 and L2-hash one or more VMAC_NHBYTES-byte blocks */
++static void vhash_blocks(const struct vmac_tfm_ctx *tctx,
++			 struct vmac_desc_ctx *dctx,
++			 const __le64 *mptr, unsigned int blocks)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	if (!mbytes)
+-		return;
+-
+-	BUG_ON(mbytes % VMAC_NHBYTES);
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;  /* Must be non-zero */
+-
+-	ch = ctx->polytmp[0];
+-	cl = ctx->polytmp[1];
+-
+-	if (!ctx->first_block_processed) {
+-		ctx->first_block_processed = 1;
++	const u64 *kptr = tctx->nhkey;
++	const u64 pkh = tctx->polykey[0];
++	const u64 pkl = tctx->polykey[1];
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++	u64 rh, rl;
++
++	if (!dctx->first_block_processed) {
++		dctx->first_block_processed = true;
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		ADD128(ch, cl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
++		blocks--;
+ 	}
+ 
+-	while (i--) {
++	while (blocks--) {
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		poly_step(ch, cl, pkh, pkl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+ 	}
+ 
+-	ctx->polytmp[0] = ch;
+-	ctx->polytmp[1] = cl;
++	dctx->polytmp[0] = ch;
++	dctx->polytmp[1] = cl;
+ }
+ 
+-static u64 vhash(unsigned char m[], unsigned int mbytes,
+-			u64 *tagl, struct vmac_ctx *ctx)
++static int vmac_setkey(struct crypto_shash *tfm,
++		       const u8 *key, unsigned int keylen)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i, remaining;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;
+-	remaining = mbytes % VMAC_NHBYTES;
+-
+-	if (ctx->first_block_processed) {
+-		ch = ctx->polytmp[0];
+-		cl = ctx->polytmp[1];
+-	} else if (i) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
+-	} else if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		goto do_l3;
+-	} else {/* Empty String */
+-		ch = pkh; cl = pkl;
+-		goto do_l3;
+-	}
+-
+-	while (i--) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-	}
+-	if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-	}
+-
+-do_l3:
+-	vhash_abort(ctx);
+-	remaining *= 8;
+-	return l3hash(ch, cl, ctx->l3key[0], ctx->l3key[1], remaining);
+-}
++	struct vmac_tfm_ctx *tctx = crypto_shash_ctx(tfm);
++	__be64 out[2];
++	u8 in[16] = { 0 };
++	unsigned int i;
++	int err;
+ 
+-static u64 vmac(unsigned char m[], unsigned int mbytes,
+-			const unsigned char n[16], u64 *tagl,
+-			struct vmac_ctx_t *ctx)
+-{
+-	u64 *in_n, *out_p;
+-	u64 p, h;
+-	int i;
+-
+-	in_n = ctx->__vmac_ctx.cached_nonce;
+-	out_p = ctx->__vmac_ctx.cached_aes;
+-
+-	i = n[15] & 1;
+-	if ((*(u64 *)(n+8) != in_n[1]) || (*(u64 *)(n) != in_n[0])) {
+-		in_n[0] = *(u64 *)(n);
+-		in_n[1] = *(u64 *)(n+8);
+-		((unsigned char *)in_n)[15] &= 0xFE;
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out_p, (unsigned char *)in_n);
+-
+-		((unsigned char *)in_n)[15] |= (unsigned char)(1-i);
++	if (keylen != VMAC_KEY_LEN) {
++		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
++		return -EINVAL;
+ 	}
+-	p = be64_to_cpup(out_p + i);
+-	h = vhash(m, mbytes, (u64 *)0, &ctx->__vmac_ctx);
+-	return le64_to_cpu(p + h);
+-}
+ 
+-static int vmac_set_key(unsigned char user_key[], struct vmac_ctx_t *ctx)
+-{
+-	u64 in[2] = {0}, out[2];
+-	unsigned i;
+-	int err = 0;
+-
+-	err = crypto_cipher_setkey(ctx->child, user_key, VMAC_KEY_LEN);
++	err = crypto_cipher_setkey(tctx->cipher, key, keylen);
+ 	if (err)
+ 		return err;
+ 
+ 	/* Fill nh key */
+-	((unsigned char *)in)[0] = 0x80;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.nhkey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.nhkey[i] = be64_to_cpup(out);
+-		ctx->__vmac_ctx.nhkey[i+1] = be64_to_cpup(out+1);
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0x80;
++	for (i = 0; i < ARRAY_SIZE(tctx->nhkey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->nhkey[i] = be64_to_cpu(out[0]);
++		tctx->nhkey[i+1] = be64_to_cpu(out[1]);
++		in[15]++;
+ 	}
+ 
+ 	/* Fill poly key */
+-	((unsigned char *)in)[0] = 0xC0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.polykey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.polytmp[i] =
+-			ctx->__vmac_ctx.polykey[i] =
+-				be64_to_cpup(out) & mpoly;
+-		ctx->__vmac_ctx.polytmp[i+1] =
+-			ctx->__vmac_ctx.polykey[i+1] =
+-				be64_to_cpup(out+1) & mpoly;
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0xC0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->polykey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->polykey[i] = be64_to_cpu(out[0]) & mpoly;
++		tctx->polykey[i+1] = be64_to_cpu(out[1]) & mpoly;
++		in[15]++;
+ 	}
+ 
+ 	/* Fill ip key */
+-	((unsigned char *)in)[0] = 0xE0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.l3key)/8; i += 2) {
++	in[0] = 0xE0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->l3key); i += 2) {
+ 		do {
+-			crypto_cipher_encrypt_one(ctx->child,
+-				(unsigned char *)out, (unsigned char *)in);
+-			ctx->__vmac_ctx.l3key[i] = be64_to_cpup(out);
+-			ctx->__vmac_ctx.l3key[i+1] = be64_to_cpup(out+1);
+-			((unsigned char *)in)[15] += 1;
+-		} while (ctx->__vmac_ctx.l3key[i] >= p64
+-			|| ctx->__vmac_ctx.l3key[i+1] >= p64);
++			crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++			tctx->l3key[i] = be64_to_cpu(out[0]);
++			tctx->l3key[i+1] = be64_to_cpu(out[1]);
++			in[15]++;
++		} while (tctx->l3key[i] >= p64 || tctx->l3key[i+1] >= p64);
+ 	}
+ 
+-	/* Invalidate nonce/aes cache and reset other elements */
+-	ctx->__vmac_ctx.cached_nonce[0] = (u64)-1; /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.cached_nonce[1] = (u64)0;  /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.first_block_processed = 0;
+-
+-	return err;
++	return 0;
+ }
+ 
+-static int vmac_setkey(struct crypto_shash *parent,
+-		const u8 *key, unsigned int keylen)
++static int vmac_init(struct shash_desc *desc)
+ {
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (keylen != VMAC_KEY_LEN) {
+-		crypto_shash_set_flags(parent, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-		return -EINVAL;
+-	}
+-
+-	return vmac_set_key((u8 *)key, ctx);
+-}
+-
+-static int vmac_init(struct shash_desc *pdesc)
+-{
++	dctx->partial_size = 0;
++	dctx->first_block_processed = false;
++	memcpy(dctx->polytmp, tctx->polykey, sizeof(dctx->polytmp));
+ 	return 0;
+ }
+ 
+-static int vmac_update(struct shash_desc *pdesc, const u8 *p,
+-		unsigned int len)
++static int vmac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	int expand;
+-	int min;
+-
+-	expand = VMAC_NHBYTES - ctx->partial_size > 0 ?
+-			VMAC_NHBYTES - ctx->partial_size : 0;
+-
+-	min = len < expand ? len : expand;
+-
+-	memcpy(ctx->partial + ctx->partial_size, p, min);
+-	ctx->partial_size += min;
+-
+-	if (len < expand)
+-		return 0;
+-
+-	vhash_update(ctx->partial, VMAC_NHBYTES, &ctx->__vmac_ctx);
+-	ctx->partial_size = 0;
+-
+-	len -= expand;
+-	p += expand;
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	unsigned int n;
++
++	if (dctx->partial_size) {
++		n = min(len, VMAC_NHBYTES - dctx->partial_size);
++		memcpy(&dctx->partial[dctx->partial_size], p, n);
++		dctx->partial_size += n;
++		p += n;
++		len -= n;
++		if (dctx->partial_size == VMAC_NHBYTES) {
++			vhash_blocks(tctx, dctx, dctx->partial_words, 1);
++			dctx->partial_size = 0;
++		}
++	}
+ 
+-	if (len % VMAC_NHBYTES) {
+-		memcpy(ctx->partial, p + len - (len % VMAC_NHBYTES),
+-			len % VMAC_NHBYTES);
+-		ctx->partial_size = len % VMAC_NHBYTES;
++	if (len >= VMAC_NHBYTES) {
++		n = round_down(len, VMAC_NHBYTES);
++		/* TODO: 'p' may be misaligned here */
++		vhash_blocks(tctx, dctx, (const __le64 *)p, n / VMAC_NHBYTES);
++		p += n;
++		len -= n;
+ 	}
+ 
+-	vhash_update(p, len - len % VMAC_NHBYTES, &ctx->__vmac_ctx);
++	if (len) {
++		memcpy(dctx->partial, p, len);
++		dctx->partial_size = len;
++	}
+ 
+ 	return 0;
+ }
+ 
+-static int vmac_final(struct shash_desc *pdesc, u8 *out)
++static u64 vhash_final(const struct vmac_tfm_ctx *tctx,
++		       struct vmac_desc_ctx *dctx)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	vmac_t mac;
+-	u8 nonce[16] = {};
+-
+-	/* vmac() ends up accessing outside the array bounds that
+-	 * we specify.  In appears to access up to the next 2-word
+-	 * boundary.  We'll just be uber cautious and zero the
+-	 * unwritten bytes in the buffer.
+-	 */
+-	if (ctx->partial_size) {
+-		memset(ctx->partial + ctx->partial_size, 0,
+-			VMAC_NHBYTES - ctx->partial_size);
++	unsigned int partial = dctx->partial_size;
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++
++	/* L1 and L2-hash the final block if needed */
++	if (partial) {
++		/* Zero-pad to next 128-bit boundary */
++		unsigned int n = round_up(partial, 16);
++		u64 rh, rl;
++
++		memset(&dctx->partial[partial], 0, n - partial);
++		nh_16(dctx->partial_words, tctx->nhkey, n / 8, rh, rl);
++		rh &= m62;
++		if (dctx->first_block_processed)
++			poly_step(ch, cl, tctx->polykey[0], tctx->polykey[1],
++				  rh, rl);
++		else
++			ADD128(ch, cl, rh, rl);
+ 	}
+-	mac = vmac(ctx->partial, ctx->partial_size, nonce, NULL, ctx);
+-	memcpy(out, &mac, sizeof(vmac_t));
+-	memzero_explicit(&mac, sizeof(vmac_t));
+-	memset(&ctx->__vmac_ctx, 0, sizeof(struct vmac_ctx));
+-	ctx->partial_size = 0;
++
++	/* L3-hash the 128-bit output of L2-hash */
++	return l3hash(ch, cl, tctx->l3key[0], tctx->l3key[1], partial * 8);
++}
++
++static int vmac_final(struct shash_desc *desc, u8 *out)
++{
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	static const u8 nonce[16] = {}; /* TODO: this is insecure */
++	union {
++		u8 bytes[16];
++		__be64 pads[2];
++	} block;
++	int index;
++	u64 hash, pad;
++
++	/* Finish calculating the VHASH of the message */
++	hash = vhash_final(tctx, dctx);
++
++	/* Generate pseudorandom pad by encrypting the nonce */
++	memcpy(&block, nonce, 16);
++	index = block.bytes[15] & 1;
++	block.bytes[15] &= ~1;
++	crypto_cipher_encrypt_one(tctx->cipher, block.bytes, block.bytes);
++	pad = be64_to_cpu(block.pads[index]);
++
++	/* The VMAC is the sum of VHASH and the pseudorandom pad */
++	put_unaligned_le64(hash + pad, out);
+ 	return 0;
+ }
+ 
+ static int vmac_init_tfm(struct crypto_tfm *tfm)
+ {
+-	struct crypto_cipher *cipher;
+-	struct crypto_instance *inst = (void *)tfm->__crt_alg;
++	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ 	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++	struct crypto_cipher *cipher;
+ 
+ 	cipher = crypto_spawn_cipher(spawn);
+ 	if (IS_ERR(cipher))
+ 		return PTR_ERR(cipher);
+ 
+-	ctx->child = cipher;
++	tctx->cipher = cipher;
+ 	return 0;
+ }
+ 
+ static void vmac_exit_tfm(struct crypto_tfm *tfm)
+ {
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
+-	crypto_free_cipher(ctx->child);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++
++	crypto_free_cipher(tctx->cipher);
+ }
+ 
+ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+@@ -655,6 +608,10 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	if (IS_ERR(alg))
+ 		return PTR_ERR(alg);
+ 
++	err = -EINVAL;
++	if (alg->cra_blocksize != 16)
++		goto out_put_alg;
++
+ 	inst = shash_alloc_instance("vmac", alg);
+ 	err = PTR_ERR(inst);
+ 	if (IS_ERR(inst))
+@@ -670,11 +627,12 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
+-	inst->alg.digestsize = sizeof(vmac_t);
+-	inst->alg.base.cra_ctxsize = sizeof(struct vmac_ctx_t);
++	inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
+ 	inst->alg.base.cra_init = vmac_init_tfm;
+ 	inst->alg.base.cra_exit = vmac_exit_tfm;
+ 
++	inst->alg.descsize = sizeof(struct vmac_desc_ctx);
++	inst->alg.digestsize = VMAC_TAG_LEN / 8;
+ 	inst->alg.init = vmac_init;
+ 	inst->alg.update = vmac_update;
+ 	inst->alg.final = vmac_final;
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index ff478d826d7d..051b8c6bae64 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -84,8 +84,6 @@ done:
+ 
+ static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
+ {
+-	psp->sev_int_rcvd = 0;
+-
+ 	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+ }
+@@ -148,6 +146,8 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+ 	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+ 
++	psp->sev_int_rcvd = 0;
++
+ 	reg = cmd;
+ 	reg <<= PSP_CMDRESP_CMD_SHIFT;
+ 	reg |= PSP_CMDRESP_IOC;
+@@ -856,6 +856,9 @@ void psp_dev_destroy(struct sp_device *sp)
+ {
+ 	struct psp_device *psp = sp->psp_data;
+ 
++	if (!psp)
++		return;
++
+ 	if (psp->sev_misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index d2810c183b73..958ced3ca485 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -593,34 +593,82 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
+ 	}
+ }
+ 
++/*
++ * Update a CTR-AES 128 bit counter
++ */
++static void cc_update_ctr(u8 *ctr, unsigned int increment)
++{
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++	    IS_ALIGNED((unsigned long)ctr, 8)) {
++
++		__be64 *high_be = (__be64 *)ctr;
++		__be64 *low_be = high_be + 1;
++		u64 orig_low = __be64_to_cpu(*low_be);
++		u64 new_low = orig_low + (u64)increment;
++
++		*low_be = __cpu_to_be64(new_low);
++
++		if (new_low < orig_low)
++			*high_be = __cpu_to_be64(__be64_to_cpu(*high_be) + 1);
++	} else {
++		u8 *pos = (ctr + AES_BLOCK_SIZE);
++		u8 val;
++		unsigned int size;
++
++		for (; increment; increment--)
++			for (size = AES_BLOCK_SIZE; size; size--) {
++				val = *--pos + 1;
++				*pos = val;
++				if (val)
++					break;
++			}
++	}
++}
++
+ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ {
+ 	struct skcipher_request *req = (struct skcipher_request *)cc_req;
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
++	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
++	unsigned int len;
+ 
+-	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+-	kzfree(req_ctx->iv);
++	switch (ctx_p->cipher_mode) {
++	case DRV_CIPHER_CBC:
++		/*
++		 * The crypto API expects us to set the req->iv to the last
++		 * ciphertext block. For encrypt, simply copy from the result.
++		 * For decrypt, we must copy from a saved buffer since this
++		 * could be an in-place decryption operation and the src is
++		 * lost by this point.
++		 */
++		if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
++			memcpy(req->iv, req_ctx->backup_info, ivsize);
++			kzfree(req_ctx->backup_info);
++		} else if (!err) {
++			len = req->cryptlen - ivsize;
++			scatterwalk_map_and_copy(req->iv, req->dst, len,
++						 ivsize, 0);
++		}
++		break;
+ 
+-	/*
+-	 * The crypto API expects us to set the req->iv to the last
+-	 * ciphertext block. For encrypt, simply copy from the result.
+-	 * For decrypt, we must copy from a saved buffer since this
+-	 * could be an in-place decryption operation and the src is
+-	 * lost by this point.
+-	 */
+-	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
+-		memcpy(req->iv, req_ctx->backup_info, ivsize);
+-		kzfree(req_ctx->backup_info);
+-	} else if (!err) {
+-		scatterwalk_map_and_copy(req->iv, req->dst,
+-					 (req->cryptlen - ivsize),
+-					 ivsize, 0);
++	case DRV_CIPHER_CTR:
++		/* Compute the counter of the last block */
++		len = ALIGN(req->cryptlen, AES_BLOCK_SIZE) / AES_BLOCK_SIZE;
++		cc_update_ctr((u8 *)req->iv, len);
++		break;
++
++	default:
++		break;
+ 	}
+ 
++	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++	kzfree(req_ctx->iv);
++
+ 	skcipher_request_complete(req, err);
+ }
+ 
+@@ -752,20 +800,29 @@ static int cc_cipher_encrypt(struct skcipher_request *req)
+ static int cc_cipher_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+ 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ 	gfp_t flags = cc_gfp_flags(&req->base);
++	unsigned int len;
+ 
+-	/*
+-	 * Allocate and save the last IV sized bytes of the source, which will
+-	 * be lost in case of in-place decryption and might be needed for CTS.
+-	 */
+-	req_ctx->backup_info = kmalloc(ivsize, flags);
+-	if (!req_ctx->backup_info)
+-		return -ENOMEM;
++	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++
++		/* Allocate and save the last IV sized bytes of the source,
++		 * which will be lost in case of in-place decryption.
++		 */
++		req_ctx->backup_info = kzalloc(ivsize, flags);
++		if (!req_ctx->backup_info)
++			return -ENOMEM;
++
++		len = req->cryptlen - ivsize;
++		scatterwalk_map_and_copy(req_ctx->backup_info, req->src, len,
++					 ivsize, 0);
++	} else {
++		req_ctx->backup_info = NULL;
++	}
+ 
+-	scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
+-				 (req->cryptlen - ivsize), ivsize, 0);
+ 	req_ctx->is_giv = false;
+ 
+ 	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 96ff777474d7..e4ebde05a8a0 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -602,66 +602,7 @@ static int cc_hash_update(struct ahash_request *req)
+ 	return rc;
+ }
+ 
+-static int cc_hash_finup(struct ahash_request *req)
+-{
+-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+-	u32 digestsize = crypto_ahash_digestsize(tfm);
+-	struct scatterlist *src = req->src;
+-	unsigned int nbytes = req->nbytes;
+-	u8 *result = req->result;
+-	struct device *dev = drvdata_to_dev(ctx->drvdata);
+-	bool is_hmac = ctx->is_hmac;
+-	struct cc_crypto_req cc_req = {};
+-	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
+-	unsigned int idx = 0;
+-	int rc;
+-	gfp_t flags = cc_gfp_flags(&req->base);
+-
+-	dev_dbg(dev, "===== %s-finup (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
+-
+-	if (cc_map_req(dev, state, ctx)) {
+-		dev_err(dev, "map_ahash_source() failed\n");
+-		return -EINVAL;
+-	}
+-
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1,
+-				      flags)) {
+-		dev_err(dev, "map_ahash_request_final() failed\n");
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-	if (cc_map_result(dev, state, digestsize)) {
+-		dev_err(dev, "map_ahash_digest() failed\n");
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-
+-	/* Setup request structure */
+-	cc_req.user_cb = cc_hash_complete;
+-	cc_req.user_arg = req;
+-
+-	idx = cc_restore_hash(desc, ctx, state, idx);
+-
+-	if (is_hmac)
+-		idx = cc_fin_hmac(desc, req, idx);
+-
+-	idx = cc_fin_result(desc, req, idx);
+-
+-	rc = cc_send_request(ctx->drvdata, &cc_req, desc, idx, &req->base);
+-	if (rc != -EINPROGRESS && rc != -EBUSY) {
+-		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_result(dev, state, digestsize, result);
+-		cc_unmap_req(dev, state, ctx);
+-	}
+-	return rc;
+-}
+-
+-static int cc_hash_final(struct ahash_request *req)
++static int cc_do_finup(struct ahash_request *req, bool update)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -678,21 +619,20 @@ static int cc_hash_final(struct ahash_request *req)
+ 	int rc;
+ 	gfp_t flags = cc_gfp_flags(&req->base);
+ 
+-	dev_dbg(dev, "===== %s-final (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
++	dev_dbg(dev, "===== %s-%s (%d) ====\n", is_hmac ? "hmac" : "hash",
++		update ? "finup" : "final", nbytes);
+ 
+ 	if (cc_map_req(dev, state, ctx)) {
+ 		dev_err(dev, "map_ahash_source() failed\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0,
++	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, update,
+ 				      flags)) {
+ 		dev_err(dev, "map_ahash_request_final() failed\n");
+ 		cc_unmap_req(dev, state, ctx);
+ 		return -ENOMEM;
+ 	}
+-
+ 	if (cc_map_result(dev, state, digestsize)) {
+ 		dev_err(dev, "map_ahash_digest() failed\n");
+ 		cc_unmap_hash_request(dev, state, src, true);
+@@ -706,7 +646,7 @@ static int cc_hash_final(struct ahash_request *req)
+ 
+ 	idx = cc_restore_hash(desc, ctx, state, idx);
+ 
+-	/* "DO-PAD" must be enabled only when writing current length to HW */
++	/* Pad the hash */
+ 	hw_desc_init(&desc[idx]);
+ 	set_cipher_do(&desc[idx], DO_PAD);
+ 	set_cipher_mode(&desc[idx], ctx->hw_mode);
+@@ -731,6 +671,17 @@ static int cc_hash_final(struct ahash_request *req)
+ 	return rc;
+ }
+ 
++static int cc_hash_finup(struct ahash_request *req)
++{
++	return cc_do_finup(req, true);
++}
++
++
++static int cc_hash_final(struct ahash_request *req)
++{
++	return cc_do_finup(req, false);
++}
++
+ static int cc_hash_init(struct ahash_request *req)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 26ca0276b503..a75cb371cd19 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1019,8 +1019,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
+ int pud_clear_huge(pud_t *pud);
+ int pmd_clear_huge(pmd_t *pmd);
+-int pud_free_pmd_page(pud_t *pud);
+-int pmd_free_pte_page(pmd_t *pmd);
++int pud_free_pmd_page(pud_t *pud, unsigned long addr);
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
+ #else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
+ {
+@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
+ {
+ 	return 0;
+ }
+-static inline int pud_free_pmd_page(pud_t *pud)
++static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return 0;
+ }
+-static inline int pmd_free_pte_page(pmd_t *pmd)
++static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return 0;
+ }
+diff --git a/include/crypto/vmac.h b/include/crypto/vmac.h
+deleted file mode 100644
+index 6b700c7b2fe1..000000000000
+--- a/include/crypto/vmac.h
++++ /dev/null
+@@ -1,63 +0,0 @@
+-/*
+- * Modified to interface to the Linux kernel
+- * Copyright (c) 2009, Intel Corporation.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms and conditions of the GNU General Public License,
+- * version 2, as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+- * Place - Suite 330, Boston, MA 02111-1307 USA.
+- */
+-
+-#ifndef __CRYPTO_VMAC_H
+-#define __CRYPTO_VMAC_H
+-
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
+-
+-/*
+- * User definable settings.
+- */
+-#define VMAC_TAG_LEN	64
+-#define VMAC_KEY_SIZE	128/* Must be 128, 192 or 256			*/
+-#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
+-#define VMAC_NHBYTES	128/* Must 2^i for any 3 < i < 13 Standard = 128*/
+-
+-/*
+- * This implementation uses u32 and u64 as names for unsigned 32-
+- * and 64-bit integer types. These are defined in C99 stdint.h. The
+- * following may need adaptation if you are not running a C99 or
+- * Microsoft C environment.
+- */
+-struct vmac_ctx {
+-	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
+-	u64 polykey[2*VMAC_TAG_LEN/64];
+-	u64 l3key[2*VMAC_TAG_LEN/64];
+-	u64 polytmp[2*VMAC_TAG_LEN/64];
+-	u64 cached_nonce[2];
+-	u64 cached_aes[2];
+-	int first_block_processed;
+-};
+-
+-typedef u64 vmac_t;
+-
+-struct vmac_ctx_t {
+-	struct crypto_cipher *child;
+-	struct vmac_ctx __vmac_ctx;
+-	u8 partial[VMAC_NHBYTES];	/* partial block */
+-	int partial_size;		/* size of the partial block */
+-};
+-
+-#endif /* __CRYPTO_VMAC_H */
+diff --git a/lib/ioremap.c b/lib/ioremap.c
+index 54e5bbaa3200..517f5853ffed 100644
+--- a/lib/ioremap.c
++++ b/lib/ioremap.c
+@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
+ 		if (ioremap_pmd_enabled() &&
+ 		    ((next - addr) == PMD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+-		    pmd_free_pte_page(pmd)) {
++		    pmd_free_pte_page(pmd, addr)) {
+ 			if (pmd_set_huge(pmd, phys_addr + addr, prot))
+ 				continue;
+ 		}
+@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
+ 		if (ioremap_pud_enabled() &&
+ 		    ((next - addr) == PUD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
+-		    pud_free_pmd_page(pud)) {
++		    pud_free_pmd_page(pud, addr)) {
+ 			if (pud_set_huge(pud, phys_addr + addr, prot))
+ 				continue;
+ 		}
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 1036e4fa1ea2..3bba8f4b08a9 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -431,8 +431,8 @@ static void hidp_del_timer(struct hidp_session *session)
+ 		del_timer(&session->timer);
+ }
+ 
+-static void hidp_process_report(struct hidp_session *session,
+-				int type, const u8 *data, int len, int intr)
++static void hidp_process_report(struct hidp_session *session, int type,
++				const u8 *data, unsigned int len, int intr)
+ {
+ 	if (len > HID_MAX_BUFFER_SIZE)
+ 		len = HID_MAX_BUFFER_SIZE;
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 1a6f85e0e6e1..999d585eaa73 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -10,10 +10,16 @@ fi
+ DEPMOD=$1
+ KERNELRELEASE=$2
+ 
+-if ! test -r System.map -a -x "$DEPMOD"; then
++if ! test -r System.map ; then
+ 	exit 0
+ fi
+ 
++if [ -z $(command -v $DEPMOD) ]; then
++	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "This is probably in the kmod package." >&2
++	exit 1
++fi
++
+ # older versions of depmod require the version string to start with three
+ # numbers, so we cheat with a symlink here
+ depmod_hack_needed=true


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     c25319c18a134fe3e6ff8b66a6e53342c0fd0ff3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 23 22:21:38 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:21 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c25319c1

Patch to support namespace user.pax.* on tmpfs.
Enable link security restrictions by default.
Add UAS disable quirk. See bug #640082.
hid-apple patch to enable swapping of the FN and left Control keys and
some additional keys on some Apple keyboards. See bug #622902
Bootsplash ported by Conrad Kostecki. (Bug #637434)
Enable control of the unaligned access control policy from sysctl
Patch that adds Gentoo Linux support config settings and defaults.
Kernel patch enables gcc >= v4.13 optimizations for additional CPUs.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   28 +
 1500_XATTR_USER_PREFIX.patch                       |   69 +
 ...ble-link-security-restrictions-by-default.patch |   22 +
 ...age-Disable-UAS-on-JMicron-SATA-enclosure.patch |   40 +
 2600_enable-key-swapping-for-apple-mac.patch       |  114 ++
 4200_fbcondecor.patch                              | 2095 ++++++++++++++++++++
 4400_alpha-sysctl-uac.patch                        |  142 ++
 4567_distro-Gentoo-Kconfig.patch                   |  160 +-
 ...able-additional-cpu-optimizations-for-gcc.patch |  545 +++++
 9 files changed, 3061 insertions(+), 154 deletions(-)

diff --git a/0000_README b/0000_README
index 9018993..917d838 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,34 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1500_XATTR_USER_PREFIX.patch
+From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc:   Support for namespace user.pax.* on tmpfs.
+
+Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
+From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc:   Enable link security restrictions by default.
+
+Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
+From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
+Desc:   Add UAS disable quirk. See bug #640082.
+
+Patch:  2600_enable-key-swapping-for-apple-mac.patch
+From:   https://github.com/free5lot/hid-apple-patched
+Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some apple keyboards. See bug #622902
+
+Patch:  4200_fbcondecor.patch
+From:   http://www.mepiscommunity.org/fbcondecor
+Desc:   Bootsplash ported by Conrad Kostecki. (Bug #637434)
+
+Patch:  4400_alpha-sysctl-uac.patch
+From:   Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323
+Desc:   Enable control of the unaligned access control policy from sysctl
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
+
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch enables gcc >= v4.13 optimizations for additional CPUs.

diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..bacd032
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,69 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags.  The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed even on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs.  Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT  "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+ 
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+ 
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 440e2a7..c377172 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2667,6 +2667,14 @@ static int shmem_xattr_handler_set(const struct xattr_handler *handler,
+ 	struct shmem_inode_info *info = SHMEM_I(d_inode(dentry));
+ 
+ 	name = xattr_full_name(handler, name);
++
++	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++			return -EOPNOTSUPP;
++		if (size > 8)
++			return -EINVAL;
++	}
++
+ 	return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+ 
+@@ -2682,6 +2690,12 @@ static const struct xattr_handler shmem_trusted_xattr_handler = {
+ 	.set = shmem_xattr_handler_set,
+ };
+ 
++static const struct xattr_handler shmem_user_xattr_handler = {
++	.prefix = XATTR_USER_PREFIX,
++	.get = shmem_xattr_handler_get,
++	.set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ 	&posix_acl_access_xattr_handler,
+@@ -2689,6 +2703,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #endif
+ 	&shmem_security_xattr_handler,
+ 	&shmem_trusted_xattr_handler,
++	&shmem_user_xattr_handler,
+ 	NULL
+ };
+ 

diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ 	path_put(link);
+ }
+ 
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+ 
+ /**
+  * may_follow_link - Check symlink following for unsafe situations

diff --git a/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
new file mode 100644
index 0000000..0dd93ef
--- /dev/null
+++ b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
@@ -0,0 +1,40 @@
+From d02a55182307c01136b599fd048b4679f259a84e Mon Sep 17 00:00:00 2001
+From: Laura Abbott <labbott@fedoraproject.org>
+Date: Tue, 8 Sep 2015 09:53:38 -0700
+Subject: [PATCH] usb-storage: Disable UAS on JMicron SATA enclosure
+
+Steve Ellis reported incorrect block sizes and alignement
+offsets with a SATA enclosure. Adding a quirk to disable
+UAS fixes the problems.
+
+Reported-by: Steven Ellis <sellis@redhat.com>
+Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
+---
+ drivers/usb/storage/unusual_uas.h | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c85ea53..216d93d 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -141,12 +141,15 @@ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_NO_ATA_1X),
+ 
+-/* Reported-by: Takeo Nakayama <javhera@gmx.com> */
++/*
++ * Initially Reported-by: Takeo Nakayama <javhera@gmx.com>
++ * UAS Ignore Reported by Steven Ellis <sellis@redhat.com>
++ */
+ UNUSUAL_DEV(0x357d, 0x7788, 0x0000, 0x9999,
+ 		"JMicron",
+ 		"JMS566",
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_NO_REPORT_OPCODES),
++		US_FL_NO_REPORT_OPCODES | US_FL_IGNORE_UAS),
+ 
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x4971, 0x1012, 0x0000, 0x9999,
+-- 
+2.4.3
+

diff --git a/2600_enable-key-swapping-for-apple-mac.patch b/2600_enable-key-swapping-for-apple-mac.patch
new file mode 100644
index 0000000..ab228d3
--- /dev/null
+++ b/2600_enable-key-swapping-for-apple-mac.patch
@@ -0,0 +1,114 @@
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -52,6 +52,22 @@
+ 		"(For people who want to keep Windows PC keyboard muscle memory. "
+ 		"[0] = as-is, Mac layout. 1 = swapped, Windows layout.)");
+ 
++static unsigned int swap_fn_leftctrl;
++module_param(swap_fn_leftctrl, uint, 0644);
++MODULE_PARM_DESC(swap_fn_leftctrl, "Swap the Fn and left Control keys. "
++		"(For people who want to keep PC keyboard muscle memory. "
++		"[0] = as-is, Mac layout, 1 = swapped, PC layout)");
++
++static unsigned int rightalt_as_rightctrl;
++module_param(rightalt_as_rightctrl, uint, 0644);
++MODULE_PARM_DESC(rightalt_as_rightctrl, "Use the right Alt key as a right Ctrl key. "
++		"[0] = as-is, Mac layout. 1 = Right Alt is right Ctrl");
++
++static unsigned int ejectcd_as_delete;
++module_param(ejectcd_as_delete, uint, 0644);
++MODULE_PARM_DESC(ejectcd_as_delete, "Use Eject-CD key as Delete key. "
++		"([0] = disabled, 1 = enabled)");
++
+ struct apple_sc {
+ 	unsigned long quirks;
+ 	unsigned int fn_on;
+@@ -164,6 +180,21 @@
+ 	{ }
+ };
+ 
++static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
++	{ KEY_FN, KEY_LEFTCTRL },
++	{ }
++};
++
++static const struct apple_key_translation rightalt_as_rightctrl_keys[] = {
++	{ KEY_RIGHTALT, KEY_RIGHTCTRL },
++	{ }
++};
++
++static const struct apple_key_translation ejectcd_as_delete_keys[] = {
++	{ KEY_EJECTCD,	KEY_DELETE },
++	{ }
++};
++
+ static const struct apple_key_translation *apple_find_translation(
+ 		const struct apple_key_translation *table, u16 from)
+ {
+@@ -183,9 +214,11 @@
+ 	struct apple_sc *asc = hid_get_drvdata(hid);
+ 	const struct apple_key_translation *trans, *table;
+ 
+-	if (usage->code == KEY_FN) {
++	u16 fn_keycode = (swap_fn_leftctrl) ? (KEY_LEFTCTRL) : (KEY_FN);
++
++	if (usage->code == fn_keycode) {
+ 		asc->fn_on = !!value;
+-		input_event(input, usage->type, usage->code, value);
++		input_event(input, usage->type, KEY_FN, value);
+ 		return 1;
+ 	}
+ 
+@@ -264,6 +297,30 @@
+ 		}
+ 	}
+ 
++	if (swap_fn_leftctrl) {
++		trans = apple_find_translation(swapped_fn_leftctrl_keys, usage->code);
++		if (trans) {
++			input_event(input, usage->type, trans->to, value);
++			return 1;
++		}
++	}
++
++	if (ejectcd_as_delete) {
++		trans = apple_find_translation(ejectcd_as_delete_keys, usage->code);
++		if (trans) {
++			input_event(input, usage->type, trans->to, value);
++			return 1;
++		}
++	}
++
++	if (rightalt_as_rightctrl) {
++		trans = apple_find_translation(rightalt_as_rightctrl_keys, usage->code);
++		if (trans) {
++			input_event(input, usage->type, trans->to, value);
++			return 1;
++		}
++	}
++
+ 	return 0;
+ }
+ 
+@@ -327,6 +384,21 @@
+ 
+ 	for (trans = apple_iso_keyboard; trans->from; trans++)
+ 		set_bit(trans->to, input->keybit);
++
++	if (swap_fn_leftctrl) {
++		for (trans = swapped_fn_leftctrl_keys; trans->from; trans++)
++			set_bit(trans->to, input->keybit);
++	}
++
++	if (ejectcd_as_delete) {
++		for (trans = ejectcd_as_delete_keys; trans->from; trans++)
++			set_bit(trans->to, input->keybit);
++	}
++
++        if (rightalt_as_rightctrl) {
++		for (trans = rightalt_as_rightctrl_keys; trans->from; trans++)
++			set_bit(trans->to, input->keybit);
++	}
+ }
+ 
+ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,

diff --git a/4200_fbcondecor.patch b/4200_fbcondecor.patch
new file mode 100644
index 0000000..7151d0f
--- /dev/null
+++ b/4200_fbcondecor.patch
@@ -0,0 +1,2095 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c5907a..22309308ba56 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ 	- info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ 	- intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++	- info on the Framebuffer Console Decoration
+ framebuffer.txt
+ 	- introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 000000000000..637209e11ccd
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++Framebuffer console decorations are a kernel feature that allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++    http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++   standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++   is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both
++directions: kernel->userspace and userspace->kernel.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace  helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in an 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++          values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When FBIOCONDECOR_GETCFG call is
++is performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctl's description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++  Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++  Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++  Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++  Michal Januszewski <michalj+fbcondecor@gmail.com>
++
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 1d034b680431..9f41f2ea0c8b 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -23,6 +23,10 @@ obj-y				+= pci/dwc/
+ 
+ obj-$(CONFIG_PARISC)		+= parisc/
+ obj-$(CONFIG_RAPIDIO)		+= rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y				+= tty/
++obj-y				+= char/
+ obj-y				+= video/
+ obj-y				+= idle/
+ 
+@@ -53,11 +57,6 @@ obj-$(CONFIG_REGULATOR)		+= regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER)	+= reset/
+ 
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y				+= tty/
+-obj-y				+= char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT)	+= iommu/
+ 
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 7f1f1fbcef9e..8439b618dfc0 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -151,6 +151,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+          such that other users of the framebuffer will remain normally
+          oriented.
+ 
++config FB_CON_DECOR
++	bool "Support for the Framebuffer Console Decorations"
++	depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++	default n
++	---help---
++	  This option enables support for framebuffer console decorations which
++	  makes it possible to display images in the background of the system
++	  consoles.  Note that userspace utilities are necessary in order to take
++	  advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++	  for more information.
++
++	  If unsure, say N.
++
+ config STI_CONSOLE
+         bool "STI text console"
+         depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index db07b784bd2c..3e369bd120b8 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -9,4 +9,5 @@ obj-$(CONFIG_STI_CONSOLE)         += sticon.o sticore.o
+ obj-$(CONFIG_VGA_CONSOLE)         += vgacon.o
+ obj-$(CONFIG_MDA_CONSOLE)         += mdacon.o
+ 
++obj-$(CONFIG_FB_CON_DECOR)     	  += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI)              += sticore.o
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 000000000000..b00960803edc
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,473 @@
++/*
++ *  linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootdecor" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift, bpp, type)						\
++	do {									\
++		if (d & (0x80 >> (shift)))					\
++			dd2[(shift)] = fgx;					\
++		else								\
++			dd2[(shift)] = transparent ? *(type *)decor_src : bgx;	\
++		decor_src += (bpp);						\
++	} while (0)								\
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++		     u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++	int i, j, k;
++	int minlen = min(min(info->var.red.length, info->var.green.length),
++			     info->var.blue.length);
++	u32 col;
++
++	for (j = i = 0; i < 16; i++) {
++		k = color_table[i];
++
++		col = ((vc->vc_palette[j++]  >> (8-minlen))
++			<< info->var.red.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.green.offset);
++		col |= ((vc->vc_palette[j++] >> (8-minlen))
++			<< info->var.blue.offset);
++		((u32 *)info->pseudo_palette)[k] = col;
++	}
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++		      int width, u8 *src, u32 fgx, u32 bgx, u8 transparent)
++{
++	unsigned int x, y;
++	u32 dd;
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++	unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++	u16 dd2[4];
++
++	u8 *decor_src = (u8 *)(info->bgdecor.data + ds);
++	u8 *dst = (u8 *)(info->screen_base + d);
++
++	if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++		return;
++
++	for (y = 0; y < height; y++) {
++		switch (info->var.bits_per_pixel) {
++
++		case 32:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     *(u32 *)decor_src : bgx;
++
++				d <<= 1;
++				decor_src += 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++		case 24:
++			for (x = 0; x < width; x++) {
++
++				if ((x & 7) == 0)
++					d = *src++;
++				if (d & 0x80)
++					dd = fgx;
++				else
++					dd = transparent ?
++					     (*(u32 *)decor_src & 0xffffff) : bgx;
++
++				d <<= 1;
++				decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++				fb_writew(dd & 0xffff, dst);
++				dst += 2;
++				fb_writeb((dd >> 16), dst);
++#else
++				fb_writew(dd >> 8, dst);
++				dst += 2;
++				fb_writeb(dd & 0xff, dst);
++#endif
++				dst++;
++			}
++			break;
++		case 16:
++			for (x = 0; x < width; x += 2) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 2, u16);
++				parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 16);
++#else
++				dd = dd2[1] | (dd2[0] << 16);
++#endif
++				d <<= 2;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++			break;
++
++		case 8:
++			for (x = 0; x < width; x += 4) {
++				if ((x & 7) == 0)
++					d = *src++;
++
++				parse_pixel(0, 1, u8);
++				parse_pixel(1, 1, u8);
++				parse_pixel(2, 1, u8);
++				parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++				dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++				dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++				d <<= 4;
++				fb_writel(dd, dst);
++				dst += 4;
++			}
++		}
++
++		dst += info->fix.line_length - width * bytespp;
++		decor_src += (info->var.xres - width) * bytespp;
++	}
++}
++
++#define cc2cx(a)						\
++	((info->fix.visual == FB_VISUAL_TRUECOLOR ||		\
++		info->fix.visual == FB_VISUAL_DIRECTCOLOR) ?	\
++			((u32 *)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++		   const unsigned short *s, int count, int yy, int xx)
++{
++	unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++	struct fbcon_ops *ops = info->fbcon_par;
++	int fg_color, bg_color, transparent;
++	u8 *src;
++	u32 bgx, fgx;
++	u16 c = scr_readw(s);
++
++	fg_color = get_color(vc, info, c, 1);
++	bg_color = get_color(vc, info, c, 0);
++
++	/* Don't paint the background image if console is blanked */
++	transparent = ops->blank_state ? 0 :
++		(vc->vc_decor.bg_color == bg_color);
++
++	xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++	yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++	fgx = cc2cx(fg_color);
++	bgx = cc2cx(bg_color);
++
++	while (count--) {
++		c = scr_readw(s++);
++		src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++		      ((vc->vc_font.width + 7) >> 3);
++
++		fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++			       vc->vc_font.width, src, fgx, bgx, transparent);
++		xx += vc->vc_font.width;
++	}
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++	int i;
++	unsigned int dsize, s_pitch;
++	struct fbcon_ops *ops = info->fbcon_par;
++	struct vc_data *vc;
++	u8 *src;
++
++	/* we really don't need any cursors while the console is blanked */
++	if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++		return;
++
++	vc = vc_cons[ops->currcon].d;
++
++	src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++	if (!src)
++		return;
++
++	s_pitch = (cursor->image.width + 7) >> 3;
++	dsize = s_pitch * cursor->image.height;
++	if (cursor->enable) {
++		switch (cursor->rop) {
++		case ROP_XOR:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] ^ cursor->mask[i];
++			break;
++		case ROP_COPY:
++		default:
++			for (i = 0; i < dsize; i++)
++				src[i] = cursor->image.data[i] & cursor->mask[i];
++			break;
++		}
++	} else
++		memcpy(src, cursor->image.data, dsize);
++
++	fbcon_decor_renderc(info,
++			cursor->image.dy + vc->vc_decor.ty,
++			cursor->image.dx + vc->vc_decor.tx,
++			cursor->image.height,
++			cursor->image.width,
++			(u8 *)src,
++			cc2cx(cursor->image.fg_color),
++			cc2cx(cursor->image.bg_color),
++			cursor->image.bg_color == vc->vc_decor.bg_color);
++
++	kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++				u32 bgx, int bpp)
++{
++	int i;
++
++	if (bpp == 8)
++		bgx |= bgx << 8;
++	if (bpp == 16 || bpp == 8)
++		bgx |= bgx << 16;
++
++	while (height-- > 0) {
++		u8 *p = dst;
++
++		switch (bpp) {
++
++		case 32:
++			for (i = 0; i < width; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++			break;
++		case 24:
++			for (i = 0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++				fb_writew((bgx & 0xffff), (u16 *)p); p += 2;
++				fb_writeb((bgx >> 16), p++);
++#else
++				fb_writew((bgx >> 8), (u16 *)p); p += 2;
++				fb_writeb((bgx & 0xff), p++);
++#endif
++			}
++			break;
++		case 16:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(bgx, p); p += 4;
++				fb_writel(bgx, p); p += 4;
++			}
++			if (width & 2) {
++				fb_writel(bgx, p); p += 4;
++			}
++			if (width & 1)
++				fb_writew(bgx, (u16 *)p);
++			break;
++		case 8:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(bgx, p); p += 4;
++			}
++
++			if (width & 2) {
++				fb_writew(bgx, p); p += 2;
++			}
++			if (width & 1)
++				fb_writeb(bgx, (u8 *)p);
++			break;
++
++		}
++		dst += dstbytes;
++	}
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++		   int srclinebytes, int bpp)
++{
++	int i;
++
++	while (height-- > 0) {
++		u32 *p = (u32 *)dst;
++		u32 *q = (u32 *)src;
++
++		switch (bpp) {
++
++		case 32:
++			for (i = 0; i < width; i++)
++				fb_writel(*q++, p++);
++			break;
++		case 24:
++			for (i = 0; i < (width * 3 / 4); i++)
++				fb_writel(*q++, p++);
++			if ((width * 3) % 4) {
++				if (width & 2) {
++					fb_writeb(*(u8 *)q, (u8 *)p);
++				} else if (width & 1) {
++					fb_writew(*(u16 *)q, (u16 *)p);
++					fb_writeb(*(u8 *)((u16 *)q + 1),
++							(u8 *)((u16 *)p + 2));
++				}
++			}
++			break;
++		case 16:
++			for (i = 0; i < width/4; i++) {
++				fb_writel(*q++, p++);
++				fb_writel(*q++, p++);
++			}
++			if (width & 2)
++				fb_writel(*q++, p++);
++			if (width & 1)
++				fb_writew(*(u16 *)q, (u16 *)p);
++			break;
++		case 8:
++			for (i = 0; i < width/4; i++)
++				fb_writel(*q++, p++);
++
++			if (width & 2) {
++				fb_writew(*(u16 *)q, (u16 *)p);
++				q = (u32 *) ((u16 *)q + 1);
++				p = (u32 *) ((u16 *)p + 1);
++			}
++			if (width & 1)
++				fb_writeb(*(u8 *)q, (u8 *)p);
++			break;
++		}
++
++		dst += linebytes;
++		src += srclinebytes;
++	}
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++		       int width)
++{
++	int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++	int d  = sy * info->fix.line_length + sx * bytespp;
++	int ds = (sy * info->var.xres + sx) * bytespp;
++
++	fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++		    height, width, info->fix.line_length, info->var.xres * bytespp,
++		    info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++		    int height, int width)
++{
++	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++	struct fbcon_ops *ops = info->fbcon_par;
++	u8 *dst;
++	int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++	transparent = (vc->vc_decor.bg_color == bg_color);
++	sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++	sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++	height *= vc->vc_font.height;
++	width *= vc->vc_font.width;
++
++	/* Don't paint the background image if console is blanked */
++	if (transparent && !ops->blank_state) {
++		decorfill(info, sy, sx, height, width);
++	} else {
++		dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++			     sx * ((info->var.bits_per_pixel + 7) >> 3));
++		decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++			  info->var.bits_per_pixel);
++	}
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++			    int bottom_only)
++{
++	unsigned int tw = vc->vc_cols*vc->vc_font.width;
++	unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++	if (!bottom_only) {
++		/* top margin */
++		decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++		/* left margin */
++		decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++		/* right margin */
++		decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++			   info->var.xres - vc->vc_decor.tx - tw);
++	}
++	decorfill(info, vc->vc_decor.ty + th, 0,
++		   info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++			   int sx, int dx, int width)
++{
++	u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++	u16 *s = d + (dx - sx);
++	u16 *start = d;
++	u16 *ls = d;
++	u16 *le = d + width;
++	u16 c;
++	int x = dx;
++	u16 attr = 1;
++
++	do {
++		c = scr_readw(d);
++		if (attr != (c & 0xff00)) {
++			attr = c & 0xff00;
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start;
++				start = d;
++			}
++		}
++		if (s >= ls && s < le && c == scr_readw(s)) {
++			if (d > start) {
++				fbcon_decor_putcs(vc, info, start, d - start, y, x);
++				x += d - start + 1;
++				start = d + 1;
++			} else {
++				x++;
++				start++;
++			}
++		}
++		s++;
++		d++;
++	} while (d < le);
++	if (d > start)
++		fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++	if (blank) {
++		decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++			  info->fix.line_length, 0, info->var.bits_per_pixel);
++	} else {
++		update_screen(vc);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++}
++
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 000000000000..78288a497a60
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,549 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ *  Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ *  Code based upon "Bootsplash" (C) 2001-2003
++ *       Volker Poplawski <volker@poplawski.de>,
++ *       Stefan Reinauer <stepan@suse.de>,
++ *       Steffen Winterfeldt <snwint@suse.de>,
++ *       Michael Schroeder <mls@suse.de>,
++ *       Ken Wimer <wimer@suse.de>.
++ *
++ *  Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ *  This file is subject to the terms and conditions of the GNU General Public
++ *  License.  See the file COPYING in the main directory of this archive for
++ *  more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++#include <linux/binfmts.h>
++#include <linux/uaccess.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++
++static int initialized;
++
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++EXPORT_SYMBOL(fbcon_decor_path);
++
++int fbcon_decor_call_helper(char *cmd, unsigned short vc)
++{
++	char *envp[] = {
++		"HOME=/",
++		"PATH=/sbin:/bin",
++		NULL
++	};
++
++	char tfb[5];
++	char tcons[5];
++	unsigned char fb = (int) con2fb_map[vc];
++
++	char *argv[] = {
++		fbcon_decor_path,
++		"2",
++		cmd,
++		tcons,
++		tfb,
++		vc_cons[vc].d->vc_decor.theme,
++		NULL
++	};
++
++	snprintf(tfb, 5, "%d", fb);
++	snprintf(tcons, 5, "%d", vc);
++
++	return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++	struct fb_info *info;
++
++	if (!vc->vc_decor.state)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	vc->vc_decor.state = 0;
++	vc_resize(vc, info->var.xres / vc->vc_font.width,
++		  info->var.yres / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num && redraw) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++	struct fb_info *info;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++	    info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++	    vc->vc_num == fg_console))
++		return -EINVAL;
++
++	vc->vc_decor.state = 1;
++	vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++		  vc->vc_decor.theight / vc->vc_font.height);
++
++	if (fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++			 vc->vc_num);
++
++	return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++	int ret;
++
++	console_lock();
++	if (!state)
++		ret = fbcon_decor_disable(vc, 1);
++	else
++		ret = fbcon_decor_enable(vc);
++	console_unlock();
++
++	return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++	*state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	char *tmp;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL || !cfg->twidth || !cfg->theight ||
++	    cfg->tx + cfg->twidth  > info->var.xres ||
++	    cfg->ty + cfg->theight > info->var.yres)
++		return -EINVAL;
++
++	len = strnlen_user(cfg->theme, MAX_ARG_STRLEN);
++	if (!len || len > FBCON_DECOR_THEME_LEN)
++		return -EINVAL;
++	tmp = kmalloc(len, GFP_KERNEL);
++	if (!tmp)
++		return -ENOMEM;
++	if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++		return -EFAULT;
++	cfg->theme = tmp;
++	cfg->state = 0;
++
++	console_lock();
++	if (vc->vc_decor.state)
++		fbcon_decor_disable(vc, 1);
++	kfree(vc->vc_decor.theme);
++	vc->vc_decor = *cfg;
++	console_unlock();
++
++	printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++			 vc->vc_num, vc->vc_decor.theme);
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc,
++					struct vc_decor *decor)
++{
++	char __user *tmp;
++
++	tmp = decor->theme;
++	*decor = vc->vc_decor;
++	decor->theme = tmp;
++
++	if (vc->vc_decor.theme) {
++		if (copy_to_user(tmp, vc->vc_decor.theme,
++					strlen(vc->vc_decor.theme) + 1))
++			return -EFAULT;
++	} else
++		if (put_user(0, tmp))
++			return -EFAULT;
++
++	return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img,
++						unsigned char origin)
++{
++	struct fb_info *info;
++	int len;
++	u8 *tmp;
++
++	if (vc->vc_num != fg_console)
++		return -EINVAL;
++
++	info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++	if (info == NULL)
++		return -EINVAL;
++
++	if (img->width != info->var.xres || img->height != info->var.yres) {
++		printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++		printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height,
++				info->var.xres, info->var.yres);
++		return -EINVAL;
++	}
++
++	if (img->depth != info->var.bits_per_pixel) {
++		printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++		return -EINVAL;
++	}
++
++	if (img->depth == 8) {
++		if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++		    !img->cmap.blue)
++			return -EINVAL;
++
++		tmp = vmalloc(img->cmap.len * 3 * 2);
++		if (!tmp)
++			return -ENOMEM;
++
++		if (copy_from_user(tmp,
++				(void __user *)img->cmap.red,
++						(img->cmap.len << 1)) ||
++			copy_from_user(tmp + (img->cmap.len << 1),
++				(void __user *)img->cmap.green,
++						(img->cmap.len << 1)) ||
++			copy_from_user(tmp + (img->cmap.len << 2),
++				(void __user *)img->cmap.blue,
++						(img->cmap.len << 1))) {
++			vfree(tmp);
++			return -EFAULT;
++		}
++
++		img->cmap.transp = NULL;
++		img->cmap.red = (u16 *)tmp;
++		img->cmap.green = img->cmap.red + img->cmap.len;
++		img->cmap.blue = img->cmap.green + img->cmap.len;
++	} else {
++		img->cmap.red = NULL;
++	}
++
++	len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++	/*
++	 * Allocate an additional byte so that we never go outside of the
++	 * buffer boundaries in the rendering functions in a 24 bpp mode.
++	 */
++	tmp = vmalloc(len + 1);
++
++	if (!tmp)
++		goto out;
++
++	if (copy_from_user(tmp, (void __user *)img->data, len))
++		goto out;
++
++	img->data = tmp;
++
++	console_lock();
++
++	if (info->bgdecor.data)
++		vfree((u8 *)info->bgdecor.data);
++	if (info->bgdecor.cmap.red)
++		vfree(info->bgdecor.cmap.red);
++
++	info->bgdecor = *img;
++
++	if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++		redraw_screen(vc, 0);
++		update_region(vc, vc->vc_origin +
++			      vc->vc_size_row * vc->vc_top,
++			      vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++		fbcon_decor_clear_margins(vc, info, 0);
++	}
++
++	console_unlock();
++
++	return 0;
++
++out:
++	if (img->cmap.red)
++		vfree(img->cmap.red);
++
++	if (tmp)
++		vfree(tmp);
++	return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++	struct fbcon_decor_iowrapper __user *wrapper = (void __user *) arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data, &wrapper->data);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC:
++	{
++		struct fb_image img;
++
++		if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++	case FBIOCONDECOR_SETCFG:
++	{
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++	case FBIOCONDECOR_GETCFG:
++	{
++		int rval;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++			return -EFAULT;
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++			return -EFAULT;
++		return rval;
++	}
++	case FBIOCONDECOR_SETSTATE:
++	{
++		unsigned int state = 0;
++
++		if (get_user(state, (unsigned int __user *)data))
++			return -EFAULT;
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++	case FBIOCONDECOR_GETSTATE:
++	{
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		return put_user(state, (unsigned int __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++	struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++	struct vc_data *vc = NULL;
++	unsigned short vc_num = 0;
++	unsigned char origin = 0;
++	compat_uptr_t data_compat = 0;
++	void __user *data = NULL;
++
++	if (!access_ok(VERIFY_READ, wrapper,
++			sizeof(struct fbcon_decor_iowrapper32)))
++		return -EFAULT;
++
++	__get_user(vc_num, &wrapper->vc);
++	__get_user(origin, &wrapper->origin);
++	__get_user(data_compat, &wrapper->data);
++	data = compat_ptr(data_compat);
++
++	if (!vc_cons_allocated(vc_num))
++		return -EINVAL;
++
++	vc = vc_cons[vc_num].d;
++
++	switch (cmd) {
++	case FBIOCONDECOR_SETPIC32:
++	{
++		struct fb_image32 img_compat;
++		struct fb_image img;
++
++		if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++			return -EFAULT;
++
++		fb_image_from_compat(img, img_compat);
++
++		return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++	}
++
++	case FBIOCONDECOR_SETCFG32:
++	{
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++
++		vc_decor_from_compat(cfg, cfg_compat);
++
++		return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++	}
++
++	case FBIOCONDECOR_GETCFG32:
++	{
++		int rval;
++		struct vc_decor32 cfg_compat;
++		struct vc_decor cfg;
++
++		if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		cfg.theme = compat_ptr(cfg_compat.theme);
++
++		rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++		vc_decor_to_compat(cfg_compat, cfg);
++
++		if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++			return -EFAULT;
++		return rval;
++	}
++
++	case FBIOCONDECOR_SETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		if (get_user(state_compat, (compat_uint_t __user *)data))
++			return -EFAULT;
++
++		state = (unsigned int)state_compat;
++
++		return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++	}
++
++	case FBIOCONDECOR_GETSTATE32:
++	{
++		compat_uint_t state_compat = 0;
++		unsigned int state = 0;
++
++		fbcon_decor_ioctl_dogetstate(vc, &state);
++		state_compat = (compat_uint_t)state;
++
++		return put_user(state_compat, (compat_uint_t __user *)data);
++	}
++
++	default:
++		return -ENOIOCTLCMD;
++	}
++}
++#else
++  #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++	.owner = THIS_MODULE,
++	.unlocked_ioctl = fbcon_decor_ioctl,
++	.compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++	.minor = MISC_DYNAMIC_MINOR,
++	.name = "fbcondecor",
++	.fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++	int i;
++
++	for (i = 0; i < num_registered_fb; i++) {
++		registered_fb[i]->bgdecor.data = NULL;
++		registered_fb[i]->bgdecor.cmap.red = NULL;
++	}
++
++	for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++		vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++						vc_cons[i].d->vc_decor.theight = 0;
++		vc_cons[i].d->vc_decor.theme = NULL;
++	}
++}
++
++int fbcon_decor_init(void)
++{
++	int i;
++
++	fbcon_decor_reset();
++
++	if (initialized)
++		return 0;
++
++	i = misc_register(&fbcon_decor_dev);
++	if (i) {
++		printk(KERN_ERR "fbcondecor: failed to register device\n");
++		return i;
++	}
++
++	fbcon_decor_call_helper("init", 0);
++	initialized = 1;
++	return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++	fbcon_decor_reset();
++	return 0;
++}
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 000000000000..c49386c16695
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,77 @@
++/*
++ *  linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ *  Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char *cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x, y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x, y) (fbcon_decor_active_nores(x, y) &&	\
++				x->bgdecor.width == x->var.xres &&	\
++				x->bgdecor.height == x->var.yres &&	\
++				x->bgdecor.depth == x->var.bits_per_pixel)
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char *cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x, y) (0)
++#define fbcon_decor_active(x, y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 5e58f5ec0a28..1daa8c2cb2d8 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1226,7 +1226,6 @@ config FB_MATROX
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+ 	select FB_CFB_IMAGEBLIT
+-	select FB_TILEBLITTING
+ 	select FB_MACMODES if PPC_PMAC
+ 	---help---
+ 	  Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 790900d646c0..3f940c93752c 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+ 
+ /*
+  * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ 	area.height = height * vc->vc_font.height;
+ 	area.width = width * vc->vc_font.width;
+ 
++	if (fbcon_decor_active(info, vc)) {
++		area.sx += vc->vc_decor.tx;
++		area.sy += vc->vc_decor.ty;
++		area.dx += vc->vc_decor.tx;
++		area.dy += vc->vc_decor.ty;
++	}
++
+ 	info->fbops->fb_copyarea(info, &area);
+ }
+ 
+@@ -379,11 +387,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ 	cursor.image.depth = 1;
+ 	cursor.rop = ROP_XOR;
+ 
+-	if (info->fbops->fb_cursor)
+-		err = info->fbops->fb_cursor(info, &cursor);
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_cursor(info, &cursor);
++	} else {
++		if (info->fbops->fb_cursor)
++			err = info->fbops->fb_cursor(info, &cursor);
+ 
+-	if (err)
+-		soft_cursor(info, &cursor);
++		if (err)
++			soft_cursor(info, &cursor);
++	}
+ 
+ 	ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index 68a113594808..21f977cb59d2 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+ 
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+     0x0000, 0xaaaa
+ };
+@@ -256,9 +258,12 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ 				break;
+ 		}
+ 	}
+-	if (rc == 0)
++	if (rc == 0) {
+ 		fb_copy_cmap(cmap, &info->cmap);
+-
++		if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		    info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++			fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++	}
+ 	return rc;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 04612f938bab..95c349200078 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -80,6 +80,7 @@
+ #include <asm/irq.h>
+ 
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+ 
+ #ifdef FBCONDEBUG
+ #  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -95,7 +96,7 @@ enum {
+ 
+ static struct display fb_display[MAX_NR_CONSOLES];
+ 
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+ 
+ static int logo_lines;
+@@ -282,7 +283,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ 		!vt_force_oops_output(vc);
+ }
+ 
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ 	      u16 c, int is_fg)
+ {
+ 	int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ 		info_idx = -1;
+ 	} else {
+ 		fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++		fbcon_decor_init();
++#endif
+ 	}
+ 
+ 	return err;
+@@ -1013,6 +1017,12 @@ static const char *fbcon_startup(void)
+ 	rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 	cols /= vc->vc_font.width;
+ 	rows /= vc->vc_font.height;
++
++	if (fbcon_decor_active(info, vc)) {
++		cols = vc->vc_decor.twidth / vc->vc_font.width;
++		rows = vc->vc_decor.theight / vc->vc_font.height;
++	}
++
+ 	vc_resize(vc, cols, rows);
+ 
+ 	DPRINTK("mode:   %s\n", info->fix.id);
+@@ -1042,7 +1052,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ 	cap = info->flags;
+ 
+ 	if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+-	    (info->fix.type == FB_TYPE_TEXT))
++	    (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ 		logo = 0;
+ 
+ 	if (var_to_display(p, &info->var, info))
+@@ -1275,6 +1285,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ 		fbcon_clear_margins(vc, 0);
+ 	}
+ 
++	if (fbcon_decor_active(info, vc)) {
++		fbcon_decor_clear(vc, info, sy, sx, height, width);
++		return;
++	}
++
+ 	/* Split blits that cross physical y_wrap boundary */
+ 
+ 	y_break = p->vrows - p->yscroll;
+@@ -1294,10 +1309,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ 	struct display *p = &fb_display[vc->vc_num];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+-			   get_color(vc, info, scr_readw(s), 1),
+-			   get_color(vc, info, scr_readw(s), 0));
++	if (!fbcon_is_inactive(vc, info)) {
++
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++		else
++			ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++				   get_color(vc, info, scr_readw(s), 1),
++				   get_color(vc, info, scr_readw(s), 0));
++	}
+ }
+ 
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1313,8 +1333,12 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ 	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ 	struct fbcon_ops *ops = info->fbcon_par;
+ 
+-	if (!fbcon_is_inactive(vc, info))
+-		ops->clear_margins(vc, info, margin_color, bottom_only);
++	if (!fbcon_is_inactive(vc, info)) {
++		if (fbcon_decor_active(info, vc))
++			fbcon_decor_clear_margins(vc, info, bottom_only);
++		else
++			ops->clear_margins(vc, info, margin_color, bottom_only);
++	}
+ }
+ 
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1835,7 +1859,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 			count = vc->vc_rows;
+ 		if (softback_top)
+ 			fbcon_softback_note(vc, t, count);
+-		if (logo_shown >= 0)
++		if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ 			goto redraw_up;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+@@ -1928,6 +1952,8 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ 			count = vc->vc_rows;
+ 		if (logo_shown >= 0)
+ 			goto redraw_down;
++		if (fbcon_decor_active(info, vc))
++			goto redraw_down;
+ 		switch (p->scrollmode) {
+ 		case SCROLL_MOVE:
+ 			fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2076,6 +2102,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ 		}
+ 		return;
+ 	}
++
++	if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++		/* must use slower redraw bmove to keep background pic intact */
++		fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++		return;
++	}
++
+ 	ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ 		   height, width);
+ }
+@@ -2146,8 +2179,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ 	var.yres = virt_h * virt_fh;
+ 	x_diff = info->var.xres - var.xres;
+ 	y_diff = info->var.yres - var.yres;
+-	if (x_diff < 0 || x_diff > virt_fw ||
+-	    y_diff < 0 || y_diff > virt_fh) {
++	if ((x_diff < 0 || x_diff > virt_fw ||
++		y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ 		const struct fb_videomode *mode;
+ 
+ 		DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2183,6 +2216,22 @@ static int fbcon_switch(struct vc_data *vc)
+ 
+ 	info = registered_fb[con2fb_map[vc->vc_num]];
+ 	ops = info->fbcon_par;
++	prev_console = ops->currcon;
++	if (prev_console != -1)
++		old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++	if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++			// Clear the screen to avoid displaying funky colors
++			// during palette updates.
++			memset((u8 *)info->screen_base + info->fix.line_length * info->var.yoffset,
++			       0, info->var.yres * info->fix.line_length);
++		}
++	}
++#endif
+ 
+ 	if (softback_top) {
+ 		if (softback_lines)
+@@ -2201,9 +2250,6 @@ static int fbcon_switch(struct vc_data *vc)
+ 		logo_shown = FBCON_LOGO_CANSHOW;
+ 	}
+ 
+-	prev_console = ops->currcon;
+-	if (prev_console != -1)
+-		old_info = registered_fb[con2fb_map[prev_console]];
+ 	/*
+ 	 * FIXME: If we have multiple fbdev's loaded, we need to
+ 	 * update all info->currcon.  Perhaps, we can place this
+@@ -2247,6 +2293,18 @@ static int fbcon_switch(struct vc_data *vc)
+ 			fbcon_del_cursor_timer(old_info);
+ 	}
+ 
++	if (fbcon_decor_active_vc(vc)) {
++		struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++		if (!vc_curr->vc_decor.theme ||
++			strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++			(fbcon_decor_active_nores(info, vc_curr) &&
++			 !fbcon_decor_active(info, vc_curr))) {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++	}
++
+ 	if (fbcon_is_inactive(vc, info) ||
+ 	    ops->blank_state != FB_BLANK_UNBLANK)
+ 		fbcon_del_cursor_timer(info);
+@@ -2355,15 +2413,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ 		}
+ 	}
+ 
+- 	if (!fbcon_is_inactive(vc, info)) {
++	if (!fbcon_is_inactive(vc, info)) {
+ 		if (ops->blank_state != blank) {
+ 			ops->blank_state = blank;
+ 			fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ 			ops->cursor_flash = (!blank);
+ 
+-			if (!(info->flags & FBINFO_MISC_USEREVENT))
+-				if (fb_blank(info, blank))
+-					fbcon_generic_blank(vc, info, blank);
++			if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++				if (fb_blank(info, blank)) {
++					if (fbcon_decor_active(info, vc))
++						fbcon_decor_blank(vc, info, blank);
++					else
++						fbcon_generic_blank(vc, info, blank);
++				}
++			}
+ 		}
+ 
+ 		if (!blank)
+@@ -2546,13 +2609,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ 		set_vc_hi_font(vc, true);
+ 
+ 	if (resize) {
++		/* reset wrap/pan */
+ 		int cols, rows;
+ 
+ 		cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++		if (fbcon_decor_active(info, vc)) {
++			info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++			cols = vc->vc_decor.twidth;
++			rows = vc->vc_decor.theight;
++		}
+ 		cols /= w;
+ 		rows /= h;
++
+ 		vc_resize(vc, cols, rows);
++
+ 		if (con_is_visible(vc) && softback_buf)
+ 			fbcon_update_softback(vc);
+ 	} else if (con_is_visible(vc)
+@@ -2681,7 +2753,11 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ 	int i, j, k, depth;
+ 	u8 val;
+ 
+-	if (fbcon_is_inactive(vc, info))
++	if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++			|| vc->vc_num != fg_console
++#endif
++		)
+ 		return;
+ 
+ 	if (!con_is_visible(vc))
+@@ -2707,7 +2783,47 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ 	} else
+ 		fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+ 
+-	fb_set_cmap(&palette_cmap, info);
++	if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++	    info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++		u16 *red, *green, *blue;
++		int minlen = min(min(info->var.red.length, info->var.green.length),
++				     info->var.blue.length);
++
++		struct fb_cmap cmap = {
++			.start = 0,
++			.len = (1 << minlen),
++			.red = NULL,
++			.green = NULL,
++			.blue = NULL,
++			.transp = NULL
++		};
++
++		red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++		if (!red)
++			goto out;
++
++		green = red + 256;
++		blue = green + 256;
++		cmap.red = red;
++		cmap.green = green;
++		cmap.blue = blue;
++
++		for (i = 0; i < cmap.len; i++)
++			red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++
++		fb_set_cmap(&cmap, info);
++		fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++		kfree(red);
++
++		return;
++
++	} else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++		   info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++		fb_set_cmap(&info->bgdecor.cmap, info);
++
++out:	fb_set_cmap(&palette_cmap, info);
+ }
+ 
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+@@ -2932,7 +3048,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++
++		if (!fbcon_decor_active_nores(info, vc)) {
++			vc_resize(vc, cols, rows);
++		} else {
++			fbcon_decor_disable(vc, 0);
++			fbcon_decor_call_helper("modechange", vc->vc_num);
++		}
++
+ 		updatescrollmode(p, info, vc);
+ 		scrollback_max = 0;
+ 		scrollback_current = 0;
+@@ -2977,7 +3100,8 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ 		rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ 		cols /= vc->vc_font.width;
+ 		rows /= vc->vc_font.height;
+-		vc_resize(vc, cols, rows);
++		if (!fbcon_decor_active_nores(info, vc))
++			vc_resize(vc, cols, rows);
+ 	}
+ 
+ 	if (fg != -1)
+@@ -3618,6 +3742,7 @@ static void fbcon_exit(void)
+ 		}
+ 	}
+ 
++	fbcon_decor_exit();
+ 	fbcon_has_exited = 1;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index f741ba8df01b..b0141433d249 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1253,15 +1253,6 @@ struct fb_fix_screeninfo32 {
+ 	u16			reserved[3];
+ };
+ 
+-struct fb_cmap32 {
+-	u32			start;
+-	u32			len;
+-	compat_caddr_t	red;
+-	compat_caddr_t	green;
+-	compat_caddr_t	blue;
+-	compat_caddr_t	transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ 			  unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 000000000000..15143556c2aa
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	char *theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++	__u8 bg_color;				/* The color that is to be treated as transparent */
++	__u8 state;				/* Current decor state: 0 = off, 1 = on */
++	__u16 tx, ty;				/* Top left corner coordinates of the text field */
++	__u16 twidth, theight;			/* Width and height of the text field */
++	compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++	(to).bg_color = (from).bg_color; \
++	(to).state    = (from).state; \
++	(to).tx       = (from).tx; \
++	(to).ty       = (from).ty; \
++	(to).twidth   = (from).twidth; \
++	(to).theight  = (from).theight; \
++	(to).theme    = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index c0ec478ea5bf..8bfed6b21fc9 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -21,6 +21,7 @@ struct vt_struct;
+ struct uni_pagedir;
+ 
+ #define NPAR 16
++#include <linux/console_decor.h>
+ 
+ /*
+  * Example: vc_data of a console that was scrolled 3 lines down.
+@@ -141,6 +142,8 @@ struct vc_data {
+ 	struct uni_pagedir *vc_uni_pagedir;
+ 	struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ 	bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++	struct vc_decor vc_decor;
+ 	/* additional information is in vt_kern.h */
+ };
+ 
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index bc24e48e396d..ad7d182c7545 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -239,6 +239,34 @@ struct fb_deferred_io {
+ };
+ #endif
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++	__u32 dx;			/* Where to place image */
++	__u32 dy;
++	__u32 width;			/* Size of image */
++	__u32 height;
++	__u32 fg_color;			/* Only used when a mono bitmap */
++	__u32 bg_color;
++	__u8  depth;			/* Depth of the image */
++	const compat_uptr_t data;	/* Pointer to image data */
++	struct fb_cmap32 cmap;		/* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++	(to).dx       = (from).dx; \
++	(to).dy       = (from).dy; \
++	(to).width    = (from).width; \
++	(to).height   = (from).height; \
++	(to).fg_color = (from).fg_color; \
++	(to).bg_color = (from).bg_color; \
++	(to).depth    = (from).depth; \
++	(to).data     = compat_ptr((from).data); \
++	fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+  * Frame buffer operations
+  *
+@@ -509,6 +537,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED	1
+ 	u32 state;			/* Hardware state i.e suspend */
+ 	void *fbcon_par;                /* fbcon use-only private area */
++
++	struct fb_image bgdecor;
++
+ 	/* From here on everything is device dependent */
+ 	void *par;
+ 	/* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index 6cd9b198b7c6..a228440649fa 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -9,6 +9,23 @@
+ 
+ #define FB_MAX			32	/* sufficient for now */
+ 
++struct fbcon_decor_iowrapper {
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32 {
++	unsigned short vc;		/* Virtual console */
++	unsigned char origin;		/* Point of origin of the request */
++	compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+    0x46 is 'F'								*/
+ #define FBIOGET_VSCREENINFO	0x4600
+@@ -36,6 +53,25 @@
+ #define FBIOGET_DISPINFO        0x4618
+ #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
+ 
++#define FBIOCONDECOR_SETCFG	_IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG	_IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE	_IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32	_IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32	_IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32	_IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32	_IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32	_IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN		128	/* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL	0	/* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER	1	/* User ioctl origin */
++
+ #define FB_TYPE_PACKED_PIXELS		0	/* Packed Pixels	*/
+ #define FB_TYPE_PLANES			1	/* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES	2	/* Interleaved planes	*/
+@@ -278,6 +314,29 @@ struct fb_var_screeninfo {
+ 	__u32 reserved[4];		/* Reserved for future compatibility */
+ };
+ 
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++	__u32 start;
++	__u32 len;			/* Number of entries */
++	compat_uptr_t red;		/* Red values	*/
++	compat_uptr_t green;
++	compat_uptr_t blue;
++	compat_uptr_t transp;		/* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++	(to).start  = (from).start; \
++	(to).len    = (from).len; \
++	(to).red    = compat_ptr((from).red); \
++	(to).green  = compat_ptr((from).green); \
++	(to).blue   = compat_ptr((from).blue); \
++	(to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ 	__u32 start;			/* First entry	*/
+ 	__u32 len;			/* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d9c31bc2eaea..e33ac56cc32a 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -150,6 +150,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+ 
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -283,6 +287,15 @@ static struct ctl_table sysctl_base_table[] = {
+ 		.mode		= 0555,
+ 		.child		= dev_table,
+ 	},
++#ifdef CONFIG_FB_CON_DECOR
++	{
++		.procname	= "fbcondecor",
++		.data		= &fbcon_decor_path,
++		.maxlen		= KMOD_PATH_LEN,
++		.mode		= 0644,
++		.proc_handler	= &proc_dostring,
++	},
++#endif
+ 	{ }
+ };
+ 

diff --git a/4400_alpha-sysctl-uac.patch b/4400_alpha-sysctl-uac.patch
new file mode 100644
index 0000000..d42b4ed
--- /dev/null
+++ b/4400_alpha-sysctl-uac.patch
@@ -0,0 +1,142 @@
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 7f312d8..1eb686b 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -697,6 +697,33 @@ config HZ
+ 	default 1200 if HZ_1200
+ 	default 1024
+
++config ALPHA_UAC_SYSCTL
++       bool "Configure UAC policy via sysctl"
++       depends on SYSCTL
++       default y
++       ---help---
++         Configuring the UAC (unaligned access control) policy on a Linux
++         system usually involves setting a compile time define. If you say
++         Y here, you will be able to modify the UAC policy at runtime using
++         the /proc interface.
++
++         The UAC policy defines the action Linux should take when an
++         unaligned memory access occurs. The action can include printing a
++         warning message (NOPRINT), sending a signal to the offending
++         program to help developers debug their applications (SIGBUS), or
++         disabling the transparent fixing (NOFIX).
++
++         The sysctls will be initialized to the compile-time defined UAC
++         policy. You can change these manually, or with the sysctl(8)
++         userspace utility.
++
++         To disable the warning messages at runtime, you would use
++
++           echo 1 > /proc/sys/kernel/uac/noprint
++
++         This is pretty harmless. Say Y if you're not sure.
++
++
+ source "drivers/pci/Kconfig"
+ source "drivers/eisa/Kconfig"
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 74aceea..cb35d80 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -103,6 +103,49 @@ static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ 			   "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
+ #endif
+
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++
++#include <linux/sysctl.h>
++
++static int enabled_noprint = 0;
++static int enabled_sigbus = 0;
++static int enabled_nofix = 0;
++
++struct ctl_table uac_table[] = {
++       {
++               .procname       = "noprint",
++               .data           = &enabled_noprint,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       {
++               .procname       = "sigbus",
++               .data           = &enabled_sigbus,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       {
++               .procname       = "nofix",
++               .data           = &enabled_nofix,
++               .maxlen         = sizeof (int),
++               .mode           = 0644,
++               .proc_handler = &proc_dointvec,
++       },
++       { }
++};
++
++static int __init init_uac_sysctl(void)
++{
++   /* Initialize sysctls with the #defined UAC policy */
++   enabled_noprint = (test_thread_flag (TS_UAC_NOPRINT)) ? 1 : 0;
++   enabled_sigbus = (test_thread_flag (TS_UAC_SIGBUS)) ? 1 : 0;
++   enabled_nofix = (test_thread_flag (TS_UAC_NOFIX)) ? 1 : 0;
++   return 0;
++}
++#endif
++
+ static void
+ dik_show_code(unsigned int *pc)
+ {
+@@ -785,7 +828,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
+ 	/* Check the UAC bits to decide what the user wants us to do
+ 	   with the unaliged access.  */
+
++#ifndef CONFIG_ALPHA_UAC_SYSCTL
+ 	if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
++#else  /* CONFIG_ALPHA_UAC_SYSCTL */
++	if (!(current_thread_info()->status & TS_UAC_NOPRINT) &&
++	    !(enabled_noprint)) {
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ 		if (__ratelimit(&ratelimit)) {
+ 			printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
+ 			       current->comm, task_pid_nr(current),
+@@ -1090,3 +1138,6 @@ trap_init(void)
+ 	wrent(entSys, 5);
+ 	wrent(entDbg, 6);
+ }
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++       __initcall(init_uac_sysctl);
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 87b2fc3..55021a8 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -152,6 +152,11 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++extern struct ctl_table uac_table[];
++#endif
++
+ #ifdef CONFIG_SPARC
+ #endif
+
+@@ -1844,6 +1849,13 @@ static struct ctl_table debug_table[] = {
+ 		.extra2		= &one,
+ 	},
+ #endif
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++	{
++	        .procname   = "uac",
++		.mode       = 0555,
++	        .child      = uac_table,
++	 },
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ 	{ }
+ };
+

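On an Alpha kernel built with the `CONFIG_ALPHA_UAC_SYSCTL` option added above, the three integer knobs land under `/proc/sys/kernel/uac/`. A minimal sketch of the runtime interface, guarded so it is harmless on any other kernel:

```shell
# Sketch of the runtime interface added by 4400_alpha-sysctl-uac.patch.
# The /proc paths only exist on an Alpha kernel built with
# CONFIG_ALPHA_UAC_SYSCTL; the guard keeps this a no-op elsewhere.
uac_dir=/proc/sys/kernel/uac
if [ -d "$uac_dir" ]; then
    for k in noprint sigbus nofix; do
        printf 'uac/%s = %s\n' "$k" "$(cat "$uac_dir/$k")"
    done
    # Silence unaligned-access warnings, as the Kconfig help text suggests:
    #   echo 1 > "$uac_dir"/noprint
else
    echo "UAC sysctls not present (CONFIG_ALPHA_UAC_SYSCTL kernel only)"
fi
```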
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 5555b8a..56293b0 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,157 +1,9 @@
---- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
-+++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
-@@ -8,4 +8,6 @@ config SRCARCH
- 	string
- 	option env="SRCARCH"
+--- a/Kconfig	2018-06-23 18:12:59.733149912 -0400
++++ b/Kconfig	2018-06-23 18:15:17.972352097 -0400
+@@ -10,3 +10,6 @@ comment "Compiler: $(CC_VERSION_TEXT)"
+ source "scripts/Kconfig.include"
  
-+source "distro/Kconfig"
-+
- source "arch/$SRCARCH/Kconfig"
---- /dev/null	2017-03-02 01:55:04.096566155 -0500
-+++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
-@@ -0,0 +1,145 @@
-+menu "Gentoo Linux"
-+
-+config GENTOO_LINUX
-+	bool "Gentoo Linux support"
-+
-+	default y
-+
-+	help
-+		In order to boot Gentoo Linux a minimal set of config settings needs to
-+		be enabled in the kernel; to avoid the users from having to enable them
-+		manually as part of a Gentoo Linux installation or a new clean config,
-+		we enable these config settings by default for convenience.
-+
-+		See the settings that become available for more details and fine-tuning.
-+
-+config GENTOO_LINUX_UDEV
-+	bool "Linux dynamic and persistent device naming (userspace devfs) support"
-+
-+	depends on GENTOO_LINUX
-+	default y if GENTOO_LINUX
-+
-+	select DEVTMPFS
-+	select TMPFS
-+	select UNIX
-+
-+	select MMU
-+	select SHMEM
-+
-+	help
-+		In order to boot Gentoo Linux a minimal set of config settings needs to
-+		be enabled in the kernel; to avoid the users from having to enable them
-+		manually as part of a Gentoo Linux installation or a new clean config,
-+		we enable these config settings by default for convenience.
-+
-+		Currently this only selects TMPFS, DEVTMPFS and their dependencies.
-+		TMPFS is enabled to maintain a tmpfs file system at /dev/shm, /run and
-+		/sys/fs/cgroup; DEVTMPFS to maintain a devtmpfs file system at /dev.
-+
-+		Some of these are critical files that need to be available early in the
-+		boot process; if not available, it causes sysfs and udev to malfunction.
-+
-+		To ensure Gentoo Linux boots, it is best to leave this setting enabled;
-+		if you run a custom setup, you could consider whether to disable this.
-+
-+config GENTOO_LINUX_PORTAGE
-+	bool "Select options required by Portage features"
-+
-+	depends on GENTOO_LINUX
-+	default y if GENTOO_LINUX
-+
-+	select CGROUPS
-+	select NAMESPACES
-+	select IPC_NS
-+	select NET_NS
-+	select SYSVIPC
-+
-+	help
-+		This enables options required by various Portage FEATURES.
-+		Currently this selects:
-+
-+		CGROUPS     (required for FEATURES=cgroup)
-+		IPC_NS      (required for FEATURES=ipc-sandbox)
-+		NET_NS      (required for FEATURES=network-sandbox)
-+		SYSVIPC     (required by IPC_NS)
-+   
+ source "arch/$(SRCARCH)/Kconfig"
 +
-+		It is highly recommended that you leave this enabled as these FEATURES
-+		are, or will soon be, enabled by default.
-+
-+menu "Support for init systems, system and service managers"
-+	visible if GENTOO_LINUX
-+
-+config GENTOO_LINUX_INIT_SCRIPT
-+	bool "OpenRC, runit and other script based systems and managers"
-+
-+	default y if GENTOO_LINUX
-+
-+	depends on GENTOO_LINUX
-+
-+	select BINFMT_SCRIPT
-+
-+	help
-+		The init system is the first thing that loads after the kernel booted.
-+
-+		These config settings allow you to select which init systems to support;
-+		instead of having to select all the individual settings all over the
-+		place, these settings allows you to select all the settings at once.
-+
-+		This particular setting enables all the known requirements for OpenRC,
-+		runit and similar script based systems and managers.
-+
-+		If you are unsure about this, it is best to leave this setting enabled.
-+
-+config GENTOO_LINUX_INIT_SYSTEMD
-+	bool "systemd"
-+
-+	default n
-+
-+	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
-+
-+	select AUTOFS4_FS
-+	select BLK_DEV_BSG
-+	select CGROUPS
-+	select CHECKPOINT_RESTORE
-+	select CRYPTO_HMAC 
-+	select CRYPTO_SHA256
-+	select CRYPTO_USER_API_HASH
-+	select DEVPTS_MULTIPLE_INSTANCES
-+	select DMIID if X86_32 || X86_64 || X86
-+	select EPOLL
-+	select FANOTIFY
-+	select FHANDLE
-+	select INOTIFY_USER
-+	select IPV6
-+	select NET
-+	select NET_NS
-+	select PROC_FS
-+	select SECCOMP
-+	select SECCOMP_FILTER
-+	select SIGNALFD
-+	select SYSFS
-+	select TIMERFD
-+	select TMPFS_POSIX_ACL
-+	select TMPFS_XATTR
-+
-+	select ANON_INODES
-+	select BLOCK
-+	select EVENTFD
-+	select FSNOTIFY
-+	select INET
-+	select NLATTR
-+
-+	help
-+		The init system is the first thing that loads after the kernel booted.
-+
-+		These config settings allow you to select which init systems to support;
-+		instead of having to select all the individual settings all over the
-+		place, these settings allows you to select all the settings at once.
-+
-+		This particular setting enables all the known requirements for systemd;
-+		it also enables suggested optional settings, as the package suggests to.
-+
-+endmenu
++source "distro/Kconfig"
 +
-+endmenu

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..a8aa759
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,545 @@
+WARNING
+This patch works with gcc versions 4.9+ and with kernel version 4.13+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+
+It also offers to compile passing the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'match=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=4.13
+gcc version >=4.9
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/module.h	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/include/asm/module.h	2018-03-10 06:42:38.688317317 -0500
+@@ -25,6 +25,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +63,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Kconfig.cpu	2018-03-10 06:45:50.244371799 -0500
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	---help---
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -148,9 +149,8 @@ config MPENTIUM4
+ 		-Paxville
+ 		-Dempsey
+ 
+-
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -158,7 +158,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -166,12 +166,83 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
+ 
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	---help---
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	---help---
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	---help---
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
++
+ config MCRUSOE
+ 	bool "Crusoe"
+ 	depends on X86_32
+@@ -253,6 +324,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	---help---
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,8 +334,19 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -271,14 +354,88 @@ config MCORE2
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
+ 
+-config MATOM
+-	bool "Intel Atom"
++	  Enables -march=core2
++
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -287,6 +444,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -311,7 +481,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -342,35 +512,36 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE)
++	---help---
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
++
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+@@ -380,7 +551,7 @@ config X86_CMPXCHG64
+ # generates cmov.
+ config X86_CMOV
+ 	def_bool y
+-	depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ config X86_MINIMUM_CPU_FAMILY
+ 	int
+--- a/arch/x86/Makefile	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile	2018-03-10 06:47:00.284240139 -0500
+@@ -124,13 +124,42 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++        cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/Makefile_32.cpu	2018-01-28 16:20:33.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu	2018-03-10 06:47:46.025992644 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +43,17 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486


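The Makefile hunks above lean heavily on kbuild's `$(call cc-option,flag,fallback)`, which probes whether the compiler accepts a flag before adding it. A rough userspace re-implementation of that idea in sh — the probe command line is an illustrative assumption, not kbuild's exact code:

```shell
# Sketch of kbuild's cc-option: print the first flag the compiler accepts,
# else the fallback. All compiler output is discarded; only the probe's
# exit status matters.
cc_option() {
    if ${CC:-cc} -Werror "$1" -E -x c /dev/null >/dev/null 2>&1; then
        printf '%s\n' "$1"
    else
        printf '%s\n' "${2:-}"
    fi
}

# e.g. fall back to -march=athlon when -march=znver1 is unknown to the compiler:
# cc_option -march=znver1 -march=athlon
```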
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     125fe9bfdd64004a6a01d2681ec493d7e444922b
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:43:43 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=125fe9bf

Removal of redundant patch.

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 ---
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 --------------------------
 2 files changed, 44 deletions(-)

diff --git a/0000_README b/0000_README
index c801597..f72e2ad 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
-From:   http://www.kernel.org
-Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
deleted file mode 100644
index 88c2ec6..0000000
--- a/1700_x86-l1tf-config-kvm-build-error-fix.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
-From: Guenter Roeck <linux@roeck-us.net>
-Date: Wed, 15 Aug 2018 08:38:33 -0700
-Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
-From: Guenter Roeck <linux@roeck-us.net>
-
-commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
-
-allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
-
-  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
-
-Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
-Reported-by: Meelis Roos <mroos@linux.ee>
-Cc: Meelis Roos <mroos@linux.ee>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
----
- arch/x86/kernel/cpu/bugs.c |    3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
---- a/arch/x86/kernel/cpu/bugs.c
-+++ b/arch/x86/kernel/cpu/bugs.c
-@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
- enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
- #if IS_ENABLED(CONFIG_KVM_INTEL)
- EXPORT_SYMBOL_GPL(l1tf_mitigation);
--
-+#endif
- enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
- EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
--#endif
- 
- static void __init l1tf_select_mitigation(void)
- {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     37a9cc2ec281085f4896d9928b79147c086194a2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 15 16:36:52 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=37a9cc2e

Linux patch 4.18.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1000_linux-4.18.1.patch | 4083 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4087 insertions(+)

diff --git a/0000_README b/0000_README
index 917d838..cf32ff2 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.18.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-4.18.1.patch b/1000_linux-4.18.1.patch
new file mode 100644
index 0000000..bd9c2da
--- /dev/null
+++ b/1000_linux-4.18.1.patch
@@ -0,0 +1,4083 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 9c5e7732d249..73318225a368 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -476,6 +476,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
++		/sys/devices/system/cpu/vulnerabilities/l1tf
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+@@ -487,3 +488,26 @@ Description:	Information about CPU vulnerabilities
+ 		"Not affected"	  CPU is not affected by the vulnerability
+ 		"Vulnerable"	  CPU is affected and no mitigation in effect
+ 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
++
++		Details about the l1tf file can be found in
++		Documentation/admin-guide/l1tf.rst
++
++What:		/sys/devices/system/cpu/smt
++		/sys/devices/system/cpu/smt/active
++		/sys/devices/system/cpu/smt/control
++Date:		June 2018
++Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
++Description:	Control Symetric Multi Threading (SMT)
++
++		active:  Tells whether SMT is active (enabled and siblings online)
++
++		control: Read/write interface to control SMT. Possible
++			 values:
++
++			 "on"		SMT is enabled
++			 "off"		SMT is disabled
++			 "forceoff"	SMT is force disabled. Cannot be changed.
++			 "notsupported" SMT is not supported by the CPU
++
++			 If control status is "forceoff" or "notsupported" writes
++			 are rejected.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 48d70af11652..0873685bab0f 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,6 +17,15 @@ etc.
+    kernel-parameters
+    devices
+ 
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++   :maxdepth: 1
++
++   l1tf
++
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+ 
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 533ff5c68970..1370b424a453 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1967,10 +1967,84 @@
+ 			(virtualized real and unpaged mode) on capable
+ 			Intel chips. Default is 1 (enabled)
+ 
++	kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault
++			CVE-2018-3620.
++
++			Valid arguments: never, cond, always
++
++			always: L1D cache flush on every VMENTER.
++			cond:	Flush L1D on VMENTER only when the code between
++				VMEXIT and VMENTER can leak host memory.
++			never:	Disables the mitigation
++
++			Default is cond (do L1 cache flush in specific instances)
++
+ 	kvm-intel.vpid=	[KVM,Intel] Disable Virtual Processor Identification
+ 			feature (tagged TLBs) on capable Intel chips.
+ 			Default is 1 (enabled)
+ 
++	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
++			      affected CPUs
++
++			The kernel PTE inversion protection is unconditionally
++			enabled and cannot be disabled.
++
++			full
++				Provides all available mitigations for the
++				L1TF vulnerability. Disables SMT and
++				enables all mitigations in the
++				hypervisors, i.e. unconditional L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			full,force
++				Same as 'full', but disables SMT and L1D
++				flush runtime control. Implies the
++				'nosmt=force' command line option.
++				(i.e. sysfs control of SMT is disabled.)
++
++			flush
++				Leaves SMT enabled and enables the default
++				hypervisor mitigation, i.e. conditional
++				L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nosmt
++
++				Disables SMT and enables the default
++				hypervisor mitigation.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nowarn
++				Same as 'flush', but hypervisors will not
++				warn when a VM is started in a potentially
++				insecure configuration.
++
++			off
++				Disables hypervisor mitigations and doesn't
++				emit any warnings.
++
++			Default is 'flush'.
++
++			For details see: Documentation/admin-guide/l1tf.rst
++
+ 	l2cr=		[PPC]
+ 
+ 	l3cr=		[PPC]
+@@ -2687,6 +2761,10 @@
+ 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
+ 			Equivalent to smt=1.
+ 
++			[KNL,x86] Disable symmetric multithreading (SMT).
++			nosmt=force: Force disable SMT, cannot be undone
++				     via the sysfs control file.
++
+ 	nospectre_v2	[X86] Disable all mitigations for the Spectre variant 2
+ 			(indirect branch prediction) vulnerability. System may
+ 			allow data leaks with this option, which is equivalent
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+new file mode 100644
+index 000000000000..bae52b845de0
+--- /dev/null
++++ b/Documentation/admin-guide/l1tf.rst
+@@ -0,0 +1,610 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++     Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++   - The Intel XEON PHI family
++
++   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++     by the Meltdown vulnerability either. These CPUs should become
++     available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++   =============  =================  ==============================
++   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
++   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
++   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
++   =============  =================  ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++allows to attack any physical memory address in the system and the attack
++works across all protection domains. It allows an attack of SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   Operating Systems store arbitrary information in the address bits of a
++   PTE which is marked non present. This allows a malicious user space
++   application to attack the physical memory to which these PTEs resolve.
++   In some cases user-space can maliciously influence the information
++   encoded in the address bits of the PTE, thus making attacks more
++   deterministic and more practical.
++
++   The Linux kernel contains a mitigation for this attack vector, PTE
++   inversion, which is permanently enabled and has no performance
++   impact. The kernel ensures that the address bits of PTEs, which are not
++   marked present, never point to cacheable physical memory space.
++
++   A system with an up to date kernel is protected against attacks from
++   malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The fact that L1TF breaks all domain protections allows malicious guest
++   OSes, which can control the PTEs directly, and malicious guest user
++   space applications, which run on an unprotected guest kernel lacking the
++   PTE inversion mitigation for L1TF, to attack physical host memory.
++
++   A special aspect of L1TF in the context of virtualization is symmetric
++   multi threading (SMT). The Intel implementation of SMT is called
++   HyperThreading. The fact that Hyperthreads on the affected processors
++   share the L1 Data Cache (L1D) is important for this. As the flaw allows
++   only to attack data which is present in L1D, a malicious guest running
++   on one Hyperthread can attack the data which is brought into the L1D by
++   the context which runs on the sibling Hyperthread of the same physical
++   core. This context can be host OS, host user space or a different guest.
++
++   If the processor does not support Extended Page Tables, the attack is
++   only possible, when the hypervisor does not sanitize the content of the
++   effective (shadow) page tables.
++
++   While solutions exist to mitigate these attack vectors fully, these
++   mitigations are not enabled by default in the Linux kernel because they
++   can affect performance significantly. The kernel provides several
++   mechanisms which can be utilized to address the problem depending on the
++   deployment scenario. The mitigations, their protection scope and impact
++   are described in the next sections.
++
++   The default mitigations and the rationale for choosing them are explained
++   at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++  ===========================   ===============================
++  'Not affected'		The processor is not vulnerable
++  'Mitigation: PTE Inversion'	The host protection is active
++  ===========================   ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++  - SMT status:
++
++    =====================  ================
++    'VMX: SMT vulnerable'  SMT is enabled
++    'VMX: SMT disabled'    SMT is disabled
++    =====================  ================
++
++  - L1D Flush mode:
++
++    ================================  ====================================
++    'L1D vulnerable'		      L1D flushing is disabled
++
++    'L1D conditional cache flushes'   L1D flush is conditionally enabled
++
++    'L1D cache flushes'		      L1D flush is unconditionally enabled
++    ================================  ====================================
++
++The resulting grade of protection is discussed in the following sections.
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   To make sure that a guest cannot attack data which is present in the L1D
++   the hypervisor flushes the L1D before entering the guest.
++
++   Flushing the L1D evicts not only the data which should not be accessed
++   by a potentially malicious guest, it also flushes the guest
++   data. Flushing the L1D has a performance impact as the processor has to
++   bring the flushed guest data back into the L1D. Depending on the
++   frequency of VMEXIT/VMENTER and the type of computations in the guest
++   performance degradation in the range of 1% to 50% has been observed. For
++   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++   minimal. Virtio and mechanisms like posted interrupts are designed to
++   confine the VMEXITs to a bare minimum, but specific configurations and
++   application scenarios might still suffer from a high VMEXIT rate.
++
++   The kernel provides two L1D flush modes:
++    - conditional ('cond')
++    - unconditional ('always')
++
++   The conditional mode avoids L1D flushing after VMEXITs which execute
++   only audited code paths before the corresponding VMENTER. These code
++   paths have been verified that they cannot expose secrets or other
++   interesting data to an attacker, but they can leak information about the
++   address space layout of the hypervisor.
++
++   Unconditional mode flushes L1D on all VMENTER invocations and provides
++   maximum protection. It has a higher overhead than the conditional
++   mode. The overhead cannot be quantified correctly as it depends on the
++   workload scenario and the resulting number of VMEXITs.
++
++   The general recommendation is to enable L1D flush on VMENTER. The kernel
++   defaults to conditional mode on affected processors.
++
++   **Note**, that L1D flush does not prevent the SMT problem because the
++   sibling thread will also bring back its data into the L1D which makes it
++   attackable again.
++
++   L1D flush can be controlled by the administrator via the kernel command
++   line and sysfs control files. See :ref:`mitigation_control_command_line`
++   and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   To address the SMT problem, it is possible to make a guest or a group of
++   guests affine to one or more physical cores. The proper mechanism for
++   that is to utilize exclusive cpusets to ensure that no other guest or
++   host tasks can run on these cores.
++
++   If only a single guest or related guests run on sibling SMT threads on
++   the same physical core then they can only attack their own memory and
++   restricted parts of the host memory.
++
++   Host memory is attackable, when one of the sibling SMT threads runs in
++   host OS (hypervisor) context and the other in guest context. The amount
++   of valuable information from the host OS context depends on the context
++   which the host OS executes, i.e. interrupts, soft interrupts and kernel
++   threads. The amount of valuable data from these contexts cannot be
++   declared as non-interesting for an attacker without deep inspection of
++   the code.
++
++   **Note**, that assigning guests to a fixed set of physical cores affects
++   the ability of the scheduler to do load balancing and might have
++   negative effects on CPU utilization depending on the hosting
++   scenario. Disabling SMT might be a viable alternative for particular
++   scenarios.
++
++   For further information about confining guests to a single or to a group
++   of cores consult the cpusets documentation:
++
++   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++   Interrupts can be made affine to logical CPUs. This is not universally
++   true because there are types of interrupts which are truly per CPU
++   interrupts, e.g. the local timer interrupt. Aside of that multi queue
++   devices affine their interrupts to single CPUs or groups of CPUs per
++   queue without allowing the administrator to control the affinities.
++
++   Moving the interrupts, which can be affinity controlled, away from CPUs
++   which run untrusted guests, reduces the attack vector space.
++
++   Whether the interrupts with are affine to CPUs, which run untrusted
++   guests, provide interesting data for an attacker depends on the system
++   configuration and the scenarios which run on the system. While for some
++   of the interrupts it can be assumed that they won't expose interesting
++   information beyond exposing hints about the host OS memory layout, there
++   is no way to make general assumptions.
++
++   Interrupt affinity can be controlled by the administrator via the
++   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++   available at:
++
++   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++   To prevent the SMT issues of L1TF it might be necessary to disable SMT
++   completely. Disabling SMT can have a significant performance impact, but
++   the impact depends on the hosting scenario and the type of workloads.
++   The impact of disabling SMT needs also to be weighted against the impact
++   of other mitigation solutions like confining guests to dedicated cores.
++
++   The kernel provides a sysfs interface to retrieve the status of SMT and
++   to control it. It also provides a kernel command line interface to
++   control SMT.
++
++   The kernel command line interface consists of the following options:
++
++     =========== ==========================================================
++     nosmt	 Affects the bring up of the secondary CPUs during boot. The
++		 kernel tries to bring all present CPUs online during the
++		 boot process. "nosmt" makes sure that from each physical
++		 core only one - the so called primary (hyper) thread is
++		 activated. Due to a design flaw of Intel processors related
++		 to Machine Check Exceptions the non primary siblings have
++		 to be brought up at least partially and are then shut down
++		 again.  "nosmt" can be undone via the sysfs interface.
++
++     nosmt=force Has the same effect as "nosmt" but it does not allow to
++		 undo the SMT disable via the sysfs interface.
++     =========== ==========================================================
++
++   The sysfs interface provides two files:
++
++   - /sys/devices/system/cpu/smt/control
++   - /sys/devices/system/cpu/smt/active
++
++   /sys/devices/system/cpu/smt/control:
++
++     This file allows to read out the SMT control state and provides the
++     ability to disable or (re)enable SMT. The possible states are:
++
++	==============  ===================================================
++	on		SMT is supported by the CPU and enabled. All
++			logical CPUs can be onlined and offlined without
++			restrictions.
++
++	off		SMT is supported by the CPU and disabled. Only
++			the so called primary SMT threads can be onlined
++			and offlined without restrictions. An attempt to
++			online a non-primary sibling is rejected
++
++	forceoff	Same as 'off' but the state cannot be controlled.
++			Attempts to write to the control file are rejected.
++
++	notsupported	The processor does not support SMT. It's therefore
++			not affected by the SMT implications of L1TF.
++			Attempts to write to the control file are rejected.
++	==============  ===================================================
++
++     The possible states which can be written into this file to control SMT
++     state are:
++
++     - on
++     - off
++     - forceoff
++
++   /sys/devices/system/cpu/smt/active:
++
++     This file reports whether SMT is enabled and active, i.e. if on any
++     physical core two or more sibling threads are online.
++
++   SMT control is also possible at boot time via the l1tf kernel command
++   line parameter in combination with L1D flush control. See
++   :ref:`mitigation_control_command_line`.
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++  Disabling EPT for virtual machines provides full mitigation for L1TF even
++  with SMT enabled, because the effective page tables for guests are
++  managed and sanitized by the hypervisor. Though disabling EPT has a
++  significant performance impact especially when the Meltdown mitigation
++  KPTI is enabled.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows to control the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		Provides all available mitigations for the L1TF
++		vulnerability. Disables SMT and enables all mitigations in
++		the hypervisors, i.e. unconditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  full,force	Same as 'full', but disables SMT and L1D flush runtime
++		control. Implies the 'nosmt=force' command line option.
++		(i.e. sysfs control of SMT is disabled.)
++
++  flush		Leaves SMT enabled and enables the default hypervisor
++		mitigation, i.e. conditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
++		i.e. conditional L1D flushing.
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
++		started in a potentially insecure configuration.
++
++  off		Disables hypervisor mitigations and doesn't emit any
++		warnings.
++  ============  =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++-------------------------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++  ============  ==============================================================
++  always	L1D cache flush on every VMENTER.
++
++  cond		Flush L1D on VMENTER only when the code between VMEXIT and
++		VMENTER can leak host memory which is considered
++		interesting for an attacker. This still can leak host memory
++		which allows e.g. to determine the hosts address space layout.
++
++  never		Disables the mitigation
++  ============  ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the modules and at runtime modified via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
++
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The system is protected by the kernel unconditionally and no further
++   action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   If the guest comes from a trusted source and the guest OS kernel is
++   guaranteed to have the L1TF mitigations in place the system is fully
++   protected against L1TF and no further action is required.
++
++   To avoid the overhead of the default L1D flushing on VMENTER the
++   administrator can disable the flushing via the kernel command line and
++   sysfs control files. See :ref:`mitigation_control_command_line` and
++   :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If SMT is not supported by the processor or disabled in the BIOS or by
++  the kernel, it's only required to enforce L1D flushing on VMENTER.
++
++  Conditional L1D flushing is the default behaviour and can be tuned. See
++  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If EPT is not supported by the processor or disabled in the hypervisor,
++  the system is fully protected. SMT can stay enabled and L1D flushing on
++  VMENTER is not required.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++  If SMT and EPT are supported and active then various degrees of
++  mitigations can be employed:
++
++  - L1D flushing on VMENTER:
++
++    L1D flushing on VMENTER is the minimal protection requirement, but it
++    is only potent in combination with other mitigation methods.
++
++    Conditional L1D flushing is the default behaviour and can be tuned. See
++    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++  - Guest confinement:
++
++    Confinement of guests to a single or a group of physical cores which
++    are not running any other processes, can reduce the attack surface
++    significantly, but interrupts, soft interrupts and kernel threads can
++    still expose valuable data to a potential attacker. See
++    :ref:`guest_confinement`.
++
++  - Interrupt isolation:
++
++    Isolating the guest CPUs from interrupts can reduce the attack surface
++    further, but still allows a malicious guest to explore a limited amount
++    of host physical memory. This can at least be used to gain knowledge
++    about the host address space layout. Interrupts with a fixed
++    affinity to the CPUs which run the untrusted guests can, depending on
++    the scenario, still trigger soft interrupts and schedule kernel threads
++    which might expose valuable information. See
++    :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++  - Disabling SMT:
++
++    Disabling SMT and enforcing the L1D flushing provides the maximum
++    amount of protection. This mitigation does not depend on any of the
++    above mitigation methods.
++
++    SMT control and L1D flushing can be tuned by the command line
++    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++    time with the matching sysfs control files. See :ref:`smt_control`,
++    :ref:`mitigation_control_command_line` and
++    :ref:`mitigation_control_kvm`.
++
++  - Disabling EPT:
++
++    Disabling EPT provides the maximum amount of protection as well. It
++    does not depend on any of the above mitigation methods. SMT can stay
++    enabled and L1D flushing is not required, but the performance impact is
++    significant.
++
++    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++    parameter.
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine.  VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++   nested virtual machine, so that the nested hypervisor's secrets are not
++   exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++   the nested hypervisor; this is a complex operation, and flushing the L1D
++   cache prevents the bare metal hypervisor's secrets from being exposed to
++   the nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++   is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - PTE inversion to protect against malicious user space. This is done
++    unconditionally and cannot be controlled.
++
++  - L1D conditional flushing on VMENTER when EPT is enabled for
++    a guest.
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++  The rationale for this choice is:
++
++  - Force disabling SMT can break existing setups, especially with
++    unattended updates.
++
++  - If regular users run untrusted guests on their machine, then L1TF is
++    just an add-on to other malware which might be embedded in an untrusted
++    guest, e.g. spam-bots or attacks on the local network.
++
++    There is no technical way to prevent a user from running untrusted code
++    on their machines blindly.
++
++  - It's technically extremely unlikely and, from today's knowledge, even
++    impossible that L1TF can be exploited via the most popular attack
++    mechanisms like JavaScript, because these have no way to
++    control PTEs. If this were possible and no other mitigation were
++    available, then the default might be different.
++
++  - The administrators of cloud and hosting setups have to carefully
++    analyze the risk for their scenarios and make the appropriate
++    mitigation choices, which might even vary across their deployed
++    machines and also result in other changes of their overall setup.
++    There is no way for the kernel to provide a sensible default for
++    these kinds of scenarios.
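The guidance in sections 3.1-3.3 above amounts to a small decision tree. The following is a minimal sketch of that documented guidance, not kernel code; the function name and return strings are illustrative labels only.

```python
def l1tf_guest_mitigation(smt_active, ept_enabled):
    """Pick the minimal L1TF mitigation for running untrusted guests."""
    if not ept_enabled:
        # Section 3.2: without EPT the guest cannot control host PTEs,
        # so the system is fully protected as-is.
        return "fully protected"
    if not smt_active:
        # Section 3.1: flushing L1D on VMENTER is sufficient.
        return "l1d flush on vmenter"
    # Section 3.3: L1D flushing alone is not enough, because the SMT
    # sibling can refill the L1D; guest confinement, interrupt
    # isolation, or disabling SMT/EPT is needed for full protection.
    return "l1d flush + confinement, or disable smt/ept"
```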
+diff --git a/Makefile b/Makefile
+index 863f58503bee..5edf963148e8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 1aa59063f1fd..d1f2ed462ac8 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -13,6 +13,9 @@ config KEXEC_CORE
+ config HAVE_IMA_KEXEC
+ 	bool
+ 
++config HOTPLUG_SMT
++	bool
++
+ config OPROFILE
+ 	tristate "OProfile system profiling"
+ 	depends on PROFILING
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 887d3a7bb646..6b8065d718bd 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -187,6 +187,7 @@ config X86
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select HAVE_UNSTABLE_SCHED_CLOCK
+ 	select HAVE_USER_RETURN_NOTIFIER
++	select HOTPLUG_SMT			if SMP
+ 	select IRQ_FORCED_THREADING
+ 	select NEED_SG_DMA_LENGTH
+ 	select PCI_LOCKLESS_CONFIG
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 74a9e06b6cfd..130e81e10fc7 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -10,6 +10,7 @@
+ #include <asm/fixmap.h>
+ #include <asm/mpspec.h>
+ #include <asm/msr.h>
++#include <asm/hardirq.h>
+ 
+ #define ARCH_APICTIMER_STOPS_ON_C3	1
+ 
+@@ -502,12 +503,19 @@ extern int default_check_phys_apicid_present(int phys_apicid);
+ 
+ #endif /* CONFIG_X86_LOCAL_APIC */
+ 
++#ifdef CONFIG_SMP
++bool apic_id_is_primary_thread(unsigned int id);
++#else
++static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
++#endif
++
+ extern void irq_enter(void);
+ extern void irq_exit(void);
+ 
+ static inline void entering_irq(void)
+ {
+ 	irq_enter();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void entering_ack_irq(void)
+@@ -520,6 +528,7 @@ static inline void ipi_entering_ack_irq(void)
+ {
+ 	irq_enter();
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void exiting_irq(void)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/dmi.h b/arch/x86/include/asm/dmi.h
+index 0ab2ab27ad1f..b825cb201251 100644
+--- a/arch/x86/include/asm/dmi.h
++++ b/arch/x86/include/asm/dmi.h
+@@ -4,8 +4,8 @@
+ 
+ #include <linux/compiler.h>
+ #include <linux/init.h>
++#include <linux/io.h>
+ 
+-#include <asm/io.h>
+ #include <asm/setup.h>
+ 
+ static __always_inline __init void *dmi_alloc(unsigned len)
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index 740a428acf1e..d9069bb26c7f 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -3,10 +3,12 @@
+ #define _ASM_X86_HARDIRQ_H
+ 
+ #include <linux/threads.h>
+-#include <linux/irq.h>
+ 
+ typedef struct {
+-	unsigned int __softirq_pending;
++	u16	     __softirq_pending;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++	u8	     kvm_cpu_l1tf_flush_l1d;
++#endif
+ 	unsigned int __nmi_count;	/* arch dependent */
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	unsigned int apic_timer_irqs;	/* arch dependent */
+@@ -58,4 +60,24 @@ extern u64 arch_irq_stat_cpu(unsigned int cpu);
+ extern u64 arch_irq_stat(void);
+ #define arch_irq_stat		arch_irq_stat
+ 
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
++}
++
++static inline void kvm_clear_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
++}
++
++static inline bool kvm_get_cpu_l1tf_flush_l1d(void)
++{
++	return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
++}
++#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
++static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
++
+ #endif /* _ASM_X86_HARDIRQ_H */
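The per-CPU `kvm_cpu_l1tf_flush_l1d` flag added above implements a simple handshake: interrupt entry marks the CPU as possibly holding interesting data in L1D, and the conditional VMENTER path consumes that mark. A toy model of the handshake (not the kernel implementation; a dict stands in for the per-CPU field):

```python
flush_l1d_flag = {}   # stand-in for the per-CPU irq_stat field

def kvm_set_cpu_l1tf_flush_l1d(cpu):
    # called from entering_irq() and friends in the patch above
    flush_l1d_flag[cpu] = True

def kvm_clear_cpu_l1tf_flush_l1d(cpu):
    flush_l1d_flag[cpu] = False

def vmenter(cpu, conditional=True):
    """Return True when an L1D flush would be performed on this entry."""
    need_flush = (not conditional) or flush_l1d_flag.get(cpu, False)
    # the flag is consumed on every entry
    kvm_clear_cpu_l1tf_flush_l1d(cpu)
    return need_flush
```

In the conditional mode, a VMENTER with no intervening interrupt skips the (expensive) flush, which is why the default `cond` policy is cheaper than `always`.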
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c4fc17220df9..c14f2a74b2be 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -13,6 +13,8 @@
+  * Interrupt control:
+  */
+ 
++/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
++extern inline unsigned long native_save_fl(void);
+ extern inline unsigned long native_save_fl(void)
+ {
+ 	unsigned long flags;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index c13cd28d9d1b..acebb808c4b5 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -17,6 +17,7 @@
+ #include <linux/tracepoint.h>
+ #include <linux/cpumask.h>
+ #include <linux/irq_work.h>
++#include <linux/irq.h>
+ 
+ #include <linux/kvm.h>
+ #include <linux/kvm_para.h>
+@@ -713,6 +714,9 @@ struct kvm_vcpu_arch {
+ 
+ 	/* be preempted when it's in kernel-mode(cpl=0) */
+ 	bool preempted_in_kernel;
++
++	/* Flush the L1 Data cache for L1TF mitigation on VMENTER */
++	bool l1tf_flush_l1d;
+ };
+ 
+ struct kvm_lpage_info {
+@@ -881,6 +885,7 @@ struct kvm_vcpu_stat {
+ 	u64 signal_exits;
+ 	u64 irq_window_exits;
+ 	u64 nmi_window_exits;
++	u64 l1d_flush;
+ 	u64 halt_exits;
+ 	u64 halt_successful_poll;
+ 	u64 halt_attempted_poll;
+@@ -1413,6 +1418,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
+ 
++u64 kvm_get_arch_capabilities(void);
+ void kvm_define_shared_msr(unsigned index, u32 msr);
+ int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 68b2c3150de1..4731f0cf97c5 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -70,12 +70,19 @@
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+ #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	(1 << 3)   /* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO			(1 << 4)   /*
+ 						    * Not susceptible to Speculative Store Bypass
+ 						    * attack, so no Speculative Store Bypass
+ 						    * control required.
+ 						    */
+ 
++#define MSR_IA32_FLUSH_CMD		0x0000010b
++#define L1D_FLUSH			(1 << 0)   /*
++						    * Writeback and invalidate the
++						    * L1 data cache.
++						    */
++
+ #define MSR_IA32_BBL_CR_CTL		0x00000119
+ #define MSR_IA32_BBL_CR_CTL3		0x0000011e
+ 
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index aa30c3241ea7..0d5c739eebd7 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -29,8 +29,13 @@
+ #define N_EXCEPTION_STACKS 1
+ 
+ #ifdef CONFIG_X86_PAE
+-/* 44=32+12, the limit we can fit into an unsigned long pfn */
+-#define __PHYSICAL_MASK_SHIFT	44
++/*
++ * This is beyond the 44 bit limit imposed by the 32bit long pfns,
++ * but we need the full mask to make sure inverted PROT_NONE
++ * entries have all the host bits set in a guest.
++ * The real limit is still 44 bits.
++ */
++#define __PHYSICAL_MASK_SHIFT	52
+ #define __VIRTUAL_MASK_SHIFT	32
+ 
+ #else  /* !CONFIG_X86_PAE */
+diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h
+index 685ffe8a0eaf..60d0f9015317 100644
+--- a/arch/x86/include/asm/pgtable-2level.h
++++ b/arch/x86/include/asm/pgtable-2level.h
+@@ -95,4 +95,21 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_low })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+ 
++/* No inverted PFNs on 2 level page tables */
++
++static inline u64 protnone_mask(u64 val)
++{
++	return 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	return val;
++}
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return false;
++}
++
+ #endif /* _ASM_X86_PGTABLE_2LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index f24df59c40b2..bb035a4cbc8c 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -241,12 +241,43 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
+ #endif
+ 
+ /* Encode and de-code a swap entry */
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
++
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
+ #define __swp_type(x)			(((x).val) & 0x1f)
+ #define __swp_offset(x)			((x).val >> 5)
+ #define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
+-#define __pte_to_swp_entry(pte)		((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x)		((pte_t){ { .pte_high = (x).val } })
++
++/*
++ * Normally, __swp_entry() converts from arch-independent swp_entry_t to
++ * arch-dependent swp_entry_t, and __swp_entry_to_pte() just stores the result
++ * to pte. But here we have 32bit swp_entry_t and 64bit pte, and need to use the
++ * whole 64 bits. Thus, we shift the "real" arch-dependent conversion to
++ * __swp_entry_to_pte() through the following helper macro based on 64bit
++ * __swp_entry().
++ */
++#define __swp_pteval_entry(type, offset) ((pteval_t) { \
++	(~(pteval_t)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((pteval_t)(type) << (64 - SWP_TYPE_BITS)) })
++
++#define __swp_entry_to_pte(x)	((pte_t){ .pte = \
++		__swp_pteval_entry(__swp_type(x), __swp_offset(x)) })
++/*
++ * Analogically, __pte_to_swp_entry() doesn't just extract the arch-dependent
++ * swp_entry_t, but also has to convert it from 64bit to the 32bit
++ * intermediate representation, using the following macros based on 64bit
++ * __swp_type() and __swp_offset().
++ */
++#define __pteval_swp_type(x) ((unsigned long)((x).pte >> (64 - SWP_TYPE_BITS)))
++#define __pteval_swp_offset(x) ((unsigned long)(~((x).pte) << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT))
++
++#define __pte_to_swp_entry(pte)	(__swp_entry(__pteval_swp_type(pte), \
++					     __pteval_swp_offset(pte)))
+ 
+ #define gup_get_pte gup_get_pte
+ /*
+@@ -295,4 +326,6 @@ static inline pte_t gup_get_pte(pte_t *ptep)
+ 	return pte;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+new file mode 100644
+index 000000000000..44b1203ece12
+--- /dev/null
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_PGTABLE_INVERT_H
++#define _ASM_PGTABLE_INVERT_H 1
++
++#ifndef __ASSEMBLY__
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return !(val & _PAGE_PRESENT);
++}
++
++/* Get a mask to xor with the page table entry to get the correct pfn. */
++static inline u64 protnone_mask(u64 val)
++{
++	return __pte_needs_invert(val) ?  ~0ull : 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	/*
++	 * When a PTE transitions from NONE to !NONE or vice-versa
++	 * invert the PFN part to stop speculation.
++	 * pte_pfn undoes this when needed.
++	 */
++	if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
++		val = (val & ~mask) | (~val & mask);
++	return val;
++}
++
++#endif /* __ASSEMBLY__ */
++
++#endif
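The inversion helpers in the new header above can be mirrored in a few lines of Python to show the round trip. The `PTE_PFN_MASK` and `PAGE_SHIFT` values below are the usual x86-64 ones, hardcoded here for illustration:

```python
M64 = (1 << 64) - 1
PAGE_PRESENT = 1 << 0
PAGE_SHIFT = 12
PTE_PFN_MASK = 0x000FFFFFFFFFF000   # PFN lives in bits 12..51

def protnone_mask(val):
    # ~0 when the entry is not present, else 0 (cf. __pte_needs_invert)
    return M64 if not (val & PAGE_PRESENT) else 0

def flip_protnone_guard(oldval, val, mask):
    # invert the PFN part whenever the present state toggles
    if (oldval & PAGE_PRESENT) != (val & PAGE_PRESENT):
        val = (val & ~mask) | (~val & mask)
    return val & M64

def pte_pfn(pte):
    # undo the inversion before extracting the PFN (cf. pte_pfn above)
    return ((pte ^ protnone_mask(pte)) & PTE_PFN_MASK) >> PAGE_SHIFT
```

Making an entry non-present stores an inverted PFN, so a speculative L1TF load through it targets unpopulated high physical addresses, while `pte_pfn()` still recovers the original frame number.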
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 5715647fc4fe..13125aad804c 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -185,19 +185,29 @@ static inline int pte_special(pte_t pte)
+ 	return pte_flags(pte) & _PAGE_SPECIAL;
+ }
+ 
++/* Entries that were set to PROT_NONE are inverted */
++
++static inline u64 protnone_mask(u64 val);
++
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
++	phys_addr_t pfn = pte_val(pte);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pmd_val(pmd);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pud_pfn(pud_t pud)
+ {
+-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pud_val(pud);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long p4d_pfn(p4d_t p4d)
+@@ -400,11 +410,6 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
+ 	return pmd_set_flags(pmd, _PAGE_RW);
+ }
+ 
+-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+-{
+-	return pmd_clear_flags(pmd, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+ {
+ 	pudval_t v = native_pud_val(pud);
+@@ -459,11 +464,6 @@ static inline pud_t pud_mkwrite(pud_t pud)
+ 	return pud_set_flags(pud, _PAGE_RW);
+ }
+ 
+-static inline pud_t pud_mknotpresent(pud_t pud)
+-{
+-	return pud_clear_flags(pud, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+ static inline int pte_soft_dirty(pte_t pte)
+ {
+@@ -545,25 +545,45 @@ static inline pgprotval_t check_pgprot(pgprot_t pgprot)
+ 
+ static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PTE_PFN_MASK;
++	return __pte(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PMD_PAGE_MASK;
++	return __pmd(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pud(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PUD_PAGE_MASK;
++	return __pud(pfn | check_pgprot(pgprot));
+ }
+ 
++static inline pmd_t pmd_mknotpresent(pmd_t pmd)
++{
++	return pfn_pmd(pmd_pfn(pmd),
++		      __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline pud_t pud_mknotpresent(pud_t pud)
++{
++	return pfn_pud(pud_pfn(pud),
++	      __pgprot(pud_flags(pud) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
++
+ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ {
+-	pteval_t val = pte_val(pte);
++	pteval_t val = pte_val(pte), oldval = val;
+ 
+ 	/*
+ 	 * Chop off the NX bit (if present), and add the NX portion of
+@@ -571,17 +591,17 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 	 */
+ 	val &= _PAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
+ 	return __pte(val);
+ }
+ 
+ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+ {
+-	pmdval_t val = pmd_val(pmd);
++	pmdval_t val = pmd_val(pmd), oldval = val;
+ 
+ 	val &= _HPAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
+ 	return __pmd(val);
+ }
+ 
+@@ -1320,6 +1340,14 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
+ 	return __pte_access_permitted(pud_val(pud), write);
+ }
+ 
++#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
++extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return boot_cpu_has_bug(X86_BUG_L1TF);
++}
++
+ #include <asm-generic/pgtable.h>
+ #endif	/* __ASSEMBLY__ */
+ 
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 3c5385f9a88f..82ff20b0ae45 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -273,7 +273,7 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * |     ...            | 11| 10|  9|8|7|6|5| 4| 3|2| 1|0| <- bit number
+  * |     ...            |SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names
+- * | OFFSET (14->63) | TYPE (9-13)  |0|0|X|X| X| X|X|SD|0| <- swp entry
++ * | TYPE (59-63) | ~OFFSET (9-58)  |0|0|X|X| X| X|X|SD|0| <- swp entry
+  *
+  * G (8) is aliased and used as a PROT_NONE indicator for
+  * !present ptes.  We need to start storing swap entries above
+@@ -286,20 +286,34 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
+  * but also L and G.
++ *
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
+  */
+-#define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
+-#define SWP_TYPE_BITS 5
+-/* Place the offset above the type: */
+-#define SWP_OFFSET_FIRST_BIT (SWP_TYPE_FIRST_BIT + SWP_TYPE_BITS)
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT+SWP_TYPE_BITS)
+ 
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+ 
+-#define __swp_type(x)			(((x).val >> (SWP_TYPE_FIRST_BIT)) \
+-					 & ((1U << SWP_TYPE_BITS) - 1))
+-#define __swp_offset(x)			((x).val >> SWP_OFFSET_FIRST_BIT)
+-#define __swp_entry(type, offset)	((swp_entry_t) { \
+-					 ((type) << (SWP_TYPE_FIRST_BIT)) \
+-					 | ((offset) << SWP_OFFSET_FIRST_BIT) })
++/* Extract the high bits for type */
++#define __swp_type(x) ((x).val >> (64 - SWP_TYPE_BITS))
++
++/* Shift up (to get rid of type), then down to get value */
++#define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)
++
++/*
++ * Shift the offset up "too far" by TYPE bits, then down again
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
++ */
++#define __swp_entry(type, offset) ((swp_entry_t) { \
++	(~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((unsigned long)(type) << (64-SWP_TYPE_BITS)) })
++
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+@@ -343,5 +357,7 @@ static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
+ 	return true;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* !__ASSEMBLY__ */
+ #endif /* _ASM_X86_PGTABLE_64_H */
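The reworked swap-entry layout above, `| TYPE (59-63) | ~OFFSET (9-58) | ... |`, can be checked with a short sketch of the encode/decode macros. The constants assume `_PAGE_BIT_PROTNONE == 8`, as on x86-64; this is a simplified model, not the kernel macros themselves:

```python
M64 = (1 << 64) - 1
SWP_TYPE_BITS = 5
SWP_OFFSET_FIRST_BIT = 8 + 1                       # _PAGE_BIT_PROTNONE + 1
SWP_OFFSET_SHIFT = SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS

def swp_entry(type_, offset):
    # the offset is stored inverted so the high physical bits read as ones
    val = ((~offset & M64) << SWP_OFFSET_SHIFT) & M64
    val >>= SWP_TYPE_BITS
    return val | (type_ << (64 - SWP_TYPE_BITS))

def swp_type(val):
    # the type occupies the top SWP_TYPE_BITS bits
    return val >> (64 - SWP_TYPE_BITS)

def swp_offset(val):
    # shift up to drop the type, then down again to undo the inversion
    return (((~val & M64) << SWP_TYPE_BITS) & M64) >> SWP_OFFSET_SHIFT
```

Shifting "all the way up, then down again" both truncates the fields and keeps the low bits (including P and the aliased PROT_NONE bit) clear, so a swap entry never looks present to the hardware.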
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index cfd29ee8c3da..79e409974ccc 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -181,6 +181,11 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
++static inline unsigned long l1tf_pfn_limit(void)
++{
++	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++}
++
+ extern void early_cpu_init(void);
+ extern void identify_boot_cpu(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+@@ -977,4 +982,16 @@ bool xen_set_default_idle(void);
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
+ void microcode_check(void);
++
++enum l1tf_mitigations {
++	L1TF_MITIGATION_OFF,
++	L1TF_MITIGATION_FLUSH_NOWARN,
++	L1TF_MITIGATION_FLUSH,
++	L1TF_MITIGATION_FLUSH_NOSMT,
++	L1TF_MITIGATION_FULL,
++	L1TF_MITIGATION_FULL_FORCE
++};
++
++extern enum l1tf_mitigations l1tf_mitigation;
++
+ #endif /* _ASM_X86_PROCESSOR_H */
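The `l1tf_pfn_limit()` helper added above caps usable PFNs at half the physical address range, so that an inverted (swap or PROT_NONE) entry always points into the upper, unpopulated half. A sketch of the arithmetic, with `PAGE_SHIFT` hardcoded to the usual 4 KiB value and illustrative `x86_phys_bits` inputs:

```python
PAGE_SHIFT = 12

def l1tf_pfn_limit(x86_phys_bits):
    # highest safe PFN: BIT(x86_phys_bits - 1 - PAGE_SHIFT) - 1,
    # i.e. all PFNs with the top physical bit clear
    return (1 << (x86_phys_bits - 1 - PAGE_SHIFT)) - 1
```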
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index c1d2a9892352..453cf38a1c33 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -123,13 +123,17 @@ static inline int topology_max_smt_threads(void)
+ }
+ 
+ int topology_update_package_map(unsigned int apicid, unsigned int cpu);
+-extern int topology_phys_to_logical_pkg(unsigned int pkg);
++int topology_phys_to_logical_pkg(unsigned int pkg);
++bool topology_is_primary_thread(unsigned int cpu);
++bool topology_smt_supported(void);
+ #else
+ #define topology_max_packages()			(1)
+ static inline int
+ topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
++static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
++static inline bool topology_smt_supported(void) { return false; }
+ #endif
+ 
+ static inline void arch_fix_phys_package_id(int num, u32 slot)
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 6aa8499e1f62..95f9107449bf 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -576,4 +576,15 @@ enum vm_instruction_error_number {
+ 	VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
+ };
+ 
++enum vmx_l1d_flush_state {
++	VMENTER_L1D_FLUSH_AUTO,
++	VMENTER_L1D_FLUSH_NEVER,
++	VMENTER_L1D_FLUSH_COND,
++	VMENTER_L1D_FLUSH_ALWAYS,
++	VMENTER_L1D_FLUSH_EPT_DISABLED,
++	VMENTER_L1D_FLUSH_NOT_REQUIRED,
++};
++
++extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
++
+ #endif
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index adbda5847b14..3b3a2d0af78d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -56,6 +56,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
++#include <asm/irq_regs.h>
+ 
+ unsigned int num_processors;
+ 
+@@ -2192,6 +2193,23 @@ static int cpuid_to_apicid[] = {
+ 	[0 ... NR_CPUS - 1] = -1,
+ };
+ 
++#ifdef CONFIG_SMP
++/**
++ * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
++ * @id:	APIC ID to check
++ */
++bool apic_id_is_primary_thread(unsigned int apicid)
++{
++	u32 mask;
++
++	if (smp_num_siblings == 1)
++		return true;
++	/* Isolate the SMT bit(s) in the APICID and check for 0 */
++	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
++	return !(apicid & mask);
++}
++#endif
++
+ /*
+  * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
+  * and cpuid_to_apicid[] synchronized.
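The masking in `apic_id_is_primary_thread()` above relies on the SMT sibling index occupying the low `fls(smp_num_siblings) - 1` bits of the APIC ID; a primary thread has those bits clear. The kernel's 1-based `fls()` matches Python's `int.bit_length()`, so the check can be mirrored directly (a sketch, not the kernel function):

```python
def fls(n):
    """Find-last-set, 1-based, like the kernel's fls()."""
    return n.bit_length()

def apic_id_is_primary_thread(apicid, smp_num_siblings):
    if smp_num_siblings == 1:
        return True
    # isolate the SMT bit(s) in the APIC ID and check for 0
    mask = (1 << (fls(smp_num_siblings) - 1)) - 1
    return (apicid & mask) == 0
```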
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 3982f79d2377..ff0d14cd9e82 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -33,6 +33,7 @@
+ 
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/init.h>
+ #include <linux/delay.h>
+ #include <linux/sched.h>
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index ce503c99f5c4..72a94401f9e0 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -12,6 +12,7 @@
+  */
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/dmar.h>
+ #include <linux/hpet.h>
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 35aaee4fc028..c9b773401fd8 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -11,6 +11,7 @@
+  * published by the Free Software Foundation.
+  */
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/init.h>
+ #include <linux/compiler.h>
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 38915fbfae73..97e962afb967 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -315,6 +315,13 @@ static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+ 	c->cpu_core_id %= cus_per_node;
+ }
+ 
++
++static void amd_get_topology_early(struct cpuinfo_x86 *c)
++{
++	if (cpu_has(c, X86_FEATURE_TOPOEXT))
++		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
++}
++
+ /*
+  * Fixup core topology information for
+  * (1) AMD multi-node processors
+@@ -334,7 +341,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ 
+ 		node_id  = ecx & 0xff;
+-		smp_num_siblings = ((ebx >> 8) & 0xff) + 1;
+ 
+ 		if (c->x86 == 0x15)
+ 			c->cu_id = ebx & 0xff;
+@@ -613,6 +619,7 @@ clear_sev:
+ 
+ static void early_init_amd(struct cpuinfo_x86 *c)
+ {
++	u64 value;
+ 	u32 dummy;
+ 
+ 	early_init_amd_mc(c);
+@@ -683,6 +690,22 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_AMD_E400);
+ 
+ 	early_detect_mem_encrypt(c);
++
++	/* Re-enable TopologyExtensions if switched off by BIOS */
++	if (c->x86 == 0x15 &&
++	    (c->x86_model >= 0x10 && c->x86_model <= 0x6f) &&
++	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
++
++		if (msr_set_bit(0xc0011005, 54) > 0) {
++			rdmsrl(0xc0011005, value);
++			if (value & BIT_64(54)) {
++				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
++				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
++			}
++		}
++	}
++
++	amd_get_topology_early(c);
+ }
+ 
+ static void init_amd_k8(struct cpuinfo_x86 *c)
+@@ -774,19 +797,6 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ {
+ 	u64 value;
+ 
+-	/* re-enable TopologyExtensions if switched off by BIOS */
+-	if ((c->x86_model >= 0x10) && (c->x86_model <= 0x6f) &&
+-	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
+-
+-		if (msr_set_bit(0xc0011005, 54) > 0) {
+-			rdmsrl(0xc0011005, value);
+-			if (value & BIT_64(54)) {
+-				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
+-				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * The way access filter has a performance penalty on some workloads.
+ 	 * Disable it on the affected CPUs.
+@@ -850,16 +860,9 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 
+ 	cpu_detect_cache_sizes(c);
+ 
+-	/* Multi core CPU? */
+-	if (c->extended_cpuid_level >= 0x80000008) {
+-		amd_detect_cmp(c);
+-		amd_get_topology(c);
+-		srat_detect_node(c);
+-	}
+-
+-#ifdef CONFIG_X86_32
+-	detect_ht(c);
+-#endif
++	amd_detect_cmp(c);
++	amd_get_topology(c);
++	srat_detect_node(c);
+ 
+ 	init_amd_cacheinfo(c);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 5c0ea39311fe..c4f0ae49a53d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -22,15 +22,18 @@
+ #include <asm/processor-flags.h>
+ #include <asm/fpu/internal.h>
+ #include <asm/msr.h>
++#include <asm/vmx.h>
+ #include <asm/paravirt.h>
+ #include <asm/alternative.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/intel-family.h>
+ #include <asm/hypervisor.h>
++#include <asm/e820/api.h>
+ 
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
++static void __init l1tf_select_mitigation(void);
+ 
+ /*
+  * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+@@ -56,6 +59,12 @@ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+ 
++	/*
++	 * identify_boot_cpu() initialized SMT support information, let the
++	 * core code know.
++	 */
++	cpu_smt_check_topology_early();
++
+ 	if (!IS_ENABLED(CONFIG_SMP)) {
+ 		pr_info("CPU: ");
+ 		print_cpu_info(&boot_cpu_data);
+@@ -82,6 +91,8 @@ void __init check_bugs(void)
+ 	 */
+ 	ssb_select_mitigation();
+ 
++	l1tf_select_mitigation();
++
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Check whether we are able to run this kernel safely on SMP.
+@@ -313,23 +324,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
+-/* Check for Skylake-like CPUs (for RSB handling) */
+-static bool __init is_skylake_era(void)
+-{
+-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+-	    boot_cpu_data.x86 == 6) {
+-		switch (boot_cpu_data.x86_model) {
+-		case INTEL_FAM6_SKYLAKE_MOBILE:
+-		case INTEL_FAM6_SKYLAKE_DESKTOP:
+-		case INTEL_FAM6_SKYLAKE_X:
+-		case INTEL_FAM6_KABYLAKE_MOBILE:
+-		case INTEL_FAM6_KABYLAKE_DESKTOP:
+-			return true;
+-		}
+-	}
+-	return false;
+-}
+-
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -390,22 +384,15 @@ retpoline_auto:
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+ 	/*
+-	 * If neither SMEP nor PTI are available, there is a risk of
+-	 * hitting userspace addresses in the RSB after a context switch
+-	 * from a shallow call stack to a deeper one. To prevent this fill
+-	 * the entire RSB, even when using IBRS.
++	 * If spectre v2 protection has been enabled, unconditionally fill
++	 * RSB during a context switch; this protects against two independent
++	 * issues:
+ 	 *
+-	 * Skylake era CPUs have a separate issue with *underflow* of the
+-	 * RSB, when they will predict 'ret' targets from the generic BTB.
+-	 * The proper mitigation for this is IBRS. If IBRS is not supported
+-	 * or deactivated in favour of retpolines the RSB fill on context
+-	 * switch is required.
++	 *	- RSB underflow (and switch to BTB) on Skylake+
++	 *	- SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
+ 	 */
+-	if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+-	     !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
+-		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+-		pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
+-	}
++	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
++	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+ 
+ 	/* Initialize Indirect Branch Prediction Barrier if supported */
+ 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+@@ -654,8 +641,121 @@ void x86_spec_ctrl_setup_ap(void)
+ 		x86_amd_ssb_disable();
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"L1TF: " fmt
++
++/* Default mitigation for L1TF-affected CPUs */
++enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(l1tf_mitigation);
++
++enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
++#endif
++
++static void __init l1tf_select_mitigation(void)
++{
++	u64 half_pa;
++
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return;
++
++	switch (l1tf_mitigation) {
++	case L1TF_MITIGATION_OFF:
++	case L1TF_MITIGATION_FLUSH_NOWARN:
++	case L1TF_MITIGATION_FLUSH:
++		break;
++	case L1TF_MITIGATION_FLUSH_NOSMT:
++	case L1TF_MITIGATION_FULL:
++		cpu_smt_disable(false);
++		break;
++	case L1TF_MITIGATION_FULL_FORCE:
++		cpu_smt_disable(true);
++		break;
++	}
++
++#if CONFIG_PGTABLE_LEVELS == 2
++	pr_warn("Kernel not compiled for PAE. No mitigation for L1TF\n");
++	return;
++#endif
++
++	/*
++	 * This is extremely unlikely to happen because almost all
++	 * systems have far more MAX_PA/2 than RAM can be fit into
++	 * DIMM slots.
++	 */
++	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
++	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
++		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		return;
++	}
++
++	setup_force_cpu_cap(X86_FEATURE_L1TF_PTEINV);
++}
++
++static int __init l1tf_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		l1tf_mitigation = L1TF_MITIGATION_OFF;
++	else if (!strcmp(str, "flush,nowarn"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN;
++	else if (!strcmp(str, "flush"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH;
++	else if (!strcmp(str, "flush,nosmt"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++	else if (!strcmp(str, "full"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL;
++	else if (!strcmp(str, "full,force"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE;
++
++	return 0;
++}
++early_param("l1tf", l1tf_cmdline);
++
++#undef pr_fmt
++
+ #ifdef CONFIG_SYSFS
+ 
++#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static const char *l1tf_vmx_states[] = {
++	[VMENTER_L1D_FLUSH_AUTO]		= "auto",
++	[VMENTER_L1D_FLUSH_NEVER]		= "vulnerable",
++	[VMENTER_L1D_FLUSH_COND]		= "conditional cache flushes",
++	[VMENTER_L1D_FLUSH_ALWAYS]		= "cache flushes",
++	[VMENTER_L1D_FLUSH_EPT_DISABLED]	= "EPT disabled",
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED]	= "flush not necessary"
++};
++
++static ssize_t l1tf_show_state(char *buf)
++{
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO)
++		return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
++	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
++	     cpu_smt_control == CPU_SMT_ENABLED))
++		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
++			       l1tf_vmx_states[l1tf_vmx_mitigation]);
++
++	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
++		       l1tf_vmx_states[l1tf_vmx_mitigation],
++		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
++}
++#else
++static ssize_t l1tf_show_state(char *buf)
++{
++	return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++}
++#endif
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -684,6 +784,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+ 
++	case X86_BUG_L1TF:
++		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
++			return l1tf_show_state(buf);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -710,4 +814,9 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
+ }
++
++ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index eb4cb3efd20e..9eda6f730ec4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -661,33 +661,36 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ 		tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+ 
+-void detect_ht(struct cpuinfo_x86 *c)
++int detect_ht_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+ 	u32 eax, ebx, ecx, edx;
+-	int index_msb, core_bits;
+-	static bool printed;
+ 
+ 	if (!cpu_has(c, X86_FEATURE_HT))
+-		return;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+-		goto out;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
+-		return;
++		return -1;
+ 
+ 	cpuid(1, &eax, &ebx, &ecx, &edx);
+ 
+ 	smp_num_siblings = (ebx & 0xff0000) >> 16;
+-
+-	if (smp_num_siblings == 1) {
++	if (smp_num_siblings == 1)
+ 		pr_info_once("CPU0: Hyper-Threading is disabled\n");
+-		goto out;
+-	}
++#endif
++	return 0;
++}
+ 
+-	if (smp_num_siblings <= 1)
+-		goto out;
++void detect_ht(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	int index_msb, core_bits;
++
++	if (detect_ht_early(c) < 0)
++		return;
+ 
+ 	index_msb = get_count_order(smp_num_siblings);
+ 	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+@@ -700,15 +703,6 @@ void detect_ht(struct cpuinfo_x86 *c)
+ 
+ 	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
+ 				       ((1 << core_bits) - 1);
+-
+-out:
+-	if (!printed && (c->x86_max_cores * smp_num_siblings) > 1) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-			c->phys_proc_id);
+-		pr_info("CPU: Processor Core ID: %d\n",
+-			c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ }
+ 
+@@ -987,6 +981,21 @@ static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+ 	{}
+ };
+ 
++static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
++	/* in addition to cpu_no_speculation */
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT1	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT2	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MERRIFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MOOREFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_DENVERTON	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GEMINI_LAKE	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
++	{}
++};
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = 0;
+@@ -1013,6 +1022,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
++
++	if (x86_match_cpu(cpu_no_l1tf))
++		return;
++
++	setup_force_cpu_bug(X86_BUG_L1TF);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 38216f678fc3..e59c0ea82a33 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -55,7 +55,9 @@ extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
+ extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
+ 
+ extern void detect_num_cpu_cores(struct cpuinfo_x86 *c);
++extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology(struct cpuinfo_x86 *c);
++extern int detect_ht_early(struct cpuinfo_x86 *c);
+ extern void detect_ht(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index eb75564f2d25..6602941cfebf 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -301,6 +301,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	}
+ 
+ 	check_mpx_erratum(c);
++
++	/*
++	 * Get the number of SMT siblings early from the extended topology
++	 * leaf, if available. Otherwise try the legacy SMT detection.
++	 */
++	if (detect_extended_topology_early(c) < 0)
++		detect_ht_early(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 08286269fd24..b9bc8a1a584e 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -509,12 +509,20 @@ static struct platform_device	*microcode_pdev;
+ 
+ static int check_online_cpus(void)
+ {
+-	if (num_online_cpus() == num_present_cpus())
+-		return 0;
++	unsigned int cpu;
+ 
+-	pr_err("Not all CPUs online, aborting microcode update.\n");
++	/*
++	 * Make sure all CPUs are online.  It's fine for SMT to be disabled if
++	 * all the primary threads are still online.
++	 */
++	for_each_present_cpu(cpu) {
++		if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) {
++			pr_err("Not all CPUs online, aborting microcode update.\n");
++			return -EINVAL;
++		}
++	}
+ 
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ static atomic_t late_cpus_in;
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 81c0afb39d0a..71ca064e3794 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -22,18 +22,10 @@
+ #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
+ #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
+ 
+-/*
+- * Check for extended topology enumeration cpuid leaf 0xb and if it
+- * exists, use it for populating initial_apicid and cpu topology
+- * detection.
+- */
+-int detect_extended_topology(struct cpuinfo_x86 *c)
++int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+-	unsigned int eax, ebx, ecx, edx, sub_index;
+-	unsigned int ht_mask_width, core_plus_mask_width;
+-	unsigned int core_select_mask, core_level_siblings;
+-	static bool printed;
++	unsigned int eax, ebx, ecx, edx;
+ 
+ 	if (c->cpuid_level < 0xb)
+ 		return -1;
+@@ -52,10 +44,30 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	 * initial apic id, which also represents 32-bit extended x2apic id.
+ 	 */
+ 	c->initial_apicid = edx;
++	smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++#endif
++	return 0;
++}
++
++/*
++ * Check for extended topology enumeration cpuid leaf 0xb and if it
++ * exists, use it for populating initial_apicid and cpu topology
++ * detection.
++ */
++int detect_extended_topology(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	unsigned int eax, ebx, ecx, edx, sub_index;
++	unsigned int ht_mask_width, core_plus_mask_width;
++	unsigned int core_select_mask, core_level_siblings;
++
++	if (detect_extended_topology_early(c) < 0)
++		return -1;
+ 
+ 	/*
+ 	 * Populate HT related information from sub-leaf level 0.
+ 	 */
++	cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ 	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 
+@@ -86,15 +98,6 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+ 
+ 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
+-
+-	if (!printed) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-		       c->phys_proc_id);
+-		if (c->x86_max_cores > 1)
+-			pr_info("CPU: Processor Core ID: %d\n",
+-			       c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index f92a6593de1e..2ea85b32421a 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -10,6 +10,7 @@
+ #include <asm/fpu/signal.h>
+ #include <asm/fpu/types.h>
+ #include <asm/traps.h>
++#include <asm/irq_regs.h>
+ 
+ #include <linux/hardirq.h>
+ #include <linux/pkeys.h>
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 346b24883911..b0acb22e5a46 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1,6 +1,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/export.h>
+ #include <linux/delay.h>
+ #include <linux/errno.h>
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 86c4439f9d74..519649ddf100 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/init.h>
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 74383a3780dc..01adea278a71 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -8,6 +8,7 @@
+ #include <asm/traps.h>
+ #include <asm/proto.h>
+ #include <asm/desc.h>
++#include <asm/hw_irq.h>
+ 
+ struct idt_data {
+ 	unsigned int	vector;
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 328d027d829d..59b5f2ea7c2f 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -10,6 +10,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/delay.h>
+ #include <linux/export.h>
++#include <linux/irq.h>
+ 
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index c1bdbd3d3232..95600a99ae93 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/notifier.h>
+ #include <linux/cpu.h>
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index d86e344f5b3d..0469cd078db1 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/kernel_stat.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/delay.h>
+ #include <linux/ftrace.h>
+diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
+index 772196c1b8c4..a0693b71cfc1 100644
+--- a/arch/x86/kernel/irqinit.c
++++ b/arch/x86/kernel/irqinit.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/kprobes.h>
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 6f4d42377fe5..44e26dc326d5 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -395,8 +395,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 			  - (u8 *) real;
+ 		if ((s64) (s32) newdisp != newdisp) {
+ 			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
+-			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
+-				src, real, insn->displacement.value);
+ 			return 0;
+ 		}
+ 		disp = (u8 *) dest + insn_offset_displacement(insn);
+@@ -640,8 +638,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ 		 * Raise a BUG or we'll continue in an endless reentering loop
+ 		 * and eventually a stack overflow.
+ 		 */
+-		printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n",
+-		       p->addr);
++		pr_err("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 	default:
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 99dc79e76bdc..930c88341e4e 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -88,10 +88,12 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (tgt_clobbers & ~site_clobbers)
+-		return len;	/* target would clobber too much for this site */
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe8; /* call */
+ 	b->delta = delta;
+@@ -106,8 +108,12 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe9;	/* jmp */
+ 	b->delta = delta;
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 2f86d883dd95..74b4472ba0a6 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -823,6 +823,12 @@ void __init setup_arch(char **cmdline_p)
+ 	memblock_reserve(__pa_symbol(_text),
+ 			 (unsigned long)__bss_stop - (unsigned long)_text);
+ 
++	/*
++	 * Make sure page 0 is always reserved because on systems with
++	 * L1TF its contents can be leaked to user processes.
++	 */
++	memblock_reserve(0, PAGE_SIZE);
++
+ 	early_reserve_initrd();
+ 
+ 	/*
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 5c574dff4c1a..04adc8d60aed 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -261,6 +261,7 @@ __visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
+ {
+ 	ack_APIC_irq();
+ 	inc_irq_stat(irq_resched_count);
++	kvm_set_cpu_l1tf_flush_l1d();
+ 
+ 	if (trace_resched_ipi_enabled()) {
+ 		/*
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index db9656e13ea0..f02ecaf97904 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -80,6 +80,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/spec-ctrl.h>
++#include <asm/hw_irq.h>
+ 
+ /* representing HT siblings of each logical CPU */
+ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
+@@ -270,6 +271,23 @@ static void notrace start_secondary(void *unused)
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
++/**
++ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
++ * @cpu:	CPU to check
++ */
++bool topology_is_primary_thread(unsigned int cpu)
++{
++	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
++}
++
++/**
++ * topology_smt_supported - Check whether SMT is supported by the CPUs
++ */
++bool topology_smt_supported(void)
++{
++	return smp_num_siblings > 1;
++}
++
+ /**
+  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+  *
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index 774ebafa97c4..be01328eb755 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -12,6 +12,7 @@
+ 
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/i8253.h>
+ #include <linux/time.h>
+ #include <linux/export.h>
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 6b8f11521c41..a44e568363a4 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3840,6 +3840,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ {
+ 	int r = 1;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	switch (vcpu->arch.apf.host_apf_reason) {
+ 	default:
+ 		trace_kvm_page_fault(fault_address, error_code);
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 5d8e317c2b04..46b428c0990e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -188,6 +188,150 @@ module_param(ple_window_max, uint, 0444);
+ 
+ extern const ulong vmx_return;
+ 
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
++static DEFINE_MUTEX(vmx_l1d_flush_mutex);
++
++/* Storage for pre module init parameter parsing */
++static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_L1D_FLUSH_AUTO;
++
++static const struct {
++	const char *option;
++	enum vmx_l1d_flush_state cmd;
++} vmentry_l1d_param[] = {
++	{"auto",	VMENTER_L1D_FLUSH_AUTO},
++	{"never",	VMENTER_L1D_FLUSH_NEVER},
++	{"cond",	VMENTER_L1D_FLUSH_COND},
++	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++};
++
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
++{
++	struct page *page;
++	unsigned int i;
++
++	if (!enable_ept) {
++		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
++		return 0;
++	}
++
++       if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
++	       u64 msr;
++
++	       rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
++	       if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
++		       l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
++		       return 0;
++	       }
++       }
++
++	/* If set to auto use the default l1tf mitigation method */
++	if (l1tf == VMENTER_L1D_FLUSH_AUTO) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++			l1tf = VMENTER_L1D_FLUSH_NEVER;
++			break;
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++			l1tf = VMENTER_L1D_FLUSH_COND;
++			break;
++		case L1TF_MITIGATION_FULL:
++		case L1TF_MITIGATION_FULL_FORCE:
++			l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++			break;
++		}
++	} else if (l1tf_mitigation == L1TF_MITIGATION_FULL_FORCE) {
++		l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++	}
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
++	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
++		if (!page)
++			return -ENOMEM;
++		vmx_l1d_flush_pages = page_address(page);
++
++		/*
++		 * Initialize each page with a different pattern in
++		 * order to protect against KSM in the nested
++		 * virtualization case.
++		 */
++		for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
++			memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
++			       PAGE_SIZE);
++		}
++	}
++
++	l1tf_vmx_mitigation = l1tf;
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER)
++		static_branch_enable(&vmx_l1d_should_flush);
++	else
++		static_branch_disable(&vmx_l1d_should_flush);
++
++	if (l1tf == VMENTER_L1D_FLUSH_COND)
++		static_branch_enable(&vmx_l1d_flush_cond);
++	else
++		static_branch_disable(&vmx_l1d_flush_cond);
++	return 0;
++}
++
++static int vmentry_l1d_flush_parse(const char *s)
++{
++	unsigned int i;
++
++	if (s) {
++		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
++			if (sysfs_streq(s, vmentry_l1d_param[i].option))
++				return vmentry_l1d_param[i].cmd;
++		}
++	}
++	return -EINVAL;
++}
++
++static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
++{
++	int l1tf, ret;
++
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
++	l1tf = vmentry_l1d_flush_parse(s);
++	if (l1tf < 0)
++		return l1tf;
++
++	/*
++	 * Has vmx_init() run already? If not then this is the pre init
++	 * parameter parsing. In that case just store the value and let
++	 * vmx_init() do the proper setup after enable_ept has been
++	 * established.
++	 */
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) {
++		vmentry_l1d_flush_param = l1tf;
++		return 0;
++	}
++
++	mutex_lock(&vmx_l1d_flush_mutex);
++	ret = vmx_setup_l1d_flush(l1tf);
++	mutex_unlock(&vmx_l1d_flush_mutex);
++	return ret;
++}
++
++static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
++{
++	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
++}
++
++static const struct kernel_param_ops vmentry_l1d_flush_ops = {
++	.set = vmentry_l1d_flush_set,
++	.get = vmentry_l1d_flush_get,
++};
++module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
++
+ struct kvm_vmx {
+ 	struct kvm kvm;
+ 
+@@ -757,6 +901,11 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
+ 			(unsigned long *)&pi_desc->control);
+ }
+ 
++struct vmx_msrs {
++	unsigned int		nr;
++	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
++};
++
+ struct vcpu_vmx {
+ 	struct kvm_vcpu       vcpu;
+ 	unsigned long         host_rsp;
+@@ -790,9 +939,8 @@ struct vcpu_vmx {
+ 	struct loaded_vmcs   *loaded_vmcs;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+-		unsigned nr;
+-		struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS];
+-		struct vmx_msr_entry host[NR_AUTOLOAD_MSRS];
++		struct vmx_msrs guest;
++		struct vmx_msrs host;
+ 	} msr_autoload;
+ 	struct {
+ 		int           loaded;
+@@ -2377,9 +2525,20 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ 	vm_exit_controls_clearbit(vmx, exit);
+ }
+ 
++static int find_msr(struct vmx_msrs *m, unsigned int msr)
++{
++	unsigned int i;
++
++	for (i = 0; i < m->nr; ++i) {
++		if (m->val[i].index == msr)
++			return i;
++	}
++	return -ENOENT;
++}
++
+ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ {
+-	unsigned i;
++	int i;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2400,18 +2559,21 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ 		}
+ 		break;
+ 	}
++	i = find_msr(&m->guest, msr);
++	if (i < 0)
++		goto skip_guest;
++	--m->guest.nr;
++	m->guest.val[i] = m->guest.val[m->guest.nr];
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
+-
+-	if (i == m->nr)
++skip_guest:
++	i = find_msr(&m->host, msr);
++	if (i < 0)
+ 		return;
+-	--m->nr;
+-	m->guest[i] = m->guest[m->nr];
+-	m->host[i] = m->host[m->nr];
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
++
++	--m->host.nr;
++	m->host.val[i] = m->host.val[m->host.nr];
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
+ }
+ 
+ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+@@ -2426,9 +2588,9 @@ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ }
+ 
+ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+-				  u64 guest_val, u64 host_val)
++				  u64 guest_val, u64 host_val, bool entry_only)
+ {
+-	unsigned i;
++	int i, j = 0;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2463,24 +2625,31 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+ 		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+ 	}
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
++	i = find_msr(&m->guest, msr);
++	if (!entry_only)
++		j = find_msr(&m->host, msr);
+ 
+-	if (i == NR_AUTOLOAD_MSRS) {
++	if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) {
+ 		printk_once(KERN_WARNING "Not enough msr switch entries. "
+ 				"Can't add msr %x\n", msr);
+ 		return;
+-	} else if (i == m->nr) {
+-		++m->nr;
+-		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+ 	}
++	if (i < 0) {
++		i = m->guest.nr++;
++		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
++	}
++	m->guest.val[i].index = msr;
++	m->guest.val[i].value = guest_val;
++
++	if (entry_only)
++		return;
+ 
+-	m->guest[i].index = msr;
+-	m->guest[i].value = guest_val;
+-	m->host[i].index = msr;
+-	m->host[i].value = host_val;
++	if (j < 0) {
++		j = m->host.nr++;
++		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
++	}
++	m->host.val[j].index = msr;
++	m->host.val[j].value = host_val;
+ }
+ 
+ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+@@ -2524,7 +2693,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+ 			guest_efer &= ~EFER_LME;
+ 		if (guest_efer != host_efer)
+ 			add_atomic_switch_msr(vmx, MSR_EFER,
+-					      guest_efer, host_efer);
++					      guest_efer, host_efer, false);
+ 		return false;
+ 	} else {
+ 		guest_efer &= ~ignore_bits;
+@@ -3987,7 +4156,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.ia32_xss = data;
+ 		if (vcpu->arch.ia32_xss != host_xss)
+ 			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+-				vcpu->arch.ia32_xss, host_xss);
++				vcpu->arch.ia32_xss, host_xss, false);
+ 		else
+ 			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+ 		break;
+@@ -6274,9 +6443,9 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
+ 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
+ 		vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
+@@ -6296,8 +6465,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 		++vmx->nmsrs;
+ 	}
+ 
+-	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities);
++	vmx->arch_capabilities = kvm_get_arch_capabilities();
+ 
+ 	vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
+ 
+@@ -9548,6 +9716,79 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
++/*
++ * Software based L1D cache flush which is used when microcode providing
++ * the cache control MSR is not loaded.
++ *
++ * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but
++ * flushing it requires reading 64 KiB because the replacement algorithm
++ * is not exactly LRU. This could be sized at runtime via topology
++ * information but as all relevant affected CPUs have 32KiB L1D cache size
++ * there is no point in doing so.
++ */
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
++{
++	int size = PAGE_SIZE << L1D_CACHE_ORDER;
++
++	/*
++	 * This code is only executed when the flush mode is 'cond' or
++	 * 'always'
++	 */
++	if (static_branch_likely(&vmx_l1d_flush_cond)) {
++		bool flush_l1d;
++
++		/*
++		 * Clear the per-vcpu flush bit, it gets set again
++		 * either from vcpu_run() or from one of the unsafe
++		 * VMEXIT handlers.
++		 */
++		flush_l1d = vcpu->arch.l1tf_flush_l1d;
++		vcpu->arch.l1tf_flush_l1d = false;
++
++		/*
++		 * Clear the per-cpu flush bit, it gets set again from
++		 * the interrupt handlers.
++		 */
++		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
++		kvm_clear_cpu_l1tf_flush_l1d();
++
++		if (!flush_l1d)
++			return;
++	}
++
++	vcpu->stat.l1d_flush++;
++
++	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
++		return;
++	}
++
++	asm volatile(
++		/* First ensure the pages are in the TLB */
++		"xorl	%%eax, %%eax\n"
++		".Lpopulate_tlb:\n\t"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$4096, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lpopulate_tlb\n\t"
++		"xorl	%%eax, %%eax\n\t"
++		"cpuid\n\t"
++		/* Now fill the cache */
++		"xorl	%%eax, %%eax\n"
++		".Lfill_cache:\n"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$64, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lfill_cache\n\t"
++		"lfence\n"
++		:: [flush_pages] "r" (vmx_l1d_flush_pages),
++		    [size] "r" (size)
++		: "eax", "ebx", "ecx", "edx");
++}
++
+ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+ {
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+@@ -9949,7 +10190,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+ 			clear_atomic_switch_msr(vmx, msrs[i].msr);
+ 		else
+ 			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
+-					msrs[i].host);
++					msrs[i].host, false);
+ }
+ 
+ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
+@@ -10044,6 +10285,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+ 		(unsigned long)&current_evmcs->host_rsp : 0;
+ 
++	if (static_branch_unlikely(&vmx_l1d_should_flush))
++		vmx_l1d_flush(vcpu);
++
+ 	asm(
+ 		/* Store host registers */
+ 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
+@@ -10403,10 +10647,37 @@ free_vcpu:
+ 	return ERR_PTR(err);
+ }
+ 
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+ 	if (!ple_gap)
+ 		kvm->arch.pause_in_guest = true;
++
++	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++			/* 'I explicitly don't care' is set */
++			break;
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++		case L1TF_MITIGATION_FULL:
++			/*
++			 * Warn upon starting the first VM in a potentially
++			 * insecure environment.
++			 */
++			if (cpu_smt_control == CPU_SMT_ENABLED)
++				pr_warn_once(L1TF_MSG_SMT);
++			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
++				pr_warn_once(L1TF_MSG_L1D);
++			break;
++		case L1TF_MITIGATION_FULL_FORCE:
++			/* Flush is enforced */
++			break;
++		}
++	}
+ 	return 0;
+ }
+ 
+@@ -11260,10 +11531,10 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 	 * Set the MSR load/store lists to match L0's settings.
+ 	 */
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+@@ -11899,6 +12170,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ 		return ret;
+ 	}
+ 
++	/* Hide L1D cache contents from the nested guest.  */
++	vmx->vcpu.arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * If we're entering a halted L2 vcpu and the L2 vcpu won't be woken
+ 	 * by event injection, halt vcpu.
+@@ -12419,8 +12693,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 	vmx_segment_cache_clear(vmx);
+ 
+ 	/* Update any VMCS fields that might have changed while L2 ran */
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+ 	vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
+ 	if (vmx->hv_deadline_tsc == -1)
+ 		vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL,
+@@ -13137,6 +13411,51 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ 	.enable_smi_window = enable_smi_window,
+ };
+ 
++static void vmx_cleanup_l1d_flush(void)
++{
++	if (vmx_l1d_flush_pages) {
++		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
++		vmx_l1d_flush_pages = NULL;
++	}
++	/* Restore state so sysfs ignores VMX */
++	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++}
++
++static void vmx_exit(void)
++{
++#ifdef CONFIG_KEXEC_CORE
++	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
++	synchronize_rcu();
++#endif
++
++	kvm_exit();
++
++#if IS_ENABLED(CONFIG_HYPERV)
++	if (static_branch_unlikely(&enable_evmcs)) {
++		int cpu;
++		struct hv_vp_assist_page *vp_ap;
++		/*
++		 * Reset everything to support using non-enlightened VMCS
++		 * access later (e.g. when we reload the module with
++		 * enlightened_vmcs=0)
++		 */
++		for_each_online_cpu(cpu) {
++			vp_ap =	hv_get_vp_assist_page(cpu);
++
++			if (!vp_ap)
++				continue;
++
++			vp_ap->current_nested_vmcs = 0;
++			vp_ap->enlighten_vmentry = 0;
++		}
++
++		static_branch_disable(&enable_evmcs);
++	}
++#endif
++	vmx_cleanup_l1d_flush();
++}
++module_exit(vmx_exit);
++
+ static int __init vmx_init(void)
+ {
+ 	int r;
+@@ -13171,10 +13490,25 @@ static int __init vmx_init(void)
+ #endif
+ 
+ 	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
+-                     __alignof__(struct vcpu_vmx), THIS_MODULE);
++		     __alignof__(struct vcpu_vmx), THIS_MODULE);
+ 	if (r)
+ 		return r;
+ 
++	/*
++	 * Must be called after kvm_init() so enable_ept is properly set
++	 * up. Hand the parameter mitigation value in which was stored in
++	 * the pre module init parser. If no parameter was given, it will
++	 * contain 'auto' which will be turned into the default 'cond'
++	 * mitigation mode.
++	 */
++	if (boot_cpu_has(X86_BUG_L1TF)) {
++		r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
++		if (r) {
++			vmx_exit();
++			return r;
++		}
++	}
++
+ #ifdef CONFIG_KEXEC_CORE
+ 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ 			   crash_vmclear_local_loaded_vmcss);
+@@ -13183,39 +13517,4 @@ static int __init vmx_init(void)
+ 
+ 	return 0;
+ }
+-
+-static void __exit vmx_exit(void)
+-{
+-#ifdef CONFIG_KEXEC_CORE
+-	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+-	synchronize_rcu();
+-#endif
+-
+-	kvm_exit();
+-
+-#if IS_ENABLED(CONFIG_HYPERV)
+-	if (static_branch_unlikely(&enable_evmcs)) {
+-		int cpu;
+-		struct hv_vp_assist_page *vp_ap;
+-		/*
+-		 * Reset everything to support using non-enlightened VMCS
+-		 * access later (e.g. when we reload the module with
+-		 * enlightened_vmcs=0)
+-		 */
+-		for_each_online_cpu(cpu) {
+-			vp_ap =	hv_get_vp_assist_page(cpu);
+-
+-			if (!vp_ap)
+-				continue;
+-
+-			vp_ap->current_nested_vmcs = 0;
+-			vp_ap->enlighten_vmentry = 0;
+-		}
+-
+-		static_branch_disable(&enable_evmcs);
+-	}
+-#endif
+-}
+-
+-module_init(vmx_init)
+-module_exit(vmx_exit)
++module_init(vmx_init);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 2b812b3c5088..a5caa5e5480c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -195,6 +195,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	{ "irq_injections", VCPU_STAT(irq_injections) },
+ 	{ "nmi_injections", VCPU_STAT(nmi_injections) },
+ 	{ "req_event", VCPU_STAT(req_event) },
++	{ "l1d_flush", VCPU_STAT(l1d_flush) },
+ 	{ "mmu_shadow_zapped", VM_STAT(mmu_shadow_zapped) },
+ 	{ "mmu_pte_write", VM_STAT(mmu_pte_write) },
+ 	{ "mmu_pte_updated", VM_STAT(mmu_pte_updated) },
+@@ -1102,11 +1103,35 @@ static u32 msr_based_features[] = {
+ 
+ static unsigned int num_msr_based_features;
+ 
++u64 kvm_get_arch_capabilities(void)
++{
++	u64 data;
++
++	rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
++
++	/*
++	 * If we're doing cache flushes (either "always" or "cond")
++	 * we will do one whenever the guest does a vmlaunch/vmresume.
++	 * If an outer hypervisor is doing the cache flush for us
++	 * (VMENTER_L1D_FLUSH_NESTED_VM), we can safely pass that
++	 * capability to the guest too, and if EPT is disabled we're not
++	 * vulnerable.  Overall, only VMENTER_L1D_FLUSH_NEVER will
++	 * require a nested hypervisor to do a flush of its own.
++	 */
++	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
++		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
++
++	return data;
++}
++EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
++
+ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
+ {
+ 	switch (msr->index) {
+-	case MSR_IA32_UCODE_REV:
+ 	case MSR_IA32_ARCH_CAPABILITIES:
++		msr->data = kvm_get_arch_capabilities();
++		break;
++	case MSR_IA32_UCODE_REV:
+ 		rdmsrl_safe(msr->index, &msr->data);
+ 		break;
+ 	default:
+@@ -4876,6 +4901,9 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
+ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
+ 				unsigned int bytes, struct x86_exception *exception)
+ {
++	/* kvm_write_guest_virt_system can pull in tons of pages. */
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+ 					   PFERR_WRITE_MASK, exception);
+ }
+@@ -6052,6 +6080,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
+ 	bool writeback = true;
+ 	bool write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * Clear write_fault_to_shadow_pgtable here to ensure it is
+ 	 * never reused.
+@@ -7581,6 +7611,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+ 	struct kvm *kvm = vcpu->kvm;
+ 
+ 	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
++	vcpu->arch.l1tf_flush_l1d = true;
+ 
+ 	for (;;) {
+ 		if (kvm_vcpu_running(vcpu)) {
+@@ -8700,6 +8731,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+ {
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	kvm_x86_ops->sched_in(vcpu, cpu);
+ }
+ 
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index cee58a972cb2..83241eb71cd4 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -4,6 +4,8 @@
+ #include <linux/swap.h>
+ #include <linux/memblock.h>
+ #include <linux/bootmem.h>	/* for max_low_pfn */
++#include <linux/swapfile.h>
++#include <linux/swapops.h>
+ 
+ #include <asm/set_memory.h>
+ #include <asm/e820/api.h>
+@@ -880,3 +882,26 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+ 	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+ 	__pte2cachemode_tbl[entry] = cache;
+ }
++
++#ifdef CONFIG_SWAP
++unsigned long max_swapfile_size(void)
++{
++	unsigned long pages;
++
++	pages = generic_max_swapfile_size();
++
++	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
++		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
++		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		/*
++		 * We encode swap offsets also with 3 bits below those for pfn
++		 * which makes the usable limit higher.
++		 */
++#if CONFIG_PGTABLE_LEVELS > 2
++		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
++#endif
++		pages = min_t(unsigned long, l1tf_limit, pages);
++	}
++	return pages;
++}
++#endif
+diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
+index 7c8686709636..79eb55ce69a9 100644
+--- a/arch/x86/mm/kmmio.c
++++ b/arch/x86/mm/kmmio.c
+@@ -126,24 +126,29 @@ static struct kmmio_fault_page *get_kmmio_fault_page(unsigned long addr)
+ 
+ static void clear_pmd_presence(pmd_t *pmd, bool clear, pmdval_t *old)
+ {
++	pmd_t new_pmd;
+ 	pmdval_t v = pmd_val(*pmd);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pmd(pmd, __pmd(v));
++		*old = v;
++		new_pmd = pmd_mknotpresent(*pmd);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		new_pmd = __pmd(*old);
++	}
++	set_pmd(pmd, new_pmd);
+ }
+ 
+ static void clear_pte_presence(pte_t *pte, bool clear, pteval_t *old)
+ {
+ 	pteval_t v = pte_val(*pte);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pte_atomic(pte, __pte(v));
++		*old = v;
++		/* Nothing should care about address */
++		pte_clear(&init_mm, 0, pte);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		set_pte_atomic(pte, __pte(*old));
++	}
+ }
+ 
+ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 48c591251600..f40ab8185d94 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -240,3 +240,24 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t count)
+ 
+ 	return phys_addr_valid(addr + count - 1);
+ }
++
++/*
++ * Only allow root to set high MMIO mappings to PROT_NONE.
++ * This prevents an unprivileged user from setting them to PROT_NONE and
++ * inverting them to point to valid memory for L1TF speculation.
++ *
++ * Note: for locked down kernels may want to disable the root override.
++ */
++bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return true;
++	if (!__pte_needs_invert(pgprot_val(prot)))
++		return true;
++	/* If it's real memory always allow */
++	if (pfn_valid(pfn))
++		return true;
++	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++		return false;
++	return true;
++}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 3bded76e8d5c..7bb6f65c79de 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1014,8 +1014,8 @@ static long populate_pmd(struct cpa_data *cpa,
+ 
+ 		pmd = pmd_offset(pud, start);
+ 
+-		set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pmd_pgprot)));
++		set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn,
++					canon_pgprot(pmd_pgprot))));
+ 
+ 		start	  += PMD_SIZE;
+ 		cpa->pfn  += PMD_SIZE >> PAGE_SHIFT;
+@@ -1087,8 +1087,8 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
+ 	 * Map everything starting from the Gb boundary, possibly with 1G pages
+ 	 */
+ 	while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) {
+-		set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pud_pgprot)));
++		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn,
++				   canon_pgprot(pud_pgprot))));
+ 
+ 		start	  += PUD_SIZE;
+ 		cpa->pfn  += PUD_SIZE >> PAGE_SHIFT;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 4d418e705878..fb752d9a3ce9 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -45,6 +45,7 @@
+ #include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+ #include <asm/desc.h>
++#include <asm/sections.h>
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Kernel/User page tables isolation: " fmt
+diff --git a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+index 4f5fa65a1011..2acd6be13375 100644
+--- a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
++++ b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+@@ -18,6 +18,7 @@
+ #include <asm/intel-mid.h>
+ #include <asm/intel_scu_ipc.h>
+ #include <asm/io_apic.h>
++#include <asm/hw_irq.h>
+ 
+ #define TANGIER_EXT_TIMER0_MSI 12
+ 
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index ca446da48fd2..3866b96a7ee7 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -1285,6 +1285,7 @@ void uv_bau_message_interrupt(struct pt_regs *regs)
+ 	struct msg_desc msgdesc;
+ 
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ 	time_start = get_cycles();
+ 
+ 	bcp = &per_cpu(bau_control, smp_processor_id());
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 3b5318505c69..2eeddd814653 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -3,6 +3,7 @@
+ #endif
+ #include <linux/cpu.h>
+ #include <linux/kexec.h>
++#include <linux/slab.h>
+ 
+ #include <xen/features.h>
+ #include <xen/page.h>
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 30cc9c877ebb..eb9443d5bae1 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -540,16 +540,24 @@ ssize_t __weak cpu_show_spec_store_bypass(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
++static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+ 	&dev_attr_spectre_v1.attr,
+ 	&dev_attr_spectre_v2.attr,
+ 	&dev_attr_spec_store_bypass.attr,
++	&dev_attr_l1tf.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index dc87797db500..b50b74053664 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -4,6 +4,7 @@
+  * Copyright © 2017-2018 Intel Corporation
+  */
+ 
++#include <linux/irq.h>
+ #include "i915_pmu.h"
+ #include "intel_ringbuffer.h"
+ #include "i915_drv.h"
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index 6269750e2b54..b4941101f21a 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -62,6 +62,7 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/device.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ 
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index f6325f1a89e8..d4d4a55f09f8 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -45,6 +45,7 @@
+ #include <linux/irqdomain.h>
+ #include <asm/irqdomain.h>
+ #include <asm/apic.h>
++#include <linux/irq.h>
+ #include <linux/msi.h>
+ #include <linux/hyperv.h>
+ #include <linux/refcount.h>
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index f59639afaa39..26ca0276b503 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1083,6 +1083,18 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
+ static inline void init_espfix_bsp(void) { }
+ #endif
+ 
++#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
++static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	return true;
++}
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return false;
++}
++#endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */
++
+ #endif /* !__ASSEMBLY__ */
+ 
+ #ifndef io_remap_pfn_range
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 3233fbe23594..45789a892c41 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -55,6 +55,8 @@ extern ssize_t cpu_show_spectre_v2(struct device *dev,
+ 				   struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ 					  struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -166,4 +168,23 @@ void cpuhp_report_idle_dead(void);
+ static inline void cpuhp_report_idle_dead(void) { }
+ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+ 
++enum cpuhp_smt_control {
++	CPU_SMT_ENABLED,
++	CPU_SMT_DISABLED,
++	CPU_SMT_FORCE_DISABLED,
++	CPU_SMT_NOT_SUPPORTED,
++};
++
++#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
++extern enum cpuhp_smt_control cpu_smt_control;
++extern void cpu_smt_disable(bool force);
++extern void cpu_smt_check_topology_early(void);
++extern void cpu_smt_check_topology(void);
++#else
++# define cpu_smt_control		(CPU_SMT_ENABLED)
++static inline void cpu_smt_disable(bool force) { }
++static inline void cpu_smt_check_topology_early(void) { }
++static inline void cpu_smt_check_topology(void) { }
++#endif
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
+index 06bd7b096167..e06febf62978 100644
+--- a/include/linux/swapfile.h
++++ b/include/linux/swapfile.h
+@@ -10,5 +10,7 @@ extern spinlock_t swap_lock;
+ extern struct plist_head swap_active_head;
+ extern struct swap_info_struct *swap_info[];
+ extern int try_to_unuse(unsigned int, bool, unsigned long);
++extern unsigned long generic_max_swapfile_size(void);
++extern unsigned long max_swapfile_size(void);
+ 
+ #endif /* _LINUX_SWAPFILE_H */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2f8f338e77cf..f80afc674f02 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -60,6 +60,7 @@ struct cpuhp_cpu_state {
+ 	bool			rollback;
+ 	bool			single;
+ 	bool			bringup;
++	bool			booted_once;
+ 	struct hlist_node	*node;
+ 	struct hlist_node	*last;
+ 	enum cpuhp_state	cb_state;
+@@ -342,6 +343,85 @@ void cpu_hotplug_enable(void)
+ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
+ #endif	/* CONFIG_HOTPLUG_CPU */
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
++EXPORT_SYMBOL_GPL(cpu_smt_control);
++
++static bool cpu_smt_available __read_mostly;
++
++void __init cpu_smt_disable(bool force)
++{
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
++		cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return;
++
++	if (force) {
++		pr_info("SMT: Force disabled\n");
++		cpu_smt_control = CPU_SMT_FORCE_DISABLED;
++	} else {
++		cpu_smt_control = CPU_SMT_DISABLED;
++	}
++}
++
++/*
++ * The decision whether SMT is supported can only be done after the full
++ * CPU identification. Called from architecture code before non boot CPUs
++ * are brought up.
++ */
++void __init cpu_smt_check_topology_early(void)
++{
++	if (!topology_smt_supported())
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++/*
++ * If SMT was disabled by BIOS, detect it here, after the CPUs have been
++ * brought online. This ensures the smt/l1tf sysfs entries are consistent
++ * with reality. cpu_smt_available is set to true during the bringup of non
++ * boot CPUs when an SMT sibling is detected. Note, this may overwrite
++ * cpu_smt_control's previous setting.
++ */
++void __init cpu_smt_check_topology(void)
++{
++	if (!cpu_smt_available)
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++static int __init smt_cmdline_disable(char *str)
++{
++	cpu_smt_disable(str && !strcmp(str, "force"));
++	return 0;
++}
++early_param("nosmt", smt_cmdline_disable);
++
++static inline bool cpu_smt_allowed(unsigned int cpu)
++{
++	if (topology_is_primary_thread(cpu))
++		return true;
++
++	/*
++	 * If the CPU is not a 'primary' thread and the booted_once bit is
++	 * set then the processor has SMT support. Store this information
++	 * for the late check of SMT support in cpu_smt_check_topology().
++	 */
++	if (per_cpu(cpuhp_state, cpu).booted_once)
++		cpu_smt_available = true;
++
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		return true;
++
++	/*
++	 * On x86 it's required to boot all logical CPUs at least once so
++	 * that the init code can get a chance to set CR4.MCE on each
++	 * CPU. Otherwise, a broadcasted MCE observing CR4.MCE=0b on any
++	 * core will shutdown the machine.
++	 */
++	return !per_cpu(cpuhp_state, cpu).booted_once;
++}
++#else
++static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
++#endif
++
+ static inline enum cpuhp_state
+ cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ {
+@@ -422,6 +502,16 @@ static int bringup_wait_for_ap(unsigned int cpu)
+ 	stop_machine_unpark(cpu);
+ 	kthread_unpark(st->thread);
+ 
++	/*
++	 * SMT soft disabling on X86 requires to bring the CPU out of the
++	 * BIOS 'wait for SIPI' state in order to set the CR4.MCE bit.  The
++	 * CPU marked itself as booted_once in cpu_notify_starting() so the
++	 * cpu_smt_allowed() check will now return false if this is not the
++	 * primary sibling.
++	 */
++	if (!cpu_smt_allowed(cpu))
++		return -ECANCELED;
++
+ 	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ 		return 0;
+ 
+@@ -754,7 +844,6 @@ static int takedown_cpu(unsigned int cpu)
+ 
+ 	/* Park the smpboot threads */
+ 	kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
+-	smpboot_park_threads(cpu);
+ 
+ 	/*
+ 	 * Prevent irq alloc/free while the dying cpu reorganizes the
+@@ -907,20 +996,19 @@ out:
+ 	return ret;
+ }
+ 
++static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
++{
++	if (cpu_hotplug_disabled)
++		return -EBUSY;
++	return _cpu_down(cpu, 0, target);
++}
++
+ static int do_cpu_down(unsigned int cpu, enum cpuhp_state target)
+ {
+ 	int err;
+ 
+ 	cpu_maps_update_begin();
+-
+-	if (cpu_hotplug_disabled) {
+-		err = -EBUSY;
+-		goto out;
+-	}
+-
+-	err = _cpu_down(cpu, 0, target);
+-
+-out:
++	err = cpu_down_maps_locked(cpu, target);
+ 	cpu_maps_update_done();
+ 	return err;
+ }
+@@ -949,6 +1037,7 @@ void notify_cpu_starting(unsigned int cpu)
+ 	int ret;
+ 
+ 	rcu_cpu_starting(cpu);	/* Enables RCU usage on this CPU. */
++	st->booted_once = true;
+ 	while (st->state < target) {
+ 		st->state++;
+ 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+@@ -1058,6 +1147,10 @@ static int do_cpu_up(unsigned int cpu, enum cpuhp_state target)
+ 		err = -EBUSY;
+ 		goto out;
+ 	}
++	if (!cpu_smt_allowed(cpu)) {
++		err = -EPERM;
++		goto out;
++	}
+ 
+ 	err = _cpu_up(cpu, 0, target);
+ out:
+@@ -1332,7 +1425,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 	[CPUHP_AP_SMPBOOT_THREADS] = {
+ 		.name			= "smpboot/threads:online",
+ 		.startup.single		= smpboot_unpark_threads,
+-		.teardown.single	= NULL,
++		.teardown.single	= smpboot_park_threads,
+ 	},
+ 	[CPUHP_AP_IRQ_AFFINITY_ONLINE] = {
+ 		.name			= "irq/affinity:online",
+@@ -1906,10 +1999,172 @@ static const struct attribute_group cpuhp_cpu_root_attr_group = {
+ 	NULL
+ };
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++
++static const char *smt_states[] = {
++	[CPU_SMT_ENABLED]		= "on",
++	[CPU_SMT_DISABLED]		= "off",
++	[CPU_SMT_FORCE_DISABLED]	= "forceoff",
++	[CPU_SMT_NOT_SUPPORTED]		= "notsupported",
++};
++
++static ssize_t
++show_smt_control(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return snprintf(buf, PAGE_SIZE - 2, "%s\n", smt_states[cpu_smt_control]);
++}
++
++static void cpuhp_offline_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = true;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
++}
++
++static void cpuhp_online_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = false;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
++}
++
++static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	for_each_online_cpu(cpu) {
++		if (topology_is_primary_thread(cpu))
++			continue;
++		ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
++		if (ret)
++			break;
++		/*
++		 * As this needs to hold the cpu maps lock it's impossible
++		 * to call device_offline() because that ends up calling
++		 * cpu_down() which takes cpu maps lock. cpu maps lock
++		 * needs to be held as this might race against in kernel
++		 * abusers of the hotplug machinery (thermal management).
++		 *
++		 * So nothing would update device:offline state. That would
++		 * leave the sysfs entry stale and prevent onlining after
++		 * smt control has been changed to 'off' again. This is
++		 * called under the sysfs hotplug lock, so it is properly
++		 * serialized against the regular offline usage.
++		 */
++		cpuhp_offline_cpu_device(cpu);
++	}
++	if (!ret)
++		cpu_smt_control = ctrlval;
++	cpu_maps_update_done();
++	return ret;
++}
++
++static int cpuhp_smt_enable(void)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	cpu_smt_control = CPU_SMT_ENABLED;
++	for_each_present_cpu(cpu) {
++		/* Skip online CPUs and CPUs on offline nodes */
++		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
++			continue;
++		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
++		if (ret)
++			break;
++		/* See comment in cpuhp_smt_disable() */
++		cpuhp_online_cpu_device(cpu);
++	}
++	cpu_maps_update_done();
++	return ret;
++}
++
++static ssize_t
++store_smt_control(struct device *dev, struct device_attribute *attr,
++		  const char *buf, size_t count)
++{
++	int ctrlval, ret;
++
++	if (sysfs_streq(buf, "on"))
++		ctrlval = CPU_SMT_ENABLED;
++	else if (sysfs_streq(buf, "off"))
++		ctrlval = CPU_SMT_DISABLED;
++	else if (sysfs_streq(buf, "forceoff"))
++		ctrlval = CPU_SMT_FORCE_DISABLED;
++	else
++		return -EINVAL;
++
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED)
++		return -EPERM;
++
++	if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return -ENODEV;
++
++	ret = lock_device_hotplug_sysfs();
++	if (ret)
++		return ret;
++
++	if (ctrlval != cpu_smt_control) {
++		switch (ctrlval) {
++		case CPU_SMT_ENABLED:
++			ret = cpuhp_smt_enable();
++			break;
++		case CPU_SMT_DISABLED:
++		case CPU_SMT_FORCE_DISABLED:
++			ret = cpuhp_smt_disable(ctrlval);
++			break;
++		}
++	}
++
++	unlock_device_hotplug();
++	return ret ? ret : count;
++}
++static DEVICE_ATTR(control, 0644, show_smt_control, store_smt_control);
++
++static ssize_t
++show_smt_active(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	bool active = topology_max_smt_threads() > 1;
++
++	return snprintf(buf, PAGE_SIZE - 2, "%d\n", active);
++}
++static DEVICE_ATTR(active, 0444, show_smt_active, NULL);
++
++static struct attribute *cpuhp_smt_attrs[] = {
++	&dev_attr_control.attr,
++	&dev_attr_active.attr,
++	NULL
++};
++
++static const struct attribute_group cpuhp_smt_attr_group = {
++	.attrs = cpuhp_smt_attrs,
++	.name = "smt",
++	NULL
++};
++
++static int __init cpu_smt_state_init(void)
++{
++	return sysfs_create_group(&cpu_subsys.dev_root->kobj,
++				  &cpuhp_smt_attr_group);
++}
++
++#else
++static inline int cpu_smt_state_init(void) { return 0; }
++#endif
++
+ static int __init cpuhp_sysfs_init(void)
+ {
+ 	int cpu, ret;
+ 
++	ret = cpu_smt_state_init();
++	if (ret)
++		return ret;
++
+ 	ret = sysfs_create_group(&cpu_subsys.dev_root->kobj,
+ 				 &cpuhp_cpu_root_attr_group);
+ 	if (ret)
+@@ -2012,5 +2267,8 @@ void __init boot_cpu_init(void)
+  */
+ void __init boot_cpu_hotplug_init(void)
+ {
+-	per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE;
++#ifdef CONFIG_SMP
++	this_cpu_write(cpuhp_state.booted_once, true);
++#endif
++	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index fe365c9a08e9..5ba96d9ddbde 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5774,6 +5774,18 @@ int sched_cpu_activate(unsigned int cpu)
+ 	struct rq *rq = cpu_rq(cpu);
+ 	struct rq_flags rf;
+ 
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * The sched_smt_present static key needs to be evaluated on every
++	 * hotplug event because at boot time SMT might be disabled when
++	 * the number of booted CPUs is limited.
++	 *
++	 * If then later a sibling gets hotplugged, then the key would stay
++	 * off and SMT scheduling would never be functional.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
++		static_branch_enable_cpuslocked(&sched_smt_present);
++#endif
+ 	set_cpu_active(cpu, true);
+ 
+ 	if (sched_smp_initialized) {
+@@ -5871,22 +5883,6 @@ int sched_cpu_dying(unsigned int cpu)
+ }
+ #endif
+ 
+-#ifdef CONFIG_SCHED_SMT
+-DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+-
+-static void sched_init_smt(void)
+-{
+-	/*
+-	 * We've enumerated all CPUs and will assume that if any CPU
+-	 * has SMT siblings, CPU0 will too.
+-	 */
+-	if (cpumask_weight(cpu_smt_mask(0)) > 1)
+-		static_branch_enable(&sched_smt_present);
+-}
+-#else
+-static inline void sched_init_smt(void) { }
+-#endif
+-
+ void __init sched_init_smp(void)
+ {
+ 	sched_init_numa();
+@@ -5908,8 +5904,6 @@ void __init sched_init_smp(void)
+ 	init_sched_rt_class();
+ 	init_sched_dl_class();
+ 
+-	sched_init_smt();
+-
+ 	sched_smp_initialized = true;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2f0a0be4d344..9c219f7b0970 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6237,6 +6237,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
+ }
+ 
+ #ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+ 
+ static inline void set_idle_cores(int cpu, int val)
+ {
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 084c8b3a2681..d86eec5f51c1 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -584,6 +584,8 @@ void __init smp_init(void)
+ 		num_nodes, (num_nodes > 1 ? "s" : ""),
+ 		num_cpus,  (num_cpus  > 1 ? "s" : ""));
+ 
++	/* Final decision about SMT support */
++	cpu_smt_check_topology();
+ 	/* Any cleanup work */
+ 	smp_cpus_done(setup_max_cpus);
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index c5e87a3a82ba..0e356dd923c2 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1884,6 +1884,9 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+ 	if (addr < vma->vm_start || addr >= vma->vm_end)
+ 		return -EFAULT;
+ 
++	if (!pfn_modify_allowed(pfn, pgprot))
++		return -EACCES;
++
+ 	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
+ 
+ 	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+@@ -1919,6 +1922,9 @@ static int __vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
++	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
++		return -EACCES;
++
+ 	/*
+ 	 * If we don't have pte special, then we have to use the pfn_valid()
+ 	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+@@ -1980,6 +1986,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ {
+ 	pte_t *pte;
+ 	spinlock_t *ptl;
++	int err = 0;
+ 
+ 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ 	if (!pte)
+@@ -1987,12 +1994,16 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ 	arch_enter_lazy_mmu_mode();
+ 	do {
+ 		BUG_ON(!pte_none(*pte));
++		if (!pfn_modify_allowed(pfn, prot)) {
++			err = -EACCES;
++			break;
++		}
+ 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
+ 		pfn++;
+ 	} while (pte++, addr += PAGE_SIZE, addr != end);
+ 	arch_leave_lazy_mmu_mode();
+ 	pte_unmap_unlock(pte - 1, ptl);
+-	return 0;
++	return err;
+ }
+ 
+ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+@@ -2001,6 +2012,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ {
+ 	pmd_t *pmd;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pmd = pmd_alloc(mm, pud, addr);
+@@ -2009,9 +2021,10 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ 	VM_BUG_ON(pmd_trans_huge(*pmd));
+ 	do {
+ 		next = pmd_addr_end(addr, end);
+-		if (remap_pte_range(mm, pmd, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pte_range(mm, pmd, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pmd++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2022,6 +2035,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ {
+ 	pud_t *pud;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pud = pud_alloc(mm, p4d, addr);
+@@ -2029,9 +2043,10 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ 		return -ENOMEM;
+ 	do {
+ 		next = pud_addr_end(addr, end);
+-		if (remap_pmd_range(mm, pud, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pmd_range(mm, pud, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pud++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2042,6 +2057,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ {
+ 	p4d_t *p4d;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	p4d = p4d_alloc(mm, pgd, addr);
+@@ -2049,9 +2065,10 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ 		return -ENOMEM;
+ 	do {
+ 		next = p4d_addr_end(addr, end);
+-		if (remap_pud_range(mm, p4d, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pud_range(mm, p4d, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (p4d++, addr = next, addr != end);
+ 	return 0;
+ }
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 625608bc8962..6d331620b9e5 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -306,6 +306,42 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+ 	return pages;
+ }
+ 
++static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
++			       unsigned long next, struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
++				   unsigned long addr, unsigned long next,
++				   struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_test(unsigned long addr, unsigned long next,
++			  struct mm_walk *walk)
++{
++	return 0;
++}
++
++static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
++			   unsigned long end, unsigned long newflags)
++{
++	pgprot_t new_pgprot = vm_get_page_prot(newflags);
++	struct mm_walk prot_none_walk = {
++		.pte_entry = prot_none_pte_entry,
++		.hugetlb_entry = prot_none_hugetlb_entry,
++		.test_walk = prot_none_test,
++		.mm = current->mm,
++		.private = &new_pgprot,
++	};
++
++	return walk_page_range(start, end, &prot_none_walk);
++}
++
+ int
+ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 	unsigned long start, unsigned long end, unsigned long newflags)
+@@ -323,6 +359,19 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * Do PROT_NONE PFN permission checks here when we can still
++	 * bail out without undoing a lot of state. This is a rather
++	 * uncommon case, so doesn't need to be very optimized.
++	 */
++	if (arch_has_pfn_modify_check() &&
++	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
++	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
++		error = prot_none_walk(vma, start, end, newflags);
++		if (error)
++			return error;
++	}
++
+ 	/*
+ 	 * If we make a private mapping writable we increase our commit;
+ 	 * but (without finer accounting) cannot reduce our commit if we
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 2cc2972eedaf..18185ae4f223 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2909,6 +2909,35 @@ static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
+ 	return 0;
+ }
+ 
++
++/*
++ * Find out how many pages are allowed for a single swap device. There
++ * are two limiting factors:
++ * 1) the number of bits for the swap offset in the swp_entry_t type, and
++ * 2) the number of bits in the swap pte, as defined by the different
++ * architectures.
++ *
++ * In order to find the largest possible bit mask, a swap entry with
++ * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
++ * decoded to a swp_entry_t again, and finally the swap offset is
++ * extracted.
++ *
++ * This will mask all the bits from the initial ~0UL mask that can't
++ * be encoded in either the swp_entry_t or the architecture definition
++ * of a swap pte.
++ */
++unsigned long generic_max_swapfile_size(void)
++{
++	return swp_offset(pte_to_swp_entry(
++			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++}
++
++/* Can be overridden by an architecture for additional checks. */
++__weak unsigned long max_swapfile_size(void)
++{
++	return generic_max_swapfile_size();
++}
++
+ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 					union swap_header *swap_header,
+ 					struct inode *inode)
+@@ -2944,22 +2973,7 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 	p->cluster_next = 1;
+ 	p->cluster_nr = 0;
+ 
+-	/*
+-	 * Find out how many pages are allowed for a single swap
+-	 * device. There are two limiting factors: 1) the number
+-	 * of bits for the swap offset in the swp_entry_t type, and
+-	 * 2) the number of bits in the swap pte as defined by the
+-	 * different architectures. In order to find the
+-	 * largest possible bit mask, a swap entry with swap type 0
+-	 * and swap offset ~0UL is created, encoded to a swap pte,
+-	 * decoded to a swp_entry_t again, and finally the swap
+-	 * offset is extracted. This will mask all the bits from
+-	 * the initial ~0UL mask that can't be encoded in either
+-	 * the swp_entry_t or the architecture definition of a
+-	 * swap pte.
+-	 */
+-	maxpages = swp_offset(pte_to_swp_entry(
+-			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++	maxpages = max_swapfile_size();
+ 	last_page = swap_header->info.last_page;
+ 	if (!last_page) {
+ 		pr_warn("Empty swap-file\n");
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     02d93420a77e577482463edde235c7e489191b63
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 23:21:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=02d93420

Additional fixes for Gentoo distro patch.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 5555b8a..43bae55 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,12 +1,11 @@
---- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
-+++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
-@@ -8,4 +8,6 @@ config SRCARCH
- 	string
- 	option env="SRCARCH"
+--- a/Kconfig	2018-08-12 19:17:17.558649438 -0400
++++ b/Kconfig	2018-08-12 19:17:44.434897289 -0400
+@@ -10,3 +10,5 @@ comment "Compiler: $(CC_VERSION_TEXT)"
+ source "scripts/Kconfig.include"
  
-+source "distro/Kconfig"
+ source "arch/$(SRCARCH)/Kconfig"
 +
- source "arch/$SRCARCH/Kconfig"
++source "distro/Kconfig"
 --- /dev/null	2017-03-02 01:55:04.096566155 -0500
 +++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
 @@ -0,0 +1,145 @@


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     0eff8e3c5f75711744b609c651575ba7b3a6c554
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 23:15:02 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:29 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0eff8e3c

Update Gentoo distro patch.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 160 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 154 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 56293b0..5555b8a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,9 +1,157 @@
---- a/Kconfig	2018-06-23 18:12:59.733149912 -0400
-+++ b/Kconfig	2018-06-23 18:15:17.972352097 -0400
-@@ -10,3 +10,6 @@ comment "Compiler: $(CC_VERSION_TEXT)"
- source "scripts/Kconfig.include"
+--- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
++++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
+@@ -8,4 +8,6 @@ config SRCARCH
+ 	string
+ 	option env="SRCARCH"
  
- source "arch/$(SRCARCH)/Kconfig"
-+
 +source "distro/Kconfig"
 +
+ source "arch/$SRCARCH/Kconfig"
+--- /dev/null	2017-03-02 01:55:04.096566155 -0500
++++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
+@@ -0,0 +1,145 @@
++menu "Gentoo Linux"
++
++config GENTOO_LINUX
++	bool "Gentoo Linux support"
++
++	default y
++
++	help
++		In order to boot Gentoo Linux a minimal set of config settings needs to
++		be enabled in the kernel; to avoid the users from having to enable them
++		manually as part of a Gentoo Linux installation or a new clean config,
++		we enable these config settings by default for convenience.
++
++		See the settings that become available for more details and fine-tuning.
++
++config GENTOO_LINUX_UDEV
++	bool "Linux dynamic and persistent device naming (userspace devfs) support"
++
++	depends on GENTOO_LINUX
++	default y if GENTOO_LINUX
++
++	select DEVTMPFS
++	select TMPFS
++	select UNIX
++
++	select MMU
++	select SHMEM
++
++	help
++		In order to boot Gentoo Linux a minimal set of config settings needs to
++		be enabled in the kernel; to avoid the users from having to enable them
++		manually as part of a Gentoo Linux installation or a new clean config,
++		we enable these config settings by default for convenience.
++
++		Currently this only selects TMPFS, DEVTMPFS and their dependencies.
++		TMPFS is enabled to maintain a tmpfs file system at /dev/shm, /run and
++		/sys/fs/cgroup; DEVTMPFS to maintain a devtmpfs file system at /dev.
++
++		Some of these are critical files that need to be available early in the
++		boot process; if not available, it causes sysfs and udev to malfunction.
++
++		To ensure Gentoo Linux boots, it is best to leave this setting enabled;
++		if you run a custom setup, you could consider whether to disable this.
++
++config GENTOO_LINUX_PORTAGE
++	bool "Select options required by Portage features"
++
++	depends on GENTOO_LINUX
++	default y if GENTOO_LINUX
++
++	select CGROUPS
++	select NAMESPACES
++	select IPC_NS
++	select NET_NS
++	select SYSVIPC
++
++	help
++		This enables options required by various Portage FEATURES.
++		Currently this selects:
++
++		CGROUPS     (required for FEATURES=cgroup)
++		IPC_NS      (required for FEATURES=ipc-sandbox)
++		NET_NS      (required for FEATURES=network-sandbox)
++		SYSVIPC     (required by IPC_NS)
++   
++
++		It is highly recommended that you leave this enabled as these FEATURES
++		are, or will soon be, enabled by default.
++
++menu "Support for init systems, system and service managers"
++	visible if GENTOO_LINUX
++
++config GENTOO_LINUX_INIT_SCRIPT
++	bool "OpenRC, runit and other script based systems and managers"
++
++	default y if GENTOO_LINUX
++
++	depends on GENTOO_LINUX
++
++	select BINFMT_SCRIPT
++
++	help
++		The init system is the first thing that loads after the kernel booted.
++
++		These config settings allow you to select which init systems to support;
++		instead of having to select all the individual settings all over the
++		place, these settings allows you to select all the settings at once.
++
++		This particular setting enables all the known requirements for OpenRC,
++		runit and similar script based systems and managers.
++
++		If you are unsure about this, it is best to leave this setting enabled.
++
++config GENTOO_LINUX_INIT_SYSTEMD
++	bool "systemd"
++
++	default n
++
++	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
++
++	select AUTOFS4_FS
++	select BLK_DEV_BSG
++	select CGROUPS
++	select CHECKPOINT_RESTORE
++	select CRYPTO_HMAC 
++	select CRYPTO_SHA256
++	select CRYPTO_USER_API_HASH
++	select DEVPTS_MULTIPLE_INSTANCES
++	select DMIID if X86_32 || X86_64 || X86
++	select EPOLL
++	select FANOTIFY
++	select FHANDLE
++	select INOTIFY_USER
++	select IPV6
++	select NET
++	select NET_NS
++	select PROC_FS
++	select SECCOMP
++	select SECCOMP_FILTER
++	select SIGNALFD
++	select SYSFS
++	select TIMERFD
++	select TMPFS_POSIX_ACL
++	select TMPFS_XATTR
++
++	select ANON_INODES
++	select BLOCK
++	select EVENTFD
++	select FSNOTIFY
++	select INET
++	select NLATTR
++
++	help
++		The init system is the first thing that loads after the kernel booted.
++
++		These config settings allow you to select which init systems to support;
++		instead of having to select all the individual settings all over the
++		place, these settings allows you to select all the settings at once.
++
++		This particular setting enables all the known requirements for systemd;
++		it also enables suggested optional settings, as the package suggests to.
++
++endmenu
++
++endmenu


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 13:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 13:15 UTC (permalink / raw
  To: gentoo-commits

commit:     9d3382ebfd88adb3163796534092a318f4335150
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 16 11:45:09 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 13:15:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9d3382eb

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 +++
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 ++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/0000_README b/0000_README
index cf32ff2..ad4a3ed 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
+From:   http://www.kernel.org
+Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
new file mode 100644
index 0000000..88c2ec6
--- /dev/null
+++ b/1700_x86-l1tf-config-kvm-build-error-fix.patch
@@ -0,0 +1,40 @@
+From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
+From: Guenter Roeck <linux@roeck-us.net>
+Date: Wed, 15 Aug 2018 08:38:33 -0700
+Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
+From: Guenter Roeck <linux@roeck-us.net>
+
+commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
+
+allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
+
+  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
+
+Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
+Reported-by: Meelis Roos <mroos@linux.ee>
+Cc: Meelis Roos <mroos@linux.ee>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/bugs.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:40 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:40 UTC (permalink / raw
  To: gentoo-commits

commit:     a624cc7caa71cbcdccecb18cc8b3a6a245869c1c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 14 11:39:53 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:39:53 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a624cc7c

proj/linux-patches: Removal of redundant patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ----
 1800_TCA-OPTIONS-sched-fix.patch | 35 -----------------------------------
 2 files changed, 39 deletions(-)

diff --git a/0000_README b/0000_README
index afaac7a..4d0ed54 100644
--- a/0000_README
+++ b/0000_README
@@ -127,10 +127,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1800_TCA-OPTIONS-sched-fix.patch
-From:   https://git.kernel.org
-Desc:   net: sched: Remove TCA_OPTIONS from policy
-
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1800_TCA-OPTIONS-sched-fix.patch b/1800_TCA-OPTIONS-sched-fix.patch
deleted file mode 100644
index f960fac..0000000
--- a/1800_TCA-OPTIONS-sched-fix.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From e72bde6b66299602087c8c2350d36a525e75d06e Mon Sep 17 00:00:00 2001
-From: David Ahern <dsahern@gmail.com>
-Date: Wed, 24 Oct 2018 08:32:49 -0700
-Subject: net: sched: Remove TCA_OPTIONS from policy
-
-Marco reported an error with hfsc:
-root@Calimero:~# tc qdisc add dev eth0 root handle 1:0 hfsc default 1
-Error: Attribute failed policy validation.
-
-Apparently a few implementations pass TCA_OPTIONS as a binary instead
-of nested attribute, so drop TCA_OPTIONS from the policy.
-
-Fixes: 8b4c3cdd9dd8 ("net: sched: Add policy validation for tc attributes")
-Reported-by: Marco Berizzi <pupilla@libero.it>
-Signed-off-by: David Ahern <dsahern@gmail.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
----
- net/sched/sch_api.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
-index 022bca98bde6..ca3b0f46de53 100644
---- a/net/sched/sch_api.c
-+++ b/net/sched/sch_api.c
-@@ -1320,7 +1320,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
- 
- const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
- 	[TCA_KIND]		= { .type = NLA_STRING },
--	[TCA_OPTIONS]		= { .type = NLA_NESTED },
- 	[TCA_RATE]		= { .type = NLA_BINARY,
- 				    .len = sizeof(struct tc_estimator) },
- 	[TCA_STAB]		= { .type = NLA_NESTED },
--- 
-cgit 1.2-0.3.lf.el7
-


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     322321ed98a2fca2bc9414e8806b069ebfdc782f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 13 21:16:56 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:29 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=322321ed

proj/linux-patches: Linux patch 4.18.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1018_linux-4.18.19.patch | 15151 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 15155 insertions(+)

diff --git a/0000_README b/0000_README
index bdc7ee9..afaac7a 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-4.18.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.18
 
+Patch:  1018_linux-4.18.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-4.18.19.patch b/1018_linux-4.18.19.patch
new file mode 100644
index 0000000..40499cf
--- /dev/null
+++ b/1018_linux-4.18.19.patch
@@ -0,0 +1,15151 @@
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 48b424de85bb..cfbc18f0d9c9 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -191,21 +191,11 @@ Currently, the following pairs of encryption modes are supported:
+ 
+ - AES-256-XTS for contents and AES-256-CTS-CBC for filenames
+ - AES-128-CBC for contents and AES-128-CTS-CBC for filenames
+-- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames
+ 
+ It is strongly recommended to use AES-256-XTS for contents encryption.
+ AES-128-CBC was added only for low-powered embedded devices with
+ crypto accelerators such as CAAM or CESA that do not support XTS.
+ 
+-Similarly, Speck128/256 support was only added for older or low-end
+-CPUs which cannot do AES fast enough -- especially ARM CPUs which have
+-NEON instructions but not the Cryptography Extensions -- and for which
+-it would not otherwise be feasible to use encryption at all.  It is
+-not recommended to use Speck on CPUs that have AES instructions.
+-Speck support is only available if it has been enabled in the crypto
+-API via CONFIG_CRYPTO_SPECK.  Also, on ARM platforms, to get
+-acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled.
+-
+ New encryption modes can be added relatively easily, without changes
+ to individual filesystems.  However, authenticated encryption (AE)
+ modes are not currently supported because of the difficulty of dealing
+diff --git a/Documentation/media/uapi/cec/cec-ioc-receive.rst b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+index e964074cd15b..b25e48afaa08 100644
+--- a/Documentation/media/uapi/cec/cec-ioc-receive.rst
++++ b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+@@ -16,10 +16,10 @@ CEC_RECEIVE, CEC_TRANSMIT - Receive or transmit a CEC message
+ Synopsis
+ ========
+ 
+-.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg \*argp )
+     :name: CEC_RECEIVE
+ 
+-.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg \*argp )
+     :name: CEC_TRANSMIT
+ 
+ Arguments
+@@ -272,6 +272,19 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The transmit failed after one or more retries. This status bit is
+ 	mutually exclusive with :ref:`CEC_TX_STATUS_OK <CEC-TX-STATUS-OK>`.
+ 	Other bits can still be set to explain which failures were seen.
++    * .. _`CEC-TX-STATUS-ABORTED`:
++
++      - ``CEC_TX_STATUS_ABORTED``
++      - 0x40
++      - The transmit was aborted due to an HDMI disconnect, or the adapter
++        was unconfigured, or a transmit was interrupted, or the driver
++	returned an error when attempting to start a transmit.
++    * .. _`CEC-TX-STATUS-TIMEOUT`:
++
++      - ``CEC_TX_STATUS_TIMEOUT``
++      - 0x80
++      - The transmit timed out. This should not normally happen and this
++	indicates a driver problem.
+ 
+ 
+ .. tabularcolumns:: |p{5.6cm}|p{0.9cm}|p{11.0cm}|
+@@ -300,6 +313,14 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The message was received successfully but the reply was
+ 	``CEC_MSG_FEATURE_ABORT``. This status is only set if this message
+ 	was the reply to an earlier transmitted message.
++    * .. _`CEC-RX-STATUS-ABORTED`:
++
++      - ``CEC_RX_STATUS_ABORTED``
++      - 0x08
++      - The wait for a reply to an earlier transmitted message was aborted
++        because the HDMI cable was disconnected, the adapter was unconfigured
++	or the :ref:`CEC_TRANSMIT <CEC_RECEIVE>` that waited for a
++	reply was interrupted.
+ 
+ 
+ 
+diff --git a/Documentation/media/uapi/v4l/biblio.rst b/Documentation/media/uapi/v4l/biblio.rst
+index 1cedcfc04327..386d6cf83e9c 100644
+--- a/Documentation/media/uapi/v4l/biblio.rst
++++ b/Documentation/media/uapi/v4l/biblio.rst
+@@ -226,16 +226,6 @@ xvYCC
+ 
+ :author:    International Electrotechnical Commission (http://www.iec.ch)
+ 
+-.. _adobergb:
+-
+-AdobeRGB
+-========
+-
+-
+-:title:     Adobe© RGB (1998) Color Image Encoding Version 2005-05
+-
+-:author:    Adobe Systems Incorporated (http://www.adobe.com)
+-
+ .. _oprgb:
+ 
+ opRGB
+diff --git a/Documentation/media/uapi/v4l/colorspaces-defs.rst b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+index 410907fe9415..f24615544792 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-defs.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - See :ref:`col-rec709`.
+     * - ``V4L2_COLORSPACE_SRGB``
+       - See :ref:`col-srgb`.
+-    * - ``V4L2_COLORSPACE_ADOBERGB``
+-      - See :ref:`col-adobergb`.
++    * - ``V4L2_COLORSPACE_OPRGB``
++      - See :ref:`col-oprgb`.
+     * - ``V4L2_COLORSPACE_BT2020``
+       - See :ref:`col-bt2020`.
+     * - ``V4L2_COLORSPACE_DCI_P3``
+@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - Use the Rec. 709 transfer function.
+     * - ``V4L2_XFER_FUNC_SRGB``
+       - Use the sRGB transfer function.
+-    * - ``V4L2_XFER_FUNC_ADOBERGB``
+-      - Use the AdobeRGB transfer function.
++    * - ``V4L2_XFER_FUNC_OPRGB``
++      - Use the opRGB transfer function.
+     * - ``V4L2_XFER_FUNC_SMPTE240M``
+       - Use the SMPTE 240M transfer function.
+     * - ``V4L2_XFER_FUNC_NONE``
+diff --git a/Documentation/media/uapi/v4l/colorspaces-details.rst b/Documentation/media/uapi/v4l/colorspaces-details.rst
+index b5d551b9cc8f..09fabf4cd412 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-details.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-details.rst
+@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
+ 170M/BT.601. The Y'CbCr quantization is limited range.
+ 
+ 
+-.. _col-adobergb:
++.. _col-oprgb:
+ 
+-Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
++Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
+ ===============================================
+ 
+-The :ref:`adobergb` standard defines the colorspace used by computer
+-graphics that use the AdobeRGB colorspace. This is also known as the
+-:ref:`oprgb` standard. The default transfer function is
+-``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
++The :ref:`oprgb` standard defines the colorspace used by computer
++graphics that use the opRGB colorspace. The default transfer function is
++``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
+ ``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
+ range.
+ 
+@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
+ 
+ .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
+ 
+-.. flat-table:: Adobe RGB Chromaticities
++.. flat-table:: opRGB Chromaticities
+     :header-rows:  1
+     :stub-columns: 0
+     :widths:       1 1 2
+diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
+index a5cb0a8686ac..8813ff9c42b9 100644
+--- a/Documentation/media/videodev2.h.rst.exceptions
++++ b/Documentation/media/videodev2.h.rst.exceptions
+@@ -56,7 +56,8 @@ replace symbol V4L2_MEMORY_USERPTR :c:type:`v4l2_memory`
+ # Documented enum v4l2_colorspace
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_BG :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_M :c:type:`v4l2_colorspace`
+-replace symbol V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
++replace symbol V4L2_COLORSPACE_OPRGB :c:type:`v4l2_colorspace`
++replace define V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_BT2020 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DCI_P3 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DEFAULT :c:type:`v4l2_colorspace`
+@@ -69,7 +70,8 @@ replace symbol V4L2_COLORSPACE_SRGB :c:type:`v4l2_colorspace`
+ 
+ # Documented enum v4l2_xfer_func
+ replace symbol V4L2_XFER_FUNC_709 :c:type:`v4l2_xfer_func`
+-replace symbol V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
++replace symbol V4L2_XFER_FUNC_OPRGB :c:type:`v4l2_xfer_func`
++replace define V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DCI_P3 :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DEFAULT :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_NONE :c:type:`v4l2_xfer_func`
+diff --git a/Makefile b/Makefile
+index 7b35c1ec0427..71642133ba22 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index a0ddf497e8cd..2cb45ddd2ae3 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -354,7 +354,7 @@
+ 				ti,hwmods = "pcie1";
+ 				phys = <&pcie1_phy>;
+ 				phy-names = "pcie-phy0";
+-				ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
++				ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 962af97c1883..aff5d66ae058 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -78,6 +78,22 @@
+ 			compatible = "arm,cortex-a7";
+ 			reg = <1>;
+ 			clock-frequency = <1000000000>;
++			clocks = <&cmu CLK_ARM_CLK>;
++			clock-names = "cpu";
++			#cooling-cells = <2>;
++
++			operating-points = <
++				1000000 1150000
++				900000  1112500
++				800000  1075000
++				700000  1037500
++				600000  1000000
++				500000  962500
++				400000  925000
++				300000  887500
++				200000  850000
++				100000  850000
++			>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-origen.dts b/arch/arm/boot/dts/exynos4210-origen.dts
+index 2ab99f9f3d0a..dd9ec05eb0f7 100644
+--- a/arch/arm/boot/dts/exynos4210-origen.dts
++++ b/arch/arm/boot/dts/exynos4210-origen.dts
+@@ -151,6 +151,8 @@
+ 		reg = <0x66>;
+ 		interrupt-parent = <&gpx0>;
+ 		interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&max8997_irq>;
+ 
+ 		max8997,pmic-buck1-dvs-voltage = <1350000>;
+ 		max8997,pmic-buck2-dvs-voltage = <1100000>;
+@@ -288,6 +290,13 @@
+ 	};
+ };
+ 
++&pinctrl_1 {
++	max8997_irq: max8997-irq {
++		samsung,pins = "gpx0-3", "gpx0-4";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
++};
++
+ &sdhci_0 {
+ 	bus-width = <4>;
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index 88fb47cef9a8..b6091c27f155 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -55,6 +55,19 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0x901>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			clock-latency = <160000>;
++
++			operating-points = <
++				1200000 1250000
++				1000000 1150000
++				800000	1075000
++				500000	975000
++				400000	975000
++				200000	950000
++			>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4412.dtsi b/arch/arm/boot/dts/exynos4412.dtsi
+index 7b43c10c510b..51f72f0327e5 100644
+--- a/arch/arm/boot/dts/exynos4412.dtsi
++++ b/arch/arm/boot/dts/exynos4412.dtsi
+@@ -49,21 +49,30 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA01>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a02 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA02>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a03 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA03>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index 2daf505b3d08..f04adf72b80e 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -54,36 +54,106 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <0>;
+-			clock-frequency = <1700000000>;
+ 			clocks = <&clock CLK_ARM_CLK>;
+ 			clock-names = "cpu";
+-			clock-latency = <140000>;
+-
+-			operating-points = <
+-				1700000 1300000
+-				1600000 1250000
+-				1500000 1225000
+-				1400000 1200000
+-				1300000 1150000
+-				1200000 1125000
+-				1100000 1100000
+-				1000000 1075000
+-				 900000 1050000
+-				 800000 1025000
+-				 700000 1012500
+-				 600000 1000000
+-				 500000  975000
+-				 400000  950000
+-				 300000  937500
+-				 200000  925000
+-			>;
++			operating-points-v2 = <&cpu0_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 		cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <1>;
+-			clock-frequency = <1700000000>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
++		};
++	};
++
++	cpu0_opp_table: opp_table0 {
++		compatible = "operating-points-v2";
++		opp-shared;
++
++		opp-200000000 {
++			opp-hz = /bits/ 64 <200000000>;
++			opp-microvolt = <925000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-300000000 {
++			opp-hz = /bits/ 64 <300000000>;
++			opp-microvolt = <937500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-400000000 {
++			opp-hz = /bits/ 64 <400000000>;
++			opp-microvolt = <950000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-500000000 {
++			opp-hz = /bits/ 64 <500000000>;
++			opp-microvolt = <975000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-600000000 {
++			opp-hz = /bits/ 64 <600000000>;
++			opp-microvolt = <1000000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-700000000 {
++			opp-hz = /bits/ 64 <700000000>;
++			opp-microvolt = <1012500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-800000000 {
++			opp-hz = /bits/ 64 <800000000>;
++			opp-microvolt = <1025000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-900000000 {
++			opp-hz = /bits/ 64 <900000000>;
++			opp-microvolt = <1050000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1000000000 {
++			opp-hz = /bits/ 64 <1000000000>;
++			opp-microvolt = <1075000>;
++			clock-latency-ns = <140000>;
++			opp-suspend;
++		};
++		opp-1100000000 {
++			opp-hz = /bits/ 64 <1100000000>;
++			opp-microvolt = <1100000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1200000000 {
++			opp-hz = /bits/ 64 <1200000000>;
++			opp-microvolt = <1125000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1300000000 {
++			opp-hz = /bits/ 64 <1300000000>;
++			opp-microvolt = <1150000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1400000000 {
++			opp-hz = /bits/ 64 <1400000000>;
++			opp-microvolt = <1200000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1500000000 {
++			opp-hz = /bits/ 64 <1500000000>;
++			opp-microvolt = <1225000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1600000000 {
++			opp-hz = /bits/ 64 <1600000000>;
++			opp-microvolt = <1250000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1700000000 {
++			opp-hz = /bits/ 64 <1700000000>;
++			opp-microvolt = <1300000>;
++			clock-latency-ns = <140000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 791ca15c799e..bd1985694bca 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -601,7 +601,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdr: sdr@ffc25000 {
++		sdr: sdr@ffcfb100 {
+ 			compatible = "altr,sdr-ctl", "syscon";
+ 			reg = <0xffcfb100 0x80>;
+ 		};
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 925d1364727a..b8e69fe282b8 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
+ 	select CRYPTO_BLKCIPHER
+ 	select CRYPTO_CHACHA20
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
+index 8de542c48ade..bd5bceef0605 100644
+--- a/arch/arm/crypto/Makefile
++++ b/arch/arm/crypto/Makefile
+@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
+ obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
+ obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+ 
+ ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
+ ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
+@@ -54,7 +53,6 @@ ghash-arm-ce-y	:= ghash-ce-core.o ghash-ce-glue.o
+ crct10dif-arm-ce-y	:= crct10dif-ce-core.o crct10dif-ce-glue.o
+ crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+ 
+ ifdef REGENERATE_ARM_CRYPTO
+ quiet_cmd_perl = PERL    $@
+diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
+deleted file mode 100644
+index 57caa742016e..000000000000
+--- a/arch/arm/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,434 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-	.fpu		neon
+-
+-	// arguments
+-	ROUND_KEYS	.req	r0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	r1	// int nrounds
+-	DST		.req	r2	// void *dst
+-	SRC		.req	r3	// const void *src
+-	NBYTES		.req	r4	// unsigned int nbytes
+-	TWEAK		.req	r5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	X0		.req	q0
+-	X0_L		.req	d0
+-	X0_H		.req	d1
+-	Y0		.req	q1
+-	Y0_H		.req	d3
+-	X1		.req	q2
+-	X1_L		.req	d4
+-	X1_H		.req	d5
+-	Y1		.req	q3
+-	Y1_H		.req	d7
+-	X2		.req	q4
+-	X2_L		.req	d8
+-	X2_H		.req	d9
+-	Y2		.req	q5
+-	Y2_H		.req	d11
+-	X3		.req	q6
+-	X3_L		.req	d12
+-	X3_H		.req	d13
+-	Y3		.req	q7
+-	Y3_H		.req	d15
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	q8
+-	ROUND_KEY_L	.req	d16
+-	ROUND_KEY_H	.req	d17
+-
+-	// index vector for vtbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	d18
+-
+-	// multiplication table for updating XTS tweaks
+-	GF128MUL_TABLE	.req	d19
+-	GF64MUL_TABLE	.req	d19
+-
+-	// current XTS tweak value(s)
+-	TWEAKV		.req	q10
+-	TWEAKV_L	.req	d20
+-	TWEAKV_H	.req	d21
+-
+-	TMP0		.req	q12
+-	TMP0_L		.req	d24
+-	TMP0_H		.req	d25
+-	TMP1		.req	q13
+-	TMP2		.req	q14
+-	TMP3		.req	q15
+-
+-	.align		4
+-.Lror64_8_table:
+-	.byte		1, 2, 3, 4, 5, 6, 7, 0
+-.Lror32_8_table:
+-	.byte		1, 2, 3, 0, 5, 6, 7, 4
+-.Lrol64_8_table:
+-	.byte		7, 0, 1, 2, 3, 4, 5, 6
+-.Lrol32_8_table:
+-	.byte		3, 0, 1, 2, 7, 4, 5, 6
+-.Lgf128mul_table:
+-	.byte		0, 0x87
+-	.fill		14
+-.Lgf64mul_table:
+-	.byte		0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
+-	.fill		12
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- *
+- * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
+- * the vtbl approach is faster on some processors and the same speed on others.
+- */
+-.macro _speck_round_128bytes	n
+-
+-	// x = ror(x, 8)
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-
+-	// x += y
+-	vadd.u\n	X0, Y0
+-	vadd.u\n	X1, Y1
+-	vadd.u\n	X2, Y2
+-	vadd.u\n	X3, Y3
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// y = rol(y, 3)
+-	vshl.u\n	TMP0, Y0, #3
+-	vshl.u\n	TMP1, Y1, #3
+-	vshl.u\n	TMP2, Y2, #3
+-	vshl.u\n	TMP3, Y3, #3
+-	vsri.u\n	TMP0, Y0, #(\n - 3)
+-	vsri.u\n	TMP1, Y1, #(\n - 3)
+-	vsri.u\n	TMP2, Y2, #(\n - 3)
+-	vsri.u\n	TMP3, Y3, #(\n - 3)
+-
+-	// y ^= x
+-	veor		Y0, TMP0, X0
+-	veor		Y1, TMP1, X1
+-	veor		Y2, TMP2, X2
+-	veor		Y3, TMP3, X3
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n
+-
+-	// y ^= x
+-	veor		TMP0, Y0, X0
+-	veor		TMP1, Y1, X1
+-	veor		TMP2, Y2, X2
+-	veor		TMP3, Y3, X3
+-
+-	// y = ror(y, 3)
+-	vshr.u\n	Y0, TMP0, #3
+-	vshr.u\n	Y1, TMP1, #3
+-	vshr.u\n	Y2, TMP2, #3
+-	vshr.u\n	Y3, TMP3, #3
+-	vsli.u\n	Y0, TMP0, #(\n - 3)
+-	vsli.u\n	Y1, TMP1, #(\n - 3)
+-	vsli.u\n	Y2, TMP2, #(\n - 3)
+-	vsli.u\n	Y3, TMP3, #(\n - 3)
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// x -= y
+-	vsub.u\n	X0, Y0
+-	vsub.u\n	X1, Y1
+-	vsub.u\n	X2, Y2
+-	vsub.u\n	X3, Y3
+-
+-	// x = rol(x, 8);
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-.endm
+-
+-.macro _xts128_precrypt_one	dst_reg, tweak_buf, tmp
+-
+-	// Load the next source block
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current tweak in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next source block with the current tweak
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #63
+-	vshl.u64	TWEAKV, #1
+-	veor		TWEAKV_H, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV_L, \tmp\()_H
+-.endm
+-
+-.macro _xts64_precrypt_two	dst_reg, tweak_buf, tmp
+-
+-	// Load the next two source blocks
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current two tweaks in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next two source blocks with the current two tweaks
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #62
+-	vshl.u64	TWEAKV, #2
+-	vtbl.8		\tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV, \tmp
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, decrypting
+-	push		{r4-r7}
+-	mov		r7, sp
+-
+-	/*
+-	 * The first four parameters were passed in registers r0-r3.  Load the
+-	 * additional parameters, which were passed on the stack.
+-	 */
+-	ldr		NBYTES, [sp, #16]
+-	ldr		TWEAK, [sp, #20]
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
+-	sub		ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
+-	sub		ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for vtbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		r12, =.Lrol\n\()_8_table
+-.else
+-	ldr		r12, =.Lror\n\()_8_table
+-.endif
+-	vld1.8		{ROTATE_TABLE}, [r12:64]
+-
+-	// One-time XTS preparation
+-
+-	/*
+-	 * Allocate stack space to store 128 bytes worth of tweaks.  For
+-	 * performance, this space is aligned to a 16-byte boundary so that we
+-	 * can use the load/store instructions that declare 16-byte alignment.
+-	 * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
+-	 */
+-	sub		r12, sp, #128
+-	bic		r12, #0xf
+-	mov		sp, r12
+-
+-.if \n == 64
+-	// Load first tweak
+-	vld1.8		{TWEAKV}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		r12, =.Lgf128mul_table
+-	vld1.8		{GF128MUL_TABLE}, [r12:64]
+-.else
+-	// Load first tweak
+-	vld1.8		{TWEAKV_L}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		r12, =.Lgf64mul_table
+-	vld1.8		{GF64MUL_TABLE}, [r12:64]
+-
+-	// Calculate second tweak, packing it together with the first
+-	vshr.u64	TMP0_L, TWEAKV_L, #63
+-	vtbl.u8		TMP0_L, {GF64MUL_TABLE}, TMP0_L
+-	vshl.u64	TWEAKV_H, TWEAKV_L, #1
+-	veor		TWEAKV_H, TMP0_L
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	/*
+-	 * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
+-	 * values, and save the tweaks on the stack for later.  Then
+-	 * de-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	mov		r12, sp
+-.if \n == 64
+-	_xts128_precrypt_one	X0, r12, TMP0
+-	_xts128_precrypt_one	Y0, r12, TMP0
+-	_xts128_precrypt_one	X1, r12, TMP0
+-	_xts128_precrypt_one	Y1, r12, TMP0
+-	_xts128_precrypt_one	X2, r12, TMP0
+-	_xts128_precrypt_one	Y2, r12, TMP0
+-	_xts128_precrypt_one	X3, r12, TMP0
+-	_xts128_precrypt_one	Y3, r12, TMP0
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	_xts64_precrypt_two	X0, r12, TMP0
+-	_xts64_precrypt_two	Y0, r12, TMP0
+-	_xts64_precrypt_two	X1, r12, TMP0
+-	_xts64_precrypt_two	Y1, r12, TMP0
+-	_xts64_precrypt_two	X2, r12, TMP0
+-	_xts64_precrypt_two	Y2, r12, TMP0
+-	_xts64_precrypt_two	X3, r12, TMP0
+-	_xts64_precrypt_two	Y3, r12, TMP0
+-	vuzp.32		Y0, X0
+-	vuzp.32		Y1, X1
+-	vuzp.32		Y2, X2
+-	vuzp.32		Y3, X3
+-.endif
+-
+-	// Do the cipher rounds
+-
+-	mov		r12, ROUND_KEYS
+-	mov		r6, NROUNDS
+-
+-.Lnext_round_\@:
+-.if \decrypting
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]
+-	sub		r12, #8
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
+-	sub		r12, #4
+-.endif
+-	_speck_unround_128bytes	\n
+-.else
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]!
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
+-.endif
+-	_speck_round_128bytes	\n
+-.endif
+-	subs		r6, r6, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-.if \n == 64
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	vzip.32		Y0, X0
+-	vzip.32		Y1, X1
+-	vzip.32		Y2, X2
+-	vzip.32		Y3, X3
+-.endif
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks we saved earlier
+-	mov		r12, sp
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X0, TMP0
+-	veor		Y0, TMP1
+-	veor		X1, TMP2
+-	veor		Y1, TMP3
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X2, TMP0
+-	veor		Y2, TMP1
+-	veor		X3, TMP2
+-	veor		Y3, TMP3
+-
+-	// Store the ciphertext in the destination buffer
+-	vst1.8		{X0, Y0}, [DST]!
+-	vst1.8		{X1, Y1}, [DST]!
+-	vst1.8		{X2, Y2}, [DST]!
+-	vst1.8		{X3, Y3}, [DST]!
+-
+-	// Continue if there are more 128-byte chunks remaining, else return
+-	subs		NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak
+-.if \n == 64
+-	vst1.8		{TWEAKV}, [TWEAK]
+-.else
+-	vst1.8		{TWEAKV_L}, [TWEAK]
+-.endif
+-
+-	mov		sp, r7
+-	pop		{r4-r7}
+-	bx		lr
+-.endm
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm/crypto/speck-neon-glue.c b/arch/arm/crypto/speck-neon-glue.c
+deleted file mode 100644
+index f012c3ea998f..000000000000
+--- a/arch/arm/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,288 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Note: the NIST recommendation for XTS only specifies a 128-bit block size,
+- * but a 64-bit version (needed for Speck64) is fairly straightforward; the math
+- * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
+- * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
+- * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
+- * OCB and PMAC"), represented as 0x1B.
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_NEON))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index 67dac595dc72..3989876ab699 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -327,7 +327,7 @@
+ 
+ 		sysmgr: sysmgr@ffd12000 {
+ 			compatible = "altr,sys-mgr", "syscon";
+-			reg = <0xffd12000 0x1000>;
++			reg = <0xffd12000 0x228>;
+ 		};
+ 
+ 		/* Local timer */
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index e3fdb0fd6f70..d51944ff9f91 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS
+ 	select CRYPTO_AES_ARM64
+ 	select CRYPTO_SIMD
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
+index bcafd016618e..7bc4bda6d9c6 100644
+--- a/arch/arm64/crypto/Makefile
++++ b/arch/arm64/crypto/Makefile
+@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+ 
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+-
+ obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
+ aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
+ 
+diff --git a/arch/arm64/crypto/speck-neon-core.S b/arch/arm64/crypto/speck-neon-core.S
+deleted file mode 100644
+index b14463438b09..000000000000
+--- a/arch/arm64/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,352 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-
+-	// arguments
+-	ROUND_KEYS	.req	x0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	w1	// int nrounds
+-	NROUNDS_X	.req	x1
+-	DST		.req	x2	// void *dst
+-	SRC		.req	x3	// const void *src
+-	NBYTES		.req	w4	// unsigned int nbytes
+-	TWEAK		.req	x5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	// (underscores avoid a naming collision with ARM64 registers x0-x3)
+-	X_0		.req	v0
+-	Y_0		.req	v1
+-	X_1		.req	v2
+-	Y_1		.req	v3
+-	X_2		.req	v4
+-	Y_2		.req	v5
+-	X_3		.req	v6
+-	Y_3		.req	v7
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	v8
+-
+-	// index vector for tbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	v9
+-	ROTATE_TABLE_Q	.req	q9
+-
+-	// temporary registers
+-	TMP0		.req	v10
+-	TMP1		.req	v11
+-	TMP2		.req	v12
+-	TMP3		.req	v13
+-
+-	// multiplication table for updating XTS tweaks
+-	GFMUL_TABLE	.req	v14
+-	GFMUL_TABLE_Q	.req	q14
+-
+-	// next XTS tweak value(s)
+-	TWEAKV_NEXT	.req	v15
+-
+-	// XTS tweaks for the blocks currently being encrypted/decrypted
+-	TWEAKV0		.req	v16
+-	TWEAKV1		.req	v17
+-	TWEAKV2		.req	v18
+-	TWEAKV3		.req	v19
+-	TWEAKV4		.req	v20
+-	TWEAKV5		.req	v21
+-	TWEAKV6		.req	v22
+-	TWEAKV7		.req	v23
+-
+-	.align		4
+-.Lror64_8_table:
+-	.octa		0x080f0e0d0c0b0a090007060504030201
+-.Lror32_8_table:
+-	.octa		0x0c0f0e0d080b0a090407060500030201
+-.Lrol64_8_table:
+-	.octa		0x0e0d0c0b0a09080f0605040302010007
+-.Lrol32_8_table:
+-	.octa		0x0e0d0c0f0a09080b0605040702010003
+-.Lgf128mul_table:
+-	.octa		0x00000000000000870000000000000001
+-.Lgf64mul_table:
+-	.octa		0x0000000000000000000000002d361b00
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- * 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64.
+- */
+-.macro _speck_round_128bytes	n, lanes
+-
+-	// x = ror(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-
+-	// x += y
+-	add		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	add		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	add		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	add		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// y = rol(y, 3)
+-	shl		TMP0.\lanes, Y_0.\lanes, #3
+-	shl		TMP1.\lanes, Y_1.\lanes, #3
+-	shl		TMP2.\lanes, Y_2.\lanes, #3
+-	shl		TMP3.\lanes, Y_3.\lanes, #3
+-	sri		TMP0.\lanes, Y_0.\lanes, #(\n - 3)
+-	sri		TMP1.\lanes, Y_1.\lanes, #(\n - 3)
+-	sri		TMP2.\lanes, Y_2.\lanes, #(\n - 3)
+-	sri		TMP3.\lanes, Y_3.\lanes, #(\n - 3)
+-
+-	// y ^= x
+-	eor		Y_0.16b, TMP0.16b, X_0.16b
+-	eor		Y_1.16b, TMP1.16b, X_1.16b
+-	eor		Y_2.16b, TMP2.16b, X_2.16b
+-	eor		Y_3.16b, TMP3.16b, X_3.16b
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n, lanes
+-
+-	// y ^= x
+-	eor		TMP0.16b, Y_0.16b, X_0.16b
+-	eor		TMP1.16b, Y_1.16b, X_1.16b
+-	eor		TMP2.16b, Y_2.16b, X_2.16b
+-	eor		TMP3.16b, Y_3.16b, X_3.16b
+-
+-	// y = ror(y, 3)
+-	ushr		Y_0.\lanes, TMP0.\lanes, #3
+-	ushr		Y_1.\lanes, TMP1.\lanes, #3
+-	ushr		Y_2.\lanes, TMP2.\lanes, #3
+-	ushr		Y_3.\lanes, TMP3.\lanes, #3
+-	sli		Y_0.\lanes, TMP0.\lanes, #(\n - 3)
+-	sli		Y_1.\lanes, TMP1.\lanes, #(\n - 3)
+-	sli		Y_2.\lanes, TMP2.\lanes, #(\n - 3)
+-	sli		Y_3.\lanes, TMP3.\lanes, #(\n - 3)
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// x -= y
+-	sub		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	sub		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	sub		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	sub		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x = rol(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-.endm
+-
+-.macro _next_xts_tweak	next, cur, tmp, n
+-.if \n == 64
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	sshr		\tmp\().2d, \cur\().2d, #63
+-	and		\tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b
+-	shl		\next\().2d, \cur\().2d, #1
+-	ext		\tmp\().16b, \tmp\().16b, \tmp\().16b, #8
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.else
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	ushr		\tmp\().2d, \cur\().2d, #62
+-	shl		\next\().2d, \cur\().2d, #2
+-	tbl		\tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.endif
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, lanes, decrypting
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-	mov		NROUNDS, NROUNDS	/* zero the high 32 bits */
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3
+-	sub		ROUND_KEYS, ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2
+-	sub		ROUND_KEYS, ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for tbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		ROTATE_TABLE_Q, .Lrol\n\()_8_table
+-.else
+-	ldr		ROTATE_TABLE_Q, .Lror\n\()_8_table
+-.endif
+-
+-	// One-time XTS preparation
+-.if \n == 64
+-	// Load first tweak
+-	ld1		{TWEAKV0.16b}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf128mul_table
+-.else
+-	// Load first tweak
+-	ld1		{TWEAKV0.8b}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf64mul_table
+-
+-	// Calculate second tweak, packing it together with the first
+-	ushr		TMP0.2d, TWEAKV0.2d, #63
+-	shl		TMP1.2d, TWEAKV0.2d, #1
+-	tbl		TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b
+-	eor		TMP0.8b, TMP0.8b, TMP1.8b
+-	mov		TWEAKV0.d[1], TMP0.d[0]
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	// Calculate XTS tweaks for next 128 bytes
+-	_next_xts_tweak	TWEAKV1, TWEAKV0, TMP0, \n
+-	_next_xts_tweak	TWEAKV2, TWEAKV1, TMP0, \n
+-	_next_xts_tweak	TWEAKV3, TWEAKV2, TMP0, \n
+-	_next_xts_tweak	TWEAKV4, TWEAKV3, TMP0, \n
+-	_next_xts_tweak	TWEAKV5, TWEAKV4, TMP0, \n
+-	_next_xts_tweak	TWEAKV6, TWEAKV5, TMP0, \n
+-	_next_xts_tweak	TWEAKV7, TWEAKV6, TMP0, \n
+-	_next_xts_tweak	TWEAKV_NEXT, TWEAKV7, TMP0, \n
+-
+-	// Load the next source blocks into {X,Y}[0-3]
+-	ld1		{X_0.16b-Y_1.16b}, [SRC], #64
+-	ld1		{X_2.16b-Y_3.16b}, [SRC], #64
+-
+-	// XOR the source blocks with their XTS tweaks
+-	eor		TMP0.16b, X_0.16b, TWEAKV0.16b
+-	eor		Y_0.16b,  Y_0.16b, TWEAKV1.16b
+-	eor		TMP1.16b, X_1.16b, TWEAKV2.16b
+-	eor		Y_1.16b,  Y_1.16b, TWEAKV3.16b
+-	eor		TMP2.16b, X_2.16b, TWEAKV4.16b
+-	eor		Y_2.16b,  Y_2.16b, TWEAKV5.16b
+-	eor		TMP3.16b, X_3.16b, TWEAKV6.16b
+-	eor		Y_3.16b,  Y_3.16b, TWEAKV7.16b
+-
+-	/*
+-	 * De-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	uzp2		X_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp1		Y_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp2		X_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp1		Y_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp2		X_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp1		Y_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp2		X_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-	uzp1		Y_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-
+-	// Do the cipher rounds
+-	mov		x6, ROUND_KEYS
+-	mov		w7, NROUNDS
+-.Lnext_round_\@:
+-.if \decrypting
+-	ld1r		{ROUND_KEY.\lanes}, [x6]
+-	sub		x6, x6, #( \n / 8 )
+-	_speck_unround_128bytes	\n, \lanes
+-.else
+-	ld1r		{ROUND_KEY.\lanes}, [x6], #( \n / 8 )
+-	_speck_round_128bytes	\n, \lanes
+-.endif
+-	subs		w7, w7, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-	zip1		TMP0.\lanes, Y_0.\lanes, X_0.\lanes
+-	zip2		Y_0.\lanes,  Y_0.\lanes, X_0.\lanes
+-	zip1		TMP1.\lanes, Y_1.\lanes, X_1.\lanes
+-	zip2		Y_1.\lanes,  Y_1.\lanes, X_1.\lanes
+-	zip1		TMP2.\lanes, Y_2.\lanes, X_2.\lanes
+-	zip2		Y_2.\lanes,  Y_2.\lanes, X_2.\lanes
+-	zip1		TMP3.\lanes, Y_3.\lanes, X_3.\lanes
+-	zip2		Y_3.\lanes,  Y_3.\lanes, X_3.\lanes
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks calculated earlier
+-	eor		X_0.16b, TMP0.16b, TWEAKV0.16b
+-	eor		Y_0.16b, Y_0.16b,  TWEAKV1.16b
+-	eor		X_1.16b, TMP1.16b, TWEAKV2.16b
+-	eor		Y_1.16b, Y_1.16b,  TWEAKV3.16b
+-	eor		X_2.16b, TMP2.16b, TWEAKV4.16b
+-	eor		Y_2.16b, Y_2.16b,  TWEAKV5.16b
+-	eor		X_3.16b, TMP3.16b, TWEAKV6.16b
+-	eor		Y_3.16b, Y_3.16b,  TWEAKV7.16b
+-	mov		TWEAKV0.16b, TWEAKV_NEXT.16b
+-
+-	// Store the ciphertext in the destination buffer
+-	st1		{X_0.16b-Y_1.16b}, [DST], #64
+-	st1		{X_2.16b-Y_3.16b}, [DST], #64
+-
+-	// Continue if there are more 128-byte chunks remaining
+-	subs		NBYTES, NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak and return
+-.if \n == 64
+-	st1		{TWEAKV_NEXT.16b}, [TWEAK]
+-.else
+-	st1		{TWEAKV_NEXT.8b}, [TWEAK]
+-.endif
+-	ret
+-.endm
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm64/crypto/speck-neon-glue.c b/arch/arm64/crypto/speck-neon-glue.c
+deleted file mode 100644
+index 6e233aeb4ff4..000000000000
+--- a/arch/arm64/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,282 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- * (64-bit version; based on the 32-bit version)
+- *
+- * Copyright (c) 2018 Google, Inc
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_ASIMD))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index e4103b718a7c..b687c80a9c10 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -847,15 +847,29 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
+ }
+ 
+ static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_IDC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_IDC_SHIFT);
+ }
+ 
+ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_DIC_SHIFT);
+ }
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 28ad8799406f..b0db91eefbde 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -599,7 +599,7 @@ el1_undef:
+ 	inherit_daif	pstate=x23, tmp=x2
+ 	mov	x0, sp
+ 	bl	do_undefinstr
+-	ASM_BUG()
++	kernel_exit 1
+ el1_dbg:
+ 	/*
+ 	 * Debug exception handling
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index d399d459397b..9fa3d69cceaa 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -310,10 +310,12 @@ static int call_undef_hook(struct pt_regs *regs)
+ 	int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
+ 	void __user *pc = (void __user *)instruction_pointer(regs);
+ 
+-	if (!user_mode(regs))
+-		return 1;
+-
+-	if (compat_thumb_mode(regs)) {
++	if (!user_mode(regs)) {
++		__le32 instr_le;
++		if (probe_kernel_address((__force __le32 *)pc, instr_le))
++			goto exit;
++		instr = le32_to_cpu(instr_le);
++	} else if (compat_thumb_mode(regs)) {
+ 		/* 16-bit Thumb instruction */
+ 		__le16 instr_le;
+ 		if (get_user(instr_le, (__le16 __user *)pc))
+@@ -407,6 +409,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+ 		return;
+ 
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
++	BUG_ON(!user_mode(regs));
+ }
+ 
+ void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
+diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
+index 137710f4dac3..5105bb044aa5 100644
+--- a/arch/arm64/lib/Makefile
++++ b/arch/arm64/lib/Makefile
+@@ -12,7 +12,7 @@ lib-y		:= bitops.o clear_user.o delay.o copy_from_user.o	\
+ # when supported by the CPU. Result and argument registers are handled
+ # correctly, based on the function prototype.
+ lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
+-CFLAGS_atomic_ll_sc.o	:= -fcall-used-x0 -ffixed-x1 -ffixed-x2		\
++CFLAGS_atomic_ll_sc.o	:= -ffixed-x1 -ffixed-x2        		\
+ 		   -ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6		\
+ 		   -ffixed-x7 -fcall-saved-x8 -fcall-saved-x9		\
+ 		   -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12	\
+diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
+index a874e54404d1..4d4c76ab0bac 100644
+--- a/arch/m68k/configs/amiga_defconfig
++++ b/arch/m68k/configs/amiga_defconfig
+@@ -650,7 +650,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
+index 8ce39e23aa42..0fd006c19fa3 100644
+--- a/arch/m68k/configs/apollo_defconfig
++++ b/arch/m68k/configs/apollo_defconfig
+@@ -609,7 +609,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
+index 346c4e75edf8..9343e8d5cf60 100644
+--- a/arch/m68k/configs/atari_defconfig
++++ b/arch/m68k/configs/atari_defconfig
+@@ -631,7 +631,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
+index fca9c7aa71a3..a10fff6e7b50 100644
+--- a/arch/m68k/configs/bvme6000_defconfig
++++ b/arch/m68k/configs/bvme6000_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
+index f9eab174915c..db81d8ea9d03 100644
+--- a/arch/m68k/configs/hp300_defconfig
++++ b/arch/m68k/configs/hp300_defconfig
+@@ -611,7 +611,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
+index b52e597899eb..2546617a1147 100644
+--- a/arch/m68k/configs/mac_defconfig
++++ b/arch/m68k/configs/mac_defconfig
+@@ -633,7 +633,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
+index 2a84eeec5b02..dc9b0d885e8b 100644
+--- a/arch/m68k/configs/multi_defconfig
++++ b/arch/m68k/configs/multi_defconfig
+@@ -713,7 +713,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
+index 476e69994340..0d815a375ba0 100644
+--- a/arch/m68k/configs/mvme147_defconfig
++++ b/arch/m68k/configs/mvme147_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
+index 1477cda9146e..0cb8109b4c9e 100644
+--- a/arch/m68k/configs/mvme16x_defconfig
++++ b/arch/m68k/configs/mvme16x_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
+index b3a543dc48a0..e91a1c28bba7 100644
+--- a/arch/m68k/configs/q40_defconfig
++++ b/arch/m68k/configs/q40_defconfig
+@@ -624,7 +624,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
+index d543ed5dfa96..3b2f0914c34f 100644
+--- a/arch/m68k/configs/sun3_defconfig
++++ b/arch/m68k/configs/sun3_defconfig
+@@ -602,7 +602,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
+index a67e54246023..e4365ef4f5ed 100644
+--- a/arch/m68k/configs/sun3x_defconfig
++++ b/arch/m68k/configs/sun3x_defconfig
+@@ -603,7 +603,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+index 75108ec669eb..6c79e8a16a26 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
+ void (*cvmx_override_ipd_port_setup) (int ipd_port);
+ 
+ /* Port count per interface */
+-static int interface_port_count[5];
++static int interface_port_count[9];
+ 
+ /**
+  * Return the number of interfaces the chip has. Each interface
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index fac26ce64b2f..e76e88222a4b 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -262,9 +262,11 @@
+ 	 nop
+ 
+ .Lsmall_fixup\@:
++	.set		reorder
+ 	PTR_SUBU	a2, t1, a0
++	PTR_ADDIU	a2, 1
+ 	jr		ra
+-	 PTR_ADDIU	a2, 1
++	.set		noreorder
+ 
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 1b4732e20137..843825a7e6e2 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -185,7 +185,7 @@
+ 	bv,n	0(%r3)
+ 	nop
+ 	.word	0		/* checksum (will be patched) */
+-	.word	PA(os_hpmc)	/* address of handler */
++	.word	0		/* address of handler */
+ 	.word	0		/* length of handler */
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
+index 781c3b9a3e46..fde654115564 100644
+--- a/arch/parisc/kernel/hpmc.S
++++ b/arch/parisc/kernel/hpmc.S
+@@ -85,7 +85,7 @@ END(hpmc_pim_data)
+ 
+ 	.import intr_save, code
+ 	.align 16
+-ENTRY_CFI(os_hpmc)
++ENTRY(os_hpmc)
+ .os_hpmc:
+ 
+ 	/*
+@@ -302,7 +302,6 @@ os_hpmc_6:
+ 	b .
+ 	nop
+ 	.align 16	/* make function length multiple of 16 bytes */
+-ENDPROC_CFI(os_hpmc)
+ .os_hpmc_end:
+ 
+ 
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 4309ad31a874..2cb35e1e0099 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -827,7 +827,8 @@ void __init initialize_ivt(const void *iva)
+ 	 *    the Length/4 words starting at Address is zero.
+ 	 */
+ 
+-	/* Compute Checksum for HPMC handler */
++	/* Setup IVA and compute checksum for HPMC handler */
++	ivap[6] = (u32)__pa(os_hpmc);
+ 	length = os_hpmc_size;
+ 	ivap[7] = length;
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 2607d2d33405..db6cd857c8c0 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -495,12 +495,8 @@ static void __init map_pages(unsigned long start_vaddr,
+ 						pte = pte_mkhuge(pte);
+ 				}
+ 
+-				if (address >= end_paddr) {
+-					if (force)
+-						break;
+-					else
+-						pte_val(pte) = 0;
+-				}
++				if (address >= end_paddr)
++					break;
+ 
+ 				set_pte(pg_table, pte);
+ 
+diff --git a/arch/powerpc/include/asm/mpic.h b/arch/powerpc/include/asm/mpic.h
+index fad8ddd697ac..0abf2e7fd222 100644
+--- a/arch/powerpc/include/asm/mpic.h
++++ b/arch/powerpc/include/asm/mpic.h
+@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
+ #define	MPIC_REGSET_TSI108		MPIC_REGSET(1)	/* Tsi108/109 PIC */
+ 
+ /* Get the version of primary MPIC */
++#ifdef CONFIG_MPIC
+ extern u32 fsl_mpic_primary_get_version(void);
++#else
++static inline u32 fsl_mpic_primary_get_version(void)
++{
++	return 0;
++}
++#endif
+ 
+ /* Allocate the controller structure and setup the linux irq descs
+  * for the range if interrupts passed in. No HW initialization is
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index 38c5b4764bfe..a74ffd5ad15c 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -97,6 +97,13 @@ static void flush_and_reload_slb(void)
+ 
+ static void flush_erat(void)
+ {
++#ifdef CONFIG_PPC_BOOK3S_64
++	if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
++		flush_and_reload_slb();
++		return;
++	}
++#endif
++	/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
+ 	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 225bc5f91049..03dd2f9d60cf 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -242,13 +242,19 @@ static void cpu_ready_for_interrupts(void)
+ 	}
+ 
+ 	/*
+-	 * Fixup HFSCR:TM based on CPU features. The bit is set by our
+-	 * early asm init because at that point we haven't updated our
+-	 * CPU features from firmware and device-tree. Here we have,
+-	 * so let's do it.
++	 * Set HFSCR:TM based on CPU features:
++	 * In the special case of TM no suspend (P9N DD2.1), Linux is
++	 * told TM is off via the dt-ftrs but told to (partially) use
++	 * it via OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM]
++	 * will be off from dt-ftrs but we need to turn it on for the
++	 * no suspend case.
+ 	 */
+-	if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
+-		mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	if (cpu_has_feature(CPU_FTR_HVMODE)) {
++		if (cpu_has_feature(CPU_FTR_TM_COMP))
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);
++		else
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	}
+ 
+ 	/* Set IR and DR in PACA MSR */
+ 	get_paca()->kernel_msr = MSR_KERNEL;
+diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
+index 1d049c78c82a..2e45e5fbad5b 100644
+--- a/arch/powerpc/mm/hash_native_64.c
++++ b/arch/powerpc/mm/hash_native_64.c
+@@ -115,6 +115,8 @@ static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
+ 	tlbiel_hash_set_isa300(0, is, 0, 2, 1);
+ 
+ 	asm volatile("ptesync": : :"memory");
++
++	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ void hash__tlbiel_all(unsigned int action)
+@@ -140,8 +142,6 @@ void hash__tlbiel_all(unsigned int action)
+ 		tlbiel_all_isa206(POWER7_TLB_SETS, is);
+ 	else
+ 		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
+-
+-	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ static inline unsigned long  ___tlbie(unsigned long vpn, int psize,
+diff --git a/arch/s390/defconfig b/arch/s390/defconfig
+index f40600eb1762..5134c71a4937 100644
+--- a/arch/s390/defconfig
++++ b/arch/s390/defconfig
+@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_DEFLATE=m
+diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c
+index 0859cde36f75..888cc2f166db 100644
+--- a/arch/s390/kernel/sthyi.c
++++ b/arch/s390/kernel/sthyi.c
+@@ -183,17 +183,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
+ static void fill_stsi_mac(struct sthyi_sctns *sctns,
+ 			  struct sysinfo_1_1_1 *sysinfo)
+ {
++	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
++	if (*(u64 *)sctns->mac.infmname != 0)
++		sctns->mac.infmval1 |= MAC_NAME_VLD;
++
+ 	if (stsi(sysinfo, 1, 1, 1))
+ 		return;
+ 
+-	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
+-
+ 	memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
+ 	memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
+ 	memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
+ 	memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
+ 
+-	sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
++	sctns->mac.infmval1 |= MAC_ID_VLD;
+ }
+ 
+ static void fill_stsi_par(struct sthyi_sctns *sctns,
+diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
+index d4e6cd4577e5..bf0e82400358 100644
+--- a/arch/x86/boot/tools/build.c
++++ b/arch/x86/boot/tools/build.c
+@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
+ 		die("Unable to mmap '%s': %m", argv[2]);
+ 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+ 	sys_size = (sz + 15 + 4) / 16;
++#ifdef CONFIG_EFI_STUB
++	/*
++	 * COFF requires minimum 32-byte alignment of sections, and
++	 * adding a signature is problematic without that alignment.
++	 */
++	sys_size = (sys_size + 1) & ~1;
++#endif
+ 
+ 	/* Patch the setup code with the appropriate size parameters */
+ 	buf[0x1f1] = setup_sectors-1;
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index acbe7e8336d8..e4b78f962874 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 	/* Linearize assoc, if not already linear */
+ 	if (req->src->length >= assoclen && req->src->length &&
+ 		(!PageHighMem(sg_page(req->src)) ||
+-			req->src->offset + req->src->length < PAGE_SIZE)) {
++			req->src->offset + req->src->length <= PAGE_SIZE)) {
+ 		scatterwalk_start(&assoc_sg_walk, req->src);
+ 		assoc = scatterwalk_map(&assoc_sg_walk);
+ 	} else {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 64aaa3f5f36c..c8ac84e90d0f 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -220,6 +220,7 @@
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+ #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
++#define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0722b7745382..ccc23203b327 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -176,6 +176,7 @@ enum {
+ 
+ #define DR6_BD		(1 << 13)
+ #define DR6_BS		(1 << 14)
++#define DR6_BT		(1 << 15)
+ #define DR6_RTM		(1 << 16)
+ #define DR6_FIXED_1	0xfffe0ff0
+ #define DR6_INIT	0xffff0ff0
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index f6f6c63da62f..e7c8086e570e 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -215,6 +215,7 @@ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_RETPOLINE_GENERIC,
+ 	SPECTRE_V2_RETPOLINE_AMD,
+ 	SPECTRE_V2_IBRS,
++	SPECTRE_V2_IBRS_ENHANCED,
+ };
+ 
+ /* The Speculative Store Bypass disable variants */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 0af97e51e609..6f293d9a0b07 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
+  */
+ static inline void __flush_tlb_all(void)
+ {
++	/*
++	 * This is to catch users with enabled preemption and the PGE feature
++	 * and don't trigger the warning in __native_flush_tlb().
++	 */
++	VM_WARN_ON_ONCE(preemptible());
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		__flush_tlb_global();
+ 	} else {
+diff --git a/arch/x86/kernel/check.c b/arch/x86/kernel/check.c
+index 33399426793e..cc8258a5378b 100644
+--- a/arch/x86/kernel/check.c
++++ b/arch/x86/kernel/check.c
+@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_period config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
+ 	char *end;
+ 	unsigned size;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_size config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size = memparse(arg, &end);
+ 
+ 	if (*end == '\0')
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 4891a621a752..91e5e086606c 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ 
+-/*
+- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+- * writes to SPEC_CTRL contain whatever reserved bits have been set.
+- */
+-u64 __ro_after_init x86_spec_ctrl_base;
++/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
++u64 x86_spec_ctrl_base;
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
++static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /*
+  * The vendor and possibly platform specific bits which can be modified in
+@@ -141,6 +139,7 @@ static const char *spectre_v2_strings[] = {
+ 	[SPECTRE_V2_RETPOLINE_MINIMAL_AMD]	= "Vulnerable: Minimal AMD ASM retpoline",
+ 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+ 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
++	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+ };
+ 
+ #undef pr_fmt
+@@ -324,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static bool stibp_needed(void)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_NONE)
++		return false;
++
++	if (!boot_cpu_has(X86_FEATURE_STIBP))
++		return false;
++
++	return true;
++}
++
++static void update_stibp_msr(void *info)
++{
++	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++}
++
++void arch_smt_update(void)
++{
++	u64 mask;
++
++	if (!stibp_needed())
++		return;
++
++	mutex_lock(&spec_ctrl_mutex);
++	mask = x86_spec_ctrl_base;
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		mask |= SPEC_CTRL_STIBP;
++	else
++		mask &= ~SPEC_CTRL_STIBP;
++
++	if (mask != x86_spec_ctrl_base) {
++		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
++				cpu_smt_control == CPU_SMT_ENABLED ?
++				"Enabling" : "Disabling");
++		x86_spec_ctrl_base = mask;
++		on_each_cpu(update_stibp_msr, NULL, 1);
++	}
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -343,6 +382,13 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++			mode = SPECTRE_V2_IBRS_ENHANCED;
++			/* Force it so VMEXIT will restore correctly */
++			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++			goto specv2_set_mode;
++		}
+ 		if (IS_ENABLED(CONFIG_RETPOLINE))
+ 			goto retpoline_auto;
+ 		break;
+@@ -380,6 +426,7 @@ retpoline_auto:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ 	}
+ 
++specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -402,12 +449,22 @@ retpoline_auto:
+ 
+ 	/*
+ 	 * Retpoline means the kernel is safe because it has no indirect
+-	 * branches. But firmware isn't, so use IBRS to protect that.
++	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
++	 * speculation around firmware calls only when Enhanced IBRS isn't
++	 * supported.
++	 *
++	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
++	 * the user might select retpoline on the kernel command line and if
++	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
++	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS)) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
++
++	/* Enable STIBP if appropriate */
++	arch_smt_update();
+ }
+ 
+ #undef pr_fmt
+@@ -798,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
++	int ret;
++
+ 	if (!boot_cpu_has_bug(bug))
+ 		return sprintf(buf, "Not affected\n");
+ 
+@@ -815,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+ 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ 			       spectre_v2_module_string());
++		return ret;
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 1ee8ea36af30..79561bfcfa87 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1015,6 +1015,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
++	if (ia32_cap & ARCH_CAP_IBRS_ALL)
++		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
++
+ 	if (x86_match_cpu(cpu_no_meltdown))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+index 749856a2e736..bc3801985d73 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+@@ -2032,6 +2032,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
+ {
+ 	if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
+ 		seq_puts(seq, ",cdp");
++
++	if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
++		seq_puts(seq, ",cdpl2");
++
++	if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
++		seq_puts(seq, ",mba_MBps");
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 23f1691670b6..61a949d84dfa 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		 * thread's fpu state, reconstruct fxstate from the fsave
+ 		 * header. Validate and sanitize the copied state.
+ 		 */
+-		struct fpu *fpu = &tsk->thread.fpu;
+ 		struct user_i387_ia32_struct env;
+ 		int err = 0;
+ 
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 203d398802a3..1467f966cfec 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ 		opt_pre_handler(&op->kp, regs);
+ 		__this_cpu_write(current_kprobe, NULL);
+ 	}
+-	preempt_enable_no_resched();
++	preempt_enable();
+ }
+ NOKPROBE_SYMBOL(optimized_callback);
+ 
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9efe130ea2e6..9fcc3ec3ab78 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3160,10 +3160,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
+ 		}
+ 	} else {
+ 		if (vmcs12->exception_bitmap & (1u << nr)) {
+-			if (nr == DB_VECTOR)
++			if (nr == DB_VECTOR) {
+ 				*exit_qual = vcpu->arch.dr6;
+-			else
++				*exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
++				*exit_qual ^= DR6_RTM;
++			} else {
+ 				*exit_qual = 0;
++			}
+ 			return 1;
+ 		}
+ 	}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 8d6c34fe49be..800de88208d7 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -2063,9 +2063,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ 
+ 	/*
+ 	 * We should perform an IPI and flush all tlbs,
+-	 * but that can deadlock->flush only current cpu:
++	 * but that can deadlock->flush only current cpu.
++	 * Preemption needs to be disabled around __flush_tlb_all() due to
++	 * CR3 reload in __native_flush_tlb().
+ 	 */
++	preempt_disable();
+ 	__flush_tlb_all();
++	preempt_enable();
+ 
+ 	arch_flush_lazy_mmu_mode();
+ }
+diff --git a/arch/x86/platform/olpc/olpc-xo1-rtc.c b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+index a2b4efddd61a..8e7ddd7e313a 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-rtc.c
++++ b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+@@ -16,6 +16,7 @@
+ 
+ #include <asm/msr.h>
+ #include <asm/olpc.h>
++#include <asm/x86_init.h>
+ 
+ static void rtc_wake_on(struct device *dev)
+ {
+@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
+ 	if (r)
+ 		return r;
+ 
++	x86_platform.legacy.rtc = 0;
++
+ 	device_init_wakeup(&xo1_rtc_device.dev, 1);
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index c85d1a88f476..f7f77023288a 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -75,7 +75,7 @@ static void __init init_pvh_bootparams(void)
+ 	 * Version 2.12 supports Xen entry point but we will use default x86/PC
+ 	 * environment (i.e. hardware_subarch 0).
+ 	 */
+-	pvh_bootparams.hdr.version = 0x212;
++	pvh_bootparams.hdr.version = (2 << 8) | 12;
+ 	pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
+ 
+ 	x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
+diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
+index 33a783c77d96..184b36922397 100644
+--- a/arch/x86/xen/platform-pci-unplug.c
++++ b/arch/x86/xen/platform-pci-unplug.c
+@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
+ {
+ 	int r;
+ 
++	/* PVH guests don't have emulated devices. */
++	if (xen_pvh_domain())
++		return;
++
+ 	/* user explicitly requested no unplug */
+ 	if (xen_emul_unplug & XEN_UNPLUG_NEVER)
+ 		return;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index cd97a62394e7..a970a2aa4456 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -9,6 +9,7 @@
+ #include <linux/log2.h>
+ #include <linux/gfp.h>
+ #include <linux/slab.h>
++#include <linux/atomic.h>
+ 
+ #include <asm/paravirt.h>
+ #include <asm/qspinlock.h>
+@@ -21,6 +22,7 @@
+ 
+ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+ static DEFINE_PER_CPU(char *, irq_name);
++static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
+ static bool xen_pvspin = true;
+ 
+ static void xen_qlock_kick(int cpu)
+@@ -40,33 +42,24 @@ static void xen_qlock_kick(int cpu)
+ static void xen_qlock_wait(u8 *byte, u8 val)
+ {
+ 	int irq = __this_cpu_read(lock_kicker_irq);
++	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
+ 
+ 	/* If kicker interrupts not initialized yet, just spin */
+-	if (irq == -1)
++	if (irq == -1 || in_nmi())
+ 		return;
+ 
+-	/* clear pending */
+-	xen_clear_irq_pending(irq);
+-	barrier();
+-
+-	/*
+-	 * We check the byte value after clearing pending IRQ to make sure
+-	 * that we won't miss a wakeup event because of the clearing.
+-	 *
+-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
+-	 * So it is effectively a memory barrier for x86.
+-	 */
+-	if (READ_ONCE(*byte) != val)
+-		return;
++	/* Detect reentry. */
++	atomic_inc(nest_cnt);
+ 
+-	/*
+-	 * If an interrupt happens here, it will leave the wakeup irq
+-	 * pending, which will cause xen_poll_irq() to return
+-	 * immediately.
+-	 */
++	/* If irq pending already and no nested call clear it. */
++	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
++		xen_clear_irq_pending(irq);
++	} else if (READ_ONCE(*byte) == val) {
++		/* Block until irq becomes pending (or a spurious wakeup) */
++		xen_poll_irq(irq);
++	}
+ 
+-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+-	xen_poll_irq(irq);
++	atomic_dec(nest_cnt);
+ }
+ 
+ static irqreturn_t dummy_handler(int irq, void *dev_id)
+diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
+index ca2d3b2bf2af..58722a052f9c 100644
+--- a/arch/x86/xen/xen-pvh.S
++++ b/arch/x86/xen/xen-pvh.S
+@@ -181,7 +181,7 @@ canary:
+ 	.fill 48, 1, 0
+ 
+ early_stack:
+-	.fill 256, 1, 0
++	.fill BOOT_STACK_SIZE, 1, 0
+ early_stack_end:
+ 
+ 	ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 4498c43245e2..681498e5d40a 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1178,10 +1178,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
+ 	st = bfq_entity_service_tree(entity);
+ 	is_in_service = entity == sd->in_service_entity;
+ 
+-	if (is_in_service) {
+-		bfq_calc_finish(entity, entity->service);
++	bfq_calc_finish(entity, entity->service);
++
++	if (is_in_service)
+ 		sd->in_service_entity = NULL;
+-	}
++	else
++		/*
++		 * Non in-service entity: nobody will take care of
++		 * resetting its service counter on expiration. Do it
++		 * now.
++		 */
++		entity->service = 0;
+ 
+ 	if (entity->tree == &st->active)
+ 		bfq_active_extract(st, entity);
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index d1b9dd03da25..1f196cf0aa5d 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	struct bio *bio = *biop;
+-	unsigned int granularity;
+ 	unsigned int op;
+-	int alignment;
+ 	sector_t bs_mask;
+ 
+ 	if (!q)
+@@ -54,38 +52,15 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 	if ((sector | nr_sects) & bs_mask)
+ 		return -EINVAL;
+ 
+-	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+-	granularity = max(q->limits.discard_granularity >> 9, 1U);
+-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
+-
+ 	while (nr_sects) {
+-		unsigned int req_sects;
+-		sector_t end_sect, tmp;
++		unsigned int req_sects = nr_sects;
++		sector_t end_sect;
+ 
+-		/*
+-		 * Issue in chunks of the user defined max discard setting,
+-		 * ensuring that bi_size doesn't overflow
+-		 */
+-		req_sects = min_t(sector_t, nr_sects,
+-					q->limits.max_discard_sectors);
+ 		if (!req_sects)
+ 			goto fail;
+-		if (req_sects > UINT_MAX >> 9)
+-			req_sects = UINT_MAX >> 9;
++		req_sects = min(req_sects, bio_allowed_max_sectors(q));
+ 
+-		/*
+-		 * If splitting a request, and the next starting sector would be
+-		 * misaligned, stop the discard at the previous aligned sector.
+-		 */
+ 		end_sect = sector + req_sects;
+-		tmp = end_sect;
+-		if (req_sects < nr_sects &&
+-		    sector_div(tmp, granularity) != alignment) {
+-			end_sect = end_sect - alignment;
+-			sector_div(end_sect, granularity);
+-			end_sect = end_sect * granularity + alignment;
+-			req_sects = end_sect - sector;
+-		}
+ 
+ 		bio = next_bio(bio, 0, gfp_mask);
+ 		bio->bi_iter.bi_sector = sector;
+@@ -186,7 +161,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Ensure that max_write_same_sectors doesn't overflow bi_size */
+-	max_write_same_sectors = UINT_MAX >> 9;
++	max_write_same_sectors = bio_allowed_max_sectors(q);
+ 
+ 	while (nr_sects) {
+ 		bio = next_bio(bio, 1, gfp_mask);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index aaec38cc37b8..2e042190a4f1 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -27,7 +27,8 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
+ 	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+ 	granularity = max(q->limits.discard_granularity >> 9, 1U);
+ 
+-	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
++	max_discard_sectors = min(q->limits.max_discard_sectors,
++			bio_allowed_max_sectors(q));
+ 	max_discard_sectors -= max_discard_sectors % granularity;
+ 
+ 	if (unlikely(!max_discard_sectors)) {
+diff --git a/block/blk.h b/block/blk.h
+index a8f0f7986cfd..a26a8fb257a4 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -326,6 +326,16 @@ static inline unsigned long blk_rq_deadline(struct request *rq)
+ 	return rq->__deadline & ~0x1UL;
+ }
+ 
++/*
++ * The max size one bio can handle is UINT_MAX becasue bvec_iter.bi_size
++ * is defined as 'unsigned int', meantime it has to aligned to with logical
++ * block size which is the minimum accepted unit by hardware.
++ */
++static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
++{
++	return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
++}
++
+ /*
+  * Internal io_context interface
+  */
+diff --git a/block/bounce.c b/block/bounce.c
+index fd31347b7836..5849535296b9 100644
+--- a/block/bounce.c
++++ b/block/bounce.c
+@@ -31,6 +31,24 @@
+ static struct bio_set bounce_bio_set, bounce_bio_split;
+ static mempool_t page_pool, isa_page_pool;
+ 
++static void init_bounce_bioset(void)
++{
++	static bool bounce_bs_setup;
++	int ret;
++
++	if (bounce_bs_setup)
++		return;
++
++	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
++	BUG_ON(ret);
++	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
++		BUG_ON(1);
++
++	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
++	BUG_ON(ret);
++	bounce_bs_setup = true;
++}
++
+ #if defined(CONFIG_HIGHMEM)
+ static __init int init_emergency_pool(void)
+ {
+@@ -44,14 +62,7 @@ static __init int init_emergency_pool(void)
+ 	BUG_ON(ret);
+ 	pr_info("pool size: %d pages\n", POOL_SIZE);
+ 
+-	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
+-	BUG_ON(ret);
+-	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
+-		BUG_ON(1);
+-
+-	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
+-	BUG_ON(ret);
+-
++	init_bounce_bioset();
+ 	return 0;
+ }
+ 
+@@ -86,6 +97,8 @@ static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
+ 	return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
+ }
+ 
++static DEFINE_MUTEX(isa_mutex);
++
+ /*
+  * gets called "every" time someone init's a queue with BLK_BOUNCE_ISA
+  * as the max address, so check if the pool has already been created.
+@@ -94,14 +107,20 @@ int init_emergency_isa_pool(void)
+ {
+ 	int ret;
+ 
+-	if (mempool_initialized(&isa_page_pool))
++	mutex_lock(&isa_mutex);
++
++	if (mempool_initialized(&isa_page_pool)) {
++		mutex_unlock(&isa_mutex);
+ 		return 0;
++	}
+ 
+ 	ret = mempool_init(&isa_page_pool, ISA_POOL_SIZE, mempool_alloc_pages_isa,
+ 			   mempool_free_pages, (void *) 0);
+ 	BUG_ON(ret);
+ 
+ 	pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE);
++	init_bounce_bioset();
++	mutex_unlock(&isa_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index f3e40ac56d93..59e32623a7ce 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
+ 
+ 	  If unsure, say N.
+ 
+-config CRYPTO_SPECK
+-	tristate "Speck cipher algorithm"
+-	select CRYPTO_ALGAPI
+-	help
+-	  Speck is a lightweight block cipher that is tuned for optimal
+-	  performance in software (rather than hardware).
+-
+-	  Speck may not be as secure as AES, and should only be used on systems
+-	  where AES is not fast enough.
+-
+-	  See also: <https://eprint.iacr.org/2013/404.pdf>
+-
+-	  If unsure, say N.
+-
+ config CRYPTO_TEA
+ 	tristate "TEA, XTEA and XETA cipher algorithms"
+ 	select CRYPTO_ALGAPI
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 6d1d40eeb964..f6a234d08882 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
+ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
+ obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
+ obj-$(CONFIG_CRYPTO_SEED) += seed.o
+-obj-$(CONFIG_CRYPTO_SPECK) += speck.o
+ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
+ obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
+ obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
+diff --git a/crypto/aegis.h b/crypto/aegis.h
+index f1c6900ddb80..405e025fc906 100644
+--- a/crypto/aegis.h
++++ b/crypto/aegis.h
+@@ -21,7 +21,7 @@
+ 
+ union aegis_block {
+ 	__le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)];
+-	u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)];
++	__le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)];
+ 	u8 bytes[AEGIS_BLOCK_SIZE];
+ };
+ 
+@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union aegis_block *dst,
+ 				const union aegis_block *src,
+ 				const union aegis_block *key)
+ {
+-	u32 *d = dst->words32;
+ 	const u8  *s  = src->bytes;
+-	const u32 *k  = key->words32;
+ 	const u32 *t0 = crypto_ft_tab[0];
+ 	const u32 *t1 = crypto_ft_tab[1];
+ 	const u32 *t2 = crypto_ft_tab[2];
+ 	const u32 *t3 = crypto_ft_tab[3];
+ 	u32 d0, d1, d2, d3;
+ 
+-	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0];
+-	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1];
+-	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2];
+-	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3];
++	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]];
++	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]];
++	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]];
++	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]];
+ 
+-	d[0] = d0;
+-	d[1] = d1;
+-	d[2] = d2;
+-	d[3] = d3;
++	dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0];
++	dst->words32[1] = cpu_to_le32(d1) ^ key->words32[1];
++	dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2];
++	dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3];
+ }
+ 
+ #endif /* _CRYPTO_AEGIS_H */
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 954a7064a179..7657bebd060c 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -143,7 +143,12 @@ static inline int get_index128(be128 *block)
+ 		return x + ffz(val);
+ 	}
+ 
+-	return x;
++	/*
++	 * If we get here, then x == 128 and we are incrementing the counter
++	 * from all ones to all zeros. This means we must return index 127, i.e.
++	 * the one corresponding to key2*{ 1,...,1 }.
++	 */
++	return 127;
+ }
+ 
+ static int post_crypt(struct skcipher_request *req)
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 6180b2557836..8f1952d96ebd 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struct morus1280_state *state,
+ 				   struct morus1280_block *tag_xor,
+ 				   u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+ 	struct morus1280_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le64(assocbits);
+-	tmp.words[1] = cpu_to_le64(cryptbits);
++	tmp.words[0] = assoclen * 8;
++	tmp.words[1] = cryptlen * 8;
+ 	tmp.words[2] = 0;
+ 	tmp.words[3] = 0;
+ 
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index 5eede3749e64..6ccb901934c3 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct morus640_state *state,
+ 				  struct morus640_block *tag_xor,
+ 				  u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+-	u32 assocbits_lo = (u32)assocbits;
+-	u32 assocbits_hi = (u32)(assocbits >> 32);
+-	u32 cryptbits_lo = (u32)cryptbits;
+-	u32 cryptbits_hi = (u32)(cryptbits >> 32);
+-
+ 	struct morus640_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le32(assocbits_lo);
+-	tmp.words[1] = cpu_to_le32(assocbits_hi);
+-	tmp.words[2] = cpu_to_le32(cryptbits_lo);
+-	tmp.words[3] = cpu_to_le32(cryptbits_hi);
++	tmp.words[0] = lower_32_bits(assoclen * 8);
++	tmp.words[1] = upper_32_bits(assoclen * 8);
++	tmp.words[2] = lower_32_bits(cryptlen * 8);
++	tmp.words[3] = upper_32_bits(cryptlen * 8);
+ 
+ 	for (i = 0; i < MORUS_BLOCK_WORDS; i++)
+ 		state->s[4].words[i] ^= state->s[0].words[i];
+diff --git a/crypto/speck.c b/crypto/speck.c
+deleted file mode 100644
+index 58aa9f7f91f7..000000000000
+--- a/crypto/speck.c
++++ /dev/null
+@@ -1,307 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Speck: a lightweight block cipher
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Speck has 10 variants, including 5 block sizes.  For now we only implement
+- * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
+- * Speck64/128.   Speck${B}/${K} denotes the variant with a block size of B bits
+- * and a key size of K bits.  The Speck128 variants are believed to be the most
+- * secure variants, and they use the same block size and key sizes as AES.  The
+- * Speck64 variants are less secure, but on 32-bit processors are usually
+- * faster.  The remaining variants (Speck32, Speck48, and Speck96) are even less
+- * secure and/or not as well suited for implementation on either 32-bit or
+- * 64-bit processors, so are omitted.
+- *
+- * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * In a correspondence, the Speck designers have also clarified that the words
+- * should be interpreted in little-endian format, and the words should be
+- * ordered such that the first word of each block is 'y' rather than 'x', and
+- * the first key word (rather than the last) becomes the first round key.
+- */
+-
+-#include <asm/unaligned.h>
+-#include <crypto/speck.h>
+-#include <linux/bitops.h>
+-#include <linux/crypto.h>
+-#include <linux/init.h>
+-#include <linux/module.h>
+-
+-/* Speck128 */
+-
+-static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
+-{
+-	*x = ror64(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol64(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
+-{
+-	*y ^= *x;
+-	*y = ror64(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol64(*x, 8);
+-}
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck128_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
+-
+-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck128_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
+-
+-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	u64 l[3];
+-	u64 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK128_128_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		ctx->nrounds = SPECK128_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[0], &k, i);
+-		}
+-		break;
+-	case SPECK128_192_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		ctx->nrounds = SPECK128_192_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK128_256_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		l[2] = get_unaligned_le64(key + 24);
+-		ctx->nrounds = SPECK128_256_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
+-
+-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Speck64 */
+-
+-static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
+-{
+-	*x = ror32(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol32(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
+-{
+-	*y ^= *x;
+-	*y = ror32(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol32(*x, 8);
+-}
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck64_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
+-
+-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck64_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
+-
+-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	u32 l[3];
+-	u32 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK64_96_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		ctx->nrounds = SPECK64_96_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK64_128_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		l[2] = get_unaligned_le32(key + 12);
+-		ctx->nrounds = SPECK64_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
+-
+-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Algorithm definitions */
+-
+-static struct crypto_alg speck_algs[] = {
+-	{
+-		.cra_name		= "speck128",
+-		.cra_driver_name	= "speck128-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK128_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck128_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK128_128_KEY_SIZE,
+-				.cia_max_keysize	= SPECK128_256_KEY_SIZE,
+-				.cia_setkey		= speck128_setkey,
+-				.cia_encrypt		= speck128_encrypt,
+-				.cia_decrypt		= speck128_decrypt
+-			}
+-		}
+-	}, {
+-		.cra_name		= "speck64",
+-		.cra_driver_name	= "speck64-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK64_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck64_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK64_96_KEY_SIZE,
+-				.cia_max_keysize	= SPECK64_128_KEY_SIZE,
+-				.cia_setkey		= speck64_setkey,
+-				.cia_encrypt		= speck64_encrypt,
+-				.cia_decrypt		= speck64_decrypt
+-			}
+-		}
+-	}
+-};
+-
+-static int __init speck_module_init(void)
+-{
+-	return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_module_exit(void)
+-{
+-	crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_module_init);
+-module_exit(speck_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (generic)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("speck128");
+-MODULE_ALIAS_CRYPTO("speck128-generic");
+-MODULE_ALIAS_CRYPTO("speck64");
+-MODULE_ALIAS_CRYPTO("speck64-generic");
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index d5bcdd905007..ee4f2a175bda 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1097,6 +1097,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
+ 			break;
+ 		}
+ 
++		if (speed[i].klen)
++			crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
++
+ 		pr_info("test%3u "
+ 			"(%5u byte blocks,%5u bytes per update,%4u updates): ",
+ 			i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 11e45352fd0b..1ed03bf6a977 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -3000,18 +3000,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(sm4_tv_template)
+ 		}
+-	}, {
+-		.alg = "ecb(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_tv_template)
+-		}
+-	}, {
+-		.alg = "ecb(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_tv_template)
+-		}
+ 	}, {
+ 		.alg = "ecb(tea)",
+ 		.test = alg_test_skcipher,
+@@ -3539,18 +3527,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(serpent_xts_tv_template)
+ 		}
+-	}, {
+-		.alg = "xts(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_xts_tv_template)
+-		}
+-	}, {
+-		.alg = "xts(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_xts_tv_template)
+-		}
+ 	}, {
+ 		.alg = "xts(twofish)",
+ 		.test = alg_test_skcipher,
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index b950aa234e43..36572c665026 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -10141,744 +10141,6 @@ static const struct cipher_testvec sm4_tv_template[] = {
+ 	}
+ };
+ 
+-/*
+- * Speck test vectors taken from the original paper:
+- * "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * Note that the paper does not make byte and word order clear.  But it was
+- * confirmed with the authors that the intended orders are little endian byte
+- * order and (y, x) word order.  Equivalently, the printed test vectors, when
+- * looking at only the bytes (ignoring the whitespace that divides them into
+- * words), are backwards: the left-most byte is actually the one with the
+- * highest memory address, while the right-most byte is actually the one with
+- * the lowest memory address.
+- */
+-
+-static const struct cipher_testvec speck128_tv_template[] = {
+-	{ /* Speck128/128 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+-		.klen	= 16,
+-		.ptext	= "\x20\x6d\x61\x64\x65\x20\x69\x74"
+-			  "\x20\x65\x71\x75\x69\x76\x61\x6c",
+-		.ctext	= "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
+-			  "\x65\x32\x78\x79\x51\x98\x5d\xa6",
+-		.len	= 16,
+-	}, { /* Speck128/192 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17",
+-		.klen	= 24,
+-		.ptext	= "\x65\x6e\x74\x20\x74\x6f\x20\x43"
+-			  "\x68\x69\x65\x66\x20\x48\x61\x72",
+-		.ctext	= "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
+-			  "\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
+-		.len	= 16,
+-	}, { /* Speck128/256 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+-		.klen	= 32,
+-		.ptext	= "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
+-			  "\x49\x6e\x20\x74\x68\x6f\x73\x65",
+-		.ctext	= "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
+-			  "\x3e\xf5\xc0\x05\x04\x01\x09\x41",
+-		.len	= 16,
+-	},
+-};
+-
+-/*
+- * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck128 as the cipher
+- */
+-static const struct cipher_testvec speck128_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+-			  "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+-			  "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
+-			  "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
+-			  "\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
+-			  "\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
+-			  "\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x21\x52\x84\x15\xd1\xf7\x21\x55"
+-			  "\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
+-			  "\xda\x63\xb2\xf1\x82\xb0\x89\x59"
+-			  "\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
+-			  "\x53\xd0\xed\x2d\x30\xc1\x20\xef"
+-			  "\x70\x67\x5e\xff\x09\x70\xbb\xc1"
+-			  "\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
+-			  "\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
+-			  "\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
+-			  "\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
+-			  "\x19\xc5\x58\x84\x63\xb9\x12\x68"
+-			  "\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
+-			  "\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
+-			  "\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
+-			  "\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
+-			  "\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
+-			  "\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
+-			  "\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
+-			  "\xa7\x49\xa0\x0e\x09\x33\x85\x50"
+-			  "\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
+-			  "\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
+-			  "\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
+-			  "\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
+-			  "\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
+-			  "\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
+-			  "\xa5\x20\xac\x24\x1c\x73\x59\x73"
+-			  "\x58\x61\x3a\x87\x58\xb3\x20\x56"
+-			  "\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
+-			  "\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
+-			  "\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
+-			  "\x09\x35\x71\x50\x65\xac\x92\xe3"
+-			  "\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
+-			  "\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
+-			  "\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
+-			  "\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
+-			  "\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
+-			  "\x2a\x26\xcc\x49\x14\x6d\x55\x01"
+-			  "\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
+-			  "\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
+-			  "\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
+-			  "\x24\xa9\x60\xa4\x97\x85\x86\x2a"
+-			  "\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
+-			  "\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
+-			  "\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
+-			  "\x87\x30\xac\xd5\xea\x73\x49\x10"
+-			  "\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
+-			  "\x66\x02\x35\x3d\x60\x06\x36\x4f"
+-			  "\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
+-			  "\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
+-			  "\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
+-			  "\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
+-			  "\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
+-			  "\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
+-			  "\xbd\x48\x50\xcd\x75\x70\xc4\x62"
+-			  "\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
+-			  "\x51\x66\x02\x69\x04\x97\x36\xd4"
+-			  "\x75\xae\x0b\xa3\x42\xf8\xca\x79"
+-			  "\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
+-			  "\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
+-			  "\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
+-			  "\x03\xe7\x05\x39\xf5\x05\x26\xee"
+-			  "\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
+-			  "\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
+-			  "\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
+-			  "\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
+-			  "\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
+-			  "\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95"
+-			  "\x02\x88\x41\x97\x16\x93\x99\x37"
+-			  "\x51\x05\x82\x09\x74\x94\x45\x92",
+-		.klen	= 64,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
+-			  "\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
+-			  "\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
+-			  "\x92\x99\xde\xd3\x76\xed\xcd\x63"
+-			  "\x64\x3a\x22\x57\xc1\x43\x49\xd4"
+-			  "\x79\x36\x31\x19\x62\xae\x10\x7e"
+-			  "\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
+-			  "\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
+-			  "\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
+-			  "\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
+-			  "\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
+-			  "\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
+-			  "\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
+-			  "\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
+-			  "\x85\x3c\x4f\x26\x64\x85\xbc\x68"
+-			  "\xb0\xe0\x86\x5e\x26\x41\xce\x11"
+-			  "\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
+-			  "\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
+-			  "\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
+-			  "\x14\x4d\xf0\x74\x37\xfd\x07\x25"
+-			  "\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
+-			  "\x1b\x83\x4d\x15\x83\xac\x57\xa0"
+-			  "\xac\xa5\xd0\x38\xef\x19\x56\x53"
+-			  "\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
+-			  "\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
+-			  "\xed\x22\x34\x1c\x5d\xed\x17\x06"
+-			  "\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
+-			  "\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
+-			  "\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
+-			  "\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
+-			  "\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
+-			  "\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
+-			  "\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
+-			  "\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
+-			  "\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
+-			  "\x19\xf5\x94\xf9\xd2\x00\x33\x37"
+-			  "\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
+-			  "\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
+-			  "\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
+-			  "\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
+-			  "\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
+-			  "\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
+-			  "\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
+-			  "\xad\xb7\x73\xf8\x78\x12\xc8\x59"
+-			  "\x17\x80\x4c\x57\x39\xf1\x6d\x80"
+-			  "\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
+-			  "\xec\xce\xb7\xc8\x02\x8a\xed\x53"
+-			  "\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
+-			  "\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
+-			  "\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
+-			  "\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
+-			  "\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
+-			  "\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
+-			  "\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
+-			  "\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
+-			  "\xcc\x1f\x48\x49\x65\x47\x75\xe9"
+-			  "\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
+-			  "\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
+-			  "\xa1\x51\x89\x3b\xeb\x96\x42\xac"
+-			  "\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
+-			  "\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
+-			  "\x66\x8d\x13\xca\xe0\x59\x2a\x00"
+-			  "\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
+-			  "\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+-static const struct cipher_testvec speck64_tv_template[] = {
+-	{ /* Speck64/96 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13",
+-		.klen	= 12,
+-		.ptext	= "\x65\x61\x6e\x73\x20\x46\x61\x74",
+-		.ctext	= "\x6c\x94\x75\x41\xec\x52\x79\x9f",
+-		.len	= 8,
+-	}, { /* Speck64/128 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13\x18\x19\x1a\x1b",
+-		.klen	= 16,
+-		.ptext	= "\x2d\x43\x75\x74\x74\x65\x72\x3b",
+-		.ctext	= "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
+-		.len	= 8,
+-	},
+-};
+-
+-/*
+- * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
+- */
+-static const struct cipher_testvec speck64_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
+-			  "\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
+-			  "\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
+-			  "\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x12\x56\x73\xcd\x15\x87\xa8\x59"
+-			  "\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
+-			  "\xb3\x12\x69\x7e\x36\xeb\x52\xff"
+-			  "\x62\xdd\xba\x90\xb3\xe1\xee\x99",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
+-			  "\x27\x36\xc0\xbf\x5d\xea\x36\x37"
+-			  "\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
+-			  "\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
+-			  "\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
+-			  "\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
+-			  "\x11\xc7\x39\x96\xd0\x95\xf4\x56"
+-			  "\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
+-			  "\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
+-			  "\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
+-			  "\xd5\xd2\x13\x86\x94\x34\xe9\x62"
+-			  "\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
+-			  "\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
+-			  "\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
+-			  "\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
+-			  "\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
+-			  "\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
+-			  "\x29\xe5\xbe\x54\x30\xcb\x46\x95"
+-			  "\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
+-			  "\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
+-			  "\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
+-			  "\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
+-			  "\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
+-			  "\x82\xc0\x37\x27\xfc\x91\xa7\x05"
+-			  "\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
+-			  "\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
+-			  "\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
+-			  "\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
+-			  "\x07\xff\xf3\x72\x74\x48\xb5\x40"
+-			  "\x50\xb5\xdd\x90\x43\x31\x18\x15"
+-			  "\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a"
+-			  "\x29\x93\x90\x8b\xda\x07\xf0\x35"
+-			  "\x6d\x90\x88\x09\x4e\x83\xf5\x5b"
+-			  "\x94\x12\xbb\x33\x27\x1d\x3f\x23"
+-			  "\x51\xa8\x7c\x07\xa2\xae\x77\xa6"
+-			  "\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f"
+-			  "\x66\xdd\xcd\x75\x24\x8b\x33\xf7"
+-			  "\x20\xdb\x83\x9b\x4f\x11\x63\x6e"
+-			  "\xcf\x37\xef\xc9\x11\x01\x5c\x45"
+-			  "\x32\x99\x7c\x3c\x9e\x42\x89\xe3"
+-			  "\x70\x6d\x15\x9f\xb1\xe6\xb6\x05"
+-			  "\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc"
+-			  "\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d"
+-			  "\xa0\xa8\x89\x3b\x73\x39\xa5\x94"
+-			  "\x4c\xa4\xa6\xbb\xa7\x14\x46\x89"
+-			  "\x10\xff\xaf\xef\xca\xdd\x4f\x80"
+-			  "\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7"
+-			  "\x33\xca\x00\x8b\x8b\x3f\xea\xec"
+-			  "\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f"
+-			  "\x22\x31\xe1\x0e\xfe\x5a\x04\xd5"
+-			  "\x64\xa3\xf1\x1a\x76\x28\xcc\x35"
+-			  "\x36\xa7\x0a\x74\xf7\x1c\x44\x9b"
+-			  "\xc7\x1b\x53\x17\x02\xea\xd1\xad"
+-			  "\x13\x51\x73\xc0\xa0\xb2\x05\x32"
+-			  "\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19"
+-			  "\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d"
+-			  "\x59\xda\xee\x1a\x22\x18\xda\x0d"
+-			  "\x88\x0f\x55\x8b\x72\x62\xfd\xc1"
+-			  "\x69\x13\xcd\x0d\x5f\xc1\x09\x52"
+-			  "\xee\xd6\xe3\x84\x4d\xee\xf6\x88"
+-			  "\xaf\x83\xdc\x76\xf4\xc0\x93\x3f"
+-			  "\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54"
+-			  "\x7d\x69\x8d\x00\x62\x77\x0d\x14"
+-			  "\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3"
+-			  "\x50\xf7\x5f\xf4\xc2\xca\x41\x97"
+-			  "\x37\xbe\x75\x74\xcd\xf0\x75\x6e"
+-			  "\x25\x23\x94\xbd\xda\x8d\xb0\xd4",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27",
+-		.klen	= 32,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x55\xed\x71\xd3\x02\x8e\x15\x3b"
+-			  "\xc6\x71\x29\x2d\x3e\x89\x9f\x59"
+-			  "\x68\x6a\xcc\x8a\x56\x97\xf3\x95"
+-			  "\x4e\x51\x08\xda\x2a\xf8\x6f\x3c"
+-			  "\x78\x16\xea\x80\xdb\x33\x75\x94"
+-			  "\xf9\x29\xc4\x2b\x76\x75\x97\xc7"
+-			  "\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b"
+-			  "\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee"
+-			  "\xad\x3c\x76\x7c\xe6\x27\xa2\x2a"
+-			  "\xe4\x66\xe1\xab\xa2\x39\xfc\x7c"
+-			  "\xf5\xec\x32\x74\xa3\xb8\x03\x88"
+-			  "\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f"
+-			  "\x84\x5e\x46\xed\x20\x89\xb6\x44"
+-			  "\x8d\xd0\xed\x54\x47\x16\xbe\x95"
+-			  "\x8a\xb3\x6b\x72\xc4\x32\x52\x13"
+-			  "\x1b\xb0\x82\xbe\xac\xf9\x70\xa6"
+-			  "\x44\x18\xdd\x8c\x6e\xca\x6e\x45"
+-			  "\x8f\x1e\x10\x07\x57\x25\x98\x7b"
+-			  "\x17\x8c\x78\xdd\x80\xa7\xd9\xd8"
+-			  "\x63\xaf\xb9\x67\x57\xfd\xbc\xdb"
+-			  "\x44\xe9\xc5\x65\xd1\xc7\x3b\xff"
+-			  "\x20\xa0\x80\x1a\xc3\x9a\xad\x5e"
+-			  "\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d"
+-			  "\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65"
+-			  "\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a"
+-			  "\x09\x3c\x3d\x71\x7f\x0c\x84\x2a"
+-			  "\xc8\x48\x52\x1a\xc2\xd5\xd6\x78"
+-			  "\x92\x1e\xa0\x90\x2e\xea\xf0\xf3"
+-			  "\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e"
+-			  "\x35\x10\x30\x82\x0d\xe7\xc5\x9b"
+-			  "\xde\x44\x18\xbd\x9f\xd1\x45\xa9"
+-			  "\x7b\x7a\x4a\xad\x35\x65\x27\xca"
+-			  "\xb2\xc3\xd4\x9b\x71\x86\x70\xee"
+-			  "\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf"
+-			  "\xfc\x42\xc8\x31\x59\xbe\x16\x60"
+-			  "\x4f\xf9\xfa\x12\xea\xd0\xa7\x14"
+-			  "\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef"
+-			  "\x52\x7f\x29\x51\x94\x20\x67\x3c"
+-			  "\xd1\xaf\x77\x9f\x22\x5a\x4e\x63"
+-			  "\xe7\xff\x73\x25\xd1\xdd\x96\x8a"
+-			  "\x98\x52\x6d\xf3\xac\x3e\xf2\x18"
+-			  "\x6d\xf6\x0a\x29\xa6\x34\x3d\xed"
+-			  "\xe3\x27\x0d\x9d\x0a\x02\x44\x7e"
+-			  "\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad"
+-			  "\x91\xe6\x4d\x81\x8c\x5c\x59\xaa"
+-			  "\xfb\xeb\x56\x53\xd2\x7d\x4c\x81"
+-			  "\x65\x53\x0f\x41\x11\xbd\x98\x99"
+-			  "\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d"
+-			  "\x84\x98\xf9\x34\xed\x33\x2a\x1f"
+-			  "\x82\xed\xc1\x73\x98\xd3\x02\xdc"
+-			  "\xe6\xc2\x33\x1d\xa2\xb4\xca\x76"
+-			  "\x63\x51\x34\x9d\x96\x12\xae\xce"
+-			  "\x83\xc9\x76\x5e\xa4\x1b\x53\x37"
+-			  "\x17\xd5\xc0\x80\x1d\x62\xf8\x3d"
+-			  "\x54\x27\x74\xbb\x10\x86\x57\x46"
+-			  "\x68\xe1\xed\x14\xe7\x9d\xfc\x84"
+-			  "\x47\xbc\xc2\xf8\x19\x4b\x99\xcf"
+-			  "\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d"
+-			  "\x7b\x4f\x38\x55\x36\x71\x64\xc1"
+-			  "\xfc\x5c\x75\x52\x33\x02\x18\xf8"
+-			  "\x17\xe1\x2b\xc2\x43\x39\xbd\x76"
+-			  "\x9b\x63\x76\x32\x2f\x19\x72\x10"
+-			  "\x9f\x21\x0c\xf1\x66\x50\x7f\xa5"
+-			  "\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+ /* Cast6 test vectors from RFC 2612 */
+ static const struct cipher_testvec cast6_tv_template[] = {
+ 	{
+diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
+index cf4fc0161164..e43cb71b6972 100644
+--- a/drivers/acpi/acpi_lpit.c
++++ b/drivers/acpi/acpi_lpit.c
+@@ -117,11 +117,17 @@ static void lpit_update_residency(struct lpit_residency_info *info,
+ 		if (!info->iomem_addr)
+ 			return;
+ 
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_system_residency_us.attr,
+ 					"cpuidle");
+ 	} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_cpu_residency_us.attr,
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index bf64cfa30feb..969bf8d515c0 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -327,9 +327,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
+ 	{ "INT33FC", },
+ 
+ 	/* Braswell LPSS devices */
++	{ "80862286", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+ 	{ "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
+ 	{ "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
++	{ "808622C0", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
+ 
+ 	/* Broadwell LPSS devices */
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 449d86d39965..fc447410ae4d 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -643,7 +643,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 
+ 	status = acpi_get_type(handle, &acpi_type);
+ 	if (ACPI_FAILURE(status))
+-		return false;
++		return status;
+ 
+ 	switch (acpi_type) {
+ 	case ACPI_TYPE_PROCESSOR:
+@@ -663,11 +663,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 	}
+ 
+ 	processor_validated_ids_update(uid);
+-	return true;
++	return AE_OK;
+ 
+ err:
++	/* Exit on error, but don't abort the namespace walk */
+ 	acpi_handle_info(handle, "Invalid processor object\n");
+-	return false;
++	return AE_OK;
+ 
+ }
+ 
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index e9fb0bf3c8d2..78f9de260d5f 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
++	status = acpi_ut_add_address_range(obj_desc->region.space_id,
++					   obj_desc->region.address,
++					   obj_desc->region.length, node);
++
+ 	/* Now the address and length are valid for this opregion */
+ 
+ 	obj_desc->region.flags |= AOPOBJ_DATA_VALID;
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 0f0bdc9d24c6..314276779f57 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -417,6 +417,7 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 	union acpi_parse_object *op = NULL;	/* current op */
+ 	struct acpi_parse_state *parser_state;
+ 	u8 *aml_op_start = NULL;
++	u8 opcode_length;
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ps_parse_loop, walk_state);
+ 
+@@ -540,8 +541,19 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 						    "Skip parsing opcode %s",
+ 						    acpi_ps_get_opcode_name
+ 						    (walk_state->opcode)));
++
++					/*
++					 * Determine the opcode length before skipping the opcode.
++					 * An opcode can be 1 byte or 2 bytes in length.
++					 */
++					opcode_length = 1;
++					if ((walk_state->opcode & 0xFF00) ==
++					    AML_EXTENDED_OPCODE) {
++						opcode_length = 2;
++					}
+ 					walk_state->parser_state.aml =
+-					    walk_state->aml + 1;
++					    walk_state->aml + opcode_length;
++
+ 					walk_state->parser_state.aml =
+ 					    acpi_ps_get_next_package_end
+ 					    (&walk_state->parser_state);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7c479002e798..c0db96e8a81a 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2456,7 +2456,8 @@ static int ars_get_cap(struct acpi_nfit_desc *acpi_desc,
+ 	return cmd_rc;
+ }
+ 
+-static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa)
++static int ars_start(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa, enum nfit_ars_state req_type)
+ {
+ 	int rc;
+ 	int cmd_rc;
+@@ -2467,7 +2468,7 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa
+ 	memset(&ars_start, 0, sizeof(ars_start));
+ 	ars_start.address = spa->address;
+ 	ars_start.length = spa->length;
+-	if (test_bit(ARS_SHORT, &nfit_spa->ars_state))
++	if (req_type == ARS_REQ_SHORT)
+ 		ars_start.flags = ND_ARS_RETURN_PREV_DATA;
+ 	if (nfit_spa_type(spa) == NFIT_SPA_PM)
+ 		ars_start.type = ND_ARS_PERSISTENT;
+@@ -2524,6 +2525,15 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_region *nd_region = nfit_spa->nd_region;
+ 	struct device *dev;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++	/*
++	 * Only advance the ARS state for ARS runs initiated by the
++	 * kernel, ignore ARS results from BIOS initiated runs for scrub
++	 * completion tracking.
++	 */
++	if (acpi_desc->scrub_spa != nfit_spa)
++		return;
++
+ 	if ((ars_status->address >= spa->address && ars_status->address
+ 				< spa->address + spa->length)
+ 			|| (ars_status->address < spa->address)) {
+@@ -2543,23 +2553,13 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	} else
+ 		return;
+ 
+-	if (test_bit(ARS_DONE, &nfit_spa->ars_state))
+-		return;
+-
+-	if (!test_and_clear_bit(ARS_REQ, &nfit_spa->ars_state))
+-		return;
+-
++	acpi_desc->scrub_spa = NULL;
+ 	if (nd_region) {
+ 		dev = nd_region_dev(nd_region);
+ 		nvdimm_region_notify(nd_region, NVDIMM_REVALIDATE_POISON);
+ 	} else
+ 		dev = acpi_desc->dev;
+-
+-	dev_dbg(dev, "ARS: range %d %s complete\n", spa->range_index,
+-			test_bit(ARS_SHORT, &nfit_spa->ars_state)
+-			? "short" : "long");
+-	clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-	set_bit(ARS_DONE, &nfit_spa->ars_state);
++	dev_dbg(dev, "ARS: range %d complete\n", spa->range_index);
+ }
+ 
+ static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
+@@ -2840,46 +2840,55 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
+ 	return 0;
+ }
+ 
+-static int ars_register(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa,
+-		int *query_rc)
++static int ars_register(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa)
+ {
+-	int rc = *query_rc;
++	int rc;
+ 
+-	if (no_init_ars)
++	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ 
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+-	set_bit(ARS_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+ 
+-	switch (rc) {
++	switch (acpi_nfit_query_poison(acpi_desc)) {
+ 	case 0:
+ 	case -EAGAIN:
+-		rc = ars_start(acpi_desc, nfit_spa);
+-		if (rc == -EBUSY) {
+-			*query_rc = rc;
++		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
++		/* shouldn't happen, try again later */
++		if (rc == -EBUSY)
+ 			break;
+-		} else if (rc == 0) {
+-			rc = acpi_nfit_query_poison(acpi_desc);
+-		} else {
++		if (rc) {
+ 			set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 			break;
+ 		}
+-		if (rc == -EAGAIN)
+-			clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-		else if (rc == 0)
+-			ars_complete(acpi_desc, nfit_spa);
++		clear_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++		rc = acpi_nfit_query_poison(acpi_desc);
++		if (rc)
++			break;
++		acpi_desc->scrub_spa = nfit_spa;
++		ars_complete(acpi_desc, nfit_spa);
++		/*
++		 * If ars_complete() says we didn't complete the
++		 * short scrub, we'll try again with a long
++		 * request.
++		 */
++		acpi_desc->scrub_spa = NULL;
+ 		break;
+ 	case -EBUSY:
++	case -ENOMEM:
+ 	case -ENOSPC:
++		/*
++		 * BIOS was using ARS, wait for it to complete (or
++		 * resources to become available) and then perform our
++		 * own scrubs.
++		 */
+ 		break;
+ 	default:
+ 		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		break;
+ 	}
+ 
+-	if (test_and_clear_bit(ARS_DONE, &nfit_spa->ars_state))
+-		set_bit(ARS_REQ, &nfit_spa->ars_state);
+-
+ 	return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ }
+ 
+@@ -2901,6 +2910,8 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 	struct device *dev = acpi_desc->dev;
+ 	struct nfit_spa *nfit_spa;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++
+ 	if (acpi_desc->cancel)
+ 		return 0;
+ 
+@@ -2924,21 +2935,49 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 
+ 	ars_complete_all(acpi_desc);
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
++		enum nfit_ars_state req_type;
++		int rc;
++
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+-		if (test_bit(ARS_REQ, &nfit_spa->ars_state)) {
+-			int rc = ars_start(acpi_desc, nfit_spa);
+-
+-			clear_bit(ARS_DONE, &nfit_spa->ars_state);
+-			dev = nd_region_dev(nfit_spa->nd_region);
+-			dev_dbg(dev, "ARS: range %d ARS start (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			if (rc == 0 || rc == -EBUSY)
+-				return 1;
+-			dev_err(dev, "ARS: range %d ARS failed (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			set_bit(ARS_FAILED, &nfit_spa->ars_state);
++
++		/* prefer short ARS requests first */
++		if (test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state))
++			req_type = ARS_REQ_SHORT;
++		else if (test_bit(ARS_REQ_LONG, &nfit_spa->ars_state))
++			req_type = ARS_REQ_LONG;
++		else
++			continue;
++		rc = ars_start(acpi_desc, nfit_spa, req_type);
++
++		dev = nd_region_dev(nfit_spa->nd_region);
++		dev_dbg(dev, "ARS: range %d ARS start %s (%d)\n",
++				nfit_spa->spa->range_index,
++				req_type == ARS_REQ_SHORT ? "short" : "long",
++				rc);
++		/*
++		 * Hmm, we raced someone else starting ARS? Try again in
++		 * a bit.
++		 */
++		if (rc == -EBUSY)
++			return 1;
++		if (rc == 0) {
++			dev_WARN_ONCE(dev, acpi_desc->scrub_spa,
++					"scrub start while range %d active\n",
++					acpi_desc->scrub_spa->spa->range_index);
++			clear_bit(req_type, &nfit_spa->ars_state);
++			acpi_desc->scrub_spa = nfit_spa;
++			/*
++			 * Consider this spa last for future scrub
++			 * requests
++			 */
++			list_move_tail(&nfit_spa->list, &acpi_desc->spas);
++			return 1;
+ 		}
++
++		dev_err(dev, "ARS: range %d ARS failed (%d)\n",
++				nfit_spa->spa->range_index, rc);
++		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	}
+ 	return 0;
+ }
+@@ -2994,6 +3033,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_cmd_ars_cap ars_cap;
+ 	int rc;
+ 
++	set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	memset(&ars_cap, 0, sizeof(ars_cap));
+ 	rc = ars_get_cap(acpi_desc, &ars_cap, nfit_spa);
+ 	if (rc < 0)
+@@ -3010,16 +3050,14 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	nfit_spa->clear_err_unit = ars_cap.clear_err_unit;
+ 	acpi_desc->max_ars = max(nfit_spa->max_ars, acpi_desc->max_ars);
+ 	clear_bit(ARS_FAILED, &nfit_spa->ars_state);
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+ }
+ 
+ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ {
+ 	struct nfit_spa *nfit_spa;
+-	int rc, query_rc;
++	int rc;
+ 
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
+-		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+@@ -3028,20 +3066,12 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Reap any results that might be pending before starting new
+-	 * short requests.
+-	 */
+-	query_rc = acpi_nfit_query_poison(acpi_desc);
+-	if (query_rc == 0)
+-		ars_complete_all(acpi_desc);
+-
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+ 			/* register regions and kick off initial ARS run */
+-			rc = ars_register(acpi_desc, nfit_spa, &query_rc);
++			rc = ars_register(acpi_desc, nfit_spa);
+ 			if (rc)
+ 				return rc;
+ 			break;
+@@ -3236,7 +3266,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ 	return 0;
+ }
+ 
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type)
+ {
+ 	struct device *dev = acpi_desc->dev;
+ 	int scheduled = 0, busy = 0;
+@@ -3256,13 +3287,10 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+ 
+-		if (test_and_set_bit(ARS_REQ, &nfit_spa->ars_state))
++		if (test_and_set_bit(req_type, &nfit_spa->ars_state))
+ 			busy++;
+-		else {
+-			if (test_bit(ARS_SHORT, &flags))
+-				set_bit(ARS_SHORT, &nfit_spa->ars_state);
++		else
+ 			scheduled++;
+-		}
+ 	}
+ 	if (scheduled) {
+ 		sched_ars(acpi_desc);
+@@ -3448,10 +3476,11 @@ static void acpi_nfit_update_notify(struct device *dev, acpi_handle handle)
+ static void acpi_nfit_uc_error_notify(struct device *dev, acpi_handle handle)
+ {
+ 	struct acpi_nfit_desc *acpi_desc = dev_get_drvdata(dev);
+-	unsigned long flags = (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON) ?
+-			0 : 1 << ARS_SHORT;
+ 
+-	acpi_nfit_ars_rescan(acpi_desc, flags);
++	if (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON)
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG);
++	else
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_SHORT);
+ }
+ 
+ void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event)
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index a97ff42fe311..02c10de50386 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -118,9 +118,8 @@ enum nfit_dimm_notifiers {
+ };
+ 
+ enum nfit_ars_state {
+-	ARS_REQ,
+-	ARS_DONE,
+-	ARS_SHORT,
++	ARS_REQ_SHORT,
++	ARS_REQ_LONG,
+ 	ARS_FAILED,
+ };
+ 
+@@ -197,6 +196,7 @@ struct acpi_nfit_desc {
+ 	struct device *dev;
+ 	u8 ars_start_flags;
+ 	struct nd_cmd_ars_status *ars_status;
++	struct nfit_spa *scrub_spa;
+ 	struct delayed_work dwork;
+ 	struct list_head list;
+ 	struct kernfs_node *scrub_count_state;
+@@ -251,7 +251,8 @@ struct nfit_blk {
+ 
+ extern struct list_head acpi_descs;
+ extern struct mutex acpi_desc_lock;
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags);
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type);
+ 
+ #ifdef CONFIG_X86_MCE
+ void nfit_mce_register(void);
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 8df9abfa947b..ed73f6fb0779 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -617,15 +617,18 @@ void acpi_os_stall(u32 us)
+ }
+ 
+ /*
+- * Support ACPI 3.0 AML Timer operand
+- * Returns 64-bit free-running, monotonically increasing timer
+- * with 100ns granularity
++ * Support ACPI 3.0 AML Timer operand. Returns a 64-bit free-running,
++ * monotonically increasing timer with 100ns granularity. Do not use
++ * ktime_get() to implement this function because this function may get
++ * called after timekeeping has been suspended. Note: calling this function
++ * after timekeeping has been suspended may lead to unexpected results
++ * because when timekeeping is suspended the jiffies counter is not
++ * incremented. See also timekeeping_suspend().
+  */
+ u64 acpi_os_get_timer(void)
+ {
+-	u64 time_ns = ktime_to_ns(ktime_get());
+-	do_div(time_ns, 100);
+-	return time_ns;
++	return (get_jiffies_64() - INITIAL_JIFFIES) *
++		(ACPI_100NSEC_PER_SEC / HZ);
+ }
+ 
+ acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index d1e26cb599bf..da031b1df6f5 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -338,9 +338,6 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
+ 	return found;
+ }
+ 
+-/* total number of attributes checked by the properties code */
+-#define PPTT_CHECKED_ATTRIBUTES 4
+-
+ /**
+  * update_cache_properties() - Update cacheinfo for the given processor
+  * @this_leaf: Kernel cache info structure being updated
+@@ -357,25 +354,15 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 				    struct acpi_pptt_cache *found_cache,
+ 				    struct acpi_pptt_processor *cpu_node)
+ {
+-	int valid_flags = 0;
+-
+ 	this_leaf->fw_token = cpu_node;
+-	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
+ 		this_leaf->size = found_cache->size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) {
++	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
+ 		this_leaf->coherency_line_size = found_cache->line_size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) {
++	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
+ 		this_leaf->number_of_sets = found_cache->number_of_sets;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
+ 		this_leaf->ways_of_associativity = found_cache->associativity;
+-		valid_flags++;
+-	}
+ 	if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) {
+ 		switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
+ 		case ACPI_PPTT_CACHE_POLICY_WT:
+@@ -402,11 +389,17 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 		}
+ 	}
+ 	/*
+-	 * If the above flags are valid, and the cache type is NOCACHE
+-	 * update the cache type as well.
++	 * If cache type is NOCACHE, then the cache hasn't been specified
++	 * via other mechanisms.  Update the type if a cache type has been
++	 * provided.
++	 *
++	 * Note, we assume such caches are unified based on conventional system
++	 * design and known examples.  Significant work is required elsewhere to
++	 * fully support data/instruction only type caches which are only
++	 * specified in PPTT.
+ 	 */
+ 	if (this_leaf->type == CACHE_TYPE_NOCACHE &&
+-	    valid_flags == PPTT_CHECKED_ATTRIBUTES)
++	    found_cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)
+ 		this_leaf->type = CACHE_TYPE_UNIFIED;
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 99bf0c0394f8..321a9579556d 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4552,6 +4552,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	/* These specific Samsung models/firmware-revs do not handle LPM well */
+ 	{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
+ 	{ "SAMSUNG SSD PM830 mSATA *",  "CXM13D1Q", ATA_HORKAGE_NOLPM, },
++	{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
+ 
+ 	/* devices that don't properly handle queued TRIM commands */
+ 	{ "Micron_M500IT_*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index dfb2c2622e5a..822e3060d834 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
+ 		unit[i].disk = alloc_disk(1);
+ 		if (!unit[i].disk)
+ 			goto Enomem;
++
++		unit[i].disk->queue = blk_init_queue(do_fd_request,
++						     &ataflop_lock);
++		if (!unit[i].disk->queue)
++			goto Enomem;
+ 	}
+ 
+ 	if (UseTrackbuffer < 0)
+@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
+ 		sprintf(unit[i].disk->disk_name, "fd%d", i);
+ 		unit[i].disk->fops = &floppy_fops;
+ 		unit[i].disk->private_data = &unit[i];
+-		unit[i].disk->queue = blk_init_queue(do_fd_request,
+-					&ataflop_lock);
+-		if (!unit[i].disk->queue)
+-			goto Enomem;
+ 		set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
+ 		add_disk(unit[i].disk);
+ 	}
+@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
+ 
+ 	return 0;
+ Enomem:
+-	while (i--) {
+-		struct request_queue *q = unit[i].disk->queue;
++	do {
++		struct gendisk *disk = unit[i].disk;
+ 
+-		put_disk(unit[i].disk);
+-		if (q)
+-			blk_cleanup_queue(q);
+-	}
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(unit[i].disk);
++		}
++	} while (i--);
+ 
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+ 	return -ENOMEM;
+diff --git a/drivers/block/swim.c b/drivers/block/swim.c
+index 0e31884a9519..cbe909c51847 100644
+--- a/drivers/block/swim.c
++++ b/drivers/block/swim.c
+@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
+ 
+ exit_put_disks:
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-	while (drive--)
+-		put_disk(swd->unit[drive].disk);
++	do {
++		struct gendisk *disk = swd->unit[drive].disk;
++
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(disk);
++		}
++	} while (drive--);
+ 	return err;
+ }
+ 
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index b5cedccb5d7d..144df6830b82 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1911,6 +1911,7 @@ static int negotiate_mq(struct blkfront_info *info)
+ 			      GFP_KERNEL);
+ 	if (!info->rinfo) {
+ 		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
++		info->nr_rings = 0;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2475,6 +2476,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
+ 
+ 	dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
+ 
++	if (!info)
++		return 0;
++
+ 	blkif_free(info, 0);
+ 
+ 	mutex_lock(&info->mutex);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 99cde1f9467d..e3e4d929e74f 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -324,6 +324,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ 	{ 0x4103, "BCM4330B1"	},	/* 002.001.003 */
+ 	{ 0x410e, "BCM43341B0"	},	/* 002.001.014 */
+ 	{ 0x4406, "BCM4324B3"	},	/* 002.004.006 */
++	{ 0x6109, "BCM4335C0"	},	/* 003.001.009 */
+ 	{ 0x610c, "BCM4354"	},	/* 003.001.012 */
+ 	{ 0x2122, "BCM4343A0"	},	/* 001.001.034 */
+ 	{ 0x2209, "BCM43430A1"  },	/* 001.002.009 */
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 265d6a6583bc..e33fefd6ceae 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -606,8 +606,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			return;
+ 		}
+@@ -939,8 +940,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->retries_left = SSIF_RECV_RETRIES;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_PART_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_PART_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		}
+ 	}
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 3a3a7a548a85..e8822b3d10e1 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -664,7 +664,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
+ 		return len;
+ 
+ 	err = be32_to_cpu(header->return_code);
+-	if (err != 0 && desc)
++	if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
++	    && desc)
+ 		dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
+ 			desc);
+ 	if (err)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 911475d36800..b150f87f38f5 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -264,7 +264,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
+ 		return -ENOMEM;
+ 	}
+ 
+-	rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
++	rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
+ 	if (rv < 0)
+ 		return rv;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 0a9ebf00be46..e58bfcb1169e 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -32,6 +32,7 @@ struct private_data {
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
++	bool have_static_opps;
+ };
+ 
+ static struct freq_attr *cpufreq_dt_attr[] = {
+@@ -204,6 +205,15 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 		}
+ 	}
+ 
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		ret = -ENOMEM;
++		goto out_put_regulator;
++	}
++
++	priv->reg_name = name;
++	priv->opp_table = opp_table;
++
+ 	/*
+ 	 * Initialize OPP tables for all policy->cpus. They will be shared by
+ 	 * all CPUs which have marked their CPUs shared with OPP bindings.
+@@ -214,7 +224,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 *
+ 	 * OPPs might be populated at runtime, don't check for error here
+ 	 */
+-	dev_pm_opp_of_cpumask_add_table(policy->cpus);
++	if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
++		priv->have_static_opps = true;
+ 
+ 	/*
+ 	 * But we need OPP table to function so if it is not there let's
+@@ -240,19 +251,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 				__func__, ret);
+ 	}
+ 
+-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv) {
+-		ret = -ENOMEM;
+-		goto out_free_opp;
+-	}
+-
+-	priv->reg_name = name;
+-	priv->opp_table = opp_table;
+-
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+-		goto out_free_priv;
++		goto out_free_opp;
+ 	}
+ 
+ 	priv->cpu_dev = cpu_dev;
+@@ -282,10 +284,11 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 
+ out_free_cpufreq_table:
+ 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+-out_free_priv:
+-	kfree(priv);
+ out_free_opp:
+-	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	kfree(priv);
++out_put_regulator:
+ 	if (name)
+ 		dev_pm_opp_put_regulators(opp_table);
+ out_put_clk:
+@@ -300,7 +303,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+ 		dev_pm_opp_put_regulators(priv->opp_table);
+ 
+diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
+index f20f20a77d4d..4268f87e99fc 100644
+--- a/drivers/cpufreq/cpufreq_conservative.c
++++ b/drivers/cpufreq/cpufreq_conservative.c
+@@ -80,8 +80,10 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	 * changed in the meantime, so fall back to current frequency in that
+ 	 * case.
+ 	 */
+-	if (requested_freq > policy->max || requested_freq < policy->min)
++	if (requested_freq > policy->max || requested_freq < policy->min) {
+ 		requested_freq = policy->cur;
++		dbs_info->requested_freq = requested_freq;
++	}
+ 
+ 	freq_step = get_freq_step(cs_tuners, policy);
+ 
+@@ -92,7 +94,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	if (policy_dbs->idle_periods < UINT_MAX) {
+ 		unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
+ 
+-		if (requested_freq > freq_steps)
++		if (requested_freq > policy->min + freq_steps)
+ 			requested_freq -= freq_steps;
+ 		else
+ 			requested_freq = policy->min;
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index 4fb91ba39c36..ce3f9ad7120f 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -70,22 +70,22 @@
+ extern bool caam_little_end;
+ extern bool caam_imx;
+ 
+-#define caam_to_cpu(len)				\
+-static inline u##len caam##len ## _to_cpu(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return le##len ## _to_cpu(val);		\
+-	else						\
+-		return be##len ## _to_cpu(val);		\
++#define caam_to_cpu(len)						\
++static inline u##len caam##len ## _to_cpu(u##len val)			\
++{									\
++	if (caam_little_end)						\
++		return le##len ## _to_cpu((__force __le##len)val);	\
++	else								\
++		return be##len ## _to_cpu((__force __be##len)val);	\
+ }
+ 
+-#define cpu_to_caam(len)				\
+-static inline u##len cpu_to_caam##len(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return cpu_to_le##len(val);		\
+-	else						\
+-		return cpu_to_be##len(val);		\
++#define cpu_to_caam(len)					\
++static inline u##len cpu_to_caam##len(u##len val)		\
++{								\
++	if (caam_little_end)					\
++		return (__force u##len)cpu_to_le##len(val);	\
++	else							\
++		return (__force u##len)cpu_to_be##len(val);	\
+ }
+ 
+ caam_to_cpu(16)
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 85820a2d69d4..987899610b46 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -761,6 +761,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int i, ret;
+ 
++	if (!dev->of_node) {
++		dev_err(dev, "This driver must be probed from devicetree\n");
++		return -EINVAL;
++	}
++
+ 	jzdma = devm_kzalloc(dev, sizeof(*jzdma), GFP_KERNEL);
+ 	if (!jzdma)
+ 		return -ENOMEM;
+diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
+index 4fa4c06c9edb..21a5708985bc 100644
+--- a/drivers/dma/ioat/init.c
++++ b/drivers/dma/ioat/init.c
+@@ -1205,8 +1205,15 @@ static void ioat_shutdown(struct pci_dev *pdev)
+ 
+ 		spin_lock_bh(&ioat_chan->prep_lock);
+ 		set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
+-		del_timer_sync(&ioat_chan->timer);
+ 		spin_unlock_bh(&ioat_chan->prep_lock);
++		/*
++		 * Synchronization rule for del_timer_sync():
++		 *  - The caller must not hold locks which would prevent
++		 *    completion of the timer's handler.
++		 * So prep_lock cannot be held before calling it.
++		 */
++		del_timer_sync(&ioat_chan->timer);
++
+ 		/* this should quiesce then reset */
+ 		ioat_reset_hw(ioat_chan);
+ 	}
+diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
+index 4cf0d4d0cecf..25610286979f 100644
+--- a/drivers/dma/ppc4xx/adma.c
++++ b/drivers/dma/ppc4xx/adma.c
+@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct device_driver *dev, const char *buf,
+ }
+ static DRIVER_ATTR_RW(enable);
+ 
+-static ssize_t poly_store(struct device_driver *dev, char *buf)
++static ssize_t poly_show(struct device_driver *dev, char *buf)
+ {
+ 	ssize_t size = 0;
+ 	u32 reg;
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 18aeabb1d5ee..e2addb2bca29 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_types[] = {
+ 			.dbam_to_cs		= f17_base_addr_to_cs_size,
+ 		}
+ 	},
++	[F17_M10H_CPUS] = {
++		.ctl_name = "F17h_M10h",
++		.f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0,
++		.f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6,
++		.ops = {
++			.early_channel_count	= f17_early_channel_count,
++			.dbam_to_cs		= f17_base_addr_to_cs_size,
++		}
++	},
+ };
+ 
+ /*
+@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ 		break;
+ 
+ 	case 0x17:
++		if (pvt->model >= 0x10 && pvt->model <= 0x2f) {
++			fam_type = &family_types[F17_M10H_CPUS];
++			pvt->ops = &family_types[F17_M10H_CPUS].ops;
++			break;
++		}
+ 		fam_type	= &family_types[F17_CPUS];
+ 		pvt->ops	= &family_types[F17_CPUS].ops;
+ 		break;
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index 1d4b74e9a037..4242f8e39c18 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -115,6 +115,8 @@
+ #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582
+ #define PCI_DEVICE_ID_AMD_17H_DF_F0	0x1460
+ #define PCI_DEVICE_ID_AMD_17H_DF_F6	0x1466
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
+ 
+ /*
+  * Function 1 - Address Map
+@@ -281,6 +283,7 @@ enum amd_families {
+ 	F16_CPUS,
+ 	F16_M30H_CPUS,
+ 	F17_CPUS,
++	F17_M10H_CPUS,
+ 	NUM_FAMILIES,
+ };
+ 
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8e120bf60624..f1d19504a028 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ 	u32 errnum = find_first_bit(&error, 32);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv)
+ 			tp_event = HW_EVENT_ERR_FATAL;
+ 		else
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 4a89c8093307..498d253a3b7e 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2881,6 +2881,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ 		recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/edac/skx_edac.c b/drivers/edac/skx_edac.c
+index fae095162c01..4ba92f1dd0f7 100644
+--- a/drivers/edac/skx_edac.c
++++ b/drivers/edac/skx_edac.c
+@@ -668,7 +668,7 @@ sad_found:
+ 			break;
+ 		case 2:
+ 			lchan = (addr >> shift) % 2;
+-			lchan = (lchan << 1) | ~lchan;
++			lchan = (lchan << 1) | !lchan;
+ 			break;
+ 		case 3:
+ 			lchan = ((addr >> shift) % 2) << 1;
+@@ -959,6 +959,7 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ 	recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
+index 19db5709ae28..898bb9abc41f 100644
+--- a/drivers/firmware/google/coreboot_table.c
++++ b/drivers/firmware/google/coreboot_table.c
+@@ -110,7 +110,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 	if (strncmp(header.signature, "LBIO", sizeof(header.signature))) {
+ 		pr_warn("coreboot_table: coreboot table missing or corrupt!\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto out;
+ 	}
+ 
+ 	ptr_entry = (void *)ptr_header + header.header_bytes;
+@@ -137,7 +138,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 		ptr_entry += entry.size;
+ 	}
+-
++out:
++	iounmap(ptr);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(coreboot_table_init);
+@@ -146,7 +148,6 @@ int coreboot_table_exit(void)
+ {
+ 	if (ptr_header) {
+ 		bus_unregister(&coreboot_bus_type);
+-		iounmap(ptr_header);
+ 		ptr_header = NULL;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
+index 16c7f9f49416..af936dcca659 100644
+--- a/drivers/gpio/gpio-brcmstb.c
++++ b/drivers/gpio/gpio-brcmstb.c
+@@ -664,6 +664,18 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 		struct brcmstb_gpio_bank *bank;
+ 		struct gpio_chip *gc;
+ 
++		/*
++		 * If bank_width is 0, then there is an empty bank in the
++		 * register block. Special handling for this case.
++		 */
++		if (bank_width == 0) {
++			dev_dbg(dev, "Width 0 found: Empty bank @ %d\n",
++				num_banks);
++			num_banks++;
++			gpio_base += MAX_GPIO_PER_BANK;
++			continue;
++		}
++
+ 		bank = devm_kzalloc(dev, sizeof(*bank), GFP_KERNEL);
+ 		if (!bank) {
+ 			err = -ENOMEM;
+@@ -740,9 +752,6 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 			goto fail;
+ 	}
+ 
+-	dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n",
+-			num_banks, priv->gpio_base, gpio_base - 1);
+-
+ 	if (priv->parent_wake_irq && need_wakeup_event)
+ 		pm_wakeup_event(dev, 0);
+ 
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 895741e9cd7d..52ccf1c31855 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -173,6 +173,11 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
+ 		state->crtcs[i].state = NULL;
+ 		state->crtcs[i].old_state = NULL;
+ 		state->crtcs[i].new_state = NULL;
++
++		if (state->crtcs[i].commit) {
++			drm_crtc_commit_put(state->crtcs[i].commit);
++			state->crtcs[i].commit = NULL;
++		}
+ 	}
+ 
+ 	for (i = 0; i < config->num_total_plane; i++) {
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 81e32199d3ef..abca95b970ea 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1384,15 +1384,16 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
+ void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
+ 					  struct drm_atomic_state *old_state)
+ {
+-	struct drm_crtc_state *new_crtc_state;
+ 	struct drm_crtc *crtc;
+ 	int i;
+ 
+-	for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
+-		struct drm_crtc_commit *commit = new_crtc_state->commit;
++	for (i = 0; i < dev->mode_config.num_crtc; i++) {
++		struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
+ 		int ret;
+ 
+-		if (!commit)
++		crtc = old_state->crtcs[i].ptr;
++
++		if (!crtc || !commit)
+ 			continue;
+ 
+ 		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
+@@ -1906,6 +1907,9 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
+ 		drm_crtc_commit_get(commit);
+ 
+ 		commit->abort_completion = true;
++
++		state->crtcs[i].commit = commit;
++		drm_crtc_commit_get(commit);
+ 	}
+ 
+ 	for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 98a36e6c69ad..bd207857a964 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -560,9 +560,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	struct drm_mode_crtc *crtc_req = data;
+ 	struct drm_crtc *crtc;
+ 	struct drm_plane *plane;
+-	struct drm_connector **connector_set = NULL, *connector;
+-	struct drm_framebuffer *fb = NULL;
+-	struct drm_display_mode *mode = NULL;
++	struct drm_connector **connector_set, *connector;
++	struct drm_framebuffer *fb;
++	struct drm_display_mode *mode;
+ 	struct drm_mode_set set;
+ 	uint32_t __user *set_connectors_ptr;
+ 	struct drm_modeset_acquire_ctx ctx;
+@@ -591,6 +591,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	mutex_lock(&crtc->dev->mode_config.mutex);
+ 	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
+ retry:
++	connector_set = NULL;
++	fb = NULL;
++	mode = NULL;
++
+ 	ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
+ 	if (ret)
+ 		goto out;
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 59a11026dceb..45a8ba42c8f4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -1446,8 +1446,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	}
+ 
+ 	/* The CEC module handles HDMI hotplug detection */
+-	cec_np = of_find_compatible_node(np->parent, NULL,
+-					 "mediatek,mt8173-cec");
++	cec_np = of_get_compatible_child(np->parent, "mediatek,mt8173-cec");
+ 	if (!cec_np) {
+ 		dev_err(dev, "Failed to find CEC node\n");
+ 		return -EINVAL;
+@@ -1457,8 +1456,10 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	if (!cec_pdev) {
+ 		dev_err(hdmi->dev, "Waiting for CEC device %pOF\n",
+ 			cec_np);
++		of_node_put(cec_np);
+ 		return -EPROBE_DEFER;
+ 	}
++	of_node_put(cec_np);
+ 	hdmi->cec_dev = &cec_pdev->dev;
+ 
+ 	/*
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index 23872d08308c..a746017fac17 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+ 			if (cmd == HIDIOCGCOLLECTIONINDEX) {
+ 				if (uref->usage_index >= field->maxusage)
+ 					goto inval;
++				uref->usage_index =
++					array_index_nospec(uref->usage_index,
++							   field->maxusage);
+ 			} else if (uref->usage_index >= field->report_count)
+ 				goto inval;
+ 		}
+ 
+-		if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) &&
+-		    (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
+-		     uref->usage_index + uref_multi->num_values > field->report_count))
+-			goto inval;
++		if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) {
++			if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
++			    uref->usage_index + uref_multi->num_values >
++			    field->report_count)
++				goto inval;
++
++			uref->usage_index =
++				array_index_nospec(uref->usage_index,
++						   field->report_count -
++						   uref_multi->num_values);
++		}
+ 
+ 		switch (cmd) {
+ 		case HIDIOCGUSAGE:
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index ad7afa74d365..ff9a1d8e90f7 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3335,6 +3335,7 @@ static void wacom_setup_intuos(struct wacom_wac *wacom_wac)
+ 
+ void wacom_setup_device_quirks(struct wacom *wacom)
+ {
++	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ 	struct wacom_features *features = &wacom->wacom_wac.features;
+ 
+ 	/* The pen and pad share the same interface on most devices */
+@@ -3464,6 +3465,24 @@ void wacom_setup_device_quirks(struct wacom *wacom)
+ 
+ 	if (features->type == REMOTE)
+ 		features->device_type |= WACOM_DEVICETYPE_WL_MONITOR;
++
++	/* HID descriptor for DTK-2451 / DTH-2452 claims to report lots
++	 * of things it shouldn't. Lets fix up the damage...
++	 */
++	if (wacom->hdev->product == 0x382 || wacom->hdev->product == 0x37d) {
++		features->quirks &= ~WACOM_QUIRK_TOOLSERIAL;
++		__clear_bit(BTN_TOOL_BRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_PENCIL, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_AIRBRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(ABS_Z, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_DISTANCE, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_X, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_Y, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_WHEEL, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_MISC, wacom_wac->pen_input->absbit);
++		__clear_bit(MSC_SERIAL, wacom_wac->pen_input->mscbit);
++		__clear_bit(EV_MSC, wacom_wac->pen_input->evbit);
++	}
+ }
+ 
+ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0f0e091c117c..c4a1ebcfffb6 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -606,16 +606,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	bool perf_chn = vmbus_devs[dev_type].perf_device;
+ 	struct vmbus_channel *primary = channel->primary_channel;
+ 	int next_node;
+-	struct cpumask available_mask;
++	cpumask_var_t available_mask;
+ 	struct cpumask *alloced_mask;
+ 
+ 	if ((vmbus_proto_version == VERSION_WS2008) ||
+-	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
++	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
++	    !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
+ 		/*
+ 		 * Prior to win8, all channel interrupts are
+ 		 * delivered on cpu 0.
+ 		 * Also if the channel is not a performance critical
+ 		 * channel, bind it to cpu 0.
++		 * In case alloc_cpumask_var() fails, bind it to cpu 0.
+ 		 */
+ 		channel->numa_node = 0;
+ 		channel->target_cpu = 0;
+@@ -653,7 +655,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 		cpumask_clear(alloced_mask);
+ 	}
+ 
+-	cpumask_xor(&available_mask, alloced_mask,
++	cpumask_xor(available_mask, alloced_mask,
+ 		    cpumask_of_node(primary->numa_node));
+ 
+ 	cur_cpu = -1;
+@@ -671,10 +673,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	}
+ 
+ 	while (true) {
+-		cur_cpu = cpumask_next(cur_cpu, &available_mask);
++		cur_cpu = cpumask_next(cur_cpu, available_mask);
+ 		if (cur_cpu >= nr_cpu_ids) {
+ 			cur_cpu = -1;
+-			cpumask_copy(&available_mask,
++			cpumask_copy(available_mask,
+ 				     cpumask_of_node(primary->numa_node));
+ 			continue;
+ 		}
+@@ -704,6 +706,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 
+ 	channel->target_cpu = cur_cpu;
+ 	channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
++
++	free_cpumask_var(available_mask);
+ }
+ 
+ static void vmbus_wait_for_unload(void)
+diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
+index 7718e58dbda5..7688dab32f6e 100644
+--- a/drivers/hwmon/pmbus/pmbus.c
++++ b/drivers/hwmon/pmbus/pmbus.c
+@@ -118,6 +118,8 @@ static int pmbus_identify(struct i2c_client *client,
+ 		} else {
+ 			info->pages = 1;
+ 		}
++
++		pmbus_clear_faults(client);
+ 	}
+ 
+ 	if (pmbus_check_byte_register(client, 0, PMBUS_VOUT_MODE)) {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 82c3754e21e3..2e2b5851139c 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2015,7 +2015,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 	if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK))
+ 		client->flags |= I2C_CLIENT_PEC;
+ 
+-	pmbus_clear_faults(client);
++	if (data->info->pages)
++		pmbus_clear_faults(client);
++	else
++		pmbus_clear_fault_page(client, -1);
+ 
+ 	if (info->identify) {
+ 		ret = (*info->identify)(client, info);
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 7838af58f92d..9d611dd268e1 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -290,9 +290,19 @@ static int pwm_fan_remove(struct platform_device *pdev)
+ static int pwm_fan_suspend(struct device *dev)
+ {
+ 	struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
++	struct pwm_args args;
++	int ret;
++
++	pwm_get_args(ctx->pwm, &args);
++
++	if (ctx->pwm_value) {
++		ret = pwm_config(ctx->pwm, 0, args.period);
++		if (ret < 0)
++			return ret;
+ 
+-	if (ctx->pwm_value)
+ 		pwm_disable(ctx->pwm);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 320d29df17e1..8c1d53f7af83 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -147,6 +147,10 @@ static int etb_enable(struct coresight_device *csdev, u32 mode)
+ 	if (val == CS_MODE_PERF)
+ 		return -EBUSY;
+ 
++	/* Don't let perf disturb sysFS sessions */
++	if (val == CS_MODE_SYSFS && mode == CS_MODE_PERF)
++		return -EBUSY;
++
+ 	/* Nothing to do, the tracer is already enabled. */
+ 	if (val == CS_MODE_SYSFS)
+ 		goto out;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 3c1c817f6968..e152716bf07f 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -812,8 +812,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
+ 				     num * adap->timeout);
+-	if (!time_left) {
++
++	/* cleanup DMA if it couldn't complete properly due to an error */
++	if (priv->dma_direction != DMA_NONE)
+ 		rcar_i2c_cleanup_dma(priv);
++
++	if (!time_left) {
+ 		rcar_i2c_init(priv);
+ 		ret = -ETIMEDOUT;
+ 	} else if (priv->flags & ID_NACK) {
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 44b516863c9d..75d2f73582a3 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *idev = pf->indio_dev;
+ 	struct at91_adc_state *st = iio_priv(idev);
++	struct iio_chan_spec const *chan;
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < idev->masklength; i++) {
+ 		if (!test_bit(i, idev->active_scan_mask))
+ 			continue;
+-		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i));
++		chan = idev->channels + i;
++		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel));
+ 		j++;
+ 	}
+ 
+@@ -279,6 +281,8 @@ static void handle_adc_eoc_trigger(int irq, struct iio_dev *idev)
+ 		iio_trigger_poll(idev->trig);
+ 	} else {
+ 		st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
++		/* Needed to ACK the DRDY interruption */
++		at91_adc_readl(st, AT91_ADC_LCDR);
+ 		st->done = true;
+ 		wake_up_interruptible(&st->wq_data_avail);
+ 	}
+diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
+index ea264fa9e567..929c617db364 100644
+--- a/drivers/iio/adc/fsl-imx25-gcq.c
++++ b/drivers/iio/adc/fsl-imx25-gcq.c
+@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 		ret = of_property_read_u32(child, "reg", &reg);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to get reg property\n");
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (reg >= MX25_NUM_CFGS) {
+ 			dev_err(dev,
+ 				"reg value is greater than the number of available configuration registers\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			if (IS_ERR(priv->vref[refp])) {
+ 				dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.",
+ 					mx25_gcq_refp_names[refp]);
++				of_node_put(child);
+ 				return PTR_ERR(priv->vref[refp]);
+ 			}
+ 			priv->channel_vref_mv[reg] =
+@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			break;
+ 		default:
+ 			dev_err(dev, "Invalid positive reference %d\n", refp);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 
+ 		if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) {
+ 			dev_err(dev, "Invalid fsl,adc-refp property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 		if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) {
+ 			dev_err(dev, "Invalid fsl,adc-refn property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
+index bf4fc40ec84d..2f98cb2a3b96 100644
+--- a/drivers/iio/dac/ad5064.c
++++ b/drivers/iio/dac/ad5064.c
+@@ -808,6 +808,40 @@ static int ad5064_set_config(struct ad5064_state *st, unsigned int val)
+ 	return ad5064_write(st, cmd, 0, val, 0);
+ }
+ 
++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev)
++{
++	unsigned int i;
++	int ret;
++
++	for (i = 0; i < ad5064_num_vref(st); ++i)
++		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++
++	if (!st->chip_info->internal_vref)
++		return devm_regulator_bulk_get(dev, ad5064_num_vref(st),
++					       st->vref_reg);
++
++	/*
++	 * This assumes that when the regulator has an internal VREF
++	 * there is only one external VREF connection, which is
++	 * currently the case for all supported devices.
++	 */
++	st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref");
++	if (!IS_ERR(st->vref_reg[0].consumer))
++		return 0;
++
++	ret = PTR_ERR(st->vref_reg[0].consumer);
++	if (ret != -ENODEV)
++		return ret;
++
++	/* If no external regulator was supplied use the internal VREF */
++	st->use_internal_vref = true;
++	ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
++	if (ret)
++		dev_err(dev, "Failed to enable internal vref: %d\n", ret);
++
++	return ret;
++}
++
+ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 			const char *name, ad5064_write_func write)
+ {
+@@ -828,22 +862,11 @@ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 	st->dev = dev;
+ 	st->write = write;
+ 
+-	for (i = 0; i < ad5064_num_vref(st); ++i)
+-		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++	ret = ad5064_request_vref(st, dev);
++	if (ret)
++		return ret;
+ 
+-	ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st),
+-		st->vref_reg);
+-	if (ret) {
+-		if (!st->chip_info->internal_vref)
+-			return ret;
+-		st->use_internal_vref = true;
+-		ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
+-		if (ret) {
+-			dev_err(dev, "Failed to enable internal vref: %d\n",
+-				ret);
+-			return ret;
+-		}
+-	} else {
++	if (!st->use_internal_vref) {
+ 		ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 31c7efaf8e7a..63406cd212a7 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -516,7 +516,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
+ 	ret = get_perf_mad(p->ibdev, p->port_num, tab_attr->attr_id, &data,
+ 			40 + offset / 8, sizeof(data));
+ 	if (ret < 0)
+-		return sprintf(buf, "N/A (no PMA)\n");
++		return ret;
+ 
+ 	switch (width) {
+ 	case 4:
+@@ -1061,10 +1061,12 @@ static int add_port(struct ib_device *device, int port_num,
+ 		goto err_put;
+ 	}
+ 
+-	p->pma_table = get_counter_table(device, port_num);
+-	ret = sysfs_create_group(&p->kobj, p->pma_table);
+-	if (ret)
+-		goto err_put_gid_attrs;
++	if (device->process_mad) {
++		p->pma_table = get_counter_table(device, port_num);
++		ret = sysfs_create_group(&p->kobj, p->pma_table);
++		if (ret)
++			goto err_put_gid_attrs;
++	}
+ 
+ 	p->gid_group.name  = "gids";
+ 	p->gid_group.attrs = alloc_group_attrs(show_port_gid, attr.gid_tbl_len);
+@@ -1177,7 +1179,8 @@ err_free_gid:
+ 	p->gid_group.attrs = NULL;
+ 
+ err_remove_pma:
+-	sysfs_remove_group(&p->kobj, p->pma_table);
++	if (p->pma_table)
++		sysfs_remove_group(&p->kobj, p->pma_table);
+ 
+ err_put_gid_attrs:
+ 	kobject_put(&p->gid_attr_group->kobj);
+@@ -1289,7 +1292,9 @@ static void free_port_list_attributes(struct ib_device *device)
+ 			kfree(port->hw_stats);
+ 			free_hsag(&port->kobj, port->hw_stats_ag);
+ 		}
+-		sysfs_remove_group(p, port->pma_table);
++
++		if (port->pma_table)
++			sysfs_remove_group(p, port->pma_table);
+ 		sysfs_remove_group(p, &port->pkey_group);
+ 		sysfs_remove_group(p, &port->gid_group);
+ 		sysfs_remove_group(&port->gid_attr_group->kobj,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 6ad0d46ab879..249efa0a6aba 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -360,7 +360,8 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ 	}
+ 
+ 	/* Make sure the HW is stopped! */
+-	bnxt_qplib_nq_stop_irq(nq, true);
++	if (nq->requested)
++		bnxt_qplib_nq_stop_irq(nq, true);
+ 
+ 	if (nq->bar_reg_iomem)
+ 		iounmap(nq->bar_reg_iomem);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 2852d350ada1..6637df77d236 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -309,8 +309,17 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		rcfw->aeq_handler(rcfw, qp_event, qp);
+ 		break;
+ 	default:
+-		/* Command Response */
+-		spin_lock_irqsave(&cmdq->lock, flags);
++		/*
++		 * Command Response
++		 * cmdq->lock needs to be acquired to synchronie
++		 * the command send and completion reaping. This function
++		 * is always called with creq->lock held. Using
++		 * the nested variant of spin_lock.
++		 *
++		 */
++
++		spin_lock_irqsave_nested(&cmdq->lock, flags,
++					 SINGLE_DEPTH_NESTING);
+ 		cookie = le16_to_cpu(qp_event->cookie);
+ 		mcookie = qp_event->cookie;
+ 		blocked = cookie & RCFW_CMD_IS_BLOCKING;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 73339fd47dd8..addd432f3f38 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -691,7 +691,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 		init_completion(&ent->compl);
+ 		INIT_WORK(&ent->work, cache_work_func);
+ 		INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
+-		queue_work(cache->wq, &ent->work);
+ 
+ 		if (i > MR_CACHE_LAST_STD_ENTRY) {
+ 			mlx5_odp_init_mr_cache_entry(ent);
+@@ -711,6 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 			ent->limit = dev->mdev->profile->mr_cache[i].limit;
+ 		else
+ 			ent->limit = 0;
++		queue_work(cache->wq, &ent->work);
+ 	}
+ 
+ 	err = mlx5_mr_cache_debugfs_init(dev);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 01eae67d5a6e..e260f6a156ed 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3264,7 +3264,9 @@ static bool modify_dci_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state new
+ 	int req = IB_QP_STATE;
+ 	int opt = 0;
+ 
+-	if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
++	if (new_state == IB_QPS_RESET) {
++		return is_valid_mask(attr_mask, req, opt);
++	} else if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ 		req |= IB_QP_PKEY_INDEX | IB_QP_PORT;
+ 		return is_valid_mask(attr_mask, req, opt);
+ 	} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 5b57de30dee4..b8104d50b1a0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -682,6 +682,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		rxe_advance_resp_resource(qp);
+ 
+ 		res->type		= RXE_READ_MASK;
++		res->replay		= 0;
+ 
+ 		res->read.va		= qp->resp.va;
+ 		res->read.va_org	= qp->resp.va;
+@@ -752,7 +753,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		state = RESPST_DONE;
+ 	} else {
+ 		qp->resp.res = NULL;
+-		qp->resp.opcode = -1;
++		if (!res->replay)
++			qp->resp.opcode = -1;
+ 		if (psn_compare(res->cur_psn, qp->resp.psn) >= 0)
+ 			qp->resp.psn = res->cur_psn;
+ 		state = RESPST_CLEANUP;
+@@ -814,6 +816,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 
+ 	/* next expected psn, read handles this separately */
+ 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
++	qp->resp.ack_psn = qp->resp.psn;
+ 
+ 	qp->resp.opcode = pkt->opcode;
+ 	qp->resp.status = IB_WC_SUCCESS;
+@@ -1060,7 +1063,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 					  struct rxe_pkt_info *pkt)
+ {
+ 	enum resp_states rc;
+-	u32 prev_psn = (qp->resp.psn - 1) & BTH_PSN_MASK;
++	u32 prev_psn = (qp->resp.ack_psn - 1) & BTH_PSN_MASK;
+ 
+ 	if (pkt->mask & RXE_SEND_MASK ||
+ 	    pkt->mask & RXE_WRITE_MASK) {
+@@ -1103,6 +1106,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 			res->state = (pkt->psn == res->first_psn) ?
+ 					rdatm_res_state_new :
+ 					rdatm_res_state_replay;
++			res->replay = 1;
+ 
+ 			/* Reset the resource, except length. */
+ 			res->read.va_org = iova;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index af1470d29391..332a16dad2a7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -171,6 +171,7 @@ enum rdatm_res_state {
+ 
+ struct resp_res {
+ 	int			type;
++	int			replay;
+ 	u32			first_psn;
+ 	u32			last_psn;
+ 	u32			cur_psn;
+@@ -195,6 +196,7 @@ struct rxe_resp_info {
+ 	enum rxe_qp_state	state;
+ 	u32			msn;
+ 	u32			psn;
++	u32			ack_psn;
+ 	int			opcode;
+ 	int			drop_msg;
+ 	int			goto_error;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index a620701f9d41..1ac2bbc84671 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1439,11 +1439,15 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+ 		netif_tx_unlock_bh(dev);
+ 
+-		if (skb->protocol == htons(ETH_P_IP))
++		if (skb->protocol == htons(ETH_P_IP)) {
++			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
++		}
+ #if IS_ENABLED(CONFIG_IPV6)
+-		else if (skb->protocol == htons(ETH_P_IPV6))
++		else if (skb->protocol == htons(ETH_P_IPV6)) {
++			memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++		}
+ #endif
+ 		dev_kfree_skb_any(skb);
+ 
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 5349e22b5c78..29646004a4a7 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -469,6 +469,9 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ 	bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
+ 	void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	if (stage1) {
+ 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+ 
+@@ -510,6 +513,9 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
+ 	struct arm_smmu_domain *smmu_domain = cookie;
+ 	void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
+ }
+ 
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index b1b47a40a278..faa7d61b9d6c 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -124,6 +124,7 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+ 		pdc_type = PDC_EDGE_DUAL;
++		type = IRQ_TYPE_EDGE_RISING;
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pdc_type = PDC_LEVEL_HIGH;
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index ed9cc977c8b3..f6427e805150 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -1538,13 +1538,14 @@ struct pblk_line *pblk_line_replace_data(struct pblk *pblk)
+ 	struct pblk_line *cur, *new = NULL;
+ 	unsigned int left_seblks;
+ 
+-	cur = l_mg->data_line;
+ 	new = l_mg->data_next;
+ 	if (!new)
+ 		goto out;
+-	l_mg->data_line = new;
+ 
+ 	spin_lock(&l_mg->free_lock);
++	cur = l_mg->data_line;
++	l_mg->data_line = new;
++
+ 	pblk_line_setup_metadata(new, l_mg, &pblk->lm);
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index d83466b3821b..958bda8a69b7 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -956,12 +956,14 @@ next:
+ 		}
+ 	}
+ 
+-	spin_lock(&l_mg->free_lock);
+ 	if (!open_lines) {
++		spin_lock(&l_mg->free_lock);
+ 		WARN_ON_ONCE(!test_and_clear_bit(meta_line,
+ 							&l_mg->meta_bitmap));
++		spin_unlock(&l_mg->free_lock);
+ 		pblk_line_replace_data(pblk);
+ 	} else {
++		spin_lock(&l_mg->free_lock);
+ 		/* Allocate next line for preparation */
+ 		l_mg->data_next = pblk_line_get(pblk);
+ 		if (l_mg->data_next) {
+@@ -969,8 +971,8 @@ next:
+ 			l_mg->data_next->type = PBLK_LINETYPE_DATA;
+ 			is_next = 1;
+ 		}
++		spin_unlock(&l_mg->free_lock);
+ 	}
+-	spin_unlock(&l_mg->free_lock);
+ 
+ 	if (is_next)
+ 		pblk_line_erase(pblk, l_mg->data_next);
+diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
+index 88a0a7c407aa..432f7d94d369 100644
+--- a/drivers/lightnvm/pblk-sysfs.c
++++ b/drivers/lightnvm/pblk-sysfs.c
+@@ -262,8 +262,14 @@ static ssize_t pblk_sysfs_lines(struct pblk *pblk, char *page)
+ 		sec_in_line = l_mg->data_line->sec_in_line;
+ 		meta_weight = bitmap_weight(&l_mg->meta_bitmap,
+ 							PBLK_DATA_LINES);
+-		map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
++
++		spin_lock(&l_mg->data_line->lock);
++		if (l_mg->data_line->map_bitmap)
++			map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
+ 							lm->sec_per_line);
++		else
++			map_weight = 0;
++		spin_unlock(&l_mg->data_line->lock);
+ 	}
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
+index f353e52941f5..89ac60d4849e 100644
+--- a/drivers/lightnvm/pblk-write.c
++++ b/drivers/lightnvm/pblk-write.c
+@@ -417,12 +417,11 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
+ 			rqd->ppa_list[i] = addr_to_gen_ppa(pblk, paddr, id);
+ 	}
+ 
++	spin_lock(&l_mg->close_lock);
+ 	emeta->mem += rq_len;
+-	if (emeta->mem >= lm->emeta_len[0]) {
+-		spin_lock(&l_mg->close_lock);
++	if (emeta->mem >= lm->emeta_len[0])
+ 		list_del(&meta_line->list);
+-		spin_unlock(&l_mg->close_lock);
+-	}
++	spin_unlock(&l_mg->close_lock);
+ 
+ 	pblk_down_page(pblk, rqd->ppa_list, rqd->nr_ppas);
+ 
+@@ -491,14 +490,15 @@ static struct pblk_line *pblk_should_submit_meta_io(struct pblk *pblk,
+ 	struct pblk_line *meta_line;
+ 
+ 	spin_lock(&l_mg->close_lock);
+-retry:
+ 	if (list_empty(&l_mg->emeta_list)) {
+ 		spin_unlock(&l_mg->close_lock);
+ 		return NULL;
+ 	}
+ 	meta_line = list_first_entry(&l_mg->emeta_list, struct pblk_line, list);
+-	if (meta_line->emeta->mem >= lm->emeta_len[0])
+-		goto retry;
++	if (meta_line->emeta->mem >= lm->emeta_len[0]) {
++		spin_unlock(&l_mg->close_lock);
++		return NULL;
++	}
+ 	spin_unlock(&l_mg->close_lock);
+ 
+ 	if (!pblk_valid_meta_ppa(pblk, meta_line, data_rqd))
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 311e91b1a14f..256f18b67e8a 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -461,8 +461,11 @@ static int __init acpi_pcc_probe(void)
+ 	count = acpi_table_parse_entries_array(ACPI_SIG_PCCT,
+ 			sizeof(struct acpi_table_pcct), proc,
+ 			ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES);
+-	if (count == 0 || count > MAX_PCC_SUBSPACES) {
+-		pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
++	if (count <= 0 || count > MAX_PCC_SUBSPACES) {
++		if (count < 0)
++			pr_warn("Error parsing PCC subspaces from PCCT\n");
++		else
++			pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 547c9eedc2f4..d681524f82a4 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2380,7 +2380,7 @@ static int refill_keybuf_fn(struct btree_op *op, struct btree *b,
+ 	struct keybuf *buf = refill->buf;
+ 	int ret = MAP_CONTINUE;
+ 
+-	if (bkey_cmp(k, refill->end) >= 0) {
++	if (bkey_cmp(k, refill->end) > 0) {
+ 		ret = MAP_DONE;
+ 		goto out;
+ 	}
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index ae67f5fa8047..9d2fa1359029 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -843,7 +843,7 @@ static void cached_dev_read_done_bh(struct closure *cl)
+ 
+ 	bch_mark_cache_accounting(s->iop.c, s->d,
+ 				  !s->cache_missed, s->iop.bypass);
+-	trace_bcache_read(s->orig_bio, !s->cache_miss, s->iop.bypass);
++	trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
+ 
+ 	if (s->iop.status)
+ 		continue_at_nobarrier(cl, cached_dev_read_error, bcache_wq);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index fa4058e43202..6e5220554220 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1131,11 +1131,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
+ 	}
+ 
+ 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
+-		bch_sectors_dirty_init(&dc->disk);
+ 		atomic_set(&dc->has_dirty, 1);
+ 		bch_writeback_queue(dc);
+ 	}
+ 
++	bch_sectors_dirty_init(&dc->disk);
++
+ 	bch_cached_dev_run(dc);
+ 	bcache_device_link(&dc->disk, c, "bdev");
+ 
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 225b15aa0340..34819f2c257d 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -263,6 +263,7 @@ STORE(__cached_dev)
+ 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+ 	d_strtoul(writeback_rate_i_term_inverse);
+ 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
++	d_strtoul_nonzero(writeback_rate_minimum);
+ 
+ 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+ 
+@@ -389,6 +390,7 @@ static struct attribute *bch_cached_dev_files[] = {
+ 	&sysfs_writeback_rate_update_seconds,
+ 	&sysfs_writeback_rate_i_term_inverse,
+ 	&sysfs_writeback_rate_p_term_inverse,
++	&sysfs_writeback_rate_minimum,
+ 	&sysfs_writeback_rate_debug,
+ 	&sysfs_errors,
+ 	&sysfs_io_error_limit,
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index b810ea77e6b1..f666778ad237 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1720,8 +1720,7 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_fla
+ }
+ 
+ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
+-		       int ioctl_flags,
+-		       struct dm_ioctl **param, int *param_flags)
++		       int ioctl_flags, struct dm_ioctl **param, int *param_flags)
+ {
+ 	struct dm_ioctl *dmi;
+ 	int secure_data;
+@@ -1762,18 +1761,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 
+ 	*param_flags |= DM_PARAMS_MALLOC;
+ 
+-	if (copy_from_user(dmi, user, param_kernel->data_size))
+-		goto bad;
++	/* Copy from param_kernel (which was already copied from user) */
++	memcpy(dmi, param_kernel, minimum_data_size);
+ 
+-data_copied:
+-	/*
+-	 * Abort if something changed the ioctl data while it was being copied.
+-	 */
+-	if (dmi->data_size != param_kernel->data_size) {
+-		DMERR("rejecting ioctl: data size modified while processing parameters");
++	if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
++			   param_kernel->data_size - minimum_data_size))
+ 		goto bad;
+-	}
+-
++data_copied:
+ 	/* Wipe the user buffer so we do not return it to userspace */
+ 	if (secure_data && clear_user(user, param_kernel->data_size))
+ 		goto bad;
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 969954915566..fa68336560c3 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -99,7 +99,7 @@ struct dmz_mblock {
+ 	struct rb_node		node;
+ 	struct list_head	link;
+ 	sector_t		no;
+-	atomic_t		ref;
++	unsigned int		ref;
+ 	unsigned long		state;
+ 	struct page		*page;
+ 	void			*data;
+@@ -296,7 +296,7 @@ static struct dmz_mblock *dmz_alloc_mblock(struct dmz_metadata *zmd,
+ 
+ 	RB_CLEAR_NODE(&mblk->node);
+ 	INIT_LIST_HEAD(&mblk->link);
+-	atomic_set(&mblk->ref, 0);
++	mblk->ref = 0;
+ 	mblk->state = 0;
+ 	mblk->no = mblk_no;
+ 	mblk->data = page_address(mblk->page);
+@@ -339,10 +339,11 @@ static void dmz_insert_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk)
+ }
+ 
+ /*
+- * Lookup a metadata block in the rbtree.
++ * Lookup a metadata block in the rbtree. If the block is found, increment
++ * its reference count.
+  */
+-static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+-					    sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_fast(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+ 	struct rb_root *root = &zmd->mblk_rbtree;
+ 	struct rb_node *node = root->rb_node;
+@@ -350,8 +351,17 @@ static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+ 
+ 	while (node) {
+ 		mblk = container_of(node, struct dmz_mblock, node);
+-		if (mblk->no == mblk_no)
++		if (mblk->no == mblk_no) {
++			/*
++			 * If this is the first reference to the block,
++			 * remove it from the LRU list.
++			 */
++			mblk->ref++;
++			if (mblk->ref == 1 &&
++			    !test_bit(DMZ_META_DIRTY, &mblk->state))
++				list_del_init(&mblk->link);
+ 			return mblk;
++		}
+ 		node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
+ 	}
+ 
+@@ -382,32 +392,47 @@ static void dmz_mblock_bio_end_io(struct bio *bio)
+ }
+ 
+ /*
+- * Read a metadata block from disk.
++ * Read an uncached metadata block from disk and add it to the cache.
+  */
+-static struct dmz_mblock *dmz_fetch_mblock(struct dmz_metadata *zmd,
+-					   sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+-	struct dmz_mblock *mblk;
++	struct dmz_mblock *mblk, *m;
+ 	sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no;
+ 	struct bio *bio;
+ 
+-	/* Get block and insert it */
++	/* Get a new block and a BIO to read it */
+ 	mblk = dmz_alloc_mblock(zmd, mblk_no);
+ 	if (!mblk)
+ 		return NULL;
+ 
+-	spin_lock(&zmd->mblk_lock);
+-	atomic_inc(&mblk->ref);
+-	set_bit(DMZ_META_READING, &mblk->state);
+-	dmz_insert_mblock(zmd, mblk);
+-	spin_unlock(&zmd->mblk_lock);
+-
+ 	bio = bio_alloc(GFP_NOIO, 1);
+ 	if (!bio) {
+ 		dmz_free_mblock(zmd, mblk);
+ 		return NULL;
+ 	}
+ 
++	spin_lock(&zmd->mblk_lock);
++
++	/*
++	 * Make sure that another context did not start reading
++	 * the block already.
++	 */
++	m = dmz_get_mblock_fast(zmd, mblk_no);
++	if (m) {
++		spin_unlock(&zmd->mblk_lock);
++		dmz_free_mblock(zmd, mblk);
++		bio_put(bio);
++		return m;
++	}
++
++	mblk->ref++;
++	set_bit(DMZ_META_READING, &mblk->state);
++	dmz_insert_mblock(zmd, mblk);
++
++	spin_unlock(&zmd->mblk_lock);
++
++	/* Submit read BIO */
+ 	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+ 	bio_set_dev(bio, zmd->dev->bdev);
+ 	bio->bi_private = mblk;
+@@ -484,7 +509,8 @@ static void dmz_release_mblock(struct dmz_metadata *zmd,
+ 
+ 	spin_lock(&zmd->mblk_lock);
+ 
+-	if (atomic_dec_and_test(&mblk->ref)) {
++	mblk->ref--;
++	if (mblk->ref == 0) {
+ 		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ 			rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 			dmz_free_mblock(zmd, mblk);
+@@ -508,18 +534,12 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
+ 
+ 	/* Check rbtree */
+ 	spin_lock(&zmd->mblk_lock);
+-	mblk = dmz_lookup_mblock(zmd, mblk_no);
+-	if (mblk) {
+-		/* Cache hit: remove block from LRU list */
+-		if (atomic_inc_return(&mblk->ref) == 1 &&
+-		    !test_bit(DMZ_META_DIRTY, &mblk->state))
+-			list_del_init(&mblk->link);
+-	}
++	mblk = dmz_get_mblock_fast(zmd, mblk_no);
+ 	spin_unlock(&zmd->mblk_lock);
+ 
+ 	if (!mblk) {
+ 		/* Cache miss: read the block from disk */
+-		mblk = dmz_fetch_mblock(zmd, mblk_no);
++		mblk = dmz_get_mblock_slow(zmd, mblk_no);
+ 		if (!mblk)
+ 			return ERR_PTR(-ENOMEM);
+ 	}
+@@ -753,7 +773,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ 
+ 		spin_lock(&zmd->mblk_lock);
+ 		clear_bit(DMZ_META_DIRTY, &mblk->state);
+-		if (atomic_read(&mblk->ref) == 0)
++		if (mblk->ref == 0)
+ 			list_add_tail(&mblk->link, &zmd->mblk_lru_list);
+ 		spin_unlock(&zmd->mblk_lock);
+ 	}
+@@ -2308,7 +2328,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 		mblk = list_first_entry(&zmd->mblk_dirty_list,
+ 					struct dmz_mblock, link);
+ 		dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
++			     (u64)mblk->no, mblk->ref);
+ 		list_del_init(&mblk->link);
+ 		rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 		dmz_free_mblock(zmd, mblk);
+@@ -2326,8 +2346,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 	root = &zmd->mblk_rbtree;
+ 	rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
+ 		dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
+-		atomic_set(&mblk->ref, 0);
++			     (u64)mblk->no, mblk->ref);
++		mblk->ref = 0;
+ 		dmz_free_mblock(zmd, mblk);
+ 	}
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 994aed2f9dff..71665e2c30eb 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -455,10 +455,11 @@ static void md_end_flush(struct bio *fbio)
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -512,10 +513,11 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ 	rcu_read_unlock();
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -5907,14 +5909,6 @@ static void __md_stop(struct mddev *mddev)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+ 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-}
+-
+-void md_stop(struct mddev *mddev)
+-{
+-	/* stop the array and free an attached data structures.
+-	 * This is called from dm-raid
+-	 */
+-	__md_stop(mddev);
+ 	if (mddev->flush_bio_pool) {
+ 		mempool_destroy(mddev->flush_bio_pool);
+ 		mddev->flush_bio_pool = NULL;
+@@ -5923,6 +5917,14 @@ void md_stop(struct mddev *mddev)
+ 		mempool_destroy(mddev->flush_pool);
+ 		mddev->flush_pool = NULL;
+ 	}
++}
++
++void md_stop(struct mddev *mddev)
++{
++	/* stop the array and free any attached data structures.
++	 * This is called from dm-raid
++	 */
++	__md_stop(mddev);
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+ }
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 8e05c1092aef..c9362463d266 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1736,6 +1736,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	 */
+ 	if (rdev->saved_raid_disk >= 0 &&
+ 	    rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		first = last = rdev->saved_raid_disk;
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 8c93d44a052c..e555221fb75b 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1808,6 +1808,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		first = last = rdev->raid_disk;
+ 
+ 	if (rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->geo.raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		mirror = rdev->saved_raid_disk;
+ 	else
+diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
+index b7fad0ec5710..fecba7ddcd00 100644
+--- a/drivers/media/cec/cec-adap.c
++++ b/drivers/media/cec/cec-adap.c
+@@ -325,7 +325,7 @@ static void cec_data_completed(struct cec_data *data)
+  *
+  * This function is called with adap->lock held.
+  */
+-static void cec_data_cancel(struct cec_data *data)
++static void cec_data_cancel(struct cec_data *data, u8 tx_status)
+ {
+ 	/*
+ 	 * It's either the current transmit, or it is a pending
+@@ -340,13 +340,11 @@ static void cec_data_cancel(struct cec_data *data)
+ 	}
+ 
+ 	if (data->msg.tx_status & CEC_TX_STATUS_OK) {
+-		/* Mark the canceled RX as a timeout */
+ 		data->msg.rx_ts = ktime_get_ns();
+-		data->msg.rx_status = CEC_RX_STATUS_TIMEOUT;
++		data->msg.rx_status = CEC_RX_STATUS_ABORTED;
+ 	} else {
+-		/* Mark the canceled TX as an error */
+ 		data->msg.tx_ts = ktime_get_ns();
+-		data->msg.tx_status |= CEC_TX_STATUS_ERROR |
++		data->msg.tx_status |= tx_status |
+ 				       CEC_TX_STATUS_MAX_RETRIES;
+ 		data->msg.tx_error_cnt++;
+ 		data->attempts = 0;
+@@ -374,15 +372,15 @@ static void cec_flush(struct cec_adapter *adap)
+ 	while (!list_empty(&adap->transmit_queue)) {
+ 		data = list_first_entry(&adap->transmit_queue,
+ 					struct cec_data, list);
+-		cec_data_cancel(data);
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 	}
+ 	if (adap->transmitting)
+-		cec_data_cancel(adap->transmitting);
++		cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED);
+ 
+ 	/* Cancel the pending timeout work. */
+ 	list_for_each_entry_safe(data, n, &adap->wait_queue, list) {
+ 		if (cancel_delayed_work(&data->work))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_OK);
+ 		/*
+ 		 * If cancel_delayed_work returned false, then
+ 		 * the cec_wait_timeout function is running,
+@@ -458,12 +456,13 @@ int cec_thread_func(void *_adap)
+ 			 * so much traffic on the bus that the adapter was
+ 			 * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s).
+ 			 */
+-			dprintk(1, "%s: message %*ph timed out\n", __func__,
++			pr_warn("cec-%s: message %*ph timed out\n", adap->name,
+ 				adap->transmitting->msg.len,
+ 				adap->transmitting->msg.msg);
+ 			adap->tx_timeouts++;
+ 			/* Just give up on this. */
+-			cec_data_cancel(adap->transmitting);
++			cec_data_cancel(adap->transmitting,
++					CEC_TX_STATUS_TIMEOUT);
+ 			goto unlock;
+ 		}
+ 
+@@ -498,9 +497,11 @@ int cec_thread_func(void *_adap)
+ 		if (data->attempts) {
+ 			/* should be >= 3 data bit periods for a retry */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_RETRY;
+-		} else if (data->new_initiator) {
++		} else if (adap->last_initiator !=
++			   cec_msg_initiator(&data->msg)) {
+ 			/* should be >= 5 data bit periods for new initiator */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;
++			adap->last_initiator = cec_msg_initiator(&data->msg);
+ 		} else {
+ 			/*
+ 			 * should be >= 7 data bit periods for sending another
+@@ -514,7 +515,7 @@ int cec_thread_func(void *_adap)
+ 		/* Tell the adapter to transmit, cancel on error */
+ 		if (adap->ops->adap_transmit(adap, data->attempts,
+ 					     signal_free_time, &data->msg))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+ unlock:
+ 		mutex_unlock(&adap->lock);
+@@ -685,9 +686,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 			struct cec_fh *fh, bool block)
+ {
+ 	struct cec_data *data;
+-	u8 last_initiator = 0xff;
+-	unsigned int timeout;
+-	int res = 0;
+ 
+ 	msg->rx_ts = 0;
+ 	msg->tx_ts = 0;
+@@ -797,23 +795,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	data->adap = adap;
+ 	data->blocking = block;
+ 
+-	/*
+-	 * Determine if this message follows a message from the same
+-	 * initiator. Needed to determine the free signal time later on.
+-	 */
+-	if (msg->len > 1) {
+-		if (!(list_empty(&adap->transmit_queue))) {
+-			const struct cec_data *last;
+-
+-			last = list_last_entry(&adap->transmit_queue,
+-					       const struct cec_data, list);
+-			last_initiator = cec_msg_initiator(&last->msg);
+-		} else if (adap->transmitting) {
+-			last_initiator =
+-				cec_msg_initiator(&adap->transmitting->msg);
+-		}
+-	}
+-	data->new_initiator = last_initiator != cec_msg_initiator(msg);
+ 	init_completion(&data->c);
+ 	INIT_DELAYED_WORK(&data->work, cec_wait_timeout);
+ 
+@@ -829,48 +810,23 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	if (!block)
+ 		return 0;
+ 
+-	/*
+-	 * If we don't get a completion before this time something is really
+-	 * wrong and we time out.
+-	 */
+-	timeout = CEC_XFER_TIMEOUT_MS;
+-	/* Add the requested timeout if we have to wait for a reply as well */
+-	if (msg->timeout)
+-		timeout += msg->timeout;
+-
+ 	/*
+ 	 * Release the lock and wait, retake the lock afterwards.
+ 	 */
+ 	mutex_unlock(&adap->lock);
+-	res = wait_for_completion_killable_timeout(&data->c,
+-						   msecs_to_jiffies(timeout));
++	wait_for_completion_killable(&data->c);
++	if (!data->completed)
++		cancel_delayed_work_sync(&data->work);
+ 	mutex_lock(&adap->lock);
+ 
+-	if (data->completed) {
+-		/* The transmit completed (possibly with an error) */
+-		*msg = data->msg;
+-		kfree(data);
+-		return 0;
+-	}
+-	/*
+-	 * The wait for completion timed out or was interrupted, so mark this
+-	 * as non-blocking and disconnect from the filehandle since it is
+-	 * still 'in flight'. When it finally completes it will just drop the
+-	 * result silently.
+-	 */
+-	data->blocking = false;
+-	if (data->fh)
+-		list_del(&data->xfer_list);
+-	data->fh = NULL;
++	/* Cancel the transmit if it was interrupted */
++	if (!data->completed)
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+-	if (res == 0) { /* timed out */
+-		/* Check if the reply or the transmit failed */
+-		if (msg->timeout && (msg->tx_status & CEC_TX_STATUS_OK))
+-			msg->rx_status = CEC_RX_STATUS_TIMEOUT;
+-		else
+-			msg->tx_status = CEC_TX_STATUS_MAX_RETRIES;
+-	}
+-	return res > 0 ? 0 : res;
++	/* The transmit completed (possibly with an error) */
++	*msg = data->msg;
++	kfree(data);
++	return 0;
+ }
+ 
+ /* Helper function to be used by drivers and this framework. */
+@@ -1028,6 +984,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	mutex_lock(&adap->lock);
+ 	dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
+ 
++	adap->last_initiator = 0xff;
++
+ 	/* Check if this message was for us (directed or broadcast). */
+ 	if (!cec_msg_is_broadcast(msg))
+ 		valid_la = cec_has_log_addr(adap, msg_dest);
+@@ -1490,6 +1448,8 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ 	}
+ 
+ 	mutex_lock(&adap->devnode.lock);
++	adap->last_initiator = 0xff;
++
+ 	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
+ 	    adap->ops->adap_enable(adap, true)) {
+ 		mutex_unlock(&adap->devnode.lock);
+diff --git a/drivers/media/cec/cec-api.c b/drivers/media/cec/cec-api.c
+index 10b67fc40318..0199765fbae6 100644
+--- a/drivers/media/cec/cec-api.c
++++ b/drivers/media/cec/cec-api.c
+@@ -101,6 +101,23 @@ static long cec_adap_g_phys_addr(struct cec_adapter *adap,
+ 	return 0;
+ }
+ 
++static int cec_validate_phys_addr(u16 phys_addr)
++{
++	int i;
++
++	if (phys_addr == CEC_PHYS_ADDR_INVALID)
++		return 0;
++	for (i = 0; i < 16; i += 4)
++		if (phys_addr & (0xf << i))
++			break;
++	if (i == 16)
++		return 0;
++	for (i += 4; i < 16; i += 4)
++		if ((phys_addr & (0xf << i)) == 0)
++			return -EINVAL;
++	return 0;
++}
++
+ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 				 bool block, __u16 __user *parg)
+ {
+@@ -112,7 +129,7 @@ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 	if (copy_from_user(&phys_addr, parg, sizeof(phys_addr)))
+ 		return -EFAULT;
+ 
+-	err = cec_phys_addr_validate(phys_addr, NULL, NULL);
++	err = cec_validate_phys_addr(phys_addr);
+ 	if (err)
+ 		return err;
+ 	mutex_lock(&adap->lock);
+diff --git a/drivers/media/cec/cec-edid.c b/drivers/media/cec/cec-edid.c
+index ec72ac1c0b91..f587e8eaefd8 100644
+--- a/drivers/media/cec/cec-edid.c
++++ b/drivers/media/cec/cec-edid.c
+@@ -10,66 +10,6 @@
+ #include <linux/types.h>
+ #include <media/cec.h>
+ 
+-/*
+- * This EDID is expected to be a CEA-861 compliant, which means that there are
+- * at least two blocks and one or more of the extensions blocks are CEA-861
+- * blocks.
+- *
+- * The returned location is guaranteed to be < size - 1.
+- */
+-static unsigned int cec_get_edid_spa_location(const u8 *edid, unsigned int size)
+-{
+-	unsigned int blocks = size / 128;
+-	unsigned int block;
+-	u8 d;
+-
+-	/* Sanity check: at least 2 blocks and a multiple of the block size */
+-	if (blocks < 2 || size % 128)
+-		return 0;
+-
+-	/*
+-	 * If there are fewer extension blocks than the size, then update
+-	 * 'blocks'. It is allowed to have more extension blocks than the size,
+-	 * since some hardware can only read e.g. 256 bytes of the EDID, even
+-	 * though more blocks are present. The first CEA-861 extension block
+-	 * should normally be in block 1 anyway.
+-	 */
+-	if (edid[0x7e] + 1 < blocks)
+-		blocks = edid[0x7e] + 1;
+-
+-	for (block = 1; block < blocks; block++) {
+-		unsigned int offset = block * 128;
+-
+-		/* Skip any non-CEA-861 extension blocks */
+-		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
+-			continue;
+-
+-		/* search Vendor Specific Data Block (tag 3) */
+-		d = edid[offset + 2] & 0x7f;
+-		/* Check if there are Data Blocks */
+-		if (d <= 4)
+-			continue;
+-		if (d > 4) {
+-			unsigned int i = offset + 4;
+-			unsigned int end = offset + d;
+-
+-			/* Note: 'end' is always < 'size' */
+-			do {
+-				u8 tag = edid[i] >> 5;
+-				u8 len = edid[i] & 0x1f;
+-
+-				if (tag == 3 && len >= 5 && i + len <= end &&
+-				    edid[i + 1] == 0x03 &&
+-				    edid[i + 2] == 0x0c &&
+-				    edid[i + 3] == 0x00)
+-					return i + 4;
+-				i += len + 1;
+-			} while (i < end);
+-		}
+-	}
+-	return 0;
+-}
+-
+ u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+ 			   unsigned int *offset)
+ {
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+index 3a3dc23c560c..a4341205c197 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+@@ -602,14 +602,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -658,14 +658,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -714,14 +714,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -770,14 +770,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][5] = { 2599, 901, 909 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][6] = { 991, 0, 2966 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][1] = { 2989, 3120, 1180 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][2] = { 1913, 3011, 3009 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][3] = { 1836, 3099, 1105 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][4] = { 2627, 413, 2966 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][5] = { 2576, 943, 951 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][6] = { 1026, 0, 2942 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][1] = { 2989, 3120, 1180 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][2] = { 1913, 3011, 3009 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][3] = { 1836, 3099, 1105 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][4] = { 2627, 413, 2966 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][5] = { 2576, 943, 951 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][6] = { 1026, 0, 2942 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2879, 3022, 874 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][2] = { 1688, 2903, 2901 },
+@@ -826,14 +826,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][5] = { 3001, 800, 799 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3071 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][2] = { 1068, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][3] = { 1068, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][4] = { 2977, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][5] = { 2977, 851, 851 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][2] = { 1068, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][3] = { 1068, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][4] = { 2977, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][5] = { 2977, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 423 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][2] = { 749, 2926, 2926 },
+@@ -882,14 +882,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -922,62 +922,62 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1812, 886, 886 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1812 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 1828, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 1828, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 2633, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 2633, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][2] = { 1828, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][3] = { 1828, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][4] = { 2633, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][5] = { 2633, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][1] = { 2877, 2923, 1058 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][2] = { 1837, 2840, 2916 },
+@@ -994,14 +994,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][5] = { 2517, 1159, 900 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][6] = { 1042, 870, 2917 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][1] = { 2976, 3018, 1315 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][2] = { 2024, 2942, 3011 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][3] = { 1930, 2926, 1256 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][4] = { 2563, 1227, 2916 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][5] = { 2494, 1183, 943 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][6] = { 1073, 916, 2894 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][1] = { 2976, 3018, 1315 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][2] = { 2024, 2942, 3011 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][3] = { 1930, 2926, 1256 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][4] = { 2563, 1227, 2916 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][5] = { 2494, 1183, 943 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][6] = { 1073, 916, 2894 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][1] = { 2864, 2910, 1024 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][2] = { 1811, 2826, 2903 },
+@@ -1050,14 +1050,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][5] = { 2880, 998, 902 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][6] = { 816, 823, 2940 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][1] = { 3029, 3028, 1255 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][2] = { 1406, 2988, 3011 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][3] = { 1398, 2983, 1190 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][4] = { 2860, 1050, 2939 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][5] = { 2857, 1033, 945 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][6] = { 866, 873, 2916 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][1] = { 3029, 3028, 1255 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][2] = { 1406, 2988, 3011 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][3] = { 1398, 2983, 1190 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][4] = { 2860, 1050, 2939 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][5] = { 2857, 1033, 945 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][6] = { 866, 873, 2916 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][1] = { 2923, 2921, 957 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][2] = { 1125, 2877, 2902 },
+@@ -1128,7 +1128,7 @@ static const double rec709_to_240m[3][3] = {
+ 	{ 0.0016327, 0.0044133, 0.9939540 },
+ };
+ 
+-static const double rec709_to_adobergb[3][3] = {
++static const double rec709_to_oprgb[3][3] = {
+ 	{ 0.7151627, 0.2848373, -0.0000000 },
+ 	{ 0.0000000, 1.0000000, 0.0000000 },
+ 	{ -0.0000000, 0.0411705, 0.9588295 },
+@@ -1195,7 +1195,7 @@ static double transfer_rec709_to_rgb(double v)
+ 	return (v < 0.081) ? v / 4.5 : pow((v + 0.099) / 1.099, 1.0 / 0.45);
+ }
+ 
+-static double transfer_rgb_to_adobergb(double v)
++static double transfer_rgb_to_oprgb(double v)
+ {
+ 	return pow(v, 1.0 / 2.19921875);
+ }
+@@ -1251,8 +1251,8 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 	case V4L2_COLORSPACE_470_SYSTEM_M:
+ 		mult_matrix(r, g, b, rec709_to_ntsc1953);
+ 		break;
+-	case V4L2_COLORSPACE_ADOBERGB:
+-		mult_matrix(r, g, b, rec709_to_adobergb);
++	case V4L2_COLORSPACE_OPRGB:
++		mult_matrix(r, g, b, rec709_to_oprgb);
+ 		break;
+ 	case V4L2_COLORSPACE_BT2020:
+ 		mult_matrix(r, g, b, rec709_to_bt2020);
+@@ -1284,10 +1284,10 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 		*g = transfer_rgb_to_srgb(*g);
+ 		*b = transfer_rgb_to_srgb(*b);
+ 		break;
+-	case V4L2_XFER_FUNC_ADOBERGB:
+-		*r = transfer_rgb_to_adobergb(*r);
+-		*g = transfer_rgb_to_adobergb(*g);
+-		*b = transfer_rgb_to_adobergb(*b);
++	case V4L2_XFER_FUNC_OPRGB:
++		*r = transfer_rgb_to_oprgb(*r);
++		*g = transfer_rgb_to_oprgb(*g);
++		*b = transfer_rgb_to_oprgb(*b);
+ 		break;
+ 	case V4L2_XFER_FUNC_DCI_P3:
+ 		*r = transfer_rgb_to_dcip3(*r);
+@@ -1321,7 +1321,7 @@ int main(int argc, char **argv)
+ 		V4L2_COLORSPACE_470_SYSTEM_BG,
+ 		0,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		0,
+ 		V4L2_COLORSPACE_DCI_P3,
+@@ -1336,7 +1336,7 @@ int main(int argc, char **argv)
+ 		"V4L2_COLORSPACE_470_SYSTEM_BG",
+ 		"",
+ 		"V4L2_COLORSPACE_SRGB",
+-		"V4L2_COLORSPACE_ADOBERGB",
++		"V4L2_COLORSPACE_OPRGB",
+ 		"V4L2_COLORSPACE_BT2020",
+ 		"",
+ 		"V4L2_COLORSPACE_DCI_P3",
+@@ -1345,7 +1345,7 @@ int main(int argc, char **argv)
+ 		"",
+ 		"V4L2_XFER_FUNC_709",
+ 		"V4L2_XFER_FUNC_SRGB",
+-		"V4L2_XFER_FUNC_ADOBERGB",
++		"V4L2_XFER_FUNC_OPRGB",
+ 		"V4L2_XFER_FUNC_SMPTE240M",
+ 		"V4L2_XFER_FUNC_NONE",
+ 		"V4L2_XFER_FUNC_DCI_P3",
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index abd4c788dffd..f40ab5704bf0 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -1770,7 +1770,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
+ 				pos[7] = (chr & (0x01 << 0) ? fg : bg);	\
+ 			} \
+ 	\
+-			pos += (tpg->hflip ? -8 : 8) / hdiv;	\
++			pos += (tpg->hflip ? -8 : 8) / (int)hdiv;	\
+ 		}	\
+ 	}	\
+ } while (0)
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+index 5731751d3f2a..cd6e7372ef9c 100644
+--- a/drivers/media/i2c/adv7511.c
++++ b/drivers/media/i2c/adv7511.c
+@@ -1355,10 +1355,10 @@ static int adv7511_set_fmt(struct v4l2_subdev *sd,
+ 	state->xfer_func = format->format.xfer_func;
+ 
+ 	switch (format->format.colorspace) {
+-	case V4L2_COLORSPACE_ADOBERGB:
++	case V4L2_COLORSPACE_OPRGB:
+ 		c = HDMI_COLORIMETRY_EXTENDED;
+-		ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 :
+-			 HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB;
++		ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
++			 HDMI_EXTENDED_COLORIMETRY_OPRGB;
+ 		break;
+ 	case V4L2_COLORSPACE_SMPTE170M:
+ 		c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index cac2081e876e..2437f72f7caf 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2284,8 +2284,10 @@ static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+ 		state->aspect_ratio.numerator = 16;
+ 		state->aspect_ratio.denominator = 9;
+ 
+-		if (!state->edid.present)
++		if (!state->edid.present) {
+ 			state->edid.blocks = 0;
++			cec_phys_addr_invalidate(state->cec_adap);
++		}
+ 
+ 		v4l2_dbg(2, debug, sd, "%s: clear EDID pad %d, edid.present = 0x%x\n",
+ 				__func__, edid->pad, state->edid.present);
+@@ -2474,7 +2476,7 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
+ 		"YCbCr Bt.601 (16-235)", "YCbCr Bt.709 (16-235)",
+ 		"xvYCC Bt.601", "xvYCC Bt.709",
+ 		"YCbCr Bt.601 (0-255)", "YCbCr Bt.709 (0-255)",
+-		"sYCC", "Adobe YCC 601", "AdobeRGB", "invalid", "invalid",
++		"sYCC", "opYCC 601", "opRGB", "invalid", "invalid",
+ 		"invalid", "invalid", "invalid"
+ 	};
+ 	static const char * const rgb_quantization_range_txt[] = {
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index fddac32e5051..ceca6be13ca9 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -786,8 +786,10 @@ static int edid_write_hdmi_segment(struct v4l2_subdev *sd, u8 port)
+ 	/* Disable I2C access to internal EDID ram from HDMI DDC ports */
+ 	rep_write_and_or(sd, 0x77, 0xf3, 0x00);
+ 
+-	if (!state->hdmi_edid.present)
++	if (!state->hdmi_edid.present) {
++		cec_phys_addr_invalidate(state->cec_adap);
+ 		return 0;
++	}
+ 
+ 	pa = cec_get_edid_phys_addr(edid, 256, &spa_loc);
+ 	err = cec_phys_addr_validate(pa, &pa, NULL);
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 3474ef832c1e..480edeebac60 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1810,17 +1810,24 @@ static int ov7670_probe(struct i2c_client *client,
+ 			info->pclk_hb_disable = true;
+ 	}
+ 
+-	info->clk = devm_clk_get(&client->dev, "xclk");
+-	if (IS_ERR(info->clk))
+-		return PTR_ERR(info->clk);
+-	ret = clk_prepare_enable(info->clk);
+-	if (ret)
+-		return ret;
++	info->clk = devm_clk_get(&client->dev, "xclk"); /* optional */
++	if (IS_ERR(info->clk)) {
++		ret = PTR_ERR(info->clk);
++		if (ret == -ENOENT)
++			info->clk = NULL;
++		else
++			return ret;
++	}
++	if (info->clk) {
++		ret = clk_prepare_enable(info->clk);
++		if (ret)
++			return ret;
+ 
+-	info->clock_speed = clk_get_rate(info->clk) / 1000000;
+-	if (info->clock_speed < 10 || info->clock_speed > 48) {
+-		ret = -EINVAL;
+-		goto clk_disable;
++		info->clock_speed = clk_get_rate(info->clk) / 1000000;
++		if (info->clock_speed < 10 || info->clock_speed > 48) {
++			ret = -EINVAL;
++			goto clk_disable;
++		}
+ 	}
+ 
+ 	ret = ov7670_init_gpio(client, info);
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 393bbbbbaad7..865639587a97 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1243,9 +1243,9 @@ static int tc358743_log_status(struct v4l2_subdev *sd)
+ 	u8 vi_status3 =  i2c_rd8(sd, VI_STATUS3);
+ 	const int deep_color_mode[4] = { 8, 10, 12, 16 };
+ 	static const char * const input_color_space[] = {
+-		"RGB", "YCbCr 601", "Adobe RGB", "YCbCr 709", "NA (4)",
++		"RGB", "YCbCr 601", "opRGB", "YCbCr 709", "NA (4)",
+ 		"xvYCC 601", "NA(6)", "xvYCC 709", "NA(8)", "sYCC601",
+-		"NA(10)", "NA(11)", "NA(12)", "Adobe YCC 601"};
++		"NA(10)", "NA(11)", "NA(12)", "opYCC 601"};
+ 
+ 	v4l2_info(sd, "-----Chip status-----\n");
+ 	v4l2_info(sd, "Chip ID: 0x%02x\n",
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 76e6bed5a1da..805bd9c65940 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1534,7 +1534,7 @@ static int tvp5150_probe(struct i2c_client *c,
+ 			27000000, 1, 27000000);
+ 	v4l2_ctrl_new_std_menu_items(&core->hdl, &tvp5150_ctrl_ops,
+ 				     V4L2_CID_TEST_PATTERN,
+-				     ARRAY_SIZE(tvp5150_test_patterns),
++				     ARRAY_SIZE(tvp5150_test_patterns) - 1,
+ 				     0, 0, tvp5150_test_patterns);
+ 	sd->ctrl_handler = &core->hdl;
+ 	if (core->hdl.error) {
+diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
+index 477c80a4d44c..cd4c8230563c 100644
+--- a/drivers/media/platform/vivid/vivid-core.h
++++ b/drivers/media/platform/vivid/vivid-core.h
+@@ -111,7 +111,7 @@ enum vivid_colorspace {
+ 	VIVID_CS_170M,
+ 	VIVID_CS_709,
+ 	VIVID_CS_SRGB,
+-	VIVID_CS_ADOBERGB,
++	VIVID_CS_OPRGB,
+ 	VIVID_CS_2020,
+ 	VIVID_CS_DCI_P3,
+ 	VIVID_CS_240M,
+diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
+index 6b0bfa091592..e1185f0f6607 100644
+--- a/drivers/media/platform/vivid/vivid-ctrls.c
++++ b/drivers/media/platform/vivid/vivid-ctrls.c
+@@ -348,7 +348,7 @@ static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		V4L2_COLORSPACE_SMPTE170M,
+ 		V4L2_COLORSPACE_REC709,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		V4L2_COLORSPACE_DCI_P3,
+ 		V4L2_COLORSPACE_SMPTE240M,
+@@ -729,7 +729,7 @@ static const char * const vivid_ctrl_colorspace_strings[] = {
+ 	"SMPTE 170M",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"BT.2020",
+ 	"DCI-P3",
+ 	"SMPTE 240M",
+@@ -752,7 +752,7 @@ static const char * const vivid_ctrl_xfer_func_strings[] = {
+ 	"Default",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"SMPTE 240M",
+ 	"None",
+ 	"DCI-P3",
+diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
+index 51fec66d8d45..50248e2176a0 100644
+--- a/drivers/media/platform/vivid/vivid-vid-out.c
++++ b/drivers/media/platform/vivid/vivid-vid-out.c
+@@ -413,7 +413,7 @@ int vivid_try_fmt_vid_out(struct file *file, void *priv,
+ 		mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+ 	} else if (mp->colorspace != V4L2_COLORSPACE_SMPTE170M &&
+ 		   mp->colorspace != V4L2_COLORSPACE_REC709 &&
+-		   mp->colorspace != V4L2_COLORSPACE_ADOBERGB &&
++		   mp->colorspace != V4L2_COLORSPACE_OPRGB &&
+ 		   mp->colorspace != V4L2_COLORSPACE_BT2020 &&
+ 		   mp->colorspace != V4L2_COLORSPACE_SRGB) {
+ 		mp->colorspace = V4L2_COLORSPACE_REC709;
+diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+index 1aa88d94e57f..e28bd8836751 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+@@ -31,6 +31,7 @@ MODULE_PARM_DESC(disable_rc, "Disable inbuilt IR receiver.");
+ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
+ 
+ struct dvbsky_state {
++	struct mutex stream_mutex;
+ 	u8 ibuf[DVBSKY_BUF_LEN];
+ 	u8 obuf[DVBSKY_BUF_LEN];
+ 	u8 last_lock;
+@@ -67,17 +68,18 @@ static int dvbsky_usb_generic_rw(struct dvb_usb_device *d,
+ 
+ static int dvbsky_stream_ctrl(struct dvb_usb_device *d, u8 onoff)
+ {
++	struct dvbsky_state *state = d_to_priv(d);
+ 	int ret;
+-	static u8 obuf_pre[3] = { 0x37, 0, 0 };
+-	static u8 obuf_post[3] = { 0x36, 3, 0 };
++	u8 obuf_pre[3] = { 0x37, 0, 0 };
++	u8 obuf_post[3] = { 0x36, 3, 0 };
+ 
+-	mutex_lock(&d->usb_mutex);
+-	ret = dvb_usbv2_generic_rw_locked(d, obuf_pre, 3, NULL, 0);
++	mutex_lock(&state->stream_mutex);
++	ret = dvbsky_usb_generic_rw(d, obuf_pre, 3, NULL, 0);
+ 	if (!ret && onoff) {
+ 		msleep(20);
+-		ret = dvb_usbv2_generic_rw_locked(d, obuf_post, 3, NULL, 0);
++		ret = dvbsky_usb_generic_rw(d, obuf_post, 3, NULL, 0);
+ 	}
+-	mutex_unlock(&d->usb_mutex);
++	mutex_unlock(&state->stream_mutex);
+ 	return ret;
+ }
+ 
+@@ -606,6 +608,8 @@ static int dvbsky_init(struct dvb_usb_device *d)
+ 	if (ret)
+ 		return ret;
+ 	*/
++	mutex_init(&state->stream_mutex);
++
+ 	state->last_lock = 0;
+ 
+ 	return 0;
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index ff5e41ac4723..98d6c8fcd262 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -2141,13 +2141,13 @@ const struct em28xx_board em28xx_boards[] = {
+ 		.input           = { {
+ 			.type     = EM28XX_VMUX_COMPOSITE,
+ 			.vmux     = TVP5150_COMPOSITE1,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 
+ 		}, {
+ 			.type     = EM28XX_VMUX_SVIDEO,
+ 			.vmux     = TVP5150_SVIDEO,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 		} },
+ 	},
+@@ -3041,6 +3041,9 @@ static int em28xx_hint_board(struct em28xx *dev)
+ 
+ static void em28xx_card_setup(struct em28xx *dev)
+ {
++	int i, j, idx;
++	bool duplicate_entry;
++
+ 	/*
+ 	 * If the device can be a webcam, seek for a sensor.
+ 	 * If sensor is not found, then it isn't a webcam.
+@@ -3197,6 +3200,32 @@ static void em28xx_card_setup(struct em28xx *dev)
+ 	/* Allow override tuner type by a module parameter */
+ 	if (tuner >= 0)
+ 		dev->tuner_type = tuner;
++
++	/*
++	 * Dynamically generate a list of valid audio inputs for this
++	 * specific board, mapping them via enum em28xx_amux.
++	 */
++
++	idx = 0;
++	for (i = 0; i < MAX_EM28XX_INPUT; i++) {
++		if (!INPUT(i)->type)
++			continue;
++
++		/* Skip already mapped audio inputs */
++		duplicate_entry = false;
++		for (j = 0; j < idx; j++) {
++			if (INPUT(i)->amux == dev->amux_map[j]) {
++				duplicate_entry = true;
++				break;
++			}
++		}
++		if (duplicate_entry)
++			continue;
++
++		dev->amux_map[idx++] = INPUT(i)->amux;
++	}
++	for (; idx < MAX_EM28XX_INPUT; idx++)
++		dev->amux_map[idx] = EM28XX_AMUX_UNUSED;
+ }
+ 
+ void em28xx_setup_xc3028(struct em28xx *dev, struct xc2028_ctrl *ctl)
+diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
+index 68571bf36d28..3bf98ac897ec 100644
+--- a/drivers/media/usb/em28xx/em28xx-video.c
++++ b/drivers/media/usb/em28xx/em28xx-video.c
+@@ -1093,6 +1093,8 @@ int em28xx_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	em28xx_videodbg("%s\n", __func__);
+ 
++	dev->v4l2->field_count = 0;
++
+ 	/*
+ 	 * Make sure streaming is not already in progress for this type
+ 	 * of filehandle (e.g. video, vbi)
+@@ -1471,9 +1473,9 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+ 
+ 	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+ 	if (!fmt) {
+-		em28xx_videodbg("Fourcc format (%08x) invalid.\n",
+-				f->fmt.pix.pixelformat);
+-		return -EINVAL;
++		fmt = &format[0];
++		em28xx_videodbg("Fourcc format (%08x) invalid. Using default (%08x).\n",
++				f->fmt.pix.pixelformat, fmt->fourcc);
+ 	}
+ 
+ 	if (dev->board.is_em2800) {
+@@ -1666,6 +1668,7 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ {
+ 	struct em28xx *dev = video_drvdata(file);
+ 	unsigned int       n;
++	int j;
+ 
+ 	n = i->index;
+ 	if (n >= MAX_EM28XX_INPUT)
+@@ -1685,6 +1688,12 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ 	if (dev->is_webcam)
+ 		i->capabilities = 0;
+ 
++	/* Dynamically generates an audioset bitmask */
++	i->audioset = 0;
++	for (j = 0; j < MAX_EM28XX_INPUT; j++)
++		if (dev->amux_map[j] != EM28XX_AMUX_UNUSED)
++			i->audioset |= 1 << j;
++
+ 	return 0;
+ }
+ 
+@@ -1710,11 +1719,24 @@ static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
+ 	return 0;
+ }
+ 
+-static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++static int em28xx_fill_audio_input(struct em28xx *dev,
++				   const char *s,
++				   struct v4l2_audio *a,
++				   unsigned int index)
+ {
+-	struct em28xx *dev = video_drvdata(file);
++	unsigned int idx = dev->amux_map[index];
++
++	/*
++	 * With msp3400, almost all mappings use the default (amux = 0).
++	 * The only one may use a different value is WinTV USB2, where it
++	 * can also be SCART1 input.
++	 * As it is very doubtful that we would see new boards with msp3400,
++	 * let's just reuse the existing switch.
++	 */
++	if (dev->has_msp34xx && idx != EM28XX_AMUX_UNUSED)
++		idx = EM28XX_AMUX_LINE_IN;
+ 
+-	switch (a->index) {
++	switch (idx) {
+ 	case EM28XX_AMUX_VIDEO:
+ 		strcpy(a->name, "Television");
+ 		break;
+@@ -1739,32 +1761,79 @@ static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
+ 	case EM28XX_AMUX_PCM_OUT:
+ 		strcpy(a->name, "PCM");
+ 		break;
++	case EM28XX_AMUX_UNUSED:
+ 	default:
+ 		return -EINVAL;
+ 	}
+-
+-	a->index = dev->ctl_ainput;
++	a->index = index;
+ 	a->capability = V4L2_AUDCAP_STEREO;
+ 
++	em28xx_videodbg("%s: audio input index %d is '%s'\n",
++			s, a->index, a->name);
++
+ 	return 0;
+ }
+ 
++static int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++
++	if (a->index >= MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	return em28xx_fill_audio_input(dev, __func__, a, a->index);
++}
++
++static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++	int i;
++
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (dev->ctl_ainput == dev->amux_map[i])
++			return em28xx_fill_audio_input(dev, __func__, a, i);
++
++	/* Should never happen! */
++	return -EINVAL;
++}
++
+ static int vidioc_s_audio(struct file *file, void *priv,
+ 			  const struct v4l2_audio *a)
+ {
+ 	struct em28xx *dev = video_drvdata(file);
++	int idx, i;
+ 
+ 	if (a->index >= MAX_EM28XX_INPUT)
+ 		return -EINVAL;
+-	if (!INPUT(a->index)->type)
++
++	idx = dev->amux_map[a->index];
++
++	if (idx == EM28XX_AMUX_UNUSED)
+ 		return -EINVAL;
+ 
+-	dev->ctl_ainput = INPUT(a->index)->amux;
+-	dev->ctl_aoutput = INPUT(a->index)->aout;
++	dev->ctl_ainput = idx;
++
++	/*
++	 * FIXME: This is wrong, as different inputs at em28xx_cards
++	 * may have different audio outputs. So, the right thing
++	 * to do is to implement VIDIOC_G_AUDOUT/VIDIOC_S_AUDOUT.
++	 * With the current board definitions, this would work fine,
++	 * as, currently, all boards fit.
++	 */
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (idx == dev->amux_map[i])
++			break;
++	if (i == MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	dev->ctl_aoutput = INPUT(i)->aout;
+ 
+ 	if (!dev->ctl_aoutput)
+ 		dev->ctl_aoutput = EM28XX_AOUT_MASTER;
+ 
++	em28xx_videodbg("%s: set audio input to %d\n", __func__,
++			dev->ctl_ainput);
++
+ 	return 0;
+ }
+ 
+@@ -2302,6 +2371,7 @@ static const struct v4l2_ioctl_ops video_ioctl_ops = {
+ 	.vidioc_try_fmt_vbi_cap     = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_s_fmt_vbi_cap       = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_enum_framesizes     = vidioc_enum_framesizes,
++	.vidioc_enumaudio           = vidioc_enumaudio,
+ 	.vidioc_g_audio             = vidioc_g_audio,
+ 	.vidioc_s_audio             = vidioc_s_audio,
+ 
+diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
+index 953caac025f2..a551072e62ed 100644
+--- a/drivers/media/usb/em28xx/em28xx.h
++++ b/drivers/media/usb/em28xx/em28xx.h
+@@ -335,6 +335,9 @@ enum em28xx_usb_audio_type {
+ /**
+  * em28xx_amux - describes the type of audio input used by em28xx
+  *
++ * @EM28XX_AMUX_UNUSED:
++ *	Used only on em28xx dev->map field, in order to mark an entry
++ *	as unused.
+  * @EM28XX_AMUX_VIDEO:
+  *	On devices without AC97, this is the only value that it is currently
+  *	allowed.
+@@ -369,7 +372,8 @@ enum em28xx_usb_audio_type {
+  * same time, via the alsa mux.
+  */
+ enum em28xx_amux {
+-	EM28XX_AMUX_VIDEO,
++	EM28XX_AMUX_UNUSED = -1,
++	EM28XX_AMUX_VIDEO = 0,
+ 	EM28XX_AMUX_LINE_IN,
+ 
+ 	/* Some less-common mixer setups */
+@@ -692,6 +696,8 @@ struct em28xx {
+ 	unsigned int ctl_input;	// selected input
+ 	unsigned int ctl_ainput;// selected audio input
+ 	unsigned int ctl_aoutput;// selected audio output
++	enum em28xx_amux amux_map[MAX_EM28XX_INPUT];
++
+ 	int mute;
+ 	int volume;
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index c81faea96fba..c7c600c1f63b 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -837,9 +837,9 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 		switch (avi->colorimetry) {
+ 		case HDMI_COLORIMETRY_EXTENDED:
+ 			switch (avi->extended_colorimetry) {
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+@@ -908,10 +908,10 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+ 				c.xfer_func = V4L2_XFER_FUNC_SRGB;
+ 				break;
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c
+index 29b7164a823b..d28ebe7ecd21 100644
+--- a/drivers/mfd/menelaus.c
++++ b/drivers/mfd/menelaus.c
+@@ -1094,6 +1094,7 @@ static void menelaus_rtc_alarm_work(struct menelaus_chip *m)
+ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ {
+ 	int	alarm = (m->client->irq > 0);
++	int	err;
+ 
+ 	/* assume 32KDETEN pin is pulled high */
+ 	if (!(menelaus_read_reg(MENELAUS_OSC_CTRL) & 0x80)) {
+@@ -1101,6 +1102,12 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		return;
+ 	}
+ 
++	m->rtc = devm_rtc_allocate_device(&m->client->dev);
++	if (IS_ERR(m->rtc))
++		return;
++
++	m->rtc->ops = &menelaus_rtc_ops;
++
+ 	/* support RTC alarm; it can issue wakeups */
+ 	if (alarm) {
+ 		if (menelaus_add_irq_work(MENELAUS_RTCALM_IRQ,
+@@ -1125,10 +1132,8 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		menelaus_write_reg(MENELAUS_RTC_CTRL, m->rtc_control);
+ 	}
+ 
+-	m->rtc = rtc_device_register(DRIVER_NAME,
+-			&m->client->dev,
+-			&menelaus_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(m->rtc)) {
++	err = rtc_register_device(m->rtc);
++	if (err) {
+ 		if (alarm) {
+ 			menelaus_remove_irq_work(MENELAUS_RTCALM_IRQ);
+ 			device_init_wakeup(&m->client->dev, 0);
+diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h
+index 1c3967f10f55..1f94fb436c3c 100644
+--- a/drivers/misc/genwqe/card_base.h
++++ b/drivers/misc/genwqe/card_base.h
+@@ -408,7 +408,7 @@ struct genwqe_file {
+ 	struct file *filp;
+ 
+ 	struct fasync_struct *async_queue;
+-	struct task_struct *owner;
++	struct pid *opener;
+ 	struct list_head list;		/* entry in list of open files */
+ 
+ 	spinlock_t map_lock;		/* lock for dma_mappings */
+diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
+index 0dd6b5ef314a..66f222f24da3 100644
+--- a/drivers/misc/genwqe/card_dev.c
++++ b/drivers/misc/genwqe/card_dev.c
+@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ {
+ 	unsigned long flags;
+ 
+-	cfile->owner = current;
++	cfile->opener = get_pid(task_tgid(current));
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_add(&cfile->list, &cd->file_list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_del(&cfile->list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
++	put_pid(cfile->opener);
+ 
+ 	return 0;
+ }
+@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct genwqe_dev *cd, int sig)
+ 	return files;
+ }
+ 
+-static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
++static int genwqe_terminate(struct genwqe_dev *cd)
+ {
+ 	unsigned int files = 0;
+ 	unsigned long flags;
+@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
+ 
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_for_each_entry(cfile, &cd->file_list, list) {
+-		force_sig(sig, cfile->owner);
++		kill_pid(cfile->opener, SIGKILL, 1);
+ 		files++;
+ 	}
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -1357,7 +1358,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
+ 		dev_warn(&pci_dev->dev,
+ 			 "[%s] send SIGKILL and wait ...\n", __func__);
+ 
+-		rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
++		rc = genwqe_terminate(cd);
+ 		if (rc) {
+ 			/* Give kill_timout more seconds to end processes */
+ 			for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
+diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
+index 2e30de9c694a..57a6bb1fd3c9 100644
+--- a/drivers/misc/ocxl/config.c
++++ b/drivers/misc/ocxl/config.c
+@@ -280,7 +280,9 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
+ 	u32 val;
+ 	int rc, templ_major, templ_minor, len;
+ 
+-	pci_write_config_word(dev, fn->dvsec_afu_info_pos, afu_idx);
++	pci_write_config_byte(dev,
++			fn->dvsec_afu_info_pos + OCXL_DVSEC_AFU_INFO_AFU_IDX,
++			afu_idx);
+ 	rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
+index d7eaf1eb11e7..003bfba40758 100644
+--- a/drivers/misc/vmw_vmci/vmci_driver.c
++++ b/drivers/misc/vmw_vmci/vmci_driver.c
+@@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+ MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+-MODULE_VERSION("1.1.5.0-k");
++MODULE_VERSION("1.1.6.0-k");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 1ab6e8737a5f..da1ee2e1ba99 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -57,7 +57,8 @@ static struct vmci_resource *vmci_resource_lookup(struct vmci_handle handle,
+ 
+ 		if (r->type == type &&
+ 		    rid == handle.resource &&
+-		    (cid == handle.context || cid == VMCI_INVALID_ID)) {
++		    (cid == handle.context || cid == VMCI_INVALID_ID ||
++		     handle.context == VMCI_INVALID_ID)) {
+ 			resource = r;
+ 			break;
+ 		}
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 32321bd596d8..c61109f7b793 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -76,6 +76,7 @@ struct sdhci_acpi_slot {
+ 	size_t		priv_size;
+ 	int (*probe_slot)(struct platform_device *, const char *, const char *);
+ 	int (*remove_slot)(struct platform_device *);
++	int (*free_slot)(struct platform_device *pdev);
+ 	int (*setup_host)(struct platform_device *pdev);
+ };
+ 
+@@ -756,6 +757,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ err_cleanup:
+ 	sdhci_cleanup_host(c->host);
+ err_free:
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 	return err;
+ }
+@@ -777,6 +781,10 @@ static int sdhci_acpi_remove(struct platform_device *pdev)
+ 
+ 	dead = (sdhci_readl(c->host, SDHCI_INT_STATUS) == ~0);
+ 	sdhci_remove_host(c->host, dead);
++
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 555970a29c94..34326d95d254 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -367,6 +367,9 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
+ 		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch);
+ 		break;
+ 	case PCI_DEVICE_ID_O2_SEABIRD0:
++		if (chip->pdev->revision == 0x01)
++			chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
++		/* fall through */
+ 	case PCI_DEVICE_ID_O2_SEABIRD1:
+ 		/* UnLock WP */
+ 		ret = pci_read_config_byte(chip->pdev,
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index e686fe73159e..a1fd6f6f5414 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2081,6 +2081,10 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
+ 	nand_np = dev->of_node;
+ 	nfc_np = of_find_compatible_node(dev->of_node, NULL,
+ 					 "atmel,sama5d3-nfc");
++	if (!nfc_np) {
++		dev_err(dev, "Could not find device node for sama5d3-nfc\n");
++		return -ENODEV;
++	}
+ 
+ 	nc->clk = of_clk_get(nfc_np, 0);
+ 	if (IS_ERR(nc->clk)) {
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index c502075e5721..ff955f085351 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -28,6 +28,7 @@
+ MODULE_LICENSE("GPL");
+ 
+ #define DENALI_NAND_NAME    "denali-nand"
++#define DENALI_DEFAULT_OOB_SKIP_BYTES	8
+ 
+ /* for Indexed Addressing */
+ #define DENALI_INDEXED_CTRL	0x00
+@@ -1106,12 +1107,17 @@ static void denali_hw_init(struct denali_nand_info *denali)
+ 		denali->revision = swab16(ioread32(denali->reg + REVISION));
+ 
+ 	/*
+-	 * tell driver how many bit controller will skip before
+-	 * writing ECC code in OOB, this register may be already
+-	 * set by firmware. So we read this value out.
+-	 * if this value is 0, just let it be.
++	 * Set how many bytes should be skipped before writing data in OOB.
++	 * If a non-zero value has already been set (by firmware or something),
++	 * just use it.  Otherwise, set the driver default.
+ 	 */
+ 	denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
++	if (!denali->oob_skip_bytes) {
++		denali->oob_skip_bytes = DENALI_DEFAULT_OOB_SKIP_BYTES;
++		iowrite32(denali->oob_skip_bytes,
++			  denali->reg + SPARE_AREA_SKIP_BYTES);
++	}
++
+ 	denali_detect_max_banks(denali);
+ 	iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
+ 	iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index c88588815ca1..a3477cbf6115 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -691,7 +691,7 @@ static irqreturn_t marvell_nfc_isr(int irq, void *dev_id)
+ 
+ 	marvell_nfc_disable_int(nfc, st & NDCR_ALL_INT);
+ 
+-	if (!(st & (NDSR_RDDREQ | NDSR_WRDREQ | NDSR_WRCMDREQ)))
++	if (st & (NDSR_RDY(0) | NDSR_RDY(1)))
+ 		complete(&nfc->complete);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c
+index 7d9620c7ff6c..1ff3430f82c8 100644
+--- a/drivers/mtd/spi-nor/fsl-quadspi.c
++++ b/drivers/mtd/spi-nor/fsl-quadspi.c
+@@ -478,6 +478,7 @@ static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
+ {
+ 	switch (cmd) {
+ 	case SPINOR_OP_READ_1_1_4:
++	case SPINOR_OP_READ_1_1_4_4B:
+ 		return SEQID_READ;
+ 	case SPINOR_OP_WREN:
+ 		return SEQID_WREN;
+@@ -543,6 +544,9 @@ fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
+ 
+ 	/* trigger the LUT now */
+ 	seqid = fsl_qspi_get_seqid(q, cmd);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, (seqid << QUADSPI_IPCR_SEQID_SHIFT) | len,
+ 			base + QUADSPI_IPCR);
+ 
+@@ -671,7 +675,7 @@ static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
+  * causes the controller to clear the buffer, and use the sequence pointed
+  * by the QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
+  */
+-static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
++static int fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ {
+ 	void __iomem *base = q->iobase;
+ 	int seqid;
+@@ -696,8 +700,13 @@ static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ 
+ 	/* Set the default lut sequence for AHB Read. */
+ 	seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
+ 		q->iobase + QUADSPI_BFGENCR);
++
++	return 0;
+ }
+ 
+ /* This function was used to prepare and enable QSPI clock */
+@@ -805,9 +814,7 @@ static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
+ 	fsl_qspi_init_lut(q);
+ 
+ 	/* Init for AHB read */
+-	fsl_qspi_init_ahb_read(q);
+-
+-	return 0;
++	return fsl_qspi_init_ahb_read(q);
+ }
+ 
+ static const struct of_device_id fsl_qspi_dt_ids[] = {
+diff --git a/drivers/mtd/spi-nor/intel-spi-pci.c b/drivers/mtd/spi-nor/intel-spi-pci.c
+index c0976f2e3dd1..872b40922608 100644
+--- a/drivers/mtd/spi-nor/intel-spi-pci.c
++++ b/drivers/mtd/spi-nor/intel-spi-pci.c
+@@ -65,6 +65,7 @@ static void intel_spi_pci_remove(struct pci_dev *pdev)
+ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x18e0), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0x19e0), (unsigned long)&bxt_info },
++	{ PCI_VDEVICE(INTEL, 0x34a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa224), (unsigned long)&bxt_info },
+ 	{ },
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 46af8052e535..152a65d46e0b 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -110,6 +110,9 @@ int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
+ 	err = mv88e6xxx_phy_page_get(chip, phy, page);
+ 	if (!err) {
+ 		err = mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
++		if (!err)
++			err = mv88e6xxx_phy_write(chip, phy, reg, val);
++
+ 		mv88e6xxx_phy_page_put(chip, phy);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 34af5f1569c8..de0e24d912fe 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -342,7 +342,7 @@ static struct device_node *bcmgenet_mii_of_find_mdio(struct bcmgenet_priv *priv)
+ 	if (!compat)
+ 		return NULL;
+ 
+-	priv->mdio_dn = of_find_compatible_node(dn, NULL, compat);
++	priv->mdio_dn = of_get_compatible_child(dn, compat);
+ 	kfree(compat);
+ 	if (!priv->mdio_dn) {
+ 		dev_err(kdev, "unable to find MDIO bus node\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9d69621f5ab4..542f16074dc9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1907,6 +1907,7 @@ static int is_valid_clean_head(struct hns3_enet_ring *ring, int h)
+ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ {
+ 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
++	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	struct netdev_queue *dev_queue;
+ 	int bytes, pkts;
+ 	int head;
+@@ -1953,7 +1954,8 @@ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ 		 * sees the new next_to_clean.
+ 		 */
+ 		smp_mb();
+-		if (netif_tx_queue_stopped(dev_queue)) {
++		if (netif_tx_queue_stopped(dev_queue) &&
++		    !test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
+ 			netif_tx_wake_queue(dev_queue);
+ 			ring->stats.restart_queue++;
+ 		}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 11620e003a8e..967a625c040d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -310,7 +310,7 @@ static void hns3_self_test(struct net_device *ndev,
+ 			h->flags & HNAE3_SUPPORT_MAC_LOOPBACK;
+ 
+ 	if (if_running)
+-		dev_close(ndev);
++		ndev->netdev_ops->ndo_stop(ndev);
+ 
+ #if IS_ENABLED(CONFIG_VLAN_8021Q)
+ 	/* Disable the vlan filter for selftest does not support it */
+@@ -348,7 +348,7 @@ static void hns3_self_test(struct net_device *ndev,
+ #endif
+ 
+ 	if (if_running)
+-		dev_open(ndev);
++		ndev->netdev_ops->ndo_open(ndev);
+ }
+ 
+ static int hns3_get_sset_count(struct net_device *netdev, int stringset)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 955f0e3d5c95..b4c0597a392d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -79,6 +79,7 @@ static int hclge_ieee_getets(struct hnae3_handle *h, struct ieee_ets *ets)
+ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 			      u8 *tc, bool *changed)
+ {
++	bool has_ets_tc = false;
+ 	u32 total_ets_bw = 0;
+ 	u8 max_tc = 0;
+ 	u8 i;
+@@ -106,13 +107,14 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 
+ 			total_ets_bw += ets->tc_tx_bw[i];
+-		break;
++			has_ets_tc = true;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+ 	}
+ 
+-	if (total_ets_bw != BW_PERCENT)
++	if (has_ets_tc && total_ets_bw != BW_PERCENT)
+ 		return -EINVAL;
+ 
+ 	*tc = max_tc + 1;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 13f43b74fd6d..9f2bea64c522 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1669,11 +1669,13 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev,
+ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 				struct hclge_pkt_buf_alloc *buf_alloc)
+ {
+-	u32 rx_all = hdev->pkt_buf_size;
++#define HCLGE_BUF_SIZE_UNIT	128
++	u32 rx_all = hdev->pkt_buf_size, aligned_mps;
+ 	int no_pfc_priv_num, pfc_priv_num;
+ 	struct hclge_priv_buf *priv;
+ 	int i;
+ 
++	aligned_mps = round_up(hdev->mps, HCLGE_BUF_SIZE_UNIT);
+ 	rx_all -= hclge_get_tx_buff_alloced(buf_alloc);
+ 
+ 	/* When DCB is not supported, rx private
+@@ -1692,13 +1694,13 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 		if (hdev->hw_tc_map & BIT(i)) {
+ 			priv->enable = 1;
+ 			if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+-				priv->wl.low = hdev->mps;
+-				priv->wl.high = priv->wl.low + hdev->mps;
++				priv->wl.low = aligned_mps;
++				priv->wl.high = priv->wl.low + aligned_mps;
+ 				priv->buf_size = priv->wl.high +
+ 						HCLGE_DEFAULT_DV;
+ 			} else {
+ 				priv->wl.low = 0;
+-				priv->wl.high = 2 * hdev->mps;
++				priv->wl.high = 2 * aligned_mps;
+ 				priv->buf_size = priv->wl.high;
+ 			}
+ 		} else {
+@@ -1730,11 +1732,11 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 
+ 		if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+ 			priv->wl.low = 128;
+-			priv->wl.high = priv->wl.low + hdev->mps;
++			priv->wl.high = priv->wl.low + aligned_mps;
+ 			priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;
+ 		} else {
+ 			priv->wl.low = 0;
+-			priv->wl.high = hdev->mps;
++			priv->wl.high = aligned_mps;
+ 			priv->buf_size = priv->wl.high;
+ 		}
+ 	}
+@@ -2396,6 +2398,9 @@ static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
+ 	int mac_state;
+ 	int link_stat;
+ 
++	if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
++		return 0;
++
+ 	mac_state = hclge_get_mac_link_status(hdev);
+ 
+ 	if (hdev->hw.mac.phydev) {
+@@ -3789,6 +3794,8 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	struct hclge_dev *hdev = vport->back;
+ 	int i;
+ 
++	set_bit(HCLGE_STATE_DOWN, &hdev->state);
++
+ 	del_timer_sync(&hdev->service_timer);
+ 	cancel_work_sync(&hdev->service_task);
+ 	clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
+@@ -4679,9 +4686,17 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
+ 			"Add vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+ 	} else {
++#define HCLGE_VF_VLAN_DEL_NO_FOUND	1
+ 		if (!req0->resp_code)
+ 			return 0;
+ 
++		if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) {
++			dev_warn(&hdev->pdev->dev,
++				 "vlan %d filter is not in vf vlan table\n",
++				 vlan);
++			return 0;
++		}
++
+ 		dev_err(&hdev->pdev->dev,
+ 			"Kill vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+@@ -4725,6 +4740,9 @@ static int hclge_set_vlan_filter_hw(struct hclge_dev *hdev, __be16 proto,
+ 	u16 vport_idx, vport_num = 0;
+ 	int ret;
+ 
++	if (is_kill && !vlan_id)
++		return 0;
++
+ 	ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id,
+ 				       0, proto);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 12aa1f1b99ef..6090a7cd83e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -299,6 +299,9 @@ void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+ 
+ 	client = handle->client;
+ 
++	link_state =
++		test_bit(HCLGEVF_STATE_DOWN, &hdev->state) ? 0 : link_state;
++
+ 	if (link_state != hdev->hw.mac.link) {
+ 		client->ops->link_status_change(handle, !!link_state);
+ 		hdev->hw.mac.link = link_state;
+@@ -1439,6 +1442,8 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 	int i, queue_id;
+ 
++	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
++
+ 	for (i = 0; i < hdev->num_tqps; i++) {
+ 		/* Ring disable */
+ 		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index ed071ea75f20..ce12824a8325 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -39,9 +39,9 @@
+ extern const char ice_drv_ver[];
+ #define ICE_BAR0		0
+ #define ICE_DFLT_NUM_DESC	128
+-#define ICE_MIN_NUM_DESC	8
+-#define ICE_MAX_NUM_DESC	8160
+ #define ICE_REQ_DESC_MULTIPLE	32
++#define ICE_MIN_NUM_DESC	ICE_REQ_DESC_MULTIPLE
++#define ICE_MAX_NUM_DESC	8160
+ #define ICE_DFLT_TRAFFIC_CLASS	BIT(0)
+ #define ICE_INT_NAME_STR_LEN	(IFNAMSIZ + 16)
+ #define ICE_ETHTOOL_FWVER_LEN	32
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 62be72fdc8f3..e783976c401d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -518,22 +518,31 @@ shutdown_sq_out:
+ 
+ /**
+  * ice_aq_ver_check - Check the reported AQ API version.
+- * @fw_branch: The "branch" of FW, typically describes the device type
+- * @fw_major: The major version of the FW API
+- * @fw_minor: The minor version increment of the FW API
++ * @hw: pointer to the hardware structure
+  *
+  * Checks if the driver should load on a given AQ API version.
+  *
+  * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+  */
+-static bool ice_aq_ver_check(u8 fw_branch, u8 fw_major, u8 fw_minor)
++static bool ice_aq_ver_check(struct ice_hw *hw)
+ {
+-	if (fw_branch != EXP_FW_API_VER_BRANCH)
+-		return false;
+-	if (fw_major != EXP_FW_API_VER_MAJOR)
+-		return false;
+-	if (fw_minor != EXP_FW_API_VER_MINOR)
++	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
++		/* Major API version is newer than expected, don't load */
++		dev_warn(ice_hw_to_dev(hw),
++			 "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+ 		return false;
++	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
++		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
++		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	} else {
++		/* Major API version is older than expected, log a warning */
++		dev_info(ice_hw_to_dev(hw),
++			 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	}
+ 	return true;
+ }
+ 
+@@ -588,8 +597,7 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	if (status)
+ 		goto init_ctrlq_free_rq;
+ 
+-	if (!ice_aq_ver_check(hw->api_branch, hw->api_maj_ver,
+-			      hw->api_min_ver)) {
++	if (!ice_aq_ver_check(hw)) {
+ 		status = ICE_ERR_FW_API_VER;
+ 		goto init_ctrlq_free_rq;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index c71a9b528d6d..9d6754f65a1a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -478,9 +478,11 @@ ice_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	ring->tx_max_pending = ICE_MAX_NUM_DESC;
+ 	ring->rx_pending = vsi->rx_rings[0]->count;
+ 	ring->tx_pending = vsi->tx_rings[0]->count;
+-	ring->rx_mini_pending = ICE_MIN_NUM_DESC;
++
++	/* Rx mini and jumbo rings are not supported */
+ 	ring->rx_mini_max_pending = 0;
+ 	ring->rx_jumbo_max_pending = 0;
++	ring->rx_mini_pending = 0;
+ 	ring->rx_jumbo_pending = 0;
+ }
+ 
+@@ -498,14 +500,23 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	    ring->tx_pending < ICE_MIN_NUM_DESC ||
+ 	    ring->rx_pending > ICE_MAX_NUM_DESC ||
+ 	    ring->rx_pending < ICE_MIN_NUM_DESC) {
+-		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d]\n",
++		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+ 			   ring->tx_pending, ring->rx_pending,
+-			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC);
++			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC,
++			   ICE_REQ_DESC_MULTIPLE);
+ 		return -EINVAL;
+ 	}
+ 
+ 	new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_tx_cnt != ring->tx_pending)
++		netdev_info(netdev,
++			    "Requested Tx descriptor count rounded up to %d\n",
++			    new_tx_cnt);
+ 	new_rx_cnt = ALIGN(ring->rx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_rx_cnt != ring->rx_pending)
++		netdev_info(netdev,
++			    "Requested Rx descriptor count rounded up to %d\n",
++			    new_rx_cnt);
+ 
+ 	/* if nothing to do return success */
+ 	if (new_tx_cnt == vsi->tx_rings[0]->count &&
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index da4322e4daed..add124e0381d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -676,6 +676,9 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
+ 	} else {
+ 		struct tx_sa tsa;
+ 
++		if (adapter->num_vfs)
++			return -EOPNOTSUPP;
++
+ 		/* find the first unused index */
+ 		ret = ixgbe_ipsec_find_empty_idx(ipsec, false);
+ 		if (ret < 0) {
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 59416eddd840..ce28d474b929 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -3849,6 +3849,10 @@ static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
+ 		skb_checksum_help(skb);
+ 		goto no_csum;
+ 	}
++
++	if (first->protocol == htons(ETH_P_IP))
++		type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++
+ 	/* update TX checksum flag */
+ 	first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ 	vlan_macip_lens = skb_checksum_start_offset(skb) -
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
+index 4a6d2db75071..417fbcc64f00 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
+@@ -314,12 +314,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ 
+ 	switch (off) {
+ 	case offsetof(struct iphdr, daddr):
+-		set_ip_addr->ipv4_dst_mask = mask;
+-		set_ip_addr->ipv4_dst = exact;
++		set_ip_addr->ipv4_dst_mask |= mask;
++		set_ip_addr->ipv4_dst &= ~mask;
++		set_ip_addr->ipv4_dst |= exact & mask;
+ 		break;
+ 	case offsetof(struct iphdr, saddr):
+-		set_ip_addr->ipv4_src_mask = mask;
+-		set_ip_addr->ipv4_src = exact;
++		set_ip_addr->ipv4_src_mask |= mask;
++		set_ip_addr->ipv4_src &= ~mask;
++		set_ip_addr->ipv4_src |= exact & mask;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -333,11 +335,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ }
+ 
+ static void
+-nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
++nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
+ 		      struct nfp_fl_set_ipv6_addr *ip6)
+ {
+-	ip6->ipv6[idx % 4].mask = mask;
+-	ip6->ipv6[idx % 4].exact = exact;
++	ip6->ipv6[word].mask |= mask;
++	ip6->ipv6[word].exact &= ~mask;
++	ip6->ipv6[word].exact |= exact & mask;
+ 
+ 	ip6->reserved = cpu_to_be16(0);
+ 	ip6->head.jump_id = opcode_tag;
+@@ -350,6 +353,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	       struct nfp_fl_set_ipv6_addr *ip_src)
+ {
+ 	__be32 exact, mask;
++	u8 word;
+ 
+ 	/* We are expecting tcf_pedit to return a big endian value */
+ 	mask = (__force __be32)~tcf_pedit_mask(action, idx);
+@@ -358,17 +362,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	if (exact & ~mask)
+ 		return -EOPNOTSUPP;
+ 
+-	if (off < offsetof(struct ipv6hdr, saddr))
++	if (off < offsetof(struct ipv6hdr, saddr)) {
+ 		return -EOPNOTSUPP;
+-	else if (off < offsetof(struct ipv6hdr, daddr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr)) {
++		word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
+ 				      exact, mask, ip_src);
+-	else if (off < offsetof(struct ipv6hdr, daddr) +
+-		       sizeof(struct in6_addr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr) +
++		       sizeof(struct in6_addr)) {
++		word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
+ 				      exact, mask, ip_dst);
+-	else
++	} else {
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index db463e20a876..e9a4179e7e48 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -96,6 +96,7 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	if (count < 2)
+@@ -114,8 +115,12 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index,
+-				    eth_port.port_lanes / count);
++	/* Special case the 100G CXP -> 2x40G split */
++	lanes = eth_port.port_lanes / count;
++	if (eth_port.lanes == 10 && count == 2)
++		lanes = 8 / count;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+@@ -128,6 +133,7 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	mutex_lock(&pf->lock);
+@@ -143,7 +149,12 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index, eth_port.port_lanes);
++	/* Special case the 100G CXP -> 2x40G unsplit */
++	lanes = eth_port.port_lanes;
++	if (eth_port.port_lanes == 8)
++		lanes = 10;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index b48f76182049..10b075bc5959 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
+ 
+ 	qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
+ 	ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
+-	ql_write_nvram_reg(qdev, spir,
+-			   ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
+ }
+ 
+ /*
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index f18087102d40..41bcbdd355f0 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,20 +7539,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	switch (tp->mac_version) {
+-	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
++	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-		break;
+-	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
+-		/* This version was reported to have issues with resume
+-		 * from suspend when using MSI-X
+-		 */
+-		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+-		break;
+-	default:
++	} else {
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index e080d3e7c582..4d7d53fbc0ef 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -945,6 +945,9 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
+ 	dring->head = 0;
+ 	dring->tail = 0;
+ 	dring->pkt_cnt = 0;
++
++	if (id == NETSEC_RING_TX)
++		netdev_reset_queue(priv->ndev);
+ }
+ 
+ static void netsec_free_dring(struct netsec_priv *priv, int id)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index f9a61f90cfbc..0f660af01a4b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -714,8 +714,9 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		return -ENODEV;
+ 	}
+ 
+-	mdio_internal = of_find_compatible_node(mdio_mux, NULL,
++	mdio_internal = of_get_compatible_child(mdio_mux,
+ 						"allwinner,sun8i-h3-mdio-internal");
++	of_node_put(mdio_mux);
+ 	if (!mdio_internal) {
+ 		dev_err(priv->device, "Cannot get internal_mdio node\n");
+ 		return -ENODEV;
+@@ -729,13 +730,20 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		gmac->rst_ephy = of_reset_control_get_exclusive(iphynode, NULL);
+ 		if (IS_ERR(gmac->rst_ephy)) {
+ 			ret = PTR_ERR(gmac->rst_ephy);
+-			if (ret == -EPROBE_DEFER)
++			if (ret == -EPROBE_DEFER) {
++				of_node_put(iphynode);
++				of_node_put(mdio_internal);
+ 				return ret;
++			}
+ 			continue;
+ 		}
+ 		dev_info(priv->device, "Found internal PHY node\n");
++		of_node_put(iphynode);
++		of_node_put(mdio_internal);
+ 		return 0;
+ 	}
++
++	of_node_put(mdio_internal);
+ 	return -ENODEV;
+ }
+ 
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index 4f390fa557e4..8ec02f1a3be8 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -602,6 +602,9 @@ static int net_failover_slave_unregister(struct net_device *slave_dev,
+ 	primary_dev = rtnl_dereference(nfo_info->primary_dev);
+ 	standby_dev = rtnl_dereference(nfo_info->standby_dev);
+ 
++	if (WARN_ON_ONCE(slave_dev != primary_dev && slave_dev != standby_dev))
++		return -ENODEV;
++
+ 	vlan_vids_del_by_dev(slave_dev, failover_dev);
+ 	dev_uc_unsync(slave_dev, failover_dev);
+ 	dev_mc_unsync(slave_dev, failover_dev);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 5827fccd4f29..44a0770de142 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -907,6 +907,9 @@ void phylink_start(struct phylink *pl)
+ 		    phylink_an_mode_str(pl->link_an_mode),
+ 		    phy_modes(pl->link_config.interface));
+ 
++	/* Always set the carrier off */
++	netif_carrier_off(pl->netdev);
++
+ 	/* Apply the link configuration to the MAC when starting. This allows
+ 	 * a fixed-link to start with the correct parameters, and also
+ 	 * ensures that we set the appropriate advertisement for Serdes links.
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 725dd63f8413..546081993ecf 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2304,6 +2304,8 @@ static void tun_setup(struct net_device *dev)
+ static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
+ 			struct netlink_ext_ack *extack)
+ {
++	if (!data)
++		return 0;
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 2319f79b34f0..e6d23b6895bd 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -1869,6 +1869,12 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	if (ret)
+ 		dev_kfree_skb_any(skb);
+ 
++	if (ret == -EAGAIN) {
++		ath10k_warn(ar, "wmi command %d timeout, restarting hardware\n",
++			    cmd_id);
++		queue_work(ar->workqueue, &ar->restart_work);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+index d8b79cb72b58..e7584b842dce 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+@@ -77,6 +77,8 @@ static u16 d11ac_bw(enum brcmu_chan_bw bw)
+ 		return BRCMU_CHSPEC_D11AC_BW_40;
+ 	case BRCMU_CHAN_BW_80:
+ 		return BRCMU_CHSPEC_D11AC_BW_80;
++	case BRCMU_CHAN_BW_160:
++		return BRCMU_CHSPEC_D11AC_BW_160;
+ 	default:
+ 		WARN_ON(1);
+ 	}
+@@ -190,8 +192,38 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch)
+ 			break;
+ 		}
+ 		break;
+-	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	case BRCMU_CHSPEC_D11AC_BW_160:
++		switch (ch->sb) {
++		case BRCMU_CHAN_SB_LLL:
++			ch->control_ch_num -= CH_70MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LLU:
++			ch->control_ch_num -= CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUL:
++			ch->control_ch_num -= CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUU:
++			ch->control_ch_num -= CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULL:
++			ch->control_ch_num += CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULU:
++			ch->control_ch_num += CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUL:
++			ch->control_ch_num += CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUU:
++			ch->control_ch_num += CH_70MHZ_APART;
++			break;
++		default:
++			WARN_ON_ONCE(1);
++			break;
++		}
++		break;
++	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+index 7b9a77981df1..75b2a0438cfa 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+@@ -29,6 +29,8 @@
+ #define CH_UPPER_SB			0x01
+ #define CH_LOWER_SB			0x02
+ #define CH_EWA_VALID			0x04
++#define CH_70MHZ_APART			14
++#define CH_50MHZ_APART			10
+ #define CH_30MHZ_APART			6
+ #define CH_20MHZ_APART			4
+ #define CH_10MHZ_APART			2
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 866c91c923be..dd674dcf1a0a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -669,8 +669,12 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
+ 	enabled = !!(wifi_pkg->package.elements[1].integer.value);
+ 	n_profiles = wifi_pkg->package.elements[2].integer.value;
+ 
+-	/* in case of BIOS bug */
+-	if (n_profiles <= 0) {
++	/*
++	 * Check the validity of n_profiles.  The EWRD profiles start
++	 * from index 1, so the maximum value allowed here is
++	 * ACPI_SAR_PROFILES_NUM - 1.
++	 */
++	if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index a6e072234398..da45dc972889 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1232,12 +1232,15 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ 	iwl_mvm_del_aux_sta(mvm);
+ 
+ 	/*
+-	 * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()
+-	 * won't be called in this case).
++	 * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
++	 * hw (as restart_complete() won't be called in this case) and mac80211
++	 * won't execute the restart.
+ 	 * But make sure to cleanup interfaces that have gone down before/during
+ 	 * HW restart was requested.
+ 	 */
+-	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
++	    test_and_clear_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++			       &mvm->status))
+ 		ieee80211_iterate_interfaces(mvm->hw, 0,
+ 					     iwl_mvm_cleanup_iterator, mvm);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 642da10b0b7f..fccb3a4f9d57 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1218,7 +1218,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	    !(info->flags & IEEE80211_TX_STAT_AMPDU))
+ 		return;
+ 
+-	rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate);
++	if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band,
++				    &tx_resp_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+ 	/* Disable last tx check if we are debugging with fixed rate but
+@@ -1269,7 +1273,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	 */
+ 	table = &lq_sta->lq;
+ 	lq_hwrate = le32_to_cpu(table->rs_table[0]);
+-	rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);
++	if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	/* Here we actually compare this rate to the latest LQ command */
+ 	if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {
+@@ -1371,8 +1378,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		/* Collect data for each rate used during failed TX attempts */
+ 		for (i = 0; i <= retries; ++i) {
+ 			lq_hwrate = le32_to_cpu(table->rs_table[i]);
+-			rs_rate_from_ucode_rate(lq_hwrate, info->band,
+-						&lq_rate);
++			if (rs_rate_from_ucode_rate(lq_hwrate, info->band,
++						    &lq_rate)) {
++				WARN_ON_ONCE(1);
++				return;
++			}
++
+ 			/*
+ 			 * Only collect stats if retried rate is in the same RS
+ 			 * table as active/search.
+@@ -3241,7 +3252,10 @@ static void rs_build_rates_table_from_fixed(struct iwl_mvm *mvm,
+ 	for (i = 0; i < num_rates; i++)
+ 		lq_cmd->rs_table[i] = ucode_rate_le32;
+ 
+-	rs_rate_from_ucode_rate(ucode_rate, band, &rate);
++	if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	if (is_mimo(&rate))
+ 		lq_cmd->mimo_delim = num_rates - 1;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index cf2591f2ac23..2d35b70de2ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1385,6 +1385,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 	while (!skb_queue_empty(&skbs)) {
+ 		struct sk_buff *skb = __skb_dequeue(&skbs);
+ 		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++		struct ieee80211_hdr *hdr = (void *)skb->data;
+ 		bool flushed = false;
+ 
+ 		skb_freed++;
+@@ -1429,11 +1430,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 			info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
+ 		info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+ 
+-		/* W/A FW bug: seq_ctl is wrong when the status isn't success */
+-		if (status != TX_STATUS_SUCCESS) {
+-			struct ieee80211_hdr *hdr = (void *)skb->data;
++		/* W/A FW bug: seq_ctl is wrong upon failure / BAR frame */
++		if (ieee80211_is_back_req(hdr->frame_control))
++			seq_ctl = 0;
++		else if (status != TX_STATUS_SUCCESS)
+ 			seq_ctl = le16_to_cpu(hdr->seq_ctrl);
+-		}
+ 
+ 		if (unlikely(!seq_ctl)) {
+ 			struct ieee80211_hdr *hdr = (void *)skb->data;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index d15f5ba2dc77..cb5631c85d16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1050,6 +1050,14 @@ void iwl_pcie_rx_free(struct iwl_trans *trans)
+ 	kfree(trans_pcie->rxq);
+ }
+ 
++static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
++					  struct iwl_rb_allocator *rba)
++{
++	spin_lock(&rba->lock);
++	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
++	spin_unlock(&rba->lock);
++}
++
+ /*
+  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
+  *
+@@ -1081,9 +1089,7 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
+ 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
+ 		/* Move the 2 RBDs to the allocator ownership.
+ 		 Allocator has another 6 from pool for the request completion*/
+-		spin_lock(&rba->lock);
+-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-		spin_unlock(&rba->lock);
++		iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 
+ 		atomic_inc(&rba->req_pending);
+ 		queue_work(rba->alloc_wq, &rba->rx_alloc);
+@@ -1261,10 +1267,18 @@ restart:
+ 		IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
+ 
+ 	while (i != r) {
++		struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 		struct iwl_rx_mem_buffer *rxb;
+-
+-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
++		/* number of RBDs still waiting for page allocation */
++		u32 rb_pending_alloc =
++			atomic_read(&trans_pcie->rba.req_pending) *
++			RX_CLAIM_REQ_ALLOC;
++
++		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
++			     !emergency)) {
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 			emergency = true;
++		}
+ 
+ 		if (trans->cfg->mq_rx_supported) {
+ 			/*
+@@ -1307,17 +1321,13 @@ restart:
+ 			iwl_pcie_rx_allocator_get(trans, rxq);
+ 
+ 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
+-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
+-
+ 			/* Add the remaining empty RBDs for allocator use */
+-			spin_lock(&rba->lock);
+-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-			spin_unlock(&rba->lock);
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 		} else if (emergency) {
+ 			count++;
+ 			if (count == 8) {
+ 				count = 0;
+-				if (rxq->used_count < rxq->queue_size / 3)
++				if (rb_pending_alloc < rxq->queue_size / 3)
+ 					emergency = false;
+ 
+ 				rxq->read = i;
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index ffea610f67e2..10ba94c2b35b 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
+ 			  cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
+ 	if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
+ 		lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index 8985446570bd..190c699d6e3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -725,8 +725,7 @@ __mt76x2_mac_set_beacon(struct mt76x2_dev *dev, u8 bcn_idx, struct sk_buff *skb)
+ 	if (skb) {
+ 		ret = mt76_write_beacon(dev, beacon_addr, skb);
+ 		if (!ret)
+-			dev->beacon_data_mask |= BIT(bcn_idx) &
+-						 dev->beacon_mask;
++			dev->beacon_data_mask |= BIT(bcn_idx);
+ 	} else {
+ 		dev->beacon_data_mask &= ~BIT(bcn_idx);
+ 		for (i = 0; i < beacon_len; i += 4)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 6ce6b754df12..45a1b86491b6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -266,15 +266,17 @@ static void rsi_rx_done_handler(struct urb *urb)
+ 	if (urb->status)
+ 		goto out;
+ 
+-	if (urb->actual_length <= 0) {
+-		rsi_dbg(INFO_ZONE, "%s: Zero length packet\n", __func__);
++	if (urb->actual_length <= 0 ||
++	    urb->actual_length > rx_cb->rx_skb->len) {
++		rsi_dbg(INFO_ZONE, "%s: Invalid packet length = %d\n",
++			__func__, urb->actual_length);
+ 		goto out;
+ 	}
+ 	if (skb_queue_len(&dev->rx_q) >= RSI_MAX_RX_PKTS) {
+ 		rsi_dbg(INFO_ZONE, "Max RX packets reached\n");
+ 		goto out;
+ 	}
+-	skb_put(rx_cb->rx_skb, urb->actual_length);
++	skb_trim(rx_cb->rx_skb, urb->actual_length);
+ 	skb_queue_tail(&dev->rx_q, rx_cb->rx_skb);
+ 
+ 	rsi_set_event(&dev->rx_thread.event);
+@@ -308,6 +310,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 	if (!skb)
+ 		return -ENOMEM;
+ 	skb_reserve(skb, MAX_DWORD_ALIGN_BYTES);
++	skb_put(skb, RSI_MAX_RX_USB_PKT_SIZE - MAX_DWORD_ALIGN_BYTES);
+ 	dword_align_bytes = (unsigned long)skb->data & 0x3f;
+ 	if (dword_align_bytes > 0)
+ 		skb_push(skb, dword_align_bytes);
+@@ -319,7 +322,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 			  usb_rcvbulkpipe(dev->usbdev,
+ 			  dev->bulkin_endpoint_addr[ep_num - 1]),
+ 			  urb->transfer_buffer,
+-			  RSI_MAX_RX_USB_PKT_SIZE,
++			  skb->len,
+ 			  rsi_rx_done_handler,
+ 			  rx_cb);
+ 
+diff --git a/drivers/nfc/nfcmrvl/uart.c b/drivers/nfc/nfcmrvl/uart.c
+index 91162f8e0366..9a22056e8d9e 100644
+--- a/drivers/nfc/nfcmrvl/uart.c
++++ b/drivers/nfc/nfcmrvl/uart.c
+@@ -73,10 +73,9 @@ static int nfcmrvl_uart_parse_dt(struct device_node *node,
+ 	struct device_node *matched_node;
+ 	int ret;
+ 
+-	matched_node = of_find_compatible_node(node, NULL, "marvell,nfc-uart");
++	matched_node = of_get_compatible_child(node, "marvell,nfc-uart");
+ 	if (!matched_node) {
+-		matched_node = of_find_compatible_node(node, NULL,
+-						       "mrvl,nfc-uart");
++		matched_node = of_get_compatible_child(node, "mrvl,nfc-uart");
+ 		if (!matched_node)
+ 			return -ENODEV;
+ 	}
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 8aae6dcc839f..9148015ed803 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -488,6 +488,8 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
+ 		put_device(dev);
+ 	}
+ 	put_device(dev);
++	if (dev->parent)
++		put_device(dev->parent);
+ }
+ 
+ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+@@ -507,6 +509,8 @@ void __nd_device_register(struct device *dev)
+ 	if (!dev)
+ 		return;
+ 	dev->bus = &nvdimm_bus_type;
++	if (dev->parent)
++		get_device(dev->parent);
+ 	get_device(dev);
+ 	async_schedule_domain(nd_async_device_register, dev,
+ 			&nd_async_domain);
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 8b1fd7f1a224..2245cfb8c6ab 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -393,9 +393,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		addr = devm_memremap_pages(dev, &pmem->pgmap);
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
+-	} else
++	} else {
+ 		addr = devm_memremap(dev, pmem->phys_addr,
+ 				pmem->size, ARCH_MEMREMAP_PMEM);
++		memcpy(&bb_res, &nsio->res, sizeof(bb_res));
++	}
+ 
+ 	/*
+ 	 * At release time the queue must be frozen before
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index c30d5af02cc2..63cb01ef4ef0 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -545,10 +545,17 @@ static ssize_t region_badblocks_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev);
++	ssize_t rc;
+ 
+-	return badblocks_show(&nd_region->bb, buf, 0);
+-}
++	device_lock(dev);
++	if (dev->driver)
++		rc = badblocks_show(&nd_region->bb, buf, 0);
++	else
++		rc = -ENXIO;
++	device_unlock(dev);
+ 
++	return rc;
++}
+ static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL);
+ 
+ static ssize_t resource_show(struct device *dev,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index bf65501e6ed6..f1f375fb362b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3119,8 +3119,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 	}
+ 
+ 	mutex_lock(&ns->ctrl->subsys->lock);
+-	nvme_mpath_clear_current_path(ns);
+ 	list_del_rcu(&ns->siblings);
++	nvme_mpath_clear_current_path(ns);
+ 	mutex_unlock(&ns->ctrl->subsys->lock);
+ 
+ 	down_write(&ns->ctrl->namespaces_rwsem);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 514d1dfc5630..122b52d0ebfd 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -518,11 +518,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 			goto err_device_del;
+ 	}
+ 
+-	if (config->cells)
+-		nvmem_add_cells(nvmem, config->cells, config->ncells);
++	if (config->cells) {
++		rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
++		if (rval)
++			goto err_teardown_compat;
++	}
+ 
+ 	return nvmem;
+ 
++err_teardown_compat:
++	if (config->compat)
++		device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
+ err_device_del:
+ 	device_del(&nvmem->dev);
+ err_put_device:
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 7af0ddec936b..20988c426650 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -425,6 +425,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
+ 		dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
+ 			count, pstate_count);
+ 		ret = -ENOENT;
++		_dev_pm_opp_remove_table(opp_table, dev, false);
+ 		goto put_opp_table;
+ 	}
+ 
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 345aab56ce8b..78ed6cc8d521 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+ };
+ 
+ /*
+- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
++ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
+  * @dra7xx: the dra7xx device where the workaround should be applied
+  *
+  * Access to the PCIe slave port that are not 32-bit aligned will result
+@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+  *
+  * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
+  */
+-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
++static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
+ {
+ 	int ret;
+ 	struct device_node *np = dev->of_node;
+@@ -704,6 +704,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_RC);
++
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
++		if (ret)
++			dev_err(dev, "WA for Errata i870 not applied\n");
++
+ 		ret = dra7xx_add_pcie_port(dra7xx, pdev);
+ 		if (ret < 0)
+ 			goto err_gpio;
+@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_EP);
+ 
+-		ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
+ 		if (ret)
+ 			goto err_gpio;
+ 
+diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c
+index e3fe4124e3af..a67dc91261f5 100644
+--- a/drivers/pci/controller/pcie-cadence-ep.c
++++ b/drivers/pci/controller/pcie-cadence-ep.c
+@@ -259,7 +259,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 				     u8 intx, bool is_asserted)
+ {
+ 	struct cdns_pcie *pcie = &ep->pcie;
+-	u32 r = ep->max_regions - 1;
+ 	u32 offset;
+ 	u16 status;
+ 	u8 msg_code;
+@@ -269,8 +268,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
+ 							     ep->irq_phys_addr);
+ 		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
+ 		ep->irq_pci_fn = fn;
+@@ -348,8 +347,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region(pcie, fn, 0,
+ 					      false,
+ 					      ep->irq_phys_addr,
+ 					      pci_addr & ~pci_addr_mask,
+@@ -510,6 +509,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
+ 		goto free_epc_mem;
+ 	}
+ 	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
++	/* Reserve region 0 for IRQs */
++	set_bit(0, &ep->ob_region_map);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 861dda69f366..c5ff6ca65eab 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -337,6 +337,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
+ {
+ 	struct mtk_pcie *pcie = bus->sysdata;
+ 	struct mtk_pcie_port *port;
++	struct pci_dev *dev = NULL;
++
++	/*
++	 * Walk the bus hierarchy to get the devfn value
++	 * of the port in the root bus.
++	 */
++	while (bus && bus->number) {
++		dev = bus->self;
++		bus = dev->bus;
++		devfn = dev->devfn;
++	}
+ 
+ 	list_for_each_entry(port, &pcie->ports, list)
+ 		if (port->slot == PCI_SLOT(devfn))
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 942b64fc7f1f..fd2dbd7eed7b 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
+ 	int i, best = 1;
+ 	unsigned long flags;
+ 
+-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
++	if (vmd->msix_count == 1)
+ 		return &vmd->irqs[0];
+ 
++	/*
++	 * White list for fast-interrupt handlers. All others will share the
++	 * "slow" interrupt vector.
++	 */
++	switch (msi_desc_to_pci_dev(desc)->class) {
++	case PCI_CLASS_STORAGE_EXPRESS:
++		break;
++	default:
++		return &vmd->irqs[0];
++	}
++
+ 	raw_spin_lock_irqsave(&list_lock, flags);
+ 	for (i = 1; i < vmd->msix_count; i++)
+ 		if (vmd->irqs[i].count < vmd->irqs[best].count)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 4d88afdfc843..f7b7cb7189eb 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
+ 			}
+ 		}
+ 	}
+-	WARN_ON(!!dev->msix_enabled);
+ 
+ 	/* Check whether driver already requested for MSI irq */
+ 	if (dev->msi_enabled) {
+@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (!pci_msi_supported(dev, minvec))
+ 		return -EINVAL;
+ 
+-	WARN_ON(!!dev->msi_enabled);
+-
+ 	/* Check whether driver already requested MSI-X irqs */
+ 	if (dev->msix_enabled) {
+ 		pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
+@@ -1039,6 +1036,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msi_enabled))
++		return -EINVAL;
++
+ 	nvec = pci_msi_vec_count(dev);
+ 	if (nvec < 0)
+ 		return nvec;
+@@ -1087,6 +1087,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msix_enabled))
++		return -EINVAL;
++
+ 	for (;;) {
+ 		if (affd) {
+ 			nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 5d1698265da5..d2b04ab37308 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -779,19 +779,33 @@ static void pci_acpi_setup(struct device *dev)
+ 		return;
+ 
+ 	device_set_wakeup_capable(dev, true);
++	/*
++	 * For bridges that can do D3 we enable wake automatically (as
++	 * we do for the power management itself in that case). The
++	 * reason is that the bridge may have additional methods such as
++	 * _DSW that need to be called.
++	 */
++	if (pci_dev->bridge_d3)
++		device_wakeup_enable(dev);
++
+ 	acpi_pci_wakeup(pci_dev, false);
+ }
+ 
+ static void pci_acpi_cleanup(struct device *dev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
++	struct pci_dev *pci_dev = to_pci_dev(dev);
+ 
+ 	if (!adev)
+ 		return;
+ 
+ 	pci_acpi_remove_pm_notifier(adev);
+-	if (adev->wakeup.flags.valid)
++	if (adev->wakeup.flags.valid) {
++		if (pci_dev->bridge_d3)
++			device_wakeup_disable(dev);
++
+ 		device_set_wakeup_capable(dev, false);
++	}
+ }
+ 
+ static bool pci_acpi_bus_match(struct device *dev)
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index c687c817b47d..6322c3f446bc 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 	 * All PCIe functions are in one slot, remove one function will remove
+ 	 * the whole slot, so just wait until we are the last function left.
+ 	 */
+-	if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
++	if (!list_empty(&parent->subordinate->devices))
+ 		goto out;
+ 
+ 	link = parent->link_state;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d1e2d175c10b..a4d11d14b196 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3177,7 +3177,11 @@ static void disable_igfx_irq(struct pci_dev *dev)
+ 
+ 	pci_iounmap(dev, regs);
+ }
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
+ 
+diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
+index 5e3d0dced2b8..b08945a7bbfd 100644
+--- a/drivers/pci/remove.c
++++ b/drivers/pci/remove.c
+@@ -26,9 +26,6 @@ static void pci_stop_dev(struct pci_dev *dev)
+ 
+ 		pci_dev_assign_added(dev, false);
+ 	}
+-
+-	if (dev->bus->self)
+-		pcie_aspm_exit_link_state(dev);
+ }
+ 
+ static void pci_destroy_dev(struct pci_dev *dev)
+@@ -42,6 +39,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
+ 	list_del(&dev->bus_list);
+ 	up_write(&pci_bus_sem);
+ 
++	pcie_aspm_exit_link_state(dev);
+ 	pci_bridge_d3_update(dev);
+ 	pci_free_resources(dev);
+ 	put_device(&dev->dev);
+diff --git a/drivers/pcmcia/ricoh.h b/drivers/pcmcia/ricoh.h
+index 01098c841f87..8ac7b138c094 100644
+--- a/drivers/pcmcia/ricoh.h
++++ b/drivers/pcmcia/ricoh.h
+@@ -119,6 +119,10 @@
+ #define  RL5C4XX_MISC_CONTROL           0x2F /* 8 bit */
+ #define  RL5C4XX_ZV_ENABLE              0x08
+ 
++/* Misc Control 3 Register */
++#define RL5C4XX_MISC3			0x00A2 /* 16 bit */
++#define  RL5C47X_MISC3_CB_CLKRUN_DIS	BIT(1)
++
+ #ifdef __YENTA_H
+ 
+ #define rl_misc(socket)		((socket)->private[0])
+@@ -156,6 +160,35 @@ static void ricoh_set_zv(struct yenta_socket *socket)
+         }
+ }
+ 
++static void ricoh_set_clkrun(struct yenta_socket *socket, bool quiet)
++{
++	u16 misc3;
++
++	/*
++	 * RL5C475II likely has this setting, too, however no datasheet
++	 * is publicly available for this chip
++	 */
++	if (socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C476 &&
++	    socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C478)
++		return;
++
++	if (socket->dev->revision < 0x80)
++		return;
++
++	misc3 = config_readw(socket, RL5C4XX_MISC3);
++	if (misc3 & RL5C47X_MISC3_CB_CLKRUN_DIS) {
++		if (!quiet)
++			dev_dbg(&socket->dev->dev,
++				"CLKRUN feature already disabled\n");
++	} else if (disable_clkrun) {
++		if (!quiet)
++			dev_info(&socket->dev->dev,
++				 "Disabling CLKRUN feature\n");
++		misc3 |= RL5C47X_MISC3_CB_CLKRUN_DIS;
++		config_writew(socket, RL5C4XX_MISC3, misc3);
++	}
++}
++
+ static void ricoh_save_state(struct yenta_socket *socket)
+ {
+ 	rl_misc(socket) = config_readw(socket, RL5C4XX_MISC);
+@@ -172,6 +205,7 @@ static void ricoh_restore_state(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_16BIT_IO_0, rl_io(socket));
+ 	config_writew(socket, RL5C4XX_16BIT_MEM_0, rl_mem(socket));
+ 	config_writew(socket, RL5C4XX_CONFIG, rl_config(socket));
++	ricoh_set_clkrun(socket, true);
+ }
+ 
+ 
+@@ -197,6 +231,7 @@ static int ricoh_override(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_CONFIG, config);
+ 
+ 	ricoh_set_zv(socket);
++	ricoh_set_clkrun(socket, false);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index ab3da2262f0f..ac6a3f46b1e6 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -26,7 +26,8 @@
+ 
+ static bool disable_clkrun;
+ module_param(disable_clkrun, bool, 0444);
+-MODULE_PARM_DESC(disable_clkrun, "If PC card doesn't function properly, please try this option");
++MODULE_PARM_DESC(disable_clkrun,
++		 "If PC card doesn't function properly, please try this option (TI and Ricoh bridges only)");
+ 
+ static bool isa_probe = 1;
+ module_param(isa_probe, bool, 0444);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index 6556dbeae65e..ac251c62bc66 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -319,6 +319,8 @@ static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function,
+ 	pad->function = function;
+ 
+ 	ret = pmic_mpp_write_mode_ctl(state, pad);
++	if (ret < 0)
++		return ret;
+ 
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+@@ -343,13 +345,12 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN;
++		if (pad->pullup != PMIC_MPP_PULL_UP_OPEN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		switch (pad->pullup) {
+-		case PMIC_MPP_PULL_UP_OPEN:
+-			arg = 0;
+-			break;
+ 		case PMIC_MPP_PULL_UP_0P6KOHM:
+ 			arg = 600;
+ 			break;
+@@ -364,13 +365,17 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		}
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+@@ -382,7 +387,9 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pad->amux_input;
+ 		break;
+ 	case PMIC_MPP_CONF_PAIRED:
+-		arg = pad->paired;
++		if (!pad->paired)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = pad->drive_strength;
+@@ -455,7 +462,7 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			pad->dtest = arg;
+ 			break;
+ 		case PIN_CONFIG_DRIVE_STRENGTH:
+-			arg = pad->drive_strength;
++			pad->drive_strength = arg;
+ 			break;
+ 		case PMIC_MPP_CONF_AMUX_ROUTE:
+ 			if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4)
+@@ -502,6 +509,10 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_SINK_CTL, pad->drive_strength);
++	if (ret < 0)
++		return ret;
++
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+ 	return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val);
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+index f53e32a9d8fc..0e153bae322e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+@@ -260,22 +260,32 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_NP;
++		if (pin->bias != PM8XXX_GPIO_BIAS_NP)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_PD;
++		if (pin->bias != PM8XXX_GPIO_BIAS_PD)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30;
++		if (pin->bias > PM8XXX_GPIO_BIAS_PU_1P5_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PM8XXX_QCOM_PULL_UP_STRENGTH:
+ 		arg = pin->pull_up_strength;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = pin->disable;
++		if (!pin->disable)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pin->mode == PM8XXX_GPIO_MODE_INPUT;
++		if (pin->mode != PM8XXX_GPIO_MODE_INPUT)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		if (pin->mode & PM8XXX_GPIO_MODE_OUTPUT)
+@@ -290,10 +300,14 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pin->output_strength;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = !pin->open_drain;
++		if (pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pin->open_drain;
++		if (!pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 4d9bf9b3e9f3..26ebedc1f6d3 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -1079,10 +1079,9 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 	 * We suppose that we won't have any more functions than pins,
+ 	 * we'll reallocate that later anyway
+ 	 */
+-	pctl->functions = devm_kcalloc(&pdev->dev,
+-				       pctl->ngroups,
+-				       sizeof(*pctl->functions),
+-				       GFP_KERNEL);
++	pctl->functions = kcalloc(pctl->ngroups,
++				  sizeof(*pctl->functions),
++				  GFP_KERNEL);
+ 	if (!pctl->functions)
+ 		return -ENOMEM;
+ 
+@@ -1133,8 +1132,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 
+ 			func_item = sunxi_pinctrl_find_function_by_name(pctl,
+ 									func->name);
+-			if (!func_item)
++			if (!func_item) {
++				kfree(pctl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!func_item->groups) {
+ 				func_item->groups =
+@@ -1142,8 +1143,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 						     func_item->ngroups,
+ 						     sizeof(*func_item->groups),
+ 						     GFP_KERNEL);
+-				if (!func_item->groups)
++				if (!func_item->groups) {
++					kfree(pctl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			func_grp = func_item->groups;
+diff --git a/drivers/power/supply/twl4030_charger.c b/drivers/power/supply/twl4030_charger.c
+index bbcaee56db9d..b6a7d9f74cf3 100644
+--- a/drivers/power/supply/twl4030_charger.c
++++ b/drivers/power/supply/twl4030_charger.c
+@@ -996,12 +996,13 @@ static int twl4030_bci_probe(struct platform_device *pdev)
+ 	if (bci->dev->of_node) {
+ 		struct device_node *phynode;
+ 
+-		phynode = of_find_compatible_node(bci->dev->of_node->parent,
+-						  NULL, "ti,twl4030-usb");
++		phynode = of_get_compatible_child(bci->dev->of_node->parent,
++						  "ti,twl4030-usb");
+ 		if (phynode) {
+ 			bci->usb_nb.notifier_call = twl4030_bci_usb_ncb;
+ 			bci->transceiver = devm_usb_get_phy_by_node(
+ 				bci->dev, phynode, &bci->usb_nb);
++			of_node_put(phynode);
+ 			if (IS_ERR(bci->transceiver)) {
+ 				ret = PTR_ERR(bci->transceiver);
+ 				if (ret == -EPROBE_DEFER)
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 6437bbeebc91..e026a7817013 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1114,8 +1114,10 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ 	channel->edge = edge;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
+-	if (!channel->name)
+-		return ERR_PTR(-ENOMEM);
++	if (!channel->name) {
++		ret = -ENOMEM;
++		goto free_channel;
++	}
+ 
+ 	spin_lock_init(&channel->tx_lock);
+ 	spin_lock_init(&channel->recv_lock);
+@@ -1165,6 +1167,7 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ free_name_and_channel:
+ 	kfree(channel->name);
++free_channel:
+ 	kfree(channel);
+ 
+ 	return ERR_PTR(ret);
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index cd3a2411bc2f..df0c5776d49b 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -50,6 +50,7 @@
+ /* this is for "generic access to PC-style RTC" using CMOS_READ/CMOS_WRITE */
+ #include <linux/mc146818rtc.h>
+ 
++#ifdef CONFIG_ACPI
+ /*
+  * Use ACPI SCI to replace HPET interrupt for RTC Alarm event
+  *
+@@ -61,6 +62,18 @@
+ static bool use_acpi_alarm;
+ module_param(use_acpi_alarm, bool, 0444);
+ 
++static inline int cmos_use_acpi_alarm(void)
++{
++	return use_acpi_alarm;
++}
++#else /* !CONFIG_ACPI */
++
++static inline int cmos_use_acpi_alarm(void)
++{
++	return 0;
++}
++#endif
++
+ struct cmos_rtc {
+ 	struct rtc_device	*rtc;
+ 	struct device		*dev;
+@@ -167,9 +180,9 @@ static inline int hpet_unregister_irq_handler(irq_handler_t handler)
+ #endif
+ 
+ /* Don't use HPET for RTC Alarm event if ACPI Fixed event is used */
+-static int use_hpet_alarm(void)
++static inline int use_hpet_alarm(void)
+ {
+-	return is_hpet_enabled() && !use_acpi_alarm;
++	return is_hpet_enabled() && !cmos_use_acpi_alarm();
+ }
+ 
+ /*----------------------------------------------------------------*/
+@@ -340,7 +353,7 @@ static void cmos_irq_enable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_set_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(cmos->dev);
+ 	}
+@@ -358,7 +371,7 @@ static void cmos_irq_disable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_mask_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(cmos->dev);
+ 	}
+@@ -980,7 +993,7 @@ static int cmos_suspend(struct device *dev)
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	if ((tmp & RTC_AIE) && !use_acpi_alarm) {
++	if ((tmp & RTC_AIE) && !cmos_use_acpi_alarm()) {
+ 		cmos->enabled_wake = 1;
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(dev);
+@@ -1031,7 +1044,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACPI RTC wake event is cleared after resume from STR,
+ 	 * ACK the rtc irq here
+ 	 */
+-	if (t_now >= cmos->alarm_expires && use_acpi_alarm) {
++	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 		return;
+ 	}
+@@ -1053,7 +1066,7 @@ static int __maybe_unused cmos_resume(struct device *dev)
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+ 	unsigned char tmp;
+ 
+-	if (cmos->enabled_wake && !use_acpi_alarm) {
++	if (cmos->enabled_wake && !cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(dev);
+ 		else
+@@ -1132,7 +1145,7 @@ static u32 rtc_handler(void *context)
+ 	 * Or else, ACPI SCI is enabled during suspend/resume only,
+ 	 * update rtc irq in that case.
+ 	 */
+-	if (use_acpi_alarm)
++	if (cmos_use_acpi_alarm())
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 	else {
+ 		/* Fix me: can we use cmos_interrupt() here as well? */
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index e9ec4160d7f6..83fa875b89cd 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1372,7 +1372,6 @@ static void ds1307_clks_register(struct ds1307 *ds1307)
+ static const struct regmap_config regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+-	.max_register = 0x9,
+ };
+ 
+ static int ds1307_probe(struct i2c_client *client,
+diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
+index c3fc34b9964d..9e5d3f7d29ae 100644
+--- a/drivers/scsi/esp_scsi.c
++++ b/drivers/scsi/esp_scsi.c
+@@ -1338,6 +1338,7 @@ static int esp_data_bytes_sent(struct esp *esp, struct esp_cmd_entry *ent,
+ 
+ 	bytes_sent = esp->data_dma_len;
+ 	bytes_sent -= ecount;
++	bytes_sent -= esp->send_cmd_residual;
+ 
+ 	/*
+ 	 * The am53c974 has a DMA 'pecularity'. The doc states:
+diff --git a/drivers/scsi/esp_scsi.h b/drivers/scsi/esp_scsi.h
+index 8163dca2071b..a77772777a30 100644
+--- a/drivers/scsi/esp_scsi.h
++++ b/drivers/scsi/esp_scsi.h
+@@ -540,6 +540,8 @@ struct esp {
+ 
+ 	void			*dma;
+ 	int			dmarev;
++
++	u32			send_cmd_residual;
+ };
+ 
+ /* A front-end driver for the ESP chip should do the following in
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index a94fb9f8bb44..3b3af1459008 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4140,9 +4140,17 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 	}
+ 	lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
+-	lpfc_cmd->pCmd = NULL;
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++	/* If pCmd was set to NULL from abort path, do not call scsi_done */
++	if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) {
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
++				 "0711 FCP cmd already NULL, sid: 0x%06x, "
++				 "did: 0x%06x, oxid: 0x%04x\n",
++				 vport->fc_myDID,
++				 (pnode) ? pnode->nlp_DID : 0,
++				 phba->sli_rev == LPFC_SLI_REV4 ?
++				 lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff);
++		return;
++	}
+ 
+ 	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
+ 	cmd->scsi_done(cmd);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 6f3c00a233ec..4f8d459d2378 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -3790,6 +3790,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 	struct hbq_dmabuf *dmabuf;
+ 	struct lpfc_cq_event *cq_event;
+ 	unsigned long iflag;
++	int count = 0;
+ 
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
+@@ -3811,16 +3812,22 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 			if (irspiocbq)
+ 				lpfc_sli_sp_handle_rspiocb(phba, pring,
+ 							   irspiocbq);
++			count++;
+ 			break;
+ 		case CQE_CODE_RECEIVE:
+ 		case CQE_CODE_RECEIVE_V1:
+ 			dmabuf = container_of(cq_event, struct hbq_dmabuf,
+ 					      cq_event);
+ 			lpfc_sli4_handle_received_buffer(phba, dmabuf);
++			count++;
+ 			break;
+ 		default:
+ 			break;
+ 		}
++
++		/* Limit the number of events to 64 to avoid soft lockups */
++		if (count == 64)
++			break;
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c
+index eb551f3cc471..71879f2207e0 100644
+--- a/drivers/scsi/mac_esp.c
++++ b/drivers/scsi/mac_esp.c
+@@ -427,6 +427,8 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
+ 			scsi_esp_cmd(esp, ESP_CMD_TI);
+ 		}
+ 	}
++
++	esp->send_cmd_residual = esp_count;
+ }
+ 
+ static int mac_esp_irq_pending(struct esp *esp)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 8e84e3fb648a..2d6f6414a2a2 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -7499,6 +7499,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
+ 		get_user(user_sense_off, &cioc->sense_off))
+ 		return -EFAULT;
+ 
++	if (local_sense_off != user_sense_off)
++		return -EINVAL;
++
+ 	if (local_sense_len) {
+ 		void __user **sense_ioc_ptr =
+ 			(void __user **)((u8 *)((unsigned long)&ioc->frame.raw) + local_sense_off);
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 397081d320b1..83f71c266c66 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1677,8 +1677,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
+ 
+ 	hba->clk_gating.state = REQ_CLKS_OFF;
+ 	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+-	schedule_delayed_work(&hba->clk_gating.gate_work,
+-			msecs_to_jiffies(hba->clk_gating.delay_ms));
++	queue_delayed_work(hba->clk_gating.clk_gating_workq,
++			   &hba->clk_gating.gate_work,
++			   msecs_to_jiffies(hba->clk_gating.delay_ms));
+ }
+ 
+ void ufshcd_release(struct ufs_hba *hba)
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index 8a3678c2e83c..97bb5989aa21 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -212,6 +212,11 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+ 		goto remove_cdev;
+ 	} else if (!ret) {
++		if (!qcom_scm_is_available()) {
++			ret = -EPROBE_DEFER;
++			goto remove_cdev;
++		}
++
+ 		perms[0].vmid = QCOM_SCM_VMID_HLOS;
+ 		perms[0].perm = QCOM_SCM_PERM_RW;
+ 		perms[1].vmid = vmid;
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 2d6f3fcf3211..ed71a4c9c8b2 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1288,7 +1288,7 @@ static void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
+ 	if (!pmc->soc->has_tsense_reset)
+ 		return;
+ 
+-	np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
++	np = of_get_child_by_name(pmc->dev->of_node, "i2c-thermtrip");
+ 	if (!np) {
+ 		dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
+ 		return;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 8612525fa4e3..584bcb018a62 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -89,7 +89,7 @@
+ #define BSPI_BPP_MODE_SELECT_MASK		BIT(8)
+ #define BSPI_BPP_ADDR_SELECT_MASK		BIT(16)
+ 
+-#define BSPI_READ_LENGTH			512
++#define BSPI_READ_LENGTH			256
+ 
+ /* MSPI register offsets */
+ #define MSPI_SPCR0_LSB				0x000
+@@ -355,7 +355,7 @@ static int bcm_qspi_bspi_set_flex_mode(struct bcm_qspi *qspi,
+ 	int bpc = 0, bpp = 0;
+ 	u8 command = op->cmd.opcode;
+ 	int width  = op->cmd.buswidth ? op->cmd.buswidth : SPI_NBITS_SINGLE;
+-	int addrlen = op->addr.nbytes * 8;
++	int addrlen = op->addr.nbytes;
+ 	int flex_mode = 1;
+ 
+ 	dev_dbg(&qspi->pdev->dev, "set flex mode w %x addrlen %x hp %d\n",
+diff --git a/drivers/spi/spi-ep93xx.c b/drivers/spi/spi-ep93xx.c
+index f1526757aaf6..79fc3940245a 100644
+--- a/drivers/spi/spi-ep93xx.c
++++ b/drivers/spi/spi-ep93xx.c
+@@ -246,6 +246,19 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+ 	return -EINPROGRESS;
+ }
+ 
++static enum dma_transfer_direction
++ep93xx_dma_data_to_trans_dir(enum dma_data_direction dir)
++{
++	switch (dir) {
++	case DMA_TO_DEVICE:
++		return DMA_MEM_TO_DEV;
++	case DMA_FROM_DEVICE:
++		return DMA_DEV_TO_MEM;
++	default:
++		return DMA_TRANS_NONE;
++	}
++}
++
+ /**
+  * ep93xx_spi_dma_prepare() - prepares a DMA transfer
+  * @master: SPI master
+@@ -257,7 +270,7 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+  */
+ static struct dma_async_tx_descriptor *
+ ep93xx_spi_dma_prepare(struct spi_master *master,
+-		       enum dma_transfer_direction dir)
++		       enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct spi_transfer *xfer = master->cur_msg->state;
+@@ -277,9 +290,9 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 		buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 
+ 	memset(&conf, 0, sizeof(conf));
+-	conf.direction = dir;
++	conf.direction = ep93xx_dma_data_to_trans_dir(dir);
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		buf = xfer->rx_buf;
+ 		sgt = &espi->rx_sgt;
+@@ -343,7 +356,8 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 	if (!nents)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK);
++	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, conf.direction,
++				      DMA_CTRL_ACK);
+ 	if (!txd) {
+ 		dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir);
+ 		return ERR_PTR(-ENOMEM);
+@@ -360,13 +374,13 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+  * unmapped.
+  */
+ static void ep93xx_spi_dma_finish(struct spi_master *master,
+-				  enum dma_transfer_direction dir)
++				  enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_chan *chan;
+ 	struct sg_table *sgt;
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		sgt = &espi->rx_sgt;
+ 	} else {
+@@ -381,8 +395,8 @@ static void ep93xx_spi_dma_callback(void *callback_param)
+ {
+ 	struct spi_master *master = callback_param;
+ 
+-	ep93xx_spi_dma_finish(master, DMA_MEM_TO_DEV);
+-	ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++	ep93xx_spi_dma_finish(master, DMA_TO_DEVICE);
++	ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 
+ 	spi_finalize_current_transfer(master);
+ }
+@@ -392,15 +406,15 @@ static int ep93xx_spi_dma_transfer(struct spi_master *master)
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_async_tx_descriptor *rxd, *txd;
+ 
+-	rxd = ep93xx_spi_dma_prepare(master, DMA_DEV_TO_MEM);
++	rxd = ep93xx_spi_dma_prepare(master, DMA_FROM_DEVICE);
+ 	if (IS_ERR(rxd)) {
+ 		dev_err(&master->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd));
+ 		return PTR_ERR(rxd);
+ 	}
+ 
+-	txd = ep93xx_spi_dma_prepare(master, DMA_MEM_TO_DEV);
++	txd = ep93xx_spi_dma_prepare(master, DMA_TO_DEVICE);
+ 	if (IS_ERR(txd)) {
+-		ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++		ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 		dev_err(&master->dev, "DMA TX failed: %ld\n", PTR_ERR(txd));
+ 		return PTR_ERR(txd);
+ 	}
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 3b518ead504e..b82b47152b18 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -282,9 +282,11 @@ static int spi_gpio_request(struct device *dev,
+ 	spi_gpio->miso = devm_gpiod_get_optional(dev, "miso", GPIOD_IN);
+ 	if (IS_ERR(spi_gpio->miso))
+ 		return PTR_ERR(spi_gpio->miso);
+-	if (!spi_gpio->miso)
+-		/* HW configuration without MISO pin */
+-		*mflags |= SPI_MASTER_NO_RX;
++	/*
++	 * No setting SPI_MASTER_NO_RX here - if there is only a MOSI
++	 * pin connected the host can still do RX by changing the
++	 * direction of the line.
++	 */
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+ 	if (IS_ERR(spi_gpio->sck))
+@@ -408,7 +410,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 	spi_gpio->bitbang.master = master;
+ 	spi_gpio->bitbang.chipselect = spi_gpio_chipselect;
+ 
+-	if ((master_flags & (SPI_MASTER_NO_TX | SPI_MASTER_NO_RX)) == 0) {
++	if ((master_flags & SPI_MASTER_NO_TX) == 0) {
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_txrx_word_mode0;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_txrx_word_mode1;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_txrx_word_mode2;
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index 990770dfa5cf..ec0c24e873cd 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -328,10 +328,25 @@ EXPORT_SYMBOL_GPL(spi_mem_exec_op);
+ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
+ {
+ 	struct spi_controller *ctlr = mem->spi->controller;
++	size_t len;
++
++	len = sizeof(op->cmd.opcode) + op->addr.nbytes + op->dummy.nbytes;
+ 
+ 	if (ctlr->mem_ops && ctlr->mem_ops->adjust_op_size)
+ 		return ctlr->mem_ops->adjust_op_size(mem, op);
+ 
++	if (!ctlr->mem_ops || !ctlr->mem_ops->exec_op) {
++		if (len > spi_max_transfer_size(mem->spi))
++			return -EINVAL;
++
++		op->data.nbytes = min3((size_t)op->data.nbytes,
++				       spi_max_transfer_size(mem->spi),
++				       spi_max_message_size(mem->spi) -
++				       len);
++		if (!op->data.nbytes)
++			return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
+diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c
+index 3be9519654e5..cf3fad2cb871 100644
+--- a/drivers/tc/tc.c
++++ b/drivers/tc/tc.c
+@@ -2,7 +2,7 @@
+  *	TURBOchannel bus services.
+  *
+  *	Copyright (c) Harald Koerfgen, 1998
+- *	Copyright (c) 2001, 2003, 2005, 2006  Maciej W. Rozycki
++ *	Copyright (c) 2001, 2003, 2005, 2006, 2018  Maciej W. Rozycki
+  *	Copyright (c) 2005  James Simmons
+  *
+  *	This file is subject to the terms and conditions of the GNU
+@@ -10,6 +10,7 @@
+  *	directory of this archive for more details.
+  */
+ #include <linux/compiler.h>
++#include <linux/dma-mapping.h>
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+@@ -92,6 +93,11 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus)
+ 		tdev->dev.bus = &tc_bus_type;
+ 		tdev->slot = slot;
+ 
++		/* TURBOchannel has 34-bit DMA addressing (16GiB space). */
++		tdev->dma_mask = DMA_BIT_MASK(34);
++		tdev->dev.dma_mask = &tdev->dma_mask;
++		tdev->dev.coherent_dma_mask = DMA_BIT_MASK(34);
++
+ 		for (i = 0; i < 8; i++) {
+ 			tdev->firmware[i] =
+ 				readb(module + offset + TC_FIRM_VER + 4 * i);
+diff --git a/drivers/thermal/da9062-thermal.c b/drivers/thermal/da9062-thermal.c
+index dd8dd947b7f0..01b0cb994457 100644
+--- a/drivers/thermal/da9062-thermal.c
++++ b/drivers/thermal/da9062-thermal.c
+@@ -106,7 +106,7 @@ static void da9062_thermal_poll_on(struct work_struct *work)
+ 					   THERMAL_EVENT_UNSPECIFIED);
+ 
+ 		delay = msecs_to_jiffies(thermal->zone->passive_delay);
+-		schedule_delayed_work(&thermal->work, delay);
++		queue_delayed_work(system_freezable_wq, &thermal->work, delay);
+ 		return;
+ 	}
+ 
+@@ -125,7 +125,7 @@ static irqreturn_t da9062_thermal_irq_handler(int irq, void *data)
+ 	struct da9062_thermal *thermal = data;
+ 
+ 	disable_irq_nosync(thermal->irq);
+-	schedule_delayed_work(&thermal->work, 0);
++	queue_delayed_work(system_freezable_wq, &thermal->work, 0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index e77e63070e99..5844e26bd372 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -465,6 +465,7 @@ static int rcar_thermal_remove(struct platform_device *pdev)
+ 
+ 	rcar_thermal_for_each_priv(priv, common) {
+ 		rcar_thermal_irq_disable(priv);
++		cancel_delayed_work_sync(&priv->work);
+ 		if (priv->chip->use_of_thermal)
+ 			thermal_remove_hwmon_sysfs(priv->zone);
+ 		else
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index b4ba2b1dab76..f4d0ef695225 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -130,6 +130,11 @@ static void kgdboc_unregister_kbd(void)
+ 
+ static int kgdboc_option_setup(char *opt)
+ {
++	if (!opt) {
++		pr_err("kgdboc: config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdboc: config string too long\n");
+ 		return -ENOSPC;
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 6c58ad1abd7e..d5b2efae82fc 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -275,6 +275,8 @@ static struct class uio_class = {
+ 	.dev_groups = uio_groups,
+ };
+ 
++bool uio_class_registered;
++
+ /*
+  * device functions
+  */
+@@ -877,6 +879,9 @@ static int init_uio_class(void)
+ 		printk(KERN_ERR "class_register failed for uio\n");
+ 		goto err_class_register;
+ 	}
++
++	uio_class_registered = true;
++
+ 	return 0;
+ 
+ err_class_register:
+@@ -887,6 +892,7 @@ exit:
+ 
+ static void release_uio_class(void)
+ {
++	uio_class_registered = false;
+ 	class_unregister(&uio_class);
+ 	uio_major_cleanup();
+ }
+@@ -913,6 +919,9 @@ int __uio_register_device(struct module *owner,
+ 	struct uio_device *idev;
+ 	int ret = 0;
+ 
++	if (!uio_class_registered)
++		return -EPROBE_DEFER;
++
+ 	if (!parent || !info || !info->name || !info->version)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/usb/chipidea/otg.h b/drivers/usb/chipidea/otg.h
+index 7e7428e48bfa..4f8b8179ec96 100644
+--- a/drivers/usb/chipidea/otg.h
++++ b/drivers/usb/chipidea/otg.h
+@@ -17,7 +17,8 @@ void ci_handle_vbus_change(struct ci_hdrc *ci);
+ static inline void ci_otg_queue_work(struct ci_hdrc *ci)
+ {
+ 	disable_irq_nosync(ci->irq);
+-	queue_work(ci->wq, &ci->work);
++	if (queue_work(ci->wq, &ci->work) == false)
++		enable_irq(ci->irq);
+ }
+ 
+ #endif /* __DRIVERS_USB_CHIPIDEA_OTG_H */
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 6e2cdd7b93d4..05a68f035d19 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4394,6 +4394,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 	struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
+ 	struct usb_bus *bus = hcd_to_bus(hcd);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	dev_dbg(hsotg->dev, "DWC OTG HCD START\n");
+ 
+@@ -4409,6 +4410,13 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	dwc2_hcd_reinit(hsotg);
+ 
++	/* enable external vbus supply before resuming root hub */
++	spin_unlock_irqrestore(&hsotg->lock, flags);
++	ret = dwc2_vbus_supply_init(hsotg);
++	if (ret)
++		return ret;
++	spin_lock_irqsave(&hsotg->lock, flags);
++
+ 	/* Initialize and connect root hub if one is not already attached */
+ 	if (bus->root_hub) {
+ 		dev_dbg(hsotg->dev, "DWC OTG HCD Has Root Hub\n");
+@@ -4418,7 +4426,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	spin_unlock_irqrestore(&hsotg->lock, flags);
+ 
+-	return dwc2_vbus_supply_init(hsotg);
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index 17147b8c771e..8f267be1745d 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -2017,6 +2017,8 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
+ 
+ 	udc->errata = match->data;
+ 	udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9g45-pmc");
++	if (IS_ERR(udc->pmc))
++		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9rl-pmc");
+ 	if (IS_ERR(udc->pmc))
+ 		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9x5-pmc");
+ 	if (udc->errata && IS_ERR(udc->pmc))
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 5b5f1c8b47c9..104b80c28636 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2377,6 +2377,9 @@ static ssize_t renesas_usb3_b_device_write(struct file *file,
+ 	else
+ 		usb3->forced_b_device = false;
+ 
++	if (usb3->workaround_for_vbus)
++		usb3_disconnect(usb3);
++
+ 	/* Let this driver call usb3_connect() anyway */
+ 	usb3_check_id(usb3);
+ 
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index e98673954020..ec6739ef3129 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -551,6 +551,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ 		pdata->overcurrent_pin[i] =
+ 			devm_gpiod_get_index_optional(&pdev->dev, "atmel,oc",
+ 						      i, GPIOD_IN);
++		if (!pdata->overcurrent_pin[i])
++			continue;
+ 		if (IS_ERR(pdata->overcurrent_pin[i])) {
+ 			err = PTR_ERR(pdata->overcurrent_pin[i]);
+ 			dev_err(&pdev->dev, "unable to claim gpio \"overcurrent\": %d\n", err);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index a4b95d019f84..1f7eeee2ebca 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -900,6 +900,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				set_bit(wIndex, &bus_state->resuming_ports);
+ 				bus_state->resume_done[wIndex] = timeout;
+ 				mod_timer(&hcd->rh_timer, timeout);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			}
+ 		/* Has resume been signalled for USB_RESUME_TIME yet? */
+ 		} else if (time_after_eq(jiffies,
+@@ -940,6 +941,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				clear_bit(wIndex, &bus_state->rexit_ports);
+ 			}
+ 
++			usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 			bus_state->suspended_ports &= ~(1 << wIndex);
+ 		} else {
+@@ -962,6 +964,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 	    (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
+ 		bus_state->resume_done[wIndex] = 0;
+ 		clear_bit(wIndex, &bus_state->resuming_ports);
++		usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 	}
+ 
+ 
+@@ -1337,6 +1340,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					goto error;
+ 
+ 				set_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 						    XDEV_RESUME);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -1345,6 +1349,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 							XDEV_U0);
+ 				clear_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			}
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f0a99aa0ac58..cd4659703647 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1602,6 +1602,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 			set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[hcd_portnum]);
++			usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
+ 			bogus_port_status = true;
+ 		}
+ 	}
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index d1d20252bad8..a7e231ccb0a1 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -1383,8 +1383,8 @@ static enum pdo_err tcpm_caps_err(struct tcpm_port *port, const u32 *pdo,
+ 				if (pdo_apdo_type(pdo[i]) != APDO_TYPE_PPS)
+ 					break;
+ 
+-				if (pdo_pps_apdo_max_current(pdo[i]) <
+-				    pdo_pps_apdo_max_current(pdo[i - 1]))
++				if (pdo_pps_apdo_max_voltage(pdo[i]) <
++				    pdo_pps_apdo_max_voltage(pdo[i - 1]))
+ 					return PDO_ERR_PPS_APDO_NOT_SORTED;
+ 				else if (pdo_pps_apdo_min_voltage(pdo[i]) ==
+ 					  pdo_pps_apdo_min_voltage(pdo[i - 1]) &&
+@@ -4018,6 +4018,9 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down operating current to align with PPS valid steps */
++	op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.op_curr = op_curr;
+ 	port->pps_status = 0;
+@@ -4071,6 +4074,9 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down output voltage to align with PPS valid steps */
++	out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.out_volt = out_volt;
+ 	port->pps_status = 0;
+diff --git a/drivers/usb/usbip/vudc_main.c b/drivers/usb/usbip/vudc_main.c
+index 3fc22037a82f..390733e6937e 100644
+--- a/drivers/usb/usbip/vudc_main.c
++++ b/drivers/usb/usbip/vudc_main.c
+@@ -73,6 +73,10 @@ static int __init init(void)
+ cleanup:
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
+ 		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+@@ -89,7 +93,11 @@ static void __exit cleanup(void)
+ 
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
+-		platform_device_unregister(udc_dev->pdev);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
++		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+ 	platform_driver_unregister(&vudc_driver);
+diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
+index 38716eb50408..8a3e8f61b991 100644
+--- a/drivers/video/hdmi.c
++++ b/drivers/video/hdmi.c
+@@ -592,10 +592,10 @@ hdmi_extended_colorimetry_get_name(enum hdmi_extended_colorimetry ext_col)
+ 		return "xvYCC 709";
+ 	case HDMI_EXTENDED_COLORIMETRY_S_YCC_601:
+ 		return "sYCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-		return "Adobe YCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-		return "Adobe RGB";
++	case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++		return "opYCC 601";
++	case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++		return "opRGB";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM:
+ 		return "BT.2020 Constant Luminance";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020:
+diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
+index 83fc9aab34e8..3099052e1243 100644
+--- a/drivers/w1/masters/omap_hdq.c
++++ b/drivers/w1/masters/omap_hdq.c
+@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platform_device *pdev)
+ 	/* remove module dependency */
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	w1_remove_master_device(&omap_w1_master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
+index df1ed37c3269..de01a6d0059d 100644
+--- a/drivers/xen/privcmd-buf.c
++++ b/drivers/xen/privcmd-buf.c
+@@ -21,15 +21,9 @@
+ 
+ MODULE_LICENSE("GPL");
+ 
+-static unsigned int limit = 64;
+-module_param(limit, uint, 0644);
+-MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by "
+-			"the privcmd-buf device per open file");
+-
+ struct privcmd_buf_private {
+ 	struct mutex lock;
+ 	struct list_head list;
+-	unsigned int allocated;
+ };
+ 
+ struct privcmd_buf_vma_private {
+@@ -60,13 +54,10 @@ static void privcmd_buf_vmapriv_free(struct privcmd_buf_vma_private *vma_priv)
+ {
+ 	unsigned int i;
+ 
+-	vma_priv->file_priv->allocated -= vma_priv->n_pages;
+-
+ 	list_del(&vma_priv->list);
+ 
+ 	for (i = 0; i < vma_priv->n_pages; i++)
+-		if (vma_priv->pages[i])
+-			__free_page(vma_priv->pages[i]);
++		__free_page(vma_priv->pages[i]);
+ 
+ 	kfree(vma_priv);
+ }
+@@ -146,8 +137,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+-	if (!(vma->vm_flags & VM_SHARED) || count > limit ||
+-	    file_priv->allocated + count > limit)
++	if (!(vma->vm_flags & VM_SHARED))
+ 		return -EINVAL;
+ 
+ 	vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *),
+@@ -155,19 +145,15 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	if (!vma_priv)
+ 		return -ENOMEM;
+ 
+-	vma_priv->n_pages = count;
+-	count = 0;
+-	for (i = 0; i < vma_priv->n_pages; i++) {
++	for (i = 0; i < count; i++) {
+ 		vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ 		if (!vma_priv->pages[i])
+ 			break;
+-		count++;
++		vma_priv->n_pages++;
+ 	}
+ 
+ 	mutex_lock(&file_priv->lock);
+ 
+-	file_priv->allocated += count;
+-
+ 	vma_priv->file_priv = file_priv;
+ 	vma_priv->users = 1;
+ 
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index a6f9ba85dc4b..aa081f806728 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ 	*/
+ 	flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	/* On ARM this function returns an ioremap'ped virtual address for
+ 	 * which virt_to_phys doesn't return the corresponding physical
+ 	 * address. In fact on ARM virt_to_phys only works for kernel direct
+@@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ 	 * physical address */
+ 	phys = xen_bus_to_phys(dev_addr);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	if (((dev_addr + size - 1 <= dma_mask)) ||
+ 	    range_straddles_page_boundary(phys, size))
+ 		xen_destroy_contiguous_region(phys, order);
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index 294f35ce9e46..cf8ef8cee5a0 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -75,12 +75,15 @@ static void watch_target(struct xenbus_watch *watch,
+ 
+ 	if (!watch_fired) {
+ 		watch_fired = true;
+-		err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
+-				   &static_max);
+-		if (err != 1)
+-			static_max = new_target;
+-		else
++
++		if ((xenbus_scanf(XBT_NIL, "memory", "static-max",
++				  "%llu", &static_max) == 1) ||
++		    (xenbus_scanf(XBT_NIL, "memory", "memory_static_max",
++				  "%llu", &static_max) == 1))
+ 			static_max >>= PAGE_SHIFT - 10;
++		else
++			static_max = new_target;
++
+ 		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 4bc326df472e..4a7ae216977d 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1054,9 +1054,26 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 	if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)
+ 		parent_start = parent->start;
+ 
++	/*
++	 * If we are COWing a node/leaf from the extent, chunk or device trees,
++	 * make sure that we do not finish block group creation of pending block
++	 * groups. We do this to avoid a deadlock.
++	 * COWing can result in allocation of a new chunk, and flushing pending
++	 * block groups (btrfs_create_pending_block_groups()) can be triggered
++	 * when finishing allocation of a new chunk. Creation of a pending block
++	 * group modifies the extent, chunk and device trees, therefore we could
++	 * deadlock with ourselves since we are holding a lock on an extent
++	 * buffer that btrfs_create_pending_block_groups() may try to COW later.
++	 */
++	if (root == fs_info->extent_root ||
++	    root == fs_info->chunk_root ||
++	    root == fs_info->dev_root)
++		trans->can_flush_pending_bgs = false;
++
+ 	cow = btrfs_alloc_tree_block(trans, root, parent_start,
+ 			root->root_key.objectid, &disk_key, level,
+ 			search_start, empty_size);
++	trans->can_flush_pending_bgs = true;
+ 	if (IS_ERR(cow))
+ 		return PTR_ERR(cow);
+ 
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index d20b244623f2..e129a595f811 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -445,6 +445,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 		break;
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED:
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED:
++		ASSERT(0);
+ 		ret = BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED;
+ 		goto leave;
+ 	}
+@@ -487,6 +488,10 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		btrfs_dev_replace_write_lock(dev_replace);
++		dev_replace->replace_state =
++			BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED;
++		dev_replace->srcdev = NULL;
++		dev_replace->tgtdev = NULL;
+ 		goto leave;
+ 	}
+ 
+@@ -508,8 +513,6 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ 
+ leave:
+-	dev_replace->srcdev = NULL;
+-	dev_replace->tgtdev = NULL;
+ 	btrfs_dev_replace_write_unlock(dev_replace);
+ 	btrfs_destroy_dev_replace_tgtdev(fs_info, tgt_device);
+ 	return ret;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 4ab0bccfa281..e67de6a9805b 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2490,6 +2490,9 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
+ 					   insert_reserved);
+ 	else
+ 		BUG();
++	if (ret && insert_reserved)
++		btrfs_pin_extent(trans->fs_info, node->bytenr,
++				 node->num_bytes, 1);
+ 	return ret;
+ }
+ 
+@@ -3034,7 +3037,6 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
+ 	struct btrfs_delayed_ref_head *head;
+ 	int ret;
+ 	int run_all = count == (unsigned long)-1;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+ 	/* We'll clean this up in btrfs_cleanup_transaction */
+ 	if (trans->aborted)
+@@ -3051,7 +3053,6 @@ again:
+ #ifdef SCRAMBLE_DELAYED_REFS
+ 	delayed_refs->run_delayed_start = find_middle(&delayed_refs->root);
+ #endif
+-	trans->can_flush_pending_bgs = false;
+ 	ret = __btrfs_run_delayed_refs(trans, count);
+ 	if (ret < 0) {
+ 		btrfs_abort_transaction(trans, ret);
+@@ -3082,7 +3083,6 @@ again:
+ 		goto again;
+ 	}
+ out:
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
+ 	return 0;
+ }
+ 
+@@ -4664,6 +4664,7 @@ again:
+ 			goto out;
+ 	} else {
+ 		ret = 1;
++		space_info->max_extent_size = 0;
+ 	}
+ 
+ 	space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+@@ -4685,11 +4686,9 @@ out:
+ 	 * the block groups that were made dirty during the lifetime of the
+ 	 * transaction.
+ 	 */
+-	if (trans->can_flush_pending_bgs &&
+-	    trans->chunk_bytes_reserved >= (u64)SZ_2M) {
++	if (trans->chunk_bytes_reserved >= (u64)SZ_2M)
+ 		btrfs_create_pending_block_groups(trans);
+-		btrfs_trans_release_chunk_metadata(trans);
+-	}
++
+ 	return ret;
+ }
+ 
+@@ -6581,6 +6580,7 @@ static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,
+ 		space_info->bytes_readonly += num_bytes;
+ 	cache->reserved -= num_bytes;
+ 	space_info->bytes_reserved -= num_bytes;
++	space_info->max_extent_size = 0;
+ 
+ 	if (delalloc)
+ 		cache->delalloc_bytes -= num_bytes;
+@@ -7412,6 +7412,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_block_group_cache *block_group = NULL;
+ 	u64 search_start = 0;
+ 	u64 max_extent_size = 0;
++	u64 max_free_space = 0;
+ 	u64 empty_cluster = 0;
+ 	struct btrfs_space_info *space_info;
+ 	int loop = 0;
+@@ -7707,8 +7708,8 @@ unclustered_alloc:
+ 			spin_lock(&ctl->tree_lock);
+ 			if (ctl->free_space <
+ 			    num_bytes + empty_cluster + empty_size) {
+-				if (ctl->free_space > max_extent_size)
+-					max_extent_size = ctl->free_space;
++				max_free_space = max(max_free_space,
++						     ctl->free_space);
+ 				spin_unlock(&ctl->tree_lock);
+ 				goto loop;
+ 			}
+@@ -7877,6 +7878,8 @@ loop:
+ 	}
+ out:
+ 	if (ret == -ENOSPC) {
++		if (!max_extent_size)
++			max_extent_size = max_free_space;
+ 		spin_lock(&space_info->lock);
+ 		space_info->max_extent_size = max_extent_size;
+ 		spin_unlock(&space_info->lock);
+@@ -8158,21 +8161,14 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	path = btrfs_alloc_path();
+-	if (!path) {
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
++	if (!path)
+ 		return -ENOMEM;
+-	}
+ 
+ 	path->leave_spinning = 1;
+ 	ret = btrfs_insert_empty_item(trans, fs_info->extent_root, path,
+ 				      &extent_key, size);
+ 	if (ret) {
+ 		btrfs_free_path(path);
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
+ 		return ret;
+ 	}
+ 
+@@ -8301,6 +8297,19 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 	if (IS_ERR(buf))
+ 		return buf;
+ 
++	/*
++	 * Extra safety check in case the extent tree is corrupted and extent
++	 * allocator chooses to use a tree block which is already used and
++	 * locked.
++	 */
++	if (buf->lock_owner == current->pid) {
++		btrfs_err_rl(fs_info,
++"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
++			buf->start, btrfs_header_owner(buf), current->pid);
++		free_extent_buffer(buf);
++		return ERR_PTR(-EUCLEAN);
++	}
++
+ 	btrfs_set_header_generation(buf, trans->transid);
+ 	btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
+ 	btrfs_tree_lock(buf);
+@@ -8938,15 +8947,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 	if (eb == root->node) {
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = eb->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(eb));
++		else if (root->root_key.objectid != btrfs_header_owner(eb))
++			goto owner_mismatch;
+ 	} else {
+ 		if (wc->flags[level + 1] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = path->nodes[level + 1]->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(path->nodes[level + 1]));
++		else if (root->root_key.objectid !=
++			 btrfs_header_owner(path->nodes[level + 1]))
++			goto owner_mismatch;
+ 	}
+ 
+ 	btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
+@@ -8954,6 +8962,11 @@ out:
+ 	wc->refs[level] = 0;
+ 	wc->flags[level] = 0;
+ 	return 0;
++
++owner_mismatch:
++	btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
++		     btrfs_header_owner(eb), root->root_key.objectid);
++	return -EUCLEAN;
+ }
+ 
+ static noinline int walk_down_tree(struct btrfs_trans_handle *trans,
+@@ -9007,6 +9020,8 @@ static noinline int walk_up_tree(struct btrfs_trans_handle *trans,
+ 			ret = walk_up_proc(trans, root, path, wc);
+ 			if (ret > 0)
+ 				return 0;
++			if (ret < 0)
++				return ret;
+ 
+ 			if (path->locks[level]) {
+ 				btrfs_tree_unlock_rw(path->nodes[level],
+@@ -9772,6 +9787,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
+ 
+ 		block_group = btrfs_lookup_first_block_group(info, last);
+ 		while (block_group) {
++			wait_block_group_cache_done(block_group);
+ 			spin_lock(&block_group->lock);
+ 			if (block_group->iref)
+ 				break;
+@@ -10184,15 +10200,19 @@ error:
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	struct btrfs_block_group_cache *block_group, *tmp;
++	struct btrfs_block_group_cache *block_group;
+ 	struct btrfs_root *extent_root = fs_info->extent_root;
+ 	struct btrfs_block_group_item item;
+ 	struct btrfs_key key;
+ 	int ret = 0;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+-	trans->can_flush_pending_bgs = false;
+-	list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
++	if (!trans->can_flush_pending_bgs)
++		return;
++
++	while (!list_empty(&trans->new_bgs)) {
++		block_group = list_first_entry(&trans->new_bgs,
++					       struct btrfs_block_group_cache,
++					       bg_list);
+ 		if (ret)
+ 			goto next;
+ 
+@@ -10214,7 +10234,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ next:
+ 		list_del_init(&block_group->bg_list);
+ 	}
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
++	btrfs_trans_release_chunk_metadata(trans);
+ }
+ 
+ int btrfs_make_block_group(struct btrfs_trans_handle *trans,
+@@ -10869,14 +10889,16 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * We don't want a transaction for this since the discard may take a
+  * substantial amount of time.  We don't require that a transaction be
+  * running, but we do need to take a running transaction into account
+- * to ensure that we're not discarding chunks that were released in
+- * the current transaction.
++ * to ensure that we're not discarding chunks that were released or
++ * allocated in the current transaction.
+  *
+  * Holding the chunks lock will prevent other threads from allocating
+  * or releasing chunks, but it won't prevent a running transaction
+  * from committing and releasing the memory that the pending chunks
+  * list head uses.  For that, we need to take a reference to the
+- * transaction.
++ * transaction and hold the commit root sem.  We only need to hold
++ * it while performing the free space search since we have already
++ * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 				   u64 minlen, u64 *trimmed)
+@@ -10886,6 +10908,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 	*trimmed = 0;
+ 
++	/* Discard not supported = nothing to do. */
++	if (!blk_queue_discard(bdev_get_queue(device->bdev)))
++		return 0;
++
+ 	/* Not writeable = nothing to do. */
+ 	if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
+ 		return 0;
+@@ -10903,9 +10929,13 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 		ret = mutex_lock_interruptible(&fs_info->chunk_mutex);
+ 		if (ret)
+-			return ret;
++			break;
+ 
+-		down_read(&fs_info->commit_root_sem);
++		ret = down_read_killable(&fs_info->commit_root_sem);
++		if (ret) {
++			mutex_unlock(&fs_info->chunk_mutex);
++			break;
++		}
+ 
+ 		spin_lock(&fs_info->trans_lock);
+ 		trans = fs_info->running_transaction;
+@@ -10913,13 +10943,17 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			refcount_inc(&trans->use_count);
+ 		spin_unlock(&fs_info->trans_lock);
+ 
++		if (!trans)
++			up_read(&fs_info->commit_root_sem);
++
+ 		ret = find_free_dev_extent_start(trans, device, minlen, start,
+ 						 &start, &len);
+-		if (trans)
++		if (trans) {
++			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
++		}
+ 
+ 		if (ret) {
+-			up_read(&fs_info->commit_root_sem);
+ 			mutex_unlock(&fs_info->chunk_mutex);
+ 			if (ret == -ENOSPC)
+ 				ret = 0;
+@@ -10927,7 +10961,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		}
+ 
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+-		up_read(&fs_info->commit_root_sem);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 		if (ret)
+@@ -10947,6 +10980,15 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 	return ret;
+ }
+ 
++/*
++ * Trim the whole filesystem by:
++ * 1) trimming the free space in each block group
++ * 2) trimming the unallocated space on each device
++ *
++ * This will also continue trimming even if a block group or device encounters
++ * an error.  The return value will be the last error, or 0 if nothing bad
++ * happens.
++ */
+ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ {
+ 	struct btrfs_block_group_cache *cache = NULL;
+@@ -10956,18 +10998,14 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	u64 start;
+ 	u64 end;
+ 	u64 trimmed = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
++	u64 bg_failed = 0;
++	u64 dev_failed = 0;
++	int bg_ret = 0;
++	int dev_ret = 0;
+ 	int ret = 0;
+ 
+-	/*
+-	 * try to trim all FS space, our block group may start from non-zero.
+-	 */
+-	if (range->len == total_bytes)
+-		cache = btrfs_lookup_first_block_group(fs_info, range->start);
+-	else
+-		cache = btrfs_lookup_block_group(fs_info, range->start);
+-
+-	while (cache) {
++	cache = btrfs_lookup_first_block_group(fs_info, range->start);
++	for (; cache; cache = next_block_group(fs_info, cache)) {
+ 		if (cache->key.objectid >= (range->start + range->len)) {
+ 			btrfs_put_block_group(cache);
+ 			break;
+@@ -10981,13 +11019,15 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 			if (!block_group_cache_done(cache)) {
+ 				ret = cache_block_group(cache, 0);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 				ret = wait_block_group_cache_done(cache);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 			}
+ 			ret = btrfs_trim_block_group(cache,
+@@ -10998,28 +11038,40 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 
+ 			trimmed += group_trimmed;
+ 			if (ret) {
+-				btrfs_put_block_group(cache);
+-				break;
++				bg_failed++;
++				bg_ret = ret;
++				continue;
+ 			}
+ 		}
+-
+-		cache = next_block_group(fs_info, cache);
+ 	}
+ 
++	if (bg_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu block group(s), last error %d",
++			bg_failed, bg_ret);
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+-	devices = &fs_info->fs_devices->alloc_list;
+-	list_for_each_entry(device, devices, dev_alloc_list) {
++	devices = &fs_info->fs_devices->devices;
++	list_for_each_entry(device, devices, dev_list) {
+ 		ret = btrfs_trim_free_extents(device, range->minlen,
+ 					      &group_trimmed);
+-		if (ret)
++		if (ret) {
++			dev_failed++;
++			dev_ret = ret;
+ 			break;
++		}
+ 
+ 		trimmed += group_trimmed;
+ 	}
+ 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 
++	if (dev_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu device(s), last error %d",
++			dev_failed, dev_ret);
+ 	range->len = trimmed;
+-	return ret;
++	if (bg_ret)
++		return bg_ret;
++	return dev_ret;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 51e77d72068a..22c2f38cd9b3 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -534,6 +534,14 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
+ 
+ 	end_of_last_block = start_pos + num_bytes - 1;
+ 
++	/*
++	 * The pages may have already been dirty, clear out old accounting so
++	 * we can set things up properly
++	 */
++	clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
++			 EXTENT_DIRTY | EXTENT_DELALLOC |
++			 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached);
++
+ 	if (!btrfs_is_free_space_inode(BTRFS_I(inode))) {
+ 		if (start_pos >= isize &&
+ 		    !(BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC)) {
+@@ -1504,18 +1512,27 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ 		}
+ 		if (ordered)
+ 			btrfs_put_ordered_extent(ordered);
+-		clear_extent_bit(&inode->io_tree, start_pos, last_pos,
+-				 EXTENT_DIRTY | EXTENT_DELALLOC |
+-				 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
+-				 0, 0, cached_state);
++
+ 		*lockstart = start_pos;
+ 		*lockend = last_pos;
+ 		ret = 1;
+ 	}
+ 
++	/*
++	 * It's possible the pages are dirty right now, but we don't want
++	 * to clean them yet because copy_from_user may catch a page fault
++	 * and we might have to fall back to one page at a time.  If that
++	 * happens, we'll unlock these pages and we'd have a window where
++	 * reclaim could sneak in and drop the once-dirty page on the floor
++	 * without writing it.
++	 *
++	 * We have the pages locked and the extent range locked, so there's
++	 * no way someone can start IO on any dirty pages in this range.
++	 *
++	 * We'll call btrfs_dirty_pages() later on, and that will flip around
++	 * delalloc bits and dirty the pages as required.
++	 */
+ 	for (i = 0; i < num_pages; i++) {
+-		if (clear_page_dirty_for_io(pages[i]))
+-			account_page_redirty(pages[i]);
+ 		set_page_extent_mapped(pages[i]);
+ 		WARN_ON(!PageLocked(pages[i]));
+ 	}
+@@ -2065,6 +2082,14 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		goto out;
+ 
+ 	inode_lock(inode);
++
++	/*
++	 * We take the dio_sem here because the tree log stuff can race with
++	 * lockless dio writes and get an extent map logged for an extent we
++	 * never waited on.  We need it this high up for lockdep reasons.
++	 */
++	down_write(&BTRFS_I(inode)->dio_sem);
++
+ 	atomic_inc(&root->log_batch);
+ 	full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ 			     &BTRFS_I(inode)->runtime_flags);
+@@ -2116,6 +2141,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		ret = start_ordered_ops(inode, start, end);
+ 	}
+ 	if (ret) {
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2171,6 +2197,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		 * checked called fsync.
+ 		 */
+ 		ret = filemap_check_wb_err(inode->i_mapping, file->f_wb_err);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2189,6 +2216,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2210,6 +2238,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 * file again, but that will end up using the synchronization
+ 	 * inside btrfs_sync_log to keep things safe.
+ 	 */
++	up_write(&BTRFS_I(inode)->dio_sem);
+ 	inode_unlock(inode);
+ 
+ 	/*
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index d5f80cb300be..a5f18333aa8c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -10,6 +10,7 @@
+ #include <linux/math64.h>
+ #include <linux/ratelimit.h>
+ #include <linux/error-injection.h>
++#include <linux/sched/mm.h>
+ #include "ctree.h"
+ #include "free-space-cache.h"
+ #include "transaction.h"
+@@ -47,6 +48,7 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	struct btrfs_free_space_header *header;
+ 	struct extent_buffer *leaf;
+ 	struct inode *inode = NULL;
++	unsigned nofs_flag;
+ 	int ret;
+ 
+ 	key.objectid = BTRFS_FREE_SPACE_OBJECTID;
+@@ -68,7 +70,13 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	btrfs_disk_key_to_cpu(&location, &disk_key);
+ 	btrfs_release_path(path);
+ 
++	/*
++	 * We are often under a trans handle at this point, so we need to make
++	 * sure NOFS is set to keep us from deadlocking.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	inode = btrfs_iget(fs_info->sb, &location, root, NULL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	if (is_bad_inode(inode)) {
+@@ -1686,6 +1694,8 @@ static inline void __bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ 	bitmap_clear(info->bitmap, start, count);
+ 
+ 	info->bytes -= bytes;
++	if (info->max_extent_size > ctl->unit)
++		info->max_extent_size = 0;
+ }
+ 
+ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+@@ -1769,6 +1779,13 @@ static int search_bitmap(struct btrfs_free_space_ctl *ctl,
+ 	return -1;
+ }
+ 
++static inline u64 get_max_extent_size(struct btrfs_free_space *entry)
++{
++	if (entry->bitmap)
++		return entry->max_extent_size;
++	return entry->bytes;
++}
++
+ /* Cache the size of the max extent in bytes */
+ static struct btrfs_free_space *
+ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+@@ -1790,8 +1807,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 	for (node = &entry->offset_index; node; node = rb_next(node)) {
+ 		entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 		if (entry->bytes < *bytes) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1809,8 +1826,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 		}
+ 
+ 		if (entry->bytes < *bytes + align_off) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1822,8 +1839,10 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 				*offset = tmp;
+ 				*bytes = size;
+ 				return entry;
+-			} else if (size > *max_extent_size) {
+-				*max_extent_size = size;
++			} else {
++				*max_extent_size =
++					max(get_max_extent_size(entry),
++					    *max_extent_size);
+ 			}
+ 			continue;
+ 		}
+@@ -2447,6 +2466,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 	struct rb_node *n;
+ 	int count = 0;
+ 
++	spin_lock(&ctl->tree_lock);
+ 	for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
+ 		info = rb_entry(n, struct btrfs_free_space, offset_index);
+ 		if (info->bytes >= bytes && !block_group->ro)
+@@ -2455,6 +2475,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 			   info->offset, info->bytes,
+ 		       (info->bitmap) ? "yes" : "no");
+ 	}
++	spin_unlock(&ctl->tree_lock);
+ 	btrfs_info(fs_info, "block group has cluster?: %s",
+ 	       list_empty(&block_group->cluster_list) ? "no" : "yes");
+ 	btrfs_info(fs_info,
+@@ -2683,8 +2704,8 @@ static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
+ 
+ 	err = search_bitmap(ctl, entry, &search_start, &search_bytes, true);
+ 	if (err) {
+-		if (search_bytes > *max_extent_size)
+-			*max_extent_size = search_bytes;
++		*max_extent_size = max(get_max_extent_size(entry),
++				       *max_extent_size);
+ 		return 0;
+ 	}
+ 
+@@ -2721,8 +2742,9 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
+ 
+ 	entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 	while (1) {
+-		if (entry->bytes < bytes && entry->bytes > *max_extent_size)
+-			*max_extent_size = entry->bytes;
++		if (entry->bytes < bytes)
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 
+ 		if (entry->bytes < bytes ||
+ 		    (!entry->bitmap && entry->offset < min_start)) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d3736fbf6774..dc0f9d089b19 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -507,6 +507,7 @@ again:
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+ 			/* just bail out to the uncompressed code */
++			nr_pages = 0;
+ 			goto cont;
+ 		}
+ 
+@@ -2950,6 +2951,7 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 	bool truncated = false;
+ 	bool range_locked = false;
+ 	bool clear_new_delalloc_bytes = false;
++	bool clear_reserved_extent = true;
+ 
+ 	if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 	    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags) &&
+@@ -3053,10 +3055,12 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 						logical_len, logical_len,
+ 						compress_type, 0, 0,
+ 						BTRFS_FILE_EXTENT_REG);
+-		if (!ret)
++		if (!ret) {
++			clear_reserved_extent = false;
+ 			btrfs_release_delalloc_bytes(fs_info,
+ 						     ordered_extent->start,
+ 						     ordered_extent->disk_len);
++		}
+ 	}
+ 	unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
+ 			   ordered_extent->file_offset, ordered_extent->len,
+@@ -3117,8 +3121,13 @@ out:
+ 		 * wrong we need to return the space for this ordered extent
+ 		 * back to the allocator.  We only free the extent in the
+ 		 * truncated case if we didn't write out the extent at all.
++		 *
++		 * If we made it past insert_reserved_file_extent before we
++		 * errored out then we don't need to do this as the accounting
++		 * has already been done.
+ 		 */
+ 		if ((ret || !logical_len) &&
++		    clear_reserved_extent &&
+ 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 		    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
+ 			btrfs_free_reserved_extent(fs_info,
+@@ -5293,11 +5302,13 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		struct extent_state *cached_state = NULL;
+ 		u64 start;
+ 		u64 end;
++		unsigned state_flags;
+ 
+ 		node = rb_first(&io_tree->state);
+ 		state = rb_entry(node, struct extent_state, rb_node);
+ 		start = state->start;
+ 		end = state->end;
++		state_flags = state->state;
+ 		spin_unlock(&io_tree->lock);
+ 
+ 		lock_extent_bits(io_tree, start, end, &cached_state);
+@@ -5310,7 +5321,7 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		 *
+ 		 * Note, end is the bytenr of last byte, so we need + 1 here.
+ 		 */
+-		if (state->state & EXTENT_DELALLOC)
++		if (state_flags & EXTENT_DELALLOC)
+ 			btrfs_qgroup_free_data(inode, NULL, start, end - start + 1);
+ 
+ 		clear_extent_bit(io_tree, start, end,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index ef7159646615..c972920701a3 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -496,7 +496,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	struct fstrim_range range;
+ 	u64 minlen = ULLONG_MAX;
+ 	u64 num_devices = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
+ 	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+@@ -520,11 +519,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 		return -EOPNOTSUPP;
+ 	if (copy_from_user(&range, arg, sizeof(range)))
+ 		return -EFAULT;
+-	if (range.start > total_bytes ||
+-	    range.len < fs_info->sb->s_blocksize)
++
++	/*
++	 * NOTE: Don't truncate the range using super->total_bytes.  Bytenr of
++	 * block group is in the logical address space, which can be any
++	 * sectorsize aligned bytenr in  the range [0, U64_MAX].
++	 */
++	if (range.len < fs_info->sb->s_blocksize)
+ 		return -EINVAL;
+ 
+-	range.len = min(range.len, total_bytes - range.start);
+ 	range.minlen = max(range.minlen, minlen);
+ 	ret = btrfs_trim_fs(fs_info, &range);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index c25dc47210a3..7407f5a5d682 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2856,6 +2856,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
+ 		qgroup->rfer_cmpr = 0;
+ 		qgroup->excl = 0;
+ 		qgroup->excl_cmpr = 0;
++		qgroup_dirty(fs_info, qgroup);
+ 	}
+ 	spin_unlock(&fs_info->qgroup_lock);
+ }
+@@ -3065,6 +3066,10 @@ static int __btrfs_qgroup_release_data(struct inode *inode,
+ 	int trace_op = QGROUP_RELEASE;
+ 	int ret;
+ 
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED,
++		      &BTRFS_I(inode)->root->fs_info->flags))
++		return 0;
++
+ 	/* In release case, we shouldn't have @reserved */
+ 	WARN_ON(!free && reserved);
+ 	if (free && reserved)
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index d60dd06445ce..cad73ed7aebc 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -261,6 +261,8 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
+ static inline void btrfs_qgroup_free_delayed_ref(struct btrfs_fs_info *fs_info,
+ 						 u64 ref_root, u64 num_bytes)
+ {
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
++		return;
+ 	trace_btrfs_qgroup_free_delayed_ref(fs_info, ref_root, num_bytes);
+ 	btrfs_qgroup_free_refroot(fs_info, ref_root, num_bytes,
+ 				  BTRFS_QGROUP_RSV_DATA);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index be94c65bb4d2..5ee49b796815 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,7 +1321,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	if (rc) {
++	if (rc && root->node) {
+ 		spin_lock(&rc->reloc_root_tree.lock);
+ 		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ 				      root->node->start);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index ff5f6c719976..9ee0aca134fc 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1930,6 +1930,9 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
++	btrfs_trans_release_metadata(trans);
++	trans->block_rsv = NULL;
++
+ 	/* make a pass through all the delayed refs we have so far
+ 	 * any runnings procs may add more while we are here
+ 	 */
+@@ -1939,9 +1942,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
+-	btrfs_trans_release_metadata(trans);
+-	trans->block_rsv = NULL;
+-
+ 	cur_trans = trans->transaction;
+ 
+ 	/*
+@@ -2281,15 +2281,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	kmem_cache_free(btrfs_trans_handle_cachep, trans);
+ 
+-	/*
+-	 * If fs has been frozen, we can not handle delayed iputs, otherwise
+-	 * it'll result in deadlock about SB_FREEZE_FS.
+-	 */
+-	if (current != fs_info->transaction_kthread &&
+-	    current != fs_info->cleaner_kthread &&
+-	    !test_bit(BTRFS_FS_FROZEN, &fs_info->flags))
+-		btrfs_run_delayed_iputs(fs_info);
+-
+ 	return ret;
+ 
+ scrub_continue:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 84b00a29d531..8b3f14a1adf0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -258,6 +258,13 @@ struct walk_control {
+ 	/* what stage of the replay code we're currently in */
+ 	int stage;
+ 
++	/*
++	 * Ignore any items from the inode currently being processed. Needs
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
++	 * the LOG_WALK_REPLAY_INODES stage.
++	 */
++	bool ignore_cur_inode;
++
+ 	/* the root we are currently replaying */
+ 	struct btrfs_root *replay_dest;
+ 
+@@ -2492,6 +2499,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 			inode_item = btrfs_item_ptr(eb, i,
+ 					    struct btrfs_inode_item);
++			/*
++			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
++			 * and never got linked before the fsync, skip it, as
++			 * replaying it is pointless since it would be deleted
++			 * later. We skip logging tmpfiles, but it's always
++			 * possible we are replaying a log created with a kernel
++			 * that used to log tmpfiles.
++			 */
++			if (btrfs_inode_nlink(eb, inode_item) == 0) {
++				wc->ignore_cur_inode = true;
++				continue;
++			} else {
++				wc->ignore_cur_inode = false;
++			}
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+@@ -2529,16 +2550,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 					     root->fs_info->sectorsize);
+ 				ret = btrfs_drop_extents(wc->trans, root, inode,
+ 							 from, (u64)-1, 1);
+-				/*
+-				 * If the nlink count is zero here, the iput
+-				 * will free the inode.  We bump it to make
+-				 * sure it doesn't get freed until the link
+-				 * count fixup is done.
+-				 */
+ 				if (!ret) {
+-					if (inode->i_nlink == 0)
+-						inc_nlink(inode);
+-					/* Update link count and nbytes. */
++					/* Update the inode's nbytes. */
+ 					ret = btrfs_update_inode(wc->trans,
+ 								 root, inode);
+ 				}
+@@ -2553,6 +2566,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 				break;
+ 		}
+ 
++		if (wc->ignore_cur_inode)
++			continue;
++
+ 		if (key.type == BTRFS_DIR_INDEX_KEY &&
+ 		    wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
+ 			ret = replay_one_dir_item(wc->trans, root, path,
+@@ -3209,9 +3225,12 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+ 	};
+ 
+ 	ret = walk_log_tree(trans, log, &wc);
+-	/* I don't think this can happen but just in case */
+-	if (ret)
+-		btrfs_abort_transaction(trans, ret);
++	if (ret) {
++		if (trans)
++			btrfs_abort_transaction(trans, ret);
++		else
++			btrfs_handle_fs_error(log->fs_info, ret, NULL);
++	}
+ 
+ 	while (1) {
+ 		ret = find_first_extent_bit(&log->dirty_log_pages,
+@@ -4505,7 +4524,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
+ 
+ 	INIT_LIST_HEAD(&extents);
+ 
+-	down_write(&inode->dio_sem);
+ 	write_lock(&tree->lock);
+ 	test_gen = root->fs_info->last_trans_committed;
+ 	logged_start = start;
+@@ -4586,7 +4604,6 @@ process:
+ 	}
+ 	WARN_ON(!list_empty(&extents));
+ 	write_unlock(&tree->lock);
+-	up_write(&inode->dio_sem);
+ 
+ 	btrfs_release_path(path);
+ 	if (!ret)
+@@ -4784,7 +4801,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 			ASSERT(len == i_size ||
+ 			       (len == fs_info->sectorsize &&
+ 				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE));
++				BTRFS_COMPRESS_NONE) ||
++			       (len < i_size && i_size < fs_info->sectorsize));
+ 			return 0;
+ 		}
+ 
+@@ -5718,9 +5736,33 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans,
+ 
+ 			dir_inode = btrfs_iget(fs_info->sb, &inode_key,
+ 					       root, NULL);
+-			/* If parent inode was deleted, skip it. */
+-			if (IS_ERR(dir_inode))
+-				continue;
++			/*
++			 * If the parent inode was deleted, return an error to
++			 * fallback to a transaction commit. This is to prevent
++			 * getting an inode that was moved from one parent A to
++			 * a parent B, got its former parent A deleted and then
++			 * it got fsync'ed, from existing at both parents after
++			 * a log replay (and the old parent still existing).
++			 * Example:
++			 *
++			 * mkdir /mnt/A
++			 * mkdir /mnt/B
++			 * touch /mnt/B/bar
++			 * sync
++			 * mv /mnt/B/bar /mnt/A/bar
++			 * mv -T /mnt/A /mnt/B
++			 * fsync /mnt/B/bar
++			 * <power fail>
++			 *
++			 * If we ignore the old parent B which got deleted,
++			 * after a log replay we would have file bar linked
++			 * at both parents and the old parent B would still
++			 * exist.
++			 */
++			if (IS_ERR(dir_inode)) {
++				ret = PTR_ERR(dir_inode);
++				goto out;
++			}
+ 
+ 			if (ctx)
+ 				ctx->log_new_dentries = false;
+@@ -5794,7 +5836,13 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		goto end_no_trans;
+ 
+-	if (btrfs_inode_in_log(inode, trans->transid)) {
++	/*
++	 * Skip already logged inodes or inodes corresponding to tmpfiles
++	 * (since logging them is pointless, a link count of 0 means they
++	 * will never be accessible).
++	 */
++	if (btrfs_inode_in_log(inode, trans->transid) ||
++	    inode->vfs_inode.i_nlink == 0) {
+ 		ret = BTRFS_NO_LOG_SYNC;
+ 		goto end_no_trans;
+ 	}
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index b20297988fe0..c1261b7fd292 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,9 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		atomic_set(&tcpSesReconnectCount, 0);
++		atomic_set(&tconInfoReconnectCount, 0);
++
+ 		spin_lock(&GlobalMid_Lock);
+ 		GlobalMaxActiveXid = 0;
+ 		GlobalCurrentXid = 0;
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index b611fc2e8984..7f01c6e60791 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo)
+ 		sprintf(dp, ";sec=krb5");
+ 	else if (server->sec_mskerberos)
+ 		sprintf(dp, ";sec=mskrb5");
+-	else
+-		goto out;
++	else {
++		cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
++		sprintf(dp, ";sec=krb5");
++	}
+ 
+ 	dp = description + strlen(description);
+ 	sprintf(dp, ";uid=0x%x",
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index d279fa5472db..334b2b3d21a3 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -779,7 +779,15 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ 	} else if (rc == -EREMOTE) {
+ 		cifs_create_dfs_fattr(&fattr, sb);
+ 		rc = 0;
+-	} else if (rc == -EACCES && backup_cred(cifs_sb)) {
++	} else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
++		   (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
++		      == 0)) {
++			/*
++			 * For SMB2 and later the backup intent flag is already
++			 * sent if needed on open and there is no path based
++			 * FindFirst operation to use to retry with
++			 */
++
+ 			srchinf = kzalloc(sizeof(struct cifs_search_info),
+ 						GFP_KERNEL);
+ 			if (srchinf == NULL) {
+diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
+index f408994fc632..6e000392e4a4 100644
+--- a/fs/cramfs/inode.c
++++ b/fs/cramfs/inode.c
+@@ -202,7 +202,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
+ 			continue;
+ 		blk_offset = (blocknr - buffer_blocknr[i]) << PAGE_SHIFT;
+ 		blk_offset += offset;
+-		if (blk_offset + len > BUFFER_SIZE)
++		if (blk_offset > BUFFER_SIZE ||
++		    blk_offset + len > BUFFER_SIZE)
+ 			continue;
+ 		return read_buffers[i] + blk_offset;
+ 	}
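The cramfs hunk guards a bounds check against unsigned wraparound: with a huge `blk_offset`, the old `blk_offset + len > BUFFER_SIZE` test can wrap past zero and falsely pass, so the fix rejects `blk_offset > BUFFER_SIZE` on its own first. A standalone sketch of the two variants (constants and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <limits.h>

#define MODEL_BUFFER_SIZE (16u * 4096u)	/* illustrative buffer size */

/* Unsafe form: `off + len` is unsigned, so a huge `off` makes the sum wrap
 * past zero and the bounds check passes for a wildly out-of-range request. */
static int range_ok_buggy(unsigned int off, unsigned int len)
{
	return off + len <= MODEL_BUFFER_SIZE;
}

/* Patched form: checking `off` by itself first closes the wraparound hole
 * (assuming, as in cramfs, that `len` is itself reasonably bounded). */
static int range_ok_fixed(unsigned int off, unsigned int len)
{
	return off <= MODEL_BUFFER_SIZE && off + len <= MODEL_BUFFER_SIZE;
}
```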
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 39c20ef26db4..79debfc9cef9 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -83,10 +83,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
+ 	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+ 		return true;
+ 
+-	if (contents_mode == FS_ENCRYPTION_MODE_SPECK128_256_XTS &&
+-	    filenames_mode == FS_ENCRYPTION_MODE_SPECK128_256_CTS)
+-		return true;
+-
+ 	return false;
+ }
+ 
+diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
+index e997ca51192f..7874c9bb2fc5 100644
+--- a/fs/crypto/keyinfo.c
++++ b/fs/crypto/keyinfo.c
+@@ -174,16 +174,6 @@ static struct fscrypt_mode {
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 16,
+ 	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_XTS] = {
+-		.friendly_name = "Speck128/256-XTS",
+-		.cipher_str = "xts(speck128)",
+-		.keysize = 64,
+-	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_CTS] = {
+-		.friendly_name = "Speck128/256-CTS-CBC",
+-		.cipher_str = "cts(cbc(speck128))",
+-		.keysize = 32,
+-	},
+ };
+ 
+ static struct fscrypt_mode *
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index aa1ce53d0c87..7fcc11fcbbbd 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1387,7 +1387,8 @@ struct ext4_sb_info {
+ 	u32 s_min_batch_time;
+ 	struct block_device *journal_bdev;
+ #ifdef CONFIG_QUOTA
+-	char *s_qf_names[EXT4_MAXQUOTAS];	/* Names of quota files with journalled quota */
++	/* Names of quota files with journalled quota */
++	char __rcu *s_qf_names[EXT4_MAXQUOTAS];
+ 	int s_jquota_fmt;			/* Format of quota to use */
+ #endif
+ 	unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 7b4736022761..9c4bac18cc6c 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -863,7 +863,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
+ 	handle_t *handle;
+ 	struct page *page;
+ 	struct ext4_iloc iloc;
+-	int retries;
++	int retries = 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret)
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index a7074115d6f6..0edee31913d1 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -67,7 +67,6 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	ei1 = EXT4_I(inode1);
+ 	ei2 = EXT4_I(inode2);
+ 
+-	swap(inode1->i_flags, inode2->i_flags);
+ 	swap(inode1->i_version, inode2->i_version);
+ 	swap(inode1->i_blocks, inode2->i_blocks);
+ 	swap(inode1->i_bytes, inode2->i_bytes);
+@@ -85,6 +84,21 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	i_size_write(inode2, isize);
+ }
+ 
++static void reset_inode_seed(struct inode *inode)
++{
++	struct ext4_inode_info *ei = EXT4_I(inode);
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	__le32 inum = cpu_to_le32(inode->i_ino);
++	__le32 gen = cpu_to_le32(inode->i_generation);
++	__u32 csum;
++
++	if (!ext4_has_metadata_csum(inode->i_sb))
++		return;
++
++	csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)&inum, sizeof(inum));
++	ei->i_csum_seed = ext4_chksum(sbi, csum, (__u8 *)&gen, sizeof(gen));
++}
++
+ /**
+  * Swap the information from the given @inode and the inode
+  * EXT4_BOOT_LOADER_INO. It will basically swap i_data and all other
+@@ -102,10 +116,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	struct inode *inode_bl;
+ 	struct ext4_inode_info *ei_bl;
+ 
+-	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode))
++	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++	    ext4_has_inline_data(inode))
+ 		return -EINVAL;
+ 
+-	if (!inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
++	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO);
+@@ -120,13 +137,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	 * that only 1 swap_inode_boot_loader is running. */
+ 	lock_two_nondirectories(inode, inode_bl);
+ 
+-	truncate_inode_pages(&inode->i_data, 0);
+-	truncate_inode_pages(&inode_bl->i_data, 0);
+-
+ 	/* Wait for all existing dio workers */
+ 	inode_dio_wait(inode);
+ 	inode_dio_wait(inode_bl);
+ 
++	truncate_inode_pages(&inode->i_data, 0);
++	truncate_inode_pages(&inode_bl->i_data, 0);
++
+ 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ 	if (IS_ERR(handle)) {
+ 		err = -EINVAL;
+@@ -159,6 +176,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 
+ 	inode->i_generation = prandom_u32();
+ 	inode_bl->i_generation = prandom_u32();
++	reset_inode_seed(inode);
++	reset_inode_seed(inode_bl);
+ 
+ 	ext4_discard_preallocations(inode);
+ 
+@@ -169,6 +188,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			inode->i_ino, err);
+ 		/* Revert all changes: */
+ 		swap_inode_data(inode, inode_bl);
++		ext4_mark_inode_dirty(handle, inode);
+ 	} else {
+ 		err = ext4_mark_inode_dirty(handle, inode_bl);
+ 		if (err < 0) {
+@@ -178,6 +198,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			/* Revert all changes: */
+ 			swap_inode_data(inode, inode_bl);
+ 			ext4_mark_inode_dirty(handle, inode);
++			ext4_mark_inode_dirty(handle, inode_bl);
+ 		}
+ 	}
+ 	ext4_journal_stop(handle);
+@@ -339,19 +360,14 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (projid_eq(kprojid, EXT4_I(inode)->i_projid))
+ 		return 0;
+ 
+-	err = mnt_want_write_file(filp);
+-	if (err)
+-		return err;
+-
+ 	err = -EPERM;
+-	inode_lock(inode);
+ 	/* Is it quota file? Do not allow user to mess with it */
+ 	if (ext4_is_quota_file(inode))
+-		goto out_unlock;
++		return err;
+ 
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+-		goto out_unlock;
++		return err;
+ 
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 	if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) {
+@@ -359,20 +375,20 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 					      EXT4_SB(sb)->s_want_extra_isize,
+ 					      &iloc);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 	} else {
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	dquot_initialize(inode);
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+-	if (IS_ERR(handle)) {
+-		err = PTR_ERR(handle);
+-		goto out_unlock;
+-	}
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
+ 
+ 	err = ext4_reserve_inode_write(handle, inode, &iloc);
+ 	if (err)
+@@ -400,9 +416,6 @@ out_dirty:
+ 		err = rc;
+ out_stop:
+ 	ext4_journal_stop(handle);
+-out_unlock:
+-	inode_unlock(inode);
+-	mnt_drop_write_file(filp);
+ 	return err;
+ }
+ #else
+@@ -626,6 +639,30 @@ group_add_out:
+ 	return err;
+ }
+ 
++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa)
++{
++	/*
++	 * Project Quota ID state is only allowed to change from within the init
++	 * namespace. Enforce that restriction only if we are trying to change
++	 * the quota ID state. Everything else is allowed in user namespaces.
++	 */
++	if (current_user_ns() == &init_user_ns)
++		return 0;
++
++	if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid)
++		return -EINVAL;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) {
++		if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT))
++			return -EINVAL;
++	} else {
++		if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT)
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -1025,19 +1062,19 @@ resizefs_out:
+ 			return err;
+ 
+ 		inode_lock(inode);
++		err = ext4_ioctl_check_project(inode, &fa);
++		if (err)
++			goto out;
+ 		flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
+ 			 (flags & EXT4_FL_XFLAG_VISIBLE);
+ 		err = ext4_ioctl_setflags(inode, flags);
+-		inode_unlock(inode);
+-		mnt_drop_write_file(filp);
+ 		if (err)
+-			return err;
+-
++			goto out;
+ 		err = ext4_ioctl_setproject(filp, fa.fsx_projid);
+-		if (err)
+-			return err;
+-
+-		return 0;
++out:
++		inode_unlock(inode);
++		mnt_drop_write_file(filp);
++		return err;
+ 	}
+ 	case EXT4_IOC_SHUTDOWN:
+ 		return ext4_shutdown(sb, arg);
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 8e17efdcbf11..887353875060 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -518,9 +518,13 @@ mext_check_arguments(struct inode *orig_inode,
+ 			orig_inode->i_ino, donor_inode->i_ino);
+ 		return -EINVAL;
+ 	}
+-	if (orig_eof < orig_start + *len - 1)
++	if (orig_eof <= orig_start)
++		*len = 0;
++	else if (orig_eof < orig_start + *len - 1)
+ 		*len = orig_eof - orig_start;
+-	if (donor_eof < donor_start + *len - 1)
++	if (donor_eof <= donor_start)
++		*len = 0;
++	else if (donor_eof < donor_start + *len - 1)
+ 		*len = donor_eof - donor_start;
+ 	if (!*len) {
+ 		ext4_debug("ext4 move extent: len should not be 0 "
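The move_extent hunk above fixes the mirror-image problem, unsigned underflow: when `orig_eof <= orig_start`, the old `orig_eof - orig_start` assignment would wrap to a huge length instead of zero, so the fix clamps the length to zero first. A hedged userspace sketch of the clamping logic (names are illustrative):

```c
#include <assert.h>

/* Clamp a requested extent length to the end of file, guarding against
 * unsigned underflow: when eof <= start, the unguarded `eof - start`
 * expression would wrap to a huge value rather than zero. Mirrors the
 * shape of the patched ext4 logic, not the kernel code itself. */
static unsigned long clamp_extent_len(unsigned long eof, unsigned long start,
				      unsigned long len)
{
	if (eof <= start)
		return 0;		/* range starts at or past EOF */
	if (eof < start + len - 1)
		return eof - start;	/* shrink to what fits before EOF */
	return len;
}
```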
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a7a0fffc3ae8..8d91d50ccf42 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -895,6 +895,18 @@ static inline void ext4_quota_off_umount(struct super_block *sb)
+ 	for (type = 0; type < EXT4_MAXQUOTAS; type++)
+ 		ext4_quota_off(sb, type);
+ }
++
++/*
++ * This is a helper function which is used in the mount/remount
++ * codepaths (which holds s_umount) to fetch the quota file name.
++ */
++static inline char *get_qf_name(struct super_block *sb,
++				struct ext4_sb_info *sbi,
++				int type)
++{
++	return rcu_dereference_protected(sbi->s_qf_names[type],
++					 lockdep_is_held(&sb->s_umount));
++}
+ #else
+ static inline void ext4_quota_off_umount(struct super_block *sb)
+ {
+@@ -946,7 +958,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
+ #ifdef CONFIG_QUOTA
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(sbi->s_qf_names[i]);
++		kfree(get_qf_name(sb, sbi, i));
+ #endif
+ 
+ 	/* Debugging code just in case the in-memory inode orphan list
+@@ -1511,11 +1523,10 @@ static const char deprecated_msg[] =
+ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *qname;
++	char *qname, *old_qname = get_qf_name(sb, sbi, qtype);
+ 	int ret = -1;
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		!sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && !old_qname) {
+ 		ext4_msg(sb, KERN_ERR,
+ 			"Cannot change journaled "
+ 			"quota options when quota turned on");
+@@ -1532,8 +1543,8 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"Not enough memory for storing quotafile name");
+ 		return -1;
+ 	}
+-	if (sbi->s_qf_names[qtype]) {
+-		if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
++	if (old_qname) {
++		if (strcmp(old_qname, qname) == 0)
+ 			ret = 1;
+ 		else
+ 			ext4_msg(sb, KERN_ERR,
+@@ -1546,7 +1557,7 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"quotafile must be on filesystem root");
+ 		goto errout;
+ 	}
+-	sbi->s_qf_names[qtype] = qname;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], qname);
+ 	set_opt(sb, QUOTA);
+ 	return 1;
+ errout:
+@@ -1558,15 +1569,16 @@ static int clear_qf_name(struct super_block *sb, int qtype)
+ {
+ 
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *old_qname = get_qf_name(sb, sbi, qtype);
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && old_qname) {
+ 		ext4_msg(sb, KERN_ERR, "Cannot change journaled quota options"
+ 			" when quota turned on");
+ 		return -1;
+ 	}
+-	kfree(sbi->s_qf_names[qtype]);
+-	sbi->s_qf_names[qtype] = NULL;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], NULL);
++	synchronize_rcu();
++	kfree(old_qname);
+ 	return 1;
+ }
+ #endif
+@@ -1941,7 +1953,7 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 int is_remount)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *p;
++	char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	int token;
+ 
+@@ -1972,11 +1984,13 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 "Cannot enable project quota enforcement.");
+ 		return 0;
+ 	}
+-	if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) {
+-		if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
++	usr_qf_name = get_qf_name(sb, sbi, USRQUOTA);
++	grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA);
++	if (usr_qf_name || grp_qf_name) {
++		if (test_opt(sb, USRQUOTA) && usr_qf_name)
+ 			clear_opt(sb, USRQUOTA);
+ 
+-		if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
++		if (test_opt(sb, GRPQUOTA) && grp_qf_name)
+ 			clear_opt(sb, GRPQUOTA);
+ 
+ 		if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) {
+@@ -2010,6 +2024,7 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ {
+ #if defined(CONFIG_QUOTA)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *usr_qf_name, *grp_qf_name;
+ 
+ 	if (sbi->s_jquota_fmt) {
+ 		char *fmtname = "";
+@@ -2028,11 +2043,14 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ 		seq_printf(seq, ",jqfmt=%s", fmtname);
+ 	}
+ 
+-	if (sbi->s_qf_names[USRQUOTA])
+-		seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
+-
+-	if (sbi->s_qf_names[GRPQUOTA])
+-		seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
++	rcu_read_lock();
++	usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]);
++	grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]);
++	if (usr_qf_name)
++		seq_show_option(seq, "usrjquota", usr_qf_name);
++	if (grp_qf_name)
++		seq_show_option(seq, "grpjquota", grp_qf_name);
++	rcu_read_unlock();
+ #endif
+ }
+ 
+@@ -5081,6 +5099,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	int err = 0;
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
++	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 
+@@ -5097,8 +5116,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	old_opts.s_jquota_fmt = sbi->s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		if (sbi->s_qf_names[i]) {
+-			old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
+-							 GFP_KERNEL);
++			char *qf_name = get_qf_name(sb, sbi, i);
++
++			old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL);
+ 			if (!old_opts.s_qf_names[i]) {
+ 				for (j = 0; j < i; j++)
+ 					kfree(old_opts.s_qf_names[j]);
+@@ -5327,9 +5347,12 @@ restore_opts:
+ #ifdef CONFIG_QUOTA
+ 	sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++) {
+-		kfree(sbi->s_qf_names[i]);
+-		sbi->s_qf_names[i] = old_opts.s_qf_names[i];
++		to_free[i] = get_qf_name(sb, sbi, i);
++		rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]);
+ 	}
++	synchronize_rcu();
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(to_free[i]);
+ #endif
+ 	kfree(orig_data);
+ 	return err;
+@@ -5520,7 +5543,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+  */
+ static int ext4_quota_on_mount(struct super_block *sb, int type)
+ {
+-	return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type],
++	return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type),
+ 					EXT4_SB(sb)->s_jquota_fmt, type);
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b61954d40c25..e397515261dc 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -80,7 +80,8 @@ static void __read_end_io(struct bio *bio)
+ 		/* PG_error was set if any post_read step failed */
+ 		if (bio->bi_status || PageError(page)) {
+ 			ClearPageUptodate(page);
+-			SetPageError(page);
++			/* will re-read again later */
++			ClearPageError(page);
+ 		} else {
+ 			SetPageUptodate(page);
+ 		}
+@@ -453,12 +454,16 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
+-	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+-	__submit_bio(fio->sbi, bio, fio->type);
++	if (fio->io_wbc && !is_read_io(fio->op))
++		wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
++
++	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+ 	if (!is_read_io(fio->op))
+ 		inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page));
++
++	__submit_bio(fio->sbi, bio, fio->type);
+ 	return 0;
+ }
+ 
+@@ -580,6 +585,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
++	ClearPageError(page);
+ 	__submit_bio(F2FS_I_SB(inode), bio, DATA);
+ 	return 0;
+ }
+@@ -1524,6 +1530,7 @@ submit_and_realloc:
+ 		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+ 			goto submit_and_realloc;
+ 
++		ClearPageError(page);
+ 		last_block_in_bio = block_nr;
+ 		goto next_page;
+ set_error_page:
+@@ -2494,10 +2501,6 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
+-	/* don't remain PG_checked flag which was set during GC */
+-	if (is_cold_data(page))
+-		clear_cold_data(page);
+-
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 231b77ef5a53..a70cd2580eae 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -308,14 +308,13 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ 	return count - atomic_read(&et->node_cnt);
+ }
+ 
+-static void __drop_largest_extent(struct inode *inode,
++static void __drop_largest_extent(struct extent_tree *et,
+ 					pgoff_t fofs, unsigned int len)
+ {
+-	struct extent_info *largest = &F2FS_I(inode)->extent_tree->largest;
+-
+-	if (fofs < largest->fofs + largest->len && fofs + len > largest->fofs) {
+-		largest->len = 0;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++	if (fofs < et->largest.fofs + et->largest.len &&
++			fofs + len > et->largest.fofs) {
++		et->largest.len = 0;
++		et->largest_updated = true;
+ 	}
+ }
+ 
+@@ -416,12 +415,11 @@ out:
+ 	return ret;
+ }
+ 
+-static struct extent_node *__try_merge_extent_node(struct inode *inode,
++static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct extent_node *prev_ex,
+ 				struct extent_node *next_ex)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_node *en = NULL;
+ 
+ 	if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei)) {
+@@ -443,7 +441,7 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	spin_lock(&sbi->extent_lock);
+ 	if (!list_empty(&en->list)) {
+@@ -454,12 +452,11 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	return en;
+ }
+ 
+-static struct extent_node *__insert_extent_tree(struct inode *inode,
++static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct rb_node **insert_p,
+ 				struct rb_node *insert_parent)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct rb_node **p;
+ 	struct rb_node *parent = NULL;
+ 	struct extent_node *en = NULL;
+@@ -476,7 +473,7 @@ do_insert:
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	/* update in global extent list */
+ 	spin_lock(&sbi->extent_lock);
+@@ -497,6 +494,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	struct rb_node **insert_p = NULL, *insert_parent = NULL;
+ 	unsigned int end = fofs + len;
+ 	unsigned int pos = (unsigned int)fofs;
++	bool updated = false;
+ 
+ 	if (!et)
+ 		return;
+@@ -517,7 +515,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	 * drop largest extent before lookup, in case it's already
+ 	 * been shrunk from extent tree
+ 	 */
+-	__drop_largest_extent(inode, fofs, len);
++	__drop_largest_extent(et, fofs, len);
+ 
+ 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
+ 	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
+@@ -550,7 +548,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 				set_extent_info(&ei, end,
+ 						end - dei.fofs + dei.blk,
+ 						org_end - end);
+-				en1 = __insert_extent_tree(inode, et, &ei,
++				en1 = __insert_extent_tree(sbi, et, &ei,
+ 							NULL, NULL);
+ 				next_en = en1;
+ 			} else {
+@@ -570,7 +568,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 		}
+ 
+ 		if (parts)
+-			__try_update_largest_extent(inode, et, en);
++			__try_update_largest_extent(et, en);
+ 		else
+ 			__release_extent_node(sbi, et, en);
+ 
+@@ -590,15 +588,16 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (blkaddr) {
+ 
+ 		set_extent_info(&ei, fofs, blkaddr, len);
+-		if (!__try_merge_extent_node(inode, et, &ei, prev_en, next_en))
+-			__insert_extent_tree(inode, et, &ei,
++		if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en))
++			__insert_extent_tree(sbi, et, &ei,
+ 						insert_p, insert_parent);
+ 
+ 		/* give up extent_cache, if split and small updates happen */
+ 		if (dei.len >= 1 &&
+ 				prev.len < F2FS_MIN_EXTENT_LEN &&
+ 				et->largest.len < F2FS_MIN_EXTENT_LEN) {
+-			__drop_largest_extent(inode, 0, UINT_MAX);
++			et->largest.len = 0;
++			et->largest_updated = true;
+ 			set_inode_flag(inode, FI_NO_EXTENT);
+ 		}
+ 	}
+@@ -606,7 +605,15 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (is_inode_flag_set(inode, FI_NO_EXTENT))
+ 		__free_extent_tree(sbi, et);
+ 
++	if (et->largest_updated) {
++		et->largest_updated = false;
++		updated = true;
++	}
++
+ 	write_unlock(&et->lock);
++
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
+@@ -705,6 +712,7 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
++	bool updated = false;
+ 
+ 	if (!f2fs_may_extent_tree(inode))
+ 		return;
+@@ -713,8 +721,13 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ 
+ 	write_lock(&et->lock);
+ 	__free_extent_tree(sbi, et);
+-	__drop_largest_extent(inode, 0, UINT_MAX);
++	if (et->largest.len) {
++		et->largest.len = 0;
++		updated = true;
++	}
+ 	write_unlock(&et->lock);
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ void f2fs_destroy_extent_tree(struct inode *inode)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b6f2dc8163e1..181aade161e8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -556,6 +556,7 @@ struct extent_tree {
+ 	struct list_head list;		/* to be used by sbi->zombie_list */
+ 	rwlock_t lock;			/* protect extent info rb-tree */
+ 	atomic_t node_cnt;		/* # of extent node in rb-tree*/
++	bool largest_updated;		/* largest extent updated */
+ };
+ 
+ /*
+@@ -736,12 +737,12 @@ static inline bool __is_front_mergeable(struct extent_info *cur,
+ }
+ 
+ extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
+-static inline void __try_update_largest_extent(struct inode *inode,
+-			struct extent_tree *et, struct extent_node *en)
++static inline void __try_update_largest_extent(struct extent_tree *et,
++						struct extent_node *en)
+ {
+ 	if (en->ei.len > et->largest.len) {
+ 		et->largest = en->ei;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++		et->largest_updated = true;
+ 	}
+ }
+ 
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index cf0f944fcaea..4a2e75bce36a 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -287,6 +287,12 @@ static int do_read_inode(struct inode *inode)
+ 	if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
+ 		__recover_inline_status(inode, node_page);
+ 
++	/* try to recover cold bit for non-dir inode */
++	if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) {
++		set_cold_node(node_page, false);
++		set_page_dirty(node_page);
++	}
++
+ 	/* get rdev by using inline_info */
+ 	__get_inode_rdev(inode, ri);
+ 
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 52ed02b0327c..ec22e7c5b37e 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2356,7 +2356,7 @@ retry:
+ 	if (!PageUptodate(ipage))
+ 		SetPageUptodate(ipage);
+ 	fill_node_footer(ipage, ino, ino, 0, true);
+-	set_cold_node(page, false);
++	set_cold_node(ipage, false);
+ 
+ 	src = F2FS_INODE(page);
+ 	dst = F2FS_INODE(ipage);
+@@ -2379,6 +2379,13 @@ retry:
+ 			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
+ 								i_projid))
+ 			dst->i_projid = src->i_projid;
++
++		if (f2fs_sb_has_inode_crtime(sbi->sb) &&
++			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
++							i_crtime_nsec)) {
++			dst->i_crtime = src->i_crtime;
++			dst->i_crtime_nsec = src->i_crtime_nsec;
++		}
+ 	}
+ 
+ 	new_ni = old_ni;
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index ad70e62c5da4..a69a2c5c6682 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -221,6 +221,7 @@ static void recover_inode(struct inode *inode, struct page *page)
+ 	inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
+ 
+ 	F2FS_I(inode)->i_advise = raw->i_advise;
++	F2FS_I(inode)->i_flags = le32_to_cpu(raw->i_flags);
+ 
+ 	recover_inline_flags(inode, raw);
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 742147cbe759..a3e90e6f72a8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1820,7 +1820,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ 	if (!inode || !igrab(inode))
+ 		return dquot_quota_off(sb, type);
+ 
+-	f2fs_quota_sync(sb, type);
++	err = f2fs_quota_sync(sb, type);
++	if (err)
++		goto out_put;
+ 
+ 	err = dquot_quota_off(sb, type);
+ 	if (err || f2fs_sb_has_quota_ino(sb))
+@@ -1839,9 +1841,20 @@ out_put:
+ void f2fs_quota_off_umount(struct super_block *sb)
+ {
+ 	int type;
++	int err;
++
++	for (type = 0; type < MAXQUOTAS; type++) {
++		err = f2fs_quota_off(sb, type);
++		if (err) {
++			int ret = dquot_quota_off(sb, type);
+ 
+-	for (type = 0; type < MAXQUOTAS; type++)
+-		f2fs_quota_off(sb, type);
++			f2fs_msg(sb, KERN_ERR,
++				"Fail to turn off disk quota "
++				"(type: %d, err: %d, ret:%d), Please "
++				"run fsck to fix it.", type, err, ret);
++			set_sbi_flag(F2FS_SB(sb), SBI_NEED_FSCK);
++		}
++	}
+ }
+ 
+ static int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index c2469833b4fb..6b84ef6ccff3 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1333,6 +1333,9 @@ static struct dentry *gfs2_mount_meta(struct file_system_type *fs_type,
+ 	struct path path;
+ 	int error;
+ 
++	if (!dev_name || !*dev_name)
++		return ERR_PTR(-EINVAL);
++
+ 	error = kern_path(dev_name, LOOKUP_FOLLOW, &path);
+ 	if (error) {
+ 		pr_warn("path_lookup on %s returned error %d\n",
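The gfs2 hunk rejects a NULL or empty device name up front rather than letting the path lookup trip over it. The guard is simple enough to model directly in userspace (errno value as in the patch; the function name is made up for illustration):

```c
#include <assert.h>
#include <errno.h>

/* Model of the added guard: a NULL or empty device name is rejected with
 * -EINVAL before any path lookup is attempted. */
static int check_dev_name(const char *dev_name)
{
	if (!dev_name || !*dev_name)
		return -EINVAL;
	return 0;
}
```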
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index c125d662777c..26f8d7e46462 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -251,8 +251,8 @@ restart:
+ 		bh = jh2bh(jh);
+ 
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+@@ -333,8 +333,8 @@ restart2:
+ 		jh = transaction->t_checkpoint_io_list;
+ 		bh = jh2bh(jh);
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
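Both jbd2 hunks make the same one-line reorder: `get_bh()` moves before `spin_unlock()`, so the reference that pins the buffer_head is taken while the lock still guarantees the object is alive; dropping the lock first leaves a window where another CPU could free it. A deliberately simplified single-threaded sketch of the safe ordering (the struct and helpers are hypothetical stand-ins, not jbd2 code):

```c
#include <assert.h>

/* Toy model: a reference must be taken while the lock that keeps the
 * object alive is still held. Releasing the lock before get_ref() would
 * open a use-after-free window on a real SMP system. */
struct model_obj {
	int refcount;
	int locked;
};

static void model_lock(struct model_obj *o)   { o->locked = 1; }
static void model_unlock(struct model_obj *o) { o->locked = 0; }

/* Safe ordering, as in the patched checkpoint code: grab the reference
 * under the lock, then drop the lock and operate on the pinned object. */
static int pin_then_unlock(struct model_obj *o)
{
	model_lock(o);
	o->refcount++;		/* get_bh() while j_list_lock is held */
	model_unlock(o);	/* now safe: the refcount keeps it alive */
	return o->refcount;
}
```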
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 87bdf0f4cba1..902a7dd10e5c 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -285,10 +285,8 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_fs_info = c;
+ 
+ 	ret = jffs2_parse_options(c, data);
+-	if (ret) {
+-		kfree(c);
++	if (ret)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Initialize JFFS2 superblock locks, the further initialization will
+ 	 * be done later */
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index d35cd6be0675..93fb7cf0b92b 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -341,7 +341,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
+ 	};
+ 	struct lockd_net *ln = net_generic(net, lockd_net_id);
+ 
+-	dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
++	dprintk("lockd: %s(host='%.*s', vers=%u, proto=%s)\n", __func__,
+ 			(int)hostname_len, hostname, rqstp->rq_vers,
+ 			(rqstp->rq_prot == IPPROTO_UDP ? "udp" : "tcp"));
+ 
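The lockd hunk is a one-character format-string fix worth spelling out: `%*s` consumes the int argument as a minimum field *width* and still reads to the NUL terminator, while `%.*s` treats it as a *precision*, printing at most that many bytes. Only the latter is safe for a hostname buffer that is not NUL-terminated at `hostname_len`. A small demonstration (on NUL-terminated data, so it stays well-defined):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* "%*s": the int is a minimum field WIDTH; the whole string is printed. */
static void fmt_width(char *out, size_t n, const char *s, int w)
{
	snprintf(out, n, "%*s", w, s);
}

/* "%.*s": the int is a PRECISION; at most that many bytes are printed,
 * which is what a non-NUL-terminated (ptr, len) pair requires. */
static void fmt_precision(char *out, size_t n, const char *s, int p)
{
	snprintf(out, n, "%.*s", p, s);
}
```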
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index d7124fb12041..5df68d79d661 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -935,10 +935,10 @@ EXPORT_SYMBOL_GPL(nfs4_set_ds_client);
+ 
+ /*
+  * Session has been established, and the client marked ready.
+- * Set the mount rsize and wsize with negotiated fore channel
+- * attributes which will be bound checked in nfs_server_set_fsinfo.
++ * Limit the mount rsize, wsize and dtsize using negotiated fore
++ * channel attributes.
+  */
+-static void nfs4_session_set_rwsize(struct nfs_server *server)
++static void nfs4_session_limit_rwsize(struct nfs_server *server)
+ {
+ #ifdef CONFIG_NFS_V4_1
+ 	struct nfs4_session *sess;
+@@ -951,9 +951,11 @@ static void nfs4_session_set_rwsize(struct nfs_server *server)
+ 	server_resp_sz = sess->fc_attrs.max_resp_sz - nfs41_maxread_overhead;
+ 	server_rqst_sz = sess->fc_attrs.max_rqst_sz - nfs41_maxwrite_overhead;
+ 
+-	if (!server->rsize || server->rsize > server_resp_sz)
++	if (server->dtsize > server_resp_sz)
++		server->dtsize = server_resp_sz;
++	if (server->rsize > server_resp_sz)
+ 		server->rsize = server_resp_sz;
+-	if (!server->wsize || server->wsize > server_rqst_sz)
++	if (server->wsize > server_rqst_sz)
+ 		server->wsize = server_rqst_sz;
+ #endif /* CONFIG_NFS_V4_1 */
+ }
+@@ -1000,12 +1002,12 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 			(unsigned long long) server->fsid.minor);
+ 	nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
+ 
+-	nfs4_session_set_rwsize(server);
+-
+ 	error = nfs_probe_fsinfo(server, mntfh, fattr);
+ 	if (error < 0)
+ 		goto out;
+ 
++	nfs4_session_limit_rwsize(server);
++
+ 	if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
+ 		server->namelen = NFS4_MAXNAMLEN;
+ 
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 67d19cd92e44..7e6425791388 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1110,6 +1110,20 @@ static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
+ 	return ret;
+ }
+ 
++static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
++{
++	u32 midx;
++	struct nfs_pgio_mirror *mirror;
++
++	if (!desc->pg_error)
++		return;
++
++	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
++		mirror = &desc->pg_mirrors[midx];
++		desc->pg_completion_ops->error_cleanup(&mirror->pg_list);
++	}
++}
++
+ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			   struct nfs_page *req)
+ {
+@@ -1160,25 +1174,11 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 	return 1;
+ 
+ out_failed:
+-	/*
+-	 * We might have failed before sending any reqs over wire.
+-	 * Clean up rest of the reqs in mirror pg_list.
+-	 */
+-	if (desc->pg_error) {
+-		struct nfs_pgio_mirror *mirror;
+-		void (*func)(struct list_head *);
+-
+-		/* remember fatal errors */
+-		if (nfs_error_is_fatal(desc->pg_error))
+-			nfs_context_set_write_error(req->wb_context,
+-						    desc->pg_error);
+-
+-		func = desc->pg_completion_ops->error_cleanup;
+-		for (midx = 0; midx < desc->pg_mirror_count; midx++) {
+-			mirror = &desc->pg_mirrors[midx];
+-			func(&mirror->pg_list);
+-		}
+-	}
++	/* remember fatal errors */
++	if (nfs_error_is_fatal(desc->pg_error))
++		nfs_context_set_write_error(req->wb_context,
++						desc->pg_error);
++	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+ }
+ 
+@@ -1250,6 +1250,8 @@ void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
+ 	for (midx = 0; midx < desc->pg_mirror_count; midx++)
+ 		nfs_pageio_complete_mirror(desc, midx);
+ 
++	if (desc->pg_error < 0)
++		nfs_pageio_error_cleanup(desc);
+ 	if (desc->pg_ops->pg_cleanup)
+ 		desc->pg_ops->pg_cleanup(desc);
+ 	nfs_pageio_cleanup_mirroring(desc);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 4a17fad93411..18fa7fd3bae9 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4361,7 +4361,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 
+ 	fl = nfs4_alloc_init_lease(dp, NFS4_OPEN_DELEGATE_READ);
+ 	if (!fl)
+-		goto out_stid;
++		goto out_clnt_odstate;
+ 
+ 	status = vfs_setlease(fp->fi_deleg_file, fl->fl_type, &fl, NULL);
+ 	if (fl)
+@@ -4386,7 +4386,6 @@ out_unlock:
+ 	vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
+ out_clnt_odstate:
+ 	put_clnt_odstate(dp->dl_clnt_odstate);
+-out_stid:
+ 	nfs4_put_stid(&dp->dl_stid);
+ out_delegees:
+ 	put_deleg_file(fp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index ababdbfab537..f43ea1aad542 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -96,6 +96,9 @@ void fsnotify_unmount_inodes(struct super_block *sb)
+ 
+ 	if (iput_inode)
+ 		iput(iput_inode);
++	/* Wait for outstanding inode references from connectors */
++	wait_var_event(&sb->s_fsnotify_inode_refs,
++		       !atomic_long_read(&sb->s_fsnotify_inode_refs));
+ }
+ 
+ /*
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 61f4c5fa34c7..75394ae96673 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -161,15 +161,18 @@ static void fsnotify_connector_destroy_workfn(struct work_struct *work)
+ 	}
+ }
+ 
+-static struct inode *fsnotify_detach_connector_from_object(
+-					struct fsnotify_mark_connector *conn)
++static void *fsnotify_detach_connector_from_object(
++					struct fsnotify_mark_connector *conn,
++					unsigned int *type)
+ {
+ 	struct inode *inode = NULL;
+ 
++	*type = conn->type;
+ 	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE) {
+ 		inode = conn->inode;
+ 		rcu_assign_pointer(inode->i_fsnotify_marks, NULL);
+ 		inode->i_fsnotify_mask = 0;
++		atomic_long_inc(&inode->i_sb->s_fsnotify_inode_refs);
+ 		conn->inode = NULL;
+ 		conn->type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	} else if (conn->type == FSNOTIFY_OBJ_TYPE_VFSMOUNT) {
+@@ -193,10 +196,29 @@ static void fsnotify_final_mark_destroy(struct fsnotify_mark *mark)
+ 	fsnotify_put_group(group);
+ }
+ 
++/* Drop object reference originally held by a connector */
++static void fsnotify_drop_object(unsigned int type, void *objp)
++{
++	struct inode *inode;
++	struct super_block *sb;
++
++	if (!objp)
++		return;
++	/* Currently only inode references are passed to be dropped */
++	if (WARN_ON_ONCE(type != FSNOTIFY_OBJ_TYPE_INODE))
++		return;
++	inode = objp;
++	sb = inode->i_sb;
++	iput(inode);
++	if (atomic_long_dec_and_test(&sb->s_fsnotify_inode_refs))
++		wake_up_var(&sb->s_fsnotify_inode_refs);
++}
++
+ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ {
+ 	struct fsnotify_mark_connector *conn;
+-	struct inode *inode = NULL;
++	void *objp = NULL;
++	unsigned int type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	bool free_conn = false;
+ 
+ 	/* Catch marks that were actually never attached to object */
+@@ -216,7 +238,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	conn = mark->connector;
+ 	hlist_del_init_rcu(&mark->obj_list);
+ 	if (hlist_empty(&conn->list)) {
+-		inode = fsnotify_detach_connector_from_object(conn);
++		objp = fsnotify_detach_connector_from_object(conn, &type);
+ 		free_conn = true;
+ 	} else {
+ 		__fsnotify_recalc_mask(conn);
+@@ -224,7 +246,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	mark->connector = NULL;
+ 	spin_unlock(&conn->lock);
+ 
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ 
+ 	if (free_conn) {
+ 		spin_lock(&destroy_lock);
+@@ -702,7 +724,8 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ {
+ 	struct fsnotify_mark_connector *conn;
+ 	struct fsnotify_mark *mark, *old_mark = NULL;
+-	struct inode *inode;
++	void *objp;
++	unsigned int type;
+ 
+ 	conn = fsnotify_grab_connector(connp);
+ 	if (!conn)
+@@ -728,11 +751,11 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ 	 * mark references get dropped. It would lead to strange results such
+ 	 * as delaying inode deletion or blocking unmount.
+ 	 */
+-	inode = fsnotify_detach_connector_from_object(conn);
++	objp = fsnotify_detach_connector_from_object(conn, &type);
+ 	spin_unlock(&conn->lock);
+ 	if (old_mark)
+ 		fsnotify_put_mark(old_mark);
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ }
+ 
+ /*
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index dfd73a4616ce..3437da437099 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -767,6 +767,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 	smaps_walk.private = mss;
+ 
+ #ifdef CONFIG_SHMEM
++	/* In case of smaps_rollup, reset the value from previous vma */
++	mss->check_shmem_swap = false;
+ 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
+ 		/*
+ 		 * For shared or readonly shmem mappings we know that all
+@@ -782,7 +784,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 
+ 		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+ 					!(vma->vm_flags & VM_WRITE)) {
+-			mss->swap = shmem_swapped;
++			mss->swap += shmem_swapped;
+ 		} else {
+ 			mss->check_shmem_swap = true;
+ 			smaps_walk.pte_hole = smaps_pte_hole;
+diff --git a/include/crypto/speck.h b/include/crypto/speck.h
+deleted file mode 100644
+index 73cfc952d405..000000000000
+--- a/include/crypto/speck.h
++++ /dev/null
+@@ -1,62 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Common values for the Speck algorithm
+- */
+-
+-#ifndef _CRYPTO_SPECK_H
+-#define _CRYPTO_SPECK_H
+-
+-#include <linux/types.h>
+-
+-/* Speck128 */
+-
+-#define SPECK128_BLOCK_SIZE	16
+-
+-#define SPECK128_128_KEY_SIZE	16
+-#define SPECK128_128_NROUNDS	32
+-
+-#define SPECK128_192_KEY_SIZE	24
+-#define SPECK128_192_NROUNDS	33
+-
+-#define SPECK128_256_KEY_SIZE	32
+-#define SPECK128_256_NROUNDS	34
+-
+-struct speck128_tfm_ctx {
+-	u64 round_keys[SPECK128_256_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keysize);
+-
+-/* Speck64 */
+-
+-#define SPECK64_BLOCK_SIZE	8
+-
+-#define SPECK64_96_KEY_SIZE	12
+-#define SPECK64_96_NROUNDS	26
+-
+-#define SPECK64_128_KEY_SIZE	16
+-#define SPECK64_128_NROUNDS	27
+-
+-struct speck64_tfm_ctx {
+-	u32 round_keys[SPECK64_128_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keysize);
+-
+-#endif /* _CRYPTO_SPECK_H */
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index a57a8aa90ffb..2b0d02458a18 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -153,6 +153,17 @@ struct __drm_planes_state {
+ struct __drm_crtcs_state {
+ 	struct drm_crtc *ptr;
+ 	struct drm_crtc_state *state, *old_state, *new_state;
++
++	/**
++	 * @commit:
++	 *
++	 * A reference to the CRTC commit object that is kept for use by
++	 * drm_atomic_helper_wait_for_flip_done() after
++	 * drm_atomic_helper_commit_hw_done() is called. This ensures that a
++	 * concurrent commit won't free a commit object that is still in use.
++	 */
++	struct drm_crtc_commit *commit;
++
+ 	s32 __user *out_fence_ptr;
+ 	u64 last_vblank_count;
+ };
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index c68acc47da57..47041c7fed28 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -103,6 +103,9 @@ typedef struct compat_sigaltstack {
+ 	compat_size_t			ss_size;
+ } compat_stack_t;
+ #endif
++#ifndef COMPAT_MINSIGSTKSZ
++#define COMPAT_MINSIGSTKSZ	MINSIGSTKSZ
++#endif
+ 
+ #define compat_jiffies_to_clock_t(x)	\
+ 		(((unsigned long)(x) * COMPAT_USER_HZ) / HZ)
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index e73363bd8646..cf23c128ac46 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1416,6 +1416,9 @@ struct super_block {
+ 	/* Number of inodes with nlink == 0 but still referenced */
+ 	atomic_long_t s_remove_count;
+ 
++	/* Pending fsnotify inode refs */
++	atomic_long_t s_fsnotify_inode_refs;
++
+ 	/* Being remounted read-only */
+ 	int s_readonly_remount;
+ 
+diff --git a/include/linux/hdmi.h b/include/linux/hdmi.h
+index d271ff23984f..4f3febc0f971 100644
+--- a/include/linux/hdmi.h
++++ b/include/linux/hdmi.h
+@@ -101,8 +101,8 @@ enum hdmi_extended_colorimetry {
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_601,
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_709,
+ 	HDMI_EXTENDED_COLORIMETRY_S_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB,
++	HDMI_EXTENDED_COLORIMETRY_OPYCC_601,
++	HDMI_EXTENDED_COLORIMETRY_OPRGB,
+ 
+ 	/* The following EC values are only defined in CEA-861-F. */
+ 	HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM,
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index 3c5200137b24..42ba31da534f 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -36,7 +36,7 @@ enum siginfo_layout {
+ 	SIL_SYS,
+ };
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code);
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
+ 
+ /*
+  * Define some primitives to manipulate sigset_t.
+diff --git a/include/linux/tc.h b/include/linux/tc.h
+index f92511e57cdb..a60639f37963 100644
+--- a/include/linux/tc.h
++++ b/include/linux/tc.h
+@@ -84,6 +84,7 @@ struct tc_dev {
+ 					   device. */
+ 	struct device	dev;		/* Generic device interface. */
+ 	struct resource	resource;	/* Address space of this device. */
++	u64		dma_mask;	/* DMA addressable range. */
+ 	char		vendor[9];
+ 	char		name[9];
+ 	char		firmware[9];
+diff --git a/include/media/cec.h b/include/media/cec.h
+index 580ab1042898..71cc0272b053 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -63,7 +63,6 @@ struct cec_data {
+ 	struct delayed_work work;
+ 	struct completion c;
+ 	u8 attempts;
+-	bool new_initiator;
+ 	bool blocking;
+ 	bool completed;
+ };
+@@ -174,6 +173,7 @@ struct cec_adapter {
+ 	bool is_configuring;
+ 	bool is_configured;
+ 	bool cec_pin_is_high;
++	u8 last_initiator;
+ 	u32 monitor_all_cnt;
+ 	u32 monitor_pin_cnt;
+ 	u32 follower_cnt;
+@@ -451,4 +451,74 @@ static inline void cec_phys_addr_invalidate(struct cec_adapter *adap)
+ 	cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);
+ }
+ 
++/**
++ * cec_get_edid_spa_location() - find location of the Source Physical Address
++ *
++ * @edid: the EDID
++ * @size: the size of the EDID
++ *
++ * This EDID is expected to be a CEA-861 compliant, which means that there are
++ * at least two blocks and one or more of the extensions blocks are CEA-861
++ * blocks.
++ *
++ * The returned location is guaranteed to be <= size-2.
++ *
++ * This is an inline function since it is used by both CEC and V4L2.
++ * Ideally this would go in a module shared by both, but it is overkill to do
++ * that for just a single function.
++ */
++static inline unsigned int cec_get_edid_spa_location(const u8 *edid,
++						     unsigned int size)
++{
++	unsigned int blocks = size / 128;
++	unsigned int block;
++	u8 d;
++
++	/* Sanity check: at least 2 blocks and a multiple of the block size */
++	if (blocks < 2 || size % 128)
++		return 0;
++
++	/*
++	 * If there are fewer extension blocks than the size, then update
++	 * 'blocks'. It is allowed to have more extension blocks than the size,
++	 * since some hardware can only read e.g. 256 bytes of the EDID, even
++	 * though more blocks are present. The first CEA-861 extension block
++	 * should normally be in block 1 anyway.
++	 */
++	if (edid[0x7e] + 1 < blocks)
++		blocks = edid[0x7e] + 1;
++
++	for (block = 1; block < blocks; block++) {
++		unsigned int offset = block * 128;
++
++		/* Skip any non-CEA-861 extension blocks */
++		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
++			continue;
++
++		/* search Vendor Specific Data Block (tag 3) */
++		d = edid[offset + 2] & 0x7f;
++		/* Check if there are Data Blocks */
++		if (d <= 4)
++			continue;
++		if (d > 4) {
++			unsigned int i = offset + 4;
++			unsigned int end = offset + d;
++
++			/* Note: 'end' is always < 'size' */
++			do {
++				u8 tag = edid[i] >> 5;
++				u8 len = edid[i] & 0x1f;
++
++				if (tag == 3 && len >= 5 && i + len <= end &&
++				    edid[i + 1] == 0x03 &&
++				    edid[i + 2] == 0x0c &&
++				    edid[i + 3] == 0x00)
++					return i + 4;
++				i += len + 1;
++			} while (i < end);
++		}
++	}
++	return 0;
++}
++
+ #endif /* _MEDIA_CEC_H */
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 6c003995347a..59185fbbd202 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -1296,21 +1296,27 @@ struct ib_qp_attr {
+ };
+ 
+ enum ib_wr_opcode {
+-	IB_WR_RDMA_WRITE,
+-	IB_WR_RDMA_WRITE_WITH_IMM,
+-	IB_WR_SEND,
+-	IB_WR_SEND_WITH_IMM,
+-	IB_WR_RDMA_READ,
+-	IB_WR_ATOMIC_CMP_AND_SWP,
+-	IB_WR_ATOMIC_FETCH_AND_ADD,
+-	IB_WR_LSO,
+-	IB_WR_SEND_WITH_INV,
+-	IB_WR_RDMA_READ_WITH_INV,
+-	IB_WR_LOCAL_INV,
+-	IB_WR_REG_MR,
+-	IB_WR_MASKED_ATOMIC_CMP_AND_SWP,
+-	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++	/* These are shared with userspace */
++	IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE,
++	IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM,
++	IB_WR_SEND = IB_UVERBS_WR_SEND,
++	IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM,
++	IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
++	IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
++	IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
++	IB_WR_LSO = IB_UVERBS_WR_TSO,
++	IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
++	IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
++	IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV,
++	IB_WR_MASKED_ATOMIC_CMP_AND_SWP =
++		IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
++	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
++		IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++
++	/* These are kernel only and can not be issued by userspace */
++	IB_WR_REG_MR = 0x20,
+ 	IB_WR_REG_SIG_MR,
++
+ 	/* reserve values for low level drivers' internal use.
+ 	 * These values will not be used at all in the ib core layer.
+ 	 */
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index 20fe091b7e96..bc2a1b98d9dd 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -152,10 +152,13 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ #define CEC_TX_STATUS_LOW_DRIVE		(1 << 3)
+ #define CEC_TX_STATUS_ERROR		(1 << 4)
+ #define CEC_TX_STATUS_MAX_RETRIES	(1 << 5)
++#define CEC_TX_STATUS_ABORTED		(1 << 6)
++#define CEC_TX_STATUS_TIMEOUT		(1 << 7)
+ 
+ #define CEC_RX_STATUS_OK		(1 << 0)
+ #define CEC_RX_STATUS_TIMEOUT		(1 << 1)
+ #define CEC_RX_STATUS_FEATURE_ABORT	(1 << 2)
++#define CEC_RX_STATUS_ABORTED		(1 << 3)
+ 
+ static inline int cec_msg_status_is_ok(const struct cec_msg *msg)
+ {
+diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
+index 73e01918f996..a441ea1bfe6d 100644
+--- a/include/uapi/linux/fs.h
++++ b/include/uapi/linux/fs.h
+@@ -279,8 +279,8 @@ struct fsxattr {
+ #define FS_ENCRYPTION_MODE_AES_256_CTS		4
+ #define FS_ENCRYPTION_MODE_AES_128_CBC		5
+ #define FS_ENCRYPTION_MODE_AES_128_CTS		6
+-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7
+-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8
++#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7 /* Removed, do not use. */
++#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8 /* Removed, do not use. */
+ 
+ struct fscrypt_policy {
+ 	__u8 version;
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 7e27070b9440..2f2c43d633c5 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -128,37 +128,31 @@ enum {
+ 
+ static inline const char *nvdimm_bus_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_ARS_CAP] = "ars_cap",
+-		[ND_CMD_ARS_START] = "ars_start",
+-		[ND_CMD_ARS_STATUS] = "ars_status",
+-		[ND_CMD_CLEAR_ERROR] = "clear_error",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_ARS_CAP:		return "ars_cap";
++	case ND_CMD_ARS_START:		return "ars_start";
++	case ND_CMD_ARS_STATUS:		return "ars_status";
++	case ND_CMD_CLEAR_ERROR:	return "clear_error";
++	case ND_CMD_CALL:		return "cmd_call";
++	default:			return "unknown";
++	}
+ }
+ 
+ static inline const char *nvdimm_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_SMART] = "smart",
+-		[ND_CMD_SMART_THRESHOLD] = "smart_thresh",
+-		[ND_CMD_DIMM_FLAGS] = "flags",
+-		[ND_CMD_GET_CONFIG_SIZE] = "get_size",
+-		[ND_CMD_GET_CONFIG_DATA] = "get_data",
+-		[ND_CMD_SET_CONFIG_DATA] = "set_data",
+-		[ND_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
+-		[ND_CMD_VENDOR_EFFECT_LOG] = "effect_log",
+-		[ND_CMD_VENDOR] = "vendor",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_SMART:			return "smart";
++	case ND_CMD_SMART_THRESHOLD:		return "smart_thresh";
++	case ND_CMD_DIMM_FLAGS:			return "flags";
++	case ND_CMD_GET_CONFIG_SIZE:		return "get_size";
++	case ND_CMD_GET_CONFIG_DATA:		return "get_data";
++	case ND_CMD_SET_CONFIG_DATA:		return "set_data";
++	case ND_CMD_VENDOR_EFFECT_LOG_SIZE:	return "effect_size";
++	case ND_CMD_VENDOR_EFFECT_LOG:		return "effect_log";
++	case ND_CMD_VENDOR:			return "vendor";
++	case ND_CMD_CALL:			return "cmd_call";
++	default:				return "unknown";
++	}
+ }
+ 
+ #define ND_IOCTL 'N'
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 600877be5c22..082dc1439a50 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -225,8 +225,8 @@ enum v4l2_colorspace {
+ 	/* For RGB colorspaces such as produces by most webcams. */
+ 	V4L2_COLORSPACE_SRGB          = 8,
+ 
+-	/* AdobeRGB colorspace */
+-	V4L2_COLORSPACE_ADOBERGB      = 9,
++	/* opRGB colorspace */
++	V4L2_COLORSPACE_OPRGB         = 9,
+ 
+ 	/* BT.2020 colorspace, used for UHDTV. */
+ 	V4L2_COLORSPACE_BT2020        = 10,
+@@ -258,7 +258,7 @@ enum v4l2_xfer_func {
+ 	 *
+ 	 * V4L2_COLORSPACE_SRGB, V4L2_COLORSPACE_JPEG: V4L2_XFER_FUNC_SRGB
+ 	 *
+-	 * V4L2_COLORSPACE_ADOBERGB: V4L2_XFER_FUNC_ADOBERGB
++	 * V4L2_COLORSPACE_OPRGB: V4L2_XFER_FUNC_OPRGB
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE240M: V4L2_XFER_FUNC_SMPTE240M
+ 	 *
+@@ -269,7 +269,7 @@ enum v4l2_xfer_func {
+ 	V4L2_XFER_FUNC_DEFAULT     = 0,
+ 	V4L2_XFER_FUNC_709         = 1,
+ 	V4L2_XFER_FUNC_SRGB        = 2,
+-	V4L2_XFER_FUNC_ADOBERGB    = 3,
++	V4L2_XFER_FUNC_OPRGB       = 3,
+ 	V4L2_XFER_FUNC_SMPTE240M   = 4,
+ 	V4L2_XFER_FUNC_NONE        = 5,
+ 	V4L2_XFER_FUNC_DCI_P3      = 6,
+@@ -281,7 +281,7 @@ enum v4l2_xfer_func {
+  * This depends on the colorspace.
+  */
+ #define V4L2_MAP_XFER_FUNC_DEFAULT(colsp) \
+-	((colsp) == V4L2_COLORSPACE_ADOBERGB ? V4L2_XFER_FUNC_ADOBERGB : \
++	((colsp) == V4L2_COLORSPACE_OPRGB ? V4L2_XFER_FUNC_OPRGB : \
+ 	 ((colsp) == V4L2_COLORSPACE_SMPTE240M ? V4L2_XFER_FUNC_SMPTE240M : \
+ 	  ((colsp) == V4L2_COLORSPACE_DCI_P3 ? V4L2_XFER_FUNC_DCI_P3 : \
+ 	   ((colsp) == V4L2_COLORSPACE_RAW ? V4L2_XFER_FUNC_NONE : \
+@@ -295,7 +295,7 @@ enum v4l2_ycbcr_encoding {
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_470_SYSTEM_M,
+ 	 * V4L2_COLORSPACE_470_SYSTEM_BG, V4L2_COLORSPACE_SRGB,
+-	 * V4L2_COLORSPACE_ADOBERGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
++	 * V4L2_COLORSPACE_OPRGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
+ 	 *
+ 	 * V4L2_COLORSPACE_REC709 and V4L2_COLORSPACE_DCI_P3: V4L2_YCBCR_ENC_709
+ 	 *
+@@ -382,6 +382,17 @@ enum v4l2_quantization {
+ 	 (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
+ 	 V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE))
+ 
++/*
++ * Deprecated names for opRGB colorspace (IEC 61966-2-5)
++ *
++ * WARNING: Please don't use these deprecated defines in your code, as
++ * there is a chance we have to remove them in the future.
++ */
++#ifndef __KERNEL__
++#define V4L2_COLORSPACE_ADOBERGB V4L2_COLORSPACE_OPRGB
++#define V4L2_XFER_FUNC_ADOBERGB  V4L2_XFER_FUNC_OPRGB
++#endif
++
+ enum v4l2_priority {
+ 	V4L2_PRIORITY_UNSET       = 0,  /* not initialized */
+ 	V4L2_PRIORITY_BACKGROUND  = 1,
+diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
+index 4f9991de8e3a..8345ca799ad8 100644
+--- a/include/uapi/rdma/ib_user_verbs.h
++++ b/include/uapi/rdma/ib_user_verbs.h
+@@ -762,10 +762,28 @@ struct ib_uverbs_sge {
+ 	__u32 lkey;
+ };
+ 
++enum ib_uverbs_wr_opcode {
++	IB_UVERBS_WR_RDMA_WRITE = 0,
++	IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1,
++	IB_UVERBS_WR_SEND = 2,
++	IB_UVERBS_WR_SEND_WITH_IMM = 3,
++	IB_UVERBS_WR_RDMA_READ = 4,
++	IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5,
++	IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6,
++	IB_UVERBS_WR_LOCAL_INV = 7,
++	IB_UVERBS_WR_BIND_MW = 8,
++	IB_UVERBS_WR_SEND_WITH_INV = 9,
++	IB_UVERBS_WR_TSO = 10,
++	IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
++	IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
++	IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
++	/* Review enum ib_wr_opcode before modifying this */
++};
++
+ struct ib_uverbs_send_wr {
+ 	__aligned_u64 wr_id;
+ 	__u32 num_sge;
+-	__u32 opcode;
++	__u32 opcode;		/* see enum ib_uverbs_wr_opcode */
+ 	__u32 send_flags;
+ 	union {
+ 		__be32 imm_data;
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index c373e887c066..9795d75b09b2 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -13,7 +13,7 @@
+ #include <linux/log2.h>
+ #include <linux/spinlock_types.h>
+ 
+-void foo(void)
++int main(void)
+ {
+ 	/* The enum constants to put into include/generated/bounds.h */
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+@@ -23,4 +23,6 @@ void foo(void)
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
++
++	return 0;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index a31a1ba0f8ea..0f5d2e66cd6b 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -683,6 +683,17 @@ err_put:
+ 	return err;
+ }
+ 
++static void maybe_wait_bpf_programs(struct bpf_map *map)
++{
++	/* Wait for any running BPF programs to complete so that
++	 * userspace, when we return to it, knows that all programs
++	 * that could be running use the new map value.
++	 */
++	if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
++	    map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
++		synchronize_rcu();
++}
++
+ #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
+ 
+ static int map_update_elem(union bpf_attr *attr)
+@@ -769,6 +780,7 @@ static int map_update_elem(union bpf_attr *attr)
+ 	}
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ free_value:
+ 	kfree(value);
+@@ -821,6 +833,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ 	rcu_read_unlock();
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ 	kfree(key);
+ err_put:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b000686fa1a1..d565ec6af97c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -553,7 +553,9 @@ static void __mark_reg_not_init(struct bpf_reg_state *reg);
+  */
+ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
+ {
+-	reg->id = 0;
++	/* Clear id, off, and union(map_ptr, range) */
++	memset(((u8 *)reg) + sizeof(reg->type), 0,
++	       offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
+ 	reg->var_off = tnum_const(imm);
+ 	reg->smin_value = (s64)imm;
+ 	reg->smax_value = (s64)imm;
+@@ -572,7 +574,6 @@ static void __mark_reg_known_zero(struct bpf_reg_state *reg)
+ static void __mark_reg_const_zero(struct bpf_reg_state *reg)
+ {
+ 	__mark_reg_known(reg, 0);
+-	reg->off = 0;
+ 	reg->type = SCALAR_VALUE;
+ }
+ 
+@@ -683,9 +684,12 @@ static void __mark_reg_unbounded(struct bpf_reg_state *reg)
+ /* Mark a register as having a completely unknown (scalar) value. */
+ static void __mark_reg_unknown(struct bpf_reg_state *reg)
+ {
++	/*
++	 * Clear type, id, off, and union(map_ptr, range) and
++	 * padding between 'type' and union
++	 */
++	memset(reg, 0, offsetof(struct bpf_reg_state, var_off));
+ 	reg->type = SCALAR_VALUE;
+-	reg->id = 0;
+-	reg->off = 0;
+ 	reg->var_off = tnum_unknown;
+ 	reg->frameno = 0;
+ 	__mark_reg_unbounded(reg);
+@@ -1726,9 +1730,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			else
+ 				mark_reg_known_zero(env, regs,
+ 						    value_regno);
+-			regs[value_regno].id = 0;
+-			regs[value_regno].off = 0;
+-			regs[value_regno].range = 0;
+ 			regs[value_regno].type = reg_type;
+ 		}
+ 
+@@ -2549,7 +2550,6 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
+ 		regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
+ 		/* There is no offset yet applied, variable or fixed */
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].off = 0;
+ 		/* remember map_ptr, so that check_map_access()
+ 		 * can check 'value_size' boundary of memory access
+ 		 * to map element returned from bpf_map_lookup_elem()
+diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
+index b3c557476a8d..c98501a04742 100644
+--- a/kernel/bpf/xskmap.c
++++ b/kernel/bpf/xskmap.c
+@@ -191,11 +191,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
+ 	sock_hold(sock->sk);
+ 
+ 	old_xs = xchg(&m->xsk_map[i], xs);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	sockfd_put(sock);
+ 	return 0;
+@@ -211,11 +208,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
+ 		return -EINVAL;
+ 
+ 	old_xs = xchg(&m->xsk_map[k], NULL);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 517907b082df..3ec5a37e3068 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2033,6 +2033,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
+ 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+ }
+ 
++/*
++ * Architectures that need SMT-specific errata handling during SMT hotplug
++ * should override this.
++ */
++void __weak arch_smt_update(void) { };
++
+ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ {
+ 	int cpu, ret = 0;
+@@ -2059,8 +2065,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 		 */
+ 		cpuhp_offline_cpu_device(cpu);
+ 	}
+-	if (!ret)
++	if (!ret) {
+ 		cpu_smt_control = ctrlval;
++		arch_smt_update();
++	}
+ 	cpu_maps_update_done();
+ 	return ret;
+ }
+@@ -2071,6 +2079,7 @@ static int cpuhp_smt_enable(void)
+ 
+ 	cpu_maps_update_begin();
+ 	cpu_smt_control = CPU_SMT_ENABLED;
++	arch_smt_update();
+ 	for_each_present_cpu(cpu) {
+ 		/* Skip online CPUs and CPUs on offline nodes */
+ 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
+index d987dcd1bd56..54a33337680f 100644
+--- a/kernel/dma/contiguous.c
++++ b/kernel/dma/contiguous.c
+@@ -49,7 +49,11 @@ static phys_addr_t limit_cmdline;
+ 
+ static int __init early_cma(char *p)
+ {
+-	pr_debug("%s(%s)\n", __func__, p);
++	if (!p) {
++		pr_err("Config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size_cmdline = memparse(p, &p);
+ 	if (*p != '@')
+ 		return 0;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 9a8b7ba9aa88..c4e31f44a0ff 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -920,6 +920,9 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 
+ 	local_bh_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	local_bh_enable();
+ 	return ret;
+@@ -936,6 +939,9 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
+ 	irqreturn_t ret;
+ 
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	return ret;
+ }
+@@ -1013,8 +1019,6 @@ static int irq_thread(void *data)
+ 		irq_thread_check_affinity(desc, action);
+ 
+ 		action_ret = handler_fn(desc, action);
+-		if (action_ret == IRQ_HANDLED)
+-			atomic_inc(&desc->threads_handled);
+ 		if (action_ret == IRQ_WAKE_THREAD)
+ 			irq_wake_secondary(desc, action);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f3183ad10d96..07f912b765db 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -700,9 +700,10 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
+ }
+ 
+ /* Cancel unoptimizing for reusing */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
+ 	struct optimized_kprobe *op;
++	int ret;
+ 
+ 	BUG_ON(!kprobe_unused(ap));
+ 	/*
+@@ -714,8 +715,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+-	BUG_ON(!kprobe_optready(ap));
++	ret = kprobe_optready(ap);
++	if (ret)
++		return ret;
++
+ 	optimize_kprobe(ap);
++	return 0;
+ }
+ 
+ /* Remove optimized instructions */
+@@ -940,11 +945,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
+ #define kprobe_disarmed(p)			kprobe_disabled(p)
+ #define wait_for_kprobe_optimizer()		do {} while (0)
+ 
+-/* There should be no unused kprobes can be reused without optimization */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
++	/*
++	 * If the optimized kprobe is NOT supported, the aggr kprobe is
++	 * released at the same time that the last aggregated kprobe is
++	 * unregistered.
++	 * Thus there should be no chance to reuse unused kprobe.
++	 */
+ 	printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
+-	BUG_ON(kprobe_unused(ap));
++	return -EINVAL;
+ }
+ 
+ static void free_aggr_kprobe(struct kprobe *p)
+@@ -1343,9 +1353,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
+ 			goto out;
+ 		}
+ 		init_aggr_kprobe(ap, orig_p);
+-	} else if (kprobe_unused(ap))
++	} else if (kprobe_unused(ap)) {
+ 		/* This probe is going to die. Rescue it */
+-		reuse_unused_kprobe(ap);
++		ret = reuse_unused_kprobe(ap);
++		if (ret)
++			goto out;
++	}
+ 
+ 	if (kprobe_gone(ap)) {
+ 		/*
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 5fa4d3138bf1..aa6ebb799f16 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -4148,7 +4148,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+@@ -4168,7 +4168,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 1d1513215c22..72de8cc5a13e 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1047,7 +1047,12 @@ static void __init log_buf_len_update(unsigned size)
+ /* save requested log_buf_len since it's too early to process it */
+ static int __init log_buf_len_setup(char *str)
+ {
+-	unsigned size = memparse(str, &str);
++	unsigned int size;
++
++	if (!str)
++		return -EINVAL;
++
++	size = memparse(str, &str);
+ 
+ 	log_buf_len_update(size);
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index b27b9509ea89..9e4f550e4797 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4321,7 +4321,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	 * put back on, and if we advance min_vruntime, we'll be placed back
+ 	 * further than we started -- ie. we'll be penalized.
+ 	 */
+-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
++	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
+ 		update_min_vruntime(cfs_rq);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 8d8a940422a8..dce9859f6547 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1009,7 +1009,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 
+ 	result = TRACE_SIGNAL_IGNORED;
+ 	if (!prepare_signal(sig, t,
+-			from_ancestor_ns || (info == SEND_SIG_FORCED)))
++			from_ancestor_ns || (info == SEND_SIG_PRIV) || (info == SEND_SIG_FORCED)))
+ 		goto ret;
+ 
+ 	pending = group ? &t->signal->shared_pending : &t->pending;
+@@ -2804,7 +2804,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, compat_sigset_t __user *, uset,
+ }
+ #endif
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code)
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code)
+ {
+ 	enum siginfo_layout layout = SIL_KILL;
+ 	if ((si_code > SI_USER) && (si_code < SI_KERNEL)) {
+@@ -3417,7 +3417,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
+ }
+ 
+ static int
+-do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
++do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
++		size_t min_ss_size)
+ {
+ 	struct task_struct *t = current;
+ 
+@@ -3447,7 +3448,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
+ 			ss_size = 0;
+ 			ss_sp = NULL;
+ 		} else {
+-			if (unlikely(ss_size < MINSIGSTKSZ))
++			if (unlikely(ss_size < min_ss_size))
+ 				return -ENOMEM;
+ 		}
+ 
+@@ -3465,7 +3466,8 @@ SYSCALL_DEFINE2(sigaltstack,const stack_t __user *,uss, stack_t __user *,uoss)
+ 	if (uss && copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+ 	err = do_sigaltstack(uss ? &new : NULL, uoss ? &old : NULL,
+-			      current_user_stack_pointer());
++			      current_user_stack_pointer(),
++			      MINSIGSTKSZ);
+ 	if (!err && uoss && copy_to_user(uoss, &old, sizeof(stack_t)))
+ 		err = -EFAULT;
+ 	return err;
+@@ -3476,7 +3478,8 @@ int restore_altstack(const stack_t __user *uss)
+ 	stack_t new;
+ 	if (copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+-	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer());
++	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer(),
++			     MINSIGSTKSZ);
+ 	/* squash all but EFAULT for now */
+ 	return 0;
+ }
+@@ -3510,7 +3513,8 @@ static int do_compat_sigaltstack(const compat_stack_t __user *uss_ptr,
+ 		uss.ss_size = uss32.ss_size;
+ 	}
+ 	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss,
+-			     compat_user_stack_pointer());
++			     compat_user_stack_pointer(),
++			     COMPAT_MINSIGSTKSZ);
+ 	if (ret >= 0 && uoss_ptr)  {
+ 		compat_stack_t old;
+ 		memset(&old, 0, sizeof(old));
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 6c78bc2b7fff..b3482eed270c 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1072,8 +1072,10 @@ static int create_synth_event(int argc, char **argv)
+ 		event = NULL;
+ 		ret = -EEXIST;
+ 		goto out;
+-	} else if (delete_event)
++	} else if (delete_event) {
++		ret = -ENOENT;
+ 		goto out;
++	}
+ 
+ 	if (argc < 2) {
+ 		ret = -EINVAL;
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index e5222b5fb4fe..923414a246e9 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -974,10 +974,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
+ 		goto out;
+ 
+-	ret = sort_idmaps(&new_map);
+-	if (ret < 0)
+-		goto out;
+-
+ 	ret = -EPERM;
+ 	/* Map the lower ids from the parent user namespace to the
+ 	 * kernel global id space.
+@@ -1004,6 +1000,14 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 		e->lower_first = lower_first;
+ 	}
+ 
++	/*
++	 * If we want to use binary search for lookup, this clones the extent
++	 * array and sorts both copies.
++	 */
++	ret = sort_idmaps(&new_map);
++	if (ret < 0)
++		goto out;
++
+ 	/* Install the map */
+ 	if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
+ 		memcpy(map->extent, new_map.extent,
+diff --git a/lib/debug_locks.c b/lib/debug_locks.c
+index 96c4c633d95e..124fdf238b3d 100644
+--- a/lib/debug_locks.c
++++ b/lib/debug_locks.c
+@@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
+  */
+ int debug_locks_off(void)
+ {
+-	if (__debug_locks_off()) {
++	if (debug_locks && __debug_locks_off()) {
+ 		if (!debug_locks_silent) {
+ 			console_verbose();
+ 			return 1;
+diff --git a/mm/hmm.c b/mm/hmm.c
+index f9d1d89dec4d..49e3db686348 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 	spin_lock_init(&hmm->lock);
+ 	hmm->mm = mm;
+ 
+-	/*
+-	 * We should only get here if hold the mmap_sem in write mode ie on
+-	 * registration of first mirror through hmm_mirror_register()
+-	 */
+-	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
+-	if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) {
+-		kfree(hmm);
+-		return NULL;
+-	}
+-
+ 	spin_lock(&mm->page_table_lock);
+ 	if (!mm->hmm)
+ 		mm->hmm = hmm;
+@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 		cleanup = true;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	if (cleanup) {
+-		mmu_notifier_unregister(&hmm->mmu_notifier, mm);
+-		kfree(hmm);
+-	}
++	if (cleanup)
++		goto error;
++
++	/*
++	 * We should only get here if hold the mmap_sem in write mode ie on
++	 * registration of first mirror through hmm_mirror_register()
++	 */
++	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
++	if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
++		goto error_mm;
+ 
+ 	return mm->hmm;
++
++error_mm:
++	spin_lock(&mm->page_table_lock);
++	if (mm->hmm == hmm)
++		mm->hmm = NULL;
++	spin_unlock(&mm->page_table_lock);
++error:
++	kfree(hmm);
++	return NULL;
+ }
+ 
+ void hmm_mm_destroy(struct mm_struct *mm)
+@@ -275,12 +280,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
+ 	if (!should_unregister || mm == NULL)
+ 		return;
+ 
++	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
++
+ 	spin_lock(&mm->page_table_lock);
+ 	if (mm->hmm == hmm)
+ 		mm->hmm = NULL;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
+ 	kfree(hmm);
+ }
+ EXPORT_SYMBOL(hmm_mirror_unregister);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index f469315a6a0f..5b38fbef9441 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3678,6 +3678,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
+ 		return err;
+ 	ClearPagePrivate(page);
+ 
++	/*
++	 * set page dirty so that it will not be removed from cache/file
++	 * by non-hugetlbfs specific code paths.
++	 */
++	set_page_dirty(page);
++
+ 	spin_lock(&inode->i_lock);
+ 	inode->i_blocks += blocks_per_huge_page(h);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index ae3c2a35d61b..11df03e71288 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
+ 			if (!is_swap_pte(*pvmw->pte))
+ 				return false;
+ 		} else {
+-			if (!pte_present(*pvmw->pte))
++			/*
++			 * We get here when we are trying to unmap a private
++			 * device page from the process address space. Such
++			 * page is not CPU accessible and thus is mapped as
++			 * a special swap entry, nonetheless it still does
++			 * count as a valid regular mapping for the page (and
++			 * is accounted as such in page maps count).
++			 *
++			 * So handle this special case as if it was a normal
++			 * page mapping ie lock CPU page table and returns
++			 * true.
++			 *
++			 * For more details on device private memory see HMM
++			 * (include/linux/hmm.h or mm/hmm.c).
++			 */
++			if (is_swap_pte(*pvmw->pte)) {
++				swp_entry_t entry;
++
++				/* Handle un-addressable ZONE_DEVICE memory */
++				entry = pte_to_swp_entry(*pvmw->pte);
++				if (!is_device_private_entry(entry))
++					return false;
++			} else if (!pte_present(*pvmw->pte))
+ 				return false;
+ 		}
+ 	}
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 5e4f04004a49..7bf833598615 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -106,6 +106,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		iterate_fd(p->files, 0, update_classid_sock,
+ 			   (void *)(unsigned long)cs->classid);
+ 		task_unlock(p);
++		cond_resched();
+ 	}
+ 	css_task_iter_end(&it);
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 82178cc69c96..777fa3b7fb13 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const struct cipso_v4_doi *doi_def,
+  *
+  * Description:
+  * Parse the packet's IP header looking for a CIPSO option.  Returns a pointer
+- * to the start of the CIPSO option on success, NULL if one if not found.
++ * to the start of the CIPSO option on success, NULL if one is not found.
+  *
+  */
+ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 	int optlen;
+ 	int taglen;
+ 
+-	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) {
++	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) {
+ 		switch (optptr[0]) {
+-		case IPOPT_CIPSO:
+-			return optptr;
+ 		case IPOPT_END:
+ 			return NULL;
+ 		case IPOPT_NOOP:
+@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 		default:
+ 			taglen = optptr[1];
+ 		}
++		if (!taglen || taglen > optlen)
++			return NULL;
++		if (optptr[0] == IPOPT_CIPSO)
++			return optptr;
++
+ 		optlen -= taglen;
+ 		optptr += taglen;
+ 	}
+diff --git a/net/netfilter/xt_nat.c b/net/netfilter/xt_nat.c
+index 8af9707f8789..ac91170fc8c8 100644
+--- a/net/netfilter/xt_nat.c
++++ b/net/netfilter/xt_nat.c
+@@ -216,6 +216,8 @@ static struct xt_target xt_nat_target_reg[] __read_mostly = {
+ 	{
+ 		.name		= "DNAT",
+ 		.revision	= 2,
++		.checkentry	= xt_nat_checkentry,
++		.destroy	= xt_nat_destroy,
+ 		.target		= xt_dnat_target_v2,
+ 		.targetsize	= sizeof(struct nf_nat_range2),
+ 		.table		= "nat",
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 57f71765febe..ce852f8c1d27 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1306,7 +1306,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 5185efb9027b..83ccd0221c98 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -989,7 +989,7 @@ static void call_xpt_users(struct svc_xprt *xprt)
+ 	spin_lock(&xprt->xpt_lock);
+ 	while (!list_empty(&xprt->xpt_users)) {
+ 		u = list_first_entry(&xprt->xpt_users, struct svc_xpt_user, list);
+-		list_del(&u->list);
++		list_del_init(&u->list);
+ 		u->callback(u);
+ 	}
+ 	spin_unlock(&xprt->xpt_lock);
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index a68180090554..b9827665ff35 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -248,6 +248,7 @@ static void
+ xprt_rdma_bc_close(struct rpc_xprt *xprt)
+ {
+ 	dprintk("svcrdma: %s: xprt %p\n", __func__, xprt);
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ static void
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 143ce2579ba9..98cbc7b060ba 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -468,6 +468,12 @@ xprt_rdma_close(struct rpc_xprt *xprt)
+ 		xprt->reestablish_timeout = 0;
+ 	xprt_disconnect_done(xprt);
+ 	rpcrdma_ep_disconnect(ep, ia);
++
++	/* Prepare @xprt for the next connection by reinitializing
++	 * its credit grant to one (see RFC 8166, Section 3.3.3).
++	 */
++	r_xprt->rx_buf.rb_credits = 1;
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ /**
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 4e937cd7c17d..661504042d30 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -744,6 +744,8 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ 	sk->sk_destruct = xsk_destruct;
+ 	sk_refcnt_debug_inc(sk);
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 	xs = xdp_sk(sk);
+ 	mutex_init(&xs->mutex);
+ 	spin_lock_init(&xs->tx_completion_lock);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 526e6814ed4b..1d2e0a90c0ca 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -625,9 +625,9 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 				break;
+ 		}
+ 		if (newpos)
+-			hlist_add_behind(&policy->bydst, newpos);
++			hlist_add_behind_rcu(&policy->bydst, newpos);
+ 		else
+-			hlist_add_head(&policy->bydst, chain);
++			hlist_add_head_rcu(&policy->bydst, chain);
+ 	}
+ 
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+@@ -766,9 +766,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ 			break;
+ 	}
+ 	if (newpos)
+-		hlist_add_behind(&policy->bydst, newpos);
++		hlist_add_behind_rcu(&policy->bydst, newpos);
+ 	else
+-		hlist_add_head(&policy->bydst, chain);
++		hlist_add_head_rcu(&policy->bydst, chain);
+ 	__xfrm_policy_link(policy, dir);
+ 
+ 	/* After previous checking, family can either be AF_INET or AF_INET6 */
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index ae9d5c766a3c..cfb8cc3b975e 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -42,14 +42,14 @@ static int __init default_canonical_fmt_setup(char *str)
+ __setup("ima_canonical_fmt", default_canonical_fmt_setup);
+ 
+ static int valid_policy = 1;
+-#define TMPBUFLEN 12
++
+ static ssize_t ima_show_htable_value(char __user *buf, size_t count,
+ 				     loff_t *ppos, atomic_long_t *val)
+ {
+-	char tmpbuf[TMPBUFLEN];
++	char tmpbuf[32];	/* greater than largest 'long' string value */
+ 	ssize_t len;
+ 
+-	len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val));
++	len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val));
+ 	return simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
+ }
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 2b5ee5fbd652..4680a217d0fa 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -1509,6 +1509,11 @@ static int selinux_genfs_get_sid(struct dentry *dentry,
+ 		}
+ 		rc = security_genfs_sid(&selinux_state, sb->s_type->name,
+ 					path, tclass, sid);
++		if (rc == -ENOENT) {
++			/* No match in policy, mark as unlabeled. */
++			*sid = SECINITSID_UNLABELED;
++			rc = 0;
++		}
+ 	}
+ 	free_page((unsigned long)buffer);
+ 	return rc;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8b6cd5a79bfa..a81d815c81f3 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -420,6 +420,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	struct smk_audit_info ad, *saip = NULL;
+ 	struct task_smack *tsp;
+ 	struct smack_known *tracer_known;
++	const struct cred *tracercred;
+ 
+ 	if ((mode & PTRACE_MODE_NOAUDIT) == 0) {
+ 		smk_ad_init(&ad, func, LSM_AUDIT_DATA_TASK);
+@@ -428,7 +429,8 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	}
+ 
+ 	rcu_read_lock();
+-	tsp = __task_cred(tracer)->security;
++	tracercred = __task_cred(tracer);
++	tsp = tracercred->security;
+ 	tracer_known = smk_of_task(tsp);
+ 
+ 	if ((mode & PTRACE_MODE_ATTACH) &&
+@@ -438,7 +440,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 			rc = 0;
+ 		else if (smack_ptrace_rule == SMACK_PTRACE_DRACONIAN)
+ 			rc = -EACCES;
+-		else if (capable(CAP_SYS_PTRACE))
++		else if (smack_privileged_cred(CAP_SYS_PTRACE, tracercred))
+ 			rc = 0;
+ 		else
+ 			rc = -EACCES;
+@@ -1840,6 +1842,7 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ {
+ 	struct smack_known *skp;
+ 	struct smack_known *tkp = smk_of_task(tsk->cred->security);
++	const struct cred *tcred;
+ 	struct file *file;
+ 	int rc;
+ 	struct smk_audit_info ad;
+@@ -1853,8 +1856,12 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ 	skp = file->f_security;
+ 	rc = smk_access(skp, tkp, MAY_DELIVER, NULL);
+ 	rc = smk_bu_note("sigiotask", skp, tkp, MAY_DELIVER, rc);
+-	if (rc != 0 && has_capability(tsk, CAP_MAC_OVERRIDE))
++
++	rcu_read_lock();
++	tcred = __task_cred(tsk);
++	if (rc != 0 && smack_privileged_cred(CAP_MAC_OVERRIDE, tcred))
+ 		rc = 0;
++	rcu_read_unlock();
+ 
+ 	smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
+ 	smk_ad_setfield_u_tsk(&ad, tsk);
+diff --git a/sound/pci/ca0106/ca0106.h b/sound/pci/ca0106/ca0106.h
+index 04402c14cb23..9847b669cf3c 100644
+--- a/sound/pci/ca0106/ca0106.h
++++ b/sound/pci/ca0106/ca0106.h
+@@ -582,7 +582,7 @@
+ #define SPI_PL_BIT_R_R		(2<<7)	/* right channel = right */
+ #define SPI_PL_BIT_R_C		(3<<7)	/* right channel = (L+R)/2 */
+ #define SPI_IZD_REG		2
+-#define SPI_IZD_BIT		(1<<4)	/* infinite zero detect */
++#define SPI_IZD_BIT		(0<<4)	/* infinite zero detect */
+ 
+ #define SPI_FMT_REG		3
+ #define SPI_FMT_BIT_RJ		(0<<0)	/* right justified mode */
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index a68e75b00ea3..53c3cd28bc99 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -160,6 +160,7 @@ struct azx {
+ 	unsigned int msi:1;
+ 	unsigned int probing:1; /* codec probing phase */
+ 	unsigned int snoop:1;
++	unsigned int uc_buffer:1; /* non-cached pages for stream buffers */
+ 	unsigned int align_buffer_size:1;
+ 	unsigned int region_requested:1;
+ 	unsigned int disabled:1; /* disabled by vga_switcheroo */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 28dc5e124995..6f6703e53a05 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -410,7 +410,7 @@ static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool
+ #ifdef CONFIG_SND_DMA_SGBUF
+ 	if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
+ 		struct snd_sg_buf *sgbuf = dmab->private_data;
+-		if (chip->driver_type == AZX_DRIVER_CMEDIA)
++		if (!chip->uc_buffer)
+ 			return; /* deal with only CORB/RIRB buffers */
+ 		if (on)
+ 			set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
+@@ -1636,6 +1636,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		dev_info(chip->card->dev, "Force to %s mode by module option\n",
+ 			 snoop ? "snoop" : "non-snoop");
+ 		chip->snoop = snoop;
++		chip->uc_buffer = !snoop;
+ 		return;
+ 	}
+ 
+@@ -1656,8 +1657,12 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		snoop = false;
+ 
+ 	chip->snoop = snoop;
+-	if (!snoop)
++	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
++		/* C-Media requires non-cached pages only for CORB/RIRB */
++		if (chip->driver_type != AZX_DRIVER_CMEDIA)
++			chip->uc_buffer = true;
++	}
+ }
+ 
+ static void azx_probe_work(struct work_struct *work)
+@@ -2096,7 +2101,7 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+ #ifdef CONFIG_X86
+ 	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
+ 	struct azx *chip = apcm->chip;
+-	if (!azx_snoop(chip) && chip->driver_type != AZX_DRIVER_CMEDIA)
++	if (chip->uc_buffer)
+ 		area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
+ #endif
+ }
+@@ -2215,8 +2220,12 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */
+ 	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x1028, 0x0497, "Dell Precision T3600", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	/* Note the P55A-UD3 and Z87-D3HP share the subsys id for the HDA dev */
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte P55A-UD3 / Z87-D3HP", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
+ 	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 1a8a2d440fbd..7d6c3cebb0e3 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -980,6 +980,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
++	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 08b6369f930b..23dd4bb026d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6799,6 +6799,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1a, 0x02a11040},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++		{0x14, 0x90170110},
++		{0x19, 0x02a11030},
++		{0x1a, 0x02a11040},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x02a11020},
+@@ -7690,6 +7696,8 @@ enum {
+ 	ALC662_FIXUP_ASUS_Nx50,
+ 	ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	ALC668_FIXUP_ASUS_Nx51,
++	ALC668_FIXUP_MIC_COEF,
++	ALC668_FIXUP_ASUS_G751,
+ 	ALC891_FIXUP_HEADSET_MODE,
+ 	ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
+ 	ALC662_FIXUP_ACER_VERITON,
+@@ -7959,6 +7967,23 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	},
++	[ALC668_FIXUP_MIC_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0xc3 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4000 },
++			{}
++		},
++	},
++	[ALC668_FIXUP_ASUS_G751] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x0421101f }, /* HP */
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_COEF
++	},
+ 	[ALC891_FIXUP_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode,
+@@ -8032,6 +8057,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
++	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+diff --git a/sound/soc/codecs/sta32x.c b/sound/soc/codecs/sta32x.c
+index d5035f2f2b2b..ce508b4cc85c 100644
+--- a/sound/soc/codecs/sta32x.c
++++ b/sound/soc/codecs/sta32x.c
+@@ -879,6 +879,9 @@ static int sta32x_probe(struct snd_soc_component *component)
+ 	struct sta32x_priv *sta32x = snd_soc_component_get_drvdata(component);
+ 	struct sta32x_platform_data *pdata = sta32x->pdata;
+ 	int i, ret = 0, thermal = 0;
++
++	sta32x->component = component;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(sta32x->supplies),
+ 				    sta32x->supplies);
+ 	if (ret != 0) {
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index fcdc716754b6..bde2effde861 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -2458,6 +2458,7 @@ static int skl_tplg_get_token(struct device *dev,
+ 
+ 	case SKL_TKN_U8_CORE_ID:
+ 		mconfig->core_id = tkn_elem->value;
++		break;
+ 
+ 	case SKL_TKN_U8_MOD_TYPE:
+ 		mconfig->m_type = tkn_elem->value;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 67b042738ed7..986151732d68 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -831,7 +831,7 @@ ifndef NO_JVMTI
+     JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
+   else
+     ifneq (,$(wildcard /usr/sbin/alternatives))
+-      JDIR=$(shell alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
++      JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
+     endif
+   endif
+   ifndef JDIR
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index c04dc7b53797..82a3c8be19ee 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -981,6 +981,7 @@ int cmd_report(int argc, const char **argv)
+ 			.id_index	 = perf_event__process_id_index,
+ 			.auxtrace_info	 = perf_event__process_auxtrace_info,
+ 			.auxtrace	 = perf_event__process_auxtrace,
++			.event_update	 = perf_event__process_event_update,
+ 			.feature	 = process_feature_event,
+ 			.ordered_events	 = true,
+ 			.ordering_requires_timestamps = true,
+diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+index d40498f2cb1e..635c09fda1d9 100644
+--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+@@ -188,7 +188,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -199,7 +199,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -210,7 +210,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -221,7 +221,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -232,7 +232,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -243,7 +243,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -254,7 +254,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -265,7 +265,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+index 16034bfd06dd..8755693d86c6 100644
+--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+@@ -187,7 +187,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -198,7 +198,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -209,7 +209,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -220,7 +220,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -231,7 +231,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -242,7 +242,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -253,7 +253,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -264,7 +264,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 3013ac8f83d0..cab7b0aea6ea 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -48,7 +48,7 @@ trace_libc_inet_pton_backtrace() {
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+-		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
+ 	esac
+ 
+diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
+index 0c8ecf0c78a4..6f3db78efe39 100644
+--- a/tools/perf/util/event.c
++++ b/tools/perf/util/event.c
+@@ -1074,6 +1074,7 @@ void *cpu_map_data__alloc(struct cpu_map *map, size_t *size, u16 *type, int *max
+ 	}
+ 
+ 	*size += sizeof(struct cpu_map_data);
++	*size = PERF_ALIGN(*size, sizeof(u64));
+ 	return zalloc(*size);
+ }
+ 
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 6324afba8fdd..86ad1389ff5a 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1078,6 +1078,9 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		attr->exclude_user   = 1;
+ 	}
+ 
++	if (evsel->own_cpus)
++		evsel->attr.read_format |= PERF_FORMAT_ID;
++
+ 	/*
+ 	 * Apply event specific term settings,
+ 	 * it overloads any global configuration.
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 3ba6a1742f91..02580f3ded1a 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -936,13 +936,14 @@ static void pmu_format_value(unsigned long *format, __u64 value, __u64 *v,
+ 
+ static __u64 pmu_format_max_value(const unsigned long *format)
+ {
+-	__u64 w = 0;
+-	int fbit;
+-
+-	for_each_set_bit(fbit, format, PERF_PMU_FORMAT_BITS)
+-		w |= (1ULL << fbit);
++	int w;
+ 
+-	return w;
++	w = bitmap_weight(format, PERF_PMU_FORMAT_BITS);
++	if (!w)
++		return 0;
++	if (w < 64)
++		return (1ULL << w) - 1;
++	return -1;
+ }
+ 
+ /*
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index 09d6746e6ec8..e767c4a9d4d2 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -85,6 +85,9 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ 	struct symbol *inline_sym;
+ 	char *demangled = NULL;
+ 
++	if (!funcname)
++		funcname = "??";
++
+ 	if (dso) {
+ 		demangled = dso__demangle_sym(dso, 0, funcname);
+ 		if (demangled)
+diff --git a/tools/perf/util/strbuf.c b/tools/perf/util/strbuf.c
+index 3d1cf5bf7f18..9005fbe0780e 100644
+--- a/tools/perf/util/strbuf.c
++++ b/tools/perf/util/strbuf.c
+@@ -98,19 +98,25 @@ static int strbuf_addv(struct strbuf *sb, const char *fmt, va_list ap)
+ 
+ 	va_copy(ap_saved, ap);
+ 	len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap);
+-	if (len < 0)
++	if (len < 0) {
++		va_end(ap_saved);
+ 		return len;
++	}
+ 	if (len > strbuf_avail(sb)) {
+ 		ret = strbuf_grow(sb, len);
+-		if (ret)
++		if (ret) {
++			va_end(ap_saved);
+ 			return ret;
++		}
+ 		len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
+ 		va_end(ap_saved);
+ 		if (len > strbuf_avail(sb)) {
+ 			pr_debug("this should not happen, your vsnprintf is broken");
++			va_end(ap_saved);
+ 			return -EINVAL;
+ 		}
+ 	}
++	va_end(ap_saved);
+ 	return strbuf_setlen(sb, sb->len + len);
+ }
+ 
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index 7b0ca7cbb7de..8ad8e755127b 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -531,12 +531,14 @@ struct tracing_data *tracing_data_get(struct list_head *pattrs,
+ 			 "/tmp/perf-XXXXXX");
+ 		if (!mkstemp(tdata->temp_file)) {
+ 			pr_debug("Can't make temp file");
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+ 		temp_fd = open(tdata->temp_file, O_RDWR);
+ 		if (temp_fd < 0) {
+ 			pr_debug("Can't read '%s'", tdata->temp_file);
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
+index 40b425949aa3..2d50e4384c72 100644
+--- a/tools/perf/util/trace-event-read.c
++++ b/tools/perf/util/trace-event-read.c
+@@ -349,9 +349,12 @@ static int read_event_files(struct pevent *pevent)
+ 		for (x=0; x < count; x++) {
+ 			size = read8(pevent);
+ 			ret = read_event_file(pevent, sys, size);
+-			if (ret)
++			if (ret) {
++				free(sys);
+ 				return ret;
++			}
+ 		}
++		free(sys);
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/power/cpupower/utils/cpufreq-info.c b/tools/power/cpupower/utils/cpufreq-info.c
+index df43cd45d810..ccd08dd00996 100644
+--- a/tools/power/cpupower/utils/cpufreq-info.c
++++ b/tools/power/cpupower/utils/cpufreq-info.c
+@@ -200,6 +200,8 @@ static int get_boost_mode(unsigned int cpu)
+ 		printf(_("    Boost States: %d\n"), b_states);
+ 		printf(_("    Total States: %d\n"), pstate_no);
+ 		for (i = 0; i < pstate_no; i++) {
++			if (!pstates[i])
++				continue;
+ 			if (i < b_states)
+ 				printf(_("    Pstate-Pb%d: %luMHz (boost state)"
+ 					 "\n"), i, pstates[i]);
+diff --git a/tools/power/cpupower/utils/helpers/amd.c b/tools/power/cpupower/utils/helpers/amd.c
+index bb41cdd0df6b..9607ada5b29a 100644
+--- a/tools/power/cpupower/utils/helpers/amd.c
++++ b/tools/power/cpupower/utils/helpers/amd.c
+@@ -33,7 +33,7 @@ union msr_pstate {
+ 		unsigned vid:8;
+ 		unsigned iddval:8;
+ 		unsigned idddiv:2;
+-		unsigned res1:30;
++		unsigned res1:31;
+ 		unsigned en:1;
+ 	} fam17h_bits;
+ 	unsigned long long val;
+@@ -119,6 +119,11 @@ int decode_pstates(unsigned int cpu, unsigned int cpu_family,
+ 		}
+ 		if (read_msr(cpu, MSR_AMD_PSTATE + i, &pstate.val))
+ 			return -1;
++		if ((cpu_family == 0x17) && (!pstate.fam17h_bits.en))
++			continue;
++		else if (!pstate.bits.en)
++			continue;
++
+ 		pstates[i] = get_cof(cpu_family, pstate);
+ 	}
+ 	*no = i;
+diff --git a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+index 1893d0f59ad7..059b7e81b922 100755
+--- a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
++++ b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+@@ -143,6 +143,10 @@ echo "Import devices from localhost - should work"
+ src/usbip attach -r localhost -b $busid;
+ echo "=============================================================="
+ 
++# Wait for sysfs file to be updated. Without this sleep, usbip port
++# shows no imported devices.
++sleep 3;
++
+ echo "List imported devices - expect to see imported devices";
+ src/usbip port;
+ echo "=============================================================="
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+index cef11377dcbd..c604438df13b 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+@@ -35,18 +35,18 @@ fi
+ 
+ reset_trigger
+ 
+-echo "Test create synthetic event with an error"
+-echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
++echo "Test remove synthetic event"
++echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' >> synthetic_events
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Created wakeup_latency synthetic event with an invalid format"
++    fail "Failed to delete wakeup_latency synthetic event"
+ fi
+ 
+ reset_trigger
+ 
+-echo "Test remove synthetic event"
+-echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' > synthetic_events
++echo "Test create synthetic event with an error"
++echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Failed to delete wakeup_latency synthetic event"
++    fail "Created wakeup_latency synthetic event with an invalid format"
+ fi
+ 
+ do_reset
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+new file mode 100644
+index 000000000000..88e6c3f43006
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+@@ -0,0 +1,80 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++# description: event trigger - test synthetic_events syntax parser
++
++do_reset() {
++    reset_trigger
++    echo > set_event
++    clear_trace
++}
++
++fail() { #msg
++    do_reset
++    echo $1
++    exit_fail
++}
++
++if [ ! -f set_event ]; then
++    echo "event tracing is not supported"
++    exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++    echo "synthetic event is not supported"
++    exit_unsupported
++fi
++
++reset_tracer
++do_reset
++
++echo "Test synthetic_events syntax parser"
++
++echo > synthetic_events
++
++# synthetic event must have a field
++! echo "myevent" >> synthetic_events
++echo "myevent u64 var1" >> synthetic_events
++
++# synthetic event must be found in synthetic_events
++grep "myevent[[:space:]]u64 var1" synthetic_events
++
++# it is not possible to add same name event
++! echo "myevent u64 var2" >> synthetic_events
++
++# Non-append open will cleanup all events and add new one
++echo "myevent u64 var2" > synthetic_events
++
++# multiple fields with different spaces
++echo "myevent u64 var1; u64 var2;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ; u64 var2 ;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ;u64 var2" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++
++# test field types
++echo "myevent u32 var" > synthetic_events
++echo "myevent u16 var" > synthetic_events
++echo "myevent u8 var" > synthetic_events
++echo "myevent s64 var" > synthetic_events
++echo "myevent s32 var" > synthetic_events
++echo "myevent s16 var" > synthetic_events
++echo "myevent s8 var" > synthetic_events
++
++echo "myevent char var" > synthetic_events
++echo "myevent int var" > synthetic_events
++echo "myevent long var" > synthetic_events
++echo "myevent pid_t var" > synthetic_events
++
++echo "myevent unsigned char var" > synthetic_events
++echo "myevent unsigned int var" > synthetic_events
++echo "myevent unsigned long var" > synthetic_events
++grep "myevent[[:space:]]unsigned long var" synthetic_events
++
++# test string type
++echo "myevent char var[10]" > synthetic_events
++grep "myevent[[:space:]]char\[10\] var" synthetic_events
++
++do_reset
++
++exit 0
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index cad14cd0ea92..b5277106df1f 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -437,14 +437,19 @@ void enable_fastopen(void)
+ 	}
+ }
+ 
+-static struct rlimit rlim_old, rlim_new;
++static struct rlimit rlim_old;
+ 
+ static  __attribute__((constructor)) void main_ctor(void)
+ {
+ 	getrlimit(RLIMIT_MEMLOCK, &rlim_old);
+-	rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
+-	rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
+-	setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++
++	if (rlim_old.rlim_cur != RLIM_INFINITY) {
++		struct rlimit rlim_new;
++
++		rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
++		rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
++		setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++	}
+ }
+ 
+ static __attribute__((destructor)) void main_dtor(void)
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+index 327fa943c7f3..dbdffa2e2c82 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+@@ -67,8 +67,8 @@ trans:
+ 		"3: ;"
+ 		: [res] "=r" (result), [texasr] "=r" (texasr)
+ 		: [gpr_1]"i"(GPR_1), [gpr_2]"i"(GPR_2), [gpr_4]"i"(GPR_4),
+-		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "r" (&a),
+-		[flt_2] "r" (&b), [flt_4] "r" (&d)
++		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "b" (&a),
++		[flt_4] "b" (&d)
+ 		: "memory", "r5", "r6", "r7",
+ 		"r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ 		"r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 04e554cae3a2..f8c2b9e7c19c 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -1244,8 +1244,6 @@ static void cpu_init_hyp_mode(void *dummy)
+ 
+ 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
+ 	__cpu_init_stage2();
+-
+-	kvm_arm_init_debug();
+ }
+ 
+ static void cpu_hyp_reset(void)
+@@ -1269,6 +1267,8 @@ static void cpu_hyp_reinit(void)
+ 		cpu_init_hyp_mode(NULL);
+ 	}
+ 
++	kvm_arm_init_debug();
++
+ 	if (vgic_present)
+ 		kvm_vgic_init_cpu_hardware();
+ }
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index fd8c88463928..fbba603caf1b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1201,8 +1201,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
+ {
+ 	kvm_pfn_t pfn = *pfnp;
+ 	gfn_t gfn = *ipap >> PAGE_SHIFT;
++	struct page *page = pfn_to_page(pfn);
+ 
+-	if (PageTransCompoundMap(pfn_to_page(pfn))) {
++	/*
++	 * PageTransCompoungMap() returns true for THP and
++	 * hugetlbfs. Make sure the adjustment is done only for THP
++	 * pages.
++	 */
++	if (!PageHuge(page) && PageTransCompoundMap(page)) {
+ 		unsigned long mask;
+ 		/*
+ 		 * The address we faulted on is backed by a transparent huge


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw)
  To: gentoo-commits

commit:     fe0a60d5a8d20554a55ea5140308be26d3b4b46e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 29 13:36:23 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:25 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe0a60d5

Linux patch 4.18.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-4.18.11.patch | 2983 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2987 insertions(+)

diff --git a/0000_README b/0000_README
index a9e2bd7..cccbd63 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-4.18.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.10
 
+Patch:  1010_linux-4.18.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-4.18.11.patch b/1010_linux-4.18.11.patch
new file mode 100644
index 0000000..fe34a23
--- /dev/null
+++ b/1010_linux-4.18.11.patch
@@ -0,0 +1,2983 @@
+diff --git a/Makefile b/Makefile
+index ffab15235ff0..de0ecace693a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index acd11b3bf639..2a356b948720 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index 2071c3d1ae07..dbe8bb980da1 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128l_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index b5f2a8fd5a71..8bebda2de92f 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis256_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus1280-sse2-glue.c b/arch/x86/crypto/morus1280-sse2-glue.c
+index 95cf857d2cbb..f40244eaf14d 100644
+--- a/arch/x86/crypto/morus1280-sse2-glue.c
++++ b/arch/x86/crypto/morus1280-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS1280_DECLARE_ALGS(sse2, "morus1280-sse2", 350);
+ static int __init crypto_morus1280_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus640-sse2-glue.c b/arch/x86/crypto/morus640-sse2-glue.c
+index 615fb7bc9a32..9afaf8f8565a 100644
+--- a/arch/x86/crypto/morus640-sse2-glue.c
++++ b/arch/x86/crypto/morus640-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS640_DECLARE_ALGS(sse2, "morus640-sse2", 400);
+ static int __init crypto_morus640_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index 7d00d4ad44d4..95997e6c0696 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -478,7 +478,7 @@ static void xen_convert_regs(const struct xen_pmu_regs *xen_regs,
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ {
+ 	int err, ret = IRQ_NONE;
+-	struct pt_regs regs;
++	struct pt_regs regs = {0};
+ 	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+ 	uint8_t xenpmu_flags = get_xenpmu_flags();
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 984b37647b2f..22a2bc5f25ce 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5358,10 +5358,20 @@ void ata_qc_complete(struct ata_queued_cmd *qc)
+  */
+ int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active)
+ {
++	u64 done_mask, ap_qc_active = ap->qc_active;
+ 	int nr_done = 0;
+-	u64 done_mask;
+ 
+-	done_mask = ap->qc_active ^ qc_active;
++	/*
++	 * If the internal tag is set on ap->qc_active, then we care about
++	 * bit0 on the passed in qc_active mask. Move that bit up to match
++	 * the internal tag.
++	 */
++	if (ap_qc_active & (1ULL << ATA_TAG_INTERNAL)) {
++		qc_active |= (qc_active & 0x01) << ATA_TAG_INTERNAL;
++		qc_active ^= qc_active & 0x01;
++	}
++
++	done_mask = ap_qc_active ^ qc_active;
+ 
+ 	if (unlikely(done_mask & qc_active)) {
+ 		ata_port_err(ap, "illegal qc_active transition (%08llx->%08llx)\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index e950730f1933..5a6e7e1cb351 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -367,12 +367,14 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 				break;
+ 			case CHIP_POLARIS10:
+ 				if (type == CGS_UCODE_ID_SMU) {
+-					if ((adev->pdev->device == 0x67df) &&
+-					    ((adev->pdev->revision == 0xe0) ||
+-					     (adev->pdev->revision == 0xe3) ||
+-					     (adev->pdev->revision == 0xe4) ||
+-					     (adev->pdev->revision == 0xe5) ||
+-					     (adev->pdev->revision == 0xe7) ||
++					if (((adev->pdev->device == 0x67df) &&
++					     ((adev->pdev->revision == 0xe0) ||
++					      (adev->pdev->revision == 0xe3) ||
++					      (adev->pdev->revision == 0xe4) ||
++					      (adev->pdev->revision == 0xe5) ||
++					      (adev->pdev->revision == 0xe7) ||
++					      (adev->pdev->revision == 0xef))) ||
++					    ((adev->pdev->device == 0x6fdf) &&
+ 					     (adev->pdev->revision == 0xef))) {
+ 						info->is_kicker = true;
+ 						strcpy(fw_name, "amdgpu/polaris10_k_smc.bin");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index b0bf2f24da48..dc893076398e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -532,6 +532,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
++	{0x1002, 0x6FDF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	/* Polaris12 */
+ 	{0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ 	{0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index dec0d60921bf..00486c744f24 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -5062,10 +5062,14 @@ void hsw_disable_ips(const struct intel_crtc_state *crtc_state)
+ 		mutex_lock(&dev_priv->pcu_lock);
+ 		WARN_ON(sandybridge_pcode_write(dev_priv, DISPLAY_IPS_CONTROL, 0));
+ 		mutex_unlock(&dev_priv->pcu_lock);
+-		/* wait for pcode to finish disabling IPS, which may take up to 42ms */
++		/*
++		 * Wait for PCODE to finish disabling IPS. The BSpec specified
++		 * 42ms timeout value leads to occasional timeouts so use 100ms
++		 * instead.
++		 */
+ 		if (intel_wait_for_register(dev_priv,
+ 					    IPS_CTL, IPS_ENABLE, 0,
+-					    42))
++					    100))
+ 			DRM_ERROR("Timed out waiting for IPS disable\n");
+ 	} else {
+ 		I915_WRITE(IPS_CTL, 0);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 9bae4db84cfb..7a12d75e5157 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1098,17 +1098,21 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ 	int ret;
+ 
+ 	if (dpcd >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CTRL, &dpcd);
++		/* Even if we're enabling MST, start with disabling the
++		 * branching unit to clear any sink-side MST topology state
++		 * that wasn't set by us
++		 */
++		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, 0);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dpcd &= ~DP_MST_EN;
+-		if (state)
+-			dpcd |= DP_MST_EN;
+-
+-		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, dpcd);
+-		if (ret < 0)
+-			return ret;
++		if (state) {
++			/* Now, start initializing */
++			ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL,
++						 DP_MST_EN);
++			if (ret < 0)
++				return ret;
++		}
+ 	}
+ 
+ 	return nvif_mthd(disp, 0, &args, sizeof(args));
+@@ -1117,31 +1121,58 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ int
+ nv50_mstm_detect(struct nv50_mstm *mstm, u8 dpcd[8], int allow)
+ {
+-	int ret, state = 0;
++	struct drm_dp_aux *aux;
++	int ret;
++	bool old_state, new_state;
++	u8 mstm_ctrl;
+ 
+ 	if (!mstm)
+ 		return 0;
+ 
+-	if (dpcd[0] >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CAP, &dpcd[1]);
++	mutex_lock(&mstm->mgr.lock);
++
++	old_state = mstm->mgr.mst_state;
++	new_state = old_state;
++	aux = mstm->mgr.aux;
++
++	if (old_state) {
++		/* Just check that the MST hub is still as we expect it */
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CTRL, &mstm_ctrl);
++		if (ret < 0 || !(mstm_ctrl & DP_MST_EN)) {
++			DRM_DEBUG_KMS("Hub gone, disabling MST topology\n");
++			new_state = false;
++		}
++	} else if (dpcd[0] >= 0x12) {
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CAP, &dpcd[1]);
+ 		if (ret < 0)
+-			return ret;
++			goto probe_error;
+ 
+ 		if (!(dpcd[1] & DP_MST_CAP))
+ 			dpcd[0] = 0x11;
+ 		else
+-			state = allow;
++			new_state = allow;
++	}
++
++	if (new_state == old_state) {
++		mutex_unlock(&mstm->mgr.lock);
++		return new_state;
+ 	}
+ 
+-	ret = nv50_mstm_enable(mstm, dpcd[0], state);
++	ret = nv50_mstm_enable(mstm, dpcd[0], new_state);
+ 	if (ret)
+-		return ret;
++		goto probe_error;
++
++	mutex_unlock(&mstm->mgr.lock);
+ 
+-	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, state);
++	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, new_state);
+ 	if (ret)
+ 		return nv50_mstm_enable(mstm, dpcd[0], 0);
+ 
+-	return mstm->mgr.mst_state;
++	return new_state;
++
++probe_error:
++	mutex_unlock(&mstm->mgr.lock);
++	return ret;
+ }
+ 
+ static void
+@@ -2049,7 +2080,7 @@ nv50_disp_atomic_state_alloc(struct drm_device *dev)
+ static const struct drm_mode_config_funcs
+ nv50_disp_func = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ 	.atomic_check = nv50_disp_atomic_check,
+ 	.atomic_commit = nv50_disp_atomic_commit,
+ 	.atomic_state_alloc = nv50_disp_atomic_state_alloc,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index af68eae4c626..de4ab310ef8e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -570,12 +570,16 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+ 		nv_connector->edid = NULL;
+ 	}
+ 
+-	/* Outputs are only polled while runtime active, so acquiring a
+-	 * runtime PM ref here is unnecessary (and would deadlock upon
+-	 * runtime suspend because it waits for polling to finish).
++	/* Outputs are only polled while runtime active, so resuming the
++	 * device here is unnecessary (and would deadlock upon runtime suspend
++	 * because it waits for polling to finish). We do however, want to
++	 * prevent the autosuspend timer from elapsing during this operation
++	 * if possible.
+ 	 */
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		ret = pm_runtime_get_sync(connector->dev->dev);
++	if (drm_kms_helper_is_poll_worker()) {
++		pm_runtime_get_noresume(dev->dev);
++	} else {
++		ret = pm_runtime_get_sync(dev->dev);
+ 		if (ret < 0 && ret != -EACCES)
+ 			return conn_status;
+ 	}
+@@ -653,10 +657,8 @@ detect_analog:
+ 
+  out:
+ 
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		pm_runtime_mark_last_busy(connector->dev->dev);
+-		pm_runtime_put_autosuspend(connector->dev->dev);
+-	}
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
+ 
+ 	return conn_status;
+ }
+@@ -1120,6 +1122,26 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 	const struct nvif_notify_conn_rep_v0 *rep = notify->data;
+ 	const char *name = connector->name;
+ 	struct nouveau_encoder *nv_encoder;
++	int ret;
++
++	ret = pm_runtime_get(drm->dev->dev);
++	if (ret == 0) {
++		/* We can't block here if there's a pending PM request
++		 * running, as we'll deadlock nouveau_display_fini() when it
++		 * calls nvif_put() on our nvif_notify struct. So, simply
++		 * defer the hotplug event until the device finishes resuming
++		 */
++		NV_DEBUG(drm, "Deferring HPD on %s until runtime resume\n",
++			 name);
++		schedule_work(&drm->hpd_work);
++
++		pm_runtime_put_noidle(drm->dev->dev);
++		return NVIF_NOTIFY_KEEP;
++	} else if (ret != 1 && ret != -EACCES) {
++		NV_WARN(drm, "HPD on %s dropped due to RPM failure: %d\n",
++			name, ret);
++		return NVIF_NOTIFY_DROP;
++	}
+ 
+ 	if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) {
+ 		NV_DEBUG(drm, "service %s\n", name);
+@@ -1137,6 +1159,8 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 		drm_helper_hpd_irq_event(connector->dev);
+ 	}
+ 
++	pm_runtime_mark_last_busy(drm->dev->dev);
++	pm_runtime_put_autosuspend(drm->dev->dev);
+ 	return NVIF_NOTIFY_KEEP;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index ec7861457b84..c5b3cc17965c 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -293,7 +293,7 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
+ 
+ static const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ };
+ 
+ 
+@@ -355,8 +355,6 @@ nouveau_display_hpd_work(struct work_struct *work)
+ 	pm_runtime_get_sync(drm->dev->dev);
+ 
+ 	drm_helper_hpd_irq_event(drm->dev);
+-	/* enable polling for external displays */
+-	drm_kms_helper_poll_enable(drm->dev);
+ 
+ 	pm_runtime_mark_last_busy(drm->dev->dev);
+ 	pm_runtime_put_sync(drm->dev->dev);
+@@ -379,15 +377,29 @@ nouveau_display_acpi_ntfy(struct notifier_block *nb, unsigned long val,
+ {
+ 	struct nouveau_drm *drm = container_of(nb, typeof(*drm), acpi_nb);
+ 	struct acpi_bus_event *info = data;
++	int ret;
+ 
+ 	if (!strcmp(info->device_class, ACPI_VIDEO_CLASS)) {
+ 		if (info->type == ACPI_VIDEO_NOTIFY_PROBE) {
+-			/*
+-			 * This may be the only indication we receive of a
+-			 * connector hotplug on a runtime suspended GPU,
+-			 * schedule hpd_work to check.
+-			 */
+-			schedule_work(&drm->hpd_work);
++			ret = pm_runtime_get(drm->dev->dev);
++			if (ret == 1 || ret == -EACCES) {
++				/* If the GPU is already awake, or in a state
++				 * where we can't wake it up, it can handle
++				 * its own hotplug events.
++				 */
++				pm_runtime_put_autosuspend(drm->dev->dev);
++			} else if (ret == 0) {
++				/* This may be the only indication we receive
++				 * of a connector hotplug on a runtime
++				 * suspended GPU, schedule hpd_work to check.
++				 */
++				NV_DEBUG(drm, "ACPI requested connector reprobe\n");
++				schedule_work(&drm->hpd_work);
++				pm_runtime_put_noidle(drm->dev->dev);
++			} else {
++				NV_WARN(drm, "Dropped ACPI reprobe event due to RPM error: %d\n",
++					ret);
++			}
+ 
+ 			/* acpi-video should not generate keypresses for this */
+ 			return NOTIFY_BAD;
+@@ -411,6 +423,11 @@ nouveau_display_init(struct drm_device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	/* enable connector detection and polling for connectors without HPD
++	 * support
++	 */
++	drm_kms_helper_poll_enable(dev);
++
+ 	/* enable hotplug interrupts */
+ 	drm_connector_list_iter_begin(dev, &conn_iter);
+ 	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+@@ -425,7 +442,7 @@ nouveau_display_init(struct drm_device *dev)
+ }
+ 
+ void
+-nouveau_display_fini(struct drm_device *dev, bool suspend)
++nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime)
+ {
+ 	struct nouveau_display *disp = nouveau_display(dev);
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
+@@ -450,6 +467,9 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
+ 	}
+ 	drm_connector_list_iter_end(&conn_iter);
+ 
++	if (!runtime)
++		cancel_work_sync(&drm->hpd_work);
++
+ 	drm_kms_helper_poll_disable(dev);
+ 	disp->fini(dev);
+ }
+@@ -618,11 +638,11 @@ nouveau_display_suspend(struct drm_device *dev, bool runtime)
+ 			}
+ 		}
+ 
+-		nouveau_display_fini(dev, true);
++		nouveau_display_fini(dev, true, runtime);
+ 		return 0;
+ 	}
+ 
+-	nouveau_display_fini(dev, true);
++	nouveau_display_fini(dev, true, runtime);
+ 
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 		struct nouveau_framebuffer *nouveau_fb;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.h b/drivers/gpu/drm/nouveau/nouveau_display.h
+index 54aa7c3fa42d..ff92b54ce448 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.h
++++ b/drivers/gpu/drm/nouveau/nouveau_display.h
+@@ -62,7 +62,7 @@ nouveau_display(struct drm_device *dev)
+ int  nouveau_display_create(struct drm_device *dev);
+ void nouveau_display_destroy(struct drm_device *dev);
+ int  nouveau_display_init(struct drm_device *dev);
+-void nouveau_display_fini(struct drm_device *dev, bool suspend);
++void nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime);
+ int  nouveau_display_suspend(struct drm_device *dev, bool runtime);
+ void nouveau_display_resume(struct drm_device *dev, bool runtime);
+ int  nouveau_display_vblank_enable(struct drm_device *, unsigned int);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c7ec86d6c3c9..c2ebe5da34d0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -629,7 +629,7 @@ nouveau_drm_unload(struct drm_device *dev)
+ 	nouveau_debugfs_fini(drm);
+ 
+ 	if (dev->mode_config.num_crtc)
+-		nouveau_display_fini(dev, false);
++		nouveau_display_fini(dev, false, false);
+ 	nouveau_display_destroy(dev);
+ 
+ 	nouveau_bios_takedown(dev);
+@@ -835,7 +835,6 @@ nouveau_pmops_runtime_suspend(struct device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	drm_kms_helper_poll_disable(drm_dev);
+ 	nouveau_switcheroo_optimus_dsm();
+ 	ret = nouveau_do_suspend(drm_dev, true);
+ 	pci_save_state(pdev);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 85c1f10bc2b6..8cf966690963 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -466,6 +466,7 @@ nouveau_fbcon_set_suspend_work(struct work_struct *work)
+ 	console_unlock();
+ 
+ 	if (state == FBINFO_STATE_RUNNING) {
++		nouveau_fbcon_hotplug_resume(drm->fbcon);
+ 		pm_runtime_mark_last_busy(drm->dev->dev);
+ 		pm_runtime_put_sync(drm->dev->dev);
+ 	}
+@@ -487,6 +488,61 @@ nouveau_fbcon_set_suspend(struct drm_device *dev, int state)
+ 	schedule_work(&drm->fbcon_work);
+ }
+ 
++void
++nouveau_fbcon_output_poll_changed(struct drm_device *dev)
++{
++	struct nouveau_drm *drm = nouveau_drm(dev);
++	struct nouveau_fbdev *fbcon = drm->fbcon;
++	int ret;
++
++	if (!fbcon)
++		return;
++
++	mutex_lock(&fbcon->hotplug_lock);
++
++	ret = pm_runtime_get(dev->dev);
++	if (ret == 1 || ret == -EACCES) {
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++
++		pm_runtime_mark_last_busy(dev->dev);
++		pm_runtime_put_autosuspend(dev->dev);
++	} else if (ret == 0) {
++		/* If the GPU was already in the process of suspending before
++		 * this event happened, then we can't block here as we'll
++		 * deadlock the runtime pmops since they wait for us to
++		 * finish. So, just defer this event for when we runtime
++		 * resume again. It will be handled by fbcon_work.
++		 */
++		NV_DEBUG(drm, "fbcon HPD event deferred until runtime resume\n");
++		fbcon->hotplug_waiting = true;
++		pm_runtime_put_noidle(drm->dev->dev);
++	} else {
++		DRM_WARN("fbcon HPD event lost due to RPM failure: %d\n",
++			 ret);
++	}
++
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
++void
++nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon)
++{
++	struct nouveau_drm *drm;
++
++	if (!fbcon)
++		return;
++	drm = nouveau_drm(fbcon->helper.dev);
++
++	mutex_lock(&fbcon->hotplug_lock);
++	if (fbcon->hotplug_waiting) {
++		fbcon->hotplug_waiting = false;
++
++		NV_DEBUG(drm, "Handling deferred fbcon HPD events\n");
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++	}
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
+ int
+ nouveau_fbcon_init(struct drm_device *dev)
+ {
+@@ -505,6 +561,7 @@ nouveau_fbcon_init(struct drm_device *dev)
+ 
+ 	drm->fbcon = fbcon;
+ 	INIT_WORK(&drm->fbcon_work, nouveau_fbcon_set_suspend_work);
++	mutex_init(&fbcon->hotplug_lock);
+ 
+ 	drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+index a6f192ea3fa6..db9d52047ef8 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+@@ -41,6 +41,9 @@ struct nouveau_fbdev {
+ 	struct nvif_object gdi;
+ 	struct nvif_object blit;
+ 	struct nvif_object twod;
++
++	struct mutex hotplug_lock;
++	bool hotplug_waiting;
+ };
+ 
+ void nouveau_fbcon_restore(void);
+@@ -68,6 +71,8 @@ void nouveau_fbcon_set_suspend(struct drm_device *dev, int state);
+ void nouveau_fbcon_accel_save_disable(struct drm_device *dev);
+ void nouveau_fbcon_accel_restore(struct drm_device *dev);
+ 
++void nouveau_fbcon_output_poll_changed(struct drm_device *dev);
++void nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon);
+ extern int nouveau_nofbaccel;
+ 
+ #endif /* __NV50_FBCON_H__ */
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index 8746eeeec44d..491f1892b50e 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -432,9 +432,11 @@ static void udl_fbdev_destroy(struct drm_device *dev,
+ {
+ 	drm_fb_helper_unregister_fbi(&ufbdev->helper);
+ 	drm_fb_helper_fini(&ufbdev->helper);
+-	drm_framebuffer_unregister_private(&ufbdev->ufb.base);
+-	drm_framebuffer_cleanup(&ufbdev->ufb.base);
+-	drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	if (ufbdev->ufb.obj) {
++		drm_framebuffer_unregister_private(&ufbdev->ufb.base);
++		drm_framebuffer_cleanup(&ufbdev->ufb.base);
++		drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	}
+ }
+ 
+ int udl_fbdev_init(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index a951ec75d01f..cf5aea1d6488 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -297,6 +297,9 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 	vc4_state->y_scaling[0] = vc4_get_scaling_mode(vc4_state->src_h[0],
+ 						       vc4_state->crtc_h);
+ 
++	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
++			       vc4_state->y_scaling[0] == VC4_SCALING_NONE);
++
+ 	if (num_planes > 1) {
+ 		vc4_state->is_yuv = true;
+ 
+@@ -312,24 +315,17 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 			vc4_get_scaling_mode(vc4_state->src_h[1],
+ 					     vc4_state->crtc_h);
+ 
+-		/* YUV conversion requires that scaling be enabled,
+-		 * even on a plane that's otherwise 1:1.  Choose TPZ
+-		 * for simplicity.
++		/* YUV conversion requires that horizontal scaling be enabled,
++		 * even on a plane that's otherwise 1:1. Looks like only PPF
++		 * works in that case, so let's pick that one.
+ 		 */
+-		if (vc4_state->x_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
+-		if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
++		if (vc4_state->is_unity)
++			vc4_state->x_scaling[0] = VC4_SCALING_PPF;
+ 	} else {
+ 		vc4_state->x_scaling[1] = VC4_SCALING_NONE;
+ 		vc4_state->y_scaling[1] = VC4_SCALING_NONE;
+ 	}
+ 
+-	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->x_scaling[1] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[1] == VC4_SCALING_NONE);
+-
+ 	/* No configuring scaling on the cursor plane, since it gets
+ 	   non-vblank-synced updates, and scaling requires
+ 	   LBM changes which have to be vblank-synced.
+@@ -621,7 +617,10 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 		vc4_dlist_write(vc4_state, SCALER_CSC2_ITR_R_601_5);
+ 	}
+ 
+-	if (!vc4_state->is_unity) {
++	if (vc4_state->x_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->x_scaling[1] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+ 		/* LBM Base Address. */
+ 		if (vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
+ 		    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index aef53305f1c3..d97581ae3bf9 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -1388,6 +1388,12 @@ static void flush_qp(struct c4iw_qp *qhp)
+ 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
+ 
+ 	if (qhp->ibqp.uobject) {
++
++		/* for user qps, qhp->wq.flushed is protected by qhp->mutex */
++		if (qhp->wq.flushed)
++			return;
++
++		qhp->wq.flushed = 1;
+ 		t4_set_wq_in_error(&qhp->wq);
+ 		t4_set_cq_in_error(&rchp->cq);
+ 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 5f8b583c6e41..f74166aa9a0d 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -45,6 +45,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/vmw_vmci_defs.h>
+ #include <linux/vmw_vmci_api.h>
++#include <linux/io.h>
+ #include <asm/hypervisor.h>
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
+index e84563d2067f..3463cd94a7f6 100644
+--- a/drivers/mtd/devices/m25p80.c
++++ b/drivers/mtd/devices/m25p80.c
+@@ -41,13 +41,23 @@ static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_IN(len, val, 1));
++					  SPI_MEM_OP_DATA_IN(len, NULL, 1));
++	void *scratchbuf;
+ 	int ret;
+ 
++	scratchbuf = kmalloc(len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.in = scratchbuf;
+ 	ret = spi_mem_exec_op(flash->spimem, &op);
+ 	if (ret < 0)
+ 		dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret,
+ 			code);
++	else
++		memcpy(val, scratchbuf, len);
++
++	kfree(scratchbuf);
+ 
+ 	return ret;
+ }
+@@ -58,9 +68,19 @@ static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_OUT(len, buf, 1));
++					  SPI_MEM_OP_DATA_OUT(len, NULL, 1));
++	void *scratchbuf;
++	int ret;
+ 
+-	return spi_mem_exec_op(flash->spimem, &op);
++	scratchbuf = kmemdup(buf, len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.out = scratchbuf;
++	ret = spi_mem_exec_op(flash->spimem, &op);
++	kfree(scratchbuf);
++
++	return ret;
+ }
+ 
+ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index 2a302a1d1430..c502075e5721 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -604,6 +604,12 @@ static int denali_dma_xfer(struct denali_nand_info *denali, void *buf,
+ 	}
+ 
+ 	iowrite32(DMA_ENABLE__FLAG, denali->reg + DMA_ENABLE);
++	/*
++	 * The ->setup_dma() hook kicks DMA by using the data/command
++	 * interface, which belongs to a different AXI port from the
++	 * register interface.  Read back the register to avoid a race.
++	 */
++	ioread32(denali->reg + DMA_ENABLE);
+ 
+ 	denali_reset_irq(denali);
+ 	denali->setup_dma(denali, dma_addr, page, write);
+diff --git a/drivers/net/appletalk/ipddp.c b/drivers/net/appletalk/ipddp.c
+index 9375cef22420..3d27616d9c85 100644
+--- a/drivers/net/appletalk/ipddp.c
++++ b/drivers/net/appletalk/ipddp.c
+@@ -283,8 +283,12 @@ static int ipddp_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+                 case SIOCFINDIPDDPRT:
+ 			spin_lock_bh(&ipddp_route_lock);
+ 			rp = __ipddp_find_route(&rcp);
+-			if (rp)
+-				memcpy(&rcp2, rp, sizeof(rcp2));
++			if (rp) {
++				memset(&rcp2, 0, sizeof(rcp2));
++				rcp2.ip    = rp->ip;
++				rcp2.at    = rp->at;
++				rcp2.flags = rp->flags;
++			}
+ 			spin_unlock_bh(&ipddp_route_lock);
+ 
+ 			if (rp) {
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
+index 7c791c1da4b9..bef01331266f 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.h
++++ b/drivers/net/dsa/mv88e6xxx/global1.h
+@@ -128,7 +128,7 @@
+ #define MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION		0x7000
+ #define MV88E6XXX_G1_ATU_OP_AGE_OUT_VIOLATION		BIT(7)
+ #define MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION		BIT(6)
+-#define MV88E6XXX_G1_ATU_OP_MISS_VIOLTATION		BIT(5)
++#define MV88E6XXX_G1_ATU_OP_MISS_VIOLATION		BIT(5)
+ #define MV88E6XXX_G1_ATU_OP_FULL_VIOLATION		BIT(4)
+ 
+ /* Offset 0x0C: ATU Data Register */
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+index 307410898fc9..5200e4bdce93 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+@@ -349,7 +349,7 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 		chip->ports[entry.portvec].atu_member_violation++;
+ 	}
+ 
+-	if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) {
++	if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) {
+ 		dev_err_ratelimited(chip->dev,
+ 				    "ATU miss violation for %pM portvec %x\n",
+ 				    entry.mac, entry.portvec);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4fdf3d33aa59..80b05597c5fe 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7888,7 +7888,7 @@ static int bnxt_change_mac_addr(struct net_device *dev, void *p)
+ 	if (ether_addr_equal(addr->sa_data, dev->dev_addr))
+ 		return 0;
+ 
+-	rc = bnxt_approve_mac(bp, addr->sa_data);
++	rc = bnxt_approve_mac(bp, addr->sa_data, true);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -8683,14 +8683,19 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ 	} else {
+ #ifdef CONFIG_BNXT_SRIOV
+ 		struct bnxt_vf_info *vf = &bp->vf;
++		bool strict_approval = true;
+ 
+ 		if (is_valid_ether_addr(vf->mac_addr)) {
+ 			/* overwrite netdev dev_addr with admin VF MAC */
+ 			memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
++			/* Older PF driver or firmware may not approve this
++			 * correctly.
++			 */
++			strict_approval = false;
+ 		} else {
+ 			eth_hw_addr_random(bp->dev);
+ 		}
+-		rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
++		rc = bnxt_approve_mac(bp, bp->dev->dev_addr, strict_approval);
+ #endif
+ 	}
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index 2c77004a022b..24d16d3d33a1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -1095,7 +1095,7 @@ update_vf_mac_exit:
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	struct hwrm_func_vf_cfg_input req = {0};
+ 	int rc = 0;
+@@ -1113,12 +1113,13 @@ int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
+ 	memcpy(req.dflt_mac_addr, mac, ETH_ALEN);
+ 	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ mac_done:
+-	if (rc) {
++	if (rc && strict) {
+ 		rc = -EADDRNOTAVAIL;
+ 		netdev_warn(bp->dev, "VF MAC address %pM not approved by the PF\n",
+ 			    mac);
++		return rc;
+ 	}
+-	return rc;
++	return 0;
+ }
+ #else
+ 
+@@ -1135,7 +1136,7 @@ void bnxt_update_vf_mac(struct bnxt *bp)
+ {
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+index e9b20cd19881..2eed9eda1195 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+@@ -39,5 +39,5 @@ int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs);
+ void bnxt_sriov_disable(struct bnxt *);
+ void bnxt_hwrm_exec_fwd_req(struct bnxt *);
+ void bnxt_update_vf_mac(struct bnxt *);
+-int bnxt_approve_mac(struct bnxt *, u8 *);
++int bnxt_approve_mac(struct bnxt *, u8 *, bool);
+ #endif
+diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c
+index c8c7ad2eff77..9b5a68b65432 100644
+--- a/drivers/net/ethernet/hp/hp100.c
++++ b/drivers/net/ethernet/hp/hp100.c
+@@ -2634,7 +2634,7 @@ static int hp100_login_to_vg_hub(struct net_device *dev, u_short force_relogin)
+ 		/* Wait for link to drop */
+ 		time = jiffies + (HZ / 10);
+ 		do {
+-			if (~(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
++			if (!(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
+ 				break;
+ 			if (!in_interrupt())
+ 				schedule_timeout_interruptible(1);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f7f08e3fa761..661fa5a38df2 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -61,6 +61,8 @@ static struct {
+  */
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 			     const struct phylink_link_state *state);
++static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
++			      phy_interface_t interface, struct phy_device *phy);
+ 
+ /* Queue modes */
+ #define MVPP2_QDIST_SINGLE_MODE	0
+@@ -3142,6 +3144,7 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		mvpp22_mode_reconfigure(port);
+ 
+ 	if (port->phylink) {
++		netif_carrier_off(port->dev);
+ 		phylink_start(port->phylink);
+ 	} else {
+ 		/* Phylink isn't used as of now for ACPI, so the MAC has to be
+@@ -3150,9 +3153,10 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		 */
+ 		struct phylink_link_state state = {
+ 			.interface = port->phy_interface,
+-			.link = 1,
+ 		};
+ 		mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state);
++		mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface,
++				  NULL);
+ 	}
+ 
+ 	netif_tx_start_all_queues(port->dev);
+@@ -4389,10 +4393,6 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 		return;
+ 	}
+ 
+-	netif_tx_stop_all_queues(port->dev);
+-	if (!port->has_phy)
+-		netif_carrier_off(port->dev);
+-
+ 	/* Make sure the port is disabled when reconfiguring the mode */
+ 	mvpp2_port_disable(port);
+ 
+@@ -4417,16 +4417,7 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 	if (port->priv->hw_version == MVPP21 && port->flags & MVPP2_F_LOOPBACK)
+ 		mvpp2_port_loopback_set(port, state);
+ 
+-	/* If the port already was up, make sure it's still in the same state */
+-	if (state->link || !port->has_phy) {
+-		mvpp2_port_enable(port);
+-
+-		mvpp2_egress_enable(port);
+-		mvpp2_ingress_enable(port);
+-		if (!port->has_phy)
+-			netif_carrier_on(dev);
+-		netif_tx_wake_all_queues(dev);
+-	}
++	mvpp2_port_enable(port);
+ }
+ 
+ static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 6d74cde68163..c0fc30a1f600 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2172,17 +2172,15 @@ static int netvsc_remove(struct hv_device *dev)
+ 
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+-	rcu_read_lock();
+-	nvdev = rcu_dereference(ndev_ctx->nvdev);
+-
+-	if  (nvdev)
++	rtnl_lock();
++	nvdev = rtnl_dereference(ndev_ctx->nvdev);
++	if (nvdev)
+ 		cancel_work_sync(&nvdev->subchan_work);
+ 
+ 	/*
+ 	 * Call to the vsc driver to let it know that the device is being
+ 	 * removed. Also blocks mtu and channel changes.
+ 	 */
+-	rtnl_lock();
+ 	vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
+ 	if (vf_netdev)
+ 		netvsc_unregister_vf(vf_netdev);
+@@ -2194,7 +2192,6 @@ static int netvsc_remove(struct hv_device *dev)
+ 	list_del(&ndev_ctx->list);
+ 
+ 	rtnl_unlock();
+-	rcu_read_unlock();
+ 
+ 	hv_set_drvdata(dev, NULL);
+ 
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index ce61231e96ea..62dc564b251d 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -429,6 +429,9 @@ static int pppoe_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb)
+ 		goto out;
+ 
++	if (skb_mac_header_len(skb) < ETH_HLEN)
++		goto drop;
++
+ 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
+ 		goto drop;
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index cb0cc30c3d6a..1e95d37c6e27 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1206,13 +1206,13 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1199, 0x9061, 8)},	/* Sierra Wireless Modem */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 8)},	/* Sierra Wireless EM7305 */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 10)},	/* Sierra Wireless EM7305 */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 10)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 10)},/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x011e, 4)},	/* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index c2b6aa1d485f..f49c2a60a6eb 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -907,7 +907,11 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ 			BUG_ON(pull_to <= skb_headlen(skb));
+ 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ 		}
+-		BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
++		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
++			queue->rx.rsp_cons = ++cons;
++			kfree_skb(nskb);
++			return ~0U;
++		}
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ 				skb_frag_page(nfrag),
+@@ -1044,6 +1048,8 @@ err:
+ 		skb->len += rx->status;
+ 
+ 		i = xennet_fill_frags(queue, skb, &tmpq);
++		if (unlikely(i == ~0U))
++			goto err;
+ 
+ 		if (rx->flags & XEN_NETRXF_csum_blank)
+ 			skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index f439de848658..d1e2d175c10b 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4235,11 +4235,6 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+  *
+  * 0x9d10-0x9d1b PCI Express Root port #{1-12}
+  *
+- * The 300 series chipset suffers from the same bug so include those root
+- * ports here as well.
+- *
+- * 0xa32c-0xa343 PCI Express Root port #{0-24}
+- *
+  * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
+  * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
+  * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
+@@ -4257,7 +4252,6 @@ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ 	case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
+ 	case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
+ 	case 0x9d10 ... 0x9d1b: /* 7th & 8th Gen Mobile */
+-	case 0xa32c ... 0xa343:				/* 300 series */
+ 		return true;
+ 	}
+ 
+diff --git a/drivers/platform/x86/alienware-wmi.c b/drivers/platform/x86/alienware-wmi.c
+index d975462a4c57..f10af5c383c5 100644
+--- a/drivers/platform/x86/alienware-wmi.c
++++ b/drivers/platform/x86/alienware-wmi.c
+@@ -536,6 +536,7 @@ static acpi_status alienware_wmax_command(struct wmax_basic_args *in_args,
+ 		if (obj && obj->type == ACPI_TYPE_INTEGER)
+ 			*out_data = (u32) obj->integer.value;
+ 	}
++	kfree(output.pointer);
+ 	return status;
+ 
+ }
+diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
+index fbefedb1c172..548abba2c1e9 100644
+--- a/drivers/platform/x86/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell-smbios-wmi.c
+@@ -78,6 +78,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 	dev_dbg(&wdev->dev, "result: [%08x,%08x,%08x,%08x]\n",
+ 		priv->buf->std.output[0], priv->buf->std.output[1],
+ 		priv->buf->std.output[2], priv->buf->std.output[3]);
++	kfree(output.pointer);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 8122807db380..b714a543a91d 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,7 +15,6 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
+-#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -450,10 +449,6 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
+-	err = dev_pm_domain_attach(dev, true);
+-	if (err)
+-		goto out;
+-
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -495,8 +490,6 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
+-	dev_pm_domain_detach(dev, true);
+-
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index ec395a6baf9c..9da0bc5a036c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2143,8 +2143,17 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	 */
+ 	if (ctlr->num_chipselect == 0)
+ 		return -EINVAL;
+-	/* allocate dynamic bus number using Linux idr */
+-	if ((ctlr->bus_num < 0) && ctlr->dev.of_node) {
++	if (ctlr->bus_num >= 0) {
++		/* devices with a fixed bus num must check in with that number */
++		mutex_lock(&board_lock);
++		id = idr_alloc(&spi_master_idr, ctlr, ctlr->bus_num,
++			ctlr->bus_num + 1, GFP_KERNEL);
++		mutex_unlock(&board_lock);
++		if (WARN(id < 0, "couldn't get idr"))
++			return id == -ENOSPC ? -EBUSY : id;
++		ctlr->bus_num = id;
++	} else if (ctlr->dev.of_node) {
++		/* allocate dynamic bus number using Linux idr */
+ 		id = of_alias_get_id(ctlr->dev.of_node, "spi");
+ 		if (id >= 0) {
+ 			ctlr->bus_num = id;
+diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
+index 9518ffd8b8ba..4e680d753941 100644
+--- a/drivers/target/iscsi/iscsi_target_auth.c
++++ b/drivers/target/iscsi/iscsi_target_auth.c
+@@ -26,27 +26,6 @@
+ #include "iscsi_target_nego.h"
+ #include "iscsi_target_auth.h"
+ 
+-static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len)
+-{
+-	int j = DIV_ROUND_UP(len, 2), rc;
+-
+-	rc = hex2bin(dst, src, j);
+-	if (rc < 0)
+-		pr_debug("CHAP string contains non hex digit symbols\n");
+-
+-	dst[j] = '\0';
+-	return j;
+-}
+-
+-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
+-{
+-	int i;
+-
+-	for (i = 0; i < src_len; i++) {
+-		sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
+-	}
+-}
+-
+ static int chap_gen_challenge(
+ 	struct iscsi_conn *conn,
+ 	int caller,
+@@ -62,7 +41,7 @@ static int chap_gen_challenge(
+ 	ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+ 	if (unlikely(ret))
+ 		return ret;
+-	chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
++	bin2hex(challenge_asciihex, chap->challenge,
+ 				CHAP_CHALLENGE_LENGTH);
+ 	/*
+ 	 * Set CHAP_C, and copy the generated challenge into c_str.
+@@ -248,9 +227,16 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_R.\n");
+ 		goto out;
+ 	}
++	if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
++	if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
+ 
+ 	pr_debug("[server] Got CHAP_R=%s\n", chap_r);
+-	chap_string_to_hex(client_digest, chap_r, strlen(chap_r));
+ 
+ 	tfm = crypto_alloc_shash("md5", 0, 0);
+ 	if (IS_ERR(tfm)) {
+@@ -294,7 +280,7 @@ static int chap_server_compute_md5(
+ 		goto out;
+ 	}
+ 
+-	chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
+ 	pr_debug("[server] MD5 Server Digest: %s\n", response);
+ 
+ 	if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
+@@ -349,9 +335,7 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_C.\n");
+ 		goto out;
+ 	}
+-	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+-	challenge_len = chap_string_to_hex(challenge_binhex, challenge,
+-				strlen(challenge));
++	challenge_len = DIV_ROUND_UP(strlen(challenge), 2);
+ 	if (!challenge_len) {
+ 		pr_err("Unable to convert incoming challenge\n");
+ 		goto out;
+@@ -360,6 +344,11 @@ static int chap_server_compute_md5(
+ 		pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");
+ 		goto out;
+ 	}
++	if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) {
++		pr_err("Malformed CHAP_C\n");
++		goto out;
++	}
++	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+ 	/*
+ 	 * During mutual authentication, the CHAP_C generated by the
+ 	 * initiator must not match the original CHAP_C generated by
+@@ -413,7 +402,7 @@ static int chap_server_compute_md5(
+ 	/*
+ 	 * Convert response from binary hex to ascii hext.
+ 	 */
+-	chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, digest, MD5_SIGNATURE_SIZE);
+ 	*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
+ 			response);
+ 	*nr_out_len += 1;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index a78ad10a119b..73cdc0d633dd 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -32,6 +32,8 @@
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/kbd_kern.h>
+ #include <linux/vt_kern.h>
+ #include <linux/kbd_diacr.h>
+@@ -700,6 +702,8 @@ int vt_ioctl(struct tty_struct *tty,
+ 		if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES)
+ 			ret = -ENXIO;
+ 		else {
++			vsa.console = array_index_nospec(vsa.console,
++							 MAX_NR_CONSOLES + 1);
+ 			vsa.console--;
+ 			console_lock();
+ 			ret = vc_allocate(vsa.console);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index e2902d394f1b..f93f9881ec18 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -76,7 +76,7 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 	else if (unlikely(rlen < EXT4_DIR_REC_LEN(de->name_len)))
+ 		error_msg = "rec_len is too small for name_len";
+ 	else if (unlikely(((char *) de - buf) + rlen > size))
+-		error_msg = "directory entry across range";
++		error_msg = "directory entry overrun";
+ 	else if (unlikely(le32_to_cpu(de->inode) >
+ 			le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))
+ 		error_msg = "inode out of bounds";
+@@ -85,18 +85,16 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 
+ 	if (filp)
+ 		ext4_error_file(filp, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				error_msg, offset, le32_to_cpu(de->inode),
++				rlen, de->name_len, size);
+ 	else
+ 		ext4_error_inode(dir, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				 error_msg, offset, le32_to_cpu(de->inode),
++				 rlen, de->name_len, size);
+ 
+ 	return 1;
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 7c7123f265c2..aa1ce53d0c87 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -675,6 +675,9 @@ enum {
+ /* Max physical block we can address w/o extents */
+ #define EXT4_MAX_BLOCK_FILE_PHYS	0xFFFFFFFF
+ 
++/* Max logical block we can support */
++#define EXT4_MAX_LOGICAL_BLOCK		0xFFFFFFFF
++
+ /*
+  * Structure of an inode on the disk
+  */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 3543fe80a3c4..7b4736022761 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1753,6 +1753,7 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ {
+ 	int err, inline_size;
+ 	struct ext4_iloc iloc;
++	size_t inline_len;
+ 	void *inline_pos;
+ 	unsigned int offset;
+ 	struct ext4_dir_entry_2 *de;
+@@ -1780,8 +1781,9 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 		goto out;
+ 	}
+ 
++	inline_len = ext4_get_inline_size(dir);
+ 	offset = EXT4_INLINE_DOTDOT_SIZE;
+-	while (offset < dir->i_size) {
++	while (offset < inline_len) {
+ 		de = ext4_get_inline_entry(dir, &iloc, offset,
+ 					   &inline_pos, &inline_size);
+ 		if (ext4_check_dir_entry(dir, NULL, de,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4efe77286ecd..2276137d0083 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3412,12 +3412,16 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	unsigned int blkbits = inode->i_blkbits;
+-	unsigned long first_block = offset >> blkbits;
+-	unsigned long last_block = (offset + length - 1) >> blkbits;
++	unsigned long first_block, last_block;
+ 	struct ext4_map_blocks map;
+ 	bool delalloc = false;
+ 	int ret;
+ 
++	if ((offset >> blkbits) > EXT4_MAX_LOGICAL_BLOCK)
++		return -EINVAL;
++	first_block = offset >> blkbits;
++	last_block = min_t(loff_t, (offset + length - 1) >> blkbits,
++			   EXT4_MAX_LOGICAL_BLOCK);
+ 
+ 	if (flags & IOMAP_REPORT) {
+ 		if (ext4_has_inline_data(inode)) {
+@@ -3947,6 +3951,7 @@ static const struct address_space_operations ext4_dax_aops = {
+ 	.writepages		= ext4_dax_writepages,
+ 	.direct_IO		= noop_direct_IO,
+ 	.set_page_dirty		= noop_set_page_dirty,
++	.bmap			= ext4_bmap,
+ 	.invalidatepage		= noop_invalidatepage,
+ };
+ 
+@@ -4856,6 +4861,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		 * not initialized on a new filesystem. */
+ 	}
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext4_set_inode_flags(inode);
+ 	inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
+ 	ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
+ 	if (ext4_has_feature_64bit(sb))
+@@ -5005,7 +5011,6 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		goto bad_inode;
+ 	}
+ 	brelse(iloc.bh);
+-	ext4_set_inode_flags(inode);
+ 
+ 	unlock_new_inode(inode);
+ 	return inode;
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 638ad4743477..38e6a846aac1 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -49,7 +49,6 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
+ 	 */
+ 	sb_start_write(sb);
+ 	ext4_mmp_csum_set(sb, mmp);
+-	mark_buffer_dirty(bh);
+ 	lock_buffer(bh);
+ 	bh->b_end_io = end_buffer_write_sync;
+ 	get_bh(bh);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 116ff68c5bd4..377d516c475f 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3478,6 +3478,12 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	int credits;
+ 	u8 old_file_type;
+ 
++	if (new.inode && new.inode->i_nlink == 0) {
++		EXT4_ERROR_INODE(new.inode,
++				 "target of rename is already freed");
++		return -EFSCORRUPTED;
++	}
++
+ 	if ((ext4_test_inode_flag(new_dir, EXT4_INODE_PROJINHERIT)) &&
+ 	    (!projid_eq(EXT4_I(new_dir)->i_projid,
+ 			EXT4_I(old_dentry->d_inode)->i_projid)))
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e5fb38451a73..ebbc663d0798 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -19,6 +19,7 @@
+ 
+ int ext4_resize_begin(struct super_block *sb)
+ {
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_RESOURCE))
+@@ -29,7 +30,7 @@ int ext4_resize_begin(struct super_block *sb)
+          * because the user tools have no way of handling this.  Probably a
+          * bad time to do it anyways.
+          */
+-	if (EXT4_SB(sb)->s_sbh->b_blocknr !=
++	if (EXT4_B2C(sbi, sbi->s_sbh->b_blocknr) !=
+ 	    le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {
+ 		ext4_warning(sb, "won't resize using backup superblock at %llu",
+ 			(unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);
+@@ -1986,6 +1987,26 @@ retry:
+ 		}
+ 	}
+ 
++	/*
++	 * Make sure the last group has enough space so that it's
++	 * guaranteed to have enough space for all metadata blocks
++	 * that it might need to hold.  (We might not need to store
++	 * the inode table blocks in the last block group, but there
++	 * will be cases where this might be needed.)
++	 */
++	if ((ext4_group_first_block_no(sb, n_group) +
++	     ext4_group_overhead_blocks(sb, n_group) + 2 +
++	     sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) {
++		n_blocks_count = ext4_group_first_block_no(sb, n_group);
++		n_group--;
++		n_blocks_count_retry = 0;
++		if (resize_inode) {
++			iput(resize_inode);
++			resize_inode = NULL;
++		}
++		goto retry;
++	}
++
+ 	/* extend the last group */
+ 	if (n_group == o_group)
+ 		add = n_blocks_count - o_blocks_count;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 130c12974e28..a7a0fffc3ae8 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2126,6 +2126,8 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
+ 		SEQ_OPTS_PRINT("max_dir_size_kb=%u", sbi->s_max_dir_size_kb);
+ 	if (test_opt(sb, DATA_ERR_ABORT))
+ 		SEQ_OPTS_PUTS("data_err=abort");
++	if (DUMMY_ENCRYPTION_ENABLED(sbi))
++		SEQ_OPTS_PUTS("test_dummy_encryption");
+ 
+ 	ext4_show_quota_options(seq, sb);
+ 	return 0;
+@@ -4357,11 +4359,13 @@ no_journal:
+ 	block = ext4_count_free_clusters(sb);
+ 	ext4_free_blocks_count_set(sbi->s_es, 
+ 				   EXT4_C2B(sbi, block));
++	ext4_superblock_csum_set(sb);
+ 	err = percpu_counter_init(&sbi->s_freeclusters_counter, block,
+ 				  GFP_KERNEL);
+ 	if (!err) {
+ 		unsigned long freei = ext4_count_free_inodes(sb);
+ 		sbi->s_es->s_free_inodes_count = cpu_to_le32(freei);
++		ext4_superblock_csum_set(sb);
+ 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ 					  GFP_KERNEL);
+ 	}
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index d9ebe11c8990..1d098c3c00e0 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -342,6 +342,7 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ 				 * for this bh as it's not marked locally
+ 				 * uptodate. */
+ 				status = -EIO;
++				clear_buffer_needs_validate(bh);
+ 				put_bh(bh);
+ 				bhs[i] = NULL;
+ 				continue;
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 09e37e63bddd..6f720fdf5020 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,12 +152,6 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -190,7 +184,6 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -242,12 +235,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -269,7 +256,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -496,12 +482,6 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -521,7 +501,6 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -561,9 +540,6 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
+-	if (!host->i_nlink)
+-		return -ENOENT;
+-
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/include/net/nfc/hci.h b/include/net/nfc/hci.h
+index 316694dafa5b..008f466d1da7 100644
+--- a/include/net/nfc/hci.h
++++ b/include/net/nfc/hci.h
+@@ -87,7 +87,7 @@ struct nfc_hci_pipe {
+  * According to specification 102 622 chapter 4.4 Pipes,
+  * the pipe identifier is 7 bits long.
+  */
+-#define NFC_HCI_MAX_PIPES		127
++#define NFC_HCI_MAX_PIPES		128
+ struct nfc_hci_init_data {
+ 	u8 gate_count;
+ 	struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES];
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 70c273777fe9..32b71e5b1290 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -165,15 +165,14 @@ struct cipher_context {
+ 	char *rec_seq;
+ };
+ 
++union tls_crypto_context {
++	struct tls_crypto_info info;
++	struct tls12_crypto_info_aes_gcm_128 aes_gcm_128;
++};
++
+ struct tls_context {
+-	union {
+-		struct tls_crypto_info crypto_send;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_send_aes_gcm_128;
+-	};
+-	union {
+-		struct tls_crypto_info crypto_recv;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_recv_aes_gcm_128;
+-	};
++	union tls_crypto_context crypto_send;
++	union tls_crypto_context crypto_recv;
+ 
+ 	struct list_head list;
+ 	struct net_device *netdev;
+@@ -337,8 +336,8 @@ static inline void tls_fill_prepend(struct tls_context *ctx,
+ 	 * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
+ 	 */
+ 	buf[0] = record_type;
+-	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.version);
+-	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.version);
++	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.info.version);
++	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.info.version);
+ 	/* we can use IV for nonce explicit according to spec */
+ 	buf[3] = pkt_len >> 8;
+ 	buf[4] = pkt_len & 0xFF;
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 910cc4334b21..7b8c9e19bad1 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 dh_private;
++	__s32 private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/include/uapi/sound/skl-tplg-interface.h b/include/uapi/sound/skl-tplg-interface.h
+index f58cafa42f18..f39352cef382 100644
+--- a/include/uapi/sound/skl-tplg-interface.h
++++ b/include/uapi/sound/skl-tplg-interface.h
+@@ -10,6 +10,8 @@
+ #ifndef __HDA_TPLG_INTERFACE_H__
+ #define __HDA_TPLG_INTERFACE_H__
+ 
++#include <linux/types.h>
++
+ /*
+  * Default types range from 0~12. type can range from 0 to 0xff
+  * SST types start at higher to avoid any overlapping in future
+@@ -143,10 +145,10 @@ enum skl_module_param_type {
+ };
+ 
+ struct skl_dfw_algo_data {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 max;
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 max;
+ 	char params[0];
+ } __packed;
+ 
+@@ -163,68 +165,68 @@ enum skl_tuple_type {
+ /* v4 configuration data */
+ 
+ struct skl_dfw_v4_module_pin {
+-	u16 module_id;
+-	u16 instance_id;
++	__u16 module_id;
++	__u16 instance_id;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_fmt {
+-	u32 channels;
+-	u32 freq;
+-	u32 bit_depth;
+-	u32 valid_bit_depth;
+-	u32 ch_cfg;
+-	u32 interleaving_style;
+-	u32 sample_type;
+-	u32 ch_map;
++	__u32 channels;
++	__u32 freq;
++	__u32 bit_depth;
++	__u32 valid_bit_depth;
++	__u32 ch_cfg;
++	__u32 interleaving_style;
++	__u32 sample_type;
++	__u32 ch_map;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_caps {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 caps_size;
+-	u32 caps[HDA_SST_CFG_MAX];
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 caps_size;
++	__u32 caps[HDA_SST_CFG_MAX];
+ } __packed;
+ 
+ struct skl_dfw_v4_pipe {
+-	u8 pipe_id;
+-	u8 pipe_priority;
+-	u16 conn_type:4;
+-	u16 rsvd:4;
+-	u16 memory_pages:8;
++	__u8 pipe_id;
++	__u8 pipe_priority;
++	__u16 conn_type:4;
++	__u16 rsvd:4;
++	__u16 memory_pages:8;
+ } __packed;
+ 
+ struct skl_dfw_v4_module {
+ 	char uuid[SKL_UUID_STR_SZ];
+ 
+-	u16 module_id;
+-	u16 instance_id;
+-	u32 max_mcps;
+-	u32 mem_pages;
+-	u32 obs;
+-	u32 ibs;
+-	u32 vbus_id;
+-
+-	u32 max_in_queue:8;
+-	u32 max_out_queue:8;
+-	u32 time_slot:8;
+-	u32 core_id:4;
+-	u32 rsvd1:4;
+-
+-	u32 module_type:8;
+-	u32 conn_type:4;
+-	u32 dev_type:4;
+-	u32 hw_conn_type:4;
+-	u32 rsvd2:12;
+-
+-	u32 params_fixup:8;
+-	u32 converter:8;
+-	u32 input_pin_type:1;
+-	u32 output_pin_type:1;
+-	u32 is_dynamic_in_pin:1;
+-	u32 is_dynamic_out_pin:1;
+-	u32 is_loadable:1;
+-	u32 rsvd3:11;
++	__u16 module_id;
++	__u16 instance_id;
++	__u32 max_mcps;
++	__u32 mem_pages;
++	__u32 obs;
++	__u32 ibs;
++	__u32 vbus_id;
++
++	__u32 max_in_queue:8;
++	__u32 max_out_queue:8;
++	__u32 time_slot:8;
++	__u32 core_id:4;
++	__u32 rsvd1:4;
++
++	__u32 module_type:8;
++	__u32 conn_type:4;
++	__u32 dev_type:4;
++	__u32 hw_conn_type:4;
++	__u32 rsvd2:12;
++
++	__u32 params_fixup:8;
++	__u32 converter:8;
++	__u32 input_pin_type:1;
++	__u32 output_pin_type:1;
++	__u32 is_dynamic_in_pin:1;
++	__u32 is_dynamic_out_pin:1;
++	__u32 is_loadable:1;
++	__u32 rsvd3:11;
+ 
+ 	struct skl_dfw_v4_pipe pipe;
+ 	struct skl_dfw_v4_module_fmt in_fmt[MAX_IN_QUEUE];
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 63aaac52a265..adbe21c8876e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3132,7 +3132,7 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
+ 				 * an arbitrary scalar. Disallow all math except
+ 				 * pointer subtraction
+ 				 */
+-				if (opcode == BPF_SUB){
++				if (opcode == BPF_SUB && env->allow_ptr_leaks) {
+ 					mark_reg_unknown(env, regs, insn->dst_reg);
+ 					return 0;
+ 				}
+diff --git a/kernel/pid.c b/kernel/pid.c
+index 157fe4b19971..2ff2d8bfa4e0 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -195,7 +195,7 @@ struct pid *alloc_pid(struct pid_namespace *ns)
+ 		idr_preload_end();
+ 
+ 		if (nr < 0) {
+-			retval = nr;
++			retval = (nr == -ENOSPC) ? -EAGAIN : nr;
+ 			goto out_free;
+ 		}
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 478d9d3e6be9..26526fc41f0d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10019,7 +10019,8 @@ static inline bool vruntime_normalized(struct task_struct *p)
+ 	 * - A task which has been woken up by try_to_wake_up() and
+ 	 *   waiting for actually being woken up by sched_ttwu_pending().
+ 	 */
+-	if (!se->sum_exec_runtime || p->state == TASK_WAKING)
++	if (!se->sum_exec_runtime ||
++	    (p->state == TASK_WAKING && p->sched_remote_wakeup))
+ 		return true;
+ 
+ 	return false;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0b0b688ea166..e58fd35ff64a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1545,6 +1545,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 	tmp_iter_page = first_page;
+ 
+ 	do {
++		cond_resched();
++
+ 		to_remove_page = tmp_iter_page;
+ 		rb_inc_page(cpu_buffer, &tmp_iter_page);
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 94af022b7f3d..22e949e263f0 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -637,6 +637,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	depends on NO_BOOTMEM
+ 	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
++	depends on 64BIT
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+ 	  single thread. On very large machines this can take a considerable
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 41b9bbf24e16..8264bbdbb6a5 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2226,6 +2226,8 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
+ 			mpol_shared_policy_init(&info->policy, NULL);
+ 			break;
+ 		}
++
++		lockdep_annotate_inode_mutex_key(inode);
+ 	} else
+ 		shmem_free_inode(sb);
+ 	return inode;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 8e3fda9e725c..cb01d509d511 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1179,6 +1179,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		lladdr = neigh->ha;
+ 	}
+ 
++	/* Update confirmed timestamp for neighbour entry after we
++	 * received ARP packet even if it doesn't change IP to MAC binding.
++	 */
++	if (new & NUD_CONNECTED)
++		neigh->confirmed = jiffies;
++
+ 	/* If entry was valid and address is not changed,
+ 	   do not change entry state, if new one is STALE.
+ 	 */
+@@ -1200,15 +1206,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		}
+ 	}
+ 
+-	/* Update timestamps only once we know we will make a change to the
++	/* Update timestamp only once we know we will make a change to the
+ 	 * neighbour entry. Otherwise we risk to move the locktime window with
+ 	 * noop updates and ignore relevant ARP updates.
+ 	 */
+-	if (new != old || lladdr != neigh->ha) {
+-		if (new & NUD_CONNECTED)
+-			neigh->confirmed = jiffies;
++	if (new != old || lladdr != neigh->ha)
+ 		neigh->updated = jiffies;
+-	}
+ 
+ 	if (new != old) {
+ 		neigh_del_timer(neigh);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index e3f743c141b3..bafaa033826f 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2760,7 +2760,7 @@ int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm)
+ 	}
+ 
+ 	if (dev->rtnl_link_state == RTNL_LINK_INITIALIZED) {
+-		__dev_notify_flags(dev, old_flags, 0U);
++		__dev_notify_flags(dev, old_flags, (old_flags ^ dev->flags));
+ 	} else {
+ 		dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
+ 		__dev_notify_flags(dev, old_flags, ~0U);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b403499fdabe..0c43b050dac7 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1377,6 +1377,7 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 		if (encap)
+ 			skb_reset_inner_headers(skb);
+ 		skb->network_header = (u8 *)iph - skb->head;
++		skb_reset_mac_len(skb);
+ 	} while ((skb = skb->next));
+ 
+ out:
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 24e116ddae79..fed65bc9df86 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2128,6 +2128,28 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 							 inet_compute_pseudo);
+ }
+ 
++/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++			       struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 inet_compute_pseudo);
++
++	ret = udp_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ /*
+  *	All we need to do is get the socket, and then do a checksum.
+  */
+@@ -2174,14 +2196,9 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udp_queue_rcv_skb(sk, skb);
++		ret = udp_unicast_rcv_skb(sk, skb, uh);
+ 		sock_put(sk);
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
++		return ret;
+ 	}
+ 
+ 	if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))
+@@ -2189,22 +2206,8 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 						saddr, daddr, udptable, proto);
+ 
+ 	sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+-	if (sk) {
+-		int ret;
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 inet_compute_pseudo);
+-
+-		ret = udp_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
+-	}
++	if (sk)
++		return udp_unicast_rcv_skb(sk, skb, uh);
+ 
+ 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto drop;
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 5b3f2f89ef41..c6b75e96868c 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -115,6 +115,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 			payload_len = skb->len - nhoff - sizeof(*ipv6h);
+ 		ipv6h->payload_len = htons(payload_len);
+ 		skb->network_header = (u8 *)ipv6h - skb->head;
++		skb_reset_mac_len(skb);
+ 
+ 		if (udpfrag) {
+ 			int err = ip6_find_1stfragopt(skb, &prevhdr);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 3168847c30d1..4f607aace43c 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -219,12 +219,10 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
+ 				kfree_skb(skb);
+ 				return -ENOBUFS;
+ 			}
++			if (skb->sk)
++				skb_set_owner_w(skb2, skb->sk);
+ 			consume_skb(skb);
+ 			skb = skb2;
+-			/* skb_set_owner_w() changes sk->sk_wmem_alloc atomically,
+-			 * it is safe to call in our context (socket lock not held)
+-			 */
+-			skb_set_owner_w(skb, (struct sock *)sk);
+ 		}
+ 		if (opt->opt_flen)
+ 			ipv6_push_frag_opts(skb, opt, &proto);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 18e00ce1719a..480a79f47c52 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -946,8 +946,6 @@ static void ip6_rt_init_dst_reject(struct rt6_info *rt, struct fib6_info *ort)
+ 
+ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ {
+-	rt->dst.flags |= fib6_info_dst_flags(ort);
+-
+ 	if (ort->fib6_flags & RTF_REJECT) {
+ 		ip6_rt_init_dst_reject(rt, ort);
+ 		return;
+@@ -4670,20 +4668,31 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			 int iif, int type, u32 portid, u32 seq,
+ 			 unsigned int flags)
+ {
+-	struct rtmsg *rtm;
++	struct rt6_info *rt6 = (struct rt6_info *)dst;
++	struct rt6key *rt6_dst, *rt6_src;
++	u32 *pmetrics, table, rt6_flags;
+ 	struct nlmsghdr *nlh;
++	struct rtmsg *rtm;
+ 	long expires = 0;
+-	u32 *pmetrics;
+-	u32 table;
+ 
+ 	nlh = nlmsg_put(skb, portid, seq, type, sizeof(*rtm), flags);
+ 	if (!nlh)
+ 		return -EMSGSIZE;
+ 
++	if (rt6) {
++		rt6_dst = &rt6->rt6i_dst;
++		rt6_src = &rt6->rt6i_src;
++		rt6_flags = rt6->rt6i_flags;
++	} else {
++		rt6_dst = &rt->fib6_dst;
++		rt6_src = &rt->fib6_src;
++		rt6_flags = rt->fib6_flags;
++	}
++
+ 	rtm = nlmsg_data(nlh);
+ 	rtm->rtm_family = AF_INET6;
+-	rtm->rtm_dst_len = rt->fib6_dst.plen;
+-	rtm->rtm_src_len = rt->fib6_src.plen;
++	rtm->rtm_dst_len = rt6_dst->plen;
++	rtm->rtm_src_len = rt6_src->plen;
+ 	rtm->rtm_tos = 0;
+ 	if (rt->fib6_table)
+ 		table = rt->fib6_table->tb6_id;
+@@ -4698,7 +4707,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	rtm->rtm_scope = RT_SCOPE_UNIVERSE;
+ 	rtm->rtm_protocol = rt->fib6_protocol;
+ 
+-	if (rt->fib6_flags & RTF_CACHE)
++	if (rt6_flags & RTF_CACHE)
+ 		rtm->rtm_flags |= RTM_F_CLONED;
+ 
+ 	if (dest) {
+@@ -4706,7 +4715,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_dst_len = 128;
+ 	} else if (rtm->rtm_dst_len)
+-		if (nla_put_in6_addr(skb, RTA_DST, &rt->fib6_dst.addr))
++		if (nla_put_in6_addr(skb, RTA_DST, &rt6_dst->addr))
+ 			goto nla_put_failure;
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	if (src) {
+@@ -4714,12 +4723,12 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_src_len = 128;
+ 	} else if (rtm->rtm_src_len &&
+-		   nla_put_in6_addr(skb, RTA_SRC, &rt->fib6_src.addr))
++		   nla_put_in6_addr(skb, RTA_SRC, &rt6_src->addr))
+ 		goto nla_put_failure;
+ #endif
+ 	if (iif) {
+ #ifdef CONFIG_IPV6_MROUTE
+-		if (ipv6_addr_is_multicast(&rt->fib6_dst.addr)) {
++		if (ipv6_addr_is_multicast(&rt6_dst->addr)) {
+ 			int err = ip6mr_get_route(net, skb, rtm, portid);
+ 
+ 			if (err == 0)
+@@ -4754,7 +4763,14 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	/* For multipath routes, walk the siblings list and add
+ 	 * each as a nexthop within RTA_MULTIPATH.
+ 	 */
+-	if (rt->fib6_nsiblings) {
++	if (rt6) {
++		if (rt6_flags & RTF_GATEWAY &&
++		    nla_put_in6_addr(skb, RTA_GATEWAY, &rt6->rt6i_gateway))
++			goto nla_put_failure;
++
++		if (dst->dev && nla_put_u32(skb, RTA_OIF, dst->dev->ifindex))
++			goto nla_put_failure;
++	} else if (rt->fib6_nsiblings) {
+ 		struct fib6_info *sibling, *next_sibling;
+ 		struct nlattr *mp;
+ 
+@@ -4777,7 +4793,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 	}
+ 
+-	if (rt->fib6_flags & RTF_EXPIRES) {
++	if (rt6_flags & RTF_EXPIRES) {
+ 		expires = dst ? dst->expires : rt->expires;
+ 		expires -= jiffies;
+ 	}
+@@ -4785,7 +4801,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->fib6_flags)))
++	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt6_flags)))
+ 		goto nla_put_failure;
+ 
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index e6645cae403e..39d0cab919bb 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -748,6 +748,28 @@ static void udp6_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	}
+ }
+ 
++/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++				struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 ip6_compute_pseudo);
++
++	ret = udpv6_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		   int proto)
+ {
+@@ -799,13 +821,14 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp6_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-		sock_put(sk);
++		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
++			sock_put(sk);
++			goto report_csum_error;
++		}
+ 
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-		return 0;
++		ret = udp6_unicast_rcv_skb(sk, skb, uh);
++		sock_put(sk);
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -818,30 +841,13 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 	/* Unicast */
+ 	sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+ 	if (sk) {
+-		int ret;
+-
+-		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
+-			udp6_csum_zero_error(skb);
+-			goto csum_error;
+-		}
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 ip6_compute_pseudo);
+-
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-
+-		return 0;
++		if (!uh->check && !udp_sk(sk)->no_check6_rx)
++			goto report_csum_error;
++		return udp6_unicast_rcv_skb(sk, skb, uh);
+ 	}
+ 
+-	if (!uh->check) {
+-		udp6_csum_zero_error(skb);
+-		goto csum_error;
+-	}
++	if (!uh->check)
++		goto report_csum_error;
+ 
+ 	if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto discard;
+@@ -862,6 +868,9 @@ short_packet:
+ 			    ulen, skb->len,
+ 			    daddr, ntohs(uh->dest));
+ 	goto discard;
++
++report_csum_error:
++	udp6_csum_zero_error(skb);
+ csum_error:
+ 	__UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
+ discard:
+diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c
+index ac8030c4bcf8..19cb2e473ea6 100644
+--- a/net/nfc/hci/core.c
++++ b/net/nfc/hci/core.c
+@@ -209,6 +209,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		create_info = (struct hci_create_pipe_resp *)skb->data;
+ 
++		if (create_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		/* Save the new created pipe and bind with local gate,
+ 		 * the description for skb->data[3] is destination gate id
+ 		 * but since we received this cmd from host controller, we
+@@ -232,6 +237,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		delete_info = (struct hci_delete_pipe_noti *)skb->data;
+ 
++		if (delete_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		hdev->pipes[delete_info->pipe].gate = NFC_HCI_INVALID_GATE;
+ 		hdev->pipes[delete_info->pipe].dest_host = NFC_HCI_INVALID_HOST;
+ 		break;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 5db358497c9e..e0e334a3a6e1 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -64,7 +64,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, parm->index, est, a,
+-				     &act_sample_ops, bind, false);
++				     &act_sample_ops, bind, true);
+ 		if (ret)
+ 			return ret;
+ 		ret = ACT_P_CREATED;
+diff --git a/net/socket.c b/net/socket.c
+index 4ac3b834cce9..d4187ac17d55 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -962,7 +962,8 @@ void dlci_ioctl_set(int (*hook) (unsigned int, void __user *))
+ EXPORT_SYMBOL(dlci_ioctl_set);
+ 
+ static long sock_do_ioctl(struct net *net, struct socket *sock,
+-				 unsigned int cmd, unsigned long arg)
++			  unsigned int cmd, unsigned long arg,
++			  unsigned int ifreq_size)
+ {
+ 	int err;
+ 	void __user *argp = (void __user *)arg;
+@@ -988,11 +989,11 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 	} else {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+-		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
++		if (copy_from_user(&ifr, argp, ifreq_size))
+ 			return -EFAULT;
+ 		err = dev_ioctl(net, cmd, &ifr, &need_copyout);
+ 		if (!err && need_copyout)
+-			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
++			if (copy_to_user(argp, &ifr, ifreq_size))
+ 				return -EFAULT;
+ 	}
+ 	return err;
+@@ -1091,7 +1092,8 @@ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ 			err = open_related_ns(&net->ns, get_net_ns);
+ 			break;
+ 		default:
+-			err = sock_do_ioctl(net, sock, cmd, arg);
++			err = sock_do_ioctl(net, sock, cmd, arg,
++					    sizeof(struct ifreq));
+ 			break;
+ 		}
+ 	return err;
+@@ -2762,7 +2764,8 @@ static int do_siocgstamp(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timeval(&ktv, up);
+@@ -2778,7 +2781,8 @@ static int do_siocgstampns(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timespec(&kts, up);
+@@ -3084,7 +3088,8 @@ static int routing_ioctl(struct net *net, struct socket *sock,
+ 	}
+ 
+ 	set_fs(KERNEL_DS);
+-	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r);
++	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 
+ out:
+@@ -3197,7 +3202,8 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 	case SIOCBONDSETHWADDR:
+ 	case SIOCBONDCHANGEACTIVE:
+ 	case SIOCGIFNAME:
+-		return sock_do_ioctl(net, sock, cmd, arg);
++		return sock_do_ioctl(net, sock, cmd, arg,
++				     sizeof(struct compat_ifreq));
+ 	}
+ 
+ 	return -ENOIOCTLCMD;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a7a8f8e20ff3..9bd0286d5407 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -552,7 +552,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 		goto free_marker_record;
+ 	}
+ 
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 	switch (crypto_info->cipher_type) {
+ 	case TLS_CIPHER_AES_GCM_128:
+ 		nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
+@@ -650,7 +650,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 
+ 	ctx->priv_ctx_tx = offload_ctx;
+ 	rc = netdev->tlsdev_ops->tls_dev_add(netdev, sk, TLS_OFFLOAD_CTX_DIR_TX,
+-					     &ctx->crypto_send,
++					     &ctx->crypto_send.info,
+ 					     tcp_sk(sk)->write_seq);
+ 	if (rc)
+ 		goto release_netdev;
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index 748914abdb60..72143679d3d6 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -320,7 +320,7 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
+ 		goto free_req;
+ 
+ 	iv = buf;
+-	memcpy(iv, tls_ctx->crypto_send_aes_gcm_128.salt,
++	memcpy(iv, tls_ctx->crypto_send.aes_gcm_128.salt,
+ 	       TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+ 	aad = buf + TLS_CIPHER_AES_GCM_128_SALT_SIZE +
+ 	      TLS_CIPHER_AES_GCM_128_IV_SIZE;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 45188d920013..2ccf194c3ebb 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -245,6 +245,16 @@ static void tls_write_space(struct sock *sk)
+ 	ctx->sk_write_space(sk);
+ }
+ 
++static void tls_ctx_free(struct tls_context *ctx)
++{
++	if (!ctx)
++		return;
++
++	memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));
++	memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));
++	kfree(ctx);
++}
++
+ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+@@ -295,7 +305,7 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ #else
+ 	{
+ #endif
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ 		ctx = NULL;
+ 	}
+ 
+@@ -306,7 +316,7 @@ skip_tx_cleanup:
+ 	 * for sk->sk_prot->unhash [tls_hw_unhash]
+ 	 */
+ 	if (free_ctx)
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ }
+ 
+ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+@@ -331,7 +341,7 @@ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	/* get user crypto info */
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 
+ 	if (!TLS_CRYPTO_INFO_READY(crypto_info)) {
+ 		rc = -EBUSY;
+@@ -418,9 +428,9 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	if (tx)
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 	else
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 
+ 	/* Currently we don't support set crypto info more than one time */
+ 	if (TLS_CRYPTO_INFO_READY(crypto_info)) {
+@@ -492,7 +502,7 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	goto out;
+ 
+ err_crypto_info:
+-	memset(crypto_info, 0, sizeof(*crypto_info));
++	memzero_explicit(crypto_info, sizeof(union tls_crypto_context));
+ out:
+ 	return rc;
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index b3344bbe336b..9fab8e5a4a5b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -872,7 +872,15 @@ fallback_to_reg_recv:
+ 				if (control != TLS_RECORD_TYPE_DATA)
+ 					goto recv_end;
+ 			}
++		} else {
++			/* MSG_PEEK right now cannot look beyond current skb
++			 * from strparser, meaning we cannot advance skb here
++			 * and thus unpause strparser since we'd loose original
++			 * one.
++			 */
++			break;
+ 		}
++
+ 		/* If we have a new message from strparser, continue now. */
+ 		if (copied >= target && !ctx->recv_pkt)
+ 			break;
+@@ -989,8 +997,8 @@ static int tls_read_size(struct strparser *strp, struct sk_buff *skb)
+ 		goto read_failure;
+ 	}
+ 
+-	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.version) ||
+-	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.version)) {
++	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.info.version) ||
++	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.info.version)) {
+ 		ret = -EINVAL;
+ 		goto read_failure;
+ 	}
+@@ -1064,7 +1072,6 @@ void tls_sw_free_resources_rx(struct sock *sk)
+ 
+ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ {
+-	char keyval[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
+ 	struct tls_crypto_info *crypto_info;
+ 	struct tls12_crypto_info_aes_gcm_128 *gcm_128_info;
+ 	struct tls_sw_context_tx *sw_ctx_tx = NULL;
+@@ -1100,11 +1107,11 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 	}
+ 
+ 	if (tx) {
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 		cctx = &ctx->tx;
+ 		aead = &sw_ctx_tx->aead_send;
+ 	} else {
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 		cctx = &ctx->rx;
+ 		aead = &sw_ctx_rx->aead_recv;
+ 	}
+@@ -1184,9 +1191,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 
+ 	ctx->push_pending_record = tls_sw_push_pending_record;
+ 
+-	memcpy(keyval, gcm_128_info->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+-
+-	rc = crypto_aead_setkey(*aead, keyval,
++	rc = crypto_aead_setkey(*aead, gcm_128_info->key,
+ 				TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+ 	if (rc)
+ 		goto free_aead;
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index 1a68d27e72b4..b203f7758f97 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index 730ea91d9be8..93676354f87f 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -263,6 +263,8 @@ do_registration(struct work_struct *work)
+ error:
+ 	mutex_unlock(&devices_mutex);
+ 	snd_bebob_stream_destroy_duplex(bebob);
++	kfree(bebob->maudio_special_quirk);
++	bebob->maudio_special_quirk = NULL;
+ 	snd_card_free(bebob->card);
+ 	dev_info(&bebob->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+diff --git a/sound/firewire/bebob/bebob_maudio.c b/sound/firewire/bebob/bebob_maudio.c
+index bd55620c6a47..c266997ad299 100644
+--- a/sound/firewire/bebob/bebob_maudio.c
++++ b/sound/firewire/bebob/bebob_maudio.c
+@@ -96,17 +96,13 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	struct fw_device *device = fw_parent_device(unit);
+ 	int err, rcode;
+ 	u64 date;
+-	__le32 cues[3] = {
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE1),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE2),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE3)
+-	};
++	__le32 *cues;
+ 
+ 	/* check date of software used to build */
+ 	err = snd_bebob_read_block(unit, INFO_OFFSET_SW_DATE,
+ 				   &date, sizeof(u64));
+ 	if (err < 0)
+-		goto end;
++		return err;
+ 	/*
+ 	 * firmware version 5058 or later has date later than "20070401", but
+ 	 * 'date' is not null-terminated.
+@@ -114,20 +110,28 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	if (date < 0x3230303730343031LL) {
+ 		dev_err(&unit->device,
+ 			"Use firmware version 5058 or later\n");
+-		err = -ENOSYS;
+-		goto end;
++		return -ENXIO;
+ 	}
+ 
++	cues = kmalloc_array(3, sizeof(*cues), GFP_KERNEL);
++	if (!cues)
++		return -ENOMEM;
++
++	cues[0] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE1);
++	cues[1] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE2);
++	cues[2] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE3);
++
+ 	rcode = fw_run_transaction(device->card, TCODE_WRITE_BLOCK_REQUEST,
+ 				   device->node_id, device->generation,
+ 				   device->max_speed, BEBOB_ADDR_REG_REQ,
+-				   cues, sizeof(cues));
++				   cues, 3 * sizeof(*cues));
++	kfree(cues);
+ 	if (rcode != RCODE_COMPLETE) {
+ 		dev_err(&unit->device,
+ 			"Failed to send a cue to load firmware\n");
+ 		err = -EIO;
+ 	}
+-end:
++
+ 	return err;
+ }
+ 
+@@ -290,10 +294,6 @@ snd_bebob_maudio_special_discover(struct snd_bebob *bebob, bool is1814)
+ 		bebob->midi_output_ports = 2;
+ 	}
+ end:
+-	if (err < 0) {
+-		kfree(params);
+-		bebob->maudio_special_quirk = NULL;
+-	}
+ 	mutex_unlock(&bebob->mutex);
+ 	return err;
+ }
+diff --git a/sound/firewire/digi00x/digi00x.c b/sound/firewire/digi00x/digi00x.c
+index 1f5e1d23f31a..ef689997d6a5 100644
+--- a/sound/firewire/digi00x/digi00x.c
++++ b/sound/firewire/digi00x/digi00x.c
+@@ -49,6 +49,7 @@ static void dg00x_free(struct snd_dg00x *dg00x)
+ 	fw_unit_put(dg00x->unit);
+ 
+ 	mutex_destroy(&dg00x->mutex);
++	kfree(dg00x);
+ }
+ 
+ static void dg00x_card_free(struct snd_card *card)
+diff --git a/sound/firewire/fireface/ff-protocol-ff400.c b/sound/firewire/fireface/ff-protocol-ff400.c
+index ad7a0a32557d..64c3cb0fb926 100644
+--- a/sound/firewire/fireface/ff-protocol-ff400.c
++++ b/sound/firewire/fireface/ff-protocol-ff400.c
+@@ -146,6 +146,7 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ {
+ 	__le32 *reg;
+ 	int i;
++	int err;
+ 
+ 	reg = kcalloc(18, sizeof(__le32), GFP_KERNEL);
+ 	if (reg == NULL)
+@@ -163,9 +164,11 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ 			reg[i] = cpu_to_le32(0x00000001);
+ 	}
+ 
+-	return snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
+-				  FF400_FETCH_PCM_FRAMES, reg,
+-				  sizeof(__le32) * 18, 0);
++	err = snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
++				 FF400_FETCH_PCM_FRAMES, reg,
++				 sizeof(__le32) * 18, 0);
++	kfree(reg);
++	return err;
+ }
+ 
+ static void ff400_dump_sync_status(struct snd_ff *ff,
+diff --git a/sound/firewire/fireworks/fireworks.c b/sound/firewire/fireworks/fireworks.c
+index 71a0613d3da0..f2d073365cf6 100644
+--- a/sound/firewire/fireworks/fireworks.c
++++ b/sound/firewire/fireworks/fireworks.c
+@@ -301,6 +301,8 @@ error:
+ 	snd_efw_transaction_remove_instance(efw);
+ 	snd_efw_stream_destroy_duplex(efw);
+ 	snd_card_free(efw->card);
++	kfree(efw->resp_buf);
++	efw->resp_buf = NULL;
+ 	dev_info(&efw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 1e5b2c802635..2ea8be6c8584 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -130,6 +130,7 @@ static void oxfw_free(struct snd_oxfw *oxfw)
+ 
+ 	kfree(oxfw->spec);
+ 	mutex_destroy(&oxfw->mutex);
++	kfree(oxfw);
+ }
+ 
+ /*
+@@ -207,6 +208,7 @@ static int detect_quirks(struct snd_oxfw *oxfw)
+ static void do_registration(struct work_struct *work)
+ {
+ 	struct snd_oxfw *oxfw = container_of(work, struct snd_oxfw, dwork.work);
++	int i;
+ 	int err;
+ 
+ 	if (oxfw->registered)
+@@ -269,7 +271,15 @@ error:
+ 	snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->rx_stream);
+ 	if (oxfw->has_output)
+ 		snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->tx_stream);
++	for (i = 0; i < SND_OXFW_STREAM_FORMAT_ENTRIES; ++i) {
++		kfree(oxfw->tx_stream_formats[i]);
++		oxfw->tx_stream_formats[i] = NULL;
++		kfree(oxfw->rx_stream_formats[i]);
++		oxfw->rx_stream_formats[i] = NULL;
++	}
+ 	snd_card_free(oxfw->card);
++	kfree(oxfw->spec);
++	oxfw->spec = NULL;
+ 	dev_info(&oxfw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/tascam/tascam.c b/sound/firewire/tascam/tascam.c
+index 44ad41fb7374..d3fdc463a884 100644
+--- a/sound/firewire/tascam/tascam.c
++++ b/sound/firewire/tascam/tascam.c
+@@ -93,6 +93,7 @@ static void tscm_free(struct snd_tscm *tscm)
+ 	fw_unit_put(tscm->unit);
+ 
+ 	mutex_destroy(&tscm->mutex);
++	kfree(tscm);
+ }
+ 
+ static void tscm_card_free(struct snd_card *card)
+diff --git a/sound/pci/emu10k1/emufx.c b/sound/pci/emu10k1/emufx.c
+index de2ecbe95d6c..2c54d26f30a6 100644
+--- a/sound/pci/emu10k1/emufx.c
++++ b/sound/pci/emu10k1/emufx.c
+@@ -2540,7 +2540,7 @@ static int snd_emu10k1_fx8010_ioctl(struct snd_hwdep * hw, struct file *file, un
+ 		emu->support_tlv = 1;
+ 		return put_user(SNDRV_EMU10K1_VERSION, (int __user *)argp);
+ 	case SNDRV_EMU10K1_IOCTL_INFO:
+-		info = kmalloc(sizeof(*info), GFP_KERNEL);
++		info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 		if (!info)
+ 			return -ENOMEM;
+ 		snd_emu10k1_fx8010_info(emu, info);
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index 275677de669f..407554175282 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -157,8 +157,8 @@ static const struct snd_kcontrol_new cs4265_snd_controls[] = {
+ 	SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2,
+ 				3, 1, 0),
+ 	SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum),
+-	SOC_SINGLE("MMTLR Data Switch", 0,
+-				1, 1, 0),
++	SOC_SINGLE("MMTLR Data Switch", CS4265_SPDIF_CTL2,
++				0, 1, 0),
+ 	SOC_ENUM("Mono Channel Select", spdif_mono_select_enum),
+ 	SND_SOC_BYTES("C Data Buffer", CS4265_C_DATA_BUFF, 24),
+ };
+diff --git a/sound/soc/codecs/tas6424.c b/sound/soc/codecs/tas6424.c
+index 14999b999fd3..0d6145549a98 100644
+--- a/sound/soc/codecs/tas6424.c
++++ b/sound/soc/codecs/tas6424.c
+@@ -424,8 +424,10 @@ static void tas6424_fault_check_work(struct work_struct *work)
+ 	       TAS6424_FAULT_PVDD_UV |
+ 	       TAS6424_FAULT_VBAT_UV;
+ 
+-	if (reg)
++	if (!reg) {
++		tas6424->last_fault1 = reg;
+ 		goto check_global_fault2_reg;
++	}
+ 
+ 	/*
+ 	 * Only flag errors once for a given occurrence. This is needed as
+@@ -461,8 +463,10 @@ check_global_fault2_reg:
+ 	       TAS6424_FAULT_OTSD_CH3 |
+ 	       TAS6424_FAULT_OTSD_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_fault2 = reg;
+ 		goto check_warn_reg;
++	}
+ 
+ 	if ((reg & TAS6424_FAULT_OTSD) && !(tas6424->last_fault2 & TAS6424_FAULT_OTSD))
+ 		dev_crit(dev, "experienced a global overtemp shutdown\n");
+@@ -497,8 +501,10 @@ check_warn_reg:
+ 	       TAS6424_WARN_VDD_OTW_CH3 |
+ 	       TAS6424_WARN_VDD_OTW_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_warn = reg;
+ 		goto out;
++	}
+ 
+ 	if ((reg & TAS6424_WARN_VDD_UV) && !(tas6424->last_warn & TAS6424_WARN_VDD_UV))
+ 		dev_warn(dev, "experienced a VDD under voltage condition\n");
+diff --git a/sound/soc/codecs/wm9712.c b/sound/soc/codecs/wm9712.c
+index 953d94d50586..ade34c26ad2f 100644
+--- a/sound/soc/codecs/wm9712.c
++++ b/sound/soc/codecs/wm9712.c
+@@ -719,7 +719,7 @@ static int wm9712_probe(struct platform_device *pdev)
+ 
+ static struct platform_driver wm9712_component_driver = {
+ 	.driver = {
+-		.name = "wm9712-component",
++		.name = "wm9712-codec",
+ 	},
+ 
+ 	.probe = wm9712_probe,
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index f237002180c0..ff13189a7ee4 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -953,12 +953,23 @@ static void rsnd_soc_dai_shutdown(struct snd_pcm_substream *substream,
+ 	rsnd_dai_stream_quit(io);
+ }
+ 
++static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream,
++				struct snd_soc_dai *dai)
++{
++	struct rsnd_priv *priv = rsnd_dai_to_priv(dai);
++	struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai);
++	struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream);
++
++	return rsnd_dai_call(prepare, io, priv);
++}
++
+ static const struct snd_soc_dai_ops rsnd_soc_dai_ops = {
+ 	.startup	= rsnd_soc_dai_startup,
+ 	.shutdown	= rsnd_soc_dai_shutdown,
+ 	.trigger	= rsnd_soc_dai_trigger,
+ 	.set_fmt	= rsnd_soc_dai_set_fmt,
+ 	.set_tdm_slot	= rsnd_soc_set_dai_tdm_slot,
++	.prepare	= rsnd_soc_dai_prepare,
+ };
+ 
+ void rsnd_parse_connect_common(struct rsnd_dai *rdai,
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 6d7280d2d9be..e93032498a5b 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -283,6 +283,9 @@ struct rsnd_mod_ops {
+ 	int (*nolock_stop)(struct rsnd_mod *mod,
+ 		    struct rsnd_dai_stream *io,
+ 		    struct rsnd_priv *priv);
++	int (*prepare)(struct rsnd_mod *mod,
++		       struct rsnd_dai_stream *io,
++		       struct rsnd_priv *priv);
+ };
+ 
+ struct rsnd_dai_stream;
+@@ -312,6 +315,7 @@ struct rsnd_mod {
+  * H	0: fallback
+  * H	0: hw_params
+  * H	0: pointer
++ * H	0: prepare
+  */
+ #define __rsnd_mod_shift_nolock_start	0
+ #define __rsnd_mod_shift_nolock_stop	0
+@@ -326,6 +330,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_shift_fallback	28 /* always called */
+ #define __rsnd_mod_shift_hw_params	28 /* always called */
+ #define __rsnd_mod_shift_pointer	28 /* always called */
++#define __rsnd_mod_shift_prepare	28 /* always called */
+ 
+ #define __rsnd_mod_add_probe		0
+ #define __rsnd_mod_add_remove		0
+@@ -340,6 +345,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_add_fallback		0
+ #define __rsnd_mod_add_hw_params	0
+ #define __rsnd_mod_add_pointer		0
++#define __rsnd_mod_add_prepare		0
+ 
+ #define __rsnd_mod_call_probe		0
+ #define __rsnd_mod_call_remove		0
+@@ -354,6 +360,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_call_pointer		0
+ #define __rsnd_mod_call_nolock_start	0
+ #define __rsnd_mod_call_nolock_stop	1
++#define __rsnd_mod_call_prepare		0
+ 
+ #define rsnd_mod_to_priv(mod)	((mod)->priv)
+ #define rsnd_mod_name(mod)	((mod)->ops->name)
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 6e1166ec24a0..cf4b40d376e5 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -286,7 +286,7 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod,
+ 	if (rsnd_ssi_is_multi_slave(mod, io))
+ 		return 0;
+ 
+-	if (ssi->usrcnt > 1) {
++	if (ssi->rate) {
+ 		if (ssi->rate != rate) {
+ 			dev_err(dev, "SSI parent/child should use same rate\n");
+ 			return -EINVAL;
+@@ -431,7 +431,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 			 struct rsnd_priv *priv)
+ {
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	int ret;
+ 
+ 	if (!rsnd_ssi_is_run_mods(mod, io))
+ 		return 0;
+@@ -440,10 +439,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 
+ 	rsnd_mod_power_on(mod);
+ 
+-	ret = rsnd_ssi_master_clk_start(mod, io);
+-	if (ret < 0)
+-		return ret;
+-
+ 	rsnd_ssi_config_init(mod, io);
+ 
+ 	rsnd_ssi_register_setup(mod);
+@@ -846,6 +841,13 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
+ 	return 0;
+ }
+ 
++static int rsnd_ssi_prepare(struct rsnd_mod *mod,
++			    struct rsnd_dai_stream *io,
++			    struct rsnd_priv *priv)
++{
++	return rsnd_ssi_master_clk_start(mod, io);
++}
++
+ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.name	= SSI_NAME,
+ 	.probe	= rsnd_ssi_common_probe,
+@@ -858,6 +860,7 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.pointer = rsnd_ssi_pio_pointer,
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ static int rsnd_ssi_dma_probe(struct rsnd_mod *mod,
+@@ -934,6 +937,7 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.fallback = rsnd_ssi_fallback,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     936f353b2f29bdb407be3d71ddc57c38752c9130
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 11 01:51:36 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:29 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=936f353b

net: sched: Remove TCA_OPTIONS from policy

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ++++
 1800_TCA-OPTIONS-sched-fix.patch | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/0000_README b/0000_README
index 6774045..bdc7ee9 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1800_TCA-OPTIONS-sched-fix.patch
+From:   https://git.kernel.org
+Desc:   net: sched: Remove TCA_OPTIONS from policy
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1800_TCA-OPTIONS-sched-fix.patch b/1800_TCA-OPTIONS-sched-fix.patch
new file mode 100644
index 0000000..f960fac
--- /dev/null
+++ b/1800_TCA-OPTIONS-sched-fix.patch
@@ -0,0 +1,35 @@
+From e72bde6b66299602087c8c2350d36a525e75d06e Mon Sep 17 00:00:00 2001
+From: David Ahern <dsahern@gmail.com>
+Date: Wed, 24 Oct 2018 08:32:49 -0700
+Subject: net: sched: Remove TCA_OPTIONS from policy
+
+Marco reported an error with hfsc:
+root@Calimero:~# tc qdisc add dev eth0 root handle 1:0 hfsc default 1
+Error: Attribute failed policy validation.
+
+Apparently a few implementations pass TCA_OPTIONS as a binary instead
+of nested attribute, so drop TCA_OPTIONS from the policy.
+
+Fixes: 8b4c3cdd9dd8 ("net: sched: Add policy validation for tc attributes")
+Reported-by: Marco Berizzi <pupilla@libero.it>
+Signed-off-by: David Ahern <dsahern@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ net/sched/sch_api.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 022bca98bde6..ca3b0f46de53 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1320,7 +1320,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+-- 
+cgit 1.2-0.3.lf.el7
+
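
[Editor's illustration] The patch above works because a populated `nla_policy` entry forces every `TCA_OPTIONS` payload through `NLA_NESTED` validation, while removing the entry makes the attribute pass through unchecked. The mechanism can be sketched with a toy userspace model — this is not the kernel's `nla_*` implementation; the attribute layout, names, and the `looks_nested` heuristic are invented for illustration only:

```c
#include <stddef.h>

/* Toy mirror of the kernel's nla_policy idea: each attribute type may
 * carry a validation rule.  kind == 1 stands in for NLA_NESTED (payload
 * must itself parse as an attribute stream); kind == 0 stands in for an
 * absent policy entry, which accepts any payload as-is. */
enum { MY_UNSPEC, MY_OPTIONS, MY_BINARY };

struct attr   { int type; size_t len; const unsigned char *data; };
struct policy { int kind; };

/* Invented check: a "nested" payload must begin with a 4-byte header
 * whose embedded length fits in the buffer; opaque binary blobs will
 * generally fail this, just as they failed NLA_NESTED validation. */
static int looks_nested(const struct attr *a)
{
	size_t inner_len;

	if (a->len < 4)
		return 0;
	inner_len = (size_t)(a->data[2] | (a->data[3] << 8));
	return inner_len <= a->len;
}

static int validate(const struct attr *a, const struct policy *tab, int max)
{
	if (a->type > max || tab[a->type].kind == 0)
		return 0;                    /* no policy entry: accept as-is */
	return looks_nested(a) ? 0 : -1;     /* entry present: enforce rule */
}
```

With a policy entry for `MY_OPTIONS`, a raw binary payload is rejected (the "Attribute failed policy validation" case Marco hit); dropping the entry — the fix in this patch — lets the same payload through so the qdisc can interpret it itself.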


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     29e88f502141f67ffa62d56460bd8a873e160f31
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 20 12:36:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:27 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=29e88f50

Linux patch 4.18.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-4.18.16.patch | 2439 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2443 insertions(+)

diff --git a/0000_README b/0000_README
index 5676b13..52e9ca9 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-4.18.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.15
 
+Patch:  1015_linux-4.18.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-4.18.16.patch b/1015_linux-4.18.16.patch
new file mode 100644
index 0000000..9bc7017
--- /dev/null
+++ b/1015_linux-4.18.16.patch
@@ -0,0 +1,2439 @@
+diff --git a/Makefile b/Makefile
+index 968eb96a0553..034dd990b0ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 6c1b20dd76ad..7c6c97782022 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -6,34 +6,12 @@
+ # published by the Free Software Foundation.
+ #
+ 
+-ifeq ($(CROSS_COMPILE),)
+-ifndef CONFIG_CPU_BIG_ENDIAN
+-CROSS_COMPILE := arc-linux-
+-else
+-CROSS_COMPILE := arceb-linux-
+-endif
+-endif
+-
+ KBUILD_DEFCONFIG := nsim_700_defconfig
+ 
+ cflags-y	+= -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+ cflags-$(CONFIG_ISA_ARCOMPACT)	+= -mA7
+ cflags-$(CONFIG_ISA_ARCV2)	+= -mcpu=archs
+ 
+-is_700 = $(shell $(CC) -dM -E - < /dev/null | grep -q "ARC700" && echo 1 || echo 0)
+-
+-ifdef CONFIG_ISA_ARCOMPACT
+-ifeq ($(is_700), 0)
+-    $(error Toolchain not configured for ARCompact builds)
+-endif
+-endif
+-
+-ifdef CONFIG_ISA_ARCV2
+-ifeq ($(is_700), 1)
+-    $(error Toolchain not configured for ARCv2 builds)
+-endif
+-endif
+-
+ ifdef CONFIG_ARC_CURR_IN_REG
+ # For a global register defintion, make sure it gets passed to every file
+ # We had a customer reported bug where some code built in kernel was NOT using
+@@ -87,7 +65,7 @@ ldflags-$(CONFIG_CPU_BIG_ENDIAN)	+= -EB
+ # --build-id w/o "-marclinux". Default arc-elf32-ld is OK
+ ldflags-$(upto_gcc44)			+= -marclinux
+ 
+-LIBGCC	:= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
++LIBGCC	= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
+ 
+ # Modules with short calls might break for calls into builtin-kernel
+ KBUILD_CFLAGS_MODULE	+= -mlong-calls -mno-millicode
+diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
+index ff12f47a96b6..09d347b61218 100644
+--- a/arch/powerpc/kernel/tm.S
++++ b/arch/powerpc/kernel/tm.S
+@@ -175,13 +175,27 @@ _GLOBAL(tm_reclaim)
+ 	std	r1, PACATMSCRATCH(r13)
+ 	ld	r1, PACAR1(r13)
+ 
+-	/* Store the PPR in r11 and reset to decent value */
+ 	std	r11, GPR11(r1)			/* Temporary stash */
+ 
++	/*
++	 * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is
++	 * clobbered by an exception once we turn on MSR_RI below.
++	 */
++	ld	r11, PACATMSCRATCH(r13)
++	std	r11, GPR1(r1)
++
++	/*
++	 * Store r13 away so we can free up the scratch SPR for the SLB fault
++	 * handler (needed once we start accessing the thread_struct).
++	 */
++	GET_SCRATCH0(r11)
++	std	r11, GPR13(r1)
++
+ 	/* Reset MSR RI so we can take SLB faults again */
+ 	li	r11, MSR_RI
+ 	mtmsrd	r11, 1
+ 
++	/* Store the PPR in r11 and reset to decent value */
+ 	mfspr	r11, SPRN_PPR
+ 	HMT_MEDIUM
+ 
+@@ -206,11 +220,11 @@ _GLOBAL(tm_reclaim)
+ 	SAVE_GPR(8, r7)				/* user r8 */
+ 	SAVE_GPR(9, r7)				/* user r9 */
+ 	SAVE_GPR(10, r7)			/* user r10 */
+-	ld	r3, PACATMSCRATCH(r13)		/* user r1 */
++	ld	r3, GPR1(r1)			/* user r1 */
+ 	ld	r4, GPR7(r1)			/* user r7 */
+ 	ld	r5, GPR11(r1)			/* user r11 */
+ 	ld	r6, GPR12(r1)			/* user r12 */
+-	GET_SCRATCH0(8)				/* user r13 */
++	ld	r8, GPR13(r1)			/* user r13 */
+ 	std	r3, GPR1(r7)
+ 	std	r4, GPR7(r7)
+ 	std	r5, GPR11(r7)
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index b5a71baedbc2..59d07bd5374a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1204,7 +1204,9 @@ int find_and_online_cpu_nid(int cpu)
+ 	int new_nid;
+ 
+ 	/* Use associativity from first thread for all siblings */
+-	vphn_get_associativity(cpu, associativity);
++	if (vphn_get_associativity(cpu, associativity))
++		return cpu_to_node(cpu);
++
+ 	new_nid = associativity_to_nid(associativity);
+ 	if (new_nid < 0 || !node_possible(new_nid))
+ 		new_nid = first_online_node;
+diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
+new file mode 100644
+index 000000000000..c9fecd120d18
+--- /dev/null
++++ b/arch/riscv/include/asm/asm-prototypes.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_RISCV_PROTOTYPES_H
++
++#include <linux/ftrace.h>
++#include <asm-generic/asm-prototypes.h>
++
++#endif /* _ASM_RISCV_PROTOTYPES_H */
+diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
+index eaa843a52907..a480356e0ed8 100644
+--- a/arch/x86/boot/compressed/mem_encrypt.S
++++ b/arch/x86/boot/compressed/mem_encrypt.S
+@@ -25,20 +25,6 @@ ENTRY(get_sev_encryption_bit)
+ 	push	%ebx
+ 	push	%ecx
+ 	push	%edx
+-	push	%edi
+-
+-	/*
+-	 * RIP-relative addressing is needed to access the encryption bit
+-	 * variable. Since we are running in 32-bit mode we need this call/pop
+-	 * sequence to get the proper relative addressing.
+-	 */
+-	call	1f
+-1:	popl	%edi
+-	subl	$1b, %edi
+-
+-	movl	enc_bit(%edi), %eax
+-	cmpl	$0, %eax
+-	jge	.Lsev_exit
+ 
+ 	/* Check if running under a hypervisor */
+ 	movl	$1, %eax
+@@ -69,15 +55,12 @@ ENTRY(get_sev_encryption_bit)
+ 
+ 	movl	%ebx, %eax
+ 	andl	$0x3f, %eax		/* Return the encryption bit location */
+-	movl	%eax, enc_bit(%edi)
+ 	jmp	.Lsev_exit
+ 
+ .Lno_sev:
+ 	xor	%eax, %eax
+-	movl	%eax, enc_bit(%edi)
+ 
+ .Lsev_exit:
+-	pop	%edi
+ 	pop	%edx
+ 	pop	%ecx
+ 	pop	%ebx
+@@ -113,8 +96,6 @@ ENTRY(set_sev_encryption_mask)
+ ENDPROC(set_sev_encryption_mask)
+ 
+ 	.data
+-enc_bit:
+-	.int	0xffffffff
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	.balign	8
+diff --git a/drivers/clocksource/timer-fttmr010.c b/drivers/clocksource/timer-fttmr010.c
+index c020038ebfab..cf93f6419b51 100644
+--- a/drivers/clocksource/timer-fttmr010.c
++++ b/drivers/clocksource/timer-fttmr010.c
+@@ -130,13 +130,17 @@ static int fttmr010_timer_set_next_event(unsigned long cycles,
+ 	cr &= ~fttmr010->t1_enable_val;
+ 	writel(cr, fttmr010->base + TIMER_CR);
+ 
+-	/* Setup the match register forward/backward in time */
+-	cr = readl(fttmr010->base + TIMER1_COUNT);
+-	if (fttmr010->count_down)
+-		cr -= cycles;
+-	else
+-		cr += cycles;
+-	writel(cr, fttmr010->base + TIMER1_MATCH1);
++	if (fttmr010->count_down) {
++		/*
++		 * ASPEED Timer Controller will load TIMER1_LOAD register
++		 * into TIMER1_COUNT register when the timer is re-enabled.
++		 */
++		writel(cycles, fttmr010->base + TIMER1_LOAD);
++	} else {
++		/* Setup the match register forward in time */
++		cr = readl(fttmr010->base + TIMER1_COUNT);
++		writel(cr + cycles, fttmr010->base + TIMER1_MATCH1);
++	}
+ 
+ 	/* Start */
+ 	cr = readl(fttmr010->base + TIMER_CR);
+diff --git a/drivers/clocksource/timer-ti-32k.c b/drivers/clocksource/timer-ti-32k.c
+index 880a861ab3c8..713214d085e0 100644
+--- a/drivers/clocksource/timer-ti-32k.c
++++ b/drivers/clocksource/timer-ti-32k.c
+@@ -98,6 +98,9 @@ static int __init ti_32k_timer_init(struct device_node *np)
+ 		return -ENXIO;
+ 	}
+ 
++	if (!of_machine_is_compatible("ti,am43"))
++		ti_32k_timer.cs.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
++
+ 	ti_32k_timer.counter = ti_32k_timer.base;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index 0a788d76ed5f..0ec4659795f1 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -615,6 +615,7 @@ static int malidp_bind(struct device *dev)
+ 	drm->irq_enabled = true;
+ 
+ 	ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
++	drm_crtc_vblank_reset(&malidp->crtc);
+ 	if (ret < 0) {
+ 		DRM_ERROR("failed to initialise vblank\n");
+ 		goto vblank_fail;
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index c2e55e5d97f6..1cf6290d6435 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -160,6 +160,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Ice Lake PCH */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 0e5eb0f547d3..b83348416885 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2048,33 +2048,55 @@ static int modify_qp(struct ib_uverbs_file *file,
+ 
+ 	if ((cmd->base.attr_mask & IB_QP_CUR_STATE &&
+ 	    cmd->base.cur_qp_state > IB_QPS_ERR) ||
+-	    cmd->base.qp_state > IB_QPS_ERR) {
++	    (cmd->base.attr_mask & IB_QP_STATE &&
++	    cmd->base.qp_state > IB_QPS_ERR)) {
+ 		ret = -EINVAL;
+ 		goto release_qp;
+ 	}
+ 
+-	attr->qp_state		  = cmd->base.qp_state;
+-	attr->cur_qp_state	  = cmd->base.cur_qp_state;
+-	attr->path_mtu		  = cmd->base.path_mtu;
+-	attr->path_mig_state	  = cmd->base.path_mig_state;
+-	attr->qkey		  = cmd->base.qkey;
+-	attr->rq_psn		  = cmd->base.rq_psn;
+-	attr->sq_psn		  = cmd->base.sq_psn;
+-	attr->dest_qp_num	  = cmd->base.dest_qp_num;
+-	attr->qp_access_flags	  = cmd->base.qp_access_flags;
+-	attr->pkey_index	  = cmd->base.pkey_index;
+-	attr->alt_pkey_index	  = cmd->base.alt_pkey_index;
+-	attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
+-	attr->max_rd_atomic	  = cmd->base.max_rd_atomic;
+-	attr->max_dest_rd_atomic  = cmd->base.max_dest_rd_atomic;
+-	attr->min_rnr_timer	  = cmd->base.min_rnr_timer;
+-	attr->port_num		  = cmd->base.port_num;
+-	attr->timeout		  = cmd->base.timeout;
+-	attr->retry_cnt		  = cmd->base.retry_cnt;
+-	attr->rnr_retry		  = cmd->base.rnr_retry;
+-	attr->alt_port_num	  = cmd->base.alt_port_num;
+-	attr->alt_timeout	  = cmd->base.alt_timeout;
+-	attr->rate_limit	  = cmd->rate_limit;
++	if (cmd->base.attr_mask & IB_QP_STATE)
++		attr->qp_state = cmd->base.qp_state;
++	if (cmd->base.attr_mask & IB_QP_CUR_STATE)
++		attr->cur_qp_state = cmd->base.cur_qp_state;
++	if (cmd->base.attr_mask & IB_QP_PATH_MTU)
++		attr->path_mtu = cmd->base.path_mtu;
++	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
++		attr->path_mig_state = cmd->base.path_mig_state;
++	if (cmd->base.attr_mask & IB_QP_QKEY)
++		attr->qkey = cmd->base.qkey;
++	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
++		attr->rq_psn = cmd->base.rq_psn;
++	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
++		attr->sq_psn = cmd->base.sq_psn;
++	if (cmd->base.attr_mask & IB_QP_DEST_QPN)
++		attr->dest_qp_num = cmd->base.dest_qp_num;
++	if (cmd->base.attr_mask & IB_QP_ACCESS_FLAGS)
++		attr->qp_access_flags = cmd->base.qp_access_flags;
++	if (cmd->base.attr_mask & IB_QP_PKEY_INDEX)
++		attr->pkey_index = cmd->base.pkey_index;
++	if (cmd->base.attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY)
++		attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
++	if (cmd->base.attr_mask & IB_QP_MAX_QP_RD_ATOMIC)
++		attr->max_rd_atomic = cmd->base.max_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
++		attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MIN_RNR_TIMER)
++		attr->min_rnr_timer = cmd->base.min_rnr_timer;
++	if (cmd->base.attr_mask & IB_QP_PORT)
++		attr->port_num = cmd->base.port_num;
++	if (cmd->base.attr_mask & IB_QP_TIMEOUT)
++		attr->timeout = cmd->base.timeout;
++	if (cmd->base.attr_mask & IB_QP_RETRY_CNT)
++		attr->retry_cnt = cmd->base.retry_cnt;
++	if (cmd->base.attr_mask & IB_QP_RNR_RETRY)
++		attr->rnr_retry = cmd->base.rnr_retry;
++	if (cmd->base.attr_mask & IB_QP_ALT_PATH) {
++		attr->alt_port_num = cmd->base.alt_port_num;
++		attr->alt_timeout = cmd->base.alt_timeout;
++		attr->alt_pkey_index = cmd->base.alt_pkey_index;
++	}
++	if (cmd->base.attr_mask & IB_QP_RATE_LIMIT)
++		attr->rate_limit = cmd->rate_limit;
+ 
+ 	if (cmd->base.attr_mask & IB_QP_AV)
+ 		copy_ah_attr_from_uverbs(qp->device, &attr->ah_attr,
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 20b9f31052bf..85cd1a3593d6 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -78,7 +78,7 @@ static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+ /* Mutex to protect the list of bnxt_re devices added */
+ static DEFINE_MUTEX(bnxt_re_dev_lock);
+ static struct workqueue_struct *bnxt_re_wq;
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait);
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev);
+ 
+ /* SR-IOV helper functions */
+ 
+@@ -182,7 +182,7 @@ static void bnxt_re_shutdown(void *p)
+ 	if (!rdev)
+ 		return;
+ 
+-	bnxt_re_ib_unreg(rdev, false);
++	bnxt_re_ib_unreg(rdev);
+ }
+ 
+ static void bnxt_re_stop_irq(void *handle)
+@@ -251,7 +251,7 @@ static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
+ /* Driver registration routines used to let the networking driver (bnxt_en)
+  * to know that the RoCE driver is now installed
+  */
+-static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -260,14 +260,9 @@ static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		return -EINVAL;
+ 
+ 	en_dev = rdev->en_dev;
+-	/* Acquire rtnl lock if it is not invokded from netdev event */
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,
+ 						    BNXT_ROCE_ULP);
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -281,14 +276,12 @@ static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	rtnl_lock();
+ 	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
+ 						  &bnxt_re_ulp_ops, rdev);
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+-static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_free_msix(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -298,13 +291,9 @@ static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);
+ 
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -320,7 +309,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 
+ 	num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus());
+ 
+-	rtnl_lock();
+ 	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,
+ 							 rdev->msix_entries,
+ 							 num_msix_want);
+@@ -335,7 +323,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 	}
+ 	rdev->num_msix = num_msix_got;
+ done:
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -358,24 +345,18 @@ static void bnxt_re_fill_fw_msg(struct bnxt_fw_msg *fw_msg, void *msg,
+ 	fw_msg->timeout = timeout;
+ }
+ 
+-static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+-				 bool lock_wait)
++static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_ring_free_input req = {0};
+ 	struct hwrm_ring_free_output resp;
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1);
+ 	req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
+@@ -386,8 +367,6 @@ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+ 	if (rc)
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW ring:%d :%#x", req.ring_id, rc);
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -405,7 +384,6 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1);
+ 	req.enables = 0;
+ 	req.page_tbl_addr =  cpu_to_le64(dma_arr[0]);
+@@ -426,27 +404,21 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 	if (!rc)
+ 		*fw_ring_id = le16_to_cpu(resp.ring_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+-				      u32 fw_stats_ctx_id, bool lock_wait)
++				      u32 fw_stats_ctx_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_stat_ctx_free_input req = {0};
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1);
+ 	req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);
+@@ -457,8 +429,6 @@ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW stats context %#x", rc);
+ 
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -478,7 +448,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+ 	req.update_period_ms = cpu_to_le32(1000);
+@@ -490,7 +459,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 	if (!rc)
+ 		*fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -929,19 +897,19 @@ fail:
+ 	return rc;
+ }
+ 
+-static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < rdev->num_msix - 1; i++) {
+-		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id);
+ 		bnxt_qplib_free_nq(&rdev->nq[i]);
+ 	}
+ }
+ 
+-static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_res(struct bnxt_re_dev *rdev)
+ {
+-	bnxt_re_free_nq_res(rdev, lock_wait);
++	bnxt_re_free_nq_res(rdev);
+ 
+ 	if (rdev->qplib_res.dpi_tbl.max) {
+ 		bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+@@ -1219,7 +1187,7 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
+ 	return 0;
+ }
+ 
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, rc;
+ 
+@@ -1234,28 +1202,27 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		cancel_delayed_work(&rdev->worker);
+ 
+ 	bnxt_re_cleanup_res(rdev);
+-	bnxt_re_free_res(rdev, lock_wait);
++	bnxt_re_free_res(rdev);
+ 
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) {
+ 		rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to deinitialize RCFW: %#x", rc);
+-		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id,
+-					   lock_wait);
++		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ 		bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ 		bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+-		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ 		bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {
+-		rc = bnxt_re_free_msix(rdev, lock_wait);
++		rc = bnxt_re_free_msix(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to free MSI-X vectors: %#x", rc);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) {
+-		rc = bnxt_re_unregister_netdev(rdev, lock_wait);
++		rc = bnxt_re_unregister_netdev(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to unregister with netdev: %#x", rc);
+@@ -1276,6 +1243,12 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, j, rc;
+ 
++	bool locked;
++
++	/* Acquire rtnl lock through out this function */
++	rtnl_lock();
++	locked = true;
++
+ 	/* Registered a new RoCE device instance to netdev */
+ 	rc = bnxt_re_register_netdev(rdev);
+ 	if (rc) {
+@@ -1374,12 +1347,16 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 		schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
+ 	}
+ 
++	rtnl_unlock();
++	locked = false;
++
+ 	/* Register ib dev */
+ 	rc = bnxt_re_register_ib(rdev);
+ 	if (rc) {
+ 		pr_err("Failed to register with IB: %#x\n", rc);
+ 		goto fail;
+ 	}
++	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	dev_info(rdev_to_dev(rdev), "Device registered successfully");
+ 	for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {
+ 		rc = device_create_file(&rdev->ibdev.dev,
+@@ -1395,7 +1372,6 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 			goto fail;
+ 		}
+ 	}
+-	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+ 			 &rdev->active_width);
+ 	set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags);
+@@ -1404,17 +1380,21 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 
+ 	return 0;
+ free_sctx:
+-	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true);
++	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ free_ctx:
+ 	bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ disable_rcfw:
+ 	bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+ free_ring:
+-	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true);
++	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ free_rcfw:
+ 	bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ fail:
+-	bnxt_re_ib_unreg(rdev, true);
++	if (!locked)
++		rtnl_lock();
++	bnxt_re_ib_unreg(rdev);
++	rtnl_unlock();
++
+ 	return rc;
+ }
+ 
+@@ -1567,7 +1547,7 @@ static int bnxt_re_netdev_event(struct notifier_block *notifier,
+ 		 */
+ 		if (atomic_read(&rdev->sched_count) > 0)
+ 			goto exit;
+-		bnxt_re_ib_unreg(rdev, false);
++		bnxt_re_ib_unreg(rdev);
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 		break;
+@@ -1646,7 +1626,10 @@ static void __exit bnxt_re_mod_exit(void)
+ 		 */
+ 		flush_workqueue(bnxt_re_wq);
+ 		bnxt_re_dev_stop(rdev);
+-		bnxt_re_ib_unreg(rdev, true);
++		/* Acquire the rtnl_lock as the L2 resources are freed here */
++		rtnl_lock();
++		bnxt_re_ib_unreg(rdev);
++		rtnl_unlock();
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 	}
+diff --git a/drivers/input/keyboard/atakbd.c b/drivers/input/keyboard/atakbd.c
+index f1235831283d..fdeda0b0fbd6 100644
+--- a/drivers/input/keyboard/atakbd.c
++++ b/drivers/input/keyboard/atakbd.c
+@@ -79,8 +79,7 @@ MODULE_LICENSE("GPL");
+  */
+ 
+ 
+-static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+-	[0]	 = KEY_GRAVE,
++static unsigned char atakbd_keycode[0x73] = {	/* American layout */
+ 	[1]	 = KEY_ESC,
+ 	[2]	 = KEY_1,
+ 	[3]	 = KEY_2,
+@@ -121,9 +120,9 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[38]	 = KEY_L,
+ 	[39]	 = KEY_SEMICOLON,
+ 	[40]	 = KEY_APOSTROPHE,
+-	[41]	 = KEY_BACKSLASH,	/* FIXME, '#' */
++	[41]	 = KEY_GRAVE,
+ 	[42]	 = KEY_LEFTSHIFT,
+-	[43]	 = KEY_GRAVE,		/* FIXME: '~' */
++	[43]	 = KEY_BACKSLASH,
+ 	[44]	 = KEY_Z,
+ 	[45]	 = KEY_X,
+ 	[46]	 = KEY_C,
+@@ -149,45 +148,34 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[66]	 = KEY_F8,
+ 	[67]	 = KEY_F9,
+ 	[68]	 = KEY_F10,
+-	[69]	 = KEY_ESC,
+-	[70]	 = KEY_DELETE,
+-	[71]	 = KEY_KP7,
+-	[72]	 = KEY_KP8,
+-	[73]	 = KEY_KP9,
++	[71]	 = KEY_HOME,
++	[72]	 = KEY_UP,
+ 	[74]	 = KEY_KPMINUS,
+-	[75]	 = KEY_KP4,
+-	[76]	 = KEY_KP5,
+-	[77]	 = KEY_KP6,
++	[75]	 = KEY_LEFT,
++	[77]	 = KEY_RIGHT,
+ 	[78]	 = KEY_KPPLUS,
+-	[79]	 = KEY_KP1,
+-	[80]	 = KEY_KP2,
+-	[81]	 = KEY_KP3,
+-	[82]	 = KEY_KP0,
+-	[83]	 = KEY_KPDOT,
+-	[90]	 = KEY_KPLEFTPAREN,
+-	[91]	 = KEY_KPRIGHTPAREN,
+-	[92]	 = KEY_KPASTERISK,	/* FIXME */
+-	[93]	 = KEY_KPASTERISK,
+-	[94]	 = KEY_KPPLUS,
+-	[95]	 = KEY_HELP,
++	[80]	 = KEY_DOWN,
++	[82]	 = KEY_INSERT,
++	[83]	 = KEY_DELETE,
+ 	[96]	 = KEY_102ND,
+-	[97]	 = KEY_KPASTERISK,	/* FIXME */
+-	[98]	 = KEY_KPSLASH,
++	[97]	 = KEY_UNDO,
++	[98]	 = KEY_HELP,
+ 	[99]	 = KEY_KPLEFTPAREN,
+ 	[100]	 = KEY_KPRIGHTPAREN,
+ 	[101]	 = KEY_KPSLASH,
+ 	[102]	 = KEY_KPASTERISK,
+-	[103]	 = KEY_UP,
+-	[104]	 = KEY_KPASTERISK,	/* FIXME */
+-	[105]	 = KEY_LEFT,
+-	[106]	 = KEY_RIGHT,
+-	[107]	 = KEY_KPASTERISK,	/* FIXME */
+-	[108]	 = KEY_DOWN,
+-	[109]	 = KEY_KPASTERISK,	/* FIXME */
+-	[110]	 = KEY_KPASTERISK,	/* FIXME */
+-	[111]	 = KEY_KPASTERISK,	/* FIXME */
+-	[112]	 = KEY_KPASTERISK,	/* FIXME */
+-	[113]	 = KEY_KPASTERISK	/* FIXME */
++	[103]	 = KEY_KP7,
++	[104]	 = KEY_KP8,
++	[105]	 = KEY_KP9,
++	[106]	 = KEY_KP4,
++	[107]	 = KEY_KP5,
++	[108]	 = KEY_KP6,
++	[109]	 = KEY_KP1,
++	[110]	 = KEY_KP2,
++	[111]	 = KEY_KP3,
++	[112]	 = KEY_KP0,
++	[113]	 = KEY_KPDOT,
++	[114]	 = KEY_KPENTER,
+ };
+ 
+ static struct input_dev *atakbd_dev;
+@@ -195,21 +183,15 @@ static struct input_dev *atakbd_dev;
+ static void atakbd_interrupt(unsigned char scancode, char down)
+ {
+ 
+-	if (scancode < 0x72) {		/* scancodes < 0xf2 are keys */
++	if (scancode < 0x73) {		/* scancodes < 0xf3 are keys */
+ 
+ 		// report raw events here?
+ 
+ 		scancode = atakbd_keycode[scancode];
+ 
+-		if (scancode == KEY_CAPSLOCK) {	/* CapsLock is a toggle switch key on Amiga */
+-			input_report_key(atakbd_dev, scancode, 1);
+-			input_report_key(atakbd_dev, scancode, 0);
+-			input_sync(atakbd_dev);
+-		} else {
+-			input_report_key(atakbd_dev, scancode, down);
+-			input_sync(atakbd_dev);
+-		}
+-	} else				/* scancodes >= 0xf2 are mouse data, most likely */
++		input_report_key(atakbd_dev, scancode, down);
++		input_sync(atakbd_dev);
++	} else				/* scancodes >= 0xf3 are mouse data, most likely */
+ 		printk(KERN_INFO "atakbd: unhandled scancode %x\n", scancode);
+ 
+ 	return;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index c53363443280..c2b511a16b0e 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -246,7 +246,13 @@ static u16 get_alias(struct device *dev)
+ 
+ 	/* The callers make sure that get_device_id() does not fail here */
+ 	devid = get_device_id(dev);
++
++	/* For ACPI HID devices, we simply return the devid as such */
++	if (!dev_is_pci(dev))
++		return devid;
++
+ 	ivrs_alias = amd_iommu_alias_table[devid];
++
+ 	pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
+ 
+ 	if (ivrs_alias == pci_alias)
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 2b1724e8d307..701820b39fd1 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1242,6 +1242,12 @@ err_unprepare_clocks:
+ 
+ static void rk_iommu_shutdown(struct platform_device *pdev)
+ {
++	struct rk_iommu *iommu = platform_get_drvdata(pdev);
++	int i = 0, irq;
++
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO)
++		devm_free_irq(iommu->dev, irq, iommu);
++
+ 	pm_runtime_force_suspend(&pdev->dev);
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 666d319d3d1a..1f6c1eefe389 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -402,8 +402,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			if (msg[0].addr == state->af9033_i2c_addr[1])
+ 				reg |= 0x100000;
+ 
+-			ret = af9035_wr_regs(d, reg, &msg[0].buf[3],
+-					msg[0].len - 3);
++			ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg,
++							         &msg[0].buf[3],
++							         msg[0].len - 3)
++					        : -EOPNOTSUPP;
+ 		} else {
+ 			/* I2C write */
+ 			u8 buf[MAX_XFER_SIZE];
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+index 09e38f0733bd..10b9cb2185b1 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+@@ -753,7 +753,6 @@ struct cpl_abort_req_rss {
+ };
+ 
+ struct cpl_abort_req_rss6 {
+-	WR_HDR;
+ 	union opcode_tid ot;
+ 	__u32 srqidx_status;
+ };
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 372664686309..129f4e9f38da 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -2677,12 +2677,17 @@ static int emac_init_phy(struct emac_instance *dev)
+ 		if (of_phy_is_fixed_link(np)) {
+ 			int res = emac_dt_mdio_probe(dev);
+ 
+-			if (!res) {
+-				res = of_phy_register_fixed_link(np);
+-				if (res)
+-					mdiobus_unregister(dev->mii_bus);
++			if (res)
++				return res;
++
++			res = of_phy_register_fixed_link(np);
++			dev->phy_dev = of_phy_find_device(np);
++			if (res || !dev->phy_dev) {
++				mdiobus_unregister(dev->mii_bus);
++				return res ? res : -EINVAL;
+ 			}
+-			return res;
++			emac_adjust_link(dev->ndev);
++			put_device(&dev->phy_dev->mdio.dev);
+ 		}
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
+index 1f3372c1802e..2df92dbd38e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
+@@ -240,7 +240,8 @@ static void mlx4_set_eq_affinity_hint(struct mlx4_priv *priv, int vec)
+ 	struct mlx4_dev *dev = &priv->dev;
+ 	struct mlx4_eq *eq = &priv->eq_table.eq[vec];
+ 
+-	if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask))
++	if (!cpumask_available(eq->affinity_mask) ||
++	    cpumask_empty(eq->affinity_mask))
+ 		return;
+ 
+ 	hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index e0680ce91328..09ed0ba4225a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -190,6 +190,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data)
+ 
+ static void
+ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
++		    struct qed_hwfn *p_hwfn,
+ 		    struct qed_hw_info *p_info,
+ 		    bool enable,
+ 		    u8 prio,
+@@ -206,6 +207,11 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
+ 	else
+ 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
+ 
++	/* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */
++	if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) ||
++	     test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits)))
++		p_data->arr[type].dont_add_vlan0 = true;
++
+ 	/* QM reconf data */
+ 	if (p_info->personality == personality)
+ 		p_info->offload_tc = tc;
+@@ -233,7 +239,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
+ 		personality = qed_dcbx_app_update[i].personality;
+ 		name = qed_dcbx_app_update[i].name;
+ 
+-		qed_dcbx_set_params(p_data, p_info, enable,
++		qed_dcbx_set_params(p_data, p_hwfn, p_info, enable,
+ 				    prio, tc, type, personality);
+ 	}
+ }
+@@ -956,6 +962,7 @@ static void qed_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
+ 	p_data->dcb_enable_flag = p_src->arr[type].enable;
+ 	p_data->dcb_priority = p_src->arr[type].priority;
+ 	p_data->dcb_tc = p_src->arr[type].tc;
++	p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0;
+ }
+ 
+ /* Set pf update ramrod command params */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+index 5feb90e049e0..d950d836858c 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+@@ -55,6 +55,7 @@ struct qed_dcbx_app_data {
+ 	u8 update;		/* Update indication */
+ 	u8 priority;		/* Priority */
+ 	u8 tc;			/* Traffic Class */
++	bool dont_add_vlan0;	/* Do not insert a vlan tag with id 0 */
+ };
+ 
+ #define QED_DCBX_VERSION_DISABLED       0
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index e5249b4741d0..194f4dbe57d3 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -1636,7 +1636,7 @@ static int qed_vf_start(struct qed_hwfn *p_hwfn,
+ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ {
+ 	struct qed_load_req_params load_req_params;
+-	u32 load_code, param, drv_mb_param;
++	u32 load_code, resp, param, drv_mb_param;
+ 	bool b_default_mtu = true;
+ 	struct qed_hwfn *p_hwfn;
+ 	int rc = 0, mfw_rc, i;
+@@ -1782,6 +1782,19 @@ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ 
+ 	if (IS_PF(cdev)) {
+ 		p_hwfn = QED_LEADING_HWFN(cdev);
++
++		/* Get pre-negotiated values for stag, bandwidth etc. */
++		DP_VERBOSE(p_hwfn,
++			   QED_MSG_SPQ,
++			   "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
++		drv_mb_param = 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET;
++		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
++				 DRV_MSG_CODE_GET_OEM_UPDATES,
++				 drv_mb_param, &resp, &param);
++		if (rc)
++			DP_NOTICE(p_hwfn,
++				  "Failed to send GET_OEM_UPDATES attention request\n");
++
+ 		drv_mb_param = STORM_FW_VERSION;
+ 		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+ 				 DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index 463ffa83685f..ec5de7cf1af4 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -12415,6 +12415,7 @@ struct public_drv_mb {
+ #define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
+ #define DRV_MSG_CODE_OV_UPDATE_WOL              0x38000000
+ #define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE     0x39000000
++#define DRV_MSG_CODE_GET_OEM_UPDATES            0x41000000
+ 
+ #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
+ #define DRV_MSG_CODE_NIG_DRAIN			0x30000000
+@@ -12540,6 +12541,9 @@ struct public_drv_mb {
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEB	0x1
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEPA	0x2
+ 
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK	0x1
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET	0
++
+ #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
+ #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
+ #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
+diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
+index b81f4faf7b10..1c40989479bd 100644
+--- a/drivers/net/ethernet/renesas/ravb.h
++++ b/drivers/net/ethernet/renesas/ravb.h
+@@ -431,6 +431,7 @@ enum EIS_BIT {
+ 	EIS_CULF1	= 0x00000080,
+ 	EIS_TFFF	= 0x00000100,
+ 	EIS_QFS		= 0x00010000,
++	EIS_RESERVED	= (GENMASK(31, 17) | GENMASK(15, 11)),
+ };
+ 
+ /* RIC0 */
+@@ -475,6 +476,7 @@ enum RIS0_BIT {
+ 	RIS0_FRF15	= 0x00008000,
+ 	RIS0_FRF16	= 0x00010000,
+ 	RIS0_FRF17	= 0x00020000,
++	RIS0_RESERVED	= GENMASK(31, 18),
+ };
+ 
+ /* RIC1 */
+@@ -531,6 +533,7 @@ enum RIS2_BIT {
+ 	RIS2_QFF16	= 0x00010000,
+ 	RIS2_QFF17	= 0x00020000,
+ 	RIS2_RFFF	= 0x80000000,
++	RIS2_RESERVED	= GENMASK(30, 18),
+ };
+ 
+ /* TIC */
+@@ -547,6 +550,7 @@ enum TIS_BIT {
+ 	TIS_FTF1	= 0x00000002,	/* Undocumented? */
+ 	TIS_TFUF	= 0x00000100,
+ 	TIS_TFWF	= 0x00000200,
++	TIS_RESERVED	= (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4))
+ };
+ 
+ /* ISS */
+@@ -620,6 +624,7 @@ enum GIC_BIT {
+ enum GIS_BIT {
+ 	GIS_PTCF	= 0x00000001,	/* Undocumented? */
+ 	GIS_PTMF	= 0x00000004,
++	GIS_RESERVED	= GENMASK(15, 10),
+ };
+ 
+ /* GIE (R-Car Gen3 only) */
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 0d811c02ff34..db4e306ca996 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -742,10 +742,11 @@ static void ravb_error_interrupt(struct net_device *ndev)
+ 	u32 eis, ris2;
+ 
+ 	eis = ravb_read(ndev, EIS);
+-	ravb_write(ndev, ~EIS_QFS, EIS);
++	ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);
+ 	if (eis & EIS_QFS) {
+ 		ris2 = ravb_read(ndev, RIS2);
+-		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF), RIS2);
++		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),
++			   RIS2);
+ 
+ 		/* Receive Descriptor Empty int */
+ 		if (ris2 & RIS2_QFF0)
+@@ -798,7 +799,7 @@ static bool ravb_timestamp_interrupt(struct net_device *ndev)
+ 	u32 tis = ravb_read(ndev, TIS);
+ 
+ 	if (tis & TIS_TFUF) {
+-		ravb_write(ndev, ~TIS_TFUF, TIS);
++		ravb_write(ndev, ~(TIS_TFUF | TIS_RESERVED), TIS);
+ 		ravb_get_tx_tstamp(ndev);
+ 		return true;
+ 	}
+@@ -933,7 +934,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		/* Processing RX Descriptor Ring */
+ 		if (ris0 & mask) {
+ 			/* Clear RX interrupt */
+-			ravb_write(ndev, ~mask, RIS0);
++			ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+ 			if (ravb_rx(ndev, &quota, q))
+ 				goto out;
+ 		}
+@@ -941,7 +942,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		if (tis & mask) {
+ 			spin_lock_irqsave(&priv->lock, flags);
+ 			/* Clear TX interrupt */
+-			ravb_write(ndev, ~mask, TIS);
++			ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+ 			ravb_tx_free(ndev, q, true);
+ 			netif_wake_subqueue(ndev, q);
+ 			mmiowb();
+diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c
+index eede70ec37f8..9e3222fd69f9 100644
+--- a/drivers/net/ethernet/renesas/ravb_ptp.c
++++ b/drivers/net/ethernet/renesas/ravb_ptp.c
+@@ -319,7 +319,7 @@ void ravb_ptp_interrupt(struct net_device *ndev)
+ 		}
+ 	}
+ 
+-	ravb_write(ndev, ~gis, GIS);
++	ravb_write(ndev, ~(gis | GIS_RESERVED), GIS);
+ }
+ 
+ void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev)
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 778c4f76a884..2153956a0b20 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -135,7 +135,7 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -178,7 +178,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -236,7 +236,7 @@ static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+@@ -282,7 +282,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
+index bee4e2535a61..b99d1d72dd12 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.h
++++ b/drivers/pci/controller/dwc/pcie-designware.h
+@@ -26,8 +26,7 @@
+ 
+ /* Parameters for the waiting for iATU enabled routine */
+ #define LINK_WAIT_MAX_IATU_RETRIES	5
+-#define LINK_WAIT_IATU_MIN		9000
+-#define LINK_WAIT_IATU_MAX		10000
++#define LINK_WAIT_IATU			9
+ 
+ /* Synopsys-specific PCIe configuration registers */
+ #define PCIE_PORT_LINK_CONTROL		0x710
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index b91db89eb924..d3ba867d01f0 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -348,21 +348,12 @@ static void amd_gpio_irq_enable(struct irq_data *d)
+ 	unsigned long flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+-	u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF);
+ 
+ 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 	pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
+ 	pin_reg |= BIT(INTERRUPT_ENABLE_OFF);
+ 	pin_reg |= BIT(INTERRUPT_MASK_OFF);
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+-	/*
+-	 * When debounce logic is enabled it takes ~900 us before interrupts
+-	 * can be enabled.  During this "debounce warm up" period the
+-	 * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it
+-	 * reads back as 1, signaling that interrupts are now enabled.
+-	 */
+-	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
+-		continue;
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+ 
+@@ -426,7 +417,7 @@ static void amd_gpio_irq_eoi(struct irq_data *d)
+ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ 	int ret = 0;
+-	u32 pin_reg;
++	u32 pin_reg, pin_reg_irq_en, mask;
+ 	unsigned long flags, irq_flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+@@ -495,6 +486,28 @@ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	}
+ 
+ 	pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF;
++	/*
++	 * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the
++	 * debounce registers of any GPIO will block wake/interrupt status
++	 * generation for *all* GPIOs for a length of time that depends on
++	 * WAKE_INT_MASTER_REG.MaskStsLength[11:0].  During this period the
++	 * INTERRUPT_ENABLE bit will read as 0.
++	 *
++	 * We temporarily enable irq for the GPIO whose configuration is
++	 * changing, and then wait for it to read back as 1 to know when
++	 * debounce has settled and then disable the irq again.
++	 * We do this polling with the spinlock held to ensure other GPIO
++	 * access routines do not read an incorrect value for the irq enable
++	 * bit of other GPIOs.  We keep the GPIO masked while polling to avoid
++	 * spurious irqs, and disable the irq again after polling.
++	 */
++	mask = BIT(INTERRUPT_ENABLE_OFF);
++	pin_reg_irq_en = pin_reg;
++	pin_reg_irq_en |= mask;
++	pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF);
++	writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4);
++	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
++		continue;
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+index c3a76af9f5fa..ada1ebebd325 100644
+--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+@@ -3475,11 +3475,10 @@ static int ibmvscsis_probe(struct vio_dev *vdev,
+ 		vscsi->dds.window[LOCAL].liobn,
+ 		vscsi->dds.window[REMOTE].liobn);
+ 
+-	strcpy(vscsi->eye, "VSCSI ");
+-	strncat(vscsi->eye, vdev->name, MAX_EYE);
++	snprintf(vscsi->eye, sizeof(vscsi->eye), "VSCSI %s", vdev->name);
+ 
+ 	vscsi->dds.unit_id = vdev->unit_address;
+-	strncpy(vscsi->dds.partition_name, partition_name,
++	strscpy(vscsi->dds.partition_name, partition_name,
+ 		sizeof(vscsi->dds.partition_name));
+ 	vscsi->dds.partition_num = partition_number;
+ 
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 02d65dce74e5..2e8a91341254 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -3310,6 +3310,65 @@ static void ipr_release_dump(struct kref *kref)
+ 	LEAVE;
+ }
+ 
++static void ipr_add_remove_thread(struct work_struct *work)
++{
++	unsigned long lock_flags;
++	struct ipr_resource_entry *res;
++	struct scsi_device *sdev;
++	struct ipr_ioa_cfg *ioa_cfg =
++		container_of(work, struct ipr_ioa_cfg, scsi_add_work_q);
++	u8 bus, target, lun;
++	int did_work;
++
++	ENTER;
++	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++
++restart:
++	do {
++		did_work = 0;
++		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			return;
++		}
++
++		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++			if (res->del_from_ml && res->sdev) {
++				did_work = 1;
++				sdev = res->sdev;
++				if (!scsi_device_get(sdev)) {
++					if (!res->add_to_ml)
++						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
++					else
++						res->del_from_ml = 0;
++					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++					scsi_remove_device(sdev);
++					scsi_device_put(sdev);
++					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++				}
++				break;
++			}
++		}
++	} while (did_work);
++
++	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++		if (res->add_to_ml) {
++			bus = res->bus;
++			target = res->target;
++			lun = res->lun;
++			res->add_to_ml = 0;
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			scsi_add_device(ioa_cfg->host, bus, target, lun);
++			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++			goto restart;
++		}
++	}
++
++	ioa_cfg->scan_done = 1;
++	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
++	LEAVE;
++}
++
+ /**
+  * ipr_worker_thread - Worker thread
+  * @work:		ioa config struct
+@@ -3324,13 +3383,9 @@ static void ipr_release_dump(struct kref *kref)
+ static void ipr_worker_thread(struct work_struct *work)
+ {
+ 	unsigned long lock_flags;
+-	struct ipr_resource_entry *res;
+-	struct scsi_device *sdev;
+ 	struct ipr_dump *dump;
+ 	struct ipr_ioa_cfg *ioa_cfg =
+ 		container_of(work, struct ipr_ioa_cfg, work_q);
+-	u8 bus, target, lun;
+-	int did_work;
+ 
+ 	ENTER;
+ 	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+@@ -3368,49 +3423,9 @@ static void ipr_worker_thread(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-restart:
+-	do {
+-		did_work = 0;
+-		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			return;
+-		}
++	schedule_work(&ioa_cfg->scsi_add_work_q);
+ 
+-		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-			if (res->del_from_ml && res->sdev) {
+-				did_work = 1;
+-				sdev = res->sdev;
+-				if (!scsi_device_get(sdev)) {
+-					if (!res->add_to_ml)
+-						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+-					else
+-						res->del_from_ml = 0;
+-					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-					scsi_remove_device(sdev);
+-					scsi_device_put(sdev);
+-					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-				}
+-				break;
+-			}
+-		}
+-	} while (did_work);
+-
+-	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-		if (res->add_to_ml) {
+-			bus = res->bus;
+-			target = res->target;
+-			lun = res->lun;
+-			res->add_to_ml = 0;
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			scsi_add_device(ioa_cfg->host, bus, target, lun);
+-			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-			goto restart;
+-		}
+-	}
+-
+-	ioa_cfg->scan_done = 1;
+ 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
+ 	LEAVE;
+ }
+ 
+@@ -9908,6 +9923,7 @@ static void ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
+ 	INIT_LIST_HEAD(&ioa_cfg->free_res_q);
+ 	INIT_LIST_HEAD(&ioa_cfg->used_res_q);
+ 	INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread);
++	INIT_WORK(&ioa_cfg->scsi_add_work_q, ipr_add_remove_thread);
+ 	init_waitqueue_head(&ioa_cfg->reset_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->msi_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->eeh_wait_q);
+diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
+index 93570734cbfb..a98cfd24035a 100644
+--- a/drivers/scsi/ipr.h
++++ b/drivers/scsi/ipr.h
+@@ -1568,6 +1568,7 @@ struct ipr_ioa_cfg {
+ 	u8 saved_mode_page_len;
+ 
+ 	struct work_struct work_q;
++	struct work_struct scsi_add_work_q;
+ 	struct workqueue_struct *reset_work_q;
+ 
+ 	wait_queue_head_t reset_wait_q;
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 729d343861f4..de64cbb0e3d5 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -320,12 +320,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 			localport->port_id, statep);
+ 
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
++		nrport = NULL;
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		/* local short-hand pointer. */
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+@@ -3304,6 +3304,7 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 	struct lpfc_nodelist  *ndlp;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
+ 	struct lpfc_nvme_rport *rport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ #endif
+ 
+ 	shost = lpfc_shost_from_vport(vport);
+@@ -3314,8 +3315,12 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 		if (ndlp->rport)
+ 			ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+ 		if (rport)
++			remoteport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
++		if (remoteport)
+ 			nvme_fc_set_remoteport_devloss(rport->remoteport,
+ 						       vport->cfg_devloss_tmo);
+ #endif
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 9df0c051349f..aec5b10a8c85 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -551,7 +551,7 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	unsigned char *statep;
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvmet_tgtport *tgtp;
+-	struct nvme_fc_remote_port *nrport;
++	struct nvme_fc_remote_port *nrport = NULL;
+ 	struct lpfc_nvme_rport *rport;
+ 
+ 	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+@@ -696,11 +696,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	len += snprintf(buf + len, size - len, "\tRport List:\n");
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		/* local short-hand pointer. */
++		spin_lock(&phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index cab1fb087e6a..0960dcaf1684 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2718,7 +2718,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+ 	rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	oldrport = lpfc_ndlp_get_nrport(ndlp);
++	spin_unlock_irq(&vport->phba->hbalock);
+ 	if (!oldrport)
+ 		lpfc_nlp_get(ndlp);
+ 
+@@ -2833,7 +2835,7 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvme_lport *lport;
+ 	struct lpfc_nvme_rport *rport;
+-	struct nvme_fc_remote_port *remoteport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ 
+ 	localport = vport->localport;
+ 
+@@ -2847,11 +2849,14 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	if (!lport)
+ 		goto input_err;
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	rport = lpfc_ndlp_get_nrport(ndlp);
+-	if (!rport)
++	if (rport)
++		remoteport = rport->remoteport;
++	spin_unlock_irq(&vport->phba->hbalock);
++	if (!remoteport)
+ 		goto input_err;
+ 
+-	remoteport = rport->remoteport;
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6033 Unreg nvme remoteport %p, portname x%llx, "
+ 			 "port_id x%06x, portstate x%x port type x%x\n",
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 9421d9877730..0949d3db56e7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1277,7 +1277,8 @@ static int sd_init_command(struct scsi_cmnd *cmd)
+ 	case REQ_OP_ZONE_RESET:
+ 		return sd_zbc_setup_reset_cmnd(cmd);
+ 	default:
+-		BUG();
++		WARN_ON_ONCE(1);
++		return BLKPREP_KILL;
+ 	}
+ }
+ 
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 4b5e250e8615..e5c7e1ef6318 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -899,9 +899,10 @@ static void sdw_release_master_stream(struct sdw_stream_runtime *stream)
+ 	struct sdw_master_runtime *m_rt = stream->m_rt;
+ 	struct sdw_slave_runtime *s_rt, *_s_rt;
+ 
+-	list_for_each_entry_safe(s_rt, _s_rt,
+-			&m_rt->slave_rt_list, m_rt_node)
+-		sdw_stream_remove_slave(s_rt->slave, stream);
++	list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) {
++		sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream);
++		sdw_release_slave_stream(s_rt->slave, stream);
++	}
+ 
+ 	list_del(&m_rt->bus_node);
+ }
+@@ -1112,7 +1113,7 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 				"Master runtime config failed for stream:%s",
+ 				stream->name);
+ 		ret = -ENOMEM;
+-		goto error;
++		goto unlock;
+ 	}
+ 
+ 	ret = sdw_config_stream(bus->dev, stream, stream_config, false);
+@@ -1123,11 +1124,11 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 	if (ret)
+ 		goto stream_error;
+ 
+-	stream->state = SDW_STREAM_CONFIGURED;
++	goto unlock;
+ 
+ stream_error:
+ 	sdw_release_master_stream(stream);
+-error:
++unlock:
+ 	mutex_unlock(&bus->bus_lock);
+ 	return ret;
+ }
+@@ -1141,6 +1142,10 @@ EXPORT_SYMBOL(sdw_stream_add_master);
+  * @stream: SoundWire stream
+  * @port_config: Port configuration for audio stream
+  * @num_ports: Number of ports
++ *
++ * It is expected that a Slave is added before adding a Master
++ * to the Stream.
++ *
+  */
+ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 		struct sdw_stream_config *stream_config,
+@@ -1186,6 +1191,12 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 	if (ret)
+ 		goto stream_error;
+ 
++	/*
++	 * Change stream state to CONFIGURED on first Slave add.
++	 * Bus is not aware of number of Slave(s) in a stream at this
++	 * point so cannot depend on all Slave(s) to be added in order to
++	 * change stream state to CONFIGURED.
++	 */
+ 	stream->state = SDW_STREAM_CONFIGURED;
+ 	goto error;
+ 
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 6ae92d4dca19..3b518ead504e 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -287,8 +287,8 @@ static int spi_gpio_request(struct device *dev,
+ 		*mflags |= SPI_MASTER_NO_RX;
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+-	if (IS_ERR(spi_gpio->mosi))
+-		return PTR_ERR(spi_gpio->mosi);
++	if (IS_ERR(spi_gpio->sck))
++		return PTR_ERR(spi_gpio->sck);
+ 
+ 	for (i = 0; i < num_chipselects; i++) {
+ 		spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs",
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1949e0939d40..bd2f4c68506a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file_inode(file)->i_sb);
++	sb_start_write(file->f_path.mnt->mnt_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file_inode(file)->i_sb);
++		sb_end_write(file->f_path.mnt->mnt_sb);
+ 	return ret;
+ }
+ 
+@@ -540,8 +540,7 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	__mnt_drop_write_file(file);
+-	sb_end_write(file_inode(file)->i_sb);
++	mnt_drop_write(file->f_path.mnt);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index a8a126259bc4..0bec79ae4c2d 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned char *vec);
+ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 			 unsigned long new_addr, unsigned long old_end,
+-			 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush);
++			 pmd_t *old_pmd, pmd_t *new_pmd);
+ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned long addr, pgprot_t newprot,
+ 			int prot_numa);
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index f833a60699ad..e60078ffb302 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -132,6 +132,7 @@ struct smap_psock {
+ 	struct work_struct gc_work;
+ 
+ 	struct proto *sk_proto;
++	void (*save_unhash)(struct sock *sk);
+ 	void (*save_close)(struct sock *sk, long timeout);
+ 	void (*save_data_ready)(struct sock *sk);
+ 	void (*save_write_space)(struct sock *sk);
+@@ -143,6 +144,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
+ 			    int offset, size_t size, int flags);
++static void bpf_tcp_unhash(struct sock *sk);
+ static void bpf_tcp_close(struct sock *sk, long timeout);
+ 
+ static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
+@@ -184,6 +186,7 @@ static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
+ 			 struct proto *base)
+ {
+ 	prot[SOCKMAP_BASE]			= *base;
++	prot[SOCKMAP_BASE].unhash		= bpf_tcp_unhash;
+ 	prot[SOCKMAP_BASE].close		= bpf_tcp_close;
+ 	prot[SOCKMAP_BASE].recvmsg		= bpf_tcp_recvmsg;
+ 	prot[SOCKMAP_BASE].stream_memory_read	= bpf_tcp_stream_read;
+@@ -217,6 +220,7 @@ static int bpf_tcp_init(struct sock *sk)
+ 		return -EBUSY;
+ 	}
+ 
++	psock->save_unhash = sk->sk_prot->unhash;
+ 	psock->save_close = sk->sk_prot->close;
+ 	psock->sk_proto = sk->sk_prot;
+ 
+@@ -305,30 +309,12 @@ static struct smap_psock_map_entry *psock_map_pop(struct sock *sk,
+ 	return e;
+ }
+ 
+-static void bpf_tcp_close(struct sock *sk, long timeout)
++static void bpf_tcp_remove(struct sock *sk, struct smap_psock *psock)
+ {
+-	void (*close_fun)(struct sock *sk, long timeout);
+ 	struct smap_psock_map_entry *e;
+ 	struct sk_msg_buff *md, *mtmp;
+-	struct smap_psock *psock;
+ 	struct sock *osk;
+ 
+-	lock_sock(sk);
+-	rcu_read_lock();
+-	psock = smap_psock_sk(sk);
+-	if (unlikely(!psock)) {
+-		rcu_read_unlock();
+-		release_sock(sk);
+-		return sk->sk_prot->close(sk, timeout);
+-	}
+-
+-	/* The psock may be destroyed anytime after exiting the RCU critial
+-	 * section so by the time we use close_fun the psock may no longer
+-	 * be valid. However, bpf_tcp_close is called with the sock lock
+-	 * held so the close hook and sk are still valid.
+-	 */
+-	close_fun = psock->save_close;
+-
+ 	if (psock->cork) {
+ 		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+@@ -379,6 +365,42 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
++}
++
++static void bpf_tcp_unhash(struct sock *sk)
++{
++	void (*unhash_fun)(struct sock *sk);
++	struct smap_psock *psock;
++
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		if (sk->sk_prot->unhash)
++			sk->sk_prot->unhash(sk);
++		return;
++	}
++	unhash_fun = psock->save_unhash;
++	bpf_tcp_remove(sk, psock);
++	rcu_read_unlock();
++	unhash_fun(sk);
++}
++
++static void bpf_tcp_close(struct sock *sk, long timeout)
++{
++	void (*close_fun)(struct sock *sk, long timeout);
++	struct smap_psock *psock;
++
++	lock_sock(sk);
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		release_sock(sk);
++		return sk->sk_prot->close(sk, timeout);
++	}
++	close_fun = psock->save_close;
++	bpf_tcp_remove(sk, psock);
+ 	rcu_read_unlock();
+ 	release_sock(sk);
+ 	close_fun(sk, timeout);
+@@ -2100,8 +2122,12 @@ static int sock_map_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
+ 	if (skops.sk->sk_type != SOCK_STREAM ||
+-	    skops.sk->sk_protocol != IPPROTO_TCP) {
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
+ 		fput(socket->file);
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -2456,6 +2482,16 @@ static int sock_hash_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
++	if (skops.sk->sk_type != SOCK_STREAM ||
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
++		fput(socket->file);
++		return -EOPNOTSUPP;
++	}
++
+ 	lock_sock(skops.sk);
+ 	preempt_disable();
+ 	rcu_read_lock();
+@@ -2544,10 +2580,22 @@ const struct bpf_map_ops sock_hash_ops = {
+ 	.map_release_uref = sock_map_release,
+ };
+ 
++static bool bpf_is_valid_sock_op(struct bpf_sock_ops_kern *ops)
++{
++	return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
++	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB;
++}
+ BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state. This checks that the sock ops triggering the update is
++	 * one indicating we are (or will be soon) in an ESTABLISHED state.
++	 */
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_map_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+@@ -2566,6 +2614,9 @@ BPF_CALL_4(bpf_sock_hash_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_hash_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index f7274e0c8bdc..3238bb2d0c93 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1778,7 +1778,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
+ 
+ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		  unsigned long new_addr, unsigned long old_end,
+-		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
++		  pmd_t *old_pmd, pmd_t *new_pmd)
+ {
+ 	spinlock_t *old_ptl, *new_ptl;
+ 	pmd_t pmd;
+@@ -1809,7 +1809,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		if (new_ptl != old_ptl)
+ 			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+ 		pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
+-		if (pmd_present(pmd) && pmd_dirty(pmd))
++		if (pmd_present(pmd))
+ 			force_flush = true;
+ 		VM_BUG_ON(!pmd_none(*new_pmd));
+ 
+@@ -1820,12 +1820,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		}
+ 		pmd = move_soft_dirty_pmd(pmd);
+ 		set_pmd_at(mm, new_addr, new_pmd, pmd);
+-		if (new_ptl != old_ptl)
+-			spin_unlock(new_ptl);
+ 		if (force_flush)
+ 			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+-		else
+-			*need_flush = true;
++		if (new_ptl != old_ptl)
++			spin_unlock(new_ptl);
+ 		spin_unlock(old_ptl);
+ 		return true;
+ 	}
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 5c2e18505f75..a9617e72e6b7 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
+ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 		unsigned long old_addr, unsigned long old_end,
+ 		struct vm_area_struct *new_vma, pmd_t *new_pmd,
+-		unsigned long new_addr, bool need_rmap_locks, bool *need_flush)
++		unsigned long new_addr, bool need_rmap_locks)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	pte_t *old_pte, *new_pte, pte;
+@@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 
+ 		pte = ptep_get_and_clear(mm, old_addr, old_pte);
+ 		/*
+-		 * If we are remapping a dirty PTE, make sure
++		 * If we are remapping a valid PTE, make sure
+ 		 * to flush TLB before we drop the PTL for the
+-		 * old PTE or we may race with page_mkclean().
++		 * PTE.
+ 		 *
+-		 * This check has to be done after we removed the
+-		 * old PTE from page tables or another thread may
+-		 * dirty it after the check and before the removal.
++		 * NOTE! Both old and new PTL matter: the old one
++		 * for racing with page_mkclean(), the new one to
++		 * make sure the physical page stays valid until
++		 * the TLB entry for the old mapping has been
++		 * flushed.
+ 		 */
+-		if (pte_present(pte) && pte_dirty(pte))
++		if (pte_present(pte))
+ 			force_flush = true;
+ 		pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
+ 		pte = move_soft_dirty_pte(pte);
+@@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 	}
+ 
+ 	arch_leave_lazy_mmu_mode();
++	if (force_flush)
++		flush_tlb_range(vma, old_end - len, old_end);
+ 	if (new_ptl != old_ptl)
+ 		spin_unlock(new_ptl);
+ 	pte_unmap(new_pte - 1);
+-	if (force_flush)
+-		flush_tlb_range(vma, old_end - len, old_end);
+-	else
+-		*need_flush = true;
+ 	pte_unmap_unlock(old_pte - 1, old_ptl);
+ 	if (need_rmap_locks)
+ 		drop_rmap_locks(vma);
+@@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ {
+ 	unsigned long extent, next, old_end;
+ 	pmd_t *old_pmd, *new_pmd;
+-	bool need_flush = false;
+ 	unsigned long mmun_start;	/* For mmu_notifiers */
+ 	unsigned long mmun_end;		/* For mmu_notifiers */
+ 
+@@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 				if (need_rmap_locks)
+ 					take_rmap_locks(vma);
+ 				moved = move_huge_pmd(vma, old_addr, new_addr,
+-						    old_end, old_pmd, new_pmd,
+-						    &need_flush);
++						    old_end, old_pmd, new_pmd);
+ 				if (need_rmap_locks)
+ 					drop_rmap_locks(vma);
+ 				if (moved)
+@@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 		if (extent > next - new_addr)
+ 			extent = next - new_addr;
+ 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
+-			  new_pmd, new_addr, need_rmap_locks, &need_flush);
++			  new_pmd, new_addr, need_rmap_locks);
+ 	}
+-	if (need_flush)
+-		flush_tlb_range(vma, old_end-len, old_addr);
+ 
+ 	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+ 
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 71c20c1d4002..9f481cfdf77d 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -241,7 +241,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ 		 * the packet to be exactly of that size to make the link
+ 		 * throughput estimation effective.
+ 		 */
+-		skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
++		skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len);
+ 
+ 		batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 			   "Sending unicast (probe) ELP packet on interface %s to %pM\n",
+@@ -268,6 +268,7 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 	struct batadv_priv *bat_priv;
+ 	struct sk_buff *skb;
+ 	u32 elp_interval;
++	bool ret;
+ 
+ 	bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
+ 	hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
+@@ -329,8 +330,11 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 		 * may sleep and that is not allowed in an rcu protected
+ 		 * context. Therefore schedule a task for that.
+ 		 */
+-		queue_work(batadv_event_workqueue,
+-			   &hardif_neigh->bat_v.metric_work);
++		ret = queue_work(batadv_event_workqueue,
++				 &hardif_neigh->bat_v.metric_work);
++
++		if (!ret)
++			batadv_hardif_neigh_put(hardif_neigh);
+ 	}
+ 	rcu_read_unlock();
+ 
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index a2de5a44bd41..58c093caf49e 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -1772,6 +1772,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ {
+ 	struct batadv_bla_backbone_gw *backbone_gw;
+ 	struct ethhdr *ethhdr;
++	bool ret;
+ 
+ 	ethhdr = eth_hdr(skb);
+ 
+@@ -1795,8 +1796,13 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 	if (unlikely(!backbone_gw))
+ 		return true;
+ 
+-	queue_work(batadv_event_workqueue, &backbone_gw->report_work);
+-	/* backbone_gw is unreferenced in the report work function function */
++	ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work);
++
++	/* backbone_gw is unreferenced in the report work function function
++	 * if queue_work() call was successful
++	 */
++	if (!ret)
++		batadv_backbone_gw_put(backbone_gw);
+ 
+ 	return true;
+ }
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index 8b198ee798c9..140c61a3f1ec 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -32,6 +32,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/rculist.h>
+@@ -348,6 +349,9 @@ out:
+  * @bat_priv: the bat priv with all the soft interface information
+  * @orig_node: originator announcing gateway capabilities
+  * @gateway: announced bandwidth information
++ *
++ * Has to be called with the appropriate locks being acquired
++ * (gw.list_lock).
+  */
+ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 			       struct batadv_orig_node *orig_node,
+@@ -355,6 +359,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node;
+ 
++	lockdep_assert_held(&bat_priv->gw.list_lock);
++
+ 	if (gateway->bandwidth_down == 0)
+ 		return;
+ 
+@@ -369,10 +375,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 	gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);
+ 	gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);
+ 
+-	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	kref_get(&gw_node->refcount);
+ 	hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list);
+-	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 		   "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",
+@@ -428,11 +432,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node, *curr_gw = NULL;
+ 
++	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	gw_node = batadv_gw_node_get(bat_priv, orig_node);
+ 	if (!gw_node) {
+ 		batadv_gw_node_add(bat_priv, orig_node, gateway);
++		spin_unlock_bh(&bat_priv->gw.list_lock);
+ 		goto out;
+ 	}
++	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) &&
+ 	    gw_node->bandwidth_up == ntohl(gateway->bandwidth_up))
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index c3578444f3cb..34caf129a9bf 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -854,16 +854,27 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	spinlock_t *lock; /* Used to lock list selected by "int in_coding" */
+ 	struct list_head *list;
+ 
++	/* Select ingoing or outgoing coding node */
++	if (in_coding) {
++		lock = &orig_neigh_node->in_coding_list_lock;
++		list = &orig_neigh_node->in_coding_list;
++	} else {
++		lock = &orig_neigh_node->out_coding_list_lock;
++		list = &orig_neigh_node->out_coding_list;
++	}
++
++	spin_lock_bh(lock);
++
+ 	/* Check if nc_node is already added */
+ 	nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding);
+ 
+ 	/* Node found */
+ 	if (nc_node)
+-		return nc_node;
++		goto unlock;
+ 
+ 	nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC);
+ 	if (!nc_node)
+-		return NULL;
++		goto unlock;
+ 
+ 	/* Initialize nc_node */
+ 	INIT_LIST_HEAD(&nc_node->list);
+@@ -872,22 +883,14 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	kref_get(&orig_neigh_node->refcount);
+ 	nc_node->orig_node = orig_neigh_node;
+ 
+-	/* Select ingoing or outgoing coding node */
+-	if (in_coding) {
+-		lock = &orig_neigh_node->in_coding_list_lock;
+-		list = &orig_neigh_node->in_coding_list;
+-	} else {
+-		lock = &orig_neigh_node->out_coding_list_lock;
+-		list = &orig_neigh_node->out_coding_list;
+-	}
+-
+ 	batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n",
+ 		   nc_node->addr, nc_node->orig_node->orig);
+ 
+ 	/* Add nc_node to orig_node */
+-	spin_lock_bh(lock);
+ 	kref_get(&nc_node->refcount);
+ 	list_add_tail_rcu(&nc_node->list, list);
++
++unlock:
+ 	spin_unlock_bh(lock);
+ 
+ 	return nc_node;
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 1485263a348b..626ddca332db 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -574,15 +574,20 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 	struct batadv_softif_vlan *vlan;
+ 	int err;
+ 
++	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
++
+ 	vlan = batadv_softif_vlan_get(bat_priv, vid);
+ 	if (vlan) {
+ 		batadv_softif_vlan_put(vlan);
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -EEXIST;
+ 	}
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC);
+-	if (!vlan)
++	if (!vlan) {
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	vlan->bat_priv = bat_priv;
+ 	vlan->vid = vid;
+@@ -590,17 +595,23 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 
+ 	atomic_set(&vlan->ap_isolation, 0);
+ 
++	kref_get(&vlan->refcount);
++	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
++	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
++
++	/* batadv_sysfs_add_vlan cannot be in the spinlock section due to the
++	 * sleeping behavior of the sysfs functions and the fs_reclaim lock
++	 */
+ 	err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan);
+ 	if (err) {
+-		kfree(vlan);
++		/* ref for the function */
++		batadv_softif_vlan_put(vlan);
++
++		/* ref for the list */
++		batadv_softif_vlan_put(vlan);
+ 		return err;
+ 	}
+ 
+-	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
+-	kref_get(&vlan->refcount);
+-	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
+-	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+-
+ 	/* add a new TT local entry. This one will be marked with the NOPURGE
+ 	 * flag
+ 	 */
+diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c
+index f2eef43bd2ec..09427fc6494a 100644
+--- a/net/batman-adv/sysfs.c
++++ b/net/batman-adv/sysfs.c
+@@ -188,7 +188,8 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	return __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					_post_func, attr,		\
+-					&bat_priv->_var, net_dev);	\
++					&bat_priv->_var, net_dev,	\
++					NULL);	\
+ }
+ 
+ #define BATADV_ATTR_SIF_SHOW_UINT(_name, _var)				\
+@@ -262,7 +263,9 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	length = __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					  _post_func, attr,		\
+-					  &hard_iface->_var, net_dev);	\
++					  &hard_iface->_var,		\
++					  hard_iface->soft_iface,	\
++					  net_dev);			\
+ 									\
+ 	batadv_hardif_put(hard_iface);				\
+ 	return length;							\
+@@ -356,10 +359,12 @@ __batadv_store_bool_attr(char *buff, size_t count,
+ 
+ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 				  struct net_device *net_dev,
++				  struct net_device *slave_dev,
+ 				  const char *attr_name,
+ 				  unsigned int min, unsigned int max,
+ 				  atomic_t *attr)
+ {
++	char ifname[IFNAMSIZ + 3] = "";
+ 	unsigned long uint_val;
+ 	int ret;
+ 
+@@ -385,8 +390,11 @@ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 	if (atomic_read(attr) == uint_val)
+ 		return count;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %i to: %lu\n",
+-		    attr_name, atomic_read(attr), uint_val);
++	if (slave_dev)
++		snprintf(ifname, sizeof(ifname), "%s: ", slave_dev->name);
++
++	batadv_info(net_dev, "%s: %sChanging from: %i to: %lu\n",
++		    attr_name, ifname, atomic_read(attr), uint_val);
+ 
+ 	atomic_set(attr, uint_val);
+ 	return count;
+@@ -397,12 +405,13 @@ static ssize_t __batadv_store_uint_attr(const char *buff, size_t count,
+ 					void (*post_func)(struct net_device *),
+ 					const struct attribute *attr,
+ 					atomic_t *attr_store,
+-					struct net_device *net_dev)
++					struct net_device *net_dev,
++					struct net_device *slave_dev)
+ {
+ 	int ret;
+ 
+-	ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max,
+-				     attr_store);
++	ret = batadv_store_uint_attr(buff, count, net_dev, slave_dev,
++				     attr->name, min, max, attr_store);
+ 	if (post_func && ret)
+ 		post_func(net_dev);
+ 
+@@ -571,7 +580,7 @@ static ssize_t batadv_store_gw_sel_class(struct kobject *kobj,
+ 	return __batadv_store_uint_attr(buff, count, 1, BATADV_TQ_MAX_VALUE,
+ 					batadv_post_gw_reselect, attr,
+ 					&bat_priv->gw.sel_class,
+-					bat_priv->soft_iface);
++					bat_priv->soft_iface, NULL);
+ }
+ 
+ static ssize_t batadv_show_gw_bwidth(struct kobject *kobj,
+@@ -1090,8 +1099,9 @@ static ssize_t batadv_store_throughput_override(struct kobject *kobj,
+ 	if (old_tp_override == tp_override)
+ 		goto out;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %u.%u MBit to: %u.%u MBit\n",
+-		    "throughput_override",
++	batadv_info(hard_iface->soft_iface,
++		    "%s: %s: Changing from: %u.%u MBit to: %u.%u MBit\n",
++		    "throughput_override", net_dev->name,
+ 		    old_tp_override / 10, old_tp_override % 10,
+ 		    tp_override / 10, tp_override % 10);
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 12a2b7d21376..d21624c44665 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -1613,6 +1613,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ {
+ 	struct batadv_tt_orig_list_entry *orig_entry;
+ 
++	spin_lock_bh(&tt_global->list_lock);
++
+ 	orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node);
+ 	if (orig_entry) {
+ 		/* refresh the ttvn: the current value could be a bogus one that
+@@ -1635,11 +1637,9 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ 	orig_entry->flags = flags;
+ 	kref_init(&orig_entry->refcount);
+ 
+-	spin_lock_bh(&tt_global->list_lock);
+ 	kref_get(&orig_entry->refcount);
+ 	hlist_add_head_rcu(&orig_entry->list,
+ 			   &tt_global->orig_list);
+-	spin_unlock_bh(&tt_global->list_lock);
+ 	atomic_inc(&tt_global->orig_list_count);
+ 
+ sync_flags:
+@@ -1647,6 +1647,8 @@ sync_flags:
+ out:
+ 	if (orig_entry)
+ 		batadv_tt_orig_list_entry_put(orig_entry);
++
++	spin_unlock_bh(&tt_global->list_lock);
+ }
+ 
+ /**
+diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c
+index a637458205d1..40e69c9346d2 100644
+--- a/net/batman-adv/tvlv.c
++++ b/net/batman-adv/tvlv.c
+@@ -529,15 +529,20 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_tvlv_handler *tvlv_handler;
+ 
++	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
++
+ 	tvlv_handler = batadv_tvlv_handler_get(bat_priv, type, version);
+ 	if (tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		batadv_tvlv_handler_put(tvlv_handler);
+ 		return;
+ 	}
+ 
+ 	tvlv_handler = kzalloc(sizeof(*tvlv_handler), GFP_ATOMIC);
+-	if (!tvlv_handler)
++	if (!tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		return;
++	}
+ 
+ 	tvlv_handler->ogm_handler = optr;
+ 	tvlv_handler->unicast_handler = uptr;
+@@ -547,7 +552,6 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ 	kref_init(&tvlv_handler->refcount);
+ 	INIT_HLIST_NODE(&tvlv_handler->list);
+ 
+-	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
+ 	kref_get(&tvlv_handler->refcount);
+ 	hlist_add_head_rcu(&tvlv_handler->list, &bat_priv->tvlv.handler_list);
+ 	spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index e7de5f282722..effa87858b21 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -612,7 +612,10 @@ static void smc_connect_work(struct work_struct *work)
+ 		smc->sk.sk_err = -rc;
+ 
+ out:
+-	smc->sk.sk_state_change(&smc->sk);
++	if (smc->sk.sk_err)
++		smc->sk.sk_state_change(&smc->sk);
++	else
++		smc->sk.sk_write_space(&smc->sk);
+ 	kfree(smc->connect_info);
+ 	smc->connect_info = NULL;
+ 	release_sock(&smc->sk);
+@@ -1345,7 +1348,7 @@ static __poll_t smc_poll(struct file *file, struct socket *sock,
+ 		return EPOLLNVAL;
+ 
+ 	smc = smc_sk(sock->sk);
+-	if ((sk->sk_state == SMC_INIT) || smc->use_fallback) {
++	if (smc->use_fallback) {
+ 		/* delegate to CLC child sock */
+ 		mask = smc->clcsock->ops->poll(file, smc->clcsock, wait);
+ 		sk->sk_err = smc->clcsock->sk->sk_err;
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index ae5d168653ce..086157555ac3 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -405,14 +405,12 @@ int smc_clc_send_proposal(struct smc_sock *smc,
+ 	vec[i++].iov_len = sizeof(trl);
+ 	/* due to the few bytes needed for clc-handshake this cannot block */
+ 	len = kernel_sendmsg(smc->clcsock, &msg, vec, i, plen);
+-	if (len < sizeof(pclc)) {
+-		if (len >= 0) {
+-			reason_code = -ENETUNREACH;
+-			smc->sk.sk_err = -reason_code;
+-		} else {
+-			smc->sk.sk_err = smc->clcsock->sk->sk_err;
+-			reason_code = -smc->sk.sk_err;
+-		}
++	if (len < 0) {
++		smc->sk.sk_err = smc->clcsock->sk->sk_err;
++		reason_code = -smc->sk.sk_err;
++	} else if (len < (int)sizeof(pclc)) {
++		reason_code = -ENETUNREACH;
++		smc->sk.sk_err = -reason_code;
+ 	}
+ 
+ 	return reason_code;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 6c253343a6f9..70d18d0d39ff 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -566,7 +566,11 @@ static void test_sockmap(int tasks, void *data)
+ 	/* Test update without programs */
+ 	for (i = 0; i < 6; i++) {
+ 		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+-		if (err) {
++		if (i < 2 && !err) {
++			printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",
++			       i, sfd[i]);
++			goto out_sockmap;
++		} else if (i >= 2 && err) {
+ 			printf("Failed noprog update sockmap '%i:%i'\n",
+ 			       i, sfd[i]);
+ 			goto out_sockmap;
+@@ -727,7 +731,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Test map update elem afterwards fd lives in fd and map_fd */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY);
+ 		if (err) {
+ 			printf("Failed map_fd_rx update sockmap %i '%i:%i'\n",
+@@ -831,7 +835,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Delete the elems without programs */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_delete_elem(fd, &i);
+ 		if (err) {
+ 			printf("Failed delete sockmap %i '%i:%i'\n",
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 32a194e3e07a..0ab9423d009f 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -178,8 +178,8 @@ setup() {
+ 
+ cleanup() {
+ 	[ ${cleanup_done} -eq 1 ] && return
+-	ip netns del ${NS_A} 2 > /dev/null
+-	ip netns del ${NS_B} 2 > /dev/null
++	ip netns del ${NS_A} 2> /dev/null
++	ip netns del ${NS_B} 2> /dev/null
+ 	cleanup_done=1
+ }
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw)
  To: gentoo-commits

commit:     3c99d5ac3d09450f9e5c7392b5173804107a827e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 10 21:33:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:28 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3c99d5ac

Linux patch 4.18.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-4.18.18.patch | 1206 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1210 insertions(+)

diff --git a/0000_README b/0000_README
index fcd301e..6774045 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-4.18.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.17
 
+Patch:  1017_linux-4.18.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-4.18.18.patch b/1017_linux-4.18.18.patch
new file mode 100644
index 0000000..093fbfc
--- /dev/null
+++ b/1017_linux-4.18.18.patch
@@ -0,0 +1,1206 @@
+diff --git a/Makefile b/Makefile
+index c051db0ca5a0..7b35c1ec0427 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index a38bf5a1e37a..69dcdf195b61 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu)
+ static inline void
+ switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+ {
+-	if (old_fpu->initialized) {
++	if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
+ 		if (!copy_fpregs_to_fpstate(old_fpu))
+ 			old_fpu->last_cpu = -1;
+ 		else
+diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
+index a06b07399d17..6abf3af96fc8 100644
+--- a/arch/x86/include/asm/percpu.h
++++ b/arch/x86/include/asm/percpu.h
+@@ -185,22 +185,22 @@ do {									\
+ 	typeof(var) pfo_ret__;				\
+ 	switch (sizeof(var)) {				\
+ 	case 1:						\
+-		asm(op "b "__percpu_arg(1)",%0"		\
++		asm volatile(op "b "__percpu_arg(1)",%0"\
+ 		    : "=q" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 2:						\
+-		asm(op "w "__percpu_arg(1)",%0"		\
++		asm volatile(op "w "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 4:						\
+-		asm(op "l "__percpu_arg(1)",%0"		\
++		asm volatile(op "l "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 8:						\
+-		asm(op "q "__percpu_arg(1)",%0"		\
++		asm volatile(op "q "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
+index 661583662430..71c0b01d93b1 100644
+--- a/arch/x86/kernel/pci-swiotlb.c
++++ b/arch/x86/kernel/pci-swiotlb.c
+@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
+ int __init pci_swiotlb_detect_4gb(void)
+ {
+ 	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
+-#ifdef CONFIG_X86_64
+ 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
+ 		swiotlb = 1;
+-#endif
+ 
+ 	/*
+ 	 * If SME is active then swiotlb will be set to 1 so that bounce
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 74b4472ba0a6..f32472acf66c 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1258,7 +1258,7 @@ void __init setup_arch(char **cmdline_p)
+ 	x86_init.hyper.guest_late_init();
+ 
+ 	e820__reserve_resources();
+-	e820__register_nosave_regions(max_low_pfn);
++	e820__register_nosave_regions(max_pfn);
+ 
+ 	x86_init.resources.reserve_resources();
+ 
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index be01328eb755..fddaefc51fb6 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -25,7 +25,7 @@
+ #include <asm/time.h>
+ 
+ #ifdef CONFIG_X86_64
+-__visible volatile unsigned long jiffies __cacheline_aligned = INITIAL_JIFFIES;
++__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
+ #endif
+ 
+ unsigned long profile_pc(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index a10481656d82..2f4af9598f62 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -60,7 +60,7 @@ struct cyc2ns {
+ 
+ static DEFINE_PER_CPU_ALIGNED(struct cyc2ns, cyc2ns);
+ 
+-void cyc2ns_read_begin(struct cyc2ns_data *data)
++void __always_inline cyc2ns_read_begin(struct cyc2ns_data *data)
+ {
+ 	int seq, idx;
+ 
+@@ -77,7 +77,7 @@ void cyc2ns_read_begin(struct cyc2ns_data *data)
+ 	} while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence)));
+ }
+ 
+-void cyc2ns_read_end(void)
++void __always_inline cyc2ns_read_end(void)
+ {
+ 	preempt_enable_notrace();
+ }
+@@ -123,7 +123,7 @@ static void __init cyc2ns_init(int cpu)
+ 	seqcount_init(&c2n->seq);
+ }
+ 
+-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
++static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
+ {
+ 	struct cyc2ns_data data;
+ 	unsigned long long ns;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+index ffa5dac221e4..129ebd2588fd 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
++++ b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+@@ -1434,8 +1434,16 @@ static void __init sun4i_ccu_init(struct device_node *node,
+ 		return;
+ 	}
+ 
+-	/* Force the PLL-Audio-1x divider to 1 */
+ 	val = readl(reg + SUN4I_PLL_AUDIO_REG);
++
++	/*
++	 * Force VCO and PLL bias current to lowest setting. Higher
++	 * settings interfere with sigma-delta modulation and result
++	 * in audible noise and distortions when using SPDIF or I2S.
++	 */
++	val &= ~GENMASK(25, 16);
++
++	/* Force the PLL-Audio-1x divider to 1 */
+ 	val &= ~GENMASK(29, 26);
+ 	writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG);
+ 
+diff --git a/drivers/gpio/gpio-mxs.c b/drivers/gpio/gpio-mxs.c
+index e2831ee70cdc..deb539b3316b 100644
+--- a/drivers/gpio/gpio-mxs.c
++++ b/drivers/gpio/gpio-mxs.c
+@@ -18,8 +18,6 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/gpio/driver.h>
+-/* FIXME: for gpio_get_value(), replace this by direct register read */
+-#include <linux/gpio.h>
+ #include <linux/module.h>
+ 
+ #define MXS_SET		0x4
+@@ -86,7 +84,7 @@ static int mxs_gpio_set_irq_type(struct irq_data *d, unsigned int type)
+ 	port->both_edges &= ~pin_mask;
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		val = gpio_get_value(port->gc.base + d->hwirq);
++		val = port->gc.get(&port->gc, d->hwirq);
+ 		if (val)
+ 			edge = GPIO_INT_FALL_EDGE;
+ 		else
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index c7b4481c90d7..d74d9a8cde2a 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -113,6 +113,9 @@ static const struct edid_quirk {
+ 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+ 	{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
++	{ "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
+@@ -4279,7 +4282,7 @@ static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
+ 	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+ 
+ 	dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
+-	hdmi->y420_dc_modes |= dc_mask;
++	hdmi->y420_dc_modes = dc_mask;
+ }
+ 
+ static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 2ee1eaa66188..1ebac724fe7b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1561,6 +1561,25 @@ unlock:
+ }
+ EXPORT_SYMBOL(drm_fb_helper_ioctl);
+ 
++static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1,
++				      const struct fb_var_screeninfo *var_2)
++{
++	return var_1->bits_per_pixel == var_2->bits_per_pixel &&
++	       var_1->grayscale == var_2->grayscale &&
++	       var_1->red.offset == var_2->red.offset &&
++	       var_1->red.length == var_2->red.length &&
++	       var_1->red.msb_right == var_2->red.msb_right &&
++	       var_1->green.offset == var_2->green.offset &&
++	       var_1->green.length == var_2->green.length &&
++	       var_1->green.msb_right == var_2->green.msb_right &&
++	       var_1->blue.offset == var_2->blue.offset &&
++	       var_1->blue.length == var_2->blue.length &&
++	       var_1->blue.msb_right == var_2->blue.msb_right &&
++	       var_1->transp.offset == var_2->transp.offset &&
++	       var_1->transp.length == var_2->transp.length &&
++	       var_1->transp.msb_right == var_2->transp.msb_right;
++}
++
+ /**
+  * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
+  * @var: screeninfo to check
+@@ -1571,7 +1590,6 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ {
+ 	struct drm_fb_helper *fb_helper = info->par;
+ 	struct drm_framebuffer *fb = fb_helper->fb;
+-	int depth;
+ 
+ 	if (var->pixclock != 0 || in_dbg_master())
+ 		return -EINVAL;
+@@ -1591,72 +1609,15 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ 		return -EINVAL;
+ 	}
+ 
+-	switch (var->bits_per_pixel) {
+-	case 16:
+-		depth = (var->green.length == 6) ? 16 : 15;
+-		break;
+-	case 32:
+-		depth = (var->transp.length > 0) ? 32 : 24;
+-		break;
+-	default:
+-		depth = var->bits_per_pixel;
+-		break;
+-	}
+-
+-	switch (depth) {
+-	case 8:
+-		var->red.offset = 0;
+-		var->green.offset = 0;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 15:
+-		var->red.offset = 10;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 5;
+-		var->blue.length = 5;
+-		var->transp.length = 1;
+-		var->transp.offset = 15;
+-		break;
+-	case 16:
+-		var->red.offset = 11;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 6;
+-		var->blue.length = 5;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 24:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 32:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 8;
+-		var->transp.offset = 24;
+-		break;
+-	default:
++	/*
++	 * drm fbdev emulation doesn't support changing the pixel format at all,
++	 * so reject all pixel format changing requests.
++	 */
++	if (!drm_fb_pixel_format_equal(var, &info->var)) {
++		DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n");
+ 		return -EINVAL;
+ 	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(drm_fb_helper_check_var);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_dotclock.c b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+index e36004fbe453..2a15f2f9271e 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_dotclock.c
++++ b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+@@ -81,9 +81,19 @@ static long sun4i_dclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	int i;
+ 
+ 	for (i = tcon->dclk_min_div; i <= tcon->dclk_max_div; i++) {
+-		unsigned long ideal = rate * i;
++		u64 ideal = (u64)rate * i;
+ 		unsigned long rounded;
+ 
++		/*
++		 * ideal has overflowed the max value that can be stored in an
++		 * unsigned long, and every clk operation we might do on a
++		 * truncated u64 value will give us incorrect results.
++		 * Let's just stop there since bigger dividers will result in
++		 * the same overflow issue.
++		 */
++		if (ideal > ULONG_MAX)
++			goto out;
++
+ 		rounded = clk_hw_round_rate(clk_hw_get_parent(hw),
+ 					    ideal);
+ 
+diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c
+index 9eef96dacbd7..d93a719d25c1 100644
+--- a/drivers/infiniband/core/ucm.c
++++ b/drivers/infiniband/core/ucm.c
+@@ -46,6 +46,8 @@
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/uaccess.h>
+ 
+ #include <rdma/ib.h>
+@@ -1123,6 +1125,7 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 21863ddde63e..01d68ed46c1b 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -44,6 +44,8 @@
+ #include <linux/module.h>
+ #include <linux/nsproxy.h>
+ 
++#include <linux/nospec.h>
++
+ #include <rdma/rdma_user_cm.h>
+ #include <rdma/ib_marshall.h>
+ #include <rdma/rdma_cm.h>
+@@ -1676,6 +1678,7 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index f5ae24865355..b0f9d19b3410 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1346,6 +1346,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0611", 0 },
+ 	{ "ELAN0612", 0 },
+ 	{ "ELAN0618", 0 },
++	{ "ELAN061C", 0 },
+ 	{ "ELAN061D", 0 },
+ 	{ "ELAN0622", 0 },
+ 	{ "ELAN1000", 0 },
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index f5cc517d1131..7e50e1d6f58c 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -478,6 +478,23 @@ static void at24_properties_to_pdata(struct device *dev,
+ 	if (device_property_present(dev, "no-read-rollover"))
+ 		chip->flags |= AT24_FLAG_NO_RDROL;
+ 
++	err = device_property_read_u32(dev, "address-width", &val);
++	if (!err) {
++		switch (val) {
++		case 8:
++			if (chip->flags & AT24_FLAG_ADDR16)
++				dev_warn(dev, "Override address width to be 8, while default is 16\n");
++			chip->flags &= ~AT24_FLAG_ADDR16;
++			break;
++		case 16:
++			chip->flags |= AT24_FLAG_ADDR16;
++			break;
++		default:
++			dev_warn(dev, "Bad \"address-width\" property: %u\n",
++				 val);
++		}
++	}
++
+ 	err = device_property_read_u32(dev, "size", &val);
+ 	if (!err)
+ 		chip->byte_len = val;
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index 01b0e2bb3319..2012551d93e0 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -24,6 +24,8 @@
+ #include <linux/slab.h>
+ #include <linux/timekeeping.h>
+ 
++#include <linux/nospec.h>
++
+ #include "ptp_private.h"
+ 
+ static int ptp_disable_pinfunc(struct ptp_clock_info *ops,
+@@ -248,6 +250,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		pd = ops->pin_config[pin_index];
+@@ -266,6 +269,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 84f52774810a..b61d101894ef 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -309,17 +309,17 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 
+ 		if (difference & ACM_CTRL_DSR)
+ 			acm->iocount.dsr++;
+-		if (difference & ACM_CTRL_BRK)
+-			acm->iocount.brk++;
+-		if (difference & ACM_CTRL_RI)
+-			acm->iocount.rng++;
+ 		if (difference & ACM_CTRL_DCD)
+ 			acm->iocount.dcd++;
+-		if (difference & ACM_CTRL_FRAMING)
++		if (newctrl & ACM_CTRL_BRK)
++			acm->iocount.brk++;
++		if (newctrl & ACM_CTRL_RI)
++			acm->iocount.rng++;
++		if (newctrl & ACM_CTRL_FRAMING)
+ 			acm->iocount.frame++;
+-		if (difference & ACM_CTRL_PARITY)
++		if (newctrl & ACM_CTRL_PARITY)
+ 			acm->iocount.parity++;
+-		if (difference & ACM_CTRL_OVERRUN)
++		if (newctrl & ACM_CTRL_OVERRUN)
+ 			acm->iocount.overrun++;
+ 		spin_unlock(&acm->read_lock);
+ 
+@@ -354,7 +354,6 @@ static void acm_ctrl_irq(struct urb *urb)
+ 	case -ENOENT:
+ 	case -ESHUTDOWN:
+ 		/* this urb is terminated, clean up */
+-		acm->nb_index = 0;
+ 		dev_dbg(&acm->control->dev,
+ 			"%s - urb shutting down with status: %d\n",
+ 			__func__, status);
+@@ -1642,6 +1641,7 @@ static int acm_pre_reset(struct usb_interface *intf)
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 
+ 	clear_bit(EVENT_RX_STALL, &acm->flags);
++	acm->nb_index = 0; /* pending control transfers are lost */
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index e1e0c90ce569..2e66711dac9c 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1473,8 +1473,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
+-		if (is_in)
+-			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1504,6 +1502,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 			is_in = 0;
+ 			uurb->endpoint &= ~USB_DIR_IN;
+ 		}
++		if (is_in)
++			allow_short = true;
+ 		snoop(&ps->dev->dev, "control urb: bRequestType=%02x "
+ 			"bRequest=%02x wValue=%04x "
+ 			"wIndex=%04x wLength=%04x\n",
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index acecd13dcbd9..b29620e5df83 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -222,6 +222,8 @@
+ #include <linux/usb/gadget.h>
+ #include <linux/usb/composite.h>
+ 
++#include <linux/nospec.h>
++
+ #include "configfs.h"
+ 
+ 
+@@ -3171,6 +3173,7 @@ static struct config_group *fsg_lun_make(struct config_group *group,
+ 	fsg_opts = to_fsg_opts(&group->cg_item);
+ 	if (num >= FSG_MAX_LUNS)
+ 		return ERR_PTR(-ERANGE);
++	num = array_index_nospec(num, FSG_MAX_LUNS);
+ 
+ 	mutex_lock(&fsg_opts->lock);
+ 	if (fsg_opts->refcnt || fsg_opts->common->luns[num]) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 722860eb5a91..51dd8e00c4f8 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -179,10 +179,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
++	    pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+-	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+diff --git a/drivers/usb/roles/intel-xhci-usb-role-switch.c b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+index 1fb3dd0f1dfa..277de96181f9 100644
+--- a/drivers/usb/roles/intel-xhci-usb-role-switch.c
++++ b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+@@ -161,6 +161,8 @@ static int intel_xhci_usb_remove(struct platform_device *pdev)
+ {
+ 	struct intel_xhci_usb_data *data = platform_get_drvdata(pdev);
+ 
++	pm_runtime_disable(&pdev->dev);
++
+ 	usb_role_switch_unregister(data->role_sw);
+ 	return 0;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index d11f3f8dad40..1e592ec94ba4 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -318,8 +318,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	struct vhci_hcd	*vhci_hcd;
+ 	struct vhci	*vhci;
+ 	int             retval = 0;
+-	int		rhport;
++	int		rhport = -1;
+ 	unsigned long	flags;
++	bool invalid_rhport = false;
+ 
+ 	u32 prev_port_status[VHCI_HC_PORTS];
+ 
+@@ -334,9 +335,19 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	usbip_dbg_vhci_rh("typeReq %x wValue %x wIndex %x\n", typeReq, wValue,
+ 			  wIndex);
+ 
+-	if (wIndex > VHCI_HC_PORTS)
+-		pr_err("invalid port number %d\n", wIndex);
+-	rhport = wIndex - 1;
++	/*
++	 * wIndex can be 0 for some request types (typeReq). rhport is
++	 * in valid range when wIndex >= 1 and < VHCI_HC_PORTS.
++	 *
++	 * Reference port_status[] only with valid rhport when
++	 * invalid_rhport is false.
++	 */
++	if (wIndex < 1 || wIndex > VHCI_HC_PORTS) {
++		invalid_rhport = true;
++		if (wIndex > VHCI_HC_PORTS)
++			pr_err("invalid port number %d\n", wIndex);
++	} else
++		rhport = wIndex - 1;
+ 
+ 	vhci_hcd = hcd_to_vhci_hcd(hcd);
+ 	vhci = vhci_hcd->vhci;
+@@ -345,8 +356,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 
+ 	/* store old status and compare now and old later */
+ 	if (usbip_dbg_flag_vhci_rh) {
+-		memcpy(prev_port_status, vhci_hcd->port_status,
+-			sizeof(prev_port_status));
++		if (!invalid_rhport)
++			memcpy(prev_port_status, vhci_hcd->port_status,
++				sizeof(prev_port_status));
+ 	}
+ 
+ 	switch (typeReq) {
+@@ -354,8 +366,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		usbip_dbg_vhci_rh(" ClearHubFeature\n");
+ 		break;
+ 	case ClearPortFeature:
+-		if (rhport < 0)
++		if (invalid_rhport) {
++			pr_err("invalid port number %d\n", wIndex);
+ 			goto error;
++		}
+ 		switch (wValue) {
+ 		case USB_PORT_FEAT_SUSPEND:
+ 			if (hcd->speed == HCD_USB3) {
+@@ -415,9 +429,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		break;
+ 	case GetPortStatus:
+ 		usbip_dbg_vhci_rh(" GetPortStatus port %x\n", wIndex);
+-		if (wIndex < 1) {
++		if (invalid_rhport) {
+ 			pr_err("invalid port number %d\n", wIndex);
+ 			retval = -EPIPE;
++			goto error;
+ 		}
+ 
+ 		/* we do not care about resume. */
+@@ -513,16 +528,20 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				goto error;
+ 			}
+ 
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 
+ 			vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND;
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_POWER\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3)
+ 				vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER;
+ 			else
+@@ -531,8 +550,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_BH_PORT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* Applicable only for USB3.0 hub */
+ 			if (hcd->speed != HCD_USB3) {
+ 				pr_err("USB_PORT_FEAT_BH_PORT_RESET req not "
+@@ -543,8 +564,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* if it's already enabled, disable */
+ 			if (hcd->speed == HCD_USB3) {
+ 				vhci_hcd->port_status[rhport] = 0;
+@@ -565,8 +588,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		default:
+ 			usbip_dbg_vhci_rh(" SetPortFeature: default %d\n",
+ 					  wValue);
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3) {
+ 				if ((vhci_hcd->port_status[rhport] &
+ 				     USB_SS_PORT_STAT_POWER) != 0) {
+@@ -608,7 +633,7 @@ error:
+ 	if (usbip_dbg_flag_vhci_rh) {
+ 		pr_debug("port %d\n", rhport);
+ 		/* Only dump valid port status */
+-		if (rhport >= 0) {
++		if (!invalid_rhport) {
+ 			dump_port_status_diff(prev_port_status[rhport],
+ 					      vhci_hcd->port_status[rhport],
+ 					      hcd->speed == HCD_USB3);
+@@ -618,8 +643,10 @@ error:
+ 
+ 	spin_unlock_irqrestore(&vhci->lock, flags);
+ 
+-	if ((vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0)
++	if (!invalid_rhport &&
++	    (vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0) {
+ 		usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ }
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index af2b17b21b94..95983c744164 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -343,7 +343,7 @@ try_again:
+ 	trap = lock_rename(cache->graveyard, dir);
+ 
+ 	/* do some checks before getting the grave dentry */
+-	if (rep->d_parent != dir) {
++	if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
+ 		/* the entry was probably culled when we dropped the parent dir
+ 		 * lock */
+ 		unlock_rename(cache->graveyard, dir);
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 83bfe04456b6..c550512ce335 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -70,20 +70,7 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ }
+ 
+ /*
+- * initialise an cookie jar slab element prior to any use
+- */
+-void fscache_cookie_init_once(void *_cookie)
+-{
+-	struct fscache_cookie *cookie = _cookie;
+-
+-	memset(cookie, 0, sizeof(*cookie));
+-	spin_lock_init(&cookie->lock);
+-	spin_lock_init(&cookie->stores_lock);
+-	INIT_HLIST_HEAD(&cookie->backing_objects);
+-}
+-
+-/*
+- * Set the index key in a cookie.  The cookie struct has space for a 12-byte
++ * Set the index key in a cookie.  The cookie struct has space for a 16-byte
+  * key plus length and hash, but if that's not big enough, it's instead a
+  * pointer to a buffer containing 3 bytes of hash, 1 byte of length and then
+  * the key data.
+@@ -93,20 +80,18 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ {
+ 	unsigned long long h;
+ 	u32 *buf;
++	int bufs;
+ 	int i;
+ 
+-	cookie->key_len = index_key_len;
++	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+ 	if (index_key_len > sizeof(cookie->inline_key)) {
+-		buf = kzalloc(index_key_len, GFP_KERNEL);
++		buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+ 		cookie->key = buf;
+ 	} else {
+ 		buf = (u32 *)cookie->inline_key;
+-		buf[0] = 0;
+-		buf[1] = 0;
+-		buf[2] = 0;
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+@@ -116,7 +101,8 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	 */
+ 	h = (unsigned long)cookie->parent;
+ 	h += index_key_len + cookie->type;
+-	for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++)
++
++	for (i = 0; i < bufs; i++)
+ 		h += buf[i];
+ 
+ 	cookie->key_hash = h ^ (h >> 32);
+@@ -161,7 +147,7 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	struct fscache_cookie *cookie;
+ 
+ 	/* allocate and initialise a cookie */
+-	cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL);
++	cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
+ 	if (!cookie)
+ 		return NULL;
+ 
+@@ -192,6 +178,9 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	cookie->netfs_data	= netfs_data;
+ 	cookie->flags		= (1 << FSCACHE_COOKIE_NO_DATA_YET);
+ 	cookie->type		= def->type;
++	spin_lock_init(&cookie->lock);
++	spin_lock_init(&cookie->stores_lock);
++	INIT_HLIST_HEAD(&cookie->backing_objects);
+ 
+ 	/* radix tree insertion won't use the preallocation pool unless it's
+ 	 * told it may not wait */
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index f83328a7f048..d6209022e965 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -51,7 +51,6 @@ extern struct fscache_cache *fscache_select_cache_for_object(
+ extern struct kmem_cache *fscache_cookie_jar;
+ 
+ extern void fscache_free_cookie(struct fscache_cookie *);
+-extern void fscache_cookie_init_once(void *);
+ extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *,
+ 						   const struct fscache_cookie_def *,
+ 						   const void *, size_t,
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index 7dce110bf17d..30ad89db1efc 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -143,9 +143,7 @@ static int __init fscache_init(void)
+ 
+ 	fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
+ 					       sizeof(struct fscache_cookie),
+-					       0,
+-					       0,
+-					       fscache_cookie_init_once);
++					       0, 0, NULL);
+ 	if (!fscache_cookie_jar) {
+ 		pr_notice("Failed to allocate a cookie jar\n");
+ 		ret = -ENOMEM;
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index b445b13fc59b..5444fec607ce 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -229,7 +229,7 @@ static long ioctl_file_clone(struct file *dst_file, unsigned long srcfd,
+ 	ret = -EXDEV;
+ 	if (src_file.file->f_path.mnt != dst_file->f_path.mnt)
+ 		goto fdput;
+-	ret = do_clone_file_range(src_file.file, off, dst_file, destoff, olen);
++	ret = vfs_clone_file_range(src_file.file, off, dst_file, destoff, olen);
+ fdput:
+ 	fdput(src_file);
+ 	return ret;
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index b0555d7d8200..613d2fe2dddd 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -541,7 +541,8 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ __be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
+ 		u64 dst_pos, u64 count)
+ {
+-	return nfserrno(do_clone_file_range(src, src_pos, dst, dst_pos, count));
++	return nfserrno(vfs_clone_file_range(src, src_pos, dst, dst_pos,
++					     count));
+ }
+ 
+ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index ddaddb4ce4c3..26b477f2538d 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -156,7 +156,7 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
+ 	}
+ 
+ 	/* Try to use clone_file_range to clone up within the same fs */
+-	error = vfs_clone_file_range(old_file, 0, new_file, 0, len);
++	error = do_clone_file_range(old_file, 0, new_file, 0, len);
+ 	if (!error)
+ 		goto out;
+ 	/* Couldn't clone, so now we try to copy the data */
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 153f8f690490..c9d489684335 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1818,8 +1818,8 @@ int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ }
+ EXPORT_SYMBOL(vfs_clone_file_prep_inodes);
+ 
+-int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len)
++int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			struct file *file_out, loff_t pos_out, u64 len)
+ {
+ 	struct inode *inode_in = file_inode(file_in);
+ 	struct inode *inode_out = file_inode(file_out);
+@@ -1866,6 +1866,19 @@ int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL(do_clone_file_range);
++
++int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
++			 struct file *file_out, loff_t pos_out, u64 len)
++{
++	int ret;
++
++	file_start_write(file_out);
++	ret = do_clone_file_range(file_in, pos_in, file_out, pos_out, len);
++	file_end_write(file_out);
++
++	return ret;
++}
+ EXPORT_SYMBOL(vfs_clone_file_range);
+ 
+ /*
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b25d12ef120a..e3c404833115 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -214,9 +214,9 @@ struct detailed_timing {
+ #define DRM_EDID_HDMI_DC_Y444             (1 << 3)
+ 
+ /* YCBCR 420 deep color modes */
+-#define DRM_EDID_YCBCR420_DC_48		  (1 << 6)
+-#define DRM_EDID_YCBCR420_DC_36		  (1 << 5)
+-#define DRM_EDID_YCBCR420_DC_30		  (1 << 4)
++#define DRM_EDID_YCBCR420_DC_48		  (1 << 2)
++#define DRM_EDID_YCBCR420_DC_36		  (1 << 1)
++#define DRM_EDID_YCBCR420_DC_30		  (1 << 0)
+ #define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \
+ 				    DRM_EDID_YCBCR420_DC_36 | \
+ 				    DRM_EDID_YCBCR420_DC_30)
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 38b04f559ad3..1fd6fa822d2c 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -50,6 +50,9 @@ struct bpf_reg_state {
+ 		 *   PTR_TO_MAP_VALUE_OR_NULL
+ 		 */
+ 		struct bpf_map *map_ptr;
++
++		/* Max size from any of the above. */
++		unsigned long raw;
+ 	};
+ 	/* Fixed part of pointer offset, pointer types only */
+ 	s32 off;
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index a3afa50bb79f..e73363bd8646 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1813,8 +1813,10 @@ extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
+ extern int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ 				      struct inode *inode_out, loff_t pos_out,
+ 				      u64 *len, bool is_dedupe);
++extern int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			       struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len);
++				struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+ 					 struct inode *dest, loff_t destoff,
+ 					 loff_t len, bool *is_same);
+@@ -2755,19 +2757,6 @@ static inline void file_end_write(struct file *file)
+ 	__sb_end_write(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+ }
+ 
+-static inline int do_clone_file_range(struct file *file_in, loff_t pos_in,
+-				      struct file *file_out, loff_t pos_out,
+-				      u64 len)
+-{
+-	int ret;
+-
+-	file_start_write(file_out);
+-	ret = vfs_clone_file_range(file_in, pos_in, file_out, pos_out, len);
+-	file_end_write(file_out);
+-
+-	return ret;
+-}
+-
+ /*
+  * get_write_access() gets write permission for a file.
+  * put_write_access() releases this write permission.
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 82e8edef6ea0..b000686fa1a1 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2731,7 +2731,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->umax_value = umax_ptr;
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->off = ptr_reg->off + smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  Note that off_reg->off
+@@ -2761,10 +2761,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+-			dst_reg->range = 0;
++			dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_SUB:
+@@ -2793,7 +2794,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->id = ptr_reg->id;
+ 			dst_reg->off = ptr_reg->off - smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  If the subtrahend is known
+@@ -2819,11 +2820,12 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+ 			if (smin_val < 0)
+-				dst_reg->range = 0;
++				dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_AND:
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 26526fc41f0d..b27b9509ea89 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4797,9 +4797,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
+ 
+ 	/*
+ 	 * Add to the _head_ of the list, so that an already-started
+-	 * distribute_cfs_runtime will not see us
++	 * distribute_cfs_runtime will not see us. If disribute_cfs_runtime is
++	 * not running add to the tail so that later runqueues don't get starved.
+ 	 */
+-	list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	if (cfs_b->distribute_running)
++		list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	else
++		list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
+ 
+ 	/*
+ 	 * If we're the first throttled task, make sure the bandwidth
+@@ -4943,14 +4947,16 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
+ 	 * in us over-using our runtime if it is all used during this loop, but
+ 	 * only by limited amounts in that extreme case.
+ 	 */
+-	while (throttled && cfs_b->runtime > 0) {
++	while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) {
+ 		runtime = cfs_b->runtime;
++		cfs_b->distribute_running = 1;
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		/* we can't nest cfs_b->lock while distributing bandwidth */
+ 		runtime = distribute_cfs_runtime(cfs_b, runtime,
+ 						 runtime_expires);
+ 		raw_spin_lock(&cfs_b->lock);
+ 
++		cfs_b->distribute_running = 0;
+ 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
+ 
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+@@ -5061,6 +5067,11 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 
+ 	/* confirm we're still not at a refresh boundary */
+ 	raw_spin_lock(&cfs_b->lock);
++	if (cfs_b->distribute_running) {
++		raw_spin_unlock(&cfs_b->lock);
++		return;
++	}
++
+ 	if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		return;
+@@ -5070,6 +5081,9 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 		runtime = cfs_b->runtime;
+ 
+ 	expires = cfs_b->runtime_expires;
++	if (runtime)
++		cfs_b->distribute_running = 1;
++
+ 	raw_spin_unlock(&cfs_b->lock);
+ 
+ 	if (!runtime)
+@@ -5080,6 +5094,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 	raw_spin_lock(&cfs_b->lock);
+ 	if (expires == cfs_b->runtime_expires)
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
++	cfs_b->distribute_running = 0;
+ 	raw_spin_unlock(&cfs_b->lock);
+ }
+ 
+@@ -5188,6 +5203,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
+ 	cfs_b->period_timer.function = sched_cfs_period_timer;
+ 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
++	cfs_b->distribute_running = 0;
+ }
+ 
+ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c7742dcc136c..4565c3f9ecc5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -346,6 +346,8 @@ struct cfs_bandwidth {
+ 	int			nr_periods;
+ 	int			nr_throttled;
+ 	u64			throttled_time;
++
++	bool                    distribute_running;
+ #endif
+ };
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index aae18af94c94..6c78bc2b7fff 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -747,16 +747,30 @@ static void free_synth_field(struct synth_field *field)
+ 	kfree(field);
+ }
+ 
+-static struct synth_field *parse_synth_field(char *field_type,
+-					     char *field_name)
++static struct synth_field *parse_synth_field(int argc, char **argv,
++					     int *consumed)
+ {
+ 	struct synth_field *field;
++	const char *prefix = NULL;
++	char *field_type = argv[0], *field_name;
+ 	int len, ret = 0;
+ 	char *array;
+ 
+ 	if (field_type[0] == ';')
+ 		field_type++;
+ 
++	if (!strcmp(field_type, "unsigned")) {
++		if (argc < 3)
++			return ERR_PTR(-EINVAL);
++		prefix = "unsigned ";
++		field_type = argv[1];
++		field_name = argv[2];
++		*consumed = 3;
++	} else {
++		field_name = argv[1];
++		*consumed = 2;
++	}
++
+ 	len = strlen(field_name);
+ 	if (field_name[len - 1] == ';')
+ 		field_name[len - 1] = '\0';
+@@ -769,11 +783,15 @@ static struct synth_field *parse_synth_field(char *field_type,
+ 	array = strchr(field_name, '[');
+ 	if (array)
+ 		len += strlen(array);
++	if (prefix)
++		len += strlen(prefix);
+ 	field->type = kzalloc(len, GFP_KERNEL);
+ 	if (!field->type) {
+ 		ret = -ENOMEM;
+ 		goto free;
+ 	}
++	if (prefix)
++		strcat(field->type, prefix);
+ 	strcat(field->type, field_type);
+ 	if (array) {
+ 		strcat(field->type, array);
+@@ -1018,7 +1036,7 @@ static int create_synth_event(int argc, char **argv)
+ 	struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
+ 	struct synth_event *event = NULL;
+ 	bool delete_event = false;
+-	int i, n_fields = 0, ret = 0;
++	int i, consumed = 0, n_fields = 0, ret = 0;
+ 	char *name;
+ 
+ 	mutex_lock(&synth_event_mutex);
+@@ -1070,16 +1088,16 @@ static int create_synth_event(int argc, char **argv)
+ 			goto err;
+ 		}
+ 
+-		field = parse_synth_field(argv[i], argv[i + 1]);
++		field = parse_synth_field(argc - i, &argv[i], &consumed);
+ 		if (IS_ERR(field)) {
+ 			ret = PTR_ERR(field);
+ 			goto err;
+ 		}
+-		fields[n_fields] = field;
+-		i++; n_fields++;
++		fields[n_fields++] = field;
++		i += consumed - 1;
+ 	}
+ 
+-	if (i < argc) {
++	if (i < argc && strcmp(argv[i], ";") != 0) {
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     b0c3ec62599d23dda1b591b0470997cd4bfe3e0a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  4 10:44:07 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:26 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b0c3ec62

Linux patch 4.18.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-4.18.12.patch | 7724 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7728 insertions(+)

diff --git a/0000_README b/0000_README
index cccbd63..ff87445 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-4.18.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.11
 
+Patch:  1011_linux-4.18.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-4.18.12.patch b/1011_linux-4.18.12.patch
new file mode 100644
index 0000000..0851ea8
--- /dev/null
+++ b/1011_linux-4.18.12.patch
@@ -0,0 +1,7724 @@
+diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx
+index 72d16f08e431..b8df81f6d6bc 100644
+--- a/Documentation/hwmon/ina2xx
++++ b/Documentation/hwmon/ina2xx
+@@ -32,7 +32,7 @@ Supported chips:
+     Datasheet: Publicly available at the Texas Instruments website
+                http://www.ti.com/
+ 
+-Author: Lothar Felten <l-felten@ti.com>
++Author: Lothar Felten <lothar.felten@gmail.com>
+ 
+ Description
+ -----------
+diff --git a/Documentation/process/2.Process.rst b/Documentation/process/2.Process.rst
+index a9c46dd0706b..51d0349c7809 100644
+--- a/Documentation/process/2.Process.rst
++++ b/Documentation/process/2.Process.rst
+@@ -134,7 +134,7 @@ and their maintainers are:
+ 	4.4	Greg Kroah-Hartman	(very long-term stable kernel)
+ 	4.9	Greg Kroah-Hartman
+ 	4.14	Greg Kroah-Hartman
+-	======  ======================  ===========================
++	======  ======================  ==============================
+ 
+ The selection of a kernel for long-term support is purely a matter of a
+ maintainer having the need and the time to maintain that release.  There
+diff --git a/Makefile b/Makefile
+index de0ecace693a..466e07af8473 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index e03495a799ce..a0ddf497e8cd 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1893,7 +1893,7 @@
+ 			};
+ 		};
+ 
+-		dcan1: can@481cc000 {
++		dcan1: can@4ae3c000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan1";
+ 			reg = <0x4ae3c000 0x2000>;
+@@ -1903,7 +1903,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		dcan2: can@481d0000 {
++		dcan2: can@48480000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan2";
+ 			reg = <0x48480000 0x2000>;
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index 8d3d123d0a5c..37f0a5afe348 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -125,10 +125,14 @@
+ 		interrupt-names = "msi";
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 0x7>;
+-		interrupt-map = <0 0 0 1 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 2 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 3 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 4 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>;
++		/*
++		 * Reference manual lists pci irqs incorrectly
++		 * Real hardware ordering is same as imx6: D+MSI, C, B, A
++		 */
++		interrupt-map = <0 0 0 1 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 2 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 3 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 4 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&clks IMX7D_PCIE_CTRL_ROOT_CLK>,
+ 			 <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>,
+ 			 <&clks IMX7D_PCIE_PHY_ROOT_CLK>;
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index c55d479971cc..f18490548c78 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -84,6 +84,7 @@
+ 			device_type = "cpu";
+ 			reg = <0xf01>;
+ 			clocks = <&clockgen 1 0>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index d1eb123bc73b..1cdc346a05e8 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -92,6 +92,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -103,6 +104,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -114,6 +116,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index e7c3c563ff8f..5f27518561c4 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -351,7 +351,7 @@
+ &mmc2 {
+ 	vmmc-supply = <&vsdio>;
+ 	bus-width = <8>;
+-	non-removable;
++	ti,non-removable;
+ };
+ 
+ &mmc3 {
+@@ -618,15 +618,6 @@
+ 		OMAP4_IOPAD(0x10c, PIN_INPUT | MUX_MODE1)	/* abe_mcbsp3_fsx */
+ 		>;
+ 	};
+-};
+-
+-&omap4_pmx_wkup {
+-	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
+-		/* gpio_wk0 */
+-		pinctrl-single,pins = <
+-		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
+-		>;
+-	};
+ 
+ 	vibrator_direction_pin: pinmux_vibrator_direction_pin {
+ 		pinctrl-single,pins = <
+@@ -641,6 +632,15 @@
+ 	};
+ };
+ 
++&omap4_pmx_wkup {
++	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
++		/* gpio_wk0 */
++		pinctrl-single,pins = <
++		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
++		>;
++	};
++};
++
+ /*
+  * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
+  * uart1 wakeirq.
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 27a78c80e5b1..73d5d72dfc3e 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -116,8 +116,8 @@ void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr)
+ 		PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu));
+ }
+ 
+-extern unsigned char mvebu_boot_wa_start;
+-extern unsigned char mvebu_boot_wa_end;
++extern unsigned char mvebu_boot_wa_start[];
++extern unsigned char mvebu_boot_wa_end[];
+ 
+ /*
+  * This function sets up the boot address workaround needed for SMP
+@@ -130,7 +130,7 @@ int mvebu_setup_boot_addr_wa(unsigned int crypto_eng_target,
+ 			     phys_addr_t resume_addr_reg)
+ {
+ 	void __iomem *sram_virt_base;
+-	u32 code_len = &mvebu_boot_wa_end - &mvebu_boot_wa_start;
++	u32 code_len = mvebu_boot_wa_end - mvebu_boot_wa_start;
+ 
+ 	mvebu_mbus_del_window(BOOTROM_BASE, BOOTROM_SIZE);
+ 	mvebu_mbus_add_window_by_id(crypto_eng_target, crypto_eng_attribute,
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 2ceffd85dd3d..cd65ea4e9c54 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -2160,6 +2160,37 @@ static int of_dev_hwmod_lookup(struct device_node *np,
+ 	return -ENODEV;
+ }
+ 
++/**
++ * omap_hwmod_fix_mpu_rt_idx - fix up mpu_rt_idx register offsets
++ *
++ * @oh: struct omap_hwmod *
++ * @np: struct device_node *
++ *
++ * Fix up module register offsets for modules with mpu_rt_idx.
++ * Only needed for cpsw with interconnect target module defined
++ * in device tree while still using legacy hwmod platform data
++ * for rev, sysc and syss registers.
++ *
++ * Can be removed when all cpsw hwmod platform data has been
++ * dropped.
++ */
++static void omap_hwmod_fix_mpu_rt_idx(struct omap_hwmod *oh,
++				      struct device_node *np,
++				      struct resource *res)
++{
++	struct device_node *child = NULL;
++	int error;
++
++	child = of_get_next_child(np, child);
++	if (!child)
++		return;
++
++	error = of_address_to_resource(child, oh->mpu_rt_idx, res);
++	if (error)
++		pr_err("%s: error mapping mpu_rt_idx: %i\n",
++		       __func__, error);
++}
++
+ /**
+  * omap_hwmod_parse_module_range - map module IO range from device tree
+  * @oh: struct omap_hwmod *
+@@ -2220,7 +2251,13 @@ int omap_hwmod_parse_module_range(struct omap_hwmod *oh,
+ 	size = be32_to_cpup(ranges);
+ 
+ 	pr_debug("omap_hwmod: %s %s at 0x%llx size 0x%llx\n",
+-		 oh->name, np->name, base, size);
++		 oh ? oh->name : "", np->name, base, size);
++
++	if (oh && oh->mpu_rt_idx) {
++		omap_hwmod_fix_mpu_rt_idx(oh, np, res);
++
++		return 0;
++	}
+ 
+ 	res->start = base;
+ 	res->end = base + size - 1;
+diff --git a/arch/arm/mach-omap2/omap_hwmod_reset.c b/arch/arm/mach-omap2/omap_hwmod_reset.c
+index b68f9c0aff0b..d5ddba00bb73 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_reset.c
++++ b/arch/arm/mach-omap2/omap_hwmod_reset.c
+@@ -92,11 +92,13 @@ static void omap_rtc_wait_not_busy(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(OMAP_RTC_KICK0_VALUE, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(OMAP_RTC_KICK1_VALUE, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+ 
+ /**
+@@ -110,9 +112,11 @@ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_lock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+index e19dcd6cb767..0a42b016f257 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+@@ -80,7 +80,7 @@
+ 
+ 	vspd3: vsp@fea38000 {
+ 		compatible = "renesas,vsp2";
+-		reg = <0 0xfea38000 0 0x8000>;
++		reg = <0 0xfea38000 0 0x5000>;
+ 		interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 620>;
+ 		power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+index d842940b2f43..91c392f879f9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+@@ -2530,7 +2530,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2541,7 +2541,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2552,7 +2552,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7796.dtsi b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+index 7c25be6b5af3..a3653f9f4627 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7796.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+@@ -2212,7 +2212,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2223,7 +2223,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2234,7 +2234,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 486aecacb22a..ca618228fce1 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -1397,7 +1397,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+@@ -1416,7 +1416,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index 98a2317a16c4..89dc4e343b7c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -776,7 +776,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77970_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index 2506f46293e8..ac9aadf2723c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -699,7 +699,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+@@ -709,7 +709,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 9256fbaaab7f..5853f5177b4b 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -440,7 +440,7 @@
+ 			};
+ 		};
+ 
+-		port@10 {
++		port@a {
+ 			reg = <10>;
+ 
+ 			adv7482_txa: endpoint {
+@@ -450,7 +450,7 @@
+ 			};
+ 		};
+ 
+-		port@11 {
++		port@b {
+ 			reg = <11>;
+ 
+ 			adv7482_txb: endpoint {
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 56a0260ceb11..d5c6bb1562d8 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -57,6 +57,45 @@ static u64 core_reg_offset_from_id(u64 id)
+ 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
+ }
+ 
++static int validate_core_offset(const struct kvm_one_reg *reg)
++{
++	u64 off = core_reg_offset_from_id(reg->id);
++	int size;
++
++	switch (off) {
++	case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
++	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
++	case KVM_REG_ARM_CORE_REG(regs.sp):
++	case KVM_REG_ARM_CORE_REG(regs.pc):
++	case KVM_REG_ARM_CORE_REG(regs.pstate):
++	case KVM_REG_ARM_CORE_REG(sp_el1):
++	case KVM_REG_ARM_CORE_REG(elr_el1):
++	case KVM_REG_ARM_CORE_REG(spsr[0]) ...
++	     KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
++		size = sizeof(__u64);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
++	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
++		size = sizeof(__uint128_t);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
++		size = sizeof(__u32);
++		break;
++
++	default:
++		return -EINVAL;
++	}
++
++	if (KVM_REG_SIZE(reg->id) == size &&
++	    IS_ALIGNED(off, size / sizeof(__u32)))
++		return 0;
++
++	return -EINVAL;
++}
++
+ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ 	/*
+@@ -76,6 +115,9 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id)))
+ 		return -EFAULT;
+ 
+@@ -98,6 +140,9 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (KVM_REG_SIZE(reg->id) > sizeof(tmp))
+ 		return -EINVAL;
+ 
+@@ -107,17 +152,25 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
+-		u32 mode = (*(u32 *)valp) & COMPAT_PSR_MODE_MASK;
++		u64 mode = (*(u64 *)valp) & COMPAT_PSR_MODE_MASK;
+ 		switch (mode) {
+ 		case COMPAT_PSR_MODE_USR:
++			if (!system_supports_32bit_el0())
++				return -EINVAL;
++			break;
+ 		case COMPAT_PSR_MODE_FIQ:
+ 		case COMPAT_PSR_MODE_IRQ:
+ 		case COMPAT_PSR_MODE_SVC:
+ 		case COMPAT_PSR_MODE_ABT:
+ 		case COMPAT_PSR_MODE_UND:
++			if (!vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
++			break;
+ 		case PSR_MODE_EL0t:
+ 		case PSR_MODE_EL1t:
+ 		case PSR_MODE_EL1h:
++			if (vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
+ 			break;
+ 		default:
+ 			err = -EINVAL;
+diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile
+index c22da16d67b8..5c7bfa8478e7 100644
+--- a/arch/mips/boot/Makefile
++++ b/arch/mips/boot/Makefile
+@@ -118,10 +118,12 @@ ifeq ($(ADDR_BITS),64)
+ 	itb_addr_cells = 2
+ endif
+ 
++targets += vmlinux.its.S
++
+ quiet_cmd_its_cat = CAT     $@
+-      cmd_its_cat = cat $^ >$@
++      cmd_its_cat = cat $(filter-out $(PHONY), $^) >$@
+ 
+-$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS))
++$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) FORCE
+ 	$(call if_changed,its_cat)
+ 
+ quiet_cmd_cpp_its_S = ITS     $@
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index f817342aab8f..53729220b48d 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1321,9 +1321,7 @@ EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100)
+ 
+ #ifdef CONFIG_PPC_DENORMALISATION
+ 	mfspr	r10,SPRN_HSRR1
+-	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
+ 	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
+-	addi	r11,r11,-4		/* HSRR0 is next instruction */
+ 	bne+	denorm_assist
+ #endif
+ 
+@@ -1389,6 +1387,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
+  */
+ 	XVCPSGNDP32(32)
+ denorm_done:
++	mfspr	r11,SPRN_HSRR0
++	subi	r11,r11,4
+ 	mtspr	SPRN_HSRR0,r11
+ 	mtcrf	0x80,r9
+ 	ld	r9,PACA_EXGEN+EX_R9(r13)
+diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
+index 936c7e2d421e..b53401334e81 100644
+--- a/arch/powerpc/kernel/machine_kexec.c
++++ b/arch/powerpc/kernel/machine_kexec.c
+@@ -188,7 +188,12 @@ void __init reserve_crashkernel(void)
+ 			(unsigned long)(crashk_res.start >> 20),
+ 			(unsigned long)(memblock_phys_mem_size() >> 20));
+ 
+-	memblock_reserve(crashk_res.start, crash_size);
++	if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
++	    memblock_reserve(crashk_res.start, crash_size)) {
++		pr_err("Failed to reserve memory for crashkernel!\n");
++		crashk_res.start = crashk_res.end = 0;
++		return;
++	}
+ }
+ 
+ int overlaps_crashkernel(unsigned long start, unsigned long size)
+diff --git a/arch/powerpc/lib/checksum_64.S b/arch/powerpc/lib/checksum_64.S
+index 886ed94b9c13..d05c8af4ac51 100644
+--- a/arch/powerpc/lib/checksum_64.S
++++ b/arch/powerpc/lib/checksum_64.S
+@@ -443,6 +443,9 @@ _GLOBAL(csum_ipv6_magic)
+ 	addc	r0, r8, r9
+ 	ld	r10, 0(r4)
+ 	ld	r11, 8(r4)
++#ifdef CONFIG_CPU_LITTLE_ENDIAN
++	rotldi	r5, r5, 8
++#endif
+ 	adde	r0, r0, r10
+ 	add	r5, r5, r7
+ 	adde	r0, r0, r11
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 35ac5422903a..b5a71baedbc2 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1452,7 +1452,8 @@ static struct timer_list topology_timer;
+ 
+ static void reset_topology_timer(void)
+ {
+-	mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
++	if (vphn_enabled)
++		mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+ }
+ 
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index 0e7810ccd1ae..c18d17d830a1 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -44,7 +44,7 @@ static void scan_pkey_feature(void)
+ 	 * Since any pkey can be used for data or execute, we will just treat
+ 	 * all keys as equal and track them as one entity.
+ 	 */
+-	pkeys_total = be32_to_cpu(vals[0]);
++	pkeys_total = vals[0];
+ 	pkeys_devtree_defined = true;
+ }
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index a2cdf358a3ac..0976049d3365 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2841,7 +2841,7 @@ static long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ 	level_shift = entries_shift + 3;
+ 	level_shift = max_t(unsigned, level_shift, PAGE_SHIFT);
+ 
+-	if ((level_shift - 3) * levels + page_shift >= 60)
++	if ((level_shift - 3) * levels + page_shift >= 55)
+ 		return -EINVAL;
+ 
+ 	/* Allocate TCE table */
+diff --git a/arch/s390/kernel/sysinfo.c b/arch/s390/kernel/sysinfo.c
+index 54f5496913fa..12f80d1f0415 100644
+--- a/arch/s390/kernel/sysinfo.c
++++ b/arch/s390/kernel/sysinfo.c
+@@ -59,6 +59,8 @@ int stsi(void *sysinfo, int fc, int sel1, int sel2)
+ }
+ EXPORT_SYMBOL(stsi);
+ 
++#ifdef CONFIG_PROC_FS
++
+ static bool convert_ext_name(unsigned char encoding, char *name, size_t len)
+ {
+ 	switch (encoding) {
+@@ -301,6 +303,8 @@ static int __init sysinfo_create_proc(void)
+ }
+ device_initcall(sysinfo_create_proc);
+ 
++#endif /* CONFIG_PROC_FS */
++
+ /*
+  * Service levels interface.
+  */
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 6ad15d3fab81..84111a43ea29 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -80,7 +80,7 @@ struct qin64 {
+ struct dcss_segment {
+ 	struct list_head list;
+ 	char dcss_name[8];
+-	char res_name[15];
++	char res_name[16];
+ 	unsigned long start_addr;
+ 	unsigned long end;
+ 	atomic_t ref_count;
+@@ -433,7 +433,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ 	memcpy(&seg->res_name, seg->dcss_name, 8);
+ 	EBCASC(seg->res_name, 8);
+ 	seg->res_name[8] = '\0';
+-	strncat(seg->res_name, " (DCSS)", 7);
++	strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ 	seg->res->name = seg->res_name;
+ 	rc = seg->vm_segtype;
+ 	if (rc == SEG_TYPE_SC ||
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index e3bd5627afef..76d89ee8b428 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -28,7 +28,7 @@ static struct ctl_table page_table_sysctl[] = {
+ 		.data		= &page_table_allocate_pgste,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= S_IRUGO | S_IWUSR,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= &page_table_allocate_pgste_min,
+ 		.extra2		= &page_table_allocate_pgste_max,
+ 	},
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 8ae7ffda8f98..0ab33af41fbd 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -92,7 +92,7 @@ END(native_usergs_sysret64)
+ .endm
+ 
+ .macro TRACE_IRQS_IRETQ_DEBUG
+-	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* interrupts off? */
+ 	jnc	1f
+ 	TRACE_IRQS_ON_DEBUG
+ 1:
+@@ -701,7 +701,7 @@ retint_kernel:
+ #ifdef CONFIG_PREEMPT
+ 	/* Interrupts are off */
+ 	/* Check if we need preemption */
+-	bt	$9, EFLAGS(%rsp)		/* were interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
+ 	jnc	1f
+ 0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
+ 	jnz	1f
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index cf372b90557e..a4170048a30b 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -346,7 +346,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = task_ctx->tos;
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < task_ctx->valid_lbrs; i++) {
+ 		lbr_idx = (tos - i) & mask;
+ 		wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
+ 		wrlbr_to  (lbr_idx, task_ctx->lbr_to[i]);
+@@ -354,6 +354,15 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++
++	for (; i < x86_pmu.lbr_nr; i++) {
++		lbr_idx = (tos - i) & mask;
++		wrlbr_from(lbr_idx, 0);
++		wrlbr_to(lbr_idx, 0);
++		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
++			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
++	}
++
+ 	wrmsrl(x86_pmu.lbr_tos, tos);
+ 	task_ctx->lbr_stack_state = LBR_NONE;
+ }
+@@ -361,7 +370,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ {
+ 	unsigned lbr_idx, mask;
+-	u64 tos;
++	u64 tos, from;
+ 	int i;
+ 
+ 	if (task_ctx->lbr_callstack_users == 0) {
+@@ -371,13 +380,17 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = intel_pmu_lbr_tos();
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < x86_pmu.lbr_nr; i++) {
+ 		lbr_idx = (tos - i) & mask;
+-		task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
++		from = rdlbr_from(lbr_idx);
++		if (!from)
++			break;
++		task_ctx->lbr_from[i] = from;
+ 		task_ctx->lbr_to[i]   = rdlbr_to(lbr_idx);
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++	task_ctx->valid_lbrs = i;
+ 	task_ctx->tos = tos;
+ 	task_ctx->lbr_stack_state = LBR_VALID;
+ }
+@@ -531,7 +544,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
+  */
+ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ {
+-	bool need_info = false;
++	bool need_info = false, call_stack = false;
+ 	unsigned long mask = x86_pmu.lbr_nr - 1;
+ 	int lbr_format = x86_pmu.intel_cap.lbr_format;
+ 	u64 tos = intel_pmu_lbr_tos();
+@@ -542,7 +555,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 	if (cpuc->lbr_sel) {
+ 		need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
+ 		if (cpuc->lbr_sel->config & LBR_CALL_STACK)
+-			num = tos;
++			call_stack = true;
+ 	}
+ 
+ 	for (i = 0; i < num; i++) {
+@@ -555,6 +568,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 		from = rdlbr_from(lbr_idx);
+ 		to   = rdlbr_to(lbr_idx);
+ 
++		/*
++		 * Read LBR call stack entries
++		 * until invalid entry (0s) is detected.
++		 */
++		if (call_stack && !from)
++			break;
++
+ 		if (lbr_format == LBR_FORMAT_INFO && need_info) {
+ 			u64 info;
+ 
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 9f3711470ec1..6b72a92069fd 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -648,6 +648,7 @@ struct x86_perf_task_context {
+ 	u64 lbr_to[MAX_LBR_ENTRIES];
+ 	u64 lbr_info[MAX_LBR_ENTRIES];
+ 	int tos;
++	int valid_lbrs;
+ 	int lbr_callstack_users;
+ 	int lbr_stack_state;
+ };
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index e203169931c7..6390bd8c141b 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -14,6 +14,16 @@
+ #ifndef _ASM_X86_FIXMAP_H
+ #define _ASM_X86_FIXMAP_H
+ 
++/*
++ * Exposed to assembly code for setting up initial page tables. Cannot be
++ * calculated in assembly code (fixmap entries are an enum), but is sanity
++ * checked in the actual fixmap C code to make sure that the fixmap is
++ * covered fully.
++ */
++#define FIXMAP_PMD_NUM	2
++/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
++#define FIXMAP_PMD_TOP	507
++
+ #ifndef __ASSEMBLY__
+ #include <linux/kernel.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 82ff20b0ae45..20127d551ab5 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -14,6 +14,7 @@
+ #include <asm/processor.h>
+ #include <linux/bitops.h>
+ #include <linux/threads.h>
++#include <asm/fixmap.h>
+ 
+ extern p4d_t level4_kernel_pgt[512];
+ extern p4d_t level4_ident_pgt[512];
+@@ -22,7 +23,7 @@ extern pud_t level3_ident_pgt[512];
+ extern pmd_t level2_kernel_pgt[512];
+ extern pmd_t level2_fixmap_pgt[512];
+ extern pmd_t level2_ident_pgt[512];
+-extern pte_t level1_fixmap_pgt[512];
++extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
+ extern pgd_t init_top_pgt[];
+ 
+ #define swapper_pg_dir init_top_pgt
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 8047379e575a..11455200ae66 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -35,6 +35,7 @@
+ #include <asm/bootparam_utils.h>
+ #include <asm/microcode.h>
+ #include <asm/kasan.h>
++#include <asm/fixmap.h>
+ 
+ /*
+  * Manage page tables very early on.
+@@ -165,7 +166,8 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	pud[511] += load_delta;
+ 
+ 	pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
+-	pmd[506] += load_delta;
++	for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
++		pmd[i] += load_delta;
+ 
+ 	/*
+ 	 * Set up the identity mapping for the switchover.  These
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 8344dd2f310a..6bc215c15ce0 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -24,6 +24,7 @@
+ #include "../entry/calling.h"
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
++#include <asm/fixmap.h>
+ 
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/asm-offsets.h>
+@@ -445,13 +446,20 @@ NEXT_PAGE(level2_kernel_pgt)
+ 		KERNEL_IMAGE_SIZE/PMD_SIZE)
+ 
+ NEXT_PAGE(level2_fixmap_pgt)
+-	.fill	506,8,0
+-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+-	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
+-	.fill	5,8,0
++	.fill	(512 - 4 - FIXMAP_PMD_NUM),8,0
++	pgtno = 0
++	.rept (FIXMAP_PMD_NUM)
++	.quad level1_fixmap_pgt + (pgtno << PAGE_SHIFT) - __START_KERNEL_map \
++		+ _PAGE_TABLE_NOENC;
++	pgtno = pgtno + 1
++	.endr
++	/* 6 MB reserved space + a 2MB hole */
++	.fill	4,8,0
+ 
+ NEXT_PAGE(level1_fixmap_pgt)
++	.rept (FIXMAP_PMD_NUM)
+ 	.fill	512,8,0
++	.endr
+ 
+ #undef PMDS
+ 
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 19afdbd7d0a7..5532d1be7687 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -12,6 +12,7 @@
+ #include <asm/setup.h>
+ #include <asm/apic.h>
+ #include <asm/param.h>
++#include <asm/tsc.h>
+ 
+ #define MAX_NUM_FREQS	9
+ 
+diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
+index 34a2a3bfde9c..22cbad56acab 100644
+--- a/arch/x86/mm/numa_emulation.c
++++ b/arch/x86/mm/numa_emulation.c
+@@ -61,7 +61,7 @@ static int __init emu_setup_memblk(struct numa_meminfo *ei,
+ 	eb->nid = nid;
+ 
+ 	if (emu_nid_to_phys[nid] == NUMA_NO_NODE)
+-		emu_nid_to_phys[nid] = nid;
++		emu_nid_to_phys[nid] = pb->nid;
+ 
+ 	pb->start += size;
+ 	if (pb->start >= pb->end) {
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index e3deefb891da..a300ffeece9b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -577,6 +577,15 @@ void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
+ {
+ 	unsigned long address = __fix_to_virt(idx);
+ 
++#ifdef CONFIG_X86_64
++       /*
++	* Ensure that the static initial page tables are covering the
++	* fixmap completely.
++	*/
++	BUILD_BUG_ON(__end_of_permanent_fixed_addresses >
++		     (FIXMAP_PMD_NUM * PTRS_PER_PTE));
++#endif
++
+ 	if (idx >= __end_of_fixed_addresses) {
+ 		BUG();
+ 		return;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 1d2106d83b4e..019da252a04f 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -239,7 +239,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+  *
+  * Returns a pointer to a PTE on success, or NULL on failure.
+  */
+-static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ 	pmd_t *pmd;
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 071d82ec9abb..2473eaca3468 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1908,7 +1908,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	/* L3_k[511] -> level2_fixmap_pgt */
+ 	convert_pfn_mfn(level3_kernel_pgt);
+ 
+-	/* L3_k[511][506] -> level1_fixmap_pgt */
++	/* L3_k[511][508-FIXMAP_PMD_NUM ... 507] -> level1_fixmap_pgt */
+ 	convert_pfn_mfn(level2_fixmap_pgt);
+ 
+ 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
+@@ -1953,7 +1953,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+-	set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
++
++	for (i = 0; i < FIXMAP_PMD_NUM; i++) {
++		set_page_prot(level1_fixmap_pgt + i * PTRS_PER_PTE,
++			      PAGE_KERNEL_RO);
++	}
+ 
+ 	/* Pin down new L4 */
+ 	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+diff --git a/block/elevator.c b/block/elevator.c
+index fa828b5bfd4b..89a48a3a8c12 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -609,7 +609,7 @@ void elv_drain_elevator(struct request_queue *q)
+ 
+ 	while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
+ 		;
+-	if (q->nr_sorted && printed++ < 10) {
++	if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
+ 		printk(KERN_ERR "%s: forced dispatching is broken "
+ 		       "(nr_sorted=%u), please report this\n",
+ 		       q->elevator->type->elevator_name, q->nr_sorted);
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index 4ee7c041bb82..8882e90e868e 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -368,6 +368,7 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+@@ -442,6 +443,7 @@ static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "givcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<built-in>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 77b5fa293f66..f93abf13b5d4 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -510,6 +510,7 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_blkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 2345a5ee2dbb..40ed3ec9fc94 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -235,9 +235,6 @@ static int acpi_lid_notify_state(struct acpi_device *device, int state)
+ 		button->last_time = ktime_get();
+ 	}
+ 
+-	if (state)
+-		acpi_pm_wakeup_event(&device->dev);
+-
+ 	ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
+ 	if (ret == NOTIFY_DONE)
+ 		ret = blocking_notifier_call_chain(&acpi_lid_notifier, state,
+@@ -366,7 +363,8 @@ int acpi_lid_open(void)
+ }
+ EXPORT_SYMBOL(acpi_lid_open);
+ 
+-static int acpi_lid_update_state(struct acpi_device *device)
++static int acpi_lid_update_state(struct acpi_device *device,
++				 bool signal_wakeup)
+ {
+ 	int state;
+ 
+@@ -374,6 +372,9 @@ static int acpi_lid_update_state(struct acpi_device *device)
+ 	if (state < 0)
+ 		return state;
+ 
++	if (state && signal_wakeup)
++		acpi_pm_wakeup_event(&device->dev);
++
+ 	return acpi_lid_notify_state(device, state);
+ }
+ 
+@@ -384,7 +385,7 @@ static void acpi_lid_initialize_state(struct acpi_device *device)
+ 		(void)acpi_lid_notify_state(device, 1);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_METHOD:
+-		(void)acpi_lid_update_state(device);
++		(void)acpi_lid_update_state(device, false);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_IGNORE:
+ 	default:
+@@ -409,7 +410,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ 			users = button->input->users;
+ 			mutex_unlock(&button->input->mutex);
+ 			if (users)
+-				acpi_lid_update_state(device);
++				acpi_lid_update_state(device, true);
+ 		} else {
+ 			int keycode;
+ 
+diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
+index 5d4b72e21161..569a4a662dcd 100644
+--- a/drivers/ata/pata_ftide010.c
++++ b/drivers/ata/pata_ftide010.c
+@@ -256,14 +256,12 @@ static struct ata_port_operations pata_ftide010_port_ops = {
+ 	.qc_issue	= ftide010_qc_issue,
+ };
+ 
+-static struct ata_port_info ftide010_port_info[] = {
+-	{
+-		.flags		= ATA_FLAG_SLAVE_POSS,
+-		.mwdma_mask	= ATA_MWDMA2,
+-		.udma_mask	= ATA_UDMA6,
+-		.pio_mask	= ATA_PIO4,
+-		.port_ops	= &pata_ftide010_port_ops,
+-	},
++static struct ata_port_info ftide010_port_info = {
++	.flags		= ATA_FLAG_SLAVE_POSS,
++	.mwdma_mask	= ATA_MWDMA2,
++	.udma_mask	= ATA_UDMA6,
++	.pio_mask	= ATA_PIO4,
++	.port_ops	= &pata_ftide010_port_ops,
+ };
+ 
+ #if IS_ENABLED(CONFIG_SATA_GEMINI)
+@@ -349,6 +347,7 @@ static int pata_ftide010_gemini_cable_detect(struct ata_port *ap)
+ }
+ 
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	struct device *dev = ftide->dev;
+@@ -373,7 +372,13 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ 
+ 	/* Flag port as SATA-capable */
+ 	if (gemini_sata_bridge_enabled(sg, is_ata1))
+-		ftide010_port_info[0].flags |= ATA_FLAG_SATA;
++		pi->flags |= ATA_FLAG_SATA;
++
++	/* This device has broken DMA, only PIO works */
++	if (of_machine_is_compatible("itian,sq201")) {
++		pi->mwdma_mask = 0;
++		pi->udma_mask = 0;
++	}
+ 
+ 	/*
+ 	 * We assume that a simple 40-wire cable is used in the PATA mode.
+@@ -435,6 +440,7 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ }
+ #else
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	return -ENOTSUPP;
+@@ -446,7 +452,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+-	const struct ata_port_info pi = ftide010_port_info[0];
++	struct ata_port_info pi = ftide010_port_info;
+ 	const struct ata_port_info *ppi[] = { &pi, NULL };
+ 	struct ftide010 *ftide;
+ 	struct resource *res;
+@@ -490,6 +496,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ 		 * are ATA0. This will also set up the cable types.
+ 		 */
+ 		ret = pata_ftide010_gemini_init(ftide,
++				&pi,
+ 				(res->start == 0x63400000));
+ 		if (ret)
+ 			goto err_dis_clk;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8871b5044d9e..7d7c698c0213 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3470,6 +3470,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ 					  (struct floppy_struct **)&outparam);
+ 		if (ret)
+ 			return ret;
++		memcpy(&inparam.g, outparam,
++				offsetof(struct floppy_struct, name));
++		outparam = &inparam.g;
+ 		break;
+ 	case FDMSGON:
+ 		UDP->flags |= FTD_MSG;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f73a27ea28cc..75947f04fc75 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -374,6 +374,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8723DE Bluetooth devices */
++	{ USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8821AE Bluetooth devices */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 80d60f43db56..4576a1268e0e 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -490,32 +490,29 @@ static int sysc_check_registers(struct sysc *ddata)
+ 
+ /**
+  * syc_ioremap - ioremap register space for the interconnect target module
+- * @ddata: deviec driver data
++ * @ddata: device driver data
+  *
+  * Note that the interconnect target module registers can be anywhere
+- * within the first child device address space. For example, SGX has
+- * them at offset 0x1fc00 in the 32MB module address space. We just
+- * what we need around the interconnect target module registers.
++ * within the interconnect target module range. For example, SGX has
++ * them at offset 0x1fc00 in the 32MB module address space. And cpsw
++ * has them at offset 0x1200 in the CPSW_WR child. Usually the
++ * the interconnect target module registers are at the beginning of
++ * the module range though.
+  */
+ static int sysc_ioremap(struct sysc *ddata)
+ {
+-	u32 size = 0;
+-
+-	if (ddata->offsets[SYSC_SYSSTATUS] >= 0)
+-		size = ddata->offsets[SYSC_SYSSTATUS];
+-	else if (ddata->offsets[SYSC_SYSCONFIG] >= 0)
+-		size = ddata->offsets[SYSC_SYSCONFIG];
+-	else if (ddata->offsets[SYSC_REVISION] >= 0)
+-		size = ddata->offsets[SYSC_REVISION];
+-	else
+-		return -EINVAL;
++	int size;
+ 
+-	size &= 0xfff00;
+-	size += SZ_256;
++	size = max3(ddata->offsets[SYSC_REVISION],
++		    ddata->offsets[SYSC_SYSCONFIG],
++		    ddata->offsets[SYSC_SYSSTATUS]);
++
++	if (size < 0 || (size + sizeof(u32)) > ddata->module_size)
++		return -EINVAL;
+ 
+ 	ddata->module_va = devm_ioremap(ddata->dev,
+ 					ddata->module_pa,
+-					size);
++					size + sizeof(u32));
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+@@ -1178,10 +1175,10 @@ static int sysc_child_suspend_noirq(struct device *dev)
+ 	if (!pm_runtime_status_suspended(dev)) {
+ 		error = pm_generic_runtime_suspend(dev);
+ 		if (error) {
+-			dev_err(dev, "%s error at %i: %i\n",
+-				__func__, __LINE__, error);
++			dev_warn(dev, "%s busy at %i: %i\n",
++				 __func__, __LINE__, error);
+ 
+-			return error;
++			return 0;
+ 		}
+ 
+ 		error = sysc_runtime_suspend(ddata->dev);
+diff --git a/drivers/clk/x86/clk-st.c b/drivers/clk/x86/clk-st.c
+index fb62f3938008..3a0996f2d556 100644
+--- a/drivers/clk/x86/clk-st.c
++++ b/drivers/clk/x86/clk-st.c
+@@ -46,7 +46,7 @@ static int st_clk_probe(struct platform_device *pdev)
+ 		clk_oscout1_parents, ARRAY_SIZE(clk_oscout1_parents),
+ 		0, st_data->base + CLKDRVSTR2, OSCOUT1CLK25MHZ, 3, 0, NULL);
+ 
+-	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_25M]->clk);
++	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_48M]->clk);
+ 
+ 	hws[ST_CLK_GATE] = clk_hw_register_gate(NULL, "oscout1", "oscout1_mux",
+ 		0, st_data->base + MISCCLKCNTL1, OSCCLKENB,
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_dev.h b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+index 9a476bb6d4c7..af596455b420 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_dev.h
++++ b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+@@ -35,6 +35,7 @@ struct nitrox_cmdq {
+ 	/* requests in backlog queues */
+ 	atomic_t backlog_count;
+ 
++	int write_idx;
+ 	/* command size 32B/64B */
+ 	u8 instr_size;
+ 	u8 qno;
+@@ -87,7 +88,7 @@ struct nitrox_bh {
+ 	struct bh_data *slc;
+ };
+ 
+-/* NITROX-5 driver state */
++/* NITROX-V driver state */
+ #define NITROX_UCODE_LOADED	0
+ #define NITROX_READY		1
+ 
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_lib.c b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+index 4fdc921ba611..9906c0086647 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_lib.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+@@ -36,6 +36,7 @@ static int cmdq_common_init(struct nitrox_cmdq *cmdq)
+ 	cmdq->head = PTR_ALIGN(cmdq->head_unaligned, PKT_IN_ALIGN);
+ 	cmdq->dma = PTR_ALIGN(cmdq->dma_unaligned, PKT_IN_ALIGN);
+ 	cmdq->qsize = (qsize + PKT_IN_ALIGN);
++	cmdq->write_idx = 0;
+ 
+ 	spin_lock_init(&cmdq->response_lock);
+ 	spin_lock_init(&cmdq->cmdq_lock);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+index deaefd532aaa..4a362fc22f62 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+@@ -42,6 +42,16 @@
+  *   Invalid flag options in AES-CCM IV.
+  */
+ 
++static inline int incr_index(int index, int count, int max)
++{
++	if ((index + count) >= max)
++		index = index + count - max;
++	else
++		index += count;
++
++	return index;
++}
++
+ /**
+  * dma_free_sglist - unmap and free the sg lists.
+  * @ndev: N5 device
+@@ -426,30 +436,29 @@ static void post_se_instr(struct nitrox_softreq *sr,
+ 			  struct nitrox_cmdq *cmdq)
+ {
+ 	struct nitrox_device *ndev = sr->ndev;
+-	union nps_pkt_in_instr_baoff_dbell pkt_in_baoff_dbell;
+-	u64 offset;
++	int idx;
+ 	u8 *ent;
+ 
+ 	spin_lock_bh(&cmdq->cmdq_lock);
+ 
+-	/* get the next write offset */
+-	offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(cmdq->qno);
+-	pkt_in_baoff_dbell.value = nitrox_read_csr(ndev, offset);
++	idx = cmdq->write_idx;
+ 	/* copy the instruction */
+-	ent = cmdq->head + pkt_in_baoff_dbell.s.aoff;
++	ent = cmdq->head + (idx * cmdq->instr_size);
+ 	memcpy(ent, &sr->instr, cmdq->instr_size);
+-	/* flush the command queue updates */
+-	dma_wmb();
+ 
+-	sr->tstamp = jiffies;
+ 	atomic_set(&sr->status, REQ_POSTED);
+ 	response_list_add(sr, cmdq);
++	sr->tstamp = jiffies;
++	/* flush the command queue updates */
++	dma_wmb();
+ 
+ 	/* Ring doorbell with count 1 */
+ 	writeq(1, cmdq->dbell_csr_addr);
+ 	/* orders the doorbell rings */
+ 	mmiowb();
+ 
++	cmdq->write_idx = incr_index(idx, 1, ndev->qlen);
++
+ 	spin_unlock_bh(&cmdq->cmdq_lock);
+ }
+ 
+@@ -459,6 +468,9 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 	struct nitrox_softreq *sr, *tmp;
+ 	int ret = 0;
+ 
++	if (!atomic_read(&cmdq->backlog_count))
++		return 0;
++
+ 	spin_lock_bh(&cmdq->backlog_lock);
+ 
+ 	list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) {
+@@ -466,7 +478,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 
+ 		/* submit until space available */
+ 		if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+-			ret = -EBUSY;
++			ret = -ENOSPC;
+ 			break;
+ 		}
+ 		/* delete from backlog list */
+@@ -491,23 +503,20 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr)
+ {
+ 	struct nitrox_cmdq *cmdq = sr->cmdq;
+ 	struct nitrox_device *ndev = sr->ndev;
+-	int ret = -EBUSY;
++
++	/* try to post backlog requests */
++	post_backlog_cmds(cmdq);
+ 
+ 	if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+ 		if (!(sr->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+-			return -EAGAIN;
+-
++			return -ENOSPC;
++		/* add to backlog list */
+ 		backlog_list_add(sr, cmdq);
+-	} else {
+-		ret = post_backlog_cmds(cmdq);
+-		if (ret) {
+-			backlog_list_add(sr, cmdq);
+-			return ret;
+-		}
+-		post_se_instr(sr, cmdq);
+-		ret = -EINPROGRESS;
++		return -EBUSY;
+ 	}
+-	return ret;
++	post_se_instr(sr, cmdq);
++
++	return -EINPROGRESS;
+ }
+ 
+ /**
+@@ -624,11 +633,9 @@ int nitrox_process_se_request(struct nitrox_device *ndev,
+ 	 */
+ 	sr->instr.fdata[0] = *((u64 *)&req->gph);
+ 	sr->instr.fdata[1] = 0;
+-	/* flush the soft_req changes before posting the cmd */
+-	wmb();
+ 
+ 	ret = nitrox_enqueue_request(sr);
+-	if (ret == -EAGAIN)
++	if (ret == -ENOSPC)
+ 		goto send_fail;
+ 
+ 	return ret;
+diff --git a/drivers/crypto/chelsio/chtls/chtls.h b/drivers/crypto/chelsio/chtls/chtls.h
+index a53a0e6ba024..7725b6ee14ef 100644
+--- a/drivers/crypto/chelsio/chtls/chtls.h
++++ b/drivers/crypto/chelsio/chtls/chtls.h
+@@ -96,6 +96,10 @@ enum csk_flags {
+ 	CSK_CONN_INLINE,	/* Connection on HW */
+ };
+ 
++enum chtls_cdev_state {
++	CHTLS_CDEV_STATE_UP = 1
++};
++
+ struct listen_ctx {
+ 	struct sock *lsk;
+ 	struct chtls_dev *cdev;
+@@ -146,6 +150,7 @@ struct chtls_dev {
+ 	unsigned int send_page_order;
+ 	int max_host_sndbuf;
+ 	struct key_map kmap;
++	unsigned int cdev_state;
+ };
+ 
+ struct chtls_hws {
+diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c
+index 9b07f9165658..f59b044ebd25 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_main.c
++++ b/drivers/crypto/chelsio/chtls/chtls_main.c
+@@ -160,6 +160,7 @@ static void chtls_register_dev(struct chtls_dev *cdev)
+ 	tlsdev->hash = chtls_create_hash;
+ 	tlsdev->unhash = chtls_destroy_hash;
+ 	tls_register_device(&cdev->tlsdev);
++	cdev->cdev_state = CHTLS_CDEV_STATE_UP;
+ }
+ 
+ static void chtls_unregister_dev(struct chtls_dev *cdev)
+@@ -281,8 +282,10 @@ static void chtls_free_all_uld(void)
+ 	struct chtls_dev *cdev, *tmp;
+ 
+ 	mutex_lock(&cdev_mutex);
+-	list_for_each_entry_safe(cdev, tmp, &cdev_list, list)
+-		chtls_free_uld(cdev);
++	list_for_each_entry_safe(cdev, tmp, &cdev_list, list) {
++		if (cdev->cdev_state == CHTLS_CDEV_STATE_UP)
++			chtls_free_uld(cdev);
++	}
+ 	mutex_unlock(&cdev_mutex);
+ }
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index d0d5c4dbe097..5762c3c383f2 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -730,7 +730,8 @@ static int altr_s10_sdram_probe(struct platform_device *pdev)
+ 			 S10_DDR0_IRQ_MASK)) {
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "Error clearing SDRAM ECC count\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err2;
+ 	}
+ 
+ 	if (regmap_update_bits(drvdata->mc_vbase, priv->ecc_irq_en_offset,
+diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
+index 7481955160a4..20374b8248f0 100644
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -1075,14 +1075,14 @@ int __init edac_mc_sysfs_init(void)
+ 
+ 	err = device_add(mci_pdev);
+ 	if (err < 0)
+-		goto out_dev_free;
++		goto out_put_device;
+ 
+ 	edac_dbg(0, "device %s created\n", dev_name(mci_pdev));
+ 
+ 	return 0;
+ 
+- out_dev_free:
+-	kfree(mci_pdev);
++ out_put_device:
++	put_device(mci_pdev);
+  out:
+ 	return err;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8ed4dd9c571b..8e120bf60624 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1177,15 +1177,14 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 	rc = device_add(pvt->addrmatch_dev);
+ 	if (rc < 0)
+-		return rc;
++		goto err_put_addrmatch;
+ 
+ 	if (!pvt->is_registered) {
+ 		pvt->chancounts_dev = kzalloc(sizeof(*pvt->chancounts_dev),
+ 					      GFP_KERNEL);
+ 		if (!pvt->chancounts_dev) {
+-			put_device(pvt->addrmatch_dev);
+-			device_del(pvt->addrmatch_dev);
+-			return -ENOMEM;
++			rc = -ENOMEM;
++			goto err_del_addrmatch;
+ 		}
+ 
+ 		pvt->chancounts_dev->type = &all_channel_counts_type;
+@@ -1199,9 +1198,18 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 		rc = device_add(pvt->chancounts_dev);
+ 		if (rc < 0)
+-			return rc;
++			goto err_put_chancounts;
+ 	}
+ 	return 0;
++
++err_put_chancounts:
++	put_device(pvt->chancounts_dev);
++err_del_addrmatch:
++	device_del(pvt->addrmatch_dev);
++err_put_addrmatch:
++	put_device(pvt->addrmatch_dev);
++
++	return rc;
+ }
+ 
+ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+@@ -1211,11 +1219,11 @@ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+ 	edac_dbg(1, "\n");
+ 
+ 	if (!pvt->is_registered) {
+-		put_device(pvt->chancounts_dev);
+ 		device_del(pvt->chancounts_dev);
++		put_device(pvt->chancounts_dev);
+ 	}
+-	put_device(pvt->addrmatch_dev);
+ 	device_del(pvt->addrmatch_dev);
++	put_device(pvt->addrmatch_dev);
+ }
+ 
+ /****************************************************************************
+diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
+index e1037582e34d..b2635326546e 100644
+--- a/drivers/gpio/gpio-menz127.c
++++ b/drivers/gpio/gpio-menz127.c
+@@ -56,9 +56,9 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
+ 		rnd = fls(debounce) - 1;
+ 
+ 		if (rnd && (debounce & BIT(rnd - 1)))
+-			debounce = round_up(debounce, MEN_Z127_DB_MIN_US);
++			debounce = roundup(debounce, MEN_Z127_DB_MIN_US);
+ 		else
+-			debounce = round_down(debounce, MEN_Z127_DB_MIN_US);
++			debounce = rounddown(debounce, MEN_Z127_DB_MIN_US);
+ 
+ 		if (debounce > MEN_Z127_DB_MAX_US)
+ 			debounce = MEN_Z127_DB_MAX_US;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index d5d79727c55d..d9e4da146227 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -323,13 +323,6 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
+-	if (ret) {
+-		dev_err(tgi->dev,
+-			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
+-		return ret;
+-	}
+-
+ 	spin_lock_irqsave(&bank->lvl_lock[port], flags);
+ 
+ 	val = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
+@@ -342,6 +335,14 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, gpio), gpio, 0);
+ 	tegra_gpio_enable(tgi, gpio);
+ 
++	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
++	if (ret) {
++		dev_err(tgi->dev,
++			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
++		tegra_gpio_disable(tgi, gpio);
++		return ret;
++	}
++
+ 	if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
+ 		irq_set_handler_locked(d, handle_level_irq);
+ 	else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 5a196ec49be8..7200eea4f918 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -975,13 +975,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
+ 		if (r)
+ 			return r;
+ 
+-		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
+-			parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+-			if (!parser->ctx->preamble_presented) {
+-				parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+-				parser->ctx->preamble_presented = true;
+-			}
+-		}
++		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
++			parser->job->preamble_status |=
++				AMDGPU_PREAMBLE_IB_PRESENT;
+ 
+ 		if (parser->job->ring && parser->job->ring != ring)
+ 			return -EINVAL;
+@@ -1206,6 +1202,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 
+ 	amdgpu_cs_post_dependencies(p);
+ 
++	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
++	    !p->ctx->preamble_presented) {
++		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
++		p->ctx->preamble_presented = true;
++	}
++
+ 	cs->out.handle = seq;
+ 	job->uf_sequence = seq;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 7aaa263ad8c7..6b5d4a20860d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -164,8 +164,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 		return r;
+ 	}
+ 
++	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (ring->funcs->emit_pipeline_sync && job &&
+ 	    ((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
++	     (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+ 	     amdgpu_vm_need_pipeline_sync(ring, job))) {
+ 		need_pipe_sync = true;
+ 		dma_fence_put(tmp);
+@@ -196,7 +198,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	}
+ 
+ 	skip_preamble = ring->current_ctx == fence_ctx;
+-	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (job && ring->funcs->emit_cntxcntl) {
+ 		if (need_ctx_switch)
+ 			status |= AMDGPU_HAVE_CTX_SWITCH;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index fdcb498f6d19..c31fff32a321 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -123,6 +123,7 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
+ 	 * is validated on next vm use to avoid fault.
+ 	 * */
+ 	list_move_tail(&base->vm_status, &vm->evicted);
++	base->moved = true;
+ }
+ 
+ /**
+@@ -303,7 +304,6 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	uint64_t addr;
+ 	int r;
+ 
+-	addr = amdgpu_bo_gpu_offset(bo);
+ 	entries = amdgpu_bo_size(bo) / 8;
+ 
+ 	if (pte_support_ats) {
+@@ -335,6 +335,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	if (r)
+ 		goto error;
+ 
++	addr = amdgpu_bo_gpu_offset(bo);
+ 	if (ats_entries) {
+ 		uint64_t ats_value;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 818874b13c99..9057a5adb31b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -5614,6 +5614,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+ 
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->enter_safe_mode(adev);
+ 	switch (adev->asic_type) {
+ 	case CHIP_CARRIZO:
+ 	case CHIP_STONEY:
+@@ -5663,7 +5668,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	default:
+ 		break;
+ 	}
+-
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->exit_safe_mode(adev);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+index 7a1e77c93bf1..d8e469c594bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+@@ -1354,8 +1354,6 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
+ 		return ret;
+ 	}
+ 
+-	kv_update_current_ps(adev, adev->pm.dpm.boot_ps);
+-
+ 	if (adev->irq.installed &&
+ 	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
+ 		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
+@@ -3061,7 +3059,7 @@ static int kv_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 5c97a3671726..606f461dce49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -6887,7 +6887,6 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 
+ 	si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ 	si_thermal_start_thermal_controller(adev);
+-	ni_update_current_ps(adev, boot_ps);
+ 
+ 	return 0;
+ }
+@@ -7764,7 +7763,7 @@ static int si_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 88b09dd758ba..ca137757a69e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -133,7 +133,7 @@ static bool calculate_fb_and_fractional_fb_divider(
+ 	uint64_t feedback_divider;
+ 
+ 	feedback_divider =
+-		(uint64_t)(target_pix_clk_khz * ref_divider * post_divider);
++		(uint64_t)target_pix_clk_khz * ref_divider * post_divider;
+ 	feedback_divider *= 10;
+ 	/* additional factor, since we divide by 10 afterwards */
+ 	feedback_divider *= (uint64_t)(calc_pll_cs->fract_fb_divider_factor);
+@@ -145,8 +145,8 @@ static bool calculate_fb_and_fractional_fb_divider(
+  * of fractional feedback decimal point and the fractional FB Divider precision
+  * is 2 then the equation becomes (ullfeedbackDivider + 5*100) / (10*100))*/
+ 
+-	feedback_divider += (uint64_t)
+-			(5 * calc_pll_cs->fract_fb_divider_precision_factor);
++	feedback_divider += 5ULL *
++			    calc_pll_cs->fract_fb_divider_precision_factor;
+ 	feedback_divider =
+ 		div_u64(feedback_divider,
+ 			calc_pll_cs->fract_fb_divider_precision_factor * 10);
+@@ -203,8 +203,8 @@ static bool calc_fb_divider_checking_tolerance(
+ 			&fract_feedback_divider);
+ 
+ 	/*Actual calculated value*/
+-	actual_calc_clk_khz = (uint64_t)(feedback_divider *
+-					calc_pll_cs->fract_fb_divider_factor) +
++	actual_calc_clk_khz = (uint64_t)feedback_divider *
++					calc_pll_cs->fract_fb_divider_factor +
+ 							fract_feedback_divider;
+ 	actual_calc_clk_khz *= calc_pll_cs->ref_freq_khz;
+ 	actual_calc_clk_khz =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index c2037daa8e66..0efbf411667a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -239,6 +239,8 @@ void dml1_extract_rq_regs(
+ 	extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), rq_param.sizing.rq_l);
+ 	if (rq_param.yuv420)
+ 		extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), rq_param.sizing.rq_c);
++	else
++		memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
+ 
+ 	rq_regs->rq_regs_l.swath_height = dml_log2(rq_param.dlg.rq_l.swath_height);
+ 	rq_regs->rq_regs_c.swath_height = dml_log2(rq_param.dlg.rq_c.swath_height);
+diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+index b42e286616b0..84da7a5b84f3 100644
+--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
++++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+@@ -37,7 +37,9 @@ static int gem_show(struct seq_file *m, void *arg)
+ 		return ret;
+ 
+ 	seq_printf(m, "All Objects:\n");
++	mutex_lock(&priv->list_lock);
+ 	omap_gem_describe_objects(&priv->obj_list, m);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	mutex_unlock(&dev->struct_mutex);
+ 
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index ef3b0e3571ec..5fcf9eaf3eaf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -540,7 +540,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ 	priv->omaprev = soc ? (unsigned int)soc->data : 0;
+ 	priv->wq = alloc_ordered_workqueue("omapdrm", 0);
+ 
+-	spin_lock_init(&priv->list_lock);
++	mutex_init(&priv->list_lock);
+ 	INIT_LIST_HEAD(&priv->obj_list);
+ 
+ 	/* Allocate and initialize the DRM device. */
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
+index 6eaee4df4559..f27c8e216adf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.h
++++ b/drivers/gpu/drm/omapdrm/omap_drv.h
+@@ -71,7 +71,7 @@ struct omap_drm_private {
+ 	struct workqueue_struct *wq;
+ 
+ 	/* lock for obj_list below */
+-	spinlock_t list_lock;
++	struct mutex list_lock;
+ 
+ 	/* list of GEM objects: */
+ 	struct list_head obj_list;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index 17a53d207978..7a029b892a37 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1001,6 +1001,7 @@ int omap_gem_resume(struct drm_device *dev)
+ 	struct omap_gem_object *omap_obj;
+ 	int ret = 0;
+ 
++	mutex_lock(&priv->list_lock);
+ 	list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
+ 		if (omap_obj->block) {
+ 			struct drm_gem_object *obj = &omap_obj->base;
+@@ -1012,12 +1013,14 @@ int omap_gem_resume(struct drm_device *dev)
+ 					omap_obj->roll, true);
+ 			if (ret) {
+ 				dev_err(dev->dev, "could not repin: %d\n", ret);
+-				return ret;
++				goto done;
+ 			}
+ 		}
+ 	}
+ 
+-	return 0;
++done:
++	mutex_unlock(&priv->list_lock);
++	return ret;
+ }
+ #endif
+ 
+@@ -1085,9 +1088,9 @@ void omap_gem_free_object(struct drm_gem_object *obj)
+ 
+ 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_del(&omap_obj->mm_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	/* this means the object is still pinned.. which really should
+ 	 * not happen.  I think..
+@@ -1206,9 +1209,9 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
+ 			goto err_release;
+ 	}
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_add(&omap_obj->mm_list, &priv->obj_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	return obj;
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 50d19605c38f..e15fa2389e3f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -283,7 +283,6 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 		remote = of_graph_get_remote_port_parent(ep);
+ 		if (!remote) {
+ 			DRM_DEBUG_DRIVER("Error retrieving the output node\n");
+-			of_node_put(remote);
+ 			continue;
+ 		}
+ 
+@@ -297,11 +296,13 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 
+ 			if (of_graph_parse_endpoint(ep, &endpoint)) {
+ 				DRM_DEBUG_DRIVER("Couldn't parse endpoint\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 
+ 			if (!endpoint.id) {
+ 				DRM_DEBUG_DRIVER("Endpoint is our panel... skipping\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 5a52fc489a9d..966688f04741 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -477,13 +477,15 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 			dev_err(dev, "Couldn't create the PHY clock\n");
+ 			goto err_put_clk_pll0;
+ 		}
++
++		clk_prepare_enable(phy->clk_phy);
+ 	}
+ 
+ 	phy->rst_phy = of_reset_control_get_shared(node, "phy");
+ 	if (IS_ERR(phy->rst_phy)) {
+ 		dev_err(dev, "Could not get phy reset control\n");
+ 		ret = PTR_ERR(phy->rst_phy);
+-		goto err_put_clk_pll0;
++		goto err_disable_clk_phy;
+ 	}
+ 
+ 	ret = reset_control_deassert(phy->rst_phy);
+@@ -514,6 +516,8 @@ err_deassert_rst_phy:
+ 	reset_control_assert(phy->rst_phy);
+ err_put_rst_phy:
+ 	reset_control_put(phy->rst_phy);
++err_disable_clk_phy:
++	clk_disable_unprepare(phy->clk_phy);
+ err_put_clk_pll0:
+ 	if (phy->variant->has_phy_clk)
+ 		clk_put(phy->clk_pll0);
+@@ -531,6 +535,7 @@ void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
+ 
+ 	clk_disable_unprepare(phy->clk_mod);
+ 	clk_disable_unprepare(phy->clk_bus);
++	clk_disable_unprepare(phy->clk_phy);
+ 
+ 	reset_control_assert(phy->rst_phy);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a043ac3aae98..26005abd9c5d 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -85,6 +85,11 @@ struct v3d_dev {
+ 	 */
+ 	struct mutex reset_lock;
+ 
++	/* Lock taken when creating and pushing the GPU scheduler
++	 * jobs, to keep the sched-fence seqnos in order.
++	 */
++	struct mutex sched_lock;
++
+ 	struct {
+ 		u32 num_allocated;
+ 		u32 pages_allocated;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b513f9189caf..269fe16379c0 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -550,6 +550,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	if (ret)
+ 		goto fail;
+ 
++	mutex_lock(&v3d->sched_lock);
+ 	if (exec->bin.start != exec->bin.end) {
+ 		ret = drm_sched_job_init(&exec->bin.base,
+ 					 &v3d->queue[V3D_BIN].sched,
+@@ -576,6 +577,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	kref_get(&exec->refcount); /* put by scheduler job completion */
+ 	drm_sched_entity_push_job(&exec->render.base,
+ 				  &v3d_priv->sched_entity[V3D_RENDER]);
++	mutex_unlock(&v3d->sched_lock);
+ 
+ 	v3d_attach_object_fences(exec);
+ 
+@@ -594,6 +596,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	return 0;
+ 
+ fail_unreserve:
++	mutex_unlock(&v3d->sched_lock);
+ 	v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);
+ fail:
+ 	v3d_exec_put(exec);
+@@ -615,6 +618,7 @@ v3d_gem_init(struct drm_device *dev)
+ 	spin_lock_init(&v3d->job_lock);
+ 	mutex_init(&v3d->bo_lock);
+ 	mutex_init(&v3d->reset_lock);
++	mutex_init(&v3d->sched_lock);
+ 
+ 	/* Note: We don't allocate address 0.  Various bits of HW
+ 	 * treat 0 as special, such as the occlusion query counters
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index cf5aea1d6488..203ddf5723e8 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -543,6 +543,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 	/* Control word */
+ 	vc4_dlist_write(vc4_state,
+ 			SCALER_CTL0_VALID |
++			VC4_SET_FIELD(SCALER_CTL0_RGBA_EXPAND_ROUND, SCALER_CTL0_RGBA_EXPAND) |
+ 			(format->pixel_order << SCALER_CTL0_ORDER_SHIFT) |
+ 			(format->hvs << SCALER_CTL0_PIXEL_FORMAT_SHIFT) |
+ 			VC4_SET_FIELD(tiling, SCALER_CTL0_TILING) |
+@@ -874,7 +875,9 @@ static bool vc4_format_mod_supported(struct drm_plane *plane,
+ 	case DRM_FORMAT_YUV420:
+ 	case DRM_FORMAT_YVU420:
+ 	case DRM_FORMAT_NV12:
++	case DRM_FORMAT_NV21:
+ 	case DRM_FORMAT_NV16:
++	case DRM_FORMAT_NV61:
+ 	default:
+ 		return (modifier == DRM_FORMAT_MOD_LINEAR);
+ 	}
+diff --git a/drivers/hid/hid-ntrig.c b/drivers/hid/hid-ntrig.c
+index 43b1c7234316..9bc6f4867cb3 100644
+--- a/drivers/hid/hid-ntrig.c
++++ b/drivers/hid/hid-ntrig.c
+@@ -955,6 +955,8 @@ static int ntrig_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	ret = sysfs_create_group(&hdev->dev.kobj,
+ 			&ntrig_attribute_group);
++	if (ret)
++		hid_err(hdev, "cannot create sysfs group\n");
+ 
+ 	return 0;
+ err_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 5fd1159fc095..64773433b947 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1004,18 +1004,18 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		return client->irq;
+ 	}
+ 
+-	ihid = kzalloc(sizeof(struct i2c_hid), GFP_KERNEL);
++	ihid = devm_kzalloc(&client->dev, sizeof(*ihid), GFP_KERNEL);
+ 	if (!ihid)
+ 		return -ENOMEM;
+ 
+ 	if (client->dev.of_node) {
+ 		ret = i2c_hid_of_probe(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else if (!platform_data) {
+ 		ret = i2c_hid_acpi_pdata(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else {
+ 		ihid->pdata = *platform_data;
+ 	}
+@@ -1128,7 +1128,6 @@ err_regulator:
+ 
+ err:
+ 	i2c_hid_free_buffers(ihid);
+-	kfree(ihid);
+ 	return ret;
+ }
+ 
+@@ -1152,8 +1151,6 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 
+ 	regulator_disable(ihid->pdata.supply);
+ 
+-	kfree(ihid);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 9ef84998c7f3..37db2eb66ed7 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -303,14 +303,18 @@ static inline u16 volt2reg(int channel, long volt, u8 bypass_attn)
+ 	return clamp_val(reg, 0, 1023) & (0xff << 2);
+ }
+ 
+-static u16 adt7475_read_word(struct i2c_client *client, int reg)
++static int adt7475_read_word(struct i2c_client *client, int reg)
+ {
+-	u16 val;
++	int val1, val2;
+ 
+-	val = i2c_smbus_read_byte_data(client, reg);
+-	val |= (i2c_smbus_read_byte_data(client, reg + 1) << 8);
++	val1 = i2c_smbus_read_byte_data(client, reg);
++	if (val1 < 0)
++		return val1;
++	val2 = i2c_smbus_read_byte_data(client, reg + 1);
++	if (val2 < 0)
++		return val2;
+ 
+-	return val;
++	return val1 | (val2 << 8);
+ }
+ 
+ static void adt7475_write_word(struct i2c_client *client, int reg, u16 val)
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index e9e6aeabbf84..71d3445ba869 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -17,7 +17,7 @@
+  * Bi-directional Current/Power Monitor with I2C Interface
+  * Datasheet: http://www.ti.com/product/ina230
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  * Thanks to Jan Volkering
+  *
+  * This program is free software; you can redistribute it and/or modify
+@@ -329,6 +329,15 @@ static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+ 	return 0;
+ }
+ 
++static ssize_t ina2xx_show_shunt(struct device *dev,
++			      struct device_attribute *da,
++			      char *buf)
++{
++	struct ina2xx_data *data = dev_get_drvdata(dev);
++
++	return snprintf(buf, PAGE_SIZE, "%li\n", data->rshunt);
++}
++
+ static ssize_t ina2xx_store_shunt(struct device *dev,
+ 				  struct device_attribute *da,
+ 				  const char *buf, size_t count)
+@@ -403,7 +412,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
+ 
+ /* shunt resistance */
+ static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
+-			  ina2xx_show_value, ina2xx_store_shunt,
++			  ina2xx_show_shunt, ina2xx_store_shunt,
+ 			  INA2XX_CALIBRATION);
+ 
+ /* update interval (ina226 only) */
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index da962aa2cef5..fc6b7f8b62fb 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -139,7 +139,8 @@ static int intel_th_remove(struct device *dev)
+ 			th->thdev[i] = NULL;
+ 		}
+ 
+-		th->num_thdevs = lowest;
++		if (lowest >= 0)
++			th->num_thdevs = lowest;
+ 	}
+ 
+ 	if (thdrv->attr_group)
+@@ -487,7 +488,7 @@ static const struct intel_th_subdevice {
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+ 			{
+-				.start	= TH_MMIO_SW,
++				.start	= 1, /* use resource[1] */
+ 				.end	= 0,
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+@@ -580,6 +581,7 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 	struct intel_th_device *thdev;
+ 	struct resource res[3];
+ 	unsigned int req = 0;
++	bool is64bit = false;
+ 	int r, err;
+ 
+ 	thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
+@@ -589,12 +591,18 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 
+ 	thdev->drvdata = th->drvdata;
+ 
++	for (r = 0; r < th->num_resources; r++)
++		if (th->resource[r].flags & IORESOURCE_MEM_64) {
++			is64bit = true;
++			break;
++		}
++
+ 	memcpy(res, subdev->res,
+ 	       sizeof(struct resource) * subdev->nres);
+ 
+ 	for (r = 0; r < subdev->nres; r++) {
+ 		struct resource *devres = th->resource;
+-		int bar = TH_MMIO_CONFIG;
++		int bar = 0; /* cut subdevices' MMIO from resource[0] */
+ 
+ 		/*
+ 		 * Take .end == 0 to mean 'take the whole bar',
+@@ -603,6 +611,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 		 */
+ 		if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
+ 			bar = res[r].start;
++			if (is64bit)
++				bar *= 2;
+ 			res[r].start = 0;
+ 			res[r].end = resource_size(&devres[bar]) - 1;
+ 		}
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 45fcf0c37a9e..2806cdeda053 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1417,6 +1417,13 @@ static void i801_add_tco(struct i801_priv *priv)
+ }
+ 
+ #ifdef CONFIG_ACPI
++static bool i801_acpi_is_smbus_ioport(const struct i801_priv *priv,
++				      acpi_physical_address address)
++{
++	return address >= priv->smba &&
++	       address <= pci_resource_end(priv->pci_dev, SMBBAR);
++}
++
+ static acpi_status
+ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 		     u64 *value, void *handler_context, void *region_context)
+@@ -1432,7 +1439,7 @@ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 	 */
+ 	mutex_lock(&priv->acpi_lock);
+ 
+-	if (!priv->acpi_reserved) {
++	if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) {
+ 		priv->acpi_reserved = true;
+ 
+ 		dev_warn(&pdev->dev, "BIOS is accessing SMBus registers\n");
+diff --git a/drivers/iio/accel/adxl345_core.c b/drivers/iio/accel/adxl345_core.c
+index 7251d0e63d74..98080e05ac6d 100644
+--- a/drivers/iio/accel/adxl345_core.c
++++ b/drivers/iio/accel/adxl345_core.c
+@@ -21,6 +21,8 @@
+ #define ADXL345_REG_DATAX0		0x32
+ #define ADXL345_REG_DATAY0		0x34
+ #define ADXL345_REG_DATAZ0		0x36
++#define ADXL345_REG_DATA_AXIS(index)	\
++	(ADXL345_REG_DATAX0 + (index) * sizeof(__le16))
+ 
+ #define ADXL345_POWER_CTL_MEASURE	BIT(3)
+ #define ADXL345_POWER_CTL_STANDBY	0x00
+@@ -47,19 +49,19 @@ struct adxl345_data {
+ 	u8 data_range;
+ };
+ 
+-#define ADXL345_CHANNEL(reg, axis) {					\
++#define ADXL345_CHANNEL(index, axis) {					\
+ 	.type = IIO_ACCEL,						\
+ 	.modified = 1,							\
+ 	.channel2 = IIO_MOD_##axis,					\
+-	.address = reg,							\
++	.address = index,						\
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),			\
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),		\
+ }
+ 
+ static const struct iio_chan_spec adxl345_channels[] = {
+-	ADXL345_CHANNEL(ADXL345_REG_DATAX0, X),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAY0, Y),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAZ0, Z),
++	ADXL345_CHANNEL(0, X),
++	ADXL345_CHANNEL(1, Y),
++	ADXL345_CHANNEL(2, Z),
+ };
+ 
+ static int adxl345_read_raw(struct iio_dev *indio_dev,
+@@ -67,7 +69,7 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct adxl345_data *data = iio_priv(indio_dev);
+-	__le16 regval;
++	__le16 accel;
+ 	int ret;
+ 
+ 	switch (mask) {
+@@ -77,12 +79,13 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 		 * ADXL345_REG_DATA(X0/Y0/Z0) contain the least significant byte
+ 		 * and ADXL345_REG_DATA(X0/Y0/Z0) + 1 the most significant byte
+ 		 */
+-		ret = regmap_bulk_read(data->regmap, chan->address, &regval,
+-				       sizeof(regval));
++		ret = regmap_bulk_read(data->regmap,
++				       ADXL345_REG_DATA_AXIS(chan->address),
++				       &accel, sizeof(accel));
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		*val = sign_extend32(le16_to_cpu(regval), 12);
++		*val = sign_extend32(le16_to_cpu(accel), 12);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = 0;
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index 0635a79864bf..d1239624187d 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/regmap.h>
++#include <linux/sched/task.h>
+ #include <linux/util_macros.h>
+ 
+ #include <linux/platform_data/ina2xx.h>
+@@ -826,6 +827,7 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ {
+ 	struct ina2xx_chip_info *chip = iio_priv(indio_dev);
+ 	unsigned int sampling_us = SAMPLING_PERIOD(chip);
++	struct task_struct *task;
+ 
+ 	dev_dbg(&indio_dev->dev, "Enabling buffer w/ scan_mask %02x, freq = %d, avg =%u\n",
+ 		(unsigned int)(*indio_dev->active_scan_mask),
+@@ -835,11 +837,17 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ 	dev_dbg(&indio_dev->dev, "Async readout mode: %d\n",
+ 		chip->allow_async_readout);
+ 
+-	chip->task = kthread_run(ina2xx_capture_thread, (void *)indio_dev,
+-				 "%s:%d-%uus", indio_dev->name, indio_dev->id,
+-				 sampling_us);
++	task = kthread_create(ina2xx_capture_thread, (void *)indio_dev,
++			      "%s:%d-%uus", indio_dev->name, indio_dev->id,
++			      sampling_us);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++
++	get_task_struct(task);
++	wake_up_process(task);
++	chip->task = task;
+ 
+-	return PTR_ERR_OR_ZERO(chip->task);
++	return 0;
+ }
+ 
+ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+@@ -848,6 +856,7 @@ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+ 
+ 	if (chip->task) {
+ 		kthread_stop(chip->task);
++		put_task_struct(chip->task);
+ 		chip->task = NULL;
+ 	}
+ 
+diff --git a/drivers/iio/counter/104-quad-8.c b/drivers/iio/counter/104-quad-8.c
+index b56985078d8c..4be85ec54af4 100644
+--- a/drivers/iio/counter/104-quad-8.c
++++ b/drivers/iio/counter/104-quad-8.c
+@@ -138,7 +138,7 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 			outb(val >> (8 * i), base_offset);
+ 
+ 		/* Reset Borrow, Carry, Compare, and Sign flags */
+-		outb(0x02, base_offset + 1);
++		outb(0x04, base_offset + 1);
+ 		/* Reset Error flag */
+ 		outb(0x06, base_offset + 1);
+ 
+diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
+index c8963e91f92a..3ee0adfb45e9 100644
+--- a/drivers/infiniband/core/rw.c
++++ b/drivers/infiniband/core/rw.c
+@@ -87,7 +87,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num,
+ 	}
+ 
+ 	ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE);
+-	if (ret < nents) {
++	if (ret < 0 || ret < nents) {
+ 		ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 583d3a10b940..0e5eb0f547d3 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2812,6 +2812,9 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources)
+ 		goto err_res;
+ 
++	if (!num_specs)
++		goto out;
++
+ 	resources->counters =
+ 		kcalloc(num_specs, sizeof(*resources->counters), GFP_KERNEL);
+ 
+@@ -2824,8 +2827,8 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources->collection)
+ 		goto err_collection;
+ 
++out:
+ 	resources->max = num_specs;
+-
+ 	return resources;
+ 
+ err_collection:
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 2094d136513d..92d8469e28f3 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -429,6 +429,7 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
+ 			list_del(&entry->obj_list);
+ 		kfree(entry);
+ 	}
++	file->ev_queue.is_closed = 1;
+ 	spin_unlock_irq(&file->ev_queue.lock);
+ 
+ 	uverbs_close_fd(filp);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 50d8f1fc98d5..e426b990c1dd 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2354,7 +2354,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		srq = qp->srq;
+ 		if (!srq)
+ 			return -EINVAL;
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2369,7 +2369,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2437,7 +2437,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		if (!srq)
+ 			return -EINVAL;
+ 
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2452,7 +2452,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2546,7 +2546,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 				"QPLIB: FP: SRQ used but not defined??");
+ 			return -EINVAL;
+ 		}
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2561,7 +2561,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 RQ wr_id ");
+ 			dev_err(&cq->hwq.pdev->dev,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 2f3f32eaa1d5..4097f3fa25c5 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -197,7 +197,7 @@ int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+ 			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+ 			struct bnxt_qplib_gid *gid)
+ {
+-	if (index > sgid_tbl->max) {
++	if (index >= sgid_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded SGID table max (%d)",
+ 			index, sgid_tbl->max);
+@@ -402,7 +402,7 @@ int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+ 		*pkey = 0xFFFF;
+ 		return 0;
+ 	}
+-	if (index > pkey_tbl->max) {
++	if (index >= pkey_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded PKEY table max (%d)",
+ 			index, pkey_tbl->max);
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 6deb101cdd43..b49351914feb 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -6733,6 +6733,7 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	struct hfi1_devdata *dd = ppd->dd;
+ 	struct send_context *sc;
+ 	int i;
++	int sc_flags;
+ 
+ 	if (flags & FREEZE_SELF)
+ 		write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK);
+@@ -6743,11 +6744,13 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	/* notify all SDMA engines that they are going into a freeze */
+ 	sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN));
+ 
++	sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ?
++					      SCF_LINK_DOWN : 0);
+ 	/* do halt pre-handling on all enabled send contexts */
+ 	for (i = 0; i < dd->num_send_contexts; i++) {
+ 		sc = dd->send_contexts[i].sc;
+ 		if (sc && (sc->flags & SCF_ENABLED))
+-			sc_stop(sc, SCF_FROZEN | SCF_HALTED);
++			sc_stop(sc, sc_flags);
+ 	}
+ 
+ 	/* Send context are frozen. Notify user space */
+@@ -10665,6 +10668,7 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
+ 		add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
+ 
+ 		handle_linkup_change(dd, 1);
++		pio_kernel_linkup(dd);
+ 
+ 		/*
+ 		 * After link up, a new link width will have been set.
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 9cac15d10c4f..81f7cd7abcc5 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -86,6 +86,7 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 	unsigned long flags;
+ 	int write = 1;	/* write sendctrl back */
+ 	int flush = 0;	/* re-read sendctrl to make sure it is flushed */
++	int i;
+ 
+ 	spin_lock_irqsave(&dd->sendctrl_lock, flags);
+ 
+@@ -95,9 +96,13 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 		reg |= SEND_CTRL_SEND_ENABLE_SMASK;
+ 	/* Fall through */
+ 	case PSC_DATA_VL_ENABLE:
++		mask = 0;
++		for (i = 0; i < ARRAY_SIZE(dd->vld); i++)
++			if (!dd->vld[i].mtu)
++				mask |= BIT_ULL(i);
+ 		/* Disallow sending on VLs not enabled */
+-		mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
+-				SEND_CTRL_UNSUPPORTED_VL_SHIFT;
++		mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
++			SEND_CTRL_UNSUPPORTED_VL_SHIFT;
+ 		reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask;
+ 		break;
+ 	case PSC_GLOBAL_DISABLE:
+@@ -921,20 +926,18 @@ void sc_free(struct send_context *sc)
+ void sc_disable(struct send_context *sc)
+ {
+ 	u64 reg;
+-	unsigned long flags;
+ 	struct pio_buf *pbuf;
+ 
+ 	if (!sc)
+ 		return;
+ 
+ 	/* do all steps, even if already disabled */
+-	spin_lock_irqsave(&sc->alloc_lock, flags);
++	spin_lock_irq(&sc->alloc_lock);
+ 	reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL));
+ 	reg &= ~SC(CTRL_CTXT_ENABLE_SMASK);
+ 	sc->flags &= ~SCF_ENABLED;
+ 	sc_wait_for_packet_egress(sc, 1);
+ 	write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg);
+-	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 
+ 	/*
+ 	 * Flush any waiters.  Once the context is disabled,
+@@ -944,7 +947,7 @@ void sc_disable(struct send_context *sc)
+ 	 * proceed with the flush.
+ 	 */
+ 	udelay(1);
+-	spin_lock_irqsave(&sc->release_lock, flags);
++	spin_lock(&sc->release_lock);
+ 	if (sc->sr) {	/* this context has a shadow ring */
+ 		while (sc->sr_tail != sc->sr_head) {
+ 			pbuf = &sc->sr[sc->sr_tail].pbuf;
+@@ -955,7 +958,8 @@ void sc_disable(struct send_context *sc)
+ 				sc->sr_tail = 0;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&sc->release_lock, flags);
++	spin_unlock(&sc->release_lock);
++	spin_unlock_irq(&sc->alloc_lock);
+ }
+ 
+ /* return SendEgressCtxtStatus.PacketOccupancy */
+@@ -1178,11 +1182,39 @@ void pio_kernel_unfreeze(struct hfi1_devdata *dd)
+ 		sc = dd->send_contexts[i].sc;
+ 		if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER)
+ 			continue;
++		if (sc->flags & SCF_LINK_DOWN)
++			continue;
+ 
+ 		sc_enable(sc);	/* will clear the sc frozen flag */
+ 	}
+ }
+ 
++/**
++ * pio_kernel_linkup() - Re-enable send contexts after linkup event
++ * @dd: valid device data
++ *
++ * When the link goes down, the freeze path is taken.  However, a link down
++ * event is different from a freeze because if the send context is re-enabled
++ * whoever is sending data will start sending data again, which will hang
++ * any QP that is sending data.
++ *
++ * The freeze path now looks at the type of event that occurs and takes this
++ * path for link down event.
++ */
++void pio_kernel_linkup(struct hfi1_devdata *dd)
++{
++	struct send_context *sc;
++	int i;
++
++	for (i = 0; i < dd->num_send_contexts; i++) {
++		sc = dd->send_contexts[i].sc;
++		if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER)
++			continue;
++
++		sc_enable(sc);	/* will clear the sc link down flag */
++	}
++}
++
+ /*
+  * Wait for the SendPioInitCtxt.PioInitInProgress bit to clear.
+  * Returns:
+@@ -1382,11 +1414,10 @@ void sc_stop(struct send_context *sc, int flag)
+ {
+ 	unsigned long flags;
+ 
+-	/* mark the context */
+-	sc->flags |= flag;
+-
+ 	/* stop buffer allocations */
+ 	spin_lock_irqsave(&sc->alloc_lock, flags);
++	/* mark the context */
++	sc->flags |= flag;
+ 	sc->flags &= ~SCF_ENABLED;
+ 	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 	wake_up(&sc->halt_wait);
+diff --git a/drivers/infiniband/hw/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h
+index 058b08f459ab..aaf372c3e5d6 100644
+--- a/drivers/infiniband/hw/hfi1/pio.h
++++ b/drivers/infiniband/hw/hfi1/pio.h
+@@ -139,6 +139,7 @@ struct send_context {
+ #define SCF_IN_FREE 0x02
+ #define SCF_HALTED  0x04
+ #define SCF_FROZEN  0x08
++#define SCF_LINK_DOWN 0x10
+ 
+ struct send_context_info {
+ 	struct send_context *sc;	/* allocated working context */
+@@ -306,6 +307,7 @@ void set_pio_integrity(struct send_context *sc);
+ void pio_reset_all(struct hfi1_devdata *dd);
+ void pio_freeze(struct hfi1_devdata *dd);
+ void pio_kernel_unfreeze(struct hfi1_devdata *dd);
++void pio_kernel_linkup(struct hfi1_devdata *dd);
+ 
+ /* global PIO send control operations */
+ #define PSC_GLOBAL_ENABLE 0
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index a3a7b33196d6..5c88706121c1 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -828,7 +828,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, unsigned maxpkts)
+ 			if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
+ 				if (++req->iov_idx == req->data_iovs) {
+ 					ret = -EFAULT;
+-					goto free_txreq;
++					goto free_tx;
+ 				}
+ 				iovec = &req->iovs[req->iov_idx];
+ 				WARN_ON(iovec->offset);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 08991874c0e2..a1040a142aac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1590,6 +1590,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	struct hfi1_pportdata *ppd;
+ 	struct hfi1_devdata *dd;
+ 	u8 sc5;
++	u8 sl;
+ 
+ 	if (hfi1_check_mcast(rdma_ah_get_dlid(ah_attr)) &&
+ 	    !(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH))
+@@ -1598,8 +1599,13 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	/* test the mapping for validity */
+ 	ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
+ 	ppd = ppd_from_ibp(ibp);
+-	sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
+ 	dd = dd_from_ppd(ppd);
++
++	sl = rdma_ah_get_sl(ah_attr);
++	if (sl >= ARRAY_SIZE(ibp->sl_to_sc))
++		return -EINVAL;
++
++	sc5 = ibp->sl_to_sc[sl];
+ 	if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
+ 		return -EINVAL;
+ 	return 0;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 68679ad4c6da..937899fea01d 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -1409,6 +1409,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 	struct vm_area_struct *vma;
+ 	struct hstate *h;
+ 
++	down_read(&current->mm->mmap_sem);
+ 	vma = find_vma(current->mm, addr);
+ 	if (vma && is_vm_hugetlb_page(vma)) {
+ 		h = hstate_vma(vma);
+@@ -1417,6 +1418,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 			iwmr->page_msk = huge_page_mask(h);
+ 		}
+ 	}
++	up_read(&current->mm->mmap_sem);
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 3b8045fd23ed..b94e33a56e97 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4047,9 +4047,9 @@ static void to_rdma_ah_attr(struct mlx4_ib_dev *ibdev,
+ 	u8 port_num = path->sched_queue & 0x40 ? 2 : 1;
+ 
+ 	memset(ah_attr, 0, sizeof(*ah_attr));
+-	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 	if (port_num == 0 || port_num > dev->caps.num_ports)
+ 		return;
++	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 
+ 	if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE)
+ 		rdma_ah_set_sl(ah_attr, ((path->sched_queue >> 3) & 0x7) |
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index cbeae4509359..85677afa6f77 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2699,7 +2699,7 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
+ 			 IPPROTO_GRE);
+ 
+ 		MLX5_SET(fte_match_set_misc, misc_params_c, gre_protocol,
+-			 0xffff);
++			 ntohs(ib_spec->gre.mask.protocol));
+ 		MLX5_SET(fte_match_set_misc, misc_params_v, gre_protocol,
+ 			 ntohs(ib_spec->gre.val.protocol));
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 9786b24b956f..2b8cc76bb77e 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -2954,7 +2954,7 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ {
+ 	struct srp_target_port *target = host_to_target(scmnd->device->host);
+ 	struct srp_rdma_ch *ch;
+-	int i;
++	int i, j;
+ 	u8 status;
+ 
+ 	shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
+@@ -2968,8 +2968,8 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ 
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+-		for (i = 0; i < target->req_ring_size; ++i) {
+-			struct srp_request *req = &ch->req_ring[i];
++		for (j = 0; j < target->req_ring_size; ++j) {
++			struct srp_request *req = &ch->req_ring[j];
+ 
+ 			srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
+ 		}
+diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
+index d91f3b1c5375..92d739649022 100644
+--- a/drivers/input/misc/xen-kbdfront.c
++++ b/drivers/input/misc/xen-kbdfront.c
+@@ -229,7 +229,7 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		}
+ 	}
+ 
+-	touch = xenbus_read_unsigned(dev->nodename,
++	touch = xenbus_read_unsigned(dev->otherend,
+ 				     XENKBD_FIELD_FEAT_MTOUCH, 0);
+ 	if (touch) {
+ 		ret = xenbus_write(XBT_NIL, dev->nodename,
+@@ -304,13 +304,13 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		if (!mtouch)
+ 			goto error_nomem;
+ 
+-		num_cont = xenbus_read_unsigned(info->xbdev->nodename,
++		num_cont = xenbus_read_unsigned(info->xbdev->otherend,
+ 						XENKBD_FIELD_MT_NUM_CONTACTS,
+ 						1);
+-		width = xenbus_read_unsigned(info->xbdev->nodename,
++		width = xenbus_read_unsigned(info->xbdev->otherend,
+ 					     XENKBD_FIELD_MT_WIDTH,
+ 					     XENFB_WIDTH);
+-		height = xenbus_read_unsigned(info->xbdev->nodename,
++		height = xenbus_read_unsigned(info->xbdev->otherend,
+ 					      XENKBD_FIELD_MT_HEIGHT,
+ 					      XENFB_HEIGHT);
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index dd85b16dc6f8..88564f729e93 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1178,6 +1178,8 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ static const char * const middle_button_pnp_ids[] = {
+ 	"LEN2131", /* ThinkPad P52 w/ NFC */
+ 	"LEN2132", /* ThinkPad P52 */
++	"LEN2133", /* ThinkPad P72 w/ NFC */
++	"LEN2134", /* ThinkPad P72 */
+ 	NULL
+ };
+ 
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 596b95c50051..d77c97fe4a23 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2405,9 +2405,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
+ 	}
+ 
+ 	if (amd_iommu_unmap_flush) {
+-		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 		domain_flush_tlb(&dma_dom->domain);
+ 		domain_flush_complete(&dma_dom->domain);
++		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 	} else {
+ 		pages = __roundup_pow_of_two(pages);
+ 		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 0d3350463a3f..9a95c9b9d0d8 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -395,20 +395,15 @@ static int msm_iommu_add_device(struct device *dev)
+ 	struct msm_iommu_dev *iommu;
+ 	struct iommu_group *group;
+ 	unsigned long flags;
+-	int ret = 0;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_link(&iommu->iommu, dev);
+ 	else
+-		ret = -ENODEV;
+-
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+-	if (ret)
+-		return ret;
++		return -ENODEV;
+ 
+ 	group = iommu_group_get_for_dev(dev);
+ 	if (IS_ERR(group))
+@@ -425,13 +420,12 @@ static void msm_iommu_remove_device(struct device *dev)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_unlink(&iommu->iommu, dev);
+ 
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+ 	iommu_group_remove_device(dev);
+ }
+ 
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 021cbf9ef1bf..1ac945f7a3c2 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_thread *thread)
+ 	while (cinfo->recovery_map) {
+ 		slot = fls64((u64)cinfo->recovery_map) - 1;
+ 
+-		/* Clear suspend_area associated with the bitmap */
+-		spin_lock_irq(&cinfo->suspend_lock);
+-		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+-			if (slot == s->slot) {
+-				list_del(&s->list);
+-				kfree(s);
+-			}
+-		spin_unlock_irq(&cinfo->suspend_lock);
+-
+ 		snprintf(str, 64, "bitmap%04d", slot);
+ 		bm_lockres = lockres_init(mddev, str, NULL, 1);
+ 		if (!bm_lockres) {
+@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_thread *thread)
+ 			pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
+ 			goto clear_bit;
+ 		}
++
++		/* Clear suspend_area associated with the bitmap */
++		spin_lock_irq(&cinfo->suspend_lock);
++		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
++			if (slot == s->slot) {
++				list_del(&s->list);
++				kfree(s);
++			}
++		spin_unlock_irq(&cinfo->suspend_lock);
++
+ 		if (hi > 0) {
+ 			if (lo < mddev->recovery_cp)
+ 				mddev->recovery_cp = lo;
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index e2550708abc8..3fdbe644648a 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -542,9 +542,19 @@ static struct ov772x_priv *to_ov772x(struct v4l2_subdev *sd)
+ 	return container_of(sd, struct ov772x_priv, subdev);
+ }
+ 
+-static inline int ov772x_read(struct i2c_client *client, u8 addr)
++static int ov772x_read(struct i2c_client *client, u8 addr)
+ {
+-	return i2c_smbus_read_byte_data(client, addr);
++	int ret;
++	u8 val;
++
++	ret = i2c_master_send(client, &addr, 1);
++	if (ret < 0)
++		return ret;
++	ret = i2c_master_recv(client, &val, 1);
++	if (ret < 0)
++		return ret;
++
++	return val;
+ }
+ 
+ static inline int ov772x_write(struct i2c_client *client, u8 addr, u8 value)
+@@ -1136,7 +1146,7 @@ static int ov772x_set_fmt(struct v4l2_subdev *sd,
+ static int ov772x_video_probe(struct ov772x_priv *priv)
+ {
+ 	struct i2c_client  *client = v4l2_get_subdevdata(&priv->subdev);
+-	u8                  pid, ver;
++	int		    pid, ver, midh, midl;
+ 	const char         *devname;
+ 	int		    ret;
+ 
+@@ -1146,7 +1156,11 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 
+ 	/* Check and show product ID and manufacturer ID. */
+ 	pid = ov772x_read(client, PID);
++	if (pid < 0)
++		return pid;
+ 	ver = ov772x_read(client, VER);
++	if (ver < 0)
++		return ver;
+ 
+ 	switch (VERSION(pid, ver)) {
+ 	case OV7720:
+@@ -1162,13 +1176,17 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 		goto done;
+ 	}
+ 
++	midh = ov772x_read(client, MIDH);
++	if (midh < 0)
++		return midh;
++	midl = ov772x_read(client, MIDL);
++	if (midl < 0)
++		return midl;
++
+ 	dev_info(&client->dev,
+ 		 "%s Product ID %0x:%0x Manufacturer ID %x:%x\n",
+-		 devname,
+-		 pid,
+-		 ver,
+-		 ov772x_read(client, MIDH),
+-		 ov772x_read(client, MIDL));
++		 devname, pid, ver, midh, midl);
++
+ 	ret = v4l2_ctrl_handler_setup(&priv->hdl);
+ 
+ done:
+@@ -1255,13 +1273,11 @@ static int ov772x_probe(struct i2c_client *client,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+-					      I2C_FUNC_PROTOCOL_MANGLING)) {
++	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) {
+ 		dev_err(&adapter->dev,
+-			"I2C-Adapter doesn't support SMBUS_BYTE_DATA or PROTOCOL_MANGLING\n");
++			"I2C-Adapter doesn't support SMBUS_BYTE_DATA\n");
+ 		return -EIO;
+ 	}
+-	client->flags |= I2C_CLIENT_SCCB;
+ 
+ 	priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+diff --git a/drivers/media/i2c/soc_camera/ov772x.c b/drivers/media/i2c/soc_camera/ov772x.c
+index 806383500313..14377af7c888 100644
+--- a/drivers/media/i2c/soc_camera/ov772x.c
++++ b/drivers/media/i2c/soc_camera/ov772x.c
+@@ -834,7 +834,7 @@ static int ov772x_set_params(struct ov772x_priv *priv,
+ 	 * set COM8
+ 	 */
+ 	if (priv->band_filter) {
+-		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, 1);
++		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, BNDF_ON_OFF);
+ 		if (!ret)
+ 			ret = ov772x_mask_set(client, BDBASE,
+ 					      0xff, 256 - priv->band_filter);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 55ba696b8cf4..a920164f53f1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -384,12 +384,17 @@ static void __isp_video_try_fmt(struct fimc_isp *isp,
+ 				struct v4l2_pix_format_mplane *pixm,
+ 				const struct fimc_fmt **fmt)
+ {
+-	*fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++	const struct fimc_fmt *__fmt;
++
++	__fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++
++	if (fmt)
++		*fmt = __fmt;
+ 
+ 	pixm->colorspace = V4L2_COLORSPACE_SRGB;
+ 	pixm->field = V4L2_FIELD_NONE;
+-	pixm->num_planes = (*fmt)->memplanes;
+-	pixm->pixelformat = (*fmt)->fourcc;
++	pixm->num_planes = __fmt->memplanes;
++	pixm->pixelformat = __fmt->fourcc;
+ 	/*
+ 	 * TODO: double check with the docmentation these width/height
+ 	 * constraints are correct.
+diff --git a/drivers/media/platform/fsl-viu.c b/drivers/media/platform/fsl-viu.c
+index e41510ce69a4..0273302aa741 100644
+--- a/drivers/media/platform/fsl-viu.c
++++ b/drivers/media/platform/fsl-viu.c
+@@ -1414,7 +1414,7 @@ static int viu_of_probe(struct platform_device *op)
+ 				     sizeof(struct viu_reg), DRV_NAME)) {
+ 		dev_err(&op->dev, "Error while requesting mem region\n");
+ 		ret = -EBUSY;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* remap registers */
+@@ -1422,7 +1422,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_regs) {
+ 		dev_err(&op->dev, "Can't map register set\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* Prepare our private structure */
+@@ -1430,7 +1430,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_dev) {
+ 		dev_err(&op->dev, "Can't allocate private structure\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	viu_dev->vr = viu_regs;
+@@ -1446,16 +1446,21 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = v4l2_device_register(viu_dev->dev, &viu_dev->v4l2_dev);
+ 	if (ret < 0) {
+ 		dev_err(&op->dev, "v4l2_device_register() failed: %d\n", ret);
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	ad = i2c_get_adapter(0);
++	if (!ad) {
++		ret = -EFAULT;
++		dev_err(&op->dev, "couldn't get i2c adapter\n");
++		goto err_v4l2;
++	}
+ 
+ 	v4l2_ctrl_handler_init(&viu_dev->hdl, 5);
+ 	if (viu_dev->hdl.error) {
+ 		ret = viu_dev->hdl.error;
+ 		dev_err(&op->dev, "couldn't register control\n");
+-		goto err_vdev;
++		goto err_i2c;
+ 	}
+ 	/* This control handler will inherit the control(s) from the
+ 	   sub-device(s). */
+@@ -1471,7 +1476,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	vdev = video_device_alloc();
+ 	if (vdev == NULL) {
+ 		ret = -ENOMEM;
+-		goto err_vdev;
++		goto err_hdl;
+ 	}
+ 
+ 	*vdev = viu_template;
+@@ -1492,7 +1497,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = video_register_device(viu_dev->vdev, VFL_TYPE_GRABBER, -1);
+ 	if (ret < 0) {
+ 		video_device_release(viu_dev->vdev);
+-		goto err_vdev;
++		goto err_unlock;
+ 	}
+ 
+ 	/* enable VIU clock */
+@@ -1500,12 +1505,12 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&op->dev, "failed to lookup the clock!\n");
+ 		ret = PTR_ERR(clk);
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	ret = clk_prepare_enable(clk);
+ 	if (ret) {
+ 		dev_err(&op->dev, "failed to enable the clock!\n");
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	viu_dev->clk = clk;
+ 
+@@ -1516,7 +1521,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (request_irq(viu_dev->irq, viu_intr, 0, "viu", (void *)viu_dev)) {
+ 		dev_err(&op->dev, "Request VIU IRQ failed.\n");
+ 		ret = -ENODEV;
+-		goto err_irq;
++		goto err_clk;
+ 	}
+ 
+ 	mutex_unlock(&viu_dev->lock);
+@@ -1524,16 +1529,19 @@ static int viu_of_probe(struct platform_device *op)
+ 	dev_info(&op->dev, "Freescale VIU Video Capture Board\n");
+ 	return ret;
+ 
+-err_irq:
+-	clk_disable_unprepare(viu_dev->clk);
+ err_clk:
+-	video_unregister_device(viu_dev->vdev);
++	clk_disable_unprepare(viu_dev->clk);
+ err_vdev:
+-	v4l2_ctrl_handler_free(&viu_dev->hdl);
++	video_unregister_device(viu_dev->vdev);
++err_unlock:
+ 	mutex_unlock(&viu_dev->lock);
++err_hdl:
++	v4l2_ctrl_handler_free(&viu_dev->hdl);
++err_i2c:
+ 	i2c_put_adapter(ad);
++err_v4l2:
+ 	v4l2_device_unregister(&viu_dev->v4l2_dev);
+-err:
++err_irq:
+ 	irq_dispose_mapping(viu_irq);
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index f22cf351e3ee..ae0ef8b241a7 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -300,7 +300,7 @@ static struct clk *isp_xclk_src_get(struct of_phandle_args *clkspec, void *data)
+ static int isp_xclk_init(struct isp_device *isp)
+ {
+ 	struct device_node *np = isp->dev->of_node;
+-	struct clk_init_data init;
++	struct clk_init_data init = { 0 };
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(isp->xclks); ++i)
+diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
+index 9ab8e7ee2e1e..b1d9f3857d3d 100644
+--- a/drivers/media/platform/s3c-camif/camif-capture.c
++++ b/drivers/media/platform/s3c-camif/camif-capture.c
+@@ -117,6 +117,8 @@ static int sensor_set_power(struct camif_dev *camif, int on)
+ 
+ 	if (camif->sensor.power_count == !on)
+ 		err = v4l2_subdev_call(sensor->sd, core, s_power, on);
++	if (err == -ENOIOCTLCMD)
++		err = 0;
+ 	if (!err)
+ 		sensor->power_count += on ? 1 : -1;
+ 
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index c811fc6cf48a..3a4e545c6037 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -266,6 +266,11 @@ static int register_dvb(struct tm6000_core *dev)
+ 
+ 	ret = dvb_register_adapter(&dvb->adapter, "Trident TVMaster 6000 DVB-T",
+ 					THIS_MODULE, &dev->udev->dev, adapter_nr);
++	if (ret < 0) {
++		pr_err("tm6000: couldn't register the adapter!\n");
++		goto err;
++	}
++
+ 	dvb->adapter.priv = dev;
+ 
+ 	if (dvb->frontend) {
+diff --git a/drivers/media/v4l2-core/v4l2-event.c b/drivers/media/v4l2-core/v4l2-event.c
+index 127fe6eb91d9..a3ef1f50a4b3 100644
+--- a/drivers/media/v4l2-core/v4l2-event.c
++++ b/drivers/media/v4l2-core/v4l2-event.c
+@@ -115,14 +115,6 @@ static void __v4l2_event_queue_fh(struct v4l2_fh *fh, const struct v4l2_event *e
+ 	if (sev == NULL)
+ 		return;
+ 
+-	/*
+-	 * If the event has been added to the fh->subscribed list, but its
+-	 * add op has not completed yet elems will be 0, treat this as
+-	 * not being subscribed.
+-	 */
+-	if (!sev->elems)
+-		return;
+-
+ 	/* Increase event sequence number on fh. */
+ 	fh->sequence++;
+ 
+@@ -208,6 +200,7 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	struct v4l2_subscribed_event *sev, *found_ev;
+ 	unsigned long flags;
+ 	unsigned i;
++	int ret = 0;
+ 
+ 	if (sub->type == V4L2_EVENT_ALL)
+ 		return -EINVAL;
+@@ -225,31 +218,36 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	sev->flags = sub->flags;
+ 	sev->fh = fh;
+ 	sev->ops = ops;
++	sev->elems = elems;
++
++	mutex_lock(&fh->subscribe_lock);
+ 
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 	found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+-	if (!found_ev)
+-		list_add(&sev->list, &fh->subscribed);
+ 	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+ 	if (found_ev) {
++		/* Already listening */
+ 		kvfree(sev);
+-		return 0; /* Already listening */
++		goto out_unlock;
+ 	}
+ 
+ 	if (sev->ops && sev->ops->add) {
+-		int ret = sev->ops->add(sev, elems);
++		ret = sev->ops->add(sev, elems);
+ 		if (ret) {
+-			sev->ops = NULL;
+-			v4l2_event_unsubscribe(fh, sub);
+-			return ret;
++			kvfree(sev);
++			goto out_unlock;
+ 		}
+ 	}
+ 
+-	/* Mark as ready for use */
+-	sev->elems = elems;
++	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
++	list_add(&sev->list, &fh->subscribed);
++	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+-	return 0;
++out_unlock:
++	mutex_unlock(&fh->subscribe_lock);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_event_subscribe);
+ 
+@@ -288,6 +286,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&fh->subscribe_lock);
++
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 
+ 	sev = v4l2_event_subscribed(fh, sub->type, sub->id);
+@@ -305,6 +305,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 	if (sev && sev->ops && sev->ops->del)
+ 		sev->ops->del(sev);
+ 
++	mutex_unlock(&fh->subscribe_lock);
++
+ 	kvfree(sev);
+ 
+ 	return 0;
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 3895999bf880..c91a7bd3ecfc 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -45,6 +45,7 @@ void v4l2_fh_init(struct v4l2_fh *fh, struct video_device *vdev)
+ 	INIT_LIST_HEAD(&fh->available);
+ 	INIT_LIST_HEAD(&fh->subscribed);
+ 	fh->sequence = -1;
++	mutex_init(&fh->subscribe_lock);
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_init);
+ 
+@@ -90,6 +91,7 @@ void v4l2_fh_exit(struct v4l2_fh *fh)
+ 		return;
+ 	v4l_disable_media_source(fh->vdev);
+ 	v4l2_event_unsubscribe_all(fh);
++	mutex_destroy(&fh->subscribe_lock);
+ 	fh->vdev = NULL;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_exit);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index 50d82c3d032a..b8aaa684c397 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -273,7 +273,7 @@ static void *alloc_dma_buffer(struct vio_dev *vdev, size_t size,
+ 			      dma_addr_t *dma_handle)
+ {
+ 	/* allocate memory */
+-	void *buffer = kzalloc(size, GFP_KERNEL);
++	void *buffer = kzalloc(size, GFP_ATOMIC);
+ 
+ 	if (!buffer) {
+ 		*dma_handle = 0;
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 679647713e36..74b183baf044 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -391,23 +391,23 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (IS_ERR(sram->pool))
+ 		return PTR_ERR(sram->pool);
+ 
+-	ret = sram_reserve_regions(sram, res);
+-	if (ret)
+-		return ret;
+-
+ 	sram->clk = devm_clk_get(sram->dev, NULL);
+ 	if (IS_ERR(sram->clk))
+ 		sram->clk = NULL;
+ 	else
+ 		clk_prepare_enable(sram->clk);
+ 
++	ret = sram_reserve_regions(sram, res);
++	if (ret)
++		goto err_disable_clk;
++
+ 	platform_set_drvdata(pdev, sram);
+ 
+ 	init_func = of_device_get_match_data(&pdev->dev);
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			goto err_disable_clk;
++			goto err_free_partitions;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+@@ -415,10 +415,11 @@ static int sram_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_free_partitions:
++	sram_free_partitions(sram);
+ err_disable_clk:
+ 	if (sram->clk)
+ 		clk_disable_unprepare(sram->clk);
+-	sram_free_partitions(sram);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/misc/tsl2550.c b/drivers/misc/tsl2550.c
+index adf46072cb37..3fce3b6a3624 100644
+--- a/drivers/misc/tsl2550.c
++++ b/drivers/misc/tsl2550.c
+@@ -177,7 +177,7 @@ static int tsl2550_calculate_lux(u8 ch0, u8 ch1)
+ 		} else
+ 			lux = 0;
+ 	else
+-		return -EAGAIN;
++		return 0;
+ 
+ 	/* LUX range check */
+ 	return lux > TSL2550_MAX_LUX ? TSL2550_MAX_LUX : lux;
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index b4d7774cfe07..d95e8648e7b3 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -668,7 +668,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) produce_uva,
+ 				     produce_q->kernel_if->num_pages, 1,
+ 				     produce_q->kernel_if->u.h.header_page);
+-	if (retval < produce_q->kernel_if->num_pages) {
++	if (retval < (int)produce_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(produce_q->kernel_if->u.h.header_page,
+@@ -680,7 +680,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) consume_uva,
+ 				     consume_q->kernel_if->num_pages, 1,
+ 				     consume_q->kernel_if->u.h.header_page);
+-	if (retval < consume_q->kernel_if->num_pages) {
++	if (retval < (int)consume_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(consume_q->kernel_if->u.h.header_page,
+diff --git a/drivers/mmc/host/android-goldfish.c b/drivers/mmc/host/android-goldfish.c
+index 294de177632c..61e4e2a213c9 100644
+--- a/drivers/mmc/host/android-goldfish.c
++++ b/drivers/mmc/host/android-goldfish.c
+@@ -217,7 +217,7 @@ static void goldfish_mmc_xfer_done(struct goldfish_mmc_host *host,
+ 			 * We don't really have DMA, so we need
+ 			 * to copy from our platform driver buffer
+ 			 */
+-			sg_copy_to_buffer(data->sg, 1, host->virt_base,
++			sg_copy_from_buffer(data->sg, 1, host->virt_base,
+ 					data->sg->length);
+ 		}
+ 		host->data->bytes_xfered += data->sg->length;
+@@ -393,7 +393,7 @@ static void goldfish_mmc_prepare_data(struct goldfish_mmc_host *host,
+ 		 * We don't really have DMA, so we need to copy to our
+ 		 * platform driver buffer
+ 		 */
+-		sg_copy_from_buffer(data->sg, 1, host->virt_base,
++		sg_copy_to_buffer(data->sg, 1, host->virt_base,
+ 				data->sg->length);
+ 	}
+ }
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 5aa2c9404e92..be53044086c7 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1976,7 +1976,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 	do {
+ 		value = atmci_readl(host, ATMCI_RDR);
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
+ 
+ 			offset += 4;
+ 			nbytes += 4;
+@@ -1993,7 +1993,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 		} else {
+ 			unsigned int remaining = sg->length - offset;
+ 
+-			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			flush_dcache_page(sg_page(sg));
+@@ -2003,7 +2003,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 				goto done;
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			nbytes += offset;
+ 		}
+@@ -2042,7 +2042,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 
+ 	do {
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 
+ 			offset += 4;
+@@ -2059,7 +2059,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			unsigned int remaining = sg->length - offset;
+ 
+ 			value = 0;
+-			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			host->sg = sg = sg_next(sg);
+@@ -2070,7 +2070,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			}
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 			nbytes += offset;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 12f6753d47ae..e686fe73159e 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -129,6 +129,11 @@
+ #define DEFAULT_TIMEOUT_MS			1000
+ #define MIN_DMA_LEN				128
+ 
++static bool atmel_nand_avoid_dma __read_mostly;
++
++MODULE_PARM_DESC(avoiddma, "Avoid using DMA");
++module_param_named(avoiddma, atmel_nand_avoid_dma, bool, 0400);
++
+ enum atmel_nand_rb_type {
+ 	ATMEL_NAND_NO_RB,
+ 	ATMEL_NAND_NATIVE_RB,
+@@ -1977,7 +1982,7 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 		return ret;
+ 	}
+ 
+-	if (nc->caps->has_dma) {
++	if (nc->caps->has_dma && !atmel_nand_avoid_dma) {
+ 		dma_cap_mask_t mask;
+ 
+ 		dma_cap_zero(mask);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index a8926e97935e..c5d387be6cfe 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5705,7 +5705,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		if (t4_read_reg(adapter, LE_DB_CONFIG_A) & HASHEN_F) {
+ 			u32 hash_base, hash_reg;
+ 
+-			if (chip <= CHELSIO_T5) {
++			if (chip_ver <= CHELSIO_T5) {
+ 				hash_reg = LE_DB_TID_HASHBASE_A;
+ 				hash_base = t4_read_reg(adapter, hash_reg);
+ 				adapter->tids.hash_base = hash_base / 4;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index fa5b30f547f6..cad52bd331f7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -220,10 +220,10 @@ struct hnae_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
++	u32 page_offset;
++	u32 length;     /* length of the buffer */
+ 
+-	u16 length;     /* length of the buffer */
++	u16 reuse_flag;
+ 
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef9ef703d13a..ef994a715f93 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -530,7 +530,7 @@ static void hns_nic_reuse_page(struct sk_buff *skb, int i,
+ 	}
+ 
+ 	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
+-			size - pull_len, truesize - pull_len);
++			size - pull_len, truesize);
+ 
+ 	 /* avoid re-using remote pages,flag default unreuse */
+ 	if (unlikely(page_to_nid(desc_cb->priv) != numa_node_id()))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 3b083d5ae9ce..c84c09053640 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -290,11 +290,11 @@ struct hns3_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
+-
++	u32 page_offset;
+ 	u32 length;     /* length of the buffer */
+ 
++	u16 reuse_flag;
++
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+ };
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 40c0425b4023..11620e003a8e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -201,7 +201,9 @@ static u32 hns3_lb_check_rx_ring(struct hns3_nic_priv *priv, u32 budget)
+ 		rx_group = &ring->tqp_vector->rx_group;
+ 		pre_rx_pkt = rx_group->total_packets;
+ 
++		preempt_disable();
+ 		hns3_clean_rx_ring(ring, budget, hns3_lb_check_skb_data);
++		preempt_enable();
+ 
+ 		rcv_good_pkt_total += (rx_group->total_packets - pre_rx_pkt);
+ 		rx_group->total_packets = pre_rx_pkt;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 262c125f8137..f027fceea548 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1223,6 +1223,10 @@ static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
+ 		tx_en = true;
+ 		rx_en = true;
+ 		break;
++	case HCLGE_FC_PFC:
++		tx_en = false;
++		rx_en = false;
++		break;
+ 	default:
+ 		tx_en = true;
+ 		rx_en = true;
+@@ -1240,8 +1244,9 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (hdev->tm_info.fc_mode != HCLGE_FC_PFC)
+-		return hclge_mac_pause_setup_hw(hdev);
++	ret = hclge_mac_pause_setup_hw(hdev);
++	if (ret)
++		return ret;
+ 
+ 	/* Only DCB-supported dev supports qset back pressure and pfc cmd */
+ 	if (!hnae3_dev_dcb_supported(hdev))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a17872aab168..12aa1f1b99ef 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -648,8 +648,17 @@ static int hclgevf_unmap_ring_from_vector(
+ static int hclgevf_put_vector(struct hnae3_handle *handle, int vector)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int vector_id;
+ 
+-	hclgevf_free_vector(hdev, vector);
++	vector_id = hclgevf_get_vector_index(hdev, vector);
++	if (vector_id < 0) {
++		dev_err(&handle->pdev->dev,
++			"hclgevf_put_vector get vector index fail. ret =%d\n",
++			vector_id);
++		return vector_id;
++	}
++
++	hclgevf_free_vector(hdev, vector_id);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index b598c06af8e0..cd246f906150 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -208,7 +208,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 
+ 			/* tail the async message in arq */
+ 			msg_q = hdev->arq.msg_q[hdev->arq.tail];
+-			memcpy(&msg_q[0], req->msg, HCLGE_MBX_MAX_ARQ_MSG_SIZE);
++			memcpy(&msg_q[0], req->msg,
++			       HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ 			hclge_mbx_tail_ptr_move_arq(hdev->arq);
+ 			hdev->arq.count++;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+index bdb3f8e65ed4..2569a168334c 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+@@ -624,14 +624,14 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ 		adapter->tx_ring = tx_old;
+ 		e1000_free_all_rx_resources(adapter);
+ 		e1000_free_all_tx_resources(adapter);
+-		kfree(tx_old);
+-		kfree(rx_old);
+ 		adapter->rx_ring = rxdr;
+ 		adapter->tx_ring = txdr;
+ 		err = e1000_up(adapter);
+ 		if (err)
+ 			goto err_setup;
+ 	}
++	kfree(tx_old);
++	kfree(rx_old);
+ 
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return 0;
+@@ -644,7 +644,8 @@ err_setup_rx:
+ err_alloc_rx:
+ 	kfree(txdr);
+ err_alloc_tx:
+-	e1000_up(adapter);
++	if (netif_running(adapter->netdev))
++		e1000_up(adapter);
+ err_setup:
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 6947a2a571cb..5d670f4ce5ac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1903,7 +1903,7 @@ static void i40e_get_stat_strings(struct net_device *netdev, u8 *data)
+ 		data += ETH_GSTRING_LEN;
+ 	}
+ 
+-	WARN_ONCE(p - data != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
++	WARN_ONCE(data - p != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
+ 		  "stat strings count mismatch!");
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c944bd10b03d..5f105bc68c6a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5121,15 +5121,17 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 				       u8 *bw_share)
+ {
+ 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
++	struct i40e_pf *pf = vsi->back;
+ 	i40e_status ret;
+ 	int i;
+ 
+-	if (vsi->back->flags & I40E_FLAG_TC_MQPRIO)
++	/* There is no need to reset BW when mqprio mode is on.  */
++	if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ 		return 0;
+-	if (!vsi->mqprio_qopt.qopt.hw) {
++	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ 		if (ret)
+-			dev_info(&vsi->back->pdev->dev,
++			dev_info(&pf->pdev->dev,
+ 				 "Failed to reset tx rate for vsi->seid %u\n",
+ 				 vsi->seid);
+ 		return ret;
+@@ -5138,12 +5140,11 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+ 		bw_data.tc_bw_credits[i] = bw_share[i];
+ 
+-	ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+-				       NULL);
++	ret = i40e_aq_config_vsi_tc_bw(&pf->hw, vsi->seid, &bw_data, NULL);
+ 	if (ret) {
+-		dev_info(&vsi->back->pdev->dev,
++		dev_info(&pf->pdev->dev,
+ 			 "AQ command Config VSI BW allocation per TC failed = %d\n",
+-			 vsi->back->hw.aq.asq_last_status);
++			 pf->hw.aq.asq_last_status);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d8b5fff581e7..ed071ea75f20 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -89,6 +89,13 @@ extern const char ice_drv_ver[];
+ #define ice_for_each_rxq(vsi, i) \
+ 	for ((i) = 0; (i) < (vsi)->num_rxq; (i)++)
+ 
++/* Macros for each allocated tx/rx ring whether used or not in a VSI */
++#define ice_for_each_alloc_txq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_txq; (i)++)
++
++#define ice_for_each_alloc_rxq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_rxq; (i)++)
++
+ struct ice_tc_info {
+ 	u16 qoffset;
+ 	u16 qcount;
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 7541ec2270b3..a0614f472658 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -329,19 +329,19 @@ struct ice_aqc_vsi_props {
+ 	/* VLAN section */
+ 	__le16 pvid; /* VLANS include priority bits */
+ 	u8 pvlan_reserved[2];
+-	u8 port_vlan_flags;
+-#define ICE_AQ_VSI_PVLAN_MODE_S	0
+-#define ICE_AQ_VSI_PVLAN_MODE_M	(0x3 << ICE_AQ_VSI_PVLAN_MODE_S)
+-#define ICE_AQ_VSI_PVLAN_MODE_UNTAGGED	0x1
+-#define ICE_AQ_VSI_PVLAN_MODE_TAGGED	0x2
+-#define ICE_AQ_VSI_PVLAN_MODE_ALL	0x3
++	u8 vlan_flags;
++#define ICE_AQ_VSI_VLAN_MODE_S	0
++#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
++#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
++#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
++#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+ #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+-#define ICE_AQ_VSI_PVLAN_EMOD_S	3
+-#define ICE_AQ_VSI_PVLAN_EMOD_M	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_S		3
++#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+ 	u8 pvlan_reserved2[3];
+ 	/* ingress egress up sections */
+ 	__le32 ingress_table; /* bitmap, 3 bits per up */
+@@ -594,6 +594,7 @@ struct ice_sw_rule_lg_act {
+ #define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+ #define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+ #define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
++#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+ 
+ 	/* Action = 7 - Set Stat count */
+ #define ICE_LG_ACT_STAT_COUNT		0x7
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 71d032cc5fa7..ebd701ac9428 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -1483,7 +1483,7 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+ 	struct ice_phy_info *phy_info;
+ 	enum ice_status status = 0;
+ 
+-	if (!pi)
++	if (!pi || !link_up)
+ 		return ICE_ERR_PARAM;
+ 
+ 	phy_info = &pi->phy;
+@@ -1619,20 +1619,23 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+ 	}
+ 
+ 	/* LUT size is only valid for Global and PF table types */
+-	if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512) {
++	switch (lut_size) {
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ 		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ 			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ 			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if ((lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K) &&
+-		   (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF)) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else {
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
++		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
++			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
++				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
++				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
++			break;
++		}
++		/* fall-through */
++	default:
+ 		status = ICE_ERR_PARAM;
+ 		goto ice_aq_get_set_rss_lut_exit;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 7c511f144ed6..62be72fdc8f3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -597,10 +597,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	return 0;
+ 
+ init_ctrlq_free_rq:
+-	ice_shutdown_rq(hw, cq);
+-	ice_shutdown_sq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
+ 	return status;
+ }
+ 
+@@ -706,10 +710,14 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+ 		return;
+ 	}
+ 
+-	ice_shutdown_sq(hw, cq);
+-	ice_shutdown_rq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
+ }
+ 
+ /**
+@@ -1057,8 +1065,11 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ 
+ clean_rq_elem_out:
+ 	/* Set pending if needed, unlock and return */
+-	if (pending)
++	if (pending) {
++		/* re-read HW head to calculate actual pending messages */
++		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+ 		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
++	}
+ clean_rq_elem_err:
+ 	mutex_unlock(&cq->rq_lock);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 1db304c01d10..c71a9b528d6d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -26,7 +26,7 @@ static int ice_q_stats_len(struct net_device *netdev)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 
+-	return ((np->vsi->num_txq + np->vsi->num_rxq) *
++	return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) *
+ 		(sizeof(struct ice_q_stats) / sizeof(u64)));
+ }
+ 
+@@ -218,7 +218,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_txq(vsi, i) {
++		ice_for_each_alloc_txq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "tx-queue-%u.tx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -226,7 +226,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_rxq(vsi, i) {
++		ice_for_each_alloc_rxq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "rx-queue-%u.rx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -253,6 +253,24 @@ static int ice_get_sset_count(struct net_device *netdev, int sset)
+ {
+ 	switch (sset) {
+ 	case ETH_SS_STATS:
++		/* The number (and order) of strings reported *must* remain
++		 * constant for a given netdevice. This function must not
++		 * report a different number based on run time parameters
++		 * (such as the number of queues in use, or the setting of
++		 * a private ethtool flag). This is due to the nature of the
++		 * ethtool stats API.
++		 *
++		 * User space programs such as ethtool must make 3 separate
++		 * ioctl requests, one for size, one for the strings, and
++		 * finally one for the stats. Since these cross into
++		 * user space, changes to the number or size could result in
++		 * undefined memory access or incorrect string<->value
++		 * correlations for statistics.
++		 *
++		 * Even if it appears to be safe, changes to the size or
++		 * order of strings will suffer from race conditions and are
++		 * not safe.
++		 */
+ 		return ICE_ALL_STATS_LEN(netdev);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -280,18 +298,26 @@ ice_get_ethtool_stats(struct net_device *netdev,
+ 	/* populate per queue stats */
+ 	rcu_read_lock();
+ 
+-	ice_for_each_txq(vsi, j) {
++	ice_for_each_alloc_txq(vsi, j) {
+ 		ring = READ_ONCE(vsi->tx_rings[j]);
+-		if (!ring)
+-			continue;
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+-	ice_for_each_rxq(vsi, j) {
++	ice_for_each_alloc_rxq(vsi, j) {
+ 		ring = READ_ONCE(vsi->rx_rings[j]);
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -519,7 +545,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_txq; i++) {
++	for (i = 0; i < vsi->alloc_txq; i++) {
+ 		/* clone ring and setup updated count */
+ 		tx_rings[i] = *vsi->tx_rings[i];
+ 		tx_rings[i].count = new_tx_cnt;
+@@ -551,7 +577,7 @@ process_rx:
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_rxq; i++) {
++	for (i = 0; i < vsi->alloc_rxq; i++) {
+ 		/* clone ring and setup updated count */
+ 		rx_rings[i] = *vsi->rx_rings[i];
+ 		rx_rings[i].count = new_rx_cnt;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5299caf55a7f..27c9aa31b248 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -916,6 +916,21 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ 	return pending && (i == ICE_DFLT_IRQ_WORK);
+ }
+ 
++/**
++ * ice_ctrlq_pending - check if there is a difference between ntc and ntu
++ * @hw: pointer to hardware info
++ * @cq: control queue information
++ *
++ * returns true if there are pending messages in a queue, false if there aren't
++ */
++static bool ice_ctrlq_pending(struct ice_hw *hw, struct ice_ctl_q_info *cq)
++{
++	u16 ntu;
++
++	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
++	return cq->rq.next_to_clean != ntu;
++}
++
+ /**
+  * ice_clean_adminq_subtask - clean the AdminQ rings
+  * @pf: board private structure
+@@ -923,7 +938,6 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+-	u32 val;
+ 
+ 	if (!test_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state))
+ 		return;
+@@ -933,9 +947,13 @@ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ 
+ 	clear_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state);
+ 
+-	/* re-enable Admin queue interrupt causes */
+-	val = rd32(hw, PFINT_FW_CTL);
+-	wr32(hw, PFINT_FW_CTL, (val | PFINT_FW_CTL_CAUSE_ENA_M));
++	/* There might be a situation where new messages arrive to a control
++	 * queue between processing the last message and clearing the
++	 * EVENT_PENDING bit. So before exiting, check queue head again (using
++	 * ice_ctrlq_pending) and process new messages if any.
++	 */
++	if (ice_ctrlq_pending(hw, &hw->adminq))
++		__ice_clean_ctrlq(pf, ICE_CTL_Q_ADMIN);
+ 
+ 	ice_flush(hw);
+ }
+@@ -1295,11 +1313,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ 		qcount = numq_tc;
+ 	}
+ 
+-	/* find higher power-of-2 of qcount */
+-	pow = ilog2(qcount);
+-
+-	if (!is_power_of_2(qcount))
+-		pow++;
++	/* find the (rounded up) power-of-2 of qcount */
++	pow = order_base_2(qcount);
+ 
+ 	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+ 		if (!(vsi->tc_cfg.ena_tc & BIT(i))) {
+@@ -1352,14 +1367,15 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt)
+ 	ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE;
+ 	/* Traffic from VSI can be sent to LAN */
+ 	ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+-	/* Allow all packets untagged/tagged */
+-	ctxt->info.port_vlan_flags = ((ICE_AQ_VSI_PVLAN_MODE_ALL &
+-				       ICE_AQ_VSI_PVLAN_MODE_M) >>
+-				      ICE_AQ_VSI_PVLAN_MODE_S);
+-	/* Show VLAN/UP from packets in Rx descriptors */
+-	ctxt->info.port_vlan_flags |= ((ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH &
+-					ICE_AQ_VSI_PVLAN_EMOD_M) >>
+-				       ICE_AQ_VSI_PVLAN_EMOD_S);
++
++	/* By default bits 3 and 4 in vlan_flags are 0's which results in legacy
++	 * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all
++	 * packets untagged/tagged.
++	 */
++	ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL &
++				  ICE_AQ_VSI_VLAN_MODE_M) >>
++				 ICE_AQ_VSI_VLAN_MODE_S);
++
+ 	/* Have 1:1 UP mapping for both ingress/egress tables */
+ 	table |= ICE_UP_TABLE_TRANSLATE(0, 0);
+ 	table |= ICE_UP_TABLE_TRANSLATE(1, 1);
+@@ -2058,15 +2074,13 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)
+ skip_req_irq:
+ 	ice_ena_misc_vector(pf);
+ 
+-	val = (pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_OICR_CTL_ITR_INDX_M) |
+-	      PFINT_OICR_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
++	       PFINT_OICR_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_OICR_CTL, val);
+ 
+ 	/* This enables Admin queue Interrupt causes */
+-	val = (pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_FW_CTL_ITR_INDX_M) |
+-	      PFINT_FW_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
++	       PFINT_FW_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_FW_CTL, val);
+ 
+ 	itr_gran = hw->itr_gran_200;
+@@ -3246,8 +3260,10 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
+ 	if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
+ 		ice_dis_msix(pf);
+ 
+-	devm_kfree(&pf->pdev->dev, pf->irq_tracker);
+-	pf->irq_tracker = NULL;
++	if (pf->irq_tracker) {
++		devm_kfree(&pf->pdev->dev, pf->irq_tracker);
++		pf->irq_tracker = NULL;
++	}
+ }
+ 
+ /**
+@@ -3720,10 +3736,10 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 	enum ice_status status;
+ 
+ 	/* Here we are configuring the VSI to let the driver add VLAN tags by
+-	 * setting port_vlan_flags to ICE_AQ_VSI_PVLAN_MODE_ALL. The actual VLAN
+-	 * tag insertion happens in the Tx hot path, in ice_tx_map.
++	 * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag
++	 * insertion happens in the Tx hot path, in ice_tx_map.
+ 	 */
+-	ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_MODE_ALL;
++	ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+ 
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+@@ -3735,7 +3751,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -3757,12 +3773,15 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 	 */
+ 	if (ena) {
+ 		/* Strip VLAN tag from Rx packet and put it in the desc */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+ 	} else {
+ 		/* Disable stripping. Leave tag in packet */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_NOTHING;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ 	}
+ 
++	/* Allow all packets untagged/tagged */
++	ctxt.info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL;
++
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+ 
+@@ -3773,7 +3792,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -4098,11 +4117,12 @@ static int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ 	int err;
+ 
+-	ice_set_rx_mode(vsi->netdev);
+-
+-	err = ice_restore_vlan(vsi);
+-	if (err)
+-		return err;
++	if (vsi->netdev) {
++		ice_set_rx_mode(vsi->netdev);
++		err = ice_restore_vlan(vsi);
++		if (err)
++			return err;
++	}
+ 
+ 	err = ice_vsi_cfg_txqs(vsi);
+ 	if (!err)
+@@ -4868,7 +4888,7 @@ int ice_down(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_txq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Tx queues\n",
+@@ -4893,7 +4913,7 @@ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_rxq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Rx queues\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 723d15f1e90b..6b7ec2ae5ad6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -645,14 +645,14 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+ 	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[1] = cpu_to_le32(act);
+ 
+-	act = (7 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
++	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
++	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+ 
+ 	/* Third action Marker value */
+ 	act |= ICE_LG_ACT_GENERIC;
+ 	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+ 		ICE_LG_ACT_GENERIC_VALUE_M;
+ 
+-	act |= (0 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[2] = cpu_to_le32(act);
+ 
+ 	/* call the fill switch rule to fill the lookup tx rx structure */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 6f59933cdff7..2bc4fe475f28 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -688,8 +688,13 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
+ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ 	struct vf_data_storage *vfinfo = &adapter->vfinfo[vf];
++	u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ 	u8 num_tcs = adapter->hw_tcs;
++	u32 reg_val;
++	u32 queue;
++	u32 word;
+ 
+ 	/* remove VLAN filters beloning to this VF */
+ 	ixgbe_clear_vf_vlans(adapter, vf);
+@@ -726,6 +731,27 @@ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ 
+ 	/* reset VF api back to unknown */
+ 	adapter->vfinfo[vf].vf_api = ixgbe_mbox_api_10;
++
++	/* Restart each queue for given VF */
++	for (queue = 0; queue < q_per_pool; queue++) {
++		unsigned int reg_idx = (vf * q_per_pool) + queue;
++
++		reg_val = IXGBE_READ_REG(hw, IXGBE_PVFTXDCTL(reg_idx));
++
++		/* Re-enabling only configured queues */
++		if (reg_val) {
++			reg_val |= IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++			reg_val &= ~IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++		}
++	}
++
++	/* Clear VF's mailbox memory */
++	for (word = 0; word < IXGBE_VFMAILBOX_SIZE; word++)
++		IXGBE_WRITE_REG_ARRAY(hw, IXGBE_PFMBMEM(vf), word, 0);
++
++	IXGBE_WRITE_FLUSH(hw);
+ }
+ 
+ static int ixgbe_set_vf_mac(struct ixgbe_adapter *adapter,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 44cfb2021145..41bcbb337e83 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -2518,6 +2518,7 @@ enum {
+ /* Translated register #defines */
+ #define IXGBE_PVFTDH(P)		(0x06010 + (0x40 * (P)))
+ #define IXGBE_PVFTDT(P)		(0x06018 + (0x40 * (P)))
++#define IXGBE_PVFTXDCTL(P)	(0x06028 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAL(P)	(0x06038 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAH(P)	(0x0603C + (0x40 * (P)))
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cdd645024a32..ad6826b5f758 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -48,7 +48,7 @@
+ #include "qed_reg_addr.h"
+ #include "qed_sriov.h"
+ 
+-#define CHIP_MCP_RESP_ITER_US 10
++#define QED_MCP_RESP_ITER_US	10
+ 
+ #define QED_DRV_MB_MAX_RETRIES	(500 * 1000)	/* Account for 5 sec */
+ #define QED_MCP_RESET_RETRIES	(50 * 1000)	/* Account for 500 msec */
+@@ -183,18 +183,57 @@ int qed_mcp_free(struct qed_hwfn *p_hwfn)
+ 	return 0;
+ }
+ 
++/* Maximum of 1 sec to wait for the SHMEM ready indication */
++#define QED_MCP_SHMEM_RDY_MAX_RETRIES	20
++#define QED_MCP_SHMEM_RDY_ITER_MS	50
++
+ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+ 	struct qed_mcp_info *p_info = p_hwfn->mcp_info;
++	u8 cnt = QED_MCP_SHMEM_RDY_MAX_RETRIES;
++	u8 msec = QED_MCP_SHMEM_RDY_ITER_MS;
+ 	u32 drv_mb_offsize, mfw_mb_offsize;
+ 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
+ 
+ 	p_info->public_base = qed_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+-	if (!p_info->public_base)
+-		return 0;
++	if (!p_info->public_base) {
++		DP_NOTICE(p_hwfn,
++			  "The address of the MCP scratch-pad is not configured\n");
++		return -EINVAL;
++	}
+ 
+ 	p_info->public_base |= GRCBASE_MCP;
+ 
++	/* Get the MFW MB address and number of supported messages */
++	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
++				SECTION_OFFSIZE_ADDR(p_info->public_base,
++						     PUBLIC_MFW_MB));
++	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
++	p_info->mfw_mb_length = (u16)qed_rd(p_hwfn, p_ptt,
++					    p_info->mfw_mb_addr +
++					    offsetof(struct public_mfw_mb,
++						     sup_msgs));
++
++	/* The driver can notify that there was an MCP reset, and might read the
++	 * SHMEM values before the MFW has completed initializing them.
++	 * To avoid this, the "sup_msgs" field in the MFW mailbox is used as a
++	 * data ready indication.
++	 */
++	while (!p_info->mfw_mb_length && --cnt) {
++		msleep(msec);
++		p_info->mfw_mb_length =
++			(u16)qed_rd(p_hwfn, p_ptt,
++				    p_info->mfw_mb_addr +
++				    offsetof(struct public_mfw_mb, sup_msgs));
++	}
++
++	if (!cnt) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to get the SHMEM ready notification after %d msec\n",
++			  QED_MCP_SHMEM_RDY_MAX_RETRIES * msec);
++		return -EBUSY;
++	}
++
+ 	/* Calculate the driver and MFW mailbox address */
+ 	drv_mb_offsize = qed_rd(p_hwfn, p_ptt,
+ 				SECTION_OFFSIZE_ADDR(p_info->public_base,
+@@ -204,13 +243,6 @@ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		   "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n",
+ 		   drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
+ 
+-	/* Set the MFW MB address */
+-	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
+-				SECTION_OFFSIZE_ADDR(p_info->public_base,
+-						     PUBLIC_MFW_MB));
+-	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
+-	p_info->mfw_mb_length =	(u16)qed_rd(p_hwfn, p_ptt, p_info->mfw_mb_addr);
+-
+ 	/* Get the current driver mailbox sequence before sending
+ 	 * the first command
+ 	 */
+@@ -285,9 +317,15 @@ static void qed_mcp_reread_offsets(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_reset(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
++	u32 org_mcp_reset_seq, seq, delay = QED_MCP_RESP_ITER_US, cnt = 0;
+ 	int rc = 0;
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
++		return -EBUSY;
++	}
++
+ 	/* Ensure that only a single thread is accessing the mailbox */
+ 	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+@@ -413,14 +451,41 @@ static void __qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
+ }
+ 
++static void qed_mcp_cmd_set_blocking(struct qed_hwfn *p_hwfn, bool block_cmd)
++{
++	p_hwfn->mcp_info->b_block_cmd = block_cmd;
++
++	DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
++		block_cmd ? "Block" : "Unblock");
++}
++
++static void qed_mcp_print_cpu_info(struct qed_hwfn *p_hwfn,
++				   struct qed_ptt *p_ptt)
++{
++	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
++	u32 delay = QED_MCP_RESP_ITER_US;
++
++	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++	cpu_pc_0 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_1 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_2 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++
++	DP_NOTICE(p_hwfn,
++		  "MCP CPU info: mode 0x%08x, state 0x%08x, pc {0x%08x, 0x%08x, 0x%08x}\n",
++		  cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2);
++}
++
+ static int
+ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		       struct qed_ptt *p_ptt,
+ 		       struct qed_mcp_mb_params *p_mb_params,
+-		       u32 max_retries, u32 delay)
++		       u32 max_retries, u32 usecs)
+ {
++	u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000);
+ 	struct qed_mcp_cmd_elem *p_cmd_elem;
+-	u32 cnt = 0;
+ 	u16 seq_num;
+ 	int rc = 0;
+ 
+@@ -443,7 +508,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 			goto err;
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+-		udelay(delay);
++
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
+ 	} while (++cnt < max_retries);
+ 
+ 	if (cnt >= max_retries) {
+@@ -472,7 +541,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		 * The spinlock stays locked until the list element is removed.
+ 		 */
+ 
+-		udelay(delay);
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
++
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+ 		if (p_cmd_elem->b_is_completed)
+@@ -491,11 +564,15 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		DP_NOTICE(p_hwfn,
+ 			  "The MFW failed to respond to command 0x%08x [param 0x%08x].\n",
+ 			  p_mb_params->cmd, p_mb_params->param);
++		qed_mcp_print_cpu_info(p_hwfn, p_ptt);
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 		qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
++		if (!QED_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK))
++			qed_mcp_cmd_set_blocking(p_hwfn, true);
++
+ 		return -EAGAIN;
+ 	}
+ 
+@@ -507,7 +584,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
+ 		   p_mb_params->mcp_resp,
+ 		   p_mb_params->mcp_param,
+-		   (cnt * delay) / 1000, (cnt * delay) % 1000);
++		   (cnt * usecs) / 1000, (cnt * usecs) % 1000);
+ 
+ 	/* Clear the sequence number from the MFW response */
+ 	p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
+@@ -525,7 +602,7 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ {
+ 	size_t union_data_size = sizeof(union drv_union_data);
+ 	u32 max_retries = QED_DRV_MB_MAX_RETRIES;
+-	u32 delay = CHIP_MCP_RESP_ITER_US;
++	u32 usecs = QED_MCP_RESP_ITER_US;
+ 
+ 	/* MCP not initialized */
+ 	if (!qed_mcp_is_init(p_hwfn)) {
+@@ -533,6 +610,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EBUSY;
+ 	}
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
++			  p_mb_params->cmd, p_mb_params->param);
++		return -EBUSY;
++	}
++
+ 	if (p_mb_params->data_src_size > union_data_size ||
+ 	    p_mb_params->data_dst_size > union_data_size) {
+ 		DP_ERR(p_hwfn,
+@@ -542,8 +626,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EINVAL;
+ 	}
+ 
++	if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
++		max_retries = DIV_ROUND_UP(max_retries, 1000);
++		usecs *= 1000;
++	}
++
+ 	return _qed_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
+-				      delay);
++				      usecs);
+ }
+ 
+ int qed_mcp_cmd(struct qed_hwfn *p_hwfn,
+@@ -760,6 +849,7 @@ __qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 	mb_params.data_src_size = sizeof(load_req);
+ 	mb_params.p_data_dst = &load_rsp;
+ 	mb_params.data_dst_size = sizeof(load_rsp);
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
+ 
+ 	DP_VERBOSE(p_hwfn, QED_MSG_SP,
+ 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+@@ -981,7 +1071,8 @@ int qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 wol_param, mcp_resp, mcp_param;
++	struct qed_mcp_mb_params mb_params;
++	u32 wol_param;
+ 
+ 	switch (p_hwfn->cdev->wol_config) {
+ 	case QED_OV_WOL_DISABLED:
+@@ -999,8 +1090,12 @@ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+ 	}
+ 
+-	return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+-			   &mcp_resp, &mcp_param);
++	memset(&mb_params, 0, sizeof(mb_params));
++	mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ;
++	mb_params.param = wol_param;
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
++
++	return qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ }
+ 
+ int qed_mcp_unload_done(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+@@ -2075,31 +2170,65 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
+ 	return rc;
+ }
+ 
++/* A maximal 100 msec waiting time for the MCP to halt */
++#define QED_MCP_HALT_SLEEP_MS		10
++#define QED_MCP_HALT_MAX_RETRIES	10
++
+ int qed_mcp_halt(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 resp = 0, param = 0;
++	u32 resp = 0, param = 0, cpu_state, cnt = 0;
+ 	int rc;
+ 
+ 	rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp,
+ 			 &param);
+-	if (rc)
++	if (rc) {
+ 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
++		return rc;
++	}
+ 
+-	return rc;
++	do {
++		msleep(QED_MCP_HALT_SLEEP_MS);
++		cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++		if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED)
++			break;
++	} while (++cnt < QED_MCP_HALT_MAX_RETRIES);
++
++	if (cnt == QED_MCP_HALT_MAX_RETRIES) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to halt the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE), cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, true);
++
++	return 0;
+ }
+ 
++#define QED_MCP_RESUME_SLEEP_MS	10
++
+ int qed_mcp_resume(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 value, cpu_mode;
++	u32 cpu_mode, cpu_state;
+ 
+ 	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_STATE, 0xffffffff);
+ 
+-	value = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
+-	value &= ~MCP_REG_CPU_MODE_SOFT_HALT;
+-	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, value);
+ 	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
++	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
++	msleep(QED_MCP_RESUME_SLEEP_MS);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+ 
+-	return (cpu_mode & MCP_REG_CPU_MODE_SOFT_HALT) ? -EAGAIN : 0;
++	if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to resume the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  cpu_mode, cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, false);
++
++	return 0;
+ }
+ 
+ int qed_mcp_ov_update_current_config(struct qed_hwfn *p_hwfn,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+index 632a838f1fe3..ce2e617d2cab 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+@@ -635,11 +635,14 @@ struct qed_mcp_info {
+ 	 */
+ 	spinlock_t				cmd_lock;
+ 
++	/* Flag to indicate whether sending a MFW mailbox command is blocked */
++	bool					b_block_cmd;
++
+ 	/* Spinlock used for syncing SW link-changes and link-changes
+ 	 * originating from attention context.
+ 	 */
+ 	spinlock_t				link_lock;
+-	bool					block_mb_sending;
++
+ 	u32					public_base;
+ 	u32					drv_mb_addr;
+ 	u32					mfw_mb_addr;
+@@ -660,14 +663,20 @@ struct qed_mcp_info {
+ };
+ 
+ struct qed_mcp_mb_params {
+-	u32			cmd;
+-	u32			param;
+-	void			*p_data_src;
+-	u8			data_src_size;
+-	void			*p_data_dst;
+-	u8			data_dst_size;
+-	u32			mcp_resp;
+-	u32			mcp_param;
++	u32 cmd;
++	u32 param;
++	void *p_data_src;
++	void *p_data_dst;
++	u8 data_src_size;
++	u8 data_dst_size;
++	u32 mcp_resp;
++	u32 mcp_param;
++	u32 flags;
++#define QED_MB_FLAG_CAN_SLEEP	(0x1 << 0)
++#define QED_MB_FLAG_AVOID_BLOCK	(0x1 << 1)
++#define QED_MB_FLAGS_IS_SET(params, flag) \
++	({ typeof(params) __params = (params); \
++	   (__params && (__params->flags & QED_MB_FLAG_ ## flag)); })
+ };
+ 
+ struct qed_drv_tlv_hdr {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+index d8ad2dcad8d5..f736f70956fd 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+@@ -562,8 +562,10 @@
+ 	0
+ #define MCP_REG_CPU_STATE \
+ 	0xe05004UL
++#define MCP_REG_CPU_STATE_SOFT_HALTED	(0x1UL << 10)
+ #define MCP_REG_CPU_EVENT_MASK \
+ 	0xe05008UL
++#define MCP_REG_CPU_PROGRAM_COUNTER	0xe0501cUL
+ #define PGLUE_B_REG_PF_BAR0_SIZE \
+ 	0x2aae60UL
+ #define PGLUE_B_REG_PF_BAR1_SIZE \
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index 2e5150b0b8d5..7a14e8170e82 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -40,8 +40,11 @@ static int xgmiitorgmii_read_status(struct phy_device *phydev)
+ {
+ 	struct gmii2rgmii *priv = phydev->priv;
+ 	u16 val = 0;
++	int err;
+ 
+-	priv->phy_drv->read_status(phydev);
++	err = priv->phy_drv->read_status(phydev);
++	if (err < 0)
++		return err;
+ 
+ 	val = mdiobus_read(phydev->mdio.bus, priv->addr, XILINX_GMII2RGMII_REG);
+ 	val &= ~XILINX_GMII2RGMII_SPEED_MASK;
+@@ -81,6 +84,11 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
++	if (!priv->phy_dev->drv) {
++		dev_info(dev, "Attached phy not ready\n");
++		return -EPROBE_DEFER;
++	}
++
+ 	priv->addr = mdiodev->addr;
+ 	priv->phy_drv = priv->phy_dev->drv;
+ 	memcpy(&priv->conv_phy_drv, priv->phy_dev->drv,
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 3b96a43fbda4..18c709c484e7 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -1512,7 +1512,7 @@ ath10k_ce_alloc_src_ring_64(struct ath10k *ar, unsigned int ce_id,
+ 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ 		if (ret) {
+ 			dma_free_coherent(ar->dev,
+-					  (nentries * sizeof(struct ce_desc) +
++					  (nentries * sizeof(struct ce_desc_64) +
+ 					   CE_DESC_RING_ALIGN),
+ 					  src_ring->base_addr_owner_space_unaligned,
+ 					  base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index c72d8af122a2..4d1cd90d6d27 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -268,11 +268,12 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar)
+ 	spin_lock_bh(&htt->rx_ring.lock);
+ 	ret = ath10k_htt_rx_ring_fill_n(htt, (htt->rx_ring.fill_level -
+ 					      htt->rx_ring.fill_cnt));
+-	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	if (ret)
+ 		ath10k_htt_rx_ring_free(htt);
+ 
++	spin_unlock_bh(&htt->rx_ring.lock);
++
+ 	return ret;
+ }
+ 
+@@ -284,7 +285,9 @@ void ath10k_htt_rx_free(struct ath10k_htt *htt)
+ 	skb_queue_purge(&htt->rx_in_ord_compl_q);
+ 	skb_queue_purge(&htt->tx_fetch_ind_q);
+ 
++	spin_lock_bh(&htt->rx_ring.lock);
+ 	ath10k_htt_rx_ring_free(htt);
++	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	dma_free_coherent(htt->ar->dev,
+ 			  ath10k_htt_get_rx_ring_size(htt),
+@@ -1089,7 +1092,7 @@ static void ath10k_htt_rx_h_queue_msdu(struct ath10k *ar,
+ 	status = IEEE80211_SKB_RXCB(skb);
+ 	*status = *rx_status;
+ 
+-	__skb_queue_tail(&ar->htt.rx_msdus_q, skb);
++	skb_queue_tail(&ar->htt.rx_msdus_q, skb);
+ }
+ 
+ static void ath10k_process_rx(struct ath10k *ar, struct sk_buff *skb)
+@@ -2810,7 +2813,7 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+ 		break;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_RX_IN_ORD_PADDR_IND: {
+-		__skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
++		skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
+ 		return false;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_TX_CREDIT_UPDATE_IND:
+@@ -2874,7 +2877,7 @@ static int ath10k_htt_rx_deliver_msdu(struct ath10k *ar, int quota, int budget)
+ 		if (skb_queue_empty(&ar->htt.rx_msdus_q))
+ 			break;
+ 
+-		skb = __skb_dequeue(&ar->htt.rx_msdus_q);
++		skb = skb_dequeue(&ar->htt.rx_msdus_q);
+ 		if (!skb)
+ 			break;
+ 		ath10k_process_rx(ar, skb);
+@@ -2905,7 +2908,7 @@ int ath10k_htt_txrx_compl_task(struct ath10k *ar, int budget)
+ 		goto exit;
+ 	}
+ 
+-	while ((skb = __skb_dequeue(&htt->rx_in_ord_compl_q))) {
++	while ((skb = skb_dequeue(&htt->rx_in_ord_compl_q))) {
+ 		spin_lock_bh(&htt->rx_ring.lock);
+ 		ret = ath10k_htt_rx_in_ord_ind(ar, skb);
+ 		spin_unlock_bh(&htt->rx_ring.lock);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 747c6951b5c1..e0b9f7d0dfd3 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -4054,6 +4054,7 @@ void ath10k_mac_tx_push_pending(struct ath10k *ar)
+ 	rcu_read_unlock();
+ 	spin_unlock_bh(&ar->txqs_lock);
+ }
++EXPORT_SYMBOL(ath10k_mac_tx_push_pending);
+ 
+ /************/
+ /* Scanning */
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index d612ce8c9cff..299db8b1c9ba 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -30,6 +30,7 @@
+ #include "debug.h"
+ #include "hif.h"
+ #include "htc.h"
++#include "mac.h"
+ #include "targaddrs.h"
+ #include "trace.h"
+ #include "sdio.h"
+@@ -396,6 +397,7 @@ static int ath10k_sdio_mbox_rx_process_packet(struct ath10k *ar,
+ 	int ret;
+ 
+ 	payload_len = le16_to_cpu(htc_hdr->len);
++	skb->len = payload_len + sizeof(struct ath10k_htc_hdr);
+ 
+ 	if (trailer_present) {
+ 		trailer = skb->data + sizeof(*htc_hdr) +
+@@ -434,12 +436,14 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 	enum ath10k_htc_ep_id id;
+ 	int ret, i, *n_lookahead_local;
+ 	u32 *lookaheads_local;
++	int lookahead_idx = 0;
+ 
+ 	for (i = 0; i < ar_sdio->n_rx_pkts; i++) {
+ 		lookaheads_local = lookaheads;
+ 		n_lookahead_local = n_lookahead;
+ 
+-		id = ((struct ath10k_htc_hdr *)&lookaheads[i])->eid;
++		id = ((struct ath10k_htc_hdr *)
++		      &lookaheads[lookahead_idx++])->eid;
+ 
+ 		if (id >= ATH10K_HTC_EP_COUNT) {
+ 			ath10k_warn(ar, "invalid endpoint in look-ahead: %d\n",
+@@ -462,6 +466,7 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 			/* Only read lookahead's from RX trailers
+ 			 * for the last packet in a bundle.
+ 			 */
++			lookahead_idx--;
+ 			lookaheads_local = NULL;
+ 			n_lookahead_local = NULL;
+ 		}
+@@ -1342,6 +1347,8 @@ static void ath10k_sdio_irq_handler(struct sdio_func *func)
+ 			break;
+ 	} while (time_before(jiffies, timeout) && !done);
+ 
++	ath10k_mac_tx_push_pending(ar);
++
+ 	sdio_claim_host(ar_sdio->func);
+ 
+ 	if (ret && ret != -ECANCELED)
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index a3a7042fe13a..aa621bf50a91 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -449,7 +449,7 @@ static void ath10k_snoc_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
+ 
+ static void ath10k_snoc_rx_replenish_retry(struct timer_list *t)
+ {
+-	struct ath10k_pci *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
++	struct ath10k_snoc *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
+ 	struct ath10k *ar = ar_snoc->ar;
+ 
+ 	ath10k_snoc_rx_post(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index f97ab795cf2e..2319f79b34f0 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4602,10 +4602,6 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 
+ 	ev = (struct wmi_pdev_tpc_config_event *)skb->data;
+ 
+-	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
+-	if (!tpc_stats)
+-		return;
+-
+ 	num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+ 
+ 	if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
+@@ -4614,6 +4610,10 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
++	if (!tpc_stats)
++		return;
++
+ 	ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ 					    num_tx_chain);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+index b9672da24a9d..b24bc57ca91b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+@@ -213,7 +213,7 @@ static const s16 log_table[] = {
+ 	30498,
+ 	31267,
+ 	32024,
+-	32768
++	32767
+ };
+ 
+ #define LOG_TABLE_SIZE 32       /* log_table size */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index b49aea4da2d6..8985446570bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -439,15 +439,13 @@ mt76x2_mac_fill_tx_status(struct mt76x2_dev *dev,
+ 	if (last_rate < IEEE80211_TX_MAX_RATES - 1)
+ 		rate[last_rate + 1].idx = -1;
+ 
+-	cur_idx = rate[last_rate].idx + st->retry;
++	cur_idx = rate[last_rate].idx + last_rate;
+ 	for (i = 0; i <= last_rate; i++) {
+ 		rate[i].flags = rate[last_rate].flags;
+ 		rate[i].idx = max_t(int, 0, cur_idx - i);
+ 		rate[i].count = 1;
+ 	}
+-
+-	if (last_rate > 0)
+-		rate[last_rate - 1].count = st->retry + 1 - last_rate;
++	rate[last_rate].count = st->retry + 1 - last_rate;
+ 
+ 	info->status.ampdu_len = n_frames;
+ 	info->status.ampdu_ack_len = st->success ? n_frames : 0;
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 9935bd09db1f..d4947e3a909e 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -2928,6 +2928,8 @@ static void rndis_wlan_auth_indication(struct usbnet *usbdev,
+ 
+ 	while (buflen >= sizeof(*auth_req)) {
+ 		auth_req = (void *)buf;
++		if (buflen < le32_to_cpu(auth_req->length))
++			return;
+ 		type = "unknown";
+ 		flags = le32_to_cpu(auth_req->flags);
+ 		pairwise_error = false;
+diff --git a/drivers/net/wireless/ti/wlcore/cmd.c b/drivers/net/wireless/ti/wlcore/cmd.c
+index 761cf8573a80..f48c3f62966d 100644
+--- a/drivers/net/wireless/ti/wlcore/cmd.c
++++ b/drivers/net/wireless/ti/wlcore/cmd.c
+@@ -35,6 +35,7 @@
+ #include "wl12xx_80211.h"
+ #include "cmd.h"
+ #include "event.h"
++#include "ps.h"
+ #include "tx.h"
+ #include "hw_ops.h"
+ 
+@@ -191,6 +192,10 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 
+ 	timeout_time = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT);
+ 
++	ret = wl1271_ps_elp_wakeup(wl);
++	if (ret < 0)
++		return ret;
++
+ 	do {
+ 		if (time_after(jiffies, timeout_time)) {
+ 			wl1271_debug(DEBUG_CMD, "timeout waiting for event %d",
+@@ -222,6 +227,7 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 	} while (!event);
+ 
+ out:
++	wl1271_ps_elp_sleep(wl);
+ 	kfree(events_vector);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 34712def81b1..5251689a1d9a 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -311,7 +311,7 @@ fcloop_tgt_lsrqst_done_work(struct work_struct *work)
+ 	struct fcloop_tport *tport = tls_req->tport;
+ 	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+ 
+-	if (tport->remoteport)
++	if (!tport || tport->remoteport)
+ 		lsreq->done(lsreq, tls_req->status);
+ }
+ 
+@@ -329,6 +329,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
+ 
+ 	if (!rport->targetport) {
+ 		tls_req->status = -ECONNREFUSED;
++		tls_req->tport = NULL;
+ 		schedule_work(&tls_req->work);
+ 		return ret;
+ 	}
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index ef0b1b6ba86f..12afa7fdf77e 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -457,17 +457,18 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+ /**
+  * enable_slot - enable, configure a slot
+  * @slot: slot to be enabled
++ * @bridge: true if enable is for the whole bridge (not a single slot)
+  *
+  * This function should be called per *physical slot*,
+  * not per each slot object in ACPI namespace.
+  */
+-static void enable_slot(struct acpiphp_slot *slot)
++static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ {
+ 	struct pci_dev *dev;
+ 	struct pci_bus *bus = slot->bus;
+ 	struct acpiphp_func *func;
+ 
+-	if (bus->self && hotplug_is_native(bus->self)) {
++	if (bridge && bus->self && hotplug_is_native(bus->self)) {
+ 		/*
+ 		 * If native hotplug is used, it will take care of hotplug
+ 		 * slot management and resource allocation for hotplug
+@@ -701,7 +702,7 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
+ 					trim_stale_devices(dev);
+ 
+ 			/* configure all functions */
+-			enable_slot(slot);
++			enable_slot(slot, true);
+ 		} else {
+ 			disable_slot(slot);
+ 		}
+@@ -785,7 +786,7 @@ static void hotplug_event(u32 type, struct acpiphp_context *context)
+ 		if (bridge)
+ 			acpiphp_check_bridge(bridge);
+ 		else if (!(slot->flags & SLOT_IS_GOING_AWAY))
+-			enable_slot(slot);
++			enable_slot(slot, false);
+ 
+ 		break;
+ 
+@@ -973,7 +974,7 @@ int acpiphp_enable_slot(struct acpiphp_slot *slot)
+ 
+ 	/* configure all functions */
+ 	if (!(slot->flags & SLOT_ENABLED))
+-		enable_slot(slot);
++		enable_slot(slot, false);
+ 
+ 	pci_unlock_rescan_remove();
+ 	return 0;
+diff --git a/drivers/platform/x86/asus-wireless.c b/drivers/platform/x86/asus-wireless.c
+index 6afd011de9e5..b8e35a8d65cf 100644
+--- a/drivers/platform/x86/asus-wireless.c
++++ b/drivers/platform/x86/asus-wireless.c
+@@ -52,13 +52,12 @@ static const struct acpi_device_id device_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(acpi, device_ids);
+ 
+-static u64 asus_wireless_method(acpi_handle handle, const char *method,
+-				int param)
++static acpi_status asus_wireless_method(acpi_handle handle, const char *method,
++					int param, u64 *ret)
+ {
+ 	struct acpi_object_list p;
+ 	union acpi_object obj;
+ 	acpi_status s;
+-	u64 ret;
+ 
+ 	acpi_handle_debug(handle, "Evaluating method %s, parameter %#x\n",
+ 			  method, param);
+@@ -67,24 +66,27 @@ static u64 asus_wireless_method(acpi_handle handle, const char *method,
+ 	p.count = 1;
+ 	p.pointer = &obj;
+ 
+-	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, &ret);
++	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, ret);
+ 	if (ACPI_FAILURE(s))
+ 		acpi_handle_err(handle,
+ 				"Failed to eval method %s, param %#x (%d)\n",
+ 				method, param, s);
+-	acpi_handle_debug(handle, "%s returned %#llx\n", method, ret);
+-	return ret;
++	else
++		acpi_handle_debug(handle, "%s returned %#llx\n", method, *ret);
++
++	return s;
+ }
+ 
+ static enum led_brightness led_state_get(struct led_classdev *led)
+ {
+ 	struct asus_wireless_data *data;
+-	int s;
++	acpi_status s;
++	u64 ret;
+ 
+ 	data = container_of(led, struct asus_wireless_data, led);
+ 	s = asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-				 data->hswc_params->status);
+-	if (s == data->hswc_params->on)
++				 data->hswc_params->status, &ret);
++	if (ACPI_SUCCESS(s) && ret == data->hswc_params->on)
+ 		return LED_FULL;
+ 	return LED_OFF;
+ }
+@@ -92,10 +94,11 @@ static enum led_brightness led_state_get(struct led_classdev *led)
+ static void led_state_update(struct work_struct *work)
+ {
+ 	struct asus_wireless_data *data;
++	u64 ret;
+ 
+ 	data = container_of(work, struct asus_wireless_data, led_work);
+ 	asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-			     data->led_state);
++			     data->led_state, &ret);
+ }
+ 
+ static void led_state_set(struct led_classdev *led, enum led_brightness value)
+diff --git a/drivers/power/reset/vexpress-poweroff.c b/drivers/power/reset/vexpress-poweroff.c
+index 102f95a09460..e9e749f87517 100644
+--- a/drivers/power/reset/vexpress-poweroff.c
++++ b/drivers/power/reset/vexpress-poweroff.c
+@@ -35,6 +35,7 @@ static void vexpress_reset_do(struct device *dev, const char *what)
+ }
+ 
+ static struct device *vexpress_power_off_device;
++static atomic_t vexpress_restart_nb_refcnt = ATOMIC_INIT(0);
+ 
+ static void vexpress_power_off(void)
+ {
+@@ -99,10 +100,13 @@ static int _vexpress_register_restart_handler(struct device *dev)
+ 	int err;
+ 
+ 	vexpress_restart_device = dev;
+-	err = register_restart_handler(&vexpress_restart_nb);
+-	if (err) {
+-		dev_err(dev, "cannot register restart handler (err=%d)\n", err);
+-		return err;
++	if (atomic_inc_return(&vexpress_restart_nb_refcnt) == 1) {
++		err = register_restart_handler(&vexpress_restart_nb);
++		if (err) {
++			dev_err(dev, "cannot register restart handler (err=%d)\n", err);
++			atomic_dec(&vexpress_restart_nb_refcnt);
++			return err;
++		}
+ 	}
+ 	device_create_file(dev, &dev_attr_active);
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 6e1bc14c3304..735658ee1c60 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -718,7 +718,7 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 	}
+ 
+ 	/* Determine charge current limit */
+-	cc = (ret & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
++	cc = (val & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
+ 	cc = (cc * CHRG_CCCV_CC_LSB_RES) + CHRG_CCCV_CC_OFFSET;
+ 	info->cc = cc;
+ 
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index d21f478741c1..e85361878450 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/notifier.h>
+ #include <linux/err.h>
+@@ -140,8 +141,13 @@ static void power_supply_deferred_register_work(struct work_struct *work)
+ 	struct power_supply *psy = container_of(work, struct power_supply,
+ 						deferred_register_work.work);
+ 
+-	if (psy->dev.parent)
+-		mutex_lock(&psy->dev.parent->mutex);
++	if (psy->dev.parent) {
++		while (!mutex_trylock(&psy->dev.parent->mutex)) {
++			if (psy->removing)
++				return;
++			msleep(10);
++		}
++	}
+ 
+ 	power_supply_changed(psy);
+ 
+@@ -1082,6 +1088,7 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register_no_ws);
+ void power_supply_unregister(struct power_supply *psy)
+ {
+ 	WARN_ON(atomic_dec_return(&psy->use_cnt));
++	psy->removing = true;
+ 	cancel_work_sync(&psy->changed_work);
+ 	cancel_delayed_work_sync(&psy->deferred_register_work);
+ 	sysfs_remove_link(&psy->dev.kobj, "powers");
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6ed568b96c0e..cc1450c53fb2 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3147,7 +3147,7 @@ static inline int regulator_suspend_toggle(struct regulator_dev *rdev,
+ 	if (!rstate->changeable)
+ 		return -EPERM;
+ 
+-	rstate->enabled = en;
++	rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND;
+ 
+ 	return 0;
+ }
+@@ -4381,13 +4381,13 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ 	    !rdev->desc->fixed_uV)
+ 		rdev->is_switch = true;
+ 
++	dev_set_drvdata(&rdev->dev, rdev);
+ 	ret = device_register(&rdev->dev);
+ 	if (ret != 0) {
+ 		put_device(&rdev->dev);
+ 		goto unset_supplies;
+ 	}
+ 
+-	dev_set_drvdata(&rdev->dev, rdev);
+ 	rdev_init_debugfs(rdev);
+ 
+ 	/* try to resolve regulators supply since a new one was registered */
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 638f17d4c848..210fc20f7de7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -213,8 +213,6 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 		else if (of_property_read_bool(suspend_np,
+ 					"regulator-off-in-suspend"))
+ 			suspend_state->enabled = DISABLE_IN_SUSPEND;
+-		else
+-			suspend_state->enabled = DO_NOTHING_IN_SUSPEND;
+ 
+ 		if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+ 					  &pval))
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index a9f60d0ee02e..7c732414367f 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3127,6 +3127,7 @@ static int dasd_alloc_queue(struct dasd_block *block)
+ 	block->tag_set.nr_hw_queues = nr_hw_queues;
+ 	block->tag_set.queue_depth = queue_depth;
+ 	block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	block->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	rc = blk_mq_alloc_tag_set(&block->tag_set);
+ 	if (rc)
+diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
+index b1fcb76dd272..98f66b7b6794 100644
+--- a/drivers/s390/block/scm_blk.c
++++ b/drivers/s390/block/scm_blk.c
+@@ -455,6 +455,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
+ 	bdev->tag_set.nr_hw_queues = nr_requests;
+ 	bdev->tag_set.queue_depth = nr_requests_per_io * nr_requests;
+ 	bdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	bdev->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	ret = blk_mq_alloc_tag_set(&bdev->tag_set);
+ 	if (ret)
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 8f03a869ac98..e9e669a6c2bc 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -2727,6 +2727,8 @@ int bnx2i_map_ep_dbell_regs(struct bnx2i_endpoint *ep)
+ 					      BNX2X_DOORBELL_PCI_BAR);
+ 		reg_off = (1 << BNX2X_DB_SHIFT) * (cid_num & 0x1FFFF);
+ 		ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 4);
++		if (!ep->qp.ctx_base)
++			return -ENOMEM;
+ 		goto arm_cq;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 7052a5d45f7f..78e5a9254143 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -277,6 +277,7 @@ struct hisi_hba {
+ 
+ 	int n_phy;
+ 	spinlock_t lock;
++	struct semaphore sem;
+ 
+ 	struct timer_list timer;
+ 	struct workqueue_struct *wq;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6f562974f8f6..bfbd2fb7e69e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -914,7 +914,9 @@ static void hisi_sas_dev_gone(struct domain_device *device)
+ 
+ 		hisi_sas_dereg_device(hisi_hba, device);
+ 
++		down(&hisi_hba->sem);
+ 		hisi_hba->hw->clear_itct(hisi_hba, sas_dev);
++		up(&hisi_hba->sem);
+ 		device->lldd_dev = NULL;
+ 	}
+ 
+@@ -1364,6 +1366,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+ 		return -1;
+ 
++	down(&hisi_hba->sem);
+ 	dev_info(dev, "controller resetting...\n");
+ 	old_state = hisi_hba->hw->get_phys_state(hisi_hba);
+ 
+@@ -1378,6 +1381,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (rc) {
+ 		dev_warn(dev, "controller reset failed (%d)\n", rc);
+ 		clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
++		up(&hisi_hba->sem);
+ 		scsi_unblock_requests(shost);
+ 		goto out;
+ 	}
+@@ -1388,6 +1392,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	hisi_hba->hw->phys_init(hisi_hba);
+ 	msleep(1000);
+ 	hisi_sas_refresh_port_id(hisi_hba);
++	up(&hisi_hba->sem);
+ 
+ 	if (hisi_hba->reject_stp_links_msk)
+ 		hisi_sas_terminate_stp_reject(hisi_hba);
+@@ -2016,6 +2021,7 @@ int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
+ 	struct device *dev = hisi_hba->dev;
+ 	int i, s, max_command_entries = hisi_hba->hw->max_command_entries;
+ 
++	sema_init(&hisi_hba->sem, 1);
+ 	spin_lock_init(&hisi_hba->lock);
+ 	for (i = 0; i < hisi_hba->n_phy; i++) {
+ 		hisi_sas_phy_init(hisi_hba, i);
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 17df76f0be3c..67a2c844e30d 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -93,7 +93,7 @@ static int max_requests = IBMVSCSI_MAX_REQUESTS_DEFAULT;
+ static int max_events = IBMVSCSI_MAX_REQUESTS_DEFAULT + 2;
+ static int fast_fail = 1;
+ static int client_reserve = 1;
+-static char partition_name[97] = "UNKNOWN";
++static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
+ 
+@@ -262,7 +262,7 @@ static void gather_partition_info(void)
+ 
+ 	ppartition_name = of_get_property(of_root, "ibm,partition-name", NULL);
+ 	if (ppartition_name)
+-		strncpy(partition_name, ppartition_name,
++		strlcpy(partition_name, ppartition_name,
+ 				sizeof(partition_name));
+ 	p_number_ptr = of_get_property(of_root, "ibm,partition-no", NULL);
+ 	if (p_number_ptr)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 71d97573a667..8e84e3fb648a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6789,6 +6789,9 @@ megasas_resume(struct pci_dev *pdev)
+ 			goto fail_init_mfi;
+ 	}
+ 
++	if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS)
++		goto fail_init_mfi;
++
+ 	tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
+ 		     (unsigned long)instance);
+ 
+diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
+index 16590dfaafa4..cef307c0399c 100644
+--- a/drivers/siox/siox-core.c
++++ b/drivers/siox/siox-core.c
+@@ -715,17 +715,17 @@ int siox_master_register(struct siox_master *smaster)
+ 
+ 	dev_set_name(&smaster->dev, "siox-%d", smaster->busno);
+ 
++	mutex_init(&smaster->lock);
++	INIT_LIST_HEAD(&smaster->devices);
++
+ 	smaster->last_poll = jiffies;
+-	smaster->poll_thread = kthread_create(siox_poll_thread, smaster,
+-					      "siox-%d", smaster->busno);
++	smaster->poll_thread = kthread_run(siox_poll_thread, smaster,
++					   "siox-%d", smaster->busno);
+ 	if (IS_ERR(smaster->poll_thread)) {
+ 		smaster->active = 0;
+ 		return PTR_ERR(smaster->poll_thread);
+ 	}
+ 
+-	mutex_init(&smaster->lock);
+-	INIT_LIST_HEAD(&smaster->devices);
+-
+ 	ret = device_add(&smaster->dev);
+ 	if (ret)
+ 		kthread_stop(smaster->poll_thread);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index d01a6adc726e..47ef6b1a2e76 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -20,6 +20,7 @@
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_gpio.h>
+ #include <linux/clk.h>
+ #include <linux/sizes.h>
+ #include <linux/gpio.h>
+@@ -681,9 +682,9 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 		goto out_rel_axi_clk;
+ 	}
+ 
+-	/* Scan all SPI devices of this controller for direct mapped devices */
+ 	for_each_available_child_of_node(pdev->dev.of_node, np) {
+ 		u32 cs;
++		int cs_gpio;
+ 
+ 		/* Get chip-select number from the "reg" property */
+ 		status = of_property_read_u32(np, "reg", &cs);
+@@ -694,6 +695,44 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
++		/*
++		 * Initialize the CS GPIO:
++		 * - properly request the actual GPIO signal
++		 * - de-assert the logical signal so that all GPIO CS lines
++		 *   are inactive when probing for slaves
++		 * - find an unused physical CS which will be driven for any
++		 *   slave which uses a CS GPIO
++		 */
++		cs_gpio = of_get_named_gpio(pdev->dev.of_node, "cs-gpios", cs);
++		if (cs_gpio > 0) {
++			char *gpio_name;
++			int cs_flags;
++
++			if (spi->unused_hw_gpio == -1) {
++				dev_info(&pdev->dev,
++					"Selected unused HW CS#%d for any GPIO CSes\n",
++					cs);
++				spi->unused_hw_gpio = cs;
++			}
++
++			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++					"%s-CS%d", dev_name(&pdev->dev), cs);
++			if (!gpio_name) {
++				status = -ENOMEM;
++				goto out_rel_axi_clk;
++			}
++
++			cs_flags = of_property_read_bool(np, "spi-cs-high") ?
++				GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH;
++			status = devm_gpio_request_one(&pdev->dev, cs_gpio,
++					cs_flags, gpio_name);
++			if (status) {
++				dev_err(&pdev->dev,
++					"Can't request GPIO for CS %d\n", cs);
++				goto out_rel_axi_clk;
++			}
++		}
++
+ 		/*
+ 		 * Check if an address is configured for this SPI device. If
+ 		 * not, the MBus mapping via the 'ranges' property in the 'soc'
+@@ -740,44 +779,8 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 	if (status < 0)
+ 		goto out_rel_pm;
+ 
+-	if (master->cs_gpios) {
+-		int i;
+-		for (i = 0; i < master->num_chipselect; ++i) {
+-			char *gpio_name;
+-
+-			if (!gpio_is_valid(master->cs_gpios[i])) {
+-				continue;
+-			}
+-
+-			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+-					"%s-CS%d", dev_name(&pdev->dev), i);
+-			if (!gpio_name) {
+-				status = -ENOMEM;
+-				goto out_rel_master;
+-			}
+-
+-			status = devm_gpio_request(&pdev->dev,
+-					master->cs_gpios[i], gpio_name);
+-			if (status) {
+-				dev_err(&pdev->dev,
+-					"Can't request GPIO for CS %d\n",
+-					master->cs_gpios[i]);
+-				goto out_rel_master;
+-			}
+-			if (spi->unused_hw_gpio == -1) {
+-				dev_info(&pdev->dev,
+-					"Selected unused HW CS#%d for any GPIO CSes\n",
+-					i);
+-				spi->unused_hw_gpio = i;
+-			}
+-		}
+-	}
+-
+-
+ 	return status;
+ 
+-out_rel_master:
+-	spi_unregister_master(master);
+ out_rel_pm:
+ 	pm_runtime_disable(&pdev->dev);
+ out_rel_axi_clk:
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index 95dc4d78618d..b37de1d991d6 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -598,11 +598,13 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
+ 
+ 	ret = wait_event_interruptible_timeout(rspi->wait,
+ 					       rspi->dma_callbacked, HZ);
+-	if (ret > 0 && rspi->dma_callbacked)
++	if (ret > 0 && rspi->dma_callbacked) {
+ 		ret = 0;
+-	else if (!ret) {
+-		dev_err(&rspi->master->dev, "DMA timeout\n");
+-		ret = -ETIMEDOUT;
++	} else {
++		if (!ret) {
++			dev_err(&rspi->master->dev, "DMA timeout\n");
++			ret = -ETIMEDOUT;
++		}
+ 		if (tx)
+ 			dmaengine_terminate_all(rspi->master->dma_tx);
+ 		if (rx)
+@@ -1350,12 +1352,36 @@ static const struct platform_device_id spi_driver_ids[] = {
+ 
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int rspi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(rspi->master);
++}
++
++static int rspi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_resume(rspi->master);
++}
++
++static SIMPLE_DEV_PM_OPS(rspi_pm_ops, rspi_suspend, rspi_resume);
++#define DEV_PM_OPS	&rspi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver rspi_driver = {
+ 	.probe =	rspi_probe,
+ 	.remove =	rspi_remove,
+ 	.id_table =	spi_driver_ids,
+ 	.driver		= {
+ 		.name = "renesas_spi",
++		.pm = DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(rspi_of_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 0e74cbf9929d..37364c634fef 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -396,7 +396,8 @@ static void sh_msiof_spi_set_mode_regs(struct sh_msiof_spi_priv *p,
+ 
+ static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p)
+ {
+-	sh_msiof_write(p, STR, sh_msiof_read(p, STR));
++	sh_msiof_write(p, STR,
++		       sh_msiof_read(p, STR) & ~(STR_TDREQ | STR_RDREQ));
+ }
+ 
+ static void sh_msiof_spi_write_fifo_8(struct sh_msiof_spi_priv *p,
+@@ -1421,12 +1422,37 @@ static const struct platform_device_id spi_driver_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int sh_msiof_spi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(p->master);
++}
++
++static int sh_msiof_spi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_resume(p->master);
++}
++
++static SIMPLE_DEV_PM_OPS(sh_msiof_spi_pm_ops, sh_msiof_spi_suspend,
++			 sh_msiof_spi_resume);
++#define DEV_PM_OPS	&sh_msiof_spi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver sh_msiof_spi_drv = {
+ 	.probe		= sh_msiof_spi_probe,
+ 	.remove		= sh_msiof_spi_remove,
+ 	.id_table	= spi_driver_ids,
+ 	.driver		= {
+ 		.name		= "spi_sh_msiof",
++		.pm		= DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(sh_msiof_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 6f7b946b5ced..1427f343b39a 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1063,6 +1063,24 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 		goto exit_free_master;
+ 	}
+ 
++	/* disabled clock may cause interrupt storm upon request */
++	tspi->clk = devm_clk_get(&pdev->dev, NULL);
++	if (IS_ERR(tspi->clk)) {
++		ret = PTR_ERR(tspi->clk);
++		dev_err(&pdev->dev, "Can not get clock %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_prepare(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_enable(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock enable failed %d\n", ret);
++		goto exit_free_master;
++	}
++
+ 	spi_irq = platform_get_irq(pdev, 0);
+ 	tspi->irq = spi_irq;
+ 	ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
+@@ -1071,14 +1089,7 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+ 					tspi->irq);
+-		goto exit_free_master;
+-	}
+-
+-	tspi->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(tspi->clk)) {
+-		dev_err(&pdev->dev, "can not get clock\n");
+-		ret = PTR_ERR(tspi->clk);
+-		goto exit_free_irq;
++		goto exit_clk_disable;
+ 	}
+ 
+ 	tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+@@ -1138,6 +1149,8 @@ exit_rx_dma_free:
+ 	tegra_slink_deinit_dma_param(tspi, true);
+ exit_free_irq:
+ 	free_irq(spi_irq, tspi);
++exit_clk_disable:
++	clk_disable(tspi->clk);
+ exit_free_master:
+ 	spi_master_put(master);
+ 	return ret;
+@@ -1150,6 +1163,8 @@ static int tegra_slink_remove(struct platform_device *pdev)
+ 
+ 	free_irq(tspi->irq, tspi);
+ 
++	clk_disable(tspi->clk);
++
+ 	if (tspi->tx_dma_chan)
+ 		tegra_slink_deinit_dma_param(tspi, false);
+ 
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index d5d33e12e952..716573c21579 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -366,6 +366,12 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ 		goto out;
+ 	}
+ 
++	/* requested mapping size larger than object size */
++	if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	/* requested protection bits must match our allowed protection mask */
+ 	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask, 0)) &
+ 		     calc_vm_prot_bits(PROT_MASK, 0))) {
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index ae453fd422f0..ffeb017c73b2 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -210,6 +210,7 @@ static void prp_vb2_buf_done(struct prp_priv *priv, struct ipuv3_channel *ch)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 95d7805f3485..0e963c24af37 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -236,6 +236,7 @@ static void csi_vb2_buf_done(struct csi_priv *priv)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index 6b13d85d9d34..87555600195f 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -113,6 +113,8 @@
+ };
+ 
+ &pcie {
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie_pins>;
+ 	status = "okay";
+ };
+ 
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index eb3966b7f033..ce6b43639079 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -447,31 +447,28 @@
+ 		clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
+ 		clock-names = "pcie0", "pcie1", "pcie2";
+ 
+-		pcie0 {
++		pcie@0,0 {
+ 			reg = <0x0000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie1 {
++		pcie@1,0 {
+ 			reg = <0x0800 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie2 {
++		pcie@2,0 {
+ 			reg = <0x1000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 	};
+ };
+diff --git a/drivers/staging/mt7621-eth/mtk_eth_soc.c b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+index 2c7a2e666bfb..381d9d270bf5 100644
+--- a/drivers/staging/mt7621-eth/mtk_eth_soc.c
++++ b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+@@ -2012,8 +2012,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		mac->hw_stats = devm_kzalloc(eth->dev,
+ 					     sizeof(*mac->hw_stats),
+ 					     GFP_KERNEL);
+-		if (!mac->hw_stats)
+-			return -ENOMEM;
++		if (!mac->hw_stats) {
++			err = -ENOMEM;
++			goto free_netdev;
++		}
+ 		spin_lock_init(&mac->hw_stats->stats_lock);
+ 		mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
+ 	}
+@@ -2037,7 +2039,8 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 	err = register_netdev(eth->netdev[id]);
+ 	if (err) {
+ 		dev_err(eth->dev, "error bringing up device\n");
+-		return err;
++		err = -ENOMEM;
++		goto free_netdev;
+ 	}
+ 	eth->netdev[id]->irq = eth->irq;
+ 	netif_info(eth, probe, eth->netdev[id],
+@@ -2045,6 +2048,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		   eth->netdev[id]->base_addr, eth->netdev[id]->irq);
+ 
+ 	return 0;
++
++free_netdev:
++	free_netdev(eth->netdev[id]);
++	return err;
+ }
+ 
+ static int mtk_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c
+index b061f77dda41..94e0bfcec991 100644
+--- a/drivers/staging/pi433/pi433_if.c
++++ b/drivers/staging/pi433/pi433_if.c
+@@ -880,6 +880,7 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	int			retval = 0;
+ 	struct pi433_instance	*instance;
+ 	struct pi433_device	*device;
++	struct pi433_tx_cfg	tx_cfg;
+ 	void __user *argp = (void __user *)arg;
+ 
+ 	/* Check type and command number */
+@@ -902,9 +903,11 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 			return -EFAULT;
+ 		break;
+ 	case PI433_IOC_WR_TX_CFG:
+-		if (copy_from_user(&instance->tx_cfg, argp,
+-				   sizeof(struct pi433_tx_cfg)))
++		if (copy_from_user(&tx_cfg, argp, sizeof(struct pi433_tx_cfg)))
+ 			return -EFAULT;
++		mutex_lock(&device->tx_fifo_lock);
++		memcpy(&instance->tx_cfg, &tx_cfg, sizeof(struct pi433_tx_cfg));
++		mutex_unlock(&device->tx_fifo_lock);
+ 		break;
+ 	case PI433_IOC_RD_RX_CFG:
+ 		if (copy_to_user(argp, &device->rx_cfg,
+diff --git a/drivers/staging/rts5208/sd.c b/drivers/staging/rts5208/sd.c
+index d548bc695f9e..0421dd9277a8 100644
+--- a/drivers/staging/rts5208/sd.c
++++ b/drivers/staging/rts5208/sd.c
+@@ -4996,7 +4996,7 @@ int sd_execute_write_data(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 			goto sd_execute_write_cmd_failed;
+ 		}
+ 
+-		rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
++		retval = rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
+ 		if (retval != STATUS_SUCCESS) {
+ 			rtsx_trace(chip);
+ 			goto sd_execute_write_cmd_failed;
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 4b34f71547c6..101d62105c93 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -636,8 +636,7 @@ int iscsit_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+ 		none = strstr(buf1, NONE);
+ 		if (none)
+ 			goto out;
+-		strncat(buf1, ",", strlen(","));
+-		strncat(buf1, NONE, strlen(NONE));
++		strlcat(buf1, "," NONE, sizeof(buf1));
+ 		if (iscsi_update_param_value(param, buf1) < 0)
+ 			return -EINVAL;
+ 	}
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index e27db4d45a9d..06c9886e556c 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -904,14 +904,20 @@ struct se_device *target_find_device(int id, bool do_depend)
+ EXPORT_SYMBOL(target_find_device);
+ 
+ struct devices_idr_iter {
++	struct config_item *prev_item;
+ 	int (*fn)(struct se_device *dev, void *data);
+ 	void *data;
+ };
+ 
+ static int target_devices_idr_iter(int id, void *p, void *data)
++	 __must_hold(&device_mutex)
+ {
+ 	struct devices_idr_iter *iter = data;
+ 	struct se_device *dev = p;
++	int ret;
++
++	config_item_put(iter->prev_item);
++	iter->prev_item = NULL;
+ 
+ 	/*
+ 	 * We add the device early to the idr, so it can be used
+@@ -922,7 +928,15 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ 	if (!(dev->dev_flags & DF_CONFIGURED))
+ 		return 0;
+ 
+-	return iter->fn(dev, iter->data);
++	iter->prev_item = config_item_get_unless_zero(&dev->dev_group.cg_item);
++	if (!iter->prev_item)
++		return 0;
++	mutex_unlock(&device_mutex);
++
++	ret = iter->fn(dev, iter->data);
++
++	mutex_lock(&device_mutex);
++	return ret;
+ }
+ 
+ /**
+@@ -936,15 +950,13 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
+ 			   void *data)
+ {
+-	struct devices_idr_iter iter;
++	struct devices_idr_iter iter = { .fn = fn, .data = data };
+ 	int ret;
+ 
+-	iter.fn = fn;
+-	iter.data = data;
+-
+ 	mutex_lock(&device_mutex);
+ 	ret = idr_for_each(&devices_idr, target_devices_idr_iter, &iter);
+ 	mutex_unlock(&device_mutex);
++	config_item_put(iter.prev_item);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 334d98be03b9..b1f82d64253e 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -604,7 +604,10 @@ static int imx_init_from_nvmem_cells(struct platform_device *pdev)
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "calib", &val);
+ 	if (ret)
+ 		return ret;
+-	imx_init_calib(pdev, val);
++
++	ret = imx_init_calib(pdev, val);
++	if (ret)
++		return ret;
+ 
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "temp_grade", &val);
+ 	if (ret)
+diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c
+index 977a8307fbb1..4f2816559205 100644
+--- a/drivers/thermal/of-thermal.c
++++ b/drivers/thermal/of-thermal.c
+@@ -260,10 +260,13 @@ static int of_thermal_set_mode(struct thermal_zone_device *tz,
+ 
+ 	mutex_lock(&tz->lock);
+ 
+-	if (mode == THERMAL_DEVICE_ENABLED)
++	if (mode == THERMAL_DEVICE_ENABLED) {
+ 		tz->polling_delay = data->polling_delay;
+-	else
++		tz->passive_delay = data->passive_delay;
++	} else {
+ 		tz->polling_delay = 0;
++		tz->passive_delay = 0;
++	}
+ 
+ 	mutex_unlock(&tz->lock);
+ 
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 9963a766dcfb..c8186a05a453 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -638,8 +638,10 @@ static int serial_config(struct pcmcia_device *link)
+ 	    (link->has_func_id) &&
+ 	    (link->socket->pcmcia_pfc == 0) &&
+ 	    ((link->func_id == CISTPL_FUNCID_MULTI) ||
+-	     (link->func_id == CISTPL_FUNCID_SERIAL)))
+-		pcmcia_loop_config(link, serial_check_for_multi, info);
++	     (link->func_id == CISTPL_FUNCID_SERIAL))) {
++		if (pcmcia_loop_config(link, serial_check_for_multi, info))
++			goto failed;
++	}
+ 
+ 	/*
+ 	 * Apply any multi-port quirk.
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index 24a5f05e769b..e5389591bb4f 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1054,8 +1054,8 @@ static int poll_wait_key(char *obuf, struct uart_cpm_port *pinfo)
+ 	/* Get the address of the host memory buffer.
+ 	 */
+ 	bdp = pinfo->rx_cur;
+-	while (bdp->cbd_sc & BD_SC_EMPTY)
+-		;
++	if (bdp->cbd_sc & BD_SC_EMPTY)
++		return NO_POLL_CHAR;
+ 
+ 	/* If the buffer address is in the CPM DPRAM, don't
+ 	 * convert it.
+@@ -1090,7 +1090,11 @@ static int cpm_get_poll_char(struct uart_port *port)
+ 		poll_chars = 0;
+ 	}
+ 	if (poll_chars <= 0) {
+-		poll_chars = poll_wait_key(poll_buf, pinfo);
++		int ret = poll_wait_key(poll_buf, pinfo);
++
++		if (ret == NO_POLL_CHAR)
++			return ret;
++		poll_chars = ret;
+ 		pollp = poll_buf;
+ 	}
+ 	poll_chars--;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 51e47a63d61a..3f8d1274fc85 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -979,7 +979,8 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ 	struct circ_buf *ring = &sport->rx_ring;
+ 	int ret, nent;
+ 	int bits, baud;
+-	struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port);
++	struct tty_port *port = &sport->port.state->port;
++	struct tty_struct *tty = port->tty;
+ 	struct ktermios *termios = &tty->termios;
+ 
+ 	baud = tty_get_baud_rate(tty);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 4e853570ea80..554a69db1bca 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2350,6 +2350,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 				ret);
+ 			return ret;
+ 		}
++
++		ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
++				       dev_name(&pdev->dev), sport);
++		if (ret) {
++			dev_err(&pdev->dev, "failed to request rts irq: %d\n",
++				ret);
++			return ret;
++		}
+ 	} else {
+ 		ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ 				       dev_name(&pdev->dev), sport);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index d04b5eeea3c6..170e446a2f62 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -511,6 +511,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR);
+ 		termios->c_cflag &= CREAD | CBAUD;
+ 		termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD);
++		termios->c_cflag |= CS8;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+diff --git a/drivers/tty/serial/pxa.c b/drivers/tty/serial/pxa.c
+index eda3c7710d6a..4932b674f7ef 100644
+--- a/drivers/tty/serial/pxa.c
++++ b/drivers/tty/serial/pxa.c
+@@ -887,7 +887,8 @@ static int serial_pxa_probe(struct platform_device *dev)
+ 		goto err_clk;
+ 	if (sport->port.line >= ARRAY_SIZE(serial_pxa_ports)) {
+ 		dev_err(&dev->dev, "serial%d out of range\n", sport->port.line);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_clk;
+ 	}
+ 	snprintf(sport->name, PXA_NAME_LEN - 1, "UART%d", sport->port.line + 1);
+ 
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index c181eb37f985..3c55600a8236 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2099,6 +2099,8 @@ static void sci_shutdown(struct uart_port *port)
+ 	}
+ #endif
+ 
++	if (s->rx_trigger > 1 && s->rx_fifo_timeout > 0)
++		del_timer_sync(&s->rx_fifo_timer);
+ 	sci_free_irq(s);
+ 	sci_free_dma(port);
+ }
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 632a2bfabc08..a0d284ef3f40 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
++	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/common/roles.c b/drivers/usb/common/roles.c
+index 15cc76e22123..99116af07f1d 100644
+--- a/drivers/usb/common/roles.c
++++ b/drivers/usb/common/roles.c
+@@ -109,8 +109,15 @@ static void *usb_role_switch_match(struct device_connection *con, int ep,
+  */
+ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ {
+-	return device_connection_find_match(dev, "usb-role-switch", NULL,
+-					    usb_role_switch_match);
++	struct usb_role_switch *sw;
++
++	sw = device_connection_find_match(dev, "usb-role-switch", NULL,
++					  usb_role_switch_match);
++
++	if (!IS_ERR_OR_NULL(sw))
++		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++
++	return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+ 
+@@ -122,8 +129,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+  */
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+-	if (!IS_ERR_OR_NULL(sw))
++	if (!IS_ERR_OR_NULL(sw)) {
+ 		put_device(&sw->dev);
++		module_put(sw->dev.parent->driver->owner);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_put);
+ 
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 476dcc5f2da3..e1e0c90ce569 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1433,10 +1433,13 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	struct async *as = NULL;
+ 	struct usb_ctrlrequest *dr = NULL;
+ 	unsigned int u, totlen, isofrmlen;
+-	int i, ret, is_in, num_sgs = 0, ifnum = -1;
++	int i, ret, num_sgs = 0, ifnum = -1;
+ 	int number_of_packets = 0;
+ 	unsigned int stream_id = 0;
+ 	void *buf;
++	bool is_in;
++	bool allow_short = false;
++	bool allow_zero = false;
+ 	unsigned long mask =	USBDEVFS_URB_SHORT_NOT_OK |
+ 				USBDEVFS_URB_BULK_CONTINUATION |
+ 				USBDEVFS_URB_NO_FSBR |
+@@ -1470,6 +1473,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
++		if (is_in)
++			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1510,6 +1515,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_BULK:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		switch (usb_endpoint_type(&ep->desc)) {
+ 		case USB_ENDPOINT_XFER_CONTROL:
+ 		case USB_ENDPOINT_XFER_ISOC:
+@@ -1530,6 +1539,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		if (!usb_endpoint_xfer_int(&ep->desc))
+ 			return -EINVAL;
+  interrupt_urb:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_ISO:
+@@ -1675,14 +1688,19 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = (is_in ? URB_DIR_IN : URB_DIR_OUT);
+ 	if (uurb->flags & USBDEVFS_URB_ISO_ASAP)
+ 		u |= URB_ISO_ASAP;
+-	if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in)
++	if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
+ 		u |= URB_SHORT_NOT_OK;
+-	if (uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++	if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
+ 		u |= URB_ZERO_PACKET;
+ 	if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT)
+ 		u |= URB_NO_INTERRUPT;
+ 	as->urb->transfer_flags = u;
+ 
++	if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n");
++	if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n");
++
+ 	as->urb->transfer_buffer_length = uurb->buffer_length;
+ 	as->urb->setup_packet = (unsigned char *)dr;
+ 	dr = NULL;
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index e76e95f62f76..a1f225f077cd 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -512,7 +512,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	struct device *dev;
+ 	struct usb_device *udev;
+ 	int retval = 0;
+-	int lpm_disable_error = -ENODEV;
+ 
+ 	if (!iface)
+ 		return -ENODEV;
+@@ -533,16 +532,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 
+ 	iface->condition = USB_INTERFACE_BOUND;
+ 
+-	/* See the comment about disabling LPM in usb_probe_interface(). */
+-	if (driver->disable_hub_initiated_lpm) {
+-		lpm_disable_error = usb_unlocked_disable_lpm(udev);
+-		if (lpm_disable_error) {
+-			dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n",
+-				__func__, driver->name);
+-			return -ENOMEM;
+-		}
+-	}
+-
+ 	/* Claimed interfaces are initially inactive (suspended) and
+ 	 * runtime-PM-enabled, but only if the driver has autosuspend
+ 	 * support.  Otherwise they are marked active, to prevent the
+@@ -561,9 +550,20 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	if (device_is_registered(dev))
+ 		retval = device_bind_driver(dev);
+ 
+-	/* Attempt to re-enable USB3 LPM, if the disable was successful. */
+-	if (!lpm_disable_error)
+-		usb_unlocked_enable_lpm(udev);
++	if (retval) {
++		dev->driver = NULL;
++		usb_set_intfdata(iface, NULL);
++		iface->needs_remote_wakeup = 0;
++		iface->condition = USB_INTERFACE_UNBOUND;
++
++		/*
++		 * Unbound interfaces are always runtime-PM-disabled
++		 * and runtime-PM-suspended
++		 */
++		if (driver->supports_autosuspend)
++			pm_runtime_disable(dev);
++		pm_runtime_set_suspended(dev);
++	}
+ 
+ 	return retval;
+ }
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e77dfe5ed5ec..178d6c6063c0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -58,6 +58,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ 	quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry),
+ 			     GFP_KERNEL);
+ 	if (!quirk_list) {
++		quirk_count = 0;
+ 		mutex_unlock(&quirk_mutex);
+ 		return -ENOMEM;
+ 	}
+@@ -154,7 +155,7 @@ static struct kparam_string quirks_param_string = {
+ 	.string = quirks_param,
+ };
+ 
+-module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
++device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
+ MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks");
+ 
+ /* Lists of quirky USB devices, split in device quirks and interface quirks.
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 623be3174fb3..79d8bd7a612e 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -228,6 +228,8 @@ struct usb_host_interface *usb_find_alt_setting(
+ 	struct usb_interface_cache *intf_cache = NULL;
+ 	int i;
+ 
++	if (!config)
++		return NULL;
+ 	for (i = 0; i < config->desc.bNumInterfaces; i++) {
+ 		if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber
+ 				== iface_num) {
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index fb871eabcc10..a129d601a0c3 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -658,16 +658,6 @@ dsps_dma_controller_create(struct musb *musb, void __iomem *base)
+ 	return controller;
+ }
+ 
+-static void dsps_dma_controller_destroy(struct dma_controller *c)
+-{
+-	struct musb *musb = c->musb;
+-	struct dsps_glue *glue = dev_get_drvdata(musb->controller->parent);
+-	void __iomem *usbss_base = glue->usbss_base;
+-
+-	musb_writel(usbss_base, USBSS_IRQ_CLEARR, USBSS_IRQ_PD_COMP);
+-	cppi41_dma_controller_destroy(c);
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static void dsps_dma_controller_suspend(struct dsps_glue *glue)
+ {
+@@ -697,7 +687,7 @@ static struct musb_platform_ops dsps_ops = {
+ 
+ #ifdef CONFIG_USB_TI_CPPI41_DMA
+ 	.dma_init	= dsps_dma_controller_create,
+-	.dma_exit	= dsps_dma_controller_destroy,
++	.dma_exit	= cppi41_dma_controller_destroy,
+ #endif
+ 	.enable		= dsps_musb_enable,
+ 	.disable	= dsps_musb_disable,
+diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c
+index a31ea7e194dd..a6ebed1e0f20 100644
+--- a/drivers/usb/serial/kobil_sct.c
++++ b/drivers/usb/serial/kobil_sct.c
+@@ -393,12 +393,20 @@ static int kobil_tiocmget(struct tty_struct *tty)
+ 			  transfer_buffer_length,
+ 			  KOBIL_TIMEOUT);
+ 
+-	dev_dbg(&port->dev, "%s - Send get_status_line_state URB returns: %i. Statusline: %02x\n",
+-		__func__, result, transfer_buffer[0]);
++	dev_dbg(&port->dev, "Send get_status_line_state URB returns: %i\n",
++			result);
++	if (result < 1) {
++		if (result >= 0)
++			result = -EIO;
++		goto out_free;
++	}
++
++	dev_dbg(&port->dev, "Statusline: %02x\n", transfer_buffer[0]);
+ 
+ 	result = 0;
+ 	if ((transfer_buffer[0] & SUSBCR_GSL_DSR) != 0)
+ 		result = TIOCM_DSR;
++out_free:
+ 	kfree(transfer_buffer);
+ 	return result;
+ }
+diff --git a/drivers/usb/wusbcore/security.c b/drivers/usb/wusbcore/security.c
+index 33d2f5d7f33b..14ac8c98ac9e 100644
+--- a/drivers/usb/wusbcore/security.c
++++ b/drivers/usb/wusbcore/security.c
+@@ -217,7 +217,7 @@ int wusb_dev_sec_add(struct wusbhc *wusbhc,
+ 
+ 	result = usb_get_descriptor(usb_dev, USB_DT_SECURITY,
+ 				    0, secd, sizeof(*secd));
+-	if (result < sizeof(*secd)) {
++	if (result < (int)sizeof(*secd)) {
+ 		dev_err(dev, "Can't read security descriptor or "
+ 			"not enough data: %d\n", result);
+ 		goto out;
+diff --git a/drivers/uwb/hwa-rc.c b/drivers/uwb/hwa-rc.c
+index 9a53912bdfe9..5d3ba747ae17 100644
+--- a/drivers/uwb/hwa-rc.c
++++ b/drivers/uwb/hwa-rc.c
+@@ -873,6 +873,7 @@ error_get_version:
+ error_rc_add:
+ 	usb_put_intf(iface);
+ 	usb_put_dev(hwarc->usb_dev);
++	kfree(hwarc);
+ error_alloc:
+ 	uwb_rc_put(uwb_rc);
+ error_rc_alloc:
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 29756d88799b..6b86ca8772fb 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -396,13 +396,10 @@ static inline unsigned long busy_clock(void)
+ 	return local_clock() >> 10;
+ }
+ 
+-static bool vhost_can_busy_poll(struct vhost_dev *dev,
+-				unsigned long endtime)
++static bool vhost_can_busy_poll(unsigned long endtime)
+ {
+-	return likely(!need_resched()) &&
+-	       likely(!time_after(busy_clock(), endtime)) &&
+-	       likely(!signal_pending(current)) &&
+-	       !vhost_has_work(dev);
++	return likely(!need_resched() && !time_after(busy_clock(), endtime) &&
++		      !signal_pending(current));
+ }
+ 
+ static void vhost_net_disable_vq(struct vhost_net *n,
+@@ -434,7 +431,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
+ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 				    struct vhost_virtqueue *vq,
+ 				    struct iovec iov[], unsigned int iov_size,
+-				    unsigned int *out_num, unsigned int *in_num)
++				    unsigned int *out_num, unsigned int *in_num,
++				    bool *busyloop_intr)
+ {
+ 	unsigned long uninitialized_var(endtime);
+ 	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+@@ -443,9 +441,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 	if (r == vq->num && vq->busyloop_timeout) {
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+-		while (vhost_can_busy_poll(vq->dev, endtime) &&
+-		       vhost_vq_avail_empty(vq->dev, vq))
++		while (vhost_can_busy_poll(endtime)) {
++			if (vhost_has_work(vq->dev)) {
++				*busyloop_intr = true;
++				break;
++			}
++			if (!vhost_vq_avail_empty(vq->dev, vq))
++				break;
+ 			cpu_relax();
++		}
+ 		preempt_enable();
+ 		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ 				      out_num, in_num, NULL, NULL);
+@@ -501,20 +505,24 @@ static void handle_tx(struct vhost_net *net)
+ 	zcopy = nvq->ubufs;
+ 
+ 	for (;;) {
++		bool busyloop_intr;
++
+ 		/* Release DMAs done buffers first */
+ 		if (zcopy)
+ 			vhost_zerocopy_signal_used(net, vq);
+ 
+-
++		busyloop_intr = false;
+ 		head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
+ 						ARRAY_SIZE(vq->iov),
+-						&out, &in);
++						&out, &in, &busyloop_intr);
+ 		/* On error, stop handling until the next kick. */
+ 		if (unlikely(head < 0))
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+ 				vhost_disable_notify(&net->dev, vq);
+ 				continue;
+ 			}
+@@ -663,7 +671,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+ 
+-		while (vhost_can_busy_poll(&net->dev, endtime) &&
++		while (vhost_can_busy_poll(endtime) &&
++		       !vhost_has_work(&net->dev) &&
+ 		       !sk_has_rx_data(sk) &&
+ 		       vhost_vq_avail_empty(&net->dev, vq))
+ 			cpu_relax();
+diff --git a/fs/dax.c b/fs/dax.c
+index 641192808bb6..94f9fe002b12 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1007,21 +1007,12 @@ static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
+ {
+ 	struct inode *inode = mapping->host;
+ 	unsigned long vaddr = vmf->address;
+-	vm_fault_t ret = VM_FAULT_NOPAGE;
+-	struct page *zero_page;
+-	pfn_t pfn;
+-
+-	zero_page = ZERO_PAGE(0);
+-	if (unlikely(!zero_page)) {
+-		ret = VM_FAULT_OOM;
+-		goto out;
+-	}
++	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
++	vm_fault_t ret;
+ 
+-	pfn = page_to_pfn_t(zero_page);
+ 	dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE,
+ 			false);
+ 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
+-out:
+ 	trace_dax_load_hole(inode, vmf, ret);
+ 	return ret;
+ }
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 71635909df3b..b4e0501bcba1 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -1448,6 +1448,7 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 	}
+ 	inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext2_set_inode_flags(inode);
+ 	ei->i_faddr = le32_to_cpu(raw_inode->i_faddr);
+ 	ei->i_frag_no = raw_inode->i_frag;
+ 	ei->i_frag_size = raw_inode->i_fsize;
+@@ -1517,7 +1518,6 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 			   new_decode_dev(le32_to_cpu(raw_inode->i_block[1])));
+ 	}
+ 	brelse (bh);
+-	ext2_set_inode_flags(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+ 	
+diff --git a/fs/iomap.c b/fs/iomap.c
+index 0d0bd8845586..af6144fd4919 100644
+--- a/fs/iomap.c
++++ b/fs/iomap.c
+@@ -811,6 +811,7 @@ struct iomap_dio {
+ 	atomic_t		ref;
+ 	unsigned		flags;
+ 	int			error;
++	bool			wait_for_completion;
+ 
+ 	union {
+ 		/* used during submission and for synchronous completion: */
+@@ -914,9 +915,8 @@ static void iomap_dio_bio_end_io(struct bio *bio)
+ 		iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
+ 
+ 	if (atomic_dec_and_test(&dio->ref)) {
+-		if (is_sync_kiocb(dio->iocb)) {
++		if (dio->wait_for_completion) {
+ 			struct task_struct *waiter = dio->submit.waiter;
+-
+ 			WRITE_ONCE(dio->submit.waiter, NULL);
+ 			wake_up_process(waiter);
+ 		} else if (dio->flags & IOMAP_DIO_WRITE) {
+@@ -1131,13 +1131,12 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 	dio->end_io = end_io;
+ 	dio->error = 0;
+ 	dio->flags = 0;
++	dio->wait_for_completion = is_sync_kiocb(iocb);
+ 
+ 	dio->submit.iter = iter;
+-	if (is_sync_kiocb(iocb)) {
+-		dio->submit.waiter = current;
+-		dio->submit.cookie = BLK_QC_T_NONE;
+-		dio->submit.last_queue = NULL;
+-	}
++	dio->submit.waiter = current;
++	dio->submit.cookie = BLK_QC_T_NONE;
++	dio->submit.last_queue = NULL;
+ 
+ 	if (iov_iter_rw(iter) == READ) {
+ 		if (pos >= dio->i_size)
+@@ -1187,7 +1186,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio_warn_stale_pagecache(iocb->ki_filp);
+ 	ret = 0;
+ 
+-	if (iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
++	if (iov_iter_rw(iter) == WRITE && !dio->wait_for_completion &&
+ 	    !inode->i_sb->s_dio_done_wq) {
+ 		ret = sb_init_dio_done_wq(inode->i_sb);
+ 		if (ret < 0)
+@@ -1202,8 +1201,10 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 				iomap_dio_actor);
+ 		if (ret <= 0) {
+ 			/* magic error code to fall back to buffered I/O */
+-			if (ret == -ENOTBLK)
++			if (ret == -ENOTBLK) {
++				dio->wait_for_completion = true;
+ 				ret = 0;
++			}
+ 			break;
+ 		}
+ 		pos += ret;
+@@ -1224,7 +1225,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio->flags &= ~IOMAP_DIO_NEED_SYNC;
+ 
+ 	if (!atomic_dec_and_test(&dio->ref)) {
+-		if (!is_sync_kiocb(iocb))
++		if (!dio->wait_for_completion)
+ 			return -EIOCBQUEUED;
+ 
+ 		for (;;) {
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ec3fba7d492f..488a9e7f8f66 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -24,6 +24,7 @@
+ #include <linux/mpage.h>
+ #include <linux/user_namespace.h>
+ #include <linux/seq_file.h>
++#include <linux/blkdev.h>
+ 
+ #include "isofs.h"
+ #include "zisofs.h"
+@@ -653,6 +654,12 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ 	/*
+ 	 * What if bugger tells us to go beyond page size?
+ 	 */
++	if (bdev_logical_block_size(s->s_bdev) > 2048) {
++		printk(KERN_WARNING
++		       "ISOFS: unsupported/invalid hardware sector size %d\n",
++			bdev_logical_block_size(s->s_bdev));
++		goto out_freesbi;
++	}
+ 	opt.blocksize = sb_min_blocksize(s, opt.blocksize);
+ 
+ 	sbi->s_high_sierra = 0; /* default is iso9660 */
+diff --git a/fs/locks.c b/fs/locks.c
+index db7b6917d9c5..fafce5a8d74f 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2072,6 +2072,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+ 		return -1;
+ 	if (IS_REMOTELCK(fl))
+ 		return fl->fl_pid;
++	/*
++	 * If the flock owner process is dead and its pid has been already
++	 * freed, the translation below won't work, but we still want to show
++	 * flock owner pid number in init pidns.
++	 */
++	if (ns == &init_pid_ns)
++		return (pid_t)fl->fl_pid;
+ 
+ 	rcu_read_lock();
+ 	pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5d99e8810b85..0dded931f119 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1726,6 +1726,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 	if (status) {
+ 		op = &args->ops[0];
+ 		op->status = status;
++		resp->opcnt = 1;
+ 		goto encode_op;
+ 	}
+ 
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index ca1d2cc2cdfa..18863d56273c 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -199,47 +199,57 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ 
+ #define __declare_arg_0(a0, res)					\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
+ 	register unsigned long r1 asm("r1");				\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_1(a0, a1, res)					\
++	typeof(a1) __a1 = a1;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_2(a0, a1, a2, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_3(a0, a1, a2, a3, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
++	typeof(a3) __a3 = a3;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
+-	register typeof(a3)    r3 asm("r3") = a3
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
++	register unsigned long r3 asm("r3") = __a3
+ 
+ #define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
++	typeof(a4) __a4 = a4;						\
+ 	__declare_arg_3(a0, a1, a2, a3, res);				\
+-	register typeof(a4) r4 asm("r4") = a4
++	register unsigned long r4 asm("r4") = __a4
+ 
+ #define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
++	typeof(a5) __a5 = a5;						\
+ 	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
+-	register typeof(a5) r5 asm("r5") = a5
++	register unsigned long r5 asm("r5") = __a5
+ 
+ #define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
++	typeof(a6) __a6 = a6;						\
+ 	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
+-	register typeof(a6) r6 asm("r6") = a6
++	register unsigned long r6 asm("r6") = __a6
+ 
+ #define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
++	typeof(a7) __a7 = a7;						\
+ 	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
+-	register typeof(a7) r7 asm("r7") = a7
++	register unsigned long r7 asm("r7") = __a7
+ 
+ #define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+ #define __declare_args(count, ...)  ___declare_args(count, __VA_ARGS__)
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index cf2588d81148..147a7bb341dd 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -104,7 +104,7 @@
+ 		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+ 	})
+ 
+-extern void __compiletime_warning("value doesn't fit into mask")
++extern void __compiletime_error("value doesn't fit into mask")
+ __field_overflow(void);
+ extern void __compiletime_error("bad bitfield mask")
+ __bad_mask(void);
+@@ -121,8 +121,8 @@ static __always_inline u64 field_mask(u64 field)
+ #define ____MAKE_OP(type,base,to,from)					\
+ static __always_inline __##type type##_encode_bits(base v, base field)	\
+ {									\
+-        if (__builtin_constant_p(v) &&	(v & ~field_multiplier(field)))	\
+-			    __field_overflow();				\
++	if (__builtin_constant_p(v) && (v & ~field_mask(field)))	\
++		__field_overflow();					\
+ 	return to((v & field_mask(field)) * field_multiplier(field));	\
+ }									\
+ static __always_inline __##type type##_replace_bits(__##type old,	\
+diff --git a/include/linux/platform_data/ina2xx.h b/include/linux/platform_data/ina2xx.h
+index 9abc0ca7259b..9f0aa1b48c78 100644
+--- a/include/linux/platform_data/ina2xx.h
++++ b/include/linux/platform_data/ina2xx.h
+@@ -1,7 +1,7 @@
+ /*
+  * Driver for Texas Instruments INA219, INA226 power monitor chips
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index c85704fcdbd2..ee7e987ea1b4 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -95,8 +95,8 @@ struct k_itimer {
+ 	clockid_t		it_clock;
+ 	timer_t			it_id;
+ 	int			it_active;
+-	int			it_overrun;
+-	int			it_overrun_last;
++	s64			it_overrun;
++	s64			it_overrun_last;
+ 	int			it_requeue_pending;
+ 	int			it_sigev_notify;
+ 	ktime_t			it_interval;
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index b21c4bd96b84..f80769175c56 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -269,6 +269,7 @@ struct power_supply {
+ 	spinlock_t changed_lock;
+ 	bool changed;
+ 	bool initialized;
++	bool removing;
+ 	atomic_t use_cnt;
+ #ifdef CONFIG_THERMAL
+ 	struct thermal_zone_device *tzd;
+diff --git a/include/linux/regulator/machine.h b/include/linux/regulator/machine.h
+index 3468703d663a..a459a5e973a7 100644
+--- a/include/linux/regulator/machine.h
++++ b/include/linux/regulator/machine.h
+@@ -48,9 +48,9 @@ struct regulator;
+  * DISABLE_IN_SUSPEND	- turn off regulator in suspend states
+  * ENABLE_IN_SUSPEND	- keep regulator on in suspend states
+  */
+-#define DO_NOTHING_IN_SUSPEND	(-1)
+-#define DISABLE_IN_SUSPEND	0
+-#define ENABLE_IN_SUSPEND	1
++#define DO_NOTHING_IN_SUSPEND	0
++#define DISABLE_IN_SUSPEND	1
++#define ENABLE_IN_SUSPEND	2
+ 
+ /* Regulator active discharge flags */
+ enum regulator_active_discharge {
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 409c845d4cd3..422b1c01ee0d 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -172,7 +172,7 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
+ static __always_inline __must_check
+ size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	if (unlikely(!check_copy_size(addr, bytes, false)))
++	if (unlikely(!check_copy_size(addr, bytes, true)))
+ 		return 0;
+ 	else
+ 		return _copy_to_iter_mcsafe(addr, bytes, i);
+diff --git a/include/media/v4l2-fh.h b/include/media/v4l2-fh.h
+index ea73fef8bdc0..8586cfb49828 100644
+--- a/include/media/v4l2-fh.h
++++ b/include/media/v4l2-fh.h
+@@ -38,10 +38,13 @@ struct v4l2_ctrl_handler;
+  * @prio: priority of the file handler, as defined by &enum v4l2_priority
+  *
+  * @wait: event' s wait queue
++ * @subscribe_lock: serialise changes to the subscribed list; guarantee that
++ *		    the add and del event callbacks are orderly called
+  * @subscribed: list of subscribed events
+  * @available: list of events waiting to be dequeued
+  * @navailable: number of available events at @available list
+  * @sequence: event sequence number
++ *
+  * @m2m_ctx: pointer to &struct v4l2_m2m_ctx
+  */
+ struct v4l2_fh {
+@@ -52,6 +55,7 @@ struct v4l2_fh {
+ 
+ 	/* Events */
+ 	wait_queue_head_t	wait;
++	struct mutex		subscribe_lock;
+ 	struct list_head	subscribed;
+ 	struct list_head	available;
+ 	unsigned int		navailable;
+diff --git a/include/rdma/opa_addr.h b/include/rdma/opa_addr.h
+index 2bbb7a67e643..66d4393d339c 100644
+--- a/include/rdma/opa_addr.h
++++ b/include/rdma/opa_addr.h
+@@ -120,7 +120,7 @@ static inline bool rdma_is_valid_unicast_lid(struct rdma_ah_attr *attr)
+ 	if (attr->type == RDMA_AH_ATTR_TYPE_IB) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+ 		    rdma_ah_get_dlid(attr) >=
+-		    be32_to_cpu(IB_MULTICAST_LID_BASE))
++		    be16_to_cpu(IB_MULTICAST_LID_BASE))
+ 			return false;
+ 	} else if (attr->type == RDMA_AH_ATTR_TYPE_OPA) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index 58899601fccf..ed707b21d152 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -1430,12 +1430,15 @@ out:
+ static void smap_write_space(struct sock *sk)
+ {
+ 	struct smap_psock *psock;
++	void (*write_space)(struct sock *sk);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+ 	if (likely(psock && test_bit(SMAP_TX_RUNNING, &psock->state)))
+ 		schedule_work(&psock->tx_work);
++	write_space = psock->save_write_space;
+ 	rcu_read_unlock();
++	write_space(sk);
+ }
+ 
+ static void smap_stop_sock(struct smap_psock *psock, struct sock *sk)
+@@ -2143,7 +2146,9 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-EPERM);
+ 
+ 	/* check sanity of attributes */
+-	if (attr->max_entries == 0 || attr->value_size != 4 ||
++	if (attr->max_entries == 0 ||
++	    attr->key_size == 0 ||
++	    attr->value_size != 4 ||
+ 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -2270,8 +2275,10 @@ static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
+ 	}
+ 	l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+ 			     htab->map.numa_node);
+-	if (!l_new)
++	if (!l_new) {
++		atomic_dec(&htab->count);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	memcpy(l_new->key, key, key_size);
+ 	l_new->sk = sk;
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index 6e28d2866be5..314e2a9040c7 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -400,16 +400,35 @@ int dbg_release_bp_slot(struct perf_event *bp)
+ 	return 0;
+ }
+ 
+-static int validate_hw_breakpoint(struct perf_event *bp)
++#ifndef hw_breakpoint_arch_parse
++int hw_breakpoint_arch_parse(struct perf_event *bp,
++			     const struct perf_event_attr *attr,
++			     struct arch_hw_breakpoint *hw)
+ {
+-	int ret;
++	int err;
+ 
+-	ret = arch_validate_hwbkpt_settings(bp);
+-	if (ret)
+-		return ret;
++	err = arch_validate_hwbkpt_settings(bp);
++	if (err)
++		return err;
++
++	*hw = bp->hw.info;
++
++	return 0;
++}
++#endif
++
++static int hw_breakpoint_parse(struct perf_event *bp,
++			       const struct perf_event_attr *attr,
++			       struct arch_hw_breakpoint *hw)
++{
++	int err;
++
++	err = hw_breakpoint_arch_parse(bp, attr, hw);
++	if (err)
++		return err;
+ 
+ 	if (arch_check_bp_in_kernelspace(bp)) {
+-		if (bp->attr.exclude_kernel)
++		if (attr->exclude_kernel)
+ 			return -EINVAL;
+ 		/*
+ 		 * Don't let unprivileged users set a breakpoint in the trap
+@@ -424,19 +443,22 @@ static int validate_hw_breakpoint(struct perf_event *bp)
+ 
+ int register_perf_hw_breakpoint(struct perf_event *bp)
+ {
+-	int ret;
+-
+-	ret = reserve_bp_slot(bp);
+-	if (ret)
+-		return ret;
++	struct arch_hw_breakpoint hw;
++	int err;
+ 
+-	ret = validate_hw_breakpoint(bp);
++	err = reserve_bp_slot(bp);
++	if (err)
++		return err;
+ 
+-	/* if arch_validate_hwbkpt_settings() fails then release bp slot */
+-	if (ret)
++	err = hw_breakpoint_parse(bp, &bp->attr, &hw);
++	if (err) {
+ 		release_bp_slot(bp);
++		return err;
++	}
+ 
+-	return ret;
++	bp->hw.info = hw;
++
++	return 0;
+ }
+ 
+ /**
+@@ -464,6 +486,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	u64 old_len  = bp->attr.bp_len;
+ 	int old_type = bp->attr.bp_type;
+ 	bool modify  = attr->bp_type != old_type;
++	struct arch_hw_breakpoint hw;
+ 	int err = 0;
+ 
+ 	bp->attr.bp_addr = attr->bp_addr;
+@@ -473,7 +496,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	if (check && memcmp(&bp->attr, attr, sizeof(*attr)))
+ 		return -EINVAL;
+ 
+-	err = validate_hw_breakpoint(bp);
++	err = hw_breakpoint_parse(bp, attr, &hw);
+ 	if (!err && modify)
+ 		err = modify_bp_slot(bp, old_type);
+ 
+@@ -484,7 +507,9 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 		return err;
+ 	}
+ 
++	bp->hw.info = hw;
+ 	bp->attr.disabled = attr->disabled;
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/module.c b/kernel/module.c
+index f475f30eed8c..4a6b9c6d5f2c 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4067,7 +4067,7 @@ static unsigned long mod_find_symname(struct module *mod, const char *name)
+ 
+ 	for (i = 0; i < kallsyms->num_symtab; i++)
+ 		if (strcmp(name, symname(kallsyms, i)) == 0 &&
+-		    kallsyms->symtab[i].st_info != 'U')
++		    kallsyms->symtab[i].st_shndx != SHN_UNDEF)
+ 			return kallsyms->symtab[i].st_value;
+ 	return 0;
+ }
+@@ -4113,6 +4113,10 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 		if (mod->state == MODULE_STATE_UNFORMED)
+ 			continue;
+ 		for (i = 0; i < kallsyms->num_symtab; i++) {
++
++			if (kallsyms->symtab[i].st_shndx == SHN_UNDEF)
++				continue;
++
+ 			ret = fn(data, symname(kallsyms, i),
+ 				 mod, kallsyms->symtab[i].st_value);
+ 			if (ret != 0)
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 639321bf2e39..fa5de5e8de61 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -581,11 +581,11 @@ static void alarm_timer_rearm(struct k_itimer *timr)
+  * @timr:	Pointer to the posixtimer data struct
+  * @now:	Current time to forward the timer against
+  */
+-static int alarm_timer_forward(struct k_itimer *timr, ktime_t now)
++static s64 alarm_timer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
+ 
+-	return (int) alarm_forward(alarm, timr->it_interval, now);
++	return alarm_forward(alarm, timr->it_interval, now);
+ }
+ 
+ /**
+@@ -808,7 +808,8 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ 	/* Convert (if necessary) to absolute time */
+ 	if (flags != TIMER_ABSTIME) {
+ 		ktime_t now = alarm_bases[type].gettime();
+-		exp = ktime_add(now, exp);
++
++		exp = ktime_add_safe(now, exp);
+ 	}
+ 
+ 	ret = alarmtimer_do_nsleep(&alarm, exp, type);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 9cdf54b04ca8..294d7b65af33 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -85,7 +85,7 @@ static void bump_cpu_timer(struct k_itimer *timer, u64 now)
+ 			continue;
+ 
+ 		timer->it.cpu.expires += incr;
+-		timer->it_overrun += 1 << i;
++		timer->it_overrun += 1LL << i;
+ 		delta -= incr;
+ 	}
+ }
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index e08ce3f27447..e475012bff7e 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -283,6 +283,17 @@ static __init int init_posix_timers(void)
+ }
+ __initcall(init_posix_timers);
+ 
++/*
++ * The siginfo si_overrun field and the return value of timer_getoverrun(2)
++ * are of type int. Clamp the overrun value to INT_MAX
++ */
++static inline int timer_overrun_to_int(struct k_itimer *timr, int baseval)
++{
++	s64 sum = timr->it_overrun_last + (s64)baseval;
++
++	return sum > (s64)INT_MAX ? INT_MAX : (int)sum;
++}
++
+ static void common_hrtimer_rearm(struct k_itimer *timr)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+@@ -290,9 +301,8 @@ static void common_hrtimer_rearm(struct k_itimer *timr)
+ 	if (!timr->it_interval)
+ 		return;
+ 
+-	timr->it_overrun += (unsigned int) hrtimer_forward(timer,
+-						timer->base->get_time(),
+-						timr->it_interval);
++	timr->it_overrun += hrtimer_forward(timer, timer->base->get_time(),
++					    timr->it_interval);
+ 	hrtimer_restart(timer);
+ }
+ 
+@@ -321,10 +331,10 @@ void posixtimer_rearm(struct siginfo *info)
+ 
+ 		timr->it_active = 1;
+ 		timr->it_overrun_last = timr->it_overrun;
+-		timr->it_overrun = -1;
++		timr->it_overrun = -1LL;
+ 		++timr->it_requeue_pending;
+ 
+-		info->si_overrun += timr->it_overrun_last;
++		info->si_overrun = timer_overrun_to_int(timr, info->si_overrun);
+ 	}
+ 
+ 	unlock_timer(timr, flags);
+@@ -418,9 +428,8 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ 					now = ktime_add(now, kj);
+ 			}
+ #endif
+-			timr->it_overrun += (unsigned int)
+-				hrtimer_forward(timer, now,
+-						timr->it_interval);
++			timr->it_overrun += hrtimer_forward(timer, now,
++							    timr->it_interval);
+ 			ret = HRTIMER_RESTART;
+ 			++timr->it_requeue_pending;
+ 			timr->it_active = 1;
+@@ -524,7 +533,7 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
+ 	new_timer->it_id = (timer_t) new_timer_id;
+ 	new_timer->it_clock = which_clock;
+ 	new_timer->kclock = kc;
+-	new_timer->it_overrun = -1;
++	new_timer->it_overrun = -1LL;
+ 
+ 	if (event) {
+ 		rcu_read_lock();
+@@ -645,11 +654,11 @@ static ktime_t common_hrtimer_remaining(struct k_itimer *timr, ktime_t now)
+ 	return __hrtimer_expires_remaining_adjusted(timer, now);
+ }
+ 
+-static int common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
++static s64 common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+ 
+-	return (int)hrtimer_forward(timer, now, timr->it_interval);
++	return hrtimer_forward(timer, now, timr->it_interval);
+ }
+ 
+ /*
+@@ -789,7 +798,7 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
+ 	if (!timr)
+ 		return -EINVAL;
+ 
+-	overrun = timr->it_overrun_last;
++	overrun = timer_overrun_to_int(timr, 0);
+ 	unlock_timer(timr, flags);
+ 
+ 	return overrun;
+diff --git a/kernel/time/posix-timers.h b/kernel/time/posix-timers.h
+index 151e28f5bf30..ddb21145211a 100644
+--- a/kernel/time/posix-timers.h
++++ b/kernel/time/posix-timers.h
+@@ -19,7 +19,7 @@ struct k_clock {
+ 	void	(*timer_get)(struct k_itimer *timr,
+ 			     struct itimerspec64 *cur_setting);
+ 	void	(*timer_rearm)(struct k_itimer *timr);
+-	int	(*timer_forward)(struct k_itimer *timr, ktime_t now);
++	s64	(*timer_forward)(struct k_itimer *timr, ktime_t now);
+ 	ktime_t	(*timer_remaining)(struct k_itimer *timr, ktime_t now);
+ 	int	(*timer_try_to_cancel)(struct k_itimer *timr);
+ 	void	(*timer_arm)(struct k_itimer *timr, ktime_t expires,
+diff --git a/lib/klist.c b/lib/klist.c
+index 0507fa5d84c5..f6b547812fe3 100644
+--- a/lib/klist.c
++++ b/lib/klist.c
+@@ -336,8 +336,9 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *prev;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		prev = to_klist_node(last->n_node.prev);
+@@ -356,7 +357,7 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 		prev = to_klist_node(prev->n_node.prev);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+@@ -377,8 +378,9 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *next;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		next = to_klist_node(last->n_node.next);
+@@ -397,7 +399,7 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 		next = to_klist_node(next->n_node.next);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+diff --git a/net/6lowpan/iphc.c b/net/6lowpan/iphc.c
+index 6b1042e21656..52fad5dad9f7 100644
+--- a/net/6lowpan/iphc.c
++++ b/net/6lowpan/iphc.c
+@@ -770,6 +770,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
+ 		hdr.hop_limit, &hdr.daddr);
+ 
+ 	skb_push(skb, sizeof(hdr));
++	skb_reset_mac_header(skb);
+ 	skb_reset_network_header(skb);
+ 	skb_copy_to_linear_data(skb, &hdr, sizeof(hdr));
+ 
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 4bfff3c87e8e..e99d6afb70ef 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -95,11 +95,10 @@ struct bbr {
+ 	u32     mode:3,		     /* current bbr_mode in state machine */
+ 		prev_ca_state:3,     /* CA state on previous ACK */
+ 		packet_conservation:1,  /* use packet conservation? */
+-		restore_cwnd:1,	     /* decided to revert cwnd to old value */
+ 		round_start:1,	     /* start of packet-timed tx->ack round? */
+ 		idle_restart:1,	     /* restarting after idle? */
+ 		probe_rtt_round_done:1,  /* a BBR_PROBE_RTT round at 4 pkts? */
+-		unused:12,
++		unused:13,
+ 		lt_is_sampling:1,    /* taking long-term ("LT") samples now? */
+ 		lt_rtt_cnt:7,	     /* round trips in long-term interval */
+ 		lt_use_bw:1;	     /* use lt_bw as our bw estimate? */
+@@ -175,6 +174,8 @@ static const u32 bbr_lt_bw_diff = 4000 / 8;
+ /* If we estimate we're policed, use lt_bw for this many round trips: */
+ static const u32 bbr_lt_bw_max_rtts = 48;
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk);
++
+ /* Do we estimate that STARTUP filled the pipe? */
+ static bool bbr_full_bw_reached(const struct sock *sk)
+ {
+@@ -305,6 +306,8 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+ 		 */
+ 		if (bbr->mode == BBR_PROBE_BW)
+ 			bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT);
++		else if (bbr->mode == BBR_PROBE_RTT)
++			bbr_check_probe_rtt_done(sk);
+ 	}
+ }
+ 
+@@ -392,17 +395,11 @@ static bool bbr_set_cwnd_to_recover_or_restore(
+ 		cwnd = tcp_packets_in_flight(tp) + acked;
+ 	} else if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) {
+ 		/* Exiting loss recovery; restore cwnd saved before recovery. */
+-		bbr->restore_cwnd = 1;
++		cwnd = max(cwnd, bbr->prior_cwnd);
+ 		bbr->packet_conservation = 0;
+ 	}
+ 	bbr->prev_ca_state = state;
+ 
+-	if (bbr->restore_cwnd) {
+-		/* Restore cwnd after exiting loss recovery or PROBE_RTT. */
+-		cwnd = max(cwnd, bbr->prior_cwnd);
+-		bbr->restore_cwnd = 0;
+-	}
+-
+ 	if (bbr->packet_conservation) {
+ 		*new_cwnd = max(cwnd, tcp_packets_in_flight(tp) + acked);
+ 		return true;	/* yes, using packet conservation */
+@@ -744,6 +741,20 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
+ 		bbr_reset_probe_bw_mode(sk);  /* we estimate queue is drained */
+ }
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++	struct bbr *bbr = inet_csk_ca(sk);
++
++	if (!(bbr->probe_rtt_done_stamp &&
++	      after(tcp_jiffies32, bbr->probe_rtt_done_stamp)))
++		return;
++
++	bbr->min_rtt_stamp = tcp_jiffies32;  /* wait a while until PROBE_RTT */
++	tp->snd_cwnd = max(tp->snd_cwnd, bbr->prior_cwnd);
++	bbr_reset_mode(sk);
++}
++
+ /* The goal of PROBE_RTT mode is to have BBR flows cooperatively and
+  * periodically drain the bottleneck queue, to converge to measure the true
+  * min_rtt (unloaded propagation delay). This allows the flows to keep queues
+@@ -802,12 +813,8 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
+ 		} else if (bbr->probe_rtt_done_stamp) {
+ 			if (bbr->round_start)
+ 				bbr->probe_rtt_round_done = 1;
+-			if (bbr->probe_rtt_round_done &&
+-			    after(tcp_jiffies32, bbr->probe_rtt_done_stamp)) {
+-				bbr->min_rtt_stamp = tcp_jiffies32;
+-				bbr->restore_cwnd = 1;  /* snap to prior_cwnd */
+-				bbr_reset_mode(sk);
+-			}
++			if (bbr->probe_rtt_round_done)
++				bbr_check_probe_rtt_done(sk);
+ 		}
+ 	}
+ 	/* Restart after idle ends only once we process a new S/ACK for data */
+@@ -858,7 +865,6 @@ static void bbr_init(struct sock *sk)
+ 	bbr->has_seen_rtt = 0;
+ 	bbr_init_pacing_rate_from_rtt(sk);
+ 
+-	bbr->restore_cwnd = 0;
+ 	bbr->round_start = 0;
+ 	bbr->idle_restart = 0;
+ 	bbr->full_bw_reached = 0;
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index 82e6edf9c5d9..45f33d6dedf7 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -100,7 +100,7 @@ static int ncsi_write_package_info(struct sk_buff *skb,
+ 	bool found;
+ 	int rc;
+ 
+-	if (id > ndp->package_num) {
++	if (id > ndp->package_num - 1) {
+ 		netdev_info(ndp->ndev.dev, "NCSI: No package with id %u\n", id);
+ 		return -ENODEV;
+ 	}
+@@ -240,7 +240,7 @@ static int ncsi_pkg_info_all_nl(struct sk_buff *skb,
+ 		return 0; /* done */
+ 
+ 	hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+-			  &ncsi_genl_family, 0,  NCSI_CMD_PKG_INFO);
++			  &ncsi_genl_family, NLM_F_MULTI,  NCSI_CMD_PKG_INFO);
+ 	if (!hdr) {
+ 		rc = -EMSGSIZE;
+ 		goto err;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 2ccf194c3ebb..8015e50e8d0a 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -222,9 +222,14 @@ static void tls_write_space(struct sock *sk)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+ 
+-	/* We are already sending pages, ignore notification */
+-	if (ctx->in_tcp_sendpages)
++	/* If in_tcp_sendpages call lower protocol write space handler
++	 * to ensure we wake up any waiting operations there. For example
++	 * if do_tcp_sendpages where to call sk_wait_event.
++	 */
++	if (ctx->in_tcp_sendpages) {
++		ctx->sk_write_space(sk);
+ 		return;
++	}
+ 
+ 	if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {
+ 		gfp_t sk_allocation = sk->sk_allocation;
+diff --git a/sound/aoa/core/gpio-feature.c b/sound/aoa/core/gpio-feature.c
+index 71960089e207..65557421fe0b 100644
+--- a/sound/aoa/core/gpio-feature.c
++++ b/sound/aoa/core/gpio-feature.c
+@@ -88,8 +88,10 @@ static struct device_node *get_gpio(char *name,
+ 	}
+ 
+ 	reg = of_get_property(np, "reg", NULL);
+-	if (!reg)
++	if (!reg) {
++		of_node_put(np);
+ 		return NULL;
++	}
+ 
+ 	*gpioptr = *reg;
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 647ae1a71e10..28dc5e124995 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2535,7 +2535,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ 	/* AMD Raven */
+ 	{ PCI_DEVICE(0x1022, 0x15e3),
+-	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
++	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
++			 AZX_DCAPS_PM_RUNTIME },
+ 	/* ATI HDMI */
+ 	{ PCI_DEVICE(0x1002, 0x0002),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+diff --git a/sound/soc/codecs/rt1305.c b/sound/soc/codecs/rt1305.c
+index f4c8c45f4010..421b8fb2fa04 100644
+--- a/sound/soc/codecs/rt1305.c
++++ b/sound/soc/codecs/rt1305.c
+@@ -1066,7 +1066,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Left_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Left channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0l = 562949953421312;
++	r0l = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0l, rhl);
+ 	pr_debug("Left_r0 = 0x%llx\n", r0l);
+@@ -1083,7 +1083,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Right_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Right channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0r = 562949953421312;
++	r0r = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0r, rhl);
+ 	pr_debug("Right_r0 = 0x%llx\n", r0r);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 33065ba294a9..d2c9d7865bde 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -404,7 +404,7 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
+ 					BYT_RT5640_JD_SRC_JD1_IN4P |
+-					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_TH_1500UA |
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index 01f43218984b..69a7896cb713 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -777,7 +777,7 @@ static int q6afe_callback(struct apr_device *adev, struct apr_resp_pkt *data)
+  */
+ int q6afe_get_port_id(int index)
+ {
+-	if (index < 0 || index > AFE_PORT_MAX)
++	if (index < 0 || index >= AFE_PORT_MAX)
+ 		return -EINVAL;
+ 
+ 	return port_maps[index].port_id;
+@@ -1014,7 +1014,7 @@ int q6afe_port_stop(struct q6afe_port *port)
+ 
+ 	port_id = port->id;
+ 	index = port->token;
+-	if (index < 0 || index > AFE_PORT_MAX) {
++	if (index < 0 || index >= AFE_PORT_MAX) {
+ 		dev_err(afe->dev, "AFE port index[%d] invalid!\n", index);
+ 		return -EINVAL;
+ 	}
+@@ -1355,7 +1355,7 @@ struct q6afe_port *q6afe_port_get_from_id(struct device *dev, int id)
+ 	unsigned long flags;
+ 	int cfg_type;
+ 
+-	if (id < 0 || id > AFE_PORT_MAX) {
++	if (id < 0 || id >= AFE_PORT_MAX) {
+ 		dev_err(dev, "AFE port token[%d] invalid!\n", id);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index cf4b40d376e5..c675058b908b 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -37,6 +37,7 @@
+ #define	CHNL_4		(1 << 22)	/* Channels */
+ #define	CHNL_6		(2 << 22)	/* Channels */
+ #define	CHNL_8		(3 << 22)	/* Channels */
++#define DWL_MASK	(7 << 19)	/* Data Word Length mask */
+ #define	DWL_8		(0 << 19)	/* Data Word Length */
+ #define	DWL_16		(1 << 19)	/* Data Word Length */
+ #define	DWL_18		(2 << 19)	/* Data Word Length */
+@@ -353,21 +354,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 	struct rsnd_dai *rdai = rsnd_io_to_rdai(io);
+ 	struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	u32 cr_own;
+-	u32 cr_mode;
+-	u32 wsr;
++	u32 cr_own	= ssi->cr_own;
++	u32 cr_mode	= ssi->cr_mode;
++	u32 wsr		= ssi->wsr;
+ 	int is_tdm;
+ 
+-	if (rsnd_ssi_is_parent(mod, io))
+-		return;
+-
+ 	is_tdm = rsnd_runtime_is_ssi_tdm(io);
+ 
+ 	/*
+ 	 * always use 32bit system word.
+ 	 * see also rsnd_ssi_master_clk_enable()
+ 	 */
+-	cr_own = FORCE | SWL_32;
++	cr_own |= FORCE | SWL_32;
+ 
+ 	if (rdai->bit_clk_inv)
+ 		cr_own |= SCKP;
+@@ -377,9 +375,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		cr_own |= SDTA;
+ 	if (rdai->sys_delay)
+ 		cr_own |= DEL;
++
++	/*
++	 * We shouldn't exchange SWSP after running.
++	 * This means, parent needs to care it.
++	 */
++	if (rsnd_ssi_is_parent(mod, io))
++		goto init_end;
++
+ 	if (rsnd_io_is_play(io))
+ 		cr_own |= TRMD;
+ 
++	cr_own &= ~DWL_MASK;
+ 	switch (snd_pcm_format_width(runtime->format)) {
+ 	case 16:
+ 		cr_own |= DWL_16;
+@@ -406,7 +413,7 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		wsr	|= WS_MODE;
+ 		cr_own	|= CHNL_8;
+ 	}
+-
++init_end:
+ 	ssi->cr_own	= cr_own;
+ 	ssi->cr_mode	= cr_mode;
+ 	ssi->wsr	= wsr;
+@@ -465,15 +472,18 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ 		return -EIO;
+ 	}
+ 
+-	if (!rsnd_ssi_is_parent(mod, io))
+-		ssi->cr_own	= 0;
+-
+ 	rsnd_ssi_master_clk_stop(mod, io);
+ 
+ 	rsnd_mod_power_off(mod);
+ 
+ 	ssi->usrcnt--;
+ 
++	if (!ssi->usrcnt) {
++		ssi->cr_own	= 0;
++		ssi->cr_mode	= 0;
++		ssi->wsr	= 0;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 229c12349803..a099c3e45504 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4073,6 +4073,13 @@ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card)
+ 			continue;
+ 		}
+ 
++		/* let users know there is no DAI to link */
++		if (!dai_w->priv) {
++			dev_dbg(card->dev, "dai widget %s has no DAI\n",
++				dai_w->name);
++			continue;
++		}
++
+ 		dai = dai_w->priv;
+ 
+ 		/* ...find all widgets with the same stream and link them */
+diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
+index 1832100d1b27..6d41323be291 100644
+--- a/tools/bpf/bpftool/map_perf_ring.c
++++ b/tools/bpf/bpftool/map_perf_ring.c
+@@ -194,8 +194,10 @@ int do_event_pipe(int argc, char **argv)
+ 	}
+ 
+ 	while (argc) {
+-		if (argc < 2)
++		if (argc < 2) {
+ 			BAD_ARG();
++			goto err_close_map;
++		}
+ 
+ 		if (is_prefix(*argv, "cpu")) {
+ 			char *endptr;
+@@ -221,6 +223,7 @@ int do_event_pipe(int argc, char **argv)
+ 			NEXT_ARG();
+ 		} else {
+ 			BAD_ARG();
++			goto err_close_map;
+ 		}
+ 
+ 		do_all = false;
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index 4f5de8245b32..6631b0b8b4ab 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -385,7 +385,7 @@ static int test_and_print(struct test *t, bool force_skip, int subtest)
+ 	if (!t->subtest.get_nr)
+ 		pr_debug("%s:", t->desc);
+ 	else
+-		pr_debug("%s subtest %d:", t->desc, subtest);
++		pr_debug("%s subtest %d:", t->desc, subtest + 1);
+ 
+ 	switch (err) {
+ 	case TEST_OK:
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+index 3bb4c2ba7b14..197e769c2ed1 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+@@ -74,12 +74,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_gretap_stp()
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+index 619b469365be..1c18e332cd4f 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+@@ -62,7 +62,7 @@ full_test_span_gre_dir_vlan_ips()
+ 			  "$backward_type" "$ip1" "$ip2"
+ 
+ 	tc filter add dev $h3 ingress pref 77 prot 802.1q \
+-		flower $vlan_match ip_proto 0x2f \
++		flower $vlan_match \
+ 		action pass
+ 	mirror_test v$h1 $ip1 $ip2 $h3 77 10
+ 	tc filter del dev $h3 ingress pref 77
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index 5dbc7a08f4bd..a12274776116 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -79,12 +79,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_span_gre_forbidden_cpu()


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     9e77cd98792fc3ec13aceae7a3b7536b2129eb86
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Nov  4 17:33:00 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:28 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9e77cd98

linux kernel 4.18.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1016_linux-4.18.17.patch | 4982 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4986 insertions(+)

diff --git a/0000_README b/0000_README
index 52e9ca9..fcd301e 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-4.18.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.16
 
+Patch:  1016_linux-4.18.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-4.18.17.patch b/1016_linux-4.18.17.patch
new file mode 100644
index 0000000..1e385a1
--- /dev/null
+++ b/1016_linux-4.18.17.patch
@@ -0,0 +1,4982 @@
+diff --git a/Makefile b/Makefile
+index 034dd990b0ae..c051db0ca5a0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index f03b72644902..a18371a36e03 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -977,4 +977,12 @@ config REFCOUNT_FULL
+ 	  against various use-after-free conditions that can be used in
+ 	  security flaw exploits.
+ 
++config HAVE_ARCH_COMPILER_H
++	bool
++	help
++	  An architecture can select this if it provides an
++	  asm/compiler.h header that should be included after
++	  linux/compiler-*.h in order to override macro definitions that those
++	  headers generally provide.
++
+ source "kernel/gcov/Kconfig"
+diff --git a/arch/arm/boot/dts/bcm63138.dtsi b/arch/arm/boot/dts/bcm63138.dtsi
+index 43ee992ccdcf..6df61518776f 100644
+--- a/arch/arm/boot/dts/bcm63138.dtsi
++++ b/arch/arm/boot/dts/bcm63138.dtsi
+@@ -106,21 +106,23 @@
+ 		global_timer: timer@1e200 {
+ 			compatible = "arm,cortex-a9-global-timer";
+ 			reg = <0x1e200 0x20>;
+-			interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		local_timer: local-timer@1e600 {
+ 			compatible = "arm,cortex-a9-twd-timer";
+ 			reg = <0x1e600 0x20>;
+-			interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_EDGE_RISING)>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		twd_watchdog: watchdog@1e620 {
+ 			compatible = "arm,cortex-a9-twd-wdt";
+ 			reg = <0x1e620 0x20>;
+-			interrupts = <GIC_PPI 14 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_LEVEL_HIGH)>;
+ 		};
+ 
+ 		armpll: armpll {
+@@ -158,7 +160,7 @@
+ 		serial0: serial@600 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x600 0x1b>;
+-			interrupts = <GIC_SPI 32 0>;
++			interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -167,7 +169,7 @@
+ 		serial1: serial@620 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x620 0x1b>;
+-			interrupts = <GIC_SPI 33 0>;
++			interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -180,7 +182,7 @@
+ 			reg = <0x2000 0x600>, <0xf0 0x10>;
+ 			reg-names = "nand", "nand-int-base";
+ 			status = "disabled";
+-			interrupts = <GIC_SPI 38 0>;
++			interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "nand";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx53-qsb-common.dtsi b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+index ef7658a78836..c1548adee789 100644
+--- a/arch/arm/boot/dts/imx53-qsb-common.dtsi
++++ b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+@@ -123,6 +123,17 @@
+ 	};
+ };
+ 
++&cpu0 {
++	/* CPU rated to 1GHz, not 1.2GHz as per the default settings */
++	operating-points = <
++		/* kHz   uV */
++		166666  850000
++		400000  900000
++		800000  1050000
++		1000000 1200000
++	>;
++};
++
+ &esdhc1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_esdhc1>;
+diff --git a/arch/arm/kernel/vmlinux.lds.h b/arch/arm/kernel/vmlinux.lds.h
+index ae5fdff18406..8247bc15addc 100644
+--- a/arch/arm/kernel/vmlinux.lds.h
++++ b/arch/arm/kernel/vmlinux.lds.h
+@@ -49,6 +49,8 @@
+ #define ARM_DISCARD							\
+ 		*(.ARM.exidx.exit.text)					\
+ 		*(.ARM.extab.exit.text)					\
++		*(.ARM.exidx.text.exit)					\
++		*(.ARM.extab.text.exit)					\
+ 		ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))		\
+ 		ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))		\
+ 		ARM_EXIT_DISCARD(EXIT_TEXT)				\
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index fc91205ff46c..5bf9443cfbaa 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -473,7 +473,7 @@ void pci_ioremap_set_mem_type(int mem_type)
+ 
+ int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr)
+ {
+-	BUG_ON(offset + SZ_64K > IO_SPACE_LIMIT);
++	BUG_ON(offset + SZ_64K - 1 > IO_SPACE_LIMIT);
+ 
+ 	return ioremap_page_range(PCI_IO_VIRT_BASE + offset,
+ 				  PCI_IO_VIRT_BASE + offset + SZ_64K,
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 192b3ba07075..f85be2f8b140 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -117,11 +117,14 @@ static pte_t get_clear_flush(struct mm_struct *mm,
+ 
+ 		/*
+ 		 * If HW_AFDBM is enabled, then the HW could turn on
+-		 * the dirty bit for any page in the set, so check
+-		 * them all.  All hugetlb entries are already young.
++		 * the dirty or accessed bit for any page in the set,
++		 * so check them all.
+ 		 */
+ 		if (pte_dirty(pte))
+ 			orig_pte = pte_mkdirty(orig_pte);
++
++		if (pte_young(pte))
++			orig_pte = pte_mkyoung(orig_pte);
+ 	}
+ 
+ 	if (valid) {
+@@ -340,10 +343,13 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ 	if (!pte_same(orig_pte, pte))
+ 		changed = 1;
+ 
+-	/* Make sure we don't lose the dirty state */
++	/* Make sure we don't lose the dirty or young state */
+ 	if (pte_dirty(orig_pte))
+ 		pte = pte_mkdirty(pte);
+ 
++	if (pte_young(orig_pte))
++		pte = pte_mkyoung(pte);
++
+ 	hugeprot = pte_pgprot(pte);
+ 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+ 		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 59d07bd5374a..055b211b7126 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1217,9 +1217,10 @@ int find_and_online_cpu_nid(int cpu)
+ 		 * Need to ensure that NODE_DATA is initialized for a node from
+ 		 * available memory (see memblock_alloc_try_nid). If unable to
+ 		 * init the node, then default to nearest node that has memory
+-		 * installed.
++		 * installed. Skip onlining a node if the subsystems are not
++		 * yet initialized.
+ 		 */
+-		if (try_online_node(new_nid))
++		if (!topology_inited || try_online_node(new_nid))
+ 			new_nid = first_online_node;
+ #else
+ 		/*
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 0efa5b29d0a3..dcff272aee06 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -165,7 +165,7 @@ static void __init setup_bootmem(void)
+ 	BUG_ON(mem_size == 0);
+ 
+ 	set_max_mapnr(PFN_DOWN(mem_size));
+-	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
++	max_low_pfn = memblock_end_of_DRAM();
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	setup_initrd();
+diff --git a/arch/sparc/include/asm/cpudata_64.h b/arch/sparc/include/asm/cpudata_64.h
+index 666d6b5c0440..9c3fc03abe9a 100644
+--- a/arch/sparc/include/asm/cpudata_64.h
++++ b/arch/sparc/include/asm/cpudata_64.h
+@@ -28,7 +28,7 @@ typedef struct {
+ 	unsigned short	sock_id;	/* physical package */
+ 	unsigned short	core_id;
+ 	unsigned short  max_cache_id;	/* groupings of highest shared cache */
+-	unsigned short	proc_id;	/* strand (aka HW thread) id */
++	signed short	proc_id;	/* strand (aka HW thread) id */
+ } cpuinfo_sparc;
+ 
+ DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
+diff --git a/arch/sparc/include/asm/switch_to_64.h b/arch/sparc/include/asm/switch_to_64.h
+index 4ff29b1406a9..b1d4e2e3210f 100644
+--- a/arch/sparc/include/asm/switch_to_64.h
++++ b/arch/sparc/include/asm/switch_to_64.h
+@@ -67,6 +67,7 @@ do {	save_and_clear_fpu();						\
+ } while(0)
+ 
+ void synchronize_user_stack(void);
+-void fault_in_user_windows(void);
++struct pt_regs;
++void fault_in_user_windows(struct pt_regs *);
+ 
+ #endif /* __SPARC64_SWITCH_TO_64_H */
+diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
+index d3149baaa33c..67b3e6b3ce5d 100644
+--- a/arch/sparc/kernel/perf_event.c
++++ b/arch/sparc/kernel/perf_event.c
+@@ -24,6 +24,7 @@
+ #include <asm/cpudata.h>
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
++#include <linux/sched/clock.h>
+ #include <asm/nmi.h>
+ #include <asm/pcr.h>
+ #include <asm/cacheflush.h>
+@@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
+ 			sparc_perf_event_update(cp, &cp->hw,
+ 						cpuc->current_idx[i]);
+ 			cpuc->current_idx[i] = PIC_NO_INDEX;
++			if (cp->hw.state & PERF_HES_STOPPED)
++				cp->hw.state |= PERF_HES_ARCH;
+ 		}
+ 	}
+ }
+@@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
+ 
+ 		enc = perf_event_get_enc(cpuc->events[i]);
+ 		cpuc->pcr[0] &= ~mask_for_index(idx);
+-		if (hwc->state & PERF_HES_STOPPED)
++		if (hwc->state & PERF_HES_ARCH) {
+ 			cpuc->pcr[0] |= nop_for_index(idx);
+-		else
++		} else {
+ 			cpuc->pcr[0] |= event_encoding(enc, idx);
++			hwc->state = 0;
++		}
+ 	}
+ out:
+ 	cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
+@@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
+ 
+ 		cpuc->current_idx[i] = idx;
+ 
++		if (cp->hw.state & PERF_HES_ARCH)
++			continue;
++
+ 		sparc_pmu_start(cp, PERF_EF_RELOAD);
+ 	}
+ out:
+@@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
+ 	event->hw.state = 0;
+ 
+ 	sparc_pmu_enable_event(cpuc, &event->hw, idx);
++
++	perf_event_update_userpage(event);
+ }
+ 
+ static void sparc_pmu_stop(struct perf_event *event, int flags)
+@@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
+ 	cpuc->events[n0] = event->hw.event_base;
+ 	cpuc->current_idx[n0] = PIC_NO_INDEX;
+ 
+-	event->hw.state = PERF_HES_UPTODATE;
++	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ 	if (!(ef_flags & PERF_EF_START))
+-		event->hw.state |= PERF_HES_STOPPED;
++		event->hw.state |= PERF_HES_ARCH;
+ 
+ 	/*
+ 	 * If group events scheduling transaction was started,
+@@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc;
+ 	struct pt_regs *regs;
++	u64 finish_clock;
++	u64 start_clock;
+ 	int i;
+ 
+ 	if (!atomic_read(&active_events))
+@@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 		return NOTIFY_DONE;
+ 	}
+ 
++	start_clock = sched_clock();
++
+ 	regs = args->regs;
+ 
+ 	cpuc = this_cpu_ptr(&cpu_hw_events);
+@@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 			sparc_pmu_stop(event, 0);
+ 	}
+ 
++	finish_clock = sched_clock();
++
++	perf_sample_event_took(finish_clock - start_clock);
++
+ 	return NOTIFY_STOP;
+ }
+ 
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 6c086086ca8f..59eaf6227af1 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -36,6 +36,7 @@
+ #include <linux/sysrq.h>
+ #include <linux/nmi.h>
+ #include <linux/context_tracking.h>
++#include <linux/signal.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/page.h>
+@@ -521,7 +522,12 @@ static void stack_unaligned(unsigned long sp)
+ 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *) sp, 0, current);
+ }
+ 
+-void fault_in_user_windows(void)
++static const char uwfault32[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %08lx (orig_sp %08lx) TPC %08lx O7 %08lx\n";
++static const char uwfault64[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %016lx (orig_sp %016lx) TPC %08lx O7 %016lx\n";
++
++void fault_in_user_windows(struct pt_regs *regs)
+ {
+ 	struct thread_info *t = current_thread_info();
+ 	unsigned long window;
+@@ -534,9 +540,9 @@ void fault_in_user_windows(void)
+ 		do {
+ 			struct reg_window *rwin = &t->reg_window[window];
+ 			int winsize = sizeof(struct reg_window);
+-			unsigned long sp;
++			unsigned long sp, orig_sp;
+ 
+-			sp = t->rwbuf_stkptrs[window];
++			orig_sp = sp = t->rwbuf_stkptrs[window];
+ 
+ 			if (test_thread_64bit_stack(sp))
+ 				sp += STACK_BIAS;
+@@ -547,8 +553,16 @@ void fault_in_user_windows(void)
+ 				stack_unaligned(sp);
+ 
+ 			if (unlikely(copy_to_user((char __user *)sp,
+-						  rwin, winsize)))
++						  rwin, winsize))) {
++				if (show_unhandled_signals)
++					printk_ratelimited(is_compat_task() ?
++							   uwfault32 : uwfault64,
++							   current->comm, current->pid,
++							   sp, orig_sp,
++							   regs->tpc,
++							   regs->u_regs[UREG_I7]);
+ 				goto barf;
++			}
+ 		} while (window--);
+ 	}
+ 	set_thread_wsaved(0);
+@@ -556,8 +570,7 @@ void fault_in_user_windows(void)
+ 
+ barf:
+ 	set_thread_wsaved(window + 1);
+-	user_exit();
+-	do_exit(SIGILL);
++	force_sig(SIGSEGV, current);
+ }
+ 
+ asmlinkage long sparc_do_fork(unsigned long clone_flags,
+diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
+index f6528884a2c8..29aa34f11720 100644
+--- a/arch/sparc/kernel/rtrap_64.S
++++ b/arch/sparc/kernel/rtrap_64.S
+@@ -39,6 +39,7 @@ __handle_preemption:
+ 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
+ 
+ __handle_user_windows:
++		add			%sp, PTREGS_OFF, %o0
+ 		call			fault_in_user_windows
+ 661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+ 		/* If userspace is using ADI, it could potentially pass
+@@ -84,8 +85,9 @@ __handle_signal:
+ 		ldx			[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
+ 		sethi			%hi(0xf << 20), %l4
+ 		and			%l1, %l4, %l4
++		andn			%l1, %l4, %l1
+ 		ba,pt			%xcc, __handle_preemption_continue
+-		 andn			%l1, %l4, %l1
++		 srl			%l4, 20, %l4
+ 
+ 		/* When returning from a NMI (%pil==15) interrupt we want to
+ 		 * avoid running softirqs, doing IRQ tracing, preempting, etc.
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index 44d379db3f64..4c5b3fcbed94 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -371,7 +371,11 @@ static int setup_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -501,7 +505,11 @@ static int setup_rt_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 48366e5eb5b2..e9de1803a22e 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -370,7 +370,11 @@ setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
+ 		get_sigframe(ksig, regs, sf_size);
+ 
+ 	if (invalid_frame_pointer (sf)) {
+-		do_exit(SIGILL);	/* won't return, actually */
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame: %016lx TPC %016lx O7 %016lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/systbls_64.S b/arch/sparc/kernel/systbls_64.S
+index 387ef993880a..25699462ad5b 100644
+--- a/arch/sparc/kernel/systbls_64.S
++++ b/arch/sparc/kernel/systbls_64.S
+@@ -47,9 +47,9 @@ sys_call_table32:
+ 	.word sys_recvfrom, sys_setreuid16, sys_setregid16, sys_rename, compat_sys_truncate
+ /*130*/	.word compat_sys_ftruncate, sys_flock, compat_sys_lstat64, sys_sendto, sys_shutdown
+ 	.word sys_socketpair, sys_mkdir, sys_rmdir, compat_sys_utimes, compat_sys_stat64
+-/*140*/	.word sys_sendfile64, sys_nis_syscall, compat_sys_futex, sys_gettid, compat_sys_getrlimit
++/*140*/	.word sys_sendfile64, sys_getpeername, compat_sys_futex, sys_gettid, compat_sys_getrlimit
+ 	.word compat_sys_setrlimit, sys_pivot_root, sys_prctl, sys_pciconfig_read, sys_pciconfig_write
+-/*150*/	.word sys_nis_syscall, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
++/*150*/	.word sys_getsockname, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
+ 	.word compat_sys_fcntl64, sys_inotify_rm_watch, compat_sys_statfs, compat_sys_fstatfs, sys_oldumount
+ /*160*/	.word compat_sys_sched_setaffinity, compat_sys_sched_getaffinity, sys_getdomainname, sys_setdomainname, sys_nis_syscall
+ 	.word sys_quotactl, sys_set_tid_address, compat_sys_mount, compat_sys_ustat, sys_setxattr
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index f396048a0d68..39822f611c01 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -1383,6 +1383,7 @@ int __node_distance(int from, int to)
+ 	}
+ 	return numa_latency[from][to];
+ }
++EXPORT_SYMBOL(__node_distance);
+ 
+ static int __init find_best_numa_node_for_mlgroup(struct mdesc_mlgroup *grp)
+ {
+diff --git a/arch/sparc/vdso/vclock_gettime.c b/arch/sparc/vdso/vclock_gettime.c
+index 3feb3d960ca5..75dca9aab737 100644
+--- a/arch/sparc/vdso/vclock_gettime.c
++++ b/arch/sparc/vdso/vclock_gettime.c
+@@ -33,9 +33,19 @@
+ #define	TICK_PRIV_BIT	(1ULL << 63)
+ #endif
+ 
++#ifdef	CONFIG_SPARC64
+ #define SYSCALL_STRING							\
+ 	"ta	0x6d;"							\
+-	"sub	%%g0, %%o0, %%o0;"					\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#else
++#define SYSCALL_STRING							\
++	"ta	0x10;"							\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#endif
+ 
+ #define SYSCALL_CLOBBERS						\
+ 	"f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7",			\
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 981ba5e8241b..8671de126eac 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -36,6 +36,7 @@
+ 
+ static int num_counters_llc;
+ static int num_counters_nb;
++static bool l3_mask;
+ 
+ static HLIST_HEAD(uncore_unused_list);
+ 
+@@ -209,6 +210,13 @@ static int amd_uncore_event_init(struct perf_event *event)
+ 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
+ 	hwc->idx = -1;
+ 
++	/*
++	 * SliceMask and ThreadMask need to be set for certain L3 events in
++	 * Family 17h. For other events, the two fields do not affect the count.
++	 */
++	if (l3_mask)
++		hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK);
++
+ 	if (event->cpu < 0)
+ 		return -EINVAL;
+ 
+@@ -525,6 +533,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l3";
+ 		format_attr_event_df.show = &event_show_df;
+ 		format_attr_event_l3.show = &event_show_l3;
++		l3_mask			  = true;
+ 	} else {
+ 		num_counters_nb		  = NUM_COUNTERS_NB;
+ 		num_counters_llc	  = NUM_COUNTERS_L2;
+@@ -532,6 +541,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l2";
+ 		format_attr_event_df	  = format_attr_event;
+ 		format_attr_event_l3	  = format_attr_event;
++		l3_mask			  = false;
+ 	}
+ 
+ 	amd_nb_pmu.attr_groups	= amd_uncore_attr_groups_df;
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 51d7c117e3c7..c07bee31abe8 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3061,7 +3061,7 @@ static struct event_constraint bdx_uncore_pcu_constraints[] = {
+ 
+ void bdx_uncore_cpu_init(void)
+ {
+-	int pkg = topology_phys_to_logical_pkg(0);
++	int pkg = topology_phys_to_logical_pkg(boot_cpu_data.phys_proc_id);
+ 
+ 	if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ 		bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+@@ -3931,16 +3931,16 @@ static const struct pci_device_id skx_uncore_pci_ids[] = {
+ 		.driver_data = UNCORE_PCI_DEV_FULL_DATA(21, 5, SKX_PCI_UNCORE_M2PCIE, 3),
+ 	},
+ 	{ /* M3UPI0 Link 0 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 0, SKX_PCI_UNCORE_M3UPI, 0),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 0),
+ 	},
+ 	{ /* M3UPI0 Link 1 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 1),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204E),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 2, SKX_PCI_UNCORE_M3UPI, 1),
+ 	},
+ 	{ /* M3UPI1 Link 2 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 4, SKX_PCI_UNCORE_M3UPI, 2),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 5, SKX_PCI_UNCORE_M3UPI, 2),
+ 	},
+ 	{ /* end: all zeroes */ }
+ };
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 12f54082f4c8..78241b736f2a 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -46,6 +46,14 @@
+ #define INTEL_ARCH_EVENT_MASK	\
+ 	(ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)
+ 
++#define AMD64_L3_SLICE_SHIFT				48
++#define AMD64_L3_SLICE_MASK				\
++	((0xFULL) << AMD64_L3_SLICE_SHIFT)
++
++#define AMD64_L3_THREAD_SHIFT				56
++#define AMD64_L3_THREAD_MASK				\
++	((0xFFULL) << AMD64_L3_THREAD_SHIFT)
++
+ #define X86_RAW_EVENT_MASK		\
+ 	(ARCH_PERFMON_EVENTSEL_EVENT |	\
+ 	 ARCH_PERFMON_EVENTSEL_UMASK |	\
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 930c88341e4e..1fbf38dde84c 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -90,7 +90,7 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+@@ -110,7 +110,7 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index ef772e5634d4..3e59a187fe30 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -436,14 +436,18 @@ static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
+ 
+ static inline bool svm_sev_enabled(void)
+ {
+-	return max_sev_asid;
++	return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
+ }
+ 
+ static inline bool sev_guest(struct kvm *kvm)
+ {
++#ifdef CONFIG_KVM_AMD_SEV
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 
+ 	return sev->active;
++#else
++	return false;
++#endif
+ }
+ 
+ static inline int sev_get_asid(struct kvm *kvm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 32721ef9652d..9efe130ea2e6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -819,6 +819,7 @@ struct nested_vmx {
+ 
+ 	/* to migrate it to L2 if VM_ENTRY_LOAD_DEBUG_CONTROLS is off */
+ 	u64 vmcs01_debugctl;
++	u64 vmcs01_guest_bndcfgs;
+ 
+ 	u16 vpid02;
+ 	u16 last_vpid;
+@@ -3395,9 +3396,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
+ 		VM_EXIT_SAVE_VMX_PREEMPTION_TIMER | VM_EXIT_ACK_INTR_ON_EXIT;
+ 
+-	if (kvm_mpx_supported())
+-		msrs->exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
+-
+ 	/* We support free control of debug control saving. */
+ 	msrs->exit_ctls_low &= ~VM_EXIT_SAVE_DEBUG_CONTROLS;
+ 
+@@ -3414,8 +3412,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_ENTRY_LOAD_IA32_PAT;
+ 	msrs->entry_ctls_high |=
+ 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER);
+-	if (kvm_mpx_supported())
+-		msrs->entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
+ 
+ 	/* We support free control of debug control loading. */
+ 	msrs->entry_ctls_low &= ~VM_ENTRY_LOAD_DEBUG_CONTROLS;
+@@ -10825,6 +10821,23 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
+ #undef cr4_fixed1_update
+ }
+ 
++static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
++{
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
++
++	if (kvm_mpx_supported()) {
++		bool mpx_enabled = guest_cpuid_has(vcpu, X86_FEATURE_MPX);
++
++		if (mpx_enabled) {
++			vmx->nested.msrs.entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
++		} else {
++			vmx->nested.msrs.entry_ctls_high &= ~VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
++		}
++	}
++}
++
+ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -10841,8 +10854,10 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &=
+ 			~FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
+ 
+-	if (nested_vmx_allowed(vcpu))
++	if (nested_vmx_allowed(vcpu)) {
+ 		nested_vmx_cr_fixed1_bits_update(vcpu);
++		nested_vmx_entry_exit_ctls_update(vcpu);
++	}
+ }
+ 
+ static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+@@ -11553,8 +11568,13 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+-	if (vmx_mpx_supported())
+-		vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++	if (kvm_mpx_supported()) {
++		if (vmx->nested.nested_run_pending &&
++			(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++			vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++		else
++			vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
++	}
+ 
+ 	if (enable_vpid) {
+ 		if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
+@@ -12068,6 +12088,9 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
+ 
+ 	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ 		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
++	if (kvm_mpx_supported() &&
++		!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++		vmx->nested.vmcs01_guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+ 
+ 	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ 	vmx_segment_cache_clear(vmx);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 97fcac34e007..3cd58a5eb449 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4625,7 +4625,7 @@ static void kvm_init_msr_list(void)
+ 		 */
+ 		switch (msrs_to_save[i]) {
+ 		case MSR_IA32_BNDCFGS:
+-			if (!kvm_x86_ops->mpx_supported())
++			if (!kvm_mpx_supported())
+ 				continue;
+ 			break;
+ 		case MSR_TSC_AUX:
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 6f7637b19738..e764dfdea53f 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -419,7 +419,6 @@ static unsigned int armada_3700_pm_dvfs_get_cpu_parent(struct regmap *base)
+ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ {
+ 	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
+ 	u32 val;
+ 
+ 	if (armada_3700_pm_dvfs_is_enabled(pm_cpu->nb_pm_base)) {
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 06dce16e22bb..70f0dedca59f 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1675,7 +1675,8 @@ static void gpiochip_set_cascaded_irqchip(struct gpio_chip *gpiochip,
+ 		irq_set_chained_handler_and_data(parent_irq, parent_handler,
+ 						 gpiochip);
+ 
+-		gpiochip->irq.parents = &parent_irq;
++		gpiochip->irq.parent_irq = parent_irq;
++		gpiochip->irq.parents = &gpiochip->irq.parent_irq;
+ 		gpiochip->irq.num_parents = 1;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e484d0a94bdc..5b9cc3aeaa55 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4494,12 +4494,18 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	}
+ 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
+ 
+-	/* Signal HW programming completion */
+-	drm_atomic_helper_commit_hw_done(state);
+ 
+ 	if (wait_for_vblank)
+ 		drm_atomic_helper_wait_for_flip_done(dev, state);
+ 
++	/*
++	 * FIXME:
++	 * Delay hw_done() until flip_done() is signaled. This is to block
++	 * another commit from freeing the CRTC state while we're still
++	 * waiting on flip_done.
++	 */
++	drm_atomic_helper_commit_hw_done(state);
++
+ 	drm_atomic_helper_cleanup_planes(dev, state);
+ 
+ 	/* Finally, drop a runtime PM reference for each newly disabled CRTC,
+diff --git a/drivers/gpu/drm/i2c/tda9950.c b/drivers/gpu/drm/i2c/tda9950.c
+index 3f7396caad48..ccd355d0c123 100644
+--- a/drivers/gpu/drm/i2c/tda9950.c
++++ b/drivers/gpu/drm/i2c/tda9950.c
+@@ -188,7 +188,8 @@ static irqreturn_t tda9950_irq(int irq, void *data)
+ 			break;
+ 		}
+ 		/* TDA9950 executes all retries for us */
+-		tx_status |= CEC_TX_STATUS_MAX_RETRIES;
++		if (tx_status != CEC_TX_STATUS_OK)
++			tx_status |= CEC_TX_STATUS_MAX_RETRIES;
+ 		cec_transmit_done(priv->adap, tx_status, arb_lost_cnt,
+ 				  nack_cnt, 0, err_cnt);
+ 		break;
+@@ -307,7 +308,7 @@ static void tda9950_release(struct tda9950_priv *priv)
+ 	/* Wait up to .5s for it to signal non-busy */
+ 	do {
+ 		csr = tda9950_read(client, REG_CSR);
+-		if (!(csr & CSR_BUSY) || --timeout)
++		if (!(csr & CSR_BUSY) || !--timeout)
+ 			break;
+ 		msleep(10);
+ 	} while (1);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index eee6b79fb131..ae5b72269e27 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -974,7 +974,6 @@
+ #define USB_DEVICE_ID_SIS817_TOUCH	0x0817
+ #define USB_DEVICE_ID_SIS_TS		0x1013
+ #define USB_DEVICE_ID_SIS1030_TOUCH	0x1030
+-#define USB_DEVICE_ID_SIS10FB_TOUCH	0x10fb
+ 
+ #define USB_VENDOR_ID_SKYCABLE			0x1223
+ #define	USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER	0x3F07
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 37013b58098c..d17cf6e323b2 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -47,8 +47,7 @@
+ /* quirks to control the device */
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+-#define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
+-#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(2)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -172,8 +171,6 @@ static const struct i2c_hid_quirks {
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
+ 		I2C_HID_QUIRK_NO_RUNTIME_PM },
+-	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1241,22 +1238,13 @@ static int i2c_hid_resume(struct device *dev)
+ 
+ 	/* Instead of resetting device, simply powers the device on. This
+ 	 * solves "incomplete reports" on Raydium devices 2386:3118 and
+-	 * 2386:4B33
++	 * 2386:4B33 and fixes various SIS touchscreens no longer sending
++	 * data after a suspend/resume.
+ 	 */
+ 	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Some devices need to re-send report descr cmd
+-	 * after resume, after this it will be back normal.
+-	 * otherwise it issues too many incomplete reports.
+-	 */
+-	if (ihid->quirks & I2C_HID_QUIRK_RESEND_REPORT_DESCR) {
+-		ret = i2c_hid_command(client, &hid_report_descr_cmd, NULL, 0);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	if (hid->driver && hid->driver->reset_resume) {
+ 		ret = hid->driver->reset_resume(hid);
+ 		return ret;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 308456d28afb..73339fd47dd8 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -544,6 +544,9 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 	int shrink = 0;
+ 	int c;
+ 
++	if (!mr->allocated_from_cache)
++		return;
++
+ 	c = order2idx(dev, mr->order);
+ 	if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) {
+ 		mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c);
+@@ -1647,18 +1650,19 @@ static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 		umem = NULL;
+ 	}
+ #endif
+-
+ 	clean_mr(dev, mr);
+ 
++	/*
++	 * We should unregister the DMA address from the HCA before
++	 * removing the DMA mapping.
++	 */
++	mlx5_mr_cache_free(dev, mr);
+ 	if (umem) {
+ 		ib_umem_release(umem);
+ 		atomic_sub(npages, &dev->mdev->priv.reg_pages);
+ 	}
+-
+ 	if (!mr->allocated_from_cache)
+ 		kfree(mr);
+-	else
+-		mlx5_mr_cache_free(dev, mr);
+ }
+ 
+ int mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
+index 9697977b80f0..6b9ad8673218 100644
+--- a/drivers/net/bonding/bond_netlink.c
++++ b/drivers/net/bonding/bond_netlink.c
+@@ -638,8 +638,7 @@ static int bond_fill_info(struct sk_buff *skb,
+ 				goto nla_put_failure;
+ 
+ 			if (nla_put(skb, IFLA_BOND_AD_ACTOR_SYSTEM,
+-				    sizeof(bond->params.ad_actor_system),
+-				    &bond->params.ad_actor_system))
++				    ETH_ALEN, &bond->params.ad_actor_system))
+ 				goto nla_put_failure;
+ 		}
+ 		if (!bond_3ad_get_active_agg_info(bond, &info)) {
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 1b01cd2820ba..000f0d42a710 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1580,8 +1580,6 @@ static int ena_up_complete(struct ena_adapter *adapter)
+ 	if (rc)
+ 		return rc;
+ 
+-	ena_init_napi(adapter);
+-
+ 	ena_change_mtu(adapter->netdev, adapter->netdev->mtu);
+ 
+ 	ena_refill_all_rx_bufs(adapter);
+@@ -1735,6 +1733,13 @@ static int ena_up(struct ena_adapter *adapter)
+ 
+ 	ena_setup_io_intr(adapter);
+ 
++	/* napi poll functions should be initialized before running
++	 * request_irq(), to handle a rare condition where there is a pending
++	 * interrupt, causing the ISR to fire immediately while the poll
++	 * function wasn't set yet, causing a null dereference
++	 */
++	ena_init_napi(adapter);
++
+ 	rc = ena_request_io_irq(adapter);
+ 	if (rc)
+ 		goto err_req_irq;
+@@ -2648,7 +2653,11 @@ err_disable_msix:
+ 	ena_free_mgmnt_irq(adapter);
+ 	ena_disable_msix(adapter);
+ err_device_destroy:
++	ena_com_abort_admin_commands(ena_dev);
++	ena_com_wait_for_abort_completion(ena_dev);
+ 	ena_com_admin_destroy(ena_dev);
++	ena_com_mmio_reg_read_request_destroy(ena_dev);
++	ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);
+ err:
+ 	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
+@@ -3128,15 +3137,8 @@ err_rss_init:
+ 
+ static void ena_release_bars(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
+ {
+-	int release_bars;
+-
+-	if (ena_dev->mem_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->mem_bar);
+-
+-	if (ena_dev->reg_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->reg_bar);
++	int release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 
+-	release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 	pci_release_selected_regions(pdev, release_bars);
+ }
+ 
+diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c
+index 116997a8b593..00332a1ea84b 100644
+--- a/drivers/net/ethernet/amd/declance.c
++++ b/drivers/net/ethernet/amd/declance.c
+@@ -1031,6 +1031,7 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	int i, ret;
+ 	unsigned long esar_base;
+ 	unsigned char *esar;
++	const char *desc;
+ 
+ 	if (dec_lance_debug && version_printed++ == 0)
+ 		printk(version);
+@@ -1216,19 +1217,20 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	 */
+ 	switch (type) {
+ 	case ASIC_LANCE:
+-		printk("%s: IOASIC onboard LANCE", name);
++		desc = "IOASIC onboard LANCE";
+ 		break;
+ 	case PMAD_LANCE:
+-		printk("%s: PMAD-AA", name);
++		desc = "PMAD-AA";
+ 		break;
+ 	case PMAX_LANCE:
+-		printk("%s: PMAX onboard LANCE", name);
++		desc = "PMAX onboard LANCE";
+ 		break;
+ 	}
+ 	for (i = 0; i < 6; i++)
+ 		dev->dev_addr[i] = esar[i * 4];
+ 
+-	printk(", addr = %pM, irq = %d\n", dev->dev_addr, dev->irq);
++	printk("%s: %s, addr = %pM, irq = %d\n",
++	       name, desc, dev->dev_addr, dev->irq);
+ 
+ 	dev->netdev_ops = &lance_netdev_ops;
+ 	dev->watchdog_timeo = 5*HZ;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 4241ae928d4a..34af5f1569c8 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -321,9 +321,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ 	phydev->advertising = phydev->supported;
+ 
+ 	/* The internal PHY has its link interrupts routed to the
+-	 * Ethernet MAC ISRs
++	 * Ethernet MAC ISRs. On GENETv5 there is a hardware issue
++	 * that prevents the signaling of link UP interrupts when
++	 * the link operates at 10Mbps, so fall back to polling for
++	 * those versions of GENET.
+ 	 */
+-	if (priv->internal_phy)
++	if (priv->internal_phy && !GENET_IS_V5(priv))
+ 		dev->phydev->irq = PHY_IGNORE_INTERRUPT;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index dfa045f22ef1..db568232ff3e 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2089,6 +2089,7 @@ static void macb_configure_dma(struct macb *bp)
+ 		else
+ 			dmacfg &= ~GEM_BIT(TXCOEN);
+ 
++		dmacfg &= ~GEM_BIT(ADDR64);
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 		if (bp->hw_dma_cap & HW_DMA_CAP_64B)
+ 			dmacfg |= GEM_BIT(ADDR64);
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index a19172dbe6be..c34ea385fe4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -2159,6 +2159,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_QSET_PARAMS)
++			return -EINVAL;
+ 		if (t.qset_idx >= SGE_QSETS)
+ 			return -EINVAL;
+ 		if (!in_range(t.intr_lat, 0, M_NEWTIMER) ||
+@@ -2258,6 +2260,9 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
+ 
++		if (t.cmd != CHELSIO_GET_QSET_PARAMS)
++			return -EINVAL;
++
+ 		/* Display qsets for all ports when offload enabled */
+ 		if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
+ 			q1 = 0;
+@@ -2303,6 +2308,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&edata, useraddr, sizeof(edata)))
+ 			return -EFAULT;
++		if (edata.cmd != CHELSIO_SET_QSET_NUM)
++			return -EINVAL;
+ 		if (edata.val < 1 ||
+ 			(edata.val > 1 && !(adapter->flags & USING_MSIX)))
+ 			return -EINVAL;
+@@ -2343,6 +2350,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_LOAD_FW)
++			return -EINVAL;
+ 		/* Check t.len sanity ? */
+ 		fw_data = memdup_user(useraddr + sizeof(t), t.len);
+ 		if (IS_ERR(fw_data))
+@@ -2366,6 +2375,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SETMTUTAB)
++			return -EINVAL;
+ 		if (m.nmtus != NMTUS)
+ 			return -EINVAL;
+ 		if (m.mtus[0] < 81)	/* accommodate SACK */
+@@ -2407,6 +2418,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SET_PM)
++			return -EINVAL;
+ 		if (!is_power_of_2(m.rx_pg_sz) ||
+ 			!is_power_of_2(m.tx_pg_sz))
+ 			return -EINVAL;	/* not power of 2 */
+@@ -2440,6 +2453,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EIO;	/* need the memory controllers */
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_GET_MEM)
++			return -EINVAL;
+ 		if ((t.addr & 7) || (t.len & 7))
+ 			return -EINVAL;
+ 		if (t.mem_id == MEM_CM)
+@@ -2492,6 +2507,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EAGAIN;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_TRACE_FILTER)
++			return -EINVAL;
+ 
+ 		tp = (const struct trace_params *)&t.sip;
+ 		if (t.config_tx)
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 8f755009ff38..c8445a4135a9 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -3915,8 +3915,6 @@ static int be_enable_vxlan_offloads(struct be_adapter *adapter)
+ 	netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ 				   NETIF_F_TSO | NETIF_F_TSO6 |
+ 				   NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->features |= NETIF_F_GSO_UDP_TUNNEL;
+ 
+ 	dev_info(dev, "Enabled VxLAN offloads for UDP port %d\n",
+ 		 be16_to_cpu(port));
+@@ -3938,8 +3936,6 @@ static void be_disable_vxlan_offloads(struct be_adapter *adapter)
+ 	adapter->vxlan_port = 0;
+ 
+ 	netdev->hw_enc_features = 0;
+-	netdev->hw_features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+-	netdev->features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+ }
+ 
+ static void be_calculate_vf_res(struct be_adapter *adapter, u16 num_vfs,
+@@ -5232,6 +5228,7 @@ static void be_netdev_init(struct net_device *netdev)
+ 	struct be_adapter *adapter = netdev_priv(netdev);
+ 
+ 	netdev->hw_features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6 |
++		NETIF_F_GSO_UDP_TUNNEL |
+ 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |
+ 		NETIF_F_HW_VLAN_CTAG_TX;
+ 	if ((be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS))
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index 4778b663653e..bf80855dd0dd 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -452,6 +452,10 @@ struct bufdesc_ex {
+  * initialisation.
+  */
+ #define FEC_QUIRK_MIB_CLEAR		(1 << 15)
++/* Only the i.MX25/i.MX27/i.MX28 controllers support the FRBR,FRSR registers;
++ * those FIFO receive registers are reserved on other platforms.
++ */
++#define FEC_QUIRK_HAS_FRREG		(1 << 16)
+ 
+ struct bufdesc_prop {
+ 	int qid;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index c729665107f5..11f90bb2d2a9 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -90,14 +90,16 @@ static struct platform_device_id fec_devtype[] = {
+ 		.driver_data = 0,
+ 	}, {
+ 		.name = "imx25-fec",
+-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
++			       FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx27-fec",
+-		.driver_data = FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx28-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+-				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC,
++				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
++				FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx6q-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+@@ -1157,7 +1159,7 @@ static void fec_enet_timeout_work(struct work_struct *work)
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+@@ -1272,7 +1274,7 @@ skb_done:
+ 
+ 		/* Since we have freed up a buffer, the ring is no longer full
+ 		 */
+-		if (netif_queue_stopped(ndev)) {
++		if (netif_tx_queue_stopped(nq)) {
+ 			entries_free = fec_enet_get_free_txdesc_num(txq);
+ 			if (entries_free >= txq->tx_wake_threshold)
+ 				netif_tx_wake_queue(nq);
+@@ -1745,7 +1747,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_restart(ndev);
+-			netif_wake_queue(ndev);
++			netif_tx_wake_all_queues(ndev);
+ 			netif_tx_unlock_bh(ndev);
+ 			napi_enable(&fep->napi);
+ 		}
+@@ -2163,7 +2165,13 @@ static void fec_enet_get_regs(struct net_device *ndev,
+ 	memset(buf, 0, regs->len);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
+-		off = fec_enet_register_offset[i] / 4;
++		off = fec_enet_register_offset[i];
++
++		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
++		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))
++			continue;
++
++		off >>= 2;
+ 		buf[off] = readl(&theregs[off]);
+ 	}
+ }
+@@ -2246,7 +2254,7 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index d3a1dd20e41d..fb6c72cf70a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -429,10 +429,9 @@ static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
+ 
+ static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
+ 					      struct mlx5_wq_cyc *wq,
+-					      u16 pi, u16 frag_pi)
++					      u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -451,15 +450,14 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 	struct mlx5e_umr_wqe *umr_wqe;
+ 	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
+-	u16 pi, frag_pi;
++	u16 pi, contig_wqebbs_room;
+ 	int err;
+ 	int i;
+ 
+ 	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-
+-	if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
++		mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ 	}
+ 
+@@ -693,43 +691,15 @@ static inline bool is_last_ethertype_ip(struct sk_buff *skb, int *network_depth)
+ 	return (ethertype == htons(ETH_P_IP) || ethertype == htons(ETH_P_IPV6));
+ }
+ 
+-static __be32 mlx5e_get_fcs(struct sk_buff *skb)
++static u32 mlx5e_get_fcs(const struct sk_buff *skb)
+ {
+-	int last_frag_sz, bytes_in_prev, nr_frags;
+-	u8 *fcs_p1, *fcs_p2;
+-	skb_frag_t *last_frag;
+-	__be32 fcs_bytes;
+-
+-	if (!skb_is_nonlinear(skb))
+-		return *(__be32 *)(skb->data + skb->len - ETH_FCS_LEN);
+-
+-	nr_frags = skb_shinfo(skb)->nr_frags;
+-	last_frag = &skb_shinfo(skb)->frags[nr_frags - 1];
+-	last_frag_sz = skb_frag_size(last_frag);
+-
+-	/* If all FCS data is in last frag */
+-	if (last_frag_sz >= ETH_FCS_LEN)
+-		return *(__be32 *)(skb_frag_address(last_frag) +
+-				   last_frag_sz - ETH_FCS_LEN);
+-
+-	fcs_p2 = (u8 *)skb_frag_address(last_frag);
+-	bytes_in_prev = ETH_FCS_LEN - last_frag_sz;
+-
+-	/* Find where the other part of the FCS is - Linear or another frag */
+-	if (nr_frags == 1) {
+-		fcs_p1 = skb_tail_pointer(skb);
+-	} else {
+-		skb_frag_t *prev_frag = &skb_shinfo(skb)->frags[nr_frags - 2];
+-
+-		fcs_p1 = skb_frag_address(prev_frag) +
+-			    skb_frag_size(prev_frag);
+-	}
+-	fcs_p1 -= bytes_in_prev;
++	const void *fcs_bytes;
++	u32 _fcs_bytes;
+ 
+-	memcpy(&fcs_bytes, fcs_p1, bytes_in_prev);
+-	memcpy(((u8 *)&fcs_bytes) + bytes_in_prev, fcs_p2, last_frag_sz);
++	fcs_bytes = skb_header_pointer(skb, skb->len - ETH_FCS_LEN,
++				       ETH_FCS_LEN, &_fcs_bytes);
+ 
+-	return fcs_bytes;
++	return __get_unaligned_cpu32(fcs_bytes);
+ }
+ 
+ static inline void mlx5e_handle_csum(struct net_device *netdev,
+@@ -762,8 +732,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 						 network_depth - ETH_HLEN,
+ 						 skb->csum);
+ 		if (unlikely(netdev->features & NETIF_F_RXFCS))
+-			skb->csum = csum_add(skb->csum,
+-					     (__force __wsum)mlx5e_get_fcs(skb));
++			skb->csum = csum_block_add(skb->csum,
++						   (__force __wsum)mlx5e_get_fcs(skb),
++						   skb->len - ETH_FCS_LEN);
+ 		stats->csum_complete++;
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index f29deb44bf3b..1e774d979c85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -287,10 +287,9 @@ dma_unmap_wqe_err:
+ 
+ static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
+ 					   struct mlx5_wq_cyc *wq,
+-					   u16 pi, u16 frag_pi)
++					   u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -345,8 +344,8 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
++	u16 headlen, ihs, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+-	u16 headlen, ihs, frag_pi;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+ 	int num_dma;
+@@ -383,9 +382,9 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
+ 	}
+ 
+@@ -629,7 +628,7 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
+-	u16 headlen, ihs, pi, frag_pi;
++	u16 headlen, ihs, pi, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+@@ -665,13 +664,14 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
++	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
+ 	}
+ 
+-	mlx5i_sq_fetch_wqe(sq, &wqe, &pi);
++	mlx5i_sq_fetch_wqe(sq, &wqe, pi);
+ 
+ 	/* fill wqe */
+ 	wi       = &sq->db.wqe_info[pi];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 406c23862f5f..01ccc8201052 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -269,7 +269,7 @@ static void eq_pf_process(struct mlx5_eq *eq)
+ 		case MLX5_PFAULT_SUBTYPE_WQE:
+ 			/* WQE based event */
+ 			pfault->type =
+-				be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24;
++				(be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7;
+ 			pfault->token =
+ 				be32_to_cpu(pf_eqe->wqe.token);
+ 			pfault->wqe.wq_num =
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+index 5645a4facad2..b8ee9101c506 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+@@ -245,7 +245,7 @@ static void *mlx5_fpga_ipsec_cmd_exec(struct mlx5_core_dev *mdev,
+ 		return ERR_PTR(res);
+ 	}
+ 
+-	/* Context will be freed by wait func after completion */
++	/* Context should be freed by the caller after completion. */
+ 	return context;
+ }
+ 
+@@ -418,10 +418,8 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP);
+ 	cmd.flags = htonl(flags);
+ 	context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd));
+-	if (IS_ERR(context)) {
+-		err = PTR_ERR(context);
+-		goto out;
+-	}
++	if (IS_ERR(context))
++		return PTR_ERR(context);
+ 
+ 	err = mlx5_fpga_ipsec_cmd_wait(context);
+ 	if (err)
+@@ -435,6 +433,7 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	}
+ 
+ out:
++	kfree(context);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+index 08eac92fc26c..0982c579ec74 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+@@ -109,12 +109,11 @@ struct mlx5i_tx_wqe {
+ 
+ static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq,
+ 				      struct mlx5i_tx_wqe **wqe,
+-				      u16 *pi)
++				      u16 pi)
+ {
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 
+-	*pi  = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	*wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
++	*wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ 	memset(*wqe, 0, sizeof(**wqe));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index d838af9539b1..9046475c531c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,11 +39,6 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+-{
+-	return wq->fbc.frag_sz_m1 + 1;
+-}
+-
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+ {
+ 	return wq->fbc.sz_m1 + 1;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 16476cc1a602..311256554520 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+@@ -140,11 +139,6 @@ static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
+ 	return ctr & wq->fbc.sz_m1;
+ }
+ 
+-static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr)
+-{
+-	return ctr & wq->fbc.frag_sz_m1;
+-}
+-
+ static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq)
+ {
+ 	return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
+@@ -160,6 +154,11 @@ static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix)
+ 	return mlx5_frag_buf_get_wqe(&wq->fbc, ix);
+ }
+ 
++static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix)
++{
++	return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1;
++}
++
+ static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
+ {
+ 	int equal   = (cc1 == cc2);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index f9c724752a32..13636a537f37 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -985,8 +985,8 @@ static int mlxsw_devlink_core_bus_device_reload(struct devlink *devlink,
+ 					     mlxsw_core->bus,
+ 					     mlxsw_core->bus_priv, true,
+ 					     devlink);
+-	if (err)
+-		mlxsw_core->reload_fail = true;
++	mlxsw_core->reload_fail = !!err;
++
+ 	return err;
+ }
+ 
+@@ -1126,8 +1126,15 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	const char *device_kind = mlxsw_core->bus_info->device_kind;
+ 	struct devlink *devlink = priv_to_devlink(mlxsw_core);
+ 
+-	if (mlxsw_core->reload_fail)
+-		goto reload_fail;
++	if (mlxsw_core->reload_fail) {
++		if (!reload)
++			/* Only the parts that were not de-initialized in the
++			 * failed reload attempt need to be de-initialized.
++			 */
++			goto reload_fail_deinit;
++		else
++			return;
++	}
+ 
+ 	if (mlxsw_core->driver->fini)
+ 		mlxsw_core->driver->fini(mlxsw_core);
+@@ -1140,9 +1147,12 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	if (!reload)
+ 		devlink_resources_unregister(devlink, NULL);
+ 	mlxsw_core->bus->fini(mlxsw_core->bus_priv);
+-	if (reload)
+-		return;
+-reload_fail:
++
++	return;
++
++reload_fail_deinit:
++	devlink_unregister(devlink);
++	devlink_resources_unregister(devlink, NULL);
+ 	devlink_free(devlink);
+ 	mlxsw_core_driver_put(device_kind);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 6cb43dda8232..9883e48d8a21 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -2307,8 +2307,6 @@ static void mlxsw_sp_switchdev_event_work(struct work_struct *work)
+ 		break;
+ 	case SWITCHDEV_FDB_DEL_TO_DEVICE:
+ 		fdb_info = &switchdev_work->fdb_info;
+-		if (!fdb_info->added_by_user)
+-			break;
+ 		mlxsw_sp_port_fdb_set(mlxsw_sp_port, fdb_info, false);
+ 		break;
+ 	case SWITCHDEV_FDB_ADD_TO_BRIDGE: /* fall through */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index 90a2b53096e2..51bbb0e5b514 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1710,7 +1710,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 
+ 		cm_info->local_ip[0] = ntohl(iph->daddr);
+ 		cm_info->remote_ip[0] = ntohl(iph->saddr);
+-		cm_info->ip_version = TCP_IPV4;
++		cm_info->ip_version = QED_TCP_IPV4;
+ 
+ 		ip_hlen = (iph->ihl) * sizeof(u32);
+ 		*payload_len = ntohs(iph->tot_len) - ip_hlen;
+@@ -1730,7 +1730,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 			cm_info->remote_ip[i] =
+ 			    ntohl(ip6h->saddr.in6_u.u6_addr32[i]);
+ 		}
+-		cm_info->ip_version = TCP_IPV6;
++		cm_info->ip_version = QED_TCP_IPV6;
+ 
+ 		ip_hlen = sizeof(*ip6h);
+ 		*payload_len = ntohs(ip6h->payload_len);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index b5ce1581645f..79424e6f0976 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -138,23 +138,16 @@ static void qed_rdma_copy_gids(struct qed_rdma_qp *qp, __le32 *src_gid,
+ 
+ static enum roce_flavor qed_roce_mode_to_flavor(enum roce_mode roce_mode)
+ {
+-	enum roce_flavor flavor;
+-
+ 	switch (roce_mode) {
+ 	case ROCE_V1:
+-		flavor = PLAIN_ROCE;
+-		break;
++		return PLAIN_ROCE;
+ 	case ROCE_V2_IPV4:
+-		flavor = RROCE_IPV4;
+-		break;
++		return RROCE_IPV4;
+ 	case ROCE_V2_IPV6:
+-		flavor = ROCE_V2_IPV6;
+-		break;
++		return RROCE_IPV6;
+ 	default:
+-		flavor = MAX_ROCE_MODE;
+-		break;
++		return MAX_ROCE_FLAVOR;
+ 	}
+-	return flavor;
+ }
+ 
+ void qed_roce_free_cid_pair(struct qed_hwfn *p_hwfn, u16 cid)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+index 8de644b4721e..77b6248ad3b9 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+@@ -154,7 +154,7 @@ qed_set_pf_update_tunn_mode(struct qed_tunnel_info *p_tun,
+ static void qed_set_tunn_cls_info(struct qed_tunnel_info *p_tun,
+ 				  struct qed_tunnel_info *p_src)
+ {
+-	enum tunnel_clss type;
++	int type;
+ 
+ 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+ 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index be6ddde1a104..c4766e4ac485 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -413,7 +413,6 @@ static int qed_vf_pf_acquire(struct qed_hwfn *p_hwfn)
+ 	}
+ 
+ 	if (!p_iov->b_pre_fp_hsi &&
+-	    ETH_HSI_VER_MINOR &&
+ 	    (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR)) {
+ 		DP_INFO(p_hwfn,
+ 			"PF is using older fastpath HSI; %02x.%02x is configured\n",
+@@ -572,7 +571,7 @@ free_p_iov:
+ static void
+ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			   struct qed_tunn_update_type *p_src,
+-			   enum qed_tunn_clss mask, u8 *p_cls)
++			   enum qed_tunn_mode mask, u8 *p_cls)
+ {
+ 	if (p_src->b_update_mode) {
+ 		p_req->tun_mode_update_mask |= BIT(mask);
+@@ -587,7 +586,7 @@ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ static void
+ qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			 struct qed_tunn_update_type *p_src,
+-			 enum qed_tunn_clss mask,
++			 enum qed_tunn_mode mask,
+ 			 u8 *p_cls, struct qed_tunn_update_udp_port *p_port,
+ 			 u8 *p_update_port, u16 *p_udp_port)
+ {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 627c5cd8f786..f18087102d40 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7044,17 +7044,15 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
+ 	struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ 	struct net_device *dev = tp->dev;
+ 	u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
+-	int work_done= 0;
++	int work_done;
+ 	u16 status;
+ 
+ 	status = rtl_get_events(tp);
+ 	rtl_ack_events(tp, status & ~tp->event_slow);
+ 
+-	if (status & RTL_EVENT_NAPI_RX)
+-		work_done = rtl_rx(dev, tp, (u32) budget);
++	work_done = rtl_rx(dev, tp, (u32) budget);
+ 
+-	if (status & RTL_EVENT_NAPI_TX)
+-		rtl_tx(dev, tp);
++	rtl_tx(dev, tp);
+ 
+ 	if (status & tp->event_slow) {
+ 		enable_mask &= ~tp->event_slow;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index 5df1a608e566..541602d70c24 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -133,7 +133,7 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
+  */
+ int stmmac_mdio_reset(struct mii_bus *bus)
+ {
+-#if defined(CONFIG_STMMAC_PLATFORM)
++#if IS_ENABLED(CONFIG_STMMAC_PLATFORM)
+ 	struct net_device *ndev = bus->priv;
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	unsigned int mii_address = priv->hw->mii.addr;
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 16ec7af6ab7b..ba9df430fca6 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -966,6 +966,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 				 sizeof(struct yamdrv_ioctl_mcs));
+ 		if (IS_ERR(ym))
+ 			return PTR_ERR(ym);
++		if (ym->cmd != SIOCYAMSMCS)
++			return -EINVAL;
+ 		if (ym->bitrate > YAM_MAXBITRATE) {
+ 			kfree(ym);
+ 			return -EINVAL;
+@@ -981,6 +983,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 		if (copy_from_user(&yi, ifr->ifr_data, sizeof(struct yamdrv_ioctl_cfg)))
+ 			 return -EFAULT;
+ 
++		if (yi.cmd != SIOCYAMSCFG)
++			return -EINVAL;
+ 		if ((yi.cfg.mask & YAM_IOBASE) && netif_running(dev))
+ 			return -EINVAL;		/* Cannot change this parameter when up */
+ 		if ((yi.cfg.mask & YAM_IRQ) && netif_running(dev))
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index e95dd12edec4..023b8d0bf175 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -607,6 +607,9 @@ int asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 9e8ad372f419..2207f7a7d1ff 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -566,6 +566,9 @@ ax88179_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_MODE_RWLC;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index aeca484a75b8..2bb3a081ff10 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1401,19 +1401,10 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	pdata->wol = 0;
+-	if (wol->wolopts & WAKE_UCAST)
+-		pdata->wol |= WAKE_UCAST;
+-	if (wol->wolopts & WAKE_MCAST)
+-		pdata->wol |= WAKE_MCAST;
+-	if (wol->wolopts & WAKE_BCAST)
+-		pdata->wol |= WAKE_BCAST;
+-	if (wol->wolopts & WAKE_MAGIC)
+-		pdata->wol |= WAKE_MAGIC;
+-	if (wol->wolopts & WAKE_PHY)
+-		pdata->wol |= WAKE_PHY;
+-	if (wol->wolopts & WAKE_ARP)
+-		pdata->wol |= WAKE_ARP;
++	if (wol->wolopts & ~WAKE_ALL)
++		return -EINVAL;
++
++	pdata->wol = wol->wolopts;
+ 
+ 	device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 1b07bb5e110d..9a55d75f7f10 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -4503,6 +4503,9 @@ static int rtl8152_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ 	if (!rtl_can_wakeup(tp))
+ 		return -EOPNOTSUPP;
+ 
++	if (wol->wolopts & ~WAKE_ANY)
++		return -EINVAL;
++
+ 	ret = usb_autopm_get_interface(tp->intf);
+ 	if (ret < 0)
+ 		goto out_set_wol;
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index b64b1ee56d2d..ec287c9741e8 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -731,6 +731,9 @@ static int smsc75xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 06b4d290784d..262e7a3c23cb 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -774,6 +774,9 @@ static int smsc95xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc95xx_priv *pdata = (struct smsc95xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 9277a0f228df..35f39f23d881 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -421,6 +421,9 @@ sr_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= SR_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2b6ec927809e..500e2d8f10bc 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2162,8 +2162,9 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
+ 	/* Make sure no work handler is accessing the device */
+ 	flush_work(&vi->config_work);
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_detach(vi->dev);
+-	netif_tx_disable(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+ 	if (netif_running(vi->dev)) {
+@@ -2199,7 +2200,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_attach(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 80e2c8595c7c..58dd217811c8 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -519,7 +519,6 @@ struct mac80211_hwsim_data {
+ 	int channels, idx;
+ 	bool use_chanctx;
+ 	bool destroy_on_close;
+-	struct work_struct destroy_work;
+ 	u32 portid;
+ 	char alpha2[2];
+ 	const struct ieee80211_regdomain *regd;
+@@ -2812,8 +2811,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 	hwsim_radios_generation++;
+ 	spin_unlock_bh(&hwsim_radio_lock);
+ 
+-	if (idx > 0)
+-		hwsim_mcast_new_radio(idx, info, param);
++	hwsim_mcast_new_radio(idx, info, param);
+ 
+ 	return idx;
+ 
+@@ -3442,30 +3440,27 @@ static struct genl_family hwsim_genl_family __ro_after_init = {
+ 	.n_mcgrps = ARRAY_SIZE(hwsim_mcgrps),
+ };
+ 
+-static void destroy_radio(struct work_struct *work)
+-{
+-	struct mac80211_hwsim_data *data =
+-		container_of(work, struct mac80211_hwsim_data, destroy_work);
+-
+-	hwsim_radios_generation++;
+-	mac80211_hwsim_del_radio(data, wiphy_name(data->hw->wiphy), NULL);
+-}
+-
+ static void remove_user_radios(u32 portid)
+ {
+ 	struct mac80211_hwsim_data *entry, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(entry, tmp, &hwsim_radios, list) {
+ 		if (entry->destroy_on_close && entry->portid == portid) {
+-			list_del(&entry->list);
++			list_move(&entry->list, &list);
+ 			rhashtable_remove_fast(&hwsim_radios_rht, &entry->rht,
+ 					       hwsim_rht_params);
+-			INIT_WORK(&entry->destroy_work, destroy_radio);
+-			queue_work(hwsim_wq, &entry->destroy_work);
++			hwsim_radios_generation++;
+ 		}
+ 	}
+ 	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(entry, tmp, &list, list) {
++		list_del(&entry->list);
++		mac80211_hwsim_del_radio(entry, wiphy_name(entry->hw->wiphy),
++					 NULL);
++	}
+ }
+ 
+ static int mac80211_hwsim_netlink_notify(struct notifier_block *nb,
+@@ -3523,6 +3518,7 @@ static __net_init int hwsim_init_net(struct net *net)
+ static void __net_exit hwsim_exit_net(struct net *net)
+ {
+ 	struct mac80211_hwsim_data *data, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(data, tmp, &hwsim_radios, list) {
+@@ -3533,17 +3529,19 @@ static void __net_exit hwsim_exit_net(struct net *net)
+ 		if (data->netgroup == hwsim_net_get_netgroup(&init_net))
+ 			continue;
+ 
+-		list_del(&data->list);
++		list_move(&data->list, &list);
+ 		rhashtable_remove_fast(&hwsim_radios_rht, &data->rht,
+ 				       hwsim_rht_params);
+ 		hwsim_radios_generation++;
+-		spin_unlock_bh(&hwsim_radio_lock);
++	}
++	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(data, tmp, &list, list) {
++		list_del(&data->list);
+ 		mac80211_hwsim_del_radio(data,
+ 					 wiphy_name(data->hw->wiphy),
+ 					 NULL);
+-		spin_lock_bh(&hwsim_radio_lock);
+ 	}
+-	spin_unlock_bh(&hwsim_radio_lock);
+ 
+ 	ida_simple_remove(&hwsim_netgroup_ida, hwsim_net_get_netgroup(net));
+ }
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 43743c26c071..39bf85d0ade0 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1317,6 +1317,10 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+ 		if (priv->fw_ready) {
++			ret = lbs_suspend(priv);
++			if (ret)
++				return ret;
++
+ 			priv->power_up_on_resume = true;
+ 			if_sdio_power_off(card);
+ 		}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 3e18a68c2b03..054e66d93ed6 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2472,6 +2472,7 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
+ 		/* start qedi context */
+ 		spin_lock_init(&qedi->hba_lock);
+ 		spin_lock_init(&qedi->task_idx_lock);
++		mutex_init(&qedi->stats_lock);
+ 	}
+ 	qedi_ops->ll2->register_cb_ops(qedi->cdev, &qedi_ll2_cb_ops, qedi);
+ 	qedi_ops->ll2->start(qedi->cdev, &params);
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index ecb22749df0b..8cc015183043 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -2729,6 +2729,9 @@ static int qman_alloc_range(struct gen_pool *p, u32 *result, u32 cnt)
+ {
+ 	unsigned long addr;
+ 
++	if (!p)
++		return -ENODEV;
++
+ 	addr = gen_pool_alloc(p, cnt);
+ 	if (!addr)
+ 		return -ENOMEM;
+diff --git a/drivers/soc/fsl/qe/ucc.c b/drivers/soc/fsl/qe/ucc.c
+index c646d8713861..681f7d4b7724 100644
+--- a/drivers/soc/fsl/qe/ucc.c
++++ b/drivers/soc/fsl/qe/ucc.c
+@@ -626,7 +626,7 @@ static u32 ucc_get_tdm_sync_shift(enum comm_dir mode, u32 tdm_num)
+ {
+ 	u32 shift;
+ 
+-	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : RX_SYNC_SHIFT_BASE;
++	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : TX_SYNC_SHIFT_BASE;
+ 	shift -= tdm_num * 2;
+ 
+ 	return shift;
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index 500911f16498..5bad9fdec5f8 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -653,14 +653,6 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	bool approved;
+ 	u64 route;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send
+-	 * XDomain connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
+ 	depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
+ 		ICM_LINK_INFO_DEPTH_SHIFT;
+@@ -950,14 +942,6 @@ icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	if (pkg->hdr.packet_id)
+ 		return;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send device
+-	 * connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	route = get_route(pkg->route_hi, pkg->route_lo);
+ 	authorized = pkg->link_info & ICM_LINK_INFO_APPROVED;
+ 	security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
+@@ -1317,19 +1301,26 @@ static void icm_handle_notification(struct work_struct *work)
+ 
+ 	mutex_lock(&tb->lock);
+ 
+-	switch (n->pkg->code) {
+-	case ICM_EVENT_DEVICE_CONNECTED:
+-		icm->device_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_DEVICE_DISCONNECTED:
+-		icm->device_disconnected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_CONNECTED:
+-		icm->xdomain_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_DISCONNECTED:
+-		icm->xdomain_disconnected(tb, n->pkg);
+-		break;
++	/*
++	 * When the domain is stopped we flush its workqueue but before
++	 * that the root switch is removed. In that case we should treat
++	 * the queued events as being canceled.
++	 */
++	if (tb->root_switch) {
++		switch (n->pkg->code) {
++		case ICM_EVENT_DEVICE_CONNECTED:
++			icm->device_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_DEVICE_DISCONNECTED:
++			icm->device_disconnected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_CONNECTED:
++			icm->xdomain_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_DISCONNECTED:
++			icm->xdomain_disconnected(tb, n->pkg);
++			break;
++		}
+ 	}
+ 
+ 	mutex_unlock(&tb->lock);
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index f5a33e88e676..2d042150e41c 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -1147,5 +1147,5 @@ static void __exit nhi_unload(void)
+ 	tb_domain_exit();
+ }
+ 
+-fs_initcall(nhi_init);
++rootfs_initcall(nhi_init);
+ module_exit(nhi_unload);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index af842000188c..a25f6ea5c784 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -576,10 +576,6 @@ static int dw8250_probe(struct platform_device *pdev)
+ 	if (!data->skip_autocfg)
+ 		dw8250_setup_port(p);
+ 
+-#ifdef CONFIG_PM
+-	uart.capabilities |= UART_CAP_RPM;
+-#endif
+-
+ 	/* If we have a valid fifosize, try hooking up DMA */
+ 	if (p->fifosize) {
+ 		data->dma.rxconf.src_maxburst = p->fifosize / 4;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 560ed8711706..c4424cbd9943 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -30,6 +30,7 @@
+ #include <linux/sched/mm.h>
+ #include <linux/sched/signal.h>
+ #include <linux/interval_tree_generic.h>
++#include <linux/nospec.h>
+ 
+ #include "vhost.h"
+ 
+@@ -1362,6 +1363,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
+ 	if (idx >= d->nvqs)
+ 		return -ENOBUFS;
+ 
++	idx = array_index_nospec(idx, d->nvqs);
+ 	vq = d->vqs[idx];
+ 
+ 	mutex_lock(&vq->mutex);
+diff --git a/drivers/video/fbdev/pxa168fb.c b/drivers/video/fbdev/pxa168fb.c
+index def3a501acd6..d059d04c63ac 100644
+--- a/drivers/video/fbdev/pxa168fb.c
++++ b/drivers/video/fbdev/pxa168fb.c
+@@ -712,7 +712,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ 	/*
+ 	 * enable controller clock
+ 	 */
+-	clk_enable(fbi->clk);
++	clk_prepare_enable(fbi->clk);
+ 
+ 	pxa168fb_set_par(info);
+ 
+@@ -767,7 +767,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ failed_free_cmap:
+ 	fb_dealloc_cmap(&info->cmap);
+ failed_free_clk:
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ failed_free_fbmem:
+ 	dma_free_coherent(fbi->dev, info->fix.smem_len,
+ 			info->screen_base, fbi->fb_start_dma);
+@@ -807,7 +807,7 @@ static int pxa168fb_remove(struct platform_device *pdev)
+ 	dma_free_wc(fbi->dev, PAGE_ALIGN(info->fix.smem_len),
+ 		    info->screen_base, info->fix.smem_start);
+ 
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ 
+ 	framebuffer_release(info);
+ 
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index f3d0bef16d78..6127f0fcd62c 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -514,6 +514,8 @@ static int afs_alloc_anon_key(struct afs_cell *cell)
+  */
+ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ {
++	struct hlist_node **p;
++	struct afs_cell *pcell;
+ 	int ret;
+ 
+ 	if (!cell->anonymous_key) {
+@@ -534,7 +536,18 @@ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ 		return ret;
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_add_tail(&cell->proc_link, &net->proc_cells);
++	for (p = &net->proc_cells.first; *p; p = &(*p)->next) {
++		pcell = hlist_entry(*p, struct afs_cell, proc_link);
++		if (strcmp(cell->name, pcell->name) < 0)
++			break;
++	}
++
++	cell->proc_link.pprev = p;
++	cell->proc_link.next = *p;
++	rcu_assign_pointer(*p, &cell->proc_link.next);
++	if (cell->proc_link.next)
++		cell->proc_link.next->pprev = &cell->proc_link.next;
++
+ 	afs_dynroot_mkdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 	return 0;
+@@ -550,7 +563,7 @@ static void afs_deactivate_cell(struct afs_net *net, struct afs_cell *cell)
+ 	afs_proc_cell_remove(cell);
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_del_init(&cell->proc_link);
++	hlist_del_rcu(&cell->proc_link);
+ 	afs_dynroot_rmdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index 174e843f0633..7de7223843cc 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -286,7 +286,7 @@ int afs_dynroot_populate(struct super_block *sb)
+ 		return -ERESTARTSYS;
+ 
+ 	net->dynroot_sb = sb;
+-	list_for_each_entry(cell, &net->proc_cells, proc_link) {
++	hlist_for_each_entry(cell, &net->proc_cells, proc_link) {
+ 		ret = afs_dynroot_mkdir(net, cell);
+ 		if (ret < 0)
+ 			goto error;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 9778df135717..270d1caa27c6 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -241,7 +241,7 @@ struct afs_net {
+ 	seqlock_t		cells_lock;
+ 
+ 	struct mutex		proc_cells_lock;
+-	struct list_head	proc_cells;
++	struct hlist_head	proc_cells;
+ 
+ 	/* Known servers.  Theoretically each fileserver can only be in one
+ 	 * cell, but in practice, people create aliases and subsets and there's
+@@ -319,7 +319,7 @@ struct afs_cell {
+ 	struct afs_net		*net;
+ 	struct key		*anonymous_key;	/* anonymous user key for this cell */
+ 	struct work_struct	manager;	/* Manager for init/deinit/dns */
+-	struct list_head	proc_link;	/* /proc cell list link */
++	struct hlist_node	proc_link;	/* /proc cell list link */
+ #ifdef CONFIG_AFS_FSCACHE
+ 	struct fscache_cookie	*cache;		/* caching cookie */
+ #endif
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index e84fe822a960..107427688edd 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -87,7 +87,7 @@ static int __net_init afs_net_init(struct net *net_ns)
+ 	timer_setup(&net->cells_timer, afs_cells_timer, 0);
+ 
+ 	mutex_init(&net->proc_cells_lock);
+-	INIT_LIST_HEAD(&net->proc_cells);
++	INIT_HLIST_HEAD(&net->proc_cells);
+ 
+ 	seqlock_init(&net->fs_lock);
+ 	net->fs_servers = RB_ROOT;
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 476dcbb79713..9101f62707af 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -33,9 +33,8 @@ static inline struct afs_net *afs_seq2net_single(struct seq_file *m)
+ static int afs_proc_cells_show(struct seq_file *m, void *v)
+ {
+ 	struct afs_cell *cell = list_entry(v, struct afs_cell, proc_link);
+-	struct afs_net *net = afs_seq2net(m);
+ 
+-	if (v == &net->proc_cells) {
++	if (v == SEQ_START_TOKEN) {
+ 		/* display header on line 1 */
+ 		seq_puts(m, "USE NAME\n");
+ 		return 0;
+@@ -50,12 +49,12 @@ static void *afs_proc_cells_start(struct seq_file *m, loff_t *_pos)
+ 	__acquires(rcu)
+ {
+ 	rcu_read_lock();
+-	return seq_list_start_head(&afs_seq2net(m)->proc_cells, *_pos);
++	return seq_hlist_start_head_rcu(&afs_seq2net(m)->proc_cells, *_pos);
+ }
+ 
+ static void *afs_proc_cells_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	return seq_list_next(v, &afs_seq2net(m)->proc_cells, pos);
++	return seq_hlist_next_rcu(v, &afs_seq2net(m)->proc_cells, pos);
+ }
+ 
+ static void afs_proc_cells_stop(struct seq_file *m, void *v)
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 3aef8630a4b9..95d2c716e0da 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -681,6 +681,7 @@ int fat_count_free_clusters(struct super_block *sb)
+ 			if (ops->ent_get(&fatent) == FAT_ENT_FREE)
+ 				free++;
+ 		} while (fat_ent_next(sbi, &fatent));
++		cond_resched();
+ 	}
+ 	sbi->free_clusters = free;
+ 	sbi->free_clus_valid = 1;
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index 7869622af22a..7a5ee145c733 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -2946,6 +2946,7 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		if (map_end & (PAGE_SIZE - 1))
+ 			to = map_end & (PAGE_SIZE - 1);
+ 
++retry:
+ 		page = find_or_create_page(mapping, page_index, GFP_NOFS);
+ 		if (!page) {
+ 			ret = -ENOMEM;
+@@ -2954,11 +2955,18 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		}
+ 
+ 		/*
+-		 * In case PAGE_SIZE <= CLUSTER_SIZE, This page
+-		 * can't be dirtied before we CoW it out.
++		 * In case PAGE_SIZE <= CLUSTER_SIZE, we do not expect a dirty
++		 * page, so write it back.
+ 		 */
+-		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize)
+-			BUG_ON(PageDirty(page));
++		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
++			if (PageDirty(page)) {
++				/*
++				 * write_on_page will unlock the page on return
++				 */
++				ret = write_one_page(page);
++				goto retry;
++			}
++		}
+ 
+ 		if (!PageUptodate(page)) {
+ 			ret = block_read_full_page(page, ocfs2_get_block);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index e373e2e10f6a..83b930988e21 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -70,7 +70,7 @@
+  */
+ #ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
+ #define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
+-#define DATA_MAIN .data .data.[0-9a-zA-Z_]*
++#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..LPBX*
+ #define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
+ #define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]*
+ #define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*
+@@ -617,8 +617,8 @@
+ 
+ #define EXIT_DATA							\
+ 	*(.exit.data .exit.data.*)					\
+-	*(.fini_array)							\
+-	*(.dtors)							\
++	*(.fini_array .fini_array.*)					\
++	*(.dtors .dtors.*)						\
+ 	MEM_DISCARD(exit.data*)						\
+ 	MEM_DISCARD(exit.rodata*)
+ 
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index a8ba6b04152c..55e4be8b016b 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -78,6 +78,18 @@ extern void __chk_io_ptr(const volatile void __iomem *);
+ #include <linux/compiler-clang.h>
+ #endif
+ 
++/*
++ * Some architectures need to provide custom definitions of macros provided
++ * by linux/compiler-*.h, and can do so using asm/compiler.h. We include that
++ * conditionally rather than using an asm-generic wrapper in order to avoid
++ * build failures if any C compilation, which will include this file via an
++ * -include argument in c_flags, occurs prior to the asm-generic wrappers being
++ * generated.
++ */
++#ifdef CONFIG_HAVE_ARCH_COMPILER_H
++#include <asm/compiler.h>
++#endif
++
+ /*
+  * Generic compiler-dependent macros required for kernel
+  * build go below this comment. Actual compiler/compiler version
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 5382b5183b7e..82a953ec5ef0 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -94,6 +94,13 @@ struct gpio_irq_chip {
+ 	 */
+ 	unsigned int num_parents;
+ 
++	/**
++	 * @parent_irq:
++	 *
++	 * For use by gpiochip_set_cascaded_irqchip()
++	 */
++	unsigned int parent_irq;
++
+ 	/**
+ 	 * @parents:
+ 	 *
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 64f450593b54..b49bfc8e68b0 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1022,6 +1022,14 @@ static inline void *mlx5_frag_buf_get_wqe(struct mlx5_frag_buf_ctrl *fbc,
+ 		((fbc->frag_sz_m1 & ix) << fbc->log_stride);
+ }
+ 
++static inline u32
++mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
++{
++	u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
++
++	return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
++}
++
+ int mlx5_cmd_init(struct mlx5_core_dev *dev);
+ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
+ void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index dd2052f0efb7..11b7b8ab0696 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -215,6 +215,8 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
+ 		break;
+ 	case NFPROTO_ARP:
+ #ifdef CONFIG_NETFILTER_FAMILY_ARP
++		if (WARN_ON_ONCE(hook >= ARRAY_SIZE(net->nf.hooks_arp)))
++			break;
+ 		hook_head = rcu_dereference(net->nf.hooks_arp[hook]);
+ #endif
+ 		break;
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 3d4930528db0..2d31e22babd8 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -159,6 +159,10 @@ struct fib6_info {
+ 	struct rt6_info * __percpu	*rt6i_pcpu;
+ 	struct rt6_exception_bucket __rcu *rt6i_exception_bucket;
+ 
++#ifdef CONFIG_IPV6_ROUTER_PREF
++	unsigned long			last_probe;
++#endif
++
+ 	u32				fib6_metric;
+ 	u8				fib6_protocol;
+ 	u8				fib6_type;
+diff --git a/include/net/sctp/sm.h b/include/net/sctp/sm.h
+index 5ef1bad81ef5..9e3d32746430 100644
+--- a/include/net/sctp/sm.h
++++ b/include/net/sctp/sm.h
+@@ -347,7 +347,7 @@ static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
+ 	__u16 size;
+ 
+ 	size = ntohs(chunk->chunk_hdr->length);
+-	size -= sctp_datahdr_len(&chunk->asoc->stream);
++	size -= sctp_datachk_len(&chunk->asoc->stream);
+ 
+ 	return size;
+ }
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4fff00e9da8a..0a774b64fc29 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -56,7 +56,6 @@ enum rxrpc_peer_trace {
+ 	rxrpc_peer_new,
+ 	rxrpc_peer_processing,
+ 	rxrpc_peer_put,
+-	rxrpc_peer_queued_error,
+ };
+ 
+ enum rxrpc_conn_trace {
+@@ -257,8 +256,7 @@ enum rxrpc_tx_fail_trace {
+ 	EM(rxrpc_peer_got,			"GOT") \
+ 	EM(rxrpc_peer_new,			"NEW") \
+ 	EM(rxrpc_peer_processing,		"PRO") \
+-	EM(rxrpc_peer_put,			"PUT") \
+-	E_(rxrpc_peer_queued_error,		"QER")
++	E_(rxrpc_peer_put,			"PUT")
+ 
+ #define rxrpc_conn_traces \
+ 	EM(rxrpc_conn_got,			"GOT") \
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index ae22d93701db..fc072b7f839d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -8319,6 +8319,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ 			goto unlock;
+ 
+ 		list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
++			if (event->cpu != smp_processor_id())
++				continue;
+ 			if (event->attr.type != PERF_TYPE_TRACEPOINT)
+ 				continue;
+ 			if (event->attr.config != entry->type)
+@@ -9436,9 +9438,7 @@ static void free_pmu_context(struct pmu *pmu)
+ 	if (pmu->task_ctx_nr > perf_invalid_context)
+ 		return;
+ 
+-	mutex_lock(&pmus_lock);
+ 	free_percpu(pmu->pmu_cpu_context);
+-	mutex_unlock(&pmus_lock);
+ }
+ 
+ /*
+@@ -9694,12 +9694,8 @@ EXPORT_SYMBOL_GPL(perf_pmu_register);
+ 
+ void perf_pmu_unregister(struct pmu *pmu)
+ {
+-	int remove_device;
+-
+ 	mutex_lock(&pmus_lock);
+-	remove_device = pmu_bus_running;
+ 	list_del_rcu(&pmu->entry);
+-	mutex_unlock(&pmus_lock);
+ 
+ 	/*
+ 	 * We dereference the pmu list under both SRCU and regular RCU, so
+@@ -9711,13 +9707,14 @@ void perf_pmu_unregister(struct pmu *pmu)
+ 	free_percpu(pmu->pmu_disable_count);
+ 	if (pmu->type >= PERF_TYPE_MAX)
+ 		idr_remove(&pmu_idr, pmu->type);
+-	if (remove_device) {
++	if (pmu_bus_running) {
+ 		if (pmu->nr_addr_filters)
+ 			device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
+ 		device_del(pmu->dev);
+ 		put_device(pmu->dev);
+ 	}
+ 	free_pmu_context(pmu);
++	mutex_unlock(&pmus_lock);
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_unregister);
+ 
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 0e4cd64ad2c0..654977862b06 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -260,7 +260,7 @@ static void test_cycle_work(struct work_struct *work)
+ {
+ 	struct test_cycle *cycle = container_of(work, typeof(*cycle), work);
+ 	struct ww_acquire_ctx ctx;
+-	int err;
++	int err, erra = 0;
+ 
+ 	ww_acquire_init(&ctx, &ww_class);
+ 	ww_mutex_lock(&cycle->a_mutex, &ctx);
+@@ -270,17 +270,19 @@ static void test_cycle_work(struct work_struct *work)
+ 
+ 	err = ww_mutex_lock(cycle->b_mutex, &ctx);
+ 	if (err == -EDEADLK) {
++		err = 0;
+ 		ww_mutex_unlock(&cycle->a_mutex);
+ 		ww_mutex_lock_slow(cycle->b_mutex, &ctx);
+-		err = ww_mutex_lock(&cycle->a_mutex, &ctx);
++		erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
+ 	}
+ 
+ 	if (!err)
+ 		ww_mutex_unlock(cycle->b_mutex);
+-	ww_mutex_unlock(&cycle->a_mutex);
++	if (!erra)
++		ww_mutex_unlock(&cycle->a_mutex);
+ 	ww_acquire_fini(&ctx);
+ 
+-	cycle->result = err;
++	cycle->result = err ?: erra;
+ }
+ 
+ static int __test_cycle(unsigned int nthreads)
+diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
+index 6a473709e9b6..7405c9d89d65 100644
+--- a/mm/gup_benchmark.c
++++ b/mm/gup_benchmark.c
+@@ -19,7 +19,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
+ 		struct gup_benchmark *gup)
+ {
+ 	ktime_t start_time, end_time;
+-	unsigned long i, nr, nr_pages, addr, next;
++	unsigned long i, nr_pages, addr, next;
++	int nr;
+ 	struct page **pages;
+ 
+ 	nr_pages = gup->size / PAGE_SIZE;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 2a55289ee9f1..f49eb9589d73 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1415,7 +1415,7 @@ retry:
+ 				 * we encounter them after the rest of the list
+ 				 * is processed.
+ 				 */
+-				if (PageTransHuge(page)) {
++				if (PageTransHuge(page) && !PageHuge(page)) {
+ 					lock_page(page);
+ 					rc = split_huge_page_to_list(page, from);
+ 					unlock_page(page);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index fc0436407471..03822f86f288 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,17 +386,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
+-
+-	/*
+-	 * Make sure we apply some minimal pressure on default priority
+-	 * even on small cgroups. Stale objects are not only consuming memory
+-	 * by themselves, but can also hold a reference to a dying cgroup,
+-	 * preventing it from being reclaimed. A dying cgroup with all
+-	 * corresponding structures like per-cpu stats and kmem caches
+-	 * can be really big, so it may lead to a significant waste of memory.
+-	 */
+-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
+-
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 8a80d48d89c4..1b9984f653dd 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2298,9 +2298,8 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	/* LE address type */
+ 	addr_type = le_addr_type(cp->addr.type);
+ 
+-	hci_remove_irk(hdev, &cp->addr.bdaddr, addr_type);
+-
+-	err = hci_remove_ltk(hdev, &cp->addr.bdaddr, addr_type);
++	/* Abort any ongoing SMP pairing. Removes ltk and irk if they exist. */
++	err = smp_cancel_and_remove_pairing(hdev, &cp->addr.bdaddr, addr_type);
+ 	if (err < 0) {
+ 		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_UNPAIR_DEVICE,
+ 					MGMT_STATUS_NOT_PAIRED, &rp,
+@@ -2314,8 +2313,6 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		goto done;
+ 	}
+ 
+-	/* Abort any ongoing SMP pairing */
+-	smp_cancel_pairing(conn);
+ 
+ 	/* Defer clearing up the connection parameters until closing to
+ 	 * give a chance of keeping them if a repairing happens.
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 3a7b0773536b..73f7211d0431 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2422,30 +2422,51 @@ unlock:
+ 	return ret;
+ }
+ 
+-void smp_cancel_pairing(struct hci_conn *hcon)
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type)
+ {
+-	struct l2cap_conn *conn = hcon->l2cap_data;
++	struct hci_conn *hcon;
++	struct l2cap_conn *conn;
+ 	struct l2cap_chan *chan;
+ 	struct smp_chan *smp;
++	int err;
++
++	err = hci_remove_ltk(hdev, bdaddr, addr_type);
++	hci_remove_irk(hdev, bdaddr, addr_type);
++
++	hcon = hci_conn_hash_lookup_le(hdev, bdaddr, addr_type);
++	if (!hcon)
++		goto done;
+ 
++	conn = hcon->l2cap_data;
+ 	if (!conn)
+-		return;
++		goto done;
+ 
+ 	chan = conn->smp;
+ 	if (!chan)
+-		return;
++		goto done;
+ 
+ 	l2cap_chan_lock(chan);
+ 
+ 	smp = chan->data;
+ 	if (smp) {
++		/* Set keys to NULL to make sure smp_failure() does not try to
++		 * remove and free already invalidated rcu list entries. */
++		smp->ltk = NULL;
++		smp->slave_ltk = NULL;
++		smp->remote_irk = NULL;
++
+ 		if (test_bit(SMP_FLAG_COMPLETE, &smp->flags))
+ 			smp_failure(conn, 0);
+ 		else
+ 			smp_failure(conn, SMP_UNSPECIFIED);
++		err = 0;
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++
++done:
++	return err;
+ }
+ 
+ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 0ff6247eaa6c..121edadd5f8d 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -181,7 +181,8 @@ enum smp_key_pref {
+ };
+ 
+ /* SMP Commands */
+-void smp_cancel_pairing(struct hci_conn *hcon);
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type);
+ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
+ 			     enum smp_key_pref key_pref);
+ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level);
+diff --git a/net/bpfilter/bpfilter_kern.c b/net/bpfilter/bpfilter_kern.c
+index f0fc182d3db7..d5dd6b8b4248 100644
+--- a/net/bpfilter/bpfilter_kern.c
++++ b/net/bpfilter/bpfilter_kern.c
+@@ -23,9 +23,11 @@ static void shutdown_umh(struct umh_info *info)
+ 
+ 	if (!info->pid)
+ 		return;
+-	tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID);
+-	if (tsk)
++	tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);
++	if (tsk) {
+ 		force_sig(SIGKILL, tsk);
++		put_task_struct(tsk);
++	}
+ 	fput(info->pipe_to_umh);
+ 	fput(info->pipe_from_umh);
+ 	info->pid = 0;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 920665dd92db..6059a47f5e0c 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1420,7 +1420,14 @@ static void br_multicast_query_received(struct net_bridge *br,
+ 		return;
+ 
+ 	br_multicast_update_query_timer(br, query, max_delay);
+-	br_multicast_mark_router(br, port);
++
++	/* Based on RFC4541, section 2.1.1 IGMP Forwarding Rules,
++	 * the arrival port for IGMP Queries where the source address
++	 * is 0.0.0.0 should not be added to router port list.
++	 */
++	if ((saddr->proto == htons(ETH_P_IP) && saddr->u.ip4) ||
++	    saddr->proto == htons(ETH_P_IPV6))
++		br_multicast_mark_router(br, port);
+ }
+ 
+ static int br_ip4_multicast_query(struct net_bridge *br,
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 9b16eaf33819..58240cc185e7 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -834,7 +834,8 @@ static unsigned int ip_sabotage_in(void *priv,
+ 				   struct sk_buff *skb,
+ 				   const struct nf_hook_state *state)
+ {
+-	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting) {
++	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting &&
++	    !netif_is_l3_master(skb->dev)) {
+ 		state->okfn(state->net, state->sk, skb);
+ 		return NF_STOLEN;
+ 	}
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 9938952c5c78..16f0eb0970c4 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -808,8 +808,9 @@ int skb_copy_and_csum_datagram_msg(struct sk_buff *skb,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE))
+-			netdev_rx_csum_fault(skb->dev);
++		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE) &&
++		    !skb->csum_complete_sw)
++			netdev_rx_csum_fault(NULL);
+ 	}
+ 	return 0;
+ fault:
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 6c04f1bf377d..548d0e615bc7 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2461,13 +2461,17 @@ roll_back:
+ 	return ret;
+ }
+ 
+-static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr)
++static int ethtool_set_per_queue(struct net_device *dev,
++				 void __user *useraddr, u32 sub_cmd)
+ {
+ 	struct ethtool_per_queue_op per_queue_opt;
+ 
+ 	if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
+ 		return -EFAULT;
+ 
++	if (per_queue_opt.sub_command != sub_cmd)
++		return -EINVAL;
++
+ 	switch (per_queue_opt.sub_command) {
+ 	case ETHTOOL_GCOALESCE:
+ 		return ethtool_get_per_queue_coalesce(dev, useraddr, &per_queue_opt);
+@@ -2838,7 +2842,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 		rc = ethtool_get_phy_stats(dev, useraddr);
+ 		break;
+ 	case ETHTOOL_PERQUEUE:
+-		rc = ethtool_set_per_queue(dev, useraddr);
++		rc = ethtool_set_per_queue(dev, useraddr, sub_cmd);
+ 		break;
+ 	case ETHTOOL_GLINKSETTINGS:
+ 		rc = ethtool_get_link_ksettings(dev, useraddr);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 18de39dbdc30..4b25fd14bc5a 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3480,6 +3480,11 @@ static int rtnl_fdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB delete only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+@@ -3584,6 +3589,11 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB add only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 3680912f056a..c45916b91a9c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1845,8 +1845,9 @@ int pskb_trim_rcsum_slow(struct sk_buff *skb, unsigned int len)
+ 	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		int delta = skb->len - len;
+ 
+-		skb->csum = csum_sub(skb->csum,
+-				     skb_checksum(skb, len, delta, 0));
++		skb->csum = csum_block_sub(skb->csum,
++					   skb_checksum(skb, len, delta, 0),
++					   len);
+ 	}
+ 	return __pskb_trim(skb, len);
+ }
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index d14d741fb05e..9d3bdce1ad8a 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -657,10 +657,14 @@ struct sk_buff *ip_check_defrag(struct net *net, struct sk_buff *skb, u32 user)
+ 	if (ip_is_fragment(&iph)) {
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+ 		if (skb) {
+-			if (!pskb_may_pull(skb, netoff + iph.ihl * 4))
+-				return skb;
+-			if (pskb_trim_rcsum(skb, netoff + len))
+-				return skb;
++			if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) {
++				kfree_skb(skb);
++				return NULL;
++			}
++			if (pskb_trim_rcsum(skb, netoff + len)) {
++				kfree_skb(skb);
++				return NULL;
++			}
+ 			memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ 			if (ip_defrag(net, skb, user))
+ 				return NULL;
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index cafb0506c8c9..33be09791c74 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -295,8 +295,6 @@ int mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb,
+ next_entry:
+ 			e++;
+ 		}
+-		e = 0;
+-		s_e = 0;
+ 
+ 		spin_lock_bh(lock);
+ 		list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index a12df801de94..2fe7e2713350 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2124,8 +2124,24 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 	/* Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 inet_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							inet_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++		 * so don't treat it as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ 
+ /* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index bcfc00e88756..f8de2482a529 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -67,6 +67,7 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/ipv4/xfrm4_mode_transport.c b/net/ipv4/xfrm4_mode_transport.c
+index 3d36644890bb..1ad2c2c4e250 100644
+--- a/net/ipv4/xfrm4_mode_transport.c
++++ b/net/ipv4/xfrm4_mode_transport.c
+@@ -46,7 +46,6 @@ static int xfrm4_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -54,8 +53,7 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 		skb->network_header = skb->transport_header;
+ 	}
+ 	ip_hdr(skb)->tot_len = htons(skb->len + ihl);
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3484c7020fd9..ac3de1aa1cd3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4930,8 +4930,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 
+ 		/* unicast address incl. temp addr */
+ 		list_for_each_entry(ifa, &idev->addr_list, if_list) {
+-			if (++ip_idx < s_ip_idx)
+-				continue;
++			if (ip_idx < s_ip_idx)
++				goto next;
+ 			err = inet6_fill_ifaddr(skb, ifa,
+ 						NETLINK_CB(cb->skb).portid,
+ 						cb->nlh->nlmsg_seq,
+@@ -4940,6 +4940,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 			if (err < 0)
+ 				break;
+ 			nl_dump_check_consistent(cb, nlmsg_hdr(skb));
++next:
++			ip_idx++;
+ 		}
+ 		break;
+ 	}
+diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
+index 547515e8450a..377717045f8f 100644
+--- a/net/ipv6/ip6_checksum.c
++++ b/net/ipv6/ip6_checksum.c
+@@ -88,8 +88,24 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
+ 	 * Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 ip6_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							ip6_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++	 * so don't treat it as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(udp6_csum_init);
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index f5b5b0574a2d..009b508127e6 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1184,10 +1184,6 @@ route_lookup:
+ 	}
+ 	skb_dst_set(skb, dst);
+ 
+-	if (encap_limit >= 0) {
+-		init_tel_txopt(&opt, encap_limit);
+-		ipv6_push_frag_opts(skb, &opt.ops, &proto);
+-	}
+ 	hop_limit = hop_limit ? : ip6_dst_hoplimit(dst);
+ 
+ 	/* Calculate max headroom for all the headers and adjust
+@@ -1202,6 +1198,11 @@ route_lookup:
+ 	if (err)
+ 		return err;
+ 
++	if (encap_limit >= 0) {
++		init_tel_txopt(&opt, encap_limit);
++		ipv6_push_frag_opts(skb, &opt.ops, &proto);
++	}
++
+ 	skb_push(skb, sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	ipv6h = ipv6_hdr(skb);
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index f60f310785fd..131440ea6b51 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2436,17 +2436,17 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml,
+ {
+ 	int err;
+ 
+-	/* callers have the socket lock and rtnl lock
+-	 * so no other readers or writers of iml or its sflist
+-	 */
++	write_lock_bh(&iml->sflock);
+ 	if (!iml->sflist) {
+ 		/* any-source empty exclude case */
+-		return ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++	} else {
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
++				iml->sflist->sl_count, iml->sflist->sl_addr, 0);
++		sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
++		iml->sflist = NULL;
+ 	}
+-	err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
+-		iml->sflist->sl_count, iml->sflist->sl_addr, 0);
+-	sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
+-	iml->sflist = NULL;
++	write_unlock_bh(&iml->sflock);
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 0ec273997d1d..673a4a932f2a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1732,10 +1732,9 @@ int ndisc_rcv(struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+-
+ 	switch (msg->icmph.icmp6_type) {
+ 	case NDISC_NEIGHBOUR_SOLICITATION:
++		memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+ 		ndisc_recv_ns(skb);
+ 		break;
+ 
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index e4d9e6976d3c..a452d99c9f52 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -585,8 +585,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ 	    fq->q.meat == fq->q.len &&
+ 	    nf_ct_frag6_reasm(fq, skb, dev))
+ 		ret = 0;
+-	else
+-		skb_dst_drop(skb);
+ 
+ out_unlock:
+ 	spin_unlock_bh(&fq->q.lock);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ed526e257da6..a243d5249b51 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -517,10 +517,11 @@ static void rt6_probe_deferred(struct work_struct *w)
+ 
+ static void rt6_probe(struct fib6_info *rt)
+ {
+-	struct __rt6_probe_work *work;
++	struct __rt6_probe_work *work = NULL;
+ 	const struct in6_addr *nh_gw;
+ 	struct neighbour *neigh;
+ 	struct net_device *dev;
++	struct inet6_dev *idev;
+ 
+ 	/*
+ 	 * Okay, this does not seem to be appropriate
+@@ -536,15 +537,12 @@ static void rt6_probe(struct fib6_info *rt)
+ 	nh_gw = &rt->fib6_nh.nh_gw;
+ 	dev = rt->fib6_nh.nh_dev;
+ 	rcu_read_lock_bh();
++	idev = __in6_dev_get(dev);
+ 	neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
+ 	if (neigh) {
+-		struct inet6_dev *idev;
+-
+ 		if (neigh->nud_state & NUD_VALID)
+ 			goto out;
+ 
+-		idev = __in6_dev_get(dev);
+-		work = NULL;
+ 		write_lock(&neigh->lock);
+ 		if (!(neigh->nud_state & NUD_VALID) &&
+ 		    time_after(jiffies,
+@@ -554,11 +552,13 @@ static void rt6_probe(struct fib6_info *rt)
+ 				__neigh_set_probe_once(neigh);
+ 		}
+ 		write_unlock(&neigh->lock);
+-	} else {
++	} else if (time_after(jiffies, rt->last_probe +
++				       idev->cnf.rtr_probe_interval)) {
+ 		work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ 	}
+ 
+ 	if (work) {
++		rt->last_probe = jiffies;
+ 		INIT_WORK(&work->work, rt6_probe_deferred);
+ 		work->target = *nh_gw;
+ 		dev_hold(dev);
+@@ -2792,6 +2792,8 @@ static int ip6_route_check_nh_onlink(struct net *net,
+ 	grt = ip6_nh_lookup_table(net, cfg, gw_addr, tbid, 0);
+ 	if (grt) {
+ 		if (!grt->dst.error &&
++		    /* ignore match if it is the default route */
++		    grt->from && !ipv6_addr_any(&grt->from->fib6_dst.addr) &&
+ 		    (grt->rt6i_flags & flags || dev != grt->dst.dev)) {
+ 			NL_SET_ERR_MSG(extack,
+ 				       "Nexthop has invalid gateway or device mismatch");
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 39d0cab919bb..4f2c7a196365 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -762,11 +762,9 @@ static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
+ 
+ 	ret = udpv6_queue_rcv_skb(sk, skb);
+ 
+-	/* a return value > 0 means to resubmit the input, but
+-	 * it wants the return to be -protocol, or 0
+-	 */
++	/* a return value > 0 means to resubmit the input */
+ 	if (ret > 0)
+-		return -ret;
++		return ret;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 841f4a07438e..9ef490dddcea 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -59,6 +59,7 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return -1;
+ 	}
+ 
+diff --git a/net/ipv6/xfrm6_mode_transport.c b/net/ipv6/xfrm6_mode_transport.c
+index 9ad07a91708e..3c29da5defe6 100644
+--- a/net/ipv6/xfrm6_mode_transport.c
++++ b/net/ipv6/xfrm6_mode_transport.c
+@@ -51,7 +51,6 @@ static int xfrm6_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -60,8 +59,7 @@ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 	}
+ 	ipv6_hdr(skb)->payload_len = htons(skb->len + ihl -
+ 					   sizeof(struct ipv6hdr));
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 5959ce9620eb..6a74080005cf 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -170,9 +170,11 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 
+ 	if (toobig && xfrm6_local_dontfrag(skb)) {
+ 		xfrm6_local_rxpmtu(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	} else if (!skb->ignore_df && toobig && skb->sk) {
+ 		xfrm_local_error(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	}
+ 
+diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
+index c0ac522b48a1..4ff89cb7c86f 100644
+--- a/net/llc/llc_conn.c
++++ b/net/llc/llc_conn.c
+@@ -734,6 +734,7 @@ void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk)
+ 	llc_sk(sk)->sap = sap;
+ 
+ 	spin_lock_bh(&sap->sk_lock);
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sap->sk_count++;
+ 	sk_nulls_add_node_rcu(sk, laddr_hb);
+ 	hlist_add_head(&llc->dev_hash_node, dev_hb);
+diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
+index ee56f18cad3f..21526630bf65 100644
+--- a/net/mac80211/mesh.h
++++ b/net/mac80211/mesh.h
+@@ -217,7 +217,8 @@ void mesh_rmc_free(struct ieee80211_sub_if_data *sdata);
+ int mesh_rmc_init(struct ieee80211_sub_if_data *sdata);
+ void ieee80211s_init(void);
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-			      struct sta_info *sta, struct sk_buff *skb);
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st);
+ void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_mesh_teardown_sdata(struct ieee80211_sub_if_data *sdata);
+ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index daf9db3c8f24..6950cd0bf594 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -295,15 +295,12 @@ int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
+ }
+ 
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-		struct sta_info *sta, struct sk_buff *skb)
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st)
+ {
+-	struct ieee80211_tx_info *txinfo = IEEE80211_SKB_CB(skb);
+-	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
++	struct ieee80211_tx_info *txinfo = st->info;
+ 	int failed;
+ 
+-	if (!ieee80211_is_data(hdr->frame_control))
+-		return;
+-
+ 	failed = !(txinfo->flags & IEEE80211_TX_STAT_ACK);
+ 
+ 	/* moving average, scaled to 100.
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index 9a6d7208bf4f..91d7c0cd1882 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -479,11 +479,6 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 	if (!skb)
+ 		return;
+ 
+-	if (dropped) {
+-		dev_kfree_skb_any(skb);
+-		return;
+-	}
+-
+ 	if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) {
+ 		u64 cookie = IEEE80211_SKB_CB(skb)->ack.cookie;
+ 		struct ieee80211_sub_if_data *sdata;
+@@ -506,6 +501,8 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 		}
+ 		rcu_read_unlock();
+ 
++		dev_kfree_skb_any(skb);
++	} else if (dropped) {
+ 		dev_kfree_skb_any(skb);
+ 	} else {
+ 		/* consumes skb */
+@@ -811,7 +808,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ 
+ 		rate_control_tx_status(local, sband, status);
+ 		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
+-			ieee80211s_update_metric(local, sta, skb);
++			ieee80211s_update_metric(local, sta, status);
+ 
+ 		if (!(info->flags & IEEE80211_TX_CTL_INJECTED) && acked)
+ 			ieee80211_frame_acked(sta, skb);
+@@ -972,6 +969,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		rate_control_tx_status(local, sband, status);
++		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
++			ieee80211s_update_metric(local, sta, status);
+ 	}
+ 
+ 	if (acked || noack_success) {
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 5cd5e6e5834e..6c647f425e05 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -16,6 +16,7 @@
+ #include "ieee80211_i.h"
+ #include "driver-ops.h"
+ #include "rate.h"
++#include "wme.h"
+ 
+ /* give usermode some time for retries in setting up the TDLS session */
+ #define TDLS_PEER_SETUP_TIMEOUT	(15 * HZ)
+@@ -1010,14 +1011,13 @@ ieee80211_tdls_prep_mgmt_packet(struct wiphy *wiphy, struct net_device *dev,
+ 	switch (action_code) {
+ 	case WLAN_TDLS_SETUP_REQUEST:
+ 	case WLAN_TDLS_SETUP_RESPONSE:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_BK);
+-		skb->priority = 2;
++		skb->priority = 256 + 2;
+ 		break;
+ 	default:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_VI);
+-		skb->priority = 5;
++		skb->priority = 256 + 5;
+ 		break;
+ 	}
++	skb_set_queue_mapping(skb, ieee80211_select_queue(sdata, skb));
+ 
+ 	/*
+ 	 * Set the WLAN_TDLS_TEARDOWN flag to indicate a teardown in progress.
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 9b3b069e418a..361f2f6cc839 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -1886,7 +1886,7 @@ static bool ieee80211_tx(struct ieee80211_sub_if_data *sdata,
+ 			sdata->vif.hw_queue[skb_get_queue_mapping(skb)];
+ 
+ 	if (invoke_tx_handlers_early(&tx))
+-		return false;
++		return true;
+ 
+ 	if (ieee80211_queue_skb(local, sdata, tx.sta, tx.skb))
+ 		return true;
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 8e67910185a0..1004fb5930de 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -1239,8 +1239,8 @@ static const struct nla_policy tcp_nla_policy[CTA_PROTOINFO_TCP_MAX+1] = {
+ #define TCP_NLATTR_SIZE	( \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))))
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)) + \
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)))
+ 
+ static int nlattr_to_tcp(struct nlattr *cda[], struct nf_conn *ct)
+ {
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 9873d734b494..8ad78b82c8e2 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -355,12 +355,11 @@ cont:
+ 
+ static void nft_rbtree_gc(struct work_struct *work)
+ {
++	struct nft_rbtree_elem *rbe, *rbe_end = NULL, *rbe_prev = NULL;
+ 	struct nft_set_gc_batch *gcb = NULL;
+-	struct rb_node *node, *prev = NULL;
+-	struct nft_rbtree_elem *rbe;
+ 	struct nft_rbtree *priv;
++	struct rb_node *node;
+ 	struct nft_set *set;
+-	int i;
+ 
+ 	priv = container_of(work, struct nft_rbtree, gc_work.work);
+ 	set  = nft_set_container_of(priv);
+@@ -371,7 +370,7 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+ 		if (nft_rbtree_interval_end(rbe)) {
+-			prev = node;
++			rbe_end = rbe;
+ 			continue;
+ 		}
+ 		if (!nft_set_elem_expired(&rbe->ext))
+@@ -379,29 +378,30 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		if (nft_set_elem_mark_busy(&rbe->ext))
+ 			continue;
+ 
++		if (rbe_prev) {
++			rb_erase(&rbe_prev->node, &priv->root);
++			rbe_prev = NULL;
++		}
+ 		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
+ 		if (!gcb)
+ 			break;
+ 
+ 		atomic_dec(&set->nelems);
+ 		nft_set_gc_batch_add(gcb, rbe);
++		rbe_prev = rbe;
+ 
+-		if (prev) {
+-			rbe = rb_entry(prev, struct nft_rbtree_elem, node);
++		if (rbe_end) {
+ 			atomic_dec(&set->nelems);
+-			nft_set_gc_batch_add(gcb, rbe);
+-			prev = NULL;
++			nft_set_gc_batch_add(gcb, rbe_end);
++			rb_erase(&rbe_end->node, &priv->root);
++			rbe_end = NULL;
+ 		}
+ 		node = rb_next(node);
+ 		if (!node)
+ 			break;
+ 	}
+-	if (gcb) {
+-		for (i = 0; i < gcb->head.cnt; i++) {
+-			rbe = gcb->elems[i];
+-			rb_erase(&rbe->node, &priv->root);
+-		}
+-	}
++	if (rbe_prev)
++		rb_erase(&rbe_prev->node, &priv->root);
+ 	write_seqcount_end(&priv->count);
+ 	write_unlock_bh(&priv->lock);
+ 
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 492ab0c36f7c..8b1ba43b1ece 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2990,7 +2990,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			 * is already present */
+ 			if (mac_proto != MAC_PROTO_NONE)
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_NONE;
++			mac_proto = MAC_PROTO_ETHERNET;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_POP_ETH:
+@@ -2998,7 +2998,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				return -EINVAL;
+ 			if (vlan_tci & htons(VLAN_TAG_PRESENT))
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_ETHERNET;
++			mac_proto = MAC_PROTO_NONE;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_PUSH_NSH:
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 59f17a2335f4..0e54ca0f4e9e 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1006,7 +1006,8 @@ static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
+ 	return ret;
+ }
+ 
+-static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
++static int rds_send_mprds_hash(struct rds_sock *rs,
++			       struct rds_connection *conn, int nonblock)
+ {
+ 	int hash;
+ 
+@@ -1022,10 +1023,16 @@ static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
+ 		 * used.  But if we are interrupted, we have to use the zero
+ 		 * c_path in case the connection ends up being non-MP capable.
+ 		 */
+-		if (conn->c_npaths == 0)
++		if (conn->c_npaths == 0) {
++			/* Cannot wait for the connection be made, so just use
++			 * the base c_path.
++			 */
++			if (nonblock)
++				return 0;
+ 			if (wait_event_interruptible(conn->c_hs_waitq,
+ 						     conn->c_npaths != 0))
+ 				hash = 0;
++		}
+ 		if (conn->c_npaths == 1)
+ 			hash = 0;
+ 	}
+@@ -1170,7 +1177,7 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 	}
+ 
+ 	if (conn->c_trans->t_mp_capable)
+-		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
++		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn, nonblock)];
+ 	else
+ 		cpath = &conn->c_path[0];
+ 
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 707630ab4713..330372c04940 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -293,7 +293,6 @@ struct rxrpc_peer {
+ 	struct hlist_node	hash_link;
+ 	struct rxrpc_local	*local;
+ 	struct hlist_head	error_targets;	/* targets for net error distribution */
+-	struct work_struct	error_distributor;
+ 	struct rb_root		service_conns;	/* Service connections */
+ 	struct list_head	keepalive_link;	/* Link in net->peer_keepalive[] */
+ 	time64_t		last_tx_at;	/* Last time packet sent here */
+@@ -304,8 +303,6 @@ struct rxrpc_peer {
+ 	unsigned int		maxdata;	/* data size (MTU - hdrsize) */
+ 	unsigned short		hdrsize;	/* header size (IP + UDP + RxRPC) */
+ 	int			debug_id;	/* debug ID for printks */
+-	int			error_report;	/* Net (+0) or local (+1000000) to distribute */
+-#define RXRPC_LOCAL_ERROR_OFFSET 1000000
+ 	struct sockaddr_rxrpc	srx;		/* remote address */
+ 
+ 	/* calculated RTT cache */
+@@ -449,8 +446,7 @@ struct rxrpc_connection {
+ 	spinlock_t		state_lock;	/* state-change lock */
+ 	enum rxrpc_conn_cache_state cache_state;
+ 	enum rxrpc_conn_proto_state state;	/* current state of connection */
+-	u32			local_abort;	/* local abort code */
+-	u32			remote_abort;	/* remote abort code */
++	u32			abort_code;	/* Abort code of connection abort */
+ 	int			debug_id;	/* debug ID for printks */
+ 	atomic_t		serial;		/* packet serial number counter */
+ 	unsigned int		hi_serial;	/* highest serial number received */
+@@ -460,8 +456,19 @@ struct rxrpc_connection {
+ 	u8			security_size;	/* security header size */
+ 	u8			security_ix;	/* security type */
+ 	u8			out_clientflag;	/* RXRPC_CLIENT_INITIATED if we are client */
++	short			error;		/* Local error code */
+ };
+ 
++static inline bool rxrpc_to_server(const struct rxrpc_skb_priv *sp)
++{
++	return sp->hdr.flags & RXRPC_CLIENT_INITIATED;
++}
++
++static inline bool rxrpc_to_client(const struct rxrpc_skb_priv *sp)
++{
++	return !rxrpc_to_server(sp);
++}
++
+ /*
+  * Flags in call->flags.
+  */
+@@ -1029,7 +1036,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
+  * peer_event.c
+  */
+ void rxrpc_error_report(struct sock *);
+-void rxrpc_peer_error_distributor(struct work_struct *);
+ void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+ 			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+ void rxrpc_peer_keepalive_worker(struct work_struct *);
+@@ -1048,7 +1054,6 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *);
+ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *);
+ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *);
+ void rxrpc_put_peer(struct rxrpc_peer *);
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *);
+ 
+ /*
+  * proc.c
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 9d1e298b784c..0e378d73e856 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -422,11 +422,11 @@ found_service:
+ 
+ 	case RXRPC_CONN_REMOTELY_ABORTED:
+ 		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
+-					  conn->remote_abort, -ECONNABORTED);
++					  conn->abort_code, conn->error);
+ 		break;
+ 	case RXRPC_CONN_LOCALLY_ABORTED:
+ 		rxrpc_abort_call("CON", call, sp->hdr.seq,
+-				 conn->local_abort, -ECONNABORTED);
++				 conn->abort_code, conn->error);
+ 		break;
+ 	default:
+ 		BUG();
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f6734d8cb01a..ed69257203c2 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -400,7 +400,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ 	rcu_assign_pointer(conn->channels[chan].call, call);
+ 
+ 	spin_lock(&conn->params.peer->lock);
+-	hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
+ 	spin_unlock(&conn->params.peer->lock);
+ 
+ 	_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index 5736f643c516..0be19132202b 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -709,8 +709,8 @@ int rxrpc_connect_call(struct rxrpc_call *call,
+ 	}
+ 
+ 	spin_lock_bh(&call->conn->params.peer->lock);
+-	hlist_add_head(&call->error_link,
+-		       &call->conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link,
++			   &call->conn->params.peer->error_targets);
+ 	spin_unlock_bh(&call->conn->params.peer->lock);
+ 
+ out:
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 3fde001fcc39..5e7c8239e703 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -126,7 +126,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 
+ 	switch (chan->last_type) {
+ 	case RXRPC_PACKET_TYPE_ABORT:
+-		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->local_abort);
++		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code);
+ 		break;
+ 	case RXRPC_PACKET_TYPE_ACK:
+ 		trace_rxrpc_tx_ack(NULL, serial, chan->last_seq, 0,
+@@ -148,13 +148,12 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+  * pass a connection-level abort onto all calls on that connection
+  */
+ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+-			      enum rxrpc_call_completion compl,
+-			      u32 abort_code, int error)
++			      enum rxrpc_call_completion compl)
+ {
+ 	struct rxrpc_call *call;
+ 	int i;
+ 
+-	_enter("{%d},%x", conn->debug_id, abort_code);
++	_enter("{%d},%x", conn->debug_id, conn->abort_code);
+ 
+ 	spin_lock(&conn->channel_lock);
+ 
+@@ -167,9 +166,11 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+ 				trace_rxrpc_abort(call->debug_id,
+ 						  "CON", call->cid,
+ 						  call->call_id, 0,
+-						  abort_code, error);
++						  conn->abort_code,
++						  conn->error);
+ 			if (rxrpc_set_call_completion(call, compl,
+-						      abort_code, error))
++						      conn->abort_code,
++						      conn->error))
+ 				rxrpc_notify_socket(call);
+ 		}
+ 	}
+@@ -202,10 +203,12 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 		return 0;
+ 	}
+ 
++	conn->error = error;
++	conn->abort_code = abort_code;
+ 	conn->state = RXRPC_CONN_LOCALLY_ABORTED;
+ 	spin_unlock_bh(&conn->state_lock);
+ 
+-	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, abort_code, error);
++	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED);
+ 
+ 	msg.msg_name	= &conn->params.peer->srx.transport;
+ 	msg.msg_namelen	= conn->params.peer->srx.transport_len;
+@@ -224,7 +227,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 	whdr._rsvd	= 0;
+ 	whdr.serviceId	= htons(conn->service_id);
+ 
+-	word		= htonl(conn->local_abort);
++	word		= htonl(conn->abort_code);
+ 
+ 	iov[0].iov_base	= &whdr;
+ 	iov[0].iov_len	= sizeof(whdr);
+@@ -235,7 +238,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 
+ 	serial = atomic_inc_return(&conn->serial);
+ 	whdr.serial = htonl(serial);
+-	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->local_abort);
++	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code);
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ 	if (ret < 0) {
+@@ -308,9 +311,10 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ 		abort_code = ntohl(wtmp);
+ 		_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);
+ 
++		conn->error = -ECONNABORTED;
++		conn->abort_code = abort_code;
+ 		conn->state = RXRPC_CONN_REMOTELY_ABORTED;
+-		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED,
+-				  abort_code, -ECONNABORTED);
++		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED);
+ 		return -ECONNABORTED;
+ 
+ 	case RXRPC_PACKET_TYPE_CHALLENGE:
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 4c77a78a252a..e0d6d0fb7426 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -99,7 +99,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+ 	k.epoch	= sp->hdr.epoch;
+ 	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
+ 
+-	if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
++	if (rxrpc_to_server(sp)) {
+ 		/* We need to look up service connections by the full protocol
+ 		 * parameter set.  We look up the peer first as an intermediate
+ 		 * step and then the connection from the peer's tree.
+@@ -214,7 +214,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
+ 	call->peer->cong_cwnd = call->cong_cwnd;
+ 
+ 	spin_lock_bh(&conn->params.peer->lock);
+-	hlist_del_init(&call->error_link);
++	hlist_del_rcu(&call->error_link);
+ 	spin_unlock_bh(&conn->params.peer->lock);
+ 
+ 	if (rxrpc_is_client_call(call))
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 608d078a4981..a81240845224 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -216,10 +216,11 @@ static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb,
+ /*
+  * Apply a hard ACK by advancing the Tx window.
+  */
+-static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
++static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 				   struct rxrpc_ack_summary *summary)
+ {
+ 	struct sk_buff *skb, *list = NULL;
++	bool rot_last = false;
+ 	int ix;
+ 	u8 annotation;
+ 
+@@ -243,15 +244,17 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = list;
+ 		list = skb;
+ 
+-		if (annotation & RXRPC_TX_ANNO_LAST)
++		if (annotation & RXRPC_TX_ANNO_LAST) {
+ 			set_bit(RXRPC_CALL_TX_LAST, &call->flags);
++			rot_last = true;
++		}
+ 		if ((annotation & RXRPC_TX_ANNO_MASK) != RXRPC_TX_ANNO_ACK)
+ 			summary->nr_rot_new_acks++;
+ 	}
+ 
+ 	spin_unlock(&call->lock);
+ 
+-	trace_rxrpc_transmit(call, (test_bit(RXRPC_CALL_TX_LAST, &call->flags) ?
++	trace_rxrpc_transmit(call, (rot_last ?
+ 				    rxrpc_transmit_rotate_last :
+ 				    rxrpc_transmit_rotate));
+ 	wake_up(&call->waitq);
+@@ -262,6 +265,8 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = NULL;
+ 		rxrpc_free_skb(skb, rxrpc_skb_tx_freed);
+ 	}
++
++	return rot_last;
+ }
+ 
+ /*
+@@ -273,23 +278,26 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 			       const char *abort_why)
+ {
++	unsigned int state;
+ 
+ 	ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags));
+ 
+ 	write_lock(&call->state_lock);
+ 
+-	switch (call->state) {
++	state = call->state;
++	switch (state) {
+ 	case RXRPC_CALL_CLIENT_SEND_REQUEST:
+ 	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
+ 		if (reply_begun)
+-			call->state = RXRPC_CALL_CLIENT_RECV_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_RECV_REPLY;
+ 		else
+-			call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
+ 		break;
+ 
+ 	case RXRPC_CALL_SERVER_AWAIT_ACK:
+ 		__rxrpc_call_completed(call);
+ 		rxrpc_notify_socket(call);
++		state = call->state;
+ 		break;
+ 
+ 	default:
+@@ -297,11 +305,10 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 	}
+ 
+ 	write_unlock(&call->state_lock);
+-	if (call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY) {
++	if (state == RXRPC_CALL_CLIENT_AWAIT_REPLY)
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_await_reply);
+-	} else {
++	else
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_end);
+-	}
+ 	_leave(" = ok");
+ 	return true;
+ 
+@@ -332,11 +339,11 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call)
+ 		trace_rxrpc_timer(call, rxrpc_timer_init_for_reply, now);
+ 	}
+ 
+-	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags))
+-		rxrpc_rotate_tx_window(call, top, &summary);
+ 	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_proto_abort("TXL", call, top);
+-		return false;
++		if (!rxrpc_rotate_tx_window(call, top, &summary)) {
++			rxrpc_proto_abort("TXL", call, top);
++			return false;
++		}
+ 	}
+ 	if (!rxrpc_end_tx_phase(call, true, "ETD"))
+ 		return false;
+@@ -616,13 +623,14 @@ static void rxrpc_input_requested_ack(struct rxrpc_call *call,
+ 		if (!skb)
+ 			continue;
+ 
++		sent_at = skb->tstamp;
++		smp_rmb(); /* Read timestamp before serial. */
+ 		sp = rxrpc_skb(skb);
+ 		if (sp->hdr.serial != orig_serial)
+ 			continue;
+-		smp_rmb();
+-		sent_at = skb->tstamp;
+ 		goto found;
+ 	}
++
+ 	return;
+ 
+ found:
+@@ -854,6 +862,16 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				  rxrpc_propose_ack_respond_to_ack);
+ 	}
+ 
++	/* Discard any out-of-order or duplicate ACKs. */
++	if (before_eq(sp->hdr.serial, call->acks_latest)) {
++		_debug("discard ACK %d <= %d",
++		       sp->hdr.serial, call->acks_latest);
++		return;
++	}
++	call->acks_latest_ts = skb->tstamp;
++	call->acks_latest = sp->hdr.serial;
++
++	/* Parse rwind and mtu sizes if provided. */
+ 	ioffset = offset + nr_acks + 3;
+ 	if (skb->len >= ioffset + sizeof(buf.info)) {
+ 		if (skb_copy_bits(skb, ioffset, &buf.info, sizeof(buf.info)) < 0)
+@@ -875,23 +893,18 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 		return;
+ 	}
+ 
+-	/* Discard any out-of-order or duplicate ACKs. */
+-	if (before_eq(sp->hdr.serial, call->acks_latest)) {
+-		_debug("discard ACK %d <= %d",
+-		       sp->hdr.serial, call->acks_latest);
+-		return;
+-	}
+-	call->acks_latest_ts = skb->tstamp;
+-	call->acks_latest = sp->hdr.serial;
+-
+ 	if (before(hard_ack, call->tx_hard_ack) ||
+ 	    after(hard_ack, call->tx_top))
+ 		return rxrpc_proto_abort("AKW", call, 0);
+ 	if (nr_acks > call->tx_top - hard_ack)
+ 		return rxrpc_proto_abort("AKN", call, 0);
+ 
+-	if (after(hard_ack, call->tx_hard_ack))
+-		rxrpc_rotate_tx_window(call, hard_ack, &summary);
++	if (after(hard_ack, call->tx_hard_ack)) {
++		if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) {
++			rxrpc_end_tx_phase(call, false, "ETA");
++			return;
++		}
++	}
+ 
+ 	if (nr_acks > 0) {
+ 		if (skb_copy_bits(skb, offset, buf.acks, nr_acks) < 0)
+@@ -900,11 +913,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				      &summary);
+ 	}
+ 
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_end_tx_phase(call, false, "ETA");
+-		return;
+-	}
+-
+ 	if (call->rxtx_annotations[call->tx_top & RXRPC_RXTX_BUFF_MASK] &
+ 	    RXRPC_TX_ANNO_LAST &&
+ 	    summary.nr_acks == call->tx_top - hard_ack &&
+@@ -926,8 +934,7 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
+ 
+ 	_proto("Rx ACKALL %%%u", sp->hdr.serial);
+ 
+-	rxrpc_rotate_tx_window(call, call->tx_top, &summary);
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags))
++	if (rxrpc_rotate_tx_window(call, call->tx_top, &summary))
+ 		rxrpc_end_tx_phase(call, false, "ETL");
+ }
+ 
+@@ -1137,6 +1144,9 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		return;
+ 	}
+ 
++	if (skb->tstamp == 0)
++		skb->tstamp = ktime_get_real();
++
+ 	rxrpc_new_skb(skb, rxrpc_skb_rx_received);
+ 
+ 	_net("recv skb %p", skb);
+@@ -1171,10 +1181,6 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	trace_rxrpc_rx_packet(sp);
+ 
+-	_net("Rx RxRPC %s ep=%x call=%x:%x",
+-	     sp->hdr.flags & RXRPC_CLIENT_INITIATED ? "ToServer" : "ToClient",
+-	     sp->hdr.epoch, sp->hdr.cid, sp->hdr.callNumber);
+-
+ 	if (sp->hdr.type >= RXRPC_N_PACKET_TYPES ||
+ 	    !((RXRPC_SUPPORTED_PACKET_TYPES >> sp->hdr.type) & 1)) {
+ 		_proto("Rx Bad Packet Type %u", sp->hdr.type);
+@@ -1183,13 +1189,13 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	switch (sp->hdr.type) {
+ 	case RXRPC_PACKET_TYPE_VERSION:
+-		if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED))
++		if (rxrpc_to_client(sp))
+ 			goto discard;
+ 		rxrpc_post_packet_to_local(local, skb);
+ 		goto out;
+ 
+ 	case RXRPC_PACKET_TYPE_BUSY:
+-		if (sp->hdr.flags & RXRPC_CLIENT_INITIATED)
++		if (rxrpc_to_server(sp))
+ 			goto discard;
+ 		/* Fall through */
+ 
+@@ -1269,7 +1275,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		call = rcu_dereference(chan->call);
+ 
+ 		if (sp->hdr.callNumber > chan->call_id) {
+-			if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED)) {
++			if (rxrpc_to_client(sp)) {
+ 				rcu_read_unlock();
+ 				goto reject_packet;
+ 			}
+@@ -1292,7 +1298,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 	}
+ 
+ 	if (!call || atomic_read(&call->usage) == 0) {
+-		if (!(sp->hdr.type & RXRPC_CLIENT_INITIATED) ||
++		if (rxrpc_to_client(sp) ||
+ 		    sp->hdr.callNumber == 0 ||
+ 		    sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
+ 			goto bad_message_unlock;
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index b493e6b62740..386dc1f20c73 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -135,10 +135,10 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 	}
+ 
+ 	switch (local->srx.transport.family) {
+-	case AF_INET:
+-		/* we want to receive ICMP errors */
++	case AF_INET6:
++		/* we want to receive ICMPv6 errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -146,19 +146,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IP_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
++		opt = IPV6_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
+-		break;
+ 
+-	case AF_INET6:
++		/* Fall through and set IPv4 options too otherwise we don't get
++		 * errors from IPv4 packets sent through the IPv6 socket.
++		 */
++
++	case AF_INET:
+ 		/* we want to receive ICMP errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -166,13 +169,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IPV6_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
++		opt = IP_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
++
++		/* We want receive timestamps. */
++		opt = 1;
++		ret = kernel_setsockopt(local->socket, SOL_SOCKET, SO_TIMESTAMPNS,
++					(char *)&opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
+ 		break;
+ 
+ 	default:
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 4774c8f5634d..6ac21bb2071d 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -124,7 +124,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	struct kvec iov[2];
+ 	rxrpc_serial_t serial;
+ 	rxrpc_seq_t hard_ack, top;
+-	ktime_t now;
+ 	size_t len, n;
+ 	int ret;
+ 	u8 reason;
+@@ -196,9 +195,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 		/* We need to stick a time in before we send the packet in case
+ 		 * the reply gets back before kernel_sendmsg() completes - but
+ 		 * asking UDP to send the packet can take a relatively long
+-		 * time, so we update the time after, on the assumption that
+-		 * the packet transmission is more likely to happen towards the
+-		 * end of the kernel_sendmsg() call.
++		 * time.
+ 		 */
+ 		call->ping_time = ktime_get_real();
+ 		set_bit(RXRPC_CALL_PINGING, &call->flags);
+@@ -206,9 +203,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	}
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+-	now = ktime_get_real();
+-	if (ping)
+-		call->ping_time = now;
+ 	conn->params.peer->last_tx_at = ktime_get_seconds();
+ 	if (ret < 0)
+ 		trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+@@ -357,8 +351,14 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 
+ 	/* If our RTT cache needs working on, request an ACK.  Also request
+ 	 * ACKs if a DATA packet appears to have been lost.
++	 *
++	 * However, we mustn't request an ACK on the last reply packet of a
++	 * service call, lest OpenAFS incorrectly send us an ACK with some
++	 * soft-ACKs in it and then never follow up with a proper hard ACK.
+ 	 */
+-	if (!(sp->hdr.flags & RXRPC_LAST_PACKET) &&
++	if ((!(sp->hdr.flags & RXRPC_LAST_PACKET) ||
++	     rxrpc_to_server(sp)
++	     ) &&
+ 	    (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) ||
+ 	     retrans ||
+ 	     call->cong_mode == RXRPC_CALL_SLOW_START ||
+@@ -384,6 +384,11 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 		goto send_fragmentable;
+ 
+ 	down_read(&conn->params.local->defrag_sem);
++
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	/* send the packet by UDP
+ 	 * - returns -EMSGSIZE if UDP would have to fragment the packet
+ 	 *   to go out of the interface
+@@ -404,12 +409,8 @@ done:
+ 	trace_rxrpc_tx_data(call, sp->hdr.seq, serial, whdr.flags,
+ 			    retrans, lost);
+ 	if (ret >= 0) {
+-		ktime_t now = ktime_get_real();
+-		skb->tstamp = now;
+-		smp_wmb();
+-		sp->hdr.serial = serial;
+ 		if (whdr.flags & RXRPC_REQUEST_ACK) {
+-			call->peer->rtt_last_req = now;
++			call->peer->rtt_last_req = skb->tstamp;
+ 			trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
+ 			if (call->peer->rtt_usage > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+@@ -448,6 +449,10 @@ send_fragmentable:
+ 
+ 	down_write(&conn->params.local->defrag_sem);
+ 
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	switch (conn->params.local->srx.transport.family) {
+ 	case AF_INET:
+ 		opt = IP_PMTUDISC_DONT;
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 4f9da2f51c69..f3e6fc670da2 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -23,6 +23,8 @@
+ #include "ar-internal.h"
+ 
+ static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *);
++static void rxrpc_distribute_error(struct rxrpc_peer *, int,
++				   enum rxrpc_call_completion);
+ 
+ /*
+  * Find the peer associated with an ICMP packet.
+@@ -194,8 +196,6 @@ void rxrpc_error_report(struct sock *sk)
+ 	rcu_read_unlock();
+ 	rxrpc_free_skb(skb, rxrpc_skb_rx_freed);
+ 
+-	/* The ref we obtained is passed off to the work item */
+-	__rxrpc_queue_peer_error(peer);
+ 	_leave("");
+ }
+ 
+@@ -205,6 +205,7 @@ void rxrpc_error_report(struct sock *sk)
+ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 			      struct sock_exterr_skb *serr)
+ {
++	enum rxrpc_call_completion compl = RXRPC_CALL_NETWORK_ERROR;
+ 	struct sock_extended_err *ee;
+ 	int err;
+ 
+@@ -255,7 +256,7 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 	case SO_EE_ORIGIN_NONE:
+ 	case SO_EE_ORIGIN_LOCAL:
+ 		_proto("Rx Received local error { error=%d }", err);
+-		err += RXRPC_LOCAL_ERROR_OFFSET;
++		compl = RXRPC_CALL_LOCAL_ERROR;
+ 		break;
+ 
+ 	case SO_EE_ORIGIN_ICMP6:
+@@ -264,48 +265,23 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 		break;
+ 	}
+ 
+-	peer->error_report = err;
++	rxrpc_distribute_error(peer, err, compl);
+ }
+ 
+ /*
+- * Distribute an error that occurred on a peer
++ * Distribute an error that occurred on a peer.
+  */
+-void rxrpc_peer_error_distributor(struct work_struct *work)
++static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
++				   enum rxrpc_call_completion compl)
+ {
+-	struct rxrpc_peer *peer =
+-		container_of(work, struct rxrpc_peer, error_distributor);
+ 	struct rxrpc_call *call;
+-	enum rxrpc_call_completion compl;
+-	int error;
+-
+-	_enter("");
+-
+-	error = READ_ONCE(peer->error_report);
+-	if (error < RXRPC_LOCAL_ERROR_OFFSET) {
+-		compl = RXRPC_CALL_NETWORK_ERROR;
+-	} else {
+-		compl = RXRPC_CALL_LOCAL_ERROR;
+-		error -= RXRPC_LOCAL_ERROR_OFFSET;
+-	}
+ 
+-	_debug("ISSUE ERROR %s %d", rxrpc_call_completions[compl], error);
+-
+-	spin_lock_bh(&peer->lock);
+-
+-	while (!hlist_empty(&peer->error_targets)) {
+-		call = hlist_entry(peer->error_targets.first,
+-				   struct rxrpc_call, error_link);
+-		hlist_del_init(&call->error_link);
++	hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) {
+ 		rxrpc_see_call(call);
+-
+-		if (rxrpc_set_call_completion(call, compl, 0, -error))
++		if (call->state < RXRPC_CALL_COMPLETE &&
++		    rxrpc_set_call_completion(call, compl, 0, -error))
+ 			rxrpc_notify_socket(call);
+ 	}
+-
+-	spin_unlock_bh(&peer->lock);
+-
+-	rxrpc_put_peer(peer);
+-	_leave("");
+ }
+ 
+ /*
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 24ec7cdcf332..ef4c2e8a35cc 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -222,8 +222,6 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ 		atomic_set(&peer->usage, 1);
+ 		peer->local = local;
+ 		INIT_HLIST_HEAD(&peer->error_targets);
+-		INIT_WORK(&peer->error_distributor,
+-			  &rxrpc_peer_error_distributor);
+ 		peer->service_conns = RB_ROOT;
+ 		seqlock_init(&peer->service_conn_lock);
+ 		spin_lock_init(&peer->lock);
+@@ -415,21 +413,6 @@ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+ 	return peer;
+ }
+ 
+-/*
+- * Queue a peer record.  This passes the caller's ref to the workqueue.
+- */
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *peer)
+-{
+-	const void *here = __builtin_return_address(0);
+-	int n;
+-
+-	n = atomic_read(&peer->usage);
+-	if (rxrpc_queue_work(&peer->error_distributor))
+-		trace_rxrpc_peer(peer, rxrpc_peer_queued_error, n, here);
+-	else
+-		rxrpc_put_peer(peer);
+-}
+-
+ /*
+  * Discard a peer record.
+  */
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index f74513a7c7a8..c855fd045a3c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -31,6 +31,8 @@
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+ 
++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
++
+ /* The list of all installed classifier types */
+ static LIST_HEAD(tcf_proto_base);
+ 
+@@ -1083,7 +1085,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ replay:
+ 	tp_created = 0;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1226,7 +1228,7 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1334,7 +1336,7 @@ static int tc_get_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	void *fh = NULL;
+ 	int err;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1488,7 +1490,8 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (nlmsg_len(cb->nlh) < sizeof(*tcm))
+ 		return skb->len;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 99cc25aae503..57f71765febe 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2052,7 +2052,8 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
+ 
+ 	if (tcm->tcm_parent) {
+ 		q = qdisc_match_from_root(root, TC_H_MAJ(tcm->tcm_parent));
+-		if (q && tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
++		if (q && q != root &&
++		    tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
+ 			return -1;
+ 		return 0;
+ 	}
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index cbe4831f46f4..4a042abf844c 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -413,7 +413,7 @@ static int gred_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (tb[TCA_GRED_PARMS] == NULL && tb[TCA_GRED_STAB] == NULL) {
+ 		if (tb[TCA_GRED_LIMIT] != NULL)
+ 			sch->limit = nla_get_u32(tb[TCA_GRED_LIMIT]);
+-		return gred_change_table_def(sch, opt);
++		return gred_change_table_def(sch, tb[TCA_GRED_DPS]);
+ 	}
+ 
+ 	if (tb[TCA_GRED_PARMS] == NULL ||
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 50ee07cd20c4..9d903b870790 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -270,11 +270,10 @@ struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id)
+ 
+ 	spin_lock_bh(&sctp_assocs_id_lock);
+ 	asoc = (struct sctp_association *)idr_find(&sctp_assocs_id, (int)id);
++	if (asoc && (asoc->base.sk != sk || asoc->base.dead))
++		asoc = NULL;
+ 	spin_unlock_bh(&sctp_assocs_id_lock);
+ 
+-	if (!asoc || (asoc->base.sk != sk) || asoc->base.dead)
+-		return NULL;
+-
+ 	return asoc;
+ }
+ 
+@@ -1940,8 +1939,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+ 		if (sp->strm_interleave) {
+ 			timeo = sock_sndtimeo(sk, 0);
+ 			err = sctp_wait_for_connect(asoc, &timeo);
+-			if (err)
++			if (err) {
++				err = -ESRCH;
+ 				goto err;
++			}
+ 		} else {
+ 			wait_connect = true;
+ 		}
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index add82b0266f3..3be95f77ec7f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -114,22 +114,17 @@ static void __smc_lgr_unregister_conn(struct smc_connection *conn)
+ 	sock_put(&smc->sk); /* sock_hold in smc_lgr_register_conn() */
+ }
+ 
+-/* Unregister connection and trigger lgr freeing if applicable
++/* Unregister connection from lgr
+  */
+ static void smc_lgr_unregister_conn(struct smc_connection *conn)
+ {
+ 	struct smc_link_group *lgr = conn->lgr;
+-	int reduced = 0;
+ 
+ 	write_lock_bh(&lgr->conns_lock);
+ 	if (conn->alert_token_local) {
+-		reduced = 1;
+ 		__smc_lgr_unregister_conn(conn);
+ 	}
+ 	write_unlock_bh(&lgr->conns_lock);
+-	if (!reduced || lgr->conns_num)
+-		return;
+-	smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_lgr_free_work(struct work_struct *work)
+@@ -238,7 +233,8 @@ out:
+ 	return rc;
+ }
+ 
+-static void smc_buf_unuse(struct smc_connection *conn)
++static void smc_buf_unuse(struct smc_connection *conn,
++			  struct smc_link_group *lgr)
+ {
+ 	if (conn->sndbuf_desc)
+ 		conn->sndbuf_desc->used = 0;
+@@ -248,8 +244,6 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ 			conn->rmb_desc->used = 0;
+ 		} else {
+ 			/* buf registration failed, reuse not possible */
+-			struct smc_link_group *lgr = conn->lgr;
+-
+ 			write_lock_bh(&lgr->rmbs_lock);
+ 			list_del(&conn->rmb_desc->list);
+ 			write_unlock_bh(&lgr->rmbs_lock);
+@@ -262,11 +256,16 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ /* remove a finished connection from its link group */
+ void smc_conn_free(struct smc_connection *conn)
+ {
+-	if (!conn->lgr)
++	struct smc_link_group *lgr = conn->lgr;
++
++	if (!lgr)
+ 		return;
+ 	smc_cdc_tx_dismiss_slots(conn);
+-	smc_lgr_unregister_conn(conn);
+-	smc_buf_unuse(conn);
++	smc_lgr_unregister_conn(conn);		/* unsets conn->lgr */
++	smc_buf_unuse(conn, lgr);		/* allow buffer reuse */
++
++	if (!lgr->conns_num)
++		smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_link_clear(struct smc_link *lnk)
+diff --git a/net/socket.c b/net/socket.c
+index d4187ac17d55..fcb18a7ed14b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2887,9 +2887,14 @@ static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+ 		    copy_in_user(&rxnfc->fs.ring_cookie,
+ 				 &compat_rxnfc->fs.ring_cookie,
+ 				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&rxnfc->rule_cnt, &compat_rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
++				 (void __user *)&rxnfc->fs.ring_cookie))
++			return -EFAULT;
++		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
++			if (put_user(rule_cnt, &rxnfc->rule_cnt))
++				return -EFAULT;
++		} else if (copy_in_user(&rxnfc->rule_cnt,
++					&compat_rxnfc->rule_cnt,
++					sizeof(rxnfc->rule_cnt)))
+ 			return -EFAULT;
+ 	}
+ 
+diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
+index 51b4b96f89db..3cfeb9df64b0 100644
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -115,7 +115,7 @@ struct sk_buff *tipc_named_withdraw(struct net *net, struct publication *publ)
+ 	struct sk_buff *buf;
+ 	struct distr_item *item;
+ 
+-	list_del(&publ->binding_node);
++	list_del_rcu(&publ->binding_node);
+ 
+ 	if (publ->scope == TIPC_NODE_SCOPE)
+ 		return NULL;
+@@ -147,7 +147,7 @@ static void named_distribute(struct net *net, struct sk_buff_head *list,
+ 			ITEM_SIZE) * ITEM_SIZE;
+ 	u32 msg_rem = msg_dsz;
+ 
+-	list_for_each_entry(publ, pls, binding_node) {
++	list_for_each_entry_rcu(publ, pls, binding_node) {
+ 		/* Prepare next buffer: */
+ 		if (!skb) {
+ 			skb = named_prepare_buf(net, PUBLICATION, msg_rem,
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 9fab8e5a4a5b..994ddc7ec9b1 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -286,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge, bool revert)
++			      bool charge)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -335,10 +335,10 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 	}
+ 
+ out:
++	if (rc)
++		iov_iter_revert(from, size - *size_used);
+ 	*size_used = size;
+ 	*pages_used = num_elem;
+-	if (revert)
+-		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -440,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true, false);
++				true);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -453,8 +453,6 @@ alloc_encrypted:
+ 
+ 			copied -= try_to_copy;
+ fallback_to_reg_send:
+-			iov_iter_revert(&msg->msg_iter,
+-					ctx->sg_plaintext_size - orig_size);
+ 			trim_sg(sk, ctx->sg_plaintext_data,
+ 				&ctx->sg_plaintext_num_elem,
+ 				&ctx->sg_plaintext_size,
+@@ -828,7 +826,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false, true);
++							 MAX_SKB_FRAGS,	false);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 733ccf867972..214f9ef79a64 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3699,6 +3699,7 @@ static bool ht_rateset_to_mask(struct ieee80211_supported_band *sband,
+ 			return false;
+ 
+ 		/* check availability */
++		ridx = array_index_nospec(ridx, IEEE80211_HT_MCS_MASK_LEN);
+ 		if (sband->ht_cap.mcs.rx_mask[ridx] & rbit)
+ 			mcs[ridx] |= rbit;
+ 		else
+@@ -10124,7 +10125,7 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+ 	s32 last, low, high;
+ 	u32 hyst;
+-	int i, n;
++	int i, n, low_index;
+ 	int err;
+ 
+ 	/* RSSI reporting disabled? */
+@@ -10161,10 +10162,19 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 		if (last < wdev->cqm_config->rssi_thresholds[i])
+ 			break;
+ 
+-	low = i > 0 ?
+-		(wdev->cqm_config->rssi_thresholds[i - 1] - hyst) : S32_MIN;
+-	high = i < n ?
+-		(wdev->cqm_config->rssi_thresholds[i] + hyst - 1) : S32_MAX;
++	low_index = i - 1;
++	if (low_index >= 0) {
++		low_index = array_index_nospec(low_index, n);
++		low = wdev->cqm_config->rssi_thresholds[low_index] - hyst;
++	} else {
++		low = S32_MIN;
++	}
++	if (i < n) {
++		i = array_index_nospec(i, n);
++		high = wdev->cqm_config->rssi_thresholds[i] + hyst - 1;
++	} else {
++		high = S32_MAX;
++	}
+ 
+ 	return rdev_set_cqm_rssi_range_config(rdev, dev, low, high);
+ }
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 2f702adf2912..24cfa2776f50 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2661,11 +2661,12 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ {
+ 	struct wiphy *wiphy = NULL;
+ 	enum reg_request_treatment treatment;
++	enum nl80211_reg_initiator initiator = reg_request->initiator;
+ 
+ 	if (reg_request->wiphy_idx != WIPHY_IDX_INVALID)
+ 		wiphy = wiphy_idx_to_wiphy(reg_request->wiphy_idx);
+ 
+-	switch (reg_request->initiator) {
++	switch (initiator) {
+ 	case NL80211_REGDOM_SET_BY_CORE:
+ 		treatment = reg_process_hint_core(reg_request);
+ 		break;
+@@ -2683,7 +2684,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 		treatment = reg_process_hint_country_ie(wiphy, reg_request);
+ 		break;
+ 	default:
+-		WARN(1, "invalid initiator %d\n", reg_request->initiator);
++		WARN(1, "invalid initiator %d\n", initiator);
+ 		goto out_free;
+ 	}
+ 
+@@ -2698,7 +2699,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 	 */
+ 	if (treatment == REG_REQ_ALREADY_SET && wiphy &&
+ 	    wiphy->regulatory_flags & REGULATORY_STRICT_REG) {
+-		wiphy_update_regulatory(wiphy, reg_request->initiator);
++		wiphy_update_regulatory(wiphy, initiator);
+ 		wiphy_all_share_dfs_chan_state(wiphy);
+ 		reg_check_channels();
+ 	}
+@@ -2867,6 +2868,7 @@ static int regulatory_hint_core(const char *alpha2)
+ 	request->alpha2[0] = alpha2[0];
+ 	request->alpha2[1] = alpha2[1];
+ 	request->initiator = NL80211_REGDOM_SET_BY_CORE;
++	request->wiphy_idx = WIPHY_IDX_INVALID;
+ 
+ 	queue_regulatory_request(request);
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d36c3eb7b931..d0e7472dd9fd 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1058,13 +1058,23 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+ 
++/*
++ * Update RX channel information based on the available frame payload
++ * information. This is mainly for the 2.4 GHz band where frames can be received
++ * from neighboring channels and the Beacon frames use the DSSS Parameter Set
++ * element to indicate the current (transmitting) channel, but this might also
++ * be needed on other bands if RX frequency does not match with the actual
++ * operating channel of a BSS.
++ */
+ static struct ieee80211_channel *
+ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+-			 struct ieee80211_channel *channel)
++			 struct ieee80211_channel *channel,
++			 enum nl80211_bss_scan_width scan_width)
+ {
+ 	const u8 *tmp;
+ 	u32 freq;
+ 	int channel_number = -1;
++	struct ieee80211_channel *alt_channel;
+ 
+ 	tmp = cfg80211_find_ie(WLAN_EID_DS_PARAMS, ie, ielen);
+ 	if (tmp && tmp[1] == 1) {
+@@ -1078,16 +1088,45 @@ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+ 		}
+ 	}
+ 
+-	if (channel_number < 0)
++	if (channel_number < 0) {
++		/* No channel information in frame payload */
+ 		return channel;
++	}
+ 
+ 	freq = ieee80211_channel_to_frequency(channel_number, channel->band);
+-	channel = ieee80211_get_channel(wiphy, freq);
+-	if (!channel)
+-		return NULL;
+-	if (channel->flags & IEEE80211_CHAN_DISABLED)
++	alt_channel = ieee80211_get_channel(wiphy, freq);
++	if (!alt_channel) {
++		if (channel->band == NL80211_BAND_2GHZ) {
++			/*
++			 * Better not allow unexpected channels when that could
++			 * be going beyond the 1-11 range (e.g., discovering
++			 * BSS on channel 12 when radio is configured for
++			 * channel 11.
++			 */
++			return NULL;
++		}
++
++		/* No match for the payload channel number - ignore it */
++		return channel;
++	}
++
++	if (scan_width == NL80211_BSS_CHAN_WIDTH_10 ||
++	    scan_width == NL80211_BSS_CHAN_WIDTH_5) {
++		/*
++		 * Ignore channel number in 5 and 10 MHz channels where there
++		 * may not be an n:1 or 1:n mapping between frequencies and
++		 * channel numbers.
++		 */
++		return channel;
++	}
++
++	/*
++	 * Use the channel determined through the payload channel number
++	 * instead of the RX channel reported by the driver.
++	 */
++	if (alt_channel->flags & IEEE80211_CHAN_DISABLED)
+ 		return NULL;
+-	return channel;
++	return alt_channel;
+ }
+ 
+ /* Returned bss is reference counted and must be cleaned up appropriately. */
+@@ -1112,7 +1151,8 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ 		    (data->signal < 0 || data->signal > 100)))
+ 		return NULL;
+ 
+-	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan);
++	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan,
++					   data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+@@ -1210,7 +1250,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 		return NULL;
+ 
+ 	channel = cfg80211_get_bss_channel(wiphy, mgmt->u.beacon.variable,
+-					   ielen, data->chan);
++					   ielen, data->chan, data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 352abca2605f..86f5afbd0a0c 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -453,6 +453,7 @@ resume:
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINHDRERROR);
+ 			goto drop;
+ 		}
++		crypto_done = false;
+ 	} while (!err);
+ 
+ 	err = xfrm_rcv_cb(skb, family, x->type->proto, 0);
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 89b178a78dc7..36d15a38ce5e 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -101,6 +101,10 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
+ 		spin_unlock_bh(&x->lock);
+ 
+ 		skb_dst_force(skb);
++		if (!skb_dst(skb)) {
++			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
++			goto error_nolock;
++		}
+ 
+ 		if (xfrm_offload(skb)) {
+ 			x->type_offload->encap(x, skb);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index a94983e03a8b..526e6814ed4b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2551,6 +2551,10 @@ int __xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ 	}
+ 
+ 	skb_dst_force(skb);
++	if (!skb_dst(skb)) {
++		XFRM_INC_STATS(net, LINUX_MIB_XFRMFWDHDRERROR);
++		return 0;
++	}
+ 
+ 	dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, XFRM_LOOKUP_QUEUE);
+ 	if (IS_ERR(dst)) {
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 33878e6e0d0a..d0672c400c2f 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -151,10 +151,16 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 	err = -EINVAL;
+ 	switch (p->family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			goto out;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			goto out;
++
+ 		break;
+ #else
+ 		err = -EAFNOSUPPORT;
+@@ -1359,10 +1365,16 @@ static int verify_newpolicy_info(struct xfrm_userpolicy_info *p)
+ 
+ 	switch (p->sel.family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			return -EINVAL;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			return -EINVAL;
++
+ 		break;
+ #else
+ 		return  -EAFNOSUPPORT;
+@@ -1443,6 +1455,9 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family)
+ 		    (ut[i].family != prev_family))
+ 			return -EINVAL;
+ 
++		if (ut[i].mode >= XFRM_MODE_MAX)
++			return -EINVAL;
++
+ 		prev_family = ut[i].family;
+ 
+ 		switch (ut[i].family) {
+diff --git a/tools/perf/Makefile b/tools/perf/Makefile
+index 225454416ed5..7902a5681fc8 100644
+--- a/tools/perf/Makefile
++++ b/tools/perf/Makefile
+@@ -84,10 +84,10 @@ endif # has_clean
+ endif # MAKECMDGOALS
+ 
+ #
+-# The clean target is not really parallel, don't print the jobs info:
++# Explicitly disable parallelism for the clean target.
+ #
+ clean:
+-	$(make)
++	$(make) -j1
+ 
+ #
+ # The build-test target is not really parallel, don't print the jobs info,
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 22dbb6612b41..b70cce40ca97 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2246,7 +2246,8 @@ static int append_inlines(struct callchain_cursor *cursor,
+ 	if (!symbol_conf.inline_name || !map || !sym)
+ 		return ret;
+ 
+-	addr = map__rip_2objdump(map, ip);
++	addr = map__map_ip(map, ip);
++	addr = map__rip_2objdump(map, addr);
+ 
+ 	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
+ 	if (!inline_node) {
+@@ -2272,7 +2273,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
+-	u64 addr;
++	u64 addr = entry->ip;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2284,7 +2285,8 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	 * Convert entry->ip from a virtual address to an offset in
+ 	 * its corresponding binary.
+ 	 */
+-	addr = map__map_ip(entry->map, entry->ip);
++	if (entry->map)
++		addr = map__map_ip(entry->map, entry->ip);
+ 
+ 	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 001be4f9d3b9..a5f9e236cc71 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -27,7 +27,7 @@ class install_lib(_install_lib):
+ 
+ cflags = getenv('CFLAGS', '').split()
+ # switch off several checks (need to be at the end of cflags list)
+-cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter' ]
++cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls' ]
+ if cc != "clang":
+     cflags += ['-Wno-cast-function-type' ]
+ 
+diff --git a/tools/testing/selftests/net/fib-onlink-tests.sh b/tools/testing/selftests/net/fib-onlink-tests.sh
+index 3991ad1a368d..864f865eee55 100755
+--- a/tools/testing/selftests/net/fib-onlink-tests.sh
++++ b/tools/testing/selftests/net/fib-onlink-tests.sh
+@@ -167,8 +167,8 @@ setup()
+ 	# add vrf table
+ 	ip li add ${VRF} type vrf table ${VRF_TABLE}
+ 	ip li set ${VRF} up
+-	ip ro add table ${VRF_TABLE} unreachable default
+-	ip -6 ro add table ${VRF_TABLE} unreachable default
++	ip ro add table ${VRF_TABLE} unreachable default metric 8192
++	ip -6 ro add table ${VRF_TABLE} unreachable default metric 8192
+ 
+ 	# create test interfaces
+ 	ip li add ${NETIFS[p1]} type veth peer name ${NETIFS[p2]}
+@@ -185,20 +185,20 @@ setup()
+ 	for n in 1 3 5 7; do
+ 		ip li set ${NETIFS[p${n}]} up
+ 		ip addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+ 	# move peer interfaces to namespace and add addresses
+ 	for n in 2 4 6 8; do
+ 		ip li set ${NETIFS[p${n}]} netns ${PEER_NS} up
+ 		ip -netns ${PEER_NS} addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+-	set +e
++	ip -6 ro add default via ${V6ADDRS[p3]/::[0-9]/::64}
++	ip -6 ro add table ${VRF_TABLE} default via ${V6ADDRS[p7]/::[0-9]/::64}
+ 
+-	# let DAD complete - assume default of 1 probe
+-	sleep 1
++	set +e
+ }
+ 
+ cleanup()
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 0d7a44fa30af..8e509cbcb209 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ #
+ # This test is for checking rtnetlink callpaths, and get as much coverage as possible.
+ #
+diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
+index 850767befa47..99e537ab5ad9 100755
+--- a/tools/testing/selftests/net/udpgso_bench.sh
++++ b/tools/testing/selftests/net/udpgso_bench.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Run a series of udpgso benchmarks


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw)
  To: gentoo-commits

commit:     285ab410e6fc64fab55ce1263d3b31ea0ce889e0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 13 16:32:27 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:27 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=285ab410

Linux patch 4.18.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-4.18.14.patch | 1692 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1696 insertions(+)

diff --git a/0000_README b/0000_README
index f5bb594..6d1cb28 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-4.18.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.13
 
+Patch:  1013_linux-4.18.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-4.18.14.patch b/1013_linux-4.18.14.patch
new file mode 100644
index 0000000..742cbc9
--- /dev/null
+++ b/1013_linux-4.18.14.patch
@@ -0,0 +1,1692 @@
+diff --git a/Makefile b/Makefile
+index 4442e9ea4b6d..5274f8ae6b44 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 4674541eba3f..8ce6e7235915 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -241,6 +241,26 @@ int copy_thread(unsigned long clone_flags,
+ 		task_thread_info(current)->thr_ptr;
+ 	}
+ 
++
++	/*
++	 * setup usermode thread pointer #1:
++	 * when child is picked by scheduler, __switch_to() uses @c_callee to
++	 * populate usermode callee regs: this works (despite being in a kernel
++	 * function) since special return path for child @ret_from_fork()
++	 * ensures those regs are not clobbered all the way to RTIE to usermode
++	 */
++	c_callee->r25 = task_thread_info(p)->thr_ptr;
++
++#ifdef CONFIG_ARC_CURR_IN_REG
++	/*
++	 * setup usermode thread pointer #2:
++	 * however for this special use of r25 in kernel, __switch_to() sets
++	 * r25 for kernel needs and only in the final return path is usermode
++	 * r25 setup, from pt_regs->user_r25. So set that up as well
++	 */
++	c_regs->user_r25 = c_callee->r25;
++#endif
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
+index 8721fd004291..e1a1518a1ec7 100644
+--- a/arch/powerpc/include/asm/setup.h
++++ b/arch/powerpc/include/asm/setup.h
+@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
+ 
+ extern unsigned int rtas_data;
+ extern unsigned long long memory_limit;
++extern bool init_mem_is_free;
+ extern unsigned long klimit;
+ extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
+ 
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index e0d881ab304e..30cbcadb54d5 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -142,7 +142,7 @@ static inline int unmap_patch_area(unsigned long addr)
+ 	return 0;
+ }
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	int err;
+ 	unsigned int *patch_addr = NULL;
+@@ -182,12 +182,22 @@ out:
+ }
+ #else /* !CONFIG_STRICT_KERNEL_RWX */
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	return raw_patch_instruction(addr, instr);
+ }
+ 
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
++
++int patch_instruction(unsigned int *addr, unsigned int instr)
++{
++	/* Make sure we aren't patching a freed init section */
++	if (init_mem_is_free && init_section_contains(addr, 4)) {
++		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
++		return 0;
++	}
++	return do_patch_instruction(addr, instr);
++}
+ NOKPROBE_SYMBOL(patch_instruction);
+ 
+ int patch_branch(unsigned int *addr, unsigned long target, int flags)
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 5c8530d0c611..04ccb274a620 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -63,6 +63,7 @@
+ #endif
+ 
+ unsigned long long memory_limit;
++bool init_mem_is_free;
+ 
+ #ifdef CONFIG_HIGHMEM
+ pte_t *kmap_pte;
+@@ -396,6 +397,7 @@ void free_initmem(void)
+ {
+ 	ppc_md.progress = ppc_printk_progress;
+ 	mark_initmem_nx();
++	init_mem_is_free = true;
+ 	free_initmem_default(POISON_FREE_INITMEM);
+ }
+ 
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 9589878faf46..eb1ed9a7109d 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,7 +72,13 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  CFL += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
+ 
+ $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+@@ -144,7 +150,13 @@ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+-KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
++
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
+index f19856d95c60..e48ca3afa091 100644
+--- a/arch/x86/entry/vdso/vclock_gettime.c
++++ b/arch/x86/entry/vdso/vclock_gettime.c
+@@ -43,8 +43,9 @@ extern u8 hvclock_page
+ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*ts) :
++	     "0" (__NR_clock_gettime), "D" (clock), "S" (ts) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -52,8 +53,9 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*tv), "=m" (*tz) :
++	     "0" (__NR_gettimeofday), "D" (tv), "S" (tz) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -64,13 +66,13 @@ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[clock], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_clock_gettime), "g" (clock), "c" (ts)
++		: "=a" (ret), "=m" (*ts)
++		: "0" (__NR_clock_gettime), [clock] "g" (clock), "c" (ts)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+@@ -79,13 +81,13 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[tv], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_gettimeofday), "g" (tv), "c" (tz)
++		: "=a" (ret), "=m" (*tv), "=m" (*tz)
++		: "0" (__NR_gettimeofday), [tv] "g" (tv), "c" (tz)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 97d41754769e..d02f0390c1c1 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -232,6 +232,17 @@ static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
+  */
+ static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
+ 
++/*
++ * In some cases, we need to preserve the GFN of a non-present or reserved
++ * SPTE when we usurp the upper five bits of the physical address space to
++ * defend against L1TF, e.g. for MMIO SPTEs.  To preserve the GFN, we'll
++ * shift bits of the GFN that overlap with shadow_nonpresent_or_rsvd_mask
++ * left into the reserved bits, i.e. the GFN in the SPTE will be split into
++ * high and low parts.  This mask covers the lower bits of the GFN.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
++
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -338,9 +349,7 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
+-		   shadow_nonpresent_or_rsvd_mask;
+-	u64 gpa = spte & ~mask;
++	u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
+ 
+ 	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
+ 	       & shadow_nonpresent_or_rsvd_mask;
+@@ -404,6 +413,8 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+ static void kvm_mmu_reset_all_pte_masks(void)
+ {
++	u8 low_phys_bits;
++
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+ 	shadow_dirty_mask = 0;
+@@ -418,12 +429,17 @@ static void kvm_mmu_reset_all_pte_masks(void)
+ 	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
+ 	 * assumed that the CPU is not vulnerable to L1TF.
+ 	 */
++	low_phys_bits = boot_cpu_data.x86_phys_bits;
+ 	if (boot_cpu_data.x86_phys_bits <
+-	    52 - shadow_nonpresent_or_rsvd_mask_len)
++	    52 - shadow_nonpresent_or_rsvd_mask_len) {
+ 		shadow_nonpresent_or_rsvd_mask =
+ 			rsvd_bits(boot_cpu_data.x86_phys_bits -
+ 				  shadow_nonpresent_or_rsvd_mask_len,
+ 				  boot_cpu_data.x86_phys_bits - 1);
++		low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
++	}
++	shadow_nonpresent_or_rsvd_lower_gfn_mask =
++		GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index d0c3be353bb6..32721ef9652d 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -9826,15 +9826,16 @@ static void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+ 	if (!lapic_in_kernel(vcpu))
+ 		return;
+ 
++	if (!flexpriority_enabled &&
++	    !cpu_has_vmx_virtualize_x2apic_mode())
++		return;
++
+ 	/* Postpone execution until vmcs01 is the current VMCS. */
+ 	if (is_guest_mode(vcpu)) {
+ 		to_vmx(vcpu)->nested.change_vmcs01_virtual_apic_mode = true;
+ 		return;
+ 	}
+ 
+-	if (!cpu_need_tpr_shadow(vcpu))
+-		return;
+-
+ 	sec_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+ 	sec_exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+ 			      SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 2f9e14361673..90e8058ae557 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1596,7 +1596,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 		BUG_ON(!rq->q);
+ 		if (rq->mq_ctx != this_ctx) {
+ 			if (this_ctx) {
+-				trace_block_unplug(this_q, depth, from_schedule);
++				trace_block_unplug(this_q, depth, !from_schedule);
+ 				blk_mq_sched_insert_requests(this_q, this_ctx,
+ 								&ctx_list,
+ 								from_schedule);
+@@ -1616,7 +1616,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	 * on 'ctx_list'. Do those.
+ 	 */
+ 	if (this_ctx) {
+-		trace_block_unplug(this_q, depth, from_schedule);
++		trace_block_unplug(this_q, depth, !from_schedule);
+ 		blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list,
+ 						from_schedule);
+ 	}
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 3f68e2919dc5..a690fd400260 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1713,8 +1713,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 
+ 	dpm_wait_for_subordinate(dev, async);
+ 
+-	if (async_error)
++	if (async_error) {
++		dev->power.direct_complete = false;
+ 		goto Complete;
++	}
+ 
+ 	/*
+ 	 * If a device configured to wake up the system from sleep states
+@@ -1726,6 +1728,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 		pm_wakeup_event(dev, 0);
+ 
+ 	if (pm_wakeup_pending()) {
++		dev->power.direct_complete = false;
+ 		async_error = -EBUSY;
+ 		goto Complete;
+ 	}
+diff --git a/drivers/clocksource/timer-atmel-pit.c b/drivers/clocksource/timer-atmel-pit.c
+index ec8a4376f74f..2fab18fae4fc 100644
+--- a/drivers/clocksource/timer-atmel-pit.c
++++ b/drivers/clocksource/timer-atmel-pit.c
+@@ -180,26 +180,29 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	data->base = of_iomap(node, 0);
+ 	if (!data->base) {
+ 		pr_err("Could not map PIT address\n");
+-		return -ENXIO;
++		ret = -ENXIO;
++		goto exit;
+ 	}
+ 
+ 	data->mck = of_clk_get(node, 0);
+ 	if (IS_ERR(data->mck)) {
+ 		pr_err("Unable to get mck clk\n");
+-		return PTR_ERR(data->mck);
++		ret = PTR_ERR(data->mck);
++		goto exit;
+ 	}
+ 
+ 	ret = clk_prepare_enable(data->mck);
+ 	if (ret) {
+ 		pr_err("Unable to enable mck\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Get the interrupts property */
+ 	data->irq = irq_of_parse_and_map(node, 0);
+ 	if (!data->irq) {
+ 		pr_err("Unable to get IRQ from DT\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	/*
+@@ -227,7 +230,7 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	ret = clocksource_register_hz(&data->clksrc, pit_rate);
+ 	if (ret) {
+ 		pr_err("Failed to register clocksource\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Set up irq handler */
+@@ -236,7 +239,8 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 			  "at91_tick", data);
+ 	if (ret) {
+ 		pr_err("Unable to setup IRQ\n");
+-		return ret;
++		clocksource_unregister(&data->clksrc);
++		goto exit;
+ 	}
+ 
+ 	/* Set up and register clockevents */
+@@ -254,6 +258,10 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	clockevents_register_device(&data->clkevt);
+ 
+ 	return 0;
++
++exit:
++	kfree(data);
++	return ret;
+ }
+ TIMER_OF_DECLARE(at91sam926x_pit, "atmel,at91sam9260-pit",
+ 		       at91sam926x_pit_dt_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 23d960ec1cf2..acad2999560c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -246,6 +246,8 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ {
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vce.idle_work);
++
+ 	if (adev->vce.vcpu_bo == NULL)
+ 		return 0;
+ 
+@@ -256,7 +258,6 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ 	if (i == AMDGPU_MAX_VCE_HANDLES)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vce.idle_work);
+ 	/* TODO: suspending running encoding sessions isn't supported */
+ 	return -EINVAL;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index bee49991c1ff..2dc3d1e28f3c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -151,11 +151,11 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
+ 	unsigned size;
+ 	void *ptr;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if (adev->vcn.vcpu_bo == NULL)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vcn.idle_work);
+-
+ 	size = amdgpu_bo_size(adev->vcn.vcpu_bo);
+ 	ptr = adev->vcn.cpu_addr;
+ 
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index d638c0fb3418..23a5643a4b98 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -566,14 +566,14 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	lessee_priv->is_master = 1;
+ 	lessee_priv->authenticated = 1;
+ 
+-	/* Hook up the fd */
+-	fd_install(fd, lessee_file);
+-
+ 	/* Pass fd back to userspace */
+ 	DRM_DEBUG_LEASE("Returning fd %d id %d\n", fd, lessee->lessee_id);
+ 	cl->fd = fd;
+ 	cl->lessee_id = lessee->lessee_id;
+ 
++	/* Hook up the fd */
++	fd_install(fd, lessee_file);
++
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index d4f4ce484529..8e71da403324 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -97,6 +97,8 @@ static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
+ {
+ 	int ret;
+ 
++	WARN_ON(*fence);
++
+ 	*fence = drm_syncobj_fence_get(syncobj);
+ 	if (*fence)
+ 		return 1;
+@@ -744,6 +746,9 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 
+ 	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
+ 		for (i = 0; i < count; ++i) {
++			if (entries[i].fence)
++				continue;
++
+ 			drm_syncobj_fence_get_or_add_callback(syncobjs[i],
+ 							      &entries[i].fence,
+ 							      &entries[i].syncobj_cb,
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 5f437d1570fb..21863ddde63e 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -1759,6 +1759,8 @@ static int ucma_close(struct inode *inode, struct file *filp)
+ 		mutex_lock(&mut);
+ 		if (!ctx->closing) {
+ 			mutex_unlock(&mut);
++			ucma_put_ctx(ctx);
++			wait_for_completion(&ctx->comp);
+ 			/* rdma_destroy_id ensures that no event handlers are
+ 			 * inflight for that id before releasing it.
+ 			 */
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 69dddeab124c..5936de71883f 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -1455,8 +1455,8 @@ static int __load_mappings(struct dm_cache_metadata *cmd,
+ 		if (hints_valid) {
+ 			r = dm_array_cursor_next(&cmd->hint_cursor);
+ 			if (r) {
+-				DMERR("dm_array_cursor_next for hint failed");
+-				goto out;
++				dm_array_cursor_end(&cmd->hint_cursor);
++				hints_valid = false;
+ 			}
+ 		}
+ 
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 44df244807e5..a39ae8f45e32 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3017,8 +3017,13 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache)
+ 
+ static bool can_resize(struct cache *cache, dm_cblock_t new_size)
+ {
+-	if (from_cblock(new_size) > from_cblock(cache->cache_size))
+-		return true;
++	if (from_cblock(new_size) > from_cblock(cache->cache_size)) {
++		if (cache->sized) {
++			DMERR("%s: unable to extend cache due to missing cache table reload",
++			      cache_device_name(cache));
++			return false;
++		}
++	}
+ 
+ 	/*
+ 	 * We can't drop a dirty block when shrinking the cache.
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index d94ba6f72ff5..419362c2d8ac 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -806,19 +806,19 @@ static int parse_path_selector(struct dm_arg_set *as, struct priority_group *pg,
+ }
+ 
+ static int setup_scsi_dh(struct block_device *bdev, struct multipath *m,
+-			 const char *attached_handler_name, char **error)
++			 const char **attached_handler_name, char **error)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	int r;
+ 
+ 	if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags)) {
+ retain:
+-		if (attached_handler_name) {
++		if (*attached_handler_name) {
+ 			/*
+ 			 * Clear any hw_handler_params associated with a
+ 			 * handler that isn't already attached.
+ 			 */
+-			if (m->hw_handler_name && strcmp(attached_handler_name, m->hw_handler_name)) {
++			if (m->hw_handler_name && strcmp(*attached_handler_name, m->hw_handler_name)) {
+ 				kfree(m->hw_handler_params);
+ 				m->hw_handler_params = NULL;
+ 			}
+@@ -830,7 +830,8 @@ retain:
+ 			 * handler instead of the original table passed in.
+ 			 */
+ 			kfree(m->hw_handler_name);
+-			m->hw_handler_name = attached_handler_name;
++			m->hw_handler_name = *attached_handler_name;
++			*attached_handler_name = NULL;
+ 		}
+ 	}
+ 
+@@ -867,7 +868,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	struct pgpath *p;
+ 	struct multipath *m = ti->private;
+ 	struct request_queue *q;
+-	const char *attached_handler_name;
++	const char *attached_handler_name = NULL;
+ 
+ 	/* we need at least a path arg */
+ 	if (as->argc < 1) {
+@@ -890,7 +891,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	attached_handler_name = scsi_dh_attached_handler_name(q, GFP_KERNEL);
+ 	if (attached_handler_name || m->hw_handler_name) {
+ 		INIT_DELAYED_WORK(&p->activate_path, activate_path_work);
+-		r = setup_scsi_dh(p->path.dev->bdev, m, attached_handler_name, &ti->error);
++		r = setup_scsi_dh(p->path.dev->bdev, m, &attached_handler_name, &ti->error);
+ 		if (r) {
+ 			dm_put_device(ti, p->path.dev);
+ 			goto bad;
+@@ -905,6 +906,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 
+ 	return p;
+  bad:
++	kfree(attached_handler_name);
+ 	free_pgpath(p);
+ 	return ERR_PTR(r);
+ }
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index abf9e884386c..f57f5de54206 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -235,7 +235,7 @@ int mmc_of_parse(struct mmc_host *host)
+ 			host->caps |= MMC_CAP_NEEDS_POLL;
+ 
+ 		ret = mmc_gpiod_request_cd(host, "cd", 0, true,
+-					   cd_debounce_delay_ms,
++					   cd_debounce_delay_ms * 1000,
+ 					   &cd_gpio_invert);
+ 		if (!ret)
+ 			dev_info(host->parent, "Got CD GPIO\n");
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 2a833686784b..86803a3a04dc 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -271,7 +271,7 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+ 	if (debounce) {
+ 		ret = gpiod_set_debounce(desc, debounce);
+ 		if (ret < 0)
+-			ctx->cd_debounce_delay_ms = debounce;
++			ctx->cd_debounce_delay_ms = debounce / 1000;
+ 	}
+ 
+ 	if (gpio_invert)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 21eb3a598a86..bdaad6e93be5 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1619,10 +1619,10 @@ ath10k_wmi_tlv_op_gen_start_scan(struct ath10k *ar,
+ 	bssid_len = arg->n_bssids * sizeof(struct wmi_mac_addr);
+ 	ie_len = roundup(arg->ie_len, 4);
+ 	len = (sizeof(*tlv) + sizeof(*cmd)) +
+-	      (arg->n_channels ? sizeof(*tlv) + chan_len : 0) +
+-	      (arg->n_ssids ? sizeof(*tlv) + ssid_len : 0) +
+-	      (arg->n_bssids ? sizeof(*tlv) + bssid_len : 0) +
+-	      (arg->ie_len ? sizeof(*tlv) + ie_len : 0);
++	      sizeof(*tlv) + chan_len +
++	      sizeof(*tlv) + ssid_len +
++	      sizeof(*tlv) + bssid_len +
++	      sizeof(*tlv) + ie_len;
+ 
+ 	skb = ath10k_wmi_alloc_skb(ar, len);
+ 	if (!skb)
+diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
+index 3c4c58b9fe76..3b6fb5b3bdb2 100644
+--- a/drivers/net/xen-netback/hash.c
++++ b/drivers/net/xen-netback/hash.c
+@@ -332,20 +332,22 @@ u32 xenvif_set_hash_mapping_size(struct xenvif *vif, u32 size)
+ u32 xenvif_set_hash_mapping(struct xenvif *vif, u32 gref, u32 len,
+ 			    u32 off)
+ {
+-	u32 *mapping = &vif->hash.mapping[off];
++	u32 *mapping = vif->hash.mapping;
+ 	struct gnttab_copy copy_op = {
+ 		.source.u.ref = gref,
+ 		.source.domid = vif->domid,
+-		.dest.u.gmfn = virt_to_gfn(mapping),
+ 		.dest.domid = DOMID_SELF,
+-		.dest.offset = xen_offset_in_page(mapping),
+-		.len = len * sizeof(u32),
++		.len = len * sizeof(*mapping),
+ 		.flags = GNTCOPY_source_gref
+ 	};
+ 
+-	if ((off + len > vif->hash.size) || copy_op.len > XEN_PAGE_SIZE)
++	if ((off + len < off) || (off + len > vif->hash.size) ||
++	    len > XEN_PAGE_SIZE / sizeof(*mapping))
+ 		return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+ 
++	copy_op.dest.u.gmfn = virt_to_gfn(mapping + off);
++	copy_op.dest.offset = xen_offset_in_page(mapping + off);
++
+ 	while (len-- != 0)
+ 		if (mapping[off++] >= vif->num_queues)
+ 			return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 722537e14848..41b49716ac75 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -771,6 +771,9 @@ static void __init of_unittest_parse_interrupts(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -845,6 +848,9 @@ static void __init of_unittest_parse_interrupts_extended(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -1001,15 +1007,19 @@ static void __init of_unittest_platform_populate(void)
+ 	pdev = of_find_device_by_node(np);
+ 	unittest(pdev, "device 1 creation failed\n");
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq);
++	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq == -EPROBE_DEFER,
++			 "device deferred probe failed - %d\n", irq);
+ 
+-	/* Test that a parsing failure does not return -EPROBE_DEFER */
+-	np = of_find_node_by_path("/testcase-data/testcase-device2");
+-	pdev = of_find_device_by_node(np);
+-	unittest(pdev, "device 2 creation failed\n");
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq);
++		/* Test that a parsing failure does not return -EPROBE_DEFER */
++		np = of_find_node_by_path("/testcase-data/testcase-device2");
++		pdev = of_find_device_by_node(np);
++		unittest(pdev, "device 2 creation failed\n");
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq < 0 && irq != -EPROBE_DEFER,
++			 "device parsing error failed - %d\n", irq);
++	}
+ 
+ 	np = of_find_node_by_path("/testcase-data/platform-tests");
+ 	unittest(np, "No testcase data in device tree\n");
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 0abe2865a3a5..c97ad905e7c9 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1125,12 +1125,12 @@ int pci_save_state(struct pci_dev *dev)
+ EXPORT_SYMBOL(pci_save_state);
+ 
+ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+-				     u32 saved_val, int retry)
++				     u32 saved_val, int retry, bool force)
+ {
+ 	u32 val;
+ 
+ 	pci_read_config_dword(pdev, offset, &val);
+-	if (val == saved_val)
++	if (!force && val == saved_val)
+ 		return;
+ 
+ 	for (;;) {
+@@ -1149,25 +1149,36 @@ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+ }
+ 
+ static void pci_restore_config_space_range(struct pci_dev *pdev,
+-					   int start, int end, int retry)
++					   int start, int end, int retry,
++					   bool force)
+ {
+ 	int index;
+ 
+ 	for (index = end; index >= start; index--)
+ 		pci_restore_config_dword(pdev, 4 * index,
+ 					 pdev->saved_config_space[index],
+-					 retry);
++					 retry, force);
+ }
+ 
+ static void pci_restore_config_space(struct pci_dev *pdev)
+ {
+ 	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL) {
+-		pci_restore_config_space_range(pdev, 10, 15, 0);
++		pci_restore_config_space_range(pdev, 10, 15, 0, false);
+ 		/* Restore BARs before the command register. */
+-		pci_restore_config_space_range(pdev, 4, 9, 10);
+-		pci_restore_config_space_range(pdev, 0, 3, 0);
++		pci_restore_config_space_range(pdev, 4, 9, 10, false);
++		pci_restore_config_space_range(pdev, 0, 3, 0, false);
++	} else if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
++		pci_restore_config_space_range(pdev, 12, 15, 0, false);
++
++		/*
++		 * Force rewriting of prefetch registers to avoid S3 resume
++		 * issues on Intel PCI bridges that occur when these
++		 * registers are not explicitly written.
++		 */
++		pci_restore_config_space_range(pdev, 9, 11, 0, true);
++		pci_restore_config_space_range(pdev, 0, 8, 0, false);
+ 	} else {
+-		pci_restore_config_space_range(pdev, 0, 15, 0);
++		pci_restore_config_space_range(pdev, 0, 15, 0, false);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index aba59521ad48..31d06f59c4e4 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1264,6 +1264,7 @@ static void tty_driver_remove_tty(struct tty_driver *driver, struct tty_struct *
+ static int tty_reopen(struct tty_struct *tty)
+ {
+ 	struct tty_driver *driver = tty->driver;
++	int retval;
+ 
+ 	if (driver->type == TTY_DRIVER_TYPE_PTY &&
+ 	    driver->subtype == PTY_TYPE_MASTER)
+@@ -1277,10 +1278,14 @@ static int tty_reopen(struct tty_struct *tty)
+ 
+ 	tty->count++;
+ 
+-	if (!tty->ldisc)
+-		return tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (tty->ldisc)
++		return 0;
+ 
+-	return 0;
++	retval = tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (retval)
++		tty->count--;
++
++	return retval;
+ }
+ 
+ /**
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f8ee32d9843a..84f52774810a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1514,6 +1514,7 @@ static void acm_disconnect(struct usb_interface *intf)
+ {
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 	struct tty_struct *tty;
++	int i;
+ 
+ 	/* sibling interface is already cleaning up */
+ 	if (!acm)
+@@ -1544,6 +1545,11 @@ static void acm_disconnect(struct usb_interface *intf)
+ 
+ 	tty_unregister_device(acm_tty_driver, acm->minor);
+ 
++	usb_free_urb(acm->ctrlurb);
++	for (i = 0; i < ACM_NW; i++)
++		usb_free_urb(acm->wb[i].urb);
++	for (i = 0; i < acm->rx_buflimit; i++)
++		usb_free_urb(acm->read_urbs[i]);
+ 	acm_write_buffers_free(acm);
+ 	usb_free_coherent(acm->dev, acm->ctrlsize, acm->ctrl_buffer, acm->ctrl_dma);
+ 	acm_read_buffers_free(acm);
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 7334da9e9779..71d0d33c3286 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -642,10 +642,10 @@ static int __maybe_unused xhci_mtk_resume(struct device *dev)
+ 	xhci_mtk_host_enable(mtk);
+ 
+ 	xhci_dbg(xhci, "%s: restart port polling\n", __func__);
+-	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+-	usb_hcd_poll_rh_status(hcd);
+ 	set_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
+ 	usb_hcd_poll_rh_status(xhci->shared_hcd);
++	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
++	usb_hcd_poll_rh_status(hcd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 6372edf339d9..722860eb5a91 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -185,6 +185,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI))
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0215b70c4efc..e72ad9f81c73 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -561,6 +561,9 @@ static void option_instat_callback(struct urb *urb);
+ /* Interface is reserved */
+ #define RSVD(ifnum)	((BIT(ifnum) & 0xff) << 0)
+ 
++/* Interface must have two endpoints */
++#define NUMEP2		BIT(16)
++
+ 
+ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
+@@ -1081,8 +1084,9 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06),
+-	  .driver_info = RSVD(4) | RSVD(5) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1999,6 +2003,13 @@ static int option_probe(struct usb_serial *serial,
+ 	if (device_flags & RSVD(iface_desc->bInterfaceNumber))
+ 		return -ENODEV;
+ 
++	/*
++	 * Allow matching on bNumEndpoints for devices whose interface numbers
++	 * can change (e.g. Quectel EP06).
++	 */
++	if (device_flags & NUMEP2 && iface_desc->bNumEndpoints != 2)
++		return -ENODEV;
++
+ 	/* Store the device flags so we can use them during attach. */
+ 	usb_set_serial_data(serial, (void *)device_flags);
+ 
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 40864c2bd9dc..4d0273508043 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -84,7 +84,8 @@ DEVICE(moto_modem, MOTO_IDS);
+ 
+ /* Motorola Tetra driver */
+ #define MOTOROLA_TETRA_IDS()			\
+-	{ USB_DEVICE(0x0cad, 0x9011) }	/* Motorola Solutions TETRA PEI */
++	{ USB_DEVICE(0x0cad, 0x9011) },	/* Motorola Solutions TETRA PEI */ \
++	{ USB_DEVICE(0x0cad, 0x9012) }	/* MTP6550 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+ 
+ /* Novatel Wireless GPS driver */
+diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+index ef69273074ba..a3edb20ea4c3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
++++ b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+@@ -496,6 +496,9 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 	if (!access_ok(VERIFY_WRITE, mr->buffer, mr->buffer_size))
+ 		return -EFAULT;
+ 
++	if (mr->w > 4096 || mr->h > 4096)
++		return -EINVAL;
++
+ 	if (mr->w * mr->h * 3 > mr->buffer_size)
+ 		return -EINVAL;
+ 
+@@ -509,7 +512,7 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 			mr->x, mr->y, mr->w, mr->h);
+ 
+ 	if (r > 0) {
+-		if (copy_to_user(mr->buffer, buf, mr->buffer_size))
++		if (copy_to_user(mr->buffer, buf, r))
+ 			r = -EFAULT;
+ 	}
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 9f1c96caebda..782e7243c5c0 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -746,6 +746,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
+ 	if (crc_offset > (blk_size - sizeof(__le32))) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+ 			"invalid crc_offset: %zu", crc_offset);
+ 		return -EINVAL;
+@@ -753,6 +754,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc = cur_cp_crc(*cp_block);
+ 	if (!f2fs_crc_valid(sbi, crc, *cp_block, crc_offset)) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING, "invalid crc value");
+ 		return -EINVAL;
+ 	}
+@@ -772,14 +774,14 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_1, version);
+ 	if (err)
+-		goto invalid_cp1;
++		return NULL;
+ 	pre_version = *version;
+ 
+ 	cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_2, version);
+ 	if (err)
+-		goto invalid_cp2;
++		goto invalid_cp;
+ 	cur_version = *version;
+ 
+ 	if (cur_version == pre_version) {
+@@ -787,9 +789,8 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 		f2fs_put_page(cp_page_2, 1);
+ 		return cp_page_1;
+ 	}
+-invalid_cp2:
+ 	f2fs_put_page(cp_page_2, 1);
+-invalid_cp1:
++invalid_cp:
+ 	f2fs_put_page(cp_page_1, 1);
+ 	return NULL;
+ }
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index bbd1e357c23d..f4fd2e72add4 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -898,8 +898,22 @@ static struct platform_driver ramoops_driver = {
+ 	},
+ };
+ 
+-static void ramoops_register_dummy(void)
++static inline void ramoops_unregister_dummy(void)
+ {
++	platform_device_unregister(dummy);
++	dummy = NULL;
++
++	kfree(dummy_data);
++	dummy_data = NULL;
++}
++
++static void __init ramoops_register_dummy(void)
++{
++	/*
++	 * Prepare a dummy platform data structure to carry the module
++	 * parameters. If mem_size isn't set, then there are no module
++	 * parameters, and we can skip this.
++	 */
+ 	if (!mem_size)
+ 		return;
+ 
+@@ -932,21 +946,28 @@ static void ramoops_register_dummy(void)
+ 	if (IS_ERR(dummy)) {
+ 		pr_info("could not create platform device: %ld\n",
+ 			PTR_ERR(dummy));
++		dummy = NULL;
++		ramoops_unregister_dummy();
+ 	}
+ }
+ 
+ static int __init ramoops_init(void)
+ {
++	int ret;
++
+ 	ramoops_register_dummy();
+-	return platform_driver_register(&ramoops_driver);
++	ret = platform_driver_register(&ramoops_driver);
++	if (ret != 0)
++		ramoops_unregister_dummy();
++
++	return ret;
+ }
+ late_initcall(ramoops_init);
+ 
+ static void __exit ramoops_exit(void)
+ {
+ 	platform_driver_unregister(&ramoops_driver);
+-	platform_device_unregister(dummy);
+-	kfree(dummy_data);
++	ramoops_unregister_dummy();
+ }
+ module_exit(ramoops_exit);
+ 
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index c5466c70d620..2a82aeeacba5 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1929,6 +1929,9 @@ static struct ubi_volume_desc *open_ubi(const char *name, int mode)
+ 	int dev, vol;
+ 	char *endptr;
+ 
++	if (!name || !*name)
++		return ERR_PTR(-EINVAL);
++
+ 	/* First, try to open using the device node path method */
+ 	ubi = ubi_open_volume_path(name, mode);
+ 	if (!IS_ERR(ubi))
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 36fa6a2a82e3..4ee95d8c8413 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -140,6 +140,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 		       unsigned long addr, unsigned long sz);
+ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+ 			      int write);
+ struct page *follow_huge_pd(struct vm_area_struct *vma,
+@@ -170,6 +172,18 @@ static inline unsigned long hugetlb_total_pages(void)
+ 	return 0;
+ }
+ 
++static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
++					pte_t *ptep)
++{
++	return 0;
++}
++
++static inline void adjust_range_if_pmd_sharing_possible(
++				struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
++
+ #define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n)	({ BUG(); 0; })
+ #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
+ #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 68a5121694ef..40ad93bc9548 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2463,6 +2463,12 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
+ 	return vma;
+ }
+ 
++static inline bool range_in_vma(struct vm_area_struct *vma,
++				unsigned long start, unsigned long end)
++{
++	return (vma && vma->vm_start <= start && end <= vma->vm_end);
++}
++
+ #ifdef CONFIG_MMU
+ pgprot_t vm_get_page_prot(unsigned long vm_flags);
+ void vma_set_page_prot(struct vm_area_struct *vma);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c7b3e34811ec..ae22d93701db 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3940,6 +3940,12 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
+ 		goto out;
+ 	}
+ 
++	/* If this is a pinned event it must be running on this CPU */
++	if (event->attr.pinned && event->oncpu != smp_processor_id()) {
++		ret = -EBUSY;
++		goto out;
++	}
++
+ 	/*
+ 	 * If the event is currently on this CPU, its either a per-task event,
+ 	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 25346bd99364..571875b37453 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2929,7 +2929,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+ 	else
+ 		page_add_file_rmap(new, true);
+ 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
+-	if (vma->vm_flags & VM_LOCKED)
++	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
+ 		mlock_vma_page(new);
+ 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3103099f64fd..f469315a6a0f 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4556,12 +4556,40 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ 	/*
+ 	 * check on proper vm_flags and page table alignment
+ 	 */
+-	if (vma->vm_flags & VM_MAYSHARE &&
+-	    vma->vm_start <= base && end <= vma->vm_end)
++	if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
+ 		return true;
+ 	return false;
+ }
+ 
++/*
++ * Determine if start,end range within vma could be mapped by shared pmd.
++ * If yes, adjust start and end to cover range associated with possible
++ * shared pmd mappings.
++ */
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++	unsigned long check_addr = *start;
++
++	if (!(vma->vm_flags & VM_MAYSHARE))
++		return;
++
++	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
++		unsigned long a_start = check_addr & PUD_MASK;
++		unsigned long a_end = a_start + PUD_SIZE;
++
++		/*
++		 * If sharing is possible, adjust start/end if necessary.
++		 */
++		if (range_in_vma(vma, a_start, a_end)) {
++			if (a_start < *start)
++				*start = a_start;
++			if (a_end > *end)
++				*end = a_end;
++		}
++	}
++}
++
+ /*
+  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
+  * and returns the corresponding pte. While this is not necessary for the
+@@ -4659,6 +4687,11 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+ {
+ 	return 0;
+ }
++
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
+ #define want_pmd_share()	(0)
+ #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 8c0af0f7cab1..2a55289ee9f1 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -275,6 +275,9 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ 		if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
+ 			mlock_vma_page(new);
+ 
++		if (PageTransHuge(page) && PageMlocked(page))
++			clear_page_mlock(page);
++
+ 		/* No need to invalidate - it was non-present before */
+ 		update_mmu_cache(vma, pvmw.address, pvmw.pte);
+ 	}
+diff --git a/mm/rmap.c b/mm/rmap.c
+index eb477809a5c0..1e79fac3186b 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 	}
+ 
+ 	/*
+-	 * We have to assume the worse case ie pmd for invalidation. Note that
+-	 * the page can not be free in this function as call of try_to_unmap()
+-	 * must hold a reference on the page.
++	 * For THP, we have to assume the worse case ie pmd for invalidation.
++	 * For hugetlb, it could be much worse if we need to do pud
++	 * invalidation in the case of pmd sharing.
++	 *
++	 * Note that the page can not be free in this function as call of
++	 * try_to_unmap() must hold a reference on the page.
+ 	 */
+ 	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
++	if (PageHuge(page)) {
++		/*
++		 * If sharing is possible, start and end will be adjusted
++		 * accordingly.
++		 */
++		adjust_range_if_pmd_sharing_possible(vma, &start, &end);
++	}
+ 	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
+ 
+ 	while (page_vma_mapped_walk(&pvmw)) {
+@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+ 		address = pvmw.address;
+ 
++		if (PageHuge(page)) {
++			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++				/*
++				 * huge_pmd_unshare unmapped an entire PMD
++				 * page.  There is no way of knowing exactly
++				 * which PMDs may be cached for this mm, so
++				 * we must flush them all.  start/end were
++				 * already adjusted above to cover this range.
++				 */
++				flush_cache_range(vma, start, end);
++				flush_tlb_range(vma, start, end);
++				mmu_notifier_invalidate_range(mm, start, end);
++
++				/*
++				 * The ref count of the PMD page was dropped
++				 * which is part of the way map counting
++				 * is done for shared PMDs.  Return 'true'
++				 * here.  When there is no other sharing,
++				 * huge_pmd_unshare returns false and we will
++				 * unmap the actual page and drop map count
++				 * to zero.
++				 */
++				page_vma_mapped_walk_done(&pvmw);
++				break;
++			}
++		}
+ 
+ 		if (IS_ENABLED(CONFIG_MIGRATION) &&
+ 		    (flags & TTU_MIGRATION) &&
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 8ba0870ecddd..55a5bb1d773d 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1275,6 +1275,9 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_SMP
+ 	"nr_tlb_remote_flush",
+ 	"nr_tlb_remote_flush_received",
++#else
++	"", /* nr_tlb_remote_flush */
++	"", /* nr_tlb_remote_flush_received */
+ #endif /* CONFIG_SMP */
+ 	"nr_tlb_local_flush_all",
+ 	"nr_tlb_local_flush_one",
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index aa082b71d2e4..c6bbe5b56378 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -427,7 +427,7 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_AP_VLAN:
+ 		/* Keys without a station are used for TX only */
+-		if (key->sta && test_sta_flag(key->sta, WLAN_STA_MFP))
++		if (sta && test_sta_flag(sta, WLAN_STA_MFP))
+ 			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
+ 		break;
+ 	case NL80211_IFTYPE_ADHOC:
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 555e389b7dfa..5d22c058ae23 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1756,7 +1756,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		if (local->ops->wake_tx_queue &&
+ 		    type != NL80211_IFTYPE_AP_VLAN &&
+-		    type != NL80211_IFTYPE_MONITOR)
++		    (type != NL80211_IFTYPE_MONITOR ||
++		     (params->flags & MONITOR_FLAG_ACTIVE)))
+ 			txq_size += sizeof(struct txq_info) +
+ 				    local->hw.txq_data_size;
+ 
+diff --git a/net/rds/ib.h b/net/rds/ib.h
+index a6f4d7d68e95..83ff7c18d691 100644
+--- a/net/rds/ib.h
++++ b/net/rds/ib.h
+@@ -371,7 +371,7 @@ void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc);
+ int rds_ib_recv_init(void);
+ void rds_ib_recv_exit(void);
+ int rds_ib_recv_path(struct rds_conn_path *conn);
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic);
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp);
+ void rds_ib_recv_free_caches(struct rds_ib_connection *ic);
+ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp);
+ void rds_ib_inc_free(struct rds_incoming *inc);
+diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
+index f1684ae6abfd..6a909ea9e8fb 100644
+--- a/net/rds/ib_cm.c
++++ b/net/rds/ib_cm.c
+@@ -949,7 +949,7 @@ int rds_ib_conn_alloc(struct rds_connection *conn, gfp_t gfp)
+ 	if (!ic)
+ 		return -ENOMEM;
+ 
+-	ret = rds_ib_recv_alloc_caches(ic);
++	ret = rds_ib_recv_alloc_caches(ic, gfp);
+ 	if (ret) {
+ 		kfree(ic);
+ 		return ret;
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index b4e421aa9727..918d2e676b9b 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -98,12 +98,12 @@ static void rds_ib_cache_xfer_to_ready(struct rds_ib_refill_cache *cache)
+ 	}
+ }
+ 
+-static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
++static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache, gfp_t gfp)
+ {
+ 	struct rds_ib_cache_head *head;
+ 	int cpu;
+ 
+-	cache->percpu = alloc_percpu(struct rds_ib_cache_head);
++	cache->percpu = alloc_percpu_gfp(struct rds_ib_cache_head, gfp);
+ 	if (!cache->percpu)
+ 	       return -ENOMEM;
+ 
+@@ -118,13 +118,13 @@ static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
+ 	return 0;
+ }
+ 
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic)
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp)
+ {
+ 	int ret;
+ 
+-	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs);
++	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs, gfp);
+ 	if (!ret) {
+-		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags);
++		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags, gfp);
+ 		if (ret)
+ 			free_percpu(ic->i_cache_incs.percpu);
+ 	}
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index a2f76743c73a..82f665728382 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -185,6 +185,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 		return -ENOMEM;
+ 
+ 	buf->sk = msg->dst_sk;
++	__tipc_dump_start(&cb, msg->net);
+ 
+ 	do {
+ 		int rem;
+@@ -216,6 +217,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 	err = 0;
+ 
+ err_out:
++	tipc_dump_done(&cb);
+ 	kfree_skb(buf);
+ 
+ 	if (err == -EMSGSIZE) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index bdb4a9a5a83a..093e16d1b770 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,7 +3233,7 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+@@ -3269,8 +3269,14 @@ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
+ int tipc_dump_start(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
+-	struct net *net = sock_net(cb->skb->sk);
++	return __tipc_dump_start(cb, sock_net(cb->skb->sk));
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net)
++{
++	/* tipc_nl_name_table_dump() uses cb->args[0...3]. */
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_net *tn = tipc_net(net);
+ 
+ 	if (!iter) {
+@@ -3278,17 +3284,16 @@ int tipc_dump_start(struct netlink_callback *cb)
+ 		if (!iter)
+ 			return -ENOMEM;
+ 
+-		cb->args[0] = (long)iter;
++		cb->args[4] = (long)iter;
+ 	}
+ 
+ 	rhashtable_walk_enter(&tn->sk_rht, iter);
+ 	return 0;
+ }
+-EXPORT_SYMBOL(tipc_dump_start);
+ 
+ int tipc_dump_done(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *hti = (void *)cb->args[0];
++	struct rhashtable_iter *hti = (void *)cb->args[4];
+ 
+ 	rhashtable_walk_exit(hti);
+ 	kfree(hti);
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index d43032e26532..5e575f205afe 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -69,5 +69,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
+ int tipc_dump_start(struct netlink_callback *cb);
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net);
+ int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/tools/testing/selftests/x86/test_vdso.c b/tools/testing/selftests/x86/test_vdso.c
+index 235259011704..35edd61d1663 100644
+--- a/tools/testing/selftests/x86/test_vdso.c
++++ b/tools/testing/selftests/x86/test_vdso.c
+@@ -17,6 +17,7 @@
+ #include <errno.h>
+ #include <sched.h>
+ #include <stdbool.h>
++#include <limits.h>
+ 
+ #ifndef SYS_getcpu
+ # ifdef __x86_64__
+@@ -31,6 +32,14 @@
+ 
+ int nerrs = 0;
+ 
++typedef int (*vgettime_t)(clockid_t, struct timespec *);
++
++vgettime_t vdso_clock_gettime;
++
++typedef long (*vgtod_t)(struct timeval *tv, struct timezone *tz);
++
++vgtod_t vdso_gettimeofday;
++
+ typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
+ 
+ getcpu_t vgetcpu;
+@@ -95,6 +104,15 @@ static void fill_function_pointers()
+ 		printf("Warning: failed to find getcpu in vDSO\n");
+ 
+ 	vgetcpu = (getcpu_t) vsyscall_getcpu();
++
++	vdso_clock_gettime = (vgettime_t)dlsym(vdso, "__vdso_clock_gettime");
++	if (!vdso_clock_gettime)
++		printf("Warning: failed to find clock_gettime in vDSO\n");
++
++	vdso_gettimeofday = (vgtod_t)dlsym(vdso, "__vdso_gettimeofday");
++	if (!vdso_gettimeofday)
++		printf("Warning: failed to find gettimeofday in vDSO\n");
++
+ }
+ 
+ static long sys_getcpu(unsigned * cpu, unsigned * node,
+@@ -103,6 +121,16 @@ static long sys_getcpu(unsigned * cpu, unsigned * node,
+ 	return syscall(__NR_getcpu, cpu, node, cache);
+ }
+ 
++static inline int sys_clock_gettime(clockid_t id, struct timespec *ts)
++{
++	return syscall(__NR_clock_gettime, id, ts);
++}
++
++static inline int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
++{
++	return syscall(__NR_gettimeofday, tv, tz);
++}
++
+ static void test_getcpu(void)
+ {
+ 	printf("[RUN]\tTesting getcpu...\n");
+@@ -155,10 +183,154 @@ static void test_getcpu(void)
+ 	}
+ }
+ 
++static bool ts_leq(const struct timespec *a, const struct timespec *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_nsec <= b->tv_nsec;
++}
++
++static bool tv_leq(const struct timeval *a, const struct timeval *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_usec <= b->tv_usec;
++}
++
++static char const * const clocknames[] = {
++	[0] = "CLOCK_REALTIME",
++	[1] = "CLOCK_MONOTONIC",
++	[2] = "CLOCK_PROCESS_CPUTIME_ID",
++	[3] = "CLOCK_THREAD_CPUTIME_ID",
++	[4] = "CLOCK_MONOTONIC_RAW",
++	[5] = "CLOCK_REALTIME_COARSE",
++	[6] = "CLOCK_MONOTONIC_COARSE",
++	[7] = "CLOCK_BOOTTIME",
++	[8] = "CLOCK_REALTIME_ALARM",
++	[9] = "CLOCK_BOOTTIME_ALARM",
++	[10] = "CLOCK_SGI_CYCLE",
++	[11] = "CLOCK_TAI",
++};
++
++static void test_one_clock_gettime(int clock, const char *name)
++{
++	struct timespec start, vdso, end;
++	int vdso_ret, end_ret;
++
++	printf("[RUN]\tTesting clock_gettime for clock %s (%d)...\n", name, clock);
++
++	if (sys_clock_gettime(clock, &start) < 0) {
++		if (errno == EINVAL) {
++			vdso_ret = vdso_clock_gettime(clock, &vdso);
++			if (vdso_ret == -EINVAL) {
++				printf("[OK]\tNo such clock.\n");
++			} else {
++				printf("[FAIL]\tNo such clock, but __vdso_clock_gettime returned %d\n", vdso_ret);
++				nerrs++;
++			}
++		} else {
++			printf("[WARN]\t clock_gettime(%d) syscall returned error %d\n", clock, errno);
++		}
++		return;
++	}
++
++	vdso_ret = vdso_clock_gettime(clock, &vdso);
++	end_ret = sys_clock_gettime(clock, &end);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%09ld %llu.%09ld %llu.%09ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_nsec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_nsec,
++	       (unsigned long long)end.tv_sec, end.tv_nsec);
++
++	if (!ts_leq(&start, &vdso) || !ts_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++}
++
++static void test_clock_gettime(void)
++{
++	for (int clock = 0; clock < sizeof(clocknames) / sizeof(clocknames[0]);
++	     clock++) {
++		test_one_clock_gettime(clock, clocknames[clock]);
++	}
++
++	/* Also test some invalid clock ids */
++	test_one_clock_gettime(-1, "invalid");
++	test_one_clock_gettime(INT_MIN, "invalid");
++	test_one_clock_gettime(INT_MAX, "invalid");
++}
++
++static void test_gettimeofday(void)
++{
++	struct timeval start, vdso, end;
++	struct timezone sys_tz, vdso_tz;
++	int vdso_ret, end_ret;
++
++	if (!vdso_gettimeofday)
++		return;
++
++	printf("[RUN]\tTesting gettimeofday...\n");
++
++	if (sys_gettimeofday(&start, &sys_tz) < 0) {
++		printf("[FAIL]\tsys_gettimeofday failed (%d)\n", errno);
++		nerrs++;
++		return;
++	}
++
++	vdso_ret = vdso_gettimeofday(&vdso, &vdso_tz);
++	end_ret = sys_gettimeofday(&end, NULL);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%06ld %llu.%06ld %llu.%06ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_usec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_usec,
++	       (unsigned long long)end.tv_sec, end.tv_usec);
++
++	if (!tv_leq(&start, &vdso) || !tv_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++
++	if (sys_tz.tz_minuteswest == vdso_tz.tz_minuteswest &&
++	    sys_tz.tz_dsttime == vdso_tz.tz_dsttime) {
++		printf("[OK]\ttimezones match: minuteswest=%d, dsttime=%d\n",
++		       sys_tz.tz_minuteswest, sys_tz.tz_dsttime);
++	} else {
++		printf("[FAIL]\ttimezones do not match\n");
++		nerrs++;
++	}
++
++	/* And make sure that passing NULL for tz doesn't crash. */
++	vdso_gettimeofday(&vdso, NULL);
++}
++
+ int main(int argc, char **argv)
+ {
+ 	fill_function_pointers();
+ 
++	test_clock_gettime();
++	test_gettimeofday();
++
++	/*
++	 * Test getcpu() last so that, if something goes wrong setting affinity,
++	 * we still run the other tests.
++	 */
+ 	test_getcpu();
+ 
+ 	return nerrs ? 1 : 0;


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     6142f13e3ae3341aef28db2fd108ff53a7dba30a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 18 10:27:08 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:27 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6142f13e

Linux patch 4.18.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-4.18.15.patch | 5433 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5437 insertions(+)

diff --git a/0000_README b/0000_README
index 6d1cb28..5676b13 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-4.18.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.14
 
+Patch:  1014_linux-4.18.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-4.18.15.patch b/1014_linux-4.18.15.patch
new file mode 100644
index 0000000..5477884
--- /dev/null
+++ b/1014_linux-4.18.15.patch
@@ -0,0 +1,5433 @@
+diff --git a/Documentation/devicetree/bindings/net/macb.txt b/Documentation/devicetree/bindings/net/macb.txt
+index 457d5ae16f23..3e17ac1d5d58 100644
+--- a/Documentation/devicetree/bindings/net/macb.txt
++++ b/Documentation/devicetree/bindings/net/macb.txt
+@@ -10,6 +10,7 @@ Required properties:
+   Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
+   the Cadence GEM, or the generic form: "cdns,gem".
+   Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
++  Use "atmel,sama5d3-macb" for the 10/100Mbit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
+   Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
+diff --git a/Makefile b/Makefile
+index 5274f8ae6b44..968eb96a0553 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -298,19 +298,7 @@ KERNELRELEASE = $(shell cat include/config/kernel.release 2> /dev/null)
+ KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
+ export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION
+ 
+-# SUBARCH tells the usermode build what the underlying arch is.  That is set
+-# first, and if a usermode build is happening, the "ARCH=um" on the command
+-# line overrides the setting of ARCH below.  If a native build is happening,
+-# then ARCH is assigned, getting whatever value it gets normally, and
+-# SUBARCH is subsequently ignored.
+-
+-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+-				  -e s/sun4u/sparc64/ \
+-				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
+-				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+-				  -e s/riscv.*/riscv/)
++include scripts/subarch.include
+ 
+ # Cross compiling and selecting different set of gcc/bin-utils
+ # ---------------------------------------------------------------------------
+diff --git a/arch/arm/boot/dts/sama5d3_emac.dtsi b/arch/arm/boot/dts/sama5d3_emac.dtsi
+index 7cb235ef0fb6..6e9e1c2f9def 100644
+--- a/arch/arm/boot/dts/sama5d3_emac.dtsi
++++ b/arch/arm/boot/dts/sama5d3_emac.dtsi
+@@ -41,7 +41,7 @@
+ 			};
+ 
+ 			macb1: ethernet@f802c000 {
+-				compatible = "cdns,at91sam9260-macb", "cdns,macb";
++				compatible = "atmel,sama5d3-macb", "cdns,at91sam9260-macb", "cdns,macb";
+ 				reg = <0xf802c000 0x100>;
+ 				interrupts = <35 IRQ_TYPE_LEVEL_HIGH 3>;
+ 				pinctrl-names = "default";
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index dd5b4fab114f..b7c8a718544c 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -823,6 +823,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
+ 	return 0;
+ }
+ 
++static int armv8pmu_filter_match(struct perf_event *event)
++{
++	unsigned long evtype = event->hw.config_base & ARMV8_PMU_EVTYPE_EVENT;
++	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
++}
++
+ static void armv8pmu_reset(void *info)
+ {
+ 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
+@@ -968,6 +974,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
+ 	cpu_pmu->reset			= armv8pmu_reset,
+ 	cpu_pmu->max_period		= (1LLU << 32) - 1,
+ 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
++	cpu_pmu->filter_match		= armv8pmu_filter_match;
+ 
+ 	return 0;
+ }
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index b2fa62922d88..49d6046ca1d0 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/atomic.h>
+ #include <linux/cpumask.h>
++#include <linux/sizes.h>
+ #include <linux/threads.h>
+ 
+ #include <asm/cachectl.h>
+@@ -80,11 +81,10 @@ extern unsigned int vced_count, vcei_count;
+ 
+ #endif
+ 
+-/*
+- * One page above the stack is used for branch delay slot "emulation".
+- * See dsemul.c for details.
+- */
+-#define STACK_TOP	((TASK_SIZE & PAGE_MASK) - PAGE_SIZE)
++#define VDSO_RANDOMIZE_SIZE	(TASK_IS_32BIT_ADDR ? SZ_1M : SZ_256M)
++
++extern unsigned long mips_stack_top(void);
++#define STACK_TOP		mips_stack_top()
+ 
+ /*
+  * This decides where the kernel will search for a free chunk of vm
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 9670e70139fd..1efd1798532b 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -31,6 +31,7 @@
+ #include <linux/prctl.h>
+ #include <linux/nmi.h>
+ 
++#include <asm/abi.h>
+ #include <asm/asm.h>
+ #include <asm/bootinfo.h>
+ #include <asm/cpu.h>
+@@ -38,6 +39,7 @@
+ #include <asm/dsp.h>
+ #include <asm/fpu.h>
+ #include <asm/irq.h>
++#include <asm/mips-cps.h>
+ #include <asm/msa.h>
+ #include <asm/pgtable.h>
+ #include <asm/mipsregs.h>
+@@ -644,6 +646,29 @@ out:
+ 	return pc;
+ }
+ 
++unsigned long mips_stack_top(void)
++{
++	unsigned long top = TASK_SIZE & PAGE_MASK;
++
++	/* One page for branch delay slot "emulation" */
++	top -= PAGE_SIZE;
++
++	/* Space for the VDSO, data page & GIC user page */
++	top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++	top -= PAGE_SIZE;
++	top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++	/* Space for cache colour alignment */
++	if (cpu_has_dc_aliases)
++		top -= shm_align_mask + 1;
++
++	/* Space to randomize the VDSO base */
++	if (current->flags & PF_RANDOMIZE)
++		top -= VDSO_RANDOMIZE_SIZE;
++
++	return top;
++}
++
+ /*
+  * Don't forget that the stack pointer must be aligned on a 8 bytes
+  * boundary for 32-bits ABI and 16 bytes for 64-bits ABI.
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 2c96c0c68116..6138224a96b1 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -835,6 +835,34 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	struct memblock_region *reg;
+ 	extern void plat_mem_setup(void);
+ 
++	/*
++	 * Initialize boot_command_line to an innocuous but non-empty string in
++	 * order to prevent early_init_dt_scan_chosen() from copying
++	 * CONFIG_CMDLINE into it without our knowledge. We handle
++	 * CONFIG_CMDLINE ourselves below & don't want to duplicate its
++	 * content because repeating arguments can be problematic.
++	 */
++	strlcpy(boot_command_line, " ", COMMAND_LINE_SIZE);
++
++	/* call board setup routine */
++	plat_mem_setup();
++
++	/*
++	 * Make sure all kernel memory is in the maps.  The "UP" and
++	 * "DOWN" are opposite for initdata since if it crosses over
++	 * into another memory section you don't want that to be
++	 * freed when the initdata is freed.
++	 */
++	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
++			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
++			 BOOT_MEM_RAM);
++	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
++			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
++			 BOOT_MEM_INIT_RAM);
++
++	pr_info("Determined physical RAM map:\n");
++	print_memory_map();
++
+ #if defined(CONFIG_CMDLINE_BOOL) && defined(CONFIG_CMDLINE_OVERRIDE)
+ 	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+ #else
+@@ -862,26 +890,6 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	}
+ #endif
+ #endif
+-
+-	/* call board setup routine */
+-	plat_mem_setup();
+-
+-	/*
+-	 * Make sure all kernel memory is in the maps.  The "UP" and
+-	 * "DOWN" are opposite for initdata since if it crosses over
+-	 * into another memory section you don't want that to be
+-	 * freed when the initdata is freed.
+-	 */
+-	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
+-			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
+-			 BOOT_MEM_RAM);
+-	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
+-			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
+-			 BOOT_MEM_INIT_RAM);
+-
+-	pr_info("Determined physical RAM map:\n");
+-	print_memory_map();
+-
+ 	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ 
+ 	*cmdline_p = command_line;
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 8f845f6e5f42..48a9c6b90e07 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -15,6 +15,7 @@
+ #include <linux/ioport.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
++#include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/timekeeper_internal.h>
+@@ -97,6 +98,21 @@ void update_vsyscall_tz(void)
+ 	}
+ }
+ 
++static unsigned long vdso_base(void)
++{
++	unsigned long base;
++
++	/* Skip the delay slot emulation page */
++	base = STACK_TOP + PAGE_SIZE;
++
++	if (current->flags & PF_RANDOMIZE) {
++		base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
++		base = PAGE_ALIGN(base);
++	}
++
++	return base;
++}
++
+ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ {
+ 	struct mips_vdso_image *image = current->thread.abi->vdso;
+@@ -137,7 +153,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	if (cpu_has_dc_aliases)
+ 		size += shm_align_mask + 1;
+ 
+-	base = get_unmapped_area(NULL, 0, size, 0, 0);
++	base = get_unmapped_area(NULL, vdso_base(), size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 42aafba7a308..9532dff28091 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -104,7 +104,7 @@
+  */
+ #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ /*
+  * user access blocked by key
+  */
+@@ -122,7 +122,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ 
+ #define H_PTE_PKEY  (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
+ 		     H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 7efc42538ccf..26d927bf2fdb 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -538,8 +538,8 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 				   unsigned long ea, unsigned long dsisr)
+ {
+ 	struct kvm *kvm = vcpu->kvm;
+-	unsigned long mmu_seq, pte_size;
+-	unsigned long gpa, gfn, hva, pfn;
++	unsigned long mmu_seq;
++	unsigned long gpa, gfn, hva;
+ 	struct kvm_memory_slot *memslot;
+ 	struct page *page = NULL;
+ 	long ret;
+@@ -636,9 +636,10 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 	 */
+ 	hva = gfn_to_hva_memslot(memslot, gfn);
+ 	if (upgrade_p && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+-		pfn = page_to_pfn(page);
+ 		upgrade_write = true;
+ 	} else {
++		unsigned long pfn;
++
+ 		/* Call KVM generic code to do the slow-path check */
+ 		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+ 					   writing, upgrade_p);
+@@ -652,63 +653,55 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 		}
+ 	}
+ 
+-	/* See if we can insert a 1GB or 2MB large PTE here */
+-	level = 0;
+-	if (page && PageCompound(page)) {
+-		pte_size = PAGE_SIZE << compound_order(compound_head(page));
+-		if (pte_size >= PUD_SIZE &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-			pfn &= ~((PUD_SIZE >> PAGE_SHIFT) - 1);
+-		} else if (pte_size >= PMD_SIZE &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-			pfn &= ~((PMD_SIZE >> PAGE_SHIFT) - 1);
+-		}
+-	}
+-
+ 	/*
+-	 * Compute the PTE value that we need to insert.
++	 * Read the PTE from the process' radix tree and use that
++	 * so we get the shift and attribute bits.
+ 	 */
+-	if (page) {
+-		pgflags = _PAGE_READ | _PAGE_EXEC | _PAGE_PRESENT | _PAGE_PTE |
+-			_PAGE_ACCESSED;
+-		if (writing || upgrade_write)
+-			pgflags |= _PAGE_WRITE | _PAGE_DIRTY;
+-		pte = pfn_pte(pfn, __pgprot(pgflags));
+-	} else {
+-		/*
+-		 * Read the PTE from the process' radix tree and use that
+-		 * so we get the attribute bits.
+-		 */
+-		local_irq_disable();
+-		ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
+-		pte = *ptep;
++	local_irq_disable();
++	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
++	/*
++	 * If the PTE disappeared temporarily due to a THP
++	 * collapse, just return and let the guest try again.
++	 */
++	if (!ptep) {
+ 		local_irq_enable();
+-		if (shift == PUD_SHIFT &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-		} else if (shift == PMD_SHIFT &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-		} else if (shift && shift != PAGE_SHIFT) {
+-			/* Adjust PFN */
+-			unsigned long mask = (1ul << shift) - PAGE_SIZE;
+-			pte = __pte(pte_val(pte) | (hva & mask));
+-		}
+-		pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
+-		if (writing || upgrade_write) {
+-			if (pte_val(pte) & _PAGE_WRITE)
+-				pte = __pte(pte_val(pte) | _PAGE_DIRTY);
+-		} else {
+-			pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++		if (page)
++			put_page(page);
++		return RESUME_GUEST;
++	}
++	pte = *ptep;
++	local_irq_enable();
++
++	/* Get pte level from shift/size */
++	if (shift == PUD_SHIFT &&
++	    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
++	    (hva & (PUD_SIZE - PAGE_SIZE))) {
++		level = 2;
++	} else if (shift == PMD_SHIFT &&
++		   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
++		   (hva & (PMD_SIZE - PAGE_SIZE))) {
++		level = 1;
++	} else {
++		level = 0;
++		if (shift > PAGE_SHIFT) {
++			/*
++			 * If the pte maps more than one page, bring over
++			 * bits from the virtual address to get the real
++			 * address of the specific single page we want.
++			 */
++			unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
++			pte = __pte(pte_val(pte) | (hva & rpnmask));
+ 		}
+ 	}
+ 
++	pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
++	if (writing || upgrade_write) {
++		if (pte_val(pte) & _PAGE_WRITE)
++			pte = __pte(pte_val(pte) | _PAGE_DIRTY);
++	} else {
++		pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++	}
++
+ 	/* Allocate space in the tree and write the PTE */
+ 	ret = kvmppc_create_pte(kvm, pte, gpa, level, mmu_seq);
+ 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 99fff853c944..a558381b016b 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -123,7 +123,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+ 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+ 
+ /*
+diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
+index c535c2fdea13..9bba9737ee0b 100644
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -377,5 +377,6 @@ struct kvm_sync_regs {
+ 
+ #define KVM_X86_QUIRK_LINT0_REENABLED	(1 << 0)
+ #define KVM_X86_QUIRK_CD_NW_CLEARED	(1 << 1)
++#define KVM_X86_QUIRK_LAPIC_MMIO_HOLE	(1 << 2)
+ 
+ #endif /* _ASM_X86_KVM_H */
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index b5cd8465d44f..83c4e8cc7eb9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1291,9 +1291,8 @@ EXPORT_SYMBOL_GPL(kvm_lapic_reg_read);
+ 
+ static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
+ {
+-	return kvm_apic_hw_enabled(apic) &&
+-	    addr >= apic->base_address &&
+-	    addr < apic->base_address + LAPIC_MMIO_LENGTH;
++	return addr >= apic->base_address &&
++		addr < apic->base_address + LAPIC_MMIO_LENGTH;
+ }
+ 
+ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+@@ -1305,6 +1304,15 @@ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		memset(data, 0xff, len);
++		return 0;
++	}
++
+ 	kvm_lapic_reg_read(apic, offset, len, data);
+ 
+ 	return 0;
+@@ -1864,6 +1872,14 @@ static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		return 0;
++	}
++
+ 	/*
+ 	 * APIC register must be aligned on 128-bits boundary.
+ 	 * 32/64/128 bits registers must be accessed thru 32 bits.
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 963bb0309e25..ea6238ed5c0e 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -543,6 +543,8 @@ static void hci_uart_tty_close(struct tty_struct *tty)
+ 	}
+ 	clear_bit(HCI_UART_PROTO_SET, &hu->flags);
+ 
++	percpu_free_rwsem(&hu->proto_lock);
++
+ 	kfree(hu);
+ }
+ 
+diff --git a/drivers/clk/x86/clk-pmc-atom.c b/drivers/clk/x86/clk-pmc-atom.c
+index 08ef69945ffb..d977193842df 100644
+--- a/drivers/clk/x86/clk-pmc-atom.c
++++ b/drivers/clk/x86/clk-pmc-atom.c
+@@ -55,6 +55,7 @@ struct clk_plt_data {
+ 	u8 nparents;
+ 	struct clk_plt *clks[PMC_CLK_NUM];
+ 	struct clk_lookup *mclk_lookup;
++	struct clk_lookup *ether_clk_lookup;
+ };
+ 
+ /* Return an index in parent table */
+@@ -186,13 +187,6 @@ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
+ 	pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
+ 	spin_lock_init(&pclk->lock);
+ 
+-	/*
+-	 * If the clock was already enabled by the firmware mark it as critical
+-	 * to avoid it being gated by the clock framework if no driver owns it.
+-	 */
+-	if (plt_clk_is_enabled(&pclk->hw))
+-		init.flags |= CLK_IS_CRITICAL;
+-
+ 	ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
+ 	if (ret) {
+ 		pclk = ERR_PTR(ret);
+@@ -351,11 +345,20 @@ static int plt_clk_probe(struct platform_device *pdev)
+ 		goto err_unreg_clk_plt;
+ 	}
+ 
++	data->ether_clk_lookup = clkdev_hw_create(&data->clks[4]->hw,
++						  "ether_clk", NULL);
++	if (!data->ether_clk_lookup) {
++		err = -ENOMEM;
++		goto err_drop_mclk;
++	}
++
+ 	plt_clk_free_parent_names_loop(parent_names, data->nparents);
+ 
+ 	platform_set_drvdata(pdev, data);
+ 	return 0;
+ 
++err_drop_mclk:
++	clkdev_drop(data->mclk_lookup);
+ err_unreg_clk_plt:
+ 	plt_clk_unregister_loop(data, i);
+ 	plt_clk_unregister_parents(data);
+@@ -369,6 +372,7 @@ static int plt_clk_remove(struct platform_device *pdev)
+ 
+ 	data = platform_get_drvdata(pdev);
+ 
++	clkdev_drop(data->ether_clk_lookup);
+ 	clkdev_drop(data->mclk_lookup);
+ 	plt_clk_unregister_loop(data, PMC_CLK_NUM);
+ 	plt_clk_unregister_parents(data);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 305143fcc1ce..1ac7933cccc5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -245,7 +245,7 @@ int amdgpu_amdkfd_resume(struct amdgpu_device *adev)
+ 
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr)
++			void **cpu_ptr, bool mqd_gfx9)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
+ 	struct amdgpu_bo *bo = NULL;
+@@ -261,6 +261,10 @@ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 	bp.flags = AMDGPU_GEM_CREATE_CPU_GTT_USWC;
+ 	bp.type = ttm_bo_type_kernel;
+ 	bp.resv = NULL;
++
++	if (mqd_gfx9)
++		bp.flags |= AMDGPU_GEM_CREATE_MQD_GFX9;
++
+ 	r = amdgpu_bo_create(adev, &bp, &bo);
+ 	if (r) {
+ 		dev_err(adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index a8418a3f4e9d..e3cf1c9fb3db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -129,7 +129,7 @@ bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid);
+ /* Shared API */
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr);
++			void **cpu_ptr, bool mqd_gfx9);
+ void free_gtt_mem(struct kgd_dev *kgd, void *mem_obj);
+ void get_local_mem_info(struct kgd_dev *kgd,
+ 			struct kfd_local_mem_info *mem_info);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+index ea79908dac4c..29a260e4aefe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+@@ -677,7 +677,7 @@ static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
+ 
+ 	while (true) {
+ 		temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
+-		if (temp & SDMA0_STATUS_REG__RB_CMD_IDLE__SHIFT)
++		if (temp & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
+ 			break;
+ 		if (time_after(jiffies, end_jiffies))
+ 			return -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 7ee6cec2c060..6881b5a9275f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -423,7 +423,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 
+ 	if (kfd->kfd2kgd->init_gtt_mem_allocation(
+ 			kfd->kgd, size, &kfd->gtt_mem,
+-			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)){
++			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr,
++			false)) {
+ 		dev_err(kfd_device, "Could not allocate %d bytes\n", size);
+ 		goto out;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+index c71817963eea..66c2f856d922 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+@@ -62,9 +62,20 @@ int kfd_iommu_device_init(struct kfd_dev *kfd)
+ 	struct amd_iommu_device_info iommu_info;
+ 	unsigned int pasid_limit;
+ 	int err;
++	struct kfd_topology_device *top_dev;
+ 
+-	if (!kfd->device_info->needs_iommu_device)
++	top_dev = kfd_topology_device_by_id(kfd->id);
++
++	/*
++	 * Overwrite ATS capability according to needs_iommu_device to fix
++	 * potential missing corresponding bit in CRAT of BIOS.
++	 */
++	if (!kfd->device_info->needs_iommu_device) {
++		top_dev->node_props.capability &= ~HSA_CAP_ATS_PRESENT;
+ 		return 0;
++	}
++
++	top_dev->node_props.capability |= HSA_CAP_ATS_PRESENT;
+ 
+ 	iommu_info.flags = 0;
+ 	err = amd_iommu_device_info(kfd->pdev, &iommu_info);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 684054ff02cd..8da079cc6fb9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -63,7 +63,7 @@ static int init_mqd(struct mqd_manager *mm, void **mqd,
+ 				ALIGN(sizeof(struct v9_mqd), PAGE_SIZE),
+ 			&((*mqd_mem_obj)->gtt_mem),
+ 			&((*mqd_mem_obj)->gpu_addr),
+-			(void *)&((*mqd_mem_obj)->cpu_ptr));
++			(void *)&((*mqd_mem_obj)->cpu_ptr), true);
+ 	} else
+ 		retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct v9_mqd),
+ 				mqd_mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 5e3990bb4c4b..c4de9b2baf1c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -796,6 +796,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu);
+ int kfd_topology_remove_device(struct kfd_dev *gpu);
+ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 						uint32_t proximity_domain);
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev);
+ int kfd_topology_enum_kfd_devices(uint8_t idx, struct kfd_dev **kdev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index bc95d4dfee2e..80f5db4ef75f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -63,22 +63,33 @@ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 	return device;
+ }
+ 
+-struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id)
+ {
+-	struct kfd_topology_device *top_dev;
+-	struct kfd_dev *device = NULL;
++	struct kfd_topology_device *top_dev = NULL;
++	struct kfd_topology_device *ret = NULL;
+ 
+ 	down_read(&topology_lock);
+ 
+ 	list_for_each_entry(top_dev, &topology_device_list, list)
+ 		if (top_dev->gpu_id == gpu_id) {
+-			device = top_dev->gpu;
++			ret = top_dev;
+ 			break;
+ 		}
+ 
+ 	up_read(&topology_lock);
+ 
+-	return device;
++	return ret;
++}
++
++struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++{
++	struct kfd_topology_device *top_dev;
++
++	top_dev = kfd_topology_device_by_id(gpu_id);
++	if (!top_dev)
++		return NULL;
++
++	return top_dev->gpu;
+ }
+ 
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev)
+diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+index 5733fbee07f7..f56b7553e5ed 100644
+--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
++++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+@@ -266,7 +266,7 @@ struct tile_config {
+ struct kfd2kgd_calls {
+ 	int (*init_gtt_mem_allocation)(struct kgd_dev *kgd, size_t size,
+ 					void **mem_obj, uint64_t *gpu_addr,
+-					void **cpu_ptr);
++					void **cpu_ptr, bool mqd_gfx9);
+ 
+ 	void (*free_gtt_mem)(struct kgd_dev *kgd, void *mem_obj);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 7a12d75e5157..c3c8c84da113 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -875,9 +875,22 @@ static enum drm_connector_status
+ nv50_mstc_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct nv50_mstc *mstc = nv50_mstc(connector);
++	enum drm_connector_status conn_status;
++	int ret;
++
+ 	if (!mstc->port)
+ 		return connector_status_disconnected;
+-	return drm_dp_mst_detect_port(connector, mstc->port->mgr, mstc->port);
++
++	ret = pm_runtime_get_sync(connector->dev->dev);
++	if (ret < 0 && ret != -EACCES)
++		return connector_status_disconnected;
++
++	conn_status = drm_dp_mst_detect_port(connector, mstc->port->mgr,
++					     mstc->port);
++
++	pm_runtime_mark_last_busy(connector->dev->dev);
++	pm_runtime_put_autosuspend(connector->dev->dev);
++	return conn_status;
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/pl111/pl111_vexpress.c b/drivers/gpu/drm/pl111/pl111_vexpress.c
+index a534b225e31b..5fa0441bb6df 100644
+--- a/drivers/gpu/drm/pl111/pl111_vexpress.c
++++ b/drivers/gpu/drm/pl111/pl111_vexpress.c
+@@ -111,7 +111,8 @@ static int vexpress_muxfpga_probe(struct platform_device *pdev)
+ }
+ 
+ static const struct of_device_id vexpress_muxfpga_match[] = {
+-	{ .compatible = "arm,vexpress-muxfpga", }
++	{ .compatible = "arm,vexpress-muxfpga", },
++	{}
+ };
+ 
+ static struct platform_driver vexpress_muxfpga_driver = {
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index b89e8379d898..8859f5572885 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -207,8 +207,6 @@ superio_exit(int ioreg)
+ 
+ #define NUM_FAN		7
+ 
+-#define TEMP_SOURCE_VIRTUAL	0x1f
+-
+ /* Common and NCT6775 specific data */
+ 
+ /* Voltage min/max registers for nr=7..14 are in bank 5 */
+@@ -299,8 +297,9 @@ static const u16 NCT6775_REG_PWM_READ[] = {
+ 
+ static const u16 NCT6775_REG_FAN[] = { 0x630, 0x632, 0x634, 0x636, 0x638 };
+ static const u16 NCT6775_REG_FAN_MIN[] = { 0x3b, 0x3c, 0x3d };
+-static const u16 NCT6775_REG_FAN_PULSES[] = { 0x641, 0x642, 0x643, 0x644, 0 };
+-static const u16 NCT6775_FAN_PULSE_SHIFT[] = { 0, 0, 0, 0, 0, 0 };
++static const u16 NCT6775_REG_FAN_PULSES[NUM_FAN] = {
++	0x641, 0x642, 0x643, 0x644 };
++static const u16 NCT6775_FAN_PULSE_SHIFT[NUM_FAN] = { };
+ 
+ static const u16 NCT6775_REG_TEMP[] = {
+ 	0x27, 0x150, 0x250, 0x62b, 0x62c, 0x62d };
+@@ -373,6 +372,7 @@ static const char *const nct6775_temp_label[] = {
+ };
+ 
+ #define NCT6775_TEMP_MASK	0x001ffffe
++#define NCT6775_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6775_REG_TEMP_ALTERNATE[32] = {
+ 	[13] = 0x661,
+@@ -425,8 +425,8 @@ static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 };
+ 
+ static const u16 NCT6776_REG_FAN_MIN[] = {
+ 	0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
+-static const u16 NCT6776_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++static const u16 NCT6776_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6776_REG_WEIGHT_DUTY_BASE[] = {
+ 	0x13e, 0x23e, 0x33e, 0x83e, 0x93e, 0xa3e };
+@@ -461,6 +461,7 @@ static const char *const nct6776_temp_label[] = {
+ };
+ 
+ #define NCT6776_TEMP_MASK	0x007ffffe
++#define NCT6776_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6776_REG_TEMP_ALTERNATE[32] = {
+ 	[14] = 0x401,
+@@ -501,9 +502,9 @@ static const s8 NCT6779_BEEP_BITS[] = {
+ 	30, 31 };			/* intrusion0, intrusion1 */
+ 
+ static const u16 NCT6779_REG_FAN[] = {
+-	0x4b0, 0x4b2, 0x4b4, 0x4b6, 0x4b8, 0x4ba, 0x660 };
+-static const u16 NCT6779_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++	0x4c0, 0x4c2, 0x4c4, 0x4c6, 0x4c8, 0x4ca, 0x4ce };
++static const u16 NCT6779_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6779_REG_CRITICAL_PWM_ENABLE[] = {
+ 	0x136, 0x236, 0x336, 0x836, 0x936, 0xa36, 0xb36 };
+@@ -559,7 +560,9 @@ static const char *const nct6779_temp_label[] = {
+ };
+ 
+ #define NCT6779_TEMP_MASK	0x07ffff7e
++#define NCT6779_VIRT_TEMP_MASK	0x00000000
+ #define NCT6791_TEMP_MASK	0x87ffff7e
++#define NCT6791_VIRT_TEMP_MASK	0x80000000
+ 
+ static const u16 NCT6779_REG_TEMP_ALTERNATE[32]
+ 	= { 0x490, 0x491, 0x492, 0x493, 0x494, 0x495, 0, 0,
+@@ -638,6 +641,7 @@ static const char *const nct6792_temp_label[] = {
+ };
+ 
+ #define NCT6792_TEMP_MASK	0x9fffff7e
++#define NCT6792_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6793_temp_label[] = {
+ 	"",
+@@ -675,6 +679,7 @@ static const char *const nct6793_temp_label[] = {
+ };
+ 
+ #define NCT6793_TEMP_MASK	0xbfff037e
++#define NCT6793_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6795_temp_label[] = {
+ 	"",
+@@ -712,6 +717,7 @@ static const char *const nct6795_temp_label[] = {
+ };
+ 
+ #define NCT6795_TEMP_MASK	0xbfffff7e
++#define NCT6795_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6796_temp_label[] = {
+ 	"",
+@@ -724,8 +730,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"AUXTIN4",
+ 	"SMBUSMASTER 0",
+ 	"SMBUSMASTER 1",
+-	"",
+-	"",
++	"Virtual_TEMP",
++	"Virtual_TEMP",
+ 	"",
+ 	"",
+ 	"",
+@@ -748,7 +754,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"Virtual_TEMP"
+ };
+ 
+-#define NCT6796_TEMP_MASK	0xbfff03fe
++#define NCT6796_TEMP_MASK	0xbfff0ffe
++#define NCT6796_VIRT_TEMP_MASK	0x80000c00
+ 
+ /* NCT6102D/NCT6106D specific data */
+ 
+@@ -779,8 +786,8 @@ static const u16 NCT6106_REG_TEMP_CONFIG[] = {
+ 
+ static const u16 NCT6106_REG_FAN[] = { 0x20, 0x22, 0x24 };
+ static const u16 NCT6106_REG_FAN_MIN[] = { 0xe0, 0xe2, 0xe4 };
+-static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6, 0, 0 };
+-static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4, 0, 0 };
++static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6 };
++static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4 };
+ 
+ static const u8 NCT6106_REG_PWM_MODE[] = { 0xf3, 0xf3, 0xf3 };
+ static const u8 NCT6106_PWM_MODE_MASK[] = { 0x01, 0x02, 0x04 };
+@@ -917,6 +924,11 @@ static unsigned int fan_from_reg16(u16 reg, unsigned int divreg)
+ 	return 1350000U / (reg << divreg);
+ }
+ 
++static unsigned int fan_from_reg_rpm(u16 reg, unsigned int divreg)
++{
++	return reg;
++}
++
+ static u16 fan_to_reg(u32 fan, unsigned int divreg)
+ {
+ 	if (!fan)
+@@ -969,6 +981,7 @@ struct nct6775_data {
+ 	u16 reg_temp_config[NUM_TEMP];
+ 	const char * const *temp_label;
+ 	u32 temp_mask;
++	u32 virt_temp_mask;
+ 
+ 	u16 REG_CONFIG;
+ 	u16 REG_VBAT;
+@@ -1276,11 +1289,11 @@ static bool is_word_sized(struct nct6775_data *data, u16 reg)
+ 	case nct6795:
+ 	case nct6796:
+ 		return reg == 0x150 || reg == 0x153 || reg == 0x155 ||
+-		  ((reg & 0xfff0) == 0x4b0 && (reg & 0x000f) < 0x0b) ||
++		  (reg & 0xfff0) == 0x4c0 ||
+ 		  reg == 0x402 ||
+ 		  reg == 0x63a || reg == 0x63c || reg == 0x63e ||
+ 		  reg == 0x640 || reg == 0x642 || reg == 0x64a ||
+-		  reg == 0x64c || reg == 0x660 ||
++		  reg == 0x64c ||
+ 		  reg == 0x73 || reg == 0x75 || reg == 0x77 || reg == 0x79 ||
+ 		  reg == 0x7b || reg == 0x7d;
+ 	}
+@@ -1682,9 +1695,13 @@ static struct nct6775_data *nct6775_update_device(struct device *dev)
+ 			if (data->has_fan_min & BIT(i))
+ 				data->fan_min[i] = nct6775_read_value(data,
+ 					   data->REG_FAN_MIN[i]);
+-			data->fan_pulses[i] =
+-			  (nct6775_read_value(data, data->REG_FAN_PULSES[i])
+-				>> data->FAN_PULSE_SHIFT[i]) & 0x03;
++
++			if (data->REG_FAN_PULSES[i]) {
++				data->fan_pulses[i] =
++				  (nct6775_read_value(data,
++						      data->REG_FAN_PULSES[i])
++				   >> data->FAN_PULSE_SHIFT[i]) & 0x03;
++			}
+ 
+ 			nct6775_select_fan_div(dev, data, i, reg);
+ 		}
+@@ -3639,6 +3656,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_VBAT = NCT6106_REG_VBAT;
+ 		data->REG_DIODE = NCT6106_REG_DIODE;
+@@ -3717,6 +3735,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6775_temp_label;
+ 		data->temp_mask = NCT6775_TEMP_MASK;
++		data->virt_temp_mask = NCT6775_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3789,6 +3808,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3853,7 +3873,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6779_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3861,6 +3881,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6779_temp_label;
+ 		data->temp_mask = NCT6779_TEMP_MASK;
++		data->virt_temp_mask = NCT6779_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3933,7 +3954,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6791_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3944,22 +3965,27 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		case nct6791:
+ 			data->temp_label = nct6779_temp_label;
+ 			data->temp_mask = NCT6791_TEMP_MASK;
++			data->virt_temp_mask = NCT6791_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6792:
+ 			data->temp_label = nct6792_temp_label;
+ 			data->temp_mask = NCT6792_TEMP_MASK;
++			data->virt_temp_mask = NCT6792_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6793:
+ 			data->temp_label = nct6793_temp_label;
+ 			data->temp_mask = NCT6793_TEMP_MASK;
++			data->virt_temp_mask = NCT6793_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6795:
+ 			data->temp_label = nct6795_temp_label;
+ 			data->temp_mask = NCT6795_TEMP_MASK;
++			data->virt_temp_mask = NCT6795_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6796:
+ 			data->temp_label = nct6796_temp_label;
+ 			data->temp_mask = NCT6796_TEMP_MASK;
++			data->virt_temp_mask = NCT6796_VIRT_TEMP_MASK;
+ 			break;
+ 		}
+ 
+@@ -4143,7 +4169,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		 * for each fan reflects a different temperature, and there
+ 		 * are no duplicates.
+ 		 */
+-		if (src != TEMP_SOURCE_VIRTUAL) {
++		if (!(data->virt_temp_mask & BIT(src))) {
+ 			if (mask & BIT(src))
+ 				continue;
+ 			mask |= BIT(src);
+diff --git a/drivers/i2c/busses/i2c-scmi.c b/drivers/i2c/busses/i2c-scmi.c
+index a01389b85f13..7e9a2bbf5ddc 100644
+--- a/drivers/i2c/busses/i2c-scmi.c
++++ b/drivers/i2c/busses/i2c-scmi.c
+@@ -152,6 +152,7 @@ acpi_smbus_cmi_access(struct i2c_adapter *adap, u16 addr, unsigned short flags,
+ 			mt_params[3].type = ACPI_TYPE_INTEGER;
+ 			mt_params[3].integer.value = len;
+ 			mt_params[4].type = ACPI_TYPE_BUFFER;
++			mt_params[4].buffer.length = len;
+ 			mt_params[4].buffer.pointer = data->block + 1;
+ 		}
+ 		break;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index cd620e009bad..d4b9db487b16 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -231,6 +231,7 @@ static const struct xpad_device {
+ 	{ 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
+@@ -530,6 +531,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init1),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init2),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init1),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index a39ae8f45e32..32379e0ac536 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3492,14 +3492,13 @@ static int __init dm_cache_init(void)
+ 	int r;
+ 
+ 	migration_cache = KMEM_CACHE(dm_cache_migration, 0);
+-	if (!migration_cache) {
+-		dm_unregister_target(&cache_target);
++	if (!migration_cache)
+ 		return -ENOMEM;
+-	}
+ 
+ 	r = dm_register_target(&cache_target);
+ 	if (r) {
+ 		DMERR("cache target registration failed: %d", r);
++		kmem_cache_destroy(migration_cache);
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 21d126a5078c..32aabe27b37c 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -467,7 +467,9 @@ static int flakey_iterate_devices(struct dm_target *ti, iterate_devices_callout_
+ static struct target_type flakey_target = {
+ 	.name   = "flakey",
+ 	.version = {1, 5, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
+ 	.features = DM_TARGET_ZONED_HM,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = flakey_ctr,
+ 	.dtr    = flakey_dtr,
+diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
+index d10964d41fd7..2f7c44a006c4 100644
+--- a/drivers/md/dm-linear.c
++++ b/drivers/md/dm-linear.c
+@@ -102,6 +102,7 @@ static int linear_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_REMAPPED;
+ }
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
+ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 			 blk_status_t *error)
+ {
+@@ -112,6 +113,7 @@ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 
+ 	return DM_ENDIO_DONE;
+ }
++#endif
+ 
+ static void linear_status(struct dm_target *ti, status_type_t type,
+ 			  unsigned status_flags, char *result, unsigned maxlen)
+@@ -208,12 +210,16 @@ static size_t linear_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
+ static struct target_type linear_target = {
+ 	.name   = "linear",
+ 	.version = {1, 4, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
++	.end_io = linear_end_io,
+ 	.features = DM_TARGET_PASSES_INTEGRITY | DM_TARGET_ZONED_HM,
++#else
++	.features = DM_TARGET_PASSES_INTEGRITY,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = linear_ctr,
+ 	.dtr    = linear_dtr,
+ 	.map    = linear_map,
+-	.end_io = linear_end_io,
+ 	.status = linear_status,
+ 	.prepare_ioctl = linear_prepare_ioctl,
+ 	.iterate_devices = linear_iterate_devices,
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b0dd7027848b..4ad8312d5b8d 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1153,12 +1153,14 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+ 
+ /*
+- * The zone descriptors obtained with a zone report indicate
+- * zone positions within the target device. The zone descriptors
+- * must be remapped to match their position within the dm device.
+- * A target may call dm_remap_zone_report after completion of a
+- * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained
+- * from the target device mapping to the dm device.
++ * The zone descriptors obtained with a zone report indicate zone positions
++ * within the target backing device, regardless of that device is a partition
++ * and regardless of the target mapping start sector on the device or partition.
++ * The zone descriptors start sector and write pointer position must be adjusted
++ * to match their relative position within the dm device.
++ * A target may call dm_remap_zone_report() after completion of a
++ * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained from the
++ * backing device.
+  */
+ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ {
+@@ -1169,6 +1171,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	struct blk_zone *zone;
+ 	unsigned int nr_rep = 0;
+ 	unsigned int ofst;
++	sector_t part_offset;
+ 	struct bio_vec bvec;
+ 	struct bvec_iter iter;
+ 	void *addr;
+@@ -1176,6 +1179,15 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	if (bio->bi_status)
+ 		return;
+ 
++	/*
++	 * bio sector was incremented by the request size on completion. Taking
++	 * into account the original request sector, the target start offset on
++	 * the backing device and the target mapping offset (ti->begin), the
++	 * start sector of the backing device. The partition offset is always 0
++	 * if the target uses a whole device.
++	 */
++	part_offset = bio->bi_iter.bi_sector + ti->begin - (start + bio_end_sector(report_bio));
++
+ 	/*
+ 	 * Remap the start sector of the reported zones. For sequential zones,
+ 	 * also remap the write pointer position.
+@@ -1193,6 +1205,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 		/* Set zones start sector */
+ 		while (hdr->nr_zones && ofst < bvec.bv_len) {
+ 			zone = addr + ofst;
++			zone->start -= part_offset;
+ 			if (zone->start >= start + ti->len) {
+ 				hdr->nr_zones = 0;
+ 				break;
+@@ -1204,7 +1217,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 				else if (zone->cond == BLK_ZONE_COND_EMPTY)
+ 					zone->wp = zone->start;
+ 				else
+-					zone->wp = zone->wp + ti->begin - start;
++					zone->wp = zone->wp + ti->begin - start - part_offset;
+ 			}
+ 			ofst += sizeof(struct blk_zone);
+ 			hdr->nr_zones--;
+diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
+index e11ab12fbdf2..800986a79704 100644
+--- a/drivers/mfd/omap-usb-host.c
++++ b/drivers/mfd/omap-usb-host.c
+@@ -528,8 +528,8 @@ static int usbhs_omap_get_dt_pdata(struct device *dev,
+ }
+ 
+ static const struct of_device_id usbhs_child_match_table[] = {
+-	{ .compatible = "ti,omap-ehci", },
+-	{ .compatible = "ti,omap-ohci", },
++	{ .compatible = "ti,ehci-omap", },
++	{ .compatible = "ti,ohci-omap3", },
+ 	{ }
+ };
+ 
+@@ -855,6 +855,7 @@ static struct platform_driver usbhs_omap_driver = {
+ 		.pm		= &usbhsomap_dev_pm_ops,
+ 		.of_match_table = usbhs_omap_dt_ids,
+ 	},
++	.probe		= usbhs_omap_probe,
+ 	.remove		= usbhs_omap_remove,
+ };
+ 
+@@ -864,9 +865,9 @@ MODULE_ALIAS("platform:" USBHS_DRIVER_NAME);
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("usb host common core driver for omap EHCI and OHCI");
+ 
+-static int __init omap_usbhs_drvinit(void)
++static int omap_usbhs_drvinit(void)
+ {
+-	return platform_driver_probe(&usbhs_omap_driver, usbhs_omap_probe);
++	return platform_driver_register(&usbhs_omap_driver);
+ }
+ 
+ /*
+@@ -878,7 +879,7 @@ static int __init omap_usbhs_drvinit(void)
+  */
+ fs_initcall_sync(omap_usbhs_drvinit);
+ 
+-static void __exit omap_usbhs_drvexit(void)
++static void omap_usbhs_drvexit(void)
+ {
+ 	platform_driver_unregister(&usbhs_omap_driver);
+ }
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index a0b9102c4c6e..e201ccb3fda4 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1370,6 +1370,16 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+ 		brq->data.blocks = card->host->max_blk_count;
+ 
+ 	if (brq->data.blocks > 1) {
++		/*
++		 * Some SD cards in SPI mode return a CRC error or even lock up
++		 * completely when trying to read the last block using a
++		 * multiblock read command.
++		 */
++		if (mmc_host_is_spi(card->host) && (rq_data_dir(req) == READ) &&
++		    (blk_rq_pos(req) + blk_rq_sectors(req) ==
++		     get_capacity(md->disk)))
++			brq->data.blocks--;
++
+ 		/*
+ 		 * After a read error, we redo the request one sector
+ 		 * at a time in order to accurately determine which
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 217b790d22ed..2b01180be834 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -210,6 +210,7 @@ static void bond_get_stats(struct net_device *bond_dev,
+ static void bond_slave_arr_handler(struct work_struct *work);
+ static bool bond_time_in_interval(struct bonding *bond, unsigned long last_act,
+ 				  int mod);
++static void bond_netdev_notify_work(struct work_struct *work);
+ 
+ /*---------------------------- General routines -----------------------------*/
+ 
+@@ -1177,9 +1178,27 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
+ 		}
+ 	}
+ 
+-	/* don't change skb->dev for link-local packets */
+-	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))
++	/* Link-local multicast packets should be passed to the
++	 * stack on the link they arrive as well as pass them to the
++	 * bond-master device. These packets are mostly usable when
++	 * stack receives it with the link on which they arrive
++	 * (e.g. LLDP) they also must be available on master. Some of
++	 * the use cases include (but are not limited to): LLDP agents
++	 * that must be able to operate both on enslaved interfaces as
++	 * well as on bonds themselves; linux bridges that must be able
++	 * to process/pass BPDUs from attached bonds when any kind of
++	 * STP version is enabled on the network.
++	 */
++	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) {
++		struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
++
++		if (nskb) {
++			nskb->dev = bond->dev;
++			nskb->queue_mapping = 0;
++			netif_rx(nskb);
++		}
+ 		return RX_HANDLER_PASS;
++	}
+ 	if (bond_should_deliver_exact_match(skb, slave, bond))
+ 		return RX_HANDLER_EXACT;
+ 
+@@ -1276,6 +1295,8 @@ static struct slave *bond_alloc_slave(struct bonding *bond)
+ 			return NULL;
+ 		}
+ 	}
++	INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
++
+ 	return slave;
+ }
+ 
+@@ -1283,6 +1304,7 @@ static void bond_free_slave(struct slave *slave)
+ {
+ 	struct bonding *bond = bond_get_bond_by_slave(slave);
+ 
++	cancel_delayed_work_sync(&slave->notify_work);
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		kfree(SLAVE_AD_INFO(slave));
+ 
+@@ -1304,39 +1326,26 @@ static void bond_fill_ifslave(struct slave *slave, struct ifslave *info)
+ 	info->link_failure_count = slave->link_failure_count;
+ }
+ 
+-static void bond_netdev_notify(struct net_device *dev,
+-			       struct netdev_bonding_info *info)
+-{
+-	rtnl_lock();
+-	netdev_bonding_info_change(dev, info);
+-	rtnl_unlock();
+-}
+-
+ static void bond_netdev_notify_work(struct work_struct *_work)
+ {
+-	struct netdev_notify_work *w =
+-		container_of(_work, struct netdev_notify_work, work.work);
++	struct slave *slave = container_of(_work, struct slave,
++					   notify_work.work);
++
++	if (rtnl_trylock()) {
++		struct netdev_bonding_info binfo;
+ 
+-	bond_netdev_notify(w->dev, &w->bonding_info);
+-	dev_put(w->dev);
+-	kfree(w);
++		bond_fill_ifslave(slave, &binfo.slave);
++		bond_fill_ifbond(slave->bond, &binfo.master);
++		netdev_bonding_info_change(slave->dev, &binfo);
++		rtnl_unlock();
++	} else {
++		queue_delayed_work(slave->bond->wq, &slave->notify_work, 1);
++	}
+ }
+ 
+ void bond_queue_slave_event(struct slave *slave)
+ {
+-	struct bonding *bond = slave->bond;
+-	struct netdev_notify_work *nnw = kzalloc(sizeof(*nnw), GFP_ATOMIC);
+-
+-	if (!nnw)
+-		return;
+-
+-	dev_hold(slave->dev);
+-	nnw->dev = slave->dev;
+-	bond_fill_ifslave(slave, &nnw->bonding_info.slave);
+-	bond_fill_ifbond(bond, &nnw->bonding_info.master);
+-	INIT_DELAYED_WORK(&nnw->work, bond_netdev_notify_work);
+-
+-	queue_delayed_work(slave->bond->wq, &nnw->work, 0);
++	queue_delayed_work(slave->bond->wq, &slave->notify_work, 0);
+ }
+ 
+ void bond_lower_state_changed(struct slave *slave)
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d93c790bfbe8..ad534b90ef21 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1107,7 +1107,7 @@ void b53_vlan_add(struct dsa_switch *ds, int port,
+ 		b53_get_vlan_entry(dev, vid, vl);
+ 
+ 		vl->members |= BIT(port);
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag |= BIT(port);
+ 		else
+ 			vl->untag &= ~BIT(port);
+@@ -1149,7 +1149,7 @@ int b53_vlan_del(struct dsa_switch *ds, int port,
+ 				pvid = 0;
+ 		}
+ 
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag &= ~(BIT(port));
+ 
+ 		b53_set_vlan_entry(dev, vid, vl);
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 02e8982519ce..d73204767cbe 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -698,7 +698,6 @@ static int bcm_sf2_sw_suspend(struct dsa_switch *ds)
+ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+-	unsigned int port;
+ 	int ret;
+ 
+ 	ret = bcm_sf2_sw_rst(priv);
+@@ -710,14 +709,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ 	if (priv->hw_params.num_gphy == 1)
+ 		bcm_sf2_gphy_enable_set(ds, true);
+ 
+-	for (port = 0; port < DSA_MAX_PORTS; port++) {
+-		if (dsa_is_user_port(ds, port))
+-			bcm_sf2_port_setup(ds, port, NULL);
+-		else if (dsa_is_cpu_port(ds, port))
+-			bcm_sf2_imp_setup(ds, port);
+-	}
+-
+-	bcm_sf2_enable_acb(ds);
++	ds->ops->setup(ds);
+ 
+ 	return 0;
+ }
+@@ -1168,10 +1160,10 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
+ {
+ 	struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
+ 
+-	/* Disable all ports and interrupts */
+ 	priv->wol_ports_mask = 0;
+-	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	dsa_unregister_switch(priv->dev->ds);
++	/* Disable all ports and interrupts */
++	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	bcm_sf2_mdio_unregister(priv);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index b5f1f62e8e25..d1e1a0ba8615 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -225,9 +225,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		}
+ 
+ 		/* for single fragment packets use build_skb() */
+-		if (buff->is_eop) {
++		if (buff->is_eop &&
++		    buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
+ 			skb = build_skb(page_address(buff->page),
+-					buff->len + AQ_SKB_ALIGN);
++					AQ_CFG_RX_FRAME_MAX);
+ 			if (unlikely(!skb)) {
+ 				err = -ENOMEM;
+ 				goto err_exit;
+@@ -247,18 +248,21 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 					buff->len - ETH_HLEN,
+ 					SKB_TRUESIZE(buff->len - ETH_HLEN));
+ 
+-			for (i = 1U, next_ = buff->next,
+-			     buff_ = &self->buff_ring[next_]; true;
+-			     next_ = buff_->next,
+-			     buff_ = &self->buff_ring[next_], ++i) {
+-				skb_add_rx_frag(skb, i, buff_->page, 0,
+-						buff_->len,
+-						SKB_TRUESIZE(buff->len -
+-						ETH_HLEN));
+-				buff_->is_cleaned = 1;
+-
+-				if (buff_->is_eop)
+-					break;
++			if (!buff->is_eop) {
++				for (i = 1U, next_ = buff->next,
++				     buff_ = &self->buff_ring[next_];
++				     true; next_ = buff_->next,
++				     buff_ = &self->buff_ring[next_], ++i) {
++					skb_add_rx_frag(skb, i,
++							buff_->page, 0,
++							buff_->len,
++							SKB_TRUESIZE(buff->len -
++							ETH_HLEN));
++					buff_->is_cleaned = 1;
++
++					if (buff_->is_eop)
++						break;
++				}
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index a1f60f89e059..7a03ee45840e 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1045,14 +1045,22 @@ static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv)
+ {
+ 	u32 reg;
+ 
+-	/* Stop monitoring MPD interrupt */
+-	intrl2_0_mask_set(priv, INTRL2_0_MPD);
+-
+ 	/* Clear the MagicPacket detection logic */
+ 	reg = umac_readl(priv, UMAC_MPD_CTRL);
+ 	reg &= ~MPD_EN;
+ 	umac_writel(priv, reg, UMAC_MPD_CTRL);
+ 
++	reg = intrl2_0_readl(priv, INTRL2_CPU_STATUS);
++	if (reg & INTRL2_0_MPD)
++		netdev_info(priv->netdev, "Wake-on-LAN (MPD) interrupt!\n");
++
++	if (reg & INTRL2_0_BRCM_MATCH_TAG) {
++		reg = rxchk_readl(priv, RXCHK_BRCM_TAG_MATCH_STATUS) &
++				  RXCHK_BRCM_TAG_MATCH_MASK;
++		netdev_info(priv->netdev,
++			    "Wake-on-LAN (filters 0x%02x) interrupt!\n", reg);
++	}
++
+ 	netif_dbg(priv, wol, priv->netdev, "resumed from WOL\n");
+ }
+ 
+@@ -1102,11 +1110,6 @@ static irqreturn_t bcm_sysport_rx_isr(int irq, void *dev_id)
+ 	if (priv->irq0_stat & INTRL2_0_TX_RING_FULL)
+ 		bcm_sysport_tx_reclaim_all(priv);
+ 
+-	if (priv->irq0_stat & INTRL2_0_MPD) {
+-		netdev_info(priv->netdev, "Wake-on-LAN interrupt!\n");
+-		bcm_sysport_resume_from_wol(priv);
+-	}
+-
+ 	if (!priv->is_lite)
+ 		goto out;
+ 
+@@ -2459,9 +2462,6 @@ static int bcm_sysport_suspend_to_wol(struct bcm_sysport_priv *priv)
+ 	/* UniMAC receive needs to be turned on */
+ 	umac_enable_set(priv, CMD_RX_EN, 1);
+ 
+-	/* Enable the interrupt wake-up source */
+-	intrl2_0_mask_clear(priv, INTRL2_0_MPD);
+-
+ 	netif_dbg(priv, wol, ndev, "entered WOL mode\n");
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 80b05597c5fe..33f0861057fd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1882,8 +1882,11 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
+ 			tx_pkts++;
+ 			/* return full budget so NAPI will complete. */
+-			if (unlikely(tx_pkts > bp->tx_wake_thresh))
++			if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
+ 				rx_pkts = budget;
++				raw_cons = NEXT_RAW_CMP(raw_cons);
++				break;
++			}
+ 		} else if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+ 			if (likely(budget))
+ 				rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
+@@ -1911,7 +1914,7 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		}
+ 		raw_cons = NEXT_RAW_CMP(raw_cons);
+ 
+-		if (rx_pkts == budget)
++		if (rx_pkts && rx_pkts == budget)
+ 			break;
+ 	}
+ 
+@@ -2025,8 +2028,12 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
+ 	while (1) {
+ 		work_done += bnxt_poll_work(bp, bnapi, budget - work_done);
+ 
+-		if (work_done >= budget)
++		if (work_done >= budget) {
++			if (!budget)
++				BNXT_CP_DB_REARM(cpr->cp_doorbell,
++						 cpr->cp_raw_cons);
+ 			break;
++		}
+ 
+ 		if (!bnxt_has_work(bp, cpr)) {
+ 			if (napi_complete_done(napi, work_done))
+@@ -3008,10 +3015,11 @@ static void bnxt_free_hwrm_resources(struct bnxt *bp)
+ {
+ 	struct pci_dev *pdev = bp->pdev;
+ 
+-	dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
+-			  bp->hwrm_cmd_resp_dma_addr);
+-
+-	bp->hwrm_cmd_resp_addr = NULL;
++	if (bp->hwrm_cmd_resp_addr) {
++		dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
++				  bp->hwrm_cmd_resp_dma_addr);
++		bp->hwrm_cmd_resp_addr = NULL;
++	}
+ 	if (bp->hwrm_dbg_resp_addr) {
+ 		dma_free_coherent(&pdev->dev, HWRM_DBG_REG_BUF_SIZE,
+ 				  bp->hwrm_dbg_resp_addr,
+@@ -4643,7 +4651,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
+ 				      FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
+ 		enables |= ring_grps ?
+ 			   FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
+-		enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
++		enables |= vnics ? FUNC_CFG_REQ_ENABLES_NUM_VNICS : 0;
+ 
+ 		req->num_rx_rings = cpu_to_le16(rx_rings);
+ 		req->num_hw_ring_grps = cpu_to_le16(ring_grps);
+@@ -8493,7 +8501,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+ 	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
+-			hw_resc->max_irqs);
++			hw_resc->max_irqs - bnxt_get_ulp_msix_num(bp));
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+@@ -8924,6 +8932,7 @@ init_err_cleanup_tc:
+ 	bnxt_clear_int_mode(bp);
+ 
+ init_err_pci_clean:
++	bnxt_free_hwrm_resources(bp);
+ 	bnxt_cleanup_pci(bp);
+ 
+ init_err_free:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+index d5bc72cecde3..3f896acc4ca8 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+@@ -98,13 +98,13 @@ static int bnxt_hwrm_queue_cos2bw_cfg(struct bnxt *bp, struct ieee_ets *ets,
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_COS2BW_CFG, -1, -1);
+ 	for (i = 0; i < max_tc; i++) {
+-		u8 qidx;
++		u8 qidx = bp->tc_to_qidx[i];
+ 
+ 		req.enables |= cpu_to_le32(
+-			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID << i);
++			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID <<
++			qidx);
+ 
+ 		memset(&cos2bw, 0, sizeof(cos2bw));
+-		qidx = bp->tc_to_qidx[i];
+ 		cos2bw.queue_id = bp->q_info[qidx].queue_id;
+ 		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_STRICT) {
+ 			cos2bw.tsa =
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 491bd40a254d..c4c9df029466 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -75,17 +75,23 @@ static int bnxt_tc_parse_redir(struct bnxt *bp,
+ 	return 0;
+ }
+ 
+-static void bnxt_tc_parse_vlan(struct bnxt *bp,
+-			       struct bnxt_tc_actions *actions,
+-			       const struct tc_action *tc_act)
++static int bnxt_tc_parse_vlan(struct bnxt *bp,
++			      struct bnxt_tc_actions *actions,
++			      const struct tc_action *tc_act)
+ {
+-	if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_POP) {
++	switch (tcf_vlan_action(tc_act)) {
++	case TCA_VLAN_ACT_POP:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_POP_VLAN;
+-	} else if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_PUSH) {
++		break;
++	case TCA_VLAN_ACT_PUSH:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_PUSH_VLAN;
+ 		actions->push_vlan_tci = htons(tcf_vlan_push_vid(tc_act));
+ 		actions->push_vlan_tpid = tcf_vlan_push_proto(tc_act);
++		break;
++	default:
++		return -EOPNOTSUPP;
+ 	}
++	return 0;
+ }
+ 
+ static int bnxt_tc_parse_tunnel_set(struct bnxt *bp,
+@@ -136,7 +142,9 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
+ 
+ 		/* Push/pop VLAN */
+ 		if (is_tcf_vlan(tc_act)) {
+-			bnxt_tc_parse_vlan(bp, actions, tc_act);
++			rc = bnxt_tc_parse_vlan(bp, actions, tc_act);
++			if (rc)
++				return rc;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c4d7479938e2..dfa045f22ef1 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3765,6 +3765,13 @@ static const struct macb_config at91sam9260_config = {
+ 	.init = macb_init,
+ };
+ 
++static const struct macb_config sama5d3macb_config = {
++	.caps = MACB_CAPS_SG_DISABLED
++	      | MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
++	.clk_init = macb_clk_init,
++	.init = macb_init,
++};
++
+ static const struct macb_config pc302gem_config = {
+ 	.caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE,
+ 	.dma_burst_length = 16,
+@@ -3832,6 +3839,7 @@ static const struct of_device_id macb_dt_ids[] = {
+ 	{ .compatible = "cdns,gem", .data = &pc302gem_config },
+ 	{ .compatible = "atmel,sama5d2-gem", .data = &sama5d2_config },
+ 	{ .compatible = "atmel,sama5d3-gem", .data = &sama5d3_config },
++	{ .compatible = "atmel,sama5d3-macb", .data = &sama5d3macb_config },
+ 	{ .compatible = "atmel,sama5d4-gem", .data = &sama5d4_config },
+ 	{ .compatible = "cdns,at91rm9200-emac", .data = &emac_config },
+ 	{ .compatible = "cdns,emac", .data = &emac_config },
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index a051e582d541..79d03f8ee7b1 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -84,7 +84,7 @@ static void hnae_unmap_buffer(struct hnae_ring *ring, struct hnae_desc_cb *cb)
+ 	if (cb->type == DESC_TYPE_SKB)
+ 		dma_unmap_single(ring_to_dev(ring), cb->dma, cb->length,
+ 				 ring_to_dma_dir(ring));
+-	else
++	else if (cb->length)
+ 		dma_unmap_page(ring_to_dev(ring), cb->dma, cb->length,
+ 			       ring_to_dma_dir(ring));
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index b4518f45f048..1336ec73230d 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -40,9 +40,9 @@
+ #define SKB_TMP_LEN(SKB) \
+ 	(((SKB)->transport_header - (SKB)->mac_header) + tcp_hdrlen(SKB))
+ 
+-static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+-			 int size, dma_addr_t dma, int frag_end,
+-			 int buf_num, enum hns_desc_type type, int mtu)
++static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
++			    int send_sz, dma_addr_t dma, int frag_end,
++			    int buf_num, enum hns_desc_type type, int mtu)
+ {
+ 	struct hnae_desc *desc = &ring->desc[ring->next_to_use];
+ 	struct hnae_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
+@@ -64,7 +64,7 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	desc_cb->type = type;
+ 
+ 	desc->addr = cpu_to_le64(dma);
+-	desc->tx.send_size = cpu_to_le16((u16)size);
++	desc->tx.send_size = cpu_to_le16((u16)send_sz);
+ 
+ 	/* config bd buffer end */
+ 	hnae_set_bit(rrcfv, HNSV2_TXD_VLD_B, 1);
+@@ -133,6 +133,14 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	ring_ptr_move_fw(ring, next_to_use);
+ }
+ 
++static void fill_v2_desc(struct hnae_ring *ring, void *priv,
++			 int size, dma_addr_t dma, int frag_end,
++			 int buf_num, enum hns_desc_type type, int mtu)
++{
++	fill_v2_desc_hw(ring, priv, size, size, dma, frag_end,
++			buf_num, type, mtu);
++}
++
+ static const struct acpi_device_id hns_enet_acpi_match[] = {
+ 	{ "HISI00C1", 0 },
+ 	{ "HISI00C2", 0 },
+@@ -289,15 +297,15 @@ static void fill_tso_desc(struct hnae_ring *ring, void *priv,
+ 
+ 	/* when the frag size is bigger than hardware, split this frag */
+ 	for (k = 0; k < frag_buf_num; k++)
+-		fill_v2_desc(ring, priv,
+-			     (k == frag_buf_num - 1) ?
++		fill_v2_desc_hw(ring, priv, k == 0 ? size : 0,
++				(k == frag_buf_num - 1) ?
+ 					sizeoflast : BD_MAX_SEND_SIZE,
+-			     dma + BD_MAX_SEND_SIZE * k,
+-			     frag_end && (k == frag_buf_num - 1) ? 1 : 0,
+-			     buf_num,
+-			     (type == DESC_TYPE_SKB && !k) ?
++				dma + BD_MAX_SEND_SIZE * k,
++				frag_end && (k == frag_buf_num - 1) ? 1 : 0,
++				buf_num,
++				(type == DESC_TYPE_SKB && !k) ?
+ 					DESC_TYPE_SKB : DESC_TYPE_PAGE,
+-			     mtu);
++				mtu);
+ }
+ 
+ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b8bba64673e5..3986ef83111b 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1725,7 +1725,7 @@ static void mvpp2_txq_desc_put(struct mvpp2_tx_queue *txq)
+ }
+ 
+ /* Set Tx descriptors fields relevant for CSUM calculation */
+-static u32 mvpp2_txq_desc_csum(int l3_offs, int l3_proto,
++static u32 mvpp2_txq_desc_csum(int l3_offs, __be16 l3_proto,
+ 			       int ip_hdr_len, int l4_proto)
+ {
+ 	u32 command;
+@@ -2600,14 +2600,15 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		int ip_hdr_len = 0;
+ 		u8 l4_proto;
++		__be16 l3_proto = vlan_get_protocol(skb);
+ 
+-		if (skb->protocol == htons(ETH_P_IP)) {
++		if (l3_proto == htons(ETH_P_IP)) {
+ 			struct iphdr *ip4h = ip_hdr(skb);
+ 
+ 			/* Calculate IPv4 checksum and L4 checksum */
+ 			ip_hdr_len = ip4h->ihl;
+ 			l4_proto = ip4h->protocol;
+-		} else if (skb->protocol == htons(ETH_P_IPV6)) {
++		} else if (l3_proto == htons(ETH_P_IPV6)) {
+ 			struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ 
+ 			/* Read l4_protocol from one of IPv6 extra headers */
+@@ -2619,7 +2620,7 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 		}
+ 
+ 		return mvpp2_txq_desc_csum(skb_network_offset(skb),
+-				skb->protocol, ip_hdr_len, l4_proto);
++					   l3_proto, ip_hdr_len, l4_proto);
+ 	}
+ 
+ 	return MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE;
+@@ -3055,10 +3056,12 @@ static int mvpp2_poll(struct napi_struct *napi, int budget)
+ 				   cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK);
+ 	}
+ 
+-	cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
+-	if (cause_tx) {
+-		cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
+-		mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++	if (port->has_tx_irqs) {
++		cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
++		if (cause_tx) {
++			cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
++			mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++		}
+ 	}
+ 
+ 	/* Process RX packets */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index dfbcda0d0e08..701af5ffcbc9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1339,6 +1339,9 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 
+ 			*match_level = MLX5_MATCH_L2;
+ 		}
++	} else {
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1);
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+ 	}
+ 
+ 	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 40dba9e8af92..69f356f5f8f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2000,7 +2000,7 @@ static u32 calculate_vports_min_rate_divider(struct mlx5_eswitch *esw)
+ 	u32 max_guarantee = 0;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled || evport->info.min_rate < max_guarantee)
+ 			continue;
+@@ -2020,7 +2020,7 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ 	int err;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled)
+ 			continue;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+index dae1c5c5d27c..d2f76070ea7c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+@@ -509,7 +509,7 @@ static int mlx5_hairpin_modify_sq(struct mlx5_core_dev *peer_mdev, u32 sqn,
+ 
+ 	sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx);
+ 
+-	if (next_state == MLX5_RQC_STATE_RDY) {
++	if (next_state == MLX5_SQC_STATE_RDY) {
+ 		MLX5_SET(sqc, sqc, hairpin_peer_rq, peer_rq);
+ 		MLX5_SET(sqc, sqc, hairpin_peer_vhca, peer_vhca);
+ 	}
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index 18df7d934e81..ccfcf3048cd0 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -91,7 +91,7 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 		struct sk_buff *skb;
+ 		struct net_device *dev;
+ 		u32 *buf;
+-		int sz, len;
++		int sz, len, buf_len;
+ 		u32 ifh[4];
+ 		u32 val;
+ 		struct frame_info info;
+@@ -116,14 +116,20 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 			err = -ENOMEM;
+ 			break;
+ 		}
+-		buf = (u32 *)skb_put(skb, info.len);
++		buf_len = info.len - ETH_FCS_LEN;
++		buf = (u32 *)skb_put(skb, buf_len);
+ 
+ 		len = 0;
+ 		do {
+ 			sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
+ 			*buf++ = val;
+ 			len += sz;
+-		} while ((sz == 4) && (len < info.len));
++		} while (len < buf_len);
++
++		/* Read the FCS and discard it */
++		sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
++		/* Update the statistics if part of the FCS was read before */
++		len -= ETH_FCS_LEN - sz;
+ 
+ 		if (sz < 0) {
+ 			err = sz;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index bfccc1955907..80306e4f247c 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -2068,14 +2068,17 @@ nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp,
+ 	return true;
+ }
+ 
+-static void nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
++static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
+ {
+ 	struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring;
+ 	struct nfp_net *nn = r_vec->nfp_net;
+ 	struct nfp_net_dp *dp = &nn->dp;
++	unsigned int budget = 512;
+ 
+-	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring))
++	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--)
+ 		continue;
++
++	return budget;
+ }
+ 
+ static void nfp_ctrl_poll(unsigned long arg)
+@@ -2087,9 +2090,13 @@ static void nfp_ctrl_poll(unsigned long arg)
+ 	__nfp_ctrl_tx_queued(r_vec);
+ 	spin_unlock_bh(&r_vec->lock);
+ 
+-	nfp_ctrl_rx(r_vec);
+-
+-	nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	if (nfp_ctrl_rx(r_vec)) {
++		nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	} else {
++		tasklet_schedule(&r_vec->tasklet);
++		nn_dp_warn(&r_vec->nfp_net->dp,
++			   "control message budget exceeded!\n");
++	}
+ }
+ 
+ /* Setup and Configuration
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index bee10c1781fb..463ffa83685f 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -11987,6 +11987,7 @@ struct public_global {
+ 	u32 running_bundle_id;
+ 	s32 external_temperature;
+ 	u32 mdump_reason;
++	u64 reserved;
+ 	u32 data_ptr;
+ 	u32 data_size;
+ };
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+index 81312924df14..0c443ea98479 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+@@ -1800,7 +1800,8 @@ struct qlcnic_hardware_ops {
+ 	int (*config_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*clear_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*config_promisc_mode) (struct qlcnic_adapter *, u32);
+-	void (*change_l2_filter) (struct qlcnic_adapter *, u64 *, u16);
++	void (*change_l2_filter)(struct qlcnic_adapter *adapter, u64 *addr,
++				 u16 vlan, struct qlcnic_host_tx_ring *tx_ring);
+ 	int (*get_board_info) (struct qlcnic_adapter *);
+ 	void (*set_mac_filter_count) (struct qlcnic_adapter *);
+ 	void (*free_mac_list) (struct qlcnic_adapter *);
+@@ -2064,9 +2065,10 @@ static inline int qlcnic_nic_set_promisc(struct qlcnic_adapter *adapter,
+ }
+ 
+ static inline void qlcnic_change_filter(struct qlcnic_adapter *adapter,
+-					u64 *addr, u16 id)
++					u64 *addr, u16 vlan,
++					struct qlcnic_host_tx_ring *tx_ring)
+ {
+-	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, id);
++	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, vlan, tx_ring);
+ }
+ 
+ static inline int qlcnic_get_board_info(struct qlcnic_adapter *adapter)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index 569d54ededec..a79d84f99102 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -2135,7 +2135,8 @@ out:
+ }
+ 
+ void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
+-				  u16 vlan_id)
++				  u16 vlan_id,
++				  struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	u8 mac[ETH_ALEN];
+ 	memcpy(&mac, addr, ETH_ALEN);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+index b75a81246856..73fe2f64491d 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+@@ -550,7 +550,8 @@ int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32);
+ int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
+ int qlcnic_83xx_config_hw_lro(struct qlcnic_adapter *, int);
+ int qlcnic_83xx_config_rss(struct qlcnic_adapter *, int);
+-void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *, u64 *, u16);
++void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
++				  u16 vlan, struct qlcnic_host_tx_ring *ring);
+ int qlcnic_83xx_get_pci_info(struct qlcnic_adapter *, struct qlcnic_pci_info *);
+ int qlcnic_83xx_set_nic_info(struct qlcnic_adapter *, struct qlcnic_info *);
+ void qlcnic_83xx_initialize_nic(struct qlcnic_adapter *, int);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+index 4bb33af8e2b3..56a3bd9e37dc 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+@@ -173,7 +173,8 @@ int qlcnic_82xx_napi_add(struct qlcnic_adapter *adapter,
+ 			 struct net_device *netdev);
+ void qlcnic_82xx_get_beacon_state(struct qlcnic_adapter *);
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter,
+-			       u64 *uaddr, u16 vlan_id);
++			       u64 *uaddr, u16 vlan_id,
++			       struct qlcnic_host_tx_ring *tx_ring);
+ int qlcnic_82xx_config_intr_coalesce(struct qlcnic_adapter *,
+ 				     struct ethtool_coalesce *);
+ int qlcnic_82xx_set_rx_coalesce(struct qlcnic_adapter *);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+index 84dd83031a1b..9647578cbe6a 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+@@ -268,13 +268,12 @@ static void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter,
+ }
+ 
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+-			       u16 vlan_id)
++			       u16 vlan_id, struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct cmd_desc_type0 *hwdesc;
+ 	struct qlcnic_nic_req *req;
+ 	struct qlcnic_mac_req *mac_req;
+ 	struct qlcnic_vlan_req *vlan_req;
+-	struct qlcnic_host_tx_ring *tx_ring = adapter->tx_ring;
+ 	u32 producer;
+ 	u64 word;
+ 
+@@ -301,7 +300,8 @@ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+ 
+ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 			       struct cmd_desc_type0 *first_desc,
+-			       struct sk_buff *skb)
++			       struct sk_buff *skb,
++			       struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct vlan_ethhdr *vh = (struct vlan_ethhdr *)(skb->data);
+ 	struct ethhdr *phdr = (struct ethhdr *)(skb->data);
+@@ -335,7 +335,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 		    tmp_fil->vlan_id == vlan_id) {
+ 			if (jiffies > (QLCNIC_READD_AGE * HZ + tmp_fil->ftime))
+ 				qlcnic_change_filter(adapter, &src_addr,
+-						     vlan_id);
++						     vlan_id, tx_ring);
+ 			tmp_fil->ftime = jiffies;
+ 			return;
+ 		}
+@@ -350,7 +350,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 	if (!fil)
+ 		return;
+ 
+-	qlcnic_change_filter(adapter, &src_addr, vlan_id);
++	qlcnic_change_filter(adapter, &src_addr, vlan_id, tx_ring);
+ 	fil->ftime = jiffies;
+ 	fil->vlan_id = vlan_id;
+ 	memcpy(fil->faddr, &src_addr, ETH_ALEN);
+@@ -766,7 +766,7 @@ netdev_tx_t qlcnic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+ 	}
+ 
+ 	if (adapter->drv_mac_learn)
+-		qlcnic_send_filter(adapter, first_desc, skb);
++		qlcnic_send_filter(adapter, first_desc, skb, tx_ring);
+ 
+ 	tx_ring->tx_stats.tx_bytes += skb->len;
+ 	tx_ring->tx_stats.xmit_called++;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+index 7fd86d40a337..11167abe5934 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+@@ -113,7 +113,7 @@ rmnet_map_ingress_handler(struct sk_buff *skb,
+ 	struct sk_buff *skbn;
+ 
+ 	if (skb->dev->type == ARPHRD_ETHER) {
+-		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_KERNEL)) {
++		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_ATOMIC)) {
+ 			kfree_skb(skb);
+ 			return;
+ 		}
+@@ -147,7 +147,7 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
+ 	}
+ 
+ 	if (skb_headroom(skb) < required_headroom) {
+-		if (pskb_expand_head(skb, required_headroom, 0, GFP_KERNEL))
++		if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
+ 			return -ENOMEM;
+ 	}
+ 
+@@ -189,6 +189,9 @@ rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb)
+ 	if (!skb)
+ 		goto done;
+ 
++	if (skb->pkt_type == PACKET_LOOPBACK)
++		return RX_HANDLER_PASS;
++
+ 	dev = skb->dev;
+ 	port = rmnet_get_port(dev);
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 1d1e66002232..627c5cd8f786 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -4788,8 +4788,8 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
+ 		RTL_W32(tp, RxConfig, RX_FIFO_THRESH | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_24:
+-	case RTL_GIGA_MAC_VER_34:
+-	case RTL_GIGA_MAC_VER_35:
++	case RTL_GIGA_MAC_VER_34 ... RTL_GIGA_MAC_VER_36:
++	case RTL_GIGA_MAC_VER_38:
+ 		RTL_W32(tp, RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+@@ -5041,9 +5041,14 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 
+ static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+-	/* Set DMA burst size and Interframe Gap Time */
+-	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+-		(InterFrameGap << TxInterFrameGapShift));
++	u32 val = TX_DMA_BURST << TxDMAShift |
++		  InterFrameGap << TxInterFrameGapShift;
++
++	if (tp->mac_version >= RTL_GIGA_MAC_VER_34 &&
++	    tp->mac_version != RTL_GIGA_MAC_VER_39)
++		val |= TXCFG_AUTO_FIFO;
++
++	RTL_W32(tp, TxConfig, val);
+ }
+ 
+ static void rtl_set_rx_max_size(struct rtl8169_private *tp)
+@@ -5530,7 +5535,6 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	/* Adjust EEE LED frequency */
+@@ -5562,7 +5566,6 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 	RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) | PFM_EN);
+ 	RTL_W32(tp, MISC, RTL_R32(tp, MISC) | PWM_EN);
+@@ -5607,8 +5610,6 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp)
+ 
+ static void rtl_hw_start_8168g(struct rtl8169_private *tp)
+ {
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5707,8 +5708,6 @@ static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
+ 	RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~ASPM_en);
+ 	rtl_ephy_init(tp, e_info_8168h_1, ARRAY_SIZE(e_info_8168h_1));
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5789,8 +5788,6 @@ static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
+ {
+ 	rtl8168ep_stop_cmac(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x2f, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x5f, ERIAR_EXGMAC);
+@@ -6108,7 +6105,6 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp)
+ 	/* Force LAN exit from ASPM if Rx/Tx are not idle */
+ 	RTL_W32(tp, FuncEvent, RTL_R32(tp, FuncEvent) | 0x002800);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	rtl_ephy_init(tp, e_info_8402, ARRAY_SIZE(e_info_8402));
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index 78fd0f8b8e81..a15006e2fb29 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -256,10 +256,10 @@ struct stmmac_safety_stats {
+ #define MAX_DMA_RIWT		0xff
+ #define MIN_DMA_RIWT		0x20
+ /* Tx coalesce parameters */
+-#define STMMAC_COAL_TX_TIMER	40000
++#define STMMAC_COAL_TX_TIMER	1000
+ #define STMMAC_MAX_COAL_TX_TICK	100000
+ #define STMMAC_TX_MAX_FRAMES	256
+-#define STMMAC_TX_FRAMES	64
++#define STMMAC_TX_FRAMES	25
+ 
+ /* Packets types */
+ enum packets_types {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index c0a855b7ab3b..63e1064b27a2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -48,6 +48,8 @@ struct stmmac_tx_info {
+ 
+ /* Frequently used values are kept adjacent for cache effect */
+ struct stmmac_tx_queue {
++	u32 tx_count_frames;
++	struct timer_list txtimer;
+ 	u32 queue_index;
+ 	struct stmmac_priv *priv_data;
+ 	struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
+@@ -73,7 +75,14 @@ struct stmmac_rx_queue {
+ 	u32 rx_zeroc_thresh;
+ 	dma_addr_t dma_rx_phy;
+ 	u32 rx_tail_addr;
++};
++
++struct stmmac_channel {
+ 	struct napi_struct napi ____cacheline_aligned_in_smp;
++	struct stmmac_priv *priv_data;
++	u32 index;
++	int has_rx;
++	int has_tx;
+ };
+ 
+ struct stmmac_tc_entry {
+@@ -109,14 +118,12 @@ struct stmmac_pps_cfg {
+ 
+ struct stmmac_priv {
+ 	/* Frequently used values are kept adjacent for cache effect */
+-	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+ 	bool tx_path_in_lpi_mode;
+-	struct timer_list txtimer;
+ 	bool tso;
+ 
+ 	unsigned int dma_buf_sz;
+@@ -137,6 +144,9 @@ struct stmmac_priv {
+ 	/* TX Queue */
+ 	struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES];
+ 
++	/* Generic channel for NAPI */
++	struct stmmac_channel channel[STMMAC_CH_MAX];
++
+ 	bool oldlink;
+ 	int speed;
+ 	int oldduplex;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c579d98b9666..1c6ba74e294b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -147,12 +147,14 @@ static void stmmac_verify_args(void)
+ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_disable(&rx_q->napi);
++		napi_disable(&ch->napi);
+ 	}
+ }
+ 
+@@ -163,12 +165,14 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ static void stmmac_enable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_enable(&rx_q->napi);
++		napi_enable(&ch->napi);
+ 	}
+ }
+ 
+@@ -1822,18 +1826,18 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
+  * @queue: TX queue index
+  * Description: it reclaims the transmit resources after transmission completes.
+  */
+-static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
++static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+ {
+ 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+ 	unsigned int bytes_compl = 0, pkts_compl = 0;
+-	unsigned int entry;
++	unsigned int entry, count = 0;
+ 
+-	netif_tx_lock(priv->dev);
++	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+ 
+ 	priv->xstats.tx_clean++;
+ 
+ 	entry = tx_q->dirty_tx;
+-	while (entry != tx_q->cur_tx) {
++	while ((entry != tx_q->cur_tx) && (count < budget)) {
+ 		struct sk_buff *skb = tx_q->tx_skbuff[entry];
+ 		struct dma_desc *p;
+ 		int status;
+@@ -1849,6 +1853,8 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		if (unlikely(status & tx_dma_own))
+ 			break;
+ 
++		count++;
++
+ 		/* Make sure descriptor fields are read after reading
+ 		 * the own bit.
+ 		 */
+@@ -1916,7 +1922,10 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		stmmac_enable_eee_mode(priv);
+ 		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
+ 	}
+-	netif_tx_unlock(priv->dev);
++
++	__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
++
++	return count;
+ }
+ 
+ /**
+@@ -1999,6 +2008,33 @@ static bool stmmac_safety_feat_interrupt(struct stmmac_priv *priv)
+ 	return false;
+ }
+ 
++static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
++{
++	int status = stmmac_dma_interrupt_status(priv, priv->ioaddr,
++						 &priv->xstats, chan);
++	struct stmmac_channel *ch = &priv->channel[chan];
++	bool needs_work = false;
++
++	if ((status & handle_rx) && ch->has_rx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_rx;
++	}
++
++	if ((status & handle_tx) && ch->has_tx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_tx;
++	}
++
++	if (needs_work && napi_schedule_prep(&ch->napi)) {
++		stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
++		__napi_schedule(&ch->napi);
++	}
++
++	return status;
++}
++
+ /**
+  * stmmac_dma_interrupt - DMA ISR
+  * @priv: driver private structure
+@@ -2013,57 +2049,14 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
+ 	u32 channels_to_check = tx_channel_count > rx_channel_count ?
+ 				tx_channel_count : rx_channel_count;
+ 	u32 chan;
+-	bool poll_scheduled = false;
+ 	int status[max_t(u32, MTL_MAX_TX_QUEUES, MTL_MAX_RX_QUEUES)];
+ 
+ 	/* Make sure we never check beyond our status buffer. */
+ 	if (WARN_ON_ONCE(channels_to_check > ARRAY_SIZE(status)))
+ 		channels_to_check = ARRAY_SIZE(status);
+ 
+-	/* Each DMA channel can be used for rx and tx simultaneously, yet
+-	 * napi_struct is embedded in struct stmmac_rx_queue rather than in a
+-	 * stmmac_channel struct.
+-	 * Because of this, stmmac_poll currently checks (and possibly wakes)
+-	 * all tx queues rather than just a single tx queue.
+-	 */
+ 	for (chan = 0; chan < channels_to_check; chan++)
+-		status[chan] = stmmac_dma_interrupt_status(priv, priv->ioaddr,
+-				&priv->xstats, chan);
+-
+-	for (chan = 0; chan < rx_channel_count; chan++) {
+-		if (likely(status[chan] & handle_rx)) {
+-			struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
+-
+-			if (likely(napi_schedule_prep(&rx_q->napi))) {
+-				stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+-				__napi_schedule(&rx_q->napi);
+-				poll_scheduled = true;
+-			}
+-		}
+-	}
+-
+-	/* If we scheduled poll, we already know that tx queues will be checked.
+-	 * If we didn't schedule poll, see if any DMA channel (used by tx) has a
+-	 * completed transmission, if so, call stmmac_poll (once).
+-	 */
+-	if (!poll_scheduled) {
+-		for (chan = 0; chan < tx_channel_count; chan++) {
+-			if (status[chan] & handle_tx) {
+-				/* It doesn't matter what rx queue we choose
+-				 * here. We use 0 since it always exists.
+-				 */
+-				struct stmmac_rx_queue *rx_q =
+-					&priv->rx_queue[0];
+-
+-				if (likely(napi_schedule_prep(&rx_q->napi))) {
+-					stmmac_disable_dma_irq(priv,
+-							priv->ioaddr, chan);
+-					__napi_schedule(&rx_q->napi);
+-				}
+-				break;
+-			}
+-		}
+-	}
++		status[chan] = stmmac_napi_check(priv, chan);
+ 
+ 	for (chan = 0; chan < tx_channel_count; chan++) {
+ 		if (unlikely(status[chan] & tx_hard_error_bump_tc)) {
+@@ -2193,8 +2186,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 		stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
+ 				    tx_q->dma_tx_phy, chan);
+ 
+-		tx_q->tx_tail_addr = tx_q->dma_tx_phy +
+-			    (DMA_TX_SIZE * sizeof(struct dma_desc));
++		tx_q->tx_tail_addr = tx_q->dma_tx_phy;
+ 		stmmac_set_tx_tail_ptr(priv, priv->ioaddr,
+ 				       tx_q->tx_tail_addr, chan);
+ 	}
+@@ -2212,6 +2204,13 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 	return ret;
+ }
+ 
++static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue)
++{
++	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
++
++	mod_timer(&tx_q->txtimer, STMMAC_COAL_TIMER(priv->tx_coal_timer));
++}
++
+ /**
+  * stmmac_tx_timer - mitigation sw timer for tx.
+  * @data: data pointer
+@@ -2220,13 +2219,14 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+  */
+ static void stmmac_tx_timer(struct timer_list *t)
+ {
+-	struct stmmac_priv *priv = from_timer(priv, t, txtimer);
+-	u32 tx_queues_count = priv->plat->tx_queues_to_use;
+-	u32 queue;
++	struct stmmac_tx_queue *tx_q = from_timer(tx_q, t, txtimer);
++	struct stmmac_priv *priv = tx_q->priv_data;
++	struct stmmac_channel *ch;
++
++	ch = &priv->channel[tx_q->queue_index];
+ 
+-	/* let's scan all the tx queues */
+-	for (queue = 0; queue < tx_queues_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (likely(napi_schedule_prep(&ch->napi)))
++		__napi_schedule(&ch->napi);
+ }
+ 
+ /**
+@@ -2239,11 +2239,17 @@ static void stmmac_tx_timer(struct timer_list *t)
+  */
+ static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+ {
++	u32 tx_channel_count = priv->plat->tx_queues_to_use;
++	u32 chan;
++
+ 	priv->tx_coal_frames = STMMAC_TX_FRAMES;
+ 	priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+-	timer_setup(&priv->txtimer, stmmac_tx_timer, 0);
+-	priv->txtimer.expires = STMMAC_COAL_TIMER(priv->tx_coal_timer);
+-	add_timer(&priv->txtimer);
++
++	for (chan = 0; chan < tx_channel_count; chan++) {
++		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
++
++		timer_setup(&tx_q->txtimer, stmmac_tx_timer, 0);
++	}
+ }
+ 
+ static void stmmac_set_rings_length(struct stmmac_priv *priv)
+@@ -2571,6 +2577,7 @@ static void stmmac_hw_teardown(struct net_device *dev)
+ static int stmmac_open(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 	int ret;
+ 
+ 	stmmac_check_ether_addr(priv);
+@@ -2667,7 +2674,9 @@ irq_error:
+ 	if (dev->phydev)
+ 		phy_stop(dev->phydev);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
++
+ 	stmmac_hw_teardown(dev);
+ init_error:
+ 	free_dma_desc_resources(priv);
+@@ -2687,6 +2696,7 @@ dma_desc_error:
+ static int stmmac_release(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 
+ 	if (priv->eee_enabled)
+ 		del_timer_sync(&priv->eee_ctrl_timer);
+@@ -2701,7 +2711,8 @@ static int stmmac_release(struct net_device *dev)
+ 
+ 	stmmac_disable_all_queues(priv);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
+ 
+ 	/* Free the IRQ lines */
+ 	free_irq(dev->irq, dev);
+@@ -2915,14 +2926,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	priv->xstats.tx_tso_nfrags += nfrags;
+ 
+ 	/* Manage tx mitigation */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -2971,6 +2981,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3125,14 +3136,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * This approach takes care about the fragments: desc is the first
+ 	 * element in case of no SG.
+ 	 */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -3178,6 +3188,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
++
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3298,6 +3310,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ {
+ 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	struct stmmac_channel *ch = &priv->channel[queue];
+ 	unsigned int entry = rx_q->cur_rx;
+ 	int coe = priv->hw->rx_csum;
+ 	unsigned int next_entry;
+@@ -3467,7 +3480,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			else
+ 				skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+-			napi_gro_receive(&rx_q->napi, skb);
++			napi_gro_receive(&ch->napi, skb);
+ 
+ 			priv->dev->stats.rx_packets++;
+ 			priv->dev->stats.rx_bytes += frame_len;
+@@ -3490,27 +3503,33 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+  *  Description :
+  *  To look at the incoming frames and clear the tx resources.
+  */
+-static int stmmac_poll(struct napi_struct *napi, int budget)
++static int stmmac_napi_poll(struct napi_struct *napi, int budget)
+ {
+-	struct stmmac_rx_queue *rx_q =
+-		container_of(napi, struct stmmac_rx_queue, napi);
+-	struct stmmac_priv *priv = rx_q->priv_data;
+-	u32 tx_count = priv->plat->tx_queues_to_use;
+-	u32 chan = rx_q->queue_index;
+-	int work_done = 0;
+-	u32 queue;
++	struct stmmac_channel *ch =
++		container_of(napi, struct stmmac_channel, napi);
++	struct stmmac_priv *priv = ch->priv_data;
++	int work_done = 0, work_rem = budget;
++	u32 chan = ch->index;
+ 
+ 	priv->xstats.napi_poll++;
+ 
+-	/* check all the queues */
+-	for (queue = 0; queue < tx_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (ch->has_tx) {
++		int done = stmmac_tx_clean(priv, work_rem, chan);
+ 
+-	work_done = stmmac_rx(priv, budget, rx_q->queue_index);
+-	if (work_done < budget) {
+-		napi_complete_done(napi, work_done);
+-		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++		work_done += done;
++		work_rem -= done;
++	}
++
++	if (ch->has_rx) {
++		int done = stmmac_rx(priv, work_rem, chan);
++
++		work_done += done;
++		work_rem -= done;
+ 	}
++
++	if (work_done < budget && napi_complete_done(napi, work_done))
++		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++
+ 	return work_done;
+ }
+ 
+@@ -4170,8 +4189,8 @@ int stmmac_dvr_probe(struct device *device,
+ {
+ 	struct net_device *ndev = NULL;
+ 	struct stmmac_priv *priv;
++	u32 queue, maxq;
+ 	int ret = 0;
+-	u32 queue;
+ 
+ 	ndev = alloc_etherdev_mqs(sizeof(struct stmmac_priv),
+ 				  MTL_MAX_TX_QUEUES,
+@@ -4291,11 +4310,22 @@ int stmmac_dvr_probe(struct device *device,
+ 			 "Enable RX Mitigation via HW Watchdog Timer\n");
+ 	}
+ 
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	/* Setup channels NAPI */
++	maxq = max(priv->plat->rx_queues_to_use, priv->plat->tx_queues_to_use);
+ 
+-		netif_napi_add(ndev, &rx_q->napi, stmmac_poll,
+-			       (8 * priv->plat->rx_queues_to_use));
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
++
++		ch->priv_data = priv;
++		ch->index = queue;
++
++		if (queue < priv->plat->rx_queues_to_use)
++			ch->has_rx = true;
++		if (queue < priv->plat->tx_queues_to_use)
++			ch->has_tx = true;
++
++		netif_napi_add(ndev, &ch->napi, stmmac_napi_poll,
++			       NAPI_POLL_WEIGHT);
+ 	}
+ 
+ 	mutex_init(&priv->lock);
+@@ -4341,10 +4371,10 @@ error_netdev_register:
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI)
+ 		stmmac_mdio_unregister(ndev);
+ error_mdio_register:
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		netif_napi_del(&rx_q->napi);
++		netif_napi_del(&ch->napi);
+ 	}
+ error_hw_init:
+ 	destroy_workqueue(priv->wq);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 72da77b94ecd..8a3867cec67a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -67,7 +67,7 @@ static int dwmac1000_validate_mcast_bins(int mcast_bins)
+  * Description:
+  * This function validates the number of Unicast address entries supported
+  * by a particular Synopsys 10/100/1000 controller. The Synopsys controller
+- * supports 1, 32, 64, or 128 Unicast filter entries for it's Unicast filter
++ * supports 1..32, 64, or 128 Unicast filter entries for it's Unicast filter
+  * logic. This function validates a valid, supported configuration is
+  * selected, and defaults to 1 Unicast address if an unsupported
+  * configuration is selected.
+@@ -77,8 +77,7 @@ static int dwmac1000_validate_ucast_entries(int ucast_entries)
+ 	int x = ucast_entries;
+ 
+ 	switch (x) {
+-	case 1:
+-	case 32:
++	case 1 ... 32:
+ 	case 64:
+ 	case 128:
+ 		break;
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 9263d638bd6d..f932923f7d56 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -41,6 +41,7 @@ config TI_DAVINCI_MDIO
+ config TI_DAVINCI_CPDMA
+ 	tristate "TI DaVinci CPDMA Support"
+ 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
++	select GENERIC_ALLOCATOR
+ 	---help---
+ 	  This driver supports TI's DaVinci CPDMA dma engine.
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index af4dc4425be2..5827fccd4f29 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -717,6 +717,30 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+ 	return 0;
+ }
+ 
++static int __phylink_connect_phy(struct phylink *pl, struct phy_device *phy,
++		phy_interface_t interface)
++{
++	int ret;
++
++	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
++		    (pl->link_an_mode == MLO_AN_INBAND &&
++		     phy_interface_mode_is_8023z(interface))))
++		return -EINVAL;
++
++	if (pl->phydev)
++		return -EBUSY;
++
++	ret = phy_attach_direct(pl->netdev, phy, 0, interface);
++	if (ret)
++		return ret;
++
++	ret = phylink_bringup_phy(pl, phy);
++	if (ret)
++		phy_detach(phy);
++
++	return ret;
++}
++
+ /**
+  * phylink_connect_phy() - connect a PHY to the phylink instance
+  * @pl: a pointer to a &struct phylink returned from phylink_create()
+@@ -734,31 +758,13 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+  */
+ int phylink_connect_phy(struct phylink *pl, struct phy_device *phy)
+ {
+-	int ret;
+-
+-	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
+-		    (pl->link_an_mode == MLO_AN_INBAND &&
+-		     phy_interface_mode_is_8023z(pl->link_interface))))
+-		return -EINVAL;
+-
+-	if (pl->phydev)
+-		return -EBUSY;
+-
+ 	/* Use PHY device/driver interface */
+ 	if (pl->link_interface == PHY_INTERFACE_MODE_NA) {
+ 		pl->link_interface = phy->interface;
+ 		pl->link_config.interface = pl->link_interface;
+ 	}
+ 
+-	ret = phy_attach_direct(pl->netdev, phy, 0, pl->link_interface);
+-	if (ret)
+-		return ret;
+-
+-	ret = phylink_bringup_phy(pl, phy);
+-	if (ret)
+-		phy_detach(phy);
+-
+-	return ret;
++	return __phylink_connect_phy(pl, phy, pl->link_interface);
+ }
+ EXPORT_SYMBOL_GPL(phylink_connect_phy);
+ 
+@@ -1672,7 +1678,9 @@ static void phylink_sfp_link_up(void *upstream)
+ 
+ static int phylink_sfp_connect_phy(void *upstream, struct phy_device *phy)
+ {
+-	return phylink_connect_phy(upstream, phy);
++	struct phylink *pl = upstream;
++
++	return __phylink_connect_phy(upstream, phy, pl->link_config.interface);
+ }
+ 
+ static void phylink_sfp_disconnect_phy(void *upstream)
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 740655261e5b..83060fb349f4 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -349,6 +349,7 @@ static int sfp_register_bus(struct sfp_bus *bus)
+ 	}
+ 	if (bus->started)
+ 		bus->socket_ops->start(bus->sfp);
++	bus->netdev->sfp_bus = bus;
+ 	bus->registered = true;
+ 	return 0;
+ }
+@@ -357,6 +358,7 @@ static void sfp_unregister_bus(struct sfp_bus *bus)
+ {
+ 	const struct sfp_upstream_ops *ops = bus->upstream_ops;
+ 
++	bus->netdev->sfp_bus = NULL;
+ 	if (bus->registered) {
+ 		if (bus->started)
+ 			bus->socket_ops->stop(bus->sfp);
+@@ -438,7 +440,6 @@ static void sfp_upstream_clear(struct sfp_bus *bus)
+ {
+ 	bus->upstream_ops = NULL;
+ 	bus->upstream = NULL;
+-	bus->netdev->sfp_bus = NULL;
+ 	bus->netdev = NULL;
+ }
+ 
+@@ -467,7 +468,6 @@ struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
+ 		bus->upstream_ops = ops;
+ 		bus->upstream = upstream;
+ 		bus->netdev = ndev;
+-		ndev->sfp_bus = bus;
+ 
+ 		if (bus->sfp) {
+ 			ret = sfp_register_bus(bus);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index b070959737ff..286c947cb48d 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1172,6 +1172,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		return -EBUSY;
+ 	}
+ 
++	if (dev == port_dev) {
++		NL_SET_ERR_MSG(extack, "Cannot enslave team device to itself");
++		netdev_err(dev, "Cannot enslave team device to itself\n");
++		return -EINVAL;
++	}
++
+ 	if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ 	    vlan_uses_dev(dev)) {
+ 		NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f5727baac84a..725dd63f8413 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -181,6 +181,7 @@ struct tun_file {
+ 	};
+ 	struct napi_struct napi;
+ 	bool napi_enabled;
++	bool napi_frags_enabled;
+ 	struct mutex napi_mutex;	/* Protects access to the above napi */
+ 	struct list_head next;
+ 	struct tun_struct *detached;
+@@ -312,32 +313,32 @@ static int tun_napi_poll(struct napi_struct *napi, int budget)
+ }
+ 
+ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile,
+-			  bool napi_en)
++			  bool napi_en, bool napi_frags)
+ {
+ 	tfile->napi_enabled = napi_en;
++	tfile->napi_frags_enabled = napi_en && napi_frags;
+ 	if (napi_en) {
+ 		netif_napi_add(tun->dev, &tfile->napi, tun_napi_poll,
+ 			       NAPI_POLL_WEIGHT);
+ 		napi_enable(&tfile->napi);
+-		mutex_init(&tfile->napi_mutex);
+ 	}
+ }
+ 
+-static void tun_napi_disable(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_disable(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		napi_disable(&tfile->napi);
+ }
+ 
+-static void tun_napi_del(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_del(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		netif_napi_del(&tfile->napi);
+ }
+ 
+-static bool tun_napi_frags_enabled(const struct tun_struct *tun)
++static bool tun_napi_frags_enabled(const struct tun_file *tfile)
+ {
+-	return READ_ONCE(tun->flags) & IFF_NAPI_FRAGS;
++	return tfile->napi_frags_enabled;
+ }
+ 
+ #ifdef CONFIG_TUN_VNET_CROSS_LE
+@@ -688,8 +689,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 	tun = rtnl_dereference(tfile->tun);
+ 
+ 	if (tun && clean) {
+-		tun_napi_disable(tun, tfile);
+-		tun_napi_del(tun, tfile);
++		tun_napi_disable(tfile);
++		tun_napi_del(tfile);
+ 	}
+ 
+ 	if (tun && !tfile->detached) {
+@@ -756,7 +757,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+ 		BUG_ON(!tfile);
+-		tun_napi_disable(tun, tfile);
++		tun_napi_disable(tfile);
+ 		tfile->socket.sk->sk_shutdown = RCV_SHUTDOWN;
+ 		tfile->socket.sk->sk_data_ready(tfile->socket.sk);
+ 		RCU_INIT_POINTER(tfile->tun, NULL);
+@@ -772,7 +773,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	synchronize_net();
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+-		tun_napi_del(tun, tfile);
++		tun_napi_del(tfile);
+ 		/* Drop read queue */
+ 		tun_queue_purge(tfile);
+ 		xdp_rxq_info_unreg(&tfile->xdp_rxq);
+@@ -791,7 +792,7 @@ static void tun_detach_all(struct net_device *dev)
+ }
+ 
+ static int tun_attach(struct tun_struct *tun, struct file *file,
+-		      bool skip_filter, bool napi)
++		      bool skip_filter, bool napi, bool napi_frags)
+ {
+ 	struct tun_file *tfile = file->private_data;
+ 	struct net_device *dev = tun->dev;
+@@ -864,7 +865,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file,
+ 		tun_enable_queue(tfile);
+ 	} else {
+ 		sock_hold(&tfile->sk);
+-		tun_napi_init(tun, tfile, napi);
++		tun_napi_init(tun, tfile, napi, napi_frags);
+ 	}
+ 
+ 	tun_set_real_num_queues(tun);
+@@ -1174,13 +1175,11 @@ static void tun_poll_controller(struct net_device *dev)
+ 		struct tun_file *tfile;
+ 		int i;
+ 
+-		if (tun_napi_frags_enabled(tun))
+-			return;
+-
+ 		rcu_read_lock();
+ 		for (i = 0; i < tun->numqueues; i++) {
+ 			tfile = rcu_dereference(tun->tfiles[i]);
+-			if (tfile->napi_enabled)
++			if (!tun_napi_frags_enabled(tfile) &&
++			    tfile->napi_enabled)
+ 				napi_schedule(&tfile->napi);
+ 		}
+ 		rcu_read_unlock();
+@@ -1751,7 +1750,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	int err;
+ 	u32 rxhash = 0;
+ 	int skb_xdp = 1;
+-	bool frags = tun_napi_frags_enabled(tun);
++	bool frags = tun_napi_frags_enabled(tfile);
+ 
+ 	if (!(tun->dev->flags & IFF_UP))
+ 		return -EIO;
+@@ -2576,7 +2575,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			return err;
+ 
+ 		err = tun_attach(tun, file, ifr->ifr_flags & IFF_NOFILTER,
+-				 ifr->ifr_flags & IFF_NAPI);
++				 ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -2674,7 +2674,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			      (ifr->ifr_flags & TUN_FEATURES);
+ 
+ 		INIT_LIST_HEAD(&tun->disabled);
+-		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI);
++		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			goto err_free_flow;
+ 
+@@ -2823,7 +2824,8 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
+ 		ret = security_tun_dev_attach_queue(tun->security);
+ 		if (ret < 0)
+ 			goto unlock;
+-		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI);
++		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI,
++				 tun->flags & IFF_NAPI_FRAGS);
+ 	} else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
+ 		tun = rtnl_dereference(tfile->tun);
+ 		if (!tun || !(tun->flags & IFF_MULTI_QUEUE) || tfile->detached)
+@@ -3241,6 +3243,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ 		return -ENOMEM;
+ 	}
+ 
++	mutex_init(&tfile->napi_mutex);
+ 	RCU_INIT_POINTER(tfile->tun, NULL);
+ 	tfile->flags = 0;
+ 	tfile->ifindex = 0;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 1e95d37c6e27..1bb01a9e5f92 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1234,6 +1234,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)},	/* Olivetti Olicard 500 */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0053, 4)},	/* Cinterion PHxx,PXxx */
++	{QMI_FIXED_INTF(0x1e2d, 0x0063, 10)},	/* Cinterion ALASxx (1 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 4)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 5)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 05553d252446..b64b1ee56d2d 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -1517,6 +1517,7 @@ static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	if (pdata) {
++		cancel_work_sync(&pdata->set_multicast);
+ 		netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ 		kfree(pdata);
+ 		pdata = NULL;
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index e857cb3335f6..93a6c43a2354 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -3537,6 +3537,7 @@ static size_t vxlan_get_size(const struct net_device *dev)
+ 		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LINK */
+ 		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_LOCAL{6} */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL_INHERIT */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TOS */
+ 		nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_LEARNING */
+@@ -3601,6 +3602,8 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ 	}
+ 
+ 	if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
++	    nla_put_u8(skb, IFLA_VXLAN_TTL_INHERIT,
++		       !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
+ 	    nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_LEARNING,
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index d4d4a55f09f8..c6f375e9cce7 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -89,6 +89,9 @@ static enum pci_protocol_version_t pci_protocol_version;
+ 
+ #define STATUS_REVISION_MISMATCH 0xC0000059
+ 
++/* space for 32bit serial number as string */
++#define SLOT_NAME_SIZE 11
++
+ /*
+  * Message Types
+  */
+@@ -494,6 +497,7 @@ struct hv_pci_dev {
+ 	struct list_head list_entry;
+ 	refcount_t refs;
+ 	enum hv_pcichild_state state;
++	struct pci_slot *pci_slot;
+ 	struct pci_function_description desc;
+ 	bool reported_missing;
+ 	struct hv_pcibus_device *hbus;
+@@ -1457,6 +1461,34 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
+ 	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ }
+ 
++/*
++ * Assign entries in sysfs pci slot directory.
++ *
++ * Note that this function does not need to lock the children list
++ * because it is called from pci_devices_present_work which
++ * is serialized with hv_eject_device_work because they are on the
++ * same ordered workqueue. Therefore hbus->children list will not change
++ * even when pci_create_slot sleeps.
++ */
++static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
++{
++	struct hv_pci_dev *hpdev;
++	char name[SLOT_NAME_SIZE];
++	int slot_nr;
++
++	list_for_each_entry(hpdev, &hbus->children, list_entry) {
++		if (hpdev->pci_slot)
++			continue;
++
++		slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
++		snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
++		hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
++					  name, NULL);
++		if (!hpdev->pci_slot)
++			pr_warn("pci_create slot %s failed\n", name);
++	}
++}
++
+ /**
+  * create_root_hv_pci_bus() - Expose a new root PCI bus
+  * @hbus:	Root PCI bus, as understood by this driver
+@@ -1480,6 +1512,7 @@ static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
+ 	pci_lock_rescan_remove();
+ 	pci_scan_child_bus(hbus->pci_bus);
+ 	pci_bus_assign_resources(hbus->pci_bus);
++	hv_pci_assign_slots(hbus);
+ 	pci_bus_add_devices(hbus->pci_bus);
+ 	pci_unlock_rescan_remove();
+ 	hbus->state = hv_pcibus_installed;
+@@ -1742,6 +1775,7 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		 */
+ 		pci_lock_rescan_remove();
+ 		pci_scan_child_bus(hbus->pci_bus);
++		hv_pci_assign_slots(hbus);
+ 		pci_unlock_rescan_remove();
+ 		break;
+ 
+@@ -1858,6 +1892,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	list_del(&hpdev->list_entry);
+ 	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
+ 
++	if (hpdev->pci_slot)
++		pci_destroy_slot(hpdev->pci_slot);
++
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+ 	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index a6347d487635..1321104b9b9f 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -474,7 +474,13 @@ static int armpmu_filter_match(struct perf_event *event)
+ {
+ 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
+ 	unsigned int cpu = smp_processor_id();
+-	return cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	int ret;
++
++	ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	if (ret && armpmu->filter_match)
++		return armpmu->filter_match(event);
++
++	return ret;
+ }
+ 
+ static ssize_t armpmu_cpumask_show(struct device *dev,
+diff --git a/drivers/pinctrl/intel/pinctrl-cannonlake.c b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+index 6243e7d95e7e..d36afb17f5e4 100644
+--- a/drivers/pinctrl/intel/pinctrl-cannonlake.c
++++ b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+@@ -382,7 +382,7 @@ static const struct intel_padgroup cnlh_community1_gpps[] = {
+ static const struct intel_padgroup cnlh_community3_gpps[] = {
+ 	CNL_GPP(0, 155, 178, 192),		/* GPP_K */
+ 	CNL_GPP(1, 179, 202, 224),		/* GPP_H */
+-	CNL_GPP(2, 203, 215, 258),		/* GPP_E */
++	CNL_GPP(2, 203, 215, 256),		/* GPP_E */
+ 	CNL_GPP(3, 216, 239, 288),		/* GPP_F */
+ 	CNL_GPP(4, 240, 248, CNL_NO_GPIO),	/* SPI */
+ };
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 022307dd4b54..bef6ff2e8f4f 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -636,6 +636,14 @@ static int mcp23s08_irq_setup(struct mcp23s08 *mcp)
+ 		return err;
+ 	}
+ 
++	return 0;
++}
++
++static int mcp23s08_irqchip_setup(struct mcp23s08 *mcp)
++{
++	struct gpio_chip *chip = &mcp->chip;
++	int err;
++
+ 	err =  gpiochip_irqchip_add_nested(chip,
+ 					   &mcp23s08_irq_chip,
+ 					   0,
+@@ -912,7 +920,7 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 	}
+ 
+ 	if (mcp->irq && mcp->irq_controller) {
+-		ret = mcp23s08_irq_setup(mcp);
++		ret = mcp23s08_irqchip_setup(mcp);
+ 		if (ret)
+ 			goto fail;
+ 	}
+@@ -944,6 +952,9 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 		goto fail;
+ 	}
+ 
++	if (mcp->irq)
++		ret = mcp23s08_irq_setup(mcp);
++
+ fail:
+ 	if (ret < 0)
+ 		dev_dbg(dev, "can't setup chip %d, --> %d\n", addr, ret);
+diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
+index dbe7c7ac9ac8..fd77e46eb3b2 100644
+--- a/drivers/s390/cio/vfio_ccw_cp.c
++++ b/drivers/s390/cio/vfio_ccw_cp.c
+@@ -163,7 +163,7 @@ static bool pfn_array_table_iova_pinned(struct pfn_array_table *pat,
+ 
+ 	for (i = 0; i < pat->pat_nr; i++, pa++)
+ 		for (j = 0; j < pa->pa_nr; j++)
+-			if (pa->pa_iova_pfn[i] == iova_pfn)
++			if (pa->pa_iova_pfn[j] == iova_pfn)
+ 				return true;
+ 
+ 	return false;
+diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
+index fecf96f0225c..199d3ba1916d 100644
+--- a/drivers/scsi/qla2xxx/qla_target.h
++++ b/drivers/scsi/qla2xxx/qla_target.h
+@@ -374,8 +374,8 @@ struct atio_from_isp {
+ static inline int fcpcmd_is_corrupted(struct atio *atio)
+ {
+ 	if (atio->entry_type == ATIO_TYPE7 &&
+-	    (le16_to_cpu(atio->attr_n_length & FCP_CMD_LENGTH_MASK) <
+-	    FCP_CMD_LENGTH_MIN))
++	    ((le16_to_cpu(atio->attr_n_length) & FCP_CMD_LENGTH_MASK) <
++	     FCP_CMD_LENGTH_MIN))
+ 		return 1;
+ 	else
+ 		return 0;
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index a4ecc9d77624..8e1c3cff567a 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1419,7 +1419,8 @@ static void iscsit_do_crypto_hash_buf(struct ahash_request *hash,
+ 
+ 	sg_init_table(sg, ARRAY_SIZE(sg));
+ 	sg_set_buf(sg, buf, payload_length);
+-	sg_set_buf(sg + 1, pad_bytes, padding);
++	if (padding)
++		sg_set_buf(sg + 1, pad_bytes, padding);
+ 
+ 	ahash_request_set_crypt(hash, sg, data_crc, payload_length + padding);
+ 
+@@ -3913,10 +3914,14 @@ static bool iscsi_target_check_conn_state(struct iscsi_conn *conn)
+ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ {
+ 	int ret;
+-	u8 buffer[ISCSI_HDR_LEN], opcode;
++	u8 *buffer, opcode;
+ 	u32 checksum = 0, digest = 0;
+ 	struct kvec iov;
+ 
++	buffer = kcalloc(ISCSI_HDR_LEN, sizeof(*buffer), GFP_KERNEL);
++	if (!buffer)
++		return;
++
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3924,7 +3929,6 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		 */
+ 		iscsit_thread_check_cpumask(conn, current, 0);
+ 
+-		memset(buffer, 0, ISCSI_HDR_LEN);
+ 		memset(&iov, 0, sizeof(struct kvec));
+ 
+ 		iov.iov_base	= buffer;
+@@ -3933,7 +3937,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+ 		if (ret != ISCSI_HDR_LEN) {
+ 			iscsit_rx_thread_wait_for_tcp(conn);
+-			return;
++			break;
+ 		}
+ 
+ 		if (conn->conn_ops->HeaderDigest) {
+@@ -3943,7 +3947,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			ret = rx_data(conn, &iov, 1, ISCSI_CRC_LEN);
+ 			if (ret != ISCSI_CRC_LEN) {
+ 				iscsit_rx_thread_wait_for_tcp(conn);
+-				return;
++				break;
+ 			}
+ 
+ 			iscsit_do_crypto_hash_buf(conn->conn_rx_hash, buffer,
+@@ -3967,7 +3971,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		}
+ 
+ 		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+-			return;
++			break;
+ 
+ 		opcode = buffer[0] & ISCSI_OPCODE_MASK;
+ 
+@@ -3978,13 +3982,15 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			" while in Discovery Session, rejecting.\n", opcode);
+ 			iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
+ 					  buffer);
+-			return;
++			break;
+ 		}
+ 
+ 		ret = iscsi_target_rx_opcode(conn, buffer);
+ 		if (ret < 0)
+-			return;
++			break;
+ 	}
++
++	kfree(buffer);
+ }
+ 
+ int iscsi_target_rx_thread(void *arg)
+diff --git a/drivers/video/fbdev/aty/atyfb.h b/drivers/video/fbdev/aty/atyfb.h
+index 8235b285dbb2..d09bab3bf224 100644
+--- a/drivers/video/fbdev/aty/atyfb.h
++++ b/drivers/video/fbdev/aty/atyfb.h
+@@ -333,6 +333,8 @@ extern const struct aty_pll_ops aty_pll_ct; /* Integrated */
+ extern void aty_set_pll_ct(const struct fb_info *info, const union aty_pll *pll);
+ extern u8 aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
++extern const u8 aty_postdividers[8];
++
+ 
+     /*
+      *  Hardware cursor support
+@@ -359,7 +361,6 @@ static inline void wait_for_idle(struct atyfb_par *par)
+ 
+ extern void aty_reset_engine(const struct atyfb_par *par);
+ extern void aty_init_engine(struct atyfb_par *par, struct fb_info *info);
+-extern u8   aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
+ void atyfb_copyarea(struct fb_info *info, const struct fb_copyarea *area);
+ void atyfb_fillrect(struct fb_info *info, const struct fb_fillrect *rect);
+diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
+index a9a8272f7a6e..05111e90f168 100644
+--- a/drivers/video/fbdev/aty/atyfb_base.c
++++ b/drivers/video/fbdev/aty/atyfb_base.c
+@@ -3087,17 +3087,18 @@ static int atyfb_setup_sparc(struct pci_dev *pdev, struct fb_info *info,
+ 		/*
+ 		 * PLL Reference Divider M:
+ 		 */
+-		M = pll_regs[2];
++		M = pll_regs[PLL_REF_DIV];
+ 
+ 		/*
+ 		 * PLL Feedback Divider N (Dependent on CLOCK_CNTL):
+ 		 */
+-		N = pll_regs[7 + (clock_cntl & 3)];
++		N = pll_regs[VCLK0_FB_DIV + (clock_cntl & 3)];
+ 
+ 		/*
+ 		 * PLL Post Divider P (Dependent on CLOCK_CNTL):
+ 		 */
+-		P = 1 << (pll_regs[6] >> ((clock_cntl & 3) << 1));
++		P = aty_postdividers[((pll_regs[VCLK_POST_DIV] >> ((clock_cntl & 3) << 1)) & 3) |
++		                     ((pll_regs[PLL_EXT_CNTL] >> (2 + (clock_cntl & 3))) & 4)];
+ 
+ 		/*
+ 		 * PLL Divider Q:
+diff --git a/drivers/video/fbdev/aty/mach64_ct.c b/drivers/video/fbdev/aty/mach64_ct.c
+index 74a62aa193c0..f87cc81f4fa2 100644
+--- a/drivers/video/fbdev/aty/mach64_ct.c
++++ b/drivers/video/fbdev/aty/mach64_ct.c
+@@ -115,7 +115,7 @@ static void aty_st_pll_ct(int offset, u8 val, const struct atyfb_par *par)
+  */
+ 
+ #define Maximum_DSP_PRECISION 7
+-static u8 postdividers[] = {1,2,4,8,3};
++const u8 aty_postdividers[8] = {1,2,4,8,3,5,6,12};
+ 
+ static int aty_dsp_gt(const struct fb_info *info, u32 bpp, struct pll_ct *pll)
+ {
+@@ -222,7 +222,7 @@ static int aty_valid_pll_ct(const struct fb_info *info, u32 vclk_per, struct pll
+ 		pll->vclk_post_div += (q <  64*8);
+ 		pll->vclk_post_div += (q <  32*8);
+ 	}
+-	pll->vclk_post_div_real = postdividers[pll->vclk_post_div];
++	pll->vclk_post_div_real = aty_postdividers[pll->vclk_post_div];
+ 	//    pll->vclk_post_div <<= 6;
+ 	pll->vclk_fb_div = q * pll->vclk_post_div_real / 8;
+ 	pllvclk = (1000000 * 2 * pll->vclk_fb_div) /
+@@ -513,7 +513,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		u8 mclk_fb_div, pll_ext_cntl;
+ 		pll->ct.pll_ref_div = aty_ld_pll_ct(PLL_REF_DIV, par);
+ 		pll_ext_cntl = aty_ld_pll_ct(PLL_EXT_CNTL, par);
+-		pll->ct.xclk_post_div_real = postdividers[pll_ext_cntl & 0x07];
++		pll->ct.xclk_post_div_real = aty_postdividers[pll_ext_cntl & 0x07];
+ 		mclk_fb_div = aty_ld_pll_ct(MCLK_FB_DIV, par);
+ 		if (pll_ext_cntl & PLL_MFB_TIMES_4_2B)
+ 			mclk_fb_div <<= 1;
+@@ -535,7 +535,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		xpost_div += (q <  64*8);
+ 		xpost_div += (q <  32*8);
+ 	}
+-	pll->ct.xclk_post_div_real = postdividers[xpost_div];
++	pll->ct.xclk_post_div_real = aty_postdividers[xpost_div];
+ 	pll->ct.mclk_fb_div = q * pll->ct.xclk_post_div_real / 8;
+ 
+ #ifdef CONFIG_PPC
+@@ -584,7 +584,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 			mpost_div += (q <  64*8);
+ 			mpost_div += (q <  32*8);
+ 		}
+-		sclk_post_div_real = postdividers[mpost_div];
++		sclk_post_div_real = aty_postdividers[mpost_div];
+ 		pll->ct.sclk_fb_div = q * sclk_post_div_real / 8;
+ 		pll->ct.spll_cntl2 = mpost_div << 4;
+ #ifdef DEBUG
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index a1b18082991b..b6735ae3334e 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -690,8 +690,6 @@ static void afs_process_async_call(struct work_struct *work)
+ 	}
+ 
+ 	if (call->state == AFS_CALL_COMPLETE) {
+-		call->reply[0] = NULL;
+-
+ 		/* We have two refs to release - one from the alloc and one
+ 		 * queued with the work item - and we can't just deallocate the
+ 		 * call because the work item may be queued again.
+diff --git a/fs/dax.c b/fs/dax.c
+index 94f9fe002b12..0d3f640653c0 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -558,6 +558,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
+ 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
+ 				indices)) {
++		pgoff_t nr_pages = 1;
++
+ 		for (i = 0; i < pagevec_count(&pvec); i++) {
+ 			struct page *pvec_ent = pvec.pages[i];
+ 			void *entry;
+@@ -571,8 +573,15 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 
+ 			xa_lock_irq(&mapping->i_pages);
+ 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
+-			if (entry)
++			if (entry) {
+ 				page = dax_busy_page(entry);
++				/*
++				 * Account for multi-order entries at
++				 * the end of the pagevec.
++				 */
++				if (i + 1 >= pagevec_count(&pvec))
++					nr_pages = 1UL << dax_radix_order(entry);
++			}
+ 			put_unlocked_mapping_entry(mapping, index, entry);
+ 			xa_unlock_irq(&mapping->i_pages);
+ 			if (page)
+@@ -580,7 +589,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 		}
+ 		pagevec_remove_exceptionals(&pvec);
+ 		pagevec_release(&pvec);
+-		index++;
++		index += nr_pages;
+ 
+ 		if (page)
+ 			break;
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index c0e68f903011..04da6a7c9d2d 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -412,6 +412,7 @@ struct cgroup {
+ 	 * specific task are charged to the dom_cgrp.
+ 	 */
+ 	struct cgroup *dom_cgrp;
++	struct cgroup *old_dom_cgrp;		/* used while enabling threaded */
+ 
+ 	/* per-cpu recursive resource statistics */
+ 	struct cgroup_rstat_cpu __percpu *rstat_cpu;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 3d0cc0b5cec2..3045a5cee0d8 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2420,6 +2420,13 @@ struct netdev_notifier_info {
+ 	struct netlink_ext_ack	*extack;
+ };
+ 
++struct netdev_notifier_info_ext {
++	struct netdev_notifier_info info; /* must be first */
++	union {
++		u32 mtu;
++	} ext;
++};
++
+ struct netdev_notifier_change_info {
+ 	struct netdev_notifier_info info; /* must be first */
+ 	unsigned int flags_changed;
+diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
+index ad5444491975..a2f6e178a2d7 100644
+--- a/include/linux/perf/arm_pmu.h
++++ b/include/linux/perf/arm_pmu.h
+@@ -93,6 +93,7 @@ struct arm_pmu {
+ 	void		(*stop)(struct arm_pmu *);
+ 	void		(*reset)(void *);
+ 	int		(*map_event)(struct perf_event *event);
++	int		(*filter_match)(struct perf_event *event);
+ 	int		num_events;
+ 	u64		max_period;
+ 	bool		secure_access; /* 32-bit ARM only */
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 32feac5bbd75..f62e7721cd71 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -30,6 +30,7 @@
+ 
+ #define MTL_MAX_RX_QUEUES	8
+ #define MTL_MAX_TX_QUEUES	8
++#define STMMAC_CH_MAX		8
+ 
+ #define STMMAC_RX_COE_NONE	0
+ #define STMMAC_RX_COE_TYPE1	1
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 9397628a1967..cb462f9ab7dd 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -5,6 +5,24 @@
+ #include <linux/if_vlan.h>
+ #include <uapi/linux/virtio_net.h>
+ 
++static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
++					   const struct virtio_net_hdr *hdr)
++{
++	switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
++	case VIRTIO_NET_HDR_GSO_TCPV4:
++	case VIRTIO_NET_HDR_GSO_UDP:
++		skb->protocol = cpu_to_be16(ETH_P_IP);
++		break;
++	case VIRTIO_NET_HDR_GSO_TCPV6:
++		skb->protocol = cpu_to_be16(ETH_P_IPV6);
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 					const struct virtio_net_hdr *hdr,
+ 					bool little_endian)
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 808f1d167349..a4f116f06c50 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -139,12 +139,6 @@ struct bond_parm_tbl {
+ 	int mode;
+ };
+ 
+-struct netdev_notify_work {
+-	struct delayed_work	work;
+-	struct net_device	*dev;
+-	struct netdev_bonding_info bonding_info;
+-};
+-
+ struct slave {
+ 	struct net_device *dev; /* first - useful for panic debug */
+ 	struct bonding *bond; /* our master */
+@@ -172,6 +166,7 @@ struct slave {
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ 	struct netpoll *np;
+ #endif
++	struct delayed_work notify_work;
+ 	struct kobject kobj;
+ 	struct rtnl_link_stats64 slave_stats;
+ };
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 83d5b3c2ac42..7dba2d116e8c 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -130,12 +130,6 @@ static inline int inet_request_bound_dev_if(const struct sock *sk,
+ 	return sk->sk_bound_dev_if;
+ }
+ 
+-static inline struct ip_options_rcu *ireq_opt_deref(const struct inet_request_sock *ireq)
+-{
+-	return rcu_dereference_check(ireq->ireq_opt,
+-				     refcount_read(&ireq->req.rsk_refcnt) > 0);
+-}
+-
+ struct inet_cork {
+ 	unsigned int		flags;
+ 	__be32			addr;
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 69c91d1934c1..c9b7b136939d 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -394,6 +394,7 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev);
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
+ int fib_sync_down_addr(struct net_device *dev, __be32 local);
+ int fib_sync_up(struct net_device *dev, unsigned int nh_flags);
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu);
+ 
+ #ifdef CONFIG_IP_ROUTE_MULTIPATH
+ int fib_multipath_hash(const struct net *net, const struct flowi4 *fl4,
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index c052afc27547..138e976a2ba2 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -355,6 +355,7 @@ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_stop_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_enter_link_reset(struct hdac_bus *bus);
+ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus);
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset);
+ 
+ void snd_hdac_bus_update_rirb(struct hdac_bus *bus);
+ int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index a6ce2de4e20a..be3bee1cf91f 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -410,6 +410,7 @@ int snd_soc_dapm_new_dai_widgets(struct snd_soc_dapm_context *dapm,
+ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card);
+ void snd_soc_dapm_connect_dai_link_widgets(struct snd_soc_card *card);
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 2590700237c1..138f0302692e 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -1844,7 +1844,7 @@ static int btf_check_all_metas(struct btf_verifier_env *env)
+ 
+ 	hdr = &btf->hdr;
+ 	cur = btf->nohdr_data + hdr->type_off;
+-	end = btf->nohdr_data + hdr->type_len;
++	end = cur + hdr->type_len;
+ 
+ 	env->log_type_id = 1;
+ 	while (cur < end) {
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 077370bf8964..6e052c899cab 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2833,11 +2833,12 @@ restart:
+ }
+ 
+ /**
+- * cgroup_save_control - save control masks of a subtree
++ * cgroup_save_control - save control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Save ->subtree_control and ->subtree_ss_mask to the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Save ->subtree_control, ->subtree_ss_mask and ->dom_cgrp to the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_save_control(struct cgroup *cgrp)
+ {
+@@ -2847,6 +2848,7 @@ static void cgroup_save_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp) {
+ 		dsct->old_subtree_control = dsct->subtree_control;
+ 		dsct->old_subtree_ss_mask = dsct->subtree_ss_mask;
++		dsct->old_dom_cgrp = dsct->dom_cgrp;
+ 	}
+ }
+ 
+@@ -2872,11 +2874,12 @@ static void cgroup_propagate_control(struct cgroup *cgrp)
+ }
+ 
+ /**
+- * cgroup_restore_control - restore control masks of a subtree
++ * cgroup_restore_control - restore control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Restore ->subtree_control and ->subtree_ss_mask from the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Restore ->subtree_control, ->subtree_ss_mask and ->dom_cgrp from the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_restore_control(struct cgroup *cgrp)
+ {
+@@ -2886,6 +2889,7 @@ static void cgroup_restore_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_post(dsct, d_css, cgrp) {
+ 		dsct->subtree_control = dsct->old_subtree_control;
+ 		dsct->subtree_ss_mask = dsct->old_subtree_ss_mask;
++		dsct->dom_cgrp = dsct->old_dom_cgrp;
+ 	}
+ }
+ 
+@@ -3193,6 +3197,8 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ {
+ 	struct cgroup *parent = cgroup_parent(cgrp);
+ 	struct cgroup *dom_cgrp = parent->dom_cgrp;
++	struct cgroup *dsct;
++	struct cgroup_subsys_state *d_css;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -3222,12 +3228,13 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ 	 */
+ 	cgroup_save_control(cgrp);
+ 
+-	cgrp->dom_cgrp = dom_cgrp;
++	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp)
++		if (dsct == cgrp || cgroup_is_threaded(dsct))
++			dsct->dom_cgrp = dom_cgrp;
++
+ 	ret = cgroup_apply_control(cgrp);
+ 	if (!ret)
+ 		parent->nr_threaded_children++;
+-	else
+-		cgrp->dom_cgrp = cgrp;
+ 
+ 	cgroup_finalize_control(cgrp, ret);
+ 	return ret;
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index cda186230287..8e58928e8227 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2769,7 +2769,7 @@ int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf)
+ 						copy = end - str;
+ 					memcpy(str, args, copy);
+ 					str += len;
+-					args += len;
++					args += len + 1;
+ 				}
+ 			}
+ 			if (process)
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 571875b37453..f7274e0c8bdc 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2883,9 +2883,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	if (!(pvmw->pmd && !pvmw->pte))
+ 		return;
+ 
+-	mmu_notifier_invalidate_range_start(mm, address,
+-			address + HPAGE_PMD_SIZE);
+-
+ 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
+ 	pmdval = *pvmw->pmd;
+ 	pmdp_invalidate(vma, address, pvmw->pmd);
+@@ -2898,9 +2895,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
+ 	page_remove_rmap(page, true);
+ 	put_page(page);
+-
+-	mmu_notifier_invalidate_range_end(mm, address,
+-			address + HPAGE_PMD_SIZE);
+ }
+ 
+ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 17bbf4d3e24f..080c6b9b1d65 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1410,7 +1410,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ 	if (flags & MAP_FIXED_NOREPLACE) {
+ 		struct vm_area_struct *vma = find_vma(mm, addr);
+ 
+-		if (vma && vma->vm_start <= addr)
++		if (vma && vma->vm_start < addr + len)
+ 			return -EEXIST;
+ 	}
+ 
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 0b6480979ac7..074732f3c209 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1204,6 +1204,7 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk)
+ {
+ 	if (!chunk)
+ 		return;
++	pcpu_mem_free(chunk->md_blocks);
+ 	pcpu_mem_free(chunk->bound_map);
+ 	pcpu_mem_free(chunk->alloc_map);
+ 	pcpu_mem_free(chunk);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 03822f86f288..fc0436407471 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,6 +386,17 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
++
++	/*
++	 * Make sure we apply some minimal pressure on default priority
++	 * even on small cgroups. Stale objects are not only consuming memory
++	 * by themselves, but can also hold a reference to a dying cgroup,
++	 * preventing it from being reclaimed. A dying cgroup with all
++	 * corresponding structures like per-cpu stats and kmem caches
++	 * can be really big, so it may lead to a significant waste of memory.
++	 */
++	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
++
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 55a5bb1d773d..7878da76abf2 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1286,7 +1286,6 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 	"vmacache_find_calls",
+ 	"vmacache_find_hits",
+-	"vmacache_full_flushes",
+ #endif
+ #ifdef CONFIG_SWAP
+ 	"swap_ra",
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index ae91e2d40056..3a7b0773536b 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -83,6 +83,7 @@ enum {
+ 
+ struct smp_dev {
+ 	/* Secure Connections OOB data */
++	bool			local_oob;
+ 	u8			local_pk[64];
+ 	u8			local_rand[16];
+ 	bool			debug_key;
+@@ -599,6 +600,8 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
+ 
+ 	memcpy(rand, smp->local_rand, 16);
+ 
++	smp->local_oob = true;
++
+ 	return 0;
+ }
+ 
+@@ -1785,7 +1788,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (req->oob_flag == SMP_OOB_PRESENT)
++	if (req->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	/* SMP over BR/EDR requires special treatment */
+@@ -1967,7 +1970,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (rsp->oob_flag == SMP_OOB_PRESENT)
++	if (rsp->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	smp->prsp[0] = SMP_CMD_PAIRING_RSP;
+@@ -2697,7 +2700,13 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * key was set/generated.
+ 	 */
+ 	if (test_bit(SMP_FLAG_LOCAL_OOB, &smp->flags)) {
+-		struct smp_dev *smp_dev = chan->data;
++		struct l2cap_chan *hchan = hdev->smp_data;
++		struct smp_dev *smp_dev;
++
++		if (!hchan || !hchan->data)
++			return SMP_UNSPECIFIED;
++
++		smp_dev = hchan->data;
+ 
+ 		tfm_ecdh = smp_dev->tfm_ecdh;
+ 	} else {
+@@ -3230,6 +3239,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
+ 		return ERR_CAST(tfm_ecdh);
+ 	}
+ 
++	smp->local_oob = false;
+ 	smp->tfm_aes = tfm_aes;
+ 	smp->tfm_cmac = tfm_cmac;
+ 	smp->tfm_ecdh = tfm_ecdh;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 559a91271f82..bf669e77f9f3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1754,6 +1754,28 @@ int call_netdevice_notifiers(unsigned long val, struct net_device *dev)
+ }
+ EXPORT_SYMBOL(call_netdevice_notifiers);
+ 
++/**
++ *	call_netdevice_notifiers_mtu - call all network notifier blocks
++ *	@val: value passed unmodified to notifier function
++ *	@dev: net_device pointer passed unmodified to notifier function
++ *	@arg: additional u32 argument passed to the notifier function
++ *
++ *	Call all network notifier blocks.  Parameters and return value
++ *	are as for raw_notifier_call_chain().
++ */
++static int call_netdevice_notifiers_mtu(unsigned long val,
++					struct net_device *dev, u32 arg)
++{
++	struct netdev_notifier_info_ext info = {
++		.info.dev = dev,
++		.ext.mtu = arg,
++	};
++
++	BUILD_BUG_ON(offsetof(struct netdev_notifier_info_ext, info) != 0);
++
++	return call_netdevice_notifiers_info(val, &info.info);
++}
++
+ #ifdef CONFIG_NET_INGRESS
+ static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);
+ 
+@@ -7118,14 +7140,16 @@ int dev_set_mtu(struct net_device *dev, int new_mtu)
+ 	err = __dev_set_mtu(dev, new_mtu);
+ 
+ 	if (!err) {
+-		err = call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++		err = call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						   orig_mtu);
+ 		err = notifier_to_errno(err);
+ 		if (err) {
+ 			/* setting mtu back and notifying everyone again,
+ 			 * so that they have a chance to revert changes.
+ 			 */
+ 			__dev_set_mtu(dev, orig_mtu);
+-			call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++			call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						     new_mtu);
+ 		}
+ 	}
+ 	return err;
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index e677a20180cf..6c04f1bf377d 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2623,6 +2623,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 	case ETHTOOL_GPHYSTATS:
+ 	case ETHTOOL_GTSO:
+ 	case ETHTOOL_GPERMADDR:
++	case ETHTOOL_GUFO:
+ 	case ETHTOOL_GGSO:
+ 	case ETHTOOL_GGRO:
+ 	case ETHTOOL_GFLAGS:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 963ee2e88861..0b2bd7d3220f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2334,7 +2334,8 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+-	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
++	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC | __GFP_COMP,
++			   get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index bafaa033826f..18de39dbdc30 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1848,10 +1848,8 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ 		if (tb[IFLA_IF_NETNSID]) {
+ 			netnsid = nla_get_s32(tb[IFLA_IF_NETNSID]);
+ 			tgt_net = get_target_net(skb->sk, netnsid);
+-			if (IS_ERR(tgt_net)) {
+-				tgt_net = net;
+-				netnsid = -1;
+-			}
++			if (IS_ERR(tgt_net))
++				return PTR_ERR(tgt_net);
+ 		}
+ 
+ 		if (tb[IFLA_EXT_MASK])
+@@ -2787,6 +2785,12 @@ struct net_device *rtnl_create_link(struct net *net,
+ 	else if (ops->get_num_rx_queues)
+ 		num_rx_queues = ops->get_num_rx_queues();
+ 
++	if (num_tx_queues < 1 || num_tx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
++	if (num_rx_queues < 1 || num_rx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
+ 	dev = alloc_netdev_mqs(ops->priv_size, ifname, name_assign_type,
+ 			       ops->setup, num_tx_queues, num_rx_queues);
+ 	if (!dev)
+@@ -3694,16 +3698,27 @@ static int rtnl_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 	int err = 0;
+ 	int fidx = 0;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
+-			  IFLA_MAX, ifla_policy, NULL);
+-	if (err < 0) {
+-		return -EINVAL;
+-	} else if (err == 0) {
+-		if (tb[IFLA_MASTER])
+-			br_idx = nla_get_u32(tb[IFLA_MASTER]);
+-	}
++	/* A hack to preserve kernel<->userspace interface.
++	 * Before Linux v4.12 this code accepted ndmsg since iproute2 v3.3.0.
++	 * However, ndmsg is shorter than ifinfomsg thus nlmsg_parse() bails.
++	 * So, check for ndmsg with an optional u32 attribute (not used here).
++	 * Fortunately these sizes don't conflict with the size of ifinfomsg
++	 * with an optional attribute.
++	 */
++	if (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) &&
++	    (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) +
++	     nla_attr_size(sizeof(u32)))) {
++		err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
++				  IFLA_MAX, ifla_policy, NULL);
++		if (err < 0) {
++			return -EINVAL;
++		} else if (err == 0) {
++			if (tb[IFLA_MASTER])
++				br_idx = nla_get_u32(tb[IFLA_MASTER]);
++		}
+ 
+-	brport_idx = ifm->ifi_index;
++		brport_idx = ifm->ifi_index;
++	}
+ 
+ 	if (br_idx) {
+ 		br_dev = __dev_get_by_index(net, br_idx);
+diff --git a/net/dccp/input.c b/net/dccp/input.c
+index d28d46bff6ab..85d6c879383d 100644
+--- a/net/dccp/input.c
++++ b/net/dccp/input.c
+@@ -606,11 +606,13 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ 	if (sk->sk_state == DCCP_LISTEN) {
+ 		if (dh->dccph_type == DCCP_PKT_REQUEST) {
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = inet_csk(sk)->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 			if (!acceptable)
+ 				return 1;
+ 			consume_skb(skb);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b08feb219b44..8e08cea6f178 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -493,9 +493,11 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
+ 
+ 		dh->dccph_checksum = dccp_v4_csum_finish(skb, ireq->ir_loc_addr,
+ 							      ireq->ir_rmt_addr);
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 2998b0e47d4b..0113993e9b2c 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1243,7 +1243,8 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
+ static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct netdev_notifier_changeupper_info *info;
++	struct netdev_notifier_changeupper_info *upper_info = ptr;
++	struct netdev_notifier_info_ext *info_ext = ptr;
+ 	struct in_device *in_dev;
+ 	struct net *net = dev_net(dev);
+ 	unsigned int flags;
+@@ -1278,16 +1279,19 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
+ 			fib_sync_up(dev, RTNH_F_LINKDOWN);
+ 		else
+ 			fib_sync_down_dev(dev, event, false);
+-		/* fall through */
++		rt_cache_flush(net);
++		break;
+ 	case NETDEV_CHANGEMTU:
++		fib_sync_mtu(dev, info_ext->ext.mtu);
+ 		rt_cache_flush(net);
+ 		break;
+ 	case NETDEV_CHANGEUPPER:
+-		info = ptr;
++		upper_info = ptr;
+ 		/* flush all routes if dev is linked to or unlinked from
+ 		 * an L3 master device (e.g., VRF)
+ 		 */
+-		if (info->upper_dev && netif_is_l3_master(info->upper_dev))
++		if (upper_info->upper_dev &&
++		    netif_is_l3_master(upper_info->upper_dev))
+ 			fib_disable_ip(dev, NETDEV_DOWN, true);
+ 		break;
+ 	}
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index f3c89ccf14c5..446204ca7406 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1470,6 +1470,56 @@ static int call_fib_nh_notifiers(struct fib_nh *fib_nh,
+ 	return NOTIFY_DONE;
+ }
+ 
++/* Update the PMTU of exceptions when:
++ * - the new MTU of the first hop becomes smaller than the PMTU
++ * - the old MTU was the same as the PMTU, and it limited discovery of
++ *   larger MTUs on the path. With that limit raised, we can now
++ *   discover larger MTUs
++ * A special case is locked exceptions, for which the PMTU is smaller
++ * than the minimal accepted PMTU:
++ * - if the new MTU is greater than the PMTU, don't make any change
++ * - otherwise, unlock and set PMTU
++ */
++static void nh_update_mtu(struct fib_nh *nh, u32 new, u32 orig)
++{
++	struct fnhe_hash_bucket *bucket;
++	int i;
++
++	bucket = rcu_dereference_protected(nh->nh_exceptions, 1);
++	if (!bucket)
++		return;
++
++	for (i = 0; i < FNHE_HASH_SIZE; i++) {
++		struct fib_nh_exception *fnhe;
++
++		for (fnhe = rcu_dereference_protected(bucket[i].chain, 1);
++		     fnhe;
++		     fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1)) {
++			if (fnhe->fnhe_mtu_locked) {
++				if (new <= fnhe->fnhe_pmtu) {
++					fnhe->fnhe_pmtu = new;
++					fnhe->fnhe_mtu_locked = false;
++				}
++			} else if (new < fnhe->fnhe_pmtu ||
++				   orig == fnhe->fnhe_pmtu) {
++				fnhe->fnhe_pmtu = new;
++			}
++		}
++	}
++}
++
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
++{
++	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
++	struct hlist_head *head = &fib_info_devhash[hash];
++	struct fib_nh *nh;
++
++	hlist_for_each_entry(nh, head, nh_hash) {
++		if (nh->nh_dev == dev)
++			nh_update_mtu(nh, dev->mtu, orig_mtu);
++	}
++}
++
+ /* Event              force Flags           Description
+  * NETDEV_CHANGE      0     LINKDOWN        Carrier OFF, not for scope host
+  * NETDEV_DOWN        0     LINKDOWN|DEAD   Link down, not for scope host
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 33a88e045efd..39cfa3a191d8 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -535,7 +535,8 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 	struct ip_options_rcu *opt;
+ 	struct rtable *rt;
+ 
+-	opt = ireq_opt_deref(ireq);
++	rcu_read_lock();
++	opt = rcu_dereference(ireq->ireq_opt);
+ 
+ 	flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark,
+ 			   RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
+@@ -549,11 +550,13 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 		goto no_route;
+ 	if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
+ 		goto route_err;
++	rcu_read_unlock();
+ 	return &rt->dst;
+ 
+ route_err:
+ 	ip_rt_put(rt);
+ no_route:
++	rcu_read_unlock();
+ 	__IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+ 	return NULL;
+ }
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index c0fe5ad996f2..26c36cccabdc 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -149,7 +149,6 @@ static void ip_cmsg_recv_security(struct msghdr *msg, struct sk_buff *skb)
+ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ {
+ 	struct sockaddr_in sin;
+-	const struct iphdr *iph = ip_hdr(skb);
+ 	__be16 *ports;
+ 	int end;
+ 
+@@ -164,7 +163,7 @@ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ 	ports = (__be16 *)skb_transport_header(skb);
+ 
+ 	sin.sin_family = AF_INET;
+-	sin.sin_addr.s_addr = iph->daddr;
++	sin.sin_addr.s_addr = ip_hdr(skb)->daddr;
+ 	sin.sin_port = ports[1];
+ 	memset(sin.sin_zero, 0, sizeof(sin.sin_zero));
+ 
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index c4f5602308ed..284a22154b4e 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -627,6 +627,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		    const struct iphdr *tnl_params, u8 protocol)
+ {
+ 	struct ip_tunnel *tunnel = netdev_priv(dev);
++	unsigned int inner_nhdr_len = 0;
+ 	const struct iphdr *inner_iph;
+ 	struct flowi4 fl4;
+ 	u8     tos, ttl;
+@@ -636,6 +637,14 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	__be32 dst;
+ 	bool connected;
+ 
++	/* ensure we can access the inner net header, for several users below */
++	if (skb->protocol == htons(ETH_P_IP))
++		inner_nhdr_len = sizeof(struct iphdr);
++	else if (skb->protocol == htons(ETH_P_IPV6))
++		inner_nhdr_len = sizeof(struct ipv6hdr);
++	if (unlikely(!pskb_may_pull(skb, inner_nhdr_len)))
++		goto tx_error;
++
+ 	inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
+ 	connected = (tunnel->parms.iph.daddr != 0);
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 1df6e97106d7..f80acb5f1896 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1001,21 +1001,22 @@ out:	kfree_skb(skb);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ 	struct dst_entry *dst = &rt->dst;
++	u32 old_mtu = ipv4_mtu(dst);
+ 	struct fib_result res;
+ 	bool lock = false;
+ 
+ 	if (ip_mtu_locked(dst))
+ 		return;
+ 
+-	if (ipv4_mtu(dst) < mtu)
++	if (old_mtu < mtu)
+ 		return;
+ 
+ 	if (mtu < ip_rt_min_pmtu) {
+ 		lock = true;
+-		mtu = ip_rt_min_pmtu;
++		mtu = min(old_mtu, ip_rt_min_pmtu);
+ 	}
+ 
+-	if (rt->rt_pmtu == mtu &&
++	if (rt->rt_pmtu == mtu && !lock &&
+ 	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
+ 		return;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index f9dcb29be12d..8b7294688633 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5976,11 +5976,13 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 			if (th->fin)
+ 				goto discard;
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = icsk->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 
+ 			if (!acceptable)
+ 				return 1;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 488b201851d7..d380856ba488 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -942,9 +942,11 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 	if (skb) {
+ 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
+ 
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index fed65bc9df86..a12df801de94 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1631,7 +1631,7 @@ busy_check:
+ 	*err = error;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL_GPL(__skb_recv_udp);
++EXPORT_SYMBOL(__skb_recv_udp);
+ 
+ /*
+  * 	This should be easy, if there is something there we
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f66a1cae3366..3484c7020fd9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4203,7 +4203,6 @@ static struct inet6_ifaddr *if6_get_first(struct seq_file *seq, loff_t pos)
+ 				p++;
+ 				continue;
+ 			}
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 
+@@ -4227,13 +4226,12 @@ static struct inet6_ifaddr *if6_get_next(struct seq_file *seq,
+ 		return ifa;
+ 	}
+ 
++	state->offset = 0;
+ 	while (++state->bucket < IN6_ADDR_HSIZE) {
+-		state->offset = 0;
+ 		hlist_for_each_entry_rcu(ifa,
+ 				     &inet6_addr_lst[state->bucket], addr_lst) {
+ 			if (!net_eq(dev_net(ifa->idev->dev), net))
+ 				continue;
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 5516f55e214b..cbe46175bb59 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -196,6 +196,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 				*ppcpu_rt = NULL;
+ 			}
+ 		}
++
++		free_percpu(f6i->rt6i_pcpu);
+ 	}
+ 
+ 	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 1cc9650af9fb..f5b5b0574a2d 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1226,7 +1226,7 @@ static inline int
+ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	const struct iphdr  *iph = ip_hdr(skb);
++	const struct iphdr  *iph;
+ 	int encap_limit = -1;
+ 	struct flowi6 fl6;
+ 	__u8 dsfield;
+@@ -1234,6 +1234,11 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	/* ensure we can access the full inner ip header */
++	if (!pskb_may_pull(skb, sizeof(struct iphdr)))
++		return -1;
++
++	iph = ip_hdr(skb);
+ 	memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ 
+ 	tproto = READ_ONCE(t->parms.proto);
+@@ -1297,7 +1302,7 @@ static inline int
+ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++	struct ipv6hdr *ipv6h;
+ 	int encap_limit = -1;
+ 	__u16 offset;
+ 	struct flowi6 fl6;
+@@ -1306,6 +1311,10 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
++		return -1;
++
++	ipv6h = ipv6_hdr(skb);
+ 	tproto = READ_ONCE(t->parms.proto);
+ 	if ((tproto != IPPROTO_IPV6 && tproto != 0) ||
+ 	    ip6_tnl_addr_conflict(t, ipv6h))
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index afc307c89d1a..7ef3e0a5bf86 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -650,8 +650,6 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->priority = sk->sk_priority;
+ 	skb->mark = sk->sk_mark;
+-	skb_dst_set(skb, &rt->dst);
+-	*dstp = NULL;
+ 
+ 	skb_put(skb, length);
+ 	skb_reset_network_header(skb);
+@@ -664,8 +662,14 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 
+ 	skb->transport_header = skb->network_header;
+ 	err = memcpy_from_msg(iph, msg, length);
+-	if (err)
+-		goto error_fault;
++	if (err) {
++		err = -EFAULT;
++		kfree_skb(skb);
++		goto error;
++	}
++
++	skb_dst_set(skb, &rt->dst);
++	*dstp = NULL;
+ 
+ 	/* if egress device is enslaved to an L3 master device pass the
+ 	 * skb to its handler for processing
+@@ -674,21 +678,28 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	if (unlikely(!skb))
+ 		return 0;
+ 
++	/* Acquire rcu_read_lock() in case we need to use rt->rt6i_idev
++	 * in the error path. Since skb has been freed, the dst could
++	 * have been queued for deletion.
++	 */
++	rcu_read_lock();
+ 	IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ 	err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
+ 		      NULL, rt->dst.dev, dst_output);
+ 	if (err > 0)
+ 		err = net_xmit_errno(err);
+-	if (err)
+-		goto error;
++	if (err) {
++		IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++		rcu_read_unlock();
++		goto error_check;
++	}
++	rcu_read_unlock();
+ out:
+ 	return 0;
+ 
+-error_fault:
+-	err = -EFAULT;
+-	kfree_skb(skb);
+ error:
+ 	IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++error_check:
+ 	if (err == -ENOBUFS && !np->recverr)
+ 		err = 0;
+ 	return err;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 480a79f47c52..ed526e257da6 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4314,11 +4314,6 @@ static int ip6_route_info_append(struct net *net,
+ 	if (!nh)
+ 		return -ENOMEM;
+ 	nh->fib6_info = rt;
+-	err = ip6_convert_metrics(net, rt, r_cfg);
+-	if (err) {
+-		kfree(nh);
+-		return err;
+-	}
+ 	memcpy(&nh->r_cfg, r_cfg, sizeof(*r_cfg));
+ 	list_add_tail(&nh->next, rt6_nh_list);
+ 
+diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c
+index c070dfc0190a..c92894c3e40a 100644
+--- a/net/netlabel/netlabel_unlabeled.c
++++ b/net/netlabel/netlabel_unlabeled.c
+@@ -781,7 +781,8 @@ static int netlbl_unlabel_addrinfo_get(struct genl_info *info,
+ {
+ 	u32 addr_len;
+ 
+-	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR]) {
++	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR] &&
++	    info->attrs[NLBL_UNLABEL_A_IPV4MASK]) {
+ 		addr_len = nla_len(info->attrs[NLBL_UNLABEL_A_IPV4ADDR]);
+ 		if (addr_len != sizeof(struct in_addr) &&
+ 		    addr_len != nla_len(info->attrs[NLBL_UNLABEL_A_IPV4MASK]))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index e6445d8f3f57..3237e9978c1a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2712,10 +2712,12 @@ tpacket_error:
+ 			}
+ 		}
+ 
+-		if (po->has_vnet_hdr && virtio_net_hdr_to_skb(skb, vnet_hdr,
+-							      vio_le())) {
+-			tp_len = -EINVAL;
+-			goto tpacket_error;
++		if (po->has_vnet_hdr) {
++			if (virtio_net_hdr_to_skb(skb, vnet_hdr, vio_le())) {
++				tp_len = -EINVAL;
++				goto tpacket_error;
++			}
++			virtio_net_hdr_set_proto(skb, vnet_hdr);
+ 		}
+ 
+ 		skb->destructor = tpacket_destruct_skb;
+@@ -2911,6 +2913,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (err)
+ 			goto out_free;
+ 		len += sizeof(vnet_hdr);
++		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+ 	skb_probe_transport_header(skb, reserve);
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 260749956ef3..24df95a7b9c7 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -397,6 +397,7 @@ static int u32_init(struct tcf_proto *tp)
+ 	rcu_assign_pointer(tp_c->hlist, root_ht);
+ 	root_ht->tp_c = tp_c;
+ 
++	root_ht->refcnt++;
+ 	rcu_assign_pointer(tp->root, root_ht);
+ 	tp->data = tp_c;
+ 	return 0;
+@@ -608,7 +609,7 @@ static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht,
+ 	struct tc_u_hnode __rcu **hn;
+ 	struct tc_u_hnode *phn;
+ 
+-	WARN_ON(ht->refcnt);
++	WARN_ON(--ht->refcnt);
+ 
+ 	u32_clear_hnode(tp, ht, extack);
+ 
+@@ -647,7 +648,7 @@ static void u32_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 
+ 	WARN_ON(root_ht == NULL);
+ 
+-	if (root_ht && --root_ht->refcnt == 0)
++	if (root_ht && --root_ht->refcnt == 1)
+ 		u32_destroy_hnode(tp, root_ht, extack);
+ 
+ 	if (--tp_c->refcnt == 0) {
+@@ -696,7 +697,6 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ 	}
+ 
+ 	if (ht->refcnt == 1) {
+-		ht->refcnt--;
+ 		u32_destroy_hnode(tp, ht, extack);
+ 	} else {
+ 		NL_SET_ERR_MSG_MOD(extack, "Can not delete in-use filter");
+@@ -706,11 +706,11 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ out:
+ 	*last = true;
+ 	if (root_ht) {
+-		if (root_ht->refcnt > 1) {
++		if (root_ht->refcnt > 2) {
+ 			*last = false;
+ 			goto ret;
+ 		}
+-		if (root_ht->refcnt == 1) {
++		if (root_ht->refcnt == 2) {
+ 			if (!ht_empty(root_ht)) {
+ 				*last = false;
+ 				goto ret;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 54eca685420f..99cc25aae503 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1304,6 +1304,18 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+  * Delete/get qdisc.
+  */
+ 
++const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
++	[TCA_KIND]		= { .type = NLA_STRING },
++	[TCA_OPTIONS]		= { .type = NLA_NESTED },
++	[TCA_RATE]		= { .type = NLA_BINARY,
++				    .len = sizeof(struct tc_estimator) },
++	[TCA_STAB]		= { .type = NLA_NESTED },
++	[TCA_DUMP_INVISIBLE]	= { .type = NLA_FLAG },
++	[TCA_CHAIN]		= { .type = NLA_U32 },
++	[TCA_INGRESS_BLOCK]	= { .type = NLA_U32 },
++	[TCA_EGRESS_BLOCK]	= { .type = NLA_U32 },
++};
++
+ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 			struct netlink_ext_ack *extack)
+ {
+@@ -1320,7 +1332,8 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1404,7 +1417,8 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 
+ replay:
+ 	/* Reinit, just in case something touches this. */
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1638,7 +1652,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 	idx = 0;
+ 	ASSERT_RTNL();
+ 
+-	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX,
++			  rtm_tca_policy, NULL);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1857,7 +1872,8 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 12cac85da994..033696e6f74f 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -260,6 +260,7 @@ void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk)
+ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ {
+ 	struct dst_entry *dst = sctp_transport_dst_check(t);
++	struct sock *sk = t->asoc->base.sk;
+ 	bool change = true;
+ 
+ 	if (unlikely(pmtu < SCTP_DEFAULT_MINSEGMENT)) {
+@@ -271,12 +272,19 @@ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ 	pmtu = SCTP_TRUNC4(pmtu);
+ 
+ 	if (dst) {
+-		dst->ops->update_pmtu(dst, t->asoc->base.sk, NULL, pmtu);
++		struct sctp_pf *pf = sctp_get_pf_specific(dst->ops->family);
++		union sctp_addr addr;
++
++		pf->af->from_sk(&addr, sk);
++		pf->to_sk_daddr(&t->ipaddr, sk);
++		dst->ops->update_pmtu(dst, sk, NULL, pmtu);
++		pf->to_sk_daddr(&addr, sk);
++
+ 		dst = sctp_transport_dst_check(t);
+ 	}
+ 
+ 	if (!dst) {
+-		t->af_specific->get_dst(t, &t->saddr, &t->fl, t->asoc->base.sk);
++		t->af_specific->get_dst(t, &t->saddr, &t->fl, sk);
+ 		dst = t->dst;
+ 	}
+ 
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 093e16d1b770..cdaf3534e373 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1422,8 +1422,10 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ 	/* Handle implicit connection setup */
+ 	if (unlikely(dest)) {
+ 		rc = __tipc_sendmsg(sock, m, dlen);
+-		if (dlen && (dlen == rc))
++		if (dlen && dlen == rc) {
++			tsk->peer_caps = tipc_node_get_capabilities(net, dnode);
+ 			tsk->snt_unacked = tsk_inc(tsk, dlen + msg_hdr_sz(hdr));
++		}
+ 		return rc;
+ 	}
+ 
+diff --git a/scripts/subarch.include b/scripts/subarch.include
+new file mode 100644
+index 000000000000..650682821126
+--- /dev/null
++++ b/scripts/subarch.include
+@@ -0,0 +1,13 @@
++# SUBARCH tells the usermode build what the underlying arch is.  That is set
++# first, and if a usermode build is happening, the "ARCH=um" on the command
++# line overrides the setting of ARCH below.  If a native build is happening,
++# then ARCH is assigned, getting whatever value it gets normally, and
++# SUBARCH is subsequently ignored.
++
++SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
++				  -e s/sun4u/sparc64/ \
++				  -e s/arm.*/arm/ -e s/sa110/arm/ \
++				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
++				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
++				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
++				  -e s/riscv.*/riscv/)
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 560ec0986e1a..74244d8e2909 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -40,6 +40,8 @@ static void azx_clear_corbrp(struct hdac_bus *bus)
+  */
+ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus)
+ {
++	WARN_ON_ONCE(!bus->rb.area);
++
+ 	spin_lock_irq(&bus->reg_lock);
+ 	/* CORB set up */
+ 	bus->corb.addr = bus->rb.addr;
+@@ -383,7 +385,7 @@ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus)
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_exit_link_reset);
+ 
+ /* reset codec link */
+-static int azx_reset(struct hdac_bus *bus, bool full_reset)
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ {
+ 	if (!full_reset)
+ 		goto skip_reset;
+@@ -408,7 +410,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+  skip_reset:
+ 	/* check to see if controller is ready */
+ 	if (!snd_hdac_chip_readb(bus, GCTL)) {
+-		dev_dbg(bus->dev, "azx_reset: controller not ready!\n");
++		dev_dbg(bus->dev, "controller not ready!\n");
+ 		return -EBUSY;
+ 	}
+ 
+@@ -423,6 +425,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(snd_hdac_bus_reset_link);
+ 
+ /* enable interrupts */
+ static void azx_int_enable(struct hdac_bus *bus)
+@@ -477,15 +480,17 @@ bool snd_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ 		return false;
+ 
+ 	/* reset controller */
+-	azx_reset(bus, full_reset);
++	snd_hdac_bus_reset_link(bus, full_reset);
+ 
+-	/* initialize interrupts */
++	/* clear interrupts */
+ 	azx_int_clear(bus);
+-	azx_int_enable(bus);
+ 
+ 	/* initialize the codec command I/O */
+ 	snd_hdac_bus_init_cmd_io(bus);
+ 
++	/* enable interrupts after CORB/RIRB buffers are initialized above */
++	azx_int_enable(bus);
++
+ 	/* program the position buffer */
+ 	if (bus->use_posbuf && bus->posbuf.addr) {
+ 		snd_hdac_chip_writel(bus, DPLBASE, (u32)bus->posbuf.addr);
+diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
+index 77203841c535..90df61d263b8 100644
+--- a/sound/soc/amd/acp-pcm-dma.c
++++ b/sound/soc/amd/acp-pcm-dma.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/sizes.h>
+ #include <linux/pm_runtime.h>
+ 
+@@ -184,6 +185,24 @@ static void config_dma_descriptor_in_sram(void __iomem *acp_mmio,
+ 	acp_reg_write(descr_info->xfer_val, acp_mmio, mmACP_SRBM_Targ_Idx_Data);
+ }
+ 
++static void pre_config_reset(void __iomem *acp_mmio, u16 ch_num)
++{
++	u32 dma_ctrl;
++	int ret;
++
++	/* clear the reset bit */
++	dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
++	acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	/* check the reset bit before programming configuration registers */
++	ret = readl_poll_timeout(acp_mmio + ((mmACP_DMA_CNTL_0 + ch_num) * 4),
++				 dma_ctrl,
++				 !(dma_ctrl & ACP_DMA_CNTL_0__DMAChRst_MASK),
++				 100, ACP_DMA_RESET_TIME);
++	if (ret < 0)
++		pr_err("Failed to clear reset of channel : %d\n", ch_num);
++}
++
+ /*
+  * Initialize the DMA descriptor information for transfer between
+  * system memory <-> ACP SRAM
+@@ -238,6 +257,7 @@ static void set_acp_sysmem_dma_descriptors(void __iomem *acp_mmio,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	config_acp_dma_channel(acp_mmio, ch,
+ 			       dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
+@@ -277,6 +297,7 @@ static void set_acp_to_i2s_dma_descriptors(void __iomem *acp_mmio, u32 size,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	/* Configure the DMA channel with the above descriptore */
+ 	config_acp_dma_channel(acp_mmio, ch, dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index a92586106932..f0948e84f6ae 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -519,6 +519,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case MAX98373_R2000_SW_RESET ... MAX98373_R2009_INT_FLAG3:
++	case MAX98373_R203E_AMP_PATH_GAIN:
+ 	case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
+ 	case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
+ 	case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
+@@ -728,6 +729,7 @@ static int max98373_probe(struct snd_soc_component *component)
+ 	/* Software Reset */
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 
+ 	/* IV default slot configuration */
+ 	regmap_write(max98373->regmap,
+@@ -816,6 +818,7 @@ static int max98373_resume(struct device *dev)
+ 
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 	regcache_cache_only(max98373->regmap, false);
+ 	regcache_sync(max98373->regmap);
+ 	return 0;
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index dca82dd6e3bf..32fe76c3134a 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/sigmadsp.c b/sound/soc/codecs/sigmadsp.c
+index d53680ac78e4..6df158669420 100644
+--- a/sound/soc/codecs/sigmadsp.c
++++ b/sound/soc/codecs/sigmadsp.c
+@@ -117,8 +117,7 @@ static int sigmadsp_ctrl_write(struct sigmadsp *sigmadsp,
+ 	struct sigmadsp_control *ctrl, void *data)
+ {
+ 	/* safeload loads up to 20 bytes in a atomic operation */
+-	if (ctrl->num_bytes > 4 && ctrl->num_bytes <= 20 && sigmadsp->ops &&
+-	    sigmadsp->ops->safeload)
++	if (ctrl->num_bytes <= 20 && sigmadsp->ops && sigmadsp->ops->safeload)
+ 		return sigmadsp->ops->safeload(sigmadsp, ctrl->addr, data,
+ 			ctrl->num_bytes);
+ 	else
+diff --git a/sound/soc/codecs/wm8804-i2c.c b/sound/soc/codecs/wm8804-i2c.c
+index f27464c2c5ba..79541960f45d 100644
+--- a/sound/soc/codecs/wm8804-i2c.c
++++ b/sound/soc/codecs/wm8804-i2c.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/i2c.h>
++#include <linux/acpi.h>
+ 
+ #include "wm8804.h"
+ 
+@@ -40,17 +41,29 @@ static const struct i2c_device_id wm8804_i2c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, wm8804_i2c_id);
+ 
++#if defined(CONFIG_OF)
+ static const struct of_device_id wm8804_of_match[] = {
+ 	{ .compatible = "wlf,wm8804", },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, wm8804_of_match);
++#endif
++
++#ifdef CONFIG_ACPI
++static const struct acpi_device_id wm8804_acpi_match[] = {
++	{ "1AEC8804", 0 }, /* Wolfson PCI ID + part ID */
++	{ "10138804", 0 }, /* Cirrus Logic PCI ID + part ID */
++	{ },
++};
++MODULE_DEVICE_TABLE(acpi, wm8804_acpi_match);
++#endif
+ 
+ static struct i2c_driver wm8804_i2c_driver = {
+ 	.driver = {
+ 		.name = "wm8804",
+ 		.pm = &wm8804_pm,
+-		.of_match_table = wm8804_of_match,
++		.of_match_table = of_match_ptr(wm8804_of_match),
++		.acpi_match_table = ACPI_PTR(wm8804_acpi_match),
+ 	},
+ 	.probe = wm8804_i2c_probe,
+ 	.remove = wm8804_i2c_remove,
+diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
+index f0d9793f872a..c7cdfa4a7076 100644
+--- a/sound/soc/intel/skylake/skl.c
++++ b/sound/soc/intel/skylake/skl.c
+@@ -844,7 +844,7 @@ static int skl_first_init(struct hdac_ext_bus *ebus)
+ 		return -ENXIO;
+ 	}
+ 
+-	skl_init_chip(bus, true);
++	snd_hdac_bus_reset_link(bus, true);
+ 
+ 	snd_hdac_bus_parse_capabilities(bus);
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 593f66b8622f..33bb97c0b6b6 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -933,8 +933,10 @@ static int msm_routing_probe(struct snd_soc_component *c)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < MAX_SESSIONS; i++)
++	for (i = 0; i < MAX_SESSIONS; i++) {
+ 		routing_data->sessions[i].port_id = -1;
++		routing_data->sessions[i].fedai_id = -1;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index 4672688cac32..b7c1f34ec280 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -465,6 +465,11 @@ static void rsnd_adg_get_clkout(struct rsnd_priv *priv,
+ 		goto rsnd_adg_get_clkout_end;
+ 
+ 	req_size = prop->length / sizeof(u32);
++	if (req_size > REQ_SIZE) {
++		dev_err(dev,
++			"too many clock-frequency, use top %d\n", REQ_SIZE);
++		req_size = REQ_SIZE;
++	}
+ 
+ 	of_property_read_u32_array(np, "clock-frequency", req_rate, req_size);
+ 	req_48kHz_rate = 0;
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index ff13189a7ee4..982a72e73ea9 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -482,7 +482,7 @@ static int rsnd_status_update(u32 *status,
+ 			(func_call && (mod)->ops->fn) ? #fn : "");	\
+ 		if (func_call && (mod)->ops->fn)			\
+ 			tmp = (mod)->ops->fn(mod, io, param);		\
+-		if (tmp)						\
++		if (tmp && (tmp != -EPROBE_DEFER))			\
+ 			dev_err(dev, "%s[%d] : %s error %d\n",		\
+ 				rsnd_mod_name(mod), rsnd_mod_id(mod),	\
+ 						     #fn, tmp);		\
+@@ -1550,6 +1550,14 @@ exit_snd_probe:
+ 		rsnd_dai_call(remove, &rdai->capture, priv);
+ 	}
+ 
++	/*
++	 * adg is very special mod which can't use rsnd_dai_call(remove),
++	 * and it registers ADG clock on probe.
++	 * It should be unregister if probe failed.
++	 * Mainly it is assuming -EPROBE_DEFER case
++	 */
++	rsnd_adg_remove(priv);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index ef82b94d038b..2f3f4108fda5 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -244,6 +244,10 @@ static int rsnd_dmaen_attach(struct rsnd_dai_stream *io,
+ 	/* try to get DMAEngine channel */
+ 	chan = rsnd_dmaen_request_channel(io, mod_from, mod_to);
+ 	if (IS_ERR_OR_NULL(chan)) {
++		/* Let's follow when -EPROBE_DEFER case */
++		if (PTR_ERR(chan) == -EPROBE_DEFER)
++			return PTR_ERR(chan);
++
+ 		/*
+ 		 * DMA failed. try to PIO mode
+ 		 * see
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 4663de3cf495..0b4896d411f9 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1430,7 +1430,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = codec_dai->playback_widget;
+ 	source = cpu_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+@@ -1443,7 +1443,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = cpu_dai->playback_widget;
+ 	source = codec_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index a099c3e45504..577f6178af57 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3658,6 +3658,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_dapm_path *source_p, *sink_p;
+ 	struct snd_soc_dai *source, *sink;
++	struct snd_soc_pcm_runtime *rtd = w->priv;
+ 	const struct snd_soc_pcm_stream *config = w->params + w->params_select;
+ 	struct snd_pcm_substream substream;
+ 	struct snd_pcm_hw_params *params = NULL;
+@@ -3717,6 +3718,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ 		goto out;
+ 	}
+ 	substream.runtime = runtime;
++	substream.private_data = rtd;
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_PRE_PMU:
+@@ -3901,6 +3903,7 @@ outfree_w_param:
+ }
+ 
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+@@ -3969,6 +3972,7 @@ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
+ 
+ 	w->params = params;
+ 	w->num_params = num_params;
++	w->priv = rtd;
+ 
+ 	ret = snd_soc_dapm_add_path(&card->dapm, source, w, NULL, NULL);
+ 	if (ret)
+diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
+index efcaf6cac2eb..e46f51b17513 100644
+--- a/tools/perf/scripts/python/export-to-postgresql.py
++++ b/tools/perf/scripts/python/export-to-postgresql.py
+@@ -204,14 +204,23 @@ from ctypes import *
+ libpq = CDLL("libpq.so.5")
+ PQconnectdb = libpq.PQconnectdb
+ PQconnectdb.restype = c_void_p
++PQconnectdb.argtypes = [ c_char_p ]
+ PQfinish = libpq.PQfinish
++PQfinish.argtypes = [ c_void_p ]
+ PQstatus = libpq.PQstatus
++PQstatus.restype = c_int
++PQstatus.argtypes = [ c_void_p ]
+ PQexec = libpq.PQexec
+ PQexec.restype = c_void_p
++PQexec.argtypes = [ c_void_p, c_char_p ]
+ PQresultStatus = libpq.PQresultStatus
++PQresultStatus.restype = c_int
++PQresultStatus.argtypes = [ c_void_p ]
+ PQputCopyData = libpq.PQputCopyData
++PQputCopyData.restype = c_int
+ PQputCopyData.argtypes = [ c_void_p, c_void_p, c_int ]
+ PQputCopyEnd = libpq.PQputCopyEnd
++PQputCopyEnd.restype = c_int
+ PQputCopyEnd.argtypes = [ c_void_p, c_void_p ]
+ 
+ sys.path.append(os.environ['PERF_EXEC_PATH'] + \
+diff --git a/tools/perf/scripts/python/export-to-sqlite.py b/tools/perf/scripts/python/export-to-sqlite.py
+index f827bf77e9d2..e4bb82c8aba9 100644
+--- a/tools/perf/scripts/python/export-to-sqlite.py
++++ b/tools/perf/scripts/python/export-to-sqlite.py
+@@ -440,7 +440,11 @@ def branch_type_table(*x):
+ 
+ def sample_table(*x):
+ 	if branches:
+-		bind_exec(sample_query, 18, x)
++		for xx in x[0:15]:
++			sample_query.addBindValue(str(xx))
++		for xx in x[19:22]:
++			sample_query.addBindValue(str(xx))
++		do_query_(sample_query)
+ 	else:
+ 		bind_exec(sample_query, 22, x)
+ 
+diff --git a/tools/testing/selftests/android/Makefile b/tools/testing/selftests/android/Makefile
+index 72c25a3cb658..d9a725478375 100644
+--- a/tools/testing/selftests/android/Makefile
++++ b/tools/testing/selftests/android/Makefile
+@@ -6,7 +6,7 @@ TEST_PROGS := run.sh
+ 
+ include ../lib.mk
+ 
+-all:
++all: khdr
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+diff --git a/tools/testing/selftests/android/config b/tools/testing/selftests/android/config
+new file mode 100644
+index 000000000000..b4ad748a9dd9
+--- /dev/null
++++ b/tools/testing/selftests/android/config
+@@ -0,0 +1,5 @@
++CONFIG_ANDROID=y
++CONFIG_STAGING=y
++CONFIG_ION=y
++CONFIG_ION_SYSTEM_HEAP=y
++CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/android/ion/Makefile b/tools/testing/selftests/android/ion/Makefile
+index e03695287f76..88cfe88e466f 100644
+--- a/tools/testing/selftests/android/ion/Makefile
++++ b/tools/testing/selftests/android/ion/Makefile
+@@ -10,6 +10,8 @@ $(TEST_GEN_FILES): ipcsocket.c ionutils.c
+ 
+ TEST_PROGS := ion_test.sh
+ 
++KSFT_KHDR_INSTALL := 1
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(OUTPUT)/ionapp_export: ionapp_export.c ipcsocket.c ionutils.c
+diff --git a/tools/testing/selftests/android/ion/config b/tools/testing/selftests/android/ion/config
+deleted file mode 100644
+index b4ad748a9dd9..000000000000
+--- a/tools/testing/selftests/android/ion/config
++++ /dev/null
+@@ -1,5 +0,0 @@
+-CONFIG_ANDROID=y
+-CONFIG_STAGING=y
+-CONFIG_ION=y
+-CONFIG_ION_SYSTEM_HEAP=y
+-CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 1e9e3c470561..8b644ea39725 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -89,17 +89,28 @@ int cg_read(const char *cgroup, const char *control, char *buf, size_t len)
+ int cg_read_strcmp(const char *cgroup, const char *control,
+ 		   const char *expected)
+ {
+-	size_t size = strlen(expected) + 1;
++	size_t size;
+ 	char *buf;
++	int ret;
++
++	/* Handle the case of comparing against empty string */
++	if (!expected)
++		size = 32;
++	else
++		size = strlen(expected) + 1;
+ 
+ 	buf = malloc(size);
+ 	if (!buf)
+ 		return -1;
+ 
+-	if (cg_read(cgroup, control, buf, size))
++	if (cg_read(cgroup, control, buf, size)) {
++		free(buf);
+ 		return -1;
++	}
+ 
+-	return strcmp(expected, buf);
++	ret = strcmp(expected, buf);
++	free(buf);
++	return ret;
+ }
+ 
+ int cg_read_strstr(const char *cgroup, const char *control, const char *needle)
+diff --git a/tools/testing/selftests/efivarfs/config b/tools/testing/selftests/efivarfs/config
+new file mode 100644
+index 000000000000..4e151f1005b2
+--- /dev/null
++++ b/tools/testing/selftests/efivarfs/config
+@@ -0,0 +1 @@
++CONFIG_EFIVAR_FS=y
+diff --git a/tools/testing/selftests/futex/functional/Makefile b/tools/testing/selftests/futex/functional/Makefile
+index ff8feca49746..ad1eeb14fda7 100644
+--- a/tools/testing/selftests/futex/functional/Makefile
++++ b/tools/testing/selftests/futex/functional/Makefile
+@@ -18,6 +18,7 @@ TEST_GEN_FILES := \
+ 
+ TEST_PROGS := run.sh
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(TEST_GEN_FILES): $(HEADERS)
+diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
+index 1bbb47565c55..4665cdbf1a8d 100644
+--- a/tools/testing/selftests/gpio/Makefile
++++ b/tools/testing/selftests/gpio/Makefile
+@@ -21,11 +21,8 @@ endef
+ CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/
+ LDLIBS += -lmount -I/usr/include/libmount
+ 
+-$(BINARIES): ../../../gpio/gpio-utils.o ../../../../usr/include/linux/gpio.h
++$(BINARIES):| khdr
++$(BINARIES): ../../../gpio/gpio-utils.o
+ 
+ ../../../gpio/gpio-utils.o:
+ 	make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C ../../../gpio
+-
+-../../../../usr/include/linux/gpio.h:
+-	make -C ../../../.. headers_install INSTALL_HDR_PATH=$(shell pwd)/../../../../usr/
+-
+diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
+index 15e6b75fc3a5..a3edb2c8e43d 100644
+--- a/tools/testing/selftests/kselftest.h
++++ b/tools/testing/selftests/kselftest.h
+@@ -19,7 +19,6 @@
+ #define KSFT_FAIL  1
+ #define KSFT_XFAIL 2
+ #define KSFT_XPASS 3
+-/* Treat skip as pass */
+ #define KSFT_SKIP  4
+ 
+ /* counters */
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index d9d00319b07c..bcb69380bbab 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -32,9 +32,6 @@ $(LIBKVM_OBJ): $(OUTPUT)/%.o: %.c
+ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
+ 	$(AR) crs $@ $^
+ 
+-$(LINUX_HDR_PATH):
+-	make -C $(top_srcdir) headers_install
+-
+-all: $(STATIC_LIBS) $(LINUX_HDR_PATH)
++all: $(STATIC_LIBS)
+ $(TEST_GEN_PROGS): $(STATIC_LIBS)
+-$(TEST_GEN_PROGS) $(LIBKVM_OBJ): | $(LINUX_HDR_PATH)
++$(STATIC_LIBS):| khdr
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index 17ab36605a8e..0a8e75886224 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -16,8 +16,20 @@ TEST_GEN_PROGS := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS))
+ TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED))
+ TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES))
+ 
++top_srcdir ?= ../../../..
++include $(top_srcdir)/scripts/subarch.include
++ARCH		?= $(SUBARCH)
++
+ all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES)
+ 
++.PHONY: khdr
++khdr:
++	make ARCH=$(ARCH) -C $(top_srcdir) headers_install
++
++ifdef KSFT_KHDR_INSTALL
++$(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES):| khdr
++endif
++
+ .ONESHELL:
+ define RUN_TEST_PRINT_RESULT
+ 	TEST_HDR_MSG="selftests: "`basename $$PWD`:" $$BASENAME_TEST";	\
+diff --git a/tools/testing/selftests/memory-hotplug/config b/tools/testing/selftests/memory-hotplug/config
+index 2fde30191a47..a7e8cd5bb265 100644
+--- a/tools/testing/selftests/memory-hotplug/config
++++ b/tools/testing/selftests/memory-hotplug/config
+@@ -2,3 +2,4 @@ CONFIG_MEMORY_HOTPLUG=y
+ CONFIG_MEMORY_HOTPLUG_SPARSE=y
+ CONFIG_NOTIFIER_ERROR_INJECTION=y
+ CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
++CONFIG_MEMORY_HOTREMOVE=y
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 663e11e85727..d515dabc6b0d 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -15,6 +15,7 @@ TEST_GEN_FILES += udpgso udpgso_bench_tx udpgso_bench_rx
+ TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+ TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict
+ 
++KSFT_KHDR_INSTALL := 1
+ include ../lib.mk
+ 
+ $(OUTPUT)/reuseport_bpf_numa: LDFLAGS += -lnuma
+diff --git a/tools/testing/selftests/networking/timestamping/Makefile b/tools/testing/selftests/networking/timestamping/Makefile
+index a728040edbe1..14cfcf006936 100644
+--- a/tools/testing/selftests/networking/timestamping/Makefile
++++ b/tools/testing/selftests/networking/timestamping/Makefile
+@@ -5,6 +5,7 @@ TEST_PROGS := hwtstamp_config rxtimestamp timestamping txtimestamp
+ 
+ all: $(TEST_PROGS)
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ clean:
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index fdefa2295ddc..58759454b1d0 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -25,10 +25,6 @@ TEST_PROGS := run_vmtests
+ 
+ include ../lib.mk
+ 
+-$(OUTPUT)/userfaultfd: ../../../../usr/include/linux/kernel.h
+ $(OUTPUT)/userfaultfd: LDLIBS += -lpthread
+ 
+ $(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+-
+-../../../../usr/include/linux/kernel.h:
+-	make -C ../../../.. headers_install


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     2988b4e9c29458eeec34e9889d4837ff38fae2b4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 10 11:16:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:26 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2988b4e9

Linux patch 4.18.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-4.18.13.patch | 7273 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7277 insertions(+)

diff --git a/0000_README b/0000_README
index ff87445..f5bb594 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-4.18.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.12
 
+Patch:  1012_linux-4.18.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-4.18.13.patch b/1012_linux-4.18.13.patch
new file mode 100644
index 0000000..6c8e751
--- /dev/null
+++ b/1012_linux-4.18.13.patch
@@ -0,0 +1,7273 @@
+diff --git a/Documentation/devicetree/bindings/net/sh_eth.txt b/Documentation/devicetree/bindings/net/sh_eth.txt
+index 82a4cf2c145d..a62fe3b613fc 100644
+--- a/Documentation/devicetree/bindings/net/sh_eth.txt
++++ b/Documentation/devicetree/bindings/net/sh_eth.txt
+@@ -16,6 +16,7 @@ Required properties:
+ 	      "renesas,ether-r8a7794"  if the device is a part of R8A7794 SoC.
+ 	      "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC.
+ 	      "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC.
++	      "renesas,ether-r7s9210" if the device is a part of R7S9210 SoC.
+ 	      "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device.
+ 	      "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1
+ 	                                device.
+diff --git a/Makefile b/Makefile
+index 466e07af8473..4442e9ea4b6d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 11859287c52a..c98b59ac0612 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+ 	"1:	llock   %[orig], [%[ctr]]		\n"		\
+ 	"	" #asm_op " %[val], %[orig], %[i]	\n"		\
+ 	"	scond   %[val], [%[ctr]]		\n"		\
+-	"						\n"		\
++	"	bnz     1b				\n"		\
+ 	: [val]	"=&r"	(val),						\
+ 	  [orig] "=&r" (orig)						\
+ 	: [ctr]	"r"	(&v->counter),					\
+diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
+index 1b5e0e843c3a..7e2b3e360086 100644
+--- a/arch/arm64/include/asm/jump_label.h
++++ b/arch/arm64/include/asm/jump_label.h
+@@ -28,7 +28,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm goto("1: nop\n\t"
++	asm_volatile_goto("1: nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+@@ -42,7 +42,7 @@ l_yes:
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm goto("1: b %l[l_yes]\n\t"
++	asm_volatile_goto("1: b %l[l_yes]\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+diff --git a/arch/hexagon/include/asm/bitops.h b/arch/hexagon/include/asm/bitops.h
+index 5e4a59b3ec1b..2691a1857d20 100644
+--- a/arch/hexagon/include/asm/bitops.h
++++ b/arch/hexagon/include/asm/bitops.h
+@@ -211,7 +211,7 @@ static inline long ffz(int x)
+  * This is defined the same way as ffs.
+  * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32.
+  */
+-static inline long fls(int x)
++static inline int fls(int x)
+ {
+ 	int r;
+ 
+@@ -232,7 +232,7 @@ static inline long fls(int x)
+  * the libc and compiler builtin ffs routines, therefore
+  * differs in spirit from the above ffz (man ffs).
+  */
+-static inline long ffs(int x)
++static inline int ffs(int x)
+ {
+ 	int r;
+ 
+diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
+index 77459df34e2e..7ebe7ad19d15 100644
+--- a/arch/hexagon/kernel/dma.c
++++ b/arch/hexagon/kernel/dma.c
+@@ -60,7 +60,7 @@ static void *hexagon_dma_alloc_coherent(struct device *dev, size_t size,
+ 			panic("Can't create %s() memory pool!", __func__);
+ 		else
+ 			gen_pool_add(coherent_pool,
+-				pfn_to_virt(max_low_pfn),
++				(unsigned long)pfn_to_virt(max_low_pfn),
+ 				hexagon_coherent_pool_size, -1);
+ 	}
+ 
+diff --git a/arch/nds32/include/asm/elf.h b/arch/nds32/include/asm/elf.h
+index 56c479058802..f5f9cf7e0544 100644
+--- a/arch/nds32/include/asm/elf.h
++++ b/arch/nds32/include/asm/elf.h
+@@ -121,9 +121,9 @@ struct elf32_hdr;
+  */
+ #define ELF_CLASS	ELFCLASS32
+ #ifdef __NDS32_EB__
+-#define ELF_DATA	ELFDATA2MSB;
++#define ELF_DATA	ELFDATA2MSB
+ #else
+-#define ELF_DATA	ELFDATA2LSB;
++#define ELF_DATA	ELFDATA2LSB
+ #endif
+ #define ELF_ARCH	EM_NDS32
+ #define USE_ELF_CORE_DUMP
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index 18a009f3804d..3f771e0595e8 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -78,8 +78,9 @@ static inline void set_fs(mm_segment_t fs)
+ #define get_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_READ,  p, sizeof(*p)))) {		\
+-		__e = __get_user(x,p);					\
++	const __typeof__(*(p)) __user *__p = (p);			\
++	if(likely(access_ok(VERIFY_READ, __p, sizeof(*__p)))) {		\
++		__e = __get_user(x, __p);				\
+ 	} else								\
+ 		x = 0;							\
+ 	__e;								\
+@@ -99,10 +100,10 @@ static inline void set_fs(mm_segment_t fs)
+ 
+ #define __get_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __gu_addr = (unsigned long)(ptr);			\
++	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	unsigned long __gu_val;						\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__chk_user_ptr(__gu_addr);					\
++	switch (sizeof(*(__gu_addr))) {					\
+ 	case 1:								\
+ 		__get_user_asm("lbi",__gu_val,__gu_addr,err);		\
+ 		break;							\
+@@ -119,7 +120,7 @@ do {									\
+ 		BUILD_BUG(); 						\
+ 		break;							\
+ 	}								\
+-	(x) = (__typeof__(*(ptr)))__gu_val;				\
++	(x) = (__typeof__(*(__gu_addr)))__gu_val;			\
+ } while (0)
+ 
+ #define __get_user_asm(inst,x,addr,err)					\
+@@ -169,8 +170,9 @@ do {									\
+ #define put_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_WRITE,  p, sizeof(*p)))) {		\
+-		__e = __put_user(x,p);					\
++	__typeof__(*(p)) __user *__p = (p);				\
++	if(likely(access_ok(VERIFY_WRITE, __p, sizeof(*__p)))) {	\
++		__e = __put_user(x, __p);				\
+ 	}								\
+ 	__e;								\
+ })
+@@ -189,10 +191,10 @@ do {									\
+ 
+ #define __put_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __pu_addr = (unsigned long)(ptr);			\
+-	__typeof__(*(ptr)) __pu_val = (x);				\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__typeof__(*(ptr)) __user *__pu_addr = (ptr);			\
++	__typeof__(*(__pu_addr)) __pu_val = (x);			\
++	__chk_user_ptr(__pu_addr);					\
++	switch (sizeof(*(__pu_addr))) {					\
+ 	case 1:								\
+ 		__put_user_asm("sbi",__pu_val,__pu_addr,err);		\
+ 		break;							\
+diff --git a/arch/nds32/kernel/atl2c.c b/arch/nds32/kernel/atl2c.c
+index 0c6d031a1c4a..0c5386e72098 100644
+--- a/arch/nds32/kernel/atl2c.c
++++ b/arch/nds32/kernel/atl2c.c
+@@ -9,7 +9,8 @@
+ 
+ void __iomem *atl2c_base;
+ static const struct of_device_id atl2c_ids[] __initconst = {
+-	{.compatible = "andestech,atl2c",}
++	{.compatible = "andestech,atl2c",},
++	{}
+ };
+ 
+ static int __init atl2c_of_init(void)
+diff --git a/arch/nds32/kernel/module.c b/arch/nds32/kernel/module.c
+index 4167283d8293..1e31829cbc2a 100644
+--- a/arch/nds32/kernel/module.c
++++ b/arch/nds32/kernel/module.c
+@@ -40,7 +40,7 @@ void do_reloc16(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+@@ -70,7 +70,7 @@ void do_reloc32(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c
+index a6205fd4db52..f0e974347c26 100644
+--- a/arch/nds32/kernel/traps.c
++++ b/arch/nds32/kernel/traps.c
+@@ -137,7 +137,7 @@ static void __dump(struct task_struct *tsk, unsigned long *base_reg)
+ 		       !((unsigned long)base_reg & 0x3) &&
+ 		       ((unsigned long)base_reg >= TASK_SIZE)) {
+ 			unsigned long next_fp;
+-#if !defined(NDS32_ABI_2)
++#if !defined(__NDS32_ABI_2)
+ 			ret_addr = base_reg[0];
+ 			next_fp = base_reg[1];
+ #else
+diff --git a/arch/nds32/kernel/vmlinux.lds.S b/arch/nds32/kernel/vmlinux.lds.S
+index 288313b886ef..9e90f30a181d 100644
+--- a/arch/nds32/kernel/vmlinux.lds.S
++++ b/arch/nds32/kernel/vmlinux.lds.S
+@@ -13,14 +13,26 @@ OUTPUT_ARCH(nds32)
+ ENTRY(_stext_lma)
+ jiffies = jiffies_64;
+ 
++#if defined(CONFIG_GCOV_KERNEL)
++#define NDS32_EXIT_KEEP(x)	x
++#else
++#define NDS32_EXIT_KEEP(x)
++#endif
++
+ SECTIONS
+ {
+ 	_stext_lma = TEXTADDR - LOAD_OFFSET;
+ 	. = TEXTADDR;
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
++	.exit.text : {
++		NDS32_EXIT_KEEP(EXIT_TEXT)
++	}
+ 	INIT_TEXT_SECTION(PAGE_SIZE)
+ 	INIT_DATA_SECTION(16)
++	.exit.data : {
++		NDS32_EXIT_KEEP(EXIT_DATA)
++	}
+ 	PERCPU_SECTION(L1_CACHE_BYTES)
+ 	__init_end = .;
+ 
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index 7f3a8cf5d66f..4c08f42f6406 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -359,7 +359,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
+ 	unsigned long pp, key;
+ 	unsigned long v, orig_v, gr;
+ 	__be64 *hptep;
+-	int index;
++	long int index;
+ 	int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR);
+ 
+ 	if (kvm_is_radix(vcpu->kvm))
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index f0d2070866d4..0efa5b29d0a3 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -64,15 +64,8 @@ atomic_t hart_lottery;
+ #ifdef CONFIG_BLK_DEV_INITRD
+ static void __init setup_initrd(void)
+ {
+-	extern char __initramfs_start[];
+-	extern unsigned long __initramfs_size;
+ 	unsigned long size;
+ 
+-	if (__initramfs_size > 0) {
+-		initrd_start = (unsigned long)(&__initramfs_start);
+-		initrd_end = initrd_start + __initramfs_size;
+-	}
+-
+ 	if (initrd_start >= initrd_end) {
+ 		printk(KERN_INFO "initrd not found or empty");
+ 		goto disable;
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index a4170048a30b..17fbd07e4245 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1250,4 +1250,8 @@ void intel_pmu_lbr_init_knl(void)
+ 
+ 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ 	x86_pmu.lbr_sel_map  = snb_lbr_sel_map;
++
++	/* Knights Landing does have MISPREDICT bit */
++	if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_LIP)
++		x86_pmu.intel_cap.lbr_format = LBR_FORMAT_EIP_FLAGS;
+ }
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index ec00d1ff5098..f7151cd03cb0 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -1640,6 +1640,7 @@ static int do_open(struct inode *inode, struct file *filp)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ static int proc_apm_show(struct seq_file *m, void *v)
+ {
+ 	unsigned short	bx;
+@@ -1719,6 +1720,7 @@ static int proc_apm_show(struct seq_file *m, void *v)
+ 		   units);
+ 	return 0;
+ }
++#endif
+ 
+ static int apm(void *unused)
+ {
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index eb85cb87c40f..ec868373b11b 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -307,28 +307,11 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
+ 	}
+ }
+ 
+-static void blkg_pd_offline(struct blkcg_gq *blkg)
+-{
+-	int i;
+-
+-	lockdep_assert_held(blkg->q->queue_lock);
+-	lockdep_assert_held(&blkg->blkcg->lock);
+-
+-	for (i = 0; i < BLKCG_MAX_POLS; i++) {
+-		struct blkcg_policy *pol = blkcg_policy[i];
+-
+-		if (blkg->pd[i] && !blkg->pd[i]->offline &&
+-		    pol->pd_offline_fn) {
+-			pol->pd_offline_fn(blkg->pd[i]);
+-			blkg->pd[i]->offline = true;
+-		}
+-	}
+-}
+-
+ static void blkg_destroy(struct blkcg_gq *blkg)
+ {
+ 	struct blkcg *blkcg = blkg->blkcg;
+ 	struct blkcg_gq *parent = blkg->parent;
++	int i;
+ 
+ 	lockdep_assert_held(blkg->q->queue_lock);
+ 	lockdep_assert_held(&blkcg->lock);
+@@ -337,6 +320,13 @@ static void blkg_destroy(struct blkcg_gq *blkg)
+ 	WARN_ON_ONCE(list_empty(&blkg->q_node));
+ 	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
+ 
++	for (i = 0; i < BLKCG_MAX_POLS; i++) {
++		struct blkcg_policy *pol = blkcg_policy[i];
++
++		if (blkg->pd[i] && pol->pd_offline_fn)
++			pol->pd_offline_fn(blkg->pd[i]);
++	}
++
+ 	if (parent) {
+ 		blkg_rwstat_add_aux(&parent->stat_bytes, &blkg->stat_bytes);
+ 		blkg_rwstat_add_aux(&parent->stat_ios, &blkg->stat_ios);
+@@ -379,7 +369,6 @@ static void blkg_destroy_all(struct request_queue *q)
+ 		struct blkcg *blkcg = blkg->blkcg;
+ 
+ 		spin_lock(&blkcg->lock);
+-		blkg_pd_offline(blkg);
+ 		blkg_destroy(blkg);
+ 		spin_unlock(&blkcg->lock);
+ 	}
+@@ -1006,54 +995,21 @@ static struct cftype blkcg_legacy_files[] = {
+  * @css: css of interest
+  *
+  * This function is called when @css is about to go away and responsible
+- * for offlining all blkgs pd and killing all wbs associated with @css.
+- * blkgs pd offline should be done while holding both q and blkcg locks.
+- * As blkcg lock is nested inside q lock, this function performs reverse
+- * double lock dancing.
++ * for shooting down all blkgs associated with @css.  blkgs should be
++ * removed while holding both q and blkcg locks.  As blkcg lock is nested
++ * inside q lock, this function performs reverse double lock dancing.
+  *
+  * This is the blkcg counterpart of ioc_release_fn().
+  */
+ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+ {
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+-	struct blkcg_gq *blkg;
+ 
+ 	spin_lock_irq(&blkcg->lock);
+ 
+-	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+-		struct request_queue *q = blkg->q;
+-
+-		if (spin_trylock(q->queue_lock)) {
+-			blkg_pd_offline(blkg);
+-			spin_unlock(q->queue_lock);
+-		} else {
+-			spin_unlock_irq(&blkcg->lock);
+-			cpu_relax();
+-			spin_lock_irq(&blkcg->lock);
+-		}
+-	}
+-
+-	spin_unlock_irq(&blkcg->lock);
+-
+-	wb_blkcg_offline(blkcg);
+-}
+-
+-/**
+- * blkcg_destroy_all_blkgs - destroy all blkgs associated with a blkcg
+- * @blkcg: blkcg of interest
+- *
+- * This function is called when blkcg css is about to free and responsible for
+- * destroying all blkgs associated with @blkcg.
+- * blkgs should be removed while holding both q and blkcg locks. As blkcg lock
+- * is nested inside q lock, this function performs reverse double lock dancing.
+- */
+-static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+-{
+-	spin_lock_irq(&blkcg->lock);
+ 	while (!hlist_empty(&blkcg->blkg_list)) {
+ 		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
+-						    struct blkcg_gq,
+-						    blkcg_node);
++						struct blkcg_gq, blkcg_node);
+ 		struct request_queue *q = blkg->q;
+ 
+ 		if (spin_trylock(q->queue_lock)) {
+@@ -1065,7 +1021,10 @@ static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+ 			spin_lock_irq(&blkcg->lock);
+ 		}
+ 	}
++
+ 	spin_unlock_irq(&blkcg->lock);
++
++	wb_blkcg_offline(blkcg);
+ }
+ 
+ static void blkcg_css_free(struct cgroup_subsys_state *css)
+@@ -1073,8 +1032,6 @@ static void blkcg_css_free(struct cgroup_subsys_state *css)
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+ 	int i;
+ 
+-	blkcg_destroy_all_blkgs(blkcg);
+-
+ 	mutex_lock(&blkcg_pol_mutex);
+ 
+ 	list_del(&blkcg->all_blkcgs_node);
+@@ -1412,11 +1369,8 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ 		if (blkg->pd[pol->plid]) {
+-			if (!blkg->pd[pol->plid]->offline &&
+-			    pol->pd_offline_fn) {
++			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+-				blkg->pd[pol->plid]->offline = true;
+-			}
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 22a2bc5f25ce..99bf0c0394f8 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -7403,4 +7403,4 @@ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
+ EXPORT_SYMBOL_GPL(ata_host_get);
+-EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
++EXPORT_SYMBOL_GPL(ata_host_put);
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 0943e7065e0e..8e9213b36e31 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -209,22 +209,28 @@ static struct fw_priv *__lookup_fw_priv(const char *fw_name)
+ static int alloc_lookup_fw_priv(const char *fw_name,
+ 				struct firmware_cache *fwc,
+ 				struct fw_priv **fw_priv, void *dbuf,
+-				size_t size)
++				size_t size, enum fw_opt opt_flags)
+ {
+ 	struct fw_priv *tmp;
+ 
+ 	spin_lock(&fwc->lock);
+-	tmp = __lookup_fw_priv(fw_name);
+-	if (tmp) {
+-		kref_get(&tmp->ref);
+-		spin_unlock(&fwc->lock);
+-		*fw_priv = tmp;
+-		pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
+-		return 1;
++	if (!(opt_flags & FW_OPT_NOCACHE)) {
++		tmp = __lookup_fw_priv(fw_name);
++		if (tmp) {
++			kref_get(&tmp->ref);
++			spin_unlock(&fwc->lock);
++			*fw_priv = tmp;
++			pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
++			return 1;
++		}
+ 	}
++
+ 	tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size);
+-	if (tmp)
+-		list_add(&tmp->list, &fwc->head);
++	if (tmp) {
++		INIT_LIST_HEAD(&tmp->list);
++		if (!(opt_flags & FW_OPT_NOCACHE))
++			list_add(&tmp->list, &fwc->head);
++	}
+ 	spin_unlock(&fwc->lock);
+ 
+ 	*fw_priv = tmp;
+@@ -493,7 +499,8 @@ int assign_fw(struct firmware *fw, struct device *device,
+  */
+ static int
+ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+-			  struct device *device, void *dbuf, size_t size)
++			  struct device *device, void *dbuf, size_t size,
++			  enum fw_opt opt_flags)
+ {
+ 	struct firmware *firmware;
+ 	struct fw_priv *fw_priv;
+@@ -511,7 +518,8 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+ 		return 0; /* assigned */
+ 	}
+ 
+-	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size);
++	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size,
++				  opt_flags);
+ 
+ 	/*
+ 	 * bind with 'priv' now to avoid warning in failure path
+@@ -571,7 +579,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 		goto out;
+ 	}
+ 
+-	ret = _request_firmware_prepare(&fw, name, device, buf, size);
++	ret = _request_firmware_prepare(&fw, name, device, buf, size,
++					opt_flags);
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index efc9a7ae4857..35e81d7dd929 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -44,7 +44,7 @@ enum _msm8996_version {
+ 
+ struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
+ 
+-static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void)
++static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+ {
+ 	size_t len;
+ 	u32 *msm_id;
+@@ -221,7 +221,7 @@ static int __init qcom_cpufreq_kryo_init(void)
+ }
+ module_init(qcom_cpufreq_kryo_init);
+ 
+-static void __init qcom_cpufreq_kryo_exit(void)
++static void __exit qcom_cpufreq_kryo_exit(void)
+ {
+ 	platform_device_unregister(kryo_cpufreq_pdev);
+ 	platform_driver_unregister(&qcom_cpufreq_kryo_driver);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index d67667970f7e..ec40f991e6c6 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1553,8 +1553,8 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_TO_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+@@ -1757,8 +1757,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_FROM_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index b916c4eb608c..e5d2ac5aec40 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -367,7 +367,8 @@ static inline void dsgl_walk_init(struct dsgl_walk *walk,
+ 	walk->to = (struct phys_sge_pairs *)(dsgl + 1);
+ }
+ 
+-static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
++static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid,
++				 int pci_chan_id)
+ {
+ 	struct cpl_rx_phys_dsgl *phys_cpl;
+ 
+@@ -385,6 +386,7 @@ static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
+ 	phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR;
+ 	phys_cpl->rss_hdr_int.qid = htons(qid);
+ 	phys_cpl->rss_hdr_int.hash_val = 0;
++	phys_cpl->rss_hdr_int.channel = pci_chan_id;
+ }
+ 
+ static inline void dsgl_walk_add_page(struct dsgl_walk *walk,
+@@ -718,7 +720,7 @@ static inline void create_wreq(struct chcr_context *ctx,
+ 		FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+ 				!!lcb, ctx->tx_qidx);
+ 
+-	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
++	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->tx_chan_id,
+ 						       qid);
+ 	chcr_req->ulptx.len = htonl((DIV_ROUND_UP(len16, 16) -
+ 				     ((sizeof(chcr_req->wreq)) >> 4)));
+@@ -1339,16 +1341,23 @@ static int chcr_device_init(struct chcr_context *ctx)
+ 				    adap->vres.ncrypto_fc);
+ 		rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan;
+ 		txq_perchan = ntxq / u_ctx->lldi.nchan;
+-		rxq_idx = ctx->dev->tx_channel_id * rxq_perchan;
+-		rxq_idx += id % rxq_perchan;
+-		txq_idx = ctx->dev->tx_channel_id * txq_perchan;
+-		txq_idx += id % txq_perchan;
+ 		spin_lock(&ctx->dev->lock_chcr_dev);
+-		ctx->rx_qidx = rxq_idx;
+-		ctx->tx_qidx = txq_idx;
++		ctx->tx_chan_id = ctx->dev->tx_channel_id;
+ 		ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+ 		ctx->dev->rx_channel_id = 0;
+ 		spin_unlock(&ctx->dev->lock_chcr_dev);
++		rxq_idx = ctx->tx_chan_id * rxq_perchan;
++		rxq_idx += id % rxq_perchan;
++		txq_idx = ctx->tx_chan_id * txq_perchan;
++		txq_idx += id % txq_perchan;
++		ctx->rx_qidx = rxq_idx;
++		ctx->tx_qidx = txq_idx;
++		/* Channel Id used by SGE to forward packet to Host.
++		 * Same value should be used in cpl_fw6_pld RSS_CH field
++		 * by FW. Driver programs PCI channel ID to be used in fw
++		 * at the time of queue allocation with value "pi->tx_chan"
++		 */
++		ctx->pci_chan_id = txq_idx / txq_perchan;
+ 	}
+ out:
+ 	return err;
+@@ -2503,6 +2512,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
++	struct chcr_context *ctx = a_ctx(tfm);
+ 	u32 temp;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2512,7 +2522,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma);
+ 	temp = req->cryptlen + (reqctx->op ? -authsize : authsize);
+ 	dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, req->assoclen);
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_cipher_src_ent(struct ablkcipher_request *req,
+@@ -2544,6 +2554,8 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 			     unsigned short qid)
+ {
+ 	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req);
++	struct chcr_context *ctx = c_ctx(tfm);
+ 	struct dsgl_walk dsgl_walk;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2552,7 +2564,7 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 	reqctx->dstsg = dsgl_walk.last_sg;
+ 	reqctx->dst_ofst = dsgl_walk.last_sg_len;
+ 
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_hash_src_ent(struct ahash_request *req,
+diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
+index 54835cb109e5..0d2c70c344f3 100644
+--- a/drivers/crypto/chelsio/chcr_crypto.h
++++ b/drivers/crypto/chelsio/chcr_crypto.h
+@@ -255,6 +255,8 @@ struct chcr_context {
+ 	struct chcr_dev *dev;
+ 	unsigned char tx_qidx;
+ 	unsigned char rx_qidx;
++	unsigned char tx_chan_id;
++	unsigned char pci_chan_id;
+ 	struct __crypto_ctx crypto_ctx[0];
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index a10c418d4e5c..56bd28174f52 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -63,7 +63,7 @@ struct dcp {
+ 	struct dcp_coherent_block	*coh;
+ 
+ 	struct completion		completion[DCP_MAX_CHANS];
+-	struct mutex			mutex[DCP_MAX_CHANS];
++	spinlock_t			lock[DCP_MAX_CHANS];
+ 	struct task_struct		*thread[DCP_MAX_CHANS];
+ 	struct crypto_queue		queue[DCP_MAX_CHANS];
+ };
+@@ -349,13 +349,20 @@ static int dcp_chan_thread_aes(void *data)
+ 
+ 	int ret;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -363,11 +370,8 @@ static int dcp_chan_thread_aes(void *data)
+ 		if (arq) {
+ 			ret = mxs_dcp_aes_block_crypt(arq);
+ 			arq->complete(arq, ret);
+-			continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -409,9 +413,9 @@ static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb)
+ 	rctx->ecb = ecb;
+ 	actx->chan = DCP_CHAN_CRYPTO;
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 
+@@ -640,13 +644,20 @@ static int dcp_chan_thread_sha(void *data)
+ 	struct ahash_request *req;
+ 	int ret, fini;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -658,12 +669,8 @@ static int dcp_chan_thread_sha(void *data)
+ 			ret = dcp_sha_req_to_buf(arq);
+ 			fini = rctx->fini;
+ 			arq->complete(arq, ret);
+-			if (!fini)
+-				continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -721,9 +728,9 @@ static int dcp_sha_update_fx(struct ahash_request *req, int fini)
+ 		rctx->init = 1;
+ 	}
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 	mutex_unlock(&actx->mutex);
+@@ -997,7 +1004,7 @@ static int mxs_dcp_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, sdcp);
+ 
+ 	for (i = 0; i < DCP_MAX_CHANS; i++) {
+-		mutex_init(&sdcp->mutex[i]);
++		spin_lock_init(&sdcp->lock[i]);
+ 		init_completion(&sdcp->completion[i]);
+ 		crypto_init_queue(&sdcp->queue[i], 50);
+ 	}
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+index ba197f34c252..763c2166ee0e 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXX_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index 24ec908eb26c..613c7d5644ce 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXXIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c
+index 59a5a0df50b6..9cb832963357 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62x/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62X_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 1 : 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index b9f3e0e4fde9..278452b8ef81 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62XIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+index be5c5a988ca5..3a9708ef4ce2 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCC_PCI_DEVICE_ID:
+@@ -237,8 +238,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index 26ab17bfc6da..3da0f951cb59 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCCIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index 2a219b1261b1..49cb74f54a10 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -166,7 +166,13 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
+ 					le32_to_cpu(attr->sustained_freq_khz);
+ 		dom_info->sustained_perf_level =
+ 					le32_to_cpu(attr->sustained_perf_level);
+-		dom_info->mult_factor =	(dom_info->sustained_freq_khz * 1000) /
++		if (!dom_info->sustained_freq_khz ||
++		    !dom_info->sustained_perf_level)
++			/* CPUFreq converts to kHz, hence default 1000 */
++			dom_info->mult_factor =	1000;
++		else
++			dom_info->mult_factor =
++					(dom_info->sustained_freq_khz * 1000) /
+ 					dom_info->sustained_perf_level;
+ 		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+ 	}
+diff --git a/drivers/gpio/gpio-adp5588.c b/drivers/gpio/gpio-adp5588.c
+index 3530ccd17e04..da9781a2ef4a 100644
+--- a/drivers/gpio/gpio-adp5588.c
++++ b/drivers/gpio/gpio-adp5588.c
+@@ -41,6 +41,8 @@ struct adp5588_gpio {
+ 	uint8_t int_en[3];
+ 	uint8_t irq_mask[3];
+ 	uint8_t irq_stat[3];
++	uint8_t int_input_en[3];
++	uint8_t int_lvl_cached[3];
+ };
+ 
+ static int adp5588_gpio_read(struct i2c_client *client, u8 reg)
+@@ -173,12 +175,28 @@ static void adp5588_irq_bus_sync_unlock(struct irq_data *d)
+ 	struct adp5588_gpio *dev = irq_data_get_irq_chip_data(d);
+ 	int i;
+ 
+-	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++)
++	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) {
++		if (dev->int_input_en[i]) {
++			mutex_lock(&dev->lock);
++			dev->dir[i] &= ~dev->int_input_en[i];
++			dev->int_input_en[i] = 0;
++			adp5588_gpio_write(dev->client, GPIO_DIR1 + i,
++					   dev->dir[i]);
++			mutex_unlock(&dev->lock);
++		}
++
++		if (dev->int_lvl_cached[i] != dev->int_lvl[i]) {
++			dev->int_lvl_cached[i] = dev->int_lvl[i];
++			adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + i,
++					   dev->int_lvl[i]);
++		}
++
+ 		if (dev->int_en[i] ^ dev->irq_mask[i]) {
+ 			dev->int_en[i] = dev->irq_mask[i];
+ 			adp5588_gpio_write(dev->client, GPIO_INT_EN1 + i,
+ 					   dev->int_en[i]);
+ 		}
++	}
+ 
+ 	mutex_unlock(&dev->irq_lock);
+ }
+@@ -221,9 +239,7 @@ static int adp5588_irq_set_type(struct irq_data *d, unsigned int type)
+ 	else
+ 		return -EINVAL;
+ 
+-	adp5588_gpio_direction_input(&dev->gpio_chip, gpio);
+-	adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + bank,
+-			   dev->int_lvl[bank]);
++	dev->int_input_en[bank] |= bit;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 7a2de3de6571..5b12d6fdd448 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -726,6 +726,7 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
+ out_unregister:
+ 	dwapb_gpio_unregister(gpio);
+ 	dwapb_irq_teardown(gpio);
++	clk_disable_unprepare(gpio->clk);
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index addd9fecc198..a3e43cacd78e 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -25,7 +25,6 @@
+ 
+ struct acpi_gpio_event {
+ 	struct list_head node;
+-	struct list_head initial_sync_list;
+ 	acpi_handle handle;
+ 	unsigned int pin;
+ 	unsigned int irq;
+@@ -49,10 +48,19 @@ struct acpi_gpio_chip {
+ 	struct mutex conn_lock;
+ 	struct gpio_chip *chip;
+ 	struct list_head events;
++	struct list_head deferred_req_irqs_list_entry;
+ };
+ 
+-static LIST_HEAD(acpi_gpio_initial_sync_list);
+-static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock);
++/*
++ * For gpiochips which call acpi_gpiochip_request_interrupts() before late_init
++ * (so builtin drivers) we register the ACPI GpioInt event handlers from a
++ * late_initcall_sync handler, so that other builtin drivers can register their
++ * OpRegions before the event handlers can run.  This list contains gpiochips
++ * for which the acpi_gpiochip_request_interrupts() has been deferred.
++ */
++static DEFINE_MUTEX(acpi_gpio_deferred_req_irqs_lock);
++static LIST_HEAD(acpi_gpio_deferred_req_irqs_list);
++static bool acpi_gpio_deferred_req_irqs_done;
+ 
+ static int acpi_gpiochip_find(struct gpio_chip *gc, void *data)
+ {
+@@ -89,21 +97,6 @@ static struct gpio_desc *acpi_get_gpiod(char *path, int pin)
+ 	return gpiochip_get_desc(chip, pin);
+ }
+ 
+-static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+-static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	if (!list_empty(&event->initial_sync_list))
+-		list_del_init(&event->initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+ static irqreturn_t acpi_gpio_irq_handler(int irq, void *data)
+ {
+ 	struct acpi_gpio_event *event = data;
+@@ -186,7 +179,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 
+ 	gpiod_direction_input(desc);
+ 
+-	value = gpiod_get_value(desc);
++	value = gpiod_get_value_cansleep(desc);
+ 
+ 	ret = gpiochip_lock_as_irq(chip, pin);
+ 	if (ret) {
+@@ -229,7 +222,6 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	event->irq = irq;
+ 	event->pin = pin;
+ 	event->desc = desc;
+-	INIT_LIST_HEAD(&event->initial_sync_list);
+ 
+ 	ret = request_threaded_irq(event->irq, NULL, handler, irqflags,
+ 				   "ACPI:Event", event);
+@@ -251,10 +243,9 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	 * may refer to OperationRegions from other (builtin) drivers which
+ 	 * may be probed after us.
+ 	 */
+-	if (handler == acpi_gpio_irq_handler &&
+-	    (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
+-	     ((irqflags & IRQF_TRIGGER_FALLING) && value == 0)))
+-		acpi_gpio_add_to_initial_sync_list(event);
++	if (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
++	    ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))
++		handler(event->irq, event);
+ 
+ 	return AE_OK;
+ 
+@@ -283,6 +274,7 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	struct acpi_gpio_chip *acpi_gpio;
+ 	acpi_handle handle;
+ 	acpi_status status;
++	bool defer;
+ 
+ 	if (!chip->parent || !chip->to_irq)
+ 		return;
+@@ -295,6 +287,16 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	defer = !acpi_gpio_deferred_req_irqs_done;
++	if (defer)
++		list_add(&acpi_gpio->deferred_req_irqs_list_entry,
++			 &acpi_gpio_deferred_req_irqs_list);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
++	if (defer)
++		return;
++
+ 	acpi_walk_resources(handle, "_AEI",
+ 			    acpi_gpiochip_request_interrupt, acpi_gpio);
+ }
+@@ -325,11 +327,14 @@ void acpi_gpiochip_free_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	if (!list_empty(&acpi_gpio->deferred_req_irqs_list_entry))
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
+ 	list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) {
+ 		struct gpio_desc *desc;
+ 
+-		acpi_gpio_del_from_initial_sync_list(event);
+-
+ 		if (irqd_is_wakeup_set(irq_get_irq_data(event->irq)))
+ 			disable_irq_wake(event->irq);
+ 
+@@ -1049,6 +1054,7 @@ void acpi_gpiochip_add(struct gpio_chip *chip)
+ 
+ 	acpi_gpio->chip = chip;
+ 	INIT_LIST_HEAD(&acpi_gpio->events);
++	INIT_LIST_HEAD(&acpi_gpio->deferred_req_irqs_list_entry);
+ 
+ 	status = acpi_attach_data(handle, acpi_gpio_chip_dh, acpi_gpio);
+ 	if (ACPI_FAILURE(status)) {
+@@ -1195,20 +1201,28 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
+ 	return con_id == NULL;
+ }
+ 
+-/* Sync the initial state of handlers after all builtin drivers have probed */
+-static int acpi_gpio_initial_sync(void)
++/* Run deferred acpi_gpiochip_request_interrupts() */
++static int acpi_gpio_handle_deferred_request_interrupts(void)
+ {
+-	struct acpi_gpio_event *event, *ep;
++	struct acpi_gpio_chip *acpi_gpio, *tmp;
++
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	list_for_each_entry_safe(acpi_gpio, tmp,
++				 &acpi_gpio_deferred_req_irqs_list,
++				 deferred_req_irqs_list_entry) {
++		acpi_handle handle;
+ 
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list,
+-				 initial_sync_list) {
+-		acpi_evaluate_object(event->handle, NULL, NULL, NULL);
+-		list_del_init(&event->initial_sync_list);
++		handle = ACPI_HANDLE(acpi_gpio->chip->parent);
++		acpi_walk_resources(handle, "_AEI",
++				    acpi_gpiochip_request_interrupt, acpi_gpio);
++
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
+ 	}
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
++
++	acpi_gpio_deferred_req_irqs_done = true;
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
+ 
+ 	return 0;
+ }
+ /* We must use _sync so that this runs after the first deferred_probe run */
+-late_initcall_sync(acpi_gpio_initial_sync);
++late_initcall_sync(acpi_gpio_handle_deferred_request_interrupts);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 53a14ee8ad6d..a704d2e74421 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -31,6 +31,7 @@ static int of_gpiochip_match_node_and_xlate(struct gpio_chip *chip, void *data)
+ 	struct of_phandle_args *gpiospec = data;
+ 
+ 	return chip->gpiodev->dev.of_node == gpiospec->np &&
++				chip->of_xlate &&
+ 				chip->of_xlate(chip, gpiospec, NULL) >= 0;
+ }
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index e11a3bb03820..06dce16e22bb 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -565,7 +565,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 		if (ret)
+ 			goto out_free_descs;
+ 		lh->descs[i] = desc;
+-		count = i;
++		count = i + 1;
+ 
+ 		if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW)
+ 			set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 7200eea4f918..d9d8964a6e97 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -38,6 +38,7 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ {
+ 	struct drm_gem_object *gobj;
+ 	unsigned long size;
++	int r;
+ 
+ 	gobj = drm_gem_object_lookup(p->filp, data->handle);
+ 	if (gobj == NULL)
+@@ -49,20 +50,26 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ 	p->uf_entry.tv.shared = true;
+ 	p->uf_entry.user_pages = NULL;
+ 
+-	size = amdgpu_bo_size(p->uf_entry.robj);
+-	if (size != PAGE_SIZE || (data->offset + 8) > size)
+-		return -EINVAL;
+-
+-	*offset = data->offset;
+-
+ 	drm_gem_object_put_unlocked(gobj);
+ 
++	size = amdgpu_bo_size(p->uf_entry.robj);
++	if (size != PAGE_SIZE || (data->offset + 8) > size) {
++		r = -EINVAL;
++		goto error_unref;
++	}
++
+ 	if (amdgpu_ttm_tt_get_usermm(p->uf_entry.robj->tbo.ttm)) {
+-		amdgpu_bo_unref(&p->uf_entry.robj);
+-		return -EINVAL;
++		r = -EINVAL;
++		goto error_unref;
+ 	}
+ 
++	*offset = data->offset;
++
+ 	return 0;
++
++error_unref:
++	amdgpu_bo_unref(&p->uf_entry.robj);
++	return r;
+ }
+ 
+ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index ca53b3fba422..3e3e4e907ee5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -67,6 +67,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100),
+@@ -78,7 +79,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_vg10[] = {
+@@ -106,7 +108,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] =
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_4_2[] =
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 77779adeef28..f8e866ceda02 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = sclk_table->entries[i].clk;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = mclk_table->entries[i].clk;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 0adfc5392cd3..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
++			clocks->clock[i] = data->sys_info.display_clock[i];
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk * 10;
++			clocks->clock[i] = table->entries[i].clk;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c2ebe5da34d0..89225adaa60a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -230,7 +230,7 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 		mutex_unlock(&drm->master.lock);
+ 	}
+ 	if (ret) {
+-		NV_ERROR(drm, "Client allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Client allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+@@ -240,37 +240,37 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 			       }, sizeof(struct nv_device_v0),
+ 			       &cli->device);
+ 	if (ret) {
+-		NV_ERROR(drm, "Device allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Device allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->device.object, mmus);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MMU class\n");
++		NV_PRINTK(err, cli, "No supported MMU class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mmu_init(&cli->device.object, mmus[ret].oclass, &cli->mmu);
+ 	if (ret) {
+-		NV_ERROR(drm, "MMU allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "MMU allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, vmms);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported VMM class\n");
++		NV_PRINTK(err, cli, "No supported VMM class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nouveau_vmm_init(cli, vmms[ret].oclass, &cli->vmm);
+ 	if (ret) {
+-		NV_ERROR(drm, "VMM allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "VMM allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, mems);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MEM class\n");
++		NV_PRINTK(err, cli, "No supported MEM class\n");
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+index 32fa94a9773f..cbd33e87b799 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+@@ -275,6 +275,7 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 	struct nvkm_outp *outp, *outt, *pair;
+ 	struct nvkm_conn *conn;
+ 	struct nvkm_head *head;
++	struct nvkm_ior *ior;
+ 	struct nvbios_connE connE;
+ 	struct dcb_output dcbE;
+ 	u8  hpd = 0, ver, hdr;
+@@ -399,6 +400,19 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 			return ret;
+ 	}
+ 
++	/* Enforce identity-mapped SOR assignment for panels, which have
++	 * certain bits (ie. backlight controls) wired to a specific SOR.
++	 */
++	list_for_each_entry(outp, &disp->outp, head) {
++		if (outp->conn->info.type == DCB_CONNECTOR_LVDS ||
++		    outp->conn->info.type == DCB_CONNECTOR_eDP) {
++			ior = nvkm_ior_find(disp, SOR, ffs(outp->info.or) - 1);
++			if (!WARN_ON(!ior))
++				ior->identity = true;
++			outp->identity = true;
++		}
++	}
++
+ 	i = 0;
+ 	list_for_each_entry(head, &disp->head, head)
+ 		i = max(i, head->id + 1);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 7c5bed29ffef..6160a6158cf2 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -412,14 +412,10 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ }
+ 
+ static void
+-nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
++nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ {
+ 	struct nvkm_dp *dp = nvkm_dp(outp);
+ 
+-	/* Prevent link from being retrained if sink sends an IRQ. */
+-	atomic_set(&dp->lt.done, 0);
+-	ior->dp.nr = 0;
+-
+ 	/* Execute DisableLT script from DP Info Table. */
+ 	nvbios_init(&ior->disp->engine.subdev, dp->info.script[4],
+ 		init.outp = &dp->outp.info;
+@@ -428,6 +424,16 @@ nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ 	);
+ }
+ 
++static void
++nvkm_dp_release(struct nvkm_outp *outp)
++{
++	struct nvkm_dp *dp = nvkm_dp(outp);
++
++	/* Prevent link from being retrained if sink sends an IRQ. */
++	atomic_set(&dp->lt.done, 0);
++	dp->outp.ior->dp.nr = 0;
++}
++
+ static int
+ nvkm_dp_acquire(struct nvkm_outp *outp)
+ {
+@@ -576,6 +582,7 @@ nvkm_dp_func = {
+ 	.fini = nvkm_dp_fini,
+ 	.acquire = nvkm_dp_acquire,
+ 	.release = nvkm_dp_release,
++	.disable = nvkm_dp_disable,
+ };
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+index e0b4e0c5704e..19911211a12a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+@@ -16,6 +16,7 @@ struct nvkm_ior {
+ 	char name[8];
+ 
+ 	struct list_head head;
++	bool identity;
+ 
+ 	struct nvkm_ior_state {
+ 		struct nvkm_outp *outp;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+index f89c7b977aa5..def005dd5fda 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+@@ -501,11 +501,11 @@ nv50_disp_super_2_0(struct nv50_disp *disp, struct nvkm_head *head)
+ 	nv50_disp_super_ied_off(head, ior, 2);
+ 
+ 	/* If we're shutting down the OR's only active head, execute
+-	 * the output path's release function.
++	 * the output path's disable function.
+ 	 */
+ 	if (ior->arm.head == (1 << head->id)) {
+-		if ((outp = ior->arm.outp) && outp->func->release)
+-			outp->func->release(outp, ior);
++		if ((outp = ior->arm.outp) && outp->func->disable)
++			outp->func->disable(outp, ior);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+index be9e7f8c3b23..44df835e5473 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+@@ -93,6 +93,8 @@ nvkm_outp_release(struct nvkm_outp *outp, u8 user)
+ 	if (ior) {
+ 		outp->acquired &= ~user;
+ 		if (!outp->acquired) {
++			if (outp->func->release && outp->ior)
++				outp->func->release(outp);
+ 			outp->ior->asy.outp = NULL;
+ 			outp->ior = NULL;
+ 		}
+@@ -127,17 +129,26 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	if (proto == UNKNOWN)
+ 		return -ENOSYS;
+ 
++	/* Deal with panels requiring identity-mapped SOR assignment. */
++	if (outp->identity) {
++		ior = nvkm_ior_find(outp->disp, SOR, ffs(outp->info.or) - 1);
++		if (WARN_ON(!ior))
++			return -ENOSPC;
++		return nvkm_outp_acquire_ior(outp, user, ior);
++	}
++
+ 	/* First preference is to reuse the OR that is currently armed
+ 	 * on HW, if any, in order to prevent unnecessary switching.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->arm.outp == outp)
++		if (!ior->identity && !ior->asy.outp && ior->arm.outp == outp)
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+ 
+ 	/* Failing that, a completely unused OR is the next best thing. */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type && !ior->arm.outp &&
++		if (!ior->identity &&
++		    !ior->asy.outp && ior->type == type && !ior->arm.outp &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+@@ -146,7 +157,7 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	 * but will be released during the next modeset.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type &&
++		if (!ior->identity && !ior->asy.outp && ior->type == type &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+index ea84d7d5741a..3f932fb39c94 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+@@ -17,6 +17,7 @@ struct nvkm_outp {
+ 
+ 	struct list_head head;
+ 	struct nvkm_conn *conn;
++	bool identity;
+ 
+ 	/* Assembly state. */
+ #define NVKM_OUTP_PRIV 1
+@@ -41,7 +42,8 @@ struct nvkm_outp_func {
+ 	void (*init)(struct nvkm_outp *);
+ 	void (*fini)(struct nvkm_outp *);
+ 	int (*acquire)(struct nvkm_outp *);
+-	void (*release)(struct nvkm_outp *, struct nvkm_ior *);
++	void (*release)(struct nvkm_outp *);
++	void (*disable)(struct nvkm_outp *, struct nvkm_ior *);
+ };
+ 
+ #define OUTP_MSG(o,l,f,a...) do {                                              \
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+index b80618e35491..d65959ef0564 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+@@ -158,7 +158,8 @@ gm200_devinit_post(struct nvkm_devinit *base, bool post)
+ 	}
+ 
+ 	/* load and execute some other ucode image (bios therm?) */
+-	return pmu_load(init, 0x01, post, NULL, NULL);
++	pmu_load(init, 0x01, post, NULL, NULL);
++	return 0;
+ }
+ 
+ static const struct nvkm_devinit_func
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+index de269eb482dd..7459def78d50 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+@@ -1423,7 +1423,7 @@ nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma)
+ void
+ nvkm_vmm_part(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
+ {
+-	if (vmm->func->part && inst) {
++	if (inst && vmm->func->part) {
+ 		mutex_lock(&vmm->mutex);
+ 		vmm->func->part(vmm, inst);
+ 		mutex_unlock(&vmm->mutex);
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 25b7bd56ae11..1cb41992aaa1 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -335,7 +335,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		struct hid_field *field, struct hid_usage *usage,
+ 		unsigned long **bit, int *max)
+ {
+-	if (usage->hid == (HID_UP_CUSTOM | 0x0003)) {
++	if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
++			usage->hid == (HID_UP_MSVENDOR | 0x0003)) {
+ 		/* The fn key on Apple USB keyboards */
+ 		set_bit(EV_REP, hi->input->evbit);
+ 		hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
+@@ -472,6 +473,12 @@ static const struct hid_device_id apple_devices[] = {
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO),
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e80bcd71fe1e..eee6b79fb131 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -88,6 +88,7 @@
+ #define USB_DEVICE_ID_ANTON_TOUCH_PAD	0x3101
+ 
+ #define USB_VENDOR_ID_APPLE		0x05ac
++#define BT_VENDOR_ID_APPLE		0x004c
+ #define USB_DEVICE_ID_APPLE_MIGHTYMOUSE	0x0304
+ #define USB_DEVICE_ID_APPLE_MAGICMOUSE	0x030d
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD	0x030e
+@@ -157,6 +158,7 @@
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO   0x0256
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_JIS   0x0257
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI   0x0267
++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI   0x026c
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI	0x0290
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO	0x0291
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS	0x0292
+@@ -526,10 +528,6 @@
+ #define I2C_VENDOR_ID_HANTICK		0x0911
+ #define I2C_PRODUCT_ID_HANTICK_5288	0x5288
+ 
+-#define I2C_VENDOR_ID_RAYD		0x2386
+-#define I2C_PRODUCT_ID_RAYD_3118	0x3118
+-#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+-
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+ #define USB_DEVICE_ID_HANWANG_TABLET_LAST	0x8fff
+@@ -949,6 +947,7 @@
+ #define USB_DEVICE_ID_SAITEK_RUMBLEPAD	0xff17
+ #define USB_DEVICE_ID_SAITEK_PS1000	0x0621
+ #define USB_DEVICE_ID_SAITEK_RAT7_OLD	0x0ccb
++#define USB_DEVICE_ID_SAITEK_RAT7_CONTAGION	0x0ccd
+ #define USB_DEVICE_ID_SAITEK_RAT7	0x0cd7
+ #define USB_DEVICE_ID_SAITEK_RAT9	0x0cfa
+ #define USB_DEVICE_ID_SAITEK_MMO7	0x0cd0
+diff --git a/drivers/hid/hid-saitek.c b/drivers/hid/hid-saitek.c
+index 39e642686ff0..683861f324e3 100644
+--- a/drivers/hid/hid-saitek.c
++++ b/drivers/hid/hid-saitek.c
+@@ -183,6 +183,8 @@ static const struct hid_device_id saitek_devices[] = {
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7_CONTAGION),
++		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT9),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9),
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 50af72baa5ca..2b63487057c2 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -579,6 +579,28 @@ void sensor_hub_device_close(struct hid_sensor_hub_device *hsdev)
+ }
+ EXPORT_SYMBOL_GPL(sensor_hub_device_close);
+ 
++static __u8 *sensor_hub_report_fixup(struct hid_device *hdev, __u8 *rdesc,
++		unsigned int *rsize)
++{
++	/*
++	 * Checks if the report descriptor of Thinkpad Helix 2 has a logical
++	 * minimum for magnetic flux axis greater than the maximum.
++	 */
++	if (hdev->product == USB_DEVICE_ID_TEXAS_INSTRUMENTS_LENOVO_YOGA &&
++		*rsize == 2558 && rdesc[913] == 0x17 && rdesc[914] == 0x40 &&
++		rdesc[915] == 0x81 && rdesc[916] == 0x08 &&
++		rdesc[917] == 0x00 && rdesc[918] == 0x27 &&
++		rdesc[921] == 0x07 && rdesc[922] == 0x00) {
++		/* Sets negative logical minimum for mag x, y and z */
++		rdesc[914] = rdesc[935] = rdesc[956] = 0xc0;
++		rdesc[915] = rdesc[936] = rdesc[957] = 0x7e;
++		rdesc[916] = rdesc[937] = rdesc[958] = 0xf7;
++		rdesc[917] = rdesc[938] = rdesc[959] = 0xff;
++	}
++
++	return rdesc;
++}
++
+ static int sensor_hub_probe(struct hid_device *hdev,
+ 				const struct hid_device_id *id)
+ {
+@@ -743,6 +765,7 @@ static struct hid_driver sensor_hub_driver = {
+ 	.probe = sensor_hub_probe,
+ 	.remove = sensor_hub_remove,
+ 	.raw_event = sensor_hub_raw_event,
++	.report_fixup = sensor_hub_report_fixup,
+ #ifdef CONFIG_PM
+ 	.suspend = sensor_hub_suspend,
+ 	.resume = sensor_hub_resume,
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 64773433b947..37013b58098c 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -48,6 +48,7 @@
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+ #define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -169,13 +170,10 @@ static const struct i2c_hid_quirks {
+ 	{ USB_VENDOR_ID_WEIDA, USB_DEVICE_ID_WEIDA_8755,
+ 		I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV },
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+-		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
++		I2C_HID_QUIRK_NO_RUNTIME_PM },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1110,7 +1108,9 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		goto err_mem_free;
+ 	}
+ 
+-	pm_runtime_put(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_put(&client->dev);
++
+ 	return 0;
+ 
+ err_mem_free:
+@@ -1136,7 +1136,8 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 	struct i2c_hid *ihid = i2c_get_clientdata(client);
+ 	struct hid_device *hid;
+ 
+-	pm_runtime_get_sync(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_get_sync(&client->dev);
+ 	pm_runtime_disable(&client->dev);
+ 	pm_runtime_set_suspended(&client->dev);
+ 	pm_runtime_put_noidle(&client->dev);
+@@ -1237,11 +1238,16 @@ static int i2c_hid_resume(struct device *dev)
+ 	pm_runtime_enable(dev);
+ 
+ 	enable_irq(client->irq);
+-	ret = i2c_hid_hwreset(client);
++
++	/* Instead of resetting device, simply powers the device on. This
++	 * solves "incomplete reports" on Raydium devices 2386:3118 and
++	 * 2386:4B33
++	 */
++	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* RAYDIUM device (2386:3118) need to re-send report descr cmd
++	/* Some devices need to re-send report descr cmd
+ 	 * after resume, after this it will be back normal.
+ 	 * otherwise it issues too many incomplete reports.
+ 	 */
+diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+index 97869b7410eb..da133716bed0 100644
+--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+@@ -29,6 +29,7 @@
+ #define CNL_Ax_DEVICE_ID	0x9DFC
+ #define GLK_Ax_DEVICE_ID	0x31A2
+ #define CNL_H_DEVICE_ID		0xA37C
++#define SPT_H_DEVICE_ID		0xA135
+ 
+ #define	REVISION_ID_CHT_A0	0x6
+ #define	REVISION_ID_CHT_Ax_SI	0x0
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index a2c53ea3b5ed..c7b8eb32b1ea 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -38,6 +38,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, GLK_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_H_DEVICE_ID)},
++	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, SPT_H_DEVICE_ID)},
+ 	{0, }
+ };
+ MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index ced041899456..f4d08c8ac7f8 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -76,6 +76,7 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 					__u32 version)
+ {
+ 	int ret = 0;
++	unsigned int cur_cpu;
+ 	struct vmbus_channel_initiate_contact *msg;
+ 	unsigned long flags;
+ 
+@@ -118,9 +119,10 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 	 * the CPU attempting to connect may not be CPU 0.
+ 	 */
+ 	if (version >= VERSION_WIN8_1) {
+-		msg->target_vcpu =
+-			hv_cpu_number_to_vp_number(smp_processor_id());
+-		vmbus_connection.connect_cpu = smp_processor_id();
++		cur_cpu = get_cpu();
++		msg->target_vcpu = hv_cpu_number_to_vp_number(cur_cpu);
++		vmbus_connection.connect_cpu = cur_cpu;
++		put_cpu();
+ 	} else {
+ 		msg->target_vcpu = 0;
+ 		vmbus_connection.connect_cpu = 0;
+diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c
+index 9918bdd81619..a403e8579b65 100644
+--- a/drivers/i2c/busses/i2c-uniphier-f.c
++++ b/drivers/i2c/busses/i2c-uniphier-f.c
+@@ -401,11 +401,8 @@ static int uniphier_fi2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_fi2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c
+index bb181b088291..454f914ae66d 100644
+--- a/drivers/i2c/busses/i2c-uniphier.c
++++ b/drivers/i2c/busses/i2c-uniphier.c
+@@ -248,11 +248,8 @@ static int uniphier_i2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_i2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 4994f920a836..8653182be818 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -187,12 +187,15 @@ static int st_lsm6dsx_set_fifo_odr(struct st_lsm6dsx_sensor *sensor,
+ 
+ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ {
+-	u16 fifo_watermark = ~0, cur_watermark, sip = 0, fifo_th_mask;
++	u16 fifo_watermark = ~0, cur_watermark, fifo_th_mask;
+ 	struct st_lsm6dsx_hw *hw = sensor->hw;
+ 	struct st_lsm6dsx_sensor *cur_sensor;
+ 	int i, err, data;
+ 	__le16 wdata;
+ 
++	if (!hw->sip)
++		return 0;
++
+ 	for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) {
+ 		cur_sensor = iio_priv(hw->iio_devs[i]);
+ 
+@@ -203,14 +206,10 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ 						       : cur_sensor->watermark;
+ 
+ 		fifo_watermark = min_t(u16, fifo_watermark, cur_watermark);
+-		sip += cur_sensor->sip;
+ 	}
+ 
+-	if (!sip)
+-		return 0;
+-
+-	fifo_watermark = max_t(u16, fifo_watermark, sip);
+-	fifo_watermark = (fifo_watermark / sip) * sip;
++	fifo_watermark = max_t(u16, fifo_watermark, hw->sip);
++	fifo_watermark = (fifo_watermark / hw->sip) * hw->sip;
+ 	fifo_watermark = fifo_watermark * hw->settings->fifo_ops.th_wl;
+ 
+ 	err = regmap_read(hw->regmap, hw->settings->fifo_ops.fifo_th.addr + 1,
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index 54e383231d1e..c31b9633f32d 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -258,7 +258,6 @@ static int maxim_thermocouple_remove(struct spi_device *spi)
+ static const struct spi_device_id maxim_thermocouple_id[] = {
+ 	{"max6675", MAX6675},
+ 	{"max31855", MAX31855},
+-	{"max31856", MAX31855},
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(spi, maxim_thermocouple_id);
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index ec8fb289621f..5f437d1570fb 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -124,6 +124,8 @@ static DEFINE_MUTEX(mut);
+ static DEFINE_IDR(ctx_idr);
+ static DEFINE_IDR(multicast_idr);
+ 
++static const struct file_operations ucma_fops;
++
+ static inline struct ucma_context *_ucma_find_context(int id,
+ 						      struct ucma_file *file)
+ {
+@@ -1581,6 +1583,10 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
+ 	f = fdget(cmd.fd);
+ 	if (!f.file)
+ 		return -ENOENT;
++	if (f.file->f_op != &ucma_fops) {
++		ret = -EINVAL;
++		goto file_put;
++	}
+ 
+ 	/* Validate current fd and prevent destruction of id. */
+ 	ctx = ucma_get_ctx(f.file->private_data, cmd.id);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a76e206704d4..cb1e69bdad0b 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -844,6 +844,8 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
+ 				"Failed to destroy Shadow QP");
+ 			return rc;
+ 		}
++		bnxt_qplib_free_qp_res(&rdev->qplib_res,
++				       &rdev->qp1_sqp->qplib_qp);
+ 		mutex_lock(&rdev->qp_lock);
+ 		list_del(&rdev->qp1_sqp->list);
+ 		atomic_dec(&rdev->qp_count);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index e426b990c1dd..6ad0d46ab879 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -196,7 +196,7 @@ static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res,
+ 				       struct bnxt_qplib_qp *qp)
+ {
+ 	struct bnxt_qplib_q *rq = &qp->rq;
+-	struct bnxt_qplib_q *sq = &qp->rq;
++	struct bnxt_qplib_q *sq = &qp->sq;
+ 	int rc = 0;
+ 
+ 	if (qp->sq_hdr_buf_size && sq->hwq.max_elements) {
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index d77c97fe4a23..c53363443280 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -3073,7 +3073,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+ 		return 0;
+ 
+ 	offset_mask = pte_pgsize - 1;
+-	__pte	    = *pte & PM_ADDR_MASK;
++	__pte	    = __sme_clr(*pte & PM_ADDR_MASK);
+ 
+ 	return (__pte & ~offset_mask) | (iova & offset_mask);
+ }
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 75df4c9d8b54..1c7c1250bf75 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -29,9 +29,6 @@
+  */
+ #define	MIN_RAID456_JOURNAL_SPACE (4*2048)
+ 
+-/* Global list of all raid sets */
+-static LIST_HEAD(raid_sets);
+-
+ static bool devices_handle_discard_safely = false;
+ 
+ /*
+@@ -227,7 +224,6 @@ struct rs_layout {
+ 
+ struct raid_set {
+ 	struct dm_target *ti;
+-	struct list_head list;
+ 
+ 	uint32_t stripe_cache_entries;
+ 	unsigned long ctr_flags;
+@@ -273,19 +269,6 @@ static void rs_config_restore(struct raid_set *rs, struct rs_layout *l)
+ 	mddev->new_chunk_sectors = l->new_chunk_sectors;
+ }
+ 
+-/* Find any raid_set in active slot for @rs on global list */
+-static struct raid_set *rs_find_active(struct raid_set *rs)
+-{
+-	struct raid_set *r;
+-	struct mapped_device *md = dm_table_get_md(rs->ti->table);
+-
+-	list_for_each_entry(r, &raid_sets, list)
+-		if (r != rs && dm_table_get_md(r->ti->table) == md)
+-			return r;
+-
+-	return NULL;
+-}
+-
+ /* raid10 algorithms (i.e. formats) */
+ #define	ALGORITHM_RAID10_DEFAULT	0
+ #define	ALGORITHM_RAID10_NEAR		1
+@@ -764,7 +747,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 
+ 	mddev_init(&rs->md);
+ 
+-	INIT_LIST_HEAD(&rs->list);
+ 	rs->raid_disks = raid_devs;
+ 	rs->delta_disks = 0;
+ 
+@@ -782,9 +764,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	for (i = 0; i < raid_devs; i++)
+ 		md_rdev_init(&rs->dev[i].rdev);
+ 
+-	/* Add @rs to global list. */
+-	list_add(&rs->list, &raid_sets);
+-
+ 	/*
+ 	 * Remaining items to be initialized by further RAID params:
+ 	 *  rs->md.persistent
+@@ -797,7 +776,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	return rs;
+ }
+ 
+-/* Free all @rs allocations and remove it from global list. */
++/* Free all @rs allocations */
+ static void raid_set_free(struct raid_set *rs)
+ {
+ 	int i;
+@@ -815,8 +794,6 @@ static void raid_set_free(struct raid_set *rs)
+ 			dm_put_device(rs->ti, rs->dev[i].data_dev);
+ 	}
+ 
+-	list_del(&rs->list);
+-
+ 	kfree(rs);
+ }
+ 
+@@ -3149,6 +3126,11 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
+ 		rs_set_new(rs);
+ 	} else if (rs_is_recovering(rs)) {
++		/* Rebuild particular devices */
++		if (test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
++			set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
++			rs_setup_recovery(rs, MaxSector);
++		}
+ 		/* A recovering raid set may be resized */
+ 		; /* skip setup rs */
+ 	} else if (rs_is_reshaping(rs)) {
+@@ -3350,32 +3332,53 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_SUBMITTED;
+ }
+ 
+-/* Return string describing the current sync action of @mddev */
+-static const char *decipher_sync_action(struct mddev *mddev, unsigned long recovery)
++/* Return sync state string for @state */
++enum sync_state { st_frozen, st_reshape, st_resync, st_check, st_repair, st_recover, st_idle };
++static const char *sync_str(enum sync_state state)
++{
++	/* Has to be in above sync_state order! */
++	static const char *sync_strs[] = {
++		"frozen",
++		"reshape",
++		"resync",
++		"check",
++		"repair",
++		"recover",
++		"idle"
++	};
++
++	return __within_range(state, 0, ARRAY_SIZE(sync_strs) - 1) ? sync_strs[state] : "undef";
++};
++
++/* Return enum sync_state for @mddev derived from @recovery flags */
++static const enum sync_state decipher_sync_action(struct mddev *mddev, unsigned long recovery)
+ {
+ 	if (test_bit(MD_RECOVERY_FROZEN, &recovery))
+-		return "frozen";
++		return st_frozen;
+ 
+-	/* The MD sync thread can be done with io but still be running */
++	/* The MD sync thread can be done with io or be interrupted but still be running */
+ 	if (!test_bit(MD_RECOVERY_DONE, &recovery) &&
+ 	    (test_bit(MD_RECOVERY_RUNNING, &recovery) ||
+ 	     (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &recovery)))) {
+ 		if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
+-			return "reshape";
++			return st_reshape;
+ 
+ 		if (test_bit(MD_RECOVERY_SYNC, &recovery)) {
+ 			if (!test_bit(MD_RECOVERY_REQUESTED, &recovery))
+-				return "resync";
+-			else if (test_bit(MD_RECOVERY_CHECK, &recovery))
+-				return "check";
+-			return "repair";
++				return st_resync;
++			if (test_bit(MD_RECOVERY_CHECK, &recovery))
++				return st_check;
++			return st_repair;
+ 		}
+ 
+ 		if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+-			return "recover";
++			return st_recover;
++
++		if (mddev->reshape_position != MaxSector)
++			return st_reshape;
+ 	}
+ 
+-	return "idle";
++	return st_idle;
+ }
+ 
+ /*
+@@ -3409,6 +3412,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 				sector_t resync_max_sectors)
+ {
+ 	sector_t r;
++	enum sync_state state;
+ 	struct mddev *mddev = &rs->md;
+ 
+ 	clear_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+@@ -3419,20 +3423,14 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 		set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+ 	} else {
+-		if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) &&
+-		    !test_bit(MD_RECOVERY_INTR, &recovery) &&
+-		    (test_bit(MD_RECOVERY_NEEDED, &recovery) ||
+-		     test_bit(MD_RECOVERY_RESHAPE, &recovery) ||
+-		     test_bit(MD_RECOVERY_RUNNING, &recovery)))
+-			r = mddev->curr_resync_completed;
+-		else
++		state = decipher_sync_action(mddev, recovery);
++
++		if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery))
+ 			r = mddev->recovery_cp;
++		else
++			r = mddev->curr_resync_completed;
+ 
+-		if (r >= resync_max_sectors &&
+-		    (!test_bit(MD_RECOVERY_REQUESTED, &recovery) ||
+-		     (!test_bit(MD_RECOVERY_FROZEN, &recovery) &&
+-		      !test_bit(MD_RECOVERY_NEEDED, &recovery) &&
+-		      !test_bit(MD_RECOVERY_RUNNING, &recovery)))) {
++		if (state == st_idle && r >= resync_max_sectors) {
+ 			/*
+ 			 * Sync complete.
+ 			 */
+@@ -3440,24 +3438,20 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+ 				set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_RECOVER, &recovery)) {
++		} else if (state == st_recover)
+ 			/*
+ 			 * In case we are recovering, the array is not in sync
+ 			 * and health chars should show the recovering legs.
+ 			 */
+ 			;
+-
+-		} else if (test_bit(MD_RECOVERY_SYNC, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_resync)
+ 			/*
+ 			 * If "resync" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+ 			 * characters shall be 'a'.
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+-
+-		} else if (test_bit(MD_RECOVERY_RESHAPE, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_reshape)
+ 			/*
+ 			 * If "reshape" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+@@ -3465,7 +3459,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_check || state == st_repair)
+ 			/*
+ 			 * If "check" or "repair" is occurring, the raid set has
+ 			 * undergone an initial sync and the health characters
+@@ -3473,12 +3467,12 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else {
++		else {
+ 			struct md_rdev *rdev;
+ 
+ 			/*
+ 			 * We are idle and recovery is needed, prevent 'A' chars race
+-			 * caused by components still set to in-sync by constrcuctor.
++			 * caused by components still set to in-sync by constructor.
+ 			 */
+ 			if (test_bit(MD_RECOVERY_NEEDED, &recovery))
+ 				set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+@@ -3542,7 +3536,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
+ 		progress = rs_get_progress(rs, recovery, resync_max_sectors);
+ 		resync_mismatches = (mddev->last_sync_action && !strcasecmp(mddev->last_sync_action, "check")) ?
+ 				    atomic64_read(&mddev->resync_mismatches) : 0;
+-		sync_action = decipher_sync_action(&rs->md, recovery);
++		sync_action = sync_str(decipher_sync_action(&rs->md, recovery));
+ 
+ 		/* HM FIXME: do we want another state char for raid0? It shows 'D'/'A'/'-' now */
+ 		for (i = 0; i < rs->raid_disks; i++)
+@@ -3892,14 +3886,13 @@ static int rs_start_reshape(struct raid_set *rs)
+ 	struct mddev *mddev = &rs->md;
+ 	struct md_personality *pers = mddev->pers;
+ 
++	/* Don't allow the sync thread to work until the table gets reloaded. */
++	set_bit(MD_RECOVERY_WAIT, &mddev->recovery);
++
+ 	r = rs_setup_reshape(rs);
+ 	if (r)
+ 		return r;
+ 
+-	/* Need to be resumed to be able to start reshape, recovery is frozen until raid_resume() though */
+-	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
+-		mddev_resume(mddev);
+-
+ 	/*
+ 	 * Check any reshape constraints enforced by the personalility
+ 	 *
+@@ -3923,10 +3916,6 @@ static int rs_start_reshape(struct raid_set *rs)
+ 		}
+ 	}
+ 
+-	/* Suspend because a resume will happen in raid_resume() */
+-	set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags);
+-	mddev_suspend(mddev);
+-
+ 	/*
+ 	 * Now reshape got set up, update superblocks to
+ 	 * reflect the fact so that a table reload will
+@@ -3947,29 +3936,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	if (test_and_set_bit(RT_FLAG_RS_PRERESUMED, &rs->runtime_flags))
+ 		return 0;
+ 
+-	if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
+-		struct raid_set *rs_active = rs_find_active(rs);
+-
+-		if (rs_active) {
+-			/*
+-			 * In case no rebuilds have been requested
+-			 * and an active table slot exists, copy
+-			 * current resynchonization completed and
+-			 * reshape position pointers across from
+-			 * suspended raid set in the active slot.
+-			 *
+-			 * This resumes the new mapping at current
+-			 * offsets to continue recover/reshape without
+-			 * necessarily redoing a raid set partially or
+-			 * causing data corruption in case of a reshape.
+-			 */
+-			if (rs_active->md.curr_resync_completed != MaxSector)
+-				mddev->curr_resync_completed = rs_active->md.curr_resync_completed;
+-			if (rs_active->md.reshape_position != MaxSector)
+-				mddev->reshape_position = rs_active->md.reshape_position;
+-		}
+-	}
+-
+ 	/*
+ 	 * The superblocks need to be updated on disk if the
+ 	 * array is new or new devices got added (thus zeroed
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 72142021b5c9..20b0776e39ef 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -188,6 +188,12 @@ struct dm_pool_metadata {
+ 	unsigned long flags;
+ 	sector_t data_block_size;
+ 
++	/*
++	 * We reserve a section of the metadata for commit overhead.
++	 * All reported space does *not* include this.
++	 */
++	dm_block_t metadata_reserve;
++
+ 	/*
+ 	 * Set if a transaction has to be aborted but the attempt to roll back
+ 	 * to the previous (good) transaction failed.  The only pool metadata
+@@ -816,6 +822,20 @@ static int __commit_transaction(struct dm_pool_metadata *pmd)
+ 	return dm_tm_commit(pmd->tm, sblock);
+ }
+ 
++static void __set_metadata_reserve(struct dm_pool_metadata *pmd)
++{
++	int r;
++	dm_block_t total;
++	dm_block_t max_blocks = 4096; /* 16M */
++
++	r = dm_sm_get_nr_blocks(pmd->metadata_sm, &total);
++	if (r) {
++		DMERR("could not get size of metadata device");
++		pmd->metadata_reserve = max_blocks;
++	} else
++		pmd->metadata_reserve = min(max_blocks, div_u64(total, 10));
++}
++
+ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 					       sector_t data_block_size,
+ 					       bool format_device)
+@@ -849,6 +869,8 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 		return ERR_PTR(r);
+ 	}
+ 
++	__set_metadata_reserve(pmd);
++
+ 	return pmd;
+ }
+ 
+@@ -1820,6 +1842,13 @@ int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
+ 	down_read(&pmd->root_lock);
+ 	if (!pmd->fail_io)
+ 		r = dm_sm_get_nr_free(pmd->metadata_sm, result);
++
++	if (!r) {
++		if (*result < pmd->metadata_reserve)
++			*result = 0;
++		else
++			*result -= pmd->metadata_reserve;
++	}
+ 	up_read(&pmd->root_lock);
+ 
+ 	return r;
+@@ -1932,8 +1961,11 @@ int dm_pool_resize_metadata_dev(struct dm_pool_metadata *pmd, dm_block_t new_cou
+ 	int r = -EINVAL;
+ 
+ 	down_write(&pmd->root_lock);
+-	if (!pmd->fail_io)
++	if (!pmd->fail_io) {
+ 		r = __resize_space_map(pmd->metadata_sm, new_count);
++		if (!r)
++			__set_metadata_reserve(pmd);
++	}
+ 	up_write(&pmd->root_lock);
+ 
+ 	return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 1087f6a1ac79..b512efd4050c 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -200,7 +200,13 @@ struct dm_thin_new_mapping;
+ enum pool_mode {
+ 	PM_WRITE,		/* metadata may be changed */
+ 	PM_OUT_OF_DATA_SPACE,	/* metadata may be changed, though data may not be allocated */
++
++	/*
++	 * Like READ_ONLY, except may switch back to WRITE on metadata resize. Reported as READ_ONLY.
++	 */
++	PM_OUT_OF_METADATA_SPACE,
+ 	PM_READ_ONLY,		/* metadata may not be changed */
++
+ 	PM_FAIL,		/* all I/O fails */
+ };
+ 
+@@ -1388,7 +1394,35 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
+ 
+ static void requeue_bios(struct pool *pool);
+ 
+-static void check_for_space(struct pool *pool)
++static bool is_read_only_pool_mode(enum pool_mode mode)
++{
++	return (mode == PM_OUT_OF_METADATA_SPACE || mode == PM_READ_ONLY);
++}
++
++static bool is_read_only(struct pool *pool)
++{
++	return is_read_only_pool_mode(get_pool_mode(pool));
++}
++
++static void check_for_metadata_space(struct pool *pool)
++{
++	int r;
++	const char *ooms_reason = NULL;
++	dm_block_t nr_free;
++
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &nr_free);
++	if (r)
++		ooms_reason = "Could not get free metadata blocks";
++	else if (!nr_free)
++		ooms_reason = "No free metadata blocks";
++
++	if (ooms_reason && !is_read_only(pool)) {
++		DMERR("%s", ooms_reason);
++		set_pool_mode(pool, PM_OUT_OF_METADATA_SPACE);
++	}
++}
++
++static void check_for_data_space(struct pool *pool)
+ {
+ 	int r;
+ 	dm_block_t nr_free;
+@@ -1414,14 +1448,16 @@ static int commit(struct pool *pool)
+ {
+ 	int r;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY)
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE)
+ 		return -EINVAL;
+ 
+ 	r = dm_pool_commit_metadata(pool->pmd);
+ 	if (r)
+ 		metadata_operation_failed(pool, "dm_pool_commit_metadata", r);
+-	else
+-		check_for_space(pool);
++	else {
++		check_for_metadata_space(pool);
++		check_for_data_space(pool);
++	}
+ 
+ 	return r;
+ }
+@@ -1487,6 +1523,19 @@ static int alloc_data_block(struct thin_c *tc, dm_block_t *result)
+ 		return r;
+ 	}
+ 
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks);
++	if (r) {
++		metadata_operation_failed(pool, "dm_pool_get_free_metadata_block_count", r);
++		return r;
++	}
++
++	if (!free_blocks) {
++		/* Let's commit before we use up the metadata reserve. */
++		r = commit(pool);
++		if (r)
++			return r;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1518,6 +1567,7 @@ static blk_status_t should_error_unserviceable_bio(struct pool *pool)
+ 	case PM_OUT_OF_DATA_SPACE:
+ 		return pool->pf.error_if_no_space ? BLK_STS_NOSPC : 0;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+ 	case PM_FAIL:
+ 		return BLK_STS_IOERR;
+@@ -2481,8 +2531,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 		error_retry_list(pool);
+ 		break;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+-		if (old_mode != new_mode)
++		if (!is_read_only_pool_mode(old_mode))
+ 			notify_of_pool_mode_change(pool, "read-only");
+ 		dm_pool_metadata_read_only(pool->pmd);
+ 		pool->process_bio = process_bio_read_only;
+@@ -3420,6 +3471,10 @@ static int maybe_resize_metadata_dev(struct dm_target *ti, bool *need_commit)
+ 		DMINFO("%s: growing the metadata device from %llu to %llu blocks",
+ 		       dm_device_name(pool->pool_md),
+ 		       sb_metadata_dev_size, metadata_dev_size);
++
++		if (get_pool_mode(pool) == PM_OUT_OF_METADATA_SPACE)
++			set_pool_mode(pool, PM_WRITE);
++
+ 		r = dm_pool_resize_metadata_dev(pool->pmd, metadata_dev_size);
+ 		if (r) {
+ 			metadata_operation_failed(pool, "dm_pool_resize_metadata_dev", r);
+@@ -3724,7 +3779,7 @@ static int pool_message(struct dm_target *ti, unsigned argc, char **argv,
+ 	struct pool_c *pt = ti->private;
+ 	struct pool *pool = pt->pool;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY) {
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE) {
+ 		DMERR("%s: unable to service pool target messages in READ_ONLY or FAIL mode",
+ 		      dm_device_name(pool->pool_md));
+ 		return -EOPNOTSUPP;
+@@ -3798,6 +3853,7 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 	dm_block_t nr_blocks_data;
+ 	dm_block_t nr_blocks_metadata;
+ 	dm_block_t held_root;
++	enum pool_mode mode;
+ 	char buf[BDEVNAME_SIZE];
+ 	char buf2[BDEVNAME_SIZE];
+ 	struct pool_c *pt = ti->private;
+@@ -3868,9 +3924,10 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 		else
+ 			DMEMIT("- ");
+ 
+-		if (pool->pf.mode == PM_OUT_OF_DATA_SPACE)
++		mode = get_pool_mode(pool);
++		if (mode == PM_OUT_OF_DATA_SPACE)
+ 			DMEMIT("out_of_data_space ");
+-		else if (pool->pf.mode == PM_READ_ONLY)
++		else if (is_read_only_pool_mode(mode))
+ 			DMEMIT("ro ");
+ 		else
+ 			DMEMIT("rw ");
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 35bd3a62451b..8c93d44a052c 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4531,11 +4531,12 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
+ 		allow_barrier(conf);
+ 	}
+ 
++	raise_barrier(conf, 0);
+ read_more:
+ 	/* Now schedule reads for blocks from sector_nr to last */
+ 	r10_bio = raid10_alloc_init_r10buf(conf);
+ 	r10_bio->state = 0;
+-	raise_barrier(conf, sectors_done != 0);
++	raise_barrier(conf, 1);
+ 	atomic_set(&r10_bio->remaining, 0);
+ 	r10_bio->mddev = mddev;
+ 	r10_bio->sector = sector_nr;
+@@ -4631,6 +4632,8 @@ read_more:
+ 	if (sector_nr <= last)
+ 		goto read_more;
+ 
++	lower_barrier(conf);
++
+ 	/* Now that we have done the whole section we can
+ 	 * update reshape_progress
+ 	 */
+diff --git a/drivers/md/raid5-log.h b/drivers/md/raid5-log.h
+index a001808a2b77..bfb811407061 100644
+--- a/drivers/md/raid5-log.h
++++ b/drivers/md/raid5-log.h
+@@ -46,6 +46,11 @@ extern int ppl_modify_log(struct r5conf *conf, struct md_rdev *rdev, bool add);
+ extern void ppl_quiesce(struct r5conf *conf, int quiesce);
+ extern int ppl_handle_flush_request(struct r5l_log *log, struct bio *bio);
+ 
++static inline bool raid5_has_log(struct r5conf *conf)
++{
++	return test_bit(MD_HAS_JOURNAL, &conf->mddev->flags);
++}
++
+ static inline bool raid5_has_ppl(struct r5conf *conf)
+ {
+ 	return test_bit(MD_HAS_PPL, &conf->mddev->flags);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 49107c52c8e6..9050bfc71309 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -735,7 +735,7 @@ static bool stripe_can_batch(struct stripe_head *sh)
+ {
+ 	struct r5conf *conf = sh->raid_conf;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return false;
+ 	return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+ 		!test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+@@ -7739,7 +7739,7 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
+ 	sector_t newsize;
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	sectors &= ~((sector_t)conf->chunk_sectors - 1);
+ 	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
+@@ -7790,7 +7790,7 @@ static int check_reshape(struct mddev *mddev)
+ {
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	if (mddev->delta_disks == 0 &&
+ 	    mddev->new_layout == mddev->layout &&
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 17f12c18d225..c37deef3bcf1 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -459,7 +459,7 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu
+ 	cqe = &admin_queue->cq.entries[head_masked];
+ 
+ 	/* Go over all the completions */
+-	while ((cqe->acq_common_descriptor.flags &
++	while ((READ_ONCE(cqe->acq_common_descriptor.flags) &
+ 			ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		/* Do not read the rest of the completion entry before the
+ 		 * phase bit was validated
+@@ -637,7 +637,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
+ 
+ 	mmiowb();
+ 	for (i = 0; i < timeout; i++) {
+-		if (read_resp->req_id == mmio_read->seq_num)
++		if (READ_ONCE(read_resp->req_id) == mmio_read->seq_num)
+ 			break;
+ 
+ 		udelay(1);
+@@ -1796,8 +1796,8 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data)
+ 	aenq_common = &aenq_e->aenq_common_desc;
+ 
+ 	/* Go over all the events */
+-	while ((aenq_common->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) ==
+-	       phase) {
++	while ((READ_ONCE(aenq_common->flags) &
++		ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		pr_debug("AENQ! Group[%x] Syndrom[%x] timestamp: [%llus]\n",
+ 			 aenq_common->group, aenq_common->syndrom,
+ 			 (u64)aenq_common->timestamp_low +
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index f2af87d70594..1b01cd2820ba 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -76,7 +76,7 @@ MODULE_DEVICE_TABLE(pci, ena_pci_tbl);
+ 
+ static int ena_rss_init_default(struct ena_adapter *adapter);
+ static void check_for_admin_com_state(struct ena_adapter *adapter);
+-static void ena_destroy_device(struct ena_adapter *adapter);
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
+ static int ena_restore_device(struct ena_adapter *adapter);
+ 
+ static void ena_tx_timeout(struct net_device *dev)
+@@ -461,7 +461,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 		return -ENOMEM;
+ 	}
+ 
+-	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE,
++	dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
+ 			   DMA_FROM_DEVICE);
+ 	if (unlikely(dma_mapping_error(rx_ring->dev, dma))) {
+ 		u64_stats_update_begin(&rx_ring->syncp);
+@@ -478,7 +478,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 	rx_info->page_offset = 0;
+ 	ena_buf = &rx_info->ena_buf;
+ 	ena_buf->paddr = dma;
+-	ena_buf->len = PAGE_SIZE;
++	ena_buf->len = ENA_PAGE_SIZE;
+ 
+ 	return 0;
+ }
+@@ -495,7 +495,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring,
+ 		return;
+ 	}
+ 
+-	dma_unmap_page(rx_ring->dev, ena_buf->paddr, PAGE_SIZE,
++	dma_unmap_page(rx_ring->dev, ena_buf->paddr, ENA_PAGE_SIZE,
+ 		       DMA_FROM_DEVICE);
+ 
+ 	__free_page(page);
+@@ -916,10 +916,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
+ 	do {
+ 		dma_unmap_page(rx_ring->dev,
+ 			       dma_unmap_addr(&rx_info->ena_buf, paddr),
+-			       PAGE_SIZE, DMA_FROM_DEVICE);
++			       ENA_PAGE_SIZE, DMA_FROM_DEVICE);
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
+-				rx_info->page_offset, len, PAGE_SIZE);
++				rx_info->page_offset, len, ENA_PAGE_SIZE);
+ 
+ 		netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
+ 			  "rx skb updated. len %d. data_len %d\n",
+@@ -1900,7 +1900,7 @@ static int ena_close(struct net_device *netdev)
+ 			  "Destroy failure, restarting device\n");
+ 		ena_dump_stats_to_dmesg(adapter);
+ 		/* rtnl lock already obtained in dev_ioctl() layer */
+-		ena_destroy_device(adapter);
++		ena_destroy_device(adapter, false);
+ 		ena_restore_device(adapter);
+ 	}
+ 
+@@ -2549,12 +2549,15 @@ err_disable_msix:
+ 	return rc;
+ }
+ 
+-static void ena_destroy_device(struct ena_adapter *adapter)
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct ena_com_dev *ena_dev = adapter->ena_dev;
+ 	bool dev_up;
+ 
++	if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		return;
++
+ 	netif_carrier_off(netdev);
+ 
+ 	del_timer_sync(&adapter->timer_service);
+@@ -2562,7 +2565,8 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
+ 	adapter->dev_up_before_reset = dev_up;
+ 
+-	ena_com_set_admin_running_state(ena_dev, false);
++	if (!graceful)
++		ena_com_set_admin_running_state(ena_dev, false);
+ 
+ 	if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+ 		ena_down(adapter);
+@@ -2590,6 +2594,7 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
+ 
+ 	clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
++	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ }
+ 
+ static int ena_restore_device(struct ena_adapter *adapter)
+@@ -2634,6 +2639,7 @@ static int ena_restore_device(struct ena_adapter *adapter)
+ 		}
+ 	}
+ 
++	set_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ));
+ 	dev_err(&pdev->dev, "Device reset completed successfully\n");
+ 
+@@ -2664,7 +2670,7 @@ static void ena_fw_reset_device(struct work_struct *work)
+ 		return;
+ 	}
+ 	rtnl_lock();
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, false);
+ 	ena_restore_device(adapter);
+ 	rtnl_unlock();
+ }
+@@ -3408,30 +3414,24 @@ static void ena_remove(struct pci_dev *pdev)
+ 		netdev->rx_cpu_rmap = NULL;
+ 	}
+ #endif /* CONFIG_RFS_ACCEL */
+-
+-	unregister_netdev(netdev);
+ 	del_timer_sync(&adapter->timer_service);
+ 
+ 	cancel_work_sync(&adapter->reset_task);
+ 
+-	/* Reset the device only if the device is running. */
+-	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
+-		ena_com_dev_reset(ena_dev, adapter->reset_reason);
++	unregister_netdev(netdev);
+ 
+-	ena_free_mgmnt_irq(adapter);
++	/* If the device is running then we want to make sure the device will be
++	 * reset to make sure no more events will be issued by the device.
++	 */
++	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 
+-	ena_disable_msix(adapter);
++	rtnl_lock();
++	ena_destroy_device(adapter, true);
++	rtnl_unlock();
+ 
+ 	free_netdev(netdev);
+ 
+-	ena_com_mmio_reg_read_request_destroy(ena_dev);
+-
+-	ena_com_abort_admin_commands(ena_dev);
+-
+-	ena_com_wait_for_abort_completion(ena_dev);
+-
+-	ena_com_admin_destroy(ena_dev);
+-
+ 	ena_com_rss_destroy(ena_dev);
+ 
+ 	ena_com_delete_debug_area(ena_dev);
+@@ -3466,7 +3466,7 @@ static int ena_suspend(struct pci_dev *pdev,  pm_message_t state)
+ 			"ignoring device reset request as the device is being suspended\n");
+ 		clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 	}
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, true);
+ 	rtnl_unlock();
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+index f1972b5ab650..7c7ae56c52cf 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+@@ -355,4 +355,15 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
+ 
+ int ena_get_sset_count(struct net_device *netdev, int sset);
+ 
++/* The ENA buffer length fields is 16 bit long. So when PAGE_SIZE == 64kB the
++ * driver passas 0.
++ * Since the max packet size the ENA handles is ~9kB limit the buffer length to
++ * 16kB.
++ */
++#if PAGE_SIZE > SZ_16K
++#define ENA_PAGE_SIZE SZ_16K
++#else
++#define ENA_PAGE_SIZE PAGE_SIZE
++#endif
++
+ #endif /* !(ENA_H) */
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 515d96e32143..c4d7479938e2 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -648,7 +648,7 @@ static int macb_halt_tx(struct macb *bp)
+ 		if (!(status & MACB_BIT(TGO)))
+ 			return 0;
+ 
+-		usleep_range(10, 250);
++		udelay(250);
+ 	} while (time_before(halt_time, timeout));
+ 
+ 	return -ETIMEDOUT;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index cad52bd331f7..08a750fb60c4 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -486,6 +486,8 @@ struct hnae_ae_ops {
+ 			u8 *auto_neg, u16 *speed, u8 *duplex);
+ 	void (*toggle_ring_irq)(struct hnae_ring *ring, u32 val);
+ 	void (*adjust_link)(struct hnae_handle *handle, int speed, int duplex);
++	bool (*need_adjust_link)(struct hnae_handle *handle,
++				 int speed, int duplex);
+ 	int (*set_loopback)(struct hnae_handle *handle,
+ 			    enum hnae_loop loop_mode, int en);
+ 	void (*get_ring_bdnum_limit)(struct hnae_queue *queue,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+index bd68379d2bea..bf930ab3c2bd 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+@@ -155,6 +155,41 @@ static void hns_ae_put_handle(struct hnae_handle *handle)
+ 		hns_ae_get_ring_pair(handle->qs[i])->used_by_vf = 0;
+ }
+ 
++static int hns_ae_wait_flow_down(struct hnae_handle *handle)
++{
++	struct dsaf_device *dsaf_dev;
++	struct hns_ppe_cb *ppe_cb;
++	struct hnae_vf_cb *vf_cb;
++	int ret;
++	int i;
++
++	for (i = 0; i < handle->q_num; i++) {
++		ret = hns_rcb_wait_tx_ring_clean(handle->qs[i]);
++		if (ret)
++			return ret;
++	}
++
++	ppe_cb = hns_get_ppe_cb(handle);
++	ret = hns_ppe_wait_tx_fifo_clean(ppe_cb);
++	if (ret)
++		return ret;
++
++	dsaf_dev = hns_ae_get_dsaf_dev(handle->dev);
++	if (!dsaf_dev)
++		return -EINVAL;
++	ret = hns_dsaf_wait_pkt_clean(dsaf_dev, handle->dport_id);
++	if (ret)
++		return ret;
++
++	vf_cb = hns_ae_get_vf_cb(handle);
++	ret = hns_mac_wait_fifo_clean(vf_cb->mac_cb);
++	if (ret)
++		return ret;
++
++	mdelay(10);
++	return 0;
++}
++
+ static void hns_ae_ring_enable_all(struct hnae_handle *handle, int val)
+ {
+ 	int q_num = handle->q_num;
+@@ -399,12 +434,41 @@ static int hns_ae_get_mac_info(struct hnae_handle *handle,
+ 	return hns_mac_get_port_info(mac_cb, auto_neg, speed, duplex);
+ }
+ 
++static bool hns_ae_need_adjust_link(struct hnae_handle *handle, int speed,
++				    int duplex)
++{
++	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
++
++	return hns_mac_need_adjust_link(mac_cb, speed, duplex);
++}
++
+ static void hns_ae_adjust_link(struct hnae_handle *handle, int speed,
+ 			       int duplex)
+ {
+ 	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+ 
+-	hns_mac_adjust_link(mac_cb, speed, duplex);
++	switch (mac_cb->dsaf_dev->dsaf_ver) {
++	case AE_VERSION_1:
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		break;
++
++	case AE_VERSION_2:
++		/* chip need to clear all pkt inside */
++		hns_mac_disable(mac_cb, MAC_COMM_MODE_RX);
++		if (hns_ae_wait_flow_down(handle)) {
++			hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++			break;
++		}
++
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++		break;
++
++	default:
++		break;
++	}
++
++	return;
+ }
+ 
+ static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue,
+@@ -902,6 +966,7 @@ static struct hnae_ae_ops hns_dsaf_ops = {
+ 	.get_status = hns_ae_get_link_status,
+ 	.get_info = hns_ae_get_mac_info,
+ 	.adjust_link = hns_ae_adjust_link,
++	.need_adjust_link = hns_ae_need_adjust_link,
+ 	.set_loopback = hns_ae_config_loopback,
+ 	.get_ring_bdnum_limit = hns_ae_get_ring_bdnum_limit,
+ 	.get_pauseparam = hns_ae_get_pauseparam,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+index 74bd260ca02a..8c7bc5cf193c 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+@@ -257,6 +257,16 @@ static void hns_gmac_get_pausefrm_cfg(void *mac_drv, u32 *rx_pause_en,
+ 	*tx_pause_en = dsaf_get_bit(pause_en, GMAC_PAUSE_EN_TX_FDFC_B);
+ }
+ 
++static bool hns_gmac_need_adjust_link(void *mac_drv, enum mac_speed speed,
++				      int duplex)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	struct hns_mac_cb *mac_cb = drv->mac_cb;
++
++	return (mac_cb->speed != speed) ||
++		(mac_cb->half_duplex == duplex);
++}
++
+ static int hns_gmac_adjust_link(void *mac_drv, enum mac_speed speed,
+ 				u32 full_duplex)
+ {
+@@ -309,6 +319,30 @@ static void hns_gmac_set_promisc(void *mac_drv, u8 en)
+ 		hns_gmac_set_uc_match(mac_drv, en);
+ }
+ 
++int hns_gmac_wait_fifo_clean(void *mac_drv)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(drv, GMAC_FIFO_STATE_REG);
++		/* bit5~bit0 is not send complete pkts */
++		if ((val & 0x3f) == 0)
++			break;
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(drv->dev,
++			"hns ge %d fifo was not idle.\n", drv->mac_id);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ static void hns_gmac_init(void *mac_drv)
+ {
+ 	u32 port;
+@@ -690,6 +724,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->mac_disable = hns_gmac_disable;
+ 	mac_drv->mac_free = hns_gmac_free;
+ 	mac_drv->adjust_link = hns_gmac_adjust_link;
++	mac_drv->need_adjust_link = hns_gmac_need_adjust_link;
+ 	mac_drv->set_tx_auto_pause_frames = hns_gmac_set_tx_auto_pause_frames;
+ 	mac_drv->config_max_frame_length = hns_gmac_config_max_frame_length;
+ 	mac_drv->mac_pausefrm_cfg = hns_gmac_pause_frm_cfg;
+@@ -717,6 +752,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->get_strings = hns_gmac_get_strings;
+ 	mac_drv->update_stats = hns_gmac_update_stats;
+ 	mac_drv->set_promiscuous = hns_gmac_set_promisc;
++	mac_drv->wait_fifo_clean = hns_gmac_wait_fifo_clean;
+ 
+ 	return (void *)mac_drv;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index 9dcc5765f11f..5c6b880c3eb7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -114,6 +114,26 @@ int hns_mac_get_port_info(struct hns_mac_cb *mac_cb,
+ 	return 0;
+ }
+ 
++/**
++ * hns_mac_need_adjust_link - check whether MAC speed/duplex registers need updating
++ * @mac_cb: mac device
++ * @speed: phy device speed
++ * @duplex: phy device duplex
++ *
++ */
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
++{
++	struct mac_driver *mac_ctrl_drv;
++
++	mac_ctrl_drv = (struct mac_driver *)(mac_cb->priv.mac);
++
++	if (mac_ctrl_drv->need_adjust_link)
++		return mac_ctrl_drv->need_adjust_link(mac_ctrl_drv,
++			(enum mac_speed)speed, duplex);
++	else
++		return true;
++}
++
+ void hns_mac_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
+ {
+ 	int ret;
+@@ -430,6 +450,16 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
+ 	return 0;
+ }
+ 
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb)
++{
++	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
++
++	if (drv->wait_fifo_clean)
++		return drv->wait_fifo_clean(drv);
++
++	return 0;
++}
++
+ void hns_mac_reset(struct hns_mac_cb *mac_cb)
+ {
+ 	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
+@@ -999,6 +1029,20 @@ static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev)
+ 		return  DSAF_MAX_PORT_NUM;
+ }
+ 
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_enable(mac_cb->priv.mac, mode);
++}
++
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_disable(mac_cb->priv.mac, mode);
++}
++
+ /**
+  * hns_mac_init - init mac
+  * @dsaf_dev: dsa fabric device struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+index bbc0a98e7ca3..fbc75341bef7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+@@ -356,6 +356,9 @@ struct mac_driver {
+ 	/*adjust mac mode of port,include speed and duplex*/
+ 	int (*adjust_link)(void *mac_drv, enum mac_speed speed,
+ 			   u32 full_duplex);
++	/* check whether the link needs adjusting */
++	bool (*need_adjust_link)(void *mac_drv, enum mac_speed speed,
++				 int duplex);
+ 	/* config auto-negotiation mode of port */
+ 	void (*set_an_mode)(void *mac_drv, u8 enable);
+ 	/* config loopback mode */
+@@ -394,6 +397,7 @@ struct mac_driver {
+ 	void (*get_info)(void *mac_drv, struct mac_info *mac_info);
+ 
+ 	void (*update_stats)(void *mac_drv);
++	int (*wait_fifo_clean)(void *mac_drv);
+ 
+ 	enum mac_mode mac_mode;
+ 	u8 mac_id;
+@@ -427,6 +431,7 @@ void *hns_xgmac_config(struct hns_mac_cb *mac_cb,
+ 
+ int hns_mac_init(struct dsaf_device *dsaf_dev);
+ void mac_adjust_link(struct net_device *net_dev);
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex);
+ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb,	u32 *link_status);
+ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb, u32 vmid, char *addr);
+ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
+@@ -463,5 +468,8 @@ int hns_mac_add_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ int hns_mac_rm_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ 		       const unsigned char *addr);
+ int hns_mac_clr_multicast(struct hns_mac_cb *mac_cb, int vfn);
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb);
+ 
+ #endif /* _HNS_DSAF_MAC_H */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+index 0ce07f6eb1e6..0ef6d429308f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+@@ -2733,6 +2733,35 @@ void hns_dsaf_set_promisc_tcam(struct dsaf_device *dsaf_dev,
+ 	soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX;
+ }
+ 
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port)
++{
++	u32 val, val_tmp;
++	int wait_cnt;
++
++	if (port >= DSAF_SERVICE_NW_NUM)
++		return 0;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(dsaf_dev, DSAF_VOQ_IN_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		val_tmp = dsaf_read_dev(dsaf_dev, DSAF_VOQ_OUT_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		if (val == val_tmp)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(dsaf_dev->dev, "hns dsaf clean wait timeout(%u - %u).\n",
++			val, val_tmp);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * dsaf_probe - probe dsaf dev
+  * @pdev: dasf platform device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+index 4507e8222683..0e1cd99831a6 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+@@ -44,6 +44,8 @@ struct hns_mac_cb;
+ #define DSAF_ROCE_CREDIT_CHN	8
+ #define DSAF_ROCE_CHAN_MODE	3
+ 
++#define HNS_MAX_WAIT_CNT 10000
++
+ enum dsaf_roce_port_mode {
+ 	DSAF_ROCE_6PORT_MODE,
+ 	DSAF_ROCE_4PORT_MODE,
+@@ -463,5 +465,6 @@ int hns_dsaf_rm_mac_addr(
+ 
+ int hns_dsaf_clr_mac_mc_port(struct dsaf_device *dsaf_dev,
+ 			     u8 mac_id, u8 port_num);
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port);
+ 
+ #endif /* __HNS_DSAF_MAIN_H__ */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+index 93e71e27401b..a19932aeb9d7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+@@ -274,6 +274,29 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en)
+ 	dsaf_write_dev(ppe_cb, PPE_INTEN_REG, msk_vlue & vld_msk);
+ }
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb)
++{
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(ppe_cb, PPE_CURR_TX_FIFO0_REG) & 0x3ffU;
++		if (!val)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(ppe_cb->dev, "hns ppe tx fifo clean wait timeout, still has %u pkt.\n",
++			val);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * ppe_init_hw - init ppe
+  * @ppe_cb: ppe device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+index 9d8e643e8aa6..f670e63a5a01 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+@@ -100,6 +100,7 @@ struct ppe_common_cb {
+ 
+ };
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb);
+ int hns_ppe_init(struct dsaf_device *dsaf_dev);
+ 
+ void hns_ppe_uninit(struct dsaf_device *dsaf_dev);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+index e2e28532e4dc..1e43d7a3ca86 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+@@ -66,6 +66,29 @@ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag)
+ 			"queue(%d) wait fbd(%d) clean fail!!\n", i, fbd_num);
+ }
+ 
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs)
++{
++	u32 head, tail;
++	int wait_cnt;
++
++	tail = dsaf_read_dev(&qs->tx_ring, RCB_REG_TAIL);
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		head = dsaf_read_dev(&qs->tx_ring, RCB_REG_HEAD);
++		if (tail == head)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(qs->dev->dev, "rcb wait timeout, head not equal to tail.\n");
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  *hns_rcb_reset_ring_hw - ring reset
+  *@q: ring struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+index 602816498c8d..2319b772a271 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+@@ -136,6 +136,7 @@ void hns_rcbv2_int_clr_hw(struct hnae_queue *q, u32 flag);
+ void hns_rcb_init_hw(struct ring_pair_cb *ring);
+ void hns_rcb_reset_ring_hw(struct hnae_queue *q);
+ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag);
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs);
+ u32 hns_rcb_get_rx_coalesced_frames(
+ 	struct rcb_common_cb *rcb_common, u32 port_idx);
+ u32 hns_rcb_get_tx_coalesced_frames(
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+index 886cbbf25761..74d935d82cbc 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+@@ -464,6 +464,7 @@
+ #define RCB_RING_INTMSK_TX_OVERTIME_REG		0x000C4
+ #define RCB_RING_INTSTS_TX_OVERTIME_REG		0x000C8
+ 
++#define GMAC_FIFO_STATE_REG			0x0000UL
+ #define GMAC_DUPLEX_TYPE_REG			0x0008UL
+ #define GMAC_FD_FC_TYPE_REG			0x000CUL
+ #define GMAC_TX_WATER_LINE_REG			0x0010UL
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef994a715f93..b4518f45f048 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -1212,11 +1212,26 @@ static void hns_nic_adjust_link(struct net_device *ndev)
+ 	struct hnae_handle *h = priv->ae_handle;
+ 	int state = 1;
+ 
++	/* If there is no phy, there is no need to adjust the link */
+ 	if (ndev->phydev) {
+-		h->dev->ops->adjust_link(h, ndev->phydev->speed,
+-					 ndev->phydev->duplex);
+-		state = ndev->phydev->link;
++		/* When the phy link is down, do nothing */
++		if (ndev->phydev->link == 0)
++			return;
++
++		if (h->dev->ops->need_adjust_link(h, ndev->phydev->speed,
++						  ndev->phydev->duplex)) {
++			/* The Hi161X chip cannot change gmac speed
++			 * and duplex while traffic is flowing. Delay
++			 * 200ms to make sure there is no more data in
++			 * the chip FIFO. */
++			netif_carrier_off(ndev);
++			msleep(200);
++			h->dev->ops->adjust_link(h, ndev->phydev->speed,
++						 ndev->phydev->duplex);
++			netif_carrier_on(ndev);
++		}
+ 	}
++
+ 	state = state && h->dev->ops->get_status(h);
+ 
+ 	if (state != priv->link) {
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index 2e14a3ae1d8b..c1e947bb852f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -243,7 +243,9 @@ static int hns_nic_set_link_ksettings(struct net_device *net_dev,
+ 	}
+ 
+ 	if (h->dev->ops->adjust_link) {
++		netif_carrier_off(net_dev);
+ 		h->dev->ops->adjust_link(h, (int)speed, cmd->base.duplex);
++		netif_carrier_on(net_dev);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 354c0982847b..372664686309 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -494,9 +494,6 @@ static u32 __emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_s
+ 	case 16384:
+ 		ret |= EMAC_MR1_RFS_16K;
+ 		break;
+-	case 8192:
+-		ret |= EMAC4_MR1_RFS_8K;
+-		break;
+ 	case 4096:
+ 		ret |= EMAC_MR1_RFS_4K;
+ 		break;
+@@ -537,6 +534,9 @@ static u32 __emac4_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_
+ 	case 16384:
+ 		ret |= EMAC4_MR1_RFS_16K;
+ 		break;
++	case 8192:
++		ret |= EMAC4_MR1_RFS_8K;
++		break;
+ 	case 4096:
+ 		ret |= EMAC4_MR1_RFS_4K;
+ 		break;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index ffe7acbeaa22..d834308adf95 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1841,11 +1841,17 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 			adapter->map_id = 1;
+ 			release_rx_pools(adapter);
+ 			release_tx_pools(adapter);
+-			init_rx_pools(netdev);
+-			init_tx_pools(netdev);
++			rc = init_rx_pools(netdev);
++			if (rc)
++				return rc;
++			rc = init_tx_pools(netdev);
++			if (rc)
++				return rc;
+ 
+ 			release_napi(adapter);
+-			init_napi(adapter);
++			rc = init_napi(adapter);
++			if (rc)
++				return rc;
+ 		} else {
+ 			rc = reset_tx_pools(adapter);
+ 			if (rc)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 62e57b05a0ae..56b31e903cc1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -3196,11 +3196,13 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
+ 		return budget;
+ 
+ 	/* all work done, exit the polling mode */
+-	napi_complete_done(napi, work_done);
+-	if (adapter->rx_itr_setting & 1)
+-		ixgbe_set_itr(q_vector);
+-	if (!test_bit(__IXGBE_DOWN, &adapter->state))
+-		ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
++	if (likely(napi_complete_done(napi, work_done))) {
++		if (adapter->rx_itr_setting & 1)
++			ixgbe_set_itr(q_vector);
++		if (!test_bit(__IXGBE_DOWN, &adapter->state))
++			ixgbe_irq_enable_queues(adapter,
++						BIT_ULL(q_vector->v_idx));
++	}
+ 
+ 	return min(work_done, budget - 1);
+ }
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 661fa5a38df2..b8bba64673e5 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4685,6 +4685,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	/* 9704 == 9728 - 20 and rounding to 8 */
+ 	dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE;
++	dev->dev.of_node = port_node;
+ 
+ 	/* Phylink isn't used w/ ACPI as of now */
+ 	if (port_node) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index 922811fb66e7..37ba7c78859d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -396,16 +396,17 @@ void mlx5_remove_dev_by_protocol(struct mlx5_core_dev *dev, int protocol)
+ 		}
+ }
+ 
+-static u16 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
++static u32 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
+ {
+-	return (u16)((dev->pdev->bus->number << 8) |
++	return (u32)((pci_domain_nr(dev->pdev->bus) << 16) |
++		     (dev->pdev->bus->number << 8) |
+ 		     PCI_SLOT(dev->pdev->devfn));
+ }
+ 
+ /* Must be called with intf_mutex held */
+ struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
+ {
+-	u16 pci_id = mlx5_gen_pci_id(dev);
++	u32 pci_id = mlx5_gen_pci_id(dev);
+ 	struct mlx5_core_dev *res = NULL;
+ 	struct mlx5_core_dev *tmp_dev;
+ 	struct mlx5_priv *priv;
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index e5eb361b973c..1d1e66002232 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -730,7 +730,7 @@ struct rtl8169_tc_offsets {
+ };
+ 
+ enum rtl_flag {
+-	RTL_FLAG_TASK_ENABLED,
++	RTL_FLAG_TASK_ENABLED = 0,
+ 	RTL_FLAG_TASK_SLOW_PENDING,
+ 	RTL_FLAG_TASK_RESET_PENDING,
+ 	RTL_FLAG_TASK_PHY_PENDING,
+@@ -5150,13 +5150,13 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
+ 	rtl_init_rxcfg(tp);
++	rtl_set_tx_config_registers(tp);
+ 
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+@@ -7125,7 +7125,8 @@ static int rtl8169_close(struct net_device *dev)
+ 	rtl8169_update_counters(tp);
+ 
+ 	rtl_lock_work(tp);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
+ 
+ 	rtl8169_down(dev);
+ 	rtl_unlock_work(tp);
+@@ -7301,7 +7302,9 @@ static void rtl8169_net_suspend(struct net_device *dev)
+ 
+ 	rtl_lock_work(tp);
+ 	napi_disable(&tp->napi);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
++
+ 	rtl_unlock_work(tp);
+ 
+ 	rtl_pll_power_down(tp);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 5614fd231bbe..6520379b390e 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -807,6 +807,41 @@ static struct sh_eth_cpu_data r8a77980_data = {
+ 	.magic		= 1,
+ 	.cexcr		= 1,
+ };
++
++/* R7S9210 */
++static struct sh_eth_cpu_data r7s9210_data = {
++	.soft_reset	= sh_eth_soft_reset,
++
++	.set_duplex	= sh_eth_set_duplex,
++	.set_rate	= sh_eth_set_rate_rcar,
++
++	.register_type	= SH_ETH_REG_FAST_SH4,
++
++	.edtrr_trns	= EDTRR_TRNS_ETHER,
++	.ecsr_value	= ECSR_ICD,
++	.ecsipr_value	= ECSIPR_ICDIP,
++	.eesipr_value	= EESIPR_TWBIP | EESIPR_TABTIP | EESIPR_RABTIP |
++			  EESIPR_RFCOFIP | EESIPR_ECIIP | EESIPR_FTCIP |
++			  EESIPR_TDEIP | EESIPR_TFUFIP | EESIPR_FRIP |
++			  EESIPR_RDEIP | EESIPR_RFOFIP | EESIPR_CNDIP |
++			  EESIPR_DLCIP | EESIPR_CDIP | EESIPR_TROIP |
++			  EESIPR_RMAFIP | EESIPR_RRFIP | EESIPR_RTLFIP |
++			  EESIPR_RTSFIP | EESIPR_PREIP | EESIPR_CERFIP,
++
++	.tx_check	= EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_TRO,
++	.eesr_err_check	= EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
++			  EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE,
++
++	.fdr_value	= 0x0000070f,
++
++	.apr		= 1,
++	.mpr		= 1,
++	.tpauser	= 1,
++	.hw_swap	= 1,
++	.rpadir		= 1,
++	.no_ade		= 1,
++	.xdfar_rw	= 1,
++};
+ #endif /* CONFIG_OF */
+ 
+ static void sh_eth_set_rate_sh7724(struct net_device *ndev)
+@@ -3131,6 +3166,7 @@ static const struct of_device_id sh_eth_match_table[] = {
+ 	{ .compatible = "renesas,ether-r8a7794", .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,gether-r8a77980", .data = &r8a77980_data },
+ 	{ .compatible = "renesas,ether-r7s72100", .data = &r7s72100_data },
++	{ .compatible = "renesas,ether-r7s9210", .data = &r7s9210_data },
+ 	{ .compatible = "renesas,rcar-gen1-ether", .data = &rcar_gen1_data },
+ 	{ .compatible = "renesas,rcar-gen2-ether", .data = &rcar_gen2_data },
+ 	{ }
+diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
+index 6b0e1ec346cb..d46d57b989ae 100644
+--- a/drivers/net/wireless/broadcom/b43/dma.c
++++ b/drivers/net/wireless/broadcom/b43/dma.c
+@@ -1518,13 +1518,15 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
+ 			}
+ 		} else {
+ 			/* More than a single header/data pair were missed.
+-			 * Report this error, and reset the controller to
++			 * Report this error. If running with open-source
++			 * firmware, then reset the controller to
+ 			 * revive operation.
+ 			 */
+ 			b43dbg(dev->wl,
+ 			       "Out of order TX status report on DMA ring %d. Expected %d, but got %d\n",
+ 			       ring->index, firstused, slot);
+-			b43_controller_restart(dev, "Out of order TX");
++			if (dev->fw.opensource)
++				b43_controller_restart(dev, "Out of order TX");
+ 			return;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index b815ba38dbdb..88121548eb9f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -877,15 +877,12 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ?
+ 			     iwl_ext_nvm_channels : iwl_nvm_channels;
+ 	struct ieee80211_regdomain *regd, *copy_rd;
+-	int size_of_regd, regd_to_copy, wmms_to_copy;
+-	int size_of_wmms = 0;
++	int size_of_regd, regd_to_copy;
+ 	struct ieee80211_reg_rule *rule;
+-	struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm;
+ 	struct regdb_ptrs *regdb_ptrs;
+ 	enum nl80211_band band;
+ 	int center_freq, prev_center_freq = 0;
+-	int valid_rules = 0, n_wmms = 0;
+-	int i;
++	int valid_rules = 0;
+ 	bool new_rule;
+ 	int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ?
+ 			 IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS;
+@@ -904,11 +901,7 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		sizeof(struct ieee80211_regdomain) +
+ 		num_of_ch * sizeof(struct ieee80211_reg_rule);
+ 
+-	if (geo_info & GEO_WMM_ETSI_5GHZ_INFO)
+-		size_of_wmms =
+-			num_of_ch * sizeof(struct ieee80211_wmm_rule);
+-
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -922,8 +915,6 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd->alpha2[0] = fw_mcc >> 8;
+ 	regd->alpha2[1] = fw_mcc & 0xff;
+ 
+-	wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+ 	for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) {
+ 		ch_flags = (u16)__le32_to_cpup(channels + ch_idx);
+ 		band = (ch_idx < NUM_2GHZ_CHANNELS) ?
+@@ -977,26 +968,10 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		    band == NL80211_BAND_2GHZ)
+ 			continue;
+ 
+-		if (!reg_query_regdb_wmm(regd->alpha2, center_freq,
+-					 &regdb_ptrs[n_wmms].token, wmm_rule)) {
+-			/* Add only new rules */
+-			for (i = 0; i < n_wmms; i++) {
+-				if (regdb_ptrs[i].token ==
+-				    regdb_ptrs[n_wmms].token) {
+-					rule->wmm_rule = regdb_ptrs[i].rule;
+-					break;
+-				}
+-			}
+-			if (i == n_wmms) {
+-				rule->wmm_rule = wmm_rule;
+-				regdb_ptrs[n_wmms++].rule = wmm_rule;
+-				wmm_rule++;
+-			}
+-		}
++		reg_query_regdb_wmm(regd->alpha2, center_freq, rule);
+ 	}
+ 
+ 	regd->n_reg_rules = valid_rules;
+-	regd->n_wmm_rules = n_wmms;
+ 
+ 	/*
+ 	 * Narrow down regdom for unused regulatory rules to prevent hole
+@@ -1005,28 +980,13 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd_to_copy = sizeof(struct ieee80211_regdomain) +
+ 		valid_rules * sizeof(struct ieee80211_reg_rule);
+ 
+-	wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms;
+-
+-	copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL);
++	copy_rd = kzalloc(regd_to_copy, GFP_KERNEL);
+ 	if (!copy_rd) {
+ 		copy_rd = ERR_PTR(-ENOMEM);
+ 		goto out;
+ 	}
+ 
+ 	memcpy(copy_rd, regd, regd_to_copy);
+-	memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd,
+-	       wmms_to_copy);
+-
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+-	for (i = 0; i < regd->n_reg_rules; i++) {
+-		if (!regd->reg_rules[i].wmm_rule)
+-			continue;
+-
+-		copy_rd->reg_rules[i].wmm_rule = d_wmm +
+-			(regd->reg_rules[i].wmm_rule - s_wmm);
+-	}
+ 
+ out:
+ 	kfree(regdb_ptrs);
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 18e819d964f1..80e2c8595c7c 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -33,6 +33,7 @@
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ #include <linux/rhashtable.h>
++#include <linux/nospec.h>
+ #include "mac80211_hwsim.h"
+ 
+ #define WARN_QUEUE 100
+@@ -2699,9 +2700,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 				IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 				IEEE80211_VHT_CAP_SHORT_GI_160 |
+ 				IEEE80211_VHT_CAP_TXSTBC |
+-				IEEE80211_VHT_CAP_RXSTBC_1 |
+-				IEEE80211_VHT_CAP_RXSTBC_2 |
+-				IEEE80211_VHT_CAP_RXSTBC_3 |
+ 				IEEE80211_VHT_CAP_RXSTBC_4 |
+ 				IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
+ 			sband->vht_cap.vht_mcs.rx_mcs_map =
+@@ -3194,6 +3192,11 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 	if (info->attrs[HWSIM_ATTR_CHANNELS])
+ 		param.channels = nla_get_u32(info->attrs[HWSIM_ATTR_CHANNELS]);
+ 
++	if (param.channels < 1) {
++		GENL_SET_ERR_MSG(info, "must have at least one channel");
++		return -EINVAL;
++	}
++
+ 	if (param.channels > CFG80211_MAX_NUM_DIFFERENT_CHANNELS) {
+ 		GENL_SET_ERR_MSG(info, "too many channels specified");
+ 		return -EINVAL;
+@@ -3227,6 +3230,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 			kfree(hwname);
+ 			return -EINVAL;
+ 		}
++
++		idx = array_index_nospec(idx,
++					 ARRAY_SIZE(hwsim_world_regdom_custom));
+ 		param.regd = hwsim_world_regdom_custom[idx];
+ 	}
+ 
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 52e0c5d579a7..1d909e5ba657 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -65,6 +65,7 @@ struct nvmet_rdma_rsp {
+ 
+ 	struct nvmet_req	req;
+ 
++	bool			allocated;
+ 	u8			n_rdma;
+ 	u32			flags;
+ 	u32			invalidate_rkey;
+@@ -166,11 +167,19 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&queue->rsps_lock, flags);
+-	rsp = list_first_entry(&queue->free_rsps,
++	rsp = list_first_entry_or_null(&queue->free_rsps,
+ 				struct nvmet_rdma_rsp, free_list);
+-	list_del(&rsp->free_list);
++	if (likely(rsp))
++		list_del(&rsp->free_list);
+ 	spin_unlock_irqrestore(&queue->rsps_lock, flags);
+ 
++	if (unlikely(!rsp)) {
++		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
++		if (unlikely(!rsp))
++			return NULL;
++		rsp->allocated = true;
++	}
++
+ 	return rsp;
+ }
+ 
+@@ -179,6 +188,11 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
+ {
+ 	unsigned long flags;
+ 
++	if (rsp->allocated) {
++		kfree(rsp);
++		return;
++	}
++
+ 	spin_lock_irqsave(&rsp->queue->rsps_lock, flags);
+ 	list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);
+ 	spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);
+@@ -702,6 +716,15 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ 	cmd->queue = queue;
+ 	rsp = nvmet_rdma_get_rsp(queue);
++	if (unlikely(!rsp)) {
++		/*
++		 * We get here only under memory pressure;
++		 * silently drop and let the host retry,
++		 * since we cannot even fail the command.
++		 */
++		nvmet_rdma_post_recv(queue->dev, cmd);
++		return;
++	}
+ 	rsp->queue = queue;
+ 	rsp->cmd = cmd;
+ 	rsp->flags = 0;
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index ffdb78421a25..b0f0d4e86f67 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -25,6 +25,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/netdev_features.h>
+ #include <linux/skbuff.h>
++#include <linux/vmalloc.h>
+ 
+ #include <net/iucv/af_iucv.h>
+ #include <net/dsfield.h>
+@@ -4738,7 +4739,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 
+ 	priv.buffer_len = oat_data.buffer_len;
+ 	priv.response_len = 0;
+-	priv.buffer =  kzalloc(oat_data.buffer_len, GFP_KERNEL);
++	priv.buffer = vzalloc(oat_data.buffer_len);
+ 	if (!priv.buffer) {
+ 		rc = -ENOMEM;
+ 		goto out;
+@@ -4779,7 +4780,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 			rc = -EFAULT;
+ 
+ out_free:
+-	kfree(priv.buffer);
++	vfree(priv.buffer);
+ out:
+ 	return rc;
+ }
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 2487f0aeb165..3bef60ae0480 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -425,7 +425,7 @@ static int qeth_l2_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 5905dc63e256..3ea840542767 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1390,7 +1390,7 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 29bf1e60f542..39eb415987fc 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -1346,7 +1346,7 @@ struct fib {
+ struct aac_hba_map_info {
+ 	__le32	rmw_nexus;		/* nexus for native HBA devices */
+ 	u8		devtype;	/* device type */
+-	u8		reset_state;	/* 0 - no reset, 1..x - */
++	s8		reset_state;	/* 0 - no reset, 1..x - */
+ 					/* after xth TM LUN reset */
+ 	u16		qd_limit;
+ 	u32		scan_counter;
+diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c
+index a10cf25ee7f9..e4baf04ec5ea 100644
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -1512,6 +1512,46 @@ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16)
+ 	return caps32;
+ }
+ 
++/**
++ *	fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits
++ *	@caps32: a 32-bit Port Capabilities value
++ *
++ *	Returns the equivalent 16-bit Port Capabilities value.  Note that
++ *	not all 32-bit Port Capabilities can be represented in the 16-bit
++ *	Port Capabilities and some fields/values may not make it.
++ */
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32)
++{
++	fw_port_cap16_t caps16 = 0;
++
++	#define CAP32_TO_CAP16(__cap) \
++		do { \
++			if (caps32 & FW_PORT_CAP32_##__cap) \
++				caps16 |= FW_PORT_CAP_##__cap; \
++		} while (0)
++
++	CAP32_TO_CAP16(SPEED_100M);
++	CAP32_TO_CAP16(SPEED_1G);
++	CAP32_TO_CAP16(SPEED_10G);
++	CAP32_TO_CAP16(SPEED_25G);
++	CAP32_TO_CAP16(SPEED_40G);
++	CAP32_TO_CAP16(SPEED_100G);
++	CAP32_TO_CAP16(FC_RX);
++	CAP32_TO_CAP16(FC_TX);
++	CAP32_TO_CAP16(802_3_PAUSE);
++	CAP32_TO_CAP16(802_3_ASM_DIR);
++	CAP32_TO_CAP16(ANEG);
++	CAP32_TO_CAP16(FORCE_PAUSE);
++	CAP32_TO_CAP16(MDIAUTO);
++	CAP32_TO_CAP16(MDISTRAIGHT);
++	CAP32_TO_CAP16(FEC_RS);
++	CAP32_TO_CAP16(FEC_BASER_RS);
++
++	#undef CAP32_TO_CAP16
++
++	return caps16;
++}
++
+ /**
+  *      lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities
+  *      @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value
+@@ -1670,7 +1710,7 @@ csio_enable_ports(struct csio_hw *hw)
+ 			val = 1;
+ 
+ 			csio_mb_params(hw, mbp, CSIO_MB_DEFAULT_TMO,
+-				       hw->pfn, 0, 1, &param, &val, false,
++				       hw->pfn, 0, 1, &param, &val, true,
+ 				       NULL);
+ 
+ 			if (csio_mb_issue(hw, mbp)) {
+@@ -1680,16 +1720,9 @@ csio_enable_ports(struct csio_hw *hw)
+ 				return -EINVAL;
+ 			}
+ 
+-			csio_mb_process_read_params_rsp(hw, mbp, &retval, 1,
+-							&val);
+-			if (retval != FW_SUCCESS) {
+-				csio_err(hw, "FW_PARAMS_CMD(r) port:%d failed: 0x%x\n",
+-					 portid, retval);
+-				mempool_free(mbp, hw->mb_mempool);
+-				return -EINVAL;
+-			}
+-
+-			fw_caps = val;
++			csio_mb_process_read_params_rsp(hw, mbp, &retval,
++							0, NULL);
++			fw_caps = retval ? FW_CAPS16 : FW_CAPS32;
+ 		}
+ 
+ 		/* Read PORT information */
+@@ -2275,8 +2308,8 @@ bye:
+ }
+ 
+ /*
+- * Returns -EINVAL if attempts to flash the firmware failed
+- * else returns 0,
++ * Returns -EINVAL if attempts to flash the firmware failed,
++ * -ENOMEM if memory allocation failed else returns 0,
+  * if flashing was not attempted because the card had the
+  * latest firmware ECANCELED is returned
+  */
+@@ -2304,6 +2337,13 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		return -EINVAL;
+ 	}
+ 
++	/* allocate memory to read the header of the firmware on the
++	 * card
++	 */
++	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
++	if (!card_fw)
++		return -ENOMEM;
++
+ 	if (csio_is_t5(pci_dev->device & CSIO_HW_CHIP_MASK))
+ 		fw_bin_file = FW_FNAME_T5;
+ 	else
+@@ -2317,11 +2357,6 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		fw_size = fw->size;
+ 	}
+ 
+-	/* allocate memory to read the header of the firmware on the
+-	 * card
+-	 */
+-	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
+-
+ 	/* upgrade FW logic */
+ 	ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw,
+ 			 hw->fw_state, reset);
+diff --git a/drivers/scsi/csiostor/csio_hw.h b/drivers/scsi/csiostor/csio_hw.h
+index 9e73ef771eb7..e351af6e7c81 100644
+--- a/drivers/scsi/csiostor/csio_hw.h
++++ b/drivers/scsi/csiostor/csio_hw.h
+@@ -639,6 +639,7 @@ int csio_handle_intr_status(struct csio_hw *, unsigned int,
+ 
+ fw_port_cap32_t fwcap_to_fwspeed(fw_port_cap32_t acaps);
+ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16);
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32);
+ fw_port_cap32_t lstatus_to_fwcap(u32 lstatus);
+ 
+ int csio_hw_start(struct csio_hw *);
+diff --git a/drivers/scsi/csiostor/csio_mb.c b/drivers/scsi/csiostor/csio_mb.c
+index c026417269c3..6f13673d6aa0 100644
+--- a/drivers/scsi/csiostor/csio_mb.c
++++ b/drivers/scsi/csiostor/csio_mb.c
+@@ -368,7 +368,7 @@ csio_mb_port(struct csio_hw *hw, struct csio_mb *mbp, uint32_t tmo,
+ 			FW_CMD_LEN16_V(sizeof(*cmdp) / 16));
+ 
+ 	if (fw_caps == FW_CAPS16)
+-		cmdp->u.l1cfg.rcap = cpu_to_be32(fc);
++		cmdp->u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(fc));
+ 	else
+ 		cmdp->u.l1cfg32.rcap32 = cpu_to_be32(fc);
+ }
+@@ -395,8 +395,8 @@ csio_mb_process_read_port_rsp(struct csio_hw *hw, struct csio_mb *mbp,
+ 			*pcaps = fwcaps16_to_caps32(ntohs(rsp->u.info.pcap));
+ 			*acaps = fwcaps16_to_caps32(ntohs(rsp->u.info.acap));
+ 		} else {
+-			*pcaps = ntohs(rsp->u.info32.pcaps32);
+-			*acaps = ntohs(rsp->u.info32.acaps32);
++			*pcaps = be32_to_cpu(rsp->u.info32.pcaps32);
++			*acaps = be32_to_cpu(rsp->u.info32.acaps32);
+ 		}
+ 	}
+ }
+diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
+index fc3babc15fa3..a6f96b35e971 100644
+--- a/drivers/scsi/qedi/qedi.h
++++ b/drivers/scsi/qedi/qedi.h
+@@ -77,6 +77,11 @@ enum qedi_nvm_tgts {
+ 	QEDI_NVM_TGT_SEC,
+ };
+ 
++struct qedi_nvm_iscsi_image {
++	struct nvm_iscsi_cfg iscsi_cfg;
++	u32 crc;
++};
++
+ struct qedi_uio_ctrl {
+ 	/* meta data */
+ 	u32 uio_hsi_version;
+@@ -294,7 +299,7 @@ struct qedi_ctx {
+ 	void *bdq_pbl_list;
+ 	dma_addr_t bdq_pbl_list_dma;
+ 	u8 bdq_pbl_list_num_entries;
+-	struct nvm_iscsi_cfg *iscsi_cfg;
++	struct qedi_nvm_iscsi_image *iscsi_image;
+ 	dma_addr_t nvm_buf_dma;
+ 	void __iomem *bdq_primary_prod;
+ 	void __iomem *bdq_secondary_prod;
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index cff83b9457f7..3e18a68c2b03 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1346,23 +1346,26 @@ exit_setup_int:
+ 
+ static void qedi_free_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	if (qedi->iscsi_cfg)
++	if (qedi->iscsi_image)
+ 		dma_free_coherent(&qedi->pdev->dev,
+-				  sizeof(struct nvm_iscsi_cfg),
+-				  qedi->iscsi_cfg, qedi->nvm_buf_dma);
++				  sizeof(struct qedi_nvm_iscsi_image),
++				  qedi->iscsi_image, qedi->nvm_buf_dma);
+ }
+ 
+ static int qedi_alloc_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	qedi->iscsi_cfg = dma_zalloc_coherent(&qedi->pdev->dev,
+-					     sizeof(struct nvm_iscsi_cfg),
+-					     &qedi->nvm_buf_dma, GFP_KERNEL);
+-	if (!qedi->iscsi_cfg) {
++	struct qedi_nvm_iscsi_image nvm_image;
++
++	qedi->iscsi_image = dma_zalloc_coherent(&qedi->pdev->dev,
++						sizeof(nvm_image),
++						&qedi->nvm_buf_dma,
++						GFP_KERNEL);
++	if (!qedi->iscsi_image) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "Could not allocate NVM BUF.\n");
+ 		return -ENOMEM;
+ 	}
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+-		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_cfg,
++		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_image,
+ 		  qedi->nvm_buf_dma);
+ 
+ 	return 0;
+@@ -1905,7 +1908,7 @@ qedi_get_nvram_block(struct qedi_ctx *qedi)
+ 	struct nvm_iscsi_block *block;
+ 
+ 	pf = qedi->dev_info.common.abs_pf_id;
+-	block = &qedi->iscsi_cfg->block[0];
++	block = &qedi->iscsi_image->iscsi_cfg.block[0];
+ 	for (i = 0; i < NUM_OF_ISCSI_PF_SUPPORTED; i++, block++) {
+ 		flags = ((block->id) & NVM_ISCSI_CFG_BLK_CTRL_FLAG_MASK) >>
+ 			NVM_ISCSI_CFG_BLK_CTRL_FLAG_OFFSET;
+@@ -2194,15 +2197,14 @@ static void qedi_boot_release(void *data)
+ static int qedi_get_boot_info(struct qedi_ctx *qedi)
+ {
+ 	int ret = 1;
+-	u16 len;
+-
+-	len = sizeof(struct nvm_iscsi_cfg);
++	struct qedi_nvm_iscsi_image nvm_image;
+ 
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ 		  "Get NVM iSCSI CFG image\n");
+ 	ret = qedi_ops->common->nvm_get_image(qedi->cdev,
+ 					      QED_NVM_IMAGE_ISCSI_CFG,
+-					      (char *)qedi->iscsi_cfg, len);
++					      (char *)qedi->iscsi_image,
++					      sizeof(nvm_image));
+ 	if (ret)
+ 		QEDI_ERR(&qedi->dbg_ctx,
+ 			 "Could not get NVM image. ret = %d\n", ret);
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 8e223799347a..a4ecc9d77624 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4211,22 +4211,15 @@ int iscsit_close_connection(
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-	conn->conn_ops = NULL;
+-
+ 	if (conn->sock)
+ 		sock_release(conn->sock);
+ 
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-
+ 	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+ 	conn->conn_state = TARG_CONN_STATE_FREE;
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ 
+ 	spin_lock_bh(&sess->conn_lock);
+ 	atomic_dec(&sess->nconn);
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 68b3eb00a9d0..2fda5b0664fd 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -67,45 +67,10 @@ static struct iscsi_login *iscsi_login_init_conn(struct iscsi_conn *conn)
+ 		goto out_req_buf;
+ 	}
+ 
+-	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
+-	if (!conn->conn_ops) {
+-		pr_err("Unable to allocate memory for"
+-			" struct iscsi_conn_ops.\n");
+-		goto out_rsp_buf;
+-	}
+-
+-	init_waitqueue_head(&conn->queues_wq);
+-	INIT_LIST_HEAD(&conn->conn_list);
+-	INIT_LIST_HEAD(&conn->conn_cmd_list);
+-	INIT_LIST_HEAD(&conn->immed_queue_list);
+-	INIT_LIST_HEAD(&conn->response_queue_list);
+-	init_completion(&conn->conn_post_wait_comp);
+-	init_completion(&conn->conn_wait_comp);
+-	init_completion(&conn->conn_wait_rcfr_comp);
+-	init_completion(&conn->conn_waiting_on_uc_comp);
+-	init_completion(&conn->conn_logout_comp);
+-	init_completion(&conn->rx_half_close_comp);
+-	init_completion(&conn->tx_half_close_comp);
+-	init_completion(&conn->rx_login_comp);
+-	spin_lock_init(&conn->cmd_lock);
+-	spin_lock_init(&conn->conn_usage_lock);
+-	spin_lock_init(&conn->immed_queue_lock);
+-	spin_lock_init(&conn->nopin_timer_lock);
+-	spin_lock_init(&conn->response_queue_lock);
+-	spin_lock_init(&conn->state_lock);
+-
+-	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
+-		pr_err("Unable to allocate conn->conn_cpumask\n");
+-		goto out_conn_ops;
+-	}
+ 	conn->conn_login = login;
+ 
+ 	return login;
+ 
+-out_conn_ops:
+-	kfree(conn->conn_ops);
+-out_rsp_buf:
+-	kfree(login->rsp_buf);
+ out_req_buf:
+ 	kfree(login->req_buf);
+ out_login:
+@@ -310,11 +275,9 @@ static int iscsi_login_zero_tsih_s1(
+ 		return -ENOMEM;
+ 	}
+ 
+-	ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+-	if (unlikely(ret)) {
+-		kfree(sess);
+-		return ret;
+-	}
++	if (iscsi_login_set_conn_values(sess, conn, pdu->cid))
++		goto free_sess;
++
+ 	sess->init_task_tag	= pdu->itt;
+ 	memcpy(&sess->isid, pdu->isid, 6);
+ 	sess->exp_cmd_sn	= be32_to_cpu(pdu->cmdsn);
+@@ -1157,6 +1120,75 @@ iscsit_conn_set_transport(struct iscsi_conn *conn, struct iscsit_transport *t)
+ 	return 0;
+ }
+ 
++static struct iscsi_conn *iscsit_alloc_conn(struct iscsi_np *np)
++{
++	struct iscsi_conn *conn;
++
++	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	if (!conn) {
++		pr_err("Could not allocate memory for new connection\n");
++		return NULL;
++	}
++	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
++	conn->conn_state = TARG_CONN_STATE_FREE;
++
++	init_waitqueue_head(&conn->queues_wq);
++	INIT_LIST_HEAD(&conn->conn_list);
++	INIT_LIST_HEAD(&conn->conn_cmd_list);
++	INIT_LIST_HEAD(&conn->immed_queue_list);
++	INIT_LIST_HEAD(&conn->response_queue_list);
++	init_completion(&conn->conn_post_wait_comp);
++	init_completion(&conn->conn_wait_comp);
++	init_completion(&conn->conn_wait_rcfr_comp);
++	init_completion(&conn->conn_waiting_on_uc_comp);
++	init_completion(&conn->conn_logout_comp);
++	init_completion(&conn->rx_half_close_comp);
++	init_completion(&conn->tx_half_close_comp);
++	init_completion(&conn->rx_login_comp);
++	spin_lock_init(&conn->cmd_lock);
++	spin_lock_init(&conn->conn_usage_lock);
++	spin_lock_init(&conn->immed_queue_lock);
++	spin_lock_init(&conn->nopin_timer_lock);
++	spin_lock_init(&conn->response_queue_lock);
++	spin_lock_init(&conn->state_lock);
++
++	timer_setup(&conn->nopin_response_timer,
++		    iscsit_handle_nopin_response_timeout, 0);
++	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
++
++	if (iscsit_conn_set_transport(conn, np->np_transport) < 0)
++		goto free_conn;
++
++	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
++	if (!conn->conn_ops) {
++		pr_err("Unable to allocate memory for struct iscsi_conn_ops.\n");
++		goto put_transport;
++	}
++
++	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
++		pr_err("Unable to allocate conn->conn_cpumask\n");
++		goto free_mask;
++	}
++
++	return conn;
++
++free_mask:
++	free_cpumask_var(conn->conn_cpumask);
++put_transport:
++	iscsit_put_transport(conn->conn_transport);
++free_conn:
++	kfree(conn);
++	return NULL;
++}
++
++void iscsit_free_conn(struct iscsi_conn *conn)
++{
++	free_cpumask_var(conn->conn_cpumask);
++	kfree(conn->conn_ops);
++	iscsit_put_transport(conn->conn_transport);
++	kfree(conn);
++}
++
+ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 		struct iscsi_np *np, bool zero_tsih, bool new_sess)
+ {
+@@ -1210,10 +1242,6 @@ old_sess_out:
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-
+ 	if (conn->param_list) {
+ 		iscsi_release_param_list(conn->param_list);
+ 		conn->param_list = NULL;
+@@ -1231,8 +1259,7 @@ old_sess_out:
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ }
+ 
+ static int __iscsi_target_login_thread(struct iscsi_np *np)
+@@ -1262,31 +1289,16 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 	}
+ 	spin_unlock_bh(&np->np_thread_lock);
+ 
+-	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	conn = iscsit_alloc_conn(np);
+ 	if (!conn) {
+-		pr_err("Could not allocate memory for"
+-			" new connection\n");
+ 		/* Get another socket */
+ 		return 1;
+ 	}
+-	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+-	conn->conn_state = TARG_CONN_STATE_FREE;
+-
+-	timer_setup(&conn->nopin_response_timer,
+-		    iscsit_handle_nopin_response_timeout, 0);
+-	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
+-
+-	if (iscsit_conn_set_transport(conn, np->np_transport) < 0) {
+-		kfree(conn);
+-		return 1;
+-	}
+ 
+ 	rc = np->np_transport->iscsit_accept_np(np, conn);
+ 	if (rc == -ENOSYS) {
+ 		complete(&np->np_restart_comp);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
++		iscsit_free_conn(conn);
+ 		goto exit;
+ 	} else if (rc < 0) {
+ 		spin_lock_bh(&np->np_thread_lock);
+@@ -1294,17 +1306,13 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 			np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+ 			spin_unlock_bh(&np->np_thread_lock);
+ 			complete(&np->np_restart_comp);
+-			iscsit_put_transport(conn->conn_transport);
+-			kfree(conn);
+-			conn = NULL;
++			iscsit_free_conn(conn);
+ 			/* Get another socket */
+ 			return 1;
+ 		}
+ 		spin_unlock_bh(&np->np_thread_lock);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
+-		goto out;
++		iscsit_free_conn(conn);
++		return 1;
+ 	}
+ 	/*
+ 	 * Perform the remaining iSCSI connection initialization items..
+@@ -1454,7 +1462,6 @@ old_sess_out:
+ 		tpg_np = NULL;
+ 	}
+ 
+-out:
+ 	return 1;
+ 
+ exit:
+diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
+index 74ac3abc44a0..3b8e3639ff5d 100644
+--- a/drivers/target/iscsi/iscsi_target_login.h
++++ b/drivers/target/iscsi/iscsi_target_login.h
+@@ -19,7 +19,7 @@ extern int iscsi_target_setup_login_socket(struct iscsi_np *,
+ extern int iscsit_accept_np(struct iscsi_np *, struct iscsi_conn *);
+ extern int iscsit_get_login_rx(struct iscsi_conn *, struct iscsi_login *);
+ extern int iscsit_put_login_tx(struct iscsi_conn *, struct iscsi_login *, u32);
+-extern void iscsit_free_conn(struct iscsi_np *, struct iscsi_conn *);
++extern void iscsit_free_conn(struct iscsi_conn *);
+ extern int iscsit_start_kthreads(struct iscsi_conn *);
+ extern void iscsi_post_login_handler(struct iscsi_np *, struct iscsi_conn *, u8);
+ extern void iscsi_target_login_sess_out(struct iscsi_conn *, struct iscsi_np *,
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index 53a48f561458..587c5037ff07 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -1063,12 +1063,15 @@ static const struct usb_gadget_ops fotg210_gadget_ops = {
+ static int fotg210_udc_remove(struct platform_device *pdev)
+ {
+ 	struct fotg210_udc *fotg210 = platform_get_drvdata(pdev);
++	int i;
+ 
+ 	usb_del_gadget_udc(&fotg210->gadget);
+ 	iounmap(fotg210->reg);
+ 	free_irq(platform_get_irq(pdev, 0), fotg210);
+ 
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
+ 	return 0;
+@@ -1099,7 +1102,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	/* initialize udc */
+ 	fotg210 = kzalloc(sizeof(struct fotg210_udc), GFP_KERNEL);
+ 	if (fotg210 == NULL)
+-		goto err_alloc;
++		goto err;
+ 
+ 	for (i = 0; i < FOTG210_MAX_NUM_EP; i++) {
+ 		_ep[i] = kzalloc(sizeof(struct fotg210_ep), GFP_KERNEL);
+@@ -1111,7 +1114,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->reg = ioremap(res->start, resource_size(res));
+ 	if (fotg210->reg == NULL) {
+ 		pr_err("ioremap error.\n");
+-		goto err_map;
++		goto err_alloc;
+ 	}
+ 
+ 	spin_lock_init(&fotg210->lock);
+@@ -1159,7 +1162,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->ep0_req = fotg210_ep_alloc_request(&fotg210->ep[0]->ep,
+ 				GFP_KERNEL);
+ 	if (fotg210->ep0_req == NULL)
+-		goto err_req;
++		goto err_map;
+ 
+ 	fotg210_init(fotg210);
+ 
+@@ -1187,12 +1190,14 @@ err_req:
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
+ 
+ err_map:
+-	if (fotg210->reg)
+-		iounmap(fotg210->reg);
++	iounmap(fotg210->reg);
+ 
+ err_alloc:
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c1b22fc64e38..b5a14caa9297 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -152,7 +152,7 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ {
+ 	const struct xhci_plat_priv *priv_match;
+ 	const struct hc_driver	*driver;
+-	struct device		*sysdev;
++	struct device		*sysdev, *tmpdev;
+ 	struct xhci_hcd		*xhci;
+ 	struct resource         *res;
+ 	struct usb_hcd		*hcd;
+@@ -272,19 +272,24 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 		goto disable_clk;
+ 	}
+ 
+-	if (device_property_read_bool(sysdev, "usb2-lpm-disable"))
+-		xhci->quirks |= XHCI_HW_LPM_DISABLE;
++	/* imod_interval is the interrupt moderation value in nanoseconds. */
++	xhci->imod_interval = 40000;
+ 
+-	if (device_property_read_bool(sysdev, "usb3-lpm-capable"))
+-		xhci->quirks |= XHCI_LPM_SUPPORT;
++	/* Iterate over all parent nodes for finding quirks */
++	for (tmpdev = &pdev->dev; tmpdev; tmpdev = tmpdev->parent) {
+ 
+-	if (device_property_read_bool(&pdev->dev, "quirk-broken-port-ped"))
+-		xhci->quirks |= XHCI_BROKEN_PORT_PED;
++		if (device_property_read_bool(tmpdev, "usb2-lpm-disable"))
++			xhci->quirks |= XHCI_HW_LPM_DISABLE;
+ 
+-	/* imod_interval is the interrupt moderation value in nanoseconds. */
+-	xhci->imod_interval = 40000;
+-	device_property_read_u32(sysdev, "imod-interval-ns",
+-				 &xhci->imod_interval);
++		if (device_property_read_bool(tmpdev, "usb3-lpm-capable"))
++			xhci->quirks |= XHCI_LPM_SUPPORT;
++
++		if (device_property_read_bool(tmpdev, "quirk-broken-port-ped"))
++			xhci->quirks |= XHCI_BROKEN_PORT_PED;
++
++		device_property_read_u32(tmpdev, "imod-interval-ns",
++					 &xhci->imod_interval);
++	}
+ 
+ 	hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0);
+ 	if (IS_ERR(hcd->usb_phy)) {
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 1232dd49556d..6d9fd5f64903 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -413,6 +413,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 	mutex_unlock(&dev->io_mutex);
+ 
++	if (WARN_ON_ONCE(len >= sizeof(in_buffer)))
++		return -EIO;
++
+ 	return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+ }
+ 
+diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
+index d4265c8ebb22..b1357aa4bc55 100644
+--- a/drivers/xen/cpu_hotplug.c
++++ b/drivers/xen/cpu_hotplug.c
+@@ -19,15 +19,16 @@ static void enable_hotplug_cpu(int cpu)
+ 
+ static void disable_hotplug_cpu(int cpu)
+ {
+-	if (cpu_online(cpu)) {
+-		lock_device_hotplug();
++	if (!cpu_is_hotpluggable(cpu))
++		return;
++	lock_device_hotplug();
++	if (cpu_online(cpu))
+ 		device_offline(get_cpu_device(cpu));
+-		unlock_device_hotplug();
+-	}
+-	if (cpu_present(cpu))
++	if (!cpu_online(cpu) && cpu_present(cpu)) {
+ 		xen_arch_unregister_cpu(cpu);
+-
+-	set_cpu_present(cpu, false);
++		set_cpu_present(cpu, false);
++	}
++	unlock_device_hotplug();
+ }
+ 
+ static int vcpu_online(unsigned int cpu)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 08e4af04d6f2..e6c1934734b7 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -138,7 +138,7 @@ static int set_evtchn_to_irq(unsigned evtchn, unsigned irq)
+ 		clear_evtchn_to_irq_row(row);
+ 	}
+ 
+-	evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)] = irq;
++	evtchn_to_irq[row][col] = irq;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
+index c93d8ef8df34..5bb01a62f214 100644
+--- a/drivers/xen/manage.c
++++ b/drivers/xen/manage.c
+@@ -280,9 +280,11 @@ static void sysrq_handler(struct xenbus_watch *watch, const char *path,
+ 		/*
+ 		 * The Xenstore watch fires directly after registering it and
+ 		 * after a suspend/resume cycle. So ENOENT is no error but
+-		 * might happen in those cases.
++		 * might happen in those cases. ERANGE is observed when we get
++		 * an empty value (''), this happens when we acknowledge the
++		 * request by writing '\0' below.
+ 		 */
+-		if (err != -ENOENT)
++		if (err != -ENOENT && err != -ERANGE)
+ 			pr_err("Error %d reading sysrq code in control/sysrq\n",
+ 			       err);
+ 		xenbus_transaction_end(xbt, 1);
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 0c3285c8db95..476dcbb79713 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -98,13 +98,13 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 		goto inval;
+ 
+ 	args = strchr(name, ' ');
+-	if (!args)
+-		goto inval;
+-	do {
+-		*args++ = 0;
+-	} while(*args == ' ');
+-	if (!*args)
+-		goto inval;
++	if (args) {
++		do {
++			*args++ = 0;
++		} while(*args == ' ');
++		if (!*args)
++			goto inval;
++	}
+ 
+ 	/* determine command to perform */
+ 	_debug("cmd=%s name=%s args=%s", buf, name, args);
+@@ -120,7 +120,6 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 
+ 		if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+ 			afs_put_cell(net, cell);
+-		printk("kAFS: Added new cell '%s'\n", name);
+ 	} else {
+ 		goto inval;
+ 	}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 118346aceea9..663ce0518d27 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1277,6 +1277,7 @@ struct btrfs_root {
+ 	int send_in_progress;
+ 	struct btrfs_subvolume_writers *subv_writers;
+ 	atomic_t will_be_snapshotted;
++	atomic_t snapshot_force_cow;
+ 
+ 	/* For qgroup metadata reserved space */
+ 	spinlock_t qgroup_meta_rsv_lock;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index dfed08e70ec1..891b1aab3480 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1217,6 +1217,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
+ 	atomic_set(&root->log_batch, 0);
+ 	refcount_set(&root->refs, 1);
+ 	atomic_set(&root->will_be_snapshotted, 0);
++	atomic_set(&root->snapshot_force_cow, 0);
+ 	root->log_transid = 0;
+ 	root->log_transid_committed = -1;
+ 	root->last_log_commit = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 071d949f69ec..d3736fbf6774 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1275,7 +1275,7 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ 	u64 disk_num_bytes;
+ 	u64 ram_bytes;
+ 	int extent_type;
+-	int ret, err;
++	int ret;
+ 	int type;
+ 	int nocow;
+ 	int check_prev = 1;
+@@ -1407,11 +1407,8 @@ next_slot:
+ 			 * if there are pending snapshots for this root,
+ 			 * we fall into common COW way.
+ 			 */
+-			if (!nolock) {
+-				err = btrfs_start_write_no_snapshotting(root);
+-				if (!err)
+-					goto out_check;
+-			}
++			if (!nolock && atomic_read(&root->snapshot_force_cow))
++				goto out_check;
+ 			/*
+ 			 * force cow if csum exists in the range.
+ 			 * this ensure that csum for a given extent are
+@@ -1420,9 +1417,6 @@ next_slot:
+ 			ret = csum_exist_in_range(fs_info, disk_bytenr,
+ 						  num_bytes);
+ 			if (ret) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
+-
+ 				/*
+ 				 * ret could be -EIO if the above fails to read
+ 				 * metadata.
+@@ -1435,11 +1429,8 @@ next_slot:
+ 				WARN_ON_ONCE(nolock);
+ 				goto out_check;
+ 			}
+-			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
++			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr))
+ 				goto out_check;
+-			}
+ 			nocow = 1;
+ 		} else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
+ 			extent_end = found_key.offset +
+@@ -1453,8 +1444,6 @@ next_slot:
+ out_check:
+ 		if (extent_end <= start) {
+ 			path->slots[0]++;
+-			if (!nolock && nocow)
+-				btrfs_end_write_no_snapshotting(root);
+ 			if (nocow)
+ 				btrfs_dec_nocow_writers(fs_info, disk_bytenr);
+ 			goto next_slot;
+@@ -1476,8 +1465,6 @@ out_check:
+ 					     end, page_started, nr_written, 1,
+ 					     NULL);
+ 			if (ret) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1497,8 +1484,6 @@ out_check:
+ 					  ram_bytes, BTRFS_COMPRESS_NONE,
+ 					  BTRFS_ORDERED_PREALLOC);
+ 			if (IS_ERR(em)) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1537,8 +1522,6 @@ out_check:
+ 					     EXTENT_CLEAR_DATA_RESV,
+ 					     PAGE_UNLOCK | PAGE_SET_PRIVATE2);
+ 
+-		if (!nolock && nocow)
+-			btrfs_end_write_no_snapshotting(root);
+ 		cur_offset = extent_end;
+ 
+ 		/*
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f3d6be0c657b..ef7159646615 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -761,6 +761,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	struct btrfs_pending_snapshot *pending_snapshot;
+ 	struct btrfs_trans_handle *trans;
+ 	int ret;
++	bool snapshot_force_cow = false;
+ 
+ 	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+ 		return -EINVAL;
+@@ -777,6 +778,11 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		goto free_pending;
+ 	}
+ 
++	/*
++	 * Force new buffered writes to reserve space even when NOCOW is
++	 * possible. This is to avoid later writeback (running dealloc) to
++	 * fallback to COW mode and unexpectedly fail with ENOSPC.
++	 */
+ 	atomic_inc(&root->will_be_snapshotted);
+ 	smp_mb__after_atomic();
+ 	/* wait for no snapshot writes */
+@@ -787,6 +793,14 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	if (ret)
+ 		goto dec_and_free;
+ 
++	/*
++	 * All previous writes have started writeback in NOCOW mode, so now
++	 * we force future writes to fallback to COW mode during snapshot
++	 * creation.
++	 */
++	atomic_inc(&root->snapshot_force_cow);
++	snapshot_force_cow = true;
++
+ 	btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+ 
+ 	btrfs_init_block_rsv(&pending_snapshot->block_rsv,
+@@ -851,6 +865,8 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ fail:
+ 	btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ dec_and_free:
++	if (snapshot_force_cow)
++		atomic_dec(&root->snapshot_force_cow);
+ 	if (atomic_dec_and_test(&root->will_be_snapshotted))
+ 		wake_up_var(&root->will_be_snapshotted);
+ free_pending:
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 5304b8d6ceb8..1a22c0ecaf67 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4584,7 +4584,12 @@ again:
+ 
+ 	/* Now btrfs_update_device() will change the on-disk size. */
+ 	ret = btrfs_update_device(trans, device);
+-	btrfs_end_transaction(trans);
++	if (ret < 0) {
++		btrfs_abort_transaction(trans, ret);
++		btrfs_end_transaction(trans);
++	} else {
++		ret = btrfs_commit_transaction(trans);
++	}
+ done:
+ 	btrfs_free_path(path);
+ 	if (ret) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 95a3b3ac9b6e..60f81ac369b5 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -603,6 +603,8 @@ static int extra_mon_dispatch(struct ceph_client *client, struct ceph_msg *msg)
+ 
+ /*
+  * create a new fs client
++ *
++ * Success or not, this function consumes @fsopt and @opt.
+  */
+ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 					struct ceph_options *opt)
+@@ -610,17 +612,20 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 	struct ceph_fs_client *fsc;
+ 	int page_count;
+ 	size_t size;
+-	int err = -ENOMEM;
++	int err;
+ 
+ 	fsc = kzalloc(sizeof(*fsc), GFP_KERNEL);
+-	if (!fsc)
+-		return ERR_PTR(-ENOMEM);
++	if (!fsc) {
++		err = -ENOMEM;
++		goto fail;
++	}
+ 
+ 	fsc->client = ceph_create_client(opt, fsc);
+ 	if (IS_ERR(fsc->client)) {
+ 		err = PTR_ERR(fsc->client);
+ 		goto fail;
+ 	}
++	opt = NULL; /* fsc->client now owns this */
+ 
+ 	fsc->client->extra_mon_dispatch = extra_mon_dispatch;
+ 	fsc->client->osdc.abort_on_full = true;
+@@ -678,6 +683,9 @@ fail_client:
+ 	ceph_destroy_client(fsc->client);
+ fail:
+ 	kfree(fsc);
++	if (opt)
++		ceph_destroy_options(opt);
++	destroy_mount_options(fsopt);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -1042,8 +1050,6 @@ static struct dentry *ceph_mount(struct file_system_type *fs_type,
+ 	fsc = create_fs_client(fsopt, opt);
+ 	if (IS_ERR(fsc)) {
+ 		res = ERR_CAST(fsc);
+-		destroy_mount_options(fsopt);
+-		ceph_destroy_options(opt);
+ 		goto out_final;
+ 	}
+ 
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index b380e0871372..a2b2355e7f01 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -105,9 +105,6 @@ convert_sfm_char(const __u16 src_char, char *target)
+ 	case SFM_LESSTHAN:
+ 		*target = '<';
+ 		break;
+-	case SFM_SLASH:
+-		*target = '\\';
+-		break;
+ 	case SFM_SPACE:
+ 		*target = ' ';
+ 		break;
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 93408eab92e7..f5baf777564c 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -601,10 +601,15 @@ CIFSSMBNegotiate(const unsigned int xid, struct cifs_ses *ses)
+ 	}
+ 
+ 	count = 0;
++	/*
++	 * We know that all the name entries in the protocols array
++	 * are short (< 16 bytes anyway) and are NUL terminated.
++	 */
+ 	for (i = 0; i < CIFS_NUM_PROT; i++) {
+-		strncpy(pSMB->DialectsArray+count, protocols[i].name, 16);
+-		count += strlen(protocols[i].name) + 1;
+-		/* null at end of source and target buffers anyway */
++		size_t len = strlen(protocols[i].name) + 1;
++
++		memcpy(pSMB->DialectsArray+count, protocols[i].name, len);
++		count += len;
+ 	}
+ 	inc_rfc1001_len(pSMB, count);
+ 	pSMB->ByteCount = cpu_to_le16(count);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 53e8362cbc4a..6737f54d9a34 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -404,9 +404,17 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 			(struct smb_com_transaction_change_notify_rsp *)buf;
+ 		struct file_notify_information *pnotify;
+ 		__u32 data_offset = 0;
++		size_t len = srv->total_read - sizeof(pSMBr->hdr.smb_buf_length);
++
+ 		if (get_bcc(buf) > sizeof(struct file_notify_information)) {
+ 			data_offset = le32_to_cpu(pSMBr->DataOffset);
+ 
++			if (data_offset >
++			    len - sizeof(struct file_notify_information)) {
++				cifs_dbg(FYI, "invalid data_offset %u\n",
++					 data_offset);
++				return true;
++			}
+ 			pnotify = (struct file_notify_information *)
+ 				((char *)&pSMBr->hdr.Protocol + data_offset);
+ 			cifs_dbg(FYI, "dnotify on %s Action: 0x%x\n",
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5ecbc99f46e4..abb54b852bdc 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1484,7 +1484,7 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	srch_inf->entries_in_buffer = 0;
+-	srch_inf->index_of_last_entry = 0;
++	srch_inf->index_of_last_entry = 2;
+ 
+ 	rc = SMB2_query_directory(xid, tcon, fid->persistent_fid,
+ 				  fid->volatile_fid, 0, srch_inf);
+diff --git a/fs/dcache.c b/fs/dcache.c
+index d19a0dc46c04..baa89f092a2d 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -1890,7 +1890,7 @@ void d_instantiate_new(struct dentry *entry, struct inode *inode)
+ 	spin_lock(&inode->i_lock);
+ 	__d_instantiate(entry, inode);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/fs/inode.c b/fs/inode.c
+index 8c86c809ca17..a06de4454232 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -804,6 +804,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -831,6 +835,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -961,13 +969,26 @@ void unlock_new_inode(struct inode *inode)
+ 	lockdep_annotate_inode_mutex_key(inode);
+ 	spin_lock(&inode->i_lock);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+ }
+ EXPORT_SYMBOL(unlock_new_inode);
+ 
++void discard_new_inode(struct inode *inode)
++{
++	lockdep_annotate_inode_mutex_key(inode);
++	spin_lock(&inode->i_lock);
++	WARN_ON(!(inode->i_state & I_NEW));
++	inode->i_state &= ~I_NEW;
++	smp_mb();
++	wake_up_bit(&inode->i_state, __I_NEW);
++	spin_unlock(&inode->i_lock);
++	iput(inode);
++}
++EXPORT_SYMBOL(discard_new_inode);
++
+ /**
+  * lock_two_nondirectories - take two i_mutexes on non-directory objects
+  *
+@@ -1029,6 +1050,7 @@ struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
+ {
+ 	struct hlist_head *head = inode_hashtable + hash(inode->i_sb, hashval);
+ 	struct inode *old;
++	bool creating = inode->i_state & I_CREATING;
+ 
+ again:
+ 	spin_lock(&inode_hash_lock);
+@@ -1039,6 +1061,8 @@ again:
+ 		 * Use the old inode instead of the preallocated one.
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
++		if (IS_ERR(old))
++			return NULL;
+ 		wait_on_inode(old);
+ 		if (unlikely(inode_unhashed(old))) {
+ 			iput(old);
+@@ -1060,6 +1084,8 @@ again:
+ 	inode->i_state |= I_NEW;
+ 	hlist_add_head(&inode->i_hash, head);
+ 	spin_unlock(&inode->i_lock);
++	if (!creating)
++		inode_sb_list_add(inode);
+ unlock:
+ 	spin_unlock(&inode_hash_lock);
+ 
+@@ -1094,12 +1120,13 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval,
+ 	struct inode *inode = ilookup5(sb, hashval, test, data);
+ 
+ 	if (!inode) {
+-		struct inode *new = new_inode(sb);
++		struct inode *new = alloc_inode(sb);
+ 
+ 		if (new) {
++			new->i_state = 0;
+ 			inode = inode_insert5(new, hashval, test, set, data);
+ 			if (unlikely(inode != new))
+-				iput(new);
++				destroy_inode(new);
+ 		}
+ 	}
+ 	return inode;
+@@ -1128,6 +1155,8 @@ again:
+ 	inode = find_inode_fast(sb, head, ino);
+ 	spin_unlock(&inode_hash_lock);
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1165,6 +1194,8 @@ again:
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
+ 		destroy_inode(inode);
++		if (IS_ERR(old))
++			return NULL;
+ 		inode = old;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+@@ -1282,7 +1313,7 @@ struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval,
+ 	inode = find_inode(sb, head, test, data);
+ 	spin_unlock(&inode_hash_lock);
+ 
+-	return inode;
++	return IS_ERR(inode) ? NULL : inode;
+ }
+ EXPORT_SYMBOL(ilookup5_nowait);
+ 
+@@ -1338,6 +1369,8 @@ again:
+ 	spin_unlock(&inode_hash_lock);
+ 
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1421,12 +1454,17 @@ int insert_inode_locked(struct inode *inode)
+ 		}
+ 		if (likely(!old)) {
+ 			spin_lock(&inode->i_lock);
+-			inode->i_state |= I_NEW;
++			inode->i_state |= I_NEW | I_CREATING;
+ 			hlist_add_head(&inode->i_hash, head);
+ 			spin_unlock(&inode->i_lock);
+ 			spin_unlock(&inode_hash_lock);
+ 			return 0;
+ 		}
++		if (unlikely(old->i_state & I_CREATING)) {
++			spin_unlock(&old->i_lock);
++			spin_unlock(&inode_hash_lock);
++			return -EBUSY;
++		}
+ 		__iget(old);
+ 		spin_unlock(&old->i_lock);
+ 		spin_unlock(&inode_hash_lock);
+@@ -1443,7 +1481,10 @@ EXPORT_SYMBOL(insert_inode_locked);
+ int insert_inode_locked4(struct inode *inode, unsigned long hashval,
+ 		int (*test)(struct inode *, void *), void *data)
+ {
+-	struct inode *old = inode_insert5(inode, hashval, test, NULL, data);
++	struct inode *old;
++
++	inode->i_state |= I_CREATING;
++	old = inode_insert5(inode, hashval, test, NULL, data);
+ 
+ 	if (old != inode) {
+ 		iput(old);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index f174397b63a0..ababdbfab537 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -351,16 +351,9 @@ int fsnotify(struct inode *to_tell, __u32 mask, const void *data, int data_is,
+ 
+ 	iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+ 
+-	if ((mask & FS_MODIFY) ||
+-	    (test_mask & to_tell->i_fsnotify_mask)) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
+-	}
+-
+-	if (mnt && ((mask & FS_MODIFY) ||
+-		    (test_mask & mnt->mnt_fsnotify_mask))) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
++		fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	if (mnt) {
+ 		iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] =
+ 			fsnotify_first_mark(&mnt->mnt_fsnotify_marks);
+ 	}
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index aaca0949fe53..826f0567ec43 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -584,9 +584,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm,
+ 
+ 	res->last_used = 0;
+ 
+-	spin_lock(&dlm->spinlock);
++	spin_lock(&dlm->track_lock);
+ 	list_add_tail(&res->tracking, &dlm->tracking_list);
+-	spin_unlock(&dlm->spinlock);
++	spin_unlock(&dlm->track_lock);
+ 
+ 	memset(res->lvb, 0, DLM_LVB_LEN);
+ 	memset(res->refmap, 0, sizeof(res->refmap));
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index f480b1a2cd2e..da9b3ccfde23 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -601,6 +601,10 @@ static int ovl_create_object(struct dentry *dentry, int mode, dev_t rdev,
+ 	if (!inode)
+ 		goto out_drop_write;
+ 
++	spin_lock(&inode->i_lock);
++	inode->i_state |= I_CREATING;
++	spin_unlock(&inode->i_lock);
++
+ 	inode_init_owner(inode, dentry->d_parent->d_inode, mode);
+ 	attr.mode = inode->i_mode;
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index c993dd8db739..c2229f02389b 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -705,7 +705,7 @@ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper,
+ 			index = NULL;
+ 			goto out;
+ 		}
+-		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n"
++		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%.*s, err=%i);\n"
+ 				    "overlayfs: mount with '-o index=off' to disable inodes index.\n",
+ 				    d_inode(origin)->i_ino, name.len, name.name,
+ 				    err);
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 7538b9b56237..e789924e9833 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -147,8 +147,8 @@ static inline int ovl_do_setxattr(struct dentry *dentry, const char *name,
+ 				  const void *value, size_t size, int flags)
+ {
+ 	int err = vfs_setxattr(dentry, name, value, size, flags);
+-	pr_debug("setxattr(%pd2, \"%s\", \"%*s\", 0x%x) = %i\n",
+-		 dentry, name, (int) size, (char *) value, flags, err);
++	pr_debug("setxattr(%pd2, \"%s\", \"%*pE\", %zu, 0x%x) = %i\n",
++		 dentry, name, min((int)size, 48), value, size, flags, err);
+ 	return err;
+ }
+ 
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 6f1078028c66..319a7eeb388f 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -531,7 +531,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 	struct dentry *upperdentry = ovl_dentry_upper(dentry);
+ 	struct dentry *index = NULL;
+ 	struct inode *inode;
+-	struct qstr name;
++	struct qstr name = { };
+ 	int err;
+ 
+ 	err = ovl_get_index_name(lowerdentry, &name);
+@@ -574,6 +574,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 		goto fail;
+ 
+ out:
++	kfree(name.name);
+ 	dput(index);
+ 	return;
+ 
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index aaffc0c30216..bbcad104505c 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -407,6 +407,20 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
+ 	unsigned long *entries;
+ 	int err;
+ 
++	/*
++	 * The ability to racily run the kernel stack unwinder on a running task
++	 * and then observe the unwinder output is scary; while it is useful for
++	 * debugging kernel issues, it can also allow an attacker to leak kernel
++	 * stack contents.
++	 * Doing this in a manner that is at least safe from races would require
++	 * some work to ensure that the remote task can not be scheduled; and
++	 * even then, this would still expose the unwinder as local attack
++	 * surface.
++	 * Therefore, this interface is restricted to root.
++	 */
++	if (!file_ns_capable(m->file, &init_user_ns, CAP_SYS_ADMIN))
++		return -EACCES;
++
+ 	entries = kmalloc_array(MAX_STACK_TRACE_DEPTH, sizeof(*entries),
+ 				GFP_KERNEL);
+ 	if (!entries)
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 1bee74682513..c689fd5b5679 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -949,17 +949,19 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 	int err = 0;
+ 
+ #ifdef CONFIG_FS_POSIX_ACL
+-	if (inode->i_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_ACCESS);
+-		if (err)
+-			return err;
+-	}
+-	if (inode->i_default_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_DEFAULT);
+-		if (err)
+-			return err;
++	if (IS_POSIXACL(inode)) {
++		if (inode->i_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_ACCESS);
++			if (err)
++				return err;
++		}
++		if (inode->i_default_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_DEFAULT);
++			if (err)
++				return err;
++		}
+ 	}
+ #endif
+ 
+diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
+index 66d1d45fa2e1..d356f802945a 100644
+--- a/include/asm-generic/io.h
++++ b/include/asm-generic/io.h
+@@ -1026,7 +1026,8 @@ static inline void __iomem *ioremap_wt(phys_addr_t offset, size_t size)
+ #define ioport_map ioport_map
+ static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+ {
+-	return PCI_IOBASE + (port & MMIO_UPPER_LIMIT);
++	port &= IO_SPACE_LIMIT;
++	return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port;
+ }
+ #endif
+ 
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 0fce47d5acb1..5d46b83d4820 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -88,7 +88,6 @@ struct blkg_policy_data {
+ 	/* the blkg and policy id this per-policy data belongs to */
+ 	struct blkcg_gq			*blkg;
+ 	int				plid;
+-	bool				offline;
+ };
+ 
+ /*
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 805bf22898cf..a3afa50bb79f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2014,6 +2014,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+  * I_OVL_INUSE		Used by overlayfs to get exclusive ownership on upper
+  *			and work dirs among overlayfs mounts.
+  *
++ * I_CREATING		New object's inode in the middle of setting up.
++ *
+  * Q: What is the difference between I_WILL_FREE and I_FREEING?
+  */
+ #define I_DIRTY_SYNC		(1 << 0)
+@@ -2034,7 +2036,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+ #define __I_DIRTY_TIME_EXPIRED	12
+ #define I_DIRTY_TIME_EXPIRED	(1 << __I_DIRTY_TIME_EXPIRED)
+ #define I_WB_SWITCH		(1 << 13)
+-#define I_OVL_INUSE			(1 << 14)
++#define I_OVL_INUSE		(1 << 14)
++#define I_CREATING		(1 << 15)
+ 
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+@@ -2918,6 +2921,7 @@ extern void lockdep_annotate_inode_mutex_key(struct inode *inode);
+ static inline void lockdep_annotate_inode_mutex_key(struct inode *inode) { };
+ #endif
+ extern void unlock_new_inode(struct inode *);
++extern void discard_new_inode(struct inode *);
+ extern unsigned int get_next_ino(void);
+ extern void evict_inodes(struct super_block *sb);
+ 
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 1beb3ead0385..7229c186d199 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4763,8 +4763,8 @@ const char *reg_initiator_name(enum nl80211_reg_initiator initiator);
+  *
+  * Return: 0 on success. -ENODATA.
+  */
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *ptr,
+-			struct ieee80211_wmm_rule *rule);
++int reg_query_regdb_wmm(char *alpha2, int freq,
++			struct ieee80211_reg_rule *rule);
+ 
+ /*
+  * callbacks for asynchronous cfg80211 methods, notification
+diff --git a/include/net/regulatory.h b/include/net/regulatory.h
+index 60f8cc86a447..3469750df0f4 100644
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -217,15 +217,15 @@ struct ieee80211_wmm_rule {
+ struct ieee80211_reg_rule {
+ 	struct ieee80211_freq_range freq_range;
+ 	struct ieee80211_power_rule power_rule;
+-	struct ieee80211_wmm_rule *wmm_rule;
++	struct ieee80211_wmm_rule wmm_rule;
+ 	u32 flags;
+ 	u32 dfs_cac_ms;
++	bool has_wmm;
+ };
+ 
+ struct ieee80211_regdomain {
+ 	struct rcu_head rcu_head;
+ 	u32 n_reg_rules;
+-	u32 n_wmm_rules;
+ 	char alpha2[3];
+ 	enum nl80211_dfs_regions dfs_region;
+ 	struct ieee80211_reg_rule reg_rules[];
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index ed707b21d152..f833a60699ad 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -236,7 +236,7 @@ static int bpf_tcp_init(struct sock *sk)
+ }
+ 
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock);
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md);
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge);
+ 
+ static void bpf_tcp_release(struct sock *sk)
+ {
+@@ -248,7 +248,7 @@ static void bpf_tcp_release(struct sock *sk)
+ 		goto out;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+@@ -330,14 +330,14 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	close_fun = psock->save_close;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -369,7 +369,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			/* If another thread deleted this object skip deletion.
+ 			 * The refcnt on psock may or may not be zero.
+ 			 */
+-			if (l) {
++			if (l && l == link) {
+ 				hlist_del_rcu(&link->hash_node);
+ 				smap_release_sock(psock, link->sk);
+ 				free_htab_elem(htab, link);
+@@ -570,14 +570,16 @@ static void free_bytes_sg(struct sock *sk, int bytes,
+ 	md->sg_start = i;
+ }
+ 
+-static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
++static int free_sg(struct sock *sk, int start,
++		   struct sk_msg_buff *md, bool charge)
+ {
+ 	struct scatterlist *sg = md->sg_data;
+ 	int i = start, free = 0;
+ 
+ 	while (sg[i].length) {
+ 		free += sg[i].length;
+-		sk_mem_uncharge(sk, sg[i].length);
++		if (charge)
++			sk_mem_uncharge(sk, sg[i].length);
+ 		if (!md->skb)
+ 			put_page(sg_page(&sg[i]));
+ 		sg[i].length = 0;
+@@ -594,9 +596,9 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
+ 	return free;
+ }
+ 
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge)
+ {
+-	int free = free_sg(sk, md->sg_start, md);
++	int free = free_sg(sk, md->sg_start, md, charge);
+ 
+ 	md->sg_start = md->sg_end;
+ 	return free;
+@@ -604,7 +606,7 @@ static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
+ 
+ static int free_curr_sg(struct sock *sk, struct sk_msg_buff *md)
+ {
+-	return free_sg(sk, md->sg_curr, md);
++	return free_sg(sk, md->sg_curr, md, true);
+ }
+ 
+ static int bpf_map_msg_verdict(int _rc, struct sk_msg_buff *md)
+@@ -718,7 +720,7 @@ static int bpf_tcp_ingress(struct sock *sk, int apply_bytes,
+ 		list_add_tail(&r->list, &psock->ingress);
+ 		sk->sk_data_ready(sk);
+ 	} else {
+-		free_start_sg(sk, r);
++		free_start_sg(sk, r, true);
+ 		kfree(r);
+ 	}
+ 
+@@ -755,14 +757,10 @@ static int bpf_tcp_sendmsg_do_redirect(struct sock *sk, int send,
+ 		release_sock(sk);
+ 	}
+ 	smap_release_sock(psock, sk);
+-	if (unlikely(err))
+-		goto out;
+-	return 0;
++	return err;
+ out_rcu:
+ 	rcu_read_unlock();
+-out:
+-	free_bytes_sg(NULL, send, md, false);
+-	return err;
++	return 0;
+ }
+ 
+ static inline void bpf_md_init(struct smap_psock *psock)
+@@ -825,7 +823,7 @@ more_data:
+ 	case __SK_PASS:
+ 		err = bpf_tcp_push(sk, send, m, flags, true);
+ 		if (unlikely(err)) {
+-			*copied -= free_start_sg(sk, m);
++			*copied -= free_start_sg(sk, m, true);
+ 			break;
+ 		}
+ 
+@@ -848,16 +846,17 @@ more_data:
+ 		lock_sock(sk);
+ 
+ 		if (unlikely(err < 0)) {
+-			free_start_sg(sk, m);
++			int free = free_start_sg(sk, m, false);
++
+ 			psock->sg_size = 0;
+ 			if (!cork)
+-				*copied -= send;
++				*copied -= free;
+ 		} else {
+ 			psock->sg_size -= send;
+ 		}
+ 
+ 		if (cork) {
+-			free_start_sg(sk, m);
++			free_start_sg(sk, m, true);
+ 			psock->sg_size = 0;
+ 			kfree(m);
+ 			m = NULL;
+@@ -915,6 +914,8 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 
+ 	if (unlikely(flags & MSG_ERRQUEUE))
+ 		return inet_recv_error(sk, msg, len, addr_len);
++	if (!skb_queue_empty(&sk->sk_receive_queue))
++		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+@@ -925,9 +926,6 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		goto out;
+ 	rcu_read_unlock();
+ 
+-	if (!skb_queue_empty(&sk->sk_receive_queue))
+-		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+-
+ 	lock_sock(sk);
+ bytes_ready:
+ 	while (copied != len) {
+@@ -1125,7 +1123,7 @@ wait_for_memory:
+ 		err = sk_stream_wait_memory(sk, &timeo);
+ 		if (err) {
+ 			if (m && m != psock->cork)
+-				free_start_sg(sk, m);
++				free_start_sg(sk, m, true);
+ 			goto out_err;
+ 		}
+ 	}
+@@ -1467,10 +1465,16 @@ static void smap_destroy_psock(struct rcu_head *rcu)
+ 	schedule_work(&psock->gc_work);
+ }
+ 
++static bool psock_is_smap_sk(struct sock *sk)
++{
++	return inet_csk(sk)->icsk_ulp_ops == &bpf_tcp_ulp_ops;
++}
++
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock)
+ {
+ 	if (refcount_dec_and_test(&psock->refcnt)) {
+-		tcp_cleanup_ulp(sock);
++		if (psock_is_smap_sk(sock))
++			tcp_cleanup_ulp(sock);
+ 		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
+ 		write_unlock_bh(&sock->sk_callback_lock);
+@@ -1584,13 +1588,13 @@ static void smap_gc_work(struct work_struct *w)
+ 		bpf_prog_put(psock->bpf_tx_msg);
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -1897,6 +1901,10 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 	 * doesn't update user data.
+ 	 */
+ 	if (psock) {
++		if (!psock_is_smap_sk(sock)) {
++			err = -EBUSY;
++			goto out_progs;
++		}
+ 		if (READ_ONCE(psock->bpf_parse) && parse) {
+ 			err = -EBUSY;
+ 			goto out_progs;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index adbe21c8876e..82e8edef6ea0 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2865,6 +2865,15 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	u64 umin_val, umax_val;
+ 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
+ 
++	if (insn_bitness == 32) {
++		/* Relevant for 32-bit RSH: Information can propagate towards
++		 * LSB, so it isn't sufficient to only truncate the output to
++		 * 32 bits.
++		 */
++		coerce_reg_to_size(dst_reg, 4);
++		coerce_reg_to_size(&src_reg, 4);
++	}
++
+ 	smin_val = src_reg.smin_value;
+ 	smax_val = src_reg.smax_value;
+ 	umin_val = src_reg.umin_value;
+@@ -3100,7 +3109,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	if (BPF_CLASS(insn->code) != BPF_ALU64) {
+ 		/* 32-bit ALU ops are (32,32)->32 */
+ 		coerce_reg_to_size(dst_reg, 4);
+-		coerce_reg_to_size(&src_reg, 4);
+ 	}
+ 
+ 	__reg_deduce_bounds(dst_reg);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 56a0fed30c0a..505a41c42b96 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1295,7 +1295,7 @@ static void init_numa_topology_type(void)
+ 
+ 	n = sched_max_numa_distance;
+ 
+-	if (sched_domains_numa_levels <= 1) {
++	if (sched_domains_numa_levels <= 2) {
+ 		sched_numa_topology_type = NUMA_DIRECT;
+ 		return;
+ 	}
+@@ -1380,9 +1380,6 @@ void sched_init_numa(void)
+ 			break;
+ 	}
+ 
+-	if (!level)
+-		return;
+-
+ 	/*
+ 	 * 'level' contains the number of unique distances
+ 	 *
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 4d3c922ea1a1..8534ea2978c5 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -96,7 +96,7 @@ static long madvise_behavior(struct vm_area_struct *vma,
+ 		new_flags |= VM_DONTDUMP;
+ 		break;
+ 	case MADV_DODUMP:
+-		if (new_flags & VM_SPECIAL) {
++		if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) {
+ 			error = -EINVAL;
+ 			goto out;
+ 		}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9dfd145eedcc..963ee2e88861 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2272,14 +2272,21 @@ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+ 	.arg2_type      = ARG_ANYTHING,
+ };
+ 
++#define sk_msg_iter_var(var)			\
++	do {					\
++		var++;				\
++		if (var == MAX_SKB_FRAGS)	\
++			var = 0;		\
++	} while (0)
++
+ BPF_CALL_4(bpf_msg_pull_data,
+ 	   struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+-	unsigned int len = 0, offset = 0, copy = 0;
++	unsigned int len = 0, offset = 0, copy = 0, poffset = 0;
++	int bytes = end - start, bytes_sg_total;
+ 	struct scatterlist *sg = msg->sg_data;
+ 	int first_sg, last_sg, i, shift;
+ 	unsigned char *p, *to, *from;
+-	int bytes = end - start;
+ 	struct page *page;
+ 
+ 	if (unlikely(flags || end <= start))
+@@ -2289,21 +2296,22 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	i = msg->sg_start;
+ 	do {
+ 		len = sg[i].length;
+-		offset += len;
+ 		if (start < offset + len)
+ 			break;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		offset += len;
++		sk_msg_iter_var(i);
+ 	} while (i != msg->sg_end);
+ 
+ 	if (unlikely(start >= offset + len))
+ 		return -EINVAL;
+ 
+-	if (!msg->sg_copy[i] && bytes <= len)
+-		goto out;
+-
+ 	first_sg = i;
++	/* The start may point into the sg element so we need to also
++	 * account for the headroom.
++	 */
++	bytes_sg_total = start - offset + bytes;
++	if (!msg->sg_copy[i] && bytes_sg_total <= len)
++		goto out;
+ 
+ 	/* At this point we need to linearize multiple scatterlist
+ 	 * elements or a single shared page. Either way we need to
+@@ -2317,37 +2325,32 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 */
+ 	do {
+ 		copy += sg[i].length;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
+-		if (bytes < copy)
++		sk_msg_iter_var(i);
++		if (bytes_sg_total <= copy)
+ 			break;
+ 	} while (i != msg->sg_end);
+ 	last_sg = i;
+ 
+-	if (unlikely(copy < end - start))
++	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+ 	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+-	offset = 0;
+ 
+ 	i = first_sg;
+ 	do {
+ 		from = sg_virt(&sg[i]);
+ 		len = sg[i].length;
+-		to = p + offset;
++		to = p + poffset;
+ 
+ 		memcpy(to, from, len);
+-		offset += len;
++		poffset += len;
+ 		sg[i].length = 0;
+ 		put_page(sg_page(&sg[i]));
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (i != last_sg);
+ 
+ 	sg[first_sg].length = copy;
+@@ -2357,11 +2360,15 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 * had a single entry though we can just replace it and
+ 	 * be done. Otherwise walk the ring and shift the entries.
+ 	 */
+-	shift = last_sg - first_sg - 1;
++	WARN_ON_ONCE(last_sg == first_sg);
++	shift = last_sg > first_sg ?
++		last_sg - first_sg - 1 :
++		MAX_SKB_FRAGS - first_sg + last_sg - 1;
+ 	if (!shift)
+ 		goto out;
+ 
+-	i = first_sg + 1;
++	i = first_sg;
++	sk_msg_iter_var(i);
+ 	do {
+ 		int move_from;
+ 
+@@ -2378,15 +2385,13 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 		sg[move_from].page_link = 0;
+ 		sg[move_from].offset = 0;
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (1);
+ 	msg->sg_end -= shift;
+ 	if (msg->sg_end < 0)
+ 		msg->sg_end += MAX_SKB_FRAGS;
+ out:
+-	msg->data = sg_virt(&sg[i]) + start - offset;
++	msg->data = sg_virt(&sg[first_sg]) + start - offset;
+ 	msg->data_end = msg->data + bytes;
+ 
+ 	return 0;
+diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig
+index bbfc356cb1b5..d7ecae5e93ea 100644
+--- a/net/ipv4/netfilter/Kconfig
++++ b/net/ipv4/netfilter/Kconfig
+@@ -122,6 +122,10 @@ config NF_NAT_IPV4
+ 
+ if NF_NAT_IPV4
+ 
++config NF_NAT_MASQUERADE_IPV4
++	bool
++
++if NF_TABLES
+ config NFT_CHAIN_NAT_IPV4
+ 	depends on NF_TABLES_IPV4
+ 	tristate "IPv4 nf_tables nat chain support"
+@@ -131,9 +135,6 @@ config NFT_CHAIN_NAT_IPV4
+ 	  packet transformations such as the source, destination address and
+ 	  source and destination ports.
+ 
+-config NF_NAT_MASQUERADE_IPV4
+-	bool
+-
+ config NFT_MASQ_IPV4
+ 	tristate "IPv4 masquerading support for nf_tables"
+ 	depends on NF_TABLES_IPV4
+@@ -151,6 +152,7 @@ config NFT_REDIR_IPV4
+ 	help
+ 	  This is the expression that provides IPv4 redirect support for
+ 	  nf_tables.
++endif # NF_TABLES
+ 
+ config NF_NAT_SNMP_BASIC
+ 	tristate "Basic SNMP-ALG support"
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 6449a1c2283b..f0f5fedb8caa 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -947,8 +947,8 @@ static void ieee80211_rx_mgmt_deauth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	if (len < IEEE80211_DEAUTH_FRAME_LEN)
+ 		return;
+ 
+-	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM BSSID=%pM (reason: %d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, reason);
++	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (reason: %d)\n", mgmt->bssid, reason);
+ 	sta_info_destroy_addr(sdata, mgmt->sa);
+ }
+ 
+@@ -966,9 +966,9 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg);
+ 	auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
+ 
+-	ibss_dbg(sdata,
+-		 "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction);
++	ibss_dbg(sdata, "RX Auth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (auth_transaction=%d)\n",
++		 mgmt->bssid, auth_transaction);
+ 
+ 	if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1)
+ 		return;
+@@ -1175,10 +1175,10 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
+ 		rx_timestamp = drv_get_tsf(local, sdata);
+ 	}
+ 
+-	ibss_dbg(sdata,
+-		 "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n",
++	ibss_dbg(sdata, "RX beacon SA=%pM BSSID=%pM TSF=0x%llx\n",
+ 		 mgmt->sa, mgmt->bssid,
+-		 (unsigned long long)rx_timestamp,
++		 (unsigned long long)rx_timestamp);
++	ibss_dbg(sdata, "\tBCN=0x%llx diff=%lld @%lu\n",
+ 		 (unsigned long long)beacon_timestamp,
+ 		 (unsigned long long)(rx_timestamp - beacon_timestamp),
+ 		 jiffies);
+@@ -1537,9 +1537,9 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata,
+ 
+ 	tx_last_beacon = drv_tx_last_beacon(local);
+ 
+-	ibss_dbg(sdata,
+-		 "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon);
++	ibss_dbg(sdata, "RX ProbeReq SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (tx_last_beacon=%d)\n",
++		 mgmt->bssid, tx_last_beacon);
+ 
+ 	if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da))
+ 		return;
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index fb73451ed85e..66cbddd65b47 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -255,8 +255,27 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 
+ 	flush_work(&local->radar_detected_work);
+ 	rtnl_lock();
+-	list_for_each_entry(sdata, &local->interfaces, list)
++	list_for_each_entry(sdata, &local->interfaces, list) {
++		/*
++		 * XXX: there may be more work for other vif types and even
++		 * for station mode: a good thing would be to run most of
++		 * the iface type's dependent _stop (ieee80211_mg_stop,
++		 * ieee80211_ibss_stop) etc...
++		 * For now, fix only the specific bug that was seen: race
++		 * between csa_connection_drop_work and us.
++		 */
++		if (sdata->vif.type == NL80211_IFTYPE_STATION) {
++			/*
++			 * This worker is scheduled from the iface worker that
++			 * runs on mac80211's workqueue, so we can't be
++			 * scheduling this worker after the cancel right here.
++			 * The exception is ieee80211_chswitch_done.
++			 * Then we can have a race...
++			 */
++			cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work);
++		}
+ 		flush_delayed_work(&sdata->dec_tailroom_needed_wk);
++	}
+ 	ieee80211_scan_cancel(local);
+ 
+ 	/* make sure any new ROC will consider local->in_reconfig */
+@@ -470,10 +489,7 @@ static const struct ieee80211_vht_cap mac80211_vht_capa_mod_mask = {
+ 		cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_160 |
+-			    IEEE80211_VHT_CAP_RXSTBC_1 |
+-			    IEEE80211_VHT_CAP_RXSTBC_2 |
+-			    IEEE80211_VHT_CAP_RXSTBC_3 |
+-			    IEEE80211_VHT_CAP_RXSTBC_4 |
++			    IEEE80211_VHT_CAP_RXSTBC_MASK |
+ 			    IEEE80211_VHT_CAP_TXSTBC |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+@@ -1182,6 +1198,7 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	unregister_inet6addr_notifier(&local->ifa6_notifier);
+ #endif
++	ieee80211_txq_teardown_flows(local);
+ 
+ 	rtnl_lock();
+ 
+@@ -1210,7 +1227,6 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ 	skb_queue_purge(&local->skb_queue);
+ 	skb_queue_purge(&local->skb_queue_unreliable);
+ 	skb_queue_purge(&local->skb_queue_tdls_chsw);
+-	ieee80211_txq_teardown_flows(local);
+ 
+ 	destroy_workqueue(local->workqueue);
+ 	wiphy_unregister(local->hw.wiphy);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 35ad3983ae4b..daf9db3c8f24 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -572,6 +572,10 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 		forward = false;
+ 		reply = true;
+ 		target_metric = 0;
++
++		if (SN_GT(target_sn, ifmsh->sn))
++			ifmsh->sn = target_sn;
++
+ 		if (time_after(jiffies, ifmsh->last_sn_update +
+ 					net_traversal_jiffies(sdata)) ||
+ 		    time_before(jiffies, ifmsh->last_sn_update)) {
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index a59187c016e0..b046bf95eb3c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -978,6 +978,10 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 	 */
+ 
+ 	if (sdata->reserved_chanctx) {
++		struct ieee80211_supported_band *sband = NULL;
++		struct sta_info *mgd_sta = NULL;
++		enum ieee80211_sta_rx_bandwidth bw = IEEE80211_STA_RX_BW_20;
++
+ 		/*
+ 		 * with multi-vif csa driver may call ieee80211_csa_finish()
+ 		 * many times while waiting for other interfaces to use their
+@@ -986,6 +990,48 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 		if (sdata->reserved_ready)
+ 			goto out;
+ 
++		if (sdata->vif.bss_conf.chandef.width !=
++		    sdata->csa_chandef.width) {
++			/*
++			 * For managed interface, we need to also update the AP
++			 * station bandwidth and align the rate scale algorithm
++			 * on the bandwidth change. Here we only consider the
++			 * bandwidth of the new channel definition (as channel
++			 * switch flow does not have the full HT/VHT/HE
++			 * information), assuming that if additional changes are
++			 * required they would be done as part of the processing
++			 * of the next beacon from the AP.
++			 */
++			switch (sdata->csa_chandef.width) {
++			case NL80211_CHAN_WIDTH_20_NOHT:
++			case NL80211_CHAN_WIDTH_20:
++			default:
++				bw = IEEE80211_STA_RX_BW_20;
++				break;
++			case NL80211_CHAN_WIDTH_40:
++				bw = IEEE80211_STA_RX_BW_40;
++				break;
++			case NL80211_CHAN_WIDTH_80:
++				bw = IEEE80211_STA_RX_BW_80;
++				break;
++			case NL80211_CHAN_WIDTH_80P80:
++			case NL80211_CHAN_WIDTH_160:
++				bw = IEEE80211_STA_RX_BW_160;
++				break;
++			}
++
++			mgd_sta = sta_info_get(sdata, ifmgd->bssid);
++			sband =
++				local->hw.wiphy->bands[sdata->csa_chandef.chan->band];
++		}
++
++		if (sdata->vif.bss_conf.chandef.width >
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		ret = ieee80211_vif_use_reserved_context(sdata);
+ 		if (ret) {
+ 			sdata_info(sdata,
+@@ -996,6 +1042,13 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 			goto out;
+ 		}
+ 
++		if (sdata->vif.bss_conf.chandef.width <
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		goto out;
+ 	}
+ 
+@@ -1217,6 +1270,16 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ 					 cbss->beacon_interval));
+ 	return;
+  drop_connection:
++	/*
++	 * This is just so that the disconnect flow will know that
++	 * we were trying to switch channel and failed. In case the
++	 * mode is 1 (we are not allowed to Tx), we will know not to
++	 * send a deauthentication frame. Those two fields will be
++	 * reset when the disconnection worker runs.
++	 */
++	sdata->vif.csa_active = true;
++	sdata->csa_block_tx = csa_ie.mode;
++
+ 	ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work);
+ 	mutex_unlock(&local->chanctx_mtx);
+ 	mutex_unlock(&local->mtx);
+@@ -2400,6 +2463,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 	u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
++	bool tx;
+ 
+ 	sdata_lock(sdata);
+ 	if (!ifmgd->associated) {
+@@ -2407,6 +2471,8 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 		return;
+ 	}
+ 
++	tx = !sdata->csa_block_tx;
++
+ 	/* AP is probably out of range (or not reachable for another reason) so
+ 	 * remove the bss struct for that AP.
+ 	 */
+@@ -2414,7 +2480,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 
+ 	ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ 			       WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
+-			       true, frame_buf);
++			       tx, frame_buf);
+ 	mutex_lock(&local->mtx);
+ 	sdata->vif.csa_active = false;
+ 	ifmgd->csa_waiting_bcn = false;
+@@ -2425,7 +2491,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	}
+ 	mutex_unlock(&local->mtx);
+ 
+-	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true,
++	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx,
+ 				    WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY);
+ 
+ 	sdata_unlock(sdata);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index fa1f1e63a264..9b3b069e418a 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3073,27 +3073,18 @@ void ieee80211_clear_fast_xmit(struct sta_info *sta)
+ }
+ 
+ static bool ieee80211_amsdu_realloc_pad(struct ieee80211_local *local,
+-					struct sk_buff *skb, int headroom,
+-					int *subframe_len)
++					struct sk_buff *skb, int headroom)
+ {
+-	int amsdu_len = *subframe_len + sizeof(struct ethhdr);
+-	int padding = (4 - amsdu_len) & 3;
+-
+-	if (skb_headroom(skb) < headroom || skb_tailroom(skb) < padding) {
++	if (skb_headroom(skb) < headroom) {
+ 		I802_DEBUG_INC(local->tx_expand_skb_head);
+ 
+-		if (pskb_expand_head(skb, headroom, padding, GFP_ATOMIC)) {
++		if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) {
+ 			wiphy_debug(local->hw.wiphy,
+ 				    "failed to reallocate TX buffer\n");
+ 			return false;
+ 		}
+ 	}
+ 
+-	if (padding) {
+-		*subframe_len += padding;
+-		skb_put_zero(skb, padding);
+-	}
+-
+ 	return true;
+ }
+ 
+@@ -3117,8 +3108,7 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr),
+-					 &subframe_len))
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+@@ -3184,7 +3174,8 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	void *data;
+ 	bool ret = false;
+ 	unsigned int orig_len;
+-	int n = 1, nfrags;
++	int n = 2, nfrags, pad = 0;
++	u16 hdrlen;
+ 
+ 	if (!ieee80211_hw_check(&local->hw, TX_AMSDU))
+ 		return false;
+@@ -3217,9 +3208,6 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (skb->len + head->len > max_amsdu_len)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+-		goto out;
+-
+ 	nfrags = 1 + skb_shinfo(skb)->nr_frags;
+ 	nfrags += 1 + skb_shinfo(head)->nr_frags;
+ 	frag_tail = &skb_shinfo(head)->frag_list;
+@@ -3235,10 +3223,24 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (max_frags && nfrags > max_frags)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + 2,
+-					 &subframe_len))
++	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+ 		goto out;
+ 
++	/*
++	 * Pad out the previous subframe to a multiple of 4 by adding the
++	 * padding to the next one, that's being added. Note that head->len
++	 * is the length of the full A-MSDU, but that works since each time
++	 * we add a new subframe we pad out the previous one to a multiple
++	 * of 4 and thus it no longer matters in the next round.
++	 */
++	hdrlen = fast_tx->hdr_len - sizeof(rfc1042_header);
++	if ((head->len - hdrlen) & 3)
++		pad = 4 - ((head->len - hdrlen) & 3);
++
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) +
++						     2 + pad))
++		goto out_recalc;
++
+ 	ret = true;
+ 	data = skb_push(skb, ETH_ALEN + 2);
+ 	memmove(data, data + ETH_ALEN + 2, 2 * ETH_ALEN);
+@@ -3248,15 +3250,19 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	memcpy(data, &len, 2);
+ 	memcpy(data + 2, rfc1042_header, sizeof(rfc1042_header));
+ 
++	memset(skb_push(skb, pad), 0, pad);
++
+ 	head->len += skb->len;
+ 	head->data_len += skb->len;
+ 	*frag_tail = skb;
+ 
+-	flow->backlog += head->len - orig_len;
+-	tin->backlog_bytes += head->len - orig_len;
+-
+-	fq_recalc_backlog(fq, tin, flow);
++out_recalc:
++	if (head->len != orig_len) {
++		flow->backlog += head->len - orig_len;
++		tin->backlog_bytes += head->len - orig_len;
+ 
++		fq_recalc_backlog(fq, tin, flow);
++	}
+ out:
+ 	spin_unlock_bh(&fq->lock);
+ 
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index d02fbfec3783..93b5bb849ad7 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1120,7 +1120,7 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ {
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 	const struct ieee80211_reg_rule *rrule;
+-	struct ieee80211_wmm_ac *wmm_ac;
++	const struct ieee80211_wmm_ac *wmm_ac;
+ 	u16 center_freq = 0;
+ 
+ 	if (sdata->vif.type != NL80211_IFTYPE_AP &&
+@@ -1139,20 +1139,19 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ 
+ 	rrule = freq_reg_info(sdata->wdev.wiphy, MHZ_TO_KHZ(center_freq));
+ 
+-	if (IS_ERR_OR_NULL(rrule) || !rrule->wmm_rule) {
++	if (IS_ERR_OR_NULL(rrule) || !rrule->has_wmm) {
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_AP)
+-		wmm_ac = &rrule->wmm_rule->ap[ac];
++		wmm_ac = &rrule->wmm_rule.ap[ac];
+ 	else
+-		wmm_ac = &rrule->wmm_rule->client[ac];
++		wmm_ac = &rrule->wmm_rule.client[ac];
+ 	qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min);
+ 	qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max);
+ 	qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn);
+-	qparam->txop = !qparam->txop ? wmm_ac->cot / 32 :
+-		min_t(u16, qparam->txop, wmm_ac->cot / 32);
++	qparam->txop = min_t(u16, qparam->txop, wmm_ac->cot / 32);
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index f0a1c536ef15..e6d5c87f0d96 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -740,13 +740,13 @@ config NETFILTER_XT_TARGET_CHECKSUM
+ 	depends on NETFILTER_ADVANCED
+ 	---help---
+ 	  This option adds a `CHECKSUM' target, which can be used in the iptables mangle
+-	  table.
++	  table to work around buggy DHCP clients in virtualized environments.
+ 
+-	  You can use this target to compute and fill in the checksum in
+-	  a packet that lacks a checksum.  This is particularly useful,
+-	  if you need to work around old applications such as dhcp clients,
+-	  that do not work well with checksum offloads, but don't want to disable
+-	  checksum offload in your device.
++	  Some old DHCP clients drop packets because they are not aware
++	  that the checksum would normally be offloaded to hardware and
++	  thus should be considered valid.
++	  This target can be used to fill in the checksum using iptables
++	  when such packets are sent via a virtual network device.
+ 
+ 	  To compile it as a module, choose M here.  If unsure, say N.
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f5745e4c6513..77d690a87144 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4582,6 +4582,7 @@ static int nft_flush_set(const struct nft_ctx *ctx,
+ 	}
+ 	set->ndeact++;
+ 
++	nft_set_elem_deactivate(ctx->net, set, elem);
+ 	nft_trans_elem_set(trans) = set;
+ 	nft_trans_elem(trans) = *elem;
+ 	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index ea4ba551abb2..d33094f4ec41 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -233,6 +233,7 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
+ 	int err;
+ 
+ 	if (verdict == NF_ACCEPT ||
++	    verdict == NF_REPEAT ||
+ 	    verdict == NF_STOP) {
+ 		rcu_read_lock();
+ 		ct_hook = rcu_dereference(nf_ct_hook);
+diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c
+index 9f4151ec3e06..6c7aa6a0a0d2 100644
+--- a/net/netfilter/xt_CHECKSUM.c
++++ b/net/netfilter/xt_CHECKSUM.c
+@@ -16,6 +16,9 @@
+ #include <linux/netfilter/x_tables.h>
+ #include <linux/netfilter/xt_CHECKSUM.h>
+ 
++#include <linux/netfilter_ipv4/ip_tables.h>
++#include <linux/netfilter_ipv6/ip6_tables.h>
++
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Michael S. Tsirkin <mst@redhat.com>");
+ MODULE_DESCRIPTION("Xtables: checksum modification");
+@@ -25,7 +28,7 @@ MODULE_ALIAS("ip6t_CHECKSUM");
+ static unsigned int
+ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+-	if (skb->ip_summed == CHECKSUM_PARTIAL)
++	if (skb->ip_summed == CHECKSUM_PARTIAL && !skb_is_gso(skb))
+ 		skb_checksum_help(skb);
+ 
+ 	return XT_CONTINUE;
+@@ -34,6 +37,8 @@ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct xt_CHECKSUM_info *einfo = par->targinfo;
++	const struct ip6t_ip6 *i6 = par->entryinfo;
++	const struct ipt_ip *i4 = par->entryinfo;
+ 
+ 	if (einfo->operation & ~XT_CHECKSUM_OP_FILL) {
+ 		pr_info_ratelimited("unsupported CHECKSUM operation %x\n",
+@@ -43,6 +48,21 @@ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ 	if (!einfo->operation)
+ 		return -EINVAL;
+ 
++	switch (par->family) {
++	case NFPROTO_IPV4:
++		if (i4->proto == IPPROTO_UDP &&
++		    (i4->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	case NFPROTO_IPV6:
++		if ((i6->flags & IP6T_F_PROTO) &&
++		    i6->proto == IPPROTO_UDP &&
++		    (i6->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	}
++
++	pr_warn_once("CHECKSUM should be avoided.  If really needed, restrict with \"-p udp\" and only use in OUTPUT\n");
+ 	return 0;
+ }
+ 
+diff --git a/net/netfilter/xt_cluster.c b/net/netfilter/xt_cluster.c
+index dfbdbb2fc0ed..51d0c257e7a5 100644
+--- a/net/netfilter/xt_cluster.c
++++ b/net/netfilter/xt_cluster.c
+@@ -125,6 +125,7 @@ xt_cluster_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ {
+ 	struct xt_cluster_match_info *info = par->matchinfo;
++	int ret;
+ 
+ 	if (info->total_nodes > XT_CLUSTER_NODES_MAX) {
+ 		pr_info_ratelimited("you have exceeded the maximum number of cluster nodes (%u > %u)\n",
+@@ -135,7 +136,17 @@ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ 		pr_info_ratelimited("node mask cannot exceed total number of nodes\n");
+ 		return -EDOM;
+ 	}
+-	return 0;
++
++	ret = nf_ct_netns_get(par->net, par->family);
++	if (ret < 0)
++		pr_info_ratelimited("cannot load conntrack support for proto=%u\n",
++				    par->family);
++	return ret;
++}
++
++static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par)
++{
++	nf_ct_netns_put(par->net, par->family);
+ }
+ 
+ static struct xt_match xt_cluster_match __read_mostly = {
+@@ -144,6 +155,7 @@ static struct xt_match xt_cluster_match __read_mostly = {
+ 	.match		= xt_cluster_mt,
+ 	.checkentry	= xt_cluster_mt_checkentry,
+ 	.matchsize	= sizeof(struct xt_cluster_match_info),
++	.destroy	= xt_cluster_mt_destroy,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
+index 9b16402f29af..3e7d259e5d8d 100644
+--- a/net/netfilter/xt_hashlimit.c
++++ b/net/netfilter/xt_hashlimit.c
+@@ -1057,7 +1057,7 @@ static struct xt_match hashlimit_mt_reg[] __read_mostly = {
+ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 	__acquires(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket;
+ 
+ 	spin_lock_bh(&htable->lock);
+@@ -1074,7 +1074,7 @@ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 
+ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	*pos = ++(*bucket);
+@@ -1088,7 +1088,7 @@ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ static void dl_seq_stop(struct seq_file *s, void *v)
+ 	__releases(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	if (!IS_ERR(bucket))
+@@ -1130,7 +1130,7 @@ static void dl_seq_print(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1145,7 +1145,7 @@ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1160,7 +1160,7 @@ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 			    struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1174,7 +1174,7 @@ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 
+ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = (unsigned int *)v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1188,7 +1188,7 @@ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1202,7 +1202,7 @@ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+diff --git a/net/tipc/diag.c b/net/tipc/diag.c
+index aaabb0b776dd..73137f4aeb68 100644
+--- a/net/tipc/diag.c
++++ b/net/tipc/diag.c
+@@ -84,7 +84,9 @@ static int tipc_sock_diag_handler_dump(struct sk_buff *skb,
+ 
+ 	if (h->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = tipc_dump_start,
+ 			.dump = tipc_diag_dump,
++			.done = tipc_dump_done,
+ 		};
+ 		netlink_dump_start(net->diag_nlsk, skb, h, &c);
+ 		return 0;
+diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c
+index 6ff2254088f6..99ee419210ba 100644
+--- a/net/tipc/netlink.c
++++ b/net/tipc/netlink.c
+@@ -167,7 +167,9 @@ static const struct genl_ops tipc_genl_v2_ops[] = {
+ 	},
+ 	{
+ 		.cmd	= TIPC_NL_SOCK_GET,
++		.start = tipc_dump_start,
+ 		.dumpit	= tipc_nl_sk_dump,
++		.done	= tipc_dump_done,
+ 		.policy = tipc_nl_policy,
+ 	},
+ 	{
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index ac8ca238c541..bdb4a9a5a83a 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,45 +3233,69 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct net *net = sock_net(skb->sk);
+-	struct tipc_net *tn = tipc_net(net);
+-	const struct bucket_table *tbl;
+-	u32 prev_portid = cb->args[1];
+-	u32 tbl_id = cb->args[0];
+-	struct rhash_head *pos;
++	struct rhashtable_iter *iter = (void *)cb->args[0];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+-	rcu_read_lock();
+-	tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+-	for (; tbl_id < tbl->size; tbl_id++) {
+-		rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) {
+-			spin_lock_bh(&tsk->sk.sk_lock.slock);
+-			if (prev_portid && prev_portid != tsk->portid) {
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
++	rhashtable_walk_start(iter);
++	while ((tsk = rhashtable_walk_next(iter)) != NULL) {
++		if (IS_ERR(tsk)) {
++			err = PTR_ERR(tsk);
++			if (err == -EAGAIN) {
++				err = 0;
+ 				continue;
+ 			}
++			break;
++		}
+ 
+-			err = skb_handler(skb, cb, tsk);
+-			if (err) {
+-				prev_portid = tsk->portid;
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
+-				goto out;
+-			}
+-
+-			prev_portid = 0;
+-			spin_unlock_bh(&tsk->sk.sk_lock.slock);
++		sock_hold(&tsk->sk);
++		rhashtable_walk_stop(iter);
++		lock_sock(&tsk->sk);
++		err = skb_handler(skb, cb, tsk);
++		if (err) {
++			release_sock(&tsk->sk);
++			sock_put(&tsk->sk);
++			goto out;
+ 		}
++		release_sock(&tsk->sk);
++		rhashtable_walk_start(iter);
++		sock_put(&tsk->sk);
+ 	}
++	rhashtable_walk_stop(iter);
+ out:
+-	rcu_read_unlock();
+-	cb->args[0] = tbl_id;
+-	cb->args[1] = prev_portid;
+-
+ 	return skb->len;
+ }
+ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
++int tipc_dump_start(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct net *net = sock_net(cb->skb->sk);
++	struct tipc_net *tn = tipc_net(net);
++
++	if (!iter) {
++		iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++		if (!iter)
++			return -ENOMEM;
++
++		cb->args[0] = (long)iter;
++	}
++
++	rhashtable_walk_enter(&tn->sk_rht, iter);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int tipc_dump_done(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *hti = (void *)cb->args[0];
++
++	rhashtable_walk_exit(hti);
++	kfree(hti);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_done);
++
+ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
+ 			   struct tipc_sock *tsk, u32 sk_filter_state,
+ 			   u64 (*tipc_diag_gen_cookie)(struct sock *sk))
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index aff9b2ae5a1f..d43032e26532 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -68,4 +68,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 		    int (*skb_handler)(struct sk_buff *skb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
++int tipc_dump_start(struct netlink_callback *cb);
++int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 80bc986c79e5..733ccf867972 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -667,13 +667,13 @@ static int nl80211_msg_put_wmm_rules(struct sk_buff *msg,
+ 			goto nla_put_failure;
+ 
+ 		if (nla_put_u16(msg, NL80211_WMMR_CW_MIN,
+-				rule->wmm_rule->client[j].cw_min) ||
++				rule->wmm_rule.client[j].cw_min) ||
+ 		    nla_put_u16(msg, NL80211_WMMR_CW_MAX,
+-				rule->wmm_rule->client[j].cw_max) ||
++				rule->wmm_rule.client[j].cw_max) ||
+ 		    nla_put_u8(msg, NL80211_WMMR_AIFSN,
+-			       rule->wmm_rule->client[j].aifsn) ||
+-		    nla_put_u8(msg, NL80211_WMMR_TXOP,
+-			       rule->wmm_rule->client[j].cot))
++			       rule->wmm_rule.client[j].aifsn) ||
++		    nla_put_u16(msg, NL80211_WMMR_TXOP,
++			        rule->wmm_rule.client[j].cot))
+ 			goto nla_put_failure;
+ 
+ 		nla_nest_end(msg, nl_wmm_rule);
+@@ -764,9 +764,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
+ 
+ 	if (large) {
+ 		const struct ieee80211_reg_rule *rule =
+-			freq_reg_info(wiphy, chan->center_freq);
++			freq_reg_info(wiphy, MHZ_TO_KHZ(chan->center_freq));
+ 
+-		if (!IS_ERR(rule) && rule->wmm_rule) {
++		if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) {
+ 			if (nl80211_msg_put_wmm_rules(msg, rule))
+ 				goto nla_put_failure;
+ 		}
+@@ -12099,6 +12099,7 @@ static int nl80211_update_ft_ies(struct sk_buff *skb, struct genl_info *info)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (!info->attrs[NL80211_ATTR_MDID] ||
++	    !info->attrs[NL80211_ATTR_IE] ||
+ 	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE]))
+ 		return -EINVAL;
+ 
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 4fc66a117b7d..2f702adf2912 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -425,36 +425,23 @@ static const struct ieee80211_regdomain *
+ reg_copy_regd(const struct ieee80211_regdomain *src_regd)
+ {
+ 	struct ieee80211_regdomain *regd;
+-	int size_of_regd, size_of_wmms;
++	int size_of_regd;
+ 	unsigned int i;
+-	struct ieee80211_wmm_rule *d_wmm, *s_wmm;
+ 
+ 	size_of_regd =
+ 		sizeof(struct ieee80211_regdomain) +
+ 		src_regd->n_reg_rules * sizeof(struct ieee80211_reg_rule);
+-	size_of_wmms = src_regd->n_wmm_rules *
+-		sizeof(struct ieee80211_wmm_rule);
+ 
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	memcpy(regd, src_regd, sizeof(struct ieee80211_regdomain));
+ 
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)src_regd + size_of_regd);
+-	memcpy(d_wmm, s_wmm, size_of_wmms);
+-
+-	for (i = 0; i < src_regd->n_reg_rules; i++) {
++	for (i = 0; i < src_regd->n_reg_rules; i++)
+ 		memcpy(&regd->reg_rules[i], &src_regd->reg_rules[i],
+ 		       sizeof(struct ieee80211_reg_rule));
+-		if (!src_regd->reg_rules[i].wmm_rule)
+-			continue;
+ 
+-		regd->reg_rules[i].wmm_rule = d_wmm +
+-			(src_regd->reg_rules[i].wmm_rule - s_wmm) /
+-			sizeof(struct ieee80211_wmm_rule);
+-	}
+ 	return regd;
+ }
+ 
+@@ -860,9 +847,10 @@ static bool valid_regdb(const u8 *data, unsigned int size)
+ 	return true;
+ }
+ 
+-static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
++static void set_wmm_rule(struct ieee80211_reg_rule *rrule,
+ 			 struct fwdb_wmm_rule *wmm)
+ {
++	struct ieee80211_wmm_rule *rule = &rrule->wmm_rule;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+@@ -876,11 +864,13 @@ static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
+ 		rule->ap[i].aifsn = wmm->ap[i].aifsn;
+ 		rule->ap[i].cot = 1000 * be16_to_cpu(wmm->ap[i].cot);
+ 	}
++
++	rrule->has_wmm = true;
+ }
+ 
+ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			     const struct fwdb_country *country, int freq,
+-			     u32 *dbptr, struct ieee80211_wmm_rule *rule)
++			     struct ieee80211_reg_rule *rule)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+@@ -901,8 +891,6 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			wmm_ptr = be16_to_cpu(rrule->wmm_ptr) << 2;
+ 			wmm = (void *)((u8 *)db + wmm_ptr);
+ 			set_wmm_rule(rule, wmm);
+-			if (dbptr)
+-				*dbptr = wmm_ptr;
+ 			return 0;
+ 		}
+ 	}
+@@ -910,8 +898,7 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 	return -ENODATA;
+ }
+ 
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+-			struct ieee80211_wmm_rule *rule)
++int reg_query_regdb_wmm(char *alpha2, int freq, struct ieee80211_reg_rule *rule)
+ {
+ 	const struct fwdb_header *hdr = regdb;
+ 	const struct fwdb_country *country;
+@@ -925,8 +912,7 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ 	country = &hdr->country[0];
+ 	while (country->coll_ptr) {
+ 		if (alpha2_equal(alpha2, country->alpha2))
+-			return __regdb_query_wmm(regdb, country, freq, dbptr,
+-						 rule);
++			return __regdb_query_wmm(regdb, country, freq, rule);
+ 
+ 		country++;
+ 	}
+@@ -935,32 +921,13 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ }
+ EXPORT_SYMBOL(reg_query_regdb_wmm);
+ 
+-struct wmm_ptrs {
+-	struct ieee80211_wmm_rule *rule;
+-	u32 ptr;
+-};
+-
+-static struct ieee80211_wmm_rule *find_wmm_ptr(struct wmm_ptrs *wmm_ptrs,
+-					       u32 wmm_ptr, int n_wmms)
+-{
+-	int i;
+-
+-	for (i = 0; i < n_wmms; i++) {
+-		if (wmm_ptrs[i].ptr == wmm_ptr)
+-			return wmm_ptrs[i].rule;
+-	}
+-	return NULL;
+-}
+-
+ static int regdb_query_country(const struct fwdb_header *db,
+ 			       const struct fwdb_country *country)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+ 	struct ieee80211_regdomain *regdom;
+-	struct ieee80211_regdomain *tmp_rd;
+-	unsigned int size_of_regd, i, n_wmms = 0;
+-	struct wmm_ptrs *wmm_ptrs;
++	unsigned int size_of_regd, i;
+ 
+ 	size_of_regd = sizeof(struct ieee80211_regdomain) +
+ 		coll->n_rules * sizeof(struct ieee80211_reg_rule);
+@@ -969,12 +936,6 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 	if (!regdom)
+ 		return -ENOMEM;
+ 
+-	wmm_ptrs = kcalloc(coll->n_rules, sizeof(*wmm_ptrs), GFP_KERNEL);
+-	if (!wmm_ptrs) {
+-		kfree(regdom);
+-		return -ENOMEM;
+-	}
+-
+ 	regdom->n_reg_rules = coll->n_rules;
+ 	regdom->alpha2[0] = country->alpha2[0];
+ 	regdom->alpha2[1] = country->alpha2[1];
+@@ -1013,37 +974,11 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 				1000 * be16_to_cpu(rule->cac_timeout);
+ 		if (rule->len >= offsetofend(struct fwdb_rule, wmm_ptr)) {
+ 			u32 wmm_ptr = be16_to_cpu(rule->wmm_ptr) << 2;
+-			struct ieee80211_wmm_rule *wmm_pos =
+-				find_wmm_ptr(wmm_ptrs, wmm_ptr, n_wmms);
+-			struct fwdb_wmm_rule *wmm;
+-			struct ieee80211_wmm_rule *wmm_rule;
+-
+-			if (wmm_pos) {
+-				rrule->wmm_rule = wmm_pos;
+-				continue;
+-			}
+-			wmm = (void *)((u8 *)db + wmm_ptr);
+-			tmp_rd = krealloc(regdom, size_of_regd + (n_wmms + 1) *
+-					  sizeof(struct ieee80211_wmm_rule),
+-					  GFP_KERNEL);
+-
+-			if (!tmp_rd) {
+-				kfree(regdom);
+-				kfree(wmm_ptrs);
+-				return -ENOMEM;
+-			}
+-			regdom = tmp_rd;
+-
+-			wmm_rule = (struct ieee80211_wmm_rule *)
+-				((u8 *)regdom + size_of_regd + n_wmms *
+-				sizeof(struct ieee80211_wmm_rule));
++			struct fwdb_wmm_rule *wmm = (void *)((u8 *)db + wmm_ptr);
+ 
+-			set_wmm_rule(wmm_rule, wmm);
+-			wmm_ptrs[n_wmms].ptr = wmm_ptr;
+-			wmm_ptrs[n_wmms++].rule = wmm_rule;
++			set_wmm_rule(rrule, wmm);
+ 		}
+ 	}
+-	kfree(wmm_ptrs);
+ 
+ 	return reg_schedule_apply(regdom);
+ }
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 3c654cd7ba56..908bf5b6d89e 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1374,7 +1374,7 @@ bool ieee80211_chandef_to_operating_class(struct cfg80211_chan_def *chandef,
+ 					  u8 *op_class)
+ {
+ 	u8 vht_opclass;
+-	u16 freq = chandef->center_freq1;
++	u32 freq = chandef->center_freq1;
+ 
+ 	if (freq >= 2412 && freq <= 2472) {
+ 		if (chandef->width > NL80211_CHAN_WIDTH_40)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d14b05f68d6d..08b6369f930b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6455,6 +6455,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
++	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c
+index d78aed86af09..8ff8cb1a11f4 100644
+--- a/tools/hv/hv_fcopy_daemon.c
++++ b/tools/hv/hv_fcopy_daemon.c
+@@ -234,6 +234,7 @@ int main(int argc, char *argv[])
+ 			break;
+ 
+ 		default:
++			error = HV_E_FAIL;
+ 			syslog(LOG_ERR, "Unknown operation: %d",
+ 				buffer.hdr.operation);
+ 
+diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat
+index 56c4b3f8a01b..7c92545931e3 100755
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -759,13 +759,20 @@ class DebugfsProvider(Provider):
+             if len(vms) == 0:
+                 self.do_read = False
+ 
+-            self.paths = filter(lambda x: "{}-".format(pid) in x, vms)
++            self.paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
+ 
+         else:
+             self.paths = []
+             self.do_read = True
+         self.reset()
+ 
++    def _verify_paths(self):
++        """Remove invalid paths"""
++        for path in self.paths:
++            if not os.path.exists(os.path.join(PATH_DEBUGFS_KVM, path)):
++                self.paths.remove(path)
++                continue
++
+     def read(self, reset=0, by_guest=0):
+         """Returns a dict with format:'file name / field -> current value'.
+ 
+@@ -780,6 +787,7 @@ class DebugfsProvider(Provider):
+         # If no debugfs filtering support is available, then don't read.
+         if not self.do_read:
+             return results
++        self._verify_paths()
+ 
+         paths = self.paths
+         if self._pid == 0:
+@@ -1162,6 +1170,9 @@ class Tui(object):
+ 
+             return sorted_items
+ 
++        if not self._is_running_guest(self.stats.pid_filter):
++            # leave final data on screen
++            return
+         row = 3
+         self.screen.move(row, 0)
+         self.screen.clrtobot()
+@@ -1219,10 +1230,10 @@ class Tui(object):
+         (x, term_width) = self.screen.getmaxyx()
+         row = 2
+         for line in text:
+-            start = (term_width - len(line)) / 2
++            start = (term_width - len(line)) // 2
+             self.screen.addstr(row, start, line)
+             row += 1
+-        self.screen.addstr(row + 1, (term_width - len(hint)) / 2, hint,
++        self.screen.addstr(row + 1, (term_width - len(hint)) // 2, hint,
+                            curses.A_STANDOUT)
+         self.screen.getkey()
+ 
+@@ -1319,6 +1330,12 @@ class Tui(object):
+                 msg = '"' + str(val) + '": Invalid value'
+         self._refresh_header()
+ 
++    def _is_running_guest(self, pid):
++        """Check if pid is still a running process."""
++        if not pid:
++            return True
++        return os.path.isdir(os.path.join('/proc/', str(pid)))
++
+     def _show_vm_selection_by_guest(self):
+         """Draws guest selection mask.
+ 
+@@ -1346,7 +1363,7 @@ class Tui(object):
+             if not guest or guest == '0':
+                 break
+             if guest.isdigit():
+-                if not os.path.isdir(os.path.join('/proc/', guest)):
++                if not self._is_running_guest(guest):
+                     msg = '"' + guest + '": Not a running process'
+                     continue
+                 pid = int(guest)
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 20e7d74d86cd..10a44e946f77 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -22,15 +22,16 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
+ 
+ #endif
+ 
+-#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ int arch__choose_best_symbol(struct symbol *syma,
+ 			     struct symbol *symb __maybe_unused)
+ {
+ 	char *sym = syma->name;
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ 	/* Skip over any initial dot */
+ 	if (*sym == '.')
+ 		sym++;
++#endif
+ 
+ 	/* Avoid "SyS" kernel syscall aliases */
+ 	if (strlen(sym) >= 3 && !strncmp(sym, "SyS", 3))
+@@ -41,6 +42,7 @@ int arch__choose_best_symbol(struct symbol *syma,
+ 	return SYMBOL_A;
+ }
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ /* Allow matching against dot variants */
+ int arch__compare_symbol_names(const char *namea, const char *nameb)
+ {
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index f91775b4bc3c..3b05219c3ed7 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -245,8 +245,14 @@ find_target:
+ 
+ indirect_call:
+ 	tok = strchr(endptr, '*');
+-	if (tok != NULL)
+-		ops->target.addr = strtoull(tok + 1, NULL, 16);
++	if (tok != NULL) {
++		endptr++;
++
++		/* Indirect call can use a non-rip register and offset: callq  *0x8(%rbx).
++		 * Do not parse such instruction.  */
++		if (strstr(endptr, "(%r") == NULL)
++			ops->target.addr = strtoull(endptr, NULL, 16);
++	}
+ 	goto find_target;
+ }
+ 
+@@ -275,7 +281,19 @@ bool ins__is_call(const struct ins *ins)
+ 	return ins->ops == &call_ops || ins->ops == &s390_call_ops;
+ }
+ 
+-static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *ops, struct map_symbol *ms)
++/*
++ * Prevents from matching commas in the comment section, e.g.:
++ * ffff200008446e70:       b.cs    ffff2000084470f4 <generic_exec_single+0x314>  // b.hs, b.nlast
++ */
++static inline const char *validate_comma(const char *c, struct ins_operands *ops)
++{
++	if (ops->raw_comment && c > ops->raw_comment)
++		return NULL;
++
++	return c;
++}
++
++static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms)
+ {
+ 	struct map *map = ms->map;
+ 	struct symbol *sym = ms->sym;
+@@ -284,6 +302,10 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 	};
+ 	const char *c = strchr(ops->raw, ',');
+ 	u64 start, end;
++
++	ops->raw_comment = strchr(ops->raw, arch->objdump.comment_char);
++	c = validate_comma(c, ops);
++
+ 	/*
+ 	 * Examples of lines to parse for the _cpp_lex_token@@Base
+ 	 * function:
+@@ -303,6 +325,7 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 		ops->target.addr = strtoull(c, NULL, 16);
+ 		if (!ops->target.addr) {
+ 			c = strchr(c, ',');
++			c = validate_comma(c, ops);
+ 			if (c++ != NULL)
+ 				ops->target.addr = strtoull(c, NULL, 16);
+ 		}
+@@ -360,9 +383,12 @@ static int jump__scnprintf(struct ins *ins, char *bf, size_t size,
+ 		return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.sym->name);
+ 
+ 	c = strchr(ops->raw, ',');
++	c = validate_comma(c, ops);
++
+ 	if (c != NULL) {
+ 		const char *c2 = strchr(c + 1, ',');
+ 
++		c2 = validate_comma(c2, ops);
+ 		/* check for 3-op insn */
+ 		if (c2 != NULL)
+ 			c = c2;
+diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
+index a4c0d91907e6..61e0c7fd5efd 100644
+--- a/tools/perf/util/annotate.h
++++ b/tools/perf/util/annotate.h
+@@ -21,6 +21,7 @@ struct ins {
+ 
+ struct ins_operands {
+ 	char	*raw;
++	char	*raw_comment;
+ 	struct {
+ 		char	*raw;
+ 		char	*name;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 0d5504751cc5..6324afba8fdd 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -251,8 +251,9 @@ struct perf_evsel *perf_evsel__new_idx(struct perf_event_attr *attr, int idx)
+ {
+ 	struct perf_evsel *evsel = zalloc(perf_evsel__object.size);
+ 
+-	if (evsel != NULL)
+-		perf_evsel__init(evsel, attr, idx);
++	if (!evsel)
++		return NULL;
++	perf_evsel__init(evsel, attr, idx);
+ 
+ 	if (perf_evsel__is_bpf_output(evsel)) {
+ 		evsel->attr.sample_type |= (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME |
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index c85d0d1a65ed..7b0ca7cbb7de 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -377,7 +377,7 @@ out:
+ 
+ static int record_saved_cmdline(void)
+ {
+-	unsigned int size;
++	unsigned long long size;
+ 	char *path;
+ 	struct stat st;
+ 	int ret, err = 0;
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index f8cc38afffa2..32a194e3e07a 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -46,6 +46,9 @@
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+ 
++# Some systems don't have a ping6 binary anymore
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ tests="
+ 	pmtu_vti6_exception		vti6: PMTU exceptions
+ 	pmtu_vti4_exception		vti4: PMTU exceptions
+@@ -274,7 +277,7 @@ test_pmtu_vti6_exception() {
+ 	mtu "${ns_b}" veth_b 4000
+ 	mtu "${ns_a}" vti6_a 5000
+ 	mtu "${ns_b}" vti6_b 5000
+-	${ns_a} ping6 -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
++	${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
+ 
+ 	# Check that exception was created
+ 	if [ "$(route_get_dst_pmtu_from_exception "${ns_a}" ${vti6_b_addr})" = "" ]; then
+@@ -334,7 +337,7 @@ test_pmtu_vti4_link_add_mtu() {
+ 	fail=0
+ 
+ 	min=68
+-	max=$((65528 - 20))
++	max=$((65535 - 20))
+ 	# Check invalid values first
+ 	for v in $((min - 1)) $((max + 1)); do
+ 		${ns_a} ip link add vti4_a mtu ${v} type vti local ${veth4_a_addr} remote ${veth4_b_addr} key 10 2>/dev/null
+diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
+index 615252331813..4bc071525bf7 100644
+--- a/tools/testing/selftests/rseq/param_test.c
++++ b/tools/testing/selftests/rseq/param_test.c
+@@ -56,15 +56,13 @@ unsigned int yield_mod_cnt, nr_abort;
+ 			printf(fmt, ## __VA_ARGS__);	\
+ 	} while (0)
+ 
+-#if defined(__x86_64__) || defined(__i386__)
++#ifdef __i386__
+ 
+ #define INJECT_ASM_REG	"eax"
+ 
+ #define RSEQ_INJECT_CLOBBER \
+ 	, INJECT_ASM_REG
+ 
+-#ifdef __i386__
+-
+ #define RSEQ_INJECT_ASM(n) \
+ 	"mov asm_loop_cnt_" #n ", %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+@@ -76,9 +74,16 @@ unsigned int yield_mod_cnt, nr_abort;
+ 
+ #elif defined(__x86_64__)
+ 
++#define INJECT_ASM_REG_P	"rax"
++#define INJECT_ASM_REG		"eax"
++
++#define RSEQ_INJECT_CLOBBER \
++	, INJECT_ASM_REG_P \
++	, INJECT_ASM_REG
++
+ #define RSEQ_INJECT_ASM(n) \
+-	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG "\n\t" \
+-	"mov (%%" INJECT_ASM_REG "), %%" INJECT_ASM_REG "\n\t" \
++	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG_P "\n\t" \
++	"mov (%%" INJECT_ASM_REG_P "), %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+ 	"jz 333f\n\t" \
+ 	"222:\n\t" \
+@@ -86,10 +91,6 @@ unsigned int yield_mod_cnt, nr_abort;
+ 	"jnz 222b\n\t" \
+ 	"333:\n\t"
+ 
+-#else
+-#error "Unsupported architecture"
+-#endif
+-
+ #elif defined(__ARMEL__)
+ 
+ #define RSEQ_INJECT_INPUT \
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+index f03763d81617..30f9b54bd666 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+@@ -312,6 +312,54 @@
+             "$TC actions flush action police"
+         ]
+     },
++    {
++        "id": "6aaf",
++        "name": "Add police actions with conform-exceed control pass/pipe [with numeric values]",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 0/3 index 1",
++        "expExitCode": "0",
++        "verifyCmd": "$TC actions get action police index 1",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action pass/pipe",
++        "matchCount": "1",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
++    {
++        "id": "29b1",
++        "name": "Add police actions with conform-exceed control <invalid>/drop",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 10/drop index 1",
++        "expExitCode": "255",
++        "verifyCmd": "$TC actions ls action police",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action ",
++        "matchCount": "0",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
+     {
+         "id": "c26f",
+         "name": "Add police action with invalid peakrate value",
+diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
+index cce853dca691..a4c31fb2887b 100644
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -156,12 +156,6 @@ static const char * const page_flag_names[] = {
+ };
+ 
+ 
+-static const char * const debugfs_known_mountpoints[] = {
+-	"/sys/kernel/debug",
+-	"/debug",
+-	0,
+-};
+-
+ /*
+  * data structures
+  */
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index f82c2eaa859d..334b16db0ebb 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -30,8 +30,8 @@ struct slabinfo {
+ 	int alias;
+ 	int refs;
+ 	int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu;
+-	int hwcache_align, object_size, objs_per_slab;
+-	int sanity_checks, slab_size, store_user, trace;
++	unsigned int hwcache_align, object_size, objs_per_slab;
++	unsigned int sanity_checks, slab_size, store_user, trace;
+ 	int order, poison, reclaim_account, red_zone;
+ 	unsigned long partial, objects, slabs, objects_partial, objects_total;
+ 	unsigned long alloc_fastpath, alloc_slowpath;


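[Editorial note: the `tools/kvm/kvm_stat/kvm_stat` hunks in the patch above are straight Python 2-to-3 fixes: `filter()` now returns a one-shot iterator instead of a list, and `/` on two ints yields a float, which curses' `addstr()` rejects as a coordinate, hence the change to `//`. A minimal sketch of both behaviors; the values below are illustrative, not taken from the patch:]

```python
# Hypothetical debugfs-style VM directory names, as kvm_stat would match them
vms = ["1234-vm_a", "5678-vm_b"]
pid = 1234

# Python 3: filter() is a lazy, single-use iterator; wrapping it in list()
# (as the patch does) materializes it so self.paths can be iterated repeatedly.
paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
print(paths)  # ['1234-vm_a']

# Python 3: "/" on ints is true division (float result); "//" keeps the
# integer floor division that curses screen coordinates require.
term_width, line_len = 80, 10
start = (term_width - line_len) // 2
print(start)  # 35
print(type((term_width - line_len) / 2).__name__)  # float
```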
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     16dd88333ee2f7578d8881621c2ad2e3803b7fb5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:28:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:21 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=16dd8833

Linux patch 4.18.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1001_linux-4.18.2.patch | 1679 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1683 insertions(+)

diff --git a/0000_README b/0000_README
index ad4a3ed..c801597 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-4.18.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.1
 
+Patch:  1001_linux-4.18.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-4.18.2.patch b/1001_linux-4.18.2.patch
new file mode 100644
index 0000000..1853255
--- /dev/null
+++ b/1001_linux-4.18.2.patch
@@ -0,0 +1,1679 @@
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index ddc029734b25..005d8842a503 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -35,7 +35,7 @@ binutils               2.20             ld -v
+ flex                   2.5.35           flex --version
+ bison                  2.0              bison --version
+ util-linux             2.10o            fdformat --version
+-module-init-tools      0.9.10           depmod -V
++kmod                   13               depmod -V
+ e2fsprogs              1.41.4           e2fsck -V
+ jfsutils               1.1.3            fsck.jfs -V
+ reiserfsprogs          3.6.3            reiserfsck -V
+@@ -156,12 +156,6 @@ is not build with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
+ reproduce the Oops with that option, then you can still decode that Oops
+ with ksymoops.
+ 
+-Module-Init-Tools
+------------------
+-
+-A new module loader is now in the kernel that requires ``module-init-tools``
+-to use.  It is backward compatible with the 2.4.x series kernels.
+-
+ Mkinitrd
+ --------
+ 
+@@ -371,16 +365,17 @@ Util-linux
+ 
+ - <https://www.kernel.org/pub/linux/utils/util-linux/>
+ 
++Kmod
++----
++
++- <https://www.kernel.org/pub/linux/utils/kernel/kmod/>
++- <https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git>
++
+ Ksymoops
+ --------
+ 
+ - <https://www.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+ 
+-Module-Init-Tools
+------------------
+-
+-- <https://www.kernel.org/pub/linux/utils/kernel/module-init-tools/>
+-
+ Mkinitrd
+ --------
+ 
+diff --git a/Makefile b/Makefile
+index 5edf963148e8..fd409a0fd4e1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 493ff75670ff..8ae5d7ae4af3 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -977,12 +977,12 @@ int pmd_clear_huge(pmd_t *pmdp)
+ 	return 1;
+ }
+ 
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return pud_none(*pud);
+ }
+ 
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return pmd_none(*pmd);
+ }
+diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+index 16c4ccb1f154..d2364c55bbde 100644
+--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
++++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+@@ -265,7 +265,7 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
+ 	vpinsrd	$1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
+-	vmovd   _args_digest(state , idx, 4) , %xmm0
++	vmovd	_args_digest+4*32(state, idx, 4), %xmm1
+ 	vpinsrd	$1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
+diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
+index de27615c51ea..0c662cb6a723 100644
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -95,6 +95,11 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
+ 	} else {
+ 		for_each_cpu(cpu, cpus) {
+ 			vcpu = hv_cpu_number_to_vp_number(cpu);
++			if (vcpu == VP_INVAL) {
++				local_irq_restore(flags);
++				goto do_native;
++			}
++
+ 			if (vcpu >= 64)
+ 				goto do_native;
+ 
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index 5cdcdbd4d892..89789e8c80f6 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -3,6 +3,7 @@
+ #define _ASM_X86_I8259_H
+ 
+ #include <linux/delay.h>
++#include <asm/io.h>
+ 
+ extern unsigned int cached_irq_mask;
+ 
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index d492752f79e1..391f358ebb4c 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -394,10 +394,10 @@ extern int uv_hub_info_version(void)
+ EXPORT_SYMBOL(uv_hub_info_version);
+ 
+ /* Default UV memory block size is 2GB */
+-static unsigned long mem_block_size = (2UL << 30);
++static unsigned long mem_block_size __initdata = (2UL << 30);
+ 
+ /* Kernel parameter to specify UV mem block size */
+-static int parse_mem_block_size(char *ptr)
++static int __init parse_mem_block_size(char *ptr)
+ {
+ 	unsigned long size = memparse(ptr, NULL);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index c4f0ae49a53d..664f161f96ff 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9eda6f730ec4..b41b72bd8bb8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -905,7 +905,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 	apply_forced_caps(c);
+ }
+ 
+-static void get_cpu_address_sizes(struct cpuinfo_x86 *c)
++void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ {
+ 	u32 eax, ebx, ecx, edx;
+ 
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index e59c0ea82a33..7b229afa0a37 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -46,6 +46,7 @@ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+ 			    *const __x86_cpu_dev_end[];
+ 
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
++extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+ extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+ extern u32 get_scattered_cpuid_leaf(unsigned int level,
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 7bb6f65c79de..29505724202a 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1784,6 +1784,12 @@ int set_memory_nonglobal(unsigned long addr, int numpages)
+ 				      __pgprot(_PAGE_GLOBAL), 0);
+ }
+ 
++int set_memory_global(unsigned long addr, int numpages)
++{
++	return change_page_attr_set(&addr, numpages,
++				    __pgprot(_PAGE_GLOBAL), 0);
++}
++
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+ 	struct cpa_data cpa;
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 47b5951e592b..e3deefb891da 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -719,28 +719,50 @@ int pmd_clear_huge(pmd_t *pmd)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_X86_64
+ /**
+  * pud_free_pmd_page - Clear pud entry and free pmd page.
+  * @pud: Pointer to a PUD.
++ * @addr: Virtual address associated with pud.
+  *
+- * Context: The pud range has been unmaped and TLB purged.
++ * Context: The pud range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ *
++ * NOTE: Callers must allow a single page allocation.
+  */
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+-	pmd_t *pmd;
++	pmd_t *pmd, *pmd_sv;
++	pte_t *pte;
+ 	int i;
+ 
+ 	if (pud_none(*pud))
+ 		return 1;
+ 
+ 	pmd = (pmd_t *)pud_page_vaddr(*pud);
++	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
++	if (!pmd_sv)
++		return 0;
+ 
+-	for (i = 0; i < PTRS_PER_PMD; i++)
+-		if (!pmd_free_pte_page(&pmd[i]))
+-			return 0;
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		pmd_sv[i] = pmd[i];
++		if (!pmd_none(pmd[i]))
++			pmd_clear(&pmd[i]);
++	}
+ 
+ 	pud_clear(pud);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		if (!pmd_none(pmd_sv[i])) {
++			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
++			free_page((unsigned long)pte);
++		}
++	}
++
++	free_page((unsigned long)pmd_sv);
+ 	free_page((unsigned long)pmd);
+ 
+ 	return 1;
+@@ -749,11 +771,12 @@ int pud_free_pmd_page(pud_t *pud)
+ /**
+  * pmd_free_pte_page - Clear pmd entry and free pte page.
+  * @pmd: Pointer to a PMD.
++ * @addr: Virtual address associated with pmd.
+  *
+- * Context: The pmd range has been unmaped and TLB purged.
++ * Context: The pmd range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+  */
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	pte_t *pte;
+ 
+@@ -762,8 +785,30 @@ int pmd_free_pte_page(pmd_t *pmd)
+ 
+ 	pte = (pte_t *)pmd_page_vaddr(*pmd);
+ 	pmd_clear(pmd);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
+ 	free_page((unsigned long)pte);
+ 
+ 	return 1;
+ }
++
++#else /* !CONFIG_X86_64 */
++
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
++{
++	return pud_none(*pud);
++}
++
++/*
++ * Disable free page handling on x86-PAE. This assures that ioremap()
++ * does not update sync'd pmd entries. See vmalloc_sync_one().
++ */
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
++{
++	return pmd_none(*pmd);
++}
++
++#endif /* CONFIG_X86_64 */
+ #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index fb752d9a3ce9..946455e9cfef 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -435,6 +435,13 @@ static inline bool pti_kernel_image_global_ok(void)
+ 	return true;
+ }
+ 
++/*
++ * This is the only user for these and it is not arch-generic
++ * like the other set_memory.h functions.  Just extern them.
++ */
++extern int set_memory_nonglobal(unsigned long addr, int numpages);
++extern int set_memory_global(unsigned long addr, int numpages);
++
+ /*
+  * For some configurations, map all of kernel text into the user page
+  * tables.  This reduces TLB misses, especially on non-PCID systems.
+@@ -447,7 +454,8 @@ void pti_clone_kernel_text(void)
+ 	 * clone the areas past rodata, they might contain secrets.
+ 	 */
+ 	unsigned long start = PFN_ALIGN(_text);
+-	unsigned long end = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_clone  = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table);
+ 
+ 	if (!pti_kernel_image_global_ok())
+ 		return;
+@@ -459,14 +467,18 @@ void pti_clone_kernel_text(void)
+ 	 * pti_set_kernel_image_nonglobal() did to clear the
+ 	 * global bit.
+ 	 */
+-	pti_clone_pmds(start, end, _PAGE_RW);
++	pti_clone_pmds(start, end_clone, _PAGE_RW);
++
++	/*
++	 * pti_clone_pmds() will set the global bit in any PMDs
++	 * that it clones, but we also need to get any PTEs in
++	 * the last level for areas that are not huge-page-aligned.
++	 */
++
++	/* Set the global bit for normal non-__init kernel text: */
++	set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
+ }
+ 
+-/*
+- * This is the only user for it and it is not arch-generic like
+- * the other set_memory.h functions.  Just extern it.
+- */
+-extern int set_memory_nonglobal(unsigned long addr, int numpages);
+ void pti_set_kernel_image_nonglobal(void)
+ {
+ 	/*
+@@ -478,9 +490,11 @@ void pti_set_kernel_image_nonglobal(void)
+ 	unsigned long start = PFN_ALIGN(_text);
+ 	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+ 
+-	if (pti_kernel_image_global_ok())
+-		return;
+-
++	/*
++	 * This clears _PAGE_GLOBAL from the entire kernel image.
++	 * pti_clone_kernel_text() map put _PAGE_GLOBAL back for
++	 * areas that are mapped to userspace.
++	 */
+ 	set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
+ }
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 439a94bf89ad..c5e3f2acc7f0 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1259,6 +1259,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	get_cpu_cap(&boot_cpu_data);
+ 	x86_configure_nx();
+ 
++	/* Determine virtual and physical address sizes */
++	get_cpu_address_sizes(&boot_cpu_data);
++
+ 	/* Let's presume PV guests always boot on vCPU with id 0. */
+ 	per_cpu(xen_vcpu_id, 0) = 0;
+ 
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index d880a4897159..4ee7c041bb82 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -71,11 +71,9 @@ static inline u8 *ablkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+-						unsigned int bsize)
++static inline void ablkcipher_done_slow(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+-	unsigned int n = bsize;
+-
+ 	for (;;) {
+ 		unsigned int len_this_page = scatterwalk_pagelen(&walk->out);
+ 
+@@ -87,17 +85,13 @@ static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+ 		n -= len_this_page;
+ 		scatterwalk_start(&walk->out, sg_next(walk->out.sg));
+ 	}
+-
+-	return bsize;
+ }
+ 
+-static inline unsigned int ablkcipher_done_fast(struct ablkcipher_walk *walk,
+-						unsigned int n)
++static inline void ablkcipher_done_fast(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ static int ablkcipher_walk_next(struct ablkcipher_request *req,
+@@ -107,39 +101,40 @@ int ablkcipher_walk_done(struct ablkcipher_request *req,
+ 			 struct ablkcipher_walk *walk, int err)
+ {
+ 	struct crypto_tfm *tfm = req->base.tfm;
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW)))
+-			n = ablkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = ablkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW))) {
++		ablkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		ablkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
+-
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(req->base.flags);
+ 		return ablkcipher_walk_next(req, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != req->info)
+ 		memcpy(req->info, walk->iv, tfm->crt_ablkcipher.ivsize);
+ 	kfree(walk->iv_buffer);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(ablkcipher_walk_done);
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 01c0d4aa2563..77b5fa293f66 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -70,19 +70,18 @@ static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int blkcipher_done_slow(struct blkcipher_walk *walk,
+-					       unsigned int bsize)
++static inline void blkcipher_done_slow(struct blkcipher_walk *walk,
++				       unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+ 	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
+ 	addr = blkcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
+-	return bsize;
+ }
+ 
+-static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+-					       unsigned int n)
++static inline void blkcipher_done_fast(struct blkcipher_walk *walk,
++				       unsigned int n)
+ {
+ 	if (walk->flags & BLKCIPHER_WALK_COPY) {
+ 		blkcipher_map_dst(walk);
+@@ -96,49 +95,48 @@ static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+ 
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ int blkcipher_walk_done(struct blkcipher_desc *desc,
+ 			struct blkcipher_walk *walk, int err)
+ {
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW)))
+-			n = blkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = blkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW))) {
++		blkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		blkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(desc->flags);
+ 		return blkcipher_walk_next(desc, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != desc->info)
+ 		memcpy(desc->info, walk->iv, walk->ivsize);
+ 	if (walk->buffer != walk->page)
+ 		kfree(walk->buffer);
+ 	if (walk->page)
+ 		free_page((unsigned long)walk->page);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(blkcipher_walk_done);
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 0fe2a2923ad0..5dc8407bdaa9 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -95,7 +95,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
++static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+@@ -103,23 +103,24 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ 	addr = skcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize,
+ 			       (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+-	return 0;
+ }
+ 
+ int skcipher_walk_done(struct skcipher_walk *walk, int err)
+ {
+-	unsigned int n = walk->nbytes - err;
+-	unsigned int nbytes;
+-
+-	nbytes = walk->total - n;
+-
+-	if (unlikely(err < 0)) {
+-		nbytes = 0;
+-		n = 0;
+-	} else if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
+-					   SKCIPHER_WALK_SLOW |
+-					   SKCIPHER_WALK_COPY |
+-					   SKCIPHER_WALK_DIFF)))) {
++	unsigned int n; /* bytes processed */
++	bool more;
++
++	if (unlikely(err < 0))
++		goto finish;
++
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
++
++	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
++				    SKCIPHER_WALK_SLOW |
++				    SKCIPHER_WALK_COPY |
++				    SKCIPHER_WALK_DIFF)))) {
+ unmap_src:
+ 		skcipher_unmap_src(walk);
+ 	} else if (walk->flags & SKCIPHER_WALK_DIFF) {
+@@ -131,28 +132,28 @@ unmap_src:
+ 		skcipher_unmap_dst(walk);
+ 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+ 		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
+ 			err = -EINVAL;
+-			nbytes = 0;
+-		} else
+-			n = skcipher_done_slow(walk, n);
++			goto finish;
++		}
++		skcipher_done_slow(walk, n);
++		goto already_advanced;
+ 	}
+ 
+-	if (err > 0)
+-		err = 0;
+-
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++already_advanced:
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+ 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+ 		return skcipher_walk_next(walk);
+ 	}
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 
+ 	/* Short-circuit for the common/fast path. */
+ 	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+@@ -399,7 +400,7 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
+ 	unsigned size;
+ 	u8 *iv;
+ 
+-	aligned_bs = ALIGN(bs, alignmask);
++	aligned_bs = ALIGN(bs, alignmask + 1);
+ 
+ 	/* Minimum size to align buffer by alignmask. */
+ 	size = alignmask & ~a;
+diff --git a/crypto/vmac.c b/crypto/vmac.c
+index df76a816cfb2..bb2fc787d615 100644
+--- a/crypto/vmac.c
++++ b/crypto/vmac.c
+@@ -1,6 +1,10 @@
+ /*
+- * Modified to interface to the Linux kernel
++ * VMAC: Message Authentication Code using Universal Hashing
++ *
++ * Reference: https://tools.ietf.org/html/draft-krovetz-vmac-01
++ *
+  * Copyright (c) 2009, Intel Corporation.
++ * Copyright (c) 2018, Google Inc.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms and conditions of the GNU General Public License,
+@@ -16,14 +20,15 @@
+  * Place - Suite 330, Boston, MA 02111-1307 USA.
+  */
+ 
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
++/*
++ * Derived from:
++ *	VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
++ *	This implementation is hereby placed in the public domain.
++ *	The authors offer no warranty. Use at your own risk.
++ *	Last modified: 17 APR 08, 1700 PDT
++ */
+ 
++#include <asm/unaligned.h>
+ #include <linux/init.h>
+ #include <linux/types.h>
+ #include <linux/crypto.h>
+@@ -31,9 +36,35 @@
+ #include <linux/scatterlist.h>
+ #include <asm/byteorder.h>
+ #include <crypto/scatterwalk.h>
+-#include <crypto/vmac.h>
+ #include <crypto/internal/hash.h>
+ 
++/*
++ * User definable settings.
++ */
++#define VMAC_TAG_LEN	64
++#define VMAC_KEY_SIZE	128 /* Must be 128, 192 or 256 */
++#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
++#define VMAC_NHBYTES	128 /* Must be 2^i for some 3 < i < 13; standard = 128 */
++
++/* per-transform (per-key) context */
++struct vmac_tfm_ctx {
++	struct crypto_cipher *cipher;
++	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
++	u64 polykey[2*VMAC_TAG_LEN/64];
++	u64 l3key[2*VMAC_TAG_LEN/64];
++};
++
++/* per-request context */
++struct vmac_desc_ctx {
++	union {
++		u8 partial[VMAC_NHBYTES];	/* partial block */
++		__le64 partial_words[VMAC_NHBYTES / 8];
++	};
++	unsigned int partial_size;	/* size of the partial block */
++	bool first_block_processed;
++	u64 polytmp[2*VMAC_TAG_LEN/64];	/* running total of L2-hash */
++};
++
+ /*
+  * Constants and masks
+  */
+@@ -318,13 +349,6 @@ static void poly_step_func(u64 *ahi, u64 *alo,
+ 	} while (0)
+ #endif
+ 
+-static void vhash_abort(struct vmac_ctx *ctx)
+-{
+-	ctx->polytmp[0] = ctx->polykey[0] ;
+-	ctx->polytmp[1] = ctx->polykey[1] ;
+-	ctx->first_block_processed = 0;
+-}
+-
+ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ {
+ 	u64 rh, rl, t, z = 0;
+@@ -364,280 +388,209 @@ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ 	return rl;
+ }
+ 
+-static void vhash_update(const unsigned char *m,
+-			unsigned int mbytes, /* Pos multiple of VMAC_NHBYTES */
+-			struct vmac_ctx *ctx)
++/* L1 and L2-hash one or more VMAC_NHBYTES-byte blocks */
++static void vhash_blocks(const struct vmac_tfm_ctx *tctx,
++			 struct vmac_desc_ctx *dctx,
++			 const __le64 *mptr, unsigned int blocks)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	if (!mbytes)
+-		return;
+-
+-	BUG_ON(mbytes % VMAC_NHBYTES);
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;  /* Must be non-zero */
+-
+-	ch = ctx->polytmp[0];
+-	cl = ctx->polytmp[1];
+-
+-	if (!ctx->first_block_processed) {
+-		ctx->first_block_processed = 1;
++	const u64 *kptr = tctx->nhkey;
++	const u64 pkh = tctx->polykey[0];
++	const u64 pkl = tctx->polykey[1];
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++	u64 rh, rl;
++
++	if (!dctx->first_block_processed) {
++		dctx->first_block_processed = true;
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		ADD128(ch, cl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
++		blocks--;
+ 	}
+ 
+-	while (i--) {
++	while (blocks--) {
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		poly_step(ch, cl, pkh, pkl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+ 	}
+ 
+-	ctx->polytmp[0] = ch;
+-	ctx->polytmp[1] = cl;
++	dctx->polytmp[0] = ch;
++	dctx->polytmp[1] = cl;
+ }
+ 
+-static u64 vhash(unsigned char m[], unsigned int mbytes,
+-			u64 *tagl, struct vmac_ctx *ctx)
++static int vmac_setkey(struct crypto_shash *tfm,
++		       const u8 *key, unsigned int keylen)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i, remaining;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;
+-	remaining = mbytes % VMAC_NHBYTES;
+-
+-	if (ctx->first_block_processed) {
+-		ch = ctx->polytmp[0];
+-		cl = ctx->polytmp[1];
+-	} else if (i) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
+-	} else if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		goto do_l3;
+-	} else {/* Empty String */
+-		ch = pkh; cl = pkl;
+-		goto do_l3;
+-	}
+-
+-	while (i--) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-	}
+-	if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-	}
+-
+-do_l3:
+-	vhash_abort(ctx);
+-	remaining *= 8;
+-	return l3hash(ch, cl, ctx->l3key[0], ctx->l3key[1], remaining);
+-}
++	struct vmac_tfm_ctx *tctx = crypto_shash_ctx(tfm);
++	__be64 out[2];
++	u8 in[16] = { 0 };
++	unsigned int i;
++	int err;
+ 
+-static u64 vmac(unsigned char m[], unsigned int mbytes,
+-			const unsigned char n[16], u64 *tagl,
+-			struct vmac_ctx_t *ctx)
+-{
+-	u64 *in_n, *out_p;
+-	u64 p, h;
+-	int i;
+-
+-	in_n = ctx->__vmac_ctx.cached_nonce;
+-	out_p = ctx->__vmac_ctx.cached_aes;
+-
+-	i = n[15] & 1;
+-	if ((*(u64 *)(n+8) != in_n[1]) || (*(u64 *)(n) != in_n[0])) {
+-		in_n[0] = *(u64 *)(n);
+-		in_n[1] = *(u64 *)(n+8);
+-		((unsigned char *)in_n)[15] &= 0xFE;
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out_p, (unsigned char *)in_n);
+-
+-		((unsigned char *)in_n)[15] |= (unsigned char)(1-i);
++	if (keylen != VMAC_KEY_LEN) {
++		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
++		return -EINVAL;
+ 	}
+-	p = be64_to_cpup(out_p + i);
+-	h = vhash(m, mbytes, (u64 *)0, &ctx->__vmac_ctx);
+-	return le64_to_cpu(p + h);
+-}
+ 
+-static int vmac_set_key(unsigned char user_key[], struct vmac_ctx_t *ctx)
+-{
+-	u64 in[2] = {0}, out[2];
+-	unsigned i;
+-	int err = 0;
+-
+-	err = crypto_cipher_setkey(ctx->child, user_key, VMAC_KEY_LEN);
++	err = crypto_cipher_setkey(tctx->cipher, key, keylen);
+ 	if (err)
+ 		return err;
+ 
+ 	/* Fill nh key */
+-	((unsigned char *)in)[0] = 0x80;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.nhkey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.nhkey[i] = be64_to_cpup(out);
+-		ctx->__vmac_ctx.nhkey[i+1] = be64_to_cpup(out+1);
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0x80;
++	for (i = 0; i < ARRAY_SIZE(tctx->nhkey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->nhkey[i] = be64_to_cpu(out[0]);
++		tctx->nhkey[i+1] = be64_to_cpu(out[1]);
++		in[15]++;
+ 	}
+ 
+ 	/* Fill poly key */
+-	((unsigned char *)in)[0] = 0xC0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.polykey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.polytmp[i] =
+-			ctx->__vmac_ctx.polykey[i] =
+-				be64_to_cpup(out) & mpoly;
+-		ctx->__vmac_ctx.polytmp[i+1] =
+-			ctx->__vmac_ctx.polykey[i+1] =
+-				be64_to_cpup(out+1) & mpoly;
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0xC0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->polykey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->polykey[i] = be64_to_cpu(out[0]) & mpoly;
++		tctx->polykey[i+1] = be64_to_cpu(out[1]) & mpoly;
++		in[15]++;
+ 	}
+ 
+ 	/* Fill ip key */
+-	((unsigned char *)in)[0] = 0xE0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.l3key)/8; i += 2) {
++	in[0] = 0xE0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->l3key); i += 2) {
+ 		do {
+-			crypto_cipher_encrypt_one(ctx->child,
+-				(unsigned char *)out, (unsigned char *)in);
+-			ctx->__vmac_ctx.l3key[i] = be64_to_cpup(out);
+-			ctx->__vmac_ctx.l3key[i+1] = be64_to_cpup(out+1);
+-			((unsigned char *)in)[15] += 1;
+-		} while (ctx->__vmac_ctx.l3key[i] >= p64
+-			|| ctx->__vmac_ctx.l3key[i+1] >= p64);
++			crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++			tctx->l3key[i] = be64_to_cpu(out[0]);
++			tctx->l3key[i+1] = be64_to_cpu(out[1]);
++			in[15]++;
++		} while (tctx->l3key[i] >= p64 || tctx->l3key[i+1] >= p64);
+ 	}
+ 
+-	/* Invalidate nonce/aes cache and reset other elements */
+-	ctx->__vmac_ctx.cached_nonce[0] = (u64)-1; /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.cached_nonce[1] = (u64)0;  /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.first_block_processed = 0;
+-
+-	return err;
++	return 0;
+ }
+ 
+-static int vmac_setkey(struct crypto_shash *parent,
+-		const u8 *key, unsigned int keylen)
++static int vmac_init(struct shash_desc *desc)
+ {
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (keylen != VMAC_KEY_LEN) {
+-		crypto_shash_set_flags(parent, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-		return -EINVAL;
+-	}
+-
+-	return vmac_set_key((u8 *)key, ctx);
+-}
+-
+-static int vmac_init(struct shash_desc *pdesc)
+-{
++	dctx->partial_size = 0;
++	dctx->first_block_processed = false;
++	memcpy(dctx->polytmp, tctx->polykey, sizeof(dctx->polytmp));
+ 	return 0;
+ }
+ 
+-static int vmac_update(struct shash_desc *pdesc, const u8 *p,
+-		unsigned int len)
++static int vmac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	int expand;
+-	int min;
+-
+-	expand = VMAC_NHBYTES - ctx->partial_size > 0 ?
+-			VMAC_NHBYTES - ctx->partial_size : 0;
+-
+-	min = len < expand ? len : expand;
+-
+-	memcpy(ctx->partial + ctx->partial_size, p, min);
+-	ctx->partial_size += min;
+-
+-	if (len < expand)
+-		return 0;
+-
+-	vhash_update(ctx->partial, VMAC_NHBYTES, &ctx->__vmac_ctx);
+-	ctx->partial_size = 0;
+-
+-	len -= expand;
+-	p += expand;
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	unsigned int n;
++
++	if (dctx->partial_size) {
++		n = min(len, VMAC_NHBYTES - dctx->partial_size);
++		memcpy(&dctx->partial[dctx->partial_size], p, n);
++		dctx->partial_size += n;
++		p += n;
++		len -= n;
++		if (dctx->partial_size == VMAC_NHBYTES) {
++			vhash_blocks(tctx, dctx, dctx->partial_words, 1);
++			dctx->partial_size = 0;
++		}
++	}
+ 
+-	if (len % VMAC_NHBYTES) {
+-		memcpy(ctx->partial, p + len - (len % VMAC_NHBYTES),
+-			len % VMAC_NHBYTES);
+-		ctx->partial_size = len % VMAC_NHBYTES;
++	if (len >= VMAC_NHBYTES) {
++		n = round_down(len, VMAC_NHBYTES);
++		/* TODO: 'p' may be misaligned here */
++		vhash_blocks(tctx, dctx, (const __le64 *)p, n / VMAC_NHBYTES);
++		p += n;
++		len -= n;
+ 	}
+ 
+-	vhash_update(p, len - len % VMAC_NHBYTES, &ctx->__vmac_ctx);
++	if (len) {
++		memcpy(dctx->partial, p, len);
++		dctx->partial_size = len;
++	}
+ 
+ 	return 0;
+ }
+ 
+-static int vmac_final(struct shash_desc *pdesc, u8 *out)
++static u64 vhash_final(const struct vmac_tfm_ctx *tctx,
++		       struct vmac_desc_ctx *dctx)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	vmac_t mac;
+-	u8 nonce[16] = {};
+-
+-	/* vmac() ends up accessing outside the array bounds that
+-	 * we specify.  In appears to access up to the next 2-word
+-	 * boundary.  We'll just be uber cautious and zero the
+-	 * unwritten bytes in the buffer.
+-	 */
+-	if (ctx->partial_size) {
+-		memset(ctx->partial + ctx->partial_size, 0,
+-			VMAC_NHBYTES - ctx->partial_size);
++	unsigned int partial = dctx->partial_size;
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++
++	/* L1 and L2-hash the final block if needed */
++	if (partial) {
++		/* Zero-pad to next 128-bit boundary */
++		unsigned int n = round_up(partial, 16);
++		u64 rh, rl;
++
++		memset(&dctx->partial[partial], 0, n - partial);
++		nh_16(dctx->partial_words, tctx->nhkey, n / 8, rh, rl);
++		rh &= m62;
++		if (dctx->first_block_processed)
++			poly_step(ch, cl, tctx->polykey[0], tctx->polykey[1],
++				  rh, rl);
++		else
++			ADD128(ch, cl, rh, rl);
+ 	}
+-	mac = vmac(ctx->partial, ctx->partial_size, nonce, NULL, ctx);
+-	memcpy(out, &mac, sizeof(vmac_t));
+-	memzero_explicit(&mac, sizeof(vmac_t));
+-	memset(&ctx->__vmac_ctx, 0, sizeof(struct vmac_ctx));
+-	ctx->partial_size = 0;
++
++	/* L3-hash the 128-bit output of L2-hash */
++	return l3hash(ch, cl, tctx->l3key[0], tctx->l3key[1], partial * 8);
++}
++
++static int vmac_final(struct shash_desc *desc, u8 *out)
++{
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	static const u8 nonce[16] = {}; /* TODO: this is insecure */
++	union {
++		u8 bytes[16];
++		__be64 pads[2];
++	} block;
++	int index;
++	u64 hash, pad;
++
++	/* Finish calculating the VHASH of the message */
++	hash = vhash_final(tctx, dctx);
++
++	/* Generate pseudorandom pad by encrypting the nonce */
++	memcpy(&block, nonce, 16);
++	index = block.bytes[15] & 1;
++	block.bytes[15] &= ~1;
++	crypto_cipher_encrypt_one(tctx->cipher, block.bytes, block.bytes);
++	pad = be64_to_cpu(block.pads[index]);
++
++	/* The VMAC is the sum of VHASH and the pseudorandom pad */
++	put_unaligned_le64(hash + pad, out);
+ 	return 0;
+ }
+ 
+ static int vmac_init_tfm(struct crypto_tfm *tfm)
+ {
+-	struct crypto_cipher *cipher;
+-	struct crypto_instance *inst = (void *)tfm->__crt_alg;
++	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ 	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++	struct crypto_cipher *cipher;
+ 
+ 	cipher = crypto_spawn_cipher(spawn);
+ 	if (IS_ERR(cipher))
+ 		return PTR_ERR(cipher);
+ 
+-	ctx->child = cipher;
++	tctx->cipher = cipher;
+ 	return 0;
+ }
+ 
+ static void vmac_exit_tfm(struct crypto_tfm *tfm)
+ {
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
+-	crypto_free_cipher(ctx->child);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++
++	crypto_free_cipher(tctx->cipher);
+ }
+ 
+ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+@@ -655,6 +608,10 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	if (IS_ERR(alg))
+ 		return PTR_ERR(alg);
+ 
++	err = -EINVAL;
++	if (alg->cra_blocksize != 16)
++		goto out_put_alg;
++
+ 	inst = shash_alloc_instance("vmac", alg);
+ 	err = PTR_ERR(inst);
+ 	if (IS_ERR(inst))
+@@ -670,11 +627,12 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
+-	inst->alg.digestsize = sizeof(vmac_t);
+-	inst->alg.base.cra_ctxsize = sizeof(struct vmac_ctx_t);
++	inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
+ 	inst->alg.base.cra_init = vmac_init_tfm;
+ 	inst->alg.base.cra_exit = vmac_exit_tfm;
+ 
++	inst->alg.descsize = sizeof(struct vmac_desc_ctx);
++	inst->alg.digestsize = VMAC_TAG_LEN / 8;
+ 	inst->alg.init = vmac_init;
+ 	inst->alg.update = vmac_update;
+ 	inst->alg.final = vmac_final;
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index ff478d826d7d..051b8c6bae64 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -84,8 +84,6 @@ done:
+ 
+ static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
+ {
+-	psp->sev_int_rcvd = 0;
+-
+ 	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+ }
+@@ -148,6 +146,8 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+ 	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+ 
++	psp->sev_int_rcvd = 0;
++
+ 	reg = cmd;
+ 	reg <<= PSP_CMDRESP_CMD_SHIFT;
+ 	reg |= PSP_CMDRESP_IOC;
+@@ -856,6 +856,9 @@ void psp_dev_destroy(struct sp_device *sp)
+ {
+ 	struct psp_device *psp = sp->psp_data;
+ 
++	if (!psp)
++		return;
++
+ 	if (psp->sev_misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index d2810c183b73..958ced3ca485 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -593,34 +593,82 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
+ 	}
+ }
+ 
++/*
++ * Update a CTR-AES 128 bit counter
++ */
++static void cc_update_ctr(u8 *ctr, unsigned int increment)
++{
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++	    IS_ALIGNED((unsigned long)ctr, 8)) {
++
++		__be64 *high_be = (__be64 *)ctr;
++		__be64 *low_be = high_be + 1;
++		u64 orig_low = __be64_to_cpu(*low_be);
++		u64 new_low = orig_low + (u64)increment;
++
++		*low_be = __cpu_to_be64(new_low);
++
++		if (new_low < orig_low)
++			*high_be = __cpu_to_be64(__be64_to_cpu(*high_be) + 1);
++	} else {
++		u8 *pos = (ctr + AES_BLOCK_SIZE);
++		u8 val;
++		unsigned int size;
++
++		for (; increment; increment--)
++			for (size = AES_BLOCK_SIZE; size; size--) {
++				val = *--pos + 1;
++				*pos = val;
++				if (val)
++					break;
++			}
++	}
++}
++
+ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ {
+ 	struct skcipher_request *req = (struct skcipher_request *)cc_req;
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
++	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
++	unsigned int len;
+ 
+-	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+-	kzfree(req_ctx->iv);
++	switch (ctx_p->cipher_mode) {
++	case DRV_CIPHER_CBC:
++		/*
++		 * The crypto API expects us to set the req->iv to the last
++		 * ciphertext block. For encrypt, simply copy from the result.
++		 * For decrypt, we must copy from a saved buffer since this
++		 * could be an in-place decryption operation and the src is
++		 * lost by this point.
++		 */
++		if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
++			memcpy(req->iv, req_ctx->backup_info, ivsize);
++			kzfree(req_ctx->backup_info);
++		} else if (!err) {
++			len = req->cryptlen - ivsize;
++			scatterwalk_map_and_copy(req->iv, req->dst, len,
++						 ivsize, 0);
++		}
++		break;
+ 
+-	/*
+-	 * The crypto API expects us to set the req->iv to the last
+-	 * ciphertext block. For encrypt, simply copy from the result.
+-	 * For decrypt, we must copy from a saved buffer since this
+-	 * could be an in-place decryption operation and the src is
+-	 * lost by this point.
+-	 */
+-	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
+-		memcpy(req->iv, req_ctx->backup_info, ivsize);
+-		kzfree(req_ctx->backup_info);
+-	} else if (!err) {
+-		scatterwalk_map_and_copy(req->iv, req->dst,
+-					 (req->cryptlen - ivsize),
+-					 ivsize, 0);
++	case DRV_CIPHER_CTR:
++		/* Compute the counter of the last block */
++		len = ALIGN(req->cryptlen, AES_BLOCK_SIZE) / AES_BLOCK_SIZE;
++		cc_update_ctr((u8 *)req->iv, len);
++		break;
++
++	default:
++		break;
+ 	}
+ 
++	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++	kzfree(req_ctx->iv);
++
+ 	skcipher_request_complete(req, err);
+ }
+ 
+@@ -752,20 +800,29 @@ static int cc_cipher_encrypt(struct skcipher_request *req)
+ static int cc_cipher_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+ 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ 	gfp_t flags = cc_gfp_flags(&req->base);
++	unsigned int len;
+ 
+-	/*
+-	 * Allocate and save the last IV sized bytes of the source, which will
+-	 * be lost in case of in-place decryption and might be needed for CTS.
+-	 */
+-	req_ctx->backup_info = kmalloc(ivsize, flags);
+-	if (!req_ctx->backup_info)
+-		return -ENOMEM;
++	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++
++		/* Allocate and save the last IV sized bytes of the source,
++		 * which will be lost in case of in-place decryption.
++		 */
++		req_ctx->backup_info = kzalloc(ivsize, flags);
++		if (!req_ctx->backup_info)
++			return -ENOMEM;
++
++		len = req->cryptlen - ivsize;
++		scatterwalk_map_and_copy(req_ctx->backup_info, req->src, len,
++					 ivsize, 0);
++	} else {
++		req_ctx->backup_info = NULL;
++	}
+ 
+-	scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
+-				 (req->cryptlen - ivsize), ivsize, 0);
+ 	req_ctx->is_giv = false;
+ 
+ 	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 96ff777474d7..e4ebde05a8a0 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -602,66 +602,7 @@ static int cc_hash_update(struct ahash_request *req)
+ 	return rc;
+ }
+ 
+-static int cc_hash_finup(struct ahash_request *req)
+-{
+-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+-	u32 digestsize = crypto_ahash_digestsize(tfm);
+-	struct scatterlist *src = req->src;
+-	unsigned int nbytes = req->nbytes;
+-	u8 *result = req->result;
+-	struct device *dev = drvdata_to_dev(ctx->drvdata);
+-	bool is_hmac = ctx->is_hmac;
+-	struct cc_crypto_req cc_req = {};
+-	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
+-	unsigned int idx = 0;
+-	int rc;
+-	gfp_t flags = cc_gfp_flags(&req->base);
+-
+-	dev_dbg(dev, "===== %s-finup (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
+-
+-	if (cc_map_req(dev, state, ctx)) {
+-		dev_err(dev, "map_ahash_source() failed\n");
+-		return -EINVAL;
+-	}
+-
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1,
+-				      flags)) {
+-		dev_err(dev, "map_ahash_request_final() failed\n");
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-	if (cc_map_result(dev, state, digestsize)) {
+-		dev_err(dev, "map_ahash_digest() failed\n");
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-
+-	/* Setup request structure */
+-	cc_req.user_cb = cc_hash_complete;
+-	cc_req.user_arg = req;
+-
+-	idx = cc_restore_hash(desc, ctx, state, idx);
+-
+-	if (is_hmac)
+-		idx = cc_fin_hmac(desc, req, idx);
+-
+-	idx = cc_fin_result(desc, req, idx);
+-
+-	rc = cc_send_request(ctx->drvdata, &cc_req, desc, idx, &req->base);
+-	if (rc != -EINPROGRESS && rc != -EBUSY) {
+-		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_result(dev, state, digestsize, result);
+-		cc_unmap_req(dev, state, ctx);
+-	}
+-	return rc;
+-}
+-
+-static int cc_hash_final(struct ahash_request *req)
++static int cc_do_finup(struct ahash_request *req, bool update)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -678,21 +619,20 @@ static int cc_hash_final(struct ahash_request *req)
+ 	int rc;
+ 	gfp_t flags = cc_gfp_flags(&req->base);
+ 
+-	dev_dbg(dev, "===== %s-final (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
++	dev_dbg(dev, "===== %s-%s (%d) ====\n", is_hmac ? "hmac" : "hash",
++		update ? "finup" : "final", nbytes);
+ 
+ 	if (cc_map_req(dev, state, ctx)) {
+ 		dev_err(dev, "map_ahash_source() failed\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0,
++	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, update,
+ 				      flags)) {
+ 		dev_err(dev, "map_ahash_request_final() failed\n");
+ 		cc_unmap_req(dev, state, ctx);
+ 		return -ENOMEM;
+ 	}
+-
+ 	if (cc_map_result(dev, state, digestsize)) {
+ 		dev_err(dev, "map_ahash_digest() failed\n");
+ 		cc_unmap_hash_request(dev, state, src, true);
+@@ -706,7 +646,7 @@ static int cc_hash_final(struct ahash_request *req)
+ 
+ 	idx = cc_restore_hash(desc, ctx, state, idx);
+ 
+-	/* "DO-PAD" must be enabled only when writing current length to HW */
++	/* Pad the hash */
+ 	hw_desc_init(&desc[idx]);
+ 	set_cipher_do(&desc[idx], DO_PAD);
+ 	set_cipher_mode(&desc[idx], ctx->hw_mode);
+@@ -731,6 +671,17 @@ static int cc_hash_final(struct ahash_request *req)
+ 	return rc;
+ }
+ 
++static int cc_hash_finup(struct ahash_request *req)
++{
++	return cc_do_finup(req, true);
++}
++
++
++static int cc_hash_final(struct ahash_request *req)
++{
++	return cc_do_finup(req, false);
++}
++
+ static int cc_hash_init(struct ahash_request *req)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 26ca0276b503..a75cb371cd19 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1019,8 +1019,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
+ int pud_clear_huge(pud_t *pud);
+ int pmd_clear_huge(pmd_t *pmd);
+-int pud_free_pmd_page(pud_t *pud);
+-int pmd_free_pte_page(pmd_t *pmd);
++int pud_free_pmd_page(pud_t *pud, unsigned long addr);
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
+ #else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
+ {
+@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
+ {
+ 	return 0;
+ }
+-static inline int pud_free_pmd_page(pud_t *pud)
++static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return 0;
+ }
+-static inline int pmd_free_pte_page(pmd_t *pmd)
++static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return 0;
+ }
+diff --git a/include/crypto/vmac.h b/include/crypto/vmac.h
+deleted file mode 100644
+index 6b700c7b2fe1..000000000000
+--- a/include/crypto/vmac.h
++++ /dev/null
+@@ -1,63 +0,0 @@
+-/*
+- * Modified to interface to the Linux kernel
+- * Copyright (c) 2009, Intel Corporation.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms and conditions of the GNU General Public License,
+- * version 2, as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+- * Place - Suite 330, Boston, MA 02111-1307 USA.
+- */
+-
+-#ifndef __CRYPTO_VMAC_H
+-#define __CRYPTO_VMAC_H
+-
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
+-
+-/*
+- * User definable settings.
+- */
+-#define VMAC_TAG_LEN	64
+-#define VMAC_KEY_SIZE	128/* Must be 128, 192 or 256			*/
+-#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
+-#define VMAC_NHBYTES	128/* Must 2^i for any 3 < i < 13 Standard = 128*/
+-
+-/*
+- * This implementation uses u32 and u64 as names for unsigned 32-
+- * and 64-bit integer types. These are defined in C99 stdint.h. The
+- * following may need adaptation if you are not running a C99 or
+- * Microsoft C environment.
+- */
+-struct vmac_ctx {
+-	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
+-	u64 polykey[2*VMAC_TAG_LEN/64];
+-	u64 l3key[2*VMAC_TAG_LEN/64];
+-	u64 polytmp[2*VMAC_TAG_LEN/64];
+-	u64 cached_nonce[2];
+-	u64 cached_aes[2];
+-	int first_block_processed;
+-};
+-
+-typedef u64 vmac_t;
+-
+-struct vmac_ctx_t {
+-	struct crypto_cipher *child;
+-	struct vmac_ctx __vmac_ctx;
+-	u8 partial[VMAC_NHBYTES];	/* partial block */
+-	int partial_size;		/* size of the partial block */
+-};
+-
+-#endif /* __CRYPTO_VMAC_H */
+diff --git a/lib/ioremap.c b/lib/ioremap.c
+index 54e5bbaa3200..517f5853ffed 100644
+--- a/lib/ioremap.c
++++ b/lib/ioremap.c
+@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
+ 		if (ioremap_pmd_enabled() &&
+ 		    ((next - addr) == PMD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+-		    pmd_free_pte_page(pmd)) {
++		    pmd_free_pte_page(pmd, addr)) {
+ 			if (pmd_set_huge(pmd, phys_addr + addr, prot))
+ 				continue;
+ 		}
+@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
+ 		if (ioremap_pud_enabled() &&
+ 		    ((next - addr) == PUD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
+-		    pud_free_pmd_page(pud)) {
++		    pud_free_pmd_page(pud, addr)) {
+ 			if (pud_set_huge(pud, phys_addr + addr, prot))
+ 				continue;
+ 		}
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 1036e4fa1ea2..3bba8f4b08a9 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -431,8 +431,8 @@ static void hidp_del_timer(struct hidp_session *session)
+ 		del_timer(&session->timer);
+ }
+ 
+-static void hidp_process_report(struct hidp_session *session,
+-				int type, const u8 *data, int len, int intr)
++static void hidp_process_report(struct hidp_session *session, int type,
++				const u8 *data, unsigned int len, int intr)
+ {
+ 	if (len > HID_MAX_BUFFER_SIZE)
+ 		len = HID_MAX_BUFFER_SIZE;
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 1a6f85e0e6e1..999d585eaa73 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -10,10 +10,16 @@ fi
+ DEPMOD=$1
+ KERNELRELEASE=$2
+ 
+-if ! test -r System.map -a -x "$DEPMOD"; then
++if ! test -r System.map ; then
+ 	exit 0
+ fi
+ 
++if [ -z $(command -v $DEPMOD) ]; then
++	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "This is probably in the kmod package." >&2
++	exit 1
++fi
++
+ # older versions of depmod require the version string to start with three
+ # numbers, so we cheat with a symlink here
+ depmod_hack_needed=true


^ permalink raw reply related	[flat|nested] 75+ messages in thread
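The scripts/depmod.sh hunk above replaces a silent `exit 0` (when `depmod` was missing or non-executable) with an explicit, actionable error. A minimal standalone sketch of that check follows; `require_depmod` is a hypothetical helper name for illustration, not part of the kernel script, and the messages mirror the ones added by the patch.

```shell
#!/bin/sh
# Sketch of the availability check added to scripts/depmod.sh:
# fail loudly when the depmod binary cannot be found, instead of
# silently skipping the modules.dep generation step.
# NOTE: require_depmod is an illustrative name, not from the patch.
require_depmod() {
	DEPMOD=$1
	# command -v prints nothing and fails if the tool is absent
	if [ -z "$(command -v "$DEPMOD" 2>/dev/null)" ]; then
		echo "'make modules_install' requires $DEPMOD. Please install it." >&2
		echo "This is probably in the kmod package." >&2
		return 1
	fi
	return 0
}
```

Called as `require_depmod depmod`, this returns 0 when the tool is on PATH and 1 with a diagnostic otherwise, which is the behavior the upstream fix wires into the modules_install flow.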
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     81edbc863e5ef834718d87ab706c1882ef565693
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  5 15:30:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:23 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=81edbc86

Linux patch 4.18.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1005_linux-4.18.6.patch | 5123 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5127 insertions(+)

diff --git a/0000_README b/0000_README
index 8da0979..8bfc2e4 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-4.18.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.5
 
+Patch:  1005_linux-4.18.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-4.18.6.patch b/1005_linux-4.18.6.patch
new file mode 100644
index 0000000..99632b3
--- /dev/null
+++ b/1005_linux-4.18.6.patch
@@ -0,0 +1,5123 @@
+diff --git a/Makefile b/Makefile
+index a41692c5827a..62524f4d42ad 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -493,9 +493,13 @@ KBUILD_AFLAGS += $(call cc-option, -no-integrated-as)
+ endif
+ 
+ RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register
++RETPOLINE_VDSO_CFLAGS_GCC := -mindirect-branch=thunk-inline -mindirect-branch-register
+ RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk
++RETPOLINE_VDSO_CFLAGS_CLANG := -mretpoline
+ RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG)))
++RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_VDSO_CFLAGS_CLANG)))
+ export RETPOLINE_CFLAGS
++export RETPOLINE_VDSO_CFLAGS
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS	+= $(call cc-option,-fno-PIE)
+diff --git a/arch/Kconfig b/arch/Kconfig
+index d1f2ed462ac8..f03b72644902 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -354,6 +354,9 @@ config HAVE_ARCH_JUMP_LABEL
+ config HAVE_RCU_TABLE_FREE
+ 	bool
+ 
++config HAVE_RCU_TABLE_INVALIDATE
++	bool
++
+ config ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	bool
+ 
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index f6a62ae44a65..c864f6b045ba 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -238,7 +238,7 @@ static void jit_fill_hole(void *area, unsigned int size)
+ #define STACK_SIZE	ALIGN(_STACK_SIZE, STACK_ALIGNMENT)
+ 
+ /* Get the offset of eBPF REGISTERs stored on scratch space. */
+-#define STACK_VAR(off) (STACK_SIZE - off)
++#define STACK_VAR(off) (STACK_SIZE - off - 4)
+ 
+ #if __LINUX_ARM_ARCH__ < 7
+ 
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index e90cc8a08186..a8be6fe3946d 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -289,8 +289,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
+ 				break;
+ 			case KPROBE_REENTER:
+ 				/* A nested probe was hit in FIQ, it is a BUG */
+-				pr_warn("Unrecoverable kprobe detected at %p.\n",
+-					p->addr);
++				pr_warn("Unrecoverable kprobe detected.\n");
++				dump_kprobe(p);
+ 				/* fall through */
+ 			default:
+ 				/* impossible cases */
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index 14db14152909..cc237fa9b90f 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -1461,7 +1461,6 @@ fail:
+ 	print_registers(&result_regs);
+ 
+ 	if (mem) {
+-		pr_err("current_stack=%p\n", current_stack);
+ 		pr_err("expected_memory:\n");
+ 		print_memory(expected_memory, mem_size);
+ 		pr_err("result_memory:\n");
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index b8e9da15e00c..2c1aa84abeea 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0x0 0xff120000 0x0 0x100>;
+ 		interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_UART1>, <&cru PCLK_UART1>;
+-		clock-names = "sclk_uart", "pclk_uart";
++		clock-names = "baudclk", "apb_pclk";
+ 		dmas = <&dmac 4>, <&dmac 5>;
+ 		dma-names = "tx", "rx";
+ 		pinctrl-names = "default";
+diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
+index 5df5cfe1c143..5ee5bca8c24b 100644
+--- a/arch/arm64/include/asm/cache.h
++++ b/arch/arm64/include/asm/cache.h
+@@ -21,12 +21,16 @@
+ #define CTR_L1IP_SHIFT		14
+ #define CTR_L1IP_MASK		3
+ #define CTR_DMINLINE_SHIFT	16
++#define CTR_IMINLINE_SHIFT	0
+ #define CTR_ERG_SHIFT		20
+ #define CTR_CWG_SHIFT		24
+ #define CTR_CWG_MASK		15
+ #define CTR_IDC_SHIFT		28
+ #define CTR_DIC_SHIFT		29
+ 
++#define CTR_CACHE_MINLINE_MASK	\
++	(0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
++
+ #define CTR_L1IP(ctr)		(((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+ 
+ #define ICACHE_POLICY_VPIPT	0
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 8a699c708fc9..be3bf3d08916 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -49,7 +49,8 @@
+ #define ARM64_HAS_CACHE_DIC			28
+ #define ARM64_HW_DBM				29
+ #define ARM64_SSBD				30
++#define ARM64_MISMATCHED_CACHE_TYPE		31
+ 
+-#define ARM64_NCAPS				31
++#define ARM64_NCAPS				32
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 1d2b6d768efe..5d59ff9a8da9 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -65,12 +65,18 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
+ }
+ 
+ static bool
+-has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
+-				int scope)
++has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
++			  int scope)
+ {
++	u64 mask = CTR_CACHE_MINLINE_MASK;
++
++	/* Skip matching the min line sizes for cache type check */
++	if (entry->capability == ARM64_MISMATCHED_CACHE_TYPE)
++		mask ^= arm64_ftr_reg_ctrel0.strict_mask;
++
+ 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+-	return (read_cpuid_cachetype() & arm64_ftr_reg_ctrel0.strict_mask) !=
+-		(arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
++	return (read_cpuid_cachetype() & mask) !=
++	       (arm64_ftr_reg_ctrel0.sys_val & mask);
+ }
+ 
+ static void
+@@ -613,7 +619,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 	{
+ 		.desc = "Mismatched cache line size",
+ 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
+-		.matches = has_mismatched_cache_line_size,
++		.matches = has_mismatched_cache_type,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.cpu_enable = cpu_enable_trap_ctr_access,
++	},
++	{
++		.desc = "Mismatched cache type",
++		.capability = ARM64_MISMATCHED_CACHE_TYPE,
++		.matches = has_mismatched_cache_type,
+ 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ 		.cpu_enable = cpu_enable_trap_ctr_access,
+ 	},
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index c6d80743f4ed..e4103b718a7c 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -214,7 +214,7 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
+ 	 * If we have differing I-cache policies, report it as the weakest - VIPT.
+ 	 */
+ 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
+-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),	/* IminLine */
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
+ 	ARM64_FTR_END,
+ };
+ 
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index d849d9804011..22a5921562c7 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -275,7 +275,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
+ 		break;
+ 	case KPROBE_HIT_SS:
+ 	case KPROBE_REENTER:
+-		pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
++		pr_warn("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 		break;
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 9abf8a1e7b25..787e27964ab9 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -287,7 +287,11 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ #ifdef CONFIG_HAVE_ARCH_PFN_VALID
+ int pfn_valid(unsigned long pfn)
+ {
+-	return memblock_is_map_memory(pfn << PAGE_SHIFT);
++	phys_addr_t addr = pfn << PAGE_SHIFT;
++
++	if ((addr >> PAGE_SHIFT) != pfn)
++		return 0;
++	return memblock_is_map_memory(addr);
+ }
+ EXPORT_SYMBOL(pfn_valid);
+ #endif
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index e2122cca4ae2..1e98d22ec119 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -155,15 +155,11 @@ cflags-$(CONFIG_CPU_R4300)	+= -march=r4300 -Wa,--trap
+ cflags-$(CONFIG_CPU_VR41XX)	+= -march=r4100 -Wa,--trap
+ cflags-$(CONFIG_CPU_R4X00)	+= -march=r4600 -Wa,--trap
+ cflags-$(CONFIG_CPU_TX49XX)	+= -march=r4600 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R1)	+= $(call cc-option,-march=mips32,-mips32 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R2)	+= $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R1)	+= -march=mips32 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R2)	+= -march=mips32r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS32_R6)	+= -march=mips32r6 -Wa,--trap -modd-spreg
+-cflags-$(CONFIG_CPU_MIPS64_R1)	+= $(call cc-option,-march=mips64,-mips64 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS64_R2)	+= $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R1)	+= -march=mips64 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R2)	+= -march=mips64r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS64_R6)	+= -march=mips64r6 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5000)	+= -march=r5000 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5432)	+= $(call cc-option,-march=r5400,-march=r5000) \
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index af34afbc32d9..b2fa62922d88 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -141,7 +141,7 @@ struct mips_fpu_struct {
+ 
+ #define NUM_DSP_REGS   6
+ 
+-typedef __u32 dspreg_t;
++typedef unsigned long dspreg_t;
+ 
+ struct mips_dsp_state {
+ 	dspreg_t	dspr[NUM_DSP_REGS];
+@@ -386,7 +386,20 @@ unsigned long get_wchan(struct task_struct *p);
+ #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
+ #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
+ 
++#ifdef CONFIG_CPU_LOONGSON3
++/*
++ * Loongson-3's SFB (Store-Fill-Buffer) may buffer writes indefinitely when a
++ * tight read loop is executed, because reads take priority over writes & the
++ * hardware (incorrectly) doesn't ensure that writes will eventually occur.
++ *
++ * Since spin loops of any kind should have a cpu_relax() in them, force an SFB
++ * flush from cpu_relax() such that any pending writes will become visible as
++ * expected.
++ */
++#define cpu_relax()	smp_mb()
++#else
+ #define cpu_relax()	barrier()
++#endif
+ 
+ /*
+  * Return_address is a replacement for __builtin_return_address(count)
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index 9f6c3f2aa2e2..8c8d42823bda 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -856,7 +856,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c
+index 7edc629304c8..bc348d44d151 100644
+--- a/arch/mips/kernel/ptrace32.c
++++ b/arch/mips/kernel/ptrace32.c
+@@ -142,7 +142,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index 1cc306520a55..fac26ce64b2f 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -195,6 +195,7 @@
+ #endif
+ #else
+ 	 PTR_SUBU	t0, $0, a2
++	move		a2, zero		/* No remaining longs */
+ 	PTR_ADDIU	t0, 1
+ 	STORE_BYTE(0)
+ 	STORE_BYTE(1)
+@@ -231,7 +232,7 @@
+ 
+ #ifdef CONFIG_CPU_MIPSR6
+ .Lbyte_fixup\@:
+-	PTR_SUBU	a2, $0, t0
++	PTR_SUBU	a2, t0
+ 	jr		ra
+ 	 PTR_ADDIU	a2, 1
+ #endif /* CONFIG_CPU_MIPSR6 */
+diff --git a/arch/mips/lib/multi3.c b/arch/mips/lib/multi3.c
+index 111ad475aa0c..4c2483f410c2 100644
+--- a/arch/mips/lib/multi3.c
++++ b/arch/mips/lib/multi3.c
+@@ -4,12 +4,12 @@
+ #include "libgcc.h"
+ 
+ /*
+- * GCC 7 suboptimally generates __multi3 calls for mips64r6, so for that
+- * specific case only we'll implement it here.
++ * GCC 7 & older can suboptimally generate __multi3 calls for mips64r6, so for
++ * that specific case only we implement that intrinsic here.
+  *
+  * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82981
+  */
+-#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ == 7)
++#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ < 8)
+ 
+ /* multiply 64-bit values, low 64-bits returned */
+ static inline long long notrace dmulu(long long a, long long b)
+diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h
+index de11ecc99c7c..9c9970a5dfb1 100644
+--- a/arch/s390/include/asm/qdio.h
++++ b/arch/s390/include/asm/qdio.h
+@@ -262,7 +262,6 @@ struct qdio_outbuf_state {
+ 	void *user;
+ };
+ 
+-#define QDIO_OUTBUF_STATE_FLAG_NONE	0x00
+ #define QDIO_OUTBUF_STATE_FLAG_PENDING	0x01
+ 
+ #define CHSC_AC1_INITIATE_INPUTQ	0x80
+diff --git a/arch/s390/lib/mem.S b/arch/s390/lib/mem.S
+index 2311f15be9cf..40c4d59c926e 100644
+--- a/arch/s390/lib/mem.S
++++ b/arch/s390/lib/mem.S
+@@ -17,7 +17,7 @@
+ ENTRY(memmove)
+ 	ltgr	%r4,%r4
+ 	lgr	%r1,%r2
+-	bzr	%r14
++	jz	.Lmemmove_exit
+ 	aghi	%r4,-1
+ 	clgr	%r2,%r3
+ 	jnh	.Lmemmove_forward
+@@ -36,6 +36,7 @@ ENTRY(memmove)
+ .Lmemmove_forward_remainder:
+ 	larl	%r5,.Lmemmove_mvc
+ 	ex	%r4,0(%r5)
++.Lmemmove_exit:
+ 	BR_EX	%r14
+ .Lmemmove_reverse:
+ 	ic	%r0,0(%r4,%r3)
+@@ -65,7 +66,7 @@ EXPORT_SYMBOL(memmove)
+  */
+ ENTRY(memset)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemset_exit
+ 	ltgr	%r3,%r3
+ 	jnz	.Lmemset_fill
+ 	aghi	%r4,-1
+@@ -80,6 +81,7 @@ ENTRY(memset)
+ .Lmemset_clear_remainder:
+ 	larl	%r3,.Lmemset_xc
+ 	ex	%r4,0(%r3)
++.Lmemset_exit:
+ 	BR_EX	%r14
+ .Lmemset_fill:
+ 	cghi	%r4,1
+@@ -115,7 +117,7 @@ EXPORT_SYMBOL(memset)
+  */
+ ENTRY(memcpy)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemcpy_exit
+ 	aghi	%r4,-1
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -124,6 +126,7 @@ ENTRY(memcpy)
+ .Lmemcpy_remainder:
+ 	larl	%r5,.Lmemcpy_mvc
+ 	ex	%r4,0(%r5)
++.Lmemcpy_exit:
+ 	BR_EX	%r14
+ .Lmemcpy_loop:
+ 	mvc	0(256,%r1),0(%r3)
+@@ -145,9 +148,9 @@ EXPORT_SYMBOL(memcpy)
+ .macro __MEMSET bits,bytes,insn
+ ENTRY(__memset\bits)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.L__memset_exit\bits
+ 	cghi	%r4,\bytes
+-	je	.L__memset_exit\bits
++	je	.L__memset_store\bits
+ 	aghi	%r4,-(\bytes+1)
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -163,8 +166,9 @@ ENTRY(__memset\bits)
+ 	larl	%r5,.L__memset_mvc\bits
+ 	ex	%r4,0(%r5)
+ 	BR_EX	%r14
+-.L__memset_exit\bits:
++.L__memset_store\bits:
+ 	\insn	%r3,0(%r2)
++.L__memset_exit\bits:
+ 	BR_EX	%r14
+ .L__memset_mvc\bits:
+ 	mvc	\bytes(1,%r1),0(%r1)
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index e074480d3598..4cc3f06b0ab3 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -502,6 +502,8 @@ retry:
+ 	/* No reason to continue if interrupted by SIGKILL. */
+ 	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+ 		fault = VM_FAULT_SIGNAL;
++		if (flags & FAULT_FLAG_RETRY_NOWAIT)
++			goto out_up;
+ 		goto out;
+ 	}
+ 	if (unlikely(fault & VM_FAULT_ERROR))
+diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
+index 382153ff17e3..dc3cede7f2ec 100644
+--- a/arch/s390/mm/page-states.c
++++ b/arch/s390/mm/page-states.c
+@@ -271,7 +271,7 @@ void arch_set_page_states(int make_stable)
+ 			list_for_each(l, &zone->free_area[order].free_list[t]) {
+ 				page = list_entry(l, struct page, lru);
+ 				if (make_stable)
+-					set_page_stable_dat(page, 0);
++					set_page_stable_dat(page, order);
+ 				else
+ 					set_page_unused(page, order);
+ 			}
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 5f0234ec8038..d7052cbe984f 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -485,8 +485,6 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
+ 			/* br %r1 */
+ 			_EMIT2(0x07f1);
+ 		} else {
+-			/* larl %r1,.+14 */
+-			EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14);
+ 			/* ex 0,S390_lowcore.br_r1_tampoline */
+ 			EMIT4_DISP(0x44000000, REG_0, REG_0,
+ 				   offsetof(struct lowcore, br_r1_trampoline));
+diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
+index 06a80434cfe6..5bd374491f94 100644
+--- a/arch/s390/numa/numa.c
++++ b/arch/s390/numa/numa.c
+@@ -134,26 +134,14 @@ void __init numa_setup(void)
+ {
+ 	pr_info("NUMA mode: %s\n", mode->name);
+ 	nodes_clear(node_possible_map);
++	/* Initially attach all possible CPUs to node 0. */
++	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+ 	if (mode->setup)
+ 		mode->setup();
+ 	numa_setup_memory();
+ 	memblock_dump_all();
+ }
+ 
+-/*
+- * numa_init_early() - Initialization initcall
+- *
+- * This runs when only one CPU is online and before the first
+- * topology update is called for by the scheduler.
+- */
+-static int __init numa_init_early(void)
+-{
+-	/* Attach all possible CPUs to node 0 for now. */
+-	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+-	return 0;
+-}
+-early_initcall(numa_init_early);
+-
+ /*
+  * numa_init_late() - Initialization initcall
+  *
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 4902fed221c0..8a505cfdd9b9 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -421,6 +421,8 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 	hwirq = 0;
+ 	for_each_pci_msi_entry(msi, pdev) {
+ 		rc = -EIO;
++		if (hwirq >= msi_vecs)
++			break;
+ 		irq = irq_alloc_desc(0);	/* Alloc irq on node 0 */
+ 		if (irq < 0)
+ 			return -ENOMEM;
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 1ace023cbdce..abfa8c7a6d9a 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -7,13 +7,13 @@ purgatory-y := head.o purgatory.o string.o sha256.o mem.o
+ targets += $(purgatory-y) purgatory.ro kexec-purgatory.c
+ PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
+ 
+-$(obj)/sha256.o: $(srctree)/lib/sha256.c
++$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+-$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S
++$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE
+ 	$(call if_changed_rule,as_o_S)
+ 
+-$(obj)/string.o: $(srctree)/arch/s390/lib/string.c
++$(obj)/string.o: $(srctree)/arch/s390/lib/string.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib
+@@ -23,6 +23,7 @@ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
++KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+ 
+ $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
+ 		$(call if_changed,ld)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 6b8065d718bd..1aa4dd3b5687 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -179,6 +179,7 @@ config X86
+ 	select HAVE_PERF_REGS
+ 	select HAVE_PERF_USER_STACK_DUMP
+ 	select HAVE_RCU_TABLE_FREE
++	select HAVE_RCU_TABLE_INVALIDATE	if HAVE_RCU_TABLE_FREE
+ 	select HAVE_REGS_AND_STACK_ACCESS_API
+ 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && UNWINDER_FRAME_POINTER && STACK_VALIDATION
+ 	select HAVE_STACKPROTECTOR		if CC_HAS_SANE_STACKPROTECTOR
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index a08e82856563..d944b52649a4 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -180,10 +180,6 @@ ifdef CONFIG_FUNCTION_GRAPH_TRACER
+   endif
+ endif
+ 
+-ifndef CC_HAVE_ASM_GOTO
+-  $(error Compiler lacks asm-goto support.)
+-endif
+-
+ #
+ # Jump labels need '-maccumulate-outgoing-args' for gcc < 4.5.2 to prevent a
+ # GCC bug (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46226).  There's no way
+@@ -317,6 +313,13 @@ PHONY += vdso_install
+ vdso_install:
+ 	$(Q)$(MAKE) $(build)=arch/x86/entry/vdso $@
+ 
++archprepare: checkbin
++checkbin:
++ifndef CC_HAVE_ASM_GOTO
++	@echo Compiler lacks asm-goto support.
++	@exit 1
++endif
++
+ archclean:
+ 	$(Q)rm -rf $(objtree)/arch/i386
+ 	$(Q)rm -rf $(objtree)/arch/x86_64
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 261802b1cc50..9589878faf46 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,9 +72,9 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
+ 
+-$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
++$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+ #
+ # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
+@@ -138,11 +138,13 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
++KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
++KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 5f4829f10129..dfb2f7c0d019 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 
+ 	perf_callchain_store(entry, regs->ip);
+ 
+-	if (!current->mm)
++	if (!nmi_uaccess_okay())
+ 		return;
+ 
+ 	if (perf_callchain_user32(regs, entry))
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c14f2a74b2be..15450a675031 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -33,7 +33,8 @@ extern inline unsigned long native_save_fl(void)
+ 	return flags;
+ }
+ 
+-static inline void native_restore_fl(unsigned long flags)
++extern inline void native_restore_fl(unsigned long flags);
++extern inline void native_restore_fl(unsigned long flags)
+ {
+ 	asm volatile("push %0 ; popf"
+ 		     : /* no output */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 682286aca881..d53c54b842da 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -132,6 +132,8 @@ struct cpuinfo_x86 {
+ 	/* Index into per_cpu list: */
+ 	u16			cpu_index;
+ 	u32			microcode;
++	/* Address space bits used by the cache internally */
++	u8			x86_cache_bits;
+ 	unsigned		initialized : 1;
+ } __randomize_layout;
+ 
+@@ -181,9 +183,9 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
+-static inline unsigned long l1tf_pfn_limit(void)
++static inline unsigned long long l1tf_pfn_limit(void)
+ {
+-	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++	return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
+ }
+ 
+ extern void early_cpu_init(void);
+diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
+index b6dc698f992a..f335aad404a4 100644
+--- a/arch/x86/include/asm/stacktrace.h
++++ b/arch/x86/include/asm/stacktrace.h
+@@ -111,6 +111,6 @@ static inline unsigned long caller_frame_pointer(void)
+ 	return (unsigned long)frame;
+ }
+ 
+-void show_opcodes(u8 *rip, const char *loglvl);
++void show_opcodes(struct pt_regs *regs, const char *loglvl);
+ void show_ip(struct pt_regs *regs, const char *loglvl);
+ #endif /* _ASM_X86_STACKTRACE_H */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 6690cd3fc8b1..0af97e51e609 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -175,8 +175,16 @@ struct tlb_state {
+ 	 * are on.  This means that it may not match current->active_mm,
+ 	 * which will contain the previous user mm when we're in lazy TLB
+ 	 * mode even if we've already switched back to swapper_pg_dir.
++	 *
++	 * During switch_mm_irqs_off(), loaded_mm will be set to
++	 * LOADED_MM_SWITCHING during the brief interrupts-off window
++	 * when CR3 and loaded_mm would otherwise be inconsistent.  This
++	 * is for nmi_uaccess_okay()'s benefit.
+ 	 */
+ 	struct mm_struct *loaded_mm;
++
++#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
++
+ 	u16 loaded_mm_asid;
+ 	u16 next_asid;
+ 	/* last user mm's ctx id */
+@@ -246,6 +254,38 @@ struct tlb_state {
+ };
+ DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
+ 
++/*
++ * Blindly accessing user memory from NMI context can be dangerous
++ * if we're in the middle of switching the current user task or
++ * switching the loaded mm.  It can also be dangerous if we
++ * interrupted some kernel code that was temporarily using a
++ * different mm.
++ */
++static inline bool nmi_uaccess_okay(void)
++{
++	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
++	struct mm_struct *current_mm = current->mm;
++
++	VM_WARN_ON_ONCE(!loaded_mm);
++
++	/*
++	 * The condition we want to check is
++	 * current_mm->pgd == __va(read_cr3_pa()).  This may be slow, though,
++	 * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
++	 * is supposed to be reasonably fast.
++	 *
++	 * Instead, we check the almost equivalent but somewhat conservative
++	 * condition below, and we rely on the fact that switch_mm_irqs_off()
++	 * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
++	 */
++	if (loaded_mm != current_mm)
++		return false;
++
++	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
++
++	return true;
++}
++
+ /* Initialize cr4 shadow for this CPU. */
+ static inline void cr4_init_shadow(void)
+ {
+diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
+index fb856c9f0449..53748541c487 100644
+--- a/arch/x86/include/asm/vgtod.h
++++ b/arch/x86/include/asm/vgtod.h
+@@ -93,7 +93,7 @@ static inline unsigned int __getcpu(void)
+ 	 *
+ 	 * If RDPID is available, use it.
+ 	 */
+-	alternative_io ("lsl %[p],%[seg]",
++	alternative_io ("lsl %[seg],%[p]",
+ 			".byte 0xf3,0x0f,0xc7,0xf8", /* RDPID %eax/rax */
+ 			X86_FEATURE_RDPID,
+ 			[p] "=a" (p), [seg] "r" (__PER_CPU_SEG));
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 664f161f96ff..4891a621a752 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -652,6 +652,45 @@ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+ 
++/*
++ * These CPUs all support 44bits physical address space internally in the
++ * cache but CPUID can report a smaller number of physical address bits.
++ *
++ * The L1TF mitigation uses the top most address bit for the inversion of
++ * non present PTEs. When the installed memory reaches into the top most
++ * address bit due to memory holes, which has been observed on machines
++ * which report 36bits physical address bits and have 32G RAM installed,
++ * then the mitigation range check in l1tf_select_mitigation() triggers.
++ * This is a false positive because the mitigation is still possible due to
++ * the fact that the cache uses 44bit internally. Use the cache bits
++ * instead of the reported physical bits and adjust them on the affected
++ * machines to 44bit if the reported bits are less than 44.
++ */
++static void override_cache_bits(struct cpuinfo_x86 *c)
++{
++	if (c->x86 != 6)
++		return;
++
++	switch (c->x86_model) {
++	case INTEL_FAM6_NEHALEM:
++	case INTEL_FAM6_WESTMERE:
++	case INTEL_FAM6_SANDYBRIDGE:
++	case INTEL_FAM6_IVYBRIDGE:
++	case INTEL_FAM6_HASWELL_CORE:
++	case INTEL_FAM6_HASWELL_ULT:
++	case INTEL_FAM6_HASWELL_GT3E:
++	case INTEL_FAM6_BROADWELL_CORE:
++	case INTEL_FAM6_BROADWELL_GT3E:
++	case INTEL_FAM6_SKYLAKE_MOBILE:
++	case INTEL_FAM6_SKYLAKE_DESKTOP:
++	case INTEL_FAM6_KABYLAKE_MOBILE:
++	case INTEL_FAM6_KABYLAKE_DESKTOP:
++		if (c->x86_cache_bits < 44)
++			c->x86_cache_bits = 44;
++		break;
++	}
++}
++
+ static void __init l1tf_select_mitigation(void)
+ {
+ 	u64 half_pa;
+@@ -659,6 +698,8 @@ static void __init l1tf_select_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+ 		return;
+ 
++	override_cache_bits(&boot_cpu_data);
++
+ 	switch (l1tf_mitigation) {
+ 	case L1TF_MITIGATION_OFF:
+ 	case L1TF_MITIGATION_FLUSH_NOWARN:
+@@ -678,14 +719,13 @@ static void __init l1tf_select_mitigation(void)
+ 	return;
+ #endif
+ 
+-	/*
+-	 * This is extremely unlikely to happen because almost all
+-	 * systems have far more MAX_PA/2 than RAM can be fit into
+-	 * DIMM slots.
+-	 */
+ 	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
+ 	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
+ 		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
++				half_pa);
++		pr_info("However, doing so will make a part of your RAM unusable.\n");
++		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html might help you decide.\n");
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b41b72bd8bb8..1ee8ea36af30 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -919,6 +919,7 @@ void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ 	else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
+ 		c->x86_phys_bits = 36;
+ #endif
++	c->x86_cache_bits = c->x86_phys_bits;
+ }
+ 
+ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 6602941cfebf..3f0abb62161b 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -150,6 +150,9 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
+ 		return false;
+ 
++	if (c->x86 != 6)
++		return false;
++
+ 	for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ 		if (c->x86_model == spectre_bad_microcodes[i].model &&
+ 		    c->x86_stepping == spectre_bad_microcodes[i].stepping)
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 666a284116ac..17b02adc79aa 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -17,6 +17,7 @@
+ #include <linux/bug.h>
+ #include <linux/nmi.h>
+ #include <linux/sysfs.h>
++#include <linux/kasan.h>
+ 
+ #include <asm/cpu_entry_area.h>
+ #include <asm/stacktrace.h>
+@@ -91,23 +92,32 @@ static void printk_stack_address(unsigned long address, int reliable,
+  * Thus, the 2/3rds prologue and 64 byte OPCODE_BUFSIZE is just a random
+  * guesstimate in attempt to achieve all of the above.
+  */
+-void show_opcodes(u8 *rip, const char *loglvl)
++void show_opcodes(struct pt_regs *regs, const char *loglvl)
+ {
+ 	unsigned int code_prologue = OPCODE_BUFSIZE * 2 / 3;
+ 	u8 opcodes[OPCODE_BUFSIZE];
+-	u8 *ip;
++	unsigned long ip;
+ 	int i;
++	bool bad_ip;
+ 
+ 	printk("%sCode: ", loglvl);
+ 
+-	ip = (u8 *)rip - code_prologue;
+-	if (probe_kernel_read(opcodes, ip, OPCODE_BUFSIZE)) {
++	ip = regs->ip - code_prologue;
++
++	/*
++	 * Make sure userspace isn't trying to trick us into dumping kernel
++	 * memory by pointing the userspace instruction pointer at it.
++	 */
++	bad_ip = user_mode(regs) &&
++		 __chk_range_not_ok(ip, OPCODE_BUFSIZE, TASK_SIZE_MAX);
++
++	if (bad_ip || probe_kernel_read(opcodes, (u8 *)ip, OPCODE_BUFSIZE)) {
+ 		pr_cont("Bad RIP value.\n");
+ 		return;
+ 	}
+ 
+ 	for (i = 0; i < OPCODE_BUFSIZE; i++, ip++) {
+-		if (ip == rip)
++		if (ip == regs->ip)
+ 			pr_cont("<%02x> ", opcodes[i]);
+ 		else
+ 			pr_cont("%02x ", opcodes[i]);
+@@ -122,7 +132,7 @@ void show_ip(struct pt_regs *regs, const char *loglvl)
+ #else
+ 	printk("%sRIP: %04x:%pS\n", loglvl, (int)regs->cs, (void *)regs->ip);
+ #endif
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ void show_iret_regs(struct pt_regs *regs)
+@@ -356,7 +366,10 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	 * We're not going to return, but we might be on an IST stack or
+ 	 * have very little stack space left.  Rewind the stack and kill
+ 	 * the task.
++	 * Before we rewind the stack, we have to tell KASAN that we're going to
++	 * reuse the task stack and that existing poisons are invalid.
+ 	 */
++	kasan_unpoison_task_stack(current);
+ 	rewind_stack_do_exit(signr);
+ }
+ NOKPROBE_SYMBOL(oops_end);
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index da5d8ac60062..50d5848bf22e 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -338,6 +338,18 @@ static resource_size_t __init gen3_stolen_base(int num, int slot, int func,
+ 	return bsm & INTEL_BSM_MASK;
+ }
+ 
++static resource_size_t __init gen11_stolen_base(int num, int slot, int func,
++						resource_size_t stolen_size)
++{
++	u64 bsm;
++
++	bsm = read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW0);
++	bsm &= INTEL_BSM_MASK;
++	bsm |= (u64)read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW1) << 32;
++
++	return bsm;
++}
++
+ static resource_size_t __init i830_stolen_size(int num, int slot, int func)
+ {
+ 	u16 gmch_ctrl;
+@@ -498,6 +510,11 @@ static const struct intel_early_ops chv_early_ops __initconst = {
+ 	.stolen_size = chv_stolen_size,
+ };
+ 
++static const struct intel_early_ops gen11_early_ops __initconst = {
++	.stolen_base = gen11_stolen_base,
++	.stolen_size = gen9_stolen_size,
++};
++
+ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_I830_IDS(&i830_early_ops),
+ 	INTEL_I845G_IDS(&i845_early_ops),
+@@ -529,6 +546,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_CFL_IDS(&gen9_early_ops),
+ 	INTEL_GLK_IDS(&gen9_early_ops),
+ 	INTEL_CNL_IDS(&gen9_early_ops),
++	INTEL_ICL_11_IDS(&gen11_early_ops),
+ };
+ 
+ struct resource intel_graphics_stolen_res __ro_after_init = DEFINE_RES_MEM(0, 0);
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 12bb445fb98d..4344a032ebe6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -384,6 +384,7 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
+ 	start_thread_common(regs, new_ip, new_sp,
+ 			    __USER_CS, __USER_DS, 0);
+ }
++EXPORT_SYMBOL_GPL(start_thread);
+ 
+ #ifdef CONFIG_COMPAT
+ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp)
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index af8caf965baa..01d209ab5481 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -235,7 +235,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	struct kvm_vcpu *vcpu = synic_to_vcpu(synic);
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -295,11 +295,12 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	return ret;
+ }
+ 
+-static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata)
++static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata,
++			 bool host)
+ {
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	ret = 0;
+@@ -1014,6 +1015,11 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		hv->hv_tsc_emulation_status = data;
+ 		break;
++	case HV_X64_MSR_TIME_REF_COUNT:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1101,6 +1107,12 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return stimer_set_count(vcpu_to_stimer(vcpu, timer_index),
+ 					data, host);
+ 	}
++	case HV_X64_MSR_TSC_FREQUENCY:
++	case HV_X64_MSR_APIC_FREQUENCY:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1156,7 +1168,8 @@ static int kvm_hv_get_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	return 0;
+ }
+ 
+-static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
++			  bool host)
+ {
+ 	u64 data = 0;
+ 	struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
+@@ -1183,7 +1196,7 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	case HV_X64_MSR_SIMP:
+ 	case HV_X64_MSR_EOM:
+ 	case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+-		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata);
++		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata, host);
+ 	case HV_X64_MSR_STIMER0_CONFIG:
+ 	case HV_X64_MSR_STIMER1_CONFIG:
+ 	case HV_X64_MSR_STIMER2_CONFIG:
+@@ -1229,7 +1242,7 @@ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return kvm_hv_set_msr(vcpu, msr, data, host);
+ }
+ 
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	if (kvm_hv_msr_partition_wide(msr)) {
+ 		int r;
+@@ -1239,7 +1252,7 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		mutex_unlock(&vcpu->kvm->arch.hyperv.hv_lock);
+ 		return r;
+ 	} else
+-		return kvm_hv_get_msr(vcpu, msr, pdata);
++		return kvm_hv_get_msr(vcpu, msr, pdata, host);
+ }
+ 
+ static __always_inline int get_sparse_bank_no(u64 valid_bank_mask, int bank_no)
+diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
+index 837465d69c6d..d6aa969e20f1 100644
+--- a/arch/x86/kvm/hyperv.h
++++ b/arch/x86/kvm/hyperv.h
+@@ -48,7 +48,7 @@ static inline struct kvm_vcpu *synic_to_vcpu(struct kvm_vcpu_hv_synic *synic)
+ }
+ 
+ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host);
+ 
+ bool kvm_hv_hypercall_enabled(struct kvm *kvm);
+ int kvm_hv_hypercall(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f059a73f0fd0..9799f86388e7 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5580,8 +5580,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 
+ 	clgi();
+ 
+-	local_irq_enable();
+-
+ 	/*
+ 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
+ 	 * it's non-zero. Since vmentry is serialising on affected CPUs, there
+@@ -5590,6 +5588,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	 */
+ 	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
+ 
++	local_irq_enable();
++
+ 	asm volatile (
+ 		"push %%" _ASM_BP "; \n\t"
+ 		"mov %c[rbx](%[svm]), %%" _ASM_BX " \n\t"
+@@ -5712,12 +5712,12 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
+ 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ 
+-	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+-
+ 	reload_tss(vcpu);
+ 
+ 	local_irq_disable();
+ 
++	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
++
+ 	vcpu->arch.cr2 = svm->vmcb->save.cr2;
+ 	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
+ 	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index a5caa5e5480c..24c84aa87049 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2185,10 +2185,11 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.mcg_status = data;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) &&
++		    (data || !msr_info->host_initiated))
+ 			return 1;
+ 		if (data != 0 && data != ~(u64)0)
+-			return -1;
++			return 1;
+ 		vcpu->arch.mcg_ctl = data;
+ 		break;
+ 	default:
+@@ -2576,7 +2577,7 @@ int kvm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ }
+ EXPORT_SYMBOL_GPL(kvm_get_msr);
+ 
+-static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	u64 data;
+ 	u64 mcg_cap = vcpu->arch.mcg_cap;
+@@ -2591,7 +2592,7 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		data = vcpu->arch.mcg_cap;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) && !host)
+ 			return 1;
+ 		data = vcpu->arch.mcg_ctl;
+ 		break;
+@@ -2724,7 +2725,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_MCG_CTL:
+ 	case MSR_IA32_MCG_STATUS:
+ 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
+-		return get_msr_mce(vcpu, msr_info->index, &msr_info->data);
++		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
++				   msr_info->host_initiated);
+ 	case MSR_K7_CLK_CTL:
+ 		/*
+ 		 * Provide expected ramp-up count for K7. All other
+@@ -2745,7 +2747,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case HV_X64_MSR_TSC_EMULATION_CONTROL:
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		return kvm_hv_get_msr_common(vcpu,
+-					     msr_info->index, &msr_info->data);
++					     msr_info->index, &msr_info->data,
++					     msr_info->host_initiated);
+ 		break;
+ 	case MSR_IA32_BBL_CR_CTL3:
+ 		/* This legacy MSR exists but isn't fully documented in current
+diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
+index c8c6ad0d58b8..3f435d7fca5e 100644
+--- a/arch/x86/lib/usercopy.c
++++ b/arch/x86/lib/usercopy.c
+@@ -7,6 +7,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/export.h>
+ 
++#include <asm/tlbflush.h>
++
+ /*
+  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+  * nested NMI paths are careful to preserve CR2.
+@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
+ 	if (__range_not_ok(from, n, TASK_SIZE))
+ 		return n;
+ 
++	if (!nmi_uaccess_okay())
++		return n;
++
+ 	/*
+ 	 * Even though this function is typically called from NMI/IRQ context
+ 	 * disable pagefaults so that its behaviour is consistent even when
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 2aafa6ab6103..d1f1612672c7 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -838,7 +838,7 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
+ 
+ 	printk(KERN_CONT "\n");
+ 
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ static void
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index acfab322fbe0..63a6f9fcaf20 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -923,7 +923,7 @@ unsigned long max_swapfile_size(void)
+ 
+ 	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+ 		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
+-		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		unsigned long long l1tf_limit = l1tf_pfn_limit();
+ 		/*
+ 		 * We encode swap offsets also with 3 bits below those for pfn
+ 		 * which makes the usable limit higher.
+@@ -931,7 +931,7 @@ unsigned long max_swapfile_size(void)
+ #if CONFIG_PGTABLE_LEVELS > 2
+ 		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
+ #endif
+-		pages = min_t(unsigned long, l1tf_limit, pages);
++		pages = min_t(unsigned long long, l1tf_limit, pages);
+ 	}
+ 	return pages;
+ }
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index f40ab8185d94..1e95d57760cf 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -257,7 +257,7 @@ bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+ 	/* If it's real memory always allow */
+ 	if (pfn_valid(pfn))
+ 		return true;
+-	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++	if (pfn >= l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
+ 		return false;
+ 	return true;
+ }
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 6eb1f34c3c85..cd2617285e2e 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -298,6 +298,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 
+ 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+ 
++		/* Let nmi_uaccess_okay() know that we're changing CR3. */
++		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
++		barrier();
++
+ 		if (need_flush) {
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
+@@ -328,6 +332,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 		if (next != &init_mm)
+ 			this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
+ 
++		/* Make sure we write CR3 before loaded_mm. */
++		barrier();
++
+ 		this_cpu_write(cpu_tlbstate.loaded_mm, next);
+ 		this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index cc71c63df381..984b37647b2f 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -6424,6 +6424,7 @@ void ata_host_init(struct ata_host *host, struct device *dev,
+ 	host->n_tags = ATA_MAX_QUEUE;
+ 	host->dev = dev;
+ 	host->ops = ops;
++	kref_init(&host->kref);
+ }
+ 
+ void __ata_port_probe(struct ata_port *ap)
+@@ -7391,3 +7392,5 @@ EXPORT_SYMBOL_GPL(ata_cable_80wire);
+ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
++EXPORT_SYMBOL_GPL(ata_host_get);
++EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index 9e21c49cf6be..f953cb4bb1ba 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -100,8 +100,6 @@ extern int ata_port_probe(struct ata_port *ap);
+ extern void __ata_port_probe(struct ata_port *ap);
+ extern unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 				      u8 page, void *buf, unsigned int sectors);
+-extern void ata_host_get(struct ata_host *host);
+-extern void ata_host_put(struct ata_host *host);
+ 
+ #define to_ata_port(d) container_of(d, struct ata_port, tdev)
+ 
+diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
+index 8e2e4757adcb..5a42ae4078c2 100644
+--- a/drivers/base/power/clock_ops.c
++++ b/drivers/base/power/clock_ops.c
+@@ -185,7 +185,7 @@ EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);
+ int of_pm_clk_add_clks(struct device *dev)
+ {
+ 	struct clk **clks;
+-	unsigned int i, count;
++	int i, count;
+ 	int ret;
+ 
+ 	if (!dev || !dev->of_node)
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index a78b8e7085e9..66acbd063562 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2542,7 +2542,7 @@ static int cdrom_ioctl_drive_status(struct cdrom_device_info *cdi,
+ 	if (!CDROM_CAN(CDC_SELECT_DISC) ||
+ 	    (arg == CDSL_CURRENT || arg == CDSL_NONE))
+ 		return cdi->ops->drive_status(cdi, CDSL_CURRENT);
+-	if (((int)arg >= cdi->capacity))
++	if (arg >= cdi->capacity)
+ 		return -EINVAL;
+ 	return cdrom_slot_status(cdi, arg);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index e32f6e85dc6d..3a3a7a548a85 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -29,7 +29,6 @@
+ #include <linux/mutex.h>
+ #include <linux/spinlock.h>
+ #include <linux/freezer.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/tpm_eventlog.h>
+ 
+ #include "tpm.h"
+@@ -369,10 +368,13 @@ err_len:
+ 	return -EINVAL;
+ }
+ 
+-static int tpm_request_locality(struct tpm_chip *chip)
++static int tpm_request_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
+ 	if (!chip->ops->request_locality)
+ 		return 0;
+ 
+@@ -385,10 +387,13 @@ static int tpm_request_locality(struct tpm_chip *chip)
+ 	return 0;
+ }
+ 
+-static void tpm_relinquish_locality(struct tpm_chip *chip)
++static void tpm_relinquish_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return;
++
+ 	if (!chip->ops->relinquish_locality)
+ 		return;
+ 
+@@ -399,6 +404,28 @@ static void tpm_relinquish_locality(struct tpm_chip *chip)
+ 	chip->locality = -1;
+ }
+ 
++static int tpm_cmd_ready(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->cmd_ready)
++		return 0;
++
++	return chip->ops->cmd_ready(chip);
++}
++
++static int tpm_go_idle(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->go_idle)
++		return 0;
++
++	return chip->ops->go_idle(chip);
++}
++
+ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 				struct tpm_space *space,
+ 				u8 *buf, size_t bufsiz,
+@@ -423,7 +450,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 		header->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
+ 		header->return_code = cpu_to_be32(TPM2_RC_COMMAND_CODE |
+ 						  TSS2_RESMGR_TPM_RC_LAYER);
+-		return bufsiz;
++		return sizeof(*header);
+ 	}
+ 
+ 	if (bufsiz > TPM_BUFSIZE)
+@@ -449,14 +476,15 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 	/* Store the decision as chip->locality will be changed. */
+ 	need_locality = chip->locality == -1;
+ 
+-	if (!(flags & TPM_TRANSMIT_RAW) && need_locality) {
+-		rc = tpm_request_locality(chip);
++	if (need_locality) {
++		rc = tpm_request_locality(chip, flags);
+ 		if (rc < 0)
+ 			goto out_no_locality;
+ 	}
+ 
+-	if (chip->dev.parent)
+-		pm_runtime_get_sync(chip->dev.parent);
++	rc = tpm_cmd_ready(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	rc = tpm2_prepare_space(chip, space, ordinal, buf);
+ 	if (rc)
+@@ -516,13 +544,16 @@ out_recv:
+ 	}
+ 
+ 	rc = tpm2_commit_space(chip, space, ordinal, buf, &len);
++	if (rc)
++		dev_err(&chip->dev, "tpm2_commit_space: error %d\n", rc);
+ 
+ out:
+-	if (chip->dev.parent)
+-		pm_runtime_put_sync(chip->dev.parent);
++	rc = tpm_go_idle(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	if (need_locality)
+-		tpm_relinquish_locality(chip);
++		tpm_relinquish_locality(chip, flags);
+ 
+ out_no_locality:
+ 	if (chip->ops->clk_enable != NULL)
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 4426649e431c..5f02dcd3df97 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -511,9 +511,17 @@ extern const struct file_operations tpm_fops;
+ extern const struct file_operations tpmrm_fops;
+ extern struct idr dev_nums_idr;
+ 
++/**
++ * enum tpm_transmit_flags
++ *
++ * @TPM_TRANSMIT_UNLOCKED: used to lock sequence of tpm_transmit calls.
++ * @TPM_TRANSMIT_RAW: prevent recursive calls into setup steps
++ *                    (go idle, locality,..). Always use with UNLOCKED
++ *                    as it will fail on double locking.
++ */
+ enum tpm_transmit_flags {
+-	TPM_TRANSMIT_UNLOCKED	= BIT(0),
+-	TPM_TRANSMIT_RAW	= BIT(1),
++	TPM_TRANSMIT_UNLOCKED = BIT(0),
++	TPM_TRANSMIT_RAW      = BIT(1),
+ };
+ 
+ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 6122d3276f72..11c85ed8c113 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -39,7 +39,8 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
+ 	for (i = 0; i < ARRAY_SIZE(space->session_tbl); i++) {
+ 		if (space->session_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->session_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 	}
+ }
+ 
+@@ -84,7 +85,7 @@ static int tpm2_load_context(struct tpm_chip *chip, u8 *buf,
+ 	tpm_buf_append(&tbuf, &buf[*offset], body_size);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 4,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -133,7 +134,7 @@ static int tpm2_save_context(struct tpm_chip *chip, u32 handle, u8 *buf,
+ 	tpm_buf_append_u32(&tbuf, handle);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 0,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -170,7 +171,8 @@ static void tpm2_flush_space(struct tpm_chip *chip)
+ 	for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++)
+ 		if (space->context_tbl[i] && ~space->context_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 
+ 	tpm2_flush_sessions(chip, space);
+ }
+@@ -377,7 +379,8 @@ static int tpm2_map_response_header(struct tpm_chip *chip, u32 cc, u8 *rsp,
+ 
+ 	return 0;
+ out_no_slots:
+-	tpm2_flush_context_cmd(chip, phandle, TPM_TRANSMIT_UNLOCKED);
++	tpm2_flush_context_cmd(chip, phandle,
++			       TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW);
+ 	dev_warn(&chip->dev, "%s: out of slots for 0x%08X\n", __func__,
+ 		 phandle);
+ 	return -ENOMEM;
+@@ -465,7 +468,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ 			return rc;
+ 
+ 		tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-				       TPM_TRANSMIT_UNLOCKED);
++				       TPM_TRANSMIT_UNLOCKED |
++				       TPM_TRANSMIT_RAW);
+ 		space->context_tbl[i] = ~0;
+ 	}
+ 
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 34fbc6cb097b..36952ef98f90 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -132,7 +132,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+ }
+ 
+ /**
+- * crb_go_idle - request tpm crb device to go the idle state
++ * __crb_go_idle - request tpm crb device to go the idle state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -147,7 +147,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+  *
+  * Return: 0 always
+  */
+-static int crb_go_idle(struct device *dev, struct crb_priv *priv)
++static int __crb_go_idle(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -163,11 +163,20 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+ 		dev_warn(dev, "goIdle timed out\n");
+ 		return -ETIME;
+ 	}
++
+ 	return 0;
+ }
+ 
++static int crb_go_idle(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_go_idle(dev, priv);
++}
++
+ /**
+- * crb_cmd_ready - request tpm crb device to enter ready state
++ * __crb_cmd_ready - request tpm crb device to enter ready state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -181,7 +190,7 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+  *
+  * Return: 0 on success -ETIME on timeout;
+  */
+-static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
++static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -200,6 +209,14 @@ static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ 	return 0;
+ }
+ 
++static int crb_cmd_ready(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_cmd_ready(dev, priv);
++}
++
+ static int __crb_request_locality(struct device *dev,
+ 				  struct crb_priv *priv, int loc)
+ {
+@@ -401,6 +418,8 @@ static const struct tpm_class_ops tpm_crb = {
+ 	.send = crb_send,
+ 	.cancel = crb_cancel,
+ 	.req_canceled = crb_req_canceled,
++	.go_idle  = crb_go_idle,
++	.cmd_ready = crb_cmd_ready,
+ 	.request_locality = crb_request_locality,
+ 	.relinquish_locality = crb_relinquish_locality,
+ 	.req_complete_mask = CRB_DRV_STS_COMPLETE,
+@@ -520,7 +539,7 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
+ 	 * PTT HW bug w/a: wake up the device to access
+ 	 * possibly not retained registers.
+ 	 */
+-	ret = crb_cmd_ready(dev, priv);
++	ret = __crb_cmd_ready(dev, priv);
+ 	if (ret)
+ 		goto out_relinquish_locality;
+ 
+@@ -565,7 +584,7 @@ out:
+ 	if (!ret)
+ 		priv->cmd_size = cmd_size;
+ 
+-	crb_go_idle(dev, priv);
++	__crb_go_idle(dev, priv);
+ 
+ out_relinquish_locality:
+ 
+@@ -628,32 +647,7 @@ static int crb_acpi_add(struct acpi_device *device)
+ 	chip->acpi_dev_handle = device->handle;
+ 	chip->flags = TPM_CHIP_FLAG_TPM2;
+ 
+-	rc = __crb_request_locality(dev, priv, 0);
+-	if (rc)
+-		return rc;
+-
+-	rc  = crb_cmd_ready(dev, priv);
+-	if (rc)
+-		goto out;
+-
+-	pm_runtime_get_noresume(dev);
+-	pm_runtime_set_active(dev);
+-	pm_runtime_enable(dev);
+-
+-	rc = tpm_chip_register(chip);
+-	if (rc) {
+-		crb_go_idle(dev, priv);
+-		pm_runtime_put_noidle(dev);
+-		pm_runtime_disable(dev);
+-		goto out;
+-	}
+-
+-	pm_runtime_put_sync(dev);
+-
+-out:
+-	__crb_relinquish_locality(dev, priv, 0);
+-
+-	return rc;
++	return tpm_chip_register(chip);
+ }
+ 
+ static int crb_acpi_remove(struct acpi_device *device)
+@@ -663,52 +657,11 @@ static int crb_acpi_remove(struct acpi_device *device)
+ 
+ 	tpm_chip_unregister(chip);
+ 
+-	pm_runtime_disable(dev);
+-
+ 	return 0;
+ }
+ 
+-static int __maybe_unused crb_pm_runtime_suspend(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_go_idle(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_runtime_resume(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_cmd_ready(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_suspend(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = tpm_pm_suspend(dev);
+-	if (ret)
+-		return ret;
+-
+-	return crb_pm_runtime_suspend(dev);
+-}
+-
+-static int __maybe_unused crb_pm_resume(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = crb_pm_runtime_resume(dev);
+-	if (ret)
+-		return ret;
+-
+-	return tpm_pm_resume(dev);
+-}
+-
+ static const struct dev_pm_ops crb_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(crb_pm_suspend, crb_pm_resume)
+-	SET_RUNTIME_PM_OPS(crb_pm_runtime_suspend, crb_pm_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(tpm_pm_suspend, tpm_pm_resume)
+ };
+ 
+ static const struct acpi_device_id crb_device_ids[] = {
+diff --git a/drivers/clk/clk-npcm7xx.c b/drivers/clk/clk-npcm7xx.c
+index 740af90a9508..c5edf8f2fd19 100644
+--- a/drivers/clk/clk-npcm7xx.c
++++ b/drivers/clk/clk-npcm7xx.c
+@@ -558,8 +558,8 @@ static void __init npcm7xx_clk_init(struct device_node *clk_np)
+ 	if (!clk_base)
+ 		goto npcm7xx_init_error;
+ 
+-	npcm7xx_clk_data = kzalloc(sizeof(*npcm7xx_clk_data->hws) *
+-		NPCM7XX_NUM_CLOCKS + sizeof(npcm7xx_clk_data), GFP_KERNEL);
++	npcm7xx_clk_data = kzalloc(struct_size(npcm7xx_clk_data, hws,
++				   NPCM7XX_NUM_CLOCKS), GFP_KERNEL);
+ 	if (!npcm7xx_clk_data)
+ 		goto npcm7xx_init_np_err;
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index bca10d618f0a..2a8634a52856 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -631,7 +631,7 @@ static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ 	MUX(0, "clk_i2sout_src", mux_i2sch_p, CLK_SET_RATE_PARENT,
+ 			RK3399_CLKSEL_CON(31), 0, 2, MFLAGS),
+ 	COMPOSITE_NODIV(SCLK_I2S_8CH_OUT, "clk_i2sout", mux_i2sout_p, CLK_SET_RATE_PARENT,
+-			RK3399_CLKSEL_CON(30), 8, 2, MFLAGS,
++			RK3399_CLKSEL_CON(31), 2, 1, MFLAGS,
+ 			RK3399_CLKGATE_CON(8), 12, GFLAGS),
+ 
+ 	/* uart */
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index 55c0cc309198..7588a9eb0ee0 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -112,7 +112,7 @@ udl_fb_user_fb_create(struct drm_device *dev,
+ 		      struct drm_file *file,
+ 		      const struct drm_mode_fb_cmd2 *mode_cmd);
+ 
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset, u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr);
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index d5583190f3e4..8746eeeec44d 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -90,7 +90,10 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 	int bytes_identical = 0;
+ 	struct urb *urb;
+ 	int aligned_x;
+-	int bpp = fb->base.format->cpp[0];
++	int log_bpp;
++
++	BUG_ON(!is_power_of_2(fb->base.format->cpp[0]));
++	log_bpp = __ffs(fb->base.format->cpp[0]);
+ 
+ 	if (!fb->active_16)
+ 		return 0;
+@@ -125,12 +128,12 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 
+ 	for (i = y; i < y + height ; i++) {
+ 		const int line_offset = fb->base.pitches[0] * i;
+-		const int byte_offset = line_offset + (x * bpp);
+-		const int dev_byte_offset = (fb->base.width * bpp * i) + (x * bpp);
+-		if (udl_render_hline(dev, bpp, &urb,
++		const int byte_offset = line_offset + (x << log_bpp);
++		const int dev_byte_offset = (fb->base.width * i + x) << log_bpp;
++		if (udl_render_hline(dev, log_bpp, &urb,
+ 				     (char *) fb->obj->vmapping,
+ 				     &cmd, byte_offset, dev_byte_offset,
+-				     width * bpp,
++				     width << log_bpp,
+ 				     &bytes_identical, &bytes_sent))
+ 			goto error;
+ 	}
+@@ -149,7 +152,7 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ error:
+ 	atomic_add(bytes_sent, &udl->bytes_sent);
+ 	atomic_add(bytes_identical, &udl->bytes_identical);
+-	atomic_add(width*height*bpp, &udl->bytes_rendered);
++	atomic_add((width * height) << log_bpp, &udl->bytes_rendered);
+ 	end_cycles = get_cycles();
+ 	atomic_add(((unsigned int) ((end_cycles - start_cycles)
+ 		    >> 10)), /* Kcycles */
+@@ -221,7 +224,7 @@ static int udl_fb_open(struct fb_info *info, int user)
+ 
+ 		struct fb_deferred_io *fbdefio;
+ 
+-		fbdefio = kmalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
++		fbdefio = kzalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
+ 
+ 		if (fbdefio) {
+ 			fbdefio->delay = DL_DEFIO_WRITE_DELAY;
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index d518de8f496b..7e9ad926926a 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -170,18 +170,13 @@ static void udl_free_urb_list(struct drm_device *dev)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	DRM_DEBUG("Waiting for completes and freeing all render urbs\n");
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at shutdown*/
+-		ret = down_interruptible(&udl->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&udl->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&udl->urbs.lock, flags);
+ 
+@@ -205,17 +200,22 @@ static void udl_free_urb_list(struct drm_device *dev)
+ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ {
+ 	struct udl_device *udl = dev->dev_private;
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&udl->urbs.lock);
+ 
++retry:
+ 	udl->urbs.size = size;
+ 	INIT_LIST_HEAD(&udl->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&udl->urbs.limit_sem, 0);
++	udl->urbs.count = 0;
++	udl->urbs.available = 0;
++
++	while (udl->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(struct urb_node), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+@@ -231,11 +231,16 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(udl->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(udl->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				udl_free_urb_list(dev);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -246,16 +251,14 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &udl->urbs.list);
+ 
+-		i++;
++		up(&udl->urbs.limit_sem);
++		udl->urbs.count++;
++		udl->urbs.available++;
+ 	}
+ 
+-	sema_init(&udl->urbs.limit_sem, i);
+-	udl->urbs.count = i;
+-	udl->urbs.available = i;
+-
+-	DRM_DEBUG("allocated %d %d byte urbs\n", i, (int) size);
++	DRM_DEBUG("allocated %d %d byte urbs\n", udl->urbs.count, (int) size);
+ 
+-	return i;
++	return udl->urbs.count;
+ }
+ 
+ struct urb *udl_get_urb(struct drm_device *dev)
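The reworked `udl_alloc_urb_list()` above no longer gives up when a coherent buffer allocation fails: it halves the buffer size (down to `PAGE_SIZE`), frees what it has, and retries, so the same total transfer capacity can be reached with smaller chunks. A hedged userspace sketch of that retry shape, where `alloc_ok()` merely stands in for `usb_alloc_coherent()` succeeding:

```c
#include <assert.h>
#include <stddef.h>

static size_t fail_above;                 /* allocations larger than this "fail" */
static int alloc_ok(size_t sz) { return sz <= fail_above; }

/* Accumulate `size`-byte chunks until `wanted` bytes are covered; on an
 * allocation failure, halve the chunk size (down to `floor`) and restart
 * from scratch, mirroring the goto-retry loop in the patch. */
static int alloc_total(size_t wanted, size_t size, size_t floor, size_t *chunk_out)
{
    int count;

retry:
    count = 0;
    while ((size_t)count * size < wanted) {
        if (!alloc_ok(size)) {
            if (size > floor) {
                size /= 2;
                goto retry;
            }
            break;
        }
        count++;
    }
    *chunk_out = size;
    return count;
}
```

As in the patch, the semaphore/count bookkeeping must be reset on every retry; here that is the `count = 0` after the `retry:` label.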
+diff --git a/drivers/gpu/drm/udl/udl_transfer.c b/drivers/gpu/drm/udl/udl_transfer.c
+index b992644c17e6..f3331d33547a 100644
+--- a/drivers/gpu/drm/udl/udl_transfer.c
++++ b/drivers/gpu/drm/udl/udl_transfer.c
+@@ -83,12 +83,12 @@ static inline u16 pixel32_to_be16(const uint32_t pixel)
+ 		((pixel >> 8) & 0xf800));
+ }
+ 
+-static inline u16 get_pixel_val16(const uint8_t *pixel, int bpp)
++static inline u16 get_pixel_val16(const uint8_t *pixel, int log_bpp)
+ {
+-	u16 pixel_val16 = 0;
+-	if (bpp == 2)
++	u16 pixel_val16;
++	if (log_bpp == 1)
+ 		pixel_val16 = *(const uint16_t *)pixel;
+-	else if (bpp == 4)
++	else
+ 		pixel_val16 = pixel32_to_be16(*(const uint32_t *)pixel);
+ 	return pixel_val16;
+ }
+@@ -125,8 +125,9 @@ static void udl_compress_hline16(
+ 	const u8 *const pixel_end,
+ 	uint32_t *device_address_ptr,
+ 	uint8_t **command_buffer_ptr,
+-	const uint8_t *const cmd_buffer_end, int bpp)
++	const uint8_t *const cmd_buffer_end, int log_bpp)
+ {
++	const int bpp = 1 << log_bpp;
+ 	const u8 *pixel = *pixel_start_ptr;
+ 	uint32_t dev_addr  = *device_address_ptr;
+ 	uint8_t *cmd = *command_buffer_ptr;
+@@ -153,12 +154,12 @@ static void udl_compress_hline16(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
+-					(unsigned long)(pixel_end - pixel) / bpp,
+-					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) * bpp;
++		cmd_pixel_end = pixel + (min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel) >> log_bpp,
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) << log_bpp);
+ 
+ 		prefetch_range((void *) pixel, cmd_pixel_end - pixel);
+-		pixel_val16 = get_pixel_val16(pixel, bpp);
++		pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const u8 *const start = pixel;
+@@ -170,7 +171,7 @@ static void udl_compress_hline16(
+ 			pixel += bpp;
+ 
+ 			while (pixel < cmd_pixel_end) {
+-				pixel_val16 = get_pixel_val16(pixel, bpp);
++				pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 				if (pixel_val16 != repeating_pixel_val16)
+ 					break;
+ 				pixel += bpp;
+@@ -179,10 +180,10 @@ static void udl_compress_hline16(
+ 			if (unlikely(pixel > start + bpp)) {
+ 				/* go back and fill in raw pixel count */
+ 				*raw_pixels_count_byte = (((start -
+-						raw_pixel_start) / bpp) + 1) & 0xFF;
++						raw_pixel_start) >> log_bpp) + 1) & 0xFF;
+ 
+ 				/* immediately after raw data is repeat byte */
+-				*cmd++ = (((pixel - start) / bpp) - 1) & 0xFF;
++				*cmd++ = (((pixel - start) >> log_bpp) - 1) & 0xFF;
+ 
+ 				/* Then start another raw pixel span */
+ 				raw_pixel_start = pixel;
+@@ -192,14 +193,14 @@ static void udl_compress_hline16(
+ 
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+-			*raw_pixels_count_byte = ((pixel-raw_pixel_start) / bpp) & 0xFF;
++			*raw_pixels_count_byte = ((pixel - raw_pixel_start) >> log_bpp) & 0xFF;
+ 		} else {
+ 			/* undo unused byte */
+ 			cmd--;
+ 		}
+ 
+-		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) / bpp) & 0xFF;
+-		dev_addr += ((pixel - cmd_pixel_start) / bpp) * 2;
++		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) >> log_bpp) & 0xFF;
++		dev_addr += ((pixel - cmd_pixel_start) >> log_bpp) * 2;
+ 	}
+ 
+ 	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
+@@ -222,19 +223,19 @@ static void udl_compress_hline16(
+  * (that we can only write to, slowly, and can never read), and (optionally)
+  * our shadow copy that tracks what's been sent to that hardware buffer.
+  */
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset,
+ 		     u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr)
+ {
+ 	const u8 *line_start, *line_end, *next_pixel;
+-	u32 base16 = 0 + (device_byte_offset / bpp) * 2;
++	u32 base16 = 0 + (device_byte_offset >> log_bpp) * 2;
+ 	struct urb *urb = *urb_ptr;
+ 	u8 *cmd = *urb_buf_ptr;
+ 	u8 *cmd_end = (u8 *) urb->transfer_buffer + urb->transfer_buffer_length;
+ 
+-	BUG_ON(!(bpp == 2 || bpp == 4));
++	BUG_ON(!(log_bpp == 1 || log_bpp == 2));
+ 
+ 	line_start = (u8 *) (front + byte_offset);
+ 	next_pixel = line_start;
+@@ -244,7 +245,7 @@ int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
+ 
+ 		udl_compress_hline16(&next_pixel,
+ 			     line_end, &base16,
+-			     (u8 **) &cmd, (u8 *) cmd_end, bpp);
++			     (u8 **) &cmd, (u8 *) cmd_end, log_bpp);
+ 
+ 		if (cmd >= cmd_end) {
+ 			int len = cmd - (u8 *) urb->transfer_buffer;
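The bpp-to-log_bpp conversion running through `udl_fb.c` and `udl_transfer.c` relies on bytes-per-pixel being a power of two (2 or 4), so every multiply and divide by bpp becomes a shift, avoiding runtime division in the hot pixel loops. A small sketch of the arithmetic, using `__builtin_ctz` as a userspace stand-in for the kernel's `__ffs()`:

```c
#include <assert.h>

/* log2 of a power-of-two bpp; analog of __ffs(cpp[0]) in udl_handle_damage().
 * Undefined for bpp == 0 or non-power-of-two values, hence the BUG_ON above. */
static int log2_bpp(unsigned int bpp)
{
    return __builtin_ctz(bpp);
}

/* Byte offset of pixel (x, y): pitch * y + x * bpp, expressed as a shift. */
static unsigned int byte_offset(unsigned int pitch, unsigned int y,
                                unsigned int x, int log_bpp)
{
    return pitch * y + (x << log_bpp);
}
```

The same substitution explains every `>> log_bpp` and `<< log_bpp` in the compression hunks: they are the old `/ bpp` and `* bpp` rewritten for a known power of two.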
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 17c6460ae351..577e2ede5a1a 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -105,6 +105,8 @@ static const struct tctl_offset tctl_offset_table[] = {
+ 	{ 0x17, "AMD Ryzen Threadripper 1950", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1920", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1910", 10000 },
++	{ 0x17, "AMD Ryzen Threadripper 2950X", 27000 },
++	{ 0x17, "AMD Ryzen Threadripper 2990WX", 27000 },
+ };
+ 
+ static void read_htcreg_pci(struct pci_dev *pdev, u32 *regval)
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index f9d1349c3286..b89e8379d898 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -63,6 +63,7 @@
+ #include <linux/bitops.h>
+ #include <linux/dmi.h>
+ #include <linux/io.h>
++#include <linux/nospec.h>
+ #include "lm75.h"
+ 
+ #define USE_ALTERNATE
+@@ -2689,6 +2690,7 @@ store_pwm_weight_temp_sel(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 	if (val > NUM_TEMP)
+ 		return -EINVAL;
++	val = array_index_nospec(val, NUM_TEMP + 1);
+ 	if (val && (!(data->have_temp & BIT(val - 1)) ||
+ 		    !data->temp_src[val - 1]))
+ 		return -EINVAL;
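The nct6775 hunk above bounds-checks `val` and then clamps it with `array_index_nospec()` so a mispredicted branch cannot speculatively index past `NUM_TEMP` (Spectre v1 hardening). A simplified, non-hardened sketch of the contract — the real kernel macro builds the mask without a conditional so the compiler cannot reintroduce a branch:

```c
#include <assert.h>
#include <stddef.h>

/* Return index when index < size, else 0.  The caller has already rejected
 * out-of-range values architecturally, so the clamp only matters under
 * speculative execution. */
static size_t index_nospec(size_t index, size_t size)
{
    size_t mask = (size_t)0 - (size_t)(index < size);

    return index & mask;
}
```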
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index f7a96bcf94a6..5349e22b5c78 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -2103,12 +2103,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	if (smmu->version == ARM_SMMU_V2 &&
+-	    smmu->num_context_banks != smmu->num_context_irqs) {
+-		dev_err(dev,
+-			"found only %d context interrupt(s) but %d required\n",
+-			smmu->num_context_irqs, smmu->num_context_banks);
+-		return -ENODEV;
++	if (smmu->version == ARM_SMMU_V2) {
++		if (smmu->num_context_banks > smmu->num_context_irqs) {
++			dev_err(dev,
++			      "found only %d context irq(s) but %d required\n",
++			      smmu->num_context_irqs, smmu->num_context_banks);
++			return -ENODEV;
++		}
++
++		/* Ignore superfluous interrupts */
++		smmu->num_context_irqs = smmu->num_context_banks;
+ 	}
+ 
+ 	for (i = 0; i < smmu->num_global_irqs; ++i) {
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index 7465f17e1559..38175ebd92d4 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -312,7 +312,6 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
+ 		}
+ 	}
+ 
+-	*offset = 0;
+ 	cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
+ 	if (!cb) {
+ 		rets = -ENOMEM;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index f4a5a317d4ae..e1086a010b88 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -740,7 +740,7 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
+ 	for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
+ 		nand_read_page_op(chip, page, s * eccsize, NULL, 0);
+ 		chip->ecc.hwctl(mtd, NAND_ECC_READ);
+-		chip->read_buf(mtd, p, eccsize);
++		nand_read_data_op(chip, p, eccsize, false);
+ 
+ 		for (j = 0; j < eccbytes;) {
+ 			struct mtd_oob_region oobregion;
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index ebb1d141b900..c88588815ca1 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2677,6 +2677,21 @@ static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
+ 	return 0;
+ }
+ 
++static void marvell_nfc_reset(struct marvell_nfc *nfc)
++{
++	/*
++	 * ECC operations and interruptions are only enabled when specifically
++	 * needed. ECC shall not be activated in the early stages (fails probe).
++	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
++	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
++	 * offset in the read page and this will fail the protection.
++	 */
++	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
++		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
++	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
++	writel_relaxed(0, nfc->regs + NDECCCTRL);
++}
++
+ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ {
+ 	struct device_node *np = nfc->dev->of_node;
+@@ -2715,17 +2730,7 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ 	if (!nfc->caps->is_nfcv2)
+ 		marvell_nfc_init_dma(nfc);
+ 
+-	/*
+-	 * ECC operations and interruptions are only enabled when specifically
+-	 * needed. ECC shall not be activated in the early stages (fails probe).
+-	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
+-	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
+-	 * offset in the read page and this will fail the protection.
+-	 */
+-	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
+-		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
+-	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
+-	writel_relaxed(0, nfc->regs + NDECCCTRL);
++	marvell_nfc_reset(nfc);
+ 
+ 	return 0;
+ }
+@@ -2840,6 +2845,51 @@ static int marvell_nfc_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused marvell_nfc_suspend(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	struct marvell_nand_chip *chip;
++
++	list_for_each_entry(chip, &nfc->chips, node)
++		marvell_nfc_wait_ndrun(&chip->chip);
++
++	clk_disable_unprepare(nfc->reg_clk);
++	clk_disable_unprepare(nfc->core_clk);
++
++	return 0;
++}
++
++static int __maybe_unused marvell_nfc_resume(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_prepare_enable(nfc->core_clk);
++	if (ret < 0)
++		return ret;
++
++	if (!IS_ERR(nfc->reg_clk)) {
++		ret = clk_prepare_enable(nfc->reg_clk);
++		if (ret < 0)
++			return ret;
++	}
++
++	/*
++	 * Reset nfc->selected_chip so the next command will cause the timing
++	 * registers to be restored in marvell_nfc_select_chip().
++	 */
++	nfc->selected_chip = NULL;
++
++	/* Reset registers that have lost their contents */
++	marvell_nfc_reset(nfc);
++
++	return 0;
++}
++
++static const struct dev_pm_ops marvell_nfc_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(marvell_nfc_suspend, marvell_nfc_resume)
++};
++
+ static const struct marvell_nfc_caps marvell_armada_8k_nfc_caps = {
+ 	.max_cs_nb = 4,
+ 	.max_rb_nb = 2,
+@@ -2924,6 +2974,7 @@ static struct platform_driver marvell_nfc_driver = {
+ 	.driver	= {
+ 		.name		= "marvell-nfc",
+ 		.of_match_table = marvell_nfc_of_ids,
++		.pm		= &marvell_nfc_pm_ops,
+ 	},
+ 	.id_table = marvell_nfc_platform_ids,
+ 	.probe = marvell_nfc_probe,
+diff --git a/drivers/mtd/nand/raw/nand_hynix.c b/drivers/mtd/nand/raw/nand_hynix.c
+index d542908a0ebb..766df4134482 100644
+--- a/drivers/mtd/nand/raw/nand_hynix.c
++++ b/drivers/mtd/nand/raw/nand_hynix.c
+@@ -100,6 +100,16 @@ static int hynix_nand_reg_write_op(struct nand_chip *chip, u8 addr, u8 val)
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+ 	u16 column = ((u16)addr << 8) | addr;
+ 
++	if (chip->exec_op) {
++		struct nand_op_instr instrs[] = {
++			NAND_OP_ADDR(1, &addr, 0),
++			NAND_OP_8BIT_DATA_OUT(1, &val, 0),
++		};
++		struct nand_operation op = NAND_OPERATION(instrs);
++
++		return nand_exec_op(chip, &op);
++	}
++
+ 	chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
+ 	chip->write_byte(mtd, val);
+ 
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 6a5519f0ff25..49b4e70fefe7 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -213,6 +213,8 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+ #define QPIC_PER_CW_CMD_SGL		32
+ #define QPIC_PER_CW_DATA_SGL		8
+ 
++#define QPIC_NAND_COMPLETION_TIMEOUT	msecs_to_jiffies(2000)
++
+ /*
+  * Flags used in DMA descriptor preparation helper functions
+  * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
+@@ -245,6 +247,11 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+  * @tx_sgl_start - start index in data sgl for tx.
+  * @rx_sgl_pos - current index in data sgl for rx.
+  * @rx_sgl_start - start index in data sgl for rx.
++ * @wait_second_completion - wait for second DMA desc completion before making
++ *			     the NAND transfer completion.
++ * @txn_done - completion for NAND transfer.
++ * @last_data_desc - last DMA desc in data channel (tx/rx).
++ * @last_cmd_desc - last DMA desc in command channel.
+  */
+ struct bam_transaction {
+ 	struct bam_cmd_element *bam_ce;
+@@ -258,6 +265,10 @@ struct bam_transaction {
+ 	u32 tx_sgl_start;
+ 	u32 rx_sgl_pos;
+ 	u32 rx_sgl_start;
++	bool wait_second_completion;
++	struct completion txn_done;
++	struct dma_async_tx_descriptor *last_data_desc;
++	struct dma_async_tx_descriptor *last_cmd_desc;
+ };
+ 
+ /*
+@@ -504,6 +515,8 @@ alloc_bam_transaction(struct qcom_nand_controller *nandc)
+ 
+ 	bam_txn->data_sgl = bam_txn_buf;
+ 
++	init_completion(&bam_txn->txn_done);
++
+ 	return bam_txn;
+ }
+ 
+@@ -523,11 +536,33 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
+ 	bam_txn->tx_sgl_start = 0;
+ 	bam_txn->rx_sgl_pos = 0;
+ 	bam_txn->rx_sgl_start = 0;
++	bam_txn->last_data_desc = NULL;
++	bam_txn->wait_second_completion = false;
+ 
+ 	sg_init_table(bam_txn->cmd_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_CMD_SGL);
+ 	sg_init_table(bam_txn->data_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_DATA_SGL);
++
++	reinit_completion(&bam_txn->txn_done);
++}
++
++/* Callback for DMA descriptor completion */
++static void qpic_bam_dma_done(void *data)
++{
++	struct bam_transaction *bam_txn = data;
++
++	/*
++	 * In case of data transfer with NAND, 2 callbacks will be generated.
++	 * One for command channel and another one for data channel.
++	 * If current transaction has data descriptors
++	 * (i.e. wait_second_completion is true), then set this to false
++	 * and wait for second DMA descriptor completion.
++	 */
++	if (bam_txn->wait_second_completion)
++		bam_txn->wait_second_completion = false;
++	else
++		complete(&bam_txn->txn_done);
+ }
+ 
+ static inline struct qcom_nand_host *to_qcom_nand_host(struct nand_chip *chip)
+@@ -756,6 +791,12 @@ static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
+ 
+ 	desc->dma_desc = dma_desc;
+ 
++	/* update last data/command descriptor */
++	if (chan == nandc->cmd_chan)
++		bam_txn->last_cmd_desc = dma_desc;
++	else
++		bam_txn->last_data_desc = dma_desc;
++
+ 	list_add_tail(&desc->node, &nandc->desc_list);
+ 
+ 	return 0;
+@@ -1273,10 +1314,20 @@ static int submit_descs(struct qcom_nand_controller *nandc)
+ 		cookie = dmaengine_submit(desc->dma_desc);
+ 
+ 	if (nandc->props->is_bam) {
++		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
++		bam_txn->last_cmd_desc->callback_param = bam_txn;
++		if (bam_txn->last_data_desc) {
++			bam_txn->last_data_desc->callback = qpic_bam_dma_done;
++			bam_txn->last_data_desc->callback_param = bam_txn;
++			bam_txn->wait_second_completion = true;
++		}
++
+ 		dma_async_issue_pending(nandc->tx_chan);
+ 		dma_async_issue_pending(nandc->rx_chan);
++		dma_async_issue_pending(nandc->cmd_chan);
+ 
+-		if (dma_sync_wait(nandc->cmd_chan, cookie) != DMA_COMPLETE)
++		if (!wait_for_completion_timeout(&bam_txn->txn_done,
++						 QPIC_NAND_COMPLETION_TIMEOUT))
+ 			return -ETIMEDOUT;
+ 	} else {
+ 		if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)
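The qcom_nandc rework above replaces `dma_sync_wait()` with a completion signalled from DMA callbacks: when a transaction has both a command and a data descriptor, the first callback only clears `wait_second_completion` and the second one calls `complete()`. The state machine in `qpic_bam_dma_done()` can be sketched (without the DMA engine) as:

```c
#include <assert.h>

struct txn {
    int wait_second_completion;
    int done;                        /* stands in for complete(&txn_done) */
};

/* Mirror of qpic_bam_dma_done(): only the last expected callback signals
 * completion; an earlier one just arms the final callback. */
static void dma_done(struct txn *t)
{
    if (t->wait_second_completion)
        t->wait_second_completion = 0;
    else
        t->done = 1;
}
```

`submit_descs()` then waits with `wait_for_completion_timeout()` rather than polling the command channel, which is why the patch also issues the pending work on the command channel explicitly.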
+diff --git a/drivers/net/wireless/broadcom/b43/leds.c b/drivers/net/wireless/broadcom/b43/leds.c
+index cb987c2ecc6b..87131f663292 100644
+--- a/drivers/net/wireless/broadcom/b43/leds.c
++++ b/drivers/net/wireless/broadcom/b43/leds.c
+@@ -131,7 +131,7 @@ static int b43_register_led(struct b43_wldev *dev, struct b43_led *led,
+ 	led->wl = dev->wl;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 	atomic_set(&led->state, 0);
+ 
+ 	led->led_dev.name = led->name;
+diff --git a/drivers/net/wireless/broadcom/b43legacy/leds.c b/drivers/net/wireless/broadcom/b43legacy/leds.c
+index fd4565389c77..bc922118b6ac 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/leds.c
++++ b/drivers/net/wireless/broadcom/b43legacy/leds.c
+@@ -101,7 +101,7 @@ static int b43legacy_register_led(struct b43legacy_wldev *dev,
+ 	led->dev = dev;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 
+ 	led->led_dev.name = led->name;
+ 	led->led_dev.default_trigger = default_trigger;
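Both b43 hunks above swap `strncpy()` for `strlcpy()` because `strncpy()` leaves `dst` unterminated whenever `src` is at least `sizeof(dst)` bytes long. A minimal userspace implementation with the BSD/kernel semantics — `my_strlcpy` is this sketch's name, not a libc function:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static char led_name[4];   /* deliberately tiny, like a fixed led->name[] */

/* Copy at most size-1 bytes, always NUL-terminate, and return strlen(src)
 * so the caller can detect truncation. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size) {
        size_t n = len >= size ? size - 1 : len;

        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}
```

With `strncpy()` the truncated name would later be handed to `led->led_dev.name` without a terminator; `strlcpy()` closes that off.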
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index ddd441b1516a..e10b0d20c4a7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -316,6 +316,14 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+ 		old_value = *dbbuf_db;
+ 		*dbbuf_db = value;
+ 
++		/*
++		 * Ensure that the doorbell is updated before reading the event
++		 * index from memory.  The controller needs to provide similar
++		 * ordering to ensure the event index is updated before reading
++		 * the doorbell.
++		 */
++		mb();
++
+ 		if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value))
+ 			return false;
+ 	}
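The `mb()` added above makes the shadow-doorbell store visible before the event index is re-read; the `nvme_dbbuf_need_event()` check it pairs with decides whether the controller's real doorbell must still be rung. A wrap-safe sketch of that window test, following the NVMe shadow-doorbell rule that an event is needed iff the event index lies in the half-open window (old, new]:

```c
#include <assert.h>
#include <stdint.h>

/* Event needed iff event_idx is in (old, new_val], computed with wrapping
 * 16-bit arithmetic so queue-index wraparound is handled for free. */
static int dbbuf_need_event(uint16_t event_idx, uint16_t new_val, uint16_t old)
{
    return (uint16_t)(new_val - event_idx - 1) < (uint16_t)(new_val - old);
}
```

Without the barrier, the event-index load could be reordered before the doorbell store and this check could skip a ring the controller was waiting for.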
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index c3bdd90b1422..deb7870b3d1a 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -429,7 +429,7 @@ static void imx1_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > info->ngroups)
++	if (group >= info->ngroups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 45b7cb01f410..307403decf76 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1133,10 +1133,10 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		},
+ 	},
+ 	{
+-		.ident = "Lenovo Legion Y520-15IKBN",
++		.ident = "Lenovo Legion Y520-15IKB",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKBN"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKB"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 8e3d0146ff8c..04791ea5d97b 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -895,7 +895,6 @@ static int wmi_dev_probe(struct device *dev)
+ 	struct wmi_driver *wdriver =
+ 		container_of(dev->driver, struct wmi_driver, driver);
+ 	int ret = 0;
+-	int count;
+ 	char *buf;
+ 
+ 	if (ACPI_FAILURE(wmi_method_enable(wblock, 1)))
+@@ -917,9 +916,8 @@ static int wmi_dev_probe(struct device *dev)
+ 			goto probe_failure;
+ 		}
+ 
+-		count = get_order(wblock->req_buf_size);
+-		wblock->handler_data = (void *)__get_free_pages(GFP_KERNEL,
+-								count);
++		wblock->handler_data = kmalloc(wblock->req_buf_size,
++					       GFP_KERNEL);
+ 		if (!wblock->handler_data) {
+ 			ret = -ENOMEM;
+ 			goto probe_failure;
+@@ -964,8 +962,7 @@ static int wmi_dev_remove(struct device *dev)
+ 	if (wdriver->filter_callback) {
+ 		misc_deregister(&wblock->char_dev);
+ 		kfree(wblock->char_dev.name);
+-		free_pages((unsigned long)wblock->handler_data,
+-			   get_order(wblock->req_buf_size));
++		kfree(wblock->handler_data);
+ 	}
+ 
+ 	if (wdriver->remove)
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index 28dc056eaafa..bc462d1ec963 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -241,10 +241,10 @@ static int gab_probe(struct platform_device *pdev)
+ 	struct power_supply_desc *psy_desc;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct gab_platform_data *pdata = pdev->dev.platform_data;
+-	enum power_supply_property *properties;
+ 	int ret = 0;
+ 	int chan;
+-	int index = 0;
++	int index = ARRAY_SIZE(gab_props);
++	bool any = false;
+ 
+ 	adc_bat = devm_kzalloc(&pdev->dev, sizeof(*adc_bat), GFP_KERNEL);
+ 	if (!adc_bat) {
+@@ -278,8 +278,6 @@ static int gab_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	memcpy(psy_desc->properties, gab_props, sizeof(gab_props));
+-	properties = (enum power_supply_property *)
+-			((char *)psy_desc->properties + sizeof(gab_props));
+ 
+ 	/*
+ 	 * getting channel from iio and copying the battery properties
+@@ -293,15 +291,22 @@ static int gab_probe(struct platform_device *pdev)
+ 			adc_bat->channel[chan] = NULL;
+ 		} else {
+ 			/* copying properties for supported channels only */
+-			memcpy(properties + sizeof(*(psy_desc->properties)) * index,
+-					&gab_dyn_props[chan],
+-					sizeof(gab_dyn_props[chan]));
+-			index++;
++			int index2;
++
++			for (index2 = 0; index2 < index; index2++) {
++				if (psy_desc->properties[index2] ==
++				    gab_dyn_props[chan])
++					break;	/* already known */
++			}
++			if (index2 == index)	/* really new */
++				psy_desc->properties[index++] =
++					gab_dyn_props[chan];
++			any = true;
+ 		}
+ 	}
+ 
+ 	/* none of the channels are supported so let's bail out */
+-	if (index == 0) {
++	if (!any) {
+ 		ret = -ENODEV;
+ 		goto second_mem_fail;
+ 	}
+@@ -312,7 +317,7 @@ static int gab_probe(struct platform_device *pdev)
+ 	 * as some channels may not be supported by the device. So
+ 	 * we need to take care of that.
+ 	 */
+-	psy_desc->num_properties = ARRAY_SIZE(gab_props) + index;
++	psy_desc->num_properties = index;
+ 
+ 	adc_bat->psy = power_supply_register(&pdev->dev, psy_desc, &psy_cfg);
+ 	if (IS_ERR(adc_bat->psy)) {
+diff --git a/drivers/regulator/arizona-ldo1.c b/drivers/regulator/arizona-ldo1.c
+index f6d6a4ad9e8a..e976d073f28d 100644
+--- a/drivers/regulator/arizona-ldo1.c
++++ b/drivers/regulator/arizona-ldo1.c
+@@ -36,6 +36,8 @@ struct arizona_ldo1 {
+ 
+ 	struct regulator_consumer_supply supply;
+ 	struct regulator_init_data init_data;
++
++	struct gpio_desc *ena_gpiod;
+ };
+ 
+ static int arizona_ldo1_hc_list_voltage(struct regulator_dev *rdev,
+@@ -253,12 +255,17 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	/* We assume that high output = regulator off */
+-	config.ena_gpiod = devm_gpiod_get_optional(&pdev->dev, "wlf,ldoena",
+-						   GPIOD_OUT_HIGH);
++	/* We assume that high output = regulator off
++	 * Don't use devm, since we need to get against the parent device
++	 * so clean up would happen at the wrong time
++	 */
++	config.ena_gpiod = gpiod_get_optional(parent_dev, "wlf,ldoena",
++					      GPIOD_OUT_LOW);
+ 	if (IS_ERR(config.ena_gpiod))
+ 		return PTR_ERR(config.ena_gpiod);
+ 
++	ldo1->ena_gpiod = config.ena_gpiod;
++
+ 	if (pdata->init_data)
+ 		config.init_data = pdata->init_data;
+ 	else
+@@ -276,6 +283,9 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 	of_node_put(config.of_node);
+ 
+ 	if (IS_ERR(ldo1->regulator)) {
++		if (config.ena_gpiod)
++			gpiod_put(config.ena_gpiod);
++
+ 		ret = PTR_ERR(ldo1->regulator);
+ 		dev_err(&pdev->dev, "Failed to register LDO1 supply: %d\n",
+ 			ret);
+@@ -334,8 +344,19 @@ static int arizona_ldo1_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
++static int arizona_ldo1_remove(struct platform_device *pdev)
++{
++	struct arizona_ldo1 *ldo1 = platform_get_drvdata(pdev);
++
++	if (ldo1->ena_gpiod)
++		gpiod_put(ldo1->ena_gpiod);
++
++	return 0;
++}
++
+ static struct platform_driver arizona_ldo1_driver = {
+ 	.probe = arizona_ldo1_probe,
++	.remove = arizona_ldo1_remove,
+ 	.driver		= {
+ 		.name	= "arizona-ldo1",
+ 	},
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index f4ca72dd862f..9c7d9da42ba0 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -631,21 +631,20 @@ static inline unsigned long qdio_aob_for_buffer(struct qdio_output_q *q,
+ 	unsigned long phys_aob = 0;
+ 
+ 	if (!q->use_cq)
+-		goto out;
++		return 0;
+ 
+ 	if (!q->aobs[bufnr]) {
+ 		struct qaob *aob = qdio_allocate_aob();
+ 		q->aobs[bufnr] = aob;
+ 	}
+ 	if (q->aobs[bufnr]) {
+-		q->sbal_state[bufnr].flags = QDIO_OUTBUF_STATE_FLAG_NONE;
+ 		q->sbal_state[bufnr].aob = q->aobs[bufnr];
+ 		q->aobs[bufnr]->user1 = (u64) q->sbal_state[bufnr].user;
+ 		phys_aob = virt_to_phys(q->aobs[bufnr]);
+ 		WARN_ON_ONCE(phys_aob & 0xFF);
+ 	}
+ 
+-out:
++	q->sbal_state[bufnr].flags = 0;
+ 	return phys_aob;
+ }
+ 
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index ff1d612f6fb9..41cdda7a926b 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -557,34 +557,46 @@ int sas_ata_init(struct domain_device *found_dev)
+ {
+ 	struct sas_ha_struct *ha = found_dev->port->ha;
+ 	struct Scsi_Host *shost = ha->core.shost;
++	struct ata_host *ata_host;
+ 	struct ata_port *ap;
+ 	int rc;
+ 
+-	ata_host_init(&found_dev->sata_dev.ata_host, ha->dev, &sas_sata_ops);
+-	ap = ata_sas_port_alloc(&found_dev->sata_dev.ata_host,
+-				&sata_port_info,
+-				shost);
++	ata_host = kzalloc(sizeof(*ata_host), GFP_KERNEL);
++	if (!ata_host)	{
++		SAS_DPRINTK("ata host alloc failed.\n");
++		return -ENOMEM;
++	}
++
++	ata_host_init(ata_host, ha->dev, &sas_sata_ops);
++
++	ap = ata_sas_port_alloc(ata_host, &sata_port_info, shost);
+ 	if (!ap) {
+ 		SAS_DPRINTK("ata_sas_port_alloc failed.\n");
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto free_host;
+ 	}
+ 
+ 	ap->private_data = found_dev;
+ 	ap->cbl = ATA_CBL_SATA;
+ 	ap->scsi_host = shost;
+ 	rc = ata_sas_port_init(ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
+-	rc = ata_sas_tport_add(found_dev->sata_dev.ata_host.dev, ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
++	if (rc)
++		goto destroy_port;
++
++	rc = ata_sas_tport_add(ata_host->dev, ap);
++	if (rc)
++		goto destroy_port;
++
++	found_dev->sata_dev.ata_host = ata_host;
+ 	found_dev->sata_dev.ap = ap;
+ 
+ 	return 0;
++
++destroy_port:
++	ata_sas_port_destroy(ap);
++free_host:
++	ata_host_put(ata_host);
++	return rc;
+ }
+ 
+ void sas_ata_task_abort(struct sas_task *task)
+diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
+index 1ffca28fe6a8..0148ae62a52a 100644
+--- a/drivers/scsi/libsas/sas_discover.c
++++ b/drivers/scsi/libsas/sas_discover.c
+@@ -316,6 +316,8 @@ void sas_free_device(struct kref *kref)
+ 	if (dev_is_sata(dev) && dev->sata_dev.ap) {
+ 		ata_sas_tport_delete(dev->sata_dev.ap);
+ 		ata_sas_port_destroy(dev->sata_dev.ap);
++		ata_host_put(dev->sata_dev.ata_host);
++		dev->sata_dev.ata_host = NULL;
+ 		dev->sata_dev.ap = NULL;
+ 	}
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index e44c91edf92d..3c8c17c0b547 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -3284,6 +3284,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
+ 	st->cb_idx = 0xFF;
+ 	st->direct_io = 0;
+ 	atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
++	st->smid = 0;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index b8d131a455d0..f3d727076e1f 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -1489,7 +1489,7 @@ mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+ 		scmd = scsi_host_find_tag(ioc->shost, unique_tag);
+ 		if (scmd) {
+ 			st = scsi_cmd_priv(scmd);
+-			if (st->cb_idx == 0xFF)
++			if (st->cb_idx == 0xFF || st->smid == 0)
+ 				scmd = NULL;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+index 3a143bb5ca72..6c71b20af9e3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+@@ -1936,12 +1936,12 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+ 		pr_info(MPT3SAS_FMT "%s: host reset in progress!\n",
+ 		    __func__, ioc->name);
+ 		rc = -EFAULT;
+-		goto out;
++		goto job_done;
+ 	}
+ 
+ 	rc = mutex_lock_interruptible(&ioc->transport_cmds.mutex);
+ 	if (rc)
+-		goto out;
++		goto job_done;
+ 
+ 	if (ioc->transport_cmds.status != MPT3_CMD_NOT_USED) {
+ 		pr_err(MPT3SAS_FMT "%s: transport_cmds in use\n", ioc->name,
+@@ -2066,6 +2066,7 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+  out:
+ 	ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
+ 	mutex_unlock(&ioc->transport_cmds.mutex);
++job_done:
+ 	bsg_job_done(job, rc, reslen);
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 1b19b954bbae..ec550ee0108e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -382,7 +382,7 @@ qla2x00_async_adisc_sp_done(void *ptr, int res)
+ 	    "Async done-%s res %x %8phC\n",
+ 	    sp->name, res, sp->fcport->port_name);
+ 
+-	sp->fcport->flags &= ~FCF_ASYNC_SENT;
++	sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 
+ 	memset(&ea, 0, sizeof(ea));
+ 	ea.event = FCME_ADISC_DONE;
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index dd93a22fe843..667055cbe155 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2656,6 +2656,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	ql_dbg(ql_dbg_io, vha, 0x3073,
+ 	    "Enter: PLOGI portid=%06x\n", fcport->d_id.b24);
+ 
++	fcport->flags |= FCF_ASYNC_SENT;
+ 	sp->type = SRB_ELS_DCMD;
+ 	sp->name = "ELS_DCMD";
+ 	sp->fcport = fcport;
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 7943b762c12d..87ef6714845b 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -722,8 +722,24 @@ static ssize_t
+ sdev_store_delete(struct device *dev, struct device_attribute *attr,
+ 		  const char *buf, size_t count)
+ {
+-	if (device_remove_file_self(dev, attr))
+-		scsi_remove_device(to_scsi_device(dev));
++	struct kernfs_node *kn;
++
++	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++	WARN_ON_ONCE(!kn);
++	/*
++	 * Concurrent writes into the "delete" sysfs attribute may trigger
++	 * concurrent calls to device_remove_file() and scsi_remove_device().
++	 * device_remove_file() handles concurrent removal calls by
++	 * serializing these and by ignoring the second and later removal
++	 * attempts.  Concurrent calls of scsi_remove_device() are
++	 * serialized. The second and later calls of scsi_remove_device() are
++	 * ignored because the first call of that function changes the device
++	 * state into SDEV_DEL.
++	 */
++	device_remove_file(dev, attr);
++	scsi_remove_device(to_scsi_device(dev));
++	if (kn)
++		sysfs_unbreak_active_protection(kn);
+ 	return count;
+ };
+ static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index c8999e38b005..8a3678c2e83c 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -184,6 +184,7 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 	device_initialize(&rmtfs_mem->dev);
+ 	rmtfs_mem->dev.parent = &pdev->dev;
+ 	rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
++	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+ 
+ 	rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
+ 					rmtfs_mem->size, MEMREMAP_WC);
+@@ -206,8 +207,6 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		goto put_device;
+ 	}
+ 
+-	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+-
+ 	ret = of_property_read_u32(node, "qcom,vmid", &vmid);
+ 	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 99501785cdc1..68b3eb00a9d0 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -348,8 +348,7 @@ static int iscsi_login_zero_tsih_s1(
+ 		pr_err("idr_alloc() for sess_idr failed\n");
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_sess;
+ 	}
+ 
+ 	sess->creation_time = get_jiffies_64();
+@@ -365,20 +364,28 @@ static int iscsi_login_zero_tsih_s1(
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+ 		pr_err("Unable to allocate memory for"
+ 				" struct iscsi_sess_ops.\n");
+-		kfree(sess);
+-		return -ENOMEM;
++		goto remove_idr;
+ 	}
+ 
+ 	sess->se_sess = transport_init_session(TARGET_PROT_NORMAL);
+ 	if (IS_ERR(sess->se_sess)) {
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess->sess_ops);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_ops;
+ 	}
+ 
+ 	return 0;
++
++free_ops:
++	kfree(sess->sess_ops);
++remove_idr:
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++free_sess:
++	kfree(sess);
++	conn->sess = NULL;
++	return -ENOMEM;
+ }
+ 
+ static int iscsi_login_zero_tsih_s2(
+@@ -1161,13 +1168,13 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 				   ISCSI_LOGIN_STATUS_INIT_ERR);
+ 	if (!zero_tsih || !conn->sess)
+ 		goto old_sess_out;
+-	if (conn->sess->se_sess)
+-		transport_free_session(conn->sess->se_sess);
+-	if (conn->sess->session_index != 0) {
+-		spin_lock_bh(&sess_idr_lock);
+-		idr_remove(&sess_idr, conn->sess->session_index);
+-		spin_unlock_bh(&sess_idr_lock);
+-	}
++
++	transport_free_session(conn->sess->se_sess);
++
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, conn->sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++
+ 	kfree(conn->sess->sess_ops);
+ 	kfree(conn->sess);
+ 	conn->sess = NULL;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 205092dc9390..dfed08e70ec1 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -961,8 +961,9 @@ static int btree_writepages(struct address_space *mapping,
+ 
+ 		fs_info = BTRFS_I(mapping->host)->root->fs_info;
+ 		/* this is a bit racy, but that's ok */
+-		ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-					     BTRFS_DIRTY_METADATA_THRESH);
++		ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++					     BTRFS_DIRTY_METADATA_THRESH,
++					     fs_info->dirty_metadata_batch);
+ 		if (ret < 0)
+ 			return 0;
+ 	}
+@@ -4150,8 +4151,9 @@ static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
+ 	if (flush_delayed)
+ 		btrfs_balance_delayed_items(fs_info);
+ 
+-	ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-				     BTRFS_DIRTY_METADATA_THRESH);
++	ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++				     BTRFS_DIRTY_METADATA_THRESH,
++				     fs_info->dirty_metadata_batch);
+ 	if (ret > 0) {
+ 		balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
+ 	}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3d9fe58c0080..8aab7a6c1e58 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4358,7 +4358,7 @@ commit_trans:
+ 				      data_sinfo->flags, bytes, 1);
+ 	spin_unlock(&data_sinfo->lock);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ int btrfs_check_data_free_space(struct inode *inode,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index eba61bcb9bb3..071d949f69ec 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6027,32 +6027,6 @@ err:
+ 	return ret;
+ }
+ 
+-int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+-{
+-	struct btrfs_root *root = BTRFS_I(inode)->root;
+-	struct btrfs_trans_handle *trans;
+-	int ret = 0;
+-	bool nolock = false;
+-
+-	if (test_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags))
+-		return 0;
+-
+-	if (btrfs_fs_closing(root->fs_info) &&
+-			btrfs_is_free_space_inode(BTRFS_I(inode)))
+-		nolock = true;
+-
+-	if (wbc->sync_mode == WB_SYNC_ALL) {
+-		if (nolock)
+-			trans = btrfs_join_transaction_nolock(root);
+-		else
+-			trans = btrfs_join_transaction(root);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
+-		ret = btrfs_commit_transaction(trans);
+-	}
+-	return ret;
+-}
+-
+ /*
+  * This is somewhat expensive, updating the tree every time the
+  * inode changes.  But, it is most likely to find the inode in cache.
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index c47f62b19226..b75b4abaa4a5 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -100,6 +100,7 @@ struct send_ctx {
+ 	u64 cur_inode_rdev;
+ 	u64 cur_inode_last_extent;
+ 	u64 cur_inode_next_write_offset;
++	bool ignore_cur_inode;
+ 
+ 	u64 send_progress;
+ 
+@@ -5006,6 +5007,15 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	u64 len;
+ 	int ret = 0;
+ 
++	/*
++	 * A hole that starts at EOF or beyond it. Since we do not yet support
++	 * fallocate (for extent preallocation and hole punching), sending a
++	 * write of zeroes starting at EOF or beyond would later require issuing
++	 * a truncate operation which would undo the write and achieve nothing.
++	 */
++	if (offset >= sctx->cur_inode_size)
++		return 0;
++
+ 	if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
+ 		return send_update_extent(sctx, offset, end - offset);
+ 
+@@ -5799,6 +5809,9 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
+ 	int pending_move = 0;
+ 	int refs_processed = 0;
+ 
++	if (sctx->ignore_cur_inode)
++		return 0;
++
+ 	ret = process_recorded_refs_if_needed(sctx, at_end, &pending_move,
+ 					      &refs_processed);
+ 	if (ret < 0)
+@@ -5917,6 +5930,93 @@ out:
+ 	return ret;
+ }
+ 
++struct parent_paths_ctx {
++	struct list_head *refs;
++	struct send_ctx *sctx;
++};
++
++static int record_parent_ref(int num, u64 dir, int index, struct fs_path *name,
++			     void *ctx)
++{
++	struct parent_paths_ctx *ppctx = ctx;
++
++	return record_ref(ppctx->sctx->parent_root, dir, name, ppctx->sctx,
++			  ppctx->refs);
++}
++
++/*
++ * Issue unlink operations for all paths of the current inode found in the
++ * parent snapshot.
++ */
++static int btrfs_unlink_all_paths(struct send_ctx *sctx)
++{
++	LIST_HEAD(deleted_refs);
++	struct btrfs_path *path;
++	struct btrfs_key key;
++	struct parent_paths_ctx ctx;
++	int ret;
++
++	path = alloc_path_for_send();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = sctx->cur_ino;
++	key.type = BTRFS_INODE_REF_KEY;
++	key.offset = 0;
++	ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++
++	ctx.refs = &deleted_refs;
++	ctx.sctx = sctx;
++
++	while (true) {
++		struct extent_buffer *eb = path->nodes[0];
++		int slot = path->slots[0];
++
++		if (slot >= btrfs_header_nritems(eb)) {
++			ret = btrfs_next_leaf(sctx->parent_root, path);
++			if (ret < 0)
++				goto out;
++			else if (ret > 0)
++				break;
++			continue;
++		}
++
++		btrfs_item_key_to_cpu(eb, &key, slot);
++		if (key.objectid != sctx->cur_ino)
++			break;
++		if (key.type != BTRFS_INODE_REF_KEY &&
++		    key.type != BTRFS_INODE_EXTREF_KEY)
++			break;
++
++		ret = iterate_inode_ref(sctx->parent_root, path, &key, 1,
++					record_parent_ref, &ctx);
++		if (ret < 0)
++			goto out;
++
++		path->slots[0]++;
++	}
++
++	while (!list_empty(&deleted_refs)) {
++		struct recorded_ref *ref;
++
++		ref = list_first_entry(&deleted_refs, struct recorded_ref, list);
++		ret = send_unlink(sctx, ref->full_path);
++		if (ret < 0)
++			goto out;
++		fs_path_free(ref->full_path);
++		list_del(&ref->list);
++		kfree(ref);
++	}
++	ret = 0;
++out:
++	btrfs_free_path(path);
++	if (ret)
++		__free_recorded_refs(&deleted_refs);
++	return ret;
++}
++
+ static int changed_inode(struct send_ctx *sctx,
+ 			 enum btrfs_compare_tree_result result)
+ {
+@@ -5931,6 +6031,7 @@ static int changed_inode(struct send_ctx *sctx,
+ 	sctx->cur_inode_new_gen = 0;
+ 	sctx->cur_inode_last_extent = (u64)-1;
+ 	sctx->cur_inode_next_write_offset = 0;
++	sctx->ignore_cur_inode = false;
+ 
+ 	/*
+ 	 * Set send_progress to current inode. This will tell all get_cur_xxx
+@@ -5971,6 +6072,33 @@ static int changed_inode(struct send_ctx *sctx,
+ 			sctx->cur_inode_new_gen = 1;
+ 	}
+ 
++	/*
++	 * Normally we do not find inodes with a link count of zero (orphans)
++	 * because the most common case is to create a snapshot and use it
++	 * for a send operation. However other less common use cases involve
++	 * using a subvolume and send it after turning it to RO mode just
++	 * after deleting all hard links of a file while holding an open
++	 * file descriptor against it or turning a RO snapshot into RW mode,
++	 * keep an open file descriptor against a file, delete it and then
++	 * turn the snapshot back to RO mode before using it for a send
++	 * operation. So if we find such cases, ignore the inode and all its
++	 * items completely if it's a new inode, or if it's a changed inode
++	 * make sure all its previous paths (from the parent snapshot) are all
++	 * unlinked and all other the inode items are ignored.
++	 */
++	if (result == BTRFS_COMPARE_TREE_NEW ||
++	    result == BTRFS_COMPARE_TREE_CHANGED) {
++		u32 nlinks;
++
++		nlinks = btrfs_inode_nlink(sctx->left_path->nodes[0], left_ii);
++		if (nlinks == 0) {
++			sctx->ignore_cur_inode = true;
++			if (result == BTRFS_COMPARE_TREE_CHANGED)
++				ret = btrfs_unlink_all_paths(sctx);
++			goto out;
++		}
++	}
++
+ 	if (result == BTRFS_COMPARE_TREE_NEW) {
+ 		sctx->cur_inode_gen = left_gen;
+ 		sctx->cur_inode_new = 1;
+@@ -6309,15 +6437,17 @@ static int changed_cb(struct btrfs_path *left_path,
+ 	    key->objectid == BTRFS_FREE_SPACE_OBJECTID)
+ 		goto out;
+ 
+-	if (key->type == BTRFS_INODE_ITEM_KEY)
++	if (key->type == BTRFS_INODE_ITEM_KEY) {
+ 		ret = changed_inode(sctx, result);
+-	else if (key->type == BTRFS_INODE_REF_KEY ||
+-		 key->type == BTRFS_INODE_EXTREF_KEY)
+-		ret = changed_ref(sctx, result);
+-	else if (key->type == BTRFS_XATTR_ITEM_KEY)
+-		ret = changed_xattr(sctx, result);
+-	else if (key->type == BTRFS_EXTENT_DATA_KEY)
+-		ret = changed_extent(sctx, result);
++	} else if (!sctx->ignore_cur_inode) {
++		if (key->type == BTRFS_INODE_REF_KEY ||
++		    key->type == BTRFS_INODE_EXTREF_KEY)
++			ret = changed_ref(sctx, result);
++		else if (key->type == BTRFS_XATTR_ITEM_KEY)
++			ret = changed_xattr(sctx, result);
++		else if (key->type == BTRFS_EXTENT_DATA_KEY)
++			ret = changed_extent(sctx, result);
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 81107ad49f3a..bddfc28b27c0 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -2331,7 +2331,6 @@ static const struct super_operations btrfs_super_ops = {
+ 	.sync_fs	= btrfs_sync_fs,
+ 	.show_options	= btrfs_show_options,
+ 	.show_devname	= btrfs_show_devname,
+-	.write_inode	= btrfs_write_inode,
+ 	.alloc_inode	= btrfs_alloc_inode,
+ 	.destroy_inode	= btrfs_destroy_inode,
+ 	.statfs		= btrfs_statfs,
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f8220ec02036..84b00a29d531 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1291,6 +1291,46 @@ again:
+ 	return ret;
+ }
+ 
++static int btrfs_inode_ref_exists(struct inode *inode, struct inode *dir,
++				  const u8 ref_type, const char *name,
++				  const int namelen)
++{
++	struct btrfs_key key;
++	struct btrfs_path *path;
++	const u64 parent_id = btrfs_ino(BTRFS_I(dir));
++	int ret;
++
++	path = btrfs_alloc_path();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = btrfs_ino(BTRFS_I(inode));
++	key.type = ref_type;
++	if (key.type == BTRFS_INODE_REF_KEY)
++		key.offset = parent_id;
++	else
++		key.offset = btrfs_extref_hash(parent_id, name, namelen);
++
++	ret = btrfs_search_slot(NULL, BTRFS_I(inode)->root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++	if (ret > 0) {
++		ret = 0;
++		goto out;
++	}
++	if (key.type == BTRFS_INODE_EXTREF_KEY)
++		ret = btrfs_find_name_in_ext_backref(path->nodes[0],
++						     path->slots[0], parent_id,
++						     name, namelen, NULL);
++	else
++		ret = btrfs_find_name_in_backref(path->nodes[0], path->slots[0],
++						 name, namelen, NULL);
++
++out:
++	btrfs_free_path(path);
++	return ret;
++}
++
+ /*
+  * replay one inode back reference item found in the log tree.
+  * eb, slot and key refer to the buffer and key found in the log tree.
+@@ -1400,6 +1440,32 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				}
+ 			}
+ 
++			/*
++			 * If a reference item already exists for this inode
++			 * with the same parent and name, but different index,
++			 * drop it and the corresponding directory index entries
++			 * from the parent before adding the new reference item
++			 * and dir index entries, otherwise we would fail with
++			 * -EEXIST returned from btrfs_add_link() below.
++			 */
++			ret = btrfs_inode_ref_exists(inode, dir, key->type,
++						     name, namelen);
++			if (ret > 0) {
++				ret = btrfs_unlink_inode(trans, root,
++							 BTRFS_I(dir),
++							 BTRFS_I(inode),
++							 name, namelen);
++				/*
++				 * If we dropped the link count to 0, bump it so
++				 * that later the iput() on the inode will not
++				 * free it. We will fixup the link count later.
++				 */
++				if (!ret && inode->i_nlink == 0)
++					inc_nlink(inode);
++			}
++			if (ret < 0)
++				goto out;
++
+ 			/* insert our name */
+ 			ret = btrfs_add_link(trans, BTRFS_I(dir),
+ 					BTRFS_I(inode),
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index bfe999505815..991bfb271908 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -160,25 +160,41 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 	seq_printf(m, "CIFS Version %s\n", CIFS_VERSION);
+ 	seq_printf(m, "Features:");
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+-	seq_printf(m, " dfs");
++	seq_printf(m, " DFS");
+ #endif
+ #ifdef CONFIG_CIFS_FSCACHE
+-	seq_printf(m, " fscache");
++	seq_printf(m, ",FSCACHE");
++#endif
++#ifdef CONFIG_CIFS_SMB_DIRECT
++	seq_printf(m, ",SMB_DIRECT");
++#endif
++#ifdef CONFIG_CIFS_STATS2
++	seq_printf(m, ",STATS2");
++#elif defined(CONFIG_CIFS_STATS)
++	seq_printf(m, ",STATS");
++#endif
++#ifdef CONFIG_CIFS_DEBUG2
++	seq_printf(m, ",DEBUG2");
++#elif defined(CONFIG_CIFS_DEBUG)
++	seq_printf(m, ",DEBUG");
++#endif
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
++	seq_printf(m, ",ALLOW_INSECURE_LEGACY");
+ #endif
+ #ifdef CONFIG_CIFS_WEAK_PW_HASH
+-	seq_printf(m, " lanman");
++	seq_printf(m, ",WEAK_PW_HASH");
+ #endif
+ #ifdef CONFIG_CIFS_POSIX
+-	seq_printf(m, " posix");
++	seq_printf(m, ",CIFS_POSIX");
+ #endif
+ #ifdef CONFIG_CIFS_UPCALL
+-	seq_printf(m, " spnego");
++	seq_printf(m, ",UPCALL(SPNEGO)");
+ #endif
+ #ifdef CONFIG_CIFS_XATTR
+-	seq_printf(m, " xattr");
++	seq_printf(m, ",XATTR");
+ #endif
+ #ifdef CONFIG_CIFS_ACL
+-	seq_printf(m, " acl");
++	seq_printf(m, ",ACL");
+ #endif
+ 	seq_putc(m, '\n');
+ 	seq_printf(m, "Active VFS Requests: %d\n", GlobalTotalActiveXid);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index d5aa7ae917bf..69ec5427769c 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -209,14 +209,16 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 
+ 	xid = get_xid();
+ 
+-	/*
+-	 * PATH_MAX may be too long - it would presumably be total path,
+-	 * but note that some servers (includinng Samba 3) have a shorter
+-	 * maximum path.
+-	 *
+-	 * Instead could get the real value via SMB_QUERY_FS_ATTRIBUTE_INFO.
+-	 */
+-	buf->f_namelen = PATH_MAX;
++	if (le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength) > 0)
++		buf->f_namelen =
++		       le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength);
++	else
++		buf->f_namelen = PATH_MAX;
++
++	buf->f_fsid.val[0] = tcon->vol_serial_number;
++	/* are using part of create time for more randomness, see man statfs */
++	buf->f_fsid.val[1] =  (int)le64_to_cpu(tcon->vol_create_time);
++
+ 	buf->f_files = 0;	/* undefined */
+ 	buf->f_ffree = 0;	/* unlimited */
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index c923c7854027..4b45d3ef3f9d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -913,6 +913,7 @@ cap_unix(struct cifs_ses *ses)
+ 
+ struct cached_fid {
+ 	bool is_valid:1;	/* Do we have a useable root fid */
++	struct kref refcount;
+ 	struct cifs_fid *fid;
+ 	struct mutex fid_mutex;
+ 	struct cifs_tcon *tcon;
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index a2cfb33e85c1..9051b9dfd590 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1122,6 +1122,8 @@ cifs_set_file_info(struct inode *inode, struct iattr *attrs, unsigned int xid,
+ 	if (!server->ops->set_file_info)
+ 		return -ENOSYS;
+ 
++	info_buf.Pad = 0;
++
+ 	if (attrs->ia_valid & ATTR_ATIME) {
+ 		set_time = true;
+ 		info_buf.LastAccessTime =
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index de41f96aba49..2148b0f60e5e 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -396,7 +396,7 @@ smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int buf_type = CIFS_NO_BUFFER;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_II;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct smb2_file_all_info *pfile_info = NULL;
+ 
+ 	oparms.tcon = tcon;
+@@ -459,7 +459,7 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int create_options = CREATE_NOT_DIR;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_EXCLUSIVE;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct kvec iov[2];
+ 
+ 	if (backup_cred(cifs_sb))
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 8b0502cd39af..aa23c00367ec 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -398,6 +398,12 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
+ 		goto setup_ntlmv2_ret;
+ 	}
+ 	*pbuffer = kmalloc(size_of_ntlmssp_blob(ses), GFP_KERNEL);
++	if (!*pbuffer) {
++		rc = -ENOMEM;
++		cifs_dbg(VFS, "Error %d during NTLMSSP allocation\n", rc);
++		*buflen = 0;
++		goto setup_ntlmv2_ret;
++	}
+ 	sec_blob = (AUTHENTICATE_MESSAGE *)*pbuffer;
+ 
+ 	memcpy(sec_blob->Signature, NTLMSSP_SIGNATURE, 8);
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index d01ad706d7fc..1eef1791d0c4 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -120,7 +120,9 @@ smb2_open_op_close(const unsigned int xid, struct cifs_tcon *tcon,
+ 		break;
+ 	}
+ 
+-	if (use_cached_root_handle == false)
++	if (use_cached_root_handle)
++		close_shroot(&tcon->crfid);
++	else
+ 		rc = SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ 	if (tmprc)
+ 		rc = tmprc;
+@@ -281,7 +283,7 @@ smb2_set_file_info(struct inode *inode, const char *full_path,
+ 	int rc;
+ 
+ 	if ((buf->CreationTime == 0) && (buf->LastAccessTime == 0) &&
+-	    (buf->LastWriteTime == 0) && (buf->ChangeTime) &&
++	    (buf->LastWriteTime == 0) && (buf->ChangeTime == 0) &&
+ 	    (buf->Attributes == 0))
+ 		return 0; /* would be a no op, no sense sending this */
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ea92a38b2f08..ee6c4a952ce9 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -466,21 +466,36 @@ out:
+ 	return rc;
+ }
+ 
+-void
+-smb2_cached_lease_break(struct work_struct *work)
++static void
++smb2_close_cached_fid(struct kref *ref)
+ {
+-	struct cached_fid *cfid = container_of(work,
+-				struct cached_fid, lease_break);
+-	mutex_lock(&cfid->fid_mutex);
++	struct cached_fid *cfid = container_of(ref, struct cached_fid,
++					       refcount);
++
+ 	if (cfid->is_valid) {
+ 		cifs_dbg(FYI, "clear cached root file handle\n");
+ 		SMB2_close(0, cfid->tcon, cfid->fid->persistent_fid,
+ 			   cfid->fid->volatile_fid);
+ 		cfid->is_valid = false;
+ 	}
++}
++
++void close_shroot(struct cached_fid *cfid)
++{
++	mutex_lock(&cfid->fid_mutex);
++	kref_put(&cfid->refcount, smb2_close_cached_fid);
+ 	mutex_unlock(&cfid->fid_mutex);
+ }
+ 
++void
++smb2_cached_lease_break(struct work_struct *work)
++{
++	struct cached_fid *cfid = container_of(work,
++				struct cached_fid, lease_break);
++
++	close_shroot(cfid);
++}
++
+ /*
+  * Open the directory at the root of a share
+  */
+@@ -495,6 +510,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 	if (tcon->crfid.is_valid) {
+ 		cifs_dbg(FYI, "found a cached root file handle\n");
+ 		memcpy(pfid, tcon->crfid.fid, sizeof(struct cifs_fid));
++		kref_get(&tcon->crfid.refcount);
+ 		mutex_unlock(&tcon->crfid.fid_mutex);
+ 		return 0;
+ 	}
+@@ -511,6 +527,8 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 		memcpy(tcon->crfid.fid, pfid, sizeof(struct cifs_fid));
+ 		tcon->crfid.tcon = tcon;
+ 		tcon->crfid.is_valid = true;
++		kref_init(&tcon->crfid.refcount);
++		kref_get(&tcon->crfid.refcount);
+ 	}
+ 	mutex_unlock(&tcon->crfid.fid_mutex);
+ 	return rc;
+@@ -548,10 +566,15 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon)
+ 			FS_ATTRIBUTE_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_DEVICE_INFORMATION);
++	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
++			FS_VOLUME_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_SECTOR_SIZE_INFORMATION); /* SMB3 specific */
+ 	if (no_cached_open)
+ 		SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
++	else
++		close_shroot(&tcon->crfid);
++
+ 	return;
+ }
+ 
+@@ -1353,6 +1376,13 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ }
+ 
++/* GMT Token is @GMT-YYYY.MM.DD-HH.MM.SS Unicode which is 48 bytes + null */
++#define GMT_TOKEN_SIZE 50
++
++/*
++ * Input buffer contains (empty) struct smb_snapshot array with size filled in
++ * For output see struct SRV_SNAPSHOT_ARRAY in MS-SMB2 section 2.2.32.2
++ */
+ static int
+ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 		   struct cifsFileInfo *cfile, void __user *ioc_buf)
+@@ -1382,14 +1412,27 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 			kfree(retbuf);
+ 			return rc;
+ 		}
+-		if (snapshot_in.snapshot_array_size < sizeof(struct smb_snapshot_array)) {
+-			rc = -ERANGE;
+-			kfree(retbuf);
+-			return rc;
+-		}
+ 
+-		if (ret_data_len > snapshot_in.snapshot_array_size)
+-			ret_data_len = snapshot_in.snapshot_array_size;
++		/*
++		 * Check for min size, ie not large enough to fit even one GMT
++		 * token (snapshot).  On the first ioctl some users may pass in
++		 * smaller size (or zero) to simply get the size of the array
++		 * so the user space caller can allocate sufficient memory
++		 * and retry the ioctl again with larger array size sufficient
++		 * to hold all of the snapshot GMT tokens on the second try.
++		 */
++		if (snapshot_in.snapshot_array_size < GMT_TOKEN_SIZE)
++			ret_data_len = sizeof(struct smb_snapshot_array);
++
++		/*
++		 * We return struct SRV_SNAPSHOT_ARRAY, followed by
++		 * the snapshot array (of 50 byte GMT tokens) each
++		 * representing an available previous version of the data
++		 */
++		if (ret_data_len > (snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array)))
++			ret_data_len = snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array);
+ 
+ 		if (copy_to_user(ioc_buf, retbuf, ret_data_len))
+ 			rc = -EFAULT;
+@@ -3366,6 +3409,11 @@ struct smb_version_operations smb311_operations = {
+ 	.query_all_EAs = smb2_query_eas,
+ 	.set_EA = smb2_set_ea,
+ #endif /* CIFS_XATTR */
++#ifdef CONFIG_CIFS_ACL
++	.get_acl = get_smb2_acl,
++	.get_acl_by_fid = get_smb2_acl_by_fid,
++	.set_acl = set_smb2_acl,
++#endif /* CIFS_ACL */
+ 	.next_header = smb2_next_header,
+ };
+ #endif /* CIFS_SMB311 */
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 3c92678cb45b..ffce77e00a58 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -4046,6 +4046,9 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 	} else if (level == FS_SECTOR_SIZE_INFORMATION) {
+ 		max_len = sizeof(struct smb3_fs_ss_info);
+ 		min_len = sizeof(struct smb3_fs_ss_info);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		max_len = sizeof(struct smb3_fs_vol_info) + MAX_VOL_LABEL_LEN;
++		min_len = sizeof(struct smb3_fs_vol_info);
+ 	} else {
+ 		cifs_dbg(FYI, "Invalid qfsinfo level %d\n", level);
+ 		return -EINVAL;
+@@ -4090,6 +4093,11 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 		tcon->ss_flags = le32_to_cpu(ss_info->Flags);
+ 		tcon->perf_sector_size =
+ 			le32_to_cpu(ss_info->PhysicalBytesPerSectorForPerf);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		struct smb3_fs_vol_info *vol_info = (struct smb3_fs_vol_info *)
++			(offset + (char *)rsp);
++		tcon->vol_serial_number = vol_info->VolumeSerialNumber;
++		tcon->vol_create_time = vol_info->VolumeCreationTime;
+ 	}
+ 
+ qfsattr_exit:
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index a671adcc44a6..c2a4526512b5 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -1248,6 +1248,17 @@ struct smb3_fs_ss_info {
+ 	__le32 ByteOffsetForPartitionAlignment;
+ } __packed;
+ 
++/* volume info struct - see MS-FSCC 2.5.9 */
++#define MAX_VOL_LABEL_LEN	32
++struct smb3_fs_vol_info {
++	__le64	VolumeCreationTime;
++	__u32	VolumeSerialNumber;
++	__le32	VolumeLabelLength; /* includes trailing null */
++	__u8	SupportsObjects; /* True if eg like NTFS, supports objects */
++	__u8	Reserved;
++	__u8	VolumeLabel[0]; /* variable len */
++} __packed;
++
+ /* partial list of QUERY INFO levels */
+ #define FILE_DIRECTORY_INFORMATION	1
+ #define FILE_FULL_DIRECTORY_INFORMATION 2
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index 6e6a4f2ec890..c1520b48d1e1 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -68,6 +68,7 @@ extern int smb3_handle_read_data(struct TCP_Server_Info *server,
+ 
+ extern int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ 			struct cifs_fid *pfid);
++extern void close_shroot(struct cached_fid *cfid);
+ extern void move_smb2_info_to_cifs(FILE_ALL_INFO *dst,
+ 				   struct smb2_file_all_info *src);
+ extern int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 719d55e63d88..bf61c3774830 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -173,7 +173,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 	struct kvec *iov = rqst->rq_iov;
+ 	struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)iov[0].iov_base;
+ 	struct cifs_ses *ses;
+-	struct shash_desc *shash = &server->secmech.sdeschmacsha256->shash;
++	struct shash_desc *shash;
+ 	struct smb_rqst drqst;
+ 
+ 	ses = smb2_find_smb_ses(server, shdr->SessionId);
+@@ -187,7 +187,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 
+ 	rc = smb2_crypto_shash_allocate(server);
+ 	if (rc) {
+-		cifs_dbg(VFS, "%s: shah256 alloc failed\n", __func__);
++		cifs_dbg(VFS, "%s: sha256 alloc failed\n", __func__);
+ 		return rc;
+ 	}
+ 
+@@ -198,6 +198,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 		return rc;
+ 	}
+ 
++	shash = &server->secmech.sdeschmacsha256->shash;
+ 	rc = crypto_shash_init(shash);
+ 	if (rc) {
+ 		cifs_dbg(VFS, "%s: Could not init sha256", __func__);
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index aa52d87985aa..e5d6ee61ff48 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -426,9 +426,9 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot get buffer for block bitmap - "
+-			   "block_group = %u, block_bitmap = %llu",
+-			   block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot get buffer for block bitmap - "
++			     "block_group = %u, block_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index f336cbc6e932..796aa609bcb9 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -138,9 +138,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot read inode bitmap - "
+-			    "block_group = %u, inode_bitmap = %llu",
+-			    block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot read inode bitmap - "
++			     "block_group = %u, inode_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	if (bitmap_uptodate(bh))
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 2a4c25c4681d..116ff68c5bd4 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1398,6 +1398,7 @@ static struct buffer_head * ext4_find_entry (struct inode *dir,
+ 			goto cleanup_and_exit;
+ 		dxtrace(printk(KERN_DEBUG "ext4_find_entry: dx failed, "
+ 			       "falling back\n"));
++		ret = NULL;
+ 	}
+ 	nblocks = dir->i_size >> EXT4_BLOCK_SIZE_BITS(sb);
+ 	if (!nblocks) {
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index b7f7922061be..130c12974e28 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -776,26 +776,26 @@ void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct ext4_group_desc *gdp = ext4_get_group_desc(sb, group, NULL);
++	int ret;
+ 
+-	if ((flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) {
+-		percpu_counter_sub(&sbi->s_freeclusters_counter,
+-					grp->bb_free);
+-		set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
++	if (flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret)
++			percpu_counter_sub(&sbi->s_freeclusters_counter,
++					   grp->bb_free);
+ 	}
+ 
+-	if ((flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {
+-		if (gdp) {
++	if (flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret && gdp) {
+ 			int count;
+ 
+ 			count = ext4_free_inodes_count(sb, gdp);
+ 			percpu_counter_sub(&sbi->s_freeinodes_counter,
+ 					   count);
+ 		}
+-		set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
+ 	}
+ }
+ 
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index f34da0bb8f17..b970a200f20c 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -274,8 +274,12 @@ static ssize_t ext4_attr_show(struct kobject *kobj,
+ 	case attr_pointer_ui:
+ 		if (!ptr)
+ 			return 0;
+-		return snprintf(buf, PAGE_SIZE, "%u\n",
+-				*((unsigned int *) ptr));
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					le32_to_cpup(ptr));
++		else
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					*((unsigned int *) ptr));
+ 	case attr_pointer_atomic:
+ 		if (!ptr)
+ 			return 0;
+@@ -308,7 +312,10 @@ static ssize_t ext4_attr_store(struct kobject *kobj,
+ 		ret = kstrtoul(skip_spaces(buf), 0, &t);
+ 		if (ret)
+ 			return ret;
+-		*((unsigned int *) ptr) = t;
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			*((__le32 *) ptr) = cpu_to_le32(t);
++		else
++			*((unsigned int *) ptr) = t;
+ 		return len;
+ 	case attr_inode_readahead:
+ 		return inode_readahead_blks_store(sbi, buf, len);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 723df14f4084..f36fc5d5b257 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -190,6 +190,8 @@ ext4_xattr_check_entries(struct ext4_xattr_entry *entry, void *end,
+ 		struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e);
+ 		if ((void *)next >= end)
+ 			return -EFSCORRUPTED;
++		if (strnlen(e->e_name, e->e_name_len) != e->e_name_len)
++			return -EFSCORRUPTED;
+ 		e = next;
+ 	}
+ 
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index c6b88fa85e2e..4a9ace7280b9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -127,6 +127,16 @@ static bool fuse_block_alloc(struct fuse_conn *fc, bool for_background)
+ 	return !fc->initialized || (for_background && fc->blocked);
+ }
+ 
++static void fuse_drop_waiting(struct fuse_conn *fc)
++{
++	if (fc->connected) {
++		atomic_dec(&fc->num_waiting);
++	} else if (atomic_dec_and_test(&fc->num_waiting)) {
++		/* wake up aborters */
++		wake_up_all(&fc->blocked_waitq);
++	}
++}
++
+ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 				       bool for_background)
+ {
+@@ -175,7 +185,7 @@ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 	return req;
+ 
+  out:
+-	atomic_dec(&fc->num_waiting);
++	fuse_drop_waiting(fc);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -285,7 +295,7 @@ void fuse_put_request(struct fuse_conn *fc, struct fuse_req *req)
+ 
+ 		if (test_bit(FR_WAITING, &req->flags)) {
+ 			__clear_bit(FR_WAITING, &req->flags);
+-			atomic_dec(&fc->num_waiting);
++			fuse_drop_waiting(fc);
+ 		}
+ 
+ 		if (req->stolen_file)
+@@ -371,7 +381,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	struct fuse_iqueue *fiq = &fc->iq;
+ 
+ 	if (test_and_set_bit(FR_FINISHED, &req->flags))
+-		return;
++		goto put_request;
+ 
+ 	spin_lock(&fiq->waitq.lock);
+ 	list_del_init(&req->intr_entry);
+@@ -400,6 +410,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	wake_up(&req->waitq);
+ 	if (req->end)
+ 		req->end(fc, req);
++put_request:
+ 	fuse_put_request(fc, req);
+ }
+ 
+@@ -1944,12 +1955,15 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 	if (!fud)
+ 		return -EPERM;
+ 
++	pipe_lock(pipe);
++
+ 	bufs = kmalloc_array(pipe->buffers, sizeof(struct pipe_buffer),
+ 			     GFP_KERNEL);
+-	if (!bufs)
++	if (!bufs) {
++		pipe_unlock(pipe);
+ 		return -ENOMEM;
++	}
+ 
+-	pipe_lock(pipe);
+ 	nbuf = 0;
+ 	rem = 0;
+ 	for (idx = 0; idx < pipe->nrbufs && rem < len; idx++)
+@@ -2105,6 +2119,7 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 				set_bit(FR_ABORTED, &req->flags);
+ 				if (!test_bit(FR_LOCKED, &req->flags)) {
+ 					set_bit(FR_PRIVATE, &req->flags);
++					__fuse_get_request(req);
+ 					list_move(&req->list, &to_end1);
+ 				}
+ 				spin_unlock(&req->waitq.lock);
+@@ -2131,7 +2146,6 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 
+ 		while (!list_empty(&to_end1)) {
+ 			req = list_first_entry(&to_end1, struct fuse_req, list);
+-			__fuse_get_request(req);
+ 			list_del_init(&req->list);
+ 			request_end(fc, req);
+ 		}
+@@ -2142,6 +2156,11 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ }
+ EXPORT_SYMBOL_GPL(fuse_abort_conn);
+ 
++void fuse_wait_aborted(struct fuse_conn *fc)
++{
++	wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);
++}
++
+ int fuse_dev_release(struct inode *inode, struct file *file)
+ {
+ 	struct fuse_dev *fud = fuse_get_dev(file);
+@@ -2149,9 +2168,15 @@ int fuse_dev_release(struct inode *inode, struct file *file)
+ 	if (fud) {
+ 		struct fuse_conn *fc = fud->fc;
+ 		struct fuse_pqueue *fpq = &fud->pq;
++		LIST_HEAD(to_end);
+ 
++		spin_lock(&fpq->lock);
+ 		WARN_ON(!list_empty(&fpq->io));
+-		end_requests(fc, &fpq->processing);
++		list_splice_init(&fpq->processing, &to_end);
++		spin_unlock(&fpq->lock);
++
++		end_requests(fc, &to_end);
++
+ 		/* Are we the last open device? */
+ 		if (atomic_dec_and_test(&fc->dev_count)) {
+ 			WARN_ON(fc->iq.fasync != NULL);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 56231b31f806..606909ed5f21 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -355,11 +355,12 @@ static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry,
+ 	struct inode *inode;
+ 	struct dentry *newent;
+ 	bool outarg_valid = true;
++	bool locked;
+ 
+-	fuse_lock_inode(dir);
++	locked = fuse_lock_inode(dir);
+ 	err = fuse_lookup_name(dir->i_sb, get_node_id(dir), &entry->d_name,
+ 			       &outarg, &inode);
+-	fuse_unlock_inode(dir);
++	fuse_unlock_inode(dir, locked);
+ 	if (err == -ENOENT) {
+ 		outarg_valid = false;
+ 		err = 0;
+@@ -1340,6 +1341,7 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_req *req;
+ 	u64 attr_version = 0;
++	bool locked;
+ 
+ 	if (is_bad_inode(inode))
+ 		return -EIO;
+@@ -1367,9 +1369,9 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 		fuse_read_fill(req, file, ctx->pos, PAGE_SIZE,
+ 			       FUSE_READDIR);
+ 	}
+-	fuse_lock_inode(inode);
++	locked = fuse_lock_inode(inode);
+ 	fuse_request_send(fc, req);
+-	fuse_unlock_inode(inode);
++	fuse_unlock_inode(inode, locked);
+ 	nbytes = req->out.args[0].size;
+ 	err = req->out.h.error;
+ 	fuse_put_request(fc, req);
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index a201fb0ac64f..aa23749a943b 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -866,6 +866,7 @@ static int fuse_readpages_fill(void *_data, struct page *page)
+ 	}
+ 
+ 	if (WARN_ON(req->num_pages >= req->max_pages)) {
++		unlock_page(page);
+ 		fuse_put_request(fc, req);
+ 		return -EIO;
+ 	}
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 5256ad333b05..f78e9614bb5f 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -862,6 +862,7 @@ void fuse_request_send_background_locked(struct fuse_conn *fc,
+ 
+ /* Abort all requests */
+ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
++void fuse_wait_aborted(struct fuse_conn *fc);
+ 
+ /**
+  * Invalidate inode attributes
+@@ -974,8 +975,8 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ 
+ void fuse_set_initialized(struct fuse_conn *fc);
+ 
+-void fuse_unlock_inode(struct inode *inode);
+-void fuse_lock_inode(struct inode *inode);
++void fuse_unlock_inode(struct inode *inode, bool locked);
++bool fuse_lock_inode(struct inode *inode);
+ 
+ int fuse_setxattr(struct inode *inode, const char *name, const void *value,
+ 		  size_t size, int flags);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index a24df8861b40..2dbd487390a3 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -357,15 +357,21 @@ int fuse_reverse_inval_inode(struct super_block *sb, u64 nodeid,
+ 	return 0;
+ }
+ 
+-void fuse_lock_inode(struct inode *inode)
++bool fuse_lock_inode(struct inode *inode)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	bool locked = false;
++
++	if (!get_fuse_conn(inode)->parallel_dirops) {
+ 		mutex_lock(&get_fuse_inode(inode)->mutex);
++		locked = true;
++	}
++
++	return locked;
+ }
+ 
+-void fuse_unlock_inode(struct inode *inode)
++void fuse_unlock_inode(struct inode *inode, bool locked)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	if (locked)
+ 		mutex_unlock(&get_fuse_inode(inode)->mutex);
+ }
+ 
+@@ -391,9 +397,6 @@ static void fuse_put_super(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+-	fuse_send_destroy(fc);
+-
+-	fuse_abort_conn(fc, false);
+ 	mutex_lock(&fuse_mutex);
+ 	list_del(&fc->entry);
+ 	fuse_ctl_remove_conn(fc);
+@@ -1210,16 +1213,25 @@ static struct dentry *fuse_mount(struct file_system_type *fs_type,
+ 	return mount_nodev(fs_type, flags, raw_data, fuse_fill_super);
+ }
+ 
+-static void fuse_kill_sb_anon(struct super_block *sb)
++static void fuse_sb_destroy(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+ 	if (fc) {
++		fuse_send_destroy(fc);
++
++		fuse_abort_conn(fc, false);
++		fuse_wait_aborted(fc);
++
+ 		down_write(&fc->killsb);
+ 		fc->sb = NULL;
+ 		up_write(&fc->killsb);
+ 	}
++}
+ 
++static void fuse_kill_sb_anon(struct super_block *sb)
++{
++	fuse_sb_destroy(sb);
+ 	kill_anon_super(sb);
+ }
+ 
+@@ -1242,14 +1254,7 @@ static struct dentry *fuse_mount_blk(struct file_system_type *fs_type,
+ 
+ static void fuse_kill_sb_blk(struct super_block *sb)
+ {
+-	struct fuse_conn *fc = get_fuse_conn_super(sb);
+-
+-	if (fc) {
+-		down_write(&fc->killsb);
+-		fc->sb = NULL;
+-		up_write(&fc->killsb);
+-	}
+-
++	fuse_sb_destroy(sb);
+ 	kill_block_super(sb);
+ }
+ 
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index 5c13f29bfcdb..118fa197a35f 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -405,6 +405,50 @@ int sysfs_chmod_file(struct kobject *kobj, const struct attribute *attr,
+ }
+ EXPORT_SYMBOL_GPL(sysfs_chmod_file);
+ 
++/**
++ * sysfs_break_active_protection - break "active" protection
++ * @kobj: The kernel object @attr is associated with.
++ * @attr: The attribute to break the "active" protection for.
++ *
++ * With sysfs, just like kernfs, deletion of an attribute is postponed until
++ * all active .show() and .store() callbacks have finished unless this function
++ * is called. Hence this function is useful in methods that implement self
++ * deletion.
++ */
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr)
++{
++	struct kernfs_node *kn;
++
++	kobject_get(kobj);
++	kn = kernfs_find_and_get(kobj->sd, attr->name);
++	if (kn)
++		kernfs_break_active_protection(kn);
++	return kn;
++}
++EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
++
++/**
++ * sysfs_unbreak_active_protection - restore "active" protection
++ * @kn: Pointer returned by sysfs_break_active_protection().
++ *
++ * Undo the effects of sysfs_break_active_protection(). Since this function
++ * calls kernfs_put() on the kernfs node that corresponds to the 'attr'
++ * argument passed to sysfs_break_active_protection() that attribute may have
++ * been removed between the sysfs_break_active_protection() and
++ * sysfs_unbreak_active_protection() calls, it is not safe to access @kn after
++ * this function has returned.
++ */
++void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++	struct kobject *kobj = kn->parent->priv;
++
++	kernfs_unbreak_active_protection(kn);
++	kernfs_put(kn);
++	kobject_put(kobj);
++}
++EXPORT_SYMBOL_GPL(sysfs_unbreak_active_protection);
++
+ /**
+  * sysfs_remove_file_ns - remove an object attribute with a custom ns tag
+  * @kobj: object we're acting for
+diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
+index c9e5a6621b95..c44703f471b3 100644
+--- a/include/drm/i915_drm.h
++++ b/include/drm/i915_drm.h
+@@ -95,7 +95,9 @@ extern struct resource intel_graphics_stolen_res;
+ #define    I845_TSEG_SIZE_512K	(2 << 1)
+ #define    I845_TSEG_SIZE_1M	(3 << 1)
+ 
+-#define INTEL_BSM 0x5c
++#define INTEL_BSM		0x5c
++#define INTEL_GEN11_BSM_DW0	0xc0
++#define INTEL_GEN11_BSM_DW1	0xc4
+ #define   INTEL_BSM_MASK	(-(1u << 20))
+ 
+ #endif				/* _I915_DRM_H_ */
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 32f247cb5e9e..bc4f87cbe7f4 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -1111,6 +1111,8 @@ extern struct ata_host *ata_host_alloc(struct device *dev, int max_ports);
+ extern struct ata_host *ata_host_alloc_pinfo(struct device *dev,
+ 			const struct ata_port_info * const * ppi, int n_ports);
+ extern int ata_slave_link_init(struct ata_port *ap);
++extern void ata_host_get(struct ata_host *host);
++extern void ata_host_put(struct ata_host *host);
+ extern int ata_host_start(struct ata_host *host);
+ extern int ata_host_register(struct ata_host *host,
+ 			     struct scsi_host_template *sht);
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index 6d7e800affd8..3ede9f46a494 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -148,9 +148,13 @@ void early_printk(const char *s, ...) { }
+ #ifdef CONFIG_PRINTK_NMI
+ extern void printk_nmi_enter(void);
+ extern void printk_nmi_exit(void);
++extern void printk_nmi_direct_enter(void);
++extern void printk_nmi_direct_exit(void);
+ #else
+ static inline void printk_nmi_enter(void) { }
+ static inline void printk_nmi_exit(void) { }
++static inline void printk_nmi_direct_enter(void) { }
++static inline void printk_nmi_direct_exit(void) { }
+ #endif /* PRINTK_NMI */
+ 
+ #ifdef CONFIG_PRINTK
+diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
+index b8bfdc173ec0..3c12198c0103 100644
+--- a/include/linux/sysfs.h
++++ b/include/linux/sysfs.h
+@@ -237,6 +237,9 @@ int __must_check sysfs_create_files(struct kobject *kobj,
+ 				   const struct attribute **attr);
+ int __must_check sysfs_chmod_file(struct kobject *kobj,
+ 				  const struct attribute *attr, umode_t mode);
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr);
++void sysfs_unbreak_active_protection(struct kernfs_node *kn);
+ void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr,
+ 			  const void *ns);
+ bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr);
+@@ -350,6 +353,17 @@ static inline int sysfs_chmod_file(struct kobject *kobj,
+ 	return 0;
+ }
+ 
++static inline struct kernfs_node *
++sysfs_break_active_protection(struct kobject *kobj,
++			      const struct attribute *attr)
++{
++	return NULL;
++}
++
++static inline void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++}
++
+ static inline void sysfs_remove_file_ns(struct kobject *kobj,
+ 					const struct attribute *attr,
+ 					const void *ns)
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 06639fb6ab85..8eb5e5ebe136 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -43,6 +43,8 @@ struct tpm_class_ops {
+ 	u8 (*status) (struct tpm_chip *chip);
+ 	bool (*update_timeouts)(struct tpm_chip *chip,
+ 				unsigned long *timeout_cap);
++	int (*go_idle)(struct tpm_chip *chip);
++	int (*cmd_ready)(struct tpm_chip *chip);
+ 	int (*request_locality)(struct tpm_chip *chip, int loc);
+ 	int (*relinquish_locality)(struct tpm_chip *chip, int loc);
+ 	void (*clk_enable)(struct tpm_chip *chip, bool value);
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index 225ab7783dfd..3de3b10da19a 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -161,7 +161,7 @@ struct sata_device {
+ 	u8     port_no;        /* port number, if this is a PM (Port) */
+ 
+ 	struct ata_port *ap;
+-	struct ata_host ata_host;
++	struct ata_host *ata_host;
+ 	struct smp_resp rps_resp ____cacheline_aligned; /* report_phy_sata_resp */
+ 	u8     fis[ATA_RESP_FIS_SIZE];
+ };
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index ea619021d901..f3183ad10d96 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -710,9 +710,7 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	 * there is still a relative jump) and disabled.
+ 	 */
+ 	op = container_of(ap, struct optimized_kprobe, kp);
+-	if (unlikely(list_empty(&op->list)))
+-		printk(KERN_WARNING "Warning: found a stray unused "
+-			"aggrprobe@%p\n", ap->addr);
++	WARN_ON_ONCE(list_empty(&op->list));
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+@@ -985,7 +983,8 @@ static int arm_kprobe_ftrace(struct kprobe *p)
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 				   (unsigned long)p->addr, 0, 0);
+ 	if (ret) {
+-		pr_debug("Failed to arm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++		pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
++			 p->addr, ret);
+ 		return ret;
+ 	}
+ 
+@@ -1025,7 +1024,8 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
+ 
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 			   (unsigned long)p->addr, 1, 0);
+-	WARN(ret < 0, "Failed to disarm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++	WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
++		  p->addr, ret);
+ 	return ret;
+ }
+ #else	/* !CONFIG_KPROBES_ON_FTRACE */
+@@ -2169,11 +2169,12 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(enable_kprobe);
+ 
++/* Caller must NOT call this in usual path. This is only for critical case */
+ void dump_kprobe(struct kprobe *kp)
+ {
+-	printk(KERN_WARNING "Dumping kprobe:\n");
+-	printk(KERN_WARNING "Name: %s\nAddress: %p\nOffset: %x\n",
+-	       kp->symbol_name, kp->addr, kp->offset);
++	pr_err("Dumping kprobe:\n");
++	pr_err("Name: %s\nOffset: %x\nAddress: %pS\n",
++	       kp->symbol_name, kp->offset, kp->addr);
+ }
+ NOKPROBE_SYMBOL(dump_kprobe);
+ 
+@@ -2196,11 +2197,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
+ 		entry = arch_deref_entry_point((void *)*iter);
+ 
+ 		if (!kernel_text_address(entry) ||
+-		    !kallsyms_lookup_size_offset(entry, &size, &offset)) {
+-			pr_err("Failed to find blacklist at %p\n",
+-				(void *)entry);
++		    !kallsyms_lookup_size_offset(entry, &size, &offset))
+ 			continue;
+-		}
+ 
+ 		ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+ 		if (!ent)
+@@ -2428,8 +2426,16 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
+ 	struct kprobe_blacklist_entry *ent =
+ 		list_entry(v, struct kprobe_blacklist_entry, list);
+ 
+-	seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
+-		   (void *)ent->end_addr, (void *)ent->start_addr);
++	/*
++	 * If /proc/kallsyms is not showing kernel address, we won't
++	 * show them here either.
++	 */
++	if (!kallsyms_show_value())
++		seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
++			   (void *)ent->start_addr);
++	else
++		seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
++			   (void *)ent->end_addr, (void *)ent->start_addr);
+ 	return 0;
+ }
+ 
+@@ -2611,7 +2617,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!dir)
+ 		return -ENOMEM;
+ 
+-	file = debugfs_create_file("list", 0444, dir, NULL,
++	file = debugfs_create_file("list", 0400, dir, NULL,
+ 				&debugfs_kprobes_operations);
+ 	if (!file)
+ 		goto error;
+@@ -2621,7 +2627,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!file)
+ 		goto error;
+ 
+-	file = debugfs_create_file("blacklist", 0444, dir, NULL,
++	file = debugfs_create_file("blacklist", 0400, dir, NULL,
+ 				&debugfs_kprobe_blacklist_ops);
+ 	if (!file)
+ 		goto error;
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 2a7d04049af4..0f1898820cba 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -19,11 +19,16 @@
+ #ifdef CONFIG_PRINTK
+ 
+ #define PRINTK_SAFE_CONTEXT_MASK	 0x3fffffff
+-#define PRINTK_NMI_DEFERRED_CONTEXT_MASK 0x40000000
++#define PRINTK_NMI_DIRECT_CONTEXT_MASK	 0x40000000
+ #define PRINTK_NMI_CONTEXT_MASK		 0x80000000
+ 
+ extern raw_spinlock_t logbuf_lock;
+ 
++__printf(5, 0)
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args);
++
+ __printf(1, 0) int vprintk_default(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_deferred(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args);
+@@ -54,6 +59,8 @@ void __printk_safe_exit(void);
+ 		local_irq_enable();		\
+ 	} while (0)
+ 
++void defer_console_output(void);
++
+ #else
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args) { return 0; }
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 247808333ba4..1d1513215c22 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1824,28 +1824,16 @@ static size_t log_output(int facility, int level, enum log_flags lflags, const c
+ 	return log_store(facility, level, lflags, 0, dict, dictlen, text, text_len);
+ }
+ 
+-asmlinkage int vprintk_emit(int facility, int level,
+-			    const char *dict, size_t dictlen,
+-			    const char *fmt, va_list args)
++/* Must be called under logbuf_lock. */
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args)
+ {
+ 	static char textbuf[LOG_LINE_MAX];
+ 	char *text = textbuf;
+ 	size_t text_len;
+ 	enum log_flags lflags = 0;
+-	unsigned long flags;
+-	int printed_len;
+-	bool in_sched = false;
+-
+-	if (level == LOGLEVEL_SCHED) {
+-		level = LOGLEVEL_DEFAULT;
+-		in_sched = true;
+-	}
+-
+-	boot_delay_msec(level);
+-	printk_delay();
+ 
+-	/* This stops the holder of console_sem just where we want him */
+-	logbuf_lock_irqsave(flags);
+ 	/*
+ 	 * The printf needs to come first; we need the syslog
+ 	 * prefix which might be passed-in as a parameter.
+@@ -1886,8 +1874,29 @@ asmlinkage int vprintk_emit(int facility, int level,
+ 	if (dict)
+ 		lflags |= LOG_PREFIX|LOG_NEWLINE;
+ 
+-	printed_len = log_output(facility, level, lflags, dict, dictlen, text, text_len);
++	return log_output(facility, level, lflags,
++			  dict, dictlen, text, text_len);
++}
+ 
++asmlinkage int vprintk_emit(int facility, int level,
++			    const char *dict, size_t dictlen,
++			    const char *fmt, va_list args)
++{
++	int printed_len;
++	bool in_sched = false;
++	unsigned long flags;
++
++	if (level == LOGLEVEL_SCHED) {
++		level = LOGLEVEL_DEFAULT;
++		in_sched = true;
++	}
++
++	boot_delay_msec(level);
++	printk_delay();
++
++	/* This stops the holder of console_sem just where we want him */
++	logbuf_lock_irqsave(flags);
++	printed_len = vprintk_store(facility, level, dict, dictlen, fmt, args);
+ 	logbuf_unlock_irqrestore(flags);
+ 
+ 	/* If called from the scheduler, we can not call up(). */
+@@ -2878,16 +2887,20 @@ void wake_up_klogd(void)
+ 	preempt_enable();
+ }
+ 
+-int vprintk_deferred(const char *fmt, va_list args)
++void defer_console_output(void)
+ {
+-	int r;
+-
+-	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
+-
+ 	preempt_disable();
+ 	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+ 	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+ 	preempt_enable();
++}
++
++int vprintk_deferred(const char *fmt, va_list args)
++{
++	int r;
++
++	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
++	defer_console_output();
+ 
+ 	return r;
+ }
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index d7d091309054..a0a74c533e4b 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -308,24 +308,33 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 
+ void printk_nmi_enter(void)
+ {
+-	/*
+-	 * The size of the extra per-CPU buffer is limited. Use it only when
+-	 * the main one is locked. If this CPU is not in the safe context,
+-	 * the lock must be taken on another CPU and we could wait for it.
+-	 */
+-	if ((this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK) &&
+-	    raw_spin_is_locked(&logbuf_lock)) {
+-		this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+-	} else {
+-		this_cpu_or(printk_context, PRINTK_NMI_DEFERRED_CONTEXT_MASK);
+-	}
++	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+ void printk_nmi_exit(void)
+ {
+-	this_cpu_and(printk_context,
+-		     ~(PRINTK_NMI_CONTEXT_MASK |
+-		       PRINTK_NMI_DEFERRED_CONTEXT_MASK));
++	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
++}
++
++/*
++ * Marks a code that might produce many messages in NMI context
++ * and the risk of losing them is more critical than eventual
++ * reordering.
++ *
++ * It has effect only when called in NMI context. Then printk()
++ * will try to store the messages into the main logbuf directly
++ * and use the per-CPU buffers only as a fallback when the lock
++ * is not available.
++ */
++void printk_nmi_direct_enter(void)
++{
++	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
++		this_cpu_or(printk_context, PRINTK_NMI_DIRECT_CONTEXT_MASK);
++}
++
++void printk_nmi_direct_exit(void)
++{
++	this_cpu_and(printk_context, ~PRINTK_NMI_DIRECT_CONTEXT_MASK);
+ }
+ 
+ #else
+@@ -363,6 +372,20 @@ void __printk_safe_exit(void)
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ {
++	/*
++	 * Try to use the main logbuf even in NMI. But avoid calling console
++	 * drivers that might have their own locks.
++	 */
++	if ((this_cpu_read(printk_context) & PRINTK_NMI_DIRECT_CONTEXT_MASK) &&
++	    raw_spin_trylock(&logbuf_lock)) {
++		int len;
++
++		len = vprintk_store(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
++		raw_spin_unlock(&logbuf_lock);
++		defer_console_output();
++		return len;
++	}
++
+ 	/* Use extra buffer in NMI when logbuf_lock is taken or in safe mode. */
+ 	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
+ 		return vprintk_nmi(fmt, args);
+@@ -371,13 +394,6 @@ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ 	if (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK)
+ 		return vprintk_safe(fmt, args);
+ 
+-	/*
+-	 * Use the main logbuf when logbuf_lock is available in NMI.
+-	 * But avoid calling console drivers that might have their own locks.
+-	 */
+-	if (this_cpu_read(printk_context) & PRINTK_NMI_DEFERRED_CONTEXT_MASK)
+-		return vprintk_deferred(fmt, args);
+-
+ 	/* No obstacles. */
+ 	return vprintk_default(fmt, args);
+ }
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index e190d1ef3a23..067cb83f37ea 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -81,6 +81,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	unsigned long flags;
+ 	bool enabled;
+ 
++	preempt_disable();
+ 	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+@@ -90,6 +91,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 
+ 	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return enabled;
+ }
+@@ -236,13 +238,24 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
+ 	DEFINE_WAKE_Q(wakeq);
+ 	int err;
++
+ retry:
++	/*
++	 * The waking up of stopper threads has to happen in the same
++	 * scheduling context as the queueing.  Otherwise, there is a
++	 * possibility of one of the above stoppers being woken up by another
++	 * CPU, and preempting us. This will cause us to not wake up the other
++	 * stopper forever.
++	 */
++	preempt_disable();
+ 	raw_spin_lock_irq(&stopper1->lock);
+ 	raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
+ 
+-	err = -ENOENT;
+-	if (!stopper1->enabled || !stopper2->enabled)
++	if (!stopper1->enabled || !stopper2->enabled) {
++		err = -ENOENT;
+ 		goto unlock;
++	}
++
+ 	/*
+ 	 * Ensure that if we race with __stop_cpus() the stoppers won't get
+ 	 * queued up in reverse order leading to system deadlock.
+@@ -253,36 +266,30 @@ retry:
+ 	 * It can be falsely true but it is safe to spin until it is cleared,
+ 	 * queue_stop_cpus_work() does everything under preempt_disable().
+ 	 */
+-	err = -EDEADLK;
+-	if (unlikely(stop_cpus_in_progress))
+-			goto unlock;
++	if (unlikely(stop_cpus_in_progress)) {
++		err = -EDEADLK;
++		goto unlock;
++	}
+ 
+ 	err = 0;
+ 	__cpu_stop_queue_work(stopper1, work1, &wakeq);
+ 	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+-	/*
+-	 * The waking up of stopper threads has to happen
+-	 * in the same scheduling context as the queueing.
+-	 * Otherwise, there is a possibility of one of the
+-	 * above stoppers being woken up by another CPU,
+-	 * and preempting us. This will cause us to n ot
+-	 * wake up the other stopper forever.
+-	 */
+-	preempt_disable();
++
+ unlock:
+ 	raw_spin_unlock(&stopper2->lock);
+ 	raw_spin_unlock_irq(&stopper1->lock);
+ 
+ 	if (unlikely(err == -EDEADLK)) {
++		preempt_enable();
++
+ 		while (stop_cpus_in_progress)
+ 			cpu_relax();
++
+ 		goto retry;
+ 	}
+ 
+-	if (!err) {
+-		wake_up_q(&wakeq);
+-		preempt_enable();
+-	}
++	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return err;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 823687997b01..176debd3481b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8288,6 +8288,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	tracing_off();
+ 
+ 	local_irq_save(flags);
++	printk_nmi_direct_enter();
+ 
+ 	/* Simulate the iterator */
+ 	trace_init_global_iter(&iter);
+@@ -8367,7 +8368,8 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	for_each_tracing_cpu(cpu) {
+ 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
+ 	}
+- 	atomic_dec(&dump_running);
++	atomic_dec(&dump_running);
++	printk_nmi_direct_exit();
+ 	local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL_GPL(ftrace_dump);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 576d18045811..51f5a64d9ec2 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -266,7 +266,7 @@ static void __touch_watchdog(void)
+  * entering idle state.  This should only be used for scheduler events.
+  * Use touch_softlockup_watchdog() for everything else.
+  */
+-void touch_softlockup_watchdog_sched(void)
++notrace void touch_softlockup_watchdog_sched(void)
+ {
+ 	/*
+ 	 * Preemption can be enabled.  It doesn't matter which CPU's timestamp
+@@ -275,7 +275,7 @@ void touch_softlockup_watchdog_sched(void)
+ 	raw_cpu_write(watchdog_touch_ts, 0);
+ }
+ 
+-void touch_softlockup_watchdog(void)
++notrace void touch_softlockup_watchdog(void)
+ {
+ 	touch_softlockup_watchdog_sched();
+ 	wq_watchdog_touch(raw_smp_processor_id());
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index e449a23e9d59..4ece6028007a 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -29,7 +29,7 @@ static struct cpumask dead_events_mask;
+ static unsigned long hardlockup_allcpu_dumped;
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+ 
+-void arch_touch_nmi_watchdog(void)
++notrace void arch_touch_nmi_watchdog(void)
+ {
+ 	/*
+ 	 * Using __raw here because some code paths have
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 78b192071ef7..5f78c6e41796 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5559,7 +5559,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 	mod_timer(&wq_watchdog_timer, jiffies + thresh);
+ }
+ 
+-void wq_watchdog_touch(int cpu)
++notrace void wq_watchdog_touch(int cpu)
+ {
+ 	if (cpu >= 0)
+ 		per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
+diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
+index 61a6b5aab07e..15ca78e1c7d4 100644
+--- a/lib/nmi_backtrace.c
++++ b/lib/nmi_backtrace.c
+@@ -87,11 +87,9 @@ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+ 
+ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ {
+-	static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ 	int cpu = smp_processor_id();
+ 
+ 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
+-		arch_spin_lock(&lock);
+ 		if (regs && cpu_in_idle(instruction_pointer(regs))) {
+ 			pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n",
+ 				cpu, (void *)instruction_pointer(regs));
+@@ -102,7 +100,6 @@ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ 			else
+ 				dump_stack();
+ 		}
+-		arch_spin_unlock(&lock);
+ 		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
+ 		return true;
+ 	}
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index a48aaa79d352..cda186230287 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1942,6 +1942,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 		case 'F':
+ 			return device_node_string(buf, end, ptr, spec, fmt + 1);
+ 		}
++		break;
+ 	case 'x':
+ 		return pointer_string(buf, end, ptr, spec);
+ 	}
+diff --git a/mm/memory.c b/mm/memory.c
+index 0e356dd923c2..86d4329acb05 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+ 
+ 	tlb_flush(tlb);
+ 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
+-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+-	tlb_table_flush(tlb);
+-#endif
+ 	__tlb_reset_range(tlb);
+ }
+ 
+@@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+ {
+ 	struct mmu_gather_batch *batch;
+ 
++#ifdef CONFIG_HAVE_RCU_TABLE_FREE
++	tlb_table_flush(tlb);
++#endif
+ 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
+ 		free_pages_and_swap_cache(batch->pages, batch->nr);
+ 		batch->nr = 0;
+@@ -330,6 +330,21 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
+  * See the comment near struct mmu_table_batch.
+  */
+ 
++/*
++ * If we want tlb_remove_table() to imply TLB invalidates.
++ */
++static inline void tlb_table_invalidate(struct mmu_gather *tlb)
++{
++#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
++	/*
++	 * Invalidate page-table caches used by hardware walkers. Then we still
++	 * need to RCU-sched wait while freeing the pages because software
++	 * walkers can still be in-flight.
++	 */
++	tlb_flush_mmu_tlbonly(tlb);
++#endif
++}
++
+ static void tlb_remove_table_smp_sync(void *arg)
+ {
+ 	/* Simply deliver the interrupt */
+@@ -366,6 +381,7 @@ void tlb_table_flush(struct mmu_gather *tlb)
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+ 	if (*batch) {
++		tlb_table_invalidate(tlb);
+ 		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
+ 		*batch = NULL;
+ 	}
+@@ -387,11 +403,13 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
++			tlb_table_invalidate(tlb);
+ 			tlb_remove_table_one(table);
+ 			return;
+ 		}
+ 		(*batch)->nr = 0;
+ 	}
++
+ 	(*batch)->tables[(*batch)->nr++] = table;
+ 	if ((*batch)->nr == MAX_TABLE_BATCH)
+ 		tlb_table_flush(tlb);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 16161a36dc73..e8d1024dc547 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -280,7 +280,6 @@ rpcrdma_conn_upcall(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ 		++xprt->rx_xprt.connect_cookie;
+ 		connstate = -ECONNABORTED;
+ connected:
+-		xprt->rx_buf.rb_credits = 1;
+ 		ep->rep_connected = connstate;
+ 		rpcrdma_conn_func(ep);
+ 		wake_up_all(&ep->rep_connect_wait);
+@@ -755,6 +754,7 @@ retry:
+ 	}
+ 
+ 	ep->rep_connected = 0;
++	rpcrdma_post_recvs(r_xprt, true);
+ 
+ 	rc = rdma_connect(ia->ri_id, &ep->rep_remote_cma);
+ 	if (rc) {
+@@ -773,8 +773,6 @@ retry:
+ 
+ 	dprintk("RPC:       %s: connected\n", __func__);
+ 
+-	rpcrdma_post_recvs(r_xprt, true);
+-
+ out:
+ 	if (rc)
+ 		ep->rep_connected = rc;
+@@ -1171,6 +1169,7 @@ rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
+ 		list_add(&req->rl_list, &buf->rb_send_bufs);
+ 	}
+ 
++	buf->rb_credits = 1;
+ 	buf->rb_posted_receives = 0;
+ 	INIT_LIST_HEAD(&buf->rb_recv_bufs);
+ 
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 0057d8eafcc1..8f0f508a78e9 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1062,7 +1062,7 @@ sub dump_struct($$) {
+     my $x = shift;
+     my $file = shift;
+ 
+-    if ($x =~ /(struct|union)\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /(struct|union)\s+(\w+)\s*\{(.*)\}/) {
+ 	my $decl_type = $1;
+ 	$declaration_name = $2;
+ 	my $members = $3;
+@@ -1148,20 +1148,20 @@ sub dump_struct($$) {
+ 				}
+ 			}
+ 		}
+-		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)}([^\{\}\;]*)\;/$newmember/;
++		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)\}([^\{\}\;]*)\;/$newmember/;
+ 	}
+ 
+ 	# Ignore other nested elements, like enums
+-	$members =~ s/({[^\{\}]*})//g;
++	$members =~ s/(\{[^\{\}]*\})//g;
+ 
+ 	create_parameterlist($members, ';', $file, $declaration_name);
+ 	check_sections($file, $declaration_name, $decl_type, $sectcheck, $struct_actual);
+ 
+ 	# Adjust declaration for better display
+-	$declaration =~ s/([{;])/$1\n/g;
+-	$declaration =~ s/}\s+;/};/g;
++	$declaration =~ s/([\{;])/$1\n/g;
++	$declaration =~ s/\}\s+;/};/g;
+ 	# Better handle inlined enums
+-	do {} while ($declaration =~ s/(enum\s+{[^}]+),([^\n])/$1,\n$2/);
++	do {} while ($declaration =~ s/(enum\s+\{[^\}]+),([^\n])/$1,\n$2/);
+ 
+ 	my @def_args = split /\n/, $declaration;
+ 	my $level = 1;
+@@ -1171,12 +1171,12 @@ sub dump_struct($$) {
+ 		$clause =~ s/\s+$//;
+ 		$clause =~ s/\s+/ /;
+ 		next if (!$clause);
+-		$level-- if ($clause =~ m/(})/ && $level > 1);
++		$level-- if ($clause =~ m/(\})/ && $level > 1);
+ 		if (!($clause =~ m/^\s*#/)) {
+ 			$declaration .= "\t" x $level;
+ 		}
+ 		$declaration .= "\t" . $clause . "\n";
+-		$level++ if ($clause =~ m/({)/ && !($clause =~m/}/));
++		$level++ if ($clause =~ m/(\{)/ && !($clause =~m/\}/));
+ 	}
+ 	output_declaration($declaration_name,
+ 			   'struct',
+@@ -1244,7 +1244,7 @@ sub dump_enum($$) {
+     # strip #define macros inside enums
+     $x =~ s@#\s*((define|ifdef)\s+|endif)[^;]*;@@gos;
+ 
+-    if ($x =~ /enum\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /enum\s+(\w+)\s*\{(.*)\}/) {
+ 	$declaration_name = $1;
+ 	my $members = $2;
+ 	my %_members;
+@@ -1785,7 +1785,7 @@ sub process_proto_type($$) {
+     }
+ 
+     while (1) {
+-	if ( $x =~ /([^{};]*)([{};])(.*)/ ) {
++	if ( $x =~ /([^\{\};]*)([\{\};])(.*)/ ) {
+             if( length $prototype ) {
+                 $prototype .= " "
+             }
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 2fcdd84021a5..86c7805da997 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2642,7 +2642,10 @@ int wm_adsp2_preloader_get(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
++	struct soc_mixer_control *mc =
++		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 
+ 	ucontrol->value.integer.value[0] = dsp->preloaded;
+ 
+@@ -2654,10 +2657,11 @@ int wm_adsp2_preloader_put(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 	char preload[32];
+ 
+ 	snprintf(preload, ARRAY_SIZE(preload), "DSP%u Preload", mc->shift);
+diff --git a/sound/soc/sirf/sirf-usp.c b/sound/soc/sirf/sirf-usp.c
+index 77e7dcf969d0..d70fcd4a1adf 100644
+--- a/sound/soc/sirf/sirf-usp.c
++++ b/sound/soc/sirf/sirf-usp.c
+@@ -370,10 +370,9 @@ static int sirf_usp_pcm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, usp);
+ 
+ 	mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	base = devm_ioremap(&pdev->dev, mem_res->start,
+-		resource_size(mem_res));
+-	if (base == NULL)
+-		return -ENOMEM;
++	base = devm_ioremap_resource(&pdev->dev, mem_res);
++	if (IS_ERR(base))
++		return PTR_ERR(base);
+ 	usp->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+ 					    &sirf_usp_regmap_config);
+ 	if (IS_ERR(usp->regmap))
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5e7ae47a9658..5feae9666822 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1694,6 +1694,14 @@ static u64 dpcm_runtime_base_format(struct snd_pcm_substream *substream)
+ 		int i;
+ 
+ 		for (i = 0; i < be->num_codecs; i++) {
++			/*
++			 * Skip CODECs which don't support the current stream
++			 * type. See soc_pcm_init_runtime_hw() for more details
++			 */
++			if (!snd_soc_dai_stream_valid(be->codec_dais[i],
++						      stream))
++				continue;
++
+ 			codec_dai_drv = be->codec_dais[i]->driver;
+ 			if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+ 				codec_stream = &codec_dai_drv->playback;
+diff --git a/sound/soc/zte/zx-tdm.c b/sound/soc/zte/zx-tdm.c
+index dc955272f58b..389272eeba9a 100644
+--- a/sound/soc/zte/zx-tdm.c
++++ b/sound/soc/zte/zx-tdm.c
+@@ -144,8 +144,8 @@ static void zx_tdm_rx_dma_en(struct zx_tdm_info *tdm, bool on)
+ #define ZX_TDM_RATES	(SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000)
+ 
+ #define ZX_TDM_FMTBIT \
+-	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FORMAT_MU_LAW | \
+-	SNDRV_PCM_FORMAT_A_LAW)
++	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_MU_LAW | \
++	SNDRV_PCM_FMTBIT_A_LAW)
+ 
+ static int zx_tdm_dai_probe(struct snd_soc_dai *dai)
+ {
+diff --git a/tools/perf/arch/s390/util/kvm-stat.c b/tools/perf/arch/s390/util/kvm-stat.c
+index d233e2eb9592..aaabab5e2830 100644
+--- a/tools/perf/arch/s390/util/kvm-stat.c
++++ b/tools/perf/arch/s390/util/kvm-stat.c
+@@ -102,7 +102,7 @@ const char * const kvm_skip_events[] = {
+ 
+ int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid)
+ {
+-	if (strstr(cpuid, "IBM/S390")) {
++	if (strstr(cpuid, "IBM")) {
+ 		kvm->exit_reasons = sie_exit_reasons;
+ 		kvm->exit_reasons_isa = "SIE";
+ 	} else
+diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
+index bd3d57f40f1b..17cecc96f735 100644
+--- a/virt/kvm/arm/arch_timer.c
++++ b/virt/kvm/arm/arch_timer.c
+@@ -295,9 +295,9 @@ static void phys_timer_emulate(struct kvm_vcpu *vcpu)
+ 	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	/*
+-	 * If the timer can fire now we have just raised the IRQ line and we
+-	 * don't need to have a soft timer scheduled for the future.  If the
+-	 * timer cannot fire at all, then we also don't need a soft timer.
++	 * If the timer can fire now, we don't need to have a soft timer
++	 * scheduled for the future.  If the timer cannot fire at all,
++	 * then we also don't need a soft timer.
+ 	 */
+ 	if (kvm_timer_should_fire(ptimer) || !kvm_timer_irq_can_fire(ptimer)) {
+ 		soft_timer_cancel(&timer->phys_timer, NULL);
+@@ -332,10 +332,10 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
+ 	level = kvm_timer_should_fire(vtimer);
+ 	kvm_timer_update_irq(vcpu, level, vtimer);
+ 
++	phys_timer_emulate(vcpu);
++
+ 	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
+ 		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+-
+-	phys_timer_emulate(vcpu);
+ }
+ 
+ static void vtimer_save_state(struct kvm_vcpu *vcpu)
+@@ -487,6 +487,7 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ {
+ 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
++	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	if (unlikely(!timer->enabled))
+ 		return;
+@@ -502,6 +503,10 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ 
+ 	/* Set the background timer for the physical timer emulation. */
+ 	phys_timer_emulate(vcpu);
++
++	/* If the timer fired while we weren't running, inject it now */
++	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
++		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+ }
+ 
+ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 1d90d79706bd..c2b95a22959b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1015,19 +1015,35 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+ 	pmd = stage2_get_pmd(kvm, cache, addr);
+ 	VM_BUG_ON(!pmd);
+ 
+-	/*
+-	 * Mapping in huge pages should only happen through a fault.  If a
+-	 * page is merged into a transparent huge page, the individual
+-	 * subpages of that huge page should be unmapped through MMU
+-	 * notifiers before we get here.
+-	 *
+-	 * Merging of CompoundPages is not supported; they should become
+-	 * splitting first, unmapped, merged, and mapped back in on-demand.
+-	 */
+-	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
+-
+ 	old_pmd = *pmd;
+ 	if (pmd_present(old_pmd)) {
++		/*
++		 * Multiple vcpus faulting on the same PMD entry, can
++		 * lead to them sequentially updating the PMD with the
++		 * same value. Following the break-before-make
++		 * (pmd_clear() followed by tlb_flush()) process can
++		 * hinder forward progress due to refaults generated
++		 * on missing translations.
++		 *
++		 * Skip updating the page table if the entry is
++		 * unchanged.
++		 */
++		if (pmd_val(old_pmd) == pmd_val(*new_pmd))
++			return 0;
++
++		/*
++		 * Mapping in huge pages should only happen through a
++		 * fault.  If a page is merged into a transparent huge
++		 * page, the individual subpages of that huge page
++		 * should be unmapped through MMU notifiers before we
++		 * get here.
++		 *
++		 * Merging of CompoundPages is not supported; they
++		 * should become splitting first, unmapped, merged,
++		 * and mapped back in on-demand.
++		 */
++		VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
++
+ 		pmd_clear(pmd);
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {
+@@ -1102,6 +1118,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+ 	/* Create 2nd stage page table mapping - Level 3 */
+ 	old_pte = *pte;
+ 	if (pte_present(old_pte)) {
++		/* Skip page table update if there is no change */
++		if (pte_val(old_pte) == pte_val(*new_pte))
++			return 0;
++
+ 		kvm_set_pte(pte, __pte(0));
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     8e059277ddb6ba638e6fbbc3f6f005fd4bdbac44
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 19 22:41:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:24 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8e059277

Linux patch 4.18.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1008_linux-4.18.9.patch | 5298 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5302 insertions(+)

diff --git a/0000_README b/0000_README
index 597262e..6534d27 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-4.18.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.8
 
+Patch:  1008_linux-4.18.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-4.18.9.patch b/1008_linux-4.18.9.patch
new file mode 100644
index 0000000..877b17a
--- /dev/null
+++ b/1008_linux-4.18.9.patch
@@ -0,0 +1,5298 @@
+diff --git a/Makefile b/Makefile
+index 0d73431f66cd..1178348fb9ca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/boot/dts/axs10x_mb.dtsi b/arch/arc/boot/dts/axs10x_mb.dtsi
+index 47b74fbc403c..37bafd44e36d 100644
+--- a/arch/arc/boot/dts/axs10x_mb.dtsi
++++ b/arch/arc/boot/dts/axs10x_mb.dtsi
+@@ -9,6 +9,10 @@
+  */
+ 
+ / {
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	axs10x_mb {
+ 		compatible = "simple-bus";
+ 		#address-cells = <1>;
+@@ -68,7 +72,7 @@
+ 			};
+ 		};
+ 
+-		ethernet@0x18000 {
++		gmac: ethernet@0x18000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = < 0x18000 0x2000 >;
+@@ -81,6 +85,7 @@
+ 			max-speed = <100>;
+ 			resets = <&creg_rst 5>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 		};
+ 
+ 		ehci@0x40000 {
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index 006aa3de5348..d00f283094d3 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -25,6 +25,10 @@
+ 		bootargs = "earlycon=uart8250,mmio32,0xf0005000,115200n8 console=ttyS0,115200n8 debug print-fatal-signals=1";
+ 	};
+ 
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	cpus {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+@@ -163,7 +167,7 @@
+ 			#clock-cells = <0>;
+ 		};
+ 
+-		ethernet@8000 {
++		gmac: ethernet@8000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = <0x8000 0x2000>;
+@@ -176,6 +180,7 @@
+ 			phy-handle = <&phy0>;
+ 			resets = <&cgu_rst HSDK_ETH_RESET>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 
+ 			mdio {
+ 				#address-cells = <1>;
+diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig
+index a635ea972304..df848c44dacd 100644
+--- a/arch/arc/configs/axs101_defconfig
++++ b/arch/arc/configs/axs101_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig
+index aa507e423075..bcbdc0494faa 100644
+--- a/arch/arc/configs/axs103_defconfig
++++ b/arch/arc/configs/axs103_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig
+index eba07f468654..d145bce7ebdf 100644
+--- a/arch/arc/configs/axs103_smp_defconfig
++++ b/arch/arc/configs/axs103_smp_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index d496ef579859..ca46153d7915 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -98,8 +98,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
+ 	val = read_sysreg(cpacr_el1);
+ 	val |= CPACR_EL1_TTA;
+ 	val &= ~CPACR_EL1_ZEN;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val &= ~CPACR_EL1_FPEN;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cpacr_el1);
+ 
+@@ -114,8 +116,10 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
+ 
+ 	val = CPTR_EL2_DEFAULT;
+ 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val |= CPTR_EL2_TFP;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cptr_el2);
+ }
+@@ -129,7 +133,6 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
+ 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+ 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
+ 
+-	__activate_traps_fpsimd32(vcpu);
+ 	if (has_vhe())
+ 		activate_traps_vhe(vcpu);
+ 	else
+diff --git a/arch/mips/boot/dts/mscc/ocelot.dtsi b/arch/mips/boot/dts/mscc/ocelot.dtsi
+index 4f33dbc67348..7096915f26e0 100644
+--- a/arch/mips/boot/dts/mscc/ocelot.dtsi
++++ b/arch/mips/boot/dts/mscc/ocelot.dtsi
+@@ -184,7 +184,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			compatible = "mscc,ocelot-miim";
+-			reg = <0x107009c 0x36>, <0x10700f0 0x8>;
++			reg = <0x107009c 0x24>, <0x10700f0 0x8>;
+ 			interrupts = <14>;
+ 			status = "disabled";
+ 
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index 8505db478904..1d92efb82c37 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -322,6 +322,7 @@ static int __init octeon_ehci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ehci_node);
++	of_node_put(ehci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+@@ -384,6 +385,7 @@ static int __init octeon_ohci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ohci_node);
++	of_node_put(ohci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+diff --git a/arch/mips/generic/init.c b/arch/mips/generic/init.c
+index 5ba6fcc26fa7..94a78dbbc91f 100644
+--- a/arch/mips/generic/init.c
++++ b/arch/mips/generic/init.c
+@@ -204,6 +204,7 @@ void __init arch_init_irq(void)
+ 					    "mti,cpu-interrupt-controller");
+ 	if (!cpu_has_veic && !intc_node)
+ 		mips_cpu_irq_init();
++	of_node_put(intc_node);
+ 
+ 	irqchip_init();
+ }
+diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
+index cea8ad864b3f..57b34257be2b 100644
+--- a/arch/mips/include/asm/io.h
++++ b/arch/mips/include/asm/io.h
+@@ -141,14 +141,14 @@ static inline void * phys_to_virt(unsigned long address)
+ /*
+  * ISA I/O bus memory addresses are 1:1 with the physical address.
+  */
+-static inline unsigned long isa_virt_to_bus(volatile void * address)
++static inline unsigned long isa_virt_to_bus(volatile void *address)
+ {
+-	return (unsigned long)address - PAGE_OFFSET;
++	return virt_to_phys(address);
+ }
+ 
+-static inline void * isa_bus_to_virt(unsigned long address)
++static inline void *isa_bus_to_virt(unsigned long address)
+ {
+-	return (void *)(address + PAGE_OFFSET);
++	return phys_to_virt(address);
+ }
+ 
+ #define isa_page_to_bus page_to_phys
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 019035d7225c..8f845f6e5f42 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -13,6 +13,7 @@
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
++#include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+@@ -20,6 +21,7 @@
+ 
+ #include <asm/abi.h>
+ #include <asm/mips-cps.h>
++#include <asm/page.h>
+ #include <asm/vdso.h>
+ 
+ /* Kernel-provided data used by the VDSO. */
+@@ -128,12 +130,30 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	vvar_size = gic_size + PAGE_SIZE;
+ 	size = vvar_size + image->size;
+ 
++	/*
++	 * Find a region that's large enough for us to perform the
++	 * colour-matching alignment below.
++	 */
++	if (cpu_has_dc_aliases)
++		size += shm_align_mask + 1;
++
+ 	base = get_unmapped_area(NULL, 0, size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+ 	}
+ 
++	/*
++	 * If we suffer from dcache aliasing, ensure that the VDSO data page
++	 * mapping is coloured the same as the kernel's mapping of that memory.
++	 * This ensures that when the kernel updates the VDSO data userland
++	 * will observe it without requiring cache invalidations.
++	 */
++	if (cpu_has_dc_aliases) {
++		base = __ALIGN_MASK(base, shm_align_mask);
++		base += ((unsigned long)&vdso_data - gic_size) & shm_align_mask;
++	}
++
+ 	data_addr = base + gic_size;
+ 	vdso_addr = data_addr + PAGE_SIZE;
+ 
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index e12dfa48b478..a5893b2cdc0e 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -835,7 +835,8 @@ static void r4k_flush_icache_user_range(unsigned long start, unsigned long end)
+ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+@@ -871,7 +872,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+index 01ee40f11f3a..76234a14b97d 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
++++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/slab.h>
+ #include <linux/cpumask.h>
++#include <linux/kmemleak.h>
+ #include <linux/percpu.h>
+ 
+ struct vmemmap_backing {
+@@ -82,6 +83,13 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+ 			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Don't scan the PGD for pointers, it contains references to PUDs but
++	 * those references are not full pointers and so can't be recognised by
++	 * kmemleak.
++	 */
++	kmemleak_no_scan(pgd);
++
+ 	/*
+ 	 * With hugetlb, we don't clear the second half of the page table.
+ 	 * If we share the same slab cache with the pmd or pud level table,
+@@ -110,8 +118,19 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+ 
+ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+ {
+-	return kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
+-		pgtable_gfp_flags(mm, GFP_KERNEL));
++	pud_t *pud;
++
++	pud = kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
++			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Tell kmemleak to ignore the PUD, that means don't scan it for
++	 * pointers and don't consider it a leak. PUDs are typically only
++	 * referred to by their PGD, but kmemleak is not able to recognise those
++	 * as pointers, leading to false leak reports.
++	 */
++	kmemleak_ignore(pud);
++
++	return pud;
+ }
+ 
+ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 176f911ee983..7efc42538ccf 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -738,10 +738,10 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ 					      gpa, shift);
+ 		kvmppc_radix_tlbie_page(kvm, gpa, shift);
+ 		if ((old & _PAGE_DIRTY) && memslot->dirty_bitmap) {
+-			unsigned long npages = 1;
++			unsigned long psize = PAGE_SIZE;
+ 			if (shift)
+-				npages = 1ul << (shift - PAGE_SHIFT);
+-			kvmppc_update_dirty_map(memslot, gfn, npages);
++				psize = 1ul << shift;
++			kvmppc_update_dirty_map(memslot, gfn, psize);
+ 		}
+ 	}
+ 	return 0;				
+diff --git a/arch/powerpc/platforms/4xx/msi.c b/arch/powerpc/platforms/4xx/msi.c
+index 81b2cbce7df8..7c324eff2f22 100644
+--- a/arch/powerpc/platforms/4xx/msi.c
++++ b/arch/powerpc/platforms/4xx/msi.c
+@@ -146,13 +146,19 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	const u32 *sdr_addr;
+ 	dma_addr_t msi_phys;
+ 	void *msi_virt;
++	int err;
+ 
+ 	sdr_addr = of_get_property(dev->dev.of_node, "sdr-base", NULL);
+ 	if (!sdr_addr)
+-		return -1;
++		return -EINVAL;
+ 
+-	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
+-	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
++	if (!msi_data)
++		return -EINVAL;
++
++	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
++	if (!msi_mask)
++		return -EINVAL;
+ 
+ 	msi->msi_dev = of_find_node_by_name(NULL, "ppc4xx-msi");
+ 	if (!msi->msi_dev)
+@@ -160,30 +166,30 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 
+ 	msi->msi_regs = of_iomap(msi->msi_dev, 0);
+ 	if (!msi->msi_regs) {
+-		dev_err(&dev->dev, "of_iomap problem failed\n");
+-		return -ENOMEM;
++		dev_err(&dev->dev, "of_iomap failed\n");
++		err = -ENOMEM;
++		goto node_put;
+ 	}
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi register mapped 0x%x 0x%x\n",
+ 		(u32) (msi->msi_regs + PEIH_TERMADH), (u32) (msi->msi_regs));
+ 
+ 	msi_virt = dma_alloc_coherent(&dev->dev, 64, &msi_phys, GFP_KERNEL);
+-	if (!msi_virt)
+-		return -ENOMEM;
++	if (!msi_virt) {
++		err = -ENOMEM;
++		goto iounmap;
++	}
+ 	msi->msi_addr_hi = upper_32_bits(msi_phys);
+ 	msi->msi_addr_lo = lower_32_bits(msi_phys & 0xffffffff);
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi address high 0x%x, low 0x%x\n",
+ 		msi->msi_addr_hi, msi->msi_addr_lo);
+ 
++	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
++	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++
+ 	/* Progam the Interrupt handler Termination addr registers */
+ 	out_be32(msi->msi_regs + PEIH_TERMADH, msi->msi_addr_hi);
+ 	out_be32(msi->msi_regs + PEIH_TERMADL, msi->msi_addr_lo);
+ 
+-	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
+-	if (!msi_data)
+-		return -1;
+-	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
+-	if (!msi_mask)
+-		return -1;
+ 	/* Program MSI Expected data and Mask bits */
+ 	out_be32(msi->msi_regs + PEIH_MSIED, *msi_data);
+ 	out_be32(msi->msi_regs + PEIH_MSIMK, *msi_mask);
+@@ -191,6 +197,12 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	dma_free_coherent(&dev->dev, 64, msi_virt, msi_phys);
+ 
+ 	return 0;
++
++iounmap:
++	iounmap(msi->msi_regs);
++node_put:
++	of_node_put(msi->msi_dev);
++	return err;
+ }
+ 
+ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+@@ -209,7 +221,6 @@ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+ 		msi_bitmap_free(&msi->bitmap);
+ 	iounmap(msi->msi_regs);
+ 	of_node_put(msi->msi_dev);
+-	kfree(msi);
+ 
+ 	return 0;
+ }
+@@ -223,18 +234,16 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "PCIE-MSI: Setting up MSI support...\n");
+ 
+-	msi = kzalloc(sizeof(*msi), GFP_KERNEL);
+-	if (!msi) {
+-		dev_err(&dev->dev, "No memory for MSI structure\n");
++	msi = devm_kzalloc(&dev->dev, sizeof(*msi), GFP_KERNEL);
++	if (!msi)
+ 		return -ENOMEM;
+-	}
+ 	dev->dev.platform_data = msi;
+ 
+ 	/* Get MSI ranges */
+ 	err = of_address_to_resource(dev->dev.of_node, 0, &res);
+ 	if (err) {
+ 		dev_err(&dev->dev, "%pOF resource error!\n", dev->dev.of_node);
+-		goto error_out;
++		return err;
+ 	}
+ 
+ 	msi_irqs = of_irq_count(dev->dev.of_node);
+@@ -243,7 +252,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	err = ppc4xx_setup_pcieh_hw(dev, res, msi);
+ 	if (err)
+-		goto error_out;
++		return err;
+ 
+ 	err = ppc4xx_msi_init_allocator(dev, msi);
+ 	if (err) {
+@@ -256,7 +265,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 		phb->controller_ops.setup_msi_irqs = ppc4xx_setup_msi_irqs;
+ 		phb->controller_ops.teardown_msi_irqs = ppc4xx_teardown_msi_irqs;
+ 	}
+-	return err;
++	return 0;
+ 
+ error_out:
+ 	ppc4xx_of_msi_remove(dev);
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index 8cdf91f5d3a4..c773465b2c95 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -437,8 +437,9 @@ static int get_mmio_atsd_reg(struct npu *npu)
+ 	int i;
+ 
+ 	for (i = 0; i < npu->mmio_atsd_count; i++) {
+-		if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
+-			return i;
++		if (!test_bit(i, &npu->mmio_atsd_usage))
++			if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
++				return i;
+ 	}
+ 
+ 	return -ENOSPC;
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 8a4868a3964b..cb098e962ffe 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -647,6 +647,15 @@ void of_pci_parse_iov_addrs(struct pci_dev *dev, const int *indexes)
+ 	}
+ }
+ 
++static void pseries_disable_sriov_resources(struct pci_dev *pdev)
++{
++	int i;
++
++	pci_warn(pdev, "No hypervisor support for SR-IOV on this device, IOV BARs disabled.\n");
++	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
++		pdev->resource[i + PCI_IOV_RESOURCES].flags = 0;
++}
++
+ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ {
+ 	const int *indexes;
+@@ -654,10 +663,10 @@ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ 
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_set_vf_bar_size(pdev, indexes);
++	if (indexes)
++		of_pci_set_vf_bar_size(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+@@ -669,10 +678,10 @@ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+ 		return;
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_parse_iov_addrs(pdev, indexes);
++	if (indexes)
++		of_pci_parse_iov_addrs(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static resource_size_t pseries_pci_iov_resource_alignment(struct pci_dev *pdev,
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 84c89cb9636f..cbdd8341f17e 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -173,7 +173,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		return set_validity_icpt(scb_s, 0x0039U);
+ 
+ 	/* copy only the wrapping keys */
+-	if (read_guest_real(vcpu, crycb_addr + 72, &vsie_page->crycb, 56))
++	if (read_guest_real(vcpu, crycb_addr + 72,
++			    vsie_page->crycb.dea_wrapping_key_mask, 56))
+ 		return set_validity_icpt(scb_s, 0x0035U);
+ 
+ 	scb_s->ecb3 |= ecb3_flags;
+diff --git a/arch/x86/include/asm/kdebug.h b/arch/x86/include/asm/kdebug.h
+index 395c9631e000..75f1e35e7c15 100644
+--- a/arch/x86/include/asm/kdebug.h
++++ b/arch/x86/include/asm/kdebug.h
+@@ -22,10 +22,20 @@ enum die_val {
+ 	DIE_NMIUNKNOWN,
+ };
+ 
++enum show_regs_mode {
++	SHOW_REGS_SHORT,
++	/*
++	 * For when userspace crashed, but we don't think it's our fault, and
++	 * therefore don't print kernel registers.
++	 */
++	SHOW_REGS_USER,
++	SHOW_REGS_ALL
++};
++
+ extern void die(const char *, struct pt_regs *,long);
+ extern int __must_check __die(const char *, struct pt_regs *, long);
+ extern void show_stack_regs(struct pt_regs *regs);
+-extern void __show_regs(struct pt_regs *regs, int all);
++extern void __show_regs(struct pt_regs *regs, enum show_regs_mode);
+ extern void show_iret_regs(struct pt_regs *regs);
+ extern unsigned long oops_begin(void);
+ extern void oops_end(unsigned long, struct pt_regs *, int signr);
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index acebb808c4b5..0722b7745382 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1198,18 +1198,22 @@ enum emulation_result {
+ #define EMULTYPE_NO_DECODE	    (1 << 0)
+ #define EMULTYPE_TRAP_UD	    (1 << 1)
+ #define EMULTYPE_SKIP		    (1 << 2)
+-#define EMULTYPE_RETRY		    (1 << 3)
+-#define EMULTYPE_NO_REEXECUTE	    (1 << 4)
+-#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 5)
+-#define EMULTYPE_VMWARE		    (1 << 6)
++#define EMULTYPE_ALLOW_RETRY	    (1 << 3)
++#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 4)
++#define EMULTYPE_VMWARE		    (1 << 5)
+ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
+ 			    int emulation_type, void *insn, int insn_len);
+ 
+ static inline int emulate_instruction(struct kvm_vcpu *vcpu,
+ 			int emulation_type)
+ {
+-	return x86_emulate_instruction(vcpu, 0,
+-			emulation_type | EMULTYPE_NO_REEXECUTE, NULL, 0);
++	return x86_emulate_instruction(vcpu, 0, emulation_type, NULL, 0);
++}
++
++static inline int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
++						      void *insn, int insn_len)
++{
++	return x86_emulate_instruction(vcpu, 0, 0, insn, insn_len);
+ }
+ 
+ void kvm_enable_efer_bits(u64);
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index c9b773401fd8..21d1fa5eaa5f 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -422,7 +422,7 @@ static int activate_managed(struct irq_data *irqd)
+ 	if (WARN_ON_ONCE(cpumask_empty(vector_searchmask))) {
+ 		/* Something in the core code broke! Survive gracefully */
+ 		pr_err("Managed startup for irq %u, but no CPU\n", irqd->irq);
+-		return EINVAL;
++		return -EINVAL;
+ 	}
+ 
+ 	ret = assign_managed_vector(irqd, vector_searchmask);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 0624957aa068..07b5fc00b188 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -504,6 +504,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 	struct microcode_amd *mc_amd;
+ 	struct ucode_cpu_info *uci;
+ 	struct ucode_patch *p;
++	enum ucode_state ret;
+ 	u32 rev, dummy;
+ 
+ 	BUG_ON(raw_smp_processor_id() != cpu);
+@@ -521,9 +522,8 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 
+ 	/* need to apply patch? */
+ 	if (rev >= mc_amd->hdr.patch_id) {
+-		c->microcode = rev;
+-		uci->cpu_sig.rev = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	if (__apply_microcode_amd(mc_amd)) {
+@@ -531,13 +531,21 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 			cpu, mc_amd->hdr.patch_id);
+ 		return UCODE_ERROR;
+ 	}
+-	pr_info("CPU%d: new patch_level=0x%08x\n", cpu,
+-		mc_amd->hdr.patch_id);
+ 
+-	uci->cpu_sig.rev = mc_amd->hdr.patch_id;
+-	c->microcode = mc_amd->hdr.patch_id;
++	rev = mc_amd->hdr.patch_id;
++	ret = UCODE_UPDATED;
++
++	pr_info("CPU%d: new patch_level=0x%08x\n", cpu, rev);
+ 
+-	return UCODE_UPDATED;
++out:
++	uci->cpu_sig.rev = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
++
++	return ret;
+ }
+ 
+ static int install_equiv_cpu_table(const u8 *buf)
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 97ccf4c3b45b..16936a24795c 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -795,6 +795,7 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	struct microcode_intel *mc;
++	enum ucode_state ret;
+ 	static int prev_rev;
+ 	u32 rev;
+ 
+@@ -817,9 +818,8 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	 */
+ 	rev = intel_get_microcode_revision();
+ 	if (rev >= mc->hdr.rev) {
+-		uci->cpu_sig.rev = rev;
+-		c->microcode = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -848,10 +848,17 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 		prev_rev = rev;
+ 	}
+ 
++	ret = UCODE_UPDATED;
++
++out:
+ 	uci->cpu_sig.rev = rev;
+-	c->microcode = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
+ 
+-	return UCODE_UPDATED;
++	return ret;
+ }
+ 
+ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 17b02adc79aa..0c5a9fc6e36d 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -155,7 +155,7 @@ static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs,
+ 	 * they can be printed in the right context.
+ 	 */
+ 	if (!partial && on_stack(info, regs, sizeof(*regs))) {
+-		__show_regs(regs, 0);
++		__show_regs(regs, SHOW_REGS_SHORT);
+ 
+ 	} else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
+ 				       IRET_FRAME_SIZE)) {
+@@ -353,7 +353,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	oops_exit();
+ 
+ 	/* Executive summary in case the oops scrolled away */
+-	__show_regs(&exec_summary_regs, true);
++	__show_regs(&exec_summary_regs, SHOW_REGS_ALL);
+ 
+ 	if (!signr)
+ 		return;
+@@ -416,14 +416,9 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 
+ void show_regs(struct pt_regs *regs)
+ {
+-	bool all = true;
+-
+ 	show_regs_print_info(KERN_DEFAULT);
+ 
+-	if (IS_ENABLED(CONFIG_X86_32))
+-		all = !user_mode(regs);
+-
+-	__show_regs(regs, all);
++	__show_regs(regs, user_mode(regs) ? SHOW_REGS_USER : SHOW_REGS_ALL);
+ 
+ 	/*
+ 	 * When in-kernel, we also print out the stack at the time of the fault..
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 0ae659de21eb..666d1825390d 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -59,7 +59,7 @@
+ #include <asm/intel_rdt_sched.h>
+ #include <asm/proto.h>
+ 
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -85,7 +85,7 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x EFLAGS: %08lx\n",
+ 	       (u16)regs->ds, (u16)regs->es, (u16)regs->fs, gs, ss, regs->flags);
+ 
+-	if (!all)
++	if (mode != SHOW_REGS_ALL)
+ 		return;
+ 
+ 	cr0 = read_cr0();
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 4344a032ebe6..0091a733c1cf 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -62,7 +62,7 @@
+ __visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
+ 
+ /* Prints also some state that isn't saved in the pt_regs */
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -87,9 +87,17 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "R13: %016lx R14: %016lx R15: %016lx\n",
+ 	       regs->r13, regs->r14, regs->r15);
+ 
+-	if (!all)
++	if (mode == SHOW_REGS_SHORT)
+ 		return;
+ 
++	if (mode == SHOW_REGS_USER) {
++		rdmsrl(MSR_FS_BASE, fs);
++		rdmsrl(MSR_KERNEL_GS_BASE, shadowgs);
++		printk(KERN_DEFAULT "FS:  %016lx GS:  %016lx\n",
++		       fs, shadowgs);
++		return;
++	}
++
+ 	asm("movl %%ds,%0" : "=r" (ds));
+ 	asm("movl %%cs,%0" : "=r" (cs));
+ 	asm("movl %%es,%0" : "=r" (es));
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 42f1ba92622a..97d41754769e 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4960,7 +4960,7 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
+ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		       void *insn, int insn_len)
+ {
+-	int r, emulation_type = EMULTYPE_RETRY;
++	int r, emulation_type = 0;
+ 	enum emulation_result er;
+ 	bool direct = vcpu->arch.mmu.direct_map;
+ 
+@@ -4973,10 +4973,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 	r = RET_PF_INVALID;
+ 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
+ 		r = handle_mmio_page_fault(vcpu, cr2, direct);
+-		if (r == RET_PF_EMULATE) {
+-			emulation_type = 0;
++		if (r == RET_PF_EMULATE)
+ 			goto emulate;
+-		}
+ 	}
+ 
+ 	if (r == RET_PF_INVALID) {
+@@ -5003,8 +5001,19 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		return 1;
+ 	}
+ 
+-	if (mmio_info_in_cache(vcpu, cr2, direct))
+-		emulation_type = 0;
++	/*
++	 * vcpu->arch.mmu.page_fault returned RET_PF_EMULATE, but we can still
++	 * optimistically try to just unprotect the page and let the processor
++	 * re-execute the instruction that caused the page fault.  Do not allow
++	 * retrying MMIO emulation, as it's not only pointless but could also
++	 * cause us to enter an infinite loop because the processor will keep
++	 * faulting on the non-existent MMIO address.  Retrying an instruction
++	 * from a nested guest is also pointless and dangerous as we are only
++	 * explicitly shadowing L1's page tables, i.e. unprotecting something
++	 * for L1 isn't going to magically fix whatever issue cause L2 to fail.
++	 */
++	if (!mmio_info_in_cache(vcpu, cr2, direct) && !is_guest_mode(vcpu))
++		emulation_type = EMULTYPE_ALLOW_RETRY;
+ emulate:
+ 	/*
+ 	 * On AMD platforms, under certain conditions insn_len may be zero on #NPF.
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 9799f86388e7..ef772e5634d4 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -3875,8 +3875,8 @@ static int emulate_on_interception(struct vcpu_svm *svm)
+ 
+ static int rsm_interception(struct vcpu_svm *svm)
+ {
+-	return x86_emulate_instruction(&svm->vcpu, 0, 0,
+-				       rsm_ins_bytes, 2) == EMULATE_DONE;
++	return kvm_emulate_instruction_from_buffer(&svm->vcpu,
++					rsm_ins_bytes, 2) == EMULATE_DONE;
+ }
+ 
+ static int rdpmc_interception(struct vcpu_svm *svm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9869bfd0c601..d0c3be353bb6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7539,8 +7539,8 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
+ 		if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+ 			return kvm_skip_emulated_instruction(vcpu);
+ 		else
+-			return x86_emulate_instruction(vcpu, gpa, EMULTYPE_SKIP,
+-						       NULL, 0) == EMULATE_DONE;
++			return emulate_instruction(vcpu, EMULTYPE_SKIP) ==
++								EMULATE_DONE;
+ 	}
+ 
+ 	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 94cd63081471..97fcac34e007 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5810,7 +5810,10 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t cr2,
+ 	gpa_t gpa = cr2;
+ 	kvm_pfn_t pfn;
+ 
+-	if (emulation_type & EMULTYPE_NO_REEXECUTE)
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (!vcpu->arch.mmu.direct_map) {
+@@ -5898,7 +5901,10 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
+ 	 */
+ 	vcpu->arch.last_retry_eip = vcpu->arch.last_retry_addr = 0;
+ 
+-	if (!(emulation_type & EMULTYPE_RETRY))
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (x86_page_table_writing_insn(ctxt))
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index d1f1612672c7..045338ac1667 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -317,8 +317,6 @@ static noinline int vmalloc_fault(unsigned long address)
+ 	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+ 		return -1;
+ 
+-	WARN_ON_ONCE(in_nmi());
+-
+ 	/*
+ 	 * Synchronize this task's top level page-table
+ 	 * with the 'reference' page table.
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 58c6efa9f9a9..9fe5952d117d 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -275,9 +275,9 @@ static void bfqg_and_blkg_get(struct bfq_group *bfqg)
+ 
+ void bfqg_and_blkg_put(struct bfq_group *bfqg)
+ {
+-	bfqg_put(bfqg);
+-
+ 	blkg_put(bfqg_to_blkg(bfqg));
++
++	bfqg_put(bfqg);
+ }
+ 
+ /* @stats = 0 */
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 746a5eac4541..cbaca5a73f2e 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2161,9 +2161,12 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+ 	const int op = bio_op(bio);
+ 
+-	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
++	if (part->policy && op_is_write(op)) {
+ 		char b[BDEVNAME_SIZE];
+ 
++		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++			return false;
++
+ 		WARN_ONCE(1,
+ 		       "generic_make_request: Trying to write "
+ 			"to read-only block-device %s (partno %d)\n",
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index d5f2c21d8531..816923bf874d 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -402,8 +402,6 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 	if (tdepth <= tags->nr_reserved_tags)
+ 		return -EINVAL;
+ 
+-	tdepth -= tags->nr_reserved_tags;
+-
+ 	/*
+ 	 * If we are allowed to grow beyond the original size, allocate
+ 	 * a new set of tags before freeing the old one.
+@@ -423,7 +421,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		if (tdepth > 16 * BLKDEV_MAX_RQ)
+ 			return -EINVAL;
+ 
+-		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth, 0);
++		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth,
++				tags->nr_reserved_tags);
+ 		if (!new)
+ 			return -ENOMEM;
+ 		ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
+@@ -440,7 +439,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		 * Don't need (or can't) update reserved tags here, they
+ 		 * remain static and should never need resizing.
+ 		 */
+-		sbitmap_queue_resize(&tags->bitmap_tags, tdepth);
++		sbitmap_queue_resize(&tags->bitmap_tags,
++				tdepth - tags->nr_reserved_tags);
+ 	}
+ 
+ 	return 0;
+diff --git a/block/partitions/aix.c b/block/partitions/aix.c
+index 007f95eea0e1..903f3ed175d0 100644
+--- a/block/partitions/aix.c
++++ b/block/partitions/aix.c
+@@ -178,7 +178,7 @@ int aix_partition(struct parsed_partitions *state)
+ 	u32 vgda_sector = 0;
+ 	u32 vgda_len = 0;
+ 	int numlvs = 0;
+-	struct pvd *pvd;
++	struct pvd *pvd = NULL;
+ 	struct lv_info {
+ 		unsigned short pps_per_lv;
+ 		unsigned short pps_found;
+@@ -232,10 +232,11 @@ int aix_partition(struct parsed_partitions *state)
+ 				if (lvip[i].pps_per_lv)
+ 					foundlvs += 1;
+ 			}
++			/* pvd loops depend on n[].name and lvip[].pps_per_lv */
++			pvd = alloc_pvd(state, vgda_sector + 17);
+ 		}
+ 		put_dev_sector(sect);
+ 	}
+-	pvd = alloc_pvd(state, vgda_sector + 17);
+ 	if (pvd) {
+ 		int numpps = be16_to_cpu(pvd->pp_count);
+ 		int psn_part1 = be32_to_cpu(pvd->psn_part1);
+@@ -282,10 +283,14 @@ int aix_partition(struct parsed_partitions *state)
+ 				next_lp_ix += 1;
+ 		}
+ 		for (i = 0; i < state->limit; i += 1)
+-			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous)
++			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous) {
++				char tmp[sizeof(n[i].name) + 1]; // null char
++
++				snprintf(tmp, sizeof(tmp), "%s", n[i].name);
+ 				pr_warn("partition %s (%u pp's found) is "
+ 					"not contiguous\n",
+-					n[i].name, lvip[i].pps_found);
++					tmp, lvip[i].pps_found);
++			}
+ 		kfree(pvd);
+ 	}
+ 	kfree(n);
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 9706613eecf9..bf64cfa30feb 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -879,7 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev)
+ #define LPSS_GPIODEF0_DMA_LLP		BIT(13)
+ 
+ static DEFINE_MUTEX(lpss_iosf_mutex);
+-static bool lpss_iosf_d3_entered;
++static bool lpss_iosf_d3_entered = true;
+ 
+ static void lpss_iosf_enter_d3_state(void)
+ {
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 2628806c64a2..3d5277a39097 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -327,6 +327,35 @@ err_no_vma:
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+ 
++
++static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
++		struct vm_area_struct *vma)
++{
++	if (vma)
++		alloc->vma_vm_mm = vma->vm_mm;
++	/*
++	 * If we see alloc->vma is not NULL, buffer data structures set up
++	 * completely. Look at smp_rmb side binder_alloc_get_vma.
++	 * We also want to guarantee new alloc->vma_vm_mm is always visible
++	 * if alloc->vma is set.
++	 */
++	smp_wmb();
++	alloc->vma = vma;
++}
++
++static inline struct vm_area_struct *binder_alloc_get_vma(
++		struct binder_alloc *alloc)
++{
++	struct vm_area_struct *vma = NULL;
++
++	if (alloc->vma) {
++		/* Look at description in binder_alloc_set_vma */
++		smp_rmb();
++		vma = alloc->vma;
++	}
++	return vma;
++}
++
+ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 				struct binder_alloc *alloc,
+ 				size_t data_size,
+@@ -343,7 +372,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	size_t size, data_offsets_size;
+ 	int ret;
+ 
+-	if (alloc->vma == NULL) {
++	if (!binder_alloc_get_vma(alloc)) {
+ 		pr_err("%d: binder_alloc_buf, no vma\n",
+ 		       alloc->pid);
+ 		return ERR_PTR(-ESRCH);
+@@ -714,9 +743,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ 	buffer->free = 1;
+ 	binder_insert_free_buffer(alloc, buffer);
+ 	alloc->free_async_space = alloc->buffer_size / 2;
+-	barrier();
+-	alloc->vma = vma;
+-	alloc->vma_vm_mm = vma->vm_mm;
++	binder_alloc_set_vma(alloc, vma);
+ 	mmgrab(alloc->vma_vm_mm);
+ 
+ 	return 0;
+@@ -743,10 +770,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+ 	int buffers, page_count;
+ 	struct binder_buffer *buffer;
+ 
+-	BUG_ON(alloc->vma);
+-
+ 	buffers = 0;
+ 	mutex_lock(&alloc->mutex);
++	BUG_ON(alloc->vma);
++
+ 	while ((n = rb_first(&alloc->allocated_buffers))) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+ 
+@@ -889,7 +916,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+  */
+ void binder_alloc_vma_close(struct binder_alloc *alloc)
+ {
+-	WRITE_ONCE(alloc->vma, NULL);
++	binder_alloc_set_vma(alloc, NULL);
+ }
+ 
+ /**
+@@ -924,7 +951,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 
+ 	index = page - alloc->pages;
+ 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
+-	vma = alloc->vma;
++	vma = binder_alloc_get_vma(alloc);
+ 	if (vma) {
+ 		if (!mmget_not_zero(alloc->vma_vm_mm))
+ 			goto err_mmget;
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 09620c2ffa0f..704a761f94b2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -2107,7 +2107,7 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	struct ahci_host_priv *hpriv = ap->host->private_data;
+ 	void __iomem *port_mmio = ahci_port_base(ap);
+ 	struct ata_device *dev = ap->link.device;
+-	u32 devslp, dm, dito, mdat, deto;
++	u32 devslp, dm, dito, mdat, deto, dito_conf;
+ 	int rc;
+ 	unsigned int err_mask;
+ 
+@@ -2131,8 +2131,15 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		return;
+ 	}
+ 
+-	/* device sleep was already enabled */
+-	if (devslp & PORT_DEVSLP_ADSE)
++	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
++	dito = devslp_idle_timeout / (dm + 1);
++	if (dito > 0x3ff)
++		dito = 0x3ff;
++
++	dito_conf = (devslp >> PORT_DEVSLP_DITO_OFFSET) & 0x3FF;
++
++	/* device sleep was already enabled and same dito */
++	if ((devslp & PORT_DEVSLP_ADSE) && (dito_conf == dito))
+ 		return;
+ 
+ 	/* set DITO, MDAT, DETO and enable DevSlp, need to stop engine first */
+@@ -2140,11 +2147,6 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	if (rc)
+ 		return;
+ 
+-	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
+-	dito = devslp_idle_timeout / (dm + 1);
+-	if (dito > 0x3ff)
+-		dito = 0x3ff;
+-
+ 	/* Use the nominal value 10 ms if the read MDAT is zero,
+ 	 * the nominal value of DETO is 20 ms.
+ 	 */
+@@ -2162,6 +2164,8 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		deto = 20;
+ 	}
+ 
++	/* Make dito, mdat, deto bits to 0s */
++	devslp &= ~GENMASK_ULL(24, 2);
+ 	devslp |= ((dito << PORT_DEVSLP_DITO_OFFSET) |
+ 		   (mdat << PORT_DEVSLP_MDAT_OFFSET) |
+ 		   (deto << PORT_DEVSLP_DETO_OFFSET) |
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index f5e560188a18..622ab8edc035 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -416,26 +416,24 @@ static ssize_t show_valid_zones(struct device *dev,
+ 	struct zone *default_zone;
+ 	int nid;
+ 
+-	/*
+-	 * The block contains more than one zone can not be offlined.
+-	 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
+-	 */
+-	if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages, &valid_start_pfn, &valid_end_pfn))
+-		return sprintf(buf, "none\n");
+-
+-	start_pfn = valid_start_pfn;
+-	nr_pages = valid_end_pfn - start_pfn;
+-
+ 	/*
+ 	 * Check the existing zone. Make sure that we do that only on the
+ 	 * online nodes otherwise the page_zone is not reliable
+ 	 */
+ 	if (mem->state == MEM_ONLINE) {
++		/*
++		 * The block contains more than one zone can not be offlined.
++		 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
++		 */
++		if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages,
++					  &valid_start_pfn, &valid_end_pfn))
++			return sprintf(buf, "none\n");
++		start_pfn = valid_start_pfn;
+ 		strcat(buf, page_zone(pfn_to_page(start_pfn))->name);
+ 		goto out;
+ 	}
+ 
+-	nid = pfn_to_nid(start_pfn);
++	nid = mem->nid;
+ 	default_zone = zone_for_pfn_range(MMOP_ONLINE_KEEP, nid, start_pfn, nr_pages);
+ 	strcat(buf, default_zone->name);
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 3fb95c8d9fd8..15a5ce5bba3d 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1239,6 +1239,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_SOCK:
+ 		return nbd_add_socket(nbd, arg, false);
+ 	case NBD_SET_BLKSIZE:
++		if (!arg || !is_power_of_2(arg) || arg < 512 ||
++		    arg > PAGE_SIZE)
++			return -EINVAL;
+ 		nbd_size_set(nbd, arg,
+ 			     div_s64(config->bytesize, arg));
+ 		return 0;
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index b3f83cd96f33..01f59be71433 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -67,7 +67,7 @@
+ #include <scsi/scsi.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+-
++#include <linux/nospec.h>
+ #include <linux/uaccess.h>
+ 
+ #define DRIVER_NAME	"pktcdvd"
+@@ -2231,6 +2231,8 @@ static struct pktcdvd_device *pkt_find_dev_from_minor(unsigned int dev_minor)
+ {
+ 	if (dev_minor >= MAX_WRITERS)
+ 		return NULL;
++
++	dev_minor = array_index_nospec(dev_minor, MAX_WRITERS);
+ 	return pkt_devs[dev_minor];
+ }
+ 
+diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
+index f3c643a0473c..5f953ca8ac5b 100644
+--- a/drivers/bluetooth/Kconfig
++++ b/drivers/bluetooth/Kconfig
+@@ -159,6 +159,7 @@ config BT_HCIUART_LL
+ config BT_HCIUART_3WIRE
+ 	bool "Three-wire UART (H5) protocol support"
+ 	depends on BT_HCIUART
++	depends on BT_HCIUART_SERDEV
+ 	help
+ 	  The HCI Three-wire UART Transport Layer makes it possible to
+ 	  user the Bluetooth HCI over a serial port interface. The HCI
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 6116cd05e228..9086edc9066b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -117,7 +117,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	/* Lock the adapter for the duration of the whole sequence. */
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	if (tpm_dev.chip_type == SLB9645) {
+ 		/* use a combined read for newer chips
+@@ -192,7 +192,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	}
+ 
+ out:
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+@@ -224,7 +224,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	/* prepend the 'register address' to the buffer */
+ 	tpm_dev.buf[0] = addr;
+@@ -243,7 +243,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 		usleep_range(sleep_low, sleep_hi);
+ 	}
+ 
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+diff --git a/drivers/char/tpm/tpm_tis_spi.c b/drivers/char/tpm/tpm_tis_spi.c
+index 424ff2fde1f2..9914f6973463 100644
+--- a/drivers/char/tpm/tpm_tis_spi.c
++++ b/drivers/char/tpm/tpm_tis_spi.c
+@@ -199,6 +199,7 @@ static const struct tpm_tis_phy_ops tpm_spi_phy_ops = {
+ static int tpm_tis_spi_probe(struct spi_device *dev)
+ {
+ 	struct tpm_tis_spi_phy *phy;
++	int irq;
+ 
+ 	phy = devm_kzalloc(&dev->dev, sizeof(struct tpm_tis_spi_phy),
+ 			   GFP_KERNEL);
+@@ -211,7 +212,13 @@ static int tpm_tis_spi_probe(struct spi_device *dev)
+ 	if (!phy->iobuf)
+ 		return -ENOMEM;
+ 
+-	return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops,
++	/* If the SPI device has an IRQ then use that */
++	if (dev->irq > 0)
++		irq = dev->irq;
++	else
++		irq = -1;
++
++	return tpm_tis_core_init(&dev->dev, &phy->priv, irq, &tpm_spi_phy_ops,
+ 				 NULL);
+ }
+ 
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index bb2a6f2f5516..a985bf5e1ac6 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -38,7 +38,6 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
+ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				unsigned long *parent_rate)
+ {
+-	int step;
+ 	u64 fmin, fmax, ftmp;
+ 	struct scmi_clk *clk = to_scmi_clk(hw);
+ 
+@@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ftmp = rate - fmin;
+ 	ftmp += clk->info->range.step_size - 1; /* to round up */
+-	step = do_div(ftmp, clk->info->range.step_size);
++	do_div(ftmp, clk->info->range.step_size);
+ 
+-	return step * clk->info->range.step_size + fmin;
++	return ftmp * clk->info->range.step_size + fmin;
+ }
+ 
+ static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
+index fd49b24fd6af..99e2aace8078 100644
+--- a/drivers/dax/pmem.c
++++ b/drivers/dax/pmem.c
+@@ -105,15 +105,19 @@ static int dax_pmem_probe(struct device *dev)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_exit,
+-							&dax_pmem->ref);
+-	if (rc)
++	rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++	if (rc) {
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return rc;
++	}
+ 
+ 	dax_pmem->pgmap.ref = &dax_pmem->ref;
+ 	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
+-	if (IS_ERR(addr))
++	if (IS_ERR(addr)) {
++		devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return PTR_ERR(addr);
++	}
+ 
+ 	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
+ 							&dax_pmem->ref);
+diff --git a/drivers/firmware/google/vpd.c b/drivers/firmware/google/vpd.c
+index e9db895916c3..1aa67bb5d8c0 100644
+--- a/drivers/firmware/google/vpd.c
++++ b/drivers/firmware/google/vpd.c
+@@ -246,6 +246,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
+ 		sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
+ 		kfree(sec->raw_name);
+ 		memunmap(sec->baseaddr);
++		sec->enabled = false;
+ 	}
+ 
+ 	return 0;
+@@ -279,8 +280,10 @@ static int vpd_sections_init(phys_addr_t physaddr)
+ 		ret = vpd_section_init("rw", &rw_vpd,
+ 				       physaddr + sizeof(struct vpd_cbmem) +
+ 				       header.ro_size, header.rw_size);
+-		if (ret)
++		if (ret) {
++			vpd_section_destroy(&ro_vpd);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpio/gpio-ml-ioh.c b/drivers/gpio/gpio-ml-ioh.c
+index b23d9a36be1f..51c7d1b84c2e 100644
+--- a/drivers/gpio/gpio-ml-ioh.c
++++ b/drivers/gpio/gpio-ml-ioh.c
+@@ -496,9 +496,10 @@ static int ioh_gpio_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_gpiochip_add:
++	chip = chip_save;
+ 	while (--i >= 0) {
+-		chip--;
+ 		gpiochip_remove(&chip->gpio);
++		chip++;
+ 	}
+ 	kfree(chip_save);
+ 
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 1e66f808051c..2e33fd552899 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -241,6 +241,17 @@ int pxa_irq_to_gpio(int irq)
+ 	return irq_gpio0;
+ }
+ 
++static bool pxa_gpio_has_pinctrl(void)
++{
++	switch (gpio_type) {
++	case PXA3XX_GPIO:
++		return false;
++
++	default:
++		return true;
++	}
++}
++
+ static int pxa_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
+ {
+ 	struct pxa_gpio_chip *pchip = chip_to_pxachip(chip);
+@@ -255,9 +266,11 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	ret = pinctrl_gpio_direction_input(chip->base + offset);
+-	if (!ret)
+-		return 0;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_input(chip->base + offset);
++		if (!ret)
++			return 0;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -282,9 +295,11 @@ static int pxa_gpio_direction_output(struct gpio_chip *chip,
+ 
+ 	writel_relaxed(mask, base + (value ? GPSR_OFFSET : GPCR_OFFSET));
+ 
+-	ret = pinctrl_gpio_direction_output(chip->base + offset);
+-	if (ret)
+-		return ret;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_output(chip->base + offset);
++		if (ret)
++			return ret;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -348,8 +363,12 @@ static int pxa_init_gpio_chip(struct pxa_gpio_chip *pchip, int ngpio,
+ 	pchip->chip.set = pxa_gpio_set;
+ 	pchip->chip.to_irq = pxa_gpio_to_irq;
+ 	pchip->chip.ngpio = ngpio;
+-	pchip->chip.request = gpiochip_generic_request;
+-	pchip->chip.free = gpiochip_generic_free;
++
++	if (pxa_gpio_has_pinctrl()) {
++		pchip->chip.request = gpiochip_generic_request;
++		pchip->chip.free = gpiochip_generic_free;
++	}
++
+ #ifdef CONFIG_OF_GPIO
+ 	pchip->chip.of_node = np;
+ 	pchip->chip.of_xlate = pxa_gpio_of_xlate;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index 94396caaca75..d5d79727c55d 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -720,4 +720,4 @@ static int __init tegra_gpio_init(void)
+ {
+ 	return platform_driver_register(&tegra_gpio_driver);
+ }
+-postcore_initcall(tegra_gpio_init);
++subsys_initcall(tegra_gpio_init);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+index a576b8bbb3cd..dea40b322191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+@@ -150,7 +150,7 @@ static void dce_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	}
+ }
+ 
+-static void dce_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -261,6 +261,8 @@ static void dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	return true;
+ }
+ 
+ static bool dce_is_dmcu_initialized(struct dmcu *dmcu)
+@@ -545,24 +547,25 @@ static void dcn10_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	 *  least a few frames. Should never hit the max retry assert below.
+ 	 */
+ 	if (wait == true) {
+-	for (retryCount = 0; retryCount <= 1000; retryCount++) {
+-		dcn10_get_dmcu_psr_state(dmcu, &psr_state);
+-		if (enable) {
+-			if (psr_state != 0)
+-				break;
+-		} else {
+-			if (psr_state == 0)
+-				break;
++		for (retryCount = 0; retryCount <= 1000; retryCount++) {
++			dcn10_get_dmcu_psr_state(dmcu, &psr_state);
++			if (enable) {
++				if (psr_state != 0)
++					break;
++			} else {
++				if (psr_state == 0)
++					break;
++			}
++			udelay(500);
+ 		}
+-		udelay(500);
+-	}
+ 
+-	/* assert if max retry hit */
+-	ASSERT(retryCount <= 1000);
++		/* assert if max retry hit */
++		if (retryCount >= 1000)
++			ASSERT(0);
+ 	}
+ }
+ 
+-static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -577,7 +580,7 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* If microcontroller is not running, do nothing */
+ 	if (dmcu->dmcu_state != DMCU_RUNNING)
+-		return;
++		return false;
+ 
+ 	link->link_enc->funcs->psr_program_dp_dphy_fast_training(link->link_enc,
+ 			psr_context->psrExitLinkTrainingRequired);
+@@ -677,6 +680,11 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	/* waitDMCUReadyForCmd */
++	REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000);
++
++	return true;
+ }
+ 
+ static void dcn10_psr_wait_loop(
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+index de60f940030d..4550747fb61c 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+@@ -48,7 +48,7 @@ struct dmcu_funcs {
+ 			const char *src,
+ 			unsigned int bytes);
+ 	void (*set_psr_enable)(struct dmcu *dmcu, bool enable, bool wait);
+-	void (*setup_psr)(struct dmcu *dmcu,
++	bool (*setup_psr)(struct dmcu *dmcu,
+ 			struct dc_link *link,
+ 			struct psr_context *psr_context);
+ 	void (*get_psr_state)(struct dmcu *dmcu, uint32_t *psr_state);
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index 48685cddbad1..c73bd003f845 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -1401,6 +1401,8 @@ static int ipu_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	ipu->id = of_alias_get_id(np, "ipu");
++	if (ipu->id < 0)
++		ipu->id = 0;
+ 
+ 	if (of_device_is_compatible(np, "fsl,imx6qp-ipu") &&
+ 	    IS_ENABLED(CONFIG_DRM)) {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c7981ddd8776..e80bcd71fe1e 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -528,6 +528,7 @@
+ 
+ #define I2C_VENDOR_ID_RAYD		0x2386
+ #define I2C_PRODUCT_ID_RAYD_3118	0x3118
++#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+ 
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index ab93dd5927c3..b23c4b5854d8 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1579,6 +1579,7 @@ static struct hid_input *hidinput_allocate(struct hid_device *hid,
+ 	input_dev->dev.parent = &hid->dev;
+ 
+ 	hidinput->input = input_dev;
++	hidinput->application = application;
+ 	list_add_tail(&hidinput->list, &hid->inputs);
+ 
+ 	INIT_LIST_HEAD(&hidinput->reports);
+@@ -1674,8 +1675,7 @@ static struct hid_input *hidinput_match_application(struct hid_report *report)
+ 	struct hid_input *hidinput;
+ 
+ 	list_for_each_entry(hidinput, &hid->inputs, list) {
+-		if (hidinput->report &&
+-		    hidinput->report->application == report->application)
++		if (hidinput->application == report->application)
+ 			return hidinput;
+ 	}
+ 
+@@ -1812,6 +1812,7 @@ void hidinput_disconnect(struct hid_device *hid)
+ 			input_unregister_device(hidinput->input);
+ 		else
+ 			input_free_device(hidinput->input);
++		kfree(hidinput->name);
+ 		kfree(hidinput);
+ 	}
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 45968f7970f8..15c934ef6b18 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1167,7 +1167,8 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 				     struct hid_usage *usage,
+ 				     enum latency_mode latency,
+ 				     bool surface_switch,
+-				     bool button_switch)
++				     bool button_switch,
++				     bool *inputmode_found)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hdev);
+ 	struct mt_class *cls = &td->mtclass;
+@@ -1179,6 +1180,14 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 
+ 	switch (usage->hid) {
+ 	case HID_DG_INPUTMODE:
++		/*
++		 * Some elan panels wrongly declare 2 input mode features,
++		 * and silently ignore when we set the value in the second
++		 * field. Skip the second feature and hope for the best.
++		 */
++		if (*inputmode_found)
++			return false;
++
+ 		if (cls->quirks & MT_QUIRK_FORCE_GET_FEATURE) {
+ 			report_len = hid_report_len(report);
+ 			buf = hid_alloc_report_buf(report, GFP_KERNEL);
+@@ -1194,6 +1203,7 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 		}
+ 
+ 		field->value[index] = td->inputmode_value;
++		*inputmode_found = true;
+ 		return true;
+ 
+ 	case HID_DG_CONTACTMAX:
+@@ -1231,6 +1241,7 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 	struct hid_usage *usage;
+ 	int i, j;
+ 	bool update_report;
++	bool inputmode_found = false;
+ 
+ 	rep_enum = &hdev->report_enum[HID_FEATURE_REPORT];
+ 	list_for_each_entry(rep, &rep_enum->report_list, list) {
+@@ -1249,7 +1260,8 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 							     usage,
+ 							     latency,
+ 							     surface_switch,
+-							     button_switch))
++							     button_switch,
++							     &inputmode_found))
+ 					update_report = true;
+ 			}
+ 		}
+@@ -1476,6 +1488,9 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	 */
+ 	hdev->quirks |= HID_QUIRK_INPUT_PER_APP;
+ 
++	if (id->group != HID_GROUP_MULTITOUCH_WIN_8)
++		hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++
+ 	timer_setup(&td->release_timer, mt_expired_timeout, 0);
+ 
+ 	ret = hid_parse(hdev);
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index eae0cb3ddec6..5fd1159fc095 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -174,6 +174,8 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
++		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 658dc765753b..553adccb05d7 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -242,6 +242,10 @@ int hv_synic_alloc(void)
+ 
+ 	return 0;
+ err:
++	/*
++	 * Any memory allocations that succeeded will be freed when
++	 * the caller cleans up by calling hv_synic_free()
++	 */
+ 	return -ENOMEM;
+ }
+ 
+@@ -254,12 +258,10 @@ void hv_synic_free(void)
+ 		struct hv_per_cpu_context *hv_cpu
+ 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
+-		if (hv_cpu->synic_event_page)
+-			free_page((unsigned long)hv_cpu->synic_event_page);
+-		if (hv_cpu->synic_message_page)
+-			free_page((unsigned long)hv_cpu->synic_message_page);
+-		if (hv_cpu->post_msg_page)
+-			free_page((unsigned long)hv_cpu->post_msg_page);
++		kfree(hv_cpu->clk_evt);
++		free_page((unsigned long)hv_cpu->synic_event_page);
++		free_page((unsigned long)hv_cpu->synic_message_page);
++		free_page((unsigned long)hv_cpu->post_msg_page);
+ 	}
+ 
+ 	kfree(hv_context.hv_numa_map);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 60e4d0e939a3..715b6fdb4989 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -868,7 +868,7 @@ static int aspeed_i2c_probe_bus(struct platform_device *pdev)
+ 	if (!match)
+ 		bus->get_clk_reg_val = aspeed_i2c_24xx_get_clk_reg_val;
+ 	else
+-		bus->get_clk_reg_val = match->data;
++		bus->get_clk_reg_val = (u32 (*)(u32))match->data;
+ 
+ 	/* Initialize the I2C adapter */
+ 	spin_lock_init(&bus->lock);
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index aa726607645e..45fcf0c37a9e 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -139,6 +139,7 @@
+ 
+ #define SBREG_BAR		0x10
+ #define SBREG_SMBCTRL		0xc6000c
++#define SBREG_SMBCTRL_DNV	0xcf000c
+ 
+ /* Host status bits for SMBPCISTS */
+ #define SMBPCISTS_INTS		BIT(3)
+@@ -1396,7 +1397,11 @@ static void i801_add_tco(struct i801_priv *priv)
+ 	spin_unlock(&p2sb_spinlock);
+ 
+ 	res = &tco_res[ICH_RES_MEM_OFF];
+-	res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++	if (pci_dev->device == PCI_DEVICE_ID_INTEL_DNV_SMBUS)
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL_DNV;
++	else
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++
+ 	res->end = res->start + 3;
+ 	res->flags = IORESOURCE_MEM;
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 9a71e50d21f1..0c51c0ffdda9 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -532,6 +532,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ {
+ 	u8 rx_watermark;
+ 	struct i2c_msg *msg = i2c->rx_msg = i2c->tx_msg;
++	unsigned long flags;
+ 
+ 	/* Clear and enable Rx full interrupt. */
+ 	xiic_irq_clr_en(i2c, XIIC_INTR_RX_FULL_MASK | XIIC_INTR_TX_ERROR_MASK);
+@@ -547,6 +548,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 		rx_watermark = IIC_RX_FIFO_DEPTH;
+ 	xiic_setreg8(i2c, XIIC_RFD_REG_OFFSET, rx_watermark - 1);
+ 
++	local_irq_save(flags);
+ 	if (!(msg->flags & I2C_M_NOSTART))
+ 		/* write the address */
+ 		xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+@@ -556,6 +558,8 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 
+ 	xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+ 		msg->len | ((i2c->nmsgs == 1) ? XIIC_TX_DYN_STOP_MASK : 0));
++	local_irq_restore(flags);
++
+ 	if (i2c->nmsgs == 1)
+ 		/* very last, enable bus not busy as well */
+ 		xiic_irq_clr_en(i2c, XIIC_INTR_BNB_MASK);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index bff10ab141b0..dafcb6f019b3 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1445,9 +1445,16 @@ static bool cma_match_net_dev(const struct rdma_cm_id *id,
+ 		       (addr->src_addr.ss_family == AF_IB ||
+ 			rdma_protocol_roce(id->device, port_num));
+ 
+-	return !addr->dev_addr.bound_dev_if ||
+-	       (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
+-		addr->dev_addr.bound_dev_if == net_dev->ifindex);
++	/*
++	 * Net namespaces must match, and if the listener is listening
++	 * on a specific netdevice then the netdevice must match as well.
++	 */
++	if (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
++	    (!!addr->dev_addr.bound_dev_if ==
++	     (addr->dev_addr.bound_dev_if == net_dev->ifindex)))
++		return true;
++	else
++		return false;
+ }
+ 
+ static struct rdma_id_private *cma_find_listener(
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 63b5b3edabcb..8dc336a85128 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -494,6 +494,9 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ 			step_idx = 1;
+ 		} else if (hop_num == HNS_ROCE_HOP_NUM_0) {
+ 			step_idx = 0;
++		} else {
++			ret = -EINVAL;
++			goto err_dma_alloc_l1;
+ 		}
+ 
+ 		/* set HEM base address to hardware */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index a6e11be0ea0f..c00925ed9da8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -273,7 +273,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				ud_sq_wqe->immtdata = wr->ex.imm_data;
++				ud_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			default:
+ 				ud_sq_wqe->immtdata = 0;
+@@ -371,7 +372,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				rc_sq_wqe->immtdata = wr->ex.imm_data;
++				rc_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			case IB_WR_SEND_WITH_INV:
+ 				rc_sq_wqe->inv_key =
+@@ -1931,7 +1933,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_RDMA_WRITE_IMM:
+ 			wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND:
+ 			wc->opcode = IB_WC_RECV;
+@@ -1940,7 +1943,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_IMM:
+ 			wc->opcode = IB_WC_RECV;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_INV:
+ 			wc->opcode = IB_WC_RECV;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index d47675f365c7..7e2c740e0df5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -768,7 +768,7 @@ struct hns_roce_v2_cqe {
+ 	__le32	byte_4;
+ 	union {
+ 		__le32 rkey;
+-		__be32 immtdata;
++		__le32 immtdata;
+ 	};
+ 	__le32	byte_12;
+ 	__le32	byte_16;
+@@ -926,7 +926,7 @@ struct hns_roce_v2_cq_db {
+ struct hns_roce_v2_ud_send_wqe {
+ 	__le32	byte_4;
+ 	__le32	msg_len;
+-	__be32	immtdata;
++	__le32	immtdata;
+ 	__le32	byte_16;
+ 	__le32	byte_20;
+ 	__le32	byte_24;
+@@ -1012,7 +1012,7 @@ struct hns_roce_v2_rc_send_wqe {
+ 	__le32		msg_len;
+ 	union {
+ 		__le32  inv_key;
+-		__be32  immtdata;
++		__le32  immtdata;
+ 	};
+ 	__le32		byte_16;
+ 	__le32		byte_20;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 6709328d90f8..c7e034963738 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -822,6 +822,7 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
+ 			if (neigh && list_empty(&neigh->list)) {
+ 				kref_get(&mcast->ah->ref);
+ 				neigh->ah	= mcast->ah;
++				neigh->ah->valid = 1;
+ 				list_add_tail(&neigh->list, &mcast->neigh_list);
+ 			}
+ 		}
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index 54fe190fd4bc..48c5ccab00a0 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -1658,10 +1658,11 @@ static int mxt_parse_object_table(struct mxt_data *data,
+ 			break;
+ 		case MXT_TOUCH_MULTI_T9:
+ 			data->multitouch = MXT_TOUCH_MULTI_T9;
++			/* Only handle messages from first T9 instance */
+ 			data->T9_reportid_min = min_id;
+-			data->T9_reportid_max = max_id;
+-			data->num_touchids = object->num_report_ids
+-						* mxt_obj_instances(object);
++			data->T9_reportid_max = min_id +
++						object->num_report_ids - 1;
++			data->num_touchids = object->num_report_ids;
+ 			break;
+ 		case MXT_SPT_MESSAGECOUNT_T44:
+ 			data->T44_address = object->start_address;
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 1d647104bccc..b73c6a7bf7f2 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -24,6 +24,7 @@
+ #include <linux/acpi_iort.h>
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
++#include <linux/crash_dump.h>
+ #include <linux/delay.h>
+ #include <linux/dma-iommu.h>
+ #include <linux/err.h>
+@@ -2211,8 +2212,12 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+ 	reg &= ~clr;
+ 	reg |= set;
+ 	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+-	return readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+-					  1, ARM_SMMU_POLL_TIMEOUT_US);
++	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
++					 1, ARM_SMMU_POLL_TIMEOUT_US);
++
++	if (ret)
++		dev_err(smmu->dev, "GBPA not responding to update\n");
++	return ret;
+ }
+ 
+ static void arm_smmu_free_msis(void *data)
+@@ -2392,8 +2397,15 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 
+ 	/* Clear CR0 and sync (disables SMMU and queue processing) */
+ 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+-	if (reg & CR0_SMMUEN)
++	if (reg & CR0_SMMUEN) {
++		if (is_kdump_kernel()) {
++			arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
++			arm_smmu_device_disable(smmu);
++			return -EBUSY;
++		}
++
+ 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
++	}
+ 
+ 	ret = arm_smmu_device_disable(smmu);
+ 	if (ret)
+@@ -2491,10 +2503,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 		enables |= CR0_SMMUEN;
+ 	} else {
+ 		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+-		if (ret) {
+-			dev_err(smmu->dev, "GBPA not responding to update\n");
++		if (ret)
+ 			return ret;
+-		}
+ 	}
+ 	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ 				      ARM_SMMU_CR0ACK);
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 09b47260c74b..feb1664815b7 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
+ 	struct io_pgtable_ops *iop;
+ 
+ 	unsigned int context_id;
+-	spinlock_t lock;			/* Protects mappings */
++	struct mutex mutex;			/* Protects mappings */
+ };
+ 
+ static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
+@@ -595,7 +595,7 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
+ 	if (!domain)
+ 		return NULL;
+ 
+-	spin_lock_init(&domain->lock);
++	mutex_init(&domain->mutex);
+ 
+ 	return &domain->io_domain;
+ }
+@@ -641,7 +641,6 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+ 	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
+ 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
+-	unsigned long flags;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+@@ -650,7 +649,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 		return -ENXIO;
+ 	}
+ 
+-	spin_lock_irqsave(&domain->lock, flags);
++	mutex_lock(&domain->mutex);
+ 
+ 	if (!domain->mmu) {
+ 		/* The domain hasn't been used yet, initialize it. */
+@@ -674,7 +673,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	} else
+ 		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
+ 
+-	spin_unlock_irqrestore(&domain->lock, flags);
++	mutex_unlock(&domain->mutex);
+ 
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 25c1ce811053..1fdd09ebb3f1 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -534,8 +534,9 @@ init_pmu(void)
+ 	int timeout;
+ 	struct adb_request req;
+ 
+-	out_8(&via[B], via[B] | TREQ);			/* negate TREQ */
+-	out_8(&via[DIRB], (via[DIRB] | TREQ) & ~TACK);	/* TACK in, TREQ out */
++	/* Negate TREQ. Set TACK to input and TREQ to output. */
++	out_8(&via[B], in_8(&via[B]) | TREQ);
++	out_8(&via[DIRB], (in_8(&via[DIRB]) | TREQ) & ~TACK);
+ 
+ 	pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask);
+ 	timeout =  100000;
+@@ -1418,8 +1419,8 @@ pmu_sr_intr(void)
+ 	struct adb_request *req;
+ 	int bite = 0;
+ 
+-	if (via[B] & TREQ) {
+-		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", via[B]);
++	if (in_8(&via[B]) & TREQ) {
++		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", in_8(&via[B]));
+ 		out_8(&via[IFR], SR_INT);
+ 		return NULL;
+ 	}
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index ce14a3d1f609..44df244807e5 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -2250,7 +2250,7 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		{0, 2, "Invalid number of cache feature arguments"},
+ 	};
+ 
+-	int r;
++	int r, mode_ctr = 0;
+ 	unsigned argc;
+ 	const char *arg;
+ 	struct cache_features *cf = &ca->features;
+@@ -2264,14 +2264,20 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 	while (argc--) {
+ 		arg = dm_shift_arg(as);
+ 
+-		if (!strcasecmp(arg, "writeback"))
++		if (!strcasecmp(arg, "writeback")) {
+ 			cf->io_mode = CM_IO_WRITEBACK;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "writethrough"))
++		else if (!strcasecmp(arg, "writethrough")) {
+ 			cf->io_mode = CM_IO_WRITETHROUGH;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "passthrough"))
++		else if (!strcasecmp(arg, "passthrough")) {
+ 			cf->io_mode = CM_IO_PASSTHROUGH;
++			mode_ctr++;
++		}
+ 
+ 		else if (!strcasecmp(arg, "metadata2"))
+ 			cf->metadata_version = 2;
+@@ -2282,6 +2288,11 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		}
+ 	}
+ 
++	if (mode_ctr > 1) {
++		*error = "Duplicate cache io_mode features requested";
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2031506a0ecd..49107c52c8e6 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4521,6 +4521,12 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
+ 			s->failed++;
+ 			if (rdev && !test_bit(Faulty, &rdev->flags))
+ 				do_recovery = 1;
++			else if (!rdev) {
++				rdev = rcu_dereference(
++				    conf->disks[i].replacement);
++				if (rdev && !test_bit(Faulty, &rdev->flags))
++					do_recovery = 1;
++			}
+ 		}
+ 
+ 		if (test_bit(R5_InJournal, &dev->flags))
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index a0d0b53c91d7..a5de65dcf784 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -897,7 +897,10 @@ static int helene_x_pon(struct helene_priv *priv)
+ 	helene_write_regs(priv, 0x99, cdata, sizeof(cdata));
+ 
+ 	/* 0x81 - 0x94 */
+-	data[0] = 0x18; /* xtal 24 MHz */
++	if (priv->xtal == SONY_HELENE_XTAL_16000)
++		data[0] = 0x10; /* xtal 16 MHz */
++	else
++		data[0] = 0x18; /* xtal 24 MHz */
+ 	data[1] = (uint8_t)(0x80 | (0x04 & 0x1F)); /* 4 x 25 = 100uA */
+ 	data[2] = (uint8_t)(0x80 | (0x26 & 0x7F)); /* 38 x 0.25 = 9.5pF */
+ 	data[3] = 0x80; /* REFOUT signal output 500mVpp */
+diff --git a/drivers/media/platform/davinci/vpif_display.c b/drivers/media/platform/davinci/vpif_display.c
+index 7be636237acf..0f324055cc9f 100644
+--- a/drivers/media/platform/davinci/vpif_display.c
++++ b/drivers/media/platform/davinci/vpif_display.c
+@@ -1114,6 +1114,14 @@ vpif_init_free_channel_objects:
+ 	return err;
+ }
+ 
++static void free_vpif_objs(void)
++{
++	int i;
++
++	for (i = 0; i < VPIF_DISPLAY_MAX_DEVICES; i++)
++		kfree(vpif_obj.dev[i]);
++}
++
+ static int vpif_async_bound(struct v4l2_async_notifier *notifier,
+ 			    struct v4l2_subdev *subdev,
+ 			    struct v4l2_async_subdev *asd)
+@@ -1255,11 +1263,6 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!pdev->dev.platform_data) {
+-		dev_warn(&pdev->dev, "Missing platform data.  Giving up.\n");
+-		return -EINVAL;
+-	}
+-
+ 	vpif_dev = &pdev->dev;
+ 	err = initialize_vpif();
+ 
+@@ -1271,7 +1274,7 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 	err = v4l2_device_register(vpif_dev, &vpif_obj.v4l2_dev);
+ 	if (err) {
+ 		v4l2_err(vpif_dev->driver, "Error registering v4l2 device\n");
+-		return err;
++		goto vpif_free;
+ 	}
+ 
+ 	while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, res_idx))) {
+@@ -1314,7 +1317,10 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 			if (vpif_obj.sd[i])
+ 				vpif_obj.sd[i]->grp_id = 1 << i;
+ 		}
+-		vpif_probe_complete();
++		err = vpif_probe_complete();
++		if (err) {
++			goto probe_subdev_out;
++		}
+ 	} else {
+ 		vpif_obj.notifier.subdevs = vpif_obj.config->asd;
+ 		vpif_obj.notifier.num_subdevs = vpif_obj.config->asd_sizes[0];
+@@ -1334,6 +1340,8 @@ probe_subdev_out:
+ 	kfree(vpif_obj.sd);
+ vpif_unregister:
+ 	v4l2_device_unregister(&vpif_obj.v4l2_dev);
++vpif_free:
++	free_vpif_objs();
+ 
+ 	return err;
+ }
+@@ -1355,8 +1363,8 @@ static int vpif_remove(struct platform_device *device)
+ 		ch = vpif_obj.dev[i];
+ 		/* Unregister video device */
+ 		video_unregister_device(&ch->video_dev);
+-		kfree(vpif_obj.dev[i]);
+ 	}
++	free_vpif_objs();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss-8x16/camss-csid.c b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+index 226f36ef7419..2bf65805f2c1 100644
+--- a/drivers/media/platform/qcom/camss-8x16/camss-csid.c
++++ b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+@@ -392,9 +392,6 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 		    !media_entity_remote_pad(&csid->pads[MSM_CSID_PAD_SINK]))
+ 			return -ENOLINK;
+ 
+-		dt = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SRC].code)->
+-								data_type;
+-
+ 		if (tg->enabled) {
+ 			/* Config Test Generator */
+ 			struct v4l2_mbus_framefmt *f =
+@@ -416,6 +413,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_0(0));
+ 
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->data_type;
++
+ 			/* 5:0 data type */
+ 			val = dt;
+ 			writel_relaxed(val, csid->base +
+@@ -425,6 +425,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			val = tg->payload_mode;
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_2(0));
++
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->decode_format;
+ 		} else {
+ 			struct csid_phy_config *phy = &csid->phy;
+ 
+@@ -439,13 +442,16 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 			writel_relaxed(val,
+ 				       csid->base + CAMSS_CSID_CORE_CTRL_1);
++
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->data_type;
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->decode_format;
+ 		}
+ 
+ 		/* Config LUT */
+ 
+ 		dt_shift = (cid % 4) * 8;
+-		df = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SINK].code)->
+-								decode_format;
+ 
+ 		val = readl_relaxed(csid->base + CAMSS_CSID_CID_LUT_VC_n(vc));
+ 		val &= ~(0xff << dt_shift);
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index daef72d410a3..dc5ae8025832 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -339,6 +339,7 @@ enum rcar_csi2_pads {
+ 
+ struct rcar_csi2_info {
+ 	int (*init_phtw)(struct rcar_csi2 *priv, unsigned int mbps);
++	int (*confirm_start)(struct rcar_csi2 *priv);
+ 	const struct rcsi2_mbps_reg *hsfreqrange;
+ 	unsigned int csi0clkfreqrange;
+ 	bool clear_ulps;
+@@ -545,6 +546,13 @@ static int rcsi2_start(struct rcar_csi2 *priv)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Confirm start */
++	if (priv->info->confirm_start) {
++		ret = priv->info->confirm_start(priv);
++		if (ret)
++			return ret;
++	}
++
+ 	/* Clear Ultra Low Power interrupt. */
+ 	if (priv->info->clear_ulps)
+ 		rcsi2_write(priv, INTSTATE_REG,
+@@ -880,6 +888,11 @@ static int rcsi2_init_phtw_h3_v3h_m3n(struct rcar_csi2 *priv, unsigned int mbps)
+ }
+ 
+ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
++{
++	return rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
++}
++
++static int rcsi2_confirm_start_v3m_e3(struct rcar_csi2 *priv)
+ {
+ 	static const struct phtw_value step1[] = {
+ 		{ .data = 0xed, .code = 0x34 },
+@@ -890,12 +903,6 @@ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
+ 		{ /* sentinel */ },
+ 	};
+ 
+-	int ret;
+-
+-	ret = rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
+-	if (ret)
+-		return ret;
+-
+ 	return rcsi2_phtw_write_array(priv, step1);
+ }
+ 
+@@ -949,6 +956,7 @@ static const struct rcar_csi2_info rcar_csi2_info_r8a77965 = {
+ 
+ static const struct rcar_csi2_info rcar_csi2_info_r8a77970 = {
+ 	.init_phtw = rcsi2_init_phtw_v3m_e3,
++	.confirm_start = rcsi2_confirm_start_v3m_e3,
+ };
+ 
+ static const struct of_device_id rcar_csi2_of_table[] = {
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index a80251ed3143..780548dd650e 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -254,24 +254,24 @@ static void s5p_mfc_handle_frame_all_extracted(struct s5p_mfc_ctx *ctx)
+ static void s5p_mfc_handle_frame_copy_time(struct s5p_mfc_ctx *ctx)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+-	struct s5p_mfc_buf  *dst_buf, *src_buf;
+-	size_t dec_y_addr;
++	struct s5p_mfc_buf *dst_buf, *src_buf;
++	u32 dec_y_addr;
+ 	unsigned int frame_type;
+ 
+ 	/* Make sure we actually have a new frame before continuing. */
+ 	frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
+ 	if (frame_type == S5P_FIMV_DECODE_FRAME_SKIPPED)
+ 		return;
+-	dec_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
++	dec_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
+ 
+ 	/* Copy timestamp / timecode from decoded src to dst and set
+ 	   appropriate flags. */
+ 	src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dec_y_addr) {
+-			dst_buf->b->timecode =
+-						src_buf->b->timecode;
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
++		if (addr == dec_y_addr) {
++			dst_buf->b->timecode = src_buf->b->timecode;
+ 			dst_buf->b->vb2_buf.timestamp =
+ 						src_buf->b->vb2_buf.timestamp;
+ 			dst_buf->b->flags &=
+@@ -307,10 +307,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+ 	struct s5p_mfc_buf  *dst_buf;
+-	size_t dspl_y_addr;
++	u32 dspl_y_addr;
+ 	unsigned int frame_type;
+ 
+-	dspl_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
++	dspl_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
+ 	if (IS_MFCV6_PLUS(dev))
+ 		frame_type = s5p_mfc_hw_call(dev->mfc_ops,
+ 			get_disp_frame_type, ctx);
+@@ -329,9 +329,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ 	/* The MFC returns address of the buffer, now we have to
+ 	 * check which videobuf does it correspond to */
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
+ 		/* Check if this is the buffer we're looking for */
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dspl_y_addr) {
++		if (addr == dspl_y_addr) {
+ 			list_del(&dst_buf->list);
+ 			ctx->dst_queue_cnt--;
+ 			dst_buf->b->sequence = ctx->sequence;
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 0d4fdd34a710..9ce8b4d79d1f 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -2101,14 +2101,12 @@ static struct dvb_usb_device_properties s6x0_properties = {
+ 	}
+ };
+ 
+-static struct dvb_usb_device_properties *p1100;
+ static const struct dvb_usb_device_description d1100 = {
+ 	"Prof 1100 USB ",
+ 	{&dw2102_table[PROF_1100], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s660;
+ static const struct dvb_usb_device_description d660 = {
+ 	"TeVii S660 USB",
+ 	{&dw2102_table[TEVII_S660], NULL},
+@@ -2127,14 +2125,12 @@ static const struct dvb_usb_device_description d480_2 = {
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *p7500;
+ static const struct dvb_usb_device_description d7500 = {
+ 	"Prof 7500 USB DVB-S2",
+ 	{&dw2102_table[PROF_7500], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s421;
+ static const struct dvb_usb_device_description d421 = {
+ 	"TeVii S421 PCI",
+ 	{&dw2102_table[TEVII_S421], NULL},
+@@ -2334,6 +2330,11 @@ static int dw2102_probe(struct usb_interface *intf,
+ 		const struct usb_device_id *id)
+ {
+ 	int retval = -ENOMEM;
++	struct dvb_usb_device_properties *p1100;
++	struct dvb_usb_device_properties *s660;
++	struct dvb_usb_device_properties *p7500;
++	struct dvb_usb_device_properties *s421;
++
+ 	p1100 = kmemdup(&s6x0_properties,
+ 			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+ 	if (!p1100)
+@@ -2402,8 +2403,16 @@ static int dw2102_probe(struct usb_interface *intf,
+ 	    0 == dvb_usb_device_init(intf, &t220_properties,
+ 			 THIS_MODULE, NULL, adapter_nr) ||
+ 	    0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
+-			 THIS_MODULE, NULL, adapter_nr))
++			 THIS_MODULE, NULL, adapter_nr)) {
++
++		/* clean up copied properties */
++		kfree(s421);
++		kfree(p7500);
++		kfree(s660);
++		kfree(p1100);
++
+ 		return 0;
++	}
+ 
+ 	retval = -ENODEV;
+ 	kfree(s421);
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 6c8438311d3b..ff5e41ac4723 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3376,7 +3376,9 @@ void em28xx_free_device(struct kref *ref)
+ 	if (!dev->disconnected)
+ 		em28xx_release_resources(dev);
+ 
+-	kfree(dev->alt_max_pkt_size_isoc);
++	if (dev->ts == PRIMARY_TS)
++		kfree(dev->alt_max_pkt_size_isoc);
++
+ 	kfree(dev);
+ }
+ EXPORT_SYMBOL_GPL(em28xx_free_device);
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index f70845e7d8c6..45b24776a695 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -655,12 +655,12 @@ int em28xx_capture_start(struct em28xx *dev, int start)
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS1_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS1_CAPTURE_ENABLE);
++						   EM2874_TS1_CAPTURE_ENABLE | EM2874_TS1_FILTER_ENABLE | EM2874_TS1_NULL_DISCARD);
+ 		else
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS2_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS2_CAPTURE_ENABLE);
++						   EM2874_TS2_CAPTURE_ENABLE | EM2874_TS2_FILTER_ENABLE | EM2874_TS2_NULL_DISCARD);
+ 	} else {
+ 		/* FIXME: which is the best order? */
+ 		/* video registers are sampled by VREF */
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index b778d8a1983e..a73faf12f7e4 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -218,7 +218,9 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ 		dvb_alt = dev->dvb_alt_isoc;
+ 	}
+ 
+-	usb_set_interface(udev, dev->ifnum, dvb_alt);
++	if (!dev->board.has_dual_ts)
++		usb_set_interface(udev, dev->ifnum, dvb_alt);
++
+ 	rc = em28xx_set_mode(dev, EM28XX_DIGITAL_MODE);
+ 	if (rc < 0)
+ 		return rc;
+diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c
+index 31112f622b88..475e5b3790ed 100644
+--- a/drivers/memory/ti-aemif.c
++++ b/drivers/memory/ti-aemif.c
+@@ -411,7 +411,7 @@ static int aemif_probe(struct platform_device *pdev)
+ 			if (ret < 0)
+ 				goto error;
+ 		}
+-	} else {
++	} else if (pdata) {
+ 		for (i = 0; i < pdata->num_sub_devices; i++) {
+ 			pdata->sub_devices[i].dev.parent = dev;
+ 			ret = platform_device_register(&pdata->sub_devices[i]);
+diff --git a/drivers/mfd/rave-sp.c b/drivers/mfd/rave-sp.c
+index 36dcd98977d6..4f545fdc6ebc 100644
+--- a/drivers/mfd/rave-sp.c
++++ b/drivers/mfd/rave-sp.c
+@@ -776,6 +776,13 @@ static int rave_sp_probe(struct serdev_device *serdev)
+ 		return ret;
+ 
+ 	serdev_device_set_baudrate(serdev, baud);
++	serdev_device_set_flow_control(serdev, false);
++
++	ret = serdev_device_set_parity(serdev, SERDEV_PARITY_NONE);
++	if (ret) {
++		dev_err(dev, "Failed to set parity\n");
++		return ret;
++	}
+ 
+ 	ret = rave_sp_get_status(sp);
+ 	if (ret) {
+diff --git a/drivers/mfd/ti_am335x_tscadc.c b/drivers/mfd/ti_am335x_tscadc.c
+index 47012c0899cd..7a30546880a4 100644
+--- a/drivers/mfd/ti_am335x_tscadc.c
++++ b/drivers/mfd/ti_am335x_tscadc.c
+@@ -209,14 +209,13 @@ static	int ti_tscadc_probe(struct platform_device *pdev)
+ 	 * The TSC_ADC_SS controller design assumes the OCP clock is
+ 	 * at least 6x faster than the ADC clock.
+ 	 */
+-	clk = clk_get(&pdev->dev, "adc_tsc_fck");
++	clk = devm_clk_get(&pdev->dev, "adc_tsc_fck");
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&pdev->dev, "failed to get TSC fck\n");
+ 		err = PTR_ERR(clk);
+ 		goto err_disable_clk;
+ 	}
+ 	clock_rate = clk_get_rate(clk);
+-	clk_put(clk);
+ 	tscadc->clk_div = clock_rate / ADC_CLK;
+ 
+ 	/* TSCADC_CLKDIV needs to be configured to the value minus 1 */
+diff --git a/drivers/misc/mic/scif/scif_api.c b/drivers/misc/mic/scif/scif_api.c
+index 7b2dddcdd46d..42f7a12894d6 100644
+--- a/drivers/misc/mic/scif/scif_api.c
++++ b/drivers/misc/mic/scif/scif_api.c
+@@ -370,11 +370,10 @@ int scif_bind(scif_epd_t epd, u16 pn)
+ 			goto scif_bind_exit;
+ 		}
+ 	} else {
+-		pn = scif_get_new_port();
+-		if (!pn) {
+-			ret = -ENOSPC;
++		ret = scif_get_new_port();
++		if (ret < 0)
+ 			goto scif_bind_exit;
+-		}
++		pn = ret;
+ 	}
+ 
+ 	ep->state = SCIFEP_BOUND;
+@@ -648,13 +647,12 @@ int __scif_connect(scif_epd_t epd, struct scif_port_id *dst, bool non_block)
+ 			err = -EISCONN;
+ 		break;
+ 	case SCIFEP_UNBOUND:
+-		ep->port.port = scif_get_new_port();
+-		if (!ep->port.port) {
+-			err = -ENOSPC;
+-		} else {
+-			ep->port.node = scif_info.nodeid;
+-			ep->conn_async_state = ASYNC_CONN_IDLE;
+-		}
++		err = scif_get_new_port();
++		if (err < 0)
++			break;
++		ep->port.port = err;
++		ep->port.node = scif_info.nodeid;
++		ep->conn_async_state = ASYNC_CONN_IDLE;
+ 		/* Fall through */
+ 	case SCIFEP_BOUND:
+ 		/*
+diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c
+index 5ec3f5a43718..14a5e9da32bd 100644
+--- a/drivers/misc/ti-st/st_kim.c
++++ b/drivers/misc/ti-st/st_kim.c
+@@ -756,14 +756,14 @@ static int kim_probe(struct platform_device *pdev)
+ 	err = gpio_request(kim_gdata->nshutdown, "kim");
+ 	if (unlikely(err)) {
+ 		pr_err(" gpio %d request failed ", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 
+ 	/* Configure nShutdown GPIO as output=0 */
+ 	err = gpio_direction_output(kim_gdata->nshutdown, 0);
+ 	if (unlikely(err)) {
+ 		pr_err(" unable to configure gpio %d", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 	/* get reference of pdev for request_firmware
+ 	 */
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index b01d15ec4c56..3e3e6a8f1abc 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -2668,8 +2668,8 @@ static bool nand_subop_instr_is_valid(const struct nand_subop *subop,
+ 	return subop && instr_idx < subop->ninstrs;
+ }
+ 
+-static int nand_subop_get_start_off(const struct nand_subop *subop,
+-				    unsigned int instr_idx)
++static unsigned int nand_subop_get_start_off(const struct nand_subop *subop,
++					     unsigned int instr_idx)
+ {
+ 	if (instr_idx)
+ 		return 0;
+@@ -2688,12 +2688,12 @@ static int nand_subop_get_start_off(const struct nand_subop *subop,
+  *
+  * Given an address instruction, returns the offset of the first cycle to issue.
+  */
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2710,14 +2710,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_addr_start_off);
+  *
+  * Given an address instruction, returns the number of address cycle to issue.
+  */
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int instr_idx)
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int instr_idx)
+ {
+ 	int start_off, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	start_off = nand_subop_get_addr_start_off(subop, instr_idx);
+ 
+@@ -2742,12 +2742,12 @@ EXPORT_SYMBOL_GPL(nand_subop_get_num_addr_cyc);
+  *
+  * Given a data instruction, returns the offset to start from.
+  */
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2764,14 +2764,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_data_start_off);
+  *
+  * Returns the length of the chunk of data to send/receive.
+  */
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int instr_idx)
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int instr_idx)
+ {
+ 	int start_off = 0, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	start_off = nand_subop_get_data_start_off(subop, instr_idx);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 82ac1d10f239..b4253d0e056b 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3196,7 +3196,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
+ 
+ 	on_each_cpu(mvneta_percpu_enable, pp, true);
+ 	mvneta_start_dev(pp);
+-	mvneta_port_up(pp);
+ 
+ 	netdev_update_features(dev);
+ 
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0c5b68e7da51..9b3167054843 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -22,7 +22,7 @@
+ #include <linux/mdio-mux.h>
+ #include <linux/delay.h>
+ 
+-#define MDIO_PARAM_OFFSET		0x00
++#define MDIO_PARAM_OFFSET		0x23c
+ #define MDIO_PARAM_MIIM_CYCLE		29
+ #define MDIO_PARAM_INTERNAL_SEL		25
+ #define MDIO_PARAM_BUS_ID		22
+@@ -30,20 +30,22 @@
+ #define MDIO_PARAM_PHY_ID		16
+ #define MDIO_PARAM_PHY_DATA		0
+ 
+-#define MDIO_READ_OFFSET		0x04
++#define MDIO_READ_OFFSET		0x240
+ #define MDIO_READ_DATA_MASK		0xffff
+-#define MDIO_ADDR_OFFSET		0x08
++#define MDIO_ADDR_OFFSET		0x244
+ 
+-#define MDIO_CTRL_OFFSET		0x0C
++#define MDIO_CTRL_OFFSET		0x248
+ #define MDIO_CTRL_WRITE_OP		0x1
+ #define MDIO_CTRL_READ_OP		0x2
+ 
+-#define MDIO_STAT_OFFSET		0x10
++#define MDIO_STAT_OFFSET		0x24c
+ #define MDIO_STAT_DONE			1
+ 
+ #define BUS_MAX_ADDR			32
+ #define EXT_BUS_START_ADDR		16
+ 
++#define MDIO_REG_ADDR_SPACE_SIZE	0x250
++
+ struct iproc_mdiomux_desc {
+ 	void *mux_handle;
+ 	void __iomem *base;
+@@ -169,6 +171,14 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ 	md->dev = &pdev->dev;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (res->start & 0xfff) {
++		/* For backward compatibility in case the
++		 * base address is specified with an offset.
++		 */
++		dev_info(&pdev->dev, "fix base address in dt-blob\n");
++		res->start &= ~0xfff;
++		res->end = res->start + MDIO_REG_ADDR_SPACE_SIZE - 1;
++	}
+ 	md->base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(md->base)) {
+ 		dev_err(&pdev->dev, "failed to ioremap register\n");
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 836e0a47b94a..747c6951b5c1 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3085,6 +3085,13 @@ static int ath10k_update_channel_list(struct ath10k *ar)
+ 			passive = channel->flags & IEEE80211_CHAN_NO_IR;
+ 			ch->passive = passive;
+ 
++			/* the firmware is ignoring the "radar" flag of the
++			 * channel and is scanning actively using Probe Requests
++			 * on "Radar detection"/DFS channels which are not
++			 * marked as "available"
++			 */
++			ch->passive |= ch->chan_radar;
++
+ 			ch->freq = channel->center_freq;
+ 			ch->band_center_freq1 = channel->center_freq;
+ 			ch->min_power = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 8c49a26fc571..21eb3a598a86 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1584,6 +1584,11 @@ static struct sk_buff *ath10k_wmi_tlv_op_gen_init(struct ath10k *ar)
+ 	cfg->keep_alive_pattern_size = __cpu_to_le32(0);
+ 	cfg->max_tdls_concurrent_sleep_sta = __cpu_to_le32(1);
+ 	cfg->max_tdls_concurrent_buffer_sta = __cpu_to_le32(1);
++	cfg->wmi_send_separate = __cpu_to_le32(0);
++	cfg->num_ocb_vdevs = __cpu_to_le32(0);
++	cfg->num_ocb_channels = __cpu_to_le32(0);
++	cfg->num_ocb_schedules = __cpu_to_le32(0);
++	cfg->host_capab = __cpu_to_le32(0);
+ 
+ 	ath10k_wmi_put_host_mem_chunks(ar, chunks);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+index 3e1e340cd834..1cb93d09b8a9 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+@@ -1670,6 +1670,11 @@ struct wmi_tlv_resource_config {
+ 	__le32 keep_alive_pattern_size;
+ 	__le32 max_tdls_concurrent_sleep_sta;
+ 	__le32 max_tdls_concurrent_buffer_sta;
++	__le32 wmi_send_separate;
++	__le32 num_ocb_vdevs;
++	__le32 num_ocb_channels;
++	__le32 num_ocb_schedules;
++	__le32 host_capab;
+ } __packed;
+ 
+ struct wmi_tlv_init_cmd {
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index e60bea4604e4..fcd9d5eeae72 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -2942,16 +2942,19 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
+ 	struct ath_regulatory *reg = ath9k_hw_regulatory(ah);
+ 	struct ieee80211_channel *channel;
+ 	int chan_pwr, new_pwr;
++	u16 ctl = NO_CTL;
+ 
+ 	if (!chan)
+ 		return;
+ 
++	if (!test)
++		ctl = ath9k_regd_get_ctl(reg, chan);
++
+ 	channel = chan->chan;
+ 	chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
+ 	new_pwr = min_t(int, chan_pwr, reg->power_limit);
+ 
+-	ah->eep_ops->set_txpower(ah, chan,
+-				 ath9k_regd_get_ctl(reg, chan),
++	ah->eep_ops->set_txpower(ah, chan, ctl,
+ 				 get_antenna_gain(ah, chan), new_pwr, test);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 7fdb152be0bb..a249ee747dc9 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -86,7 +86,8 @@ static void ath_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 	struct ieee80211_sta *sta = info->status.status_driver_data[0];
+ 
+-	if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++	if (info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
++			   IEEE80211_TX_STATUS_EOSP)) {
+ 		ieee80211_tx_status(hw, skb);
+ 		return;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 8520523b91b4..d8d8443c1c93 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1003,6 +1003,10 @@ static int iwl_pci_resume(struct device *device)
+ 	if (!trans->op_mode)
+ 		return 0;
+ 
++	/* In WOWLAN, let iwl_trans_pcie_d3_resume do the rest of the work */
++	if (test_bit(STATUS_DEVICE_ENABLED, &trans->status))
++		return 0;
++
+ 	/* reconfigure the MSI-X mapping to get the correct IRQ for rfkill */
+ 	iwl_pcie_conf_msix_hw(trans_pcie);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 7229991ae70d..a2a98087eb41 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1539,18 +1539,6 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 
+ 	iwl_pcie_enable_rx_wake(trans, true);
+ 
+-	/*
+-	 * Reconfigure IVAR table in case of MSIX or reset ict table in
+-	 * MSI mode since HW reset erased it.
+-	 * Also enables interrupts - none will happen as
+-	 * the device doesn't know we're waking it up, only when
+-	 * the opmode actually tells it after this call.
+-	 */
+-	iwl_pcie_conf_msix_hw(trans_pcie);
+-	if (!trans_pcie->msix_enabled)
+-		iwl_pcie_reset_ict(trans);
+-	iwl_enable_interrupts(trans);
+-
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+ 		    BIT(trans->cfg->csr->flag_mac_access_req));
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+@@ -1568,6 +1556,18 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Reconfigure IVAR table in case of MSIX or reset ict table in
++	 * MSI mode since HW reset erased it.
++	 * Also enables interrupts - none will happen as
++	 * the device doesn't know we're waking it up, only when
++	 * the opmode actually tells it after this call.
++	 */
++	iwl_pcie_conf_msix_hw(trans_pcie);
++	if (!trans_pcie->msix_enabled)
++		iwl_pcie_reset_ict(trans);
++	iwl_enable_interrupts(trans);
++
+ 	iwl_pcie_set_pwr(trans, false);
+ 
+ 	if (!reset) {
+diff --git a/drivers/net/wireless/ti/wlcore/rx.c b/drivers/net/wireless/ti/wlcore/rx.c
+index 0f15696195f8..078a4940bc5c 100644
+--- a/drivers/net/wireless/ti/wlcore/rx.c
++++ b/drivers/net/wireless/ti/wlcore/rx.c
+@@ -59,7 +59,7 @@ static u32 wlcore_rx_get_align_buf_size(struct wl1271 *wl, u32 pkt_len)
+ static void wl1271_rx_status(struct wl1271 *wl,
+ 			     struct wl1271_rx_descriptor *desc,
+ 			     struct ieee80211_rx_status *status,
+-			     u8 beacon)
++			     u8 beacon, u8 probe_rsp)
+ {
+ 	memset(status, 0, sizeof(struct ieee80211_rx_status));
+ 
+@@ -106,6 +106,9 @@ static void wl1271_rx_status(struct wl1271 *wl,
+ 		}
+ 	}
+ 
++	if (beacon || probe_rsp)
++		status->boottime_ns = ktime_get_boot_ns();
++
+ 	if (beacon)
+ 		wlcore_set_pending_regdomain_ch(wl, (u16)desc->channel,
+ 						status->band);
+@@ -191,7 +194,8 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length,
+ 	if (ieee80211_is_data_present(hdr->frame_control))
+ 		is_data = 1;
+ 
+-	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon);
++	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon,
++			 ieee80211_is_probe_resp(hdr->frame_control));
+ 	wlcore_hw_set_rx_csum(wl, desc, skb);
+ 
+ 	seq_num = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
+diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
+index cf0aa7cee5b0..a939e8d31735 100644
+--- a/drivers/pci/controller/pcie-mobiveil.c
++++ b/drivers/pci/controller/pcie-mobiveil.c
+@@ -23,6 +23,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ 
++#include "../pci.h"
++
+ /* register offsets and bit positions */
+ 
+ /*
+@@ -130,7 +132,7 @@ struct mobiveil_pcie {
+ 	void __iomem *config_axi_slave_base;	/* endpoint config base */
+ 	void __iomem *csr_axi_slave_base;	/* root port config base */
+ 	void __iomem *apb_csr_base;	/* MSI register base */
+-	void __iomem *pcie_reg_base;	/* Physical PCIe Controller Base */
++	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
+ 	struct irq_domain *intx_domain;
+ 	raw_spinlock_t intx_mask_lock;
+ 	int irq;
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 47cd0c037433..f96af1467984 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -14,6 +14,8 @@
+ #include <linux/poll.h>
+ #include <linux/wait.h>
+ 
++#include <linux/nospec.h>
++
+ MODULE_DESCRIPTION("Microsemi Switchtec(tm) PCIe Management Driver");
+ MODULE_VERSION("0.1");
+ MODULE_LICENSE("GPL");
+@@ -909,6 +911,8 @@ static int ioctl_port_to_pff(struct switchtec_dev *stdev,
+ 	default:
+ 		if (p.port > ARRAY_SIZE(pcfg->dsp_pff_inst_id))
+ 			return -EINVAL;
++		p.port = array_index_nospec(p.port,
++					ARRAY_SIZE(pcfg->dsp_pff_inst_id) + 1);
+ 		p.pff = ioread32(&pcfg->dsp_pff_inst_id[p.port - 1]);
+ 		break;
+ 	}
+diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c
+index d6d183e9db17..b5903fffb3d0 100644
+--- a/drivers/pinctrl/berlin/berlin.c
++++ b/drivers/pinctrl/berlin/berlin.c
+@@ -216,10 +216,8 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 	}
+ 
+ 	/* we will reallocate later */
+-	pctrl->functions = devm_kcalloc(&pdev->dev,
+-					max_functions,
+-					sizeof(*pctrl->functions),
+-					GFP_KERNEL);
++	pctrl->functions = kcalloc(max_functions,
++				   sizeof(*pctrl->functions), GFP_KERNEL);
+ 	if (!pctrl->functions)
+ 		return -ENOMEM;
+ 
+@@ -257,8 +255,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 				function++;
+ 			}
+ 
+-			if (!found)
++			if (!found) {
++				kfree(pctrl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!function->groups) {
+ 				function->groups =
+@@ -267,8 +267,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 						     sizeof(char *),
+ 						     GFP_KERNEL);
+ 
+-				if (!function->groups)
++				if (!function->groups) {
++					kfree(pctrl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			groups = function->groups;
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 1c6bb15579e1..b04edc22dad7 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -383,7 +383,7 @@ static void imx_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > pctldev->num_groups)
++	if (group >= pctldev->num_groups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 04ae139671c8..b91db89eb924 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -552,7 +552,8 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+ 		/* Each status bit covers four pins */
+ 		for (i = 0; i < 4; i++) {
+ 			regval = readl(regs + i);
+-			if (!(regval & PIN_IRQ_PENDING))
++			if (!(regval & PIN_IRQ_PENDING) ||
++			    !(regval & BIT(INTERRUPT_MASK_OFF)))
+ 				continue;
+ 			irq = irq_find_mapping(gc->irq.domain, irqnr + i);
+ 			generic_handle_irq(irq);
+diff --git a/drivers/regulator/tps65217-regulator.c b/drivers/regulator/tps65217-regulator.c
+index fc12badf3805..d84fab616abf 100644
+--- a/drivers/regulator/tps65217-regulator.c
++++ b/drivers/regulator/tps65217-regulator.c
+@@ -232,6 +232,8 @@ static int tps65217_regulator_probe(struct platform_device *pdev)
+ 	tps->strobes = devm_kcalloc(&pdev->dev,
+ 				    TPS65217_NUM_REGULATOR, sizeof(u8),
+ 				    GFP_KERNEL);
++	if (!tps->strobes)
++		return -ENOMEM;
+ 
+ 	platform_set_drvdata(pdev, tps);
+ 
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index b714a543a91d..8122807db380 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,6 +15,7 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
++#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -449,6 +450,10 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
++	err = dev_pm_domain_attach(dev, true);
++	if (err)
++		goto out;
++
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -490,6 +495,8 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
++	dev_pm_domain_detach(dev, true);
++
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 99ba4a770406..27521fc3ef5a 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2038,6 +2038,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twa_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x25, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2060,6 +2061,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = ioremap(mem_addr, mem_len);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x35, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -2067,8 +2069,10 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (twa_reset_sequence(tw_dev, 0))
++	if (twa_reset_sequence(tw_dev, 0)) {
++		retval = -ENOMEM;
+ 		goto out_iounmap;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	if ((pdev->device == PCI_DEVICE_ID_3WARE_9650SE) ||
+diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
+index cf9f2a09b47d..40c1e6e64f58 100644
+--- a/drivers/scsi/3w-sas.c
++++ b/drivers/scsi/3w-sas.c
+@@ -1594,6 +1594,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twl_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1a, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -1608,6 +1609,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_iomap(pdev, 1, 0);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -1617,6 +1619,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	/* Initialize the card */
+ 	if (twl_reset_sequence(tw_dev, 0)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Controller reset failed during probe");
++		retval = -ENOMEM;
+ 		goto out_iounmap;
+ 	}
+ 
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index f6179e3d6953..961ea6f7def8 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2280,6 +2280,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (tw_initialize_device_extension(tw_dev)) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to initialize device extension.");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2294,6 +2295,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_resource_start(pdev, 0);
+ 	if (!tw_dev->base_addr) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to get io address.");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 20b249a649dd..902004dc8dc7 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -672,7 +672,7 @@ struct lpfc_hba {
+ #define LS_NPIV_FAB_SUPPORTED 0x2	/* Fabric supports NPIV */
+ #define LS_IGNORE_ERATT       0x4	/* intr handler should ignore ERATT */
+ #define LS_MDS_LINK_DOWN      0x8	/* MDS Diagnostics Link Down */
+-#define LS_MDS_LOOPBACK      0x16	/* MDS Diagnostics Link Up (Loopback) */
++#define LS_MDS_LOOPBACK      0x10	/* MDS Diagnostics Link Up (Loopback) */
+ 
+ 	uint32_t hba_flag;	/* hba generic flags */
+ #define HBA_ERATT_HANDLED	0x1 /* This flag is set when eratt handled */
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 76a5a99605aa..d723fd1d7b26 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2687,7 +2687,7 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct lpfc_nvme_rport *oldrport;
+ 	struct nvme_fc_remote_port *remote_port;
+ 	struct nvme_fc_port_info rpinfo;
+-	struct lpfc_nodelist *prev_ndlp;
++	struct lpfc_nodelist *prev_ndlp = NULL;
+ 
+ 	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6006 Register NVME PORT. DID x%06x nlptype x%x\n",
+@@ -2736,23 +2736,29 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		spin_unlock_irq(&vport->phba->hbalock);
+ 		rport = remote_port->private;
+ 		if (oldrport) {
++			/* New remoteport record does not guarantee valid
++			 * host private memory area.
++			 */
++			prev_ndlp = oldrport->ndlp;
+ 			if (oldrport == remote_port->private) {
+-				/* Same remoteport.  Just reuse. */
++				/* Same remoteport - ndlp should match.
++				 * Just reuse.
++				 */
+ 				lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+ 						 LOG_NVME_DISC,
+ 						 "6014 Rebinding lport to "
+ 						 "remoteport %p wwpn 0x%llx, "
+-						 "Data: x%x x%x %p x%x x%06x\n",
++						 "Data: x%x x%x %p %p x%x x%06x\n",
+ 						 remote_port,
+ 						 remote_port->port_name,
+ 						 remote_port->port_id,
+ 						 remote_port->port_role,
++						 prev_ndlp,
+ 						 ndlp,
+ 						 ndlp->nlp_type,
+ 						 ndlp->nlp_DID);
+ 				return 0;
+ 			}
+-			prev_ndlp = rport->ndlp;
+ 
+ 			/* Sever the ndlp<->rport association
+ 			 * before dropping the ndlp ref from
+@@ -2786,13 +2792,13 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		lpfc_printf_vlog(vport, KERN_INFO,
+ 				 LOG_NVME_DISC | LOG_NODE,
+ 				 "6022 Binding new rport to "
+-				 "lport %p Remoteport %p  WWNN 0x%llx, "
++				 "lport %p Remoteport %p rport %p WWNN 0x%llx, "
+ 				 "Rport WWPN 0x%llx DID "
+-				 "x%06x Role x%x, ndlp %p\n",
+-				 lport, remote_port,
++				 "x%06x Role x%x, ndlp %p prev_ndlp %p\n",
++				 lport, remote_port, rport,
+ 				 rpinfo.node_name, rpinfo.port_name,
+ 				 rpinfo.port_id, rpinfo.port_role,
+-				 ndlp);
++				 ndlp, prev_ndlp);
+ 	} else {
+ 		lpfc_printf_vlog(vport, KERN_ERR,
+ 				 LOG_NVME_DISC | LOG_NODE,
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index ec550ee0108e..75d34def2361 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1074,9 +1074,12 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case PDS_PLOGI_COMPLETE:
+ 	case PDS_PRLI_PENDING:
+ 	case PDS_PRLI2_PENDING:
+-		ql_dbg(ql_dbg_disc, vha, 0x20d5, "%s %d %8phC relogin needed\n",
+-		    __func__, __LINE__, fcport->port_name);
+-		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		/* Set discovery state back to GNL to allow a relogin attempt */
++		if (qla_dual_mode_enabled(vha) ||
++		    qla_ini_mode_enabled(vha)) {
++			fcport->disc_state = DSC_GNL;
++			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		}
+ 		return;
+ 	case PDS_LOGO_PENDING:
+ 	case PDS_PORT_UNAVAILABLE:
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 1027b0cb7fa3..6dc1b1bd8069 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -982,8 +982,9 @@ void qlt_free_session_done(struct work_struct *work)
+ 
+ 			logo.id = sess->d_id;
+ 			logo.cmd_count = 0;
++			if (!own)
++				qlt_send_first_logo(vha, &logo);
+ 			sess->send_els_logo = 0;
+-			qlt_send_first_logo(vha, &logo);
+ 		}
+ 
+ 		if (sess->logout_on_delete && sess->loop_id != FC_NO_LOOP_ID) {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 731ca0d8520a..9f3c263756a8 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -571,6 +571,15 @@ qla27xx_fwdt_entry_t268(struct scsi_qla_host *vha,
+ 		}
+ 		break;
+ 
++	case T268_BUF_TYPE_REQ_MIRROR:
++	case T268_BUF_TYPE_RSP_MIRROR:
++		/*
++		 * Mirror pointers are not implemented in the
++		 * driver, instead shadow pointers are used by
++		 * the driver. Skip these entries.
++		 */
++		qla27xx_skip_entry(ent, buf);
++		break;
+ 	default:
+ 		ql_dbg(ql_dbg_async, vha, 0xd02b,
+ 		    "%s: unknown buffer %x\n", __func__, ent->t268.buf_type);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ee5081ba5313..1fc87a3260cc 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -316,6 +316,7 @@ void __transport_register_session(
+ {
+ 	const struct target_core_fabric_ops *tfo = se_tpg->se_tpg_tfo;
+ 	unsigned char buf[PR_REG_ISID_LEN];
++	unsigned long flags;
+ 
+ 	se_sess->se_tpg = se_tpg;
+ 	se_sess->fabric_sess_ptr = fabric_sess_ptr;
+@@ -352,7 +353,7 @@ void __transport_register_session(
+ 			se_sess->sess_bin_isid = get_unaligned_be64(&buf[0]);
+ 		}
+ 
+-		spin_lock_irq(&se_nacl->nacl_sess_lock);
++		spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags);
+ 		/*
+ 		 * The se_nacl->nacl_sess pointer will be set to the
+ 		 * last active I_T Nexus for each struct se_node_acl.
+@@ -361,7 +362,7 @@ void __transport_register_session(
+ 
+ 		list_add_tail(&se_sess->sess_acl_list,
+ 			      &se_nacl->acl_sess_list);
+-		spin_unlock_irq(&se_nacl->nacl_sess_lock);
++		spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags);
+ 	}
+ 	list_add_tail(&se_sess->sess_list, &se_tpg->tpg_sess_list);
+ 
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index d8dc3d22051f..b8dc5efc606b 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1745,9 +1745,11 @@ static int tcmu_configure_device(struct se_device *dev)
+ 
+ 	info = &udev->uio_info;
+ 
++	mutex_lock(&udev->cmdr_lock);
+ 	udev->data_bitmap = kcalloc(BITS_TO_LONGS(udev->max_blocks),
+ 				    sizeof(unsigned long),
+ 				    GFP_KERNEL);
++	mutex_unlock(&udev->cmdr_lock);
+ 	if (!udev->data_bitmap) {
+ 		ret = -ENOMEM;
+ 		goto err_bitmap_alloc;
+@@ -1957,7 +1959,7 @@ static match_table_t tokens = {
+ 	{Opt_hw_block_size, "hw_block_size=%u"},
+ 	{Opt_hw_max_sectors, "hw_max_sectors=%u"},
+ 	{Opt_nl_reply_supported, "nl_reply_supported=%d"},
+-	{Opt_max_data_area_mb, "max_data_area_mb=%u"},
++	{Opt_max_data_area_mb, "max_data_area_mb=%d"},
+ 	{Opt_err, NULL}
+ };
+ 
+@@ -1985,13 +1987,48 @@ static int tcmu_set_dev_attrib(substring_t *arg, u32 *dev_attrib)
+ 	return 0;
+ }
+ 
++static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
++{
++	int val, ret;
++
++	ret = match_int(arg, &val);
++	if (ret < 0) {
++		pr_err("match_int() failed for max_data_area_mb=. Error %d.\n",
++		       ret);
++		return ret;
++	}
++
++	if (val <= 0) {
++		pr_err("Invalid max_data_area %d.\n", val);
++		return -EINVAL;
++	}
++
++	mutex_lock(&udev->cmdr_lock);
++	if (udev->data_bitmap) {
++		pr_err("Cannot set max_data_area_mb after it has been enabled.\n");
++		ret = -EINVAL;
++		goto unlock;
++	}
++
++	udev->max_blocks = TCMU_MBS_TO_BLOCKS(val);
++	if (udev->max_blocks > tcmu_global_max_blocks) {
++		pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
++		       val, TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
++		udev->max_blocks = tcmu_global_max_blocks;
++	}
++
++unlock:
++	mutex_unlock(&udev->cmdr_lock);
++	return ret;
++}
++
+ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 		const char *page, ssize_t count)
+ {
+ 	struct tcmu_dev *udev = TCMU_DEV(dev);
+ 	char *orig, *ptr, *opts, *arg_p;
+ 	substring_t args[MAX_OPT_ARGS];
+-	int ret = 0, token, tmpval;
++	int ret = 0, token;
+ 
+ 	opts = kstrdup(page, GFP_KERNEL);
+ 	if (!opts)
+@@ -2044,37 +2081,7 @@ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 				pr_err("kstrtoint() failed for nl_reply_supported=\n");
+ 			break;
+ 		case Opt_max_data_area_mb:
+-			if (dev->export_count) {
+-				pr_err("Unable to set max_data_area_mb while exports exist\n");
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			arg_p = match_strdup(&args[0]);
+-			if (!arg_p) {
+-				ret = -ENOMEM;
+-				break;
+-			}
+-			ret = kstrtoint(arg_p, 0, &tmpval);
+-			kfree(arg_p);
+-			if (ret < 0) {
+-				pr_err("kstrtoint() failed for max_data_area_mb=\n");
+-				break;
+-			}
+-
+-			if (tmpval <= 0) {
+-				pr_err("Invalid max_data_area %d\n", tmpval);
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			udev->max_blocks = TCMU_MBS_TO_BLOCKS(tmpval);
+-			if (udev->max_blocks > tcmu_global_max_blocks) {
+-				pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
+-				       tmpval,
+-				       TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
+-				udev->max_blocks = tcmu_global_max_blocks;
+-			}
++			ret = tcmu_set_max_blocks_param(udev, &args[0]);
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index 45fb284d4c11..e77e63070e99 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -598,7 +598,7 @@ static int rcar_thermal_probe(struct platform_device *pdev)
+ 			enr_bits |= 3 << (i * 8);
+ 	}
+ 
+-	if (enr_bits)
++	if (common->base && enr_bits)
+ 		rcar_thermal_common_write(common, ENR, enr_bits);
+ 
+ 	dev_info(dev, "%d sensor probed\n", i);
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index 11278836ed12..0bd47007c57f 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -142,6 +142,7 @@ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
+ 
+ 	INIT_LIST_HEAD(&hwmon->tz_list);
+ 	strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH);
++	strreplace(hwmon->type, '-', '_');
+ 	hwmon->device = hwmon_device_register_with_info(NULL, hwmon->type,
+ 							hwmon, NULL, NULL);
+ 	if (IS_ERR(hwmon->device)) {
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index bdd17d2aaafd..b121d8f8f3d7 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -1881,7 +1881,7 @@ static __init int register_PCI(int i, struct pci_dev *dev)
+ 	ByteIO_t UPCIRingInd = 0;
+ 
+ 	if (!dev || !pci_match_id(rocket_pci_ids, dev) ||
+-	    pci_enable_device(dev))
++	    pci_enable_device(dev) || i >= NUM_BOARDS)
+ 		return 0;
+ 
+ 	rcktpt_io_addr[i] = pci_resource_start(dev, 0);
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index f68c1121fa7c..6c58ad1abd7e 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -622,6 +622,12 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 	ssize_t retval;
+ 	s32 irq_on;
+ 
++	if (count != sizeof(s32))
++		return -EINVAL;
++
++	if (copy_from_user(&irq_on, buf, count))
++		return -EFAULT;
++
+ 	mutex_lock(&idev->info_lock);
+ 	if (!idev->info) {
+ 		retval = -EINVAL;
+@@ -633,21 +639,11 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 		goto out;
+ 	}
+ 
+-	if (count != sizeof(s32)) {
+-		retval = -EINVAL;
+-		goto out;
+-	}
+-
+ 	if (!idev->info->irqcontrol) {
+ 		retval = -ENOSYS;
+ 		goto out;
+ 	}
+ 
+-	if (copy_from_user(&irq_on, buf, count)) {
+-		retval = -EFAULT;
+-		goto out;
+-	}
+-
+ 	retval = idev->info->irqcontrol(idev->info, irq_on);
+ 
+ out:
+@@ -955,8 +951,6 @@ int __uio_register_device(struct module *owner,
+ 	if (ret)
+ 		goto err_uio_dev_add_attributes;
+ 
+-	info->uio_dev = idev;
+-
+ 	if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
+ 		/*
+ 		 * Note that we deliberately don't use devm_request_irq
+@@ -972,6 +966,7 @@ int __uio_register_device(struct module *owner,
+ 			goto err_request_irq;
+ 	}
+ 
++	info->uio_dev = idev;
+ 	return 0;
+ 
+ err_request_irq:
+diff --git a/fs/autofs/autofs_i.h b/fs/autofs/autofs_i.h
+index 9400a9f6318a..5057b9f0f846 100644
+--- a/fs/autofs/autofs_i.h
++++ b/fs/autofs/autofs_i.h
+@@ -26,6 +26,7 @@
+ #include <linux/list.h>
+ #include <linux/completion.h>
+ #include <linux/file.h>
++#include <linux/magic.h>
+ 
+ /* This is the range of ioctl() numbers we claim as ours */
+ #define AUTOFS_IOC_FIRST     AUTOFS_IOC_READY
+@@ -124,7 +125,8 @@ struct autofs_sb_info {
+ 
+ static inline struct autofs_sb_info *autofs_sbi(struct super_block *sb)
+ {
+-	return (struct autofs_sb_info *)(sb->s_fs_info);
++	return sb->s_magic != AUTOFS_SUPER_MAGIC ?
++		NULL : (struct autofs_sb_info *)(sb->s_fs_info);
+ }
+ 
+ static inline struct autofs_info *autofs_dentry_ino(struct dentry *dentry)
+diff --git a/fs/autofs/inode.c b/fs/autofs/inode.c
+index b51980fc274e..846c052569dd 100644
+--- a/fs/autofs/inode.c
++++ b/fs/autofs/inode.c
+@@ -10,7 +10,6 @@
+ #include <linux/seq_file.h>
+ #include <linux/pagemap.h>
+ #include <linux/parser.h>
+-#include <linux/magic.h>
+ 
+ #include "autofs_i.h"
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 53cac20650d8..4ab0bccfa281 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5935,7 +5935,7 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * root: the root of the parent directory
+  * rsv: block reservation
+  * items: the number of items that we need do reservation
+- * qgroup_reserved: used to return the reserved size in qgroup
++ * use_global_rsv: allow fallback to the global block reservation
+  *
+  * This function is used to reserve the space for snapshot/subvolume
+  * creation and deletion. Those operations are different with the
+@@ -5945,10 +5945,10 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * the space reservation mechanism in start_transaction().
+  */
+ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+-				     struct btrfs_block_rsv *rsv,
+-				     int items,
++				     struct btrfs_block_rsv *rsv, int items,
+ 				     bool use_global_rsv)
+ {
++	u64 qgroup_num_bytes = 0;
+ 	u64 num_bytes;
+ 	int ret;
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -5956,12 +5956,11 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 
+ 	if (test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
+ 		/* One for parent inode, two for dir entries */
+-		num_bytes = 3 * fs_info->nodesize;
+-		ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
++		qgroup_num_bytes = 3 * fs_info->nodesize;
++		ret = btrfs_qgroup_reserve_meta_prealloc(root,
++				qgroup_num_bytes, true);
+ 		if (ret)
+ 			return ret;
+-	} else {
+-		num_bytes = 0;
+ 	}
+ 
+ 	num_bytes = btrfs_calc_trans_metadata_size(fs_info, items);
+@@ -5973,8 +5972,8 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 	if (ret == -ENOSPC && use_global_rsv)
+ 		ret = btrfs_block_rsv_migrate(global_rsv, rsv, num_bytes, 1);
+ 
+-	if (ret && num_bytes)
+-		btrfs_qgroup_free_meta_prealloc(root, num_bytes);
++	if (ret && qgroup_num_bytes)
++		btrfs_qgroup_free_meta_prealloc(root, qgroup_num_bytes);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index b077544b5232..f3d6be0c657b 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3463,6 +3463,25 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,
+ 
+ 		same_lock_start = min_t(u64, loff, dst_loff);
+ 		same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start;
++	} else {
++		/*
++		 * If the source and destination inodes are different, the
++		 * source's range end offset matches the source's i_size, that
++		 * i_size is not a multiple of the sector size, and the
++		 * destination range does not go past the destination's i_size,
++		 * we must round down the length to the nearest sector size
++		 * multiple. If we don't do this adjustment we end up replacing
++		 * with zeroes the bytes in the range that starts at the
++		 * deduplication range's end offset and ends at the next sector
++		 * size multiple.
++		 */
++		if (loff + olen == i_size_read(src) &&
++		    dst_loff + len < i_size_read(dst)) {
++			const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;
++
++			len = round_down(i_size_read(src), sz) - loff;
++			olen = len;
++		}
+ 	}
+ 
+ again:
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 9d02563b2147..44043f809a3c 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2523,7 +2523,7 @@ cifs_setup_ipc(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	if (tcon == NULL)
+ 		return -ENOMEM;
+ 
+-	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->serverName);
++	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->server->hostname);
+ 
+ 	/* cannot fail */
+ 	nls_codepage = load_nls_default();
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 9051b9dfd590..d279fa5472db 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -469,6 +469,8 @@ cifs_sfu_type(struct cifs_fattr *fattr, const char *path,
+ 	oparms.cifs_sb = cifs_sb;
+ 	oparms.desired_access = GENERIC_READ;
+ 	oparms.create_options = CREATE_NOT_DIR;
++	if (backup_cred(cifs_sb))
++		oparms.create_options |= CREATE_OPEN_BACKUP_INTENT;
+ 	oparms.disposition = FILE_OPEN;
+ 	oparms.path = path;
+ 	oparms.fid = &fid;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ee6c4a952ce9..5ecbc99f46e4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -626,7 +626,10 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -775,7 +778,10 @@ smb2_query_eas(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -854,7 +860,10 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_WRITE_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1460,7 +1469,10 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1735,7 +1747,10 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -3463,7 +3478,7 @@ struct smb_version_values smb21_values = {
+ struct smb_version_values smb3any_values = {
+ 	.version_string = SMB3ANY_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3484,7 +3499,7 @@ struct smb_version_values smb3any_values = {
+ struct smb_version_values smbdefault_values = {
+ 	.version_string = SMBDEFAULT_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3505,7 +3520,7 @@ struct smb_version_values smbdefault_values = {
+ struct smb_version_values smb30_values = {
+ 	.version_string = SMB30_VERSION_STRING,
+ 	.protocol_id = SMB30_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3526,7 +3541,7 @@ struct smb_version_values smb30_values = {
+ struct smb_version_values smb302_values = {
+ 	.version_string = SMB302_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3548,7 +3563,7 @@ struct smb_version_values smb302_values = {
+ struct smb_version_values smb311_values = {
+ 	.version_string = SMB311_VERSION_STRING,
+ 	.protocol_id = SMB311_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 44e511a35559..82be1dfeca33 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2179,6 +2179,9 @@ SMB2_open(const unsigned int xid, struct cifs_open_parms *oparms, __le16 *path,
+ 	if (!(server->capabilities & SMB2_GLOBAL_CAP_LEASING) ||
+ 	    *oplock == SMB2_OPLOCK_LEVEL_NONE)
+ 		req->RequestedOplockLevel = *oplock;
++	else if (!(server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING) &&
++		  (oparms->create_options & CREATE_NOT_FILE))
++		req->RequestedOplockLevel = *oplock; /* no srv lease support */
+ 	else {
+ 		rc = add_lease_context(server, iov, &n_iov,
+ 				       oparms->fid->lease_key, oplock);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 4d8b1de83143..b6f2dc8163e1 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1680,18 +1680,20 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 		sbi->total_valid_block_count -= diff;
+ 		if (!*count) {
+ 			spin_unlock(&sbi->stat_lock);
+-			percpu_counter_sub(&sbi->alloc_valid_block_count, diff);
+ 			goto enospc;
+ 		}
+ 	}
+ 	spin_unlock(&sbi->stat_lock);
+ 
+-	if (unlikely(release))
++	if (unlikely(release)) {
++		percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 		dquot_release_reservation_block(inode, release);
++	}
+ 	f2fs_i_blocks_write(inode, *count, true, true);
+ 	return 0;
+ 
+ enospc:
++	percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 	dquot_release_reservation_block(inode, release);
+ 	return -ENOSPC;
+ }
+@@ -1954,8 +1956,13 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping,
+ 						pgoff_t index, bool for_write)
+ {
+ #ifdef CONFIG_F2FS_FAULT_INJECTION
+-	struct page *page = find_lock_page(mapping, index);
++	struct page *page;
+ 
++	if (!for_write)
++		page = find_get_page_flags(mapping, index,
++						FGP_LOCK | FGP_ACCESSED);
++	else
++		page = find_lock_page(mapping, index);
+ 	if (page)
+ 		return page;
+ 
+@@ -2812,7 +2819,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+ 			struct writeback_control *wbc,
+ 			bool do_balance, enum iostat_type io_type);
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
+ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+ void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 3ffa341cf586..4c9f9bcbd2d9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1882,7 +1882,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct super_block *sb = sbi->sb;
+ 	__u32 in;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9093be6e7a7d..37ab2d10a872 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -986,7 +986,13 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 			goto next;
+ 
+ 		sum = page_address(sum_page);
+-		f2fs_bug_on(sbi, type != GET_SUM_TYPE((&sum->footer)));
++		if (type != GET_SUM_TYPE((&sum->footer))) {
++			f2fs_msg(sbi->sb, KERN_ERR, "Inconsistent segment (%u) "
++				"type [%d, %d] in SSA and SIT",
++				segno, type, GET_SUM_TYPE((&sum->footer)));
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			goto next;
++		}
+ 
+ 		/*
+ 		 * this is to avoid deadlock:
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 043830be5662..2bcb2d36f024 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -130,6 +130,16 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
+ 	if (err)
+ 		return err;
+ 
++	if (unlikely(dn->data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(dn);
++		set_sbi_flag(fio.sbi, SBI_NEED_FSCK);
++		f2fs_msg(fio.sbi->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dn->inode->i_ino, dn->data_blkaddr);
++		return -EINVAL;
++	}
++
+ 	f2fs_bug_on(F2FS_P_SB(page), PageWriteback(page));
+ 
+ 	f2fs_do_read_inline_data(page, dn->inode_page);
+@@ -363,6 +373,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+ 	if (err)
+ 		goto out;
+ 
++	if (unlikely(dn.data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(&dn);
++		set_sbi_flag(F2FS_P_SB(page), SBI_NEED_FSCK);
++		f2fs_msg(F2FS_P_SB(page)->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dir->i_ino, dn.data_blkaddr);
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	f2fs_wait_on_page_writeback(page, DATA, true);
+ 
+ 	dentry_blk = page_address(page);
+@@ -477,6 +498,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
+ 	return 0;
+ recover:
+ 	lock_page(ipage);
++	f2fs_wait_on_page_writeback(ipage, NODE, true);
+ 	memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir));
+ 	f2fs_i_depth_write(dir, 0);
+ 	f2fs_i_size_write(dir, MAX_INLINE_DATA(dir));
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index f121c864f4c0..cf0f944fcaea 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -197,6 +197,16 @@ static bool sanity_check_inode(struct inode *inode)
+ 			__func__, inode->i_ino);
+ 		return false;
+ 	}
++
++	if (f2fs_has_extra_attr(inode) &&
++			!f2fs_sb_has_extra_attr(sbi->sb)) {
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		f2fs_msg(sbi->sb, KERN_WARNING,
++			"%s: inode (ino=%lx) is with extra_attr, "
++			"but extra_attr feature is off",
++			__func__, inode->i_ino);
++		return false;
++	}
+ 	return true;
+ }
+ 
+@@ -249,6 +259,11 @@ static int do_read_inode(struct inode *inode)
+ 
+ 	get_inline_info(inode, ri);
+ 
++	if (!sanity_check_inode(inode)) {
++		f2fs_put_page(node_page, 1);
++		return -EINVAL;
++	}
++
+ 	fi->i_extra_isize = f2fs_has_extra_attr(inode) ?
+ 					le16_to_cpu(ri->i_extra_isize) : 0;
+ 
+@@ -330,10 +345,6 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ 	ret = do_read_inode(inode);
+ 	if (ret)
+ 		goto bad_inode;
+-	if (!sanity_check_inode(inode)) {
+-		ret = -EINVAL;
+-		goto bad_inode;
+-	}
+ make_now:
+ 	if (ino == F2FS_NODE_INO(sbi)) {
+ 		inode->i_mapping->a_ops = &f2fs_node_aops;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 10643b11bd59..52ed02b0327c 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1633,7 +1633,9 @@ next_step:
+ 						!is_cold_node(page)))
+ 				continue;
+ lock_node:
+-			if (!trylock_page(page))
++			if (wbc->sync_mode == WB_SYNC_ALL)
++				lock_page(page);
++			else if (!trylock_page(page))
+ 				continue;
+ 
+ 			if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
+@@ -1968,7 +1970,7 @@ static void remove_free_nid(struct f2fs_sb_info *sbi, nid_t nid)
+ 		kmem_cache_free(free_nid_slab, i);
+ }
+ 
+-static void scan_nat_page(struct f2fs_sb_info *sbi,
++static int scan_nat_page(struct f2fs_sb_info *sbi,
+ 			struct page *nat_page, nid_t start_nid)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -1986,7 +1988,10 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			break;
+ 
+ 		blk_addr = le32_to_cpu(nat_blk->entries[i].block_addr);
+-		f2fs_bug_on(sbi, blk_addr == NEW_ADDR);
++
++		if (blk_addr == NEW_ADDR)
++			return -EINVAL;
++
+ 		if (blk_addr == NULL_ADDR) {
+ 			add_free_nid(sbi, start_nid, true, true);
+ 		} else {
+@@ -1995,6 +2000,8 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			spin_unlock(&NM_I(sbi)->nid_list_lock);
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+ static void scan_curseg_cache(struct f2fs_sb_info *sbi)
+@@ -2050,11 +2057,11 @@ out:
+ 	up_read(&nm_i->nat_tree_lock);
+ }
+ 
+-static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
++static int __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						bool sync, bool mount)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+-	int i = 0;
++	int i = 0, ret;
+ 	nid_t nid = nm_i->next_scan_nid;
+ 
+ 	if (unlikely(nid >= nm_i->max_nid))
+@@ -2062,17 +2069,17 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	/* Enough entries */
+ 	if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-		return;
++		return 0;
+ 
+ 	if (!sync && !f2fs_available_free_memory(sbi, FREE_NIDS))
+-		return;
++		return 0;
+ 
+ 	if (!mount) {
+ 		/* try to find free nids in free_nid_bitmap */
+ 		scan_free_nid_bits(sbi);
+ 
+ 		if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-			return;
++			return 0;
+ 	}
+ 
+ 	/* readahead nat pages to be scanned */
+@@ -2086,8 +2093,16 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						nm_i->nat_block_bitmap)) {
+ 			struct page *page = get_current_nat_page(sbi, nid);
+ 
+-			scan_nat_page(sbi, page, nid);
++			ret = scan_nat_page(sbi, page, nid);
+ 			f2fs_put_page(page, 1);
++
++			if (ret) {
++				up_read(&nm_i->nat_tree_lock);
++				f2fs_bug_on(sbi, !mount);
++				f2fs_msg(sbi->sb, KERN_ERR,
++					"NAT is corrupt, run fsck to fix it");
++				return -EINVAL;
++			}
+ 		}
+ 
+ 		nid += (NAT_ENTRY_PER_BLOCK - (nid % NAT_ENTRY_PER_BLOCK));
+@@ -2108,13 +2123,19 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
+ 					nm_i->ra_nid_pages, META_NAT, false);
++
++	return 0;
+ }
+ 
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
+ {
++	int ret;
++
+ 	mutex_lock(&NM_I(sbi)->build_lock);
+-	__f2fs_build_free_nids(sbi, sync, mount);
++	ret = __f2fs_build_free_nids(sbi, sync, mount);
+ 	mutex_unlock(&NM_I(sbi)->build_lock);
++
++	return ret;
+ }
+ 
+ /*
+@@ -2801,8 +2822,7 @@ int f2fs_build_node_manager(struct f2fs_sb_info *sbi)
+ 	/* load free nid status from nat_bits table */
+ 	load_free_nid_bitmap(sbi);
+ 
+-	f2fs_build_free_nids(sbi, true, true);
+-	return 0;
++	return f2fs_build_free_nids(sbi, true, true);
+ }
+ 
+ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 38f25f0b193a..ad70e62c5da4 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -241,8 +241,8 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 	struct page *page = NULL;
+ 	block_t blkaddr;
+ 	unsigned int loop_cnt = 0;
+-	unsigned int free_blocks = sbi->user_block_count -
+-					valid_user_blocks(sbi);
++	unsigned int free_blocks = MAIN_SEGS(sbi) * sbi->blocks_per_seg -
++						valid_user_blocks(sbi);
+ 	int err = 0;
+ 
+ 	/* get node pages in the current segment */
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9efce174c51a..43fecd5eb252 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1643,21 +1643,30 @@ void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
+ 	unsigned int start = 0, end = -1;
+ 	unsigned int secno, start_segno;
+ 	bool force = (cpc->reason & CP_DISCARD);
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	mutex_lock(&dirty_i->seglist_lock);
+ 
+ 	while (1) {
+ 		int i;
++
++		if (need_align && end != -1)
++			end--;
+ 		start = find_next_bit(prefree_map, MAIN_SEGS(sbi), end + 1);
+ 		if (start >= MAIN_SEGS(sbi))
+ 			break;
+ 		end = find_next_zero_bit(prefree_map, MAIN_SEGS(sbi),
+ 								start + 1);
+ 
+-		for (i = start; i < end; i++)
+-			clear_bit(i, prefree_map);
++		if (need_align) {
++			start = rounddown(start, sbi->segs_per_sec);
++			end = roundup(end, sbi->segs_per_sec);
++		}
+ 
+-		dirty_i->nr_dirty[PRE] -= end - start;
++		for (i = start; i < end; i++) {
++			if (test_and_clear_bit(i, prefree_map))
++				dirty_i->nr_dirty[PRE]--;
++		}
+ 
+ 		if (!test_opt(sbi, DISCARD))
+ 			continue;
+@@ -2437,6 +2446,7 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	struct discard_policy dpolicy;
+ 	unsigned long long trimmed = 0;
+ 	int err = 0;
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	if (start >= MAX_BLKADDR(sbi) || range->len < sbi->blocksize)
+ 		return -EINVAL;
+@@ -2454,6 +2464,10 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	start_segno = (start <= MAIN_BLKADDR(sbi)) ? 0 : GET_SEGNO(sbi, start);
+ 	end_segno = (end >= MAX_BLKADDR(sbi)) ? MAIN_SEGS(sbi) - 1 :
+ 						GET_SEGNO(sbi, end);
++	if (need_align) {
++		start_segno = rounddown(start_segno, sbi->segs_per_sec);
++		end_segno = roundup(end_segno + 1, sbi->segs_per_sec) - 1;
++	}
+ 
+ 	cpc.reason = CP_DISCARD;
+ 	cpc.trim_minlen = max_t(__u64, 1, F2FS_BYTES_TO_BLK(range->minlen));
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index f18fc82fbe99..38c549d77a80 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -448,6 +448,8 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 	if (test_and_clear_bit(segno, free_i->free_segmap)) {
+ 		free_i->free_segments++;
+ 
++		if (IS_CURSEC(sbi, secno))
++			goto skip_free;
+ 		next = find_next_bit(free_i->free_segmap,
+ 				start_segno + sbi->segs_per_sec, start_segno);
+ 		if (next >= start_segno + sbi->segs_per_sec) {
+@@ -455,6 +457,7 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 				free_i->free_sections++;
+ 		}
+ 	}
++skip_free:
+ 	spin_unlock(&free_i->segmap_lock);
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 3995e926ba3a..128d489acebb 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2229,9 +2229,9 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return 1;
+ 	}
+ 
+-	if (secs_per_zone > total_sections) {
++	if (secs_per_zone > total_sections || !secs_per_zone) {
+ 		f2fs_msg(sb, KERN_INFO,
+-			"Wrong secs_per_zone (%u > %u)",
++			"Wrong secs_per_zone / total_sections (%u, %u)",
+ 			secs_per_zone, total_sections);
+ 		return 1;
+ 	}
+@@ -2282,12 +2282,17 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
+ 	unsigned int ovp_segments, reserved_segments;
+ 	unsigned int main_segs, blocks_per_seg;
++	unsigned int sit_segs, nat_segs;
++	unsigned int sit_bitmap_size, nat_bitmap_size;
++	unsigned int log_blocks_per_seg;
+ 	int i;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+ 	fsmeta = le32_to_cpu(raw_super->segment_count_ckpt);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_sit);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_nat);
++	sit_segs = le32_to_cpu(raw_super->segment_count_sit);
++	fsmeta += sit_segs;
++	nat_segs = le32_to_cpu(raw_super->segment_count_nat);
++	fsmeta += nat_segs;
+ 	fsmeta += le32_to_cpu(ckpt->rsvd_segment_count);
+ 	fsmeta += le32_to_cpu(raw_super->segment_count_ssa);
+ 
+@@ -2318,6 +2323,18 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 			return 1;
+ 	}
+ 
++	sit_bitmap_size = le32_to_cpu(ckpt->sit_ver_bitmap_bytesize);
++	nat_bitmap_size = le32_to_cpu(ckpt->nat_ver_bitmap_bytesize);
++	log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);
++
++	if (sit_bitmap_size != ((sit_segs / 2) << log_blocks_per_seg) / 8 ||
++		nat_bitmap_size != ((nat_segs / 2) << log_blocks_per_seg) / 8) {
++		f2fs_msg(sbi->sb, KERN_ERR,
++			"Wrong bitmap size: sit: %u, nat:%u",
++			sit_bitmap_size, nat_bitmap_size);
++		return 1;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
+ 		return 1;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 2e7e611deaef..bca1236fd6fa 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -9,6 +9,7 @@
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
++#include <linux/compiler.h>
+ #include <linux/proc_fs.h>
+ #include <linux/f2fs_fs.h>
+ #include <linux/seq_file.h>
+@@ -286,8 +287,10 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+ 	bool gc_entry = (!strcmp(a->attr.name, "gc_urgent") ||
+ 					a->struct_type == GC_THREAD);
+ 
+-	if (gc_entry)
+-		down_read(&sbi->sb->s_umount);
++	if (gc_entry) {
++		if (!down_read_trylock(&sbi->sb->s_umount))
++			return -EAGAIN;
++	}
+ 	ret = __sbi_store(a, sbi, buf, count);
+ 	if (gc_entry)
+ 		up_read(&sbi->sb->s_umount);
+@@ -516,7 +519,8 @@ static struct kobject f2fs_feat = {
+ 	.kset	= &f2fs_kset,
+ };
+ 
+-static int segment_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_info_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -543,7 +547,8 @@ static int segment_info_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int segment_bits_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_bits_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -567,7 +572,8 @@ static int segment_bits_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int iostat_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused iostat_info_seq_show(struct seq_file *seq,
++					       void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 5d57e818d0c3..6d049dfddb14 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -215,9 +215,9 @@ static u32 pnfs_check_callback_stateid(struct pnfs_layout_hdr *lo,
+ {
+ 	u32 oldseq, newseq;
+ 
+-	/* Is the stateid still not initialised? */
++	/* Is the stateid not initialised? */
+ 	if (!pnfs_layout_is_valid(lo))
+-		return NFS4ERR_DELAY;
++		return NFS4ERR_NOMATCHING_LAYOUT;
+ 
+ 	/* Mismatched stateid? */
+ 	if (!nfs4_stateid_match_other(&lo->plh_stateid, new))
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index a813979b5be0..cb905c0e606c 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -883,16 +883,21 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
+ 
+ 	if (hdr_arg.minorversion == 0) {
+ 		cps.clp = nfs4_find_client_ident(SVC_NET(rqstp), hdr_arg.cb_ident);
+-		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp))
++		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp)) {
++			if (cps.clp)
++				nfs_put_client(cps.clp);
+ 			goto out_invalidcred;
++		}
+ 	}
+ 
+ 	cps.minorversion = hdr_arg.minorversion;
+ 	hdr_res.taglen = hdr_arg.taglen;
+ 	hdr_res.tag = hdr_arg.tag;
+-	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
++	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0) {
++		if (cps.clp)
++			nfs_put_client(cps.clp);
+ 		return rpc_system_err;
+-
++	}
+ 	while (status == 0 && nops != hdr_arg.nops) {
+ 		status = process_op(nops, rqstp, &xdr_in,
+ 				    rqstp->rq_argp, &xdr_out, rqstp->rq_resp,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 979631411a0e..d7124fb12041 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1127,7 +1127,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	nfs_server_copy_userdata(server, parent_server);
+ 
+ 	/* Get a client representation */
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ 	rpc_set_port(data->addr, NFS_RDMA_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+ 				data->addr,
+@@ -1139,7 +1139,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 				parent_client->cl_net);
+ 	if (!error)
+ 		goto init_server;
+-#endif	/* CONFIG_SUNRPC_XPRT_RDMA */
++#endif	/* IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA) */
+ 
+ 	rpc_set_port(data->addr, NFS_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+@@ -1153,7 +1153,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	if (error < 0)
+ 		goto error;
+ 
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ init_server:
+ #endif
+ 	error = nfs_init_server_rpcclient(server, parent_server->client->cl_timeout, data->authflavor);
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 773bcb1d4044..5482dd6ae9ef 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -520,6 +520,7 @@ struct hid_input {
+ 	const char *name;
+ 	bool registered;
+ 	struct list_head reports;	/* the list of reports */
++	unsigned int application;	/* application usage for this input */
+ };
+ 
+ enum hid_type {
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 22651e124071..a590419e46c5 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -340,7 +340,7 @@ struct kioctx_table;
+ struct mm_struct {
+ 	struct vm_area_struct *mmap;		/* list of VMAs */
+ 	struct rb_root mm_rb;
+-	u32 vmacache_seqnum;                   /* per-thread vmacache */
++	u64 vmacache_seqnum;                   /* per-thread vmacache */
+ #ifdef CONFIG_MMU
+ 	unsigned long (*get_unmapped_area) (struct file *filp,
+ 				unsigned long addr, unsigned long len,
+diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
+index 5fe87687664c..d7016dcb245e 100644
+--- a/include/linux/mm_types_task.h
++++ b/include/linux/mm_types_task.h
+@@ -32,7 +32,7 @@
+ #define VMACACHE_MASK (VMACACHE_SIZE - 1)
+ 
+ struct vmacache {
+-	u32 seqnum;
++	u64 seqnum;
+ 	struct vm_area_struct *vmas[VMACACHE_SIZE];
+ };
+ 
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 3e8ec3b8a39c..87c635d6c773 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -986,14 +986,14 @@ struct nand_subop {
+ 	unsigned int last_instr_end_off;
+ };
+ 
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int op_id);
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int op_id);
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int op_id);
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int op_id);
+ 
+ /**
+  * struct nand_op_parser_addr_constraints - Constraints for address instructions
+diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
+index 5c7f010676a7..47a3441cf4c4 100644
+--- a/include/linux/vm_event_item.h
++++ b/include/linux/vm_event_item.h
+@@ -105,7 +105,6 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 		VMACACHE_FIND_CALLS,
+ 		VMACACHE_FIND_HITS,
+-		VMACACHE_FULL_FLUSHES,
+ #endif
+ #ifdef CONFIG_SWAP
+ 		SWAP_RA,
+diff --git a/include/linux/vmacache.h b/include/linux/vmacache.h
+index a5b3aa8d281f..a09b28f76460 100644
+--- a/include/linux/vmacache.h
++++ b/include/linux/vmacache.h
+@@ -16,7 +16,6 @@ static inline void vmacache_flush(struct task_struct *tsk)
+ 	memset(tsk->vmacache.vmas, 0, sizeof(tsk->vmacache.vmas));
+ }
+ 
+-extern void vmacache_flush_all(struct mm_struct *mm);
+ extern void vmacache_update(unsigned long addr, struct vm_area_struct *newvma);
+ extern struct vm_area_struct *vmacache_find(struct mm_struct *mm,
+ 						    unsigned long addr);
+@@ -30,10 +29,6 @@ extern struct vm_area_struct *vmacache_find_exact(struct mm_struct *mm,
+ static inline void vmacache_invalidate(struct mm_struct *mm)
+ {
+ 	mm->vmacache_seqnum++;
+-
+-	/* deal with overflows */
+-	if (unlikely(mm->vmacache_seqnum == 0))
+-		vmacache_flush_all(mm);
+ }
+ 
+ #endif /* __LINUX_VMACACHE_H */
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index 7363f18e65a5..813282cc8af6 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -902,13 +902,13 @@ struct ethtool_rx_flow_spec {
+ static inline __u64 ethtool_get_flow_spec_ring(__u64 ring_cookie)
+ {
+ 	return ETHTOOL_RX_FLOW_SPEC_RING & ring_cookie;
+-};
++}
+ 
+ static inline __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
+ {
+ 	return (ETHTOOL_RX_FLOW_SPEC_RING_VF & ring_cookie) >>
+ 				ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+-};
++}
+ 
+ /**
+  * struct ethtool_rxnfc - command to get or set RX flow classification rules
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index f80afc674f02..517907b082df 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -608,15 +608,15 @@ static void cpuhp_thread_fun(unsigned int cpu)
+ 	bool bringup = st->bringup;
+ 	enum cpuhp_state state;
+ 
++	if (WARN_ON_ONCE(!st->should_run))
++		return;
++
+ 	/*
+ 	 * ACQUIRE for the cpuhp_should_run() load of ->should_run. Ensures
+ 	 * that if we see ->should_run we also see the rest of the state.
+ 	 */
+ 	smp_mb();
+ 
+-	if (WARN_ON_ONCE(!st->should_run))
+-		return;
+-
+ 	cpuhp_lock_acquire(bringup);
+ 
+ 	if (st->single) {
+@@ -928,7 +928,8 @@ static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ 		if (ret) {
+ 			st->target = prev_state;
+-			undo_cpu_down(cpu, st);
++			if (st->state < prev_state)
++				undo_cpu_down(cpu, st);
+ 			break;
+ 		}
+ 	}
+@@ -981,7 +982,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+ 	 * to do the further cleanups.
+ 	 */
+ 	ret = cpuhp_down_callbacks(cpu, st, target);
+-	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
++	if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+ 		cpuhp_reset_state(st, prev_state);
+ 		__cpuhp_kick_ap(st);
+ 	}
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index f89a78e2792b..443941aa784e 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -129,19 +129,40 @@ static void inline clocksource_watchdog_unlock(unsigned long *flags)
+ 	spin_unlock_irqrestore(&watchdog_lock, *flags);
+ }
+ 
++static int clocksource_watchdog_kthread(void *data);
++static void __clocksource_change_rating(struct clocksource *cs, int rating);
++
+ /*
+  * Interval: 0.5sec Threshold: 0.0625s
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+ #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+ 
++static void clocksource_watchdog_work(struct work_struct *work)
++{
++	/*
++	 * We cannot directly run clocksource_watchdog_kthread() here, because
++	 * clocksource_select() calls timekeeping_notify() which uses
++	 * stop_machine(). One cannot use stop_machine() from a workqueue() due
++	 * lock inversions wrt CPU hotplug.
++	 *
++	 * Also, we only ever run this work once or twice during the lifetime
++	 * of the kernel, so there is no point in creating a more permanent
++	 * kthread for this.
++	 *
++	 * If kthread_run fails the next watchdog scan over the
++	 * watchdog_list will find the unstable clock again.
++	 */
++	kthread_run(clocksource_watchdog_kthread, NULL, "kwatchdog");
++}
++
+ static void __clocksource_unstable(struct clocksource *cs)
+ {
+ 	cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG);
+ 	cs->flags |= CLOCK_SOURCE_UNSTABLE;
+ 
+ 	/*
+-	 * If the clocksource is registered clocksource_watchdog_work() will
++	 * If the clocksource is registered clocksource_watchdog_kthread() will
+ 	 * re-rate and re-select.
+ 	 */
+ 	if (list_empty(&cs->list)) {
+@@ -152,7 +173,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+ 	if (cs->mark_unstable)
+ 		cs->mark_unstable(cs);
+ 
+-	/* kick clocksource_watchdog_work() */
++	/* kick clocksource_watchdog_kthread() */
+ 	if (finished_booting)
+ 		schedule_work(&watchdog_work);
+ }
+@@ -162,7 +183,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+  * @cs:		clocksource to be marked unstable
+  *
+  * This function is called by the x86 TSC code to mark clocksources as unstable;
+- * it defers demotion and re-selection to a work.
++ * it defers demotion and re-selection to a kthread.
+  */
+ void clocksource_mark_unstable(struct clocksource *cs)
+ {
+@@ -387,9 +408,7 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs)
+ 	}
+ }
+ 
+-static void __clocksource_change_rating(struct clocksource *cs, int rating);
+-
+-static int __clocksource_watchdog_work(void)
++static int __clocksource_watchdog_kthread(void)
+ {
+ 	struct clocksource *cs, *tmp;
+ 	unsigned long flags;
+@@ -414,12 +433,13 @@ static int __clocksource_watchdog_work(void)
+ 	return select;
+ }
+ 
+-static void clocksource_watchdog_work(struct work_struct *work)
++static int clocksource_watchdog_kthread(void *data)
+ {
+ 	mutex_lock(&clocksource_mutex);
+-	if (__clocksource_watchdog_work())
++	if (__clocksource_watchdog_kthread())
+ 		clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
++	return 0;
+ }
+ 
+ static bool clocksource_is_watchdog(struct clocksource *cs)
+@@ -438,7 +458,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
+ static void clocksource_select_watchdog(bool fallback) { }
+ static inline void clocksource_dequeue_watchdog(struct clocksource *cs) { }
+ static inline void clocksource_resume_watchdog(void) { }
+-static inline int __clocksource_watchdog_work(void) { return 0; }
++static inline int __clocksource_watchdog_kthread(void) { return 0; }
+ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
+ void clocksource_mark_unstable(struct clocksource *cs) { }
+ 
+@@ -672,7 +692,7 @@ static int __init clocksource_done_booting(void)
+ 	/*
+ 	 * Run the watchdog first to eliminate unstable clock sources
+ 	 */
+-	__clocksource_watchdog_work();
++	__clocksource_watchdog_kthread();
+ 	clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
+ 	return 0;
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index cc2d23e6ff61..786f8c014e7e 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1657,6 +1657,22 @@ static inline void __run_timers(struct timer_base *base)
+ 
+ 	raw_spin_lock_irq(&base->lock);
+ 
++	/*
++	 * timer_base::must_forward_clk must be cleared before running
++	 * timers so that any timer functions that call mod_timer() will
++	 * not try to forward the base. Idle tracking / clock forwarding
++	 * logic is only used with BASE_STD timers.
++	 *
++	 * The must_forward_clk flag is cleared unconditionally also for
++	 * the deferrable base. The deferrable base is not affected by idle
++	 * tracking and never forwarded, so clearing the flag is a NOOP.
++	 *
++	 * The fact that the deferrable base is never forwarded can cause
++	 * large variations in granularity for deferrable timers, but they
++	 * can be deferred for long periods due to idle anyway.
++	 */
++	base->must_forward_clk = false;
++
+ 	while (time_after_eq(jiffies, base->clk)) {
+ 
+ 		levels = collect_expired_timers(base, heads);
+@@ -1676,19 +1692,6 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h)
+ {
+ 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ 
+-	/*
+-	 * must_forward_clk must be cleared before running timers so that any
+-	 * timer functions that call mod_timer will not try to forward the
+-	 * base. idle trcking / clock forwarding logic is only used with
+-	 * BASE_STD timers.
+-	 *
+-	 * The deferrable base does not do idle tracking at all, so we do
+-	 * not forward it. This can result in very large variations in
+-	 * granularity for deferrable timers, but they can be deferred for
+-	 * long periods due to idle.
+-	 */
+-	base->must_forward_clk = false;
+-
+ 	__run_timers(base);
+ 	if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
+ 		__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
+diff --git a/mm/debug.c b/mm/debug.c
+index 38c926520c97..bd10aad8539a 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -114,7 +114,7 @@ EXPORT_SYMBOL(dump_vma);
+ 
+ void dump_mm(const struct mm_struct *mm)
+ {
+-	pr_emerg("mm %px mmap %px seqnum %d task_size %lu\n"
++	pr_emerg("mm %px mmap %px seqnum %llu task_size %lu\n"
+ #ifdef CONFIG_MMU
+ 		"get_unmapped_area %px\n"
+ #endif
+@@ -142,7 +142,7 @@ void dump_mm(const struct mm_struct *mm)
+ 		"tlb_flush_pending %d\n"
+ 		"def_flags: %#lx(%pGv)\n",
+ 
+-		mm, mm->mmap, mm->vmacache_seqnum, mm->task_size,
++		mm, mm->mmap, (long long) mm->vmacache_seqnum, mm->task_size,
+ #ifdef CONFIG_MMU
+ 		mm->get_unmapped_area,
+ #endif
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 7deb49f69e27..785252397e35 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1341,7 +1341,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
+ 			if (__PageMovable(page))
+ 				return pfn;
+ 			if (PageHuge(page)) {
+-				if (page_huge_active(page))
++				if (hugepage_migration_supported(page_hstate(page)) &&
++				    page_huge_active(page))
+ 					return pfn;
+ 				else
+ 					pfn = round_up(pfn + 1,
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3222193c46c6..65f2e6481c99 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7649,6 +7649,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ 		 * handle each tail page individually in migration.
+ 		 */
+ 		if (PageHuge(page)) {
++
++			if (!hugepage_migration_supported(page_hstate(page)))
++				goto unmovable;
++
+ 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
+ 			continue;
+ 		}
+diff --git a/mm/vmacache.c b/mm/vmacache.c
+index db7596eb6132..f1729617dc85 100644
+--- a/mm/vmacache.c
++++ b/mm/vmacache.c
+@@ -7,44 +7,6 @@
+ #include <linux/mm.h>
+ #include <linux/vmacache.h>
+ 
+-/*
+- * Flush vma caches for threads that share a given mm.
+- *
+- * The operation is safe because the caller holds the mmap_sem
+- * exclusively and other threads accessing the vma cache will
+- * have mmap_sem held at least for read, so no extra locking
+- * is required to maintain the vma cache.
+- */
+-void vmacache_flush_all(struct mm_struct *mm)
+-{
+-	struct task_struct *g, *p;
+-
+-	count_vm_vmacache_event(VMACACHE_FULL_FLUSHES);
+-
+-	/*
+-	 * Single threaded tasks need not iterate the entire
+-	 * list of process. We can avoid the flushing as well
+-	 * since the mm's seqnum was increased and don't have
+-	 * to worry about other threads' seqnum. Current's
+-	 * flush will occur upon the next lookup.
+-	 */
+-	if (atomic_read(&mm->mm_users) == 1)
+-		return;
+-
+-	rcu_read_lock();
+-	for_each_process_thread(g, p) {
+-		/*
+-		 * Only flush the vmacache pointers as the
+-		 * mm seqnum is already set and curr's will
+-		 * be set upon invalidation when the next
+-		 * lookup is done.
+-		 */
+-		if (mm == p->mm)
+-			vmacache_flush(p);
+-	}
+-	rcu_read_unlock();
+-}
+-
+ /*
+  * This task may be accessing a foreign mm via (for example)
+  * get_user_pages()->find_vma().  The vmacache is task-local and this
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 3bba8f4b08a9..253975cce943 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -775,7 +775,7 @@ static int hidp_setup_hid(struct hidp_session *session,
+ 	hid->version = req->version;
+ 	hid->country = req->country;
+ 
+-	strncpy(hid->name, req->name, sizeof(req->name) - 1);
++	strncpy(hid->name, req->name, sizeof(hid->name));
+ 
+ 	snprintf(hid->phys, sizeof(hid->phys), "%pMR",
+ 		 &l2cap_pi(session->ctrl_sock->sk)->chan->src);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index 2589a6b78aa1..013fdb6fa07a 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1786,7 +1786,7 @@ static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app,
+ 		if (itr->app.selector == app->selector &&
+ 		    itr->app.protocol == app->protocol &&
+ 		    itr->ifindex == ifindex &&
+-		    (!prio || itr->app.priority == prio))
++		    ((prio == -1) || itr->app.priority == prio))
+ 			return itr;
+ 	}
+ 
+@@ -1821,7 +1821,8 @@ u8 dcb_getapp(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio = itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+@@ -1849,7 +1850,8 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new)
+ 
+ 	spin_lock_bh(&dcb_lock);
+ 	/* Search for existing match and replace */
+-	if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) {
++	itr = dcb_app_lookup(new, dev->ifindex, -1);
++	if (itr) {
+ 		if (new->priority)
+ 			itr->app.priority = new->priority;
+ 		else {
+@@ -1882,7 +1884,8 @@ u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio |= 1 << itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 932985ca4e66..3f80a5ca4050 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1612,6 +1612,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ 	 */
+ 	if (!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS) &&
+ 	    !ieee80211_has_morefrags(hdr->frame_control) &&
++	    !is_multicast_ether_addr(hdr->addr1) &&
+ 	    (ieee80211_is_mgmt(hdr->frame_control) ||
+ 	     ieee80211_is_data(hdr->frame_control)) &&
+ 	    !(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 20a171ac4bb2..16849969c138 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3910,7 +3910,8 @@ void snd_hda_bus_reset_codecs(struct hda_bus *bus)
+ 
+ 	list_for_each_codec(codec, bus) {
+ 		/* FIXME: maybe a better way needed for forced reset */
+-		cancel_delayed_work_sync(&codec->jackpoll_work);
++		if (current_work() != &codec->jackpoll_work.work)
++			cancel_delayed_work_sync(&codec->jackpoll_work);
+ #ifdef CONFIG_PM
+ 		if (hda_codec_is_power_on(codec)) {
+ 			hda_call_codec_suspend(codec);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f6af3e1c2b93..d14b05f68d6d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6530,6 +6530,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5feae9666822..55d6c9488d8e 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1165,6 +1165,9 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 	snd_pcm_sframes_t codec_delay = 0;
+ 	int i;
+ 
++	/* clearing the previous total delay */
++	runtime->delay = 0;
++
+ 	for_each_rtdcom(rtd, rtdcom) {
+ 		component = rtdcom->component;
+ 
+@@ -1176,6 +1179,8 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 		offset = component->driver->ops->pointer(substream);
+ 		break;
+ 	}
++	/* base delay if assigned in pointer callback */
++	delay = runtime->delay;
+ 
+ 	if (cpu_dai->driver->ops->delay)
+ 		delay += cpu_dai->driver->ops->delay(substream, cpu_dai);
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index f5a3b402589e..67b042738ed7 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -905,8 +905,8 @@ bindir = $(abspath $(prefix)/$(bindir_relative))
+ mandir = share/man
+ infodir = share/info
+ perfexecdir = libexec/perf-core
+-perf_include_dir = lib/include/perf
+-perf_examples_dir = lib/examples/perf
++perf_include_dir = lib/perf/include
++perf_examples_dir = lib/perf/examples
+ sharedir = $(prefix)/share
+ template_dir = share/perf-core/templates
+ STRACE_GROUPS_DIR = share/perf-core/strace/groups
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index 6a8738f7ead3..eab66e3b0a19 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2349,6 +2349,9 @@ static int perf_c2c__browse_cacheline(struct hist_entry *he)
+ 	" s             Toggle full length of symbol and source line columns \n"
+ 	" q             Return back to cacheline list \n";
+ 
++	if (!he)
++		return 0;
++
+ 	/* Display compact version first. */
+ 	c2c.symbol_full = false;
+ 
+diff --git a/tools/perf/perf.h b/tools/perf/perf.h
+index d215714f48df..21bf7f5a3cf5 100644
+--- a/tools/perf/perf.h
++++ b/tools/perf/perf.h
+@@ -25,7 +25,9 @@ static inline unsigned long long rdclock(void)
+ 	return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
+ }
+ 
++#ifndef MAX_NR_CPUS
+ #define MAX_NR_CPUS			1024
++#endif
+ 
+ extern const char *input_name;
+ extern bool perf_host, perf_guest;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 94fce4f537e9..0d5504751cc5 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -848,6 +848,12 @@ static void apply_config_terms(struct perf_evsel *evsel,
+ 	}
+ }
+ 
++static bool is_dummy_event(struct perf_evsel *evsel)
++{
++	return (evsel->attr.type == PERF_TYPE_SOFTWARE) &&
++	       (evsel->attr.config == PERF_COUNT_SW_DUMMY);
++}
++
+ /*
+  * The enable_on_exec/disabled value strategy:
+  *
+@@ -1086,6 +1092,14 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		else
+ 			perf_evsel__reset_sample_bit(evsel, PERIOD);
+ 	}
++
++	/*
++	 * For initial_delay, a dummy event is added implicitly.
++	 * The software event will trigger -EOPNOTSUPP error out,
++	 * if BRANCH_STACK bit is set.
++	 */
++	if (opts->initial_delay && is_dummy_event(evsel))
++		perf_evsel__reset_sample_bit(evsel, BRANCH_STACK);
+ }
+ 
+ static int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
+diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c
+index b53596ad601b..2e7fd8227969 100644
+--- a/tools/testing/nvdimm/pmem-dax.c
++++ b/tools/testing/nvdimm/pmem-dax.c
+@@ -31,17 +31,21 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
+ 	if (get_nfit_res(pmem->phys_addr + offset)) {
+ 		struct page *page;
+ 
+-		*kaddr = pmem->virt_addr + offset;
++		if (kaddr)
++			*kaddr = pmem->virt_addr + offset;
+ 		page = vmalloc_to_page(pmem->virt_addr + offset);
+-		*pfn = page_to_pfn_t(page);
++		if (pfn)
++			*pfn = page_to_pfn_t(page);
+ 		pr_debug_ratelimited("%s: pmem: %p pgoff: %#lx pfn: %#lx\n",
+ 				__func__, pmem, pgoff, page_to_pfn(page));
+ 
+ 		return 1;
+ 	}
+ 
+-	*kaddr = pmem->virt_addr + offset;
+-	*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
++	if (kaddr)
++		*kaddr = pmem->virt_addr + offset;
++	if (pfn)
++		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+ 
+ 	/*
+ 	 * If badblocks are present, limit known good range to the
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 41106d9d5cc7..f9c856c8e472 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -6997,7 +6997,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7020,7 +7020,7 @@ static struct bpf_test tests[] = {
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7042,7 +7042,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+index 70952bd98ff9..13147a1f5731 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+@@ -17,7 +17,7 @@
+         "cmdUnderTest": "$TC actions add action connmark",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -41,7 +41,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pass index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pass.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pass.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -65,7 +65,7 @@
+         "cmdUnderTest": "$TC actions add action connmark drop index 100",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 100",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 drop.*index 100 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 drop.*index 100 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -89,7 +89,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pipe index 455",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 455",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe.*index 455 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe.*index 455 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -113,7 +113,7 @@
+         "cmdUnderTest": "$TC actions add action connmark reclassify index 7",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 reclassify.*index 7 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 reclassify.*index 7 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -137,7 +137,7 @@
+         "cmdUnderTest": "$TC actions add action connmark continue index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 continue.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 continue.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -161,7 +161,7 @@
+         "cmdUnderTest": "$TC actions add action connmark jump 10 index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 jump 10.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 jump 10.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -185,7 +185,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 100 pipe index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 100 pipe.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 100 pipe.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -209,7 +209,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 65536 reclassify index 21",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 65536 reclassify.*index 21 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 65536 reclassify.*index 21 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -233,7 +233,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 655 unsupp_arg pass index 2",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 2",
+-        "matchPattern": "action order [0-9]+:  connmark zone 655 unsupp_arg pass.*index 2 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 655 unsupp_arg pass.*index 2 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -258,7 +258,7 @@
+         "cmdUnderTest": "$TC actions replace action connmark zone 555 reclassify index 555",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 555",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 reclassify.*index 555 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 555 reclassify.*index 555 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -282,7 +282,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 555 pipe index 5 cookie aabbccddeeff112233445566778800a1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 5",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
++        "matchPattern": "action order [0-9]+: connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+index 6e4edfae1799..db49fd0f8445 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+@@ -44,7 +44,8 @@
+         "matchPattern": "action order [0-9]*: mirred \\(Egress Redirect to device lo\\).*index 2 ref",
+         "matchCount": "1",
+         "teardown": [
+-            "$TC actions flush action mirred"
++            "$TC actions flush action mirred",
++            "$TC actions flush action gact"
+         ]
+     },
+     {
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index c2b95a22959b..fd8c88463928 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1831,13 +1831,20 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data
+ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+ {
+ 	unsigned long end = hva + PAGE_SIZE;
++	kvm_pfn_t pfn = pte_pfn(pte);
+ 	pte_t stage2_pte;
+ 
+ 	if (!kvm->arch.pgd)
+ 		return;
+ 
+ 	trace_kvm_set_spte_hva(hva);
+-	stage2_pte = pfn_pte(pte_pfn(pte), PAGE_S2);
++
++	/*
++	 * We've moved a page around, probably through CoW, so let's treat it
++	 * just like a translation fault and clean the cache to the PoC.
++	 */
++	clean_dcache_guest_page(pfn, PAGE_SIZE);
++	stage2_pte = pfn_pte(pfn, PAGE_S2);
+ 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte);
+ }
+ 


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     d818ac781d276bcead7d4e53d0783e39ffca6efc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 16 11:45:09 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:21 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d818ac78

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 +++
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 ++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/0000_README b/0000_README
index cf32ff2..ad4a3ed 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
+From:   http://www.kernel.org
+Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
new file mode 100644
index 0000000..88c2ec6
--- /dev/null
+++ b/1700_x86-l1tf-config-kvm-build-error-fix.patch
@@ -0,0 +1,40 @@
+From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
+From: Guenter Roeck <linux@roeck-us.net>
+Date: Wed, 15 Aug 2018 08:38:33 -0700
+Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
+From: Guenter Roeck <linux@roeck-us.net>
+
+commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
+
+allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
+
+  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
+
+Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
+Reported-by: Meelis Roos <mroos@linux.ee>
+Cc: Meelis Roos <mroos@linux.ee>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/bugs.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     a593437457757a3aee0bc1ec58a17bfd226271e3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 26 10:40:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:25 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a5934374

Linux patch 4.18.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1009_linux-4.18.10.patch | 6974 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6978 insertions(+)

diff --git a/0000_README b/0000_README
index 6534d27..a9e2bd7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-4.18.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.9
 
+Patch:  1009_linux-4.18.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-4.18.10.patch b/1009_linux-4.18.10.patch
new file mode 100644
index 0000000..16ee162
--- /dev/null
+++ b/1009_linux-4.18.10.patch
@@ -0,0 +1,6974 @@
+diff --git a/Makefile b/Makefile
+index 1178348fb9ca..ffab15235ff0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -225,10 +225,12 @@ no-dot-config-targets := $(clean-targets) \
+ 			 cscope gtags TAGS tags help% %docs check% coccicheck \
+ 			 $(version_h) headers_% archheaders archscripts \
+ 			 kernelversion %src-pkg
++no-sync-config-targets := $(no-dot-config-targets) install %install
+ 
+-config-targets := 0
+-mixed-targets  := 0
+-dot-config     := 1
++config-targets  := 0
++mixed-targets   := 0
++dot-config      := 1
++may-sync-config := 1
+ 
+ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
+@@ -236,6 +238,16 @@ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	endif
+ endif
+ 
++ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),)
++	ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
++		may-sync-config := 0
++	endif
++endif
++
++ifneq ($(KBUILD_EXTMOD),)
++	may-sync-config := 0
++endif
++
+ ifeq ($(KBUILD_EXTMOD),)
+         ifneq ($(filter config %config,$(MAKECMDGOALS)),)
+                 config-targets := 1
+@@ -610,7 +622,7 @@ ARCH_CFLAGS :=
+ include arch/$(SRCARCH)/Makefile
+ 
+ ifeq ($(dot-config),1)
+-ifeq ($(KBUILD_EXTMOD),)
++ifeq ($(may-sync-config),1)
+ # Read in dependencies to all Kconfig* files, make sure to run syncconfig if
+ # changes are detected. This should be included after arch/$(SRCARCH)/Makefile
+ # because some architectures define CROSS_COMPILE there.
+@@ -625,8 +637,9 @@ $(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
+ include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+-# external modules needs include/generated/autoconf.h and include/config/auto.conf
+-# but do not care if they are up-to-date. Use auto.conf to trigger the test
++# External modules and some install targets need include/generated/autoconf.h
++# and include/config/auto.conf but do not care if they are up-to-date.
++# Use auto.conf to trigger the test
+ PHONY += include/config/auto.conf
+ 
+ include/config/auto.conf:
+@@ -638,7 +651,7 @@ include/config/auto.conf:
+ 	echo >&2 ;							\
+ 	/bin/false)
+ 
+-endif # KBUILD_EXTMOD
++endif # may-sync-config
+ 
+ else
+ # Dummy target needed, because used as prerequisite
+diff --git a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+index 4dc0b347b1ee..c2dc9d09484a 100644
+--- a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
++++ b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+@@ -189,6 +189,8 @@
+ 						regulator-max-microvolt = <2950000>;
+ 
+ 						regulator-boot-on;
++						regulator-system-load = <200000>;
++						regulator-allow-set-load;
+ 					};
+ 
+ 					l21 {
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index d3db306a5a70..941b0ffd9806 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -203,6 +203,7 @@ static int __init exynos_pmu_irq_init(struct device_node *node,
+ 					  NULL);
+ 	if (!domain) {
+ 		iounmap(pmu_base_addr);
++		pmu_base_addr = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/arch/arm/mach-hisi/hotplug.c b/arch/arm/mach-hisi/hotplug.c
+index a129aae72602..909bb2493781 100644
+--- a/arch/arm/mach-hisi/hotplug.c
++++ b/arch/arm/mach-hisi/hotplug.c
+@@ -148,13 +148,20 @@ static int hi3xxx_hotplug_init(void)
+ 	struct device_node *node;
+ 
+ 	node = of_find_compatible_node(NULL, NULL, "hisilicon,sysctrl");
+-	if (node) {
+-		ctrl_base = of_iomap(node, 0);
+-		id = HI3620_CTRL;
+-		return 0;
++	if (!node) {
++		id = ERROR_CTRL;
++		return -ENOENT;
+ 	}
+-	id = ERROR_CTRL;
+-	return -ENOENT;
++
++	ctrl_base = of_iomap(node, 0);
++	of_node_put(node);
++	if (!ctrl_base) {
++		id = ERROR_CTRL;
++		return -ENOMEM;
++	}
++
++	id = HI3620_CTRL;
++	return 0;
+ }
+ 
+ void hi3xxx_set_cpu(int cpu, bool enable)
+@@ -173,11 +180,15 @@ static bool hix5hd2_hotplug_init(void)
+ 	struct device_node *np;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "hisilicon,cpuctrl");
+-	if (np) {
+-		ctrl_base = of_iomap(np, 0);
+-		return true;
+-	}
+-	return false;
++	if (!np)
++		return false;
++
++	ctrl_base = of_iomap(np, 0);
++	of_node_put(np);
++	if (!ctrl_base)
++		return false;
++
++	return true;
+ }
+ 
+ void hix5hd2_set_cpu(int cpu, bool enable)
+@@ -219,10 +230,10 @@ void hip01_set_cpu(int cpu, bool enable)
+ 
+ 	if (!ctrl_base) {
+ 		np = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl");
+-		if (np)
+-			ctrl_base = of_iomap(np, 0);
+-		else
+-			BUG();
++		BUG_ON(!np);
++		ctrl_base = of_iomap(np, 0);
++		of_node_put(np);
++		BUG_ON(!ctrl_base);
+ 	}
+ 
+ 	if (enable) {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 9213c966c224..ec7ea8dca777 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0 0x11002000 0 0x400>;
+ 		interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
+ 		clocks = <&topckgen CLK_TOP_UART_SEL>,
+-			 <&pericfg CLK_PERI_UART1_PD>;
++			 <&pericfg CLK_PERI_UART0_PD>;
+ 		clock-names = "baud", "bus";
+ 		status = "disabled";
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+index 9ff848792712..78ce3979ef09 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+@@ -338,7 +338,7 @@
+ 			led@6 {
+ 				label = "apq8016-sbc:blue:bt";
+ 				gpios = <&pm8916_mpps 3 GPIO_ACTIVE_HIGH>;
+-				linux,default-trigger = "bt";
++				linux,default-trigger = "bluetooth-power";
+ 				default-state = "off";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index 0298bd0d0e1a..caf112629caa 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -58,6 +58,7 @@
+ 			clocks = <&sys_clk 32>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster0_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 
+ 		cpu2: cpu@100 {
+@@ -77,6 +78,7 @@
+ 			clocks = <&sys_clk 33>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster1_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 33147aacdafd..dd5b4fab114f 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
+ 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+ }
+ 
++static void armv8pmu_start(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Enable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
++static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Disable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
+ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ {
+ 	u32 pmovsr;
+@@ -694,6 +716,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	 */
+ 	regs = get_irq_regs();
+ 
++	/*
++	 * Stop the PMU while processing the counter overflows
++	 * to prevent skews in group events.
++	 */
++	armv8pmu_stop(cpu_pmu);
+ 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 		struct hw_perf_event *hwc;
+@@ -718,6 +745,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 		if (perf_event_overflow(event, &data, regs))
+ 			cpu_pmu->disable(event);
+ 	}
++	armv8pmu_start(cpu_pmu);
+ 
+ 	/*
+ 	 * Handle the pending perf events.
+@@ -731,28 +759,6 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Enable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Disable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
+ 				  struct perf_event *event)
+ {
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 5c338ce5a7fa..db5440339ab3 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -277,19 +277,22 @@ static int ptrace_hbp_set_event(unsigned int note_type,
+ 
+ 	switch (note_type) {
+ 	case NT_ARM_HW_BREAK:
+-		if (idx < ARM_MAX_BRP) {
+-			tsk->thread.debug.hbp_break[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_BRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_BRP);
++		tsk->thread.debug.hbp_break[idx] = bp;
++		err = 0;
+ 		break;
+ 	case NT_ARM_HW_WATCH:
+-		if (idx < ARM_MAX_WRP) {
+-			tsk->thread.debug.hbp_watch[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_WRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_WRP);
++		tsk->thread.debug.hbp_watch[idx] = bp;
++		err = 0;
+ 		break;
+ 	}
+ 
++out:
+ 	return err;
+ }
+ 
+diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
+index f206dafbb0a3..26a058d58d37 100644
+--- a/arch/mips/ath79/setup.c
++++ b/arch/mips/ath79/setup.c
+@@ -40,6 +40,7 @@ static char ath79_sys_type[ATH79_SYS_TYPE_LEN];
+ 
+ static void ath79_restart(char *command)
+ {
++	local_irq_disable();
+ 	ath79_device_reset_set(AR71XX_RESET_FULL_CHIP);
+ 	for (;;)
+ 		if (cpu_wait)
+diff --git a/arch/mips/include/asm/mach-ath79/ath79.h b/arch/mips/include/asm/mach-ath79/ath79.h
+index 441faa92c3cd..6e6c0fead776 100644
+--- a/arch/mips/include/asm/mach-ath79/ath79.h
++++ b/arch/mips/include/asm/mach-ath79/ath79.h
+@@ -134,6 +134,7 @@ static inline u32 ath79_pll_rr(unsigned reg)
+ static inline void ath79_reset_wr(unsigned reg, u32 val)
+ {
+ 	__raw_writel(val, ath79_reset_base + reg);
++	(void) __raw_readl(ath79_reset_base + reg); /* flush */
+ }
+ 
+ static inline u32 ath79_reset_rr(unsigned reg)
+diff --git a/arch/mips/jz4740/Platform b/arch/mips/jz4740/Platform
+index 28448d358c10..a2a5a85ea1f9 100644
+--- a/arch/mips/jz4740/Platform
++++ b/arch/mips/jz4740/Platform
+@@ -1,4 +1,4 @@
+ platform-$(CONFIG_MACH_INGENIC)	+= jz4740/
+ cflags-$(CONFIG_MACH_INGENIC)	+= -I$(srctree)/arch/mips/include/asm/mach-jz4740
+ load-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80010000
+-zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80600000
++zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff81000000
+diff --git a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+index f7c905e50dc4..92dc6bafc127 100644
+--- a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
++++ b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+@@ -138,7 +138,7 @@ u32 pci_ohci_read_reg(int reg)
+ 		break;
+ 	case PCI_OHCI_INT_REG:
+ 		_rdmsr(DIVIL_MSR_REG(PIC_YSEL_LOW), &hi, &lo);
+-		if ((lo & 0x00000f00) == CS5536_USB_INTR)
++		if (((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR)
+ 			conf_data = 1;
+ 		break;
+ 	default:
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 8c456fa691a5..8167ce8e0cdd 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -180,7 +180,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 		if ((tbltmp->it_page_shift <= stt->page_shift) &&
+ 				(tbltmp->it_offset << tbltmp->it_page_shift ==
+ 				 stt->offset << stt->page_shift) &&
+-				(tbltmp->it_size << tbltmp->it_page_shift ==
++				(tbltmp->it_size << tbltmp->it_page_shift >=
+ 				 stt->size << stt->page_shift)) {
+ 			/*
+ 			 * Reference the table to avoid races with
+@@ -296,7 +296,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ {
+ 	struct kvmppc_spapr_tce_table *stt = NULL;
+ 	struct kvmppc_spapr_tce_table *siter;
+-	unsigned long npages, size;
++	unsigned long npages, size = args->size;
+ 	int ret = -ENOMEM;
+ 	int i;
+ 
+@@ -304,7 +304,6 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ 		(args->offset + args->size > (ULLONG_MAX >> args->page_shift)))
+ 		return -EINVAL;
+ 
+-	size = _ALIGN_UP(args->size, PAGE_SIZE >> 3);
+ 	npages = kvmppc_tce_pages(size);
+ 	ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true);
+ 	if (ret)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index a995513573c2..2ebd5132a29f 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4562,6 +4562,8 @@ static int kvmppc_book3s_init_hv(void)
+ 			pr_err("KVM-HV: Cannot determine method for accessing XICS\n");
+ 			return -ENODEV;
+ 		}
++		/* presence of intc confirmed - node can be dropped again */
++		of_node_put(np);
+ 	}
+ #endif
+ 
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 0d539c661748..371e33ecc547 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -388,7 +388,7 @@ int opal_put_chars(uint32_t vtermno, const char *data, int total_len)
+ 		/* Closed or other error drop */
+ 		if (rc != OPAL_SUCCESS && rc != OPAL_BUSY &&
+ 		    rc != OPAL_BUSY_EVENT) {
+-			written = total_len;
++			written += total_len;
+ 			break;
+ 		}
+ 		if (rc == OPAL_SUCCESS) {
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index 80b27294c1de..ab9a0ebecc19 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -208,7 +208,7 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
+ 			      walk->dst.virt.addr, walk->src.virt.addr, n);
+ 		if (k)
+ 			ret = blkcipher_walk_done(desc, walk, nbytes - k);
+-		if (n < k) {
++		if (k < n) {
+ 			if (__cbc_paes_set_key(ctx) != 0)
+ 				return blkcipher_walk_done(desc, walk, -EIO);
+ 			memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+diff --git a/arch/x86/kernel/eisa.c b/arch/x86/kernel/eisa.c
+index f260e452e4f8..e8c8c5d78dbd 100644
+--- a/arch/x86/kernel/eisa.c
++++ b/arch/x86/kernel/eisa.c
+@@ -7,11 +7,17 @@
+ #include <linux/eisa.h>
+ #include <linux/io.h>
+ 
++#include <xen/xen.h>
++
+ static __init int eisa_bus_probe(void)
+ {
+-	void __iomem *p = ioremap(0x0FFFD9, 4);
++	void __iomem *p;
++
++	if (xen_pv_domain() && !xen_initial_domain())
++		return 0;
+ 
+-	if (readl(p) == 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24))
++	p = ioremap(0x0FFFD9, 4);
++	if (p && readl(p) == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
+ 		EISA_bus = 1;
+ 	iounmap(p);
+ 	return 0;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 946455e9cfef..1d2106d83b4e 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -177,7 +177,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ 
+ 	if (pgd_none(*pgd)) {
+ 		unsigned long new_p4d_page = __get_free_page(gfp);
+-		if (!new_p4d_page)
++		if (WARN_ON_ONCE(!new_p4d_page))
+ 			return NULL;
+ 
+ 		set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
+@@ -196,13 +196,17 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	p4d_t *p4d = pti_user_pagetable_walk_p4d(address);
++	p4d_t *p4d;
+ 	pud_t *pud;
+ 
++	p4d = pti_user_pagetable_walk_p4d(address);
++	if (!p4d)
++		return NULL;
++
+ 	BUILD_BUG_ON(p4d_large(*p4d) != 0);
+ 	if (p4d_none(*p4d)) {
+ 		unsigned long new_pud_page = __get_free_page(gfp);
+-		if (!new_pud_page)
++		if (WARN_ON_ONCE(!new_pud_page))
+ 			return NULL;
+ 
+ 		set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
+@@ -216,7 +220,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ 	}
+ 	if (pud_none(*pud)) {
+ 		unsigned long new_pmd_page = __get_free_page(gfp);
+-		if (!new_pmd_page)
++		if (WARN_ON_ONCE(!new_pmd_page))
+ 			return NULL;
+ 
+ 		set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+@@ -238,9 +242,13 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	pmd_t *pmd = pti_user_pagetable_walk_pmd(address);
++	pmd_t *pmd;
+ 	pte_t *pte;
+ 
++	pmd = pti_user_pagetable_walk_pmd(address);
++	if (!pmd)
++		return NULL;
++
+ 	/* We can't do anything sensible if we hit a large mapping. */
+ 	if (pmd_large(*pmd)) {
+ 		WARN_ON(1);
+@@ -298,6 +306,10 @@ pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
+ 		p4d_t *p4d;
+ 		pud_t *pud;
+ 
++		/* Overflow check */
++		if (addr < start)
++			break;
++
+ 		pgd = pgd_offset_k(addr);
+ 		if (WARN_ON(pgd_none(*pgd)))
+ 			return;
+@@ -355,6 +367,9 @@ static void __init pti_clone_p4d(unsigned long addr)
+ 	pgd_t *kernel_pgd;
+ 
+ 	user_p4d = pti_user_pagetable_walk_p4d(addr);
++	if (!user_p4d)
++		return;
++
+ 	kernel_pgd = pgd_offset_k(addr);
+ 	kernel_p4d = p4d_offset(kernel_pgd, addr);
+ 	*user_p4d = *kernel_p4d;
+diff --git a/arch/xtensa/platforms/iss/setup.c b/arch/xtensa/platforms/iss/setup.c
+index f4bbb28026f8..58709e89a8ed 100644
+--- a/arch/xtensa/platforms/iss/setup.c
++++ b/arch/xtensa/platforms/iss/setup.c
+@@ -78,23 +78,28 @@ static struct notifier_block iss_panic_block = {
+ 
+ void __init platform_setup(char **p_cmdline)
+ {
++	static void *argv[COMMAND_LINE_SIZE / sizeof(void *)] __initdata;
++	static char cmdline[COMMAND_LINE_SIZE] __initdata;
+ 	int argc = simc_argc();
+ 	int argv_size = simc_argv_size();
+ 
+ 	if (argc > 1) {
+-		void **argv = alloc_bootmem(argv_size);
+-		char *cmdline = alloc_bootmem(argv_size);
+-		int i;
++		if (argv_size > sizeof(argv)) {
++			pr_err("%s: command line too long: argv_size = %d\n",
++			       __func__, argv_size);
++		} else {
++			int i;
+ 
+-		cmdline[0] = 0;
+-		simc_argv((void *)argv);
++			cmdline[0] = 0;
++			simc_argv((void *)argv);
+ 
+-		for (i = 1; i < argc; ++i) {
+-			if (i > 1)
+-				strcat(cmdline, " ");
+-			strcat(cmdline, argv[i]);
++			for (i = 1; i < argc; ++i) {
++				if (i > 1)
++					strcat(cmdline, " ");
++				strcat(cmdline, argv[i]);
++			}
++			*p_cmdline = cmdline;
+ 		}
+-		*p_cmdline = cmdline;
+ 	}
+ 
+ 	atomic_notifier_chain_register(&panic_notifier_list, &iss_panic_block);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index cbaca5a73f2e..f9d2e1b66e05 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -791,9 +791,13 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 * make sure all in-progress dispatch are completed because
+ 	 * blk_freeze_queue() can only complete all requests, and
+ 	 * dispatch may still be in-progress since we dispatch requests
+-	 * from more than one contexts
++	 * from more than one contexts.
++	 *
++	 * No need to quiesce queue if it isn't initialized yet since
++	 * blk_freeze_queue() should be enough for cases of passthrough
++	 * request.
+ 	 */
+-	if (q->mq_ops)
++	if (q->mq_ops && blk_queue_init_done(q))
+ 		blk_mq_quiesce_queue(q);
+ 
+ 	/* for synchronous bio-based driver finish in-flight integrity i/o */
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 56c493c6cd90..f5745acc2d98 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -339,7 +339,8 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+ 		return e->type->ops.mq.bio_merge(hctx, bio);
+ 	}
+ 
+-	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
++	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
++			!list_empty_careful(&ctx->rq_list)) {
+ 		/* default per sw-queue merge */
+ 		spin_lock(&ctx->lock);
+ 		ret = blk_mq_attempt_merge(q, ctx, bio);
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index d1de71124656..24fff4a3d08a 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -128,7 +128,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
+ 
+ 	/* Inherit limits from component devices */
+ 	lim->max_segments = USHRT_MAX;
+-	lim->max_discard_segments = 1;
++	lim->max_discard_segments = USHRT_MAX;
+ 	lim->max_hw_sectors = UINT_MAX;
+ 	lim->max_segment_size = UINT_MAX;
+ 	lim->max_sectors = UINT_MAX;
+diff --git a/crypto/api.c b/crypto/api.c
+index 0ee632bba064..7aca9f86c5f3 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -229,7 +229,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
+ 	mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
+ 
+ 	alg = crypto_alg_lookup(name, type, mask);
+-	if (!alg) {
++	if (!alg && !(mask & CRYPTO_NOLOAD)) {
+ 		request_module("crypto-%s", name);
+ 
+ 		if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask &
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index df3e1a44707a..3aba4ad8af5c 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2809,6 +2809,9 @@ void device_shutdown(void)
+ {
+ 	struct device *dev, *parent;
+ 
++	wait_for_device_probe();
++	device_block_probing();
++
+ 	spin_lock(&devices_kset->list_lock);
+ 	/*
+ 	 * Walk the devices list backward, shutting down each in turn.
+diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
+index f6518067aa7d..f99e5c883368 100644
+--- a/drivers/block/DAC960.c
++++ b/drivers/block/DAC960.c
+@@ -21,6 +21,7 @@
+ #define DAC960_DriverDate			"21 Aug 2007"
+ 
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/miscdevice.h>
+@@ -6426,7 +6427,7 @@ static bool DAC960_V2_ExecuteUserCommand(DAC960_Controller_T *Controller,
+   return true;
+ }
+ 
+-static int dac960_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_proc_show(struct seq_file *m, void *v)
+ {
+   unsigned char *StatusMessage = "OK\n";
+   int ControllerNumber;
+@@ -6446,14 +6447,16 @@ static int dac960_proc_show(struct seq_file *m, void *v)
+   return 0;
+ }
+ 
+-static int dac960_initial_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_initial_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+ 	DAC960_Controller_T *Controller = (DAC960_Controller_T *)m->private;
+ 	seq_printf(m, "%.*s", Controller->InitialStatusLength, Controller->CombinedStatusBuffer);
+ 	return 0;
+ }
+ 
+-static int dac960_current_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_current_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+   DAC960_Controller_T *Controller = (DAC960_Controller_T *) m->private;
+   unsigned char *StatusMessage =
+diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c
+index a3397664f800..97d6856c9c0f 100644
+--- a/drivers/char/ipmi/ipmi_bt_sm.c
++++ b/drivers/char/ipmi/ipmi_bt_sm.c
+@@ -59,8 +59,6 @@ enum bt_states {
+ 	BT_STATE_RESET3,
+ 	BT_STATE_RESTART,
+ 	BT_STATE_PRINTME,
+-	BT_STATE_CAPABILITIES_BEGIN,
+-	BT_STATE_CAPABILITIES_END,
+ 	BT_STATE_LONG_BUSY	/* BT doesn't get hosed :-) */
+ };
+ 
+@@ -86,7 +84,6 @@ struct si_sm_data {
+ 	int		error_retries;	/* end of "common" fields */
+ 	int		nonzero_status;	/* hung BMCs stay all 0 */
+ 	enum bt_states	complete;	/* to divert the state machine */
+-	int		BT_CAP_outreqs;
+ 	long		BT_CAP_req2rsp;
+ 	int		BT_CAP_retries;	/* Recommended retries */
+ };
+@@ -137,8 +134,6 @@ static char *state2txt(unsigned char state)
+ 	case BT_STATE_RESET3:		return("RESET3");
+ 	case BT_STATE_RESTART:		return("RESTART");
+ 	case BT_STATE_LONG_BUSY:	return("LONG_BUSY");
+-	case BT_STATE_CAPABILITIES_BEGIN: return("CAP_BEGIN");
+-	case BT_STATE_CAPABILITIES_END:	return("CAP_END");
+ 	}
+ 	return("BAD STATE");
+ }
+@@ -185,7 +180,6 @@ static unsigned int bt_init_data(struct si_sm_data *bt, struct si_sm_io *io)
+ 	bt->complete = BT_STATE_IDLE;	/* end here */
+ 	bt->BT_CAP_req2rsp = BT_NORMAL_TIMEOUT * USEC_PER_SEC;
+ 	bt->BT_CAP_retries = BT_NORMAL_RETRY_LIMIT;
+-	/* BT_CAP_outreqs == zero is a flag to read BT Capabilities */
+ 	return 3; /* We claim 3 bytes of space; ought to check SPMI table */
+ }
+ 
+@@ -451,7 +445,7 @@ static enum si_sm_result error_recovery(struct si_sm_data *bt,
+ 
+ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ {
+-	unsigned char status, BT_CAP[8];
++	unsigned char status;
+ 	static enum bt_states last_printed = BT_STATE_PRINTME;
+ 	int i;
+ 
+@@ -504,12 +498,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		if (status & BT_H_BUSY)		/* clear a leftover H_BUSY */
+ 			BT_CONTROL(BT_H_BUSY);
+ 
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-
+-		/* Read BT capabilities if it hasn't been done yet */
+-		if (!bt->BT_CAP_outreqs)
+-			BT_STATE_CHANGE(BT_STATE_CAPABILITIES_BEGIN,
+-					SI_SM_CALL_WITHOUT_DELAY);
+ 		BT_SI_SM_RETURN(SI_SM_IDLE);
+ 
+ 	case BT_STATE_XACTION_START:
+@@ -614,37 +602,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+ 				SI_SM_CALL_WITH_DELAY);
+ 
+-	/*
+-	 * Get BT Capabilities, using timing of upper level state machine.
+-	 * Set outreqs to prevent infinite loop on timeout.
+-	 */
+-	case BT_STATE_CAPABILITIES_BEGIN:
+-		bt->BT_CAP_outreqs = 1;
+-		{
+-			unsigned char GetBT_CAP[] = { 0x18, 0x36 };
+-			bt->state = BT_STATE_IDLE;
+-			bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
+-		}
+-		bt->complete = BT_STATE_CAPABILITIES_END;
+-		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+-				SI_SM_CALL_WITH_DELAY);
+-
+-	case BT_STATE_CAPABILITIES_END:
+-		i = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
+-		bt_init_data(bt, bt->io);
+-		if ((i == 8) && !BT_CAP[2]) {
+-			bt->BT_CAP_outreqs = BT_CAP[3];
+-			bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
+-			bt->BT_CAP_retries = BT_CAP[7];
+-		} else
+-			printk(KERN_WARNING "IPMI BT: using default values\n");
+-		if (!bt->BT_CAP_outreqs)
+-			bt->BT_CAP_outreqs = 1;
+-		printk(KERN_WARNING "IPMI BT: req2rsp=%ld secs retries=%d\n",
+-			bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-		return SI_SM_CALL_WITHOUT_DELAY;
+-
+ 	default:	/* should never occur */
+ 		return error_recovery(bt,
+ 				      status,
+@@ -655,6 +612,11 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 
+ static int bt_detect(struct si_sm_data *bt)
+ {
++	unsigned char GetBT_CAP[] = { 0x18, 0x36 };
++	unsigned char BT_CAP[8];
++	enum si_sm_result smi_result;
++	int rv;
++
+ 	/*
+ 	 * It's impossible for the BT status and interrupt registers to be
+ 	 * all 1's, (assuming a properly functioning, self-initialized BMC)
+@@ -665,6 +627,48 @@ static int bt_detect(struct si_sm_data *bt)
+ 	if ((BT_STATUS == 0xFF) && (BT_INTMASK_R == 0xFF))
+ 		return 1;
+ 	reset_flags(bt);
++
++	/*
++	 * Try getting the BT capabilities here.
++	 */
++	rv = bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
++	if (rv) {
++		dev_warn(bt->io->dev,
++			 "Can't start capabilities transaction: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	smi_result = SI_SM_CALL_WITHOUT_DELAY;
++	for (;;) {
++		if (smi_result == SI_SM_CALL_WITH_DELAY ||
++		    smi_result == SI_SM_CALL_WITH_TICK_DELAY) {
++			schedule_timeout_uninterruptible(1);
++			smi_result = bt_event(bt, jiffies_to_usecs(1));
++		} else if (smi_result == SI_SM_CALL_WITHOUT_DELAY) {
++			smi_result = bt_event(bt, 0);
++		} else
++			break;
++	}
++
++	rv = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
++	bt_init_data(bt, bt->io);
++	if (rv < 8) {
++		dev_warn(bt->io->dev, "bt cap response too short: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	if (BT_CAP[2]) {
++		dev_warn(bt->io->dev, "Error fetching bt cap: %x\n", BT_CAP[2]);
++out_no_bt_cap:
++		dev_warn(bt->io->dev, "using default values\n");
++	} else {
++		bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
++		bt->BT_CAP_retries = BT_CAP[7];
++	}
++
++	dev_info(bt->io->dev, "req2rsp=%ld secs retries=%d\n",
++		 bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 51832b8a2c62..7fc9612070a1 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -3381,39 +3381,45 @@ int ipmi_register_smi(const struct ipmi_smi_handlers *handlers,
+ 
+ 	rv = handlers->start_processing(send_info, intf);
+ 	if (rv)
+-		goto out;
++		goto out_err;
+ 
+ 	rv = __bmc_get_device_id(intf, NULL, &id, NULL, NULL, i);
+ 	if (rv) {
+ 		dev_err(si_dev, "Unable to get the device id: %d\n", rv);
+-		goto out;
++		goto out_err_started;
+ 	}
+ 
+ 	mutex_lock(&intf->bmc_reg_mutex);
+ 	rv = __scan_channels(intf, &id);
+ 	mutex_unlock(&intf->bmc_reg_mutex);
++	if (rv)
++		goto out_err_bmc_reg;
+ 
+- out:
+-	if (rv) {
+-		ipmi_bmc_unregister(intf);
+-		list_del_rcu(&intf->link);
+-		mutex_unlock(&ipmi_interfaces_mutex);
+-		synchronize_srcu(&ipmi_interfaces_srcu);
+-		cleanup_srcu_struct(&intf->users_srcu);
+-		kref_put(&intf->refcount, intf_free);
+-	} else {
+-		/*
+-		 * Keep memory order straight for RCU readers.  Make
+-		 * sure everything else is committed to memory before
+-		 * setting intf_num to mark the interface valid.
+-		 */
+-		smp_wmb();
+-		intf->intf_num = i;
+-		mutex_unlock(&ipmi_interfaces_mutex);
++	/*
++	 * Keep memory order straight for RCU readers.  Make
++	 * sure everything else is committed to memory before
++	 * setting intf_num to mark the interface valid.
++	 */
++	smp_wmb();
++	intf->intf_num = i;
++	mutex_unlock(&ipmi_interfaces_mutex);
+ 
+-		/* After this point the interface is legal to use. */
+-		call_smi_watchers(i, intf->si_dev);
+-	}
++	/* After this point the interface is legal to use. */
++	call_smi_watchers(i, intf->si_dev);
++
++	return 0;
++
++ out_err_bmc_reg:
++	ipmi_bmc_unregister(intf);
++ out_err_started:
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
++ out_err:
++	list_del_rcu(&intf->link);
++	mutex_unlock(&ipmi_interfaces_mutex);
++	synchronize_srcu(&ipmi_interfaces_srcu);
++	cleanup_srcu_struct(&intf->users_srcu);
++	kref_put(&intf->refcount, intf_free);
+ 
+ 	return rv;
+ }
+@@ -3504,7 +3510,8 @@ void ipmi_unregister_smi(struct ipmi_smi *intf)
+ 	}
+ 	srcu_read_unlock(&intf->users_srcu, index);
+ 
+-	intf->handlers->shutdown(intf->send_info);
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
+ 
+ 	cleanup_smi_msgs(intf);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 90ec010bffbd..5faa917df1b6 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2083,18 +2083,9 @@ static int try_smi_init(struct smi_info *new_smi)
+ 		 si_to_str[new_smi->io.si_type]);
+ 
+ 	WARN_ON(new_smi->io.dev->init_name != NULL);
+-	kfree(init_name);
+-
+-	return 0;
+-
+-out_err:
+-	if (new_smi->intf) {
+-		ipmi_unregister_smi(new_smi->intf);
+-		new_smi->intf = NULL;
+-	}
+ 
++ out_err:
+ 	kfree(init_name);
+-
+ 	return rv;
+ }
+ 
+@@ -2227,6 +2218,8 @@ static void shutdown_smi(void *send_info)
+ 
+ 	kfree(smi_info->si_sm);
+ 	smi_info->si_sm = NULL;
++
++	smi_info->intf = NULL;
+ }
+ 
+ /*
+@@ -2240,10 +2233,8 @@ static void cleanup_one_si(struct smi_info *smi_info)
+ 
+ 	list_del(&smi_info->link);
+ 
+-	if (smi_info->intf) {
++	if (smi_info->intf)
+ 		ipmi_unregister_smi(smi_info->intf);
+-		smi_info->intf = NULL;
+-	}
+ 
+ 	if (smi_info->pdev) {
+ 		if (smi_info->pdev_registered)
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 18e4650c233b..265d6a6583bc 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -181,6 +181,8 @@ struct ssif_addr_info {
+ 	struct device *dev;
+ 	struct i2c_client *client;
+ 
++	struct i2c_client *added_client;
++
+ 	struct mutex clients_mutex;
+ 	struct list_head clients;
+ 
+@@ -1214,18 +1216,11 @@ static void shutdown_ssif(void *send_info)
+ 		complete(&ssif_info->wake_thread);
+ 		kthread_stop(ssif_info->thread);
+ 	}
+-
+-	/*
+-	 * No message can be outstanding now, we have removed the
+-	 * upper layer and it permitted us to do so.
+-	 */
+-	kfree(ssif_info);
+ }
+ 
+ static int ssif_remove(struct i2c_client *client)
+ {
+ 	struct ssif_info *ssif_info = i2c_get_clientdata(client);
+-	struct ipmi_smi *intf;
+ 	struct ssif_addr_info *addr_info;
+ 
+ 	if (!ssif_info)
+@@ -1235,9 +1230,7 @@ static int ssif_remove(struct i2c_client *client)
+ 	 * After this point, we won't deliver anything asychronously
+ 	 * to the message handler.  We can unregister ourself.
+ 	 */
+-	intf = ssif_info->intf;
+-	ssif_info->intf = NULL;
+-	ipmi_unregister_smi(intf);
++	ipmi_unregister_smi(ssif_info->intf);
+ 
+ 	list_for_each_entry(addr_info, &ssif_infos, link) {
+ 		if (addr_info->client == client) {
+@@ -1246,6 +1239,8 @@ static int ssif_remove(struct i2c_client *client)
+ 		}
+ 	}
+ 
++	kfree(ssif_info);
++
+ 	return 0;
+ }
+ 
+@@ -1648,15 +1643,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 
+  out:
+ 	if (rv) {
+-		/*
+-		 * Note that if addr_info->client is assigned, we
+-		 * leave it.  The i2c client hangs around even if we
+-		 * return a failure here, and the failure here is not
+-		 * propagated back to the i2c code.  This seems to be
+-		 * design intent, strange as it may be.  But if we
+-		 * don't leave it, ssif_platform_remove will not remove
+-		 * the client like it should.
+-		 */
++		addr_info->client = NULL;
+ 		dev_err(&client->dev, "Unable to start IPMI SSIF: %d\n", rv);
+ 		kfree(ssif_info);
+ 	}
+@@ -1676,7 +1663,8 @@ static int ssif_adapter_handler(struct device *adev, void *opaque)
+ 	if (adev->type != &i2c_adapter_type)
+ 		return 0;
+ 
+-	i2c_new_device(to_i2c_adapter(adev), &addr_info->binfo);
++	addr_info->added_client = i2c_new_device(to_i2c_adapter(adev),
++						 &addr_info->binfo);
+ 
+ 	if (!addr_info->adapter_name)
+ 		return 1; /* Only try the first I2C adapter by default. */
+@@ -1849,7 +1837,7 @@ static int ssif_platform_remove(struct platform_device *dev)
+ 		return 0;
+ 
+ 	mutex_lock(&ssif_infos_mutex);
+-	i2c_unregister_device(addr_info->client);
++	i2c_unregister_device(addr_info->added_client);
+ 
+ 	list_del(&addr_info->link);
+ 	kfree(addr_info);
+diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c
+index a5d402de5584..20724abd38bd 100644
+--- a/drivers/clk/clk-fixed-factor.c
++++ b/drivers/clk/clk-fixed-factor.c
+@@ -177,8 +177,15 @@ static struct clk *_of_fixed_factor_clk_setup(struct device_node *node)
+ 
+ 	clk = clk_register_fixed_factor(NULL, clk_name, parent_name, flags,
+ 					mult, div);
+-	if (IS_ERR(clk))
++	if (IS_ERR(clk)) {
++		/*
++		 * If parent clock is not registered, registration would fail.
++		 * Clear OF_POPULATED flag so that clock registration can be
++		 * attempted again from probe function.
++		 */
++		of_node_clear_flag(node, OF_POPULATED);
+ 		return clk;
++	}
+ 
+ 	ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 	if (ret) {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index e2ed078abd90..2d96e7966e94 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -2933,6 +2933,7 @@ struct clk *__clk_create_clk(struct clk_hw *hw, const char *dev_id,
+ 	return clk;
+ }
+ 
++/* keep in sync with __clk_put */
+ void __clk_free_clk(struct clk *clk)
+ {
+ 	clk_prepare_lock();
+@@ -3312,6 +3313,7 @@ int __clk_get(struct clk *clk)
+ 	return 1;
+ }
+ 
++/* keep in sync with __clk_free_clk */
+ void __clk_put(struct clk *clk)
+ {
+ 	struct module *owner;
+@@ -3345,6 +3347,7 @@ void __clk_put(struct clk *clk)
+ 
+ 	module_put(owner);
+ 
++	kfree_const(clk->con_id);
+ 	kfree(clk);
+ }
+ 
+diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
+index 3651c77fbabe..645d8a42007c 100644
+--- a/drivers/clk/imx/clk-imx6sll.c
++++ b/drivers/clk/imx/clk-imx6sll.c
+@@ -92,6 +92,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6sll-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	/* Do not bypass PLLs initially */
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index ba563ba50b40..9f1a40498642 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -142,6 +142,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	clks[IMX6UL_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", base + 0x00, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 44e4e27eddad..6f7637b19738 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -429,9 +429,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ 		val &= pm_cpu->mask_mux;
+ 	}
+ 
+-	if (val >= num_parents)
+-		return -EINVAL;
+-
+ 	return val;
+ }
+ 
+diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c
+index a896692b74ec..01dada561c10 100644
+--- a/drivers/clk/tegra/clk-bpmp.c
++++ b/drivers/clk/tegra/clk-bpmp.c
+@@ -586,9 +586,15 @@ static struct clk_hw *tegra_bpmp_clk_of_xlate(struct of_phandle_args *clkspec,
+ 	unsigned int id = clkspec->args[0], i;
+ 	struct tegra_bpmp *bpmp = data;
+ 
+-	for (i = 0; i < bpmp->num_clocks; i++)
+-		if (bpmp->clocks[i]->id == id)
+-			return &bpmp->clocks[i]->hw;
++	for (i = 0; i < bpmp->num_clocks; i++) {
++		struct tegra_bpmp_clk *clk = bpmp->clocks[i];
++
++		if (!clk)
++			continue;
++
++		if (clk->id == id)
++			return &clk->hw;
++	}
+ 
+ 	return NULL;
+ }
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index 051b8c6bae64..a9c85095bd56 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -38,6 +38,17 @@ static DEFINE_MUTEX(sev_cmd_mutex);
+ static struct sev_misc_dev *misc_dev;
+ static struct psp_device *psp_master;
+ 
++static int psp_cmd_timeout = 100;
++module_param(psp_cmd_timeout, int, 0644);
++MODULE_PARM_DESC(psp_cmd_timeout, " default timeout value, in seconds, for PSP commands");
++
++static int psp_probe_timeout = 5;
++module_param(psp_probe_timeout, int, 0644);
++MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
++
++static bool psp_dead;
++static int psp_timeout;
++
+ static struct psp_device *psp_alloc_struct(struct sp_device *sp)
+ {
+ 	struct device *dev = sp->dev;
+@@ -82,10 +93,19 @@ done:
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
++static int sev_wait_cmd_ioc(struct psp_device *psp,
++			    unsigned int *reg, unsigned int timeout)
+ {
+-	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
++	int ret;
++
++	ret = wait_event_timeout(psp->sev_int_queue,
++			psp->sev_int_rcvd, timeout * HZ);
++	if (!ret)
++		return -ETIMEDOUT;
++
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
++
++	return 0;
+ }
+ 
+ static int sev_cmd_buffer_len(int cmd)
+@@ -133,12 +153,15 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	if (!psp)
+ 		return -ENODEV;
+ 
++	if (psp_dead)
++		return -EBUSY;
++
+ 	/* Get the physical address of the command buffer */
+ 	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+ 	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+ 
+-	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x\n",
+-		cmd, phys_msb, phys_lsb);
++	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x timeout %us\n",
++		cmd, phys_msb, phys_lsb, psp_timeout);
+ 
+ 	print_hex_dump_debug("(in):  ", DUMP_PREFIX_OFFSET, 16, 2, data,
+ 			     sev_cmd_buffer_len(cmd), false);
+@@ -154,7 +177,18 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+ 
+ 	/* wait for command completion */
+-	sev_wait_cmd_ioc(psp, &reg);
++	ret = sev_wait_cmd_ioc(psp, &reg, psp_timeout);
++	if (ret) {
++		if (psp_ret)
++			*psp_ret = 0;
++
++		dev_err(psp->dev, "sev command %#x timed out, disabling PSP \n", cmd);
++		psp_dead = true;
++
++		return ret;
++	}
++
++	psp_timeout = psp_cmd_timeout;
+ 
+ 	if (psp_ret)
+ 		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+@@ -886,6 +920,8 @@ void psp_pci_init(void)
+ 
+ 	psp_master = sp->psp_data;
+ 
++	psp_timeout = psp_probe_timeout;
++
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 0f2245e1af2b..97d86dca7e85 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -1351,7 +1351,7 @@ err_sha_v4_algs:
+ 
+ err_sha_v3_algs:
+ 	for (j = 0; j < k; j++)
+-		crypto_unregister_ahash(&sha_v4_algs[j]);
++		crypto_unregister_ahash(&sha_v3_algs[j]);
+ 
+ err_aes_algs:
+ 	for (j = 0; j < i; j++)
+@@ -1367,7 +1367,7 @@ static void sahara_unregister_algs(struct sahara_dev *dev)
+ 	for (i = 0; i < ARRAY_SIZE(aes_algs); i++)
+ 		crypto_unregister_alg(&aes_algs[i]);
+ 
+-	for (i = 0; i < ARRAY_SIZE(sha_v4_algs); i++)
++	for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++)
+ 		crypto_unregister_ahash(&sha_v3_algs[i]);
+ 
+ 	if (dev->version > SAHARA_VERSION_3)
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 0b5b3abe054e..e26adf67e218 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -625,7 +625,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 	err = device_register(&devfreq->dev);
+ 	if (err) {
+ 		mutex_unlock(&devfreq->lock);
+-		goto err_dev;
++		put_device(&devfreq->dev);
++		goto err_out;
+ 	}
+ 
+ 	devfreq->trans_table =
+@@ -672,6 +673,7 @@ err_init:
+ 	mutex_unlock(&devfreq_list_lock);
+ 
+ 	device_unregister(&devfreq->dev);
++	devfreq = NULL;
+ err_dev:
+ 	if (devfreq)
+ 		kfree(devfreq);
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index c6589ccf1b9a..d349fedf4ab2 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -899,6 +899,8 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
+ 
+ 	platform_msi_domain_free_irqs(&pdev->dev);
+ 
++	tasklet_kill(&xor_dev->irq_tasklet);
++
+ 	clk_disable_unprepare(xor_dev->clk);
+ 
+ 	return 0;
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index de0957fe9668..bb6dfa2e1e8a 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2257,13 +2257,14 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 
+ 	pm_runtime_get_sync(pl330->ddma.dev);
+ 	spin_lock_irqsave(&pch->lock, flags);
++
+ 	spin_lock(&pl330->lock);
+ 	_stop(pch->thread);
+-	spin_unlock(&pl330->lock);
+-
+ 	pch->thread->req[0].desc = NULL;
+ 	pch->thread->req[1].desc = NULL;
+ 	pch->thread->req_running = -1;
++	spin_unlock(&pl330->lock);
++
+ 	power_down = pch->active;
+ 	pch->active = false;
+ 
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 2a2ccd9c78e4..8305a1ce8a9b 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -774,8 +774,9 @@ static void rcar_dmac_sync_tcr(struct rcar_dmac_chan *chan)
+ 	/* make sure all remaining data was flushed */
+ 	rcar_dmac_chcr_de_barrier(chan);
+ 
+-	/* back DE */
+-	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
++	/* back DE if remain data exists */
++	if (rcar_dmac_chan_read(chan, RCAR_DMATCR))
++		rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
+ }
+ 
+ static void rcar_dmac_chan_halt(struct rcar_dmac_chan *chan)
+diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
+index b5214c143fee..388a929baf95 100644
+--- a/drivers/firmware/efi/arm-init.c
++++ b/drivers/firmware/efi/arm-init.c
+@@ -259,7 +259,6 @@ void __init efi_init(void)
+ 
+ 	reserve_regions();
+ 	efi_esrt_init();
+-	efi_memmap_unmap();
+ 
+ 	memblock_reserve(params.mmap & PAGE_MASK,
+ 			 PAGE_ALIGN(params.mmap_size +
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 5889cbea60b8..4712445c3213 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -110,11 +110,13 @@ static int __init arm_enable_runtime_services(void)
+ {
+ 	u64 mapsize;
+ 
+-	if (!efi_enabled(EFI_BOOT)) {
++	if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
+ 		pr_info("EFI services will not be available.\n");
+ 		return 0;
+ 	}
+ 
++	efi_memmap_unmap();
++
+ 	if (efi_runtime_disabled()) {
+ 		pr_info("EFI runtime services will be disabled.\n");
+ 		return 0;
+diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
+index 1ab80e06e7c5..e5d80ebd72b6 100644
+--- a/drivers/firmware/efi/esrt.c
++++ b/drivers/firmware/efi/esrt.c
+@@ -326,7 +326,8 @@ void __init efi_esrt_init(void)
+ 
+ 	end = esrt_data + size;
+ 	pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end);
+-	efi_mem_reserve(esrt_data, esrt_data_size);
++	if (md.type == EFI_BOOT_SERVICES_DATA)
++		efi_mem_reserve(esrt_data, esrt_data_size);
+ 
+ 	pr_debug("esrt-init: loaded.\n");
+ }
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 2e33fd552899..99070e2ac3cd 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -665,6 +665,8 @@ static int pxa_gpio_probe(struct platform_device *pdev)
+ 	pchip->irq0 = irq0;
+ 	pchip->irq1 = irq1;
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 	gpio_reg_base = devm_ioremap(&pdev->dev, res->start,
+ 				     resource_size(res));
+ 	if (!gpio_reg_base)
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 1a8e20363861..a7e49fef73d4 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -92,7 +92,7 @@ struct acpi_gpio_info {
+ };
+ 
+ /* gpio suffixes used for ACPI and device tree lookup */
+-static const char * const gpio_suffixes[] = { "gpios", "gpio" };
++static __maybe_unused const char * const gpio_suffixes[] = { "gpios", "gpio" };
+ 
+ #ifdef CONFIG_OF_GPIO
+ struct gpio_desc *of_find_gpio(struct device *dev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+index c3744d89352c..ebe79bf00145 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+@@ -188,9 +188,9 @@ void __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
+ 	*doorbell_off = kfd->doorbell_id_offset + inx;
+ 
+ 	pr_debug("Get kernel queue doorbell\n"
+-			 "     doorbell offset   == 0x%08X\n"
+-			 "     kernel address    == %p\n",
+-		*doorbell_off, (kfd->doorbell_kernel_ptr + inx));
++			"     doorbell offset   == 0x%08X\n"
++			"     doorbell index    == 0x%x\n",
++		*doorbell_off, inx);
+ 
+ 	return kfd->doorbell_kernel_ptr + inx;
+ }
+@@ -199,7 +199,8 @@ void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr)
+ {
+ 	unsigned int inx;
+ 
+-	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
++	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr)
++		* sizeof(u32) / kfd->device_info->doorbell_size;
+ 
+ 	mutex_lock(&kfd->doorbell_mutex);
+ 	__clear_bit(inx, kfd->doorbell_available_index);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 1d80b4f7c681..4694386cc623 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -244,6 +244,8 @@ struct kfd_process *kfd_get_process(const struct task_struct *thread)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	process = find_process(thread);
++	if (!process)
++		return ERR_PTR(-EINVAL);
+ 
+ 	return process;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 8a7890b03d97..6ccd59b87403 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -497,6 +497,10 @@ static bool detect_dp(
+ 			sink_caps->signal = SIGNAL_TYPE_DISPLAY_PORT_MST;
+ 			link->type = dc_connection_mst_branch;
+ 
++			dal_ddc_service_set_transaction_type(
++							link->ddc,
++							sink_caps->transaction_type);
++
+ 			/*
+ 			 * This call will initiate MST topology discovery. Which
+ 			 * will detect MST ports and add new DRM connector DRM
+diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+index d567be49c31b..b487774d8041 100644
+--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+@@ -1020,7 +1020,7 @@ static int pp_get_display_power_level(void *handle,
+ static int pp_get_current_clocks(void *handle,
+ 		struct amd_pp_clock_info *clocks)
+ {
+-	struct amd_pp_simple_clock_info simple_clocks;
++	struct amd_pp_simple_clock_info simple_clocks = { 0 };
+ 	struct pp_clock_info hw_clocks;
+ 	struct pp_hwmgr *hwmgr = handle;
+ 	int ret = 0;
+@@ -1056,7 +1056,10 @@ static int pp_get_current_clocks(void *handle,
+ 	clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+ 	clocks->min_engine_clock_in_sr = hw_clocks.min_eng_clk;
+ 
+-	clocks->max_clocks_state = simple_clocks.level;
++	if (simple_clocks.level == 0)
++		clocks->max_clocks_state = PP_DAL_POWERLEVEL_7;
++	else
++		clocks->max_clocks_state = simple_clocks.level;
+ 
+ 	if (0 == phm_get_current_shallow_sleep_clocks(hwmgr, &hwmgr->current_ps->hardware, &hw_clocks)) {
+ 		clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+@@ -1159,6 +1162,8 @@ static int pp_get_display_mode_validation_clocks(void *handle,
+ 	if (!hwmgr || !hwmgr->pm_en ||!clocks)
+ 		return -EINVAL;
+ 
++	clocks->level = PP_DAL_POWERLEVEL_7;
++
+ 	mutex_lock(&hwmgr->smu_lock);
+ 
+ 	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index f8e866ceda02..77779adeef28 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk;
++			clocks->clock[i] = sclk_table->entries[i].clk * 10;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk;
++			clocks->clock[i] = mclk_table->entries[i].clk * 10;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 617557bd8c24..0adfc5392cd3 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i];
++			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk;
++			clocks->clock[i] = table->entries[i].clk * 10;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 963a4dba8213..9109b69cd052 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -160,7 +160,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
+ 		args.ustate = value;
+ 	}
+ 
++	ret = pm_runtime_get_sync(drm->dev);
++	if (IS_ERR_VALUE(ret) && ret != -EACCES)
++		return ret;
+ 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
++	pm_runtime_put_autosuspend(drm->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index f5d3158f0378..c7ec86d6c3c9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -908,8 +908,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+ 	get_task_comm(tmpname, current);
+ 	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+ 
+-	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL)))
+-		return ret;
++	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
++		ret = -ENOMEM;
++		goto done;
++	}
+ 
+ 	ret = nouveau_cli_init(drm, name, cli);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+index 78597da6313a..0e372a190d3f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+@@ -23,6 +23,10 @@
+ #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER
+ #include "priv.h"
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ static int
+ nvkm_device_tegra_power_up(struct nvkm_device_tegra *tdev)
+ {
+@@ -105,6 +109,15 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
+ 	unsigned long pgsize_bitmap;
+ 	int ret;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
++
++		arm_iommu_detach_device(dev);
++		arm_iommu_release_mapping(mapping);
++	}
++#endif
++
+ 	if (!tdev->func->iommu_bit)
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+index a188a3959f1a..6ad827b93ae1 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+@@ -823,7 +823,7 @@ static void s6e8aa0_read_mtp_id(struct s6e8aa0 *ctx)
+ 	int ret, i;
+ 
+ 	ret = s6e8aa0_dcs_read(ctx, 0xd1, id, ARRAY_SIZE(id));
+-	if (ret < ARRAY_SIZE(id) || id[0] == 0x00) {
++	if (ret < 0 || ret < ARRAY_SIZE(id) || id[0] == 0x00) {
+ 		dev_err(ctx->dev, "read id failed\n");
+ 		ctx->error = -EIO;
+ 		return;
+diff --git a/drivers/gpu/ipu-v3/ipu-csi.c b/drivers/gpu/ipu-v3/ipu-csi.c
+index 5450a2db1219..2beadb3f79c2 100644
+--- a/drivers/gpu/ipu-v3/ipu-csi.c
++++ b/drivers/gpu/ipu-v3/ipu-csi.c
+@@ -318,13 +318,17 @@ static int mbus_code_to_bus_cfg(struct ipu_csi_bus_config *cfg, u32 mbus_code)
+ /*
+  * Fill a CSI bus config struct from mbus_config and mbus_framefmt.
+  */
+-static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
++static int fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 				 struct v4l2_mbus_config *mbus_cfg,
+ 				 struct v4l2_mbus_framefmt *mbus_fmt)
+ {
++	int ret;
++
+ 	memset(csicfg, 0, sizeof(*csicfg));
+ 
+-	mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	switch (mbus_cfg->type) {
+ 	case V4L2_MBUS_PARALLEL:
+@@ -356,6 +360,8 @@ static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 		/* will never get here, keep compiler quiet */
+ 		break;
+ 	}
++
++	return 0;
+ }
+ 
+ int ipu_csi_init_interface(struct ipu_csi *csi,
+@@ -365,8 +371,11 @@ int ipu_csi_init_interface(struct ipu_csi *csi,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 width, height, data = 0;
++	int ret;
+ 
+-	fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	ret = fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* set default sensor frame width and height */
+ 	width = mbus_fmt->width;
+@@ -587,11 +596,14 @@ int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 temp;
++	int ret;
+ 
+ 	if (vc > 3)
+ 		return -EINVAL;
+ 
+-	mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&csi->lock, flags);
+ 
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index b10fe26c4891..c9a466be7709 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1178,6 +1178,9 @@ static ssize_t vmbus_chan_attr_show(struct kobject *kobj,
+ 	if (!attribute->show)
+ 		return -EIO;
+ 
++	if (chan->state != CHANNEL_OPENED_STATE)
++		return -EINVAL;
++
+ 	return attribute->show(chan, buf);
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 9bc04c50d45b..1d94ebec027b 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1027,7 +1027,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ 	}
+ 
+ 	pm_runtime_put(&adev->dev);
+-	dev_info(dev, "%s initialized\n", (char *)id->data);
++	dev_info(dev, "CPU%d: ETM v%d.%d initialized\n",
++		 drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);
+ 
+ 	if (boot_enable) {
+ 		coresight_enable(drvdata->csdev);
+@@ -1045,23 +1046,19 @@ err_arch_supported:
+ 	return ret;
+ }
+ 
++#define ETM4x_AMBA_ID(pid)			\
++	{					\
++		.id	= pid,			\
++		.mask	= 0x000fffff,		\
++	}
++
+ static const struct amba_id etm4_ids[] = {
+-	{       /* ETM 4.0 - Cortex-A53  */
+-		.id	= 0x000bb95d,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - Cortex-A57 */
+-		.id	= 0x000bb95e,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - A72, Maia, HiSilicon */
+-		.id = 0x000bb95a,
+-		.mask = 0x000fffff,
+-		.data = "ETM 4.0",
+-	},
+-	{ 0, 0},
++	ETM4x_AMBA_ID(0x000bb95d),		/* Cortex-A53 */
++	ETM4x_AMBA_ID(0x000bb95e),		/* Cortex-A57 */
++	ETM4x_AMBA_ID(0x000bb95a),		/* Cortex-A72 */
++	ETM4x_AMBA_ID(0x000bb959),		/* Cortex-A73 */
++	ETM4x_AMBA_ID(0x000bb9da),		/* Cortex-A35 */
++	{},
+ };
+ 
+ static struct amba_driver etm4x_driver = {
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 01b7457fe8fc..459ef930d98c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -40,8 +40,9 @@
+ 
+ /** register definition **/
+ /* FFSR - 0x300 */
+-#define FFSR_FT_STOPPED		BIT(1)
++#define FFSR_FT_STOPPED_BIT	1
+ /* FFCR - 0x304 */
++#define FFCR_FON_MAN_BIT	6
+ #define FFCR_FON_MAN		BIT(6)
+ #define FFCR_STOP_FI		BIT(12)
+ 
+@@ -86,9 +87,9 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
+ 	/* Generate manual flush */
+ 	writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
+ 	/* Wait for flush to complete */
+-	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
++	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN_BIT, 0);
+ 	/* Wait for formatter to stop */
+-	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
++	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED_BIT, 1);
+ 
+ 	CS_LOCK(drvdata->base);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index 29e834aab539..b673718952f6 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -108,7 +108,7 @@ static int coresight_find_link_inport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find inport, parent: %s, child: %s\n",
+ 		dev_name(&parent->dev), dev_name(&csdev->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_find_link_outport(struct coresight_device *csdev,
+@@ -126,7 +126,7 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find outport, parent: %s, child: %s\n",
+ 		dev_name(&csdev->dev), dev_name(&child->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
+@@ -179,6 +179,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
+ 	else
+ 		refport = 0;
+ 
++	if (refport < 0)
++		return refport;
++
+ 	if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
+ 		if (link_ops(csdev)->enable) {
+ 			ret = link_ops(csdev)->enable(csdev, inport, outport);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 715b6fdb4989..5c8ea4e9203c 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -111,22 +111,22 @@
+ #define ASPEED_I2CD_DEV_ADDR_MASK			GENMASK(6, 0)
+ 
+ enum aspeed_i2c_master_state {
++	ASPEED_I2C_MASTER_INACTIVE,
+ 	ASPEED_I2C_MASTER_START,
+ 	ASPEED_I2C_MASTER_TX_FIRST,
+ 	ASPEED_I2C_MASTER_TX,
+ 	ASPEED_I2C_MASTER_RX_FIRST,
+ 	ASPEED_I2C_MASTER_RX,
+ 	ASPEED_I2C_MASTER_STOP,
+-	ASPEED_I2C_MASTER_INACTIVE,
+ };
+ 
+ enum aspeed_i2c_slave_state {
++	ASPEED_I2C_SLAVE_STOP,
+ 	ASPEED_I2C_SLAVE_START,
+ 	ASPEED_I2C_SLAVE_READ_REQUESTED,
+ 	ASPEED_I2C_SLAVE_READ_PROCESSED,
+ 	ASPEED_I2C_SLAVE_WRITE_REQUESTED,
+ 	ASPEED_I2C_SLAVE_WRITE_RECEIVED,
+-	ASPEED_I2C_SLAVE_STOP,
+ };
+ 
+ struct aspeed_i2c_bus {
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index dafcb6f019b3..2702ead01a03 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -722,6 +722,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 	dgid = (union ib_gid *) &addr->sib_addr;
+ 	pkey = ntohs(addr->sib_pkey);
+ 
++	mutex_lock(&lock);
+ 	list_for_each_entry(cur_dev, &dev_list, list) {
+ 		for (p = 1; p <= cur_dev->device->phys_port_cnt; ++p) {
+ 			if (!rdma_cap_af_ib(cur_dev->device, p))
+@@ -748,18 +749,19 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 					cma_dev = cur_dev;
+ 					sgid = gid;
+ 					id_priv->id.port_num = p;
++					goto found;
+ 				}
+ 			}
+ 		}
+ 	}
+-
+-	if (!cma_dev)
+-		return -ENODEV;
++	mutex_unlock(&lock);
++	return -ENODEV;
+ 
+ found:
+ 	cma_attach_to_dev(id_priv, cma_dev);
+-	addr = (struct sockaddr_ib *) cma_src_addr(id_priv);
+-	memcpy(&addr->sib_addr, &sgid, sizeof sgid);
++	mutex_unlock(&lock);
++	addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
++	memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
+ 	cma_translate_ib(addr, &id_priv->id.route.addr.dev_addr);
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/cong.c b/drivers/infiniband/hw/mlx5/cong.c
+index 985fa2637390..7e4e358a4fd8 100644
+--- a/drivers/infiniband/hw/mlx5/cong.c
++++ b/drivers/infiniband/hw/mlx5/cong.c
+@@ -359,9 +359,6 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	int ret;
+ 	char lbuf[11];
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	ret = mlx5_ib_get_cc_params(param->dev, param->port_num, offset, &var);
+ 	if (ret)
+ 		return ret;
+@@ -370,11 +367,7 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (copy_to_user(buf, lbuf, ret))
+-		return -EFAULT;
+-
+-	*pos += ret;
+-	return ret;
++	return simple_read_from_buffer(buf, count, pos, lbuf, ret);
+ }
+ 
+ static const struct file_operations dbg_cc_fops = {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 90a9c461cedc..308456d28afb 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -271,16 +271,16 @@ static ssize_t size_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -310,19 +310,11 @@ static ssize_t size_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->size);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations size_fops = {
+@@ -337,16 +329,16 @@ static ssize_t limit_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -372,19 +364,11 @@ static ssize_t limit_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->limit);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations limit_fops = {
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index dfba44a40f0b..fe45d6cad6cd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -225,9 +225,14 @@ static int hdr_check(struct rxe_pkt_info *pkt)
+ 		goto err1;
+ 	}
+ 
++	if (unlikely(qpn == 0)) {
++		pr_warn_once("QP 0 not supported");
++		goto err1;
++	}
++
+ 	if (qpn != IB_MULTICAST_QPN) {
+-		index = (qpn == 0) ? port->qp_smi_index :
+-			((qpn == 1) ? port->qp_gsi_index : qpn);
++		index = (qpn == 1) ? port->qp_gsi_index : qpn;
++
+ 		qp = rxe_pool_get_index(&rxe->qp_pool, index);
+ 		if (unlikely(!qp)) {
+ 			pr_warn_ratelimited("no qp matches qpn 0x%x\n", qpn);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 6535d9beb24d..a620701f9d41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1028,12 +1028,14 @@ static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id, struct ib_cm_event *even
+ 
+ 	skb_queue_head_init(&skqueue);
+ 
++	netif_tx_lock_bh(p->dev);
+ 	spin_lock_irq(&priv->lock);
+ 	set_bit(IPOIB_FLAG_OPER_UP, &p->flags);
+ 	if (p->neigh)
+ 		while ((skb = __skb_dequeue(&p->neigh->queue)))
+ 			__skb_queue_tail(&skqueue, skb);
+ 	spin_unlock_irq(&priv->lock);
++	netif_tx_unlock_bh(p->dev);
+ 
+ 	while ((skb = __skb_dequeue(&skqueue))) {
+ 		skb->dev = p->dev;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 26cde95bc0f3..7630d5ed2b41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1787,7 +1787,8 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
+ 		goto out_free_pd;
+ 	}
+ 
+-	if (ipoib_neigh_hash_init(priv) < 0) {
++	ret = ipoib_neigh_hash_init(priv);
++	if (ret) {
+ 		pr_warn("%s failed to init neigh hash\n", dev->name);
+ 		goto out_dev_uninit;
+ 	}
+diff --git a/drivers/input/joystick/pxrc.c b/drivers/input/joystick/pxrc.c
+index 07a0dbd3ced2..cfb410cf0789 100644
+--- a/drivers/input/joystick/pxrc.c
++++ b/drivers/input/joystick/pxrc.c
+@@ -120,48 +120,51 @@ static void pxrc_close(struct input_dev *input)
+ 	mutex_unlock(&pxrc->pm_mutex);
+ }
+ 
++static void pxrc_free_urb(void *_pxrc)
++{
++	struct pxrc *pxrc = _pxrc;
++
++	usb_free_urb(pxrc->urb);
++}
++
+ static int pxrc_usb_init(struct pxrc *pxrc)
+ {
+ 	struct usb_endpoint_descriptor *epirq;
+ 	unsigned int pipe;
+-	int retval;
++	int error;
+ 
+ 	/* Set up the endpoint information */
+ 	/* This device only has an interrupt endpoint */
+-	retval = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
+-			NULL, NULL, &epirq, NULL);
+-	if (retval) {
+-		dev_err(&pxrc->intf->dev,
+-			"Could not find endpoint\n");
+-		goto error;
++	error = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
++					  NULL, NULL, &epirq, NULL);
++	if (error) {
++		dev_err(&pxrc->intf->dev, "Could not find endpoint\n");
++		return error;
+ 	}
+ 
+ 	pxrc->bsize = usb_endpoint_maxp(epirq);
+ 	pxrc->epaddr = epirq->bEndpointAddress;
+ 	pxrc->data = devm_kmalloc(&pxrc->intf->dev, pxrc->bsize, GFP_KERNEL);
+-	if (!pxrc->data) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->data)
++		return -ENOMEM;
+ 
+ 	usb_set_intfdata(pxrc->intf, pxrc);
+ 	usb_make_path(pxrc->udev, pxrc->phys, sizeof(pxrc->phys));
+ 	strlcat(pxrc->phys, "/input0", sizeof(pxrc->phys));
+ 
+ 	pxrc->urb = usb_alloc_urb(0, GFP_KERNEL);
+-	if (!pxrc->urb) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->urb)
++		return -ENOMEM;
++
++	error = devm_add_action_or_reset(&pxrc->intf->dev, pxrc_free_urb, pxrc);
++	if (error)
++		return error;
+ 
+ 	pipe = usb_rcvintpipe(pxrc->udev, pxrc->epaddr),
+ 	usb_fill_int_urb(pxrc->urb, pxrc->udev, pipe, pxrc->data, pxrc->bsize,
+ 						pxrc_usb_irq, pxrc, 1);
+ 
+-error:
+-	return retval;
+-
+-
++	return 0;
+ }
+ 
+ static int pxrc_input_init(struct pxrc *pxrc)
+@@ -197,7 +200,7 @@ static int pxrc_probe(struct usb_interface *intf,
+ 		      const struct usb_device_id *id)
+ {
+ 	struct pxrc *pxrc;
+-	int retval;
++	int error;
+ 
+ 	pxrc = devm_kzalloc(&intf->dev, sizeof(*pxrc), GFP_KERNEL);
+ 	if (!pxrc)
+@@ -207,29 +210,20 @@ static int pxrc_probe(struct usb_interface *intf,
+ 	pxrc->udev = usb_get_dev(interface_to_usbdev(intf));
+ 	pxrc->intf = intf;
+ 
+-	retval = pxrc_usb_init(pxrc);
+-	if (retval)
+-		goto error;
++	error = pxrc_usb_init(pxrc);
++	if (error)
++		return error;
+ 
+-	retval = pxrc_input_init(pxrc);
+-	if (retval)
+-		goto err_free_urb;
++	error = pxrc_input_init(pxrc);
++	if (error)
++		return error;
+ 
+ 	return 0;
+-
+-err_free_urb:
+-	usb_free_urb(pxrc->urb);
+-
+-error:
+-	return retval;
+ }
+ 
+ static void pxrc_disconnect(struct usb_interface *intf)
+ {
+-	struct pxrc *pxrc = usb_get_intfdata(intf);
+-
+-	usb_free_urb(pxrc->urb);
+-	usb_set_intfdata(intf, NULL);
++	/* All driver resources are devm-managed. */
+ }
+ 
+ static int pxrc_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/drivers/input/touchscreen/rohm_bu21023.c b/drivers/input/touchscreen/rohm_bu21023.c
+index bda0500c9b57..714affdd742f 100644
+--- a/drivers/input/touchscreen/rohm_bu21023.c
++++ b/drivers/input/touchscreen/rohm_bu21023.c
+@@ -304,7 +304,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 	msg[1].len = len;
+ 	msg[1].buf = buf;
+ 
+-	i2c_lock_adapter(adap);
++	i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	for (i = 0; i < 2; i++) {
+ 		if (__i2c_transfer(adap, &msg[i], 1) < 0) {
+@@ -313,7 +313,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 		}
+ 	}
+ 
+-	i2c_unlock_adapter(adap);
++	i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index b73c6a7bf7f2..b7076aa24d6b 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -1302,6 +1302,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+ 	q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
++	writel(q->cons, q->cons_reg);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index 50e3a9fcf43e..b5948ba6b3b3 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -192,6 +192,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ {
+ 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+ 	struct device *dev = cfg->iommu_dev;
++	phys_addr_t phys;
+ 	dma_addr_t dma;
+ 	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+ 	void *table = NULL;
+@@ -200,6 +201,10 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+ 	else if (lvl == 2)
+ 		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++	phys = virt_to_phys(table);
++	if (phys != (arm_v7s_iopte)phys)
++		/* Doesn't fit in PTE */
++		goto out_free;
+ 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+@@ -209,7 +214,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		 * address directly, so if the DMA layer suggests otherwise by
+ 		 * translating or truncating them, that bodes very badly...
+ 		 */
+-		if (dma != virt_to_phys(table))
++		if (dma != phys)
+ 			goto out_unmap;
+ 	}
+ 	kmemleak_ignore(table);
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 010a254305dd..88641b4560bc 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -237,7 +237,8 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
+ 	void *pages;
+ 
+ 	VM_BUG_ON((gfp & __GFP_HIGHMEM));
+-	p = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO, order);
++	p = alloc_pages_node(dev ? dev_to_node(dev) : NUMA_NO_NODE,
++			     gfp | __GFP_ZERO, order);
+ 	if (!p)
+ 		return NULL;
+ 
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index feb1664815b7..6e2882cda55d 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -47,6 +47,7 @@ struct ipmmu_features {
+ 	unsigned int number_of_contexts;
+ 	bool setup_imbuscr;
+ 	bool twobit_imttbcr_sl0;
++	bool reserved_context;
+ };
+ 
+ struct ipmmu_vmsa_device {
+@@ -916,6 +917,7 @@ static const struct ipmmu_features ipmmu_features_default = {
+ 	.number_of_contexts = 1, /* software only tested with one context */
+ 	.setup_imbuscr = true,
+ 	.twobit_imttbcr_sl0 = false,
++	.reserved_context = false,
+ };
+ 
+ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+@@ -924,6 +926,7 @@ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+ 	.number_of_contexts = 8,
+ 	.setup_imbuscr = false,
+ 	.twobit_imttbcr_sl0 = true,
++	.reserved_context = true,
+ };
+ 
+ static const struct of_device_id ipmmu_of_ids[] = {
+@@ -1017,6 +1020,11 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		ipmmu_device_reset(mmu);
++
++		if (mmu->features->reserved_context) {
++			dev_info(&pdev->dev, "IPMMU context 0 is reserved\n");
++			set_bit(0, mmu->ctx);
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
+index b57f764d6a16..93ebba6dcc25 100644
+--- a/drivers/lightnvm/pblk-init.c
++++ b/drivers/lightnvm/pblk-init.c
+@@ -716,10 +716,11 @@ static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
+ 
+ 		/*
+ 		 * In 1.2 spec. chunk state is not persisted by the device. Thus
+-		 * some of the values are reset each time pblk is instantiated.
++		 * some of the values are reset each time pblk is instantiated,
++		 * so we have to assume that the block is closed.
+ 		 */
+ 		if (lun_bb_meta[line->id] == NVM_BLK_T_FREE)
+-			chunk->state =  NVM_CHK_ST_FREE;
++			chunk->state =  NVM_CHK_ST_CLOSED;
+ 		else
+ 			chunk->state = NVM_CHK_ST_OFFLINE;
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index 3a5069183859..d83466b3821b 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -742,9 +742,10 @@ static int pblk_recov_check_line_version(struct pblk *pblk,
+ 		return 1;
+ 	}
+ 
+-#ifdef NVM_DEBUG
++#ifdef CONFIG_NVM_PBLK_DEBUG
+ 	if (header->version_minor > EMETA_VERSION_MINOR)
+-		pr_info("pblk: newer line minor version found: %d\n", line_v);
++		pr_info("pblk: newer line minor version found: %d\n",
++				header->version_minor);
+ #endif
+ 
+ 	return 0;
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 12decdbd722d..fc65f0dedf7f 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -99,10 +99,26 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
+ {
+ 	struct scatterlist sg;
+ 
+-	sg_init_one(&sg, data, len);
+-	ahash_request_set_crypt(req, &sg, NULL, len);
+-
+-	return crypto_wait_req(crypto_ahash_update(req), wait);
++	if (likely(!is_vmalloc_addr(data))) {
++		sg_init_one(&sg, data, len);
++		ahash_request_set_crypt(req, &sg, NULL, len);
++		return crypto_wait_req(crypto_ahash_update(req), wait);
++	} else {
++		do {
++			int r;
++			size_t this_step = min_t(size_t, len, PAGE_SIZE - offset_in_page(data));
++			flush_kernel_vmap_range((void *)data, this_step);
++			sg_init_table(&sg, 1);
++			sg_set_page(&sg, vmalloc_to_page(data), this_step, offset_in_page(data));
++			ahash_request_set_crypt(req, &sg, NULL, this_step);
++			r = crypto_wait_req(crypto_ahash_update(req), wait);
++			if (unlikely(r))
++				return r;
++			data += this_step;
++			len -= this_step;
++		} while (len);
++		return 0;
++	}
+ }
+ 
+ /*
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index f32ec7342ef0..5653e8eebe2b 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -1377,6 +1377,11 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
+ 	struct vb2_buffer *vb;
+ 	int ret;
+ 
++	if (q->error) {
++		dprintk(1, "fatal error occurred on queue\n");
++		return -EIO;
++	}
++
+ 	vb = q->bufs[index];
+ 
+ 	switch (vb->state) {
+diff --git a/drivers/media/i2c/ov5645.c b/drivers/media/i2c/ov5645.c
+index b3f762578f7f..1722cdab0daf 100644
+--- a/drivers/media/i2c/ov5645.c
++++ b/drivers/media/i2c/ov5645.c
+@@ -510,8 +510,8 @@ static const struct reg_value ov5645_setting_full[] = {
+ };
+ 
+ static const s64 link_freq[] = {
+-	222880000,
+-	334320000
++	224000000,
++	336000000
+ };
+ 
+ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+@@ -520,7 +520,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 960,
+ 		.data = ov5645_setting_sxga,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_sxga),
+-		.pixel_clock = 111440000,
++		.pixel_clock = 112000000,
+ 		.link_freq = 0 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -528,7 +528,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1080,
+ 		.data = ov5645_setting_1080p,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_1080p),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -536,7 +536,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1944,
+ 		.data = ov5645_setting_full,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_full),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ };
+@@ -1145,7 +1145,8 @@ static int ov5645_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
+-	if (xclk_freq != 23880000) {
++	/* external clock must be 24MHz, allow 1% tolerance */
++	if (xclk_freq < 23760000 || xclk_freq > 24240000) {
+ 		dev_err(dev, "external clock frequency %u is not supported\n",
+ 			xclk_freq);
+ 		return -EINVAL;
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index 0ea8dd44026c..3a06c000f97b 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1190,6 +1190,14 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 			return err;
+ 	}
+ 
++	/* Initialize vc->dev and vc->ch for the error path */
++	for (ch = 0; ch < max_channels(dev); ch++) {
++		struct tw686x_video_channel *vc = &dev->video_channels[ch];
++
++		vc->dev = dev;
++		vc->ch = ch;
++	}
++
+ 	for (ch = 0; ch < max_channels(dev); ch++) {
+ 		struct tw686x_video_channel *vc = &dev->video_channels[ch];
+ 		struct video_device *vdev;
+@@ -1198,9 +1206,6 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 		spin_lock_init(&vc->qlock);
+ 		INIT_LIST_HEAD(&vc->vidq_queued);
+ 
+-		vc->dev = dev;
+-		vc->ch = ch;
+-
+ 		/* default settings */
+ 		err = tw686x_set_standard(vc, V4L2_STD_NTSC);
+ 		if (err)
+diff --git a/drivers/mfd/88pm860x-i2c.c b/drivers/mfd/88pm860x-i2c.c
+index 84e313107233..7b9052ea7413 100644
+--- a/drivers/mfd/88pm860x-i2c.c
++++ b/drivers/mfd/88pm860x-i2c.c
+@@ -146,14 +146,14 @@ int pm860x_page_reg_write(struct i2c_client *i2c, int reg,
+ 	unsigned char zero;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xFA, 0, &zero);
+ 	read_device(i2c, 0xFB, 0, &zero);
+ 	read_device(i2c, 0xFF, 0, &zero);
+ 	ret = write_device(i2c, reg, 1, &data);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_reg_write);
+@@ -164,14 +164,14 @@ int pm860x_page_bulk_read(struct i2c_client *i2c, int reg,
+ 	unsigned char zero = 0;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xfa, 0, &zero);
+ 	read_device(i2c, 0xfb, 0, &zero);
+ 	read_device(i2c, 0xff, 0, &zero);
+ 	ret = read_device(i2c, reg, count, buf);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_bulk_read);
+diff --git a/drivers/misc/hmc6352.c b/drivers/misc/hmc6352.c
+index eeb7eef62174..38f90e179927 100644
+--- a/drivers/misc/hmc6352.c
++++ b/drivers/misc/hmc6352.c
+@@ -27,6 +27,7 @@
+ #include <linux/err.h>
+ #include <linux/delay.h>
+ #include <linux/sysfs.h>
++#include <linux/nospec.h>
+ 
+ static DEFINE_MUTEX(compass_mutex);
+ 
+@@ -50,6 +51,7 @@ static int compass_store(struct device *dev, const char *buf, size_t count,
+ 		return ret;
+ 	if (val >= strlen(map))
+ 		return -EINVAL;
++	val = array_index_nospec(val, strlen(map));
+ 	mutex_lock(&compass_mutex);
+ 	ret = compass_command(c, map[val]);
+ 	mutex_unlock(&compass_mutex);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index fb83d1375638..50d82c3d032a 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -2131,7 +2131,7 @@ static int ibmvmc_init_crq_queue(struct crq_server_adapter *adapter)
+ 	retrc = plpar_hcall_norets(H_REG_CRQ,
+ 				   vdev->unit_address,
+ 				   queue->msg_token, PAGE_SIZE);
+-	retrc = rc;
++	rc = retrc;
+ 
+ 	if (rc == H_RESOURCE)
+ 		rc = ibmvmc_reset_crq_queue(adapter);
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 0208c4b027c5..fa0236a5e59a 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -267,7 +267,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 
+ 	ret = 0;
+ 	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0);
+-	if (bytes_recv < if_version_length) {
++	if (bytes_recv < 0 || bytes_recv < if_version_length) {
+ 		dev_err(bus->dev, "Could not read IF version\n");
+ 		ret = -EIO;
+ 		goto err;
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index b1133739fb4b..692b2f9a18cb 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -505,17 +505,15 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 
+ 	cl = cldev->cl;
+ 
++	mutex_lock(&bus->device_lock);
+ 	if (cl->state == MEI_FILE_UNINITIALIZED) {
+-		mutex_lock(&bus->device_lock);
+ 		ret = mei_cl_link(cl);
+-		mutex_unlock(&bus->device_lock);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 		/* update pointers */
+ 		cl->cldev = cldev;
+ 	}
+ 
+-	mutex_lock(&bus->device_lock);
+ 	if (mei_cl_is_connected(cl)) {
+ 		ret = 0;
+ 		goto out;
+@@ -600,9 +598,8 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-out:
+ 	mei_cl_bus_module_put(cldev);
+-
++out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+ 	mei_cl_unlink(cl);
+@@ -860,12 +857,13 @@ static void mei_cl_bus_dev_release(struct device *dev)
+ 
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
++	mei_cl_unlink(cldev->cl);
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+ 
+ static const struct device_type mei_cl_device_type = {
+-	.release	= mei_cl_bus_dev_release,
++	.release = mei_cl_bus_dev_release,
+ };
+ 
+ /**
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index fe6595fe94f1..995ff1b7e7b5 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1140,15 +1140,18 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		props_res = (struct hbm_props_response *)mei_msg;
+ 
+-		if (props_res->status) {
++		if (props_res->status == MEI_HBMS_CLIENT_NOT_FOUND) {
++			dev_dbg(dev->dev, "hbm: properties response: %d CLIENT_NOT_FOUND\n",
++				props_res->me_addr);
++		} else if (props_res->status) {
+ 			dev_err(dev->dev, "hbm: properties response: wrong status = %d %s\n",
+ 				props_res->status,
+ 				mei_hbm_status_str(props_res->status));
+ 			return -EPROTO;
++		} else {
++			mei_hbm_me_cl_add(dev, props_res);
+ 		}
+ 
+-		mei_hbm_me_cl_add(dev, props_res);
+-
+ 		/* request property for the next client */
+ 		if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
+ 			return -EIO;
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 09cb89645d06..2cfec33178c1 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -517,19 +517,23 @@ static struct mmc_host_ops meson_mx_mmc_ops = {
+ static struct platform_device *meson_mx_mmc_slot_pdev(struct device *parent)
+ {
+ 	struct device_node *slot_node;
++	struct platform_device *pdev;
+ 
+ 	/*
+ 	 * TODO: the MMC core framework currently does not support
+ 	 * controllers with multiple slots properly. So we only register
+ 	 * the first slot for now
+ 	 */
+-	slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot");
++	slot_node = of_get_compatible_child(parent->of_node, "mmc-slot");
+ 	if (!slot_node) {
+ 		dev_warn(parent, "no 'mmc-slot' sub-node found\n");
+ 		return ERR_PTR(-ENOENT);
+ 	}
+ 
+-	return of_platform_device_create(slot_node, NULL, parent);
++	pdev = of_platform_device_create(slot_node, NULL, parent);
++	of_node_put(slot_node);
++
++	return pdev;
+ }
+ 
+ static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host)
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 071693ebfe18..68760d4a5d3d 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -2177,6 +2177,7 @@ static int omap_hsmmc_remove(struct platform_device *pdev)
+ 	dma_release_channel(host->tx_chan);
+ 	dma_release_channel(host->rx_chan);
+ 
++	dev_pm_clear_wake_irq(host->dev);
+ 	pm_runtime_dont_use_autosuspend(host->dev);
+ 	pm_runtime_put_sync(host->dev);
+ 	pm_runtime_disable(host->dev);
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4ffa6b173a21..8332f56e6c0d 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -22,6 +22,7 @@
+ #include <linux/sys_soc.h>
+ #include <linux/clk.h>
+ #include <linux/ktime.h>
++#include <linux/dma-mapping.h>
+ #include <linux/mmc/host.h>
+ #include "sdhci-pltfm.h"
+ #include "sdhci-esdhc.h"
+@@ -427,6 +428,11 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
+ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ {
+ 	u32 value;
++	struct device *dev = mmc_dev(host->mmc);
++
++	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
++	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
++		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+ 
+ 	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+ 	value |= ESDHC_DMA_SNOOP;
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 970d38f68939..137df06b9b6e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -334,7 +334,8 @@ static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
+ 		  SDHCI_QUIRK_NO_HISPD_BIT |
+ 		  SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ 		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_BROKEN_HS200,
+ 	.ops  = &tegra_sdhci_ops,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 1c828e0e9905..a7b5602ef6f7 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -3734,14 +3734,21 @@ int sdhci_setup_host(struct sdhci_host *host)
+ 	    mmc_gpio_get_cd(host->mmc) < 0)
+ 		mmc->caps |= MMC_CAP_NEEDS_POLL;
+ 
+-	/* If vqmmc regulator and no 1.8V signalling, then there's no UHS */
+ 	if (!IS_ERR(mmc->supply.vqmmc)) {
+ 		ret = regulator_enable(mmc->supply.vqmmc);
++
++		/* If vqmmc provides no 1.8V signalling, then there's no UHS */
+ 		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 1700000,
+ 						    1950000))
+ 			host->caps1 &= ~(SDHCI_SUPPORT_SDR104 |
+ 					 SDHCI_SUPPORT_SDR50 |
+ 					 SDHCI_SUPPORT_DDR50);
++
++		/* In eMMC case vqmmc might be a fixed 1.8V regulator */
++		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 2700000,
++						    3600000))
++			host->flags &= ~SDHCI_SIGNALING_330;
++
+ 		if (ret) {
+ 			pr_warn("%s: Failed to enable vqmmc regulator: %d\n",
+ 				mmc_hostname(mmc), ret);
+diff --git a/drivers/mtd/maps/solutionengine.c b/drivers/mtd/maps/solutionengine.c
+index bb580bc16445..c07f21b20463 100644
+--- a/drivers/mtd/maps/solutionengine.c
++++ b/drivers/mtd/maps/solutionengine.c
+@@ -59,9 +59,9 @@ static int __init init_soleng_maps(void)
+ 			return -ENXIO;
+ 		}
+ 	}
+-	printk(KERN_NOTICE "Solution Engine: Flash at 0x%08lx, EPROM at 0x%08lx\n",
+-	       soleng_flash_map.phys & 0x1fffffff,
+-	       soleng_eprom_map.phys & 0x1fffffff);
++	printk(KERN_NOTICE "Solution Engine: Flash at 0x%pap, EPROM at 0x%pap\n",
++	       &soleng_flash_map.phys,
++	       &soleng_eprom_map.phys);
+ 	flash_mtd->owner = THIS_MODULE;
+ 
+ 	eprom_mtd = do_map_probe("map_rom", &soleng_eprom_map);
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index cd67c85cc87d..02389528f622 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -160,8 +160,12 @@ static ssize_t mtdchar_read(struct file *file, char __user *buf, size_t count,
+ 
+ 	pr_debug("MTD_read\n");
+ 
+-	if (*ppos + count > mtd->size)
+-		count = mtd->size - *ppos;
++	if (*ppos + count > mtd->size) {
++		if (*ppos < mtd->size)
++			count = mtd->size - *ppos;
++		else
++			count = 0;
++	}
+ 
+ 	if (!count)
+ 		return 0;
+@@ -246,7 +250,7 @@ static ssize_t mtdchar_write(struct file *file, const char __user *buf, size_t c
+ 
+ 	pr_debug("MTD_write\n");
+ 
+-	if (*ppos == mtd->size)
++	if (*ppos >= mtd->size)
+ 		return -ENOSPC;
+ 
+ 	if (*ppos + count > mtd->size)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index cc1e4f820e64..533094233659 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -289,7 +289,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+ 	struct page *pages = NULL;
+ 	dma_addr_t pages_dma;
+ 	gfp_t gfp;
+-	int order, ret;
++	int order;
+ 
+ again:
+ 	order = alloc_order;
+@@ -316,10 +316,9 @@ again:
+ 	/* Map the pages */
+ 	pages_dma = dma_map_page(pdata->dev, pages, 0,
+ 				 PAGE_SIZE << order, DMA_FROM_DEVICE);
+-	ret = dma_mapping_error(pdata->dev, pages_dma);
+-	if (ret) {
++	if (dma_mapping_error(pdata->dev, pages_dma)) {
+ 		put_page(pages);
+-		return ret;
++		return -ENOMEM;
+ 	}
+ 
+ 	pa->pages = pages;
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 929d485a3a2f..e088dedc1747 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -493,6 +493,9 @@ static void cn23xx_pf_setup_global_output_regs(struct octeon_device *oct)
+ 	for (q_no = srn; q_no < ern; q_no++) {
+ 		reg_val = octeon_read_csr(oct, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+index 9338a0008378..1f8b7f651254 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+@@ -165,6 +165,9 @@ static void cn23xx_vf_setup_global_output_regs(struct octeon_device *oct)
+ 		reg_val =
+ 		    octeon_read_csr(oct, CN23XX_VF_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 6d7404f66f84..c9a061e707c4 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1753,7 +1753,10 @@ static int gmac_open(struct net_device *netdev)
+ 	phy_start(netdev->phydev);
+ 
+ 	err = geth_resize_freeq(port);
+-	if (err) {
++	/* It's fine if it's just busy, the other port has set up
++	 * the freeq in that case.
++	 */
++	if (err && (err != -EBUSY)) {
+ 		netdev_err(netdev, "could not resize freeq\n");
+ 		goto err_stop_phy;
+ 	}
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index ff92ab1daeb8..1e9d882c04ef 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -4500,7 +4500,7 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 				port_res->max_vfs += le16_to_cpu(pcie->num_vfs);
+ 			}
+ 		}
+-		return status;
++		goto err;
+ 	}
+ 
+ 	pcie = be_get_pcie_desc(resp->func_param, desc_count,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 25a73bb2e642..9d69621f5ab4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3081,7 +3081,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	priv->dev = &pdev->dev;
+ 	priv->netdev = netdev;
+ 	priv->ae_handle = handle;
+-	priv->ae_handle->reset_level = HNAE3_NONE_RESET;
+ 	priv->ae_handle->last_reset_time = jiffies;
+ 	priv->tx_timeout_count = 0;
+ 
+@@ -3102,6 +3101,11 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	/* Carrier off reporting is important to ethtool even BEFORE open */
+ 	netif_carrier_off(netdev);
+ 
++	if (handle->flags & HNAE3_SUPPORT_VF)
++		handle->reset_level = HNAE3_VF_RESET;
++	else
++		handle->reset_level = HNAE3_FUNC_RESET;
++
+ 	ret = hns3_get_ring_config(priv);
+ 	if (ret) {
+ 		ret = -ENOMEM;
+@@ -3418,7 +3422,7 @@ static int hns3_reset_notify_down_enet(struct hnae3_handle *handle)
+ 	struct net_device *ndev = kinfo->netdev;
+ 
+ 	if (!netif_running(ndev))
+-		return -EIO;
++		return 0;
+ 
+ 	return hns3_nic_net_stop(ndev);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 6fd7ea8074b0..13f43b74fd6d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2825,15 +2825,13 @@ static void hclge_clear_reset_cause(struct hclge_dev *hdev)
+ static void hclge_reset(struct hclge_dev *hdev)
+ {
+ 	/* perform reset of the stack & ae device for a client */
+-
++	rtnl_lock();
+ 	hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);
+ 
+ 	if (!hclge_reset_wait(hdev)) {
+-		rtnl_lock();
+ 		hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
+ 		hclge_reset_ae_dev(hdev->ae_dev);
+ 		hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
+-		rtnl_unlock();
+ 
+ 		hclge_clear_reset_cause(hdev);
+ 	} else {
+@@ -2843,6 +2841,7 @@ static void hclge_reset(struct hclge_dev *hdev)
+ 	}
+ 
+ 	hclge_notify_client(hdev, HNAE3_UP_CLIENT);
++	rtnl_unlock();
+ }
+ 
+ static void hclge_reset_event(struct hnae3_handle *handle)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 0319ed9ef8b8..f7f08e3fa761 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5011,6 +5011,12 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			(unsigned long)of_device_get_match_data(&pdev->dev);
+ 	}
+ 
++	/* multi queue mode isn't supported on PPV2.1, fallback to single
++	 * mode
++	 */
++	if (priv->hw_version == MVPP21)
++		queue_mode = MVPP2_QDIST_SINGLE_MODE;
++
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(base))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 384c1fa49081..f167f4eec3ff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -452,6 +452,7 @@ const char *mlx5_command_str(int command)
+ 	MLX5_COMMAND_STR_CASE(SET_HCA_CAP);
+ 	MLX5_COMMAND_STR_CASE(QUERY_ISSI);
+ 	MLX5_COMMAND_STR_CASE(SET_ISSI);
++	MLX5_COMMAND_STR_CASE(SET_DRIVER_VERSION);
+ 	MLX5_COMMAND_STR_CASE(CREATE_MKEY);
+ 	MLX5_COMMAND_STR_CASE(QUERY_MKEY);
+ 	MLX5_COMMAND_STR_CASE(DESTROY_MKEY);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index b994b80d5714..922811fb66e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -132,11 +132,11 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
+ 	delayed_event_start(priv);
+ 
+ 	dev_ctx->context = intf->add(dev);
+-	set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+-	if (intf->attach)
+-		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+-
+ 	if (dev_ctx->context) {
++		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
++		if (intf->attach)
++			set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
++
+ 		spin_lock_irq(&priv->ctx_lock);
+ 		list_add_tail(&dev_ctx->list, &priv->ctx_list);
+ 
+@@ -211,12 +211,17 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv
+ 	if (intf->attach) {
+ 		if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))
+ 			goto out;
+-		intf->attach(dev, dev_ctx->context);
++		if (intf->attach(dev, dev_ctx->context))
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+ 	} else {
+ 		if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
+ 			goto out;
+ 		dev_ctx->context = intf->add(dev);
++		if (!dev_ctx->context)
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 91f1209886ff..4c53957c918c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -658,6 +658,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ 	if (err)
+ 		goto miss_rule_err;
+ 
++	kvfree(flow_group_in);
+ 	return 0;
+ 
+ miss_rule_err:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6ddb2565884d..0031c510ab68 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1649,6 +1649,33 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
+ 	return version;
+ }
+ 
++static struct fs_fte *
++lookup_fte_locked(struct mlx5_flow_group *g,
++		  u32 *match_value,
++		  bool take_write)
++{
++	struct fs_fte *fte_tmp;
++
++	if (take_write)
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++	else
++		nested_down_read_ref_node(&g->node, FS_LOCK_PARENT);
++	fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value,
++					 rhash_fte);
++	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
++		fte_tmp = NULL;
++		goto out;
++	}
++
++	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
++out:
++	if (take_write)
++		up_write_ref_node(&g->node);
++	else
++		up_read_ref_node(&g->node);
++	return fte_tmp;
++}
++
+ static struct mlx5_flow_handle *
+ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 		       struct list_head *match_head,
+@@ -1671,10 +1698,6 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 	if (IS_ERR(fte))
+ 		return  ERR_PTR(-ENOMEM);
+ 
+-	list_for_each_entry(iter, match_head, list) {
+-		nested_down_read_ref_node(&iter->g->node, FS_LOCK_PARENT);
+-	}
+-
+ search_again_locked:
+ 	version = matched_fgs_get_version(match_head);
+ 	/* Try to find a fg that already contains a matching fte */
+@@ -1682,20 +1705,9 @@ search_again_locked:
+ 		struct fs_fte *fte_tmp;
+ 
+ 		g = iter->g;
+-		fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, spec->match_value,
+-						 rhash_fte);
+-		if (!fte_tmp || !tree_get_node(&fte_tmp->node))
++		fte_tmp = lookup_fte_locked(g, spec->match_value, take_write);
++		if (!fte_tmp)
+ 			continue;
+-
+-		nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+-		if (!take_write) {
+-			list_for_each_entry(iter, match_head, list)
+-				up_read_ref_node(&iter->g->node);
+-		} else {
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+-		}
+-
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte_tmp);
+ 		up_write_ref_node(&fte_tmp->node);
+@@ -1704,19 +1716,6 @@ search_again_locked:
+ 		return rule;
+ 	}
+ 
+-	/* No group with matching fte found. Try to add a new fte to any
+-	 * matching fg.
+-	 */
+-
+-	if (!take_write) {
+-		list_for_each_entry(iter, match_head, list)
+-			up_read_ref_node(&iter->g->node);
+-		list_for_each_entry(iter, match_head, list)
+-			nested_down_write_ref_node(&iter->g->node,
+-						   FS_LOCK_PARENT);
+-		take_write = true;
+-	}
+-
+ 	/* Check the ft version, for case that new flow group
+ 	 * was added while the fgs weren't locked
+ 	 */
+@@ -1728,27 +1727,30 @@ search_again_locked:
+ 	/* Check the fgs version, for case the new FTE with the
+ 	 * same values was added while the fgs weren't locked
+ 	 */
+-	if (version != matched_fgs_get_version(match_head))
++	if (version != matched_fgs_get_version(match_head)) {
++		take_write = true;
+ 		goto search_again_locked;
++	}
+ 
+ 	list_for_each_entry(iter, match_head, list) {
+ 		g = iter->g;
+ 
+ 		if (!g->node.active)
+ 			continue;
++
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++
+ 		err = insert_fte(g, fte);
+ 		if (err) {
++			up_write_ref_node(&g->node);
+ 			if (err == -ENOSPC)
+ 				continue;
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+ 			kmem_cache_free(steering->ftes_cache, fte);
+ 			return ERR_PTR(err);
+ 		}
+ 
+ 		nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
+-		list_for_each_entry(iter, match_head, list)
+-			up_write_ref_node(&iter->g->node);
++		up_write_ref_node(&g->node);
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte);
+ 		up_write_ref_node(&fte->node);
+@@ -1757,8 +1759,6 @@ search_again_locked:
+ 	}
+ 	rule = ERR_PTR(-ENOENT);
+ out:
+-	list_for_each_entry(iter, match_head, list)
+-		up_write_ref_node(&iter->g->node);
+ 	kmem_cache_free(steering->ftes_cache, fte);
+ 	return rule;
+ }
+@@ -1797,6 +1797,8 @@ search_again_locked:
+ 	if (err) {
+ 		if (take_write)
+ 			up_write_ref_node(&ft->node);
++		else
++			up_read_ref_node(&ft->node);
+ 		return ERR_PTR(err);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index d39b0b7011b2..9f39aeca863f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -331,9 +331,17 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
+ 	add_timer(&health->timer);
+ }
+ 
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev)
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
+ {
+ 	struct mlx5_core_health *health = &dev->priv.health;
++	unsigned long flags;
++
++	if (disable_health) {
++		spin_lock_irqsave(&health->wq_lock, flags);
++		set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
++		set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
++		spin_unlock_irqrestore(&health->wq_lock, flags);
++	}
+ 
+ 	del_timer_sync(&health->timer);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 615005e63819..76e6ca87db11 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -874,8 +874,10 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	priv->numa_node = dev_to_node(&dev->pdev->dev);
+ 
+ 	priv->dbg_root = debugfs_create_dir(dev_name(&pdev->dev), mlx5_debugfs_root);
+-	if (!priv->dbg_root)
++	if (!priv->dbg_root) {
++		dev_err(&pdev->dev, "Cannot create debugfs dir, aborting\n");
+ 		return -ENOMEM;
++	}
+ 
+ 	err = mlx5_pci_enable_device(dev);
+ 	if (err) {
+@@ -924,7 +926,7 @@ static void mlx5_pci_close(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	pci_clear_master(dev->pdev);
+ 	release_bar(dev->pdev);
+ 	mlx5_pci_disable_device(dev);
+-	debugfs_remove(priv->dbg_root);
++	debugfs_remove_recursive(priv->dbg_root);
+ }
+ 
+ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+@@ -1266,7 +1268,7 @@ err_cleanup_once:
+ 		mlx5_cleanup_once(dev);
+ 
+ err_stop_poll:
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, boot);
+ 	if (mlx5_cmd_teardown_hca(dev)) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+ 		goto out_err;
+@@ -1325,7 +1327,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
+ 	mlx5_free_irq_vectors(dev);
+ 	if (cleanup)
+ 		mlx5_cleanup_once(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, cleanup);
+ 	err = mlx5_cmd_teardown_hca(dev);
+ 	if (err) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+@@ -1587,7 +1589,7 @@ static int mlx5_try_fast_unload(struct mlx5_core_dev *dev)
+ 	 * with the HCA, so the health polll is no longer needed.
+ 	 */
+ 	mlx5_drain_health_wq(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, false);
+ 
+ 	ret = mlx5_cmd_force_teardown_hca(dev);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index c8c315eb5128..d838af9539b1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,9 +39,9 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+ {
+-	return (u32)wq->fbc.frag_sz_m1 + 1;
++	return wq->fbc.frag_sz_m1 + 1;
+ }
+ 
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 0b47126815b6..16476cc1a602 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,7 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+index 152283d7e59c..4a540c5e27fe 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+@@ -236,16 +236,20 @@ static int nfp_pcie_sriov_read_nfd_limit(struct nfp_pf *pf)
+ 	int err;
+ 
+ 	pf->limit_vfs = nfp_rtsym_read_le(pf->rtbl, "nfd_vf_cfg_max_vfs", &err);
+-	if (!err)
+-		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err) {
++		/* For backwards compatibility if symbol not found allow all */
++		pf->limit_vfs = ~0;
++		if (err == -ENOENT)
++			return 0;
+ 
+-	pf->limit_vfs = ~0;
+-	/* Allow any setting for backwards compatibility if symbol not found */
+-	if (err == -ENOENT)
+-		return 0;
++		nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
++		return err;
++	}
+ 
+-	nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
+-	return err;
++	err = pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err)
++		nfp_warn(pf->cpp, "Failed to set VF count in sysfs: %d\n", err);
++	return 0;
+ }
+ 
+ static int nfp_pcie_sriov_enable(struct pci_dev *pdev, int num_vfs)
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index c2a9e64bc57b..bfccc1955907 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1093,7 +1093,7 @@ static bool nfp_net_xdp_complete(struct nfp_net_tx_ring *tx_ring)
+  * @dp:		NFP Net data path struct
+  * @tx_ring:	TX ring structure
+  *
+- * Assumes that the device is stopped
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void
+ nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring)
+@@ -1295,13 +1295,18 @@ static void nfp_net_rx_give_one(const struct nfp_net_dp *dp,
+  * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable
+  * @rx_ring:	RX ring structure
+  *
+- * Warning: Do *not* call if ring buffers were never put on the FW freelist
+- *	    (i.e. device was not enabled)!
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring)
+ {
+ 	unsigned int wr_idx, last_idx;
+ 
++	/* wr_p == rd_p means ring was never fed FL bufs.  RX rings are always
++	 * kept at cnt - 1 FL bufs.
++	 */
++	if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0)
++		return;
++
+ 	/* Move the empty entry to the end of the list */
+ 	wr_idx = D_IDX(rx_ring, rx_ring->wr_p);
+ 	last_idx = rx_ring->cnt - 1;
+@@ -2524,6 +2529,8 @@ static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx)
+ /**
+  * nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP
+  * @nn:      NFP Net device to reconfigure
++ *
++ * Warning: must be fully idempotent.
+  */
+ static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
+ {
+diff --git a/drivers/net/ethernet/qualcomm/qca_7k.c b/drivers/net/ethernet/qualcomm/qca_7k.c
+index ffe7a16bdfc8..6c8543fb90c0 100644
+--- a/drivers/net/ethernet/qualcomm/qca_7k.c
++++ b/drivers/net/ethernet/qualcomm/qca_7k.c
+@@ -45,34 +45,33 @@ qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
+ {
+ 	__be16 rx_data;
+ 	__be16 tx_data;
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
++	*result = 0;
++
++	transfer[0].tx_buf = &tx_data;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = &rx_data;
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = &rx_data;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -86,35 +85,32 @@ int
+ qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
+ {
+ 	__be16 tx_data[2];
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
+ 	tx_data[1] = cpu_to_be16(value);
+ 
++	transfer[0].tx_buf = &tx_data[0];
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = &tx_data[1];
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = &tx_data[1];
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 206f0266463e..66b775d462fd 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -99,22 +99,24 @@ static u32
+ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ {
+ 	__be16 cmd;
+-	struct spi_message *msg = &qca->spi_msg2;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_message msg;
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = src;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -125,17 +127,20 @@ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
++	transfer.tx_buf = src;
++	transfer.len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != len)) {
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -146,23 +151,25 @@ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg2;
++	struct spi_message msg;
+ 	__be16 cmd;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = dst;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -173,17 +180,20 @@ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ static u32
+ qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	transfer.rx_buf = dst;
++	transfer.len = len;
+ 
+-	if (ret || (msg->actual_length != len)) {
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
++
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -195,19 +205,23 @@ static int
+ qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
+ {
+ 	__be16 tx_data;
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(cmd);
+-	transfer->len = sizeof(tx_data);
+-	transfer->tx_buf = &tx_data;
+-	transfer->rx_buf = NULL;
++	transfer.len = sizeof(cmd);
++	transfer.tx_buf = &tx_data;
++	spi_message_add_tail(&transfer, &msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -835,16 +849,6 @@ qcaspi_netdev_setup(struct net_device *dev)
+ 	qca = netdev_priv(dev);
+ 	memset(qca, 0, sizeof(struct qcaspi));
+ 
+-	memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
+-	memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
+-
+-	spi_message_init(&qca->spi_msg1);
+-	spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
+-
+-	spi_message_init(&qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
+-
+ 	memset(&qca->txr, 0, sizeof(qca->txr));
+ 	qca->txr.count = TX_RING_MAX_LEN;
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index fc4beb1b32d1..fc0e98726b36 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -83,11 +83,6 @@ struct qcaspi {
+ 	struct tx_ring txr;
+ 	struct qcaspi_stats stats;
+ 
+-	struct spi_message spi_msg1;
+-	struct spi_message spi_msg2;
+-	struct spi_transfer spi_xfer1;
+-	struct spi_transfer spi_xfer2[2];
+-
+ 	u8 *rx_buffer;
+ 	u32 buffer_size;
+ 	u8 sync;
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index 9b09c9d0d0fb..5f0366a125e2 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -192,7 +192,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 	priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param),
+ 				ALIGNMENT_OF_UCC_HDLC_PRAM);
+ 
+-	if (priv->ucc_pram_offset < 0) {
++	if (IS_ERR_VALUE(priv->ucc_pram_offset)) {
+ 		dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_bd;
+@@ -230,14 +230,14 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 
+ 	/* Alloc riptr, tiptr */
+ 	riptr = qe_muram_alloc(32, 32);
+-	if (riptr < 0) {
++	if (IS_ERR_VALUE(riptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_skbuff;
+ 	}
+ 
+ 	tiptr = qe_muram_alloc(32, 32);
+-	if (tiptr < 0) {
++	if (IS_ERR_VALUE(tiptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_riptr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 45ea32796cda..92b38a21cd10 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -660,7 +660,7 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
+ 	}
+ }
+ 
+-static inline u8 iwl_pcie_get_cmd_index(struct iwl_txq *q, u32 index)
++static inline u8 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
+ {
+ 	return index & (q->n_window - 1);
+ }
+@@ -730,9 +730,13 @@ static inline void iwl_stop_queue(struct iwl_trans *trans,
+ 
+ static inline bool iwl_queue_used(const struct iwl_txq *q, int i)
+ {
+-	return q->write_ptr >= q->read_ptr ?
+-		(i >= q->read_ptr && i < q->write_ptr) :
+-		!(i < q->read_ptr && i >= q->write_ptr);
++	int index = iwl_pcie_get_cmd_index(q, i);
++	int r = iwl_pcie_get_cmd_index(q, q->read_ptr);
++	int w = iwl_pcie_get_cmd_index(q, q->write_ptr);
++
++	return w >= r ?
++		(index >= r && index < w) :
++		!(index < r && index >= w);
+ }
+ 
+ static inline bool iwl_is_rfkill_set(struct iwl_trans *trans)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 473fe7ccb07c..11bd7ce2be8e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1225,9 +1225,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 	struct iwl_txq *txq = trans_pcie->txq[txq_id];
+ 	unsigned long flags;
+ 	int nfreed = 0;
++	u16 r;
+ 
+ 	lockdep_assert_held(&txq->lock);
+ 
++	idx = iwl_pcie_get_cmd_index(txq, idx);
++	r = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
++
+ 	if ((idx >= TFD_QUEUE_SIZE_MAX) || (!iwl_queue_used(txq, idx))) {
+ 		IWL_ERR(trans,
+ 			"%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n",
+@@ -1236,12 +1240,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 		return;
+ 	}
+ 
+-	for (idx = iwl_queue_inc_wrap(idx); txq->read_ptr != idx;
+-	     txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr)) {
++	for (idx = iwl_queue_inc_wrap(idx); r != idx;
++	     r = iwl_queue_inc_wrap(r)) {
++		txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr);
+ 
+ 		if (nfreed++ > 0) {
+ 			IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
+-				idx, txq->write_ptr, txq->read_ptr);
++				idx, txq->write_ptr, r);
+ 			iwl_force_nmi(trans);
+ 		}
+ 	}
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 9dd2ca62d84a..c2b6aa1d485f 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -87,8 +87,7 @@ struct netfront_cb {
+ /* IRQ name is queue name with "-tx" or "-rx" appended */
+ #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+ 
+-static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
+-static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
++static DECLARE_WAIT_QUEUE_HEAD(module_wq);
+ 
+ struct netfront_stats {
+ 	u64			packets;
+@@ -1331,11 +1330,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+ 	netif_carrier_off(netdev);
+ 
+ 	xenbus_switch_state(dev, XenbusStateInitialising);
+-	wait_event(module_load_q,
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateClosed &&
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateUnknown);
++	wait_event(module_wq,
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateClosed &&
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateUnknown);
+ 	return netdev;
+ 
+  exit:
+@@ -1603,14 +1602,16 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ {
+ 	unsigned short i;
+ 	int err = 0;
++	char *devid;
+ 
+ 	spin_lock_init(&queue->tx_lock);
+ 	spin_lock_init(&queue->rx_lock);
+ 
+ 	timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
+ 
+-	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+-		 queue->info->netdev->name, queue->id);
++	devid = strrchr(queue->info->xbdev->nodename, '/') + 1;
++	snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
++		 devid, queue->id);
+ 
+ 	/* Initialise tx_skbs as a free chain containing every entry. */
+ 	queue->tx_skb_freelist = 0;
+@@ -2007,15 +2008,14 @@ static void netback_changed(struct xenbus_device *dev,
+ 
+ 	dev_dbg(&dev->dev, "%s\n", xenbus_strstate(backend_state));
+ 
++	wake_up_all(&module_wq);
++
+ 	switch (backend_state) {
+ 	case XenbusStateInitialising:
+ 	case XenbusStateInitialised:
+ 	case XenbusStateReconfiguring:
+ 	case XenbusStateReconfigured:
+-		break;
+-
+ 	case XenbusStateUnknown:
+-		wake_up_all(&module_unload_q);
+ 		break;
+ 
+ 	case XenbusStateInitWait:
+@@ -2031,12 +2031,10 @@ static void netback_changed(struct xenbus_device *dev,
+ 		break;
+ 
+ 	case XenbusStateClosed:
+-		wake_up_all(&module_unload_q);
+ 		if (dev->state == XenbusStateClosed)
+ 			break;
+ 		/* Missed the backend's CLOSING state -- fallthrough */
+ 	case XenbusStateClosing:
+-		wake_up_all(&module_unload_q);
+ 		xenbus_frontend_closed(dev);
+ 		break;
+ 	}
+@@ -2144,14 +2142,14 @@ static int xennet_remove(struct xenbus_device *dev)
+ 
+ 	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+ 		xenbus_switch_state(dev, XenbusStateClosing);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosing ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateUnknown);
+ 
+ 		xenbus_switch_state(dev, XenbusStateClosed);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosed ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 66ec5985c9f3..69fb62feb833 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1741,6 +1741,8 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ 		nvme_rdma_stop_io_queues(ctrl);
+ 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ 					nvme_cancel_request, &ctrl->ctrl);
++		if (shutdown)
++			nvme_start_queues(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, shutdown);
+ 	}
+ 
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 8c42b3a8c420..64c7596a46a1 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -209,22 +209,24 @@ static void nvmet_file_execute_discard(struct nvmet_req *req)
+ {
+ 	int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
+ 	struct nvme_dsm_range range;
+-	loff_t offset;
+-	loff_t len;
+-	int i, ret;
++	loff_t offset, len;
++	u16 ret;
++	int i;
+ 
+ 	for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
+-		if (nvmet_copy_from_sgl(req, i * sizeof(range), &range,
+-					sizeof(range)))
++		ret = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
++					sizeof(range));
++		if (ret)
+ 			break;
+ 		offset = le64_to_cpu(range.slba) << req->ns->blksize_shift;
+ 		len = le32_to_cpu(range.nlb) << req->ns->blksize_shift;
+-		ret = vfs_fallocate(req->ns->file, mode, offset, len);
+-		if (ret)
++		if (vfs_fallocate(req->ns->file, mode, offset, len)) {
++			ret = NVME_SC_INTERNAL | NVME_SC_DNR;
+ 			break;
++		}
+ 	}
+ 
+-	nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
++	nvmet_req_complete(req, ret);
+ }
+ 
+ static void nvmet_file_dsm_work(struct work_struct *w)
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 466e3c8582f0..53a51c6911eb 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -118,6 +118,9 @@ void of_populate_phandle_cache(void)
+ 		if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL)
+ 			phandles++;
+ 
++	if (!phandles)
++		goto out;
++
+ 	cache_entries = roundup_pow_of_two(phandles);
+ 	phandle_cache_mask = cache_entries - 1;
+ 
+@@ -719,6 +722,31 @@ struct device_node *of_get_next_available_child(const struct device_node *node,
+ }
+ EXPORT_SYMBOL(of_get_next_available_child);
+ 
++/**
++ * of_get_compatible_child - Find compatible child node
++ * @parent:	parent node
++ * @compatible:	compatible string
++ *
++ * Lookup child node whose compatible property contains the given compatible
++ * string.
++ *
++ * Returns a node pointer with refcount incremented, use of_node_put() on it
++ * when done; or NULL if not found.
++ */
++struct device_node *of_get_compatible_child(const struct device_node *parent,
++				const char *compatible)
++{
++	struct device_node *child;
++
++	for_each_child_of_node(parent, child) {
++		if (of_device_is_compatible(child, compatible))
++			break;
++	}
++
++	return child;
++}
++EXPORT_SYMBOL(of_get_compatible_child);
++
+ /**
+  *	of_get_child_by_name - Find the child node by name for a given parent
+  *	@node:	parent node
+diff --git a/drivers/parport/parport_sunbpp.c b/drivers/parport/parport_sunbpp.c
+index 01cf1c1a841a..8de329546b82 100644
+--- a/drivers/parport/parport_sunbpp.c
++++ b/drivers/parport/parport_sunbpp.c
+@@ -286,12 +286,16 @@ static int bpp_probe(struct platform_device *op)
+ 
+ 	ops = kmemdup(&parport_sunbpp_ops, sizeof(struct parport_operations),
+ 		      GFP_KERNEL);
+-        if (!ops)
++	if (!ops) {
++		err = -ENOMEM;
+ 		goto out_unmap;
++	}
+ 
+ 	dprintk(("register_port\n"));
+-	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops)))
++	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) {
++		err = -ENOMEM;
+ 		goto out_free_ops;
++	}
+ 
+ 	p->size = size;
+ 	p->dev = &op->dev;
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index a2e88386af28..0fbf612b8ef2 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -303,6 +303,9 @@ int pcie_aer_get_firmware_first(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev))
+ 		return 0;
+ 
++	if (pcie_ports_native)
++		return 0;
++
+ 	if (!dev->__aer_firmware_first_valid)
+ 		aer_set_firmware_first(dev);
+ 	return dev->__aer_firmware_first;
+@@ -323,6 +326,9 @@ bool aer_acpi_firmware_first(void)
+ 		.firmware_first	= 0,
+ 	};
+ 
++	if (pcie_ports_native)
++		return false;
++
+ 	if (!parsed) {
+ 		apei_hest_parse(aer_hest_parse, &info);
+ 		aer_firmware_first = info.firmware_first;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7622.c b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+index 4c4740ffeb9c..3ea685634b6c 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7622.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+@@ -1537,7 +1537,7 @@ static int mtk_build_groups(struct mtk_pinctrl *hw)
+ 		err = pinctrl_generic_add_group(hw->pctrl, group->name,
+ 						group->pins, group->num_pins,
+ 						group->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register group %s\n",
+ 				group->name);
+ 			return err;
+@@ -1558,7 +1558,7 @@ static int mtk_build_functions(struct mtk_pinctrl *hw)
+ 						  func->group_names,
+ 						  func->num_group_names,
+ 						  func->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register function %s\n",
+ 				func->name);
+ 			return err;
+diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
+index 717c0f4449a0..f76edf664539 100644
+--- a/drivers/pinctrl/pinctrl-rza1.c
++++ b/drivers/pinctrl/pinctrl-rza1.c
+@@ -1006,6 +1006,7 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	const char *grpname;
+ 	const char **fngrps;
+ 	int ret, npins;
++	int gsel, fsel;
+ 
+ 	npins = rza1_dt_node_pin_count(np);
+ 	if (npins < 0) {
+@@ -1055,18 +1056,19 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	fngrps[0] = grpname;
+ 
+ 	mutex_lock(&rza1_pctl->mutex);
+-	ret = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
+-					NULL);
+-	if (ret) {
++	gsel = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
++					 NULL);
++	if (gsel < 0) {
+ 		mutex_unlock(&rza1_pctl->mutex);
+-		return ret;
++		return gsel;
+ 	}
+ 
+-	ret = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
+-					  mux_confs);
+-	if (ret)
++	fsel = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
++					   mux_confs);
++	if (fsel < 0) {
++		ret = fsel;
+ 		goto remove_group;
+-	mutex_unlock(&rza1_pctl->mutex);
++	}
+ 
+ 	dev_info(rza1_pctl->dev, "Parsed function and group %s with %d pins\n",
+ 				 grpname, npins);
+@@ -1083,15 +1085,15 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	(*map)->data.mux.group = np->name;
+ 	(*map)->data.mux.function = np->name;
+ 	*num_maps = 1;
++	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	return 0;
+ 
+ remove_function:
+-	mutex_lock(&rza1_pctl->mutex);
+-	pinmux_generic_remove_last_function(pctldev);
++	pinmux_generic_remove_function(pctldev, fsel);
+ 
+ remove_group:
+-	pinctrl_generic_remove_last_group(pctldev);
++	pinctrl_generic_remove_group(pctldev, gsel);
+ 	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	dev_info(rza1_pctl->dev, "Unable to parse function and group %s\n",
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 0e22f52b2a19..2155a30c282b 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -250,22 +250,30 @@ static int msm_config_group_get(struct pinctrl_dev *pctldev,
+ 	/* Convert register value to pinconf value */
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = arg == MSM_NO_PULL;
++		if (arg != MSM_NO_PULL)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = arg == MSM_PULL_DOWN;
++		if (arg != MSM_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_BUS_HOLD:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			return -ENOTSUPP;
+ 
+-		arg = arg == MSM_KEEPER;
++		if (arg != MSM_KEEPER)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			arg = arg == MSM_PULL_UP_NO_KEEPER;
+ 		else
+ 			arg = arg == MSM_PULL_UP;
++		if (!arg)
++			return -EINVAL;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = msm_regval_to_drive(arg);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index 3e66e0d10010..cf82db78e69e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -390,31 +390,47 @@ static int pmic_gpio_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_CMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_CMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_SOURCE:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pad->pullup == PMIC_GPIO_PULL_DOWN;
++		if (pad->pullup != PMIC_GPIO_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup = PMIC_GPIO_PULL_DISABLE;
++		if (pad->pullup != PMIC_GPIO_PULL_DISABLE)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pad->pullup == PMIC_GPIO_PULL_UP_30;
++		if (pad->pullup != PMIC_GPIO_PULL_UP_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index eef76bfa5d73..e50941c3ba54 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -34,6 +34,7 @@
+ #define TOSHIBA_ACPI_VERSION	"0.24"
+ #define PROC_INTERFACE_VERSION	1
+ 
++#include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+@@ -1682,7 +1683,7 @@ static const struct file_operations keys_proc_fops = {
+ 	.write		= keys_proc_write,
+ };
+ 
+-static int version_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused version_proc_show(struct seq_file *m, void *v)
+ {
+ 	seq_printf(m, "driver:                  %s\n", TOSHIBA_ACPI_VERSION);
+ 	seq_printf(m, "proc_interface:          %d\n", PROC_INTERFACE_VERSION);
+diff --git a/drivers/regulator/qcom_spmi-regulator.c b/drivers/regulator/qcom_spmi-regulator.c
+index 9817f1a75342..ba3d5e63ada6 100644
+--- a/drivers/regulator/qcom_spmi-regulator.c
++++ b/drivers/regulator/qcom_spmi-regulator.c
+@@ -1752,7 +1752,8 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 	const char *name;
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *node = pdev->dev.of_node;
+-	struct device_node *syscon;
++	struct device_node *syscon, *reg_node;
++	struct property *reg_prop;
+ 	int ret, lenp;
+ 	struct list_head *vreg_list;
+ 
+@@ -1774,16 +1775,19 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		syscon = of_parse_phandle(node, "qcom,saw-reg", 0);
+ 		saw_regmap = syscon_node_to_regmap(syscon);
+ 		of_node_put(syscon);
+-		if (IS_ERR(regmap))
++		if (IS_ERR(saw_regmap))
+ 			dev_err(dev, "ERROR reading SAW regmap\n");
+ 	}
+ 
+ 	for (reg = match->data; reg->name; reg++) {
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-slave", &lenp)) {
+-			continue;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-slave",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop)
++				continue;
+ 		}
+ 
+ 		vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL);
+@@ -1816,13 +1820,17 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		if (ret)
+ 			continue;
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-leader", &lenp)) {
+-			spmi_saw_ops = *(vreg->desc.ops);
+-			spmi_saw_ops.set_voltage_sel = \
+-				spmi_regulator_saw_set_voltage;
+-			vreg->desc.ops = &spmi_saw_ops;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-leader",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop) {
++				spmi_saw_ops = *(vreg->desc.ops);
++				spmi_saw_ops.set_voltage_sel =
++					spmi_regulator_saw_set_voltage;
++				vreg->desc.ops = &spmi_saw_ops;
++			}
+ 		}
+ 
+ 		config.dev = dev;
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index 2bf8e7c49f2a..e5ec59102b01 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -1370,7 +1370,6 @@ static const struct rproc_hexagon_res sdm845_mss = {
+ 	.hexagon_mba_image = "mba.mbn",
+ 	.proxy_clk_names = (char*[]){
+ 			"xo",
+-			"axis2",
+ 			"prng",
+ 			NULL
+ 	},
+diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c
+index 4db177bc89bc..fdeac1946429 100644
+--- a/drivers/reset/reset-imx7.c
++++ b/drivers/reset/reset-imx7.c
+@@ -80,7 +80,7 @@ static int imx7_reset_set(struct reset_controller_dev *rcdev,
+ {
+ 	struct imx7_src *imx7src = to_imx7_src(rcdev);
+ 	const struct imx7_src_signal *signal = &imx7_src_signals[id];
+-	unsigned int value = 0;
++	unsigned int value = assert ? signal->bit : 0;
+ 
+ 	switch (id) {
+ 	case IMX7_RESET_PCIEPHY:
+diff --git a/drivers/rtc/rtc-bq4802.c b/drivers/rtc/rtc-bq4802.c
+index d768f6747961..113493b52149 100644
+--- a/drivers/rtc/rtc-bq4802.c
++++ b/drivers/rtc/rtc-bq4802.c
+@@ -162,6 +162,10 @@ static int bq4802_probe(struct platform_device *pdev)
+ 	} else if (p->r->flags & IORESOURCE_MEM) {
+ 		p->regs = devm_ioremap(&pdev->dev, p->r->start,
+ 					resource_size(p->r));
++		if (!p->regs){
++			err = -ENOMEM;
++			goto out;
++		}
+ 		p->read = bq4802_read_mem;
+ 		p->write = bq4802_write_mem;
+ 	} else {
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index d01ac29fd986..ffdb78421a25 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -3530,13 +3530,14 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
+ 	qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
+ 	if (atomic_read(&queue->set_pci_flags_count))
+ 		qdio_flags |= QDIO_FLAG_PCI_OUT;
++	atomic_add(count, &queue->used_buffers);
++
+ 	rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags,
+ 		     queue->queue_no, index, count);
+ 	if (queue->card->options.performance_stats)
+ 		queue->card->perf_stats.outbound_do_qdio_time +=
+ 			qeth_get_micros() -
+ 			queue->card->perf_stats.outbound_do_qdio_start_time;
+-	atomic_add(count, &queue->used_buffers);
+ 	if (rc) {
+ 		queue->card->stats.tx_errors += count;
+ 		/* ignore temporary SIGA errors without busy condition */
+diff --git a/drivers/s390/net/qeth_core_sys.c b/drivers/s390/net/qeth_core_sys.c
+index c3f18afb368b..cfb659747693 100644
+--- a/drivers/s390/net/qeth_core_sys.c
++++ b/drivers/s390/net/qeth_core_sys.c
+@@ -426,6 +426,7 @@ static ssize_t qeth_dev_layer2_store(struct device *dev,
+ 	if (card->discipline) {
+ 		card->discipline->remove(card->gdev);
+ 		qeth_core_free_discipline(card);
++		card->options.layer2 = -1;
+ 	}
+ 
+ 	rc = qeth_core_load_discipline(card, newdis);
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index 3f3569ec5ce3..ddc7921ae5da 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -294,9 +294,11 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 	 * discovery, reverify or log them in.	Otherwise, log them out.
+ 	 * Skip ports which were never discovered.  These are the dNS port
+ 	 * and ports which were created by PLOGI.
++	 *
++	 * We don't need to use the _rcu variant here as the rport list
++	 * is protected by the disc mutex which is already held on entry.
+ 	 */
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
++	list_for_each_entry(rdata, &disc->rports, peers) {
+ 		if (!kref_get_unless_zero(&rdata->kref))
+ 			continue;
+ 		if (rdata->disc_id) {
+@@ -307,7 +309,6 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 		}
+ 		kref_put(&rdata->kref, fc_rport_destroy);
+ 	}
+-	rcu_read_unlock();
+ 	mutex_unlock(&disc->disc_mutex);
+ 	disc->disc_callback(lport, event);
+ 	mutex_lock(&disc->disc_mutex);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index d723fd1d7b26..cab1fb087e6a 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2976,7 +2976,7 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	struct lpfc_sli_ring  *pring;
+ 	u32 i, wait_cnt = 0;
+ 
+-	if (phba->sli_rev < LPFC_SLI_REV4)
++	if (phba->sli_rev < LPFC_SLI_REV4 || !phba->sli4_hba.nvme_wq)
+ 		return;
+ 
+ 	/* Cycle through all NVME rings and make sure all outstanding
+@@ -2985,6 +2985,9 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	for (i = 0; i < phba->cfg_nvme_io_channel; i++) {
+ 		pring = phba->sli4_hba.nvme_wq[i]->pring;
+ 
++		if (!pring)
++			continue;
++
+ 		/* Retrieve everything on the txcmplq */
+ 		while (!list_empty(&pring->txcmplq)) {
+ 			msleep(LPFC_XRI_EXCH_BUSY_WAIT_T1);
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 7271c9d885dd..5e5ec3363b44 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -402,6 +402,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
+ 
+ 		/* Process FCP command */
+ 		if (rc == 0) {
++			ctxp->rqb_buffer = NULL;
+ 			atomic_inc(&tgtp->rcv_fcp_cmd_out);
+ 			nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+ 			return;
+@@ -1116,8 +1117,17 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
+ 			 ctxp->oxid, ctxp->size, smp_processor_id());
+ 
++	if (!nvmebuf) {
++		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++				"6425 Defer rcv: no buffer xri x%x: "
++				"flg %x ste %x\n",
++				ctxp->oxid, ctxp->flag, ctxp->state);
++		return;
++	}
++
+ 	tgtp = phba->targetport->private;
+-	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
++	if (tgtp)
++		atomic_inc(&tgtp->rcv_fcp_cmd_defer);
+ 
+ 	/* Free the nvmebuf since a new buffer already replaced it */
+ 	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 70b2ee80d6bd..bf4bd71ab53f 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -364,11 +364,6 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ 	end = phdr_to_last_uncached_entry(phdr);
+ 	cached = phdr_to_last_cached_entry(phdr);
+ 
+-	if (smem->global_partition) {
+-		dev_err(smem->dev, "Already found the global partition\n");
+-		return -EINVAL;
+-	}
+-
+ 	while (hdr < end) {
+ 		if (hdr->canary != SMEM_PRIVATE_CANARY)
+ 			goto bad_canary;
+@@ -736,6 +731,11 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ 	bool found = false;
+ 	int i;
+ 
++	if (smem->global_partition) {
++		dev_err(smem->dev, "Already found the global partition\n");
++		return -EINVAL;
++	}
++
+ 	ptable = qcom_smem_get_ptable(smem);
+ 	if (IS_ERR(ptable))
+ 		return PTR_ERR(ptable);
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index f693bfe95ab9..a087464efdd7 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -485,6 +485,8 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 	dws->dma_inited = 0;
+ 	dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR);
+ 
++	spi_controller_set_devdata(master, dws);
++
+ 	ret = request_irq(dws->irq, dw_spi_irq, IRQF_SHARED, dev_name(dev),
+ 			  master);
+ 	if (ret < 0) {
+@@ -518,7 +520,6 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 		}
+ 	}
+ 
+-	spi_controller_set_devdata(master, dws);
+ 	ret = devm_spi_register_controller(dev, master);
+ 	if (ret) {
+ 		dev_err(&master->dev, "problem registering spi master\n");
+diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+index 396371728aa1..537d5bb5e294 100644
+--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
++++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+@@ -767,7 +767,7 @@ static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
+ 	for (i = 0; i < count; i++) {
+ 		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
+ 		dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
+-				 DMA_BIDIRECTIONAL);
++				 DMA_FROM_DEVICE);
+ 		skb_free_frag(vaddr);
+ 	}
+ }
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+index f0cefa1b7b0f..b20d34449ed4 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+@@ -439,16 +439,16 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ 	my_workqueue_init(alsa_stream);
+ 
+ 	ret = bcm2835_audio_open_connection(alsa_stream);
+-	if (ret) {
+-		ret = -1;
+-		goto exit;
+-	}
++	if (ret)
++		goto free_wq;
++
+ 	instance = alsa_stream->instance;
+ 	LOG_DBG(" instance (%p)\n", instance);
+ 
+ 	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
+ 		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n", instance->num_connections);
+-		return -EINTR;
++		ret = -EINTR;
++		goto free_wq;
+ 	}
+ 	vchi_service_use(instance->vchi_handle[0]);
+ 
+@@ -471,7 +471,11 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ unlock:
+ 	vchi_service_release(instance->vchi_handle[0]);
+ 	mutex_unlock(&instance->vchi_mutex);
+-exit:
++
++free_wq:
++	if (ret)
++		destroy_workqueue(alsa_stream->my_wq);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+index ce26741ae9d9..3f61d04c47ab 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+@@ -580,6 +580,7 @@ static int start_streaming(struct vb2_queue *vq, unsigned int count)
+ static void stop_streaming(struct vb2_queue *vq)
+ {
+ 	int ret;
++	unsigned long timeout;
+ 	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
+@@ -605,10 +606,10 @@ static void stop_streaming(struct vb2_queue *vq)
+ 				      sizeof(dev->capture.frame_count));
+ 
+ 	/* wait for last frame to complete */
+-	ret = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
+-	if (ret <= 0)
++	timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
++	if (timeout == 0)
+ 		v4l2_err(&dev->v4l2_dev,
+-			 "error %d waiting for frame completion\n", ret);
++			 "timed out waiting for frame completion\n");
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
+ 		 "disabling connection\n");
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+index f5b5ead6347c..51e5b04ff0f5 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+@@ -630,6 +630,7 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ {
+ 	struct mmal_msg_context *msg_context;
+ 	int ret;
++	unsigned long timeout;
+ 
+ 	/* payload size must not cause message to exceed max size */
+ 	if (payload_len >
+@@ -668,11 +669,11 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ 		return ret;
+ 	}
+ 
+-	ret = wait_for_completion_timeout(&msg_context->u.sync.cmplt, 3 * HZ);
+-	if (ret <= 0) {
+-		pr_err("error %d waiting for sync completion\n", ret);
+-		if (ret == 0)
+-			ret = -ETIME;
++	timeout = wait_for_completion_timeout(&msg_context->u.sync.cmplt,
++					      3 * HZ);
++	if (timeout == 0) {
++		pr_err("timed out waiting for sync completion\n");
++		ret = -ETIME;
+ 		/* todo: what happens if the message arrives after aborting */
+ 		release_msg_context(msg_context);
+ 		return ret;
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index bfb37f0be22f..863e86b9a424 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -124,7 +124,7 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 				dev_warn(&ofdev->dev, "unsupported reg-io-width (%d)\n",
+ 					 prop);
+ 				ret = -EINVAL;
+-				goto err_dispose;
++				goto err_unprepare;
+ 			}
+ 		}
+ 		port->flags |= UPF_IOREMAP;
+diff --git a/drivers/tty/tty_baudrate.c b/drivers/tty/tty_baudrate.c
+index 6ff8cdfc9d2a..3e827a3d48d5 100644
+--- a/drivers/tty/tty_baudrate.c
++++ b/drivers/tty/tty_baudrate.c
+@@ -157,18 +157,25 @@ void tty_termios_encode_baud_rate(struct ktermios *termios,
+ 	termios->c_ospeed = obaud;
+ 
+ #ifdef BOTHER
++	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
++		ibinput = 1;	/* An input speed was specified */
++
+ 	/* If the user asked for a precise weird speed give a precise weird
+ 	   answer. If they asked for a Bfoo speed they may have problems
+ 	   digesting non-exact replies so fuzz a bit */
+ 
+-	if ((termios->c_cflag & CBAUD) == BOTHER)
++	if ((termios->c_cflag & CBAUD) == BOTHER) {
+ 		oclose = 0;
++		if (!ibinput)
++			iclose = 0;
++	}
+ 	if (((termios->c_cflag >> IBSHIFT) & CBAUD) == BOTHER)
+ 		iclose = 0;
+-	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
+-		ibinput = 1;	/* An input speed was specified */
+ #endif
+ 	termios->c_cflag &= ~CBAUD;
++#ifdef IBSHIFT
++	termios->c_cflag &= ~(CBAUD << IBSHIFT);
++#endif
+ 
+ 	/*
+ 	 *	Our goal is to find a close match to the standard baud rate
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 75c4623ad779..f8ee32d9843a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -779,20 +779,9 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	}
+ 
+ 	if (acm->susp_count) {
+-		if (acm->putbuffer) {
+-			/* now to preserve order */
+-			usb_anchor_urb(acm->putbuffer->urb, &acm->delayed);
+-			acm->putbuffer = NULL;
+-		}
+ 		usb_anchor_urb(wb->urb, &acm->delayed);
+ 		spin_unlock_irqrestore(&acm->write_lock, flags);
+ 		return count;
+-	} else {
+-		if (acm->putbuffer) {
+-			/* at this point there is no good way to handle errors */
+-			acm_start_wb(acm, acm->putbuffer);
+-			acm->putbuffer = NULL;
+-		}
+ 	}
+ 
+ 	stat = acm_start_wb(acm, wb);
+@@ -803,66 +792,6 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	return count;
+ }
+ 
+-static void acm_tty_flush_chars(struct tty_struct *tty)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int err;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&acm->write_lock, flags);
+-
+-	cur = acm->putbuffer;
+-	if (!cur) /* nothing to do */
+-		goto out;
+-
+-	acm->putbuffer = NULL;
+-	err = usb_autopm_get_interface_async(acm->control);
+-	if (err < 0) {
+-		cur->use = 0;
+-		acm->putbuffer = cur;
+-		goto out;
+-	}
+-
+-	if (acm->susp_count)
+-		usb_anchor_urb(cur->urb, &acm->delayed);
+-	else
+-		acm_start_wb(acm, cur);
+-out:
+-	spin_unlock_irqrestore(&acm->write_lock, flags);
+-	return;
+-}
+-
+-static int acm_tty_put_char(struct tty_struct *tty, unsigned char ch)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int wbn;
+-	unsigned long flags;
+-
+-overflow:
+-	cur = acm->putbuffer;
+-	if (!cur) {
+-		spin_lock_irqsave(&acm->write_lock, flags);
+-		wbn = acm_wb_alloc(acm);
+-		if (wbn >= 0) {
+-			cur = &acm->wb[wbn];
+-			acm->putbuffer = cur;
+-		}
+-		spin_unlock_irqrestore(&acm->write_lock, flags);
+-		if (!cur)
+-			return 0;
+-	}
+-
+-	if (cur->len == acm->writesize) {
+-		acm_tty_flush_chars(tty);
+-		goto overflow;
+-	}
+-
+-	cur->buf[cur->len++] = ch;
+-	return 1;
+-}
+-
+ static int acm_tty_write_room(struct tty_struct *tty)
+ {
+ 	struct acm *acm = tty->driver_data;
+@@ -1987,8 +1916,6 @@ static const struct tty_operations acm_ops = {
+ 	.cleanup =		acm_tty_cleanup,
+ 	.hangup =		acm_tty_hangup,
+ 	.write =		acm_tty_write,
+-	.put_char =		acm_tty_put_char,
+-	.flush_chars =		acm_tty_flush_chars,
+ 	.write_room =		acm_tty_write_room,
+ 	.ioctl =		acm_tty_ioctl,
+ 	.throttle =		acm_tty_throttle,
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index eacc116e83da..ca06b20d7af9 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -96,7 +96,6 @@ struct acm {
+ 	unsigned long read_urbs_free;
+ 	struct urb *read_urbs[ACM_NR];
+ 	struct acm_rb read_buffers[ACM_NR];
+-	struct acm_wb *putbuffer;			/* for acm_tty_put_char() */
+ 	int rx_buflimit;
+ 	spinlock_t read_lock;
+ 	u8 *notification_buffer;			/* to reassemble fragmented notifications */
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a0d284ef3f40..632a2bfabc08 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_KERNEL);
++	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index 66fe1b78d952..03432467b05f 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -515,8 +515,6 @@ static int resume_common(struct device *dev, int event)
+ 				event == PM_EVENT_RESTORE);
+ 		if (retval) {
+ 			dev_err(dev, "PCI post-resume error %d!\n", retval);
+-			if (hcd->shared_hcd)
+-				usb_hc_died(hcd->shared_hcd);
+ 			usb_hc_died(hcd);
+ 		}
+ 	}
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 1a15392326fc..525ebd03cfe5 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1340,6 +1340,11 @@ void usb_enable_interface(struct usb_device *dev,
+  * is submitted that needs that bandwidth.  Some other operating systems
+  * allocate bandwidth early, when a configuration is chosen.
+  *
++ * xHCI reserves bandwidth and configures the alternate setting in
++ * usb_hcd_alloc_bandwidth(). If it fails the original interface altsetting
++ * may be disabled. Drivers cannot rely on any particular alternate
++ * setting being in effect after a failure.
++ *
+  * This call is synchronous, and may not be used in an interrupt context.
+  * Also, drivers must not change altsettings while urbs are scheduled for
+  * endpoints in that interface; all such urbs must first be completed
+@@ -1375,6 +1380,12 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate)
+ 			 alternate);
+ 		return -EINVAL;
+ 	}
++	/*
++	 * usb3 hosts configure the interface in usb_hcd_alloc_bandwidth,
++	 * including freeing dropped endpoint ring buffers.
++	 * Make sure the interface endpoints are flushed before that
++	 */
++	usb_disable_interface(dev, iface, false);
+ 
+ 	/* Make sure we have enough bandwidth for this alternate interface.
+ 	 * Remove the current alt setting and add the new alt setting.
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 097057d2eacf..e77dfe5ed5ec 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -178,6 +178,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* CBM - Flash disk */
+ 	{ USB_DEVICE(0x0204, 0x6025), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* WORLDE Controller KS49 or Prodipe MIDI 49C USB controller */
++	{ USB_DEVICE(0x0218, 0x0201), .driver_info =
++			USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ 	/* WORLDE easy key (easykey.25) MIDI controller  */
+ 	{ USB_DEVICE(0x0218, 0x0401), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+@@ -406,6 +410,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x2040, 0x7200), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
++	/* DJI CineSSD */
++	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
+index db610c56f1d6..2aacd1afd9ff 100644
+--- a/drivers/usb/dwc3/gadget.h
++++ b/drivers/usb/dwc3/gadget.h
+@@ -25,7 +25,7 @@ struct dwc3;
+ #define DWC3_DEPCFG_XFER_IN_PROGRESS_EN	BIT(9)
+ #define DWC3_DEPCFG_XFER_NOT_READY_EN	BIT(10)
+ #define DWC3_DEPCFG_FIFO_ERROR_EN	BIT(11)
+-#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(12)
++#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(13)
+ #define DWC3_DEPCFG_BINTERVAL_M1(n)	(((n) & 0xff) << 16)
+ #define DWC3_DEPCFG_STREAM_CAPABLE	BIT(24)
+ #define DWC3_DEPCFG_EP_NUMBER(n)	(((n) & 0x1f) << 25)
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index 318246d8b2e2..b02ab2a8d927 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -1545,11 +1545,14 @@ static int net2280_pullup(struct usb_gadget *_gadget, int is_on)
+ 		writel(tmp | BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+ 	} else {
+ 		writel(tmp & ~BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+-		stop_activity(dev, dev->driver);
++		stop_activity(dev, NULL);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
++	if (!is_on && dev->driver)
++		dev->driver->disconnect(&dev->gadget);
++
+ 	return 0;
+ }
+ 
+@@ -2466,8 +2469,11 @@ static void stop_activity(struct net2280 *dev, struct usb_gadget_driver *driver)
+ 		nuke(&dev->ep[i]);
+ 
+ 	/* report disconnect; the driver is already quiesced */
+-	if (driver)
++	if (driver) {
++		spin_unlock(&dev->lock);
+ 		driver->disconnect(&dev->gadget);
++		spin_lock(&dev->lock);
++	}
+ 
+ 	usb_reinit(dev);
+ }
+@@ -3341,6 +3347,8 @@ next_endpoints:
+ 		BIT(PCI_RETRY_ABORT_INTERRUPT))
+ 
+ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
++__releases(dev->lock)
++__acquires(dev->lock)
+ {
+ 	struct net2280_ep	*ep;
+ 	u32			tmp, num, mask, scratch;
+@@ -3381,12 +3389,14 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 			if (disconnect || reset) {
+ 				stop_activity(dev, dev->driver);
+ 				ep0_start(dev);
++				spin_unlock(&dev->lock);
+ 				if (reset)
+ 					usb_gadget_udc_reset
+ 						(&dev->gadget, dev->driver);
+ 				else
+ 					(dev->driver->disconnect)
+ 						(&dev->gadget);
++				spin_lock(&dev->lock);
+ 				return;
+ 			}
+ 		}
+@@ -3405,6 +3415,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 	tmp = BIT(SUSPEND_REQUEST_CHANGE_INTERRUPT);
+ 	if (stat & tmp) {
+ 		writel(tmp, &dev->regs->irqstat1);
++		spin_unlock(&dev->lock);
+ 		if (stat & BIT(SUSPEND_REQUEST_INTERRUPT)) {
+ 			if (dev->driver->suspend)
+ 				dev->driver->suspend(&dev->gadget);
+@@ -3415,6 +3426,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 				dev->driver->resume(&dev->gadget);
+ 			/* at high speed, note erratum 0133 */
+ 		}
++		spin_lock(&dev->lock);
+ 		stat &= ~tmp;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 7cf98c793e04..5b5f1c8b47c9 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -787,12 +787,15 @@ static void usb3_irq_epc_int_1_speed(struct renesas_usb3 *usb3)
+ 	switch (speed) {
+ 	case USB_STA_SPEED_SS:
+ 		usb3->gadget.speed = USB_SPEED_SUPER;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_SS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_HS:
+ 		usb3->gadget.speed = USB_SPEED_HIGH;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_FS:
+ 		usb3->gadget.speed = USB_SPEED_FULL;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	default:
+ 		usb3->gadget.speed = USB_SPEED_UNKNOWN;
+@@ -2451,7 +2454,7 @@ static int renesas_usb3_init_ep(struct renesas_usb3 *usb3, struct device *dev,
+ 			/* for control pipe */
+ 			usb3->gadget.ep0 = &usb3_ep->ep;
+ 			usb_ep_set_maxpacket_limit(&usb3_ep->ep,
+-						USB3_EP0_HSFS_MAX_PACKET_SIZE);
++						USB3_EP0_SS_MAX_PACKET_SIZE);
+ 			usb3_ep->ep.caps.type_control = true;
+ 			usb3_ep->ep.caps.dir_in = true;
+ 			usb3_ep->ep.caps.dir_out = true;
+diff --git a/drivers/usb/host/u132-hcd.c b/drivers/usb/host/u132-hcd.c
+index 032b8652910a..02f8e08b3ee8 100644
+--- a/drivers/usb/host/u132-hcd.c
++++ b/drivers/usb/host/u132-hcd.c
+@@ -2555,7 +2555,7 @@ static int u132_get_frame(struct usb_hcd *hcd)
+ 	} else {
+ 		int frame = 0;
+ 		dev_err(&u132->platform_dev->dev, "TODO: u132_get_frame\n");
+-		msleep(100);
++		mdelay(100);
+ 		return frame;
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index ef350c33dc4a..b1f27aa38b10 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1613,6 +1613,10 @@ void xhci_endpoint_copy(struct xhci_hcd *xhci,
+ 	in_ep_ctx->ep_info2 = out_ep_ctx->ep_info2;
+ 	in_ep_ctx->deq = out_ep_ctx->deq;
+ 	in_ep_ctx->tx_info = out_ep_ctx->tx_info;
++	if (xhci->quirks & XHCI_MTK_HOST) {
++		in_ep_ctx->reserved[0] = out_ep_ctx->reserved[0];
++		in_ep_ctx->reserved[1] = out_ep_ctx->reserved[1];
++	}
+ }
+ 
+ /* Copy output xhci_slot_ctx to the input xhci_slot_ctx.
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 68e6132aa8b2..c2220a7fc758 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -37,6 +37,21 @@ static unsigned long long quirks;
+ module_param(quirks, ullong, S_IRUGO);
+ MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
+ 
++static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
++{
++	struct xhci_segment *seg = ring->first_seg;
++
++	if (!td || !td->start_seg)
++		return false;
++	do {
++		if (seg == td->start_seg)
++			return true;
++		seg = seg->next;
++	} while (seg && seg != ring->first_seg);
++
++	return false;
++}
++
+ /* TODO: copied from ehci-hcd.c - can this be refactored? */
+ /*
+  * xhci_handshake - spin reading hc until handshake completes or fails
+@@ -1571,6 +1586,21 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ 		goto done;
+ 	}
+ 
++	/*
++	 * check ring is not re-allocated since URB was enqueued. If it is, then
++	 * make sure none of the ring related pointers in this URB private data
++	 * are touched, such as td_list, otherwise we overwrite freed data
++	 */
++	if (!td_on_ring(&urb_priv->td[0], ep_ring)) {
++		xhci_err(xhci, "Canceled URB td not found on endpoint ring");
++		for (i = urb_priv->num_tds_done; i < urb_priv->num_tds; i++) {
++			td = &urb_priv->td[i];
++			if (!list_empty(&td->cancelled_td_list))
++				list_del_init(&td->cancelled_td_list);
++		}
++		goto err_giveback;
++	}
++
+ 	if (xhci->xhc_state & XHCI_STATE_HALTED) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+ 				"HC halted, freeing TD manually.");
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index de9a502491c2..69822852888a 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -369,7 +369,7 @@ static unsigned char parport_uss720_frob_control(struct parport *pp, unsigned ch
+ 	mask &= 0x0f;
+ 	val &= 0x0f;
+ 	d = (priv->reg[1] & (~mask)) ^ val;
+-	if (set_1284_register(pp, 2, d, GFP_KERNEL))
++	if (set_1284_register(pp, 2, d, GFP_ATOMIC))
+ 		return 0;
+ 	priv->reg[1] = d;
+ 	return d & 0xf;
+@@ -379,7 +379,7 @@ static unsigned char parport_uss720_read_status(struct parport *pp)
+ {
+ 	unsigned char ret;
+ 
+-	if (get_1284_register(pp, 1, &ret, GFP_KERNEL))
++	if (get_1284_register(pp, 1, &ret, GFP_ATOMIC))
+ 		return 0;
+ 	return ret & 0xf8;
+ }
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 3be40eaa1ac9..1232dd49556d 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -421,13 +421,13 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ {
+ 	struct usb_yurex *dev;
+ 	int i, set = 0, retval = 0;
+-	char buffer[16];
++	char buffer[16 + 1];
+ 	char *data = buffer;
+ 	unsigned long long c, c2 = 0;
+ 	signed long timeout = 0;
+ 	DEFINE_WAIT(wait);
+ 
+-	count = min(sizeof(buffer), count);
++	count = min(sizeof(buffer) - 1, count);
+ 	dev = file->private_data;
+ 
+ 	/* verify that we actually have some data to write */
+@@ -446,6 +446,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ 		retval = -EFAULT;
+ 		goto error;
+ 	}
++	buffer[count] = 0;
+ 	memset(dev->cntl_buffer, CMD_PADDING, YUREX_BUF_SIZE);
+ 
+ 	switch (buffer[0]) {
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index eecfd0671362..d045d8458f81 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -107,8 +107,12 @@ static int mtu3_device_enable(struct mtu3 *mtu)
+ 		(SSUSB_U2_PORT_DIS | SSUSB_U2_PORT_PDN |
+ 		SSUSB_U2_PORT_HOST_SEL));
+ 
+-	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG)
++	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) {
+ 		mtu3_setbits(ibase, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_OTG_SEL);
++		if (mtu->is_u3_ip)
++			mtu3_setbits(ibase, SSUSB_U3_CTRL(0),
++				     SSUSB_U3_PORT_DUAL_MODE);
++	}
+ 
+ 	return ssusb_check_clocks(mtu->ssusb, check_clk);
+ }
+diff --git a/drivers/usb/mtu3/mtu3_hw_regs.h b/drivers/usb/mtu3/mtu3_hw_regs.h
+index 6ee371478d89..a45bb253939f 100644
+--- a/drivers/usb/mtu3/mtu3_hw_regs.h
++++ b/drivers/usb/mtu3/mtu3_hw_regs.h
+@@ -459,6 +459,7 @@
+ 
+ /* U3D_SSUSB_U3_CTRL_0P */
+ #define SSUSB_U3_PORT_SSP_SPEED	BIT(9)
++#define SSUSB_U3_PORT_DUAL_MODE	BIT(7)
+ #define SSUSB_U3_PORT_HOST_SEL		BIT(2)
+ #define SSUSB_U3_PORT_PDN		BIT(1)
+ #define SSUSB_U3_PORT_DIS		BIT(0)
+diff --git a/drivers/usb/serial/io_ti.h b/drivers/usb/serial/io_ti.h
+index e53c68261017..9bbcee37524e 100644
+--- a/drivers/usb/serial/io_ti.h
++++ b/drivers/usb/serial/io_ti.h
+@@ -173,7 +173,7 @@ struct ump_interrupt {
+ }  __attribute__((packed));
+ 
+ 
+-#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 4) - 3)
++#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 6) & 0x01)
+ #define TIUMP_GET_FUNC_FROM_CODE(c)	((c) & 0x0f)
+ #define TIUMP_INTERRUPT_CODE_LSR	0x03
+ #define TIUMP_INTERRUPT_CODE_MSR	0x04
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 6b22857f6e52..58fc7964ee6b 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1119,7 +1119,7 @@ static void ti_break(struct tty_struct *tty, int break_state)
+ 
+ static int ti_get_port_from_code(unsigned char code)
+ {
+-	return (code >> 4) - 3;
++	return (code >> 6) & 0x01;
+ }
+ 
+ static int ti_get_func_from_code(unsigned char code)
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index c267f2812a04..e227bb5b794f 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -376,6 +376,15 @@ static int queuecommand_lck(struct scsi_cmnd *srb,
+ 		return 0;
+ 	}
+ 
++	if ((us->fflags & US_FL_NO_ATA_1X) &&
++			(srb->cmnd[0] == ATA_12 || srb->cmnd[0] == ATA_16)) {
++		memcpy(srb->sense_buffer, usb_stor_sense_invalidCDB,
++		       sizeof(usb_stor_sense_invalidCDB));
++		srb->result = SAM_STAT_CHECK_CONDITION;
++		done(srb);
++		return 0;
++	}
++
+ 	/* enqueue the command and wake up the control thread */
+ 	srb->scsi_done = done;
+ 	us->srb = srb;
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 9e9de5452860..1f7b401c4d04 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -842,6 +842,27 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ 		sdev->skip_ms_page_8 = 1;
+ 		sdev->wce_default_on = 1;
+ 	}
++
++	/*
++	 * Some disks return the total number of blocks in response
++	 * to READ CAPACITY rather than the highest block number.
++	 * If this device makes that mistake, tell the sd driver.
++	 */
++	if (devinfo->flags & US_FL_FIX_CAPACITY)
++		sdev->fix_capacity = 1;
++
++	/*
++	 * Some devices don't like MODE SENSE with page=0x3f,
++	 * which is the command used for checking if a device
++	 * is write-protected.  Now that we tell the sd driver
++	 * to do a 192-byte transfer with this command the
++	 * majority of devices work fine, but a few still can't
++	 * handle it.  The sd driver will simply assume those
++	 * devices are write-enabled.
++	 */
++	if (devinfo->flags & US_FL_NO_WP_DETECT)
++		sdev->skip_ms_page_3f = 1;
++
+ 	scsi_change_queue_depth(sdev, devinfo->qdepth - 2);
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 22fcfccf453a..f7f83b21dc74 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2288,6 +2288,13 @@ UNUSUAL_DEV(  0x2735, 0x100b, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_GO_SLOW ),
+ 
++/* Reported-by: Tim Anderson <tsa@biglakesoftware.com> */
++UNUSUAL_DEV(  0x2ca3, 0x0031, 0x0000, 0x9999,
++		"DJI",
++		"CineSSD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_ATA_1X),
++
+ /*
+  * Reported by Frederic Marchal <frederic.marchal@wowcompany.com>
+  * Mio Moov 330
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 2510fa728d77..de119f11b78f 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -644,7 +644,7 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *
+  *     Valid mode specifiers for @mode_option:
+  *
+- *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m] or
++ *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][p][m] or
+  *     <name>[-<bpp>][@<refresh>]
+  *
+  *     with <xres>, <yres>, <bpp> and <refresh> decimal numbers and
+@@ -653,10 +653,10 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *      If 'M' is present after yres (and before refresh/bpp if present),
+  *      the function will compute the timings using VESA(tm) Coordinated
+  *      Video Timings (CVT).  If 'R' is present after 'M', will compute with
+- *      reduced blanking (for flatpanels).  If 'i' is present, compute
+- *      interlaced mode.  If 'm' is present, add margins equal to 1.8%
+- *      of xres rounded down to 8 pixels, and 1.8% of yres. The char
+- *      'i' and 'm' must be after 'M' and 'R'. Example:
++ *      reduced blanking (for flatpanels).  If 'i' or 'p' are present, compute
++ *      interlaced or progressive mode.  If 'm' is present, add margins equal
++ *      to 1.8% of xres rounded down to 8 pixels, and 1.8% of yres. The chars
++ *      'i', 'p' and 'm' must be after 'M' and 'R'. Example:
+  *
+  *      1024x768MR-8@60m - Reduced blank with margins at 60Hz.
+  *
+@@ -697,7 +697,8 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 		unsigned int namelen = strlen(name);
+ 		int res_specified = 0, bpp_specified = 0, refresh_specified = 0;
+ 		unsigned int xres = 0, yres = 0, bpp = default_bpp, refresh = 0;
+-		int yres_specified = 0, cvt = 0, rb = 0, interlace = 0;
++		int yres_specified = 0, cvt = 0, rb = 0;
++		int interlace_specified = 0, interlace = 0;
+ 		int margins = 0;
+ 		u32 best, diff, tdiff;
+ 
+@@ -748,9 +749,17 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 				if (!cvt)
+ 					margins = 1;
+ 				break;
++			case 'p':
++				if (!cvt) {
++					interlace = 0;
++					interlace_specified = 1;
++				}
++				break;
+ 			case 'i':
+-				if (!cvt)
++				if (!cvt) {
+ 					interlace = 1;
++					interlace_specified = 1;
++				}
+ 				break;
+ 			default:
+ 				goto done;
+@@ -819,11 +828,21 @@ done:
+ 			if ((name_matches(db[i], name, namelen) ||
+ 			     (res_specified && res_matches(db[i], xres, yres))) &&
+ 			    !fb_try_mode(var, info, &db[i], bpp)) {
+-				if (refresh_specified && db[i].refresh == refresh)
+-					return 1;
++				const int db_interlace = (db[i].vmode &
++					FB_VMODE_INTERLACED ? 1 : 0);
++				int score = abs(db[i].refresh - refresh);
++
++				if (interlace_specified)
++					score += abs(db_interlace - interlace);
++
++				if (!interlace_specified ||
++				    db_interlace == interlace)
++					if (refresh_specified &&
++					    db[i].refresh == refresh)
++						return 1;
+ 
+-				if (abs(db[i].refresh - refresh) < diff) {
+-					diff = abs(db[i].refresh - refresh);
++				if (score < diff) {
++					diff = score;
+ 					best = i;
+ 				}
+ 			}
+diff --git a/drivers/video/fbdev/goldfishfb.c b/drivers/video/fbdev/goldfishfb.c
+index 3b70044773b6..9fe7edf725c6 100644
+--- a/drivers/video/fbdev/goldfishfb.c
++++ b/drivers/video/fbdev/goldfishfb.c
+@@ -301,6 +301,7 @@ static int goldfish_fb_remove(struct platform_device *pdev)
+ 	dma_free_coherent(&pdev->dev, framesize, (void *)fb->fb.screen_base,
+ 						fb->fb.fix.smem_start);
+ 	iounmap(fb->reg_base);
++	kfree(fb);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
+index 585f39efcff6..1c75f4806ed3 100644
+--- a/drivers/video/fbdev/omap/omapfb_main.c
++++ b/drivers/video/fbdev/omap/omapfb_main.c
+@@ -958,7 +958,7 @@ int omapfb_register_client(struct omapfb_notifier_block *omapfb_nb,
+ {
+ 	int r;
+ 
+-	if ((unsigned)omapfb_nb->plane_idx > OMAPFB_PLANE_NUM)
++	if ((unsigned)omapfb_nb->plane_idx >= OMAPFB_PLANE_NUM)
+ 		return -EINVAL;
+ 
+ 	if (!notifier_inited) {
+diff --git a/drivers/video/fbdev/omap2/omapfb/Makefile b/drivers/video/fbdev/omap2/omapfb/Makefile
+index 602edfed09df..f54c3f56b641 100644
+--- a/drivers/video/fbdev/omap2/omapfb/Makefile
++++ b/drivers/video/fbdev/omap2/omapfb/Makefile
+@@ -2,5 +2,5 @@
+ obj-$(CONFIG_OMAP2_VRFB) += vrfb.o
+ obj-y += dss/
+ obj-y += displays/
+-obj-$(CONFIG_FB_OMAP2) += omapfb.o
+-omapfb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
++obj-$(CONFIG_FB_OMAP2) += omap2fb.o
++omap2fb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 76722a59f55e..dfe382e68287 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2128,8 +2128,8 @@ static int of_get_pxafb_display(struct device *dev, struct device_node *disp,
+ 		return -EINVAL;
+ 
+ 	ret = -ENOMEM;
+-	info->modes = kmalloc_array(timings->num_timings,
+-				    sizeof(info->modes[0]), GFP_KERNEL);
++	info->modes = kcalloc(timings->num_timings, sizeof(info->modes[0]),
++			      GFP_KERNEL);
+ 	if (!info->modes)
+ 		goto out;
+ 	info->num_modes = timings->num_timings;
+diff --git a/drivers/video/fbdev/via/viafbdev.c b/drivers/video/fbdev/via/viafbdev.c
+index d2f785068ef4..7bb7e90b8f00 100644
+--- a/drivers/video/fbdev/via/viafbdev.c
++++ b/drivers/video/fbdev/via/viafbdev.c
+@@ -19,6 +19,7 @@
+  * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+  */
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+@@ -1468,7 +1469,7 @@ static const struct file_operations viafb_vt1636_proc_fops = {
+ 
+ #endif /* CONFIG_FB_VIA_DIRECT_PROCFS */
+ 
+-static int viafb_sup_odev_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused viafb_sup_odev_proc_show(struct seq_file *m, void *v)
+ {
+ 	via_odev_to_seq(m, supported_odev_map[
+ 		viaparinfo->shared->chip_info.gfx_chip_name]);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 816cc921cf36..efae2fb0930a 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1751,7 +1751,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 		const struct user_regset *regset = &view->regsets[i];
+ 		do_thread_regset_writeback(t->task, regset);
+ 		if (regset->core_note_type && regset->get &&
+-		    (!regset->active || regset->active(t->task, regset))) {
++		    (!regset->active || regset->active(t->task, regset) > 0)) {
+ 			int ret;
+ 			size_t size = regset_size(t->task, regset);
+ 			void *data = kmalloc(size, GFP_KERNEL);
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index eeab81c9452f..e169e1a5fd35 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -376,8 +376,15 @@ static char *nxt_dir_entry(char *old_entry, char *end_of_smb, int level)
+ 
+ 		new_entry = old_entry + sizeof(FIND_FILE_STANDARD_INFO) +
+ 				pfData->FileNameLength;
+-	} else
+-		new_entry = old_entry + le32_to_cpu(pDirInfo->NextEntryOffset);
++	} else {
++		u32 next_offset = le32_to_cpu(pDirInfo->NextEntryOffset);
++
++		if (old_entry + next_offset < old_entry) {
++			cifs_dbg(VFS, "invalid offset %u\n", next_offset);
++			return NULL;
++		}
++		new_entry = old_entry + next_offset;
++	}
+ 	cifs_dbg(FYI, "new entry %p old entry %p\n", new_entry, old_entry);
+ 	/* validate that new_entry is not past end of SMB */
+ 	if (new_entry >= end_of_smb) {
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 82be1dfeca33..29cce842ed04 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2418,14 +2418,14 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+ 	/* We check for obvious errors in the output buffer length and offset */
+ 	if (*plen == 0)
+ 		goto ioctl_exit; /* server returned no data */
+-	else if (*plen > 0xFF00) {
++	else if (*plen > rsp_iov.iov_len || *plen > 0xFF00) {
+ 		cifs_dbg(VFS, "srv returned invalid ioctl length: %d\n", *plen);
+ 		*plen = 0;
+ 		rc = -EIO;
+ 		goto ioctl_exit;
+ 	}
+ 
+-	if (rsp_iov.iov_len < le32_to_cpu(rsp->OutputOffset) + *plen) {
++	if (rsp_iov.iov_len - *plen < le32_to_cpu(rsp->OutputOffset)) {
+ 		cifs_dbg(VFS, "Malformed ioctl resp: len %d offset %d\n", *plen,
+ 			le32_to_cpu(rsp->OutputOffset));
+ 		*plen = 0;
+@@ -3492,33 +3492,38 @@ num_entries(char *bufstart, char *end_of_buf, char **lastentry, size_t size)
+ 	int len;
+ 	unsigned int entrycount = 0;
+ 	unsigned int next_offset = 0;
+-	FILE_DIRECTORY_INFO *entryptr;
++	char *entryptr;
++	FILE_DIRECTORY_INFO *dir_info;
+ 
+ 	if (bufstart == NULL)
+ 		return 0;
+ 
+-	entryptr = (FILE_DIRECTORY_INFO *)bufstart;
++	entryptr = bufstart;
+ 
+ 	while (1) {
+-		entryptr = (FILE_DIRECTORY_INFO *)
+-					((char *)entryptr + next_offset);
+-
+-		if ((char *)entryptr + size > end_of_buf) {
++		if (entryptr + next_offset < entryptr ||
++		    entryptr + next_offset > end_of_buf ||
++		    entryptr + next_offset + size > end_of_buf) {
+ 			cifs_dbg(VFS, "malformed search entry would overflow\n");
+ 			break;
+ 		}
+ 
+-		len = le32_to_cpu(entryptr->FileNameLength);
+-		if ((char *)entryptr + len + size > end_of_buf) {
++		entryptr = entryptr + next_offset;
++		dir_info = (FILE_DIRECTORY_INFO *)entryptr;
++
++		len = le32_to_cpu(dir_info->FileNameLength);
++		if (entryptr + len < entryptr ||
++		    entryptr + len > end_of_buf ||
++		    entryptr + len + size > end_of_buf) {
+ 			cifs_dbg(VFS, "directory entry name would overflow frame end of buf %p\n",
+ 				 end_of_buf);
+ 			break;
+ 		}
+ 
+-		*lastentry = (char *)entryptr;
++		*lastentry = entryptr;
+ 		entrycount++;
+ 
+-		next_offset = le32_to_cpu(entryptr->NextEntryOffset);
++		next_offset = le32_to_cpu(dir_info->NextEntryOffset);
+ 		if (!next_offset)
+ 			break;
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 577cff24707b..39843fa7e11b 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1777,6 +1777,16 @@ void configfs_unregister_group(struct config_group *group)
+ 	struct dentry *dentry = group->cg_item.ci_dentry;
+ 	struct dentry *parent = group->cg_item.ci_parent->ci_dentry;
+ 
++	mutex_lock(&subsys->su_mutex);
++	if (!group->cg_item.ci_parent->ci_group) {
++		/*
++		 * The parent has already been unlinked and detached
++		 * due to a rmdir.
++		 */
++		goto unlink_group;
++	}
++	mutex_unlock(&subsys->su_mutex);
++
+ 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
+ 	spin_lock(&configfs_dirent_lock);
+ 	configfs_detach_prep(dentry, NULL);
+@@ -1791,6 +1801,7 @@ void configfs_unregister_group(struct config_group *group)
+ 	dput(dentry);
+ 
+ 	mutex_lock(&subsys->su_mutex);
++unlink_group:
+ 	unlink_group(group);
+ 	mutex_unlock(&subsys->su_mutex);
+ }
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 128d489acebb..742147cbe759 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3106,9 +3106,19 @@ static struct dentry *f2fs_mount(struct file_system_type *fs_type, int flags,
+ static void kill_f2fs_super(struct super_block *sb)
+ {
+ 	if (sb->s_root) {
+-		set_sbi_flag(F2FS_SB(sb), SBI_IS_CLOSE);
+-		f2fs_stop_gc_thread(F2FS_SB(sb));
+-		f2fs_stop_discard_thread(F2FS_SB(sb));
++		struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
++		set_sbi_flag(sbi, SBI_IS_CLOSE);
++		f2fs_stop_gc_thread(sbi);
++		f2fs_stop_discard_thread(sbi);
++
++		if (is_sbi_flag_set(sbi, SBI_IS_DIRTY) ||
++				!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
++			struct cp_control cpc = {
++				.reason = CP_UMOUNT,
++			};
++			f2fs_write_checkpoint(sbi, &cpc);
++		}
+ 	}
+ 	kill_block_super(sb);
+ }
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index ed6699705c13..fd5bea55fd60 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2060,7 +2060,7 @@ int gfs2_write_alloc_required(struct gfs2_inode *ip, u64 offset,
+ 	end_of_file = (i_size_read(&ip->i_inode) + sdp->sd_sb.sb_bsize - 1) >> shift;
+ 	lblock = offset >> shift;
+ 	lblock_stop = (offset + len + sdp->sd_sb.sb_bsize - 1) >> shift;
+-	if (lblock_stop > end_of_file)
++	if (lblock_stop > end_of_file && ip != GFS2_I(sdp->sd_rindex))
+ 		return 1;
+ 
+ 	size = (lblock_stop - lblock) << shift;
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 33abcf29bc05..b86249ebde11 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1686,7 +1686,8 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext,
+ 
+ 	while(1) {
+ 		bi = rbm_bi(rbm);
+-		if (test_bit(GBF_FULL, &bi->bi_flags) &&
++		if ((ip == NULL || !gfs2_rs_active(&ip->i_res)) &&
++		    test_bit(GBF_FULL, &bi->bi_flags) &&
+ 		    (state == GFS2_BLKST_FREE))
+ 			goto next_bitmap;
+ 
+diff --git a/fs/namespace.c b/fs/namespace.c
+index bd2f4c68506a..1949e0939d40 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file->f_path.mnt->mnt_sb);
++	sb_start_write(file_inode(file)->i_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file->f_path.mnt->mnt_sb);
++		sb_end_write(file_inode(file)->i_sb);
+ 	return ret;
+ }
+ 
+@@ -540,7 +540,8 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	mnt_drop_write(file->f_path.mnt);
++	__mnt_drop_write_file(file);
++	sb_end_write(file_inode(file)->i_sb);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ff98e2a3f3cc..f688338b0482 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2642,14 +2642,18 @@ static void nfs41_check_delegation_stateid(struct nfs4_state *state)
+ 	}
+ 
+ 	nfs4_stateid_copy(&stateid, &delegation->stateid);
+-	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) ||
+-		!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
+-			&delegation->flags)) {
++	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) {
+ 		rcu_read_unlock();
+ 		nfs_finish_clear_delegation_stateid(state, &stateid);
+ 		return;
+ 	}
+ 
++	if (!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
++				&delegation->flags)) {
++		rcu_read_unlock();
++		return;
++	}
++
+ 	cred = get_rpccred(delegation->cred);
+ 	rcu_read_unlock();
+ 	status = nfs41_test_and_free_expired_stateid(server, &stateid, cred);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 2bf2eaa08ca7..3c18c12a5c4c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1390,6 +1390,8 @@ int nfs4_schedule_stateid_recovery(const struct nfs_server *server, struct nfs4_
+ 
+ 	if (!nfs4_state_mark_reclaim_nograce(clp, state))
+ 		return -EBADF;
++	nfs_inode_find_delegation_state_and_recover(state->inode,
++			&state->stateid);
+ 	dprintk("%s: scheduling stateid recovery for server %s\n", __func__,
+ 			clp->cl_hostname);
+ 	nfs4_schedule_state_manager(clp);
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index a275fba93170..708342f4692f 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -1194,7 +1194,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_stateid_callback_event,
+ 		TP_fast_assign(
+ 			__entry->error = error;
+ 			__entry->fhandle = nfs_fhandle_hash(fhandle);
+-			if (inode != NULL) {
++			if (!IS_ERR_OR_NULL(inode)) {
+ 				__entry->fileid = NFS_FILEID(inode);
+ 				__entry->dev = inode->i_sb->s_dev;
+ 			} else {
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 704b37311467..fa2121f877c1 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -970,16 +970,6 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	if (err)
+ 		goto out;
+ 
+-	err = -EBUSY;
+-	if (ovl_inuse_trylock(upperpath->dentry)) {
+-		ofs->upperdir_locked = true;
+-	} else if (ofs->config.index) {
+-		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
+-		goto out;
+-	} else {
+-		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+-	}
+-
+ 	upper_mnt = clone_private_mount(upperpath);
+ 	err = PTR_ERR(upper_mnt);
+ 	if (IS_ERR(upper_mnt)) {
+@@ -990,6 +980,17 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	/* Don't inherit atime flags */
+ 	upper_mnt->mnt_flags &= ~(MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME);
+ 	ofs->upper_mnt = upper_mnt;
++
++	err = -EBUSY;
++	if (ovl_inuse_trylock(ofs->upper_mnt->mnt_root)) {
++		ofs->upperdir_locked = true;
++	} else if (ofs->config.index) {
++		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
++		goto out;
++	} else {
++		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
++	}
++
+ 	err = 0;
+ out:
+ 	return err;
+@@ -1089,8 +1090,10 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		goto out;
+ 	}
+ 
++	ofs->workbasedir = dget(workpath.dentry);
++
+ 	err = -EBUSY;
+-	if (ovl_inuse_trylock(workpath.dentry)) {
++	if (ovl_inuse_trylock(ofs->workbasedir)) {
+ 		ofs->workdir_locked = true;
+ 	} else if (ofs->config.index) {
+ 		pr_err("overlayfs: workdir is in-use by another mount, mount with '-o index=off' to override exclusive workdir protection.\n");
+@@ -1099,7 +1102,6 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+ 	}
+ 
+-	ofs->workbasedir = dget(workpath.dentry);
+ 	err = ovl_make_workdir(ofs, &workpath);
+ 	if (err)
+ 		goto out;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 951a14edcf51..0792595ebcfb 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -429,7 +429,12 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+ 	vaddr = vmap(pages, page_count, VM_MAP, prot);
+ 	kfree(pages);
+ 
+-	return vaddr;
++	/*
++	 * Since vmap() uses page granularity, we must add the offset
++	 * into the page here, to get the byte granularity address
++	 * into the mapping to represent the actual "start" location.
++	 */
++	return vaddr + offset_in_page(start);
+ }
+ 
+ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+@@ -448,6 +453,11 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+ 	else
+ 		va = ioremap_wc(start, size);
+ 
++	/*
++	 * Since request_mem_region() and ioremap() are byte-granularity
++	 * there is no need handle anything special like we do when the
++	 * vmap() case in persistent_ram_vmap() above.
++	 */
+ 	return va;
+ }
+ 
+@@ -468,7 +478,7 @@ static int persistent_ram_buffer_map(phys_addr_t start, phys_addr_t size,
+ 		return -ENOMEM;
+ 	}
+ 
+-	prz->buffer = prz->vaddr + offset_in_page(start);
++	prz->buffer = prz->vaddr;
+ 	prz->buffer_size = size - sizeof(struct persistent_ram_buffer);
+ 
+ 	return 0;
+@@ -515,7 +525,8 @@ void persistent_ram_free(struct persistent_ram_zone *prz)
+ 
+ 	if (prz->vaddr) {
+ 		if (pfn_valid(prz->paddr >> PAGE_SHIFT)) {
+-			vunmap(prz->vaddr);
++			/* We must vunmap() at page-granularity. */
++			vunmap(prz->vaddr - offset_in_page(prz->paddr));
+ 		} else {
+ 			iounmap(prz->vaddr);
+ 			release_mem_region(prz->paddr, prz->size);
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index 6eb06101089f..e8839d3a7559 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -112,6 +112,11 @@
+  */
+ #define CRYPTO_ALG_OPTIONAL_KEY		0x00004000
+ 
++/*
++ * Don't trigger module loading
++ */
++#define CRYPTO_NOLOAD			0x00008000
++
+ /*
+  * Transform masks and values (for crt_flags).
+  */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 83957920653a..64f450593b54 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -357,7 +357,7 @@ struct mlx5_frag_buf {
+ struct mlx5_frag_buf_ctrl {
+ 	struct mlx5_frag_buf	frag_buf;
+ 	u32			sz_m1;
+-	u32			frag_sz_m1;
++	u16			frag_sz_m1;
+ 	u32			strides_offset;
+ 	u8			log_sz;
+ 	u8			log_stride;
+@@ -1042,7 +1042,7 @@ int mlx5_cmd_free_uar(struct mlx5_core_dev *dev, u32 uarn);
+ void mlx5_health_cleanup(struct mlx5_core_dev *dev);
+ int mlx5_health_init(struct mlx5_core_dev *dev);
+ void mlx5_start_health_poll(struct mlx5_core_dev *dev);
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev);
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health);
+ void mlx5_drain_health_wq(struct mlx5_core_dev *dev);
+ void mlx5_trigger_health_work(struct mlx5_core_dev *dev);
+ void mlx5_drain_health_recovery(struct mlx5_core_dev *dev);
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 4d25e4f952d9..b99a1a8c2952 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -290,6 +290,8 @@ extern struct device_node *of_get_next_child(const struct device_node *node,
+ extern struct device_node *of_get_next_available_child(
+ 	const struct device_node *node, struct device_node *prev);
+ 
++extern struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible);
+ extern struct device_node *of_get_child_by_name(const struct device_node *node,
+ 					const char *name);
+ 
+@@ -632,6 +634,12 @@ static inline bool of_have_populated_dt(void)
+ 	return false;
+ }
+ 
++static inline struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible)
++{
++	return NULL;
++}
++
+ static inline struct device_node *of_get_child_by_name(
+ 					const struct device_node *node,
+ 					const char *name)
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index c17c0c268436..dce35e16bff4 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -419,6 +419,13 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	struct path parent_path;
+ 	int h, ret = 0;
+ 
++	/*
++	 * When we will be calling audit_add_to_parent, krule->watch might have
++	 * been updated and watch might have been freed.
++	 * So we need to keep a reference of watch.
++	 */
++	audit_get_watch(watch);
++
+ 	mutex_unlock(&audit_filter_mutex);
+ 
+ 	/* Avoid calling path_lookup under audit_filter_mutex. */
+@@ -427,8 +434,10 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	/* caller expects mutex locked */
+ 	mutex_lock(&audit_filter_mutex);
+ 
+-	if (ret)
++	if (ret) {
++		audit_put_watch(watch);
+ 		return ret;
++	}
+ 
+ 	/* either find an old parent or attach a new one */
+ 	parent = audit_find_parent(d_backing_inode(parent_path.dentry));
+@@ -446,6 +455,7 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	*list = &audit_inode_hash[h];
+ error:
+ 	path_put(&parent_path);
++	audit_put_watch(watch);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 3d83ee7df381..badabb0b435c 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -95,7 +95,7 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 				   enum bpf_attach_type type,
+ 				   struct bpf_prog_array __rcu **array)
+ {
+-	struct bpf_prog_array __rcu *progs;
++	struct bpf_prog_array *progs;
+ 	struct bpf_prog_list *pl;
+ 	struct cgroup *p = cgrp;
+ 	int cnt = 0;
+@@ -120,13 +120,12 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 					    &p->bpf.progs[type], node) {
+ 				if (!pl->prog)
+ 					continue;
+-				rcu_dereference_protected(progs, 1)->
+-					progs[cnt++] = pl->prog;
++				progs->progs[cnt++] = pl->prog;
+ 			}
+ 		p = cgroup_parent(p);
+ 	} while (p);
+ 
+-	*array = progs;
++	rcu_assign_pointer(*array, progs);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index eec2d5fb676b..c7b3e34811ec 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5948,6 +5948,7 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 		unsigned long sp;
+ 		unsigned int rem;
+ 		u64 dyn_size;
++		mm_segment_t fs;
+ 
+ 		/*
+ 		 * We dump:
+@@ -5965,7 +5966,10 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 
+ 		/* Data. */
+ 		sp = perf_user_stack_pointer(regs);
++		fs = get_fs();
++		set_fs(USER_DS);
+ 		rem = __output_copy_user(handle, (void *) sp, dump_size);
++		set_fs(fs);
+ 		dyn_size = dump_size - rem;
+ 
+ 		perf_output_skip(handle, rem);
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 42fcb7f05fac..f42cf69ef539 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1446,7 +1446,7 @@ static int rcu_torture_stall(void *args)
+ 		VERBOSE_TOROUT_STRING("rcu_torture_stall end holdoff");
+ 	}
+ 	if (!kthread_should_stop()) {
+-		stop_at = get_seconds() + stall_cpu;
++		stop_at = ktime_get_seconds() + stall_cpu;
+ 		/* RCU CPU stall is expected behavior in following code. */
+ 		rcu_read_lock();
+ 		if (stall_cpu_irqsoff)
+@@ -1455,7 +1455,8 @@ static int rcu_torture_stall(void *args)
+ 			preempt_disable();
+ 		pr_alert("rcu_torture_stall start on CPU %d.\n",
+ 			 smp_processor_id());
+-		while (ULONG_CMP_LT(get_seconds(), stop_at))
++		while (ULONG_CMP_LT((unsigned long)ktime_get_seconds(),
++				    stop_at))
+ 			continue;  /* Induce RCU CPU stall warning. */
+ 		if (stall_cpu_irqsoff)
+ 			local_irq_enable();
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 9c219f7b0970..478d9d3e6be9 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
+  * To solve this problem, we also cap the util_avg of successive tasks to
+  * only 1/2 of the left utilization budget:
+  *
+- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
++ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
+  *
+- * where n denotes the nth task.
++ * where n denotes the nth task and cpu_scale the CPU capacity.
+  *
+- * For example, a simplest series from the beginning would be like:
++ * For example, for a CPU with 1024 of capacity, a simplest series from
++ * the beginning would be like:
+  *
+  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
+  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
+@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
+ {
+ 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ 	struct sched_avg *sa = &se->avg;
+-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
++	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
++	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
+ 
+ 	if (cap > 0) {
+ 		if (cfs_rq->avg.util_avg != 0) {
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 928be527477e..a7a2aaa3026a 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -392,35 +392,36 @@ static inline bool is_kthread_should_stop(void)
+  *     if (condition)
+  *         break;
+  *
+- *     p->state = mode;				condition = true;
+- *     smp_mb(); // A				smp_wmb(); // C
+- *     if (!wq_entry->flags & WQ_FLAG_WOKEN)	wq_entry->flags |= WQ_FLAG_WOKEN;
+- *         schedule()				try_to_wake_up();
+- *     p->state = TASK_RUNNING;		    ~~~~~~~~~~~~~~~~~~
+- *     wq_entry->flags &= ~WQ_FLAG_WOKEN;		condition = true;
+- *     smp_mb() // B				smp_wmb(); // C
+- *						wq_entry->flags |= WQ_FLAG_WOKEN;
+- * }
+- * remove_wait_queue(&wq_head, &wait);
++ *     // in wait_woken()			// in woken_wake_function()
+  *
++ *     p->state = mode;				wq_entry->flags |= WQ_FLAG_WOKEN;
++ *     smp_mb(); // A				try_to_wake_up():
++ *     if (!(wq_entry->flags & WQ_FLAG_WOKEN))	   <full barrier>
++ *         schedule()				   if (p->state & mode)
++ *     p->state = TASK_RUNNING;			      p->state = TASK_RUNNING;
++ *     wq_entry->flags &= ~WQ_FLAG_WOKEN;	~~~~~~~~~~~~~~~~~~
++ *     smp_mb(); // B				condition = true;
++ * }						smp_mb(); // C
++ * remove_wait_queue(&wq_head, &wait);		wq_entry->flags |= WQ_FLAG_WOKEN;
+  */
+ long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout)
+ {
+-	set_current_state(mode); /* A */
+ 	/*
+-	 * The above implies an smp_mb(), which matches with the smp_wmb() from
+-	 * woken_wake_function() such that if we observe WQ_FLAG_WOKEN we must
+-	 * also observe all state before the wakeup.
++	 * The below executes an smp_mb(), which matches with the full barrier
++	 * executed by the try_to_wake_up() in woken_wake_function() such that
++	 * either we see the store to wq_entry->flags in woken_wake_function()
++	 * or woken_wake_function() sees our store to current->state.
+ 	 */
++	set_current_state(mode); /* A */
+ 	if (!(wq_entry->flags & WQ_FLAG_WOKEN) && !is_kthread_should_stop())
+ 		timeout = schedule_timeout(timeout);
+ 	__set_current_state(TASK_RUNNING);
+ 
+ 	/*
+-	 * The below implies an smp_mb(), it too pairs with the smp_wmb() from
+-	 * woken_wake_function() such that we must either observe the wait
+-	 * condition being true _OR_ WQ_FLAG_WOKEN such that we will not miss
+-	 * an event.
++	 * The below executes an smp_mb(), which matches with the smp_mb() (C)
++	 * in woken_wake_function() such that either we see the wait condition
++	 * being true or the store to wq_entry->flags in woken_wake_function()
++	 * follows ours in the coherence order.
+ 	 */
+ 	smp_store_mb(wq_entry->flags, wq_entry->flags & ~WQ_FLAG_WOKEN); /* B */
+ 
+@@ -430,14 +431,8 @@ EXPORT_SYMBOL(wait_woken);
+ 
+ int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key)
+ {
+-	/*
+-	 * Although this function is called under waitqueue lock, LOCK
+-	 * doesn't imply write barrier and the users expects write
+-	 * barrier semantics on wakeup functions.  The following
+-	 * smp_wmb() is equivalent to smp_wmb() in try_to_wake_up()
+-	 * and is paired with smp_store_mb() in wait_woken().
+-	 */
+-	smp_wmb(); /* C */
++	/* Pairs with the smp_store_mb() in wait_woken(). */
++	smp_mb(); /* C */
+ 	wq_entry->flags |= WQ_FLAG_WOKEN;
+ 
+ 	return default_wake_function(wq_entry, mode, sync, key);
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 3264e1873219..deacc52d7ff1 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -159,7 +159,7 @@ void bt_accept_enqueue(struct sock *parent, struct sock *sk)
+ 	BT_DBG("parent %p, sk %p", parent, sk);
+ 
+ 	sock_hold(sk);
+-	lock_sock(sk);
++	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ 	bt_sk(sk)->parent = parent;
+ 	release_sock(sk);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fb35b62af272..3680912f056a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -939,9 +939,6 @@ struct ubuf_info *sock_zerocopy_alloc(struct sock *sk, size_t size)
+ 
+ 	WARN_ON_ONCE(!in_task());
+ 
+-	if (!sock_flag(sk, SOCK_ZEROCOPY))
+-		return NULL;
+-
+ 	skb = sock_omalloc(sk, 0, GFP_KERNEL);
+ 	if (!skb)
+ 		return NULL;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 055f4bbba86b..41883c34a385 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -178,6 +178,9 @@ static void ipgre_err(struct sk_buff *skb, u32 info,
+ 
+ 	if (tpi->proto == htons(ETH_P_TEB))
+ 		itn = net_generic(net, gre_tap_net_id);
++	else if (tpi->proto == htons(ETH_P_ERSPAN) ||
++		 tpi->proto == htons(ETH_P_ERSPAN2))
++		itn = net_generic(net, erspan_net_id);
+ 	else
+ 		itn = net_generic(net, ipgre_net_id);
+ 
+@@ -328,6 +331,8 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 		ip_tunnel_rcv(tunnel, skb, tpi, tun_dst, log_ecn_error);
+ 		return PACKET_RCVD;
+ 	}
++	return PACKET_REJECT;
++
+ drop:
+ 	kfree_skb(skb);
+ 	return PACKET_RCVD;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4491faf83f4f..086201d96d54 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1186,7 +1186,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
+ 
+ 	flags = msg->msg_flags;
+ 
+-	if (flags & MSG_ZEROCOPY && size) {
++	if (flags & MSG_ZEROCOPY && size && sock_flag(sk, SOCK_ZEROCOPY)) {
+ 		if (sk->sk_state != TCP_ESTABLISHED) {
+ 			err = -EINVAL;
+ 			goto out_err;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index bdf6fa78d0d2..aa082b71d2e4 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -495,7 +495,7 @@ static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
+ 		goto out_unlock;
+ 	}
+ 
+-	ieee80211_key_free(key, true);
++	ieee80211_key_free(key, sdata->vif.type == NL80211_IFTYPE_STATION);
+ 
+ 	ret = 0;
+  out_unlock:
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index ee0d0cc8dc3b..c054ac85793c 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -656,11 +656,15 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_key *old_key;
+-	int idx, ret;
+-	bool pairwise;
+-
+-	pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
+-	idx = key->conf.keyidx;
++	int idx = key->conf.keyidx;
++	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
++	/*
++	 * We want to delay tailroom updates only for station - in that
++	 * case it helps roaming speed, but in other cases it hurts and
++	 * can cause warnings to appear.
++	 */
++	bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
++	int ret;
+ 
+ 	mutex_lock(&sdata->local->key_mtx);
+ 
+@@ -688,14 +692,14 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ 	increment_tailroom_need_count(sdata);
+ 
+ 	ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
+-	ieee80211_key_destroy(old_key, true);
++	ieee80211_key_destroy(old_key, delay_tailroom);
+ 
+ 	ieee80211_debugfs_key_add(key);
+ 
+ 	if (!local->wowlan) {
+ 		ret = ieee80211_key_enable_hw_accel(key);
+ 		if (ret)
+-			ieee80211_key_free(key, true);
++			ieee80211_key_free(key, delay_tailroom);
+ 	} else {
+ 		ret = 0;
+ 	}
+@@ -930,7 +934,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
+@@ -940,7 +945,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	mutex_unlock(&local->key_mtx);
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 5aa3a64aa4f0..48257d3a4201 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -60,11 +60,13 @@ struct rds_sock *rds_find_bound(__be32 addr, __be16 port)
+ 	u64 key = ((u64)addr << 32) | port;
+ 	struct rds_sock *rs;
+ 
+-	rs = rhashtable_lookup_fast(&bind_hash_table, &key, ht_parms);
++	rcu_read_lock();
++	rs = rhashtable_lookup(&bind_hash_table, &key, ht_parms);
+ 	if (rs && !sock_flag(rds_rs_to_sk(rs), SOCK_DEAD))
+ 		rds_sock_addref(rs);
+ 	else
+ 		rs = NULL;
++	rcu_read_unlock();
+ 
+ 	rdsdebug("returning rs %p for %pI4:%u\n", rs, &addr,
+ 		ntohs(port));
+@@ -157,6 +159,7 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 		goto out;
+ 	}
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	ret = rds_add_bound(rs, sin->sin_addr.s_addr, &sin->sin_port);
+ 	if (ret)
+ 		goto out;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 0a5fa347135e..ac8ca238c541 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -578,6 +578,7 @@ static int tipc_release(struct socket *sock)
+ 	sk_stop_timer(sk, &sk->sk_timer);
+ 	tipc_sk_remove(tsk);
+ 
++	sock_orphan(sk);
+ 	/* Reject any messages that accumulated in backlog queue */
+ 	release_sock(sk);
+ 	tipc_dest_list_purge(&tsk->cong_links);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 1f3d9789af30..b3344bbe336b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -149,6 +149,9 @@ static int alloc_encrypted_sg(struct sock *sk, int len)
+ 			 &ctx->sg_encrypted_num_elem,
+ 			 &ctx->sg_encrypted_size, 0);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_encrypted_num_elem = ARRAY_SIZE(ctx->sg_encrypted_data);
++
+ 	return rc;
+ }
+ 
+@@ -162,6 +165,9 @@ static int alloc_plaintext_sg(struct sock *sk, int len)
+ 			 &ctx->sg_plaintext_num_elem, &ctx->sg_plaintext_size,
+ 			 tls_ctx->pending_open_record_frags);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_plaintext_num_elem = ARRAY_SIZE(ctx->sg_plaintext_data);
++
+ 	return rc;
+ }
+ 
+@@ -280,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge)
++			      bool charge, bool revert)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -331,6 +337,8 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ out:
+ 	*size_used = size;
+ 	*pages_used = num_elem;
++	if (revert)
++		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -432,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true);
++				true, false);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -820,7 +828,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false);
++							 MAX_SKB_FRAGS,	false, true);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 7c5e8978aeaa..a94983e03a8b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1831,7 +1831,10 @@ xfrm_resolve_and_create_bundle(struct xfrm_policy **pols, int num_pols,
+ 	/* Try to instantiate a bundle */
+ 	err = xfrm_tmpl_resolve(pols, num_pols, fl, xfrm, family);
+ 	if (err <= 0) {
+-		if (err != 0 && err != -EAGAIN)
++		if (err == 0)
++			return NULL;
++
++		if (err != -EAGAIN)
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTPOLERROR);
+ 		return ERR_PTR(err);
+ 	}
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 86321f06461e..ed303f552f9d 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -400,3 +400,6 @@ endif
+ endef
+ #
+ ###############################################################################
++
++# delete partially updated (i.e. corrupted) files on error
++.DELETE_ON_ERROR:
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index b60524310855..c20e3142b541 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -97,7 +97,8 @@ static struct shash_desc *init_desc(char type)
+ 		mutex_lock(&mutex);
+ 		if (*tfm)
+ 			goto out;
+-		*tfm = crypto_alloc_shash(algo, 0, CRYPTO_ALG_ASYNC);
++		*tfm = crypto_alloc_shash(algo, 0,
++					  CRYPTO_ALG_ASYNC | CRYPTO_NOLOAD);
+ 		if (IS_ERR(*tfm)) {
+ 			rc = PTR_ERR(*tfm);
+ 			pr_err("Can not allocate %s (reason: %ld)\n", algo, rc);
+diff --git a/security/security.c b/security/security.c
+index 68f46d849abe..4e572b38937d 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -118,6 +118,8 @@ static int lsm_append(char *new, char **result)
+ 
+ 	if (*result == NULL) {
+ 		*result = kstrdup(new, GFP_KERNEL);
++		if (*result == NULL)
++			return -ENOMEM;
+ 	} else {
+ 		/* Check if it is the last registered name */
+ 		if (match_last_lsm(*result, new))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 19de675d4504..8b6cd5a79bfa 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3924,15 +3924,19 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	struct smack_known *skp = NULL;
+ 	int rc = 0;
+ 	struct smk_audit_info ad;
++	u16 family = sk->sk_family;
+ #ifdef CONFIG_AUDIT
+ 	struct lsm_network_audit net;
+ #endif
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	struct sockaddr_in6 sadd;
+ 	int proto;
++
++	if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
++		family = PF_INET;
+ #endif /* CONFIG_IPV6 */
+ 
+-	switch (sk->sk_family) {
++	switch (family) {
+ 	case PF_INET:
+ #ifdef CONFIG_SECURITY_SMACK_NETFILTER
+ 		/*
+@@ -3950,7 +3954,7 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 		 */
+ 		netlbl_secattr_init(&secattr);
+ 
+-		rc = netlbl_skbuff_getattr(skb, sk->sk_family, &secattr);
++		rc = netlbl_skbuff_getattr(skb, family, &secattr);
+ 		if (rc == 0)
+ 			skp = smack_from_secattr(&secattr, ssp);
+ 		else
+@@ -3963,7 +3967,7 @@ access_check:
+ #endif
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv4_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif
+@@ -3977,7 +3981,7 @@ access_check:
+ 		rc = smk_bu_note("IPv4 delivery", skp, ssp->smk_in,
+ 					MAY_WRITE, rc);
+ 		if (rc != 0)
+-			netlbl_skbuff_err(skb, sk->sk_family, rc, 0);
++			netlbl_skbuff_err(skb, family, rc, 0);
+ 		break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case PF_INET6:
+@@ -3993,7 +3997,7 @@ access_check:
+ 			skp = smack_net_ambient;
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv6_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif /* CONFIG_AUDIT */
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 44b5ae833082..a4aac948ea49 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -626,27 +626,33 @@ EXPORT_SYMBOL(snd_interval_refine);
+ 
+ static int snd_interval_refine_first(struct snd_interval *i)
+ {
++	const unsigned int last_max = i->max;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->max = i->min;
+-	i->openmax = i->openmin;
+-	if (i->openmax)
++	if (i->openmin)
+ 		i->max++;
++	/* only exclude max value if also excluded before refine */
++	i->openmax = (i->openmax && i->max >= last_max);
+ 	return 1;
+ }
+ 
+ static int snd_interval_refine_last(struct snd_interval *i)
+ {
++	const unsigned int last_min = i->min;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->min = i->max;
+-	i->openmin = i->openmax;
+-	if (i->openmin)
++	if (i->openmax)
+ 		i->min--;
++	/* only exclude min value if also excluded before refine */
++	i->openmin = (i->openmin && i->min <= last_min);
+ 	return 1;
+ }
+ 
+diff --git a/sound/isa/msnd/msnd_pinnacle.c b/sound/isa/msnd/msnd_pinnacle.c
+index 6c584d9b6c42..a19f802b2071 100644
+--- a/sound/isa/msnd/msnd_pinnacle.c
++++ b/sound/isa/msnd/msnd_pinnacle.c
+@@ -82,10 +82,10 @@
+ 
+ static void set_default_audio_parameters(struct snd_msnd *chip)
+ {
+-	chip->play_sample_size = DEFSAMPLESIZE;
++	chip->play_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->play_sample_rate = DEFSAMPLERATE;
+ 	chip->play_channels = DEFCHANNELS;
+-	chip->capture_sample_size = DEFSAMPLESIZE;
++	chip->capture_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->capture_sample_rate = DEFSAMPLERATE;
+ 	chip->capture_channels = DEFCHANNELS;
+ }
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 38e4a8515709..d00734d31e04 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -291,10 +291,6 @@ static const struct snd_soc_dapm_widget hdmi_widgets[] = {
+ 	SND_SOC_DAPM_OUTPUT("TX"),
+ };
+ 
+-static const struct snd_soc_dapm_route hdmi_routes[] = {
+-	{ "TX", NULL, "Playback" },
+-};
+-
+ enum {
+ 	DAI_ID_I2S = 0,
+ 	DAI_ID_SPDIF,
+@@ -689,9 +685,23 @@ static int hdmi_codec_pcm_new(struct snd_soc_pcm_runtime *rtd,
+ 	return snd_ctl_add(rtd->card->snd_card, kctl);
+ }
+ 
++static int hdmi_dai_probe(struct snd_soc_dai *dai)
++{
++	struct snd_soc_dapm_context *dapm;
++	struct snd_soc_dapm_route route = {
++		.sink = "TX",
++		.source = dai->driver->playback.stream_name,
++	};
++
++	dapm = snd_soc_component_get_dapm(dai->component);
++
++	return snd_soc_dapm_add_routes(dapm, &route, 1);
++}
++
+ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ 	.name = "i2s-hifi",
+ 	.id = DAI_ID_I2S,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "I2S Playback",
+ 		.channels_min = 2,
+@@ -707,6 +717,7 @@ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ static const struct snd_soc_dai_driver hdmi_spdif_dai = {
+ 	.name = "spdif-hifi",
+ 	.id = DAI_ID_SPDIF,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "SPDIF Playback",
+ 		.channels_min = 2,
+@@ -733,8 +744,6 @@ static int hdmi_of_xlate_dai_id(struct snd_soc_component *component,
+ static const struct snd_soc_component_driver hdmi_driver = {
+ 	.dapm_widgets		= hdmi_widgets,
+ 	.num_dapm_widgets	= ARRAY_SIZE(hdmi_widgets),
+-	.dapm_routes		= hdmi_routes,
+-	.num_dapm_routes	= ARRAY_SIZE(hdmi_routes),
+ 	.of_xlate_dai_id	= hdmi_of_xlate_dai_id,
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 1570b91bf018..dca82dd6e3bf 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 6b5669f3e85d..39d2c67cd064 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1696,6 +1696,13 @@ static irqreturn_t rt5651_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void rt5651_cancel_work(void *data)
++{
++	struct rt5651_priv *rt5651 = data;
++
++	cancel_work_sync(&rt5651->jack_detect_work);
++}
++
+ static int rt5651_set_jack(struct snd_soc_component *component,
+ 			   struct snd_soc_jack *hp_jack, void *data)
+ {
+@@ -2036,6 +2043,11 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 
+ 	INIT_WORK(&rt5651->jack_detect_work, rt5651_jack_detect_work);
+ 
++	/* Make sure work is stopped on probe-error / remove */
++	ret = devm_add_action_or_reset(&i2c->dev, rt5651_cancel_work, rt5651);
++	if (ret)
++		return ret;
++
+ 	ret = devm_snd_soc_register_component(&i2c->dev,
+ 				&soc_component_dev_rt5651,
+ 				rt5651_dai, ARRAY_SIZE(rt5651_dai));
+@@ -2043,15 +2055,6 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 	return ret;
+ }
+ 
+-static int rt5651_i2c_remove(struct i2c_client *i2c)
+-{
+-	struct rt5651_priv *rt5651 = i2c_get_clientdata(i2c);
+-
+-	cancel_work_sync(&rt5651->jack_detect_work);
+-
+-	return 0;
+-}
+-
+ static struct i2c_driver rt5651_i2c_driver = {
+ 	.driver = {
+ 		.name = "rt5651",
+@@ -2059,7 +2062,6 @@ static struct i2c_driver rt5651_i2c_driver = {
+ 		.of_match_table = of_match_ptr(rt5651_of_match),
+ 	},
+ 	.probe = rt5651_i2c_probe,
+-	.remove   = rt5651_i2c_remove,
+ 	.id_table = rt5651_i2c_id,
+ };
+ module_i2c_driver(rt5651_i2c_driver);
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index 5002dd05bf27..f8298be7038f 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -1180,7 +1180,7 @@ static void of_q6afe_parse_dai_data(struct device *dev,
+ 		int id, i, num_lines;
+ 
+ 		ret = of_property_read_u32(node, "reg", &id);
+-		if (ret || id > AFE_PORT_MAX) {
++		if (ret || id < 0 || id >= AFE_PORT_MAX) {
+ 			dev_err(dev, "valid dai id not found:%d\n", ret);
+ 			continue;
+ 		}
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8aac48f9c322..08aa78007020 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2875,7 +2875,8 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ 
+ #define AU0828_DEVICE(vid, pid, vname, pname) { \
+-	USB_DEVICE_VENDOR_SPEC(vid, pid), \
++	.idVendor = vid, \
++	.idProduct = pid, \
+ 	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+ 		       USB_DEVICE_ID_MATCH_INT_CLASS | \
+ 		       USB_DEVICE_ID_MATCH_INT_SUBCLASS, \
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 02b6cc02767f..dde87d64bc32 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1373,6 +1373,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;
+ 
++	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
+ 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
+ 	case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
+ 	case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
+@@ -1443,6 +1444,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	 */
+ 	switch (USB_ID_VENDOR(chip->usb_id)) {
+ 	case 0x20b1:  /* XMOS based devices */
++	case 0x152a:  /* Thesycon devices */
+ 	case 0x25ce:  /* Mytek devices */
+ 		if (fp->dsd_raw)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
+index dbf6e8bd98ba..bbb2a8ef367c 100644
+--- a/tools/hv/hv_kvp_daemon.c
++++ b/tools/hv/hv_kvp_daemon.c
+@@ -286,7 +286,7 @@ static int kvp_key_delete(int pool, const __u8 *key, int key_size)
+ 		 * Found a match; just move the remaining
+ 		 * entries up.
+ 		 */
+-		if (i == num_records) {
++		if (i == (num_records - 1)) {
+ 			kvp_file_info[pool].num_records--;
+ 			kvp_update_file(pool);
+ 			return 0;
+diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+index ef5d59a5742e..7c6eeb4633fe 100644
+--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
++++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+@@ -58,9 +58,13 @@ static int check_return_reg(int ra_regno, Dwarf_Frame *frame)
+ 	}
+ 
+ 	/*
+-	 * Check if return address is on the stack.
++	 * Check if return address is on the stack. If return address
++	 * is in a register (typically R0), it is yet to be saved on
++	 * the stack.
+ 	 */
+-	if (nops != 0 || ops != NULL)
++	if ((nops != 0 || ops != NULL) &&
++		!(nops == 1 && ops[0].atom == DW_OP_regx &&
++			ops[0].number2 == 0 && ops[0].offset == 0))
+ 		return 0;
+ 
+ 	/*
+@@ -246,7 +250,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
+ 	if (!chain || chain->nr < 3)
+ 		return skip_slot;
+ 
+-	ip = chain->ips[2];
++	ip = chain->ips[1];
+ 
+ 	thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+ 
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index dd850a26d579..4f5de8245b32 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -599,7 +599,7 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
+ 			for (subi = 0; subi < subn; subi++) {
+ 				pr_info("%2d.%1d: %-*s:", i, subi + 1, subw,
+ 					t->subtest.get_desc(subi));
+-				err = test_and_print(t, skip, subi);
++				err = test_and_print(t, skip, subi + 1);
+ 				if (err != TEST_OK && t->subtest.skip_if_fail)
+ 					skip = true;
+ 			}
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 94e513e62b34..3013ac8f83d0 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -13,11 +13,24 @@
+ libc=$(grep -w libc /proc/self/maps | head -1 | sed -r 's/.*[[:space:]](\/.*)/\1/g')
+ nm -Dg $libc 2>/dev/null | fgrep -q inet_pton || exit 254
+ 
++event_pattern='probe_libc:inet_pton(\_[[:digit:]]+)?'
++
++add_libc_inet_pton_event() {
++
++	event_name=$(perf probe -f -x $libc -a inet_pton 2>&1 | tail -n +2 | head -n -5 | \
++			grep -P -o "$event_pattern(?=[[:space:]]\(on inet_pton in $libc\))")
++
++	if [ $? -ne 0 -o -z "$event_name" ] ; then
++		printf "FAIL: could not add event\n"
++		return 1
++	fi
++}
++
+ trace_libc_inet_pton_backtrace() {
+ 
+ 	expected=`mktemp -u /tmp/expected.XXX`
+ 
+-	echo "ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)" > $expected
++	echo "ping[][0-9 \.:]+$event_name: \([[:xdigit:]]+\)" > $expected
+ 	echo ".*inet_pton\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 	case "$(uname -m)" in
+ 	s390x)
+@@ -26,6 +39,12 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "(__GI_)?getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 		echo "main\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
++	ppc64|ppc64le)
++		eventattr='max-stack=4'
++		echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		;;
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+@@ -35,7 +54,7 @@ trace_libc_inet_pton_backtrace() {
+ 
+ 	perf_data=`mktemp -u /tmp/perf.data.XXX`
+ 	perf_script=`mktemp -u /tmp/perf.script.XXX`
+-	perf record -e probe_libc:inet_pton/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
++	perf record -e $event_name/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
+ 	perf script -i $perf_data > $perf_script
+ 
+ 	exec 3<$perf_script
+@@ -46,7 +65,7 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "$line" | egrep -q "$pattern"
+ 		if [ $? -ne 0 ] ; then
+ 			printf "FAIL: expected backtrace entry \"%s\" got \"%s\"\n" "$pattern" "$line"
+-			exit 1
++			return 1
+ 		fi
+ 	done
+ 
+@@ -56,13 +75,20 @@ trace_libc_inet_pton_backtrace() {
+ 	# even if the perf script output does not match.
+ }
+ 
++delete_libc_inet_pton_event() {
++
++	if [ -n "$event_name" ] ; then
++		perf probe -q -d $event_name
++	fi
++}
++
+ # Check for IPv6 interface existence
+ ip a sh lo | fgrep -q inet6 || exit 2
+ 
+ skip_if_no_perf_probe && \
+-perf probe -q $libc inet_pton && \
++add_libc_inet_pton_event && \
+ trace_libc_inet_pton_backtrace
+ err=$?
+ rm -f ${perf_data} ${perf_script} ${expected}
+-perf probe -q -d probe_libc:inet_pton
++delete_libc_inet_pton_event
+ exit $err
+diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
+index 7798a2cc8a86..31279a7bd919 100644
+--- a/tools/perf/util/comm.c
++++ b/tools/perf/util/comm.c
+@@ -20,9 +20,10 @@ static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}
+ 
+ static struct comm_str *comm_str__get(struct comm_str *cs)
+ {
+-	if (cs)
+-		refcount_inc(&cs->refcnt);
+-	return cs;
++	if (cs && refcount_inc_not_zero(&cs->refcnt))
++		return cs;
++
++	return NULL;
+ }
+ 
+ static void comm_str__put(struct comm_str *cs)
+@@ -67,9 +68,14 @@ struct comm_str *__comm_str__findnew(const char *str, struct rb_root *root)
+ 		parent = *p;
+ 		iter = rb_entry(parent, struct comm_str, rb_node);
+ 
++		/*
++		 * If we race with comm_str__put, iter->refcnt is 0
++		 * and it will be removed within comm_str__put call
++		 * shortly, ignore it in this search.
++		 */
+ 		cmp = strcmp(str, iter->str);
+-		if (!cmp)
+-			return comm_str__get(iter);
++		if (!cmp && comm_str__get(iter))
++			return iter;
+ 
+ 		if (cmp < 0)
+ 			p = &(*p)->rb_left;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 653ff65aa2c3..5af58aac91ad 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -2587,7 +2587,7 @@ static const struct feature_ops feat_ops[HEADER_LAST_FEATURE] = {
+ 	FEAT_OPR(NUMA_TOPOLOGY,	numa_topology,	true),
+ 	FEAT_OPN(BRANCH_STACK,	branch_stack,	false),
+ 	FEAT_OPR(PMU_MAPPINGS,	pmu_mappings,	false),
+-	FEAT_OPN(GROUP_DESC,	group_desc,	false),
++	FEAT_OPR(GROUP_DESC,	group_desc,	false),
+ 	FEAT_OPN(AUXTRACE,	auxtrace,	false),
+ 	FEAT_OPN(STAT,		stat,		false),
+ 	FEAT_OPN(CACHE,		cache,		true),
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index e7b4a8b513f2..22dbb6612b41 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2272,6 +2272,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
++	u64 addr;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2279,7 +2280,13 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	if (append_inlines(cursor, entry->map, entry->sym, entry->ip) == 0)
+ 		return 0;
+ 
+-	srcline = callchain_srcline(entry->map, entry->sym, entry->ip);
++	/*
++	 * Convert entry->ip from a virtual address to an offset in
++	 * its corresponding binary.
++	 */
++	addr = map__map_ip(entry->map, entry->ip);
++
++	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+ 				       entry->map, entry->sym,
+ 				       false, NULL, 0, 0, 0, srcline);
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 89ac5b5dc218..f5431092c6d1 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -590,6 +590,13 @@ struct symbol *map_groups__find_symbol(struct map_groups *mg,
+ 	return NULL;
+ }
+ 
++static bool map__contains_symbol(struct map *map, struct symbol *sym)
++{
++	u64 ip = map->unmap_ip(map, sym->start);
++
++	return ip >= map->start && ip < map->end;
++}
++
+ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 					 struct map **mapp)
+ {
+@@ -605,6 +612,10 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 
+ 		if (sym == NULL)
+ 			continue;
++		if (!map__contains_symbol(pos, sym)) {
++			sym = NULL;
++			continue;
++		}
+ 		if (mapp != NULL)
+ 			*mapp = pos;
+ 		goto out;
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index 538db4e5d1e6..6f318b15950e 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -77,7 +77,7 @@ static int entry(u64 ip, struct unwind_info *ui)
+ 	if (__report_module(&al, ip, ui))
+ 		return -1;
+ 
+-	e->ip  = al.addr;
++	e->ip  = ip;
+ 	e->map = al.map;
+ 	e->sym = al.sym;
+ 
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index 6a11bc7e6b27..79f521a552cf 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -575,7 +575,7 @@ static int entry(u64 ip, struct thread *thread,
+ 	struct addr_location al;
+ 
+ 	e.sym = thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+-	e.ip = al.addr;
++	e.ip  = ip;
+ 	e.map = al.map;
+ 
+ 	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index e2926f72a821..94c3bdf82ff7 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -1308,7 +1308,8 @@ static void smart_init(struct nfit_test *t)
+ 			| ND_INTEL_SMART_ALARM_VALID
+ 			| ND_INTEL_SMART_USED_VALID
+ 			| ND_INTEL_SMART_SHUTDOWN_VALID
+-			| ND_INTEL_SMART_MTEMP_VALID,
++			| ND_INTEL_SMART_MTEMP_VALID
++			| ND_INTEL_SMART_CTEMP_VALID,
+ 		.health = ND_INTEL_SMART_NON_CRITICAL_HEALTH,
+ 		.media_temperature = 23 * 16,
+ 		.ctrl_temperature = 25 * 16,
+diff --git a/tools/testing/selftests/android/ion/ionapp_export.c b/tools/testing/selftests/android/ion/ionapp_export.c
+index a944e72621a9..b5fa0a2dc968 100644
+--- a/tools/testing/selftests/android/ion/ionapp_export.c
++++ b/tools/testing/selftests/android/ion/ionapp_export.c
+@@ -51,6 +51,7 @@ int main(int argc, char *argv[])
+ 
+ 	heap_size = 0;
+ 	flags = 0;
++	heap_type = ION_HEAP_TYPE_SYSTEM;
+ 
+ 	while ((opt = getopt(argc, argv, "hi:s:")) != -1) {
+ 		switch (opt) {
+diff --git a/tools/testing/selftests/timers/raw_skew.c b/tools/testing/selftests/timers/raw_skew.c
+index ca6cd146aafe..dcf73c5dab6e 100644
+--- a/tools/testing/selftests/timers/raw_skew.c
++++ b/tools/testing/selftests/timers/raw_skew.c
+@@ -134,6 +134,11 @@ int main(int argv, char **argc)
+ 	printf(" %lld.%i(act)", ppm/1000, abs((int)(ppm%1000)));
+ 
+ 	if (llabs(eppm - ppm) > 1000) {
++		if (tx1.offset || tx2.offset ||
++		    tx1.freq != tx2.freq || tx1.tick != tx2.tick) {
++			printf("	[SKIP]\n");
++			return ksft_exit_skip("The clock was adjusted externally. Shutdown NTPd or other time sync daemons\n");
++		}
+ 		printf("	[FAILED]\n");
+ 		return ksft_exit_fail();
+ 	}
+diff --git a/tools/testing/selftests/vDSO/vdso_test.c b/tools/testing/selftests/vDSO/vdso_test.c
+index 2df26bd0099c..eda53f833d8e 100644
+--- a/tools/testing/selftests/vDSO/vdso_test.c
++++ b/tools/testing/selftests/vDSO/vdso_test.c
+@@ -15,6 +15,8 @@
+ #include <sys/auxv.h>
+ #include <sys/time.h>
+ 
++#include "../kselftest.h"
++
+ extern void *vdso_sym(const char *version, const char *name);
+ extern void vdso_init_from_sysinfo_ehdr(uintptr_t base);
+ extern void vdso_init_from_auxv(void *auxv);
+@@ -37,7 +39,7 @@ int main(int argc, char **argv)
+ 	unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR);
+ 	if (!sysinfo_ehdr) {
+ 		printf("AT_SYSINFO_EHDR is not present!\n");
+-		return 0;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR));
+@@ -48,7 +50,7 @@ int main(int argc, char **argv)
+ 
+ 	if (!gtod) {
+ 		printf("Could not find %s\n", name);
+-		return 1;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	struct timeval tv;
+@@ -59,6 +61,7 @@ int main(int argc, char **argv)
+ 		       (long long)tv.tv_sec, (long long)tv.tv_usec);
+ 	} else {
+ 		printf("%s failed\n", name);
++		return KSFT_FAIL;
+ 	}
+ 
+ 	return 0;
+diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
+index 2673efce65f3..b71417913741 100644
+--- a/virt/kvm/arm/vgic/vgic-init.c
++++ b/virt/kvm/arm/vgic/vgic-init.c
+@@ -271,6 +271,10 @@ int vgic_init(struct kvm *kvm)
+ 	if (vgic_initialized(kvm))
+ 		return 0;
+ 
++	/* Are we also in the middle of creating a VCPU? */
++	if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus))
++		return -EBUSY;
++
+ 	/* freeze the number of spis */
+ 	if (!dist->nr_spis)
+ 		dist->nr_spis = VGIC_NR_IRQS_LEGACY - VGIC_NR_PRIVATE_IRQS;
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+index ffc587bf4742..64e571cc02df 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+@@ -352,6 +352,9 @@ static void vgic_mmio_write_apr(struct kvm_vcpu *vcpu,
+ 
+ 		if (n > vgic_v3_max_apr_idx(vcpu))
+ 			return;
++
++		n = array_index_nospec(n, 4);
++
+ 		/* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */
+ 		vgicv3->vgic_ap1r[n] = val;
+ 	}


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     9e80908010d57ea18c5aa3052208900ba32ddff9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:43:43 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:22 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9e809080

Removal of redundant patch.

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 ---
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 --------------------------
 2 files changed, 44 deletions(-)

diff --git a/0000_README b/0000_README
index c801597..f72e2ad 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
-From:   http://www.kernel.org
-Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
deleted file mode 100644
index 88c2ec6..0000000
--- a/1700_x86-l1tf-config-kvm-build-error-fix.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
-From: Guenter Roeck <linux@roeck-us.net>
-Date: Wed, 15 Aug 2018 08:38:33 -0700
-Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
-From: Guenter Roeck <linux@roeck-us.net>
-
-commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
-
-allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
-
-  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
-
-Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
-Reported-by: Meelis Roos <mroos@linux.ee>
-Cc: Meelis Roos <mroos@linux.ee>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
----
- arch/x86/kernel/cpu/bugs.c |    3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
---- a/arch/x86/kernel/cpu/bugs.c
-+++ b/arch/x86/kernel/cpu/bugs.c
-@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
- enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
- #if IS_ENABLED(CONFIG_KVM_INTEL)
- EXPORT_SYMBOL_GPL(l1tf_mitigation);
--
-+#endif
- enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
- EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
--#endif
- 
- static void __init l1tf_select_mitigation(void)
- {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     5dbc4df565362a677f46e44f9592b809bc552f2d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 15 16:36:52 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:20 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5dbc4df5

Linux patch 4.18.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1000_linux-4.18.1.patch | 4083 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4087 insertions(+)

diff --git a/0000_README b/0000_README
index 917d838..cf32ff2 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.18.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-4.18.1.patch b/1000_linux-4.18.1.patch
new file mode 100644
index 0000000..bd9c2da
--- /dev/null
+++ b/1000_linux-4.18.1.patch
@@ -0,0 +1,4083 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 9c5e7732d249..73318225a368 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -476,6 +476,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
++		/sys/devices/system/cpu/vulnerabilities/l1tf
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+@@ -487,3 +488,26 @@ Description:	Information about CPU vulnerabilities
+ 		"Not affected"	  CPU is not affected by the vulnerability
+ 		"Vulnerable"	  CPU is affected and no mitigation in effect
+ 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
++
++		Details about the l1tf file can be found in
++		Documentation/admin-guide/l1tf.rst
++
++What:		/sys/devices/system/cpu/smt
++		/sys/devices/system/cpu/smt/active
++		/sys/devices/system/cpu/smt/control
++Date:		June 2018
++Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
++Description:	Control Symetric Multi Threading (SMT)
++
++		active:  Tells whether SMT is active (enabled and siblings online)
++
++		control: Read/write interface to control SMT. Possible
++			 values:
++
++			 "on"		SMT is enabled
++			 "off"		SMT is disabled
++			 "forceoff"	SMT is force disabled. Cannot be changed.
++			 "notsupported" SMT is not supported by the CPU
++
++			 If control status is "forceoff" or "notsupported" writes
++			 are rejected.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 48d70af11652..0873685bab0f 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,6 +17,15 @@ etc.
+    kernel-parameters
+    devices
+ 
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++   :maxdepth: 1
++
++   l1tf
++
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+ 
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 533ff5c68970..1370b424a453 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1967,10 +1967,84 @@
+ 			(virtualized real and unpaged mode) on capable
+ 			Intel chips. Default is 1 (enabled)
+ 
++	kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault
++			CVE-2018-3620.
++
++			Valid arguments: never, cond, always
++
++			always: L1D cache flush on every VMENTER.
++			cond:	Flush L1D on VMENTER only when the code between
++				VMEXIT and VMENTER can leak host memory.
++			never:	Disables the mitigation
++
++			Default is cond (do L1 cache flush in specific instances)
++
+ 	kvm-intel.vpid=	[KVM,Intel] Disable Virtual Processor Identification
+ 			feature (tagged TLBs) on capable Intel chips.
+ 			Default is 1 (enabled)
+ 
++	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
++			      affected CPUs
++
++			The kernel PTE inversion protection is unconditionally
++			enabled and cannot be disabled.
++
++			full
++				Provides all available mitigations for the
++				L1TF vulnerability. Disables SMT and
++				enables all mitigations in the
++				hypervisors, i.e. unconditional L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			full,force
++				Same as 'full', but disables SMT and L1D
++				flush runtime control. Implies the
++				'nosmt=force' command line option.
++				(i.e. sysfs control of SMT is disabled.)
++
++			flush
++				Leaves SMT enabled and enables the default
++				hypervisor mitigation, i.e. conditional
++				L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nosmt
++
++				Disables SMT and enables the default
++				hypervisor mitigation.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nowarn
++				Same as 'flush', but hypervisors will not
++				warn when a VM is started in a potentially
++				insecure configuration.
++
++			off
++				Disables hypervisor mitigations and doesn't
++				emit any warnings.
++
++			Default is 'flush'.
++
++			For details see: Documentation/admin-guide/l1tf.rst
++
+ 	l2cr=		[PPC]
+ 
+ 	l3cr=		[PPC]
+@@ -2687,6 +2761,10 @@
+ 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
+ 			Equivalent to smt=1.
+ 
++			[KNL,x86] Disable symmetric multithreading (SMT).
++			nosmt=force: Force disable SMT, cannot be undone
++				     via the sysfs control file.
++
+ 	nospectre_v2	[X86] Disable all mitigations for the Spectre variant 2
+ 			(indirect branch prediction) vulnerability. System may
+ 			allow data leaks with this option, which is equivalent
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+new file mode 100644
+index 000000000000..bae52b845de0
+--- /dev/null
++++ b/Documentation/admin-guide/l1tf.rst
+@@ -0,0 +1,610 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++     Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++   - The Intel XEON PHI family
++
++   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++     by the Meltdown vulnerability either. These CPUs should become
++     available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++   =============  =================  ==============================
++   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
++   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
++   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
++   =============  =================  ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++allows to attack any physical memory address in the system and the attack
++works across all protection domains. It allows an attack of SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   Operating Systems store arbitrary information in the address bits of a
++   PTE which is marked non present. This allows a malicious user space
++   application to attack the physical memory to which these PTEs resolve.
++   In some cases user-space can maliciously influence the information
++   encoded in the address bits of the PTE, thus making attacks more
++   deterministic and more practical.
++
++   The Linux kernel contains a mitigation for this attack vector, PTE
++   inversion, which is permanently enabled and has no performance
++   impact. The kernel ensures that the address bits of PTEs, which are not
++   marked present, never point to cacheable physical memory space.
++
++   A system with an up to date kernel is protected against attacks from
++   malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The fact that L1TF breaks all domain protections allows malicious guest
++   OSes, which can control the PTEs directly, and malicious guest user
++   space applications, which run on an unprotected guest kernel lacking the
++   PTE inversion mitigation for L1TF, to attack physical host memory.
++
++   A special aspect of L1TF in the context of virtualization is symmetric
++   multi threading (SMT). The Intel implementation of SMT is called
++   HyperThreading. The fact that Hyperthreads on the affected processors
++   share the L1 Data Cache (L1D) is important for this. As the flaw allows
++   only to attack data which is present in L1D, a malicious guest running
++   on one Hyperthread can attack the data which is brought into the L1D by
++   the context which runs on the sibling Hyperthread of the same physical
++   core. This context can be host OS, host user space or a different guest.
++
++   If the processor does not support Extended Page Tables, the attack is
++   only possible, when the hypervisor does not sanitize the content of the
++   effective (shadow) page tables.
++
++   While solutions exist to mitigate these attack vectors fully, these
++   mitigations are not enabled by default in the Linux kernel because they
++   can affect performance significantly. The kernel provides several
++   mechanisms which can be utilized to address the problem depending on the
++   deployment scenario. The mitigations, their protection scope and impact
++   are described in the next sections.
++
++   The default mitigations and the rationale for choosing them are explained
++   at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++  ===========================   ===============================
++  'Not affected'		The processor is not vulnerable
++  'Mitigation: PTE Inversion'	The host protection is active
++  ===========================   ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++  - SMT status:
++
++    =====================  ================
++    'VMX: SMT vulnerable'  SMT is enabled
++    'VMX: SMT disabled'    SMT is disabled
++    =====================  ================
++
++  - L1D Flush mode:
++
++    ================================  ====================================
++    'L1D vulnerable'		      L1D flushing is disabled
++
++    'L1D conditional cache flushes'   L1D flush is conditionally enabled
++
++    'L1D cache flushes'		      L1D flush is unconditionally enabled
++    ================================  ====================================
++
++The resulting grade of protection is discussed in the following sections.
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   To make sure that a guest cannot attack data which is present in the L1D
++   the hypervisor flushes the L1D before entering the guest.
++
++   Flushing the L1D evicts not only the data which should not be accessed
++   by a potentially malicious guest, it also flushes the guest
++   data. Flushing the L1D has a performance impact as the processor has to
++   bring the flushed guest data back into the L1D. Depending on the
++   frequency of VMEXIT/VMENTER and the type of computations in the guest
++   performance degradation in the range of 1% to 50% has been observed. For
++   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++   minimal. Virtio and mechanisms like posted interrupts are designed to
++   confine the VMEXITs to a bare minimum, but specific configurations and
++   application scenarios might still suffer from a high VMEXIT rate.
++
++   The kernel provides two L1D flush modes:
++    - conditional ('cond')
++    - unconditional ('always')
++
++   The conditional mode avoids L1D flushing after VMEXITs which execute
++   only audited code paths before the corresponding VMENTER. These code
++   paths have been verified that they cannot expose secrets or other
++   interesting data to an attacker, but they can leak information about the
++   address space layout of the hypervisor.
++
++   Unconditional mode flushes L1D on all VMENTER invocations and provides
++   maximum protection. It has a higher overhead than the conditional
++   mode. The overhead cannot be quantified correctly as it depends on the
++   workload scenario and the resulting number of VMEXITs.
++
++   The general recommendation is to enable L1D flush on VMENTER. The kernel
++   defaults to conditional mode on affected processors.
++
++   **Note**, that L1D flush does not prevent the SMT problem because the
++   sibling thread will also bring back its data into the L1D which makes it
++   attackable again.
++
++   L1D flush can be controlled by the administrator via the kernel command
++   line and sysfs control files. See :ref:`mitigation_control_command_line`
++   and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   To address the SMT problem, it is possible to make a guest or a group of
++   guests affine to one or more physical cores. The proper mechanism for
++   that is to utilize exclusive cpusets to ensure that no other guest or
++   host tasks can run on these cores.
++
++   If only a single guest or related guests run on sibling SMT threads on
++   the same physical core then they can only attack their own memory and
++   restricted parts of the host memory.
++
++   Host memory is attackable, when one of the sibling SMT threads runs in
++   host OS (hypervisor) context and the other in guest context. The amount
++   of valuable information from the host OS context depends on the context
++   which the host OS executes, i.e. interrupts, soft interrupts and kernel
++   threads. The amount of valuable data from these contexts cannot be
++   declared as non-interesting for an attacker without deep inspection of
++   the code.
++
++   **Note**, that assigning guests to a fixed set of physical cores affects
++   the ability of the scheduler to do load balancing and might have
++   negative effects on CPU utilization depending on the hosting
++   scenario. Disabling SMT might be a viable alternative for particular
++   scenarios.
++
++   For further information about confining guests to a single or to a group
++   of cores consult the cpusets documentation:
++
++   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++   Interrupts can be made affine to logical CPUs. This is not universally
++   true because there are types of interrupts which are truly per CPU
++   interrupts, e.g. the local timer interrupt. Aside of that multi queue
++   devices affine their interrupts to single CPUs or groups of CPUs per
++   queue without allowing the administrator to control the affinities.
++
++   Moving the interrupts, which can be affinity controlled, away from CPUs
++   which run untrusted guests, reduces the attack vector space.
++
++   Whether the interrupts with are affine to CPUs, which run untrusted
++   guests, provide interesting data for an attacker depends on the system
++   configuration and the scenarios which run on the system. While for some
++   of the interrupts it can be assumed that they won't expose interesting
++   information beyond exposing hints about the host OS memory layout, there
++   is no way to make general assumptions.
++
++   Interrupt affinity can be controlled by the administrator via the
++   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++   available at:
++
++   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++   To prevent the SMT issues of L1TF it might be necessary to disable SMT
++   completely. Disabling SMT can have a significant performance impact, but
++   the impact depends on the hosting scenario and the type of workloads.
++   The impact of disabling SMT needs also to be weighted against the impact
++   of other mitigation solutions like confining guests to dedicated cores.
++
++   The kernel provides a sysfs interface to retrieve the status of SMT and
++   to control it. It also provides a kernel command line interface to
++   control SMT.
++
++   The kernel command line interface consists of the following options:
++
++     =========== ==========================================================
++     nosmt	 Affects the bring up of the secondary CPUs during boot. The
++		 kernel tries to bring all present CPUs online during the
++		 boot process. "nosmt" makes sure that from each physical
++		 core only one - the so called primary (hyper) thread is
++		 activated. Due to a design flaw of Intel processors related
++		 to Machine Check Exceptions the non primary siblings have
++		 to be brought up at least partially and are then shut down
++		 again.  "nosmt" can be undone via the sysfs interface.
++
++     nosmt=force Has the same effect as "nosmt" but it does not allow to
++		 undo the SMT disable via the sysfs interface.
++     =========== ==========================================================
++
++   The sysfs interface provides two files:
++
++   - /sys/devices/system/cpu/smt/control
++   - /sys/devices/system/cpu/smt/active
++
++   /sys/devices/system/cpu/smt/control:
++
++     This file allows to read out the SMT control state and provides the
++     ability to disable or (re)enable SMT. The possible states are:
++
++	==============  ===================================================
++	on		SMT is supported by the CPU and enabled. All
++			logical CPUs can be onlined and offlined without
++			restrictions.
++
++	off		SMT is supported by the CPU and disabled. Only
++			the so called primary SMT threads can be onlined
++			and offlined without restrictions. An attempt to
++			online a non-primary sibling is rejected
++
++	forceoff	Same as 'off' but the state cannot be controlled.
++			Attempts to write to the control file are rejected.
++
++	notsupported	The processor does not support SMT. It's therefore
++			not affected by the SMT implications of L1TF.
++			Attempts to write to the control file are rejected.
++	==============  ===================================================
++
++     The possible states which can be written into this file to control SMT
++     state are:
++
++     - on
++     - off
++     - forceoff
++
++   /sys/devices/system/cpu/smt/active:
++
++     This file reports whether SMT is enabled and active, i.e. if on any
++     physical core two or more sibling threads are online.
++
++   SMT control is also possible at boot time via the l1tf kernel command
++   line parameter in combination with L1D flush control. See
++   :ref:`mitigation_control_command_line`.
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++  Disabling EPT for virtual machines provides full mitigation for L1TF even
++  with SMT enabled, because the effective page tables for guests are
++  managed and sanitized by the hypervisor. Though disabling EPT has a
++  significant performance impact especially when the Meltdown mitigation
++  KPTI is enabled.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows to control the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		Provides all available mitigations for the L1TF
++		vulnerability. Disables SMT and enables all mitigations in
++		the hypervisors, i.e. unconditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  full,force	Same as 'full', but disables SMT and L1D flush runtime
++		control. Implies the 'nosmt=force' command line option.
++		(i.e. sysfs control of SMT is disabled.)
++
++  flush		Leaves SMT enabled and enables the default hypervisor
++		mitigation, i.e. conditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
++		i.e. conditional L1D flushing.
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
++		started in a potentially insecure configuration.
++
++  off		Disables hypervisor mitigations and doesn't emit any
++		warnings.
++  ============  =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++-------------------------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++  ============  ==============================================================
++  always	L1D cache flush on every VMENTER.
++
++  cond		Flush L1D on VMENTER only when the code between VMEXIT and
++		VMENTER can leak host memory which is considered
++		interesting for an attacker. This can still leak host memory
++		which allows, e.g., determining the host's address space layout.
++
++  never		Disables the mitigation.
++  ============  ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the modules, or modified at runtime via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
++
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The system is protected by the kernel unconditionally and no further
++   action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   If the guest comes from a trusted source and the guest OS kernel is
++   guaranteed to have the L1TF mitigations in place, the system is fully
++   protected against L1TF and no further action is required.
++
++   To avoid the overhead of the default L1D flushing on VMENTER, the
++   administrator can disable the flushing via the kernel command line and
++   sysfs control files. See :ref:`mitigation_control_command_line` and
++   :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If SMT is not supported by the processor or disabled in the BIOS or by
++  the kernel, only L1D flushing on VMENTER needs to be enforced.
++
++  Conditional L1D flushing is the default behaviour and can be tuned. See
++  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If EPT is not supported by the processor or disabled in the hypervisor,
++  the system is fully protected. SMT can stay enabled and L1D flushing on
++  VMENTER is not required.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++  If SMT and EPT are supported and active, then various degrees of
++  mitigation can be employed:
++
++  - L1D flushing on VMENTER:
++
++    L1D flushing on VMENTER is the minimal protection requirement, but it
++    is only potent in combination with other mitigation methods.
++
++    Conditional L1D flushing is the default behaviour and can be tuned. See
++    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++  - Guest confinement:
++
++    Confinement of guests to a single or a group of physical cores which
++    are not running any other processes can reduce the attack surface
++    significantly, but interrupts, soft interrupts and kernel threads can
++    still expose valuable data to a potential attacker. See
++    :ref:`guest_confinement`.
++
++  - Interrupt isolation:
++
++    Isolating the guest CPUs from interrupts can reduce the attack surface
++    further, but still allows a malicious guest to explore a limited amount
++    of host physical memory. This can at least be used to gain knowledge
++    about the host address space layout. The interrupts which have a fixed
++    affinity to the CPUs which run the untrusted guests can, depending on
++    the scenario, still trigger soft interrupts and schedule kernel threads
++    which might expose valuable information. See
++    :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++  - Disabling SMT:
++
++    Disabling SMT and enforcing the L1D flushing provides the maximum
++    amount of protection. This mitigation does not depend on any of the
++    above mitigation methods.
++
++    SMT control and L1D flushing can be tuned by the command line
++    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++    time with the matching sysfs control files. See :ref:`smt_control`,
++    :ref:`mitigation_control_command_line` and
++    :ref:`mitigation_control_kvm`.
++
++  - Disabling EPT:
++
++    Disabling EPT provides the maximum amount of protection as well. It
++    does not depend on any of the above mitigation methods. SMT can stay
++    enabled and L1D flushing is not required, but the performance impact is
++    significant.
++
++    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++    parameter.
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine.  VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++   nested virtual machine, so that the nested hypervisor's secrets are not
++   exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++   the nested hypervisor; this is a complex operation, and flushing the L1D
++   cache prevents the bare metal hypervisor's secrets from being exposed to
++   the nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++   is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - PTE inversion to protect against malicious user space. This is done
++    unconditionally and cannot be controlled.
++
++  - L1D conditional flushing on VMENTER when EPT is enabled for
++    a guest.
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++  The rationale for this choice is:
++
++  - Force disabling SMT can break existing setups, especially with
++    unattended updates.
++
++  - If regular users run untrusted guests on their machine, then L1TF is
++    just an add-on to other malware which might be embedded in an untrusted
++    guest, e.g. spam-bots or attacks on the local network.
++
++    There is no technical way to prevent a user from running untrusted code
++    on their machines blindly.
++
++  - It's technically extremely unlikely and, from today's knowledge, even
++    impossible that L1TF can be exploited via the most popular attack
++    mechanisms like JavaScript because these mechanisms have no way to
++    control PTEs. If this were possible and no other mitigation were
++    available, then the default might be different.
++
++  - The administrators of cloud and hosting setups have to carefully
++    analyze the risk for their scenarios and make the appropriate
++    mitigation choices, which might even vary across their deployed
++    machines and also result in other changes to their overall setup.
++    There is no way for the kernel to provide a sensible default for this
++    kind of scenario.
+diff --git a/Makefile b/Makefile
+index 863f58503bee..5edf963148e8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 1aa59063f1fd..d1f2ed462ac8 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -13,6 +13,9 @@ config KEXEC_CORE
+ config HAVE_IMA_KEXEC
+ 	bool
+ 
++config HOTPLUG_SMT
++	bool
++
+ config OPROFILE
+ 	tristate "OProfile system profiling"
+ 	depends on PROFILING
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 887d3a7bb646..6b8065d718bd 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -187,6 +187,7 @@ config X86
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select HAVE_UNSTABLE_SCHED_CLOCK
+ 	select HAVE_USER_RETURN_NOTIFIER
++	select HOTPLUG_SMT			if SMP
+ 	select IRQ_FORCED_THREADING
+ 	select NEED_SG_DMA_LENGTH
+ 	select PCI_LOCKLESS_CONFIG
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 74a9e06b6cfd..130e81e10fc7 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -10,6 +10,7 @@
+ #include <asm/fixmap.h>
+ #include <asm/mpspec.h>
+ #include <asm/msr.h>
++#include <asm/hardirq.h>
+ 
+ #define ARCH_APICTIMER_STOPS_ON_C3	1
+ 
+@@ -502,12 +503,19 @@ extern int default_check_phys_apicid_present(int phys_apicid);
+ 
+ #endif /* CONFIG_X86_LOCAL_APIC */
+ 
++#ifdef CONFIG_SMP
++bool apic_id_is_primary_thread(unsigned int id);
++#else
++static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
++#endif
++
+ extern void irq_enter(void);
+ extern void irq_exit(void);
+ 
+ static inline void entering_irq(void)
+ {
+ 	irq_enter();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void entering_ack_irq(void)
+@@ -520,6 +528,7 @@ static inline void ipi_entering_ack_irq(void)
+ {
+ 	irq_enter();
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void exiting_irq(void)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/dmi.h b/arch/x86/include/asm/dmi.h
+index 0ab2ab27ad1f..b825cb201251 100644
+--- a/arch/x86/include/asm/dmi.h
++++ b/arch/x86/include/asm/dmi.h
+@@ -4,8 +4,8 @@
+ 
+ #include <linux/compiler.h>
+ #include <linux/init.h>
++#include <linux/io.h>
+ 
+-#include <asm/io.h>
+ #include <asm/setup.h>
+ 
+ static __always_inline __init void *dmi_alloc(unsigned len)
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index 740a428acf1e..d9069bb26c7f 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -3,10 +3,12 @@
+ #define _ASM_X86_HARDIRQ_H
+ 
+ #include <linux/threads.h>
+-#include <linux/irq.h>
+ 
+ typedef struct {
+-	unsigned int __softirq_pending;
++	u16	     __softirq_pending;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++	u8	     kvm_cpu_l1tf_flush_l1d;
++#endif
+ 	unsigned int __nmi_count;	/* arch dependent */
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	unsigned int apic_timer_irqs;	/* arch dependent */
+@@ -58,4 +60,24 @@ extern u64 arch_irq_stat_cpu(unsigned int cpu);
+ extern u64 arch_irq_stat(void);
+ #define arch_irq_stat		arch_irq_stat
+ 
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
++}
++
++static inline void kvm_clear_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
++}
++
++static inline bool kvm_get_cpu_l1tf_flush_l1d(void)
++{
++	return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
++}
++#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
++static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
++
+ #endif /* _ASM_X86_HARDIRQ_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c4fc17220df9..c14f2a74b2be 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -13,6 +13,8 @@
+  * Interrupt control:
+  */
+ 
++/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
++extern inline unsigned long native_save_fl(void);
+ extern inline unsigned long native_save_fl(void)
+ {
+ 	unsigned long flags;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index c13cd28d9d1b..acebb808c4b5 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -17,6 +17,7 @@
+ #include <linux/tracepoint.h>
+ #include <linux/cpumask.h>
+ #include <linux/irq_work.h>
++#include <linux/irq.h>
+ 
+ #include <linux/kvm.h>
+ #include <linux/kvm_para.h>
+@@ -713,6 +714,9 @@ struct kvm_vcpu_arch {
+ 
+ 	/* be preempted when it's in kernel-mode(cpl=0) */
+ 	bool preempted_in_kernel;
++
++	/* Flush the L1 Data cache for L1TF mitigation on VMENTER */
++	bool l1tf_flush_l1d;
+ };
+ 
+ struct kvm_lpage_info {
+@@ -881,6 +885,7 @@ struct kvm_vcpu_stat {
+ 	u64 signal_exits;
+ 	u64 irq_window_exits;
+ 	u64 nmi_window_exits;
++	u64 l1d_flush;
+ 	u64 halt_exits;
+ 	u64 halt_successful_poll;
+ 	u64 halt_attempted_poll;
+@@ -1413,6 +1418,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
+ 
++u64 kvm_get_arch_capabilities(void);
+ void kvm_define_shared_msr(unsigned index, u32 msr);
+ int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 68b2c3150de1..4731f0cf97c5 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -70,12 +70,19 @@
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+ #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	(1 << 3)   /* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO			(1 << 4)   /*
+ 						    * Not susceptible to Speculative Store Bypass
+ 						    * attack, so no Speculative Store Bypass
+ 						    * control required.
+ 						    */
+ 
++#define MSR_IA32_FLUSH_CMD		0x0000010b
++#define L1D_FLUSH			(1 << 0)   /*
++						    * Writeback and invalidate the
++						    * L1 data cache.
++						    */
++
+ #define MSR_IA32_BBL_CR_CTL		0x00000119
+ #define MSR_IA32_BBL_CR_CTL3		0x0000011e
+ 
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index aa30c3241ea7..0d5c739eebd7 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -29,8 +29,13 @@
+ #define N_EXCEPTION_STACKS 1
+ 
+ #ifdef CONFIG_X86_PAE
+-/* 44=32+12, the limit we can fit into an unsigned long pfn */
+-#define __PHYSICAL_MASK_SHIFT	44
++/*
++ * This is beyond the 44 bit limit imposed by the 32bit long pfns,
++ * but we need the full mask to make sure inverted PROT_NONE
++ * entries have all the host bits set in a guest.
++ * The real limit is still 44 bits.
++ */
++#define __PHYSICAL_MASK_SHIFT	52
+ #define __VIRTUAL_MASK_SHIFT	32
+ 
+ #else  /* !CONFIG_X86_PAE */
+diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h
+index 685ffe8a0eaf..60d0f9015317 100644
+--- a/arch/x86/include/asm/pgtable-2level.h
++++ b/arch/x86/include/asm/pgtable-2level.h
+@@ -95,4 +95,21 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_low })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+ 
++/* No inverted PFNs on 2 level page tables */
++
++static inline u64 protnone_mask(u64 val)
++{
++	return 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	return val;
++}
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return false;
++}
++
+ #endif /* _ASM_X86_PGTABLE_2LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index f24df59c40b2..bb035a4cbc8c 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -241,12 +241,43 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
+ #endif
+ 
+ /* Encode and de-code a swap entry */
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
++
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
+ #define __swp_type(x)			(((x).val) & 0x1f)
+ #define __swp_offset(x)			((x).val >> 5)
+ #define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
+-#define __pte_to_swp_entry(pte)		((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x)		((pte_t){ { .pte_high = (x).val } })
++
++/*
++ * Normally, __swp_entry() converts from arch-independent swp_entry_t to
++ * arch-dependent swp_entry_t, and __swp_entry_to_pte() just stores the result
++ * to pte. But here we have 32bit swp_entry_t and 64bit pte, and need to use the
++ * whole 64 bits. Thus, we shift the "real" arch-dependent conversion to
++ * __swp_entry_to_pte() through the following helper macro based on 64bit
++ * __swp_entry().
++ */
++#define __swp_pteval_entry(type, offset) ((pteval_t) { \
++	(~(pteval_t)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((pteval_t)(type) << (64 - SWP_TYPE_BITS)) })
++
++#define __swp_entry_to_pte(x)	((pte_t){ .pte = \
++		__swp_pteval_entry(__swp_type(x), __swp_offset(x)) })
++/*
++ * Analogically, __pte_to_swp_entry() doesn't just extract the arch-dependent
++ * swp_entry_t, but also has to convert it from 64bit to the 32bit
++ * intermediate representation, using the following macros based on 64bit
++ * __swp_type() and __swp_offset().
++ */
++#define __pteval_swp_type(x) ((unsigned long)((x).pte >> (64 - SWP_TYPE_BITS)))
++#define __pteval_swp_offset(x) ((unsigned long)(~((x).pte) << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT))
++
++#define __pte_to_swp_entry(pte)	(__swp_entry(__pteval_swp_type(pte), \
++					     __pteval_swp_offset(pte)))
+ 
+ #define gup_get_pte gup_get_pte
+ /*
+@@ -295,4 +326,6 @@ static inline pte_t gup_get_pte(pte_t *ptep)
+ 	return pte;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+new file mode 100644
+index 000000000000..44b1203ece12
+--- /dev/null
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_PGTABLE_INVERT_H
++#define _ASM_PGTABLE_INVERT_H 1
++
++#ifndef __ASSEMBLY__
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return !(val & _PAGE_PRESENT);
++}
++
++/* Get a mask to xor with the page table entry to get the correct pfn. */
++static inline u64 protnone_mask(u64 val)
++{
++	return __pte_needs_invert(val) ?  ~0ull : 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	/*
++	 * When a PTE transitions from NONE to !NONE or vice-versa
++	 * invert the PFN part to stop speculation.
++	 * pte_pfn undoes this when needed.
++	 */
++	if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
++		val = (val & ~mask) | (~val & mask);
++	return val;
++}
++
++#endif /* __ASSEMBLY__ */
++
++#endif
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 5715647fc4fe..13125aad804c 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -185,19 +185,29 @@ static inline int pte_special(pte_t pte)
+ 	return pte_flags(pte) & _PAGE_SPECIAL;
+ }
+ 
++/* Entries that were set to PROT_NONE are inverted */
++
++static inline u64 protnone_mask(u64 val);
++
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
++	phys_addr_t pfn = pte_val(pte);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pmd_val(pmd);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pud_pfn(pud_t pud)
+ {
+-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pud_val(pud);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long p4d_pfn(p4d_t p4d)
+@@ -400,11 +410,6 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
+ 	return pmd_set_flags(pmd, _PAGE_RW);
+ }
+ 
+-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+-{
+-	return pmd_clear_flags(pmd, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+ {
+ 	pudval_t v = native_pud_val(pud);
+@@ -459,11 +464,6 @@ static inline pud_t pud_mkwrite(pud_t pud)
+ 	return pud_set_flags(pud, _PAGE_RW);
+ }
+ 
+-static inline pud_t pud_mknotpresent(pud_t pud)
+-{
+-	return pud_clear_flags(pud, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+ static inline int pte_soft_dirty(pte_t pte)
+ {
+@@ -545,25 +545,45 @@ static inline pgprotval_t check_pgprot(pgprot_t pgprot)
+ 
+ static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PTE_PFN_MASK;
++	return __pte(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PMD_PAGE_MASK;
++	return __pmd(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pud(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PUD_PAGE_MASK;
++	return __pud(pfn | check_pgprot(pgprot));
+ }
+ 
++static inline pmd_t pmd_mknotpresent(pmd_t pmd)
++{
++	return pfn_pmd(pmd_pfn(pmd),
++		      __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline pud_t pud_mknotpresent(pud_t pud)
++{
++	return pfn_pud(pud_pfn(pud),
++	      __pgprot(pud_flags(pud) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
++
+ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ {
+-	pteval_t val = pte_val(pte);
++	pteval_t val = pte_val(pte), oldval = val;
+ 
+ 	/*
+ 	 * Chop off the NX bit (if present), and add the NX portion of
+@@ -571,17 +591,17 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 	 */
+ 	val &= _PAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
+ 	return __pte(val);
+ }
+ 
+ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+ {
+-	pmdval_t val = pmd_val(pmd);
++	pmdval_t val = pmd_val(pmd), oldval = val;
+ 
+ 	val &= _HPAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
+ 	return __pmd(val);
+ }
+ 
+@@ -1320,6 +1340,14 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
+ 	return __pte_access_permitted(pud_val(pud), write);
+ }
+ 
++#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
++extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return boot_cpu_has_bug(X86_BUG_L1TF);
++}
++
+ #include <asm-generic/pgtable.h>
+ #endif	/* __ASSEMBLY__ */
+ 
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 3c5385f9a88f..82ff20b0ae45 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -273,7 +273,7 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * |     ...            | 11| 10|  9|8|7|6|5| 4| 3|2| 1|0| <- bit number
+  * |     ...            |SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names
+- * | OFFSET (14->63) | TYPE (9-13)  |0|0|X|X| X| X|X|SD|0| <- swp entry
++ * | TYPE (59-63) | ~OFFSET (9-58)  |0|0|X|X| X| X|X|SD|0| <- swp entry
+  *
+  * G (8) is aliased and used as a PROT_NONE indicator for
+  * !present ptes.  We need to start storing swap entries above
+@@ -286,20 +286,34 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
+  * but also L and G.
++ *
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
+  */
+-#define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
+-#define SWP_TYPE_BITS 5
+-/* Place the offset above the type: */
+-#define SWP_OFFSET_FIRST_BIT (SWP_TYPE_FIRST_BIT + SWP_TYPE_BITS)
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT+SWP_TYPE_BITS)
+ 
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+ 
+-#define __swp_type(x)			(((x).val >> (SWP_TYPE_FIRST_BIT)) \
+-					 & ((1U << SWP_TYPE_BITS) - 1))
+-#define __swp_offset(x)			((x).val >> SWP_OFFSET_FIRST_BIT)
+-#define __swp_entry(type, offset)	((swp_entry_t) { \
+-					 ((type) << (SWP_TYPE_FIRST_BIT)) \
+-					 | ((offset) << SWP_OFFSET_FIRST_BIT) })
++/* Extract the high bits for type */
++#define __swp_type(x) ((x).val >> (64 - SWP_TYPE_BITS))
++
++/* Shift up (to get rid of type), then down to get value */
++#define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)
++
++/*
++ * Shift the offset up "too far" by TYPE bits, then down again
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
++ */
++#define __swp_entry(type, offset) ((swp_entry_t) { \
++	(~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((unsigned long)(type) << (64-SWP_TYPE_BITS)) })
++
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+@@ -343,5 +357,7 @@ static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
+ 	return true;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* !__ASSEMBLY__ */
+ #endif /* _ASM_X86_PGTABLE_64_H */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index cfd29ee8c3da..79e409974ccc 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -181,6 +181,11 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
++static inline unsigned long l1tf_pfn_limit(void)
++{
++	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++}
++
+ extern void early_cpu_init(void);
+ extern void identify_boot_cpu(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+@@ -977,4 +982,16 @@ bool xen_set_default_idle(void);
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
+ void microcode_check(void);
++
++enum l1tf_mitigations {
++	L1TF_MITIGATION_OFF,
++	L1TF_MITIGATION_FLUSH_NOWARN,
++	L1TF_MITIGATION_FLUSH,
++	L1TF_MITIGATION_FLUSH_NOSMT,
++	L1TF_MITIGATION_FULL,
++	L1TF_MITIGATION_FULL_FORCE
++};
++
++extern enum l1tf_mitigations l1tf_mitigation;
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index c1d2a9892352..453cf38a1c33 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -123,13 +123,17 @@ static inline int topology_max_smt_threads(void)
+ }
+ 
+ int topology_update_package_map(unsigned int apicid, unsigned int cpu);
+-extern int topology_phys_to_logical_pkg(unsigned int pkg);
++int topology_phys_to_logical_pkg(unsigned int pkg);
++bool topology_is_primary_thread(unsigned int cpu);
++bool topology_smt_supported(void);
+ #else
+ #define topology_max_packages()			(1)
+ static inline int
+ topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
++static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
++static inline bool topology_smt_supported(void) { return false; }
+ #endif
+ 
+ static inline void arch_fix_phys_package_id(int num, u32 slot)
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 6aa8499e1f62..95f9107449bf 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -576,4 +576,15 @@ enum vm_instruction_error_number {
+ 	VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
+ };
+ 
++enum vmx_l1d_flush_state {
++	VMENTER_L1D_FLUSH_AUTO,
++	VMENTER_L1D_FLUSH_NEVER,
++	VMENTER_L1D_FLUSH_COND,
++	VMENTER_L1D_FLUSH_ALWAYS,
++	VMENTER_L1D_FLUSH_EPT_DISABLED,
++	VMENTER_L1D_FLUSH_NOT_REQUIRED,
++};
++
++extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
++
+ #endif
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index adbda5847b14..3b3a2d0af78d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -56,6 +56,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
++#include <asm/irq_regs.h>
+ 
+ unsigned int num_processors;
+ 
+@@ -2192,6 +2193,23 @@ static int cpuid_to_apicid[] = {
+ 	[0 ... NR_CPUS - 1] = -1,
+ };
+ 
++#ifdef CONFIG_SMP
++/**
++ * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
++ * @id:	APIC ID to check
++ */
++bool apic_id_is_primary_thread(unsigned int apicid)
++{
++	u32 mask;
++
++	if (smp_num_siblings == 1)
++		return true;
++	/* Isolate the SMT bit(s) in the APICID and check for 0 */
++	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
++	return !(apicid & mask);
++}
++#endif
++
+ /*
+  * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
+  * and cpuid_to_apicid[] synchronized.
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 3982f79d2377..ff0d14cd9e82 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -33,6 +33,7 @@
+ 
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/init.h>
+ #include <linux/delay.h>
+ #include <linux/sched.h>
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index ce503c99f5c4..72a94401f9e0 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -12,6 +12,7 @@
+  */
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/dmar.h>
+ #include <linux/hpet.h>
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 35aaee4fc028..c9b773401fd8 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -11,6 +11,7 @@
+  * published by the Free Software Foundation.
+  */
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/init.h>
+ #include <linux/compiler.h>
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 38915fbfae73..97e962afb967 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -315,6 +315,13 @@ static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+ 	c->cpu_core_id %= cus_per_node;
+ }
+ 
++
++static void amd_get_topology_early(struct cpuinfo_x86 *c)
++{
++	if (cpu_has(c, X86_FEATURE_TOPOEXT))
++		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
++}
++
+ /*
+  * Fixup core topology information for
+  * (1) AMD multi-node processors
+@@ -334,7 +341,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ 
+ 		node_id  = ecx & 0xff;
+-		smp_num_siblings = ((ebx >> 8) & 0xff) + 1;
+ 
+ 		if (c->x86 == 0x15)
+ 			c->cu_id = ebx & 0xff;
+@@ -613,6 +619,7 @@ clear_sev:
+ 
+ static void early_init_amd(struct cpuinfo_x86 *c)
+ {
++	u64 value;
+ 	u32 dummy;
+ 
+ 	early_init_amd_mc(c);
+@@ -683,6 +690,22 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_AMD_E400);
+ 
+ 	early_detect_mem_encrypt(c);
++
++	/* Re-enable TopologyExtensions if switched off by BIOS */
++	if (c->x86 == 0x15 &&
++	    (c->x86_model >= 0x10 && c->x86_model <= 0x6f) &&
++	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
++
++		if (msr_set_bit(0xc0011005, 54) > 0) {
++			rdmsrl(0xc0011005, value);
++			if (value & BIT_64(54)) {
++				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
++				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
++			}
++		}
++	}
++
++	amd_get_topology_early(c);
+ }
+ 
+ static void init_amd_k8(struct cpuinfo_x86 *c)
+@@ -774,19 +797,6 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ {
+ 	u64 value;
+ 
+-	/* re-enable TopologyExtensions if switched off by BIOS */
+-	if ((c->x86_model >= 0x10) && (c->x86_model <= 0x6f) &&
+-	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
+-
+-		if (msr_set_bit(0xc0011005, 54) > 0) {
+-			rdmsrl(0xc0011005, value);
+-			if (value & BIT_64(54)) {
+-				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
+-				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * The way access filter has a performance penalty on some workloads.
+ 	 * Disable it on the affected CPUs.
+@@ -850,16 +860,9 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 
+ 	cpu_detect_cache_sizes(c);
+ 
+-	/* Multi core CPU? */
+-	if (c->extended_cpuid_level >= 0x80000008) {
+-		amd_detect_cmp(c);
+-		amd_get_topology(c);
+-		srat_detect_node(c);
+-	}
+-
+-#ifdef CONFIG_X86_32
+-	detect_ht(c);
+-#endif
++	amd_detect_cmp(c);
++	amd_get_topology(c);
++	srat_detect_node(c);
+ 
+ 	init_amd_cacheinfo(c);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 5c0ea39311fe..c4f0ae49a53d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -22,15 +22,18 @@
+ #include <asm/processor-flags.h>
+ #include <asm/fpu/internal.h>
+ #include <asm/msr.h>
++#include <asm/vmx.h>
+ #include <asm/paravirt.h>
+ #include <asm/alternative.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/intel-family.h>
+ #include <asm/hypervisor.h>
++#include <asm/e820/api.h>
+ 
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
++static void __init l1tf_select_mitigation(void);
+ 
+ /*
+  * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+@@ -56,6 +59,12 @@ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+ 
++	/*
++	 * identify_boot_cpu() initialized SMT support information, let the
++	 * core code know.
++	 */
++	cpu_smt_check_topology_early();
++
+ 	if (!IS_ENABLED(CONFIG_SMP)) {
+ 		pr_info("CPU: ");
+ 		print_cpu_info(&boot_cpu_data);
+@@ -82,6 +91,8 @@ void __init check_bugs(void)
+ 	 */
+ 	ssb_select_mitigation();
+ 
++	l1tf_select_mitigation();
++
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Check whether we are able to run this kernel safely on SMP.
+@@ -313,23 +324,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
+-/* Check for Skylake-like CPUs (for RSB handling) */
+-static bool __init is_skylake_era(void)
+-{
+-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+-	    boot_cpu_data.x86 == 6) {
+-		switch (boot_cpu_data.x86_model) {
+-		case INTEL_FAM6_SKYLAKE_MOBILE:
+-		case INTEL_FAM6_SKYLAKE_DESKTOP:
+-		case INTEL_FAM6_SKYLAKE_X:
+-		case INTEL_FAM6_KABYLAKE_MOBILE:
+-		case INTEL_FAM6_KABYLAKE_DESKTOP:
+-			return true;
+-		}
+-	}
+-	return false;
+-}
+-
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -390,22 +384,15 @@ retpoline_auto:
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+ 	/*
+-	 * If neither SMEP nor PTI are available, there is a risk of
+-	 * hitting userspace addresses in the RSB after a context switch
+-	 * from a shallow call stack to a deeper one. To prevent this fill
+-	 * the entire RSB, even when using IBRS.
++	 * If spectre v2 protection has been enabled, unconditionally fill
++	 * RSB during a context switch; this protects against two independent
++	 * issues:
+ 	 *
+-	 * Skylake era CPUs have a separate issue with *underflow* of the
+-	 * RSB, when they will predict 'ret' targets from the generic BTB.
+-	 * The proper mitigation for this is IBRS. If IBRS is not supported
+-	 * or deactivated in favour of retpolines the RSB fill on context
+-	 * switch is required.
++	 *	- RSB underflow (and switch to BTB) on Skylake+
++	 *	- SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
+ 	 */
+-	if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+-	     !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
+-		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+-		pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
+-	}
++	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
++	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+ 
+ 	/* Initialize Indirect Branch Prediction Barrier if supported */
+ 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+@@ -654,8 +641,121 @@ void x86_spec_ctrl_setup_ap(void)
+ 		x86_amd_ssb_disable();
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"L1TF: " fmt
++
++/* Default mitigation for L1TF-affected CPUs */
++enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(l1tf_mitigation);
++
++enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
++#endif
++
++static void __init l1tf_select_mitigation(void)
++{
++	u64 half_pa;
++
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return;
++
++	switch (l1tf_mitigation) {
++	case L1TF_MITIGATION_OFF:
++	case L1TF_MITIGATION_FLUSH_NOWARN:
++	case L1TF_MITIGATION_FLUSH:
++		break;
++	case L1TF_MITIGATION_FLUSH_NOSMT:
++	case L1TF_MITIGATION_FULL:
++		cpu_smt_disable(false);
++		break;
++	case L1TF_MITIGATION_FULL_FORCE:
++		cpu_smt_disable(true);
++		break;
++	}
++
++#if CONFIG_PGTABLE_LEVELS == 2
++	pr_warn("Kernel not compiled for PAE. No mitigation for L1TF\n");
++	return;
++#endif
++
++	/*
++	 * This is extremely unlikely to happen because almost all
++	 * systems have far more MAX_PA/2 than RAM can be fit into
++	 * DIMM slots.
++	 */
++	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
++	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
++		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		return;
++	}
++
++	setup_force_cpu_cap(X86_FEATURE_L1TF_PTEINV);
++}
++
++static int __init l1tf_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		l1tf_mitigation = L1TF_MITIGATION_OFF;
++	else if (!strcmp(str, "flush,nowarn"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN;
++	else if (!strcmp(str, "flush"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH;
++	else if (!strcmp(str, "flush,nosmt"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++	else if (!strcmp(str, "full"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL;
++	else if (!strcmp(str, "full,force"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE;
++
++	return 0;
++}
++early_param("l1tf", l1tf_cmdline);
++
++#undef pr_fmt
++
+ #ifdef CONFIG_SYSFS
+ 
++#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static const char *l1tf_vmx_states[] = {
++	[VMENTER_L1D_FLUSH_AUTO]		= "auto",
++	[VMENTER_L1D_FLUSH_NEVER]		= "vulnerable",
++	[VMENTER_L1D_FLUSH_COND]		= "conditional cache flushes",
++	[VMENTER_L1D_FLUSH_ALWAYS]		= "cache flushes",
++	[VMENTER_L1D_FLUSH_EPT_DISABLED]	= "EPT disabled",
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED]	= "flush not necessary"
++};
++
++static ssize_t l1tf_show_state(char *buf)
++{
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO)
++		return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
++	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
++	     cpu_smt_control == CPU_SMT_ENABLED))
++		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
++			       l1tf_vmx_states[l1tf_vmx_mitigation]);
++
++	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
++		       l1tf_vmx_states[l1tf_vmx_mitigation],
++		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
++}
++#else
++static ssize_t l1tf_show_state(char *buf)
++{
++	return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++}
++#endif
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -684,6 +784,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+ 
++	case X86_BUG_L1TF:
++		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
++			return l1tf_show_state(buf);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -710,4 +814,9 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
+ }
++
++ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index eb4cb3efd20e..9eda6f730ec4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -661,33 +661,36 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ 		tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+ 
+-void detect_ht(struct cpuinfo_x86 *c)
++int detect_ht_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+ 	u32 eax, ebx, ecx, edx;
+-	int index_msb, core_bits;
+-	static bool printed;
+ 
+ 	if (!cpu_has(c, X86_FEATURE_HT))
+-		return;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+-		goto out;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
+-		return;
++		return -1;
+ 
+ 	cpuid(1, &eax, &ebx, &ecx, &edx);
+ 
+ 	smp_num_siblings = (ebx & 0xff0000) >> 16;
+-
+-	if (smp_num_siblings == 1) {
++	if (smp_num_siblings == 1)
+ 		pr_info_once("CPU0: Hyper-Threading is disabled\n");
+-		goto out;
+-	}
++#endif
++	return 0;
++}
+ 
+-	if (smp_num_siblings <= 1)
+-		goto out;
++void detect_ht(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	int index_msb, core_bits;
++
++	if (detect_ht_early(c) < 0)
++		return;
+ 
+ 	index_msb = get_count_order(smp_num_siblings);
+ 	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+@@ -700,15 +703,6 @@ void detect_ht(struct cpuinfo_x86 *c)
+ 
+ 	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
+ 				       ((1 << core_bits) - 1);
+-
+-out:
+-	if (!printed && (c->x86_max_cores * smp_num_siblings) > 1) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-			c->phys_proc_id);
+-		pr_info("CPU: Processor Core ID: %d\n",
+-			c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ }
+ 
+@@ -987,6 +981,21 @@ static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+ 	{}
+ };
+ 
++static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
++	/* in addition to cpu_no_speculation */
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT1	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT2	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MERRIFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MOOREFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_DENVERTON	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GEMINI_LAKE	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
++	{}
++};
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = 0;
+@@ -1013,6 +1022,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
++
++	if (x86_match_cpu(cpu_no_l1tf))
++		return;
++
++	setup_force_cpu_bug(X86_BUG_L1TF);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 38216f678fc3..e59c0ea82a33 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -55,7 +55,9 @@ extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
+ extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
+ 
+ extern void detect_num_cpu_cores(struct cpuinfo_x86 *c);
++extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology(struct cpuinfo_x86 *c);
++extern int detect_ht_early(struct cpuinfo_x86 *c);
+ extern void detect_ht(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index eb75564f2d25..6602941cfebf 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -301,6 +301,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	}
+ 
+ 	check_mpx_erratum(c);
++
++	/*
++	 * Get the number of SMT siblings early from the extended topology
++	 * leaf, if available. Otherwise try the legacy SMT detection.
++	 */
++	if (detect_extended_topology_early(c) < 0)
++		detect_ht_early(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 08286269fd24..b9bc8a1a584e 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -509,12 +509,20 @@ static struct platform_device	*microcode_pdev;
+ 
+ static int check_online_cpus(void)
+ {
+-	if (num_online_cpus() == num_present_cpus())
+-		return 0;
++	unsigned int cpu;
+ 
+-	pr_err("Not all CPUs online, aborting microcode update.\n");
++	/*
++	 * Make sure all CPUs are online.  It's fine for SMT to be disabled if
++	 * all the primary threads are still online.
++	 */
++	for_each_present_cpu(cpu) {
++		if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) {
++			pr_err("Not all CPUs online, aborting microcode update.\n");
++			return -EINVAL;
++		}
++	}
+ 
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ static atomic_t late_cpus_in;
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 81c0afb39d0a..71ca064e3794 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -22,18 +22,10 @@
+ #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
+ #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
+ 
+-/*
+- * Check for extended topology enumeration cpuid leaf 0xb and if it
+- * exists, use it for populating initial_apicid and cpu topology
+- * detection.
+- */
+-int detect_extended_topology(struct cpuinfo_x86 *c)
++int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+-	unsigned int eax, ebx, ecx, edx, sub_index;
+-	unsigned int ht_mask_width, core_plus_mask_width;
+-	unsigned int core_select_mask, core_level_siblings;
+-	static bool printed;
++	unsigned int eax, ebx, ecx, edx;
+ 
+ 	if (c->cpuid_level < 0xb)
+ 		return -1;
+@@ -52,10 +44,30 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	 * initial apic id, which also represents 32-bit extended x2apic id.
+ 	 */
+ 	c->initial_apicid = edx;
++	smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++#endif
++	return 0;
++}
++
++/*
++ * Check for extended topology enumeration cpuid leaf 0xb and if it
++ * exists, use it for populating initial_apicid and cpu topology
++ * detection.
++ */
++int detect_extended_topology(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	unsigned int eax, ebx, ecx, edx, sub_index;
++	unsigned int ht_mask_width, core_plus_mask_width;
++	unsigned int core_select_mask, core_level_siblings;
++
++	if (detect_extended_topology_early(c) < 0)
++		return -1;
+ 
+ 	/*
+ 	 * Populate HT related information from sub-leaf level 0.
+ 	 */
++	cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ 	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 
+@@ -86,15 +98,6 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+ 
+ 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
+-
+-	if (!printed) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-		       c->phys_proc_id);
+-		if (c->x86_max_cores > 1)
+-			pr_info("CPU: Processor Core ID: %d\n",
+-			       c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index f92a6593de1e..2ea85b32421a 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -10,6 +10,7 @@
+ #include <asm/fpu/signal.h>
+ #include <asm/fpu/types.h>
+ #include <asm/traps.h>
++#include <asm/irq_regs.h>
+ 
+ #include <linux/hardirq.h>
+ #include <linux/pkeys.h>
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 346b24883911..b0acb22e5a46 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1,6 +1,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/export.h>
+ #include <linux/delay.h>
+ #include <linux/errno.h>
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 86c4439f9d74..519649ddf100 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/init.h>
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 74383a3780dc..01adea278a71 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -8,6 +8,7 @@
+ #include <asm/traps.h>
+ #include <asm/proto.h>
+ #include <asm/desc.h>
++#include <asm/hw_irq.h>
+ 
+ struct idt_data {
+ 	unsigned int	vector;
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 328d027d829d..59b5f2ea7c2f 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -10,6 +10,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/delay.h>
+ #include <linux/export.h>
++#include <linux/irq.h>
+ 
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index c1bdbd3d3232..95600a99ae93 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/notifier.h>
+ #include <linux/cpu.h>
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index d86e344f5b3d..0469cd078db1 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/kernel_stat.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/delay.h>
+ #include <linux/ftrace.h>
+diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
+index 772196c1b8c4..a0693b71cfc1 100644
+--- a/arch/x86/kernel/irqinit.c
++++ b/arch/x86/kernel/irqinit.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/kprobes.h>
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 6f4d42377fe5..44e26dc326d5 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -395,8 +395,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 			  - (u8 *) real;
+ 		if ((s64) (s32) newdisp != newdisp) {
+ 			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
+-			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
+-				src, real, insn->displacement.value);
+ 			return 0;
+ 		}
+ 		disp = (u8 *) dest + insn_offset_displacement(insn);
+@@ -640,8 +638,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ 		 * Raise a BUG or we'll continue in an endless reentering loop
+ 		 * and eventually a stack overflow.
+ 		 */
+-		printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n",
+-		       p->addr);
++		pr_err("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 	default:
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 99dc79e76bdc..930c88341e4e 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -88,10 +88,12 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (tgt_clobbers & ~site_clobbers)
+-		return len;	/* target would clobber too much for this site */
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe8; /* call */
+ 	b->delta = delta;
+@@ -106,8 +108,12 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe9;	/* jmp */
+ 	b->delta = delta;
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 2f86d883dd95..74b4472ba0a6 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -823,6 +823,12 @@ void __init setup_arch(char **cmdline_p)
+ 	memblock_reserve(__pa_symbol(_text),
+ 			 (unsigned long)__bss_stop - (unsigned long)_text);
+ 
++	/*
++	 * Make sure page 0 is always reserved because on systems with
++	 * L1TF its contents can be leaked to user processes.
++	 */
++	memblock_reserve(0, PAGE_SIZE);
++
+ 	early_reserve_initrd();
+ 
+ 	/*
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 5c574dff4c1a..04adc8d60aed 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -261,6 +261,7 @@ __visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
+ {
+ 	ack_APIC_irq();
+ 	inc_irq_stat(irq_resched_count);
++	kvm_set_cpu_l1tf_flush_l1d();
+ 
+ 	if (trace_resched_ipi_enabled()) {
+ 		/*
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index db9656e13ea0..f02ecaf97904 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -80,6 +80,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/spec-ctrl.h>
++#include <asm/hw_irq.h>
+ 
+ /* representing HT siblings of each logical CPU */
+ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
+@@ -270,6 +271,23 @@ static void notrace start_secondary(void *unused)
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
++/**
++ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
++ * @cpu:	CPU to check
++ */
++bool topology_is_primary_thread(unsigned int cpu)
++{
++	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
++}
++
++/**
++ * topology_smt_supported - Check whether SMT is supported by the CPUs
++ */
++bool topology_smt_supported(void)
++{
++	return smp_num_siblings > 1;
++}
++
+ /**
+  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+  *
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index 774ebafa97c4..be01328eb755 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -12,6 +12,7 @@
+ 
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/i8253.h>
+ #include <linux/time.h>
+ #include <linux/export.h>
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 6b8f11521c41..a44e568363a4 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3840,6 +3840,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ {
+ 	int r = 1;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	switch (vcpu->arch.apf.host_apf_reason) {
+ 	default:
+ 		trace_kvm_page_fault(fault_address, error_code);
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 5d8e317c2b04..46b428c0990e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -188,6 +188,150 @@ module_param(ple_window_max, uint, 0444);
+ 
+ extern const ulong vmx_return;
+ 
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
++static DEFINE_MUTEX(vmx_l1d_flush_mutex);
++
++/* Storage for pre module init parameter parsing */
++static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_L1D_FLUSH_AUTO;
++
++static const struct {
++	const char *option;
++	enum vmx_l1d_flush_state cmd;
++} vmentry_l1d_param[] = {
++	{"auto",	VMENTER_L1D_FLUSH_AUTO},
++	{"never",	VMENTER_L1D_FLUSH_NEVER},
++	{"cond",	VMENTER_L1D_FLUSH_COND},
++	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++};
++
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
++{
++	struct page *page;
++	unsigned int i;
++
++	if (!enable_ept) {
++		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
++		return 0;
++	}
++
++       if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
++	       u64 msr;
++
++	       rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
++	       if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
++		       l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
++		       return 0;
++	       }
++       }
++
++	/* If set to auto use the default l1tf mitigation method */
++	if (l1tf == VMENTER_L1D_FLUSH_AUTO) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++			l1tf = VMENTER_L1D_FLUSH_NEVER;
++			break;
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++			l1tf = VMENTER_L1D_FLUSH_COND;
++			break;
++		case L1TF_MITIGATION_FULL:
++		case L1TF_MITIGATION_FULL_FORCE:
++			l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++			break;
++		}
++	} else if (l1tf_mitigation == L1TF_MITIGATION_FULL_FORCE) {
++		l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++	}
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
++	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
++		if (!page)
++			return -ENOMEM;
++		vmx_l1d_flush_pages = page_address(page);
++
++		/*
++		 * Initialize each page with a different pattern in
++		 * order to protect against KSM in the nested
++		 * virtualization case.
++		 */
++		for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
++			memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
++			       PAGE_SIZE);
++		}
++	}
++
++	l1tf_vmx_mitigation = l1tf;
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER)
++		static_branch_enable(&vmx_l1d_should_flush);
++	else
++		static_branch_disable(&vmx_l1d_should_flush);
++
++	if (l1tf == VMENTER_L1D_FLUSH_COND)
++		static_branch_enable(&vmx_l1d_flush_cond);
++	else
++		static_branch_disable(&vmx_l1d_flush_cond);
++	return 0;
++}
++
++static int vmentry_l1d_flush_parse(const char *s)
++{
++	unsigned int i;
++
++	if (s) {
++		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
++			if (sysfs_streq(s, vmentry_l1d_param[i].option))
++				return vmentry_l1d_param[i].cmd;
++		}
++	}
++	return -EINVAL;
++}
++
++static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
++{
++	int l1tf, ret;
++
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
++	l1tf = vmentry_l1d_flush_parse(s);
++	if (l1tf < 0)
++		return l1tf;
++
++	/*
++	 * Has vmx_init() run already? If not then this is the pre init
++	 * parameter parsing. In that case just store the value and let
++	 * vmx_init() do the proper setup after enable_ept has been
++	 * established.
++	 */
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) {
++		vmentry_l1d_flush_param = l1tf;
++		return 0;
++	}
++
++	mutex_lock(&vmx_l1d_flush_mutex);
++	ret = vmx_setup_l1d_flush(l1tf);
++	mutex_unlock(&vmx_l1d_flush_mutex);
++	return ret;
++}
++
++static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
++{
++	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
++}
++
++static const struct kernel_param_ops vmentry_l1d_flush_ops = {
++	.set = vmentry_l1d_flush_set,
++	.get = vmentry_l1d_flush_get,
++};
++module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
++
+ struct kvm_vmx {
+ 	struct kvm kvm;
+ 
+@@ -757,6 +901,11 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
+ 			(unsigned long *)&pi_desc->control);
+ }
+ 
++struct vmx_msrs {
++	unsigned int		nr;
++	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
++};
++
+ struct vcpu_vmx {
+ 	struct kvm_vcpu       vcpu;
+ 	unsigned long         host_rsp;
+@@ -790,9 +939,8 @@ struct vcpu_vmx {
+ 	struct loaded_vmcs   *loaded_vmcs;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+-		unsigned nr;
+-		struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS];
+-		struct vmx_msr_entry host[NR_AUTOLOAD_MSRS];
++		struct vmx_msrs guest;
++		struct vmx_msrs host;
+ 	} msr_autoload;
+ 	struct {
+ 		int           loaded;
+@@ -2377,9 +2525,20 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ 	vm_exit_controls_clearbit(vmx, exit);
+ }
+ 
++static int find_msr(struct vmx_msrs *m, unsigned int msr)
++{
++	unsigned int i;
++
++	for (i = 0; i < m->nr; ++i) {
++		if (m->val[i].index == msr)
++			return i;
++	}
++	return -ENOENT;
++}
++
+ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ {
+-	unsigned i;
++	int i;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2400,18 +2559,21 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ 		}
+ 		break;
+ 	}
++	i = find_msr(&m->guest, msr);
++	if (i < 0)
++		goto skip_guest;
++	--m->guest.nr;
++	m->guest.val[i] = m->guest.val[m->guest.nr];
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
+-
+-	if (i == m->nr)
++skip_guest:
++	i = find_msr(&m->host, msr);
++	if (i < 0)
+ 		return;
+-	--m->nr;
+-	m->guest[i] = m->guest[m->nr];
+-	m->host[i] = m->host[m->nr];
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
++
++	--m->host.nr;
++	m->host.val[i] = m->host.val[m->host.nr];
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
+ }
+ 
+ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+@@ -2426,9 +2588,9 @@ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ }
+ 
+ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+-				  u64 guest_val, u64 host_val)
++				  u64 guest_val, u64 host_val, bool entry_only)
+ {
+-	unsigned i;
++	int i, j = 0;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2463,24 +2625,31 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+ 		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+ 	}
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
++	i = find_msr(&m->guest, msr);
++	if (!entry_only)
++		j = find_msr(&m->host, msr);
+ 
+-	if (i == NR_AUTOLOAD_MSRS) {
++	if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) {
+ 		printk_once(KERN_WARNING "Not enough msr switch entries. "
+ 				"Can't add msr %x\n", msr);
+ 		return;
+-	} else if (i == m->nr) {
+-		++m->nr;
+-		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+ 	}
++	if (i < 0) {
++		i = m->guest.nr++;
++		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
++	}
++	m->guest.val[i].index = msr;
++	m->guest.val[i].value = guest_val;
++
++	if (entry_only)
++		return;
+ 
+-	m->guest[i].index = msr;
+-	m->guest[i].value = guest_val;
+-	m->host[i].index = msr;
+-	m->host[i].value = host_val;
++	if (j < 0) {
++		j = m->host.nr++;
++		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
++	}
++	m->host.val[j].index = msr;
++	m->host.val[j].value = host_val;
+ }
+ 
+ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+@@ -2524,7 +2693,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+ 			guest_efer &= ~EFER_LME;
+ 		if (guest_efer != host_efer)
+ 			add_atomic_switch_msr(vmx, MSR_EFER,
+-					      guest_efer, host_efer);
++					      guest_efer, host_efer, false);
+ 		return false;
+ 	} else {
+ 		guest_efer &= ~ignore_bits;
+@@ -3987,7 +4156,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.ia32_xss = data;
+ 		if (vcpu->arch.ia32_xss != host_xss)
+ 			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+-				vcpu->arch.ia32_xss, host_xss);
++				vcpu->arch.ia32_xss, host_xss, false);
+ 		else
+ 			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+ 		break;
+@@ -6274,9 +6443,9 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
+ 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
+ 		vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
+@@ -6296,8 +6465,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 		++vmx->nmsrs;
+ 	}
+ 
+-	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities);
++	vmx->arch_capabilities = kvm_get_arch_capabilities();
+ 
+ 	vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
+ 
+@@ -9548,6 +9716,79 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
++/*
++ * Software based L1D cache flush which is used when microcode providing
++ * the cache control MSR is not loaded.
++ *
++ * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to
++ * flush it is required to read in 64 KiB because the replacement algorithm
++ * is not exactly LRU. This could be sized at runtime via topology
++ * information but as all relevant affected CPUs have 32KiB L1D cache size
++ * there is no point in doing so.
++ */
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
++{
++	int size = PAGE_SIZE << L1D_CACHE_ORDER;
++
++	/*
++	 * This code is only executed when the the flush mode is 'cond' or
++	 * 'always'
++	 */
++	if (static_branch_likely(&vmx_l1d_flush_cond)) {
++		bool flush_l1d;
++
++		/*
++		 * Clear the per-vcpu flush bit, it gets set again
++		 * either from vcpu_run() or from one of the unsafe
++		 * VMEXIT handlers.
++		 */
++		flush_l1d = vcpu->arch.l1tf_flush_l1d;
++		vcpu->arch.l1tf_flush_l1d = false;
++
++		/*
++		 * Clear the per-cpu flush bit, it gets set again from
++		 * the interrupt handlers.
++		 */
++		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
++		kvm_clear_cpu_l1tf_flush_l1d();
++
++		if (!flush_l1d)
++			return;
++	}
++
++	vcpu->stat.l1d_flush++;
++
++	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
++		return;
++	}
++
++	asm volatile(
++		/* First ensure the pages are in the TLB */
++		"xorl	%%eax, %%eax\n"
++		".Lpopulate_tlb:\n\t"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$4096, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lpopulate_tlb\n\t"
++		"xorl	%%eax, %%eax\n\t"
++		"cpuid\n\t"
++		/* Now fill the cache */
++		"xorl	%%eax, %%eax\n"
++		".Lfill_cache:\n"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$64, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lfill_cache\n\t"
++		"lfence\n"
++		:: [flush_pages] "r" (vmx_l1d_flush_pages),
++		    [size] "r" (size)
++		: "eax", "ebx", "ecx", "edx");
++}
++
+ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+ {
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+@@ -9949,7 +10190,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+ 			clear_atomic_switch_msr(vmx, msrs[i].msr);
+ 		else
+ 			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
+-					msrs[i].host);
++					msrs[i].host, false);
+ }
+ 
+ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
+@@ -10044,6 +10285,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+ 		(unsigned long)&current_evmcs->host_rsp : 0;
+ 
++	if (static_branch_unlikely(&vmx_l1d_should_flush))
++		vmx_l1d_flush(vcpu);
++
+ 	asm(
+ 		/* Store host registers */
+ 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
+@@ -10403,10 +10647,37 @@ free_vcpu:
+ 	return ERR_PTR(err);
+ }
+ 
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+ 	if (!ple_gap)
+ 		kvm->arch.pause_in_guest = true;
++
++	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++			/* 'I explicitly don't care' is set */
++			break;
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++		case L1TF_MITIGATION_FULL:
++			/*
++			 * Warn upon starting the first VM in a potentially
++			 * insecure environment.
++			 */
++			if (cpu_smt_control == CPU_SMT_ENABLED)
++				pr_warn_once(L1TF_MSG_SMT);
++			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
++				pr_warn_once(L1TF_MSG_L1D);
++			break;
++		case L1TF_MITIGATION_FULL_FORCE:
++			/* Flush is enforced */
++			break;
++		}
++	}
+ 	return 0;
+ }
+ 
+@@ -11260,10 +11531,10 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 	 * Set the MSR load/store lists to match L0's settings.
+ 	 */
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+@@ -11899,6 +12170,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ 		return ret;
+ 	}
+ 
++	/* Hide L1D cache contents from the nested guest.  */
++	vmx->vcpu.arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * If we're entering a halted L2 vcpu and the L2 vcpu won't be woken
+ 	 * by event injection, halt vcpu.
+@@ -12419,8 +12693,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 	vmx_segment_cache_clear(vmx);
+ 
+ 	/* Update any VMCS fields that might have changed while L2 ran */
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+ 	vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
+ 	if (vmx->hv_deadline_tsc == -1)
+ 		vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL,
+@@ -13137,6 +13411,51 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ 	.enable_smi_window = enable_smi_window,
+ };
+ 
++static void vmx_cleanup_l1d_flush(void)
++{
++	if (vmx_l1d_flush_pages) {
++		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
++		vmx_l1d_flush_pages = NULL;
++	}
++	/* Restore state so sysfs ignores VMX */
++	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++}
++
++static void vmx_exit(void)
++{
++#ifdef CONFIG_KEXEC_CORE
++	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
++	synchronize_rcu();
++#endif
++
++	kvm_exit();
++
++#if IS_ENABLED(CONFIG_HYPERV)
++	if (static_branch_unlikely(&enable_evmcs)) {
++		int cpu;
++		struct hv_vp_assist_page *vp_ap;
++		/*
++		 * Reset everything to support using non-enlightened VMCS
++		 * access later (e.g. when we reload the module with
++		 * enlightened_vmcs=0)
++		 */
++		for_each_online_cpu(cpu) {
++			vp_ap =	hv_get_vp_assist_page(cpu);
++
++			if (!vp_ap)
++				continue;
++
++			vp_ap->current_nested_vmcs = 0;
++			vp_ap->enlighten_vmentry = 0;
++		}
++
++		static_branch_disable(&enable_evmcs);
++	}
++#endif
++	vmx_cleanup_l1d_flush();
++}
++module_exit(vmx_exit);
++
+ static int __init vmx_init(void)
+ {
+ 	int r;
+@@ -13171,10 +13490,25 @@ static int __init vmx_init(void)
+ #endif
+ 
+ 	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
+-                     __alignof__(struct vcpu_vmx), THIS_MODULE);
++		     __alignof__(struct vcpu_vmx), THIS_MODULE);
+ 	if (r)
+ 		return r;
+ 
++	/*
++	 * Must be called after kvm_init() so enable_ept is properly set
++	 * up. Hand the parameter mitigation value in which was stored in
++	 * the pre module init parser. If no parameter was given, it will
++	 * contain 'auto' which will be turned into the default 'cond'
++	 * mitigation mode.
++	 */
++	if (boot_cpu_has(X86_BUG_L1TF)) {
++		r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
++		if (r) {
++			vmx_exit();
++			return r;
++		}
++	}
++
+ #ifdef CONFIG_KEXEC_CORE
+ 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ 			   crash_vmclear_local_loaded_vmcss);
+@@ -13183,39 +13517,4 @@ static int __init vmx_init(void)
+ 
+ 	return 0;
+ }
+-
+-static void __exit vmx_exit(void)
+-{
+-#ifdef CONFIG_KEXEC_CORE
+-	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+-	synchronize_rcu();
+-#endif
+-
+-	kvm_exit();
+-
+-#if IS_ENABLED(CONFIG_HYPERV)
+-	if (static_branch_unlikely(&enable_evmcs)) {
+-		int cpu;
+-		struct hv_vp_assist_page *vp_ap;
+-		/*
+-		 * Reset everything to support using non-enlightened VMCS
+-		 * access later (e.g. when we reload the module with
+-		 * enlightened_vmcs=0)
+-		 */
+-		for_each_online_cpu(cpu) {
+-			vp_ap =	hv_get_vp_assist_page(cpu);
+-
+-			if (!vp_ap)
+-				continue;
+-
+-			vp_ap->current_nested_vmcs = 0;
+-			vp_ap->enlighten_vmentry = 0;
+-		}
+-
+-		static_branch_disable(&enable_evmcs);
+-	}
+-#endif
+-}
+-
+-module_init(vmx_init)
+-module_exit(vmx_exit)
++module_init(vmx_init);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 2b812b3c5088..a5caa5e5480c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -195,6 +195,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	{ "irq_injections", VCPU_STAT(irq_injections) },
+ 	{ "nmi_injections", VCPU_STAT(nmi_injections) },
+ 	{ "req_event", VCPU_STAT(req_event) },
++	{ "l1d_flush", VCPU_STAT(l1d_flush) },
+ 	{ "mmu_shadow_zapped", VM_STAT(mmu_shadow_zapped) },
+ 	{ "mmu_pte_write", VM_STAT(mmu_pte_write) },
+ 	{ "mmu_pte_updated", VM_STAT(mmu_pte_updated) },
+@@ -1102,11 +1103,35 @@ static u32 msr_based_features[] = {
+ 
+ static unsigned int num_msr_based_features;
+ 
++u64 kvm_get_arch_capabilities(void)
++{
++	u64 data;
++
++	rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
++
++	/*
++	 * If we're doing cache flushes (either "always" or "cond")
++	 * we will do one whenever the guest does a vmlaunch/vmresume.
++	 * If an outer hypervisor is doing the cache flush for us
++	 * (VMENTER_L1D_FLUSH_NESTED_VM), we can safely pass that
++	 * capability to the guest too, and if EPT is disabled we're not
++	 * vulnerable.  Overall, only VMENTER_L1D_FLUSH_NEVER will
++	 * require a nested hypervisor to do a flush of its own.
++	 */
++	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
++		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
++
++	return data;
++}
++EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
++
+ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
+ {
+ 	switch (msr->index) {
+-	case MSR_IA32_UCODE_REV:
+ 	case MSR_IA32_ARCH_CAPABILITIES:
++		msr->data = kvm_get_arch_capabilities();
++		break;
++	case MSR_IA32_UCODE_REV:
+ 		rdmsrl_safe(msr->index, &msr->data);
+ 		break;
+ 	default:
+@@ -4876,6 +4901,9 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
+ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
+ 				unsigned int bytes, struct x86_exception *exception)
+ {
++	/* kvm_write_guest_virt_system can pull in tons of pages. */
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+ 					   PFERR_WRITE_MASK, exception);
+ }
+@@ -6052,6 +6080,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
+ 	bool writeback = true;
+ 	bool write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * Clear write_fault_to_shadow_pgtable here to ensure it is
+ 	 * never reused.
+@@ -7581,6 +7611,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+ 	struct kvm *kvm = vcpu->kvm;
+ 
+ 	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
++	vcpu->arch.l1tf_flush_l1d = true;
+ 
+ 	for (;;) {
+ 		if (kvm_vcpu_running(vcpu)) {
+@@ -8700,6 +8731,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+ {
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	kvm_x86_ops->sched_in(vcpu, cpu);
+ }
+ 
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index cee58a972cb2..83241eb71cd4 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -4,6 +4,8 @@
+ #include <linux/swap.h>
+ #include <linux/memblock.h>
+ #include <linux/bootmem.h>	/* for max_low_pfn */
++#include <linux/swapfile.h>
++#include <linux/swapops.h>
+ 
+ #include <asm/set_memory.h>
+ #include <asm/e820/api.h>
+@@ -880,3 +882,26 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+ 	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+ 	__pte2cachemode_tbl[entry] = cache;
+ }
++
++#ifdef CONFIG_SWAP
++unsigned long max_swapfile_size(void)
++{
++	unsigned long pages;
++
++	pages = generic_max_swapfile_size();
++
++	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
++		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
++		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		/*
++		 * We encode swap offsets also with 3 bits below those for pfn
++		 * which makes the usable limit higher.
++		 */
++#if CONFIG_PGTABLE_LEVELS > 2
++		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
++#endif
++		pages = min_t(unsigned long, l1tf_limit, pages);
++	}
++	return pages;
++}
++#endif
+diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
+index 7c8686709636..79eb55ce69a9 100644
+--- a/arch/x86/mm/kmmio.c
++++ b/arch/x86/mm/kmmio.c
+@@ -126,24 +126,29 @@ static struct kmmio_fault_page *get_kmmio_fault_page(unsigned long addr)
+ 
+ static void clear_pmd_presence(pmd_t *pmd, bool clear, pmdval_t *old)
+ {
++	pmd_t new_pmd;
+ 	pmdval_t v = pmd_val(*pmd);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pmd(pmd, __pmd(v));
++		*old = v;
++		new_pmd = pmd_mknotpresent(*pmd);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		new_pmd = __pmd(*old);
++	}
++	set_pmd(pmd, new_pmd);
+ }
+ 
+ static void clear_pte_presence(pte_t *pte, bool clear, pteval_t *old)
+ {
+ 	pteval_t v = pte_val(*pte);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pte_atomic(pte, __pte(v));
++		*old = v;
++		/* Nothing should care about address */
++		pte_clear(&init_mm, 0, pte);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		set_pte_atomic(pte, __pte(*old));
++	}
+ }
+ 
+ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 48c591251600..f40ab8185d94 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -240,3 +240,24 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t count)
+ 
+ 	return phys_addr_valid(addr + count - 1);
+ }
++
++/*
++ * Only allow root to set high MMIO mappings to PROT_NONE.
++ * This prevents an unpriv. user to set them to PROT_NONE and invert
++ * them, then pointing to valid memory for L1TF speculation.
++ *
++ * Note: for locked down kernels may want to disable the root override.
++ */
++bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return true;
++	if (!__pte_needs_invert(pgprot_val(prot)))
++		return true;
++	/* If it's real memory always allow */
++	if (pfn_valid(pfn))
++		return true;
++	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++		return false;
++	return true;
++}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 3bded76e8d5c..7bb6f65c79de 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1014,8 +1014,8 @@ static long populate_pmd(struct cpa_data *cpa,
+ 
+ 		pmd = pmd_offset(pud, start);
+ 
+-		set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pmd_pgprot)));
++		set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn,
++					canon_pgprot(pmd_pgprot))));
+ 
+ 		start	  += PMD_SIZE;
+ 		cpa->pfn  += PMD_SIZE >> PAGE_SHIFT;
+@@ -1087,8 +1087,8 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
+ 	 * Map everything starting from the Gb boundary, possibly with 1G pages
+ 	 */
+ 	while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) {
+-		set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pud_pgprot)));
++		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn,
++				   canon_pgprot(pud_pgprot))));
+ 
+ 		start	  += PUD_SIZE;
+ 		cpa->pfn  += PUD_SIZE >> PAGE_SHIFT;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 4d418e705878..fb752d9a3ce9 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -45,6 +45,7 @@
+ #include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+ #include <asm/desc.h>
++#include <asm/sections.h>
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Kernel/User page tables isolation: " fmt
+diff --git a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+index 4f5fa65a1011..2acd6be13375 100644
+--- a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
++++ b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+@@ -18,6 +18,7 @@
+ #include <asm/intel-mid.h>
+ #include <asm/intel_scu_ipc.h>
+ #include <asm/io_apic.h>
++#include <asm/hw_irq.h>
+ 
+ #define TANGIER_EXT_TIMER0_MSI 12
+ 
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index ca446da48fd2..3866b96a7ee7 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -1285,6 +1285,7 @@ void uv_bau_message_interrupt(struct pt_regs *regs)
+ 	struct msg_desc msgdesc;
+ 
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ 	time_start = get_cycles();
+ 
+ 	bcp = &per_cpu(bau_control, smp_processor_id());
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 3b5318505c69..2eeddd814653 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -3,6 +3,7 @@
+ #endif
+ #include <linux/cpu.h>
+ #include <linux/kexec.h>
++#include <linux/slab.h>
+ 
+ #include <xen/features.h>
+ #include <xen/page.h>
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 30cc9c877ebb..eb9443d5bae1 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -540,16 +540,24 @@ ssize_t __weak cpu_show_spec_store_bypass(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
++static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+ 	&dev_attr_spectre_v1.attr,
+ 	&dev_attr_spectre_v2.attr,
+ 	&dev_attr_spec_store_bypass.attr,
++	&dev_attr_l1tf.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index dc87797db500..b50b74053664 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -4,6 +4,7 @@
+  * Copyright © 2017-2018 Intel Corporation
+  */
+ 
++#include <linux/irq.h>
+ #include "i915_pmu.h"
+ #include "intel_ringbuffer.h"
+ #include "i915_drv.h"
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index 6269750e2b54..b4941101f21a 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -62,6 +62,7 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/device.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ 
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index f6325f1a89e8..d4d4a55f09f8 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -45,6 +45,7 @@
+ #include <linux/irqdomain.h>
+ #include <asm/irqdomain.h>
+ #include <asm/apic.h>
++#include <linux/irq.h>
+ #include <linux/msi.h>
+ #include <linux/hyperv.h>
+ #include <linux/refcount.h>
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index f59639afaa39..26ca0276b503 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1083,6 +1083,18 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
+ static inline void init_espfix_bsp(void) { }
+ #endif
+ 
++#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
++static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	return true;
++}
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return false;
++}
++#endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */
++
+ #endif /* !__ASSEMBLY__ */
+ 
+ #ifndef io_remap_pfn_range
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 3233fbe23594..45789a892c41 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -55,6 +55,8 @@ extern ssize_t cpu_show_spectre_v2(struct device *dev,
+ 				   struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ 					  struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -166,4 +168,23 @@ void cpuhp_report_idle_dead(void);
+ static inline void cpuhp_report_idle_dead(void) { }
+ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+ 
++enum cpuhp_smt_control {
++	CPU_SMT_ENABLED,
++	CPU_SMT_DISABLED,
++	CPU_SMT_FORCE_DISABLED,
++	CPU_SMT_NOT_SUPPORTED,
++};
++
++#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
++extern enum cpuhp_smt_control cpu_smt_control;
++extern void cpu_smt_disable(bool force);
++extern void cpu_smt_check_topology_early(void);
++extern void cpu_smt_check_topology(void);
++#else
++# define cpu_smt_control		(CPU_SMT_ENABLED)
++static inline void cpu_smt_disable(bool force) { }
++static inline void cpu_smt_check_topology_early(void) { }
++static inline void cpu_smt_check_topology(void) { }
++#endif
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
+index 06bd7b096167..e06febf62978 100644
+--- a/include/linux/swapfile.h
++++ b/include/linux/swapfile.h
+@@ -10,5 +10,7 @@ extern spinlock_t swap_lock;
+ extern struct plist_head swap_active_head;
+ extern struct swap_info_struct *swap_info[];
+ extern int try_to_unuse(unsigned int, bool, unsigned long);
++extern unsigned long generic_max_swapfile_size(void);
++extern unsigned long max_swapfile_size(void);
+ 
+ #endif /* _LINUX_SWAPFILE_H */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2f8f338e77cf..f80afc674f02 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -60,6 +60,7 @@ struct cpuhp_cpu_state {
+ 	bool			rollback;
+ 	bool			single;
+ 	bool			bringup;
++	bool			booted_once;
+ 	struct hlist_node	*node;
+ 	struct hlist_node	*last;
+ 	enum cpuhp_state	cb_state;
+@@ -342,6 +343,85 @@ void cpu_hotplug_enable(void)
+ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
+ #endif	/* CONFIG_HOTPLUG_CPU */
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
++EXPORT_SYMBOL_GPL(cpu_smt_control);
++
++static bool cpu_smt_available __read_mostly;
++
++void __init cpu_smt_disable(bool force)
++{
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
++		cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return;
++
++	if (force) {
++		pr_info("SMT: Force disabled\n");
++		cpu_smt_control = CPU_SMT_FORCE_DISABLED;
++	} else {
++		cpu_smt_control = CPU_SMT_DISABLED;
++	}
++}
++
++/*
++ * The decision whether SMT is supported can only be done after the full
++ * CPU identification. Called from architecture code before non boot CPUs
++ * are brought up.
++ */
++void __init cpu_smt_check_topology_early(void)
++{
++	if (!topology_smt_supported())
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++/*
++ * If SMT was disabled by BIOS, detect it here, after the CPUs have been
++ * brought online. This ensures the smt/l1tf sysfs entries are consistent
++ * with reality. cpu_smt_available is set to true during the bringup of non
++ * boot CPUs when a SMT sibling is detected. Note, this may overwrite
++ * cpu_smt_control's previous setting.
++ */
++void __init cpu_smt_check_topology(void)
++{
++	if (!cpu_smt_available)
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++static int __init smt_cmdline_disable(char *str)
++{
++	cpu_smt_disable(str && !strcmp(str, "force"));
++	return 0;
++}
++early_param("nosmt", smt_cmdline_disable);
++
++static inline bool cpu_smt_allowed(unsigned int cpu)
++{
++	if (topology_is_primary_thread(cpu))
++		return true;
++
++	/*
++	 * If the CPU is not a 'primary' thread and the booted_once bit is
++	 * set then the processor has SMT support. Store this information
++	 * for the late check of SMT support in cpu_smt_check_topology().
++	 */
++	if (per_cpu(cpuhp_state, cpu).booted_once)
++		cpu_smt_available = true;
++
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		return true;
++
++	/*
++	 * On x86 it's required to boot all logical CPUs at least once so
++	 * that the init code can get a chance to set CR4.MCE on each
++	 * CPU. Otherwise, a broadacasted MCE observing CR4.MCE=0b on any
++	 * core will shutdown the machine.
++	 */
++	return !per_cpu(cpuhp_state, cpu).booted_once;
++}
++#else
++static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
++#endif
++
+ static inline enum cpuhp_state
+ cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ {
+@@ -422,6 +502,16 @@ static int bringup_wait_for_ap(unsigned int cpu)
+ 	stop_machine_unpark(cpu);
+ 	kthread_unpark(st->thread);
+ 
++	/*
++	 * SMT soft disabling on X86 requires to bring the CPU out of the
++	 * BIOS 'wait for SIPI' state in order to set the CR4.MCE bit.  The
++	 * CPU marked itself as booted_once in cpu_notify_starting() so the
++	 * cpu_smt_allowed() check will now return false if this is not the
++	 * primary sibling.
++	 */
++	if (!cpu_smt_allowed(cpu))
++		return -ECANCELED;
++
+ 	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ 		return 0;
+ 
+@@ -754,7 +844,6 @@ static int takedown_cpu(unsigned int cpu)
+ 
+ 	/* Park the smpboot threads */
+ 	kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
+-	smpboot_park_threads(cpu);
+ 
+ 	/*
+ 	 * Prevent irq alloc/free while the dying cpu reorganizes the
+@@ -907,20 +996,19 @@ out:
+ 	return ret;
+ }
+ 
++static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
++{
++	if (cpu_hotplug_disabled)
++		return -EBUSY;
++	return _cpu_down(cpu, 0, target);
++}
++
+ static int do_cpu_down(unsigned int cpu, enum cpuhp_state target)
+ {
+ 	int err;
+ 
+ 	cpu_maps_update_begin();
+-
+-	if (cpu_hotplug_disabled) {
+-		err = -EBUSY;
+-		goto out;
+-	}
+-
+-	err = _cpu_down(cpu, 0, target);
+-
+-out:
++	err = cpu_down_maps_locked(cpu, target);
+ 	cpu_maps_update_done();
+ 	return err;
+ }
+@@ -949,6 +1037,7 @@ void notify_cpu_starting(unsigned int cpu)
+ 	int ret;
+ 
+ 	rcu_cpu_starting(cpu);	/* Enables RCU usage on this CPU. */
++	st->booted_once = true;
+ 	while (st->state < target) {
+ 		st->state++;
+ 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+@@ -1058,6 +1147,10 @@ static int do_cpu_up(unsigned int cpu, enum cpuhp_state target)
+ 		err = -EBUSY;
+ 		goto out;
+ 	}
++	if (!cpu_smt_allowed(cpu)) {
++		err = -EPERM;
++		goto out;
++	}
+ 
+ 	err = _cpu_up(cpu, 0, target);
+ out:
+@@ -1332,7 +1425,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 	[CPUHP_AP_SMPBOOT_THREADS] = {
+ 		.name			= "smpboot/threads:online",
+ 		.startup.single		= smpboot_unpark_threads,
+-		.teardown.single	= NULL,
++		.teardown.single	= smpboot_park_threads,
+ 	},
+ 	[CPUHP_AP_IRQ_AFFINITY_ONLINE] = {
+ 		.name			= "irq/affinity:online",
+@@ -1906,10 +1999,172 @@ static const struct attribute_group cpuhp_cpu_root_attr_group = {
+ 	NULL
+ };
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++
++static const char *smt_states[] = {
++	[CPU_SMT_ENABLED]		= "on",
++	[CPU_SMT_DISABLED]		= "off",
++	[CPU_SMT_FORCE_DISABLED]	= "forceoff",
++	[CPU_SMT_NOT_SUPPORTED]		= "notsupported",
++};
++
++static ssize_t
++show_smt_control(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return snprintf(buf, PAGE_SIZE - 2, "%s\n", smt_states[cpu_smt_control]);
++}
++
++static void cpuhp_offline_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = true;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
++}
++
++static void cpuhp_online_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = false;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
++}
++
++static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	for_each_online_cpu(cpu) {
++		if (topology_is_primary_thread(cpu))
++			continue;
++		ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
++		if (ret)
++			break;
++		/*
++		 * As this needs to hold the cpu maps lock it's impossible
++		 * to call device_offline() because that ends up calling
++		 * cpu_down() which takes cpu maps lock. cpu maps lock
++		 * needs to be held as this might race against in kernel
++		 * abusers of the hotplug machinery (thermal management).
++		 *
++		 * So nothing would update device:offline state. That would
++		 * leave the sysfs entry stale and prevent onlining after
++		 * smt control has been changed to 'off' again. This is
++		 * called under the sysfs hotplug lock, so it is properly
++		 * serialized against the regular offline usage.
++		 */
++		cpuhp_offline_cpu_device(cpu);
++	}
++	if (!ret)
++		cpu_smt_control = ctrlval;
++	cpu_maps_update_done();
++	return ret;
++}
++
++static int cpuhp_smt_enable(void)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	cpu_smt_control = CPU_SMT_ENABLED;
++	for_each_present_cpu(cpu) {
++		/* Skip online CPUs and CPUs on offline nodes */
++		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
++			continue;
++		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
++		if (ret)
++			break;
++		/* See comment in cpuhp_smt_disable() */
++		cpuhp_online_cpu_device(cpu);
++	}
++	cpu_maps_update_done();
++	return ret;
++}
++
++static ssize_t
++store_smt_control(struct device *dev, struct device_attribute *attr,
++		  const char *buf, size_t count)
++{
++	int ctrlval, ret;
++
++	if (sysfs_streq(buf, "on"))
++		ctrlval = CPU_SMT_ENABLED;
++	else if (sysfs_streq(buf, "off"))
++		ctrlval = CPU_SMT_DISABLED;
++	else if (sysfs_streq(buf, "forceoff"))
++		ctrlval = CPU_SMT_FORCE_DISABLED;
++	else
++		return -EINVAL;
++
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED)
++		return -EPERM;
++
++	if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return -ENODEV;
++
++	ret = lock_device_hotplug_sysfs();
++	if (ret)
++		return ret;
++
++	if (ctrlval != cpu_smt_control) {
++		switch (ctrlval) {
++		case CPU_SMT_ENABLED:
++			ret = cpuhp_smt_enable();
++			break;
++		case CPU_SMT_DISABLED:
++		case CPU_SMT_FORCE_DISABLED:
++			ret = cpuhp_smt_disable(ctrlval);
++			break;
++		}
++	}
++
++	unlock_device_hotplug();
++	return ret ? ret : count;
++}
++static DEVICE_ATTR(control, 0644, show_smt_control, store_smt_control);
++
++static ssize_t
++show_smt_active(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	bool active = topology_max_smt_threads() > 1;
++
++	return snprintf(buf, PAGE_SIZE - 2, "%d\n", active);
++}
++static DEVICE_ATTR(active, 0444, show_smt_active, NULL);
++
++static struct attribute *cpuhp_smt_attrs[] = {
++	&dev_attr_control.attr,
++	&dev_attr_active.attr,
++	NULL
++};
++
++static const struct attribute_group cpuhp_smt_attr_group = {
++	.attrs = cpuhp_smt_attrs,
++	.name = "smt",
++	NULL
++};
++
++static int __init cpu_smt_state_init(void)
++{
++	return sysfs_create_group(&cpu_subsys.dev_root->kobj,
++				  &cpuhp_smt_attr_group);
++}
++
++#else
++static inline int cpu_smt_state_init(void) { return 0; }
++#endif
++
+ static int __init cpuhp_sysfs_init(void)
+ {
+ 	int cpu, ret;
+ 
++	ret = cpu_smt_state_init();
++	if (ret)
++		return ret;
++
+ 	ret = sysfs_create_group(&cpu_subsys.dev_root->kobj,
+ 				 &cpuhp_cpu_root_attr_group);
+ 	if (ret)
+@@ -2012,5 +2267,8 @@ void __init boot_cpu_init(void)
+  */
+ void __init boot_cpu_hotplug_init(void)
+ {
+-	per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE;
++#ifdef CONFIG_SMP
++	this_cpu_write(cpuhp_state.booted_once, true);
++#endif
++	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index fe365c9a08e9..5ba96d9ddbde 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5774,6 +5774,18 @@ int sched_cpu_activate(unsigned int cpu)
+ 	struct rq *rq = cpu_rq(cpu);
+ 	struct rq_flags rf;
+ 
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * The sched_smt_present static key needs to be evaluated on every
++	 * hotplug event because at boot time SMT might be disabled when
++	 * the number of booted CPUs is limited.
++	 *
++	 * If then later a sibling gets hotplugged, then the key would stay
++	 * off and SMT scheduling would never be functional.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
++		static_branch_enable_cpuslocked(&sched_smt_present);
++#endif
+ 	set_cpu_active(cpu, true);
+ 
+ 	if (sched_smp_initialized) {
+@@ -5871,22 +5883,6 @@ int sched_cpu_dying(unsigned int cpu)
+ }
+ #endif
+ 
+-#ifdef CONFIG_SCHED_SMT
+-DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+-
+-static void sched_init_smt(void)
+-{
+-	/*
+-	 * We've enumerated all CPUs and will assume that if any CPU
+-	 * has SMT siblings, CPU0 will too.
+-	 */
+-	if (cpumask_weight(cpu_smt_mask(0)) > 1)
+-		static_branch_enable(&sched_smt_present);
+-}
+-#else
+-static inline void sched_init_smt(void) { }
+-#endif
+-
+ void __init sched_init_smp(void)
+ {
+ 	sched_init_numa();
+@@ -5908,8 +5904,6 @@ void __init sched_init_smp(void)
+ 	init_sched_rt_class();
+ 	init_sched_dl_class();
+ 
+-	sched_init_smt();
+-
+ 	sched_smp_initialized = true;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2f0a0be4d344..9c219f7b0970 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6237,6 +6237,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
+ }
+ 
+ #ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+ 
+ static inline void set_idle_cores(int cpu, int val)
+ {
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 084c8b3a2681..d86eec5f51c1 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -584,6 +584,8 @@ void __init smp_init(void)
+ 		num_nodes, (num_nodes > 1 ? "s" : ""),
+ 		num_cpus,  (num_cpus  > 1 ? "s" : ""));
+ 
++	/* Final decision about SMT support */
++	cpu_smt_check_topology();
+ 	/* Any cleanup work */
+ 	smp_cpus_done(setup_max_cpus);
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index c5e87a3a82ba..0e356dd923c2 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1884,6 +1884,9 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+ 	if (addr < vma->vm_start || addr >= vma->vm_end)
+ 		return -EFAULT;
+ 
++	if (!pfn_modify_allowed(pfn, pgprot))
++		return -EACCES;
++
+ 	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
+ 
+ 	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+@@ -1919,6 +1922,9 @@ static int __vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
++	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
++		return -EACCES;
++
+ 	/*
+ 	 * If we don't have pte special, then we have to use the pfn_valid()
+ 	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+@@ -1980,6 +1986,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ {
+ 	pte_t *pte;
+ 	spinlock_t *ptl;
++	int err = 0;
+ 
+ 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ 	if (!pte)
+@@ -1987,12 +1994,16 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ 	arch_enter_lazy_mmu_mode();
+ 	do {
+ 		BUG_ON(!pte_none(*pte));
++		if (!pfn_modify_allowed(pfn, prot)) {
++			err = -EACCES;
++			break;
++		}
+ 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
+ 		pfn++;
+ 	} while (pte++, addr += PAGE_SIZE, addr != end);
+ 	arch_leave_lazy_mmu_mode();
+ 	pte_unmap_unlock(pte - 1, ptl);
+-	return 0;
++	return err;
+ }
+ 
+ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+@@ -2001,6 +2012,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ {
+ 	pmd_t *pmd;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pmd = pmd_alloc(mm, pud, addr);
+@@ -2009,9 +2021,10 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ 	VM_BUG_ON(pmd_trans_huge(*pmd));
+ 	do {
+ 		next = pmd_addr_end(addr, end);
+-		if (remap_pte_range(mm, pmd, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pte_range(mm, pmd, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pmd++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2022,6 +2035,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ {
+ 	pud_t *pud;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pud = pud_alloc(mm, p4d, addr);
+@@ -2029,9 +2043,10 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ 		return -ENOMEM;
+ 	do {
+ 		next = pud_addr_end(addr, end);
+-		if (remap_pmd_range(mm, pud, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pmd_range(mm, pud, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pud++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2042,6 +2057,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ {
+ 	p4d_t *p4d;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	p4d = p4d_alloc(mm, pgd, addr);
+@@ -2049,9 +2065,10 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ 		return -ENOMEM;
+ 	do {
+ 		next = p4d_addr_end(addr, end);
+-		if (remap_pud_range(mm, p4d, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pud_range(mm, p4d, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (p4d++, addr = next, addr != end);
+ 	return 0;
+ }
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 625608bc8962..6d331620b9e5 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -306,6 +306,42 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+ 	return pages;
+ }
+ 
++static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
++			       unsigned long next, struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
++				   unsigned long addr, unsigned long next,
++				   struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_test(unsigned long addr, unsigned long next,
++			  struct mm_walk *walk)
++{
++	return 0;
++}
++
++static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
++			   unsigned long end, unsigned long newflags)
++{
++	pgprot_t new_pgprot = vm_get_page_prot(newflags);
++	struct mm_walk prot_none_walk = {
++		.pte_entry = prot_none_pte_entry,
++		.hugetlb_entry = prot_none_hugetlb_entry,
++		.test_walk = prot_none_test,
++		.mm = current->mm,
++		.private = &new_pgprot,
++	};
++
++	return walk_page_range(start, end, &prot_none_walk);
++}
++
+ int
+ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 	unsigned long start, unsigned long end, unsigned long newflags)
+@@ -323,6 +359,19 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * Do PROT_NONE PFN permission checks here when we can still
++	 * bail out without undoing a lot of state. This is a rather
++	 * uncommon case, so doesn't need to be very optimized.
++	 */
++	if (arch_has_pfn_modify_check() &&
++	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
++	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
++		error = prot_none_walk(vma, start, end, newflags);
++		if (error)
++			return error;
++	}
++
+ 	/*
+ 	 * If we make a private mapping writable we increase our commit;
+ 	 * but (without finer accounting) cannot reduce our commit if we
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 2cc2972eedaf..18185ae4f223 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2909,6 +2909,35 @@ static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
+ 	return 0;
+ }
+ 
++
++/*
++ * Find out how many pages are allowed for a single swap device. There
++ * are two limiting factors:
++ * 1) the number of bits for the swap offset in the swp_entry_t type, and
++ * 2) the number of bits in the swap pte, as defined by the different
++ * architectures.
++ *
++ * In order to find the largest possible bit mask, a swap entry with
++ * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
++ * decoded to a swp_entry_t again, and finally the swap offset is
++ * extracted.
++ *
++ * This will mask all the bits from the initial ~0UL mask that can't
++ * be encoded in either the swp_entry_t or the architecture definition
++ * of a swap pte.
++ */
++unsigned long generic_max_swapfile_size(void)
++{
++	return swp_offset(pte_to_swp_entry(
++			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++}
++
++/* Can be overridden by an architecture for additional checks. */
++__weak unsigned long max_swapfile_size(void)
++{
++	return generic_max_swapfile_size();
++}
++
+ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 					union swap_header *swap_header,
+ 					struct inode *inode)
+@@ -2944,22 +2973,7 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 	p->cluster_next = 1;
+ 	p->cluster_nr = 0;
+ 
+-	/*
+-	 * Find out how many pages are allowed for a single swap
+-	 * device. There are two limiting factors: 1) the number
+-	 * of bits for the swap offset in the swp_entry_t type, and
+-	 * 2) the number of bits in the swap pte as defined by the
+-	 * different architectures. In order to find the
+-	 * largest possible bit mask, a swap entry with swap type 0
+-	 * and swap offset ~0UL is created, encoded to a swap pte,
+-	 * decoded to a swp_entry_t again, and finally the swap
+-	 * offset is extracted. This will mask all the bits from
+-	 * the initial ~0UL mask that can't be encoded in either
+-	 * the swp_entry_t or the architecture definition of a
+-	 * swap pte.
+-	 */
+-	maxpages = swp_offset(pte_to_swp_entry(
+-			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++	maxpages = max_swapfile_size();
+ 	last_page = swap_header->info.last_page;
+ 	if (!last_page) {
+ 		pr_warn("Empty swap-file\n");
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     66eba0794237f1a37a335313e9923fac9ae5c0f7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 18 18:13:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:22 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=66eba079

Linux patch 4.18.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1002_linux-4.18.3.patch | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/0000_README b/0000_README
index f72e2ad..c313d8e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.18.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.2
 
+Patch:  1002_linux-4.18.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.18.3.patch b/1002_linux-4.18.3.patch
new file mode 100644
index 0000000..62abf0a
--- /dev/null
+++ b/1002_linux-4.18.3.patch
@@ -0,0 +1,37 @@
+diff --git a/Makefile b/Makefile
+index fd409a0fd4e1..e2bd815f24eb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+index 44b1203ece12..a0c1525f1b6f 100644
+--- a/arch/x86/include/asm/pgtable-invert.h
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -4,9 +4,18 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
++/*
++ * A clear pte value is special, and doesn't get inverted.
++ *
++ * Note that even users that only pass a pgprot_t (rather
++ * than a full pte) won't trigger the special zero case,
++ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
++ * set. So the all zero case really is limited to just the
++ * cleared page table entry case.
++ */
+ static inline bool __pte_needs_invert(u64 val)
+ {
+-	return !(val & _PAGE_PRESENT);
++	return val && !(val & _PAGE_PRESENT);
+ }
+ 
+ /* Get a mask to xor with the page table entry to get the correct pfn. */


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     eefff5dd2b26e09f2a836b7f8f15097af356e090
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 23:21:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:17 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eefff5dd

Additional fixes for Gentoo distro patch.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 5555b8a..43bae55 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,12 +1,11 @@
---- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
-+++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
-@@ -8,4 +8,6 @@ config SRCARCH
- 	string
- 	option env="SRCARCH"
+--- a/Kconfig	2018-08-12 19:17:17.558649438 -0400
++++ b/Kconfig	2018-08-12 19:17:44.434897289 -0400
+@@ -10,3 +10,5 @@ comment "Compiler: $(CC_VERSION_TEXT)"
+ source "scripts/Kconfig.include"
  
-+source "distro/Kconfig"
+ source "arch/$(SRCARCH)/Kconfig"
 +
- source "arch/$SRCARCH/Kconfig"
++source "distro/Kconfig"
 --- /dev/null	2017-03-02 01:55:04.096566155 -0500
 +++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
 @@ -0,0 +1,145 @@


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     f5bcc74302a1750aa15305f7b61cd012d8162138
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 15 10:12:46 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:24 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f5bcc743

Linux patch 4.18.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1007_linux-4.18.8.patch | 6654 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6658 insertions(+)

diff --git a/0000_README b/0000_README
index f3682ca..597262e 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-4.18.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.7
 
+Patch:  1007_linux-4.18.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-4.18.8.patch b/1007_linux-4.18.8.patch
new file mode 100644
index 0000000..8a888c7
--- /dev/null
+++ b/1007_linux-4.18.8.patch
@@ -0,0 +1,6654 @@
+diff --git a/Makefile b/Makefile
+index 711b04d00e49..0d73431f66cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/mach-rockchip/Kconfig b/arch/arm/mach-rockchip/Kconfig
+index fafd3d7f9f8c..8ca926522026 100644
+--- a/arch/arm/mach-rockchip/Kconfig
++++ b/arch/arm/mach-rockchip/Kconfig
+@@ -17,6 +17,7 @@ config ARCH_ROCKCHIP
+ 	select ARM_GLOBAL_TIMER
+ 	select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
+ 	select ZONE_DMA if ARM_LPAE
++	select PM
+ 	help
+ 	  Support for Rockchip's Cortex-A9 Single-to-Quad-Core-SoCs
+ 	  containing the RK2928, RK30xx and RK31xx series.
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index d5aeac351fc3..21a715ad8222 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -151,6 +151,7 @@ config ARCH_ROCKCHIP
+ 	select GPIOLIB
+ 	select PINCTRL
+ 	select PINCTRL_ROCKCHIP
++	select PM
+ 	select ROCKCHIP_TIMER
+ 	help
+ 	  This enables support for the ARMv8 based Rockchip chipsets,
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 16b077801a5f..a4a718dbfec6 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -92,6 +92,7 @@ extern int stop_topology_update(void);
+ extern int prrn_is_enabled(void);
+ extern int find_and_online_cpu_nid(int cpu);
+ extern int timed_topology_update(int nsecs);
++extern void __init shared_proc_topology_init(void);
+ #else
+ static inline int start_topology_update(void)
+ {
+@@ -113,6 +114,10 @@ static inline int timed_topology_update(int nsecs)
+ {
+ 	return 0;
+ }
++
++#ifdef CONFIG_SMP
++static inline void shared_proc_topology_init(void) {}
++#endif
+ #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
+ 
+ #include <asm-generic/topology.h>
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 468653ce844c..327f6112fe8e 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -250,10 +250,17 @@ do {								\
+ 	}							\
+ } while (0)
+ 
++/*
++ * This is a type: either unsigned long, if the argument fits into
++ * that type, or otherwise unsigned long long.
++ */
++#define __long_type(x) \
++	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
++
+ #define __get_user_nocheck(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
+@@ -267,7 +274,7 @@ do {								\
+ #define __get_user_check(x, ptr, size)					\
+ ({									\
+ 	long __gu_err = -EFAULT;					\
+-	unsigned long  __gu_val = 0;					\
++	__long_type(*(ptr)) __gu_val = 0;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	might_fault();							\
+ 	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+@@ -281,7 +288,7 @@ do {								\
+ #define __get_user_nosleep(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	barrier_nospec();					\
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 285c6465324a..f817342aab8f 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1526,6 +1526,8 @@ TRAMP_REAL_BEGIN(stf_barrier_fallback)
+ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1560,12 +1562,15 @@ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	rfid
+ 
+ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1600,6 +1605,7 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	hrfid
+ 
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 4794d6b4f4d2..b3142c7b9c31 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1156,6 +1156,11 @@ void __init smp_cpus_done(unsigned int max_cpus)
+ 	if (smp_ops && smp_ops->bringup_done)
+ 		smp_ops->bringup_done();
+ 
++	/*
++	 * On a shared LPAR, associativity needs to be requested.
++	 * Hence, get numa topology before dumping cpu topology
++	 */
++	shared_proc_topology_init();
+ 	dump_numa_cpu_topology();
+ 
+ 	/*
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 0c7e05d89244..35ac5422903a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1078,7 +1078,6 @@ static int prrn_enabled;
+ static void reset_topology_timer(void);
+ static int topology_timer_secs = 1;
+ static int topology_inited;
+-static int topology_update_needed;
+ 
+ /*
+  * Change polling interval for associativity changes.
+@@ -1306,11 +1305,8 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 	struct device *dev;
+ 	int weight, new_nid, i = 0;
+ 
+-	if (!prrn_enabled && !vphn_enabled) {
+-		if (!topology_inited)
+-			topology_update_needed = 1;
++	if (!prrn_enabled && !vphn_enabled && topology_inited)
+ 		return 0;
+-	}
+ 
+ 	weight = cpumask_weight(&cpu_associativity_changes_mask);
+ 	if (!weight)
+@@ -1423,7 +1419,6 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 
+ out:
+ 	kfree(updates);
+-	topology_update_needed = 0;
+ 	return changed;
+ }
+ 
+@@ -1551,6 +1546,15 @@ int prrn_is_enabled(void)
+ 	return prrn_enabled;
+ }
+ 
++void __init shared_proc_topology_init(void)
++{
++	if (lppaca_shared_proc(get_lppaca())) {
++		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
++			    nr_cpumask_bits);
++		numa_update_cpu_topology(false);
++	}
++}
++
+ static int topology_read(struct seq_file *file, void *v)
+ {
+ 	if (vphn_enabled || prrn_enabled)
+@@ -1608,10 +1612,6 @@ static int topology_update_init(void)
+ 		return -ENOMEM;
+ 
+ 	topology_inited = 1;
+-	if (topology_update_needed)
+-		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
+-					nr_cpumask_bits);
+-
+ 	return 0;
+ }
+ device_initcall(topology_update_init);
+diff --git a/arch/powerpc/platforms/85xx/t1042rdb_diu.c b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+index 58fa3d319f1c..dac36ba82fea 100644
+--- a/arch/powerpc/platforms/85xx/t1042rdb_diu.c
++++ b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+@@ -9,8 +9,10 @@
+  * option) any later version.
+  */
+ 
++#include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
++#include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ 
+@@ -150,3 +152,5 @@ static int __init t1042rdb_diu_init(void)
+ }
+ 
+ early_initcall(t1042rdb_diu_init);
++
++MODULE_LICENSE("GPL");
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 2edc673be137..99d1152ae224 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -371,7 +371,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 		int len, error_log_length;
+ 
+ 		error_log_length = 8 + rtas_error_extended_log_length(h);
+-		len = max_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
++		len = min_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
+ 		memset(global_mce_data_buf, 0, RTAS_ERROR_LOG_MAX);
+ 		memcpy(global_mce_data_buf, h, len);
+ 		errhdr = (struct rtas_error_log *)global_mce_data_buf;
+diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c
+index eb69a5186243..280e964e1aa8 100644
+--- a/arch/powerpc/sysdev/mpic_msgr.c
++++ b/arch/powerpc/sysdev/mpic_msgr.c
+@@ -196,7 +196,7 @@ static int mpic_msgr_probe(struct platform_device *dev)
+ 
+ 	/* IO map the message register block. */
+ 	of_address_to_resource(np, 0, &rsrc);
+-	msgr_block_addr = ioremap(rsrc.start, rsrc.end - rsrc.start);
++	msgr_block_addr = ioremap(rsrc.start, resource_size(&rsrc));
+ 	if (!msgr_block_addr) {
+ 		dev_err(&dev->dev, "Failed to iomap MPIC message registers");
+ 		return -EFAULT;
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index f6561b783b61..eed1c137f618 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -52,8 +52,8 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ # Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+ quiet_cmd_vdsold = VDSOLD  $@
+-      cmd_vdsold = $(CC) $(KCFLAGS) $(call cc-option, -no-pie) -nostdlib $(SYSCFLAGS_$(@F)) \
+-                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp -lgcc && \
++      cmd_vdsold = $(CC) $(KBUILD_CFLAGS) $(call cc-option, -no-pie) -nostdlib -nostartfiles $(SYSCFLAGS_$(@F)) \
++                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp && \
+                    $(CROSS_COMPILE)objcopy \
+                            $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@
+ 
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 9f5ea9d87069..9b0216d571ad 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -404,11 +404,13 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+ 	if (copy_oldmem_kernel(nt_name, addr + sizeof(note),
+ 			       sizeof(nt_name) - 1))
+ 		return NULL;
+-	if (strcmp(nt_name, "VMCOREINFO") != 0)
++	if (strcmp(nt_name, VMCOREINFO_NOTE_NAME) != 0)
+ 		return NULL;
+ 	vmcoreinfo = kzalloc_panic(note.n_descsz);
+-	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz))
++	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz)) {
++		kfree(vmcoreinfo);
+ 		return NULL;
++	}
+ 	*size = note.n_descsz;
+ 	return vmcoreinfo;
+ }
+@@ -418,15 +420,20 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+  */
+ static void *nt_vmcoreinfo(void *ptr)
+ {
++	const char *name = VMCOREINFO_NOTE_NAME;
+ 	unsigned long size;
+ 	void *vmcoreinfo;
+ 
+ 	vmcoreinfo = os_info_old_entry(OS_INFO_VMCOREINFO, &size);
+-	if (!vmcoreinfo)
+-		vmcoreinfo = get_vmcoreinfo_old(&size);
++	if (vmcoreinfo)
++		return nt_init_name(ptr, 0, vmcoreinfo, size, name);
++
++	vmcoreinfo = get_vmcoreinfo_old(&size);
+ 	if (!vmcoreinfo)
+ 		return ptr;
+-	return nt_init_name(ptr, 0, vmcoreinfo, size, "VMCOREINFO");
++	ptr = nt_init_name(ptr, 0, vmcoreinfo, size, name);
++	kfree(vmcoreinfo);
++	return ptr;
+ }
+ 
+ /*
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index e54dda8a0363..de340e41f3b2 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -122,8 +122,7 @@ archheaders:
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-generic \
+ 	            kbuild-file=$(HOST_DIR)/include/uapi/asm/Kbuild \
+ 		    obj=$(HOST_DIR)/include/generated/uapi/asm
+-	$(Q)$(MAKE) KBUILD_SRC= ARCH=$(HEADER_ARCH) archheaders
+-
++	$(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) archheaders
+ 
+ archprepare: include/generated/user_constants.h
+ 
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index 8c7b3e5a2d01..3a17107594c8 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -148,6 +148,7 @@ enum mce_notifier_prios {
+ 	MCE_PRIO_LOWEST		= 0,
+ };
+ 
++struct notifier_block;
+ extern void mce_register_decode_chain(struct notifier_block *nb);
+ extern void mce_unregister_decode_chain(struct notifier_block *nb);
+ 
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index bb035a4cbc8c..9eeb1359ec75 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -2,6 +2,8 @@
+ #ifndef _ASM_X86_PGTABLE_3LEVEL_H
+ #define _ASM_X86_PGTABLE_3LEVEL_H
+ 
++#include <asm/atomic64_32.h>
++
+ /*
+  * Intel Physical Address Extension (PAE) Mode - three-level page
+  * tables on PPro+ CPUs.
+@@ -147,10 +149,7 @@ static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
+ {
+ 	pte_t res;
+ 
+-	/* xchg acts as a barrier before the setting of the high bits */
+-	res.pte_low = xchg(&ptep->pte_low, 0);
+-	res.pte_high = ptep->pte_high;
+-	ptep->pte_high = 0;
++	res.pte = (pteval_t)arch_atomic64_xchg((atomic64_t *)ptep, 0);
+ 
+ 	return res;
+ }
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 74392d9d51e0..a10481656d82 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1343,7 +1343,7 @@ device_initcall(init_tsc_clocksource);
+ 
+ void __init tsc_early_delay_calibrate(void)
+ {
+-	unsigned long lpj;
++	u64 lpj;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_TSC))
+ 		return;
+@@ -1355,7 +1355,7 @@ void __init tsc_early_delay_calibrate(void)
+ 	if (!tsc_khz)
+ 		return;
+ 
+-	lpj = tsc_khz * 1000;
++	lpj = (u64)tsc_khz * 1000;
+ 	do_div(lpj, HZ);
+ 	loops_per_jiffy = lpj;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index a44e568363a4..42f1ba92622a 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -221,6 +221,17 @@ static const u64 shadow_acc_track_saved_bits_mask = PT64_EPT_READABLE_MASK |
+ 						    PT64_EPT_EXECUTABLE_MASK;
+ static const u64 shadow_acc_track_saved_bits_shift = PT64_SECOND_AVAIL_BITS_SHIFT;
+ 
++/*
++ * This mask must be set on all non-zero Non-Present or Reserved SPTEs in order
++ * to guard against L1TF attacks.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
++
++/*
++ * The number of high-order 1 bits to use in the mask above.
++ */
++static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -308,9 +319,13 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
+ {
+ 	unsigned int gen = kvm_current_mmio_generation(vcpu);
+ 	u64 mask = generation_mmio_spte_mask(gen);
++	u64 gpa = gfn << PAGE_SHIFT;
+ 
+ 	access &= ACC_WRITE_MASK | ACC_USER_MASK;
+-	mask |= shadow_mmio_value | access | gfn << PAGE_SHIFT;
++	mask |= shadow_mmio_value | access;
++	mask |= gpa | shadow_nonpresent_or_rsvd_mask;
++	mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
++		<< shadow_nonpresent_or_rsvd_mask_len;
+ 
+ 	trace_mark_mmio_spte(sptep, gfn, access, gen);
+ 	mmu_spte_set(sptep, mask);
+@@ -323,8 +338,14 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
+-	return (spte & ~mask) >> PAGE_SHIFT;
++	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
++		   shadow_nonpresent_or_rsvd_mask;
++	u64 gpa = spte & ~mask;
++
++	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
++	       & shadow_nonpresent_or_rsvd_mask;
++
++	return gpa >> PAGE_SHIFT;
+ }
+ 
+ static unsigned get_mmio_spte_access(u64 spte)
+@@ -381,7 +402,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+-static void kvm_mmu_clear_all_pte_masks(void)
++static void kvm_mmu_reset_all_pte_masks(void)
+ {
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+@@ -391,6 +412,18 @@ static void kvm_mmu_clear_all_pte_masks(void)
+ 	shadow_mmio_mask = 0;
+ 	shadow_present_mask = 0;
+ 	shadow_acc_track_mask = 0;
++
++	/*
++	 * If the CPU has 46 or less physical address bits, then set an
++	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
++	 * assumed that the CPU is not vulnerable to L1TF.
++	 */
++	if (boot_cpu_data.x86_phys_bits <
++	    52 - shadow_nonpresent_or_rsvd_mask_len)
++		shadow_nonpresent_or_rsvd_mask =
++			rsvd_bits(boot_cpu_data.x86_phys_bits -
++				  shadow_nonpresent_or_rsvd_mask_len,
++				  boot_cpu_data.x86_phys_bits - 1);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+@@ -5500,7 +5533,7 @@ int kvm_mmu_module_init(void)
+ {
+ 	int ret = -ENOMEM;
+ 
+-	kvm_mmu_clear_all_pte_masks();
++	kvm_mmu_reset_all_pte_masks();
+ 
+ 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
+ 					    sizeof(struct pte_list_desc),
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index bedabcf33a3e..9869bfd0c601 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -939,17 +939,21 @@ struct vcpu_vmx {
+ 	/*
+ 	 * loaded_vmcs points to the VMCS currently used in this vcpu. For a
+ 	 * non-nested (L1) guest, it always points to vmcs01. For a nested
+-	 * guest (L2), it points to a different VMCS.
++	 * guest (L2), it points to a different VMCS.  loaded_cpu_state points
++	 * to the VMCS whose state is loaded into the CPU registers that only
++	 * need to be switched when transitioning to/from the kernel; a NULL
++	 * value indicates that host state is loaded.
+ 	 */
+ 	struct loaded_vmcs    vmcs01;
+ 	struct loaded_vmcs   *loaded_vmcs;
++	struct loaded_vmcs   *loaded_cpu_state;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+ 		struct vmx_msrs guest;
+ 		struct vmx_msrs host;
+ 	} msr_autoload;
++
+ 	struct {
+-		int           loaded;
+ 		u16           fs_sel, gs_sel, ldt_sel;
+ #ifdef CONFIG_X86_64
+ 		u16           ds_sel, es_sel;
+@@ -2750,10 +2754,11 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ #endif
+ 	int i;
+ 
+-	if (vmx->host_state.loaded)
++	if (vmx->loaded_cpu_state)
+ 		return;
+ 
+-	vmx->host_state.loaded = 1;
++	vmx->loaded_cpu_state = vmx->loaded_vmcs;
++
+ 	/*
+ 	 * Set host fs and gs selectors.  Unfortunately, 22.2.3 does not
+ 	 * allow segment selectors with cpl > 0 or ti == 1.
+@@ -2815,11 +2820,14 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ 
+ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
+ {
+-	if (!vmx->host_state.loaded)
++	if (!vmx->loaded_cpu_state)
+ 		return;
+ 
++	WARN_ON_ONCE(vmx->loaded_cpu_state != vmx->loaded_vmcs);
++
+ 	++vmx->vcpu.stat.host_state_reload;
+-	vmx->host_state.loaded = 0;
++	vmx->loaded_cpu_state = NULL;
++
+ #ifdef CONFIG_X86_64
+ 	if (is_long_mode(&vmx->vcpu))
+ 		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+@@ -8115,7 +8123,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 
+ 	/* CPL=0 must be checked manually. */
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 1;
+ 	}
+ 
+@@ -8179,7 +8187,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+ {
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 0;
+ 	}
+ 
+@@ -10517,8 +10525,8 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
+ 		return;
+ 
+ 	cpu = get_cpu();
+-	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_put(vcpu);
++	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_load(vcpu, cpu);
+ 	put_cpu();
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 24c84aa87049..94cd63081471 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6506,20 +6506,22 @@ static void kvm_set_mmio_spte_mask(void)
+ 	 * Set the reserved bits and the present bit of an paging-structure
+ 	 * entry to generate page fault with PFER.RSV = 1.
+ 	 */
+-	 /* Mask the reserved physical address bits. */
+-	mask = rsvd_bits(maxphyaddr, 51);
++
++	/*
++	 * Mask the uppermost physical address bit, which would be reserved as
++	 * long as the supported physical address width is less than 52.
++	 */
++	mask = 1ull << 51;
+ 
+ 	/* Set the present bit. */
+ 	mask |= 1ull;
+ 
+-#ifdef CONFIG_X86_64
+ 	/*
+ 	 * If reserved bit is not supported, clear the present bit to disable
+ 	 * mmio page fault.
+ 	 */
+-	if (maxphyaddr == 52)
++	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+ 		mask &= ~1ull;
+-#endif
+ 
+ 	kvm_mmu_set_mmio_spte_mask(mask, mask);
+ }
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 2c30cabfda90..071d82ec9abb 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -434,14 +434,13 @@ static void xen_set_pud(pud_t *ptr, pud_t val)
+ static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+ 	trace_xen_mmu_set_pte_atomic(ptep, pte);
+-	set_64bit((u64 *)ptep, native_pte_val(pte));
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ static void xen_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+ 	trace_xen_mmu_pte_clear(mm, addr, ptep);
+-	if (!xen_batched_set_pte(ptep, native_make_pte(0)))
+-		native_pte_clear(mm, addr, ptep);
++	__xen_set_pte(ptep, native_make_pte(0));
+ }
+ 
+ static void xen_pmd_clear(pmd_t *pmdp)
+@@ -1571,7 +1570,7 @@ static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
+ 		pte = __pte_ma(((pte_val_ma(*ptep) & _PAGE_RW) | ~_PAGE_RW) &
+ 			       pte_val_ma(pte));
+ #endif
+-	native_set_pte(ptep, pte);
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ /* Early in boot, while setting up the initial pagetable, assume
+diff --git a/block/bio.c b/block/bio.c
+index 047c5dca6d90..ff94640bc734 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -156,7 +156,7 @@ out:
+ 
+ unsigned int bvec_nr_vecs(unsigned short idx)
+ {
+-	return bvec_slabs[idx].nr_vecs;
++	return bvec_slabs[--idx].nr_vecs;
+ }
+ 
+ void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned int idx)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1646ea85dade..746a5eac4541 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2159,7 +2159,9 @@ static inline bool should_fail_request(struct hd_struct *part,
+ 
+ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+-	if (part->policy && op_is_write(bio_op(bio))) {
++	const int op = bio_op(bio);
++
++	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
+ 		char b[BDEVNAME_SIZE];
+ 
+ 		WARN_ONCE(1,
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 3de0836163c2..d5f2c21d8531 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -23,6 +23,9 @@ bool blk_mq_has_free_tags(struct blk_mq_tags *tags)
+ 
+ /*
+  * If a previously inactive queue goes active, bump the active user count.
++ * We need to do this before try to allocate driver tag, then even if fail
++ * to get tag when first time, the other shared-tag users could reserve
++ * budget for it.
+  */
+ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 654b0dc7e001..2f9e14361673 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -285,7 +285,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
+ 		rq->tag = -1;
+ 		rq->internal_tag = tag;
+ 	} else {
+-		if (blk_mq_tag_busy(data->hctx)) {
++		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
+ 			rq_flags = RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data->hctx->nr_active);
+ 		}
+@@ -367,6 +367,8 @@ static struct request *blk_mq_get_request(struct request_queue *q,
+ 		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
+ 		    !(data->flags & BLK_MQ_REQ_RESERVED))
+ 			e->type->ops.mq.limit_depth(op, data);
++	} else {
++		blk_mq_tag_busy(data->hctx);
+ 	}
+ 
+ 	tag = blk_mq_get_tag(data);
+@@ -970,6 +972,7 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
+ 		.flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
+ 	};
++	bool shared;
+ 
+ 	might_sleep_if(wait);
+ 
+@@ -979,9 +982,10 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 	if (blk_mq_tag_is_reserved(data.hctx->sched_tags, rq->internal_tag))
+ 		data.flags |= BLK_MQ_REQ_RESERVED;
+ 
++	shared = blk_mq_tag_busy(data.hctx);
+ 	rq->tag = blk_mq_get_tag(&data);
+ 	if (rq->tag >= 0) {
+-		if (blk_mq_tag_busy(data.hctx)) {
++		if (shared) {
+ 			rq->rq_flags |= RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data.hctx->nr_active);
+ 		}
+diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
+index 82b6c27b3245..f6f180f3aa1c 100644
+--- a/block/cfq-iosched.c
++++ b/block/cfq-iosched.c
+@@ -4735,12 +4735,13 @@ USEC_SHOW_FUNCTION(cfq_target_latency_us_show, cfqd->cfq_target_latency);
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	if (__CONV)							\
+ 		*(__PTR) = (u64)__data * NSEC_PER_MSEC;			\
+ 	else								\
+@@ -4769,12 +4770,13 @@ STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, UINT_MAX,
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	*(__PTR) = (u64)__data * NSEC_PER_USEC;				\
+ 	return count;							\
+ }
+diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c
+index 3de794bcf8fa..69603ba52a3a 100644
+--- a/drivers/acpi/acpica/hwregs.c
++++ b/drivers/acpi/acpica/hwregs.c
+@@ -528,13 +528,18 @@ acpi_status acpi_hw_register_read(u32 register_id, u32 *return_value)
+ 
+ 		status =
+ 		    acpi_hw_read(&value64, &acpi_gbl_FADT.xpm2_control_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
+ 		break;
+ 
+ 	case ACPI_REGISTER_PM_TIMER:	/* 32-bit access */
+ 
+ 		status = acpi_hw_read(&value64, &acpi_gbl_FADT.xpm_timer_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
++
+ 		break;
+ 
+ 	case ACPI_REGISTER_SMI_COMMAND_BLOCK:	/* 8-bit access */
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 970dd87d347c..6799d00dd790 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1612,7 +1612,8 @@ static int acpi_add_single_object(struct acpi_device **child,
+ 	 * Note this must be done before the get power-/wakeup_dev-flags calls.
+ 	 */
+ 	if (type == ACPI_BUS_TYPE_DEVICE)
+-		acpi_bus_get_status(device);
++		if (acpi_bus_get_status(device) < 0)
++			acpi_set_device_status(device, 0);
+ 
+ 	acpi_bus_get_power_flags(device);
+ 	acpi_bus_get_wakeup_device_flags(device);
+@@ -1690,7 +1691,7 @@ static int acpi_bus_type_and_status(acpi_handle handle, int *type,
+ 		 * acpi_add_single_object updates this once we've an acpi_device
+ 		 * so that acpi_bus_get_status' quirk handling can be used.
+ 		 */
+-		*sta = 0;
++		*sta = ACPI_STA_DEFAULT;
+ 		break;
+ 	case ACPI_TYPE_PROCESSOR:
+ 		*type = ACPI_BUS_TYPE_PROCESSOR;
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index 2a8634a52856..5a628148f3f0 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -1523,6 +1523,7 @@ static const char *const rk3399_pmucru_critical_clocks[] __initconst = {
+ 	"pclk_pmu_src",
+ 	"fclk_cm0s_src_pmu",
+ 	"clk_timer_src_pmu",
++	"pclk_rkpwm_pmu",
+ };
+ 
+ static void __init rk3399_clk_init(struct device_node *np)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 7dcbac8af9a7..b60aa7d43cb7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1579,9 +1579,9 @@ struct amdgpu_device {
+ 	DECLARE_HASHTABLE(mn_hash, 7);
+ 
+ 	/* tracking pinned memory */
+-	u64 vram_pin_size;
+-	u64 invisible_pin_size;
+-	u64 gart_pin_size;
++	atomic64_t vram_pin_size;
++	atomic64_t visible_pin_size;
++	atomic64_t gart_pin_size;
+ 
+ 	/* amdkfd interface */
+ 	struct kfd_dev          *kfd;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 9c85a90be293..5a196ec49be8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -257,7 +257,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
+ 		return;
+ 	}
+ 
+-	total_vram = adev->gmc.real_vram_size - adev->vram_pin_size;
++	total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
+ 	used_vram = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 91517b166a3b..063f9aa96946 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -494,13 +494,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 	case AMDGPU_INFO_VRAM_GTT: {
+ 		struct drm_amdgpu_info_vram_gtt vram_gtt;
+ 
+-		vram_gtt.vram_size = adev->gmc.real_vram_size;
+-		vram_gtt.vram_size -= adev->vram_pin_size;
+-		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size;
+-		vram_gtt.vram_cpu_accessible_size -= (adev->vram_pin_size - adev->invisible_pin_size);
++		vram_gtt.vram_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
++		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		vram_gtt.gtt_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		vram_gtt.gtt_size *= PAGE_SIZE;
+-		vram_gtt.gtt_size -= adev->gart_pin_size;
++		vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
+ 		return copy_to_user(out, &vram_gtt,
+ 				    min((size_t)size, sizeof(vram_gtt))) ? -EFAULT : 0;
+ 	}
+@@ -509,17 +509,16 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		memset(&mem, 0, sizeof(mem));
+ 		mem.vram.total_heap_size = adev->gmc.real_vram_size;
+-		mem.vram.usable_heap_size =
+-			adev->gmc.real_vram_size - adev->vram_pin_size;
++		mem.vram.usable_heap_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
+ 		mem.vram.heap_usage =
+ 			amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
+ 
+ 		mem.cpu_accessible_vram.total_heap_size =
+ 			adev->gmc.visible_vram_size;
+-		mem.cpu_accessible_vram.usable_heap_size =
+-			adev->gmc.visible_vram_size -
+-			(adev->vram_pin_size - adev->invisible_pin_size);
++		mem.cpu_accessible_vram.usable_heap_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		mem.cpu_accessible_vram.heap_usage =
+ 			amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.cpu_accessible_vram.max_allocation =
+@@ -527,8 +526,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		mem.gtt.total_heap_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		mem.gtt.total_heap_size *= PAGE_SIZE;
+-		mem.gtt.usable_heap_size = mem.gtt.total_heap_size
+-			- adev->gart_pin_size;
++		mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
++			atomic64_read(&adev->gart_pin_size);
+ 		mem.gtt.heap_usage =
+ 			amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
+ 		mem.gtt.max_allocation = mem.gtt.usable_heap_size * 3 / 4;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 3526efa8960e..3873c3353020 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -50,11 +50,35 @@ static bool amdgpu_need_backup(struct amdgpu_device *adev)
+ 	return true;
+ }
+ 
++/**
++ * amdgpu_bo_subtract_pin_size - Remove BO from pin_size accounting
++ *
++ * @bo: &amdgpu_bo buffer object
++ *
++ * This function is called when a BO stops being pinned, and updates the
++ * &amdgpu_device pin_size values accordingly.
++ */
++static void amdgpu_bo_subtract_pin_size(struct amdgpu_bo *bo)
++{
++	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
++
++	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
++	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
++	}
++}
++
+ static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
+ 	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
+ 
++	if (bo->pin_count > 0)
++		amdgpu_bo_subtract_pin_size(bo);
++
+ 	if (bo->kfd_bo)
+ 		amdgpu_amdkfd_unreserve_system_memory_limit(bo);
+ 
+@@ -761,10 +785,11 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 
+ 	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+ 	if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
+-		adev->vram_pin_size += amdgpu_bo_size(bo);
+-		adev->invisible_pin_size += amdgpu_vram_mgr_bo_invisible_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_add(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
+ 	} else if (domain == AMDGPU_GEM_DOMAIN_GTT) {
+-		adev->gart_pin_size += amdgpu_bo_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->gart_pin_size);
+ 	}
+ 
+ error:
+@@ -790,12 +815,7 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
+ 	if (bo->pin_count)
+ 		return 0;
+ 
+-	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+-		adev->vram_pin_size -= amdgpu_bo_size(bo);
+-		adev->invisible_pin_size -= amdgpu_vram_mgr_bo_invisible_size(bo);
+-	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+-		adev->gart_pin_size -= amdgpu_bo_size(bo);
+-	}
++	amdgpu_bo_subtract_pin_size(bo);
+ 
+ 	for (i = 0; i < bo->placement.num_placement; i++) {
+ 		bo->placements[i].lpfn = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index a44c3d58fef4..2ec20348b983 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -1157,7 +1157,7 @@ static ssize_t amdgpu_hwmon_show_vddnb(struct device *dev,
+ 	int r, size = sizeof(vddnb);
+ 
+ 	/* only APUs have vddnb */
+-	if  (adev->flags & AMD_IS_APU)
++	if  (!(adev->flags & AMD_IS_APU))
+ 		return -EINVAL;
+ 
+ 	/* Can't get voltage when the card is off */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 9f1a5bd39ae8..5b39d1399630 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -131,6 +131,11 @@ psp_cmd_submit_buf(struct psp_context *psp,
+ 		msleep(1);
+ 	}
+ 
++	if (ucode) {
++		ucode->tmr_mc_addr_lo = psp->cmd_buf_mem->resp.fw_addr_lo;
++		ucode->tmr_mc_addr_hi = psp->cmd_buf_mem->resp.fw_addr_hi;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index 86a0715d9431..1cafe8d83a4d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -53,9 +53,8 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 						  int fd,
+ 						  enum drm_sched_priority priority)
+ {
+-	struct file *filp = fcheck(fd);
++	struct file *filp = fget(fd);
+ 	struct drm_file *file;
+-	struct pid *pid;
+ 	struct amdgpu_fpriv *fpriv;
+ 	struct amdgpu_ctx *ctx;
+ 	uint32_t id;
+@@ -63,20 +62,12 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 	if (!filp)
+ 		return -EINVAL;
+ 
+-	pid = get_pid(((struct drm_file *)filp->private_data)->pid);
++	file = filp->private_data;
++	fpriv = file->driver_priv;
++	idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
++		amdgpu_ctx_priority_override(ctx, priority);
+ 
+-	mutex_lock(&adev->ddev->filelist_mutex);
+-	list_for_each_entry(file, &adev->ddev->filelist, lhead) {
+-		if (file->pid != pid)
+-			continue;
+-
+-		fpriv = file->driver_priv;
+-		idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
+-				amdgpu_ctx_priority_override(ctx, priority);
+-	}
+-	mutex_unlock(&adev->ddev->filelist_mutex);
+-
+-	put_pid(pid);
++	fput(filp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index e5da4654b630..8b3cc6687769 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -73,7 +73,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem);
+ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
+ int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
+ 
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo);
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
+ uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
+ uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+index 08e38579af24..bdc472b6e641 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+@@ -194,6 +194,7 @@ enum AMDGPU_UCODE_ID {
+ 	AMDGPU_UCODE_ID_SMC,
+ 	AMDGPU_UCODE_ID_UVD,
+ 	AMDGPU_UCODE_ID_VCE,
++	AMDGPU_UCODE_ID_VCN,
+ 	AMDGPU_UCODE_ID_MAXIMUM,
+ };
+ 
+@@ -226,6 +227,9 @@ struct amdgpu_firmware_info {
+ 	void *kaddr;
+ 	/* ucode_size_bytes */
+ 	uint32_t ucode_size;
++	/* starting tmr mc address */
++	uint32_t tmr_mc_addr_lo;
++	uint32_t tmr_mc_addr_hi;
+ };
+ 
+ void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 1b4ad9b2a755..bee49991c1ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -111,9 +111,10 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+ 			version_major, version_minor, family_id);
+ 	}
+ 
+-	bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8)
+-		  +  AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
++	bo_size = AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
+ 		  +  AMDGPU_VCN_SESSION_SIZE * 40;
++	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
++		bo_size += AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8);
+ 	r = amdgpu_bo_create_kernel(adev, bo_size, PAGE_SIZE,
+ 				    AMDGPU_GEM_DOMAIN_VRAM, &adev->vcn.vcpu_bo,
+ 				    &adev->vcn.gpu_addr, &adev->vcn.cpu_addr);
+@@ -187,11 +188,13 @@ int amdgpu_vcn_resume(struct amdgpu_device *adev)
+ 		unsigned offset;
+ 
+ 		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
+-		offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
+-		memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
+-			    le32_to_cpu(hdr->ucode_size_bytes));
+-		size -= le32_to_cpu(hdr->ucode_size_bytes);
+-		ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
++			offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
++			memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
++				    le32_to_cpu(hdr->ucode_size_bytes));
++			size -= le32_to_cpu(hdr->ucode_size_bytes);
++			ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		}
+ 		memset_io(ptr, 0, size);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index b6333f92ba45..ef4784458800 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -97,33 +97,29 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
+ }
+ 
+ /**
+- * amdgpu_vram_mgr_bo_invisible_size - CPU invisible BO size
++ * amdgpu_vram_mgr_bo_visible_size - CPU visible BO size
+  *
+  * @bo: &amdgpu_bo buffer object (must be in VRAM)
+  *
+  * Returns:
+- * How much of the given &amdgpu_bo buffer object lies in CPU invisible VRAM.
++ * How much of the given &amdgpu_bo buffer object lies in CPU visible VRAM.
+  */
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo)
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ 	struct ttm_mem_reg *mem = &bo->tbo.mem;
+ 	struct drm_mm_node *nodes = mem->mm_node;
+ 	unsigned pages = mem->num_pages;
+-	u64 usage = 0;
++	u64 usage;
+ 
+ 	if (adev->gmc.visible_vram_size == adev->gmc.real_vram_size)
+-		return 0;
++		return amdgpu_bo_size(bo);
+ 
+ 	if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
+-		return amdgpu_bo_size(bo);
++		return 0;
+ 
+-	while (nodes && pages) {
+-		usage += nodes->size << PAGE_SHIFT;
+-		usage -= amdgpu_vram_mgr_vis_size(adev, nodes);
+-		pages -= nodes->size;
+-		++nodes;
+-	}
++	for (usage = 0; nodes && pages; pages -= nodes->size, nodes++)
++		usage += amdgpu_vram_mgr_vis_size(adev, nodes);
+ 
+ 	return usage;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index a69153435ea7..8f0ac805ecd2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3433,7 +3433,7 @@ static void gfx_v9_0_enter_rlc_safe_mode(struct amdgpu_device *adev)
+ 
+ 		/* wait for RLC_SAFE_MODE */
+ 		for (i = 0; i < adev->usec_timeout; i++) {
+-			if (!REG_GET_FIELD(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
++			if (!REG_GET_FIELD(RREG32_SOC15(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
+ 				break;
+ 			udelay(1);
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 0ff136d02d9b..02be34e72ed9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -88,6 +88,9 @@ psp_v10_0_get_fw_type(struct amdgpu_firmware_info *ucode, enum psp_gfx_fw_type *
+ 	case AMDGPU_UCODE_ID_VCE:
+ 		*type = GFX_FW_TYPE_VCE;
+ 		break;
++	case AMDGPU_UCODE_ID_VCN:
++		*type = GFX_FW_TYPE_VCN;
++		break;
+ 	case AMDGPU_UCODE_ID_MAXIMUM:
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index bfddf97dd13e..a16eebc05d12 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -1569,7 +1569,6 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_phys_funcs = {
+ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.type = AMDGPU_RING_TYPE_UVD,
+ 	.align_mask = 0xf,
+-	.nop = PACKET0(mmUVD_NO_OP, 0),
+ 	.support_64bit_ptrs = false,
+ 	.get_rptr = uvd_v6_0_ring_get_rptr,
+ 	.get_wptr = uvd_v6_0_ring_get_wptr,
+@@ -1587,7 +1586,7 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.emit_hdp_flush = uvd_v6_0_ring_emit_hdp_flush,
+ 	.test_ring = uvd_v6_0_ring_test_ring,
+ 	.test_ib = amdgpu_uvd_ring_test_ib,
+-	.insert_nop = amdgpu_ring_insert_nop,
++	.insert_nop = uvd_v6_0_ring_insert_nop,
+ 	.pad_ib = amdgpu_ring_generic_pad_ib,
+ 	.begin_use = amdgpu_uvd_ring_begin_use,
+ 	.end_use = amdgpu_uvd_ring_end_use,
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 29684c3ea4ef..700119168067 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -90,6 +90,16 @@ static int vcn_v1_0_sw_init(void *handle)
+ 	if (r)
+ 		return r;
+ 
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		const struct common_firmware_header *hdr;
++		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].ucode_id = AMDGPU_UCODE_ID_VCN;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
++		adev->firmware.fw_size +=
++			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
++		DRM_INFO("PSP loading VCN firmware\n");
++	}
++
+ 	r = amdgpu_vcn_resume(adev);
+ 	if (r)
+ 		return r;
+@@ -241,26 +251,38 @@ static int vcn_v1_0_resume(void *handle)
+ static void vcn_v1_0_mc_resume(struct amdgpu_device *adev)
+ {
+ 	uint32_t size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw->size + 4);
+-
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++	uint32_t offset;
++
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_lo));
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_hi));
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0, 0);
++		offset = 0;
++	} else {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
+ 			lower_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
+ 			upper_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
+-				AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++		offset = size;
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
++			     AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++	}
++
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE0, size);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size));
++		     lower_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size));
++		     upper_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE1, AMDGPU_VCN_HEAP_SIZE);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     lower_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     upper_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE2,
+ 			AMDGPU_VCN_STACK_SIZE + (AMDGPU_VCN_SESSION_SIZE * 40));
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 770c6b24be0b..e484d0a94bdc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1334,6 +1334,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_display_manager *dm)
+ 	struct backlight_properties props = { 0 };
+ 
+ 	props.max_brightness = AMDGPU_MAX_BL_LEVEL;
++	props.brightness = AMDGPU_MAX_BL_LEVEL;
+ 	props.type = BACKLIGHT_RAW;
+ 
+ 	snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+@@ -2123,13 +2124,8 @@ convert_color_depth_from_display_info(const struct drm_connector *connector)
+ static enum dc_aspect_ratio
+ get_aspect_ratio(const struct drm_display_mode *mode_in)
+ {
+-	int32_t width = mode_in->crtc_hdisplay * 9;
+-	int32_t height = mode_in->crtc_vdisplay * 16;
+-
+-	if ((width - height) < 10 && (width - height) > -10)
+-		return ASPECT_RATIO_16_9;
+-	else
+-		return ASPECT_RATIO_4_3;
++	/* 1-1 mapping, since both enums follow the HDMI spec. */
++	return (enum dc_aspect_ratio) mode_in->picture_aspect_ratio;
+ }
+ 
+ static enum dc_color_space
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index 52f2c01349e3..9bfb040352e9 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -98,10 +98,16 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name,
+  */
+ void amdgpu_dm_crtc_handle_crc_irq(struct drm_crtc *crtc)
+ {
+-	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+-	struct dc_stream_state *stream_state = crtc_state->stream;
++	struct dm_crtc_state *crtc_state;
++	struct dc_stream_state *stream_state;
+ 	uint32_t crcs[3];
+ 
++	if (crtc == NULL)
++		return;
++
++	crtc_state = to_dm_crtc_state(crtc->state);
++	stream_state = crtc_state->stream;
++
+ 	/* Early return if CRC capture is not enabled. */
+ 	if (!crtc_state->crc_enabled)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index 651e1fd4622f..a558bfaa0c46 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -808,6 +808,24 @@ static enum bp_result transmitter_control_v1_5(
+ 	 * (=1: 8bpp, =1.25: 10bpp, =1.5:12bpp, =2: 16bpp)
+ 	 * LVDS mode: usPixelClock = pixel clock
+ 	 */
++	if  (cntl->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
++		switch (cntl->color_depth) {
++		case COLOR_DEPTH_101010:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 30) / 24);
++			break;
++		case COLOR_DEPTH_121212:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 36) / 24);
++			break;
++		case COLOR_DEPTH_161616:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 48) / 24);
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	if (EXEC_BIOS_CMD_TABLE(UNIPHYTransmitterControl, params))
+ 		result = BP_RESULT_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 2fa521812d23..8a7890b03d97 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -728,6 +728,17 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ 			break;
+ 		case EDID_NO_RESPONSE:
+ 			DC_LOG_ERROR("No EDID read.\n");
++
++			/*
++			 * Abort detection for non-DP connectors if we have
++			 * no EDID
++			 *
++			 * DP needs to report as connected if HDP is high
++			 * even if we have no EDID in order to go to
++			 * fail-safe mode
++			 */
++			if (!dc_is_dp_signal(link->connector_signal))
++				return false;
+ 		default:
+ 			break;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 751f3ac9d921..754b4c2fc90a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -268,24 +268,30 @@ bool resource_construct(
+ 
+ 	return true;
+ }
++static int find_matching_clock_source(
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
+ 
++	int i;
++
++	for (i = 0; i < pool->clk_src_count; i++) {
++		if (pool->clock_sources[i] == clock_source)
++			return i;
++	}
++	return -1;
++}
+ 
+ void resource_unreference_clock_source(
+ 		struct resource_context *res_ctx,
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]--;
+ 
+-		break;
+-	}
+-
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count--;
+ }
+@@ -295,19 +301,31 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]++;
+-		break;
+-	}
+ 
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count++;
+ }
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
++	int i = find_matching_clock_source(pool, clock_source);
++
++	if (i > -1)
++		return res_ctx->clock_source_ref_count[i];
++
++	if (pool->dp_clock_source == clock_source)
++		return res_ctx->dp_clock_source_ref_count;
++
++	return -1;
++}
++
+ bool resource_are_streams_timing_synchronizable(
+ 	struct dc_stream_state *stream1,
+ 	struct dc_stream_state *stream2)
+@@ -330,6 +348,9 @@ bool resource_are_streams_timing_synchronizable(
+ 				!= stream2->timing.pix_clk_khz)
+ 		return false;
+ 
++	if (stream1->clamping.c_depth != stream2->clamping.c_depth)
++		return false;
++
+ 	if (stream1->phy_pix_clk != stream2->phy_pix_clk
+ 			&& (!dc_is_dp_signal(stream1->signal)
+ 			|| !dc_is_dp_signal(stream2->signal)))
+@@ -337,6 +358,20 @@ bool resource_are_streams_timing_synchronizable(
+ 
+ 	return true;
+ }
++static bool is_dp_and_hdmi_sharable(
++		struct dc_stream_state *stream1,
++		struct dc_stream_state *stream2)
++{
++	if (stream1->ctx->dc->caps.disable_dp_clk_share)
++		return false;
++
++	if (stream1->clamping.c_depth != COLOR_DEPTH_888 ||
++	    stream2->clamping.c_depth != COLOR_DEPTH_888)
++	return false;
++
++	return true;
++
++}
+ 
+ static bool is_sharable_clk_src(
+ 	const struct pipe_ctx *pipe_with_clk_src,
+@@ -348,7 +383,10 @@ static bool is_sharable_clk_src(
+ 	if (pipe_with_clk_src->stream->signal == SIGNAL_TYPE_VIRTUAL)
+ 		return false;
+ 
+-	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal))
++	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal) ||
++		(dc_is_dp_signal(pipe->stream->signal) &&
++		!is_dp_and_hdmi_sharable(pipe_with_clk_src->stream,
++				     pipe->stream)))
+ 		return false;
+ 
+ 	if (dc_is_hdmi_signal(pipe_with_clk_src->stream->signal)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 53c71296f3dd..efe155d50668 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -77,6 +77,7 @@ struct dc_caps {
+ 	bool dual_link_dvi;
+ 	bool post_blend_color_processing;
+ 	bool force_dp_tps4_for_cp2520;
++	bool disable_dp_clk_share;
+ };
+ 
+ struct dc_dcc_surface_param {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index dbe3b26b6d9e..f6ec1d3dfd0c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -919,7 +919,7 @@ void dce110_link_encoder_enable_tmds_output(
+ 	enum bp_result result;
+ 
+ 	/* Enable the PHY */
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+@@ -961,7 +961,7 @@ void dce110_link_encoder_enable_dp_output(
+ 	 * We need to set number of lanes manually.
+ 	 */
+ 	configure_encoder(enc110, link_settings);
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+index 344dd2e69e7c..aa2f03eb46fe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+@@ -884,7 +884,7 @@ static bool construct(
+ 	dc->caps.i2c_speed_in_khz = 40;
+ 	dc->caps.max_cursor_size = 128;
+ 	dc->caps.dual_link_dvi = true;
+-
++	dc->caps.disable_dp_clk_share = true;
+ 	for (i = 0; i < pool->base.pipe_count; i++) {
+ 		pool->base.timing_generators[i] =
+ 			dce100_timing_generator_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+index e2994d337044..111c4921987f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+@@ -143,7 +143,7 @@ static void wait_for_fbc_state_changed(
+ 	struct dce110_compressor *cp110,
+ 	bool enabled)
+ {
+-	uint8_t counter = 0;
++	uint16_t counter = 0;
+ 	uint32_t addr = mmFBC_STATUS;
+ 	uint32_t value;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index c29052b6da5a..7c0b1d7aa9b8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1939,7 +1939,9 @@ static void dce110_reset_hw_ctx_wrap(
+ 			pipe_ctx_old->plane_res.mi->funcs->free_mem_input(
+ 					pipe_ctx_old->plane_res.mi, dc->current_state->stream_count);
+ 
+-			if (old_clk)
++			if (old_clk && 0 == resource_get_clock_source_reference(&context->res_ctx,
++										dc->res_pool,
++										old_clk))
+ 				old_clk->funcs->cs_power_down(old_clk);
+ 
+ 			dc->hwss.disable_plane(dc, pipe_ctx_old);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+index 48a068964722..6f4992bdc9ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+@@ -902,6 +902,7 @@ static bool dce80_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1087,6 +1088,7 @@ static bool dce81_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1268,6 +1270,7 @@ static bool dce83_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 640a647f4611..abf42a7d0859 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -102,6 +102,11 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source);
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source);
++
+ bool resource_are_streams_timing_synchronizable(
+ 		struct dc_stream_state *stream1,
+ 		struct dc_stream_state *stream2);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+index c952845833d7..5e19f5977eb1 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+@@ -403,6 +403,49 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris12[] = {
+ 	{   ixDIDT_SQ_CTRL1,                   DIDT_SQ_CTRL1__MAX_POWER_MASK,                      DIDT_SQ_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__UNUSED_0_MASK,                    DIDT_SQ_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_SQ_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_SQ_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_0_MASK,                       DIDT_SQ_CTRL2__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x005a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_1_MASK,                       DIDT_SQ_CTRL2__UNUSED_1__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,       DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_2_MASK,                       DIDT_SQ_CTRL2__UNUSED_2__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,    DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT,  0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x0ebb,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__UNUSED_0_MASK,                  DIDT_SQ_STALL_CTRL__UNUSED_0__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x3153,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__UNUSED_0_MASK,                 DIDT_SQ_TUNING_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__USE_REF_CLOCK_MASK,                  DIDT_SQ_CTRL0__USE_REF_CLOCK__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__PHASE_OFFSET_MASK,                   DIDT_SQ_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_SQ_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__UNUSED_0_MASK,                       DIDT_SQ_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT0_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT0__SHIFT,                  0x000a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT1_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT1__SHIFT,                  0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT2_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT2__SHIFT,                  0x0017,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT3_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT3__SHIFT,                  0x002f,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT4_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT4__SHIFT,                  0x0046,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT5_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT5__SHIFT,                  0x005d,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT6_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT6__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT7_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT7__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MIN_POWER_MASK,                      DIDT_TD_CTRL1__MIN_POWER__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MAX_POWER_MASK,                      DIDT_TD_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__UNUSED_0_MASK,                    DIDT_TD_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
+ 	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_TD_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0x00ff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_TD_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3fff,     GPU_CONFIGREG_DIDT_IND },
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 50690c72b2ea..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -244,6 +244,7 @@ static int smu8_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
++/* convert form 8bit vid to real voltage in mV*4 */
+ static uint32_t smu8_convert_8Bit_index_to_voltage(
+ 			struct pp_hwmgr *hwmgr, uint16_t voltage)
+ {
+@@ -1702,13 +1703,13 @@ static int smu8_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ 	case AMDGPU_PP_SENSOR_VDDNB:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_NB_CURRENTVID) &
+ 			CURRENT_NB_VID_MASK) >> CURRENT_NB_VID__SHIFT;
+-		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp);
++		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp) / 4;
+ 		*((uint32_t *)value) = vddnb;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_VDDGFX:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_GFX_CURRENTVID) &
+ 			CURRENT_GFX_VID_MASK) >> CURRENT_GFX_VID__SHIFT;
+-		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp);
++		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp) / 4;
+ 		*((uint32_t *)value) = vddgfx;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_UVD_VCLK:
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+index c98e5de777cd..fcd2808874bf 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+@@ -490,7 +490,7 @@ static int vega12_get_number_dpm_level(struct pp_hwmgr *hwmgr,
+ static int vega12_get_dpm_frequency_by_index(struct pp_hwmgr *hwmgr,
+ 		PPCLK_e clkID, uint32_t index, uint32_t *clock)
+ {
+-	int result;
++	int result = 0;
+ 
+ 	/*
+ 	 *SMU expects the Clock ID to be in the top 16 bits.
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index a5808382bdf0..c7b4481c90d7 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -116,6 +116,9 @@ static const struct edid_quirk {
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
++	{ "SDC", 0x3652, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* Belinea 10 15 55 */
+ 	{ "MAX", 1516, EDID_QUIRK_PREFER_LARGE_60 },
+ 	{ "MAX", 0x77e, EDID_QUIRK_PREFER_LARGE_60 },
+@@ -163,8 +166,9 @@ static const struct edid_quirk {
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+ 	{ "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
+ 
+-	/* HTC Vive VR Headset */
++	/* HTC Vive and Vive Pro VR Headsets */
+ 	{ "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP },
++	{ "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP },
+ 
+ 	/* Oculus Rift DK1, DK2, and CV1 VR Headsets */
+ 	{ "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP },
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 686f6552db48..3ef440b235e5 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -799,6 +799,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 
+ free_buffer:
+ 	etnaviv_cmdbuf_free(&gpu->buffer);
++	gpu->buffer.suballoc = NULL;
+ destroy_iommu:
+ 	etnaviv_iommu_destroy(gpu->mmu);
+ 	gpu->mmu = NULL;
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 9c449b8d8eab..015f9e93419d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -919,7 +919,6 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
+ 	spin_lock_init(&dev_priv->uncore.lock);
+ 
+ 	mutex_init(&dev_priv->sb_lock);
+-	mutex_init(&dev_priv->modeset_restore_lock);
+ 	mutex_init(&dev_priv->av_mutex);
+ 	mutex_init(&dev_priv->wm.wm_mutex);
+ 	mutex_init(&dev_priv->pps_mutex);
+@@ -1560,11 +1559,6 @@ static int i915_drm_suspend(struct drm_device *dev)
+ 	pci_power_t opregion_target_state;
+ 	int error;
+ 
+-	/* ignore lid events during suspend */
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_SUSPENDED;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	disable_rpm_wakeref_asserts(dev_priv);
+ 
+ 	/* We do a lot of poking in a lot of registers, make sure they work
+@@ -1764,10 +1758,6 @@ static int i915_drm_resume(struct drm_device *dev)
+ 
+ 	intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING, false);
+ 
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_DONE;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	intel_opregion_notify_adapter(dev_priv, PCI_D0);
+ 
+ 	enable_rpm_wakeref_asserts(dev_priv);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 71e1aa54f774..7c22fac3aa04 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1003,12 +1003,6 @@ struct i915_gem_mm {
+ #define I915_ENGINE_DEAD_TIMEOUT  (4 * HZ)  /* Seqno, head and subunits dead */
+ #define I915_SEQNO_DEAD_TIMEOUT   (12 * HZ) /* Seqno dead with active head */
+ 
+-enum modeset_restore {
+-	MODESET_ON_LID_OPEN,
+-	MODESET_DONE,
+-	MODESET_SUSPENDED,
+-};
+-
+ #define DP_AUX_A 0x40
+ #define DP_AUX_B 0x10
+ #define DP_AUX_C 0x20
+@@ -1740,8 +1734,6 @@ struct drm_i915_private {
+ 
+ 	unsigned long quirks;
+ 
+-	enum modeset_restore modeset_restore;
+-	struct mutex modeset_restore_lock;
+ 	struct drm_atomic_state *modeset_restore_state;
+ 	struct drm_modeset_acquire_ctx reset_ctx;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 7720569f2024..6e048ee88e3f 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -8825,6 +8825,7 @@ enum skl_power_gate {
+ #define  TRANS_MSA_10_BPC		(2<<5)
+ #define  TRANS_MSA_12_BPC		(3<<5)
+ #define  TRANS_MSA_16_BPC		(4<<5)
++#define  TRANS_MSA_CEA_RANGE		(1<<3)
+ 
+ /* LCPLL Control */
+ #define LCPLL_CTL			_MMIO(0x130040)
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index fed26d6e4e27..e195c287c263 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -1659,6 +1659,10 @@ void intel_ddi_set_pipe_settings(const struct intel_crtc_state *crtc_state)
+ 	WARN_ON(transcoder_is_dsi(cpu_transcoder));
+ 
+ 	temp = TRANS_MSA_SYNC_CLK;
++
++	if (crtc_state->limited_color_range)
++		temp |= TRANS_MSA_CEA_RANGE;
++
+ 	switch (crtc_state->pipe_bpp) {
+ 	case 18:
+ 		temp |= TRANS_MSA_6_BPC;
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 16faea30114a..8e465095fe06 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4293,18 +4293,6 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
+ 	return !drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);
+ }
+ 
+-/*
+- * If display is now connected check links status,
+- * there has been known issues of link loss triggering
+- * long pulse.
+- *
+- * Some sinks (eg. ASUS PB287Q) seem to perform some
+- * weird HPD ping pong during modesets. So we can apparently
+- * end up with HPD going low during a modeset, and then
+- * going back up soon after. And once that happens we must
+- * retrain the link to get a picture. That's in case no
+- * userspace component reacted to intermittent HPD dip.
+- */
+ int intel_dp_retrain_link(struct intel_encoder *encoder,
+ 			  struct drm_modeset_acquire_ctx *ctx)
+ {
+@@ -4794,7 +4782,8 @@ intel_dp_unset_edid(struct intel_dp *intel_dp)
+ }
+ 
+ static int
+-intel_dp_long_pulse(struct intel_connector *connector)
++intel_dp_long_pulse(struct intel_connector *connector,
++		    struct drm_modeset_acquire_ctx *ctx)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	struct intel_dp *intel_dp = intel_attached_dp(&connector->base);
+@@ -4853,6 +4842,22 @@ intel_dp_long_pulse(struct intel_connector *connector)
+ 		 */
+ 		status = connector_status_disconnected;
+ 		goto out;
++	} else {
++		/*
++		 * If display is now connected check links status,
++		 * there has been known issues of link loss triggering
++		 * long pulse.
++		 *
++		 * Some sinks (eg. ASUS PB287Q) seem to perform some
++		 * weird HPD ping pong during modesets. So we can apparently
++		 * end up with HPD going low during a modeset, and then
++		 * going back up soon after. And once that happens we must
++		 * retrain the link to get a picture. That's in case no
++		 * userspace component reacted to intermittent HPD dip.
++		 */
++		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++
++		intel_dp_retrain_link(encoder, ctx);
+ 	}
+ 
+ 	/*
+@@ -4914,7 +4919,7 @@ intel_dp_detect(struct drm_connector *connector,
+ 				return ret;
+ 		}
+ 
+-		status = intel_dp_long_pulse(intel_dp->attached_connector);
++		status = intel_dp_long_pulse(intel_dp->attached_connector, ctx);
+ 	}
+ 
+ 	intel_dp->detect_done = false;
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index d8cb53ef4351..c8640959a7fc 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -933,8 +933,12 @@ static int intel_hdmi_hdcp_write(struct intel_digital_port *intel_dig_port,
+ 
+ 	ret = i2c_transfer(adapter, &msg, 1);
+ 	if (ret == 1)
+-		return 0;
+-	return ret >= 0 ? -EIO : ret;
++		ret = 0;
++	else if (ret >= 0)
++		ret = -EIO;
++
++	kfree(write_buf);
++	return ret;
+ }
+ 
+ static
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index b4941101f21a..cdf19553ffac 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -127,9 +127,7 @@ lpe_audio_platdev_create(struct drm_i915_private *dev_priv)
+ 		return platdev;
+ 	}
+ 
+-	pm_runtime_forbid(&platdev->dev);
+-	pm_runtime_set_active(&platdev->dev);
+-	pm_runtime_enable(&platdev->dev);
++	pm_runtime_no_callbacks(&platdev->dev);
+ 
+ 	return platdev;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_lspcon.c b/drivers/gpu/drm/i915/intel_lspcon.c
+index 8ae8f42f430a..6b6758419fb3 100644
+--- a/drivers/gpu/drm/i915/intel_lspcon.c
++++ b/drivers/gpu/drm/i915/intel_lspcon.c
+@@ -74,7 +74,7 @@ static enum drm_lspcon_mode lspcon_wait_mode(struct intel_lspcon *lspcon,
+ 	DRM_DEBUG_KMS("Waiting for LSPCON mode %s to settle\n",
+ 		      lspcon_mode_name(mode));
+ 
+-	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 100);
++	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 400);
+ 	if (current_mode != mode)
+ 		DRM_ERROR("LSPCON mode hasn't settled\n");
+ 
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index 48f618dc9abb..63d7faa99946 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -44,8 +44,6 @@
+ /* Private structure for the integrated LVDS support */
+ struct intel_lvds_connector {
+ 	struct intel_connector base;
+-
+-	struct notifier_block lid_notifier;
+ };
+ 
+ struct intel_lvds_pps {
+@@ -454,26 +452,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
+ 	return true;
+ }
+ 
+-/*
+- * Detect the LVDS connection.
+- *
+- * Since LVDS doesn't have hotlug, we use the lid as a proxy.  Open means
+- * connected and closed means disconnected.  We also send hotplug events as
+- * needed, using lid status notification from the input layer.
+- */
+ static enum drm_connector_status
+ intel_lvds_detect(struct drm_connector *connector, bool force)
+ {
+-	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+-	enum drm_connector_status status;
+-
+-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+-		      connector->base.id, connector->name);
+-
+-	status = intel_panel_detect(dev_priv);
+-	if (status != connector_status_unknown)
+-		return status;
+-
+ 	return connector_status_connected;
+ }
+ 
+@@ -498,117 +479,6 @@ static int intel_lvds_get_modes(struct drm_connector *connector)
+ 	return 1;
+ }
+ 
+-static int intel_no_modeset_on_lid_dmi_callback(const struct dmi_system_id *id)
+-{
+-	DRM_INFO("Skipping forced modeset for %s\n", id->ident);
+-	return 1;
+-}
+-
+-/* The GPU hangs up on these systems if modeset is performed on LID open */
+-static const struct dmi_system_id intel_no_modeset_on_lid[] = {
+-	{
+-		.callback = intel_no_modeset_on_lid_dmi_callback,
+-		.ident = "Toshiba Tecra A11",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TECRA A11"),
+-		},
+-	},
+-
+-	{ }	/* terminating entry */
+-};
+-
+-/*
+- * Lid events. Note the use of 'modeset':
+- *  - we set it to MODESET_ON_LID_OPEN on lid close,
+- *    and set it to MODESET_DONE on open
+- *  - we use it as a "only once" bit (ie we ignore
+- *    duplicate events where it was already properly set)
+- *  - the suspend/resume paths will set it to
+- *    MODESET_SUSPENDED and ignore the lid open event,
+- *    because they restore the mode ("lid open").
+- */
+-static int intel_lid_notify(struct notifier_block *nb, unsigned long val,
+-			    void *unused)
+-{
+-	struct intel_lvds_connector *lvds_connector =
+-		container_of(nb, struct intel_lvds_connector, lid_notifier);
+-	struct drm_connector *connector = &lvds_connector->base.base;
+-	struct drm_device *dev = connector->dev;
+-	struct drm_i915_private *dev_priv = to_i915(dev);
+-
+-	if (dev->switch_power_state != DRM_SWITCH_POWER_ON)
+-		return NOTIFY_OK;
+-
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	if (dev_priv->modeset_restore == MODESET_SUSPENDED)
+-		goto exit;
+-	/*
+-	 * check and update the status of LVDS connector after receiving
+-	 * the LID nofication event.
+-	 */
+-	connector->status = connector->funcs->detect(connector, false);
+-
+-	/* Don't force modeset on machines where it causes a GPU lockup */
+-	if (dmi_check_system(intel_no_modeset_on_lid))
+-		goto exit;
+-	if (!acpi_lid_open()) {
+-		/* do modeset on next lid open event */
+-		dev_priv->modeset_restore = MODESET_ON_LID_OPEN;
+-		goto exit;
+-	}
+-
+-	if (dev_priv->modeset_restore == MODESET_DONE)
+-		goto exit;
+-
+-	/*
+-	 * Some old platform's BIOS love to wreak havoc while the lid is closed.
+-	 * We try to detect this here and undo any damage. The split for PCH
+-	 * platforms is rather conservative and a bit arbitrary expect that on
+-	 * those platforms VGA disabling requires actual legacy VGA I/O access,
+-	 * and as part of the cleanup in the hw state restore we also redisable
+-	 * the vga plane.
+-	 */
+-	if (!HAS_PCH_SPLIT(dev_priv))
+-		intel_display_resume(dev);
+-
+-	dev_priv->modeset_restore = MODESET_DONE;
+-
+-exit:
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-	return NOTIFY_OK;
+-}
+-
+-static int
+-intel_lvds_connector_register(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-	int ret;
+-
+-	ret = intel_connector_register(connector);
+-	if (ret)
+-		return ret;
+-
+-	lvds->lid_notifier.notifier_call = intel_lid_notify;
+-	if (acpi_lid_notifier_register(&lvds->lid_notifier)) {
+-		DRM_DEBUG_KMS("lid notifier registration failed\n");
+-		lvds->lid_notifier.notifier_call = NULL;
+-	}
+-
+-	return 0;
+-}
+-
+-static void
+-intel_lvds_connector_unregister(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-
+-	if (lvds->lid_notifier.notifier_call)
+-		acpi_lid_notifier_unregister(&lvds->lid_notifier);
+-
+-	intel_connector_unregister(connector);
+-}
+-
+ /**
+  * intel_lvds_destroy - unregister and free LVDS structures
+  * @connector: connector to free
+@@ -641,8 +511,8 @@ static const struct drm_connector_funcs intel_lvds_connector_funcs = {
+ 	.fill_modes = drm_helper_probe_single_connector_modes,
+ 	.atomic_get_property = intel_digital_connector_atomic_get_property,
+ 	.atomic_set_property = intel_digital_connector_atomic_set_property,
+-	.late_register = intel_lvds_connector_register,
+-	.early_unregister = intel_lvds_connector_unregister,
++	.late_register = intel_connector_register,
++	.early_unregister = intel_connector_unregister,
+ 	.destroy = intel_lvds_destroy,
+ 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ 	.atomic_duplicate_state = intel_digital_connector_duplicate_state,
+@@ -1108,8 +978,6 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ 	 * 2) check for VBT data
+ 	 * 3) check to see if LVDS is already on
+ 	 *    if none of the above, no panel
+-	 * 4) make sure lid is open
+-	 *    if closed, act like it's not there for now
+ 	 */
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 2121345a61af..78ce3d232c4d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -486,6 +486,31 @@ static void vop_line_flag_irq_disable(struct vop *vop)
+ 	spin_unlock_irqrestore(&vop->irq_lock, flags);
+ }
+ 
++static int vop_core_clks_enable(struct vop *vop)
++{
++	int ret;
++
++	ret = clk_enable(vop->hclk);
++	if (ret < 0)
++		return ret;
++
++	ret = clk_enable(vop->aclk);
++	if (ret < 0)
++		goto err_disable_hclk;
++
++	return 0;
++
++err_disable_hclk:
++	clk_disable(vop->hclk);
++	return ret;
++}
++
++static void vop_core_clks_disable(struct vop *vop)
++{
++	clk_disable(vop->aclk);
++	clk_disable(vop->hclk);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ 	struct vop *vop = to_vop(crtc);
+@@ -497,17 +522,13 @@ static int vop_enable(struct drm_crtc *crtc)
+ 		return ret;
+ 	}
+ 
+-	ret = clk_enable(vop->hclk);
++	ret = vop_core_clks_enable(vop);
+ 	if (WARN_ON(ret < 0))
+ 		goto err_put_pm_runtime;
+ 
+ 	ret = clk_enable(vop->dclk);
+ 	if (WARN_ON(ret < 0))
+-		goto err_disable_hclk;
+-
+-	ret = clk_enable(vop->aclk);
+-	if (WARN_ON(ret < 0))
+-		goto err_disable_dclk;
++		goto err_disable_core;
+ 
+ 	/*
+ 	 * Slave iommu shares power, irq and clock with vop.  It was associated
+@@ -519,7 +540,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ 	if (ret) {
+ 		DRM_DEV_ERROR(vop->dev,
+ 			      "failed to attach dma mapping, %d\n", ret);
+-		goto err_disable_aclk;
++		goto err_disable_dclk;
+ 	}
+ 
+ 	spin_lock(&vop->reg_lock);
+@@ -552,18 +573,14 @@ static int vop_enable(struct drm_crtc *crtc)
+ 
+ 	spin_unlock(&vop->reg_lock);
+ 
+-	enable_irq(vop->irq);
+-
+ 	drm_crtc_vblank_on(crtc);
+ 
+ 	return 0;
+ 
+-err_disable_aclk:
+-	clk_disable(vop->aclk);
+ err_disable_dclk:
+ 	clk_disable(vop->dclk);
+-err_disable_hclk:
+-	clk_disable(vop->hclk);
++err_disable_core:
++	vop_core_clks_disable(vop);
+ err_put_pm_runtime:
+ 	pm_runtime_put_sync(vop->dev);
+ 	return ret;
+@@ -599,8 +616,6 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	vop_dsp_hold_valid_irq_disable(vop);
+ 
+-	disable_irq(vop->irq);
+-
+ 	vop->is_enabled = false;
+ 
+ 	/*
+@@ -609,8 +624,7 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	rockchip_drm_dma_detach_device(vop->drm_dev, vop->dev);
+ 
+ 	clk_disable(vop->dclk);
+-	clk_disable(vop->aclk);
+-	clk_disable(vop->hclk);
++	vop_core_clks_disable(vop);
+ 	pm_runtime_put(vop->dev);
+ 	mutex_unlock(&vop->vop_lock);
+ 
+@@ -1177,6 +1191,18 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 	uint32_t active_irqs;
+ 	int ret = IRQ_NONE;
+ 
++	/*
++	 * The irq is shared with the iommu. If the runtime-pm state of the
++	 * vop-device is disabled the irq has to be targeted at the iommu.
++	 */
++	if (!pm_runtime_get_if_in_use(vop->dev))
++		return IRQ_NONE;
++
++	if (vop_core_clks_enable(vop)) {
++		DRM_DEV_ERROR_RATELIMITED(vop->dev, "couldn't enable clocks\n");
++		goto out;
++	}
++
+ 	/*
+ 	 * interrupt register has interrupt status, enable and clear bits, we
+ 	 * must hold irq_lock to avoid a race with enable/disable_vblank().
+@@ -1192,7 +1218,7 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 
+ 	/* This is expected for vop iommu irqs, since the irq is shared */
+ 	if (!active_irqs)
+-		return IRQ_NONE;
++		goto out_disable;
+ 
+ 	if (active_irqs & DSP_HOLD_VALID_INTR) {
+ 		complete(&vop->dsp_hold_completion);
+@@ -1218,6 +1244,10 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 		DRM_DEV_ERROR(vop->dev, "Unknown VOP IRQs: %#02x\n",
+ 			      active_irqs);
+ 
++out_disable:
++	vop_core_clks_disable(vop);
++out:
++	pm_runtime_put(vop->dev);
+ 	return ret;
+ }
+ 
+@@ -1596,9 +1626,6 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ 	if (ret)
+ 		goto err_disable_pm_runtime;
+ 
+-	/* IRQ is initially disabled; it gets enabled in power_on */
+-	disable_irq(vop->irq);
+-
+ 	return 0;
+ 
+ err_disable_pm_runtime:
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index e67f4ea28c0e..051b8be3dc0f 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -363,8 +363,10 @@ static int rockchip_lvds_bind(struct device *dev, struct device *master,
+ 		of_property_read_u32(endpoint, "reg", &endpoint_id);
+ 		ret = drm_of_find_panel_or_bridge(dev->of_node, 1, endpoint_id,
+ 						  &lvds->panel, &lvds->bridge);
+-		if (!ret)
++		if (!ret) {
++			of_node_put(endpoint);
+ 			break;
++		}
+ 	}
+ 	if (!child_count) {
+ 		DRM_DEV_ERROR(dev, "lvds port does not have any children\n");
+diff --git a/drivers/hid/hid-redragon.c b/drivers/hid/hid-redragon.c
+index daf59578bf93..73c9d4c4fa34 100644
+--- a/drivers/hid/hid-redragon.c
++++ b/drivers/hid/hid-redragon.c
+@@ -44,29 +44,6 @@ static __u8 *redragon_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	return rdesc;
+ }
+ 
+-static int redragon_probe(struct hid_device *dev,
+-	const struct hid_device_id *id)
+-{
+-	int ret;
+-
+-	ret = hid_parse(dev);
+-	if (ret) {
+-		hid_err(dev, "parse failed\n");
+-		return ret;
+-	}
+-
+-	/* do not register unused input device */
+-	if (dev->maxapplication == 1)
+-		return 0;
+-
+-	ret = hid_hw_start(dev, HID_CONNECT_DEFAULT);
+-	if (ret) {
+-		hid_err(dev, "hw start failed\n");
+-		return ret;
+-	}
+-
+-	return 0;
+-}
+ static const struct hid_device_id redragon_devices[] = {
+ 	{HID_USB_DEVICE(USB_VENDOR_ID_JESS, USB_DEVICE_ID_REDRAGON_ASURA)},
+ 	{}
+@@ -77,8 +54,7 @@ MODULE_DEVICE_TABLE(hid, redragon_devices);
+ static struct hid_driver redragon_driver = {
+ 	.name = "redragon",
+ 	.id_table = redragon_devices,
+-	.report_fixup = redragon_report_fixup,
+-	.probe = redragon_probe
++	.report_fixup = redragon_report_fixup
+ };
+ 
+ module_hid_driver(redragon_driver);
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index b8f303dea305..32affd3fa8bd 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -453,8 +453,12 @@ static int acpi_gsb_i2c_read_bytes(struct i2c_client *client,
+ 		else
+ 			dev_err(&client->adapter->dev, "i2c read %d bytes from client@%#x starting at reg %#x failed, error: %d\n",
+ 				data_len, client->addr, cmd, ret);
+-	} else {
++	/* 2 transfers must have completed successfully */
++	} else if (ret == 2) {
+ 		memcpy(data, buffer, data_len);
++		ret = 0;
++	} else {
++		ret = -EIO;
+ 	}
+ 
+ 	kfree(buffer);
+@@ -595,8 +599,6 @@ i2c_acpi_space_handler(u32 function, acpi_physical_address command,
+ 		if (action == ACPI_READ) {
+ 			status = acpi_gsb_i2c_read_bytes(client, command,
+ 					gsb->data, info->access_length);
+-			if (status > 0)
+-				status = 0;
+ 		} else {
+ 			status = acpi_gsb_i2c_write_bytes(client, command,
+ 					gsb->data, info->access_length);
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index fbe7198a715a..bedd5fba33b0 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -198,7 +198,7 @@ int node_affinity_init(void)
+ 		while ((dev = pci_get_device(ids->vendor, ids->device, dev))) {
+ 			node = pcibus_to_node(dev->bus);
+ 			if (node < 0)
+-				node = numa_node_id();
++				goto out;
+ 
+ 			hfi1_per_node_cntr[node]++;
+ 		}
+@@ -206,6 +206,18 @@ int node_affinity_init(void)
+ 	}
+ 
+ 	return 0;
++
++out:
++	/*
++	 * Invalid PCI NUMA node information found, note it, and populate
++	 * our database 1:1.
++	 */
++	pr_err("HFI: Invalid PCI NUMA node. Performance may be affected\n");
++	pr_err("HFI: System BIOS may need to be upgraded\n");
++	for (node = 0; node < node_affinity.num_possible_nodes; node++)
++		hfi1_per_node_cntr[node] = 1;
++
++	return 0;
+ }
+ 
+ static void node_affinity_destroy(struct hfi1_affinity_node *entry)
+@@ -622,8 +634,14 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ 	int curr_cpu, possible, i, ret;
+ 	bool new_entry = false;
+ 
+-	if (node < 0)
+-		node = numa_node_id();
++	/*
++	 * If the BIOS does not have the NUMA node information set, select
++	 * NUMA 0 so we get consistent performance.
++	 */
++	if (node < 0) {
++		dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
++		node = 0;
++	}
+ 	dd->node = node;
+ 
+ 	local_mask = cpumask_of_node(dd->node);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index b9f2c871ff9a..e11c149da04d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -37,7 +37,7 @@
+ 
+ static int hns_roce_pd_alloc(struct hns_roce_dev *hr_dev, unsigned long *pdn)
+ {
+-	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn);
++	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn) ? -ENOMEM : 0;
+ }
+ 
+ static void hns_roce_pd_free(struct hns_roce_dev *hr_dev, unsigned long pdn)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index baaf906f7c2e..97664570c5ac 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -115,7 +115,10 @@ static int hns_roce_reserve_range_qp(struct hns_roce_dev *hr_dev, int cnt,
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 
+-	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align, base);
++	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align,
++					   base) ?
++		       -ENOMEM :
++		       0;
+ }
+ 
+ enum hns_roce_qp_state to_hns_roce_state(enum ib_qp_state state)
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 6365c1958264..3304aaaffe87 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -480,11 +480,19 @@ EXPORT_SYMBOL(input_inject_event);
+  */
+ void input_alloc_absinfo(struct input_dev *dev)
+ {
+-	if (!dev->absinfo)
+-		dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo),
+-					GFP_KERNEL);
++	if (dev->absinfo)
++		return;
+ 
+-	WARN(!dev->absinfo, "%s(): kcalloc() failed?\n", __func__);
++	dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo), GFP_KERNEL);
++	if (!dev->absinfo) {
++		dev_err(dev->dev.parent ?: &dev->dev,
++			"%s: unable to allocate memory\n", __func__);
++		/*
++		 * We will handle this allocation failure in
++		 * input_register_device() when we refuse to register input
++		 * device with ABS bits but without absinfo.
++		 */
++	}
+ }
+ EXPORT_SYMBOL(input_alloc_absinfo);
+ 
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index af4a8e7fcd27..3b05117118c3 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -550,7 +550,7 @@ static u32 *iopte_alloc(struct omap_iommu *obj, u32 *iopgd,
+ 
+ pte_ready:
+ 	iopte = iopte_offset(iopgd, da);
+-	*pt_dma = virt_to_phys(iopte);
++	*pt_dma = iopgd_page_paddr(iopgd);
+ 	dev_vdbg(obj->dev,
+ 		 "%s: da:%08x pgd:%p *pgd:%08x pte:%p *pte:%08x\n",
+ 		 __func__, da, iopgd, *iopgd, iopte, *iopte);
+@@ -738,7 +738,7 @@ static size_t iopgtable_clear_entry_core(struct omap_iommu *obj, u32 da)
+ 		}
+ 		bytes *= nent;
+ 		memset(iopte, 0, nent * sizeof(*iopte));
+-		pt_dma = virt_to_phys(iopte);
++		pt_dma = iopgd_page_paddr(iopgd);
+ 		flush_iopte_range(obj->dev, pt_dma, pt_offset, nent);
+ 
+ 		/*
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 054cd2c8e9c8..2b1724e8d307 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -521,10 +521,11 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id)
+ 	u32 int_status;
+ 	dma_addr_t iova;
+ 	irqreturn_t ret = IRQ_NONE;
+-	int i;
++	int i, err;
+ 
+-	if (WARN_ON(!pm_runtime_get_if_in_use(iommu->dev)))
+-		return 0;
++	err = pm_runtime_get_if_in_use(iommu->dev);
++	if (WARN_ON_ONCE(err <= 0))
++		return ret;
+ 
+ 	if (WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)))
+ 		goto out;
+@@ -620,11 +621,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
+ 	spin_lock_irqsave(&rk_domain->iommus_lock, flags);
+ 	list_for_each(pos, &rk_domain->iommus) {
+ 		struct rk_iommu *iommu;
++		int ret;
+ 
+ 		iommu = list_entry(pos, struct rk_iommu, node);
+ 
+ 		/* Only zap TLBs of IOMMUs that are powered on. */
+-		if (pm_runtime_get_if_in_use(iommu->dev)) {
++		ret = pm_runtime_get_if_in_use(iommu->dev);
++		if (WARN_ON_ONCE(ret < 0))
++			continue;
++		if (ret) {
+ 			WARN_ON(clk_bulk_enable(iommu->num_clocks,
+ 						iommu->clocks));
+ 			rk_iommu_zap_lines(iommu, iova, size);
+@@ -891,6 +896,7 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	struct rk_iommu *iommu;
+ 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	/* Allow 'virtual devices' (eg drm) to detach from domain */
+ 	iommu = rk_iommu_from_dev(dev);
+@@ -909,7 +915,9 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	list_del_init(&iommu->node);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (pm_runtime_get_if_in_use(iommu->dev)) {
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	WARN_ON_ONCE(ret < 0);
++	if (ret > 0) {
+ 		rk_iommu_disable(iommu);
+ 		pm_runtime_put(iommu->dev);
+ 	}
+@@ -946,7 +954,8 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
+ 	list_add_tail(&iommu->node, &rk_domain->iommus);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (!pm_runtime_get_if_in_use(iommu->dev))
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	if (!ret || WARN_ON_ONCE(ret < 0))
+ 		return 0;
+ 
+ 	ret = rk_iommu_enable(iommu);
+@@ -1152,17 +1161,6 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	if (iommu->num_mmu == 0)
+ 		return PTR_ERR(iommu->bases[0]);
+ 
+-	i = 0;
+-	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
+-		if (irq < 0)
+-			return irq;
+-
+-		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
+-				       IRQF_SHARED, dev_name(dev), iommu);
+-		if (err)
+-			return err;
+-	}
+-
+ 	iommu->reset_disabled = device_property_read_bool(dev,
+ 					"rockchip,disable-mmu-reset");
+ 
+@@ -1219,6 +1217,19 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
++	i = 0;
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
++		if (irq < 0)
++			return irq;
++
++		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
++				       IRQF_SHARED, dev_name(dev), iommu);
++		if (err) {
++			pm_runtime_disable(dev);
++			goto err_remove_sysfs;
++		}
++	}
++
+ 	return 0;
+ err_remove_sysfs:
+ 	iommu_device_sysfs_remove(&iommu->iommu);
+diff --git a/drivers/irqchip/irq-bcm7038-l1.c b/drivers/irqchip/irq-bcm7038-l1.c
+index faf734ff4cf3..0f6e30e9009d 100644
+--- a/drivers/irqchip/irq-bcm7038-l1.c
++++ b/drivers/irqchip/irq-bcm7038-l1.c
+@@ -217,6 +217,7 @@ static int bcm7038_l1_set_affinity(struct irq_data *d,
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_SMP
+ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ {
+ 	struct cpumask *mask = irq_data_get_affinity_mask(d);
+@@ -241,6 +242,7 @@ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ 	}
+ 	irq_set_affinity_locked(d, &new_affinity, false);
+ }
++#endif
+ 
+ static int __init bcm7038_l1_init_one(struct device_node *dn,
+ 				      unsigned int idx,
+@@ -293,7 +295,9 @@ static struct irq_chip bcm7038_l1_irq_chip = {
+ 	.irq_mask		= bcm7038_l1_mask,
+ 	.irq_unmask		= bcm7038_l1_unmask,
+ 	.irq_set_affinity	= bcm7038_l1_set_affinity,
++#ifdef CONFIG_SMP
+ 	.irq_cpu_offline	= bcm7038_l1_cpu_offline,
++#endif
+ };
+ 
+ static int bcm7038_l1_map(struct irq_domain *d, unsigned int virq,
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 3a7e8905a97e..880e48947576 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -602,17 +602,24 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
+ 					sizeof(struct stm32_exti_chip_data),
+ 					GFP_KERNEL);
+ 	if (!host_data->chips_data)
+-		return NULL;
++		goto free_host_data;
+ 
+ 	host_data->base = of_iomap(node, 0);
+ 	if (!host_data->base) {
+ 		pr_err("%pOF: Unable to map registers\n", node);
+-		return NULL;
++		goto free_chips_data;
+ 	}
+ 
+ 	stm32_host_data = host_data;
+ 
+ 	return host_data;
++
++free_chips_data:
++	kfree(host_data->chips_data);
++free_host_data:
++	kfree(host_data);
++
++	return NULL;
+ }
+ 
+ static struct
+@@ -664,10 +671,8 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
+ 	struct irq_domain *domain;
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	domain = irq_domain_add_linear(node, drv_data->bank_nr * IRQS_PER_BANK,
+ 				       &irq_exti_domain_ops, NULL);
+@@ -724,7 +729,6 @@ out_free_domain:
+ 	irq_domain_remove(domain);
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+@@ -751,10 +755,8 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 	}
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < drv_data->bank_nr; i++)
+ 		stm32_exti_chip_init(host_data, i, node);
+@@ -776,7 +778,6 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
+index 3c7547a3c371..d7b9cdafd1c3 100644
+--- a/drivers/md/dm-kcopyd.c
++++ b/drivers/md/dm-kcopyd.c
+@@ -487,6 +487,8 @@ static int run_complete_job(struct kcopyd_job *job)
+ 	if (atomic_dec_and_test(&kc->nr_jobs))
+ 		wake_up(&kc->destroyq);
+ 
++	cond_resched();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index 2a87b0d2f21f..a530972c5a7e 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -715,6 +715,7 @@ sm501_create_subdev(struct sm501_devdata *sm, char *name,
+ 	smdev->pdev.name = name;
+ 	smdev->pdev.id = sm->pdev_id;
+ 	smdev->pdev.dev.parent = sm->dev;
++	smdev->pdev.dev.coherent_dma_mask = 0xffffffff;
+ 
+ 	if (res_count) {
+ 		smdev->pdev.resource = (struct resource *)(smdev+1);
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index 94d7a865b135..7504f430c011 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -578,6 +578,16 @@ static int init_volumes(struct ubi_device *ubi,
+ 		vol->ubi = ubi;
+ 		reserved_pebs += vol->reserved_pebs;
+ 
++		/*
++		 * We use ubi->peb_count and not vol->reserved_pebs because
++		 * we want to keep the code simple. Otherwise we'd have to
++		 * resize/check the bitmap upon volume resize too.
++		 * Allocating a few bytes more does not hurt.
++		 */
++		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
++		if (err)
++			return err;
++
+ 		/*
+ 		 * In case of dynamic volume UBI knows nothing about how many
+ 		 * data is stored there. So assume the whole volume is used.
+@@ -620,16 +630,6 @@ static int init_volumes(struct ubi_device *ubi,
+ 			(long long)(vol->used_ebs - 1) * vol->usable_leb_size;
+ 		vol->used_bytes += av->last_data_size;
+ 		vol->last_eb_bytes = av->last_data_size;
+-
+-		/*
+-		 * We use ubi->peb_count and not vol->reserved_pebs because
+-		 * we want to keep the code simple. Otherwise we'd have to
+-		 * resize/check the bitmap upon volume resize too.
+-		 * Allocating a few bytes more does not hurt.
+-		 */
+-		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
+-		if (err)
+-			return err;
+ 	}
+ 
+ 	/* And add the layout volume */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4394c1162be4..4fdf3d33aa59 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5907,12 +5907,12 @@ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp)
+ 	return bp->hw_resc.max_cp_rings;
+ }
+ 
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max)
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp)
+ {
+-	bp->hw_resc.max_cp_rings = max;
++	return bp->hw_resc.max_cp_rings - bnxt_get_ulp_msix_num(bp);
+ }
+ 
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
++static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
+ {
+ 	struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
+ 
+@@ -8492,7 +8492,8 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+-	*max_cp = min_t(int, hw_resc->max_irqs, hw_resc->max_cp_rings);
++	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
++			hw_resc->max_irqs);
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 91575ef97c8c..ea1246a94b38 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1468,8 +1468,7 @@ int bnxt_hwrm_set_coal(struct bnxt *);
+ unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp);
+ void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max);
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp);
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+ int bnxt_reserve_rings(struct bnxt *bp);
+ void bnxt_tx_disable(struct bnxt *bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index a64910892c25..2c77004a022b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -451,7 +451,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1);
+ 
+-	vf_cp_rings = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	vf_cp_rings = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	vf_stat_ctx = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings * 2;
+@@ -544,7 +544,8 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
+ 	max_stat_ctxs = hw_resc->max_stat_ctxs;
+ 
+ 	/* Remaining rings are distributed equally amongs VF's for now */
+-	vf_cp_rings = (hw_resc->max_cp_rings - bp->cp_nr_rings) / num_vfs;
++	vf_cp_rings = (bnxt_get_max_func_cp_rings_for_en(bp) -
++		       bp->cp_nr_rings) / num_vfs;
+ 	vf_stat_ctx = (max_stat_ctxs - bp->num_stat_ctxs) / num_vfs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = (hw_resc->max_rx_rings - bp->rx_nr_rings * 2) /
+@@ -638,7 +639,7 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
+ 	 */
+ 	vfs_supported = *num_vfs;
+ 
+-	avail_cp = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	avail_cp = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	avail_stat = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	avail_cp = min_t(int, avail_cp, avail_stat);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 840f6e505f73..4209cfd73971 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -169,7 +169,6 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ 		edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+ 	}
+ 	bnxt_fill_msix_vecs(bp, ent);
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
+ 	edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	return avail_msix;
+ }
+@@ -178,7 +177,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ {
+ 	struct net_device *dev = edev->net;
+ 	struct bnxt *bp = netdev_priv(dev);
+-	int max_cp_rings, msix_requested;
+ 
+ 	ASSERT_RTNL();
+ 	if (ulp_id != BNXT_ROCE_ULP)
+@@ -187,9 +185,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ 	if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
+ 		return 0;
+ 
+-	max_cp_rings = bnxt_get_max_func_cp_rings(bp);
+-	msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
+ 	edev->ulp_tbl[ulp_id].msix_requested = 0;
+ 	edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	if (netif_running(dev)) {
+@@ -220,21 +215,6 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp)
+ 	return 0;
+ }
+ 
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id)
+-{
+-	ASSERT_RTNL();
+-	if (bnxt_ulp_registered(bp->edev, ulp_id)) {
+-		struct bnxt_en_dev *edev = bp->edev;
+-		unsigned int msix_req, max;
+-
+-		msix_req = edev->ulp_tbl[ulp_id].msix_requested;
+-		max = bnxt_get_max_func_cp_rings(bp);
+-		bnxt_set_max_func_cp_rings(bp, max - msix_req);
+-		max = bnxt_get_max_func_stat_ctxs(bp);
+-		bnxt_set_max_func_stat_ctxs(bp, max - 1);
+-	}
+-}
+-
+ static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id,
+ 			 struct bnxt_fw_msg *fw_msg)
+ {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+index df48ac71729f..d9bea37cd211 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+@@ -90,7 +90,6 @@ static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+ 
+ int bnxt_get_ulp_msix_num(struct bnxt *bp);
+ int bnxt_get_ulp_msix_base(struct bnxt *bp);
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id);
+ void bnxt_ulp_stop(struct bnxt *bp);
+ void bnxt_ulp_start(struct bnxt *bp);
+ void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index b773bc07edf7..14b49612aa86 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -186,6 +186,9 @@ struct bcmgenet_mib_counters {
+ #define UMAC_MAC1			0x010
+ #define UMAC_MAX_FRAME_LEN		0x014
+ 
++#define UMAC_MODE			0x44
++#define  MODE_LINK_STATUS		(1 << 5)
++
+ #define UMAC_EEE_CTRL			0x064
+ #define  EN_LPI_RX_PAUSE		(1 << 0)
+ #define  EN_LPI_TX_PFC			(1 << 1)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 5333274a283c..4241ae928d4a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -115,8 +115,14 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ static int bcmgenet_fixed_phy_link_update(struct net_device *dev,
+ 					  struct fixed_phy_status *status)
+ {
+-	if (dev && dev->phydev && status)
+-		status->link = dev->phydev->link;
++	struct bcmgenet_priv *priv;
++	u32 reg;
++
++	if (dev && dev->phydev && status) {
++		priv = netdev_priv(dev);
++		reg = bcmgenet_umac_readl(priv, UMAC_MODE);
++		status->link = !!(reg & MODE_LINK_STATUS);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index a6c911bb5ce2..515d96e32143 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -481,11 +481,6 @@ static int macb_mii_probe(struct net_device *dev)
+ 
+ 	if (np) {
+ 		if (of_phy_is_fixed_link(np)) {
+-			if (of_phy_register_fixed_link(np) < 0) {
+-				dev_err(&bp->pdev->dev,
+-					"broken fixed-link specification\n");
+-				return -ENODEV;
+-			}
+ 			bp->phy_node = of_node_get(np);
+ 		} else {
+ 			bp->phy_node = of_parse_phandle(np, "phy-handle", 0);
+@@ -568,7 +563,7 @@ static int macb_mii_init(struct macb *bp)
+ {
+ 	struct macb_platform_data *pdata;
+ 	struct device_node *np;
+-	int err;
++	int err = -ENXIO;
+ 
+ 	/* Enable management port */
+ 	macb_writel(bp, NCR, MACB_BIT(MPE));
+@@ -591,12 +586,23 @@ static int macb_mii_init(struct macb *bp)
+ 	dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
+ 
+ 	np = bp->pdev->dev.of_node;
+-	if (pdata)
+-		bp->mii_bus->phy_mask = pdata->phy_mask;
++	if (np && of_phy_is_fixed_link(np)) {
++		if (of_phy_register_fixed_link(np) < 0) {
++			dev_err(&bp->pdev->dev,
++				"broken fixed-link specification %pOF\n", np);
++			goto err_out_free_mdiobus;
++		}
++
++		err = mdiobus_register(bp->mii_bus);
++	} else {
++		if (pdata)
++			bp->mii_bus->phy_mask = pdata->phy_mask;
++
++		err = of_mdiobus_register(bp->mii_bus, np);
++	}
+ 
+-	err = of_mdiobus_register(bp->mii_bus, np);
+ 	if (err)
+-		goto err_out_free_mdiobus;
++		goto err_out_free_fixed_link;
+ 
+ 	err = macb_mii_probe(bp->dev);
+ 	if (err)
+@@ -606,6 +612,7 @@ static int macb_mii_init(struct macb *bp)
+ 
+ err_out_unregister_bus:
+ 	mdiobus_unregister(bp->mii_bus);
++err_out_free_fixed_link:
+ 	if (np && of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ err_out_free_mdiobus:
+@@ -1957,14 +1964,17 @@ static void macb_reset_hw(struct macb *bp)
+ {
+ 	struct macb_queue *queue;
+ 	unsigned int q;
++	u32 ctrl = macb_readl(bp, NCR);
+ 
+ 	/* Disable RX and TX (XXX: Should we halt the transmission
+ 	 * more gracefully?)
+ 	 */
+-	macb_writel(bp, NCR, 0);
++	ctrl &= ~(MACB_BIT(RE) | MACB_BIT(TE));
+ 
+ 	/* Clear the stats registers (XXX: Update stats first?) */
+-	macb_writel(bp, NCR, MACB_BIT(CLRSTAT));
++	ctrl |= MACB_BIT(CLRSTAT);
++
++	macb_writel(bp, NCR, ctrl);
+ 
+ 	/* Clear all status flags */
+ 	macb_writel(bp, TSR, -1);
+@@ -2152,7 +2162,7 @@ static void macb_init_hw(struct macb *bp)
+ 	}
+ 
+ 	/* Enable TX and RX */
+-	macb_writel(bp, NCR, MACB_BIT(RE) | MACB_BIT(TE) | MACB_BIT(MPE));
++	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(RE) | MACB_BIT(TE));
+ }
+ 
+ /* The hash address register is 64 bits long and takes up two
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index d318d35e598f..6fd7ea8074b0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3911,7 +3911,7 @@ static bool hclge_is_all_function_id_zero(struct hclge_desc *desc)
+ #define HCLGE_FUNC_NUMBER_PER_DESC 6
+ 	int i, j;
+ 
+-	for (i = 0; i < HCLGE_DESC_NUMBER; i++)
++	for (i = 1; i < HCLGE_DESC_NUMBER; i++)
+ 		for (j = 0; j < HCLGE_FUNC_NUMBER_PER_DESC; j++)
+ 			if (desc[i].data[j])
+ 				return false;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 9f7932e423b5..6315e8ad8467 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -208,6 +208,8 @@ int hclge_mac_start_phy(struct hclge_dev *hdev)
+ 	if (!phydev)
+ 		return 0;
+ 
++	phydev->supported &= ~SUPPORTED_FIBRE;
++
+ 	ret = phy_connect_direct(netdev, phydev,
+ 				 hclge_mac_adjust_link,
+ 				 PHY_INTERFACE_MODE_SGMII);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index 86478a6b99c5..c8c315eb5128 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -139,14 +139,15 @@ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      struct mlx5_wq_ctrl *wq_ctrl)
+ {
+ 	u32 sq_strides_offset;
++	u32 rq_pg_remainder;
+ 	int err;
+ 
+ 	mlx5_fill_fbc(MLX5_GET(qpc, qpc, log_rq_stride) + 4,
+ 		      MLX5_GET(qpc, qpc, log_rq_size),
+ 		      &wq->rq.fbc);
+ 
+-	sq_strides_offset =
+-		((wq->rq.fbc.frag_sz_m1 + 1) % PAGE_SIZE) / MLX5_SEND_WQE_BB;
++	rq_pg_remainder   = mlx5_wq_cyc_get_byte_size(&wq->rq) % PAGE_SIZE;
++	sq_strides_offset = rq_pg_remainder / MLX5_SEND_WQE_BB;
+ 
+ 	mlx5_fill_fbc_offset(ilog2(MLX5_SEND_WQE_BB),
+ 			     MLX5_GET(qpc, qpc, log_sq_size),
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index 4a519d8edec8..3500c79e29cd 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -433,6 +433,8 @@ mlxsw_sp_netdevice_ipip_ul_event(struct mlxsw_sp *mlxsw_sp,
+ void
+ mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan);
+ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif);
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev);
+ 
+ /* spectrum_kvdl.c */
+ int mlxsw_sp_kvdl_init(struct mlxsw_sp *mlxsw_sp);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 77b2adb29341..cb43d17097fa 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -6228,6 +6228,17 @@ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif)
+ 	mlxsw_sp_vr_put(mlxsw_sp, vr);
+ }
+ 
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev)
++{
++	struct mlxsw_sp_rif *rif;
++
++	rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev);
++	if (!rif)
++		return;
++	mlxsw_sp_rif_destroy(rif);
++}
++
+ static void
+ mlxsw_sp_rif_subport_params_init(struct mlxsw_sp_rif_params *params,
+ 				 struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index eea5666a86b2..6cb43dda8232 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -160,6 +160,24 @@ bool mlxsw_sp_bridge_device_is_offloaded(const struct mlxsw_sp *mlxsw_sp,
+ 	return !!mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev);
+ }
+ 
++static int mlxsw_sp_bridge_device_upper_rif_destroy(struct net_device *dev,
++						    void *data)
++{
++	struct mlxsw_sp *mlxsw_sp = data;
++
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	return 0;
++}
++
++static void mlxsw_sp_bridge_device_rifs_destroy(struct mlxsw_sp *mlxsw_sp,
++						struct net_device *dev)
++{
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	netdev_walk_all_upper_dev_rcu(dev,
++				      mlxsw_sp_bridge_device_upper_rif_destroy,
++				      mlxsw_sp);
++}
++
+ static struct mlxsw_sp_bridge_device *
+ mlxsw_sp_bridge_device_create(struct mlxsw_sp_bridge *bridge,
+ 			      struct net_device *br_dev)
+@@ -198,6 +216,8 @@ static void
+ mlxsw_sp_bridge_device_destroy(struct mlxsw_sp_bridge *bridge,
+ 			       struct mlxsw_sp_bridge_device *bridge_device)
+ {
++	mlxsw_sp_bridge_device_rifs_destroy(bridge->mlxsw_sp,
++					    bridge_device->dev);
+ 	list_del(&bridge_device->list);
+ 	if (bridge_device->vlan_enabled)
+ 		bridge->vlan_enabled_exists = false;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index d4c27f849f9b..c2a9e64bc57b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -227,29 +227,16 @@ done:
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ }
+ 
+-/**
+- * nfp_net_reconfig() - Reconfigure the firmware
+- * @nn:      NFP Net device to reconfigure
+- * @update:  The value for the update field in the BAR config
+- *
+- * Write the update word to the BAR and ping the reconfig queue.  The
+- * poll until the firmware has acknowledged the update by zeroing the
+- * update word.
+- *
+- * Return: Negative errno on error, 0 on success
+- */
+-int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++static void nfp_net_reconfig_sync_enter(struct nfp_net *nn)
+ {
+ 	bool cancelled_timer = false;
+ 	u32 pre_posted_requests;
+-	int ret;
+ 
+ 	spin_lock_bh(&nn->reconfig_lock);
+ 
+ 	nn->reconfig_sync_present = true;
+ 
+ 	if (nn->reconfig_timer_active) {
+-		del_timer(&nn->reconfig_timer);
+ 		nn->reconfig_timer_active = false;
+ 		cancelled_timer = true;
+ 	}
+@@ -258,14 +245,43 @@ int nfp_net_reconfig(struct nfp_net *nn, u32 update)
+ 
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ 
+-	if (cancelled_timer)
++	if (cancelled_timer) {
++		del_timer_sync(&nn->reconfig_timer);
+ 		nfp_net_reconfig_wait(nn, nn->reconfig_timer.expires);
++	}
+ 
+ 	/* Run the posted reconfigs which were issued before we started */
+ 	if (pre_posted_requests) {
+ 		nfp_net_reconfig_start(nn, pre_posted_requests);
+ 		nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+ 	}
++}
++
++static void nfp_net_reconfig_wait_posted(struct nfp_net *nn)
++{
++	nfp_net_reconfig_sync_enter(nn);
++
++	spin_lock_bh(&nn->reconfig_lock);
++	nn->reconfig_sync_present = false;
++	spin_unlock_bh(&nn->reconfig_lock);
++}
++
++/**
++ * nfp_net_reconfig() - Reconfigure the firmware
++ * @nn:      NFP Net device to reconfigure
++ * @update:  The value for the update field in the BAR config
++ *
++ * Write the update word to the BAR and ping the reconfig queue.  The
++ * poll until the firmware has acknowledged the update by zeroing the
++ * update word.
++ *
++ * Return: Negative errno on error, 0 on success
++ */
++int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++{
++	int ret;
++
++	nfp_net_reconfig_sync_enter(nn);
+ 
+ 	nfp_net_reconfig_start(nn, update);
+ 	ret = nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+@@ -3609,6 +3625,7 @@ struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev,
+  */
+ void nfp_net_free(struct nfp_net *nn)
+ {
++	WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted);
+ 	if (nn->dp.netdev)
+ 		free_netdev(nn->dp.netdev);
+ 	else
+@@ -3893,4 +3910,5 @@ void nfp_net_clean(struct nfp_net *nn)
+ 		return;
+ 
+ 	unregister_netdev(nn->dp.netdev);
++	nfp_net_reconfig_wait_posted(nn);
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+index 353f1c129af1..059ba9429e51 100644
+--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
++++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+@@ -2384,26 +2384,20 @@ static int qlge_update_hw_vlan_features(struct net_device *ndev,
+ 	return status;
+ }
+ 
+-static netdev_features_t qlge_fix_features(struct net_device *ndev,
+-	netdev_features_t features)
+-{
+-	int err;
+-
+-	/* Update the behavior of vlan accel in the adapter */
+-	err = qlge_update_hw_vlan_features(ndev, features);
+-	if (err)
+-		return err;
+-
+-	return features;
+-}
+-
+ static int qlge_set_features(struct net_device *ndev,
+ 	netdev_features_t features)
+ {
+ 	netdev_features_t changed = ndev->features ^ features;
++	int err;
++
++	if (changed & NETIF_F_HW_VLAN_CTAG_RX) {
++		/* Update the behavior of vlan accel in the adapter */
++		err = qlge_update_hw_vlan_features(ndev, features);
++		if (err)
++			return err;
+ 
+-	if (changed & NETIF_F_HW_VLAN_CTAG_RX)
+ 		qlge_vlan_mode(ndev, features);
++	}
+ 
+ 	return 0;
+ }
+@@ -4719,7 +4713,6 @@ static const struct net_device_ops qlge_netdev_ops = {
+ 	.ndo_set_mac_address	= qlge_set_mac_address,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_tx_timeout		= qlge_tx_timeout,
+-	.ndo_fix_features	= qlge_fix_features,
+ 	.ndo_set_features	= qlge_set_features,
+ 	.ndo_vlan_rx_add_vid	= qlge_vlan_rx_add_vid,
+ 	.ndo_vlan_rx_kill_vid	= qlge_vlan_rx_kill_vid,
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 9ceb34bac3a9..e5eb361b973c 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -303,6 +303,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8161), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8167), 0, 0, RTL_CFG_0 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8168), 0, 0, RTL_CFG_1 },
++	{ PCI_DEVICE(PCI_VENDOR_ID_NCUBE,	0x8168), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8169), 0, 0, RTL_CFG_0 },
+ 	{ PCI_VENDOR_ID_DLINK,			0x4300,
+ 		PCI_VENDOR_ID_DLINK, 0x4b10,		 0, 0, RTL_CFG_1 },
+@@ -5038,7 +5039,7 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 	rtl_hw_reset(tp);
+ }
+ 
+-static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
++static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+ 	/* Set DMA burst size and Interframe Gap Time */
+ 	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+@@ -5149,12 +5150,14 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_rx_tx_config_registers(tp);
++	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
++	rtl_init_rxcfg(tp);
++
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+ 	RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 76649adf8fb0..c0a855b7ab3b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -112,7 +112,6 @@ struct stmmac_priv {
+ 	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+-	bool tx_timer_armed;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ef6a8d39db2f..c579d98b9666 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3126,16 +3126,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * element in case of no SG.
+ 	 */
+ 	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames) &&
+-	    !priv->tx_timer_armed) {
++	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+ 		mod_timer(&priv->txtimer,
+ 			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-		priv->tx_timer_armed = true;
+ 	} else {
+ 		priv->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
+-		priv->tx_timer_armed = false;
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index dd1d6e115145..6d74cde68163 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -29,6 +29,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/inetdevice.h>
+ #include <linux/etherdevice.h>
++#include <linux/pci.h>
+ #include <linux/skbuff.h>
+ #include <linux/if_vlan.h>
+ #include <linux/in.h>
+@@ -1939,12 +1940,16 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ {
+ 	struct net_device *ndev;
+ 	struct net_device_context *net_device_ctx;
++	struct device *pdev = vf_netdev->dev.parent;
+ 	struct netvsc_device *netvsc_dev;
+ 	int ret;
+ 
+ 	if (vf_netdev->addr_len != ETH_ALEN)
+ 		return NOTIFY_DONE;
+ 
++	if (!pdev || !dev_is_pci(pdev) || dev_is_pf(pdev))
++		return NOTIFY_DONE;
++
+ 	/*
+ 	 * We will use the MAC address to locate the synthetic interface to
+ 	 * associate with the VF interface. If we don't find a matching
+@@ -2101,6 +2106,16 @@ static int netvsc_probe(struct hv_device *dev,
+ 
+ 	memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);
+ 
++	/* We must get rtnl lock before scheduling nvdev->subchan_work,
++	 * otherwise netvsc_subchan_work() can get rtnl lock first and wait
++	 * all subchannels to show up, but that may not happen because
++	 * netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer()
++	 * -> ... -> device_add() -> ... -> __device_attach() can't get
++	 * the device lock, so all the subchannels can't be processed --
++	 * finally netvsc_subchan_work() hangs for ever.
++	 */
++	rtnl_lock();
++
+ 	if (nvdev->num_chn > 1)
+ 		schedule_work(&nvdev->subchan_work);
+ 
+@@ -2119,7 +2134,6 @@ static int netvsc_probe(struct hv_device *dev,
+ 	else
+ 		net->max_mtu = ETH_DATA_LEN;
+ 
+-	rtnl_lock();
+ 	ret = register_netdevice(net);
+ 	if (ret != 0) {
+ 		pr_err("Unable to register netdev.\n");
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 2a58607a6aea..1b07bb5e110d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5214,8 +5214,8 @@ static int rtl8152_probe(struct usb_interface *intf,
+ 		netdev->hw_features &= ~NETIF_F_RXCSUM;
+ 	}
+ 
+-	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 &&
+-	    udev->serial && !strcmp(udev->serial, "000001000000")) {
++	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 && udev->serial &&
++	    (!strcmp(udev->serial, "000001000000") || !strcmp(udev->serial, "000002000000"))) {
+ 		dev_info(&udev->dev, "Dell TB16 Dock, disable RX aggregation");
+ 		set_bit(DELL_TB_RX_AGG_BUG, &tp->flags);
+ 	}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index b6122aad639e..7569f9af8d47 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -6926,15 +6926,15 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
+ 	cfg->d11inf.io_type = (u8)io_type;
+ 	brcmu_d11_attach(&cfg->d11inf);
+ 
+-	err = brcmf_setup_wiphy(wiphy, ifp);
+-	if (err < 0)
+-		goto priv_out;
+-
+ 	/* regulatory notifer below needs access to cfg so
+ 	 * assign it now.
+ 	 */
+ 	drvr->config = cfg;
+ 
++	err = brcmf_setup_wiphy(wiphy, ifp);
++	if (err < 0)
++		goto priv_out;
++
+ 	brcmf_dbg(INFO, "Registering custom regulatory\n");
+ 	wiphy->reg_notifier = brcmf_cfg80211_reg_notifier;
+ 	wiphy->regulatory_flags |= REGULATORY_CUSTOM_REG;
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index 23e270839e6a..f00df2384985 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1219,7 +1219,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
+ 		pcie->realio.start = PCIBIOS_MIN_IO;
+ 		pcie->realio.end = min_t(resource_size_t,
+ 					 IO_SPACE_LIMIT,
+-					 resource_size(&pcie->io));
++					 resource_size(&pcie->io) - 1);
+ 	} else
+ 		pcie->realio = pcie->io;
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index b2857865c0aa..a1a243ee36bb 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1725,7 +1725,7 @@ int pci_setup_device(struct pci_dev *dev)
+ static void pci_configure_mps(struct pci_dev *dev)
+ {
+ 	struct pci_dev *bridge = pci_upstream_bridge(dev);
+-	int mps, p_mps, rc;
++	int mps, mpss, p_mps, rc;
+ 
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+@@ -1753,6 +1753,14 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (pcie_bus_config != PCIE_BUS_DEFAULT)
+ 		return;
+ 
++	mpss = 128 << dev->pcie_mpss;
++	if (mpss < p_mps && pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT) {
++		pcie_set_mps(bridge, mpss);
++		pci_info(dev, "Upstream bridge's Max Payload Size set to %d (was %d, max %d)\n",
++			 mpss, p_mps, 128 << bridge->pcie_mpss);
++		p_mps = pcie_get_mps(bridge);
++	}
++
+ 	rc = pcie_set_mps(dev, p_mps);
+ 	if (rc) {
+ 		pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
+@@ -1761,7 +1769,7 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	}
+ 
+ 	pci_info(dev, "Max Payload Size set to %d (was %d, max %d)\n",
+-		 p_mps, mps, 128 << dev->pcie_mpss);
++		 p_mps, mps, mpss);
+ }
+ 
+ static struct hpp_type0 pci_default_type0 = {
+diff --git a/drivers/pinctrl/pinctrl-axp209.c b/drivers/pinctrl/pinctrl-axp209.c
+index a52779f33ad4..afd0b533c40a 100644
+--- a/drivers/pinctrl/pinctrl-axp209.c
++++ b/drivers/pinctrl/pinctrl-axp209.c
+@@ -316,7 +316,7 @@ static const struct pinctrl_ops axp20x_pctrl_ops = {
+ 	.get_group_pins		= axp20x_group_pins,
+ };
+ 
+-static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
++static int axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 					  unsigned int mask_len,
+ 					  struct axp20x_pinctrl_function *func,
+ 					  const struct pinctrl_pin_desc *pins)
+@@ -331,18 +331,22 @@ static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 		func->groups = devm_kcalloc(dev,
+ 					    ngroups, sizeof(const char *),
+ 					    GFP_KERNEL);
++		if (!func->groups)
++			return -ENOMEM;
+ 		group = func->groups;
+ 		for_each_set_bit(bit, &mask_cpy, mask_len) {
+ 			*group = pins[bit].name;
+ 			group++;
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+-static void axp20x_build_funcs_groups(struct platform_device *pdev)
++static int axp20x_build_funcs_groups(struct platform_device *pdev)
+ {
+ 	struct axp20x_pctl *pctl = platform_get_drvdata(pdev);
+-	int i, pin, npins = pctl->desc->npins;
++	int i, ret, pin, npins = pctl->desc->npins;
+ 
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].name = "gpio_out";
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].muxval = AXP20X_MUX_GPIO_OUT;
+@@ -366,13 +370,19 @@ static void axp20x_build_funcs_groups(struct platform_device *pdev)
+ 			pctl->funcs[i].groups[pin] = pctl->desc->pins[pin].name;
+ 	}
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_LDO],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_ADC],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
++
++	return 0;
+ }
+ 
+ static const struct of_device_id axp20x_pctl_match[] = {
+@@ -424,7 +434,11 @@ static int axp20x_pctl_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, pctl);
+ 
+-	axp20x_build_funcs_groups(pdev);
++	ret = axp20x_build_funcs_groups(pdev);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to build groups\n");
++		return ret;
++	}
+ 
+ 	pctrl_desc = devm_kzalloc(&pdev->dev, sizeof(*pctrl_desc), GFP_KERNEL);
+ 	if (!pctrl_desc)
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 136ff2b4cce5..db2af09067db 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -496,6 +496,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
++	{ KE_KEY, 0xFA, { KEY_PROG2 } },           /* Lid flip action */
+ 	{ KE_END, 0},
+ };
+ 
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index b5b890127479..b7dfe06261f1 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+ #include <linux/interrupt.h>
++#include <linux/io.h>
+ #include <linux/platform_device.h>
+ #include <asm/intel_punit_ipc.h>
+ 
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 822860b4801a..c1ed641b3e26 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -458,7 +458,6 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 				   struct meson_pwm_channel *channels)
+ {
+ 	struct device *dev = meson->chip.dev;
+-	struct device_node *np = dev->of_node;
+ 	struct clk_init_data init;
+ 	unsigned int i;
+ 	char name[255];
+@@ -467,7 +466,7 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 	for (i = 0; i < meson->chip.npwm; i++) {
+ 		struct meson_pwm_channel *channel = &channels[i];
+ 
+-		snprintf(name, sizeof(name), "%pOF#mux%u", np, i);
++		snprintf(name, sizeof(name), "%s#mux%u", dev_name(dev), i);
+ 
+ 		init.name = name;
+ 		init.ops = &clk_mux_ops;
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index bbf95b78ef5d..43e3398c9268 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1780,6 +1780,9 @@ static void dasd_eckd_uncheck_device(struct dasd_device *device)
+ 	struct dasd_eckd_private *private = device->private;
+ 	int i;
+ 
++	if (!private)
++		return;
++
+ 	dasd_alias_disconnect_device_from_lcu(device);
+ 	private->ned = NULL;
+ 	private->sneq = NULL;
+@@ -2035,8 +2038,11 @@ static int dasd_eckd_basic_to_ready(struct dasd_device *device)
+ 
+ static int dasd_eckd_online_to_ready(struct dasd_device *device)
+ {
+-	cancel_work_sync(&device->reload_device);
+-	cancel_work_sync(&device->kick_validate);
++	if (cancel_work_sync(&device->reload_device))
++		dasd_put_device(device);
++	if (cancel_work_sync(&device->kick_validate))
++		dasd_put_device(device);
++
+ 	return 0;
+ };
+ 
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index 80e5b283fd81..1391e5f35918 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -1030,8 +1030,10 @@ static int __init aic94xx_init(void)
+ 
+ 	aic94xx_transport_template =
+ 		sas_domain_attach_transport(&aic94xx_transport_functions);
+-	if (!aic94xx_transport_template)
++	if (!aic94xx_transport_template) {
++		err = -ENOMEM;
+ 		goto out_destroy_caches;
++	}
+ 
+ 	err = pci_register_driver(&aic94xx_pci_driver);
+ 	if (err)
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index e40a2c0a9543..d3da39a9f567 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -5446,11 +5446,11 @@ static int ni_E_init(struct comedi_device *dev,
+ 	/* Digital I/O (PFI) subdevice */
+ 	s = &dev->subdevices[NI_PFI_DIO_SUBDEV];
+ 	s->type		= COMEDI_SUBD_DIO;
+-	s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 	s->maxdata	= 1;
+ 	if (devpriv->is_m_series) {
+ 		s->n_chan	= 16;
+ 		s->insn_bits	= ni_pfi_insn_bits;
++		s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 
+ 		ni_writew(dev, s->state, NI_M_PFI_DO_REG);
+ 		for (i = 0; i < NUM_PFI_OUTPUT_SELECT_REGS; ++i) {
+@@ -5459,6 +5459,7 @@ static int ni_E_init(struct comedi_device *dev,
+ 		}
+ 	} else {
+ 		s->n_chan	= 10;
++		s->subdev_flags	= SDF_INTERNAL;
+ 	}
+ 	s->insn_config	= ni_pfi_insn_config;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index ed3114556fda..560ed8711706 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -951,7 +951,7 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d,
+ 	list_for_each_entry_safe(node, n, &d->pending_list, node) {
+ 		struct vhost_iotlb_msg *vq_msg = &node->msg.iotlb;
+ 		if (msg->iova <= vq_msg->iova &&
+-		    msg->iova + msg->size - 1 > vq_msg->iova &&
++		    msg->iova + msg->size - 1 >= vq_msg->iova &&
+ 		    vq_msg->type == VHOST_IOTLB_MISS) {
+ 			vhost_poll_queue(&node->vq->poll);
+ 			list_del(&node->node);
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index 2780886e8ba3..de062fb201bc 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -122,6 +122,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	struct virtqueue *vq;
+ 	u16 num;
+ 	int err;
++	u64 q_pfn;
+ 
+ 	/* Select the queue we're interested in */
+ 	iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
+@@ -141,9 +142,17 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	if (!vq)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	q_pfn = virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT;
++	if (q_pfn >> 32) {
++		dev_err(&vp_dev->pci_dev->dev,
++			"platform bug: legacy virtio-mmio must not be used with RAM above 0x%llxGB\n",
++			0x1ULL << (32 + PAGE_SHIFT - 30));
++		err = -E2BIG;
++		goto out_del_vq;
++	}
++
+ 	/* activate the queue */
+-	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+-		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++	iowrite32(q_pfn, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
+ 
+ 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
+ 
+@@ -160,6 +169,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 
+ out_deactivate:
+ 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++out_del_vq:
+ 	vring_del_virtqueue(vq);
+ 	return ERR_PTR(err);
+ }
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index b437fccd4e62..294f35ce9e46 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -81,7 +81,7 @@ static void watch_target(struct xenbus_watch *watch,
+ 			static_max = new_target;
+ 		else
+ 			static_max >>= PAGE_SHIFT - 10;
+-		target_diff = xen_pv_domain() ? 0
++		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+ 
+diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
+index a3fdb4fe967d..daf45472bef9 100644
+--- a/fs/btrfs/check-integrity.c
++++ b/fs/btrfs/check-integrity.c
+@@ -1539,7 +1539,12 @@ static int btrfsic_map_block(struct btrfsic_state *state, u64 bytenr, u32 len,
+ 	}
+ 
+ 	device = multi->stripes[0].dev;
+-	block_ctx_out->dev = btrfsic_dev_state_lookup(device->bdev->bd_dev);
++	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state) ||
++	    !device->bdev || !device->name)
++		block_ctx_out->dev = NULL;
++	else
++		block_ctx_out->dev = btrfsic_dev_state_lookup(
++							device->bdev->bd_dev);
+ 	block_ctx_out->dev_bytenr = multi->stripes[0].physical;
+ 	block_ctx_out->start = bytenr;
+ 	block_ctx_out->len = len;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index e2ba0419297a..d20b244623f2 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -676,6 +676,12 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ 
+ 	btrfs_rm_dev_replace_unblocked(fs_info);
+ 
++	/*
++	 * Increment dev_stats_ccnt so that btrfs_run_dev_stats() will
++	 * update on-disk dev stats value during commit transaction
++	 */
++	atomic_inc(&tgt_device->dev_stats_ccnt);
++
+ 	/*
+ 	 * this is again a consistent state where no dev_replace procedure
+ 	 * is running, the target device is part of the filesystem, the
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8aab7a6c1e58..53cac20650d8 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -10687,7 +10687,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		/* Don't want to race with allocators so take the groups_sem */
+ 		down_write(&space_info->groups_sem);
+ 		spin_lock(&block_group->lock);
+-		if (block_group->reserved ||
++		if (block_group->reserved || block_group->pinned ||
+ 		    btrfs_block_group_used(&block_group->item) ||
+ 		    block_group->ro ||
+ 		    list_is_singular(&block_group->list)) {
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 879b76fa881a..be94c65bb4d2 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,18 +1321,19 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	spin_lock(&rc->reloc_root_tree.lock);
+-	rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+-			      root->node->start);
+-	if (rb_node) {
+-		node = rb_entry(rb_node, struct mapping_node, rb_node);
+-		rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++	if (rc) {
++		spin_lock(&rc->reloc_root_tree.lock);
++		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
++				      root->node->start);
++		if (rb_node) {
++			node = rb_entry(rb_node, struct mapping_node, rb_node);
++			rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++		}
++		spin_unlock(&rc->reloc_root_tree.lock);
++		if (!node)
++			return;
++		BUG_ON((struct btrfs_root *)node->data != root);
+ 	}
+-	spin_unlock(&rc->reloc_root_tree.lock);
+-
+-	if (!node)
+-		return;
+-	BUG_ON((struct btrfs_root *)node->data != root);
+ 
+ 	spin_lock(&fs_info->trans_lock);
+ 	list_del_init(&root->root_list);
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index bddfc28b27c0..9b25f29d0e73 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -892,6 +892,8 @@ static int btrfs_parse_early_options(const char *options, fmode_t flags,
+ 	char *device_name, *opts, *orig, *p;
+ 	int error = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	if (!options)
+ 		return 0;
+ 
+@@ -1526,12 +1528,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 	if (!(flags & SB_RDONLY))
+ 		mode |= FMODE_WRITE;
+ 
+-	error = btrfs_parse_early_options(data, mode, fs_type,
+-					  &fs_devices);
+-	if (error) {
+-		return ERR_PTR(error);
+-	}
+-
+ 	security_init_mnt_opts(&new_sec_opts);
+ 	if (data) {
+ 		error = parse_security_options(data, &new_sec_opts);
+@@ -1539,10 +1535,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 			return ERR_PTR(error);
+ 	}
+ 
+-	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
+-	if (error)
+-		goto error_sec_opts;
+-
+ 	/*
+ 	 * Setup a dummy root and fs_info for test/set super.  This is because
+ 	 * we don't actually fill this stuff out until open_ctree, but we need
+@@ -1555,8 +1547,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_sec_opts;
+ 	}
+ 
+-	fs_info->fs_devices = fs_devices;
+-
+ 	fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	security_init_mnt_opts(&fs_info->security_opts);
+@@ -1565,7 +1555,23 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_fs_info;
+ 	}
+ 
++	mutex_lock(&uuid_mutex);
++	error = btrfs_parse_early_options(data, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	fs_info->fs_devices = fs_devices;
++
+ 	error = btrfs_open_devices(fs_devices, mode, fs_type);
++	mutex_unlock(&uuid_mutex);
+ 	if (error)
+ 		goto error_fs_info;
+ 
+@@ -2234,15 +2240,21 @@ static long btrfs_control_ioctl(struct file *file, unsigned int cmd,
+ 
+ 	switch (cmd) {
+ 	case BTRFS_IOC_SCAN_DEV:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_DEVICES_READY:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
+-		if (ret)
++		if (ret) {
++			mutex_unlock(&uuid_mutex);
+ 			break;
++		}
+ 		ret = !(fs_devices->num_devices == fs_devices->total_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_GET_SUPPORTED_FEATURES:
+ 		ret = btrfs_ioctl_get_supported_features((void __user*)arg);
+@@ -2368,7 +2380,7 @@ static __cold void btrfs_interface_exit(void)
+ 
+ static void __init btrfs_print_mod_info(void)
+ {
+-	pr_info("Btrfs loaded, crc32c=%s"
++	static const char options[] = ""
+ #ifdef CONFIG_BTRFS_DEBUG
+ 			", debug=on"
+ #endif
+@@ -2381,8 +2393,8 @@ static void __init btrfs_print_mod_info(void)
+ #ifdef CONFIG_BTRFS_FS_REF_VERIFY
+ 			", ref-verify=on"
+ #endif
+-			"\n",
+-			crc32c_impl());
++			;
++	pr_info("Btrfs loaded, crc32c=%s%s\n", crc32c_impl(), options);
+ }
+ 
+ static int __init init_btrfs_fs(void)
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 8d40e7dd8c30..d014af352ce0 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -396,9 +396,22 @@ static int check_leaf(struct btrfs_fs_info *fs_info, struct extent_buffer *leaf,
+ 	 * skip this check for relocation trees.
+ 	 */
+ 	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
++		u64 owner = btrfs_header_owner(leaf);
+ 		struct btrfs_root *check_root;
+ 
+-		key.objectid = btrfs_header_owner(leaf);
++		/* These trees must never be empty */
++		if (owner == BTRFS_ROOT_TREE_OBJECTID ||
++		    owner == BTRFS_CHUNK_TREE_OBJECTID ||
++		    owner == BTRFS_EXTENT_TREE_OBJECTID ||
++		    owner == BTRFS_DEV_TREE_OBJECTID ||
++		    owner == BTRFS_FS_TREE_OBJECTID ||
++		    owner == BTRFS_DATA_RELOC_TREE_OBJECTID) {
++			generic_err(fs_info, leaf, 0,
++			"invalid root, root %llu must never be empty",
++				    owner);
++			return -EUCLEAN;
++		}
++		key.objectid = owner;
+ 		key.type = BTRFS_ROOT_ITEM_KEY;
+ 		key.offset = (u64)-1;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 1da162928d1a..5304b8d6ceb8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -634,44 +634,48 @@ static void pending_bios_fn(struct btrfs_work *work)
+  *		devices.
+  */
+ static void btrfs_free_stale_devices(const char *path,
+-				     struct btrfs_device *skip_dev)
++				     struct btrfs_device *skip_device)
+ {
+-	struct btrfs_fs_devices *fs_devs, *tmp_fs_devs;
+-	struct btrfs_device *dev, *tmp_dev;
++	struct btrfs_fs_devices *fs_devices, *tmp_fs_devices;
++	struct btrfs_device *device, *tmp_device;
+ 
+-	list_for_each_entry_safe(fs_devs, tmp_fs_devs, &fs_uuids, fs_list) {
+-
+-		if (fs_devs->opened)
++	list_for_each_entry_safe(fs_devices, tmp_fs_devices, &fs_uuids, fs_list) {
++		mutex_lock(&fs_devices->device_list_mutex);
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			continue;
++		}
+ 
+-		list_for_each_entry_safe(dev, tmp_dev,
+-					 &fs_devs->devices, dev_list) {
++		list_for_each_entry_safe(device, tmp_device,
++					 &fs_devices->devices, dev_list) {
+ 			int not_found = 0;
+ 
+-			if (skip_dev && skip_dev == dev)
++			if (skip_device && skip_device == device)
+ 				continue;
+-			if (path && !dev->name)
++			if (path && !device->name)
+ 				continue;
+ 
+ 			rcu_read_lock();
+ 			if (path)
+-				not_found = strcmp(rcu_str_deref(dev->name),
++				not_found = strcmp(rcu_str_deref(device->name),
+ 						   path);
+ 			rcu_read_unlock();
+ 			if (not_found)
+ 				continue;
+ 
+ 			/* delete the stale device */
+-			if (fs_devs->num_devices == 1) {
+-				btrfs_sysfs_remove_fsid(fs_devs);
+-				list_del(&fs_devs->fs_list);
+-				free_fs_devices(fs_devs);
++			fs_devices->num_devices--;
++			list_del(&device->dev_list);
++			btrfs_free_device(device);
++
++			if (fs_devices->num_devices == 0)
+ 				break;
+-			} else {
+-				fs_devs->num_devices--;
+-				list_del(&dev->dev_list);
+-				btrfs_free_device(dev);
+-			}
++		}
++		mutex_unlock(&fs_devices->device_list_mutex);
++		if (fs_devices->num_devices == 0) {
++			btrfs_sysfs_remove_fsid(fs_devices);
++			list_del(&fs_devices->fs_list);
++			free_fs_devices(fs_devices);
+ 		}
+ 	}
+ }
+@@ -750,7 +754,8 @@ error_brelse:
+  * error pointer when failed
+  */
+ static noinline struct btrfs_device *device_list_add(const char *path,
+-			   struct btrfs_super_block *disk_super)
++			   struct btrfs_super_block *disk_super,
++			   bool *new_device_added)
+ {
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *fs_devices;
+@@ -764,21 +769,26 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		if (IS_ERR(fs_devices))
+ 			return ERR_CAST(fs_devices);
+ 
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add(&fs_devices->fs_list, &fs_uuids);
+ 
+ 		device = NULL;
+ 	} else {
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		device = find_device(fs_devices, devid,
+ 				disk_super->dev_item.uuid);
+ 	}
+ 
+ 	if (!device) {
+-		if (fs_devices->opened)
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EBUSY);
++		}
+ 
+ 		device = btrfs_alloc_device(NULL, &devid,
+ 					    disk_super->dev_item.uuid);
+ 		if (IS_ERR(device)) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			/* we can safely leave the fs_devices entry around */
+ 			return device;
+ 		}
+@@ -786,17 +796,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+ 		if (!name) {
+ 			btrfs_free_device(device);
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
+ 		}
+ 		rcu_assign_pointer(device->name, name);
+ 
+-		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add_rcu(&device->dev_list, &fs_devices->devices);
+ 		fs_devices->num_devices++;
+-		mutex_unlock(&fs_devices->device_list_mutex);
+ 
+ 		device->fs_devices = fs_devices;
+-		btrfs_free_stale_devices(path, device);
++		*new_device_added = true;
+ 
+ 		if (disk_super->label[0])
+ 			pr_info("BTRFS: device label %s devid %llu transid %llu %s\n",
+@@ -840,12 +849,15 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 			 * with larger generation number or the last-in if
+ 			 * generation are equal.
+ 			 */
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EEXIST);
+ 		}
+ 
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+-		if (!name)
++		if (!name) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
++		}
+ 		rcu_string_free(device->name);
+ 		rcu_assign_pointer(device->name, name);
+ 		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) {
+@@ -865,6 +877,7 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 
+ 	fs_devices->total_devices = btrfs_super_num_devices(disk_super);
+ 
++	mutex_unlock(&fs_devices->device_list_mutex);
+ 	return device;
+ }
+ 
+@@ -1146,7 +1159,8 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&uuid_mutex);
++	lockdep_assert_held(&uuid_mutex);
++
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	if (fs_devices->opened) {
+ 		fs_devices->opened++;
+@@ -1156,7 +1170,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ 		ret = open_fs_devices(fs_devices, flags, holder);
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+-	mutex_unlock(&uuid_mutex);
+ 
+ 	return ret;
+ }
+@@ -1221,12 +1234,15 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 			  struct btrfs_fs_devices **fs_devices_ret)
+ {
+ 	struct btrfs_super_block *disk_super;
++	bool new_device_added = false;
+ 	struct btrfs_device *device;
+ 	struct block_device *bdev;
+ 	struct page *page;
+ 	int ret = 0;
+ 	u64 bytenr;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	/*
+ 	 * we would like to check all the supers, but that would make
+ 	 * a btrfs mount succeed after a mkfs from a different FS.
+@@ -1245,13 +1261,14 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 		goto error_bdev_put;
+ 	}
+ 
+-	mutex_lock(&uuid_mutex);
+-	device = device_list_add(path, disk_super);
+-	if (IS_ERR(device))
++	device = device_list_add(path, disk_super, &new_device_added);
++	if (IS_ERR(device)) {
+ 		ret = PTR_ERR(device);
+-	else
++	} else {
+ 		*fs_devices_ret = device->fs_devices;
+-	mutex_unlock(&uuid_mutex);
++		if (new_device_added)
++			btrfs_free_stale_devices(path, device);
++	}
+ 
+ 	btrfs_release_disk_super(page);
+ 
+@@ -2029,6 +2046,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	cur_devices->num_devices--;
+ 	cur_devices->total_devices--;
++	/* Update total_devices of the parent fs_devices if it's seed */
++	if (cur_devices != fs_devices)
++		fs_devices->total_devices--;
+ 
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		cur_devices->missing_devices--;
+@@ -6563,10 +6583,14 @@ static int read_one_chunk(struct btrfs_fs_info *fs_info, struct btrfs_key *key,
+ 	write_lock(&map_tree->map_tree.lock);
+ 	ret = add_extent_mapping(&map_tree->map_tree, em, 0);
+ 	write_unlock(&map_tree->map_tree.lock);
+-	BUG_ON(ret); /* Tree corruption */
++	if (ret < 0) {
++		btrfs_err(fs_info,
++			  "failed to add chunk map, start=%llu len=%llu: %d",
++			  em->start, em->len, ret);
++	}
+ 	free_extent_map(em);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void fill_device_from_item(struct extent_buffer *leaf,
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 991bfb271908..b20297988fe0 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		spin_lock(&GlobalMid_Lock);
++		GlobalMaxActiveXid = 0;
++		GlobalCurrentXid = 0;
++		spin_unlock(&GlobalMid_Lock);
+ 		spin_lock(&cifs_tcp_ses_lock);
+ 		list_for_each(tmp1, &cifs_tcp_ses_list) {
+ 			server = list_entry(tmp1, struct TCP_Server_Info,
+@@ -395,6 +399,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 							  struct cifs_tcon,
+ 							  tcon_list);
+ 					atomic_set(&tcon->num_smbs_sent, 0);
++					spin_lock(&tcon->stat_lock);
++					tcon->bytes_read = 0;
++					tcon->bytes_written = 0;
++					spin_unlock(&tcon->stat_lock);
+ 					if (server->ops->clear_stats)
+ 						server->ops->clear_stats(tcon);
+ 				}
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 5df2c0698cda..9d02563b2147 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3031,11 +3031,15 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	}
+ 
+ #ifdef CONFIG_CIFS_SMB311
+-	if ((volume_info->linux_ext) && (ses->server->posix_ext_supported)) {
+-		if (ses->server->vals->protocol_id == SMB311_PROT_ID) {
++	if (volume_info->linux_ext) {
++		if (ses->server->posix_ext_supported) {
+ 			tcon->posix_extensions = true;
+ 			printk_once(KERN_WARNING
+ 				"SMB3.11 POSIX Extensions are experimental\n");
++		} else {
++			cifs_dbg(VFS, "Server does not support mounting with posix SMB3.11 extensions.\n");
++			rc = -EOPNOTSUPP;
++			goto out_fail;
+ 		}
+ 	}
+ #endif /* 311 */
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 3ff7cec2da81..239215dcc00b 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -240,6 +240,13 @@ smb2_check_message(char *buf, unsigned int len, struct TCP_Server_Info *srvr)
+ 		if (clc_len == len + 1)
+ 			return 0;
+ 
++		/*
++		 * Some windows servers (win2016) will pad also the final
++		 * PDU in a compound to 8 bytes.
++		 */
++		if (((clc_len + 7) & ~7) == len)
++			return 0;
++
+ 		/*
+ 		 * MacOS server pads after SMB2.1 write response with 3 bytes
+ 		 * of junk. Other servers match RFC1001 len to actual
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ffce77e00a58..44e511a35559 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -360,7 +360,7 @@ smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		       total_len);
+ 
+ 	if (tcon != NULL) {
+-#ifdef CONFIG_CIFS_STATS2
++#ifdef CONFIG_CIFS_STATS
+ 		uint16_t com_code = le16_to_cpu(smb2_command);
+ 		cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_sent[com_code]);
+ #endif
+@@ -1928,7 +1928,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ {
+ 	struct smb_rqst rqst;
+ 	struct smb2_create_req *req;
+-	struct smb2_create_rsp *rsp;
++	struct smb2_create_rsp *rsp = NULL;
+ 	struct TCP_Server_Info *server;
+ 	struct cifs_ses *ses = tcon->ses;
+ 	struct kvec iov[3]; /* make sure at least one for each open context */
+@@ -1943,27 +1943,31 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	char *pc_buf = NULL;
+ 	int flags = 0;
+ 	unsigned int total_len;
+-	__le16 *path = cifs_convert_path_to_utf16(full_path, cifs_sb);
+-
+-	if (!path)
+-		return -ENOMEM;
++	__le16 *utf16_path = NULL;
+ 
+ 	cifs_dbg(FYI, "mkdir\n");
+ 
++	/* resource #1: path allocation */
++	utf16_path = cifs_convert_path_to_utf16(full_path, cifs_sb);
++	if (!utf16_path)
++		return -ENOMEM;
++
+ 	if (ses && (ses->server))
+ 		server = ses->server;
+-	else
+-		return -EIO;
++	else {
++		rc = -EIO;
++		goto err_free_path;
++	}
+ 
++	/* resource #2: request */
+ 	rc = smb2_plain_req_init(SMB2_CREATE, tcon, (void **) &req, &total_len);
+-
+ 	if (rc)
+-		return rc;
++		goto err_free_path;
++
+ 
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
+-
+ 	req->ImpersonationLevel = IL_IMPERSONATION;
+ 	req->DesiredAccess = cpu_to_le32(FILE_WRITE_ATTRIBUTES);
+ 	/* File attributes ignored on open (used in create though) */
+@@ -1992,50 +1996,44 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 		req->sync_hdr.Flags |= SMB2_FLAGS_DFS_OPERATIONS;
+ 		rc = alloc_path_with_tree_prefix(&copy_path, &copy_size,
+ 						 &name_len,
+-						 tcon->treeName, path);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			return rc;
+-		}
++						 tcon->treeName, utf16_path);
++		if (rc)
++			goto err_free_req;
++
+ 		req->NameLength = cpu_to_le16(name_len * 2);
+ 		uni_path_len = copy_size;
+-		path = copy_path;
++		/* free before overwriting resource */
++		kfree(utf16_path);
++		utf16_path = copy_path;
+ 	} else {
+-		uni_path_len = (2 * UniStrnlen((wchar_t *)path, PATH_MAX)) + 2;
++		uni_path_len = (2 * UniStrnlen((wchar_t *)utf16_path, PATH_MAX)) + 2;
+ 		/* MUST set path len (NameLength) to 0 opening root of share */
+ 		req->NameLength = cpu_to_le16(uni_path_len - 2);
+ 		if (uni_path_len % 8 != 0) {
+ 			copy_size = roundup(uni_path_len, 8);
+ 			copy_path = kzalloc(copy_size, GFP_KERNEL);
+ 			if (!copy_path) {
+-				cifs_small_buf_release(req);
+-				return -ENOMEM;
++				rc = -ENOMEM;
++				goto err_free_req;
+ 			}
+-			memcpy((char *)copy_path, (const char *)path,
++			memcpy((char *)copy_path, (const char *)utf16_path,
+ 			       uni_path_len);
+ 			uni_path_len = copy_size;
+-			path = copy_path;
++			/* free before overwriting resource */
++			kfree(utf16_path);
++			utf16_path = copy_path;
+ 		}
+ 	}
+ 
+ 	iov[1].iov_len = uni_path_len;
+-	iov[1].iov_base = path;
++	iov[1].iov_base = utf16_path;
+ 	req->RequestedOplockLevel = SMB2_OPLOCK_LEVEL_NONE;
+ 
+ 	if (tcon->posix_extensions) {
+-		if (n_iov > 2) {
+-			struct create_context *ccontext =
+-			    (struct create_context *)iov[n_iov-1].iov_base;
+-			ccontext->Next =
+-				cpu_to_le32(iov[n_iov-1].iov_len);
+-		}
+-
++		/* resource #3: posix buf */
+ 		rc = add_posix_context(iov, &n_iov, mode);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			kfree(copy_path);
+-			return rc;
+-		}
++		if (rc)
++			goto err_free_req;
+ 		pc_buf = iov[n_iov-1].iov_base;
+ 	}
+ 
+@@ -2044,32 +2042,33 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	rqst.rq_iov = iov;
+ 	rqst.rq_nvec = n_iov;
+ 
+-	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags,
+-			    &rsp_iov);
+-
+-	cifs_small_buf_release(req);
+-	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
+-
+-	if (rc != 0) {
++	/* resource #4: response buffer */
++	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags, &rsp_iov);
++	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_CREATE_HE);
+ 		trace_smb3_posix_mkdir_err(xid, tcon->tid, ses->Suid,
+-				    CREATE_NOT_FILE, FILE_WRITE_ATTRIBUTES, rc);
+-		goto smb311_mkdir_exit;
+-	} else
+-		trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
+-				     ses->Suid, CREATE_NOT_FILE,
+-				     FILE_WRITE_ATTRIBUTES);
++					   CREATE_NOT_FILE,
++					   FILE_WRITE_ATTRIBUTES, rc);
++		goto err_free_rsp_buf;
++	}
++
++	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
++	trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
++				    ses->Suid, CREATE_NOT_FILE,
++				    FILE_WRITE_ATTRIBUTES);
+ 
+ 	SMB2_close(xid, tcon, rsp->PersistentFileId, rsp->VolatileFileId);
+ 
+ 	/* Eventually save off posix specific response info and timestaps */
+ 
+-smb311_mkdir_exit:
+-	kfree(copy_path);
+-	kfree(pc_buf);
++err_free_rsp_buf:
+ 	free_rsp_buf(resp_buftype, rsp);
++	kfree(pc_buf);
++err_free_req:
++	cifs_small_buf_release(req);
++err_free_path:
++	kfree(utf16_path);
+ 	return rc;
+-
+ }
+ #endif /* SMB311 */
+ 
+diff --git a/fs/dcache.c b/fs/dcache.c
+index ceb7b491d1b9..d19a0dc46c04 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -292,7 +292,8 @@ void take_dentry_name_snapshot(struct name_snapshot *name, struct dentry *dentry
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = p->name;
+ 	} else {
+-		memcpy(name->inline_name, dentry->d_iname, DNAME_INLINE_LEN);
++		memcpy(name->inline_name, dentry->d_iname,
++		       dentry->d_name.len + 1);
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = name->inline_name;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 8f931d699287..b61954d40c25 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2149,8 +2149,12 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
+ 
+ 	if (to > i_size) {
+ 		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 		truncate_pagecache(inode, i_size);
+ 		f2fs_truncate_blocks(inode, i_size, true);
++
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 	}
+ }
+@@ -2490,6 +2494,10 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
++	/* don't remain PG_checked flag which was set during GC */
++	if (is_cold_data(page))
++		clear_cold_data(page);
++
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6880c6f78d58..3ffa341cf586 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -782,22 +782,26 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	}
+ 
+ 	if (attr->ia_valid & ATTR_SIZE) {
+-		if (attr->ia_size <= i_size_read(inode)) {
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
++		bool to_smaller = (attr->ia_size <= i_size_read(inode));
++
++		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++		truncate_setsize(inode, attr->ia_size);
++
++		if (to_smaller)
+ 			err = f2fs_truncate(inode);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
+-			if (err)
+-				return err;
+-		} else {
+-			/*
+-			 * do not trim all blocks after i_size if target size is
+-			 * larger than i_size.
+-			 */
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
++		/*
++		 * do not trim all blocks after i_size if target size is
++		 * larger than i_size.
++		 */
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 
++		if (err)
++			return err;
++
++		if (!to_smaller) {
+ 			/* should convert inline inode here */
+ 			if (!f2fs_may_inline_data(inode)) {
+ 				err = f2fs_convert_inline_inode(inode);
+@@ -944,13 +948,18 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 
+ 			blk_start = (loff_t)pg_start << PAGE_SHIFT;
+ 			blk_end = (loff_t)pg_end << PAGE_SHIFT;
++
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 			truncate_inode_pages_range(mapping, blk_start,
+ 					blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+ 			f2fs_unlock_op(sbi);
++
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			up_write(&F2FS_I(inode)->i_mmap_sem);
+ 		}
+ 	}
+@@ -1295,8 +1304,6 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 	if (ret)
+ 		goto out_sem;
+ 
+-	truncate_pagecache_range(inode, offset, offset + len - 1);
+-
+ 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+ 
+@@ -1326,12 +1333,19 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 			unsigned int end_offset;
+ 			pgoff_t end;
+ 
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++			truncate_pagecache_range(inode,
++				(loff_t)index << PAGE_SHIFT,
++				((loff_t)pg_end << PAGE_SHIFT) - 1);
++
+ 			f2fs_lock_op(sbi);
+ 
+ 			set_new_dnode(&dn, inode, NULL, NULL, 0);
+ 			ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
+ 			if (ret) {
+ 				f2fs_unlock_op(sbi);
++				up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 				goto out;
+ 			}
+ 
+@@ -1340,7 +1354,9 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 
+ 			ret = f2fs_do_zero_range(&dn, index, end);
+ 			f2fs_put_dnode(&dn);
++
+ 			f2fs_unlock_op(sbi);
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+ 			f2fs_balance_fs(sbi, dn.node_changed);
+ 
+diff --git a/fs/fat/cache.c b/fs/fat/cache.c
+index e9bed49df6b7..78d501c1fb65 100644
+--- a/fs/fat/cache.c
++++ b/fs/fat/cache.c
+@@ -225,7 +225,8 @@ static inline void cache_init(struct fat_cache_id *cid, int fclus, int dclus)
+ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ {
+ 	struct super_block *sb = inode->i_sb;
+-	const int limit = sb->s_maxbytes >> MSDOS_SB(sb)->cluster_bits;
++	struct msdos_sb_info *sbi = MSDOS_SB(sb);
++	const int limit = sb->s_maxbytes >> sbi->cluster_bits;
+ 	struct fat_entry fatent;
+ 	struct fat_cache_id cid;
+ 	int nr;
+@@ -234,6 +235,12 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 
+ 	*fclus = 0;
+ 	*dclus = MSDOS_I(inode)->i_start;
++	if (!fat_valid_entry(sbi, *dclus)) {
++		fat_fs_error_ratelimit(sb,
++			"%s: invalid start cluster (i_pos %lld, start %08x)",
++			__func__, MSDOS_I(inode)->i_pos, *dclus);
++		return -EIO;
++	}
+ 	if (cluster == 0)
+ 		return 0;
+ 
+@@ -250,9 +257,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 		/* prevent the infinite loop of cluster chain */
+ 		if (*fclus > limit) {
+ 			fat_fs_error_ratelimit(sb,
+-					"%s: detected the cluster chain loop"
+-					" (i_pos %lld)", __func__,
+-					MSDOS_I(inode)->i_pos);
++				"%s: detected the cluster chain loop (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		}
+@@ -262,9 +268,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 			goto out;
+ 		else if (nr == FAT_ENT_FREE) {
+ 			fat_fs_error_ratelimit(sb,
+-				       "%s: invalid cluster chain (i_pos %lld)",
+-				       __func__,
+-				       MSDOS_I(inode)->i_pos);
++				"%s: invalid cluster chain (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		} else if (nr == FAT_ENT_EOF) {
+diff --git a/fs/fat/fat.h b/fs/fat/fat.h
+index 8fc1093da47d..a0a00f3734bc 100644
+--- a/fs/fat/fat.h
++++ b/fs/fat/fat.h
+@@ -348,6 +348,11 @@ static inline void fatent_brelse(struct fat_entry *fatent)
+ 	fatent->fat_inode = NULL;
+ }
+ 
++static inline bool fat_valid_entry(struct msdos_sb_info *sbi, int entry)
++{
++	return FAT_START_ENT <= entry && entry < sbi->max_cluster;
++}
++
+ extern void fat_ent_access_init(struct super_block *sb);
+ extern int fat_ent_read(struct inode *inode, struct fat_entry *fatent,
+ 			int entry);
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index bac10de678cc..3aef8630a4b9 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -23,7 +23,7 @@ static void fat12_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = entry + (entry >> 1);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -33,7 +33,7 @@ static void fat_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = (entry << sbi->fatent_shift);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -353,7 +353,7 @@ int fat_ent_read(struct inode *inode, struct fat_entry *fatent, int entry)
+ 	int err, offset;
+ 	sector_t blocknr;
+ 
+-	if (entry < FAT_START_ENT || sbi->max_cluster <= entry) {
++	if (!fat_valid_entry(sbi, entry)) {
+ 		fatent_brelse(fatent);
+ 		fat_fs_error(sb, "invalid access to FAT (entry 0x%08x)", entry);
+ 		return -EIO;
+diff --git a/fs/hfs/brec.c b/fs/hfs/brec.c
+index ad04a5741016..9a8772465a90 100644
+--- a/fs/hfs/brec.c
++++ b/fs/hfs/brec.c
+@@ -75,9 +75,10 @@ int hfs_brec_insert(struct hfs_find_data *fd, void *entry, int entry_len)
+ 	if (!fd->bnode) {
+ 		if (!tree->root)
+ 			hfs_btree_inc_height(tree);
+-		fd->bnode = hfs_bnode_find(tree, tree->leaf_head);
+-		if (IS_ERR(fd->bnode))
+-			return PTR_ERR(fd->bnode);
++		node = hfs_bnode_find(tree, tree->leaf_head);
++		if (IS_ERR(node))
++			return PTR_ERR(node);
++		fd->bnode = node;
+ 		fd->record = -1;
+ 	}
+ 	new_node = NULL;
+diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c
+index b5254378f011..cd017d7dbdfa 100644
+--- a/fs/hfsplus/dir.c
++++ b/fs/hfsplus/dir.c
+@@ -78,13 +78,13 @@ again:
+ 				cpu_to_be32(HFSP_HARDLINK_TYPE) &&
+ 				entry.file.user_info.fdCreator ==
+ 				cpu_to_be32(HFSP_HFSPLUS_CREATOR) &&
++				HFSPLUS_SB(sb)->hidden_dir &&
+ 				(entry.file.create_date ==
+ 					HFSPLUS_I(HFSPLUS_SB(sb)->hidden_dir)->
+ 						create_date ||
+ 				entry.file.create_date ==
+ 					HFSPLUS_I(d_inode(sb->s_root))->
+-						create_date) &&
+-				HFSPLUS_SB(sb)->hidden_dir) {
++						create_date)) {
+ 			struct qstr str;
+ 			char name[32];
+ 
+diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
+index a6c0f54c48c3..80abba550bfa 100644
+--- a/fs/hfsplus/super.c
++++ b/fs/hfsplus/super.c
+@@ -524,8 +524,10 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto out_put_root;
+ 	if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
+ 		hfs_find_exit(&fd);
+-		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER))
++		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER)) {
++			err = -EINVAL;
+ 			goto out_put_root;
++		}
+ 		inode = hfsplus_iget(sb, be32_to_cpu(entry.folder.id));
+ 		if (IS_ERR(inode)) {
+ 			err = PTR_ERR(inode);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 464db0c0f5c8..ff98e2a3f3cc 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7734,7 +7734,7 @@ static int nfs4_sp4_select_mode(struct nfs_client *clp,
+ 	}
+ out:
+ 	clp->cl_sp4_flags = flags;
+-	return 0;
++	return ret;
+ }
+ 
+ struct nfs41_exchange_id_data {
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index e64ecb9f2720..66c373230e60 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -384,8 +384,10 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
+ 		phdr->p_flags	= PF_R|PF_W|PF_X;
+ 		phdr->p_offset	= kc_vaddr_to_offset(m->addr) + dataoff;
+ 		phdr->p_vaddr	= (size_t)m->addr;
+-		if (m->type == KCORE_RAM || m->type == KCORE_TEXT)
++		if (m->type == KCORE_RAM)
+ 			phdr->p_paddr	= __pa(m->addr);
++		else if (m->type == KCORE_TEXT)
++			phdr->p_paddr	= __pa_symbol(m->addr);
+ 		else
+ 			phdr->p_paddr	= (elf_addr_t)-1;
+ 		phdr->p_filesz	= phdr->p_memsz	= m->size;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index cfb6674331fd..0651646dd04d 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -225,6 +225,7 @@ out_unlock:
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_MMU
+ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+ 			       u64 start, size_t size)
+ {
+@@ -259,6 +260,7 @@ out_unlock:
+ 	mutex_unlock(&vmcoredd_mutex);
+ 	return ret;
+ }
++#endif /* CONFIG_MMU */
+ #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+ 
+ /* Read from the ELF header and then the crash dump. On error, negative value is
+diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h
+index ae4811fecc1f..6d670bd9ab6b 100644
+--- a/fs/reiserfs/reiserfs.h
++++ b/fs/reiserfs/reiserfs.h
+@@ -271,7 +271,7 @@ struct reiserfs_journal_list {
+ 
+ 	struct mutex j_commit_mutex;
+ 	unsigned int j_trans_id;
+-	time_t j_timestamp;
++	time64_t j_timestamp; /* write-only but useful for crash dump analysis */
+ 	struct reiserfs_list_bitmap *j_list_bitmap;
+ 	struct buffer_head *j_commit_bh;	/* commit buffer head */
+ 	struct reiserfs_journal_cnode *j_realblock;
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 29502238e510..bf85e152af05 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3082,4 +3082,6 @@
+ 
+ #define PCI_VENDOR_ID_OCZ		0x1b85
+ 
++#define PCI_VENDOR_ID_NCUBE		0x10ff
++
+ #endif /* _LINUX_PCI_IDS_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index cd3ecda9386a..106e01c721e6 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2023,6 +2023,10 @@ int tcp_set_ulp_id(struct sock *sk, const int ulp);
+ void tcp_get_available_ulp(char *buf, size_t len);
+ void tcp_cleanup_ulp(struct sock *sk);
+ 
++#define MODULE_ALIAS_TCP_ULP(name)				\
++	__MODULE_INFO(alias, alias_userspace, name);		\
++	__MODULE_INFO(alias, alias_tcp_ulp, "tcp-ulp-" name)
++
+ /* Call BPF_SOCK_OPS program that returns an int. If the return value
+  * is < 0, then the BPF op failed (for example if the loaded BPF
+  * program does not support the chosen operation or there is no BPF
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 7b8c9e19bad1..910cc4334b21 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 private;
++	__s32 dh_private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 76efe9a183f5..fc5b103512e7 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -196,19 +196,21 @@ static void *map_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+ 	struct bpf_map *map = seq_file_to_map(m);
+ 	void *key = map_iter(m)->key;
++	void *prev_key;
+ 
+ 	if (map_iter(m)->done)
+ 		return NULL;
+ 
+ 	if (unlikely(v == SEQ_START_TOKEN))
+-		goto done;
++		prev_key = NULL;
++	else
++		prev_key = key;
+ 
+-	if (map->ops->map_get_next_key(map, key, key)) {
++	if (map->ops->map_get_next_key(map, prev_key, key)) {
+ 		map_iter(m)->done = true;
+ 		return NULL;
+ 	}
+ 
+-done:
+ 	++(*pos);
+ 	return key;
+ }
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index c4d75c52b4fc..58899601fccf 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -58,6 +58,7 @@ struct bpf_stab {
+ 	struct bpf_map map;
+ 	struct sock **sock_map;
+ 	struct bpf_sock_progs progs;
++	raw_spinlock_t lock;
+ };
+ 
+ struct bucket {
+@@ -89,9 +90,9 @@ enum smap_psock_state {
+ 
+ struct smap_psock_map_entry {
+ 	struct list_head list;
++	struct bpf_map *map;
+ 	struct sock **entry;
+ 	struct htab_elem __rcu *hash_link;
+-	struct bpf_htab __rcu *htab;
+ };
+ 
+ struct smap_psock {
+@@ -343,13 +344,18 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	e = psock_map_pop(sk, psock);
+ 	while (e) {
+ 		if (e->entry) {
+-			osk = cmpxchg(e->entry, sk, NULL);
++			struct bpf_stab *stab = container_of(e->map, struct bpf_stab, map);
++
++			raw_spin_lock_bh(&stab->lock);
++			osk = *e->entry;
+ 			if (osk == sk) {
++				*e->entry = NULL;
+ 				smap_release_sock(psock, sk);
+ 			}
++			raw_spin_unlock_bh(&stab->lock);
+ 		} else {
+ 			struct htab_elem *link = rcu_dereference(e->hash_link);
+-			struct bpf_htab *htab = rcu_dereference(e->htab);
++			struct bpf_htab *htab = container_of(e->map, struct bpf_htab, map);
+ 			struct hlist_head *head;
+ 			struct htab_elem *l;
+ 			struct bucket *b;
+@@ -370,6 +376,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			}
+ 			raw_spin_unlock_bh(&b->lock);
+ 		}
++		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
+ 	rcu_read_unlock();
+@@ -1644,6 +1651,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	bpf_map_init_from_attr(&stab->map, attr);
++	raw_spin_lock_init(&stab->lock);
+ 
+ 	/* make sure page count doesn't overflow */
+ 	cost = (u64) stab->map.max_entries * sizeof(struct sock *);
+@@ -1678,8 +1686,10 @@ static void smap_list_map_remove(struct smap_psock *psock,
+ 
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+-		if (e->entry == entry)
++		if (e->entry == entry) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1693,8 +1703,10 @@ static void smap_list_hash_remove(struct smap_psock *psock,
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+ 		struct htab_elem *c = rcu_dereference(e->hash_link);
+ 
+-		if (c == hash_link)
++		if (c == hash_link) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1714,14 +1726,15 @@ static void sock_map_free(struct bpf_map *map)
+ 	 * and a grace period expire to ensure psock is really safe to remove.
+ 	 */
+ 	rcu_read_lock();
++	raw_spin_lock_bh(&stab->lock);
+ 	for (i = 0; i < stab->map.max_entries; i++) {
+ 		struct smap_psock *psock;
+ 		struct sock *sock;
+ 
+-		sock = xchg(&stab->sock_map[i], NULL);
++		sock = stab->sock_map[i];
+ 		if (!sock)
+ 			continue;
+-
++		stab->sock_map[i] = NULL;
+ 		psock = smap_psock_sk(sock);
+ 		/* This check handles a racing sock event that can get the
+ 		 * sk_callback_lock before this case but after xchg happens
+@@ -1733,6 +1746,7 @@ static void sock_map_free(struct bpf_map *map)
+ 			smap_release_sock(psock, sock);
+ 		}
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
+ 	rcu_read_unlock();
+ 
+ 	sock_map_remove_complete(stab);
+@@ -1776,19 +1790,23 @@ static int sock_map_delete_elem(struct bpf_map *map, void *key)
+ 	if (k >= map->max_entries)
+ 		return -EINVAL;
+ 
+-	sock = xchg(&stab->sock_map[k], NULL);
++	raw_spin_lock_bh(&stab->lock);
++	sock = stab->sock_map[k];
++	stab->sock_map[k] = NULL;
++	raw_spin_unlock_bh(&stab->lock);
+ 	if (!sock)
+ 		return -EINVAL;
+ 
+ 	psock = smap_psock_sk(sock);
+ 	if (!psock)
+-		goto out;
+-
+-	if (psock->bpf_parse)
++		return 0;
++	if (psock->bpf_parse) {
++		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
++		write_unlock_bh(&sock->sk_callback_lock);
++	}
+ 	smap_list_map_remove(psock, &stab->sock_map[k]);
+ 	smap_release_sock(psock, sock);
+-out:
+ 	return 0;
+ }
+ 
+@@ -1824,11 +1842,9 @@ out:
+ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 				      struct bpf_sock_progs *progs,
+ 				      struct sock *sock,
+-				      struct sock **map_link,
+ 				      void *key)
+ {
+ 	struct bpf_prog *verdict, *parse, *tx_msg;
+-	struct smap_psock_map_entry *e = NULL;
+ 	struct smap_psock *psock;
+ 	bool new = false;
+ 	int err = 0;
+@@ -1901,14 +1917,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		new = true;
+ 	}
+ 
+-	if (map_link) {
+-		e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
+-		if (!e) {
+-			err = -ENOMEM;
+-			goto out_free;
+-		}
+-	}
+-
+ 	/* 3. At this point we have a reference to a valid psock that is
+ 	 * running. Attach any BPF programs needed.
+ 	 */
+@@ -1930,17 +1938,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		write_unlock_bh(&sock->sk_callback_lock);
+ 	}
+ 
+-	/* 4. Place psock in sockmap for use and stop any programs on
+-	 * the old sock assuming its not the same sock we are replacing
+-	 * it with. Because we can only have a single set of programs if
+-	 * old_sock has a strp we can stop it.
+-	 */
+-	if (map_link) {
+-		e->entry = map_link;
+-		spin_lock_bh(&psock->maps_lock);
+-		list_add_tail(&e->list, &psock->maps);
+-		spin_unlock_bh(&psock->maps_lock);
+-	}
+ 	return err;
+ out_free:
+ 	smap_release_sock(psock, sock);
+@@ -1951,7 +1948,6 @@ out_progs:
+ 	}
+ 	if (tx_msg)
+ 		bpf_prog_put(tx_msg);
+-	kfree(e);
+ 	return err;
+ }
+ 
+@@ -1961,36 +1957,57 @@ static int sock_map_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ {
+ 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+ 	struct bpf_sock_progs *progs = &stab->progs;
+-	struct sock *osock, *sock;
++	struct sock *osock, *sock = skops->sk;
++	struct smap_psock_map_entry *e;
++	struct smap_psock *psock;
+ 	u32 i = *(u32 *)key;
+ 	int err;
+ 
+ 	if (unlikely(flags > BPF_EXIST))
+ 		return -EINVAL;
+-
+ 	if (unlikely(i >= stab->map.max_entries))
+ 		return -E2BIG;
+ 
+-	sock = READ_ONCE(stab->sock_map[i]);
+-	if (flags == BPF_EXIST && !sock)
+-		return -ENOENT;
+-	else if (flags == BPF_NOEXIST && sock)
+-		return -EEXIST;
++	e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
++	if (!e)
++		return -ENOMEM;
+ 
+-	sock = skops->sk;
+-	err = __sock_map_ctx_update_elem(map, progs, sock, &stab->sock_map[i],
+-					 key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto out;
+ 
+-	osock = xchg(&stab->sock_map[i], sock);
+-	if (osock) {
+-		struct smap_psock *opsock = smap_psock_sk(osock);
++	/* psock guaranteed to be present. */
++	psock = smap_psock_sk(sock);
++	raw_spin_lock_bh(&stab->lock);
++	osock = stab->sock_map[i];
++	if (osock && flags == BPF_NOEXIST) {
++		err = -EEXIST;
++		goto out_unlock;
++	}
++	if (!osock && flags == BPF_EXIST) {
++		err = -ENOENT;
++		goto out_unlock;
++	}
+ 
+-		smap_list_map_remove(opsock, &stab->sock_map[i]);
+-		smap_release_sock(opsock, osock);
++	e->entry = &stab->sock_map[i];
++	e->map = map;
++	spin_lock_bh(&psock->maps_lock);
++	list_add_tail(&e->list, &psock->maps);
++	spin_unlock_bh(&psock->maps_lock);
++
++	stab->sock_map[i] = sock;
++	if (osock) {
++		psock = smap_psock_sk(osock);
++		smap_list_map_remove(psock, &stab->sock_map[i]);
++		smap_release_sock(psock, osock);
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
++	return 0;
++out_unlock:
++	smap_release_sock(psock, sock);
++	raw_spin_unlock_bh(&stab->lock);
+ out:
++	kfree(e);
+ 	return err;
+ }
+ 
+@@ -2353,7 +2370,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	b = __select_bucket(htab, hash);
+ 	head = &b->head;
+ 
+-	err = __sock_map_ctx_update_elem(map, progs, sock, NULL, key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto err;
+ 
+@@ -2379,8 +2396,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	}
+ 
+ 	rcu_assign_pointer(e->hash_link, l_new);
+-	rcu_assign_pointer(e->htab,
+-			   container_of(map, struct bpf_htab, map));
++	e->map = map;
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_add_tail(&e->list, &psock->maps);
+ 	spin_unlock_bh(&psock->maps_lock);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1b27babc4c78..8ed48ca2cc43 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -549,8 +549,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ 			goto out;
+ 	}
+ 	/* a new mm has just been created */
+-	arch_dup_mmap(oldmm, mm);
+-	retval = 0;
++	retval = arch_dup_mmap(oldmm, mm);
+ out:
+ 	up_write(&mm->mmap_sem);
+ 	flush_tlb_mm(oldmm);
+@@ -1417,7 +1416,9 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
+ 		return -ENOMEM;
+ 
+ 	atomic_set(&sig->count, 1);
++	spin_lock_irq(&current->sighand->siglock);
+ 	memcpy(sig->action, current->sighand->action, sizeof(sig->action));
++	spin_unlock_irq(&current->sighand->siglock);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 5f78c6e41796..0280deac392e 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2652,6 +2652,9 @@ void flush_workqueue(struct workqueue_struct *wq)
+ 	if (WARN_ON(!wq_online))
+ 		return;
+ 
++	lock_map_acquire(&wq->lockdep_map);
++	lock_map_release(&wq->lockdep_map);
++
+ 	mutex_lock(&wq->mutex);
+ 
+ 	/*
+@@ -2843,7 +2846,8 @@ reflush:
+ }
+ EXPORT_SYMBOL_GPL(drain_workqueue);
+ 
+-static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
++static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
++			     bool from_cancel)
+ {
+ 	struct worker *worker = NULL;
+ 	struct worker_pool *pool;
+@@ -2885,7 +2889,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
+ 	 * workqueues the deadlock happens when the rescuer stalls, blocking
+ 	 * forward progress.
+ 	 */
+-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
++	if (!from_cancel &&
++	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
+ 		lock_map_acquire(&pwq->wq->lockdep_map);
+ 		lock_map_release(&pwq->wq->lockdep_map);
+ 	}
+@@ -2896,6 +2901,27 @@ already_gone:
+ 	return false;
+ }
+ 
++static bool __flush_work(struct work_struct *work, bool from_cancel)
++{
++	struct wq_barrier barr;
++
++	if (WARN_ON(!wq_online))
++		return false;
++
++	if (!from_cancel) {
++		lock_map_acquire(&work->lockdep_map);
++		lock_map_release(&work->lockdep_map);
++	}
++
++	if (start_flush_work(work, &barr, from_cancel)) {
++		wait_for_completion(&barr.done);
++		destroy_work_on_stack(&barr.work);
++		return true;
++	} else {
++		return false;
++	}
++}
++
+ /**
+  * flush_work - wait for a work to finish executing the last queueing instance
+  * @work: the work to flush
+@@ -2909,18 +2935,7 @@ already_gone:
+  */
+ bool flush_work(struct work_struct *work)
+ {
+-	struct wq_barrier barr;
+-
+-	if (WARN_ON(!wq_online))
+-		return false;
+-
+-	if (start_flush_work(work, &barr)) {
+-		wait_for_completion(&barr.done);
+-		destroy_work_on_stack(&barr.work);
+-		return true;
+-	} else {
+-		return false;
+-	}
++	return __flush_work(work, false);
+ }
+ EXPORT_SYMBOL_GPL(flush_work);
+ 
+@@ -2986,7 +3001,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
+ 	 * isn't executing.
+ 	 */
+ 	if (wq_online)
+-		flush_work(work);
++		__flush_work(work, true);
+ 
+ 	clear_work_data(work);
+ 
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 994be4805cec..24c1df0d7466 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -360,9 +360,12 @@ static void debug_object_is_on_stack(void *addr, int onstack)
+ 
+ 	limit++;
+ 	if (is_on_stack)
+-		pr_warn("object is on stack, but not annotated\n");
++		pr_warn("object %p is on stack %p, but NOT annotated.\n", addr,
++			 task_stack_page(current));
+ 	else
+-		pr_warn("object is not on stack, but annotated\n");
++		pr_warn("object %p is NOT on stack %p, but annotated.\n", addr,
++			 task_stack_page(current));
++
+ 	WARN_ON(1);
+ }
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index ce95491abd6a..94af022b7f3d 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -635,7 +635,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	bool "Defer initialisation of struct pages to kthreads"
+ 	default n
+ 	depends on NO_BOOTMEM
+-	depends on !FLATMEM
++	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+diff --git a/mm/fadvise.c b/mm/fadvise.c
+index afa41491d324..2d8376e3c640 100644
+--- a/mm/fadvise.c
++++ b/mm/fadvise.c
+@@ -72,8 +72,12 @@ int ksys_fadvise64_64(int fd, loff_t offset, loff_t len, int advice)
+ 		goto out;
+ 	}
+ 
+-	/* Careful about overflows. Len == 0 means "as much as possible" */
+-	endbyte = offset + len;
++	/*
++	 * Careful about overflows. Len == 0 means "as much as possible".  Use
++	 * unsigned math because signed overflows are undefined and UBSan
++	 * complains.
++	 */
++	endbyte = (u64)offset + (u64)len;
+ 	if (!len || endbyte < len)
+ 		endbyte = -1;
+ 	else
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index ef456395645a..7fb60dd4be79 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -199,15 +199,14 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ static void p9_conn_cancel(struct p9_conn *m, int err)
+ {
+ 	struct p9_req_t *req, *rtmp;
+-	unsigned long flags;
+ 	LIST_HEAD(cancel_list);
+ 
+ 	p9_debug(P9_DEBUG_ERROR, "mux %p err %d\n", m, err);
+ 
+-	spin_lock_irqsave(&m->client->lock, flags);
++	spin_lock(&m->client->lock);
+ 
+ 	if (m->err) {
+-		spin_unlock_irqrestore(&m->client->lock, flags);
++		spin_unlock(&m->client->lock);
+ 		return;
+ 	}
+ 
+@@ -219,7 +218,6 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 	list_for_each_entry_safe(req, rtmp, &m->unsent_req_list, req_list) {
+ 		list_move(&req->req_list, &cancel_list);
+ 	}
+-	spin_unlock_irqrestore(&m->client->lock, flags);
+ 
+ 	list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
+ 		p9_debug(P9_DEBUG_ERROR, "call back req %p\n", req);
+@@ -228,6 +226,7 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 			req->t_err = err;
+ 		p9_client_cb(m->client, req, REQ_STATUS_ERROR);
+ 	}
++	spin_unlock(&m->client->lock);
+ }
+ 
+ static __poll_t
+@@ -375,8 +374,9 @@ static void p9_read_work(struct work_struct *work)
+ 		if (m->req->status != REQ_STATUS_ERROR)
+ 			status = REQ_STATUS_RCVD;
+ 		list_del(&m->req->req_list);
+-		spin_unlock(&m->client->lock);
++		/* update req->status while holding client->lock  */
+ 		p9_client_cb(m->client, m->req, status);
++		spin_unlock(&m->client->lock);
+ 		m->rc.sdata = NULL;
+ 		m->rc.offset = 0;
+ 		m->rc.capacity = 0;
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 4c2da2513c8b..2dc1c293092b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -571,7 +571,7 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 	chan->vq = virtio_find_single_vq(vdev, req_done, "requests");
+ 	if (IS_ERR(chan->vq)) {
+ 		err = PTR_ERR(chan->vq);
+-		goto out_free_vq;
++		goto out_free_chan;
+ 	}
+ 	chan->vq->vdev->priv = chan;
+ 	spin_lock_init(&chan->lock);
+@@ -624,6 +624,7 @@ out_free_tag:
+ 	kfree(tag);
+ out_free_vq:
+ 	vdev->config->del_vqs(vdev);
++out_free_chan:
+ 	kfree(chan);
+ fail:
+ 	return err;
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index 6771f1855b96..2657056130a4 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -95,23 +95,15 @@ static void __xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
+ {
+ 	struct xdp_mem_allocator *xa;
+ 	int id = xdp_rxq->mem.id;
+-	int err;
+ 
+ 	if (id == 0)
+ 		return;
+ 
+ 	mutex_lock(&mem_id_lock);
+ 
+-	xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
+-	if (!xa) {
+-		mutex_unlock(&mem_id_lock);
+-		return;
+-	}
+-
+-	err = rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params);
+-	WARN_ON(err);
+-
+-	call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
++	xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
++	if (xa && !rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
++		call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
+ 
+ 	mutex_unlock(&mem_id_lock);
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 2d8efeecf619..055f4bbba86b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -1511,11 +1511,14 @@ nla_put_failure:
+ 
+ static void erspan_setup(struct net_device *dev)
+ {
++	struct ip_tunnel *t = netdev_priv(dev);
++
+ 	ether_setup(dev);
+ 	dev->netdev_ops = &erspan_netdev_ops;
+ 	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ 	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 	ip_tunnel_setup(dev, erspan_net_id);
++	t->erspan_ver = 1;
+ }
+ 
+ static const struct nla_policy ipgre_policy[IFLA_GRE_MAX + 1] = {
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 3b2711e33e4c..488b201851d7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2516,6 +2516,12 @@ static int __net_init tcp_sk_init(struct net *net)
+ 		if (res)
+ 			goto fail;
+ 		sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
++
++		/* Please enforce IP_DF and IPID==0 for RST and
++		 * ACK sent in SYN-RECV and TIME-WAIT state.
++		 */
++		inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO;
++
+ 		*per_cpu_ptr(net->ipv4.tcp_sk, cpu) = sk;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 1dda1341a223..b690132f5da2 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -184,8 +184,9 @@ kill:
+ 				inet_twsk_deschedule_put(tw);
+ 				return TCP_TW_SUCCESS;
+ 			}
++		} else {
++			inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 		}
+-		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+ 			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index 622caa4039e0..a5995bb2eaca 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -51,7 +51,7 @@ static const struct tcp_ulp_ops *__tcp_ulp_find_autoload(const char *name)
+ #ifdef CONFIG_MODULES
+ 	if (!ulp && capable(CAP_NET_ADMIN)) {
+ 		rcu_read_unlock();
+-		request_module("%s", name);
++		request_module("tcp-ulp-%s", name);
+ 		rcu_read_lock();
+ 		ulp = tcp_ulp_find(name);
+ 	}
+@@ -129,6 +129,8 @@ void tcp_cleanup_ulp(struct sock *sk)
+ 	if (icsk->icsk_ulp_ops->release)
+ 		icsk->icsk_ulp_ops->release(sk);
+ 	module_put(icsk->icsk_ulp_ops->owner);
++
++	icsk->icsk_ulp_ops = NULL;
+ }
+ 
+ /* Change upper layer protocol for socket */
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index d212738e9d10..5516f55e214b 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -198,6 +198,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 		}
+ 	}
+ 
++	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
++
+ 	if (f6i->fib6_nh.nh_dev)
+ 		dev_put(f6i->fib6_nh.nh_dev);
+ 
+@@ -987,7 +989,10 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 					fib6_clean_expires(iter);
+ 				else
+ 					fib6_set_expires(iter, rt->expires);
+-				fib6_metric_set(iter, RTAX_MTU, rt->fib6_pmtu);
++
++				if (rt->fib6_pmtu)
++					fib6_metric_set(iter, RTAX_MTU,
++							rt->fib6_pmtu);
+ 				return -EEXIST;
+ 			}
+ 			/* If we have the same destination and the same metric,
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index cd2cfb04e5d8..7ec997fcbc43 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1776,6 +1776,7 @@ static void ip6gre_netlink_parms(struct nlattr *data[],
+ 	if (data[IFLA_GRE_COLLECT_METADATA])
+ 		parms->collect_md = true;
+ 
++	parms->erspan_ver = 1;
+ 	if (data[IFLA_GRE_ERSPAN_VER])
+ 		parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]);
+ 
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index c72ae3a4fe09..c31a7c4a9249 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -481,7 +481,7 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 	}
+ 
+ 	mtu = dst_mtu(dst);
+-	if (!skb->ignore_df && skb->len > mtu) {
++	if (skb->len > mtu) {
+ 		skb_dst_update_pmtu(skb, mtu);
+ 
+ 		if (skb->protocol == htons(ETH_P_IPV6)) {
+@@ -1102,7 +1102,8 @@ static void __net_exit vti6_destroy_tunnels(struct vti6_net *ip6n,
+ 	}
+ 
+ 	t = rtnl_dereference(ip6n->tnls_wc[0]);
+-	unregister_netdevice_queue(t->dev, list);
++	if (t)
++		unregister_netdevice_queue(t->dev, list);
+ }
+ 
+ static int __net_init vti6_init_net(struct net *net)
+@@ -1114,6 +1115,8 @@ static int __net_init vti6_init_net(struct net *net)
+ 	ip6n->tnls[0] = ip6n->tnls_wc;
+ 	ip6n->tnls[1] = ip6n->tnls_r_l;
+ 
++	if (!net_has_fallback_tunnels(net))
++		return 0;
+ 	err = -ENOMEM;
+ 	ip6n->fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6_vti0",
+ 					NET_NAME_UNKNOWN, vti6_dev_setup);
+diff --git a/net/ipv6/netfilter/ip6t_rpfilter.c b/net/ipv6/netfilter/ip6t_rpfilter.c
+index 0fe61ede77c6..c3c6b09acdc4 100644
+--- a/net/ipv6/netfilter/ip6t_rpfilter.c
++++ b/net/ipv6/netfilter/ip6t_rpfilter.c
+@@ -26,6 +26,12 @@ static bool rpfilter_addr_unicast(const struct in6_addr *addr)
+ 	return addr_type & IPV6_ADDR_UNICAST;
+ }
+ 
++static bool rpfilter_addr_linklocal(const struct in6_addr *addr)
++{
++	int addr_type = ipv6_addr_type(addr);
++	return addr_type & IPV6_ADDR_LINKLOCAL;
++}
++
+ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 				     const struct net_device *dev, u8 flags)
+ {
+@@ -48,7 +54,11 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 	}
+ 
+ 	fl6.flowi6_mark = flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
+-	if ((flags & XT_RPFILTER_LOOSE) == 0)
++
++	if (rpfilter_addr_linklocal(&iph->saddr)) {
++		lookup_flags |= RT6_LOOKUP_F_IFACE;
++		fl6.flowi6_oif = dev->ifindex;
++	} else if ((flags & XT_RPFILTER_LOOSE) == 0)
+ 		fl6.flowi6_oif = dev->ifindex;
+ 
+ 	rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 7208c16302f6..18e00ce1719a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -956,7 +956,7 @@ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->dst.error = 0;
+ 	rt->dst.output = ip6_output;
+ 
+-	if (ort->fib6_type == RTN_LOCAL) {
++	if (ort->fib6_type == RTN_LOCAL || ort->fib6_type == RTN_ANYCAST) {
+ 		rt->dst.input = ip6_input;
+ 	} else if (ipv6_addr_type(&ort->fib6_dst.addr) & IPV6_ADDR_MULTICAST) {
+ 		rt->dst.input = ip6_mc_input;
+@@ -996,7 +996,6 @@ static void ip6_rt_copy_init(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->rt6i_src = ort->fib6_src;
+ #endif
+ 	rt->rt6i_prefsrc = ort->fib6_prefsrc;
+-	rt->dst.lwtstate = lwtstate_get(ort->fib6_nh.nh_lwtstate);
+ }
+ 
+ static struct fib6_node* fib6_backtrack(struct fib6_node *fn,
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 0679dd101e72..7ca926a03b81 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -1972,13 +1972,20 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ 	if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
+ 		/* the destination server is not available */
+ 
+-		if (sysctl_expire_nodest_conn(ipvs)) {
++		__u32 flags = cp->flags;
++
++		/* when timer already started, silently drop the packet.*/
++		if (timer_pending(&cp->timer))
++			__ip_vs_conn_put(cp);
++		else
++			ip_vs_conn_put(cp);
++
++		if (sysctl_expire_nodest_conn(ipvs) &&
++		    !(flags & IP_VS_CONN_F_ONE_PACKET)) {
+ 			/* try to expire the connection immediately */
+ 			ip_vs_conn_expire_now(cp);
+ 		}
+-		/* don't restart its timer, and silently
+-		   drop the packet. */
+-		__ip_vs_conn_put(cp);
++
+ 		return NF_DROP;
+ 	}
+ 
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 20a2e37c76d1..e952eedf44b4 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -821,6 +821,21 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[])
+ #endif
+ }
+ 
++static int ctnetlink_start(struct netlink_callback *cb)
++{
++	const struct nlattr * const *cda = cb->data;
++	struct ctnetlink_filter *filter = NULL;
++
++	if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
++		filter = ctnetlink_alloc_filter(cda);
++		if (IS_ERR(filter))
++			return PTR_ERR(filter);
++	}
++
++	cb->data = filter;
++	return 0;
++}
++
+ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ {
+ 	struct ctnetlink_filter *filter = data;
+@@ -1240,19 +1255,12 @@ static int ctnetlink_get_conntrack(struct net *net, struct sock *ctnl,
+ 
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = ctnetlink_start,
+ 			.dump = ctnetlink_dump_table,
+ 			.done = ctnetlink_done,
++			.data = (void *)cda,
+ 		};
+ 
+-		if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
+-			struct ctnetlink_filter *filter;
+-
+-			filter = ctnetlink_alloc_filter(cda);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(ctnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/nfnetlink_acct.c b/net/netfilter/nfnetlink_acct.c
+index a0e5adf0b3b6..8fa8bf7c48e6 100644
+--- a/net/netfilter/nfnetlink_acct.c
++++ b/net/netfilter/nfnetlink_acct.c
+@@ -238,29 +238,33 @@ static const struct nla_policy filter_policy[NFACCT_FILTER_MAX + 1] = {
+ 	[NFACCT_FILTER_VALUE]	= { .type = NLA_U32 },
+ };
+ 
+-static struct nfacct_filter *
+-nfacct_filter_alloc(const struct nlattr * const attr)
++static int nfnl_acct_start(struct netlink_callback *cb)
+ {
+-	struct nfacct_filter *filter;
++	const struct nlattr *const attr = cb->data;
+ 	struct nlattr *tb[NFACCT_FILTER_MAX + 1];
++	struct nfacct_filter *filter;
+ 	int err;
+ 
++	if (!attr)
++		return 0;
++
+ 	err = nla_parse_nested(tb, NFACCT_FILTER_MAX, attr, filter_policy,
+ 			       NULL);
+ 	if (err < 0)
+-		return ERR_PTR(err);
++		return err;
+ 
+ 	if (!tb[NFACCT_FILTER_MASK] || !tb[NFACCT_FILTER_VALUE])
+-		return ERR_PTR(-EINVAL);
++		return -EINVAL;
+ 
+ 	filter = kzalloc(sizeof(struct nfacct_filter), GFP_KERNEL);
+ 	if (!filter)
+-		return ERR_PTR(-ENOMEM);
++		return -ENOMEM;
+ 
+ 	filter->mask = ntohl(nla_get_be32(tb[NFACCT_FILTER_MASK]));
+ 	filter->value = ntohl(nla_get_be32(tb[NFACCT_FILTER_VALUE]));
++	cb->data = filter;
+ 
+-	return filter;
++	return 0;
+ }
+ 
+ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+@@ -275,18 +279,11 @@ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
+ 			.dump = nfnl_acct_dump,
++			.start = nfnl_acct_start,
+ 			.done = nfnl_acct_done,
++			.data = (void *)tb[NFACCT_FILTER],
+ 		};
+ 
+-		if (tb[NFACCT_FILTER]) {
+-			struct nfacct_filter *filter;
+-
+-			filter = nfacct_filter_alloc(tb[NFACCT_FILTER]);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(nfnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index d0d8397c9588..aecadd471e1d 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -1178,12 +1178,7 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
+ 	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
+ 		return NULL;
+ 
+-	/* __GFP_NORETRY is not fully supported by kvmalloc but it should
+-	 * work reasonably well if sz is too large and bail out rather
+-	 * than shoot all processes down before realizing there is nothing
+-	 * more to reclaim.
+-	 */
+-	info = kvmalloc(sz, GFP_KERNEL | __GFP_NORETRY);
++	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
+ 	if (!info)
+ 		return NULL;
+ 
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index d152e48ea371..8596eed6d9a8 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -61,6 +61,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev,
+ 			 pool->fmr_attr.max_pages);
+ 	if (IS_ERR(frmr->mr)) {
+ 		pr_warn("RDS/IB: %s failed to allocate MR", __func__);
++		err = PTR_ERR(frmr->mr);
+ 		goto out_no_cigar;
+ 	}
+ 
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index 20d7d36b2fc9..005cb21348c9 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -265,10 +265,8 @@ static const char *ife_meta_id2name(u32 metaid)
+ #endif
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+-				void *val, int len, bool exists)
++static int load_metaops_and_vet(u32 metaid, void *val, int len)
+ {
+ 	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+@@ -276,13 +274,9 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ 	if (!ops) {
+ 		ret = -ENOENT;
+ #ifdef CONFIG_MODULES
+-		if (exists)
+-			spin_unlock_bh(&ife->tcf_lock);
+ 		rtnl_unlock();
+ 		request_module("ife-meta-%s", ife_meta_id2name(metaid));
+ 		rtnl_lock();
+-		if (exists)
+-			spin_lock_bh(&ife->tcf_lock);
+ 		ops = find_ife_oplist(metaid);
+ #endif
+ 	}
+@@ -299,24 +293,17 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ }
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+-			int len, bool atomic)
++static int __add_metainfo(const struct tcf_meta_ops *ops,
++			  struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			  int len, bool atomic, bool exists)
+ {
+ 	struct tcf_meta_info *mi = NULL;
+-	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+ 
+-	if (!ops)
+-		return -ENOENT;
+-
+ 	mi = kzalloc(sizeof(*mi), atomic ? GFP_ATOMIC : GFP_KERNEL);
+-	if (!mi) {
+-		/*put back what find_ife_oplist took */
+-		module_put(ops->owner);
++	if (!mi)
+ 		return -ENOMEM;
+-	}
+ 
+ 	mi->metaid = metaid;
+ 	mi->ops = ops;
+@@ -324,17 +311,49 @@ static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+ 		ret = ops->alloc(mi, metaval, atomic ? GFP_ATOMIC : GFP_KERNEL);
+ 		if (ret != 0) {
+ 			kfree(mi);
+-			module_put(ops->owner);
+ 			return ret;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	list_add_tail(&mi->metalist, &ife->metalist);
++	if (exists)
++		spin_unlock_bh(&ife->tcf_lock);
+ 
+ 	return ret;
+ }
+ 
+-static int use_all_metadata(struct tcf_ife_info *ife)
++static int add_metainfo_and_get_ops(const struct tcf_meta_ops *ops,
++				    struct tcf_ife_info *ife, u32 metaid,
++				    bool exists)
++{
++	int ret;
++
++	if (!try_module_get(ops->owner))
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, NULL, 0, true, exists);
++	if (ret)
++		module_put(ops->owner);
++	return ret;
++}
++
++static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			int len, bool exists)
++{
++	const struct tcf_meta_ops *ops = find_ife_oplist(metaid);
++	int ret;
++
++	if (!ops)
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, metaval, len, false, exists);
++	if (ret)
++		/*put back what find_ife_oplist took */
++		module_put(ops->owner);
++	return ret;
++}
++
++static int use_all_metadata(struct tcf_ife_info *ife, bool exists)
+ {
+ 	struct tcf_meta_ops *o;
+ 	int rc = 0;
+@@ -342,7 +361,7 @@ static int use_all_metadata(struct tcf_ife_info *ife)
+ 
+ 	read_lock(&ife_mod_lock);
+ 	list_for_each_entry(o, &ifeoplist, list) {
+-		rc = add_metainfo(ife, o->metaid, NULL, 0, true);
++		rc = add_metainfo_and_get_ops(o, ife, o->metaid, exists);
+ 		if (rc == 0)
+ 			installed += 1;
+ 	}
+@@ -393,7 +412,6 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 	struct tcf_meta_info *e, *n;
+ 
+ 	list_for_each_entry_safe(e, n, &ife->metalist, metalist) {
+-		module_put(e->ops->owner);
+ 		list_del(&e->metalist);
+ 		if (e->metaval) {
+ 			if (e->ops->release)
+@@ -401,6 +419,7 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 			else
+ 				kfree(e->metaval);
+ 		}
++		module_put(e->ops->owner);
+ 		kfree(e);
+ 	}
+ }
+@@ -419,7 +438,6 @@ static void tcf_ife_cleanup(struct tc_action *a)
+ 		kfree_rcu(p, rcu);
+ }
+ 
+-/* under ife->tcf_lock for existing action */
+ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			     bool exists)
+ {
+@@ -433,7 +451,7 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			val = nla_data(tb[i]);
+ 			len = nla_len(tb[i]);
+ 
+-			rc = load_metaops_and_vet(ife, i, val, len, exists);
++			rc = load_metaops_and_vet(i, val, len);
+ 			if (rc != 0)
+ 				return rc;
+ 
+@@ -531,8 +549,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ 		p->eth_type = ife_type;
+ 	}
+ 
+-	if (exists)
+-		spin_lock_bh(&ife->tcf_lock);
+ 
+ 	if (ret == ACT_P_CREATED)
+ 		INIT_LIST_HEAD(&ife->metalist);
+@@ -544,9 +560,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ metadata_parse_err:
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+@@ -561,18 +574,17 @@ metadata_parse_err:
+ 		 * as we can. You better have at least one else we are
+ 		 * going to bail out
+ 		 */
+-		err = use_all_metadata(ife);
++		err = use_all_metadata(ife, exists);
+ 		if (err) {
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	ife->tcf_action = parm->action;
+ 	if (exists)
+ 		spin_unlock_bh(&ife->tcf_lock);
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 8a925c72db5f..bad475c87688 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -109,16 +109,18 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ {
+ 	struct nlattr *keys_start = nla_nest_start(skb, TCA_PEDIT_KEYS_EX);
+ 
++	if (!keys_start)
++		goto nla_failure;
+ 	for (; n > 0; n--) {
+ 		struct nlattr *key_start;
+ 
+ 		key_start = nla_nest_start(skb, TCA_PEDIT_KEY_EX);
++		if (!key_start)
++			goto nla_failure;
+ 
+ 		if (nla_put_u16(skb, TCA_PEDIT_KEY_EX_HTYPE, keys_ex->htype) ||
+-		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd)) {
+-			nlmsg_trim(skb, keys_start);
+-			return -EINVAL;
+-		}
++		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd))
++			goto nla_failure;
+ 
+ 		nla_nest_end(skb, key_start);
+ 
+@@ -128,6 +130,9 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ 	nla_nest_end(skb, keys_start);
+ 
+ 	return 0;
++nla_failure:
++	nla_nest_cancel(skb, keys_start);
++	return -EINVAL;
+ }
+ 
+ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+@@ -395,7 +400,10 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
+ 	opt->bindcnt = p->tcf_bindcnt - bind;
+ 
+ 	if (p->tcfp_keys_ex) {
+-		tcf_pedit_key_ex_dump(skb, p->tcfp_keys_ex, p->tcfp_nkeys);
++		if (tcf_pedit_key_ex_dump(skb,
++					  p->tcfp_keys_ex,
++					  p->tcfp_nkeys))
++			goto nla_put_failure;
+ 
+ 		if (nla_put(skb, TCA_PEDIT_PARMS_EX, s, opt))
+ 			goto nla_put_failure;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index fb861f90fde6..260749956ef3 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -912,6 +912,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	struct nlattr *opt = tca[TCA_OPTIONS];
+ 	struct nlattr *tb[TCA_U32_MAX + 1];
+ 	u32 htid, flags = 0;
++	size_t sel_size;
+ 	int err;
+ #ifdef CONFIG_CLS_U32_PERF
+ 	size_t size;
+@@ -1074,8 +1075,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ 
+ 	s = nla_data(tb[TCA_U32_SEL]);
++	sel_size = struct_size(s, keys, s->nkeys);
++	if (nla_len(tb[TCA_U32_SEL]) < sel_size) {
++		err = -EINVAL;
++		goto erridr;
++	}
+ 
+-	n = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key), GFP_KERNEL);
++	n = kzalloc(offsetof(typeof(*n), sel) + sel_size, GFP_KERNEL);
+ 	if (n == NULL) {
+ 		err = -ENOBUFS;
+ 		goto erridr;
+@@ -1090,7 +1096,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ #endif
+ 
+-	memcpy(&n->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key));
++	memcpy(&n->sel, s, sel_size);
+ 	RCU_INIT_POINTER(n->ht_up, ht);
+ 	n->handle = handle;
+ 	n->fshift = s->hmask ? ffs(ntohl(s->hmask)) - 1 : 0;
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index ef5c9a82d4e8..a644292f9faf 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -215,7 +215,6 @@ static const struct seq_operations sctp_eps_ops = {
+ struct sctp_ht_iter {
+ 	struct seq_net_private p;
+ 	struct rhashtable_iter hti;
+-	int start_fail;
+ };
+ 
+ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+@@ -224,7 +223,6 @@ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+ 
+ 	sctp_transport_walk_start(&iter->hti);
+ 
+-	iter->start_fail = 0;
+ 	return sctp_transport_get_idx(seq_file_net(seq), &iter->hti, *pos);
+ }
+ 
+@@ -232,8 +230,6 @@ static void sctp_transport_seq_stop(struct seq_file *seq, void *v)
+ {
+ 	struct sctp_ht_iter *iter = seq->private;
+ 
+-	if (iter->start_fail)
+-		return;
+ 	sctp_transport_walk_stop(&iter->hti);
+ }
+ 
+@@ -264,8 +260,6 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 	epb = &assoc->base;
+ 	sk = epb->sk;
+@@ -322,8 +316,6 @@ static int sctp_remaddr_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 
+ 	list_for_each_entry_rcu(tsp, &assoc->peer.transport_addr_list,
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index ce620e878538..50ee07cd20c4 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4881,9 +4881,14 @@ struct sctp_transport *sctp_transport_get_next(struct net *net,
+ 			break;
+ 		}
+ 
++		if (!sctp_transport_hold(t))
++			continue;
++
+ 		if (net_eq(sock_net(t->asoc->base.sk), net) &&
+ 		    t->asoc->peer.primary_path == t)
+ 			break;
++
++		sctp_transport_put(t);
+ 	}
+ 
+ 	return t;
+@@ -4893,13 +4898,18 @@ struct sctp_transport *sctp_transport_get_idx(struct net *net,
+ 					      struct rhashtable_iter *iter,
+ 					      int pos)
+ {
+-	void *obj = SEQ_START_TOKEN;
++	struct sctp_transport *t;
+ 
+-	while (pos && (obj = sctp_transport_get_next(net, iter)) &&
+-	       !IS_ERR(obj))
+-		pos--;
++	if (!pos)
++		return SEQ_START_TOKEN;
+ 
+-	return obj;
++	while ((t = sctp_transport_get_next(net, iter)) && !IS_ERR(t)) {
++		if (!--pos)
++			break;
++		sctp_transport_put(t);
++	}
++
++	return t;
+ }
+ 
+ int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *),
+@@ -4958,8 +4968,6 @@ again:
+ 
+ 	tsp = sctp_transport_get_idx(net, &hti, *pos + 1);
+ 	for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {
+-		if (!sctp_transport_hold(tsp))
+-			continue;
+ 		ret = cb(tsp, p);
+ 		if (ret)
+ 			break;
+diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+index 8654494b4d0a..834eb2b9e41b 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
++++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+@@ -169,7 +169,7 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 	struct scatterlist              sg[1];
+ 	int err = -1;
+ 	u8 *checksumdata;
+-	u8 rc4salt[4];
++	u8 *rc4salt;
+ 	struct crypto_ahash *md5;
+ 	struct crypto_ahash *hmac_md5;
+ 	struct ahash_request *req;
+@@ -183,14 +183,18 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 		return GSS_S_FAILURE;
+ 	}
+ 
++	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
++	if (!rc4salt)
++		return GSS_S_FAILURE;
++
+ 	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
+ 		dprintk("%s: invalid usage value %u\n", __func__, usage);
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 	}
+ 
+ 	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
+ 	if (!checksumdata)
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 
+ 	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+ 	if (IS_ERR(md5))
+@@ -258,6 +262,8 @@ out_free_md5:
+ 	crypto_free_ahash(md5);
+ out_free_cksum:
+ 	kfree(checksumdata);
++out_free_rc4salt:
++	kfree(rc4salt);
+ 	return err ? GSS_S_FAILURE : 0;
+ }
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index bebe88cae07b..ff968c7afef6 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -980,20 +980,17 @@ int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	list_for_each_entry(dst, l, list) {
+-		if (dst->value != value)
+-			continue;
+-		return dst;
++		if (dst->node == node && dst->port == port)
++			return dst;
+ 	}
+ 	return NULL;
+ }
+ 
+ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	if (tipc_dest_find(l, node, port))
+@@ -1002,7 +999,8 @@ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ 	dst = kmalloc(sizeof(*dst), GFP_ATOMIC);
+ 	if (unlikely(!dst))
+ 		return false;
+-	dst->value = value;
++	dst->node = node;
++	dst->port = port;
+ 	list_add(&dst->list, l);
+ 	return true;
+ }
+diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
+index 0febba41da86..892bd750b85f 100644
+--- a/net/tipc/name_table.h
++++ b/net/tipc/name_table.h
+@@ -133,13 +133,8 @@ void tipc_nametbl_stop(struct net *net);
+ 
+ struct tipc_dest {
+ 	struct list_head list;
+-	union {
+-		struct {
+-			u32 port;
+-			u32 node;
+-		};
+-		u64 value;
+-	};
++	u32 port;
++	u32 node;
+ };
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 930852c54d7a..0a5fa347135e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2675,6 +2675,8 @@ void tipc_sk_reinit(struct net *net)
+ 
+ 		rhashtable_walk_stop(&iter);
+ 	} while (tsk == ERR_PTR(-EAGAIN));
++
++	rhashtable_walk_exit(&iter);
+ }
+ 
+ static struct tipc_sock *tipc_sk_lookup(struct net *net, u32 portid)
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 301f22430469..45188d920013 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -45,6 +45,7 @@
+ MODULE_AUTHOR("Mellanox Technologies");
+ MODULE_DESCRIPTION("Transport Layer Security Support");
+ MODULE_LICENSE("Dual BSD/GPL");
++MODULE_ALIAS_TCP_ULP("tls");
+ 
+ enum {
+ 	TLSV4,
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 4b4d78fffe30..da9070889223 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -679,8 +679,9 @@ int main(int argc, char **argv)
+ 		return EXIT_FAIL_OPTION;
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd[prog_num], xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
+index e4e9ba52bff0..bb278447299c 100644
+--- a/samples/bpf/xdp_rxq_info_user.c
++++ b/samples/bpf/xdp_rxq_info_user.c
+@@ -534,8 +534,9 @@ int main(int argc, char **argv)
+ 		exit(EXIT_FAIL_BPF);
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/scripts/coccicheck b/scripts/coccicheck
+index 9fedca611b7f..e04d328210ac 100755
+--- a/scripts/coccicheck
++++ b/scripts/coccicheck
+@@ -128,9 +128,10 @@ run_cmd_parmap() {
+ 	fi
+ 	echo $@ >>$DEBUG_FILE
+ 	$@ 2>>$DEBUG_FILE
+-	if [[ $? -ne 0 ]]; then
++	err=$?
++	if [[ $err -ne 0 ]]; then
+ 		echo "coccicheck failed"
+-		exit $?
++		exit $err
+ 	fi
+ }
+ 
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 999d585eaa73..e5f0aad75b96 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -15,9 +15,9 @@ if ! test -r System.map ; then
+ fi
+ 
+ if [ -z $(command -v $DEPMOD) ]; then
+-	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "Warning: 'make modules_install' requires $DEPMOD. Please install it." >&2
+ 	echo "This is probably in the kmod package." >&2
+-	exit 1
++	exit 0
+ fi
+ 
+ # older versions of depmod require the version string to start with three
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 1663fb19343a..b95cf57782a3 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -672,7 +672,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ 			if (ELF_ST_TYPE(sym->st_info) == STT_SPARC_REGISTER)
+ 				break;
+ 			if (symname[0] == '.') {
+-				char *munged = strdup(symname);
++				char *munged = NOFAIL(strdup(symname));
+ 				munged[0] = '_';
+ 				munged[1] = toupper(munged[1]);
+ 				symname = munged;
+@@ -1318,7 +1318,7 @@ static Elf_Sym *find_elf_symbol2(struct elf_info *elf, Elf_Addr addr,
+ static char *sec2annotation(const char *s)
+ {
+ 	if (match(s, init_exit_sections)) {
+-		char *p = malloc(20);
++		char *p = NOFAIL(malloc(20));
+ 		char *r = p;
+ 
+ 		*p++ = '_';
+@@ -1338,7 +1338,7 @@ static char *sec2annotation(const char *s)
+ 			strcat(p, " ");
+ 		return r;
+ 	} else {
+-		return strdup("");
++		return NOFAIL(strdup(""));
+ 	}
+ }
+ 
+@@ -2036,7 +2036,7 @@ void buf_write(struct buffer *buf, const char *s, int len)
+ {
+ 	if (buf->size - buf->pos < len) {
+ 		buf->size += len + SZ;
+-		buf->p = realloc(buf->p, buf->size);
++		buf->p = NOFAIL(realloc(buf->p, buf->size));
+ 	}
+ 	strncpy(buf->p + buf->pos, s, len);
+ 	buf->pos += len;
+diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
+index b0f9dc3f765a..1a7cec5d9cac 100644
+--- a/security/apparmor/policy_ns.c
++++ b/security/apparmor/policy_ns.c
+@@ -255,7 +255,7 @@ static struct aa_ns *__aa_create_ns(struct aa_ns *parent, const char *name,
+ 
+ 	ns = alloc_ns(parent->base.hname, name);
+ 	if (!ns)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	ns->level = parent->level + 1;
+ 	mutex_lock_nested(&ns->lock, ns->level);
+ 	error = __aafs_ns_mkdir(ns, ns_subns_dir(parent), name, dir);
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index b203f7758f97..1a68d27e72b4 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 79d3709b0671..0b66d7283b00 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -1365,13 +1365,18 @@ static int sel_make_bools(struct selinux_fs_info *fsi)
+ 
+ 		ret = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG | S_IRUGO | S_IWUSR);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		ret = -ENAMETOOLONG;
+ 		len = snprintf(page, PAGE_SIZE, "/%s/%s", BOOL_DIR_NAME, names[i]);
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE) {
++			dput(dentry);
++			iput(inode);
+ 			goto out;
++		}
+ 
+ 		isec = (struct inode_security_struct *)inode->i_security;
+ 		ret = security_genfs_sid(fsi->state, "selinuxfs", page,
+@@ -1586,8 +1591,10 @@ static int sel_make_avc_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|files[i].mode);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = files[i].ops;
+ 		inode->i_ino = ++fsi->last_ino;
+@@ -1632,8 +1639,10 @@ static int sel_make_initcon_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_initcon_ops;
+ 		inode->i_ino = i|SEL_INITCON_INO_OFFSET;
+@@ -1733,8 +1742,10 @@ static int sel_make_perm_files(char *objclass, int classvalue,
+ 
+ 		rc = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		inode->i_fop = &sel_perm_ops;
+ 		/* i+1 since perm values are 1-indexed */
+@@ -1763,8 +1774,10 @@ static int sel_make_class_dir_entries(char *classname, int index,
+ 		return -ENOMEM;
+ 
+ 	inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		return -ENOMEM;
++	}
+ 
+ 	inode->i_fop = &sel_class_ops;
+ 	inode->i_ino = sel_class_to_ino(index);
+@@ -1838,8 +1851,10 @@ static int sel_make_policycap(struct selinux_fs_info *fsi)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(fsi->sb, S_IFREG | 0444);
+-		if (inode == NULL)
++		if (inode == NULL) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_policycap_ops;
+ 		inode->i_ino = iter | SEL_POLICYCAP_INO_OFFSET;
+@@ -1932,8 +1947,10 @@ static int sel_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	ret = -ENOMEM;
+ 	inode = sel_make_inode(sb, S_IFCHR | S_IRUGO | S_IWUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		goto err;
++	}
+ 
+ 	inode->i_ino = ++fsi->last_ino;
+ 	isec = (struct inode_security_struct *)inode->i_security;
+diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
+index 8a0181a2db08..47feef30dadb 100644
+--- a/sound/soc/codecs/rt5677.c
++++ b/sound/soc/codecs/rt5677.c
+@@ -5007,7 +5007,7 @@ static const struct regmap_config rt5677_regmap = {
+ };
+ 
+ static const struct of_device_id rt5677_of_match[] = {
+-	{ .compatible = "realtek,rt5677", RT5677 },
++	{ .compatible = "realtek,rt5677", .data = (const void *)RT5677 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, rt5677_of_match);
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 7fdfdf3f6e67..14f1b0c0d286 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -2432,6 +2432,7 @@ static int wm8994_set_dai_sysclk(struct snd_soc_dai *dai,
+ 			snd_soc_component_update_bits(component, WM8994_POWER_MANAGEMENT_2,
+ 					    WM8994_OPCLK_ENA, 0);
+ 		}
++		break;
+ 
+ 	default:
+ 		return -EINVAL;
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index 1120e39c1b00..5ccfce87e693 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -194,6 +194,7 @@ struct auxtrace_record *arm_spe_recording_init(int *err,
+ 	sper->itr.read_finish = arm_spe_read_finish;
+ 	sper->itr.alignment = 0;
+ 
++	*err = 0;
+ 	return &sper->itr;
+ }
+ 
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 53d83d7e6a09..20e7d74d86cd 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -141,8 +141,10 @@ void arch__post_process_probe_trace_events(struct perf_probe_event *pev,
+ 	for (i = 0; i < ntevs; i++) {
+ 		tev = &pev->tevs[i];
+ 		map__for_each_symbol(map, sym, tmp) {
+-			if (map->unmap_ip(map, sym->start) == tev->point.address)
++			if (map->unmap_ip(map, sym->start) == tev->point.address) {
+ 				arch__fix_tev_from_maps(pev, tev, map, sym);
++				break;
++			}
+ 		}
+ 	}
+ }
+diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
+index 5be021701f34..cf8bd123cf73 100644
+--- a/tools/perf/util/namespaces.c
++++ b/tools/perf/util/namespaces.c
+@@ -139,6 +139,9 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
+ {
+ 	struct nsinfo *nnsi;
+ 
++	if (nsi == NULL)
++		return NULL;
++
+ 	nnsi = calloc(1, sizeof(*nnsi));
+ 	if (nnsi != NULL) {
+ 		nnsi->pid = nsi->pid;
+diff --git a/tools/testing/selftests/powerpc/harness.c b/tools/testing/selftests/powerpc/harness.c
+index 66d31de60b9a..9d7166dfad1e 100644
+--- a/tools/testing/selftests/powerpc/harness.c
++++ b/tools/testing/selftests/powerpc/harness.c
+@@ -85,13 +85,13 @@ wait:
+ 	return status;
+ }
+ 
+-static void alarm_handler(int signum)
++static void sig_handler(int signum)
+ {
+-	/* Jut wake us up from waitpid */
++	/* Just wake us up from waitpid */
+ }
+ 
+-static struct sigaction alarm_action = {
+-	.sa_handler = alarm_handler,
++static struct sigaction sig_action = {
++	.sa_handler = sig_handler,
+ };
+ 
+ void test_harness_set_timeout(uint64_t time)
+@@ -106,8 +106,14 @@ int test_harness(int (test_function)(void), char *name)
+ 	test_start(name);
+ 	test_set_git_version(GIT_VERSION);
+ 
+-	if (sigaction(SIGALRM, &alarm_action, NULL)) {
+-		perror("sigaction");
++	if (sigaction(SIGINT, &sig_action, NULL)) {
++		perror("sigaction (sigint)");
++		test_error(name);
++		return 1;
++	}
++
++	if (sigaction(SIGALRM, &sig_action, NULL)) {
++		perror("sigaction (sigalrm)");
+ 		test_error(name);
+ 		return 1;
+ 	}


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     107ee01091cf08b22d5d50fae81187efad1f89f6
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 24 11:46:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:23 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=107ee010

Linux patch 4.18.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1004_linux-4.18.5.patch | 742 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 746 insertions(+)

diff --git a/0000_README b/0000_README
index c7d6cc0..8da0979 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-4.18.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.4
 
+Patch:  1004_linux-4.18.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-4.18.5.patch b/1004_linux-4.18.5.patch
new file mode 100644
index 0000000..abf70a2
--- /dev/null
+++ b/1004_linux-4.18.5.patch
@@ -0,0 +1,742 @@
+diff --git a/Makefile b/Makefile
+index ef0dd566c104..a41692c5827a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 6f84b6acc86e..8a63515f03bf 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ {
+ 	volatile unsigned int *a;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+ 	while (__ldcw(a) == 0)
+ 		while (*a == 0)
+@@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ 				local_irq_disable();
+ 			} else
+ 				cpu_relax();
+-	mb();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+ 
+ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ {
+ 	volatile unsigned int *a;
+-	mb();
++
+ 	a = __ldcw_align(x);
+-	*a = 1;
+ 	mb();
++	*a = 1;
+ }
+ 
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+@@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x)
+ 	volatile unsigned int *a;
+ 	int ret;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+         ret = __ldcw(a) != 0;
+-	mb();
+ 
+ 	return ret;
+ }
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4886a6db42e9..5f7e57fcaeef 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -629,12 +629,12 @@ cas_action:
+ 	stw	%r1, 4(%sr2,%r20)
+ #endif
+ 	/* The load and store could fail */
+-1:	ldw,ma	0(%r26), %r28
++1:	ldw	0(%r26), %r28
+ 	sub,<>	%r28, %r25, %r0
+-2:	stw,ma	%r24, 0(%r26)
++2:	stw	%r24, 0(%r26)
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ 	/* Clear thread register indicator */
+ 	stw	%r0, 4(%sr2,%r20)
+@@ -798,30 +798,30 @@ cas2_action:
+ 	ldo	1(%r0),%r28
+ 
+ 	/* 8bit CAS */
+-13:	ldb,ma	0(%r26), %r29
++13:	ldb	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-14:	stb,ma	%r24, 0(%r26)
++14:	stb	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 16bit CAS */
+-15:	ldh,ma	0(%r26), %r29
++15:	ldh	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-16:	sth,ma	%r24, 0(%r26)
++16:	sth	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 32bit CAS */
+-17:	ldw,ma	0(%r26), %r29
++17:	ldw	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-18:	stw,ma	%r24, 0(%r26)
++18:	stw	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+@@ -829,10 +829,10 @@ cas2_action:
+ 
+ 	/* 64bit CAS */
+ #ifdef CONFIG_64BIT
+-19:	ldd,ma	0(%r26), %r29
++19:	ldd	0(%r26), %r29
+ 	sub,*=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-20:	std,ma	%r24, 0(%r26)
++20:	std	%r24, 0(%r26)
+ 	copy	%r0, %r28
+ #else
+ 	/* Compare first word */
+@@ -851,7 +851,7 @@ cas2_action:
+ cas2_end:
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ 	/* Enable interrupts */
+ 	ssm	PSW_SM_I, %r0
+ 	/* Return to userspace, set no error */
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index a8b277362931..4cb8f1f7b593 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -117,25 +117,35 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
+ 
+ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
+-		return sprintf(buf, "Not affected\n");
++	struct seq_buf s;
++
++	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+-	if (barrier_nospec_enabled)
+-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
++	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
++		if (barrier_nospec_enabled)
++			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
++		else
++			seq_buf_printf(&s, "Vulnerable");
+ 
+-	return sprintf(buf, "Vulnerable\n");
++		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
++			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
++
++		seq_buf_printf(&s, "\n");
++	} else
++		seq_buf_printf(&s, "Not affected\n");
++
++	return s.len;
+ }
+ 
+ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	bool bcs, ccd, ori;
+ 	struct seq_buf s;
++	bool bcs, ccd;
+ 
+ 	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+ 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+-	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
+ 
+ 	if (bcs || ccd) {
+ 		seq_buf_printf(&s, "Mitigation: ");
+@@ -151,9 +161,6 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ 	} else
+ 		seq_buf_printf(&s, "Vulnerable");
+ 
+-	if (ori)
+-		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+-
+ 	seq_buf_printf(&s, "\n");
+ 
+ 	return s.len;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 79e409974ccc..682286aca881 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -971,6 +971,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+ 
+ extern unsigned long arch_align_stack(unsigned long sp);
+ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
++extern void free_kernel_image_pages(void *begin, void *end);
+ 
+ void default_idle(void);
+ #ifdef	CONFIG_XEN
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index bd090367236c..34cffcef7375 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, int numpages);
+ int set_memory_4k(unsigned long addr, int numpages);
+ int set_memory_encrypted(unsigned long addr, int numpages);
+ int set_memory_decrypted(unsigned long addr, int numpages);
++int set_memory_np_noalias(unsigned long addr, int numpages);
+ 
+ int set_memory_array_uc(unsigned long *addr, int addrinarray);
+ int set_memory_array_wc(unsigned long *addr, int addrinarray);
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 83241eb71cd4..acfab322fbe0 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -775,13 +775,44 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ 	}
+ }
+ 
++/*
++ * begin/end can be in the direct map or the "high kernel mapping"
++ * used for the kernel image only.  free_init_pages() will do the
++ * right thing for either kind of address.
++ */
++void free_kernel_image_pages(void *begin, void *end)
++{
++	unsigned long begin_ul = (unsigned long)begin;
++	unsigned long end_ul = (unsigned long)end;
++	unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT;
++
++
++	free_init_pages("unused kernel image", begin_ul, end_ul);
++
++	/*
++	 * PTI maps some of the kernel into userspace.  For performance,
++	 * this includes some kernel areas that do not contain secrets.
++	 * Those areas might be adjacent to the parts of the kernel image
++	 * being freed, which may contain secrets.  Remove the "high kernel
++	 * image mapping" for these freed areas, ensuring they are not even
++	 * potentially vulnerable to Meltdown regardless of the specific
++	 * optimizations PTI is currently using.
++	 *
++	 * The "noalias" prevents unmapping the direct map alias which is
++	 * needed to access the freed pages.
++	 *
++	 * This is only valid for 64bit kernels. 32bit has only one mapping
++	 * which can't be treated in this way for obvious reasons.
++	 */
++	if (IS_ENABLED(CONFIG_X86_64) && cpu_feature_enabled(X86_FEATURE_PTI))
++		set_memory_np_noalias(begin_ul, len_pages);
++}
++
+ void __ref free_initmem(void)
+ {
+ 	e820__reallocate_tables();
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long)(&__init_begin),
+-			(unsigned long)(&__init_end));
++	free_kernel_image_pages(&__init_begin, &__init_end);
+ }
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index a688617c727e..68c292cb1ebf 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1283,12 +1283,8 @@ void mark_rodata_ro(void)
+ 	set_memory_ro(start, (end-start) >> PAGE_SHIFT);
+ #endif
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(text_end)),
+-			(unsigned long) __va(__pa_symbol(rodata_start)));
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(rodata_end)),
+-			(unsigned long) __va(__pa_symbol(_sdata)));
++	free_kernel_image_pages((void *)text_end, (void *)rodata_start);
++	free_kernel_image_pages((void *)rodata_end, (void *)_sdata);
+ 
+ 	debug_checkwx();
+ 
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 29505724202a..8d6c34fe49be 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock);
+ #define CPA_FLUSHTLB 1
+ #define CPA_ARRAY 2
+ #define CPA_PAGES_ARRAY 4
++#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */
+ 
+ #ifdef CONFIG_PROC_FS
+ static unsigned long direct_pages_count[PG_LEVEL_NUM];
+@@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
+ 
+ 	/* No alias checking for _NX bit modifications */
+ 	checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX;
++	/* Has caller explicitly disabled alias checking? */
++	if (in_flag & CPA_NO_CHECK_ALIAS)
++		checkalias = 0;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, checkalias);
+ 
+@@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, int numpages)
+ 	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+ }
+ 
++int set_memory_np_noalias(unsigned long addr, int numpages)
++{
++	int cpa_flags = CPA_NO_CHECK_ALIAS;
++
++	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
++					__pgprot(_PAGE_PRESENT), 0,
++					cpa_flags, NULL);
++}
++
+ int set_memory_4k(unsigned long addr, int numpages)
+ {
+ 	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 3bb82e511eca..7d3edd713932 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -215,6 +215,7 @@ const char * const edac_mem_types[] = {
+ 	[MEM_LRDDR3]	= "Load-Reduced-DDR3-RAM",
+ 	[MEM_DDR4]	= "Unbuffered-DDR4",
+ 	[MEM_RDDR4]	= "Registered-DDR4",
++	[MEM_LRDDR4]	= "Load-Reduced-DDR4-RAM",
+ 	[MEM_NVDIMM]	= "Non-volatile-RAM",
+ };
+ EXPORT_SYMBOL_GPL(edac_mem_types);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index fc818b4d849c..a44c3d58fef4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -31,7 +31,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+-
++#include <linux/nospec.h>
+ 
+ static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
+ 
+@@ -393,6 +393,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
+ 			count = -EINVAL;
+ 			goto fail;
+ 		}
++		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+ 
+ 		amdgpu_dpm_get_pp_num_states(adev, &data);
+ 		state = data.states[idx];
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index df4e4a07db3d..14dce5c201d5 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -43,6 +43,8 @@
+ #include <linux/mdev.h>
+ #include <linux/debugfs.h>
+ 
++#include <linux/nospec.h>
++
+ #include "i915_drv.h"
+ #include "gvt.h"
+ 
+@@ -1084,7 +1086,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 	} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+ 		struct vfio_region_info info;
+ 		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+-		int i, ret;
++		unsigned int i;
++		int ret;
+ 		struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
+ 		size_t size;
+ 		int nr_areas = 1;
+@@ -1169,6 +1172,10 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 				if (info.index >= VFIO_PCI_NUM_REGIONS +
+ 						vgpu->vdev.num_regions)
+ 					return -EINVAL;
++				info.index =
++					array_index_nospec(info.index,
++							VFIO_PCI_NUM_REGIONS +
++							vgpu->vdev.num_regions);
+ 
+ 				i = info.index - VFIO_PCI_NUM_REGIONS;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 498c5e891649..ad6adefb64da 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -668,9 +668,6 @@ static int i2c_imx_dma_read(struct imx_i2c_struct *i2c_imx,
+ 	struct imx_i2c_dma *dma = i2c_imx->dma;
+ 	struct device *dev = &i2c_imx->adapter.dev;
+ 
+-	temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
+-	temp |= I2CR_DMAEN;
+-	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 
+ 	dma->chan_using = dma->chan_rx;
+ 	dma->dma_transfer_dir = DMA_DEV_TO_MEM;
+@@ -783,6 +780,7 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	int i, result;
+ 	unsigned int temp;
+ 	int block_data = msgs->flags & I2C_M_RECV_LEN;
++	int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev,
+ 		"<%s> write slave address: addr=0x%x\n",
+@@ -809,12 +807,14 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	 */
+ 	if ((msgs->len - 1) || block_data)
+ 		temp &= ~I2CR_TXAK;
++	if (use_dma)
++		temp |= I2CR_DMAEN;
+ 	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 	imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); /* dummy read */
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev, "<%s> read data\n", __func__);
+ 
+-	if (i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data)
++	if (use_dma)
+ 		return i2c_imx_dma_read(i2c_imx, msgs, is_lastmsg);
+ 
+ 	/* read data */
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 7c3b4740b94b..b8f303dea305 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -482,11 +482,16 @@ static int acpi_gsb_i2c_write_bytes(struct i2c_client *client,
+ 	msgs[0].buf = buffer;
+ 
+ 	ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
+-	if (ret < 0)
+-		dev_err(&client->adapter->dev, "i2c write failed\n");
+ 
+ 	kfree(buffer);
+-	return ret;
++
++	if (ret < 0) {
++		dev_err(&client->adapter->dev, "i2c write failed: %d\n", ret);
++		return ret;
++	}
++
++	/* 1 transfer must have completed successfully */
++	return (ret == 1) ? 0 : -EIO;
+ }
+ 
+ static acpi_status
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0fae816fba39..44604af23b3a 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -952,6 +952,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 
+ 	bus = bridge->bus;
+ 
++	pci_bus_size_bridges(bus);
+ 	pci_bus_assign_resources(bus);
+ 
+ 	list_for_each_entry(child, &bus->children, node)
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index af92fed46ab7..fd93783a87b0 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -438,8 +438,17 @@ int __pci_hp_register(struct hotplug_slot *slot, struct pci_bus *bus,
+ 	list_add(&slot->slot_list, &pci_hotplug_slot_list);
+ 
+ 	result = fs_add_slot(pci_slot);
++	if (result)
++		goto err_list_del;
++
+ 	kobject_uevent(&pci_slot->kobj, KOBJ_ADD);
+ 	dbg("Added slot %s to the list\n", name);
++	goto out;
++
++err_list_del:
++	list_del(&slot->slot_list);
++	pci_slot->hotplug = NULL;
++	pci_destroy_slot(pci_slot);
+ out:
+ 	mutex_unlock(&pci_hp_mutex);
+ 	return result;
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 5f892065585e..fca87a1a2b22 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -119,6 +119,7 @@ int pciehp_unconfigure_device(struct slot *p_slot);
+ void pciehp_queue_pushbutton_work(struct work_struct *work);
+ struct controller *pcie_init(struct pcie_device *dev);
+ int pcie_init_notification(struct controller *ctrl);
++void pcie_shutdown_notification(struct controller *ctrl);
+ int pciehp_enable_slot(struct slot *p_slot);
+ int pciehp_disable_slot(struct slot *p_slot);
+ void pcie_reenable_notification(struct controller *ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 44a6a63802d5..2ba59fc94827 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -62,6 +62,12 @@ static int reset_slot(struct hotplug_slot *slot, int probe);
+  */
+ static void release_slot(struct hotplug_slot *hotplug_slot)
+ {
++	struct slot *slot = hotplug_slot->private;
++
++	/* queued work needs hotplug_slot name */
++	cancel_delayed_work(&slot->work);
++	drain_workqueue(slot->wq);
++
+ 	kfree(hotplug_slot->ops);
+ 	kfree(hotplug_slot->info);
+ 	kfree(hotplug_slot);
+@@ -264,6 +270,7 @@ static void pciehp_remove(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl = get_service_data(dev);
+ 
++	pcie_shutdown_notification(ctrl);
+ 	cleanup_slot(ctrl);
+ 	pciehp_release_ctrl(ctrl);
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 718b6073afad..aff191b4552c 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -539,8 +539,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ 	struct controller *ctrl = (struct controller *)dev_id;
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	struct pci_bus *subordinate = pdev->subordinate;
+-	struct pci_dev *dev;
+ 	struct slot *slot = ctrl->slot;
+ 	u16 status, events;
+ 	u8 present;
+@@ -588,14 +586,9 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ 		wake_up(&ctrl->queue);
+ 	}
+ 
+-	if (subordinate) {
+-		list_for_each_entry(dev, &subordinate->devices, bus_list) {
+-			if (dev->ignore_hotplug) {
+-				ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n",
+-					 events, pci_name(dev));
+-				return IRQ_HANDLED;
+-			}
+-		}
++	if (pdev->ignore_hotplug) {
++		ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events);
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	/* Check Attention Button Pressed */
+@@ -765,7 +758,7 @@ int pcie_init_notification(struct controller *ctrl)
+ 	return 0;
+ }
+ 
+-static void pcie_shutdown_notification(struct controller *ctrl)
++void pcie_shutdown_notification(struct controller *ctrl)
+ {
+ 	if (ctrl->notification_enabled) {
+ 		pcie_disable_notification(ctrl);
+@@ -800,7 +793,7 @@ abort:
+ static void pcie_cleanup_slot(struct controller *ctrl)
+ {
+ 	struct slot *slot = ctrl->slot;
+-	cancel_delayed_work(&slot->work);
++
+ 	destroy_workqueue(slot->wq);
+ 	kfree(slot);
+ }
+@@ -893,7 +886,6 @@ abort:
+ 
+ void pciehp_release_ctrl(struct controller *ctrl)
+ {
+-	pcie_shutdown_notification(ctrl);
+ 	pcie_cleanup_slot(ctrl);
+ 	kfree(ctrl);
+ }
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 89ee6a2b6eb8..5d1698265da5 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -632,13 +632,11 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ 	/*
+ 	 * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over
+ 	 * system-wide suspend/resume confuses the platform firmware, so avoid
+-	 * doing that, unless the bridge has a driver that should take care of
+-	 * the PM handling.  According to Section 16.1.6 of ACPI 6.2, endpoint
++	 * doing that.  According to Section 16.1.6 of ACPI 6.2, endpoint
+ 	 * devices are expected to be in D3 before invoking the S3 entry path
+ 	 * from the firmware, so they should not be affected by this issue.
+ 	 */
+-	if (pci_is_bridge(dev) && !dev->driver &&
+-	    acpi_target_system_state() != ACPI_STATE_S0)
++	if (pci_is_bridge(dev) && acpi_target_system_state() != ACPI_STATE_S0)
+ 		return true;
+ 
+ 	if (!adev || !acpi_device_power_manageable(adev))
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 316496e99da9..0abe2865a3a5 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1171,6 +1171,33 @@ static void pci_restore_config_space(struct pci_dev *pdev)
+ 	}
+ }
+ 
++static void pci_restore_rebar_state(struct pci_dev *pdev)
++{
++	unsigned int pos, nbars, i;
++	u32 ctrl;
++
++	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
++	if (!pos)
++		return;
++
++	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
++		    PCI_REBAR_CTRL_NBAR_SHIFT;
++
++	for (i = 0; i < nbars; i++, pos += 8) {
++		struct resource *res;
++		int bar_idx, size;
++
++		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
++		res = pdev->resource + bar_idx;
++		size = order_base_2((resource_size(res) >> 20) | 1) - 1;
++		ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
++		ctrl |= size << 8;
++		pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
++	}
++}
++
+ /**
+  * pci_restore_state - Restore the saved state of a PCI device
+  * @dev: - PCI device that we're dealing with
+@@ -1186,6 +1213,7 @@ void pci_restore_state(struct pci_dev *dev)
+ 	pci_restore_pri_state(dev);
+ 	pci_restore_ats_state(dev);
+ 	pci_restore_vc_state(dev);
++	pci_restore_rebar_state(dev);
+ 
+ 	pci_cleanup_aer_error_status_regs(dev);
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 611adcd9c169..b2857865c0aa 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1730,6 +1730,10 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+ 
++	/* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
++	if (dev->is_virtfn)
++		return;
++
+ 	mps = pcie_get_mps(dev);
+ 	p_mps = pcie_get_mps(bridge);
+ 
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index b0e2c4847a5d..678406e0948b 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -625,7 +625,7 @@ int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags)
+ 	if (tty->driver != ptm_driver)
+ 		return -EIO;
+ 
+-	fd = get_unused_fd_flags(0);
++	fd = get_unused_fd_flags(flags);
+ 	if (fd < 0) {
+ 		retval = fd;
+ 		goto err;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index f7ab34088162..8b24d3d42cb3 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -14,6 +14,7 @@
+ #include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/backing-dev.h>
+ #include <trace/events/ext4.h>
+ 
+@@ -2140,7 +2141,8 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
+ 		 * This should tell if fe_len is exactly power of 2
+ 		 */
+ 		if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
+-			ac->ac_2order = i - 1;
++			ac->ac_2order = array_index_nospec(i - 1,
++							   sb->s_blocksize_bits + 2);
+ 	}
+ 
+ 	/* if stream allocation is enabled, use global goal */
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index ff94fad477e4..48cdfc81fe10 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -792,8 +792,10 @@ static int listxattr_filler(struct dir_context *ctx, const char *name,
+ 			return 0;
+ 		size = namelen + 1;
+ 		if (b->buf) {
+-			if (size > b->size)
++			if (b->pos + size > b->size) {
++				b->pos = -ERANGE;
+ 				return -ERANGE;
++			}
+ 			memcpy(b->buf + b->pos, name, namelen);
+ 			b->buf[b->pos + namelen] = 0;
+ 		}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a790ef4be74e..3222193c46c6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6939,9 +6939,21 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
+ 	start = (void *)PAGE_ALIGN((unsigned long)start);
+ 	end = (void *)((unsigned long)end & PAGE_MASK);
+ 	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
++		struct page *page = virt_to_page(pos);
++		void *direct_map_addr;
++
++		/*
++		 * 'direct_map_addr' might be different from 'pos'
++		 * because some architectures' virt_to_page()
++		 * work with aliases.  Getting the direct map
++		 * address ensures that we get a _writeable_
++		 * alias for the memset().
++		 */
++		direct_map_addr = page_address(page);
+ 		if ((unsigned int)poison <= 0xFF)
+-			memset(pos, poison, PAGE_SIZE);
+-		free_reserved_page(virt_to_page(pos));
++			memset(direct_map_addr, poison, PAGE_SIZE);
++
++		free_reserved_page(page);
+ 	}
+ 
+ 	if (pages && s)


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     a9ff5c21104641ed6c1123301ab8cf18eea1be84
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 22 09:59:11 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:22 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a9ff5c21

linux kernel 4.18.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |   4 +
 1003_linux-4.18.4.patch | 817 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 821 insertions(+)

diff --git a/0000_README b/0000_README
index c313d8e..c7d6cc0 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.18.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.3
 
+Patch:  1003_linux-4.18.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.18.4.patch b/1003_linux-4.18.4.patch
new file mode 100644
index 0000000..a94a413
--- /dev/null
+++ b/1003_linux-4.18.4.patch
@@ -0,0 +1,817 @@
+diff --git a/Makefile b/Makefile
+index e2bd815f24eb..ef0dd566c104 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 5d0486f1cfcd..1a1c0718cd7a 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -338,6 +338,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+ 		},
+ 	},
++	{
++	.callback = init_nvs_save_s3,
++	.ident = "Asus 1025C",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "1025C"),
++		},
++	},
+ 	/*
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=189431
+ 	 * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
+diff --git a/drivers/isdn/i4l/isdn_common.c b/drivers/isdn/i4l/isdn_common.c
+index 7a501dbe7123..6a5b3f00f9ad 100644
+--- a/drivers/isdn/i4l/isdn_common.c
++++ b/drivers/isdn/i4l/isdn_common.c
+@@ -1640,13 +1640,7 @@ isdn_ioctl(struct file *file, uint cmd, ulong arg)
+ 			} else
+ 				return -EINVAL;
+ 		case IIOCDBGVAR:
+-			if (arg) {
+-				if (copy_to_user(argp, &dev, sizeof(ulong)))
+-					return -EFAULT;
+-				return 0;
+-			} else
+-				return -EINVAL;
+-			break;
++			return -EINVAL;
+ 		default:
+ 			if ((cmd & IIOCDRVCTL) == IIOCDRVCTL)
+ 				cmd = ((cmd >> _IOC_NRSHIFT) & _IOC_NRMASK) & ISDN_DRVIOCTL_MASK;
+diff --git a/drivers/media/usb/dvb-usb-v2/gl861.c b/drivers/media/usb/dvb-usb-v2/gl861.c
+index 9d154fdae45b..fee4b30df778 100644
+--- a/drivers/media/usb/dvb-usb-v2/gl861.c
++++ b/drivers/media/usb/dvb-usb-v2/gl861.c
+@@ -26,10 +26,14 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	if (wo) {
+ 		req = GL861_REQ_I2C_WRITE;
+ 		type = GL861_WRITE;
++		buf = kmemdup(wbuf, wlen, GFP_KERNEL);
+ 	} else { /* rw */
+ 		req = GL861_REQ_I2C_READ;
+ 		type = GL861_READ;
++		buf = kmalloc(rlen, GFP_KERNEL);
+ 	}
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	switch (wlen) {
+ 	case 1:
+@@ -42,24 +46,19 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	default:
+ 		dev_err(&d->udev->dev, "%s: wlen=%d, aborting\n",
+ 				KBUILD_MODNAME, wlen);
++		kfree(buf);
+ 		return -EINVAL;
+ 	}
+-	buf = NULL;
+-	if (rlen > 0) {
+-		buf = kmalloc(rlen, GFP_KERNEL);
+-		if (!buf)
+-			return -ENOMEM;
+-	}
++
+ 	usleep_range(1000, 2000); /* avoid I2C errors */
+ 
+ 	ret = usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0), req, type,
+ 			      value, index, buf, rlen, 2000);
+-	if (rlen > 0) {
+-		if (ret > 0)
+-			memcpy(rbuf, buf, rlen);
+-		kfree(buf);
+-	}
+ 
++	if (!wo && ret > 0)
++		memcpy(rbuf, buf, rlen);
++
++	kfree(buf);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index c5dc6095686a..679647713e36 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -407,13 +407,20 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			return ret;
++			goto err_disable_clk;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+ 		gen_pool_size(sram->pool) / 1024, sram->virt_base);
+ 
+ 	return 0;
++
++err_disable_clk:
++	if (sram->clk)
++		clk_disable_unprepare(sram->clk);
++	sram_free_partitions(sram);
++
++	return ret;
+ }
+ 
+ static int sram_remove(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 0ad2f3f7da85..82ac1d10f239 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
+ }
+ 
+ /* Main rx processing when using software buffer management */
+-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_swbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -1959,7 +1959,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2001,7 +2001,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2020,10 +2020,10 @@ err_drop_frame:
+ }
+ 
+ /* Main rx processing when using hardware buffer management */
+-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_hwbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -2085,7 +2085,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2129,7 +2129,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
+ 	if (rx_queue) {
+ 		rx_queue = rx_queue - 1;
+ 		if (pp->bm_priv)
+-			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_hwbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 		else
+-			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_swbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 	}
+ 
+ 	if (rx_done < budget) {
+@@ -4018,13 +4020,18 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 
+ 	on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_synchronize(&pcpu_port->napi);
+-		napi_disable(&pcpu_port->napi);
++			napi_synchronize(&pcpu_port->napi);
++			napi_disable(&pcpu_port->napi);
++		}
++	} else {
++		napi_synchronize(&pp->napi);
++		napi_disable(&pp->napi);
+ 	}
+ 
+ 	pp->rxq_def = pp->indir[0];
+@@ -4041,12 +4048,16 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 	mvneta_percpu_elect(pp);
+ 	spin_unlock(&pp->lock);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_enable(&pcpu_port->napi);
++			napi_enable(&pcpu_port->napi);
++		}
++	} else {
++		napi_enable(&pp->napi);
+ 	}
+ 
+ 	netif_tx_start_all_queues(pp->dev);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index eaedc11ed686..9ceb34bac3a9 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,12 +7539,20 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
++	switch (tp->mac_version) {
++	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-	} else {
++		break;
++	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
++		/* This version was reported to have issues with resume
++		 * from suspend when using MSI-X
++		 */
++		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
++		break;
++	default:
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index 408ece27131c..2a5209f23f29 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1338,7 +1338,7 @@ out:
+ 	/* setting up multiple channels failed */
+ 	net_device->max_chn = 1;
+ 	net_device->num_chn = 1;
+-	return 0;
++	return net_device;
+ 
+ err_dev_remv:
+ 	rndis_filter_device_remove(dev, net_device);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index aff04f1de3a5..af842000188c 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -293,7 +293,7 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
+ 	long rate;
+ 	int ret;
+ 
+-	if (IS_ERR(d->clk) || !old)
++	if (IS_ERR(d->clk))
+ 		goto out;
+ 
+ 	clk_disable_unprepare(d->clk);
+@@ -707,6 +707,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "APMC0D08", 0},
+ 	{ "AMD0020", 0 },
+ 	{ "AMDI0020", 0 },
++	{ "BRCM2032", 0 },
+ 	{ "HISI0031", 0 },
+ 	{ },
+ };
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 38af306ca0e8..a951511f04cf 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -433,7 +433,11 @@ static irqreturn_t exar_misc_handler(int irq, void *data)
+ 	struct exar8250 *priv = data;
+ 
+ 	/* Clear all PCI interrupts by reading INT0. No effect on IIR */
+-	ioread8(priv->virt + UART_EXAR_INT0);
++	readb(priv->virt + UART_EXAR_INT0);
++
++	/* Clear INT0 for Expansion Interface slave ports, too */
++	if (priv->board->num_ports > 8)
++		readb(priv->virt + 0x2000 + UART_EXAR_INT0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index cf541aab2bd0..5cbc13e3d316 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -90,8 +90,7 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16550A",
+ 		.fifo_size	= 16,
+ 		.tx_loadsz	= 16,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+-				  UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ 		.rxtrig_bytes	= {1, 4, 8, 14},
+ 		.flags		= UART_CAP_FIFO,
+ 	},
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 5d421d7e8904..f68c1121fa7c 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -443,13 +443,10 @@ static irqreturn_t uio_interrupt(int irq, void *dev_id)
+ 	struct uio_device *idev = (struct uio_device *)dev_id;
+ 	irqreturn_t ret;
+ 
+-	mutex_lock(&idev->info_lock);
+-
+ 	ret = idev->info->handler(irq, idev->info);
+ 	if (ret == IRQ_HANDLED)
+ 		uio_event_notify(idev->info);
+ 
+-	mutex_unlock(&idev->info_lock);
+ 	return ret;
+ }
+ 
+@@ -814,7 +811,7 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
+ 
+ out:
+ 	mutex_unlock(&idev->info_lock);
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct file_operations uio_fops = {
+@@ -969,9 +966,8 @@ int __uio_register_device(struct module *owner,
+ 		 * FDs at the time of unregister and therefore may not be
+ 		 * freed until they are released.
+ 		 */
+-		ret = request_threaded_irq(info->irq, NULL, uio_interrupt,
+-					   info->irq_flags, info->name, idev);
+-
++		ret = request_irq(info->irq, uio_interrupt,
++				  info->irq_flags, info->name, idev);
+ 		if (ret)
+ 			goto err_request_irq;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 664e61f16b6a..0215b70c4efc 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -196,6 +196,8 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5800_V2_MINICARD_VZW	0x8196  /* Novatel E362 */
+ #define DELL_PRODUCT_5804_MINICARD_ATT		0x819b  /* Novatel E371 */
+ 
++#define DELL_PRODUCT_5821E			0x81d7
++
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+ #define KYOCERA_PRODUCT_KPC680			0x180a
+@@ -1030,6 +1032,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_V2_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5804_MINICARD_ATT, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E),
++	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 5d1a1931967e..e41f725ac7aa 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -52,6 +52,8 @@ static const struct usb_device_id id_table[] = {
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC485),
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
++	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC232B),
++		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) },
+ 	{ USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index fcd72396a7b6..26965cc23c17 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -24,6 +24,7 @@
+ #define ATEN_VENDOR_ID2		0x0547
+ #define ATEN_PRODUCT_ID		0x2008
+ #define ATEN_PRODUCT_UC485	0x2021
++#define ATEN_PRODUCT_UC232B	0x2022
+ #define ATEN_PRODUCT_ID2	0x2118
+ 
+ #define IODATA_VENDOR_ID	0x04bb
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index d189f953c891..55956a638f5b 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -770,9 +770,9 @@ static void sierra_close(struct usb_serial_port *port)
+ 		kfree(urb->transfer_buffer);
+ 		usb_free_urb(urb);
+ 		usb_autopm_put_interface_async(serial->interface);
+-		spin_lock(&portdata->lock);
++		spin_lock_irq(&portdata->lock);
+ 		portdata->outstanding_urbs--;
+-		spin_unlock(&portdata->lock);
++		spin_unlock_irq(&portdata->lock);
+ 	}
+ 
+ 	sierra_stop_rx_urbs(port);
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 413b8ee49fec..8f0f9279eac9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -393,7 +393,8 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
++	    sock_flag(sk, SOCK_DEAD))
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index c37b5be7c5e4..3312a5849a97 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/tcp.h>
+ #include <linux/workqueue.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/inet_diag.h>
+ #include <linux/sock_diag.h>
+@@ -218,6 +219,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 
+ 	if (req->sdiag_family >= AF_MAX)
+ 		return -EINVAL;
++	req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+ 
+ 	if (sock_diag_handlers[req->sdiag_family] == NULL)
+ 		sock_load_diag_module(req->sdiag_family, 0);
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 3f091ccad9af..f38cb21d773d 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -438,7 +438,8 @@ static int __net_init vti_init_net(struct net *net)
+ 	if (err)
+ 		return err;
+ 	itn = net_generic(net, vti_net_id);
+-	vti_fb_tunnel_init(itn->fb_tunnel_dev);
++	if (itn->fb_tunnel_dev)
++		vti_fb_tunnel_init(itn->fb_tunnel_dev);
+ 	return 0;
+ }
+ 
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 40261cb68e83..8aaf8157da2b 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1110,7 +1110,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+ 
+ 	/* Get routing info from the tunnel socket */
+ 	skb_dst_drop(skb);
+-	skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
++	skb_dst_set(skb, sk_dst_check(sk, 0));
+ 
+ 	inet = inet_sk(sk);
+ 	fl = &inet->cork.fl;
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 47b207ef7762..7ad65daf66a4 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -111,6 +111,8 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 	if (!head)
+ 		return;
+ 
++	tcf_unbind_filter(tp, &head->res);
++
+ 	if (!tc_skip_hw(head->flags))
+ 		mall_destroy_hw_filter(tp, head, (unsigned long) head, extack);
+ 
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 32f4bbd82f35..9ccc93f257db 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -447,11 +447,6 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		tcf_bind_filter(tp, &cr.res, base);
+ 	}
+ 
+-	if (old_r)
+-		tcf_exts_change(&r->exts, &e);
+-	else
+-		tcf_exts_change(&cr.exts, &e);
+-
+ 	if (old_r && old_r != r) {
+ 		err = tcindex_filter_result_init(old_r);
+ 		if (err < 0) {
+@@ -462,12 +457,15 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 
+ 	oldp = p;
+ 	r->res = cr.res;
++	tcf_exts_change(&r->exts, &e);
++
+ 	rcu_assign_pointer(tp->root, cp);
+ 
+ 	if (r == &new_filter_result) {
+ 		struct tcindex_filter *nfp;
+ 		struct tcindex_filter __rcu **fp;
+ 
++		f->result.res = r->res;
+ 		tcf_exts_change(&f->result.exts, &r->exts);
+ 
+ 		fp = cp->h + (handle % cp->hash);
+diff --git a/net/socket.c b/net/socket.c
+index 8c24d5dc4bc8..4ac3b834cce9 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2690,8 +2690,7 @@ EXPORT_SYMBOL(sock_unregister);
+ 
+ bool sock_is_registered(int family)
+ {
+-	return family < NPROTO &&
+-		rcu_access_pointer(net_families[array_index_nospec(family, NPROTO)]);
++	return family < NPROTO && rcu_access_pointer(net_families[family]);
+ }
+ 
+ static int __init sock_init(void)
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 7f89d3c79a4b..753d5fc4b284 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -242,16 +242,12 @@ int snd_dma_alloc_pages_fallback(int type, struct device *device, size_t size,
+ 	int err;
+ 
+ 	while ((err = snd_dma_alloc_pages(type, device, size, dmab)) < 0) {
+-		size_t aligned_size;
+ 		if (err != -ENOMEM)
+ 			return err;
+ 		if (size <= PAGE_SIZE)
+ 			return -ENOMEM;
+-		aligned_size = PAGE_SIZE << get_order(size);
+-		if (size != aligned_size)
+-			size = aligned_size;
+-		else
+-			size >>= 1;
++		size >>= 1;
++		size = PAGE_SIZE << get_order(size);
+ 	}
+ 	if (! dmab->area)
+ 		return -ENOMEM;
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 5f64d0d88320..e1f44fc86885 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -203,7 +203,7 @@ odev_poll(struct file *file, poll_table * wait)
+ 	struct seq_oss_devinfo *dp;
+ 	dp = file->private_data;
+ 	if (snd_BUG_ON(!dp))
+-		return -ENXIO;
++		return EPOLLERR;
+ 	return snd_seq_oss_poll(dp, file, wait);
+ }
+ 
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 56ca78423040..6fd4b074b206 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1101,7 +1101,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
+ 
+ 	/* check client structures are in place */
+ 	if (snd_BUG_ON(!client))
+-		return -ENXIO;
++		return EPOLLERR;
+ 
+ 	if ((snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_INPUT) &&
+ 	    client->data.user.fifo) {
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 289ae6bb81d9..8ebbca554e99 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -163,6 +163,7 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 	int count, res;
+ 	unsigned char buf[32], *pbuf;
+ 	unsigned long flags;
++	bool check_resched = !in_atomic();
+ 
+ 	if (up) {
+ 		vmidi->trigger = 1;
+@@ -200,6 +201,15 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 					vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ 				}
+ 			}
++			if (!check_resched)
++				continue;
++			/* do temporary unlock & cond_resched() for avoiding
++			 * CPU soft lockup, which may happen via a write from
++			 * a huge rawmidi buffer
++			 */
++			spin_unlock_irqrestore(&substream->runtime->lock, flags);
++			cond_resched();
++			spin_lock_irqsave(&substream->runtime->lock, flags);
+ 		}
+ 	out:
+ 		spin_unlock_irqrestore(&substream->runtime->lock, flags);
+diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
+index b2efb1c71a98..218292bdace6 100644
+--- a/sound/firewire/dice/dice-alesis.c
++++ b/sound/firewire/dice/dice-alesis.c
+@@ -37,7 +37,7 @@ int snd_dice_detect_alesis_formats(struct snd_dice *dice)
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	} else {
+-		memcpy(dice->rx_pcm_chs, alesis_io26_tx_pcm_chs,
++		memcpy(dice->tx_pcm_chs, alesis_io26_tx_pcm_chs,
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	}
+diff --git a/sound/pci/cs5535audio/cs5535audio.h b/sound/pci/cs5535audio/cs5535audio.h
+index f4fcdf93f3c8..d84620a0c26c 100644
+--- a/sound/pci/cs5535audio/cs5535audio.h
++++ b/sound/pci/cs5535audio/cs5535audio.h
+@@ -67,9 +67,9 @@ struct cs5535audio_dma_ops {
+ };
+ 
+ struct cs5535audio_dma_desc {
+-	u32 addr;
+-	u16 size;
+-	u16 ctlreserved;
++	__le32 addr;
++	__le16 size;
++	__le16 ctlreserved;
+ };
+ 
+ struct cs5535audio_dma {
+diff --git a/sound/pci/cs5535audio/cs5535audio_pcm.c b/sound/pci/cs5535audio/cs5535audio_pcm.c
+index ee7065f6e162..326caec854e1 100644
+--- a/sound/pci/cs5535audio/cs5535audio_pcm.c
++++ b/sound/pci/cs5535audio/cs5535audio_pcm.c
+@@ -158,8 +158,8 @@ static int cs5535audio_build_dma_packets(struct cs5535audio *cs5535au,
+ 	lastdesc->addr = cpu_to_le32((u32) dma->desc_buf.addr);
+ 	lastdesc->size = 0;
+ 	lastdesc->ctlreserved = cpu_to_le16(PRD_JMP);
+-	jmpprd_addr = cpu_to_le32(lastdesc->addr +
+-				  (sizeof(struct cs5535audio_dma_desc)*periods));
++	jmpprd_addr = (u32)dma->desc_buf.addr +
++		sizeof(struct cs5535audio_dma_desc) * periods;
+ 
+ 	dma->substream = substream;
+ 	dma->period_bytes = period_bytes;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1ae1850b3bfd..647ae1a71e10 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2207,7 +2207,7 @@ out_free:
+  */
+ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+-	SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++	SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index f641c20095f7..1a8a2d440fbd 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -211,6 +211,7 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 	struct conexant_spec *spec = codec->spec;
+ 
+ 	switch (codec->core.vendor_id) {
++	case 0x14f12008: /* CX8200 */
+ 	case 0x14f150f2: /* CX20722 */
+ 	case 0x14f150f4: /* CX20724 */
+ 		break;
+@@ -218,13 +219,14 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 		return;
+ 	}
+ 
+-	/* Turn the CX20722 codec into D3 to avoid spurious noises
++	/* Turn the problematic codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+ 	cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+ 
+ 	snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
+ 	snd_hda_codec_write(codec, codec->core.afg, 0,
+ 			    AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
++	msleep(10);
+ }
+ 
+ static void cx_auto_free(struct hda_codec *codec)
+diff --git a/sound/pci/vx222/vx222_ops.c b/sound/pci/vx222/vx222_ops.c
+index d4298af6d3ee..c0d0bf44f365 100644
+--- a/sound/pci/vx222/vx222_ops.c
++++ b/sound/pci/vx222/vx222_ops.c
+@@ -275,7 +275,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outl(cpu_to_le32(*addr), port);
++			outl(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (u32 *)runtime->dma_area;
+@@ -285,7 +285,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outl(cpu_to_le32(*addr), port);
++		outl(*addr, port);
+ 		addr++;
+ 	}
+ 
+@@ -313,7 +313,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le32_to_cpu(inl(port));
++			*addr++ = inl(port);
+ 		addr = (u32 *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -321,7 +321,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--)
+-		*addr++ = le32_to_cpu(inl(port));
++		*addr++ = inl(port);
+ 
+ 	vx2_release_pseudo_dma(chip);
+ }
+diff --git a/sound/pcmcia/vx/vxp_ops.c b/sound/pcmcia/vx/vxp_ops.c
+index 8cde40226355..4c4ef1fec69f 100644
+--- a/sound/pcmcia/vx/vxp_ops.c
++++ b/sound/pcmcia/vx/vxp_ops.c
+@@ -375,7 +375,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outw(cpu_to_le16(*addr), port);
++			outw(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (unsigned short *)runtime->dma_area;
+@@ -385,7 +385,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outw(cpu_to_le16(*addr), port);
++		outw(*addr, port);
+ 		addr++;
+ 	}
+ 	vx_release_pseudo_dma(chip);
+@@ -417,7 +417,7 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le16_to_cpu(inw(port));
++			*addr++ = inw(port);
+ 		addr = (unsigned short *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -425,12 +425,12 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 1; count--)
+-		*addr++ = le16_to_cpu(inw(port));
++		*addr++ = inw(port);
+ 	/* Disable DMA */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMAREAD_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);
+ 	/* Read the last word (16 bits) */
+-	*addr = le16_to_cpu(inw(port));
++	*addr = inw(port);
+ 	/* Disable 16-bit accesses */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMA16_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-14 11:37 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-14 11:37 UTC (permalink / raw)
  To: gentoo-commits

commit:     62a3a28a5a82e6a8d264299a242e1e5faf8af4e9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep  9 11:25:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 11:36:24 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=62a3a28a

Linux patch 4.18.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-4.18.7.patch | 5658 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5662 insertions(+)

diff --git a/0000_README b/0000_README
index 8bfc2e4..f3682ca 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.18.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.6
 
+Patch:  1006_linux-4.18.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.18.7.patch b/1006_linux-4.18.7.patch
new file mode 100644
index 0000000..7ab3155
--- /dev/null
+++ b/1006_linux-4.18.7.patch
@@ -0,0 +1,5658 @@
+diff --git a/Makefile b/Makefile
+index 62524f4d42ad..711b04d00e49 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index c210a25dd6da..cff52d8ffdb1 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -530,24 +530,19 @@ SYSCALL_DEFINE4(osf_mount, unsigned long, typenr, const char __user *, path,
+ SYSCALL_DEFINE1(osf_utsname, char __user *, name)
+ {
+ 	int error;
++	char tmp[5 * 32];
+ 
+ 	down_read(&uts_sem);
+-	error = -EFAULT;
+-	if (copy_to_user(name + 0, utsname()->sysname, 32))
+-		goto out;
+-	if (copy_to_user(name + 32, utsname()->nodename, 32))
+-		goto out;
+-	if (copy_to_user(name + 64, utsname()->release, 32))
+-		goto out;
+-	if (copy_to_user(name + 96, utsname()->version, 32))
+-		goto out;
+-	if (copy_to_user(name + 128, utsname()->machine, 32))
+-		goto out;
++	memcpy(tmp + 0 * 32, utsname()->sysname, 32);
++	memcpy(tmp + 1 * 32, utsname()->nodename, 32);
++	memcpy(tmp + 2 * 32, utsname()->release, 32);
++	memcpy(tmp + 3 * 32, utsname()->version, 32);
++	memcpy(tmp + 4 * 32, utsname()->machine, 32);
++	up_read(&uts_sem);
+ 
+-	error = 0;
+- out:
+-	up_read(&uts_sem);	
+-	return error;
++	if (copy_to_user(name, tmp, sizeof(tmp)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE0(getpagesize)
+@@ -567,18 +562,21 @@ SYSCALL_DEFINE2(osf_getdomainname, char __user *, name, int, namelen)
+ {
+ 	int len, err = 0;
+ 	char *kname;
++	char tmp[32];
+ 
+-	if (namelen > 32)
++	if (namelen < 0 || namelen > 32)
+ 		namelen = 32;
+ 
+ 	down_read(&uts_sem);
+ 	kname = utsname()->domainname;
+ 	len = strnlen(kname, namelen);
+-	if (copy_to_user(name, kname, min(len + 1, namelen)))
+-		err = -EFAULT;
++	len = min(len + 1, namelen);
++	memcpy(tmp, kname, len);
+ 	up_read(&uts_sem);
+ 
+-	return err;
++	if (copy_to_user(name, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ /*
+@@ -739,13 +737,14 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	};
+ 	unsigned long offset;
+ 	const char *res;
+-	long len, err = -EINVAL;
++	long len;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	offset = command-1;
+ 	if (offset >= ARRAY_SIZE(sysinfo_table)) {
+ 		/* Digital UNIX has a few unpublished interfaces here */
+ 		printk("sysinfo(%d)", command);
+-		goto out;
++		return -EINVAL;
+ 	}
+ 
+ 	down_read(&uts_sem);
+@@ -753,13 +752,11 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	len = strlen(res)+1;
+ 	if ((unsigned long)len > (unsigned long)count)
+ 		len = count;
+-	if (copy_to_user(buf, res, len))
+-		err = -EFAULT;
+-	else
+-		err = 0;
++	memcpy(tmp, res, len);
+ 	up_read(&uts_sem);
+- out:
+-	return err;
++	if (copy_to_user(buf, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE5(osf_getsysinfo, unsigned long, op, void __user *, buffer,
+diff --git a/arch/arm/boot/dts/am571x-idk.dts b/arch/arm/boot/dts/am571x-idk.dts
+index 5bb9d68d6e90..d9a2049a1ea8 100644
+--- a/arch/arm/boot/dts/am571x-idk.dts
++++ b/arch/arm/boot/dts/am571x-idk.dts
+@@ -66,10 +66,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio5 7 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio7 22 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am572x-idk-common.dtsi b/arch/arm/boot/dts/am572x-idk-common.dtsi
+index c6d858b31011..784639ddf451 100644
+--- a/arch/arm/boot/dts/am572x-idk-common.dtsi
++++ b/arch/arm/boot/dts/am572x-idk-common.dtsi
+@@ -57,10 +57,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio3 16 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio3 26 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+index ad87f1ae904d..c9063ffca524 100644
+--- a/arch/arm/boot/dts/am57xx-idk-common.dtsi
++++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+@@ -395,8 +395,13 @@
+ 	dr_mode = "host";
+ };
+ 
++&omap_dwc3_2 {
++	extcon = <&extcon_usb2>;
++};
++
+ &usb2 {
+-	dr_mode = "peripheral";
++	extcon = <&extcon_usb2>;
++	dr_mode = "otg";
+ };
+ 
+ &mmc1 {
+diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+index 92a9740c533f..3b1db7b9ec50 100644
+--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
++++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+@@ -206,6 +206,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			reg = <0x70>;
++			reset-gpio = <&gpio TEGRA_GPIO(BB, 0) GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 42c090cf0292..3eb034189cf8 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -754,7 +754,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
+ 
+ config HOLES_IN_ZONE
+ 	def_bool y
+-	depends on NUMA
+ 
+ source kernel/Kconfig.preempt
+ source kernel/Kconfig.hz
+diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
+index b7fb5274b250..0c4fc223f225 100644
+--- a/arch/arm64/crypto/sm4-ce-glue.c
++++ b/arch/arm64/crypto/sm4-ce-glue.c
+@@ -69,5 +69,5 @@ static void __exit sm4_ce_mod_fini(void)
+ 	crypto_unregister_alg(&sm4_ce_alg);
+ }
+ 
+-module_cpu_feature_match(SM3, sm4_ce_mod_init);
++module_cpu_feature_match(SM4, sm4_ce_mod_init);
+ module_exit(sm4_ce_mod_fini);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index 5a23010af600..1e7a33592e29 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -195,9 +195,6 @@ struct fadump_crash_info_header {
+ 	struct cpumask	online_mask;
+ };
+ 
+-/* Crash memory ranges */
+-#define INIT_CRASHMEM_RANGES	(INIT_MEMBLOCK_REGIONS + 2)
+-
+ struct fad_crash_memory_ranges {
+ 	unsigned long long	base;
+ 	unsigned long long	size;
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index 2160be2e4339..b321c82b3624 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -51,17 +51,14 @@ static inline int pte_present(pte_t pte)
+ #define pte_access_permitted pte_access_permitted
+ static inline bool pte_access_permitted(pte_t pte, bool write)
+ {
+-	unsigned long pteval = pte_val(pte);
+ 	/*
+ 	 * A read-only access is controlled by _PAGE_USER bit.
+ 	 * We have _PAGE_READ set for WRITE and EXECUTE
+ 	 */
+-	unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
+-
+-	if (write)
+-		need_pte_bits |= _PAGE_WRITE;
++	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
++		return false;
+ 
+-	if ((pteval & need_pte_bits) != need_pte_bits)
++	if (write && !pte_write(pte))
+ 		return false;
+ 
+ 	return true;
+diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
+index 5ba80cffb505..3312606fda07 100644
+--- a/arch/powerpc/include/asm/pkeys.h
++++ b/arch/powerpc/include/asm/pkeys.h
+@@ -94,8 +94,6 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
+ 		__mm_pkey_is_allocated(mm, pkey));
+ }
+ 
+-extern void __arch_activate_pkey(int pkey);
+-extern void __arch_deactivate_pkey(int pkey);
+ /*
+  * Returns a positive, 5-bit key on success, or -1 on failure.
+  * Relies on the mmap_sem to protect against concurrency in mm_pkey_alloc() and
+@@ -124,11 +122,6 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
+ 	ret = ffz((u32)mm_pkey_allocation_map(mm));
+ 	__mm_pkey_allocated(mm, ret);
+ 
+-	/*
+-	 * Enable the key in the hardware
+-	 */
+-	if (ret > 0)
+-		__arch_activate_pkey(ret);
+ 	return ret;
+ }
+ 
+@@ -140,10 +133,6 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
+ 	if (!mm_pkey_is_allocated(mm, pkey))
+ 		return -EINVAL;
+ 
+-	/*
+-	 * Disable the key in the hardware
+-	 */
+-	__arch_deactivate_pkey(pkey);
+ 	__mm_pkey_free(mm, pkey);
+ 
+ 	return 0;
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 07e8396d472b..958eb5cd2a9e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -47,8 +47,10 @@ static struct fadump_mem_struct fdm;
+ static const struct fadump_mem_struct *fdm_active;
+ 
+ static DEFINE_MUTEX(fadump_mutex);
+-struct fad_crash_memory_ranges crash_memory_ranges[INIT_CRASHMEM_RANGES];
++struct fad_crash_memory_ranges *crash_memory_ranges;
++int crash_memory_ranges_size;
+ int crash_mem_ranges;
++int max_crash_mem_ranges;
+ 
+ /* Scan the Firmware Assisted dump configuration details. */
+ int __init early_init_dt_scan_fw_dump(unsigned long node,
+@@ -868,38 +870,88 @@ static int __init process_fadump(const struct fadump_mem_struct *fdm_active)
+ 	return 0;
+ }
+ 
+-static inline void fadump_add_crash_memory(unsigned long long base,
+-					unsigned long long end)
++static void free_crash_memory_ranges(void)
++{
++	kfree(crash_memory_ranges);
++	crash_memory_ranges = NULL;
++	crash_memory_ranges_size = 0;
++	max_crash_mem_ranges = 0;
++}
++
++/*
++ * Allocate or reallocate crash memory ranges array in incremental units
++ * of PAGE_SIZE.
++ */
++static int allocate_crash_memory_ranges(void)
++{
++	struct fad_crash_memory_ranges *new_array;
++	u64 new_size;
++
++	new_size = crash_memory_ranges_size + PAGE_SIZE;
++	pr_debug("Allocating %llu bytes of memory for crash memory ranges\n",
++		 new_size);
++
++	new_array = krealloc(crash_memory_ranges, new_size, GFP_KERNEL);
++	if (new_array == NULL) {
++		pr_err("Insufficient memory for setting up crash memory ranges\n");
++		free_crash_memory_ranges();
++		return -ENOMEM;
++	}
++
++	crash_memory_ranges = new_array;
++	crash_memory_ranges_size = new_size;
++	max_crash_mem_ranges = (new_size /
++				sizeof(struct fad_crash_memory_ranges));
++	return 0;
++}
++
++static inline int fadump_add_crash_memory(unsigned long long base,
++					  unsigned long long end)
+ {
+ 	if (base == end)
+-		return;
++		return 0;
++
++	if (crash_mem_ranges == max_crash_mem_ranges) {
++		int ret;
++
++		ret = allocate_crash_memory_ranges();
++		if (ret)
++			return ret;
++	}
+ 
+ 	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
+ 		crash_mem_ranges, base, end - 1, (end - base));
+ 	crash_memory_ranges[crash_mem_ranges].base = base;
+ 	crash_memory_ranges[crash_mem_ranges].size = end - base;
+ 	crash_mem_ranges++;
++	return 0;
+ }
+ 
+-static void fadump_exclude_reserved_area(unsigned long long start,
++static int fadump_exclude_reserved_area(unsigned long long start,
+ 					unsigned long long end)
+ {
+ 	unsigned long long ra_start, ra_end;
++	int ret = 0;
+ 
+ 	ra_start = fw_dump.reserve_dump_area_start;
+ 	ra_end = ra_start + fw_dump.reserve_dump_area_size;
+ 
+ 	if ((ra_start < end) && (ra_end > start)) {
+ 		if ((start < ra_start) && (end > ra_end)) {
+-			fadump_add_crash_memory(start, ra_start);
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(start, ra_start);
++			if (ret)
++				return ret;
++
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		} else if (start < ra_start) {
+-			fadump_add_crash_memory(start, ra_start);
++			ret = fadump_add_crash_memory(start, ra_start);
+ 		} else if (ra_end < end) {
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		}
+ 	} else
+-		fadump_add_crash_memory(start, end);
++		ret = fadump_add_crash_memory(start, end);
++
++	return ret;
+ }
+ 
+ static int fadump_init_elfcore_header(char *bufp)
+@@ -939,10 +991,11 @@ static int fadump_init_elfcore_header(char *bufp)
+  * Traverse through memblock structure and setup crash memory ranges. These
+  * ranges will be used create PT_LOAD program headers in elfcore header.
+  */
+-static void fadump_setup_crash_memory_ranges(void)
++static int fadump_setup_crash_memory_ranges(void)
+ {
+ 	struct memblock_region *reg;
+ 	unsigned long long start, end;
++	int ret;
+ 
+ 	pr_debug("Setup crash memory ranges.\n");
+ 	crash_mem_ranges = 0;
+@@ -953,7 +1006,9 @@ static void fadump_setup_crash_memory_ranges(void)
+ 	 * specified during fadump registration. We need to create a separate
+ 	 * program header for this chunk with the correct offset.
+ 	 */
+-	fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	ret = fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	if (ret)
++		return ret;
+ 
+ 	for_each_memblock(memory, reg) {
+ 		start = (unsigned long long)reg->base;
+@@ -973,8 +1028,12 @@ static void fadump_setup_crash_memory_ranges(void)
+ 		}
+ 
+ 		/* add this range excluding the reserved dump area. */
+-		fadump_exclude_reserved_area(start, end);
++		ret = fadump_exclude_reserved_area(start, end);
++		if (ret)
++			return ret;
+ 	}
++
++	return 0;
+ }
+ 
+ /*
+@@ -1097,6 +1156,7 @@ static int register_fadump(void)
+ {
+ 	unsigned long addr;
+ 	void *vaddr;
++	int ret;
+ 
+ 	/*
+ 	 * If no memory is reserved then we can not register for firmware-
+@@ -1105,7 +1165,9 @@ static int register_fadump(void)
+ 	if (!fw_dump.reserve_dump_area_size)
+ 		return -ENODEV;
+ 
+-	fadump_setup_crash_memory_ranges();
++	ret = fadump_setup_crash_memory_ranges();
++	if (ret)
++		return ret;
+ 
+ 	addr = be64_to_cpu(fdm.rmr_region.destination_address) + be64_to_cpu(fdm.rmr_region.source_len);
+ 	/* Initialize fadump crash info header. */
+@@ -1183,6 +1245,7 @@ void fadump_cleanup(void)
+ 	} else if (fw_dump.dump_registered) {
+ 		/* Un-register Firmware-assisted dump if it was registered. */
+ 		fadump_unregister_dump(&fdm);
++		free_crash_memory_ranges();
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9ef4aea9fffe..991d09774108 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -583,6 +583,7 @@ static void save_all(struct task_struct *tsk)
+ 		__giveup_spe(tsk);
+ 
+ 	msr_check_and_clear(msr_all_available);
++	thread_pkey_regs_save(&tsk->thread);
+ }
+ 
+ void flush_all_to_thread(struct task_struct *tsk)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index de686b340f4a..a995513573c2 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -46,6 +46,7 @@
+ #include <linux/compiler.h>
+ #include <linux/of.h>
+ 
++#include <asm/ftrace.h>
+ #include <asm/reg.h>
+ #include <asm/ppc-opcode.h>
+ #include <asm/asm-prototypes.h>
+diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
+index f3d4b4a0e561..3bb5cec03d1f 100644
+--- a/arch/powerpc/mm/mmu_context_book3s64.c
++++ b/arch/powerpc/mm/mmu_context_book3s64.c
+@@ -200,9 +200,9 @@ static void pte_frag_destroy(void *pte_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PTE_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -215,9 +215,9 @@ static void pmd_frag_destroy(void *pmd_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PMD_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
+index a4ca57612558..c9ee9e23845f 100644
+--- a/arch/powerpc/mm/mmu_context_iommu.c
++++ b/arch/powerpc/mm/mmu_context_iommu.c
+@@ -129,6 +129,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	long i, j, ret = 0, locked_entries = 0;
+ 	unsigned int pageshift;
+ 	unsigned long flags;
++	unsigned long cur_ua;
+ 	struct page *page = NULL;
+ 
+ 	mutex_lock(&mem_list_mutex);
+@@ -177,7 +178,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	}
+ 
+ 	for (i = 0; i < entries; ++i) {
+-		if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++		cur_ua = ua + (i << PAGE_SHIFT);
++		if (1 != get_user_pages_fast(cur_ua,
+ 					1/* pages */, 1/* iswrite */, &page)) {
+ 			ret = -EFAULT;
+ 			for (j = 0; j < i; ++j)
+@@ -196,7 +198,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		if (is_migrate_cma_page(page)) {
+ 			if (mm_iommu_move_page_from_cma(page))
+ 				goto populate;
+-			if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++			if (1 != get_user_pages_fast(cur_ua,
+ 						1/* pages */, 1/* iswrite */,
+ 						&page)) {
+ 				ret = -EFAULT;
+@@ -210,20 +212,21 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		}
+ populate:
+ 		pageshift = PAGE_SHIFT;
+-		if (PageCompound(page)) {
++		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
+ 			pte_t *pte;
+ 			struct page *head = compound_head(page);
+ 			unsigned int compshift = compound_order(head);
++			unsigned int pteshift;
+ 
+ 			local_irq_save(flags); /* disables as well */
+-			pte = find_linux_pte(mm->pgd, ua, NULL, &pageshift);
+-			local_irq_restore(flags);
++			pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);
+ 
+ 			/* Double check it is still the same pinned page */
+ 			if (pte && pte_page(*pte) == head &&
+-					pageshift == compshift)
+-				pageshift = max_t(unsigned int, pageshift,
++			    pteshift == compshift + PAGE_SHIFT)
++				pageshift = max_t(unsigned int, pteshift,
+ 						PAGE_SHIFT);
++			local_irq_restore(flags);
+ 		}
+ 		mem->pageshift = min(mem->pageshift, pageshift);
+ 		mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
+index 4afbfbb64bfd..78d0b3d5ebad 100644
+--- a/arch/powerpc/mm/pgtable-book3s64.c
++++ b/arch/powerpc/mm/pgtable-book3s64.c
+@@ -270,6 +270,8 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 		return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
++
+ 	ret = page_address(page);
+ 	/*
+ 	 * if we support only one fragment just return the
+@@ -285,7 +287,7 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pmd_frag)) {
+-		set_page_count(page, PMD_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+ 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -308,9 +310,10 @@ void pmd_fragment_free(unsigned long *pmd)
+ {
+ 	struct page *page = virt_to_page(pmd);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -352,6 +355,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 			return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
+ 
+ 	ret = page_address(page);
+ 	/*
+@@ -367,7 +371,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pte_frag)) {
+-		set_page_count(page, PTE_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+ 		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -390,10 +394,11 @@ void pte_fragment_free(unsigned long *table, int kernel)
+ {
+ 	struct page *page = virt_to_page(table);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		if (!kernel)
+ 			pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index e6f500fabf5e..0e7810ccd1ae 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -15,8 +15,10 @@ bool pkey_execute_disable_supported;
+ int  pkeys_total;		/* Total pkeys as per device tree */
+ bool pkeys_devtree_defined;	/* pkey property exported by device tree */
+ u32  initial_allocation_mask;	/* Bits set for reserved keys */
+-u64  pkey_amr_uamor_mask;	/* Bits in AMR/UMOR not to be touched */
++u64  pkey_amr_mask;		/* Bits in AMR not to be touched */
+ u64  pkey_iamr_mask;		/* Bits in AMR not to be touched */
++u64  pkey_uamor_mask;		/* Bits in UMOR not to be touched */
++int  execute_only_key = 2;
+ 
+ #define AMR_BITS_PER_PKEY 2
+ #define AMR_RD_BIT 0x1UL
+@@ -91,7 +93,7 @@ int pkey_initialize(void)
+ 	 * arch-neutral code.
+ 	 */
+ 	pkeys_total = min_t(int, pkeys_total,
+-			(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT));
++			((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)+1));
+ 
+ 	if (!pkey_mmu_enabled() || radix_enabled() || !pkeys_total)
+ 		static_branch_enable(&pkey_disabled);
+@@ -119,20 +121,38 @@ int pkey_initialize(void)
+ #else
+ 	os_reserved = 0;
+ #endif
+-	initial_allocation_mask = ~0x0;
+-	pkey_amr_uamor_mask = ~0x0ul;
++	initial_allocation_mask  = (0x1 << 0) | (0x1 << 1) |
++					(0x1 << execute_only_key);
++
++	/* register mask is in BE format */
++	pkey_amr_mask = ~0x0ul;
++	pkey_amr_mask &= ~(0x3ul << pkeyshift(0));
++
+ 	pkey_iamr_mask = ~0x0ul;
+-	/*
+-	 * key 0, 1 are reserved.
+-	 * key 0 is the default key, which allows read/write/execute.
+-	 * key 1 is recommended not to be used. PowerISA(3.0) page 1015,
+-	 * programming note.
+-	 */
+-	for (i = 2; i < (pkeys_total - os_reserved); i++) {
+-		initial_allocation_mask &= ~(0x1 << i);
+-		pkey_amr_uamor_mask &= ~(0x3ul << pkeyshift(i));
+-		pkey_iamr_mask &= ~(0x1ul << pkeyshift(i));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	pkey_uamor_mask = ~0x0ul;
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	/* mark the rest of the keys as reserved and hence unavailable */
++	for (i = (pkeys_total - os_reserved); i < pkeys_total; i++) {
++		initial_allocation_mask |= (0x1 << i);
++		pkey_uamor_mask &= ~(0x3ul << pkeyshift(i));
++	}
++
++	if (unlikely((pkeys_total - os_reserved) <= execute_only_key)) {
++		/*
++		 * Insufficient number of keys to support
++		 * execute only key. Mark it unavailable.
++		 * Any AMR, UAMOR, IAMR bit set for
++		 * this key is irrelevant since this key
++		 * can never be allocated.
++		 */
++		execute_only_key = -1;
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -143,8 +163,7 @@ void pkey_mm_init(struct mm_struct *mm)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 	mm_pkey_allocation_map(mm) = initial_allocation_mask;
+-	/* -1 means unallocated or invalid */
+-	mm->context.execute_only_pkey = -1;
++	mm->context.execute_only_pkey = execute_only_key;
+ }
+ 
+ static inline u64 read_amr(void)
+@@ -213,33 +232,6 @@ static inline void init_iamr(int pkey, u8 init_bits)
+ 	write_iamr(old_iamr | new_iamr_bits);
+ }
+ 
+-static void pkey_status_change(int pkey, bool enable)
+-{
+-	u64 old_uamor;
+-
+-	/* Reset the AMR and IAMR bits for this key */
+-	init_amr(pkey, 0x0);
+-	init_iamr(pkey, 0x0);
+-
+-	/* Enable/disable key */
+-	old_uamor = read_uamor();
+-	if (enable)
+-		old_uamor |= (0x3ul << pkeyshift(pkey));
+-	else
+-		old_uamor &= ~(0x3ul << pkeyshift(pkey));
+-	write_uamor(old_uamor);
+-}
+-
+-void __arch_activate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, true);
+-}
+-
+-void __arch_deactivate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, false);
+-}
+-
+ /*
+  * Set the access rights in AMR IAMR and UAMOR registers for @pkey to that
+  * specified in @init_val.
+@@ -289,9 +281,6 @@ void thread_pkey_regs_restore(struct thread_struct *new_thread,
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	/*
+-	 * TODO: Just set UAMOR to zero if @new_thread hasn't used any keys yet.
+-	 */
+ 	if (old_thread->amr != new_thread->amr)
+ 		write_amr(new_thread->amr);
+ 	if (old_thread->iamr != new_thread->iamr)
+@@ -305,9 +294,13 @@ void thread_pkey_regs_init(struct thread_struct *thread)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	thread->amr = read_amr() & pkey_amr_uamor_mask;
+-	thread->iamr = read_iamr() & pkey_iamr_mask;
+-	thread->uamor = read_uamor() & pkey_amr_uamor_mask;
++	thread->amr = pkey_amr_mask;
++	thread->iamr = pkey_iamr_mask;
++	thread->uamor = pkey_uamor_mask;
++
++	write_uamor(pkey_uamor_mask);
++	write_amr(pkey_amr_mask);
++	write_iamr(pkey_iamr_mask);
+ }
+ 
+ static inline bool pkey_allows_readwrite(int pkey)
+@@ -322,48 +315,7 @@ static inline bool pkey_allows_readwrite(int pkey)
+ 
+ int __execute_only_pkey(struct mm_struct *mm)
+ {
+-	bool need_to_set_mm_pkey = false;
+-	int execute_only_pkey = mm->context.execute_only_pkey;
+-	int ret;
+-
+-	/* Do we need to assign a pkey for mm's execute-only maps? */
+-	if (execute_only_pkey == -1) {
+-		/* Go allocate one to use, which might fail */
+-		execute_only_pkey = mm_pkey_alloc(mm);
+-		if (execute_only_pkey < 0)
+-			return -1;
+-		need_to_set_mm_pkey = true;
+-	}
+-
+-	/*
+-	 * We do not want to go through the relatively costly dance to set AMR
+-	 * if we do not need to. Check it first and assume that if the
+-	 * execute-only pkey is readwrite-disabled than we do not have to set it
+-	 * ourselves.
+-	 */
+-	if (!need_to_set_mm_pkey && !pkey_allows_readwrite(execute_only_pkey))
+-		return execute_only_pkey;
+-
+-	/*
+-	 * Set up AMR so that it denies access for everything other than
+-	 * execution.
+-	 */
+-	ret = __arch_set_user_pkey_access(current, execute_only_pkey,
+-					  PKEY_DISABLE_ACCESS |
+-					  PKEY_DISABLE_WRITE);
+-	/*
+-	 * If the AMR-set operation failed somehow, just return 0 and
+-	 * effectively disable execute-only support.
+-	 */
+-	if (ret) {
+-		mm_pkey_free(mm, execute_only_pkey);
+-		return -1;
+-	}
+-
+-	/* We got one, store it and use it from here on out */
+-	if (need_to_set_mm_pkey)
+-		mm->context.execute_only_pkey = execute_only_pkey;
+-	return execute_only_pkey;
++	return mm->context.execute_only_pkey;
+ }
+ 
+ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 70b2e1e0f23c..a2cdf358a3ac 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -3368,12 +3368,49 @@ static void pnv_pci_ioda_create_dbgfs(void)
+ #endif /* CONFIG_DEBUG_FS */
+ }
+ 
++static void pnv_pci_enable_bridge(struct pci_bus *bus)
++{
++	struct pci_dev *dev = bus->self;
++	struct pci_bus *child;
++
++	/* Empty bus ? bail */
++	if (list_empty(&bus->devices))
++		return;
++
++	/*
++	 * If there's a bridge associated with that bus enable it. This works
++	 * around races in the generic code if the enabling is done during
++	 * parallel probing. This can be removed once those races have been
++	 * fixed.
++	 */
++	if (dev) {
++		int rc = pci_enable_device(dev);
++		if (rc)
++			pci_err(dev, "Error enabling bridge (%d)\n", rc);
++		pci_set_master(dev);
++	}
++
++	/* Perform the same to child busses */
++	list_for_each_entry(child, &bus->children, node)
++		pnv_pci_enable_bridge(child);
++}
++
++static void pnv_pci_enable_bridges(void)
++{
++	struct pci_controller *hose;
++
++	list_for_each_entry(hose, &hose_list, list_node)
++		pnv_pci_enable_bridge(hose->bus);
++}
++
+ static void pnv_pci_ioda_fixup(void)
+ {
+ 	pnv_pci_ioda_setup_PEs();
+ 	pnv_pci_ioda_setup_iommu_api();
+ 	pnv_pci_ioda_create_dbgfs();
+ 
++	pnv_pci_enable_bridges();
++
+ #ifdef CONFIG_EEH
+ 	pnv_eeh_post_init();
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 5e1ef9150182..2edc673be137 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -360,7 +360,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 	}
+ 
+ 	savep = __va(regs->gpr[3]);
+-	regs->gpr[3] = savep[0];	/* restore original r3 */
++	regs->gpr[3] = be64_to_cpu(savep[0]);	/* restore original r3 */
+ 
+ 	/* If it isn't an extended log we can use the per cpu 64bit buffer */
+ 	h = (struct rtas_error_log *)&savep[1];
+diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
+index 7f3d9c59719a..452e4d080855 100644
+--- a/arch/sparc/kernel/sys_sparc_32.c
++++ b/arch/sparc/kernel/sys_sparc_32.c
+@@ -197,23 +197,27 @@ SYSCALL_DEFINE5(rt_sigaction, int, sig,
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+- 	int nlen, err;
+- 	
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
++
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	up_read(&uts_sem);
+ 
+-out:
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
++
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
+index 63baa8aa9414..274ed0b9b3e0 100644
+--- a/arch/sparc/kernel/sys_sparc_64.c
++++ b/arch/sparc/kernel/sys_sparc_64.c
+@@ -519,23 +519,27 @@ asmlinkage void sparc_breakpoint(struct pt_regs *regs)
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+-        int nlen, err;
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
++
++	up_read(&uts_sem);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
+ 
+-out:
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index e762ef417562..d27a50656aa1 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -223,34 +223,34 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	pcmpeqd TWOONE(%rip), \TMP2
+ 	pand	POLY(%rip), \TMP2
+ 	pxor	\TMP2, \TMP3
+-	movdqa	\TMP3, HashKey(%arg2)
++	movdqu	\TMP3, HashKey(%arg2)
+ 
+ 	movdqa	   \TMP3, \TMP5
+ 	pshufd	   $78, \TMP3, \TMP1
+ 	pxor	   \TMP3, \TMP1
+-	movdqa	   \TMP1, HashKey_k(%arg2)
++	movdqu	   \TMP1, HashKey_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^2<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_2(%arg2)
++	movdqu	   \TMP5, HashKey_2(%arg2)
+ # HashKey_2 = HashKey^2<<1 (mod poly)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_2_k(%arg2)
++	movdqu	   \TMP1, HashKey_2_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_3(%arg2)
++	movdqu	   \TMP5, HashKey_3(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_3_k(%arg2)
++	movdqu	   \TMP1, HashKey_3_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_4(%arg2)
++	movdqu	   \TMP5, HashKey_4(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_4_k(%arg2)
++	movdqu	   \TMP1, HashKey_4_k(%arg2)
+ .endm
+ 
+ # GCM_INIT initializes a gcm_context struct to prepare for encoding/decoding.
+@@ -271,7 +271,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	movdqu %xmm0, CurCount(%arg2) # ctx_data.current_counter = iv
+ 
+ 	PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
+-	movdqa HashKey(%arg2), %xmm13
++	movdqu HashKey(%arg2), %xmm13
+ 
+ 	CALC_AAD_HASH %xmm13, \AAD, \AADLEN, %xmm0, %xmm1, %xmm2, %xmm3, \
+ 	%xmm4, %xmm5, %xmm6
+@@ -997,7 +997,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1016,7 +1016,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1031,7 +1031,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1044,7 +1044,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1058,7 +1058,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1074,7 +1074,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1092,7 +1092,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1121,7 +1121,7 @@ aes_loop_par_enc_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1205,7 +1205,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1224,7 +1224,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1239,7 +1239,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1252,7 +1252,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1266,7 +1266,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1282,7 +1282,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1300,7 +1300,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1329,7 +1329,7 @@ aes_loop_par_dec_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1405,10 +1405,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM1, \TMP6
+ 	pshufd	  $78, \XMM1, \TMP2
+ 	pxor	  \XMM1, \TMP2
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP6       # TMP6 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM1       # XMM1 = a0*b0
+-	movdqa	  HashKey_4_k(%arg2), \TMP4
++	movdqu	  HashKey_4_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqa	  \XMM1, \XMMDst
+ 	movdqa	  \TMP2, \XMM1              # result in TMP6, XMMDst, XMM1
+@@ -1418,10 +1418,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM2, \TMP1
+ 	pshufd	  $78, \XMM2, \TMP2
+ 	pxor	  \XMM2, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM2       # XMM2 = a0*b0
+-	movdqa	  HashKey_3_k(%arg2), \TMP4
++	movdqu	  HashKey_3_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM2, \XMMDst
+@@ -1433,10 +1433,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM3, \TMP1
+ 	pshufd	  $78, \XMM3, \TMP2
+ 	pxor	  \XMM3, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM3       # XMM3 = a0*b0
+-	movdqa	  HashKey_2_k(%arg2), \TMP4
++	movdqu	  HashKey_2_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM3, \XMMDst
+@@ -1446,10 +1446,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM4, \TMP1
+ 	pshufd	  $78, \XMM4, \TMP2
+ 	pxor	  \XMM4, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1	    # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM4       # XMM4 = a0*b0
+-	movdqa	  HashKey_k(%arg2), \TMP4
++	movdqu	  HashKey_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM4, \XMMDst
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 7326078eaa7a..278cd07228dd 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -532,7 +532,7 @@ static int bzImage64_cleanup(void *loader_data)
+ static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
+ {
+ 	return verify_pefile_signature(kernel, kernel_len,
+-				       NULL,
++				       VERIFY_USE_SECONDARY_KEYRING,
+ 				       VERIFYING_KEXEC_PE_SIGNATURE);
+ }
+ #endif
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 46b428c0990e..bedabcf33a3e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -197,12 +197,14 @@ static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_
+ 
+ static const struct {
+ 	const char *option;
+-	enum vmx_l1d_flush_state cmd;
++	bool for_parse;
+ } vmentry_l1d_param[] = {
+-	{"auto",	VMENTER_L1D_FLUSH_AUTO},
+-	{"never",	VMENTER_L1D_FLUSH_NEVER},
+-	{"cond",	VMENTER_L1D_FLUSH_COND},
+-	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++	[VMENTER_L1D_FLUSH_AUTO]	 = {"auto", true},
++	[VMENTER_L1D_FLUSH_NEVER]	 = {"never", true},
++	[VMENTER_L1D_FLUSH_COND]	 = {"cond", true},
++	[VMENTER_L1D_FLUSH_ALWAYS]	 = {"always", true},
++	[VMENTER_L1D_FLUSH_EPT_DISABLED] = {"EPT disabled", false},
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED] = {"not required", false},
+ };
+ 
+ #define L1D_CACHE_ORDER 4
+@@ -286,8 +288,9 @@ static int vmentry_l1d_flush_parse(const char *s)
+ 
+ 	if (s) {
+ 		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
+-			if (sysfs_streq(s, vmentry_l1d_param[i].option))
+-				return vmentry_l1d_param[i].cmd;
++			if (vmentry_l1d_param[i].for_parse &&
++			    sysfs_streq(s, vmentry_l1d_param[i].option))
++				return i;
+ 		}
+ 	}
+ 	return -EINVAL;
+@@ -297,13 +300,13 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ {
+ 	int l1tf, ret;
+ 
+-	if (!boot_cpu_has(X86_BUG_L1TF))
+-		return 0;
+-
+ 	l1tf = vmentry_l1d_flush_parse(s);
+ 	if (l1tf < 0)
+ 		return l1tf;
+ 
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
+ 	/*
+ 	 * Has vmx_init() run already? If not then this is the pre init
+ 	 * parameter parsing. In that case just store the value and let
+@@ -323,6 +326,9 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ 
+ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
+ {
++	if (WARN_ON_ONCE(l1tf_vmx_mitigation >= ARRAY_SIZE(vmentry_l1d_param)))
++		return sprintf(s, "???\n");
++
+ 	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
+ }
+ 
+diff --git a/arch/xtensa/include/asm/cacheasm.h b/arch/xtensa/include/asm/cacheasm.h
+index 2041abb10a23..34545ecfdd6b 100644
+--- a/arch/xtensa/include/asm/cacheasm.h
++++ b/arch/xtensa/include/asm/cacheasm.h
+@@ -31,16 +31,32 @@
+  *
+  */
+ 
+-	.macro	__loop_cache_all ar at insn size line_width
+ 
+-	movi	\ar, 0
++	.macro	__loop_cache_unroll ar at insn size line_width max_immed
++
++	.if	(1 << (\line_width)) > (\max_immed)
++	.set	_reps, 1
++	.elseif	(2 << (\line_width)) > (\max_immed)
++	.set	_reps, 2
++	.else
++	.set	_reps, 4
++	.endif
++
++	__loopi	\ar, \at, \size, (_reps << (\line_width))
++	.set	_index, 0
++	.rep	_reps
++	\insn	\ar, _index << (\line_width)
++	.set	_index, _index + 1
++	.endr
++	__endla	\ar, \at, _reps << (\line_width)
++
++	.endm
++
+ 
+-	__loopi	\ar, \at, \size, (4 << (\line_width))
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	.macro	__loop_cache_all ar at insn size line_width max_immed
++
++	movi	\ar, 0
++	__loop_cache_unroll \ar, \at, \insn, \size, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -57,14 +73,9 @@
+ 	.endm
+ 
+ 
+-	.macro	__loop_cache_page ar at insn line_width
++	.macro	__loop_cache_page ar at insn line_width max_immed
+ 
+-	__loopi	\ar, \at, PAGE_SIZE, 4 << (\line_width)
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	__loop_cache_unroll \ar, \at, \insn, PAGE_SIZE, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -72,7 +83,8 @@
+ 	.macro	___unlock_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_LINE_LOCKABLE && XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -81,7 +93,8 @@
+ 	.macro	___unlock_icache_all ar at
+ 
+ #if XCHAL_ICACHE_LINE_LOCKABLE && XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE \
++		XCHAL_ICACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -90,7 +103,8 @@
+ 	.macro	___flush_invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -99,7 +113,8 @@
+ 	.macro	___flush_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -108,8 +123,8 @@
+ 	.macro	___invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at dii __stringify(DCACHE_WAY_SIZE) \
+-			 XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at dii XCHAL_DCACHE_SIZE \
++			 XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -118,8 +133,8 @@
+ 	.macro	___invalidate_icache_all ar at
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iii __stringify(ICACHE_WAY_SIZE) \
+-			 XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iii XCHAL_ICACHE_SIZE \
++			 XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -166,7 +181,7 @@
+ 	.macro	___flush_invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -175,7 +190,7 @@
+ 	.macro ___flush_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -184,7 +199,7 @@
+ 	.macro	___invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -193,7 +208,7 @@
+ 	.macro	___invalidate_icache_page ar as
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index a9e8633388f4..58c6efa9f9a9 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -913,7 +913,8 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
+ 	if (ret)
+ 		return ret;
+ 
+-	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	ret = bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	return ret ?: nbytes;
+ }
+ 
+ #ifdef CONFIG_DEBUG_BLK_CGROUP
+diff --git a/block/blk-core.c b/block/blk-core.c
+index ee33590f54eb..1646ea85dade 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -715,6 +715,35 @@ void blk_set_queue_dying(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+ 
++/* Unconfigure the I/O scheduler and dissociate from the cgroup controller. */
++void blk_exit_queue(struct request_queue *q)
++{
++	/*
++	 * Since the I/O scheduler exit code may access cgroup information,
++	 * perform I/O scheduler exit before disassociating from the block
++	 * cgroup controller.
++	 */
++	if (q->elevator) {
++		ioc_clear_queue(q);
++		elevator_exit(q, q->elevator);
++		q->elevator = NULL;
++	}
++
++	/*
++	 * Remove all references to @q from the block cgroup controller before
++	 * restoring @q->queue_lock to avoid that restoring this pointer causes
++	 * e.g. blkcg_print_blkgs() to crash.
++	 */
++	blkcg_exit_queue(q);
++
++	/*
++	 * Since the cgroup code may dereference the @q->backing_dev_info
++	 * pointer, only decrease its reference count after having removed the
++	 * association with the block cgroup controller.
++	 */
++	bdi_put(q->backing_dev_info);
++}
++
+ /**
+  * blk_cleanup_queue - shutdown a request queue
+  * @q: request queue to shutdown
+@@ -780,30 +809,7 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 */
+ 	WARN_ON_ONCE(q->kobj.state_in_sysfs);
+ 
+-	/*
+-	 * Since the I/O scheduler exit code may access cgroup information,
+-	 * perform I/O scheduler exit before disassociating from the block
+-	 * cgroup controller.
+-	 */
+-	if (q->elevator) {
+-		ioc_clear_queue(q);
+-		elevator_exit(q, q->elevator);
+-		q->elevator = NULL;
+-	}
+-
+-	/*
+-	 * Remove all references to @q from the block cgroup controller before
+-	 * restoring @q->queue_lock to avoid that restoring this pointer causes
+-	 * e.g. blkcg_print_blkgs() to crash.
+-	 */
+-	blkcg_exit_queue(q);
+-
+-	/*
+-	 * Since the cgroup code may dereference the @q->backing_dev_info
+-	 * pointer, only decrease its reference count after having removed the
+-	 * association with the block cgroup controller.
+-	 */
+-	bdi_put(q->backing_dev_info);
++	blk_exit_queue(q);
+ 
+ 	if (q->mq_ops)
+ 		blk_mq_free_queue(q);
+@@ -1180,6 +1186,7 @@ out_exit_flush_rq:
+ 		q->exit_rq_fn(q, q->fq->flush_rq);
+ out_free_flush_queue:
+ 	blk_free_flush_queue(q->fq);
++	q->fq = NULL;
+ 	return -ENOMEM;
+ }
+ EXPORT_SYMBOL(blk_init_allocated_queue);
+@@ -3763,9 +3770,11 @@ EXPORT_SYMBOL(blk_finish_plug);
+  */
+ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
+ {
+-	/* not support for RQF_PM and ->rpm_status in blk-mq yet */
+-	if (q->mq_ops)
++	/* Don't enable runtime PM for blk-mq until it is ready */
++	if (q->mq_ops) {
++		pm_runtime_disable(dev);
+ 		return;
++	}
+ 
+ 	q->dev = dev;
+ 	q->rpm_status = RPM_ACTIVE;
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index 8faa70f26fcd..d1b9dd03da25 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -68,6 +68,8 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 		 */
+ 		req_sects = min_t(sector_t, nr_sects,
+ 					q->limits.max_discard_sectors);
++		if (!req_sects)
++			goto fail;
+ 		if (req_sects > UINT_MAX >> 9)
+ 			req_sects = UINT_MAX >> 9;
+ 
+@@ -105,6 +107,14 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 
+ 	*biop = bio;
+ 	return 0;
++
++fail:
++	if (bio) {
++		submit_bio_wait(bio);
++		bio_put(bio);
++	}
++	*biop = NULL;
++	return -EOPNOTSUPP;
+ }
+ EXPORT_SYMBOL(__blkdev_issue_discard);
+ 
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 94987b1f69e1..96c7dfc04852 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -804,6 +804,21 @@ static void __blk_release_queue(struct work_struct *work)
+ 		blk_stat_remove_callback(q, q->poll_cb);
+ 	blk_stat_free_callback(q->poll_cb);
+ 
++	if (!blk_queue_dead(q)) {
++		/*
++		 * Last reference was dropped without having called
++		 * blk_cleanup_queue().
++		 */
++		WARN_ONCE(blk_queue_init_done(q),
++			  "request queue %p has been registered but blk_cleanup_queue() has not been called for that queue\n",
++			  q);
++		blk_exit_queue(q);
++	}
++
++	WARN(blkg_root_lookup(q),
++	     "request queue %p is being released but it has not yet been removed from the blkcg controller\n",
++	     q);
++
+ 	blk_free_queue_stats(q->stats);
+ 
+ 	blk_exit_rl(q, &q->root_rl);
+diff --git a/block/blk.h b/block/blk.h
+index 8d23aea96ce9..a8f0f7986cfd 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -130,6 +130,7 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
+ int blk_init_rl(struct request_list *rl, struct request_queue *q,
+ 		gfp_t gfp_mask);
+ void blk_exit_rl(struct request_queue *q, struct request_list *rl);
++void blk_exit_queue(struct request_queue *q);
+ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
+ 			struct bio *bio);
+ void blk_queue_bypass_start(struct request_queue *q);
+diff --git a/certs/system_keyring.c b/certs/system_keyring.c
+index 6251d1b27f0c..81728717523d 100644
+--- a/certs/system_keyring.c
++++ b/certs/system_keyring.c
+@@ -15,6 +15,7 @@
+ #include <linux/cred.h>
+ #include <linux/err.h>
+ #include <linux/slab.h>
++#include <linux/verification.h>
+ #include <keys/asymmetric-type.h>
+ #include <keys/system_keyring.h>
+ #include <crypto/pkcs7.h>
+@@ -230,7 +231,7 @@ int verify_pkcs7_signature(const void *data, size_t len,
+ 
+ 	if (!trusted_keys) {
+ 		trusted_keys = builtin_trusted_keys;
+-	} else if (trusted_keys == (void *)1UL) {
++	} else if (trusted_keys == VERIFY_USE_SECONDARY_KEYRING) {
+ #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+ 		trusted_keys = secondary_trusted_keys;
+ #else
+diff --git a/crypto/asymmetric_keys/pkcs7_key_type.c b/crypto/asymmetric_keys/pkcs7_key_type.c
+index e284d9cb9237..5b2f6a2b5585 100644
+--- a/crypto/asymmetric_keys/pkcs7_key_type.c
++++ b/crypto/asymmetric_keys/pkcs7_key_type.c
+@@ -63,7 +63,7 @@ static int pkcs7_preparse(struct key_preparsed_payload *prep)
+ 
+ 	return verify_pkcs7_signature(NULL, 0,
+ 				      prep->data, prep->datalen,
+-				      (void *)1UL, usage,
++				      VERIFY_USE_SECONDARY_KEYRING, usage,
+ 				      pkcs7_view_content, prep);
+ }
+ 
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index fe9d46d81750..d8b8fc2ff563 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -56,14 +56,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 	if (ACPI_FAILURE(status)) {
+ 		return_ACPI_STATUS(status);
+ 	}
+-	/*
+-	 * If the target sleep state is S5, clear all GPEs and fixed events too
+-	 */
+-	if (sleep_state == ACPI_STATE_S5) {
+-		status = acpi_hw_clear_acpi_status();
+-		if (ACPI_FAILURE(status)) {
+-			return_ACPI_STATUS(status);
+-		}
++	status = acpi_hw_clear_acpi_status();
++	if (ACPI_FAILURE(status)) {
++		return_ACPI_STATUS(status);
+ 	}
+ 	acpi_gbl_system_awake_and_running = FALSE;
+ 
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 44f35ab3347d..0f0bdc9d24c6 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -22,6 +22,7 @@
+ #include "acdispat.h"
+ #include "amlcode.h"
+ #include "acconvert.h"
++#include "acnamesp.h"
+ 
+ #define _COMPONENT          ACPI_PARSER
+ ACPI_MODULE_NAME("psloop")
+@@ -527,12 +528,18 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 				if (ACPI_FAILURE(status)) {
+ 					return_ACPI_STATUS(status);
+ 				}
+-				if (walk_state->opcode == AML_SCOPE_OP) {
++				if (acpi_ns_opens_scope
++				    (acpi_ps_get_opcode_info
++				     (walk_state->opcode)->object_type)) {
+ 					/*
+-					 * If the scope op fails to parse, skip the body of the
+-					 * scope op because the parse failure indicates that the
+-					 * device may not exist.
++					 * If the scope/device op fails to parse, skip the body of
++					 * the scope op because the parse failure indicates that
++					 * the device may not exist.
+ 					 */
++					ACPI_ERROR((AE_INFO,
++						    "Skip parsing opcode %s",
++						    acpi_ps_get_opcode_name
++						    (walk_state->opcode)));
+ 					walk_state->parser_state.aml =
+ 					    walk_state->aml + 1;
+ 					walk_state->parser_state.aml =
+@@ -540,8 +547,6 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 					    (&walk_state->parser_state);
+ 					walk_state->aml =
+ 					    walk_state->parser_state.aml;
+-					ACPI_ERROR((AE_INFO,
+-						    "Skipping Scope block"));
+ 				}
+ 
+ 				continue;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index a390c6d4f72d..af7cb8e618fe 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -337,6 +337,7 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t len)
+ {
+ 	char *file_name;
++	size_t sz;
+ 	struct file *backing_dev = NULL;
+ 	struct inode *inode;
+ 	struct address_space *mapping;
+@@ -357,7 +358,11 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		goto out;
+ 	}
+ 
+-	strlcpy(file_name, buf, len);
++	strlcpy(file_name, buf, PATH_MAX);
++	/* ignore trailing newline */
++	sz = strlen(file_name);
++	if (sz > 0 && file_name[sz - 1] == '\n')
++		file_name[sz - 1] = 0x00;
+ 
+ 	backing_dev = filp_open(file_name, O_RDWR|O_LARGEFILE, 0);
+ 	if (IS_ERR(backing_dev)) {
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index 1d50e97d49f1..6d53f7d9fc7a 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -555,12 +555,20 @@ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop);
+ 
+ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
+ {
+-	struct policy_dbs_info *policy_dbs = policy->governor_data;
++	struct policy_dbs_info *policy_dbs;
++
++	/* Protect gov->gdbs_data against cpufreq_dbs_governor_exit() */
++	mutex_lock(&gov_dbs_data_mutex);
++	policy_dbs = policy->governor_data;
++	if (!policy_dbs)
++		goto out;
+ 
+ 	mutex_lock(&policy_dbs->update_mutex);
+ 	cpufreq_policy_apply_limits(policy);
+ 	gov_update_sample_delay(policy_dbs, 0);
+-
+ 	mutex_unlock(&policy_dbs->update_mutex);
++
++out:
++	mutex_unlock(&gov_dbs_data_mutex);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 1aef60d160eb..910f8a68f58b 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -349,14 +349,12 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		 * If the tick is already stopped, the cost of possible short
+ 		 * idle duration misprediction is much higher, because the CPU
+ 		 * may be stuck in a shallow idle state for a long time as a
+-		 * result of it.  In that case say we might mispredict and try
+-		 * to force the CPU into a state for which we would have stopped
+-		 * the tick, unless a timer is going to expire really soon
+-		 * anyway.
++		 * result of it.  In that case say we might mispredict and use
++		 * the known time till the closest timer event for the idle
++		 * state selection.
+ 		 */
+ 		if (data->predicted_us < TICK_USEC)
+-			data->predicted_us = min_t(unsigned int, TICK_USEC,
+-						   ktime_to_us(delta_next));
++			data->predicted_us = ktime_to_us(delta_next);
+ 	} else {
+ 		/*
+ 		 * Use the performance multiplier and the user-configurable
+@@ -381,8 +379,33 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 			continue;
+ 		if (idx == -1)
+ 			idx = i; /* first enabled state */
+-		if (s->target_residency > data->predicted_us)
+-			break;
++		if (s->target_residency > data->predicted_us) {
++			if (data->predicted_us < TICK_USEC)
++				break;
++
++			if (!tick_nohz_tick_stopped()) {
++				/*
++				 * If the state selected so far is shallow,
++				 * waking up early won't hurt, so retain the
++				 * tick in that case and let the governor run
++				 * again in the next iteration of the loop.
++				 */
++				expected_interval = drv->states[idx].target_residency;
++				break;
++			}
++
++			/*
++			 * If the state selected so far is shallow and this
++			 * state's target residency matches the time till the
++			 * closest timer event, select this one to avoid getting
++			 * stuck in the shallow one for too long.
++			 */
++			if (drv->states[idx].target_residency < TICK_USEC &&
++			    s->target_residency <= ktime_to_us(delta_next))
++				idx = i;
++
++			goto out;
++		}
+ 		if (s->exit_latency > latency_req) {
+ 			/*
+ 			 * If we break out of the loop for latency reasons, use
+@@ -403,14 +426,13 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	 * Don't stop the tick if the selected state is a polling one or if the
+ 	 * expected idle duration is shorter than the tick period length.
+ 	 */
+-	if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+-	    expected_interval < TICK_USEC) {
++	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
++	     expected_interval < TICK_USEC) && !tick_nohz_tick_stopped()) {
+ 		unsigned int delta_next_us = ktime_to_us(delta_next);
+ 
+ 		*stop_tick = false;
+ 
+-		if (!tick_nohz_tick_stopped() && idx > 0 &&
+-		    drv->states[idx].target_residency > delta_next_us) {
++		if (idx > 0 && drv->states[idx].target_residency > delta_next_us) {
+ 			/*
+ 			 * The tick is not going to be stopped and the target
+ 			 * residency of the state to be returned is not within
+@@ -429,6 +451,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		}
+ 	}
+ 
++out:
+ 	data->last_state_idx = idx;
+ 
+ 	return data->last_state_idx;
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 6e61cc93c2b0..d7aa7d7ff102 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -679,10 +679,8 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	int ret = 0;
+ 
+ 	if (keylen != 2 * AES_MIN_KEY_SIZE  && keylen != 2 * AES_MAX_KEY_SIZE) {
+-		crypto_ablkcipher_set_flags(ablkcipher,
+-					    CRYPTO_TFM_RES_BAD_KEY_LEN);
+ 		dev_err(jrdev, "key size mismatch\n");
+-		return -EINVAL;
++		goto badkey;
+ 	}
+ 
+ 	ctx->cdata.keylen = keylen;
+@@ -715,7 +713,7 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	return ret;
+ badkey:
+ 	crypto_ablkcipher_set_flags(ablkcipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ /*
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 578ea63a3109..f26d62e5533a 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -71,8 +71,8 @@ static void rsa_priv_f2_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->d_dma, key->d_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->p_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+@@ -90,8 +90,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->dp_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->dq_dma, q_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ /* RSA Job Completion handler */
+@@ -417,13 +417,13 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 		goto unmap_p;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_q;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -451,7 +451,7 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_q:
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+ unmap_p:
+@@ -504,13 +504,13 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 		goto unmap_dq;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_qinv;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -538,7 +538,7 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_qinv:
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+ unmap_dq:
+diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
+index f4f258075b89..acdd72016ffe 100644
+--- a/drivers/crypto/caam/jr.c
++++ b/drivers/crypto/caam/jr.c
+@@ -190,7 +190,8 @@ static void caam_jr_dequeue(unsigned long devarg)
+ 		BUG_ON(CIRC_CNT(head, tail + i, JOBR_DEPTH) <= 0);
+ 
+ 		/* Unmap just-run descriptor so we can post-process */
+-		dma_unmap_single(dev, jrp->outring[hw_idx].desc,
++		dma_unmap_single(dev,
++				 caam_dma_to_cpu(jrp->outring[hw_idx].desc),
+ 				 jrp->entinfo[sw_idx].desc_size,
+ 				 DMA_TO_DEVICE);
+ 
+diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
+index 5285ece4f33a..b71895871be3 100644
+--- a/drivers/crypto/vmx/aes_cbc.c
++++ b/drivers/crypto/vmx/aes_cbc.c
+@@ -107,24 +107,23 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_encrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->enc_key, walk.iv, 1);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+@@ -147,24 +146,23 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->dec_key, walk.iv, 0);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
+index 8bd9aff0f55f..e9954a7d4694 100644
+--- a/drivers/crypto/vmx/aes_xts.c
++++ b/drivers/crypto/vmx/aes_xts.c
+@@ -116,32 +116,39 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
+ 		ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
++		blkcipher_walk_init(&walk, dst, src, nbytes);
++
++		ret = blkcipher_walk_virt(desc, &walk);
++
+ 		preempt_disable();
+ 		pagefault_disable();
+ 		enable_kernel_vsx();
+ 
+-		blkcipher_walk_init(&walk, dst, src, nbytes);
+-
+-		ret = blkcipher_walk_virt(desc, &walk);
+ 		iv = walk.iv;
+ 		memset(tweak, 0, AES_BLOCK_SIZE);
+ 		aes_p8_encrypt(iv, tweak, &ctx->tweak_key);
+ 
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			if (enc)
+ 				aes_p8_xts_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->enc_key, NULL, tweak);
+ 			else
+ 				aes_p8_xts_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->dec_key, NULL, tweak);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
+ 
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
+index 314eb1071cce..532545b9488e 100644
+--- a/drivers/dma-buf/reservation.c
++++ b/drivers/dma-buf/reservation.c
+@@ -141,6 +141,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
+ 	if (signaled) {
+ 		RCU_INIT_POINTER(fobj->shared[signaled_idx], fence);
+ 	} else {
++		BUG_ON(fobj->shared_count >= fobj->shared_max);
+ 		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
+ 		fobj->shared_count++;
+ 	}
+@@ -230,10 +231,9 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
+ 	old = reservation_object_get_list(obj);
+ 	obj->staged = NULL;
+ 
+-	if (!fobj) {
+-		BUG_ON(old->shared_count >= old->shared_max);
++	if (!fobj)
+ 		reservation_object_add_shared_inplace(obj, old, fence);
+-	} else
++	else
+ 		reservation_object_add_shared_replace(obj, old, fobj, fence);
+ }
+ EXPORT_SYMBOL(reservation_object_add_shared_fence);
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index af83ad58819c..b9d27c8fe57e 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -433,8 +433,8 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 		return index;
+ 
+ 	spin_lock_irqsave(&edev->lock, flags);
+-
+ 	state = !!(edev->state & BIT(index));
++	spin_unlock_irqrestore(&edev->lock, flags);
+ 
+ 	/*
+ 	 * Call functions in a raw notifier chain for the specific one
+@@ -448,6 +448,7 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 	 */
+ 	raw_notifier_call_chain(&edev->nh_all, state, edev);
+ 
++	spin_lock_irqsave(&edev->lock, flags);
+ 	/* This could be in interrupt handler */
+ 	prop_buf = (char *)get_zeroed_page(GFP_ATOMIC);
+ 	if (!prop_buf) {
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index ba0a092ae085..c3949220b770 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -558,11 +558,8 @@ static void reset_channel_cb(void *arg)
+ 	channel->onchannel_callback = NULL;
+ }
+ 
+-static int vmbus_close_internal(struct vmbus_channel *channel)
++void vmbus_reset_channel_cb(struct vmbus_channel *channel)
+ {
+-	struct vmbus_channel_close_channel *msg;
+-	int ret;
+-
+ 	/*
+ 	 * vmbus_on_event(), running in the per-channel tasklet, can race
+ 	 * with vmbus_close_internal() in the case of SMP guest, e.g., when
+@@ -572,6 +569,29 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	 */
+ 	tasklet_disable(&channel->callback_event);
+ 
++	channel->sc_creation_callback = NULL;
++
++	/* Stop the callback asap */
++	if (channel->target_cpu != get_cpu()) {
++		put_cpu();
++		smp_call_function_single(channel->target_cpu, reset_channel_cb,
++					 channel, true);
++	} else {
++		reset_channel_cb(channel);
++		put_cpu();
++	}
++
++	/* Re-enable tasklet for use on re-open */
++	tasklet_enable(&channel->callback_event);
++}
++
++static int vmbus_close_internal(struct vmbus_channel *channel)
++{
++	struct vmbus_channel_close_channel *msg;
++	int ret;
++
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * In case a device driver's probe() fails (e.g.,
+ 	 * util_probe() -> vmbus_open() returns -ENOMEM) and the device is
+@@ -585,16 +605,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	}
+ 
+ 	channel->state = CHANNEL_OPEN_STATE;
+-	channel->sc_creation_callback = NULL;
+-	/* Stop callback and cancel the timer asap */
+-	if (channel->target_cpu != get_cpu()) {
+-		put_cpu();
+-		smp_call_function_single(channel->target_cpu, reset_channel_cb,
+-					 channel, true);
+-	} else {
+-		reset_channel_cb(channel);
+-		put_cpu();
+-	}
+ 
+ 	/* Send a closing message */
+ 
+@@ -639,8 +649,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 		get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
+ 
+ out:
+-	/* re-enable tasklet for use on re-open */
+-	tasklet_enable(&channel->callback_event);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index ecc2bd275a73..0f0e091c117c 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -527,10 +527,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ 		struct hv_device *dev
+ 			= newchannel->primary_channel->device_obj;
+ 
+-		if (vmbus_add_channel_kobj(dev, newchannel)) {
+-			atomic_dec(&vmbus_connection.offer_in_progress);
++		if (vmbus_add_channel_kobj(dev, newchannel))
+ 			goto err_free_chan;
+-		}
+ 
+ 		if (channel->sc_creation_callback != NULL)
+ 			channel->sc_creation_callback(newchannel);
+@@ -894,6 +892,12 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
+ 		return;
+ 	}
+ 
++	/*
++	 * Before setting channel->rescind in vmbus_rescind_cleanup(), we
++	 * should make sure the channel callback is not running any more.
++	 */
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * Now wait for offer handling to complete.
+ 	 */
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 27436a937492..54b2a3a86677 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -693,7 +693,6 @@ int i2c_dw_probe(struct dw_i2c_dev *dev)
+ 	i2c_set_adapdata(adap, dev);
+ 
+ 	if (dev->pm_disabled) {
+-		dev_pm_syscore_device(dev->dev, true);
+ 		irq_flags = IRQF_NO_SUSPEND;
+ 	} else {
+ 		irq_flags = IRQF_SHARED | IRQF_COND_SUSPEND;
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 5660daf6c92e..d281d21cdd8e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -448,6 +448,9 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
++	if (i_dev->pm_disabled)
++		return 0;
++
+ 	i_dev->disable(i_dev);
+ 	i2c_dw_prepare_clk(i_dev, false);
+ 
+@@ -458,7 +461,9 @@ static int dw_i2c_plat_resume(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
+-	i2c_dw_prepare_clk(i_dev, true);
++	if (!i_dev->pm_disabled)
++		i2c_dw_prepare_clk(i_dev, true);
++
+ 	i_dev->init(i_dev);
+ 
+ 	return 0;
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index 4dceb75e3586..4964561595f5 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -797,6 +797,7 @@ static int sca3000_write_raw(struct iio_dev *indio_dev,
+ 		mutex_lock(&st->lock);
+ 		ret = sca3000_write_3db_freq(st, val);
+ 		mutex_unlock(&st->lock);
++		return ret;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/iio/frequency/ad9523.c b/drivers/iio/frequency/ad9523.c
+index ddb6a334ae68..8e8263758439 100644
+--- a/drivers/iio/frequency/ad9523.c
++++ b/drivers/iio/frequency/ad9523.c
+@@ -508,7 +508,7 @@ static ssize_t ad9523_store(struct device *dev,
+ 		return ret;
+ 
+ 	if (!state)
+-		return 0;
++		return len;
+ 
+ 	mutex_lock(&indio_dev->mlock);
+ 	switch ((u32)this_attr->address) {
+@@ -642,7 +642,7 @@ static int ad9523_read_raw(struct iio_dev *indio_dev,
+ 		code = (AD9523_CLK_DIST_DIV_PHASE_REV(ret) * 3141592) /
+ 			AD9523_CLK_DIST_DIV_REV(ret);
+ 		*val = code / 1000000;
+-		*val2 = (code % 1000000) * 10;
++		*val2 = code % 1000000;
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b3ba9a222550..cbeae4509359 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4694,7 +4694,7 @@ static void mlx5_ib_dealloc_counters(struct mlx5_ib_dev *dev)
+ 	int i;
+ 
+ 	for (i = 0; i < dev->num_ports; i++) {
+-		if (dev->port[i].cnts.set_id)
++		if (dev->port[i].cnts.set_id_valid)
+ 			mlx5_core_dealloc_q_counter(dev->mdev,
+ 						    dev->port[i].cnts.set_id);
+ 		kfree(dev->port[i].cnts.names);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index a4f1f638509f..01eae67d5a6e 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1626,7 +1626,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+ 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+ 	struct mlx5_core_dev *mdev = dev->mdev;
+-	struct mlx5_ib_create_qp_resp resp;
++	struct mlx5_ib_create_qp_resp resp = {};
+ 	struct mlx5_ib_cq *send_cq;
+ 	struct mlx5_ib_cq *recv_cq;
+ 	unsigned long flags;
+@@ -5365,7 +5365,9 @@ static int set_user_rq_size(struct mlx5_ib_dev *dev,
+ 
+ 	rwq->wqe_count = ucmd->rq_wqe_count;
+ 	rwq->wqe_shift = ucmd->rq_wqe_shift;
+-	rwq->buf_size = (rwq->wqe_count << rwq->wqe_shift);
++	if (check_shl_overflow(rwq->wqe_count, rwq->wqe_shift, &rwq->buf_size))
++		return -EINVAL;
++
+ 	rwq->log_rq_stride = rwq->wqe_shift;
+ 	rwq->log_rq_size = ilog2(rwq->wqe_count);
+ 	return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 98d470d1f3fc..83311dd07019 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -276,6 +276,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
+ 	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
+ 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
+ 		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
++			wqe->status = IB_WC_FATAL_ERR;
+ 			return COMPST_ERROR;
+ 		}
+ 		reset_retry_counters(qp);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 3081c629a7f7..8a9633e97bec 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -1833,8 +1833,7 @@ static bool srpt_close_ch(struct srpt_rdma_ch *ch)
+ 	int ret;
+ 
+ 	if (!srpt_set_ch_state(ch, CH_DRAINING)) {
+-		pr_debug("%s-%d: already closed\n", ch->sess_name,
+-			 ch->qp->qp_num);
++		pr_debug("%s: already closed\n", ch->sess_name);
+ 		return false;
+ 	}
+ 
+@@ -1940,8 +1939,8 @@ static void __srpt_close_all_ch(struct srpt_port *sport)
+ 	list_for_each_entry(nexus, &sport->nexus_list, entry) {
+ 		list_for_each_entry(ch, &nexus->ch_list, list) {
+ 			if (srpt_disconnect_ch(ch) >= 0)
+-				pr_info("Closing channel %s-%d because target %s_%d has been disabled\n",
+-					ch->sess_name, ch->qp->qp_num,
++				pr_info("Closing channel %s because target %s_%d has been disabled\n",
++					ch->sess_name,
+ 					sport->sdev->device->name, sport->port);
+ 			srpt_close_ch(ch);
+ 		}
+@@ -2087,7 +2086,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		struct rdma_conn_param rdma_cm;
+ 		struct ib_cm_rep_param ib_cm;
+ 	} *rep_param = NULL;
+-	struct srpt_rdma_ch *ch;
++	struct srpt_rdma_ch *ch = NULL;
+ 	char i_port_id[36];
+ 	u32 it_iu_len;
+ 	int i, ret;
+@@ -2234,13 +2233,15 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 						TARGET_PROT_NORMAL,
+ 						i_port_id + 2, ch, NULL);
+ 	if (IS_ERR_OR_NULL(ch->sess)) {
++		WARN_ON_ONCE(ch->sess == NULL);
+ 		ret = PTR_ERR(ch->sess);
++		ch->sess = NULL;
+ 		pr_info("Rejected login for initiator %s: ret = %d.\n",
+ 			ch->sess_name, ret);
+ 		rej->reason = cpu_to_be32(ret == -ENOMEM ?
+ 				SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES :
+ 				SRP_LOGIN_REJ_CHANNEL_LIMIT_REACHED);
+-		goto reject;
++		goto destroy_ib;
+ 	}
+ 
+ 	mutex_lock(&sport->mutex);
+@@ -2279,7 +2280,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
+ 		pr_err("rejected SRP_LOGIN_REQ because enabling RTR failed (error code = %d)\n",
+ 		       ret);
+-		goto destroy_ib;
++		goto reject;
+ 	}
+ 
+ 	pr_debug("Establish connection sess=%p name=%s ch=%p\n", ch->sess,
+@@ -2358,8 +2359,11 @@ free_ring:
+ 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
+ 			     ch->sport->sdev, ch->rq_size,
+ 			     ch->max_rsp_size, DMA_TO_DEVICE);
++
+ free_ch:
+-	if (ib_cm_id)
++	if (rdma_cm_id)
++		rdma_cm_id->context = NULL;
++	else
+ 		ib_cm_id->context = NULL;
+ 	kfree(ch);
+ 	ch = NULL;
+@@ -2379,6 +2383,15 @@ reject:
+ 		ib_send_cm_rej(ib_cm_id, IB_CM_REJ_CONSUMER_DEFINED, NULL, 0,
+ 			       rej, sizeof(*rej));
+ 
++	if (ch && ch->sess) {
++		srpt_close_ch(ch);
++		/*
++		 * Tell the caller not to free cm_id since
++		 * srpt_release_channel_work() will do that.
++		 */
++		ret = 0;
++	}
++
+ out:
+ 	kfree(rep_param);
+ 	kfree(rsp);
+@@ -2969,7 +2982,8 @@ static void srpt_add_one(struct ib_device *device)
+ 
+ 	pr_debug("device = %p\n", device);
+ 
+-	sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
++	sdev = kzalloc(struct_size(sdev, port, device->phys_port_cnt),
++		       GFP_KERNEL);
+ 	if (!sdev)
+ 		goto err;
+ 
+@@ -3023,8 +3037,6 @@ static void srpt_add_one(struct ib_device *device)
+ 			      srpt_event_handler);
+ 	ib_register_event_handler(&sdev->event_handler);
+ 
+-	WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port));
+-
+ 	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ 		sport = &sdev->port[i - 1];
+ 		INIT_LIST_HEAD(&sport->nexus_list);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
+index 2361483476a0..444dfd7281b5 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
+@@ -396,9 +396,9 @@ struct srpt_port {
+  * @sdev_mutex:	   Serializes use_srq changes.
+  * @use_srq:       Whether or not to use SRQ.
+  * @ioctx_ring:    Per-HCA SRQ.
+- * @port:          Information about the ports owned by this HCA.
+  * @event_handler: Per-HCA asynchronous IB event handler.
+  * @list:          Node in srpt_dev_list.
++ * @port:          Information about the ports owned by this HCA.
+  */
+ struct srpt_device {
+ 	struct ib_device	*device;
+@@ -410,9 +410,9 @@ struct srpt_device {
+ 	struct mutex		sdev_mutex;
+ 	bool			use_srq;
+ 	struct srpt_recv_ioctx	**ioctx_ring;
+-	struct srpt_port	port[2];
+ 	struct ib_event_handler	event_handler;
+ 	struct list_head	list;
++	struct srpt_port        port[];
+ };
+ 
+ #endif				/* IB_SRPT_H */
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 75456b5aa825..d9c748b6f9e4 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -1339,8 +1339,8 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 	qi_submit_sync(&desc, iommu);
+ }
+ 
+-void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			u64 addr, unsigned mask)
++void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask)
+ {
+ 	struct qi_desc desc;
+ 
+@@ -1355,7 +1355,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+ 		qdep = 0;
+ 
+ 	desc.low = QI_DEV_IOTLB_SID(sid) | QI_DEV_IOTLB_QDEP(qdep) |
+-		   QI_DIOTLB_TYPE;
++		   QI_DIOTLB_TYPE | QI_DEV_IOTLB_PFSID(pfsid);
+ 
+ 	qi_submit_sync(&desc, iommu);
+ }
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 115ff26e9ced..07dc938199f9 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -421,6 +421,7 @@ struct device_domain_info {
+ 	struct list_head global; /* link to global list */
+ 	u8 bus;			/* PCI bus number */
+ 	u8 devfn;		/* PCI devfn number */
++	u16 pfsid;		/* SRIOV physical function source ID */
+ 	u8 pasid_supported:3;
+ 	u8 pasid_enabled:1;
+ 	u8 pri_supported:1;
+@@ -1501,6 +1502,20 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
+ 		return;
+ 
+ 	pdev = to_pci_dev(info->dev);
++	/* For IOMMU that supports device IOTLB throttling (DIT), we assign
++	 * PFSID to the invalidation desc of a VF such that IOMMU HW can gauge
++	 * queue depth at PF level. If DIT is not set, PFSID will be treated as
++	 * reserved, which should be set to 0.
++	 */
++	if (!ecap_dit(info->iommu->ecap))
++		info->pfsid = 0;
++	else {
++		struct pci_dev *pf_pdev;
++
++		/* pdev will be returned if device is not a vf */
++		pf_pdev = pci_physfn(pdev);
++		info->pfsid = PCI_DEVID(pf_pdev->bus->number, pf_pdev->devfn);
++	}
+ 
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ 	/* The PCIe spec, in its wisdom, declares that the behaviour of
+@@ -1566,7 +1581,8 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
+ 
+ 		sid = info->bus << 8 | info->devfn;
+ 		qdep = info->ats_qdep;
+-		qi_flush_dev_iotlb(info->iommu, sid, qdep, addr, mask);
++		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
++				qdep, addr, mask);
+ 	}
+ 	spin_unlock_irqrestore(&device_domain_lock, flags);
+ }
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 40ae6e87cb88..09b47260c74b 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1081,12 +1081,19 @@ static struct platform_driver ipmmu_driver = {
+ 
+ static int __init ipmmu_init(void)
+ {
++	struct device_node *np;
+ 	static bool setup_done;
+ 	int ret;
+ 
+ 	if (setup_done)
+ 		return 0;
+ 
++	np = of_find_matching_node(NULL, ipmmu_of_ids);
++	if (!np)
++		return 0;
++
++	of_node_put(np);
++
+ 	ret = platform_driver_register(&ipmmu_driver);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/mailbox/mailbox-xgene-slimpro.c b/drivers/mailbox/mailbox-xgene-slimpro.c
+index a7040163dd43..b8b2b3533f46 100644
+--- a/drivers/mailbox/mailbox-xgene-slimpro.c
++++ b/drivers/mailbox/mailbox-xgene-slimpro.c
+@@ -195,9 +195,9 @@ static int slimpro_mbox_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, ctx);
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	mb_base = devm_ioremap(&pdev->dev, regs->start, resource_size(regs));
+-	if (!mb_base)
+-		return -ENOMEM;
++	mb_base = devm_ioremap_resource(&pdev->dev, regs);
++	if (IS_ERR(mb_base))
++		return PTR_ERR(mb_base);
+ 
+ 	/* Setup mailbox links */
+ 	for (i = 0; i < MBOX_CNT; i++) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index ad45ebe1a74b..6c33923c2c35 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -645,8 +645,10 @@ static int bch_writeback_thread(void *arg)
+ 			 * data on cache. BCACHE_DEV_DETACHING flag is set in
+ 			 * bch_cached_dev_detach().
+ 			 */
+-			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
++			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)) {
++				up_write(&dc->writeback_lock);
+ 				break;
++			}
+ 		}
+ 
+ 		up_write(&dc->writeback_lock);
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 0d7212410e21..69dddeab124c 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -363,7 +363,7 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
+ 	disk_super->version = cpu_to_le32(cmd->version);
+ 	memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
+ 	memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
+-	disk_super->policy_hint_size = 0;
++	disk_super->policy_hint_size = cpu_to_le32(0);
+ 
+ 	__copy_sm_root(cmd, disk_super);
+ 
+@@ -701,6 +701,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
+ 	disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
+ 	disk_super->policy_version[1] = cpu_to_le32(cmd->policy_version[1]);
+ 	disk_super->policy_version[2] = cpu_to_le32(cmd->policy_version[2]);
++	disk_super->policy_hint_size = cpu_to_le32(cmd->policy_hint_size);
+ 
+ 	disk_super->read_hits = cpu_to_le32(cmd->stats.read_hits);
+ 	disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
+@@ -1322,6 +1323,7 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1332,8 +1334,10 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = flags & M_DIRTY;
+ 
+-		r = fn(context, oblock, to_cblock(cb), flags & M_DIRTY,
++		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+ 			DMERR("policy couldn't load cache block %llu",
+@@ -1361,7 +1365,7 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
+-	bool dirty;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1372,8 +1376,9 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 
+-		dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index b61b069c33af..3fdec1147221 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3069,11 +3069,11 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	 */
+ 	limits->max_segment_size = PAGE_SIZE;
+ 
+-	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+-		limits->logical_block_size = cc->sector_size;
+-		limits->physical_block_size = cc->sector_size;
+-		blk_limits_io_min(limits, cc->sector_size);
+-	}
++	limits->logical_block_size =
++		max_t(unsigned short, limits->logical_block_size, cc->sector_size);
++	limits->physical_block_size =
++		max_t(unsigned, limits->physical_block_size, cc->sector_size);
++	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+ }
+ 
+ static struct target_type crypt_target = {
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 86438b2f10dd..0a8a4c2aa3ea 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -178,7 +178,7 @@ struct dm_integrity_c {
+ 	__u8 sectors_per_block;
+ 
+ 	unsigned char mode;
+-	bool suspending;
++	int suspending;
+ 
+ 	int failed;
+ 
+@@ -2210,7 +2210,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 
+ 	del_timer_sync(&ic->autocommit_timer);
+ 
+-	ic->suspending = true;
++	WRITE_ONCE(ic->suspending, 1);
+ 
+ 	queue_work(ic->commit_wq, &ic->commit_work);
+ 	drain_workqueue(ic->commit_wq);
+@@ -2220,7 +2220,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 		dm_integrity_flush_buffers(ic);
+ 	}
+ 
+-	ic->suspending = false;
++	WRITE_ONCE(ic->suspending, 0);
+ 
+ 	BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
+ 
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index b900723bbd0f..1087f6a1ac79 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2520,6 +2520,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 	case PM_WRITE:
+ 		if (old_mode != new_mode)
+ 			notify_of_pool_mode_change(pool, "write");
++		if (old_mode == PM_OUT_OF_DATA_SPACE)
++			cancel_delayed_work_sync(&pool->no_space_timeout);
+ 		pool->out_of_data_space = false;
+ 		pool->pf.error_if_no_space = pt->requested_pf.error_if_no_space;
+ 		dm_pool_metadata_read_write(pool->pmd);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 87107c995cb5..7669069005e9 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -457,7 +457,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc)
+ 		COMPLETION_INITIALIZER_ONSTACK(endio.c),
+ 		ATOMIC_INIT(1),
+ 	};
+-	unsigned bitmap_bits = wc->dirty_bitmap_size * BITS_PER_LONG;
++	unsigned bitmap_bits = wc->dirty_bitmap_size * 8;
+ 	unsigned i = 0;
+ 
+ 	while (1) {
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index b162c2fe62c3..76e6bed5a1da 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -872,7 +872,7 @@ static int tvp5150_fill_fmt(struct v4l2_subdev *sd,
+ 	f = &format->format;
+ 
+ 	f->width = decoder->rect.width;
+-	f->height = decoder->rect.height;
++	f->height = decoder->rect.height / 2;
+ 
+ 	f->code = MEDIA_BUS_FMT_UYVY8_2X8;
+ 	f->field = V4L2_FIELD_ALTERNATE;
+diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c
+index c37ccbfd52f2..96c07fa1802a 100644
+--- a/drivers/mfd/hi655x-pmic.c
++++ b/drivers/mfd/hi655x-pmic.c
+@@ -49,7 +49,7 @@ static struct regmap_config hi655x_regmap_config = {
+ 	.reg_bits = 32,
+ 	.reg_stride = HI655X_STRIDE,
+ 	.val_bits = 8,
+-	.max_register = HI655X_BUS_ADDR(0xFFF),
++	.max_register = HI655X_BUS_ADDR(0x400) - HI655X_STRIDE,
+ };
+ 
+ static struct resource pwrkey_resources[] = {
+diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
+index c1ba0d42cbc8..e0f29b8a872d 100644
+--- a/drivers/misc/cxl/main.c
++++ b/drivers/misc/cxl/main.c
+@@ -287,7 +287,7 @@ int cxl_adapter_context_get(struct cxl *adapter)
+ 	int rc;
+ 
+ 	rc = atomic_inc_unless_negative(&adapter->contexts_num);
+-	return rc >= 0 ? 0 : -EBUSY;
++	return rc ? 0 : -EBUSY;
+ }
+ 
+ void cxl_adapter_context_put(struct cxl *adapter)
+diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c
+index 88876ae8f330..a963b0a4a3c5 100644
+--- a/drivers/misc/ocxl/link.c
++++ b/drivers/misc/ocxl/link.c
+@@ -136,7 +136,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	int rc;
+ 
+ 	/*
+-	 * We need to release a reference on the mm whenever exiting this
++	 * We must release a reference on mm_users whenever exiting this
+ 	 * function (taken in the memory fault interrupt handler)
+ 	 */
+ 	rc = copro_handle_mm_fault(fault->pe_data.mm, fault->dar, fault->dsisr,
+@@ -172,7 +172,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	}
+ 	r = RESTART;
+ ack:
+-	mmdrop(fault->pe_data.mm);
++	mmput(fault->pe_data.mm);
+ 	ack_irq(spa, r);
+ }
+ 
+@@ -184,6 +184,7 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	struct pe_data *pe_data;
+ 	struct ocxl_process_element *pe;
+ 	int lpid, pid, tid;
++	bool schedule = false;
+ 
+ 	read_irq(spa, &dsisr, &dar, &pe_handle);
+ 	trace_ocxl_fault(spa->spa_mem, pe_handle, dsisr, dar, -1);
+@@ -226,14 +227,19 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	}
+ 	WARN_ON(pe_data->mm->context.id != pid);
+ 
+-	spa->xsl_fault.pe = pe_handle;
+-	spa->xsl_fault.dar = dar;
+-	spa->xsl_fault.dsisr = dsisr;
+-	spa->xsl_fault.pe_data = *pe_data;
+-	mmgrab(pe_data->mm); /* mm count is released by bottom half */
+-
++	if (mmget_not_zero(pe_data->mm)) {
++			spa->xsl_fault.pe = pe_handle;
++			spa->xsl_fault.dar = dar;
++			spa->xsl_fault.dsisr = dsisr;
++			spa->xsl_fault.pe_data = *pe_data;
++			schedule = true;
++			/* mm_users count released by bottom half */
++	}
+ 	rcu_read_unlock();
+-	schedule_work(&spa->xsl_fault.fault_work);
++	if (schedule)
++		schedule_work(&spa->xsl_fault.fault_work);
++	else
++		ack_irq(spa, ADDRESS_ERROR);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 56c6f79a5c5a..5f8b583c6e41 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -341,7 +341,13 @@ static bool vmballoon_send_start(struct vmballoon *b, unsigned long req_caps)
+ 		success = false;
+ 	}
+ 
+-	if (b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS)
++	/*
++	 * 2MB pages are only supported with batching. If batching is for some
++	 * reason disabled, do not use 2MB pages, since otherwise the legacy
++	 * mechanism is used with 2MB pages, causing a failure.
++	 */
++	if ((b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) &&
++	    (b->capabilities & VMW_BALLOON_BATCHED_CMDS))
+ 		b->supported_page_sizes = 2;
+ 	else
+ 		b->supported_page_sizes = 1;
+@@ -450,7 +456,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pfn32 = (u32)pfn;
+ 	if (pfn32 != pfn)
+-		return -1;
++		return -EINVAL;
+ 
+ 	STATS_INC(b->stats.lock[false]);
+ 
+@@ -460,7 +466,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status);
+ 	STATS_INC(b->stats.lock_fail[false]);
+-	return 1;
++	return -EIO;
+ }
+ 
+ static int vmballoon_send_batched_lock(struct vmballoon *b,
+@@ -597,11 +603,12 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 
+ 	locked = vmballoon_send_lock_page(b, page_to_pfn(page), &hv_status,
+ 								target);
+-	if (locked > 0) {
++	if (locked) {
+ 		STATS_INC(b->stats.refused_alloc[false]);
+ 
+-		if (hv_status == VMW_BALLOON_ERROR_RESET ||
+-				hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED) {
++		if (locked == -EIO &&
++		    (hv_status == VMW_BALLOON_ERROR_RESET ||
++		     hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED)) {
+ 			vmballoon_free_page(page, false);
+ 			return -EIO;
+ 		}
+@@ -617,7 +624,7 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 		} else {
+ 			vmballoon_free_page(page, false);
+ 		}
+-		return -EIO;
++		return locked;
+ 	}
+ 
+ 	/* track allocated page */
+@@ -1029,29 +1036,30 @@ static void vmballoon_vmci_cleanup(struct vmballoon *b)
+  */
+ static int vmballoon_vmci_init(struct vmballoon *b)
+ {
+-	int error = 0;
++	unsigned long error, dummy;
+ 
+-	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) != 0) {
+-		error = vmci_doorbell_create(&b->vmci_doorbell,
+-				VMCI_FLAG_DELAYED_CB,
+-				VMCI_PRIVILEGE_FLAG_RESTRICTED,
+-				vmballoon_doorbell, b);
+-
+-		if (error == VMCI_SUCCESS) {
+-			VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET,
+-					b->vmci_doorbell.context,
+-					b->vmci_doorbell.resource, error);
+-			STATS_INC(b->stats.doorbell_set);
+-		}
+-	}
++	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) == 0)
++		return 0;
+ 
+-	if (error != 0) {
+-		vmballoon_vmci_cleanup(b);
++	error = vmci_doorbell_create(&b->vmci_doorbell, VMCI_FLAG_DELAYED_CB,
++				     VMCI_PRIVILEGE_FLAG_RESTRICTED,
++				     vmballoon_doorbell, b);
+ 
+-		return -EIO;
+-	}
++	if (error != VMCI_SUCCESS)
++		goto fail;
++
++	error = VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, b->vmci_doorbell.context,
++				   b->vmci_doorbell.resource, dummy);
++
++	STATS_INC(b->stats.doorbell_set);
++
++	if (error != VMW_BALLOON_SUCCESS)
++		goto fail;
+ 
+ 	return 0;
++fail:
++	vmballoon_vmci_cleanup(b);
++	return -EIO;
+ }
+ 
+ /*
+@@ -1289,7 +1297,14 @@ static int __init vmballoon_init(void)
+ 
+ 	return 0;
+ }
+-module_init(vmballoon_init);
++
++/*
++ * Using late_initcall() instead of module_init() allows the balloon to use the
++ * VMCI doorbell even when the balloon is built into the kernel. Otherwise the
++ * VMCI is probed only after the balloon is initialized. If the balloon is used
++ * as a module, late_initcall() is equivalent to module_init().
++ */
++late_initcall(vmballoon_init);
+ 
+ static void __exit vmballoon_exit(void)
+ {
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 648eb6743ed5..6edffeed9953 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -238,10 +238,6 @@ static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
+ 	mmc_exit_request(mq->queue, req);
+ }
+ 
+-/*
+- * We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
+- * will not be dispatched in parallel.
+- */
+ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				    const struct blk_mq_queue_data *bd)
+ {
+@@ -264,7 +260,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	spin_lock_irq(q->queue_lock);
+ 
+-	if (mq->recovery_needed) {
++	if (mq->recovery_needed || mq->busy) {
+ 		spin_unlock_irq(q->queue_lock);
+ 		return BLK_STS_RESOURCE;
+ 	}
+@@ -291,6 +287,9 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		break;
+ 	}
+ 
++	/* Parallel dispatch of requests is not supported at the moment */
++	mq->busy = true;
++
+ 	mq->in_flight[issue_type] += 1;
+ 	get_card = (mmc_tot_in_flight(mq) == 1);
+ 	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
+@@ -333,9 +332,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		mq->in_flight[issue_type] -= 1;
+ 		if (mmc_tot_in_flight(mq) == 0)
+ 			put_card = true;
++		mq->busy = false;
+ 		spin_unlock_irq(q->queue_lock);
+ 		if (put_card)
+ 			mmc_put_card(card, &mq->ctx);
++	} else {
++		WRITE_ONCE(mq->busy, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
+index 17e59d50b496..9bf3c9245075 100644
+--- a/drivers/mmc/core/queue.h
++++ b/drivers/mmc/core/queue.h
+@@ -81,6 +81,7 @@ struct mmc_queue {
+ 	unsigned int		cqe_busy;
+ #define MMC_CQE_DCMD_BUSY	BIT(0)
+ #define MMC_CQE_QUEUE_FULL	BIT(1)
++	bool			busy;
+ 	bool			use_cqe;
+ 	bool			recovery_needed;
+ 	bool			in_recovery;
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index d032bd63444d..4a7991151918 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -45,14 +45,16 @@
+ /* DM_CM_RST */
+ #define RST_DTRANRST1		BIT(9)
+ #define RST_DTRANRST0		BIT(8)
+-#define RST_RESERVED_BITS	GENMASK_ULL(32, 0)
++#define RST_RESERVED_BITS	GENMASK_ULL(31, 0)
+ 
+ /* DM_CM_INFO1 and DM_CM_INFO1_MASK */
+ #define INFO1_CLEAR		0
++#define INFO1_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO1_DTRANEND1		BIT(17)
+ #define INFO1_DTRANEND0		BIT(16)
+ 
+ /* DM_CM_INFO2 and DM_CM_INFO2_MASK */
++#define INFO2_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO2_DTRANERR1		BIT(17)
+ #define INFO2_DTRANERR0		BIT(16)
+ 
+@@ -236,6 +238,12 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
+ {
+ 	struct renesas_sdhi *priv = host_to_priv(host);
+ 
++	/* Disable DMAC interrupts, we don't use them */
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO1_MASK,
++					    INFO1_MASK_CLEAR);
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO2_MASK,
++					    INFO2_MASK_CLEAR);
++
+ 	/* Each value is set to non-zero to assume "enabling" each DMA */
+ 	host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/dev.h b/drivers/net/wireless/marvell/libertas/dev.h
+index dd1ee1f0af48..469134930026 100644
+--- a/drivers/net/wireless/marvell/libertas/dev.h
++++ b/drivers/net/wireless/marvell/libertas/dev.h
+@@ -104,6 +104,7 @@ struct lbs_private {
+ 	u8 fw_ready;
+ 	u8 surpriseremoved;
+ 	u8 setup_fw_on_resume;
++	u8 power_up_on_resume;
+ 	int (*hw_host_to_card) (struct lbs_private *priv, u8 type, u8 *payload, u16 nb);
+ 	void (*reset_card) (struct lbs_private *priv);
+ 	int (*power_save) (struct lbs_private *priv);
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 2300e796c6ab..43743c26c071 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1290,15 +1290,23 @@ static void if_sdio_remove(struct sdio_func *func)
+ static int if_sdio_suspend(struct device *dev)
+ {
+ 	struct sdio_func *func = dev_to_sdio_func(dev);
+-	int ret;
+ 	struct if_sdio_card *card = sdio_get_drvdata(func);
++	struct lbs_private *priv = card->priv;
++	int ret;
+ 
+ 	mmc_pm_flag_t flags = sdio_get_host_pm_caps(func);
++	priv->power_up_on_resume = false;
+ 
+ 	/* If we're powered off anyway, just let the mmc layer remove the
+ 	 * card. */
+-	if (!lbs_iface_active(card->priv))
+-		return -ENOSYS;
++	if (!lbs_iface_active(priv)) {
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
++	}
+ 
+ 	dev_info(dev, "%s: suspend: PM flags = 0x%x\n",
+ 		 sdio_func_id(func), flags);
+@@ -1306,9 +1314,14 @@ static int if_sdio_suspend(struct device *dev)
+ 	/* If we aren't being asked to wake on anything, we should bail out
+ 	 * and let the SD stack power down the card.
+ 	 */
+-	if (card->priv->wol_criteria == EHS_REMOVE_WAKEUP) {
++	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+-		return -ENOSYS;
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
+ 	}
+ 
+ 	if (!(flags & MMC_PM_KEEP_POWER)) {
+@@ -1321,7 +1334,7 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = lbs_suspend(card->priv);
++	ret = lbs_suspend(priv);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1336,6 +1349,11 @@ static int if_sdio_resume(struct device *dev)
+ 
+ 	dev_info(dev, "%s: resume: we're back\n", sdio_func_id(func));
+ 
++	if (card->priv->power_up_on_resume) {
++		if_sdio_power_on(card);
++		wait_event(card->pwron_waitq, card->priv->fw_ready);
++	}
++
+ 	ret = lbs_resume(card->priv);
+ 
+ 	return ret;
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 27902a8799b1..8aae6dcc839f 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -812,9 +812,9 @@ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		 * overshoots the remainder by 4 bytes, assume it was
+ 		 * including 'status'.
+ 		 */
+-		if (out_field[1] - 8 == remainder)
++		if (out_field[1] - 4 == remainder)
+ 			return remainder;
+-		return out_field[1] - 4;
++		return out_field[1] - 8;
+ 	} else if (cmd == ND_CMD_CALL) {
+ 		struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *) in_field;
+ 
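The nvdimm/bus.c hunk swaps two offsets that had been transposed: when the firmware-reported size overshoots the remainder by exactly the 4-byte header, the remainder is trusted; otherwise the reported size is assumed to also include a 4-byte status word. A userspace model of the corrected arithmetic (a sketch, not the kernel function):

```c
#include <stdint.h>

/* Model of the fixed nd_cmd_out_size() fixup: 'reported' is what the
 * firmware claims, 'remainder' is what the envelope actually leaves.
 * If reported minus the 4-byte header matches, use the remainder;
 * otherwise assume a 4-byte status word was also counted. */
uint32_t fixed_out_size(uint32_t reported, uint32_t remainder)
{
	if (reported - 4 == remainder)
		return remainder;
	return reported - 8;
}
```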
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index 8d348b22ba45..863cabc35215 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -536,6 +536,37 @@ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region)
+ 	return info.available;
+ }
+ 
++/**
++ * nd_pmem_max_contiguous_dpa - For the given dimm+region, return the max
++ *			   contiguous unallocated dpa range.
++ * @nd_region: constrain available space check to this reference region
++ * @nd_mapping: container of dpa-resource-root + labels
++ */
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping)
++{
++	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++	struct nvdimm_bus *nvdimm_bus;
++	resource_size_t max = 0;
++	struct resource *res;
++
++	/* if a dimm is disabled the available capacity is zero */
++	if (!ndd)
++		return 0;
++
++	nvdimm_bus = walk_to_nvdimm_bus(ndd->dev);
++	if (__reserve_free_pmem(&nd_region->dev, nd_mapping->nvdimm))
++		return 0;
++	for_each_dpa_resource(ndd, res) {
++		if (strcmp(res->name, "pmem-reserve") != 0)
++			continue;
++		if (resource_size(res) > max)
++			max = resource_size(res);
++	}
++	release_free_pmem(nvdimm_bus, nd_mapping);
++	return max;
++}
++
+ /**
+  * nd_pmem_available_dpa - for the given dimm+region account unallocated dpa
+  * @nd_mapping: container of dpa-resource-root + labels
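`nd_pmem_max_contiguous_dpa()` above walks the DIMM's dpa resources and keeps the size of the largest "pmem-reserve" range. The core of that scan is a plain max-of-sizes loop, modeled here in userspace (sizes as plain integers, no resource tree):

```c
#include <stddef.h>

/* Simplified model of the scan in nd_pmem_max_contiguous_dpa(): out of
 * a set of free (reserved) ranges, the largest single range bounds what
 * a new contiguous pmem allocation on this DIMM can use. */
unsigned long long max_contiguous(const unsigned long long *sizes, size_t n)
{
	unsigned long long max = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (sizes[i] > max)
			max = sizes[i];
	return max;
}
```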
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 28afdd668905..4525d8ef6022 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -799,7 +799,7 @@ static int merge_dpa(struct nd_region *nd_region,
+ 	return 0;
+ }
+ 
+-static int __reserve_free_pmem(struct device *dev, void *data)
++int __reserve_free_pmem(struct device *dev, void *data)
+ {
+ 	struct nvdimm *nvdimm = data;
+ 	struct nd_region *nd_region;
+@@ -836,7 +836,7 @@ static int __reserve_free_pmem(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
+-static void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
+ 		struct nd_mapping *nd_mapping)
+ {
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+@@ -1032,7 +1032,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
+ 
+ 		allocated += nvdimm_allocated_dpa(ndd, &label_id);
+ 	}
+-	available = nd_region_available_dpa(nd_region);
++	available = nd_region_allocatable_dpa(nd_region);
+ 
+ 	if (val > available + allocated)
+ 		return -ENOSPC;
+diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
+index 79274ead54fb..ac68072fb8cd 100644
+--- a/drivers/nvdimm/nd-core.h
++++ b/drivers/nvdimm/nd-core.h
+@@ -100,6 +100,14 @@ struct nd_region;
+ struct nvdimm_drvdata;
+ struct nd_mapping;
+ void nd_mapping_free_labels(struct nd_mapping *nd_mapping);
++
++int __reserve_free_pmem(struct device *dev, void *data);
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++		       struct nd_mapping *nd_mapping);
++
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping);
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region);
+ resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, resource_size_t *overlap);
+ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index ec3543b83330..c30d5af02cc2 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -389,6 +389,30 @@ resource_size_t nd_region_available_dpa(struct nd_region *nd_region)
+ 	return available;
+ }
+ 
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region)
++{
++	resource_size_t available = 0;
++	int i;
++
++	if (is_memory(&nd_region->dev))
++		available = PHYS_ADDR_MAX;
++
++	WARN_ON(!is_nvdimm_bus_locked(&nd_region->dev));
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		if (is_memory(&nd_region->dev))
++			available = min(available,
++					nd_pmem_max_contiguous_dpa(nd_region,
++								   nd_mapping));
++		else if (is_nd_blk(&nd_region->dev))
++			available += nd_blk_available_dpa(nd_region);
++	}
++	if (is_memory(&nd_region->dev))
++		return available * nd_region->ndr_mappings;
++	return available;
++}
++
+ static ssize_t available_size_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
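For an interleaved (memory) region, `nd_region_allocatable_dpa()` above takes the minimum contiguous free space across all mappings and multiplies by the mapping count, because an interleave set needs the same extent on every DIMM. A hedged sketch of that min-then-multiply rule:

```c
#include <stddef.h>

/* Model of nd_region_allocatable_dpa() for an interleaved region: the
 * usable capacity is limited by the smallest per-DIMM contiguous free
 * range, replicated across every mapping in the interleave set. */
unsigned long long allocatable_interleaved(const unsigned long long *free_per_dimm,
					   size_t mappings)
{
	unsigned long long avail = (unsigned long long)-1;
	size_t i;

	if (mappings == 0)
		return 0;
	for (i = 0; i < mappings; i++)
		if (free_per_dimm[i] < avail)
			avail = free_per_dimm[i];
	return avail * mappings;
}
```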
+diff --git a/drivers/pwm/pwm-omap-dmtimer.c b/drivers/pwm/pwm-omap-dmtimer.c
+index 665da3c8fbce..f45798679e3c 100644
+--- a/drivers/pwm/pwm-omap-dmtimer.c
++++ b/drivers/pwm/pwm-omap-dmtimer.c
+@@ -264,8 +264,9 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
+ 
+ 	timer_pdata = dev_get_platdata(&timer_pdev->dev);
+ 	if (!timer_pdata) {
+-		dev_err(&pdev->dev, "dmtimer pdata structure NULL\n");
+-		ret = -EINVAL;
++		dev_dbg(&pdev->dev,
++			 "dmtimer pdata structure NULL, deferring probe\n");
++		ret = -EPROBE_DEFER;
+ 		goto put;
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-tiehrpwm.c b/drivers/pwm/pwm-tiehrpwm.c
+index 4c22cb395040..f7b8a86fa5c5 100644
+--- a/drivers/pwm/pwm-tiehrpwm.c
++++ b/drivers/pwm/pwm-tiehrpwm.c
+@@ -33,10 +33,6 @@
+ #define TBCTL			0x00
+ #define TBPRD			0x0A
+ 
+-#define TBCTL_RUN_MASK		(BIT(15) | BIT(14))
+-#define TBCTL_STOP_NEXT		0
+-#define TBCTL_STOP_ON_CYCLE	BIT(14)
+-#define TBCTL_FREE_RUN		(BIT(15) | BIT(14))
+ #define TBCTL_PRDLD_MASK	BIT(3)
+ #define TBCTL_PRDLD_SHDW	0
+ #define TBCTL_PRDLD_IMDT	BIT(3)
+@@ -360,7 +356,7 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Channels polarity can be configured from action qualifier module */
+ 	configure_polarity(pc, pwm->hwpwm);
+ 
+-	/* Enable TBCLK before enabling PWM device */
++	/* Enable TBCLK */
+ 	ret = clk_enable(pc->tbclk);
+ 	if (ret) {
+ 		dev_err(chip->dev, "Failed to enable TBCLK for %s: %d\n",
+@@ -368,9 +364,6 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		return ret;
+ 	}
+ 
+-	/* Enable time counter for free_run */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_FREE_RUN);
+-
+ 	return 0;
+ }
+ 
+@@ -388,6 +381,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		aqcsfrc_mask = AQCSFRC_CSFA_MASK;
+ 	}
+ 
++	/* Update shadow register first before modifying active register */
++	ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val);
+ 	/*
+ 	 * Changes to immediate action on Action Qualifier. This puts
+ 	 * Action Qualifier control on PWM output from next TBCLK
+@@ -400,9 +395,6 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Disabling TBCLK on PWM disable */
+ 	clk_disable(pc->tbclk);
+ 
+-	/* Stop Time base counter */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_STOP_NEXT);
+-
+ 	/* Disable clock on PWM disable */
+ 	pm_runtime_put_sync(chip->dev);
+ }
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index 39086398833e..6a7b804c3074 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -861,13 +861,6 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 			goto err;
+ 	}
+ 
+-	if (rtc->is_pmic_controller) {
+-		if (!pm_power_off) {
+-			omap_rtc_power_off_rtc = rtc;
+-			pm_power_off = omap_rtc_power_off;
+-		}
+-	}
+-
+ 	/* Support ext_wakeup pinconf */
+ 	rtc_pinctrl_desc.name = dev_name(&pdev->dev);
+ 
+@@ -880,12 +873,21 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 
+ 	ret = rtc_register_device(rtc->rtc);
+ 	if (ret)
+-		goto err;
++		goto err_deregister_pinctrl;
+ 
+ 	rtc_nvmem_register(rtc->rtc, &omap_rtc_nvmem_config);
+ 
++	if (rtc->is_pmic_controller) {
++		if (!pm_power_off) {
++			omap_rtc_power_off_rtc = rtc;
++			pm_power_off = omap_rtc_power_off;
++		}
++	}
++
+ 	return 0;
+ 
++err_deregister_pinctrl:
++	pinctrl_unregister(rtc->pctldev);
+ err:
+ 	clk_disable_unprepare(rtc->clk);
+ 	device_init_wakeup(&pdev->dev, false);
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index f3dad6fcdc35..a568f35522f9 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -319,7 +319,7 @@ static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
+ 		 */
+ 		if (cdns_spi_read(xspi, CDNS_SPI_ISR) &
+ 		    CDNS_SPI_IXR_TXFULL)
+-			usleep_range(10, 20);
++			udelay(10);
+ 
+ 		if (xspi->txbuf)
+ 			cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index 577084bb911b..a02099c90c5c 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -217,7 +217,7 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	pdata = &dspi->pdata;
+ 
+ 	/* program delay transfers if tx_delay is non zero */
+-	if (spicfg->wdelay)
++	if (spicfg && spicfg->wdelay)
+ 		spidat1 |= SPIDAT1_WDEL;
+ 
+ 	/*
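The spi-davinci hunk is a plain NULL guard: `spicfg` is optional per-device controller data, so it must be checked before dereference. The same pattern in a self-contained sketch (the flag constant is a hypothetical stand-in for `SPIDAT1_WDEL`):

```c
#include <stdbool.h>
#include <stddef.h>

/* Model of the spi-davinci guard: dereference the optional per-device
 * config only when it was actually provided. */
struct spi_cfg { bool wdelay; };

unsigned int chipselect_flags(const struct spi_cfg *spicfg)
{
	unsigned int flags = 0;

	if (spicfg && spicfg->wdelay) /* was: if (spicfg->wdelay) */
		flags |= 0x1; /* stands in for SPIDAT1_WDEL */
	return flags;
}
```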
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 0630962ce442..f225f7c99a32 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1029,30 +1029,30 @@ static int dspi_probe(struct platform_device *pdev)
+ 		goto out_master_put;
+ 	}
+ 
++	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
++	if (IS_ERR(dspi->clk)) {
++		ret = PTR_ERR(dspi->clk);
++		dev_err(&pdev->dev, "unable to get clock\n");
++		goto out_master_put;
++	}
++	ret = clk_prepare_enable(dspi->clk);
++	if (ret)
++		goto out_master_put;
++
+ 	dspi_init(dspi);
+ 	dspi->irq = platform_get_irq(pdev, 0);
+ 	if (dspi->irq < 0) {
+ 		dev_err(&pdev->dev, "can't get platform irq\n");
+ 		ret = dspi->irq;
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+ 
+ 	ret = devm_request_irq(&pdev->dev, dspi->irq, dspi_interrupt, 0,
+ 			pdev->name, dspi);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Unable to attach DSPI interrupt\n");
+-		goto out_master_put;
+-	}
+-
+-	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
+-	if (IS_ERR(dspi->clk)) {
+-		ret = PTR_ERR(dspi->clk);
+-		dev_err(&pdev->dev, "unable to get clock\n");
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+-	ret = clk_prepare_enable(dspi->clk);
+-	if (ret)
+-		goto out_master_put;
+ 
+ 	if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ 		ret = dspi_request_dma(dspi, res->start);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 0b2d60d30f69..14f4ea59caff 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1391,6 +1391,10 @@ static const struct pci_device_id pxa2xx_spi_pci_compound_match[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x31c2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c4), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c6), LPSS_BXT_SSP },
++	/* ICL-LP */
++	{ PCI_VDEVICE(INTEL, 0x34aa), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34ab), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34fb), LPSS_CNL_SSP },
+ 	/* APL */
+ 	{ PCI_VDEVICE(INTEL, 0x5ac2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x5ac4), LPSS_BXT_SSP },
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 9c14a453f73c..80bb56facfb6 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -182,6 +182,7 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	unsigned long page;
++	unsigned long flags = 0;
+ 	int retval = 0;
+ 
+ 	if (uport->type == PORT_UNKNOWN)
+@@ -196,15 +197,18 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ 	 * Initialise and allocate the transmit and temporary
+ 	 * buffer.
+ 	 */
+-	if (!state->xmit.buf) {
+-		/* This is protected by the per port mutex */
+-		page = get_zeroed_page(GFP_KERNEL);
+-		if (!page)
+-			return -ENOMEM;
++	page = get_zeroed_page(GFP_KERNEL);
++	if (!page)
++		return -ENOMEM;
+ 
++	uart_port_lock(state, flags);
++	if (!state->xmit.buf) {
+ 		state->xmit.buf = (unsigned char *) page;
+ 		uart_circ_clear(&state->xmit);
++	} else {
++		free_page(page);
+ 	}
++	uart_port_unlock(uport, flags);
+ 
+ 	retval = uport->ops->startup(uport);
+ 	if (retval == 0) {
+@@ -263,6 +267,7 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	struct tty_port *port = &state->port;
++	unsigned long flags = 0;
+ 
+ 	/*
+ 	 * Set the TTY IO error marker
+@@ -295,10 +300,12 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ 	/*
+ 	 * Free the transmit buffer page.
+ 	 */
++	uart_port_lock(state, flags);
+ 	if (state->xmit.buf) {
+ 		free_page((unsigned long)state->xmit.buf);
+ 		state->xmit.buf = NULL;
+ 	}
++	uart_port_unlock(uport, flags);
+ }
+ 
+ /**
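The serial_core fix above follows the classic "allocate outside the lock, publish under the lock, free if you lost the race" pattern: the page is obtained before `uart_port_lock()`, installed only if `state->xmit.buf` is still NULL, and released otherwise. A userspace sketch with `calloc` standing in for `get_zeroed_page()` and the locking elided:

```c
#include <stdlib.h>

/* Model of the fixed startup path: allocate the transmit buffer
 * outside the lock, install it under the lock only if none exists,
 * otherwise release the spare allocation. Locking itself is elided. */
int install_xmit_buf(char **slot)
{
	char *page = calloc(1, 4096); /* stands in for get_zeroed_page() */

	if (!page)
		return -12; /* -ENOMEM */
	/* the port lock would be taken here */
	if (!*slot)
		*slot = page;
	else
		free(page); /* lost the race: drop the spare */
	/* the port lock would be released here */
	return 0;
}
```

Allocating first keeps the sleeping `GFP_KERNEL` allocation outside the spinlock, which is the whole point of restructuring the original `if (!state->xmit.buf)` block.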
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 609438d2465b..9ae2fb1344de 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1704,12 +1704,12 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-static int do_unregister_framebuffer(struct fb_info *fb_info)
++static int unbind_console(struct fb_info *fb_info)
+ {
+ 	struct fb_event event;
+-	int i, ret = 0;
++	int ret;
++	int i = fb_info->node;
+ 
+-	i = fb_info->node;
+ 	if (i < 0 || i >= FB_MAX || registered_fb[i] != fb_info)
+ 		return -EINVAL;
+ 
+@@ -1724,17 +1724,29 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	unlock_fb_info(fb_info);
+ 	console_unlock();
+ 
++	return ret;
++}
++
++static int __unlink_framebuffer(struct fb_info *fb_info);
++
++static int do_unregister_framebuffer(struct fb_info *fb_info)
++{
++	struct fb_event event;
++	int ret;
++
++	ret = unbind_console(fb_info);
++
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	pm_vt_switch_unregister(fb_info->dev);
+ 
+-	unlink_framebuffer(fb_info);
++	__unlink_framebuffer(fb_info);
+ 	if (fb_info->pixmap.addr &&
+ 	    (fb_info->pixmap.flags & FB_PIXMAP_DEFAULT))
+ 		kfree(fb_info->pixmap.addr);
+ 	fb_destroy_modelist(&fb_info->modelist);
+-	registered_fb[i] = NULL;
++	registered_fb[fb_info->node] = NULL;
+ 	num_registered_fb--;
+ 	fb_cleanup_device(fb_info);
+ 	event.info = fb_info;
+@@ -1747,7 +1759,7 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-int unlink_framebuffer(struct fb_info *fb_info)
++static int __unlink_framebuffer(struct fb_info *fb_info)
+ {
+ 	int i;
+ 
+@@ -1759,6 +1771,20 @@ int unlink_framebuffer(struct fb_info *fb_info)
+ 		device_destroy(fb_class, MKDEV(FB_MAJOR, i));
+ 		fb_info->dev = NULL;
+ 	}
++
++	return 0;
++}
++
++int unlink_framebuffer(struct fb_info *fb_info)
++{
++	int ret;
++
++	ret = __unlink_framebuffer(fb_info);
++	if (ret)
++		return ret;
++
++	unbind_console(fb_info);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(unlink_framebuffer);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index f365d4862015..862e8027acf6 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -27,6 +27,7 @@
+ #include <linux/slab.h>
+ #include <linux/prefetch.h>
+ #include <linux/delay.h>
++#include <asm/unaligned.h>
+ #include <video/udlfb.h>
+ #include "edid.h"
+ 
+@@ -450,17 +451,17 @@ static void dlfb_compress_hline(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min(MAX_CMD_PIXELS + 1,
+-			min((int)(pixel_end - pixel),
+-			    (int)(cmd_buffer_end - cmd) / BPP));
++		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel),
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / BPP);
+ 
+-		prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * BPP);
++		prefetch_range((void *) pixel, (u8 *)cmd_pixel_end - (u8 *)pixel);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const uint16_t * const repeating_pixel = pixel;
+ 
+-			*cmd++ = *pixel >> 8;
+-			*cmd++ = *pixel;
++			put_unaligned_be16(*pixel, cmd);
++			cmd += 2;
+ 			pixel++;
+ 
+ 			if (unlikely((pixel < cmd_pixel_end) &&
+@@ -486,13 +487,16 @@ static void dlfb_compress_hline(
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+ 			*raw_pixels_count_byte = (pixel-raw_pixel_start) & 0xFF;
++		} else {
++			/* undo unused byte */
++			cmd--;
+ 		}
+ 
+ 		*cmd_pixels_count_byte = (pixel - cmd_pixel_start) & 0xFF;
+-		dev_addr += (pixel - cmd_pixel_start) * BPP;
++		dev_addr += (u8 *)pixel - (u8 *)cmd_pixel_start;
+ 	}
+ 
+-	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
++	if (cmd_buffer_end - MIN_RLX_CMD_BYTES <= cmd) {
+ 		/* Fill leftover bytes with no-ops */
+ 		if (cmd_buffer_end > cmd)
+ 			memset(cmd, 0xAF, cmd_buffer_end - cmd);
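The udlfb hunks above reclamp `cmd_pixel_end` with `min3()` so the encoder can never write a 2-byte pixel past `cmd_buffer_end`, and switch the store to `put_unaligned_be16()`. A hedged model of just the clamp (assumes `bytes_left >= 1`; one byte is held back for the trailing count, mirroring the `cmd_buffer_end - 1 - cmd` term):

```c
/* Model of the min3() clamp: the number of pixels encoded in one
 * command is bounded by the protocol maximum, the pixels left in the
 * line, and the bytes left in the command buffer at 2 bytes per pixel,
 * keeping one spare byte for the trailing count. */
#define MAX_CMD_PIXELS 255UL
#define BPP 2UL

unsigned long clamp_pixels(unsigned long pixels_left, unsigned long bytes_left)
{
	unsigned long n = MAX_CMD_PIXELS + 1UL;

	if (pixels_left < n)
		n = pixels_left;
	if ((bytes_left - 1) / BPP < n)
		n = (bytes_left - 1) / BPP;
	return n;
}
```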
+@@ -610,8 +614,11 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		ret = dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -735,8 +742,11 @@ static void dlfb_dpy_deferred_io(struct fb_info *info,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -922,14 +932,6 @@ static void dlfb_free(struct kref *kref)
+ 	kfree(dlfb);
+ }
+ 
+-static void dlfb_release_urb_work(struct work_struct *work)
+-{
+-	struct urb_node *unode = container_of(work, struct urb_node,
+-					      release_urb_work.work);
+-
+-	up(&unode->dlfb->urbs.limit_sem);
+-}
+-
+ static void dlfb_free_framebuffer(struct dlfb_data *dlfb)
+ {
+ 	struct fb_info *info = dlfb->info;
+@@ -1039,10 +1041,25 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 	int result;
+ 	u16 *pix_framebuffer;
+ 	int i;
++	struct fb_var_screeninfo fvs;
++
++	/* clear the activate field because it causes spurious miscompares */
++	fvs = info->var;
++	fvs.activate = 0;
++	fvs.vmode &= ~FB_VMODE_SMOOTH_XPAN;
++
++	if (!memcmp(&dlfb->current_mode, &fvs, sizeof(struct fb_var_screeninfo)))
++		return 0;
+ 
+ 	result = dlfb_set_video_mode(dlfb, &info->var);
+ 
+-	if ((result == 0) && (dlfb->fb_count == 0)) {
++	if (result)
++		return result;
++
++	dlfb->current_mode = fvs;
++	info->fix.line_length = info->var.xres * (info->var.bits_per_pixel / 8);
++
++	if (dlfb->fb_count == 0) {
+ 
+ 		/* paint greenscreen */
+ 
+@@ -1054,7 +1071,7 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 				   info->screen_base);
+ 	}
+ 
+-	return result;
++	return 0;
+ }
+ 
+ /* To fonzi the jukebox (e.g. make blanking changes take effect) */
+@@ -1649,7 +1666,8 @@ static void dlfb_init_framebuffer_work(struct work_struct *work)
+ 	dlfb->info = info;
+ 	info->par = dlfb;
+ 	info->pseudo_palette = dlfb->pseudo_palette;
+-	info->fbops = &dlfb_ops;
++	dlfb->ops = dlfb_ops;
++	info->fbops = &dlfb->ops;
+ 
+ 	retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (retval < 0) {
+@@ -1789,14 +1807,7 @@ static void dlfb_urb_completion(struct urb *urb)
+ 	dlfb->urbs.available++;
+ 	spin_unlock_irqrestore(&dlfb->urbs.lock, flags);
+ 
+-	/*
+-	 * When using fb_defio, we deadlock if up() is called
+-	 * while another is waiting. So queue to another process.
+-	 */
+-	if (fb_defio)
+-		schedule_delayed_work(&unode->release_urb_work, 0);
+-	else
+-		up(&dlfb->urbs.limit_sem);
++	up(&dlfb->urbs.limit_sem);
+ }
+ 
+ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+@@ -1805,16 +1816,11 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at disconnect */
+-		ret = down_interruptible(&dlfb->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&dlfb->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&dlfb->urbs.lock, flags);
+ 
+@@ -1838,25 +1844,27 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 
+ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ {
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&dlfb->urbs.lock);
+ 
++retry:
+ 	dlfb->urbs.size = size;
+ 	INIT_LIST_HEAD(&dlfb->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&dlfb->urbs.limit_sem, 0);
++	dlfb->urbs.count = 0;
++	dlfb->urbs.available = 0;
++
++	while (dlfb->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(*unode), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+ 		unode->dlfb = dlfb;
+ 
+-		INIT_DELAYED_WORK(&unode->release_urb_work,
+-			  dlfb_release_urb_work);
+-
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!urb) {
+ 			kfree(unode);
+@@ -1864,11 +1872,16 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(dlfb->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(dlfb->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				dlfb_free_urb_list(dlfb);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -1879,14 +1892,12 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &dlfb->urbs.list);
+ 
+-		i++;
++		up(&dlfb->urbs.limit_sem);
++		dlfb->urbs.count++;
++		dlfb->urbs.available++;
+ 	}
+ 
+-	sema_init(&dlfb->urbs.limit_sem, i);
+-	dlfb->urbs.count = i;
+-	dlfb->urbs.available = i;
+-
+-	return i;
++	return dlfb->urbs.count;
+ }
+ 
+ static struct urb *dlfb_get_urb(struct dlfb_data *dlfb)
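`dlfb_alloc_urb_list()` now retries with half the transfer-buffer size whenever the coherent allocation fails, down to a `PAGE_SIZE` floor, keeping the total wanted capacity constant across more, smaller buffers. The size-selection part of that fallback can be sketched as (a simplification: the real driver only halves after an actual allocation failure, then frees and rebuilds the whole list):

```c
/* Model of the allocation fallback: when a buffer of 'size' bytes
 * cannot be had, halve the size (down to a floor) and retry so the
 * same total capacity is covered by more, smaller buffers. */
#define FLOOR 4096UL /* stands in for PAGE_SIZE */

unsigned long pick_size(unsigned long size, unsigned long largest_available)
{
	while (size > largest_available && size > FLOOR)
		size /= 2;
	return size;
}
```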
+diff --git a/fs/9p/xattr.c b/fs/9p/xattr.c
+index f329eee6dc93..352abc39e891 100644
+--- a/fs/9p/xattr.c
++++ b/fs/9p/xattr.c
+@@ -105,7 +105,7 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ {
+ 	struct kvec kvec = {.iov_base = (void *)value, .iov_len = value_len};
+ 	struct iov_iter from;
+-	int retval;
++	int retval, err;
+ 
+ 	iov_iter_kvec(&from, WRITE | ITER_KVEC, &kvec, 1, value_len);
+ 
+@@ -126,7 +126,9 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ 			 retval);
+ 	else
+ 		p9_client_write(fid, 0, &from, &retval);
+-	p9_client_clunk(fid);
++	err = p9_client_clunk(fid);
++	if (!retval && err)
++		retval = err;
+ 	return retval;
+ }
+ 
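The 9p change stops discarding the `p9_client_clunk()` return value, reporting it only when the write itself succeeded. This is the common "first error wins" idiom, shown here as a standalone helper:

```c
/* Model of the "first error wins" idiom used for the clunk fix: a
 * cleanup failure is reported only if the main operation succeeded. */
int combine_errors(int retval, int cleanup_err)
{
	if (!retval && cleanup_err)
		retval = cleanup_err;
	return retval;
}
```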
+diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
+index 96c1d14c18f1..c2a128678e6e 100644
+--- a/fs/lockd/clntlock.c
++++ b/fs/lockd/clntlock.c
+@@ -187,7 +187,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
+ 			continue;
+ 		if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
+ 			continue;
+-		if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)) ,fh) != 0)
++		if (nfs_compare_fh(NFS_FH(locks_inode(fl_blocked->fl_file)), fh) != 0)
+ 			continue;
+ 		/* Alright, we found a lock. Set the return status
+ 		 * and wake up the caller
+diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
+index a2c0dfc6fdc0..d20b92f271c2 100644
+--- a/fs/lockd/clntproc.c
++++ b/fs/lockd/clntproc.c
+@@ -128,7 +128,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
+ 	char *nodename = req->a_host->h_rpcclnt->cl_nodename;
+ 
+ 	nlmclnt_next_cookie(&argp->cookie);
+-	memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
++	memcpy(&lock->fh, NFS_FH(locks_inode(fl->fl_file)), sizeof(struct nfs_fh));
+ 	lock->caller  = nodename;
+ 	lock->oh.data = req->a_owner;
+ 	lock->oh.len  = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 3701bccab478..74330daeab71 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -405,8 +405,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_lock(%s/%ld, ty=%d, pi=%d, %Ld-%Ld, bl=%d)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type, lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end,
+@@ -511,8 +511,8 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_testlock(%s/%ld, ty=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -566,8 +566,8 @@ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
+ 	int	error;
+ 
+ 	dprintk("lockd: nlmsvc_unlock(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -595,8 +595,8 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
+ 	int status = 0;
+ 
+ 	dprintk("lockd: nlmsvc_cancel(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 4ec3d6e03e76..899360ba3b84 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -44,7 +44,7 @@ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ 
+ static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
+ {
+-	struct inode *inode = file_inode(file->f_file);
++	struct inode *inode = locks_inode(file->f_file);
+ 
+ 	dprintk("lockd: %s %s/%ld\n",
+ 		msg, inode->i_sb->s_id, inode->i_ino);
+@@ -414,7 +414,7 @@ nlmsvc_match_sb(void *datap, struct nlm_file *file)
+ {
+ 	struct super_block *sb = datap;
+ 
+-	return sb == file_inode(file->f_file)->i_sb;
++	return sb == locks_inode(file->f_file)->i_sb;
+ }
+ 
+ /**
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index a7efd83779d2..dec5880ac6de 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -204,7 +204,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	chunk = div_u64(offset, dev->chunk_size);
+ 	div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+ 
+-	if (chunk_idx > dev->nr_children) {
++	if (chunk_idx >= dev->nr_children) {
+ 		dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+ 			__func__, chunk_idx, offset, dev->chunk_size);
+ 		/* error, should not happen */
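The blocklayout hunk fixes a textbook off-by-one: valid indexes into `nr_children` entries are `0..nr_children-1`, so the guard must reject equality too. In isolation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the fixed stripe-index guard: chunk % nr_children yields
 * 0..nr_children-1, so chunk_idx == nr_children is already invalid
 * (the buggy check only rejected chunk_idx > nr_children). */
bool chunk_idx_valid(uint32_t chunk_idx, uint32_t nr_children)
{
	return chunk_idx < nr_children;
}
```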
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 64c214fb9da6..5d57e818d0c3 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -441,11 +441,14 @@ validate_seqid(const struct nfs4_slot_table *tbl, const struct nfs4_slot *slot,
+  * a match.  If the slot is in use and the sequence numbers match, the
+  * client is still waiting for a response to the original request.
+  */
+-static bool referring_call_exists(struct nfs_client *clp,
++static int referring_call_exists(struct nfs_client *clp,
+ 				  uint32_t nrclists,
+-				  struct referring_call_list *rclists)
++				  struct referring_call_list *rclists,
++				  spinlock_t *lock)
++	__releases(lock)
++	__acquires(lock)
+ {
+-	bool status = false;
++	int status = 0;
+ 	int i, j;
+ 	struct nfs4_session *session;
+ 	struct nfs4_slot_table *tbl;
+@@ -468,8 +471,10 @@ static bool referring_call_exists(struct nfs_client *clp,
+ 
+ 		for (j = 0; j < rclist->rcl_nrefcalls; j++) {
+ 			ref = &rclist->rcl_refcalls[j];
++			spin_unlock(lock);
+ 			status = nfs4_slot_wait_on_seqid(tbl, ref->rc_slotid,
+ 					ref->rc_sequenceid, HZ >> 1) < 0;
++			spin_lock(lock);
+ 			if (status)
+ 				goto out;
+ 		}
+@@ -546,7 +551,8 @@ __be32 nfs4_callback_sequence(void *argp, void *resp,
+ 	 * related callback was received before the response to the original
+ 	 * call.
+ 	 */
+-	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
++	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists,
++				&tbl->slot_tbl_lock) < 0) {
+ 		status = htonl(NFS4ERR_DELAY);
+ 		goto out_unlock;
+ 	}
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index f6c4ccd693f4..464db0c0f5c8 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -581,8 +581,15 @@ nfs4_async_handle_exception(struct rpc_task *task, struct nfs_server *server,
+ 		ret = -EIO;
+ 	return ret;
+ out_retry:
+-	if (ret == 0)
++	if (ret == 0) {
+ 		exception->retry = 1;
++		/*
++		 * For NFS4ERR_MOVED, the client transport will need to
++		 * be recomputed after migration recovery has completed.
++		 */
++		if (errorcode == -NFS4ERR_MOVED)
++			rpc_task_release_transport(task);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 32ba2d471853..d5e4d3cd8c7f 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -61,7 +61,7 @@ EXPORT_SYMBOL_GPL(pnfs_generic_commit_release);
+ 
+ /* The generic layer is about to remove the req from the commit list.
+  * If this will make the bucket empty, it will need to put the lseg reference.
+- * Note this must be called holding i_lock
++ * Note this must be called holding nfsi->commit_mutex
+  */
+ void
+ pnfs_generic_clear_request_commit(struct nfs_page *req,
+@@ -149,9 +149,7 @@ restart:
+ 		if (list_empty(&b->written)) {
+ 			freeme = b->wlseg;
+ 			b->wlseg = NULL;
+-			spin_unlock(&cinfo->inode->i_lock);
+ 			pnfs_put_lseg(freeme);
+-			spin_lock(&cinfo->inode->i_lock);
+ 			goto restart;
+ 		}
+ 	}
+@@ -167,7 +165,7 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 	LIST_HEAD(pages);
+ 	int i;
+ 
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	for (i = idx; i < fl_cinfo->nbuckets; i++) {
+ 		bucket = &fl_cinfo->buckets[i];
+ 		if (list_empty(&bucket->committing))
+@@ -177,12 +175,12 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 		list_for_each(pos, &bucket->committing)
+ 			cinfo->ds->ncommitting--;
+ 		list_splice_init(&bucket->committing, &pages);
+-		spin_unlock(&cinfo->inode->i_lock);
++		mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 		nfs_retry_commit(&pages, freeme, cinfo, i);
+ 		pnfs_put_lseg(freeme);
+-		spin_lock(&cinfo->inode->i_lock);
++		mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	}
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ }
+ 
+ static unsigned int
+@@ -222,13 +220,13 @@ void pnfs_fetch_commit_bucket_list(struct list_head *pages,
+ 	struct list_head *pos;
+ 
+ 	bucket = &cinfo->ds->buckets[data->ds_commit_index];
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	list_for_each(pos, &bucket->committing)
+ 		cinfo->ds->ncommitting--;
+ 	list_splice_init(&bucket->committing, pages);
+ 	data->lseg = bucket->clseg;
+ 	bucket->clseg = NULL;
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 857141446d6b..4a17fad93411 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6293,7 +6293,7 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ 		return status;
+ 	}
+ 
+-	inode = file_inode(filp);
++	inode = locks_inode(filp);
+ 	flctx = inode->i_flctx;
+ 
+ 	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index ef1fe42ff7bb..cc8303a806b4 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -668,6 +668,21 @@ static int ovl_fill_real(struct dir_context *ctx, const char *name,
+ 	return orig_ctx->actor(orig_ctx, name, namelen, offset, ino, d_type);
+ }
+ 
++static bool ovl_is_impure_dir(struct file *file)
++{
++	struct ovl_dir_file *od = file->private_data;
++	struct inode *dir = d_inode(file->f_path.dentry);
++
++	/*
++	 * Only upper dir can be impure, but if we are in the middle of
++	 * iterating a lower real dir, dir could be copied up and marked
++	 * impure. We only want the impure cache if we started iterating
++	 * a real upper dir to begin with.
++	 */
++	return od->is_upper && ovl_test_flag(OVL_IMPURE, dir);
++
++}
++
+ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ {
+ 	int err;
+@@ -696,7 +711,7 @@ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ 		rdt.parent_ino = stat.ino;
+ 	}
+ 
+-	if (ovl_test_flag(OVL_IMPURE, d_inode(dir))) {
++	if (ovl_is_impure_dir(file)) {
+ 		rdt.cache = ovl_cache_get_impure(&file->f_path);
+ 		if (IS_ERR(rdt.cache))
+ 			return PTR_ERR(rdt.cache);
+@@ -727,7 +742,7 @@ static int ovl_iterate(struct file *file, struct dir_context *ctx)
+ 		 */
+ 		if (ovl_xino_bits(dentry->d_sb) ||
+ 		    (ovl_same_sb(dentry->d_sb) &&
+-		     (ovl_test_flag(OVL_IMPURE, d_inode(dentry)) ||
++		     (ovl_is_impure_dir(file) ||
+ 		      OVL_TYPE_MERGE(ovl_path_type(dentry->d_parent))))) {
+ 			return ovl_iterate_real(file, ctx);
+ 		}
+diff --git a/fs/quota/quota.c b/fs/quota/quota.c
+index 860bfbe7a07a..dac1735312df 100644
+--- a/fs/quota/quota.c
++++ b/fs/quota/quota.c
+@@ -18,6 +18,7 @@
+ #include <linux/quotaops.h>
+ #include <linux/types.h>
+ #include <linux/writeback.h>
++#include <linux/nospec.h>
+ 
+ static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
+ 				     qid_t id)
+@@ -703,6 +704,7 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
+ 
+ 	if (type >= (XQM_COMMAND(cmd) ? XQM_MAXQUOTAS : MAXQUOTAS))
+ 		return -EINVAL;
++	type = array_index_nospec(type, MAXQUOTAS);
+ 	/*
+ 	 * Quota not supported on this fs? Check this before s_quota_types
+ 	 * since they needn't be set if quota is not supported at all.
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 9da224d4f2da..e8616040bffc 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1123,8 +1123,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	struct ubifs_inode *ui;
+ 	struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	int err, len = strlen(symname);
+-	int sz_change = CALC_DENT_SIZE(len);
++	int err, sz_change, len = strlen(symname);
+ 	struct fscrypt_str disk_link;
+ 	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
+ 					.new_ino_d = ALIGN(len, 8),
+@@ -1151,6 +1150,8 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	if (err)
+ 		goto out_budg;
+ 
++	sz_change = CALC_DENT_SIZE(fname_len(&nm));
++
+ 	inode = ubifs_new_inode(c, dir, S_IFLNK | S_IRWXUGO);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 07b4956e0425..48060dc48683 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -664,6 +664,11 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ 	spin_lock(&ui->ui_lock);
+ 	ui->synced_i_size = ui->ui_size;
+ 	spin_unlock(&ui->ui_lock);
++	if (xent) {
++		spin_lock(&host_ui->ui_lock);
++		host_ui->synced_i_size = host_ui->ui_size;
++		spin_unlock(&host_ui->ui_lock);
++	}
+ 	mark_inode_clean(c, ui);
+ 	mark_inode_clean(c, host_ui);
+ 	return 0;
+@@ -1282,11 +1287,10 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
+ 			      int *new_len)
+ {
+ 	void *buf;
+-	int err, compr_type;
+-	u32 dlen, out_len, old_dlen;
++	int err, dlen, compr_type, out_len, old_dlen;
+ 
+ 	out_len = le32_to_cpu(dn->size);
+-	buf = kmalloc_array(out_len, WORST_COMPR_FACTOR, GFP_NOFS);
++	buf = kmalloc(out_len * WORST_COMPR_FACTOR, GFP_NOFS);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+@@ -1388,7 +1392,16 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
+ 		else if (err)
+ 			goto out_free;
+ 		else {
+-			if (le32_to_cpu(dn->size) <= dlen)
++			int dn_len = le32_to_cpu(dn->size);
++
++			if (dn_len <= 0 || dn_len > UBIFS_BLOCK_SIZE) {
++				ubifs_err(c, "bad data node (block %u, inode %lu)",
++					  blk, inode->i_ino);
++				ubifs_dump_node(c, dn);
++				goto out_free;
++			}
++
++			if (dn_len <= dlen)
+ 				dlen = 0; /* Nothing to do */
+ 			else {
+ 				err = truncate_data_node(c, inode, blk, dn, &dlen);
+diff --git a/fs/ubifs/lprops.c b/fs/ubifs/lprops.c
+index f5a46844340c..8ade493a423a 100644
+--- a/fs/ubifs/lprops.c
++++ b/fs/ubifs/lprops.c
+@@ -1089,10 +1089,6 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		}
+ 	}
+ 
+-	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * After an unclean unmount, empty and freeable LEBs
+ 	 * may contain garbage - do not scan them.
+@@ -1111,6 +1107,10 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		return LPT_SCAN_CONTINUE;
+ 	}
+ 
++	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	sleb = ubifs_scan(c, lnum, 0, buf, 0);
+ 	if (IS_ERR(sleb)) {
+ 		ret = PTR_ERR(sleb);
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 6f720fdf5020..09e37e63bddd 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,6 +152,12 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -184,6 +190,7 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -235,6 +242,12 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -256,6 +269,7 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -482,6 +496,12 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -501,6 +521,7 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -540,6 +561,9 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
++	if (!host->i_nlink)
++		return -ENOENT;
++
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 0c504c8031d3..74b13347cd94 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1570,10 +1570,16 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+  */
+ #define PART_DESC_ALLOC_STEP 32
+ 
++struct part_desc_seq_scan_data {
++	struct udf_vds_record rec;
++	u32 partnum;
++};
++
+ struct desc_seq_scan_data {
+ 	struct udf_vds_record vds[VDS_POS_LENGTH];
+ 	unsigned int size_part_descs;
+-	struct udf_vds_record *part_descs_loc;
++	unsigned int num_part_descs;
++	struct part_desc_seq_scan_data *part_descs_loc;
+ };
+ 
+ static struct udf_vds_record *handle_partition_descriptor(
+@@ -1582,10 +1588,14 @@ static struct udf_vds_record *handle_partition_descriptor(
+ {
+ 	struct partitionDesc *desc = (struct partitionDesc *)bh->b_data;
+ 	int partnum;
++	int i;
+ 
+ 	partnum = le16_to_cpu(desc->partitionNumber);
+-	if (partnum >= data->size_part_descs) {
+-		struct udf_vds_record *new_loc;
++	for (i = 0; i < data->num_part_descs; i++)
++		if (partnum == data->part_descs_loc[i].partnum)
++			return &(data->part_descs_loc[i].rec);
++	if (data->num_part_descs >= data->size_part_descs) {
++		struct part_desc_seq_scan_data *new_loc;
+ 		unsigned int new_size = ALIGN(partnum, PART_DESC_ALLOC_STEP);
+ 
+ 		new_loc = kcalloc(new_size, sizeof(*new_loc), GFP_KERNEL);
+@@ -1597,7 +1607,7 @@ static struct udf_vds_record *handle_partition_descriptor(
+ 		data->part_descs_loc = new_loc;
+ 		data->size_part_descs = new_size;
+ 	}
+-	return &(data->part_descs_loc[partnum]);
++	return &(data->part_descs_loc[data->num_part_descs++].rec);
+ }
+ 
+ 
+@@ -1647,6 +1657,7 @@ static noinline int udf_process_sequence(
+ 
+ 	memset(data.vds, 0, sizeof(struct udf_vds_record) * VDS_POS_LENGTH);
+ 	data.size_part_descs = PART_DESC_ALLOC_STEP;
++	data.num_part_descs = 0;
+ 	data.part_descs_loc = kcalloc(data.size_part_descs,
+ 				      sizeof(*data.part_descs_loc),
+ 				      GFP_KERNEL);
+@@ -1658,7 +1669,6 @@ static noinline int udf_process_sequence(
+ 	 * are in it.
+ 	 */
+ 	for (; (!done && block <= lastblock); block++) {
+-
+ 		bh = udf_read_tagged(sb, block, block, &ident);
+ 		if (!bh)
+ 			break;
+@@ -1730,13 +1740,10 @@ static noinline int udf_process_sequence(
+ 	}
+ 
+ 	/* Now handle prevailing Partition Descriptors */
+-	for (i = 0; i < data.size_part_descs; i++) {
+-		if (data.part_descs_loc[i].block) {
+-			ret = udf_load_partdesc(sb,
+-						data.part_descs_loc[i].block);
+-			if (ret < 0)
+-				return ret;
+-		}
++	for (i = 0; i < data.num_part_descs; i++) {
++		ret = udf_load_partdesc(sb, data.part_descs_loc[i].rec.block);
++		if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/xattr.c b/fs/xattr.c
+index f9cb1db187b7..1bee74682513 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -539,7 +539,7 @@ getxattr(struct dentry *d, const char __user *name, void __user *value,
+ 	if (error > 0) {
+ 		if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
+ 		    (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
+-			posix_acl_fix_xattr_to_user(kvalue, size);
++			posix_acl_fix_xattr_to_user(kvalue, error);
+ 		if (size && copy_to_user(value, kvalue, error))
+ 			error = -EFAULT;
+ 	} else if (error == -ERANGE && size >= XATTR_SIZE_MAX) {
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 6c666fd7de3c..0fce47d5acb1 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -295,6 +295,23 @@ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg,
+ 	return __blkg_lookup(blkcg, q, false);
+ }
+ 
++/**
++ * blkg_lookup - look up blkg for the specified request queue
++ * @q: request_queue of interest
++ *
++ * Lookup blkg for @q at the root level. See also blkg_lookup().
++ */
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q)
++{
++	struct blkcg_gq *blkg;
++
++	rcu_read_lock();
++	blkg = blkg_lookup(&blkcg_root, q);
++	rcu_read_unlock();
++
++	return blkg;
++}
++
+ /**
+  * blkg_to_pdata - get policy private data
+  * @blkg: blkg of interest
+@@ -737,6 +754,7 @@ struct blkcg_policy {
+ #ifdef CONFIG_BLOCK
+ 
+ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg, void *key) { return NULL; }
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q) { return NULL; }
+ static inline int blkcg_init_queue(struct request_queue *q) { return 0; }
+ static inline void blkcg_drain_queue(struct request_queue *q) { }
+ static inline void blkcg_exit_queue(struct request_queue *q) { }
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 3a3012f57be4..5389012f1d25 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1046,6 +1046,8 @@ extern int vmbus_establish_gpadl(struct vmbus_channel *channel,
+ extern int vmbus_teardown_gpadl(struct vmbus_channel *channel,
+ 				     u32 gpadl_handle);
+ 
++void vmbus_reset_channel_cb(struct vmbus_channel *channel);
++
+ extern int vmbus_recvpacket(struct vmbus_channel *channel,
+ 				  void *buffer,
+ 				  u32 bufferlen,
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index ef169d67df92..7fd9fbaea5aa 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -114,6 +114,7 @@
+  * Extended Capability Register
+  */
+ 
++#define ecap_dit(e)		((e >> 41) & 0x1)
+ #define ecap_pasid(e)		((e >> 40) & 0x1)
+ #define ecap_pss(e)		((e >> 35) & 0x1f)
+ #define ecap_eafs(e)		((e >> 34) & 0x1)
+@@ -284,6 +285,7 @@ enum {
+ #define QI_DEV_IOTLB_SID(sid)	((u64)((sid) & 0xffff) << 32)
+ #define QI_DEV_IOTLB_QDEP(qdep)	(((qdep) & 0x1f) << 16)
+ #define QI_DEV_IOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
++#define QI_DEV_IOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_IOTLB_SIZE	1
+ #define QI_DEV_IOTLB_MAX_INVS	32
+ 
+@@ -308,6 +310,7 @@ enum {
+ #define QI_DEV_EIOTLB_PASID(p)	(((u64)p) << 32)
+ #define QI_DEV_EIOTLB_SID(sid)	((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd)	((u64)((qd) & 0x1f) << 4)
++#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_EIOTLB_MAX_INVS	32
+ 
+ #define QI_PGRP_IDX(idx)	(((u64)(idx)) << 55)
+@@ -453,9 +456,8 @@ extern void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid,
+ 			     u8 fm, u64 type);
+ extern void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 			  unsigned int size_order, u64 type);
+-extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			       u64 addr, unsigned mask);
+-
++extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask);
+ extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
+ 
+ extern int dmar_ir_support(void);
+diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
+index 4fd95dbeb52f..b065ef406770 100644
+--- a/include/linux/lockd/lockd.h
++++ b/include/linux/lockd/lockd.h
+@@ -299,7 +299,7 @@ int           nlmsvc_unlock_all_by_ip(struct sockaddr *server_addr);
+ 
+ static inline struct inode *nlmsvc_file_inode(struct nlm_file *file)
+ {
+-	return file_inode(file->f_file);
++	return locks_inode(file->f_file);
+ }
+ 
+ static inline int __nlm_privileged_request4(const struct sockaddr *sap)
+@@ -359,7 +359,7 @@ static inline int nlm_privileged_requester(const struct svc_rqst *rqstp)
+ static inline int nlm_compare_locks(const struct file_lock *fl1,
+ 				    const struct file_lock *fl2)
+ {
+-	return file_inode(fl1->fl_file) == file_inode(fl2->fl_file)
++	return locks_inode(fl1->fl_file) == locks_inode(fl2->fl_file)
+ 	     && fl1->fl_pid   == fl2->fl_pid
+ 	     && fl1->fl_owner == fl2->fl_owner
+ 	     && fl1->fl_start == fl2->fl_start
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 99ce070e7dcb..22651e124071 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -139,7 +139,10 @@ struct page {
+ 			unsigned long _pt_pad_1;	/* compound_head */
+ 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+ 			unsigned long _pt_pad_2;	/* mapping */
+-			struct mm_struct *pt_mm;	/* x86 pgds only */
++			union {
++				struct mm_struct *pt_mm; /* x86 pgds only */
++				atomic_t pt_frag_refcount; /* powerpc */
++			};
+ #if ALLOC_SPLIT_PTLOCKS
+ 			spinlock_t *ptl;
+ #else
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 8712ff70995f..40b48e2133cb 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -202,6 +202,37 @@
+ 
+ #endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
+ 
++/** check_shl_overflow() - Calculate a left-shifted value and check overflow
++ *
++ * @a: Value to be shifted
++ * @s: How many bits left to shift
++ * @d: Pointer to where to store the result
++ *
++ * Computes *@d = (@a << @s)
++ *
++ * Returns true if '*d' cannot hold the result or when 'a << s' doesn't
++ * make sense. Example conditions:
++ * - 'a << s' causes bits to be lost when stored in *d.
++ * - 's' is garbage (e.g. negative) or so large that the result of
++ *   'a << s' is guaranteed to be 0.
++ * - 'a' is negative.
++ * - 'a << s' sets the sign bit, if any, in '*d'.
++ *
++ * '*d' will hold the results of the attempted shift, but is not
++ * considered "safe for use" if false is returned.
++ */
++#define check_shl_overflow(a, s, d) ({					\
++	typeof(a) _a = a;						\
++	typeof(s) _s = s;						\
++	typeof(d) _d = d;						\
++	u64 _a_full = _a;						\
++	unsigned int _to_shift =					\
++		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
++	*_d = (_a_full << _to_shift);					\
++	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
++		(*_d >> _to_shift) != _a);				\
++})
++
+ /**
+  * array_size() - Calculate size of 2-dimensional array.
+  *
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 9b11b6a0978c..73d5c4a870fa 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -156,6 +156,7 @@ int		rpc_switch_client_transport(struct rpc_clnt *,
+ 
+ void		rpc_shutdown_client(struct rpc_clnt *);
+ void		rpc_release_client(struct rpc_clnt *);
++void		rpc_task_release_transport(struct rpc_task *);
+ void		rpc_task_release_client(struct rpc_task *);
+ 
+ int		rpcb_create_local(struct net *);
+diff --git a/include/linux/verification.h b/include/linux/verification.h
+index a10549a6c7cd..cfa4730d607a 100644
+--- a/include/linux/verification.h
++++ b/include/linux/verification.h
+@@ -12,6 +12,12 @@
+ #ifndef _LINUX_VERIFICATION_H
+ #define _LINUX_VERIFICATION_H
+ 
++/*
++ * Indicate that both builtin trusted keys and secondary trusted keys
++ * should be used.
++ */
++#define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL)
++
+ /*
+  * The use to which an asymmetric key is being put.
+  */
+diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
+index bf48e71f2634..8a3432d0f0dc 100644
+--- a/include/uapi/linux/eventpoll.h
++++ b/include/uapi/linux/eventpoll.h
+@@ -42,7 +42,7 @@
+ #define EPOLLRDHUP	(__force __poll_t)0x00002000
+ 
+ /* Set exclusive wakeup mode for the target file descriptor */
+-#define EPOLLEXCLUSIVE (__force __poll_t)(1U << 28)
++#define EPOLLEXCLUSIVE	((__force __poll_t)(1U << 28))
+ 
+ /*
+  * Request the handling of system wakeup events so as to prevent system suspends
+@@ -54,13 +54,13 @@
+  *
+  * Requires CAP_BLOCK_SUSPEND
+  */
+-#define EPOLLWAKEUP (__force __poll_t)(1U << 29)
++#define EPOLLWAKEUP	((__force __poll_t)(1U << 29))
+ 
+ /* Set the One Shot behaviour for the target file descriptor */
+-#define EPOLLONESHOT (__force __poll_t)(1U << 30)
++#define EPOLLONESHOT	((__force __poll_t)(1U << 30))
+ 
+ /* Set the Edge Triggered behaviour for the target file descriptor */
+-#define EPOLLET (__force __poll_t)(1U << 31)
++#define EPOLLET		((__force __poll_t)(1U << 31))
+ 
+ /* 
+  * On x86-64 make the 64bit structure have the same alignment as the
+diff --git a/include/video/udlfb.h b/include/video/udlfb.h
+index 0cabe6b09095..6e1a2e790b1b 100644
+--- a/include/video/udlfb.h
++++ b/include/video/udlfb.h
+@@ -20,7 +20,6 @@ struct dloarea {
+ struct urb_node {
+ 	struct list_head entry;
+ 	struct dlfb_data *dlfb;
+-	struct delayed_work release_urb_work;
+ 	struct urb *urb;
+ };
+ 
+@@ -52,11 +51,13 @@ struct dlfb_data {
+ 	int base8;
+ 	u32 pseudo_palette[256];
+ 	int blank_mode; /*one of FB_BLANK_ */
++	struct fb_ops ops;
+ 	/* blit-only rendering path metrics, exposed through sysfs */
+ 	atomic_t bytes_rendered; /* raw pixel-bytes driver asked to render */
+ 	atomic_t bytes_identical; /* saved effort with backbuffer comparison */
+ 	atomic_t bytes_sent; /* to usb, after compression including overhead */
+ 	atomic_t cpu_kcycles_used; /* transpired during pixel processing */
++	struct fb_var_screeninfo current_mode;
+ };
+ 
+ #define NR_USB_REQUEST_I2C_SUB_IO 0x02
+@@ -87,7 +88,7 @@ struct dlfb_data {
+ #define MIN_RAW_PIX_BYTES	2
+ #define MIN_RAW_CMD_BYTES	(RAW_HEADER_BYTES + MIN_RAW_PIX_BYTES)
+ 
+-#define DL_DEFIO_WRITE_DELAY    5 /* fb_deferred_io.delay in jiffies */
++#define DL_DEFIO_WRITE_DELAY    msecs_to_jiffies(HZ <= 300 ? 4 : 10) /* optimal value for 720p video */
+ #define DL_DEFIO_WRITE_DISABLE  (HZ*60) /* "disable" with long delay */
+ 
+ /* remove these once align.h patch is taken into kernel */
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 3a4656fb7047..5b77a7314e01 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -678,6 +678,9 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
+ 	if (!func->old_name || !func->new_func)
+ 		return -EINVAL;
+ 
++	if (strlen(func->old_name) >= KSYM_NAME_LEN)
++		return -EINVAL;
++
+ 	INIT_LIST_HEAD(&func->stack_node);
+ 	func->patched = false;
+ 	func->transition = false;
+@@ -751,6 +754,9 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
+ 	if (!obj->funcs)
+ 		return -EINVAL;
+ 
++	if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
++		return -EINVAL;
++
+ 	obj->patched = false;
+ 	obj->mod = NULL;
+ 
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 38283363da06..cfb750105e1e 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -355,7 +355,6 @@ void __put_devmap_managed_page(struct page *page)
+ 		__ClearPageActive(page);
+ 		__ClearPageWaiters(page);
+ 
+-		page->mapping = NULL;
+ 		mem_cgroup_uncharge(page);
+ 
+ 		page->pgmap->page_free(page, page->pgmap->data);
+diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
+index e880ca22c5a5..3a6c2f87699e 100644
+--- a/kernel/power/Kconfig
++++ b/kernel/power/Kconfig
+@@ -105,6 +105,7 @@ config PM_SLEEP
+ 	def_bool y
+ 	depends on SUSPEND || HIBERNATE_CALLBACKS
+ 	select PM
++	select SRCU
+ 
+ config PM_SLEEP_SMP
+ 	def_bool y
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index a0a74c533e4b..0913b4d385de 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -306,12 +306,12 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 	return printk_safe_log_store(s, fmt, args);
+ }
+ 
+-void printk_nmi_enter(void)
++void notrace printk_nmi_enter(void)
+ {
+ 	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+-void printk_nmi_exit(void)
++void notrace printk_nmi_exit(void)
+ {
+ 	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
+ }
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index d40708e8c5d6..01b6ddeb4f05 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -472,6 +472,7 @@ retry_ipi:
+ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 				     smp_call_func_t func)
+ {
++	int cpu;
+ 	struct rcu_node *rnp;
+ 
+ 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
+@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 			continue;
+ 		}
+ 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
+-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_disable();
++		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
++		/* If all offline, queue the work on an unbound CPU. */
++		if (unlikely(cpu > rnp->grphi))
++			cpu = WORK_CPU_UNBOUND;
++		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_enable();
+ 		rnp->exp_need_flush = true;
+ 	}
+ 
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 1a3e9bddd17b..16f84142f2f4 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -190,7 +190,7 @@ static void cpuidle_idle_call(void)
+ 		 */
+ 		next_state = cpuidle_select(drv, dev, &stop_tick);
+ 
+-		if (stop_tick)
++		if (stop_tick || tick_nohz_tick_stopped())
+ 			tick_nohz_idle_stop_tick();
+ 		else
+ 			tick_nohz_idle_retain_tick();
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 38509dc1f77b..69b9a37ecf0d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1237,18 +1237,19 @@ static int override_release(char __user *release, size_t len)
+ 
+ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+ {
+-	int errno = 0;
++	struct new_utsname tmp;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof *name))
+-		errno = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!errno && override_release(name->release, sizeof(name->release)))
+-		errno = -EFAULT;
+-	if (!errno && override_architecture(name))
+-		errno = -EFAULT;
+-	return errno;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #ifdef __ARCH_WANT_SYS_OLD_UNAME
+@@ -1257,55 +1258,46 @@ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+  */
+ SYSCALL_DEFINE1(uname, struct old_utsname __user *, name)
+ {
+-	int error = 0;
++	struct old_utsname tmp;
+ 
+ 	if (!name)
+ 		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof(*name)))
+-		error = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	return error;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name)
+ {
+-	int error;
++	struct oldold_utsname tmp = {};
+ 
+ 	if (!name)
+ 		return -EFAULT;
+-	if (!access_ok(VERIFY_WRITE, name, sizeof(struct oldold_utsname)))
+-		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	error = __copy_to_user(&name->sysname, &utsname()->sysname,
+-			       __OLD_UTS_LEN);
+-	error |= __put_user(0, name->sysname + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->nodename, &utsname()->nodename,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->nodename + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->release, &utsname()->release,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->release + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->version, &utsname()->version,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->version + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->machine, &utsname()->machine,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->machine + __OLD_UTS_LEN);
++	memcpy(&tmp.sysname, &utsname()->sysname, __OLD_UTS_LEN);
++	memcpy(&tmp.nodename, &utsname()->nodename, __OLD_UTS_LEN);
++	memcpy(&tmp.release, &utsname()->release, __OLD_UTS_LEN);
++	memcpy(&tmp.version, &utsname()->version, __OLD_UTS_LEN);
++	memcpy(&tmp.machine, &utsname()->machine, __OLD_UTS_LEN);
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	return error ? -EFAULT : 0;
++	if (override_architecture(name))
++		return -EFAULT;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	return 0;
+ }
+ #endif
+ 
+@@ -1319,17 +1311,18 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->nodename, tmp, len);
+ 		memset(u->nodename + len, 0, sizeof(u->nodename) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_HOSTNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+@@ -1337,8 +1330,9 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ {
+-	int i, errno;
++	int i;
+ 	struct new_utsname *u;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+@@ -1347,11 +1341,11 @@ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ 	i = 1 + strlen(u->nodename);
+ 	if (i > len)
+ 		i = len;
+-	errno = 0;
+-	if (copy_to_user(name, u->nodename, i))
+-		errno = -EFAULT;
++	memcpy(tmp, u->nodename, i);
+ 	up_read(&uts_sem);
+-	return errno;
++	if (copy_to_user(name, tmp, i))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #endif
+@@ -1370,17 +1364,18 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+ 
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->domainname, tmp, len);
+ 		memset(u->domainname + len, 0, sizeof(u->domainname) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_DOMAINNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 987d9a9ae283..8defc6fd8c0f 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1841,6 +1841,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+ 	mutex_lock(&q->blk_trace_mutex);
+ 
+ 	if (attr == &dev_attr_enable) {
++		if (!!value == !!q->blk_trace) {
++			ret = 0;
++			goto out_unlock_bdev;
++		}
+ 		if (value)
+ 			ret = blk_trace_setup_queue(q, bdev);
+ 		else
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 176debd3481b..ddae35127571 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7628,7 +7628,9 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
+ 
+ 	if (buffer) {
+ 		mutex_lock(&trace_types_lock);
+-		if (val) {
++		if (!!val == tracer_tracing_is_on(tr)) {
++			val = 0; /* do nothing */
++		} else if (val) {
+ 			tracer_tracing_on(tr);
+ 			if (tr->current_trace->start)
+ 				tr->current_trace->start(tr);
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index bf89a51e740d..ac02fafc9f1b 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -952,7 +952,7 @@ probe_event_disable(struct trace_uprobe *tu, struct trace_event_file *file)
+ 
+ 		list_del_rcu(&link->list);
+ 		/* synchronize with u{,ret}probe_trace_func */
+-		synchronize_sched();
++		synchronize_rcu();
+ 		kfree(link);
+ 
+ 		if (!list_empty(&tu->tp.files))
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index c3d7583fcd21..e5222b5fb4fe 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -859,7 +859,16 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	unsigned idx;
+ 	struct uid_gid_extent extent;
+ 	char *kbuf = NULL, *pos, *next_line;
+-	ssize_t ret = -EINVAL;
++	ssize_t ret;
++
++	/* Only allow < page size writes at the beginning of the file */
++	if ((*ppos != 0) || (count >= PAGE_SIZE))
++		return -EINVAL;
++
++	/* Slurp in the user data */
++	kbuf = memdup_user_nul(buf, count);
++	if (IS_ERR(kbuf))
++		return PTR_ERR(kbuf);
+ 
+ 	/*
+ 	 * The userns_state_mutex serializes all writes to any given map.
+@@ -895,19 +904,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
+ 		goto out;
+ 
+-	/* Only allow < page size writes at the beginning of the file */
+-	ret = -EINVAL;
+-	if ((*ppos != 0) || (count >= PAGE_SIZE))
+-		goto out;
+-
+-	/* Slurp in the user data */
+-	kbuf = memdup_user_nul(buf, count);
+-	if (IS_ERR(kbuf)) {
+-		ret = PTR_ERR(kbuf);
+-		kbuf = NULL;
+-		goto out;
+-	}
+-
+ 	/* Parse the user data */
+ 	ret = -EINVAL;
+ 	pos = kbuf;
+diff --git a/kernel/utsname_sysctl.c b/kernel/utsname_sysctl.c
+index 233cd8fc6910..258033d62cb3 100644
+--- a/kernel/utsname_sysctl.c
++++ b/kernel/utsname_sysctl.c
+@@ -18,7 +18,7 @@
+ 
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+-static void *get_uts(struct ctl_table *table, int write)
++static void *get_uts(struct ctl_table *table)
+ {
+ 	char *which = table->data;
+ 	struct uts_namespace *uts_ns;
+@@ -26,21 +26,9 @@ static void *get_uts(struct ctl_table *table, int write)
+ 	uts_ns = current->nsproxy->uts_ns;
+ 	which = (which - (char *)&init_uts_ns) + (char *)uts_ns;
+ 
+-	if (!write)
+-		down_read(&uts_sem);
+-	else
+-		down_write(&uts_sem);
+ 	return which;
+ }
+ 
+-static void put_uts(struct ctl_table *table, int write, void *which)
+-{
+-	if (!write)
+-		up_read(&uts_sem);
+-	else
+-		up_write(&uts_sem);
+-}
+-
+ /*
+  *	Special case of dostring for the UTS structure. This has locks
+  *	to observe. Should this be in kernel/sys.c ????
+@@ -50,13 +38,34 @@ static int proc_do_uts_string(struct ctl_table *table, int write,
+ {
+ 	struct ctl_table uts_table;
+ 	int r;
++	char tmp_data[__NEW_UTS_LEN + 1];
++
+ 	memcpy(&uts_table, table, sizeof(uts_table));
+-	uts_table.data = get_uts(table, write);
++	uts_table.data = tmp_data;
++
++	/*
++	 * Buffer the value in tmp_data so that proc_dostring() can be called
++	 * without holding any locks.
++	 * We also need to read the original value in the write==1 case to
++	 * support partial writes.
++	 */
++	down_read(&uts_sem);
++	memcpy(tmp_data, get_uts(table), sizeof(tmp_data));
++	up_read(&uts_sem);
+ 	r = proc_dostring(&uts_table, write, buffer, lenp, ppos);
+-	put_uts(table, write, uts_table.data);
+ 
+-	if (write)
++	if (write) {
++		/*
++		 * Write back the new value.
++		 * Note that, since we dropped uts_sem, the result can
++		 * theoretically be incorrect if there are two parallel writes
++		 * at non-zero offsets to the same sysctl.
++		 */
++		down_write(&uts_sem);
++		memcpy(get_uts(table), tmp_data, sizeof(tmp_data));
++		up_write(&uts_sem);
+ 		proc_sys_poll_notify(table->poll);
++	}
+ 
+ 	return r;
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index de7b6bf77201..f9d1d89dec4d 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -963,6 +963,8 @@ static void hmm_devmem_free(struct page *page, void *data)
+ {
+ 	struct hmm_devmem *devmem = data;
+ 
++	page->mapping = NULL;
++
+ 	devmem->ops->free(devmem, page);
+ }
+ 
+diff --git a/mm/memory.c b/mm/memory.c
+index 86d4329acb05..f94feec6518d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -391,15 +391,6 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ {
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+-	/*
+-	 * When there's less then two users of this mm there cannot be a
+-	 * concurrent page-table walk.
+-	 */
+-	if (atomic_read(&tlb->mm->mm_users) < 2) {
+-		__tlb_remove_table(table);
+-		return;
+-	}
+-
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
+diff --git a/mm/readahead.c b/mm/readahead.c
+index e273f0de3376..792dea696d54 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -385,6 +385,7 @@ ondemand_readahead(struct address_space *mapping,
+ {
+ 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+ 	unsigned long max_pages = ra->ra_pages;
++	unsigned long add_pages;
+ 	pgoff_t prev_offset;
+ 
+ 	/*
+@@ -474,10 +475,17 @@ readit:
+ 	 * Will this read hit the readahead marker made by itself?
+ 	 * If so, trigger the readahead marker hit now, and merge
+ 	 * the resulted next readahead window into the current one.
++	 * Take care of maximum IO pages as above.
+ 	 */
+ 	if (offset == ra->start && ra->size == ra->async_size) {
+-		ra->async_size = get_next_ra_size(ra, max_pages);
+-		ra->size += ra->async_size;
++		add_pages = get_next_ra_size(ra, max_pages);
++		if (ra->size + add_pages <= max_pages) {
++			ra->async_size = add_pages;
++			ra->size += add_pages;
++		} else {
++			ra->size = max_pages;
++			ra->async_size = max_pages >> 1;
++		}
+ 	}
+ 
+ 	return ra_submit(ra, mapping, filp);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 5c1343195292..2872f3dbfd86 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -958,7 +958,7 @@ static int p9_client_version(struct p9_client *c)
+ {
+ 	int err = 0;
+ 	struct p9_req_t *req;
+-	char *version;
++	char *version = NULL;
+ 	int msize;
+ 
+ 	p9_debug(P9_DEBUG_9P, ">>> TVERSION msize %d protocol %d\n",
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 588bf88c3305..ef456395645a 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -185,6 +185,8 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ 	spin_lock_irqsave(&p9_poll_lock, flags);
+ 	list_del_init(&m->poll_pending_link);
+ 	spin_unlock_irqrestore(&p9_poll_lock, flags);
++
++	flush_work(&p9_poll_work);
+ }
+ 
+ /**
+@@ -940,7 +942,7 @@ p9_fd_create_tcp(struct p9_client *client, const char *addr, char *args)
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (valid_ipaddr4(addr) < 0)
++	if (addr == NULL || valid_ipaddr4(addr) < 0)
+ 		return -EINVAL;
+ 
+ 	csocket = NULL;
+@@ -990,6 +992,9 @@ p9_fd_create_unix(struct p9_client *client, const char *addr, char *args)
+ 
+ 	csocket = NULL;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	if (strlen(addr) >= UNIX_PATH_MAX) {
+ 		pr_err("%s (%d): address too long: %s\n",
+ 		       __func__, task_pid_nr(current), addr);
+diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
+index 3d414acb7015..afaf0d65f3dd 100644
+--- a/net/9p/trans_rdma.c
++++ b/net/9p/trans_rdma.c
+@@ -644,6 +644,9 @@ rdma_create_trans(struct p9_client *client, const char *addr, char *args)
+ 	struct rdma_conn_param conn_param;
+ 	struct ib_qp_init_attr qp_attr;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	/* Parse the transport specific mount options */
+ 	err = parse_opts(args, &opts);
+ 	if (err < 0)
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 05006cbb3361..4c2da2513c8b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -188,7 +188,7 @@ static int pack_sg_list(struct scatterlist *sg, int start,
+ 		s = rest_of_page(data);
+ 		if (s > count)
+ 			s = count;
+-		BUG_ON(index > limit);
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_buf(&sg[index++], data, s);
+@@ -233,6 +233,7 @@ pack_sg_list_p(struct scatterlist *sg, int start, int limit,
+ 		s = PAGE_SIZE - data_off;
+ 		if (s > count)
+ 			s = count;
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_page(&sg[index++], pdata[i++], s, data_off);
+@@ -406,6 +407,7 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 	p9_debug(P9_DEBUG_TRANS, "virtio request\n");
+ 
+ 	if (uodata) {
++		__le32 sz;
+ 		int n = p9_get_mapped_pages(chan, &out_pages, uodata,
+ 					    outlen, &offs, &need_drop);
+ 		if (n < 0)
+@@ -416,6 +418,12 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 			memcpy(&req->tc->sdata[req->tc->size - 4], &v, 4);
+ 			outlen = n;
+ 		}
++		/* The size field of the message must include the length of the
++		 * header and the length of the data.  We didn't actually know
++		 * the length of the data until this point so add it in now.
++		 */
++		sz = cpu_to_le32(req->tc->size + outlen);
++		memcpy(&req->tc->sdata[0], &sz, sizeof(sz));
+ 	} else if (uidata) {
+ 		int n = p9_get_mapped_pages(chan, &in_pages, uidata,
+ 					    inlen, &offs, &need_drop);
+@@ -643,6 +651,9 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
+ 	int ret = -ENOENT;
+ 	int found = 0;
+ 
++	if (devname == NULL)
++		return -EINVAL;
++
+ 	mutex_lock(&virtio_9p_lock);
+ 	list_for_each_entry(chan, &virtio_chan_list, chan_list) {
+ 		if (!strncmp(devname, chan->tag, chan->tag_len) &&
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 2e2b8bca54f3..c2d54ac76bfd 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -94,6 +94,9 @@ static int p9_xen_create(struct p9_client *client, const char *addr, char *args)
+ {
+ 	struct xen_9pfs_front_priv *priv;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	read_lock(&xen_9pfs_lock);
+ 	list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ 		if (!strcmp(priv->tag, addr)) {
+diff --git a/net/ieee802154/6lowpan/tx.c b/net/ieee802154/6lowpan/tx.c
+index e6ff5128e61a..ca53efa17be1 100644
+--- a/net/ieee802154/6lowpan/tx.c
++++ b/net/ieee802154/6lowpan/tx.c
+@@ -265,9 +265,24 @@ netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *ldev)
+ 	/* We must take a copy of the skb before we modify/replace the ipv6
+ 	 * header as the header could be used elsewhere
+ 	 */
+-	skb = skb_unshare(skb, GFP_ATOMIC);
+-	if (!skb)
+-		return NET_XMIT_DROP;
++	if (unlikely(skb_headroom(skb) < ldev->needed_headroom ||
++		     skb_tailroom(skb) < ldev->needed_tailroom)) {
++		struct sk_buff *nskb;
++
++		nskb = skb_copy_expand(skb, ldev->needed_headroom,
++				       ldev->needed_tailroom, GFP_ATOMIC);
++		if (likely(nskb)) {
++			consume_skb(skb);
++			skb = nskb;
++		} else {
++			kfree_skb(skb);
++			return NET_XMIT_DROP;
++		}
++	} else {
++		skb = skb_unshare(skb, GFP_ATOMIC);
++		if (!skb)
++			return NET_XMIT_DROP;
++	}
+ 
+ 	ret = lowpan_header(skb, ldev, &dgram_size, &dgram_offset);
+ 	if (ret < 0) {
+diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
+index 7e253455f9dd..bcd1a5e6ebf4 100644
+--- a/net/mac802154/tx.c
++++ b/net/mac802154/tx.c
+@@ -63,8 +63,21 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+ 	int ret;
+ 
+ 	if (!(local->hw.flags & IEEE802154_HW_TX_OMIT_CKSUM)) {
+-		u16 crc = crc_ccitt(0, skb->data, skb->len);
++		struct sk_buff *nskb;
++		u16 crc;
++
++		if (unlikely(skb_tailroom(skb) < IEEE802154_FCS_LEN)) {
++			nskb = skb_copy_expand(skb, 0, IEEE802154_FCS_LEN,
++					       GFP_ATOMIC);
++			if (likely(nskb)) {
++				consume_skb(skb);
++				skb = nskb;
++			} else {
++				goto err_tx;
++			}
++		}
+ 
++		crc = crc_ccitt(0, skb->data, skb->len);
+ 		put_unaligned_le16(crc, skb_put(skb, 2));
+ 	}
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d839c33ae7d9..0d85425b1e07 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -965,10 +965,20 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(rpc_bind_new_program);
+ 
++void rpc_task_release_transport(struct rpc_task *task)
++{
++	struct rpc_xprt *xprt = task->tk_xprt;
++
++	if (xprt) {
++		task->tk_xprt = NULL;
++		xprt_put(xprt);
++	}
++}
++EXPORT_SYMBOL_GPL(rpc_task_release_transport);
++
+ void rpc_task_release_client(struct rpc_task *task)
+ {
+ 	struct rpc_clnt *clnt = task->tk_client;
+-	struct rpc_xprt *xprt = task->tk_xprt;
+ 
+ 	if (clnt != NULL) {
+ 		/* Remove from client task list */
+@@ -979,12 +989,14 @@ void rpc_task_release_client(struct rpc_task *task)
+ 
+ 		rpc_release_client(clnt);
+ 	}
++	rpc_task_release_transport(task);
++}
+ 
+-	if (xprt != NULL) {
+-		task->tk_xprt = NULL;
+-
+-		xprt_put(xprt);
+-	}
++static
++void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
++{
++	if (!task->tk_xprt)
++		task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
+ }
+ 
+ static
+@@ -992,8 +1004,7 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+ 
+ 	if (clnt != NULL) {
+-		if (task->tk_xprt == NULL)
+-			task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
++		rpc_task_set_transport(task, clnt);
+ 		task->tk_client = clnt;
+ 		atomic_inc(&clnt->cl_count);
+ 		if (clnt->cl_softrtry)
+@@ -1512,6 +1523,7 @@ call_start(struct rpc_task *task)
+ 		clnt->cl_program->version[clnt->cl_vers]->counts[idx]++;
+ 	clnt->cl_stats->rpccnt++;
+ 	task->tk_action = call_reserve;
++	rpc_task_set_transport(task, clnt);
+ }
+ 
+ /*
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index a3ac2c91331c..5e1dd493ce59 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -173,7 +173,7 @@ HOSTLOADLIBES_nconf	= $(shell . $(obj)/.nconf-cfg && echo $$libs)
+ HOSTCFLAGS_nconf.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ HOSTCFLAGS_nconf.gui.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ 
+-$(obj)/nconf.o: $(obj)/.nconf-cfg
++$(obj)/nconf.o $(obj)/nconf.gui.o: $(obj)/.nconf-cfg
+ 
+ # mconf: Used for the menuconfig target based on lxdialog
+ hostprogs-y	+= mconf
+@@ -184,7 +184,8 @@ HOSTLOADLIBES_mconf = $(shell . $(obj)/.mconf-cfg && echo $$libs)
+ $(foreach f, mconf.o $(lxdialog), \
+   $(eval HOSTCFLAGS_$f = $$(shell . $(obj)/.mconf-cfg && echo $$$$cflags)))
+ 
+-$(addprefix $(obj)/, mconf.o $(lxdialog)): $(obj)/.mconf-cfg
++$(obj)/mconf.o: $(obj)/.mconf-cfg
++$(addprefix $(obj)/lxdialog/, $(lxdialog)): $(obj)/.mconf-cfg
+ 
+ # qconf: Used for the xconfig target based on Qt
+ hostprogs-y	+= qconf
+diff --git a/security/apparmor/secid.c b/security/apparmor/secid.c
+index f2f22d00db18..4ccec1bcf6f5 100644
+--- a/security/apparmor/secid.c
++++ b/security/apparmor/secid.c
+@@ -79,7 +79,6 @@ int apparmor_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
+ 	struct aa_label *label = aa_secid_to_label(secid);
+ 	int len;
+ 
+-	AA_BUG(!secdata);
+ 	AA_BUG(!seclen);
+ 
+ 	if (!label)
+diff --git a/security/commoncap.c b/security/commoncap.c
+index f4c33abd9959..2e489d6a3ac8 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -388,7 +388,7 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 	if (strcmp(name, "capability") != 0)
+ 		return -EOPNOTSUPP;
+ 
+-	dentry = d_find_alias(inode);
++	dentry = d_find_any_alias(inode);
+ 	if (!dentry)
+ 		return -EINVAL;
+ 
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 31f858eceffc..83eed9d7f679 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -503,7 +503,7 @@ static int ac97_bus_remove(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	ret = adrv->remove(adev);
+@@ -511,6 +511,8 @@ static int ac97_bus_remove(struct device *dev)
+ 	if (ret == 0)
+ 		ac97_put_disable_clk(adev);
+ 
++	pm_runtime_disable(dev);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/ac97/snd_ac97_compat.c b/sound/ac97/snd_ac97_compat.c
+index 61544e0d8de4..8bab44f74bb8 100644
+--- a/sound/ac97/snd_ac97_compat.c
++++ b/sound/ac97/snd_ac97_compat.c
+@@ -15,6 +15,11 @@
+ 
+ #include "ac97_core.h"
+ 
++static void compat_ac97_release(struct device *dev)
++{
++	kfree(to_ac97_t(dev));
++}
++
+ static void compat_ac97_reset(struct snd_ac97 *ac97)
+ {
+ 	struct ac97_codec_device *adev = to_ac97_device(ac97->private_data);
+@@ -65,21 +70,31 @@ static struct snd_ac97_bus compat_soc_ac97_bus = {
+ struct snd_ac97 *snd_ac97_compat_alloc(struct ac97_codec_device *adev)
+ {
+ 	struct snd_ac97 *ac97;
++	int ret;
+ 
+ 	ac97 = kzalloc(sizeof(struct snd_ac97), GFP_KERNEL);
+ 	if (ac97 == NULL)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ac97->dev = adev->dev;
+ 	ac97->private_data = adev;
+ 	ac97->bus = &compat_soc_ac97_bus;
++
++	ac97->dev.parent = &adev->dev;
++	ac97->dev.release = compat_ac97_release;
++	dev_set_name(&ac97->dev, "%s-compat", dev_name(&adev->dev));
++	ret = device_register(&ac97->dev);
++	if (ret) {
++		put_device(&ac97->dev);
++		return ERR_PTR(ret);
++	}
++
+ 	return ac97;
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_alloc);
+ 
+ void snd_ac97_compat_release(struct snd_ac97 *ac97)
+ {
+-	kfree(ac97);
++	device_unregister(&ac97->dev);
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_release);
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d056447520a2..eeb6d1f7cfb3 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -202,6 +202,9 @@ static int auxtrace_queues__grow(struct auxtrace_queues *queues,
+ 	for (i = 0; i < queues->nr_queues; i++) {
+ 		list_splice_tail(&queues->queue_array[i].head,
+ 				 &queue_array[i].head);
++		queue_array[i].tid = queues->queue_array[i].tid;
++		queue_array[i].cpu = queues->queue_array[i].cpu;
++		queue_array[i].set = queues->queue_array[i].set;
+ 		queue_array[i].priv = queues->queue_array[i].priv;
+ 	}
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-13 21:17 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-13 21:17 UTC (permalink / raw
  To: gentoo-commits

commit:     baff7bbd22061d6307a998016251982c843d4cb7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 13 21:16:56 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Nov 13 21:16:56 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=baff7bbd

proj/linux-patches: Linux patch 4.18.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |     4 +
 1018_linux-4.18.19.patch | 15151 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 15155 insertions(+)

diff --git a/0000_README b/0000_README
index bdc7ee9..afaac7a 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-4.18.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.18
 
+Patch:  1018_linux-4.18.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-4.18.19.patch b/1018_linux-4.18.19.patch
new file mode 100644
index 0000000..40499cf
--- /dev/null
+++ b/1018_linux-4.18.19.patch
@@ -0,0 +1,15151 @@
+diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
+index 48b424de85bb..cfbc18f0d9c9 100644
+--- a/Documentation/filesystems/fscrypt.rst
++++ b/Documentation/filesystems/fscrypt.rst
+@@ -191,21 +191,11 @@ Currently, the following pairs of encryption modes are supported:
+ 
+ - AES-256-XTS for contents and AES-256-CTS-CBC for filenames
+ - AES-128-CBC for contents and AES-128-CTS-CBC for filenames
+-- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames
+ 
+ It is strongly recommended to use AES-256-XTS for contents encryption.
+ AES-128-CBC was added only for low-powered embedded devices with
+ crypto accelerators such as CAAM or CESA that do not support XTS.
+ 
+-Similarly, Speck128/256 support was only added for older or low-end
+-CPUs which cannot do AES fast enough -- especially ARM CPUs which have
+-NEON instructions but not the Cryptography Extensions -- and for which
+-it would not otherwise be feasible to use encryption at all.  It is
+-not recommended to use Speck on CPUs that have AES instructions.
+-Speck support is only available if it has been enabled in the crypto
+-API via CONFIG_CRYPTO_SPECK.  Also, on ARM platforms, to get
+-acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled.
+-
+ New encryption modes can be added relatively easily, without changes
+ to individual filesystems.  However, authenticated encryption (AE)
+ modes are not currently supported because of the difficulty of dealing
+diff --git a/Documentation/media/uapi/cec/cec-ioc-receive.rst b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+index e964074cd15b..b25e48afaa08 100644
+--- a/Documentation/media/uapi/cec/cec-ioc-receive.rst
++++ b/Documentation/media/uapi/cec/cec-ioc-receive.rst
+@@ -16,10 +16,10 @@ CEC_RECEIVE, CEC_TRANSMIT - Receive or transmit a CEC message
+ Synopsis
+ ========
+ 
+-.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_RECEIVE, struct cec_msg \*argp )
+     :name: CEC_RECEIVE
+ 
+-.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg *argp )
++.. c:function:: int ioctl( int fd, CEC_TRANSMIT, struct cec_msg \*argp )
+     :name: CEC_TRANSMIT
+ 
+ Arguments
+@@ -272,6 +272,19 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The transmit failed after one or more retries. This status bit is
+ 	mutually exclusive with :ref:`CEC_TX_STATUS_OK <CEC-TX-STATUS-OK>`.
+ 	Other bits can still be set to explain which failures were seen.
++    * .. _`CEC-TX-STATUS-ABORTED`:
++
++      - ``CEC_TX_STATUS_ABORTED``
++      - 0x40
++      - The transmit was aborted due to an HDMI disconnect, or the adapter
++        was unconfigured, or a transmit was interrupted, or the driver
++	returned an error when attempting to start a transmit.
++    * .. _`CEC-TX-STATUS-TIMEOUT`:
++
++      - ``CEC_TX_STATUS_TIMEOUT``
++      - 0x80
++      - The transmit timed out. This should not normally happen and this
++	indicates a driver problem.
+ 
+ 
+ .. tabularcolumns:: |p{5.6cm}|p{0.9cm}|p{11.0cm}|
+@@ -300,6 +313,14 @@ View On' messages from initiator 0xf ('Unregistered') to destination 0 ('TV').
+       - The message was received successfully but the reply was
+ 	``CEC_MSG_FEATURE_ABORT``. This status is only set if this message
+ 	was the reply to an earlier transmitted message.
++    * .. _`CEC-RX-STATUS-ABORTED`:
++
++      - ``CEC_RX_STATUS_ABORTED``
++      - 0x08
++      - The wait for a reply to an earlier transmitted message was aborted
++        because the HDMI cable was disconnected, the adapter was unconfigured
++	or the :ref:`CEC_TRANSMIT <CEC_RECEIVE>` that waited for a
++	reply was interrupted.
+ 
+ 
+ 
+diff --git a/Documentation/media/uapi/v4l/biblio.rst b/Documentation/media/uapi/v4l/biblio.rst
+index 1cedcfc04327..386d6cf83e9c 100644
+--- a/Documentation/media/uapi/v4l/biblio.rst
++++ b/Documentation/media/uapi/v4l/biblio.rst
+@@ -226,16 +226,6 @@ xvYCC
+ 
+ :author:    International Electrotechnical Commission (http://www.iec.ch)
+ 
+-.. _adobergb:
+-
+-AdobeRGB
+-========
+-
+-
+-:title:     Adobe© RGB (1998) Color Image Encoding Version 2005-05
+-
+-:author:    Adobe Systems Incorporated (http://www.adobe.com)
+-
+ .. _oprgb:
+ 
+ opRGB
+diff --git a/Documentation/media/uapi/v4l/colorspaces-defs.rst b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+index 410907fe9415..f24615544792 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-defs.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - See :ref:`col-rec709`.
+     * - ``V4L2_COLORSPACE_SRGB``
+       - See :ref:`col-srgb`.
+-    * - ``V4L2_COLORSPACE_ADOBERGB``
+-      - See :ref:`col-adobergb`.
++    * - ``V4L2_COLORSPACE_OPRGB``
++      - See :ref:`col-oprgb`.
+     * - ``V4L2_COLORSPACE_BT2020``
+       - See :ref:`col-bt2020`.
+     * - ``V4L2_COLORSPACE_DCI_P3``
+@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - Use the Rec. 709 transfer function.
+     * - ``V4L2_XFER_FUNC_SRGB``
+       - Use the sRGB transfer function.
+-    * - ``V4L2_XFER_FUNC_ADOBERGB``
+-      - Use the AdobeRGB transfer function.
++    * - ``V4L2_XFER_FUNC_OPRGB``
++      - Use the opRGB transfer function.
+     * - ``V4L2_XFER_FUNC_SMPTE240M``
+       - Use the SMPTE 240M transfer function.
+     * - ``V4L2_XFER_FUNC_NONE``
+diff --git a/Documentation/media/uapi/v4l/colorspaces-details.rst b/Documentation/media/uapi/v4l/colorspaces-details.rst
+index b5d551b9cc8f..09fabf4cd412 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-details.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-details.rst
+@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
+ 170M/BT.601. The Y'CbCr quantization is limited range.
+ 
+ 
+-.. _col-adobergb:
++.. _col-oprgb:
+ 
+-Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
++Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
+ ===============================================
+ 
+-The :ref:`adobergb` standard defines the colorspace used by computer
+-graphics that use the AdobeRGB colorspace. This is also known as the
+-:ref:`oprgb` standard. The default transfer function is
+-``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
++The :ref:`oprgb` standard defines the colorspace used by computer
++graphics that use the opRGB colorspace. The default transfer function is
++``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
+ ``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
+ range.
+ 
+@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
+ 
+ .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
+ 
+-.. flat-table:: Adobe RGB Chromaticities
++.. flat-table:: opRGB Chromaticities
+     :header-rows:  1
+     :stub-columns: 0
+     :widths:       1 1 2
+diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
+index a5cb0a8686ac..8813ff9c42b9 100644
+--- a/Documentation/media/videodev2.h.rst.exceptions
++++ b/Documentation/media/videodev2.h.rst.exceptions
+@@ -56,7 +56,8 @@ replace symbol V4L2_MEMORY_USERPTR :c:type:`v4l2_memory`
+ # Documented enum v4l2_colorspace
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_BG :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_470_SYSTEM_M :c:type:`v4l2_colorspace`
+-replace symbol V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
++replace symbol V4L2_COLORSPACE_OPRGB :c:type:`v4l2_colorspace`
++replace define V4L2_COLORSPACE_ADOBERGB :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_BT2020 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DCI_P3 :c:type:`v4l2_colorspace`
+ replace symbol V4L2_COLORSPACE_DEFAULT :c:type:`v4l2_colorspace`
+@@ -69,7 +70,8 @@ replace symbol V4L2_COLORSPACE_SRGB :c:type:`v4l2_colorspace`
+ 
+ # Documented enum v4l2_xfer_func
+ replace symbol V4L2_XFER_FUNC_709 :c:type:`v4l2_xfer_func`
+-replace symbol V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
++replace symbol V4L2_XFER_FUNC_OPRGB :c:type:`v4l2_xfer_func`
++replace define V4L2_XFER_FUNC_ADOBERGB :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DCI_P3 :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_DEFAULT :c:type:`v4l2_xfer_func`
+ replace symbol V4L2_XFER_FUNC_NONE :c:type:`v4l2_xfer_func`
+diff --git a/Makefile b/Makefile
+index 7b35c1ec0427..71642133ba22 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index a0ddf497e8cd..2cb45ddd2ae3 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -354,7 +354,7 @@
+ 				ti,hwmods = "pcie1";
+ 				phys = <&pcie1_phy>;
+ 				phy-names = "pcie-phy0";
+-				ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
++				ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 962af97c1883..aff5d66ae058 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -78,6 +78,22 @@
+ 			compatible = "arm,cortex-a7";
+ 			reg = <1>;
+ 			clock-frequency = <1000000000>;
++			clocks = <&cmu CLK_ARM_CLK>;
++			clock-names = "cpu";
++			#cooling-cells = <2>;
++
++			operating-points = <
++				1000000 1150000
++				900000  1112500
++				800000  1075000
++				700000  1037500
++				600000  1000000
++				500000  962500
++				400000  925000
++				300000  887500
++				200000  850000
++				100000  850000
++			>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-origen.dts b/arch/arm/boot/dts/exynos4210-origen.dts
+index 2ab99f9f3d0a..dd9ec05eb0f7 100644
+--- a/arch/arm/boot/dts/exynos4210-origen.dts
++++ b/arch/arm/boot/dts/exynos4210-origen.dts
+@@ -151,6 +151,8 @@
+ 		reg = <0x66>;
+ 		interrupt-parent = <&gpx0>;
+ 		interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&max8997_irq>;
+ 
+ 		max8997,pmic-buck1-dvs-voltage = <1350000>;
+ 		max8997,pmic-buck2-dvs-voltage = <1100000>;
+@@ -288,6 +290,13 @@
+ 	};
+ };
+ 
++&pinctrl_1 {
++	max8997_irq: max8997-irq {
++		samsung,pins = "gpx0-3", "gpx0-4";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
++};
++
+ &sdhci_0 {
+ 	bus-width = <4>;
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index 88fb47cef9a8..b6091c27f155 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -55,6 +55,19 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0x901>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			clock-latency = <160000>;
++
++			operating-points = <
++				1200000 1250000
++				1000000 1150000
++				800000	1075000
++				500000	975000
++				400000	975000
++				200000	950000
++			>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4412.dtsi b/arch/arm/boot/dts/exynos4412.dtsi
+index 7b43c10c510b..51f72f0327e5 100644
+--- a/arch/arm/boot/dts/exynos4412.dtsi
++++ b/arch/arm/boot/dts/exynos4412.dtsi
+@@ -49,21 +49,30 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA01>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a02 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA02>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+ 		cpu@a03 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0xA03>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index 2daf505b3d08..f04adf72b80e 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -54,36 +54,106 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <0>;
+-			clock-frequency = <1700000000>;
+ 			clocks = <&clock CLK_ARM_CLK>;
+ 			clock-names = "cpu";
+-			clock-latency = <140000>;
+-
+-			operating-points = <
+-				1700000 1300000
+-				1600000 1250000
+-				1500000 1225000
+-				1400000 1200000
+-				1300000 1150000
+-				1200000 1125000
+-				1100000 1100000
+-				1000000 1075000
+-				 900000 1050000
+-				 800000 1025000
+-				 700000 1012500
+-				 600000 1000000
+-				 500000  975000
+-				 400000  950000
+-				 300000  937500
+-				 200000  925000
+-			>;
++			operating-points-v2 = <&cpu0_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 		cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <1>;
+-			clock-frequency = <1700000000>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
++		};
++	};
++
++	cpu0_opp_table: opp_table0 {
++		compatible = "operating-points-v2";
++		opp-shared;
++
++		opp-200000000 {
++			opp-hz = /bits/ 64 <200000000>;
++			opp-microvolt = <925000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-300000000 {
++			opp-hz = /bits/ 64 <300000000>;
++			opp-microvolt = <937500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-400000000 {
++			opp-hz = /bits/ 64 <400000000>;
++			opp-microvolt = <950000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-500000000 {
++			opp-hz = /bits/ 64 <500000000>;
++			opp-microvolt = <975000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-600000000 {
++			opp-hz = /bits/ 64 <600000000>;
++			opp-microvolt = <1000000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-700000000 {
++			opp-hz = /bits/ 64 <700000000>;
++			opp-microvolt = <1012500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-800000000 {
++			opp-hz = /bits/ 64 <800000000>;
++			opp-microvolt = <1025000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-900000000 {
++			opp-hz = /bits/ 64 <900000000>;
++			opp-microvolt = <1050000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1000000000 {
++			opp-hz = /bits/ 64 <1000000000>;
++			opp-microvolt = <1075000>;
++			clock-latency-ns = <140000>;
++			opp-suspend;
++		};
++		opp-1100000000 {
++			opp-hz = /bits/ 64 <1100000000>;
++			opp-microvolt = <1100000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1200000000 {
++			opp-hz = /bits/ 64 <1200000000>;
++			opp-microvolt = <1125000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1300000000 {
++			opp-hz = /bits/ 64 <1300000000>;
++			opp-microvolt = <1150000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1400000000 {
++			opp-hz = /bits/ 64 <1400000000>;
++			opp-microvolt = <1200000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1500000000 {
++			opp-hz = /bits/ 64 <1500000000>;
++			opp-microvolt = <1225000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1600000000 {
++			opp-hz = /bits/ 64 <1600000000>;
++			opp-microvolt = <1250000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1700000000 {
++			opp-hz = /bits/ 64 <1700000000>;
++			opp-microvolt = <1300000>;
++			clock-latency-ns = <140000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 791ca15c799e..bd1985694bca 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -601,7 +601,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdr: sdr@ffc25000 {
++		sdr: sdr@ffcfb100 {
+ 			compatible = "altr,sdr-ctl", "syscon";
+ 			reg = <0xffcfb100 0x80>;
+ 		};
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 925d1364727a..b8e69fe282b8 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON
+ 	select CRYPTO_BLKCIPHER
+ 	select CRYPTO_CHACHA20
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
+index 8de542c48ade..bd5bceef0605 100644
+--- a/arch/arm/crypto/Makefile
++++ b/arch/arm/crypto/Makefile
+@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
+ obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
+ obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+ 
+ ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
+ ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
+@@ -54,7 +53,6 @@ ghash-arm-ce-y	:= ghash-ce-core.o ghash-ce-glue.o
+ crct10dif-arm-ce-y	:= crct10dif-ce-core.o crct10dif-ce-glue.o
+ crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+ 
+ ifdef REGENERATE_ARM_CRYPTO
+ quiet_cmd_perl = PERL    $@
+diff --git a/arch/arm/crypto/speck-neon-core.S b/arch/arm/crypto/speck-neon-core.S
+deleted file mode 100644
+index 57caa742016e..000000000000
+--- a/arch/arm/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,434 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-	.fpu		neon
+-
+-	// arguments
+-	ROUND_KEYS	.req	r0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	r1	// int nrounds
+-	DST		.req	r2	// void *dst
+-	SRC		.req	r3	// const void *src
+-	NBYTES		.req	r4	// unsigned int nbytes
+-	TWEAK		.req	r5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	X0		.req	q0
+-	X0_L		.req	d0
+-	X0_H		.req	d1
+-	Y0		.req	q1
+-	Y0_H		.req	d3
+-	X1		.req	q2
+-	X1_L		.req	d4
+-	X1_H		.req	d5
+-	Y1		.req	q3
+-	Y1_H		.req	d7
+-	X2		.req	q4
+-	X2_L		.req	d8
+-	X2_H		.req	d9
+-	Y2		.req	q5
+-	Y2_H		.req	d11
+-	X3		.req	q6
+-	X3_L		.req	d12
+-	X3_H		.req	d13
+-	Y3		.req	q7
+-	Y3_H		.req	d15
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	q8
+-	ROUND_KEY_L	.req	d16
+-	ROUND_KEY_H	.req	d17
+-
+-	// index vector for vtbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	d18
+-
+-	// multiplication table for updating XTS tweaks
+-	GF128MUL_TABLE	.req	d19
+-	GF64MUL_TABLE	.req	d19
+-
+-	// current XTS tweak value(s)
+-	TWEAKV		.req	q10
+-	TWEAKV_L	.req	d20
+-	TWEAKV_H	.req	d21
+-
+-	TMP0		.req	q12
+-	TMP0_L		.req	d24
+-	TMP0_H		.req	d25
+-	TMP1		.req	q13
+-	TMP2		.req	q14
+-	TMP3		.req	q15
+-
+-	.align		4
+-.Lror64_8_table:
+-	.byte		1, 2, 3, 4, 5, 6, 7, 0
+-.Lror32_8_table:
+-	.byte		1, 2, 3, 0, 5, 6, 7, 4
+-.Lrol64_8_table:
+-	.byte		7, 0, 1, 2, 3, 4, 5, 6
+-.Lrol32_8_table:
+-	.byte		3, 0, 1, 2, 7, 4, 5, 6
+-.Lgf128mul_table:
+-	.byte		0, 0x87
+-	.fill		14
+-.Lgf64mul_table:
+-	.byte		0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b
+-	.fill		12
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- *
+- * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because
+- * the vtbl approach is faster on some processors and the same speed on others.
+- */
+-.macro _speck_round_128bytes	n
+-
+-	// x = ror(x, 8)
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-
+-	// x += y
+-	vadd.u\n	X0, Y0
+-	vadd.u\n	X1, Y1
+-	vadd.u\n	X2, Y2
+-	vadd.u\n	X3, Y3
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// y = rol(y, 3)
+-	vshl.u\n	TMP0, Y0, #3
+-	vshl.u\n	TMP1, Y1, #3
+-	vshl.u\n	TMP2, Y2, #3
+-	vshl.u\n	TMP3, Y3, #3
+-	vsri.u\n	TMP0, Y0, #(\n - 3)
+-	vsri.u\n	TMP1, Y1, #(\n - 3)
+-	vsri.u\n	TMP2, Y2, #(\n - 3)
+-	vsri.u\n	TMP3, Y3, #(\n - 3)
+-
+-	// y ^= x
+-	veor		Y0, TMP0, X0
+-	veor		Y1, TMP1, X1
+-	veor		Y2, TMP2, X2
+-	veor		Y3, TMP3, X3
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n
+-
+-	// y ^= x
+-	veor		TMP0, Y0, X0
+-	veor		TMP1, Y1, X1
+-	veor		TMP2, Y2, X2
+-	veor		TMP3, Y3, X3
+-
+-	// y = ror(y, 3)
+-	vshr.u\n	Y0, TMP0, #3
+-	vshr.u\n	Y1, TMP1, #3
+-	vshr.u\n	Y2, TMP2, #3
+-	vshr.u\n	Y3, TMP3, #3
+-	vsli.u\n	Y0, TMP0, #(\n - 3)
+-	vsli.u\n	Y1, TMP1, #(\n - 3)
+-	vsli.u\n	Y2, TMP2, #(\n - 3)
+-	vsli.u\n	Y3, TMP3, #(\n - 3)
+-
+-	// x ^= k
+-	veor		X0, ROUND_KEY
+-	veor		X1, ROUND_KEY
+-	veor		X2, ROUND_KEY
+-	veor		X3, ROUND_KEY
+-
+-	// x -= y
+-	vsub.u\n	X0, Y0
+-	vsub.u\n	X1, Y1
+-	vsub.u\n	X2, Y2
+-	vsub.u\n	X3, Y3
+-
+-	// x = rol(x, 8);
+-	vtbl.8		X0_L, {X0_L}, ROTATE_TABLE
+-	vtbl.8		X0_H, {X0_H}, ROTATE_TABLE
+-	vtbl.8		X1_L, {X1_L}, ROTATE_TABLE
+-	vtbl.8		X1_H, {X1_H}, ROTATE_TABLE
+-	vtbl.8		X2_L, {X2_L}, ROTATE_TABLE
+-	vtbl.8		X2_H, {X2_H}, ROTATE_TABLE
+-	vtbl.8		X3_L, {X3_L}, ROTATE_TABLE
+-	vtbl.8		X3_H, {X3_H}, ROTATE_TABLE
+-.endm
+-
+-.macro _xts128_precrypt_one	dst_reg, tweak_buf, tmp
+-
+-	// Load the next source block
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current tweak in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next source block with the current tweak
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #63
+-	vshl.u64	TWEAKV, #1
+-	veor		TWEAKV_H, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV_L, \tmp\()_H
+-.endm
+-
+-.macro _xts64_precrypt_two	dst_reg, tweak_buf, tmp
+-
+-	// Load the next two source blocks
+-	vld1.8		{\dst_reg}, [SRC]!
+-
+-	// Save the current two tweaks in the tweak buffer
+-	vst1.8		{TWEAKV}, [\tweak_buf:128]!
+-
+-	// XOR the next two source blocks with the current two tweaks
+-	veor		\dst_reg, TWEAKV
+-
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	vshr.u64	\tmp, TWEAKV, #62
+-	vshl.u64	TWEAKV, #2
+-	vtbl.8		\tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L
+-	vtbl.8		\tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H
+-	veor		TWEAKV, \tmp
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, decrypting
+-	push		{r4-r7}
+-	mov		r7, sp
+-
+-	/*
+-	 * The first four parameters were passed in registers r0-r3.  Load the
+-	 * additional parameters, which were passed on the stack.
+-	 */
+-	ldr		NBYTES, [sp, #16]
+-	ldr		TWEAK, [sp, #20]
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3
+-	sub		ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2
+-	sub		ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for vtbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		r12, =.Lrol\n\()_8_table
+-.else
+-	ldr		r12, =.Lror\n\()_8_table
+-.endif
+-	vld1.8		{ROTATE_TABLE}, [r12:64]
+-
+-	// One-time XTS preparation
+-
+-	/*
+-	 * Allocate stack space to store 128 bytes worth of tweaks.  For
+-	 * performance, this space is aligned to a 16-byte boundary so that we
+-	 * can use the load/store instructions that declare 16-byte alignment.
+-	 * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
+-	 */
+-	sub		r12, sp, #128
+-	bic		r12, #0xf
+-	mov		sp, r12
+-
+-.if \n == 64
+-	// Load first tweak
+-	vld1.8		{TWEAKV}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		r12, =.Lgf128mul_table
+-	vld1.8		{GF128MUL_TABLE}, [r12:64]
+-.else
+-	// Load first tweak
+-	vld1.8		{TWEAKV_L}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		r12, =.Lgf64mul_table
+-	vld1.8		{GF64MUL_TABLE}, [r12:64]
+-
+-	// Calculate second tweak, packing it together with the first
+-	vshr.u64	TMP0_L, TWEAKV_L, #63
+-	vtbl.u8		TMP0_L, {GF64MUL_TABLE}, TMP0_L
+-	vshl.u64	TWEAKV_H, TWEAKV_L, #1
+-	veor		TWEAKV_H, TMP0_L
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	/*
+-	 * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak
+-	 * values, and save the tweaks on the stack for later.  Then
+-	 * de-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	mov		r12, sp
+-.if \n == 64
+-	_xts128_precrypt_one	X0, r12, TMP0
+-	_xts128_precrypt_one	Y0, r12, TMP0
+-	_xts128_precrypt_one	X1, r12, TMP0
+-	_xts128_precrypt_one	Y1, r12, TMP0
+-	_xts128_precrypt_one	X2, r12, TMP0
+-	_xts128_precrypt_one	Y2, r12, TMP0
+-	_xts128_precrypt_one	X3, r12, TMP0
+-	_xts128_precrypt_one	Y3, r12, TMP0
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	_xts64_precrypt_two	X0, r12, TMP0
+-	_xts64_precrypt_two	Y0, r12, TMP0
+-	_xts64_precrypt_two	X1, r12, TMP0
+-	_xts64_precrypt_two	Y1, r12, TMP0
+-	_xts64_precrypt_two	X2, r12, TMP0
+-	_xts64_precrypt_two	Y2, r12, TMP0
+-	_xts64_precrypt_two	X3, r12, TMP0
+-	_xts64_precrypt_two	Y3, r12, TMP0
+-	vuzp.32		Y0, X0
+-	vuzp.32		Y1, X1
+-	vuzp.32		Y2, X2
+-	vuzp.32		Y3, X3
+-.endif
+-
+-	// Do the cipher rounds
+-
+-	mov		r12, ROUND_KEYS
+-	mov		r6, NROUNDS
+-
+-.Lnext_round_\@:
+-.if \decrypting
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]
+-	sub		r12, #8
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]
+-	sub		r12, #4
+-.endif
+-	_speck_unround_128bytes	\n
+-.else
+-.if \n == 64
+-	vld1.64		ROUND_KEY_L, [r12]!
+-	vmov		ROUND_KEY_H, ROUND_KEY_L
+-.else
+-	vld1.32		{ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]!
+-.endif
+-	_speck_round_128bytes	\n
+-.endif
+-	subs		r6, r6, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-.if \n == 64
+-	vswp		X0_L, Y0_H
+-	vswp		X1_L, Y1_H
+-	vswp		X2_L, Y2_H
+-	vswp		X3_L, Y3_H
+-.else
+-	vzip.32		Y0, X0
+-	vzip.32		Y1, X1
+-	vzip.32		Y2, X2
+-	vzip.32		Y3, X3
+-.endif
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks we saved earlier
+-	mov		r12, sp
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X0, TMP0
+-	veor		Y0, TMP1
+-	veor		X1, TMP2
+-	veor		Y1, TMP3
+-	vld1.8		{TMP0, TMP1}, [r12:128]!
+-	vld1.8		{TMP2, TMP3}, [r12:128]!
+-	veor		X2, TMP0
+-	veor		Y2, TMP1
+-	veor		X3, TMP2
+-	veor		Y3, TMP3
+-
+-	// Store the ciphertext in the destination buffer
+-	vst1.8		{X0, Y0}, [DST]!
+-	vst1.8		{X1, Y1}, [DST]!
+-	vst1.8		{X2, Y2}, [DST]!
+-	vst1.8		{X3, Y3}, [DST]!
+-
+-	// Continue if there are more 128-byte chunks remaining, else return
+-	subs		NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak
+-.if \n == 64
+-	vst1.8		{TWEAKV}, [TWEAK]
+-.else
+-	vst1.8		{TWEAKV_L}, [TWEAK]
+-.endif
+-
+-	mov		sp, r7
+-	pop		{r4-r7}
+-	bx		lr
+-.endm
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm/crypto/speck-neon-glue.c b/arch/arm/crypto/speck-neon-glue.c
+deleted file mode 100644
+index f012c3ea998f..000000000000
+--- a/arch/arm/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,288 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Note: the NIST recommendation for XTS only specifies a 128-bit block size,
+- * but a 64-bit version (needed for Speck64) is fairly straightforward; the math
+- * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial
+- * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004:
+- * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes
+- * OCB and PMAC"), represented as 0x1B.
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_NEON))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index 67dac595dc72..3989876ab699 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -327,7 +327,7 @@
+ 
+ 		sysmgr: sysmgr@ffd12000 {
+ 			compatible = "altr,sys-mgr", "syscon";
+-			reg = <0xffd12000 0x1000>;
++			reg = <0xffd12000 0x228>;
+ 		};
+ 
+ 		/* Local timer */
+diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
+index e3fdb0fd6f70..d51944ff9f91 100644
+--- a/arch/arm64/crypto/Kconfig
++++ b/arch/arm64/crypto/Kconfig
+@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS
+ 	select CRYPTO_AES_ARM64
+ 	select CRYPTO_SIMD
+ 
+-config CRYPTO_SPECK_NEON
+-	tristate "NEON accelerated Speck cipher algorithms"
+-	depends on KERNEL_MODE_NEON
+-	select CRYPTO_BLKCIPHER
+-	select CRYPTO_SPECK
+-
+ endif
+diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
+index bcafd016618e..7bc4bda6d9c6 100644
+--- a/arch/arm64/crypto/Makefile
++++ b/arch/arm64/crypto/Makefile
+@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-core.o
+ obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+ 
+-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o
+-speck-neon-y := speck-neon-core.o speck-neon-glue.o
+-
+ obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
+ aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
+ 
+diff --git a/arch/arm64/crypto/speck-neon-core.S b/arch/arm64/crypto/speck-neon-core.S
+deleted file mode 100644
+index b14463438b09..000000000000
+--- a/arch/arm64/crypto/speck-neon-core.S
++++ /dev/null
+@@ -1,352 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Author: Eric Biggers <ebiggers@google.com>
+- */
+-
+-#include <linux/linkage.h>
+-
+-	.text
+-
+-	// arguments
+-	ROUND_KEYS	.req	x0	// const {u64,u32} *round_keys
+-	NROUNDS		.req	w1	// int nrounds
+-	NROUNDS_X	.req	x1
+-	DST		.req	x2	// void *dst
+-	SRC		.req	x3	// const void *src
+-	NBYTES		.req	w4	// unsigned int nbytes
+-	TWEAK		.req	x5	// void *tweak
+-
+-	// registers which hold the data being encrypted/decrypted
+-	// (underscores avoid a naming collision with ARM64 registers x0-x3)
+-	X_0		.req	v0
+-	Y_0		.req	v1
+-	X_1		.req	v2
+-	Y_1		.req	v3
+-	X_2		.req	v4
+-	Y_2		.req	v5
+-	X_3		.req	v6
+-	Y_3		.req	v7
+-
+-	// the round key, duplicated in all lanes
+-	ROUND_KEY	.req	v8
+-
+-	// index vector for tbl-based 8-bit rotates
+-	ROTATE_TABLE	.req	v9
+-	ROTATE_TABLE_Q	.req	q9
+-
+-	// temporary registers
+-	TMP0		.req	v10
+-	TMP1		.req	v11
+-	TMP2		.req	v12
+-	TMP3		.req	v13
+-
+-	// multiplication table for updating XTS tweaks
+-	GFMUL_TABLE	.req	v14
+-	GFMUL_TABLE_Q	.req	q14
+-
+-	// next XTS tweak value(s)
+-	TWEAKV_NEXT	.req	v15
+-
+-	// XTS tweaks for the blocks currently being encrypted/decrypted
+-	TWEAKV0		.req	v16
+-	TWEAKV1		.req	v17
+-	TWEAKV2		.req	v18
+-	TWEAKV3		.req	v19
+-	TWEAKV4		.req	v20
+-	TWEAKV5		.req	v21
+-	TWEAKV6		.req	v22
+-	TWEAKV7		.req	v23
+-
+-	.align		4
+-.Lror64_8_table:
+-	.octa		0x080f0e0d0c0b0a090007060504030201
+-.Lror32_8_table:
+-	.octa		0x0c0f0e0d080b0a090407060500030201
+-.Lrol64_8_table:
+-	.octa		0x0e0d0c0b0a09080f0605040302010007
+-.Lrol32_8_table:
+-	.octa		0x0e0d0c0f0a09080b0605040702010003
+-.Lgf128mul_table:
+-	.octa		0x00000000000000870000000000000001
+-.Lgf64mul_table:
+-	.octa		0x0000000000000000000000002d361b00
+-
+-/*
+- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time
+- *
+- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for
+- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes
+- * of ROUND_KEY.  'n' is the lane size: 64 for Speck128, or 32 for Speck64.
+- * 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64.
+- */
+-.macro _speck_round_128bytes	n, lanes
+-
+-	// x = ror(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-
+-	// x += y
+-	add		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	add		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	add		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	add		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// y = rol(y, 3)
+-	shl		TMP0.\lanes, Y_0.\lanes, #3
+-	shl		TMP1.\lanes, Y_1.\lanes, #3
+-	shl		TMP2.\lanes, Y_2.\lanes, #3
+-	shl		TMP3.\lanes, Y_3.\lanes, #3
+-	sri		TMP0.\lanes, Y_0.\lanes, #(\n - 3)
+-	sri		TMP1.\lanes, Y_1.\lanes, #(\n - 3)
+-	sri		TMP2.\lanes, Y_2.\lanes, #(\n - 3)
+-	sri		TMP3.\lanes, Y_3.\lanes, #(\n - 3)
+-
+-	// y ^= x
+-	eor		Y_0.16b, TMP0.16b, X_0.16b
+-	eor		Y_1.16b, TMP1.16b, X_1.16b
+-	eor		Y_2.16b, TMP2.16b, X_2.16b
+-	eor		Y_3.16b, TMP3.16b, X_3.16b
+-.endm
+-
+-/*
+- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time
+- *
+- * This is the inverse of _speck_round_128bytes().
+- */
+-.macro _speck_unround_128bytes	n, lanes
+-
+-	// y ^= x
+-	eor		TMP0.16b, Y_0.16b, X_0.16b
+-	eor		TMP1.16b, Y_1.16b, X_1.16b
+-	eor		TMP2.16b, Y_2.16b, X_2.16b
+-	eor		TMP3.16b, Y_3.16b, X_3.16b
+-
+-	// y = ror(y, 3)
+-	ushr		Y_0.\lanes, TMP0.\lanes, #3
+-	ushr		Y_1.\lanes, TMP1.\lanes, #3
+-	ushr		Y_2.\lanes, TMP2.\lanes, #3
+-	ushr		Y_3.\lanes, TMP3.\lanes, #3
+-	sli		Y_0.\lanes, TMP0.\lanes, #(\n - 3)
+-	sli		Y_1.\lanes, TMP1.\lanes, #(\n - 3)
+-	sli		Y_2.\lanes, TMP2.\lanes, #(\n - 3)
+-	sli		Y_3.\lanes, TMP3.\lanes, #(\n - 3)
+-
+-	// x ^= k
+-	eor		X_0.16b, X_0.16b, ROUND_KEY.16b
+-	eor		X_1.16b, X_1.16b, ROUND_KEY.16b
+-	eor		X_2.16b, X_2.16b, ROUND_KEY.16b
+-	eor		X_3.16b, X_3.16b, ROUND_KEY.16b
+-
+-	// x -= y
+-	sub		X_0.\lanes, X_0.\lanes, Y_0.\lanes
+-	sub		X_1.\lanes, X_1.\lanes, Y_1.\lanes
+-	sub		X_2.\lanes, X_2.\lanes, Y_2.\lanes
+-	sub		X_3.\lanes, X_3.\lanes, Y_3.\lanes
+-
+-	// x = rol(x, 8)
+-	tbl		X_0.16b, {X_0.16b}, ROTATE_TABLE.16b
+-	tbl		X_1.16b, {X_1.16b}, ROTATE_TABLE.16b
+-	tbl		X_2.16b, {X_2.16b}, ROTATE_TABLE.16b
+-	tbl		X_3.16b, {X_3.16b}, ROTATE_TABLE.16b
+-.endm
+-
+-.macro _next_xts_tweak	next, cur, tmp, n
+-.if \n == 64
+-	/*
+-	 * Calculate the next tweak by multiplying the current one by x,
+-	 * modulo p(x) = x^128 + x^7 + x^2 + x + 1.
+-	 */
+-	sshr		\tmp\().2d, \cur\().2d, #63
+-	and		\tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b
+-	shl		\next\().2d, \cur\().2d, #1
+-	ext		\tmp\().16b, \tmp\().16b, \tmp\().16b, #8
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.else
+-	/*
+-	 * Calculate the next two tweaks by multiplying the current ones by x^2,
+-	 * modulo p(x) = x^64 + x^4 + x^3 + x + 1.
+-	 */
+-	ushr		\tmp\().2d, \cur\().2d, #62
+-	shl		\next\().2d, \cur\().2d, #2
+-	tbl		\tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b
+-	eor		\next\().16b, \next\().16b, \tmp\().16b
+-.endif
+-.endm
+-
+-/*
+- * _speck_xts_crypt() - Speck-XTS encryption/decryption
+- *
+- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer
+- * using Speck-XTS, specifically the variant with a block size of '2n' and round
+- * count given by NROUNDS.  The expanded round keys are given in ROUND_KEYS, and
+- * the current XTS tweak value is given in TWEAK.  It's assumed that NBYTES is a
+- * nonzero multiple of 128.
+- */
+-.macro _speck_xts_crypt	n, lanes, decrypting
+-
+-	/*
+-	 * If decrypting, modify the ROUND_KEYS parameter to point to the last
+-	 * round key rather than the first, since for decryption the round keys
+-	 * are used in reverse order.
+-	 */
+-.if \decrypting
+-	mov		NROUNDS, NROUNDS	/* zero the high 32 bits */
+-.if \n == 64
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3
+-	sub		ROUND_KEYS, ROUND_KEYS, #8
+-.else
+-	add		ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2
+-	sub		ROUND_KEYS, ROUND_KEYS, #4
+-.endif
+-.endif
+-
+-	// Load the index vector for tbl-based 8-bit rotates
+-.if \decrypting
+-	ldr		ROTATE_TABLE_Q, .Lrol\n\()_8_table
+-.else
+-	ldr		ROTATE_TABLE_Q, .Lror\n\()_8_table
+-.endif
+-
+-	// One-time XTS preparation
+-.if \n == 64
+-	// Load first tweak
+-	ld1		{TWEAKV0.16b}, [TWEAK]
+-
+-	// Load GF(2^128) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf128mul_table
+-.else
+-	// Load first tweak
+-	ld1		{TWEAKV0.8b}, [TWEAK]
+-
+-	// Load GF(2^64) multiplication table
+-	ldr		GFMUL_TABLE_Q, .Lgf64mul_table
+-
+-	// Calculate second tweak, packing it together with the first
+-	ushr		TMP0.2d, TWEAKV0.2d, #63
+-	shl		TMP1.2d, TWEAKV0.2d, #1
+-	tbl		TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b
+-	eor		TMP0.8b, TMP0.8b, TMP1.8b
+-	mov		TWEAKV0.d[1], TMP0.d[0]
+-.endif
+-
+-.Lnext_128bytes_\@:
+-
+-	// Calculate XTS tweaks for next 128 bytes
+-	_next_xts_tweak	TWEAKV1, TWEAKV0, TMP0, \n
+-	_next_xts_tweak	TWEAKV2, TWEAKV1, TMP0, \n
+-	_next_xts_tweak	TWEAKV3, TWEAKV2, TMP0, \n
+-	_next_xts_tweak	TWEAKV4, TWEAKV3, TMP0, \n
+-	_next_xts_tweak	TWEAKV5, TWEAKV4, TMP0, \n
+-	_next_xts_tweak	TWEAKV6, TWEAKV5, TMP0, \n
+-	_next_xts_tweak	TWEAKV7, TWEAKV6, TMP0, \n
+-	_next_xts_tweak	TWEAKV_NEXT, TWEAKV7, TMP0, \n
+-
+-	// Load the next source blocks into {X,Y}[0-3]
+-	ld1		{X_0.16b-Y_1.16b}, [SRC], #64
+-	ld1		{X_2.16b-Y_3.16b}, [SRC], #64
+-
+-	// XOR the source blocks with their XTS tweaks
+-	eor		TMP0.16b, X_0.16b, TWEAKV0.16b
+-	eor		Y_0.16b,  Y_0.16b, TWEAKV1.16b
+-	eor		TMP1.16b, X_1.16b, TWEAKV2.16b
+-	eor		Y_1.16b,  Y_1.16b, TWEAKV3.16b
+-	eor		TMP2.16b, X_2.16b, TWEAKV4.16b
+-	eor		Y_2.16b,  Y_2.16b, TWEAKV5.16b
+-	eor		TMP3.16b, X_3.16b, TWEAKV6.16b
+-	eor		Y_3.16b,  Y_3.16b, TWEAKV7.16b
+-
+-	/*
+-	 * De-interleave the 'x' and 'y' elements of each block, i.e. make it so
+-	 * that the X[0-3] registers contain only the second halves of blocks,
+-	 * and the Y[0-3] registers contain only the first halves of blocks.
+-	 * (Speck uses the order (y, x) rather than the more intuitive (x, y).)
+-	 */
+-	uzp2		X_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp1		Y_0.\lanes, TMP0.\lanes, Y_0.\lanes
+-	uzp2		X_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp1		Y_1.\lanes, TMP1.\lanes, Y_1.\lanes
+-	uzp2		X_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp1		Y_2.\lanes, TMP2.\lanes, Y_2.\lanes
+-	uzp2		X_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-	uzp1		Y_3.\lanes, TMP3.\lanes, Y_3.\lanes
+-
+-	// Do the cipher rounds
+-	mov		x6, ROUND_KEYS
+-	mov		w7, NROUNDS
+-.Lnext_round_\@:
+-.if \decrypting
+-	ld1r		{ROUND_KEY.\lanes}, [x6]
+-	sub		x6, x6, #( \n / 8 )
+-	_speck_unround_128bytes	\n, \lanes
+-.else
+-	ld1r		{ROUND_KEY.\lanes}, [x6], #( \n / 8 )
+-	_speck_round_128bytes	\n, \lanes
+-.endif
+-	subs		w7, w7, #1
+-	bne		.Lnext_round_\@
+-
+-	// Re-interleave the 'x' and 'y' elements of each block
+-	zip1		TMP0.\lanes, Y_0.\lanes, X_0.\lanes
+-	zip2		Y_0.\lanes,  Y_0.\lanes, X_0.\lanes
+-	zip1		TMP1.\lanes, Y_1.\lanes, X_1.\lanes
+-	zip2		Y_1.\lanes,  Y_1.\lanes, X_1.\lanes
+-	zip1		TMP2.\lanes, Y_2.\lanes, X_2.\lanes
+-	zip2		Y_2.\lanes,  Y_2.\lanes, X_2.\lanes
+-	zip1		TMP3.\lanes, Y_3.\lanes, X_3.\lanes
+-	zip2		Y_3.\lanes,  Y_3.\lanes, X_3.\lanes
+-
+-	// XOR the encrypted/decrypted blocks with the tweaks calculated earlier
+-	eor		X_0.16b, TMP0.16b, TWEAKV0.16b
+-	eor		Y_0.16b, Y_0.16b,  TWEAKV1.16b
+-	eor		X_1.16b, TMP1.16b, TWEAKV2.16b
+-	eor		Y_1.16b, Y_1.16b,  TWEAKV3.16b
+-	eor		X_2.16b, TMP2.16b, TWEAKV4.16b
+-	eor		Y_2.16b, Y_2.16b,  TWEAKV5.16b
+-	eor		X_3.16b, TMP3.16b, TWEAKV6.16b
+-	eor		Y_3.16b, Y_3.16b,  TWEAKV7.16b
+-	mov		TWEAKV0.16b, TWEAKV_NEXT.16b
+-
+-	// Store the ciphertext in the destination buffer
+-	st1		{X_0.16b-Y_1.16b}, [DST], #64
+-	st1		{X_2.16b-Y_3.16b}, [DST], #64
+-
+-	// Continue if there are more 128-byte chunks remaining
+-	subs		NBYTES, NBYTES, #128
+-	bne		.Lnext_128bytes_\@
+-
+-	// Store the next tweak and return
+-.if \n == 64
+-	st1		{TWEAKV_NEXT.16b}, [TWEAK]
+-.else
+-	st1		{TWEAKV_NEXT.8b}, [TWEAK]
+-.endif
+-	ret
+-.endm
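When `\decrypting` is set, the macro prologue above repositions ROUND_KEYS at the last round key: stride 8 bytes for the 64-bit word size (`lsl #3`), 4 bytes for the 32-bit one (`lsl #2`). A sketch of that pointer arithmetic (the helper name is made up for illustration):

```c
#include <stddef.h>

/*
 * For decryption the expanded round keys are consumed in reverse, so
 * start at the last one: base + nrounds * stride - stride, where
 * stride is n/8 bytes (n = word size in bits).
 */
static const unsigned char *last_round_key(const unsigned char *base,
					   unsigned int nrounds,
					   unsigned int n)
{
	size_t stride = n / 8;

	return base + (size_t)nrounds * stride - stride;
}
```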
+-
+-ENTRY(speck128_xts_encrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=0
+-ENDPROC(speck128_xts_encrypt_neon)
+-
+-ENTRY(speck128_xts_decrypt_neon)
+-	_speck_xts_crypt	n=64, lanes=2d, decrypting=1
+-ENDPROC(speck128_xts_decrypt_neon)
+-
+-ENTRY(speck64_xts_encrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=0
+-ENDPROC(speck64_xts_encrypt_neon)
+-
+-ENTRY(speck64_xts_decrypt_neon)
+-	_speck_xts_crypt	n=32, lanes=4s, decrypting=1
+-ENDPROC(speck64_xts_decrypt_neon)
+diff --git a/arch/arm64/crypto/speck-neon-glue.c b/arch/arm64/crypto/speck-neon-glue.c
+deleted file mode 100644
+index 6e233aeb4ff4..000000000000
+--- a/arch/arm64/crypto/speck-neon-glue.c
++++ /dev/null
+@@ -1,282 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS
+- * (64-bit version; based on the 32-bit version)
+- *
+- * Copyright (c) 2018 Google, Inc
+- */
+-
+-#include <asm/hwcap.h>
+-#include <asm/neon.h>
+-#include <asm/simd.h>
+-#include <crypto/algapi.h>
+-#include <crypto/gf128mul.h>
+-#include <crypto/internal/skcipher.h>
+-#include <crypto/speck.h>
+-#include <crypto/xts.h>
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-
+-/* The assembly functions only handle multiples of 128 bytes */
+-#define SPECK_NEON_CHUNK_SIZE	128
+-
+-/* Speck128 */
+-
+-struct speck128_xts_tfm_ctx {
+-	struct speck128_tfm_ctx main_key;
+-	struct speck128_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds,
+-					  void *dst, const void *src,
+-					  unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *,
+-				     u8 *, const u8 *);
+-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *,
+-					  const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck128_xts_crypt(struct skcipher_request *req,
+-		     speck128_crypt_one_t crypt_one,
+-		     speck128_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	le128 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			le128_xor((le128 *)dst, (const le128 *)src, &tweak);
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			le128_xor((le128 *)dst, (const le128 *)dst, &tweak);
+-			gf128mul_x_ble(&tweak, &tweak);
+-
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
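The generic remainder loop above applies plain XEX per block: XOR the tweak in, run the block cipher, XOR the tweak out, then advance the tweak. A toy sketch of one such block, with a stand-in "cipher" (byte-wise NOT — purely illustrative, not Speck):

```c
#include <stddef.h>
#include <stdint.h>

#define BLK 16

/* Stand-in block cipher: byte-wise NOT (illustration only). */
static void toy_encrypt(uint8_t *blk)
{
	for (size_t i = 0; i < BLK; i++)
		blk[i] = (uint8_t)~blk[i];
}

/* One XTS block, as in the generic tail: dst = E(src ^ tweak) ^ tweak. */
static void xts_one_block(uint8_t *dst, const uint8_t *src,
			  const uint8_t *tweak)
{
	size_t i;

	for (i = 0; i < BLK; i++)
		dst[i] = src[i] ^ tweak[i];
	toy_encrypt(dst);
	for (i = 0; i < BLK; i++)
		dst[i] ^= tweak[i];
}
```

In the real code the two XORs are the `le128_xor()` calls and the cipher is `(*crypt_one)()`, with `gf128mul_x_ble()` advancing the tweak between blocks.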
+-
+-static int speck128_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_encrypt,
+-				    speck128_xts_encrypt_neon);
+-}
+-
+-static int speck128_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck128_xts_crypt(req, crypto_speck128_decrypt,
+-				    speck128_xts_decrypt_neon);
+-}
+-
+-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			       unsigned int keylen)
+-{
+-	struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck128_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
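The setkey routine above halves the supplied key after `xts_verify_key()`: the first half keys the main (data) cipher, the second half the tweak cipher. A minimal sketch of that split (hypothetical helper name):

```c
#include <stddef.h>

/*
 * XTS key layout: the first keylen/2 bytes key the data cipher and the
 * remaining bytes key the tweak cipher, as in speck128_xts_setkey().
 * Returns the split point.
 */
static size_t xts_split_key(const unsigned char *key, size_t keylen,
			    const unsigned char **main_key,
			    const unsigned char **tweak_key)
{
	size_t half = keylen / 2;

	*main_key = key;
	*tweak_key = key + half;
	return half;
}
```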
+-
+-/* Speck64 */
+-
+-struct speck64_xts_tfm_ctx {
+-	struct speck64_tfm_ctx main_key;
+-	struct speck64_tfm_ctx tweak_key;
+-};
+-
+-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds,
+-					 void *dst, const void *src,
+-					 unsigned int nbytes, void *tweak);
+-
+-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *,
+-				    u8 *, const u8 *);
+-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *,
+-					 const void *, unsigned int, void *);
+-
+-static __always_inline int
+-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one,
+-		    speck64_xts_crypt_many_t crypt_many)
+-{
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct skcipher_walk walk;
+-	__le64 tweak;
+-	int err;
+-
+-	err = skcipher_walk_virt(&walk, req, true);
+-
+-	crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv);
+-
+-	while (walk.nbytes > 0) {
+-		unsigned int nbytes = walk.nbytes;
+-		u8 *dst = walk.dst.virt.addr;
+-		const u8 *src = walk.src.virt.addr;
+-
+-		if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) {
+-			unsigned int count;
+-
+-			count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE);
+-			kernel_neon_begin();
+-			(*crypt_many)(ctx->main_key.round_keys,
+-				      ctx->main_key.nrounds,
+-				      dst, src, count, &tweak);
+-			kernel_neon_end();
+-			dst += count;
+-			src += count;
+-			nbytes -= count;
+-		}
+-
+-		/* Handle any remainder with generic code */
+-		while (nbytes >= sizeof(tweak)) {
+-			*(__le64 *)dst = *(__le64 *)src ^ tweak;
+-			(*crypt_one)(&ctx->main_key, dst, dst);
+-			*(__le64 *)dst ^= tweak;
+-			tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^
+-					    ((tweak & cpu_to_le64(1ULL << 63)) ?
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
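The tweak update in the loop above is multiplication by x in GF(2^64) with p(x) = x^64 + x^4 + x^3 + x + 1; the constant 0x1B encodes x^4 + x^3 + x + 1. In plain C on a host-order value (ignoring the `__le64` conversions):

```c
#include <stdint.h>

/* Multiply a 64-bit tweak by x modulo x^64 + x^4 + x^3 + x + 1. */
static uint64_t speck64_tweak_mul_x(uint64_t t)
{
	uint64_t carry = t >> 63;	/* bit 63 about to shift out */

	return (t << 1) ^ (carry ? 0x1B : 0);
}
```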
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_ASIMD))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index e4103b718a7c..b687c80a9c10 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -847,15 +847,29 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
+ }
+ 
+ static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_IDC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_IDC_SHIFT);
+ }
+ 
+ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
+-			  int __unused)
++			  int scope)
+ {
+-	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
++	u64 ctr;
++
++	if (scope == SCOPE_SYSTEM)
++		ctr = arm64_ftr_reg_ctrel0.sys_val;
++	else
++		ctr = read_cpuid_cachetype();
++
++	return ctr & BIT(CTR_DIC_SHIFT);
+ }
+ 
+ #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
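The cpufeature.c hunk above changes the IDC/DIC checks from a fixed system-wide read to a scope-dependent one. The shape of the check can be sketched as follows (the bit position and scope constants are assumptions for illustration, not taken from this patch):

```c
#include <stdbool.h>
#include <stdint.h>

#define CTR_IDC_SHIFT	28	/* CTR_EL0.IDC bit (assumed) */
#define SCOPE_SYSTEM	0
#define SCOPE_LOCAL_CPU	1

/*
 * Mirrors the patched has_cache_idc(): a system-wide check uses the
 * sanitised (lowest-common-denominator) register value, while a
 * per-CPU check reads this CPU's own CTR_EL0, so late-onlined CPUs
 * are tested individually.
 */
static bool cache_idc_set(int scope, uint64_t sanitised_ctr,
			  uint64_t this_cpu_ctr)
{
	uint64_t ctr = (scope == SCOPE_SYSTEM) ? sanitised_ctr
					       : this_cpu_ctr;

	return ctr & (1ULL << CTR_IDC_SHIFT);
}
```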
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 28ad8799406f..b0db91eefbde 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -599,7 +599,7 @@ el1_undef:
+ 	inherit_daif	pstate=x23, tmp=x2
+ 	mov	x0, sp
+ 	bl	do_undefinstr
+-	ASM_BUG()
++	kernel_exit 1
+ el1_dbg:
+ 	/*
+ 	 * Debug exception handling
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index d399d459397b..9fa3d69cceaa 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -310,10 +310,12 @@ static int call_undef_hook(struct pt_regs *regs)
+ 	int (*fn)(struct pt_regs *regs, u32 instr) = NULL;
+ 	void __user *pc = (void __user *)instruction_pointer(regs);
+ 
+-	if (!user_mode(regs))
+-		return 1;
+-
+-	if (compat_thumb_mode(regs)) {
++	if (!user_mode(regs)) {
++		__le32 instr_le;
++		if (probe_kernel_address((__force __le32 *)pc, instr_le))
++			goto exit;
++		instr = le32_to_cpu(instr_le);
++	} else if (compat_thumb_mode(regs)) {
+ 		/* 16-bit Thumb instruction */
+ 		__le16 instr_le;
+ 		if (get_user(instr_le, (__le16 __user *)pc))
+@@ -407,6 +409,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
+ 		return;
+ 
+ 	force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
++	BUG_ON(!user_mode(regs));
+ }
+ 
+ void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
+diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
+index 137710f4dac3..5105bb044aa5 100644
+--- a/arch/arm64/lib/Makefile
++++ b/arch/arm64/lib/Makefile
+@@ -12,7 +12,7 @@ lib-y		:= bitops.o clear_user.o delay.o copy_from_user.o	\
+ # when supported by the CPU. Result and argument registers are handled
+ # correctly, based on the function prototype.
+ lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
+-CFLAGS_atomic_ll_sc.o	:= -fcall-used-x0 -ffixed-x1 -ffixed-x2		\
++CFLAGS_atomic_ll_sc.o	:= -ffixed-x1 -ffixed-x2        		\
+ 		   -ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6		\
+ 		   -ffixed-x7 -fcall-saved-x8 -fcall-saved-x9		\
+ 		   -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12	\
+diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
+index a874e54404d1..4d4c76ab0bac 100644
+--- a/arch/m68k/configs/amiga_defconfig
++++ b/arch/m68k/configs/amiga_defconfig
+@@ -650,7 +650,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
+index 8ce39e23aa42..0fd006c19fa3 100644
+--- a/arch/m68k/configs/apollo_defconfig
++++ b/arch/m68k/configs/apollo_defconfig
+@@ -609,7 +609,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
+index 346c4e75edf8..9343e8d5cf60 100644
+--- a/arch/m68k/configs/atari_defconfig
++++ b/arch/m68k/configs/atari_defconfig
+@@ -631,7 +631,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
+index fca9c7aa71a3..a10fff6e7b50 100644
+--- a/arch/m68k/configs/bvme6000_defconfig
++++ b/arch/m68k/configs/bvme6000_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
+index f9eab174915c..db81d8ea9d03 100644
+--- a/arch/m68k/configs/hp300_defconfig
++++ b/arch/m68k/configs/hp300_defconfig
+@@ -611,7 +611,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
+index b52e597899eb..2546617a1147 100644
+--- a/arch/m68k/configs/mac_defconfig
++++ b/arch/m68k/configs/mac_defconfig
+@@ -633,7 +633,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
+index 2a84eeec5b02..dc9b0d885e8b 100644
+--- a/arch/m68k/configs/multi_defconfig
++++ b/arch/m68k/configs/multi_defconfig
+@@ -713,7 +713,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
+index 476e69994340..0d815a375ba0 100644
+--- a/arch/m68k/configs/mvme147_defconfig
++++ b/arch/m68k/configs/mvme147_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
+index 1477cda9146e..0cb8109b4c9e 100644
+--- a/arch/m68k/configs/mvme16x_defconfig
++++ b/arch/m68k/configs/mvme16x_defconfig
+@@ -601,7 +601,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
+index b3a543dc48a0..e91a1c28bba7 100644
+--- a/arch/m68k/configs/q40_defconfig
++++ b/arch/m68k/configs/q40_defconfig
+@@ -624,7 +624,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
+index d543ed5dfa96..3b2f0914c34f 100644
+--- a/arch/m68k/configs/sun3_defconfig
++++ b/arch/m68k/configs/sun3_defconfig
+@@ -602,7 +602,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
+index a67e54246023..e4365ef4f5ed 100644
+--- a/arch/m68k/configs/sun3x_defconfig
++++ b/arch/m68k/configs/sun3x_defconfig
+@@ -603,7 +603,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+index 75108ec669eb..6c79e8a16a26 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
+ void (*cvmx_override_ipd_port_setup) (int ipd_port);
+ 
+ /* Port count per interface */
+-static int interface_port_count[5];
++static int interface_port_count[9];
+ 
+ /**
+  * Return the number of interfaces the chip has. Each interface
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index fac26ce64b2f..e76e88222a4b 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -262,9 +262,11 @@
+ 	 nop
+ 
+ .Lsmall_fixup\@:
++	.set		reorder
+ 	PTR_SUBU	a2, t1, a0
++	PTR_ADDIU	a2, 1
+ 	jr		ra
+-	 PTR_ADDIU	a2, 1
++	.set		noreorder
+ 
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 1b4732e20137..843825a7e6e2 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -185,7 +185,7 @@
+ 	bv,n	0(%r3)
+ 	nop
+ 	.word	0		/* checksum (will be patched) */
+-	.word	PA(os_hpmc)	/* address of handler */
++	.word	0		/* address of handler */
+ 	.word	0		/* length of handler */
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
+index 781c3b9a3e46..fde654115564 100644
+--- a/arch/parisc/kernel/hpmc.S
++++ b/arch/parisc/kernel/hpmc.S
+@@ -85,7 +85,7 @@ END(hpmc_pim_data)
+ 
+ 	.import intr_save, code
+ 	.align 16
+-ENTRY_CFI(os_hpmc)
++ENTRY(os_hpmc)
+ .os_hpmc:
+ 
+ 	/*
+@@ -302,7 +302,6 @@ os_hpmc_6:
+ 	b .
+ 	nop
+ 	.align 16	/* make function length multiple of 16 bytes */
+-ENDPROC_CFI(os_hpmc)
+ .os_hpmc_end:
+ 
+ 
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 4309ad31a874..2cb35e1e0099 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -827,7 +827,8 @@ void __init initialize_ivt(const void *iva)
+ 	 *    the Length/4 words starting at Address is zero.
+ 	 */
+ 
+-	/* Compute Checksum for HPMC handler */
++	/* Setup IVA and compute checksum for HPMC handler */
++	ivap[6] = (u32)__pa(os_hpmc);
+ 	length = os_hpmc_size;
+ 	ivap[7] = length;
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 2607d2d33405..db6cd857c8c0 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -495,12 +495,8 @@ static void __init map_pages(unsigned long start_vaddr,
+ 						pte = pte_mkhuge(pte);
+ 				}
+ 
+-				if (address >= end_paddr) {
+-					if (force)
+-						break;
+-					else
+-						pte_val(pte) = 0;
+-				}
++				if (address >= end_paddr)
++					break;
+ 
+ 				set_pte(pg_table, pte);
+ 
+diff --git a/arch/powerpc/include/asm/mpic.h b/arch/powerpc/include/asm/mpic.h
+index fad8ddd697ac..0abf2e7fd222 100644
+--- a/arch/powerpc/include/asm/mpic.h
++++ b/arch/powerpc/include/asm/mpic.h
+@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
+ #define	MPIC_REGSET_TSI108		MPIC_REGSET(1)	/* Tsi108/109 PIC */
+ 
+ /* Get the version of primary MPIC */
++#ifdef CONFIG_MPIC
+ extern u32 fsl_mpic_primary_get_version(void);
++#else
++static inline u32 fsl_mpic_primary_get_version(void)
++{
++	return 0;
++}
++#endif
+ 
+ /* Allocate the controller structure and setup the linux irq descs
+  * for the range if interrupts passed in. No HW initialization is
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index 38c5b4764bfe..a74ffd5ad15c 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -97,6 +97,13 @@ static void flush_and_reload_slb(void)
+ 
+ static void flush_erat(void)
+ {
++#ifdef CONFIG_PPC_BOOK3S_64
++	if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
++		flush_and_reload_slb();
++		return;
++	}
++#endif
++	/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
+ 	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 225bc5f91049..03dd2f9d60cf 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -242,13 +242,19 @@ static void cpu_ready_for_interrupts(void)
+ 	}
+ 
+ 	/*
+-	 * Fixup HFSCR:TM based on CPU features. The bit is set by our
+-	 * early asm init because at that point we haven't updated our
+-	 * CPU features from firmware and device-tree. Here we have,
+-	 * so let's do it.
++	 * Set HFSCR:TM based on CPU features:
++	 * In the special case of TM no suspend (P9N DD2.1), Linux is
++	 * told TM is off via the dt-ftrs but told to (partially) use
++	 * it via OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM]
++	 * will be off from dt-ftrs but we need to turn it on for the
++	 * no suspend case.
+ 	 */
+-	if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
+-		mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	if (cpu_has_feature(CPU_FTR_HVMODE)) {
++		if (cpu_has_feature(CPU_FTR_TM_COMP))
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);
++		else
++			mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++	}
+ 
+ 	/* Set IR and DR in PACA MSR */
+ 	get_paca()->kernel_msr = MSR_KERNEL;
+diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
+index 1d049c78c82a..2e45e5fbad5b 100644
+--- a/arch/powerpc/mm/hash_native_64.c
++++ b/arch/powerpc/mm/hash_native_64.c
+@@ -115,6 +115,8 @@ static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
+ 	tlbiel_hash_set_isa300(0, is, 0, 2, 1);
+ 
+ 	asm volatile("ptesync": : :"memory");
++
++	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ void hash__tlbiel_all(unsigned int action)
+@@ -140,8 +142,6 @@ void hash__tlbiel_all(unsigned int action)
+ 		tlbiel_all_isa206(POWER7_TLB_SETS, is);
+ 	else
+ 		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
+-
+-	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ }
+ 
+ static inline unsigned long  ___tlbie(unsigned long vpn, int psize,
+diff --git a/arch/s390/defconfig b/arch/s390/defconfig
+index f40600eb1762..5134c71a4937 100644
+--- a/arch/s390/defconfig
++++ b/arch/s390/defconfig
+@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_DEFLATE=m
+diff --git a/arch/s390/kernel/sthyi.c b/arch/s390/kernel/sthyi.c
+index 0859cde36f75..888cc2f166db 100644
+--- a/arch/s390/kernel/sthyi.c
++++ b/arch/s390/kernel/sthyi.c
+@@ -183,17 +183,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
+ static void fill_stsi_mac(struct sthyi_sctns *sctns,
+ 			  struct sysinfo_1_1_1 *sysinfo)
+ {
++	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
++	if (*(u64 *)sctns->mac.infmname != 0)
++		sctns->mac.infmval1 |= MAC_NAME_VLD;
++
+ 	if (stsi(sysinfo, 1, 1, 1))
+ 		return;
+ 
+-	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
+-
+ 	memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
+ 	memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
+ 	memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
+ 	memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
+ 
+-	sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
++	sctns->mac.infmval1 |= MAC_ID_VLD;
+ }
+ 
+ static void fill_stsi_par(struct sthyi_sctns *sctns,
+diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
+index d4e6cd4577e5..bf0e82400358 100644
+--- a/arch/x86/boot/tools/build.c
++++ b/arch/x86/boot/tools/build.c
+@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
+ 		die("Unable to mmap '%s': %m", argv[2]);
+ 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+ 	sys_size = (sz + 15 + 4) / 16;
++#ifdef CONFIG_EFI_STUB
++	/*
++	 * COFF requires minimum 32-byte alignment of sections, and
++	 * adding a signature is problematic without that alignment.
++	 */
++	sys_size = (sys_size + 1) & ~1;
++#endif
+ 
+ 	/* Patch the setup code with the appropriate size parameters */
+ 	buf[0x1f1] = setup_sectors-1;
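The CONFIG_EFI_STUB hunk above rounds the 16-byte paragraph count up to an even number, i.e. 32-byte section alignment as COFF requires before a signature can be attached. As a one-liner:

```c
/*
 * sys_size counts 16-byte paragraphs; rounding it up to even yields
 * the 32-byte alignment COFF needs, per the build.c hunk.
 */
static unsigned int efi_align_paragraphs(unsigned int sys_size)
{
	return (sys_size + 1) & ~1u;
}
```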
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index acbe7e8336d8..e4b78f962874 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 	/* Linearize assoc, if not already linear */
+ 	if (req->src->length >= assoclen && req->src->length &&
+ 		(!PageHighMem(sg_page(req->src)) ||
+-			req->src->offset + req->src->length < PAGE_SIZE)) {
++			req->src->offset + req->src->length <= PAGE_SIZE)) {
+ 		scatterwalk_start(&assoc_sg_walk, req->src);
+ 		assoc = scatterwalk_map(&assoc_sg_walk);
+ 	} else {
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 64aaa3f5f36c..c8ac84e90d0f 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -220,6 +220,7 @@
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+ #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
++#define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 0722b7745382..ccc23203b327 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -176,6 +176,7 @@ enum {
+ 
+ #define DR6_BD		(1 << 13)
+ #define DR6_BS		(1 << 14)
++#define DR6_BT		(1 << 15)
+ #define DR6_RTM		(1 << 16)
+ #define DR6_FIXED_1	0xfffe0ff0
+ #define DR6_INIT	0xffff0ff0
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index f6f6c63da62f..e7c8086e570e 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -215,6 +215,7 @@ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_RETPOLINE_GENERIC,
+ 	SPECTRE_V2_RETPOLINE_AMD,
+ 	SPECTRE_V2_IBRS,
++	SPECTRE_V2_IBRS_ENHANCED,
+ };
+ 
+ /* The Speculative Store Bypass disable variants */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 0af97e51e609..6f293d9a0b07 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -469,6 +469,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
+  */
+ static inline void __flush_tlb_all(void)
+ {
++	/*
++	 * Catch callers which run with preemption enabled and have the PGE
++	 * feature, as they would not trigger the warning in __native_flush_tlb().
++	 */
++	VM_WARN_ON_ONCE(preemptible());
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		__flush_tlb_global();
+ 	} else {
+diff --git a/arch/x86/kernel/check.c b/arch/x86/kernel/check.c
+index 33399426793e..cc8258a5378b 100644
+--- a/arch/x86/kernel/check.c
++++ b/arch/x86/kernel/check.c
+@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_period config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
+ 	char *end;
+ 	unsigned size;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_size config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size = memparse(arg, &end);
+ 
+ 	if (*end == '\0')
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 4891a621a752..91e5e086606c 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -35,12 +35,10 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ 
+-/*
+- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+- * writes to SPEC_CTRL contain whatever reserved bits have been set.
+- */
+-u64 __ro_after_init x86_spec_ctrl_base;
++/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
++u64 x86_spec_ctrl_base;
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
++static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /*
+  * The vendor and possibly platform specific bits which can be modified in
+@@ -141,6 +139,7 @@ static const char *spectre_v2_strings[] = {
+ 	[SPECTRE_V2_RETPOLINE_MINIMAL_AMD]	= "Vulnerable: Minimal AMD ASM retpoline",
+ 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+ 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
++	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+ };
+ 
+ #undef pr_fmt
+@@ -324,6 +323,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static bool stibp_needed(void)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_NONE)
++		return false;
++
++	if (!boot_cpu_has(X86_FEATURE_STIBP))
++		return false;
++
++	return true;
++}
++
++static void update_stibp_msr(void *info)
++{
++	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++}
++
++void arch_smt_update(void)
++{
++	u64 mask;
++
++	if (!stibp_needed())
++		return;
++
++	mutex_lock(&spec_ctrl_mutex);
++	mask = x86_spec_ctrl_base;
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		mask |= SPEC_CTRL_STIBP;
++	else
++		mask &= ~SPEC_CTRL_STIBP;
++
++	if (mask != x86_spec_ctrl_base) {
++		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
++				cpu_smt_control == CPU_SMT_ENABLED ?
++				"Enabling" : "Disabling");
++		x86_spec_ctrl_base = mask;
++		on_each_cpu(update_stibp_msr, NULL, 1);
++	}
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -343,6 +382,13 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++			mode = SPECTRE_V2_IBRS_ENHANCED;
++			/* Force it so VMEXIT will restore correctly */
++			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++			goto specv2_set_mode;
++		}
+ 		if (IS_ENABLED(CONFIG_RETPOLINE))
+ 			goto retpoline_auto;
+ 		break;
+@@ -380,6 +426,7 @@ retpoline_auto:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ 	}
+ 
++specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -402,12 +449,22 @@ retpoline_auto:
+ 
+ 	/*
+ 	 * Retpoline means the kernel is safe because it has no indirect
+-	 * branches. But firmware isn't, so use IBRS to protect that.
++	 * branches. Enhanced IBRS protects firmware too, so enable restricted
++	 * speculation around firmware calls only when Enhanced IBRS isn't
++	 * supported.
++	 *
++	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
++	 * the user might select retpoline on the kernel command line and, if
++	 * the CPU supports Enhanced IBRS, the kernel might unintentionally not
++	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS)) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
++
++	/* Enable STIBP if appropriate */
++	arch_smt_update();
+ }
+ 
+ #undef pr_fmt
+@@ -798,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
++	int ret;
++
+ 	if (!boot_cpu_has_bug(bug))
+ 		return sprintf(buf, "Not affected\n");
+ 
+@@ -815,10 +874,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+ 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ 			       spectre_v2_module_string());
++		return ret;
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 1ee8ea36af30..79561bfcfa87 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1015,6 +1015,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
++	if (ia32_cap & ARCH_CAP_IBRS_ALL)
++		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
++
+ 	if (x86_match_cpu(cpu_no_meltdown))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+index 749856a2e736..bc3801985d73 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+@@ -2032,6 +2032,13 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
+ {
+ 	if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
+ 		seq_puts(seq, ",cdp");
++
++	if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
++		seq_puts(seq, ",cdpl2");
++
++	if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
++		seq_puts(seq, ",mba_MBps");
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 23f1691670b6..61a949d84dfa 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		 * thread's fpu state, reconstruct fxstate from the fsave
+ 		 * header. Validate and sanitize the copied state.
+ 		 */
+-		struct fpu *fpu = &tsk->thread.fpu;
+ 		struct user_i387_ia32_struct env;
+ 		int err = 0;
+ 
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 203d398802a3..1467f966cfec 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -179,7 +179,7 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ 		opt_pre_handler(&op->kp, regs);
+ 		__this_cpu_write(current_kprobe, NULL);
+ 	}
+-	preempt_enable_no_resched();
++	preempt_enable();
+ }
+ NOKPROBE_SYMBOL(optimized_callback);
+ 
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9efe130ea2e6..9fcc3ec3ab78 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3160,10 +3160,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
+ 		}
+ 	} else {
+ 		if (vmcs12->exception_bitmap & (1u << nr)) {
+-			if (nr == DB_VECTOR)
++			if (nr == DB_VECTOR) {
+ 				*exit_qual = vcpu->arch.dr6;
+-			else
++				*exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
++				*exit_qual ^= DR6_RTM;
++			} else {
+ 				*exit_qual = 0;
++			}
+ 			return 1;
+ 		}
+ 	}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 8d6c34fe49be..800de88208d7 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -2063,9 +2063,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ 
+ 	/*
+ 	 * We should perform an IPI and flush all tlbs,
+-	 * but that can deadlock->flush only current cpu:
++	 * but that can deadlock, so flush only the current cpu.
++	 * Preemption needs to be disabled around __flush_tlb_all() due to
++	 * CR3 reload in __native_flush_tlb().
+ 	 */
++	preempt_disable();
+ 	__flush_tlb_all();
++	preempt_enable();
+ 
+ 	arch_flush_lazy_mmu_mode();
+ }
+diff --git a/arch/x86/platform/olpc/olpc-xo1-rtc.c b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+index a2b4efddd61a..8e7ddd7e313a 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-rtc.c
++++ b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+@@ -16,6 +16,7 @@
+ 
+ #include <asm/msr.h>
+ #include <asm/olpc.h>
++#include <asm/x86_init.h>
+ 
+ static void rtc_wake_on(struct device *dev)
+ {
+@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
+ 	if (r)
+ 		return r;
+ 
++	x86_platform.legacy.rtc = 0;
++
+ 	device_init_wakeup(&xo1_rtc_device.dev, 1);
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index c85d1a88f476..f7f77023288a 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -75,7 +75,7 @@ static void __init init_pvh_bootparams(void)
+ 	 * Version 2.12 supports Xen entry point but we will use default x86/PC
+ 	 * environment (i.e. hardware_subarch 0).
+ 	 */
+-	pvh_bootparams.hdr.version = 0x212;
++	pvh_bootparams.hdr.version = (2 << 8) | 12;
+ 	pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
+ 
+ 	x86_init.acpi.get_root_pointer = pvh_get_root_pointer;
+diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
+index 33a783c77d96..184b36922397 100644
+--- a/arch/x86/xen/platform-pci-unplug.c
++++ b/arch/x86/xen/platform-pci-unplug.c
+@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
+ {
+ 	int r;
+ 
++	/* PVH guests don't have emulated devices. */
++	if (xen_pvh_domain())
++		return;
++
+ 	/* user explicitly requested no unplug */
+ 	if (xen_emul_unplug & XEN_UNPLUG_NEVER)
+ 		return;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index cd97a62394e7..a970a2aa4456 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -9,6 +9,7 @@
+ #include <linux/log2.h>
+ #include <linux/gfp.h>
+ #include <linux/slab.h>
++#include <linux/atomic.h>
+ 
+ #include <asm/paravirt.h>
+ #include <asm/qspinlock.h>
+@@ -21,6 +22,7 @@
+ 
+ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+ static DEFINE_PER_CPU(char *, irq_name);
++static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
+ static bool xen_pvspin = true;
+ 
+ static void xen_qlock_kick(int cpu)
+@@ -40,33 +42,24 @@ static void xen_qlock_kick(int cpu)
+ static void xen_qlock_wait(u8 *byte, u8 val)
+ {
+ 	int irq = __this_cpu_read(lock_kicker_irq);
++	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
+ 
+ 	/* If kicker interrupts not initialized yet, just spin */
+-	if (irq == -1)
++	if (irq == -1 || in_nmi())
+ 		return;
+ 
+-	/* clear pending */
+-	xen_clear_irq_pending(irq);
+-	barrier();
+-
+-	/*
+-	 * We check the byte value after clearing pending IRQ to make sure
+-	 * that we won't miss a wakeup event because of the clearing.
+-	 *
+-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
+-	 * So it is effectively a memory barrier for x86.
+-	 */
+-	if (READ_ONCE(*byte) != val)
+-		return;
++	/* Detect reentry. */
++	atomic_inc(nest_cnt);
+ 
+-	/*
+-	 * If an interrupt happens here, it will leave the wakeup irq
+-	 * pending, which will cause xen_poll_irq() to return
+-	 * immediately.
+-	 */
++	/* If the irq is already pending and this is not a nested call, clear it. */
++	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
++		xen_clear_irq_pending(irq);
++	} else if (READ_ONCE(*byte) == val) {
++		/* Block until irq becomes pending (or a spurious wakeup) */
++		xen_poll_irq(irq);
++	}
+ 
+-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+-	xen_poll_irq(irq);
++	atomic_dec(nest_cnt);
+ }
+ 
+ static irqreturn_t dummy_handler(int irq, void *dev_id)
+diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
+index ca2d3b2bf2af..58722a052f9c 100644
+--- a/arch/x86/xen/xen-pvh.S
++++ b/arch/x86/xen/xen-pvh.S
+@@ -181,7 +181,7 @@ canary:
+ 	.fill 48, 1, 0
+ 
+ early_stack:
+-	.fill 256, 1, 0
++	.fill BOOT_STACK_SIZE, 1, 0
+ early_stack_end:
+ 
+ 	ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 4498c43245e2..681498e5d40a 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1178,10 +1178,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
+ 	st = bfq_entity_service_tree(entity);
+ 	is_in_service = entity == sd->in_service_entity;
+ 
+-	if (is_in_service) {
+-		bfq_calc_finish(entity, entity->service);
++	bfq_calc_finish(entity, entity->service);
++
++	if (is_in_service)
+ 		sd->in_service_entity = NULL;
+-	}
++	else
++		/*
++		 * Non-in-service entity: nobody will take care of
++		 * resetting its service counter on expiration. Do it
++		 * now.
++		 */
++		entity->service = 0;
+ 
+ 	if (entity->tree == &st->active)
+ 		bfq_active_extract(st, entity);
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index d1b9dd03da25..1f196cf0aa5d 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -29,9 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	struct bio *bio = *biop;
+-	unsigned int granularity;
+ 	unsigned int op;
+-	int alignment;
+ 	sector_t bs_mask;
+ 
+ 	if (!q)
+@@ -54,38 +52,15 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 	if ((sector | nr_sects) & bs_mask)
+ 		return -EINVAL;
+ 
+-	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+-	granularity = max(q->limits.discard_granularity >> 9, 1U);
+-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
+-
+ 	while (nr_sects) {
+-		unsigned int req_sects;
+-		sector_t end_sect, tmp;
++		unsigned int req_sects = nr_sects;
++		sector_t end_sect;
+ 
+-		/*
+-		 * Issue in chunks of the user defined max discard setting,
+-		 * ensuring that bi_size doesn't overflow
+-		 */
+-		req_sects = min_t(sector_t, nr_sects,
+-					q->limits.max_discard_sectors);
+ 		if (!req_sects)
+ 			goto fail;
+-		if (req_sects > UINT_MAX >> 9)
+-			req_sects = UINT_MAX >> 9;
++		req_sects = min(req_sects, bio_allowed_max_sectors(q));
+ 
+-		/*
+-		 * If splitting a request, and the next starting sector would be
+-		 * misaligned, stop the discard at the previous aligned sector.
+-		 */
+ 		end_sect = sector + req_sects;
+-		tmp = end_sect;
+-		if (req_sects < nr_sects &&
+-		    sector_div(tmp, granularity) != alignment) {
+-			end_sect = end_sect - alignment;
+-			sector_div(end_sect, granularity);
+-			end_sect = end_sect * granularity + alignment;
+-			req_sects = end_sect - sector;
+-		}
+ 
+ 		bio = next_bio(bio, 0, gfp_mask);
+ 		bio->bi_iter.bi_sector = sector;
+@@ -186,7 +161,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
+ 		return -EOPNOTSUPP;
+ 
+ 	/* Ensure that max_write_same_sectors doesn't overflow bi_size */
+-	max_write_same_sectors = UINT_MAX >> 9;
++	max_write_same_sectors = bio_allowed_max_sectors(q);
+ 
+ 	while (nr_sects) {
+ 		bio = next_bio(bio, 1, gfp_mask);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index aaec38cc37b8..2e042190a4f1 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -27,7 +27,8 @@ static struct bio *blk_bio_discard_split(struct request_queue *q,
+ 	/* Zero-sector (unknown) and one-sector granularities are the same.  */
+ 	granularity = max(q->limits.discard_granularity >> 9, 1U);
+ 
+-	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
++	max_discard_sectors = min(q->limits.max_discard_sectors,
++			bio_allowed_max_sectors(q));
+ 	max_discard_sectors -= max_discard_sectors % granularity;
+ 
+ 	if (unlikely(!max_discard_sectors)) {
+diff --git a/block/blk.h b/block/blk.h
+index a8f0f7986cfd..a26a8fb257a4 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -326,6 +326,16 @@ static inline unsigned long blk_rq_deadline(struct request *rq)
+ 	return rq->__deadline & ~0x1UL;
+ }
+ 
++/*
++ * The max size one bio can handle is UINT_MAX because bvec_iter.bi_size
++ * is defined as 'unsigned int', and it has to be aligned to the logical
++ * block size, which is the minimum unit accepted by the hardware.
++ */
++static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
++{
++	return round_down(UINT_MAX, queue_logical_block_size(q)) >> 9;
++}
++
+ /*
+  * Internal io_context interface
+  */
+diff --git a/block/bounce.c b/block/bounce.c
+index fd31347b7836..5849535296b9 100644
+--- a/block/bounce.c
++++ b/block/bounce.c
+@@ -31,6 +31,24 @@
+ static struct bio_set bounce_bio_set, bounce_bio_split;
+ static mempool_t page_pool, isa_page_pool;
+ 
++static void init_bounce_bioset(void)
++{
++	static bool bounce_bs_setup;
++	int ret;
++
++	if (bounce_bs_setup)
++		return;
++
++	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
++	BUG_ON(ret);
++	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
++		BUG_ON(1);
++
++	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
++	BUG_ON(ret);
++	bounce_bs_setup = true;
++}
++
+ #if defined(CONFIG_HIGHMEM)
+ static __init int init_emergency_pool(void)
+ {
+@@ -44,14 +62,7 @@ static __init int init_emergency_pool(void)
+ 	BUG_ON(ret);
+ 	pr_info("pool size: %d pages\n", POOL_SIZE);
+ 
+-	ret = bioset_init(&bounce_bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
+-	BUG_ON(ret);
+-	if (bioset_integrity_create(&bounce_bio_set, BIO_POOL_SIZE))
+-		BUG_ON(1);
+-
+-	ret = bioset_init(&bounce_bio_split, BIO_POOL_SIZE, 0, 0);
+-	BUG_ON(ret);
+-
++	init_bounce_bioset();
+ 	return 0;
+ }
+ 
+@@ -86,6 +97,8 @@ static void *mempool_alloc_pages_isa(gfp_t gfp_mask, void *data)
+ 	return mempool_alloc_pages(gfp_mask | GFP_DMA, data);
+ }
+ 
++static DEFINE_MUTEX(isa_mutex);
++
+ /*
+  * gets called "every" time someone init's a queue with BLK_BOUNCE_ISA
+  * as the max address, so check if the pool has already been created.
+@@ -94,14 +107,20 @@ int init_emergency_isa_pool(void)
+ {
+ 	int ret;
+ 
+-	if (mempool_initialized(&isa_page_pool))
++	mutex_lock(&isa_mutex);
++
++	if (mempool_initialized(&isa_page_pool)) {
++		mutex_unlock(&isa_mutex);
+ 		return 0;
++	}
+ 
+ 	ret = mempool_init(&isa_page_pool, ISA_POOL_SIZE, mempool_alloc_pages_isa,
+ 			   mempool_free_pages, (void *) 0);
+ 	BUG_ON(ret);
+ 
+ 	pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE);
++	init_bounce_bioset();
++	mutex_unlock(&isa_mutex);
+ 	return 0;
+ }
+ 
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index f3e40ac56d93..59e32623a7ce 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
+ 
+ 	  If unsure, say N.
+ 
+-config CRYPTO_SPECK
+-	tristate "Speck cipher algorithm"
+-	select CRYPTO_ALGAPI
+-	help
+-	  Speck is a lightweight block cipher that is tuned for optimal
+-	  performance in software (rather than hardware).
+-
+-	  Speck may not be as secure as AES, and should only be used on systems
+-	  where AES is not fast enough.
+-
+-	  See also: <https://eprint.iacr.org/2013/404.pdf>
+-
+-	  If unsure, say N.
+-
+ config CRYPTO_TEA
+ 	tristate "TEA, XTEA and XETA cipher algorithms"
+ 	select CRYPTO_ALGAPI
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 6d1d40eeb964..f6a234d08882 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
+ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
+ obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
+ obj-$(CONFIG_CRYPTO_SEED) += seed.o
+-obj-$(CONFIG_CRYPTO_SPECK) += speck.o
+ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
+ obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
+ obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
+diff --git a/crypto/aegis.h b/crypto/aegis.h
+index f1c6900ddb80..405e025fc906 100644
+--- a/crypto/aegis.h
++++ b/crypto/aegis.h
+@@ -21,7 +21,7 @@
+ 
+ union aegis_block {
+ 	__le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)];
+-	u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)];
++	__le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)];
+ 	u8 bytes[AEGIS_BLOCK_SIZE];
+ };
+ 
+@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union aegis_block *dst,
+ 				const union aegis_block *src,
+ 				const union aegis_block *key)
+ {
+-	u32 *d = dst->words32;
+ 	const u8  *s  = src->bytes;
+-	const u32 *k  = key->words32;
+ 	const u32 *t0 = crypto_ft_tab[0];
+ 	const u32 *t1 = crypto_ft_tab[1];
+ 	const u32 *t2 = crypto_ft_tab[2];
+ 	const u32 *t3 = crypto_ft_tab[3];
+ 	u32 d0, d1, d2, d3;
+ 
+-	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0];
+-	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1];
+-	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2];
+-	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3];
++	d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]];
++	d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]];
++	d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]];
++	d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]];
+ 
+-	d[0] = d0;
+-	d[1] = d1;
+-	d[2] = d2;
+-	d[3] = d3;
++	dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0];
++	dst->words32[1] = cpu_to_le32(d1) ^ key->words32[1];
++	dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2];
++	dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3];
+ }
+ 
+ #endif /* _CRYPTO_AEGIS_H */
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 954a7064a179..7657bebd060c 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -143,7 +143,12 @@ static inline int get_index128(be128 *block)
+ 		return x + ffz(val);
+ 	}
+ 
+-	return x;
++	/*
++	 * If we get here, then x == 128 and we are incrementing the counter
++	 * from all ones to all zeros. This means we must return index 127, i.e.
++	 * the one corresponding to key2*{ 1,...,1 }.
++	 */
++	return 127;
+ }
+ 
+ static int post_crypt(struct skcipher_request *req)
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 6180b2557836..8f1952d96ebd 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struct morus1280_state *state,
+ 				   struct morus1280_block *tag_xor,
+ 				   u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+ 	struct morus1280_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le64(assocbits);
+-	tmp.words[1] = cpu_to_le64(cryptbits);
++	tmp.words[0] = assoclen * 8;
++	tmp.words[1] = cryptlen * 8;
+ 	tmp.words[2] = 0;
+ 	tmp.words[3] = 0;
+ 
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index 5eede3749e64..6ccb901934c3 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct morus640_state *state,
+ 				  struct morus640_block *tag_xor,
+ 				  u64 assoclen, u64 cryptlen)
+ {
+-	u64 assocbits = assoclen * 8;
+-	u64 cryptbits = cryptlen * 8;
+-
+-	u32 assocbits_lo = (u32)assocbits;
+-	u32 assocbits_hi = (u32)(assocbits >> 32);
+-	u32 cryptbits_lo = (u32)cryptbits;
+-	u32 cryptbits_hi = (u32)(cryptbits >> 32);
+-
+ 	struct morus640_block tmp;
+ 	unsigned int i;
+ 
+-	tmp.words[0] = cpu_to_le32(assocbits_lo);
+-	tmp.words[1] = cpu_to_le32(assocbits_hi);
+-	tmp.words[2] = cpu_to_le32(cryptbits_lo);
+-	tmp.words[3] = cpu_to_le32(cryptbits_hi);
++	tmp.words[0] = lower_32_bits(assoclen * 8);
++	tmp.words[1] = upper_32_bits(assoclen * 8);
++	tmp.words[2] = lower_32_bits(cryptlen * 8);
++	tmp.words[3] = upper_32_bits(cryptlen * 8);
+ 
+ 	for (i = 0; i < MORUS_BLOCK_WORDS; i++)
+ 		state->s[4].words[i] ^= state->s[0].words[i];
+diff --git a/crypto/speck.c b/crypto/speck.c
+deleted file mode 100644
+index 58aa9f7f91f7..000000000000
+--- a/crypto/speck.c
++++ /dev/null
+@@ -1,307 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Speck: a lightweight block cipher
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Speck has 10 variants, including 5 block sizes.  For now we only implement
+- * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
+- * Speck64/128.   Speck${B}/${K} denotes the variant with a block size of B bits
+- * and a key size of K bits.  The Speck128 variants are believed to be the most
+- * secure variants, and they use the same block size and key sizes as AES.  The
+- * Speck64 variants are less secure, but on 32-bit processors are usually
+- * faster.  The remaining variants (Speck32, Speck48, and Speck96) are even less
+- * secure and/or not as well suited for implementation on either 32-bit or
+- * 64-bit processors, so are omitted.
+- *
+- * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * In a correspondence, the Speck designers have also clarified that the words
+- * should be interpreted in little-endian format, and the words should be
+- * ordered such that the first word of each block is 'y' rather than 'x', and
+- * the first key word (rather than the last) becomes the first round key.
+- */
+-
+-#include <asm/unaligned.h>
+-#include <crypto/speck.h>
+-#include <linux/bitops.h>
+-#include <linux/crypto.h>
+-#include <linux/init.h>
+-#include <linux/module.h>
+-
+-/* Speck128 */
+-
+-static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
+-{
+-	*x = ror64(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol64(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
+-{
+-	*y ^= *x;
+-	*y = ror64(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol64(*x, 8);
+-}
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck128_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
+-
+-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck128_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
+-
+-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	u64 l[3];
+-	u64 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK128_128_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		ctx->nrounds = SPECK128_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[0], &k, i);
+-		}
+-		break;
+-	case SPECK128_192_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		ctx->nrounds = SPECK128_192_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK128_256_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		l[2] = get_unaligned_le64(key + 24);
+-		ctx->nrounds = SPECK128_256_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
+-
+-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Speck64 */
+-
+-static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
+-{
+-	*x = ror32(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol32(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
+-{
+-	*y ^= *x;
+-	*y = ror32(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol32(*x, 8);
+-}
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck64_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
+-
+-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck64_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
+-
+-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	u32 l[3];
+-	u32 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK64_96_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		ctx->nrounds = SPECK64_96_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK64_128_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		l[2] = get_unaligned_le32(key + 12);
+-		ctx->nrounds = SPECK64_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
+-
+-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Algorithm definitions */
+-
+-static struct crypto_alg speck_algs[] = {
+-	{
+-		.cra_name		= "speck128",
+-		.cra_driver_name	= "speck128-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK128_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck128_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK128_128_KEY_SIZE,
+-				.cia_max_keysize	= SPECK128_256_KEY_SIZE,
+-				.cia_setkey		= speck128_setkey,
+-				.cia_encrypt		= speck128_encrypt,
+-				.cia_decrypt		= speck128_decrypt
+-			}
+-		}
+-	}, {
+-		.cra_name		= "speck64",
+-		.cra_driver_name	= "speck64-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK64_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck64_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK64_96_KEY_SIZE,
+-				.cia_max_keysize	= SPECK64_128_KEY_SIZE,
+-				.cia_setkey		= speck64_setkey,
+-				.cia_encrypt		= speck64_encrypt,
+-				.cia_decrypt		= speck64_decrypt
+-			}
+-		}
+-	}
+-};
+-
+-static int __init speck_module_init(void)
+-{
+-	return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_module_exit(void)
+-{
+-	crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_module_init);
+-module_exit(speck_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (generic)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("speck128");
+-MODULE_ALIAS_CRYPTO("speck128-generic");
+-MODULE_ALIAS_CRYPTO("speck64");
+-MODULE_ALIAS_CRYPTO("speck64-generic");
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index d5bcdd905007..ee4f2a175bda 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -1097,6 +1097,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
+ 			break;
+ 		}
+ 
++		if (speed[i].klen)
++			crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
++
+ 		pr_info("test%3u "
+ 			"(%5u byte blocks,%5u bytes per update,%4u updates): ",
+ 			i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 11e45352fd0b..1ed03bf6a977 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -3000,18 +3000,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(sm4_tv_template)
+ 		}
+-	}, {
+-		.alg = "ecb(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_tv_template)
+-		}
+-	}, {
+-		.alg = "ecb(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_tv_template)
+-		}
+ 	}, {
+ 		.alg = "ecb(tea)",
+ 		.test = alg_test_skcipher,
+@@ -3539,18 +3527,6 @@ static const struct alg_test_desc alg_test_descs[] = {
+ 		.suite = {
+ 			.cipher = __VECS(serpent_xts_tv_template)
+ 		}
+-	}, {
+-		.alg = "xts(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_xts_tv_template)
+-		}
+-	}, {
+-		.alg = "xts(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_xts_tv_template)
+-		}
+ 	}, {
+ 		.alg = "xts(twofish)",
+ 		.test = alg_test_skcipher,
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index b950aa234e43..36572c665026 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -10141,744 +10141,6 @@ static const struct cipher_testvec sm4_tv_template[] = {
+ 	}
+ };
+ 
+-/*
+- * Speck test vectors taken from the original paper:
+- * "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * Note that the paper does not make byte and word order clear.  But it was
+- * confirmed with the authors that the intended orders are little endian byte
+- * order and (y, x) word order.  Equivalently, the printed test vectors, when
+- * looking at only the bytes (ignoring the whitespace that divides them into
+- * words), are backwards: the left-most byte is actually the one with the
+- * highest memory address, while the right-most byte is actually the one with
+- * the lowest memory address.
+- */
+-
+-static const struct cipher_testvec speck128_tv_template[] = {
+-	{ /* Speck128/128 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+-		.klen	= 16,
+-		.ptext	= "\x20\x6d\x61\x64\x65\x20\x69\x74"
+-			  "\x20\x65\x71\x75\x69\x76\x61\x6c",
+-		.ctext	= "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
+-			  "\x65\x32\x78\x79\x51\x98\x5d\xa6",
+-		.len	= 16,
+-	}, { /* Speck128/192 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17",
+-		.klen	= 24,
+-		.ptext	= "\x65\x6e\x74\x20\x74\x6f\x20\x43"
+-			  "\x68\x69\x65\x66\x20\x48\x61\x72",
+-		.ctext	= "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
+-			  "\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
+-		.len	= 16,
+-	}, { /* Speck128/256 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+-		.klen	= 32,
+-		.ptext	= "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
+-			  "\x49\x6e\x20\x74\x68\x6f\x73\x65",
+-		.ctext	= "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
+-			  "\x3e\xf5\xc0\x05\x04\x01\x09\x41",
+-		.len	= 16,
+-	},
+-};
+-
+-/*
+- * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck128 as the cipher
+- */
+-static const struct cipher_testvec speck128_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+-			  "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+-			  "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
+-			  "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
+-			  "\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
+-			  "\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
+-			  "\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x21\x52\x84\x15\xd1\xf7\x21\x55"
+-			  "\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
+-			  "\xda\x63\xb2\xf1\x82\xb0\x89\x59"
+-			  "\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
+-			  "\x53\xd0\xed\x2d\x30\xc1\x20\xef"
+-			  "\x70\x67\x5e\xff\x09\x70\xbb\xc1"
+-			  "\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
+-			  "\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
+-			  "\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
+-			  "\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
+-			  "\x19\xc5\x58\x84\x63\xb9\x12\x68"
+-			  "\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
+-			  "\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
+-			  "\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
+-			  "\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
+-			  "\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
+-			  "\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
+-			  "\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
+-			  "\xa7\x49\xa0\x0e\x09\x33\x85\x50"
+-			  "\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
+-			  "\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
+-			  "\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
+-			  "\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
+-			  "\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
+-			  "\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
+-			  "\xa5\x20\xac\x24\x1c\x73\x59\x73"
+-			  "\x58\x61\x3a\x87\x58\xb3\x20\x56"
+-			  "\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
+-			  "\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
+-			  "\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
+-			  "\x09\x35\x71\x50\x65\xac\x92\xe3"
+-			  "\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
+-			  "\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
+-			  "\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
+-			  "\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
+-			  "\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
+-			  "\x2a\x26\xcc\x49\x14\x6d\x55\x01"
+-			  "\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
+-			  "\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
+-			  "\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
+-			  "\x24\xa9\x60\xa4\x97\x85\x86\x2a"
+-			  "\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
+-			  "\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
+-			  "\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
+-			  "\x87\x30\xac\xd5\xea\x73\x49\x10"
+-			  "\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
+-			  "\x66\x02\x35\x3d\x60\x06\x36\x4f"
+-			  "\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
+-			  "\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
+-			  "\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
+-			  "\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
+-			  "\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
+-			  "\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
+-			  "\xbd\x48\x50\xcd\x75\x70\xc4\x62"
+-			  "\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
+-			  "\x51\x66\x02\x69\x04\x97\x36\xd4"
+-			  "\x75\xae\x0b\xa3\x42\xf8\xca\x79"
+-			  "\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
+-			  "\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
+-			  "\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
+-			  "\x03\xe7\x05\x39\xf5\x05\x26\xee"
+-			  "\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
+-			  "\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
+-			  "\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
+-			  "\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
+-			  "\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
+-			  "\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95"
+-			  "\x02\x88\x41\x97\x16\x93\x99\x37"
+-			  "\x51\x05\x82\x09\x74\x94\x45\x92",
+-		.klen	= 64,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
+-			  "\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
+-			  "\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
+-			  "\x92\x99\xde\xd3\x76\xed\xcd\x63"
+-			  "\x64\x3a\x22\x57\xc1\x43\x49\xd4"
+-			  "\x79\x36\x31\x19\x62\xae\x10\x7e"
+-			  "\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
+-			  "\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
+-			  "\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
+-			  "\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
+-			  "\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
+-			  "\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
+-			  "\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
+-			  "\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
+-			  "\x85\x3c\x4f\x26\x64\x85\xbc\x68"
+-			  "\xb0\xe0\x86\x5e\x26\x41\xce\x11"
+-			  "\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
+-			  "\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
+-			  "\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
+-			  "\x14\x4d\xf0\x74\x37\xfd\x07\x25"
+-			  "\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
+-			  "\x1b\x83\x4d\x15\x83\xac\x57\xa0"
+-			  "\xac\xa5\xd0\x38\xef\x19\x56\x53"
+-			  "\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
+-			  "\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
+-			  "\xed\x22\x34\x1c\x5d\xed\x17\x06"
+-			  "\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
+-			  "\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
+-			  "\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
+-			  "\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
+-			  "\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
+-			  "\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
+-			  "\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
+-			  "\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
+-			  "\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
+-			  "\x19\xf5\x94\xf9\xd2\x00\x33\x37"
+-			  "\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
+-			  "\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
+-			  "\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
+-			  "\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
+-			  "\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
+-			  "\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
+-			  "\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
+-			  "\xad\xb7\x73\xf8\x78\x12\xc8\x59"
+-			  "\x17\x80\x4c\x57\x39\xf1\x6d\x80"
+-			  "\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
+-			  "\xec\xce\xb7\xc8\x02\x8a\xed\x53"
+-			  "\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
+-			  "\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
+-			  "\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
+-			  "\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
+-			  "\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
+-			  "\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
+-			  "\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
+-			  "\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
+-			  "\xcc\x1f\x48\x49\x65\x47\x75\xe9"
+-			  "\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
+-			  "\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
+-			  "\xa1\x51\x89\x3b\xeb\x96\x42\xac"
+-			  "\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
+-			  "\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
+-			  "\x66\x8d\x13\xca\xe0\x59\x2a\x00"
+-			  "\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
+-			  "\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+-static const struct cipher_testvec speck64_tv_template[] = {
+-	{ /* Speck64/96 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13",
+-		.klen	= 12,
+-		.ptext	= "\x65\x61\x6e\x73\x20\x46\x61\x74",
+-		.ctext	= "\x6c\x94\x75\x41\xec\x52\x79\x9f",
+-		.len	= 8,
+-	}, { /* Speck64/128 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13\x18\x19\x1a\x1b",
+-		.klen	= 16,
+-		.ptext	= "\x2d\x43\x75\x74\x74\x65\x72\x3b",
+-		.ctext	= "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
+-		.len	= 8,
+-	},
+-};
+-
+-/*
+- * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
+- */
+-static const struct cipher_testvec speck64_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
+-			  "\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
+-			  "\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
+-			  "\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x12\x56\x73\xcd\x15\x87\xa8\x59"
+-			  "\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
+-			  "\xb3\x12\x69\x7e\x36\xeb\x52\xff"
+-			  "\x62\xdd\xba\x90\xb3\xe1\xee\x99",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
+-			  "\x27\x36\xc0\xbf\x5d\xea\x36\x37"
+-			  "\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
+-			  "\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
+-			  "\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
+-			  "\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
+-			  "\x11\xc7\x39\x96\xd0\x95\xf4\x56"
+-			  "\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
+-			  "\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
+-			  "\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
+-			  "\xd5\xd2\x13\x86\x94\x34\xe9\x62"
+-			  "\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
+-			  "\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
+-			  "\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
+-			  "\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
+-			  "\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
+-			  "\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
+-			  "\x29\xe5\xbe\x54\x30\xcb\x46\x95"
+-			  "\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
+-			  "\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
+-			  "\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
+-			  "\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
+-			  "\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
+-			  "\x82\xc0\x37\x27\xfc\x91\xa7\x05"
+-			  "\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
+-			  "\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
+-			  "\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
+-			  "\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
+-			  "\x07\xff\xf3\x72\x74\x48\xb5\x40"
+-			  "\x50\xb5\xdd\x90\x43\x31\x18\x15"
+-			  "\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a"
+-			  "\x29\x93\x90\x8b\xda\x07\xf0\x35"
+-			  "\x6d\x90\x88\x09\x4e\x83\xf5\x5b"
+-			  "\x94\x12\xbb\x33\x27\x1d\x3f\x23"
+-			  "\x51\xa8\x7c\x07\xa2\xae\x77\xa6"
+-			  "\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f"
+-			  "\x66\xdd\xcd\x75\x24\x8b\x33\xf7"
+-			  "\x20\xdb\x83\x9b\x4f\x11\x63\x6e"
+-			  "\xcf\x37\xef\xc9\x11\x01\x5c\x45"
+-			  "\x32\x99\x7c\x3c\x9e\x42\x89\xe3"
+-			  "\x70\x6d\x15\x9f\xb1\xe6\xb6\x05"
+-			  "\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc"
+-			  "\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d"
+-			  "\xa0\xa8\x89\x3b\x73\x39\xa5\x94"
+-			  "\x4c\xa4\xa6\xbb\xa7\x14\x46\x89"
+-			  "\x10\xff\xaf\xef\xca\xdd\x4f\x80"
+-			  "\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7"
+-			  "\x33\xca\x00\x8b\x8b\x3f\xea\xec"
+-			  "\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f"
+-			  "\x22\x31\xe1\x0e\xfe\x5a\x04\xd5"
+-			  "\x64\xa3\xf1\x1a\x76\x28\xcc\x35"
+-			  "\x36\xa7\x0a\x74\xf7\x1c\x44\x9b"
+-			  "\xc7\x1b\x53\x17\x02\xea\xd1\xad"
+-			  "\x13\x51\x73\xc0\xa0\xb2\x05\x32"
+-			  "\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19"
+-			  "\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d"
+-			  "\x59\xda\xee\x1a\x22\x18\xda\x0d"
+-			  "\x88\x0f\x55\x8b\x72\x62\xfd\xc1"
+-			  "\x69\x13\xcd\x0d\x5f\xc1\x09\x52"
+-			  "\xee\xd6\xe3\x84\x4d\xee\xf6\x88"
+-			  "\xaf\x83\xdc\x76\xf4\xc0\x93\x3f"
+-			  "\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54"
+-			  "\x7d\x69\x8d\x00\x62\x77\x0d\x14"
+-			  "\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3"
+-			  "\x50\xf7\x5f\xf4\xc2\xca\x41\x97"
+-			  "\x37\xbe\x75\x74\xcd\xf0\x75\x6e"
+-			  "\x25\x23\x94\xbd\xda\x8d\xb0\xd4",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27",
+-		.klen	= 32,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x55\xed\x71\xd3\x02\x8e\x15\x3b"
+-			  "\xc6\x71\x29\x2d\x3e\x89\x9f\x59"
+-			  "\x68\x6a\xcc\x8a\x56\x97\xf3\x95"
+-			  "\x4e\x51\x08\xda\x2a\xf8\x6f\x3c"
+-			  "\x78\x16\xea\x80\xdb\x33\x75\x94"
+-			  "\xf9\x29\xc4\x2b\x76\x75\x97\xc7"
+-			  "\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b"
+-			  "\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee"
+-			  "\xad\x3c\x76\x7c\xe6\x27\xa2\x2a"
+-			  "\xe4\x66\xe1\xab\xa2\x39\xfc\x7c"
+-			  "\xf5\xec\x32\x74\xa3\xb8\x03\x88"
+-			  "\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f"
+-			  "\x84\x5e\x46\xed\x20\x89\xb6\x44"
+-			  "\x8d\xd0\xed\x54\x47\x16\xbe\x95"
+-			  "\x8a\xb3\x6b\x72\xc4\x32\x52\x13"
+-			  "\x1b\xb0\x82\xbe\xac\xf9\x70\xa6"
+-			  "\x44\x18\xdd\x8c\x6e\xca\x6e\x45"
+-			  "\x8f\x1e\x10\x07\x57\x25\x98\x7b"
+-			  "\x17\x8c\x78\xdd\x80\xa7\xd9\xd8"
+-			  "\x63\xaf\xb9\x67\x57\xfd\xbc\xdb"
+-			  "\x44\xe9\xc5\x65\xd1\xc7\x3b\xff"
+-			  "\x20\xa0\x80\x1a\xc3\x9a\xad\x5e"
+-			  "\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d"
+-			  "\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65"
+-			  "\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a"
+-			  "\x09\x3c\x3d\x71\x7f\x0c\x84\x2a"
+-			  "\xc8\x48\x52\x1a\xc2\xd5\xd6\x78"
+-			  "\x92\x1e\xa0\x90\x2e\xea\xf0\xf3"
+-			  "\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e"
+-			  "\x35\x10\x30\x82\x0d\xe7\xc5\x9b"
+-			  "\xde\x44\x18\xbd\x9f\xd1\x45\xa9"
+-			  "\x7b\x7a\x4a\xad\x35\x65\x27\xca"
+-			  "\xb2\xc3\xd4\x9b\x71\x86\x70\xee"
+-			  "\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf"
+-			  "\xfc\x42\xc8\x31\x59\xbe\x16\x60"
+-			  "\x4f\xf9\xfa\x12\xea\xd0\xa7\x14"
+-			  "\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef"
+-			  "\x52\x7f\x29\x51\x94\x20\x67\x3c"
+-			  "\xd1\xaf\x77\x9f\x22\x5a\x4e\x63"
+-			  "\xe7\xff\x73\x25\xd1\xdd\x96\x8a"
+-			  "\x98\x52\x6d\xf3\xac\x3e\xf2\x18"
+-			  "\x6d\xf6\x0a\x29\xa6\x34\x3d\xed"
+-			  "\xe3\x27\x0d\x9d\x0a\x02\x44\x7e"
+-			  "\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad"
+-			  "\x91\xe6\x4d\x81\x8c\x5c\x59\xaa"
+-			  "\xfb\xeb\x56\x53\xd2\x7d\x4c\x81"
+-			  "\x65\x53\x0f\x41\x11\xbd\x98\x99"
+-			  "\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d"
+-			  "\x84\x98\xf9\x34\xed\x33\x2a\x1f"
+-			  "\x82\xed\xc1\x73\x98\xd3\x02\xdc"
+-			  "\xe6\xc2\x33\x1d\xa2\xb4\xca\x76"
+-			  "\x63\x51\x34\x9d\x96\x12\xae\xce"
+-			  "\x83\xc9\x76\x5e\xa4\x1b\x53\x37"
+-			  "\x17\xd5\xc0\x80\x1d\x62\xf8\x3d"
+-			  "\x54\x27\x74\xbb\x10\x86\x57\x46"
+-			  "\x68\xe1\xed\x14\xe7\x9d\xfc\x84"
+-			  "\x47\xbc\xc2\xf8\x19\x4b\x99\xcf"
+-			  "\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d"
+-			  "\x7b\x4f\x38\x55\x36\x71\x64\xc1"
+-			  "\xfc\x5c\x75\x52\x33\x02\x18\xf8"
+-			  "\x17\xe1\x2b\xc2\x43\x39\xbd\x76"
+-			  "\x9b\x63\x76\x32\x2f\x19\x72\x10"
+-			  "\x9f\x21\x0c\xf1\x66\x50\x7f\xa5"
+-			  "\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+ /* Cast6 test vectors from RFC 2612 */
+ static const struct cipher_testvec cast6_tv_template[] = {
+ 	{
+diff --git a/drivers/acpi/acpi_lpit.c b/drivers/acpi/acpi_lpit.c
+index cf4fc0161164..e43cb71b6972 100644
+--- a/drivers/acpi/acpi_lpit.c
++++ b/drivers/acpi/acpi_lpit.c
+@@ -117,11 +117,17 @@ static void lpit_update_residency(struct lpit_residency_info *info,
+ 		if (!info->iomem_addr)
+ 			return;
+ 
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_system_residency_us.attr,
+ 					"cpuidle");
+ 	} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
++		if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
++			return;
++
+ 		/* Silently fail, if cpuidle attribute group is not present */
+ 		sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
+ 					&dev_attr_low_power_idle_cpu_residency_us.attr,
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index bf64cfa30feb..969bf8d515c0 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -327,9 +327,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
+ 	{ "INT33FC", },
+ 
+ 	/* Braswell LPSS devices */
++	{ "80862286", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+ 	{ "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
+ 	{ "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
++	{ "808622C0", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
+ 
+ 	/* Broadwell LPSS devices */
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 449d86d39965..fc447410ae4d 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -643,7 +643,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 
+ 	status = acpi_get_type(handle, &acpi_type);
+ 	if (ACPI_FAILURE(status))
+-		return false;
++		return status;
+ 
+ 	switch (acpi_type) {
+ 	case ACPI_TYPE_PROCESSOR:
+@@ -663,11 +663,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 	}
+ 
+ 	processor_validated_ids_update(uid);
+-	return true;
++	return AE_OK;
+ 
+ err:
++	/* Exit on error, but don't abort the namespace walk */
+ 	acpi_handle_info(handle, "Invalid processor object\n");
+-	return false;
++	return AE_OK;
+ 
+ }
+ 
+diff --git a/drivers/acpi/acpica/dsopcode.c b/drivers/acpi/acpica/dsopcode.c
+index e9fb0bf3c8d2..78f9de260d5f 100644
+--- a/drivers/acpi/acpica/dsopcode.c
++++ b/drivers/acpi/acpica/dsopcode.c
+@@ -417,6 +417,10 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
+ 			  ACPI_FORMAT_UINT64(obj_desc->region.address),
+ 			  obj_desc->region.length));
+ 
++	status = acpi_ut_add_address_range(obj_desc->region.space_id,
++					   obj_desc->region.address,
++					   obj_desc->region.length, node);
++
+ 	/* Now the address and length are valid for this opregion */
+ 
+ 	obj_desc->region.flags |= AOPOBJ_DATA_VALID;
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 0f0bdc9d24c6..314276779f57 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -417,6 +417,7 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 	union acpi_parse_object *op = NULL;	/* current op */
+ 	struct acpi_parse_state *parser_state;
+ 	u8 *aml_op_start = NULL;
++	u8 opcode_length;
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ps_parse_loop, walk_state);
+ 
+@@ -540,8 +541,19 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 						    "Skip parsing opcode %s",
+ 						    acpi_ps_get_opcode_name
+ 						    (walk_state->opcode)));
++
++					/*
++					 * Determine the opcode length before skipping the opcode.
++					 * An opcode can be 1 byte or 2 bytes in length.
++					 */
++					opcode_length = 1;
++					if ((walk_state->opcode & 0xFF00) ==
++					    AML_EXTENDED_OPCODE) {
++						opcode_length = 2;
++					}
+ 					walk_state->parser_state.aml =
+-					    walk_state->aml + 1;
++					    walk_state->aml + opcode_length;
++
+ 					walk_state->parser_state.aml =
+ 					    acpi_ps_get_next_package_end
+ 					    (&walk_state->parser_state);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7c479002e798..c0db96e8a81a 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2456,7 +2456,8 @@ static int ars_get_cap(struct acpi_nfit_desc *acpi_desc,
+ 	return cmd_rc;
+ }
+ 
+-static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa)
++static int ars_start(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa, enum nfit_ars_state req_type)
+ {
+ 	int rc;
+ 	int cmd_rc;
+@@ -2467,7 +2468,7 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa
+ 	memset(&ars_start, 0, sizeof(ars_start));
+ 	ars_start.address = spa->address;
+ 	ars_start.length = spa->length;
+-	if (test_bit(ARS_SHORT, &nfit_spa->ars_state))
++	if (req_type == ARS_REQ_SHORT)
+ 		ars_start.flags = ND_ARS_RETURN_PREV_DATA;
+ 	if (nfit_spa_type(spa) == NFIT_SPA_PM)
+ 		ars_start.type = ND_ARS_PERSISTENT;
+@@ -2524,6 +2525,15 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_region *nd_region = nfit_spa->nd_region;
+ 	struct device *dev;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++	/*
++	 * Only advance the ARS state for ARS runs initiated by the
++	 * kernel, ignore ARS results from BIOS initiated runs for scrub
++	 * completion tracking.
++	 */
++	if (acpi_desc->scrub_spa != nfit_spa)
++		return;
++
+ 	if ((ars_status->address >= spa->address && ars_status->address
+ 				< spa->address + spa->length)
+ 			|| (ars_status->address < spa->address)) {
+@@ -2543,23 +2553,13 @@ static void ars_complete(struct acpi_nfit_desc *acpi_desc,
+ 	} else
+ 		return;
+ 
+-	if (test_bit(ARS_DONE, &nfit_spa->ars_state))
+-		return;
+-
+-	if (!test_and_clear_bit(ARS_REQ, &nfit_spa->ars_state))
+-		return;
+-
++	acpi_desc->scrub_spa = NULL;
+ 	if (nd_region) {
+ 		dev = nd_region_dev(nd_region);
+ 		nvdimm_region_notify(nd_region, NVDIMM_REVALIDATE_POISON);
+ 	} else
+ 		dev = acpi_desc->dev;
+-
+-	dev_dbg(dev, "ARS: range %d %s complete\n", spa->range_index,
+-			test_bit(ARS_SHORT, &nfit_spa->ars_state)
+-			? "short" : "long");
+-	clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-	set_bit(ARS_DONE, &nfit_spa->ars_state);
++	dev_dbg(dev, "ARS: range %d complete\n", spa->range_index);
+ }
+ 
+ static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
+@@ -2840,46 +2840,55 @@ static int acpi_nfit_query_poison(struct acpi_nfit_desc *acpi_desc)
+ 	return 0;
+ }
+ 
+-static int ars_register(struct acpi_nfit_desc *acpi_desc, struct nfit_spa *nfit_spa,
+-		int *query_rc)
++static int ars_register(struct acpi_nfit_desc *acpi_desc,
++		struct nfit_spa *nfit_spa)
+ {
+-	int rc = *query_rc;
++	int rc;
+ 
+-	if (no_init_ars)
++	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ 
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+-	set_bit(ARS_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+ 
+-	switch (rc) {
++	switch (acpi_nfit_query_poison(acpi_desc)) {
+ 	case 0:
+ 	case -EAGAIN:
+-		rc = ars_start(acpi_desc, nfit_spa);
+-		if (rc == -EBUSY) {
+-			*query_rc = rc;
++		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
++		/* shouldn't happen, try again later */
++		if (rc == -EBUSY)
+ 			break;
+-		} else if (rc == 0) {
+-			rc = acpi_nfit_query_poison(acpi_desc);
+-		} else {
++		if (rc) {
+ 			set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 			break;
+ 		}
+-		if (rc == -EAGAIN)
+-			clear_bit(ARS_SHORT, &nfit_spa->ars_state);
+-		else if (rc == 0)
+-			ars_complete(acpi_desc, nfit_spa);
++		clear_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
++		rc = acpi_nfit_query_poison(acpi_desc);
++		if (rc)
++			break;
++		acpi_desc->scrub_spa = nfit_spa;
++		ars_complete(acpi_desc, nfit_spa);
++		/*
++		 * If ars_complete() says we didn't complete the
++		 * short scrub, we'll try again with a long
++		 * request.
++		 */
++		acpi_desc->scrub_spa = NULL;
+ 		break;
+ 	case -EBUSY:
++	case -ENOMEM:
+ 	case -ENOSPC:
++		/*
++		 * BIOS was using ARS, wait for it to complete (or
++		 * resources to become available) and then perform our
++		 * own scrubs.
++		 */
+ 		break;
+ 	default:
+ 		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		break;
+ 	}
+ 
+-	if (test_and_clear_bit(ARS_DONE, &nfit_spa->ars_state))
+-		set_bit(ARS_REQ, &nfit_spa->ars_state);
+-
+ 	return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ }
+ 
+@@ -2901,6 +2910,8 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 	struct device *dev = acpi_desc->dev;
+ 	struct nfit_spa *nfit_spa;
+ 
++	lockdep_assert_held(&acpi_desc->init_mutex);
++
+ 	if (acpi_desc->cancel)
+ 		return 0;
+ 
+@@ -2924,21 +2935,49 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 
+ 	ars_complete_all(acpi_desc);
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
++		enum nfit_ars_state req_type;
++		int rc;
++
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+-		if (test_bit(ARS_REQ, &nfit_spa->ars_state)) {
+-			int rc = ars_start(acpi_desc, nfit_spa);
+-
+-			clear_bit(ARS_DONE, &nfit_spa->ars_state);
+-			dev = nd_region_dev(nfit_spa->nd_region);
+-			dev_dbg(dev, "ARS: range %d ARS start (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			if (rc == 0 || rc == -EBUSY)
+-				return 1;
+-			dev_err(dev, "ARS: range %d ARS failed (%d)\n",
+-					nfit_spa->spa->range_index, rc);
+-			set_bit(ARS_FAILED, &nfit_spa->ars_state);
++
++		/* prefer short ARS requests first */
++		if (test_bit(ARS_REQ_SHORT, &nfit_spa->ars_state))
++			req_type = ARS_REQ_SHORT;
++		else if (test_bit(ARS_REQ_LONG, &nfit_spa->ars_state))
++			req_type = ARS_REQ_LONG;
++		else
++			continue;
++		rc = ars_start(acpi_desc, nfit_spa, req_type);
++
++		dev = nd_region_dev(nfit_spa->nd_region);
++		dev_dbg(dev, "ARS: range %d ARS start %s (%d)\n",
++				nfit_spa->spa->range_index,
++				req_type == ARS_REQ_SHORT ? "short" : "long",
++				rc);
++		/*
++		 * Hmm, we raced someone else starting ARS? Try again in
++		 * a bit.
++		 */
++		if (rc == -EBUSY)
++			return 1;
++		if (rc == 0) {
++			dev_WARN_ONCE(dev, acpi_desc->scrub_spa,
++					"scrub start while range %d active\n",
++					acpi_desc->scrub_spa->spa->range_index);
++			clear_bit(req_type, &nfit_spa->ars_state);
++			acpi_desc->scrub_spa = nfit_spa;
++			/*
++			 * Consider this spa last for future scrub
++			 * requests
++			 */
++			list_move_tail(&nfit_spa->list, &acpi_desc->spas);
++			return 1;
+ 		}
++
++		dev_err(dev, "ARS: range %d ARS failed (%d)\n",
++				nfit_spa->spa->range_index, rc);
++		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	}
+ 	return 0;
+ }
+@@ -2994,6 +3033,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	struct nd_cmd_ars_cap ars_cap;
+ 	int rc;
+ 
++	set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 	memset(&ars_cap, 0, sizeof(ars_cap));
+ 	rc = ars_get_cap(acpi_desc, &ars_cap, nfit_spa);
+ 	if (rc < 0)
+@@ -3010,16 +3050,14 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
+ 	nfit_spa->clear_err_unit = ars_cap.clear_err_unit;
+ 	acpi_desc->max_ars = max(nfit_spa->max_ars, acpi_desc->max_ars);
+ 	clear_bit(ARS_FAILED, &nfit_spa->ars_state);
+-	set_bit(ARS_REQ, &nfit_spa->ars_state);
+ }
+ 
+ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ {
+ 	struct nfit_spa *nfit_spa;
+-	int rc, query_rc;
++	int rc;
+ 
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
+-		set_bit(ARS_FAILED, &nfit_spa->ars_state);
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+@@ -3028,20 +3066,12 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ 		}
+ 	}
+ 
+-	/*
+-	 * Reap any results that might be pending before starting new
+-	 * short requests.
+-	 */
+-	query_rc = acpi_nfit_query_poison(acpi_desc);
+-	if (query_rc == 0)
+-		ars_complete_all(acpi_desc);
+-
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+ 		case NFIT_SPA_PM:
+ 			/* register regions and kick off initial ARS run */
+-			rc = ars_register(acpi_desc, nfit_spa, &query_rc);
++			rc = ars_register(acpi_desc, nfit_spa);
+ 			if (rc)
+ 				return rc;
+ 			break;
+@@ -3236,7 +3266,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ 	return 0;
+ }
+ 
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type)
+ {
+ 	struct device *dev = acpi_desc->dev;
+ 	int scheduled = 0, busy = 0;
+@@ -3256,13 +3287,10 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags)
+ 		if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 			continue;
+ 
+-		if (test_and_set_bit(ARS_REQ, &nfit_spa->ars_state))
++		if (test_and_set_bit(req_type, &nfit_spa->ars_state))
+ 			busy++;
+-		else {
+-			if (test_bit(ARS_SHORT, &flags))
+-				set_bit(ARS_SHORT, &nfit_spa->ars_state);
++		else
+ 			scheduled++;
+-		}
+ 	}
+ 	if (scheduled) {
+ 		sched_ars(acpi_desc);
+@@ -3448,10 +3476,11 @@ static void acpi_nfit_update_notify(struct device *dev, acpi_handle handle)
+ static void acpi_nfit_uc_error_notify(struct device *dev, acpi_handle handle)
+ {
+ 	struct acpi_nfit_desc *acpi_desc = dev_get_drvdata(dev);
+-	unsigned long flags = (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON) ?
+-			0 : 1 << ARS_SHORT;
+ 
+-	acpi_nfit_ars_rescan(acpi_desc, flags);
++	if (acpi_desc->scrub_mode == HW_ERROR_SCRUB_ON)
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_LONG);
++	else
++		acpi_nfit_ars_rescan(acpi_desc, ARS_REQ_SHORT);
+ }
+ 
+ void __acpi_nfit_notify(struct device *dev, acpi_handle handle, u32 event)
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index a97ff42fe311..02c10de50386 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -118,9 +118,8 @@ enum nfit_dimm_notifiers {
+ };
+ 
+ enum nfit_ars_state {
+-	ARS_REQ,
+-	ARS_DONE,
+-	ARS_SHORT,
++	ARS_REQ_SHORT,
++	ARS_REQ_LONG,
+ 	ARS_FAILED,
+ };
+ 
+@@ -197,6 +196,7 @@ struct acpi_nfit_desc {
+ 	struct device *dev;
+ 	u8 ars_start_flags;
+ 	struct nd_cmd_ars_status *ars_status;
++	struct nfit_spa *scrub_spa;
+ 	struct delayed_work dwork;
+ 	struct list_head list;
+ 	struct kernfs_node *scrub_count_state;
+@@ -251,7 +251,8 @@ struct nfit_blk {
+ 
+ extern struct list_head acpi_descs;
+ extern struct mutex acpi_desc_lock;
+-int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc, unsigned long flags);
++int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
++		enum nfit_ars_state req_type);
+ 
+ #ifdef CONFIG_X86_MCE
+ void nfit_mce_register(void);
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 8df9abfa947b..ed73f6fb0779 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -617,15 +617,18 @@ void acpi_os_stall(u32 us)
+ }
+ 
+ /*
+- * Support ACPI 3.0 AML Timer operand
+- * Returns 64-bit free-running, monotonically increasing timer
+- * with 100ns granularity
++ * Support ACPI 3.0 AML Timer operand. Returns a 64-bit free-running,
++ * monotonically increasing timer with 100ns granularity. Do not use
++ * ktime_get() to implement this function because this function may get
++ * called after timekeeping has been suspended. Note: calling this function
++ * after timekeeping has been suspended may lead to unexpected results
++ * because when timekeeping is suspended the jiffies counter is not
++ * incremented. See also timekeeping_suspend().
+  */
+ u64 acpi_os_get_timer(void)
+ {
+-	u64 time_ns = ktime_to_ns(ktime_get());
+-	do_div(time_ns, 100);
+-	return time_ns;
++	return (get_jiffies_64() - INITIAL_JIFFIES) *
++		(ACPI_100NSEC_PER_SEC / HZ);
+ }
+ 
+ acpi_status acpi_os_read_port(acpi_io_address port, u32 * value, u32 width)
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index d1e26cb599bf..da031b1df6f5 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -338,9 +338,6 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
+ 	return found;
+ }
+ 
+-/* total number of attributes checked by the properties code */
+-#define PPTT_CHECKED_ATTRIBUTES 4
+-
+ /**
+  * update_cache_properties() - Update cacheinfo for the given processor
+  * @this_leaf: Kernel cache info structure being updated
+@@ -357,25 +354,15 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 				    struct acpi_pptt_cache *found_cache,
+ 				    struct acpi_pptt_processor *cpu_node)
+ {
+-	int valid_flags = 0;
+-
+ 	this_leaf->fw_token = cpu_node;
+-	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
+ 		this_leaf->size = found_cache->size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID) {
++	if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
+ 		this_leaf->coherency_line_size = found_cache->line_size;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID) {
++	if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
+ 		this_leaf->number_of_sets = found_cache->number_of_sets;
+-		valid_flags++;
+-	}
+-	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID) {
++	if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
+ 		this_leaf->ways_of_associativity = found_cache->associativity;
+-		valid_flags++;
+-	}
+ 	if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID) {
+ 		switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
+ 		case ACPI_PPTT_CACHE_POLICY_WT:
+@@ -402,11 +389,17 @@ static void update_cache_properties(struct cacheinfo *this_leaf,
+ 		}
+ 	}
+ 	/*
+-	 * If the above flags are valid, and the cache type is NOCACHE
+-	 * update the cache type as well.
++	 * If cache type is NOCACHE, then the cache hasn't been specified
++	 * via other mechanisms.  Update the type if a cache type has been
++	 * provided.
++	 *
++	 * Note, we assume such caches are unified based on conventional system
++	 * design and known examples.  Significant work is required elsewhere to
++	 * fully support data/instruction only type caches which are only
++	 * specified in PPTT.
+ 	 */
+ 	if (this_leaf->type == CACHE_TYPE_NOCACHE &&
+-	    valid_flags == PPTT_CHECKED_ATTRIBUTES)
++	    found_cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)
+ 		this_leaf->type = CACHE_TYPE_UNIFIED;
+ }
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 99bf0c0394f8..321a9579556d 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4552,6 +4552,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	/* These specific Samsung models/firmware-revs do not handle LPM well */
+ 	{ "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },
+ 	{ "SAMSUNG SSD PM830 mSATA *",  "CXM13D1Q", ATA_HORKAGE_NOLPM, },
++	{ "SAMSUNG MZ7TD256HAFV-000L9", "DXT02L5Q", ATA_HORKAGE_NOLPM, },
+ 
+ 	/* devices that don't properly handle queued TRIM commands */
+ 	{ "Micron_M500IT_*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index dfb2c2622e5a..822e3060d834 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
+ 		unit[i].disk = alloc_disk(1);
+ 		if (!unit[i].disk)
+ 			goto Enomem;
++
++		unit[i].disk->queue = blk_init_queue(do_fd_request,
++						     &ataflop_lock);
++		if (!unit[i].disk->queue)
++			goto Enomem;
+ 	}
+ 
+ 	if (UseTrackbuffer < 0)
+@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
+ 		sprintf(unit[i].disk->disk_name, "fd%d", i);
+ 		unit[i].disk->fops = &floppy_fops;
+ 		unit[i].disk->private_data = &unit[i];
+-		unit[i].disk->queue = blk_init_queue(do_fd_request,
+-					&ataflop_lock);
+-		if (!unit[i].disk->queue)
+-			goto Enomem;
+ 		set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
+ 		add_disk(unit[i].disk);
+ 	}
+@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
+ 
+ 	return 0;
+ Enomem:
+-	while (i--) {
+-		struct request_queue *q = unit[i].disk->queue;
++	do {
++		struct gendisk *disk = unit[i].disk;
+ 
+-		put_disk(unit[i].disk);
+-		if (q)
+-			blk_cleanup_queue(q);
+-	}
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(unit[i].disk);
++		}
++	} while (i--);
+ 
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+ 	return -ENOMEM;
+diff --git a/drivers/block/swim.c b/drivers/block/swim.c
+index 0e31884a9519..cbe909c51847 100644
+--- a/drivers/block/swim.c
++++ b/drivers/block/swim.c
+@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
+ 
+ exit_put_disks:
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-	while (drive--)
+-		put_disk(swd->unit[drive].disk);
++	do {
++		struct gendisk *disk = swd->unit[drive].disk;
++
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(disk);
++		}
++	} while (drive--);
+ 	return err;
+ }
+ 
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index b5cedccb5d7d..144df6830b82 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1911,6 +1911,7 @@ static int negotiate_mq(struct blkfront_info *info)
+ 			      GFP_KERNEL);
+ 	if (!info->rinfo) {
+ 		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
++		info->nr_rings = 0;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2475,6 +2476,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
+ 
+ 	dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
+ 
++	if (!info)
++		return 0;
++
+ 	blkif_free(info, 0);
+ 
+ 	mutex_lock(&info->mutex);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 99cde1f9467d..e3e4d929e74f 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -324,6 +324,7 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = {
+ 	{ 0x4103, "BCM4330B1"	},	/* 002.001.003 */
+ 	{ 0x410e, "BCM43341B0"	},	/* 002.001.014 */
+ 	{ 0x4406, "BCM4324B3"	},	/* 002.004.006 */
++	{ 0x6109, "BCM4335C0"	},	/* 003.001.009 */
+ 	{ 0x610c, "BCM4354"	},	/* 003.001.012 */
+ 	{ 0x2122, "BCM4343A0"	},	/* 001.001.034 */
+ 	{ 0x2209, "BCM43430A1"  },	/* 001.002.009 */
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 265d6a6583bc..e33fefd6ceae 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -606,8 +606,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			return;
+ 		}
+@@ -939,8 +940,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->retries_left = SSIF_RECV_RETRIES;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_PART_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_PART_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		}
+ 	}
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 3a3a7a548a85..e8822b3d10e1 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -664,7 +664,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
+ 		return len;
+ 
+ 	err = be32_to_cpu(header->return_code);
+-	if (err != 0 && desc)
++	if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
++	    && desc)
+ 		dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
+ 			desc);
+ 	if (err)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 911475d36800..b150f87f38f5 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -264,7 +264,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
+ 		return -ENOMEM;
+ 	}
+ 
+-	rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
++	rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
+ 	if (rv < 0)
+ 		return rv;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index 0a9ebf00be46..e58bfcb1169e 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -32,6 +32,7 @@ struct private_data {
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
++	bool have_static_opps;
+ };
+ 
+ static struct freq_attr *cpufreq_dt_attr[] = {
+@@ -204,6 +205,15 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 		}
+ 	}
+ 
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		ret = -ENOMEM;
++		goto out_put_regulator;
++	}
++
++	priv->reg_name = name;
++	priv->opp_table = opp_table;
++
+ 	/*
+ 	 * Initialize OPP tables for all policy->cpus. They will be shared by
+ 	 * all CPUs which have marked their CPUs shared with OPP bindings.
+@@ -214,7 +224,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 *
+ 	 * OPPs might be populated at runtime, don't check for error here
+ 	 */
+-	dev_pm_opp_of_cpumask_add_table(policy->cpus);
++	if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
++		priv->have_static_opps = true;
+ 
+ 	/*
+ 	 * But we need OPP table to function so if it is not there let's
+@@ -240,19 +251,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 				__func__, ret);
+ 	}
+ 
+-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv) {
+-		ret = -ENOMEM;
+-		goto out_free_opp;
+-	}
+-
+-	priv->reg_name = name;
+-	priv->opp_table = opp_table;
+-
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+-		goto out_free_priv;
++		goto out_free_opp;
+ 	}
+ 
+ 	priv->cpu_dev = cpu_dev;
+@@ -282,10 +284,11 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 
+ out_free_cpufreq_table:
+ 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+-out_free_priv:
+-	kfree(priv);
+ out_free_opp:
+-	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	kfree(priv);
++out_put_regulator:
+ 	if (name)
+ 		dev_pm_opp_put_regulators(opp_table);
+ out_put_clk:
+@@ -300,7 +303,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+ 		dev_pm_opp_put_regulators(priv->opp_table);
+ 
+diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
+index f20f20a77d4d..4268f87e99fc 100644
+--- a/drivers/cpufreq/cpufreq_conservative.c
++++ b/drivers/cpufreq/cpufreq_conservative.c
+@@ -80,8 +80,10 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	 * changed in the meantime, so fall back to current frequency in that
+ 	 * case.
+ 	 */
+-	if (requested_freq > policy->max || requested_freq < policy->min)
++	if (requested_freq > policy->max || requested_freq < policy->min) {
+ 		requested_freq = policy->cur;
++		dbs_info->requested_freq = requested_freq;
++	}
+ 
+ 	freq_step = get_freq_step(cs_tuners, policy);
+ 
+@@ -92,7 +94,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	if (policy_dbs->idle_periods < UINT_MAX) {
+ 		unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
+ 
+-		if (requested_freq > freq_steps)
++		if (requested_freq > policy->min + freq_steps)
+ 			requested_freq -= freq_steps;
+ 		else
+ 			requested_freq = policy->min;
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index 4fb91ba39c36..ce3f9ad7120f 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -70,22 +70,22 @@
+ extern bool caam_little_end;
+ extern bool caam_imx;
+ 
+-#define caam_to_cpu(len)				\
+-static inline u##len caam##len ## _to_cpu(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return le##len ## _to_cpu(val);		\
+-	else						\
+-		return be##len ## _to_cpu(val);		\
++#define caam_to_cpu(len)						\
++static inline u##len caam##len ## _to_cpu(u##len val)			\
++{									\
++	if (caam_little_end)						\
++		return le##len ## _to_cpu((__force __le##len)val);	\
++	else								\
++		return be##len ## _to_cpu((__force __be##len)val);	\
+ }
+ 
+-#define cpu_to_caam(len)				\
+-static inline u##len cpu_to_caam##len(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return cpu_to_le##len(val);		\
+-	else						\
+-		return cpu_to_be##len(val);		\
++#define cpu_to_caam(len)					\
++static inline u##len cpu_to_caam##len(u##len val)		\
++{								\
++	if (caam_little_end)					\
++		return (__force u##len)cpu_to_le##len(val);	\
++	else							\
++		return (__force u##len)cpu_to_be##len(val);	\
+ }
+ 
+ caam_to_cpu(16)
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 85820a2d69d4..987899610b46 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -761,6 +761,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int i, ret;
+ 
++	if (!dev->of_node) {
++		dev_err(dev, "This driver must be probed from devicetree\n");
++		return -EINVAL;
++	}
++
+ 	jzdma = devm_kzalloc(dev, sizeof(*jzdma), GFP_KERNEL);
+ 	if (!jzdma)
+ 		return -ENOMEM;
+diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
+index 4fa4c06c9edb..21a5708985bc 100644
+--- a/drivers/dma/ioat/init.c
++++ b/drivers/dma/ioat/init.c
+@@ -1205,8 +1205,15 @@ static void ioat_shutdown(struct pci_dev *pdev)
+ 
+ 		spin_lock_bh(&ioat_chan->prep_lock);
+ 		set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
+-		del_timer_sync(&ioat_chan->timer);
+ 		spin_unlock_bh(&ioat_chan->prep_lock);
++		/*
++		 * Synchronization rule for del_timer_sync():
++		 *  - The caller must not hold locks which would prevent
++		 *    completion of the timer's handler.
++		 * So prep_lock cannot be held before calling it.
++		 */
++		del_timer_sync(&ioat_chan->timer);
++
+ 		/* this should quiesce then reset */
+ 		ioat_reset_hw(ioat_chan);
+ 	}
+diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
+index 4cf0d4d0cecf..25610286979f 100644
+--- a/drivers/dma/ppc4xx/adma.c
++++ b/drivers/dma/ppc4xx/adma.c
+@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct device_driver *dev, const char *buf,
+ }
+ static DRIVER_ATTR_RW(enable);
+ 
+-static ssize_t poly_store(struct device_driver *dev, char *buf)
++static ssize_t poly_show(struct device_driver *dev, char *buf)
+ {
+ 	ssize_t size = 0;
+ 	u32 reg;
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 18aeabb1d5ee..e2addb2bca29 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_types[] = {
+ 			.dbam_to_cs		= f17_base_addr_to_cs_size,
+ 		}
+ 	},
++	[F17_M10H_CPUS] = {
++		.ctl_name = "F17h_M10h",
++		.f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0,
++		.f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6,
++		.ops = {
++			.early_channel_count	= f17_early_channel_count,
++			.dbam_to_cs		= f17_base_addr_to_cs_size,
++		}
++	},
+ };
+ 
+ /*
+@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ 		break;
+ 
+ 	case 0x17:
++		if (pvt->model >= 0x10 && pvt->model <= 0x2f) {
++			fam_type = &family_types[F17_M10H_CPUS];
++			pvt->ops = &family_types[F17_M10H_CPUS].ops;
++			break;
++		}
+ 		fam_type	= &family_types[F17_CPUS];
+ 		pvt->ops	= &family_types[F17_CPUS].ops;
+ 		break;
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index 1d4b74e9a037..4242f8e39c18 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -115,6 +115,8 @@
+ #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582
+ #define PCI_DEVICE_ID_AMD_17H_DF_F0	0x1460
+ #define PCI_DEVICE_ID_AMD_17H_DF_F6	0x1466
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
+ 
+ /*
+  * Function 1 - Address Map
+@@ -281,6 +283,7 @@ enum amd_families {
+ 	F16_CPUS,
+ 	F16_M30H_CPUS,
+ 	F17_CPUS,
++	F17_M10H_CPUS,
+ 	NUM_FAMILIES,
+ };
+ 
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8e120bf60624..f1d19504a028 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ 	u32 errnum = find_first_bit(&error, 32);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv)
+ 			tp_event = HW_EVENT_ERR_FATAL;
+ 		else
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 4a89c8093307..498d253a3b7e 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2881,6 +2881,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ 		recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/edac/skx_edac.c b/drivers/edac/skx_edac.c
+index fae095162c01..4ba92f1dd0f7 100644
+--- a/drivers/edac/skx_edac.c
++++ b/drivers/edac/skx_edac.c
+@@ -668,7 +668,7 @@ sad_found:
+ 			break;
+ 		case 2:
+ 			lchan = (addr >> shift) % 2;
+-			lchan = (lchan << 1) | ~lchan;
++			lchan = (lchan << 1) | !lchan;
+ 			break;
+ 		case 3:
+ 			lchan = ((addr >> shift) % 2) << 1;
+@@ -959,6 +959,7 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ 	recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
+index 19db5709ae28..898bb9abc41f 100644
+--- a/drivers/firmware/google/coreboot_table.c
++++ b/drivers/firmware/google/coreboot_table.c
+@@ -110,7 +110,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 	if (strncmp(header.signature, "LBIO", sizeof(header.signature))) {
+ 		pr_warn("coreboot_table: coreboot table missing or corrupt!\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto out;
+ 	}
+ 
+ 	ptr_entry = (void *)ptr_header + header.header_bytes;
+@@ -137,7 +138,8 @@ int coreboot_table_init(struct device *dev, void __iomem *ptr)
+ 
+ 		ptr_entry += entry.size;
+ 	}
+-
++out:
++	iounmap(ptr);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(coreboot_table_init);
+@@ -146,7 +148,6 @@ int coreboot_table_exit(void)
+ {
+ 	if (ptr_header) {
+ 		bus_unregister(&coreboot_bus_type);
+-		iounmap(ptr_header);
+ 		ptr_header = NULL;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
+index 16c7f9f49416..af936dcca659 100644
+--- a/drivers/gpio/gpio-brcmstb.c
++++ b/drivers/gpio/gpio-brcmstb.c
+@@ -664,6 +664,18 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 		struct brcmstb_gpio_bank *bank;
+ 		struct gpio_chip *gc;
+ 
++		/*
++		 * If bank_width is 0, then there is an empty bank in the
++		 * register block. Special handling for this case.
++		 */
++		if (bank_width == 0) {
++			dev_dbg(dev, "Width 0 found: Empty bank @ %d\n",
++				num_banks);
++			num_banks++;
++			gpio_base += MAX_GPIO_PER_BANK;
++			continue;
++		}
++
+ 		bank = devm_kzalloc(dev, sizeof(*bank), GFP_KERNEL);
+ 		if (!bank) {
+ 			err = -ENOMEM;
+@@ -740,9 +752,6 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
+ 			goto fail;
+ 	}
+ 
+-	dev_info(dev, "Registered %d banks (GPIO(s): %d-%d)\n",
+-			num_banks, priv->gpio_base, gpio_base - 1);
+-
+ 	if (priv->parent_wake_irq && need_wakeup_event)
+ 		pm_wakeup_event(dev, 0);
+ 
+diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
+index 895741e9cd7d..52ccf1c31855 100644
+--- a/drivers/gpu/drm/drm_atomic.c
++++ b/drivers/gpu/drm/drm_atomic.c
+@@ -173,6 +173,11 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
+ 		state->crtcs[i].state = NULL;
+ 		state->crtcs[i].old_state = NULL;
+ 		state->crtcs[i].new_state = NULL;
++
++		if (state->crtcs[i].commit) {
++			drm_crtc_commit_put(state->crtcs[i].commit);
++			state->crtcs[i].commit = NULL;
++		}
+ 	}
+ 
+ 	for (i = 0; i < config->num_total_plane; i++) {
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 81e32199d3ef..abca95b970ea 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1384,15 +1384,16 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
+ void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
+ 					  struct drm_atomic_state *old_state)
+ {
+-	struct drm_crtc_state *new_crtc_state;
+ 	struct drm_crtc *crtc;
+ 	int i;
+ 
+-	for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
+-		struct drm_crtc_commit *commit = new_crtc_state->commit;
++	for (i = 0; i < dev->mode_config.num_crtc; i++) {
++		struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
+ 		int ret;
+ 
+-		if (!commit)
++		crtc = old_state->crtcs[i].ptr;
++
++		if (!crtc || !commit)
+ 			continue;
+ 
+ 		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
+@@ -1906,6 +1907,9 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
+ 		drm_crtc_commit_get(commit);
+ 
+ 		commit->abort_completion = true;
++
++		state->crtcs[i].commit = commit;
++		drm_crtc_commit_get(commit);
+ 	}
+ 
+ 	for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 98a36e6c69ad..bd207857a964 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -560,9 +560,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	struct drm_mode_crtc *crtc_req = data;
+ 	struct drm_crtc *crtc;
+ 	struct drm_plane *plane;
+-	struct drm_connector **connector_set = NULL, *connector;
+-	struct drm_framebuffer *fb = NULL;
+-	struct drm_display_mode *mode = NULL;
++	struct drm_connector **connector_set, *connector;
++	struct drm_framebuffer *fb;
++	struct drm_display_mode *mode;
+ 	struct drm_mode_set set;
+ 	uint32_t __user *set_connectors_ptr;
+ 	struct drm_modeset_acquire_ctx ctx;
+@@ -591,6 +591,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	mutex_lock(&crtc->dev->mode_config.mutex);
+ 	drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
+ retry:
++	connector_set = NULL;
++	fb = NULL;
++	mode = NULL;
++
+ 	ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
+ 	if (ret)
+ 		goto out;
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 59a11026dceb..45a8ba42c8f4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -1446,8 +1446,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	}
+ 
+ 	/* The CEC module handles HDMI hotplug detection */
+-	cec_np = of_find_compatible_node(np->parent, NULL,
+-					 "mediatek,mt8173-cec");
++	cec_np = of_get_compatible_child(np->parent, "mediatek,mt8173-cec");
+ 	if (!cec_np) {
+ 		dev_err(dev, "Failed to find CEC node\n");
+ 		return -EINVAL;
+@@ -1457,8 +1456,10 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	if (!cec_pdev) {
+ 		dev_err(hdmi->dev, "Waiting for CEC device %pOF\n",
+ 			cec_np);
++		of_node_put(cec_np);
+ 		return -EPROBE_DEFER;
+ 	}
++	of_node_put(cec_np);
+ 	hdmi->cec_dev = &cec_pdev->dev;
+ 
+ 	/*
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index 23872d08308c..a746017fac17 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+ 			if (cmd == HIDIOCGCOLLECTIONINDEX) {
+ 				if (uref->usage_index >= field->maxusage)
+ 					goto inval;
++				uref->usage_index =
++					array_index_nospec(uref->usage_index,
++							   field->maxusage);
+ 			} else if (uref->usage_index >= field->report_count)
+ 				goto inval;
+ 		}
+ 
+-		if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) &&
+-		    (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
+-		     uref->usage_index + uref_multi->num_values > field->report_count))
+-			goto inval;
++		if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) {
++			if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
++			    uref->usage_index + uref_multi->num_values >
++			    field->report_count)
++				goto inval;
++
++			uref->usage_index =
++				array_index_nospec(uref->usage_index,
++						   field->report_count -
++						   uref_multi->num_values);
++		}
+ 
+ 		switch (cmd) {
+ 		case HIDIOCGUSAGE:
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index ad7afa74d365..ff9a1d8e90f7 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3335,6 +3335,7 @@ static void wacom_setup_intuos(struct wacom_wac *wacom_wac)
+ 
+ void wacom_setup_device_quirks(struct wacom *wacom)
+ {
++	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ 	struct wacom_features *features = &wacom->wacom_wac.features;
+ 
+ 	/* The pen and pad share the same interface on most devices */
+@@ -3464,6 +3465,24 @@ void wacom_setup_device_quirks(struct wacom *wacom)
+ 
+ 	if (features->type == REMOTE)
+ 		features->device_type |= WACOM_DEVICETYPE_WL_MONITOR;
++
++	/* HID descriptor for DTK-2451 / DTH-2452 claims to report lots
++	 * of things it shouldn't. Let's fix up the damage...
++	 */
++	if (wacom->hdev->product == 0x382 || wacom->hdev->product == 0x37d) {
++		features->quirks &= ~WACOM_QUIRK_TOOLSERIAL;
++		__clear_bit(BTN_TOOL_BRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_PENCIL, wacom_wac->pen_input->keybit);
++		__clear_bit(BTN_TOOL_AIRBRUSH, wacom_wac->pen_input->keybit);
++		__clear_bit(ABS_Z, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_DISTANCE, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_X, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_TILT_Y, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_WHEEL, wacom_wac->pen_input->absbit);
++		__clear_bit(ABS_MISC, wacom_wac->pen_input->absbit);
++		__clear_bit(MSC_SERIAL, wacom_wac->pen_input->mscbit);
++		__clear_bit(EV_MSC, wacom_wac->pen_input->evbit);
++	}
+ }
+ 
+ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0f0e091c117c..c4a1ebcfffb6 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -606,16 +606,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	bool perf_chn = vmbus_devs[dev_type].perf_device;
+ 	struct vmbus_channel *primary = channel->primary_channel;
+ 	int next_node;
+-	struct cpumask available_mask;
++	cpumask_var_t available_mask;
+ 	struct cpumask *alloced_mask;
+ 
+ 	if ((vmbus_proto_version == VERSION_WS2008) ||
+-	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
++	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
++	    !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
+ 		/*
+ 		 * Prior to win8, all channel interrupts are
+ 		 * delivered on cpu 0.
+ 		 * Also if the channel is not a performance critical
+ 		 * channel, bind it to cpu 0.
++		 * In case alloc_cpumask_var() fails, bind it to cpu 0.
+ 		 */
+ 		channel->numa_node = 0;
+ 		channel->target_cpu = 0;
+@@ -653,7 +655,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 		cpumask_clear(alloced_mask);
+ 	}
+ 
+-	cpumask_xor(&available_mask, alloced_mask,
++	cpumask_xor(available_mask, alloced_mask,
+ 		    cpumask_of_node(primary->numa_node));
+ 
+ 	cur_cpu = -1;
+@@ -671,10 +673,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	}
+ 
+ 	while (true) {
+-		cur_cpu = cpumask_next(cur_cpu, &available_mask);
++		cur_cpu = cpumask_next(cur_cpu, available_mask);
+ 		if (cur_cpu >= nr_cpu_ids) {
+ 			cur_cpu = -1;
+-			cpumask_copy(&available_mask,
++			cpumask_copy(available_mask,
+ 				     cpumask_of_node(primary->numa_node));
+ 			continue;
+ 		}
+@@ -704,6 +706,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 
+ 	channel->target_cpu = cur_cpu;
+ 	channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
++
++	free_cpumask_var(available_mask);
+ }
+ 
+ static void vmbus_wait_for_unload(void)
+diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
+index 7718e58dbda5..7688dab32f6e 100644
+--- a/drivers/hwmon/pmbus/pmbus.c
++++ b/drivers/hwmon/pmbus/pmbus.c
+@@ -118,6 +118,8 @@ static int pmbus_identify(struct i2c_client *client,
+ 		} else {
+ 			info->pages = 1;
+ 		}
++
++		pmbus_clear_faults(client);
+ 	}
+ 
+ 	if (pmbus_check_byte_register(client, 0, PMBUS_VOUT_MODE)) {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 82c3754e21e3..2e2b5851139c 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2015,7 +2015,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 	if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK))
+ 		client->flags |= I2C_CLIENT_PEC;
+ 
+-	pmbus_clear_faults(client);
++	if (data->info->pages)
++		pmbus_clear_faults(client);
++	else
++		pmbus_clear_fault_page(client, -1);
+ 
+ 	if (info->identify) {
+ 		ret = (*info->identify)(client, info);
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 7838af58f92d..9d611dd268e1 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -290,9 +290,19 @@ static int pwm_fan_remove(struct platform_device *pdev)
+ static int pwm_fan_suspend(struct device *dev)
+ {
+ 	struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
++	struct pwm_args args;
++	int ret;
++
++	pwm_get_args(ctx->pwm, &args);
++
++	if (ctx->pwm_value) {
++		ret = pwm_config(ctx->pwm, 0, args.period);
++		if (ret < 0)
++			return ret;
+ 
+-	if (ctx->pwm_value)
+ 		pwm_disable(ctx->pwm);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 320d29df17e1..8c1d53f7af83 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -147,6 +147,10 @@ static int etb_enable(struct coresight_device *csdev, u32 mode)
+ 	if (val == CS_MODE_PERF)
+ 		return -EBUSY;
+ 
++	/* Don't let perf disturb sysFS sessions */
++	if (val == CS_MODE_SYSFS && mode == CS_MODE_PERF)
++		return -EBUSY;
++
+ 	/* Nothing to do, the tracer is already enabled. */
+ 	if (val == CS_MODE_SYSFS)
+ 		goto out;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 3c1c817f6968..e152716bf07f 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -812,8 +812,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
+ 				     num * adap->timeout);
+-	if (!time_left) {
++
++	/* cleanup DMA if it couldn't complete properly due to an error */
++	if (priv->dma_direction != DMA_NONE)
+ 		rcar_i2c_cleanup_dma(priv);
++
++	if (!time_left) {
+ 		rcar_i2c_init(priv);
+ 		ret = -ETIMEDOUT;
+ 	} else if (priv->flags & ID_NACK) {
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 44b516863c9d..75d2f73582a3 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *idev = pf->indio_dev;
+ 	struct at91_adc_state *st = iio_priv(idev);
++	struct iio_chan_spec const *chan;
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < idev->masklength; i++) {
+ 		if (!test_bit(i, idev->active_scan_mask))
+ 			continue;
+-		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i));
++		chan = idev->channels + i;
++		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel));
+ 		j++;
+ 	}
+ 
+@@ -279,6 +281,8 @@ static void handle_adc_eoc_trigger(int irq, struct iio_dev *idev)
+ 		iio_trigger_poll(idev->trig);
+ 	} else {
+ 		st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
++		/* Needed to ACK the DRDY interruption */
++		at91_adc_readl(st, AT91_ADC_LCDR);
+ 		st->done = true;
+ 		wake_up_interruptible(&st->wq_data_avail);
+ 	}
+diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
+index ea264fa9e567..929c617db364 100644
+--- a/drivers/iio/adc/fsl-imx25-gcq.c
++++ b/drivers/iio/adc/fsl-imx25-gcq.c
+@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 		ret = of_property_read_u32(child, "reg", &reg);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to get reg property\n");
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (reg >= MX25_NUM_CFGS) {
+ 			dev_err(dev,
+ 				"reg value is greater than the number of available configuration registers\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			if (IS_ERR(priv->vref[refp])) {
+ 				dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.",
+ 					mx25_gcq_refp_names[refp]);
++				of_node_put(child);
+ 				return PTR_ERR(priv->vref[refp]);
+ 			}
+ 			priv->channel_vref_mv[reg] =
+@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			break;
+ 		default:
+ 			dev_err(dev, "Invalid positive reference %d\n", refp);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 
+ 		if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) {
+ 			dev_err(dev, "Invalid fsl,adc-refp property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 		if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) {
+ 			dev_err(dev, "Invalid fsl,adc-refn property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
+index bf4fc40ec84d..2f98cb2a3b96 100644
+--- a/drivers/iio/dac/ad5064.c
++++ b/drivers/iio/dac/ad5064.c
+@@ -808,6 +808,40 @@ static int ad5064_set_config(struct ad5064_state *st, unsigned int val)
+ 	return ad5064_write(st, cmd, 0, val, 0);
+ }
+ 
++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev)
++{
++	unsigned int i;
++	int ret;
++
++	for (i = 0; i < ad5064_num_vref(st); ++i)
++		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++
++	if (!st->chip_info->internal_vref)
++		return devm_regulator_bulk_get(dev, ad5064_num_vref(st),
++					       st->vref_reg);
++
++	/*
++	 * This assumes that when the regulator has an internal VREF
++	 * there is only one external VREF connection, which is
++	 * currently the case for all supported devices.
++	 */
++	st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref");
++	if (!IS_ERR(st->vref_reg[0].consumer))
++		return 0;
++
++	ret = PTR_ERR(st->vref_reg[0].consumer);
++	if (ret != -ENODEV)
++		return ret;
++
++	/* If no external regulator was supplied use the internal VREF */
++	st->use_internal_vref = true;
++	ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
++	if (ret)
++		dev_err(dev, "Failed to enable internal vref: %d\n", ret);
++
++	return ret;
++}
++
+ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 			const char *name, ad5064_write_func write)
+ {
+@@ -828,22 +862,11 @@ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 	st->dev = dev;
+ 	st->write = write;
+ 
+-	for (i = 0; i < ad5064_num_vref(st); ++i)
+-		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++	ret = ad5064_request_vref(st, dev);
++	if (ret)
++		return ret;
+ 
+-	ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st),
+-		st->vref_reg);
+-	if (ret) {
+-		if (!st->chip_info->internal_vref)
+-			return ret;
+-		st->use_internal_vref = true;
+-		ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
+-		if (ret) {
+-			dev_err(dev, "Failed to enable internal vref: %d\n",
+-				ret);
+-			return ret;
+-		}
+-	} else {
++	if (!st->use_internal_vref) {
+ 		ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 31c7efaf8e7a..63406cd212a7 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -516,7 +516,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
+ 	ret = get_perf_mad(p->ibdev, p->port_num, tab_attr->attr_id, &data,
+ 			40 + offset / 8, sizeof(data));
+ 	if (ret < 0)
+-		return sprintf(buf, "N/A (no PMA)\n");
++		return ret;
+ 
+ 	switch (width) {
+ 	case 4:
+@@ -1061,10 +1061,12 @@ static int add_port(struct ib_device *device, int port_num,
+ 		goto err_put;
+ 	}
+ 
+-	p->pma_table = get_counter_table(device, port_num);
+-	ret = sysfs_create_group(&p->kobj, p->pma_table);
+-	if (ret)
+-		goto err_put_gid_attrs;
++	if (device->process_mad) {
++		p->pma_table = get_counter_table(device, port_num);
++		ret = sysfs_create_group(&p->kobj, p->pma_table);
++		if (ret)
++			goto err_put_gid_attrs;
++	}
+ 
+ 	p->gid_group.name  = "gids";
+ 	p->gid_group.attrs = alloc_group_attrs(show_port_gid, attr.gid_tbl_len);
+@@ -1177,7 +1179,8 @@ err_free_gid:
+ 	p->gid_group.attrs = NULL;
+ 
+ err_remove_pma:
+-	sysfs_remove_group(&p->kobj, p->pma_table);
++	if (p->pma_table)
++		sysfs_remove_group(&p->kobj, p->pma_table);
+ 
+ err_put_gid_attrs:
+ 	kobject_put(&p->gid_attr_group->kobj);
+@@ -1289,7 +1292,9 @@ static void free_port_list_attributes(struct ib_device *device)
+ 			kfree(port->hw_stats);
+ 			free_hsag(&port->kobj, port->hw_stats_ag);
+ 		}
+-		sysfs_remove_group(p, port->pma_table);
++
++		if (port->pma_table)
++			sysfs_remove_group(p, port->pma_table);
+ 		sysfs_remove_group(p, &port->pkey_group);
+ 		sysfs_remove_group(p, &port->gid_group);
+ 		sysfs_remove_group(&port->gid_attr_group->kobj,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 6ad0d46ab879..249efa0a6aba 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -360,7 +360,8 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ 	}
+ 
+ 	/* Make sure the HW is stopped! */
+-	bnxt_qplib_nq_stop_irq(nq, true);
++	if (nq->requested)
++		bnxt_qplib_nq_stop_irq(nq, true);
+ 
+ 	if (nq->bar_reg_iomem)
+ 		iounmap(nq->bar_reg_iomem);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 2852d350ada1..6637df77d236 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -309,8 +309,17 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		rcfw->aeq_handler(rcfw, qp_event, qp);
+ 		break;
+ 	default:
+-		/* Command Response */
+-		spin_lock_irqsave(&cmdq->lock, flags);
++		/*
++		 * Command Response
+		 * cmdq->lock needs to be acquired to synchronize
++		 * the command send and completion reaping. This function
++		 * is always called with creq->lock held. Using
++		 * the nested variant of spin_lock.
++		 *
++		 */
++
++		spin_lock_irqsave_nested(&cmdq->lock, flags,
++					 SINGLE_DEPTH_NESTING);
+ 		cookie = le16_to_cpu(qp_event->cookie);
+ 		mcookie = qp_event->cookie;
+ 		blocked = cookie & RCFW_CMD_IS_BLOCKING;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 73339fd47dd8..addd432f3f38 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -691,7 +691,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 		init_completion(&ent->compl);
+ 		INIT_WORK(&ent->work, cache_work_func);
+ 		INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
+-		queue_work(cache->wq, &ent->work);
+ 
+ 		if (i > MR_CACHE_LAST_STD_ENTRY) {
+ 			mlx5_odp_init_mr_cache_entry(ent);
+@@ -711,6 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 			ent->limit = dev->mdev->profile->mr_cache[i].limit;
+ 		else
+ 			ent->limit = 0;
++		queue_work(cache->wq, &ent->work);
+ 	}
+ 
+ 	err = mlx5_mr_cache_debugfs_init(dev);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 01eae67d5a6e..e260f6a156ed 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3264,7 +3264,9 @@ static bool modify_dci_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state new
+ 	int req = IB_QP_STATE;
+ 	int opt = 0;
+ 
+-	if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
++	if (new_state == IB_QPS_RESET) {
++		return is_valid_mask(attr_mask, req, opt);
++	} else if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ 		req |= IB_QP_PKEY_INDEX | IB_QP_PORT;
+ 		return is_valid_mask(attr_mask, req, opt);
+ 	} else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_INIT) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 5b57de30dee4..b8104d50b1a0 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -682,6 +682,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		rxe_advance_resp_resource(qp);
+ 
+ 		res->type		= RXE_READ_MASK;
++		res->replay		= 0;
+ 
+ 		res->read.va		= qp->resp.va;
+ 		res->read.va_org	= qp->resp.va;
+@@ -752,7 +753,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		state = RESPST_DONE;
+ 	} else {
+ 		qp->resp.res = NULL;
+-		qp->resp.opcode = -1;
++		if (!res->replay)
++			qp->resp.opcode = -1;
+ 		if (psn_compare(res->cur_psn, qp->resp.psn) >= 0)
+ 			qp->resp.psn = res->cur_psn;
+ 		state = RESPST_CLEANUP;
+@@ -814,6 +816,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 
+ 	/* next expected psn, read handles this separately */
+ 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
++	qp->resp.ack_psn = qp->resp.psn;
+ 
+ 	qp->resp.opcode = pkt->opcode;
+ 	qp->resp.status = IB_WC_SUCCESS;
+@@ -1060,7 +1063,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 					  struct rxe_pkt_info *pkt)
+ {
+ 	enum resp_states rc;
+-	u32 prev_psn = (qp->resp.psn - 1) & BTH_PSN_MASK;
++	u32 prev_psn = (qp->resp.ack_psn - 1) & BTH_PSN_MASK;
+ 
+ 	if (pkt->mask & RXE_SEND_MASK ||
+ 	    pkt->mask & RXE_WRITE_MASK) {
+@@ -1103,6 +1106,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 			res->state = (pkt->psn == res->first_psn) ?
+ 					rdatm_res_state_new :
+ 					rdatm_res_state_replay;
++			res->replay = 1;
+ 
+ 			/* Reset the resource, except length. */
+ 			res->read.va_org = iova;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index af1470d29391..332a16dad2a7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -171,6 +171,7 @@ enum rdatm_res_state {
+ 
+ struct resp_res {
+ 	int			type;
++	int			replay;
+ 	u32			first_psn;
+ 	u32			last_psn;
+ 	u32			cur_psn;
+@@ -195,6 +196,7 @@ struct rxe_resp_info {
+ 	enum rxe_qp_state	state;
+ 	u32			msn;
+ 	u32			psn;
++	u32			ack_psn;
+ 	int			opcode;
+ 	int			drop_msg;
+ 	int			goto_error;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index a620701f9d41..1ac2bbc84671 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1439,11 +1439,15 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+ 		netif_tx_unlock_bh(dev);
+ 
+-		if (skb->protocol == htons(ETH_P_IP))
++		if (skb->protocol == htons(ETH_P_IP)) {
++			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
++		}
+ #if IS_ENABLED(CONFIG_IPV6)
+-		else if (skb->protocol == htons(ETH_P_IPV6))
++		else if (skb->protocol == htons(ETH_P_IPV6)) {
++			memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++		}
+ #endif
+ 		dev_kfree_skb_any(skb);
+ 
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 5349e22b5c78..29646004a4a7 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -469,6 +469,9 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ 	bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
+ 	void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	if (stage1) {
+ 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+ 
+@@ -510,6 +513,9 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
+ 	struct arm_smmu_domain *smmu_domain = cookie;
+ 	void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
+ }
+ 
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index b1b47a40a278..faa7d61b9d6c 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -124,6 +124,7 @@ static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
+ 		break;
+ 	case IRQ_TYPE_EDGE_BOTH:
+ 		pdc_type = PDC_EDGE_DUAL;
++		type = IRQ_TYPE_EDGE_RISING;
+ 		break;
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		pdc_type = PDC_LEVEL_HIGH;
+diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
+index ed9cc977c8b3..f6427e805150 100644
+--- a/drivers/lightnvm/pblk-core.c
++++ b/drivers/lightnvm/pblk-core.c
+@@ -1538,13 +1538,14 @@ struct pblk_line *pblk_line_replace_data(struct pblk *pblk)
+ 	struct pblk_line *cur, *new = NULL;
+ 	unsigned int left_seblks;
+ 
+-	cur = l_mg->data_line;
+ 	new = l_mg->data_next;
+ 	if (!new)
+ 		goto out;
+-	l_mg->data_line = new;
+ 
+ 	spin_lock(&l_mg->free_lock);
++	cur = l_mg->data_line;
++	l_mg->data_line = new;
++
+ 	pblk_line_setup_metadata(new, l_mg, &pblk->lm);
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index d83466b3821b..958bda8a69b7 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -956,12 +956,14 @@ next:
+ 		}
+ 	}
+ 
+-	spin_lock(&l_mg->free_lock);
+ 	if (!open_lines) {
++		spin_lock(&l_mg->free_lock);
+ 		WARN_ON_ONCE(!test_and_clear_bit(meta_line,
+ 							&l_mg->meta_bitmap));
++		spin_unlock(&l_mg->free_lock);
+ 		pblk_line_replace_data(pblk);
+ 	} else {
++		spin_lock(&l_mg->free_lock);
+ 		/* Allocate next line for preparation */
+ 		l_mg->data_next = pblk_line_get(pblk);
+ 		if (l_mg->data_next) {
+@@ -969,8 +971,8 @@ next:
+ 			l_mg->data_next->type = PBLK_LINETYPE_DATA;
+ 			is_next = 1;
+ 		}
++		spin_unlock(&l_mg->free_lock);
+ 	}
+-	spin_unlock(&l_mg->free_lock);
+ 
+ 	if (is_next)
+ 		pblk_line_erase(pblk, l_mg->data_next);
+diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
+index 88a0a7c407aa..432f7d94d369 100644
+--- a/drivers/lightnvm/pblk-sysfs.c
++++ b/drivers/lightnvm/pblk-sysfs.c
+@@ -262,8 +262,14 @@ static ssize_t pblk_sysfs_lines(struct pblk *pblk, char *page)
+ 		sec_in_line = l_mg->data_line->sec_in_line;
+ 		meta_weight = bitmap_weight(&l_mg->meta_bitmap,
+ 							PBLK_DATA_LINES);
+-		map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
++
++		spin_lock(&l_mg->data_line->lock);
++		if (l_mg->data_line->map_bitmap)
++			map_weight = bitmap_weight(l_mg->data_line->map_bitmap,
+ 							lm->sec_per_line);
++		else
++			map_weight = 0;
++		spin_unlock(&l_mg->data_line->lock);
+ 	}
+ 	spin_unlock(&l_mg->free_lock);
+ 
+diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
+index f353e52941f5..89ac60d4849e 100644
+--- a/drivers/lightnvm/pblk-write.c
++++ b/drivers/lightnvm/pblk-write.c
+@@ -417,12 +417,11 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
+ 			rqd->ppa_list[i] = addr_to_gen_ppa(pblk, paddr, id);
+ 	}
+ 
++	spin_lock(&l_mg->close_lock);
+ 	emeta->mem += rq_len;
+-	if (emeta->mem >= lm->emeta_len[0]) {
+-		spin_lock(&l_mg->close_lock);
++	if (emeta->mem >= lm->emeta_len[0])
+ 		list_del(&meta_line->list);
+-		spin_unlock(&l_mg->close_lock);
+-	}
++	spin_unlock(&l_mg->close_lock);
+ 
+ 	pblk_down_page(pblk, rqd->ppa_list, rqd->nr_ppas);
+ 
+@@ -491,14 +490,15 @@ static struct pblk_line *pblk_should_submit_meta_io(struct pblk *pblk,
+ 	struct pblk_line *meta_line;
+ 
+ 	spin_lock(&l_mg->close_lock);
+-retry:
+ 	if (list_empty(&l_mg->emeta_list)) {
+ 		spin_unlock(&l_mg->close_lock);
+ 		return NULL;
+ 	}
+ 	meta_line = list_first_entry(&l_mg->emeta_list, struct pblk_line, list);
+-	if (meta_line->emeta->mem >= lm->emeta_len[0])
+-		goto retry;
++	if (meta_line->emeta->mem >= lm->emeta_len[0]) {
++		spin_unlock(&l_mg->close_lock);
++		return NULL;
++	}
+ 	spin_unlock(&l_mg->close_lock);
+ 
+ 	if (!pblk_valid_meta_ppa(pblk, meta_line, data_rqd))
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 311e91b1a14f..256f18b67e8a 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -461,8 +461,11 @@ static int __init acpi_pcc_probe(void)
+ 	count = acpi_table_parse_entries_array(ACPI_SIG_PCCT,
+ 			sizeof(struct acpi_table_pcct), proc,
+ 			ACPI_PCCT_TYPE_RESERVED, MAX_PCC_SUBSPACES);
+-	if (count == 0 || count > MAX_PCC_SUBSPACES) {
+-		pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
++	if (count <= 0 || count > MAX_PCC_SUBSPACES) {
++		if (count < 0)
++			pr_warn("Error parsing PCC subspaces from PCCT\n");
++		else
++			pr_warn("Invalid PCCT: %d PCC subspaces\n", count);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 547c9eedc2f4..d681524f82a4 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2380,7 +2380,7 @@ static int refill_keybuf_fn(struct btree_op *op, struct btree *b,
+ 	struct keybuf *buf = refill->buf;
+ 	int ret = MAP_CONTINUE;
+ 
+-	if (bkey_cmp(k, refill->end) >= 0) {
++	if (bkey_cmp(k, refill->end) > 0) {
+ 		ret = MAP_DONE;
+ 		goto out;
+ 	}
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index ae67f5fa8047..9d2fa1359029 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -843,7 +843,7 @@ static void cached_dev_read_done_bh(struct closure *cl)
+ 
+ 	bch_mark_cache_accounting(s->iop.c, s->d,
+ 				  !s->cache_missed, s->iop.bypass);
+-	trace_bcache_read(s->orig_bio, !s->cache_miss, s->iop.bypass);
++	trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
+ 
+ 	if (s->iop.status)
+ 		continue_at_nobarrier(cl, cached_dev_read_error, bcache_wq);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index fa4058e43202..6e5220554220 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1131,11 +1131,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c,
+ 	}
+ 
+ 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
+-		bch_sectors_dirty_init(&dc->disk);
+ 		atomic_set(&dc->has_dirty, 1);
+ 		bch_writeback_queue(dc);
+ 	}
+ 
++	bch_sectors_dirty_init(&dc->disk);
++
+ 	bch_cached_dev_run(dc);
+ 	bcache_device_link(&dc->disk, c, "bdev");
+ 
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 225b15aa0340..34819f2c257d 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -263,6 +263,7 @@ STORE(__cached_dev)
+ 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+ 	d_strtoul(writeback_rate_i_term_inverse);
+ 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
++	d_strtoul_nonzero(writeback_rate_minimum);
+ 
+ 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+ 
+@@ -389,6 +390,7 @@ static struct attribute *bch_cached_dev_files[] = {
+ 	&sysfs_writeback_rate_update_seconds,
+ 	&sysfs_writeback_rate_i_term_inverse,
+ 	&sysfs_writeback_rate_p_term_inverse,
++	&sysfs_writeback_rate_minimum,
+ 	&sysfs_writeback_rate_debug,
+ 	&sysfs_errors,
+ 	&sysfs_io_error_limit,
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index b810ea77e6b1..f666778ad237 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1720,8 +1720,7 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_fla
+ }
+ 
+ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
+-		       int ioctl_flags,
+-		       struct dm_ioctl **param, int *param_flags)
++		       int ioctl_flags, struct dm_ioctl **param, int *param_flags)
+ {
+ 	struct dm_ioctl *dmi;
+ 	int secure_data;
+@@ -1762,18 +1761,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 
+ 	*param_flags |= DM_PARAMS_MALLOC;
+ 
+-	if (copy_from_user(dmi, user, param_kernel->data_size))
+-		goto bad;
++	/* Copy from param_kernel (which was already copied from user) */
++	memcpy(dmi, param_kernel, minimum_data_size);
+ 
+-data_copied:
+-	/*
+-	 * Abort if something changed the ioctl data while it was being copied.
+-	 */
+-	if (dmi->data_size != param_kernel->data_size) {
+-		DMERR("rejecting ioctl: data size modified while processing parameters");
++	if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
++			   param_kernel->data_size - minimum_data_size))
+ 		goto bad;
+-	}
+-
++data_copied:
+ 	/* Wipe the user buffer so we do not return it to userspace */
+ 	if (secure_data && clear_user(user, param_kernel->data_size))
+ 		goto bad;
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 969954915566..fa68336560c3 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -99,7 +99,7 @@ struct dmz_mblock {
+ 	struct rb_node		node;
+ 	struct list_head	link;
+ 	sector_t		no;
+-	atomic_t		ref;
++	unsigned int		ref;
+ 	unsigned long		state;
+ 	struct page		*page;
+ 	void			*data;
+@@ -296,7 +296,7 @@ static struct dmz_mblock *dmz_alloc_mblock(struct dmz_metadata *zmd,
+ 
+ 	RB_CLEAR_NODE(&mblk->node);
+ 	INIT_LIST_HEAD(&mblk->link);
+-	atomic_set(&mblk->ref, 0);
++	mblk->ref = 0;
+ 	mblk->state = 0;
+ 	mblk->no = mblk_no;
+ 	mblk->data = page_address(mblk->page);
+@@ -339,10 +339,11 @@ static void dmz_insert_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk)
+ }
+ 
+ /*
+- * Lookup a metadata block in the rbtree.
++ * Lookup a metadata block in the rbtree. If the block is found, increment
++ * its reference count.
+  */
+-static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+-					    sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_fast(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+ 	struct rb_root *root = &zmd->mblk_rbtree;
+ 	struct rb_node *node = root->rb_node;
+@@ -350,8 +351,17 @@ static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+ 
+ 	while (node) {
+ 		mblk = container_of(node, struct dmz_mblock, node);
+-		if (mblk->no == mblk_no)
++		if (mblk->no == mblk_no) {
++			/*
++			 * If this is the first reference to the block,
++			 * remove it from the LRU list.
++			 */
++			mblk->ref++;
++			if (mblk->ref == 1 &&
++			    !test_bit(DMZ_META_DIRTY, &mblk->state))
++				list_del_init(&mblk->link);
+ 			return mblk;
++		}
+ 		node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
+ 	}
+ 
+@@ -382,32 +392,47 @@ static void dmz_mblock_bio_end_io(struct bio *bio)
+ }
+ 
+ /*
+- * Read a metadata block from disk.
++ * Read an uncached metadata block from disk and add it to the cache.
+  */
+-static struct dmz_mblock *dmz_fetch_mblock(struct dmz_metadata *zmd,
+-					   sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+-	struct dmz_mblock *mblk;
++	struct dmz_mblock *mblk, *m;
+ 	sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no;
+ 	struct bio *bio;
+ 
+-	/* Get block and insert it */
++	/* Get a new block and a BIO to read it */
+ 	mblk = dmz_alloc_mblock(zmd, mblk_no);
+ 	if (!mblk)
+ 		return NULL;
+ 
+-	spin_lock(&zmd->mblk_lock);
+-	atomic_inc(&mblk->ref);
+-	set_bit(DMZ_META_READING, &mblk->state);
+-	dmz_insert_mblock(zmd, mblk);
+-	spin_unlock(&zmd->mblk_lock);
+-
+ 	bio = bio_alloc(GFP_NOIO, 1);
+ 	if (!bio) {
+ 		dmz_free_mblock(zmd, mblk);
+ 		return NULL;
+ 	}
+ 
++	spin_lock(&zmd->mblk_lock);
++
++	/*
++	 * Make sure that another context did not start reading
++	 * the block already.
++	 */
++	m = dmz_get_mblock_fast(zmd, mblk_no);
++	if (m) {
++		spin_unlock(&zmd->mblk_lock);
++		dmz_free_mblock(zmd, mblk);
++		bio_put(bio);
++		return m;
++	}
++
++	mblk->ref++;
++	set_bit(DMZ_META_READING, &mblk->state);
++	dmz_insert_mblock(zmd, mblk);
++
++	spin_unlock(&zmd->mblk_lock);
++
++	/* Submit read BIO */
+ 	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+ 	bio_set_dev(bio, zmd->dev->bdev);
+ 	bio->bi_private = mblk;
+@@ -484,7 +509,8 @@ static void dmz_release_mblock(struct dmz_metadata *zmd,
+ 
+ 	spin_lock(&zmd->mblk_lock);
+ 
+-	if (atomic_dec_and_test(&mblk->ref)) {
++	mblk->ref--;
++	if (mblk->ref == 0) {
+ 		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ 			rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 			dmz_free_mblock(zmd, mblk);
+@@ -508,18 +534,12 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
+ 
+ 	/* Check rbtree */
+ 	spin_lock(&zmd->mblk_lock);
+-	mblk = dmz_lookup_mblock(zmd, mblk_no);
+-	if (mblk) {
+-		/* Cache hit: remove block from LRU list */
+-		if (atomic_inc_return(&mblk->ref) == 1 &&
+-		    !test_bit(DMZ_META_DIRTY, &mblk->state))
+-			list_del_init(&mblk->link);
+-	}
++	mblk = dmz_get_mblock_fast(zmd, mblk_no);
+ 	spin_unlock(&zmd->mblk_lock);
+ 
+ 	if (!mblk) {
+ 		/* Cache miss: read the block from disk */
+-		mblk = dmz_fetch_mblock(zmd, mblk_no);
++		mblk = dmz_get_mblock_slow(zmd, mblk_no);
+ 		if (!mblk)
+ 			return ERR_PTR(-ENOMEM);
+ 	}
+@@ -753,7 +773,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ 
+ 		spin_lock(&zmd->mblk_lock);
+ 		clear_bit(DMZ_META_DIRTY, &mblk->state);
+-		if (atomic_read(&mblk->ref) == 0)
++		if (mblk->ref == 0)
+ 			list_add_tail(&mblk->link, &zmd->mblk_lru_list);
+ 		spin_unlock(&zmd->mblk_lock);
+ 	}
+@@ -2308,7 +2328,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 		mblk = list_first_entry(&zmd->mblk_dirty_list,
+ 					struct dmz_mblock, link);
+ 		dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
++			     (u64)mblk->no, mblk->ref);
+ 		list_del_init(&mblk->link);
+ 		rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 		dmz_free_mblock(zmd, mblk);
+@@ -2326,8 +2346,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 	root = &zmd->mblk_rbtree;
+ 	rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
+ 		dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
+-		atomic_set(&mblk->ref, 0);
++			     (u64)mblk->no, mblk->ref);
++		mblk->ref = 0;
+ 		dmz_free_mblock(zmd, mblk);
+ 	}
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 994aed2f9dff..71665e2c30eb 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -455,10 +455,11 @@ static void md_end_flush(struct bio *fbio)
+ 	rdev_dec_pending(rdev, mddev);
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -512,10 +513,11 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ 	rcu_read_unlock();
+ 
+ 	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0)
++		if (bio->bi_iter.bi_size == 0) {
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-		else {
++			mempool_free(fi, mddev->flush_pool);
++		} else {
+ 			INIT_WORK(&fi->flush_work, submit_flushes);
+ 			queue_work(md_wq, &fi->flush_work);
+ 		}
+@@ -5907,14 +5909,6 @@ static void __md_stop(struct mddev *mddev)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+ 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-}
+-
+-void md_stop(struct mddev *mddev)
+-{
+-	/* stop the array and free an attached data structures.
+-	 * This is called from dm-raid
+-	 */
+-	__md_stop(mddev);
+ 	if (mddev->flush_bio_pool) {
+ 		mempool_destroy(mddev->flush_bio_pool);
+ 		mddev->flush_bio_pool = NULL;
+@@ -5923,6 +5917,14 @@ void md_stop(struct mddev *mddev)
+ 		mempool_destroy(mddev->flush_pool);
+ 		mddev->flush_pool = NULL;
+ 	}
++}
++
++void md_stop(struct mddev *mddev)
++{
++	/* stop the array and free an attached data structures.
++	 * This is called from dm-raid
++	 */
++	__md_stop(mddev);
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+ }
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 8e05c1092aef..c9362463d266 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1736,6 +1736,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	 */
+ 	if (rdev->saved_raid_disk >= 0 &&
+ 	    rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		first = last = rdev->saved_raid_disk;
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 8c93d44a052c..e555221fb75b 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1808,6 +1808,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		first = last = rdev->raid_disk;
+ 
+ 	if (rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->geo.raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		mirror = rdev->saved_raid_disk;
+ 	else
+diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
+index b7fad0ec5710..fecba7ddcd00 100644
+--- a/drivers/media/cec/cec-adap.c
++++ b/drivers/media/cec/cec-adap.c
+@@ -325,7 +325,7 @@ static void cec_data_completed(struct cec_data *data)
+  *
+  * This function is called with adap->lock held.
+  */
+-static void cec_data_cancel(struct cec_data *data)
++static void cec_data_cancel(struct cec_data *data, u8 tx_status)
+ {
+ 	/*
+ 	 * It's either the current transmit, or it is a pending
+@@ -340,13 +340,11 @@ static void cec_data_cancel(struct cec_data *data)
+ 	}
+ 
+ 	if (data->msg.tx_status & CEC_TX_STATUS_OK) {
+-		/* Mark the canceled RX as a timeout */
+ 		data->msg.rx_ts = ktime_get_ns();
+-		data->msg.rx_status = CEC_RX_STATUS_TIMEOUT;
++		data->msg.rx_status = CEC_RX_STATUS_ABORTED;
+ 	} else {
+-		/* Mark the canceled TX as an error */
+ 		data->msg.tx_ts = ktime_get_ns();
+-		data->msg.tx_status |= CEC_TX_STATUS_ERROR |
++		data->msg.tx_status |= tx_status |
+ 				       CEC_TX_STATUS_MAX_RETRIES;
+ 		data->msg.tx_error_cnt++;
+ 		data->attempts = 0;
+@@ -374,15 +372,15 @@ static void cec_flush(struct cec_adapter *adap)
+ 	while (!list_empty(&adap->transmit_queue)) {
+ 		data = list_first_entry(&adap->transmit_queue,
+ 					struct cec_data, list);
+-		cec_data_cancel(data);
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 	}
+ 	if (adap->transmitting)
+-		cec_data_cancel(adap->transmitting);
++		cec_data_cancel(adap->transmitting, CEC_TX_STATUS_ABORTED);
+ 
+ 	/* Cancel the pending timeout work. */
+ 	list_for_each_entry_safe(data, n, &adap->wait_queue, list) {
+ 		if (cancel_delayed_work(&data->work))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_OK);
+ 		/*
+ 		 * If cancel_delayed_work returned false, then
+ 		 * the cec_wait_timeout function is running,
+@@ -458,12 +456,13 @@ int cec_thread_func(void *_adap)
+ 			 * so much traffic on the bus that the adapter was
+ 			 * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s).
+ 			 */
+-			dprintk(1, "%s: message %*ph timed out\n", __func__,
++			pr_warn("cec-%s: message %*ph timed out\n", adap->name,
+ 				adap->transmitting->msg.len,
+ 				adap->transmitting->msg.msg);
+ 			adap->tx_timeouts++;
+ 			/* Just give up on this. */
+-			cec_data_cancel(adap->transmitting);
++			cec_data_cancel(adap->transmitting,
++					CEC_TX_STATUS_TIMEOUT);
+ 			goto unlock;
+ 		}
+ 
+@@ -498,9 +497,11 @@ int cec_thread_func(void *_adap)
+ 		if (data->attempts) {
+ 			/* should be >= 3 data bit periods for a retry */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_RETRY;
+-		} else if (data->new_initiator) {
++		} else if (adap->last_initiator !=
++			   cec_msg_initiator(&data->msg)) {
+ 			/* should be >= 5 data bit periods for new initiator */
+ 			signal_free_time = CEC_SIGNAL_FREE_TIME_NEW_INITIATOR;
++			adap->last_initiator = cec_msg_initiator(&data->msg);
+ 		} else {
+ 			/*
+ 			 * should be >= 7 data bit periods for sending another
+@@ -514,7 +515,7 @@ int cec_thread_func(void *_adap)
+ 		/* Tell the adapter to transmit, cancel on error */
+ 		if (adap->ops->adap_transmit(adap, data->attempts,
+ 					     signal_free_time, &data->msg))
+-			cec_data_cancel(data);
++			cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+ unlock:
+ 		mutex_unlock(&adap->lock);
+@@ -685,9 +686,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 			struct cec_fh *fh, bool block)
+ {
+ 	struct cec_data *data;
+-	u8 last_initiator = 0xff;
+-	unsigned int timeout;
+-	int res = 0;
+ 
+ 	msg->rx_ts = 0;
+ 	msg->tx_ts = 0;
+@@ -797,23 +795,6 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	data->adap = adap;
+ 	data->blocking = block;
+ 
+-	/*
+-	 * Determine if this message follows a message from the same
+-	 * initiator. Needed to determine the free signal time later on.
+-	 */
+-	if (msg->len > 1) {
+-		if (!(list_empty(&adap->transmit_queue))) {
+-			const struct cec_data *last;
+-
+-			last = list_last_entry(&adap->transmit_queue,
+-					       const struct cec_data, list);
+-			last_initiator = cec_msg_initiator(&last->msg);
+-		} else if (adap->transmitting) {
+-			last_initiator =
+-				cec_msg_initiator(&adap->transmitting->msg);
+-		}
+-	}
+-	data->new_initiator = last_initiator != cec_msg_initiator(msg);
+ 	init_completion(&data->c);
+ 	INIT_DELAYED_WORK(&data->work, cec_wait_timeout);
+ 
+@@ -829,48 +810,23 @@ int cec_transmit_msg_fh(struct cec_adapter *adap, struct cec_msg *msg,
+ 	if (!block)
+ 		return 0;
+ 
+-	/*
+-	 * If we don't get a completion before this time something is really
+-	 * wrong and we time out.
+-	 */
+-	timeout = CEC_XFER_TIMEOUT_MS;
+-	/* Add the requested timeout if we have to wait for a reply as well */
+-	if (msg->timeout)
+-		timeout += msg->timeout;
+-
+ 	/*
+ 	 * Release the lock and wait, retake the lock afterwards.
+ 	 */
+ 	mutex_unlock(&adap->lock);
+-	res = wait_for_completion_killable_timeout(&data->c,
+-						   msecs_to_jiffies(timeout));
++	wait_for_completion_killable(&data->c);
++	if (!data->completed)
++		cancel_delayed_work_sync(&data->work);
+ 	mutex_lock(&adap->lock);
+ 
+-	if (data->completed) {
+-		/* The transmit completed (possibly with an error) */
+-		*msg = data->msg;
+-		kfree(data);
+-		return 0;
+-	}
+-	/*
+-	 * The wait for completion timed out or was interrupted, so mark this
+-	 * as non-blocking and disconnect from the filehandle since it is
+-	 * still 'in flight'. When it finally completes it will just drop the
+-	 * result silently.
+-	 */
+-	data->blocking = false;
+-	if (data->fh)
+-		list_del(&data->xfer_list);
+-	data->fh = NULL;
++	/* Cancel the transmit if it was interrupted */
++	if (!data->completed)
++		cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+ 
+-	if (res == 0) { /* timed out */
+-		/* Check if the reply or the transmit failed */
+-		if (msg->timeout && (msg->tx_status & CEC_TX_STATUS_OK))
+-			msg->rx_status = CEC_RX_STATUS_TIMEOUT;
+-		else
+-			msg->tx_status = CEC_TX_STATUS_MAX_RETRIES;
+-	}
+-	return res > 0 ? 0 : res;
++	/* The transmit completed (possibly with an error) */
++	*msg = data->msg;
++	kfree(data);
++	return 0;
+ }
+ 
+ /* Helper function to be used by drivers and this framework. */
+@@ -1028,6 +984,8 @@ void cec_received_msg_ts(struct cec_adapter *adap,
+ 	mutex_lock(&adap->lock);
+ 	dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);
+ 
++	adap->last_initiator = 0xff;
++
+ 	/* Check if this message was for us (directed or broadcast). */
+ 	if (!cec_msg_is_broadcast(msg))
+ 		valid_la = cec_has_log_addr(adap, msg_dest);
+@@ -1490,6 +1448,8 @@ void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block)
+ 	}
+ 
+ 	mutex_lock(&adap->devnode.lock);
++	adap->last_initiator = 0xff;
++
+ 	if ((adap->needs_hpd || list_empty(&adap->devnode.fhs)) &&
+ 	    adap->ops->adap_enable(adap, true)) {
+ 		mutex_unlock(&adap->devnode.lock);
+diff --git a/drivers/media/cec/cec-api.c b/drivers/media/cec/cec-api.c
+index 10b67fc40318..0199765fbae6 100644
+--- a/drivers/media/cec/cec-api.c
++++ b/drivers/media/cec/cec-api.c
+@@ -101,6 +101,23 @@ static long cec_adap_g_phys_addr(struct cec_adapter *adap,
+ 	return 0;
+ }
+ 
++static int cec_validate_phys_addr(u16 phys_addr)
++{
++	int i;
++
++	if (phys_addr == CEC_PHYS_ADDR_INVALID)
++		return 0;
++	for (i = 0; i < 16; i += 4)
++		if (phys_addr & (0xf << i))
++			break;
++	if (i == 16)
++		return 0;
++	for (i += 4; i < 16; i += 4)
++		if ((phys_addr & (0xf << i)) == 0)
++			return -EINVAL;
++	return 0;
++}
++
+ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 				 bool block, __u16 __user *parg)
+ {
+@@ -112,7 +129,7 @@ static long cec_adap_s_phys_addr(struct cec_adapter *adap, struct cec_fh *fh,
+ 	if (copy_from_user(&phys_addr, parg, sizeof(phys_addr)))
+ 		return -EFAULT;
+ 
+-	err = cec_phys_addr_validate(phys_addr, NULL, NULL);
++	err = cec_validate_phys_addr(phys_addr);
+ 	if (err)
+ 		return err;
+ 	mutex_lock(&adap->lock);
+diff --git a/drivers/media/cec/cec-edid.c b/drivers/media/cec/cec-edid.c
+index ec72ac1c0b91..f587e8eaefd8 100644
+--- a/drivers/media/cec/cec-edid.c
++++ b/drivers/media/cec/cec-edid.c
+@@ -10,66 +10,6 @@
+ #include <linux/types.h>
+ #include <media/cec.h>
+ 
+-/*
+- * This EDID is expected to be a CEA-861 compliant, which means that there are
+- * at least two blocks and one or more of the extensions blocks are CEA-861
+- * blocks.
+- *
+- * The returned location is guaranteed to be < size - 1.
+- */
+-static unsigned int cec_get_edid_spa_location(const u8 *edid, unsigned int size)
+-{
+-	unsigned int blocks = size / 128;
+-	unsigned int block;
+-	u8 d;
+-
+-	/* Sanity check: at least 2 blocks and a multiple of the block size */
+-	if (blocks < 2 || size % 128)
+-		return 0;
+-
+-	/*
+-	 * If there are fewer extension blocks than the size, then update
+-	 * 'blocks'. It is allowed to have more extension blocks than the size,
+-	 * since some hardware can only read e.g. 256 bytes of the EDID, even
+-	 * though more blocks are present. The first CEA-861 extension block
+-	 * should normally be in block 1 anyway.
+-	 */
+-	if (edid[0x7e] + 1 < blocks)
+-		blocks = edid[0x7e] + 1;
+-
+-	for (block = 1; block < blocks; block++) {
+-		unsigned int offset = block * 128;
+-
+-		/* Skip any non-CEA-861 extension blocks */
+-		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
+-			continue;
+-
+-		/* search Vendor Specific Data Block (tag 3) */
+-		d = edid[offset + 2] & 0x7f;
+-		/* Check if there are Data Blocks */
+-		if (d <= 4)
+-			continue;
+-		if (d > 4) {
+-			unsigned int i = offset + 4;
+-			unsigned int end = offset + d;
+-
+-			/* Note: 'end' is always < 'size' */
+-			do {
+-				u8 tag = edid[i] >> 5;
+-				u8 len = edid[i] & 0x1f;
+-
+-				if (tag == 3 && len >= 5 && i + len <= end &&
+-				    edid[i + 1] == 0x03 &&
+-				    edid[i + 2] == 0x0c &&
+-				    edid[i + 3] == 0x00)
+-					return i + 4;
+-				i += len + 1;
+-			} while (i < end);
+-		}
+-	}
+-	return 0;
+-}
+-
+ u16 cec_get_edid_phys_addr(const u8 *edid, unsigned int size,
+ 			   unsigned int *offset)
+ {
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+index 3a3dc23c560c..a4341205c197 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
+@@ -602,14 +602,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE170M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -658,14 +658,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][5] = { 3138, 657, 810 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][6] = { 731, 680, 3048 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][1] = { 3046, 3054, 886 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][2] = { 0, 3058, 3031 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][3] = { 360, 3079, 877 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][4] = { 3103, 587, 3027 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][5] = { 3116, 723, 861 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][6] = { 789, 744, 3025 },
+-	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][1] = { 3046, 3054, 886 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][2] = { 0, 3058, 3031 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][3] = { 360, 3079, 877 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][4] = { 3103, 587, 3027 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][5] = { 3116, 723, 861 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][6] = { 789, 744, 3025 },
++	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2941, 2950, 546 },
+ 	[V4L2_COLORSPACE_SMPTE240M][V4L2_XFER_FUNC_SMPTE240M][2] = { 0, 2954, 2924 },
+@@ -714,14 +714,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_REC709][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -770,14 +770,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][5] = { 2599, 901, 909 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][6] = { 991, 0, 2966 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SRGB][7] = { 800, 799, 800 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][1] = { 2989, 3120, 1180 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][2] = { 1913, 3011, 3009 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][3] = { 1836, 3099, 1105 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][4] = { 2627, 413, 2966 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][5] = { 2576, 943, 951 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][6] = { 1026, 0, 2942 },
+-	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][1] = { 2989, 3120, 1180 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][2] = { 1913, 3011, 3009 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][3] = { 1836, 3099, 1105 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][4] = { 2627, 413, 2966 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][5] = { 2576, 943, 951 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][6] = { 1026, 0, 2942 },
++	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][1] = { 2879, 3022, 874 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_M][V4L2_XFER_FUNC_SMPTE240M][2] = { 1688, 2903, 2901 },
+@@ -826,14 +826,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][5] = { 3001, 800, 799 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3071 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][2] = { 1068, 3033, 3033 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][3] = { 1068, 3033, 776 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][4] = { 2977, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][5] = { 2977, 851, 851 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3048 },
+-	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][2] = { 1068, 3033, 3033 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][3] = { 1068, 3033, 776 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][4] = { 2977, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][5] = { 2977, 851, 851 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3048 },
++	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 423 },
+ 	[V4L2_COLORSPACE_470_SYSTEM_BG][V4L2_XFER_FUNC_SMPTE240M][2] = { 749, 2926, 2926 },
+@@ -882,14 +882,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][5] = { 3056, 800, 800 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3056 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 851, 3033, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 851, 3033, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 3033, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 3033, 851, 851 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 3033 },
+-	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][2] = { 851, 3033, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][3] = { 851, 3033, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][4] = { 3033, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][5] = { 3033, 851, 851 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 3033 },
++	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 507 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 507, 2926, 2926 },
+@@ -922,62 +922,62 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1812, 886, 886 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1812 },
+ 	[V4L2_COLORSPACE_SRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][1] = { 3033, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][2] = { 1828, 3033, 3033 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][3] = { 1828, 3033, 1063 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][4] = { 2633, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][5] = { 2633, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][6] = { 851, 851, 2979 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
+-	[V4L2_COLORSPACE_ADOBERGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][1] = { 2939, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][2] = { 1622, 2939, 2939 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][3] = { 1622, 2939, 781 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][4] = { 2502, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][5] = { 2502, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][6] = { 547, 547, 2881 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_709][7] = { 547, 547, 547 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][0] = { 3056, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][1] = { 3056, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][2] = { 1838, 3056, 3056 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][3] = { 1838, 3056, 1031 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][4] = { 2657, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][5] = { 2657, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][6] = { 800, 800, 3002 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][1] = { 3033, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][2] = { 1828, 3033, 3033 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][3] = { 1828, 3033, 1063 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][4] = { 2633, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][5] = { 2633, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][6] = { 851, 851, 2979 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][1] = { 2926, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][2] = { 1594, 2926, 2926 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][3] = { 1594, 2926, 744 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][4] = { 2484, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][5] = { 2484, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][6] = { 507, 507, 2867 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE240M][7] = { 507, 507, 507 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][0] = { 2125, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][1] = { 2125, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][2] = { 698, 2125, 2125 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][3] = { 698, 2125, 212 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][4] = { 1557, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][5] = { 1557, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][6] = { 130, 130, 2043 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_NONE][7] = { 130, 130, 130 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][0] = { 3175, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][1] = { 3175, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][2] = { 2069, 3175, 3175 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][3] = { 2069, 3175, 1308 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][4] = { 2816, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][5] = { 2816, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][6] = { 1084, 1084, 3127 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_DCI_P3][7] = { 1084, 1084, 1084 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][0] = { 1812, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][1] = { 1812, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][2] = { 1402, 1812, 1812 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][3] = { 1402, 1812, 1022 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][4] = { 1692, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][5] = { 1692, 886, 886 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][6] = { 886, 886, 1797 },
++	[V4L2_COLORSPACE_OPRGB][V4L2_XFER_FUNC_SMPTE2084][7] = { 886, 886, 886 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][0] = { 2939, 2939, 2939 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][1] = { 2877, 2923, 1058 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_709][2] = { 1837, 2840, 2916 },
+@@ -994,14 +994,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][5] = { 2517, 1159, 900 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][6] = { 1042, 870, 2917 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 800 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][1] = { 2976, 3018, 1315 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][2] = { 2024, 2942, 3011 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][3] = { 1930, 2926, 1256 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][4] = { 2563, 1227, 2916 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][5] = { 2494, 1183, 943 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][6] = { 1073, 916, 2894 },
+-	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][1] = { 2976, 3018, 1315 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][2] = { 2024, 2942, 3011 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][3] = { 1930, 2926, 1256 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][4] = { 2563, 1227, 2916 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][5] = { 2494, 1183, 943 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][6] = { 1073, 916, 2894 },
++	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][1] = { 2864, 2910, 1024 },
+ 	[V4L2_COLORSPACE_BT2020][V4L2_XFER_FUNC_SMPTE240M][2] = { 1811, 2826, 2903 },
+@@ -1050,14 +1050,14 @@ const struct tpg_rbg_color16 tpg_csc_colors[V4L2_COLORSPACE_DCI_P3 + 1][V4L2_XFE
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][5] = { 2880, 998, 902 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][6] = { 816, 823, 2940 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SRGB][7] = { 800, 800, 799 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][0] = { 3033, 3033, 3033 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][1] = { 3029, 3028, 1255 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][2] = { 1406, 2988, 3011 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][3] = { 1398, 2983, 1190 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][4] = { 2860, 1050, 2939 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][5] = { 2857, 1033, 945 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][6] = { 866, 873, 2916 },
+-	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_ADOBERGB][7] = { 851, 851, 851 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][0] = { 3033, 3033, 3033 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][1] = { 3029, 3028, 1255 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][2] = { 1406, 2988, 3011 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][3] = { 1398, 2983, 1190 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][4] = { 2860, 1050, 2939 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][5] = { 2857, 1033, 945 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][6] = { 866, 873, 2916 },
++	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_OPRGB][7] = { 851, 851, 851 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][0] = { 2926, 2926, 2926 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][1] = { 2923, 2921, 957 },
+ 	[V4L2_COLORSPACE_DCI_P3][V4L2_XFER_FUNC_SMPTE240M][2] = { 1125, 2877, 2902 },
+@@ -1128,7 +1128,7 @@ static const double rec709_to_240m[3][3] = {
+ 	{ 0.0016327, 0.0044133, 0.9939540 },
+ };
+ 
+-static const double rec709_to_adobergb[3][3] = {
++static const double rec709_to_oprgb[3][3] = {
+ 	{ 0.7151627, 0.2848373, -0.0000000 },
+ 	{ 0.0000000, 1.0000000, 0.0000000 },
+ 	{ -0.0000000, 0.0411705, 0.9588295 },
+@@ -1195,7 +1195,7 @@ static double transfer_rec709_to_rgb(double v)
+ 	return (v < 0.081) ? v / 4.5 : pow((v + 0.099) / 1.099, 1.0 / 0.45);
+ }
+ 
+-static double transfer_rgb_to_adobergb(double v)
++static double transfer_rgb_to_oprgb(double v)
+ {
+ 	return pow(v, 1.0 / 2.19921875);
+ }
+@@ -1251,8 +1251,8 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 	case V4L2_COLORSPACE_470_SYSTEM_M:
+ 		mult_matrix(r, g, b, rec709_to_ntsc1953);
+ 		break;
+-	case V4L2_COLORSPACE_ADOBERGB:
+-		mult_matrix(r, g, b, rec709_to_adobergb);
++	case V4L2_COLORSPACE_OPRGB:
++		mult_matrix(r, g, b, rec709_to_oprgb);
+ 		break;
+ 	case V4L2_COLORSPACE_BT2020:
+ 		mult_matrix(r, g, b, rec709_to_bt2020);
+@@ -1284,10 +1284,10 @@ static void csc(enum v4l2_colorspace colorspace, enum v4l2_xfer_func xfer_func,
+ 		*g = transfer_rgb_to_srgb(*g);
+ 		*b = transfer_rgb_to_srgb(*b);
+ 		break;
+-	case V4L2_XFER_FUNC_ADOBERGB:
+-		*r = transfer_rgb_to_adobergb(*r);
+-		*g = transfer_rgb_to_adobergb(*g);
+-		*b = transfer_rgb_to_adobergb(*b);
++	case V4L2_XFER_FUNC_OPRGB:
++		*r = transfer_rgb_to_oprgb(*r);
++		*g = transfer_rgb_to_oprgb(*g);
++		*b = transfer_rgb_to_oprgb(*b);
+ 		break;
+ 	case V4L2_XFER_FUNC_DCI_P3:
+ 		*r = transfer_rgb_to_dcip3(*r);
+@@ -1321,7 +1321,7 @@ int main(int argc, char **argv)
+ 		V4L2_COLORSPACE_470_SYSTEM_BG,
+ 		0,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		0,
+ 		V4L2_COLORSPACE_DCI_P3,
+@@ -1336,7 +1336,7 @@ int main(int argc, char **argv)
+ 		"V4L2_COLORSPACE_470_SYSTEM_BG",
+ 		"",
+ 		"V4L2_COLORSPACE_SRGB",
+-		"V4L2_COLORSPACE_ADOBERGB",
++		"V4L2_COLORSPACE_OPRGB",
+ 		"V4L2_COLORSPACE_BT2020",
+ 		"",
+ 		"V4L2_COLORSPACE_DCI_P3",
+@@ -1345,7 +1345,7 @@ int main(int argc, char **argv)
+ 		"",
+ 		"V4L2_XFER_FUNC_709",
+ 		"V4L2_XFER_FUNC_SRGB",
+-		"V4L2_XFER_FUNC_ADOBERGB",
++		"V4L2_XFER_FUNC_OPRGB",
+ 		"V4L2_XFER_FUNC_SMPTE240M",
+ 		"V4L2_XFER_FUNC_NONE",
+ 		"V4L2_XFER_FUNC_DCI_P3",
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index abd4c788dffd..f40ab5704bf0 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -1770,7 +1770,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
+ 				pos[7] = (chr & (0x01 << 0) ? fg : bg);	\
+ 			} \
+ 	\
+-			pos += (tpg->hflip ? -8 : 8) / hdiv;	\
++			pos += (tpg->hflip ? -8 : 8) / (int)hdiv;	\
+ 		}	\
+ 	}	\
+ } while (0)
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+index 5731751d3f2a..cd6e7372ef9c 100644
+--- a/drivers/media/i2c/adv7511.c
++++ b/drivers/media/i2c/adv7511.c
+@@ -1355,10 +1355,10 @@ static int adv7511_set_fmt(struct v4l2_subdev *sd,
+ 	state->xfer_func = format->format.xfer_func;
+ 
+ 	switch (format->format.colorspace) {
+-	case V4L2_COLORSPACE_ADOBERGB:
++	case V4L2_COLORSPACE_OPRGB:
+ 		c = HDMI_COLORIMETRY_EXTENDED;
+-		ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 :
+-			 HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB;
++		ec = y ? HDMI_EXTENDED_COLORIMETRY_OPYCC_601 :
++			 HDMI_EXTENDED_COLORIMETRY_OPRGB;
+ 		break;
+ 	case V4L2_COLORSPACE_SMPTE170M:
+ 		c = y ? HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE;
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index cac2081e876e..2437f72f7caf 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2284,8 +2284,10 @@ static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+ 		state->aspect_ratio.numerator = 16;
+ 		state->aspect_ratio.denominator = 9;
+ 
+-		if (!state->edid.present)
++		if (!state->edid.present) {
+ 			state->edid.blocks = 0;
++			cec_phys_addr_invalidate(state->cec_adap);
++		}
+ 
+ 		v4l2_dbg(2, debug, sd, "%s: clear EDID pad %d, edid.present = 0x%x\n",
+ 				__func__, edid->pad, state->edid.present);
+@@ -2474,7 +2476,7 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
+ 		"YCbCr Bt.601 (16-235)", "YCbCr Bt.709 (16-235)",
+ 		"xvYCC Bt.601", "xvYCC Bt.709",
+ 		"YCbCr Bt.601 (0-255)", "YCbCr Bt.709 (0-255)",
+-		"sYCC", "Adobe YCC 601", "AdobeRGB", "invalid", "invalid",
++		"sYCC", "opYCC 601", "opRGB", "invalid", "invalid",
+ 		"invalid", "invalid", "invalid"
+ 	};
+ 	static const char * const rgb_quantization_range_txt[] = {
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index fddac32e5051..ceca6be13ca9 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -786,8 +786,10 @@ static int edid_write_hdmi_segment(struct v4l2_subdev *sd, u8 port)
+ 	/* Disable I2C access to internal EDID ram from HDMI DDC ports */
+ 	rep_write_and_or(sd, 0x77, 0xf3, 0x00);
+ 
+-	if (!state->hdmi_edid.present)
++	if (!state->hdmi_edid.present) {
++		cec_phys_addr_invalidate(state->cec_adap);
+ 		return 0;
++	}
+ 
+ 	pa = cec_get_edid_phys_addr(edid, 256, &spa_loc);
+ 	err = cec_phys_addr_validate(pa, &pa, NULL);
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 3474ef832c1e..480edeebac60 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -1810,17 +1810,24 @@ static int ov7670_probe(struct i2c_client *client,
+ 			info->pclk_hb_disable = true;
+ 	}
+ 
+-	info->clk = devm_clk_get(&client->dev, "xclk");
+-	if (IS_ERR(info->clk))
+-		return PTR_ERR(info->clk);
+-	ret = clk_prepare_enable(info->clk);
+-	if (ret)
+-		return ret;
++	info->clk = devm_clk_get(&client->dev, "xclk"); /* optional */
++	if (IS_ERR(info->clk)) {
++		ret = PTR_ERR(info->clk);
++		if (ret == -ENOENT)
++			info->clk = NULL;
++		else
++			return ret;
++	}
++	if (info->clk) {
++		ret = clk_prepare_enable(info->clk);
++		if (ret)
++			return ret;
+ 
+-	info->clock_speed = clk_get_rate(info->clk) / 1000000;
+-	if (info->clock_speed < 10 || info->clock_speed > 48) {
+-		ret = -EINVAL;
+-		goto clk_disable;
++		info->clock_speed = clk_get_rate(info->clk) / 1000000;
++		if (info->clock_speed < 10 || info->clock_speed > 48) {
++			ret = -EINVAL;
++			goto clk_disable;
++		}
+ 	}
+ 
+ 	ret = ov7670_init_gpio(client, info);
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 393bbbbbaad7..865639587a97 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -1243,9 +1243,9 @@ static int tc358743_log_status(struct v4l2_subdev *sd)
+ 	u8 vi_status3 =  i2c_rd8(sd, VI_STATUS3);
+ 	const int deep_color_mode[4] = { 8, 10, 12, 16 };
+ 	static const char * const input_color_space[] = {
+-		"RGB", "YCbCr 601", "Adobe RGB", "YCbCr 709", "NA (4)",
++		"RGB", "YCbCr 601", "opRGB", "YCbCr 709", "NA (4)",
+ 		"xvYCC 601", "NA(6)", "xvYCC 709", "NA(8)", "sYCC601",
+-		"NA(10)", "NA(11)", "NA(12)", "Adobe YCC 601"};
++		"NA(10)", "NA(11)", "NA(12)", "opYCC 601"};
+ 
+ 	v4l2_info(sd, "-----Chip status-----\n");
+ 	v4l2_info(sd, "Chip ID: 0x%02x\n",
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 76e6bed5a1da..805bd9c65940 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1534,7 +1534,7 @@ static int tvp5150_probe(struct i2c_client *c,
+ 			27000000, 1, 27000000);
+ 	v4l2_ctrl_new_std_menu_items(&core->hdl, &tvp5150_ctrl_ops,
+ 				     V4L2_CID_TEST_PATTERN,
+-				     ARRAY_SIZE(tvp5150_test_patterns),
++				     ARRAY_SIZE(tvp5150_test_patterns) - 1,
+ 				     0, 0, tvp5150_test_patterns);
+ 	sd->ctrl_handler = &core->hdl;
+ 	if (core->hdl.error) {
+diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
+index 477c80a4d44c..cd4c8230563c 100644
+--- a/drivers/media/platform/vivid/vivid-core.h
++++ b/drivers/media/platform/vivid/vivid-core.h
+@@ -111,7 +111,7 @@ enum vivid_colorspace {
+ 	VIVID_CS_170M,
+ 	VIVID_CS_709,
+ 	VIVID_CS_SRGB,
+-	VIVID_CS_ADOBERGB,
++	VIVID_CS_OPRGB,
+ 	VIVID_CS_2020,
+ 	VIVID_CS_DCI_P3,
+ 	VIVID_CS_240M,
+diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
+index 6b0bfa091592..e1185f0f6607 100644
+--- a/drivers/media/platform/vivid/vivid-ctrls.c
++++ b/drivers/media/platform/vivid/vivid-ctrls.c
+@@ -348,7 +348,7 @@ static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		V4L2_COLORSPACE_SMPTE170M,
+ 		V4L2_COLORSPACE_REC709,
+ 		V4L2_COLORSPACE_SRGB,
+-		V4L2_COLORSPACE_ADOBERGB,
++		V4L2_COLORSPACE_OPRGB,
+ 		V4L2_COLORSPACE_BT2020,
+ 		V4L2_COLORSPACE_DCI_P3,
+ 		V4L2_COLORSPACE_SMPTE240M,
+@@ -729,7 +729,7 @@ static const char * const vivid_ctrl_colorspace_strings[] = {
+ 	"SMPTE 170M",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"BT.2020",
+ 	"DCI-P3",
+ 	"SMPTE 240M",
+@@ -752,7 +752,7 @@ static const char * const vivid_ctrl_xfer_func_strings[] = {
+ 	"Default",
+ 	"Rec. 709",
+ 	"sRGB",
+-	"AdobeRGB",
++	"opRGB",
+ 	"SMPTE 240M",
+ 	"None",
+ 	"DCI-P3",
+diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
+index 51fec66d8d45..50248e2176a0 100644
+--- a/drivers/media/platform/vivid/vivid-vid-out.c
++++ b/drivers/media/platform/vivid/vivid-vid-out.c
+@@ -413,7 +413,7 @@ int vivid_try_fmt_vid_out(struct file *file, void *priv,
+ 		mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+ 	} else if (mp->colorspace != V4L2_COLORSPACE_SMPTE170M &&
+ 		   mp->colorspace != V4L2_COLORSPACE_REC709 &&
+-		   mp->colorspace != V4L2_COLORSPACE_ADOBERGB &&
++		   mp->colorspace != V4L2_COLORSPACE_OPRGB &&
+ 		   mp->colorspace != V4L2_COLORSPACE_BT2020 &&
+ 		   mp->colorspace != V4L2_COLORSPACE_SRGB) {
+ 		mp->colorspace = V4L2_COLORSPACE_REC709;
+diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+index 1aa88d94e57f..e28bd8836751 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+@@ -31,6 +31,7 @@ MODULE_PARM_DESC(disable_rc, "Disable inbuilt IR receiver.");
+ DVB_DEFINE_MOD_OPT_ADAPTER_NR(adapter_nr);
+ 
+ struct dvbsky_state {
++	struct mutex stream_mutex;
+ 	u8 ibuf[DVBSKY_BUF_LEN];
+ 	u8 obuf[DVBSKY_BUF_LEN];
+ 	u8 last_lock;
+@@ -67,17 +68,18 @@ static int dvbsky_usb_generic_rw(struct dvb_usb_device *d,
+ 
+ static int dvbsky_stream_ctrl(struct dvb_usb_device *d, u8 onoff)
+ {
++	struct dvbsky_state *state = d_to_priv(d);
+ 	int ret;
+-	static u8 obuf_pre[3] = { 0x37, 0, 0 };
+-	static u8 obuf_post[3] = { 0x36, 3, 0 };
++	u8 obuf_pre[3] = { 0x37, 0, 0 };
++	u8 obuf_post[3] = { 0x36, 3, 0 };
+ 
+-	mutex_lock(&d->usb_mutex);
+-	ret = dvb_usbv2_generic_rw_locked(d, obuf_pre, 3, NULL, 0);
++	mutex_lock(&state->stream_mutex);
++	ret = dvbsky_usb_generic_rw(d, obuf_pre, 3, NULL, 0);
+ 	if (!ret && onoff) {
+ 		msleep(20);
+-		ret = dvb_usbv2_generic_rw_locked(d, obuf_post, 3, NULL, 0);
++		ret = dvbsky_usb_generic_rw(d, obuf_post, 3, NULL, 0);
+ 	}
+-	mutex_unlock(&d->usb_mutex);
++	mutex_unlock(&state->stream_mutex);
+ 	return ret;
+ }
+ 
+@@ -606,6 +608,8 @@ static int dvbsky_init(struct dvb_usb_device *d)
+ 	if (ret)
+ 		return ret;
+ 	*/
++	mutex_init(&state->stream_mutex);
++
+ 	state->last_lock = 0;
+ 
+ 	return 0;
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index ff5e41ac4723..98d6c8fcd262 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -2141,13 +2141,13 @@ const struct em28xx_board em28xx_boards[] = {
+ 		.input           = { {
+ 			.type     = EM28XX_VMUX_COMPOSITE,
+ 			.vmux     = TVP5150_COMPOSITE1,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 
+ 		}, {
+ 			.type     = EM28XX_VMUX_SVIDEO,
+ 			.vmux     = TVP5150_SVIDEO,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 		} },
+ 	},
+@@ -3041,6 +3041,9 @@ static int em28xx_hint_board(struct em28xx *dev)
+ 
+ static void em28xx_card_setup(struct em28xx *dev)
+ {
++	int i, j, idx;
++	bool duplicate_entry;
++
+ 	/*
+ 	 * If the device can be a webcam, seek for a sensor.
+ 	 * If sensor is not found, then it isn't a webcam.
+@@ -3197,6 +3200,32 @@ static void em28xx_card_setup(struct em28xx *dev)
+ 	/* Allow override tuner type by a module parameter */
+ 	if (tuner >= 0)
+ 		dev->tuner_type = tuner;
++
++	/*
++	 * Dynamically generate a list of valid audio inputs for this
++	 * specific board, mapping them via enum em28xx_amux.
++	 */
++
++	idx = 0;
++	for (i = 0; i < MAX_EM28XX_INPUT; i++) {
++		if (!INPUT(i)->type)
++			continue;
++
++		/* Skip already mapped audio inputs */
++		duplicate_entry = false;
++		for (j = 0; j < idx; j++) {
++			if (INPUT(i)->amux == dev->amux_map[j]) {
++				duplicate_entry = true;
++				break;
++			}
++		}
++		if (duplicate_entry)
++			continue;
++
++		dev->amux_map[idx++] = INPUT(i)->amux;
++	}
++	for (; idx < MAX_EM28XX_INPUT; idx++)
++		dev->amux_map[idx] = EM28XX_AMUX_UNUSED;
+ }
+ 
+ void em28xx_setup_xc3028(struct em28xx *dev, struct xc2028_ctrl *ctl)
+diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
+index 68571bf36d28..3bf98ac897ec 100644
+--- a/drivers/media/usb/em28xx/em28xx-video.c
++++ b/drivers/media/usb/em28xx/em28xx-video.c
+@@ -1093,6 +1093,8 @@ int em28xx_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
+ 
+ 	em28xx_videodbg("%s\n", __func__);
+ 
++	dev->v4l2->field_count = 0;
++
+ 	/*
+ 	 * Make sure streaming is not already in progress for this type
+ 	 * of filehandle (e.g. video, vbi)
+@@ -1471,9 +1473,9 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+ 
+ 	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+ 	if (!fmt) {
+-		em28xx_videodbg("Fourcc format (%08x) invalid.\n",
+-				f->fmt.pix.pixelformat);
+-		return -EINVAL;
++		fmt = &format[0];
++		em28xx_videodbg("Fourcc format (%08x) invalid. Using default (%08x).\n",
++				f->fmt.pix.pixelformat, fmt->fourcc);
+ 	}
+ 
+ 	if (dev->board.is_em2800) {
+@@ -1666,6 +1668,7 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ {
+ 	struct em28xx *dev = video_drvdata(file);
+ 	unsigned int       n;
++	int j;
+ 
+ 	n = i->index;
+ 	if (n >= MAX_EM28XX_INPUT)
+@@ -1685,6 +1688,12 @@ static int vidioc_enum_input(struct file *file, void *priv,
+ 	if (dev->is_webcam)
+ 		i->capabilities = 0;
+ 
++	/* Dynamically generates an audioset bitmask */
++	i->audioset = 0;
++	for (j = 0; j < MAX_EM28XX_INPUT; j++)
++		if (dev->amux_map[j] != EM28XX_AMUX_UNUSED)
++			i->audioset |= 1 << j;
++
+ 	return 0;
+ }
+ 
+@@ -1710,11 +1719,24 @@ static int vidioc_s_input(struct file *file, void *priv, unsigned int i)
+ 	return 0;
+ }
+ 
+-static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++static int em28xx_fill_audio_input(struct em28xx *dev,
++				   const char *s,
++				   struct v4l2_audio *a,
++				   unsigned int index)
+ {
+-	struct em28xx *dev = video_drvdata(file);
++	unsigned int idx = dev->amux_map[index];
++
++	/*
++	 * With msp3400, almost all mappings use the default (amux = 0).
++	 * The only one may use a different value is WinTV USB2, where it
++	 * can also be SCART1 input.
++	 * As it is very doubtful that we would see new boards with msp3400,
++	 * let's just reuse the existing switch.
++	 */
++	if (dev->has_msp34xx && idx != EM28XX_AMUX_UNUSED)
++		idx = EM28XX_AMUX_LINE_IN;
+ 
+-	switch (a->index) {
++	switch (idx) {
+ 	case EM28XX_AMUX_VIDEO:
+ 		strcpy(a->name, "Television");
+ 		break;
+@@ -1739,32 +1761,79 @@ static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
+ 	case EM28XX_AMUX_PCM_OUT:
+ 		strcpy(a->name, "PCM");
+ 		break;
++	case EM28XX_AMUX_UNUSED:
+ 	default:
+ 		return -EINVAL;
+ 	}
+-
+-	a->index = dev->ctl_ainput;
++	a->index = index;
+ 	a->capability = V4L2_AUDCAP_STEREO;
+ 
++	em28xx_videodbg("%s: audio input index %d is '%s'\n",
++			s, a->index, a->name);
++
+ 	return 0;
+ }
+ 
++static int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++
++	if (a->index >= MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	return em28xx_fill_audio_input(dev, __func__, a, a->index);
++}
++
++static int vidioc_g_audio(struct file *file, void *priv, struct v4l2_audio *a)
++{
++	struct em28xx *dev = video_drvdata(file);
++	int i;
++
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (dev->ctl_ainput == dev->amux_map[i])
++			return em28xx_fill_audio_input(dev, __func__, a, i);
++
++	/* Should never happen! */
++	return -EINVAL;
++}
++
+ static int vidioc_s_audio(struct file *file, void *priv,
+ 			  const struct v4l2_audio *a)
+ {
+ 	struct em28xx *dev = video_drvdata(file);
++	int idx, i;
+ 
+ 	if (a->index >= MAX_EM28XX_INPUT)
+ 		return -EINVAL;
+-	if (!INPUT(a->index)->type)
++
++	idx = dev->amux_map[a->index];
++
++	if (idx == EM28XX_AMUX_UNUSED)
+ 		return -EINVAL;
+ 
+-	dev->ctl_ainput = INPUT(a->index)->amux;
+-	dev->ctl_aoutput = INPUT(a->index)->aout;
++	dev->ctl_ainput = idx;
++
++	/*
++	 * FIXME: This is wrong, as different inputs at em28xx_cards
++	 * may have different audio outputs. So, the right thing
++	 * to do is to implement VIDIOC_G_AUDOUT/VIDIOC_S_AUDOUT.
++	 * With the current board definitions, this would work fine,
++	 * as, currently, all boards fit.
++	 */
++	for (i = 0; i < MAX_EM28XX_INPUT; i++)
++		if (idx == dev->amux_map[i])
++			break;
++	if (i == MAX_EM28XX_INPUT)
++		return -EINVAL;
++
++	dev->ctl_aoutput = INPUT(i)->aout;
+ 
+ 	if (!dev->ctl_aoutput)
+ 		dev->ctl_aoutput = EM28XX_AOUT_MASTER;
+ 
++	em28xx_videodbg("%s: set audio input to %d\n", __func__,
++			dev->ctl_ainput);
++
+ 	return 0;
+ }
+ 
+@@ -2302,6 +2371,7 @@ static const struct v4l2_ioctl_ops video_ioctl_ops = {
+ 	.vidioc_try_fmt_vbi_cap     = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_s_fmt_vbi_cap       = vidioc_g_fmt_vbi_cap,
+ 	.vidioc_enum_framesizes     = vidioc_enum_framesizes,
++	.vidioc_enumaudio           = vidioc_enumaudio,
+ 	.vidioc_g_audio             = vidioc_g_audio,
+ 	.vidioc_s_audio             = vidioc_s_audio,
+ 
+diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
+index 953caac025f2..a551072e62ed 100644
+--- a/drivers/media/usb/em28xx/em28xx.h
++++ b/drivers/media/usb/em28xx/em28xx.h
+@@ -335,6 +335,9 @@ enum em28xx_usb_audio_type {
+ /**
+  * em28xx_amux - describes the type of audio input used by em28xx
+  *
++ * @EM28XX_AMUX_UNUSED:
++ *	Used only on em28xx dev->map field, in order to mark an entry
++ *	as unused.
+  * @EM28XX_AMUX_VIDEO:
+  *	On devices without AC97, this is the only value that it is currently
+  *	allowed.
+@@ -369,7 +372,8 @@ enum em28xx_usb_audio_type {
+  * same time, via the alsa mux.
+  */
+ enum em28xx_amux {
+-	EM28XX_AMUX_VIDEO,
++	EM28XX_AMUX_UNUSED = -1,
++	EM28XX_AMUX_VIDEO = 0,
+ 	EM28XX_AMUX_LINE_IN,
+ 
+ 	/* Some less-common mixer setups */
+@@ -692,6 +696,8 @@ struct em28xx {
+ 	unsigned int ctl_input;	// selected input
+ 	unsigned int ctl_ainput;// selected audio input
+ 	unsigned int ctl_aoutput;// selected audio output
++	enum em28xx_amux amux_map[MAX_EM28XX_INPUT];
++
+ 	int mute;
+ 	int volume;
+ 
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index c81faea96fba..c7c600c1f63b 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -837,9 +837,9 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 		switch (avi->colorimetry) {
+ 		case HDMI_COLORIMETRY_EXTENDED:
+ 			switch (avi->extended_colorimetry) {
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+@@ -908,10 +908,10 @@ v4l2_hdmi_rx_colorimetry(const struct hdmi_avi_infoframe *avi,
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+ 				c.xfer_func = V4L2_XFER_FUNC_SRGB;
+ 				break;
+-			case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-				c.colorspace = V4L2_COLORSPACE_ADOBERGB;
++			case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++				c.colorspace = V4L2_COLORSPACE_OPRGB;
+ 				c.ycbcr_enc = V4L2_YCBCR_ENC_601;
+-				c.xfer_func = V4L2_XFER_FUNC_ADOBERGB;
++				c.xfer_func = V4L2_XFER_FUNC_OPRGB;
+ 				break;
+ 			case HDMI_EXTENDED_COLORIMETRY_BT2020:
+ 				c.colorspace = V4L2_COLORSPACE_BT2020;
+diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c
+index 29b7164a823b..d28ebe7ecd21 100644
+--- a/drivers/mfd/menelaus.c
++++ b/drivers/mfd/menelaus.c
+@@ -1094,6 +1094,7 @@ static void menelaus_rtc_alarm_work(struct menelaus_chip *m)
+ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ {
+ 	int	alarm = (m->client->irq > 0);
++	int	err;
+ 
+ 	/* assume 32KDETEN pin is pulled high */
+ 	if (!(menelaus_read_reg(MENELAUS_OSC_CTRL) & 0x80)) {
+@@ -1101,6 +1102,12 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		return;
+ 	}
+ 
++	m->rtc = devm_rtc_allocate_device(&m->client->dev);
++	if (IS_ERR(m->rtc))
++		return;
++
++	m->rtc->ops = &menelaus_rtc_ops;
++
+ 	/* support RTC alarm; it can issue wakeups */
+ 	if (alarm) {
+ 		if (menelaus_add_irq_work(MENELAUS_RTCALM_IRQ,
+@@ -1125,10 +1132,8 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		menelaus_write_reg(MENELAUS_RTC_CTRL, m->rtc_control);
+ 	}
+ 
+-	m->rtc = rtc_device_register(DRIVER_NAME,
+-			&m->client->dev,
+-			&menelaus_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(m->rtc)) {
++	err = rtc_register_device(m->rtc);
++	if (err) {
+ 		if (alarm) {
+ 			menelaus_remove_irq_work(MENELAUS_RTCALM_IRQ);
+ 			device_init_wakeup(&m->client->dev, 0);
+diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h
+index 1c3967f10f55..1f94fb436c3c 100644
+--- a/drivers/misc/genwqe/card_base.h
++++ b/drivers/misc/genwqe/card_base.h
+@@ -408,7 +408,7 @@ struct genwqe_file {
+ 	struct file *filp;
+ 
+ 	struct fasync_struct *async_queue;
+-	struct task_struct *owner;
++	struct pid *opener;
+ 	struct list_head list;		/* entry in list of open files */
+ 
+ 	spinlock_t map_lock;		/* lock for dma_mappings */
+diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
+index 0dd6b5ef314a..66f222f24da3 100644
+--- a/drivers/misc/genwqe/card_dev.c
++++ b/drivers/misc/genwqe/card_dev.c
+@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ {
+ 	unsigned long flags;
+ 
+-	cfile->owner = current;
++	cfile->opener = get_pid(task_tgid(current));
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_add(&cfile->list, &cd->file_list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_del(&cfile->list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
++	put_pid(cfile->opener);
+ 
+ 	return 0;
+ }
+@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct genwqe_dev *cd, int sig)
+ 	return files;
+ }
+ 
+-static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
++static int genwqe_terminate(struct genwqe_dev *cd)
+ {
+ 	unsigned int files = 0;
+ 	unsigned long flags;
+@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
+ 
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_for_each_entry(cfile, &cd->file_list, list) {
+-		force_sig(sig, cfile->owner);
++		kill_pid(cfile->opener, SIGKILL, 1);
+ 		files++;
+ 	}
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -1357,7 +1358,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
+ 		dev_warn(&pci_dev->dev,
+ 			 "[%s] send SIGKILL and wait ...\n", __func__);
+ 
+-		rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
++		rc = genwqe_terminate(cd);
+ 		if (rc) {
+ 			/* Give kill_timout more seconds to end processes */
+ 			for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
+diff --git a/drivers/misc/ocxl/config.c b/drivers/misc/ocxl/config.c
+index 2e30de9c694a..57a6bb1fd3c9 100644
+--- a/drivers/misc/ocxl/config.c
++++ b/drivers/misc/ocxl/config.c
+@@ -280,7 +280,9 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
+ 	u32 val;
+ 	int rc, templ_major, templ_minor, len;
+ 
+-	pci_write_config_word(dev, fn->dvsec_afu_info_pos, afu_idx);
++	pci_write_config_byte(dev,
++			fn->dvsec_afu_info_pos + OCXL_DVSEC_AFU_INFO_AFU_IDX,
++			afu_idx);
+ 	rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
+index d7eaf1eb11e7..003bfba40758 100644
+--- a/drivers/misc/vmw_vmci/vmci_driver.c
++++ b/drivers/misc/vmw_vmci/vmci_driver.c
+@@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+ MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+-MODULE_VERSION("1.1.5.0-k");
++MODULE_VERSION("1.1.6.0-k");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 1ab6e8737a5f..da1ee2e1ba99 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -57,7 +57,8 @@ static struct vmci_resource *vmci_resource_lookup(struct vmci_handle handle,
+ 
+ 		if (r->type == type &&
+ 		    rid == handle.resource &&
+-		    (cid == handle.context || cid == VMCI_INVALID_ID)) {
++		    (cid == handle.context || cid == VMCI_INVALID_ID ||
++		     handle.context == VMCI_INVALID_ID)) {
+ 			resource = r;
+ 			break;
+ 		}
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 32321bd596d8..c61109f7b793 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -76,6 +76,7 @@ struct sdhci_acpi_slot {
+ 	size_t		priv_size;
+ 	int (*probe_slot)(struct platform_device *, const char *, const char *);
+ 	int (*remove_slot)(struct platform_device *);
++	int (*free_slot)(struct platform_device *pdev);
+ 	int (*setup_host)(struct platform_device *pdev);
+ };
+ 
+@@ -756,6 +757,9 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ err_cleanup:
+ 	sdhci_cleanup_host(c->host);
+ err_free:
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 	return err;
+ }
+@@ -777,6 +781,10 @@ static int sdhci_acpi_remove(struct platform_device *pdev)
+ 
+ 	dead = (sdhci_readl(c->host, SDHCI_INT_STATUS) == ~0);
+ 	sdhci_remove_host(c->host, dead);
++
++	if (c->slot && c->slot->free_slot)
++		c->slot->free_slot(pdev);
++
+ 	sdhci_free_host(c->host);
+ 
+ 	return 0;
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 555970a29c94..34326d95d254 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -367,6 +367,9 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
+ 		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch);
+ 		break;
+ 	case PCI_DEVICE_ID_O2_SEABIRD0:
++		if (chip->pdev->revision == 0x01)
++			chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
++		/* fall through */
+ 	case PCI_DEVICE_ID_O2_SEABIRD1:
+ 		/* UnLock WP */
+ 		ret = pci_read_config_byte(chip->pdev,
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index e686fe73159e..a1fd6f6f5414 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2081,6 +2081,10 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
+ 	nand_np = dev->of_node;
+ 	nfc_np = of_find_compatible_node(dev->of_node, NULL,
+ 					 "atmel,sama5d3-nfc");
++	if (!nfc_np) {
++		dev_err(dev, "Could not find device node for sama5d3-nfc\n");
++		return -ENODEV;
++	}
+ 
+ 	nc->clk = of_clk_get(nfc_np, 0);
+ 	if (IS_ERR(nc->clk)) {
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index c502075e5721..ff955f085351 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -28,6 +28,7 @@
+ MODULE_LICENSE("GPL");
+ 
+ #define DENALI_NAND_NAME    "denali-nand"
++#define DENALI_DEFAULT_OOB_SKIP_BYTES	8
+ 
+ /* for Indexed Addressing */
+ #define DENALI_INDEXED_CTRL	0x00
+@@ -1106,12 +1107,17 @@ static void denali_hw_init(struct denali_nand_info *denali)
+ 		denali->revision = swab16(ioread32(denali->reg + REVISION));
+ 
+ 	/*
+-	 * tell driver how many bit controller will skip before
+-	 * writing ECC code in OOB, this register may be already
+-	 * set by firmware. So we read this value out.
+-	 * if this value is 0, just let it be.
++	 * Set how many bytes should be skipped before writing data in OOB.
++	 * If a non-zero value has already been set (by firmware or something),
++	 * just use it.  Otherwise, set the driver default.
+ 	 */
+ 	denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
++	if (!denali->oob_skip_bytes) {
++		denali->oob_skip_bytes = DENALI_DEFAULT_OOB_SKIP_BYTES;
++		iowrite32(denali->oob_skip_bytes,
++			  denali->reg + SPARE_AREA_SKIP_BYTES);
++	}
++
+ 	denali_detect_max_banks(denali);
+ 	iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
+ 	iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index c88588815ca1..a3477cbf6115 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -691,7 +691,7 @@ static irqreturn_t marvell_nfc_isr(int irq, void *dev_id)
+ 
+ 	marvell_nfc_disable_int(nfc, st & NDCR_ALL_INT);
+ 
+-	if (!(st & (NDSR_RDDREQ | NDSR_WRDREQ | NDSR_WRCMDREQ)))
++	if (st & (NDSR_RDY(0) | NDSR_RDY(1)))
+ 		complete(&nfc->complete);
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c
+index 7d9620c7ff6c..1ff3430f82c8 100644
+--- a/drivers/mtd/spi-nor/fsl-quadspi.c
++++ b/drivers/mtd/spi-nor/fsl-quadspi.c
+@@ -478,6 +478,7 @@ static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
+ {
+ 	switch (cmd) {
+ 	case SPINOR_OP_READ_1_1_4:
++	case SPINOR_OP_READ_1_1_4_4B:
+ 		return SEQID_READ;
+ 	case SPINOR_OP_WREN:
+ 		return SEQID_WREN;
+@@ -543,6 +544,9 @@ fsl_qspi_runcmd(struct fsl_qspi *q, u8 cmd, unsigned int addr, int len)
+ 
+ 	/* trigger the LUT now */
+ 	seqid = fsl_qspi_get_seqid(q, cmd);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, (seqid << QUADSPI_IPCR_SEQID_SHIFT) | len,
+ 			base + QUADSPI_IPCR);
+ 
+@@ -671,7 +675,7 @@ static void fsl_qspi_set_map_addr(struct fsl_qspi *q)
+  * causes the controller to clear the buffer, and use the sequence pointed
+  * by the QUADSPI_BFGENCR[SEQID] to initiate a read from the flash.
+  */
+-static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
++static int fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ {
+ 	void __iomem *base = q->iobase;
+ 	int seqid;
+@@ -696,8 +700,13 @@ static void fsl_qspi_init_ahb_read(struct fsl_qspi *q)
+ 
+ 	/* Set the default lut sequence for AHB Read. */
+ 	seqid = fsl_qspi_get_seqid(q, q->nor[0].read_opcode);
++	if (seqid < 0)
++		return seqid;
++
+ 	qspi_writel(q, seqid << QUADSPI_BFGENCR_SEQID_SHIFT,
+ 		q->iobase + QUADSPI_BFGENCR);
++
++	return 0;
+ }
+ 
+ /* This function was used to prepare and enable QSPI clock */
+@@ -805,9 +814,7 @@ static int fsl_qspi_nor_setup_last(struct fsl_qspi *q)
+ 	fsl_qspi_init_lut(q);
+ 
+ 	/* Init for AHB read */
+-	fsl_qspi_init_ahb_read(q);
+-
+-	return 0;
++	return fsl_qspi_init_ahb_read(q);
+ }
+ 
+ static const struct of_device_id fsl_qspi_dt_ids[] = {
+diff --git a/drivers/mtd/spi-nor/intel-spi-pci.c b/drivers/mtd/spi-nor/intel-spi-pci.c
+index c0976f2e3dd1..872b40922608 100644
+--- a/drivers/mtd/spi-nor/intel-spi-pci.c
++++ b/drivers/mtd/spi-nor/intel-spi-pci.c
+@@ -65,6 +65,7 @@ static void intel_spi_pci_remove(struct pci_dev *pdev)
+ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x18e0), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0x19e0), (unsigned long)&bxt_info },
++	{ PCI_VDEVICE(INTEL, 0x34a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa1a4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0xa224), (unsigned long)&bxt_info },
+ 	{ },
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 46af8052e535..152a65d46e0b 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -110,6 +110,9 @@ int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
+ 	err = mv88e6xxx_phy_page_get(chip, phy, page);
+ 	if (!err) {
+ 		err = mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
++		if (!err)
++			err = mv88e6xxx_phy_write(chip, phy, reg, val);
++
+ 		mv88e6xxx_phy_page_put(chip, phy);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 34af5f1569c8..de0e24d912fe 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -342,7 +342,7 @@ static struct device_node *bcmgenet_mii_of_find_mdio(struct bcmgenet_priv *priv)
+ 	if (!compat)
+ 		return NULL;
+ 
+-	priv->mdio_dn = of_find_compatible_node(dn, NULL, compat);
++	priv->mdio_dn = of_get_compatible_child(dn, compat);
+ 	kfree(compat);
+ 	if (!priv->mdio_dn) {
+ 		dev_err(kdev, "unable to find MDIO bus node\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 9d69621f5ab4..542f16074dc9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1907,6 +1907,7 @@ static int is_valid_clean_head(struct hns3_enet_ring *ring, int h)
+ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ {
+ 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
++	struct hns3_nic_priv *priv = netdev_priv(netdev);
+ 	struct netdev_queue *dev_queue;
+ 	int bytes, pkts;
+ 	int head;
+@@ -1953,7 +1954,8 @@ bool hns3_clean_tx_ring(struct hns3_enet_ring *ring, int budget)
+ 		 * sees the new next_to_clean.
+ 		 */
+ 		smp_mb();
+-		if (netif_tx_queue_stopped(dev_queue)) {
++		if (netif_tx_queue_stopped(dev_queue) &&
++		    !test_bit(HNS3_NIC_STATE_DOWN, &priv->state)) {
+ 			netif_tx_wake_queue(dev_queue);
+ 			ring->stats.restart_queue++;
+ 		}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 11620e003a8e..967a625c040d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -310,7 +310,7 @@ static void hns3_self_test(struct net_device *ndev,
+ 			h->flags & HNAE3_SUPPORT_MAC_LOOPBACK;
+ 
+ 	if (if_running)
+-		dev_close(ndev);
++		ndev->netdev_ops->ndo_stop(ndev);
+ 
+ #if IS_ENABLED(CONFIG_VLAN_8021Q)
+ 	/* Disable the vlan filter for selftest does not support it */
+@@ -348,7 +348,7 @@ static void hns3_self_test(struct net_device *ndev,
+ #endif
+ 
+ 	if (if_running)
+-		dev_open(ndev);
++		ndev->netdev_ops->ndo_open(ndev);
+ }
+ 
+ static int hns3_get_sset_count(struct net_device *netdev, int stringset)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+index 955f0e3d5c95..b4c0597a392d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
+@@ -79,6 +79,7 @@ static int hclge_ieee_getets(struct hnae3_handle *h, struct ieee_ets *ets)
+ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 			      u8 *tc, bool *changed)
+ {
++	bool has_ets_tc = false;
+ 	u32 total_ets_bw = 0;
+ 	u8 max_tc = 0;
+ 	u8 i;
+@@ -106,13 +107,14 @@ static int hclge_ets_validate(struct hclge_dev *hdev, struct ieee_ets *ets,
+ 				*changed = true;
+ 
+ 			total_ets_bw += ets->tc_tx_bw[i];
+-		break;
++			has_ets_tc = true;
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+ 	}
+ 
+-	if (total_ets_bw != BW_PERCENT)
++	if (has_ets_tc && total_ets_bw != BW_PERCENT)
+ 		return -EINVAL;
+ 
+ 	*tc = max_tc + 1;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 13f43b74fd6d..9f2bea64c522 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1669,11 +1669,13 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev,
+ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 				struct hclge_pkt_buf_alloc *buf_alloc)
+ {
+-	u32 rx_all = hdev->pkt_buf_size;
++#define HCLGE_BUF_SIZE_UNIT	128
++	u32 rx_all = hdev->pkt_buf_size, aligned_mps;
+ 	int no_pfc_priv_num, pfc_priv_num;
+ 	struct hclge_priv_buf *priv;
+ 	int i;
+ 
++	aligned_mps = round_up(hdev->mps, HCLGE_BUF_SIZE_UNIT);
+ 	rx_all -= hclge_get_tx_buff_alloced(buf_alloc);
+ 
+ 	/* When DCB is not supported, rx private
+@@ -1692,13 +1694,13 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 		if (hdev->hw_tc_map & BIT(i)) {
+ 			priv->enable = 1;
+ 			if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+-				priv->wl.low = hdev->mps;
+-				priv->wl.high = priv->wl.low + hdev->mps;
++				priv->wl.low = aligned_mps;
++				priv->wl.high = priv->wl.low + aligned_mps;
+ 				priv->buf_size = priv->wl.high +
+ 						HCLGE_DEFAULT_DV;
+ 			} else {
+ 				priv->wl.low = 0;
+-				priv->wl.high = 2 * hdev->mps;
++				priv->wl.high = 2 * aligned_mps;
+ 				priv->buf_size = priv->wl.high;
+ 			}
+ 		} else {
+@@ -1730,11 +1732,11 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
+ 
+ 		if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+ 			priv->wl.low = 128;
+-			priv->wl.high = priv->wl.low + hdev->mps;
++			priv->wl.high = priv->wl.low + aligned_mps;
+ 			priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;
+ 		} else {
+ 			priv->wl.low = 0;
+-			priv->wl.high = hdev->mps;
++			priv->wl.high = aligned_mps;
+ 			priv->buf_size = priv->wl.high;
+ 		}
+ 	}
+@@ -2396,6 +2398,9 @@ static int hclge_get_mac_phy_link(struct hclge_dev *hdev)
+ 	int mac_state;
+ 	int link_stat;
+ 
++	if (test_bit(HCLGE_STATE_DOWN, &hdev->state))
++		return 0;
++
+ 	mac_state = hclge_get_mac_link_status(hdev);
+ 
+ 	if (hdev->hw.mac.phydev) {
+@@ -3789,6 +3794,8 @@ static void hclge_ae_stop(struct hnae3_handle *handle)
+ 	struct hclge_dev *hdev = vport->back;
+ 	int i;
+ 
++	set_bit(HCLGE_STATE_DOWN, &hdev->state);
++
+ 	del_timer_sync(&hdev->service_timer);
+ 	cancel_work_sync(&hdev->service_task);
+ 	clear_bit(HCLGE_STATE_SERVICE_SCHED, &hdev->state);
+@@ -4679,9 +4686,17 @@ static int hclge_set_vf_vlan_common(struct hclge_dev *hdev, int vfid,
+ 			"Add vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+ 	} else {
++#define HCLGE_VF_VLAN_DEL_NO_FOUND	1
+ 		if (!req0->resp_code)
+ 			return 0;
+ 
++		if (req0->resp_code == HCLGE_VF_VLAN_DEL_NO_FOUND) {
++			dev_warn(&hdev->pdev->dev,
++				 "vlan %d filter is not in vf vlan table\n",
++				 vlan);
++			return 0;
++		}
++
+ 		dev_err(&hdev->pdev->dev,
+ 			"Kill vf vlan filter fail, ret =%d.\n",
+ 			req0->resp_code);
+@@ -4725,6 +4740,9 @@ static int hclge_set_vlan_filter_hw(struct hclge_dev *hdev, __be16 proto,
+ 	u16 vport_idx, vport_num = 0;
+ 	int ret;
+ 
++	if (is_kill && !vlan_id)
++		return 0;
++
+ 	ret = hclge_set_vf_vlan_common(hdev, vport_id, is_kill, vlan_id,
+ 				       0, proto);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 12aa1f1b99ef..6090a7cd83e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -299,6 +299,9 @@ void hclgevf_update_link_status(struct hclgevf_dev *hdev, int link_state)
+ 
+ 	client = handle->client;
+ 
++	link_state =
++		test_bit(HCLGEVF_STATE_DOWN, &hdev->state) ? 0 : link_state;
++
+ 	if (link_state != hdev->hw.mac.link) {
+ 		client->ops->link_status_change(handle, !!link_state);
+ 		hdev->hw.mac.link = link_state;
+@@ -1439,6 +1442,8 @@ static void hclgevf_ae_stop(struct hnae3_handle *handle)
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+ 	int i, queue_id;
+ 
++	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
++
+ 	for (i = 0; i < hdev->num_tqps; i++) {
+ 		/* Ring disable */
+ 		queue_id = hclgevf_get_queue_id(handle->kinfo.tqp[i]);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index ed071ea75f20..ce12824a8325 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -39,9 +39,9 @@
+ extern const char ice_drv_ver[];
+ #define ICE_BAR0		0
+ #define ICE_DFLT_NUM_DESC	128
+-#define ICE_MIN_NUM_DESC	8
+-#define ICE_MAX_NUM_DESC	8160
+ #define ICE_REQ_DESC_MULTIPLE	32
++#define ICE_MIN_NUM_DESC	ICE_REQ_DESC_MULTIPLE
++#define ICE_MAX_NUM_DESC	8160
+ #define ICE_DFLT_TRAFFIC_CLASS	BIT(0)
+ #define ICE_INT_NAME_STR_LEN	(IFNAMSIZ + 16)
+ #define ICE_ETHTOOL_FWVER_LEN	32
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 62be72fdc8f3..e783976c401d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -518,22 +518,31 @@ shutdown_sq_out:
+ 
+ /**
+  * ice_aq_ver_check - Check the reported AQ API version.
+- * @fw_branch: The "branch" of FW, typically describes the device type
+- * @fw_major: The major version of the FW API
+- * @fw_minor: The minor version increment of the FW API
++ * @hw: pointer to the hardware structure
+  *
+  * Checks if the driver should load on a given AQ API version.
+  *
+  * Return: 'true' iff the driver should attempt to load. 'false' otherwise.
+  */
+-static bool ice_aq_ver_check(u8 fw_branch, u8 fw_major, u8 fw_minor)
++static bool ice_aq_ver_check(struct ice_hw *hw)
+ {
+-	if (fw_branch != EXP_FW_API_VER_BRANCH)
+-		return false;
+-	if (fw_major != EXP_FW_API_VER_MAJOR)
+-		return false;
+-	if (fw_minor != EXP_FW_API_VER_MINOR)
++	if (hw->api_maj_ver > EXP_FW_API_VER_MAJOR) {
++		/* Major API version is newer than expected, don't load */
++		dev_warn(ice_hw_to_dev(hw),
++			 "The driver for the device stopped because the NVM image is newer than expected. You must install the most recent version of the network driver.\n");
+ 		return false;
++	} else if (hw->api_maj_ver == EXP_FW_API_VER_MAJOR) {
++		if (hw->api_min_ver > (EXP_FW_API_VER_MINOR + 2))
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.\n");
++		else if ((hw->api_min_ver + 2) < EXP_FW_API_VER_MINOR)
++			dev_info(ice_hw_to_dev(hw),
++				 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	} else {
++		/* Major API version is older than expected, log a warning */
++		dev_info(ice_hw_to_dev(hw),
++			 "The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.\n");
++	}
+ 	return true;
+ }
+ 
+@@ -588,8 +597,7 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	if (status)
+ 		goto init_ctrlq_free_rq;
+ 
+-	if (!ice_aq_ver_check(hw->api_branch, hw->api_maj_ver,
+-			      hw->api_min_ver)) {
++	if (!ice_aq_ver_check(hw)) {
+ 		status = ICE_ERR_FW_API_VER;
+ 		goto init_ctrlq_free_rq;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index c71a9b528d6d..9d6754f65a1a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -478,9 +478,11 @@ ice_get_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	ring->tx_max_pending = ICE_MAX_NUM_DESC;
+ 	ring->rx_pending = vsi->rx_rings[0]->count;
+ 	ring->tx_pending = vsi->tx_rings[0]->count;
+-	ring->rx_mini_pending = ICE_MIN_NUM_DESC;
++
++	/* Rx mini and jumbo rings are not supported */
+ 	ring->rx_mini_max_pending = 0;
+ 	ring->rx_jumbo_max_pending = 0;
++	ring->rx_mini_pending = 0;
+ 	ring->rx_jumbo_pending = 0;
+ }
+ 
+@@ -498,14 +500,23 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 	    ring->tx_pending < ICE_MIN_NUM_DESC ||
+ 	    ring->rx_pending > ICE_MAX_NUM_DESC ||
+ 	    ring->rx_pending < ICE_MIN_NUM_DESC) {
+-		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d]\n",
++		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+ 			   ring->tx_pending, ring->rx_pending,
+-			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC);
++			   ICE_MIN_NUM_DESC, ICE_MAX_NUM_DESC,
++			   ICE_REQ_DESC_MULTIPLE);
+ 		return -EINVAL;
+ 	}
+ 
+ 	new_tx_cnt = ALIGN(ring->tx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_tx_cnt != ring->tx_pending)
++		netdev_info(netdev,
++			    "Requested Tx descriptor count rounded up to %d\n",
++			    new_tx_cnt);
+ 	new_rx_cnt = ALIGN(ring->rx_pending, ICE_REQ_DESC_MULTIPLE);
++	if (new_rx_cnt != ring->rx_pending)
++		netdev_info(netdev,
++			    "Requested Rx descriptor count rounded up to %d\n",
++			    new_rx_cnt);
+ 
+ 	/* if nothing to do return success */
+ 	if (new_tx_cnt == vsi->tx_rings[0]->count &&
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index da4322e4daed..add124e0381d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -676,6 +676,9 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
+ 	} else {
+ 		struct tx_sa tsa;
+ 
++		if (adapter->num_vfs)
++			return -EOPNOTSUPP;
++
+ 		/* find the first unused index */
+ 		ret = ixgbe_ipsec_find_empty_idx(ipsec, false);
+ 		if (ret < 0) {
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 59416eddd840..ce28d474b929 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -3849,6 +3849,10 @@ static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
+ 		skb_checksum_help(skb);
+ 		goto no_csum;
+ 	}
++
++	if (first->protocol == htons(ETH_P_IP))
++		type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++
+ 	/* update TX checksum flag */
+ 	first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ 	vlan_macip_lens = skb_checksum_start_offset(skb) -
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
+index 4a6d2db75071..417fbcc64f00 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
+@@ -314,12 +314,14 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ 
+ 	switch (off) {
+ 	case offsetof(struct iphdr, daddr):
+-		set_ip_addr->ipv4_dst_mask = mask;
+-		set_ip_addr->ipv4_dst = exact;
++		set_ip_addr->ipv4_dst_mask |= mask;
++		set_ip_addr->ipv4_dst &= ~mask;
++		set_ip_addr->ipv4_dst |= exact & mask;
+ 		break;
+ 	case offsetof(struct iphdr, saddr):
+-		set_ip_addr->ipv4_src_mask = mask;
+-		set_ip_addr->ipv4_src = exact;
++		set_ip_addr->ipv4_src_mask |= mask;
++		set_ip_addr->ipv4_src &= ~mask;
++		set_ip_addr->ipv4_src |= exact & mask;
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -333,11 +335,12 @@ nfp_fl_set_ip4(const struct tc_action *action, int idx, u32 off,
+ }
+ 
+ static void
+-nfp_fl_set_ip6_helper(int opcode_tag, int idx, __be32 exact, __be32 mask,
++nfp_fl_set_ip6_helper(int opcode_tag, u8 word, __be32 exact, __be32 mask,
+ 		      struct nfp_fl_set_ipv6_addr *ip6)
+ {
+-	ip6->ipv6[idx % 4].mask = mask;
+-	ip6->ipv6[idx % 4].exact = exact;
++	ip6->ipv6[word].mask |= mask;
++	ip6->ipv6[word].exact &= ~mask;
++	ip6->ipv6[word].exact |= exact & mask;
+ 
+ 	ip6->reserved = cpu_to_be16(0);
+ 	ip6->head.jump_id = opcode_tag;
+@@ -350,6 +353,7 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	       struct nfp_fl_set_ipv6_addr *ip_src)
+ {
+ 	__be32 exact, mask;
++	u8 word;
+ 
+ 	/* We are expecting tcf_pedit to return a big endian value */
+ 	mask = (__force __be32)~tcf_pedit_mask(action, idx);
+@@ -358,17 +362,20 @@ nfp_fl_set_ip6(const struct tc_action *action, int idx, u32 off,
+ 	if (exact & ~mask)
+ 		return -EOPNOTSUPP;
+ 
+-	if (off < offsetof(struct ipv6hdr, saddr))
++	if (off < offsetof(struct ipv6hdr, saddr)) {
+ 		return -EOPNOTSUPP;
+-	else if (off < offsetof(struct ipv6hdr, daddr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr)) {
++		word = (off - offsetof(struct ipv6hdr, saddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_SRC, word,
+ 				      exact, mask, ip_src);
+-	else if (off < offsetof(struct ipv6hdr, daddr) +
+-		       sizeof(struct in6_addr))
+-		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, idx,
++	} else if (off < offsetof(struct ipv6hdr, daddr) +
++		       sizeof(struct in6_addr)) {
++		word = (off - offsetof(struct ipv6hdr, daddr)) / sizeof(exact);
++		nfp_fl_set_ip6_helper(NFP_FL_ACTION_OPCODE_SET_IPV6_DST, word,
+ 				      exact, mask, ip_dst);
+-	else
++	} else {
+ 		return -EOPNOTSUPP;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index db463e20a876..e9a4179e7e48 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -96,6 +96,7 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	if (count < 2)
+@@ -114,8 +115,12 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index,
+-				    eth_port.port_lanes / count);
++	/* Special case the 100G CXP -> 2x40G split */
++	lanes = eth_port.port_lanes / count;
++	if (eth_port.lanes == 10 && count == 2)
++		lanes = 8 / count;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+@@ -128,6 +133,7 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	mutex_lock(&pf->lock);
+@@ -143,7 +149,12 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index, eth_port.port_lanes);
++	/* Special case the 100G CXP -> 2x40G unsplit */
++	lanes = eth_port.port_lanes;
++	if (eth_port.port_lanes == 8)
++		lanes = 10;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index b48f76182049..10b075bc5959 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
+ 
+ 	qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
+ 	ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
+-	ql_write_nvram_reg(qdev, spir,
+-			   ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
+ }
+ 
+ /*
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index f18087102d40..41bcbdd355f0 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,20 +7539,12 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	switch (tp->mac_version) {
+-	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
++	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-		break;
+-	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
+-		/* This version was reported to have issues with resume
+-		 * from suspend when using MSI-X
+-		 */
+-		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+-		break;
+-	default:
++	} else {
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index e080d3e7c582..4d7d53fbc0ef 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -945,6 +945,9 @@ static void netsec_uninit_pkt_dring(struct netsec_priv *priv, int id)
+ 	dring->head = 0;
+ 	dring->tail = 0;
+ 	dring->pkt_cnt = 0;
++
++	if (id == NETSEC_RING_TX)
++		netdev_reset_queue(priv->ndev);
+ }
+ 
+ static void netsec_free_dring(struct netsec_priv *priv, int id)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index f9a61f90cfbc..0f660af01a4b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -714,8 +714,9 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		return -ENODEV;
+ 	}
+ 
+-	mdio_internal = of_find_compatible_node(mdio_mux, NULL,
++	mdio_internal = of_get_compatible_child(mdio_mux,
+ 						"allwinner,sun8i-h3-mdio-internal");
++	of_node_put(mdio_mux);
+ 	if (!mdio_internal) {
+ 		dev_err(priv->device, "Cannot get internal_mdio node\n");
+ 		return -ENODEV;
+@@ -729,13 +730,20 @@ static int get_ephy_nodes(struct stmmac_priv *priv)
+ 		gmac->rst_ephy = of_reset_control_get_exclusive(iphynode, NULL);
+ 		if (IS_ERR(gmac->rst_ephy)) {
+ 			ret = PTR_ERR(gmac->rst_ephy);
+-			if (ret == -EPROBE_DEFER)
++			if (ret == -EPROBE_DEFER) {
++				of_node_put(iphynode);
++				of_node_put(mdio_internal);
+ 				return ret;
++			}
+ 			continue;
+ 		}
+ 		dev_info(priv->device, "Found internal PHY node\n");
++		of_node_put(iphynode);
++		of_node_put(mdio_internal);
+ 		return 0;
+ 	}
++
++	of_node_put(mdio_internal);
+ 	return -ENODEV;
+ }
+ 
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index 4f390fa557e4..8ec02f1a3be8 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -602,6 +602,9 @@ static int net_failover_slave_unregister(struct net_device *slave_dev,
+ 	primary_dev = rtnl_dereference(nfo_info->primary_dev);
+ 	standby_dev = rtnl_dereference(nfo_info->standby_dev);
+ 
++	if (WARN_ON_ONCE(slave_dev != primary_dev && slave_dev != standby_dev))
++		return -ENODEV;
++
+ 	vlan_vids_del_by_dev(slave_dev, failover_dev);
+ 	dev_uc_unsync(slave_dev, failover_dev);
+ 	dev_mc_unsync(slave_dev, failover_dev);
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 5827fccd4f29..44a0770de142 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -907,6 +907,9 @@ void phylink_start(struct phylink *pl)
+ 		    phylink_an_mode_str(pl->link_an_mode),
+ 		    phy_modes(pl->link_config.interface));
+ 
++	/* Always set the carrier off */
++	netif_carrier_off(pl->netdev);
++
+ 	/* Apply the link configuration to the MAC when starting. This allows
+ 	 * a fixed-link to start with the correct parameters, and also
+ 	 * ensures that we set the appropriate advertisement for Serdes links.
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 725dd63f8413..546081993ecf 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2304,6 +2304,8 @@ static void tun_setup(struct net_device *dev)
+ static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
+ 			struct netlink_ext_ack *extack)
+ {
++	if (!data)
++		return 0;
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 2319f79b34f0..e6d23b6895bd 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -1869,6 +1869,12 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	if (ret)
+ 		dev_kfree_skb_any(skb);
+ 
++	if (ret == -EAGAIN) {
++		ath10k_warn(ar, "wmi command %d timeout, restarting hardware\n",
++			    cmd_id);
++		queue_work(ar->workqueue, &ar->restart_work);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+index d8b79cb72b58..e7584b842dce 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+@@ -77,6 +77,8 @@ static u16 d11ac_bw(enum brcmu_chan_bw bw)
+ 		return BRCMU_CHSPEC_D11AC_BW_40;
+ 	case BRCMU_CHAN_BW_80:
+ 		return BRCMU_CHSPEC_D11AC_BW_80;
++	case BRCMU_CHAN_BW_160:
++		return BRCMU_CHSPEC_D11AC_BW_160;
+ 	default:
+ 		WARN_ON(1);
+ 	}
+@@ -190,8 +192,38 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch)
+ 			break;
+ 		}
+ 		break;
+-	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	case BRCMU_CHSPEC_D11AC_BW_160:
++		switch (ch->sb) {
++		case BRCMU_CHAN_SB_LLL:
++			ch->control_ch_num -= CH_70MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LLU:
++			ch->control_ch_num -= CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUL:
++			ch->control_ch_num -= CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUU:
++			ch->control_ch_num -= CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULL:
++			ch->control_ch_num += CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULU:
++			ch->control_ch_num += CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUL:
++			ch->control_ch_num += CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUU:
++			ch->control_ch_num += CH_70MHZ_APART;
++			break;
++		default:
++			WARN_ON_ONCE(1);
++			break;
++		}
++		break;
++	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+index 7b9a77981df1..75b2a0438cfa 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+@@ -29,6 +29,8 @@
+ #define CH_UPPER_SB			0x01
+ #define CH_LOWER_SB			0x02
+ #define CH_EWA_VALID			0x04
++#define CH_70MHZ_APART			14
++#define CH_50MHZ_APART			10
+ #define CH_30MHZ_APART			6
+ #define CH_20MHZ_APART			4
+ #define CH_10MHZ_APART			2
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 866c91c923be..dd674dcf1a0a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -669,8 +669,12 @@ static int iwl_mvm_sar_get_ewrd_table(struct iwl_mvm *mvm)
+ 	enabled = !!(wifi_pkg->package.elements[1].integer.value);
+ 	n_profiles = wifi_pkg->package.elements[2].integer.value;
+ 
+-	/* in case of BIOS bug */
+-	if (n_profiles <= 0) {
++	/*
++	 * Check the validity of n_profiles.  The EWRD profiles start
++	 * from index 1, so the maximum value allowed here is
++	 * ACPI_SAR_PROFILES_NUM - 1.
++	 */
++	if (n_profiles <= 0 || n_profiles >= ACPI_SAR_PROFILE_NUM) {
+ 		ret = -EINVAL;
+ 		goto out_free;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index a6e072234398..da45dc972889 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1232,12 +1232,15 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ 	iwl_mvm_del_aux_sta(mvm);
+ 
+ 	/*
+-	 * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()
+-	 * won't be called in this case).
++	 * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
++	 * hw (as restart_complete() won't be called in this case) and mac80211
++	 * won't execute the restart.
+ 	 * But make sure to cleanup interfaces that have gone down before/during
+ 	 * HW restart was requested.
+ 	 */
+-	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
++	    test_and_clear_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++			       &mvm->status))
+ 		ieee80211_iterate_interfaces(mvm->hw, 0,
+ 					     iwl_mvm_cleanup_iterator, mvm);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 642da10b0b7f..fccb3a4f9d57 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1218,7 +1218,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	    !(info->flags & IEEE80211_TX_STAT_AMPDU))
+ 		return;
+ 
+-	rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate);
++	if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band,
++				    &tx_resp_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+ 	/* Disable last tx check if we are debugging with fixed rate but
+@@ -1269,7 +1273,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	 */
+ 	table = &lq_sta->lq;
+ 	lq_hwrate = le32_to_cpu(table->rs_table[0]);
+-	rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);
++	if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	/* Here we actually compare this rate to the latest LQ command */
+ 	if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {
+@@ -1371,8 +1378,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		/* Collect data for each rate used during failed TX attempts */
+ 		for (i = 0; i <= retries; ++i) {
+ 			lq_hwrate = le32_to_cpu(table->rs_table[i]);
+-			rs_rate_from_ucode_rate(lq_hwrate, info->band,
+-						&lq_rate);
++			if (rs_rate_from_ucode_rate(lq_hwrate, info->band,
++						    &lq_rate)) {
++				WARN_ON_ONCE(1);
++				return;
++			}
++
+ 			/*
+ 			 * Only collect stats if retried rate is in the same RS
+ 			 * table as active/search.
+@@ -3241,7 +3252,10 @@ static void rs_build_rates_table_from_fixed(struct iwl_mvm *mvm,
+ 	for (i = 0; i < num_rates; i++)
+ 		lq_cmd->rs_table[i] = ucode_rate_le32;
+ 
+-	rs_rate_from_ucode_rate(ucode_rate, band, &rate);
++	if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	if (is_mimo(&rate))
+ 		lq_cmd->mimo_delim = num_rates - 1;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index cf2591f2ac23..2d35b70de2ab 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1385,6 +1385,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 	while (!skb_queue_empty(&skbs)) {
+ 		struct sk_buff *skb = __skb_dequeue(&skbs);
+ 		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++		struct ieee80211_hdr *hdr = (void *)skb->data;
+ 		bool flushed = false;
+ 
+ 		skb_freed++;
+@@ -1429,11 +1430,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 			info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
+ 		info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+ 
+-		/* W/A FW bug: seq_ctl is wrong when the status isn't success */
+-		if (status != TX_STATUS_SUCCESS) {
+-			struct ieee80211_hdr *hdr = (void *)skb->data;
++		/* W/A FW bug: seq_ctl is wrong upon failure / BAR frame */
++		if (ieee80211_is_back_req(hdr->frame_control))
++			seq_ctl = 0;
++		else if (status != TX_STATUS_SUCCESS)
+ 			seq_ctl = le16_to_cpu(hdr->seq_ctrl);
+-		}
+ 
+ 		if (unlikely(!seq_ctl)) {
+ 			struct ieee80211_hdr *hdr = (void *)skb->data;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index d15f5ba2dc77..cb5631c85d16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1050,6 +1050,14 @@ void iwl_pcie_rx_free(struct iwl_trans *trans)
+ 	kfree(trans_pcie->rxq);
+ }
+ 
++static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
++					  struct iwl_rb_allocator *rba)
++{
++	spin_lock(&rba->lock);
++	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
++	spin_unlock(&rba->lock);
++}
++
+ /*
+  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
+  *
+@@ -1081,9 +1089,7 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
+ 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
+ 		/* Move the 2 RBDs to the allocator ownership.
+ 		 Allocator has another 6 from pool for the request completion*/
+-		spin_lock(&rba->lock);
+-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-		spin_unlock(&rba->lock);
++		iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 
+ 		atomic_inc(&rba->req_pending);
+ 		queue_work(rba->alloc_wq, &rba->rx_alloc);
+@@ -1261,10 +1267,18 @@ restart:
+ 		IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
+ 
+ 	while (i != r) {
++		struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 		struct iwl_rx_mem_buffer *rxb;
+-
+-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
++		/* number of RBDs still waiting for page allocation */
++		u32 rb_pending_alloc =
++			atomic_read(&trans_pcie->rba.req_pending) *
++			RX_CLAIM_REQ_ALLOC;
++
++		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
++			     !emergency)) {
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 			emergency = true;
++		}
+ 
+ 		if (trans->cfg->mq_rx_supported) {
+ 			/*
+@@ -1307,17 +1321,13 @@ restart:
+ 			iwl_pcie_rx_allocator_get(trans, rxq);
+ 
+ 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
+-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
+-
+ 			/* Add the remaining empty RBDs for allocator use */
+-			spin_lock(&rba->lock);
+-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-			spin_unlock(&rba->lock);
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 		} else if (emergency) {
+ 			count++;
+ 			if (count == 8) {
+ 				count = 0;
+-				if (rxq->used_count < rxq->queue_size / 3)
++				if (rb_pending_alloc < rxq->queue_size / 3)
+ 					emergency = false;
+ 
+ 				rxq->read = i;
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index ffea610f67e2..10ba94c2b35b 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
+ 			  cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
+ 	if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
+ 		lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index 8985446570bd..190c699d6e3b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -725,8 +725,7 @@ __mt76x2_mac_set_beacon(struct mt76x2_dev *dev, u8 bcn_idx, struct sk_buff *skb)
+ 	if (skb) {
+ 		ret = mt76_write_beacon(dev, beacon_addr, skb);
+ 		if (!ret)
+-			dev->beacon_data_mask |= BIT(bcn_idx) &
+-						 dev->beacon_mask;
++			dev->beacon_data_mask |= BIT(bcn_idx);
+ 	} else {
+ 		dev->beacon_data_mask &= ~BIT(bcn_idx);
+ 		for (i = 0; i < beacon_len; i += 4)
+diff --git a/drivers/net/wireless/rsi/rsi_91x_usb.c b/drivers/net/wireless/rsi/rsi_91x_usb.c
+index 6ce6b754df12..45a1b86491b6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_usb.c
++++ b/drivers/net/wireless/rsi/rsi_91x_usb.c
+@@ -266,15 +266,17 @@ static void rsi_rx_done_handler(struct urb *urb)
+ 	if (urb->status)
+ 		goto out;
+ 
+-	if (urb->actual_length <= 0) {
+-		rsi_dbg(INFO_ZONE, "%s: Zero length packet\n", __func__);
++	if (urb->actual_length <= 0 ||
++	    urb->actual_length > rx_cb->rx_skb->len) {
++		rsi_dbg(INFO_ZONE, "%s: Invalid packet length = %d\n",
++			__func__, urb->actual_length);
+ 		goto out;
+ 	}
+ 	if (skb_queue_len(&dev->rx_q) >= RSI_MAX_RX_PKTS) {
+ 		rsi_dbg(INFO_ZONE, "Max RX packets reached\n");
+ 		goto out;
+ 	}
+-	skb_put(rx_cb->rx_skb, urb->actual_length);
++	skb_trim(rx_cb->rx_skb, urb->actual_length);
+ 	skb_queue_tail(&dev->rx_q, rx_cb->rx_skb);
+ 
+ 	rsi_set_event(&dev->rx_thread.event);
+@@ -308,6 +310,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 	if (!skb)
+ 		return -ENOMEM;
+ 	skb_reserve(skb, MAX_DWORD_ALIGN_BYTES);
++	skb_put(skb, RSI_MAX_RX_USB_PKT_SIZE - MAX_DWORD_ALIGN_BYTES);
+ 	dword_align_bytes = (unsigned long)skb->data & 0x3f;
+ 	if (dword_align_bytes > 0)
+ 		skb_push(skb, dword_align_bytes);
+@@ -319,7 +322,7 @@ static int rsi_rx_urb_submit(struct rsi_hw *adapter, u8 ep_num)
+ 			  usb_rcvbulkpipe(dev->usbdev,
+ 			  dev->bulkin_endpoint_addr[ep_num - 1]),
+ 			  urb->transfer_buffer,
+-			  RSI_MAX_RX_USB_PKT_SIZE,
++			  skb->len,
+ 			  rsi_rx_done_handler,
+ 			  rx_cb);
+ 
+diff --git a/drivers/nfc/nfcmrvl/uart.c b/drivers/nfc/nfcmrvl/uart.c
+index 91162f8e0366..9a22056e8d9e 100644
+--- a/drivers/nfc/nfcmrvl/uart.c
++++ b/drivers/nfc/nfcmrvl/uart.c
+@@ -73,10 +73,9 @@ static int nfcmrvl_uart_parse_dt(struct device_node *node,
+ 	struct device_node *matched_node;
+ 	int ret;
+ 
+-	matched_node = of_find_compatible_node(node, NULL, "marvell,nfc-uart");
++	matched_node = of_get_compatible_child(node, "marvell,nfc-uart");
+ 	if (!matched_node) {
+-		matched_node = of_find_compatible_node(node, NULL,
+-						       "mrvl,nfc-uart");
++		matched_node = of_get_compatible_child(node, "mrvl,nfc-uart");
+ 		if (!matched_node)
+ 			return -ENODEV;
+ 	}
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 8aae6dcc839f..9148015ed803 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -488,6 +488,8 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
+ 		put_device(dev);
+ 	}
+ 	put_device(dev);
++	if (dev->parent)
++		put_device(dev->parent);
+ }
+ 
+ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+@@ -507,6 +509,8 @@ void __nd_device_register(struct device *dev)
+ 	if (!dev)
+ 		return;
+ 	dev->bus = &nvdimm_bus_type;
++	if (dev->parent)
++		get_device(dev->parent);
+ 	get_device(dev);
+ 	async_schedule_domain(nd_async_device_register, dev,
+ 			&nd_async_domain);
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 8b1fd7f1a224..2245cfb8c6ab 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -393,9 +393,11 @@ static int pmem_attach_disk(struct device *dev,
+ 		addr = devm_memremap_pages(dev, &pmem->pgmap);
+ 		pmem->pfn_flags |= PFN_MAP;
+ 		memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
+-	} else
++	} else {
+ 		addr = devm_memremap(dev, pmem->phys_addr,
+ 				pmem->size, ARCH_MEMREMAP_PMEM);
++		memcpy(&bb_res, &nsio->res, sizeof(bb_res));
++	}
+ 
+ 	/*
+ 	 * At release time the queue must be frozen before
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index c30d5af02cc2..63cb01ef4ef0 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -545,10 +545,17 @@ static ssize_t region_badblocks_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev);
++	ssize_t rc;
+ 
+-	return badblocks_show(&nd_region->bb, buf, 0);
+-}
++	device_lock(dev);
++	if (dev->driver)
++		rc = badblocks_show(&nd_region->bb, buf, 0);
++	else
++		rc = -ENXIO;
++	device_unlock(dev);
+ 
++	return rc;
++}
+ static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL);
+ 
+ static ssize_t resource_show(struct device *dev,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index bf65501e6ed6..f1f375fb362b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3119,8 +3119,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ 	}
+ 
+ 	mutex_lock(&ns->ctrl->subsys->lock);
+-	nvme_mpath_clear_current_path(ns);
+ 	list_del_rcu(&ns->siblings);
++	nvme_mpath_clear_current_path(ns);
+ 	mutex_unlock(&ns->ctrl->subsys->lock);
+ 
+ 	down_write(&ns->ctrl->namespaces_rwsem);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 514d1dfc5630..122b52d0ebfd 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -518,11 +518,17 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 			goto err_device_del;
+ 	}
+ 
+-	if (config->cells)
+-		nvmem_add_cells(nvmem, config->cells, config->ncells);
++	if (config->cells) {
++		rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
++		if (rval)
++			goto err_teardown_compat;
++	}
+ 
+ 	return nvmem;
+ 
++err_teardown_compat:
++	if (config->compat)
++		device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom);
+ err_device_del:
+ 	device_del(&nvmem->dev);
+ err_put_device:
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 7af0ddec936b..20988c426650 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -425,6 +425,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
+ 		dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
+ 			count, pstate_count);
+ 		ret = -ENOENT;
++		_dev_pm_opp_remove_table(opp_table, dev, false);
+ 		goto put_opp_table;
+ 	}
+ 
+diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
+index 345aab56ce8b..78ed6cc8d521 100644
+--- a/drivers/pci/controller/dwc/pci-dra7xx.c
++++ b/drivers/pci/controller/dwc/pci-dra7xx.c
+@@ -542,7 +542,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+ };
+ 
+ /*
+- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
++ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
+  * @dra7xx: the dra7xx device where the workaround should be applied
+  *
+  * Access to the PCIe slave port that are not 32-bit aligned will result
+@@ -552,7 +552,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+  *
+  * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
+  */
+-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
++static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
+ {
+ 	int ret;
+ 	struct device_node *np = dev->of_node;
+@@ -704,6 +704,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_RC);
++
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
++		if (ret)
++			dev_err(dev, "WA for Errata i870 not applied\n");
++
+ 		ret = dra7xx_add_pcie_port(dra7xx, pdev);
+ 		if (ret < 0)
+ 			goto err_gpio;
+@@ -717,7 +722,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_EP);
+ 
+-		ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
+ 		if (ret)
+ 			goto err_gpio;
+ 
+diff --git a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c
+index e3fe4124e3af..a67dc91261f5 100644
+--- a/drivers/pci/controller/pcie-cadence-ep.c
++++ b/drivers/pci/controller/pcie-cadence-ep.c
+@@ -259,7 +259,6 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 				     u8 intx, bool is_asserted)
+ {
+ 	struct cdns_pcie *pcie = &ep->pcie;
+-	u32 r = ep->max_regions - 1;
+ 	u32 offset;
+ 	u16 status;
+ 	u8 msg_code;
+@@ -269,8 +268,8 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0,
+ 							     ep->irq_phys_addr);
+ 		ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY;
+ 		ep->irq_pci_fn = fn;
+@@ -348,8 +347,8 @@ static int cdns_pcie_ep_send_msi_irq(struct cdns_pcie_ep *ep, u8 fn,
+ 	/* Set the outbound region if needed. */
+ 	if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
+ 		     ep->irq_pci_fn != fn)) {
+-		/* Last region was reserved for IRQ writes. */
+-		cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1,
++		/* First region was reserved for IRQ writes. */
++		cdns_pcie_set_outbound_region(pcie, fn, 0,
+ 					      false,
+ 					      ep->irq_phys_addr,
+ 					      pci_addr & ~pci_addr_mask,
+@@ -510,6 +509,8 @@ static int cdns_pcie_ep_probe(struct platform_device *pdev)
+ 		goto free_epc_mem;
+ 	}
+ 	ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE;
++	/* Reserve region 0 for IRQs */
++	set_bit(0, &ep->ob_region_map);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 861dda69f366..c5ff6ca65eab 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -337,6 +337,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
+ {
+ 	struct mtk_pcie *pcie = bus->sysdata;
+ 	struct mtk_pcie_port *port;
++	struct pci_dev *dev = NULL;
++
++	/*
++	 * Walk the bus hierarchy to get the devfn value
++	 * of the port in the root bus.
++	 */
++	while (bus && bus->number) {
++		dev = bus->self;
++		bus = dev->bus;
++		devfn = dev->devfn;
++	}
+ 
+ 	list_for_each_entry(port, &pcie->ports, list)
+ 		if (port->slot == PCI_SLOT(devfn))
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 942b64fc7f1f..fd2dbd7eed7b 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -197,9 +197,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
+ 	int i, best = 1;
+ 	unsigned long flags;
+ 
+-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
++	if (vmd->msix_count == 1)
+ 		return &vmd->irqs[0];
+ 
++	/*
++	 * White list for fast-interrupt handlers. All others will share the
++	 * "slow" interrupt vector.
++	 */
++	switch (msi_desc_to_pci_dev(desc)->class) {
++	case PCI_CLASS_STORAGE_EXPRESS:
++		break;
++	default:
++		return &vmd->irqs[0];
++	}
++
+ 	raw_spin_lock_irqsave(&list_lock, flags);
+ 	for (i = 1; i < vmd->msix_count; i++)
+ 		if (vmd->irqs[i].count < vmd->irqs[best].count)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 4d88afdfc843..f7b7cb7189eb 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
+ 			}
+ 		}
+ 	}
+-	WARN_ON(!!dev->msix_enabled);
+ 
+ 	/* Check whether driver already requested for MSI irq */
+ 	if (dev->msi_enabled) {
+@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (!pci_msi_supported(dev, minvec))
+ 		return -EINVAL;
+ 
+-	WARN_ON(!!dev->msi_enabled);
+-
+ 	/* Check whether driver already requested MSI-X irqs */
+ 	if (dev->msix_enabled) {
+ 		pci_info(dev, "can't enable MSI (MSI-X already enabled)\n");
+@@ -1039,6 +1036,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msi_enabled))
++		return -EINVAL;
++
+ 	nvec = pci_msi_vec_count(dev);
+ 	if (nvec < 0)
+ 		return nvec;
+@@ -1087,6 +1087,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msix_enabled))
++		return -EINVAL;
++
+ 	for (;;) {
+ 		if (affd) {
+ 			nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 5d1698265da5..d2b04ab37308 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -779,19 +779,33 @@ static void pci_acpi_setup(struct device *dev)
+ 		return;
+ 
+ 	device_set_wakeup_capable(dev, true);
++	/*
++	 * For bridges that can do D3 we enable wake automatically (as
++	 * we do for the power management itself in that case). The
++	 * reason is that the bridge may have additional methods such as
++	 * _DSW that need to be called.
++	 */
++	if (pci_dev->bridge_d3)
++		device_wakeup_enable(dev);
++
+ 	acpi_pci_wakeup(pci_dev, false);
+ }
+ 
+ static void pci_acpi_cleanup(struct device *dev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
++	struct pci_dev *pci_dev = to_pci_dev(dev);
+ 
+ 	if (!adev)
+ 		return;
+ 
+ 	pci_acpi_remove_pm_notifier(adev);
+-	if (adev->wakeup.flags.valid)
++	if (adev->wakeup.flags.valid) {
++		if (pci_dev->bridge_d3)
++			device_wakeup_disable(dev);
++
+ 		device_set_wakeup_capable(dev, false);
++	}
+ }
+ 
+ static bool pci_acpi_bus_match(struct device *dev)
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index c687c817b47d..6322c3f446bc 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 	 * All PCIe functions are in one slot, remove one function will remove
+ 	 * the whole slot, so just wait until we are the last function left.
+ 	 */
+-	if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
++	if (!list_empty(&parent->subordinate->devices))
+ 		goto out;
+ 
+ 	link = parent->link_state;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d1e2d175c10b..a4d11d14b196 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3177,7 +3177,11 @@ static void disable_igfx_irq(struct pci_dev *dev)
+ 
+ 	pci_iounmap(dev, regs);
+ }
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
+ 
+diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
+index 5e3d0dced2b8..b08945a7bbfd 100644
+--- a/drivers/pci/remove.c
++++ b/drivers/pci/remove.c
+@@ -26,9 +26,6 @@ static void pci_stop_dev(struct pci_dev *dev)
+ 
+ 		pci_dev_assign_added(dev, false);
+ 	}
+-
+-	if (dev->bus->self)
+-		pcie_aspm_exit_link_state(dev);
+ }
+ 
+ static void pci_destroy_dev(struct pci_dev *dev)
+@@ -42,6 +39,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
+ 	list_del(&dev->bus_list);
+ 	up_write(&pci_bus_sem);
+ 
++	pcie_aspm_exit_link_state(dev);
+ 	pci_bridge_d3_update(dev);
+ 	pci_free_resources(dev);
+ 	put_device(&dev->dev);
+diff --git a/drivers/pcmcia/ricoh.h b/drivers/pcmcia/ricoh.h
+index 01098c841f87..8ac7b138c094 100644
+--- a/drivers/pcmcia/ricoh.h
++++ b/drivers/pcmcia/ricoh.h
+@@ -119,6 +119,10 @@
+ #define  RL5C4XX_MISC_CONTROL           0x2F /* 8 bit */
+ #define  RL5C4XX_ZV_ENABLE              0x08
+ 
++/* Misc Control 3 Register */
++#define RL5C4XX_MISC3			0x00A2 /* 16 bit */
++#define  RL5C47X_MISC3_CB_CLKRUN_DIS	BIT(1)
++
+ #ifdef __YENTA_H
+ 
+ #define rl_misc(socket)		((socket)->private[0])
+@@ -156,6 +160,35 @@ static void ricoh_set_zv(struct yenta_socket *socket)
+         }
+ }
+ 
++static void ricoh_set_clkrun(struct yenta_socket *socket, bool quiet)
++{
++	u16 misc3;
++
++	/*
++	 * RL5C475II likely has this setting, too, however no datasheet
++	 * is publicly available for this chip
++	 */
++	if (socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C476 &&
++	    socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C478)
++		return;
++
++	if (socket->dev->revision < 0x80)
++		return;
++
++	misc3 = config_readw(socket, RL5C4XX_MISC3);
++	if (misc3 & RL5C47X_MISC3_CB_CLKRUN_DIS) {
++		if (!quiet)
++			dev_dbg(&socket->dev->dev,
++				"CLKRUN feature already disabled\n");
++	} else if (disable_clkrun) {
++		if (!quiet)
++			dev_info(&socket->dev->dev,
++				 "Disabling CLKRUN feature\n");
++		misc3 |= RL5C47X_MISC3_CB_CLKRUN_DIS;
++		config_writew(socket, RL5C4XX_MISC3, misc3);
++	}
++}
++
+ static void ricoh_save_state(struct yenta_socket *socket)
+ {
+ 	rl_misc(socket) = config_readw(socket, RL5C4XX_MISC);
+@@ -172,6 +205,7 @@ static void ricoh_restore_state(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_16BIT_IO_0, rl_io(socket));
+ 	config_writew(socket, RL5C4XX_16BIT_MEM_0, rl_mem(socket));
+ 	config_writew(socket, RL5C4XX_CONFIG, rl_config(socket));
++	ricoh_set_clkrun(socket, true);
+ }
+ 
+ 
+@@ -197,6 +231,7 @@ static int ricoh_override(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_CONFIG, config);
+ 
+ 	ricoh_set_zv(socket);
++	ricoh_set_clkrun(socket, false);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index ab3da2262f0f..ac6a3f46b1e6 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -26,7 +26,8 @@
+ 
+ static bool disable_clkrun;
+ module_param(disable_clkrun, bool, 0444);
+-MODULE_PARM_DESC(disable_clkrun, "If PC card doesn't function properly, please try this option");
++MODULE_PARM_DESC(disable_clkrun,
++		 "If PC card doesn't function properly, please try this option (TI and Ricoh bridges only)");
+ 
+ static bool isa_probe = 1;
+ module_param(isa_probe, bool, 0444);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index 6556dbeae65e..ac251c62bc66 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -319,6 +319,8 @@ static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function,
+ 	pad->function = function;
+ 
+ 	ret = pmic_mpp_write_mode_ctl(state, pad);
++	if (ret < 0)
++		return ret;
+ 
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+@@ -343,13 +345,12 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN;
++		if (pad->pullup != PMIC_MPP_PULL_UP_OPEN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		switch (pad->pullup) {
+-		case PMIC_MPP_PULL_UP_OPEN:
+-			arg = 0;
+-			break;
+ 		case PMIC_MPP_PULL_UP_0P6KOHM:
+ 			arg = 600;
+ 			break;
+@@ -364,13 +365,17 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		}
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+@@ -382,7 +387,9 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pad->amux_input;
+ 		break;
+ 	case PMIC_MPP_CONF_PAIRED:
+-		arg = pad->paired;
++		if (!pad->paired)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = pad->drive_strength;
+@@ -455,7 +462,7 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			pad->dtest = arg;
+ 			break;
+ 		case PIN_CONFIG_DRIVE_STRENGTH:
+-			arg = pad->drive_strength;
++			pad->drive_strength = arg;
+ 			break;
+ 		case PMIC_MPP_CONF_AMUX_ROUTE:
+ 			if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4)
+@@ -502,6 +509,10 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_SINK_CTL, pad->drive_strength);
++	if (ret < 0)
++		return ret;
++
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+ 	return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val);
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+index f53e32a9d8fc..0e153bae322e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+@@ -260,22 +260,32 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_NP;
++		if (pin->bias != PM8XXX_GPIO_BIAS_NP)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_PD;
++		if (pin->bias != PM8XXX_GPIO_BIAS_PD)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30;
++		if (pin->bias > PM8XXX_GPIO_BIAS_PU_1P5_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PM8XXX_QCOM_PULL_UP_STRENGTH:
+ 		arg = pin->pull_up_strength;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = pin->disable;
++		if (!pin->disable)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pin->mode == PM8XXX_GPIO_MODE_INPUT;
++		if (pin->mode != PM8XXX_GPIO_MODE_INPUT)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		if (pin->mode & PM8XXX_GPIO_MODE_OUTPUT)
+@@ -290,10 +300,14 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pin->output_strength;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = !pin->open_drain;
++		if (pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pin->open_drain;
++		if (!pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index 4d9bf9b3e9f3..26ebedc1f6d3 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -1079,10 +1079,9 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 	 * We suppose that we won't have any more functions than pins,
+ 	 * we'll reallocate that later anyway
+ 	 */
+-	pctl->functions = devm_kcalloc(&pdev->dev,
+-				       pctl->ngroups,
+-				       sizeof(*pctl->functions),
+-				       GFP_KERNEL);
++	pctl->functions = kcalloc(pctl->ngroups,
++				  sizeof(*pctl->functions),
++				  GFP_KERNEL);
+ 	if (!pctl->functions)
+ 		return -ENOMEM;
+ 
+@@ -1133,8 +1132,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 
+ 			func_item = sunxi_pinctrl_find_function_by_name(pctl,
+ 									func->name);
+-			if (!func_item)
++			if (!func_item) {
++				kfree(pctl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!func_item->groups) {
+ 				func_item->groups =
+@@ -1142,8 +1143,10 @@ static int sunxi_pinctrl_build_state(struct platform_device *pdev)
+ 						     func_item->ngroups,
+ 						     sizeof(*func_item->groups),
+ 						     GFP_KERNEL);
+-				if (!func_item->groups)
++				if (!func_item->groups) {
++					kfree(pctl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			func_grp = func_item->groups;
+diff --git a/drivers/power/supply/twl4030_charger.c b/drivers/power/supply/twl4030_charger.c
+index bbcaee56db9d..b6a7d9f74cf3 100644
+--- a/drivers/power/supply/twl4030_charger.c
++++ b/drivers/power/supply/twl4030_charger.c
+@@ -996,12 +996,13 @@ static int twl4030_bci_probe(struct platform_device *pdev)
+ 	if (bci->dev->of_node) {
+ 		struct device_node *phynode;
+ 
+-		phynode = of_find_compatible_node(bci->dev->of_node->parent,
+-						  NULL, "ti,twl4030-usb");
++		phynode = of_get_compatible_child(bci->dev->of_node->parent,
++						  "ti,twl4030-usb");
+ 		if (phynode) {
+ 			bci->usb_nb.notifier_call = twl4030_bci_usb_ncb;
+ 			bci->transceiver = devm_usb_get_phy_by_node(
+ 				bci->dev, phynode, &bci->usb_nb);
++			of_node_put(phynode);
+ 			if (IS_ERR(bci->transceiver)) {
+ 				ret = PTR_ERR(bci->transceiver);
+ 				if (ret == -EPROBE_DEFER)
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 6437bbeebc91..e026a7817013 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1114,8 +1114,10 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ 	channel->edge = edge;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
+-	if (!channel->name)
+-		return ERR_PTR(-ENOMEM);
++	if (!channel->name) {
++		ret = -ENOMEM;
++		goto free_channel;
++	}
+ 
+ 	spin_lock_init(&channel->tx_lock);
+ 	spin_lock_init(&channel->recv_lock);
+@@ -1165,6 +1167,7 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ free_name_and_channel:
+ 	kfree(channel->name);
++free_channel:
+ 	kfree(channel);
+ 
+ 	return ERR_PTR(ret);
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index cd3a2411bc2f..df0c5776d49b 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -50,6 +50,7 @@
+ /* this is for "generic access to PC-style RTC" using CMOS_READ/CMOS_WRITE */
+ #include <linux/mc146818rtc.h>
+ 
++#ifdef CONFIG_ACPI
+ /*
+  * Use ACPI SCI to replace HPET interrupt for RTC Alarm event
+  *
+@@ -61,6 +62,18 @@
+ static bool use_acpi_alarm;
+ module_param(use_acpi_alarm, bool, 0444);
+ 
++static inline int cmos_use_acpi_alarm(void)
++{
++	return use_acpi_alarm;
++}
++#else /* !CONFIG_ACPI */
++
++static inline int cmos_use_acpi_alarm(void)
++{
++	return 0;
++}
++#endif
++
+ struct cmos_rtc {
+ 	struct rtc_device	*rtc;
+ 	struct device		*dev;
+@@ -167,9 +180,9 @@ static inline int hpet_unregister_irq_handler(irq_handler_t handler)
+ #endif
+ 
+ /* Don't use HPET for RTC Alarm event if ACPI Fixed event is used */
+-static int use_hpet_alarm(void)
++static inline int use_hpet_alarm(void)
+ {
+-	return is_hpet_enabled() && !use_acpi_alarm;
++	return is_hpet_enabled() && !cmos_use_acpi_alarm();
+ }
+ 
+ /*----------------------------------------------------------------*/
+@@ -340,7 +353,7 @@ static void cmos_irq_enable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_set_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(cmos->dev);
+ 	}
+@@ -358,7 +371,7 @@ static void cmos_irq_disable(struct cmos_rtc *cmos, unsigned char mask)
+ 	if (use_hpet_alarm())
+ 		hpet_mask_rtc_irq_bit(mask);
+ 
+-	if ((mask & RTC_AIE) && use_acpi_alarm) {
++	if ((mask & RTC_AIE) && cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(cmos->dev);
+ 	}
+@@ -980,7 +993,7 @@ static int cmos_suspend(struct device *dev)
+ 	}
+ 	spin_unlock_irq(&rtc_lock);
+ 
+-	if ((tmp & RTC_AIE) && !use_acpi_alarm) {
++	if ((tmp & RTC_AIE) && !cmos_use_acpi_alarm()) {
+ 		cmos->enabled_wake = 1;
+ 		if (cmos->wake_on)
+ 			cmos->wake_on(dev);
+@@ -1031,7 +1044,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ 	 * ACPI RTC wake event is cleared after resume from STR,
+ 	 * ACK the rtc irq here
+ 	 */
+-	if (t_now >= cmos->alarm_expires && use_acpi_alarm) {
++	if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 		return;
+ 	}
+@@ -1053,7 +1066,7 @@ static int __maybe_unused cmos_resume(struct device *dev)
+ 	struct cmos_rtc	*cmos = dev_get_drvdata(dev);
+ 	unsigned char tmp;
+ 
+-	if (cmos->enabled_wake && !use_acpi_alarm) {
++	if (cmos->enabled_wake && !cmos_use_acpi_alarm()) {
+ 		if (cmos->wake_off)
+ 			cmos->wake_off(dev);
+ 		else
+@@ -1132,7 +1145,7 @@ static u32 rtc_handler(void *context)
+ 	 * Or else, ACPI SCI is enabled during suspend/resume only,
+ 	 * update rtc irq in that case.
+ 	 */
+-	if (use_acpi_alarm)
++	if (cmos_use_acpi_alarm())
+ 		cmos_interrupt(0, (void *)cmos->rtc);
+ 	else {
+ 		/* Fix me: can we use cmos_interrupt() here as well? */
+diff --git a/drivers/rtc/rtc-ds1307.c b/drivers/rtc/rtc-ds1307.c
+index e9ec4160d7f6..83fa875b89cd 100644
+--- a/drivers/rtc/rtc-ds1307.c
++++ b/drivers/rtc/rtc-ds1307.c
+@@ -1372,7 +1372,6 @@ static void ds1307_clks_register(struct ds1307 *ds1307)
+ static const struct regmap_config regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+-	.max_register = 0x9,
+ };
+ 
+ static int ds1307_probe(struct i2c_client *client,
+diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
+index c3fc34b9964d..9e5d3f7d29ae 100644
+--- a/drivers/scsi/esp_scsi.c
++++ b/drivers/scsi/esp_scsi.c
+@@ -1338,6 +1338,7 @@ static int esp_data_bytes_sent(struct esp *esp, struct esp_cmd_entry *ent,
+ 
+ 	bytes_sent = esp->data_dma_len;
+ 	bytes_sent -= ecount;
++	bytes_sent -= esp->send_cmd_residual;
+ 
+ 	/*
+ 	 * The am53c974 has a DMA 'pecularity'. The doc states:
+diff --git a/drivers/scsi/esp_scsi.h b/drivers/scsi/esp_scsi.h
+index 8163dca2071b..a77772777a30 100644
+--- a/drivers/scsi/esp_scsi.h
++++ b/drivers/scsi/esp_scsi.h
+@@ -540,6 +540,8 @@ struct esp {
+ 
+ 	void			*dma;
+ 	int			dmarev;
++
++	u32			send_cmd_residual;
+ };
+ 
+ /* A front-end driver for the ESP chip should do the following in
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index a94fb9f8bb44..3b3af1459008 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4140,9 +4140,17 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 	}
+ 	lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
+-	lpfc_cmd->pCmd = NULL;
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++	/* If pCmd was set to NULL from abort path, do not call scsi_done */
++	if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) {
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
++				 "0711 FCP cmd already NULL, sid: 0x%06x, "
++				 "did: 0x%06x, oxid: 0x%04x\n",
++				 vport->fc_myDID,
++				 (pnode) ? pnode->nlp_DID : 0,
++				 phba->sli_rev == LPFC_SLI_REV4 ?
++				 lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff);
++		return;
++	}
+ 
+ 	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
+ 	cmd->scsi_done(cmd);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 6f3c00a233ec..4f8d459d2378 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -3790,6 +3790,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 	struct hbq_dmabuf *dmabuf;
+ 	struct lpfc_cq_event *cq_event;
+ 	unsigned long iflag;
++	int count = 0;
+ 
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
+@@ -3811,16 +3812,22 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 			if (irspiocbq)
+ 				lpfc_sli_sp_handle_rspiocb(phba, pring,
+ 							   irspiocbq);
++			count++;
+ 			break;
+ 		case CQE_CODE_RECEIVE:
+ 		case CQE_CODE_RECEIVE_V1:
+ 			dmabuf = container_of(cq_event, struct hbq_dmabuf,
+ 					      cq_event);
+ 			lpfc_sli4_handle_received_buffer(phba, dmabuf);
++			count++;
+ 			break;
+ 		default:
+ 			break;
+ 		}
++
++		/* Limit the number of events to 64 to avoid soft lockups */
++		if (count == 64)
++			break;
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c
+index eb551f3cc471..71879f2207e0 100644
+--- a/drivers/scsi/mac_esp.c
++++ b/drivers/scsi/mac_esp.c
+@@ -427,6 +427,8 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
+ 			scsi_esp_cmd(esp, ESP_CMD_TI);
+ 		}
+ 	}
++
++	esp->send_cmd_residual = esp_count;
+ }
+ 
+ static int mac_esp_irq_pending(struct esp *esp)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 8e84e3fb648a..2d6f6414a2a2 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -7499,6 +7499,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
+ 		get_user(user_sense_off, &cioc->sense_off))
+ 		return -EFAULT;
+ 
++	if (local_sense_off != user_sense_off)
++		return -EINVAL;
++
+ 	if (local_sense_len) {
+ 		void __user **sense_ioc_ptr =
+ 			(void __user **)((u8 *)((unsigned long)&ioc->frame.raw) + local_sense_off);
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 397081d320b1..83f71c266c66 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1677,8 +1677,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
+ 
+ 	hba->clk_gating.state = REQ_CLKS_OFF;
+ 	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+-	schedule_delayed_work(&hba->clk_gating.gate_work,
+-			msecs_to_jiffies(hba->clk_gating.delay_ms));
++	queue_delayed_work(hba->clk_gating.clk_gating_workq,
++			   &hba->clk_gating.gate_work,
++			   msecs_to_jiffies(hba->clk_gating.delay_ms));
+ }
+ 
+ void ufshcd_release(struct ufs_hba *hba)
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index 8a3678c2e83c..97bb5989aa21 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -212,6 +212,11 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+ 		goto remove_cdev;
+ 	} else if (!ret) {
++		if (!qcom_scm_is_available()) {
++			ret = -EPROBE_DEFER;
++			goto remove_cdev;
++		}
++
+ 		perms[0].vmid = QCOM_SCM_VMID_HLOS;
+ 		perms[0].perm = QCOM_SCM_PERM_RW;
+ 		perms[1].vmid = vmid;
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 2d6f3fcf3211..ed71a4c9c8b2 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1288,7 +1288,7 @@ static void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
+ 	if (!pmc->soc->has_tsense_reset)
+ 		return;
+ 
+-	np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
++	np = of_get_child_by_name(pmc->dev->of_node, "i2c-thermtrip");
+ 	if (!np) {
+ 		dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
+ 		return;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 8612525fa4e3..584bcb018a62 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -89,7 +89,7 @@
+ #define BSPI_BPP_MODE_SELECT_MASK		BIT(8)
+ #define BSPI_BPP_ADDR_SELECT_MASK		BIT(16)
+ 
+-#define BSPI_READ_LENGTH			512
++#define BSPI_READ_LENGTH			256
+ 
+ /* MSPI register offsets */
+ #define MSPI_SPCR0_LSB				0x000
+@@ -355,7 +355,7 @@ static int bcm_qspi_bspi_set_flex_mode(struct bcm_qspi *qspi,
+ 	int bpc = 0, bpp = 0;
+ 	u8 command = op->cmd.opcode;
+ 	int width  = op->cmd.buswidth ? op->cmd.buswidth : SPI_NBITS_SINGLE;
+-	int addrlen = op->addr.nbytes * 8;
++	int addrlen = op->addr.nbytes;
+ 	int flex_mode = 1;
+ 
+ 	dev_dbg(&qspi->pdev->dev, "set flex mode w %x addrlen %x hp %d\n",
+diff --git a/drivers/spi/spi-ep93xx.c b/drivers/spi/spi-ep93xx.c
+index f1526757aaf6..79fc3940245a 100644
+--- a/drivers/spi/spi-ep93xx.c
++++ b/drivers/spi/spi-ep93xx.c
+@@ -246,6 +246,19 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+ 	return -EINPROGRESS;
+ }
+ 
++static enum dma_transfer_direction
++ep93xx_dma_data_to_trans_dir(enum dma_data_direction dir)
++{
++	switch (dir) {
++	case DMA_TO_DEVICE:
++		return DMA_MEM_TO_DEV;
++	case DMA_FROM_DEVICE:
++		return DMA_DEV_TO_MEM;
++	default:
++		return DMA_TRANS_NONE;
++	}
++}
++
+ /**
+  * ep93xx_spi_dma_prepare() - prepares a DMA transfer
+  * @master: SPI master
+@@ -257,7 +270,7 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+  */
+ static struct dma_async_tx_descriptor *
+ ep93xx_spi_dma_prepare(struct spi_master *master,
+-		       enum dma_transfer_direction dir)
++		       enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct spi_transfer *xfer = master->cur_msg->state;
+@@ -277,9 +290,9 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 		buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 
+ 	memset(&conf, 0, sizeof(conf));
+-	conf.direction = dir;
++	conf.direction = ep93xx_dma_data_to_trans_dir(dir);
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		buf = xfer->rx_buf;
+ 		sgt = &espi->rx_sgt;
+@@ -343,7 +356,8 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 	if (!nents)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK);
++	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, conf.direction,
++				      DMA_CTRL_ACK);
+ 	if (!txd) {
+ 		dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir);
+ 		return ERR_PTR(-ENOMEM);
+@@ -360,13 +374,13 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+  * unmapped.
+  */
+ static void ep93xx_spi_dma_finish(struct spi_master *master,
+-				  enum dma_transfer_direction dir)
++				  enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_chan *chan;
+ 	struct sg_table *sgt;
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		sgt = &espi->rx_sgt;
+ 	} else {
+@@ -381,8 +395,8 @@ static void ep93xx_spi_dma_callback(void *callback_param)
+ {
+ 	struct spi_master *master = callback_param;
+ 
+-	ep93xx_spi_dma_finish(master, DMA_MEM_TO_DEV);
+-	ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++	ep93xx_spi_dma_finish(master, DMA_TO_DEVICE);
++	ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 
+ 	spi_finalize_current_transfer(master);
+ }
+@@ -392,15 +406,15 @@ static int ep93xx_spi_dma_transfer(struct spi_master *master)
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_async_tx_descriptor *rxd, *txd;
+ 
+-	rxd = ep93xx_spi_dma_prepare(master, DMA_DEV_TO_MEM);
++	rxd = ep93xx_spi_dma_prepare(master, DMA_FROM_DEVICE);
+ 	if (IS_ERR(rxd)) {
+ 		dev_err(&master->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd));
+ 		return PTR_ERR(rxd);
+ 	}
+ 
+-	txd = ep93xx_spi_dma_prepare(master, DMA_MEM_TO_DEV);
++	txd = ep93xx_spi_dma_prepare(master, DMA_TO_DEVICE);
+ 	if (IS_ERR(txd)) {
+-		ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++		ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 		dev_err(&master->dev, "DMA TX failed: %ld\n", PTR_ERR(txd));
+ 		return PTR_ERR(txd);
+ 	}
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 3b518ead504e..b82b47152b18 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -282,9 +282,11 @@ static int spi_gpio_request(struct device *dev,
+ 	spi_gpio->miso = devm_gpiod_get_optional(dev, "miso", GPIOD_IN);
+ 	if (IS_ERR(spi_gpio->miso))
+ 		return PTR_ERR(spi_gpio->miso);
+-	if (!spi_gpio->miso)
+-		/* HW configuration without MISO pin */
+-		*mflags |= SPI_MASTER_NO_RX;
++	/*
++	 * No setting SPI_MASTER_NO_RX here - if there is only a MOSI
++	 * pin connected the host can still do RX by changing the
++	 * direction of the line.
++	 */
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+ 	if (IS_ERR(spi_gpio->sck))
+@@ -408,7 +410,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 	spi_gpio->bitbang.master = master;
+ 	spi_gpio->bitbang.chipselect = spi_gpio_chipselect;
+ 
+-	if ((master_flags & (SPI_MASTER_NO_TX | SPI_MASTER_NO_RX)) == 0) {
++	if ((master_flags & SPI_MASTER_NO_TX) == 0) {
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_txrx_word_mode0;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_txrx_word_mode1;
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_txrx_word_mode2;
+diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
+index 990770dfa5cf..ec0c24e873cd 100644
+--- a/drivers/spi/spi-mem.c
++++ b/drivers/spi/spi-mem.c
+@@ -328,10 +328,25 @@ EXPORT_SYMBOL_GPL(spi_mem_exec_op);
+ int spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
+ {
+ 	struct spi_controller *ctlr = mem->spi->controller;
++	size_t len;
++
++	len = sizeof(op->cmd.opcode) + op->addr.nbytes + op->dummy.nbytes;
+ 
+ 	if (ctlr->mem_ops && ctlr->mem_ops->adjust_op_size)
+ 		return ctlr->mem_ops->adjust_op_size(mem, op);
+ 
++	if (!ctlr->mem_ops || !ctlr->mem_ops->exec_op) {
++		if (len > spi_max_transfer_size(mem->spi))
++			return -EINVAL;
++
++		op->data.nbytes = min3((size_t)op->data.nbytes,
++				       spi_max_transfer_size(mem->spi),
++				       spi_max_message_size(mem->spi) -
++				       len);
++		if (!op->data.nbytes)
++			return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(spi_mem_adjust_op_size);
+diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c
+index 3be9519654e5..cf3fad2cb871 100644
+--- a/drivers/tc/tc.c
++++ b/drivers/tc/tc.c
+@@ -2,7 +2,7 @@
+  *	TURBOchannel bus services.
+  *
+  *	Copyright (c) Harald Koerfgen, 1998
+- *	Copyright (c) 2001, 2003, 2005, 2006  Maciej W. Rozycki
++ *	Copyright (c) 2001, 2003, 2005, 2006, 2018  Maciej W. Rozycki
+  *	Copyright (c) 2005  James Simmons
+  *
+  *	This file is subject to the terms and conditions of the GNU
+@@ -10,6 +10,7 @@
+  *	directory of this archive for more details.
+  */
+ #include <linux/compiler.h>
++#include <linux/dma-mapping.h>
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+@@ -92,6 +93,11 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus)
+ 		tdev->dev.bus = &tc_bus_type;
+ 		tdev->slot = slot;
+ 
++		/* TURBOchannel has 34-bit DMA addressing (16GiB space). */
++		tdev->dma_mask = DMA_BIT_MASK(34);
++		tdev->dev.dma_mask = &tdev->dma_mask;
++		tdev->dev.coherent_dma_mask = DMA_BIT_MASK(34);
++
+ 		for (i = 0; i < 8; i++) {
+ 			tdev->firmware[i] =
+ 				readb(module + offset + TC_FIRM_VER + 4 * i);
+diff --git a/drivers/thermal/da9062-thermal.c b/drivers/thermal/da9062-thermal.c
+index dd8dd947b7f0..01b0cb994457 100644
+--- a/drivers/thermal/da9062-thermal.c
++++ b/drivers/thermal/da9062-thermal.c
+@@ -106,7 +106,7 @@ static void da9062_thermal_poll_on(struct work_struct *work)
+ 					   THERMAL_EVENT_UNSPECIFIED);
+ 
+ 		delay = msecs_to_jiffies(thermal->zone->passive_delay);
+-		schedule_delayed_work(&thermal->work, delay);
++		queue_delayed_work(system_freezable_wq, &thermal->work, delay);
+ 		return;
+ 	}
+ 
+@@ -125,7 +125,7 @@ static irqreturn_t da9062_thermal_irq_handler(int irq, void *data)
+ 	struct da9062_thermal *thermal = data;
+ 
+ 	disable_irq_nosync(thermal->irq);
+-	schedule_delayed_work(&thermal->work, 0);
++	queue_delayed_work(system_freezable_wq, &thermal->work, 0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index e77e63070e99..5844e26bd372 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -465,6 +465,7 @@ static int rcar_thermal_remove(struct platform_device *pdev)
+ 
+ 	rcar_thermal_for_each_priv(priv, common) {
+ 		rcar_thermal_irq_disable(priv);
++		cancel_delayed_work_sync(&priv->work);
+ 		if (priv->chip->use_of_thermal)
+ 			thermal_remove_hwmon_sysfs(priv->zone);
+ 		else
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index b4ba2b1dab76..f4d0ef695225 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -130,6 +130,11 @@ static void kgdboc_unregister_kbd(void)
+ 
+ static int kgdboc_option_setup(char *opt)
+ {
++	if (!opt) {
++		pr_err("kgdboc: config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdboc: config string too long\n");
+ 		return -ENOSPC;
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 6c58ad1abd7e..d5b2efae82fc 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -275,6 +275,8 @@ static struct class uio_class = {
+ 	.dev_groups = uio_groups,
+ };
+ 
++bool uio_class_registered;
++
+ /*
+  * device functions
+  */
+@@ -877,6 +879,9 @@ static int init_uio_class(void)
+ 		printk(KERN_ERR "class_register failed for uio\n");
+ 		goto err_class_register;
+ 	}
++
++	uio_class_registered = true;
++
+ 	return 0;
+ 
+ err_class_register:
+@@ -887,6 +892,7 @@ exit:
+ 
+ static void release_uio_class(void)
+ {
++	uio_class_registered = false;
+ 	class_unregister(&uio_class);
+ 	uio_major_cleanup();
+ }
+@@ -913,6 +919,9 @@ int __uio_register_device(struct module *owner,
+ 	struct uio_device *idev;
+ 	int ret = 0;
+ 
++	if (!uio_class_registered)
++		return -EPROBE_DEFER;
++
+ 	if (!parent || !info || !info->name || !info->version)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/usb/chipidea/otg.h b/drivers/usb/chipidea/otg.h
+index 7e7428e48bfa..4f8b8179ec96 100644
+--- a/drivers/usb/chipidea/otg.h
++++ b/drivers/usb/chipidea/otg.h
+@@ -17,7 +17,8 @@ void ci_handle_vbus_change(struct ci_hdrc *ci);
+ static inline void ci_otg_queue_work(struct ci_hdrc *ci)
+ {
+ 	disable_irq_nosync(ci->irq);
+-	queue_work(ci->wq, &ci->work);
++	if (queue_work(ci->wq, &ci->work) == false)
++		enable_irq(ci->irq);
+ }
+ 
+ #endif /* __DRIVERS_USB_CHIPIDEA_OTG_H */
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 6e2cdd7b93d4..05a68f035d19 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -4394,6 +4394,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 	struct dwc2_hsotg *hsotg = dwc2_hcd_to_hsotg(hcd);
+ 	struct usb_bus *bus = hcd_to_bus(hcd);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	dev_dbg(hsotg->dev, "DWC OTG HCD START\n");
+ 
+@@ -4409,6 +4410,13 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	dwc2_hcd_reinit(hsotg);
+ 
++	/* enable external vbus supply before resuming root hub */
++	spin_unlock_irqrestore(&hsotg->lock, flags);
++	ret = dwc2_vbus_supply_init(hsotg);
++	if (ret)
++		return ret;
++	spin_lock_irqsave(&hsotg->lock, flags);
++
+ 	/* Initialize and connect root hub if one is not already attached */
+ 	if (bus->root_hub) {
+ 		dev_dbg(hsotg->dev, "DWC OTG HCD Has Root Hub\n");
+@@ -4418,7 +4426,7 @@ static int _dwc2_hcd_start(struct usb_hcd *hcd)
+ 
+ 	spin_unlock_irqrestore(&hsotg->lock, flags);
+ 
+-	return dwc2_vbus_supply_init(hsotg);
++	return 0;
+ }
+ 
+ /*
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index 17147b8c771e..8f267be1745d 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -2017,6 +2017,8 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
+ 
+ 	udc->errata = match->data;
+ 	udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9g45-pmc");
++	if (IS_ERR(udc->pmc))
++		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9rl-pmc");
+ 	if (IS_ERR(udc->pmc))
+ 		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9x5-pmc");
+ 	if (udc->errata && IS_ERR(udc->pmc))
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 5b5f1c8b47c9..104b80c28636 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2377,6 +2377,9 @@ static ssize_t renesas_usb3_b_device_write(struct file *file,
+ 	else
+ 		usb3->forced_b_device = false;
+ 
++	if (usb3->workaround_for_vbus)
++		usb3_disconnect(usb3);
++
+ 	/* Let this driver call usb3_connect() anyway */
+ 	usb3_check_id(usb3);
+ 
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index e98673954020..ec6739ef3129 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -551,6 +551,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ 		pdata->overcurrent_pin[i] =
+ 			devm_gpiod_get_index_optional(&pdev->dev, "atmel,oc",
+ 						      i, GPIOD_IN);
++		if (!pdata->overcurrent_pin[i])
++			continue;
+ 		if (IS_ERR(pdata->overcurrent_pin[i])) {
+ 			err = PTR_ERR(pdata->overcurrent_pin[i]);
+ 			dev_err(&pdev->dev, "unable to claim gpio \"overcurrent\": %d\n", err);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index a4b95d019f84..1f7eeee2ebca 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -900,6 +900,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				set_bit(wIndex, &bus_state->resuming_ports);
+ 				bus_state->resume_done[wIndex] = timeout;
+ 				mod_timer(&hcd->rh_timer, timeout);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 			}
+ 		/* Has resume been signalled for USB_RESUME_TIME yet? */
+ 		} else if (time_after_eq(jiffies,
+@@ -940,6 +941,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 				clear_bit(wIndex, &bus_state->rexit_ports);
+ 			}
+ 
++			usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 			bus_state->suspended_ports &= ~(1 << wIndex);
+ 		} else {
+@@ -962,6 +964,7 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd,
+ 	    (raw_port_status & PORT_PLS_MASK) != XDEV_RESUME) {
+ 		bus_state->resume_done[wIndex] = 0;
+ 		clear_bit(wIndex, &bus_state->resuming_ports);
++		usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 	}
+ 
+ 
+@@ -1337,6 +1340,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					goto error;
+ 
+ 				set_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_start_port_resume(&hcd->self, wIndex);
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 						    XDEV_RESUME);
+ 				spin_unlock_irqrestore(&xhci->lock, flags);
+@@ -1345,6 +1349,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				xhci_set_link_state(xhci, ports[wIndex],
+ 							XDEV_U0);
+ 				clear_bit(wIndex, &bus_state->resuming_ports);
++				usb_hcd_end_port_resume(&hcd->self, wIndex);
+ 			}
+ 			bus_state->port_c_suspend |= 1 << wIndex;
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f0a99aa0ac58..cd4659703647 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1602,6 +1602,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 			set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+ 			mod_timer(&hcd->rh_timer,
+ 				  bus_state->resume_done[hcd_portnum]);
++			usb_hcd_start_port_resume(&hcd->self, hcd_portnum);
+ 			bogus_port_status = true;
+ 		}
+ 	}
+diff --git a/drivers/usb/typec/tcpm.c b/drivers/usb/typec/tcpm.c
+index d1d20252bad8..a7e231ccb0a1 100644
+--- a/drivers/usb/typec/tcpm.c
++++ b/drivers/usb/typec/tcpm.c
+@@ -1383,8 +1383,8 @@ static enum pdo_err tcpm_caps_err(struct tcpm_port *port, const u32 *pdo,
+ 				if (pdo_apdo_type(pdo[i]) != APDO_TYPE_PPS)
+ 					break;
+ 
+-				if (pdo_pps_apdo_max_current(pdo[i]) <
+-				    pdo_pps_apdo_max_current(pdo[i - 1]))
++				if (pdo_pps_apdo_max_voltage(pdo[i]) <
++				    pdo_pps_apdo_max_voltage(pdo[i - 1]))
+ 					return PDO_ERR_PPS_APDO_NOT_SORTED;
+ 				else if (pdo_pps_apdo_min_voltage(pdo[i]) ==
+ 					  pdo_pps_apdo_min_voltage(pdo[i - 1]) &&
+@@ -4018,6 +4018,9 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down operating current to align with PPS valid steps */
++	op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.op_curr = op_curr;
+ 	port->pps_status = 0;
+@@ -4071,6 +4074,9 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 		goto port_unlock;
+ 	}
+ 
++	/* Round down output voltage to align with PPS valid steps */
++	out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
++
+ 	reinit_completion(&port->pps_complete);
+ 	port->pps_data.out_volt = out_volt;
+ 	port->pps_status = 0;
+diff --git a/drivers/usb/usbip/vudc_main.c b/drivers/usb/usbip/vudc_main.c
+index 3fc22037a82f..390733e6937e 100644
+--- a/drivers/usb/usbip/vudc_main.c
++++ b/drivers/usb/usbip/vudc_main.c
+@@ -73,6 +73,10 @@ static int __init init(void)
+ cleanup:
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
+ 		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+@@ -89,7 +93,11 @@ static void __exit cleanup(void)
+ 
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
+-		platform_device_unregister(udc_dev->pdev);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
++		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+ 	platform_driver_unregister(&vudc_driver);
+diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
+index 38716eb50408..8a3e8f61b991 100644
+--- a/drivers/video/hdmi.c
++++ b/drivers/video/hdmi.c
+@@ -592,10 +592,10 @@ hdmi_extended_colorimetry_get_name(enum hdmi_extended_colorimetry ext_col)
+ 		return "xvYCC 709";
+ 	case HDMI_EXTENDED_COLORIMETRY_S_YCC_601:
+ 		return "sYCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601:
+-		return "Adobe YCC 601";
+-	case HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB:
+-		return "Adobe RGB";
++	case HDMI_EXTENDED_COLORIMETRY_OPYCC_601:
++		return "opYCC 601";
++	case HDMI_EXTENDED_COLORIMETRY_OPRGB:
++		return "opRGB";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM:
+ 		return "BT.2020 Constant Luminance";
+ 	case HDMI_EXTENDED_COLORIMETRY_BT2020:
+diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
+index 83fc9aab34e8..3099052e1243 100644
+--- a/drivers/w1/masters/omap_hdq.c
++++ b/drivers/w1/masters/omap_hdq.c
+@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platform_device *pdev)
+ 	/* remove module dependency */
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	w1_remove_master_device(&omap_w1_master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
+index df1ed37c3269..de01a6d0059d 100644
+--- a/drivers/xen/privcmd-buf.c
++++ b/drivers/xen/privcmd-buf.c
+@@ -21,15 +21,9 @@
+ 
+ MODULE_LICENSE("GPL");
+ 
+-static unsigned int limit = 64;
+-module_param(limit, uint, 0644);
+-MODULE_PARM_DESC(limit, "Maximum number of pages that may be allocated by "
+-			"the privcmd-buf device per open file");
+-
+ struct privcmd_buf_private {
+ 	struct mutex lock;
+ 	struct list_head list;
+-	unsigned int allocated;
+ };
+ 
+ struct privcmd_buf_vma_private {
+@@ -60,13 +54,10 @@ static void privcmd_buf_vmapriv_free(struct privcmd_buf_vma_private *vma_priv)
+ {
+ 	unsigned int i;
+ 
+-	vma_priv->file_priv->allocated -= vma_priv->n_pages;
+-
+ 	list_del(&vma_priv->list);
+ 
+ 	for (i = 0; i < vma_priv->n_pages; i++)
+-		if (vma_priv->pages[i])
+-			__free_page(vma_priv->pages[i]);
++		__free_page(vma_priv->pages[i]);
+ 
+ 	kfree(vma_priv);
+ }
+@@ -146,8 +137,7 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+-	if (!(vma->vm_flags & VM_SHARED) || count > limit ||
+-	    file_priv->allocated + count > limit)
++	if (!(vma->vm_flags & VM_SHARED))
+ 		return -EINVAL;
+ 
+ 	vma_priv = kzalloc(sizeof(*vma_priv) + count * sizeof(void *),
+@@ -155,19 +145,15 @@ static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+ 	if (!vma_priv)
+ 		return -ENOMEM;
+ 
+-	vma_priv->n_pages = count;
+-	count = 0;
+-	for (i = 0; i < vma_priv->n_pages; i++) {
++	for (i = 0; i < count; i++) {
+ 		vma_priv->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ 		if (!vma_priv->pages[i])
+ 			break;
+-		count++;
++		vma_priv->n_pages++;
+ 	}
+ 
+ 	mutex_lock(&file_priv->lock);
+ 
+-	file_priv->allocated += count;
+-
+ 	vma_priv->file_priv = file_priv;
+ 	vma_priv->users = 1;
+ 
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index a6f9ba85dc4b..aa081f806728 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ 	*/
+ 	flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	/* On ARM this function returns an ioremap'ped virtual address for
+ 	 * which virt_to_phys doesn't return the corresponding physical
+ 	 * address. In fact on ARM virt_to_phys only works for kernel direct
+@@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ 	 * physical address */
+ 	phys = xen_bus_to_phys(dev_addr);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	if (((dev_addr + size - 1 <= dma_mask)) ||
+ 	    range_straddles_page_boundary(phys, size))
+ 		xen_destroy_contiguous_region(phys, order);
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index 294f35ce9e46..cf8ef8cee5a0 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -75,12 +75,15 @@ static void watch_target(struct xenbus_watch *watch,
+ 
+ 	if (!watch_fired) {
+ 		watch_fired = true;
+-		err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
+-				   &static_max);
+-		if (err != 1)
+-			static_max = new_target;
+-		else
++
++		if ((xenbus_scanf(XBT_NIL, "memory", "static-max",
++				  "%llu", &static_max) == 1) ||
++		    (xenbus_scanf(XBT_NIL, "memory", "memory_static_max",
++				  "%llu", &static_max) == 1))
+ 			static_max >>= PAGE_SHIFT - 10;
++		else
++			static_max = new_target;
++
+ 		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 4bc326df472e..4a7ae216977d 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1054,9 +1054,26 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 	if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)
+ 		parent_start = parent->start;
+ 
++	/*
++	 * If we are COWing a node/leaf from the extent, chunk or device trees,
++	 * make sure that we do not finish block group creation of pending block
++	 * groups. We do this to avoid a deadlock.
++	 * COWing can result in allocation of a new chunk, and flushing pending
++	 * block groups (btrfs_create_pending_block_groups()) can be triggered
++	 * when finishing allocation of a new chunk. Creation of a pending block
++	 * group modifies the extent, chunk and device trees, therefore we could
++	 * deadlock with ourselves since we are holding a lock on an extent
++	 * buffer that btrfs_create_pending_block_groups() may try to COW later.
++	 */
++	if (root == fs_info->extent_root ||
++	    root == fs_info->chunk_root ||
++	    root == fs_info->dev_root)
++		trans->can_flush_pending_bgs = false;
++
+ 	cow = btrfs_alloc_tree_block(trans, root, parent_start,
+ 			root->root_key.objectid, &disk_key, level,
+ 			search_start, empty_size);
++	trans->can_flush_pending_bgs = true;
+ 	if (IS_ERR(cow))
+ 		return PTR_ERR(cow);
+ 
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index d20b244623f2..e129a595f811 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -445,6 +445,7 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 		break;
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED:
+ 	case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED:
++		ASSERT(0);
+ 		ret = BTRFS_IOCTL_DEV_REPLACE_RESULT_ALREADY_STARTED;
+ 		goto leave;
+ 	}
+@@ -487,6 +488,10 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		btrfs_dev_replace_write_lock(dev_replace);
++		dev_replace->replace_state =
++			BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED;
++		dev_replace->srcdev = NULL;
++		dev_replace->tgtdev = NULL;
+ 		goto leave;
+ 	}
+ 
+@@ -508,8 +513,6 @@ int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ 
+ leave:
+-	dev_replace->srcdev = NULL;
+-	dev_replace->tgtdev = NULL;
+ 	btrfs_dev_replace_write_unlock(dev_replace);
+ 	btrfs_destroy_dev_replace_tgtdev(fs_info, tgt_device);
+ 	return ret;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 4ab0bccfa281..e67de6a9805b 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2490,6 +2490,9 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
+ 					   insert_reserved);
+ 	else
+ 		BUG();
++	if (ret && insert_reserved)
++		btrfs_pin_extent(trans->fs_info, node->bytenr,
++				 node->num_bytes, 1);
+ 	return ret;
+ }
+ 
+@@ -3034,7 +3037,6 @@ int btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
+ 	struct btrfs_delayed_ref_head *head;
+ 	int ret;
+ 	int run_all = count == (unsigned long)-1;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+ 	/* We'll clean this up in btrfs_cleanup_transaction */
+ 	if (trans->aborted)
+@@ -3051,7 +3053,6 @@ again:
+ #ifdef SCRAMBLE_DELAYED_REFS
+ 	delayed_refs->run_delayed_start = find_middle(&delayed_refs->root);
+ #endif
+-	trans->can_flush_pending_bgs = false;
+ 	ret = __btrfs_run_delayed_refs(trans, count);
+ 	if (ret < 0) {
+ 		btrfs_abort_transaction(trans, ret);
+@@ -3082,7 +3083,6 @@ again:
+ 		goto again;
+ 	}
+ out:
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
+ 	return 0;
+ }
+ 
+@@ -4664,6 +4664,7 @@ again:
+ 			goto out;
+ 	} else {
+ 		ret = 1;
++		space_info->max_extent_size = 0;
+ 	}
+ 
+ 	space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+@@ -4685,11 +4686,9 @@ out:
+ 	 * the block groups that were made dirty during the lifetime of the
+ 	 * transaction.
+ 	 */
+-	if (trans->can_flush_pending_bgs &&
+-	    trans->chunk_bytes_reserved >= (u64)SZ_2M) {
++	if (trans->chunk_bytes_reserved >= (u64)SZ_2M)
+ 		btrfs_create_pending_block_groups(trans);
+-		btrfs_trans_release_chunk_metadata(trans);
+-	}
++
+ 	return ret;
+ }
+ 
+@@ -6581,6 +6580,7 @@ static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,
+ 		space_info->bytes_readonly += num_bytes;
+ 	cache->reserved -= num_bytes;
+ 	space_info->bytes_reserved -= num_bytes;
++	space_info->max_extent_size = 0;
+ 
+ 	if (delalloc)
+ 		cache->delalloc_bytes -= num_bytes;
+@@ -7412,6 +7412,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_block_group_cache *block_group = NULL;
+ 	u64 search_start = 0;
+ 	u64 max_extent_size = 0;
++	u64 max_free_space = 0;
+ 	u64 empty_cluster = 0;
+ 	struct btrfs_space_info *space_info;
+ 	int loop = 0;
+@@ -7707,8 +7708,8 @@ unclustered_alloc:
+ 			spin_lock(&ctl->tree_lock);
+ 			if (ctl->free_space <
+ 			    num_bytes + empty_cluster + empty_size) {
+-				if (ctl->free_space > max_extent_size)
+-					max_extent_size = ctl->free_space;
++				max_free_space = max(max_free_space,
++						     ctl->free_space);
+ 				spin_unlock(&ctl->tree_lock);
+ 				goto loop;
+ 			}
+@@ -7877,6 +7878,8 @@ loop:
+ 	}
+ out:
+ 	if (ret == -ENOSPC) {
++		if (!max_extent_size)
++			max_extent_size = max_free_space;
+ 		spin_lock(&space_info->lock);
+ 		space_info->max_extent_size = max_extent_size;
+ 		spin_unlock(&space_info->lock);
+@@ -8158,21 +8161,14 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	path = btrfs_alloc_path();
+-	if (!path) {
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
++	if (!path)
+ 		return -ENOMEM;
+-	}
+ 
+ 	path->leave_spinning = 1;
+ 	ret = btrfs_insert_empty_item(trans, fs_info->extent_root, path,
+ 				      &extent_key, size);
+ 	if (ret) {
+ 		btrfs_free_path(path);
+-		btrfs_free_and_pin_reserved_extent(fs_info,
+-						   extent_key.objectid,
+-						   fs_info->nodesize);
+ 		return ret;
+ 	}
+ 
+@@ -8301,6 +8297,19 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 	if (IS_ERR(buf))
+ 		return buf;
+ 
++	/*
++	 * Extra safety check in case the extent tree is corrupted and extent
++	 * allocator chooses to use a tree block which is already used and
++	 * locked.
++	 */
++	if (buf->lock_owner == current->pid) {
++		btrfs_err_rl(fs_info,
++"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
++			buf->start, btrfs_header_owner(buf), current->pid);
++		free_extent_buffer(buf);
++		return ERR_PTR(-EUCLEAN);
++	}
++
+ 	btrfs_set_header_generation(buf, trans->transid);
+ 	btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
+ 	btrfs_tree_lock(buf);
+@@ -8938,15 +8947,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 	if (eb == root->node) {
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = eb->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(eb));
++		else if (root->root_key.objectid != btrfs_header_owner(eb))
++			goto owner_mismatch;
+ 	} else {
+ 		if (wc->flags[level + 1] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = path->nodes[level + 1]->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(path->nodes[level + 1]));
++		else if (root->root_key.objectid !=
++			 btrfs_header_owner(path->nodes[level + 1]))
++			goto owner_mismatch;
+ 	}
+ 
+ 	btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
+@@ -8954,6 +8962,11 @@ out:
+ 	wc->refs[level] = 0;
+ 	wc->flags[level] = 0;
+ 	return 0;
++
++owner_mismatch:
++	btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
++		     btrfs_header_owner(eb), root->root_key.objectid);
++	return -EUCLEAN;
+ }
+ 
+ static noinline int walk_down_tree(struct btrfs_trans_handle *trans,
+@@ -9007,6 +9020,8 @@ static noinline int walk_up_tree(struct btrfs_trans_handle *trans,
+ 			ret = walk_up_proc(trans, root, path, wc);
+ 			if (ret > 0)
+ 				return 0;
++			if (ret < 0)
++				return ret;
+ 
+ 			if (path->locks[level]) {
+ 				btrfs_tree_unlock_rw(path->nodes[level],
+@@ -9772,6 +9787,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
+ 
+ 		block_group = btrfs_lookup_first_block_group(info, last);
+ 		while (block_group) {
++			wait_block_group_cache_done(block_group);
+ 			spin_lock(&block_group->lock);
+ 			if (block_group->iref)
+ 				break;
+@@ -10184,15 +10200,19 @@ error:
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ {
+ 	struct btrfs_fs_info *fs_info = trans->fs_info;
+-	struct btrfs_block_group_cache *block_group, *tmp;
++	struct btrfs_block_group_cache *block_group;
+ 	struct btrfs_root *extent_root = fs_info->extent_root;
+ 	struct btrfs_block_group_item item;
+ 	struct btrfs_key key;
+ 	int ret = 0;
+-	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+-	trans->can_flush_pending_bgs = false;
+-	list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
++	if (!trans->can_flush_pending_bgs)
++		return;
++
++	while (!list_empty(&trans->new_bgs)) {
++		block_group = list_first_entry(&trans->new_bgs,
++					       struct btrfs_block_group_cache,
++					       bg_list);
+ 		if (ret)
+ 			goto next;
+ 
+@@ -10214,7 +10234,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
+ next:
+ 		list_del_init(&block_group->bg_list);
+ 	}
+-	trans->can_flush_pending_bgs = can_flush_pending_bgs;
++	btrfs_trans_release_chunk_metadata(trans);
+ }
+ 
+ int btrfs_make_block_group(struct btrfs_trans_handle *trans,
+@@ -10869,14 +10889,16 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * We don't want a transaction for this since the discard may take a
+  * substantial amount of time.  We don't require that a transaction be
+  * running, but we do need to take a running transaction into account
+- * to ensure that we're not discarding chunks that were released in
+- * the current transaction.
++ * to ensure that we're not discarding chunks that were released or
++ * allocated in the current transaction.
+  *
+  * Holding the chunks lock will prevent other threads from allocating
+  * or releasing chunks, but it won't prevent a running transaction
+  * from committing and releasing the memory that the pending chunks
+  * list head uses.  For that, we need to take a reference to the
+- * transaction.
++ * transaction and hold the commit root sem.  We only need to hold
++ * it while performing the free space search since we have already
++ * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 				   u64 minlen, u64 *trimmed)
+@@ -10886,6 +10908,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 	*trimmed = 0;
+ 
++	/* Discard not supported = nothing to do. */
++	if (!blk_queue_discard(bdev_get_queue(device->bdev)))
++		return 0;
++
+ 	/* Not writeable = nothing to do. */
+ 	if (!test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state))
+ 		return 0;
+@@ -10903,9 +10929,13 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 		ret = mutex_lock_interruptible(&fs_info->chunk_mutex);
+ 		if (ret)
+-			return ret;
++			break;
+ 
+-		down_read(&fs_info->commit_root_sem);
++		ret = down_read_killable(&fs_info->commit_root_sem);
++		if (ret) {
++			mutex_unlock(&fs_info->chunk_mutex);
++			break;
++		}
+ 
+ 		spin_lock(&fs_info->trans_lock);
+ 		trans = fs_info->running_transaction;
+@@ -10913,13 +10943,17 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			refcount_inc(&trans->use_count);
+ 		spin_unlock(&fs_info->trans_lock);
+ 
++		if (!trans)
++			up_read(&fs_info->commit_root_sem);
++
+ 		ret = find_free_dev_extent_start(trans, device, minlen, start,
+ 						 &start, &len);
+-		if (trans)
++		if (trans) {
++			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
++		}
+ 
+ 		if (ret) {
+-			up_read(&fs_info->commit_root_sem);
+ 			mutex_unlock(&fs_info->chunk_mutex);
+ 			if (ret == -ENOSPC)
+ 				ret = 0;
+@@ -10927,7 +10961,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		}
+ 
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+-		up_read(&fs_info->commit_root_sem);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+ 		if (ret)
+@@ -10947,6 +10980,15 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 	return ret;
+ }
+ 
++/*
++ * Trim the whole filesystem by:
++ * 1) trimming the free space in each block group
++ * 2) trimming the unallocated space on each device
++ *
++ * This will also continue trimming even if a block group or device encounters
++ * an error.  The return value will be the last error, or 0 if nothing bad
++ * happens.
++ */
+ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ {
+ 	struct btrfs_block_group_cache *cache = NULL;
+@@ -10956,18 +10998,14 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	u64 start;
+ 	u64 end;
+ 	u64 trimmed = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
++	u64 bg_failed = 0;
++	u64 dev_failed = 0;
++	int bg_ret = 0;
++	int dev_ret = 0;
+ 	int ret = 0;
+ 
+-	/*
+-	 * try to trim all FS space, our block group may start from non-zero.
+-	 */
+-	if (range->len == total_bytes)
+-		cache = btrfs_lookup_first_block_group(fs_info, range->start);
+-	else
+-		cache = btrfs_lookup_block_group(fs_info, range->start);
+-
+-	while (cache) {
++	cache = btrfs_lookup_first_block_group(fs_info, range->start);
++	for (; cache; cache = next_block_group(fs_info, cache)) {
+ 		if (cache->key.objectid >= (range->start + range->len)) {
+ 			btrfs_put_block_group(cache);
+ 			break;
+@@ -10981,13 +11019,15 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 			if (!block_group_cache_done(cache)) {
+ 				ret = cache_block_group(cache, 0);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 				ret = wait_block_group_cache_done(cache);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 			}
+ 			ret = btrfs_trim_block_group(cache,
+@@ -10998,28 +11038,40 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 
+ 			trimmed += group_trimmed;
+ 			if (ret) {
+-				btrfs_put_block_group(cache);
+-				break;
++				bg_failed++;
++				bg_ret = ret;
++				continue;
+ 			}
+ 		}
+-
+-		cache = next_block_group(fs_info, cache);
+ 	}
+ 
++	if (bg_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu block group(s), last error %d",
++			bg_failed, bg_ret);
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+-	devices = &fs_info->fs_devices->alloc_list;
+-	list_for_each_entry(device, devices, dev_alloc_list) {
++	devices = &fs_info->fs_devices->devices;
++	list_for_each_entry(device, devices, dev_list) {
+ 		ret = btrfs_trim_free_extents(device, range->minlen,
+ 					      &group_trimmed);
+-		if (ret)
++		if (ret) {
++			dev_failed++;
++			dev_ret = ret;
+ 			break;
++		}
+ 
+ 		trimmed += group_trimmed;
+ 	}
+ 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 
++	if (dev_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu device(s), last error %d",
++			dev_failed, dev_ret);
+ 	range->len = trimmed;
+-	return ret;
++	if (bg_ret)
++		return bg_ret;
++	return dev_ret;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 51e77d72068a..22c2f38cd9b3 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -534,6 +534,14 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
+ 
+ 	end_of_last_block = start_pos + num_bytes - 1;
+ 
++	/*
++	 * The pages may have already been dirty, clear out old accounting so
++	 * we can set things up properly
++	 */
++	clear_extent_bit(&BTRFS_I(inode)->io_tree, start_pos, end_of_last_block,
++			 EXTENT_DIRTY | EXTENT_DELALLOC |
++			 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 0, 0, cached);
++
+ 	if (!btrfs_is_free_space_inode(BTRFS_I(inode))) {
+ 		if (start_pos >= isize &&
+ 		    !(BTRFS_I(inode)->flags & BTRFS_INODE_PREALLOC)) {
+@@ -1504,18 +1512,27 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
+ 		}
+ 		if (ordered)
+ 			btrfs_put_ordered_extent(ordered);
+-		clear_extent_bit(&inode->io_tree, start_pos, last_pos,
+-				 EXTENT_DIRTY | EXTENT_DELALLOC |
+-				 EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
+-				 0, 0, cached_state);
++
+ 		*lockstart = start_pos;
+ 		*lockend = last_pos;
+ 		ret = 1;
+ 	}
+ 
++	/*
++	 * It's possible the pages are dirty right now, but we don't want
++	 * to clean them yet because copy_from_user may catch a page fault
++	 * and we might have to fall back to one page at a time.  If that
++	 * happens, we'll unlock these pages and we'd have a window where
++	 * reclaim could sneak in and drop the once-dirty page on the floor
++	 * without writing it.
++	 *
++	 * We have the pages locked and the extent range locked, so there's
++	 * no way someone can start IO on any dirty pages in this range.
++	 *
++	 * We'll call btrfs_dirty_pages() later on, and that will flip around
++	 * delalloc bits and dirty the pages as required.
++	 */
+ 	for (i = 0; i < num_pages; i++) {
+-		if (clear_page_dirty_for_io(pages[i]))
+-			account_page_redirty(pages[i]);
+ 		set_page_extent_mapped(pages[i]);
+ 		WARN_ON(!PageLocked(pages[i]));
+ 	}
+@@ -2065,6 +2082,14 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		goto out;
+ 
+ 	inode_lock(inode);
++
++	/*
++	 * We take the dio_sem here because the tree log stuff can race with
++	 * lockless dio writes and get an extent map logged for an extent we
++	 * never waited on.  We need it this high up for lockdep reasons.
++	 */
++	down_write(&BTRFS_I(inode)->dio_sem);
++
+ 	atomic_inc(&root->log_batch);
+ 	full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ 			     &BTRFS_I(inode)->runtime_flags);
+@@ -2116,6 +2141,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		ret = start_ordered_ops(inode, start, end);
+ 	}
+ 	if (ret) {
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2171,6 +2197,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		 * checked called fsync.
+ 		 */
+ 		ret = filemap_check_wb_err(inode->i_mapping, file->f_wb_err);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2189,6 +2216,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2210,6 +2238,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 * file again, but that will end up using the synchronization
+ 	 * inside btrfs_sync_log to keep things safe.
+ 	 */
++	up_write(&BTRFS_I(inode)->dio_sem);
+ 	inode_unlock(inode);
+ 
+ 	/*
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index d5f80cb300be..a5f18333aa8c 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -10,6 +10,7 @@
+ #include <linux/math64.h>
+ #include <linux/ratelimit.h>
+ #include <linux/error-injection.h>
++#include <linux/sched/mm.h>
+ #include "ctree.h"
+ #include "free-space-cache.h"
+ #include "transaction.h"
+@@ -47,6 +48,7 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	struct btrfs_free_space_header *header;
+ 	struct extent_buffer *leaf;
+ 	struct inode *inode = NULL;
++	unsigned nofs_flag;
+ 	int ret;
+ 
+ 	key.objectid = BTRFS_FREE_SPACE_OBJECTID;
+@@ -68,7 +70,13 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	btrfs_disk_key_to_cpu(&location, &disk_key);
+ 	btrfs_release_path(path);
+ 
++	/*
++	 * We are often under a trans handle at this point, so we need to make
++	 * sure NOFS is set to keep us from deadlocking.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	inode = btrfs_iget(fs_info->sb, &location, root, NULL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	if (is_bad_inode(inode)) {
+@@ -1686,6 +1694,8 @@ static inline void __bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ 	bitmap_clear(info->bitmap, start, count);
+ 
+ 	info->bytes -= bytes;
++	if (info->max_extent_size > ctl->unit)
++		info->max_extent_size = 0;
+ }
+ 
+ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+@@ -1769,6 +1779,13 @@ static int search_bitmap(struct btrfs_free_space_ctl *ctl,
+ 	return -1;
+ }
+ 
++static inline u64 get_max_extent_size(struct btrfs_free_space *entry)
++{
++	if (entry->bitmap)
++		return entry->max_extent_size;
++	return entry->bytes;
++}
++
+ /* Cache the size of the max extent in bytes */
+ static struct btrfs_free_space *
+ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+@@ -1790,8 +1807,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 	for (node = &entry->offset_index; node; node = rb_next(node)) {
+ 		entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 		if (entry->bytes < *bytes) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1809,8 +1826,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 		}
+ 
+ 		if (entry->bytes < *bytes + align_off) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1822,8 +1839,10 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 				*offset = tmp;
+ 				*bytes = size;
+ 				return entry;
+-			} else if (size > *max_extent_size) {
+-				*max_extent_size = size;
++			} else {
++				*max_extent_size =
++					max(get_max_extent_size(entry),
++					    *max_extent_size);
+ 			}
+ 			continue;
+ 		}
+@@ -2447,6 +2466,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 	struct rb_node *n;
+ 	int count = 0;
+ 
++	spin_lock(&ctl->tree_lock);
+ 	for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
+ 		info = rb_entry(n, struct btrfs_free_space, offset_index);
+ 		if (info->bytes >= bytes && !block_group->ro)
+@@ -2455,6 +2475,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 			   info->offset, info->bytes,
+ 		       (info->bitmap) ? "yes" : "no");
+ 	}
++	spin_unlock(&ctl->tree_lock);
+ 	btrfs_info(fs_info, "block group has cluster?: %s",
+ 	       list_empty(&block_group->cluster_list) ? "no" : "yes");
+ 	btrfs_info(fs_info,
+@@ -2683,8 +2704,8 @@ static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
+ 
+ 	err = search_bitmap(ctl, entry, &search_start, &search_bytes, true);
+ 	if (err) {
+-		if (search_bytes > *max_extent_size)
+-			*max_extent_size = search_bytes;
++		*max_extent_size = max(get_max_extent_size(entry),
++				       *max_extent_size);
+ 		return 0;
+ 	}
+ 
+@@ -2721,8 +2742,9 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
+ 
+ 	entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 	while (1) {
+-		if (entry->bytes < bytes && entry->bytes > *max_extent_size)
+-			*max_extent_size = entry->bytes;
++		if (entry->bytes < bytes)
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 
+ 		if (entry->bytes < bytes ||
+ 		    (!entry->bitmap && entry->offset < min_start)) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d3736fbf6774..dc0f9d089b19 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -507,6 +507,7 @@ again:
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+ 			/* just bail out to the uncompressed code */
++			nr_pages = 0;
+ 			goto cont;
+ 		}
+ 
+@@ -2950,6 +2951,7 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 	bool truncated = false;
+ 	bool range_locked = false;
+ 	bool clear_new_delalloc_bytes = false;
++	bool clear_reserved_extent = true;
+ 
+ 	if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 	    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags) &&
+@@ -3053,10 +3055,12 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 						logical_len, logical_len,
+ 						compress_type, 0, 0,
+ 						BTRFS_FILE_EXTENT_REG);
+-		if (!ret)
++		if (!ret) {
++			clear_reserved_extent = false;
+ 			btrfs_release_delalloc_bytes(fs_info,
+ 						     ordered_extent->start,
+ 						     ordered_extent->disk_len);
++		}
+ 	}
+ 	unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
+ 			   ordered_extent->file_offset, ordered_extent->len,
+@@ -3117,8 +3121,13 @@ out:
+ 		 * wrong we need to return the space for this ordered extent
+ 		 * back to the allocator.  We only free the extent in the
+ 		 * truncated case if we didn't write out the extent at all.
++		 *
++		 * If we made it past insert_reserved_file_extent before we
++		 * errored out then we don't need to do this as the accounting
++		 * has already been done.
+ 		 */
+ 		if ((ret || !logical_len) &&
++		    clear_reserved_extent &&
+ 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 		    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
+ 			btrfs_free_reserved_extent(fs_info,
+@@ -5293,11 +5302,13 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		struct extent_state *cached_state = NULL;
+ 		u64 start;
+ 		u64 end;
++		unsigned state_flags;
+ 
+ 		node = rb_first(&io_tree->state);
+ 		state = rb_entry(node, struct extent_state, rb_node);
+ 		start = state->start;
+ 		end = state->end;
++		state_flags = state->state;
+ 		spin_unlock(&io_tree->lock);
+ 
+ 		lock_extent_bits(io_tree, start, end, &cached_state);
+@@ -5310,7 +5321,7 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		 *
+ 		 * Note, end is the bytenr of last byte, so we need + 1 here.
+ 		 */
+-		if (state->state & EXTENT_DELALLOC)
++		if (state_flags & EXTENT_DELALLOC)
+ 			btrfs_qgroup_free_data(inode, NULL, start, end - start + 1);
+ 
+ 		clear_extent_bit(io_tree, start, end,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index ef7159646615..c972920701a3 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -496,7 +496,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	struct fstrim_range range;
+ 	u64 minlen = ULLONG_MAX;
+ 	u64 num_devices = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
+ 	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+@@ -520,11 +519,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 		return -EOPNOTSUPP;
+ 	if (copy_from_user(&range, arg, sizeof(range)))
+ 		return -EFAULT;
+-	if (range.start > total_bytes ||
+-	    range.len < fs_info->sb->s_blocksize)
++
++	/*
++	 * NOTE: Don't truncate the range using super->total_bytes.  Bytenr of
++	 * block group is in the logical address space, which can be any
++	 * sectorsize aligned bytenr in  the range [0, U64_MAX].
++	 */
++	if (range.len < fs_info->sb->s_blocksize)
+ 		return -EINVAL;
+ 
+-	range.len = min(range.len, total_bytes - range.start);
+ 	range.minlen = max(range.minlen, minlen);
+ 	ret = btrfs_trim_fs(fs_info, &range);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index c25dc47210a3..7407f5a5d682 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2856,6 +2856,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
+ 		qgroup->rfer_cmpr = 0;
+ 		qgroup->excl = 0;
+ 		qgroup->excl_cmpr = 0;
++		qgroup_dirty(fs_info, qgroup);
+ 	}
+ 	spin_unlock(&fs_info->qgroup_lock);
+ }
+@@ -3065,6 +3066,10 @@ static int __btrfs_qgroup_release_data(struct inode *inode,
+ 	int trace_op = QGROUP_RELEASE;
+ 	int ret;
+ 
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED,
++		      &BTRFS_I(inode)->root->fs_info->flags))
++		return 0;
++
+ 	/* In release case, we shouldn't have @reserved */
+ 	WARN_ON(!free && reserved);
+ 	if (free && reserved)
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index d60dd06445ce..cad73ed7aebc 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -261,6 +261,8 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
+ static inline void btrfs_qgroup_free_delayed_ref(struct btrfs_fs_info *fs_info,
+ 						 u64 ref_root, u64 num_bytes)
+ {
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
++		return;
+ 	trace_btrfs_qgroup_free_delayed_ref(fs_info, ref_root, num_bytes);
+ 	btrfs_qgroup_free_refroot(fs_info, ref_root, num_bytes,
+ 				  BTRFS_QGROUP_RSV_DATA);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index be94c65bb4d2..5ee49b796815 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,7 +1321,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	if (rc) {
++	if (rc && root->node) {
+ 		spin_lock(&rc->reloc_root_tree.lock);
+ 		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ 				      root->node->start);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index ff5f6c719976..9ee0aca134fc 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1930,6 +1930,9 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
++	btrfs_trans_release_metadata(trans);
++	trans->block_rsv = NULL;
++
+ 	/* make a pass through all the delayed refs we have so far
+ 	 * any runnings procs may add more while we are here
+ 	 */
+@@ -1939,9 +1942,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 		return ret;
+ 	}
+ 
+-	btrfs_trans_release_metadata(trans);
+-	trans->block_rsv = NULL;
+-
+ 	cur_trans = trans->transaction;
+ 
+ 	/*
+@@ -2281,15 +2281,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	kmem_cache_free(btrfs_trans_handle_cachep, trans);
+ 
+-	/*
+-	 * If fs has been frozen, we can not handle delayed iputs, otherwise
+-	 * it'll result in deadlock about SB_FREEZE_FS.
+-	 */
+-	if (current != fs_info->transaction_kthread &&
+-	    current != fs_info->cleaner_kthread &&
+-	    !test_bit(BTRFS_FS_FROZEN, &fs_info->flags))
+-		btrfs_run_delayed_iputs(fs_info);
+-
+ 	return ret;
+ 
+ scrub_continue:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 84b00a29d531..8b3f14a1adf0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -258,6 +258,13 @@ struct walk_control {
+ 	/* what stage of the replay code we're currently in */
+ 	int stage;
+ 
++	/*
++	 * Ignore any items from the inode currently being processed. Needs
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
++	 * the LOG_WALK_REPLAY_INODES stage.
++	 */
++	bool ignore_cur_inode;
++
+ 	/* the root we are currently replaying */
+ 	struct btrfs_root *replay_dest;
+ 
+@@ -2492,6 +2499,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 			inode_item = btrfs_item_ptr(eb, i,
+ 					    struct btrfs_inode_item);
++			/*
++			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
++			 * and never got linked before the fsync, skip it, as
++			 * replaying it is pointless since it would be deleted
++			 * later. We skip logging tmpfiles, but it's always
++			 * possible we are replaying a log created with a kernel
++			 * that used to log tmpfiles.
++			 */
++			if (btrfs_inode_nlink(eb, inode_item) == 0) {
++				wc->ignore_cur_inode = true;
++				continue;
++			} else {
++				wc->ignore_cur_inode = false;
++			}
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+@@ -2529,16 +2550,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 					     root->fs_info->sectorsize);
+ 				ret = btrfs_drop_extents(wc->trans, root, inode,
+ 							 from, (u64)-1, 1);
+-				/*
+-				 * If the nlink count is zero here, the iput
+-				 * will free the inode.  We bump it to make
+-				 * sure it doesn't get freed until the link
+-				 * count fixup is done.
+-				 */
+ 				if (!ret) {
+-					if (inode->i_nlink == 0)
+-						inc_nlink(inode);
+-					/* Update link count and nbytes. */
++					/* Update the inode's nbytes. */
+ 					ret = btrfs_update_inode(wc->trans,
+ 								 root, inode);
+ 				}
+@@ -2553,6 +2566,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 				break;
+ 		}
+ 
++		if (wc->ignore_cur_inode)
++			continue;
++
+ 		if (key.type == BTRFS_DIR_INDEX_KEY &&
+ 		    wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
+ 			ret = replay_one_dir_item(wc->trans, root, path,
+@@ -3209,9 +3225,12 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+ 	};
+ 
+ 	ret = walk_log_tree(trans, log, &wc);
+-	/* I don't think this can happen but just in case */
+-	if (ret)
+-		btrfs_abort_transaction(trans, ret);
++	if (ret) {
++		if (trans)
++			btrfs_abort_transaction(trans, ret);
++		else
++			btrfs_handle_fs_error(log->fs_info, ret, NULL);
++	}
+ 
+ 	while (1) {
+ 		ret = find_first_extent_bit(&log->dirty_log_pages,
+@@ -4505,7 +4524,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
+ 
+ 	INIT_LIST_HEAD(&extents);
+ 
+-	down_write(&inode->dio_sem);
+ 	write_lock(&tree->lock);
+ 	test_gen = root->fs_info->last_trans_committed;
+ 	logged_start = start;
+@@ -4586,7 +4604,6 @@ process:
+ 	}
+ 	WARN_ON(!list_empty(&extents));
+ 	write_unlock(&tree->lock);
+-	up_write(&inode->dio_sem);
+ 
+ 	btrfs_release_path(path);
+ 	if (!ret)
+@@ -4784,7 +4801,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 			ASSERT(len == i_size ||
+ 			       (len == fs_info->sectorsize &&
+ 				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE));
++				BTRFS_COMPRESS_NONE) ||
++			       (len < i_size && i_size < fs_info->sectorsize));
+ 			return 0;
+ 		}
+ 
+@@ -5718,9 +5736,33 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans,
+ 
+ 			dir_inode = btrfs_iget(fs_info->sb, &inode_key,
+ 					       root, NULL);
+-			/* If parent inode was deleted, skip it. */
+-			if (IS_ERR(dir_inode))
+-				continue;
++			/*
++			 * If the parent inode was deleted, return an error to
++			 * fallback to a transaction commit. This is to prevent
++			 * getting an inode that was moved from one parent A to
++			 * a parent B, got its former parent A deleted and then
++			 * it got fsync'ed, from existing at both parents after
++			 * a log replay (and the old parent still existing).
++			 * Example:
++			 *
++			 * mkdir /mnt/A
++			 * mkdir /mnt/B
++			 * touch /mnt/B/bar
++			 * sync
++			 * mv /mnt/B/bar /mnt/A/bar
++			 * mv -T /mnt/A /mnt/B
++			 * fsync /mnt/B/bar
++			 * <power fail>
++			 *
++			 * If we ignore the old parent B which got deleted,
++			 * after a log replay we would have file bar linked
++			 * at both parents and the old parent B would still
++			 * exist.
++			 */
++			if (IS_ERR(dir_inode)) {
++				ret = PTR_ERR(dir_inode);
++				goto out;
++			}
+ 
+ 			if (ctx)
+ 				ctx->log_new_dentries = false;
+@@ -5794,7 +5836,13 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		goto end_no_trans;
+ 
+-	if (btrfs_inode_in_log(inode, trans->transid)) {
++	/*
++	 * Skip already logged inodes or inodes corresponding to tmpfiles
++	 * (since logging them is pointless, a link count of 0 means they
++	 * will never be accessible).
++	 */
++	if (btrfs_inode_in_log(inode, trans->transid) ||
++	    inode->vfs_inode.i_nlink == 0) {
+ 		ret = BTRFS_NO_LOG_SYNC;
+ 		goto end_no_trans;
+ 	}
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index b20297988fe0..c1261b7fd292 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,9 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		atomic_set(&tcpSesReconnectCount, 0);
++		atomic_set(&tconInfoReconnectCount, 0);
++
+ 		spin_lock(&GlobalMid_Lock);
+ 		GlobalMaxActiveXid = 0;
+ 		GlobalCurrentXid = 0;
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index b611fc2e8984..7f01c6e60791 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo)
+ 		sprintf(dp, ";sec=krb5");
+ 	else if (server->sec_mskerberos)
+ 		sprintf(dp, ";sec=mskrb5");
+-	else
+-		goto out;
++	else {
++		cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
++		sprintf(dp, ";sec=krb5");
++	}
+ 
+ 	dp = description + strlen(description);
+ 	sprintf(dp, ";uid=0x%x",
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index d279fa5472db..334b2b3d21a3 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -779,7 +779,15 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ 	} else if (rc == -EREMOTE) {
+ 		cifs_create_dfs_fattr(&fattr, sb);
+ 		rc = 0;
+-	} else if (rc == -EACCES && backup_cred(cifs_sb)) {
++	} else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
++		   (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
++		      == 0)) {
++			/*
++			 * For SMB2 and later the backup intent flag is already
++			 * sent if needed on open and there is no path based
++			 * FindFirst operation to use to retry with
++			 */
++
+ 			srchinf = kzalloc(sizeof(struct cifs_search_info),
+ 						GFP_KERNEL);
+ 			if (srchinf == NULL) {
+diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
+index f408994fc632..6e000392e4a4 100644
+--- a/fs/cramfs/inode.c
++++ b/fs/cramfs/inode.c
+@@ -202,7 +202,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
+ 			continue;
+ 		blk_offset = (blocknr - buffer_blocknr[i]) << PAGE_SHIFT;
+ 		blk_offset += offset;
+-		if (blk_offset + len > BUFFER_SIZE)
++		if (blk_offset > BUFFER_SIZE ||
++		    blk_offset + len > BUFFER_SIZE)
+ 			continue;
+ 		return read_buffers[i] + blk_offset;
+ 	}
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index 39c20ef26db4..79debfc9cef9 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -83,10 +83,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
+ 	    filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
+ 		return true;
+ 
+-	if (contents_mode == FS_ENCRYPTION_MODE_SPECK128_256_XTS &&
+-	    filenames_mode == FS_ENCRYPTION_MODE_SPECK128_256_CTS)
+-		return true;
+-
+ 	return false;
+ }
+ 
+diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
+index e997ca51192f..7874c9bb2fc5 100644
+--- a/fs/crypto/keyinfo.c
++++ b/fs/crypto/keyinfo.c
+@@ -174,16 +174,6 @@ static struct fscrypt_mode {
+ 		.cipher_str = "cts(cbc(aes))",
+ 		.keysize = 16,
+ 	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_XTS] = {
+-		.friendly_name = "Speck128/256-XTS",
+-		.cipher_str = "xts(speck128)",
+-		.keysize = 64,
+-	},
+-	[FS_ENCRYPTION_MODE_SPECK128_256_CTS] = {
+-		.friendly_name = "Speck128/256-CTS-CBC",
+-		.cipher_str = "cts(cbc(speck128))",
+-		.keysize = 32,
+-	},
+ };
+ 
+ static struct fscrypt_mode *
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index aa1ce53d0c87..7fcc11fcbbbd 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1387,7 +1387,8 @@ struct ext4_sb_info {
+ 	u32 s_min_batch_time;
+ 	struct block_device *journal_bdev;
+ #ifdef CONFIG_QUOTA
+-	char *s_qf_names[EXT4_MAXQUOTAS];	/* Names of quota files with journalled quota */
++	/* Names of quota files with journalled quota */
++	char __rcu *s_qf_names[EXT4_MAXQUOTAS];
+ 	int s_jquota_fmt;			/* Format of quota to use */
+ #endif
+ 	unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 7b4736022761..9c4bac18cc6c 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -863,7 +863,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
+ 	handle_t *handle;
+ 	struct page *page;
+ 	struct ext4_iloc iloc;
+-	int retries;
++	int retries = 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret)
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index a7074115d6f6..0edee31913d1 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -67,7 +67,6 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	ei1 = EXT4_I(inode1);
+ 	ei2 = EXT4_I(inode2);
+ 
+-	swap(inode1->i_flags, inode2->i_flags);
+ 	swap(inode1->i_version, inode2->i_version);
+ 	swap(inode1->i_blocks, inode2->i_blocks);
+ 	swap(inode1->i_bytes, inode2->i_bytes);
+@@ -85,6 +84,21 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	i_size_write(inode2, isize);
+ }
+ 
++static void reset_inode_seed(struct inode *inode)
++{
++	struct ext4_inode_info *ei = EXT4_I(inode);
++	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
++	__le32 inum = cpu_to_le32(inode->i_ino);
++	__le32 gen = cpu_to_le32(inode->i_generation);
++	__u32 csum;
++
++	if (!ext4_has_metadata_csum(inode->i_sb))
++		return;
++
++	csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)&inum, sizeof(inum));
++	ei->i_csum_seed = ext4_chksum(sbi, csum, (__u8 *)&gen, sizeof(gen));
++}
++
+ /**
+  * Swap the information from the given @inode and the inode
+  * EXT4_BOOT_LOADER_INO. It will basically swap i_data and all other
+@@ -102,10 +116,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	struct inode *inode_bl;
+ 	struct ext4_inode_info *ei_bl;
+ 
+-	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode))
++	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++	    ext4_has_inline_data(inode))
+ 		return -EINVAL;
+ 
+-	if (!inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
++	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO);
+@@ -120,13 +137,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	 * that only 1 swap_inode_boot_loader is running. */
+ 	lock_two_nondirectories(inode, inode_bl);
+ 
+-	truncate_inode_pages(&inode->i_data, 0);
+-	truncate_inode_pages(&inode_bl->i_data, 0);
+-
+ 	/* Wait for all existing dio workers */
+ 	inode_dio_wait(inode);
+ 	inode_dio_wait(inode_bl);
+ 
++	truncate_inode_pages(&inode->i_data, 0);
++	truncate_inode_pages(&inode_bl->i_data, 0);
++
+ 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ 	if (IS_ERR(handle)) {
+ 		err = -EINVAL;
+@@ -159,6 +176,8 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 
+ 	inode->i_generation = prandom_u32();
+ 	inode_bl->i_generation = prandom_u32();
++	reset_inode_seed(inode);
++	reset_inode_seed(inode_bl);
+ 
+ 	ext4_discard_preallocations(inode);
+ 
+@@ -169,6 +188,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			inode->i_ino, err);
+ 		/* Revert all changes: */
+ 		swap_inode_data(inode, inode_bl);
++		ext4_mark_inode_dirty(handle, inode);
+ 	} else {
+ 		err = ext4_mark_inode_dirty(handle, inode_bl);
+ 		if (err < 0) {
+@@ -178,6 +198,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			/* Revert all changes: */
+ 			swap_inode_data(inode, inode_bl);
+ 			ext4_mark_inode_dirty(handle, inode);
++			ext4_mark_inode_dirty(handle, inode_bl);
+ 		}
+ 	}
+ 	ext4_journal_stop(handle);
+@@ -339,19 +360,14 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (projid_eq(kprojid, EXT4_I(inode)->i_projid))
+ 		return 0;
+ 
+-	err = mnt_want_write_file(filp);
+-	if (err)
+-		return err;
+-
+ 	err = -EPERM;
+-	inode_lock(inode);
+ 	/* Is it quota file? Do not allow user to mess with it */
+ 	if (ext4_is_quota_file(inode))
+-		goto out_unlock;
++		return err;
+ 
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+-		goto out_unlock;
++		return err;
+ 
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 	if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) {
+@@ -359,20 +375,20 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 					      EXT4_SB(sb)->s_want_extra_isize,
+ 					      &iloc);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 	} else {
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	dquot_initialize(inode);
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+-	if (IS_ERR(handle)) {
+-		err = PTR_ERR(handle);
+-		goto out_unlock;
+-	}
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
+ 
+ 	err = ext4_reserve_inode_write(handle, inode, &iloc);
+ 	if (err)
+@@ -400,9 +416,6 @@ out_dirty:
+ 		err = rc;
+ out_stop:
+ 	ext4_journal_stop(handle);
+-out_unlock:
+-	inode_unlock(inode);
+-	mnt_drop_write_file(filp);
+ 	return err;
+ }
+ #else
+@@ -626,6 +639,30 @@ group_add_out:
+ 	return err;
+ }
+ 
++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa)
++{
++	/*
++	 * Project Quota ID state is only allowed to change from within the init
++	 * namespace. Enforce that restriction only if we are trying to change
++	 * the quota ID state. Everything else is allowed in user namespaces.
++	 */
++	if (current_user_ns() == &init_user_ns)
++		return 0;
++
++	if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid)
++		return -EINVAL;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) {
++		if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT))
++			return -EINVAL;
++	} else {
++		if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT)
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -1025,19 +1062,19 @@ resizefs_out:
+ 			return err;
+ 
+ 		inode_lock(inode);
++		err = ext4_ioctl_check_project(inode, &fa);
++		if (err)
++			goto out;
+ 		flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
+ 			 (flags & EXT4_FL_XFLAG_VISIBLE);
+ 		err = ext4_ioctl_setflags(inode, flags);
+-		inode_unlock(inode);
+-		mnt_drop_write_file(filp);
+ 		if (err)
+-			return err;
+-
++			goto out;
+ 		err = ext4_ioctl_setproject(filp, fa.fsx_projid);
+-		if (err)
+-			return err;
+-
+-		return 0;
++out:
++		inode_unlock(inode);
++		mnt_drop_write_file(filp);
++		return err;
+ 	}
+ 	case EXT4_IOC_SHUTDOWN:
+ 		return ext4_shutdown(sb, arg);
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 8e17efdcbf11..887353875060 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -518,9 +518,13 @@ mext_check_arguments(struct inode *orig_inode,
+ 			orig_inode->i_ino, donor_inode->i_ino);
+ 		return -EINVAL;
+ 	}
+-	if (orig_eof < orig_start + *len - 1)
++	if (orig_eof <= orig_start)
++		*len = 0;
++	else if (orig_eof < orig_start + *len - 1)
+ 		*len = orig_eof - orig_start;
+-	if (donor_eof < donor_start + *len - 1)
++	if (donor_eof <= donor_start)
++		*len = 0;
++	else if (donor_eof < donor_start + *len - 1)
+ 		*len = donor_eof - donor_start;
+ 	if (!*len) {
+ 		ext4_debug("ext4 move extent: len should not be 0 "
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index a7a0fffc3ae8..8d91d50ccf42 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -895,6 +895,18 @@ static inline void ext4_quota_off_umount(struct super_block *sb)
+ 	for (type = 0; type < EXT4_MAXQUOTAS; type++)
+ 		ext4_quota_off(sb, type);
+ }
++
++/*
++ * This is a helper function which is used in the mount/remount
++ * codepaths (which holds s_umount) to fetch the quota file name.
++ */
++static inline char *get_qf_name(struct super_block *sb,
++				struct ext4_sb_info *sbi,
++				int type)
++{
++	return rcu_dereference_protected(sbi->s_qf_names[type],
++					 lockdep_is_held(&sb->s_umount));
++}
+ #else
+ static inline void ext4_quota_off_umount(struct super_block *sb)
+ {
+@@ -946,7 +958,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
+ #ifdef CONFIG_QUOTA
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(sbi->s_qf_names[i]);
++		kfree(get_qf_name(sb, sbi, i));
+ #endif
+ 
+ 	/* Debugging code just in case the in-memory inode orphan list
+@@ -1511,11 +1523,10 @@ static const char deprecated_msg[] =
+ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *qname;
++	char *qname, *old_qname = get_qf_name(sb, sbi, qtype);
+ 	int ret = -1;
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		!sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && !old_qname) {
+ 		ext4_msg(sb, KERN_ERR,
+ 			"Cannot change journaled "
+ 			"quota options when quota turned on");
+@@ -1532,8 +1543,8 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"Not enough memory for storing quotafile name");
+ 		return -1;
+ 	}
+-	if (sbi->s_qf_names[qtype]) {
+-		if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
++	if (old_qname) {
++		if (strcmp(old_qname, qname) == 0)
+ 			ret = 1;
+ 		else
+ 			ext4_msg(sb, KERN_ERR,
+@@ -1546,7 +1557,7 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"quotafile must be on filesystem root");
+ 		goto errout;
+ 	}
+-	sbi->s_qf_names[qtype] = qname;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], qname);
+ 	set_opt(sb, QUOTA);
+ 	return 1;
+ errout:
+@@ -1558,15 +1569,16 @@ static int clear_qf_name(struct super_block *sb, int qtype)
+ {
+ 
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *old_qname = get_qf_name(sb, sbi, qtype);
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && old_qname) {
+ 		ext4_msg(sb, KERN_ERR, "Cannot change journaled quota options"
+ 			" when quota turned on");
+ 		return -1;
+ 	}
+-	kfree(sbi->s_qf_names[qtype]);
+-	sbi->s_qf_names[qtype] = NULL;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], NULL);
++	synchronize_rcu();
++	kfree(old_qname);
+ 	return 1;
+ }
+ #endif
+@@ -1941,7 +1953,7 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 int is_remount)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *p;
++	char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	int token;
+ 
+@@ -1972,11 +1984,13 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 "Cannot enable project quota enforcement.");
+ 		return 0;
+ 	}
+-	if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) {
+-		if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
++	usr_qf_name = get_qf_name(sb, sbi, USRQUOTA);
++	grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA);
++	if (usr_qf_name || grp_qf_name) {
++		if (test_opt(sb, USRQUOTA) && usr_qf_name)
+ 			clear_opt(sb, USRQUOTA);
+ 
+-		if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
++		if (test_opt(sb, GRPQUOTA) && grp_qf_name)
+ 			clear_opt(sb, GRPQUOTA);
+ 
+ 		if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) {
+@@ -2010,6 +2024,7 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ {
+ #if defined(CONFIG_QUOTA)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *usr_qf_name, *grp_qf_name;
+ 
+ 	if (sbi->s_jquota_fmt) {
+ 		char *fmtname = "";
+@@ -2028,11 +2043,14 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ 		seq_printf(seq, ",jqfmt=%s", fmtname);
+ 	}
+ 
+-	if (sbi->s_qf_names[USRQUOTA])
+-		seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
+-
+-	if (sbi->s_qf_names[GRPQUOTA])
+-		seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
++	rcu_read_lock();
++	usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]);
++	grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]);
++	if (usr_qf_name)
++		seq_show_option(seq, "usrjquota", usr_qf_name);
++	if (grp_qf_name)
++		seq_show_option(seq, "grpjquota", grp_qf_name);
++	rcu_read_unlock();
+ #endif
+ }
+ 
+@@ -5081,6 +5099,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	int err = 0;
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
++	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 
+@@ -5097,8 +5116,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	old_opts.s_jquota_fmt = sbi->s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		if (sbi->s_qf_names[i]) {
+-			old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
+-							 GFP_KERNEL);
++			char *qf_name = get_qf_name(sb, sbi, i);
++
++			old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL);
+ 			if (!old_opts.s_qf_names[i]) {
+ 				for (j = 0; j < i; j++)
+ 					kfree(old_opts.s_qf_names[j]);
+@@ -5327,9 +5347,12 @@ restore_opts:
+ #ifdef CONFIG_QUOTA
+ 	sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++) {
+-		kfree(sbi->s_qf_names[i]);
+-		sbi->s_qf_names[i] = old_opts.s_qf_names[i];
++		to_free[i] = get_qf_name(sb, sbi, i);
++		rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]);
+ 	}
++	synchronize_rcu();
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(to_free[i]);
+ #endif
+ 	kfree(orig_data);
+ 	return err;
+@@ -5520,7 +5543,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+  */
+ static int ext4_quota_on_mount(struct super_block *sb, int type)
+ {
+-	return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type],
++	return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type),
+ 					EXT4_SB(sb)->s_jquota_fmt, type);
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b61954d40c25..e397515261dc 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -80,7 +80,8 @@ static void __read_end_io(struct bio *bio)
+ 		/* PG_error was set if any post_read step failed */
+ 		if (bio->bi_status || PageError(page)) {
+ 			ClearPageUptodate(page);
+-			SetPageError(page);
++			/* will re-read again later */
++			ClearPageError(page);
+ 		} else {
+ 			SetPageUptodate(page);
+ 		}
+@@ -453,12 +454,16 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
+-	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+-	__submit_bio(fio->sbi, bio, fio->type);
++	if (fio->io_wbc && !is_read_io(fio->op))
++		wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
++
++	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+ 	if (!is_read_io(fio->op))
+ 		inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page));
++
++	__submit_bio(fio->sbi, bio, fio->type);
+ 	return 0;
+ }
+ 
+@@ -580,6 +585,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+ 		bio_put(bio);
+ 		return -EFAULT;
+ 	}
++	ClearPageError(page);
+ 	__submit_bio(F2FS_I_SB(inode), bio, DATA);
+ 	return 0;
+ }
+@@ -1524,6 +1530,7 @@ submit_and_realloc:
+ 		if (bio_add_page(bio, page, blocksize, 0) < blocksize)
+ 			goto submit_and_realloc;
+ 
++		ClearPageError(page);
+ 		last_block_in_bio = block_nr;
+ 		goto next_page;
+ set_error_page:
+@@ -2494,10 +2501,6 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
+-	/* don't remain PG_checked flag which was set during GC */
+-	if (is_cold_data(page))
+-		clear_cold_data(page);
+-
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 231b77ef5a53..a70cd2580eae 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -308,14 +308,13 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ 	return count - atomic_read(&et->node_cnt);
+ }
+ 
+-static void __drop_largest_extent(struct inode *inode,
++static void __drop_largest_extent(struct extent_tree *et,
+ 					pgoff_t fofs, unsigned int len)
+ {
+-	struct extent_info *largest = &F2FS_I(inode)->extent_tree->largest;
+-
+-	if (fofs < largest->fofs + largest->len && fofs + len > largest->fofs) {
+-		largest->len = 0;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++	if (fofs < et->largest.fofs + et->largest.len &&
++			fofs + len > et->largest.fofs) {
++		et->largest.len = 0;
++		et->largest_updated = true;
+ 	}
+ }
+ 
+@@ -416,12 +415,11 @@ out:
+ 	return ret;
+ }
+ 
+-static struct extent_node *__try_merge_extent_node(struct inode *inode,
++static struct extent_node *__try_merge_extent_node(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct extent_node *prev_ex,
+ 				struct extent_node *next_ex)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_node *en = NULL;
+ 
+ 	if (prev_ex && __is_back_mergeable(ei, &prev_ex->ei)) {
+@@ -443,7 +441,7 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	spin_lock(&sbi->extent_lock);
+ 	if (!list_empty(&en->list)) {
+@@ -454,12 +452,11 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ 	return en;
+ }
+ 
+-static struct extent_node *__insert_extent_tree(struct inode *inode,
++static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ 				struct extent_tree *et, struct extent_info *ei,
+ 				struct rb_node **insert_p,
+ 				struct rb_node *insert_parent)
+ {
+-	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct rb_node **p;
+ 	struct rb_node *parent = NULL;
+ 	struct extent_node *en = NULL;
+@@ -476,7 +473,7 @@ do_insert:
+ 	if (!en)
+ 		return NULL;
+ 
+-	__try_update_largest_extent(inode, et, en);
++	__try_update_largest_extent(et, en);
+ 
+ 	/* update in global extent list */
+ 	spin_lock(&sbi->extent_lock);
+@@ -497,6 +494,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	struct rb_node **insert_p = NULL, *insert_parent = NULL;
+ 	unsigned int end = fofs + len;
+ 	unsigned int pos = (unsigned int)fofs;
++	bool updated = false;
+ 
+ 	if (!et)
+ 		return;
+@@ -517,7 +515,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	 * drop largest extent before lookup, in case it's already
+ 	 * been shrunk from extent tree
+ 	 */
+-	__drop_largest_extent(inode, fofs, len);
++	__drop_largest_extent(et, fofs, len);
+ 
+ 	/* 1. lookup first extent node in range [fofs, fofs + len - 1] */
+ 	en = (struct extent_node *)f2fs_lookup_rb_tree_ret(&et->root,
+@@ -550,7 +548,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 				set_extent_info(&ei, end,
+ 						end - dei.fofs + dei.blk,
+ 						org_end - end);
+-				en1 = __insert_extent_tree(inode, et, &ei,
++				en1 = __insert_extent_tree(sbi, et, &ei,
+ 							NULL, NULL);
+ 				next_en = en1;
+ 			} else {
+@@ -570,7 +568,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 		}
+ 
+ 		if (parts)
+-			__try_update_largest_extent(inode, et, en);
++			__try_update_largest_extent(et, en);
+ 		else
+ 			__release_extent_node(sbi, et, en);
+ 
+@@ -590,15 +588,16 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (blkaddr) {
+ 
+ 		set_extent_info(&ei, fofs, blkaddr, len);
+-		if (!__try_merge_extent_node(inode, et, &ei, prev_en, next_en))
+-			__insert_extent_tree(inode, et, &ei,
++		if (!__try_merge_extent_node(sbi, et, &ei, prev_en, next_en))
++			__insert_extent_tree(sbi, et, &ei,
+ 						insert_p, insert_parent);
+ 
+ 		/* give up extent_cache, if split and small updates happen */
+ 		if (dei.len >= 1 &&
+ 				prev.len < F2FS_MIN_EXTENT_LEN &&
+ 				et->largest.len < F2FS_MIN_EXTENT_LEN) {
+-			__drop_largest_extent(inode, 0, UINT_MAX);
++			et->largest.len = 0;
++			et->largest_updated = true;
+ 			set_inode_flag(inode, FI_NO_EXTENT);
+ 		}
+ 	}
+@@ -606,7 +605,15 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	if (is_inode_flag_set(inode, FI_NO_EXTENT))
+ 		__free_extent_tree(sbi, et);
+ 
++	if (et->largest_updated) {
++		et->largest_updated = false;
++		updated = true;
++	}
++
+ 	write_unlock(&et->lock);
++
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
+@@ -705,6 +712,7 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
++	bool updated = false;
+ 
+ 	if (!f2fs_may_extent_tree(inode))
+ 		return;
+@@ -713,8 +721,13 @@ void f2fs_drop_extent_tree(struct inode *inode)
+ 
+ 	write_lock(&et->lock);
+ 	__free_extent_tree(sbi, et);
+-	__drop_largest_extent(inode, 0, UINT_MAX);
++	if (et->largest.len) {
++		et->largest.len = 0;
++		updated = true;
++	}
+ 	write_unlock(&et->lock);
++	if (updated)
++		f2fs_mark_inode_dirty_sync(inode, true);
+ }
+ 
+ void f2fs_destroy_extent_tree(struct inode *inode)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b6f2dc8163e1..181aade161e8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -556,6 +556,7 @@ struct extent_tree {
+ 	struct list_head list;		/* to be used by sbi->zombie_list */
+ 	rwlock_t lock;			/* protect extent info rb-tree */
+ 	atomic_t node_cnt;		/* # of extent node in rb-tree*/
++	bool largest_updated;		/* largest extent updated */
+ };
+ 
+ /*
+@@ -736,12 +737,12 @@ static inline bool __is_front_mergeable(struct extent_info *cur,
+ }
+ 
+ extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
+-static inline void __try_update_largest_extent(struct inode *inode,
+-			struct extent_tree *et, struct extent_node *en)
++static inline void __try_update_largest_extent(struct extent_tree *et,
++						struct extent_node *en)
+ {
+ 	if (en->ei.len > et->largest.len) {
+ 		et->largest = en->ei;
+-		f2fs_mark_inode_dirty_sync(inode, true);
++		et->largest_updated = true;
+ 	}
+ }
+ 
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index cf0f944fcaea..4a2e75bce36a 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -287,6 +287,12 @@ static int do_read_inode(struct inode *inode)
+ 	if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
+ 		__recover_inline_status(inode, node_page);
+ 
++	/* try to recover cold bit for non-dir inode */
++	if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) {
++		set_cold_node(node_page, false);
++		set_page_dirty(node_page);
++	}
++
+ 	/* get rdev by using inline_info */
+ 	__get_inode_rdev(inode, ri);
+ 
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 52ed02b0327c..ec22e7c5b37e 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2356,7 +2356,7 @@ retry:
+ 	if (!PageUptodate(ipage))
+ 		SetPageUptodate(ipage);
+ 	fill_node_footer(ipage, ino, ino, 0, true);
+-	set_cold_node(page, false);
++	set_cold_node(ipage, false);
+ 
+ 	src = F2FS_INODE(page);
+ 	dst = F2FS_INODE(ipage);
+@@ -2379,6 +2379,13 @@ retry:
+ 			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
+ 								i_projid))
+ 			dst->i_projid = src->i_projid;
++
++		if (f2fs_sb_has_inode_crtime(sbi->sb) &&
++			F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
++							i_crtime_nsec)) {
++			dst->i_crtime = src->i_crtime;
++			dst->i_crtime_nsec = src->i_crtime_nsec;
++		}
+ 	}
+ 
+ 	new_ni = old_ni;
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index ad70e62c5da4..a69a2c5c6682 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -221,6 +221,7 @@ static void recover_inode(struct inode *inode, struct page *page)
+ 	inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
+ 
+ 	F2FS_I(inode)->i_advise = raw->i_advise;
++	F2FS_I(inode)->i_flags = le32_to_cpu(raw->i_flags);
+ 
+ 	recover_inline_flags(inode, raw);
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 742147cbe759..a3e90e6f72a8 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1820,7 +1820,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ 	if (!inode || !igrab(inode))
+ 		return dquot_quota_off(sb, type);
+ 
+-	f2fs_quota_sync(sb, type);
++	err = f2fs_quota_sync(sb, type);
++	if (err)
++		goto out_put;
+ 
+ 	err = dquot_quota_off(sb, type);
+ 	if (err || f2fs_sb_has_quota_ino(sb))
+@@ -1839,9 +1841,20 @@ out_put:
+ void f2fs_quota_off_umount(struct super_block *sb)
+ {
+ 	int type;
++	int err;
++
++	for (type = 0; type < MAXQUOTAS; type++) {
++		err = f2fs_quota_off(sb, type);
++		if (err) {
++			int ret = dquot_quota_off(sb, type);
+ 
+-	for (type = 0; type < MAXQUOTAS; type++)
+-		f2fs_quota_off(sb, type);
++			f2fs_msg(sb, KERN_ERR,
++				"Fail to turn off disk quota "
++				"(type: %d, err: %d, ret:%d), Please "
++				"run fsck to fix it.", type, err, ret);
++			set_sbi_flag(F2FS_SB(sb), SBI_NEED_FSCK);
++		}
++	}
+ }
+ 
+ static int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index c2469833b4fb..6b84ef6ccff3 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1333,6 +1333,9 @@ static struct dentry *gfs2_mount_meta(struct file_system_type *fs_type,
+ 	struct path path;
+ 	int error;
+ 
++	if (!dev_name || !*dev_name)
++		return ERR_PTR(-EINVAL);
++
+ 	error = kern_path(dev_name, LOOKUP_FOLLOW, &path);
+ 	if (error) {
+ 		pr_warn("path_lookup on %s returned error %d\n",
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index c125d662777c..26f8d7e46462 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -251,8 +251,8 @@ restart:
+ 		bh = jh2bh(jh);
+ 
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+@@ -333,8 +333,8 @@ restart2:
+ 		jh = transaction->t_checkpoint_io_list;
+ 		bh = jh2bh(jh);
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 87bdf0f4cba1..902a7dd10e5c 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -285,10 +285,8 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_fs_info = c;
+ 
+ 	ret = jffs2_parse_options(c, data);
+-	if (ret) {
+-		kfree(c);
++	if (ret)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Initialize JFFS2 superblock locks, the further initialization will
+ 	 * be done later */
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index d35cd6be0675..93fb7cf0b92b 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -341,7 +341,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
+ 	};
+ 	struct lockd_net *ln = net_generic(net, lockd_net_id);
+ 
+-	dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
++	dprintk("lockd: %s(host='%.*s', vers=%u, proto=%s)\n", __func__,
+ 			(int)hostname_len, hostname, rqstp->rq_vers,
+ 			(rqstp->rq_prot == IPPROTO_UDP ? "udp" : "tcp"));
+ 
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index d7124fb12041..5df68d79d661 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -935,10 +935,10 @@ EXPORT_SYMBOL_GPL(nfs4_set_ds_client);
+ 
+ /*
+  * Session has been established, and the client marked ready.
+- * Set the mount rsize and wsize with negotiated fore channel
+- * attributes which will be bound checked in nfs_server_set_fsinfo.
++ * Limit the mount rsize, wsize and dtsize using negotiated fore
++ * channel attributes.
+  */
+-static void nfs4_session_set_rwsize(struct nfs_server *server)
++static void nfs4_session_limit_rwsize(struct nfs_server *server)
+ {
+ #ifdef CONFIG_NFS_V4_1
+ 	struct nfs4_session *sess;
+@@ -951,9 +951,11 @@ static void nfs4_session_set_rwsize(struct nfs_server *server)
+ 	server_resp_sz = sess->fc_attrs.max_resp_sz - nfs41_maxread_overhead;
+ 	server_rqst_sz = sess->fc_attrs.max_rqst_sz - nfs41_maxwrite_overhead;
+ 
+-	if (!server->rsize || server->rsize > server_resp_sz)
++	if (server->dtsize > server_resp_sz)
++		server->dtsize = server_resp_sz;
++	if (server->rsize > server_resp_sz)
+ 		server->rsize = server_resp_sz;
+-	if (!server->wsize || server->wsize > server_rqst_sz)
++	if (server->wsize > server_rqst_sz)
+ 		server->wsize = server_rqst_sz;
+ #endif /* CONFIG_NFS_V4_1 */
+ }
+@@ -1000,12 +1002,12 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 			(unsigned long long) server->fsid.minor);
+ 	nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
+ 
+-	nfs4_session_set_rwsize(server);
+-
+ 	error = nfs_probe_fsinfo(server, mntfh, fattr);
+ 	if (error < 0)
+ 		goto out;
+ 
++	nfs4_session_limit_rwsize(server);
++
+ 	if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
+ 		server->namelen = NFS4_MAXNAMLEN;
+ 
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 67d19cd92e44..7e6425791388 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1110,6 +1110,20 @@ static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
+ 	return ret;
+ }
+ 
++static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
++{
++	u32 midx;
++	struct nfs_pgio_mirror *mirror;
++
++	if (!desc->pg_error)
++		return;
++
++	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
++		mirror = &desc->pg_mirrors[midx];
++		desc->pg_completion_ops->error_cleanup(&mirror->pg_list);
++	}
++}
++
+ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			   struct nfs_page *req)
+ {
+@@ -1160,25 +1174,11 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 	return 1;
+ 
+ out_failed:
+-	/*
+-	 * We might have failed before sending any reqs over wire.
+-	 * Clean up rest of the reqs in mirror pg_list.
+-	 */
+-	if (desc->pg_error) {
+-		struct nfs_pgio_mirror *mirror;
+-		void (*func)(struct list_head *);
+-
+-		/* remember fatal errors */
+-		if (nfs_error_is_fatal(desc->pg_error))
+-			nfs_context_set_write_error(req->wb_context,
+-						    desc->pg_error);
+-
+-		func = desc->pg_completion_ops->error_cleanup;
+-		for (midx = 0; midx < desc->pg_mirror_count; midx++) {
+-			mirror = &desc->pg_mirrors[midx];
+-			func(&mirror->pg_list);
+-		}
+-	}
++	/* remember fatal errors */
++	if (nfs_error_is_fatal(desc->pg_error))
++		nfs_context_set_write_error(req->wb_context,
++						desc->pg_error);
++	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+ }
+ 
+@@ -1250,6 +1250,8 @@ void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
+ 	for (midx = 0; midx < desc->pg_mirror_count; midx++)
+ 		nfs_pageio_complete_mirror(desc, midx);
+ 
++	if (desc->pg_error < 0)
++		nfs_pageio_error_cleanup(desc);
+ 	if (desc->pg_ops->pg_cleanup)
+ 		desc->pg_ops->pg_cleanup(desc);
+ 	nfs_pageio_cleanup_mirroring(desc);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 4a17fad93411..18fa7fd3bae9 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4361,7 +4361,7 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ 
+ 	fl = nfs4_alloc_init_lease(dp, NFS4_OPEN_DELEGATE_READ);
+ 	if (!fl)
+-		goto out_stid;
++		goto out_clnt_odstate;
+ 
+ 	status = vfs_setlease(fp->fi_deleg_file, fl->fl_type, &fl, NULL);
+ 	if (fl)
+@@ -4386,7 +4386,6 @@ out_unlock:
+ 	vfs_setlease(fp->fi_deleg_file, F_UNLCK, NULL, (void **)&dp);
+ out_clnt_odstate:
+ 	put_clnt_odstate(dp->dl_clnt_odstate);
+-out_stid:
+ 	nfs4_put_stid(&dp->dl_stid);
+ out_delegees:
+ 	put_deleg_file(fp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index ababdbfab537..f43ea1aad542 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -96,6 +96,9 @@ void fsnotify_unmount_inodes(struct super_block *sb)
+ 
+ 	if (iput_inode)
+ 		iput(iput_inode);
++	/* Wait for outstanding inode references from connectors */
++	wait_var_event(&sb->s_fsnotify_inode_refs,
++		       !atomic_long_read(&sb->s_fsnotify_inode_refs));
+ }
+ 
+ /*
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 61f4c5fa34c7..75394ae96673 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -161,15 +161,18 @@ static void fsnotify_connector_destroy_workfn(struct work_struct *work)
+ 	}
+ }
+ 
+-static struct inode *fsnotify_detach_connector_from_object(
+-					struct fsnotify_mark_connector *conn)
++static void *fsnotify_detach_connector_from_object(
++					struct fsnotify_mark_connector *conn,
++					unsigned int *type)
+ {
+ 	struct inode *inode = NULL;
+ 
++	*type = conn->type;
+ 	if (conn->type == FSNOTIFY_OBJ_TYPE_INODE) {
+ 		inode = conn->inode;
+ 		rcu_assign_pointer(inode->i_fsnotify_marks, NULL);
+ 		inode->i_fsnotify_mask = 0;
++		atomic_long_inc(&inode->i_sb->s_fsnotify_inode_refs);
+ 		conn->inode = NULL;
+ 		conn->type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	} else if (conn->type == FSNOTIFY_OBJ_TYPE_VFSMOUNT) {
+@@ -193,10 +196,29 @@ static void fsnotify_final_mark_destroy(struct fsnotify_mark *mark)
+ 	fsnotify_put_group(group);
+ }
+ 
++/* Drop object reference originally held by a connector */
++static void fsnotify_drop_object(unsigned int type, void *objp)
++{
++	struct inode *inode;
++	struct super_block *sb;
++
++	if (!objp)
++		return;
++	/* Currently only inode references are passed to be dropped */
++	if (WARN_ON_ONCE(type != FSNOTIFY_OBJ_TYPE_INODE))
++		return;
++	inode = objp;
++	sb = inode->i_sb;
++	iput(inode);
++	if (atomic_long_dec_and_test(&sb->s_fsnotify_inode_refs))
++		wake_up_var(&sb->s_fsnotify_inode_refs);
++}
++
+ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ {
+ 	struct fsnotify_mark_connector *conn;
+-	struct inode *inode = NULL;
++	void *objp = NULL;
++	unsigned int type = FSNOTIFY_OBJ_TYPE_DETACHED;
+ 	bool free_conn = false;
+ 
+ 	/* Catch marks that were actually never attached to object */
+@@ -216,7 +238,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	conn = mark->connector;
+ 	hlist_del_init_rcu(&mark->obj_list);
+ 	if (hlist_empty(&conn->list)) {
+-		inode = fsnotify_detach_connector_from_object(conn);
++		objp = fsnotify_detach_connector_from_object(conn, &type);
+ 		free_conn = true;
+ 	} else {
+ 		__fsnotify_recalc_mask(conn);
+@@ -224,7 +246,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark)
+ 	mark->connector = NULL;
+ 	spin_unlock(&conn->lock);
+ 
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ 
+ 	if (free_conn) {
+ 		spin_lock(&destroy_lock);
+@@ -702,7 +724,8 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ {
+ 	struct fsnotify_mark_connector *conn;
+ 	struct fsnotify_mark *mark, *old_mark = NULL;
+-	struct inode *inode;
++	void *objp;
++	unsigned int type;
+ 
+ 	conn = fsnotify_grab_connector(connp);
+ 	if (!conn)
+@@ -728,11 +751,11 @@ void fsnotify_destroy_marks(struct fsnotify_mark_connector __rcu **connp)
+ 	 * mark references get dropped. It would lead to strange results such
+ 	 * as delaying inode deletion or blocking unmount.
+ 	 */
+-	inode = fsnotify_detach_connector_from_object(conn);
++	objp = fsnotify_detach_connector_from_object(conn, &type);
+ 	spin_unlock(&conn->lock);
+ 	if (old_mark)
+ 		fsnotify_put_mark(old_mark);
+-	iput(inode);
++	fsnotify_drop_object(type, objp);
+ }
+ 
+ /*
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index dfd73a4616ce..3437da437099 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -767,6 +767,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 	smaps_walk.private = mss;
+ 
+ #ifdef CONFIG_SHMEM
++	/* In case of smaps_rollup, reset the value from previous vma */
++	mss->check_shmem_swap = false;
+ 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
+ 		/*
+ 		 * For shared or readonly shmem mappings we know that all
+@@ -782,7 +784,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 
+ 		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+ 					!(vma->vm_flags & VM_WRITE)) {
+-			mss->swap = shmem_swapped;
++			mss->swap += shmem_swapped;
+ 		} else {
+ 			mss->check_shmem_swap = true;
+ 			smaps_walk.pte_hole = smaps_pte_hole;
+diff --git a/include/crypto/speck.h b/include/crypto/speck.h
+deleted file mode 100644
+index 73cfc952d405..000000000000
+--- a/include/crypto/speck.h
++++ /dev/null
+@@ -1,62 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Common values for the Speck algorithm
+- */
+-
+-#ifndef _CRYPTO_SPECK_H
+-#define _CRYPTO_SPECK_H
+-
+-#include <linux/types.h>
+-
+-/* Speck128 */
+-
+-#define SPECK128_BLOCK_SIZE	16
+-
+-#define SPECK128_128_KEY_SIZE	16
+-#define SPECK128_128_NROUNDS	32
+-
+-#define SPECK128_192_KEY_SIZE	24
+-#define SPECK128_192_NROUNDS	33
+-
+-#define SPECK128_256_KEY_SIZE	32
+-#define SPECK128_256_NROUNDS	34
+-
+-struct speck128_tfm_ctx {
+-	u64 round_keys[SPECK128_256_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in);
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keysize);
+-
+-/* Speck64 */
+-
+-#define SPECK64_BLOCK_SIZE	8
+-
+-#define SPECK64_96_KEY_SIZE	12
+-#define SPECK64_96_NROUNDS	26
+-
+-#define SPECK64_128_KEY_SIZE	16
+-#define SPECK64_128_NROUNDS	27
+-
+-struct speck64_tfm_ctx {
+-	u32 round_keys[SPECK64_128_NROUNDS];
+-	int nrounds;
+-};
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in);
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keysize);
+-
+-#endif /* _CRYPTO_SPECK_H */
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index a57a8aa90ffb..2b0d02458a18 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -153,6 +153,17 @@ struct __drm_planes_state {
+ struct __drm_crtcs_state {
+ 	struct drm_crtc *ptr;
+ 	struct drm_crtc_state *state, *old_state, *new_state;
++
++	/**
++	 * @commit:
++	 *
++	 * A reference to the CRTC commit object that is kept for use by
++	 * drm_atomic_helper_wait_for_flip_done() after
++	 * drm_atomic_helper_commit_hw_done() is called. This ensures that a
++	 * concurrent commit won't free a commit object that is still in use.
++	 */
++	struct drm_crtc_commit *commit;
++
+ 	s32 __user *out_fence_ptr;
+ 	u64 last_vblank_count;
+ };
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index c68acc47da57..47041c7fed28 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -103,6 +103,9 @@ typedef struct compat_sigaltstack {
+ 	compat_size_t			ss_size;
+ } compat_stack_t;
+ #endif
++#ifndef COMPAT_MINSIGSTKSZ
++#define COMPAT_MINSIGSTKSZ	MINSIGSTKSZ
++#endif
+ 
+ #define compat_jiffies_to_clock_t(x)	\
+ 		(((unsigned long)(x) * COMPAT_USER_HZ) / HZ)
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index e73363bd8646..cf23c128ac46 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1416,6 +1416,9 @@ struct super_block {
+ 	/* Number of inodes with nlink == 0 but still referenced */
+ 	atomic_long_t s_remove_count;
+ 
++	/* Pending fsnotify inode refs */
++	atomic_long_t s_fsnotify_inode_refs;
++
+ 	/* Being remounted read-only */
+ 	int s_readonly_remount;
+ 
+diff --git a/include/linux/hdmi.h b/include/linux/hdmi.h
+index d271ff23984f..4f3febc0f971 100644
+--- a/include/linux/hdmi.h
++++ b/include/linux/hdmi.h
+@@ -101,8 +101,8 @@ enum hdmi_extended_colorimetry {
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_601,
+ 	HDMI_EXTENDED_COLORIMETRY_XV_YCC_709,
+ 	HDMI_EXTENDED_COLORIMETRY_S_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601,
+-	HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB,
++	HDMI_EXTENDED_COLORIMETRY_OPYCC_601,
++	HDMI_EXTENDED_COLORIMETRY_OPRGB,
+ 
+ 	/* The following EC values are only defined in CEA-861-F. */
+ 	HDMI_EXTENDED_COLORIMETRY_BT2020_CONST_LUM,
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index 3c5200137b24..42ba31da534f 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -36,7 +36,7 @@ enum siginfo_layout {
+ 	SIL_SYS,
+ };
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code);
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
+ 
+ /*
+  * Define some primitives to manipulate sigset_t.
+diff --git a/include/linux/tc.h b/include/linux/tc.h
+index f92511e57cdb..a60639f37963 100644
+--- a/include/linux/tc.h
++++ b/include/linux/tc.h
+@@ -84,6 +84,7 @@ struct tc_dev {
+ 					   device. */
+ 	struct device	dev;		/* Generic device interface. */
+ 	struct resource	resource;	/* Address space of this device. */
++	u64		dma_mask;	/* DMA addressable range. */
+ 	char		vendor[9];
+ 	char		name[9];
+ 	char		firmware[9];
+diff --git a/include/media/cec.h b/include/media/cec.h
+index 580ab1042898..71cc0272b053 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -63,7 +63,6 @@ struct cec_data {
+ 	struct delayed_work work;
+ 	struct completion c;
+ 	u8 attempts;
+-	bool new_initiator;
+ 	bool blocking;
+ 	bool completed;
+ };
+@@ -174,6 +173,7 @@ struct cec_adapter {
+ 	bool is_configuring;
+ 	bool is_configured;
+ 	bool cec_pin_is_high;
++	u8 last_initiator;
+ 	u32 monitor_all_cnt;
+ 	u32 monitor_pin_cnt;
+ 	u32 follower_cnt;
+@@ -451,4 +451,74 @@ static inline void cec_phys_addr_invalidate(struct cec_adapter *adap)
+ 	cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);
+ }
+ 
++/**
++ * cec_get_edid_spa_location() - find location of the Source Physical Address
++ *
++ * @edid: the EDID
++ * @size: the size of the EDID
++ *
++ * This EDID is expected to be CEA-861 compliant, which means that there are
++ * at least two blocks and one or more of the extension blocks are CEA-861
++ * blocks.
++ *
++ * The returned location is guaranteed to be <= size-2.
++ *
++ * This is an inline function since it is used by both CEC and V4L2.
++ * Ideally this would go in a module shared by both, but it is overkill to do
++ * that for just a single function.
++ */
++static inline unsigned int cec_get_edid_spa_location(const u8 *edid,
++						     unsigned int size)
++{
++	unsigned int blocks = size / 128;
++	unsigned int block;
++	u8 d;
++
++	/* Sanity check: at least 2 blocks and a multiple of the block size */
++	if (blocks < 2 || size % 128)
++		return 0;
++
++	/*
++	 * If there are fewer extension blocks than the size, then update
++	 * 'blocks'. It is allowed to have more extension blocks than the size,
++	 * since some hardware can only read e.g. 256 bytes of the EDID, even
++	 * though more blocks are present. The first CEA-861 extension block
++	 * should normally be in block 1 anyway.
++	 */
++	if (edid[0x7e] + 1 < blocks)
++		blocks = edid[0x7e] + 1;
++
++	for (block = 1; block < blocks; block++) {
++		unsigned int offset = block * 128;
++
++		/* Skip any non-CEA-861 extension blocks */
++		if (edid[offset] != 0x02 || edid[offset + 1] != 0x03)
++			continue;
++
++		/* search Vendor Specific Data Block (tag 3) */
++		d = edid[offset + 2] & 0x7f;
++		/* Check if there are Data Blocks */
++		if (d <= 4)
++			continue;
++		if (d > 4) {
++			unsigned int i = offset + 4;
++			unsigned int end = offset + d;
++
++			/* Note: 'end' is always < 'size' */
++			do {
++				u8 tag = edid[i] >> 5;
++				u8 len = edid[i] & 0x1f;
++
++				if (tag == 3 && len >= 5 && i + len <= end &&
++				    edid[i + 1] == 0x03 &&
++				    edid[i + 2] == 0x0c &&
++				    edid[i + 3] == 0x00)
++					return i + 4;
++				i += len + 1;
++			} while (i < end);
++		}
++	}
++	return 0;
++}
++
+ #endif /* _MEDIA_CEC_H */
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 6c003995347a..59185fbbd202 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -1296,21 +1296,27 @@ struct ib_qp_attr {
+ };
+ 
+ enum ib_wr_opcode {
+-	IB_WR_RDMA_WRITE,
+-	IB_WR_RDMA_WRITE_WITH_IMM,
+-	IB_WR_SEND,
+-	IB_WR_SEND_WITH_IMM,
+-	IB_WR_RDMA_READ,
+-	IB_WR_ATOMIC_CMP_AND_SWP,
+-	IB_WR_ATOMIC_FETCH_AND_ADD,
+-	IB_WR_LSO,
+-	IB_WR_SEND_WITH_INV,
+-	IB_WR_RDMA_READ_WITH_INV,
+-	IB_WR_LOCAL_INV,
+-	IB_WR_REG_MR,
+-	IB_WR_MASKED_ATOMIC_CMP_AND_SWP,
+-	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++	/* These are shared with userspace */
++	IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE,
++	IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM,
++	IB_WR_SEND = IB_UVERBS_WR_SEND,
++	IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM,
++	IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ,
++	IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP,
++	IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD,
++	IB_WR_LSO = IB_UVERBS_WR_TSO,
++	IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV,
++	IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV,
++	IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV,
++	IB_WR_MASKED_ATOMIC_CMP_AND_SWP =
++		IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP,
++	IB_WR_MASKED_ATOMIC_FETCH_AND_ADD =
++		IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD,
++
++	/* These are kernel only and can not be issued by userspace */
++	IB_WR_REG_MR = 0x20,
+ 	IB_WR_REG_SIG_MR,
++
+ 	/* reserve values for low level drivers' internal use.
+ 	 * These values will not be used at all in the ib core layer.
+ 	 */
+diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h
+index 20fe091b7e96..bc2a1b98d9dd 100644
+--- a/include/uapi/linux/cec.h
++++ b/include/uapi/linux/cec.h
+@@ -152,10 +152,13 @@ static inline void cec_msg_set_reply_to(struct cec_msg *msg,
+ #define CEC_TX_STATUS_LOW_DRIVE		(1 << 3)
+ #define CEC_TX_STATUS_ERROR		(1 << 4)
+ #define CEC_TX_STATUS_MAX_RETRIES	(1 << 5)
++#define CEC_TX_STATUS_ABORTED		(1 << 6)
++#define CEC_TX_STATUS_TIMEOUT		(1 << 7)
+ 
+ #define CEC_RX_STATUS_OK		(1 << 0)
+ #define CEC_RX_STATUS_TIMEOUT		(1 << 1)
+ #define CEC_RX_STATUS_FEATURE_ABORT	(1 << 2)
++#define CEC_RX_STATUS_ABORTED		(1 << 3)
+ 
+ static inline int cec_msg_status_is_ok(const struct cec_msg *msg)
+ {
+diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
+index 73e01918f996..a441ea1bfe6d 100644
+--- a/include/uapi/linux/fs.h
++++ b/include/uapi/linux/fs.h
+@@ -279,8 +279,8 @@ struct fsxattr {
+ #define FS_ENCRYPTION_MODE_AES_256_CTS		4
+ #define FS_ENCRYPTION_MODE_AES_128_CBC		5
+ #define FS_ENCRYPTION_MODE_AES_128_CTS		6
+-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7
+-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8
++#define FS_ENCRYPTION_MODE_SPECK128_256_XTS	7 /* Removed, do not use. */
++#define FS_ENCRYPTION_MODE_SPECK128_256_CTS	8 /* Removed, do not use. */
+ 
+ struct fscrypt_policy {
+ 	__u8 version;
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 7e27070b9440..2f2c43d633c5 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -128,37 +128,31 @@ enum {
+ 
+ static inline const char *nvdimm_bus_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_ARS_CAP] = "ars_cap",
+-		[ND_CMD_ARS_START] = "ars_start",
+-		[ND_CMD_ARS_STATUS] = "ars_status",
+-		[ND_CMD_CLEAR_ERROR] = "clear_error",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_ARS_CAP:		return "ars_cap";
++	case ND_CMD_ARS_START:		return "ars_start";
++	case ND_CMD_ARS_STATUS:		return "ars_status";
++	case ND_CMD_CLEAR_ERROR:	return "clear_error";
++	case ND_CMD_CALL:		return "cmd_call";
++	default:			return "unknown";
++	}
+ }
+ 
+ static inline const char *nvdimm_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_SMART] = "smart",
+-		[ND_CMD_SMART_THRESHOLD] = "smart_thresh",
+-		[ND_CMD_DIMM_FLAGS] = "flags",
+-		[ND_CMD_GET_CONFIG_SIZE] = "get_size",
+-		[ND_CMD_GET_CONFIG_DATA] = "get_data",
+-		[ND_CMD_SET_CONFIG_DATA] = "set_data",
+-		[ND_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
+-		[ND_CMD_VENDOR_EFFECT_LOG] = "effect_log",
+-		[ND_CMD_VENDOR] = "vendor",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_SMART:			return "smart";
++	case ND_CMD_SMART_THRESHOLD:		return "smart_thresh";
++	case ND_CMD_DIMM_FLAGS:			return "flags";
++	case ND_CMD_GET_CONFIG_SIZE:		return "get_size";
++	case ND_CMD_GET_CONFIG_DATA:		return "get_data";
++	case ND_CMD_SET_CONFIG_DATA:		return "set_data";
++	case ND_CMD_VENDOR_EFFECT_LOG_SIZE:	return "effect_size";
++	case ND_CMD_VENDOR_EFFECT_LOG:		return "effect_log";
++	case ND_CMD_VENDOR:			return "vendor";
++	case ND_CMD_CALL:			return "cmd_call";
++	default:				return "unknown";
++	}
+ }
+ 
+ #define ND_IOCTL 'N'
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 600877be5c22..082dc1439a50 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -225,8 +225,8 @@ enum v4l2_colorspace {
+ 	/* For RGB colorspaces such as produces by most webcams. */
+ 	V4L2_COLORSPACE_SRGB          = 8,
+ 
+-	/* AdobeRGB colorspace */
+-	V4L2_COLORSPACE_ADOBERGB      = 9,
++	/* opRGB colorspace */
++	V4L2_COLORSPACE_OPRGB         = 9,
+ 
+ 	/* BT.2020 colorspace, used for UHDTV. */
+ 	V4L2_COLORSPACE_BT2020        = 10,
+@@ -258,7 +258,7 @@ enum v4l2_xfer_func {
+ 	 *
+ 	 * V4L2_COLORSPACE_SRGB, V4L2_COLORSPACE_JPEG: V4L2_XFER_FUNC_SRGB
+ 	 *
+-	 * V4L2_COLORSPACE_ADOBERGB: V4L2_XFER_FUNC_ADOBERGB
++	 * V4L2_COLORSPACE_OPRGB: V4L2_XFER_FUNC_OPRGB
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE240M: V4L2_XFER_FUNC_SMPTE240M
+ 	 *
+@@ -269,7 +269,7 @@ enum v4l2_xfer_func {
+ 	V4L2_XFER_FUNC_DEFAULT     = 0,
+ 	V4L2_XFER_FUNC_709         = 1,
+ 	V4L2_XFER_FUNC_SRGB        = 2,
+-	V4L2_XFER_FUNC_ADOBERGB    = 3,
++	V4L2_XFER_FUNC_OPRGB       = 3,
+ 	V4L2_XFER_FUNC_SMPTE240M   = 4,
+ 	V4L2_XFER_FUNC_NONE        = 5,
+ 	V4L2_XFER_FUNC_DCI_P3      = 6,
+@@ -281,7 +281,7 @@ enum v4l2_xfer_func {
+  * This depends on the colorspace.
+  */
+ #define V4L2_MAP_XFER_FUNC_DEFAULT(colsp) \
+-	((colsp) == V4L2_COLORSPACE_ADOBERGB ? V4L2_XFER_FUNC_ADOBERGB : \
++	((colsp) == V4L2_COLORSPACE_OPRGB ? V4L2_XFER_FUNC_OPRGB : \
+ 	 ((colsp) == V4L2_COLORSPACE_SMPTE240M ? V4L2_XFER_FUNC_SMPTE240M : \
+ 	  ((colsp) == V4L2_COLORSPACE_DCI_P3 ? V4L2_XFER_FUNC_DCI_P3 : \
+ 	   ((colsp) == V4L2_COLORSPACE_RAW ? V4L2_XFER_FUNC_NONE : \
+@@ -295,7 +295,7 @@ enum v4l2_ycbcr_encoding {
+ 	 *
+ 	 * V4L2_COLORSPACE_SMPTE170M, V4L2_COLORSPACE_470_SYSTEM_M,
+ 	 * V4L2_COLORSPACE_470_SYSTEM_BG, V4L2_COLORSPACE_SRGB,
+-	 * V4L2_COLORSPACE_ADOBERGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
++	 * V4L2_COLORSPACE_OPRGB and V4L2_COLORSPACE_JPEG: V4L2_YCBCR_ENC_601
+ 	 *
+ 	 * V4L2_COLORSPACE_REC709 and V4L2_COLORSPACE_DCI_P3: V4L2_YCBCR_ENC_709
+ 	 *
+@@ -382,6 +382,17 @@ enum v4l2_quantization {
+ 	 (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
+ 	 V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE))
+ 
++/*
++ * Deprecated names for opRGB colorspace (IEC 61966-2-5)
++ *
++ * WARNING: Please don't use these deprecated defines in your code, as
++ * there is a chance we have to remove them in the future.
++ */
++#ifndef __KERNEL__
++#define V4L2_COLORSPACE_ADOBERGB V4L2_COLORSPACE_OPRGB
++#define V4L2_XFER_FUNC_ADOBERGB  V4L2_XFER_FUNC_OPRGB
++#endif
++
+ enum v4l2_priority {
+ 	V4L2_PRIORITY_UNSET       = 0,  /* not initialized */
+ 	V4L2_PRIORITY_BACKGROUND  = 1,
+diff --git a/include/uapi/rdma/ib_user_verbs.h b/include/uapi/rdma/ib_user_verbs.h
+index 4f9991de8e3a..8345ca799ad8 100644
+--- a/include/uapi/rdma/ib_user_verbs.h
++++ b/include/uapi/rdma/ib_user_verbs.h
+@@ -762,10 +762,28 @@ struct ib_uverbs_sge {
+ 	__u32 lkey;
+ };
+ 
++enum ib_uverbs_wr_opcode {
++	IB_UVERBS_WR_RDMA_WRITE = 0,
++	IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1,
++	IB_UVERBS_WR_SEND = 2,
++	IB_UVERBS_WR_SEND_WITH_IMM = 3,
++	IB_UVERBS_WR_RDMA_READ = 4,
++	IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5,
++	IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6,
++	IB_UVERBS_WR_LOCAL_INV = 7,
++	IB_UVERBS_WR_BIND_MW = 8,
++	IB_UVERBS_WR_SEND_WITH_INV = 9,
++	IB_UVERBS_WR_TSO = 10,
++	IB_UVERBS_WR_RDMA_READ_WITH_INV = 11,
++	IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12,
++	IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13,
++	/* Review enum ib_wr_opcode before modifying this */
++};
++
+ struct ib_uverbs_send_wr {
+ 	__aligned_u64 wr_id;
+ 	__u32 num_sge;
+-	__u32 opcode;
++	__u32 opcode;		/* see enum ib_uverbs_wr_opcode */
+ 	__u32 send_flags;
+ 	union {
+ 		__be32 imm_data;
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index c373e887c066..9795d75b09b2 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -13,7 +13,7 @@
+ #include <linux/log2.h>
+ #include <linux/spinlock_types.h>
+ 
+-void foo(void)
++int main(void)
+ {
+ 	/* The enum constants to put into include/generated/bounds.h */
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+@@ -23,4 +23,6 @@ void foo(void)
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
++
++	return 0;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index a31a1ba0f8ea..0f5d2e66cd6b 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -683,6 +683,17 @@ err_put:
+ 	return err;
+ }
+ 
++static void maybe_wait_bpf_programs(struct bpf_map *map)
++{
++	/* Wait for any running BPF programs to complete so that
++	 * userspace, when we return to it, knows that all programs
++	 * that could be running use the new map value.
++	 */
++	if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
++	    map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
++		synchronize_rcu();
++}
++
+ #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
+ 
+ static int map_update_elem(union bpf_attr *attr)
+@@ -769,6 +780,7 @@ static int map_update_elem(union bpf_attr *attr)
+ 	}
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ free_value:
+ 	kfree(value);
+@@ -821,6 +833,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ 	rcu_read_unlock();
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ out:
+ 	kfree(key);
+ err_put:
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b000686fa1a1..d565ec6af97c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -553,7 +553,9 @@ static void __mark_reg_not_init(struct bpf_reg_state *reg);
+  */
+ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
+ {
+-	reg->id = 0;
++	/* Clear id, off, and union(map_ptr, range) */
++	memset(((u8 *)reg) + sizeof(reg->type), 0,
++	       offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
+ 	reg->var_off = tnum_const(imm);
+ 	reg->smin_value = (s64)imm;
+ 	reg->smax_value = (s64)imm;
+@@ -572,7 +574,6 @@ static void __mark_reg_known_zero(struct bpf_reg_state *reg)
+ static void __mark_reg_const_zero(struct bpf_reg_state *reg)
+ {
+ 	__mark_reg_known(reg, 0);
+-	reg->off = 0;
+ 	reg->type = SCALAR_VALUE;
+ }
+ 
+@@ -683,9 +684,12 @@ static void __mark_reg_unbounded(struct bpf_reg_state *reg)
+ /* Mark a register as having a completely unknown (scalar) value. */
+ static void __mark_reg_unknown(struct bpf_reg_state *reg)
+ {
++	/*
++	 * Clear type, id, off, and union(map_ptr, range) and
++	 * padding between 'type' and union
++	 */
++	memset(reg, 0, offsetof(struct bpf_reg_state, var_off));
+ 	reg->type = SCALAR_VALUE;
+-	reg->id = 0;
+-	reg->off = 0;
+ 	reg->var_off = tnum_unknown;
+ 	reg->frameno = 0;
+ 	__mark_reg_unbounded(reg);
+@@ -1726,9 +1730,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ 			else
+ 				mark_reg_known_zero(env, regs,
+ 						    value_regno);
+-			regs[value_regno].id = 0;
+-			regs[value_regno].off = 0;
+-			regs[value_regno].range = 0;
+ 			regs[value_regno].type = reg_type;
+ 		}
+ 
+@@ -2549,7 +2550,6 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
+ 		regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
+ 		/* There is no offset yet applied, variable or fixed */
+ 		mark_reg_known_zero(env, regs, BPF_REG_0);
+-		regs[BPF_REG_0].off = 0;
+ 		/* remember map_ptr, so that check_map_access()
+ 		 * can check 'value_size' boundary of memory access
+ 		 * to map element returned from bpf_map_lookup_elem()
+diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
+index b3c557476a8d..c98501a04742 100644
+--- a/kernel/bpf/xskmap.c
++++ b/kernel/bpf/xskmap.c
+@@ -191,11 +191,8 @@ static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
+ 	sock_hold(sock->sk);
+ 
+ 	old_xs = xchg(&m->xsk_map[i], xs);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	sockfd_put(sock);
+ 	return 0;
+@@ -211,11 +208,8 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
+ 		return -EINVAL;
+ 
+ 	old_xs = xchg(&m->xsk_map[k], NULL);
+-	if (old_xs) {
+-		/* Make sure we've flushed everything. */
+-		synchronize_net();
++	if (old_xs)
+ 		sock_put((struct sock *)old_xs);
+-	}
+ 
+ 	return 0;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 517907b082df..3ec5a37e3068 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2033,6 +2033,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
+ 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+ }
+ 
++/*
++ * Architectures that need SMT-specific errata handling during SMT hotplug
++ * should override this.
++ */
++void __weak arch_smt_update(void) { };
++
+ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ {
+ 	int cpu, ret = 0;
+@@ -2059,8 +2065,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 		 */
+ 		cpuhp_offline_cpu_device(cpu);
+ 	}
+-	if (!ret)
++	if (!ret) {
+ 		cpu_smt_control = ctrlval;
++		arch_smt_update();
++	}
+ 	cpu_maps_update_done();
+ 	return ret;
+ }
+@@ -2071,6 +2079,7 @@ static int cpuhp_smt_enable(void)
+ 
+ 	cpu_maps_update_begin();
+ 	cpu_smt_control = CPU_SMT_ENABLED;
++	arch_smt_update();
+ 	for_each_present_cpu(cpu) {
+ 		/* Skip online CPUs and CPUs on offline nodes */
+ 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
+index d987dcd1bd56..54a33337680f 100644
+--- a/kernel/dma/contiguous.c
++++ b/kernel/dma/contiguous.c
+@@ -49,7 +49,11 @@ static phys_addr_t limit_cmdline;
+ 
+ static int __init early_cma(char *p)
+ {
+-	pr_debug("%s(%s)\n", __func__, p);
++	if (!p) {
++		pr_err("Config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size_cmdline = memparse(p, &p);
+ 	if (*p != '@')
+ 		return 0;
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 9a8b7ba9aa88..c4e31f44a0ff 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -920,6 +920,9 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 
+ 	local_bh_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	local_bh_enable();
+ 	return ret;
+@@ -936,6 +939,9 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
+ 	irqreturn_t ret;
+ 
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	return ret;
+ }
+@@ -1013,8 +1019,6 @@ static int irq_thread(void *data)
+ 		irq_thread_check_affinity(desc, action);
+ 
+ 		action_ret = handler_fn(desc, action);
+-		if (action_ret == IRQ_HANDLED)
+-			atomic_inc(&desc->threads_handled);
+ 		if (action_ret == IRQ_WAKE_THREAD)
+ 			irq_wake_secondary(desc, action);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f3183ad10d96..07f912b765db 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -700,9 +700,10 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
+ }
+ 
+ /* Cancel unoptimizing for reusing */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
+ 	struct optimized_kprobe *op;
++	int ret;
+ 
+ 	BUG_ON(!kprobe_unused(ap));
+ 	/*
+@@ -714,8 +715,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+-	BUG_ON(!kprobe_optready(ap));
++	ret = kprobe_optready(ap);
++	if (ret)
++		return ret;
++
+ 	optimize_kprobe(ap);
++	return 0;
+ }
+ 
+ /* Remove optimized instructions */
+@@ -940,11 +945,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
+ #define kprobe_disarmed(p)			kprobe_disabled(p)
+ #define wait_for_kprobe_optimizer()		do {} while (0)
+ 
+-/* There should be no unused kprobes can be reused without optimization */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
++	/*
++	 * If the optimized kprobe is NOT supported, the aggr kprobe is
++	 * released at the same time that the last aggregated kprobe is
++	 * unregistered.
++	 * Thus there should be no chance to reuse an unused kprobe.
++	 */
+ 	printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
+-	BUG_ON(kprobe_unused(ap));
++	return -EINVAL;
+ }
+ 
+ static void free_aggr_kprobe(struct kprobe *p)
+@@ -1343,9 +1353,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
+ 			goto out;
+ 		}
+ 		init_aggr_kprobe(ap, orig_p);
+-	} else if (kprobe_unused(ap))
++	} else if (kprobe_unused(ap)) {
+ 		/* This probe is going to die. Rescue it */
+-		reuse_unused_kprobe(ap);
++		ret = reuse_unused_kprobe(ap);
++		if (ret)
++			goto out;
++	}
+ 
+ 	if (kprobe_gone(ap)) {
+ 		/*
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 5fa4d3138bf1..aa6ebb799f16 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -4148,7 +4148,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+@@ -4168,7 +4168,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 1d1513215c22..72de8cc5a13e 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1047,7 +1047,12 @@ static void __init log_buf_len_update(unsigned size)
+ /* save requested log_buf_len since it's too early to process it */
+ static int __init log_buf_len_setup(char *str)
+ {
+-	unsigned size = memparse(str, &str);
++	unsigned int size;
++
++	if (!str)
++		return -EINVAL;
++
++	size = memparse(str, &str);
+ 
+ 	log_buf_len_update(size);
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index b27b9509ea89..9e4f550e4797 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4321,7 +4321,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	 * put back on, and if we advance min_vruntime, we'll be placed back
+ 	 * further than we started -- ie. we'll be penalized.
+ 	 */
+-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
++	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
+ 		update_min_vruntime(cfs_rq);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 8d8a940422a8..dce9859f6547 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1009,7 +1009,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 
+ 	result = TRACE_SIGNAL_IGNORED;
+ 	if (!prepare_signal(sig, t,
+-			from_ancestor_ns || (info == SEND_SIG_FORCED)))
++			from_ancestor_ns || (info == SEND_SIG_PRIV) || (info == SEND_SIG_FORCED)))
+ 		goto ret;
+ 
+ 	pending = group ? &t->signal->shared_pending : &t->pending;
+@@ -2804,7 +2804,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, compat_sigset_t __user *, uset,
+ }
+ #endif
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code)
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code)
+ {
+ 	enum siginfo_layout layout = SIL_KILL;
+ 	if ((si_code > SI_USER) && (si_code < SI_KERNEL)) {
+@@ -3417,7 +3417,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
+ }
+ 
+ static int
+-do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
++do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
++		size_t min_ss_size)
+ {
+ 	struct task_struct *t = current;
+ 
+@@ -3447,7 +3448,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
+ 			ss_size = 0;
+ 			ss_sp = NULL;
+ 		} else {
+-			if (unlikely(ss_size < MINSIGSTKSZ))
++			if (unlikely(ss_size < min_ss_size))
+ 				return -ENOMEM;
+ 		}
+ 
+@@ -3465,7 +3466,8 @@ SYSCALL_DEFINE2(sigaltstack,const stack_t __user *,uss, stack_t __user *,uoss)
+ 	if (uss && copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+ 	err = do_sigaltstack(uss ? &new : NULL, uoss ? &old : NULL,
+-			      current_user_stack_pointer());
++			      current_user_stack_pointer(),
++			      MINSIGSTKSZ);
+ 	if (!err && uoss && copy_to_user(uoss, &old, sizeof(stack_t)))
+ 		err = -EFAULT;
+ 	return err;
+@@ -3476,7 +3478,8 @@ int restore_altstack(const stack_t __user *uss)
+ 	stack_t new;
+ 	if (copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+-	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer());
++	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer(),
++			     MINSIGSTKSZ);
+ 	/* squash all but EFAULT for now */
+ 	return 0;
+ }
+@@ -3510,7 +3513,8 @@ static int do_compat_sigaltstack(const compat_stack_t __user *uss_ptr,
+ 		uss.ss_size = uss32.ss_size;
+ 	}
+ 	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss,
+-			     compat_user_stack_pointer());
++			     compat_user_stack_pointer(),
++			     COMPAT_MINSIGSTKSZ);
+ 	if (ret >= 0 && uoss_ptr)  {
+ 		compat_stack_t old;
+ 		memset(&old, 0, sizeof(old));
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 6c78bc2b7fff..b3482eed270c 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -1072,8 +1072,10 @@ static int create_synth_event(int argc, char **argv)
+ 		event = NULL;
+ 		ret = -EEXIST;
+ 		goto out;
+-	} else if (delete_event)
++	} else if (delete_event) {
++		ret = -ENOENT;
+ 		goto out;
++	}
+ 
+ 	if (argc < 2) {
+ 		ret = -EINVAL;
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index e5222b5fb4fe..923414a246e9 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -974,10 +974,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (!new_idmap_permitted(file, ns, cap_setid, &new_map))
+ 		goto out;
+ 
+-	ret = sort_idmaps(&new_map);
+-	if (ret < 0)
+-		goto out;
+-
+ 	ret = -EPERM;
+ 	/* Map the lower ids from the parent user namespace to the
+ 	 * kernel global id space.
+@@ -1004,6 +1000,14 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 		e->lower_first = lower_first;
+ 	}
+ 
++	/*
++	 * If we want to use binary search for lookup, this clones the extent
++	 * array and sorts both copies.
++	 */
++	ret = sort_idmaps(&new_map);
++	if (ret < 0)
++		goto out;
++
+ 	/* Install the map */
+ 	if (new_map.nr_extents <= UID_GID_MAP_MAX_BASE_EXTENTS) {
+ 		memcpy(map->extent, new_map.extent,
+diff --git a/lib/debug_locks.c b/lib/debug_locks.c
+index 96c4c633d95e..124fdf238b3d 100644
+--- a/lib/debug_locks.c
++++ b/lib/debug_locks.c
+@@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
+  */
+ int debug_locks_off(void)
+ {
+-	if (__debug_locks_off()) {
++	if (debug_locks && __debug_locks_off()) {
+ 		if (!debug_locks_silent) {
+ 			console_verbose();
+ 			return 1;
+diff --git a/mm/hmm.c b/mm/hmm.c
+index f9d1d89dec4d..49e3db686348 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 	spin_lock_init(&hmm->lock);
+ 	hmm->mm = mm;
+ 
+-	/*
+-	 * We should only get here if hold the mmap_sem in write mode ie on
+-	 * registration of first mirror through hmm_mirror_register()
+-	 */
+-	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
+-	if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) {
+-		kfree(hmm);
+-		return NULL;
+-	}
+-
+ 	spin_lock(&mm->page_table_lock);
+ 	if (!mm->hmm)
+ 		mm->hmm = hmm;
+@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct mm_struct *mm)
+ 		cleanup = true;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	if (cleanup) {
+-		mmu_notifier_unregister(&hmm->mmu_notifier, mm);
+-		kfree(hmm);
+-	}
++	if (cleanup)
++		goto error;
++
++	/*
++	 * We should only get here if we hold the mmap_sem in write mode, i.e. on
++	 * registration of first mirror through hmm_mirror_register()
++	 */
++	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
++	if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
++		goto error_mm;
+ 
+ 	return mm->hmm;
++
++error_mm:
++	spin_lock(&mm->page_table_lock);
++	if (mm->hmm == hmm)
++		mm->hmm = NULL;
++	spin_unlock(&mm->page_table_lock);
++error:
++	kfree(hmm);
++	return NULL;
+ }
+ 
+ void hmm_mm_destroy(struct mm_struct *mm)
+@@ -275,12 +280,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
+ 	if (!should_unregister || mm == NULL)
+ 		return;
+ 
++	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
++
+ 	spin_lock(&mm->page_table_lock);
+ 	if (mm->hmm == hmm)
+ 		mm->hmm = NULL;
+ 	spin_unlock(&mm->page_table_lock);
+ 
+-	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
+ 	kfree(hmm);
+ }
+ EXPORT_SYMBOL(hmm_mirror_unregister);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index f469315a6a0f..5b38fbef9441 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3678,6 +3678,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
+ 		return err;
+ 	ClearPagePrivate(page);
+ 
++	/*
++	 * set page dirty so that it will not be removed from cache/file
++	 * by non-hugetlbfs specific code paths.
++	 */
++	set_page_dirty(page);
++
+ 	spin_lock(&inode->i_lock);
+ 	inode->i_blocks += blocks_per_huge_page(h);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index ae3c2a35d61b..11df03e71288 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
+ 			if (!is_swap_pte(*pvmw->pte))
+ 				return false;
+ 		} else {
+-			if (!pte_present(*pvmw->pte))
++			/*
++			 * We get here when we are trying to unmap a private
++			 * device page from the process address space. Such a
++			 * page is not CPU accessible and thus is mapped as
++			 * a special swap entry; nonetheless it still does
++			 * count as a valid regular mapping for the page (and
++			 * is accounted as such in page maps count).
++			 *
++			 * So handle this special case as if it were a normal
++			 * page mapping, i.e. lock the CPU page table and return
++			 * true.
++			 *
++			 * For more details on device private memory see HMM
++			 * (include/linux/hmm.h or mm/hmm.c).
++			 */
++			if (is_swap_pte(*pvmw->pte)) {
++				swp_entry_t entry;
++
++				/* Handle un-addressable ZONE_DEVICE memory */
++				entry = pte_to_swp_entry(*pvmw->pte);
++				if (!is_device_private_entry(entry))
++					return false;
++			} else if (!pte_present(*pvmw->pte))
+ 				return false;
+ 		}
+ 	}
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 5e4f04004a49..7bf833598615 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -106,6 +106,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		iterate_fd(p->files, 0, update_classid_sock,
+ 			   (void *)(unsigned long)cs->classid);
+ 		task_unlock(p);
++		cond_resched();
+ 	}
+ 	css_task_iter_end(&it);
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 82178cc69c96..777fa3b7fb13 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const struct cipso_v4_doi *doi_def,
+  *
+  * Description:
+  * Parse the packet's IP header looking for a CIPSO option.  Returns a pointer
+- * to the start of the CIPSO option on success, NULL if one if not found.
++ * to the start of the CIPSO option on success, NULL if one is not found.
+  *
+  */
+ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 	int optlen;
+ 	int taglen;
+ 
+-	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) {
++	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) {
+ 		switch (optptr[0]) {
+-		case IPOPT_CIPSO:
+-			return optptr;
+ 		case IPOPT_END:
+ 			return NULL;
+ 		case IPOPT_NOOP:
+@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 		default:
+ 			taglen = optptr[1];
+ 		}
++		if (!taglen || taglen > optlen)
++			return NULL;
++		if (optptr[0] == IPOPT_CIPSO)
++			return optptr;
++
+ 		optlen -= taglen;
+ 		optptr += taglen;
+ 	}
+diff --git a/net/netfilter/xt_nat.c b/net/netfilter/xt_nat.c
+index 8af9707f8789..ac91170fc8c8 100644
+--- a/net/netfilter/xt_nat.c
++++ b/net/netfilter/xt_nat.c
+@@ -216,6 +216,8 @@ static struct xt_target xt_nat_target_reg[] __read_mostly = {
+ 	{
+ 		.name		= "DNAT",
+ 		.revision	= 2,
++		.checkentry	= xt_nat_checkentry,
++		.destroy	= xt_nat_destroy,
+ 		.target		= xt_dnat_target_v2,
+ 		.targetsize	= sizeof(struct nf_nat_range2),
+ 		.table		= "nat",
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 57f71765febe..ce852f8c1d27 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1306,7 +1306,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 5185efb9027b..83ccd0221c98 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -989,7 +989,7 @@ static void call_xpt_users(struct svc_xprt *xprt)
+ 	spin_lock(&xprt->xpt_lock);
+ 	while (!list_empty(&xprt->xpt_users)) {
+ 		u = list_first_entry(&xprt->xpt_users, struct svc_xpt_user, list);
+-		list_del(&u->list);
++		list_del_init(&u->list);
+ 		u->callback(u);
+ 	}
+ 	spin_unlock(&xprt->xpt_lock);
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index a68180090554..b9827665ff35 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -248,6 +248,7 @@ static void
+ xprt_rdma_bc_close(struct rpc_xprt *xprt)
+ {
+ 	dprintk("svcrdma: %s: xprt %p\n", __func__, xprt);
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ static void
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 143ce2579ba9..98cbc7b060ba 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -468,6 +468,12 @@ xprt_rdma_close(struct rpc_xprt *xprt)
+ 		xprt->reestablish_timeout = 0;
+ 	xprt_disconnect_done(xprt);
+ 	rpcrdma_ep_disconnect(ep, ia);
++
++	/* Prepare @xprt for the next connection by reinitializing
++	 * its credit grant to one (see RFC 8166, Section 3.3.3).
++	 */
++	r_xprt->rx_buf.rb_credits = 1;
++	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+ /**
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 4e937cd7c17d..661504042d30 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -744,6 +744,8 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
+ 	sk->sk_destruct = xsk_destruct;
+ 	sk_refcnt_debug_inc(sk);
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
++
+ 	xs = xdp_sk(sk);
+ 	mutex_init(&xs->mutex);
+ 	spin_lock_init(&xs->tx_completion_lock);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 526e6814ed4b..1d2e0a90c0ca 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -625,9 +625,9 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 				break;
+ 		}
+ 		if (newpos)
+-			hlist_add_behind(&policy->bydst, newpos);
++			hlist_add_behind_rcu(&policy->bydst, newpos);
+ 		else
+-			hlist_add_head(&policy->bydst, chain);
++			hlist_add_head_rcu(&policy->bydst, chain);
+ 	}
+ 
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+@@ -766,9 +766,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ 			break;
+ 	}
+ 	if (newpos)
+-		hlist_add_behind(&policy->bydst, newpos);
++		hlist_add_behind_rcu(&policy->bydst, newpos);
+ 	else
+-		hlist_add_head(&policy->bydst, chain);
++		hlist_add_head_rcu(&policy->bydst, chain);
+ 	__xfrm_policy_link(policy, dir);
+ 
+ 	/* After previous checking, family can either be AF_INET or AF_INET6 */
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index ae9d5c766a3c..cfb8cc3b975e 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -42,14 +42,14 @@ static int __init default_canonical_fmt_setup(char *str)
+ __setup("ima_canonical_fmt", default_canonical_fmt_setup);
+ 
+ static int valid_policy = 1;
+-#define TMPBUFLEN 12
++
+ static ssize_t ima_show_htable_value(char __user *buf, size_t count,
+ 				     loff_t *ppos, atomic_long_t *val)
+ {
+-	char tmpbuf[TMPBUFLEN];
++	char tmpbuf[32];	/* greater than largest 'long' string value */
+ 	ssize_t len;
+ 
+-	len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val));
++	len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val));
+ 	return simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
+ }
+ 
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 2b5ee5fbd652..4680a217d0fa 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -1509,6 +1509,11 @@ static int selinux_genfs_get_sid(struct dentry *dentry,
+ 		}
+ 		rc = security_genfs_sid(&selinux_state, sb->s_type->name,
+ 					path, tclass, sid);
++		if (rc == -ENOENT) {
++			/* No match in policy, mark as unlabeled. */
++			*sid = SECINITSID_UNLABELED;
++			rc = 0;
++		}
+ 	}
+ 	free_page((unsigned long)buffer);
+ 	return rc;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8b6cd5a79bfa..a81d815c81f3 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -420,6 +420,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	struct smk_audit_info ad, *saip = NULL;
+ 	struct task_smack *tsp;
+ 	struct smack_known *tracer_known;
++	const struct cred *tracercred;
+ 
+ 	if ((mode & PTRACE_MODE_NOAUDIT) == 0) {
+ 		smk_ad_init(&ad, func, LSM_AUDIT_DATA_TASK);
+@@ -428,7 +429,8 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 	}
+ 
+ 	rcu_read_lock();
+-	tsp = __task_cred(tracer)->security;
++	tracercred = __task_cred(tracer);
++	tsp = tracercred->security;
+ 	tracer_known = smk_of_task(tsp);
+ 
+ 	if ((mode & PTRACE_MODE_ATTACH) &&
+@@ -438,7 +440,7 @@ static int smk_ptrace_rule_check(struct task_struct *tracer,
+ 			rc = 0;
+ 		else if (smack_ptrace_rule == SMACK_PTRACE_DRACONIAN)
+ 			rc = -EACCES;
+-		else if (capable(CAP_SYS_PTRACE))
++		else if (smack_privileged_cred(CAP_SYS_PTRACE, tracercred))
+ 			rc = 0;
+ 		else
+ 			rc = -EACCES;
+@@ -1840,6 +1842,7 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ {
+ 	struct smack_known *skp;
+ 	struct smack_known *tkp = smk_of_task(tsk->cred->security);
++	const struct cred *tcred;
+ 	struct file *file;
+ 	int rc;
+ 	struct smk_audit_info ad;
+@@ -1853,8 +1856,12 @@ static int smack_file_send_sigiotask(struct task_struct *tsk,
+ 	skp = file->f_security;
+ 	rc = smk_access(skp, tkp, MAY_DELIVER, NULL);
+ 	rc = smk_bu_note("sigiotask", skp, tkp, MAY_DELIVER, rc);
+-	if (rc != 0 && has_capability(tsk, CAP_MAC_OVERRIDE))
++
++	rcu_read_lock();
++	tcred = __task_cred(tsk);
++	if (rc != 0 && smack_privileged_cred(CAP_MAC_OVERRIDE, tcred))
+ 		rc = 0;
++	rcu_read_unlock();
+ 
+ 	smk_ad_init(&ad, __func__, LSM_AUDIT_DATA_TASK);
+ 	smk_ad_setfield_u_tsk(&ad, tsk);
+diff --git a/sound/pci/ca0106/ca0106.h b/sound/pci/ca0106/ca0106.h
+index 04402c14cb23..9847b669cf3c 100644
+--- a/sound/pci/ca0106/ca0106.h
++++ b/sound/pci/ca0106/ca0106.h
+@@ -582,7 +582,7 @@
+ #define SPI_PL_BIT_R_R		(2<<7)	/* right channel = right */
+ #define SPI_PL_BIT_R_C		(3<<7)	/* right channel = (L+R)/2 */
+ #define SPI_IZD_REG		2
+-#define SPI_IZD_BIT		(1<<4)	/* infinite zero detect */
++#define SPI_IZD_BIT		(0<<4)	/* infinite zero detect */
+ 
+ #define SPI_FMT_REG		3
+ #define SPI_FMT_BIT_RJ		(0<<0)	/* right justified mode */
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index a68e75b00ea3..53c3cd28bc99 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -160,6 +160,7 @@ struct azx {
+ 	unsigned int msi:1;
+ 	unsigned int probing:1; /* codec probing phase */
+ 	unsigned int snoop:1;
++	unsigned int uc_buffer:1; /* non-cached pages for stream buffers */
+ 	unsigned int align_buffer_size:1;
+ 	unsigned int region_requested:1;
+ 	unsigned int disabled:1; /* disabled by vga_switcheroo */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 28dc5e124995..6f6703e53a05 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -410,7 +410,7 @@ static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool
+ #ifdef CONFIG_SND_DMA_SGBUF
+ 	if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
+ 		struct snd_sg_buf *sgbuf = dmab->private_data;
+-		if (chip->driver_type == AZX_DRIVER_CMEDIA)
++		if (!chip->uc_buffer)
+ 			return; /* deal with only CORB/RIRB buffers */
+ 		if (on)
+ 			set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
+@@ -1636,6 +1636,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		dev_info(chip->card->dev, "Force to %s mode by module option\n",
+ 			 snoop ? "snoop" : "non-snoop");
+ 		chip->snoop = snoop;
++		chip->uc_buffer = !snoop;
+ 		return;
+ 	}
+ 
+@@ -1656,8 +1657,12 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		snoop = false;
+ 
+ 	chip->snoop = snoop;
+-	if (!snoop)
++	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
++		/* C-Media requires non-cached pages only for CORB/RIRB */
++		if (chip->driver_type != AZX_DRIVER_CMEDIA)
++			chip->uc_buffer = true;
++	}
+ }
+ 
+ static void azx_probe_work(struct work_struct *work)
+@@ -2096,7 +2101,7 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+ #ifdef CONFIG_X86
+ 	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
+ 	struct azx *chip = apcm->chip;
+-	if (!azx_snoop(chip) && chip->driver_type != AZX_DRIVER_CMEDIA)
++	if (chip->uc_buffer)
+ 		area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
+ #endif
+ }
+@@ -2215,8 +2220,12 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1581607 */
+ 	SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x1028, 0x0497, "Dell Precision T3600", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	/* Note the P55A-UD3 and Z87-D3HP share the subsys id for the HDA dev */
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte P55A-UD3 / Z87-D3HP", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++	SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
+ 	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 1a8a2d440fbd..7d6c3cebb0e3 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -980,6 +980,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
++	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 08b6369f930b..23dd4bb026d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6799,6 +6799,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1a, 0x02a11040},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++		{0x14, 0x90170110},
++		{0x19, 0x02a11030},
++		{0x1a, 0x02a11040},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x02a11020},
+@@ -7690,6 +7696,8 @@ enum {
+ 	ALC662_FIXUP_ASUS_Nx50,
+ 	ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	ALC668_FIXUP_ASUS_Nx51,
++	ALC668_FIXUP_MIC_COEF,
++	ALC668_FIXUP_ASUS_G751,
+ 	ALC891_FIXUP_HEADSET_MODE,
+ 	ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
+ 	ALC662_FIXUP_ACER_VERITON,
+@@ -7959,6 +7967,23 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	},
++	[ALC668_FIXUP_MIC_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0xc3 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4000 },
++			{}
++		},
++	},
++	[ALC668_FIXUP_ASUS_G751] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x0421101f }, /* HP */
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_COEF
++	},
+ 	[ALC891_FIXUP_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode,
+@@ -8032,6 +8057,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
++	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+diff --git a/sound/soc/codecs/sta32x.c b/sound/soc/codecs/sta32x.c
+index d5035f2f2b2b..ce508b4cc85c 100644
+--- a/sound/soc/codecs/sta32x.c
++++ b/sound/soc/codecs/sta32x.c
+@@ -879,6 +879,9 @@ static int sta32x_probe(struct snd_soc_component *component)
+ 	struct sta32x_priv *sta32x = snd_soc_component_get_drvdata(component);
+ 	struct sta32x_platform_data *pdata = sta32x->pdata;
+ 	int i, ret = 0, thermal = 0;
++
++	sta32x->component = component;
++
+ 	ret = regulator_bulk_enable(ARRAY_SIZE(sta32x->supplies),
+ 				    sta32x->supplies);
+ 	if (ret != 0) {
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index fcdc716754b6..bde2effde861 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -2458,6 +2458,7 @@ static int skl_tplg_get_token(struct device *dev,
+ 
+ 	case SKL_TKN_U8_CORE_ID:
+ 		mconfig->core_id = tkn_elem->value;
++		break;
+ 
+ 	case SKL_TKN_U8_MOD_TYPE:
+ 		mconfig->m_type = tkn_elem->value;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 67b042738ed7..986151732d68 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -831,7 +831,7 @@ ifndef NO_JVMTI
+     JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
+   else
+     ifneq (,$(wildcard /usr/sbin/alternatives))
+-      JDIR=$(shell alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
++      JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
+     endif
+   endif
+   ifndef JDIR
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index c04dc7b53797..82a3c8be19ee 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -981,6 +981,7 @@ int cmd_report(int argc, const char **argv)
+ 			.id_index	 = perf_event__process_id_index,
+ 			.auxtrace_info	 = perf_event__process_auxtrace_info,
+ 			.auxtrace	 = perf_event__process_auxtrace,
++			.event_update	 = perf_event__process_event_update,
+ 			.feature	 = process_feature_event,
+ 			.ordered_events	 = true,
+ 			.ordering_requires_timestamps = true,
+diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+index d40498f2cb1e..635c09fda1d9 100644
+--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+@@ -188,7 +188,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -199,7 +199,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -210,7 +210,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -221,7 +221,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -232,7 +232,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -243,7 +243,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -254,7 +254,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -265,7 +265,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+index 16034bfd06dd..8755693d86c6 100644
+--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+@@ -187,7 +187,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -198,7 +198,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -209,7 +209,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -220,7 +220,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -231,7 +231,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -242,7 +242,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -253,7 +253,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -264,7 +264,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 3013ac8f83d0..cab7b0aea6ea 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -48,7 +48,7 @@ trace_libc_inet_pton_backtrace() {
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+-		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		echo ".*(\+0x[[:xdigit:]]+|\[unknown\])[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
+ 	esac
+ 
+diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
+index 0c8ecf0c78a4..6f3db78efe39 100644
+--- a/tools/perf/util/event.c
++++ b/tools/perf/util/event.c
+@@ -1074,6 +1074,7 @@ void *cpu_map_data__alloc(struct cpu_map *map, size_t *size, u16 *type, int *max
+ 	}
+ 
+ 	*size += sizeof(struct cpu_map_data);
++	*size = PERF_ALIGN(*size, sizeof(u64));
+ 	return zalloc(*size);
+ }
+ 
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 6324afba8fdd..86ad1389ff5a 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1078,6 +1078,9 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		attr->exclude_user   = 1;
+ 	}
+ 
++	if (evsel->own_cpus)
++		evsel->attr.read_format |= PERF_FORMAT_ID;
++
+ 	/*
+ 	 * Apply event specific term settings,
+ 	 * it overloads any global configuration.
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 3ba6a1742f91..02580f3ded1a 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -936,13 +936,14 @@ static void pmu_format_value(unsigned long *format, __u64 value, __u64 *v,
+ 
+ static __u64 pmu_format_max_value(const unsigned long *format)
+ {
+-	__u64 w = 0;
+-	int fbit;
+-
+-	for_each_set_bit(fbit, format, PERF_PMU_FORMAT_BITS)
+-		w |= (1ULL << fbit);
++	int w;
+ 
+-	return w;
++	w = bitmap_weight(format, PERF_PMU_FORMAT_BITS);
++	if (!w)
++		return 0;
++	if (w < 64)
++		return (1ULL << w) - 1;
++	return -1;
+ }
+ 
+ /*
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index 09d6746e6ec8..e767c4a9d4d2 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -85,6 +85,9 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ 	struct symbol *inline_sym;
+ 	char *demangled = NULL;
+ 
++	if (!funcname)
++		funcname = "??";
++
+ 	if (dso) {
+ 		demangled = dso__demangle_sym(dso, 0, funcname);
+ 		if (demangled)
+diff --git a/tools/perf/util/strbuf.c b/tools/perf/util/strbuf.c
+index 3d1cf5bf7f18..9005fbe0780e 100644
+--- a/tools/perf/util/strbuf.c
++++ b/tools/perf/util/strbuf.c
+@@ -98,19 +98,25 @@ static int strbuf_addv(struct strbuf *sb, const char *fmt, va_list ap)
+ 
+ 	va_copy(ap_saved, ap);
+ 	len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap);
+-	if (len < 0)
++	if (len < 0) {
++		va_end(ap_saved);
+ 		return len;
++	}
+ 	if (len > strbuf_avail(sb)) {
+ 		ret = strbuf_grow(sb, len);
+-		if (ret)
++		if (ret) {
++			va_end(ap_saved);
+ 			return ret;
++		}
+ 		len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
+ 		va_end(ap_saved);
+ 		if (len > strbuf_avail(sb)) {
+ 			pr_debug("this should not happen, your vsnprintf is broken");
++			va_end(ap_saved);
+ 			return -EINVAL;
+ 		}
+ 	}
++	va_end(ap_saved);
+ 	return strbuf_setlen(sb, sb->len + len);
+ }
+ 
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index 7b0ca7cbb7de..8ad8e755127b 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -531,12 +531,14 @@ struct tracing_data *tracing_data_get(struct list_head *pattrs,
+ 			 "/tmp/perf-XXXXXX");
+ 		if (!mkstemp(tdata->temp_file)) {
+ 			pr_debug("Can't make temp file");
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+ 		temp_fd = open(tdata->temp_file, O_RDWR);
+ 		if (temp_fd < 0) {
+ 			pr_debug("Can't read '%s'", tdata->temp_file);
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
+index 40b425949aa3..2d50e4384c72 100644
+--- a/tools/perf/util/trace-event-read.c
++++ b/tools/perf/util/trace-event-read.c
+@@ -349,9 +349,12 @@ static int read_event_files(struct pevent *pevent)
+ 		for (x=0; x < count; x++) {
+ 			size = read8(pevent);
+ 			ret = read_event_file(pevent, sys, size);
+-			if (ret)
++			if (ret) {
++				free(sys);
+ 				return ret;
++			}
+ 		}
++		free(sys);
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/power/cpupower/utils/cpufreq-info.c b/tools/power/cpupower/utils/cpufreq-info.c
+index df43cd45d810..ccd08dd00996 100644
+--- a/tools/power/cpupower/utils/cpufreq-info.c
++++ b/tools/power/cpupower/utils/cpufreq-info.c
+@@ -200,6 +200,8 @@ static int get_boost_mode(unsigned int cpu)
+ 		printf(_("    Boost States: %d\n"), b_states);
+ 		printf(_("    Total States: %d\n"), pstate_no);
+ 		for (i = 0; i < pstate_no; i++) {
++			if (!pstates[i])
++				continue;
+ 			if (i < b_states)
+ 				printf(_("    Pstate-Pb%d: %luMHz (boost state)"
+ 					 "\n"), i, pstates[i]);
+diff --git a/tools/power/cpupower/utils/helpers/amd.c b/tools/power/cpupower/utils/helpers/amd.c
+index bb41cdd0df6b..9607ada5b29a 100644
+--- a/tools/power/cpupower/utils/helpers/amd.c
++++ b/tools/power/cpupower/utils/helpers/amd.c
+@@ -33,7 +33,7 @@ union msr_pstate {
+ 		unsigned vid:8;
+ 		unsigned iddval:8;
+ 		unsigned idddiv:2;
+-		unsigned res1:30;
++		unsigned res1:31;
+ 		unsigned en:1;
+ 	} fam17h_bits;
+ 	unsigned long long val;
+@@ -119,6 +119,11 @@ int decode_pstates(unsigned int cpu, unsigned int cpu_family,
+ 		}
+ 		if (read_msr(cpu, MSR_AMD_PSTATE + i, &pstate.val))
+ 			return -1;
++		if ((cpu_family == 0x17) && (!pstate.fam17h_bits.en))
++			continue;
++		else if (!pstate.bits.en)
++			continue;
++
+ 		pstates[i] = get_cof(cpu_family, pstate);
+ 	}
+ 	*no = i;
+diff --git a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+index 1893d0f59ad7..059b7e81b922 100755
+--- a/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
++++ b/tools/testing/selftests/drivers/usb/usbip/usbip_test.sh
+@@ -143,6 +143,10 @@ echo "Import devices from localhost - should work"
+ src/usbip attach -r localhost -b $busid;
+ echo "=============================================================="
+ 
++# Wait for sysfs file to be updated. Without this sleep, usbip port
++# shows no imported devices.
++sleep 3;
++
+ echo "List imported devices - expect to see imported devices";
+ src/usbip port;
+ echo "=============================================================="
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+index cef11377dcbd..c604438df13b 100644
+--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+@@ -35,18 +35,18 @@ fi
+ 
+ reset_trigger
+ 
+-echo "Test create synthetic event with an error"
+-echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
++echo "Test remove synthetic event"
++echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' >> synthetic_events
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Created wakeup_latency synthetic event with an invalid format"
++    fail "Failed to delete wakeup_latency synthetic event"
+ fi
+ 
+ reset_trigger
+ 
+-echo "Test remove synthetic event"
+-echo '!wakeup_latency  u64 lat pid_t pid char comm[16]' > synthetic_events
++echo "Test create synthetic event with an error"
++echo 'wakeup_latency  u64 lat pid_t pid char' > synthetic_events > /dev/null
+ if [ -d events/synthetic/wakeup_latency ]; then
+-    fail "Failed to delete wakeup_latency synthetic event"
++    fail "Created wakeup_latency synthetic event with an invalid format"
+ fi
+ 
+ do_reset
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+new file mode 100644
+index 000000000000..88e6c3f43006
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+@@ -0,0 +1,80 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++# description: event trigger - test synthetic_events syntax parser
++
++do_reset() {
++    reset_trigger
++    echo > set_event
++    clear_trace
++}
++
++fail() { #msg
++    do_reset
++    echo $1
++    exit_fail
++}
++
++if [ ! -f set_event ]; then
++    echo "event tracing is not supported"
++    exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++    echo "synthetic event is not supported"
++    exit_unsupported
++fi
++
++reset_tracer
++do_reset
++
++echo "Test synthetic_events syntax parser"
++
++echo > synthetic_events
++
++# synthetic event must have a field
++! echo "myevent" >> synthetic_events
++echo "myevent u64 var1" >> synthetic_events
++
++# synthetic event must be found in synthetic_events
++grep "myevent[[:space:]]u64 var1" synthetic_events
++
++# it is not possible to add same name event
++! echo "myevent u64 var2" >> synthetic_events
++
++# Non-append open will cleanup all events and add new one
++echo "myevent u64 var2" > synthetic_events
++
++# multiple fields with different spaces
++echo "myevent u64 var1; u64 var2;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ; u64 var2 ;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ;u64 var2" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++
++# test field types
++echo "myevent u32 var" > synthetic_events
++echo "myevent u16 var" > synthetic_events
++echo "myevent u8 var" > synthetic_events
++echo "myevent s64 var" > synthetic_events
++echo "myevent s32 var" > synthetic_events
++echo "myevent s16 var" > synthetic_events
++echo "myevent s8 var" > synthetic_events
++
++echo "myevent char var" > synthetic_events
++echo "myevent int var" > synthetic_events
++echo "myevent long var" > synthetic_events
++echo "myevent pid_t var" > synthetic_events
++
++echo "myevent unsigned char var" > synthetic_events
++echo "myevent unsigned int var" > synthetic_events
++echo "myevent unsigned long var" > synthetic_events
++grep "myevent[[:space:]]unsigned long var" synthetic_events
++
++# test string type
++echo "myevent char var[10]" > synthetic_events
++grep "myevent[[:space:]]char\[10\] var" synthetic_events
++
++do_reset
++
++exit 0
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index cad14cd0ea92..b5277106df1f 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -437,14 +437,19 @@ void enable_fastopen(void)
+ 	}
+ }
+ 
+-static struct rlimit rlim_old, rlim_new;
++static struct rlimit rlim_old;
+ 
+ static  __attribute__((constructor)) void main_ctor(void)
+ {
+ 	getrlimit(RLIMIT_MEMLOCK, &rlim_old);
+-	rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
+-	rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
+-	setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++
++	if (rlim_old.rlim_cur != RLIM_INFINITY) {
++		struct rlimit rlim_new;
++
++		rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
++		rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
++		setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++	}
+ }
+ 
+ static __attribute__((destructor)) void main_dtor(void)
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+index 327fa943c7f3..dbdffa2e2c82 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+@@ -67,8 +67,8 @@ trans:
+ 		"3: ;"
+ 		: [res] "=r" (result), [texasr] "=r" (texasr)
+ 		: [gpr_1]"i"(GPR_1), [gpr_2]"i"(GPR_2), [gpr_4]"i"(GPR_4),
+-		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "r" (&a),
+-		[flt_2] "r" (&b), [flt_4] "r" (&d)
++		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "b" (&a),
++		[flt_4] "b" (&d)
+ 		: "memory", "r5", "r6", "r7",
+ 		"r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ 		"r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 04e554cae3a2..f8c2b9e7c19c 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -1244,8 +1244,6 @@ static void cpu_init_hyp_mode(void *dummy)
+ 
+ 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
+ 	__cpu_init_stage2();
+-
+-	kvm_arm_init_debug();
+ }
+ 
+ static void cpu_hyp_reset(void)
+@@ -1269,6 +1267,8 @@ static void cpu_hyp_reinit(void)
+ 		cpu_init_hyp_mode(NULL);
+ 	}
+ 
++	kvm_arm_init_debug();
++
+ 	if (vgic_present)
+ 		kvm_vgic_init_cpu_hardware();
+ }
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index fd8c88463928..fbba603caf1b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1201,8 +1201,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
+ {
+ 	kvm_pfn_t pfn = *pfnp;
+ 	gfn_t gfn = *ipap >> PAGE_SHIFT;
++	struct page *page = pfn_to_page(pfn);
+ 
+-	if (PageTransCompoundMap(pfn_to_page(pfn))) {
++	/*
++	 * PageTransCompoungMap() returns true for THP and
++	 * hugetlbfs. Make sure the adjustment is done only for THP
++	 * pages.
++	 */
++	if (!PageHuge(page) && PageTransCompoundMap(page)) {
+ 		unsigned long mask;
+ 		/*
+ 		 * The address we faulted on is backed by a transparent huge


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-11  1:51 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-11  1:51 UTC (permalink / raw
  To: gentoo-commits

commit:     740c5ef73a8964595268ea9d809069a05c66391f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 11 01:51:36 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Nov 11 01:51:36 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=740c5ef7

net: sched: Remove TCA_OPTIONS from policy

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                      |  4 ++++
 1800_TCA-OPTIONS-sched-fix.patch | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/0000_README b/0000_README
index 6774045..bdc7ee9 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1800_TCA-OPTIONS-sched-fix.patch
+From:   https://git.kernel.org
+Desc:   net: sched: Remove TCA_OPTIONS from policy
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1800_TCA-OPTIONS-sched-fix.patch b/1800_TCA-OPTIONS-sched-fix.patch
new file mode 100644
index 0000000..f960fac
--- /dev/null
+++ b/1800_TCA-OPTIONS-sched-fix.patch
@@ -0,0 +1,35 @@
+From e72bde6b66299602087c8c2350d36a525e75d06e Mon Sep 17 00:00:00 2001
+From: David Ahern <dsahern@gmail.com>
+Date: Wed, 24 Oct 2018 08:32:49 -0700
+Subject: net: sched: Remove TCA_OPTIONS from policy
+
+Marco reported an error with hfsc:
+root@Calimero:~# tc qdisc add dev eth0 root handle 1:0 hfsc default 1
+Error: Attribute failed policy validation.
+
+Apparently a few implementations pass TCA_OPTIONS as a binary instead
+of nested attribute, so drop TCA_OPTIONS from the policy.
+
+Fixes: 8b4c3cdd9dd8 ("net: sched: Add policy validation for tc attributes")
+Reported-by: Marco Berizzi <pupilla@libero.it>
+Signed-off-by: David Ahern <dsahern@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ net/sched/sch_api.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 022bca98bde6..ca3b0f46de53 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1320,7 +1320,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+-- 
+cgit 1.2-0.3.lf.el7
+


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-10 21:33 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-11-10 21:33 UTC (permalink / raw
  To: gentoo-commits

commit:     59ff40a345fb3d3018447b6d6d982f63d9158d9f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 10 21:33:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov 10 21:33:13 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59ff40a3

Linux patch 4.18.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1017_linux-4.18.18.patch | 1206 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1210 insertions(+)

diff --git a/0000_README b/0000_README
index fcd301e..6774045 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-4.18.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.17
 
+Patch:  1017_linux-4.18.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-4.18.18.patch b/1017_linux-4.18.18.patch
new file mode 100644
index 0000000..093fbfc
--- /dev/null
+++ b/1017_linux-4.18.18.patch
@@ -0,0 +1,1206 @@
+diff --git a/Makefile b/Makefile
+index c051db0ca5a0..7b35c1ec0427 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
+index a38bf5a1e37a..69dcdf195b61 100644
+--- a/arch/x86/include/asm/fpu/internal.h
++++ b/arch/x86/include/asm/fpu/internal.h
+@@ -528,7 +528,7 @@ static inline void fpregs_activate(struct fpu *fpu)
+ static inline void
+ switch_fpu_prepare(struct fpu *old_fpu, int cpu)
+ {
+-	if (old_fpu->initialized) {
++	if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
+ 		if (!copy_fpregs_to_fpstate(old_fpu))
+ 			old_fpu->last_cpu = -1;
+ 		else
+diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
+index a06b07399d17..6abf3af96fc8 100644
+--- a/arch/x86/include/asm/percpu.h
++++ b/arch/x86/include/asm/percpu.h
+@@ -185,22 +185,22 @@ do {									\
+ 	typeof(var) pfo_ret__;				\
+ 	switch (sizeof(var)) {				\
+ 	case 1:						\
+-		asm(op "b "__percpu_arg(1)",%0"		\
++		asm volatile(op "b "__percpu_arg(1)",%0"\
+ 		    : "=q" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 2:						\
+-		asm(op "w "__percpu_arg(1)",%0"		\
++		asm volatile(op "w "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 4:						\
+-		asm(op "l "__percpu_arg(1)",%0"		\
++		asm volatile(op "l "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+ 	case 8:						\
+-		asm(op "q "__percpu_arg(1)",%0"		\
++		asm volatile(op "q "__percpu_arg(1)",%0"\
+ 		    : "=r" (pfo_ret__)			\
+ 		    : "m" (var));			\
+ 		break;					\
+diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
+index 661583662430..71c0b01d93b1 100644
+--- a/arch/x86/kernel/pci-swiotlb.c
++++ b/arch/x86/kernel/pci-swiotlb.c
+@@ -42,10 +42,8 @@ IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
+ int __init pci_swiotlb_detect_4gb(void)
+ {
+ 	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
+-#ifdef CONFIG_X86_64
+ 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
+ 		swiotlb = 1;
+-#endif
+ 
+ 	/*
+ 	 * If SME is active then swiotlb will be set to 1 so that bounce
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 74b4472ba0a6..f32472acf66c 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1258,7 +1258,7 @@ void __init setup_arch(char **cmdline_p)
+ 	x86_init.hyper.guest_late_init();
+ 
+ 	e820__reserve_resources();
+-	e820__register_nosave_regions(max_low_pfn);
++	e820__register_nosave_regions(max_pfn);
+ 
+ 	x86_init.resources.reserve_resources();
+ 
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index be01328eb755..fddaefc51fb6 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -25,7 +25,7 @@
+ #include <asm/time.h>
+ 
+ #ifdef CONFIG_X86_64
+-__visible volatile unsigned long jiffies __cacheline_aligned = INITIAL_JIFFIES;
++__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
+ #endif
+ 
+ unsigned long profile_pc(struct pt_regs *regs)
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index a10481656d82..2f4af9598f62 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -60,7 +60,7 @@ struct cyc2ns {
+ 
+ static DEFINE_PER_CPU_ALIGNED(struct cyc2ns, cyc2ns);
+ 
+-void cyc2ns_read_begin(struct cyc2ns_data *data)
++void __always_inline cyc2ns_read_begin(struct cyc2ns_data *data)
+ {
+ 	int seq, idx;
+ 
+@@ -77,7 +77,7 @@ void cyc2ns_read_begin(struct cyc2ns_data *data)
+ 	} while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence)));
+ }
+ 
+-void cyc2ns_read_end(void)
++void __always_inline cyc2ns_read_end(void)
+ {
+ 	preempt_enable_notrace();
+ }
+@@ -123,7 +123,7 @@ static void __init cyc2ns_init(int cpu)
+ 	seqcount_init(&c2n->seq);
+ }
+ 
+-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
++static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
+ {
+ 	struct cyc2ns_data data;
+ 	unsigned long long ns;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+index ffa5dac221e4..129ebd2588fd 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
++++ b/drivers/clk/sunxi-ng/ccu-sun4i-a10.c
+@@ -1434,8 +1434,16 @@ static void __init sun4i_ccu_init(struct device_node *node,
+ 		return;
+ 	}
+ 
+-	/* Force the PLL-Audio-1x divider to 1 */
+ 	val = readl(reg + SUN4I_PLL_AUDIO_REG);
++
++	/*
++	 * Force VCO and PLL bias current to lowest setting. Higher
++	 * settings interfere with sigma-delta modulation and result
++	 * in audible noise and distortions when using SPDIF or I2S.
++	 */
++	val &= ~GENMASK(25, 16);
++
++	/* Force the PLL-Audio-1x divider to 1 */
+ 	val &= ~GENMASK(29, 26);
+ 	writel(val | (1 << 26), reg + SUN4I_PLL_AUDIO_REG);
+ 
+diff --git a/drivers/gpio/gpio-mxs.c b/drivers/gpio/gpio-mxs.c
+index e2831ee70cdc..deb539b3316b 100644
+--- a/drivers/gpio/gpio-mxs.c
++++ b/drivers/gpio/gpio-mxs.c
+@@ -18,8 +18,6 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/gpio/driver.h>
+-/* FIXME: for gpio_get_value(), replace this by direct register read */
+-#include <linux/gpio.h>
+ #include <linux/module.h>
+ 
+ #define MXS_SET		0x4
+@@ -86,7 +84,7 @@ static int mxs_gpio_set_irq_type(struct irq_data *d, unsigned int type)
+ 	port->both_edges &= ~pin_mask;
+ 	switch (type) {
+ 	case IRQ_TYPE_EDGE_BOTH:
+-		val = gpio_get_value(port->gc.base + d->hwirq);
++		val = port->gc.get(&port->gc, d->hwirq);
+ 		if (val)
+ 			edge = GPIO_INT_FALL_EDGE;
+ 		else
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index c7b4481c90d7..d74d9a8cde2a 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -113,6 +113,9 @@ static const struct edid_quirk {
+ 	/* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+ 	{ "AEO", 0, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
++	{ "BOE", 0x78b, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
+@@ -4279,7 +4282,7 @@ static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
+ 	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+ 
+ 	dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
+-	hdmi->y420_dc_modes |= dc_mask;
++	hdmi->y420_dc_modes = dc_mask;
+ }
+ 
+ static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 2ee1eaa66188..1ebac724fe7b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1561,6 +1561,25 @@ unlock:
+ }
+ EXPORT_SYMBOL(drm_fb_helper_ioctl);
+ 
++static bool drm_fb_pixel_format_equal(const struct fb_var_screeninfo *var_1,
++				      const struct fb_var_screeninfo *var_2)
++{
++	return var_1->bits_per_pixel == var_2->bits_per_pixel &&
++	       var_1->grayscale == var_2->grayscale &&
++	       var_1->red.offset == var_2->red.offset &&
++	       var_1->red.length == var_2->red.length &&
++	       var_1->red.msb_right == var_2->red.msb_right &&
++	       var_1->green.offset == var_2->green.offset &&
++	       var_1->green.length == var_2->green.length &&
++	       var_1->green.msb_right == var_2->green.msb_right &&
++	       var_1->blue.offset == var_2->blue.offset &&
++	       var_1->blue.length == var_2->blue.length &&
++	       var_1->blue.msb_right == var_2->blue.msb_right &&
++	       var_1->transp.offset == var_2->transp.offset &&
++	       var_1->transp.length == var_2->transp.length &&
++	       var_1->transp.msb_right == var_2->transp.msb_right;
++}
++
+ /**
+  * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var
+  * @var: screeninfo to check
+@@ -1571,7 +1590,6 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ {
+ 	struct drm_fb_helper *fb_helper = info->par;
+ 	struct drm_framebuffer *fb = fb_helper->fb;
+-	int depth;
+ 
+ 	if (var->pixclock != 0 || in_dbg_master())
+ 		return -EINVAL;
+@@ -1591,72 +1609,15 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ 		return -EINVAL;
+ 	}
+ 
+-	switch (var->bits_per_pixel) {
+-	case 16:
+-		depth = (var->green.length == 6) ? 16 : 15;
+-		break;
+-	case 32:
+-		depth = (var->transp.length > 0) ? 32 : 24;
+-		break;
+-	default:
+-		depth = var->bits_per_pixel;
+-		break;
+-	}
+-
+-	switch (depth) {
+-	case 8:
+-		var->red.offset = 0;
+-		var->green.offset = 0;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 15:
+-		var->red.offset = 10;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 5;
+-		var->blue.length = 5;
+-		var->transp.length = 1;
+-		var->transp.offset = 15;
+-		break;
+-	case 16:
+-		var->red.offset = 11;
+-		var->green.offset = 5;
+-		var->blue.offset = 0;
+-		var->red.length = 5;
+-		var->green.length = 6;
+-		var->blue.length = 5;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 24:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 0;
+-		var->transp.offset = 0;
+-		break;
+-	case 32:
+-		var->red.offset = 16;
+-		var->green.offset = 8;
+-		var->blue.offset = 0;
+-		var->red.length = 8;
+-		var->green.length = 8;
+-		var->blue.length = 8;
+-		var->transp.length = 8;
+-		var->transp.offset = 24;
+-		break;
+-	default:
++	/*
++	 * drm fbdev emulation doesn't support changing the pixel format at all,
++	 * so reject all pixel format changing requests.
++	 */
++	if (!drm_fb_pixel_format_equal(var, &info->var)) {
++		DRM_DEBUG("fbdev emulation doesn't support changing the pixel format\n");
+ 		return -EINVAL;
+ 	}
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(drm_fb_helper_check_var);
+diff --git a/drivers/gpu/drm/sun4i/sun4i_dotclock.c b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+index e36004fbe453..2a15f2f9271e 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_dotclock.c
++++ b/drivers/gpu/drm/sun4i/sun4i_dotclock.c
+@@ -81,9 +81,19 @@ static long sun4i_dclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	int i;
+ 
+ 	for (i = tcon->dclk_min_div; i <= tcon->dclk_max_div; i++) {
+-		unsigned long ideal = rate * i;
++		u64 ideal = (u64)rate * i;
+ 		unsigned long rounded;
+ 
++		/*
++		 * ideal has overflowed the max value that can be stored in an
++		 * unsigned long, and every clk operation we might do on a
++		 * truncated u64 value will give us incorrect results.
++		 * Let's just stop there since bigger dividers will result in
++		 * the same overflow issue.
++		 */
++		if (ideal > ULONG_MAX)
++			goto out;
++
+ 		rounded = clk_hw_round_rate(clk_hw_get_parent(hw),
+ 					    ideal);
+ 
+diff --git a/drivers/infiniband/core/ucm.c b/drivers/infiniband/core/ucm.c
+index 9eef96dacbd7..d93a719d25c1 100644
+--- a/drivers/infiniband/core/ucm.c
++++ b/drivers/infiniband/core/ucm.c
+@@ -46,6 +46,8 @@
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/uaccess.h>
+ 
+ #include <rdma/ib.h>
+@@ -1123,6 +1125,7 @@ static ssize_t ib_ucm_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucm_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucm_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 21863ddde63e..01d68ed46c1b 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -44,6 +44,8 @@
+ #include <linux/module.h>
+ #include <linux/nsproxy.h>
+ 
++#include <linux/nospec.h>
++
+ #include <rdma/rdma_user_cm.h>
+ #include <rdma/ib_marshall.h>
+ #include <rdma/rdma_cm.h>
+@@ -1676,6 +1678,7 @@ static ssize_t ucma_write(struct file *filp, const char __user *buf,
+ 
+ 	if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
+ 		return -EINVAL;
++	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));
+ 
+ 	if (hdr.in + sizeof(hdr) > len)
+ 		return -EINVAL;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index f5ae24865355..b0f9d19b3410 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1346,6 +1346,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0611", 0 },
+ 	{ "ELAN0612", 0 },
+ 	{ "ELAN0618", 0 },
++	{ "ELAN061C", 0 },
+ 	{ "ELAN061D", 0 },
+ 	{ "ELAN0622", 0 },
+ 	{ "ELAN1000", 0 },
+diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
+index f5cc517d1131..7e50e1d6f58c 100644
+--- a/drivers/misc/eeprom/at24.c
++++ b/drivers/misc/eeprom/at24.c
+@@ -478,6 +478,23 @@ static void at24_properties_to_pdata(struct device *dev,
+ 	if (device_property_present(dev, "no-read-rollover"))
+ 		chip->flags |= AT24_FLAG_NO_RDROL;
+ 
++	err = device_property_read_u32(dev, "address-width", &val);
++	if (!err) {
++		switch (val) {
++		case 8:
++			if (chip->flags & AT24_FLAG_ADDR16)
++				dev_warn(dev, "Override address width to be 8, while default is 16\n");
++			chip->flags &= ~AT24_FLAG_ADDR16;
++			break;
++		case 16:
++			chip->flags |= AT24_FLAG_ADDR16;
++			break;
++		default:
++			dev_warn(dev, "Bad \"address-width\" property: %u\n",
++				 val);
++		}
++	}
++
+ 	err = device_property_read_u32(dev, "size", &val);
+ 	if (!err)
+ 		chip->byte_len = val;
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index 01b0e2bb3319..2012551d93e0 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -24,6 +24,8 @@
+ #include <linux/slab.h>
+ #include <linux/timekeeping.h>
+ 
++#include <linux/nospec.h>
++
+ #include "ptp_private.h"
+ 
+ static int ptp_disable_pinfunc(struct ptp_clock_info *ops,
+@@ -248,6 +250,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		pd = ops->pin_config[pin_index];
+@@ -266,6 +269,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+ 			err = -EINVAL;
+ 			break;
+ 		}
++		pin_index = array_index_nospec(pin_index, ops->n_pins);
+ 		if (mutex_lock_interruptible(&ptp->pincfg_mux))
+ 			return -ERESTARTSYS;
+ 		err = ptp_set_pinfunc(ptp, pin_index, pd.func, pd.chan);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 84f52774810a..b61d101894ef 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -309,17 +309,17 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ 
+ 		if (difference & ACM_CTRL_DSR)
+ 			acm->iocount.dsr++;
+-		if (difference & ACM_CTRL_BRK)
+-			acm->iocount.brk++;
+-		if (difference & ACM_CTRL_RI)
+-			acm->iocount.rng++;
+ 		if (difference & ACM_CTRL_DCD)
+ 			acm->iocount.dcd++;
+-		if (difference & ACM_CTRL_FRAMING)
++		if (newctrl & ACM_CTRL_BRK)
++			acm->iocount.brk++;
++		if (newctrl & ACM_CTRL_RI)
++			acm->iocount.rng++;
++		if (newctrl & ACM_CTRL_FRAMING)
+ 			acm->iocount.frame++;
+-		if (difference & ACM_CTRL_PARITY)
++		if (newctrl & ACM_CTRL_PARITY)
+ 			acm->iocount.parity++;
+-		if (difference & ACM_CTRL_OVERRUN)
++		if (newctrl & ACM_CTRL_OVERRUN)
+ 			acm->iocount.overrun++;
+ 		spin_unlock(&acm->read_lock);
+ 
+@@ -354,7 +354,6 @@ static void acm_ctrl_irq(struct urb *urb)
+ 	case -ENOENT:
+ 	case -ESHUTDOWN:
+ 		/* this urb is terminated, clean up */
+-		acm->nb_index = 0;
+ 		dev_dbg(&acm->control->dev,
+ 			"%s - urb shutting down with status: %d\n",
+ 			__func__, status);
+@@ -1642,6 +1641,7 @@ static int acm_pre_reset(struct usb_interface *intf)
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 
+ 	clear_bit(EVENT_RX_STALL, &acm->flags);
++	acm->nb_index = 0; /* pending control transfers are lost */
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index e1e0c90ce569..2e66711dac9c 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1473,8 +1473,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
+-		if (is_in)
+-			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1504,6 +1502,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 			is_in = 0;
+ 			uurb->endpoint &= ~USB_DIR_IN;
+ 		}
++		if (is_in)
++			allow_short = true;
+ 		snoop(&ps->dev->dev, "control urb: bRequestType=%02x "
+ 			"bRequest=%02x wValue=%04x "
+ 			"wIndex=%04x wLength=%04x\n",
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index acecd13dcbd9..b29620e5df83 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -222,6 +222,8 @@
+ #include <linux/usb/gadget.h>
+ #include <linux/usb/composite.h>
+ 
++#include <linux/nospec.h>
++
+ #include "configfs.h"
+ 
+ 
+@@ -3171,6 +3173,7 @@ static struct config_group *fsg_lun_make(struct config_group *group,
+ 	fsg_opts = to_fsg_opts(&group->cg_item);
+ 	if (num >= FSG_MAX_LUNS)
+ 		return ERR_PTR(-ERANGE);
++	num = array_index_nospec(num, FSG_MAX_LUNS);
+ 
+ 	mutex_lock(&fsg_opts->lock);
+ 	if (fsg_opts->refcnt || fsg_opts->common->luns[num]) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 722860eb5a91..51dd8e00c4f8 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -179,10 +179,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
++	    pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
++	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
++	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+-	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+diff --git a/drivers/usb/roles/intel-xhci-usb-role-switch.c b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+index 1fb3dd0f1dfa..277de96181f9 100644
+--- a/drivers/usb/roles/intel-xhci-usb-role-switch.c
++++ b/drivers/usb/roles/intel-xhci-usb-role-switch.c
+@@ -161,6 +161,8 @@ static int intel_xhci_usb_remove(struct platform_device *pdev)
+ {
+ 	struct intel_xhci_usb_data *data = platform_get_drvdata(pdev);
+ 
++	pm_runtime_disable(&pdev->dev);
++
+ 	usb_role_switch_unregister(data->role_sw);
+ 	return 0;
+ }
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index d11f3f8dad40..1e592ec94ba4 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -318,8 +318,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	struct vhci_hcd	*vhci_hcd;
+ 	struct vhci	*vhci;
+ 	int             retval = 0;
+-	int		rhport;
++	int		rhport = -1;
+ 	unsigned long	flags;
++	bool invalid_rhport = false;
+ 
+ 	u32 prev_port_status[VHCI_HC_PORTS];
+ 
+@@ -334,9 +335,19 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 	usbip_dbg_vhci_rh("typeReq %x wValue %x wIndex %x\n", typeReq, wValue,
+ 			  wIndex);
+ 
+-	if (wIndex > VHCI_HC_PORTS)
+-		pr_err("invalid port number %d\n", wIndex);
+-	rhport = wIndex - 1;
++	/*
++	 * wIndex can be 0 for some request types (typeReq). rhport is
++	 * in valid range when wIndex >= 1 and < VHCI_HC_PORTS.
++	 *
++	 * Reference port_status[] only with valid rhport when
++	 * invalid_rhport is false.
++	 */
++	if (wIndex < 1 || wIndex > VHCI_HC_PORTS) {
++		invalid_rhport = true;
++		if (wIndex > VHCI_HC_PORTS)
++			pr_err("invalid port number %d\n", wIndex);
++	} else
++		rhport = wIndex - 1;
+ 
+ 	vhci_hcd = hcd_to_vhci_hcd(hcd);
+ 	vhci = vhci_hcd->vhci;
+@@ -345,8 +356,9 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 
+ 	/* store old status and compare now and old later */
+ 	if (usbip_dbg_flag_vhci_rh) {
+-		memcpy(prev_port_status, vhci_hcd->port_status,
+-			sizeof(prev_port_status));
++		if (!invalid_rhport)
++			memcpy(prev_port_status, vhci_hcd->port_status,
++				sizeof(prev_port_status));
+ 	}
+ 
+ 	switch (typeReq) {
+@@ -354,8 +366,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		usbip_dbg_vhci_rh(" ClearHubFeature\n");
+ 		break;
+ 	case ClearPortFeature:
+-		if (rhport < 0)
++		if (invalid_rhport) {
++			pr_err("invalid port number %d\n", wIndex);
+ 			goto error;
++		}
+ 		switch (wValue) {
+ 		case USB_PORT_FEAT_SUSPEND:
+ 			if (hcd->speed == HCD_USB3) {
+@@ -415,9 +429,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		break;
+ 	case GetPortStatus:
+ 		usbip_dbg_vhci_rh(" GetPortStatus port %x\n", wIndex);
+-		if (wIndex < 1) {
++		if (invalid_rhport) {
+ 			pr_err("invalid port number %d\n", wIndex);
+ 			retval = -EPIPE;
++			goto error;
+ 		}
+ 
+ 		/* we do not care about resume. */
+@@ -513,16 +528,20 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 				goto error;
+ 			}
+ 
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 
+ 			vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND;
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_POWER\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3)
+ 				vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER;
+ 			else
+@@ -531,8 +550,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_BH_PORT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* Applicable only for USB3.0 hub */
+ 			if (hcd->speed != HCD_USB3) {
+ 				pr_err("USB_PORT_FEAT_BH_PORT_RESET req not "
+@@ -543,8 +564,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		case USB_PORT_FEAT_RESET:
+ 			usbip_dbg_vhci_rh(
+ 				" SetPortFeature: USB_PORT_FEAT_RESET\n");
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			/* if it's already enabled, disable */
+ 			if (hcd->speed == HCD_USB3) {
+ 				vhci_hcd->port_status[rhport] = 0;
+@@ -565,8 +588,10 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		default:
+ 			usbip_dbg_vhci_rh(" SetPortFeature: default %d\n",
+ 					  wValue);
+-			if (rhport < 0)
++			if (invalid_rhport) {
++				pr_err("invalid port number %d\n", wIndex);
+ 				goto error;
++			}
+ 			if (hcd->speed == HCD_USB3) {
+ 				if ((vhci_hcd->port_status[rhport] &
+ 				     USB_SS_PORT_STAT_POWER) != 0) {
+@@ -608,7 +633,7 @@ error:
+ 	if (usbip_dbg_flag_vhci_rh) {
+ 		pr_debug("port %d\n", rhport);
+ 		/* Only dump valid port status */
+-		if (rhport >= 0) {
++		if (!invalid_rhport) {
+ 			dump_port_status_diff(prev_port_status[rhport],
+ 					      vhci_hcd->port_status[rhport],
+ 					      hcd->speed == HCD_USB3);
+@@ -618,8 +643,10 @@ error:
+ 
+ 	spin_unlock_irqrestore(&vhci->lock, flags);
+ 
+-	if ((vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0)
++	if (!invalid_rhport &&
++	    (vhci_hcd->port_status[rhport] & PORT_C_MASK) != 0) {
+ 		usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ }
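The wIndex validation added in this hunk reduces to a small helper. The sketch below is standalone, not driver code; `VHCI_HC_PORTS` here is a stand-in value rather than the driver's real definition:

```c
#include <assert.h>

#define VHCI_HC_PORTS 8 /* stand-in; the real driver derives this from its config */

/* Map the 1-based wIndex of a hub request to a 0-based root-hub port.
 * wIndex can legitimately be 0 for hub-wide requests, so the result
 * must be checked before indexing any per-port array. */
static int windex_to_rhport(unsigned int wIndex)
{
    if (wIndex < 1 || wIndex > VHCI_HC_PORTS)
        return -1; /* the invalid_rhport case in the patch */
    return (int)wIndex - 1;
}
```

The patch's `invalid_rhport` flag plays the role of this function's -1 return: every `port_status[]` access is gated on it.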
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index af2b17b21b94..95983c744164 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -343,7 +343,7 @@ try_again:
+ 	trap = lock_rename(cache->graveyard, dir);
+ 
+ 	/* do some checks before getting the grave dentry */
+-	if (rep->d_parent != dir) {
++	if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
+ 		/* the entry was probably culled when we dropped the parent dir
+ 		 * lock */
+ 		unlock_rename(cache->graveyard, dir);
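The cachefiles change above is an instance of re-validating state after reacquiring a lock: between dropping the parent lock and taking `lock_rename()`, the entry may have been moved or its directory killed. A toy model of that re-check, with made-up field names:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for a dentry's parent pointer and the
 * IS_DEADDIR() condition checked in the patch. */
struct entry {
    const struct entry *parent;
    bool dead;
};

/* Re-validate after reacquiring the lock: both conditions from the
 * patched check must still hold before proceeding. */
static bool still_valid(const struct entry *rep, const struct entry *dir)
{
    return rep->parent == dir && !rep->dead;
}
```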
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 83bfe04456b6..c550512ce335 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -70,20 +70,7 @@ void fscache_free_cookie(struct fscache_cookie *cookie)
+ }
+ 
+ /*
+- * initialise an cookie jar slab element prior to any use
+- */
+-void fscache_cookie_init_once(void *_cookie)
+-{
+-	struct fscache_cookie *cookie = _cookie;
+-
+-	memset(cookie, 0, sizeof(*cookie));
+-	spin_lock_init(&cookie->lock);
+-	spin_lock_init(&cookie->stores_lock);
+-	INIT_HLIST_HEAD(&cookie->backing_objects);
+-}
+-
+-/*
+- * Set the index key in a cookie.  The cookie struct has space for a 12-byte
++ * Set the index key in a cookie.  The cookie struct has space for a 16-byte
+  * key plus length and hash, but if that's not big enough, it's instead a
+  * pointer to a buffer containing 3 bytes of hash, 1 byte of length and then
+  * the key data.
+@@ -93,20 +80,18 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ {
+ 	unsigned long long h;
+ 	u32 *buf;
++	int bufs;
+ 	int i;
+ 
+-	cookie->key_len = index_key_len;
++	bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf));
+ 
+ 	if (index_key_len > sizeof(cookie->inline_key)) {
+-		buf = kzalloc(index_key_len, GFP_KERNEL);
++		buf = kcalloc(bufs, sizeof(*buf), GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+ 		cookie->key = buf;
+ 	} else {
+ 		buf = (u32 *)cookie->inline_key;
+-		buf[0] = 0;
+-		buf[1] = 0;
+-		buf[2] = 0;
+ 	}
+ 
+ 	memcpy(buf, index_key, index_key_len);
+@@ -116,7 +101,8 @@ static int fscache_set_key(struct fscache_cookie *cookie,
+ 	 */
+ 	h = (unsigned long)cookie->parent;
+ 	h += index_key_len + cookie->type;
+-	for (i = 0; i < (index_key_len + sizeof(u32) - 1) / sizeof(u32); i++)
++
++	for (i = 0; i < bufs; i++)
+ 		h += buf[i];
+ 
+ 	cookie->key_hash = h ^ (h >> 32);
+@@ -161,7 +147,7 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	struct fscache_cookie *cookie;
+ 
+ 	/* allocate and initialise a cookie */
+-	cookie = kmem_cache_alloc(fscache_cookie_jar, GFP_KERNEL);
++	cookie = kmem_cache_zalloc(fscache_cookie_jar, GFP_KERNEL);
+ 	if (!cookie)
+ 		return NULL;
+ 
+@@ -192,6 +178,9 @@ struct fscache_cookie *fscache_alloc_cookie(
+ 	cookie->netfs_data	= netfs_data;
+ 	cookie->flags		= (1 << FSCACHE_COOKIE_NO_DATA_YET);
+ 	cookie->type		= def->type;
++	spin_lock_init(&cookie->lock);
++	spin_lock_init(&cookie->stores_lock);
++	INIT_HLIST_HEAD(&cookie->backing_objects);
+ 
+ 	/* radix tree insertion won't use the preallocation pool unless it's
+ 	 * told it may not wait */
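The fscache_set_key hunks above allocate the key buffer in whole u32 units (`kcalloc(bufs, sizeof(*buf), ...)`) so the tail of the last word is zeroed, then hash by summing exactly `bufs` words. A userspace sketch of the same rounding and summing, using libc allocation in place of the kernel helpers:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Sum whole u32 words of a zero-padded copy of the key, as in the
 * patch: allocating bufs words (not len bytes) means reading the last
 * partial word never touches uninitialised memory. */
static uint32_t key_hash(const void *key, size_t len, uint64_t seed)
{
    size_t bufs = DIV_ROUND_UP(len, sizeof(uint32_t));
    uint32_t *buf = calloc(bufs, sizeof(uint32_t)); /* zeroed, like kcalloc */
    uint64_t h = seed + len;
    size_t i;

    memcpy(buf, key, len);
    for (i = 0; i < bufs; i++)
        h += buf[i];
    free(buf);
    return (uint32_t)(h ^ (h >> 32));
}
```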
+diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
+index f83328a7f048..d6209022e965 100644
+--- a/fs/fscache/internal.h
++++ b/fs/fscache/internal.h
+@@ -51,7 +51,6 @@ extern struct fscache_cache *fscache_select_cache_for_object(
+ extern struct kmem_cache *fscache_cookie_jar;
+ 
+ extern void fscache_free_cookie(struct fscache_cookie *);
+-extern void fscache_cookie_init_once(void *);
+ extern struct fscache_cookie *fscache_alloc_cookie(struct fscache_cookie *,
+ 						   const struct fscache_cookie_def *,
+ 						   const void *, size_t,
+diff --git a/fs/fscache/main.c b/fs/fscache/main.c
+index 7dce110bf17d..30ad89db1efc 100644
+--- a/fs/fscache/main.c
++++ b/fs/fscache/main.c
+@@ -143,9 +143,7 @@ static int __init fscache_init(void)
+ 
+ 	fscache_cookie_jar = kmem_cache_create("fscache_cookie_jar",
+ 					       sizeof(struct fscache_cookie),
+-					       0,
+-					       0,
+-					       fscache_cookie_init_once);
++					       0, 0, NULL);
+ 	if (!fscache_cookie_jar) {
+ 		pr_notice("Failed to allocate a cookie jar\n");
+ 		ret = -ENOMEM;
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index b445b13fc59b..5444fec607ce 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -229,7 +229,7 @@ static long ioctl_file_clone(struct file *dst_file, unsigned long srcfd,
+ 	ret = -EXDEV;
+ 	if (src_file.file->f_path.mnt != dst_file->f_path.mnt)
+ 		goto fdput;
+-	ret = do_clone_file_range(src_file.file, off, dst_file, destoff, olen);
++	ret = vfs_clone_file_range(src_file.file, off, dst_file, destoff, olen);
+ fdput:
+ 	fdput(src_file);
+ 	return ret;
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index b0555d7d8200..613d2fe2dddd 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -541,7 +541,8 @@ __be32 nfsd4_set_nfs4_label(struct svc_rqst *rqstp, struct svc_fh *fhp,
+ __be32 nfsd4_clone_file_range(struct file *src, u64 src_pos, struct file *dst,
+ 		u64 dst_pos, u64 count)
+ {
+-	return nfserrno(do_clone_file_range(src, src_pos, dst, dst_pos, count));
++	return nfserrno(vfs_clone_file_range(src, src_pos, dst, dst_pos,
++					     count));
+ }
+ 
+ ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst,
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index ddaddb4ce4c3..26b477f2538d 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -156,7 +156,7 @@ static int ovl_copy_up_data(struct path *old, struct path *new, loff_t len)
+ 	}
+ 
+ 	/* Try to use clone_file_range to clone up within the same fs */
+-	error = vfs_clone_file_range(old_file, 0, new_file, 0, len);
++	error = do_clone_file_range(old_file, 0, new_file, 0, len);
+ 	if (!error)
+ 		goto out;
+ 	/* Couldn't clone, so now we try to copy the data */
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 153f8f690490..c9d489684335 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1818,8 +1818,8 @@ int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ }
+ EXPORT_SYMBOL(vfs_clone_file_prep_inodes);
+ 
+-int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len)
++int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			struct file *file_out, loff_t pos_out, u64 len)
+ {
+ 	struct inode *inode_in = file_inode(file_in);
+ 	struct inode *inode_out = file_inode(file_out);
+@@ -1866,6 +1866,19 @@ int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+ 
+ 	return ret;
+ }
++EXPORT_SYMBOL(do_clone_file_range);
++
++int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
++			 struct file *file_out, loff_t pos_out, u64 len)
++{
++	int ret;
++
++	file_start_write(file_out);
++	ret = do_clone_file_range(file_in, pos_in, file_out, pos_out, len);
++	file_end_write(file_out);
++
++	return ret;
++}
+ EXPORT_SYMBOL(vfs_clone_file_range);
+ 
+ /*
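The read_write.c change above splits the clone path into a bare `do_*` helper and a public `vfs_*` wrapper that brackets it with `file_start_write()`/`file_end_write()`, so internal callers that already hold write access (like overlayfs copy-up) can call the helper directly. A toy model of that layering, with a counter standing in for the write-access bracket:

```c
#include <assert.h>

static int write_refs; /* stands in for file_start_write()/file_end_write() */

/* Bare helper: assumes the caller already holds write access. */
static int do_clone_range(int *dst, const int *src, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = src[i];
    return 0;
}

/* Public wrapper: takes write access around the bare helper. */
static int vfs_clone_range(int *dst, const int *src, int n)
{
    int ret;
    write_refs++;                      /* file_start_write() */
    ret = do_clone_range(dst, src, n);
    write_refs--;                      /* file_end_write() */
    return ret;
}
```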
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b25d12ef120a..e3c404833115 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -214,9 +214,9 @@ struct detailed_timing {
+ #define DRM_EDID_HDMI_DC_Y444             (1 << 3)
+ 
+ /* YCBCR 420 deep color modes */
+-#define DRM_EDID_YCBCR420_DC_48		  (1 << 6)
+-#define DRM_EDID_YCBCR420_DC_36		  (1 << 5)
+-#define DRM_EDID_YCBCR420_DC_30		  (1 << 4)
++#define DRM_EDID_YCBCR420_DC_48		  (1 << 2)
++#define DRM_EDID_YCBCR420_DC_36		  (1 << 1)
++#define DRM_EDID_YCBCR420_DC_30		  (1 << 0)
+ #define DRM_EDID_YCBCR420_DC_MASK (DRM_EDID_YCBCR420_DC_48 | \
+ 				    DRM_EDID_YCBCR420_DC_36 | \
+ 				    DRM_EDID_YCBCR420_DC_30)
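The EDID fix above moves the three YCBCR 4:2:0 deep-colour flags from bits 4..6 down to bits 0..2, keeping the mask as the union of the flags. Short local defines mirroring the corrected values (not the real `drm_edid.h` names):

```c
#include <assert.h>
#include <stdint.h>

/* Corrected bit positions from the patch: bits 0..2 of the
 * capability byte, mask = union of the three flags. */
#define DC_48   (1u << 2)
#define DC_36   (1u << 1)
#define DC_30   (1u << 0)
#define DC_MASK (DC_48 | DC_36 | DC_30)

static int supports_dc_36(uint8_t caps)
{
    return (caps & DC_36) != 0;
}
```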
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 38b04f559ad3..1fd6fa822d2c 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -50,6 +50,9 @@ struct bpf_reg_state {
+ 		 *   PTR_TO_MAP_VALUE_OR_NULL
+ 		 */
+ 		struct bpf_map *map_ptr;
++
++		/* Max size from any of the above. */
++		unsigned long raw;
+ 	};
+ 	/* Fixed part of pointer offset, pointer types only */
+ 	s32 off;
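The new `raw` member above is the usual union trick: add one member at least as large as every other member, so generic code can zero or copy the union through it without knowing which member is currently live. A minimal illustration with invented member names:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical union mirroring the shape of the patched struct: 'raw'
 * covers the size of both other members on LP64 targets. */
union reg_extra {
    int32_t       range;   /* e.g. a packet-pointer range */
    void         *map_ptr; /* e.g. a map pointer */
    unsigned long raw;     /* max size of any of the above */
};

/* Clearing through 'raw' wipes whichever member was live. */
static void clear_extra(union reg_extra *e)
{
    e->raw = 0;
}
```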
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index a3afa50bb79f..e73363bd8646 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1813,8 +1813,10 @@ extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
+ extern int vfs_clone_file_prep_inodes(struct inode *inode_in, loff_t pos_in,
+ 				      struct inode *inode_out, loff_t pos_out,
+ 				      u64 *len, bool is_dedupe);
++extern int do_clone_file_range(struct file *file_in, loff_t pos_in,
++			       struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_clone_file_range(struct file *file_in, loff_t pos_in,
+-		struct file *file_out, loff_t pos_out, u64 len);
++				struct file *file_out, loff_t pos_out, u64 len);
+ extern int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+ 					 struct inode *dest, loff_t destoff,
+ 					 loff_t len, bool *is_same);
+@@ -2755,19 +2757,6 @@ static inline void file_end_write(struct file *file)
+ 	__sb_end_write(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+ }
+ 
+-static inline int do_clone_file_range(struct file *file_in, loff_t pos_in,
+-				      struct file *file_out, loff_t pos_out,
+-				      u64 len)
+-{
+-	int ret;
+-
+-	file_start_write(file_out);
+-	ret = vfs_clone_file_range(file_in, pos_in, file_out, pos_out, len);
+-	file_end_write(file_out);
+-
+-	return ret;
+-}
+-
+ /*
+  * get_write_access() gets write permission for a file.
+  * put_write_access() releases this write permission.
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 82e8edef6ea0..b000686fa1a1 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2731,7 +2731,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->umax_value = umax_ptr;
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->off = ptr_reg->off + smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  Note that off_reg->off
+@@ -2761,10 +2761,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+-			dst_reg->range = 0;
++			dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_SUB:
+@@ -2793,7 +2794,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 			dst_reg->var_off = ptr_reg->var_off;
+ 			dst_reg->id = ptr_reg->id;
+ 			dst_reg->off = ptr_reg->off - smin_val;
+-			dst_reg->range = ptr_reg->range;
++			dst_reg->raw = ptr_reg->raw;
+ 			break;
+ 		}
+ 		/* A new variable offset is created.  If the subtrahend is known
+@@ -2819,11 +2820,12 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ 		}
+ 		dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
+ 		dst_reg->off = ptr_reg->off;
++		dst_reg->raw = ptr_reg->raw;
+ 		if (reg_is_pkt_pointer(ptr_reg)) {
+ 			dst_reg->id = ++env->id_gen;
+ 			/* something was added to pkt_ptr, set range to zero */
+ 			if (smin_val < 0)
+-				dst_reg->range = 0;
++				dst_reg->raw = 0;
+ 		}
+ 		break;
+ 	case BPF_AND:
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 26526fc41f0d..b27b9509ea89 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4797,9 +4797,13 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
+ 
+ 	/*
+ 	 * Add to the _head_ of the list, so that an already-started
+-	 * distribute_cfs_runtime will not see us
++	 * distribute_cfs_runtime will not see us. If disribute_cfs_runtime is
++	 * not running add to the tail so that later runqueues don't get starved.
+ 	 */
+-	list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	if (cfs_b->distribute_running)
++		list_add_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
++	else
++		list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
+ 
+ 	/*
+ 	 * If we're the first throttled task, make sure the bandwidth
+@@ -4943,14 +4947,16 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
+ 	 * in us over-using our runtime if it is all used during this loop, but
+ 	 * only by limited amounts in that extreme case.
+ 	 */
+-	while (throttled && cfs_b->runtime > 0) {
++	while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) {
+ 		runtime = cfs_b->runtime;
++		cfs_b->distribute_running = 1;
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		/* we can't nest cfs_b->lock while distributing bandwidth */
+ 		runtime = distribute_cfs_runtime(cfs_b, runtime,
+ 						 runtime_expires);
+ 		raw_spin_lock(&cfs_b->lock);
+ 
++		cfs_b->distribute_running = 0;
+ 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
+ 
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
+@@ -5061,6 +5067,11 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 
+ 	/* confirm we're still not at a refresh boundary */
+ 	raw_spin_lock(&cfs_b->lock);
++	if (cfs_b->distribute_running) {
++		raw_spin_unlock(&cfs_b->lock);
++		return;
++	}
++
+ 	if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
+ 		raw_spin_unlock(&cfs_b->lock);
+ 		return;
+@@ -5070,6 +5081,9 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 		runtime = cfs_b->runtime;
+ 
+ 	expires = cfs_b->runtime_expires;
++	if (runtime)
++		cfs_b->distribute_running = 1;
++
+ 	raw_spin_unlock(&cfs_b->lock);
+ 
+ 	if (!runtime)
+@@ -5080,6 +5094,7 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
+ 	raw_spin_lock(&cfs_b->lock);
+ 	if (expires == cfs_b->runtime_expires)
+ 		cfs_b->runtime -= min(runtime, cfs_b->runtime);
++	cfs_b->distribute_running = 0;
+ 	raw_spin_unlock(&cfs_b->lock);
+ }
+ 
+@@ -5188,6 +5203,7 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
+ 	cfs_b->period_timer.function = sched_cfs_period_timer;
+ 	hrtimer_init(&cfs_b->slack_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ 	cfs_b->slack_timer.function = sched_cfs_slack_timer;
++	cfs_b->distribute_running = 0;
+ }
+ 
+ static void init_cfs_rq_runtime(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c7742dcc136c..4565c3f9ecc5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -346,6 +346,8 @@ struct cfs_bandwidth {
+ 	int			nr_periods;
+ 	int			nr_throttled;
+ 	u64			throttled_time;
++
++	bool                    distribute_running;
+ #endif
+ };
+ 
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index aae18af94c94..6c78bc2b7fff 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -747,16 +747,30 @@ static void free_synth_field(struct synth_field *field)
+ 	kfree(field);
+ }
+ 
+-static struct synth_field *parse_synth_field(char *field_type,
+-					     char *field_name)
++static struct synth_field *parse_synth_field(int argc, char **argv,
++					     int *consumed)
+ {
+ 	struct synth_field *field;
++	const char *prefix = NULL;
++	char *field_type = argv[0], *field_name;
+ 	int len, ret = 0;
+ 	char *array;
+ 
+ 	if (field_type[0] == ';')
+ 		field_type++;
+ 
++	if (!strcmp(field_type, "unsigned")) {
++		if (argc < 3)
++			return ERR_PTR(-EINVAL);
++		prefix = "unsigned ";
++		field_type = argv[1];
++		field_name = argv[2];
++		*consumed = 3;
++	} else {
++		field_name = argv[1];
++		*consumed = 2;
++	}
++
+ 	len = strlen(field_name);
+ 	if (field_name[len - 1] == ';')
+ 		field_name[len - 1] = '\0';
+@@ -769,11 +783,15 @@ static struct synth_field *parse_synth_field(char *field_type,
+ 	array = strchr(field_name, '[');
+ 	if (array)
+ 		len += strlen(array);
++	if (prefix)
++		len += strlen(prefix);
+ 	field->type = kzalloc(len, GFP_KERNEL);
+ 	if (!field->type) {
+ 		ret = -ENOMEM;
+ 		goto free;
+ 	}
++	if (prefix)
++		strcat(field->type, prefix);
+ 	strcat(field->type, field_type);
+ 	if (array) {
+ 		strcat(field->type, array);
+@@ -1018,7 +1036,7 @@ static int create_synth_event(int argc, char **argv)
+ 	struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
+ 	struct synth_event *event = NULL;
+ 	bool delete_event = false;
+-	int i, n_fields = 0, ret = 0;
++	int i, consumed = 0, n_fields = 0, ret = 0;
+ 	char *name;
+ 
+ 	mutex_lock(&synth_event_mutex);
+@@ -1070,16 +1088,16 @@ static int create_synth_event(int argc, char **argv)
+ 			goto err;
+ 		}
+ 
+-		field = parse_synth_field(argv[i], argv[i + 1]);
++		field = parse_synth_field(argc - i, &argv[i], &consumed);
+ 		if (IS_ERR(field)) {
+ 			ret = PTR_ERR(field);
+ 			goto err;
+ 		}
+-		fields[n_fields] = field;
+-		i++; n_fields++;
++		fields[n_fields++] = field;
++		i += consumed - 1;
+ 	}
+ 
+-	if (i < argc) {
++	if (i < argc && strcmp(argv[i], ";") != 0) {
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}
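The parse_synth_field rework above makes field parsing variable-arity: a field is either `<type> <name>` (two tokens) or `unsigned <type> <name>` (three), with `*consumed` reporting how many argv slots were eaten so the caller advances correctly. A standalone sketch of that token-consumption logic, simplified to drop the trailing-semicolon handling:

```c
#include <assert.h>
#include <string.h>

/* Returns 0 on success and sets *consumed to 2 or 3; returns -1 when
 * not enough tokens remain. */
static int parse_field(int argc, char **argv,
                       const char **type, const char **name, int *consumed)
{
    if (argc >= 1 && strcmp(argv[0], "unsigned") == 0) {
        if (argc < 3)
            return -1;
        *type = argv[1];
        *name = argv[2];
        *consumed = 3;
        return 0;
    }
    if (argc < 2)
        return -1;
    *type = argv[0];
    *name = argv[1];
    *consumed = 2;
    return 0;
}
```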


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-11-04 17:33 Alice Ferrazzi
  0 siblings, 0 replies; 75+ messages in thread
From: Alice Ferrazzi @ 2018-11-04 17:33 UTC (permalink / raw
  To: gentoo-commits

commit:     0a2b0730ed2156923899b026bd016e89fca0ee5e
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Nov  4 17:33:00 2018 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Nov  4 17:33:00 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0a2b0730

linux kernel 4.18.17

 0000_README              |    4 +
 1016_linux-4.18.17.patch | 4982 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4986 insertions(+)

diff --git a/0000_README b/0000_README
index 52e9ca9..fcd301e 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-4.18.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.16
 
+Patch:  1016_linux-4.18.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-4.18.17.patch b/1016_linux-4.18.17.patch
new file mode 100644
index 0000000..1e385a1
--- /dev/null
+++ b/1016_linux-4.18.17.patch
@@ -0,0 +1,4982 @@
+diff --git a/Makefile b/Makefile
+index 034dd990b0ae..c051db0ca5a0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index f03b72644902..a18371a36e03 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -977,4 +977,12 @@ config REFCOUNT_FULL
+ 	  against various use-after-free conditions that can be used in
+ 	  security flaw exploits.
+ 
++config HAVE_ARCH_COMPILER_H
++	bool
++	help
++	  An architecture can select this if it provides an
++	  asm/compiler.h header that should be included after
++	  linux/compiler-*.h in order to override macro definitions that those
++	  headers generally provide.
++
+ source "kernel/gcov/Kconfig"
+diff --git a/arch/arm/boot/dts/bcm63138.dtsi b/arch/arm/boot/dts/bcm63138.dtsi
+index 43ee992ccdcf..6df61518776f 100644
+--- a/arch/arm/boot/dts/bcm63138.dtsi
++++ b/arch/arm/boot/dts/bcm63138.dtsi
+@@ -106,21 +106,23 @@
+ 		global_timer: timer@1e200 {
+ 			compatible = "arm,cortex-a9-global-timer";
+ 			reg = <0x1e200 0x20>;
+-			interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		local_timer: local-timer@1e600 {
+ 			compatible = "arm,cortex-a9-twd-timer";
+ 			reg = <0x1e600 0x20>;
+-			interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_EDGE_RISING)>;
+ 			clocks = <&axi_clk>;
+ 		};
+ 
+ 		twd_watchdog: watchdog@1e620 {
+ 			compatible = "arm,cortex-a9-twd-wdt";
+ 			reg = <0x1e620 0x20>;
+-			interrupts = <GIC_PPI 14 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) |
++						  IRQ_TYPE_LEVEL_HIGH)>;
+ 		};
+ 
+ 		armpll: armpll {
+@@ -158,7 +160,7 @@
+ 		serial0: serial@600 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x600 0x1b>;
+-			interrupts = <GIC_SPI 32 0>;
++			interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -167,7 +169,7 @@
+ 		serial1: serial@620 {
+ 			compatible = "brcm,bcm6345-uart";
+ 			reg = <0x620 0x1b>;
+-			interrupts = <GIC_SPI 33 0>;
++			interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&periph_clk>;
+ 			clock-names = "periph";
+ 			status = "disabled";
+@@ -180,7 +182,7 @@
+ 			reg = <0x2000 0x600>, <0xf0 0x10>;
+ 			reg-names = "nand", "nand-int-base";
+ 			status = "disabled";
+-			interrupts = <GIC_SPI 38 0>;
++			interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "nand";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx53-qsb-common.dtsi b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+index ef7658a78836..c1548adee789 100644
+--- a/arch/arm/boot/dts/imx53-qsb-common.dtsi
++++ b/arch/arm/boot/dts/imx53-qsb-common.dtsi
+@@ -123,6 +123,17 @@
+ 	};
+ };
+ 
++&cpu0 {
++	/* CPU rated to 1GHz, not 1.2GHz as per the default settings */
++	operating-points = <
++		/* kHz   uV */
++		166666  850000
++		400000  900000
++		800000  1050000
++		1000000 1200000
++	>;
++};
++
+ &esdhc1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_esdhc1>;
+diff --git a/arch/arm/kernel/vmlinux.lds.h b/arch/arm/kernel/vmlinux.lds.h
+index ae5fdff18406..8247bc15addc 100644
+--- a/arch/arm/kernel/vmlinux.lds.h
++++ b/arch/arm/kernel/vmlinux.lds.h
+@@ -49,6 +49,8 @@
+ #define ARM_DISCARD							\
+ 		*(.ARM.exidx.exit.text)					\
+ 		*(.ARM.extab.exit.text)					\
++		*(.ARM.exidx.text.exit)					\
++		*(.ARM.extab.text.exit)					\
+ 		ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))		\
+ 		ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))		\
+ 		ARM_EXIT_DISCARD(EXIT_TEXT)				\
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index fc91205ff46c..5bf9443cfbaa 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -473,7 +473,7 @@ void pci_ioremap_set_mem_type(int mem_type)
+ 
+ int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr)
+ {
+-	BUG_ON(offset + SZ_64K > IO_SPACE_LIMIT);
++	BUG_ON(offset + SZ_64K - 1 > IO_SPACE_LIMIT);
+ 
+ 	return ioremap_page_range(PCI_IO_VIRT_BASE + offset,
+ 				  PCI_IO_VIRT_BASE + offset + SZ_64K,
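The ioremap fix above is an off-by-one in a range check: a 64 KiB window starting at `offset` occupies bytes `offset .. offset + SZ_64K - 1`, so the last byte is what must be compared against the limit; comparing `offset + SZ_64K` rejects a window that ends exactly at the limit. Sketch with a stand-in limit value:

```c
#include <assert.h>

#define SZ_64K   0x10000UL
#define IO_LIMIT 0xfffffUL /* stand-in for IO_SPACE_LIMIT */

/* Corrected check from the patch: compare the window's last byte. */
static int window_fits(unsigned long offset)
{
    return offset + SZ_64K - 1 <= IO_LIMIT;
}
```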
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 192b3ba07075..f85be2f8b140 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -117,11 +117,14 @@ static pte_t get_clear_flush(struct mm_struct *mm,
+ 
+ 		/*
+ 		 * If HW_AFDBM is enabled, then the HW could turn on
+-		 * the dirty bit for any page in the set, so check
+-		 * them all.  All hugetlb entries are already young.
++		 * the dirty or accessed bit for any page in the set,
++		 * so check them all.
+ 		 */
+ 		if (pte_dirty(pte))
+ 			orig_pte = pte_mkdirty(orig_pte);
++
++		if (pte_young(pte))
++			orig_pte = pte_mkyoung(orig_pte);
+ 	}
+ 
+ 	if (valid) {
+@@ -340,10 +343,13 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ 	if (!pte_same(orig_pte, pte))
+ 		changed = 1;
+ 
+-	/* Make sure we don't lose the dirty state */
++	/* Make sure we don't lose the dirty or young state */
+ 	if (pte_dirty(orig_pte))
+ 		pte = pte_mkdirty(pte);
+ 
++	if (pte_young(orig_pte))
++		pte = pte_mkyoung(pte);
++
+ 	hugeprot = pte_pgprot(pte);
+ 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+ 		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
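The hugetlb hunks above fold the dirty and the (newly handled) young bit from every PTE of a contiguous set into one representative entry. The accumulation pattern, with toy flag bits rather than the real arm64 PTE layout:

```c
#include <assert.h>

#define PTE_DIRTY 0x1u /* hypothetical flag bits for illustration */
#define PTE_YOUNG 0x2u

/* OR together dirty/young from every entry, as get_clear_flush()
 * does across a contiguous hugepage range after the fix. */
static unsigned int fold_pte_bits(const unsigned int *ptes, int n)
{
    unsigned int orig = 0;
    int i;

    for (i = 0; i < n; i++)
        orig |= ptes[i] & (PTE_DIRTY | PTE_YOUNG);
    return orig;
}
```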
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 59d07bd5374a..055b211b7126 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1217,9 +1217,10 @@ int find_and_online_cpu_nid(int cpu)
+ 		 * Need to ensure that NODE_DATA is initialized for a node from
+ 		 * available memory (see memblock_alloc_try_nid). If unable to
+ 		 * init the node, then default to nearest node that has memory
+-		 * installed.
++		 * installed. Skip onlining a node if the subsystems are not
++		 * yet initialized.
+ 		 */
+-		if (try_online_node(new_nid))
++		if (!topology_inited || try_online_node(new_nid))
+ 			new_nid = first_online_node;
+ #else
+ 		/*
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 0efa5b29d0a3..dcff272aee06 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -165,7 +165,7 @@ static void __init setup_bootmem(void)
+ 	BUG_ON(mem_size == 0);
+ 
+ 	set_max_mapnr(PFN_DOWN(mem_size));
+-	max_low_pfn = pfn_base + PFN_DOWN(mem_size);
++	max_low_pfn = memblock_end_of_DRAM();
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	setup_initrd();
+diff --git a/arch/sparc/include/asm/cpudata_64.h b/arch/sparc/include/asm/cpudata_64.h
+index 666d6b5c0440..9c3fc03abe9a 100644
+--- a/arch/sparc/include/asm/cpudata_64.h
++++ b/arch/sparc/include/asm/cpudata_64.h
+@@ -28,7 +28,7 @@ typedef struct {
+ 	unsigned short	sock_id;	/* physical package */
+ 	unsigned short	core_id;
+ 	unsigned short  max_cache_id;	/* groupings of highest shared cache */
+-	unsigned short	proc_id;	/* strand (aka HW thread) id */
++	signed short	proc_id;	/* strand (aka HW thread) id */
+ } cpuinfo_sparc;
+ 
+ DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
+diff --git a/arch/sparc/include/asm/switch_to_64.h b/arch/sparc/include/asm/switch_to_64.h
+index 4ff29b1406a9..b1d4e2e3210f 100644
+--- a/arch/sparc/include/asm/switch_to_64.h
++++ b/arch/sparc/include/asm/switch_to_64.h
+@@ -67,6 +67,7 @@ do {	save_and_clear_fpu();						\
+ } while(0)
+ 
+ void synchronize_user_stack(void);
+-void fault_in_user_windows(void);
++struct pt_regs;
++void fault_in_user_windows(struct pt_regs *);
+ 
+ #endif /* __SPARC64_SWITCH_TO_64_H */
+diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
+index d3149baaa33c..67b3e6b3ce5d 100644
+--- a/arch/sparc/kernel/perf_event.c
++++ b/arch/sparc/kernel/perf_event.c
+@@ -24,6 +24,7 @@
+ #include <asm/cpudata.h>
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
++#include <linux/sched/clock.h>
+ #include <asm/nmi.h>
+ #include <asm/pcr.h>
+ #include <asm/cacheflush.h>
+@@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
+ 			sparc_perf_event_update(cp, &cp->hw,
+ 						cpuc->current_idx[i]);
+ 			cpuc->current_idx[i] = PIC_NO_INDEX;
++			if (cp->hw.state & PERF_HES_STOPPED)
++				cp->hw.state |= PERF_HES_ARCH;
+ 		}
+ 	}
+ }
+@@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
+ 
+ 		enc = perf_event_get_enc(cpuc->events[i]);
+ 		cpuc->pcr[0] &= ~mask_for_index(idx);
+-		if (hwc->state & PERF_HES_STOPPED)
++		if (hwc->state & PERF_HES_ARCH) {
+ 			cpuc->pcr[0] |= nop_for_index(idx);
+-		else
++		} else {
+ 			cpuc->pcr[0] |= event_encoding(enc, idx);
++			hwc->state = 0;
++		}
+ 	}
+ out:
+ 	cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
+@@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
+ 
+ 		cpuc->current_idx[i] = idx;
+ 
++		if (cp->hw.state & PERF_HES_ARCH)
++			continue;
++
+ 		sparc_pmu_start(cp, PERF_EF_RELOAD);
+ 	}
+ out:
+@@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
+ 	event->hw.state = 0;
+ 
+ 	sparc_pmu_enable_event(cpuc, &event->hw, idx);
++
++	perf_event_update_userpage(event);
+ }
+ 
+ static void sparc_pmu_stop(struct perf_event *event, int flags)
+@@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
+ 	cpuc->events[n0] = event->hw.event_base;
+ 	cpuc->current_idx[n0] = PIC_NO_INDEX;
+ 
+-	event->hw.state = PERF_HES_UPTODATE;
++	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ 	if (!(ef_flags & PERF_EF_START))
+-		event->hw.state |= PERF_HES_STOPPED;
++		event->hw.state |= PERF_HES_ARCH;
+ 
+ 	/*
+ 	 * If group events scheduling transaction was started,
+@@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc;
+ 	struct pt_regs *regs;
++	u64 finish_clock;
++	u64 start_clock;
+ 	int i;
+ 
+ 	if (!atomic_read(&active_events))
+@@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 		return NOTIFY_DONE;
+ 	}
+ 
++	start_clock = sched_clock();
++
+ 	regs = args->regs;
+ 
+ 	cpuc = this_cpu_ptr(&cpu_hw_events);
+@@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 			sparc_pmu_stop(event, 0);
+ 	}
+ 
++	finish_clock = sched_clock();
++
++	perf_sample_event_took(finish_clock - start_clock);
++
+ 	return NOTIFY_STOP;
+ }
+ 
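One of the sparc perf changes above brackets the NMI handler with `sched_clock()` reads and feeds the delta to `perf_sample_event_took()` for throttling. The bracketing pattern, made deterministic with a fake clock:

```c
#include <assert.h>

static unsigned long long fake_ns; /* fake monotonic clock for the sketch */

static unsigned long long sched_clock_stub(void)
{
    return fake_ns;
}

static void handler_stub(void)
{
    fake_ns += 500; /* pretend the handler took 500 ns */
}

/* Read the clock before and after, return the elapsed time. */
static unsigned long long timed_call(void (*fn)(void))
{
    unsigned long long start = sched_clock_stub();

    fn();
    return sched_clock_stub() - start;
}
```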
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index 6c086086ca8f..59eaf6227af1 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -36,6 +36,7 @@
+ #include <linux/sysrq.h>
+ #include <linux/nmi.h>
+ #include <linux/context_tracking.h>
++#include <linux/signal.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/page.h>
+@@ -521,7 +522,12 @@ static void stack_unaligned(unsigned long sp)
+ 	force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *) sp, 0, current);
+ }
+ 
+-void fault_in_user_windows(void)
++static const char uwfault32[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %08lx (orig_sp %08lx) TPC %08lx O7 %08lx\n";
++static const char uwfault64[] = KERN_INFO \
++	"%s[%d]: bad register window fault: SP %016lx (orig_sp %016lx) TPC %08lx O7 %016lx\n";
++
++void fault_in_user_windows(struct pt_regs *regs)
+ {
+ 	struct thread_info *t = current_thread_info();
+ 	unsigned long window;
+@@ -534,9 +540,9 @@ void fault_in_user_windows(void)
+ 		do {
+ 			struct reg_window *rwin = &t->reg_window[window];
+ 			int winsize = sizeof(struct reg_window);
+-			unsigned long sp;
++			unsigned long sp, orig_sp;
+ 
+-			sp = t->rwbuf_stkptrs[window];
++			orig_sp = sp = t->rwbuf_stkptrs[window];
+ 
+ 			if (test_thread_64bit_stack(sp))
+ 				sp += STACK_BIAS;
+@@ -547,8 +553,16 @@ void fault_in_user_windows(void)
+ 				stack_unaligned(sp);
+ 
+ 			if (unlikely(copy_to_user((char __user *)sp,
+-						  rwin, winsize)))
++						  rwin, winsize))) {
++				if (show_unhandled_signals)
++					printk_ratelimited(is_compat_task() ?
++							   uwfault32 : uwfault64,
++							   current->comm, current->pid,
++							   sp, orig_sp,
++							   regs->tpc,
++							   regs->u_regs[UREG_I7]);
+ 				goto barf;
++			}
+ 		} while (window--);
+ 	}
+ 	set_thread_wsaved(0);
+@@ -556,8 +570,7 @@ void fault_in_user_windows(void)
+ 
+ barf:
+ 	set_thread_wsaved(window + 1);
+-	user_exit();
+-	do_exit(SIGILL);
++	force_sig(SIGSEGV, current);
+ }
+ 
+ asmlinkage long sparc_do_fork(unsigned long clone_flags,
+diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
+index f6528884a2c8..29aa34f11720 100644
+--- a/arch/sparc/kernel/rtrap_64.S
++++ b/arch/sparc/kernel/rtrap_64.S
+@@ -39,6 +39,7 @@ __handle_preemption:
+ 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
+ 
+ __handle_user_windows:
++		add			%sp, PTREGS_OFF, %o0
+ 		call			fault_in_user_windows
+ 661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+ 		/* If userspace is using ADI, it could potentially pass
+@@ -84,8 +85,9 @@ __handle_signal:
+ 		ldx			[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
+ 		sethi			%hi(0xf << 20), %l4
+ 		and			%l1, %l4, %l4
++		andn			%l1, %l4, %l1
+ 		ba,pt			%xcc, __handle_preemption_continue
+-		 andn			%l1, %l4, %l1
++		 srl			%l4, 20, %l4
+ 
+ 		/* When returning from a NMI (%pil==15) interrupt we want to
+ 		 * avoid running softirqs, doing IRQ tracing, preempting, etc.
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index 44d379db3f64..4c5b3fcbed94 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -371,7 +371,11 @@ static int setup_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -501,7 +505,11 @@ static int setup_rt_frame32(struct ksignal *ksig, struct pt_regs *regs,
+ 		get_sigframe(ksig, regs, sigframe_size);
+ 	
+ 	if (invalid_frame_pointer(sf, sigframe_size)) {
+-		do_exit(SIGILL);
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame32: %08lx TPC %08lx O7 %08lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 48366e5eb5b2..e9de1803a22e 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -370,7 +370,11 @@ setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs)
+ 		get_sigframe(ksig, regs, sf_size);
+ 
+ 	if (invalid_frame_pointer (sf)) {
+-		do_exit(SIGILL);	/* won't return, actually */
++		if (show_unhandled_signals)
++			pr_info("%s[%d] bad frame in setup_rt_frame: %016lx TPC %016lx O7 %016lx\n",
++				current->comm, current->pid, (unsigned long)sf,
++				regs->tpc, regs->u_regs[UREG_I7]);
++		force_sigsegv(ksig->sig, current);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/arch/sparc/kernel/systbls_64.S b/arch/sparc/kernel/systbls_64.S
+index 387ef993880a..25699462ad5b 100644
+--- a/arch/sparc/kernel/systbls_64.S
++++ b/arch/sparc/kernel/systbls_64.S
+@@ -47,9 +47,9 @@ sys_call_table32:
+ 	.word sys_recvfrom, sys_setreuid16, sys_setregid16, sys_rename, compat_sys_truncate
+ /*130*/	.word compat_sys_ftruncate, sys_flock, compat_sys_lstat64, sys_sendto, sys_shutdown
+ 	.word sys_socketpair, sys_mkdir, sys_rmdir, compat_sys_utimes, compat_sys_stat64
+-/*140*/	.word sys_sendfile64, sys_nis_syscall, compat_sys_futex, sys_gettid, compat_sys_getrlimit
++/*140*/	.word sys_sendfile64, sys_getpeername, compat_sys_futex, sys_gettid, compat_sys_getrlimit
+ 	.word compat_sys_setrlimit, sys_pivot_root, sys_prctl, sys_pciconfig_read, sys_pciconfig_write
+-/*150*/	.word sys_nis_syscall, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
++/*150*/	.word sys_getsockname, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
+ 	.word compat_sys_fcntl64, sys_inotify_rm_watch, compat_sys_statfs, compat_sys_fstatfs, sys_oldumount
+ /*160*/	.word compat_sys_sched_setaffinity, compat_sys_sched_getaffinity, sys_getdomainname, sys_setdomainname, sys_nis_syscall
+ 	.word sys_quotactl, sys_set_tid_address, compat_sys_mount, compat_sys_ustat, sys_setxattr
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index f396048a0d68..39822f611c01 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -1383,6 +1383,7 @@ int __node_distance(int from, int to)
+ 	}
+ 	return numa_latency[from][to];
+ }
++EXPORT_SYMBOL(__node_distance);
+ 
+ static int __init find_best_numa_node_for_mlgroup(struct mdesc_mlgroup *grp)
+ {
+diff --git a/arch/sparc/vdso/vclock_gettime.c b/arch/sparc/vdso/vclock_gettime.c
+index 3feb3d960ca5..75dca9aab737 100644
+--- a/arch/sparc/vdso/vclock_gettime.c
++++ b/arch/sparc/vdso/vclock_gettime.c
+@@ -33,9 +33,19 @@
+ #define	TICK_PRIV_BIT	(1ULL << 63)
+ #endif
+ 
++#ifdef	CONFIG_SPARC64
+ #define SYSCALL_STRING							\
+ 	"ta	0x6d;"							\
+-	"sub	%%g0, %%o0, %%o0;"					\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#else
++#define SYSCALL_STRING							\
++	"ta	0x10;"							\
++	"bcs,a	1f;"							\
++	" sub	%%g0, %%o0, %%o0;"					\
++	"1:"
++#endif
+ 
+ #define SYSCALL_CLOBBERS						\
+ 	"f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7",			\
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 981ba5e8241b..8671de126eac 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -36,6 +36,7 @@
+ 
+ static int num_counters_llc;
+ static int num_counters_nb;
++static bool l3_mask;
+ 
+ static HLIST_HEAD(uncore_unused_list);
+ 
+@@ -209,6 +210,13 @@ static int amd_uncore_event_init(struct perf_event *event)
+ 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
+ 	hwc->idx = -1;
+ 
++	/*
++	 * SliceMask and ThreadMask need to be set for certain L3 events in
++	 * Family 17h. For other events, the two fields do not affect the count.
++	 */
++	if (l3_mask)
++		hwc->config |= (AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK);
++
+ 	if (event->cpu < 0)
+ 		return -EINVAL;
+ 
+@@ -525,6 +533,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l3";
+ 		format_attr_event_df.show = &event_show_df;
+ 		format_attr_event_l3.show = &event_show_l3;
++		l3_mask			  = true;
+ 	} else {
+ 		num_counters_nb		  = NUM_COUNTERS_NB;
+ 		num_counters_llc	  = NUM_COUNTERS_L2;
+@@ -532,6 +541,7 @@ static int __init amd_uncore_init(void)
+ 		amd_llc_pmu.name	  = "amd_l2";
+ 		format_attr_event_df	  = format_attr_event;
+ 		format_attr_event_l3	  = format_attr_event;
++		l3_mask			  = false;
+ 	}
+ 
+ 	amd_nb_pmu.attr_groups	= amd_uncore_attr_groups_df;
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 51d7c117e3c7..c07bee31abe8 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3061,7 +3061,7 @@ static struct event_constraint bdx_uncore_pcu_constraints[] = {
+ 
+ void bdx_uncore_cpu_init(void)
+ {
+-	int pkg = topology_phys_to_logical_pkg(0);
++	int pkg = topology_phys_to_logical_pkg(boot_cpu_data.phys_proc_id);
+ 
+ 	if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ 		bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+@@ -3931,16 +3931,16 @@ static const struct pci_device_id skx_uncore_pci_ids[] = {
+ 		.driver_data = UNCORE_PCI_DEV_FULL_DATA(21, 5, SKX_PCI_UNCORE_M2PCIE, 3),
+ 	},
+ 	{ /* M3UPI0 Link 0 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 0, SKX_PCI_UNCORE_M3UPI, 0),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 0),
+ 	},
+ 	{ /* M3UPI0 Link 1 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 1),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204E),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 2, SKX_PCI_UNCORE_M3UPI, 1),
+ 	},
+ 	{ /* M3UPI1 Link 2 */
+-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204C),
+-		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 4, SKX_PCI_UNCORE_M3UPI, 2),
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
++		.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 5, SKX_PCI_UNCORE_M3UPI, 2),
+ 	},
+ 	{ /* end: all zeroes */ }
+ };
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 12f54082f4c8..78241b736f2a 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -46,6 +46,14 @@
+ #define INTEL_ARCH_EVENT_MASK	\
+ 	(ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)
+ 
++#define AMD64_L3_SLICE_SHIFT				48
++#define AMD64_L3_SLICE_MASK				\
++	((0xFULL) << AMD64_L3_SLICE_SHIFT)
++
++#define AMD64_L3_THREAD_SHIFT				56
++#define AMD64_L3_THREAD_MASK				\
++	((0xFFULL) << AMD64_L3_THREAD_SHIFT)
++
+ #define X86_RAW_EVENT_MASK		\
+ 	(ARCH_PERFMON_EVENTSEL_EVENT |	\
+ 	 ARCH_PERFMON_EVENTSEL_UMASK |	\
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 930c88341e4e..1fbf38dde84c 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -90,7 +90,7 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+@@ -110,7 +110,7 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 
+ 	if (len < 5) {
+ #ifdef CONFIG_RETPOLINE
+-		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++		WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
+ #endif
+ 		return len;	/* call too long for patch site */
+ 	}
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index ef772e5634d4..3e59a187fe30 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -436,14 +436,18 @@ static inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
+ 
+ static inline bool svm_sev_enabled(void)
+ {
+-	return max_sev_asid;
++	return IS_ENABLED(CONFIG_KVM_AMD_SEV) ? max_sev_asid : 0;
+ }
+ 
+ static inline bool sev_guest(struct kvm *kvm)
+ {
++#ifdef CONFIG_KVM_AMD_SEV
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 
+ 	return sev->active;
++#else
++	return false;
++#endif
+ }
+ 
+ static inline int sev_get_asid(struct kvm *kvm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 32721ef9652d..9efe130ea2e6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -819,6 +819,7 @@ struct nested_vmx {
+ 
+ 	/* to migrate it to L2 if VM_ENTRY_LOAD_DEBUG_CONTROLS is off */
+ 	u64 vmcs01_debugctl;
++	u64 vmcs01_guest_bndcfgs;
+ 
+ 	u16 vpid02;
+ 	u16 last_vpid;
+@@ -3395,9 +3396,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_EXIT_LOAD_IA32_EFER | VM_EXIT_SAVE_IA32_EFER |
+ 		VM_EXIT_SAVE_VMX_PREEMPTION_TIMER | VM_EXIT_ACK_INTR_ON_EXIT;
+ 
+-	if (kvm_mpx_supported())
+-		msrs->exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
+-
+ 	/* We support free control of debug control saving. */
+ 	msrs->exit_ctls_low &= ~VM_EXIT_SAVE_DEBUG_CONTROLS;
+ 
+@@ -3414,8 +3412,6 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv)
+ 		VM_ENTRY_LOAD_IA32_PAT;
+ 	msrs->entry_ctls_high |=
+ 		(VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | VM_ENTRY_LOAD_IA32_EFER);
+-	if (kvm_mpx_supported())
+-		msrs->entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
+ 
+ 	/* We support free control of debug control loading. */
+ 	msrs->entry_ctls_low &= ~VM_ENTRY_LOAD_DEBUG_CONTROLS;
+@@ -10825,6 +10821,23 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
+ #undef cr4_fixed1_update
+ }
+ 
++static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
++{
++	struct vcpu_vmx *vmx = to_vmx(vcpu);
++
++	if (kvm_mpx_supported()) {
++		bool mpx_enabled = guest_cpuid_has(vcpu, X86_FEATURE_MPX);
++
++		if (mpx_enabled) {
++			vmx->nested.msrs.entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
++		} else {
++			vmx->nested.msrs.entry_ctls_high &= ~VM_ENTRY_LOAD_BNDCFGS;
++			vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
++		}
++	}
++}
++
+ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+@@ -10841,8 +10854,10 @@ static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+ 		to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &=
+ 			~FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;
+ 
+-	if (nested_vmx_allowed(vcpu))
++	if (nested_vmx_allowed(vcpu)) {
+ 		nested_vmx_cr_fixed1_bits_update(vcpu);
++		nested_vmx_entry_exit_ctls_update(vcpu);
++	}
+ }
+ 
+ static void vmx_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
+@@ -11553,8 +11568,13 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+-	if (vmx_mpx_supported())
+-		vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++	if (kvm_mpx_supported()) {
++		if (vmx->nested.nested_run_pending &&
++			(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++			vmcs_write64(GUEST_BNDCFGS, vmcs12->guest_bndcfgs);
++		else
++			vmcs_write64(GUEST_BNDCFGS, vmx->nested.vmcs01_guest_bndcfgs);
++	}
+ 
+ 	if (enable_vpid) {
+ 		if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
+@@ -12068,6 +12088,9 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
+ 
+ 	if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ 		vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
++	if (kvm_mpx_supported() &&
++		!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_BNDCFGS))
++		vmx->nested.vmcs01_guest_bndcfgs = vmcs_read64(GUEST_BNDCFGS);
+ 
+ 	vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ 	vmx_segment_cache_clear(vmx);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 97fcac34e007..3cd58a5eb449 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4625,7 +4625,7 @@ static void kvm_init_msr_list(void)
+ 		 */
+ 		switch (msrs_to_save[i]) {
+ 		case MSR_IA32_BNDCFGS:
+-			if (!kvm_x86_ops->mpx_supported())
++			if (!kvm_mpx_supported())
+ 				continue;
+ 			break;
+ 		case MSR_TSC_AUX:
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 6f7637b19738..e764dfdea53f 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -419,7 +419,6 @@ static unsigned int armada_3700_pm_dvfs_get_cpu_parent(struct regmap *base)
+ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ {
+ 	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
+ 	u32 val;
+ 
+ 	if (armada_3700_pm_dvfs_is_enabled(pm_cpu->nb_pm_base)) {
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 06dce16e22bb..70f0dedca59f 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1675,7 +1675,8 @@ static void gpiochip_set_cascaded_irqchip(struct gpio_chip *gpiochip,
+ 		irq_set_chained_handler_and_data(parent_irq, parent_handler,
+ 						 gpiochip);
+ 
+-		gpiochip->irq.parents = &parent_irq;
++		gpiochip->irq.parent_irq = parent_irq;
++		gpiochip->irq.parents = &gpiochip->irq.parent_irq;
+ 		gpiochip->irq.num_parents = 1;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e484d0a94bdc..5b9cc3aeaa55 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4494,12 +4494,18 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ 	}
+ 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);
+ 
+-	/* Signal HW programming completion */
+-	drm_atomic_helper_commit_hw_done(state);
+ 
+ 	if (wait_for_vblank)
+ 		drm_atomic_helper_wait_for_flip_done(dev, state);
+ 
++	/*
++	 * FIXME:
++	 * Delay hw_done() until flip_done() is signaled. This is to block
++	 * another commit from freeing the CRTC state while we're still
++	 * waiting on flip_done.
++	 */
++	drm_atomic_helper_commit_hw_done(state);
++
+ 	drm_atomic_helper_cleanup_planes(dev, state);
+ 
+ 	/* Finally, drop a runtime PM reference for each newly disabled CRTC,
+diff --git a/drivers/gpu/drm/i2c/tda9950.c b/drivers/gpu/drm/i2c/tda9950.c
+index 3f7396caad48..ccd355d0c123 100644
+--- a/drivers/gpu/drm/i2c/tda9950.c
++++ b/drivers/gpu/drm/i2c/tda9950.c
+@@ -188,7 +188,8 @@ static irqreturn_t tda9950_irq(int irq, void *data)
+ 			break;
+ 		}
+ 		/* TDA9950 executes all retries for us */
+-		tx_status |= CEC_TX_STATUS_MAX_RETRIES;
++		if (tx_status != CEC_TX_STATUS_OK)
++			tx_status |= CEC_TX_STATUS_MAX_RETRIES;
+ 		cec_transmit_done(priv->adap, tx_status, arb_lost_cnt,
+ 				  nack_cnt, 0, err_cnt);
+ 		break;
+@@ -307,7 +308,7 @@ static void tda9950_release(struct tda9950_priv *priv)
+ 	/* Wait up to .5s for it to signal non-busy */
+ 	do {
+ 		csr = tda9950_read(client, REG_CSR);
+-		if (!(csr & CSR_BUSY) || --timeout)
++		if (!(csr & CSR_BUSY) || !--timeout)
+ 			break;
+ 		msleep(10);
+ 	} while (1);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index eee6b79fb131..ae5b72269e27 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -974,7 +974,6 @@
+ #define USB_DEVICE_ID_SIS817_TOUCH	0x0817
+ #define USB_DEVICE_ID_SIS_TS		0x1013
+ #define USB_DEVICE_ID_SIS1030_TOUCH	0x1030
+-#define USB_DEVICE_ID_SIS10FB_TOUCH	0x10fb
+ 
+ #define USB_VENDOR_ID_SKYCABLE			0x1223
+ #define	USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER	0x3F07
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 37013b58098c..d17cf6e323b2 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -47,8 +47,7 @@
+ /* quirks to control the device */
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+-#define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
+-#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(2)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -172,8 +171,6 @@ static const struct i2c_hid_quirks {
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+ 		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
+ 		I2C_HID_QUIRK_NO_RUNTIME_PM },
+-	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1241,22 +1238,13 @@ static int i2c_hid_resume(struct device *dev)
+ 
+ 	/* Instead of resetting device, simply powers the device on. This
+ 	 * solves "incomplete reports" on Raydium devices 2386:3118 and
+-	 * 2386:4B33
++	 * 2386:4B33 and fixes various SIS touchscreens no longer sending
++	 * data after a suspend/resume.
+ 	 */
+ 	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Some devices need to re-send report descr cmd
+-	 * after resume, after this it will be back normal.
+-	 * otherwise it issues too many incomplete reports.
+-	 */
+-	if (ihid->quirks & I2C_HID_QUIRK_RESEND_REPORT_DESCR) {
+-		ret = i2c_hid_command(client, &hid_report_descr_cmd, NULL, 0);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	if (hid->driver && hid->driver->reset_resume) {
+ 		ret = hid->driver->reset_resume(hid);
+ 		return ret;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 308456d28afb..73339fd47dd8 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -544,6 +544,9 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 	int shrink = 0;
+ 	int c;
+ 
++	if (!mr->allocated_from_cache)
++		return;
++
+ 	c = order2idx(dev, mr->order);
+ 	if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) {
+ 		mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c);
+@@ -1647,18 +1650,19 @@ static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ 		umem = NULL;
+ 	}
+ #endif
+-
+ 	clean_mr(dev, mr);
+ 
++	/*
++	 * We should unregister the DMA address from the HCA before
++	 * remove the DMA mapping.
++	 */
++	mlx5_mr_cache_free(dev, mr);
+ 	if (umem) {
+ 		ib_umem_release(umem);
+ 		atomic_sub(npages, &dev->mdev->priv.reg_pages);
+ 	}
+-
+ 	if (!mr->allocated_from_cache)
+ 		kfree(mr);
+-	else
+-		mlx5_mr_cache_free(dev, mr);
+ }
+ 
+ int mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
+index 9697977b80f0..6b9ad8673218 100644
+--- a/drivers/net/bonding/bond_netlink.c
++++ b/drivers/net/bonding/bond_netlink.c
+@@ -638,8 +638,7 @@ static int bond_fill_info(struct sk_buff *skb,
+ 				goto nla_put_failure;
+ 
+ 			if (nla_put(skb, IFLA_BOND_AD_ACTOR_SYSTEM,
+-				    sizeof(bond->params.ad_actor_system),
+-				    &bond->params.ad_actor_system))
++				    ETH_ALEN, &bond->params.ad_actor_system))
+ 				goto nla_put_failure;
+ 		}
+ 		if (!bond_3ad_get_active_agg_info(bond, &info)) {
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index 1b01cd2820ba..000f0d42a710 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1580,8 +1580,6 @@ static int ena_up_complete(struct ena_adapter *adapter)
+ 	if (rc)
+ 		return rc;
+ 
+-	ena_init_napi(adapter);
+-
+ 	ena_change_mtu(adapter->netdev, adapter->netdev->mtu);
+ 
+ 	ena_refill_all_rx_bufs(adapter);
+@@ -1735,6 +1733,13 @@ static int ena_up(struct ena_adapter *adapter)
+ 
+ 	ena_setup_io_intr(adapter);
+ 
++	/* napi poll functions should be initialized before running
++	 * request_irq(), to handle a rare condition where there is a pending
++	 * interrupt, causing the ISR to fire immediately while the poll
++	 * function wasn't set yet, causing a null dereference
++	 */
++	ena_init_napi(adapter);
++
+ 	rc = ena_request_io_irq(adapter);
+ 	if (rc)
+ 		goto err_req_irq;
+@@ -2648,7 +2653,11 @@ err_disable_msix:
+ 	ena_free_mgmnt_irq(adapter);
+ 	ena_disable_msix(adapter);
+ err_device_destroy:
++	ena_com_abort_admin_commands(ena_dev);
++	ena_com_wait_for_abort_completion(ena_dev);
+ 	ena_com_admin_destroy(ena_dev);
++	ena_com_mmio_reg_read_request_destroy(ena_dev);
++	ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE);
+ err:
+ 	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
+@@ -3128,15 +3137,8 @@ err_rss_init:
+ 
+ static void ena_release_bars(struct ena_com_dev *ena_dev, struct pci_dev *pdev)
+ {
+-	int release_bars;
+-
+-	if (ena_dev->mem_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->mem_bar);
+-
+-	if (ena_dev->reg_bar)
+-		devm_iounmap(&pdev->dev, ena_dev->reg_bar);
++	int release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 
+-	release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
+ 	pci_release_selected_regions(pdev, release_bars);
+ }
+ 
+diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c
+index 116997a8b593..00332a1ea84b 100644
+--- a/drivers/net/ethernet/amd/declance.c
++++ b/drivers/net/ethernet/amd/declance.c
+@@ -1031,6 +1031,7 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	int i, ret;
+ 	unsigned long esar_base;
+ 	unsigned char *esar;
++	const char *desc;
+ 
+ 	if (dec_lance_debug && version_printed++ == 0)
+ 		printk(version);
+@@ -1216,19 +1217,20 @@ static int dec_lance_probe(struct device *bdev, const int type)
+ 	 */
+ 	switch (type) {
+ 	case ASIC_LANCE:
+-		printk("%s: IOASIC onboard LANCE", name);
++		desc = "IOASIC onboard LANCE";
+ 		break;
+ 	case PMAD_LANCE:
+-		printk("%s: PMAD-AA", name);
++		desc = "PMAD-AA";
+ 		break;
+ 	case PMAX_LANCE:
+-		printk("%s: PMAX onboard LANCE", name);
++		desc = "PMAX onboard LANCE";
+ 		break;
+ 	}
+ 	for (i = 0; i < 6; i++)
+ 		dev->dev_addr[i] = esar[i * 4];
+ 
+-	printk(", addr = %pM, irq = %d\n", dev->dev_addr, dev->irq);
++	printk("%s: %s, addr = %pM, irq = %d\n",
++	       name, desc, dev->dev_addr, dev->irq);
+ 
+ 	dev->netdev_ops = &lance_netdev_ops;
+ 	dev->watchdog_timeo = 5*HZ;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 4241ae928d4a..34af5f1569c8 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -321,9 +321,12 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ 	phydev->advertising = phydev->supported;
+ 
+ 	/* The internal PHY has its link interrupts routed to the
+-	 * Ethernet MAC ISRs
++	 * Ethernet MAC ISRs. On GENETv5 there is a hardware issue
++	 * that prevents the signaling of link UP interrupts when
++	 * the link operates at 10Mbps, so fallback to polling for
++	 * those versions of GENET.
+ 	 */
+-	if (priv->internal_phy)
++	if (priv->internal_phy && !GENET_IS_V5(priv))
+ 		dev->phydev->irq = PHY_IGNORE_INTERRUPT;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index dfa045f22ef1..db568232ff3e 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2089,6 +2089,7 @@ static void macb_configure_dma(struct macb *bp)
+ 		else
+ 			dmacfg &= ~GEM_BIT(TXCOEN);
+ 
++		dmacfg &= ~GEM_BIT(ADDR64);
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 		if (bp->hw_dma_cap & HW_DMA_CAP_64B)
+ 			dmacfg |= GEM_BIT(ADDR64);
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+index a19172dbe6be..c34ea385fe4a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -2159,6 +2159,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_QSET_PARAMS)
++			return -EINVAL;
+ 		if (t.qset_idx >= SGE_QSETS)
+ 			return -EINVAL;
+ 		if (!in_range(t.intr_lat, 0, M_NEWTIMER) ||
+@@ -2258,6 +2260,9 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
+ 
++		if (t.cmd != CHELSIO_GET_QSET_PARAMS)
++			return -EINVAL;
++
+ 		/* Display qsets for all ports when offload enabled */
+ 		if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
+ 			q1 = 0;
+@@ -2303,6 +2308,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&edata, useraddr, sizeof(edata)))
+ 			return -EFAULT;
++		if (edata.cmd != CHELSIO_SET_QSET_NUM)
++			return -EINVAL;
+ 		if (edata.val < 1 ||
+ 			(edata.val > 1 && !(adapter->flags & USING_MSIX)))
+ 			return -EINVAL;
+@@ -2343,6 +2350,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EPERM;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_LOAD_FW)
++			return -EINVAL;
+ 		/* Check t.len sanity ? */
+ 		fw_data = memdup_user(useraddr + sizeof(t), t.len);
+ 		if (IS_ERR(fw_data))
+@@ -2366,6 +2375,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SETMTUTAB)
++			return -EINVAL;
+ 		if (m.nmtus != NMTUS)
+ 			return -EINVAL;
+ 		if (m.mtus[0] < 81)	/* accommodate SACK */
+@@ -2407,6 +2418,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EBUSY;
+ 		if (copy_from_user(&m, useraddr, sizeof(m)))
+ 			return -EFAULT;
++		if (m.cmd != CHELSIO_SET_PM)
++			return -EINVAL;
+ 		if (!is_power_of_2(m.rx_pg_sz) ||
+ 			!is_power_of_2(m.tx_pg_sz))
+ 			return -EINVAL;	/* not power of 2 */
+@@ -2440,6 +2453,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EIO;	/* need the memory controllers */
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_GET_MEM)
++			return -EINVAL;
+ 		if ((t.addr & 7) || (t.len & 7))
+ 			return -EINVAL;
+ 		if (t.mem_id == MEM_CM)
+@@ -2492,6 +2507,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
+ 			return -EAGAIN;
+ 		if (copy_from_user(&t, useraddr, sizeof(t)))
+ 			return -EFAULT;
++		if (t.cmd != CHELSIO_SET_TRACE_FILTER)
++			return -EINVAL;
+ 
+ 		tp = (const struct trace_params *)&t.sip;
+ 		if (t.config_tx)
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 8f755009ff38..c8445a4135a9 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -3915,8 +3915,6 @@ static int be_enable_vxlan_offloads(struct be_adapter *adapter)
+ 	netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ 				   NETIF_F_TSO | NETIF_F_TSO6 |
+ 				   NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+-	netdev->features |= NETIF_F_GSO_UDP_TUNNEL;
+ 
+ 	dev_info(dev, "Enabled VxLAN offloads for UDP port %d\n",
+ 		 be16_to_cpu(port));
+@@ -3938,8 +3936,6 @@ static void be_disable_vxlan_offloads(struct be_adapter *adapter)
+ 	adapter->vxlan_port = 0;
+ 
+ 	netdev->hw_enc_features = 0;
+-	netdev->hw_features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+-	netdev->features &= ~(NETIF_F_GSO_UDP_TUNNEL);
+ }
+ 
+ static void be_calculate_vf_res(struct be_adapter *adapter, u16 num_vfs,
+@@ -5232,6 +5228,7 @@ static void be_netdev_init(struct net_device *netdev)
+ 	struct be_adapter *adapter = netdev_priv(netdev);
+ 
+ 	netdev->hw_features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6 |
++		NETIF_F_GSO_UDP_TUNNEL |
+ 		NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_RXCSUM |
+ 		NETIF_F_HW_VLAN_CTAG_TX;
+ 	if ((be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS))
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index 4778b663653e..bf80855dd0dd 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -452,6 +452,10 @@ struct bufdesc_ex {
+  * initialisation.
+  */
+ #define FEC_QUIRK_MIB_CLEAR		(1 << 15)
++/* Only i.MX25/i.MX27/i.MX28 controller supports FRBR,FRSR registers,
++ * those FIFO receive registers are resolved in other platforms.
++ */
++#define FEC_QUIRK_HAS_FRREG		(1 << 16)
+ 
+ struct bufdesc_prop {
+ 	int qid;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index c729665107f5..11f90bb2d2a9 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -90,14 +90,16 @@ static struct platform_device_id fec_devtype[] = {
+ 		.driver_data = 0,
+ 	}, {
+ 		.name = "imx25-fec",
+-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
++			       FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx27-fec",
+-		.driver_data = FEC_QUIRK_MIB_CLEAR,
++		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx28-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+-				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC,
++				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
++				FEC_QUIRK_HAS_FRREG,
+ 	}, {
+ 		.name = "imx6q-fec",
+ 		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+@@ -1157,7 +1159,7 @@ static void fec_enet_timeout_work(struct work_struct *work)
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+@@ -1272,7 +1274,7 @@ skb_done:
+ 
+ 		/* Since we have freed up a buffer, the ring is no longer full
+ 		 */
+-		if (netif_queue_stopped(ndev)) {
++		if (netif_tx_queue_stopped(nq)) {
+ 			entries_free = fec_enet_get_free_txdesc_num(txq);
+ 			if (entries_free >= txq->tx_wake_threshold)
+ 				netif_tx_wake_queue(nq);
+@@ -1745,7 +1747,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
+ 			napi_disable(&fep->napi);
+ 			netif_tx_lock_bh(ndev);
+ 			fec_restart(ndev);
+-			netif_wake_queue(ndev);
++			netif_tx_wake_all_queues(ndev);
+ 			netif_tx_unlock_bh(ndev);
+ 			napi_enable(&fep->napi);
+ 		}
+@@ -2163,7 +2165,13 @@ static void fec_enet_get_regs(struct net_device *ndev,
+ 	memset(buf, 0, regs->len);
+ 
+ 	for (i = 0; i < ARRAY_SIZE(fec_enet_register_offset); i++) {
+-		off = fec_enet_register_offset[i] / 4;
++		off = fec_enet_register_offset[i];
++
++		if ((off == FEC_R_BOUND || off == FEC_R_FSTART) &&
++		    !(fep->quirks & FEC_QUIRK_HAS_FRREG))
++			continue;
++
++		off >>= 2;
+ 		buf[off] = readl(&theregs[off]);
+ 	}
+ }
+@@ -2246,7 +2254,7 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
+ 		napi_disable(&fep->napi);
+ 		netif_tx_lock_bh(ndev);
+ 		fec_restart(ndev);
+-		netif_wake_queue(ndev);
++		netif_tx_wake_all_queues(ndev);
+ 		netif_tx_unlock_bh(ndev);
+ 		napi_enable(&fep->napi);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index d3a1dd20e41d..fb6c72cf70a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -429,10 +429,9 @@ static inline u16 mlx5e_icosq_wrap_cnt(struct mlx5e_icosq *sq)
+ 
+ static inline void mlx5e_fill_icosq_frag_edge(struct mlx5e_icosq *sq,
+ 					      struct mlx5_wq_cyc *wq,
+-					      u16 pi, u16 frag_pi)
++					      u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_sq_wqe_info *edge_wi, *wi = &sq->db.ico_wqe[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -451,15 +450,14 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 	struct mlx5e_umr_wqe *umr_wqe;
+ 	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
+-	u16 pi, frag_pi;
++	u16 pi, contig_wqebbs_room;
+ 	int err;
+ 	int i;
+ 
+ 	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-
+-	if (unlikely(frag_pi + MLX5E_UMR_WQEBBS > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_icosq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < MLX5E_UMR_WQEBBS)) {
++		mlx5e_fill_icosq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+ 	}
+ 
+@@ -693,43 +691,15 @@ static inline bool is_last_ethertype_ip(struct sk_buff *skb, int *network_depth)
+ 	return (ethertype == htons(ETH_P_IP) || ethertype == htons(ETH_P_IPV6));
+ }
+ 
+-static __be32 mlx5e_get_fcs(struct sk_buff *skb)
++static u32 mlx5e_get_fcs(const struct sk_buff *skb)
+ {
+-	int last_frag_sz, bytes_in_prev, nr_frags;
+-	u8 *fcs_p1, *fcs_p2;
+-	skb_frag_t *last_frag;
+-	__be32 fcs_bytes;
+-
+-	if (!skb_is_nonlinear(skb))
+-		return *(__be32 *)(skb->data + skb->len - ETH_FCS_LEN);
+-
+-	nr_frags = skb_shinfo(skb)->nr_frags;
+-	last_frag = &skb_shinfo(skb)->frags[nr_frags - 1];
+-	last_frag_sz = skb_frag_size(last_frag);
+-
+-	/* If all FCS data is in last frag */
+-	if (last_frag_sz >= ETH_FCS_LEN)
+-		return *(__be32 *)(skb_frag_address(last_frag) +
+-				   last_frag_sz - ETH_FCS_LEN);
+-
+-	fcs_p2 = (u8 *)skb_frag_address(last_frag);
+-	bytes_in_prev = ETH_FCS_LEN - last_frag_sz;
+-
+-	/* Find where the other part of the FCS is - Linear or another frag */
+-	if (nr_frags == 1) {
+-		fcs_p1 = skb_tail_pointer(skb);
+-	} else {
+-		skb_frag_t *prev_frag = &skb_shinfo(skb)->frags[nr_frags - 2];
+-
+-		fcs_p1 = skb_frag_address(prev_frag) +
+-			    skb_frag_size(prev_frag);
+-	}
+-	fcs_p1 -= bytes_in_prev;
++	const void *fcs_bytes;
++	u32 _fcs_bytes;
+ 
+-	memcpy(&fcs_bytes, fcs_p1, bytes_in_prev);
+-	memcpy(((u8 *)&fcs_bytes) + bytes_in_prev, fcs_p2, last_frag_sz);
++	fcs_bytes = skb_header_pointer(skb, skb->len - ETH_FCS_LEN,
++				       ETH_FCS_LEN, &_fcs_bytes);
+ 
+-	return fcs_bytes;
++	return __get_unaligned_cpu32(fcs_bytes);
+ }
+ 
+ static inline void mlx5e_handle_csum(struct net_device *netdev,
+@@ -762,8 +732,9 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 						 network_depth - ETH_HLEN,
+ 						 skb->csum);
+ 		if (unlikely(netdev->features & NETIF_F_RXFCS))
+-			skb->csum = csum_add(skb->csum,
+-					     (__force __wsum)mlx5e_get_fcs(skb));
++			skb->csum = csum_block_add(skb->csum,
++						   (__force __wsum)mlx5e_get_fcs(skb),
++						   skb->len - ETH_FCS_LEN);
+ 		stats->csum_complete++;
+ 		return;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index f29deb44bf3b..1e774d979c85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -287,10 +287,9 @@ dma_unmap_wqe_err:
+ 
+ static inline void mlx5e_fill_sq_frag_edge(struct mlx5e_txqsq *sq,
+ 					   struct mlx5_wq_cyc *wq,
+-					   u16 pi, u16 frag_pi)
++					   u16 pi, u16 nnops)
+ {
+ 	struct mlx5e_tx_wqe_info *edge_wi, *wi = &sq->db.wqe_info[pi];
+-	u8 nnops = mlx5_wq_cyc_get_frag_size(wq) - frag_pi;
+ 
+ 	edge_wi = wi + nnops;
+ 
+@@ -345,8 +344,8 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
++	u16 headlen, ihs, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+-	u16 headlen, ihs, frag_pi;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+ 	int num_dma;
+@@ -383,9 +382,9 @@ netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		mlx5e_sq_fetch_wqe(sq, &wqe, &pi);
+ 	}
+ 
+@@ -629,7 +628,7 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	struct mlx5e_tx_wqe_info *wi;
+ 
+ 	struct mlx5e_sq_stats *stats = sq->stats;
+-	u16 headlen, ihs, pi, frag_pi;
++	u16 headlen, ihs, pi, contig_wqebbs_room;
+ 	u16 ds_cnt, ds_cnt_inl = 0;
+ 	u8 num_wqebbs, opcode;
+ 	u32 num_bytes;
+@@ -665,13 +664,14 @@ netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+ 	}
+ 
+ 	num_wqebbs = DIV_ROUND_UP(ds_cnt, MLX5_SEND_WQEBB_NUM_DS);
+-	frag_pi = mlx5_wq_cyc_ctr2fragix(wq, sq->pc);
+-	if (unlikely(frag_pi + num_wqebbs > mlx5_wq_cyc_get_frag_size(wq))) {
++	pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
++	contig_wqebbs_room = mlx5_wq_cyc_get_contig_wqebbs(wq, pi);
++	if (unlikely(contig_wqebbs_room < num_wqebbs)) {
++		mlx5e_fill_sq_frag_edge(sq, wq, pi, contig_wqebbs_room);
+ 		pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-		mlx5e_fill_sq_frag_edge(sq, wq, pi, frag_pi);
+ 	}
+ 
+-	mlx5i_sq_fetch_wqe(sq, &wqe, &pi);
++	mlx5i_sq_fetch_wqe(sq, &wqe, pi);
+ 
+ 	/* fill wqe */
+ 	wi       = &sq->db.wqe_info[pi];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 406c23862f5f..01ccc8201052 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -269,7 +269,7 @@ static void eq_pf_process(struct mlx5_eq *eq)
+ 		case MLX5_PFAULT_SUBTYPE_WQE:
+ 			/* WQE based event */
+ 			pfault->type =
+-				be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24;
++				(be32_to_cpu(pf_eqe->wqe.pftype_wq) >> 24) & 0x7;
+ 			pfault->token =
+ 				be32_to_cpu(pf_eqe->wqe.token);
+ 			pfault->wqe.wq_num =
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+index 5645a4facad2..b8ee9101c506 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+@@ -245,7 +245,7 @@ static void *mlx5_fpga_ipsec_cmd_exec(struct mlx5_core_dev *mdev,
+ 		return ERR_PTR(res);
+ 	}
+ 
+-	/* Context will be freed by wait func after completion */
++	/* Context should be freed by the caller after completion. */
+ 	return context;
+ }
+ 
+@@ -418,10 +418,8 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	cmd.cmd = htonl(MLX5_FPGA_IPSEC_CMD_OP_SET_CAP);
+ 	cmd.flags = htonl(flags);
+ 	context = mlx5_fpga_ipsec_cmd_exec(mdev, &cmd, sizeof(cmd));
+-	if (IS_ERR(context)) {
+-		err = PTR_ERR(context);
+-		goto out;
+-	}
++	if (IS_ERR(context))
++		return PTR_ERR(context);
+ 
+ 	err = mlx5_fpga_ipsec_cmd_wait(context);
+ 	if (err)
+@@ -435,6 +433,7 @@ static int mlx5_fpga_ipsec_set_caps(struct mlx5_core_dev *mdev, u32 flags)
+ 	}
+ 
+ out:
++	kfree(context);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+index 08eac92fc26c..0982c579ec74 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.h
+@@ -109,12 +109,11 @@ struct mlx5i_tx_wqe {
+ 
+ static inline void mlx5i_sq_fetch_wqe(struct mlx5e_txqsq *sq,
+ 				      struct mlx5i_tx_wqe **wqe,
+-				      u16 *pi)
++				      u16 pi)
+ {
+ 	struct mlx5_wq_cyc *wq = &sq->wq;
+ 
+-	*pi  = mlx5_wq_cyc_ctr2ix(wq, sq->pc);
+-	*wqe = mlx5_wq_cyc_get_wqe(wq, *pi);
++	*wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ 	memset(*wqe, 0, sizeof(**wqe));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index d838af9539b1..9046475c531c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,11 +39,6 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+-{
+-	return wq->fbc.frag_sz_m1 + 1;
+-}
+-
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+ {
+ 	return wq->fbc.sz_m1 + 1;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 16476cc1a602..311256554520 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,6 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+@@ -140,11 +139,6 @@ static inline u16 mlx5_wq_cyc_ctr2ix(struct mlx5_wq_cyc *wq, u16 ctr)
+ 	return ctr & wq->fbc.sz_m1;
+ }
+ 
+-static inline u16 mlx5_wq_cyc_ctr2fragix(struct mlx5_wq_cyc *wq, u16 ctr)
+-{
+-	return ctr & wq->fbc.frag_sz_m1;
+-}
+-
+ static inline u16 mlx5_wq_cyc_get_head(struct mlx5_wq_cyc *wq)
+ {
+ 	return mlx5_wq_cyc_ctr2ix(wq, wq->wqe_ctr);
+@@ -160,6 +154,11 @@ static inline void *mlx5_wq_cyc_get_wqe(struct mlx5_wq_cyc *wq, u16 ix)
+ 	return mlx5_frag_buf_get_wqe(&wq->fbc, ix);
+ }
+ 
++static inline u16 mlx5_wq_cyc_get_contig_wqebbs(struct mlx5_wq_cyc *wq, u16 ix)
++{
++	return mlx5_frag_buf_get_idx_last_contig_stride(&wq->fbc, ix) - ix + 1;
++}
++
+ static inline int mlx5_wq_cyc_cc_bigger(u16 cc1, u16 cc2)
+ {
+ 	int equal   = (cc1 == cc2);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index f9c724752a32..13636a537f37 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -985,8 +985,8 @@ static int mlxsw_devlink_core_bus_device_reload(struct devlink *devlink,
+ 					     mlxsw_core->bus,
+ 					     mlxsw_core->bus_priv, true,
+ 					     devlink);
+-	if (err)
+-		mlxsw_core->reload_fail = true;
++	mlxsw_core->reload_fail = !!err;
++
+ 	return err;
+ }
+ 
+@@ -1126,8 +1126,15 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	const char *device_kind = mlxsw_core->bus_info->device_kind;
+ 	struct devlink *devlink = priv_to_devlink(mlxsw_core);
+ 
+-	if (mlxsw_core->reload_fail)
+-		goto reload_fail;
++	if (mlxsw_core->reload_fail) {
++		if (!reload)
++			/* Only the parts that were not de-initialized in the
++			 * failed reload attempt need to be de-initialized.
++			 */
++			goto reload_fail_deinit;
++		else
++			return;
++	}
+ 
+ 	if (mlxsw_core->driver->fini)
+ 		mlxsw_core->driver->fini(mlxsw_core);
+@@ -1140,9 +1147,12 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core,
+ 	if (!reload)
+ 		devlink_resources_unregister(devlink, NULL);
+ 	mlxsw_core->bus->fini(mlxsw_core->bus_priv);
+-	if (reload)
+-		return;
+-reload_fail:
++
++	return;
++
++reload_fail_deinit:
++	devlink_unregister(devlink);
++	devlink_resources_unregister(devlink, NULL);
+ 	devlink_free(devlink);
+ 	mlxsw_core_driver_put(device_kind);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 6cb43dda8232..9883e48d8a21 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -2307,8 +2307,6 @@ static void mlxsw_sp_switchdev_event_work(struct work_struct *work)
+ 		break;
+ 	case SWITCHDEV_FDB_DEL_TO_DEVICE:
+ 		fdb_info = &switchdev_work->fdb_info;
+-		if (!fdb_info->added_by_user)
+-			break;
+ 		mlxsw_sp_port_fdb_set(mlxsw_sp_port, fdb_info, false);
+ 		break;
+ 	case SWITCHDEV_FDB_ADD_TO_BRIDGE: /* fall through */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index 90a2b53096e2..51bbb0e5b514 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1710,7 +1710,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 
+ 		cm_info->local_ip[0] = ntohl(iph->daddr);
+ 		cm_info->remote_ip[0] = ntohl(iph->saddr);
+-		cm_info->ip_version = TCP_IPV4;
++		cm_info->ip_version = QED_TCP_IPV4;
+ 
+ 		ip_hlen = (iph->ihl) * sizeof(u32);
+ 		*payload_len = ntohs(iph->tot_len) - ip_hlen;
+@@ -1730,7 +1730,7 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ 			cm_info->remote_ip[i] =
+ 			    ntohl(ip6h->saddr.in6_u.u6_addr32[i]);
+ 		}
+-		cm_info->ip_version = TCP_IPV6;
++		cm_info->ip_version = QED_TCP_IPV6;
+ 
+ 		ip_hlen = sizeof(*ip6h);
+ 		*payload_len = ntohs(ip6h->payload_len);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_roce.c b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+index b5ce1581645f..79424e6f0976 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_roce.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_roce.c
+@@ -138,23 +138,16 @@ static void qed_rdma_copy_gids(struct qed_rdma_qp *qp, __le32 *src_gid,
+ 
+ static enum roce_flavor qed_roce_mode_to_flavor(enum roce_mode roce_mode)
+ {
+-	enum roce_flavor flavor;
+-
+ 	switch (roce_mode) {
+ 	case ROCE_V1:
+-		flavor = PLAIN_ROCE;
+-		break;
++		return PLAIN_ROCE;
+ 	case ROCE_V2_IPV4:
+-		flavor = RROCE_IPV4;
+-		break;
++		return RROCE_IPV4;
+ 	case ROCE_V2_IPV6:
+-		flavor = ROCE_V2_IPV6;
+-		break;
++		return RROCE_IPV6;
+ 	default:
+-		flavor = MAX_ROCE_MODE;
+-		break;
++		return MAX_ROCE_FLAVOR;
+ 	}
+-	return flavor;
+ }
+ 
+ void qed_roce_free_cid_pair(struct qed_hwfn *p_hwfn, u16 cid)
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+index 8de644b4721e..77b6248ad3b9 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+@@ -154,7 +154,7 @@ qed_set_pf_update_tunn_mode(struct qed_tunnel_info *p_tun,
+ static void qed_set_tunn_cls_info(struct qed_tunnel_info *p_tun,
+ 				  struct qed_tunnel_info *p_src)
+ {
+-	enum tunnel_clss type;
++	int type;
+ 
+ 	p_tun->b_update_rx_cls = p_src->b_update_rx_cls;
+ 	p_tun->b_update_tx_cls = p_src->b_update_tx_cls;
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+index be6ddde1a104..c4766e4ac485 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_vf.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
+@@ -413,7 +413,6 @@ static int qed_vf_pf_acquire(struct qed_hwfn *p_hwfn)
+ 	}
+ 
+ 	if (!p_iov->b_pre_fp_hsi &&
+-	    ETH_HSI_VER_MINOR &&
+ 	    (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR)) {
+ 		DP_INFO(p_hwfn,
+ 			"PF is using older fastpath HSI; %02x.%02x is configured\n",
+@@ -572,7 +571,7 @@ free_p_iov:
+ static void
+ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			   struct qed_tunn_update_type *p_src,
+-			   enum qed_tunn_clss mask, u8 *p_cls)
++			   enum qed_tunn_mode mask, u8 *p_cls)
+ {
+ 	if (p_src->b_update_mode) {
+ 		p_req->tun_mode_update_mask |= BIT(mask);
+@@ -587,7 +586,7 @@ __qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ static void
+ qed_vf_prep_tunn_req_tlv(struct vfpf_update_tunn_param_tlv *p_req,
+ 			 struct qed_tunn_update_type *p_src,
+-			 enum qed_tunn_clss mask,
++			 enum qed_tunn_mode mask,
+ 			 u8 *p_cls, struct qed_tunn_update_udp_port *p_port,
+ 			 u8 *p_update_port, u16 *p_udp_port)
+ {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 627c5cd8f786..f18087102d40 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7044,17 +7044,15 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
+ 	struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
+ 	struct net_device *dev = tp->dev;
+ 	u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
+-	int work_done= 0;
++	int work_done;
+ 	u16 status;
+ 
+ 	status = rtl_get_events(tp);
+ 	rtl_ack_events(tp, status & ~tp->event_slow);
+ 
+-	if (status & RTL_EVENT_NAPI_RX)
+-		work_done = rtl_rx(dev, tp, (u32) budget);
++	work_done = rtl_rx(dev, tp, (u32) budget);
+ 
+-	if (status & RTL_EVENT_NAPI_TX)
+-		rtl_tx(dev, tp);
++	rtl_tx(dev, tp);
+ 
+ 	if (status & tp->event_slow) {
+ 		enable_mask &= ~tp->event_slow;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index 5df1a608e566..541602d70c24 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -133,7 +133,7 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
+  */
+ int stmmac_mdio_reset(struct mii_bus *bus)
+ {
+-#if defined(CONFIG_STMMAC_PLATFORM)
++#if IS_ENABLED(CONFIG_STMMAC_PLATFORM)
+ 	struct net_device *ndev = bus->priv;
+ 	struct stmmac_priv *priv = netdev_priv(ndev);
+ 	unsigned int mii_address = priv->hw->mii.addr;
+diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
+index 16ec7af6ab7b..ba9df430fca6 100644
+--- a/drivers/net/hamradio/yam.c
++++ b/drivers/net/hamradio/yam.c
+@@ -966,6 +966,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 				 sizeof(struct yamdrv_ioctl_mcs));
+ 		if (IS_ERR(ym))
+ 			return PTR_ERR(ym);
++		if (ym->cmd != SIOCYAMSMCS)
++			return -EINVAL;
+ 		if (ym->bitrate > YAM_MAXBITRATE) {
+ 			kfree(ym);
+ 			return -EINVAL;
+@@ -981,6 +983,8 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 		if (copy_from_user(&yi, ifr->ifr_data, sizeof(struct yamdrv_ioctl_cfg)))
+ 			 return -EFAULT;
+ 
++		if (yi.cmd != SIOCYAMSCFG)
++			return -EINVAL;
+ 		if ((yi.cfg.mask & YAM_IOBASE) && netif_running(dev))
+ 			return -EINVAL;		/* Cannot change this parameter when up */
+ 		if ((yi.cfg.mask & YAM_IRQ) && netif_running(dev))
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index e95dd12edec4..023b8d0bf175 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -607,6 +607,9 @@ int asix_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c
+index 9e8ad372f419..2207f7a7d1ff 100644
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -566,6 +566,9 @@ ax88179_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= AX_MONITOR_MODE_RWLC;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index aeca484a75b8..2bb3a081ff10 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1401,19 +1401,10 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	pdata->wol = 0;
+-	if (wol->wolopts & WAKE_UCAST)
+-		pdata->wol |= WAKE_UCAST;
+-	if (wol->wolopts & WAKE_MCAST)
+-		pdata->wol |= WAKE_MCAST;
+-	if (wol->wolopts & WAKE_BCAST)
+-		pdata->wol |= WAKE_BCAST;
+-	if (wol->wolopts & WAKE_MAGIC)
+-		pdata->wol |= WAKE_MAGIC;
+-	if (wol->wolopts & WAKE_PHY)
+-		pdata->wol |= WAKE_PHY;
+-	if (wol->wolopts & WAKE_ARP)
+-		pdata->wol |= WAKE_ARP;
++	if (wol->wolopts & ~WAKE_ALL)
++		return -EINVAL;
++
++	pdata->wol = wol->wolopts;
+ 
+ 	device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+ 
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 1b07bb5e110d..9a55d75f7f10 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -4503,6 +4503,9 @@ static int rtl8152_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ 	if (!rtl_can_wakeup(tp))
+ 		return -EOPNOTSUPP;
+ 
++	if (wol->wolopts & ~WAKE_ANY)
++		return -EINVAL;
++
+ 	ret = usb_autopm_get_interface(tp->intf);
+ 	if (ret < 0)
+ 		goto out_set_wol;
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index b64b1ee56d2d..ec287c9741e8 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -731,6 +731,9 @@ static int smsc75xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 06b4d290784d..262e7a3c23cb 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -774,6 +774,9 @@ static int smsc95xx_ethtool_set_wol(struct net_device *net,
+ 	struct smsc95xx_priv *pdata = (struct smsc95xx_priv *)(dev->data[0]);
+ 	int ret;
+ 
++	if (wolinfo->wolopts & ~SUPPORTED_WAKE)
++		return -EINVAL;
++
+ 	pdata->wolopts = wolinfo->wolopts & SUPPORTED_WAKE;
+ 
+ 	ret = device_set_wakeup_enable(&dev->udev->dev, pdata->wolopts);
+diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
+index 9277a0f228df..35f39f23d881 100644
+--- a/drivers/net/usb/sr9800.c
++++ b/drivers/net/usb/sr9800.c
+@@ -421,6 +421,9 @@ sr_set_wol(struct net_device *net, struct ethtool_wolinfo *wolinfo)
+ 	struct usbnet *dev = netdev_priv(net);
+ 	u8 opt = 0;
+ 
++	if (wolinfo->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
++		return -EINVAL;
++
+ 	if (wolinfo->wolopts & WAKE_PHY)
+ 		opt |= SR_MONITOR_LINK;
+ 	if (wolinfo->wolopts & WAKE_MAGIC)
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2b6ec927809e..500e2d8f10bc 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2162,8 +2162,9 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
+ 	/* Make sure no work handler is accessing the device */
+ 	flush_work(&vi->config_work);
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_detach(vi->dev);
+-	netif_tx_disable(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	cancel_delayed_work_sync(&vi->refill);
+ 
+ 	if (netif_running(vi->dev)) {
+@@ -2199,7 +2200,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 		}
+ 	}
+ 
++	netif_tx_lock_bh(vi->dev);
+ 	netif_device_attach(vi->dev);
++	netif_tx_unlock_bh(vi->dev);
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 80e2c8595c7c..58dd217811c8 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -519,7 +519,6 @@ struct mac80211_hwsim_data {
+ 	int channels, idx;
+ 	bool use_chanctx;
+ 	bool destroy_on_close;
+-	struct work_struct destroy_work;
+ 	u32 portid;
+ 	char alpha2[2];
+ 	const struct ieee80211_regdomain *regd;
+@@ -2812,8 +2811,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 	hwsim_radios_generation++;
+ 	spin_unlock_bh(&hwsim_radio_lock);
+ 
+-	if (idx > 0)
+-		hwsim_mcast_new_radio(idx, info, param);
++	hwsim_mcast_new_radio(idx, info, param);
+ 
+ 	return idx;
+ 
+@@ -3442,30 +3440,27 @@ static struct genl_family hwsim_genl_family __ro_after_init = {
+ 	.n_mcgrps = ARRAY_SIZE(hwsim_mcgrps),
+ };
+ 
+-static void destroy_radio(struct work_struct *work)
+-{
+-	struct mac80211_hwsim_data *data =
+-		container_of(work, struct mac80211_hwsim_data, destroy_work);
+-
+-	hwsim_radios_generation++;
+-	mac80211_hwsim_del_radio(data, wiphy_name(data->hw->wiphy), NULL);
+-}
+-
+ static void remove_user_radios(u32 portid)
+ {
+ 	struct mac80211_hwsim_data *entry, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(entry, tmp, &hwsim_radios, list) {
+ 		if (entry->destroy_on_close && entry->portid == portid) {
+-			list_del(&entry->list);
++			list_move(&entry->list, &list);
+ 			rhashtable_remove_fast(&hwsim_radios_rht, &entry->rht,
+ 					       hwsim_rht_params);
+-			INIT_WORK(&entry->destroy_work, destroy_radio);
+-			queue_work(hwsim_wq, &entry->destroy_work);
++			hwsim_radios_generation++;
+ 		}
+ 	}
+ 	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(entry, tmp, &list, list) {
++		list_del(&entry->list);
++		mac80211_hwsim_del_radio(entry, wiphy_name(entry->hw->wiphy),
++					 NULL);
++	}
+ }
+ 
+ static int mac80211_hwsim_netlink_notify(struct notifier_block *nb,
+@@ -3523,6 +3518,7 @@ static __net_init int hwsim_init_net(struct net *net)
+ static void __net_exit hwsim_exit_net(struct net *net)
+ {
+ 	struct mac80211_hwsim_data *data, *tmp;
++	LIST_HEAD(list);
+ 
+ 	spin_lock_bh(&hwsim_radio_lock);
+ 	list_for_each_entry_safe(data, tmp, &hwsim_radios, list) {
+@@ -3533,17 +3529,19 @@ static void __net_exit hwsim_exit_net(struct net *net)
+ 		if (data->netgroup == hwsim_net_get_netgroup(&init_net))
+ 			continue;
+ 
+-		list_del(&data->list);
++		list_move(&data->list, &list);
+ 		rhashtable_remove_fast(&hwsim_radios_rht, &data->rht,
+ 				       hwsim_rht_params);
+ 		hwsim_radios_generation++;
+-		spin_unlock_bh(&hwsim_radio_lock);
++	}
++	spin_unlock_bh(&hwsim_radio_lock);
++
++	list_for_each_entry_safe(data, tmp, &list, list) {
++		list_del(&data->list);
+ 		mac80211_hwsim_del_radio(data,
+ 					 wiphy_name(data->hw->wiphy),
+ 					 NULL);
+-		spin_lock_bh(&hwsim_radio_lock);
+ 	}
+-	spin_unlock_bh(&hwsim_radio_lock);
+ 
+ 	ida_simple_remove(&hwsim_netgroup_ida, hwsim_net_get_netgroup(net));
+ }
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 43743c26c071..39bf85d0ade0 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1317,6 +1317,10 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+ 		if (priv->fw_ready) {
++			ret = lbs_suspend(priv);
++			if (ret)
++				return ret;
++
+ 			priv->power_up_on_resume = true;
+ 			if_sdio_power_off(card);
+ 		}
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index 3e18a68c2b03..054e66d93ed6 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -2472,6 +2472,7 @@ static int __qedi_probe(struct pci_dev *pdev, int mode)
+ 		/* start qedi context */
+ 		spin_lock_init(&qedi->hba_lock);
+ 		spin_lock_init(&qedi->task_idx_lock);
++		mutex_init(&qedi->stats_lock);
+ 	}
+ 	qedi_ops->ll2->register_cb_ops(qedi->cdev, &qedi_ll2_cb_ops, qedi);
+ 	qedi_ops->ll2->start(qedi->cdev, &params);
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index ecb22749df0b..8cc015183043 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -2729,6 +2729,9 @@ static int qman_alloc_range(struct gen_pool *p, u32 *result, u32 cnt)
+ {
+ 	unsigned long addr;
+ 
++	if (!p)
++		return -ENODEV;
++
+ 	addr = gen_pool_alloc(p, cnt);
+ 	if (!addr)
+ 		return -ENOMEM;
+diff --git a/drivers/soc/fsl/qe/ucc.c b/drivers/soc/fsl/qe/ucc.c
+index c646d8713861..681f7d4b7724 100644
+--- a/drivers/soc/fsl/qe/ucc.c
++++ b/drivers/soc/fsl/qe/ucc.c
+@@ -626,7 +626,7 @@ static u32 ucc_get_tdm_sync_shift(enum comm_dir mode, u32 tdm_num)
+ {
+ 	u32 shift;
+ 
+-	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : RX_SYNC_SHIFT_BASE;
++	shift = (mode == COMM_DIR_RX) ? RX_SYNC_SHIFT_BASE : TX_SYNC_SHIFT_BASE;
+ 	shift -= tdm_num * 2;
+ 
+ 	return shift;
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index 500911f16498..5bad9fdec5f8 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -653,14 +653,6 @@ icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	bool approved;
+ 	u64 route;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send
+-	 * XDomain connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
+ 	depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
+ 		ICM_LINK_INFO_DEPTH_SHIFT;
+@@ -950,14 +942,6 @@ icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
+ 	if (pkg->hdr.packet_id)
+ 		return;
+ 
+-	/*
+-	 * After NVM upgrade adding root switch device fails because we
+-	 * initiated reset. During that time ICM might still send device
+-	 * connected message which we ignore here.
+-	 */
+-	if (!tb->root_switch)
+-		return;
+-
+ 	route = get_route(pkg->route_hi, pkg->route_lo);
+ 	authorized = pkg->link_info & ICM_LINK_INFO_APPROVED;
+ 	security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
+@@ -1317,19 +1301,26 @@ static void icm_handle_notification(struct work_struct *work)
+ 
+ 	mutex_lock(&tb->lock);
+ 
+-	switch (n->pkg->code) {
+-	case ICM_EVENT_DEVICE_CONNECTED:
+-		icm->device_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_DEVICE_DISCONNECTED:
+-		icm->device_disconnected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_CONNECTED:
+-		icm->xdomain_connected(tb, n->pkg);
+-		break;
+-	case ICM_EVENT_XDOMAIN_DISCONNECTED:
+-		icm->xdomain_disconnected(tb, n->pkg);
+-		break;
++	/*
++	 * When the domain is stopped we flush its workqueue but before
++	 * that the root switch is removed. In that case we should treat
++	 * the queued events as being canceled.
++	 */
++	if (tb->root_switch) {
++		switch (n->pkg->code) {
++		case ICM_EVENT_DEVICE_CONNECTED:
++			icm->device_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_DEVICE_DISCONNECTED:
++			icm->device_disconnected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_CONNECTED:
++			icm->xdomain_connected(tb, n->pkg);
++			break;
++		case ICM_EVENT_XDOMAIN_DISCONNECTED:
++			icm->xdomain_disconnected(tb, n->pkg);
++			break;
++		}
+ 	}
+ 
+ 	mutex_unlock(&tb->lock);
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index f5a33e88e676..2d042150e41c 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -1147,5 +1147,5 @@ static void __exit nhi_unload(void)
+ 	tb_domain_exit();
+ }
+ 
+-fs_initcall(nhi_init);
++rootfs_initcall(nhi_init);
+ module_exit(nhi_unload);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index af842000188c..a25f6ea5c784 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -576,10 +576,6 @@ static int dw8250_probe(struct platform_device *pdev)
+ 	if (!data->skip_autocfg)
+ 		dw8250_setup_port(p);
+ 
+-#ifdef CONFIG_PM
+-	uart.capabilities |= UART_CAP_RPM;
+-#endif
+-
+ 	/* If we have a valid fifosize, try hooking up DMA */
+ 	if (p->fifosize) {
+ 		data->dma.rxconf.src_maxburst = p->fifosize / 4;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 560ed8711706..c4424cbd9943 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -30,6 +30,7 @@
+ #include <linux/sched/mm.h>
+ #include <linux/sched/signal.h>
+ #include <linux/interval_tree_generic.h>
++#include <linux/nospec.h>
+ 
+ #include "vhost.h"
+ 
+@@ -1362,6 +1363,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
+ 	if (idx >= d->nvqs)
+ 		return -ENOBUFS;
+ 
++	idx = array_index_nospec(idx, d->nvqs);
+ 	vq = d->vqs[idx];
+ 
+ 	mutex_lock(&vq->mutex);
+diff --git a/drivers/video/fbdev/pxa168fb.c b/drivers/video/fbdev/pxa168fb.c
+index def3a501acd6..d059d04c63ac 100644
+--- a/drivers/video/fbdev/pxa168fb.c
++++ b/drivers/video/fbdev/pxa168fb.c
+@@ -712,7 +712,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ 	/*
+ 	 * enable controller clock
+ 	 */
+-	clk_enable(fbi->clk);
++	clk_prepare_enable(fbi->clk);
+ 
+ 	pxa168fb_set_par(info);
+ 
+@@ -767,7 +767,7 @@ static int pxa168fb_probe(struct platform_device *pdev)
+ failed_free_cmap:
+ 	fb_dealloc_cmap(&info->cmap);
+ failed_free_clk:
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ failed_free_fbmem:
+ 	dma_free_coherent(fbi->dev, info->fix.smem_len,
+ 			info->screen_base, fbi->fb_start_dma);
+@@ -807,7 +807,7 @@ static int pxa168fb_remove(struct platform_device *pdev)
+ 	dma_free_wc(fbi->dev, PAGE_ALIGN(info->fix.smem_len),
+ 		    info->screen_base, info->fix.smem_start);
+ 
+-	clk_disable(fbi->clk);
++	clk_disable_unprepare(fbi->clk);
+ 
+ 	framebuffer_release(info);
+ 
+diff --git a/fs/afs/cell.c b/fs/afs/cell.c
+index f3d0bef16d78..6127f0fcd62c 100644
+--- a/fs/afs/cell.c
++++ b/fs/afs/cell.c
+@@ -514,6 +514,8 @@ static int afs_alloc_anon_key(struct afs_cell *cell)
+  */
+ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ {
++	struct hlist_node **p;
++	struct afs_cell *pcell;
+ 	int ret;
+ 
+ 	if (!cell->anonymous_key) {
+@@ -534,7 +536,18 @@ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
+ 		return ret;
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_add_tail(&cell->proc_link, &net->proc_cells);
++	for (p = &net->proc_cells.first; *p; p = &(*p)->next) {
++		pcell = hlist_entry(*p, struct afs_cell, proc_link);
++		if (strcmp(cell->name, pcell->name) < 0)
++			break;
++	}
++
++	cell->proc_link.pprev = p;
++	cell->proc_link.next = *p;
++	rcu_assign_pointer(*p, &cell->proc_link.next);
++	if (cell->proc_link.next)
++		cell->proc_link.next->pprev = &cell->proc_link.next;
++
+ 	afs_dynroot_mkdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 	return 0;
+@@ -550,7 +563,7 @@ static void afs_deactivate_cell(struct afs_net *net, struct afs_cell *cell)
+ 	afs_proc_cell_remove(cell);
+ 
+ 	mutex_lock(&net->proc_cells_lock);
+-	list_del_init(&cell->proc_link);
++	hlist_del_rcu(&cell->proc_link);
+ 	afs_dynroot_rmdir(net, cell);
+ 	mutex_unlock(&net->proc_cells_lock);
+ 
+diff --git a/fs/afs/dynroot.c b/fs/afs/dynroot.c
+index 174e843f0633..7de7223843cc 100644
+--- a/fs/afs/dynroot.c
++++ b/fs/afs/dynroot.c
+@@ -286,7 +286,7 @@ int afs_dynroot_populate(struct super_block *sb)
+ 		return -ERESTARTSYS;
+ 
+ 	net->dynroot_sb = sb;
+-	list_for_each_entry(cell, &net->proc_cells, proc_link) {
++	hlist_for_each_entry(cell, &net->proc_cells, proc_link) {
+ 		ret = afs_dynroot_mkdir(net, cell);
+ 		if (ret < 0)
+ 			goto error;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 9778df135717..270d1caa27c6 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -241,7 +241,7 @@ struct afs_net {
+ 	seqlock_t		cells_lock;
+ 
+ 	struct mutex		proc_cells_lock;
+-	struct list_head	proc_cells;
++	struct hlist_head	proc_cells;
+ 
+ 	/* Known servers.  Theoretically each fileserver can only be in one
+ 	 * cell, but in practice, people create aliases and subsets and there's
+@@ -319,7 +319,7 @@ struct afs_cell {
+ 	struct afs_net		*net;
+ 	struct key		*anonymous_key;	/* anonymous user key for this cell */
+ 	struct work_struct	manager;	/* Manager for init/deinit/dns */
+-	struct list_head	proc_link;	/* /proc cell list link */
++	struct hlist_node	proc_link;	/* /proc cell list link */
+ #ifdef CONFIG_AFS_FSCACHE
+ 	struct fscache_cookie	*cache;		/* caching cookie */
+ #endif
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index e84fe822a960..107427688edd 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -87,7 +87,7 @@ static int __net_init afs_net_init(struct net *net_ns)
+ 	timer_setup(&net->cells_timer, afs_cells_timer, 0);
+ 
+ 	mutex_init(&net->proc_cells_lock);
+-	INIT_LIST_HEAD(&net->proc_cells);
++	INIT_HLIST_HEAD(&net->proc_cells);
+ 
+ 	seqlock_init(&net->fs_lock);
+ 	net->fs_servers = RB_ROOT;
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 476dcbb79713..9101f62707af 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -33,9 +33,8 @@ static inline struct afs_net *afs_seq2net_single(struct seq_file *m)
+ static int afs_proc_cells_show(struct seq_file *m, void *v)
+ {
+ 	struct afs_cell *cell = list_entry(v, struct afs_cell, proc_link);
+-	struct afs_net *net = afs_seq2net(m);
+ 
+-	if (v == &net->proc_cells) {
++	if (v == SEQ_START_TOKEN) {
+ 		/* display header on line 1 */
+ 		seq_puts(m, "USE NAME\n");
+ 		return 0;
+@@ -50,12 +49,12 @@ static void *afs_proc_cells_start(struct seq_file *m, loff_t *_pos)
+ 	__acquires(rcu)
+ {
+ 	rcu_read_lock();
+-	return seq_list_start_head(&afs_seq2net(m)->proc_cells, *_pos);
++	return seq_hlist_start_head_rcu(&afs_seq2net(m)->proc_cells, *_pos);
+ }
+ 
+ static void *afs_proc_cells_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+-	return seq_list_next(v, &afs_seq2net(m)->proc_cells, pos);
++	return seq_hlist_next_rcu(v, &afs_seq2net(m)->proc_cells, pos);
+ }
+ 
+ static void afs_proc_cells_stop(struct seq_file *m, void *v)
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 3aef8630a4b9..95d2c716e0da 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -681,6 +681,7 @@ int fat_count_free_clusters(struct super_block *sb)
+ 			if (ops->ent_get(&fatent) == FAT_ENT_FREE)
+ 				free++;
+ 		} while (fat_ent_next(sbi, &fatent));
++		cond_resched();
+ 	}
+ 	sbi->free_clusters = free;
+ 	sbi->free_clus_valid = 1;
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index 7869622af22a..7a5ee145c733 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -2946,6 +2946,7 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		if (map_end & (PAGE_SIZE - 1))
+ 			to = map_end & (PAGE_SIZE - 1);
+ 
++retry:
+ 		page = find_or_create_page(mapping, page_index, GFP_NOFS);
+ 		if (!page) {
+ 			ret = -ENOMEM;
+@@ -2954,11 +2955,18 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
+ 		}
+ 
+ 		/*
+-		 * In case PAGE_SIZE <= CLUSTER_SIZE, This page
+-		 * can't be dirtied before we CoW it out.
++		 * In case PAGE_SIZE <= CLUSTER_SIZE, we do not expect a dirty
++		 * page, so write it back.
+ 		 */
+-		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize)
+-			BUG_ON(PageDirty(page));
++		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
++			if (PageDirty(page)) {
++				/*
++				 * write_on_page will unlock the page on return
++				 */
++				ret = write_one_page(page);
++				goto retry;
++			}
++		}
+ 
+ 		if (!PageUptodate(page)) {
+ 			ret = block_read_full_page(page, ocfs2_get_block);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index e373e2e10f6a..83b930988e21 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -70,7 +70,7 @@
+  */
+ #ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
+ #define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
+-#define DATA_MAIN .data .data.[0-9a-zA-Z_]*
++#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..LPBX*
+ #define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
+ #define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]*
+ #define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*
+@@ -617,8 +617,8 @@
+ 
+ #define EXIT_DATA							\
+ 	*(.exit.data .exit.data.*)					\
+-	*(.fini_array)							\
+-	*(.dtors)							\
++	*(.fini_array .fini_array.*)					\
++	*(.dtors .dtors.*)						\
+ 	MEM_DISCARD(exit.data*)						\
+ 	MEM_DISCARD(exit.rodata*)
+ 
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index a8ba6b04152c..55e4be8b016b 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -78,6 +78,18 @@ extern void __chk_io_ptr(const volatile void __iomem *);
+ #include <linux/compiler-clang.h>
+ #endif
+ 
++/*
++ * Some architectures need to provide custom definitions of macros provided
++ * by linux/compiler-*.h, and can do so using asm/compiler.h. We include that
++ * conditionally rather than using an asm-generic wrapper in order to avoid
++ * build failures if any C compilation, which will include this file via an
++ * -include argument in c_flags, occurs prior to the asm-generic wrappers being
++ * generated.
++ */
++#ifdef CONFIG_HAVE_ARCH_COMPILER_H
++#include <asm/compiler.h>
++#endif
++
+ /*
+  * Generic compiler-dependent macros required for kernel
+  * build go below this comment. Actual compiler/compiler version
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 5382b5183b7e..82a953ec5ef0 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -94,6 +94,13 @@ struct gpio_irq_chip {
+ 	 */
+ 	unsigned int num_parents;
+ 
++	/**
++	 * @parent_irq:
++	 *
++	 * For use by gpiochip_set_cascaded_irqchip()
++	 */
++	unsigned int parent_irq;
++
+ 	/**
+ 	 * @parents:
+ 	 *
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 64f450593b54..b49bfc8e68b0 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1022,6 +1022,14 @@ static inline void *mlx5_frag_buf_get_wqe(struct mlx5_frag_buf_ctrl *fbc,
+ 		((fbc->frag_sz_m1 & ix) << fbc->log_stride);
+ }
+ 
++static inline u32
++mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
++{
++	u32 last_frag_stride_idx = (ix + fbc->strides_offset) | fbc->frag_sz_m1;
++
++	return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
++}
++
+ int mlx5_cmd_init(struct mlx5_core_dev *dev);
+ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
+ void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
+diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
+index dd2052f0efb7..11b7b8ab0696 100644
+--- a/include/linux/netfilter.h
++++ b/include/linux/netfilter.h
+@@ -215,6 +215,8 @@ static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
+ 		break;
+ 	case NFPROTO_ARP:
+ #ifdef CONFIG_NETFILTER_FAMILY_ARP
++		if (WARN_ON_ONCE(hook >= ARRAY_SIZE(net->nf.hooks_arp)))
++			break;
+ 		hook_head = rcu_dereference(net->nf.hooks_arp[hook]);
+ #endif
+ 		break;
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 3d4930528db0..2d31e22babd8 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -159,6 +159,10 @@ struct fib6_info {
+ 	struct rt6_info * __percpu	*rt6i_pcpu;
+ 	struct rt6_exception_bucket __rcu *rt6i_exception_bucket;
+ 
++#ifdef CONFIG_IPV6_ROUTER_PREF
++	unsigned long			last_probe;
++#endif
++
+ 	u32				fib6_metric;
+ 	u8				fib6_protocol;
+ 	u8				fib6_type;
+diff --git a/include/net/sctp/sm.h b/include/net/sctp/sm.h
+index 5ef1bad81ef5..9e3d32746430 100644
+--- a/include/net/sctp/sm.h
++++ b/include/net/sctp/sm.h
+@@ -347,7 +347,7 @@ static inline __u16 sctp_data_size(struct sctp_chunk *chunk)
+ 	__u16 size;
+ 
+ 	size = ntohs(chunk->chunk_hdr->length);
+-	size -= sctp_datahdr_len(&chunk->asoc->stream);
++	size -= sctp_datachk_len(&chunk->asoc->stream);
+ 
+ 	return size;
+ }
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4fff00e9da8a..0a774b64fc29 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -56,7 +56,6 @@ enum rxrpc_peer_trace {
+ 	rxrpc_peer_new,
+ 	rxrpc_peer_processing,
+ 	rxrpc_peer_put,
+-	rxrpc_peer_queued_error,
+ };
+ 
+ enum rxrpc_conn_trace {
+@@ -257,8 +256,7 @@ enum rxrpc_tx_fail_trace {
+ 	EM(rxrpc_peer_got,			"GOT") \
+ 	EM(rxrpc_peer_new,			"NEW") \
+ 	EM(rxrpc_peer_processing,		"PRO") \
+-	EM(rxrpc_peer_put,			"PUT") \
+-	E_(rxrpc_peer_queued_error,		"QER")
++	E_(rxrpc_peer_put,			"PUT")
+ 
+ #define rxrpc_conn_traces \
+ 	EM(rxrpc_conn_got,			"GOT") \
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index ae22d93701db..fc072b7f839d 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -8319,6 +8319,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ 			goto unlock;
+ 
+ 		list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
++			if (event->cpu != smp_processor_id())
++				continue;
+ 			if (event->attr.type != PERF_TYPE_TRACEPOINT)
+ 				continue;
+ 			if (event->attr.config != entry->type)
+@@ -9436,9 +9438,7 @@ static void free_pmu_context(struct pmu *pmu)
+ 	if (pmu->task_ctx_nr > perf_invalid_context)
+ 		return;
+ 
+-	mutex_lock(&pmus_lock);
+ 	free_percpu(pmu->pmu_cpu_context);
+-	mutex_unlock(&pmus_lock);
+ }
+ 
+ /*
+@@ -9694,12 +9694,8 @@ EXPORT_SYMBOL_GPL(perf_pmu_register);
+ 
+ void perf_pmu_unregister(struct pmu *pmu)
+ {
+-	int remove_device;
+-
+ 	mutex_lock(&pmus_lock);
+-	remove_device = pmu_bus_running;
+ 	list_del_rcu(&pmu->entry);
+-	mutex_unlock(&pmus_lock);
+ 
+ 	/*
+ 	 * We dereference the pmu list under both SRCU and regular RCU, so
+@@ -9711,13 +9707,14 @@ void perf_pmu_unregister(struct pmu *pmu)
+ 	free_percpu(pmu->pmu_disable_count);
+ 	if (pmu->type >= PERF_TYPE_MAX)
+ 		idr_remove(&pmu_idr, pmu->type);
+-	if (remove_device) {
++	if (pmu_bus_running) {
+ 		if (pmu->nr_addr_filters)
+ 			device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
+ 		device_del(pmu->dev);
+ 		put_device(pmu->dev);
+ 	}
+ 	free_pmu_context(pmu);
++	mutex_unlock(&pmus_lock);
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_unregister);
+ 
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 0e4cd64ad2c0..654977862b06 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -260,7 +260,7 @@ static void test_cycle_work(struct work_struct *work)
+ {
+ 	struct test_cycle *cycle = container_of(work, typeof(*cycle), work);
+ 	struct ww_acquire_ctx ctx;
+-	int err;
++	int err, erra = 0;
+ 
+ 	ww_acquire_init(&ctx, &ww_class);
+ 	ww_mutex_lock(&cycle->a_mutex, &ctx);
+@@ -270,17 +270,19 @@ static void test_cycle_work(struct work_struct *work)
+ 
+ 	err = ww_mutex_lock(cycle->b_mutex, &ctx);
+ 	if (err == -EDEADLK) {
++		err = 0;
+ 		ww_mutex_unlock(&cycle->a_mutex);
+ 		ww_mutex_lock_slow(cycle->b_mutex, &ctx);
+-		err = ww_mutex_lock(&cycle->a_mutex, &ctx);
++		erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
+ 	}
+ 
+ 	if (!err)
+ 		ww_mutex_unlock(cycle->b_mutex);
+-	ww_mutex_unlock(&cycle->a_mutex);
++	if (!erra)
++		ww_mutex_unlock(&cycle->a_mutex);
+ 	ww_acquire_fini(&ctx);
+ 
+-	cycle->result = err;
++	cycle->result = err ?: erra;
+ }
+ 
+ static int __test_cycle(unsigned int nthreads)
+diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
+index 6a473709e9b6..7405c9d89d65 100644
+--- a/mm/gup_benchmark.c
++++ b/mm/gup_benchmark.c
+@@ -19,7 +19,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
+ 		struct gup_benchmark *gup)
+ {
+ 	ktime_t start_time, end_time;
+-	unsigned long i, nr, nr_pages, addr, next;
++	unsigned long i, nr_pages, addr, next;
++	int nr;
+ 	struct page **pages;
+ 
+ 	nr_pages = gup->size / PAGE_SIZE;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 2a55289ee9f1..f49eb9589d73 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1415,7 +1415,7 @@ retry:
+ 				 * we encounter them after the rest of the list
+ 				 * is processed.
+ 				 */
+-				if (PageTransHuge(page)) {
++				if (PageTransHuge(page) && !PageHuge(page)) {
+ 					lock_page(page);
+ 					rc = split_huge_page_to_list(page, from);
+ 					unlock_page(page);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index fc0436407471..03822f86f288 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,17 +386,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
+-
+-	/*
+-	 * Make sure we apply some minimal pressure on default priority
+-	 * even on small cgroups. Stale objects are not only consuming memory
+-	 * by themselves, but can also hold a reference to a dying cgroup,
+-	 * preventing it from being reclaimed. A dying cgroup with all
+-	 * corresponding structures like per-cpu stats and kmem caches
+-	 * can be really big, so it may lead to a significant waste of memory.
+-	 */
+-	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
+-
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 8a80d48d89c4..1b9984f653dd 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -2298,9 +2298,8 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 	/* LE address type */
+ 	addr_type = le_addr_type(cp->addr.type);
+ 
+-	hci_remove_irk(hdev, &cp->addr.bdaddr, addr_type);
+-
+-	err = hci_remove_ltk(hdev, &cp->addr.bdaddr, addr_type);
++	/* Abort any ongoing SMP pairing. Removes ltk and irk if they exist. */
++	err = smp_cancel_and_remove_pairing(hdev, &cp->addr.bdaddr, addr_type);
+ 	if (err < 0) {
+ 		err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_UNPAIR_DEVICE,
+ 					MGMT_STATUS_NOT_PAIRED, &rp,
+@@ -2314,8 +2313,6 @@ static int unpair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 		goto done;
+ 	}
+ 
+-	/* Abort any ongoing SMP pairing */
+-	smp_cancel_pairing(conn);
+ 
+ 	/* Defer clearing up the connection parameters until closing to
+ 	 * give a chance of keeping them if a repairing happens.
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 3a7b0773536b..73f7211d0431 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2422,30 +2422,51 @@ unlock:
+ 	return ret;
+ }
+ 
+-void smp_cancel_pairing(struct hci_conn *hcon)
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type)
+ {
+-	struct l2cap_conn *conn = hcon->l2cap_data;
++	struct hci_conn *hcon;
++	struct l2cap_conn *conn;
+ 	struct l2cap_chan *chan;
+ 	struct smp_chan *smp;
++	int err;
++
++	err = hci_remove_ltk(hdev, bdaddr, addr_type);
++	hci_remove_irk(hdev, bdaddr, addr_type);
++
++	hcon = hci_conn_hash_lookup_le(hdev, bdaddr, addr_type);
++	if (!hcon)
++		goto done;
+ 
++	conn = hcon->l2cap_data;
+ 	if (!conn)
+-		return;
++		goto done;
+ 
+ 	chan = conn->smp;
+ 	if (!chan)
+-		return;
++		goto done;
+ 
+ 	l2cap_chan_lock(chan);
+ 
+ 	smp = chan->data;
+ 	if (smp) {
++		/* Set keys to NULL to make sure smp_failure() does not try to
++		 * remove and free already invalidated rcu list entries. */
++		smp->ltk = NULL;
++		smp->slave_ltk = NULL;
++		smp->remote_irk = NULL;
++
+ 		if (test_bit(SMP_FLAG_COMPLETE, &smp->flags))
+ 			smp_failure(conn, 0);
+ 		else
+ 			smp_failure(conn, SMP_UNSPECIFIED);
++		err = 0;
+ 	}
+ 
+ 	l2cap_chan_unlock(chan);
++
++done:
++	return err;
+ }
+ 
+ static int smp_cmd_encrypt_info(struct l2cap_conn *conn, struct sk_buff *skb)
+diff --git a/net/bluetooth/smp.h b/net/bluetooth/smp.h
+index 0ff6247eaa6c..121edadd5f8d 100644
+--- a/net/bluetooth/smp.h
++++ b/net/bluetooth/smp.h
+@@ -181,7 +181,8 @@ enum smp_key_pref {
+ };
+ 
+ /* SMP Commands */
+-void smp_cancel_pairing(struct hci_conn *hcon);
++int smp_cancel_and_remove_pairing(struct hci_dev *hdev, bdaddr_t *bdaddr,
++				  u8 addr_type);
+ bool smp_sufficient_security(struct hci_conn *hcon, u8 sec_level,
+ 			     enum smp_key_pref key_pref);
+ int smp_conn_security(struct hci_conn *hcon, __u8 sec_level);
+diff --git a/net/bpfilter/bpfilter_kern.c b/net/bpfilter/bpfilter_kern.c
+index f0fc182d3db7..d5dd6b8b4248 100644
+--- a/net/bpfilter/bpfilter_kern.c
++++ b/net/bpfilter/bpfilter_kern.c
+@@ -23,9 +23,11 @@ static void shutdown_umh(struct umh_info *info)
+ 
+ 	if (!info->pid)
+ 		return;
+-	tsk = pid_task(find_vpid(info->pid), PIDTYPE_PID);
+-	if (tsk)
++	tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);
++	if (tsk) {
+ 		force_sig(SIGKILL, tsk);
++		put_task_struct(tsk);
++	}
+ 	fput(info->pipe_to_umh);
+ 	fput(info->pipe_from_umh);
+ 	info->pid = 0;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 920665dd92db..6059a47f5e0c 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1420,7 +1420,14 @@ static void br_multicast_query_received(struct net_bridge *br,
+ 		return;
+ 
+ 	br_multicast_update_query_timer(br, query, max_delay);
+-	br_multicast_mark_router(br, port);
++
++	/* Based on RFC4541, section 2.1.1 IGMP Forwarding Rules,
++	 * the arrival port for IGMP Queries where the source address
++	 * is 0.0.0.0 should not be added to router port list.
++	 */
++	if ((saddr->proto == htons(ETH_P_IP) && saddr->u.ip4) ||
++	    saddr->proto == htons(ETH_P_IPV6))
++		br_multicast_mark_router(br, port);
+ }
+ 
+ static int br_ip4_multicast_query(struct net_bridge *br,
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 9b16eaf33819..58240cc185e7 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -834,7 +834,8 @@ static unsigned int ip_sabotage_in(void *priv,
+ 				   struct sk_buff *skb,
+ 				   const struct nf_hook_state *state)
+ {
+-	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting) {
++	if (skb->nf_bridge && !skb->nf_bridge->in_prerouting &&
++	    !netif_is_l3_master(skb->dev)) {
+ 		state->okfn(state->net, state->sk, skb);
+ 		return NF_STOLEN;
+ 	}
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index 9938952c5c78..16f0eb0970c4 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -808,8 +808,9 @@ int skb_copy_and_csum_datagram_msg(struct sk_buff *skb,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE))
+-			netdev_rx_csum_fault(skb->dev);
++		if (unlikely(skb->ip_summed == CHECKSUM_COMPLETE) &&
++		    !skb->csum_complete_sw)
++			netdev_rx_csum_fault(NULL);
+ 	}
+ 	return 0;
+ fault:
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 6c04f1bf377d..548d0e615bc7 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2461,13 +2461,17 @@ roll_back:
+ 	return ret;
+ }
+ 
+-static int ethtool_set_per_queue(struct net_device *dev, void __user *useraddr)
++static int ethtool_set_per_queue(struct net_device *dev,
++				 void __user *useraddr, u32 sub_cmd)
+ {
+ 	struct ethtool_per_queue_op per_queue_opt;
+ 
+ 	if (copy_from_user(&per_queue_opt, useraddr, sizeof(per_queue_opt)))
+ 		return -EFAULT;
+ 
++	if (per_queue_opt.sub_command != sub_cmd)
++		return -EINVAL;
++
+ 	switch (per_queue_opt.sub_command) {
+ 	case ETHTOOL_GCOALESCE:
+ 		return ethtool_get_per_queue_coalesce(dev, useraddr, &per_queue_opt);
+@@ -2838,7 +2842,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 		rc = ethtool_get_phy_stats(dev, useraddr);
+ 		break;
+ 	case ETHTOOL_PERQUEUE:
+-		rc = ethtool_set_per_queue(dev, useraddr);
++		rc = ethtool_set_per_queue(dev, useraddr, sub_cmd);
+ 		break;
+ 	case ETHTOOL_GLINKSETTINGS:
+ 		rc = ethtool_get_link_ksettings(dev, useraddr);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 18de39dbdc30..4b25fd14bc5a 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3480,6 +3480,11 @@ static int rtnl_fdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB delete only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+@@ -3584,6 +3589,11 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		return -EINVAL;
+ 	}
+ 
++	if (dev->type != ARPHRD_ETHER) {
++		NL_SET_ERR_MSG(extack, "FDB add only supported for Ethernet devices");
++		return -EINVAL;
++	}
++
+ 	addr = nla_data(tb[NDA_LLADDR]);
+ 
+ 	err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 3680912f056a..c45916b91a9c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1845,8 +1845,9 @@ int pskb_trim_rcsum_slow(struct sk_buff *skb, unsigned int len)
+ 	if (skb->ip_summed == CHECKSUM_COMPLETE) {
+ 		int delta = skb->len - len;
+ 
+-		skb->csum = csum_sub(skb->csum,
+-				     skb_checksum(skb, len, delta, 0));
++		skb->csum = csum_block_sub(skb->csum,
++					   skb_checksum(skb, len, delta, 0),
++					   len);
+ 	}
+ 	return __pskb_trim(skb, len);
+ }
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index d14d741fb05e..9d3bdce1ad8a 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -657,10 +657,14 @@ struct sk_buff *ip_check_defrag(struct net *net, struct sk_buff *skb, u32 user)
+ 	if (ip_is_fragment(&iph)) {
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+ 		if (skb) {
+-			if (!pskb_may_pull(skb, netoff + iph.ihl * 4))
+-				return skb;
+-			if (pskb_trim_rcsum(skb, netoff + len))
+-				return skb;
++			if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) {
++				kfree_skb(skb);
++				return NULL;
++			}
++			if (pskb_trim_rcsum(skb, netoff + len)) {
++				kfree_skb(skb);
++				return NULL;
++			}
+ 			memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ 			if (ip_defrag(net, skb, user))
+ 				return NULL;
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index cafb0506c8c9..33be09791c74 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -295,8 +295,6 @@ int mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb,
+ next_entry:
+ 			e++;
+ 		}
+-		e = 0;
+-		s_e = 0;
+ 
+ 		spin_lock_bh(lock);
+ 		list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index a12df801de94..2fe7e2713350 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2124,8 +2124,24 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 	/* Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 inet_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							inet_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++		 * so don't treat it as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ 
+ /* wrapper for udp_queue_rcv_skb tacking care of csum conversion and
+diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
+index bcfc00e88756..f8de2482a529 100644
+--- a/net/ipv4/xfrm4_input.c
++++ b/net/ipv4/xfrm4_input.c
+@@ -67,6 +67,7 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return 0;
+ 	}
+ 
+diff --git a/net/ipv4/xfrm4_mode_transport.c b/net/ipv4/xfrm4_mode_transport.c
+index 3d36644890bb..1ad2c2c4e250 100644
+--- a/net/ipv4/xfrm4_mode_transport.c
++++ b/net/ipv4/xfrm4_mode_transport.c
+@@ -46,7 +46,6 @@ static int xfrm4_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -54,8 +53,7 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 		skb->network_header = skb->transport_header;
+ 	}
+ 	ip_hdr(skb)->tot_len = htons(skb->len + ihl);
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 3484c7020fd9..ac3de1aa1cd3 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4930,8 +4930,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 
+ 		/* unicast address incl. temp addr */
+ 		list_for_each_entry(ifa, &idev->addr_list, if_list) {
+-			if (++ip_idx < s_ip_idx)
+-				continue;
++			if (ip_idx < s_ip_idx)
++				goto next;
+ 			err = inet6_fill_ifaddr(skb, ifa,
+ 						NETLINK_CB(cb->skb).portid,
+ 						cb->nlh->nlmsg_seq,
+@@ -4940,6 +4940,8 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb,
+ 			if (err < 0)
+ 				break;
+ 			nl_dump_check_consistent(cb, nlmsg_hdr(skb));
++next:
++			ip_idx++;
+ 		}
+ 		break;
+ 	}
+diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
+index 547515e8450a..377717045f8f 100644
+--- a/net/ipv6/ip6_checksum.c
++++ b/net/ipv6/ip6_checksum.c
+@@ -88,8 +88,24 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
+ 	 * Note, we are only interested in != 0 or == 0, thus the
+ 	 * force to int.
+ 	 */
+-	return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
+-							 ip6_compute_pseudo);
++	err = (__force int)skb_checksum_init_zero_check(skb, proto, uh->check,
++							ip6_compute_pseudo);
++	if (err)
++		return err;
++
++	if (skb->ip_summed == CHECKSUM_COMPLETE && !skb->csum_valid) {
++		/* If SW calculated the value, we know it's bad */
++		if (skb->csum_complete_sw)
++			return 1;
++
++		/* HW says the value is bad. Let's validate that.
++		 * skb->csum is no longer the full packet checksum,
++		 * so don't treat is as such.
++		 */
++		skb_checksum_complete_unset(skb);
++	}
++
++	return 0;
+ }
+ EXPORT_SYMBOL(udp6_csum_init);
+ 
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index f5b5b0574a2d..009b508127e6 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1184,10 +1184,6 @@ route_lookup:
+ 	}
+ 	skb_dst_set(skb, dst);
+ 
+-	if (encap_limit >= 0) {
+-		init_tel_txopt(&opt, encap_limit);
+-		ipv6_push_frag_opts(skb, &opt.ops, &proto);
+-	}
+ 	hop_limit = hop_limit ? : ip6_dst_hoplimit(dst);
+ 
+ 	/* Calculate max headroom for all the headers and adjust
+@@ -1202,6 +1198,11 @@ route_lookup:
+ 	if (err)
+ 		return err;
+ 
++	if (encap_limit >= 0) {
++		init_tel_txopt(&opt, encap_limit);
++		ipv6_push_frag_opts(skb, &opt.ops, &proto);
++	}
++
+ 	skb_push(skb, sizeof(struct ipv6hdr));
+ 	skb_reset_network_header(skb);
+ 	ipv6h = ipv6_hdr(skb);
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index f60f310785fd..131440ea6b51 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -2436,17 +2436,17 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml,
+ {
+ 	int err;
+ 
+-	/* callers have the socket lock and rtnl lock
+-	 * so no other readers or writers of iml or its sflist
+-	 */
++	write_lock_bh(&iml->sflock);
+ 	if (!iml->sflist) {
+ 		/* any-source empty exclude case */
+-		return ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0);
++	} else {
++		err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
++				iml->sflist->sl_count, iml->sflist->sl_addr, 0);
++		sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
++		iml->sflist = NULL;
+ 	}
+-	err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode,
+-		iml->sflist->sl_count, iml->sflist->sl_addr, 0);
+-	sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max));
+-	iml->sflist = NULL;
++	write_unlock_bh(&iml->sflock);
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index 0ec273997d1d..673a4a932f2a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1732,10 +1732,9 @@ int ndisc_rcv(struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+-
+ 	switch (msg->icmph.icmp6_type) {
+ 	case NDISC_NEIGHBOUR_SOLICITATION:
++		memset(NEIGH_CB(skb), 0, sizeof(struct neighbour_cb));
+ 		ndisc_recv_ns(skb);
+ 		break;
+ 
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index e4d9e6976d3c..a452d99c9f52 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -585,8 +585,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ 	    fq->q.meat == fq->q.len &&
+ 	    nf_ct_frag6_reasm(fq, skb, dev))
+ 		ret = 0;
+-	else
+-		skb_dst_drop(skb);
+ 
+ out_unlock:
+ 	spin_unlock_bh(&fq->q.lock);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ed526e257da6..a243d5249b51 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -517,10 +517,11 @@ static void rt6_probe_deferred(struct work_struct *w)
+ 
+ static void rt6_probe(struct fib6_info *rt)
+ {
+-	struct __rt6_probe_work *work;
++	struct __rt6_probe_work *work = NULL;
+ 	const struct in6_addr *nh_gw;
+ 	struct neighbour *neigh;
+ 	struct net_device *dev;
++	struct inet6_dev *idev;
+ 
+ 	/*
+ 	 * Okay, this does not seem to be appropriate
+@@ -536,15 +537,12 @@ static void rt6_probe(struct fib6_info *rt)
+ 	nh_gw = &rt->fib6_nh.nh_gw;
+ 	dev = rt->fib6_nh.nh_dev;
+ 	rcu_read_lock_bh();
++	idev = __in6_dev_get(dev);
+ 	neigh = __ipv6_neigh_lookup_noref(dev, nh_gw);
+ 	if (neigh) {
+-		struct inet6_dev *idev;
+-
+ 		if (neigh->nud_state & NUD_VALID)
+ 			goto out;
+ 
+-		idev = __in6_dev_get(dev);
+-		work = NULL;
+ 		write_lock(&neigh->lock);
+ 		if (!(neigh->nud_state & NUD_VALID) &&
+ 		    time_after(jiffies,
+@@ -554,11 +552,13 @@ static void rt6_probe(struct fib6_info *rt)
+ 				__neigh_set_probe_once(neigh);
+ 		}
+ 		write_unlock(&neigh->lock);
+-	} else {
++	} else if (time_after(jiffies, rt->last_probe +
++				       idev->cnf.rtr_probe_interval)) {
+ 		work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ 	}
+ 
+ 	if (work) {
++		rt->last_probe = jiffies;
+ 		INIT_WORK(&work->work, rt6_probe_deferred);
+ 		work->target = *nh_gw;
+ 		dev_hold(dev);
+@@ -2792,6 +2792,8 @@ static int ip6_route_check_nh_onlink(struct net *net,
+ 	grt = ip6_nh_lookup_table(net, cfg, gw_addr, tbid, 0);
+ 	if (grt) {
+ 		if (!grt->dst.error &&
++		    /* ignore match if it is the default route */
++		    grt->from && !ipv6_addr_any(&grt->from->fib6_dst.addr) &&
+ 		    (grt->rt6i_flags & flags || dev != grt->dst.dev)) {
+ 			NL_SET_ERR_MSG(extack,
+ 				       "Nexthop has invalid gateway or device mismatch");
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 39d0cab919bb..4f2c7a196365 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -762,11 +762,9 @@ static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
+ 
+ 	ret = udpv6_queue_rcv_skb(sk, skb);
+ 
+-	/* a return value > 0 means to resubmit the input, but
+-	 * it wants the return to be -protocol, or 0
+-	 */
++	/* a return value > 0 means to resubmit the input */
+ 	if (ret > 0)
+-		return -ret;
++		return ret;
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
+index 841f4a07438e..9ef490dddcea 100644
+--- a/net/ipv6/xfrm6_input.c
++++ b/net/ipv6/xfrm6_input.c
+@@ -59,6 +59,7 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async)
+ 
+ 	if (xo && (xo->flags & XFRM_GRO)) {
+ 		skb_mac_header_rebuild(skb);
++		skb_reset_transport_header(skb);
+ 		return -1;
+ 	}
+ 
+diff --git a/net/ipv6/xfrm6_mode_transport.c b/net/ipv6/xfrm6_mode_transport.c
+index 9ad07a91708e..3c29da5defe6 100644
+--- a/net/ipv6/xfrm6_mode_transport.c
++++ b/net/ipv6/xfrm6_mode_transport.c
+@@ -51,7 +51,6 @@ static int xfrm6_transport_output(struct xfrm_state *x, struct sk_buff *skb)
+ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ {
+ 	int ihl = skb->data - skb_transport_header(skb);
+-	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	if (skb->transport_header != skb->network_header) {
+ 		memmove(skb_transport_header(skb),
+@@ -60,8 +59,7 @@ static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb)
+ 	}
+ 	ipv6_hdr(skb)->payload_len = htons(skb->len + ihl -
+ 					   sizeof(struct ipv6hdr));
+-	if (!xo || !(xo->flags & XFRM_GRO))
+-		skb_reset_transport_header(skb);
++	skb_reset_transport_header(skb);
+ 	return 0;
+ }
+ 
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index 5959ce9620eb..6a74080005cf 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -170,9 +170,11 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ 
+ 	if (toobig && xfrm6_local_dontfrag(skb)) {
+ 		xfrm6_local_rxpmtu(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	} else if (!skb->ignore_df && toobig && skb->sk) {
+ 		xfrm_local_error(skb, mtu);
++		kfree_skb(skb);
+ 		return -EMSGSIZE;
+ 	}
+ 
+diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
+index c0ac522b48a1..4ff89cb7c86f 100644
+--- a/net/llc/llc_conn.c
++++ b/net/llc/llc_conn.c
+@@ -734,6 +734,7 @@ void llc_sap_add_socket(struct llc_sap *sap, struct sock *sk)
+ 	llc_sk(sk)->sap = sap;
+ 
+ 	spin_lock_bh(&sap->sk_lock);
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	sap->sk_count++;
+ 	sk_nulls_add_node_rcu(sk, laddr_hb);
+ 	hlist_add_head(&llc->dev_hash_node, dev_hb);
+diff --git a/net/mac80211/mesh.h b/net/mac80211/mesh.h
+index ee56f18cad3f..21526630bf65 100644
+--- a/net/mac80211/mesh.h
++++ b/net/mac80211/mesh.h
+@@ -217,7 +217,8 @@ void mesh_rmc_free(struct ieee80211_sub_if_data *sdata);
+ int mesh_rmc_init(struct ieee80211_sub_if_data *sdata);
+ void ieee80211s_init(void);
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-			      struct sta_info *sta, struct sk_buff *skb);
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st);
+ void ieee80211_mesh_init_sdata(struct ieee80211_sub_if_data *sdata);
+ void ieee80211_mesh_teardown_sdata(struct ieee80211_sub_if_data *sdata);
+ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index daf9db3c8f24..6950cd0bf594 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -295,15 +295,12 @@ int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
+ }
+ 
+ void ieee80211s_update_metric(struct ieee80211_local *local,
+-		struct sta_info *sta, struct sk_buff *skb)
++			      struct sta_info *sta,
++			      struct ieee80211_tx_status *st)
+ {
+-	struct ieee80211_tx_info *txinfo = IEEE80211_SKB_CB(skb);
+-	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
++	struct ieee80211_tx_info *txinfo = st->info;
+ 	int failed;
+ 
+-	if (!ieee80211_is_data(hdr->frame_control))
+-		return;
+-
+ 	failed = !(txinfo->flags & IEEE80211_TX_STAT_ACK);
+ 
+ 	/* moving average, scaled to 100.
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index 9a6d7208bf4f..91d7c0cd1882 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -479,11 +479,6 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 	if (!skb)
+ 		return;
+ 
+-	if (dropped) {
+-		dev_kfree_skb_any(skb);
+-		return;
+-	}
+-
+ 	if (info->flags & IEEE80211_TX_INTFL_NL80211_FRAME_TX) {
+ 		u64 cookie = IEEE80211_SKB_CB(skb)->ack.cookie;
+ 		struct ieee80211_sub_if_data *sdata;
+@@ -506,6 +501,8 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 		}
+ 		rcu_read_unlock();
+ 
++		dev_kfree_skb_any(skb);
++	} else if (dropped) {
+ 		dev_kfree_skb_any(skb);
+ 	} else {
+ 		/* consumes skb */
+@@ -811,7 +808,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ 
+ 		rate_control_tx_status(local, sband, status);
+ 		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
+-			ieee80211s_update_metric(local, sta, skb);
++			ieee80211s_update_metric(local, sta, status);
+ 
+ 		if (!(info->flags & IEEE80211_TX_CTL_INJECTED) && acked)
+ 			ieee80211_frame_acked(sta, skb);
+@@ -972,6 +969,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
+ 		}
+ 
+ 		rate_control_tx_status(local, sband, status);
++		if (ieee80211_vif_is_mesh(&sta->sdata->vif))
++			ieee80211s_update_metric(local, sta, status);
+ 	}
+ 
+ 	if (acked || noack_success) {
+diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
+index 5cd5e6e5834e..6c647f425e05 100644
+--- a/net/mac80211/tdls.c
++++ b/net/mac80211/tdls.c
+@@ -16,6 +16,7 @@
+ #include "ieee80211_i.h"
+ #include "driver-ops.h"
+ #include "rate.h"
++#include "wme.h"
+ 
+ /* give usermode some time for retries in setting up the TDLS session */
+ #define TDLS_PEER_SETUP_TIMEOUT	(15 * HZ)
+@@ -1010,14 +1011,13 @@ ieee80211_tdls_prep_mgmt_packet(struct wiphy *wiphy, struct net_device *dev,
+ 	switch (action_code) {
+ 	case WLAN_TDLS_SETUP_REQUEST:
+ 	case WLAN_TDLS_SETUP_RESPONSE:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_BK);
+-		skb->priority = 2;
++		skb->priority = 256 + 2;
+ 		break;
+ 	default:
+-		skb_set_queue_mapping(skb, IEEE80211_AC_VI);
+-		skb->priority = 5;
++		skb->priority = 256 + 5;
+ 		break;
+ 	}
++	skb_set_queue_mapping(skb, ieee80211_select_queue(sdata, skb));
+ 
+ 	/*
+ 	 * Set the WLAN_TDLS_TEARDOWN flag to indicate a teardown in progress.
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 9b3b069e418a..361f2f6cc839 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -1886,7 +1886,7 @@ static bool ieee80211_tx(struct ieee80211_sub_if_data *sdata,
+ 			sdata->vif.hw_queue[skb_get_queue_mapping(skb)];
+ 
+ 	if (invoke_tx_handlers_early(&tx))
+-		return false;
++		return true;
+ 
+ 	if (ieee80211_queue_skb(local, sdata, tx.sta, tx.skb))
+ 		return true;
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 8e67910185a0..1004fb5930de 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -1239,8 +1239,8 @@ static const struct nla_policy tcp_nla_policy[CTA_PROTOINFO_TCP_MAX+1] = {
+ #define TCP_NLATTR_SIZE	( \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+ 	NLA_ALIGN(NLA_HDRLEN + 1) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))) + \
+-	NLA_ALIGN(NLA_HDRLEN + sizeof(sizeof(struct nf_ct_tcp_flags))))
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)) + \
++	NLA_ALIGN(NLA_HDRLEN + sizeof(struct nf_ct_tcp_flags)))
+ 
+ static int nlattr_to_tcp(struct nlattr *cda[], struct nf_conn *ct)
+ {
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 9873d734b494..8ad78b82c8e2 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -355,12 +355,11 @@ cont:
+ 
+ static void nft_rbtree_gc(struct work_struct *work)
+ {
++	struct nft_rbtree_elem *rbe, *rbe_end = NULL, *rbe_prev = NULL;
+ 	struct nft_set_gc_batch *gcb = NULL;
+-	struct rb_node *node, *prev = NULL;
+-	struct nft_rbtree_elem *rbe;
+ 	struct nft_rbtree *priv;
++	struct rb_node *node;
+ 	struct nft_set *set;
+-	int i;
+ 
+ 	priv = container_of(work, struct nft_rbtree, gc_work.work);
+ 	set  = nft_set_container_of(priv);
+@@ -371,7 +370,7 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ 
+ 		if (nft_rbtree_interval_end(rbe)) {
+-			prev = node;
++			rbe_end = rbe;
+ 			continue;
+ 		}
+ 		if (!nft_set_elem_expired(&rbe->ext))
+@@ -379,29 +378,30 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 		if (nft_set_elem_mark_busy(&rbe->ext))
+ 			continue;
+ 
++		if (rbe_prev) {
++			rb_erase(&rbe_prev->node, &priv->root);
++			rbe_prev = NULL;
++		}
+ 		gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC);
+ 		if (!gcb)
+ 			break;
+ 
+ 		atomic_dec(&set->nelems);
+ 		nft_set_gc_batch_add(gcb, rbe);
++		rbe_prev = rbe;
+ 
+-		if (prev) {
+-			rbe = rb_entry(prev, struct nft_rbtree_elem, node);
++		if (rbe_end) {
+ 			atomic_dec(&set->nelems);
+-			nft_set_gc_batch_add(gcb, rbe);
+-			prev = NULL;
++			nft_set_gc_batch_add(gcb, rbe_end);
++			rb_erase(&rbe_end->node, &priv->root);
++			rbe_end = NULL;
+ 		}
+ 		node = rb_next(node);
+ 		if (!node)
+ 			break;
+ 	}
+-	if (gcb) {
+-		for (i = 0; i < gcb->head.cnt; i++) {
+-			rbe = gcb->elems[i];
+-			rb_erase(&rbe->node, &priv->root);
+-		}
+-	}
++	if (rbe_prev)
++		rb_erase(&rbe_prev->node, &priv->root);
+ 	write_seqcount_end(&priv->count);
+ 	write_unlock_bh(&priv->lock);
+ 
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 492ab0c36f7c..8b1ba43b1ece 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2990,7 +2990,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 			 * is already present */
+ 			if (mac_proto != MAC_PROTO_NONE)
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_NONE;
++			mac_proto = MAC_PROTO_ETHERNET;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_POP_ETH:
+@@ -2998,7 +2998,7 @@ static int __ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ 				return -EINVAL;
+ 			if (vlan_tci & htons(VLAN_TAG_PRESENT))
+ 				return -EINVAL;
+-			mac_proto = MAC_PROTO_ETHERNET;
++			mac_proto = MAC_PROTO_NONE;
+ 			break;
+ 
+ 		case OVS_ACTION_ATTR_PUSH_NSH:
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 59f17a2335f4..0e54ca0f4e9e 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1006,7 +1006,8 @@ static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
+ 	return ret;
+ }
+ 
+-static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
++static int rds_send_mprds_hash(struct rds_sock *rs,
++			       struct rds_connection *conn, int nonblock)
+ {
+ 	int hash;
+ 
+@@ -1022,10 +1023,16 @@ static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
+ 		 * used.  But if we are interrupted, we have to use the zero
+ 		 * c_path in case the connection ends up being non-MP capable.
+ 		 */
+-		if (conn->c_npaths == 0)
++		if (conn->c_npaths == 0) {
++			/* Cannot wait for the connection be made, so just use
++			 * the base c_path.
++			 */
++			if (nonblock)
++				return 0;
+ 			if (wait_event_interruptible(conn->c_hs_waitq,
+ 						     conn->c_npaths != 0))
+ 				hash = 0;
++		}
+ 		if (conn->c_npaths == 1)
+ 			hash = 0;
+ 	}
+@@ -1170,7 +1177,7 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 	}
+ 
+ 	if (conn->c_trans->t_mp_capable)
+-		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
++		cpath = &conn->c_path[rds_send_mprds_hash(rs, conn, nonblock)];
+ 	else
+ 		cpath = &conn->c_path[0];
+ 
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 707630ab4713..330372c04940 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -293,7 +293,6 @@ struct rxrpc_peer {
+ 	struct hlist_node	hash_link;
+ 	struct rxrpc_local	*local;
+ 	struct hlist_head	error_targets;	/* targets for net error distribution */
+-	struct work_struct	error_distributor;
+ 	struct rb_root		service_conns;	/* Service connections */
+ 	struct list_head	keepalive_link;	/* Link in net->peer_keepalive[] */
+ 	time64_t		last_tx_at;	/* Last time packet sent here */
+@@ -304,8 +303,6 @@ struct rxrpc_peer {
+ 	unsigned int		maxdata;	/* data size (MTU - hdrsize) */
+ 	unsigned short		hdrsize;	/* header size (IP + UDP + RxRPC) */
+ 	int			debug_id;	/* debug ID for printks */
+-	int			error_report;	/* Net (+0) or local (+1000000) to distribute */
+-#define RXRPC_LOCAL_ERROR_OFFSET 1000000
+ 	struct sockaddr_rxrpc	srx;		/* remote address */
+ 
+ 	/* calculated RTT cache */
+@@ -449,8 +446,7 @@ struct rxrpc_connection {
+ 	spinlock_t		state_lock;	/* state-change lock */
+ 	enum rxrpc_conn_cache_state cache_state;
+ 	enum rxrpc_conn_proto_state state;	/* current state of connection */
+-	u32			local_abort;	/* local abort code */
+-	u32			remote_abort;	/* remote abort code */
++	u32			abort_code;	/* Abort code of connection abort */
+ 	int			debug_id;	/* debug ID for printks */
+ 	atomic_t		serial;		/* packet serial number counter */
+ 	unsigned int		hi_serial;	/* highest serial number received */
+@@ -460,8 +456,19 @@ struct rxrpc_connection {
+ 	u8			security_size;	/* security header size */
+ 	u8			security_ix;	/* security type */
+ 	u8			out_clientflag;	/* RXRPC_CLIENT_INITIATED if we are client */
++	short			error;		/* Local error code */
+ };
+ 
++static inline bool rxrpc_to_server(const struct rxrpc_skb_priv *sp)
++{
++	return sp->hdr.flags & RXRPC_CLIENT_INITIATED;
++}
++
++static inline bool rxrpc_to_client(const struct rxrpc_skb_priv *sp)
++{
++	return !rxrpc_to_server(sp);
++}
++
+ /*
+  * Flags in call->flags.
+  */
+@@ -1029,7 +1036,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
+  * peer_event.c
+  */
+ void rxrpc_error_report(struct sock *);
+-void rxrpc_peer_error_distributor(struct work_struct *);
+ void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+ 			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+ void rxrpc_peer_keepalive_worker(struct work_struct *);
+@@ -1048,7 +1054,6 @@ void rxrpc_destroy_all_peers(struct rxrpc_net *);
+ struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *);
+ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *);
+ void rxrpc_put_peer(struct rxrpc_peer *);
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *);
+ 
+ /*
+  * proc.c
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 9d1e298b784c..0e378d73e856 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -422,11 +422,11 @@ found_service:
+ 
+ 	case RXRPC_CONN_REMOTELY_ABORTED:
+ 		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
+-					  conn->remote_abort, -ECONNABORTED);
++					  conn->abort_code, conn->error);
+ 		break;
+ 	case RXRPC_CONN_LOCALLY_ABORTED:
+ 		rxrpc_abort_call("CON", call, sp->hdr.seq,
+-				 conn->local_abort, -ECONNABORTED);
++				 conn->abort_code, conn->error);
+ 		break;
+ 	default:
+ 		BUG();
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f6734d8cb01a..ed69257203c2 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -400,7 +400,7 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ 	rcu_assign_pointer(conn->channels[chan].call, call);
+ 
+ 	spin_lock(&conn->params.peer->lock);
+-	hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
+ 	spin_unlock(&conn->params.peer->lock);
+ 
+ 	_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index 5736f643c516..0be19132202b 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -709,8 +709,8 @@ int rxrpc_connect_call(struct rxrpc_call *call,
+ 	}
+ 
+ 	spin_lock_bh(&call->conn->params.peer->lock);
+-	hlist_add_head(&call->error_link,
+-		       &call->conn->params.peer->error_targets);
++	hlist_add_head_rcu(&call->error_link,
++			   &call->conn->params.peer->error_targets);
+ 	spin_unlock_bh(&call->conn->params.peer->lock);
+ 
+ out:
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 3fde001fcc39..5e7c8239e703 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -126,7 +126,7 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+ 
+ 	switch (chan->last_type) {
+ 	case RXRPC_PACKET_TYPE_ABORT:
+-		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->local_abort);
++		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code);
+ 		break;
+ 	case RXRPC_PACKET_TYPE_ACK:
+ 		trace_rxrpc_tx_ack(NULL, serial, chan->last_seq, 0,
+@@ -148,13 +148,12 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
+  * pass a connection-level abort onto all calls on that connection
+  */
+ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+-			      enum rxrpc_call_completion compl,
+-			      u32 abort_code, int error)
++			      enum rxrpc_call_completion compl)
+ {
+ 	struct rxrpc_call *call;
+ 	int i;
+ 
+-	_enter("{%d},%x", conn->debug_id, abort_code);
++	_enter("{%d},%x", conn->debug_id, conn->abort_code);
+ 
+ 	spin_lock(&conn->channel_lock);
+ 
+@@ -167,9 +166,11 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
+ 				trace_rxrpc_abort(call->debug_id,
+ 						  "CON", call->cid,
+ 						  call->call_id, 0,
+-						  abort_code, error);
++						  conn->abort_code,
++						  conn->error);
+ 			if (rxrpc_set_call_completion(call, compl,
+-						      abort_code, error))
++						      conn->abort_code,
++						      conn->error))
+ 				rxrpc_notify_socket(call);
+ 		}
+ 	}
+@@ -202,10 +203,12 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 		return 0;
+ 	}
+ 
++	conn->error = error;
++	conn->abort_code = abort_code;
+ 	conn->state = RXRPC_CONN_LOCALLY_ABORTED;
+ 	spin_unlock_bh(&conn->state_lock);
+ 
+-	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, abort_code, error);
++	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED);
+ 
+ 	msg.msg_name	= &conn->params.peer->srx.transport;
+ 	msg.msg_namelen	= conn->params.peer->srx.transport_len;
+@@ -224,7 +227,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 	whdr._rsvd	= 0;
+ 	whdr.serviceId	= htons(conn->service_id);
+ 
+-	word		= htonl(conn->local_abort);
++	word		= htonl(conn->abort_code);
+ 
+ 	iov[0].iov_base	= &whdr;
+ 	iov[0].iov_len	= sizeof(whdr);
+@@ -235,7 +238,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
+ 
+ 	serial = atomic_inc_return(&conn->serial);
+ 	whdr.serial = htonl(serial);
+-	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->local_abort);
++	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code);
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+ 	if (ret < 0) {
+@@ -308,9 +311,10 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ 		abort_code = ntohl(wtmp);
+ 		_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);
+ 
++		conn->error = -ECONNABORTED;
++		conn->abort_code = abort_code;
+ 		conn->state = RXRPC_CONN_REMOTELY_ABORTED;
+-		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED,
+-				  abort_code, -ECONNABORTED);
++		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED);
+ 		return -ECONNABORTED;
+ 
+ 	case RXRPC_PACKET_TYPE_CHALLENGE:
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 4c77a78a252a..e0d6d0fb7426 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -99,7 +99,7 @@ struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+ 	k.epoch	= sp->hdr.epoch;
+ 	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
+ 
+-	if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
++	if (rxrpc_to_server(sp)) {
+ 		/* We need to look up service connections by the full protocol
+ 		 * parameter set.  We look up the peer first as an intermediate
+ 		 * step and then the connection from the peer's tree.
+@@ -214,7 +214,7 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
+ 	call->peer->cong_cwnd = call->cong_cwnd;
+ 
+ 	spin_lock_bh(&conn->params.peer->lock);
+-	hlist_del_init(&call->error_link);
++	hlist_del_rcu(&call->error_link);
+ 	spin_unlock_bh(&conn->params.peer->lock);
+ 
+ 	if (rxrpc_is_client_call(call))
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 608d078a4981..a81240845224 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -216,10 +216,11 @@ static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb,
+ /*
+  * Apply a hard ACK by advancing the Tx window.
+  */
+-static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
++static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 				   struct rxrpc_ack_summary *summary)
+ {
+ 	struct sk_buff *skb, *list = NULL;
++	bool rot_last = false;
+ 	int ix;
+ 	u8 annotation;
+ 
+@@ -243,15 +244,17 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = list;
+ 		list = skb;
+ 
+-		if (annotation & RXRPC_TX_ANNO_LAST)
++		if (annotation & RXRPC_TX_ANNO_LAST) {
+ 			set_bit(RXRPC_CALL_TX_LAST, &call->flags);
++			rot_last = true;
++		}
+ 		if ((annotation & RXRPC_TX_ANNO_MASK) != RXRPC_TX_ANNO_ACK)
+ 			summary->nr_rot_new_acks++;
+ 	}
+ 
+ 	spin_unlock(&call->lock);
+ 
+-	trace_rxrpc_transmit(call, (test_bit(RXRPC_CALL_TX_LAST, &call->flags) ?
++	trace_rxrpc_transmit(call, (rot_last ?
+ 				    rxrpc_transmit_rotate_last :
+ 				    rxrpc_transmit_rotate));
+ 	wake_up(&call->waitq);
+@@ -262,6 +265,8 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ 		skb->next = NULL;
+ 		rxrpc_free_skb(skb, rxrpc_skb_tx_freed);
+ 	}
++
++	return rot_last;
+ }
+ 
+ /*
+@@ -273,23 +278,26 @@ static void rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
+ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 			       const char *abort_why)
+ {
++	unsigned int state;
+ 
+ 	ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags));
+ 
+ 	write_lock(&call->state_lock);
+ 
+-	switch (call->state) {
++	state = call->state;
++	switch (state) {
+ 	case RXRPC_CALL_CLIENT_SEND_REQUEST:
+ 	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
+ 		if (reply_begun)
+-			call->state = RXRPC_CALL_CLIENT_RECV_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_RECV_REPLY;
+ 		else
+-			call->state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
++			call->state = state = RXRPC_CALL_CLIENT_AWAIT_REPLY;
+ 		break;
+ 
+ 	case RXRPC_CALL_SERVER_AWAIT_ACK:
+ 		__rxrpc_call_completed(call);
+ 		rxrpc_notify_socket(call);
++		state = call->state;
+ 		break;
+ 
+ 	default:
+@@ -297,11 +305,10 @@ static bool rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun,
+ 	}
+ 
+ 	write_unlock(&call->state_lock);
+-	if (call->state == RXRPC_CALL_CLIENT_AWAIT_REPLY) {
++	if (state == RXRPC_CALL_CLIENT_AWAIT_REPLY)
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_await_reply);
+-	} else {
++	else
+ 		trace_rxrpc_transmit(call, rxrpc_transmit_end);
+-	}
+ 	_leave(" = ok");
+ 	return true;
+ 
+@@ -332,11 +339,11 @@ static bool rxrpc_receiving_reply(struct rxrpc_call *call)
+ 		trace_rxrpc_timer(call, rxrpc_timer_init_for_reply, now);
+ 	}
+ 
+-	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags))
+-		rxrpc_rotate_tx_window(call, top, &summary);
+ 	if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_proto_abort("TXL", call, top);
+-		return false;
++		if (!rxrpc_rotate_tx_window(call, top, &summary)) {
++			rxrpc_proto_abort("TXL", call, top);
++			return false;
++		}
+ 	}
+ 	if (!rxrpc_end_tx_phase(call, true, "ETD"))
+ 		return false;
+@@ -616,13 +623,14 @@ static void rxrpc_input_requested_ack(struct rxrpc_call *call,
+ 		if (!skb)
+ 			continue;
+ 
++		sent_at = skb->tstamp;
++		smp_rmb(); /* Read timestamp before serial. */
+ 		sp = rxrpc_skb(skb);
+ 		if (sp->hdr.serial != orig_serial)
+ 			continue;
+-		smp_rmb();
+-		sent_at = skb->tstamp;
+ 		goto found;
+ 	}
++
+ 	return;
+ 
+ found:
+@@ -854,6 +862,16 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				  rxrpc_propose_ack_respond_to_ack);
+ 	}
+ 
++	/* Discard any out-of-order or duplicate ACKs. */
++	if (before_eq(sp->hdr.serial, call->acks_latest)) {
++		_debug("discard ACK %d <= %d",
++		       sp->hdr.serial, call->acks_latest);
++		return;
++	}
++	call->acks_latest_ts = skb->tstamp;
++	call->acks_latest = sp->hdr.serial;
++
++	/* Parse rwind and mtu sizes if provided. */
+ 	ioffset = offset + nr_acks + 3;
+ 	if (skb->len >= ioffset + sizeof(buf.info)) {
+ 		if (skb_copy_bits(skb, ioffset, &buf.info, sizeof(buf.info)) < 0)
+@@ -875,23 +893,18 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 		return;
+ 	}
+ 
+-	/* Discard any out-of-order or duplicate ACKs. */
+-	if (before_eq(sp->hdr.serial, call->acks_latest)) {
+-		_debug("discard ACK %d <= %d",
+-		       sp->hdr.serial, call->acks_latest);
+-		return;
+-	}
+-	call->acks_latest_ts = skb->tstamp;
+-	call->acks_latest = sp->hdr.serial;
+-
+ 	if (before(hard_ack, call->tx_hard_ack) ||
+ 	    after(hard_ack, call->tx_top))
+ 		return rxrpc_proto_abort("AKW", call, 0);
+ 	if (nr_acks > call->tx_top - hard_ack)
+ 		return rxrpc_proto_abort("AKN", call, 0);
+ 
+-	if (after(hard_ack, call->tx_hard_ack))
+-		rxrpc_rotate_tx_window(call, hard_ack, &summary);
++	if (after(hard_ack, call->tx_hard_ack)) {
++		if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) {
++			rxrpc_end_tx_phase(call, false, "ETA");
++			return;
++		}
++	}
+ 
+ 	if (nr_acks > 0) {
+ 		if (skb_copy_bits(skb, offset, buf.acks, nr_acks) < 0)
+@@ -900,11 +913,6 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb,
+ 				      &summary);
+ 	}
+ 
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags)) {
+-		rxrpc_end_tx_phase(call, false, "ETA");
+-		return;
+-	}
+-
+ 	if (call->rxtx_annotations[call->tx_top & RXRPC_RXTX_BUFF_MASK] &
+ 	    RXRPC_TX_ANNO_LAST &&
+ 	    summary.nr_acks == call->tx_top - hard_ack &&
+@@ -926,8 +934,7 @@ static void rxrpc_input_ackall(struct rxrpc_call *call, struct sk_buff *skb)
+ 
+ 	_proto("Rx ACKALL %%%u", sp->hdr.serial);
+ 
+-	rxrpc_rotate_tx_window(call, call->tx_top, &summary);
+-	if (test_bit(RXRPC_CALL_TX_LAST, &call->flags))
++	if (rxrpc_rotate_tx_window(call, call->tx_top, &summary))
+ 		rxrpc_end_tx_phase(call, false, "ETL");
+ }
+ 
+@@ -1137,6 +1144,9 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		return;
+ 	}
+ 
++	if (skb->tstamp == 0)
++		skb->tstamp = ktime_get_real();
++
+ 	rxrpc_new_skb(skb, rxrpc_skb_rx_received);
+ 
+ 	_net("recv skb %p", skb);
+@@ -1171,10 +1181,6 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	trace_rxrpc_rx_packet(sp);
+ 
+-	_net("Rx RxRPC %s ep=%x call=%x:%x",
+-	     sp->hdr.flags & RXRPC_CLIENT_INITIATED ? "ToServer" : "ToClient",
+-	     sp->hdr.epoch, sp->hdr.cid, sp->hdr.callNumber);
+-
+ 	if (sp->hdr.type >= RXRPC_N_PACKET_TYPES ||
+ 	    !((RXRPC_SUPPORTED_PACKET_TYPES >> sp->hdr.type) & 1)) {
+ 		_proto("Rx Bad Packet Type %u", sp->hdr.type);
+@@ -1183,13 +1189,13 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 
+ 	switch (sp->hdr.type) {
+ 	case RXRPC_PACKET_TYPE_VERSION:
+-		if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED))
++		if (rxrpc_to_client(sp))
+ 			goto discard;
+ 		rxrpc_post_packet_to_local(local, skb);
+ 		goto out;
+ 
+ 	case RXRPC_PACKET_TYPE_BUSY:
+-		if (sp->hdr.flags & RXRPC_CLIENT_INITIATED)
++		if (rxrpc_to_server(sp))
+ 			goto discard;
+ 		/* Fall through */
+ 
+@@ -1269,7 +1275,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 		call = rcu_dereference(chan->call);
+ 
+ 		if (sp->hdr.callNumber > chan->call_id) {
+-			if (!(sp->hdr.flags & RXRPC_CLIENT_INITIATED)) {
++			if (rxrpc_to_client(sp)) {
+ 				rcu_read_unlock();
+ 				goto reject_packet;
+ 			}
+@@ -1292,7 +1298,7 @@ void rxrpc_data_ready(struct sock *udp_sk)
+ 	}
+ 
+ 	if (!call || atomic_read(&call->usage) == 0) {
+-		if (!(sp->hdr.type & RXRPC_CLIENT_INITIATED) ||
++		if (rxrpc_to_client(sp) ||
+ 		    sp->hdr.callNumber == 0 ||
+ 		    sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
+ 			goto bad_message_unlock;
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index b493e6b62740..386dc1f20c73 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -135,10 +135,10 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 	}
+ 
+ 	switch (local->srx.transport.family) {
+-	case AF_INET:
+-		/* we want to receive ICMP errors */
++	case AF_INET6:
++		/* we want to receive ICMPv6 errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -146,19 +146,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IP_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
++		opt = IPV6_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
+-		break;
+ 
+-	case AF_INET6:
++		/* Fall through and set IPv4 options too otherwise we don't get
++		 * errors from IPv4 packets sent through the IPv6 socket.
++		 */
++
++	case AF_INET:
+ 		/* we want to receive ICMP errors */
+ 		opt = 1;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR,
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+@@ -166,13 +169,22 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 		}
+ 
+ 		/* we want to set the don't fragment bit */
+-		opt = IPV6_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
++		opt = IP_PMTUDISC_DO;
++		ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER,
+ 					(char *) &opt, sizeof(opt));
+ 		if (ret < 0) {
+ 			_debug("setsockopt failed");
+ 			goto error;
+ 		}
++
++		/* We want receive timestamps. */
++		opt = 1;
++		ret = kernel_setsockopt(local->socket, SOL_SOCKET, SO_TIMESTAMPNS,
++					(char *)&opt, sizeof(opt));
++		if (ret < 0) {
++			_debug("setsockopt failed");
++			goto error;
++		}
+ 		break;
+ 
+ 	default:
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 4774c8f5634d..6ac21bb2071d 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -124,7 +124,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	struct kvec iov[2];
+ 	rxrpc_serial_t serial;
+ 	rxrpc_seq_t hard_ack, top;
+-	ktime_t now;
+ 	size_t len, n;
+ 	int ret;
+ 	u8 reason;
+@@ -196,9 +195,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 		/* We need to stick a time in before we send the packet in case
+ 		 * the reply gets back before kernel_sendmsg() completes - but
+ 		 * asking UDP to send the packet can take a relatively long
+-		 * time, so we update the time after, on the assumption that
+-		 * the packet transmission is more likely to happen towards the
+-		 * end of the kernel_sendmsg() call.
++		 * time.
+ 		 */
+ 		call->ping_time = ktime_get_real();
+ 		set_bit(RXRPC_CALL_PINGING, &call->flags);
+@@ -206,9 +203,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	}
+ 
+ 	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+-	now = ktime_get_real();
+-	if (ping)
+-		call->ping_time = now;
+ 	conn->params.peer->last_tx_at = ktime_get_seconds();
+ 	if (ret < 0)
+ 		trace_rxrpc_tx_fail(call->debug_id, serial, ret,
+@@ -357,8 +351,14 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 
+ 	/* If our RTT cache needs working on, request an ACK.  Also request
+ 	 * ACKs if a DATA packet appears to have been lost.
++	 *
++	 * However, we mustn't request an ACK on the last reply packet of a
++	 * service call, lest OpenAFS incorrectly send us an ACK with some
++	 * soft-ACKs in it and then never follow up with a proper hard ACK.
+ 	 */
+-	if (!(sp->hdr.flags & RXRPC_LAST_PACKET) &&
++	if ((!(sp->hdr.flags & RXRPC_LAST_PACKET) ||
++	     rxrpc_to_server(sp)
++	     ) &&
+ 	    (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) ||
+ 	     retrans ||
+ 	     call->cong_mode == RXRPC_CALL_SLOW_START ||
+@@ -384,6 +384,11 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 		goto send_fragmentable;
+ 
+ 	down_read(&conn->params.local->defrag_sem);
++
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	/* send the packet by UDP
+ 	 * - returns -EMSGSIZE if UDP would have to fragment the packet
+ 	 *   to go out of the interface
+@@ -404,12 +409,8 @@ done:
+ 	trace_rxrpc_tx_data(call, sp->hdr.seq, serial, whdr.flags,
+ 			    retrans, lost);
+ 	if (ret >= 0) {
+-		ktime_t now = ktime_get_real();
+-		skb->tstamp = now;
+-		smp_wmb();
+-		sp->hdr.serial = serial;
+ 		if (whdr.flags & RXRPC_REQUEST_ACK) {
+-			call->peer->rtt_last_req = now;
++			call->peer->rtt_last_req = skb->tstamp;
+ 			trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
+ 			if (call->peer->rtt_usage > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+@@ -448,6 +449,10 @@ send_fragmentable:
+ 
+ 	down_write(&conn->params.local->defrag_sem);
+ 
++	sp->hdr.serial = serial;
++	smp_wmb(); /* Set serial before timestamp */
++	skb->tstamp = ktime_get_real();
++
+ 	switch (conn->params.local->srx.transport.family) {
+ 	case AF_INET:
+ 		opt = IP_PMTUDISC_DONT;
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 4f9da2f51c69..f3e6fc670da2 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -23,6 +23,8 @@
+ #include "ar-internal.h"
+ 
+ static void rxrpc_store_error(struct rxrpc_peer *, struct sock_exterr_skb *);
++static void rxrpc_distribute_error(struct rxrpc_peer *, int,
++				   enum rxrpc_call_completion);
+ 
+ /*
+  * Find the peer associated with an ICMP packet.
+@@ -194,8 +196,6 @@ void rxrpc_error_report(struct sock *sk)
+ 	rcu_read_unlock();
+ 	rxrpc_free_skb(skb, rxrpc_skb_rx_freed);
+ 
+-	/* The ref we obtained is passed off to the work item */
+-	__rxrpc_queue_peer_error(peer);
+ 	_leave("");
+ }
+ 
+@@ -205,6 +205,7 @@ void rxrpc_error_report(struct sock *sk)
+ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 			      struct sock_exterr_skb *serr)
+ {
++	enum rxrpc_call_completion compl = RXRPC_CALL_NETWORK_ERROR;
+ 	struct sock_extended_err *ee;
+ 	int err;
+ 
+@@ -255,7 +256,7 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 	case SO_EE_ORIGIN_NONE:
+ 	case SO_EE_ORIGIN_LOCAL:
+ 		_proto("Rx Received local error { error=%d }", err);
+-		err += RXRPC_LOCAL_ERROR_OFFSET;
++		compl = RXRPC_CALL_LOCAL_ERROR;
+ 		break;
+ 
+ 	case SO_EE_ORIGIN_ICMP6:
+@@ -264,48 +265,23 @@ static void rxrpc_store_error(struct rxrpc_peer *peer,
+ 		break;
+ 	}
+ 
+-	peer->error_report = err;
++	rxrpc_distribute_error(peer, err, compl);
+ }
+ 
+ /*
+- * Distribute an error that occurred on a peer
++ * Distribute an error that occurred on a peer.
+  */
+-void rxrpc_peer_error_distributor(struct work_struct *work)
++static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
++				   enum rxrpc_call_completion compl)
+ {
+-	struct rxrpc_peer *peer =
+-		container_of(work, struct rxrpc_peer, error_distributor);
+ 	struct rxrpc_call *call;
+-	enum rxrpc_call_completion compl;
+-	int error;
+-
+-	_enter("");
+-
+-	error = READ_ONCE(peer->error_report);
+-	if (error < RXRPC_LOCAL_ERROR_OFFSET) {
+-		compl = RXRPC_CALL_NETWORK_ERROR;
+-	} else {
+-		compl = RXRPC_CALL_LOCAL_ERROR;
+-		error -= RXRPC_LOCAL_ERROR_OFFSET;
+-	}
+ 
+-	_debug("ISSUE ERROR %s %d", rxrpc_call_completions[compl], error);
+-
+-	spin_lock_bh(&peer->lock);
+-
+-	while (!hlist_empty(&peer->error_targets)) {
+-		call = hlist_entry(peer->error_targets.first,
+-				   struct rxrpc_call, error_link);
+-		hlist_del_init(&call->error_link);
++	hlist_for_each_entry_rcu(call, &peer->error_targets, error_link) {
+ 		rxrpc_see_call(call);
+-
+-		if (rxrpc_set_call_completion(call, compl, 0, -error))
++		if (call->state < RXRPC_CALL_COMPLETE &&
++		    rxrpc_set_call_completion(call, compl, 0, -error))
+ 			rxrpc_notify_socket(call);
+ 	}
+-
+-	spin_unlock_bh(&peer->lock);
+-
+-	rxrpc_put_peer(peer);
+-	_leave("");
+ }
+ 
+ /*
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 24ec7cdcf332..ef4c2e8a35cc 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -222,8 +222,6 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ 		atomic_set(&peer->usage, 1);
+ 		peer->local = local;
+ 		INIT_HLIST_HEAD(&peer->error_targets);
+-		INIT_WORK(&peer->error_distributor,
+-			  &rxrpc_peer_error_distributor);
+ 		peer->service_conns = RB_ROOT;
+ 		seqlock_init(&peer->service_conn_lock);
+ 		spin_lock_init(&peer->lock);
+@@ -415,21 +413,6 @@ struct rxrpc_peer *rxrpc_get_peer_maybe(struct rxrpc_peer *peer)
+ 	return peer;
+ }
+ 
+-/*
+- * Queue a peer record.  This passes the caller's ref to the workqueue.
+- */
+-void __rxrpc_queue_peer_error(struct rxrpc_peer *peer)
+-{
+-	const void *here = __builtin_return_address(0);
+-	int n;
+-
+-	n = atomic_read(&peer->usage);
+-	if (rxrpc_queue_work(&peer->error_distributor))
+-		trace_rxrpc_peer(peer, rxrpc_peer_queued_error, n, here);
+-	else
+-		rxrpc_put_peer(peer);
+-}
+-
+ /*
+  * Discard a peer record.
+  */
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index f74513a7c7a8..c855fd045a3c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -31,6 +31,8 @@
+ #include <net/pkt_sched.h>
+ #include <net/pkt_cls.h>
+ 
++extern const struct nla_policy rtm_tca_policy[TCA_MAX + 1];
++
+ /* The list of all installed classifier types */
+ static LIST_HEAD(tcf_proto_base);
+ 
+@@ -1083,7 +1085,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ replay:
+ 	tp_created = 0;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1226,7 +1228,7 @@ static int tc_del_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1334,7 +1336,7 @@ static int tc_get_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ 	void *fh = NULL;
+ 	int err;
+ 
+-	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*t), tca, TCA_MAX, rtm_tca_policy, extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1488,7 +1490,8 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (nlmsg_len(cb->nlh) < sizeof(*tcm))
+ 		return skb->len;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(cb->nlh, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  NULL);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 99cc25aae503..57f71765febe 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2052,7 +2052,8 @@ static int tc_dump_tclass_root(struct Qdisc *root, struct sk_buff *skb,
+ 
+ 	if (tcm->tcm_parent) {
+ 		q = qdisc_match_from_root(root, TC_H_MAJ(tcm->tcm_parent));
+-		if (q && tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
++		if (q && q != root &&
++		    tc_dump_tclass_qdisc(q, skb, tcm, cb, t_p, s_t) < 0)
+ 			return -1;
+ 		return 0;
+ 	}
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index cbe4831f46f4..4a042abf844c 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -413,7 +413,7 @@ static int gred_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (tb[TCA_GRED_PARMS] == NULL && tb[TCA_GRED_STAB] == NULL) {
+ 		if (tb[TCA_GRED_LIMIT] != NULL)
+ 			sch->limit = nla_get_u32(tb[TCA_GRED_LIMIT]);
+-		return gred_change_table_def(sch, opt);
++		return gred_change_table_def(sch, tb[TCA_GRED_DPS]);
+ 	}
+ 
+ 	if (tb[TCA_GRED_PARMS] == NULL ||
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 50ee07cd20c4..9d903b870790 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -270,11 +270,10 @@ struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id)
+ 
+ 	spin_lock_bh(&sctp_assocs_id_lock);
+ 	asoc = (struct sctp_association *)idr_find(&sctp_assocs_id, (int)id);
++	if (asoc && (asoc->base.sk != sk || asoc->base.dead))
++		asoc = NULL;
+ 	spin_unlock_bh(&sctp_assocs_id_lock);
+ 
+-	if (!asoc || (asoc->base.sk != sk) || asoc->base.dead)
+-		return NULL;
+-
+ 	return asoc;
+ }
+ 
+@@ -1940,8 +1939,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+ 		if (sp->strm_interleave) {
+ 			timeo = sock_sndtimeo(sk, 0);
+ 			err = sctp_wait_for_connect(asoc, &timeo);
+-			if (err)
++			if (err) {
++				err = -ESRCH;
+ 				goto err;
++			}
+ 		} else {
+ 			wait_connect = true;
+ 		}
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index add82b0266f3..3be95f77ec7f 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -114,22 +114,17 @@ static void __smc_lgr_unregister_conn(struct smc_connection *conn)
+ 	sock_put(&smc->sk); /* sock_hold in smc_lgr_register_conn() */
+ }
+ 
+-/* Unregister connection and trigger lgr freeing if applicable
++/* Unregister connection from lgr
+  */
+ static void smc_lgr_unregister_conn(struct smc_connection *conn)
+ {
+ 	struct smc_link_group *lgr = conn->lgr;
+-	int reduced = 0;
+ 
+ 	write_lock_bh(&lgr->conns_lock);
+ 	if (conn->alert_token_local) {
+-		reduced = 1;
+ 		__smc_lgr_unregister_conn(conn);
+ 	}
+ 	write_unlock_bh(&lgr->conns_lock);
+-	if (!reduced || lgr->conns_num)
+-		return;
+-	smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_lgr_free_work(struct work_struct *work)
+@@ -238,7 +233,8 @@ out:
+ 	return rc;
+ }
+ 
+-static void smc_buf_unuse(struct smc_connection *conn)
++static void smc_buf_unuse(struct smc_connection *conn,
++			  struct smc_link_group *lgr)
+ {
+ 	if (conn->sndbuf_desc)
+ 		conn->sndbuf_desc->used = 0;
+@@ -248,8 +244,6 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ 			conn->rmb_desc->used = 0;
+ 		} else {
+ 			/* buf registration failed, reuse not possible */
+-			struct smc_link_group *lgr = conn->lgr;
+-
+ 			write_lock_bh(&lgr->rmbs_lock);
+ 			list_del(&conn->rmb_desc->list);
+ 			write_unlock_bh(&lgr->rmbs_lock);
+@@ -262,11 +256,16 @@ static void smc_buf_unuse(struct smc_connection *conn)
+ /* remove a finished connection from its link group */
+ void smc_conn_free(struct smc_connection *conn)
+ {
+-	if (!conn->lgr)
++	struct smc_link_group *lgr = conn->lgr;
++
++	if (!lgr)
+ 		return;
+ 	smc_cdc_tx_dismiss_slots(conn);
+-	smc_lgr_unregister_conn(conn);
+-	smc_buf_unuse(conn);
++	smc_lgr_unregister_conn(conn);		/* unsets conn->lgr */
++	smc_buf_unuse(conn, lgr);		/* allow buffer reuse */
++
++	if (!lgr->conns_num)
++		smc_lgr_schedule_free_work(lgr);
+ }
+ 
+ static void smc_link_clear(struct smc_link *lnk)
+diff --git a/net/socket.c b/net/socket.c
+index d4187ac17d55..fcb18a7ed14b 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2887,9 +2887,14 @@ static int ethtool_ioctl(struct net *net, struct compat_ifreq __user *ifr32)
+ 		    copy_in_user(&rxnfc->fs.ring_cookie,
+ 				 &compat_rxnfc->fs.ring_cookie,
+ 				 (void __user *)(&rxnfc->fs.location + 1) -
+-				 (void __user *)&rxnfc->fs.ring_cookie) ||
+-		    copy_in_user(&rxnfc->rule_cnt, &compat_rxnfc->rule_cnt,
+-				 sizeof(rxnfc->rule_cnt)))
++				 (void __user *)&rxnfc->fs.ring_cookie))
++			return -EFAULT;
++		if (ethcmd == ETHTOOL_GRXCLSRLALL) {
++			if (put_user(rule_cnt, &rxnfc->rule_cnt))
++				return -EFAULT;
++		} else if (copy_in_user(&rxnfc->rule_cnt,
++					&compat_rxnfc->rule_cnt,
++					sizeof(rxnfc->rule_cnt)))
+ 			return -EFAULT;
+ 	}
+ 
+diff --git a/net/tipc/name_distr.c b/net/tipc/name_distr.c
+index 51b4b96f89db..3cfeb9df64b0 100644
+--- a/net/tipc/name_distr.c
++++ b/net/tipc/name_distr.c
+@@ -115,7 +115,7 @@ struct sk_buff *tipc_named_withdraw(struct net *net, struct publication *publ)
+ 	struct sk_buff *buf;
+ 	struct distr_item *item;
+ 
+-	list_del(&publ->binding_node);
++	list_del_rcu(&publ->binding_node);
+ 
+ 	if (publ->scope == TIPC_NODE_SCOPE)
+ 		return NULL;
+@@ -147,7 +147,7 @@ static void named_distribute(struct net *net, struct sk_buff_head *list,
+ 			ITEM_SIZE) * ITEM_SIZE;
+ 	u32 msg_rem = msg_dsz;
+ 
+-	list_for_each_entry(publ, pls, binding_node) {
++	list_for_each_entry_rcu(publ, pls, binding_node) {
+ 		/* Prepare next buffer: */
+ 		if (!skb) {
+ 			skb = named_prepare_buf(net, PUBLICATION, msg_rem,
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 9fab8e5a4a5b..994ddc7ec9b1 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -286,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge, bool revert)
++			      bool charge)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -335,10 +335,10 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 	}
+ 
+ out:
++	if (rc)
++		iov_iter_revert(from, size - *size_used);
+ 	*size_used = size;
+ 	*pages_used = num_elem;
+-	if (revert)
+-		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -440,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true, false);
++				true);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -453,8 +453,6 @@ alloc_encrypted:
+ 
+ 			copied -= try_to_copy;
+ fallback_to_reg_send:
+-			iov_iter_revert(&msg->msg_iter,
+-					ctx->sg_plaintext_size - orig_size);
+ 			trim_sg(sk, ctx->sg_plaintext_data,
+ 				&ctx->sg_plaintext_num_elem,
+ 				&ctx->sg_plaintext_size,
+@@ -828,7 +826,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false, true);
++							 MAX_SKB_FRAGS,	false);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 733ccf867972..214f9ef79a64 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3699,6 +3699,7 @@ static bool ht_rateset_to_mask(struct ieee80211_supported_band *sband,
+ 			return false;
+ 
+ 		/* check availability */
++		ridx = array_index_nospec(ridx, IEEE80211_HT_MCS_MASK_LEN);
+ 		if (sband->ht_cap.mcs.rx_mask[ridx] & rbit)
+ 			mcs[ridx] |= rbit;
+ 		else
+@@ -10124,7 +10125,7 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 	struct wireless_dev *wdev = dev->ieee80211_ptr;
+ 	s32 last, low, high;
+ 	u32 hyst;
+-	int i, n;
++	int i, n, low_index;
+ 	int err;
+ 
+ 	/* RSSI reporting disabled? */
+@@ -10161,10 +10162,19 @@ static int cfg80211_cqm_rssi_update(struct cfg80211_registered_device *rdev,
+ 		if (last < wdev->cqm_config->rssi_thresholds[i])
+ 			break;
+ 
+-	low = i > 0 ?
+-		(wdev->cqm_config->rssi_thresholds[i - 1] - hyst) : S32_MIN;
+-	high = i < n ?
+-		(wdev->cqm_config->rssi_thresholds[i] + hyst - 1) : S32_MAX;
++	low_index = i - 1;
++	if (low_index >= 0) {
++		low_index = array_index_nospec(low_index, n);
++		low = wdev->cqm_config->rssi_thresholds[low_index] - hyst;
++	} else {
++		low = S32_MIN;
++	}
++	if (i < n) {
++		i = array_index_nospec(i, n);
++		high = wdev->cqm_config->rssi_thresholds[i] + hyst - 1;
++	} else {
++		high = S32_MAX;
++	}
+ 
+ 	return rdev_set_cqm_rssi_range_config(rdev, dev, low, high);
+ }
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 2f702adf2912..24cfa2776f50 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2661,11 +2661,12 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ {
+ 	struct wiphy *wiphy = NULL;
+ 	enum reg_request_treatment treatment;
++	enum nl80211_reg_initiator initiator = reg_request->initiator;
+ 
+ 	if (reg_request->wiphy_idx != WIPHY_IDX_INVALID)
+ 		wiphy = wiphy_idx_to_wiphy(reg_request->wiphy_idx);
+ 
+-	switch (reg_request->initiator) {
++	switch (initiator) {
+ 	case NL80211_REGDOM_SET_BY_CORE:
+ 		treatment = reg_process_hint_core(reg_request);
+ 		break;
+@@ -2683,7 +2684,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 		treatment = reg_process_hint_country_ie(wiphy, reg_request);
+ 		break;
+ 	default:
+-		WARN(1, "invalid initiator %d\n", reg_request->initiator);
++		WARN(1, "invalid initiator %d\n", initiator);
+ 		goto out_free;
+ 	}
+ 
+@@ -2698,7 +2699,7 @@ static void reg_process_hint(struct regulatory_request *reg_request)
+ 	 */
+ 	if (treatment == REG_REQ_ALREADY_SET && wiphy &&
+ 	    wiphy->regulatory_flags & REGULATORY_STRICT_REG) {
+-		wiphy_update_regulatory(wiphy, reg_request->initiator);
++		wiphy_update_regulatory(wiphy, initiator);
+ 		wiphy_all_share_dfs_chan_state(wiphy);
+ 		reg_check_channels();
+ 	}
+@@ -2867,6 +2868,7 @@ static int regulatory_hint_core(const char *alpha2)
+ 	request->alpha2[0] = alpha2[0];
+ 	request->alpha2[1] = alpha2[1];
+ 	request->initiator = NL80211_REGDOM_SET_BY_CORE;
++	request->wiphy_idx = WIPHY_IDX_INVALID;
+ 
+ 	queue_regulatory_request(request);
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d36c3eb7b931..d0e7472dd9fd 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1058,13 +1058,23 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+ 
++/*
++ * Update RX channel information based on the available frame payload
++ * information. This is mainly for the 2.4 GHz band where frames can be received
++ * from neighboring channels and the Beacon frames use the DSSS Parameter Set
++ * element to indicate the current (transmitting) channel, but this might also
++ * be needed on other bands if RX frequency does not match with the actual
++ * operating channel of a BSS.
++ */
+ static struct ieee80211_channel *
+ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+-			 struct ieee80211_channel *channel)
++			 struct ieee80211_channel *channel,
++			 enum nl80211_bss_scan_width scan_width)
+ {
+ 	const u8 *tmp;
+ 	u32 freq;
+ 	int channel_number = -1;
++	struct ieee80211_channel *alt_channel;
+ 
+ 	tmp = cfg80211_find_ie(WLAN_EID_DS_PARAMS, ie, ielen);
+ 	if (tmp && tmp[1] == 1) {
+@@ -1078,16 +1088,45 @@ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
+ 		}
+ 	}
+ 
+-	if (channel_number < 0)
++	if (channel_number < 0) {
++		/* No channel information in frame payload */
+ 		return channel;
++	}
+ 
+ 	freq = ieee80211_channel_to_frequency(channel_number, channel->band);
+-	channel = ieee80211_get_channel(wiphy, freq);
+-	if (!channel)
+-		return NULL;
+-	if (channel->flags & IEEE80211_CHAN_DISABLED)
++	alt_channel = ieee80211_get_channel(wiphy, freq);
++	if (!alt_channel) {
++		if (channel->band == NL80211_BAND_2GHZ) {
++			/*
++			 * Better not allow unexpected channels when that could
++			 * be going beyond the 1-11 range (e.g., discovering
++			 * BSS on channel 12 when radio is configured for
++			 * channel 11.
++			 */
++			return NULL;
++		}
++
++		/* No match for the payload channel number - ignore it */
++		return channel;
++	}
++
++	if (scan_width == NL80211_BSS_CHAN_WIDTH_10 ||
++	    scan_width == NL80211_BSS_CHAN_WIDTH_5) {
++		/*
++		 * Ignore channel number in 5 and 10 MHz channels where there
++		 * may not be an n:1 or 1:n mapping between frequencies and
++		 * channel numbers.
++		 */
++		return channel;
++	}
++
++	/*
++	 * Use the channel determined through the payload channel number
++	 * instead of the RX channel reported by the driver.
++	 */
++	if (alt_channel->flags & IEEE80211_CHAN_DISABLED)
+ 		return NULL;
+-	return channel;
++	return alt_channel;
+ }
+ 
+ /* Returned bss is reference counted and must be cleaned up appropriately. */
+@@ -1112,7 +1151,8 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ 		    (data->signal < 0 || data->signal > 100)))
+ 		return NULL;
+ 
+-	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan);
++	channel = cfg80211_get_bss_channel(wiphy, ie, ielen, data->chan,
++					   data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+@@ -1210,7 +1250,7 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 		return NULL;
+ 
+ 	channel = cfg80211_get_bss_channel(wiphy, mgmt->u.beacon.variable,
+-					   ielen, data->chan);
++					   ielen, data->chan, data->scan_width);
+ 	if (!channel)
+ 		return NULL;
+ 
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 352abca2605f..86f5afbd0a0c 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -453,6 +453,7 @@ resume:
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMINHDRERROR);
+ 			goto drop;
+ 		}
++		crypto_done = false;
+ 	} while (!err);
+ 
+ 	err = xfrm_rcv_cb(skb, family, x->type->proto, 0);
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index 89b178a78dc7..36d15a38ce5e 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -101,6 +101,10 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
+ 		spin_unlock_bh(&x->lock);
+ 
+ 		skb_dst_force(skb);
++		if (!skb_dst(skb)) {
++			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
++			goto error_nolock;
++		}
+ 
+ 		if (xfrm_offload(skb)) {
+ 			x->type_offload->encap(x, skb);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index a94983e03a8b..526e6814ed4b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -2551,6 +2551,10 @@ int __xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ 	}
+ 
+ 	skb_dst_force(skb);
++	if (!skb_dst(skb)) {
++		XFRM_INC_STATS(net, LINUX_MIB_XFRMFWDHDRERROR);
++		return 0;
++	}
+ 
+ 	dst = xfrm_lookup(net, skb_dst(skb), &fl, NULL, XFRM_LOOKUP_QUEUE);
+ 	if (IS_ERR(dst)) {
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 33878e6e0d0a..d0672c400c2f 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -151,10 +151,16 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ 	err = -EINVAL;
+ 	switch (p->family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			goto out;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			goto out;
++
+ 		break;
+ #else
+ 		err = -EAFNOSUPPORT;
+@@ -1359,10 +1365,16 @@ static int verify_newpolicy_info(struct xfrm_userpolicy_info *p)
+ 
+ 	switch (p->sel.family) {
+ 	case AF_INET:
++		if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32)
++			return -EINVAL;
++
+ 		break;
+ 
+ 	case AF_INET6:
+ #if IS_ENABLED(CONFIG_IPV6)
++		if (p->sel.prefixlen_d > 128 || p->sel.prefixlen_s > 128)
++			return -EINVAL;
++
+ 		break;
+ #else
+ 		return  -EAFNOSUPPORT;
+@@ -1443,6 +1455,9 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family)
+ 		    (ut[i].family != prev_family))
+ 			return -EINVAL;
+ 
++		if (ut[i].mode >= XFRM_MODE_MAX)
++			return -EINVAL;
++
+ 		prev_family = ut[i].family;
+ 
+ 		switch (ut[i].family) {
+diff --git a/tools/perf/Makefile b/tools/perf/Makefile
+index 225454416ed5..7902a5681fc8 100644
+--- a/tools/perf/Makefile
++++ b/tools/perf/Makefile
+@@ -84,10 +84,10 @@ endif # has_clean
+ endif # MAKECMDGOALS
+ 
+ #
+-# The clean target is not really parallel, don't print the jobs info:
++# Explicitly disable parallelism for the clean target.
+ #
+ clean:
+-	$(make)
++	$(make) -j1
+ 
+ #
+ # The build-test target is not really parallel, don't print the jobs info,
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 22dbb6612b41..b70cce40ca97 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2246,7 +2246,8 @@ static int append_inlines(struct callchain_cursor *cursor,
+ 	if (!symbol_conf.inline_name || !map || !sym)
+ 		return ret;
+ 
+-	addr = map__rip_2objdump(map, ip);
++	addr = map__map_ip(map, ip);
++	addr = map__rip_2objdump(map, addr);
+ 
+ 	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
+ 	if (!inline_node) {
+@@ -2272,7 +2273,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
+-	u64 addr;
++	u64 addr = entry->ip;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2284,7 +2285,8 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	 * Convert entry->ip from a virtual address to an offset in
+ 	 * its corresponding binary.
+ 	 */
+-	addr = map__map_ip(entry->map, entry->ip);
++	if (entry->map)
++		addr = map__map_ip(entry->map, entry->ip);
+ 
+ 	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 001be4f9d3b9..a5f9e236cc71 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -27,7 +27,7 @@ class install_lib(_install_lib):
+ 
+ cflags = getenv('CFLAGS', '').split()
+ # switch off several checks (need to be at the end of cflags list)
+-cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter' ]
++cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls' ]
+ if cc != "clang":
+     cflags += ['-Wno-cast-function-type' ]
+ 
+diff --git a/tools/testing/selftests/net/fib-onlink-tests.sh b/tools/testing/selftests/net/fib-onlink-tests.sh
+index 3991ad1a368d..864f865eee55 100755
+--- a/tools/testing/selftests/net/fib-onlink-tests.sh
++++ b/tools/testing/selftests/net/fib-onlink-tests.sh
+@@ -167,8 +167,8 @@ setup()
+ 	# add vrf table
+ 	ip li add ${VRF} type vrf table ${VRF_TABLE}
+ 	ip li set ${VRF} up
+-	ip ro add table ${VRF_TABLE} unreachable default
+-	ip -6 ro add table ${VRF_TABLE} unreachable default
++	ip ro add table ${VRF_TABLE} unreachable default metric 8192
++	ip -6 ro add table ${VRF_TABLE} unreachable default metric 8192
+ 
+ 	# create test interfaces
+ 	ip li add ${NETIFS[p1]} type veth peer name ${NETIFS[p2]}
+@@ -185,20 +185,20 @@ setup()
+ 	for n in 1 3 5 7; do
+ 		ip li set ${NETIFS[p${n}]} up
+ 		ip addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+ 	# move peer interfaces to namespace and add addresses
+ 	for n in 2 4 6 8; do
+ 		ip li set ${NETIFS[p${n}]} netns ${PEER_NS} up
+ 		ip -netns ${PEER_NS} addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]}
+-		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]}
++		ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} nodad
+ 	done
+ 
+-	set +e
++	ip -6 ro add default via ${V6ADDRS[p3]/::[0-9]/::64}
++	ip -6 ro add table ${VRF_TABLE} default via ${V6ADDRS[p7]/::[0-9]/::64}
+ 
+-	# let DAD complete - assume default of 1 probe
+-	sleep 1
++	set +e
+ }
+ 
+ cleanup()
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 0d7a44fa30af..8e509cbcb209 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ #
+ # This test is for checking rtnetlink callpaths, and get as much coverage as possible.
+ #
+diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
+index 850767befa47..99e537ab5ad9 100755
+--- a/tools/testing/selftests/net/udpgso_bench.sh
++++ b/tools/testing/selftests/net/udpgso_bench.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Run a series of udpgso benchmarks


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-10-20 12:36 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-10-20 12:36 UTC (permalink / raw
  To: gentoo-commits

commit:     15ce58de63de9683dd3a0076e8833daf45fb2992
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 20 12:36:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 20 12:36:21 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=15ce58de

Linux patch 4.18.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1015_linux-4.18.16.patch | 2439 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2443 insertions(+)

diff --git a/0000_README b/0000_README
index 5676b13..52e9ca9 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-4.18.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.15
 
+Patch:  1015_linux-4.18.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-4.18.16.patch b/1015_linux-4.18.16.patch
new file mode 100644
index 0000000..9bc7017
--- /dev/null
+++ b/1015_linux-4.18.16.patch
@@ -0,0 +1,2439 @@
+diff --git a/Makefile b/Makefile
+index 968eb96a0553..034dd990b0ae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 6c1b20dd76ad..7c6c97782022 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -6,34 +6,12 @@
+ # published by the Free Software Foundation.
+ #
+ 
+-ifeq ($(CROSS_COMPILE),)
+-ifndef CONFIG_CPU_BIG_ENDIAN
+-CROSS_COMPILE := arc-linux-
+-else
+-CROSS_COMPILE := arceb-linux-
+-endif
+-endif
+-
+ KBUILD_DEFCONFIG := nsim_700_defconfig
+ 
+ cflags-y	+= -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+ cflags-$(CONFIG_ISA_ARCOMPACT)	+= -mA7
+ cflags-$(CONFIG_ISA_ARCV2)	+= -mcpu=archs
+ 
+-is_700 = $(shell $(CC) -dM -E - < /dev/null | grep -q "ARC700" && echo 1 || echo 0)
+-
+-ifdef CONFIG_ISA_ARCOMPACT
+-ifeq ($(is_700), 0)
+-    $(error Toolchain not configured for ARCompact builds)
+-endif
+-endif
+-
+-ifdef CONFIG_ISA_ARCV2
+-ifeq ($(is_700), 1)
+-    $(error Toolchain not configured for ARCv2 builds)
+-endif
+-endif
+-
+ ifdef CONFIG_ARC_CURR_IN_REG
+ # For a global register defintion, make sure it gets passed to every file
+ # We had a customer reported bug where some code built in kernel was NOT using
+@@ -87,7 +65,7 @@ ldflags-$(CONFIG_CPU_BIG_ENDIAN)	+= -EB
+ # --build-id w/o "-marclinux". Default arc-elf32-ld is OK
+ ldflags-$(upto_gcc44)			+= -marclinux
+ 
+-LIBGCC	:= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
++LIBGCC	= $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
+ 
+ # Modules with short calls might break for calls into builtin-kernel
+ KBUILD_CFLAGS_MODULE	+= -mlong-calls -mno-millicode
+diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
+index ff12f47a96b6..09d347b61218 100644
+--- a/arch/powerpc/kernel/tm.S
++++ b/arch/powerpc/kernel/tm.S
+@@ -175,13 +175,27 @@ _GLOBAL(tm_reclaim)
+ 	std	r1, PACATMSCRATCH(r13)
+ 	ld	r1, PACAR1(r13)
+ 
+-	/* Store the PPR in r11 and reset to decent value */
+ 	std	r11, GPR11(r1)			/* Temporary stash */
+ 
++	/*
++	 * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is
++	 * clobbered by an exception once we turn on MSR_RI below.
++	 */
++	ld	r11, PACATMSCRATCH(r13)
++	std	r11, GPR1(r1)
++
++	/*
++	 * Store r13 away so we can free up the scratch SPR for the SLB fault
++	 * handler (needed once we start accessing the thread_struct).
++	 */
++	GET_SCRATCH0(r11)
++	std	r11, GPR13(r1)
++
+ 	/* Reset MSR RI so we can take SLB faults again */
+ 	li	r11, MSR_RI
+ 	mtmsrd	r11, 1
+ 
++	/* Store the PPR in r11 and reset to decent value */
+ 	mfspr	r11, SPRN_PPR
+ 	HMT_MEDIUM
+ 
+@@ -206,11 +220,11 @@ _GLOBAL(tm_reclaim)
+ 	SAVE_GPR(8, r7)				/* user r8 */
+ 	SAVE_GPR(9, r7)				/* user r9 */
+ 	SAVE_GPR(10, r7)			/* user r10 */
+-	ld	r3, PACATMSCRATCH(r13)		/* user r1 */
++	ld	r3, GPR1(r1)			/* user r1 */
+ 	ld	r4, GPR7(r1)			/* user r7 */
+ 	ld	r5, GPR11(r1)			/* user r11 */
+ 	ld	r6, GPR12(r1)			/* user r12 */
+-	GET_SCRATCH0(8)				/* user r13 */
++	ld	r8, GPR13(r1)			/* user r13 */
+ 	std	r3, GPR1(r7)
+ 	std	r4, GPR7(r7)
+ 	std	r5, GPR11(r7)
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index b5a71baedbc2..59d07bd5374a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1204,7 +1204,9 @@ int find_and_online_cpu_nid(int cpu)
+ 	int new_nid;
+ 
+ 	/* Use associativity from first thread for all siblings */
+-	vphn_get_associativity(cpu, associativity);
++	if (vphn_get_associativity(cpu, associativity))
++		return cpu_to_node(cpu);
++
+ 	new_nid = associativity_to_nid(associativity);
+ 	if (new_nid < 0 || !node_possible(new_nid))
+ 		new_nid = first_online_node;
+diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
+new file mode 100644
+index 000000000000..c9fecd120d18
+--- /dev/null
++++ b/arch/riscv/include/asm/asm-prototypes.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_RISCV_PROTOTYPES_H
++
++#include <linux/ftrace.h>
++#include <asm-generic/asm-prototypes.h>
++
++#endif /* _ASM_RISCV_PROTOTYPES_H */
+diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
+index eaa843a52907..a480356e0ed8 100644
+--- a/arch/x86/boot/compressed/mem_encrypt.S
++++ b/arch/x86/boot/compressed/mem_encrypt.S
+@@ -25,20 +25,6 @@ ENTRY(get_sev_encryption_bit)
+ 	push	%ebx
+ 	push	%ecx
+ 	push	%edx
+-	push	%edi
+-
+-	/*
+-	 * RIP-relative addressing is needed to access the encryption bit
+-	 * variable. Since we are running in 32-bit mode we need this call/pop
+-	 * sequence to get the proper relative addressing.
+-	 */
+-	call	1f
+-1:	popl	%edi
+-	subl	$1b, %edi
+-
+-	movl	enc_bit(%edi), %eax
+-	cmpl	$0, %eax
+-	jge	.Lsev_exit
+ 
+ 	/* Check if running under a hypervisor */
+ 	movl	$1, %eax
+@@ -69,15 +55,12 @@ ENTRY(get_sev_encryption_bit)
+ 
+ 	movl	%ebx, %eax
+ 	andl	$0x3f, %eax		/* Return the encryption bit location */
+-	movl	%eax, enc_bit(%edi)
+ 	jmp	.Lsev_exit
+ 
+ .Lno_sev:
+ 	xor	%eax, %eax
+-	movl	%eax, enc_bit(%edi)
+ 
+ .Lsev_exit:
+-	pop	%edi
+ 	pop	%edx
+ 	pop	%ecx
+ 	pop	%ebx
+@@ -113,8 +96,6 @@ ENTRY(set_sev_encryption_mask)
+ ENDPROC(set_sev_encryption_mask)
+ 
+ 	.data
+-enc_bit:
+-	.int	0xffffffff
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 	.balign	8
+diff --git a/drivers/clocksource/timer-fttmr010.c b/drivers/clocksource/timer-fttmr010.c
+index c020038ebfab..cf93f6419b51 100644
+--- a/drivers/clocksource/timer-fttmr010.c
++++ b/drivers/clocksource/timer-fttmr010.c
+@@ -130,13 +130,17 @@ static int fttmr010_timer_set_next_event(unsigned long cycles,
+ 	cr &= ~fttmr010->t1_enable_val;
+ 	writel(cr, fttmr010->base + TIMER_CR);
+ 
+-	/* Setup the match register forward/backward in time */
+-	cr = readl(fttmr010->base + TIMER1_COUNT);
+-	if (fttmr010->count_down)
+-		cr -= cycles;
+-	else
+-		cr += cycles;
+-	writel(cr, fttmr010->base + TIMER1_MATCH1);
++	if (fttmr010->count_down) {
++		/*
++		 * ASPEED Timer Controller will load TIMER1_LOAD register
++		 * into TIMER1_COUNT register when the timer is re-enabled.
++		 */
++		writel(cycles, fttmr010->base + TIMER1_LOAD);
++	} else {
++		/* Setup the match register forward in time */
++		cr = readl(fttmr010->base + TIMER1_COUNT);
++		writel(cr + cycles, fttmr010->base + TIMER1_MATCH1);
++	}
+ 
+ 	/* Start */
+ 	cr = readl(fttmr010->base + TIMER_CR);
+diff --git a/drivers/clocksource/timer-ti-32k.c b/drivers/clocksource/timer-ti-32k.c
+index 880a861ab3c8..713214d085e0 100644
+--- a/drivers/clocksource/timer-ti-32k.c
++++ b/drivers/clocksource/timer-ti-32k.c
+@@ -98,6 +98,9 @@ static int __init ti_32k_timer_init(struct device_node *np)
+ 		return -ENXIO;
+ 	}
+ 
++	if (!of_machine_is_compatible("ti,am43"))
++		ti_32k_timer.cs.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
++
+ 	ti_32k_timer.counter = ti_32k_timer.base;
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/arm/malidp_drv.c b/drivers/gpu/drm/arm/malidp_drv.c
+index 0a788d76ed5f..0ec4659795f1 100644
+--- a/drivers/gpu/drm/arm/malidp_drv.c
++++ b/drivers/gpu/drm/arm/malidp_drv.c
+@@ -615,6 +615,7 @@ static int malidp_bind(struct device *dev)
+ 	drm->irq_enabled = true;
+ 
+ 	ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
++	drm_crtc_vblank_reset(&malidp->crtc);
+ 	if (ret < 0) {
+ 		DRM_ERROR("failed to initialise vblank\n");
+ 		goto vblank_fail;
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index c2e55e5d97f6..1cf6290d6435 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -160,6 +160,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Ice Lake PCH */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 0e5eb0f547d3..b83348416885 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2048,33 +2048,55 @@ static int modify_qp(struct ib_uverbs_file *file,
+ 
+ 	if ((cmd->base.attr_mask & IB_QP_CUR_STATE &&
+ 	    cmd->base.cur_qp_state > IB_QPS_ERR) ||
+-	    cmd->base.qp_state > IB_QPS_ERR) {
++	    (cmd->base.attr_mask & IB_QP_STATE &&
++	    cmd->base.qp_state > IB_QPS_ERR)) {
+ 		ret = -EINVAL;
+ 		goto release_qp;
+ 	}
+ 
+-	attr->qp_state		  = cmd->base.qp_state;
+-	attr->cur_qp_state	  = cmd->base.cur_qp_state;
+-	attr->path_mtu		  = cmd->base.path_mtu;
+-	attr->path_mig_state	  = cmd->base.path_mig_state;
+-	attr->qkey		  = cmd->base.qkey;
+-	attr->rq_psn		  = cmd->base.rq_psn;
+-	attr->sq_psn		  = cmd->base.sq_psn;
+-	attr->dest_qp_num	  = cmd->base.dest_qp_num;
+-	attr->qp_access_flags	  = cmd->base.qp_access_flags;
+-	attr->pkey_index	  = cmd->base.pkey_index;
+-	attr->alt_pkey_index	  = cmd->base.alt_pkey_index;
+-	attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
+-	attr->max_rd_atomic	  = cmd->base.max_rd_atomic;
+-	attr->max_dest_rd_atomic  = cmd->base.max_dest_rd_atomic;
+-	attr->min_rnr_timer	  = cmd->base.min_rnr_timer;
+-	attr->port_num		  = cmd->base.port_num;
+-	attr->timeout		  = cmd->base.timeout;
+-	attr->retry_cnt		  = cmd->base.retry_cnt;
+-	attr->rnr_retry		  = cmd->base.rnr_retry;
+-	attr->alt_port_num	  = cmd->base.alt_port_num;
+-	attr->alt_timeout	  = cmd->base.alt_timeout;
+-	attr->rate_limit	  = cmd->rate_limit;
++	if (cmd->base.attr_mask & IB_QP_STATE)
++		attr->qp_state = cmd->base.qp_state;
++	if (cmd->base.attr_mask & IB_QP_CUR_STATE)
++		attr->cur_qp_state = cmd->base.cur_qp_state;
++	if (cmd->base.attr_mask & IB_QP_PATH_MTU)
++		attr->path_mtu = cmd->base.path_mtu;
++	if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE)
++		attr->path_mig_state = cmd->base.path_mig_state;
++	if (cmd->base.attr_mask & IB_QP_QKEY)
++		attr->qkey = cmd->base.qkey;
++	if (cmd->base.attr_mask & IB_QP_RQ_PSN)
++		attr->rq_psn = cmd->base.rq_psn;
++	if (cmd->base.attr_mask & IB_QP_SQ_PSN)
++		attr->sq_psn = cmd->base.sq_psn;
++	if (cmd->base.attr_mask & IB_QP_DEST_QPN)
++		attr->dest_qp_num = cmd->base.dest_qp_num;
++	if (cmd->base.attr_mask & IB_QP_ACCESS_FLAGS)
++		attr->qp_access_flags = cmd->base.qp_access_flags;
++	if (cmd->base.attr_mask & IB_QP_PKEY_INDEX)
++		attr->pkey_index = cmd->base.pkey_index;
++	if (cmd->base.attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY)
++		attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify;
++	if (cmd->base.attr_mask & IB_QP_MAX_QP_RD_ATOMIC)
++		attr->max_rd_atomic = cmd->base.max_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MAX_DEST_RD_ATOMIC)
++		attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic;
++	if (cmd->base.attr_mask & IB_QP_MIN_RNR_TIMER)
++		attr->min_rnr_timer = cmd->base.min_rnr_timer;
++	if (cmd->base.attr_mask & IB_QP_PORT)
++		attr->port_num = cmd->base.port_num;
++	if (cmd->base.attr_mask & IB_QP_TIMEOUT)
++		attr->timeout = cmd->base.timeout;
++	if (cmd->base.attr_mask & IB_QP_RETRY_CNT)
++		attr->retry_cnt = cmd->base.retry_cnt;
++	if (cmd->base.attr_mask & IB_QP_RNR_RETRY)
++		attr->rnr_retry = cmd->base.rnr_retry;
++	if (cmd->base.attr_mask & IB_QP_ALT_PATH) {
++		attr->alt_port_num = cmd->base.alt_port_num;
++		attr->alt_timeout = cmd->base.alt_timeout;
++		attr->alt_pkey_index = cmd->base.alt_pkey_index;
++	}
++	if (cmd->base.attr_mask & IB_QP_RATE_LIMIT)
++		attr->rate_limit = cmd->rate_limit;
+ 
+ 	if (cmd->base.attr_mask & IB_QP_AV)
+ 		copy_ah_attr_from_uverbs(qp->device, &attr->ah_attr,
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 20b9f31052bf..85cd1a3593d6 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -78,7 +78,7 @@ static struct list_head bnxt_re_dev_list = LIST_HEAD_INIT(bnxt_re_dev_list);
+ /* Mutex to protect the list of bnxt_re devices added */
+ static DEFINE_MUTEX(bnxt_re_dev_lock);
+ static struct workqueue_struct *bnxt_re_wq;
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait);
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev);
+ 
+ /* SR-IOV helper functions */
+ 
+@@ -182,7 +182,7 @@ static void bnxt_re_shutdown(void *p)
+ 	if (!rdev)
+ 		return;
+ 
+-	bnxt_re_ib_unreg(rdev, false);
++	bnxt_re_ib_unreg(rdev);
+ }
+ 
+ static void bnxt_re_stop_irq(void *handle)
+@@ -251,7 +251,7 @@ static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
+ /* Driver registration routines used to let the networking driver (bnxt_en)
+  * to know that the RoCE driver is now installed
+  */
+-static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -260,14 +260,9 @@ static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		return -EINVAL;
+ 
+ 	en_dev = rdev->en_dev;
+-	/* Acquire rtnl lock if it is not invokded from netdev event */
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,
+ 						    BNXT_ROCE_ULP);
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -281,14 +276,12 @@ static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	rtnl_lock();
+ 	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
+ 						  &bnxt_re_ulp_ops, rdev);
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+-static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
++static int bnxt_re_free_msix(struct bnxt_re_dev *rdev)
+ {
+ 	struct bnxt_en_dev *en_dev;
+ 	int rc;
+@@ -298,13 +291,9 @@ static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)
+ 
+ 	en_dev = rdev->en_dev;
+ 
+-	if (lock_wait)
+-		rtnl_lock();
+ 
+ 	rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);
+ 
+-	if (lock_wait)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -320,7 +309,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 
+ 	num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus());
+ 
+-	rtnl_lock();
+ 	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,
+ 							 rdev->msix_entries,
+ 							 num_msix_want);
+@@ -335,7 +323,6 @@ static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
+ 	}
+ 	rdev->num_msix = num_msix_got;
+ done:
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -358,24 +345,18 @@ static void bnxt_re_fill_fw_msg(struct bnxt_fw_msg *fw_msg, void *msg,
+ 	fw_msg->timeout = timeout;
+ }
+ 
+-static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+-				 bool lock_wait)
++static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_ring_free_input req = {0};
+ 	struct hwrm_ring_free_output resp;
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1);
+ 	req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
+@@ -386,8 +367,6 @@ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,
+ 	if (rc)
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW ring:%d :%#x", req.ring_id, rc);
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -405,7 +384,6 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1);
+ 	req.enables = 0;
+ 	req.page_tbl_addr =  cpu_to_le64(dma_arr[0]);
+@@ -426,27 +404,21 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev, dma_addr_t *dma_arr,
+ 	if (!rc)
+ 		*fw_ring_id = le16_to_cpu(resp.ring_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+-				      u32 fw_stats_ctx_id, bool lock_wait)
++				      u32 fw_stats_ctx_id)
+ {
+ 	struct bnxt_en_dev *en_dev = rdev->en_dev;
+ 	struct hwrm_stat_ctx_free_input req = {0};
+ 	struct bnxt_fw_msg fw_msg;
+-	bool do_unlock = false;
+ 	int rc = -EINVAL;
+ 
+ 	if (!en_dev)
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	if (lock_wait) {
+-		rtnl_lock();
+-		do_unlock = true;
+-	}
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1);
+ 	req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);
+@@ -457,8 +429,6 @@ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
+ 		dev_err(rdev_to_dev(rdev),
+ 			"Failed to free HW stats context %#x", rc);
+ 
+-	if (do_unlock)
+-		rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -478,7 +448,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 		return rc;
+ 
+ 	memset(&fw_msg, 0, sizeof(fw_msg));
+-	rtnl_lock();
+ 
+ 	bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);
+ 	req.update_period_ms = cpu_to_le32(1000);
+@@ -490,7 +459,6 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
+ 	if (!rc)
+ 		*fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);
+ 
+-	rtnl_unlock();
+ 	return rc;
+ }
+ 
+@@ -929,19 +897,19 @@ fail:
+ 	return rc;
+ }
+ 
+-static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < rdev->num_msix - 1; i++) {
+-		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id);
+ 		bnxt_qplib_free_nq(&rdev->nq[i]);
+ 	}
+ }
+ 
+-static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_free_res(struct bnxt_re_dev *rdev)
+ {
+-	bnxt_re_free_nq_res(rdev, lock_wait);
++	bnxt_re_free_nq_res(rdev);
+ 
+ 	if (rdev->qplib_res.dpi_tbl.max) {
+ 		bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+@@ -1219,7 +1187,7 @@ static int bnxt_re_setup_qos(struct bnxt_re_dev *rdev)
+ 	return 0;
+ }
+ 
+-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
++static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, rc;
+ 
+@@ -1234,28 +1202,27 @@ static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)
+ 		cancel_delayed_work(&rdev->worker);
+ 
+ 	bnxt_re_cleanup_res(rdev);
+-	bnxt_re_free_res(rdev, lock_wait);
++	bnxt_re_free_res(rdev);
+ 
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) {
+ 		rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to deinitialize RCFW: %#x", rc);
+-		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id,
+-					   lock_wait);
++		bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ 		bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ 		bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+-		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait);
++		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ 		bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {
+-		rc = bnxt_re_free_msix(rdev, lock_wait);
++		rc = bnxt_re_free_msix(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to free MSI-X vectors: %#x", rc);
+ 	}
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) {
+-		rc = bnxt_re_unregister_netdev(rdev, lock_wait);
++		rc = bnxt_re_unregister_netdev(rdev);
+ 		if (rc)
+ 			dev_warn(rdev_to_dev(rdev),
+ 				 "Failed to unregister with netdev: %#x", rc);
+@@ -1276,6 +1243,12 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ {
+ 	int i, j, rc;
+ 
++	bool locked;
++
++	/* Acquire rtnl lock through out this function */
++	rtnl_lock();
++	locked = true;
++
+ 	/* Registered a new RoCE device instance to netdev */
+ 	rc = bnxt_re_register_netdev(rdev);
+ 	if (rc) {
+@@ -1374,12 +1347,16 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 		schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));
+ 	}
+ 
++	rtnl_unlock();
++	locked = false;
++
+ 	/* Register ib dev */
+ 	rc = bnxt_re_register_ib(rdev);
+ 	if (rc) {
+ 		pr_err("Failed to register with IB: %#x\n", rc);
+ 		goto fail;
+ 	}
++	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	dev_info(rdev_to_dev(rdev), "Device registered successfully");
+ 	for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {
+ 		rc = device_create_file(&rdev->ibdev.dev,
+@@ -1395,7 +1372,6 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 			goto fail;
+ 		}
+ 	}
+-	set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);
+ 	ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,
+ 			 &rdev->active_width);
+ 	set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags);
+@@ -1404,17 +1380,21 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
+ 
+ 	return 0;
+ free_sctx:
+-	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true);
++	bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);
+ free_ctx:
+ 	bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);
+ disable_rcfw:
+ 	bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);
+ free_ring:
+-	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true);
++	bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);
+ free_rcfw:
+ 	bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ fail:
+-	bnxt_re_ib_unreg(rdev, true);
++	if (!locked)
++		rtnl_lock();
++	bnxt_re_ib_unreg(rdev);
++	rtnl_unlock();
++
+ 	return rc;
+ }
+ 
+@@ -1567,7 +1547,7 @@ static int bnxt_re_netdev_event(struct notifier_block *notifier,
+ 		 */
+ 		if (atomic_read(&rdev->sched_count) > 0)
+ 			goto exit;
+-		bnxt_re_ib_unreg(rdev, false);
++		bnxt_re_ib_unreg(rdev);
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 		break;
+@@ -1646,7 +1626,10 @@ static void __exit bnxt_re_mod_exit(void)
+ 		 */
+ 		flush_workqueue(bnxt_re_wq);
+ 		bnxt_re_dev_stop(rdev);
+-		bnxt_re_ib_unreg(rdev, true);
++		/* Acquire the rtnl_lock as the L2 resources are freed here */
++		rtnl_lock();
++		bnxt_re_ib_unreg(rdev);
++		rtnl_unlock();
+ 		bnxt_re_remove_one(rdev);
+ 		bnxt_re_dev_unreg(rdev);
+ 	}
+diff --git a/drivers/input/keyboard/atakbd.c b/drivers/input/keyboard/atakbd.c
+index f1235831283d..fdeda0b0fbd6 100644
+--- a/drivers/input/keyboard/atakbd.c
++++ b/drivers/input/keyboard/atakbd.c
+@@ -79,8 +79,7 @@ MODULE_LICENSE("GPL");
+  */
+ 
+ 
+-static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+-	[0]	 = KEY_GRAVE,
++static unsigned char atakbd_keycode[0x73] = {	/* American layout */
+ 	[1]	 = KEY_ESC,
+ 	[2]	 = KEY_1,
+ 	[3]	 = KEY_2,
+@@ -121,9 +120,9 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[38]	 = KEY_L,
+ 	[39]	 = KEY_SEMICOLON,
+ 	[40]	 = KEY_APOSTROPHE,
+-	[41]	 = KEY_BACKSLASH,	/* FIXME, '#' */
++	[41]	 = KEY_GRAVE,
+ 	[42]	 = KEY_LEFTSHIFT,
+-	[43]	 = KEY_GRAVE,		/* FIXME: '~' */
++	[43]	 = KEY_BACKSLASH,
+ 	[44]	 = KEY_Z,
+ 	[45]	 = KEY_X,
+ 	[46]	 = KEY_C,
+@@ -149,45 +148,34 @@ static unsigned char atakbd_keycode[0x72] = {	/* American layout */
+ 	[66]	 = KEY_F8,
+ 	[67]	 = KEY_F9,
+ 	[68]	 = KEY_F10,
+-	[69]	 = KEY_ESC,
+-	[70]	 = KEY_DELETE,
+-	[71]	 = KEY_KP7,
+-	[72]	 = KEY_KP8,
+-	[73]	 = KEY_KP9,
++	[71]	 = KEY_HOME,
++	[72]	 = KEY_UP,
+ 	[74]	 = KEY_KPMINUS,
+-	[75]	 = KEY_KP4,
+-	[76]	 = KEY_KP5,
+-	[77]	 = KEY_KP6,
++	[75]	 = KEY_LEFT,
++	[77]	 = KEY_RIGHT,
+ 	[78]	 = KEY_KPPLUS,
+-	[79]	 = KEY_KP1,
+-	[80]	 = KEY_KP2,
+-	[81]	 = KEY_KP3,
+-	[82]	 = KEY_KP0,
+-	[83]	 = KEY_KPDOT,
+-	[90]	 = KEY_KPLEFTPAREN,
+-	[91]	 = KEY_KPRIGHTPAREN,
+-	[92]	 = KEY_KPASTERISK,	/* FIXME */
+-	[93]	 = KEY_KPASTERISK,
+-	[94]	 = KEY_KPPLUS,
+-	[95]	 = KEY_HELP,
++	[80]	 = KEY_DOWN,
++	[82]	 = KEY_INSERT,
++	[83]	 = KEY_DELETE,
+ 	[96]	 = KEY_102ND,
+-	[97]	 = KEY_KPASTERISK,	/* FIXME */
+-	[98]	 = KEY_KPSLASH,
++	[97]	 = KEY_UNDO,
++	[98]	 = KEY_HELP,
+ 	[99]	 = KEY_KPLEFTPAREN,
+ 	[100]	 = KEY_KPRIGHTPAREN,
+ 	[101]	 = KEY_KPSLASH,
+ 	[102]	 = KEY_KPASTERISK,
+-	[103]	 = KEY_UP,
+-	[104]	 = KEY_KPASTERISK,	/* FIXME */
+-	[105]	 = KEY_LEFT,
+-	[106]	 = KEY_RIGHT,
+-	[107]	 = KEY_KPASTERISK,	/* FIXME */
+-	[108]	 = KEY_DOWN,
+-	[109]	 = KEY_KPASTERISK,	/* FIXME */
+-	[110]	 = KEY_KPASTERISK,	/* FIXME */
+-	[111]	 = KEY_KPASTERISK,	/* FIXME */
+-	[112]	 = KEY_KPASTERISK,	/* FIXME */
+-	[113]	 = KEY_KPASTERISK	/* FIXME */
++	[103]	 = KEY_KP7,
++	[104]	 = KEY_KP8,
++	[105]	 = KEY_KP9,
++	[106]	 = KEY_KP4,
++	[107]	 = KEY_KP5,
++	[108]	 = KEY_KP6,
++	[109]	 = KEY_KP1,
++	[110]	 = KEY_KP2,
++	[111]	 = KEY_KP3,
++	[112]	 = KEY_KP0,
++	[113]	 = KEY_KPDOT,
++	[114]	 = KEY_KPENTER,
+ };
+ 
+ static struct input_dev *atakbd_dev;
+@@ -195,21 +183,15 @@ static struct input_dev *atakbd_dev;
+ static void atakbd_interrupt(unsigned char scancode, char down)
+ {
+ 
+-	if (scancode < 0x72) {		/* scancodes < 0xf2 are keys */
++	if (scancode < 0x73) {		/* scancodes < 0xf3 are keys */
+ 
+ 		// report raw events here?
+ 
+ 		scancode = atakbd_keycode[scancode];
+ 
+-		if (scancode == KEY_CAPSLOCK) {	/* CapsLock is a toggle switch key on Amiga */
+-			input_report_key(atakbd_dev, scancode, 1);
+-			input_report_key(atakbd_dev, scancode, 0);
+-			input_sync(atakbd_dev);
+-		} else {
+-			input_report_key(atakbd_dev, scancode, down);
+-			input_sync(atakbd_dev);
+-		}
+-	} else				/* scancodes >= 0xf2 are mouse data, most likely */
++		input_report_key(atakbd_dev, scancode, down);
++		input_sync(atakbd_dev);
++	} else				/* scancodes >= 0xf3 are mouse data, most likely */
+ 		printk(KERN_INFO "atakbd: unhandled scancode %x\n", scancode);
+ 
+ 	return;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index c53363443280..c2b511a16b0e 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -246,7 +246,13 @@ static u16 get_alias(struct device *dev)
+ 
+ 	/* The callers make sure that get_device_id() does not fail here */
+ 	devid = get_device_id(dev);
++
++	/* For ACPI HID devices, we simply return the devid as such */
++	if (!dev_is_pci(dev))
++		return devid;
++
+ 	ivrs_alias = amd_iommu_alias_table[devid];
++
+ 	pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
+ 
+ 	if (ivrs_alias == pci_alias)
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 2b1724e8d307..701820b39fd1 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -1242,6 +1242,12 @@ err_unprepare_clocks:
+ 
+ static void rk_iommu_shutdown(struct platform_device *pdev)
+ {
++	struct rk_iommu *iommu = platform_get_drvdata(pdev);
++	int i = 0, irq;
++
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO)
++		devm_free_irq(iommu->dev, irq, iommu);
++
+ 	pm_runtime_force_suspend(&pdev->dev);
+ }
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 666d319d3d1a..1f6c1eefe389 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -402,8 +402,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ 			if (msg[0].addr == state->af9033_i2c_addr[1])
+ 				reg |= 0x100000;
+ 
+-			ret = af9035_wr_regs(d, reg, &msg[0].buf[3],
+-					msg[0].len - 3);
++			ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg,
++							         &msg[0].buf[3],
++							         msg[0].len - 3)
++					        : -EOPNOTSUPP;
+ 		} else {
+ 			/* I2C write */
+ 			u8 buf[MAX_XFER_SIZE];
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+index 09e38f0733bd..10b9cb2185b1 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+@@ -753,7 +753,6 @@ struct cpl_abort_req_rss {
+ };
+ 
+ struct cpl_abort_req_rss6 {
+-	WR_HDR;
+ 	union opcode_tid ot;
+ 	__u32 srqidx_status;
+ };
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 372664686309..129f4e9f38da 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -2677,12 +2677,17 @@ static int emac_init_phy(struct emac_instance *dev)
+ 		if (of_phy_is_fixed_link(np)) {
+ 			int res = emac_dt_mdio_probe(dev);
+ 
+-			if (!res) {
+-				res = of_phy_register_fixed_link(np);
+-				if (res)
+-					mdiobus_unregister(dev->mii_bus);
++			if (res)
++				return res;
++
++			res = of_phy_register_fixed_link(np);
++			dev->phy_dev = of_phy_find_device(np);
++			if (res || !dev->phy_dev) {
++				mdiobus_unregister(dev->mii_bus);
++				return res ? res : -EINVAL;
+ 			}
+-			return res;
++			emac_adjust_link(dev->ndev);
++			put_device(&dev->phy_dev->mdio.dev);
+ 		}
+ 		return 0;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
+index 1f3372c1802e..2df92dbd38e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
+@@ -240,7 +240,8 @@ static void mlx4_set_eq_affinity_hint(struct mlx4_priv *priv, int vec)
+ 	struct mlx4_dev *dev = &priv->dev;
+ 	struct mlx4_eq *eq = &priv->eq_table.eq[vec];
+ 
+-	if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask))
++	if (!cpumask_available(eq->affinity_mask) ||
++	    cpumask_empty(eq->affinity_mask))
+ 		return;
+ 
+ 	hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask);
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+index e0680ce91328..09ed0ba4225a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
+@@ -190,6 +190,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data)
+ 
+ static void
+ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
++		    struct qed_hwfn *p_hwfn,
+ 		    struct qed_hw_info *p_info,
+ 		    bool enable,
+ 		    u8 prio,
+@@ -206,6 +207,11 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
+ 	else
+ 		p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;
+ 
++	/* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */
++	if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) ||
++	     test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits)))
++		p_data->arr[type].dont_add_vlan0 = true;
++
+ 	/* QM reconf data */
+ 	if (p_info->personality == personality)
+ 		p_info->offload_tc = tc;
+@@ -233,7 +239,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
+ 		personality = qed_dcbx_app_update[i].personality;
+ 		name = qed_dcbx_app_update[i].name;
+ 
+-		qed_dcbx_set_params(p_data, p_info, enable,
++		qed_dcbx_set_params(p_data, p_hwfn, p_info, enable,
+ 				    prio, tc, type, personality);
+ 	}
+ }
+@@ -956,6 +962,7 @@ static void qed_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
+ 	p_data->dcb_enable_flag = p_src->arr[type].enable;
+ 	p_data->dcb_priority = p_src->arr[type].priority;
+ 	p_data->dcb_tc = p_src->arr[type].tc;
++	p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0;
+ }
+ 
+ /* Set pf update ramrod command params */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+index 5feb90e049e0..d950d836858c 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
+@@ -55,6 +55,7 @@ struct qed_dcbx_app_data {
+ 	u8 update;		/* Update indication */
+ 	u8 priority;		/* Priority */
+ 	u8 tc;			/* Traffic Class */
++	bool dont_add_vlan0;	/* Do not insert a vlan tag with id 0 */
+ };
+ 
+ #define QED_DCBX_VERSION_DISABLED       0
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index e5249b4741d0..194f4dbe57d3 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -1636,7 +1636,7 @@ static int qed_vf_start(struct qed_hwfn *p_hwfn,
+ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ {
+ 	struct qed_load_req_params load_req_params;
+-	u32 load_code, param, drv_mb_param;
++	u32 load_code, resp, param, drv_mb_param;
+ 	bool b_default_mtu = true;
+ 	struct qed_hwfn *p_hwfn;
+ 	int rc = 0, mfw_rc, i;
+@@ -1782,6 +1782,19 @@ int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params)
+ 
+ 	if (IS_PF(cdev)) {
+ 		p_hwfn = QED_LEADING_HWFN(cdev);
++
++		/* Get pre-negotiated values for stag, bandwidth etc. */
++		DP_VERBOSE(p_hwfn,
++			   QED_MSG_SPQ,
++			   "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
++		drv_mb_param = 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET;
++		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
++				 DRV_MSG_CODE_GET_OEM_UPDATES,
++				 drv_mb_param, &resp, &param);
++		if (rc)
++			DP_NOTICE(p_hwfn,
++				  "Failed to send GET_OEM_UPDATES attention request\n");
++
+ 		drv_mb_param = STORM_FW_VERSION;
+ 		rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+ 				 DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index 463ffa83685f..ec5de7cf1af4 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -12415,6 +12415,7 @@ struct public_drv_mb {
+ #define DRV_MSG_SET_RESOURCE_VALUE_MSG		0x35000000
+ #define DRV_MSG_CODE_OV_UPDATE_WOL              0x38000000
+ #define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE     0x39000000
++#define DRV_MSG_CODE_GET_OEM_UPDATES            0x41000000
+ 
+ #define DRV_MSG_CODE_BW_UPDATE_ACK		0x32000000
+ #define DRV_MSG_CODE_NIG_DRAIN			0x30000000
+@@ -12540,6 +12541,9 @@ struct public_drv_mb {
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEB	0x1
+ #define DRV_MB_PARAM_ESWITCH_MODE_VEPA	0x2
+ 
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK	0x1
++#define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET	0
++
+ #define DRV_MB_PARAM_SET_LED_MODE_OPER		0x0
+ #define DRV_MB_PARAM_SET_LED_MODE_ON		0x1
+ #define DRV_MB_PARAM_SET_LED_MODE_OFF		0x2
+diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
+index b81f4faf7b10..1c40989479bd 100644
+--- a/drivers/net/ethernet/renesas/ravb.h
++++ b/drivers/net/ethernet/renesas/ravb.h
+@@ -431,6 +431,7 @@ enum EIS_BIT {
+ 	EIS_CULF1	= 0x00000080,
+ 	EIS_TFFF	= 0x00000100,
+ 	EIS_QFS		= 0x00010000,
++	EIS_RESERVED	= (GENMASK(31, 17) | GENMASK(15, 11)),
+ };
+ 
+ /* RIC0 */
+@@ -475,6 +476,7 @@ enum RIS0_BIT {
+ 	RIS0_FRF15	= 0x00008000,
+ 	RIS0_FRF16	= 0x00010000,
+ 	RIS0_FRF17	= 0x00020000,
++	RIS0_RESERVED	= GENMASK(31, 18),
+ };
+ 
+ /* RIC1 */
+@@ -531,6 +533,7 @@ enum RIS2_BIT {
+ 	RIS2_QFF16	= 0x00010000,
+ 	RIS2_QFF17	= 0x00020000,
+ 	RIS2_RFFF	= 0x80000000,
++	RIS2_RESERVED	= GENMASK(30, 18),
+ };
+ 
+ /* TIC */
+@@ -547,6 +550,7 @@ enum TIS_BIT {
+ 	TIS_FTF1	= 0x00000002,	/* Undocumented? */
+ 	TIS_TFUF	= 0x00000100,
+ 	TIS_TFWF	= 0x00000200,
++	TIS_RESERVED	= (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4))
+ };
+ 
+ /* ISS */
+@@ -620,6 +624,7 @@ enum GIC_BIT {
+ enum GIS_BIT {
+ 	GIS_PTCF	= 0x00000001,	/* Undocumented? */
+ 	GIS_PTMF	= 0x00000004,
++	GIS_RESERVED	= GENMASK(15, 10),
+ };
+ 
+ /* GIE (R-Car Gen3 only) */
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 0d811c02ff34..db4e306ca996 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -742,10 +742,11 @@ static void ravb_error_interrupt(struct net_device *ndev)
+ 	u32 eis, ris2;
+ 
+ 	eis = ravb_read(ndev, EIS);
+-	ravb_write(ndev, ~EIS_QFS, EIS);
++	ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);
+ 	if (eis & EIS_QFS) {
+ 		ris2 = ravb_read(ndev, RIS2);
+-		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF), RIS2);
++		ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),
++			   RIS2);
+ 
+ 		/* Receive Descriptor Empty int */
+ 		if (ris2 & RIS2_QFF0)
+@@ -798,7 +799,7 @@ static bool ravb_timestamp_interrupt(struct net_device *ndev)
+ 	u32 tis = ravb_read(ndev, TIS);
+ 
+ 	if (tis & TIS_TFUF) {
+-		ravb_write(ndev, ~TIS_TFUF, TIS);
++		ravb_write(ndev, ~(TIS_TFUF | TIS_RESERVED), TIS);
+ 		ravb_get_tx_tstamp(ndev);
+ 		return true;
+ 	}
+@@ -933,7 +934,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		/* Processing RX Descriptor Ring */
+ 		if (ris0 & mask) {
+ 			/* Clear RX interrupt */
+-			ravb_write(ndev, ~mask, RIS0);
++			ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+ 			if (ravb_rx(ndev, &quota, q))
+ 				goto out;
+ 		}
+@@ -941,7 +942,7 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 		if (tis & mask) {
+ 			spin_lock_irqsave(&priv->lock, flags);
+ 			/* Clear TX interrupt */
+-			ravb_write(ndev, ~mask, TIS);
++			ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+ 			ravb_tx_free(ndev, q, true);
+ 			netif_wake_subqueue(ndev, q);
+ 			mmiowb();
+diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c
+index eede70ec37f8..9e3222fd69f9 100644
+--- a/drivers/net/ethernet/renesas/ravb_ptp.c
++++ b/drivers/net/ethernet/renesas/ravb_ptp.c
+@@ -319,7 +319,7 @@ void ravb_ptp_interrupt(struct net_device *ndev)
+ 		}
+ 	}
+ 
+-	ravb_write(ndev, ~gis, GIS);
++	ravb_write(ndev, ~(gis | GIS_RESERVED), GIS);
+ }
+ 
+ void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev)
+diff --git a/drivers/pci/controller/dwc/pcie-designware.c b/drivers/pci/controller/dwc/pcie-designware.c
+index 778c4f76a884..2153956a0b20 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.c
++++ b/drivers/pci/controller/dwc/pcie-designware.c
+@@ -135,7 +135,7 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -178,7 +178,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+ }
+@@ -236,7 +236,7 @@ static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+@@ -282,7 +282,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
+ 		if (val & PCIE_ATU_ENABLE)
+ 			return 0;
+ 
+-		usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
++		mdelay(LINK_WAIT_IATU);
+ 	}
+ 	dev_err(pci->dev, "Inbound iATU is not being enabled\n");
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
+index bee4e2535a61..b99d1d72dd12 100644
+--- a/drivers/pci/controller/dwc/pcie-designware.h
++++ b/drivers/pci/controller/dwc/pcie-designware.h
+@@ -26,8 +26,7 @@
+ 
+ /* Parameters for the waiting for iATU enabled routine */
+ #define LINK_WAIT_MAX_IATU_RETRIES	5
+-#define LINK_WAIT_IATU_MIN		9000
+-#define LINK_WAIT_IATU_MAX		10000
++#define LINK_WAIT_IATU			9
+ 
+ /* Synopsys-specific PCIe configuration registers */
+ #define PCIE_PORT_LINK_CONTROL		0x710
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index b91db89eb924..d3ba867d01f0 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -348,21 +348,12 @@ static void amd_gpio_irq_enable(struct irq_data *d)
+ 	unsigned long flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+-	u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF);
+ 
+ 	raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ 	pin_reg = readl(gpio_dev->base + (d->hwirq)*4);
+ 	pin_reg |= BIT(INTERRUPT_ENABLE_OFF);
+ 	pin_reg |= BIT(INTERRUPT_MASK_OFF);
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+-	/*
+-	 * When debounce logic is enabled it takes ~900 us before interrupts
+-	 * can be enabled.  During this "debounce warm up" period the
+-	 * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it
+-	 * reads back as 1, signaling that interrupts are now enabled.
+-	 */
+-	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
+-		continue;
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+ 
+@@ -426,7 +417,7 @@ static void amd_gpio_irq_eoi(struct irq_data *d)
+ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ 	int ret = 0;
+-	u32 pin_reg;
++	u32 pin_reg, pin_reg_irq_en, mask;
+ 	unsigned long flags, irq_flags;
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+@@ -495,6 +486,28 @@ static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	}
+ 
+ 	pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF;
++	/*
++	 * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the
++	 * debounce registers of any GPIO will block wake/interrupt status
++	 * generation for *all* GPIOs for a length of time that depends on
++	 * WAKE_INT_MASTER_REG.MaskStsLength[11:0].  During this period the
++	 * INTERRUPT_ENABLE bit will read as 0.
++	 *
++	 * We temporarily enable irq for the GPIO whose configuration is
++	 * changing, and then wait for it to read back as 1 to know when
++	 * debounce has settled and then disable the irq again.
++	 * We do this polling with the spinlock held to ensure other GPIO
++	 * access routines do not read an incorrect value for the irq enable
++	 * bit of other GPIOs.  We keep the GPIO masked while polling to avoid
++	 * spurious irqs, and disable the irq again after polling.
++	 */
++	mask = BIT(INTERRUPT_ENABLE_OFF);
++	pin_reg_irq_en = pin_reg;
++	pin_reg_irq_en |= mask;
++	pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF);
++	writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4);
++	while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)
++		continue;
+ 	writel(pin_reg, gpio_dev->base + (d->hwirq)*4);
+ 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ 
+diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+index c3a76af9f5fa..ada1ebebd325 100644
+--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+@@ -3475,11 +3475,10 @@ static int ibmvscsis_probe(struct vio_dev *vdev,
+ 		vscsi->dds.window[LOCAL].liobn,
+ 		vscsi->dds.window[REMOTE].liobn);
+ 
+-	strcpy(vscsi->eye, "VSCSI ");
+-	strncat(vscsi->eye, vdev->name, MAX_EYE);
++	snprintf(vscsi->eye, sizeof(vscsi->eye), "VSCSI %s", vdev->name);
+ 
+ 	vscsi->dds.unit_id = vdev->unit_address;
+-	strncpy(vscsi->dds.partition_name, partition_name,
++	strscpy(vscsi->dds.partition_name, partition_name,
+ 		sizeof(vscsi->dds.partition_name));
+ 	vscsi->dds.partition_num = partition_number;
+ 
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 02d65dce74e5..2e8a91341254 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -3310,6 +3310,65 @@ static void ipr_release_dump(struct kref *kref)
+ 	LEAVE;
+ }
+ 
++static void ipr_add_remove_thread(struct work_struct *work)
++{
++	unsigned long lock_flags;
++	struct ipr_resource_entry *res;
++	struct scsi_device *sdev;
++	struct ipr_ioa_cfg *ioa_cfg =
++		container_of(work, struct ipr_ioa_cfg, scsi_add_work_q);
++	u8 bus, target, lun;
++	int did_work;
++
++	ENTER;
++	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++
++restart:
++	do {
++		did_work = 0;
++		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			return;
++		}
++
++		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++			if (res->del_from_ml && res->sdev) {
++				did_work = 1;
++				sdev = res->sdev;
++				if (!scsi_device_get(sdev)) {
++					if (!res->add_to_ml)
++						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
++					else
++						res->del_from_ml = 0;
++					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++					scsi_remove_device(sdev);
++					scsi_device_put(sdev);
++					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++				}
++				break;
++			}
++		}
++	} while (did_work);
++
++	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
++		if (res->add_to_ml) {
++			bus = res->bus;
++			target = res->target;
++			lun = res->lun;
++			res->add_to_ml = 0;
++			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++			scsi_add_device(ioa_cfg->host, bus, target, lun);
++			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
++			goto restart;
++		}
++	}
++
++	ioa_cfg->scan_done = 1;
++	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
++	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
++	LEAVE;
++}
++
+ /**
+  * ipr_worker_thread - Worker thread
+  * @work:		ioa config struct
+@@ -3324,13 +3383,9 @@ static void ipr_release_dump(struct kref *kref)
+ static void ipr_worker_thread(struct work_struct *work)
+ {
+ 	unsigned long lock_flags;
+-	struct ipr_resource_entry *res;
+-	struct scsi_device *sdev;
+ 	struct ipr_dump *dump;
+ 	struct ipr_ioa_cfg *ioa_cfg =
+ 		container_of(work, struct ipr_ioa_cfg, work_q);
+-	u8 bus, target, lun;
+-	int did_work;
+ 
+ 	ENTER;
+ 	spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+@@ -3368,49 +3423,9 @@ static void ipr_worker_thread(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-restart:
+-	do {
+-		did_work = 0;
+-		if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) {
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			return;
+-		}
++	schedule_work(&ioa_cfg->scsi_add_work_q);
+ 
+-		list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-			if (res->del_from_ml && res->sdev) {
+-				did_work = 1;
+-				sdev = res->sdev;
+-				if (!scsi_device_get(sdev)) {
+-					if (!res->add_to_ml)
+-						list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+-					else
+-						res->del_from_ml = 0;
+-					spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-					scsi_remove_device(sdev);
+-					scsi_device_put(sdev);
+-					spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-				}
+-				break;
+-			}
+-		}
+-	} while (did_work);
+-
+-	list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+-		if (res->add_to_ml) {
+-			bus = res->bus;
+-			target = res->target;
+-			lun = res->lun;
+-			res->add_to_ml = 0;
+-			spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-			scsi_add_device(ioa_cfg->host, bus, target, lun);
+-			spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+-			goto restart;
+-		}
+-	}
+-
+-	ioa_cfg->scan_done = 1;
+ 	spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+-	kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE);
+ 	LEAVE;
+ }
+ 
+@@ -9908,6 +9923,7 @@ static void ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
+ 	INIT_LIST_HEAD(&ioa_cfg->free_res_q);
+ 	INIT_LIST_HEAD(&ioa_cfg->used_res_q);
+ 	INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread);
++	INIT_WORK(&ioa_cfg->scsi_add_work_q, ipr_add_remove_thread);
+ 	init_waitqueue_head(&ioa_cfg->reset_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->msi_wait_q);
+ 	init_waitqueue_head(&ioa_cfg->eeh_wait_q);
+diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
+index 93570734cbfb..a98cfd24035a 100644
+--- a/drivers/scsi/ipr.h
++++ b/drivers/scsi/ipr.h
+@@ -1568,6 +1568,7 @@ struct ipr_ioa_cfg {
+ 	u8 saved_mode_page_len;
+ 
+ 	struct work_struct work_q;
++	struct work_struct scsi_add_work_q;
+ 	struct workqueue_struct *reset_work_q;
+ 
+ 	wait_queue_head_t reset_wait_q;
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 729d343861f4..de64cbb0e3d5 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -320,12 +320,12 @@ lpfc_nvme_info_show(struct device *dev, struct device_attribute *attr,
+ 			localport->port_id, statep);
+ 
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
++		nrport = NULL;
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		/* local short-hand pointer. */
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+@@ -3304,6 +3304,7 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 	struct lpfc_nodelist  *ndlp;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
+ 	struct lpfc_nvme_rport *rport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ #endif
+ 
+ 	shost = lpfc_shost_from_vport(vport);
+@@ -3314,8 +3315,12 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
+ 		if (ndlp->rport)
+ 			ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
+ #if (IS_ENABLED(CONFIG_NVME_FC))
++		spin_lock(&vport->phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+ 		if (rport)
++			remoteport = rport->remoteport;
++		spin_unlock(&vport->phba->hbalock);
++		if (remoteport)
+ 			nvme_fc_set_remoteport_devloss(rport->remoteport,
+ 						       vport->cfg_devloss_tmo);
+ #endif
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 9df0c051349f..aec5b10a8c85 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -551,7 +551,7 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	unsigned char *statep;
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvmet_tgtport *tgtp;
+-	struct nvme_fc_remote_port *nrport;
++	struct nvme_fc_remote_port *nrport = NULL;
+ 	struct lpfc_nvme_rport *rport;
+ 
+ 	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+@@ -696,11 +696,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	len += snprintf(buf + len, size - len, "\tRport List:\n");
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		/* local short-hand pointer. */
++		spin_lock(&phba->hbalock);
+ 		rport = lpfc_ndlp_get_nrport(ndlp);
+-		if (!rport)
+-			continue;
+-
+-		nrport = rport->remoteport;
++		if (rport)
++			nrport = rport->remoteport;
++		spin_unlock(&phba->hbalock);
+ 		if (!nrport)
+ 			continue;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index cab1fb087e6a..0960dcaf1684 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2718,7 +2718,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+ 	rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	oldrport = lpfc_ndlp_get_nrport(ndlp);
++	spin_unlock_irq(&vport->phba->hbalock);
+ 	if (!oldrport)
+ 		lpfc_nlp_get(ndlp);
+ 
+@@ -2833,7 +2835,7 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct nvme_fc_local_port *localport;
+ 	struct lpfc_nvme_lport *lport;
+ 	struct lpfc_nvme_rport *rport;
+-	struct nvme_fc_remote_port *remoteport;
++	struct nvme_fc_remote_port *remoteport = NULL;
+ 
+ 	localport = vport->localport;
+ 
+@@ -2847,11 +2849,14 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	if (!lport)
+ 		goto input_err;
+ 
++	spin_lock_irq(&vport->phba->hbalock);
+ 	rport = lpfc_ndlp_get_nrport(ndlp);
+-	if (!rport)
++	if (rport)
++		remoteport = rport->remoteport;
++	spin_unlock_irq(&vport->phba->hbalock);
++	if (!remoteport)
+ 		goto input_err;
+ 
+-	remoteport = rport->remoteport;
+ 	lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6033 Unreg nvme remoteport %p, portname x%llx, "
+ 			 "port_id x%06x, portstate x%x port type x%x\n",
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 9421d9877730..0949d3db56e7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1277,7 +1277,8 @@ static int sd_init_command(struct scsi_cmnd *cmd)
+ 	case REQ_OP_ZONE_RESET:
+ 		return sd_zbc_setup_reset_cmnd(cmd);
+ 	default:
+-		BUG();
++		WARN_ON_ONCE(1);
++		return BLKPREP_KILL;
+ 	}
+ }
+ 
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 4b5e250e8615..e5c7e1ef6318 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -899,9 +899,10 @@ static void sdw_release_master_stream(struct sdw_stream_runtime *stream)
+ 	struct sdw_master_runtime *m_rt = stream->m_rt;
+ 	struct sdw_slave_runtime *s_rt, *_s_rt;
+ 
+-	list_for_each_entry_safe(s_rt, _s_rt,
+-			&m_rt->slave_rt_list, m_rt_node)
+-		sdw_stream_remove_slave(s_rt->slave, stream);
++	list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) {
++		sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream);
++		sdw_release_slave_stream(s_rt->slave, stream);
++	}
+ 
+ 	list_del(&m_rt->bus_node);
+ }
+@@ -1112,7 +1113,7 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 				"Master runtime config failed for stream:%s",
+ 				stream->name);
+ 		ret = -ENOMEM;
+-		goto error;
++		goto unlock;
+ 	}
+ 
+ 	ret = sdw_config_stream(bus->dev, stream, stream_config, false);
+@@ -1123,11 +1124,11 @@ int sdw_stream_add_master(struct sdw_bus *bus,
+ 	if (ret)
+ 		goto stream_error;
+ 
+-	stream->state = SDW_STREAM_CONFIGURED;
++	goto unlock;
+ 
+ stream_error:
+ 	sdw_release_master_stream(stream);
+-error:
++unlock:
+ 	mutex_unlock(&bus->bus_lock);
+ 	return ret;
+ }
+@@ -1141,6 +1142,10 @@ EXPORT_SYMBOL(sdw_stream_add_master);
+  * @stream: SoundWire stream
+  * @port_config: Port configuration for audio stream
+  * @num_ports: Number of ports
++ *
++ * It is expected that Slave is added before adding Master
++ * to the Stream.
++ *
+  */
+ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 		struct sdw_stream_config *stream_config,
+@@ -1186,6 +1191,12 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 	if (ret)
+ 		goto stream_error;
+ 
++	/*
++	 * Change stream state to CONFIGURED on first Slave add.
++	 * Bus is not aware of number of Slave(s) in a stream at this
++	 * point so cannot depend on all Slave(s) to be added in order to
++	 * change stream state to CONFIGURED.
++	 */
+ 	stream->state = SDW_STREAM_CONFIGURED;
+ 	goto error;
+ 
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index 6ae92d4dca19..3b518ead504e 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -287,8 +287,8 @@ static int spi_gpio_request(struct device *dev,
+ 		*mflags |= SPI_MASTER_NO_RX;
+ 
+ 	spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);
+-	if (IS_ERR(spi_gpio->mosi))
+-		return PTR_ERR(spi_gpio->mosi);
++	if (IS_ERR(spi_gpio->sck))
++		return PTR_ERR(spi_gpio->sck);
+ 
+ 	for (i = 0; i < num_chipselects; i++) {
+ 		spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs",
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1949e0939d40..bd2f4c68506a 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file_inode(file)->i_sb);
++	sb_start_write(file->f_path.mnt->mnt_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file_inode(file)->i_sb);
++		sb_end_write(file->f_path.mnt->mnt_sb);
+ 	return ret;
+ }
+ 
+@@ -540,8 +540,7 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	__mnt_drop_write_file(file);
+-	sb_end_write(file_inode(file)->i_sb);
++	mnt_drop_write(file->f_path.mnt);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index a8a126259bc4..0bec79ae4c2d 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -42,7 +42,7 @@ extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned char *vec);
+ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 			 unsigned long new_addr, unsigned long old_end,
+-			 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush);
++			 pmd_t *old_pmd, pmd_t *new_pmd);
+ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned long addr, pgprot_t newprot,
+ 			int prot_numa);
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index f833a60699ad..e60078ffb302 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -132,6 +132,7 @@ struct smap_psock {
+ 	struct work_struct gc_work;
+ 
+ 	struct proto *sk_proto;
++	void (*save_unhash)(struct sock *sk);
+ 	void (*save_close)(struct sock *sk, long timeout);
+ 	void (*save_data_ready)(struct sock *sk);
+ 	void (*save_write_space)(struct sock *sk);
+@@ -143,6 +144,7 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
+ 			    int offset, size_t size, int flags);
++static void bpf_tcp_unhash(struct sock *sk);
+ static void bpf_tcp_close(struct sock *sk, long timeout);
+ 
+ static inline struct smap_psock *smap_psock_sk(const struct sock *sk)
+@@ -184,6 +186,7 @@ static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],
+ 			 struct proto *base)
+ {
+ 	prot[SOCKMAP_BASE]			= *base;
++	prot[SOCKMAP_BASE].unhash		= bpf_tcp_unhash;
+ 	prot[SOCKMAP_BASE].close		= bpf_tcp_close;
+ 	prot[SOCKMAP_BASE].recvmsg		= bpf_tcp_recvmsg;
+ 	prot[SOCKMAP_BASE].stream_memory_read	= bpf_tcp_stream_read;
+@@ -217,6 +220,7 @@ static int bpf_tcp_init(struct sock *sk)
+ 		return -EBUSY;
+ 	}
+ 
++	psock->save_unhash = sk->sk_prot->unhash;
+ 	psock->save_close = sk->sk_prot->close;
+ 	psock->sk_proto = sk->sk_prot;
+ 
+@@ -305,30 +309,12 @@ static struct smap_psock_map_entry *psock_map_pop(struct sock *sk,
+ 	return e;
+ }
+ 
+-static void bpf_tcp_close(struct sock *sk, long timeout)
++static void bpf_tcp_remove(struct sock *sk, struct smap_psock *psock)
+ {
+-	void (*close_fun)(struct sock *sk, long timeout);
+ 	struct smap_psock_map_entry *e;
+ 	struct sk_msg_buff *md, *mtmp;
+-	struct smap_psock *psock;
+ 	struct sock *osk;
+ 
+-	lock_sock(sk);
+-	rcu_read_lock();
+-	psock = smap_psock_sk(sk);
+-	if (unlikely(!psock)) {
+-		rcu_read_unlock();
+-		release_sock(sk);
+-		return sk->sk_prot->close(sk, timeout);
+-	}
+-
+-	/* The psock may be destroyed anytime after exiting the RCU critial
+-	 * section so by the time we use close_fun the psock may no longer
+-	 * be valid. However, bpf_tcp_close is called with the sock lock
+-	 * held so the close hook and sk are still valid.
+-	 */
+-	close_fun = psock->save_close;
+-
+ 	if (psock->cork) {
+ 		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+@@ -379,6 +365,42 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
++}
++
++static void bpf_tcp_unhash(struct sock *sk)
++{
++	void (*unhash_fun)(struct sock *sk);
++	struct smap_psock *psock;
++
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		if (sk->sk_prot->unhash)
++			sk->sk_prot->unhash(sk);
++		return;
++	}
++	unhash_fun = psock->save_unhash;
++	bpf_tcp_remove(sk, psock);
++	rcu_read_unlock();
++	unhash_fun(sk);
++}
++
++static void bpf_tcp_close(struct sock *sk, long timeout)
++{
++	void (*close_fun)(struct sock *sk, long timeout);
++	struct smap_psock *psock;
++
++	lock_sock(sk);
++	rcu_read_lock();
++	psock = smap_psock_sk(sk);
++	if (unlikely(!psock)) {
++		rcu_read_unlock();
++		release_sock(sk);
++		return sk->sk_prot->close(sk, timeout);
++	}
++	close_fun = psock->save_close;
++	bpf_tcp_remove(sk, psock);
+ 	rcu_read_unlock();
+ 	release_sock(sk);
+ 	close_fun(sk, timeout);
+@@ -2100,8 +2122,12 @@ static int sock_map_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
+ 	if (skops.sk->sk_type != SOCK_STREAM ||
+-	    skops.sk->sk_protocol != IPPROTO_TCP) {
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
+ 		fput(socket->file);
+ 		return -EOPNOTSUPP;
+ 	}
+@@ -2456,6 +2482,16 @@ static int sock_hash_update_elem(struct bpf_map *map,
+ 		return -EINVAL;
+ 	}
+ 
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state.
++	 */
++	if (skops.sk->sk_type != SOCK_STREAM ||
++	    skops.sk->sk_protocol != IPPROTO_TCP ||
++	    skops.sk->sk_state != TCP_ESTABLISHED) {
++		fput(socket->file);
++		return -EOPNOTSUPP;
++	}
++
+ 	lock_sock(skops.sk);
+ 	preempt_disable();
+ 	rcu_read_lock();
+@@ -2544,10 +2580,22 @@ const struct bpf_map_ops sock_hash_ops = {
+ 	.map_release_uref = sock_map_release,
+ };
+ 
++static bool bpf_is_valid_sock_op(struct bpf_sock_ops_kern *ops)
++{
++	return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
++	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB;
++}
+ BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	/* ULPs are currently supported only for TCP sockets in ESTABLISHED
++	 * state. This checks that the sock ops triggering the update is
++	 * one indicating we are (or will be soon) in an ESTABLISHED state.
++	 */
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_map_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+@@ -2566,6 +2614,9 @@ BPF_CALL_4(bpf_sock_hash_update, struct bpf_sock_ops_kern *, bpf_sock,
+ 	   struct bpf_map *, map, void *, key, u64, flags)
+ {
+ 	WARN_ON_ONCE(!rcu_read_lock_held());
++
++	if (!bpf_is_valid_sock_op(bpf_sock))
++		return -EOPNOTSUPP;
+ 	return sock_hash_ctx_update_elem(bpf_sock, map, key, flags);
+ }
+ 
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index f7274e0c8bdc..3238bb2d0c93 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1778,7 +1778,7 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
+ 
+ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		  unsigned long new_addr, unsigned long old_end,
+-		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
++		  pmd_t *old_pmd, pmd_t *new_pmd)
+ {
+ 	spinlock_t *old_ptl, *new_ptl;
+ 	pmd_t pmd;
+@@ -1809,7 +1809,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		if (new_ptl != old_ptl)
+ 			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+ 		pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
+-		if (pmd_present(pmd) && pmd_dirty(pmd))
++		if (pmd_present(pmd))
+ 			force_flush = true;
+ 		VM_BUG_ON(!pmd_none(*new_pmd));
+ 
+@@ -1820,12 +1820,10 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ 		}
+ 		pmd = move_soft_dirty_pmd(pmd);
+ 		set_pmd_at(mm, new_addr, new_pmd, pmd);
+-		if (new_ptl != old_ptl)
+-			spin_unlock(new_ptl);
+ 		if (force_flush)
+ 			flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+-		else
+-			*need_flush = true;
++		if (new_ptl != old_ptl)
++			spin_unlock(new_ptl);
+ 		spin_unlock(old_ptl);
+ 		return true;
+ 	}
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 5c2e18505f75..a9617e72e6b7 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -115,7 +115,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
+ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 		unsigned long old_addr, unsigned long old_end,
+ 		struct vm_area_struct *new_vma, pmd_t *new_pmd,
+-		unsigned long new_addr, bool need_rmap_locks, bool *need_flush)
++		unsigned long new_addr, bool need_rmap_locks)
+ {
+ 	struct mm_struct *mm = vma->vm_mm;
+ 	pte_t *old_pte, *new_pte, pte;
+@@ -163,15 +163,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 
+ 		pte = ptep_get_and_clear(mm, old_addr, old_pte);
+ 		/*
+-		 * If we are remapping a dirty PTE, make sure
++		 * If we are remapping a valid PTE, make sure
+ 		 * to flush TLB before we drop the PTL for the
+-		 * old PTE or we may race with page_mkclean().
++		 * PTE.
+ 		 *
+-		 * This check has to be done after we removed the
+-		 * old PTE from page tables or another thread may
+-		 * dirty it after the check and before the removal.
++		 * NOTE! Both old and new PTL matter: the old one
++		 * for racing with page_mkclean(), the new one to
++		 * make sure the physical page stays valid until
++		 * the TLB entry for the old mapping has been
++		 * flushed.
+ 		 */
+-		if (pte_present(pte) && pte_dirty(pte))
++		if (pte_present(pte))
+ 			force_flush = true;
+ 		pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);
+ 		pte = move_soft_dirty_pte(pte);
+@@ -179,13 +181,11 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ 	}
+ 
+ 	arch_leave_lazy_mmu_mode();
++	if (force_flush)
++		flush_tlb_range(vma, old_end - len, old_end);
+ 	if (new_ptl != old_ptl)
+ 		spin_unlock(new_ptl);
+ 	pte_unmap(new_pte - 1);
+-	if (force_flush)
+-		flush_tlb_range(vma, old_end - len, old_end);
+-	else
+-		*need_flush = true;
+ 	pte_unmap_unlock(old_pte - 1, old_ptl);
+ 	if (need_rmap_locks)
+ 		drop_rmap_locks(vma);
+@@ -198,7 +198,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ {
+ 	unsigned long extent, next, old_end;
+ 	pmd_t *old_pmd, *new_pmd;
+-	bool need_flush = false;
+ 	unsigned long mmun_start;	/* For mmu_notifiers */
+ 	unsigned long mmun_end;		/* For mmu_notifiers */
+ 
+@@ -229,8 +228,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 				if (need_rmap_locks)
+ 					take_rmap_locks(vma);
+ 				moved = move_huge_pmd(vma, old_addr, new_addr,
+-						    old_end, old_pmd, new_pmd,
+-						    &need_flush);
++						    old_end, old_pmd, new_pmd);
+ 				if (need_rmap_locks)
+ 					drop_rmap_locks(vma);
+ 				if (moved)
+@@ -246,10 +244,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 		if (extent > next - new_addr)
+ 			extent = next - new_addr;
+ 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
+-			  new_pmd, new_addr, need_rmap_locks, &need_flush);
++			  new_pmd, new_addr, need_rmap_locks);
+ 	}
+-	if (need_flush)
+-		flush_tlb_range(vma, old_end-len, old_addr);
+ 
+ 	mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+ 
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 71c20c1d4002..9f481cfdf77d 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -241,7 +241,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ 		 * the packet to be exactly of that size to make the link
+ 		 * throughput estimation effective.
+ 		 */
+-		skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
++		skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len);
+ 
+ 		batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 			   "Sending unicast (probe) ELP packet on interface %s to %pM\n",
+@@ -268,6 +268,7 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 	struct batadv_priv *bat_priv;
+ 	struct sk_buff *skb;
+ 	u32 elp_interval;
++	bool ret;
+ 
+ 	bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
+ 	hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
+@@ -329,8 +330,11 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ 		 * may sleep and that is not allowed in an rcu protected
+ 		 * context. Therefore schedule a task for that.
+ 		 */
+-		queue_work(batadv_event_workqueue,
+-			   &hardif_neigh->bat_v.metric_work);
++		ret = queue_work(batadv_event_workqueue,
++				 &hardif_neigh->bat_v.metric_work);
++
++		if (!ret)
++			batadv_hardif_neigh_put(hardif_neigh);
+ 	}
+ 	rcu_read_unlock();
+ 
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index a2de5a44bd41..58c093caf49e 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -1772,6 +1772,7 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ {
+ 	struct batadv_bla_backbone_gw *backbone_gw;
+ 	struct ethhdr *ethhdr;
++	bool ret;
+ 
+ 	ethhdr = eth_hdr(skb);
+ 
+@@ -1795,8 +1796,13 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
+ 	if (unlikely(!backbone_gw))
+ 		return true;
+ 
+-	queue_work(batadv_event_workqueue, &backbone_gw->report_work);
+-	/* backbone_gw is unreferenced in the report work function function */
++	ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work);
++
++	/* backbone_gw is unreferenced in the report work function function
++	 * if queue_work() call was successful
++	 */
++	if (!ret)
++		batadv_backbone_gw_put(backbone_gw);
+ 
+ 	return true;
+ }
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index 8b198ee798c9..140c61a3f1ec 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -32,6 +32,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/rculist.h>
+@@ -348,6 +349,9 @@ out:
+  * @bat_priv: the bat priv with all the soft interface information
+  * @orig_node: originator announcing gateway capabilities
+  * @gateway: announced bandwidth information
++ *
++ * Has to be called with the appropriate locks being acquired
++ * (gw.list_lock).
+  */
+ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 			       struct batadv_orig_node *orig_node,
+@@ -355,6 +359,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node;
+ 
++	lockdep_assert_held(&bat_priv->gw.list_lock);
++
+ 	if (gateway->bandwidth_down == 0)
+ 		return;
+ 
+@@ -369,10 +375,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ 	gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);
+ 	gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);
+ 
+-	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	kref_get(&gw_node->refcount);
+ 	hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list);
+-	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ 		   "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",
+@@ -428,11 +432,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_gw_node *gw_node, *curr_gw = NULL;
+ 
++	spin_lock_bh(&bat_priv->gw.list_lock);
+ 	gw_node = batadv_gw_node_get(bat_priv, orig_node);
+ 	if (!gw_node) {
+ 		batadv_gw_node_add(bat_priv, orig_node, gateway);
++		spin_unlock_bh(&bat_priv->gw.list_lock);
+ 		goto out;
+ 	}
++	spin_unlock_bh(&bat_priv->gw.list_lock);
+ 
+ 	if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) &&
+ 	    gw_node->bandwidth_up == ntohl(gateway->bandwidth_up))
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index c3578444f3cb..34caf129a9bf 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -854,16 +854,27 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	spinlock_t *lock; /* Used to lock list selected by "int in_coding" */
+ 	struct list_head *list;
+ 
++	/* Select ingoing or outgoing coding node */
++	if (in_coding) {
++		lock = &orig_neigh_node->in_coding_list_lock;
++		list = &orig_neigh_node->in_coding_list;
++	} else {
++		lock = &orig_neigh_node->out_coding_list_lock;
++		list = &orig_neigh_node->out_coding_list;
++	}
++
++	spin_lock_bh(lock);
++
+ 	/* Check if nc_node is already added */
+ 	nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding);
+ 
+ 	/* Node found */
+ 	if (nc_node)
+-		return nc_node;
++		goto unlock;
+ 
+ 	nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC);
+ 	if (!nc_node)
+-		return NULL;
++		goto unlock;
+ 
+ 	/* Initialize nc_node */
+ 	INIT_LIST_HEAD(&nc_node->list);
+@@ -872,22 +883,14 @@ batadv_nc_get_nc_node(struct batadv_priv *bat_priv,
+ 	kref_get(&orig_neigh_node->refcount);
+ 	nc_node->orig_node = orig_neigh_node;
+ 
+-	/* Select ingoing or outgoing coding node */
+-	if (in_coding) {
+-		lock = &orig_neigh_node->in_coding_list_lock;
+-		list = &orig_neigh_node->in_coding_list;
+-	} else {
+-		lock = &orig_neigh_node->out_coding_list_lock;
+-		list = &orig_neigh_node->out_coding_list;
+-	}
+-
+ 	batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n",
+ 		   nc_node->addr, nc_node->orig_node->orig);
+ 
+ 	/* Add nc_node to orig_node */
+-	spin_lock_bh(lock);
+ 	kref_get(&nc_node->refcount);
+ 	list_add_tail_rcu(&nc_node->list, list);
++
++unlock:
+ 	spin_unlock_bh(lock);
+ 
+ 	return nc_node;
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index 1485263a348b..626ddca332db 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -574,15 +574,20 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 	struct batadv_softif_vlan *vlan;
+ 	int err;
+ 
++	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
++
+ 	vlan = batadv_softif_vlan_get(bat_priv, vid);
+ 	if (vlan) {
+ 		batadv_softif_vlan_put(vlan);
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -EEXIST;
+ 	}
+ 
+ 	vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC);
+-	if (!vlan)
++	if (!vlan) {
++		spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	vlan->bat_priv = bat_priv;
+ 	vlan->vid = vid;
+@@ -590,17 +595,23 @@ int batadv_softif_create_vlan(struct batadv_priv *bat_priv, unsigned short vid)
+ 
+ 	atomic_set(&vlan->ap_isolation, 0);
+ 
++	kref_get(&vlan->refcount);
++	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
++	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
++
++	/* batadv_sysfs_add_vlan cannot be in the spinlock section due to the
++	 * sleeping behavior of the sysfs functions and the fs_reclaim lock
++	 */
+ 	err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan);
+ 	if (err) {
+-		kfree(vlan);
++		/* ref for the function */
++		batadv_softif_vlan_put(vlan);
++
++		/* ref for the list */
++		batadv_softif_vlan_put(vlan);
+ 		return err;
+ 	}
+ 
+-	spin_lock_bh(&bat_priv->softif_vlan_list_lock);
+-	kref_get(&vlan->refcount);
+-	hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);
+-	spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+-
+ 	/* add a new TT local entry. This one will be marked with the NOPURGE
+ 	 * flag
+ 	 */
+diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c
+index f2eef43bd2ec..09427fc6494a 100644
+--- a/net/batman-adv/sysfs.c
++++ b/net/batman-adv/sysfs.c
+@@ -188,7 +188,8 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	return __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					_post_func, attr,		\
+-					&bat_priv->_var, net_dev);	\
++					&bat_priv->_var, net_dev,	\
++					NULL);	\
+ }
+ 
+ #define BATADV_ATTR_SIF_SHOW_UINT(_name, _var)				\
+@@ -262,7 +263,9 @@ ssize_t batadv_store_##_name(struct kobject *kobj,			\
+ 									\
+ 	length = __batadv_store_uint_attr(buff, count, _min, _max,	\
+ 					  _post_func, attr,		\
+-					  &hard_iface->_var, net_dev);	\
++					  &hard_iface->_var,		\
++					  hard_iface->soft_iface,	\
++					  net_dev);			\
+ 									\
+ 	batadv_hardif_put(hard_iface);				\
+ 	return length;							\
+@@ -356,10 +359,12 @@ __batadv_store_bool_attr(char *buff, size_t count,
+ 
+ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 				  struct net_device *net_dev,
++				  struct net_device *slave_dev,
+ 				  const char *attr_name,
+ 				  unsigned int min, unsigned int max,
+ 				  atomic_t *attr)
+ {
++	char ifname[IFNAMSIZ + 3] = "";
+ 	unsigned long uint_val;
+ 	int ret;
+ 
+@@ -385,8 +390,11 @@ static int batadv_store_uint_attr(const char *buff, size_t count,
+ 	if (atomic_read(attr) == uint_val)
+ 		return count;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %i to: %lu\n",
+-		    attr_name, atomic_read(attr), uint_val);
++	if (slave_dev)
++		snprintf(ifname, sizeof(ifname), "%s: ", slave_dev->name);
++
++	batadv_info(net_dev, "%s: %sChanging from: %i to: %lu\n",
++		    attr_name, ifname, atomic_read(attr), uint_val);
+ 
+ 	atomic_set(attr, uint_val);
+ 	return count;
+@@ -397,12 +405,13 @@ static ssize_t __batadv_store_uint_attr(const char *buff, size_t count,
+ 					void (*post_func)(struct net_device *),
+ 					const struct attribute *attr,
+ 					atomic_t *attr_store,
+-					struct net_device *net_dev)
++					struct net_device *net_dev,
++					struct net_device *slave_dev)
+ {
+ 	int ret;
+ 
+-	ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max,
+-				     attr_store);
++	ret = batadv_store_uint_attr(buff, count, net_dev, slave_dev,
++				     attr->name, min, max, attr_store);
+ 	if (post_func && ret)
+ 		post_func(net_dev);
+ 
+@@ -571,7 +580,7 @@ static ssize_t batadv_store_gw_sel_class(struct kobject *kobj,
+ 	return __batadv_store_uint_attr(buff, count, 1, BATADV_TQ_MAX_VALUE,
+ 					batadv_post_gw_reselect, attr,
+ 					&bat_priv->gw.sel_class,
+-					bat_priv->soft_iface);
++					bat_priv->soft_iface, NULL);
+ }
+ 
+ static ssize_t batadv_show_gw_bwidth(struct kobject *kobj,
+@@ -1090,8 +1099,9 @@ static ssize_t batadv_store_throughput_override(struct kobject *kobj,
+ 	if (old_tp_override == tp_override)
+ 		goto out;
+ 
+-	batadv_info(net_dev, "%s: Changing from: %u.%u MBit to: %u.%u MBit\n",
+-		    "throughput_override",
++	batadv_info(hard_iface->soft_iface,
++		    "%s: %s: Changing from: %u.%u MBit to: %u.%u MBit\n",
++		    "throughput_override", net_dev->name,
+ 		    old_tp_override / 10, old_tp_override % 10,
+ 		    tp_override / 10, tp_override % 10);
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 12a2b7d21376..d21624c44665 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -1613,6 +1613,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ {
+ 	struct batadv_tt_orig_list_entry *orig_entry;
+ 
++	spin_lock_bh(&tt_global->list_lock);
++
+ 	orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node);
+ 	if (orig_entry) {
+ 		/* refresh the ttvn: the current value could be a bogus one that
+@@ -1635,11 +1637,9 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ 	orig_entry->flags = flags;
+ 	kref_init(&orig_entry->refcount);
+ 
+-	spin_lock_bh(&tt_global->list_lock);
+ 	kref_get(&orig_entry->refcount);
+ 	hlist_add_head_rcu(&orig_entry->list,
+ 			   &tt_global->orig_list);
+-	spin_unlock_bh(&tt_global->list_lock);
+ 	atomic_inc(&tt_global->orig_list_count);
+ 
+ sync_flags:
+@@ -1647,6 +1647,8 @@ sync_flags:
+ out:
+ 	if (orig_entry)
+ 		batadv_tt_orig_list_entry_put(orig_entry);
++
++	spin_unlock_bh(&tt_global->list_lock);
+ }
+ 
+ /**
+diff --git a/net/batman-adv/tvlv.c b/net/batman-adv/tvlv.c
+index a637458205d1..40e69c9346d2 100644
+--- a/net/batman-adv/tvlv.c
++++ b/net/batman-adv/tvlv.c
+@@ -529,15 +529,20 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ {
+ 	struct batadv_tvlv_handler *tvlv_handler;
+ 
++	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
++
+ 	tvlv_handler = batadv_tvlv_handler_get(bat_priv, type, version);
+ 	if (tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		batadv_tvlv_handler_put(tvlv_handler);
+ 		return;
+ 	}
+ 
+ 	tvlv_handler = kzalloc(sizeof(*tvlv_handler), GFP_ATOMIC);
+-	if (!tvlv_handler)
++	if (!tvlv_handler) {
++		spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+ 		return;
++	}
+ 
+ 	tvlv_handler->ogm_handler = optr;
+ 	tvlv_handler->unicast_handler = uptr;
+@@ -547,7 +552,6 @@ void batadv_tvlv_handler_register(struct batadv_priv *bat_priv,
+ 	kref_init(&tvlv_handler->refcount);
+ 	INIT_HLIST_NODE(&tvlv_handler->list);
+ 
+-	spin_lock_bh(&bat_priv->tvlv.handler_list_lock);
+ 	kref_get(&tvlv_handler->refcount);
+ 	hlist_add_head_rcu(&tvlv_handler->list, &bat_priv->tvlv.handler_list);
+ 	spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index e7de5f282722..effa87858b21 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -612,7 +612,10 @@ static void smc_connect_work(struct work_struct *work)
+ 		smc->sk.sk_err = -rc;
+ 
+ out:
+-	smc->sk.sk_state_change(&smc->sk);
++	if (smc->sk.sk_err)
++		smc->sk.sk_state_change(&smc->sk);
++	else
++		smc->sk.sk_write_space(&smc->sk);
+ 	kfree(smc->connect_info);
+ 	smc->connect_info = NULL;
+ 	release_sock(&smc->sk);
+@@ -1345,7 +1348,7 @@ static __poll_t smc_poll(struct file *file, struct socket *sock,
+ 		return EPOLLNVAL;
+ 
+ 	smc = smc_sk(sock->sk);
+-	if ((sk->sk_state == SMC_INIT) || smc->use_fallback) {
++	if (smc->use_fallback) {
+ 		/* delegate to CLC child sock */
+ 		mask = smc->clcsock->ops->poll(file, smc->clcsock, wait);
+ 		sk->sk_err = smc->clcsock->sk->sk_err;
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index ae5d168653ce..086157555ac3 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -405,14 +405,12 @@ int smc_clc_send_proposal(struct smc_sock *smc,
+ 	vec[i++].iov_len = sizeof(trl);
+ 	/* due to the few bytes needed for clc-handshake this cannot block */
+ 	len = kernel_sendmsg(smc->clcsock, &msg, vec, i, plen);
+-	if (len < sizeof(pclc)) {
+-		if (len >= 0) {
+-			reason_code = -ENETUNREACH;
+-			smc->sk.sk_err = -reason_code;
+-		} else {
+-			smc->sk.sk_err = smc->clcsock->sk->sk_err;
+-			reason_code = -smc->sk.sk_err;
+-		}
++	if (len < 0) {
++		smc->sk.sk_err = smc->clcsock->sk->sk_err;
++		reason_code = -smc->sk.sk_err;
++	} else if (len < (int)sizeof(pclc)) {
++		reason_code = -ENETUNREACH;
++		smc->sk.sk_err = -reason_code;
+ 	}
+ 
+ 	return reason_code;
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index 6c253343a6f9..70d18d0d39ff 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -566,7 +566,11 @@ static void test_sockmap(int tasks, void *data)
+ 	/* Test update without programs */
+ 	for (i = 0; i < 6; i++) {
+ 		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
+-		if (err) {
++		if (i < 2 && !err) {
++			printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",
++			       i, sfd[i]);
++			goto out_sockmap;
++		} else if (i >= 2 && err) {
+ 			printf("Failed noprog update sockmap '%i:%i'\n",
+ 			       i, sfd[i]);
+ 			goto out_sockmap;
+@@ -727,7 +731,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Test map update elem afterwards fd lives in fd and map_fd */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY);
+ 		if (err) {
+ 			printf("Failed map_fd_rx update sockmap %i '%i:%i'\n",
+@@ -831,7 +835,7 @@ static void test_sockmap(int tasks, void *data)
+ 	}
+ 
+ 	/* Delete the elems without programs */
+-	for (i = 0; i < 6; i++) {
++	for (i = 2; i < 6; i++) {
+ 		err = bpf_map_delete_elem(fd, &i);
+ 		if (err) {
+ 			printf("Failed delete sockmap %i '%i:%i'\n",
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 32a194e3e07a..0ab9423d009f 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -178,8 +178,8 @@ setup() {
+ 
+ cleanup() {
+ 	[ ${cleanup_done} -eq 1 ] && return
+-	ip netns del ${NS_A} 2 > /dev/null
+-	ip netns del ${NS_B} 2 > /dev/null
++	ip netns del ${NS_A} 2> /dev/null
++	ip netns del ${NS_B} 2> /dev/null
+ 	cleanup_done=1
+ }
+ 


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-10-18 10:27 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-10-18 10:27 UTC (permalink / raw
  To: gentoo-commits

commit:     3084192fca32fa67d11685f187a7cd55ad3b21d4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 18 10:27:08 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct 18 10:27:32 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3084192f

Linux patch 4.18.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1014_linux-4.18.15.patch | 5433 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5437 insertions(+)

diff --git a/0000_README b/0000_README
index 6d1cb28..5676b13 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-4.18.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.14
 
+Patch:  1014_linux-4.18.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-4.18.15.patch b/1014_linux-4.18.15.patch
new file mode 100644
index 0000000..5477884
--- /dev/null
+++ b/1014_linux-4.18.15.patch
@@ -0,0 +1,5433 @@
+diff --git a/Documentation/devicetree/bindings/net/macb.txt b/Documentation/devicetree/bindings/net/macb.txt
+index 457d5ae16f23..3e17ac1d5d58 100644
+--- a/Documentation/devicetree/bindings/net/macb.txt
++++ b/Documentation/devicetree/bindings/net/macb.txt
+@@ -10,6 +10,7 @@ Required properties:
+   Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
+   the Cadence GEM, or the generic form: "cdns,gem".
+   Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
++  Use "atmel,sama5d3-macb" for the 10/100Mbit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
+   Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
+   Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
+diff --git a/Makefile b/Makefile
+index 5274f8ae6b44..968eb96a0553 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -298,19 +298,7 @@ KERNELRELEASE = $(shell cat include/config/kernel.release 2> /dev/null)
+ KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
+ export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION
+ 
+-# SUBARCH tells the usermode build what the underlying arch is.  That is set
+-# first, and if a usermode build is happening, the "ARCH=um" on the command
+-# line overrides the setting of ARCH below.  If a native build is happening,
+-# then ARCH is assigned, getting whatever value it gets normally, and
+-# SUBARCH is subsequently ignored.
+-
+-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+-				  -e s/sun4u/sparc64/ \
+-				  -e s/arm.*/arm/ -e s/sa110/arm/ \
+-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
+-				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+-				  -e s/riscv.*/riscv/)
++include scripts/subarch.include
+ 
+ # Cross compiling and selecting different set of gcc/bin-utils
+ # ---------------------------------------------------------------------------
+diff --git a/arch/arm/boot/dts/sama5d3_emac.dtsi b/arch/arm/boot/dts/sama5d3_emac.dtsi
+index 7cb235ef0fb6..6e9e1c2f9def 100644
+--- a/arch/arm/boot/dts/sama5d3_emac.dtsi
++++ b/arch/arm/boot/dts/sama5d3_emac.dtsi
+@@ -41,7 +41,7 @@
+ 			};
+ 
+ 			macb1: ethernet@f802c000 {
+-				compatible = "cdns,at91sam9260-macb", "cdns,macb";
++				compatible = "atmel,sama5d3-macb", "cdns,at91sam9260-macb", "cdns,macb";
+ 				reg = <0xf802c000 0x100>;
+ 				interrupts = <35 IRQ_TYPE_LEVEL_HIGH 3>;
+ 				pinctrl-names = "default";
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index dd5b4fab114f..b7c8a718544c 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -823,6 +823,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
+ 	return 0;
+ }
+ 
++static int armv8pmu_filter_match(struct perf_event *event)
++{
++	unsigned long evtype = event->hw.config_base & ARMV8_PMU_EVTYPE_EVENT;
++	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
++}
++
+ static void armv8pmu_reset(void *info)
+ {
+ 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
+@@ -968,6 +974,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
+ 	cpu_pmu->reset			= armv8pmu_reset,
+ 	cpu_pmu->max_period		= (1LLU << 32) - 1,
+ 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
++	cpu_pmu->filter_match		= armv8pmu_filter_match;
+ 
+ 	return 0;
+ }
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index b2fa62922d88..49d6046ca1d0 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/atomic.h>
+ #include <linux/cpumask.h>
++#include <linux/sizes.h>
+ #include <linux/threads.h>
+ 
+ #include <asm/cachectl.h>
+@@ -80,11 +81,10 @@ extern unsigned int vced_count, vcei_count;
+ 
+ #endif
+ 
+-/*
+- * One page above the stack is used for branch delay slot "emulation".
+- * See dsemul.c for details.
+- */
+-#define STACK_TOP	((TASK_SIZE & PAGE_MASK) - PAGE_SIZE)
++#define VDSO_RANDOMIZE_SIZE	(TASK_IS_32BIT_ADDR ? SZ_1M : SZ_256M)
++
++extern unsigned long mips_stack_top(void);
++#define STACK_TOP		mips_stack_top()
+ 
+ /*
+  * This decides where the kernel will search for a free chunk of vm
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 9670e70139fd..1efd1798532b 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -31,6 +31,7 @@
+ #include <linux/prctl.h>
+ #include <linux/nmi.h>
+ 
++#include <asm/abi.h>
+ #include <asm/asm.h>
+ #include <asm/bootinfo.h>
+ #include <asm/cpu.h>
+@@ -38,6 +39,7 @@
+ #include <asm/dsp.h>
+ #include <asm/fpu.h>
+ #include <asm/irq.h>
++#include <asm/mips-cps.h>
+ #include <asm/msa.h>
+ #include <asm/pgtable.h>
+ #include <asm/mipsregs.h>
+@@ -644,6 +646,29 @@ out:
+ 	return pc;
+ }
+ 
++unsigned long mips_stack_top(void)
++{
++	unsigned long top = TASK_SIZE & PAGE_MASK;
++
++	/* One page for branch delay slot "emulation" */
++	top -= PAGE_SIZE;
++
++	/* Space for the VDSO, data page & GIC user page */
++	top -= PAGE_ALIGN(current->thread.abi->vdso->size);
++	top -= PAGE_SIZE;
++	top -= mips_gic_present() ? PAGE_SIZE : 0;
++
++	/* Space for cache colour alignment */
++	if (cpu_has_dc_aliases)
++		top -= shm_align_mask + 1;
++
++	/* Space to randomize the VDSO base */
++	if (current->flags & PF_RANDOMIZE)
++		top -= VDSO_RANDOMIZE_SIZE;
++
++	return top;
++}
++
+ /*
+  * Don't forget that the stack pointer must be aligned on a 8 bytes
+  * boundary for 32-bits ABI and 16 bytes for 64-bits ABI.
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 2c96c0c68116..6138224a96b1 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -835,6 +835,34 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	struct memblock_region *reg;
+ 	extern void plat_mem_setup(void);
+ 
++	/*
++	 * Initialize boot_command_line to an innocuous but non-empty string in
++	 * order to prevent early_init_dt_scan_chosen() from copying
++	 * CONFIG_CMDLINE into it without our knowledge. We handle
++	 * CONFIG_CMDLINE ourselves below & don't want to duplicate its
++	 * content because repeating arguments can be problematic.
++	 */
++	strlcpy(boot_command_line, " ", COMMAND_LINE_SIZE);
++
++	/* call board setup routine */
++	plat_mem_setup();
++
++	/*
++	 * Make sure all kernel memory is in the maps.  The "UP" and
++	 * "DOWN" are opposite for initdata since if it crosses over
++	 * into another memory section you don't want that to be
++	 * freed when the initdata is freed.
++	 */
++	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
++			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
++			 BOOT_MEM_RAM);
++	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
++			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
++			 BOOT_MEM_INIT_RAM);
++
++	pr_info("Determined physical RAM map:\n");
++	print_memory_map();
++
+ #if defined(CONFIG_CMDLINE_BOOL) && defined(CONFIG_CMDLINE_OVERRIDE)
+ 	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+ #else
+@@ -862,26 +890,6 @@ static void __init arch_mem_init(char **cmdline_p)
+ 	}
+ #endif
+ #endif
+-
+-	/* call board setup routine */
+-	plat_mem_setup();
+-
+-	/*
+-	 * Make sure all kernel memory is in the maps.  The "UP" and
+-	 * "DOWN" are opposite for initdata since if it crosses over
+-	 * into another memory section you don't want that to be
+-	 * freed when the initdata is freed.
+-	 */
+-	arch_mem_addpart(PFN_DOWN(__pa_symbol(&_text)) << PAGE_SHIFT,
+-			 PFN_UP(__pa_symbol(&_edata)) << PAGE_SHIFT,
+-			 BOOT_MEM_RAM);
+-	arch_mem_addpart(PFN_UP(__pa_symbol(&__init_begin)) << PAGE_SHIFT,
+-			 PFN_DOWN(__pa_symbol(&__init_end)) << PAGE_SHIFT,
+-			 BOOT_MEM_INIT_RAM);
+-
+-	pr_info("Determined physical RAM map:\n");
+-	print_memory_map();
+-
+ 	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+ 
+ 	*cmdline_p = command_line;
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 8f845f6e5f42..48a9c6b90e07 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -15,6 +15,7 @@
+ #include <linux/ioport.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
++#include <linux/random.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/timekeeper_internal.h>
+@@ -97,6 +98,21 @@ void update_vsyscall_tz(void)
+ 	}
+ }
+ 
++static unsigned long vdso_base(void)
++{
++	unsigned long base;
++
++	/* Skip the delay slot emulation page */
++	base = STACK_TOP + PAGE_SIZE;
++
++	if (current->flags & PF_RANDOMIZE) {
++		base += get_random_int() & (VDSO_RANDOMIZE_SIZE - 1);
++		base = PAGE_ALIGN(base);
++	}
++
++	return base;
++}
++
+ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ {
+ 	struct mips_vdso_image *image = current->thread.abi->vdso;
+@@ -137,7 +153,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	if (cpu_has_dc_aliases)
+ 		size += shm_align_mask + 1;
+ 
+-	base = get_unmapped_area(NULL, 0, size, 0, 0);
++	base = get_unmapped_area(NULL, vdso_base(), size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 42aafba7a308..9532dff28091 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -104,7 +104,7 @@
+  */
+ #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ /*
+  * user access blocked by key
+  */
+@@ -122,7 +122,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ 			 _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ 
+ #define H_PTE_PKEY  (H_PTE_PKEY_BIT0 | H_PTE_PKEY_BIT1 | H_PTE_PKEY_BIT2 | \
+ 		     H_PTE_PKEY_BIT3 | H_PTE_PKEY_BIT4)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 7efc42538ccf..26d927bf2fdb 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -538,8 +538,8 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 				   unsigned long ea, unsigned long dsisr)
+ {
+ 	struct kvm *kvm = vcpu->kvm;
+-	unsigned long mmu_seq, pte_size;
+-	unsigned long gpa, gfn, hva, pfn;
++	unsigned long mmu_seq;
++	unsigned long gpa, gfn, hva;
+ 	struct kvm_memory_slot *memslot;
+ 	struct page *page = NULL;
+ 	long ret;
+@@ -636,9 +636,10 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 	 */
+ 	hva = gfn_to_hva_memslot(memslot, gfn);
+ 	if (upgrade_p && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+-		pfn = page_to_pfn(page);
+ 		upgrade_write = true;
+ 	} else {
++		unsigned long pfn;
++
+ 		/* Call KVM generic code to do the slow-path check */
+ 		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
+ 					   writing, upgrade_p);
+@@ -652,63 +653,55 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ 		}
+ 	}
+ 
+-	/* See if we can insert a 1GB or 2MB large PTE here */
+-	level = 0;
+-	if (page && PageCompound(page)) {
+-		pte_size = PAGE_SIZE << compound_order(compound_head(page));
+-		if (pte_size >= PUD_SIZE &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-			pfn &= ~((PUD_SIZE >> PAGE_SHIFT) - 1);
+-		} else if (pte_size >= PMD_SIZE &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-			pfn &= ~((PMD_SIZE >> PAGE_SHIFT) - 1);
+-		}
+-	}
+-
+ 	/*
+-	 * Compute the PTE value that we need to insert.
++	 * Read the PTE from the process' radix tree and use that
++	 * so we get the shift and attribute bits.
+ 	 */
+-	if (page) {
+-		pgflags = _PAGE_READ | _PAGE_EXEC | _PAGE_PRESENT | _PAGE_PTE |
+-			_PAGE_ACCESSED;
+-		if (writing || upgrade_write)
+-			pgflags |= _PAGE_WRITE | _PAGE_DIRTY;
+-		pte = pfn_pte(pfn, __pgprot(pgflags));
+-	} else {
+-		/*
+-		 * Read the PTE from the process' radix tree and use that
+-		 * so we get the attribute bits.
+-		 */
+-		local_irq_disable();
+-		ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
+-		pte = *ptep;
++	local_irq_disable();
++	ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
++	/*
++	 * If the PTE disappeared temporarily due to a THP
++	 * collapse, just return and let the guest try again.
++	 */
++	if (!ptep) {
+ 		local_irq_enable();
+-		if (shift == PUD_SHIFT &&
+-		    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+-		    (hva & (PUD_SIZE - PAGE_SIZE))) {
+-			level = 2;
+-		} else if (shift == PMD_SHIFT &&
+-			   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+-			   (hva & (PMD_SIZE - PAGE_SIZE))) {
+-			level = 1;
+-		} else if (shift && shift != PAGE_SHIFT) {
+-			/* Adjust PFN */
+-			unsigned long mask = (1ul << shift) - PAGE_SIZE;
+-			pte = __pte(pte_val(pte) | (hva & mask));
+-		}
+-		pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
+-		if (writing || upgrade_write) {
+-			if (pte_val(pte) & _PAGE_WRITE)
+-				pte = __pte(pte_val(pte) | _PAGE_DIRTY);
+-		} else {
+-			pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++		if (page)
++			put_page(page);
++		return RESUME_GUEST;
++	}
++	pte = *ptep;
++	local_irq_enable();
++
++	/* Get pte level from shift/size */
++	if (shift == PUD_SHIFT &&
++	    (gpa & (PUD_SIZE - PAGE_SIZE)) ==
++	    (hva & (PUD_SIZE - PAGE_SIZE))) {
++		level = 2;
++	} else if (shift == PMD_SHIFT &&
++		   (gpa & (PMD_SIZE - PAGE_SIZE)) ==
++		   (hva & (PMD_SIZE - PAGE_SIZE))) {
++		level = 1;
++	} else {
++		level = 0;
++		if (shift > PAGE_SHIFT) {
++			/*
++			 * If the pte maps more than one page, bring over
++			 * bits from the virtual address to get the real
++			 * address of the specific single page we want.
++			 */
++			unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
++			pte = __pte(pte_val(pte) | (hva & rpnmask));
+ 		}
+ 	}
+ 
++	pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
++	if (writing || upgrade_write) {
++		if (pte_val(pte) & _PAGE_WRITE)
++			pte = __pte(pte_val(pte) | _PAGE_DIRTY);
++	} else {
++		pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
++	}
++
+ 	/* Allocate space in the tree and write the PTE */
+ 	ret = kvmppc_create_pte(kvm, pte, gpa, level, mmu_seq);
+ 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 99fff853c944..a558381b016b 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -123,7 +123,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+ 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
+ #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+ 
+ /*
+diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
+index c535c2fdea13..9bba9737ee0b 100644
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -377,5 +377,6 @@ struct kvm_sync_regs {
+ 
+ #define KVM_X86_QUIRK_LINT0_REENABLED	(1 << 0)
+ #define KVM_X86_QUIRK_CD_NW_CLEARED	(1 << 1)
++#define KVM_X86_QUIRK_LAPIC_MMIO_HOLE	(1 << 2)
+ 
+ #endif /* _ASM_X86_KVM_H */
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index b5cd8465d44f..83c4e8cc7eb9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1291,9 +1291,8 @@ EXPORT_SYMBOL_GPL(kvm_lapic_reg_read);
+ 
+ static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
+ {
+-	return kvm_apic_hw_enabled(apic) &&
+-	    addr >= apic->base_address &&
+-	    addr < apic->base_address + LAPIC_MMIO_LENGTH;
++	return addr >= apic->base_address &&
++		addr < apic->base_address + LAPIC_MMIO_LENGTH;
+ }
+ 
+ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+@@ -1305,6 +1304,15 @@ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		memset(data, 0xff, len);
++		return 0;
++	}
++
+ 	kvm_lapic_reg_read(apic, offset, len, data);
+ 
+ 	return 0;
+@@ -1864,6 +1872,14 @@ static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
+ 	if (!apic_mmio_in_range(apic, address))
+ 		return -EOPNOTSUPP;
+ 
++	if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
++		if (!kvm_check_has_quirk(vcpu->kvm,
++					 KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
++			return -EOPNOTSUPP;
++
++		return 0;
++	}
++
+ 	/*
+ 	 * APIC register must be aligned on 128-bits boundary.
+ 	 * 32/64/128 bits registers must be accessed thru 32 bits.
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 963bb0309e25..ea6238ed5c0e 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -543,6 +543,8 @@ static void hci_uart_tty_close(struct tty_struct *tty)
+ 	}
+ 	clear_bit(HCI_UART_PROTO_SET, &hu->flags);
+ 
++	percpu_free_rwsem(&hu->proto_lock);
++
+ 	kfree(hu);
+ }
+ 
+diff --git a/drivers/clk/x86/clk-pmc-atom.c b/drivers/clk/x86/clk-pmc-atom.c
+index 08ef69945ffb..d977193842df 100644
+--- a/drivers/clk/x86/clk-pmc-atom.c
++++ b/drivers/clk/x86/clk-pmc-atom.c
+@@ -55,6 +55,7 @@ struct clk_plt_data {
+ 	u8 nparents;
+ 	struct clk_plt *clks[PMC_CLK_NUM];
+ 	struct clk_lookup *mclk_lookup;
++	struct clk_lookup *ether_clk_lookup;
+ };
+ 
+ /* Return an index in parent table */
+@@ -186,13 +187,6 @@ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
+ 	pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
+ 	spin_lock_init(&pclk->lock);
+ 
+-	/*
+-	 * If the clock was already enabled by the firmware mark it as critical
+-	 * to avoid it being gated by the clock framework if no driver owns it.
+-	 */
+-	if (plt_clk_is_enabled(&pclk->hw))
+-		init.flags |= CLK_IS_CRITICAL;
+-
+ 	ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
+ 	if (ret) {
+ 		pclk = ERR_PTR(ret);
+@@ -351,11 +345,20 @@ static int plt_clk_probe(struct platform_device *pdev)
+ 		goto err_unreg_clk_plt;
+ 	}
+ 
++	data->ether_clk_lookup = clkdev_hw_create(&data->clks[4]->hw,
++						  "ether_clk", NULL);
++	if (!data->ether_clk_lookup) {
++		err = -ENOMEM;
++		goto err_drop_mclk;
++	}
++
+ 	plt_clk_free_parent_names_loop(parent_names, data->nparents);
+ 
+ 	platform_set_drvdata(pdev, data);
+ 	return 0;
+ 
++err_drop_mclk:
++	clkdev_drop(data->mclk_lookup);
+ err_unreg_clk_plt:
+ 	plt_clk_unregister_loop(data, i);
+ 	plt_clk_unregister_parents(data);
+@@ -369,6 +372,7 @@ static int plt_clk_remove(struct platform_device *pdev)
+ 
+ 	data = platform_get_drvdata(pdev);
+ 
++	clkdev_drop(data->ether_clk_lookup);
+ 	clkdev_drop(data->mclk_lookup);
+ 	plt_clk_unregister_loop(data, PMC_CLK_NUM);
+ 	plt_clk_unregister_parents(data);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 305143fcc1ce..1ac7933cccc5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -245,7 +245,7 @@ int amdgpu_amdkfd_resume(struct amdgpu_device *adev)
+ 
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr)
++			void **cpu_ptr, bool mqd_gfx9)
+ {
+ 	struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
+ 	struct amdgpu_bo *bo = NULL;
+@@ -261,6 +261,10 @@ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 	bp.flags = AMDGPU_GEM_CREATE_CPU_GTT_USWC;
+ 	bp.type = ttm_bo_type_kernel;
+ 	bp.resv = NULL;
++
++	if (mqd_gfx9)
++		bp.flags |= AMDGPU_GEM_CREATE_MQD_GFX9;
++
+ 	r = amdgpu_bo_create(adev, &bp, &bo);
+ 	if (r) {
+ 		dev_err(adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index a8418a3f4e9d..e3cf1c9fb3db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -129,7 +129,7 @@ bool amdgpu_amdkfd_is_kfd_vmid(struct amdgpu_device *adev, u32 vmid);
+ /* Shared API */
+ int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
+ 			void **mem_obj, uint64_t *gpu_addr,
+-			void **cpu_ptr);
++			void **cpu_ptr, bool mqd_gfx9);
+ void free_gtt_mem(struct kgd_dev *kgd, void *mem_obj);
+ void get_local_mem_info(struct kgd_dev *kgd,
+ 			struct kfd_local_mem_info *mem_info);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+index ea79908dac4c..29a260e4aefe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
+@@ -677,7 +677,7 @@ static int kgd_hqd_sdma_destroy(struct kgd_dev *kgd, void *mqd,
+ 
+ 	while (true) {
+ 		temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
+-		if (temp & SDMA0_STATUS_REG__RB_CMD_IDLE__SHIFT)
++		if (temp & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
+ 			break;
+ 		if (time_after(jiffies, end_jiffies))
+ 			return -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 7ee6cec2c060..6881b5a9275f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -423,7 +423,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 
+ 	if (kfd->kfd2kgd->init_gtt_mem_allocation(
+ 			kfd->kgd, size, &kfd->gtt_mem,
+-			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)){
++			&kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr,
++			false)) {
+ 		dev_err(kfd_device, "Could not allocate %d bytes\n", size);
+ 		goto out;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+index c71817963eea..66c2f856d922 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+@@ -62,9 +62,20 @@ int kfd_iommu_device_init(struct kfd_dev *kfd)
+ 	struct amd_iommu_device_info iommu_info;
+ 	unsigned int pasid_limit;
+ 	int err;
++	struct kfd_topology_device *top_dev;
+ 
+-	if (!kfd->device_info->needs_iommu_device)
++	top_dev = kfd_topology_device_by_id(kfd->id);
++
++	/*
++	 * Overwrite ATS capability according to needs_iommu_device to fix
++	 * potential missing corresponding bit in CRAT of BIOS.
++	 */
++	if (!kfd->device_info->needs_iommu_device) {
++		top_dev->node_props.capability &= ~HSA_CAP_ATS_PRESENT;
+ 		return 0;
++	}
++
++	top_dev->node_props.capability |= HSA_CAP_ATS_PRESENT;
+ 
+ 	iommu_info.flags = 0;
+ 	err = amd_iommu_device_info(kfd->pdev, &iommu_info);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 684054ff02cd..8da079cc6fb9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -63,7 +63,7 @@ static int init_mqd(struct mqd_manager *mm, void **mqd,
+ 				ALIGN(sizeof(struct v9_mqd), PAGE_SIZE),
+ 			&((*mqd_mem_obj)->gtt_mem),
+ 			&((*mqd_mem_obj)->gpu_addr),
+-			(void *)&((*mqd_mem_obj)->cpu_ptr));
++			(void *)&((*mqd_mem_obj)->cpu_ptr), true);
+ 	} else
+ 		retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct v9_mqd),
+ 				mqd_mem_obj);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 5e3990bb4c4b..c4de9b2baf1c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -796,6 +796,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu);
+ int kfd_topology_remove_device(struct kfd_dev *gpu);
+ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 						uint32_t proximity_domain);
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_id(uint32_t gpu_id);
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev);
+ int kfd_topology_enum_kfd_devices(uint8_t idx, struct kfd_dev **kdev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index bc95d4dfee2e..80f5db4ef75f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -63,22 +63,33 @@ struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
+ 	return device;
+ }
+ 
+-struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id)
+ {
+-	struct kfd_topology_device *top_dev;
+-	struct kfd_dev *device = NULL;
++	struct kfd_topology_device *top_dev = NULL;
++	struct kfd_topology_device *ret = NULL;
+ 
+ 	down_read(&topology_lock);
+ 
+ 	list_for_each_entry(top_dev, &topology_device_list, list)
+ 		if (top_dev->gpu_id == gpu_id) {
+-			device = top_dev->gpu;
++			ret = top_dev;
+ 			break;
+ 		}
+ 
+ 	up_read(&topology_lock);
+ 
+-	return device;
++	return ret;
++}
++
++struct kfd_dev *kfd_device_by_id(uint32_t gpu_id)
++{
++	struct kfd_topology_device *top_dev;
++
++	top_dev = kfd_topology_device_by_id(gpu_id);
++	if (!top_dev)
++		return NULL;
++
++	return top_dev->gpu;
+ }
+ 
+ struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev)
+diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+index 5733fbee07f7..f56b7553e5ed 100644
+--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
++++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+@@ -266,7 +266,7 @@ struct tile_config {
+ struct kfd2kgd_calls {
+ 	int (*init_gtt_mem_allocation)(struct kgd_dev *kgd, size_t size,
+ 					void **mem_obj, uint64_t *gpu_addr,
+-					void **cpu_ptr);
++					void **cpu_ptr, bool mqd_gfx9);
+ 
+ 	void (*free_gtt_mem)(struct kgd_dev *kgd, void *mem_obj);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 7a12d75e5157..c3c8c84da113 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -875,9 +875,22 @@ static enum drm_connector_status
+ nv50_mstc_detect(struct drm_connector *connector, bool force)
+ {
+ 	struct nv50_mstc *mstc = nv50_mstc(connector);
++	enum drm_connector_status conn_status;
++	int ret;
++
+ 	if (!mstc->port)
+ 		return connector_status_disconnected;
+-	return drm_dp_mst_detect_port(connector, mstc->port->mgr, mstc->port);
++
++	ret = pm_runtime_get_sync(connector->dev->dev);
++	if (ret < 0 && ret != -EACCES)
++		return connector_status_disconnected;
++
++	conn_status = drm_dp_mst_detect_port(connector, mstc->port->mgr,
++					     mstc->port);
++
++	pm_runtime_mark_last_busy(connector->dev->dev);
++	pm_runtime_put_autosuspend(connector->dev->dev);
++	return conn_status;
+ }
+ 
+ static void
+diff --git a/drivers/gpu/drm/pl111/pl111_vexpress.c b/drivers/gpu/drm/pl111/pl111_vexpress.c
+index a534b225e31b..5fa0441bb6df 100644
+--- a/drivers/gpu/drm/pl111/pl111_vexpress.c
++++ b/drivers/gpu/drm/pl111/pl111_vexpress.c
+@@ -111,7 +111,8 @@ static int vexpress_muxfpga_probe(struct platform_device *pdev)
+ }
+ 
+ static const struct of_device_id vexpress_muxfpga_match[] = {
+-	{ .compatible = "arm,vexpress-muxfpga", }
++	{ .compatible = "arm,vexpress-muxfpga", },
++	{}
+ };
+ 
+ static struct platform_driver vexpress_muxfpga_driver = {
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index b89e8379d898..8859f5572885 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -207,8 +207,6 @@ superio_exit(int ioreg)
+ 
+ #define NUM_FAN		7
+ 
+-#define TEMP_SOURCE_VIRTUAL	0x1f
+-
+ /* Common and NCT6775 specific data */
+ 
+ /* Voltage min/max registers for nr=7..14 are in bank 5 */
+@@ -299,8 +297,9 @@ static const u16 NCT6775_REG_PWM_READ[] = {
+ 
+ static const u16 NCT6775_REG_FAN[] = { 0x630, 0x632, 0x634, 0x636, 0x638 };
+ static const u16 NCT6775_REG_FAN_MIN[] = { 0x3b, 0x3c, 0x3d };
+-static const u16 NCT6775_REG_FAN_PULSES[] = { 0x641, 0x642, 0x643, 0x644, 0 };
+-static const u16 NCT6775_FAN_PULSE_SHIFT[] = { 0, 0, 0, 0, 0, 0 };
++static const u16 NCT6775_REG_FAN_PULSES[NUM_FAN] = {
++	0x641, 0x642, 0x643, 0x644 };
++static const u16 NCT6775_FAN_PULSE_SHIFT[NUM_FAN] = { };
+ 
+ static const u16 NCT6775_REG_TEMP[] = {
+ 	0x27, 0x150, 0x250, 0x62b, 0x62c, 0x62d };
+@@ -373,6 +372,7 @@ static const char *const nct6775_temp_label[] = {
+ };
+ 
+ #define NCT6775_TEMP_MASK	0x001ffffe
++#define NCT6775_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6775_REG_TEMP_ALTERNATE[32] = {
+ 	[13] = 0x661,
+@@ -425,8 +425,8 @@ static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 };
+ 
+ static const u16 NCT6776_REG_FAN_MIN[] = {
+ 	0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
+-static const u16 NCT6776_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++static const u16 NCT6776_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6776_REG_WEIGHT_DUTY_BASE[] = {
+ 	0x13e, 0x23e, 0x33e, 0x83e, 0x93e, 0xa3e };
+@@ -461,6 +461,7 @@ static const char *const nct6776_temp_label[] = {
+ };
+ 
+ #define NCT6776_TEMP_MASK	0x007ffffe
++#define NCT6776_VIRT_TEMP_MASK	0x00000000
+ 
+ static const u16 NCT6776_REG_TEMP_ALTERNATE[32] = {
+ 	[14] = 0x401,
+@@ -501,9 +502,9 @@ static const s8 NCT6779_BEEP_BITS[] = {
+ 	30, 31 };			/* intrusion0, intrusion1 */
+ 
+ static const u16 NCT6779_REG_FAN[] = {
+-	0x4b0, 0x4b2, 0x4b4, 0x4b6, 0x4b8, 0x4ba, 0x660 };
+-static const u16 NCT6779_REG_FAN_PULSES[] = {
+-	0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
++	0x4c0, 0x4c2, 0x4c4, 0x4c6, 0x4c8, 0x4ca, 0x4ce };
++static const u16 NCT6779_REG_FAN_PULSES[NUM_FAN] = {
++	0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
+ 
+ static const u16 NCT6779_REG_CRITICAL_PWM_ENABLE[] = {
+ 	0x136, 0x236, 0x336, 0x836, 0x936, 0xa36, 0xb36 };
+@@ -559,7 +560,9 @@ static const char *const nct6779_temp_label[] = {
+ };
+ 
+ #define NCT6779_TEMP_MASK	0x07ffff7e
++#define NCT6779_VIRT_TEMP_MASK	0x00000000
+ #define NCT6791_TEMP_MASK	0x87ffff7e
++#define NCT6791_VIRT_TEMP_MASK	0x80000000
+ 
+ static const u16 NCT6779_REG_TEMP_ALTERNATE[32]
+ 	= { 0x490, 0x491, 0x492, 0x493, 0x494, 0x495, 0, 0,
+@@ -638,6 +641,7 @@ static const char *const nct6792_temp_label[] = {
+ };
+ 
+ #define NCT6792_TEMP_MASK	0x9fffff7e
++#define NCT6792_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6793_temp_label[] = {
+ 	"",
+@@ -675,6 +679,7 @@ static const char *const nct6793_temp_label[] = {
+ };
+ 
+ #define NCT6793_TEMP_MASK	0xbfff037e
++#define NCT6793_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6795_temp_label[] = {
+ 	"",
+@@ -712,6 +717,7 @@ static const char *const nct6795_temp_label[] = {
+ };
+ 
+ #define NCT6795_TEMP_MASK	0xbfffff7e
++#define NCT6795_VIRT_TEMP_MASK	0x80000000
+ 
+ static const char *const nct6796_temp_label[] = {
+ 	"",
+@@ -724,8 +730,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"AUXTIN4",
+ 	"SMBUSMASTER 0",
+ 	"SMBUSMASTER 1",
+-	"",
+-	"",
++	"Virtual_TEMP",
++	"Virtual_TEMP",
+ 	"",
+ 	"",
+ 	"",
+@@ -748,7 +754,8 @@ static const char *const nct6796_temp_label[] = {
+ 	"Virtual_TEMP"
+ };
+ 
+-#define NCT6796_TEMP_MASK	0xbfff03fe
++#define NCT6796_TEMP_MASK	0xbfff0ffe
++#define NCT6796_VIRT_TEMP_MASK	0x80000c00
+ 
+ /* NCT6102D/NCT6106D specific data */
+ 
+@@ -779,8 +786,8 @@ static const u16 NCT6106_REG_TEMP_CONFIG[] = {
+ 
+ static const u16 NCT6106_REG_FAN[] = { 0x20, 0x22, 0x24 };
+ static const u16 NCT6106_REG_FAN_MIN[] = { 0xe0, 0xe2, 0xe4 };
+-static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6, 0, 0 };
+-static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4, 0, 0 };
++static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6 };
++static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4 };
+ 
+ static const u8 NCT6106_REG_PWM_MODE[] = { 0xf3, 0xf3, 0xf3 };
+ static const u8 NCT6106_PWM_MODE_MASK[] = { 0x01, 0x02, 0x04 };
+@@ -917,6 +924,11 @@ static unsigned int fan_from_reg16(u16 reg, unsigned int divreg)
+ 	return 1350000U / (reg << divreg);
+ }
+ 
++static unsigned int fan_from_reg_rpm(u16 reg, unsigned int divreg)
++{
++	return reg;
++}
++
+ static u16 fan_to_reg(u32 fan, unsigned int divreg)
+ {
+ 	if (!fan)
+@@ -969,6 +981,7 @@ struct nct6775_data {
+ 	u16 reg_temp_config[NUM_TEMP];
+ 	const char * const *temp_label;
+ 	u32 temp_mask;
++	u32 virt_temp_mask;
+ 
+ 	u16 REG_CONFIG;
+ 	u16 REG_VBAT;
+@@ -1276,11 +1289,11 @@ static bool is_word_sized(struct nct6775_data *data, u16 reg)
+ 	case nct6795:
+ 	case nct6796:
+ 		return reg == 0x150 || reg == 0x153 || reg == 0x155 ||
+-		  ((reg & 0xfff0) == 0x4b0 && (reg & 0x000f) < 0x0b) ||
++		  (reg & 0xfff0) == 0x4c0 ||
+ 		  reg == 0x402 ||
+ 		  reg == 0x63a || reg == 0x63c || reg == 0x63e ||
+ 		  reg == 0x640 || reg == 0x642 || reg == 0x64a ||
+-		  reg == 0x64c || reg == 0x660 ||
++		  reg == 0x64c ||
+ 		  reg == 0x73 || reg == 0x75 || reg == 0x77 || reg == 0x79 ||
+ 		  reg == 0x7b || reg == 0x7d;
+ 	}
+@@ -1682,9 +1695,13 @@ static struct nct6775_data *nct6775_update_device(struct device *dev)
+ 			if (data->has_fan_min & BIT(i))
+ 				data->fan_min[i] = nct6775_read_value(data,
+ 					   data->REG_FAN_MIN[i]);
+-			data->fan_pulses[i] =
+-			  (nct6775_read_value(data, data->REG_FAN_PULSES[i])
+-				>> data->FAN_PULSE_SHIFT[i]) & 0x03;
++
++			if (data->REG_FAN_PULSES[i]) {
++				data->fan_pulses[i] =
++				  (nct6775_read_value(data,
++						      data->REG_FAN_PULSES[i])
++				   >> data->FAN_PULSE_SHIFT[i]) & 0x03;
++			}
+ 
+ 			nct6775_select_fan_div(dev, data, i, reg);
+ 		}
+@@ -3639,6 +3656,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_VBAT = NCT6106_REG_VBAT;
+ 		data->REG_DIODE = NCT6106_REG_DIODE;
+@@ -3717,6 +3735,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6775_temp_label;
+ 		data->temp_mask = NCT6775_TEMP_MASK;
++		data->virt_temp_mask = NCT6775_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3789,6 +3808,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6776_temp_label;
+ 		data->temp_mask = NCT6776_TEMP_MASK;
++		data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3853,7 +3873,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6779_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3861,6 +3881,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 
+ 		data->temp_label = nct6779_temp_label;
+ 		data->temp_mask = NCT6779_TEMP_MASK;
++		data->virt_temp_mask = NCT6779_VIRT_TEMP_MASK;
+ 
+ 		data->REG_CONFIG = NCT6775_REG_CONFIG;
+ 		data->REG_VBAT = NCT6775_REG_VBAT;
+@@ -3933,7 +3954,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		data->ALARM_BITS = NCT6791_ALARM_BITS;
+ 		data->BEEP_BITS = NCT6779_BEEP_BITS;
+ 
+-		data->fan_from_reg = fan_from_reg13;
++		data->fan_from_reg = fan_from_reg_rpm;
+ 		data->fan_from_reg_min = fan_from_reg13;
+ 		data->target_temp_mask = 0xff;
+ 		data->tolerance_mask = 0x07;
+@@ -3944,22 +3965,27 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		case nct6791:
+ 			data->temp_label = nct6779_temp_label;
+ 			data->temp_mask = NCT6791_TEMP_MASK;
++			data->virt_temp_mask = NCT6791_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6792:
+ 			data->temp_label = nct6792_temp_label;
+ 			data->temp_mask = NCT6792_TEMP_MASK;
++			data->virt_temp_mask = NCT6792_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6793:
+ 			data->temp_label = nct6793_temp_label;
+ 			data->temp_mask = NCT6793_TEMP_MASK;
++			data->virt_temp_mask = NCT6793_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6795:
+ 			data->temp_label = nct6795_temp_label;
+ 			data->temp_mask = NCT6795_TEMP_MASK;
++			data->virt_temp_mask = NCT6795_VIRT_TEMP_MASK;
+ 			break;
+ 		case nct6796:
+ 			data->temp_label = nct6796_temp_label;
+ 			data->temp_mask = NCT6796_TEMP_MASK;
++			data->virt_temp_mask = NCT6796_VIRT_TEMP_MASK;
+ 			break;
+ 		}
+ 
+@@ -4143,7 +4169,7 @@ static int nct6775_probe(struct platform_device *pdev)
+ 		 * for each fan reflects a different temperature, and there
+ 		 * are no duplicates.
+ 		 */
+-		if (src != TEMP_SOURCE_VIRTUAL) {
++		if (!(data->virt_temp_mask & BIT(src))) {
+ 			if (mask & BIT(src))
+ 				continue;
+ 			mask |= BIT(src);
+diff --git a/drivers/i2c/busses/i2c-scmi.c b/drivers/i2c/busses/i2c-scmi.c
+index a01389b85f13..7e9a2bbf5ddc 100644
+--- a/drivers/i2c/busses/i2c-scmi.c
++++ b/drivers/i2c/busses/i2c-scmi.c
+@@ -152,6 +152,7 @@ acpi_smbus_cmi_access(struct i2c_adapter *adap, u16 addr, unsigned short flags,
+ 			mt_params[3].type = ACPI_TYPE_INTEGER;
+ 			mt_params[3].integer.value = len;
+ 			mt_params[4].type = ACPI_TYPE_BUFFER;
++			mt_params[4].buffer.length = len;
+ 			mt_params[4].buffer.pointer = data->block + 1;
+ 		}
+ 		break;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index cd620e009bad..d4b9db487b16 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -231,6 +231,7 @@ static const struct xpad_device {
+ 	{ 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
++	{ 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
+ 	{ 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE },
+ 	{ 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
+@@ -530,6 +531,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02ab, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init1),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x02a4, xboxone_pdp_init2),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init1),
++	XBOXONE_INIT_PKT(0x0e6f, 0x02a6, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index a39ae8f45e32..32379e0ac536 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3492,14 +3492,13 @@ static int __init dm_cache_init(void)
+ 	int r;
+ 
+ 	migration_cache = KMEM_CACHE(dm_cache_migration, 0);
+-	if (!migration_cache) {
+-		dm_unregister_target(&cache_target);
++	if (!migration_cache)
+ 		return -ENOMEM;
+-	}
+ 
+ 	r = dm_register_target(&cache_target);
+ 	if (r) {
+ 		DMERR("cache target registration failed: %d", r);
++		kmem_cache_destroy(migration_cache);
+ 		return r;
+ 	}
+ 
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 21d126a5078c..32aabe27b37c 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -467,7 +467,9 @@ static int flakey_iterate_devices(struct dm_target *ti, iterate_devices_callout_
+ static struct target_type flakey_target = {
+ 	.name   = "flakey",
+ 	.version = {1, 5, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
+ 	.features = DM_TARGET_ZONED_HM,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = flakey_ctr,
+ 	.dtr    = flakey_dtr,
+diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
+index d10964d41fd7..2f7c44a006c4 100644
+--- a/drivers/md/dm-linear.c
++++ b/drivers/md/dm-linear.c
+@@ -102,6 +102,7 @@ static int linear_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_REMAPPED;
+ }
+ 
++#ifdef CONFIG_BLK_DEV_ZONED
+ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 			 blk_status_t *error)
+ {
+@@ -112,6 +113,7 @@ static int linear_end_io(struct dm_target *ti, struct bio *bio,
+ 
+ 	return DM_ENDIO_DONE;
+ }
++#endif
+ 
+ static void linear_status(struct dm_target *ti, status_type_t type,
+ 			  unsigned status_flags, char *result, unsigned maxlen)
+@@ -208,12 +210,16 @@ static size_t linear_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
+ static struct target_type linear_target = {
+ 	.name   = "linear",
+ 	.version = {1, 4, 0},
++#ifdef CONFIG_BLK_DEV_ZONED
++	.end_io = linear_end_io,
+ 	.features = DM_TARGET_PASSES_INTEGRITY | DM_TARGET_ZONED_HM,
++#else
++	.features = DM_TARGET_PASSES_INTEGRITY,
++#endif
+ 	.module = THIS_MODULE,
+ 	.ctr    = linear_ctr,
+ 	.dtr    = linear_dtr,
+ 	.map    = linear_map,
+-	.end_io = linear_end_io,
+ 	.status = linear_status,
+ 	.prepare_ioctl = linear_prepare_ioctl,
+ 	.iterate_devices = linear_iterate_devices,
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index b0dd7027848b..4ad8312d5b8d 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1153,12 +1153,14 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+ 
+ /*
+- * The zone descriptors obtained with a zone report indicate
+- * zone positions within the target device. The zone descriptors
+- * must be remapped to match their position within the dm device.
+- * A target may call dm_remap_zone_report after completion of a
+- * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained
+- * from the target device mapping to the dm device.
++ * The zone descriptors obtained with a zone report indicate zone positions
++ * within the target backing device, regardless of that device is a partition
++ * and regardless of the target mapping start sector on the device or partition.
++ * The zone descriptors start sector and write pointer position must be adjusted
++ * to match their relative position within the dm device.
++ * A target may call dm_remap_zone_report() after completion of a
++ * REQ_OP_ZONE_REPORT bio to remap the zone descriptors obtained from the
++ * backing device.
+  */
+ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ {
+@@ -1169,6 +1171,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	struct blk_zone *zone;
+ 	unsigned int nr_rep = 0;
+ 	unsigned int ofst;
++	sector_t part_offset;
+ 	struct bio_vec bvec;
+ 	struct bvec_iter iter;
+ 	void *addr;
+@@ -1176,6 +1179,15 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 	if (bio->bi_status)
+ 		return;
+ 
++	/*
++	 * bio sector was incremented by the request size on completion. Taking
++	 * into account the original request sector, the target start offset on
++	 * the backing device and the target mapping offset (ti->begin), the
++	 * start sector of the backing device. The partition offset is always 0
++	 * if the target uses a whole device.
++	 */
++	part_offset = bio->bi_iter.bi_sector + ti->begin - (start + bio_end_sector(report_bio));
++
+ 	/*
+ 	 * Remap the start sector of the reported zones. For sequential zones,
+ 	 * also remap the write pointer position.
+@@ -1193,6 +1205,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 		/* Set zones start sector */
+ 		while (hdr->nr_zones && ofst < bvec.bv_len) {
+ 			zone = addr + ofst;
++			zone->start -= part_offset;
+ 			if (zone->start >= start + ti->len) {
+ 				hdr->nr_zones = 0;
+ 				break;
+@@ -1204,7 +1217,7 @@ void dm_remap_zone_report(struct dm_target *ti, struct bio *bio, sector_t start)
+ 				else if (zone->cond == BLK_ZONE_COND_EMPTY)
+ 					zone->wp = zone->start;
+ 				else
+-					zone->wp = zone->wp + ti->begin - start;
++					zone->wp = zone->wp + ti->begin - start - part_offset;
+ 			}
+ 			ofst += sizeof(struct blk_zone);
+ 			hdr->nr_zones--;
+diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
+index e11ab12fbdf2..800986a79704 100644
+--- a/drivers/mfd/omap-usb-host.c
++++ b/drivers/mfd/omap-usb-host.c
+@@ -528,8 +528,8 @@ static int usbhs_omap_get_dt_pdata(struct device *dev,
+ }
+ 
+ static const struct of_device_id usbhs_child_match_table[] = {
+-	{ .compatible = "ti,omap-ehci", },
+-	{ .compatible = "ti,omap-ohci", },
++	{ .compatible = "ti,ehci-omap", },
++	{ .compatible = "ti,ohci-omap3", },
+ 	{ }
+ };
+ 
+@@ -855,6 +855,7 @@ static struct platform_driver usbhs_omap_driver = {
+ 		.pm		= &usbhsomap_dev_pm_ops,
+ 		.of_match_table = usbhs_omap_dt_ids,
+ 	},
++	.probe		= usbhs_omap_probe,
+ 	.remove		= usbhs_omap_remove,
+ };
+ 
+@@ -864,9 +865,9 @@ MODULE_ALIAS("platform:" USBHS_DRIVER_NAME);
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("usb host common core driver for omap EHCI and OHCI");
+ 
+-static int __init omap_usbhs_drvinit(void)
++static int omap_usbhs_drvinit(void)
+ {
+-	return platform_driver_probe(&usbhs_omap_driver, usbhs_omap_probe);
++	return platform_driver_register(&usbhs_omap_driver);
+ }
+ 
+ /*
+@@ -878,7 +879,7 @@ static int __init omap_usbhs_drvinit(void)
+  */
+ fs_initcall_sync(omap_usbhs_drvinit);
+ 
+-static void __exit omap_usbhs_drvexit(void)
++static void omap_usbhs_drvexit(void)
+ {
+ 	platform_driver_unregister(&usbhs_omap_driver);
+ }
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index a0b9102c4c6e..e201ccb3fda4 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1370,6 +1370,16 @@ static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
+ 		brq->data.blocks = card->host->max_blk_count;
+ 
+ 	if (brq->data.blocks > 1) {
++		/*
++		 * Some SD cards in SPI mode return a CRC error or even lock up
++		 * completely when trying to read the last block using a
++		 * multiblock read command.
++		 */
++		if (mmc_host_is_spi(card->host) && (rq_data_dir(req) == READ) &&
++		    (blk_rq_pos(req) + blk_rq_sectors(req) ==
++		     get_capacity(md->disk)))
++			brq->data.blocks--;
++
+ 		/*
+ 		 * After a read error, we redo the request one sector
+ 		 * at a time in order to accurately determine which
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 217b790d22ed..2b01180be834 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -210,6 +210,7 @@ static void bond_get_stats(struct net_device *bond_dev,
+ static void bond_slave_arr_handler(struct work_struct *work);
+ static bool bond_time_in_interval(struct bonding *bond, unsigned long last_act,
+ 				  int mod);
++static void bond_netdev_notify_work(struct work_struct *work);
+ 
+ /*---------------------------- General routines -----------------------------*/
+ 
+@@ -1177,9 +1178,27 @@ static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
+ 		}
+ 	}
+ 
+-	/* don't change skb->dev for link-local packets */
+-	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))
++	/* Link-local multicast packets should be passed to the
++	 * stack on the link they arrive as well as pass them to the
++	 * bond-master device. These packets are mostly usable when
++	 * stack receives it with the link on which they arrive
++	 * (e.g. LLDP) they also must be available on master. Some of
++	 * the use cases include (but are not limited to): LLDP agents
++	 * that must be able to operate both on enslaved interfaces as
++	 * well as on bonds themselves; linux bridges that must be able
++	 * to process/pass BPDUs from attached bonds when any kind of
++	 * STP version is enabled on the network.
++	 */
++	if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) {
++		struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
++
++		if (nskb) {
++			nskb->dev = bond->dev;
++			nskb->queue_mapping = 0;
++			netif_rx(nskb);
++		}
+ 		return RX_HANDLER_PASS;
++	}
+ 	if (bond_should_deliver_exact_match(skb, slave, bond))
+ 		return RX_HANDLER_EXACT;
+ 
+@@ -1276,6 +1295,8 @@ static struct slave *bond_alloc_slave(struct bonding *bond)
+ 			return NULL;
+ 		}
+ 	}
++	INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work);
++
+ 	return slave;
+ }
+ 
+@@ -1283,6 +1304,7 @@ static void bond_free_slave(struct slave *slave)
+ {
+ 	struct bonding *bond = bond_get_bond_by_slave(slave);
+ 
++	cancel_delayed_work_sync(&slave->notify_work);
+ 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 		kfree(SLAVE_AD_INFO(slave));
+ 
+@@ -1304,39 +1326,26 @@ static void bond_fill_ifslave(struct slave *slave, struct ifslave *info)
+ 	info->link_failure_count = slave->link_failure_count;
+ }
+ 
+-static void bond_netdev_notify(struct net_device *dev,
+-			       struct netdev_bonding_info *info)
+-{
+-	rtnl_lock();
+-	netdev_bonding_info_change(dev, info);
+-	rtnl_unlock();
+-}
+-
+ static void bond_netdev_notify_work(struct work_struct *_work)
+ {
+-	struct netdev_notify_work *w =
+-		container_of(_work, struct netdev_notify_work, work.work);
++	struct slave *slave = container_of(_work, struct slave,
++					   notify_work.work);
++
++	if (rtnl_trylock()) {
++		struct netdev_bonding_info binfo;
+ 
+-	bond_netdev_notify(w->dev, &w->bonding_info);
+-	dev_put(w->dev);
+-	kfree(w);
++		bond_fill_ifslave(slave, &binfo.slave);
++		bond_fill_ifbond(slave->bond, &binfo.master);
++		netdev_bonding_info_change(slave->dev, &binfo);
++		rtnl_unlock();
++	} else {
++		queue_delayed_work(slave->bond->wq, &slave->notify_work, 1);
++	}
+ }
+ 
+ void bond_queue_slave_event(struct slave *slave)
+ {
+-	struct bonding *bond = slave->bond;
+-	struct netdev_notify_work *nnw = kzalloc(sizeof(*nnw), GFP_ATOMIC);
+-
+-	if (!nnw)
+-		return;
+-
+-	dev_hold(slave->dev);
+-	nnw->dev = slave->dev;
+-	bond_fill_ifslave(slave, &nnw->bonding_info.slave);
+-	bond_fill_ifbond(bond, &nnw->bonding_info.master);
+-	INIT_DELAYED_WORK(&nnw->work, bond_netdev_notify_work);
+-
+-	queue_delayed_work(slave->bond->wq, &nnw->work, 0);
++	queue_delayed_work(slave->bond->wq, &slave->notify_work, 0);
+ }
+ 
+ void bond_lower_state_changed(struct slave *slave)
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d93c790bfbe8..ad534b90ef21 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1107,7 +1107,7 @@ void b53_vlan_add(struct dsa_switch *ds, int port,
+ 		b53_get_vlan_entry(dev, vid, vl);
+ 
+ 		vl->members |= BIT(port);
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag |= BIT(port);
+ 		else
+ 			vl->untag &= ~BIT(port);
+@@ -1149,7 +1149,7 @@ int b53_vlan_del(struct dsa_switch *ds, int port,
+ 				pvid = 0;
+ 		}
+ 
+-		if (untagged)
++		if (untagged && !dsa_is_cpu_port(ds, port))
+ 			vl->untag &= ~(BIT(port));
+ 
+ 		b53_set_vlan_entry(dev, vid, vl);
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 02e8982519ce..d73204767cbe 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -698,7 +698,6 @@ static int bcm_sf2_sw_suspend(struct dsa_switch *ds)
+ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+-	unsigned int port;
+ 	int ret;
+ 
+ 	ret = bcm_sf2_sw_rst(priv);
+@@ -710,14 +709,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
+ 	if (priv->hw_params.num_gphy == 1)
+ 		bcm_sf2_gphy_enable_set(ds, true);
+ 
+-	for (port = 0; port < DSA_MAX_PORTS; port++) {
+-		if (dsa_is_user_port(ds, port))
+-			bcm_sf2_port_setup(ds, port, NULL);
+-		else if (dsa_is_cpu_port(ds, port))
+-			bcm_sf2_imp_setup(ds, port);
+-	}
+-
+-	bcm_sf2_enable_acb(ds);
++	ds->ops->setup(ds);
+ 
+ 	return 0;
+ }
+@@ -1168,10 +1160,10 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
+ {
+ 	struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
+ 
+-	/* Disable all ports and interrupts */
+ 	priv->wol_ports_mask = 0;
+-	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	dsa_unregister_switch(priv->dev->ds);
++	/* Disable all ports and interrupts */
++	bcm_sf2_sw_suspend(priv->dev->ds);
+ 	bcm_sf2_mdio_unregister(priv);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index b5f1f62e8e25..d1e1a0ba8615 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -225,9 +225,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 		}
+ 
+ 		/* for single fragment packets use build_skb() */
+-		if (buff->is_eop) {
++		if (buff->is_eop &&
++		    buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) {
+ 			skb = build_skb(page_address(buff->page),
+-					buff->len + AQ_SKB_ALIGN);
++					AQ_CFG_RX_FRAME_MAX);
+ 			if (unlikely(!skb)) {
+ 				err = -ENOMEM;
+ 				goto err_exit;
+@@ -247,18 +248,21 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ 					buff->len - ETH_HLEN,
+ 					SKB_TRUESIZE(buff->len - ETH_HLEN));
+ 
+-			for (i = 1U, next_ = buff->next,
+-			     buff_ = &self->buff_ring[next_]; true;
+-			     next_ = buff_->next,
+-			     buff_ = &self->buff_ring[next_], ++i) {
+-				skb_add_rx_frag(skb, i, buff_->page, 0,
+-						buff_->len,
+-						SKB_TRUESIZE(buff->len -
+-						ETH_HLEN));
+-				buff_->is_cleaned = 1;
+-
+-				if (buff_->is_eop)
+-					break;
++			if (!buff->is_eop) {
++				for (i = 1U, next_ = buff->next,
++				     buff_ = &self->buff_ring[next_];
++				     true; next_ = buff_->next,
++				     buff_ = &self->buff_ring[next_], ++i) {
++					skb_add_rx_frag(skb, i,
++							buff_->page, 0,
++							buff_->len,
++							SKB_TRUESIZE(buff->len -
++							ETH_HLEN));
++					buff_->is_cleaned = 1;
++
++					if (buff_->is_eop)
++						break;
++				}
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index a1f60f89e059..7a03ee45840e 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1045,14 +1045,22 @@ static void bcm_sysport_resume_from_wol(struct bcm_sysport_priv *priv)
+ {
+ 	u32 reg;
+ 
+-	/* Stop monitoring MPD interrupt */
+-	intrl2_0_mask_set(priv, INTRL2_0_MPD);
+-
+ 	/* Clear the MagicPacket detection logic */
+ 	reg = umac_readl(priv, UMAC_MPD_CTRL);
+ 	reg &= ~MPD_EN;
+ 	umac_writel(priv, reg, UMAC_MPD_CTRL);
+ 
++	reg = intrl2_0_readl(priv, INTRL2_CPU_STATUS);
++	if (reg & INTRL2_0_MPD)
++		netdev_info(priv->netdev, "Wake-on-LAN (MPD) interrupt!\n");
++
++	if (reg & INTRL2_0_BRCM_MATCH_TAG) {
++		reg = rxchk_readl(priv, RXCHK_BRCM_TAG_MATCH_STATUS) &
++				  RXCHK_BRCM_TAG_MATCH_MASK;
++		netdev_info(priv->netdev,
++			    "Wake-on-LAN (filters 0x%02x) interrupt!\n", reg);
++	}
++
+ 	netif_dbg(priv, wol, priv->netdev, "resumed from WOL\n");
+ }
+ 
+@@ -1102,11 +1110,6 @@ static irqreturn_t bcm_sysport_rx_isr(int irq, void *dev_id)
+ 	if (priv->irq0_stat & INTRL2_0_TX_RING_FULL)
+ 		bcm_sysport_tx_reclaim_all(priv);
+ 
+-	if (priv->irq0_stat & INTRL2_0_MPD) {
+-		netdev_info(priv->netdev, "Wake-on-LAN interrupt!\n");
+-		bcm_sysport_resume_from_wol(priv);
+-	}
+-
+ 	if (!priv->is_lite)
+ 		goto out;
+ 
+@@ -2459,9 +2462,6 @@ static int bcm_sysport_suspend_to_wol(struct bcm_sysport_priv *priv)
+ 	/* UniMAC receive needs to be turned on */
+ 	umac_enable_set(priv, CMD_RX_EN, 1);
+ 
+-	/* Enable the interrupt wake-up source */
+-	intrl2_0_mask_clear(priv, INTRL2_0_MPD);
+-
+ 	netif_dbg(priv, wol, ndev, "entered WOL mode\n");
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 80b05597c5fe..33f0861057fd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1882,8 +1882,11 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
+ 			tx_pkts++;
+ 			/* return full budget so NAPI will complete. */
+-			if (unlikely(tx_pkts > bp->tx_wake_thresh))
++			if (unlikely(tx_pkts > bp->tx_wake_thresh)) {
+ 				rx_pkts = budget;
++				raw_cons = NEXT_RAW_CMP(raw_cons);
++				break;
++			}
+ 		} else if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+ 			if (likely(budget))
+ 				rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
+@@ -1911,7 +1914,7 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ 		}
+ 		raw_cons = NEXT_RAW_CMP(raw_cons);
+ 
+-		if (rx_pkts == budget)
++		if (rx_pkts && rx_pkts == budget)
+ 			break;
+ 	}
+ 
+@@ -2025,8 +2028,12 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
+ 	while (1) {
+ 		work_done += bnxt_poll_work(bp, bnapi, budget - work_done);
+ 
+-		if (work_done >= budget)
++		if (work_done >= budget) {
++			if (!budget)
++				BNXT_CP_DB_REARM(cpr->cp_doorbell,
++						 cpr->cp_raw_cons);
+ 			break;
++		}
+ 
+ 		if (!bnxt_has_work(bp, cpr)) {
+ 			if (napi_complete_done(napi, work_done))
+@@ -3008,10 +3015,11 @@ static void bnxt_free_hwrm_resources(struct bnxt *bp)
+ {
+ 	struct pci_dev *pdev = bp->pdev;
+ 
+-	dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
+-			  bp->hwrm_cmd_resp_dma_addr);
+-
+-	bp->hwrm_cmd_resp_addr = NULL;
++	if (bp->hwrm_cmd_resp_addr) {
++		dma_free_coherent(&pdev->dev, PAGE_SIZE, bp->hwrm_cmd_resp_addr,
++				  bp->hwrm_cmd_resp_dma_addr);
++		bp->hwrm_cmd_resp_addr = NULL;
++	}
+ 	if (bp->hwrm_dbg_resp_addr) {
+ 		dma_free_coherent(&pdev->dev, HWRM_DBG_REG_BUF_SIZE,
+ 				  bp->hwrm_dbg_resp_addr,
+@@ -4643,7 +4651,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
+ 				      FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
+ 		enables |= ring_grps ?
+ 			   FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
+-		enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
++		enables |= vnics ? FUNC_CFG_REQ_ENABLES_NUM_VNICS : 0;
+ 
+ 		req->num_rx_rings = cpu_to_le16(rx_rings);
+ 		req->num_hw_ring_grps = cpu_to_le16(ring_grps);
+@@ -8493,7 +8501,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+ 	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
+-			hw_resc->max_irqs);
++			hw_resc->max_irqs - bnxt_get_ulp_msix_num(bp));
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+@@ -8924,6 +8932,7 @@ init_err_cleanup_tc:
+ 	bnxt_clear_int_mode(bp);
+ 
+ init_err_pci_clean:
++	bnxt_free_hwrm_resources(bp);
+ 	bnxt_cleanup_pci(bp);
+ 
+ init_err_free:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+index d5bc72cecde3..3f896acc4ca8 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+@@ -98,13 +98,13 @@ static int bnxt_hwrm_queue_cos2bw_cfg(struct bnxt *bp, struct ieee_ets *ets,
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_QUEUE_COS2BW_CFG, -1, -1);
+ 	for (i = 0; i < max_tc; i++) {
+-		u8 qidx;
++		u8 qidx = bp->tc_to_qidx[i];
+ 
+ 		req.enables |= cpu_to_le32(
+-			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID << i);
++			QUEUE_COS2BW_CFG_REQ_ENABLES_COS_QUEUE_ID0_VALID <<
++			qidx);
+ 
+ 		memset(&cos2bw, 0, sizeof(cos2bw));
+-		qidx = bp->tc_to_qidx[i];
+ 		cos2bw.queue_id = bp->q_info[qidx].queue_id;
+ 		if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_STRICT) {
+ 			cos2bw.tsa =
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index 491bd40a254d..c4c9df029466 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -75,17 +75,23 @@ static int bnxt_tc_parse_redir(struct bnxt *bp,
+ 	return 0;
+ }
+ 
+-static void bnxt_tc_parse_vlan(struct bnxt *bp,
+-			       struct bnxt_tc_actions *actions,
+-			       const struct tc_action *tc_act)
++static int bnxt_tc_parse_vlan(struct bnxt *bp,
++			      struct bnxt_tc_actions *actions,
++			      const struct tc_action *tc_act)
+ {
+-	if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_POP) {
++	switch (tcf_vlan_action(tc_act)) {
++	case TCA_VLAN_ACT_POP:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_POP_VLAN;
+-	} else if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_PUSH) {
++		break;
++	case TCA_VLAN_ACT_PUSH:
+ 		actions->flags |= BNXT_TC_ACTION_FLAG_PUSH_VLAN;
+ 		actions->push_vlan_tci = htons(tcf_vlan_push_vid(tc_act));
+ 		actions->push_vlan_tpid = tcf_vlan_push_proto(tc_act);
++		break;
++	default:
++		return -EOPNOTSUPP;
+ 	}
++	return 0;
+ }
+ 
+ static int bnxt_tc_parse_tunnel_set(struct bnxt *bp,
+@@ -136,7 +142,9 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
+ 
+ 		/* Push/pop VLAN */
+ 		if (is_tcf_vlan(tc_act)) {
+-			bnxt_tc_parse_vlan(bp, actions, tc_act);
++			rc = bnxt_tc_parse_vlan(bp, actions, tc_act);
++			if (rc)
++				return rc;
+ 			continue;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c4d7479938e2..dfa045f22ef1 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3765,6 +3765,13 @@ static const struct macb_config at91sam9260_config = {
+ 	.init = macb_init,
+ };
+ 
++static const struct macb_config sama5d3macb_config = {
++	.caps = MACB_CAPS_SG_DISABLED
++	      | MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII,
++	.clk_init = macb_clk_init,
++	.init = macb_init,
++};
++
+ static const struct macb_config pc302gem_config = {
+ 	.caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE,
+ 	.dma_burst_length = 16,
+@@ -3832,6 +3839,7 @@ static const struct of_device_id macb_dt_ids[] = {
+ 	{ .compatible = "cdns,gem", .data = &pc302gem_config },
+ 	{ .compatible = "atmel,sama5d2-gem", .data = &sama5d2_config },
+ 	{ .compatible = "atmel,sama5d3-gem", .data = &sama5d3_config },
++	{ .compatible = "atmel,sama5d3-macb", .data = &sama5d3macb_config },
+ 	{ .compatible = "atmel,sama5d4-gem", .data = &sama5d4_config },
+ 	{ .compatible = "cdns,at91rm9200-emac", .data = &emac_config },
+ 	{ .compatible = "cdns,emac", .data = &emac_config },
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index a051e582d541..79d03f8ee7b1 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -84,7 +84,7 @@ static void hnae_unmap_buffer(struct hnae_ring *ring, struct hnae_desc_cb *cb)
+ 	if (cb->type == DESC_TYPE_SKB)
+ 		dma_unmap_single(ring_to_dev(ring), cb->dma, cb->length,
+ 				 ring_to_dma_dir(ring));
+-	else
++	else if (cb->length)
+ 		dma_unmap_page(ring_to_dev(ring), cb->dma, cb->length,
+ 			       ring_to_dma_dir(ring));
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index b4518f45f048..1336ec73230d 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -40,9 +40,9 @@
+ #define SKB_TMP_LEN(SKB) \
+ 	(((SKB)->transport_header - (SKB)->mac_header) + tcp_hdrlen(SKB))
+ 
+-static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+-			 int size, dma_addr_t dma, int frag_end,
+-			 int buf_num, enum hns_desc_type type, int mtu)
++static void fill_v2_desc_hw(struct hnae_ring *ring, void *priv, int size,
++			    int send_sz, dma_addr_t dma, int frag_end,
++			    int buf_num, enum hns_desc_type type, int mtu)
+ {
+ 	struct hnae_desc *desc = &ring->desc[ring->next_to_use];
+ 	struct hnae_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
+@@ -64,7 +64,7 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	desc_cb->type = type;
+ 
+ 	desc->addr = cpu_to_le64(dma);
+-	desc->tx.send_size = cpu_to_le16((u16)size);
++	desc->tx.send_size = cpu_to_le16((u16)send_sz);
+ 
+ 	/* config bd buffer end */
+ 	hnae_set_bit(rrcfv, HNSV2_TXD_VLD_B, 1);
+@@ -133,6 +133,14 @@ static void fill_v2_desc(struct hnae_ring *ring, void *priv,
+ 	ring_ptr_move_fw(ring, next_to_use);
+ }
+ 
++static void fill_v2_desc(struct hnae_ring *ring, void *priv,
++			 int size, dma_addr_t dma, int frag_end,
++			 int buf_num, enum hns_desc_type type, int mtu)
++{
++	fill_v2_desc_hw(ring, priv, size, size, dma, frag_end,
++			buf_num, type, mtu);
++}
++
+ static const struct acpi_device_id hns_enet_acpi_match[] = {
+ 	{ "HISI00C1", 0 },
+ 	{ "HISI00C2", 0 },
+@@ -289,15 +297,15 @@ static void fill_tso_desc(struct hnae_ring *ring, void *priv,
+ 
+ 	/* when the frag size is bigger than hardware, split this frag */
+ 	for (k = 0; k < frag_buf_num; k++)
+-		fill_v2_desc(ring, priv,
+-			     (k == frag_buf_num - 1) ?
++		fill_v2_desc_hw(ring, priv, k == 0 ? size : 0,
++				(k == frag_buf_num - 1) ?
+ 					sizeoflast : BD_MAX_SEND_SIZE,
+-			     dma + BD_MAX_SEND_SIZE * k,
+-			     frag_end && (k == frag_buf_num - 1) ? 1 : 0,
+-			     buf_num,
+-			     (type == DESC_TYPE_SKB && !k) ?
++				dma + BD_MAX_SEND_SIZE * k,
++				frag_end && (k == frag_buf_num - 1) ? 1 : 0,
++				buf_num,
++				(type == DESC_TYPE_SKB && !k) ?
+ 					DESC_TYPE_SKB : DESC_TYPE_PAGE,
+-			     mtu);
++				mtu);
+ }
+ 
+ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index b8bba64673e5..3986ef83111b 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1725,7 +1725,7 @@ static void mvpp2_txq_desc_put(struct mvpp2_tx_queue *txq)
+ }
+ 
+ /* Set Tx descriptors fields relevant for CSUM calculation */
+-static u32 mvpp2_txq_desc_csum(int l3_offs, int l3_proto,
++static u32 mvpp2_txq_desc_csum(int l3_offs, __be16 l3_proto,
+ 			       int ip_hdr_len, int l4_proto)
+ {
+ 	u32 command;
+@@ -2600,14 +2600,15 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		int ip_hdr_len = 0;
+ 		u8 l4_proto;
++		__be16 l3_proto = vlan_get_protocol(skb);
+ 
+-		if (skb->protocol == htons(ETH_P_IP)) {
++		if (l3_proto == htons(ETH_P_IP)) {
+ 			struct iphdr *ip4h = ip_hdr(skb);
+ 
+ 			/* Calculate IPv4 checksum and L4 checksum */
+ 			ip_hdr_len = ip4h->ihl;
+ 			l4_proto = ip4h->protocol;
+-		} else if (skb->protocol == htons(ETH_P_IPV6)) {
++		} else if (l3_proto == htons(ETH_P_IPV6)) {
+ 			struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ 
+ 			/* Read l4_protocol from one of IPv6 extra headers */
+@@ -2619,7 +2620,7 @@ static u32 mvpp2_skb_tx_csum(struct mvpp2_port *port, struct sk_buff *skb)
+ 		}
+ 
+ 		return mvpp2_txq_desc_csum(skb_network_offset(skb),
+-				skb->protocol, ip_hdr_len, l4_proto);
++					   l3_proto, ip_hdr_len, l4_proto);
+ 	}
+ 
+ 	return MVPP2_TXD_L4_CSUM_NOT | MVPP2_TXD_IP_CSUM_DISABLE;
+@@ -3055,10 +3056,12 @@ static int mvpp2_poll(struct napi_struct *napi, int budget)
+ 				   cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK);
+ 	}
+ 
+-	cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
+-	if (cause_tx) {
+-		cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
+-		mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++	if (port->has_tx_irqs) {
++		cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
++		if (cause_tx) {
++			cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;
++			mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);
++		}
+ 	}
+ 
+ 	/* Process RX packets */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index dfbcda0d0e08..701af5ffcbc9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1339,6 +1339,9 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ 
+ 			*match_level = MLX5_MATCH_L2;
+ 		}
++	} else {
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, svlan_tag, 1);
++		MLX5_SET(fte_match_set_lyr_2_4, headers_c, cvlan_tag, 1);
+ 	}
+ 
+ 	if (dissector_uses_key(f->dissector, FLOW_DISSECTOR_KEY_BASIC)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 40dba9e8af92..69f356f5f8f5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -2000,7 +2000,7 @@ static u32 calculate_vports_min_rate_divider(struct mlx5_eswitch *esw)
+ 	u32 max_guarantee = 0;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled || evport->info.min_rate < max_guarantee)
+ 			continue;
+@@ -2020,7 +2020,7 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ 	int err;
+ 	int i;
+ 
+-	for (i = 0; i <= esw->total_vports; i++) {
++	for (i = 0; i < esw->total_vports; i++) {
+ 		evport = &esw->vports[i];
+ 		if (!evport->enabled)
+ 			continue;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+index dae1c5c5d27c..d2f76070ea7c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/transobj.c
+@@ -509,7 +509,7 @@ static int mlx5_hairpin_modify_sq(struct mlx5_core_dev *peer_mdev, u32 sqn,
+ 
+ 	sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx);
+ 
+-	if (next_state == MLX5_RQC_STATE_RDY) {
++	if (next_state == MLX5_SQC_STATE_RDY) {
+ 		MLX5_SET(sqc, sqc, hairpin_peer_rq, peer_rq);
+ 		MLX5_SET(sqc, sqc, hairpin_peer_vhca, peer_vhca);
+ 	}
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index 18df7d934e81..ccfcf3048cd0 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -91,7 +91,7 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 		struct sk_buff *skb;
+ 		struct net_device *dev;
+ 		u32 *buf;
+-		int sz, len;
++		int sz, len, buf_len;
+ 		u32 ifh[4];
+ 		u32 val;
+ 		struct frame_info info;
+@@ -116,14 +116,20 @@ static irqreturn_t ocelot_xtr_irq_handler(int irq, void *arg)
+ 			err = -ENOMEM;
+ 			break;
+ 		}
+-		buf = (u32 *)skb_put(skb, info.len);
++		buf_len = info.len - ETH_FCS_LEN;
++		buf = (u32 *)skb_put(skb, buf_len);
+ 
+ 		len = 0;
+ 		do {
+ 			sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
+ 			*buf++ = val;
+ 			len += sz;
+-		} while ((sz == 4) && (len < info.len));
++		} while (len < buf_len);
++
++		/* Read the FCS and discard it */
++		sz = ocelot_rx_frame_word(ocelot, grp, false, &val);
++		/* Update the statistics if part of the FCS was read before */
++		len -= ETH_FCS_LEN - sz;
+ 
+ 		if (sz < 0) {
+ 			err = sz;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index bfccc1955907..80306e4f247c 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -2068,14 +2068,17 @@ nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp,
+ 	return true;
+ }
+ 
+-static void nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
++static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec)
+ {
+ 	struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring;
+ 	struct nfp_net *nn = r_vec->nfp_net;
+ 	struct nfp_net_dp *dp = &nn->dp;
++	unsigned int budget = 512;
+ 
+-	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring))
++	while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--)
+ 		continue;
++
++	return budget;
+ }
+ 
+ static void nfp_ctrl_poll(unsigned long arg)
+@@ -2087,9 +2090,13 @@ static void nfp_ctrl_poll(unsigned long arg)
+ 	__nfp_ctrl_tx_queued(r_vec);
+ 	spin_unlock_bh(&r_vec->lock);
+ 
+-	nfp_ctrl_rx(r_vec);
+-
+-	nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	if (nfp_ctrl_rx(r_vec)) {
++		nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry);
++	} else {
++		tasklet_schedule(&r_vec->tasklet);
++		nn_dp_warn(&r_vec->nfp_net->dp,
++			   "control message budget exceeded!\n");
++	}
+ }
+ 
+ /* Setup and Configuration
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+index bee10c1781fb..463ffa83685f 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+@@ -11987,6 +11987,7 @@ struct public_global {
+ 	u32 running_bundle_id;
+ 	s32 external_temperature;
+ 	u32 mdump_reason;
++	u64 reserved;
+ 	u32 data_ptr;
+ 	u32 data_size;
+ };
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+index 81312924df14..0c443ea98479 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic.h
+@@ -1800,7 +1800,8 @@ struct qlcnic_hardware_ops {
+ 	int (*config_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*clear_loopback) (struct qlcnic_adapter *, u8);
+ 	int (*config_promisc_mode) (struct qlcnic_adapter *, u32);
+-	void (*change_l2_filter) (struct qlcnic_adapter *, u64 *, u16);
++	void (*change_l2_filter)(struct qlcnic_adapter *adapter, u64 *addr,
++				 u16 vlan, struct qlcnic_host_tx_ring *tx_ring);
+ 	int (*get_board_info) (struct qlcnic_adapter *);
+ 	void (*set_mac_filter_count) (struct qlcnic_adapter *);
+ 	void (*free_mac_list) (struct qlcnic_adapter *);
+@@ -2064,9 +2065,10 @@ static inline int qlcnic_nic_set_promisc(struct qlcnic_adapter *adapter,
+ }
+ 
+ static inline void qlcnic_change_filter(struct qlcnic_adapter *adapter,
+-					u64 *addr, u16 id)
++					u64 *addr, u16 vlan,
++					struct qlcnic_host_tx_ring *tx_ring)
+ {
+-	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, id);
++	adapter->ahw->hw_ops->change_l2_filter(adapter, addr, vlan, tx_ring);
+ }
+ 
+ static inline int qlcnic_get_board_info(struct qlcnic_adapter *adapter)
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index 569d54ededec..a79d84f99102 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -2135,7 +2135,8 @@ out:
+ }
+ 
+ void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
+-				  u16 vlan_id)
++				  u16 vlan_id,
++				  struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	u8 mac[ETH_ALEN];
+ 	memcpy(&mac, addr, ETH_ALEN);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+index b75a81246856..73fe2f64491d 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.h
+@@ -550,7 +550,8 @@ int qlcnic_83xx_wrt_reg_indirect(struct qlcnic_adapter *, ulong, u32);
+ int qlcnic_83xx_nic_set_promisc(struct qlcnic_adapter *, u32);
+ int qlcnic_83xx_config_hw_lro(struct qlcnic_adapter *, int);
+ int qlcnic_83xx_config_rss(struct qlcnic_adapter *, int);
+-void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *, u64 *, u16);
++void qlcnic_83xx_change_l2_filter(struct qlcnic_adapter *adapter, u64 *addr,
++				  u16 vlan, struct qlcnic_host_tx_ring *ring);
+ int qlcnic_83xx_get_pci_info(struct qlcnic_adapter *, struct qlcnic_pci_info *);
+ int qlcnic_83xx_set_nic_info(struct qlcnic_adapter *, struct qlcnic_info *);
+ void qlcnic_83xx_initialize_nic(struct qlcnic_adapter *, int);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+index 4bb33af8e2b3..56a3bd9e37dc 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_hw.h
+@@ -173,7 +173,8 @@ int qlcnic_82xx_napi_add(struct qlcnic_adapter *adapter,
+ 			 struct net_device *netdev);
+ void qlcnic_82xx_get_beacon_state(struct qlcnic_adapter *);
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter,
+-			       u64 *uaddr, u16 vlan_id);
++			       u64 *uaddr, u16 vlan_id,
++			       struct qlcnic_host_tx_ring *tx_ring);
+ int qlcnic_82xx_config_intr_coalesce(struct qlcnic_adapter *,
+ 				     struct ethtool_coalesce *);
+ int qlcnic_82xx_set_rx_coalesce(struct qlcnic_adapter *);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+index 84dd83031a1b..9647578cbe6a 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c
+@@ -268,13 +268,12 @@ static void qlcnic_add_lb_filter(struct qlcnic_adapter *adapter,
+ }
+ 
+ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+-			       u16 vlan_id)
++			       u16 vlan_id, struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct cmd_desc_type0 *hwdesc;
+ 	struct qlcnic_nic_req *req;
+ 	struct qlcnic_mac_req *mac_req;
+ 	struct qlcnic_vlan_req *vlan_req;
+-	struct qlcnic_host_tx_ring *tx_ring = adapter->tx_ring;
+ 	u32 producer;
+ 	u64 word;
+ 
+@@ -301,7 +300,8 @@ void qlcnic_82xx_change_filter(struct qlcnic_adapter *adapter, u64 *uaddr,
+ 
+ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 			       struct cmd_desc_type0 *first_desc,
+-			       struct sk_buff *skb)
++			       struct sk_buff *skb,
++			       struct qlcnic_host_tx_ring *tx_ring)
+ {
+ 	struct vlan_ethhdr *vh = (struct vlan_ethhdr *)(skb->data);
+ 	struct ethhdr *phdr = (struct ethhdr *)(skb->data);
+@@ -335,7 +335,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 		    tmp_fil->vlan_id == vlan_id) {
+ 			if (jiffies > (QLCNIC_READD_AGE * HZ + tmp_fil->ftime))
+ 				qlcnic_change_filter(adapter, &src_addr,
+-						     vlan_id);
++						     vlan_id, tx_ring);
+ 			tmp_fil->ftime = jiffies;
+ 			return;
+ 		}
+@@ -350,7 +350,7 @@ static void qlcnic_send_filter(struct qlcnic_adapter *adapter,
+ 	if (!fil)
+ 		return;
+ 
+-	qlcnic_change_filter(adapter, &src_addr, vlan_id);
++	qlcnic_change_filter(adapter, &src_addr, vlan_id, tx_ring);
+ 	fil->ftime = jiffies;
+ 	fil->vlan_id = vlan_id;
+ 	memcpy(fil->faddr, &src_addr, ETH_ALEN);
+@@ -766,7 +766,7 @@ netdev_tx_t qlcnic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
+ 	}
+ 
+ 	if (adapter->drv_mac_learn)
+-		qlcnic_send_filter(adapter, first_desc, skb);
++		qlcnic_send_filter(adapter, first_desc, skb, tx_ring);
+ 
+ 	tx_ring->tx_stats.tx_bytes += skb->len;
+ 	tx_ring->tx_stats.xmit_called++;
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+index 7fd86d40a337..11167abe5934 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+@@ -113,7 +113,7 @@ rmnet_map_ingress_handler(struct sk_buff *skb,
+ 	struct sk_buff *skbn;
+ 
+ 	if (skb->dev->type == ARPHRD_ETHER) {
+-		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_KERNEL)) {
++		if (pskb_expand_head(skb, ETH_HLEN, 0, GFP_ATOMIC)) {
+ 			kfree_skb(skb);
+ 			return;
+ 		}
+@@ -147,7 +147,7 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
+ 	}
+ 
+ 	if (skb_headroom(skb) < required_headroom) {
+-		if (pskb_expand_head(skb, required_headroom, 0, GFP_KERNEL))
++		if (pskb_expand_head(skb, required_headroom, 0, GFP_ATOMIC))
+ 			return -ENOMEM;
+ 	}
+ 
+@@ -189,6 +189,9 @@ rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb)
+ 	if (!skb)
+ 		goto done;
+ 
++	if (skb->pkt_type == PACKET_LOOPBACK)
++		return RX_HANDLER_PASS;
++
+ 	dev = skb->dev;
+ 	port = rmnet_get_port(dev);
+ 
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 1d1e66002232..627c5cd8f786 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -4788,8 +4788,8 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
+ 		RTL_W32(tp, RxConfig, RX_FIFO_THRESH | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_18 ... RTL_GIGA_MAC_VER_24:
+-	case RTL_GIGA_MAC_VER_34:
+-	case RTL_GIGA_MAC_VER_35:
++	case RTL_GIGA_MAC_VER_34 ... RTL_GIGA_MAC_VER_36:
++	case RTL_GIGA_MAC_VER_38:
+ 		RTL_W32(tp, RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST);
+ 		break;
+ 	case RTL_GIGA_MAC_VER_40 ... RTL_GIGA_MAC_VER_51:
+@@ -5041,9 +5041,14 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 
+ static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+-	/* Set DMA burst size and Interframe Gap Time */
+-	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+-		(InterFrameGap << TxInterFrameGapShift));
++	u32 val = TX_DMA_BURST << TxDMAShift |
++		  InterFrameGap << TxInterFrameGapShift;
++
++	if (tp->mac_version >= RTL_GIGA_MAC_VER_34 &&
++	    tp->mac_version != RTL_GIGA_MAC_VER_39)
++		val |= TXCFG_AUTO_FIFO;
++
++	RTL_W32(tp, TxConfig, val);
+ }
+ 
+ static void rtl_set_rx_max_size(struct rtl8169_private *tp)
+@@ -5530,7 +5535,6 @@ static void rtl_hw_start_8168e_2(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	/* Adjust EEE LED frequency */
+@@ -5562,7 +5566,6 @@ static void rtl_hw_start_8168f(struct rtl8169_private *tp)
+ 
+ 	rtl_disable_clock_request(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 	RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) | PFM_EN);
+ 	RTL_W32(tp, MISC, RTL_R32(tp, MISC) | PWM_EN);
+@@ -5607,8 +5610,6 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp)
+ 
+ static void rtl_hw_start_8168g(struct rtl8169_private *tp)
+ {
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5707,8 +5708,6 @@ static void rtl_hw_start_8168h_1(struct rtl8169_private *tp)
+ 	RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~ASPM_en);
+ 	rtl_ephy_init(tp, e_info_8168h_1, ARRAY_SIZE(e_info_8168h_1));
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x38, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x48, ERIAR_EXGMAC);
+@@ -5789,8 +5788,6 @@ static void rtl_hw_start_8168ep(struct rtl8169_private *tp)
+ {
+ 	rtl8168ep_stop_cmac(tp);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+-
+ 	rtl_eri_write(tp, 0xc8, ERIAR_MASK_0101, 0x00080002, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xcc, ERIAR_MASK_0001, 0x2f, ERIAR_EXGMAC);
+ 	rtl_eri_write(tp, 0xd0, ERIAR_MASK_0001, 0x5f, ERIAR_EXGMAC);
+@@ -6108,7 +6105,6 @@ static void rtl_hw_start_8402(struct rtl8169_private *tp)
+ 	/* Force LAN exit from ASPM if Rx/Tx are not idle */
+ 	RTL_W32(tp, FuncEvent, RTL_R32(tp, FuncEvent) | 0x002800);
+ 
+-	RTL_W32(tp, TxConfig, RTL_R32(tp, TxConfig) | TXCFG_AUTO_FIFO);
+ 	RTL_W8(tp, MCU, RTL_R8(tp, MCU) & ~NOW_IS_OOB);
+ 
+ 	rtl_ephy_init(tp, e_info_8402, ARRAY_SIZE(e_info_8402));
+diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
+index 78fd0f8b8e81..a15006e2fb29 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -256,10 +256,10 @@ struct stmmac_safety_stats {
+ #define MAX_DMA_RIWT		0xff
+ #define MIN_DMA_RIWT		0x20
+ /* Tx coalesce parameters */
+-#define STMMAC_COAL_TX_TIMER	40000
++#define STMMAC_COAL_TX_TIMER	1000
+ #define STMMAC_MAX_COAL_TX_TICK	100000
+ #define STMMAC_TX_MAX_FRAMES	256
+-#define STMMAC_TX_FRAMES	64
++#define STMMAC_TX_FRAMES	25
+ 
+ /* Packets types */
+ enum packets_types {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index c0a855b7ab3b..63e1064b27a2 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -48,6 +48,8 @@ struct stmmac_tx_info {
+ 
+ /* Frequently used values are kept adjacent for cache effect */
+ struct stmmac_tx_queue {
++	u32 tx_count_frames;
++	struct timer_list txtimer;
+ 	u32 queue_index;
+ 	struct stmmac_priv *priv_data;
+ 	struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp;
+@@ -73,7 +75,14 @@ struct stmmac_rx_queue {
+ 	u32 rx_zeroc_thresh;
+ 	dma_addr_t dma_rx_phy;
+ 	u32 rx_tail_addr;
++};
++
++struct stmmac_channel {
+ 	struct napi_struct napi ____cacheline_aligned_in_smp;
++	struct stmmac_priv *priv_data;
++	u32 index;
++	int has_rx;
++	int has_tx;
+ };
+ 
+ struct stmmac_tc_entry {
+@@ -109,14 +118,12 @@ struct stmmac_pps_cfg {
+ 
+ struct stmmac_priv {
+ 	/* Frequently used values are kept adjacent for cache effect */
+-	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+ 	bool tx_path_in_lpi_mode;
+-	struct timer_list txtimer;
+ 	bool tso;
+ 
+ 	unsigned int dma_buf_sz;
+@@ -137,6 +144,9 @@ struct stmmac_priv {
+ 	/* TX Queue */
+ 	struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES];
+ 
++	/* Generic channel for NAPI */
++	struct stmmac_channel channel[STMMAC_CH_MAX];
++
+ 	bool oldlink;
+ 	int speed;
+ 	int oldduplex;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index c579d98b9666..1c6ba74e294b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -147,12 +147,14 @@ static void stmmac_verify_args(void)
+ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_disable(&rx_q->napi);
++		napi_disable(&ch->napi);
+ 	}
+ }
+ 
+@@ -163,12 +165,14 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
+ static void stmmac_enable_all_queues(struct stmmac_priv *priv)
+ {
+ 	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
++	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
++	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
+ 	u32 queue;
+ 
+-	for (queue = 0; queue < rx_queues_cnt; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		napi_enable(&rx_q->napi);
++		napi_enable(&ch->napi);
+ 	}
+ }
+ 
+@@ -1822,18 +1826,18 @@ static void stmmac_dma_operation_mode(struct stmmac_priv *priv)
+  * @queue: TX queue index
+  * Description: it reclaims the transmit resources after transmission completes.
+  */
+-static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
++static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+ {
+ 	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
+ 	unsigned int bytes_compl = 0, pkts_compl = 0;
+-	unsigned int entry;
++	unsigned int entry, count = 0;
+ 
+-	netif_tx_lock(priv->dev);
++	__netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+ 
+ 	priv->xstats.tx_clean++;
+ 
+ 	entry = tx_q->dirty_tx;
+-	while (entry != tx_q->cur_tx) {
++	while ((entry != tx_q->cur_tx) && (count < budget)) {
+ 		struct sk_buff *skb = tx_q->tx_skbuff[entry];
+ 		struct dma_desc *p;
+ 		int status;
+@@ -1849,6 +1853,8 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		if (unlikely(status & tx_dma_own))
+ 			break;
+ 
++		count++;
++
+ 		/* Make sure descriptor fields are read after reading
+ 		 * the own bit.
+ 		 */
+@@ -1916,7 +1922,10 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue)
+ 		stmmac_enable_eee_mode(priv);
+ 		mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer));
+ 	}
+-	netif_tx_unlock(priv->dev);
++
++	__netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue));
++
++	return count;
+ }
+ 
+ /**
+@@ -1999,6 +2008,33 @@ static bool stmmac_safety_feat_interrupt(struct stmmac_priv *priv)
+ 	return false;
+ }
+ 
++static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
++{
++	int status = stmmac_dma_interrupt_status(priv, priv->ioaddr,
++						 &priv->xstats, chan);
++	struct stmmac_channel *ch = &priv->channel[chan];
++	bool needs_work = false;
++
++	if ((status & handle_rx) && ch->has_rx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_rx;
++	}
++
++	if ((status & handle_tx) && ch->has_tx) {
++		needs_work = true;
++	} else {
++		status &= ~handle_tx;
++	}
++
++	if (needs_work && napi_schedule_prep(&ch->napi)) {
++		stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
++		__napi_schedule(&ch->napi);
++	}
++
++	return status;
++}
++
+ /**
+  * stmmac_dma_interrupt - DMA ISR
+  * @priv: driver private structure
+@@ -2013,57 +2049,14 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
+ 	u32 channels_to_check = tx_channel_count > rx_channel_count ?
+ 				tx_channel_count : rx_channel_count;
+ 	u32 chan;
+-	bool poll_scheduled = false;
+ 	int status[max_t(u32, MTL_MAX_TX_QUEUES, MTL_MAX_RX_QUEUES)];
+ 
+ 	/* Make sure we never check beyond our status buffer. */
+ 	if (WARN_ON_ONCE(channels_to_check > ARRAY_SIZE(status)))
+ 		channels_to_check = ARRAY_SIZE(status);
+ 
+-	/* Each DMA channel can be used for rx and tx simultaneously, yet
+-	 * napi_struct is embedded in struct stmmac_rx_queue rather than in a
+-	 * stmmac_channel struct.
+-	 * Because of this, stmmac_poll currently checks (and possibly wakes)
+-	 * all tx queues rather than just a single tx queue.
+-	 */
+ 	for (chan = 0; chan < channels_to_check; chan++)
+-		status[chan] = stmmac_dma_interrupt_status(priv, priv->ioaddr,
+-				&priv->xstats, chan);
+-
+-	for (chan = 0; chan < rx_channel_count; chan++) {
+-		if (likely(status[chan] & handle_rx)) {
+-			struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan];
+-
+-			if (likely(napi_schedule_prep(&rx_q->napi))) {
+-				stmmac_disable_dma_irq(priv, priv->ioaddr, chan);
+-				__napi_schedule(&rx_q->napi);
+-				poll_scheduled = true;
+-			}
+-		}
+-	}
+-
+-	/* If we scheduled poll, we already know that tx queues will be checked.
+-	 * If we didn't schedule poll, see if any DMA channel (used by tx) has a
+-	 * completed transmission, if so, call stmmac_poll (once).
+-	 */
+-	if (!poll_scheduled) {
+-		for (chan = 0; chan < tx_channel_count; chan++) {
+-			if (status[chan] & handle_tx) {
+-				/* It doesn't matter what rx queue we choose
+-				 * here. We use 0 since it always exists.
+-				 */
+-				struct stmmac_rx_queue *rx_q =
+-					&priv->rx_queue[0];
+-
+-				if (likely(napi_schedule_prep(&rx_q->napi))) {
+-					stmmac_disable_dma_irq(priv,
+-							priv->ioaddr, chan);
+-					__napi_schedule(&rx_q->napi);
+-				}
+-				break;
+-			}
+-		}
+-	}
++		status[chan] = stmmac_napi_check(priv, chan);
+ 
+ 	for (chan = 0; chan < tx_channel_count; chan++) {
+ 		if (unlikely(status[chan] & tx_hard_error_bump_tc)) {
+@@ -2193,8 +2186,7 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 		stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg,
+ 				    tx_q->dma_tx_phy, chan);
+ 
+-		tx_q->tx_tail_addr = tx_q->dma_tx_phy +
+-			    (DMA_TX_SIZE * sizeof(struct dma_desc));
++		tx_q->tx_tail_addr = tx_q->dma_tx_phy;
+ 		stmmac_set_tx_tail_ptr(priv, priv->ioaddr,
+ 				       tx_q->tx_tail_addr, chan);
+ 	}
+@@ -2212,6 +2204,13 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 	return ret;
+ }
+ 
++static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue)
++{
++	struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue];
++
++	mod_timer(&tx_q->txtimer, STMMAC_COAL_TIMER(priv->tx_coal_timer));
++}
++
+ /**
+  * stmmac_tx_timer - mitigation sw timer for tx.
+  * @data: data pointer
+@@ -2220,13 +2219,14 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+  */
+ static void stmmac_tx_timer(struct timer_list *t)
+ {
+-	struct stmmac_priv *priv = from_timer(priv, t, txtimer);
+-	u32 tx_queues_count = priv->plat->tx_queues_to_use;
+-	u32 queue;
++	struct stmmac_tx_queue *tx_q = from_timer(tx_q, t, txtimer);
++	struct stmmac_priv *priv = tx_q->priv_data;
++	struct stmmac_channel *ch;
++
++	ch = &priv->channel[tx_q->queue_index];
+ 
+-	/* let's scan all the tx queues */
+-	for (queue = 0; queue < tx_queues_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (likely(napi_schedule_prep(&ch->napi)))
++		__napi_schedule(&ch->napi);
+ }
+ 
+ /**
+@@ -2239,11 +2239,17 @@ static void stmmac_tx_timer(struct timer_list *t)
+  */
+ static void stmmac_init_tx_coalesce(struct stmmac_priv *priv)
+ {
++	u32 tx_channel_count = priv->plat->tx_queues_to_use;
++	u32 chan;
++
+ 	priv->tx_coal_frames = STMMAC_TX_FRAMES;
+ 	priv->tx_coal_timer = STMMAC_COAL_TX_TIMER;
+-	timer_setup(&priv->txtimer, stmmac_tx_timer, 0);
+-	priv->txtimer.expires = STMMAC_COAL_TIMER(priv->tx_coal_timer);
+-	add_timer(&priv->txtimer);
++
++	for (chan = 0; chan < tx_channel_count; chan++) {
++		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
++
++		timer_setup(&tx_q->txtimer, stmmac_tx_timer, 0);
++	}
+ }
+ 
+ static void stmmac_set_rings_length(struct stmmac_priv *priv)
+@@ -2571,6 +2577,7 @@ static void stmmac_hw_teardown(struct net_device *dev)
+ static int stmmac_open(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 	int ret;
+ 
+ 	stmmac_check_ether_addr(priv);
+@@ -2667,7 +2674,9 @@ irq_error:
+ 	if (dev->phydev)
+ 		phy_stop(dev->phydev);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
++
+ 	stmmac_hw_teardown(dev);
+ init_error:
+ 	free_dma_desc_resources(priv);
+@@ -2687,6 +2696,7 @@ dma_desc_error:
+ static int stmmac_release(struct net_device *dev)
+ {
+ 	struct stmmac_priv *priv = netdev_priv(dev);
++	u32 chan;
+ 
+ 	if (priv->eee_enabled)
+ 		del_timer_sync(&priv->eee_ctrl_timer);
+@@ -2701,7 +2711,8 @@ static int stmmac_release(struct net_device *dev)
+ 
+ 	stmmac_disable_all_queues(priv);
+ 
+-	del_timer_sync(&priv->txtimer);
++	for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++)
++		del_timer_sync(&priv->tx_queue[chan].txtimer);
+ 
+ 	/* Free the IRQ lines */
+ 	free_irq(dev->irq, dev);
+@@ -2915,14 +2926,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	priv->xstats.tx_tso_nfrags += nfrags;
+ 
+ 	/* Manage tx mitigation */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -2971,6 +2981,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3125,14 +3136,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * This approach takes care about the fragments: desc is the first
+ 	 * element in case of no SG.
+ 	 */
+-	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+-		mod_timer(&priv->txtimer,
+-			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-	} else {
+-		priv->tx_count_frames = 0;
++	tx_q->tx_count_frames += nfrags + 1;
++	if (priv->tx_coal_frames <= tx_q->tx_count_frames) {
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
++		tx_q->tx_count_frames = 0;
++	} else {
++		stmmac_tx_timer_arm(priv, queue);
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+@@ -3178,6 +3188,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
++
++	tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc));
+ 	stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue);
+ 
+ 	return NETDEV_TX_OK;
+@@ -3298,6 +3310,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
+ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ {
+ 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	struct stmmac_channel *ch = &priv->channel[queue];
+ 	unsigned int entry = rx_q->cur_rx;
+ 	int coe = priv->hw->rx_csum;
+ 	unsigned int next_entry;
+@@ -3467,7 +3480,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			else
+ 				skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 
+-			napi_gro_receive(&rx_q->napi, skb);
++			napi_gro_receive(&ch->napi, skb);
+ 
+ 			priv->dev->stats.rx_packets++;
+ 			priv->dev->stats.rx_bytes += frame_len;
+@@ -3490,27 +3503,33 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+  *  Description :
+  *  To look at the incoming frames and clear the tx resources.
+  */
+-static int stmmac_poll(struct napi_struct *napi, int budget)
++static int stmmac_napi_poll(struct napi_struct *napi, int budget)
+ {
+-	struct stmmac_rx_queue *rx_q =
+-		container_of(napi, struct stmmac_rx_queue, napi);
+-	struct stmmac_priv *priv = rx_q->priv_data;
+-	u32 tx_count = priv->plat->tx_queues_to_use;
+-	u32 chan = rx_q->queue_index;
+-	int work_done = 0;
+-	u32 queue;
++	struct stmmac_channel *ch =
++		container_of(napi, struct stmmac_channel, napi);
++	struct stmmac_priv *priv = ch->priv_data;
++	int work_done = 0, work_rem = budget;
++	u32 chan = ch->index;
+ 
+ 	priv->xstats.napi_poll++;
+ 
+-	/* check all the queues */
+-	for (queue = 0; queue < tx_count; queue++)
+-		stmmac_tx_clean(priv, queue);
++	if (ch->has_tx) {
++		int done = stmmac_tx_clean(priv, work_rem, chan);
+ 
+-	work_done = stmmac_rx(priv, budget, rx_q->queue_index);
+-	if (work_done < budget) {
+-		napi_complete_done(napi, work_done);
+-		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++		work_done += done;
++		work_rem -= done;
++	}
++
++	if (ch->has_rx) {
++		int done = stmmac_rx(priv, work_rem, chan);
++
++		work_done += done;
++		work_rem -= done;
+ 	}
++
++	if (work_done < budget && napi_complete_done(napi, work_done))
++		stmmac_enable_dma_irq(priv, priv->ioaddr, chan);
++
+ 	return work_done;
+ }
+ 
+@@ -4170,8 +4189,8 @@ int stmmac_dvr_probe(struct device *device,
+ {
+ 	struct net_device *ndev = NULL;
+ 	struct stmmac_priv *priv;
++	u32 queue, maxq;
+ 	int ret = 0;
+-	u32 queue;
+ 
+ 	ndev = alloc_etherdev_mqs(sizeof(struct stmmac_priv),
+ 				  MTL_MAX_TX_QUEUES,
+@@ -4291,11 +4310,22 @@ int stmmac_dvr_probe(struct device *device,
+ 			 "Enable RX Mitigation via HW Watchdog Timer\n");
+ 	}
+ 
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	/* Setup channels NAPI */
++	maxq = max(priv->plat->rx_queues_to_use, priv->plat->tx_queues_to_use);
+ 
+-		netif_napi_add(ndev, &rx_q->napi, stmmac_poll,
+-			       (8 * priv->plat->rx_queues_to_use));
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
++
++		ch->priv_data = priv;
++		ch->index = queue;
++
++		if (queue < priv->plat->rx_queues_to_use)
++			ch->has_rx = true;
++		if (queue < priv->plat->tx_queues_to_use)
++			ch->has_tx = true;
++
++		netif_napi_add(ndev, &ch->napi, stmmac_napi_poll,
++			       NAPI_POLL_WEIGHT);
+ 	}
+ 
+ 	mutex_init(&priv->lock);
+@@ -4341,10 +4371,10 @@ error_netdev_register:
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI)
+ 		stmmac_mdio_unregister(ndev);
+ error_mdio_register:
+-	for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) {
+-		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
++	for (queue = 0; queue < maxq; queue++) {
++		struct stmmac_channel *ch = &priv->channel[queue];
+ 
+-		netif_napi_del(&rx_q->napi);
++		netif_napi_del(&ch->napi);
+ 	}
+ error_hw_init:
+ 	destroy_workqueue(priv->wq);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 72da77b94ecd..8a3867cec67a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -67,7 +67,7 @@ static int dwmac1000_validate_mcast_bins(int mcast_bins)
+  * Description:
+  * This function validates the number of Unicast address entries supported
+  * by a particular Synopsys 10/100/1000 controller. The Synopsys controller
+- * supports 1, 32, 64, or 128 Unicast filter entries for it's Unicast filter
++ * supports 1..32, 64, or 128 Unicast filter entries for it's Unicast filter
+  * logic. This function validates a valid, supported configuration is
+  * selected, and defaults to 1 Unicast address if an unsupported
+  * configuration is selected.
+@@ -77,8 +77,7 @@ static int dwmac1000_validate_ucast_entries(int ucast_entries)
+ 	int x = ucast_entries;
+ 
+ 	switch (x) {
+-	case 1:
+-	case 32:
++	case 1 ... 32:
+ 	case 64:
+ 	case 128:
+ 		break;
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 9263d638bd6d..f932923f7d56 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -41,6 +41,7 @@ config TI_DAVINCI_MDIO
+ config TI_DAVINCI_CPDMA
+ 	tristate "TI DaVinci CPDMA Support"
+ 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
++	select GENERIC_ALLOCATOR
+ 	---help---
+ 	  This driver supports TI's DaVinci CPDMA dma engine.
+ 
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index af4dc4425be2..5827fccd4f29 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -717,6 +717,30 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+ 	return 0;
+ }
+ 
++static int __phylink_connect_phy(struct phylink *pl, struct phy_device *phy,
++		phy_interface_t interface)
++{
++	int ret;
++
++	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
++		    (pl->link_an_mode == MLO_AN_INBAND &&
++		     phy_interface_mode_is_8023z(interface))))
++		return -EINVAL;
++
++	if (pl->phydev)
++		return -EBUSY;
++
++	ret = phy_attach_direct(pl->netdev, phy, 0, interface);
++	if (ret)
++		return ret;
++
++	ret = phylink_bringup_phy(pl, phy);
++	if (ret)
++		phy_detach(phy);
++
++	return ret;
++}
++
+ /**
+  * phylink_connect_phy() - connect a PHY to the phylink instance
+  * @pl: a pointer to a &struct phylink returned from phylink_create()
+@@ -734,31 +758,13 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy)
+  */
+ int phylink_connect_phy(struct phylink *pl, struct phy_device *phy)
+ {
+-	int ret;
+-
+-	if (WARN_ON(pl->link_an_mode == MLO_AN_FIXED ||
+-		    (pl->link_an_mode == MLO_AN_INBAND &&
+-		     phy_interface_mode_is_8023z(pl->link_interface))))
+-		return -EINVAL;
+-
+-	if (pl->phydev)
+-		return -EBUSY;
+-
+ 	/* Use PHY device/driver interface */
+ 	if (pl->link_interface == PHY_INTERFACE_MODE_NA) {
+ 		pl->link_interface = phy->interface;
+ 		pl->link_config.interface = pl->link_interface;
+ 	}
+ 
+-	ret = phy_attach_direct(pl->netdev, phy, 0, pl->link_interface);
+-	if (ret)
+-		return ret;
+-
+-	ret = phylink_bringup_phy(pl, phy);
+-	if (ret)
+-		phy_detach(phy);
+-
+-	return ret;
++	return __phylink_connect_phy(pl, phy, pl->link_interface);
+ }
+ EXPORT_SYMBOL_GPL(phylink_connect_phy);
+ 
+@@ -1672,7 +1678,9 @@ static void phylink_sfp_link_up(void *upstream)
+ 
+ static int phylink_sfp_connect_phy(void *upstream, struct phy_device *phy)
+ {
+-	return phylink_connect_phy(upstream, phy);
++	struct phylink *pl = upstream;
++
++	return __phylink_connect_phy(upstream, phy, pl->link_config.interface);
+ }
+ 
+ static void phylink_sfp_disconnect_phy(void *upstream)
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index 740655261e5b..83060fb349f4 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -349,6 +349,7 @@ static int sfp_register_bus(struct sfp_bus *bus)
+ 	}
+ 	if (bus->started)
+ 		bus->socket_ops->start(bus->sfp);
++	bus->netdev->sfp_bus = bus;
+ 	bus->registered = true;
+ 	return 0;
+ }
+@@ -357,6 +358,7 @@ static void sfp_unregister_bus(struct sfp_bus *bus)
+ {
+ 	const struct sfp_upstream_ops *ops = bus->upstream_ops;
+ 
++	bus->netdev->sfp_bus = NULL;
+ 	if (bus->registered) {
+ 		if (bus->started)
+ 			bus->socket_ops->stop(bus->sfp);
+@@ -438,7 +440,6 @@ static void sfp_upstream_clear(struct sfp_bus *bus)
+ {
+ 	bus->upstream_ops = NULL;
+ 	bus->upstream = NULL;
+-	bus->netdev->sfp_bus = NULL;
+ 	bus->netdev = NULL;
+ }
+ 
+@@ -467,7 +468,6 @@ struct sfp_bus *sfp_register_upstream(struct fwnode_handle *fwnode,
+ 		bus->upstream_ops = ops;
+ 		bus->upstream = upstream;
+ 		bus->netdev = ndev;
+-		ndev->sfp_bus = bus;
+ 
+ 		if (bus->sfp) {
+ 			ret = sfp_register_bus(bus);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index b070959737ff..286c947cb48d 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1172,6 +1172,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		return -EBUSY;
+ 	}
+ 
++	if (dev == port_dev) {
++		NL_SET_ERR_MSG(extack, "Cannot enslave team device to itself");
++		netdev_err(dev, "Cannot enslave team device to itself\n");
++		return -EINVAL;
++	}
++
+ 	if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ 	    vlan_uses_dev(dev)) {
+ 		NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index f5727baac84a..725dd63f8413 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -181,6 +181,7 @@ struct tun_file {
+ 	};
+ 	struct napi_struct napi;
+ 	bool napi_enabled;
++	bool napi_frags_enabled;
+ 	struct mutex napi_mutex;	/* Protects access to the above napi */
+ 	struct list_head next;
+ 	struct tun_struct *detached;
+@@ -312,32 +313,32 @@ static int tun_napi_poll(struct napi_struct *napi, int budget)
+ }
+ 
+ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile,
+-			  bool napi_en)
++			  bool napi_en, bool napi_frags)
+ {
+ 	tfile->napi_enabled = napi_en;
++	tfile->napi_frags_enabled = napi_en && napi_frags;
+ 	if (napi_en) {
+ 		netif_napi_add(tun->dev, &tfile->napi, tun_napi_poll,
+ 			       NAPI_POLL_WEIGHT);
+ 		napi_enable(&tfile->napi);
+-		mutex_init(&tfile->napi_mutex);
+ 	}
+ }
+ 
+-static void tun_napi_disable(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_disable(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		napi_disable(&tfile->napi);
+ }
+ 
+-static void tun_napi_del(struct tun_struct *tun, struct tun_file *tfile)
++static void tun_napi_del(struct tun_file *tfile)
+ {
+ 	if (tfile->napi_enabled)
+ 		netif_napi_del(&tfile->napi);
+ }
+ 
+-static bool tun_napi_frags_enabled(const struct tun_struct *tun)
++static bool tun_napi_frags_enabled(const struct tun_file *tfile)
+ {
+-	return READ_ONCE(tun->flags) & IFF_NAPI_FRAGS;
++	return tfile->napi_frags_enabled;
+ }
+ 
+ #ifdef CONFIG_TUN_VNET_CROSS_LE
+@@ -688,8 +689,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 	tun = rtnl_dereference(tfile->tun);
+ 
+ 	if (tun && clean) {
+-		tun_napi_disable(tun, tfile);
+-		tun_napi_del(tun, tfile);
++		tun_napi_disable(tfile);
++		tun_napi_del(tfile);
+ 	}
+ 
+ 	if (tun && !tfile->detached) {
+@@ -756,7 +757,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+ 		BUG_ON(!tfile);
+-		tun_napi_disable(tun, tfile);
++		tun_napi_disable(tfile);
+ 		tfile->socket.sk->sk_shutdown = RCV_SHUTDOWN;
+ 		tfile->socket.sk->sk_data_ready(tfile->socket.sk);
+ 		RCU_INIT_POINTER(tfile->tun, NULL);
+@@ -772,7 +773,7 @@ static void tun_detach_all(struct net_device *dev)
+ 	synchronize_net();
+ 	for (i = 0; i < n; i++) {
+ 		tfile = rtnl_dereference(tun->tfiles[i]);
+-		tun_napi_del(tun, tfile);
++		tun_napi_del(tfile);
+ 		/* Drop read queue */
+ 		tun_queue_purge(tfile);
+ 		xdp_rxq_info_unreg(&tfile->xdp_rxq);
+@@ -791,7 +792,7 @@ static void tun_detach_all(struct net_device *dev)
+ }
+ 
+ static int tun_attach(struct tun_struct *tun, struct file *file,
+-		      bool skip_filter, bool napi)
++		      bool skip_filter, bool napi, bool napi_frags)
+ {
+ 	struct tun_file *tfile = file->private_data;
+ 	struct net_device *dev = tun->dev;
+@@ -864,7 +865,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file,
+ 		tun_enable_queue(tfile);
+ 	} else {
+ 		sock_hold(&tfile->sk);
+-		tun_napi_init(tun, tfile, napi);
++		tun_napi_init(tun, tfile, napi, napi_frags);
+ 	}
+ 
+ 	tun_set_real_num_queues(tun);
+@@ -1174,13 +1175,11 @@ static void tun_poll_controller(struct net_device *dev)
+ 		struct tun_file *tfile;
+ 		int i;
+ 
+-		if (tun_napi_frags_enabled(tun))
+-			return;
+-
+ 		rcu_read_lock();
+ 		for (i = 0; i < tun->numqueues; i++) {
+ 			tfile = rcu_dereference(tun->tfiles[i]);
+-			if (tfile->napi_enabled)
++			if (!tun_napi_frags_enabled(tfile) &&
++			    tfile->napi_enabled)
+ 				napi_schedule(&tfile->napi);
+ 		}
+ 		rcu_read_unlock();
+@@ -1751,7 +1750,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	int err;
+ 	u32 rxhash = 0;
+ 	int skb_xdp = 1;
+-	bool frags = tun_napi_frags_enabled(tun);
++	bool frags = tun_napi_frags_enabled(tfile);
+ 
+ 	if (!(tun->dev->flags & IFF_UP))
+ 		return -EIO;
+@@ -2576,7 +2575,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			return err;
+ 
+ 		err = tun_attach(tun, file, ifr->ifr_flags & IFF_NOFILTER,
+-				 ifr->ifr_flags & IFF_NAPI);
++				 ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			return err;
+ 
+@@ -2674,7 +2674,8 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
+ 			      (ifr->ifr_flags & TUN_FEATURES);
+ 
+ 		INIT_LIST_HEAD(&tun->disabled);
+-		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI);
++		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
++				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+ 		if (err < 0)
+ 			goto err_free_flow;
+ 
+@@ -2823,7 +2824,8 @@ static int tun_set_queue(struct file *file, struct ifreq *ifr)
+ 		ret = security_tun_dev_attach_queue(tun->security);
+ 		if (ret < 0)
+ 			goto unlock;
+-		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI);
++		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI,
++				 tun->flags & IFF_NAPI_FRAGS);
+ 	} else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
+ 		tun = rtnl_dereference(tfile->tun);
+ 		if (!tun || !(tun->flags & IFF_MULTI_QUEUE) || tfile->detached)
+@@ -3241,6 +3243,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
+ 		return -ENOMEM;
+ 	}
+ 
++	mutex_init(&tfile->napi_mutex);
+ 	RCU_INIT_POINTER(tfile->tun, NULL);
+ 	tfile->flags = 0;
+ 	tfile->ifindex = 0;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 1e95d37c6e27..1bb01a9e5f92 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1234,6 +1234,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)},	/* Olivetti Olicard 500 */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0060, 4)},	/* Cinterion PLxx */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0053, 4)},	/* Cinterion PHxx,PXxx */
++	{QMI_FIXED_INTF(0x1e2d, 0x0063, 10)},	/* Cinterion ALASxx (1 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 4)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0082, 5)},	/* Cinterion PHxx,PXxx (2 RmNet) */
+ 	{QMI_FIXED_INTF(0x1e2d, 0x0083, 4)},	/* Cinterion PHxx,PXxx (1 RmNet + USB Audio)*/
+diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
+index 05553d252446..b64b1ee56d2d 100644
+--- a/drivers/net/usb/smsc75xx.c
++++ b/drivers/net/usb/smsc75xx.c
+@@ -1517,6 +1517,7 @@ static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct smsc75xx_priv *pdata = (struct smsc75xx_priv *)(dev->data[0]);
+ 	if (pdata) {
++		cancel_work_sync(&pdata->set_multicast);
+ 		netif_dbg(dev, ifdown, dev->net, "free pdata\n");
+ 		kfree(pdata);
+ 		pdata = NULL;
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index e857cb3335f6..93a6c43a2354 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -3537,6 +3537,7 @@ static size_t vxlan_get_size(const struct net_device *dev)
+ 		nla_total_size(sizeof(__u32)) +	/* IFLA_VXLAN_LINK */
+ 		nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_LOCAL{6} */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL */
++		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TTL_INHERIT */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_TOS */
+ 		nla_total_size(sizeof(__be32)) + /* IFLA_VXLAN_LABEL */
+ 		nla_total_size(sizeof(__u8)) +	/* IFLA_VXLAN_LEARNING */
+@@ -3601,6 +3602,8 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ 	}
+ 
+ 	if (nla_put_u8(skb, IFLA_VXLAN_TTL, vxlan->cfg.ttl) ||
++	    nla_put_u8(skb, IFLA_VXLAN_TTL_INHERIT,
++		       !!(vxlan->cfg.flags & VXLAN_F_TTL_INHERIT)) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_TOS, vxlan->cfg.tos) ||
+ 	    nla_put_be32(skb, IFLA_VXLAN_LABEL, vxlan->cfg.label) ||
+ 	    nla_put_u8(skb, IFLA_VXLAN_LEARNING,
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index d4d4a55f09f8..c6f375e9cce7 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -89,6 +89,9 @@ static enum pci_protocol_version_t pci_protocol_version;
+ 
+ #define STATUS_REVISION_MISMATCH 0xC0000059
+ 
++/* space for 32bit serial number as string */
++#define SLOT_NAME_SIZE 11
++
+ /*
+  * Message Types
+  */
+@@ -494,6 +497,7 @@ struct hv_pci_dev {
+ 	struct list_head list_entry;
+ 	refcount_t refs;
+ 	enum hv_pcichild_state state;
++	struct pci_slot *pci_slot;
+ 	struct pci_function_description desc;
+ 	bool reported_missing;
+ 	struct hv_pcibus_device *hbus;
+@@ -1457,6 +1461,34 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
+ 	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+ }
+ 
++/*
++ * Assign entries in sysfs pci slot directory.
++ *
++ * Note that this function does not need to lock the children list
++ * because it is called from pci_devices_present_work which
++ * is serialized with hv_eject_device_work because they are on the
++ * same ordered workqueue. Therefore hbus->children list will not change
++ * even when pci_create_slot sleeps.
++ */
++static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
++{
++	struct hv_pci_dev *hpdev;
++	char name[SLOT_NAME_SIZE];
++	int slot_nr;
++
++	list_for_each_entry(hpdev, &hbus->children, list_entry) {
++		if (hpdev->pci_slot)
++			continue;
++
++		slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));
++		snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);
++		hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,
++					  name, NULL);
++		if (!hpdev->pci_slot)
++			pr_warn("pci_create slot %s failed\n", name);
++	}
++}
++
+ /**
+  * create_root_hv_pci_bus() - Expose a new root PCI bus
+  * @hbus:	Root PCI bus, as understood by this driver
+@@ -1480,6 +1512,7 @@ static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus)
+ 	pci_lock_rescan_remove();
+ 	pci_scan_child_bus(hbus->pci_bus);
+ 	pci_bus_assign_resources(hbus->pci_bus);
++	hv_pci_assign_slots(hbus);
+ 	pci_bus_add_devices(hbus->pci_bus);
+ 	pci_unlock_rescan_remove();
+ 	hbus->state = hv_pcibus_installed;
+@@ -1742,6 +1775,7 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		 */
+ 		pci_lock_rescan_remove();
+ 		pci_scan_child_bus(hbus->pci_bus);
++		hv_pci_assign_slots(hbus);
+ 		pci_unlock_rescan_remove();
+ 		break;
+ 
+@@ -1858,6 +1892,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 	list_del(&hpdev->list_entry);
+ 	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
+ 
++	if (hpdev->pci_slot)
++		pci_destroy_slot(hpdev->pci_slot);
++
+ 	memset(&ctxt, 0, sizeof(ctxt));
+ 	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+ 	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index a6347d487635..1321104b9b9f 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -474,7 +474,13 @@ static int armpmu_filter_match(struct perf_event *event)
+ {
+ 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
+ 	unsigned int cpu = smp_processor_id();
+-	return cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	int ret;
++
++	ret = cpumask_test_cpu(cpu, &armpmu->supported_cpus);
++	if (ret && armpmu->filter_match)
++		return armpmu->filter_match(event);
++
++	return ret;
+ }
+ 
+ static ssize_t armpmu_cpumask_show(struct device *dev,
+diff --git a/drivers/pinctrl/intel/pinctrl-cannonlake.c b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+index 6243e7d95e7e..d36afb17f5e4 100644
+--- a/drivers/pinctrl/intel/pinctrl-cannonlake.c
++++ b/drivers/pinctrl/intel/pinctrl-cannonlake.c
+@@ -382,7 +382,7 @@ static const struct intel_padgroup cnlh_community1_gpps[] = {
+ static const struct intel_padgroup cnlh_community3_gpps[] = {
+ 	CNL_GPP(0, 155, 178, 192),		/* GPP_K */
+ 	CNL_GPP(1, 179, 202, 224),		/* GPP_H */
+-	CNL_GPP(2, 203, 215, 258),		/* GPP_E */
++	CNL_GPP(2, 203, 215, 256),		/* GPP_E */
+ 	CNL_GPP(3, 216, 239, 288),		/* GPP_F */
+ 	CNL_GPP(4, 240, 248, CNL_NO_GPIO),	/* SPI */
+ };
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 022307dd4b54..bef6ff2e8f4f 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -636,6 +636,14 @@ static int mcp23s08_irq_setup(struct mcp23s08 *mcp)
+ 		return err;
+ 	}
+ 
++	return 0;
++}
++
++static int mcp23s08_irqchip_setup(struct mcp23s08 *mcp)
++{
++	struct gpio_chip *chip = &mcp->chip;
++	int err;
++
+ 	err =  gpiochip_irqchip_add_nested(chip,
+ 					   &mcp23s08_irq_chip,
+ 					   0,
+@@ -912,7 +920,7 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 	}
+ 
+ 	if (mcp->irq && mcp->irq_controller) {
+-		ret = mcp23s08_irq_setup(mcp);
++		ret = mcp23s08_irqchip_setup(mcp);
+ 		if (ret)
+ 			goto fail;
+ 	}
+@@ -944,6 +952,9 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ 		goto fail;
+ 	}
+ 
++	if (mcp->irq)
++		ret = mcp23s08_irq_setup(mcp);
++
+ fail:
+ 	if (ret < 0)
+ 		dev_dbg(dev, "can't setup chip %d, --> %d\n", addr, ret);
+diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c
+index dbe7c7ac9ac8..fd77e46eb3b2 100644
+--- a/drivers/s390/cio/vfio_ccw_cp.c
++++ b/drivers/s390/cio/vfio_ccw_cp.c
+@@ -163,7 +163,7 @@ static bool pfn_array_table_iova_pinned(struct pfn_array_table *pat,
+ 
+ 	for (i = 0; i < pat->pat_nr; i++, pa++)
+ 		for (j = 0; j < pa->pa_nr; j++)
+-			if (pa->pa_iova_pfn[i] == iova_pfn)
++			if (pa->pa_iova_pfn[j] == iova_pfn)
+ 				return true;
+ 
+ 	return false;
+diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
+index fecf96f0225c..199d3ba1916d 100644
+--- a/drivers/scsi/qla2xxx/qla_target.h
++++ b/drivers/scsi/qla2xxx/qla_target.h
+@@ -374,8 +374,8 @@ struct atio_from_isp {
+ static inline int fcpcmd_is_corrupted(struct atio *atio)
+ {
+ 	if (atio->entry_type == ATIO_TYPE7 &&
+-	    (le16_to_cpu(atio->attr_n_length & FCP_CMD_LENGTH_MASK) <
+-	    FCP_CMD_LENGTH_MIN))
++	    ((le16_to_cpu(atio->attr_n_length) & FCP_CMD_LENGTH_MASK) <
++	     FCP_CMD_LENGTH_MIN))
+ 		return 1;
+ 	else
+ 		return 0;
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index a4ecc9d77624..8e1c3cff567a 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -1419,7 +1419,8 @@ static void iscsit_do_crypto_hash_buf(struct ahash_request *hash,
+ 
+ 	sg_init_table(sg, ARRAY_SIZE(sg));
+ 	sg_set_buf(sg, buf, payload_length);
+-	sg_set_buf(sg + 1, pad_bytes, padding);
++	if (padding)
++		sg_set_buf(sg + 1, pad_bytes, padding);
+ 
+ 	ahash_request_set_crypt(hash, sg, data_crc, payload_length + padding);
+ 
+@@ -3913,10 +3914,14 @@ static bool iscsi_target_check_conn_state(struct iscsi_conn *conn)
+ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ {
+ 	int ret;
+-	u8 buffer[ISCSI_HDR_LEN], opcode;
++	u8 *buffer, opcode;
+ 	u32 checksum = 0, digest = 0;
+ 	struct kvec iov;
+ 
++	buffer = kcalloc(ISCSI_HDR_LEN, sizeof(*buffer), GFP_KERNEL);
++	if (!buffer)
++		return;
++
+ 	while (!kthread_should_stop()) {
+ 		/*
+ 		 * Ensure that both TX and RX per connection kthreads
+@@ -3924,7 +3929,6 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		 */
+ 		iscsit_thread_check_cpumask(conn, current, 0);
+ 
+-		memset(buffer, 0, ISCSI_HDR_LEN);
+ 		memset(&iov, 0, sizeof(struct kvec));
+ 
+ 		iov.iov_base	= buffer;
+@@ -3933,7 +3937,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN);
+ 		if (ret != ISCSI_HDR_LEN) {
+ 			iscsit_rx_thread_wait_for_tcp(conn);
+-			return;
++			break;
+ 		}
+ 
+ 		if (conn->conn_ops->HeaderDigest) {
+@@ -3943,7 +3947,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			ret = rx_data(conn, &iov, 1, ISCSI_CRC_LEN);
+ 			if (ret != ISCSI_CRC_LEN) {
+ 				iscsit_rx_thread_wait_for_tcp(conn);
+-				return;
++				break;
+ 			}
+ 
+ 			iscsit_do_crypto_hash_buf(conn->conn_rx_hash, buffer,
+@@ -3967,7 +3971,7 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 		}
+ 
+ 		if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT)
+-			return;
++			break;
+ 
+ 		opcode = buffer[0] & ISCSI_OPCODE_MASK;
+ 
+@@ -3978,13 +3982,15 @@ static void iscsit_get_rx_pdu(struct iscsi_conn *conn)
+ 			" while in Discovery Session, rejecting.\n", opcode);
+ 			iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR,
+ 					  buffer);
+-			return;
++			break;
+ 		}
+ 
+ 		ret = iscsi_target_rx_opcode(conn, buffer);
+ 		if (ret < 0)
+-			return;
++			break;
+ 	}
++
++	kfree(buffer);
+ }
+ 
+ int iscsi_target_rx_thread(void *arg)
+diff --git a/drivers/video/fbdev/aty/atyfb.h b/drivers/video/fbdev/aty/atyfb.h
+index 8235b285dbb2..d09bab3bf224 100644
+--- a/drivers/video/fbdev/aty/atyfb.h
++++ b/drivers/video/fbdev/aty/atyfb.h
+@@ -333,6 +333,8 @@ extern const struct aty_pll_ops aty_pll_ct; /* Integrated */
+ extern void aty_set_pll_ct(const struct fb_info *info, const union aty_pll *pll);
+ extern u8 aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
++extern const u8 aty_postdividers[8];
++
+ 
+     /*
+      *  Hardware cursor support
+@@ -359,7 +361,6 @@ static inline void wait_for_idle(struct atyfb_par *par)
+ 
+ extern void aty_reset_engine(const struct atyfb_par *par);
+ extern void aty_init_engine(struct atyfb_par *par, struct fb_info *info);
+-extern u8   aty_ld_pll_ct(int offset, const struct atyfb_par *par);
+ 
+ void atyfb_copyarea(struct fb_info *info, const struct fb_copyarea *area);
+ void atyfb_fillrect(struct fb_info *info, const struct fb_fillrect *rect);
+diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
+index a9a8272f7a6e..05111e90f168 100644
+--- a/drivers/video/fbdev/aty/atyfb_base.c
++++ b/drivers/video/fbdev/aty/atyfb_base.c
+@@ -3087,17 +3087,18 @@ static int atyfb_setup_sparc(struct pci_dev *pdev, struct fb_info *info,
+ 		/*
+ 		 * PLL Reference Divider M:
+ 		 */
+-		M = pll_regs[2];
++		M = pll_regs[PLL_REF_DIV];
+ 
+ 		/*
+ 		 * PLL Feedback Divider N (Dependent on CLOCK_CNTL):
+ 		 */
+-		N = pll_regs[7 + (clock_cntl & 3)];
++		N = pll_regs[VCLK0_FB_DIV + (clock_cntl & 3)];
+ 
+ 		/*
+ 		 * PLL Post Divider P (Dependent on CLOCK_CNTL):
+ 		 */
+-		P = 1 << (pll_regs[6] >> ((clock_cntl & 3) << 1));
++		P = aty_postdividers[((pll_regs[VCLK_POST_DIV] >> ((clock_cntl & 3) << 1)) & 3) |
++		                     ((pll_regs[PLL_EXT_CNTL] >> (2 + (clock_cntl & 3))) & 4)];
+ 
+ 		/*
+ 		 * PLL Divider Q:
+diff --git a/drivers/video/fbdev/aty/mach64_ct.c b/drivers/video/fbdev/aty/mach64_ct.c
+index 74a62aa193c0..f87cc81f4fa2 100644
+--- a/drivers/video/fbdev/aty/mach64_ct.c
++++ b/drivers/video/fbdev/aty/mach64_ct.c
+@@ -115,7 +115,7 @@ static void aty_st_pll_ct(int offset, u8 val, const struct atyfb_par *par)
+  */
+ 
+ #define Maximum_DSP_PRECISION 7
+-static u8 postdividers[] = {1,2,4,8,3};
++const u8 aty_postdividers[8] = {1,2,4,8,3,5,6,12};
+ 
+ static int aty_dsp_gt(const struct fb_info *info, u32 bpp, struct pll_ct *pll)
+ {
+@@ -222,7 +222,7 @@ static int aty_valid_pll_ct(const struct fb_info *info, u32 vclk_per, struct pll
+ 		pll->vclk_post_div += (q <  64*8);
+ 		pll->vclk_post_div += (q <  32*8);
+ 	}
+-	pll->vclk_post_div_real = postdividers[pll->vclk_post_div];
++	pll->vclk_post_div_real = aty_postdividers[pll->vclk_post_div];
+ 	//    pll->vclk_post_div <<= 6;
+ 	pll->vclk_fb_div = q * pll->vclk_post_div_real / 8;
+ 	pllvclk = (1000000 * 2 * pll->vclk_fb_div) /
+@@ -513,7 +513,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		u8 mclk_fb_div, pll_ext_cntl;
+ 		pll->ct.pll_ref_div = aty_ld_pll_ct(PLL_REF_DIV, par);
+ 		pll_ext_cntl = aty_ld_pll_ct(PLL_EXT_CNTL, par);
+-		pll->ct.xclk_post_div_real = postdividers[pll_ext_cntl & 0x07];
++		pll->ct.xclk_post_div_real = aty_postdividers[pll_ext_cntl & 0x07];
+ 		mclk_fb_div = aty_ld_pll_ct(MCLK_FB_DIV, par);
+ 		if (pll_ext_cntl & PLL_MFB_TIMES_4_2B)
+ 			mclk_fb_div <<= 1;
+@@ -535,7 +535,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 		xpost_div += (q <  64*8);
+ 		xpost_div += (q <  32*8);
+ 	}
+-	pll->ct.xclk_post_div_real = postdividers[xpost_div];
++	pll->ct.xclk_post_div_real = aty_postdividers[xpost_div];
+ 	pll->ct.mclk_fb_div = q * pll->ct.xclk_post_div_real / 8;
+ 
+ #ifdef CONFIG_PPC
+@@ -584,7 +584,7 @@ static int aty_init_pll_ct(const struct fb_info *info, union aty_pll *pll)
+ 			mpost_div += (q <  64*8);
+ 			mpost_div += (q <  32*8);
+ 		}
+-		sclk_post_div_real = postdividers[mpost_div];
++		sclk_post_div_real = aty_postdividers[mpost_div];
+ 		pll->ct.sclk_fb_div = q * sclk_post_div_real / 8;
+ 		pll->ct.spll_cntl2 = mpost_div << 4;
+ #ifdef DEBUG
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index a1b18082991b..b6735ae3334e 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -690,8 +690,6 @@ static void afs_process_async_call(struct work_struct *work)
+ 	}
+ 
+ 	if (call->state == AFS_CALL_COMPLETE) {
+-		call->reply[0] = NULL;
+-
+ 		/* We have two refs to release - one from the alloc and one
+ 		 * queued with the work item - and we can't just deallocate the
+ 		 * call because the work item may be queued again.
+diff --git a/fs/dax.c b/fs/dax.c
+index 94f9fe002b12..0d3f640653c0 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -558,6 +558,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 	while (index < end && pagevec_lookup_entries(&pvec, mapping, index,
+ 				min(end - index, (pgoff_t)PAGEVEC_SIZE),
+ 				indices)) {
++		pgoff_t nr_pages = 1;
++
+ 		for (i = 0; i < pagevec_count(&pvec); i++) {
+ 			struct page *pvec_ent = pvec.pages[i];
+ 			void *entry;
+@@ -571,8 +573,15 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 
+ 			xa_lock_irq(&mapping->i_pages);
+ 			entry = get_unlocked_mapping_entry(mapping, index, NULL);
+-			if (entry)
++			if (entry) {
+ 				page = dax_busy_page(entry);
++				/*
++				 * Account for multi-order entries at
++				 * the end of the pagevec.
++				 */
++				if (i + 1 >= pagevec_count(&pvec))
++					nr_pages = 1UL << dax_radix_order(entry);
++			}
+ 			put_unlocked_mapping_entry(mapping, index, entry);
+ 			xa_unlock_irq(&mapping->i_pages);
+ 			if (page)
+@@ -580,7 +589,7 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
+ 		}
+ 		pagevec_remove_exceptionals(&pvec);
+ 		pagevec_release(&pvec);
+-		index++;
++		index += nr_pages;
+ 
+ 		if (page)
+ 			break;
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index c0e68f903011..04da6a7c9d2d 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -412,6 +412,7 @@ struct cgroup {
+ 	 * specific task are charged to the dom_cgrp.
+ 	 */
+ 	struct cgroup *dom_cgrp;
++	struct cgroup *old_dom_cgrp;		/* used while enabling threaded */
+ 
+ 	/* per-cpu recursive resource statistics */
+ 	struct cgroup_rstat_cpu __percpu *rstat_cpu;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 3d0cc0b5cec2..3045a5cee0d8 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2420,6 +2420,13 @@ struct netdev_notifier_info {
+ 	struct netlink_ext_ack	*extack;
+ };
+ 
++struct netdev_notifier_info_ext {
++	struct netdev_notifier_info info; /* must be first */
++	union {
++		u32 mtu;
++	} ext;
++};
++
+ struct netdev_notifier_change_info {
+ 	struct netdev_notifier_info info; /* must be first */
+ 	unsigned int flags_changed;
+diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
+index ad5444491975..a2f6e178a2d7 100644
+--- a/include/linux/perf/arm_pmu.h
++++ b/include/linux/perf/arm_pmu.h
+@@ -93,6 +93,7 @@ struct arm_pmu {
+ 	void		(*stop)(struct arm_pmu *);
+ 	void		(*reset)(void *);
+ 	int		(*map_event)(struct perf_event *event);
++	int		(*filter_match)(struct perf_event *event);
+ 	int		num_events;
+ 	u64		max_period;
+ 	bool		secure_access; /* 32-bit ARM only */
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 32feac5bbd75..f62e7721cd71 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -30,6 +30,7 @@
+ 
+ #define MTL_MAX_RX_QUEUES	8
+ #define MTL_MAX_TX_QUEUES	8
++#define STMMAC_CH_MAX		8
+ 
+ #define STMMAC_RX_COE_NONE	0
+ #define STMMAC_RX_COE_TYPE1	1
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 9397628a1967..cb462f9ab7dd 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -5,6 +5,24 @@
+ #include <linux/if_vlan.h>
+ #include <uapi/linux/virtio_net.h>
+ 
++static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
++					   const struct virtio_net_hdr *hdr)
++{
++	switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
++	case VIRTIO_NET_HDR_GSO_TCPV4:
++	case VIRTIO_NET_HDR_GSO_UDP:
++		skb->protocol = cpu_to_be16(ETH_P_IP);
++		break;
++	case VIRTIO_NET_HDR_GSO_TCPV6:
++		skb->protocol = cpu_to_be16(ETH_P_IPV6);
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 					const struct virtio_net_hdr *hdr,
+ 					bool little_endian)
+diff --git a/include/net/bonding.h b/include/net/bonding.h
+index 808f1d167349..a4f116f06c50 100644
+--- a/include/net/bonding.h
++++ b/include/net/bonding.h
+@@ -139,12 +139,6 @@ struct bond_parm_tbl {
+ 	int mode;
+ };
+ 
+-struct netdev_notify_work {
+-	struct delayed_work	work;
+-	struct net_device	*dev;
+-	struct netdev_bonding_info bonding_info;
+-};
+-
+ struct slave {
+ 	struct net_device *dev; /* first - useful for panic debug */
+ 	struct bonding *bond; /* our master */
+@@ -172,6 +166,7 @@ struct slave {
+ #ifdef CONFIG_NET_POLL_CONTROLLER
+ 	struct netpoll *np;
+ #endif
++	struct delayed_work notify_work;
+ 	struct kobject kobj;
+ 	struct rtnl_link_stats64 slave_stats;
+ };
+diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
+index 83d5b3c2ac42..7dba2d116e8c 100644
+--- a/include/net/inet_sock.h
++++ b/include/net/inet_sock.h
+@@ -130,12 +130,6 @@ static inline int inet_request_bound_dev_if(const struct sock *sk,
+ 	return sk->sk_bound_dev_if;
+ }
+ 
+-static inline struct ip_options_rcu *ireq_opt_deref(const struct inet_request_sock *ireq)
+-{
+-	return rcu_dereference_check(ireq->ireq_opt,
+-				     refcount_read(&ireq->req.rsk_refcnt) > 0);
+-}
+-
+ struct inet_cork {
+ 	unsigned int		flags;
+ 	__be32			addr;
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 69c91d1934c1..c9b7b136939d 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -394,6 +394,7 @@ int ip_fib_check_default(__be32 gw, struct net_device *dev);
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
+ int fib_sync_down_addr(struct net_device *dev, __be32 local);
+ int fib_sync_up(struct net_device *dev, unsigned int nh_flags);
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu);
+ 
+ #ifdef CONFIG_IP_ROUTE_MULTIPATH
+ int fib_multipath_hash(const struct net *net, const struct flowi4 *fl4,
+diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
+index c052afc27547..138e976a2ba2 100644
+--- a/include/sound/hdaudio.h
++++ b/include/sound/hdaudio.h
+@@ -355,6 +355,7 @@ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_stop_cmd_io(struct hdac_bus *bus);
+ void snd_hdac_bus_enter_link_reset(struct hdac_bus *bus);
+ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus);
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset);
+ 
+ void snd_hdac_bus_update_rirb(struct hdac_bus *bus);
+ int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
+index a6ce2de4e20a..be3bee1cf91f 100644
+--- a/include/sound/soc-dapm.h
++++ b/include/sound/soc-dapm.h
+@@ -410,6 +410,7 @@ int snd_soc_dapm_new_dai_widgets(struct snd_soc_dapm_context *dapm,
+ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card);
+ void snd_soc_dapm_connect_dai_link_widgets(struct snd_soc_card *card);
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 2590700237c1..138f0302692e 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -1844,7 +1844,7 @@ static int btf_check_all_metas(struct btf_verifier_env *env)
+ 
+ 	hdr = &btf->hdr;
+ 	cur = btf->nohdr_data + hdr->type_off;
+-	end = btf->nohdr_data + hdr->type_len;
++	end = cur + hdr->type_len;
+ 
+ 	env->log_type_id = 1;
+ 	while (cur < end) {
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 077370bf8964..6e052c899cab 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2833,11 +2833,12 @@ restart:
+ }
+ 
+ /**
+- * cgroup_save_control - save control masks of a subtree
++ * cgroup_save_control - save control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Save ->subtree_control and ->subtree_ss_mask to the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Save ->subtree_control, ->subtree_ss_mask and ->dom_cgrp to the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_save_control(struct cgroup *cgrp)
+ {
+@@ -2847,6 +2848,7 @@ static void cgroup_save_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp) {
+ 		dsct->old_subtree_control = dsct->subtree_control;
+ 		dsct->old_subtree_ss_mask = dsct->subtree_ss_mask;
++		dsct->old_dom_cgrp = dsct->dom_cgrp;
+ 	}
+ }
+ 
+@@ -2872,11 +2874,12 @@ static void cgroup_propagate_control(struct cgroup *cgrp)
+ }
+ 
+ /**
+- * cgroup_restore_control - restore control masks of a subtree
++ * cgroup_restore_control - restore control masks and dom_cgrp of a subtree
+  * @cgrp: root of the target subtree
+  *
+- * Restore ->subtree_control and ->subtree_ss_mask from the respective old_
+- * prefixed fields for @cgrp's subtree including @cgrp itself.
++ * Restore ->subtree_control, ->subtree_ss_mask and ->dom_cgrp from the
++ * respective old_ prefixed fields for @cgrp's subtree including @cgrp
++ * itself.
+  */
+ static void cgroup_restore_control(struct cgroup *cgrp)
+ {
+@@ -2886,6 +2889,7 @@ static void cgroup_restore_control(struct cgroup *cgrp)
+ 	cgroup_for_each_live_descendant_post(dsct, d_css, cgrp) {
+ 		dsct->subtree_control = dsct->old_subtree_control;
+ 		dsct->subtree_ss_mask = dsct->old_subtree_ss_mask;
++		dsct->dom_cgrp = dsct->old_dom_cgrp;
+ 	}
+ }
+ 
+@@ -3193,6 +3197,8 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ {
+ 	struct cgroup *parent = cgroup_parent(cgrp);
+ 	struct cgroup *dom_cgrp = parent->dom_cgrp;
++	struct cgroup *dsct;
++	struct cgroup_subsys_state *d_css;
+ 	int ret;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -3222,12 +3228,13 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ 	 */
+ 	cgroup_save_control(cgrp);
+ 
+-	cgrp->dom_cgrp = dom_cgrp;
++	cgroup_for_each_live_descendant_pre(dsct, d_css, cgrp)
++		if (dsct == cgrp || cgroup_is_threaded(dsct))
++			dsct->dom_cgrp = dom_cgrp;
++
+ 	ret = cgroup_apply_control(cgrp);
+ 	if (!ret)
+ 		parent->nr_threaded_children++;
+-	else
+-		cgrp->dom_cgrp = cgrp;
+ 
+ 	cgroup_finalize_control(cgrp, ret);
+ 	return ret;
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index cda186230287..8e58928e8227 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2769,7 +2769,7 @@ int bstr_printf(char *buf, size_t size, const char *fmt, const u32 *bin_buf)
+ 						copy = end - str;
+ 					memcpy(str, args, copy);
+ 					str += len;
+-					args += len;
++					args += len + 1;
+ 				}
+ 			}
+ 			if (process)
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 571875b37453..f7274e0c8bdc 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2883,9 +2883,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	if (!(pvmw->pmd && !pvmw->pte))
+ 		return;
+ 
+-	mmu_notifier_invalidate_range_start(mm, address,
+-			address + HPAGE_PMD_SIZE);
+-
+ 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
+ 	pmdval = *pvmw->pmd;
+ 	pmdp_invalidate(vma, address, pvmw->pmd);
+@@ -2898,9 +2895,6 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
+ 	set_pmd_at(mm, address, pvmw->pmd, pmdswp);
+ 	page_remove_rmap(page, true);
+ 	put_page(page);
+-
+-	mmu_notifier_invalidate_range_end(mm, address,
+-			address + HPAGE_PMD_SIZE);
+ }
+ 
+ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 17bbf4d3e24f..080c6b9b1d65 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1410,7 +1410,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ 	if (flags & MAP_FIXED_NOREPLACE) {
+ 		struct vm_area_struct *vma = find_vma(mm, addr);
+ 
+-		if (vma && vma->vm_start <= addr)
++		if (vma && vma->vm_start < addr + len)
+ 			return -EEXIST;
+ 	}
+ 
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 0b6480979ac7..074732f3c209 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1204,6 +1204,7 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk)
+ {
+ 	if (!chunk)
+ 		return;
++	pcpu_mem_free(chunk->md_blocks);
+ 	pcpu_mem_free(chunk->bound_map);
+ 	pcpu_mem_free(chunk->alloc_map);
+ 	pcpu_mem_free(chunk);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 03822f86f288..fc0436407471 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -386,6 +386,17 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ 	delta = freeable >> priority;
+ 	delta *= 4;
+ 	do_div(delta, shrinker->seeks);
++
++	/*
++	 * Make sure we apply some minimal pressure on default priority
++	 * even on small cgroups. Stale objects are not only consuming memory
++	 * by themselves, but can also hold a reference to a dying cgroup,
++	 * preventing it from being reclaimed. A dying cgroup with all
++	 * corresponding structures like per-cpu stats and kmem caches
++	 * can be really big, so it may lead to a significant waste of memory.
++	 */
++	delta = max_t(unsigned long long, delta, min(freeable, batch_size));
++
+ 	total_scan += delta;
+ 	if (total_scan < 0) {
+ 		pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 55a5bb1d773d..7878da76abf2 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1286,7 +1286,6 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 	"vmacache_find_calls",
+ 	"vmacache_find_hits",
+-	"vmacache_full_flushes",
+ #endif
+ #ifdef CONFIG_SWAP
+ 	"swap_ra",
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index ae91e2d40056..3a7b0773536b 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -83,6 +83,7 @@ enum {
+ 
+ struct smp_dev {
+ 	/* Secure Connections OOB data */
++	bool			local_oob;
+ 	u8			local_pk[64];
+ 	u8			local_rand[16];
+ 	bool			debug_key;
+@@ -599,6 +600,8 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
+ 
+ 	memcpy(rand, smp->local_rand, 16);
+ 
++	smp->local_oob = true;
++
+ 	return 0;
+ }
+ 
+@@ -1785,7 +1788,7 @@ static u8 smp_cmd_pairing_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (req->oob_flag == SMP_OOB_PRESENT)
++	if (req->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	/* SMP over BR/EDR requires special treatment */
+@@ -1967,7 +1970,7 @@ static u8 smp_cmd_pairing_rsp(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * successfully received our local OOB data - therefore set the
+ 	 * flag to indicate that local OOB is in use.
+ 	 */
+-	if (rsp->oob_flag == SMP_OOB_PRESENT)
++	if (rsp->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob)
+ 		set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags);
+ 
+ 	smp->prsp[0] = SMP_CMD_PAIRING_RSP;
+@@ -2697,7 +2700,13 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ 	 * key was set/generated.
+ 	 */
+ 	if (test_bit(SMP_FLAG_LOCAL_OOB, &smp->flags)) {
+-		struct smp_dev *smp_dev = chan->data;
++		struct l2cap_chan *hchan = hdev->smp_data;
++		struct smp_dev *smp_dev;
++
++		if (!hchan || !hchan->data)
++			return SMP_UNSPECIFIED;
++
++		smp_dev = hchan->data;
+ 
+ 		tfm_ecdh = smp_dev->tfm_ecdh;
+ 	} else {
+@@ -3230,6 +3239,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
+ 		return ERR_CAST(tfm_ecdh);
+ 	}
+ 
++	smp->local_oob = false;
+ 	smp->tfm_aes = tfm_aes;
+ 	smp->tfm_cmac = tfm_cmac;
+ 	smp->tfm_ecdh = tfm_ecdh;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 559a91271f82..bf669e77f9f3 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1754,6 +1754,28 @@ int call_netdevice_notifiers(unsigned long val, struct net_device *dev)
+ }
+ EXPORT_SYMBOL(call_netdevice_notifiers);
+ 
++/**
++ *	call_netdevice_notifiers_mtu - call all network notifier blocks
++ *	@val: value passed unmodified to notifier function
++ *	@dev: net_device pointer passed unmodified to notifier function
++ *	@arg: additional u32 argument passed to the notifier function
++ *
++ *	Call all network notifier blocks.  Parameters and return value
++ *	are as for raw_notifier_call_chain().
++ */
++static int call_netdevice_notifiers_mtu(unsigned long val,
++					struct net_device *dev, u32 arg)
++{
++	struct netdev_notifier_info_ext info = {
++		.info.dev = dev,
++		.ext.mtu = arg,
++	};
++
++	BUILD_BUG_ON(offsetof(struct netdev_notifier_info_ext, info) != 0);
++
++	return call_netdevice_notifiers_info(val, &info.info);
++}
++
+ #ifdef CONFIG_NET_INGRESS
+ static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);
+ 
+@@ -7118,14 +7140,16 @@ int dev_set_mtu(struct net_device *dev, int new_mtu)
+ 	err = __dev_set_mtu(dev, new_mtu);
+ 
+ 	if (!err) {
+-		err = call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++		err = call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						   orig_mtu);
+ 		err = notifier_to_errno(err);
+ 		if (err) {
+ 			/* setting mtu back and notifying everyone again,
+ 			 * so that they have a chance to revert changes.
+ 			 */
+ 			__dev_set_mtu(dev, orig_mtu);
+-			call_netdevice_notifiers(NETDEV_CHANGEMTU, dev);
++			call_netdevice_notifiers_mtu(NETDEV_CHANGEMTU, dev,
++						     new_mtu);
+ 		}
+ 	}
+ 	return err;
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index e677a20180cf..6c04f1bf377d 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -2623,6 +2623,7 @@ int dev_ethtool(struct net *net, struct ifreq *ifr)
+ 	case ETHTOOL_GPHYSTATS:
+ 	case ETHTOOL_GTSO:
+ 	case ETHTOOL_GPERMADDR:
++	case ETHTOOL_GUFO:
+ 	case ETHTOOL_GGSO:
+ 	case ETHTOOL_GGRO:
+ 	case ETHTOOL_GFLAGS:
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 963ee2e88861..0b2bd7d3220f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2334,7 +2334,8 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+-	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
++	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC | __GFP_COMP,
++			   get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index bafaa033826f..18de39dbdc30 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1848,10 +1848,8 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ 		if (tb[IFLA_IF_NETNSID]) {
+ 			netnsid = nla_get_s32(tb[IFLA_IF_NETNSID]);
+ 			tgt_net = get_target_net(skb->sk, netnsid);
+-			if (IS_ERR(tgt_net)) {
+-				tgt_net = net;
+-				netnsid = -1;
+-			}
++			if (IS_ERR(tgt_net))
++				return PTR_ERR(tgt_net);
+ 		}
+ 
+ 		if (tb[IFLA_EXT_MASK])
+@@ -2787,6 +2785,12 @@ struct net_device *rtnl_create_link(struct net *net,
+ 	else if (ops->get_num_rx_queues)
+ 		num_rx_queues = ops->get_num_rx_queues();
+ 
++	if (num_tx_queues < 1 || num_tx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
++	if (num_rx_queues < 1 || num_rx_queues > 4096)
++		return ERR_PTR(-EINVAL);
++
+ 	dev = alloc_netdev_mqs(ops->priv_size, ifname, name_assign_type,
+ 			       ops->setup, num_tx_queues, num_rx_queues);
+ 	if (!dev)
+@@ -3694,16 +3698,27 @@ static int rtnl_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 	int err = 0;
+ 	int fidx = 0;
+ 
+-	err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
+-			  IFLA_MAX, ifla_policy, NULL);
+-	if (err < 0) {
+-		return -EINVAL;
+-	} else if (err == 0) {
+-		if (tb[IFLA_MASTER])
+-			br_idx = nla_get_u32(tb[IFLA_MASTER]);
+-	}
++	/* A hack to preserve kernel<->userspace interface.
++	 * Before Linux v4.12 this code accepted ndmsg since iproute2 v3.3.0.
++	 * However, ndmsg is shorter than ifinfomsg thus nlmsg_parse() bails.
++	 * So, check for ndmsg with an optional u32 attribute (not used here).
++	 * Fortunately these sizes don't conflict with the size of ifinfomsg
++	 * with an optional attribute.
++	 */
++	if (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) &&
++	    (nlmsg_len(cb->nlh) != sizeof(struct ndmsg) +
++	     nla_attr_size(sizeof(u32)))) {
++		err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
++				  IFLA_MAX, ifla_policy, NULL);
++		if (err < 0) {
++			return -EINVAL;
++		} else if (err == 0) {
++			if (tb[IFLA_MASTER])
++				br_idx = nla_get_u32(tb[IFLA_MASTER]);
++		}
+ 
+-	brport_idx = ifm->ifi_index;
++		brport_idx = ifm->ifi_index;
++	}
+ 
+ 	if (br_idx) {
+ 		br_dev = __dev_get_by_index(net, br_idx);
+diff --git a/net/dccp/input.c b/net/dccp/input.c
+index d28d46bff6ab..85d6c879383d 100644
+--- a/net/dccp/input.c
++++ b/net/dccp/input.c
+@@ -606,11 +606,13 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ 	if (sk->sk_state == DCCP_LISTEN) {
+ 		if (dh->dccph_type == DCCP_PKT_REQUEST) {
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = inet_csk(sk)->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 			if (!acceptable)
+ 				return 1;
+ 			consume_skb(skb);
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index b08feb219b44..8e08cea6f178 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -493,9 +493,11 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
+ 
+ 		dh->dccph_checksum = dccp_v4_csum_finish(skb, ireq->ir_loc_addr,
+ 							      ireq->ir_rmt_addr);
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 2998b0e47d4b..0113993e9b2c 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1243,7 +1243,8 @@ static int fib_inetaddr_event(struct notifier_block *this, unsigned long event,
+ static int fib_netdev_event(struct notifier_block *this, unsigned long event, void *ptr)
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+-	struct netdev_notifier_changeupper_info *info;
++	struct netdev_notifier_changeupper_info *upper_info = ptr;
++	struct netdev_notifier_info_ext *info_ext = ptr;
+ 	struct in_device *in_dev;
+ 	struct net *net = dev_net(dev);
+ 	unsigned int flags;
+@@ -1278,16 +1279,19 @@ static int fib_netdev_event(struct notifier_block *this, unsigned long event, vo
+ 			fib_sync_up(dev, RTNH_F_LINKDOWN);
+ 		else
+ 			fib_sync_down_dev(dev, event, false);
+-		/* fall through */
++		rt_cache_flush(net);
++		break;
+ 	case NETDEV_CHANGEMTU:
++		fib_sync_mtu(dev, info_ext->ext.mtu);
+ 		rt_cache_flush(net);
+ 		break;
+ 	case NETDEV_CHANGEUPPER:
+-		info = ptr;
++		upper_info = ptr;
+ 		/* flush all routes if dev is linked to or unlinked from
+ 		 * an L3 master device (e.g., VRF)
+ 		 */
+-		if (info->upper_dev && netif_is_l3_master(info->upper_dev))
++		if (upper_info->upper_dev &&
++		    netif_is_l3_master(upper_info->upper_dev))
+ 			fib_disable_ip(dev, NETDEV_DOWN, true);
+ 		break;
+ 	}
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index f3c89ccf14c5..446204ca7406 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1470,6 +1470,56 @@ static int call_fib_nh_notifiers(struct fib_nh *fib_nh,
+ 	return NOTIFY_DONE;
+ }
+ 
++/* Update the PMTU of exceptions when:
++ * - the new MTU of the first hop becomes smaller than the PMTU
++ * - the old MTU was the same as the PMTU, and it limited discovery of
++ *   larger MTUs on the path. With that limit raised, we can now
++ *   discover larger MTUs
++ * A special case is locked exceptions, for which the PMTU is smaller
++ * than the minimal accepted PMTU:
++ * - if the new MTU is greater than the PMTU, don't make any change
++ * - otherwise, unlock and set PMTU
++ */
++static void nh_update_mtu(struct fib_nh *nh, u32 new, u32 orig)
++{
++	struct fnhe_hash_bucket *bucket;
++	int i;
++
++	bucket = rcu_dereference_protected(nh->nh_exceptions, 1);
++	if (!bucket)
++		return;
++
++	for (i = 0; i < FNHE_HASH_SIZE; i++) {
++		struct fib_nh_exception *fnhe;
++
++		for (fnhe = rcu_dereference_protected(bucket[i].chain, 1);
++		     fnhe;
++		     fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1)) {
++			if (fnhe->fnhe_mtu_locked) {
++				if (new <= fnhe->fnhe_pmtu) {
++					fnhe->fnhe_pmtu = new;
++					fnhe->fnhe_mtu_locked = false;
++				}
++			} else if (new < fnhe->fnhe_pmtu ||
++				   orig == fnhe->fnhe_pmtu) {
++				fnhe->fnhe_pmtu = new;
++			}
++		}
++	}
++}
++
++void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
++{
++	unsigned int hash = fib_devindex_hashfn(dev->ifindex);
++	struct hlist_head *head = &fib_info_devhash[hash];
++	struct fib_nh *nh;
++
++	hlist_for_each_entry(nh, head, nh_hash) {
++		if (nh->nh_dev == dev)
++			nh_update_mtu(nh, dev->mtu, orig_mtu);
++	}
++}
++
+ /* Event              force Flags           Description
+  * NETDEV_CHANGE      0     LINKDOWN        Carrier OFF, not for scope host
+  * NETDEV_DOWN        0     LINKDOWN|DEAD   Link down, not for scope host
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 33a88e045efd..39cfa3a191d8 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -535,7 +535,8 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 	struct ip_options_rcu *opt;
+ 	struct rtable *rt;
+ 
+-	opt = ireq_opt_deref(ireq);
++	rcu_read_lock();
++	opt = rcu_dereference(ireq->ireq_opt);
+ 
+ 	flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark,
+ 			   RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE,
+@@ -549,11 +550,13 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk,
+ 		goto no_route;
+ 	if (opt && opt->opt.is_strictroute && rt->rt_uses_gateway)
+ 		goto route_err;
++	rcu_read_unlock();
+ 	return &rt->dst;
+ 
+ route_err:
+ 	ip_rt_put(rt);
+ no_route:
++	rcu_read_unlock();
+ 	__IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+ 	return NULL;
+ }
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index c0fe5ad996f2..26c36cccabdc 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -149,7 +149,6 @@ static void ip_cmsg_recv_security(struct msghdr *msg, struct sk_buff *skb)
+ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ {
+ 	struct sockaddr_in sin;
+-	const struct iphdr *iph = ip_hdr(skb);
+ 	__be16 *ports;
+ 	int end;
+ 
+@@ -164,7 +163,7 @@ static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)
+ 	ports = (__be16 *)skb_transport_header(skb);
+ 
+ 	sin.sin_family = AF_INET;
+-	sin.sin_addr.s_addr = iph->daddr;
++	sin.sin_addr.s_addr = ip_hdr(skb)->daddr;
+ 	sin.sin_port = ports[1];
+ 	memset(sin.sin_zero, 0, sizeof(sin.sin_zero));
+ 
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index c4f5602308ed..284a22154b4e 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -627,6 +627,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		    const struct iphdr *tnl_params, u8 protocol)
+ {
+ 	struct ip_tunnel *tunnel = netdev_priv(dev);
++	unsigned int inner_nhdr_len = 0;
+ 	const struct iphdr *inner_iph;
+ 	struct flowi4 fl4;
+ 	u8     tos, ttl;
+@@ -636,6 +637,14 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ 	__be32 dst;
+ 	bool connected;
+ 
++	/* ensure we can access the inner net header, for several users below */
++	if (skb->protocol == htons(ETH_P_IP))
++		inner_nhdr_len = sizeof(struct iphdr);
++	else if (skb->protocol == htons(ETH_P_IPV6))
++		inner_nhdr_len = sizeof(struct ipv6hdr);
++	if (unlikely(!pskb_may_pull(skb, inner_nhdr_len)))
++		goto tx_error;
++
+ 	inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
+ 	connected = (tunnel->parms.iph.daddr != 0);
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 1df6e97106d7..f80acb5f1896 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1001,21 +1001,22 @@ out:	kfree_skb(skb);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ 	struct dst_entry *dst = &rt->dst;
++	u32 old_mtu = ipv4_mtu(dst);
+ 	struct fib_result res;
+ 	bool lock = false;
+ 
+ 	if (ip_mtu_locked(dst))
+ 		return;
+ 
+-	if (ipv4_mtu(dst) < mtu)
++	if (old_mtu < mtu)
+ 		return;
+ 
+ 	if (mtu < ip_rt_min_pmtu) {
+ 		lock = true;
+-		mtu = ip_rt_min_pmtu;
++		mtu = min(old_mtu, ip_rt_min_pmtu);
+ 	}
+ 
+-	if (rt->rt_pmtu == mtu &&
++	if (rt->rt_pmtu == mtu && !lock &&
+ 	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
+ 		return;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index f9dcb29be12d..8b7294688633 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5976,11 +5976,13 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ 			if (th->fin)
+ 				goto discard;
+ 			/* It is possible that we process SYN packets from backlog,
+-			 * so we need to make sure to disable BH right there.
++			 * so we need to make sure to disable BH and RCU right there.
+ 			 */
++			rcu_read_lock();
+ 			local_bh_disable();
+ 			acceptable = icsk->icsk_af_ops->conn_request(sk, skb) >= 0;
+ 			local_bh_enable();
++			rcu_read_unlock();
+ 
+ 			if (!acceptable)
+ 				return 1;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 488b201851d7..d380856ba488 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -942,9 +942,11 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
+ 	if (skb) {
+ 		__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
+ 
++		rcu_read_lock();
+ 		err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
+ 					    ireq->ir_rmt_addr,
+-					    ireq_opt_deref(ireq));
++					    rcu_dereference(ireq->ireq_opt));
++		rcu_read_unlock();
+ 		err = net_xmit_eval(err);
+ 	}
+ 
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index fed65bc9df86..a12df801de94 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1631,7 +1631,7 @@ busy_check:
+ 	*err = error;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL_GPL(__skb_recv_udp);
++EXPORT_SYMBOL(__skb_recv_udp);
+ 
+ /*
+  * 	This should be easy, if there is something there we
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f66a1cae3366..3484c7020fd9 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4203,7 +4203,6 @@ static struct inet6_ifaddr *if6_get_first(struct seq_file *seq, loff_t pos)
+ 				p++;
+ 				continue;
+ 			}
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 
+@@ -4227,13 +4226,12 @@ static struct inet6_ifaddr *if6_get_next(struct seq_file *seq,
+ 		return ifa;
+ 	}
+ 
++	state->offset = 0;
+ 	while (++state->bucket < IN6_ADDR_HSIZE) {
+-		state->offset = 0;
+ 		hlist_for_each_entry_rcu(ifa,
+ 				     &inet6_addr_lst[state->bucket], addr_lst) {
+ 			if (!net_eq(dev_net(ifa->idev->dev), net))
+ 				continue;
+-			state->offset++;
+ 			return ifa;
+ 		}
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 5516f55e214b..cbe46175bb59 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -196,6 +196,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 				*ppcpu_rt = NULL;
+ 			}
+ 		}
++
++		free_percpu(f6i->rt6i_pcpu);
+ 	}
+ 
+ 	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 1cc9650af9fb..f5b5b0574a2d 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1226,7 +1226,7 @@ static inline int
+ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	const struct iphdr  *iph = ip_hdr(skb);
++	const struct iphdr  *iph;
+ 	int encap_limit = -1;
+ 	struct flowi6 fl6;
+ 	__u8 dsfield;
+@@ -1234,6 +1234,11 @@ ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	/* ensure we can access the full inner ip header */
++	if (!pskb_may_pull(skb, sizeof(struct iphdr)))
++		return -1;
++
++	iph = ip_hdr(skb);
+ 	memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+ 
+ 	tproto = READ_ONCE(t->parms.proto);
+@@ -1297,7 +1302,7 @@ static inline int
+ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip6_tnl *t = netdev_priv(dev);
+-	struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++	struct ipv6hdr *ipv6h;
+ 	int encap_limit = -1;
+ 	__u16 offset;
+ 	struct flowi6 fl6;
+@@ -1306,6 +1311,10 @@ ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	u8 tproto;
+ 	int err;
+ 
++	if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))
++		return -1;
++
++	ipv6h = ipv6_hdr(skb);
+ 	tproto = READ_ONCE(t->parms.proto);
+ 	if ((tproto != IPPROTO_IPV6 && tproto != 0) ||
+ 	    ip6_tnl_addr_conflict(t, ipv6h))
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index afc307c89d1a..7ef3e0a5bf86 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -650,8 +650,6 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	skb->protocol = htons(ETH_P_IPV6);
+ 	skb->priority = sk->sk_priority;
+ 	skb->mark = sk->sk_mark;
+-	skb_dst_set(skb, &rt->dst);
+-	*dstp = NULL;
+ 
+ 	skb_put(skb, length);
+ 	skb_reset_network_header(skb);
+@@ -664,8 +662,14 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 
+ 	skb->transport_header = skb->network_header;
+ 	err = memcpy_from_msg(iph, msg, length);
+-	if (err)
+-		goto error_fault;
++	if (err) {
++		err = -EFAULT;
++		kfree_skb(skb);
++		goto error;
++	}
++
++	skb_dst_set(skb, &rt->dst);
++	*dstp = NULL;
+ 
+ 	/* if egress device is enslaved to an L3 master device pass the
+ 	 * skb to its handler for processing
+@@ -674,21 +678,28 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ 	if (unlikely(!skb))
+ 		return 0;
+ 
++	/* Acquire rcu_read_lock() in case we need to use rt->rt6i_idev
++	 * in the error path. Since skb has been freed, the dst could
++	 * have been queued for deletion.
++	 */
++	rcu_read_lock();
+ 	IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ 	err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
+ 		      NULL, rt->dst.dev, dst_output);
+ 	if (err > 0)
+ 		err = net_xmit_errno(err);
+-	if (err)
+-		goto error;
++	if (err) {
++		IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++		rcu_read_unlock();
++		goto error_check;
++	}
++	rcu_read_unlock();
+ out:
+ 	return 0;
+ 
+-error_fault:
+-	err = -EFAULT;
+-	kfree_skb(skb);
+ error:
+ 	IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
++error_check:
+ 	if (err == -ENOBUFS && !np->recverr)
+ 		err = 0;
+ 	return err;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 480a79f47c52..ed526e257da6 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4314,11 +4314,6 @@ static int ip6_route_info_append(struct net *net,
+ 	if (!nh)
+ 		return -ENOMEM;
+ 	nh->fib6_info = rt;
+-	err = ip6_convert_metrics(net, rt, r_cfg);
+-	if (err) {
+-		kfree(nh);
+-		return err;
+-	}
+ 	memcpy(&nh->r_cfg, r_cfg, sizeof(*r_cfg));
+ 	list_add_tail(&nh->next, rt6_nh_list);
+ 
+diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c
+index c070dfc0190a..c92894c3e40a 100644
+--- a/net/netlabel/netlabel_unlabeled.c
++++ b/net/netlabel/netlabel_unlabeled.c
+@@ -781,7 +781,8 @@ static int netlbl_unlabel_addrinfo_get(struct genl_info *info,
+ {
+ 	u32 addr_len;
+ 
+-	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR]) {
++	if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR] &&
++	    info->attrs[NLBL_UNLABEL_A_IPV4MASK]) {
+ 		addr_len = nla_len(info->attrs[NLBL_UNLABEL_A_IPV4ADDR]);
+ 		if (addr_len != sizeof(struct in_addr) &&
+ 		    addr_len != nla_len(info->attrs[NLBL_UNLABEL_A_IPV4MASK]))
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index e6445d8f3f57..3237e9978c1a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2712,10 +2712,12 @@ tpacket_error:
+ 			}
+ 		}
+ 
+-		if (po->has_vnet_hdr && virtio_net_hdr_to_skb(skb, vnet_hdr,
+-							      vio_le())) {
+-			tp_len = -EINVAL;
+-			goto tpacket_error;
++		if (po->has_vnet_hdr) {
++			if (virtio_net_hdr_to_skb(skb, vnet_hdr, vio_le())) {
++				tp_len = -EINVAL;
++				goto tpacket_error;
++			}
++			virtio_net_hdr_set_proto(skb, vnet_hdr);
+ 		}
+ 
+ 		skb->destructor = tpacket_destruct_skb;
+@@ -2911,6 +2913,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (err)
+ 			goto out_free;
+ 		len += sizeof(vnet_hdr);
++		virtio_net_hdr_set_proto(skb, &vnet_hdr);
+ 	}
+ 
+ 	skb_probe_transport_header(skb, reserve);
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 260749956ef3..24df95a7b9c7 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -397,6 +397,7 @@ static int u32_init(struct tcf_proto *tp)
+ 	rcu_assign_pointer(tp_c->hlist, root_ht);
+ 	root_ht->tp_c = tp_c;
+ 
++	root_ht->refcnt++;
+ 	rcu_assign_pointer(tp->root, root_ht);
+ 	tp->data = tp_c;
+ 	return 0;
+@@ -608,7 +609,7 @@ static int u32_destroy_hnode(struct tcf_proto *tp, struct tc_u_hnode *ht,
+ 	struct tc_u_hnode __rcu **hn;
+ 	struct tc_u_hnode *phn;
+ 
+-	WARN_ON(ht->refcnt);
++	WARN_ON(--ht->refcnt);
+ 
+ 	u32_clear_hnode(tp, ht, extack);
+ 
+@@ -647,7 +648,7 @@ static void u32_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 
+ 	WARN_ON(root_ht == NULL);
+ 
+-	if (root_ht && --root_ht->refcnt == 0)
++	if (root_ht && --root_ht->refcnt == 1)
+ 		u32_destroy_hnode(tp, root_ht, extack);
+ 
+ 	if (--tp_c->refcnt == 0) {
+@@ -696,7 +697,6 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ 	}
+ 
+ 	if (ht->refcnt == 1) {
+-		ht->refcnt--;
+ 		u32_destroy_hnode(tp, ht, extack);
+ 	} else {
+ 		NL_SET_ERR_MSG_MOD(extack, "Can not delete in-use filter");
+@@ -706,11 +706,11 @@ static int u32_delete(struct tcf_proto *tp, void *arg, bool *last,
+ out:
+ 	*last = true;
+ 	if (root_ht) {
+-		if (root_ht->refcnt > 1) {
++		if (root_ht->refcnt > 2) {
+ 			*last = false;
+ 			goto ret;
+ 		}
+-		if (root_ht->refcnt == 1) {
++		if (root_ht->refcnt == 2) {
+ 			if (!ht_empty(root_ht)) {
+ 				*last = false;
+ 				goto ret;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 54eca685420f..99cc25aae503 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1304,6 +1304,18 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+  * Delete/get qdisc.
+  */
+ 
++const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
++	[TCA_KIND]		= { .type = NLA_STRING },
++	[TCA_OPTIONS]		= { .type = NLA_NESTED },
++	[TCA_RATE]		= { .type = NLA_BINARY,
++				    .len = sizeof(struct tc_estimator) },
++	[TCA_STAB]		= { .type = NLA_NESTED },
++	[TCA_DUMP_INVISIBLE]	= { .type = NLA_FLAG },
++	[TCA_CHAIN]		= { .type = NLA_U32 },
++	[TCA_INGRESS_BLOCK]	= { .type = NLA_U32 },
++	[TCA_EGRESS_BLOCK]	= { .type = NLA_U32 },
++};
++
+ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 			struct netlink_ext_ack *extack)
+ {
+@@ -1320,7 +1332,8 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1404,7 +1417,8 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ 
+ replay:
+ 	/* Reinit, just in case something touches this. */
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1638,7 +1652,8 @@ static int tc_dump_qdisc(struct sk_buff *skb, struct netlink_callback *cb)
+ 	idx = 0;
+ 	ASSERT_RTNL();
+ 
+-	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX, NULL, NULL);
++	err = nlmsg_parse(nlh, sizeof(struct tcmsg), tca, TCA_MAX,
++			  rtm_tca_policy, NULL);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -1857,7 +1872,8 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ 	    !netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN))
+ 		return -EPERM;
+ 
+-	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, NULL, extack);
++	err = nlmsg_parse(n, sizeof(*tcm), tca, TCA_MAX, rtm_tca_policy,
++			  extack);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 12cac85da994..033696e6f74f 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -260,6 +260,7 @@ void sctp_transport_pmtu(struct sctp_transport *transport, struct sock *sk)
+ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ {
+ 	struct dst_entry *dst = sctp_transport_dst_check(t);
++	struct sock *sk = t->asoc->base.sk;
+ 	bool change = true;
+ 
+ 	if (unlikely(pmtu < SCTP_DEFAULT_MINSEGMENT)) {
+@@ -271,12 +272,19 @@ bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu)
+ 	pmtu = SCTP_TRUNC4(pmtu);
+ 
+ 	if (dst) {
+-		dst->ops->update_pmtu(dst, t->asoc->base.sk, NULL, pmtu);
++		struct sctp_pf *pf = sctp_get_pf_specific(dst->ops->family);
++		union sctp_addr addr;
++
++		pf->af->from_sk(&addr, sk);
++		pf->to_sk_daddr(&t->ipaddr, sk);
++		dst->ops->update_pmtu(dst, sk, NULL, pmtu);
++		pf->to_sk_daddr(&addr, sk);
++
+ 		dst = sctp_transport_dst_check(t);
+ 	}
+ 
+ 	if (!dst) {
+-		t->af_specific->get_dst(t, &t->saddr, &t->fl, t->asoc->base.sk);
++		t->af_specific->get_dst(t, &t->saddr, &t->fl, sk);
+ 		dst = t->dst;
+ 	}
+ 
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 093e16d1b770..cdaf3534e373 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1422,8 +1422,10 @@ static int __tipc_sendstream(struct socket *sock, struct msghdr *m, size_t dlen)
+ 	/* Handle implicit connection setup */
+ 	if (unlikely(dest)) {
+ 		rc = __tipc_sendmsg(sock, m, dlen);
+-		if (dlen && (dlen == rc))
++		if (dlen && dlen == rc) {
++			tsk->peer_caps = tipc_node_get_capabilities(net, dnode);
+ 			tsk->snt_unacked = tsk_inc(tsk, dlen + msg_hdr_sz(hdr));
++		}
+ 		return rc;
+ 	}
+ 
+diff --git a/scripts/subarch.include b/scripts/subarch.include
+new file mode 100644
+index 000000000000..650682821126
+--- /dev/null
++++ b/scripts/subarch.include
+@@ -0,0 +1,13 @@
++# SUBARCH tells the usermode build what the underlying arch is.  That is set
++# first, and if a usermode build is happening, the "ARCH=um" on the command
++# line overrides the setting of ARCH below.  If a native build is happening,
++# then ARCH is assigned, getting whatever value it gets normally, and
++# SUBARCH is subsequently ignored.
++
++SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
++				  -e s/sun4u/sparc64/ \
++				  -e s/arm.*/arm/ -e s/sa110/arm/ \
++				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
++				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
++				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
++				  -e s/riscv.*/riscv/)
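The sed pipeline in the new `scripts/subarch.include` above normalizes `uname -m` output to a kernel ARCH name. A rough Python sketch of the same mapping (hypothetical helper name; the substitution rules are applied in order, exactly as sed applies multiple `-e` expressions to the line):

```python
import re

# Ordered (pattern, replacement) pairs mirroring the sed -e expressions above.
_RULES = [
    (r"i.86", "x86"), (r"x86_64", "x86"),
    (r"sun4u", "sparc64"),
    (r"arm.*", "arm"), (r"sa110", "arm"),
    (r"s390x", "s390"), (r"parisc64", "parisc"),
    (r"ppc.*", "powerpc"), (r"mips.*", "mips"),
    (r"sh[234].*", "sh"), (r"aarch64.*", "arm64"),
    (r"riscv.*", "riscv"),
]

def subarch(machine: str) -> str:
    """Apply each substitution in order, as sed does with multiple -e flags."""
    for pat, repl in _RULES:
        machine = re.sub(pat, repl, machine)
    return machine
```

Note the ordering matters: `aarch64` must not be caught by the `arm.*` rule (it is not, since it contains no literal `arm`), and `riscv64` falls through every earlier rule before `riscv.*` collapses it.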
+diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
+index 560ec0986e1a..74244d8e2909 100644
+--- a/sound/hda/hdac_controller.c
++++ b/sound/hda/hdac_controller.c
+@@ -40,6 +40,8 @@ static void azx_clear_corbrp(struct hdac_bus *bus)
+  */
+ void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus)
+ {
++	WARN_ON_ONCE(!bus->rb.area);
++
+ 	spin_lock_irq(&bus->reg_lock);
+ 	/* CORB set up */
+ 	bus->corb.addr = bus->rb.addr;
+@@ -383,7 +385,7 @@ void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus)
+ EXPORT_SYMBOL_GPL(snd_hdac_bus_exit_link_reset);
+ 
+ /* reset codec link */
+-static int azx_reset(struct hdac_bus *bus, bool full_reset)
++int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset)
+ {
+ 	if (!full_reset)
+ 		goto skip_reset;
+@@ -408,7 +410,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+  skip_reset:
+ 	/* check to see if controller is ready */
+ 	if (!snd_hdac_chip_readb(bus, GCTL)) {
+-		dev_dbg(bus->dev, "azx_reset: controller not ready!\n");
++		dev_dbg(bus->dev, "controller not ready!\n");
+ 		return -EBUSY;
+ 	}
+ 
+@@ -423,6 +425,7 @@ static int azx_reset(struct hdac_bus *bus, bool full_reset)
+ 
+ 	return 0;
+ }
++EXPORT_SYMBOL_GPL(snd_hdac_bus_reset_link);
+ 
+ /* enable interrupts */
+ static void azx_int_enable(struct hdac_bus *bus)
+@@ -477,15 +480,17 @@ bool snd_hdac_bus_init_chip(struct hdac_bus *bus, bool full_reset)
+ 		return false;
+ 
+ 	/* reset controller */
+-	azx_reset(bus, full_reset);
++	snd_hdac_bus_reset_link(bus, full_reset);
+ 
+-	/* initialize interrupts */
++	/* clear interrupts */
+ 	azx_int_clear(bus);
+-	azx_int_enable(bus);
+ 
+ 	/* initialize the codec command I/O */
+ 	snd_hdac_bus_init_cmd_io(bus);
+ 
++	/* enable interrupts after CORB/RIRB buffers are initialized above */
++	azx_int_enable(bus);
++
+ 	/* program the position buffer */
+ 	if (bus->use_posbuf && bus->posbuf.addr) {
+ 		snd_hdac_chip_writel(bus, DPLBASE, (u32)bus->posbuf.addr);
+diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
+index 77203841c535..90df61d263b8 100644
+--- a/sound/soc/amd/acp-pcm-dma.c
++++ b/sound/soc/amd/acp-pcm-dma.c
+@@ -16,6 +16,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/sizes.h>
+ #include <linux/pm_runtime.h>
+ 
+@@ -184,6 +185,24 @@ static void config_dma_descriptor_in_sram(void __iomem *acp_mmio,
+ 	acp_reg_write(descr_info->xfer_val, acp_mmio, mmACP_SRBM_Targ_Idx_Data);
+ }
+ 
++static void pre_config_reset(void __iomem *acp_mmio, u16 ch_num)
++{
++	u32 dma_ctrl;
++	int ret;
++
++	/* clear the reset bit */
++	dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK;
++	acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num);
++	/* check the reset bit before programming configuration registers */
++	ret = readl_poll_timeout(acp_mmio + ((mmACP_DMA_CNTL_0 + ch_num) * 4),
++				 dma_ctrl,
++				 !(dma_ctrl & ACP_DMA_CNTL_0__DMAChRst_MASK),
++				 100, ACP_DMA_RESET_TIME);
++	if (ret < 0)
++		pr_err("Failed to clear reset of channel : %d\n", ch_num);
++}
++
+ /*
+  * Initialize the DMA descriptor information for transfer between
+  * system memory <-> ACP SRAM
+@@ -238,6 +257,7 @@ static void set_acp_sysmem_dma_descriptors(void __iomem *acp_mmio,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	config_acp_dma_channel(acp_mmio, ch,
+ 			       dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
+@@ -277,6 +297,7 @@ static void set_acp_to_i2s_dma_descriptors(void __iomem *acp_mmio, u32 size,
+ 		config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx,
+ 					      &dmadscr[i]);
+ 	}
++	pre_config_reset(acp_mmio, ch);
+ 	/* Configure the DMA channel with the above descriptore */
+ 	config_acp_dma_channel(acp_mmio, ch, dma_dscr_idx - 1,
+ 			       NUM_DSCRS_PER_CHANNEL,
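The `readl_poll_timeout()` call added in `pre_config_reset()` above repeatedly reads a register until a condition holds or a deadline passes. A minimal sketch of that poll-until-clear idiom, with a fake register standing in for the MMIO read (the mask value and helper names are illustrative, not from the driver):

```python
import time

DMA_CH_RST_MASK = 0x1  # assumed reset-bit mask, for illustration only

def poll_timeout(read_reg, done, sleep_us, timeout_us):
    """Poll read_reg() until done(val) is true or the deadline passes.

    Returns (ok, last_value), mirroring how readl_poll_timeout reports
    success/-ETIMEDOUT plus the final register value.
    """
    deadline = time.monotonic() + timeout_us / 1e6
    while True:
        val = read_reg()
        if done(val):
            return True, val
        if time.monotonic() > deadline:
            return False, val
        time.sleep(sleep_us / 1e6)

class FakeReg:
    """A register whose reset bit clears after a fixed number of reads."""
    def __init__(self, clears_after):
        self.n = clears_after
    def read(self):
        self.n -= 1
        return 0 if self.n < 0 else DMA_CH_RST_MASK
```

For example, polling `FakeReg(3)` with `done=lambda v: not (v & DMA_CH_RST_MASK)` succeeds on the fourth read, while a register that never clears makes `poll_timeout` return `(False, …)` once the deadline expires — the case the driver logs as a failed channel reset.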
+diff --git a/sound/soc/codecs/max98373.c b/sound/soc/codecs/max98373.c
+index a92586106932..f0948e84f6ae 100644
+--- a/sound/soc/codecs/max98373.c
++++ b/sound/soc/codecs/max98373.c
+@@ -519,6 +519,7 @@ static bool max98373_volatile_reg(struct device *dev, unsigned int reg)
+ {
+ 	switch (reg) {
+ 	case MAX98373_R2000_SW_RESET ... MAX98373_R2009_INT_FLAG3:
++	case MAX98373_R203E_AMP_PATH_GAIN:
+ 	case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK:
+ 	case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK:
+ 	case MAX98373_R20B6_BDE_CUR_STATE_READBACK:
+@@ -728,6 +729,7 @@ static int max98373_probe(struct snd_soc_component *component)
+ 	/* Software Reset */
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 
+ 	/* IV default slot configuration */
+ 	regmap_write(max98373->regmap,
+@@ -816,6 +818,7 @@ static int max98373_resume(struct device *dev)
+ 
+ 	regmap_write(max98373->regmap,
+ 		MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET);
++	usleep_range(10000, 11000);
+ 	regcache_cache_only(max98373->regmap, false);
+ 	regcache_sync(max98373->regmap);
+ 	return 0;
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index dca82dd6e3bf..32fe76c3134a 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000342},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000342},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/sigmadsp.c b/sound/soc/codecs/sigmadsp.c
+index d53680ac78e4..6df158669420 100644
+--- a/sound/soc/codecs/sigmadsp.c
++++ b/sound/soc/codecs/sigmadsp.c
+@@ -117,8 +117,7 @@ static int sigmadsp_ctrl_write(struct sigmadsp *sigmadsp,
+ 	struct sigmadsp_control *ctrl, void *data)
+ {
+ 	/* safeload loads up to 20 bytes in a atomic operation */
+-	if (ctrl->num_bytes > 4 && ctrl->num_bytes <= 20 && sigmadsp->ops &&
+-	    sigmadsp->ops->safeload)
++	if (ctrl->num_bytes <= 20 && sigmadsp->ops && sigmadsp->ops->safeload)
+ 		return sigmadsp->ops->safeload(sigmadsp, ctrl->addr, data,
+ 			ctrl->num_bytes);
+ 	else
+diff --git a/sound/soc/codecs/wm8804-i2c.c b/sound/soc/codecs/wm8804-i2c.c
+index f27464c2c5ba..79541960f45d 100644
+--- a/sound/soc/codecs/wm8804-i2c.c
++++ b/sound/soc/codecs/wm8804-i2c.c
+@@ -13,6 +13,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/i2c.h>
++#include <linux/acpi.h>
+ 
+ #include "wm8804.h"
+ 
+@@ -40,17 +41,29 @@ static const struct i2c_device_id wm8804_i2c_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, wm8804_i2c_id);
+ 
++#if defined(CONFIG_OF)
+ static const struct of_device_id wm8804_of_match[] = {
+ 	{ .compatible = "wlf,wm8804", },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, wm8804_of_match);
++#endif
++
++#ifdef CONFIG_ACPI
++static const struct acpi_device_id wm8804_acpi_match[] = {
++	{ "1AEC8804", 0 }, /* Wolfson PCI ID + part ID */
++	{ "10138804", 0 }, /* Cirrus Logic PCI ID + part ID */
++	{ },
++};
++MODULE_DEVICE_TABLE(acpi, wm8804_acpi_match);
++#endif
+ 
+ static struct i2c_driver wm8804_i2c_driver = {
+ 	.driver = {
+ 		.name = "wm8804",
+ 		.pm = &wm8804_pm,
+-		.of_match_table = wm8804_of_match,
++		.of_match_table = of_match_ptr(wm8804_of_match),
++		.acpi_match_table = ACPI_PTR(wm8804_acpi_match),
+ 	},
+ 	.probe = wm8804_i2c_probe,
+ 	.remove = wm8804_i2c_remove,
+diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
+index f0d9793f872a..c7cdfa4a7076 100644
+--- a/sound/soc/intel/skylake/skl.c
++++ b/sound/soc/intel/skylake/skl.c
+@@ -844,7 +844,7 @@ static int skl_first_init(struct hdac_ext_bus *ebus)
+ 		return -ENXIO;
+ 	}
+ 
+-	skl_init_chip(bus, true);
++	snd_hdac_bus_reset_link(bus, true);
+ 
+ 	snd_hdac_bus_parse_capabilities(bus);
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c
+index 593f66b8622f..33bb97c0b6b6 100644
+--- a/sound/soc/qcom/qdsp6/q6routing.c
++++ b/sound/soc/qcom/qdsp6/q6routing.c
+@@ -933,8 +933,10 @@ static int msm_routing_probe(struct snd_soc_component *c)
+ {
+ 	int i;
+ 
+-	for (i = 0; i < MAX_SESSIONS; i++)
++	for (i = 0; i < MAX_SESSIONS; i++) {
+ 		routing_data->sessions[i].port_id = -1;
++		routing_data->sessions[i].fedai_id = -1;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c
+index 4672688cac32..b7c1f34ec280 100644
+--- a/sound/soc/sh/rcar/adg.c
++++ b/sound/soc/sh/rcar/adg.c
+@@ -465,6 +465,11 @@ static void rsnd_adg_get_clkout(struct rsnd_priv *priv,
+ 		goto rsnd_adg_get_clkout_end;
+ 
+ 	req_size = prop->length / sizeof(u32);
++	if (req_size > REQ_SIZE) {
++		dev_err(dev,
++			"too many clock-frequency, use top %d\n", REQ_SIZE);
++		req_size = REQ_SIZE;
++	}
+ 
+ 	of_property_read_u32_array(np, "clock-frequency", req_rate, req_size);
+ 	req_48kHz_rate = 0;
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index ff13189a7ee4..982a72e73ea9 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -482,7 +482,7 @@ static int rsnd_status_update(u32 *status,
+ 			(func_call && (mod)->ops->fn) ? #fn : "");	\
+ 		if (func_call && (mod)->ops->fn)			\
+ 			tmp = (mod)->ops->fn(mod, io, param);		\
+-		if (tmp)						\
++		if (tmp && (tmp != -EPROBE_DEFER))			\
+ 			dev_err(dev, "%s[%d] : %s error %d\n",		\
+ 				rsnd_mod_name(mod), rsnd_mod_id(mod),	\
+ 						     #fn, tmp);		\
+@@ -1550,6 +1550,14 @@ exit_snd_probe:
+ 		rsnd_dai_call(remove, &rdai->capture, priv);
+ 	}
+ 
++	/*
++	 * adg is very special mod which can't use rsnd_dai_call(remove),
++	 * and it registers ADG clock on probe.
++	 * It should be unregister if probe failed.
++	 * Mainly it is assuming -EPROBE_DEFER case
++	 */
++	rsnd_adg_remove(priv);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index ef82b94d038b..2f3f4108fda5 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -244,6 +244,10 @@ static int rsnd_dmaen_attach(struct rsnd_dai_stream *io,
+ 	/* try to get DMAEngine channel */
+ 	chan = rsnd_dmaen_request_channel(io, mod_from, mod_to);
+ 	if (IS_ERR_OR_NULL(chan)) {
++		/* Let's follow when -EPROBE_DEFER case */
++		if (PTR_ERR(chan) == -EPROBE_DEFER)
++			return PTR_ERR(chan);
++
+ 		/*
+ 		 * DMA failed. try to PIO mode
+ 		 * see
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 4663de3cf495..0b4896d411f9 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1430,7 +1430,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = codec_dai->playback_widget;
+ 	source = cpu_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+@@ -1443,7 +1443,7 @@ static int soc_link_dai_widgets(struct snd_soc_card *card,
+ 	sink = cpu_dai->playback_widget;
+ 	source = codec_dai->capture_widget;
+ 	if (sink && source) {
+-		ret = snd_soc_dapm_new_pcm(card, dai_link->params,
++		ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,
+ 					   dai_link->num_params,
+ 					   source, sink);
+ 		if (ret != 0) {
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index a099c3e45504..577f6178af57 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3658,6 +3658,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ {
+ 	struct snd_soc_dapm_path *source_p, *sink_p;
+ 	struct snd_soc_dai *source, *sink;
++	struct snd_soc_pcm_runtime *rtd = w->priv;
+ 	const struct snd_soc_pcm_stream *config = w->params + w->params_select;
+ 	struct snd_pcm_substream substream;
+ 	struct snd_pcm_hw_params *params = NULL;
+@@ -3717,6 +3718,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+ 		goto out;
+ 	}
+ 	substream.runtime = runtime;
++	substream.private_data = rtd;
+ 
+ 	switch (event) {
+ 	case SND_SOC_DAPM_PRE_PMU:
+@@ -3901,6 +3903,7 @@ outfree_w_param:
+ }
+ 
+ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
++			 struct snd_soc_pcm_runtime *rtd,
+ 			 const struct snd_soc_pcm_stream *params,
+ 			 unsigned int num_params,
+ 			 struct snd_soc_dapm_widget *source,
+@@ -3969,6 +3972,7 @@ int snd_soc_dapm_new_pcm(struct snd_soc_card *card,
+ 
+ 	w->params = params;
+ 	w->num_params = num_params;
++	w->priv = rtd;
+ 
+ 	ret = snd_soc_dapm_add_path(&card->dapm, source, w, NULL, NULL);
+ 	if (ret)
+diff --git a/tools/perf/scripts/python/export-to-postgresql.py b/tools/perf/scripts/python/export-to-postgresql.py
+index efcaf6cac2eb..e46f51b17513 100644
+--- a/tools/perf/scripts/python/export-to-postgresql.py
++++ b/tools/perf/scripts/python/export-to-postgresql.py
+@@ -204,14 +204,23 @@ from ctypes import *
+ libpq = CDLL("libpq.so.5")
+ PQconnectdb = libpq.PQconnectdb
+ PQconnectdb.restype = c_void_p
++PQconnectdb.argtypes = [ c_char_p ]
+ PQfinish = libpq.PQfinish
++PQfinish.argtypes = [ c_void_p ]
+ PQstatus = libpq.PQstatus
++PQstatus.restype = c_int
++PQstatus.argtypes = [ c_void_p ]
+ PQexec = libpq.PQexec
+ PQexec.restype = c_void_p
++PQexec.argtypes = [ c_void_p, c_char_p ]
+ PQresultStatus = libpq.PQresultStatus
++PQresultStatus.restype = c_int
++PQresultStatus.argtypes = [ c_void_p ]
+ PQputCopyData = libpq.PQputCopyData
++PQputCopyData.restype = c_int
+ PQputCopyData.argtypes = [ c_void_p, c_void_p, c_int ]
+ PQputCopyEnd = libpq.PQputCopyEnd
++PQputCopyEnd.restype = c_int
+ PQputCopyEnd.argtypes = [ c_void_p, c_void_p ]
+ 
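The hunk above pins `restype`/`argtypes` on every libpq entry point; without them, ctypes defaults each return value to a C `int`, which can truncate 64-bit pointers such as the `PGconn *` returned by `PQconnectdb`, and passes arguments without type checking. A small sketch of the same discipline applied to libc (assumes a Unix-like system where `CDLL(None)` exposes libc symbols):

```python
from ctypes import CDLL, ArgumentError, c_char_p, c_size_t

libc = CDLL(None)  # handle to the current process's symbols (libc on Linux)

strlen = libc.strlen
strlen.restype = c_size_t      # default restype is c_int
strlen.argtypes = [c_char_p]   # reject anything that isn't bytes / char *

def safe_strlen(data: bytes) -> int:
    return strlen(data)
```

With `argtypes` set, passing an incompatible value (say, a float) raises `ctypes.ArgumentError` at the call site instead of silently marshalling garbage — the same failure mode the patch closes off for the PostgreSQL exporter.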
+ sys.path.append(os.environ['PERF_EXEC_PATH'] + \
+diff --git a/tools/perf/scripts/python/export-to-sqlite.py b/tools/perf/scripts/python/export-to-sqlite.py
+index f827bf77e9d2..e4bb82c8aba9 100644
+--- a/tools/perf/scripts/python/export-to-sqlite.py
++++ b/tools/perf/scripts/python/export-to-sqlite.py
+@@ -440,7 +440,11 @@ def branch_type_table(*x):
+ 
+ def sample_table(*x):
+ 	if branches:
+-		bind_exec(sample_query, 18, x)
++		for xx in x[0:15]:
++			sample_query.addBindValue(str(xx))
++		for xx in x[19:22]:
++			sample_query.addBindValue(str(xx))
++		do_query_(sample_query)
+ 	else:
+ 		bind_exec(sample_query, 22, x)
+ 
+diff --git a/tools/testing/selftests/android/Makefile b/tools/testing/selftests/android/Makefile
+index 72c25a3cb658..d9a725478375 100644
+--- a/tools/testing/selftests/android/Makefile
++++ b/tools/testing/selftests/android/Makefile
+@@ -6,7 +6,7 @@ TEST_PROGS := run.sh
+ 
+ include ../lib.mk
+ 
+-all:
++all: khdr
+ 	@for DIR in $(SUBDIRS); do		\
+ 		BUILD_TARGET=$(OUTPUT)/$$DIR;	\
+ 		mkdir $$BUILD_TARGET  -p;	\
+diff --git a/tools/testing/selftests/android/config b/tools/testing/selftests/android/config
+new file mode 100644
+index 000000000000..b4ad748a9dd9
+--- /dev/null
++++ b/tools/testing/selftests/android/config
+@@ -0,0 +1,5 @@
++CONFIG_ANDROID=y
++CONFIG_STAGING=y
++CONFIG_ION=y
++CONFIG_ION_SYSTEM_HEAP=y
++CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/android/ion/Makefile b/tools/testing/selftests/android/ion/Makefile
+index e03695287f76..88cfe88e466f 100644
+--- a/tools/testing/selftests/android/ion/Makefile
++++ b/tools/testing/selftests/android/ion/Makefile
+@@ -10,6 +10,8 @@ $(TEST_GEN_FILES): ipcsocket.c ionutils.c
+ 
+ TEST_PROGS := ion_test.sh
+ 
++KSFT_KHDR_INSTALL := 1
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(OUTPUT)/ionapp_export: ionapp_export.c ipcsocket.c ionutils.c
+diff --git a/tools/testing/selftests/android/ion/config b/tools/testing/selftests/android/ion/config
+deleted file mode 100644
+index b4ad748a9dd9..000000000000
+--- a/tools/testing/selftests/android/ion/config
++++ /dev/null
+@@ -1,5 +0,0 @@
+-CONFIG_ANDROID=y
+-CONFIG_STAGING=y
+-CONFIG_ION=y
+-CONFIG_ION_SYSTEM_HEAP=y
+-CONFIG_DRM_VGEM=y
+diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
+index 1e9e3c470561..8b644ea39725 100644
+--- a/tools/testing/selftests/cgroup/cgroup_util.c
++++ b/tools/testing/selftests/cgroup/cgroup_util.c
+@@ -89,17 +89,28 @@ int cg_read(const char *cgroup, const char *control, char *buf, size_t len)
+ int cg_read_strcmp(const char *cgroup, const char *control,
+ 		   const char *expected)
+ {
+-	size_t size = strlen(expected) + 1;
++	size_t size;
+ 	char *buf;
++	int ret;
++
++	/* Handle the case of comparing against empty string */
++	if (!expected)
++		size = 32;
++	else
++		size = strlen(expected) + 1;
+ 
+ 	buf = malloc(size);
+ 	if (!buf)
+ 		return -1;
+ 
+-	if (cg_read(cgroup, control, buf, size))
++	if (cg_read(cgroup, control, buf, size)) {
++		free(buf);
+ 		return -1;
++	}
+ 
+-	return strcmp(expected, buf);
++	ret = strcmp(expected, buf);
++	free(buf);
++	return ret;
+ }
+ 
+ int cg_read_strstr(const char *cgroup, const char *control, const char *needle)
+diff --git a/tools/testing/selftests/efivarfs/config b/tools/testing/selftests/efivarfs/config
+new file mode 100644
+index 000000000000..4e151f1005b2
+--- /dev/null
++++ b/tools/testing/selftests/efivarfs/config
+@@ -0,0 +1 @@
++CONFIG_EFIVAR_FS=y
+diff --git a/tools/testing/selftests/futex/functional/Makefile b/tools/testing/selftests/futex/functional/Makefile
+index ff8feca49746..ad1eeb14fda7 100644
+--- a/tools/testing/selftests/futex/functional/Makefile
++++ b/tools/testing/selftests/futex/functional/Makefile
+@@ -18,6 +18,7 @@ TEST_GEN_FILES := \
+ 
+ TEST_PROGS := run.sh
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ $(TEST_GEN_FILES): $(HEADERS)
+diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
+index 1bbb47565c55..4665cdbf1a8d 100644
+--- a/tools/testing/selftests/gpio/Makefile
++++ b/tools/testing/selftests/gpio/Makefile
+@@ -21,11 +21,8 @@ endef
+ CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/
+ LDLIBS += -lmount -I/usr/include/libmount
+ 
+-$(BINARIES): ../../../gpio/gpio-utils.o ../../../../usr/include/linux/gpio.h
++$(BINARIES):| khdr
++$(BINARIES): ../../../gpio/gpio-utils.o
+ 
+ ../../../gpio/gpio-utils.o:
+ 	make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C ../../../gpio
+-
+-../../../../usr/include/linux/gpio.h:
+-	make -C ../../../.. headers_install INSTALL_HDR_PATH=$(shell pwd)/../../../../usr/
+-
+diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
+index 15e6b75fc3a5..a3edb2c8e43d 100644
+--- a/tools/testing/selftests/kselftest.h
++++ b/tools/testing/selftests/kselftest.h
+@@ -19,7 +19,6 @@
+ #define KSFT_FAIL  1
+ #define KSFT_XFAIL 2
+ #define KSFT_XPASS 3
+-/* Treat skip as pass */
+ #define KSFT_SKIP  4
+ 
+ /* counters */
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index d9d00319b07c..bcb69380bbab 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -32,9 +32,6 @@ $(LIBKVM_OBJ): $(OUTPUT)/%.o: %.c
+ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
+ 	$(AR) crs $@ $^
+ 
+-$(LINUX_HDR_PATH):
+-	make -C $(top_srcdir) headers_install
+-
+-all: $(STATIC_LIBS) $(LINUX_HDR_PATH)
++all: $(STATIC_LIBS)
+ $(TEST_GEN_PROGS): $(STATIC_LIBS)
+-$(TEST_GEN_PROGS) $(LIBKVM_OBJ): | $(LINUX_HDR_PATH)
++$(STATIC_LIBS):| khdr
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index 17ab36605a8e..0a8e75886224 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -16,8 +16,20 @@ TEST_GEN_PROGS := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS))
+ TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED))
+ TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES))
+ 
++top_srcdir ?= ../../../..
++include $(top_srcdir)/scripts/subarch.include
++ARCH		?= $(SUBARCH)
++
+ all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES)
+ 
++.PHONY: khdr
++khdr:
++	make ARCH=$(ARCH) -C $(top_srcdir) headers_install
++
++ifdef KSFT_KHDR_INSTALL
++$(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES):| khdr
++endif
++
+ .ONESHELL:
+ define RUN_TEST_PRINT_RESULT
+ 	TEST_HDR_MSG="selftests: "`basename $$PWD`:" $$BASENAME_TEST";	\
+diff --git a/tools/testing/selftests/memory-hotplug/config b/tools/testing/selftests/memory-hotplug/config
+index 2fde30191a47..a7e8cd5bb265 100644
+--- a/tools/testing/selftests/memory-hotplug/config
++++ b/tools/testing/selftests/memory-hotplug/config
+@@ -2,3 +2,4 @@ CONFIG_MEMORY_HOTPLUG=y
+ CONFIG_MEMORY_HOTPLUG_SPARSE=y
+ CONFIG_NOTIFIER_ERROR_INJECTION=y
+ CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
++CONFIG_MEMORY_HOTREMOVE=y
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 663e11e85727..d515dabc6b0d 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -15,6 +15,7 @@ TEST_GEN_FILES += udpgso udpgso_bench_tx udpgso_bench_rx
+ TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+ TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict
+ 
++KSFT_KHDR_INSTALL := 1
+ include ../lib.mk
+ 
+ $(OUTPUT)/reuseport_bpf_numa: LDFLAGS += -lnuma
+diff --git a/tools/testing/selftests/networking/timestamping/Makefile b/tools/testing/selftests/networking/timestamping/Makefile
+index a728040edbe1..14cfcf006936 100644
+--- a/tools/testing/selftests/networking/timestamping/Makefile
++++ b/tools/testing/selftests/networking/timestamping/Makefile
+@@ -5,6 +5,7 @@ TEST_PROGS := hwtstamp_config rxtimestamp timestamping txtimestamp
+ 
+ all: $(TEST_PROGS)
+ 
++top_srcdir = ../../../../..
+ include ../../lib.mk
+ 
+ clean:
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index fdefa2295ddc..58759454b1d0 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -25,10 +25,6 @@ TEST_PROGS := run_vmtests
+ 
+ include ../lib.mk
+ 
+-$(OUTPUT)/userfaultfd: ../../../../usr/include/linux/kernel.h
+ $(OUTPUT)/userfaultfd: LDLIBS += -lpthread
+ 
+ $(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+-
+-../../../../usr/include/linux/kernel.h:
+-	make -C ../../../.. headers_install


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-10-13 16:32 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-10-13 16:32 UTC (permalink / raw
  To: gentoo-commits

commit:     0321d35911e5d0856e9295cc57f42916d1b21282
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 13 16:32:27 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Oct 13 16:32:27 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0321d359

Linux patch 4.18.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1013_linux-4.18.14.patch | 1692 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1696 insertions(+)

diff --git a/0000_README b/0000_README
index f5bb594..6d1cb28 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-4.18.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.13
 
+Patch:  1013_linux-4.18.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-4.18.14.patch b/1013_linux-4.18.14.patch
new file mode 100644
index 0000000..742cbc9
--- /dev/null
+++ b/1013_linux-4.18.14.patch
@@ -0,0 +1,1692 @@
+diff --git a/Makefile b/Makefile
+index 4442e9ea4b6d..5274f8ae6b44 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 4674541eba3f..8ce6e7235915 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -241,6 +241,26 @@ int copy_thread(unsigned long clone_flags,
+ 		task_thread_info(current)->thr_ptr;
+ 	}
+ 
++
++	/*
++	 * setup usermode thread pointer #1:
++	 * when child is picked by scheduler, __switch_to() uses @c_callee to
++	 * populate usermode callee regs: this works (despite being in a kernel
++	 * function) since special return path for child @ret_from_fork()
++	 * ensures those regs are not clobbered all the way to RTIE to usermode
++	 */
++	c_callee->r25 = task_thread_info(p)->thr_ptr;
++
++#ifdef CONFIG_ARC_CURR_IN_REG
++	/*
++	 * setup usermode thread pointer #2:
++	 * however for this special use of r25 in kernel, __switch_to() sets
++	 * r25 for kernel needs and only in the final return path is usermode
++	 * r25 setup, from pt_regs->user_r25. So set that up as well
++	 */
++	c_regs->user_r25 = c_callee->r25;
++#endif
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
+index 8721fd004291..e1a1518a1ec7 100644
+--- a/arch/powerpc/include/asm/setup.h
++++ b/arch/powerpc/include/asm/setup.h
+@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
+ 
+ extern unsigned int rtas_data;
+ extern unsigned long long memory_limit;
++extern bool init_mem_is_free;
+ extern unsigned long klimit;
+ extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
+ 
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index e0d881ab304e..30cbcadb54d5 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -142,7 +142,7 @@ static inline int unmap_patch_area(unsigned long addr)
+ 	return 0;
+ }
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	int err;
+ 	unsigned int *patch_addr = NULL;
+@@ -182,12 +182,22 @@ out:
+ }
+ #else /* !CONFIG_STRICT_KERNEL_RWX */
+ 
+-int patch_instruction(unsigned int *addr, unsigned int instr)
++static int do_patch_instruction(unsigned int *addr, unsigned int instr)
+ {
+ 	return raw_patch_instruction(addr, instr);
+ }
+ 
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
++
++int patch_instruction(unsigned int *addr, unsigned int instr)
++{
++	/* Make sure we aren't patching a freed init section */
++	if (init_mem_is_free && init_section_contains(addr, 4)) {
++		pr_debug("Skipping init section patching addr: 0x%px\n", addr);
++		return 0;
++	}
++	return do_patch_instruction(addr, instr);
++}
+ NOKPROBE_SYMBOL(patch_instruction);
+ 
+ int patch_branch(unsigned int *addr, unsigned long target, int flags)
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 5c8530d0c611..04ccb274a620 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -63,6 +63,7 @@
+ #endif
+ 
+ unsigned long long memory_limit;
++bool init_mem_is_free;
+ 
+ #ifdef CONFIG_HIGHMEM
+ pte_t *kmap_pte;
+@@ -396,6 +397,7 @@ void free_initmem(void)
+ {
+ 	ppc_md.progress = ppc_printk_progress;
+ 	mark_initmem_nx();
++	init_mem_is_free = true;
+ 	free_initmem_default(POISON_FREE_INITMEM);
+ }
+ 
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 9589878faf46..eb1ed9a7109d 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,7 +72,13 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  CFL += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
+ 
+ $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+@@ -144,7 +150,13 @@ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
+-KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_VDSO_CFLAGS),)
++  KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
++endif
++endif
++
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
+index f19856d95c60..e48ca3afa091 100644
+--- a/arch/x86/entry/vdso/vclock_gettime.c
++++ b/arch/x86/entry/vdso/vclock_gettime.c
+@@ -43,8 +43,9 @@ extern u8 hvclock_page
+ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*ts) :
++	     "0" (__NR_clock_gettime), "D" (clock), "S" (ts) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -52,8 +53,9 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm("syscall" : "=a" (ret) :
+-	    "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
++	asm ("syscall" : "=a" (ret), "=m" (*tv), "=m" (*tz) :
++	     "0" (__NR_gettimeofday), "D" (tv), "S" (tz) :
++	     "memory", "rcx", "r11");
+ 	return ret;
+ }
+ 
+@@ -64,13 +66,13 @@ notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[clock], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_clock_gettime), "g" (clock), "c" (ts)
++		: "=a" (ret), "=m" (*ts)
++		: "0" (__NR_clock_gettime), [clock] "g" (clock), "c" (ts)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+@@ -79,13 +81,13 @@ notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
+ {
+ 	long ret;
+ 
+-	asm(
++	asm (
+ 		"mov %%ebx, %%edx \n"
+-		"mov %2, %%ebx \n"
++		"mov %[tv], %%ebx \n"
+ 		"call __kernel_vsyscall \n"
+ 		"mov %%edx, %%ebx \n"
+-		: "=a" (ret)
+-		: "0" (__NR_gettimeofday), "g" (tv), "c" (tz)
++		: "=a" (ret), "=m" (*tv), "=m" (*tz)
++		: "0" (__NR_gettimeofday), [tv] "g" (tv), "c" (tz)
+ 		: "memory", "edx");
+ 	return ret;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 97d41754769e..d02f0390c1c1 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -232,6 +232,17 @@ static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
+  */
+ static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
+ 
++/*
++ * In some cases, we need to preserve the GFN of a non-present or reserved
++ * SPTE when we usurp the upper five bits of the physical address space to
++ * defend against L1TF, e.g. for MMIO SPTEs.  To preserve the GFN, we'll
++ * shift bits of the GFN that overlap with shadow_nonpresent_or_rsvd_mask
++ * left into the reserved bits, i.e. the GFN in the SPTE will be split into
++ * high and low parts.  This mask covers the lower bits of the GFN.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
++
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -338,9 +349,7 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
+-		   shadow_nonpresent_or_rsvd_mask;
+-	u64 gpa = spte & ~mask;
++	u64 gpa = spte & shadow_nonpresent_or_rsvd_lower_gfn_mask;
+ 
+ 	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
+ 	       & shadow_nonpresent_or_rsvd_mask;
+@@ -404,6 +413,8 @@ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+ static void kvm_mmu_reset_all_pte_masks(void)
+ {
++	u8 low_phys_bits;
++
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+ 	shadow_dirty_mask = 0;
+@@ -418,12 +429,17 @@ static void kvm_mmu_reset_all_pte_masks(void)
+ 	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
+ 	 * assumed that the CPU is not vulnerable to L1TF.
+ 	 */
++	low_phys_bits = boot_cpu_data.x86_phys_bits;
+ 	if (boot_cpu_data.x86_phys_bits <
+-	    52 - shadow_nonpresent_or_rsvd_mask_len)
++	    52 - shadow_nonpresent_or_rsvd_mask_len) {
+ 		shadow_nonpresent_or_rsvd_mask =
+ 			rsvd_bits(boot_cpu_data.x86_phys_bits -
+ 				  shadow_nonpresent_or_rsvd_mask_len,
+ 				  boot_cpu_data.x86_phys_bits - 1);
++		low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
++	}
++	shadow_nonpresent_or_rsvd_lower_gfn_mask =
++		GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index d0c3be353bb6..32721ef9652d 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -9826,15 +9826,16 @@ static void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu)
+ 	if (!lapic_in_kernel(vcpu))
+ 		return;
+ 
++	if (!flexpriority_enabled &&
++	    !cpu_has_vmx_virtualize_x2apic_mode())
++		return;
++
+ 	/* Postpone execution until vmcs01 is the current VMCS. */
+ 	if (is_guest_mode(vcpu)) {
+ 		to_vmx(vcpu)->nested.change_vmcs01_virtual_apic_mode = true;
+ 		return;
+ 	}
+ 
+-	if (!cpu_need_tpr_shadow(vcpu))
+-		return;
+-
+ 	sec_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+ 	sec_exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
+ 			      SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 2f9e14361673..90e8058ae557 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1596,7 +1596,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 		BUG_ON(!rq->q);
+ 		if (rq->mq_ctx != this_ctx) {
+ 			if (this_ctx) {
+-				trace_block_unplug(this_q, depth, from_schedule);
++				trace_block_unplug(this_q, depth, !from_schedule);
+ 				blk_mq_sched_insert_requests(this_q, this_ctx,
+ 								&ctx_list,
+ 								from_schedule);
+@@ -1616,7 +1616,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	 * on 'ctx_list'. Do those.
+ 	 */
+ 	if (this_ctx) {
+-		trace_block_unplug(this_q, depth, from_schedule);
++		trace_block_unplug(this_q, depth, !from_schedule);
+ 		blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list,
+ 						from_schedule);
+ 	}
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 3f68e2919dc5..a690fd400260 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1713,8 +1713,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 
+ 	dpm_wait_for_subordinate(dev, async);
+ 
+-	if (async_error)
++	if (async_error) {
++		dev->power.direct_complete = false;
+ 		goto Complete;
++	}
+ 
+ 	/*
+ 	 * If a device configured to wake up the system from sleep states
+@@ -1726,6 +1728,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 		pm_wakeup_event(dev, 0);
+ 
+ 	if (pm_wakeup_pending()) {
++		dev->power.direct_complete = false;
+ 		async_error = -EBUSY;
+ 		goto Complete;
+ 	}
+diff --git a/drivers/clocksource/timer-atmel-pit.c b/drivers/clocksource/timer-atmel-pit.c
+index ec8a4376f74f..2fab18fae4fc 100644
+--- a/drivers/clocksource/timer-atmel-pit.c
++++ b/drivers/clocksource/timer-atmel-pit.c
+@@ -180,26 +180,29 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	data->base = of_iomap(node, 0);
+ 	if (!data->base) {
+ 		pr_err("Could not map PIT address\n");
+-		return -ENXIO;
++		ret = -ENXIO;
++		goto exit;
+ 	}
+ 
+ 	data->mck = of_clk_get(node, 0);
+ 	if (IS_ERR(data->mck)) {
+ 		pr_err("Unable to get mck clk\n");
+-		return PTR_ERR(data->mck);
++		ret = PTR_ERR(data->mck);
++		goto exit;
+ 	}
+ 
+ 	ret = clk_prepare_enable(data->mck);
+ 	if (ret) {
+ 		pr_err("Unable to enable mck\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Get the interrupts property */
+ 	data->irq = irq_of_parse_and_map(node, 0);
+ 	if (!data->irq) {
+ 		pr_err("Unable to get IRQ from DT\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	/*
+@@ -227,7 +230,7 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	ret = clocksource_register_hz(&data->clksrc, pit_rate);
+ 	if (ret) {
+ 		pr_err("Failed to register clocksource\n");
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Set up irq handler */
+@@ -236,7 +239,8 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 			  "at91_tick", data);
+ 	if (ret) {
+ 		pr_err("Unable to setup IRQ\n");
+-		return ret;
++		clocksource_unregister(&data->clksrc);
++		goto exit;
+ 	}
+ 
+ 	/* Set up and register clockevents */
+@@ -254,6 +258,10 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+ 	clockevents_register_device(&data->clkevt);
+ 
+ 	return 0;
++
++exit:
++	kfree(data);
++	return ret;
+ }
+ TIMER_OF_DECLARE(at91sam926x_pit, "atmel,at91sam9260-pit",
+ 		       at91sam926x_pit_dt_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 23d960ec1cf2..acad2999560c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -246,6 +246,8 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ {
+ 	int i;
+ 
++	cancel_delayed_work_sync(&adev->vce.idle_work);
++
+ 	if (adev->vce.vcpu_bo == NULL)
+ 		return 0;
+ 
+@@ -256,7 +258,6 @@ int amdgpu_vce_suspend(struct amdgpu_device *adev)
+ 	if (i == AMDGPU_MAX_VCE_HANDLES)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vce.idle_work);
+ 	/* TODO: suspending running encoding sessions isn't supported */
+ 	return -EINVAL;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index bee49991c1ff..2dc3d1e28f3c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -151,11 +151,11 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
+ 	unsigned size;
+ 	void *ptr;
+ 
++	cancel_delayed_work_sync(&adev->vcn.idle_work);
++
+ 	if (adev->vcn.vcpu_bo == NULL)
+ 		return 0;
+ 
+-	cancel_delayed_work_sync(&adev->vcn.idle_work);
+-
+ 	size = amdgpu_bo_size(adev->vcn.vcpu_bo);
+ 	ptr = adev->vcn.cpu_addr;
+ 
+diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c
+index d638c0fb3418..23a5643a4b98 100644
+--- a/drivers/gpu/drm/drm_lease.c
++++ b/drivers/gpu/drm/drm_lease.c
+@@ -566,14 +566,14 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev,
+ 	lessee_priv->is_master = 1;
+ 	lessee_priv->authenticated = 1;
+ 
+-	/* Hook up the fd */
+-	fd_install(fd, lessee_file);
+-
+ 	/* Pass fd back to userspace */
+ 	DRM_DEBUG_LEASE("Returning fd %d id %d\n", fd, lessee->lessee_id);
+ 	cl->fd = fd;
+ 	cl->lessee_id = lessee->lessee_id;
+ 
++	/* Hook up the fd */
++	fd_install(fd, lessee_file);
++
+ 	DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl succeeded\n");
+ 	return 0;
+ 
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index d4f4ce484529..8e71da403324 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -97,6 +97,8 @@ static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
+ {
+ 	int ret;
+ 
++	WARN_ON(*fence);
++
+ 	*fence = drm_syncobj_fence_get(syncobj);
+ 	if (*fence)
+ 		return 1;
+@@ -744,6 +746,9 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
+ 
+ 	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
+ 		for (i = 0; i < count; ++i) {
++			if (entries[i].fence)
++				continue;
++
+ 			drm_syncobj_fence_get_or_add_callback(syncobjs[i],
+ 							      &entries[i].fence,
+ 							      &entries[i].syncobj_cb,
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 5f437d1570fb..21863ddde63e 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -1759,6 +1759,8 @@ static int ucma_close(struct inode *inode, struct file *filp)
+ 		mutex_lock(&mut);
+ 		if (!ctx->closing) {
+ 			mutex_unlock(&mut);
++			ucma_put_ctx(ctx);
++			wait_for_completion(&ctx->comp);
+ 			/* rdma_destroy_id ensures that no event handlers are
+ 			 * inflight for that id before releasing it.
+ 			 */
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 69dddeab124c..5936de71883f 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -1455,8 +1455,8 @@ static int __load_mappings(struct dm_cache_metadata *cmd,
+ 		if (hints_valid) {
+ 			r = dm_array_cursor_next(&cmd->hint_cursor);
+ 			if (r) {
+-				DMERR("dm_array_cursor_next for hint failed");
+-				goto out;
++				dm_array_cursor_end(&cmd->hint_cursor);
++				hints_valid = false;
+ 			}
+ 		}
+ 
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 44df244807e5..a39ae8f45e32 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -3017,8 +3017,13 @@ static dm_cblock_t get_cache_dev_size(struct cache *cache)
+ 
+ static bool can_resize(struct cache *cache, dm_cblock_t new_size)
+ {
+-	if (from_cblock(new_size) > from_cblock(cache->cache_size))
+-		return true;
++	if (from_cblock(new_size) > from_cblock(cache->cache_size)) {
++		if (cache->sized) {
++			DMERR("%s: unable to extend cache due to missing cache table reload",
++			      cache_device_name(cache));
++			return false;
++		}
++	}
+ 
+ 	/*
+ 	 * We can't drop a dirty block when shrinking the cache.
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index d94ba6f72ff5..419362c2d8ac 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -806,19 +806,19 @@ static int parse_path_selector(struct dm_arg_set *as, struct priority_group *pg,
+ }
+ 
+ static int setup_scsi_dh(struct block_device *bdev, struct multipath *m,
+-			 const char *attached_handler_name, char **error)
++			 const char **attached_handler_name, char **error)
+ {
+ 	struct request_queue *q = bdev_get_queue(bdev);
+ 	int r;
+ 
+ 	if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags)) {
+ retain:
+-		if (attached_handler_name) {
++		if (*attached_handler_name) {
+ 			/*
+ 			 * Clear any hw_handler_params associated with a
+ 			 * handler that isn't already attached.
+ 			 */
+-			if (m->hw_handler_name && strcmp(attached_handler_name, m->hw_handler_name)) {
++			if (m->hw_handler_name && strcmp(*attached_handler_name, m->hw_handler_name)) {
+ 				kfree(m->hw_handler_params);
+ 				m->hw_handler_params = NULL;
+ 			}
+@@ -830,7 +830,8 @@ retain:
+ 			 * handler instead of the original table passed in.
+ 			 */
+ 			kfree(m->hw_handler_name);
+-			m->hw_handler_name = attached_handler_name;
++			m->hw_handler_name = *attached_handler_name;
++			*attached_handler_name = NULL;
+ 		}
+ 	}
+ 
+@@ -867,7 +868,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	struct pgpath *p;
+ 	struct multipath *m = ti->private;
+ 	struct request_queue *q;
+-	const char *attached_handler_name;
++	const char *attached_handler_name = NULL;
+ 
+ 	/* we need at least a path arg */
+ 	if (as->argc < 1) {
+@@ -890,7 +891,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	attached_handler_name = scsi_dh_attached_handler_name(q, GFP_KERNEL);
+ 	if (attached_handler_name || m->hw_handler_name) {
+ 		INIT_DELAYED_WORK(&p->activate_path, activate_path_work);
+-		r = setup_scsi_dh(p->path.dev->bdev, m, attached_handler_name, &ti->error);
++		r = setup_scsi_dh(p->path.dev->bdev, m, &attached_handler_name, &ti->error);
+ 		if (r) {
+ 			dm_put_device(ti, p->path.dev);
+ 			goto bad;
+@@ -905,6 +906,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 
+ 	return p;
+  bad:
++	kfree(attached_handler_name);
+ 	free_pgpath(p);
+ 	return ERR_PTR(r);
+ }
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index abf9e884386c..f57f5de54206 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -235,7 +235,7 @@ int mmc_of_parse(struct mmc_host *host)
+ 			host->caps |= MMC_CAP_NEEDS_POLL;
+ 
+ 		ret = mmc_gpiod_request_cd(host, "cd", 0, true,
+-					   cd_debounce_delay_ms,
++					   cd_debounce_delay_ms * 1000,
+ 					   &cd_gpio_invert);
+ 		if (!ret)
+ 			dev_info(host->parent, "Got CD GPIO\n");
+diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c
+index 2a833686784b..86803a3a04dc 100644
+--- a/drivers/mmc/core/slot-gpio.c
++++ b/drivers/mmc/core/slot-gpio.c
+@@ -271,7 +271,7 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
+ 	if (debounce) {
+ 		ret = gpiod_set_debounce(desc, debounce);
+ 		if (ret < 0)
+-			ctx->cd_debounce_delay_ms = debounce;
++			ctx->cd_debounce_delay_ms = debounce / 1000;
+ 	}
+ 
+ 	if (gpio_invert)
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 21eb3a598a86..bdaad6e93be5 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1619,10 +1619,10 @@ ath10k_wmi_tlv_op_gen_start_scan(struct ath10k *ar,
+ 	bssid_len = arg->n_bssids * sizeof(struct wmi_mac_addr);
+ 	ie_len = roundup(arg->ie_len, 4);
+ 	len = (sizeof(*tlv) + sizeof(*cmd)) +
+-	      (arg->n_channels ? sizeof(*tlv) + chan_len : 0) +
+-	      (arg->n_ssids ? sizeof(*tlv) + ssid_len : 0) +
+-	      (arg->n_bssids ? sizeof(*tlv) + bssid_len : 0) +
+-	      (arg->ie_len ? sizeof(*tlv) + ie_len : 0);
++	      sizeof(*tlv) + chan_len +
++	      sizeof(*tlv) + ssid_len +
++	      sizeof(*tlv) + bssid_len +
++	      sizeof(*tlv) + ie_len;
+ 
+ 	skb = ath10k_wmi_alloc_skb(ar, len);
+ 	if (!skb)
+diff --git a/drivers/net/xen-netback/hash.c b/drivers/net/xen-netback/hash.c
+index 3c4c58b9fe76..3b6fb5b3bdb2 100644
+--- a/drivers/net/xen-netback/hash.c
++++ b/drivers/net/xen-netback/hash.c
+@@ -332,20 +332,22 @@ u32 xenvif_set_hash_mapping_size(struct xenvif *vif, u32 size)
+ u32 xenvif_set_hash_mapping(struct xenvif *vif, u32 gref, u32 len,
+ 			    u32 off)
+ {
+-	u32 *mapping = &vif->hash.mapping[off];
++	u32 *mapping = vif->hash.mapping;
+ 	struct gnttab_copy copy_op = {
+ 		.source.u.ref = gref,
+ 		.source.domid = vif->domid,
+-		.dest.u.gmfn = virt_to_gfn(mapping),
+ 		.dest.domid = DOMID_SELF,
+-		.dest.offset = xen_offset_in_page(mapping),
+-		.len = len * sizeof(u32),
++		.len = len * sizeof(*mapping),
+ 		.flags = GNTCOPY_source_gref
+ 	};
+ 
+-	if ((off + len > vif->hash.size) || copy_op.len > XEN_PAGE_SIZE)
++	if ((off + len < off) || (off + len > vif->hash.size) ||
++	    len > XEN_PAGE_SIZE / sizeof(*mapping))
+ 		return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+ 
++	copy_op.dest.u.gmfn = virt_to_gfn(mapping + off);
++	copy_op.dest.offset = xen_offset_in_page(mapping + off);
++
+ 	while (len-- != 0)
+ 		if (mapping[off++] >= vif->num_queues)
+ 			return XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER;
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 722537e14848..41b49716ac75 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -771,6 +771,9 @@ static void __init of_unittest_parse_interrupts(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -845,6 +848,9 @@ static void __init of_unittest_parse_interrupts_extended(void)
+ 	struct of_phandle_args args;
+ 	int i, rc;
+ 
++	if (of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)
++		return;
++
+ 	np = of_find_node_by_path("/testcase-data/interrupts/interrupts-extended0");
+ 	if (!np) {
+ 		pr_err("missing testcase data\n");
+@@ -1001,15 +1007,19 @@ static void __init of_unittest_platform_populate(void)
+ 	pdev = of_find_device_by_node(np);
+ 	unittest(pdev, "device 1 creation failed\n");
+ 
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq);
++	if (!(of_irq_workarounds & OF_IMAP_OLDWORLD_MAC)) {
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq == -EPROBE_DEFER,
++			 "device deferred probe failed - %d\n", irq);
+ 
+-	/* Test that a parsing failure does not return -EPROBE_DEFER */
+-	np = of_find_node_by_path("/testcase-data/testcase-device2");
+-	pdev = of_find_device_by_node(np);
+-	unittest(pdev, "device 2 creation failed\n");
+-	irq = platform_get_irq(pdev, 0);
+-	unittest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq);
++		/* Test that a parsing failure does not return -EPROBE_DEFER */
++		np = of_find_node_by_path("/testcase-data/testcase-device2");
++		pdev = of_find_device_by_node(np);
++		unittest(pdev, "device 2 creation failed\n");
++		irq = platform_get_irq(pdev, 0);
++		unittest(irq < 0 && irq != -EPROBE_DEFER,
++			 "device parsing error failed - %d\n", irq);
++	}
+ 
+ 	np = of_find_node_by_path("/testcase-data/platform-tests");
+ 	unittest(np, "No testcase data in device tree\n");
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 0abe2865a3a5..c97ad905e7c9 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1125,12 +1125,12 @@ int pci_save_state(struct pci_dev *dev)
+ EXPORT_SYMBOL(pci_save_state);
+ 
+ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+-				     u32 saved_val, int retry)
++				     u32 saved_val, int retry, bool force)
+ {
+ 	u32 val;
+ 
+ 	pci_read_config_dword(pdev, offset, &val);
+-	if (val == saved_val)
++	if (!force && val == saved_val)
+ 		return;
+ 
+ 	for (;;) {
+@@ -1149,25 +1149,36 @@ static void pci_restore_config_dword(struct pci_dev *pdev, int offset,
+ }
+ 
+ static void pci_restore_config_space_range(struct pci_dev *pdev,
+-					   int start, int end, int retry)
++					   int start, int end, int retry,
++					   bool force)
+ {
+ 	int index;
+ 
+ 	for (index = end; index >= start; index--)
+ 		pci_restore_config_dword(pdev, 4 * index,
+ 					 pdev->saved_config_space[index],
+-					 retry);
++					 retry, force);
+ }
+ 
+ static void pci_restore_config_space(struct pci_dev *pdev)
+ {
+ 	if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL) {
+-		pci_restore_config_space_range(pdev, 10, 15, 0);
++		pci_restore_config_space_range(pdev, 10, 15, 0, false);
+ 		/* Restore BARs before the command register. */
+-		pci_restore_config_space_range(pdev, 4, 9, 10);
+-		pci_restore_config_space_range(pdev, 0, 3, 0);
++		pci_restore_config_space_range(pdev, 4, 9, 10, false);
++		pci_restore_config_space_range(pdev, 0, 3, 0, false);
++	} else if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
++		pci_restore_config_space_range(pdev, 12, 15, 0, false);
++
++		/*
++		 * Force rewriting of prefetch registers to avoid S3 resume
++		 * issues on Intel PCI bridges that occur when these
++		 * registers are not explicitly written.
++		 */
++		pci_restore_config_space_range(pdev, 9, 11, 0, true);
++		pci_restore_config_space_range(pdev, 0, 8, 0, false);
+ 	} else {
+-		pci_restore_config_space_range(pdev, 0, 15, 0);
++		pci_restore_config_space_range(pdev, 0, 15, 0, false);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index aba59521ad48..31d06f59c4e4 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1264,6 +1264,7 @@ static void tty_driver_remove_tty(struct tty_driver *driver, struct tty_struct *
+ static int tty_reopen(struct tty_struct *tty)
+ {
+ 	struct tty_driver *driver = tty->driver;
++	int retval;
+ 
+ 	if (driver->type == TTY_DRIVER_TYPE_PTY &&
+ 	    driver->subtype == PTY_TYPE_MASTER)
+@@ -1277,10 +1278,14 @@ static int tty_reopen(struct tty_struct *tty)
+ 
+ 	tty->count++;
+ 
+-	if (!tty->ldisc)
+-		return tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (tty->ldisc)
++		return 0;
+ 
+-	return 0;
++	retval = tty_ldisc_reinit(tty, tty->termios.c_line);
++	if (retval)
++		tty->count--;
++
++	return retval;
+ }
+ 
+ /**
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f8ee32d9843a..84f52774810a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1514,6 +1514,7 @@ static void acm_disconnect(struct usb_interface *intf)
+ {
+ 	struct acm *acm = usb_get_intfdata(intf);
+ 	struct tty_struct *tty;
++	int i;
+ 
+ 	/* sibling interface is already cleaning up */
+ 	if (!acm)
+@@ -1544,6 +1545,11 @@ static void acm_disconnect(struct usb_interface *intf)
+ 
+ 	tty_unregister_device(acm_tty_driver, acm->minor);
+ 
++	usb_free_urb(acm->ctrlurb);
++	for (i = 0; i < ACM_NW; i++)
++		usb_free_urb(acm->wb[i].urb);
++	for (i = 0; i < acm->rx_buflimit; i++)
++		usb_free_urb(acm->read_urbs[i]);
+ 	acm_write_buffers_free(acm);
+ 	usb_free_coherent(acm->dev, acm->ctrlsize, acm->ctrl_buffer, acm->ctrl_dma);
+ 	acm_read_buffers_free(acm);
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 7334da9e9779..71d0d33c3286 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -642,10 +642,10 @@ static int __maybe_unused xhci_mtk_resume(struct device *dev)
+ 	xhci_mtk_host_enable(mtk);
+ 
+ 	xhci_dbg(xhci, "%s: restart port polling\n", __func__);
+-	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+-	usb_hcd_poll_rh_status(hcd);
+ 	set_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
+ 	usb_hcd_poll_rh_status(xhci->shared_hcd);
++	set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
++	usb_hcd_poll_rh_status(hcd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 6372edf339d9..722860eb5a91 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -185,6 +185,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	}
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI))
+ 		xhci->quirks |= XHCI_MISSING_CAS;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0215b70c4efc..e72ad9f81c73 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -561,6 +561,9 @@ static void option_instat_callback(struct urb *urb);
+ /* Interface is reserved */
+ #define RSVD(ifnum)	((BIT(ifnum) & 0xff) << 0)
+ 
++/* Interface must have two endpoints */
++#define NUMEP2		BIT(16)
++
+ 
+ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
+@@ -1081,8 +1084,9 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06),
+-	  .driver_info = RSVD(4) | RSVD(5) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1999,6 +2003,13 @@ static int option_probe(struct usb_serial *serial,
+ 	if (device_flags & RSVD(iface_desc->bInterfaceNumber))
+ 		return -ENODEV;
+ 
++	/*
++	 * Allow matching on bNumEndpoints for devices whose interface numbers
++	 * can change (e.g. Quectel EP06).
++	 */
++	if (device_flags & NUMEP2 && iface_desc->bNumEndpoints != 2)
++		return -ENODEV;
++
+ 	/* Store the device flags so we can use them during attach. */
+ 	usb_set_serial_data(serial, (void *)device_flags);
+ 
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 40864c2bd9dc..4d0273508043 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -84,7 +84,8 @@ DEVICE(moto_modem, MOTO_IDS);
+ 
+ /* Motorola Tetra driver */
+ #define MOTOROLA_TETRA_IDS()			\
+-	{ USB_DEVICE(0x0cad, 0x9011) }	/* Motorola Solutions TETRA PEI */
++	{ USB_DEVICE(0x0cad, 0x9011) },	/* Motorola Solutions TETRA PEI */ \
++	{ USB_DEVICE(0x0cad, 0x9012) }	/* MTP6550 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+ 
+ /* Novatel Wireless GPS driver */
+diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+index ef69273074ba..a3edb20ea4c3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
++++ b/drivers/video/fbdev/omap2/omapfb/omapfb-ioctl.c
+@@ -496,6 +496,9 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 	if (!access_ok(VERIFY_WRITE, mr->buffer, mr->buffer_size))
+ 		return -EFAULT;
+ 
++	if (mr->w > 4096 || mr->h > 4096)
++		return -EINVAL;
++
+ 	if (mr->w * mr->h * 3 > mr->buffer_size)
+ 		return -EINVAL;
+ 
+@@ -509,7 +512,7 @@ static int omapfb_memory_read(struct fb_info *fbi,
+ 			mr->x, mr->y, mr->w, mr->h);
+ 
+ 	if (r > 0) {
+-		if (copy_to_user(mr->buffer, buf, mr->buffer_size))
++		if (copy_to_user(mr->buffer, buf, r))
+ 			r = -EFAULT;
+ 	}
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 9f1c96caebda..782e7243c5c0 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -746,6 +746,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
+ 	if (crc_offset > (blk_size - sizeof(__le32))) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+ 			"invalid crc_offset: %zu", crc_offset);
+ 		return -EINVAL;
+@@ -753,6 +754,7 @@ static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
+ 
+ 	crc = cur_cp_crc(*cp_block);
+ 	if (!f2fs_crc_valid(sbi, crc, *cp_block, crc_offset)) {
++		f2fs_put_page(*cp_page, 1);
+ 		f2fs_msg(sbi->sb, KERN_WARNING, "invalid crc value");
+ 		return -EINVAL;
+ 	}
+@@ -772,14 +774,14 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_1, version);
+ 	if (err)
+-		goto invalid_cp1;
++		return NULL;
+ 	pre_version = *version;
+ 
+ 	cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
+ 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ 					&cp_page_2, version);
+ 	if (err)
+-		goto invalid_cp2;
++		goto invalid_cp;
+ 	cur_version = *version;
+ 
+ 	if (cur_version == pre_version) {
+@@ -787,9 +789,8 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ 		f2fs_put_page(cp_page_2, 1);
+ 		return cp_page_1;
+ 	}
+-invalid_cp2:
+ 	f2fs_put_page(cp_page_2, 1);
+-invalid_cp1:
++invalid_cp:
+ 	f2fs_put_page(cp_page_1, 1);
+ 	return NULL;
+ }
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index bbd1e357c23d..f4fd2e72add4 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -898,8 +898,22 @@ static struct platform_driver ramoops_driver = {
+ 	},
+ };
+ 
+-static void ramoops_register_dummy(void)
++static inline void ramoops_unregister_dummy(void)
+ {
++	platform_device_unregister(dummy);
++	dummy = NULL;
++
++	kfree(dummy_data);
++	dummy_data = NULL;
++}
++
++static void __init ramoops_register_dummy(void)
++{
++	/*
++	 * Prepare a dummy platform data structure to carry the module
++	 * parameters. If mem_size isn't set, then there are no module
++	 * parameters, and we can skip this.
++	 */
+ 	if (!mem_size)
+ 		return;
+ 
+@@ -932,21 +946,28 @@ static void ramoops_register_dummy(void)
+ 	if (IS_ERR(dummy)) {
+ 		pr_info("could not create platform device: %ld\n",
+ 			PTR_ERR(dummy));
++		dummy = NULL;
++		ramoops_unregister_dummy();
+ 	}
+ }
+ 
+ static int __init ramoops_init(void)
+ {
++	int ret;
++
+ 	ramoops_register_dummy();
+-	return platform_driver_register(&ramoops_driver);
++	ret = platform_driver_register(&ramoops_driver);
++	if (ret != 0)
++		ramoops_unregister_dummy();
++
++	return ret;
+ }
+ late_initcall(ramoops_init);
+ 
+ static void __exit ramoops_exit(void)
+ {
+ 	platform_driver_unregister(&ramoops_driver);
+-	platform_device_unregister(dummy);
+-	kfree(dummy_data);
++	ramoops_unregister_dummy();
+ }
+ module_exit(ramoops_exit);
+ 
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index c5466c70d620..2a82aeeacba5 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1929,6 +1929,9 @@ static struct ubi_volume_desc *open_ubi(const char *name, int mode)
+ 	int dev, vol;
+ 	char *endptr;
+ 
++	if (!name || !*name)
++		return ERR_PTR(-EINVAL);
++
+ 	/* First, try to open using the device node path method */
+ 	ubi = ubi_open_volume_path(name, mode);
+ 	if (!IS_ERR(ubi))
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 36fa6a2a82e3..4ee95d8c8413 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -140,6 +140,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 		       unsigned long addr, unsigned long sz);
+ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end);
+ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
+ 			      int write);
+ struct page *follow_huge_pd(struct vm_area_struct *vma,
+@@ -170,6 +172,18 @@ static inline unsigned long hugetlb_total_pages(void)
+ 	return 0;
+ }
+ 
++static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
++					pte_t *ptep)
++{
++	return 0;
++}
++
++static inline void adjust_range_if_pmd_sharing_possible(
++				struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
++
+ #define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n)	({ BUG(); 0; })
+ #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
+ #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 68a5121694ef..40ad93bc9548 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2463,6 +2463,12 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
+ 	return vma;
+ }
+ 
++static inline bool range_in_vma(struct vm_area_struct *vma,
++				unsigned long start, unsigned long end)
++{
++	return (vma && vma->vm_start <= start && end <= vma->vm_end);
++}
++
+ #ifdef CONFIG_MMU
+ pgprot_t vm_get_page_prot(unsigned long vm_flags);
+ void vma_set_page_prot(struct vm_area_struct *vma);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index c7b3e34811ec..ae22d93701db 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -3940,6 +3940,12 @@ int perf_event_read_local(struct perf_event *event, u64 *value,
+ 		goto out;
+ 	}
+ 
++	/* If this is a pinned event it must be running on this CPU */
++	if (event->attr.pinned && event->oncpu != smp_processor_id()) {
++		ret = -EBUSY;
++		goto out;
++	}
++
+ 	/*
+ 	 * If the event is currently on this CPU, its either a per-task event,
+ 	 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 25346bd99364..571875b37453 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2929,7 +2929,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
+ 	else
+ 		page_add_file_rmap(new, true);
+ 	set_pmd_at(mm, mmun_start, pvmw->pmd, pmde);
+-	if (vma->vm_flags & VM_LOCKED)
++	if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new))
+ 		mlock_vma_page(new);
+ 	update_mmu_cache_pmd(vma, address, pvmw->pmd);
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3103099f64fd..f469315a6a0f 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4556,12 +4556,40 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+ 	/*
+ 	 * check on proper vm_flags and page table alignment
+ 	 */
+-	if (vma->vm_flags & VM_MAYSHARE &&
+-	    vma->vm_start <= base && end <= vma->vm_end)
++	if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
+ 		return true;
+ 	return false;
+ }
+ 
++/*
++ * Determine if start,end range within vma could be mapped by shared pmd.
++ * If yes, adjust start and end to cover range associated with possible
++ * shared pmd mappings.
++ */
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++	unsigned long check_addr = *start;
++
++	if (!(vma->vm_flags & VM_MAYSHARE))
++		return;
++
++	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
++		unsigned long a_start = check_addr & PUD_MASK;
++		unsigned long a_end = a_start + PUD_SIZE;
++
++		/*
++		 * If sharing is possible, adjust start/end if necessary.
++		 */
++		if (range_in_vma(vma, a_start, a_end)) {
++			if (a_start < *start)
++				*start = a_start;
++			if (a_end > *end)
++				*end = a_end;
++		}
++	}
++}
++
+ /*
+  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
+  * and returns the corresponding pte. While this is not necessary for the
+@@ -4659,6 +4687,11 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+ {
+ 	return 0;
+ }
++
++void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
++				unsigned long *start, unsigned long *end)
++{
++}
+ #define want_pmd_share()	(0)
+ #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+ 
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 8c0af0f7cab1..2a55289ee9f1 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -275,6 +275,9 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ 		if (vma->vm_flags & VM_LOCKED && !PageTransCompound(new))
+ 			mlock_vma_page(new);
+ 
++		if (PageTransHuge(page) && PageMlocked(page))
++			clear_page_mlock(page);
++
+ 		/* No need to invalidate - it was non-present before */
+ 		update_mmu_cache(vma, pvmw.address, pvmw.pte);
+ 	}
+diff --git a/mm/rmap.c b/mm/rmap.c
+index eb477809a5c0..1e79fac3186b 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 	}
+ 
+ 	/*
+-	 * We have to assume the worse case ie pmd for invalidation. Note that
+-	 * the page can not be free in this function as call of try_to_unmap()
+-	 * must hold a reference on the page.
++	 * For THP, we have to assume the worse case ie pmd for invalidation.
++	 * For hugetlb, it could be much worse if we need to do pud
++	 * invalidation in the case of pmd sharing.
++	 *
++	 * Note that the page can not be free in this function as call of
++	 * try_to_unmap() must hold a reference on the page.
+ 	 */
+ 	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
++	if (PageHuge(page)) {
++		/*
++		 * If sharing is possible, start and end will be adjusted
++		 * accordingly.
++		 */
++		adjust_range_if_pmd_sharing_possible(vma, &start, &end);
++	}
+ 	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
+ 
+ 	while (page_vma_mapped_walk(&pvmw)) {
+@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+ 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+ 		address = pvmw.address;
+ 
++		if (PageHuge(page)) {
++			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
++				/*
++				 * huge_pmd_unshare unmapped an entire PMD
++				 * page.  There is no way of knowing exactly
++				 * which PMDs may be cached for this mm, so
++				 * we must flush them all.  start/end were
++				 * already adjusted above to cover this range.
++				 */
++				flush_cache_range(vma, start, end);
++				flush_tlb_range(vma, start, end);
++				mmu_notifier_invalidate_range(mm, start, end);
++
++				/*
++				 * The ref count of the PMD page was dropped
++				 * which is part of the way map counting
++				 * is done for shared PMDs.  Return 'true'
++				 * here.  When there is no other sharing,
++				 * huge_pmd_unshare returns false and we will
++				 * unmap the actual page and drop map count
++				 * to zero.
++				 */
++				page_vma_mapped_walk_done(&pvmw);
++				break;
++			}
++		}
+ 
+ 		if (IS_ENABLED(CONFIG_MIGRATION) &&
+ 		    (flags & TTU_MIGRATION) &&
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 8ba0870ecddd..55a5bb1d773d 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1275,6 +1275,9 @@ const char * const vmstat_text[] = {
+ #ifdef CONFIG_SMP
+ 	"nr_tlb_remote_flush",
+ 	"nr_tlb_remote_flush_received",
++#else
++	"", /* nr_tlb_remote_flush */
++	"", /* nr_tlb_remote_flush_received */
+ #endif /* CONFIG_SMP */
+ 	"nr_tlb_local_flush_all",
+ 	"nr_tlb_local_flush_one",
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index aa082b71d2e4..c6bbe5b56378 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -427,7 +427,7 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
+ 	case NL80211_IFTYPE_AP:
+ 	case NL80211_IFTYPE_AP_VLAN:
+ 		/* Keys without a station are used for TX only */
+-		if (key->sta && test_sta_flag(key->sta, WLAN_STA_MFP))
++		if (sta && test_sta_flag(sta, WLAN_STA_MFP))
+ 			key->conf.flags |= IEEE80211_KEY_FLAG_RX_MGMT;
+ 		break;
+ 	case NL80211_IFTYPE_ADHOC:
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 555e389b7dfa..5d22c058ae23 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1756,7 +1756,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
+ 
+ 		if (local->ops->wake_tx_queue &&
+ 		    type != NL80211_IFTYPE_AP_VLAN &&
+-		    type != NL80211_IFTYPE_MONITOR)
++		    (type != NL80211_IFTYPE_MONITOR ||
++		     (params->flags & MONITOR_FLAG_ACTIVE)))
+ 			txq_size += sizeof(struct txq_info) +
+ 				    local->hw.txq_data_size;
+ 
+diff --git a/net/rds/ib.h b/net/rds/ib.h
+index a6f4d7d68e95..83ff7c18d691 100644
+--- a/net/rds/ib.h
++++ b/net/rds/ib.h
+@@ -371,7 +371,7 @@ void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc);
+ int rds_ib_recv_init(void);
+ void rds_ib_recv_exit(void);
+ int rds_ib_recv_path(struct rds_conn_path *conn);
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic);
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp);
+ void rds_ib_recv_free_caches(struct rds_ib_connection *ic);
+ void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp);
+ void rds_ib_inc_free(struct rds_incoming *inc);
+diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
+index f1684ae6abfd..6a909ea9e8fb 100644
+--- a/net/rds/ib_cm.c
++++ b/net/rds/ib_cm.c
+@@ -949,7 +949,7 @@ int rds_ib_conn_alloc(struct rds_connection *conn, gfp_t gfp)
+ 	if (!ic)
+ 		return -ENOMEM;
+ 
+-	ret = rds_ib_recv_alloc_caches(ic);
++	ret = rds_ib_recv_alloc_caches(ic, gfp);
+ 	if (ret) {
+ 		kfree(ic);
+ 		return ret;
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index b4e421aa9727..918d2e676b9b 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -98,12 +98,12 @@ static void rds_ib_cache_xfer_to_ready(struct rds_ib_refill_cache *cache)
+ 	}
+ }
+ 
+-static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
++static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache, gfp_t gfp)
+ {
+ 	struct rds_ib_cache_head *head;
+ 	int cpu;
+ 
+-	cache->percpu = alloc_percpu(struct rds_ib_cache_head);
++	cache->percpu = alloc_percpu_gfp(struct rds_ib_cache_head, gfp);
+ 	if (!cache->percpu)
+ 	       return -ENOMEM;
+ 
+@@ -118,13 +118,13 @@ static int rds_ib_recv_alloc_cache(struct rds_ib_refill_cache *cache)
+ 	return 0;
+ }
+ 
+-int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic)
++int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp)
+ {
+ 	int ret;
+ 
+-	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs);
++	ret = rds_ib_recv_alloc_cache(&ic->i_cache_incs, gfp);
+ 	if (!ret) {
+-		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags);
++		ret = rds_ib_recv_alloc_cache(&ic->i_cache_frags, gfp);
+ 		if (ret)
+ 			free_percpu(ic->i_cache_incs.percpu);
+ 	}
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index a2f76743c73a..82f665728382 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -185,6 +185,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 		return -ENOMEM;
+ 
+ 	buf->sk = msg->dst_sk;
++	__tipc_dump_start(&cb, msg->net);
+ 
+ 	do {
+ 		int rem;
+@@ -216,6 +217,7 @@ static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 	err = 0;
+ 
+ err_out:
++	tipc_dump_done(&cb);
+ 	kfree_skb(buf);
+ 
+ 	if (err == -EMSGSIZE) {
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index bdb4a9a5a83a..093e16d1b770 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,7 +3233,7 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+@@ -3269,8 +3269,14 @@ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
+ int tipc_dump_start(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *iter = (void *)cb->args[0];
+-	struct net *net = sock_net(cb->skb->sk);
++	return __tipc_dump_start(cb, sock_net(cb->skb->sk));
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net)
++{
++	/* tipc_nl_name_table_dump() uses cb->args[0...3]. */
++	struct rhashtable_iter *iter = (void *)cb->args[4];
+ 	struct tipc_net *tn = tipc_net(net);
+ 
+ 	if (!iter) {
+@@ -3278,17 +3284,16 @@ int tipc_dump_start(struct netlink_callback *cb)
+ 		if (!iter)
+ 			return -ENOMEM;
+ 
+-		cb->args[0] = (long)iter;
++		cb->args[4] = (long)iter;
+ 	}
+ 
+ 	rhashtable_walk_enter(&tn->sk_rht, iter);
+ 	return 0;
+ }
+-EXPORT_SYMBOL(tipc_dump_start);
+ 
+ int tipc_dump_done(struct netlink_callback *cb)
+ {
+-	struct rhashtable_iter *hti = (void *)cb->args[0];
++	struct rhashtable_iter *hti = (void *)cb->args[4];
+ 
+ 	rhashtable_walk_exit(hti);
+ 	kfree(hti);
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index d43032e26532..5e575f205afe 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -69,5 +69,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
+ int tipc_dump_start(struct netlink_callback *cb);
++int __tipc_dump_start(struct netlink_callback *cb, struct net *net);
+ int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/tools/testing/selftests/x86/test_vdso.c b/tools/testing/selftests/x86/test_vdso.c
+index 235259011704..35edd61d1663 100644
+--- a/tools/testing/selftests/x86/test_vdso.c
++++ b/tools/testing/selftests/x86/test_vdso.c
+@@ -17,6 +17,7 @@
+ #include <errno.h>
+ #include <sched.h>
+ #include <stdbool.h>
++#include <limits.h>
+ 
+ #ifndef SYS_getcpu
+ # ifdef __x86_64__
+@@ -31,6 +32,14 @@
+ 
+ int nerrs = 0;
+ 
++typedef int (*vgettime_t)(clockid_t, struct timespec *);
++
++vgettime_t vdso_clock_gettime;
++
++typedef long (*vgtod_t)(struct timeval *tv, struct timezone *tz);
++
++vgtod_t vdso_gettimeofday;
++
+ typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
+ 
+ getcpu_t vgetcpu;
+@@ -95,6 +104,15 @@ static void fill_function_pointers()
+ 		printf("Warning: failed to find getcpu in vDSO\n");
+ 
+ 	vgetcpu = (getcpu_t) vsyscall_getcpu();
++
++	vdso_clock_gettime = (vgettime_t)dlsym(vdso, "__vdso_clock_gettime");
++	if (!vdso_clock_gettime)
++		printf("Warning: failed to find clock_gettime in vDSO\n");
++
++	vdso_gettimeofday = (vgtod_t)dlsym(vdso, "__vdso_gettimeofday");
++	if (!vdso_gettimeofday)
++		printf("Warning: failed to find gettimeofday in vDSO\n");
++
+ }
+ 
+ static long sys_getcpu(unsigned * cpu, unsigned * node,
+@@ -103,6 +121,16 @@ static long sys_getcpu(unsigned * cpu, unsigned * node,
+ 	return syscall(__NR_getcpu, cpu, node, cache);
+ }
+ 
++static inline int sys_clock_gettime(clockid_t id, struct timespec *ts)
++{
++	return syscall(__NR_clock_gettime, id, ts);
++}
++
++static inline int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
++{
++	return syscall(__NR_gettimeofday, tv, tz);
++}
++
+ static void test_getcpu(void)
+ {
+ 	printf("[RUN]\tTesting getcpu...\n");
+@@ -155,10 +183,154 @@ static void test_getcpu(void)
+ 	}
+ }
+ 
++static bool ts_leq(const struct timespec *a, const struct timespec *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_nsec <= b->tv_nsec;
++}
++
++static bool tv_leq(const struct timeval *a, const struct timeval *b)
++{
++	if (a->tv_sec != b->tv_sec)
++		return a->tv_sec < b->tv_sec;
++	else
++		return a->tv_usec <= b->tv_usec;
++}
++
++static char const * const clocknames[] = {
++	[0] = "CLOCK_REALTIME",
++	[1] = "CLOCK_MONOTONIC",
++	[2] = "CLOCK_PROCESS_CPUTIME_ID",
++	[3] = "CLOCK_THREAD_CPUTIME_ID",
++	[4] = "CLOCK_MONOTONIC_RAW",
++	[5] = "CLOCK_REALTIME_COARSE",
++	[6] = "CLOCK_MONOTONIC_COARSE",
++	[7] = "CLOCK_BOOTTIME",
++	[8] = "CLOCK_REALTIME_ALARM",
++	[9] = "CLOCK_BOOTTIME_ALARM",
++	[10] = "CLOCK_SGI_CYCLE",
++	[11] = "CLOCK_TAI",
++};
++
++static void test_one_clock_gettime(int clock, const char *name)
++{
++	struct timespec start, vdso, end;
++	int vdso_ret, end_ret;
++
++	printf("[RUN]\tTesting clock_gettime for clock %s (%d)...\n", name, clock);
++
++	if (sys_clock_gettime(clock, &start) < 0) {
++		if (errno == EINVAL) {
++			vdso_ret = vdso_clock_gettime(clock, &vdso);
++			if (vdso_ret == -EINVAL) {
++				printf("[OK]\tNo such clock.\n");
++			} else {
++				printf("[FAIL]\tNo such clock, but __vdso_clock_gettime returned %d\n", vdso_ret);
++				nerrs++;
++			}
++		} else {
++			printf("[WARN]\t clock_gettime(%d) syscall returned error %d\n", clock, errno);
++		}
++		return;
++	}
++
++	vdso_ret = vdso_clock_gettime(clock, &vdso);
++	end_ret = sys_clock_gettime(clock, &end);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%09ld %llu.%09ld %llu.%09ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_nsec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_nsec,
++	       (unsigned long long)end.tv_sec, end.tv_nsec);
++
++	if (!ts_leq(&start, &vdso) || !ts_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++}
++
++static void test_clock_gettime(void)
++{
++	for (int clock = 0; clock < sizeof(clocknames) / sizeof(clocknames[0]);
++	     clock++) {
++		test_one_clock_gettime(clock, clocknames[clock]);
++	}
++
++	/* Also test some invalid clock ids */
++	test_one_clock_gettime(-1, "invalid");
++	test_one_clock_gettime(INT_MIN, "invalid");
++	test_one_clock_gettime(INT_MAX, "invalid");
++}
++
++static void test_gettimeofday(void)
++{
++	struct timeval start, vdso, end;
++	struct timezone sys_tz, vdso_tz;
++	int vdso_ret, end_ret;
++
++	if (!vdso_gettimeofday)
++		return;
++
++	printf("[RUN]\tTesting gettimeofday...\n");
++
++	if (sys_gettimeofday(&start, &sys_tz) < 0) {
++		printf("[FAIL]\tsys_gettimeofday failed (%d)\n", errno);
++		nerrs++;
++		return;
++	}
++
++	vdso_ret = vdso_gettimeofday(&vdso, &vdso_tz);
++	end_ret = sys_gettimeofday(&end, NULL);
++
++	if (vdso_ret != 0 || end_ret != 0) {
++		printf("[FAIL]\tvDSO returned %d, syscall errno=%d\n",
++		       vdso_ret, errno);
++		nerrs++;
++		return;
++	}
++
++	printf("\t%llu.%06ld %llu.%06ld %llu.%06ld\n",
++	       (unsigned long long)start.tv_sec, start.tv_usec,
++	       (unsigned long long)vdso.tv_sec, vdso.tv_usec,
++	       (unsigned long long)end.tv_sec, end.tv_usec);
++
++	if (!tv_leq(&start, &vdso) || !tv_leq(&vdso, &end)) {
++		printf("[FAIL]\tTimes are out of sequence\n");
++		nerrs++;
++	}
++
++	if (sys_tz.tz_minuteswest == vdso_tz.tz_minuteswest &&
++	    sys_tz.tz_dsttime == vdso_tz.tz_dsttime) {
++		printf("[OK]\ttimezones match: minuteswest=%d, dsttime=%d\n",
++		       sys_tz.tz_minuteswest, sys_tz.tz_dsttime);
++	} else {
++		printf("[FAIL]\ttimezones do not match\n");
++		nerrs++;
++	}
++
++	/* And make sure that passing NULL for tz doesn't crash. */
++	vdso_gettimeofday(&vdso, NULL);
++}
++
+ int main(int argc, char **argv)
+ {
+ 	fill_function_pointers();
+ 
++	test_clock_gettime();
++	test_gettimeofday();
++
++	/*
++	 * Test getcpu() last so that, if something goes wrong setting affinity,
++	 * we still run the other tests.
++	 */
+ 	test_getcpu();
+ 
+ 	return nerrs ? 1 : 0;


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-10-10 11:16 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-10-10 11:16 UTC (permalink / raw
  To: gentoo-commits

commit:     17d5844df544a2912e253984b677303cd31dac3a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 10 11:16:13 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Oct 10 11:16:13 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17d5844d

Linux patch 4.18.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1012_linux-4.18.13.patch | 7273 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7277 insertions(+)

diff --git a/0000_README b/0000_README
index ff87445..f5bb594 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-4.18.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.12
 
+Patch:  1012_linux-4.18.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-4.18.13.patch b/1012_linux-4.18.13.patch
new file mode 100644
index 0000000..6c8e751
--- /dev/null
+++ b/1012_linux-4.18.13.patch
@@ -0,0 +1,7273 @@
+diff --git a/Documentation/devicetree/bindings/net/sh_eth.txt b/Documentation/devicetree/bindings/net/sh_eth.txt
+index 82a4cf2c145d..a62fe3b613fc 100644
+--- a/Documentation/devicetree/bindings/net/sh_eth.txt
++++ b/Documentation/devicetree/bindings/net/sh_eth.txt
+@@ -16,6 +16,7 @@ Required properties:
+ 	      "renesas,ether-r8a7794"  if the device is a part of R8A7794 SoC.
+ 	      "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC.
+ 	      "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC.
++	      "renesas,ether-r7s9210" if the device is a part of R7S9210 SoC.
+ 	      "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device.
+ 	      "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1
+ 	                                device.
+diff --git a/Makefile b/Makefile
+index 466e07af8473..4442e9ea4b6d 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 11859287c52a..c98b59ac0612 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+ 	"1:	llock   %[orig], [%[ctr]]		\n"		\
+ 	"	" #asm_op " %[val], %[orig], %[i]	\n"		\
+ 	"	scond   %[val], [%[ctr]]		\n"		\
+-	"						\n"		\
++	"	bnz     1b				\n"		\
+ 	: [val]	"=&r"	(val),						\
+ 	  [orig] "=&r" (orig)						\
+ 	: [ctr]	"r"	(&v->counter),					\
+diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
+index 1b5e0e843c3a..7e2b3e360086 100644
+--- a/arch/arm64/include/asm/jump_label.h
++++ b/arch/arm64/include/asm/jump_label.h
+@@ -28,7 +28,7 @@
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm goto("1: nop\n\t"
++	asm_volatile_goto("1: nop\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+@@ -42,7 +42,7 @@ l_yes:
+ 
+ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
+ {
+-	asm goto("1: b %l[l_yes]\n\t"
++	asm_volatile_goto("1: b %l[l_yes]\n\t"
+ 		 ".pushsection __jump_table,  \"aw\"\n\t"
+ 		 ".align 3\n\t"
+ 		 ".quad 1b, %l[l_yes], %c0\n\t"
+diff --git a/arch/hexagon/include/asm/bitops.h b/arch/hexagon/include/asm/bitops.h
+index 5e4a59b3ec1b..2691a1857d20 100644
+--- a/arch/hexagon/include/asm/bitops.h
++++ b/arch/hexagon/include/asm/bitops.h
+@@ -211,7 +211,7 @@ static inline long ffz(int x)
+  * This is defined the same way as ffs.
+  * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32.
+  */
+-static inline long fls(int x)
++static inline int fls(int x)
+ {
+ 	int r;
+ 
+@@ -232,7 +232,7 @@ static inline long fls(int x)
+  * the libc and compiler builtin ffs routines, therefore
+  * differs in spirit from the above ffz (man ffs).
+  */
+-static inline long ffs(int x)
++static inline int ffs(int x)
+ {
+ 	int r;
+ 
+diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
+index 77459df34e2e..7ebe7ad19d15 100644
+--- a/arch/hexagon/kernel/dma.c
++++ b/arch/hexagon/kernel/dma.c
+@@ -60,7 +60,7 @@ static void *hexagon_dma_alloc_coherent(struct device *dev, size_t size,
+ 			panic("Can't create %s() memory pool!", __func__);
+ 		else
+ 			gen_pool_add(coherent_pool,
+-				pfn_to_virt(max_low_pfn),
++				(unsigned long)pfn_to_virt(max_low_pfn),
+ 				hexagon_coherent_pool_size, -1);
+ 	}
+ 
+diff --git a/arch/nds32/include/asm/elf.h b/arch/nds32/include/asm/elf.h
+index 56c479058802..f5f9cf7e0544 100644
+--- a/arch/nds32/include/asm/elf.h
++++ b/arch/nds32/include/asm/elf.h
+@@ -121,9 +121,9 @@ struct elf32_hdr;
+  */
+ #define ELF_CLASS	ELFCLASS32
+ #ifdef __NDS32_EB__
+-#define ELF_DATA	ELFDATA2MSB;
++#define ELF_DATA	ELFDATA2MSB
+ #else
+-#define ELF_DATA	ELFDATA2LSB;
++#define ELF_DATA	ELFDATA2LSB
+ #endif
+ #define ELF_ARCH	EM_NDS32
+ #define USE_ELF_CORE_DUMP
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index 18a009f3804d..3f771e0595e8 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -78,8 +78,9 @@ static inline void set_fs(mm_segment_t fs)
+ #define get_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_READ,  p, sizeof(*p)))) {		\
+-		__e = __get_user(x,p);					\
++	const __typeof__(*(p)) __user *__p = (p);			\
++	if(likely(access_ok(VERIFY_READ, __p, sizeof(*__p)))) {		\
++		__e = __get_user(x, __p);				\
+ 	} else								\
+ 		x = 0;							\
+ 	__e;								\
+@@ -99,10 +100,10 @@ static inline void set_fs(mm_segment_t fs)
+ 
+ #define __get_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __gu_addr = (unsigned long)(ptr);			\
++	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	unsigned long __gu_val;						\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__chk_user_ptr(__gu_addr);					\
++	switch (sizeof(*(__gu_addr))) {					\
+ 	case 1:								\
+ 		__get_user_asm("lbi",__gu_val,__gu_addr,err);		\
+ 		break;							\
+@@ -119,7 +120,7 @@ do {									\
+ 		BUILD_BUG(); 						\
+ 		break;							\
+ 	}								\
+-	(x) = (__typeof__(*(ptr)))__gu_val;				\
++	(x) = (__typeof__(*(__gu_addr)))__gu_val;			\
+ } while (0)
+ 
+ #define __get_user_asm(inst,x,addr,err)					\
+@@ -169,8 +170,9 @@ do {									\
+ #define put_user(x,p)							\
+ ({									\
+ 	long __e = -EFAULT;						\
+-	if(likely(access_ok(VERIFY_WRITE,  p, sizeof(*p)))) {		\
+-		__e = __put_user(x,p);					\
++	__typeof__(*(p)) __user *__p = (p);				\
++	if(likely(access_ok(VERIFY_WRITE, __p, sizeof(*__p)))) {	\
++		__e = __put_user(x, __p);				\
+ 	}								\
+ 	__e;								\
+ })
+@@ -189,10 +191,10 @@ do {									\
+ 
+ #define __put_user_err(x,ptr,err)					\
+ do {									\
+-	unsigned long __pu_addr = (unsigned long)(ptr);			\
+-	__typeof__(*(ptr)) __pu_val = (x);				\
+-	__chk_user_ptr(ptr);						\
+-	switch (sizeof(*(ptr))) {					\
++	__typeof__(*(ptr)) __user *__pu_addr = (ptr);			\
++	__typeof__(*(__pu_addr)) __pu_val = (x);			\
++	__chk_user_ptr(__pu_addr);					\
++	switch (sizeof(*(__pu_addr))) {					\
+ 	case 1:								\
+ 		__put_user_asm("sbi",__pu_val,__pu_addr,err);		\
+ 		break;							\
+diff --git a/arch/nds32/kernel/atl2c.c b/arch/nds32/kernel/atl2c.c
+index 0c6d031a1c4a..0c5386e72098 100644
+--- a/arch/nds32/kernel/atl2c.c
++++ b/arch/nds32/kernel/atl2c.c
+@@ -9,7 +9,8 @@
+ 
+ void __iomem *atl2c_base;
+ static const struct of_device_id atl2c_ids[] __initconst = {
+-	{.compatible = "andestech,atl2c",}
++	{.compatible = "andestech,atl2c",},
++	{}
+ };
+ 
+ static int __init atl2c_of_init(void)
+diff --git a/arch/nds32/kernel/module.c b/arch/nds32/kernel/module.c
+index 4167283d8293..1e31829cbc2a 100644
+--- a/arch/nds32/kernel/module.c
++++ b/arch/nds32/kernel/module.c
+@@ -40,7 +40,7 @@ void do_reloc16(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+@@ -70,7 +70,7 @@ void do_reloc32(unsigned int val, unsigned int *loc, unsigned int val_mask,
+ 
+ 	tmp2 = tmp & loc_mask;
+ 	if (partial_in_place) {
+-		tmp &= (!loc_mask);
++		tmp &= (~loc_mask);
+ 		tmp =
+ 		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ 	} else {
+diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c
+index a6205fd4db52..f0e974347c26 100644
+--- a/arch/nds32/kernel/traps.c
++++ b/arch/nds32/kernel/traps.c
+@@ -137,7 +137,7 @@ static void __dump(struct task_struct *tsk, unsigned long *base_reg)
+ 		       !((unsigned long)base_reg & 0x3) &&
+ 		       ((unsigned long)base_reg >= TASK_SIZE)) {
+ 			unsigned long next_fp;
+-#if !defined(NDS32_ABI_2)
++#if !defined(__NDS32_ABI_2)
+ 			ret_addr = base_reg[0];
+ 			next_fp = base_reg[1];
+ #else
+diff --git a/arch/nds32/kernel/vmlinux.lds.S b/arch/nds32/kernel/vmlinux.lds.S
+index 288313b886ef..9e90f30a181d 100644
+--- a/arch/nds32/kernel/vmlinux.lds.S
++++ b/arch/nds32/kernel/vmlinux.lds.S
+@@ -13,14 +13,26 @@ OUTPUT_ARCH(nds32)
+ ENTRY(_stext_lma)
+ jiffies = jiffies_64;
+ 
++#if defined(CONFIG_GCOV_KERNEL)
++#define NDS32_EXIT_KEEP(x)	x
++#else
++#define NDS32_EXIT_KEEP(x)
++#endif
++
+ SECTIONS
+ {
+ 	_stext_lma = TEXTADDR - LOAD_OFFSET;
+ 	. = TEXTADDR;
+ 	__init_begin = .;
+ 	HEAD_TEXT_SECTION
++	.exit.text : {
++		NDS32_EXIT_KEEP(EXIT_TEXT)
++	}
+ 	INIT_TEXT_SECTION(PAGE_SIZE)
+ 	INIT_DATA_SECTION(16)
++	.exit.data : {
++		NDS32_EXIT_KEEP(EXIT_DATA)
++	}
+ 	PERCPU_SECTION(L1_CACHE_BYTES)
+ 	__init_end = .;
+ 
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+index 7f3a8cf5d66f..4c08f42f6406 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -359,7 +359,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
+ 	unsigned long pp, key;
+ 	unsigned long v, orig_v, gr;
+ 	__be64 *hptep;
+-	int index;
++	long int index;
+ 	int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR);
+ 
+ 	if (kvm_is_radix(vcpu->kvm))
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index f0d2070866d4..0efa5b29d0a3 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -64,15 +64,8 @@ atomic_t hart_lottery;
+ #ifdef CONFIG_BLK_DEV_INITRD
+ static void __init setup_initrd(void)
+ {
+-	extern char __initramfs_start[];
+-	extern unsigned long __initramfs_size;
+ 	unsigned long size;
+ 
+-	if (__initramfs_size > 0) {
+-		initrd_start = (unsigned long)(&__initramfs_start);
+-		initrd_end = initrd_start + __initramfs_size;
+-	}
+-
+ 	if (initrd_start >= initrd_end) {
+ 		printk(KERN_INFO "initrd not found or empty");
+ 		goto disable;
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index a4170048a30b..17fbd07e4245 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1250,4 +1250,8 @@ void intel_pmu_lbr_init_knl(void)
+ 
+ 	x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ 	x86_pmu.lbr_sel_map  = snb_lbr_sel_map;
++
++	/* Knights Landing does have MISPREDICT bit */
++	if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_LIP)
++		x86_pmu.intel_cap.lbr_format = LBR_FORMAT_EIP_FLAGS;
+ }
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index ec00d1ff5098..f7151cd03cb0 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -1640,6 +1640,7 @@ static int do_open(struct inode *inode, struct file *filp)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PROC_FS
+ static int proc_apm_show(struct seq_file *m, void *v)
+ {
+ 	unsigned short	bx;
+@@ -1719,6 +1720,7 @@ static int proc_apm_show(struct seq_file *m, void *v)
+ 		   units);
+ 	return 0;
+ }
++#endif
+ 
+ static int apm(void *unused)
+ {
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index eb85cb87c40f..ec868373b11b 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -307,28 +307,11 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
+ 	}
+ }
+ 
+-static void blkg_pd_offline(struct blkcg_gq *blkg)
+-{
+-	int i;
+-
+-	lockdep_assert_held(blkg->q->queue_lock);
+-	lockdep_assert_held(&blkg->blkcg->lock);
+-
+-	for (i = 0; i < BLKCG_MAX_POLS; i++) {
+-		struct blkcg_policy *pol = blkcg_policy[i];
+-
+-		if (blkg->pd[i] && !blkg->pd[i]->offline &&
+-		    pol->pd_offline_fn) {
+-			pol->pd_offline_fn(blkg->pd[i]);
+-			blkg->pd[i]->offline = true;
+-		}
+-	}
+-}
+-
+ static void blkg_destroy(struct blkcg_gq *blkg)
+ {
+ 	struct blkcg *blkcg = blkg->blkcg;
+ 	struct blkcg_gq *parent = blkg->parent;
++	int i;
+ 
+ 	lockdep_assert_held(blkg->q->queue_lock);
+ 	lockdep_assert_held(&blkcg->lock);
+@@ -337,6 +320,13 @@ static void blkg_destroy(struct blkcg_gq *blkg)
+ 	WARN_ON_ONCE(list_empty(&blkg->q_node));
+ 	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
+ 
++	for (i = 0; i < BLKCG_MAX_POLS; i++) {
++		struct blkcg_policy *pol = blkcg_policy[i];
++
++		if (blkg->pd[i] && pol->pd_offline_fn)
++			pol->pd_offline_fn(blkg->pd[i]);
++	}
++
+ 	if (parent) {
+ 		blkg_rwstat_add_aux(&parent->stat_bytes, &blkg->stat_bytes);
+ 		blkg_rwstat_add_aux(&parent->stat_ios, &blkg->stat_ios);
+@@ -379,7 +369,6 @@ static void blkg_destroy_all(struct request_queue *q)
+ 		struct blkcg *blkcg = blkg->blkcg;
+ 
+ 		spin_lock(&blkcg->lock);
+-		blkg_pd_offline(blkg);
+ 		blkg_destroy(blkg);
+ 		spin_unlock(&blkcg->lock);
+ 	}
+@@ -1006,54 +995,21 @@ static struct cftype blkcg_legacy_files[] = {
+  * @css: css of interest
+  *
+  * This function is called when @css is about to go away and responsible
+- * for offlining all blkgs pd and killing all wbs associated with @css.
+- * blkgs pd offline should be done while holding both q and blkcg locks.
+- * As blkcg lock is nested inside q lock, this function performs reverse
+- * double lock dancing.
++ * for shooting down all blkgs associated with @css.  blkgs should be
++ * removed while holding both q and blkcg locks.  As blkcg lock is nested
++ * inside q lock, this function performs reverse double lock dancing.
+  *
+  * This is the blkcg counterpart of ioc_release_fn().
+  */
+ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+ {
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+-	struct blkcg_gq *blkg;
+ 
+ 	spin_lock_irq(&blkcg->lock);
+ 
+-	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+-		struct request_queue *q = blkg->q;
+-
+-		if (spin_trylock(q->queue_lock)) {
+-			blkg_pd_offline(blkg);
+-			spin_unlock(q->queue_lock);
+-		} else {
+-			spin_unlock_irq(&blkcg->lock);
+-			cpu_relax();
+-			spin_lock_irq(&blkcg->lock);
+-		}
+-	}
+-
+-	spin_unlock_irq(&blkcg->lock);
+-
+-	wb_blkcg_offline(blkcg);
+-}
+-
+-/**
+- * blkcg_destroy_all_blkgs - destroy all blkgs associated with a blkcg
+- * @blkcg: blkcg of interest
+- *
+- * This function is called when blkcg css is about to free and responsible for
+- * destroying all blkgs associated with @blkcg.
+- * blkgs should be removed while holding both q and blkcg locks. As blkcg lock
+- * is nested inside q lock, this function performs reverse double lock dancing.
+- */
+-static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+-{
+-	spin_lock_irq(&blkcg->lock);
+ 	while (!hlist_empty(&blkcg->blkg_list)) {
+ 		struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
+-						    struct blkcg_gq,
+-						    blkcg_node);
++						struct blkcg_gq, blkcg_node);
+ 		struct request_queue *q = blkg->q;
+ 
+ 		if (spin_trylock(q->queue_lock)) {
+@@ -1065,7 +1021,10 @@ static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+ 			spin_lock_irq(&blkcg->lock);
+ 		}
+ 	}
++
+ 	spin_unlock_irq(&blkcg->lock);
++
++	wb_blkcg_offline(blkcg);
+ }
+ 
+ static void blkcg_css_free(struct cgroup_subsys_state *css)
+@@ -1073,8 +1032,6 @@ static void blkcg_css_free(struct cgroup_subsys_state *css)
+ 	struct blkcg *blkcg = css_to_blkcg(css);
+ 	int i;
+ 
+-	blkcg_destroy_all_blkgs(blkcg);
+-
+ 	mutex_lock(&blkcg_pol_mutex);
+ 
+ 	list_del(&blkcg->all_blkcgs_node);
+@@ -1412,11 +1369,8 @@ void blkcg_deactivate_policy(struct request_queue *q,
+ 
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ 		if (blkg->pd[pol->plid]) {
+-			if (!blkg->pd[pol->plid]->offline &&
+-			    pol->pd_offline_fn) {
++			if (pol->pd_offline_fn)
+ 				pol->pd_offline_fn(blkg->pd[pol->plid]);
+-				blkg->pd[pol->plid]->offline = true;
+-			}
+ 			pol->pd_free_fn(blkg->pd[pol->plid]);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 22a2bc5f25ce..99bf0c0394f8 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -7403,4 +7403,4 @@ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
+ EXPORT_SYMBOL_GPL(ata_host_get);
+-EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
++EXPORT_SYMBOL_GPL(ata_host_put);
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 0943e7065e0e..8e9213b36e31 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -209,22 +209,28 @@ static struct fw_priv *__lookup_fw_priv(const char *fw_name)
+ static int alloc_lookup_fw_priv(const char *fw_name,
+ 				struct firmware_cache *fwc,
+ 				struct fw_priv **fw_priv, void *dbuf,
+-				size_t size)
++				size_t size, enum fw_opt opt_flags)
+ {
+ 	struct fw_priv *tmp;
+ 
+ 	spin_lock(&fwc->lock);
+-	tmp = __lookup_fw_priv(fw_name);
+-	if (tmp) {
+-		kref_get(&tmp->ref);
+-		spin_unlock(&fwc->lock);
+-		*fw_priv = tmp;
+-		pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
+-		return 1;
++	if (!(opt_flags & FW_OPT_NOCACHE)) {
++		tmp = __lookup_fw_priv(fw_name);
++		if (tmp) {
++			kref_get(&tmp->ref);
++			spin_unlock(&fwc->lock);
++			*fw_priv = tmp;
++			pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n");
++			return 1;
++		}
+ 	}
++
+ 	tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size);
+-	if (tmp)
+-		list_add(&tmp->list, &fwc->head);
++	if (tmp) {
++		INIT_LIST_HEAD(&tmp->list);
++		if (!(opt_flags & FW_OPT_NOCACHE))
++			list_add(&tmp->list, &fwc->head);
++	}
+ 	spin_unlock(&fwc->lock);
+ 
+ 	*fw_priv = tmp;
+@@ -493,7 +499,8 @@ int assign_fw(struct firmware *fw, struct device *device,
+  */
+ static int
+ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+-			  struct device *device, void *dbuf, size_t size)
++			  struct device *device, void *dbuf, size_t size,
++			  enum fw_opt opt_flags)
+ {
+ 	struct firmware *firmware;
+ 	struct fw_priv *fw_priv;
+@@ -511,7 +518,8 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
+ 		return 0; /* assigned */
+ 	}
+ 
+-	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size);
++	ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size,
++				  opt_flags);
+ 
+ 	/*
+ 	 * bind with 'priv' now to avoid warning in failure path
+@@ -571,7 +579,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ 		goto out;
+ 	}
+ 
+-	ret = _request_firmware_prepare(&fw, name, device, buf, size);
++	ret = _request_firmware_prepare(&fw, name, device, buf, size,
++					opt_flags);
+ 	if (ret <= 0) /* error or already assigned */
+ 		goto out;
+ 
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index efc9a7ae4857..35e81d7dd929 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -44,7 +44,7 @@ enum _msm8996_version {
+ 
+ struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
+ 
+-static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void)
++static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+ {
+ 	size_t len;
+ 	u32 *msm_id;
+@@ -221,7 +221,7 @@ static int __init qcom_cpufreq_kryo_init(void)
+ }
+ module_init(qcom_cpufreq_kryo_init);
+ 
+-static void __init qcom_cpufreq_kryo_exit(void)
++static void __exit qcom_cpufreq_kryo_exit(void)
+ {
+ 	platform_device_unregister(kryo_cpufreq_pdev);
+ 	platform_driver_unregister(&qcom_cpufreq_kryo_driver);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index d67667970f7e..ec40f991e6c6 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1553,8 +1553,8 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_TO_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+@@ -1757,8 +1757,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ 	edesc->src_nents = src_nents;
+ 	edesc->dst_nents = dst_nents;
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+-	edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
+-			 desc_bytes;
++	edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc +
++						  desc_bytes);
+ 	edesc->iv_dir = DMA_FROM_DEVICE;
+ 
+ 	/* Make sure IV is located in a DMAable area */
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index b916c4eb608c..e5d2ac5aec40 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -367,7 +367,8 @@ static inline void dsgl_walk_init(struct dsgl_walk *walk,
+ 	walk->to = (struct phys_sge_pairs *)(dsgl + 1);
+ }
+ 
+-static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
++static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid,
++				 int pci_chan_id)
+ {
+ 	struct cpl_rx_phys_dsgl *phys_cpl;
+ 
+@@ -385,6 +386,7 @@ static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid)
+ 	phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR;
+ 	phys_cpl->rss_hdr_int.qid = htons(qid);
+ 	phys_cpl->rss_hdr_int.hash_val = 0;
++	phys_cpl->rss_hdr_int.channel = pci_chan_id;
+ }
+ 
+ static inline void dsgl_walk_add_page(struct dsgl_walk *walk,
+@@ -718,7 +720,7 @@ static inline void create_wreq(struct chcr_context *ctx,
+ 		FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+ 				!!lcb, ctx->tx_qidx);
+ 
+-	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
++	chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->tx_chan_id,
+ 						       qid);
+ 	chcr_req->ulptx.len = htonl((DIV_ROUND_UP(len16, 16) -
+ 				     ((sizeof(chcr_req->wreq)) >> 4)));
+@@ -1339,16 +1341,23 @@ static int chcr_device_init(struct chcr_context *ctx)
+ 				    adap->vres.ncrypto_fc);
+ 		rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan;
+ 		txq_perchan = ntxq / u_ctx->lldi.nchan;
+-		rxq_idx = ctx->dev->tx_channel_id * rxq_perchan;
+-		rxq_idx += id % rxq_perchan;
+-		txq_idx = ctx->dev->tx_channel_id * txq_perchan;
+-		txq_idx += id % txq_perchan;
+ 		spin_lock(&ctx->dev->lock_chcr_dev);
+-		ctx->rx_qidx = rxq_idx;
+-		ctx->tx_qidx = txq_idx;
++		ctx->tx_chan_id = ctx->dev->tx_channel_id;
+ 		ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+ 		ctx->dev->rx_channel_id = 0;
+ 		spin_unlock(&ctx->dev->lock_chcr_dev);
++		rxq_idx = ctx->tx_chan_id * rxq_perchan;
++		rxq_idx += id % rxq_perchan;
++		txq_idx = ctx->tx_chan_id * txq_perchan;
++		txq_idx += id % txq_perchan;
++		ctx->rx_qidx = rxq_idx;
++		ctx->tx_qidx = txq_idx;
++		/* Channel Id used by SGE to forward packet to Host.
++		 * Same value should be used in cpl_fw6_pld RSS_CH field
++		 * by FW. Driver programs PCI channel ID to be used in fw
++		 * at the time of queue allocation with value "pi->tx_chan"
++		 */
++		ctx->pci_chan_id = txq_idx / txq_perchan;
+ 	}
+ out:
+ 	return err;
+@@ -2503,6 +2512,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
++	struct chcr_context *ctx = a_ctx(tfm);
+ 	u32 temp;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2512,7 +2522,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma);
+ 	temp = req->cryptlen + (reqctx->op ? -authsize : authsize);
+ 	dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, req->assoclen);
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_cipher_src_ent(struct ablkcipher_request *req,
+@@ -2544,6 +2554,8 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 			     unsigned short qid)
+ {
+ 	struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req);
++	struct chcr_context *ctx = c_ctx(tfm);
+ 	struct dsgl_walk dsgl_walk;
+ 
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+@@ -2552,7 +2564,7 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req,
+ 	reqctx->dstsg = dsgl_walk.last_sg;
+ 	reqctx->dst_ofst = dsgl_walk.last_sg_len;
+ 
+-	dsgl_walk_end(&dsgl_walk, qid);
++	dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id);
+ }
+ 
+ void chcr_add_hash_src_ent(struct ahash_request *req,
+diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
+index 54835cb109e5..0d2c70c344f3 100644
+--- a/drivers/crypto/chelsio/chcr_crypto.h
++++ b/drivers/crypto/chelsio/chcr_crypto.h
+@@ -255,6 +255,8 @@ struct chcr_context {
+ 	struct chcr_dev *dev;
+ 	unsigned char tx_qidx;
+ 	unsigned char rx_qidx;
++	unsigned char tx_chan_id;
++	unsigned char pci_chan_id;
+ 	struct __crypto_ctx crypto_ctx[0];
+ };
+ 
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index a10c418d4e5c..56bd28174f52 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -63,7 +63,7 @@ struct dcp {
+ 	struct dcp_coherent_block	*coh;
+ 
+ 	struct completion		completion[DCP_MAX_CHANS];
+-	struct mutex			mutex[DCP_MAX_CHANS];
++	spinlock_t			lock[DCP_MAX_CHANS];
+ 	struct task_struct		*thread[DCP_MAX_CHANS];
+ 	struct crypto_queue		queue[DCP_MAX_CHANS];
+ };
+@@ -349,13 +349,20 @@ static int dcp_chan_thread_aes(void *data)
+ 
+ 	int ret;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -363,11 +370,8 @@ static int dcp_chan_thread_aes(void *data)
+ 		if (arq) {
+ 			ret = mxs_dcp_aes_block_crypt(arq);
+ 			arq->complete(arq, ret);
+-			continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -409,9 +413,9 @@ static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb)
+ 	rctx->ecb = ecb;
+ 	actx->chan = DCP_CHAN_CRYPTO;
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 
+@@ -640,13 +644,20 @@ static int dcp_chan_thread_sha(void *data)
+ 	struct ahash_request *req;
+ 	int ret, fini;
+ 
+-	do {
+-		__set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		set_current_state(TASK_INTERRUPTIBLE);
+ 
+-		mutex_lock(&sdcp->mutex[chan]);
++		spin_lock(&sdcp->lock[chan]);
+ 		backlog = crypto_get_backlog(&sdcp->queue[chan]);
+ 		arq = crypto_dequeue_request(&sdcp->queue[chan]);
+-		mutex_unlock(&sdcp->mutex[chan]);
++		spin_unlock(&sdcp->lock[chan]);
++
++		if (!backlog && !arq) {
++			schedule();
++			continue;
++		}
++
++		set_current_state(TASK_RUNNING);
+ 
+ 		if (backlog)
+ 			backlog->complete(backlog, -EINPROGRESS);
+@@ -658,12 +669,8 @@ static int dcp_chan_thread_sha(void *data)
+ 			ret = dcp_sha_req_to_buf(arq);
+ 			fini = rctx->fini;
+ 			arq->complete(arq, ret);
+-			if (!fini)
+-				continue;
+ 		}
+-
+-		schedule();
+-	} while (!kthread_should_stop());
++	}
+ 
+ 	return 0;
+ }
+@@ -721,9 +728,9 @@ static int dcp_sha_update_fx(struct ahash_request *req, int fini)
+ 		rctx->init = 1;
+ 	}
+ 
+-	mutex_lock(&sdcp->mutex[actx->chan]);
++	spin_lock(&sdcp->lock[actx->chan]);
+ 	ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base);
+-	mutex_unlock(&sdcp->mutex[actx->chan]);
++	spin_unlock(&sdcp->lock[actx->chan]);
+ 
+ 	wake_up_process(sdcp->thread[actx->chan]);
+ 	mutex_unlock(&actx->mutex);
+@@ -997,7 +1004,7 @@ static int mxs_dcp_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, sdcp);
+ 
+ 	for (i = 0; i < DCP_MAX_CHANS; i++) {
+-		mutex_init(&sdcp->mutex[i]);
++		spin_lock_init(&sdcp->lock[i]);
+ 		init_completion(&sdcp->completion[i]);
+ 		crypto_init_queue(&sdcp->queue[i], 50);
+ 	}
+diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+index ba197f34c252..763c2166ee0e 100644
+--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXX_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index 24ec908eb26c..613c7d5644ce 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C3XXXIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c
+index 59a5a0df50b6..9cb832963357 100644
+--- a/drivers/crypto/qat/qat_c62x/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62x/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62X_PCI_DEVICE_ID:
+@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 1 : 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index b9f3e0e4fde9..278452b8ef81 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_C62XIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+index be5c5a988ca5..3a9708ef4ce2 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCC_PCI_DEVICE_ID:
+@@ -237,8 +238,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index 26ab17bfc6da..3da0f951cb59 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	struct adf_hw_device_data *hw_data;
+ 	char name[ADF_DEVICE_NAME_LENGTH];
+ 	unsigned int i, bar_nr;
+-	int ret, bar_mask;
++	unsigned long bar_mask;
++	int ret;
+ 
+ 	switch (ent->device) {
+ 	case ADF_DH895XCCIOV_PCI_DEVICE_ID:
+@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Find and map all the device's BARS */
+ 	i = 0;
+ 	bar_mask = pci_select_bars(pdev, IORESOURCE_MEM);
+-	for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask,
+-			 ADF_PCI_MAX_BARS * 2) {
++	for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) {
+ 		struct adf_bar *bar = &accel_pci_dev->pci_bars[i++];
+ 
+ 		bar->base_addr = pci_resource_start(pdev, bar_nr);
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index 2a219b1261b1..49cb74f54a10 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -166,7 +166,13 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
+ 					le32_to_cpu(attr->sustained_freq_khz);
+ 		dom_info->sustained_perf_level =
+ 					le32_to_cpu(attr->sustained_perf_level);
+-		dom_info->mult_factor =	(dom_info->sustained_freq_khz * 1000) /
++		if (!dom_info->sustained_freq_khz ||
++		    !dom_info->sustained_perf_level)
++			/* CPUFreq converts to kHz, hence default 1000 */
++			dom_info->mult_factor =	1000;
++		else
++			dom_info->mult_factor =
++					(dom_info->sustained_freq_khz * 1000) /
+ 					dom_info->sustained_perf_level;
+ 		memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+ 	}
+diff --git a/drivers/gpio/gpio-adp5588.c b/drivers/gpio/gpio-adp5588.c
+index 3530ccd17e04..da9781a2ef4a 100644
+--- a/drivers/gpio/gpio-adp5588.c
++++ b/drivers/gpio/gpio-adp5588.c
+@@ -41,6 +41,8 @@ struct adp5588_gpio {
+ 	uint8_t int_en[3];
+ 	uint8_t irq_mask[3];
+ 	uint8_t irq_stat[3];
++	uint8_t int_input_en[3];
++	uint8_t int_lvl_cached[3];
+ };
+ 
+ static int adp5588_gpio_read(struct i2c_client *client, u8 reg)
+@@ -173,12 +175,28 @@ static void adp5588_irq_bus_sync_unlock(struct irq_data *d)
+ 	struct adp5588_gpio *dev = irq_data_get_irq_chip_data(d);
+ 	int i;
+ 
+-	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++)
++	for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) {
++		if (dev->int_input_en[i]) {
++			mutex_lock(&dev->lock);
++			dev->dir[i] &= ~dev->int_input_en[i];
++			dev->int_input_en[i] = 0;
++			adp5588_gpio_write(dev->client, GPIO_DIR1 + i,
++					   dev->dir[i]);
++			mutex_unlock(&dev->lock);
++		}
++
++		if (dev->int_lvl_cached[i] != dev->int_lvl[i]) {
++			dev->int_lvl_cached[i] = dev->int_lvl[i];
++			adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + i,
++					   dev->int_lvl[i]);
++		}
++
+ 		if (dev->int_en[i] ^ dev->irq_mask[i]) {
+ 			dev->int_en[i] = dev->irq_mask[i];
+ 			adp5588_gpio_write(dev->client, GPIO_INT_EN1 + i,
+ 					   dev->int_en[i]);
+ 		}
++	}
+ 
+ 	mutex_unlock(&dev->irq_lock);
+ }
+@@ -221,9 +239,7 @@ static int adp5588_irq_set_type(struct irq_data *d, unsigned int type)
+ 	else
+ 		return -EINVAL;
+ 
+-	adp5588_gpio_direction_input(&dev->gpio_chip, gpio);
+-	adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + bank,
+-			   dev->int_lvl[bank]);
++	dev->int_input_en[bank] |= bit;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
+index 7a2de3de6571..5b12d6fdd448 100644
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -726,6 +726,7 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
+ out_unregister:
+ 	dwapb_gpio_unregister(gpio);
+ 	dwapb_irq_teardown(gpio);
++	clk_disable_unprepare(gpio->clk);
+ 
+ 	return err;
+ }
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index addd9fecc198..a3e43cacd78e 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -25,7 +25,6 @@
+ 
+ struct acpi_gpio_event {
+ 	struct list_head node;
+-	struct list_head initial_sync_list;
+ 	acpi_handle handle;
+ 	unsigned int pin;
+ 	unsigned int irq;
+@@ -49,10 +48,19 @@ struct acpi_gpio_chip {
+ 	struct mutex conn_lock;
+ 	struct gpio_chip *chip;
+ 	struct list_head events;
++	struct list_head deferred_req_irqs_list_entry;
+ };
+ 
+-static LIST_HEAD(acpi_gpio_initial_sync_list);
+-static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock);
++/*
++ * For gpiochips which call acpi_gpiochip_request_interrupts() before late_init
++ * (so builtin drivers) we register the ACPI GpioInt event handlers from a
++ * late_initcall_sync handler, so that other builtin drivers can register their
++ * OpRegions before the event handlers can run.  This list contains gpiochips
++ * for which the acpi_gpiochip_request_interrupts() has been deferred.
++ */
++static DEFINE_MUTEX(acpi_gpio_deferred_req_irqs_lock);
++static LIST_HEAD(acpi_gpio_deferred_req_irqs_list);
++static bool acpi_gpio_deferred_req_irqs_done;
+ 
+ static int acpi_gpiochip_find(struct gpio_chip *gc, void *data)
+ {
+@@ -89,21 +97,6 @@ static struct gpio_desc *acpi_get_gpiod(char *path, int pin)
+ 	return gpiochip_get_desc(chip, pin);
+ }
+ 
+-static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+-static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event)
+-{
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	if (!list_empty(&event->initial_sync_list))
+-		list_del_init(&event->initial_sync_list);
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+ static irqreturn_t acpi_gpio_irq_handler(int irq, void *data)
+ {
+ 	struct acpi_gpio_event *event = data;
+@@ -186,7 +179,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 
+ 	gpiod_direction_input(desc);
+ 
+-	value = gpiod_get_value(desc);
++	value = gpiod_get_value_cansleep(desc);
+ 
+ 	ret = gpiochip_lock_as_irq(chip, pin);
+ 	if (ret) {
+@@ -229,7 +222,6 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	event->irq = irq;
+ 	event->pin = pin;
+ 	event->desc = desc;
+-	INIT_LIST_HEAD(&event->initial_sync_list);
+ 
+ 	ret = request_threaded_irq(event->irq, NULL, handler, irqflags,
+ 				   "ACPI:Event", event);
+@@ -251,10 +243,9 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
+ 	 * may refer to OperationRegions from other (builtin) drivers which
+ 	 * may be probed after us.
+ 	 */
+-	if (handler == acpi_gpio_irq_handler &&
+-	    (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
+-	     ((irqflags & IRQF_TRIGGER_FALLING) && value == 0)))
+-		acpi_gpio_add_to_initial_sync_list(event);
++	if (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
++	    ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))
++		handler(event->irq, event);
+ 
+ 	return AE_OK;
+ 
+@@ -283,6 +274,7 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	struct acpi_gpio_chip *acpi_gpio;
+ 	acpi_handle handle;
+ 	acpi_status status;
++	bool defer;
+ 
+ 	if (!chip->parent || !chip->to_irq)
+ 		return;
+@@ -295,6 +287,16 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	defer = !acpi_gpio_deferred_req_irqs_done;
++	if (defer)
++		list_add(&acpi_gpio->deferred_req_irqs_list_entry,
++			 &acpi_gpio_deferred_req_irqs_list);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
++	if (defer)
++		return;
++
+ 	acpi_walk_resources(handle, "_AEI",
+ 			    acpi_gpiochip_request_interrupt, acpi_gpio);
+ }
+@@ -325,11 +327,14 @@ void acpi_gpiochip_free_interrupts(struct gpio_chip *chip)
+ 	if (ACPI_FAILURE(status))
+ 		return;
+ 
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	if (!list_empty(&acpi_gpio->deferred_req_irqs_list_entry))
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
+ 	list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) {
+ 		struct gpio_desc *desc;
+ 
+-		acpi_gpio_del_from_initial_sync_list(event);
+-
+ 		if (irqd_is_wakeup_set(irq_get_irq_data(event->irq)))
+ 			disable_irq_wake(event->irq);
+ 
+@@ -1049,6 +1054,7 @@ void acpi_gpiochip_add(struct gpio_chip *chip)
+ 
+ 	acpi_gpio->chip = chip;
+ 	INIT_LIST_HEAD(&acpi_gpio->events);
++	INIT_LIST_HEAD(&acpi_gpio->deferred_req_irqs_list_entry);
+ 
+ 	status = acpi_attach_data(handle, acpi_gpio_chip_dh, acpi_gpio);
+ 	if (ACPI_FAILURE(status)) {
+@@ -1195,20 +1201,28 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id)
+ 	return con_id == NULL;
+ }
+ 
+-/* Sync the initial state of handlers after all builtin drivers have probed */
+-static int acpi_gpio_initial_sync(void)
++/* Run deferred acpi_gpiochip_request_interrupts() */
++static int acpi_gpio_handle_deferred_request_interrupts(void)
+ {
+-	struct acpi_gpio_event *event, *ep;
++	struct acpi_gpio_chip *acpi_gpio, *tmp;
++
++	mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++	list_for_each_entry_safe(acpi_gpio, tmp,
++				 &acpi_gpio_deferred_req_irqs_list,
++				 deferred_req_irqs_list_entry) {
++		acpi_handle handle;
+ 
+-	mutex_lock(&acpi_gpio_initial_sync_list_lock);
+-	list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list,
+-				 initial_sync_list) {
+-		acpi_evaluate_object(event->handle, NULL, NULL, NULL);
+-		list_del_init(&event->initial_sync_list);
++		handle = ACPI_HANDLE(acpi_gpio->chip->parent);
++		acpi_walk_resources(handle, "_AEI",
++				    acpi_gpiochip_request_interrupt, acpi_gpio);
++
++		list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
+ 	}
+-	mutex_unlock(&acpi_gpio_initial_sync_list_lock);
++
++	acpi_gpio_deferred_req_irqs_done = true;
++	mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
+ 
+ 	return 0;
+ }
+ /* We must use _sync so that this runs after the first deferred_probe run */
+-late_initcall_sync(acpi_gpio_initial_sync);
++late_initcall_sync(acpi_gpio_handle_deferred_request_interrupts);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 53a14ee8ad6d..a704d2e74421 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -31,6 +31,7 @@ static int of_gpiochip_match_node_and_xlate(struct gpio_chip *chip, void *data)
+ 	struct of_phandle_args *gpiospec = data;
+ 
+ 	return chip->gpiodev->dev.of_node == gpiospec->np &&
++				chip->of_xlate &&
+ 				chip->of_xlate(chip, gpiospec, NULL) >= 0;
+ }
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index e11a3bb03820..06dce16e22bb 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -565,7 +565,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ 		if (ret)
+ 			goto out_free_descs;
+ 		lh->descs[i] = desc;
+-		count = i;
++		count = i + 1;
+ 
+ 		if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW)
+ 			set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 7200eea4f918..d9d8964a6e97 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -38,6 +38,7 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ {
+ 	struct drm_gem_object *gobj;
+ 	unsigned long size;
++	int r;
+ 
+ 	gobj = drm_gem_object_lookup(p->filp, data->handle);
+ 	if (gobj == NULL)
+@@ -49,20 +50,26 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p,
+ 	p->uf_entry.tv.shared = true;
+ 	p->uf_entry.user_pages = NULL;
+ 
+-	size = amdgpu_bo_size(p->uf_entry.robj);
+-	if (size != PAGE_SIZE || (data->offset + 8) > size)
+-		return -EINVAL;
+-
+-	*offset = data->offset;
+-
+ 	drm_gem_object_put_unlocked(gobj);
+ 
++	size = amdgpu_bo_size(p->uf_entry.robj);
++	if (size != PAGE_SIZE || (data->offset + 8) > size) {
++		r = -EINVAL;
++		goto error_unref;
++	}
++
+ 	if (amdgpu_ttm_tt_get_usermm(p->uf_entry.robj->tbo.ttm)) {
+-		amdgpu_bo_unref(&p->uf_entry.robj);
+-		return -EINVAL;
++		r = -EINVAL;
++		goto error_unref;
+ 	}
+ 
++	*offset = data->offset;
++
+ 	return 0;
++
++error_unref:
++	amdgpu_bo_unref(&p->uf_entry.robj);
++	return r;
+ }
+ 
+ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index ca53b3fba422..3e3e4e907ee5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -67,6 +67,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100),
+@@ -78,7 +79,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = {
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_IB_CNTL, 0x800f0100, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_vg10[] = {
+@@ -106,7 +108,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] =
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100),
+ 	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+-	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0)
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0),
++	SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000)
+ };
+ 
+ static const struct soc15_reg_golden golden_settings_sdma_4_2[] =
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 77779adeef28..f8e866ceda02 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk * 10;
++			clocks->clock[i] = sclk_table->entries[i].clk;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk * 10;
++			clocks->clock[i] = mclk_table->entries[i].clk;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 0adfc5392cd3..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
++			clocks->clock[i] = data->sys_info.display_clock[i];
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk * 10;
++			clocks->clock[i] = table->entries[i].clk;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c2ebe5da34d0..89225adaa60a 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -230,7 +230,7 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 		mutex_unlock(&drm->master.lock);
+ 	}
+ 	if (ret) {
+-		NV_ERROR(drm, "Client allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Client allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+@@ -240,37 +240,37 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname,
+ 			       }, sizeof(struct nv_device_v0),
+ 			       &cli->device);
+ 	if (ret) {
+-		NV_ERROR(drm, "Device allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "Device allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->device.object, mmus);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MMU class\n");
++		NV_PRINTK(err, cli, "No supported MMU class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mmu_init(&cli->device.object, mmus[ret].oclass, &cli->mmu);
+ 	if (ret) {
+-		NV_ERROR(drm, "MMU allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "MMU allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, vmms);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported VMM class\n");
++		NV_PRINTK(err, cli, "No supported VMM class\n");
+ 		goto done;
+ 	}
+ 
+ 	ret = nouveau_vmm_init(cli, vmms[ret].oclass, &cli->vmm);
+ 	if (ret) {
+-		NV_ERROR(drm, "VMM allocation failed: %d\n", ret);
++		NV_PRINTK(err, cli, "VMM allocation failed: %d\n", ret);
+ 		goto done;
+ 	}
+ 
+ 	ret = nvif_mclass(&cli->mmu.object, mems);
+ 	if (ret < 0) {
+-		NV_ERROR(drm, "No supported MEM class\n");
++		NV_PRINTK(err, cli, "No supported MEM class\n");
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+index 32fa94a9773f..cbd33e87b799 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+@@ -275,6 +275,7 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 	struct nvkm_outp *outp, *outt, *pair;
+ 	struct nvkm_conn *conn;
+ 	struct nvkm_head *head;
++	struct nvkm_ior *ior;
+ 	struct nvbios_connE connE;
+ 	struct dcb_output dcbE;
+ 	u8  hpd = 0, ver, hdr;
+@@ -399,6 +400,19 @@ nvkm_disp_oneinit(struct nvkm_engine *engine)
+ 			return ret;
+ 	}
+ 
++	/* Enforce identity-mapped SOR assignment for panels, which have
++	 * certain bits (ie. backlight controls) wired to a specific SOR.
++	 */
++	list_for_each_entry(outp, &disp->outp, head) {
++		if (outp->conn->info.type == DCB_CONNECTOR_LVDS ||
++		    outp->conn->info.type == DCB_CONNECTOR_eDP) {
++			ior = nvkm_ior_find(disp, SOR, ffs(outp->info.or) - 1);
++			if (!WARN_ON(!ior))
++				ior->identity = true;
++			outp->identity = true;
++		}
++	}
++
+ 	i = 0;
+ 	list_for_each_entry(head, &disp->head, head)
+ 		i = max(i, head->id + 1);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+index 7c5bed29ffef..6160a6158cf2 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c
+@@ -412,14 +412,10 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps)
+ }
+ 
+ static void
+-nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
++nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ {
+ 	struct nvkm_dp *dp = nvkm_dp(outp);
+ 
+-	/* Prevent link from being retrained if sink sends an IRQ. */
+-	atomic_set(&dp->lt.done, 0);
+-	ior->dp.nr = 0;
+-
+ 	/* Execute DisableLT script from DP Info Table. */
+ 	nvbios_init(&ior->disp->engine.subdev, dp->info.script[4],
+ 		init.outp = &dp->outp.info;
+@@ -428,6 +424,16 @@ nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior)
+ 	);
+ }
+ 
++static void
++nvkm_dp_release(struct nvkm_outp *outp)
++{
++	struct nvkm_dp *dp = nvkm_dp(outp);
++
++	/* Prevent link from being retrained if sink sends an IRQ. */
++	atomic_set(&dp->lt.done, 0);
++	dp->outp.ior->dp.nr = 0;
++}
++
+ static int
+ nvkm_dp_acquire(struct nvkm_outp *outp)
+ {
+@@ -576,6 +582,7 @@ nvkm_dp_func = {
+ 	.fini = nvkm_dp_fini,
+ 	.acquire = nvkm_dp_acquire,
+ 	.release = nvkm_dp_release,
++	.disable = nvkm_dp_disable,
+ };
+ 
+ static int
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+index e0b4e0c5704e..19911211a12a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h
+@@ -16,6 +16,7 @@ struct nvkm_ior {
+ 	char name[8];
+ 
+ 	struct list_head head;
++	bool identity;
+ 
+ 	struct nvkm_ior_state {
+ 		struct nvkm_outp *outp;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+index f89c7b977aa5..def005dd5fda 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+@@ -501,11 +501,11 @@ nv50_disp_super_2_0(struct nv50_disp *disp, struct nvkm_head *head)
+ 	nv50_disp_super_ied_off(head, ior, 2);
+ 
+ 	/* If we're shutting down the OR's only active head, execute
+-	 * the output path's release function.
++	 * the output path's disable function.
+ 	 */
+ 	if (ior->arm.head == (1 << head->id)) {
+-		if ((outp = ior->arm.outp) && outp->func->release)
+-			outp->func->release(outp, ior);
++		if ((outp = ior->arm.outp) && outp->func->disable)
++			outp->func->disable(outp, ior);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+index be9e7f8c3b23..44df835e5473 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c
+@@ -93,6 +93,8 @@ nvkm_outp_release(struct nvkm_outp *outp, u8 user)
+ 	if (ior) {
+ 		outp->acquired &= ~user;
+ 		if (!outp->acquired) {
++			if (outp->func->release && outp->ior)
++				outp->func->release(outp);
+ 			outp->ior->asy.outp = NULL;
+ 			outp->ior = NULL;
+ 		}
+@@ -127,17 +129,26 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	if (proto == UNKNOWN)
+ 		return -ENOSYS;
+ 
++	/* Deal with panels requiring identity-mapped SOR assignment. */
++	if (outp->identity) {
++		ior = nvkm_ior_find(outp->disp, SOR, ffs(outp->info.or) - 1);
++		if (WARN_ON(!ior))
++			return -ENOSPC;
++		return nvkm_outp_acquire_ior(outp, user, ior);
++	}
++
+ 	/* First preference is to reuse the OR that is currently armed
+ 	 * on HW, if any, in order to prevent unnecessary switching.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->arm.outp == outp)
++		if (!ior->identity && !ior->asy.outp && ior->arm.outp == outp)
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+ 
+ 	/* Failing that, a completely unused OR is the next best thing. */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type && !ior->arm.outp &&
++		if (!ior->identity &&
++		    !ior->asy.outp && ior->type == type && !ior->arm.outp &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+@@ -146,7 +157,7 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user)
+ 	 * but will be released during the next modeset.
+ 	 */
+ 	list_for_each_entry(ior, &outp->disp->ior, head) {
+-		if (!ior->asy.outp && ior->type == type &&
++		if (!ior->identity && !ior->asy.outp && ior->type == type &&
+ 		    (ior->func->route.set || ior->id == __ffs(outp->info.or)))
+ 			return nvkm_outp_acquire_ior(outp, user, ior);
+ 	}
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+index ea84d7d5741a..3f932fb39c94 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h
+@@ -17,6 +17,7 @@ struct nvkm_outp {
+ 
+ 	struct list_head head;
+ 	struct nvkm_conn *conn;
++	bool identity;
+ 
+ 	/* Assembly state. */
+ #define NVKM_OUTP_PRIV 1
+@@ -41,7 +42,8 @@ struct nvkm_outp_func {
+ 	void (*init)(struct nvkm_outp *);
+ 	void (*fini)(struct nvkm_outp *);
+ 	int (*acquire)(struct nvkm_outp *);
+-	void (*release)(struct nvkm_outp *, struct nvkm_ior *);
++	void (*release)(struct nvkm_outp *);
++	void (*disable)(struct nvkm_outp *, struct nvkm_ior *);
+ };
+ 
+ #define OUTP_MSG(o,l,f,a...) do {                                              \
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+index b80618e35491..d65959ef0564 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c
+@@ -158,7 +158,8 @@ gm200_devinit_post(struct nvkm_devinit *base, bool post)
+ 	}
+ 
+ 	/* load and execute some other ucode image (bios therm?) */
+-	return pmu_load(init, 0x01, post, NULL, NULL);
++	pmu_load(init, 0x01, post, NULL, NULL);
++	return 0;
+ }
+ 
+ static const struct nvkm_devinit_func
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+index de269eb482dd..7459def78d50 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+@@ -1423,7 +1423,7 @@ nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma)
+ void
+ nvkm_vmm_part(struct nvkm_vmm *vmm, struct nvkm_memory *inst)
+ {
+-	if (vmm->func->part && inst) {
++	if (inst && vmm->func->part) {
+ 		mutex_lock(&vmm->mutex);
+ 		vmm->func->part(vmm, inst);
+ 		mutex_unlock(&vmm->mutex);
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 25b7bd56ae11..1cb41992aaa1 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -335,7 +335,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 		struct hid_field *field, struct hid_usage *usage,
+ 		unsigned long **bit, int *max)
+ {
+-	if (usage->hid == (HID_UP_CUSTOM | 0x0003)) {
++	if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
++			usage->hid == (HID_UP_MSVENDOR | 0x0003)) {
+ 		/* The fn key on Apple USB keyboards */
+ 		set_bit(EV_REP, hi->input->evbit);
+ 		hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
+@@ -472,6 +473,12 @@ static const struct hid_device_id apple_devices[] = {
+ 		.driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
++	{ HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI),
++		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI),
+ 		.driver_data = APPLE_HAS_FN },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO),
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index e80bcd71fe1e..eee6b79fb131 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -88,6 +88,7 @@
+ #define USB_DEVICE_ID_ANTON_TOUCH_PAD	0x3101
+ 
+ #define USB_VENDOR_ID_APPLE		0x05ac
++#define BT_VENDOR_ID_APPLE		0x004c
+ #define USB_DEVICE_ID_APPLE_MIGHTYMOUSE	0x0304
+ #define USB_DEVICE_ID_APPLE_MAGICMOUSE	0x030d
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD	0x030e
+@@ -157,6 +158,7 @@
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO   0x0256
+ #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_JIS   0x0257
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI   0x0267
++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI   0x026c
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI	0x0290
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO	0x0291
+ #define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS	0x0292
+@@ -526,10 +528,6 @@
+ #define I2C_VENDOR_ID_HANTICK		0x0911
+ #define I2C_PRODUCT_ID_HANTICK_5288	0x5288
+ 
+-#define I2C_VENDOR_ID_RAYD		0x2386
+-#define I2C_PRODUCT_ID_RAYD_3118	0x3118
+-#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+-
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+ #define USB_DEVICE_ID_HANWANG_TABLET_LAST	0x8fff
+@@ -949,6 +947,7 @@
+ #define USB_DEVICE_ID_SAITEK_RUMBLEPAD	0xff17
+ #define USB_DEVICE_ID_SAITEK_PS1000	0x0621
+ #define USB_DEVICE_ID_SAITEK_RAT7_OLD	0x0ccb
++#define USB_DEVICE_ID_SAITEK_RAT7_CONTAGION	0x0ccd
+ #define USB_DEVICE_ID_SAITEK_RAT7	0x0cd7
+ #define USB_DEVICE_ID_SAITEK_RAT9	0x0cfa
+ #define USB_DEVICE_ID_SAITEK_MMO7	0x0cd0
+diff --git a/drivers/hid/hid-saitek.c b/drivers/hid/hid-saitek.c
+index 39e642686ff0..683861f324e3 100644
+--- a/drivers/hid/hid-saitek.c
++++ b/drivers/hid/hid-saitek.c
+@@ -183,6 +183,8 @@ static const struct hid_device_id saitek_devices[] = {
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7_CONTAGION),
++		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT9),
+ 		.driver_data = SAITEK_RELEASE_MODE_RAT7 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9),
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 50af72baa5ca..2b63487057c2 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -579,6 +579,28 @@ void sensor_hub_device_close(struct hid_sensor_hub_device *hsdev)
+ }
+ EXPORT_SYMBOL_GPL(sensor_hub_device_close);
+ 
++static __u8 *sensor_hub_report_fixup(struct hid_device *hdev, __u8 *rdesc,
++		unsigned int *rsize)
++{
++	/*
++	 * Checks if the report descriptor of Thinkpad Helix 2 has a logical
++	 * minimum for magnetic flux axis greater than the maximum.
++	 */
++	if (hdev->product == USB_DEVICE_ID_TEXAS_INSTRUMENTS_LENOVO_YOGA &&
++		*rsize == 2558 && rdesc[913] == 0x17 && rdesc[914] == 0x40 &&
++		rdesc[915] == 0x81 && rdesc[916] == 0x08 &&
++		rdesc[917] == 0x00 && rdesc[918] == 0x27 &&
++		rdesc[921] == 0x07 && rdesc[922] == 0x00) {
++		/* Sets negative logical minimum for mag x, y and z */
++		rdesc[914] = rdesc[935] = rdesc[956] = 0xc0;
++		rdesc[915] = rdesc[936] = rdesc[957] = 0x7e;
++		rdesc[916] = rdesc[937] = rdesc[958] = 0xf7;
++		rdesc[917] = rdesc[938] = rdesc[959] = 0xff;
++	}
++
++	return rdesc;
++}
++
+ static int sensor_hub_probe(struct hid_device *hdev,
+ 				const struct hid_device_id *id)
+ {
+@@ -743,6 +765,7 @@ static struct hid_driver sensor_hub_driver = {
+ 	.probe = sensor_hub_probe,
+ 	.remove = sensor_hub_remove,
+ 	.raw_event = sensor_hub_raw_event,
++	.report_fixup = sensor_hub_report_fixup,
+ #ifdef CONFIG_PM
+ 	.suspend = sensor_hub_suspend,
+ 	.resume = sensor_hub_resume,
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 64773433b947..37013b58098c 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -48,6 +48,7 @@
+ #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
+ #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
+ #define I2C_HID_QUIRK_RESEND_REPORT_DESCR	BIT(2)
++#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(3)
+ 
+ /* flags */
+ #define I2C_HID_STARTED		0
+@@ -169,13 +170,10 @@ static const struct i2c_hid_quirks {
+ 	{ USB_VENDOR_ID_WEIDA, USB_DEVICE_ID_WEIDA_8755,
+ 		I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV },
+ 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
+-		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
++		I2C_HID_QUIRK_NO_RUNTIME_PM },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+-	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
+-		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+@@ -1110,7 +1108,9 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		goto err_mem_free;
+ 	}
+ 
+-	pm_runtime_put(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_put(&client->dev);
++
+ 	return 0;
+ 
+ err_mem_free:
+@@ -1136,7 +1136,8 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 	struct i2c_hid *ihid = i2c_get_clientdata(client);
+ 	struct hid_device *hid;
+ 
+-	pm_runtime_get_sync(&client->dev);
++	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
++		pm_runtime_get_sync(&client->dev);
+ 	pm_runtime_disable(&client->dev);
+ 	pm_runtime_set_suspended(&client->dev);
+ 	pm_runtime_put_noidle(&client->dev);
+@@ -1237,11 +1238,16 @@ static int i2c_hid_resume(struct device *dev)
+ 	pm_runtime_enable(dev);
+ 
+ 	enable_irq(client->irq);
+-	ret = i2c_hid_hwreset(client);
++
++	/* Instead of resetting device, simply powers the device on. This
++	 * solves "incomplete reports" on Raydium devices 2386:3118 and
++	 * 2386:4B33
++	 */
++	ret = i2c_hid_set_power(client, I2C_HID_PWR_ON);
+ 	if (ret)
+ 		return ret;
+ 
+-	/* RAYDIUM device (2386:3118) need to re-send report descr cmd
++	/* Some devices need to re-send report descr cmd
+ 	 * after resume, after this it will be back normal.
+ 	 * otherwise it issues too many incomplete reports.
+ 	 */
+diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+index 97869b7410eb..da133716bed0 100644
+--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+@@ -29,6 +29,7 @@
+ #define CNL_Ax_DEVICE_ID	0x9DFC
+ #define GLK_Ax_DEVICE_ID	0x31A2
+ #define CNL_H_DEVICE_ID		0xA37C
++#define SPT_H_DEVICE_ID		0xA135
+ 
+ #define	REVISION_ID_CHT_A0	0x6
+ #define	REVISION_ID_CHT_Ax_SI	0x0
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index a2c53ea3b5ed..c7b8eb32b1ea 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -38,6 +38,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, GLK_Ax_DEVICE_ID)},
+ 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_H_DEVICE_ID)},
++	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, SPT_H_DEVICE_ID)},
+ 	{0, }
+ };
+ MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index ced041899456..f4d08c8ac7f8 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -76,6 +76,7 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 					__u32 version)
+ {
+ 	int ret = 0;
++	unsigned int cur_cpu;
+ 	struct vmbus_channel_initiate_contact *msg;
+ 	unsigned long flags;
+ 
+@@ -118,9 +119,10 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ 	 * the CPU attempting to connect may not be CPU 0.
+ 	 */
+ 	if (version >= VERSION_WIN8_1) {
+-		msg->target_vcpu =
+-			hv_cpu_number_to_vp_number(smp_processor_id());
+-		vmbus_connection.connect_cpu = smp_processor_id();
++		cur_cpu = get_cpu();
++		msg->target_vcpu = hv_cpu_number_to_vp_number(cur_cpu);
++		vmbus_connection.connect_cpu = cur_cpu;
++		put_cpu();
+ 	} else {
+ 		msg->target_vcpu = 0;
+ 		vmbus_connection.connect_cpu = 0;
+diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c
+index 9918bdd81619..a403e8579b65 100644
+--- a/drivers/i2c/busses/i2c-uniphier-f.c
++++ b/drivers/i2c/busses/i2c-uniphier-f.c
+@@ -401,11 +401,8 @@ static int uniphier_fi2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_fi2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c
+index bb181b088291..454f914ae66d 100644
+--- a/drivers/i2c/busses/i2c-uniphier.c
++++ b/drivers/i2c/busses/i2c-uniphier.c
+@@ -248,11 +248,8 @@ static int uniphier_i2c_master_xfer(struct i2c_adapter *adap,
+ 		return ret;
+ 
+ 	for (msg = msgs; msg < emsg; msg++) {
+-		/* If next message is read, skip the stop condition */
+-		bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+-		/* but, force it if I2C_M_STOP is set */
+-		if (msg->flags & I2C_M_STOP)
+-			stop = true;
++		/* Emit STOP if it is the last message or I2C_M_STOP is set. */
++		bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+ 
+ 		ret = uniphier_i2c_master_xfer_one(adap, msg, stop);
+ 		if (ret)
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 4994f920a836..8653182be818 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -187,12 +187,15 @@ static int st_lsm6dsx_set_fifo_odr(struct st_lsm6dsx_sensor *sensor,
+ 
+ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ {
+-	u16 fifo_watermark = ~0, cur_watermark, sip = 0, fifo_th_mask;
++	u16 fifo_watermark = ~0, cur_watermark, fifo_th_mask;
+ 	struct st_lsm6dsx_hw *hw = sensor->hw;
+ 	struct st_lsm6dsx_sensor *cur_sensor;
+ 	int i, err, data;
+ 	__le16 wdata;
+ 
++	if (!hw->sip)
++		return 0;
++
+ 	for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) {
+ 		cur_sensor = iio_priv(hw->iio_devs[i]);
+ 
+@@ -203,14 +206,10 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark)
+ 						       : cur_sensor->watermark;
+ 
+ 		fifo_watermark = min_t(u16, fifo_watermark, cur_watermark);
+-		sip += cur_sensor->sip;
+ 	}
+ 
+-	if (!sip)
+-		return 0;
+-
+-	fifo_watermark = max_t(u16, fifo_watermark, sip);
+-	fifo_watermark = (fifo_watermark / sip) * sip;
++	fifo_watermark = max_t(u16, fifo_watermark, hw->sip);
++	fifo_watermark = (fifo_watermark / hw->sip) * hw->sip;
+ 	fifo_watermark = fifo_watermark * hw->settings->fifo_ops.th_wl;
+ 
+ 	err = regmap_read(hw->regmap, hw->settings->fifo_ops.fifo_th.addr + 1,
+diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c
+index 54e383231d1e..c31b9633f32d 100644
+--- a/drivers/iio/temperature/maxim_thermocouple.c
++++ b/drivers/iio/temperature/maxim_thermocouple.c
+@@ -258,7 +258,6 @@ static int maxim_thermocouple_remove(struct spi_device *spi)
+ static const struct spi_device_id maxim_thermocouple_id[] = {
+ 	{"max6675", MAX6675},
+ 	{"max31855", MAX31855},
+-	{"max31856", MAX31855},
+ 	{},
+ };
+ MODULE_DEVICE_TABLE(spi, maxim_thermocouple_id);
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index ec8fb289621f..5f437d1570fb 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -124,6 +124,8 @@ static DEFINE_MUTEX(mut);
+ static DEFINE_IDR(ctx_idr);
+ static DEFINE_IDR(multicast_idr);
+ 
++static const struct file_operations ucma_fops;
++
+ static inline struct ucma_context *_ucma_find_context(int id,
+ 						      struct ucma_file *file)
+ {
+@@ -1581,6 +1583,10 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
+ 	f = fdget(cmd.fd);
+ 	if (!f.file)
+ 		return -ENOENT;
++	if (f.file->f_op != &ucma_fops) {
++		ret = -EINVAL;
++		goto file_put;
++	}
+ 
+ 	/* Validate current fd and prevent destruction of id. */
+ 	ctx = ucma_get_ctx(f.file->private_data, cmd.id);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a76e206704d4..cb1e69bdad0b 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -844,6 +844,8 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp)
+ 				"Failed to destroy Shadow QP");
+ 			return rc;
+ 		}
++		bnxt_qplib_free_qp_res(&rdev->qplib_res,
++				       &rdev->qp1_sqp->qplib_qp);
+ 		mutex_lock(&rdev->qp_lock);
+ 		list_del(&rdev->qp1_sqp->list);
+ 		atomic_dec(&rdev->qp_count);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index e426b990c1dd..6ad0d46ab879 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -196,7 +196,7 @@ static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res,
+ 				       struct bnxt_qplib_qp *qp)
+ {
+ 	struct bnxt_qplib_q *rq = &qp->rq;
+-	struct bnxt_qplib_q *sq = &qp->rq;
++	struct bnxt_qplib_q *sq = &qp->sq;
+ 	int rc = 0;
+ 
+ 	if (qp->sq_hdr_buf_size && sq->hwq.max_elements) {
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index d77c97fe4a23..c53363443280 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -3073,7 +3073,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+ 		return 0;
+ 
+ 	offset_mask = pte_pgsize - 1;
+-	__pte	    = *pte & PM_ADDR_MASK;
++	__pte	    = __sme_clr(*pte & PM_ADDR_MASK);
+ 
+ 	return (__pte & ~offset_mask) | (iova & offset_mask);
+ }
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 75df4c9d8b54..1c7c1250bf75 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -29,9 +29,6 @@
+  */
+ #define	MIN_RAID456_JOURNAL_SPACE (4*2048)
+ 
+-/* Global list of all raid sets */
+-static LIST_HEAD(raid_sets);
+-
+ static bool devices_handle_discard_safely = false;
+ 
+ /*
+@@ -227,7 +224,6 @@ struct rs_layout {
+ 
+ struct raid_set {
+ 	struct dm_target *ti;
+-	struct list_head list;
+ 
+ 	uint32_t stripe_cache_entries;
+ 	unsigned long ctr_flags;
+@@ -273,19 +269,6 @@ static void rs_config_restore(struct raid_set *rs, struct rs_layout *l)
+ 	mddev->new_chunk_sectors = l->new_chunk_sectors;
+ }
+ 
+-/* Find any raid_set in active slot for @rs on global list */
+-static struct raid_set *rs_find_active(struct raid_set *rs)
+-{
+-	struct raid_set *r;
+-	struct mapped_device *md = dm_table_get_md(rs->ti->table);
+-
+-	list_for_each_entry(r, &raid_sets, list)
+-		if (r != rs && dm_table_get_md(r->ti->table) == md)
+-			return r;
+-
+-	return NULL;
+-}
+-
+ /* raid10 algorithms (i.e. formats) */
+ #define	ALGORITHM_RAID10_DEFAULT	0
+ #define	ALGORITHM_RAID10_NEAR		1
+@@ -764,7 +747,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 
+ 	mddev_init(&rs->md);
+ 
+-	INIT_LIST_HEAD(&rs->list);
+ 	rs->raid_disks = raid_devs;
+ 	rs->delta_disks = 0;
+ 
+@@ -782,9 +764,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	for (i = 0; i < raid_devs; i++)
+ 		md_rdev_init(&rs->dev[i].rdev);
+ 
+-	/* Add @rs to global list. */
+-	list_add(&rs->list, &raid_sets);
+-
+ 	/*
+ 	 * Remaining items to be initialized by further RAID params:
+ 	 *  rs->md.persistent
+@@ -797,7 +776,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r
+ 	return rs;
+ }
+ 
+-/* Free all @rs allocations and remove it from global list. */
++/* Free all @rs allocations */
+ static void raid_set_free(struct raid_set *rs)
+ {
+ 	int i;
+@@ -815,8 +794,6 @@ static void raid_set_free(struct raid_set *rs)
+ 			dm_put_device(rs->ti, rs->dev[i].data_dev);
+ 	}
+ 
+-	list_del(&rs->list);
+-
+ 	kfree(rs);
+ }
+ 
+@@ -3149,6 +3126,11 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
+ 		rs_set_new(rs);
+ 	} else if (rs_is_recovering(rs)) {
++		/* Rebuild particular devices */
++		if (test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
++			set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
++			rs_setup_recovery(rs, MaxSector);
++		}
+ 		/* A recovering raid set may be resized */
+ 		; /* skip setup rs */
+ 	} else if (rs_is_reshaping(rs)) {
+@@ -3350,32 +3332,53 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
+ 	return DM_MAPIO_SUBMITTED;
+ }
+ 
+-/* Return string describing the current sync action of @mddev */
+-static const char *decipher_sync_action(struct mddev *mddev, unsigned long recovery)
++/* Return sync state string for @state */
++enum sync_state { st_frozen, st_reshape, st_resync, st_check, st_repair, st_recover, st_idle };
++static const char *sync_str(enum sync_state state)
++{
++	/* Has to be in above sync_state order! */
++	static const char *sync_strs[] = {
++		"frozen",
++		"reshape",
++		"resync",
++		"check",
++		"repair",
++		"recover",
++		"idle"
++	};
++
++	return __within_range(state, 0, ARRAY_SIZE(sync_strs) - 1) ? sync_strs[state] : "undef";
++};
++
++/* Return enum sync_state for @mddev derived from @recovery flags */
++static const enum sync_state decipher_sync_action(struct mddev *mddev, unsigned long recovery)
+ {
+ 	if (test_bit(MD_RECOVERY_FROZEN, &recovery))
+-		return "frozen";
++		return st_frozen;
+ 
+-	/* The MD sync thread can be done with io but still be running */
++	/* The MD sync thread can be done with io or be interrupted but still be running */
+ 	if (!test_bit(MD_RECOVERY_DONE, &recovery) &&
+ 	    (test_bit(MD_RECOVERY_RUNNING, &recovery) ||
+ 	     (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &recovery)))) {
+ 		if (test_bit(MD_RECOVERY_RESHAPE, &recovery))
+-			return "reshape";
++			return st_reshape;
+ 
+ 		if (test_bit(MD_RECOVERY_SYNC, &recovery)) {
+ 			if (!test_bit(MD_RECOVERY_REQUESTED, &recovery))
+-				return "resync";
+-			else if (test_bit(MD_RECOVERY_CHECK, &recovery))
+-				return "check";
+-			return "repair";
++				return st_resync;
++			if (test_bit(MD_RECOVERY_CHECK, &recovery))
++				return st_check;
++			return st_repair;
+ 		}
+ 
+ 		if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+-			return "recover";
++			return st_recover;
++
++		if (mddev->reshape_position != MaxSector)
++			return st_reshape;
+ 	}
+ 
+-	return "idle";
++	return st_idle;
+ }
+ 
+ /*
+@@ -3409,6 +3412,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 				sector_t resync_max_sectors)
+ {
+ 	sector_t r;
++	enum sync_state state;
+ 	struct mddev *mddev = &rs->md;
+ 
+ 	clear_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+@@ -3419,20 +3423,14 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 		set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+ 	} else {
+-		if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) &&
+-		    !test_bit(MD_RECOVERY_INTR, &recovery) &&
+-		    (test_bit(MD_RECOVERY_NEEDED, &recovery) ||
+-		     test_bit(MD_RECOVERY_RESHAPE, &recovery) ||
+-		     test_bit(MD_RECOVERY_RUNNING, &recovery)))
+-			r = mddev->curr_resync_completed;
+-		else
++		state = decipher_sync_action(mddev, recovery);
++
++		if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery))
+ 			r = mddev->recovery_cp;
++		else
++			r = mddev->curr_resync_completed;
+ 
+-		if (r >= resync_max_sectors &&
+-		    (!test_bit(MD_RECOVERY_REQUESTED, &recovery) ||
+-		     (!test_bit(MD_RECOVERY_FROZEN, &recovery) &&
+-		      !test_bit(MD_RECOVERY_NEEDED, &recovery) &&
+-		      !test_bit(MD_RECOVERY_RUNNING, &recovery)))) {
++		if (state == st_idle && r >= resync_max_sectors) {
+ 			/*
+ 			 * Sync complete.
+ 			 */
+@@ -3440,24 +3438,20 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			if (test_bit(MD_RECOVERY_RECOVER, &recovery))
+ 				set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_RECOVER, &recovery)) {
++		} else if (state == st_recover)
+ 			/*
+ 			 * In case we are recovering, the array is not in sync
+ 			 * and health chars should show the recovering legs.
+ 			 */
+ 			;
+-
+-		} else if (test_bit(MD_RECOVERY_SYNC, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_resync)
+ 			/*
+ 			 * If "resync" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+ 			 * characters shall be 'a'.
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+-
+-		} else if (test_bit(MD_RECOVERY_RESHAPE, &recovery) &&
+-			   !test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_reshape)
+ 			/*
+ 			 * If "reshape" is occurring, the raid set
+ 			 * is or may be out of sync hence the health
+@@ -3465,7 +3459,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+ 
+-		} else if (test_bit(MD_RECOVERY_REQUESTED, &recovery)) {
++		else if (state == st_check || state == st_repair)
+ 			/*
+ 			 * If "check" or "repair" is occurring, the raid set has
+ 			 * undergone an initial sync and the health characters
+@@ -3473,12 +3467,12 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery,
+ 			 */
+ 			set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags);
+ 
+-		} else {
++		else {
+ 			struct md_rdev *rdev;
+ 
+ 			/*
+ 			 * We are idle and recovery is needed, prevent 'A' chars race
+-			 * caused by components still set to in-sync by constrcuctor.
++			 * caused by components still set to in-sync by constructor.
+ 			 */
+ 			if (test_bit(MD_RECOVERY_NEEDED, &recovery))
+ 				set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags);
+@@ -3542,7 +3536,7 @@ static void raid_status(struct dm_target *ti, status_type_t type,
+ 		progress = rs_get_progress(rs, recovery, resync_max_sectors);
+ 		resync_mismatches = (mddev->last_sync_action && !strcasecmp(mddev->last_sync_action, "check")) ?
+ 				    atomic64_read(&mddev->resync_mismatches) : 0;
+-		sync_action = decipher_sync_action(&rs->md, recovery);
++		sync_action = sync_str(decipher_sync_action(&rs->md, recovery));
+ 
+ 		/* HM FIXME: do we want another state char for raid0? It shows 'D'/'A'/'-' now */
+ 		for (i = 0; i < rs->raid_disks; i++)
+@@ -3892,14 +3886,13 @@ static int rs_start_reshape(struct raid_set *rs)
+ 	struct mddev *mddev = &rs->md;
+ 	struct md_personality *pers = mddev->pers;
+ 
++	/* Don't allow the sync thread to work until the table gets reloaded. */
++	set_bit(MD_RECOVERY_WAIT, &mddev->recovery);
++
+ 	r = rs_setup_reshape(rs);
+ 	if (r)
+ 		return r;
+ 
+-	/* Need to be resumed to be able to start reshape, recovery is frozen until raid_resume() though */
+-	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
+-		mddev_resume(mddev);
+-
+ 	/*
+ 	 * Check any reshape constraints enforced by the personalility
+ 	 *
+@@ -3923,10 +3916,6 @@ static int rs_start_reshape(struct raid_set *rs)
+ 		}
+ 	}
+ 
+-	/* Suspend because a resume will happen in raid_resume() */
+-	set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags);
+-	mddev_suspend(mddev);
+-
+ 	/*
+ 	 * Now reshape got set up, update superblocks to
+ 	 * reflect the fact so that a table reload will
+@@ -3947,29 +3936,6 @@ static int raid_preresume(struct dm_target *ti)
+ 	if (test_and_set_bit(RT_FLAG_RS_PRERESUMED, &rs->runtime_flags))
+ 		return 0;
+ 
+-	if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) {
+-		struct raid_set *rs_active = rs_find_active(rs);
+-
+-		if (rs_active) {
+-			/*
+-			 * In case no rebuilds have been requested
+-			 * and an active table slot exists, copy
+-			 * current resynchonization completed and
+-			 * reshape position pointers across from
+-			 * suspended raid set in the active slot.
+-			 *
+-			 * This resumes the new mapping at current
+-			 * offsets to continue recover/reshape without
+-			 * necessarily redoing a raid set partially or
+-			 * causing data corruption in case of a reshape.
+-			 */
+-			if (rs_active->md.curr_resync_completed != MaxSector)
+-				mddev->curr_resync_completed = rs_active->md.curr_resync_completed;
+-			if (rs_active->md.reshape_position != MaxSector)
+-				mddev->reshape_position = rs_active->md.reshape_position;
+-		}
+-	}
+-
+ 	/*
+ 	 * The superblocks need to be updated on disk if the
+ 	 * array is new or new devices got added (thus zeroed
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index 72142021b5c9..20b0776e39ef 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -188,6 +188,12 @@ struct dm_pool_metadata {
+ 	unsigned long flags;
+ 	sector_t data_block_size;
+ 
++	/*
++	 * We reserve a section of the metadata for commit overhead.
++	 * All reported space does *not* include this.
++	 */
++	dm_block_t metadata_reserve;
++
+ 	/*
+ 	 * Set if a transaction has to be aborted but the attempt to roll back
+ 	 * to the previous (good) transaction failed.  The only pool metadata
+@@ -816,6 +822,20 @@ static int __commit_transaction(struct dm_pool_metadata *pmd)
+ 	return dm_tm_commit(pmd->tm, sblock);
+ }
+ 
++static void __set_metadata_reserve(struct dm_pool_metadata *pmd)
++{
++	int r;
++	dm_block_t total;
++	dm_block_t max_blocks = 4096; /* 16M */
++
++	r = dm_sm_get_nr_blocks(pmd->metadata_sm, &total);
++	if (r) {
++		DMERR("could not get size of metadata device");
++		pmd->metadata_reserve = max_blocks;
++	} else
++		pmd->metadata_reserve = min(max_blocks, div_u64(total, 10));
++}
++
+ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 					       sector_t data_block_size,
+ 					       bool format_device)
+@@ -849,6 +869,8 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
+ 		return ERR_PTR(r);
+ 	}
+ 
++	__set_metadata_reserve(pmd);
++
+ 	return pmd;
+ }
+ 
+@@ -1820,6 +1842,13 @@ int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
+ 	down_read(&pmd->root_lock);
+ 	if (!pmd->fail_io)
+ 		r = dm_sm_get_nr_free(pmd->metadata_sm, result);
++
++	if (!r) {
++		if (*result < pmd->metadata_reserve)
++			*result = 0;
++		else
++			*result -= pmd->metadata_reserve;
++	}
+ 	up_read(&pmd->root_lock);
+ 
+ 	return r;
+@@ -1932,8 +1961,11 @@ int dm_pool_resize_metadata_dev(struct dm_pool_metadata *pmd, dm_block_t new_cou
+ 	int r = -EINVAL;
+ 
+ 	down_write(&pmd->root_lock);
+-	if (!pmd->fail_io)
++	if (!pmd->fail_io) {
+ 		r = __resize_space_map(pmd->metadata_sm, new_count);
++		if (!r)
++			__set_metadata_reserve(pmd);
++	}
+ 	up_write(&pmd->root_lock);
+ 
+ 	return r;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 1087f6a1ac79..b512efd4050c 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -200,7 +200,13 @@ struct dm_thin_new_mapping;
+ enum pool_mode {
+ 	PM_WRITE,		/* metadata may be changed */
+ 	PM_OUT_OF_DATA_SPACE,	/* metadata may be changed, though data may not be allocated */
++
++	/*
++	 * Like READ_ONLY, except may switch back to WRITE on metadata resize. Reported as READ_ONLY.
++	 */
++	PM_OUT_OF_METADATA_SPACE,
+ 	PM_READ_ONLY,		/* metadata may not be changed */
++
+ 	PM_FAIL,		/* all I/O fails */
+ };
+ 
+@@ -1388,7 +1394,35 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode);
+ 
+ static void requeue_bios(struct pool *pool);
+ 
+-static void check_for_space(struct pool *pool)
++static bool is_read_only_pool_mode(enum pool_mode mode)
++{
++	return (mode == PM_OUT_OF_METADATA_SPACE || mode == PM_READ_ONLY);
++}
++
++static bool is_read_only(struct pool *pool)
++{
++	return is_read_only_pool_mode(get_pool_mode(pool));
++}
++
++static void check_for_metadata_space(struct pool *pool)
++{
++	int r;
++	const char *ooms_reason = NULL;
++	dm_block_t nr_free;
++
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &nr_free);
++	if (r)
++		ooms_reason = "Could not get free metadata blocks";
++	else if (!nr_free)
++		ooms_reason = "No free metadata blocks";
++
++	if (ooms_reason && !is_read_only(pool)) {
++		DMERR("%s", ooms_reason);
++		set_pool_mode(pool, PM_OUT_OF_METADATA_SPACE);
++	}
++}
++
++static void check_for_data_space(struct pool *pool)
+ {
+ 	int r;
+ 	dm_block_t nr_free;
+@@ -1414,14 +1448,16 @@ static int commit(struct pool *pool)
+ {
+ 	int r;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY)
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE)
+ 		return -EINVAL;
+ 
+ 	r = dm_pool_commit_metadata(pool->pmd);
+ 	if (r)
+ 		metadata_operation_failed(pool, "dm_pool_commit_metadata", r);
+-	else
+-		check_for_space(pool);
++	else {
++		check_for_metadata_space(pool);
++		check_for_data_space(pool);
++	}
+ 
+ 	return r;
+ }
+@@ -1487,6 +1523,19 @@ static int alloc_data_block(struct thin_c *tc, dm_block_t *result)
+ 		return r;
+ 	}
+ 
++	r = dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks);
++	if (r) {
++		metadata_operation_failed(pool, "dm_pool_get_free_metadata_block_count", r);
++		return r;
++	}
++
++	if (!free_blocks) {
++		/* Let's commit before we use up the metadata reserve. */
++		r = commit(pool);
++		if (r)
++			return r;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -1518,6 +1567,7 @@ static blk_status_t should_error_unserviceable_bio(struct pool *pool)
+ 	case PM_OUT_OF_DATA_SPACE:
+ 		return pool->pf.error_if_no_space ? BLK_STS_NOSPC : 0;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+ 	case PM_FAIL:
+ 		return BLK_STS_IOERR;
+@@ -2481,8 +2531,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 		error_retry_list(pool);
+ 		break;
+ 
++	case PM_OUT_OF_METADATA_SPACE:
+ 	case PM_READ_ONLY:
+-		if (old_mode != new_mode)
++		if (!is_read_only_pool_mode(old_mode))
+ 			notify_of_pool_mode_change(pool, "read-only");
+ 		dm_pool_metadata_read_only(pool->pmd);
+ 		pool->process_bio = process_bio_read_only;
+@@ -3420,6 +3471,10 @@ static int maybe_resize_metadata_dev(struct dm_target *ti, bool *need_commit)
+ 		DMINFO("%s: growing the metadata device from %llu to %llu blocks",
+ 		       dm_device_name(pool->pool_md),
+ 		       sb_metadata_dev_size, metadata_dev_size);
++
++		if (get_pool_mode(pool) == PM_OUT_OF_METADATA_SPACE)
++			set_pool_mode(pool, PM_WRITE);
++
+ 		r = dm_pool_resize_metadata_dev(pool->pmd, metadata_dev_size);
+ 		if (r) {
+ 			metadata_operation_failed(pool, "dm_pool_resize_metadata_dev", r);
+@@ -3724,7 +3779,7 @@ static int pool_message(struct dm_target *ti, unsigned argc, char **argv,
+ 	struct pool_c *pt = ti->private;
+ 	struct pool *pool = pt->pool;
+ 
+-	if (get_pool_mode(pool) >= PM_READ_ONLY) {
++	if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE) {
+ 		DMERR("%s: unable to service pool target messages in READ_ONLY or FAIL mode",
+ 		      dm_device_name(pool->pool_md));
+ 		return -EOPNOTSUPP;
+@@ -3798,6 +3853,7 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 	dm_block_t nr_blocks_data;
+ 	dm_block_t nr_blocks_metadata;
+ 	dm_block_t held_root;
++	enum pool_mode mode;
+ 	char buf[BDEVNAME_SIZE];
+ 	char buf2[BDEVNAME_SIZE];
+ 	struct pool_c *pt = ti->private;
+@@ -3868,9 +3924,10 @@ static void pool_status(struct dm_target *ti, status_type_t type,
+ 		else
+ 			DMEMIT("- ");
+ 
+-		if (pool->pf.mode == PM_OUT_OF_DATA_SPACE)
++		mode = get_pool_mode(pool);
++		if (mode == PM_OUT_OF_DATA_SPACE)
+ 			DMEMIT("out_of_data_space ");
+-		else if (pool->pf.mode == PM_READ_ONLY)
++		else if (is_read_only_pool_mode(mode))
+ 			DMEMIT("ro ");
+ 		else
+ 			DMEMIT("rw ");
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 35bd3a62451b..8c93d44a052c 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4531,11 +4531,12 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
+ 		allow_barrier(conf);
+ 	}
+ 
++	raise_barrier(conf, 0);
+ read_more:
+ 	/* Now schedule reads for blocks from sector_nr to last */
+ 	r10_bio = raid10_alloc_init_r10buf(conf);
+ 	r10_bio->state = 0;
+-	raise_barrier(conf, sectors_done != 0);
++	raise_barrier(conf, 1);
+ 	atomic_set(&r10_bio->remaining, 0);
+ 	r10_bio->mddev = mddev;
+ 	r10_bio->sector = sector_nr;
+@@ -4631,6 +4632,8 @@ read_more:
+ 	if (sector_nr <= last)
+ 		goto read_more;
+ 
++	lower_barrier(conf);
++
+ 	/* Now that we have done the whole section we can
+ 	 * update reshape_progress
+ 	 */
+diff --git a/drivers/md/raid5-log.h b/drivers/md/raid5-log.h
+index a001808a2b77..bfb811407061 100644
+--- a/drivers/md/raid5-log.h
++++ b/drivers/md/raid5-log.h
+@@ -46,6 +46,11 @@ extern int ppl_modify_log(struct r5conf *conf, struct md_rdev *rdev, bool add);
+ extern void ppl_quiesce(struct r5conf *conf, int quiesce);
+ extern int ppl_handle_flush_request(struct r5l_log *log, struct bio *bio);
+ 
++static inline bool raid5_has_log(struct r5conf *conf)
++{
++	return test_bit(MD_HAS_JOURNAL, &conf->mddev->flags);
++}
++
+ static inline bool raid5_has_ppl(struct r5conf *conf)
+ {
+ 	return test_bit(MD_HAS_PPL, &conf->mddev->flags);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 49107c52c8e6..9050bfc71309 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -735,7 +735,7 @@ static bool stripe_can_batch(struct stripe_head *sh)
+ {
+ 	struct r5conf *conf = sh->raid_conf;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return false;
+ 	return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+ 		!test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+@@ -7739,7 +7739,7 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors)
+ 	sector_t newsize;
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	sectors &= ~((sector_t)conf->chunk_sectors - 1);
+ 	newsize = raid5_size(mddev, sectors, mddev->raid_disks);
+@@ -7790,7 +7790,7 @@ static int check_reshape(struct mddev *mddev)
+ {
+ 	struct r5conf *conf = mddev->private;
+ 
+-	if (conf->log || raid5_has_ppl(conf))
++	if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ 		return -EINVAL;
+ 	if (mddev->delta_disks == 0 &&
+ 	    mddev->new_layout == mddev->layout &&
+diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
+index 17f12c18d225..c37deef3bcf1 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_com.c
++++ b/drivers/net/ethernet/amazon/ena/ena_com.c
+@@ -459,7 +459,7 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu
+ 	cqe = &admin_queue->cq.entries[head_masked];
+ 
+ 	/* Go over all the completions */
+-	while ((cqe->acq_common_descriptor.flags &
++	while ((READ_ONCE(cqe->acq_common_descriptor.flags) &
+ 			ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		/* Do not read the rest of the completion entry before the
+ 		 * phase bit was validated
+@@ -637,7 +637,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
+ 
+ 	mmiowb();
+ 	for (i = 0; i < timeout; i++) {
+-		if (read_resp->req_id == mmio_read->seq_num)
++		if (READ_ONCE(read_resp->req_id) == mmio_read->seq_num)
+ 			break;
+ 
+ 		udelay(1);
+@@ -1796,8 +1796,8 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data)
+ 	aenq_common = &aenq_e->aenq_common_desc;
+ 
+ 	/* Go over all the events */
+-	while ((aenq_common->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) ==
+-	       phase) {
++	while ((READ_ONCE(aenq_common->flags) &
++		ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) {
+ 		pr_debug("AENQ! Group[%x] Syndrom[%x] timestamp: [%llus]\n",
+ 			 aenq_common->group, aenq_common->syndrom,
+ 			 (u64)aenq_common->timestamp_low +
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index f2af87d70594..1b01cd2820ba 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -76,7 +76,7 @@ MODULE_DEVICE_TABLE(pci, ena_pci_tbl);
+ 
+ static int ena_rss_init_default(struct ena_adapter *adapter);
+ static void check_for_admin_com_state(struct ena_adapter *adapter);
+-static void ena_destroy_device(struct ena_adapter *adapter);
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
+ static int ena_restore_device(struct ena_adapter *adapter);
+ 
+ static void ena_tx_timeout(struct net_device *dev)
+@@ -461,7 +461,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 		return -ENOMEM;
+ 	}
+ 
+-	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE,
++	dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE,
+ 			   DMA_FROM_DEVICE);
+ 	if (unlikely(dma_mapping_error(rx_ring->dev, dma))) {
+ 		u64_stats_update_begin(&rx_ring->syncp);
+@@ -478,7 +478,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring,
+ 	rx_info->page_offset = 0;
+ 	ena_buf = &rx_info->ena_buf;
+ 	ena_buf->paddr = dma;
+-	ena_buf->len = PAGE_SIZE;
++	ena_buf->len = ENA_PAGE_SIZE;
+ 
+ 	return 0;
+ }
+@@ -495,7 +495,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring,
+ 		return;
+ 	}
+ 
+-	dma_unmap_page(rx_ring->dev, ena_buf->paddr, PAGE_SIZE,
++	dma_unmap_page(rx_ring->dev, ena_buf->paddr, ENA_PAGE_SIZE,
+ 		       DMA_FROM_DEVICE);
+ 
+ 	__free_page(page);
+@@ -916,10 +916,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
+ 	do {
+ 		dma_unmap_page(rx_ring->dev,
+ 			       dma_unmap_addr(&rx_info->ena_buf, paddr),
+-			       PAGE_SIZE, DMA_FROM_DEVICE);
++			       ENA_PAGE_SIZE, DMA_FROM_DEVICE);
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
+-				rx_info->page_offset, len, PAGE_SIZE);
++				rx_info->page_offset, len, ENA_PAGE_SIZE);
+ 
+ 		netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev,
+ 			  "rx skb updated. len %d. data_len %d\n",
+@@ -1900,7 +1900,7 @@ static int ena_close(struct net_device *netdev)
+ 			  "Destroy failure, restarting device\n");
+ 		ena_dump_stats_to_dmesg(adapter);
+ 		/* rtnl lock already obtained in dev_ioctl() layer */
+-		ena_destroy_device(adapter);
++		ena_destroy_device(adapter, false);
+ 		ena_restore_device(adapter);
+ 	}
+ 
+@@ -2549,12 +2549,15 @@ err_disable_msix:
+ 	return rc;
+ }
+ 
+-static void ena_destroy_device(struct ena_adapter *adapter)
++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 	struct ena_com_dev *ena_dev = adapter->ena_dev;
+ 	bool dev_up;
+ 
++	if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		return;
++
+ 	netif_carrier_off(netdev);
+ 
+ 	del_timer_sync(&adapter->timer_service);
+@@ -2562,7 +2565,8 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
+ 	adapter->dev_up_before_reset = dev_up;
+ 
+-	ena_com_set_admin_running_state(ena_dev, false);
++	if (!graceful)
++		ena_com_set_admin_running_state(ena_dev, false);
+ 
+ 	if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+ 		ena_down(adapter);
+@@ -2590,6 +2594,7 @@ static void ena_destroy_device(struct ena_adapter *adapter)
+ 	adapter->reset_reason = ENA_REGS_RESET_NORMAL;
+ 
+ 	clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
++	clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ }
+ 
+ static int ena_restore_device(struct ena_adapter *adapter)
+@@ -2634,6 +2639,7 @@ static int ena_restore_device(struct ena_adapter *adapter)
+ 		}
+ 	}
+ 
++	set_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
+ 	mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ));
+ 	dev_err(&pdev->dev, "Device reset completed successfully\n");
+ 
+@@ -2664,7 +2670,7 @@ static void ena_fw_reset_device(struct work_struct *work)
+ 		return;
+ 	}
+ 	rtnl_lock();
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, false);
+ 	ena_restore_device(adapter);
+ 	rtnl_unlock();
+ }
+@@ -3408,30 +3414,24 @@ static void ena_remove(struct pci_dev *pdev)
+ 		netdev->rx_cpu_rmap = NULL;
+ 	}
+ #endif /* CONFIG_RFS_ACCEL */
+-
+-	unregister_netdev(netdev);
+ 	del_timer_sync(&adapter->timer_service);
+ 
+ 	cancel_work_sync(&adapter->reset_task);
+ 
+-	/* Reset the device only if the device is running. */
+-	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
+-		ena_com_dev_reset(ena_dev, adapter->reset_reason);
++	unregister_netdev(netdev);
+ 
+-	ena_free_mgmnt_irq(adapter);
++	/* If the device is running then we want to make sure the device will be
++	 * reset to make sure no more events will be issued by the device.
++	 */
++	if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags))
++		set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 
+-	ena_disable_msix(adapter);
++	rtnl_lock();
++	ena_destroy_device(adapter, true);
++	rtnl_unlock();
+ 
+ 	free_netdev(netdev);
+ 
+-	ena_com_mmio_reg_read_request_destroy(ena_dev);
+-
+-	ena_com_abort_admin_commands(ena_dev);
+-
+-	ena_com_wait_for_abort_completion(ena_dev);
+-
+-	ena_com_admin_destroy(ena_dev);
+-
+ 	ena_com_rss_destroy(ena_dev);
+ 
+ 	ena_com_delete_debug_area(ena_dev);
+@@ -3466,7 +3466,7 @@ static int ena_suspend(struct pci_dev *pdev,  pm_message_t state)
+ 			"ignoring device reset request as the device is being suspended\n");
+ 		clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+ 	}
+-	ena_destroy_device(adapter);
++	ena_destroy_device(adapter, true);
+ 	rtnl_unlock();
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+index f1972b5ab650..7c7ae56c52cf 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+@@ -355,4 +355,15 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
+ 
+ int ena_get_sset_count(struct net_device *netdev, int sset);
+ 
++/* The ENA buffer length fields is 16 bit long. So when PAGE_SIZE == 64kB the
++ * driver passas 0.
++ * Since the max packet size the ENA handles is ~9kB limit the buffer length to
++ * 16kB.
++ */
++#if PAGE_SIZE > SZ_16K
++#define ENA_PAGE_SIZE SZ_16K
++#else
++#define ENA_PAGE_SIZE PAGE_SIZE
++#endif
++
+ #endif /* !(ENA_H) */
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 515d96e32143..c4d7479938e2 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -648,7 +648,7 @@ static int macb_halt_tx(struct macb *bp)
+ 		if (!(status & MACB_BIT(TGO)))
+ 			return 0;
+ 
+-		usleep_range(10, 250);
++		udelay(250);
+ 	} while (time_before(halt_time, timeout));
+ 
+ 	return -ETIMEDOUT;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index cad52bd331f7..08a750fb60c4 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -486,6 +486,8 @@ struct hnae_ae_ops {
+ 			u8 *auto_neg, u16 *speed, u8 *duplex);
+ 	void (*toggle_ring_irq)(struct hnae_ring *ring, u32 val);
+ 	void (*adjust_link)(struct hnae_handle *handle, int speed, int duplex);
++	bool (*need_adjust_link)(struct hnae_handle *handle,
++				 int speed, int duplex);
+ 	int (*set_loopback)(struct hnae_handle *handle,
+ 			    enum hnae_loop loop_mode, int en);
+ 	void (*get_ring_bdnum_limit)(struct hnae_queue *queue,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+index bd68379d2bea..bf930ab3c2bd 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+@@ -155,6 +155,41 @@ static void hns_ae_put_handle(struct hnae_handle *handle)
+ 		hns_ae_get_ring_pair(handle->qs[i])->used_by_vf = 0;
+ }
+ 
++static int hns_ae_wait_flow_down(struct hnae_handle *handle)
++{
++	struct dsaf_device *dsaf_dev;
++	struct hns_ppe_cb *ppe_cb;
++	struct hnae_vf_cb *vf_cb;
++	int ret;
++	int i;
++
++	for (i = 0; i < handle->q_num; i++) {
++		ret = hns_rcb_wait_tx_ring_clean(handle->qs[i]);
++		if (ret)
++			return ret;
++	}
++
++	ppe_cb = hns_get_ppe_cb(handle);
++	ret = hns_ppe_wait_tx_fifo_clean(ppe_cb);
++	if (ret)
++		return ret;
++
++	dsaf_dev = hns_ae_get_dsaf_dev(handle->dev);
++	if (!dsaf_dev)
++		return -EINVAL;
++	ret = hns_dsaf_wait_pkt_clean(dsaf_dev, handle->dport_id);
++	if (ret)
++		return ret;
++
++	vf_cb = hns_ae_get_vf_cb(handle);
++	ret = hns_mac_wait_fifo_clean(vf_cb->mac_cb);
++	if (ret)
++		return ret;
++
++	mdelay(10);
++	return 0;
++}
++
+ static void hns_ae_ring_enable_all(struct hnae_handle *handle, int val)
+ {
+ 	int q_num = handle->q_num;
+@@ -399,12 +434,41 @@ static int hns_ae_get_mac_info(struct hnae_handle *handle,
+ 	return hns_mac_get_port_info(mac_cb, auto_neg, speed, duplex);
+ }
+ 
++static bool hns_ae_need_adjust_link(struct hnae_handle *handle, int speed,
++				    int duplex)
++{
++	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
++
++	return hns_mac_need_adjust_link(mac_cb, speed, duplex);
++}
++
+ static void hns_ae_adjust_link(struct hnae_handle *handle, int speed,
+ 			       int duplex)
+ {
+ 	struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+ 
+-	hns_mac_adjust_link(mac_cb, speed, duplex);
++	switch (mac_cb->dsaf_dev->dsaf_ver) {
++	case AE_VERSION_1:
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		break;
++
++	case AE_VERSION_2:
++		/* chip need to clear all pkt inside */
++		hns_mac_disable(mac_cb, MAC_COMM_MODE_RX);
++		if (hns_ae_wait_flow_down(handle)) {
++			hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++			break;
++		}
++
++		hns_mac_adjust_link(mac_cb, speed, duplex);
++		hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++		break;
++
++	default:
++		break;
++	}
++
++	return;
+ }
+ 
+ static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue,
+@@ -902,6 +966,7 @@ static struct hnae_ae_ops hns_dsaf_ops = {
+ 	.get_status = hns_ae_get_link_status,
+ 	.get_info = hns_ae_get_mac_info,
+ 	.adjust_link = hns_ae_adjust_link,
++	.need_adjust_link = hns_ae_need_adjust_link,
+ 	.set_loopback = hns_ae_config_loopback,
+ 	.get_ring_bdnum_limit = hns_ae_get_ring_bdnum_limit,
+ 	.get_pauseparam = hns_ae_get_pauseparam,
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+index 74bd260ca02a..8c7bc5cf193c 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+@@ -257,6 +257,16 @@ static void hns_gmac_get_pausefrm_cfg(void *mac_drv, u32 *rx_pause_en,
+ 	*tx_pause_en = dsaf_get_bit(pause_en, GMAC_PAUSE_EN_TX_FDFC_B);
+ }
+ 
++static bool hns_gmac_need_adjust_link(void *mac_drv, enum mac_speed speed,
++				      int duplex)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	struct hns_mac_cb *mac_cb = drv->mac_cb;
++
++	return (mac_cb->speed != speed) ||
++		(mac_cb->half_duplex == duplex);
++}
++
+ static int hns_gmac_adjust_link(void *mac_drv, enum mac_speed speed,
+ 				u32 full_duplex)
+ {
+@@ -309,6 +319,30 @@ static void hns_gmac_set_promisc(void *mac_drv, u8 en)
+ 		hns_gmac_set_uc_match(mac_drv, en);
+ }
+ 
++int hns_gmac_wait_fifo_clean(void *mac_drv)
++{
++	struct mac_driver *drv = (struct mac_driver *)mac_drv;
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(drv, GMAC_FIFO_STATE_REG);
++		/* bit5~bit0 is not send complete pkts */
++		if ((val & 0x3f) == 0)
++			break;
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(drv->dev,
++			"hns ge %d fifo was not idle.\n", drv->mac_id);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ static void hns_gmac_init(void *mac_drv)
+ {
+ 	u32 port;
+@@ -690,6 +724,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->mac_disable = hns_gmac_disable;
+ 	mac_drv->mac_free = hns_gmac_free;
+ 	mac_drv->adjust_link = hns_gmac_adjust_link;
++	mac_drv->need_adjust_link = hns_gmac_need_adjust_link;
+ 	mac_drv->set_tx_auto_pause_frames = hns_gmac_set_tx_auto_pause_frames;
+ 	mac_drv->config_max_frame_length = hns_gmac_config_max_frame_length;
+ 	mac_drv->mac_pausefrm_cfg = hns_gmac_pause_frm_cfg;
+@@ -717,6 +752,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param)
+ 	mac_drv->get_strings = hns_gmac_get_strings;
+ 	mac_drv->update_stats = hns_gmac_update_stats;
+ 	mac_drv->set_promiscuous = hns_gmac_set_promisc;
++	mac_drv->wait_fifo_clean = hns_gmac_wait_fifo_clean;
+ 
+ 	return (void *)mac_drv;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+index 9dcc5765f11f..5c6b880c3eb7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -114,6 +114,26 @@ int hns_mac_get_port_info(struct hns_mac_cb *mac_cb,
+ 	return 0;
+ }
+ 
++/**
++ *hns_mac_is_adjust_link - check is need change mac speed and duplex register
++ *@mac_cb: mac device
++ *@speed: phy device speed
++ *@duplex:phy device duplex
++ *
++ */
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
++{
++	struct mac_driver *mac_ctrl_drv;
++
++	mac_ctrl_drv = (struct mac_driver *)(mac_cb->priv.mac);
++
++	if (mac_ctrl_drv->need_adjust_link)
++		return mac_ctrl_drv->need_adjust_link(mac_ctrl_drv,
++			(enum mac_speed)speed, duplex);
++	else
++		return true;
++}
++
+ void hns_mac_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
+ {
+ 	int ret;
+@@ -430,6 +450,16 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
+ 	return 0;
+ }
+ 
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb)
++{
++	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
++
++	if (drv->wait_fifo_clean)
++		return drv->wait_fifo_clean(drv);
++
++	return 0;
++}
++
+ void hns_mac_reset(struct hns_mac_cb *mac_cb)
+ {
+ 	struct mac_driver *drv = hns_mac_get_drv(mac_cb);
+@@ -999,6 +1029,20 @@ static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev)
+ 		return  DSAF_MAX_PORT_NUM;
+ }
+ 
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_enable(mac_cb->priv.mac, mode);
++}
++
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++	struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++	mac_ctrl_drv->mac_disable(mac_cb->priv.mac, mode);
++}
++
+ /**
+  * hns_mac_init - init mac
+  * @dsaf_dev: dsa fabric device struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+index bbc0a98e7ca3..fbc75341bef7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+@@ -356,6 +356,9 @@ struct mac_driver {
+ 	/*adjust mac mode of port,include speed and duplex*/
+ 	int (*adjust_link)(void *mac_drv, enum mac_speed speed,
+ 			   u32 full_duplex);
++	/* need adjust link */
++	bool (*need_adjust_link)(void *mac_drv, enum mac_speed speed,
++				 int duplex);
+ 	/* config autoegotaite mode of port*/
+ 	void (*set_an_mode)(void *mac_drv, u8 enable);
+ 	/* config loopbank mode */
+@@ -394,6 +397,7 @@ struct mac_driver {
+ 	void (*get_info)(void *mac_drv, struct mac_info *mac_info);
+ 
+ 	void (*update_stats)(void *mac_drv);
++	int (*wait_fifo_clean)(void *mac_drv);
+ 
+ 	enum mac_mode mac_mode;
+ 	u8 mac_id;
+@@ -427,6 +431,7 @@ void *hns_xgmac_config(struct hns_mac_cb *mac_cb,
+ 
+ int hns_mac_init(struct dsaf_device *dsaf_dev);
+ void mac_adjust_link(struct net_device *net_dev);
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex);
+ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb,	u32 *link_status);
+ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb, u32 vmid, char *addr);
+ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
+@@ -463,5 +468,8 @@ int hns_mac_add_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ int hns_mac_rm_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ 		       const unsigned char *addr);
+ int hns_mac_clr_multicast(struct hns_mac_cb *mac_cb, int vfn);
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb);
+ 
+ #endif /* _HNS_DSAF_MAC_H */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+index 0ce07f6eb1e6..0ef6d429308f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+@@ -2733,6 +2733,35 @@ void hns_dsaf_set_promisc_tcam(struct dsaf_device *dsaf_dev,
+ 	soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX;
+ }
+ 
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port)
++{
++	u32 val, val_tmp;
++	int wait_cnt;
++
++	if (port >= DSAF_SERVICE_NW_NUM)
++		return 0;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(dsaf_dev, DSAF_VOQ_IN_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		val_tmp = dsaf_read_dev(dsaf_dev, DSAF_VOQ_OUT_PKT_NUM_0_REG +
++			(port + DSAF_XGE_NUM) * 0x40);
++		if (val == val_tmp)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(dsaf_dev->dev, "hns dsaf clean wait timeout(%u - %u).\n",
++			val, val_tmp);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * dsaf_probe - probo dsaf dev
+  * @pdev: dasf platform device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+index 4507e8222683..0e1cd99831a6 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+@@ -44,6 +44,8 @@ struct hns_mac_cb;
+ #define DSAF_ROCE_CREDIT_CHN	8
+ #define DSAF_ROCE_CHAN_MODE	3
+ 
++#define HNS_MAX_WAIT_CNT 10000
++
+ enum dsaf_roce_port_mode {
+ 	DSAF_ROCE_6PORT_MODE,
+ 	DSAF_ROCE_4PORT_MODE,
+@@ -463,5 +465,6 @@ int hns_dsaf_rm_mac_addr(
+ 
+ int hns_dsaf_clr_mac_mc_port(struct dsaf_device *dsaf_dev,
+ 			     u8 mac_id, u8 port_num);
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port);
+ 
+ #endif /* __HNS_DSAF_MAIN_H__ */
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+index 93e71e27401b..a19932aeb9d7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+@@ -274,6 +274,29 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en)
+ 	dsaf_write_dev(ppe_cb, PPE_INTEN_REG, msk_vlue & vld_msk);
+ }
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb)
++{
++	int wait_cnt;
++	u32 val;
++
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		val = dsaf_read_dev(ppe_cb, PPE_CURR_TX_FIFO0_REG) & 0x3ffU;
++		if (!val)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(ppe_cb->dev, "hns ppe tx fifo clean wait timeout, still has %u pkt.\n",
++			val);
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  * ppe_init_hw - init ppe
+  * @ppe_cb: ppe device
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+index 9d8e643e8aa6..f670e63a5a01 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+@@ -100,6 +100,7 @@ struct ppe_common_cb {
+ 
+ };
+ 
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb);
+ int hns_ppe_init(struct dsaf_device *dsaf_dev);
+ 
+ void hns_ppe_uninit(struct dsaf_device *dsaf_dev);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+index e2e28532e4dc..1e43d7a3ca86 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+@@ -66,6 +66,29 @@ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag)
+ 			"queue(%d) wait fbd(%d) clean fail!!\n", i, fbd_num);
+ }
+ 
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs)
++{
++	u32 head, tail;
++	int wait_cnt;
++
++	tail = dsaf_read_dev(&qs->tx_ring, RCB_REG_TAIL);
++	wait_cnt = 0;
++	while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++		head = dsaf_read_dev(&qs->tx_ring, RCB_REG_HEAD);
++		if (tail == head)
++			break;
++
++		usleep_range(100, 200);
++	}
++
++	if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++		dev_err(qs->dev->dev, "rcb wait timeout, head not equal to tail.\n");
++		return -EBUSY;
++	}
++
++	return 0;
++}
++
+ /**
+  *hns_rcb_reset_ring_hw - ring reset
+  *@q: ring struct pointer
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+index 602816498c8d..2319b772a271 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+@@ -136,6 +136,7 @@ void hns_rcbv2_int_clr_hw(struct hnae_queue *q, u32 flag);
+ void hns_rcb_init_hw(struct ring_pair_cb *ring);
+ void hns_rcb_reset_ring_hw(struct hnae_queue *q);
+ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag);
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs);
+ u32 hns_rcb_get_rx_coalesced_frames(
+ 	struct rcb_common_cb *rcb_common, u32 port_idx);
+ u32 hns_rcb_get_tx_coalesced_frames(
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+index 886cbbf25761..74d935d82cbc 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+@@ -464,6 +464,7 @@
+ #define RCB_RING_INTMSK_TX_OVERTIME_REG		0x000C4
+ #define RCB_RING_INTSTS_TX_OVERTIME_REG		0x000C8
+ 
++#define GMAC_FIFO_STATE_REG			0x0000UL
+ #define GMAC_DUPLEX_TYPE_REG			0x0008UL
+ #define GMAC_FD_FC_TYPE_REG			0x000CUL
+ #define GMAC_TX_WATER_LINE_REG			0x0010UL
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef994a715f93..b4518f45f048 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -1212,11 +1212,26 @@ static void hns_nic_adjust_link(struct net_device *ndev)
+ 	struct hnae_handle *h = priv->ae_handle;
+ 	int state = 1;
+ 
++	/* If there is no phy, do not need adjust link */
+ 	if (ndev->phydev) {
+-		h->dev->ops->adjust_link(h, ndev->phydev->speed,
+-					 ndev->phydev->duplex);
+-		state = ndev->phydev->link;
++		/* When phy link down, do nothing */
++		if (ndev->phydev->link == 0)
++			return;
++
++		if (h->dev->ops->need_adjust_link(h, ndev->phydev->speed,
++						  ndev->phydev->duplex)) {
++			/* because Hi161X chip don't support to change gmac
++			 * speed and duplex with traffic. Delay 200ms to
++			 * make sure there is no more data in chip FIFO.
++			 */
++			netif_carrier_off(ndev);
++			msleep(200);
++			h->dev->ops->adjust_link(h, ndev->phydev->speed,
++						 ndev->phydev->duplex);
++			netif_carrier_on(ndev);
++		}
+ 	}
++
+ 	state = state && h->dev->ops->get_status(h);
+ 
+ 	if (state != priv->link) {
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index 2e14a3ae1d8b..c1e947bb852f 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -243,7 +243,9 @@ static int hns_nic_set_link_ksettings(struct net_device *net_dev,
+ 	}
+ 
+ 	if (h->dev->ops->adjust_link) {
++		netif_carrier_off(net_dev);
+ 		h->dev->ops->adjust_link(h, (int)speed, cmd->base.duplex);
++		netif_carrier_on(net_dev);
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
+index 354c0982847b..372664686309 100644
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -494,9 +494,6 @@ static u32 __emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_s
+ 	case 16384:
+ 		ret |= EMAC_MR1_RFS_16K;
+ 		break;
+-	case 8192:
+-		ret |= EMAC4_MR1_RFS_8K;
+-		break;
+ 	case 4096:
+ 		ret |= EMAC_MR1_RFS_4K;
+ 		break;
+@@ -537,6 +534,9 @@ static u32 __emac4_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_
+ 	case 16384:
+ 		ret |= EMAC4_MR1_RFS_16K;
+ 		break;
++	case 8192:
++		ret |= EMAC4_MR1_RFS_8K;
++		break;
+ 	case 4096:
+ 		ret |= EMAC4_MR1_RFS_4K;
+ 		break;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index ffe7acbeaa22..d834308adf95 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1841,11 +1841,17 @@ static int do_reset(struct ibmvnic_adapter *adapter,
+ 			adapter->map_id = 1;
+ 			release_rx_pools(adapter);
+ 			release_tx_pools(adapter);
+-			init_rx_pools(netdev);
+-			init_tx_pools(netdev);
++			rc = init_rx_pools(netdev);
++			if (rc)
++				return rc;
++			rc = init_tx_pools(netdev);
++			if (rc)
++				return rc;
+ 
+ 			release_napi(adapter);
+-			init_napi(adapter);
++			rc = init_napi(adapter);
++			if (rc)
++				return rc;
+ 		} else {
+ 			rc = reset_tx_pools(adapter);
+ 			if (rc)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 62e57b05a0ae..56b31e903cc1 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -3196,11 +3196,13 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
+ 		return budget;
+ 
+ 	/* all work done, exit the polling mode */
+-	napi_complete_done(napi, work_done);
+-	if (adapter->rx_itr_setting & 1)
+-		ixgbe_set_itr(q_vector);
+-	if (!test_bit(__IXGBE_DOWN, &adapter->state))
+-		ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
++	if (likely(napi_complete_done(napi, work_done))) {
++		if (adapter->rx_itr_setting & 1)
++			ixgbe_set_itr(q_vector);
++		if (!test_bit(__IXGBE_DOWN, &adapter->state))
++			ixgbe_irq_enable_queues(adapter,
++						BIT_ULL(q_vector->v_idx));
++	}
+ 
+ 	return min(work_done, budget - 1);
+ }
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 661fa5a38df2..b8bba64673e5 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4685,6 +4685,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
+ 	dev->min_mtu = ETH_MIN_MTU;
+ 	/* 9704 == 9728 - 20 and rounding to 8 */
+ 	dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE;
++	dev->dev.of_node = port_node;
+ 
+ 	/* Phylink isn't used w/ ACPI as of now */
+ 	if (port_node) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index 922811fb66e7..37ba7c78859d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -396,16 +396,17 @@ void mlx5_remove_dev_by_protocol(struct mlx5_core_dev *dev, int protocol)
+ 		}
+ }
+ 
+-static u16 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
++static u32 mlx5_gen_pci_id(struct mlx5_core_dev *dev)
+ {
+-	return (u16)((dev->pdev->bus->number << 8) |
++	return (u32)((pci_domain_nr(dev->pdev->bus) << 16) |
++		     (dev->pdev->bus->number << 8) |
+ 		     PCI_SLOT(dev->pdev->devfn));
+ }
+ 
+ /* Must be called with intf_mutex held */
+ struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
+ {
+-	u16 pci_id = mlx5_gen_pci_id(dev);
++	u32 pci_id = mlx5_gen_pci_id(dev);
+ 	struct mlx5_core_dev *res = NULL;
+ 	struct mlx5_core_dev *tmp_dev;
+ 	struct mlx5_priv *priv;
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index e5eb361b973c..1d1e66002232 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -730,7 +730,7 @@ struct rtl8169_tc_offsets {
+ };
+ 
+ enum rtl_flag {
+-	RTL_FLAG_TASK_ENABLED,
++	RTL_FLAG_TASK_ENABLED = 0,
+ 	RTL_FLAG_TASK_SLOW_PENDING,
+ 	RTL_FLAG_TASK_RESET_PENDING,
+ 	RTL_FLAG_TASK_PHY_PENDING,
+@@ -5150,13 +5150,13 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
+ 	rtl_init_rxcfg(tp);
++	rtl_set_tx_config_registers(tp);
+ 
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+@@ -7125,7 +7125,8 @@ static int rtl8169_close(struct net_device *dev)
+ 	rtl8169_update_counters(tp);
+ 
+ 	rtl_lock_work(tp);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
+ 
+ 	rtl8169_down(dev);
+ 	rtl_unlock_work(tp);
+@@ -7301,7 +7302,9 @@ static void rtl8169_net_suspend(struct net_device *dev)
+ 
+ 	rtl_lock_work(tp);
+ 	napi_disable(&tp->napi);
+-	clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags);
++	/* Clear all task flags */
++	bitmap_zero(tp->wk.flags, RTL_FLAG_MAX);
++
+ 	rtl_unlock_work(tp);
+ 
+ 	rtl_pll_power_down(tp);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 5614fd231bbe..6520379b390e 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -807,6 +807,41 @@ static struct sh_eth_cpu_data r8a77980_data = {
+ 	.magic		= 1,
+ 	.cexcr		= 1,
+ };
++
++/* R7S9210 */
++static struct sh_eth_cpu_data r7s9210_data = {
++	.soft_reset	= sh_eth_soft_reset,
++
++	.set_duplex	= sh_eth_set_duplex,
++	.set_rate	= sh_eth_set_rate_rcar,
++
++	.register_type	= SH_ETH_REG_FAST_SH4,
++
++	.edtrr_trns	= EDTRR_TRNS_ETHER,
++	.ecsr_value	= ECSR_ICD,
++	.ecsipr_value	= ECSIPR_ICDIP,
++	.eesipr_value	= EESIPR_TWBIP | EESIPR_TABTIP | EESIPR_RABTIP |
++			  EESIPR_RFCOFIP | EESIPR_ECIIP | EESIPR_FTCIP |
++			  EESIPR_TDEIP | EESIPR_TFUFIP | EESIPR_FRIP |
++			  EESIPR_RDEIP | EESIPR_RFOFIP | EESIPR_CNDIP |
++			  EESIPR_DLCIP | EESIPR_CDIP | EESIPR_TROIP |
++			  EESIPR_RMAFIP | EESIPR_RRFIP | EESIPR_RTLFIP |
++			  EESIPR_RTSFIP | EESIPR_PREIP | EESIPR_CERFIP,
++
++	.tx_check	= EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_TRO,
++	.eesr_err_check	= EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
++			  EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE,
++
++	.fdr_value	= 0x0000070f,
++
++	.apr		= 1,
++	.mpr		= 1,
++	.tpauser	= 1,
++	.hw_swap	= 1,
++	.rpadir		= 1,
++	.no_ade		= 1,
++	.xdfar_rw	= 1,
++};
+ #endif /* CONFIG_OF */
+ 
+ static void sh_eth_set_rate_sh7724(struct net_device *ndev)
+@@ -3131,6 +3166,7 @@ static const struct of_device_id sh_eth_match_table[] = {
+ 	{ .compatible = "renesas,ether-r8a7794", .data = &rcar_gen2_data },
+ 	{ .compatible = "renesas,gether-r8a77980", .data = &r8a77980_data },
+ 	{ .compatible = "renesas,ether-r7s72100", .data = &r7s72100_data },
++	{ .compatible = "renesas,ether-r7s9210", .data = &r7s9210_data },
+ 	{ .compatible = "renesas,rcar-gen1-ether", .data = &rcar_gen1_data },
+ 	{ .compatible = "renesas,rcar-gen2-ether", .data = &rcar_gen2_data },
+ 	{ }
+diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
+index 6b0e1ec346cb..d46d57b989ae 100644
+--- a/drivers/net/wireless/broadcom/b43/dma.c
++++ b/drivers/net/wireless/broadcom/b43/dma.c
+@@ -1518,13 +1518,15 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
+ 			}
+ 		} else {
+ 			/* More than a single header/data pair were missed.
+-			 * Report this error, and reset the controller to
++			 * Report this error. If running with open-source
++			 * firmware, then reset the controller to
+ 			 * revive operation.
+ 			 */
+ 			b43dbg(dev->wl,
+ 			       "Out of order TX status report on DMA ring %d. Expected %d, but got %d\n",
+ 			       ring->index, firstused, slot);
+-			b43_controller_restart(dev, "Out of order TX");
++			if (dev->fw.opensource)
++				b43_controller_restart(dev, "Out of order TX");
+ 			return;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index b815ba38dbdb..88121548eb9f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -877,15 +877,12 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ?
+ 			     iwl_ext_nvm_channels : iwl_nvm_channels;
+ 	struct ieee80211_regdomain *regd, *copy_rd;
+-	int size_of_regd, regd_to_copy, wmms_to_copy;
+-	int size_of_wmms = 0;
++	int size_of_regd, regd_to_copy;
+ 	struct ieee80211_reg_rule *rule;
+-	struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm;
+ 	struct regdb_ptrs *regdb_ptrs;
+ 	enum nl80211_band band;
+ 	int center_freq, prev_center_freq = 0;
+-	int valid_rules = 0, n_wmms = 0;
+-	int i;
++	int valid_rules = 0;
+ 	bool new_rule;
+ 	int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ?
+ 			 IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS;
+@@ -904,11 +901,7 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		sizeof(struct ieee80211_regdomain) +
+ 		num_of_ch * sizeof(struct ieee80211_reg_rule);
+ 
+-	if (geo_info & GEO_WMM_ETSI_5GHZ_INFO)
+-		size_of_wmms =
+-			num_of_ch * sizeof(struct ieee80211_wmm_rule);
+-
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -922,8 +915,6 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd->alpha2[0] = fw_mcc >> 8;
+ 	regd->alpha2[1] = fw_mcc & 0xff;
+ 
+-	wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+ 	for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) {
+ 		ch_flags = (u16)__le32_to_cpup(channels + ch_idx);
+ 		band = (ch_idx < NUM_2GHZ_CHANNELS) ?
+@@ -977,26 +968,10 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 		    band == NL80211_BAND_2GHZ)
+ 			continue;
+ 
+-		if (!reg_query_regdb_wmm(regd->alpha2, center_freq,
+-					 &regdb_ptrs[n_wmms].token, wmm_rule)) {
+-			/* Add only new rules */
+-			for (i = 0; i < n_wmms; i++) {
+-				if (regdb_ptrs[i].token ==
+-				    regdb_ptrs[n_wmms].token) {
+-					rule->wmm_rule = regdb_ptrs[i].rule;
+-					break;
+-				}
+-			}
+-			if (i == n_wmms) {
+-				rule->wmm_rule = wmm_rule;
+-				regdb_ptrs[n_wmms++].rule = wmm_rule;
+-				wmm_rule++;
+-			}
+-		}
++		reg_query_regdb_wmm(regd->alpha2, center_freq, rule);
+ 	}
+ 
+ 	regd->n_reg_rules = valid_rules;
+-	regd->n_wmm_rules = n_wmms;
+ 
+ 	/*
+ 	 * Narrow down regdom for unused regulatory rules to prevent hole
+@@ -1005,28 +980,13 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,
+ 	regd_to_copy = sizeof(struct ieee80211_regdomain) +
+ 		valid_rules * sizeof(struct ieee80211_reg_rule);
+ 
+-	wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms;
+-
+-	copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL);
++	copy_rd = kzalloc(regd_to_copy, GFP_KERNEL);
+ 	if (!copy_rd) {
+ 		copy_rd = ERR_PTR(-ENOMEM);
+ 		goto out;
+ 	}
+ 
+ 	memcpy(copy_rd, regd, regd_to_copy);
+-	memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd,
+-	       wmms_to_copy);
+-
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+-	for (i = 0; i < regd->n_reg_rules; i++) {
+-		if (!regd->reg_rules[i].wmm_rule)
+-			continue;
+-
+-		copy_rd->reg_rules[i].wmm_rule = d_wmm +
+-			(regd->reg_rules[i].wmm_rule - s_wmm);
+-	}
+ 
+ out:
+ 	kfree(regdb_ptrs);
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 18e819d964f1..80e2c8595c7c 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -33,6 +33,7 @@
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ #include <linux/rhashtable.h>
++#include <linux/nospec.h>
+ #include "mac80211_hwsim.h"
+ 
+ #define WARN_QUEUE 100
+@@ -2699,9 +2700,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 				IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 				IEEE80211_VHT_CAP_SHORT_GI_160 |
+ 				IEEE80211_VHT_CAP_TXSTBC |
+-				IEEE80211_VHT_CAP_RXSTBC_1 |
+-				IEEE80211_VHT_CAP_RXSTBC_2 |
+-				IEEE80211_VHT_CAP_RXSTBC_3 |
+ 				IEEE80211_VHT_CAP_RXSTBC_4 |
+ 				IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
+ 			sband->vht_cap.vht_mcs.rx_mcs_map =
+@@ -3194,6 +3192,11 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 	if (info->attrs[HWSIM_ATTR_CHANNELS])
+ 		param.channels = nla_get_u32(info->attrs[HWSIM_ATTR_CHANNELS]);
+ 
++	if (param.channels < 1) {
++		GENL_SET_ERR_MSG(info, "must have at least one channel");
++		return -EINVAL;
++	}
++
+ 	if (param.channels > CFG80211_MAX_NUM_DIFFERENT_CHANNELS) {
+ 		GENL_SET_ERR_MSG(info, "too many channels specified");
+ 		return -EINVAL;
+@@ -3227,6 +3230,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 			kfree(hwname);
+ 			return -EINVAL;
+ 		}
++
++		idx = array_index_nospec(idx,
++					 ARRAY_SIZE(hwsim_world_regdom_custom));
+ 		param.regd = hwsim_world_regdom_custom[idx];
+ 	}
+ 
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 52e0c5d579a7..1d909e5ba657 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -65,6 +65,7 @@ struct nvmet_rdma_rsp {
+ 
+ 	struct nvmet_req	req;
+ 
++	bool			allocated;
+ 	u8			n_rdma;
+ 	u32			flags;
+ 	u32			invalidate_rkey;
+@@ -166,11 +167,19 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&queue->rsps_lock, flags);
+-	rsp = list_first_entry(&queue->free_rsps,
++	rsp = list_first_entry_or_null(&queue->free_rsps,
+ 				struct nvmet_rdma_rsp, free_list);
+-	list_del(&rsp->free_list);
++	if (likely(rsp))
++		list_del(&rsp->free_list);
+ 	spin_unlock_irqrestore(&queue->rsps_lock, flags);
+ 
++	if (unlikely(!rsp)) {
++		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
++		if (unlikely(!rsp))
++			return NULL;
++		rsp->allocated = true;
++	}
++
+ 	return rsp;
+ }
+ 
+@@ -179,6 +188,11 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
+ {
+ 	unsigned long flags;
+ 
++	if (rsp->allocated) {
++		kfree(rsp);
++		return;
++	}
++
+ 	spin_lock_irqsave(&rsp->queue->rsps_lock, flags);
+ 	list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);
+ 	spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);
+@@ -702,6 +716,15 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ 
+ 	cmd->queue = queue;
+ 	rsp = nvmet_rdma_get_rsp(queue);
++	if (unlikely(!rsp)) {
++		/*
++		 * we get here only under memory pressure,
++		 * silently drop and have the host retry
++		 * as we can't even fail it.
++		 */
++		nvmet_rdma_post_recv(queue->dev, cmd);
++		return;
++	}
+ 	rsp->queue = queue;
+ 	rsp->cmd = cmd;
+ 	rsp->flags = 0;
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index ffdb78421a25..b0f0d4e86f67 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -25,6 +25,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/netdev_features.h>
+ #include <linux/skbuff.h>
++#include <linux/vmalloc.h>
+ 
+ #include <net/iucv/af_iucv.h>
+ #include <net/dsfield.h>
+@@ -4738,7 +4739,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 
+ 	priv.buffer_len = oat_data.buffer_len;
+ 	priv.response_len = 0;
+-	priv.buffer =  kzalloc(oat_data.buffer_len, GFP_KERNEL);
++	priv.buffer = vzalloc(oat_data.buffer_len);
+ 	if (!priv.buffer) {
+ 		rc = -ENOMEM;
+ 		goto out;
+@@ -4779,7 +4780,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata)
+ 			rc = -EFAULT;
+ 
+ out_free:
+-	kfree(priv.buffer);
++	vfree(priv.buffer);
+ out:
+ 	return rc;
+ }
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 2487f0aeb165..3bef60ae0480 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -425,7 +425,7 @@ static int qeth_l2_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 5905dc63e256..3ea840542767 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -1390,7 +1390,7 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card,
+ 		default:
+ 			dev_kfree_skb_any(skb);
+ 			QETH_CARD_TEXT(card, 3, "inbunkno");
+-			QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN);
++			QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr));
+ 			continue;
+ 		}
+ 		work_done++;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 29bf1e60f542..39eb415987fc 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -1346,7 +1346,7 @@ struct fib {
+ struct aac_hba_map_info {
+ 	__le32	rmw_nexus;		/* nexus for native HBA devices */
+ 	u8		devtype;	/* device type */
+-	u8		reset_state;	/* 0 - no reset, 1..x - */
++	s8		reset_state;	/* 0 - no reset, 1..x - */
+ 					/* after xth TM LUN reset */
+ 	u16		qd_limit;
+ 	u32		scan_counter;
+diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c
+index a10cf25ee7f9..e4baf04ec5ea 100644
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -1512,6 +1512,46 @@ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16)
+ 	return caps32;
+ }
+ 
++/**
++ *	fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits
++ *	@caps32: a 32-bit Port Capabilities value
++ *
++ *	Returns the equivalent 16-bit Port Capabilities value.  Note that
++ *	not all 32-bit Port Capabilities can be represented in the 16-bit
++ *	Port Capabilities and some fields/values may not make it.
++ */
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32)
++{
++	fw_port_cap16_t caps16 = 0;
++
++	#define CAP32_TO_CAP16(__cap) \
++		do { \
++			if (caps32 & FW_PORT_CAP32_##__cap) \
++				caps16 |= FW_PORT_CAP_##__cap; \
++		} while (0)
++
++	CAP32_TO_CAP16(SPEED_100M);
++	CAP32_TO_CAP16(SPEED_1G);
++	CAP32_TO_CAP16(SPEED_10G);
++	CAP32_TO_CAP16(SPEED_25G);
++	CAP32_TO_CAP16(SPEED_40G);
++	CAP32_TO_CAP16(SPEED_100G);
++	CAP32_TO_CAP16(FC_RX);
++	CAP32_TO_CAP16(FC_TX);
++	CAP32_TO_CAP16(802_3_PAUSE);
++	CAP32_TO_CAP16(802_3_ASM_DIR);
++	CAP32_TO_CAP16(ANEG);
++	CAP32_TO_CAP16(FORCE_PAUSE);
++	CAP32_TO_CAP16(MDIAUTO);
++	CAP32_TO_CAP16(MDISTRAIGHT);
++	CAP32_TO_CAP16(FEC_RS);
++	CAP32_TO_CAP16(FEC_BASER_RS);
++
++	#undef CAP32_TO_CAP16
++
++	return caps16;
++}
++
+ /**
+  *      lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities
+  *      @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value
+@@ -1670,7 +1710,7 @@ csio_enable_ports(struct csio_hw *hw)
+ 			val = 1;
+ 
+ 			csio_mb_params(hw, mbp, CSIO_MB_DEFAULT_TMO,
+-				       hw->pfn, 0, 1, &param, &val, false,
++				       hw->pfn, 0, 1, &param, &val, true,
+ 				       NULL);
+ 
+ 			if (csio_mb_issue(hw, mbp)) {
+@@ -1680,16 +1720,9 @@ csio_enable_ports(struct csio_hw *hw)
+ 				return -EINVAL;
+ 			}
+ 
+-			csio_mb_process_read_params_rsp(hw, mbp, &retval, 1,
+-							&val);
+-			if (retval != FW_SUCCESS) {
+-				csio_err(hw, "FW_PARAMS_CMD(r) port:%d failed: 0x%x\n",
+-					 portid, retval);
+-				mempool_free(mbp, hw->mb_mempool);
+-				return -EINVAL;
+-			}
+-
+-			fw_caps = val;
++			csio_mb_process_read_params_rsp(hw, mbp, &retval,
++							0, NULL);
++			fw_caps = retval ? FW_CAPS16 : FW_CAPS32;
+ 		}
+ 
+ 		/* Read PORT information */
+@@ -2275,8 +2308,8 @@ bye:
+ }
+ 
+ /*
+- * Returns -EINVAL if attempts to flash the firmware failed
+- * else returns 0,
++ * Returns -EINVAL if attempts to flash the firmware failed,
++ * -ENOMEM if memory allocation failed else returns 0,
+  * if flashing was not attempted because the card had the
+  * latest firmware ECANCELED is returned
+  */
+@@ -2304,6 +2337,13 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		return -EINVAL;
+ 	}
+ 
++	/* allocate memory to read the header of the firmware on the
++	 * card
++	 */
++	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
++	if (!card_fw)
++		return -ENOMEM;
++
+ 	if (csio_is_t5(pci_dev->device & CSIO_HW_CHIP_MASK))
+ 		fw_bin_file = FW_FNAME_T5;
+ 	else
+@@ -2317,11 +2357,6 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset)
+ 		fw_size = fw->size;
+ 	}
+ 
+-	/* allocate memory to read the header of the firmware on the
+-	 * card
+-	 */
+-	card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
+-
+ 	/* upgrade FW logic */
+ 	ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw,
+ 			 hw->fw_state, reset);
+diff --git a/drivers/scsi/csiostor/csio_hw.h b/drivers/scsi/csiostor/csio_hw.h
+index 9e73ef771eb7..e351af6e7c81 100644
+--- a/drivers/scsi/csiostor/csio_hw.h
++++ b/drivers/scsi/csiostor/csio_hw.h
+@@ -639,6 +639,7 @@ int csio_handle_intr_status(struct csio_hw *, unsigned int,
+ 
+ fw_port_cap32_t fwcap_to_fwspeed(fw_port_cap32_t acaps);
+ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16);
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32);
+ fw_port_cap32_t lstatus_to_fwcap(u32 lstatus);
+ 
+ int csio_hw_start(struct csio_hw *);
+diff --git a/drivers/scsi/csiostor/csio_mb.c b/drivers/scsi/csiostor/csio_mb.c
+index c026417269c3..6f13673d6aa0 100644
+--- a/drivers/scsi/csiostor/csio_mb.c
++++ b/drivers/scsi/csiostor/csio_mb.c
+@@ -368,7 +368,7 @@ csio_mb_port(struct csio_hw *hw, struct csio_mb *mbp, uint32_t tmo,
+ 			FW_CMD_LEN16_V(sizeof(*cmdp) / 16));
+ 
+ 	if (fw_caps == FW_CAPS16)
+-		cmdp->u.l1cfg.rcap = cpu_to_be32(fc);
++		cmdp->u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(fc));
+ 	else
+ 		cmdp->u.l1cfg32.rcap32 = cpu_to_be32(fc);
+ }
+@@ -395,8 +395,8 @@ csio_mb_process_read_port_rsp(struct csio_hw *hw, struct csio_mb *mbp,
+ 			*pcaps = fwcaps16_to_caps32(ntohs(rsp->u.info.pcap));
+ 			*acaps = fwcaps16_to_caps32(ntohs(rsp->u.info.acap));
+ 		} else {
+-			*pcaps = ntohs(rsp->u.info32.pcaps32);
+-			*acaps = ntohs(rsp->u.info32.acaps32);
++			*pcaps = be32_to_cpu(rsp->u.info32.pcaps32);
++			*acaps = be32_to_cpu(rsp->u.info32.acaps32);
+ 		}
+ 	}
+ }
+diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h
+index fc3babc15fa3..a6f96b35e971 100644
+--- a/drivers/scsi/qedi/qedi.h
++++ b/drivers/scsi/qedi/qedi.h
+@@ -77,6 +77,11 @@ enum qedi_nvm_tgts {
+ 	QEDI_NVM_TGT_SEC,
+ };
+ 
++struct qedi_nvm_iscsi_image {
++	struct nvm_iscsi_cfg iscsi_cfg;
++	u32 crc;
++};
++
+ struct qedi_uio_ctrl {
+ 	/* meta data */
+ 	u32 uio_hsi_version;
+@@ -294,7 +299,7 @@ struct qedi_ctx {
+ 	void *bdq_pbl_list;
+ 	dma_addr_t bdq_pbl_list_dma;
+ 	u8 bdq_pbl_list_num_entries;
+-	struct nvm_iscsi_cfg *iscsi_cfg;
++	struct qedi_nvm_iscsi_image *iscsi_image;
+ 	dma_addr_t nvm_buf_dma;
+ 	void __iomem *bdq_primary_prod;
+ 	void __iomem *bdq_secondary_prod;
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index cff83b9457f7..3e18a68c2b03 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -1346,23 +1346,26 @@ exit_setup_int:
+ 
+ static void qedi_free_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	if (qedi->iscsi_cfg)
++	if (qedi->iscsi_image)
+ 		dma_free_coherent(&qedi->pdev->dev,
+-				  sizeof(struct nvm_iscsi_cfg),
+-				  qedi->iscsi_cfg, qedi->nvm_buf_dma);
++				  sizeof(struct qedi_nvm_iscsi_image),
++				  qedi->iscsi_image, qedi->nvm_buf_dma);
+ }
+ 
+ static int qedi_alloc_nvm_iscsi_cfg(struct qedi_ctx *qedi)
+ {
+-	qedi->iscsi_cfg = dma_zalloc_coherent(&qedi->pdev->dev,
+-					     sizeof(struct nvm_iscsi_cfg),
+-					     &qedi->nvm_buf_dma, GFP_KERNEL);
+-	if (!qedi->iscsi_cfg) {
++	struct qedi_nvm_iscsi_image nvm_image;
++
++	qedi->iscsi_image = dma_zalloc_coherent(&qedi->pdev->dev,
++						sizeof(nvm_image),
++						&qedi->nvm_buf_dma,
++						GFP_KERNEL);
++	if (!qedi->iscsi_image) {
+ 		QEDI_ERR(&qedi->dbg_ctx, "Could not allocate NVM BUF.\n");
+ 		return -ENOMEM;
+ 	}
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+-		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_cfg,
++		  "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_image,
+ 		  qedi->nvm_buf_dma);
+ 
+ 	return 0;
+@@ -1905,7 +1908,7 @@ qedi_get_nvram_block(struct qedi_ctx *qedi)
+ 	struct nvm_iscsi_block *block;
+ 
+ 	pf = qedi->dev_info.common.abs_pf_id;
+-	block = &qedi->iscsi_cfg->block[0];
++	block = &qedi->iscsi_image->iscsi_cfg.block[0];
+ 	for (i = 0; i < NUM_OF_ISCSI_PF_SUPPORTED; i++, block++) {
+ 		flags = ((block->id) & NVM_ISCSI_CFG_BLK_CTRL_FLAG_MASK) >>
+ 			NVM_ISCSI_CFG_BLK_CTRL_FLAG_OFFSET;
+@@ -2194,15 +2197,14 @@ static void qedi_boot_release(void *data)
+ static int qedi_get_boot_info(struct qedi_ctx *qedi)
+ {
+ 	int ret = 1;
+-	u16 len;
+-
+-	len = sizeof(struct nvm_iscsi_cfg);
++	struct qedi_nvm_iscsi_image nvm_image;
+ 
+ 	QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ 		  "Get NVM iSCSI CFG image\n");
+ 	ret = qedi_ops->common->nvm_get_image(qedi->cdev,
+ 					      QED_NVM_IMAGE_ISCSI_CFG,
+-					      (char *)qedi->iscsi_cfg, len);
++					      (char *)qedi->iscsi_image,
++					      sizeof(nvm_image));
+ 	if (ret)
+ 		QEDI_ERR(&qedi->dbg_ctx,
+ 			 "Could not get NVM image. ret = %d\n", ret);
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 8e223799347a..a4ecc9d77624 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4211,22 +4211,15 @@ int iscsit_close_connection(
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-	conn->conn_ops = NULL;
+-
+ 	if (conn->sock)
+ 		sock_release(conn->sock);
+ 
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-
+ 	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+ 	conn->conn_state = TARG_CONN_STATE_FREE;
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ 
+ 	spin_lock_bh(&sess->conn_lock);
+ 	atomic_dec(&sess->nconn);
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 68b3eb00a9d0..2fda5b0664fd 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -67,45 +67,10 @@ static struct iscsi_login *iscsi_login_init_conn(struct iscsi_conn *conn)
+ 		goto out_req_buf;
+ 	}
+ 
+-	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
+-	if (!conn->conn_ops) {
+-		pr_err("Unable to allocate memory for"
+-			" struct iscsi_conn_ops.\n");
+-		goto out_rsp_buf;
+-	}
+-
+-	init_waitqueue_head(&conn->queues_wq);
+-	INIT_LIST_HEAD(&conn->conn_list);
+-	INIT_LIST_HEAD(&conn->conn_cmd_list);
+-	INIT_LIST_HEAD(&conn->immed_queue_list);
+-	INIT_LIST_HEAD(&conn->response_queue_list);
+-	init_completion(&conn->conn_post_wait_comp);
+-	init_completion(&conn->conn_wait_comp);
+-	init_completion(&conn->conn_wait_rcfr_comp);
+-	init_completion(&conn->conn_waiting_on_uc_comp);
+-	init_completion(&conn->conn_logout_comp);
+-	init_completion(&conn->rx_half_close_comp);
+-	init_completion(&conn->tx_half_close_comp);
+-	init_completion(&conn->rx_login_comp);
+-	spin_lock_init(&conn->cmd_lock);
+-	spin_lock_init(&conn->conn_usage_lock);
+-	spin_lock_init(&conn->immed_queue_lock);
+-	spin_lock_init(&conn->nopin_timer_lock);
+-	spin_lock_init(&conn->response_queue_lock);
+-	spin_lock_init(&conn->state_lock);
+-
+-	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
+-		pr_err("Unable to allocate conn->conn_cpumask\n");
+-		goto out_conn_ops;
+-	}
+ 	conn->conn_login = login;
+ 
+ 	return login;
+ 
+-out_conn_ops:
+-	kfree(conn->conn_ops);
+-out_rsp_buf:
+-	kfree(login->rsp_buf);
+ out_req_buf:
+ 	kfree(login->req_buf);
+ out_login:
+@@ -310,11 +275,9 @@ static int iscsi_login_zero_tsih_s1(
+ 		return -ENOMEM;
+ 	}
+ 
+-	ret = iscsi_login_set_conn_values(sess, conn, pdu->cid);
+-	if (unlikely(ret)) {
+-		kfree(sess);
+-		return ret;
+-	}
++	if (iscsi_login_set_conn_values(sess, conn, pdu->cid))
++		goto free_sess;
++
+ 	sess->init_task_tag	= pdu->itt;
+ 	memcpy(&sess->isid, pdu->isid, 6);
+ 	sess->exp_cmd_sn	= be32_to_cpu(pdu->cmdsn);
+@@ -1157,6 +1120,75 @@ iscsit_conn_set_transport(struct iscsi_conn *conn, struct iscsit_transport *t)
+ 	return 0;
+ }
+ 
++static struct iscsi_conn *iscsit_alloc_conn(struct iscsi_np *np)
++{
++	struct iscsi_conn *conn;
++
++	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	if (!conn) {
++		pr_err("Could not allocate memory for new connection\n");
++		return NULL;
++	}
++	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
++	conn->conn_state = TARG_CONN_STATE_FREE;
++
++	init_waitqueue_head(&conn->queues_wq);
++	INIT_LIST_HEAD(&conn->conn_list);
++	INIT_LIST_HEAD(&conn->conn_cmd_list);
++	INIT_LIST_HEAD(&conn->immed_queue_list);
++	INIT_LIST_HEAD(&conn->response_queue_list);
++	init_completion(&conn->conn_post_wait_comp);
++	init_completion(&conn->conn_wait_comp);
++	init_completion(&conn->conn_wait_rcfr_comp);
++	init_completion(&conn->conn_waiting_on_uc_comp);
++	init_completion(&conn->conn_logout_comp);
++	init_completion(&conn->rx_half_close_comp);
++	init_completion(&conn->tx_half_close_comp);
++	init_completion(&conn->rx_login_comp);
++	spin_lock_init(&conn->cmd_lock);
++	spin_lock_init(&conn->conn_usage_lock);
++	spin_lock_init(&conn->immed_queue_lock);
++	spin_lock_init(&conn->nopin_timer_lock);
++	spin_lock_init(&conn->response_queue_lock);
++	spin_lock_init(&conn->state_lock);
++
++	timer_setup(&conn->nopin_response_timer,
++		    iscsit_handle_nopin_response_timeout, 0);
++	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
++
++	if (iscsit_conn_set_transport(conn, np->np_transport) < 0)
++		goto free_conn;
++
++	conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL);
++	if (!conn->conn_ops) {
++		pr_err("Unable to allocate memory for struct iscsi_conn_ops.\n");
++		goto put_transport;
++	}
++
++	if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) {
++		pr_err("Unable to allocate conn->conn_cpumask\n");
++		goto free_mask;
++	}
++
++	return conn;
++
++free_mask:
++	free_cpumask_var(conn->conn_cpumask);
++put_transport:
++	iscsit_put_transport(conn->conn_transport);
++free_conn:
++	kfree(conn);
++	return NULL;
++}
++
++void iscsit_free_conn(struct iscsi_conn *conn)
++{
++	free_cpumask_var(conn->conn_cpumask);
++	kfree(conn->conn_ops);
++	iscsit_put_transport(conn->conn_transport);
++	kfree(conn);
++}
++
+ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 		struct iscsi_np *np, bool zero_tsih, bool new_sess)
+ {
+@@ -1210,10 +1242,6 @@ old_sess_out:
+ 		crypto_free_ahash(tfm);
+ 	}
+ 
+-	free_cpumask_var(conn->conn_cpumask);
+-
+-	kfree(conn->conn_ops);
+-
+ 	if (conn->param_list) {
+ 		iscsi_release_param_list(conn->param_list);
+ 		conn->param_list = NULL;
+@@ -1231,8 +1259,7 @@ old_sess_out:
+ 	if (conn->conn_transport->iscsit_free_conn)
+ 		conn->conn_transport->iscsit_free_conn(conn);
+ 
+-	iscsit_put_transport(conn->conn_transport);
+-	kfree(conn);
++	iscsit_free_conn(conn);
+ }
+ 
+ static int __iscsi_target_login_thread(struct iscsi_np *np)
+@@ -1262,31 +1289,16 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 	}
+ 	spin_unlock_bh(&np->np_thread_lock);
+ 
+-	conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL);
++	conn = iscsit_alloc_conn(np);
+ 	if (!conn) {
+-		pr_err("Could not allocate memory for"
+-			" new connection\n");
+ 		/* Get another socket */
+ 		return 1;
+ 	}
+-	pr_debug("Moving to TARG_CONN_STATE_FREE.\n");
+-	conn->conn_state = TARG_CONN_STATE_FREE;
+-
+-	timer_setup(&conn->nopin_response_timer,
+-		    iscsit_handle_nopin_response_timeout, 0);
+-	timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0);
+-
+-	if (iscsit_conn_set_transport(conn, np->np_transport) < 0) {
+-		kfree(conn);
+-		return 1;
+-	}
+ 
+ 	rc = np->np_transport->iscsit_accept_np(np, conn);
+ 	if (rc == -ENOSYS) {
+ 		complete(&np->np_restart_comp);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
++		iscsit_free_conn(conn);
+ 		goto exit;
+ 	} else if (rc < 0) {
+ 		spin_lock_bh(&np->np_thread_lock);
+@@ -1294,17 +1306,13 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
+ 			np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;
+ 			spin_unlock_bh(&np->np_thread_lock);
+ 			complete(&np->np_restart_comp);
+-			iscsit_put_transport(conn->conn_transport);
+-			kfree(conn);
+-			conn = NULL;
++			iscsit_free_conn(conn);
+ 			/* Get another socket */
+ 			return 1;
+ 		}
+ 		spin_unlock_bh(&np->np_thread_lock);
+-		iscsit_put_transport(conn->conn_transport);
+-		kfree(conn);
+-		conn = NULL;
+-		goto out;
++		iscsit_free_conn(conn);
++		return 1;
+ 	}
+ 	/*
+ 	 * Perform the remaining iSCSI connection initialization items..
+@@ -1454,7 +1462,6 @@ old_sess_out:
+ 		tpg_np = NULL;
+ 	}
+ 
+-out:
+ 	return 1;
+ 
+ exit:
+diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h
+index 74ac3abc44a0..3b8e3639ff5d 100644
+--- a/drivers/target/iscsi/iscsi_target_login.h
++++ b/drivers/target/iscsi/iscsi_target_login.h
+@@ -19,7 +19,7 @@ extern int iscsi_target_setup_login_socket(struct iscsi_np *,
+ extern int iscsit_accept_np(struct iscsi_np *, struct iscsi_conn *);
+ extern int iscsit_get_login_rx(struct iscsi_conn *, struct iscsi_login *);
+ extern int iscsit_put_login_tx(struct iscsi_conn *, struct iscsi_login *, u32);
+-extern void iscsit_free_conn(struct iscsi_np *, struct iscsi_conn *);
++extern void iscsit_free_conn(struct iscsi_conn *);
+ extern int iscsit_start_kthreads(struct iscsi_conn *);
+ extern void iscsi_post_login_handler(struct iscsi_np *, struct iscsi_conn *, u8);
+ extern void iscsi_target_login_sess_out(struct iscsi_conn *, struct iscsi_np *,
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index 53a48f561458..587c5037ff07 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -1063,12 +1063,15 @@ static const struct usb_gadget_ops fotg210_gadget_ops = {
+ static int fotg210_udc_remove(struct platform_device *pdev)
+ {
+ 	struct fotg210_udc *fotg210 = platform_get_drvdata(pdev);
++	int i;
+ 
+ 	usb_del_gadget_udc(&fotg210->gadget);
+ 	iounmap(fotg210->reg);
+ 	free_irq(platform_get_irq(pdev, 0), fotg210);
+ 
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
+ 	return 0;
+@@ -1099,7 +1102,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	/* initialize udc */
+ 	fotg210 = kzalloc(sizeof(struct fotg210_udc), GFP_KERNEL);
+ 	if (fotg210 == NULL)
+-		goto err_alloc;
++		goto err;
+ 
+ 	for (i = 0; i < FOTG210_MAX_NUM_EP; i++) {
+ 		_ep[i] = kzalloc(sizeof(struct fotg210_ep), GFP_KERNEL);
+@@ -1111,7 +1114,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->reg = ioremap(res->start, resource_size(res));
+ 	if (fotg210->reg == NULL) {
+ 		pr_err("ioremap error.\n");
+-		goto err_map;
++		goto err_alloc;
+ 	}
+ 
+ 	spin_lock_init(&fotg210->lock);
+@@ -1159,7 +1162,7 @@ static int fotg210_udc_probe(struct platform_device *pdev)
+ 	fotg210->ep0_req = fotg210_ep_alloc_request(&fotg210->ep[0]->ep,
+ 				GFP_KERNEL);
+ 	if (fotg210->ep0_req == NULL)
+-		goto err_req;
++		goto err_map;
+ 
+ 	fotg210_init(fotg210);
+ 
+@@ -1187,12 +1190,14 @@ err_req:
+ 	fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req);
+ 
+ err_map:
+-	if (fotg210->reg)
+-		iounmap(fotg210->reg);
++	iounmap(fotg210->reg);
+ 
+ err_alloc:
++	for (i = 0; i < FOTG210_MAX_NUM_EP; i++)
++		kfree(fotg210->ep[i]);
+ 	kfree(fotg210);
+ 
++err:
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c1b22fc64e38..b5a14caa9297 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -152,7 +152,7 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ {
+ 	const struct xhci_plat_priv *priv_match;
+ 	const struct hc_driver	*driver;
+-	struct device		*sysdev;
++	struct device		*sysdev, *tmpdev;
+ 	struct xhci_hcd		*xhci;
+ 	struct resource         *res;
+ 	struct usb_hcd		*hcd;
+@@ -272,19 +272,24 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ 		goto disable_clk;
+ 	}
+ 
+-	if (device_property_read_bool(sysdev, "usb2-lpm-disable"))
+-		xhci->quirks |= XHCI_HW_LPM_DISABLE;
++	/* imod_interval is the interrupt moderation value in nanoseconds. */
++	xhci->imod_interval = 40000;
+ 
+-	if (device_property_read_bool(sysdev, "usb3-lpm-capable"))
+-		xhci->quirks |= XHCI_LPM_SUPPORT;
++	/* Iterate over all parent nodes for finding quirks */
++	for (tmpdev = &pdev->dev; tmpdev; tmpdev = tmpdev->parent) {
+ 
+-	if (device_property_read_bool(&pdev->dev, "quirk-broken-port-ped"))
+-		xhci->quirks |= XHCI_BROKEN_PORT_PED;
++		if (device_property_read_bool(tmpdev, "usb2-lpm-disable"))
++			xhci->quirks |= XHCI_HW_LPM_DISABLE;
+ 
+-	/* imod_interval is the interrupt moderation value in nanoseconds. */
+-	xhci->imod_interval = 40000;
+-	device_property_read_u32(sysdev, "imod-interval-ns",
+-				 &xhci->imod_interval);
++		if (device_property_read_bool(tmpdev, "usb3-lpm-capable"))
++			xhci->quirks |= XHCI_LPM_SUPPORT;
++
++		if (device_property_read_bool(tmpdev, "quirk-broken-port-ped"))
++			xhci->quirks |= XHCI_BROKEN_PORT_PED;
++
++		device_property_read_u32(tmpdev, "imod-interval-ns",
++					 &xhci->imod_interval);
++	}
+ 
+ 	hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0);
+ 	if (IS_ERR(hcd->usb_phy)) {
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 1232dd49556d..6d9fd5f64903 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -413,6 +413,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count,
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 	mutex_unlock(&dev->io_mutex);
+ 
++	if (WARN_ON_ONCE(len >= sizeof(in_buffer)))
++		return -EIO;
++
+ 	return simple_read_from_buffer(buffer, count, ppos, in_buffer, len);
+ }
+ 
+diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
+index d4265c8ebb22..b1357aa4bc55 100644
+--- a/drivers/xen/cpu_hotplug.c
++++ b/drivers/xen/cpu_hotplug.c
+@@ -19,15 +19,16 @@ static void enable_hotplug_cpu(int cpu)
+ 
+ static void disable_hotplug_cpu(int cpu)
+ {
+-	if (cpu_online(cpu)) {
+-		lock_device_hotplug();
++	if (!cpu_is_hotpluggable(cpu))
++		return;
++	lock_device_hotplug();
++	if (cpu_online(cpu))
+ 		device_offline(get_cpu_device(cpu));
+-		unlock_device_hotplug();
+-	}
+-	if (cpu_present(cpu))
++	if (!cpu_online(cpu) && cpu_present(cpu)) {
+ 		xen_arch_unregister_cpu(cpu);
+-
+-	set_cpu_present(cpu, false);
++		set_cpu_present(cpu, false);
++	}
++	unlock_device_hotplug();
+ }
+ 
+ static int vcpu_online(unsigned int cpu)
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 08e4af04d6f2..e6c1934734b7 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -138,7 +138,7 @@ static int set_evtchn_to_irq(unsigned evtchn, unsigned irq)
+ 		clear_evtchn_to_irq_row(row);
+ 	}
+ 
+-	evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)] = irq;
++	evtchn_to_irq[row][col] = irq;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
+index c93d8ef8df34..5bb01a62f214 100644
+--- a/drivers/xen/manage.c
++++ b/drivers/xen/manage.c
+@@ -280,9 +280,11 @@ static void sysrq_handler(struct xenbus_watch *watch, const char *path,
+ 		/*
+ 		 * The Xenstore watch fires directly after registering it and
+ 		 * after a suspend/resume cycle. So ENOENT is no error but
+-		 * might happen in those cases.
++		 * might happen in those cases. ERANGE is observed when we get
++		 * an empty value (''), this happens when we acknowledge the
++		 * request by writing '\0' below.
+ 		 */
+-		if (err != -ENOENT)
++		if (err != -ENOENT && err != -ERANGE)
+ 			pr_err("Error %d reading sysrq code in control/sysrq\n",
+ 			       err);
+ 		xenbus_transaction_end(xbt, 1);
+diff --git a/fs/afs/proc.c b/fs/afs/proc.c
+index 0c3285c8db95..476dcbb79713 100644
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -98,13 +98,13 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 		goto inval;
+ 
+ 	args = strchr(name, ' ');
+-	if (!args)
+-		goto inval;
+-	do {
+-		*args++ = 0;
+-	} while(*args == ' ');
+-	if (!*args)
+-		goto inval;
++	if (args) {
++		do {
++			*args++ = 0;
++		} while(*args == ' ');
++		if (!*args)
++			goto inval;
++	}
+ 
+ 	/* determine command to perform */
+ 	_debug("cmd=%s name=%s args=%s", buf, name, args);
+@@ -120,7 +120,6 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size)
+ 
+ 		if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+ 			afs_put_cell(net, cell);
+-		printk("kAFS: Added new cell '%s'\n", name);
+ 	} else {
+ 		goto inval;
+ 	}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 118346aceea9..663ce0518d27 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1277,6 +1277,7 @@ struct btrfs_root {
+ 	int send_in_progress;
+ 	struct btrfs_subvolume_writers *subv_writers;
+ 	atomic_t will_be_snapshotted;
++	atomic_t snapshot_force_cow;
+ 
+ 	/* For qgroup metadata reserved space */
+ 	spinlock_t qgroup_meta_rsv_lock;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index dfed08e70ec1..891b1aab3480 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1217,6 +1217,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info,
+ 	atomic_set(&root->log_batch, 0);
+ 	refcount_set(&root->refs, 1);
+ 	atomic_set(&root->will_be_snapshotted, 0);
++	atomic_set(&root->snapshot_force_cow, 0);
+ 	root->log_transid = 0;
+ 	root->log_transid_committed = -1;
+ 	root->last_log_commit = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 071d949f69ec..d3736fbf6774 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1275,7 +1275,7 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ 	u64 disk_num_bytes;
+ 	u64 ram_bytes;
+ 	int extent_type;
+-	int ret, err;
++	int ret;
+ 	int type;
+ 	int nocow;
+ 	int check_prev = 1;
+@@ -1407,11 +1407,8 @@ next_slot:
+ 			 * if there are pending snapshots for this root,
+ 			 * we fall into common COW way.
+ 			 */
+-			if (!nolock) {
+-				err = btrfs_start_write_no_snapshotting(root);
+-				if (!err)
+-					goto out_check;
+-			}
++			if (!nolock && atomic_read(&root->snapshot_force_cow))
++				goto out_check;
+ 			/*
+ 			 * force cow if csum exists in the range.
+ 			 * this ensure that csum for a given extent are
+@@ -1420,9 +1417,6 @@ next_slot:
+ 			ret = csum_exist_in_range(fs_info, disk_bytenr,
+ 						  num_bytes);
+ 			if (ret) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
+-
+ 				/*
+ 				 * ret could be -EIO if the above fails to read
+ 				 * metadata.
+@@ -1435,11 +1429,8 @@ next_slot:
+ 				WARN_ON_ONCE(nolock);
+ 				goto out_check;
+ 			}
+-			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) {
+-				if (!nolock)
+-					btrfs_end_write_no_snapshotting(root);
++			if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr))
+ 				goto out_check;
+-			}
+ 			nocow = 1;
+ 		} else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
+ 			extent_end = found_key.offset +
+@@ -1453,8 +1444,6 @@ next_slot:
+ out_check:
+ 		if (extent_end <= start) {
+ 			path->slots[0]++;
+-			if (!nolock && nocow)
+-				btrfs_end_write_no_snapshotting(root);
+ 			if (nocow)
+ 				btrfs_dec_nocow_writers(fs_info, disk_bytenr);
+ 			goto next_slot;
+@@ -1476,8 +1465,6 @@ out_check:
+ 					     end, page_started, nr_written, 1,
+ 					     NULL);
+ 			if (ret) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1497,8 +1484,6 @@ out_check:
+ 					  ram_bytes, BTRFS_COMPRESS_NONE,
+ 					  BTRFS_ORDERED_PREALLOC);
+ 			if (IS_ERR(em)) {
+-				if (!nolock && nocow)
+-					btrfs_end_write_no_snapshotting(root);
+ 				if (nocow)
+ 					btrfs_dec_nocow_writers(fs_info,
+ 								disk_bytenr);
+@@ -1537,8 +1522,6 @@ out_check:
+ 					     EXTENT_CLEAR_DATA_RESV,
+ 					     PAGE_UNLOCK | PAGE_SET_PRIVATE2);
+ 
+-		if (!nolock && nocow)
+-			btrfs_end_write_no_snapshotting(root);
+ 		cur_offset = extent_end;
+ 
+ 		/*
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index f3d6be0c657b..ef7159646615 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -761,6 +761,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	struct btrfs_pending_snapshot *pending_snapshot;
+ 	struct btrfs_trans_handle *trans;
+ 	int ret;
++	bool snapshot_force_cow = false;
+ 
+ 	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+ 		return -EINVAL;
+@@ -777,6 +778,11 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 		goto free_pending;
+ 	}
+ 
++	/*
++	 * Force new buffered writes to reserve space even when NOCOW is
++	 * possible. This is to avoid later writeback (running dealloc) to
++	 * fallback to COW mode and unexpectedly fail with ENOSPC.
++	 */
+ 	atomic_inc(&root->will_be_snapshotted);
+ 	smp_mb__after_atomic();
+ 	/* wait for no snapshot writes */
+@@ -787,6 +793,14 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ 	if (ret)
+ 		goto dec_and_free;
+ 
++	/*
++	 * All previous writes have started writeback in NOCOW mode, so now
++	 * we force future writes to fallback to COW mode during snapshot
++	 * creation.
++	 */
++	atomic_inc(&root->snapshot_force_cow);
++	snapshot_force_cow = true;
++
+ 	btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+ 
+ 	btrfs_init_block_rsv(&pending_snapshot->block_rsv,
+@@ -851,6 +865,8 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir,
+ fail:
+ 	btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ dec_and_free:
++	if (snapshot_force_cow)
++		atomic_dec(&root->snapshot_force_cow);
+ 	if (atomic_dec_and_test(&root->will_be_snapshotted))
+ 		wake_up_var(&root->will_be_snapshotted);
+ free_pending:
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 5304b8d6ceb8..1a22c0ecaf67 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4584,7 +4584,12 @@ again:
+ 
+ 	/* Now btrfs_update_device() will change the on-disk size. */
+ 	ret = btrfs_update_device(trans, device);
+-	btrfs_end_transaction(trans);
++	if (ret < 0) {
++		btrfs_abort_transaction(trans, ret);
++		btrfs_end_transaction(trans);
++	} else {
++		ret = btrfs_commit_transaction(trans);
++	}
+ done:
+ 	btrfs_free_path(path);
+ 	if (ret) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 95a3b3ac9b6e..60f81ac369b5 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -603,6 +603,8 @@ static int extra_mon_dispatch(struct ceph_client *client, struct ceph_msg *msg)
+ 
+ /*
+  * create a new fs client
++ *
++ * Success or not, this function consumes @fsopt and @opt.
+  */
+ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 					struct ceph_options *opt)
+@@ -610,17 +612,20 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ 	struct ceph_fs_client *fsc;
+ 	int page_count;
+ 	size_t size;
+-	int err = -ENOMEM;
++	int err;
+ 
+ 	fsc = kzalloc(sizeof(*fsc), GFP_KERNEL);
+-	if (!fsc)
+-		return ERR_PTR(-ENOMEM);
++	if (!fsc) {
++		err = -ENOMEM;
++		goto fail;
++	}
+ 
+ 	fsc->client = ceph_create_client(opt, fsc);
+ 	if (IS_ERR(fsc->client)) {
+ 		err = PTR_ERR(fsc->client);
+ 		goto fail;
+ 	}
++	opt = NULL; /* fsc->client now owns this */
+ 
+ 	fsc->client->extra_mon_dispatch = extra_mon_dispatch;
+ 	fsc->client->osdc.abort_on_full = true;
+@@ -678,6 +683,9 @@ fail_client:
+ 	ceph_destroy_client(fsc->client);
+ fail:
+ 	kfree(fsc);
++	if (opt)
++		ceph_destroy_options(opt);
++	destroy_mount_options(fsopt);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -1042,8 +1050,6 @@ static struct dentry *ceph_mount(struct file_system_type *fs_type,
+ 	fsc = create_fs_client(fsopt, opt);
+ 	if (IS_ERR(fsc)) {
+ 		res = ERR_CAST(fsc);
+-		destroy_mount_options(fsopt);
+-		ceph_destroy_options(opt);
+ 		goto out_final;
+ 	}
+ 
+diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c
+index b380e0871372..a2b2355e7f01 100644
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -105,9 +105,6 @@ convert_sfm_char(const __u16 src_char, char *target)
+ 	case SFM_LESSTHAN:
+ 		*target = '<';
+ 		break;
+-	case SFM_SLASH:
+-		*target = '\\';
+-		break;
+ 	case SFM_SPACE:
+ 		*target = ' ';
+ 		break;
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 93408eab92e7..f5baf777564c 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -601,10 +601,15 @@ CIFSSMBNegotiate(const unsigned int xid, struct cifs_ses *ses)
+ 	}
+ 
+ 	count = 0;
++	/*
++	 * We know that all the name entries in the protocols array
++	 * are short (< 16 bytes anyway) and are NUL terminated.
++	 */
+ 	for (i = 0; i < CIFS_NUM_PROT; i++) {
+-		strncpy(pSMB->DialectsArray+count, protocols[i].name, 16);
+-		count += strlen(protocols[i].name) + 1;
+-		/* null at end of source and target buffers anyway */
++		size_t len = strlen(protocols[i].name) + 1;
++
++		memcpy(pSMB->DialectsArray+count, protocols[i].name, len);
++		count += len;
+ 	}
+ 	inc_rfc1001_len(pSMB, count);
+ 	pSMB->ByteCount = cpu_to_le16(count);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 53e8362cbc4a..6737f54d9a34 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -404,9 +404,17 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 			(struct smb_com_transaction_change_notify_rsp *)buf;
+ 		struct file_notify_information *pnotify;
+ 		__u32 data_offset = 0;
++		size_t len = srv->total_read - sizeof(pSMBr->hdr.smb_buf_length);
++
+ 		if (get_bcc(buf) > sizeof(struct file_notify_information)) {
+ 			data_offset = le32_to_cpu(pSMBr->DataOffset);
+ 
++			if (data_offset >
++			    len - sizeof(struct file_notify_information)) {
++				cifs_dbg(FYI, "invalid data_offset %u\n",
++					 data_offset);
++				return true;
++			}
+ 			pnotify = (struct file_notify_information *)
+ 				((char *)&pSMBr->hdr.Protocol + data_offset);
+ 			cifs_dbg(FYI, "dnotify on %s Action: 0x%x\n",
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5ecbc99f46e4..abb54b852bdc 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1484,7 +1484,7 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	}
+ 
+ 	srch_inf->entries_in_buffer = 0;
+-	srch_inf->index_of_last_entry = 0;
++	srch_inf->index_of_last_entry = 2;
+ 
+ 	rc = SMB2_query_directory(xid, tcon, fid->persistent_fid,
+ 				  fid->volatile_fid, 0, srch_inf);
+diff --git a/fs/dcache.c b/fs/dcache.c
+index d19a0dc46c04..baa89f092a2d 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -1890,7 +1890,7 @@ void d_instantiate_new(struct dentry *entry, struct inode *inode)
+ 	spin_lock(&inode->i_lock);
+ 	__d_instantiate(entry, inode);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/fs/inode.c b/fs/inode.c
+index 8c86c809ca17..a06de4454232 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -804,6 +804,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -831,6 +835,10 @@ repeat:
+ 			__wait_on_freeing_inode(inode);
+ 			goto repeat;
+ 		}
++		if (unlikely(inode->i_state & I_CREATING)) {
++			spin_unlock(&inode->i_lock);
++			return ERR_PTR(-ESTALE);
++		}
+ 		__iget(inode);
+ 		spin_unlock(&inode->i_lock);
+ 		return inode;
+@@ -961,13 +969,26 @@ void unlock_new_inode(struct inode *inode)
+ 	lockdep_annotate_inode_mutex_key(inode);
+ 	spin_lock(&inode->i_lock);
+ 	WARN_ON(!(inode->i_state & I_NEW));
+-	inode->i_state &= ~I_NEW;
++	inode->i_state &= ~I_NEW & ~I_CREATING;
+ 	smp_mb();
+ 	wake_up_bit(&inode->i_state, __I_NEW);
+ 	spin_unlock(&inode->i_lock);
+ }
+ EXPORT_SYMBOL(unlock_new_inode);
+ 
++void discard_new_inode(struct inode *inode)
++{
++	lockdep_annotate_inode_mutex_key(inode);
++	spin_lock(&inode->i_lock);
++	WARN_ON(!(inode->i_state & I_NEW));
++	inode->i_state &= ~I_NEW;
++	smp_mb();
++	wake_up_bit(&inode->i_state, __I_NEW);
++	spin_unlock(&inode->i_lock);
++	iput(inode);
++}
++EXPORT_SYMBOL(discard_new_inode);
++
+ /**
+  * lock_two_nondirectories - take two i_mutexes on non-directory objects
+  *
+@@ -1029,6 +1050,7 @@ struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
+ {
+ 	struct hlist_head *head = inode_hashtable + hash(inode->i_sb, hashval);
+ 	struct inode *old;
++	bool creating = inode->i_state & I_CREATING;
+ 
+ again:
+ 	spin_lock(&inode_hash_lock);
+@@ -1039,6 +1061,8 @@ again:
+ 		 * Use the old inode instead of the preallocated one.
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
++		if (IS_ERR(old))
++			return NULL;
+ 		wait_on_inode(old);
+ 		if (unlikely(inode_unhashed(old))) {
+ 			iput(old);
+@@ -1060,6 +1084,8 @@ again:
+ 	inode->i_state |= I_NEW;
+ 	hlist_add_head(&inode->i_hash, head);
+ 	spin_unlock(&inode->i_lock);
++	if (!creating)
++		inode_sb_list_add(inode);
+ unlock:
+ 	spin_unlock(&inode_hash_lock);
+ 
+@@ -1094,12 +1120,13 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval,
+ 	struct inode *inode = ilookup5(sb, hashval, test, data);
+ 
+ 	if (!inode) {
+-		struct inode *new = new_inode(sb);
++		struct inode *new = alloc_inode(sb);
+ 
+ 		if (new) {
++			new->i_state = 0;
+ 			inode = inode_insert5(new, hashval, test, set, data);
+ 			if (unlikely(inode != new))
+-				iput(new);
++				destroy_inode(new);
+ 		}
+ 	}
+ 	return inode;
+@@ -1128,6 +1155,8 @@ again:
+ 	inode = find_inode_fast(sb, head, ino);
+ 	spin_unlock(&inode_hash_lock);
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1165,6 +1194,8 @@ again:
+ 		 */
+ 		spin_unlock(&inode_hash_lock);
+ 		destroy_inode(inode);
++		if (IS_ERR(old))
++			return NULL;
+ 		inode = old;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+@@ -1282,7 +1313,7 @@ struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval,
+ 	inode = find_inode(sb, head, test, data);
+ 	spin_unlock(&inode_hash_lock);
+ 
+-	return inode;
++	return IS_ERR(inode) ? NULL : inode;
+ }
+ EXPORT_SYMBOL(ilookup5_nowait);
+ 
+@@ -1338,6 +1369,8 @@ again:
+ 	spin_unlock(&inode_hash_lock);
+ 
+ 	if (inode) {
++		if (IS_ERR(inode))
++			return NULL;
+ 		wait_on_inode(inode);
+ 		if (unlikely(inode_unhashed(inode))) {
+ 			iput(inode);
+@@ -1421,12 +1454,17 @@ int insert_inode_locked(struct inode *inode)
+ 		}
+ 		if (likely(!old)) {
+ 			spin_lock(&inode->i_lock);
+-			inode->i_state |= I_NEW;
++			inode->i_state |= I_NEW | I_CREATING;
+ 			hlist_add_head(&inode->i_hash, head);
+ 			spin_unlock(&inode->i_lock);
+ 			spin_unlock(&inode_hash_lock);
+ 			return 0;
+ 		}
++		if (unlikely(old->i_state & I_CREATING)) {
++			spin_unlock(&old->i_lock);
++			spin_unlock(&inode_hash_lock);
++			return -EBUSY;
++		}
+ 		__iget(old);
+ 		spin_unlock(&old->i_lock);
+ 		spin_unlock(&inode_hash_lock);
+@@ -1443,7 +1481,10 @@ EXPORT_SYMBOL(insert_inode_locked);
+ int insert_inode_locked4(struct inode *inode, unsigned long hashval,
+ 		int (*test)(struct inode *, void *), void *data)
+ {
+-	struct inode *old = inode_insert5(inode, hashval, test, NULL, data);
++	struct inode *old;
++
++	inode->i_state |= I_CREATING;
++	old = inode_insert5(inode, hashval, test, NULL, data);
+ 
+ 	if (old != inode) {
+ 		iput(old);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index f174397b63a0..ababdbfab537 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -351,16 +351,9 @@ int fsnotify(struct inode *to_tell, __u32 mask, const void *data, int data_is,
+ 
+ 	iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+ 
+-	if ((mask & FS_MODIFY) ||
+-	    (test_mask & to_tell->i_fsnotify_mask)) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
+-	}
+-
+-	if (mnt && ((mask & FS_MODIFY) ||
+-		    (test_mask & mnt->mnt_fsnotify_mask))) {
+-		iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+-			fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
++		fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++	if (mnt) {
+ 		iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] =
+ 			fsnotify_first_mark(&mnt->mnt_fsnotify_marks);
+ 	}
+diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
+index aaca0949fe53..826f0567ec43 100644
+--- a/fs/ocfs2/dlm/dlmmaster.c
++++ b/fs/ocfs2/dlm/dlmmaster.c
+@@ -584,9 +584,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm,
+ 
+ 	res->last_used = 0;
+ 
+-	spin_lock(&dlm->spinlock);
++	spin_lock(&dlm->track_lock);
+ 	list_add_tail(&res->tracking, &dlm->tracking_list);
+-	spin_unlock(&dlm->spinlock);
++	spin_unlock(&dlm->track_lock);
+ 
+ 	memset(res->lvb, 0, DLM_LVB_LEN);
+ 	memset(res->refmap, 0, sizeof(res->refmap));
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index f480b1a2cd2e..da9b3ccfde23 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -601,6 +601,10 @@ static int ovl_create_object(struct dentry *dentry, int mode, dev_t rdev,
+ 	if (!inode)
+ 		goto out_drop_write;
+ 
++	spin_lock(&inode->i_lock);
++	inode->i_state |= I_CREATING;
++	spin_unlock(&inode->i_lock);
++
+ 	inode_init_owner(inode, dentry->d_parent->d_inode, mode);
+ 	attr.mode = inode->i_mode;
+ 
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index c993dd8db739..c2229f02389b 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -705,7 +705,7 @@ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper,
+ 			index = NULL;
+ 			goto out;
+ 		}
+-		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n"
++		pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%.*s, err=%i);\n"
+ 				    "overlayfs: mount with '-o index=off' to disable inodes index.\n",
+ 				    d_inode(origin)->i_ino, name.len, name.name,
+ 				    err);
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 7538b9b56237..e789924e9833 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -147,8 +147,8 @@ static inline int ovl_do_setxattr(struct dentry *dentry, const char *name,
+ 				  const void *value, size_t size, int flags)
+ {
+ 	int err = vfs_setxattr(dentry, name, value, size, flags);
+-	pr_debug("setxattr(%pd2, \"%s\", \"%*s\", 0x%x) = %i\n",
+-		 dentry, name, (int) size, (char *) value, flags, err);
++	pr_debug("setxattr(%pd2, \"%s\", \"%*pE\", %zu, 0x%x) = %i\n",
++		 dentry, name, min((int)size, 48), value, size, flags, err);
+ 	return err;
+ }
+ 
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 6f1078028c66..319a7eeb388f 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -531,7 +531,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 	struct dentry *upperdentry = ovl_dentry_upper(dentry);
+ 	struct dentry *index = NULL;
+ 	struct inode *inode;
+-	struct qstr name;
++	struct qstr name = { };
+ 	int err;
+ 
+ 	err = ovl_get_index_name(lowerdentry, &name);
+@@ -574,6 +574,7 @@ static void ovl_cleanup_index(struct dentry *dentry)
+ 		goto fail;
+ 
+ out:
++	kfree(name.name);
+ 	dput(index);
+ 	return;
+ 
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index aaffc0c30216..bbcad104505c 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -407,6 +407,20 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
+ 	unsigned long *entries;
+ 	int err;
+ 
++	/*
++	 * The ability to racily run the kernel stack unwinder on a running task
++	 * and then observe the unwinder output is scary; while it is useful for
++	 * debugging kernel issues, it can also allow an attacker to leak kernel
++	 * stack contents.
++	 * Doing this in a manner that is at least safe from races would require
++	 * some work to ensure that the remote task can not be scheduled; and
++	 * even then, this would still expose the unwinder as local attack
++	 * surface.
++	 * Therefore, this interface is restricted to root.
++	 */
++	if (!file_ns_capable(m->file, &init_user_ns, CAP_SYS_ADMIN))
++		return -EACCES;
++
+ 	entries = kmalloc_array(MAX_STACK_TRACE_DEPTH, sizeof(*entries),
+ 				GFP_KERNEL);
+ 	if (!entries)
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 1bee74682513..c689fd5b5679 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -949,17 +949,19 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs,
+ 	int err = 0;
+ 
+ #ifdef CONFIG_FS_POSIX_ACL
+-	if (inode->i_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_ACCESS);
+-		if (err)
+-			return err;
+-	}
+-	if (inode->i_default_acl) {
+-		err = xattr_list_one(&buffer, &remaining_size,
+-				     XATTR_NAME_POSIX_ACL_DEFAULT);
+-		if (err)
+-			return err;
++	if (IS_POSIXACL(inode)) {
++		if (inode->i_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_ACCESS);
++			if (err)
++				return err;
++		}
++		if (inode->i_default_acl) {
++			err = xattr_list_one(&buffer, &remaining_size,
++					     XATTR_NAME_POSIX_ACL_DEFAULT);
++			if (err)
++				return err;
++		}
+ 	}
+ #endif
+ 
+diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
+index 66d1d45fa2e1..d356f802945a 100644
+--- a/include/asm-generic/io.h
++++ b/include/asm-generic/io.h
+@@ -1026,7 +1026,8 @@ static inline void __iomem *ioremap_wt(phys_addr_t offset, size_t size)
+ #define ioport_map ioport_map
+ static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+ {
+-	return PCI_IOBASE + (port & MMIO_UPPER_LIMIT);
++	port &= IO_SPACE_LIMIT;
++	return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port;
+ }
+ #endif
+ 
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 0fce47d5acb1..5d46b83d4820 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -88,7 +88,6 @@ struct blkg_policy_data {
+ 	/* the blkg and policy id this per-policy data belongs to */
+ 	struct blkcg_gq			*blkg;
+ 	int				plid;
+-	bool				offline;
+ };
+ 
+ /*
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 805bf22898cf..a3afa50bb79f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2014,6 +2014,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+  * I_OVL_INUSE		Used by overlayfs to get exclusive ownership on upper
+  *			and work dirs among overlayfs mounts.
+  *
++ * I_CREATING		New object's inode in the middle of setting up.
++ *
+  * Q: What is the difference between I_WILL_FREE and I_FREEING?
+  */
+ #define I_DIRTY_SYNC		(1 << 0)
+@@ -2034,7 +2036,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
+ #define __I_DIRTY_TIME_EXPIRED	12
+ #define I_DIRTY_TIME_EXPIRED	(1 << __I_DIRTY_TIME_EXPIRED)
+ #define I_WB_SWITCH		(1 << 13)
+-#define I_OVL_INUSE			(1 << 14)
++#define I_OVL_INUSE		(1 << 14)
++#define I_CREATING		(1 << 15)
+ 
+ #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
+ #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+@@ -2918,6 +2921,7 @@ extern void lockdep_annotate_inode_mutex_key(struct inode *inode);
+ static inline void lockdep_annotate_inode_mutex_key(struct inode *inode) { };
+ #endif
+ extern void unlock_new_inode(struct inode *);
++extern void discard_new_inode(struct inode *);
+ extern unsigned int get_next_ino(void);
+ extern void evict_inodes(struct super_block *sb);
+ 
+diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
+index 1beb3ead0385..7229c186d199 100644
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4763,8 +4763,8 @@ const char *reg_initiator_name(enum nl80211_reg_initiator initiator);
+  *
+  * Return: 0 on success. -ENODATA.
+  */
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *ptr,
+-			struct ieee80211_wmm_rule *rule);
++int reg_query_regdb_wmm(char *alpha2, int freq,
++			struct ieee80211_reg_rule *rule);
+ 
+ /*
+  * callbacks for asynchronous cfg80211 methods, notification
+diff --git a/include/net/regulatory.h b/include/net/regulatory.h
+index 60f8cc86a447..3469750df0f4 100644
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -217,15 +217,15 @@ struct ieee80211_wmm_rule {
+ struct ieee80211_reg_rule {
+ 	struct ieee80211_freq_range freq_range;
+ 	struct ieee80211_power_rule power_rule;
+-	struct ieee80211_wmm_rule *wmm_rule;
++	struct ieee80211_wmm_rule wmm_rule;
+ 	u32 flags;
+ 	u32 dfs_cac_ms;
++	bool has_wmm;
+ };
+ 
+ struct ieee80211_regdomain {
+ 	struct rcu_head rcu_head;
+ 	u32 n_reg_rules;
+-	u32 n_wmm_rules;
+ 	char alpha2[3];
+ 	enum nl80211_dfs_regions dfs_region;
+ 	struct ieee80211_reg_rule reg_rules[];
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index ed707b21d152..f833a60699ad 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -236,7 +236,7 @@ static int bpf_tcp_init(struct sock *sk)
+ }
+ 
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock);
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md);
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge);
+ 
+ static void bpf_tcp_release(struct sock *sk)
+ {
+@@ -248,7 +248,7 @@ static void bpf_tcp_release(struct sock *sk)
+ 		goto out;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+@@ -330,14 +330,14 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	close_fun = psock->save_close;
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 		psock->cork = NULL;
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -369,7 +369,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			/* If another thread deleted this object skip deletion.
+ 			 * The refcnt on psock may or may not be zero.
+ 			 */
+-			if (l) {
++			if (l && l == link) {
+ 				hlist_del_rcu(&link->hash_node);
+ 				smap_release_sock(psock, link->sk);
+ 				free_htab_elem(htab, link);
+@@ -570,14 +570,16 @@ static void free_bytes_sg(struct sock *sk, int bytes,
+ 	md->sg_start = i;
+ }
+ 
+-static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
++static int free_sg(struct sock *sk, int start,
++		   struct sk_msg_buff *md, bool charge)
+ {
+ 	struct scatterlist *sg = md->sg_data;
+ 	int i = start, free = 0;
+ 
+ 	while (sg[i].length) {
+ 		free += sg[i].length;
+-		sk_mem_uncharge(sk, sg[i].length);
++		if (charge)
++			sk_mem_uncharge(sk, sg[i].length);
+ 		if (!md->skb)
+ 			put_page(sg_page(&sg[i]));
+ 		sg[i].length = 0;
+@@ -594,9 +596,9 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
+ 	return free;
+ }
+ 
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge)
+ {
+-	int free = free_sg(sk, md->sg_start, md);
++	int free = free_sg(sk, md->sg_start, md, charge);
+ 
+ 	md->sg_start = md->sg_end;
+ 	return free;
+@@ -604,7 +606,7 @@ static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
+ 
+ static int free_curr_sg(struct sock *sk, struct sk_msg_buff *md)
+ {
+-	return free_sg(sk, md->sg_curr, md);
++	return free_sg(sk, md->sg_curr, md, true);
+ }
+ 
+ static int bpf_map_msg_verdict(int _rc, struct sk_msg_buff *md)
+@@ -718,7 +720,7 @@ static int bpf_tcp_ingress(struct sock *sk, int apply_bytes,
+ 		list_add_tail(&r->list, &psock->ingress);
+ 		sk->sk_data_ready(sk);
+ 	} else {
+-		free_start_sg(sk, r);
++		free_start_sg(sk, r, true);
+ 		kfree(r);
+ 	}
+ 
+@@ -755,14 +757,10 @@ static int bpf_tcp_sendmsg_do_redirect(struct sock *sk, int send,
+ 		release_sock(sk);
+ 	}
+ 	smap_release_sock(psock, sk);
+-	if (unlikely(err))
+-		goto out;
+-	return 0;
++	return err;
+ out_rcu:
+ 	rcu_read_unlock();
+-out:
+-	free_bytes_sg(NULL, send, md, false);
+-	return err;
++	return 0;
+ }
+ 
+ static inline void bpf_md_init(struct smap_psock *psock)
+@@ -825,7 +823,7 @@ more_data:
+ 	case __SK_PASS:
+ 		err = bpf_tcp_push(sk, send, m, flags, true);
+ 		if (unlikely(err)) {
+-			*copied -= free_start_sg(sk, m);
++			*copied -= free_start_sg(sk, m, true);
+ 			break;
+ 		}
+ 
+@@ -848,16 +846,17 @@ more_data:
+ 		lock_sock(sk);
+ 
+ 		if (unlikely(err < 0)) {
+-			free_start_sg(sk, m);
++			int free = free_start_sg(sk, m, false);
++
+ 			psock->sg_size = 0;
+ 			if (!cork)
+-				*copied -= send;
++				*copied -= free;
+ 		} else {
+ 			psock->sg_size -= send;
+ 		}
+ 
+ 		if (cork) {
+-			free_start_sg(sk, m);
++			free_start_sg(sk, m, true);
+ 			psock->sg_size = 0;
+ 			kfree(m);
+ 			m = NULL;
+@@ -915,6 +914,8 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 
+ 	if (unlikely(flags & MSG_ERRQUEUE))
+ 		return inet_recv_error(sk, msg, len, addr_len);
++	if (!skb_queue_empty(&sk->sk_receive_queue))
++		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+@@ -925,9 +926,6 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		goto out;
+ 	rcu_read_unlock();
+ 
+-	if (!skb_queue_empty(&sk->sk_receive_queue))
+-		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+-
+ 	lock_sock(sk);
+ bytes_ready:
+ 	while (copied != len) {
+@@ -1125,7 +1123,7 @@ wait_for_memory:
+ 		err = sk_stream_wait_memory(sk, &timeo);
+ 		if (err) {
+ 			if (m && m != psock->cork)
+-				free_start_sg(sk, m);
++				free_start_sg(sk, m, true);
+ 			goto out_err;
+ 		}
+ 	}
+@@ -1467,10 +1465,16 @@ static void smap_destroy_psock(struct rcu_head *rcu)
+ 	schedule_work(&psock->gc_work);
+ }
+ 
++static bool psock_is_smap_sk(struct sock *sk)
++{
++	return inet_csk(sk)->icsk_ulp_ops == &bpf_tcp_ulp_ops;
++}
++
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock)
+ {
+ 	if (refcount_dec_and_test(&psock->refcnt)) {
+-		tcp_cleanup_ulp(sock);
++		if (psock_is_smap_sk(sock))
++			tcp_cleanup_ulp(sock);
+ 		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
+ 		write_unlock_bh(&sock->sk_callback_lock);
+@@ -1584,13 +1588,13 @@ static void smap_gc_work(struct work_struct *w)
+ 		bpf_prog_put(psock->bpf_tx_msg);
+ 
+ 	if (psock->cork) {
+-		free_start_sg(psock->sock, psock->cork);
++		free_start_sg(psock->sock, psock->cork, true);
+ 		kfree(psock->cork);
+ 	}
+ 
+ 	list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ 		list_del(&md->list);
+-		free_start_sg(psock->sock, md);
++		free_start_sg(psock->sock, md, true);
+ 		kfree(md);
+ 	}
+ 
+@@ -1897,6 +1901,10 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 	 * doesn't update user data.
+ 	 */
+ 	if (psock) {
++		if (!psock_is_smap_sk(sock)) {
++			err = -EBUSY;
++			goto out_progs;
++		}
+ 		if (READ_ONCE(psock->bpf_parse) && parse) {
+ 			err = -EBUSY;
+ 			goto out_progs;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index adbe21c8876e..82e8edef6ea0 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2865,6 +2865,15 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	u64 umin_val, umax_val;
+ 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
+ 
++	if (insn_bitness == 32) {
++		/* Relevant for 32-bit RSH: Information can propagate towards
++		 * LSB, so it isn't sufficient to only truncate the output to
++		 * 32 bits.
++		 */
++		coerce_reg_to_size(dst_reg, 4);
++		coerce_reg_to_size(&src_reg, 4);
++	}
++
+ 	smin_val = src_reg.smin_value;
+ 	smax_val = src_reg.smax_value;
+ 	umin_val = src_reg.umin_value;
+@@ -3100,7 +3109,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
+ 	if (BPF_CLASS(insn->code) != BPF_ALU64) {
+ 		/* 32-bit ALU ops are (32,32)->32 */
+ 		coerce_reg_to_size(dst_reg, 4);
+-		coerce_reg_to_size(&src_reg, 4);
+ 	}
+ 
+ 	__reg_deduce_bounds(dst_reg);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 56a0fed30c0a..505a41c42b96 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -1295,7 +1295,7 @@ static void init_numa_topology_type(void)
+ 
+ 	n = sched_max_numa_distance;
+ 
+-	if (sched_domains_numa_levels <= 1) {
++	if (sched_domains_numa_levels <= 2) {
+ 		sched_numa_topology_type = NUMA_DIRECT;
+ 		return;
+ 	}
+@@ -1380,9 +1380,6 @@ void sched_init_numa(void)
+ 			break;
+ 	}
+ 
+-	if (!level)
+-		return;
+-
+ 	/*
+ 	 * 'level' contains the number of unique distances
+ 	 *
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 4d3c922ea1a1..8534ea2978c5 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -96,7 +96,7 @@ static long madvise_behavior(struct vm_area_struct *vma,
+ 		new_flags |= VM_DONTDUMP;
+ 		break;
+ 	case MADV_DODUMP:
+-		if (new_flags & VM_SPECIAL) {
++		if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) {
+ 			error = -EINVAL;
+ 			goto out;
+ 		}
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9dfd145eedcc..963ee2e88861 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2272,14 +2272,21 @@ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+ 	.arg2_type      = ARG_ANYTHING,
+ };
+ 
++#define sk_msg_iter_var(var)			\
++	do {					\
++		var++;				\
++		if (var == MAX_SKB_FRAGS)	\
++			var = 0;		\
++	} while (0)
++
+ BPF_CALL_4(bpf_msg_pull_data,
+ 	   struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+-	unsigned int len = 0, offset = 0, copy = 0;
++	unsigned int len = 0, offset = 0, copy = 0, poffset = 0;
++	int bytes = end - start, bytes_sg_total;
+ 	struct scatterlist *sg = msg->sg_data;
+ 	int first_sg, last_sg, i, shift;
+ 	unsigned char *p, *to, *from;
+-	int bytes = end - start;
+ 	struct page *page;
+ 
+ 	if (unlikely(flags || end <= start))
+@@ -2289,21 +2296,22 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	i = msg->sg_start;
+ 	do {
+ 		len = sg[i].length;
+-		offset += len;
+ 		if (start < offset + len)
+ 			break;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		offset += len;
++		sk_msg_iter_var(i);
+ 	} while (i != msg->sg_end);
+ 
+ 	if (unlikely(start >= offset + len))
+ 		return -EINVAL;
+ 
+-	if (!msg->sg_copy[i] && bytes <= len)
+-		goto out;
+-
+ 	first_sg = i;
++	/* The start may point into the sg element so we need to also
++	 * account for the headroom.
++	 */
++	bytes_sg_total = start - offset + bytes;
++	if (!msg->sg_copy[i] && bytes_sg_total <= len)
++		goto out;
+ 
+ 	/* At this point we need to linearize multiple scatterlist
+ 	 * elements or a single shared page. Either way we need to
+@@ -2317,37 +2325,32 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 */
+ 	do {
+ 		copy += sg[i].length;
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
+-		if (bytes < copy)
++		sk_msg_iter_var(i);
++		if (bytes_sg_total <= copy)
+ 			break;
+ 	} while (i != msg->sg_end);
+ 	last_sg = i;
+ 
+-	if (unlikely(copy < end - start))
++	if (unlikely(bytes_sg_total > copy))
+ 		return -EINVAL;
+ 
+ 	page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
+ 	if (unlikely(!page))
+ 		return -ENOMEM;
+ 	p = page_address(page);
+-	offset = 0;
+ 
+ 	i = first_sg;
+ 	do {
+ 		from = sg_virt(&sg[i]);
+ 		len = sg[i].length;
+-		to = p + offset;
++		to = p + poffset;
+ 
+ 		memcpy(to, from, len);
+-		offset += len;
++		poffset += len;
+ 		sg[i].length = 0;
+ 		put_page(sg_page(&sg[i]));
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (i != last_sg);
+ 
+ 	sg[first_sg].length = copy;
+@@ -2357,11 +2360,15 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 	 * had a single entry though we can just replace it and
+ 	 * be done. Otherwise walk the ring and shift the entries.
+ 	 */
+-	shift = last_sg - first_sg - 1;
++	WARN_ON_ONCE(last_sg == first_sg);
++	shift = last_sg > first_sg ?
++		last_sg - first_sg - 1 :
++		MAX_SKB_FRAGS - first_sg + last_sg - 1;
+ 	if (!shift)
+ 		goto out;
+ 
+-	i = first_sg + 1;
++	i = first_sg;
++	sk_msg_iter_var(i);
+ 	do {
+ 		int move_from;
+ 
+@@ -2378,15 +2385,13 @@ BPF_CALL_4(bpf_msg_pull_data,
+ 		sg[move_from].page_link = 0;
+ 		sg[move_from].offset = 0;
+ 
+-		i++;
+-		if (i == MAX_SKB_FRAGS)
+-			i = 0;
++		sk_msg_iter_var(i);
+ 	} while (1);
+ 	msg->sg_end -= shift;
+ 	if (msg->sg_end < 0)
+ 		msg->sg_end += MAX_SKB_FRAGS;
+ out:
+-	msg->data = sg_virt(&sg[i]) + start - offset;
++	msg->data = sg_virt(&sg[first_sg]) + start - offset;
+ 	msg->data_end = msg->data + bytes;
+ 
+ 	return 0;
+diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig
+index bbfc356cb1b5..d7ecae5e93ea 100644
+--- a/net/ipv4/netfilter/Kconfig
++++ b/net/ipv4/netfilter/Kconfig
+@@ -122,6 +122,10 @@ config NF_NAT_IPV4
+ 
+ if NF_NAT_IPV4
+ 
++config NF_NAT_MASQUERADE_IPV4
++	bool
++
++if NF_TABLES
+ config NFT_CHAIN_NAT_IPV4
+ 	depends on NF_TABLES_IPV4
+ 	tristate "IPv4 nf_tables nat chain support"
+@@ -131,9 +135,6 @@ config NFT_CHAIN_NAT_IPV4
+ 	  packet transformations such as the source, destination address and
+ 	  source and destination ports.
+ 
+-config NF_NAT_MASQUERADE_IPV4
+-	bool
+-
+ config NFT_MASQ_IPV4
+ 	tristate "IPv4 masquerading support for nf_tables"
+ 	depends on NF_TABLES_IPV4
+@@ -151,6 +152,7 @@ config NFT_REDIR_IPV4
+ 	help
+ 	  This is the expression that provides IPv4 redirect support for
+ 	  nf_tables.
++endif # NF_TABLES
+ 
+ config NF_NAT_SNMP_BASIC
+ 	tristate "Basic SNMP-ALG support"
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index 6449a1c2283b..f0f5fedb8caa 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -947,8 +947,8 @@ static void ieee80211_rx_mgmt_deauth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	if (len < IEEE80211_DEAUTH_FRAME_LEN)
+ 		return;
+ 
+-	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM BSSID=%pM (reason: %d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, reason);
++	ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (reason: %d)\n", mgmt->bssid, reason);
+ 	sta_info_destroy_addr(sdata, mgmt->sa);
+ }
+ 
+@@ -966,9 +966,9 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
+ 	auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg);
+ 	auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
+ 
+-	ibss_dbg(sdata,
+-		 "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction);
++	ibss_dbg(sdata, "RX Auth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (auth_transaction=%d)\n",
++		 mgmt->bssid, auth_transaction);
+ 
+ 	if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1)
+ 		return;
+@@ -1175,10 +1175,10 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata,
+ 		rx_timestamp = drv_get_tsf(local, sdata);
+ 	}
+ 
+-	ibss_dbg(sdata,
+-		 "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n",
++	ibss_dbg(sdata, "RX beacon SA=%pM BSSID=%pM TSF=0x%llx\n",
+ 		 mgmt->sa, mgmt->bssid,
+-		 (unsigned long long)rx_timestamp,
++		 (unsigned long long)rx_timestamp);
++	ibss_dbg(sdata, "\tBCN=0x%llx diff=%lld @%lu\n",
+ 		 (unsigned long long)beacon_timestamp,
+ 		 (unsigned long long)(rx_timestamp - beacon_timestamp),
+ 		 jiffies);
+@@ -1537,9 +1537,9 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata,
+ 
+ 	tx_last_beacon = drv_tx_last_beacon(local);
+ 
+-	ibss_dbg(sdata,
+-		 "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n",
+-		 mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon);
++	ibss_dbg(sdata, "RX ProbeReq SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++	ibss_dbg(sdata, "\tBSSID=%pM (tx_last_beacon=%d)\n",
++		 mgmt->bssid, tx_last_beacon);
+ 
+ 	if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da))
+ 		return;
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index fb73451ed85e..66cbddd65b47 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -255,8 +255,27 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 
+ 	flush_work(&local->radar_detected_work);
+ 	rtnl_lock();
+-	list_for_each_entry(sdata, &local->interfaces, list)
++	list_for_each_entry(sdata, &local->interfaces, list) {
++		/*
++		 * XXX: there may be more work for other vif types and even
++		 * for station mode: a good thing would be to run most of
++		 * the iface type's dependent _stop (ieee80211_mg_stop,
++		 * ieee80211_ibss_stop) etc...
++		 * For now, fix only the specific bug that was seen: race
++		 * between csa_connection_drop_work and us.
++		 */
++		if (sdata->vif.type == NL80211_IFTYPE_STATION) {
++			/*
++			 * This worker is scheduled from the iface worker that
++			 * runs on mac80211's workqueue, so we can't be
++			 * scheduling this worker after the cancel right here.
++			 * The exception is ieee80211_chswitch_done.
++			 * Then we can have a race...
++			 */
++			cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work);
++		}
+ 		flush_delayed_work(&sdata->dec_tailroom_needed_wk);
++	}
+ 	ieee80211_scan_cancel(local);
+ 
+ 	/* make sure any new ROC will consider local->in_reconfig */
+@@ -470,10 +489,7 @@ static const struct ieee80211_vht_cap mac80211_vht_capa_mod_mask = {
+ 		cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_80 |
+ 			    IEEE80211_VHT_CAP_SHORT_GI_160 |
+-			    IEEE80211_VHT_CAP_RXSTBC_1 |
+-			    IEEE80211_VHT_CAP_RXSTBC_2 |
+-			    IEEE80211_VHT_CAP_RXSTBC_3 |
+-			    IEEE80211_VHT_CAP_RXSTBC_4 |
++			    IEEE80211_VHT_CAP_RXSTBC_MASK |
+ 			    IEEE80211_VHT_CAP_TXSTBC |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
+ 			    IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+@@ -1182,6 +1198,7 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	unregister_inet6addr_notifier(&local->ifa6_notifier);
+ #endif
++	ieee80211_txq_teardown_flows(local);
+ 
+ 	rtnl_lock();
+ 
+@@ -1210,7 +1227,6 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ 	skb_queue_purge(&local->skb_queue);
+ 	skb_queue_purge(&local->skb_queue_unreliable);
+ 	skb_queue_purge(&local->skb_queue_tdls_chsw);
+-	ieee80211_txq_teardown_flows(local);
+ 
+ 	destroy_workqueue(local->workqueue);
+ 	wiphy_unregister(local->hw.wiphy);
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 35ad3983ae4b..daf9db3c8f24 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -572,6 +572,10 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata,
+ 		forward = false;
+ 		reply = true;
+ 		target_metric = 0;
++
++		if (SN_GT(target_sn, ifmsh->sn))
++			ifmsh->sn = target_sn;
++
+ 		if (time_after(jiffies, ifmsh->last_sn_update +
+ 					net_traversal_jiffies(sdata)) ||
+ 		    time_before(jiffies, ifmsh->last_sn_update)) {
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index a59187c016e0..b046bf95eb3c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -978,6 +978,10 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 	 */
+ 
+ 	if (sdata->reserved_chanctx) {
++		struct ieee80211_supported_band *sband = NULL;
++		struct sta_info *mgd_sta = NULL;
++		enum ieee80211_sta_rx_bandwidth bw = IEEE80211_STA_RX_BW_20;
++
+ 		/*
+ 		 * with multi-vif csa driver may call ieee80211_csa_finish()
+ 		 * many times while waiting for other interfaces to use their
+@@ -986,6 +990,48 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 		if (sdata->reserved_ready)
+ 			goto out;
+ 
++		if (sdata->vif.bss_conf.chandef.width !=
++		    sdata->csa_chandef.width) {
++			/*
++			 * For managed interface, we need to also update the AP
++			 * station bandwidth and align the rate scale algorithm
++			 * on the bandwidth change. Here we only consider the
++			 * bandwidth of the new channel definition (as channel
++			 * switch flow does not have the full HT/VHT/HE
++			 * information), assuming that if additional changes are
++			 * required they would be done as part of the processing
++			 * of the next beacon from the AP.
++			 */
++			switch (sdata->csa_chandef.width) {
++			case NL80211_CHAN_WIDTH_20_NOHT:
++			case NL80211_CHAN_WIDTH_20:
++			default:
++				bw = IEEE80211_STA_RX_BW_20;
++				break;
++			case NL80211_CHAN_WIDTH_40:
++				bw = IEEE80211_STA_RX_BW_40;
++				break;
++			case NL80211_CHAN_WIDTH_80:
++				bw = IEEE80211_STA_RX_BW_80;
++				break;
++			case NL80211_CHAN_WIDTH_80P80:
++			case NL80211_CHAN_WIDTH_160:
++				bw = IEEE80211_STA_RX_BW_160;
++				break;
++			}
++
++			mgd_sta = sta_info_get(sdata, ifmgd->bssid);
++			sband =
++				local->hw.wiphy->bands[sdata->csa_chandef.chan->band];
++		}
++
++		if (sdata->vif.bss_conf.chandef.width >
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		ret = ieee80211_vif_use_reserved_context(sdata);
+ 		if (ret) {
+ 			sdata_info(sdata,
+@@ -996,6 +1042,13 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 			goto out;
+ 		}
+ 
++		if (sdata->vif.bss_conf.chandef.width <
++		    sdata->csa_chandef.width) {
++			mgd_sta->sta.bandwidth = bw;
++			rate_control_rate_update(local, sband, mgd_sta,
++						 IEEE80211_RC_BW_CHANGED);
++		}
++
+ 		goto out;
+ 	}
+ 
+@@ -1217,6 +1270,16 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata,
+ 					 cbss->beacon_interval));
+ 	return;
+  drop_connection:
++	/*
++	 * This is just so that the disconnect flow will know that
++	 * we were trying to switch channel and failed. In case the
++	 * mode is 1 (we are not allowed to Tx), we will know not to
++	 * send a deauthentication frame. Those two fields will be
++	 * reset when the disconnection worker runs.
++	 */
++	sdata->vif.csa_active = true;
++	sdata->csa_block_tx = csa_ie.mode;
++
+ 	ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work);
+ 	mutex_unlock(&local->chanctx_mtx);
+ 	mutex_unlock(&local->mtx);
+@@ -2400,6 +2463,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ 	u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
++	bool tx;
+ 
+ 	sdata_lock(sdata);
+ 	if (!ifmgd->associated) {
+@@ -2407,6 +2471,8 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 		return;
+ 	}
+ 
++	tx = !sdata->csa_block_tx;
++
+ 	/* AP is probably out of range (or not reachable for another reason) so
+ 	 * remove the bss struct for that AP.
+ 	 */
+@@ -2414,7 +2480,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 
+ 	ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ 			       WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
+-			       true, frame_buf);
++			       tx, frame_buf);
+ 	mutex_lock(&local->mtx);
+ 	sdata->vif.csa_active = false;
+ 	ifmgd->csa_waiting_bcn = false;
+@@ -2425,7 +2491,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+ 	}
+ 	mutex_unlock(&local->mtx);
+ 
+-	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true,
++	ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx,
+ 				    WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY);
+ 
+ 	sdata_unlock(sdata);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index fa1f1e63a264..9b3b069e418a 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3073,27 +3073,18 @@ void ieee80211_clear_fast_xmit(struct sta_info *sta)
+ }
+ 
+ static bool ieee80211_amsdu_realloc_pad(struct ieee80211_local *local,
+-					struct sk_buff *skb, int headroom,
+-					int *subframe_len)
++					struct sk_buff *skb, int headroom)
+ {
+-	int amsdu_len = *subframe_len + sizeof(struct ethhdr);
+-	int padding = (4 - amsdu_len) & 3;
+-
+-	if (skb_headroom(skb) < headroom || skb_tailroom(skb) < padding) {
++	if (skb_headroom(skb) < headroom) {
+ 		I802_DEBUG_INC(local->tx_expand_skb_head);
+ 
+-		if (pskb_expand_head(skb, headroom, padding, GFP_ATOMIC)) {
++		if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) {
+ 			wiphy_debug(local->hw.wiphy,
+ 				    "failed to reallocate TX buffer\n");
+ 			return false;
+ 		}
+ 	}
+ 
+-	if (padding) {
+-		*subframe_len += padding;
+-		skb_put_zero(skb, padding);
+-	}
+-
+ 	return true;
+ }
+ 
+@@ -3117,8 +3108,7 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata,
+ 	if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ 		return true;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr),
+-					 &subframe_len))
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
+ 		return false;
+ 
+ 	data = skb_push(skb, sizeof(*amsdu_hdr));
+@@ -3184,7 +3174,8 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	void *data;
+ 	bool ret = false;
+ 	unsigned int orig_len;
+-	int n = 1, nfrags;
++	int n = 2, nfrags, pad = 0;
++	u16 hdrlen;
+ 
+ 	if (!ieee80211_hw_check(&local->hw, TX_AMSDU))
+ 		return false;
+@@ -3217,9 +3208,6 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (skb->len + head->len > max_amsdu_len)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+-		goto out;
+-
+ 	nfrags = 1 + skb_shinfo(skb)->nr_frags;
+ 	nfrags += 1 + skb_shinfo(head)->nr_frags;
+ 	frag_tail = &skb_shinfo(head)->frag_list;
+@@ -3235,10 +3223,24 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (max_frags && nfrags > max_frags)
+ 		goto out;
+ 
+-	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + 2,
+-					 &subframe_len))
++	if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+ 		goto out;
+ 
++	/*
++	 * Pad out the previous subframe to a multiple of 4 by adding the
++	 * padding to the next one, that's being added. Note that head->len
++	 * is the length of the full A-MSDU, but that works since each time
++	 * we add a new subframe we pad out the previous one to a multiple
++	 * of 4 and thus it no longer matters in the next round.
++	 */
++	hdrlen = fast_tx->hdr_len - sizeof(rfc1042_header);
++	if ((head->len - hdrlen) & 3)
++		pad = 4 - ((head->len - hdrlen) & 3);
++
++	if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) +
++						     2 + pad))
++		goto out_recalc;
++
+ 	ret = true;
+ 	data = skb_push(skb, ETH_ALEN + 2);
+ 	memmove(data, data + ETH_ALEN + 2, 2 * ETH_ALEN);
+@@ -3248,15 +3250,19 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	memcpy(data, &len, 2);
+ 	memcpy(data + 2, rfc1042_header, sizeof(rfc1042_header));
+ 
++	memset(skb_push(skb, pad), 0, pad);
++
+ 	head->len += skb->len;
+ 	head->data_len += skb->len;
+ 	*frag_tail = skb;
+ 
+-	flow->backlog += head->len - orig_len;
+-	tin->backlog_bytes += head->len - orig_len;
+-
+-	fq_recalc_backlog(fq, tin, flow);
++out_recalc:
++	if (head->len != orig_len) {
++		flow->backlog += head->len - orig_len;
++		tin->backlog_bytes += head->len - orig_len;
+ 
++		fq_recalc_backlog(fq, tin, flow);
++	}
+ out:
+ 	spin_unlock_bh(&fq->lock);
+ 
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index d02fbfec3783..93b5bb849ad7 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1120,7 +1120,7 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ {
+ 	struct ieee80211_chanctx_conf *chanctx_conf;
+ 	const struct ieee80211_reg_rule *rrule;
+-	struct ieee80211_wmm_ac *wmm_ac;
++	const struct ieee80211_wmm_ac *wmm_ac;
+ 	u16 center_freq = 0;
+ 
+ 	if (sdata->vif.type != NL80211_IFTYPE_AP &&
+@@ -1139,20 +1139,19 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
+ 
+ 	rrule = freq_reg_info(sdata->wdev.wiphy, MHZ_TO_KHZ(center_freq));
+ 
+-	if (IS_ERR_OR_NULL(rrule) || !rrule->wmm_rule) {
++	if (IS_ERR_OR_NULL(rrule) || !rrule->has_wmm) {
+ 		rcu_read_unlock();
+ 		return;
+ 	}
+ 
+ 	if (sdata->vif.type == NL80211_IFTYPE_AP)
+-		wmm_ac = &rrule->wmm_rule->ap[ac];
++		wmm_ac = &rrule->wmm_rule.ap[ac];
+ 	else
+-		wmm_ac = &rrule->wmm_rule->client[ac];
++		wmm_ac = &rrule->wmm_rule.client[ac];
+ 	qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min);
+ 	qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max);
+ 	qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn);
+-	qparam->txop = !qparam->txop ? wmm_ac->cot / 32 :
+-		min_t(u16, qparam->txop, wmm_ac->cot / 32);
++	qparam->txop = min_t(u16, qparam->txop, wmm_ac->cot / 32);
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index f0a1c536ef15..e6d5c87f0d96 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -740,13 +740,13 @@ config NETFILTER_XT_TARGET_CHECKSUM
+ 	depends on NETFILTER_ADVANCED
+ 	---help---
+ 	  This option adds a `CHECKSUM' target, which can be used in the iptables mangle
+-	  table.
++	  table to work around buggy DHCP clients in virtualized environments.
+ 
+-	  You can use this target to compute and fill in the checksum in
+-	  a packet that lacks a checksum.  This is particularly useful,
+-	  if you need to work around old applications such as dhcp clients,
+-	  that do not work well with checksum offloads, but don't want to disable
+-	  checksum offload in your device.
++	  Some old DHCP clients drop packets because they are not aware
++	  that the checksum would normally be offloaded to hardware and
++	  thus should be considered valid.
++	  This target can be used to fill in the checksum using iptables
++	  when such packets are sent via a virtual network device.
+ 
+ 	  To compile it as a module, choose M here.  If unsure, say N.
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f5745e4c6513..77d690a87144 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4582,6 +4582,7 @@ static int nft_flush_set(const struct nft_ctx *ctx,
+ 	}
+ 	set->ndeact++;
+ 
++	nft_set_elem_deactivate(ctx->net, set, elem);
+ 	nft_trans_elem_set(trans) = set;
+ 	nft_trans_elem(trans) = *elem;
+ 	list_add_tail(&trans->list, &ctx->net->nft.commit_list);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index ea4ba551abb2..d33094f4ec41 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -233,6 +233,7 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
+ 	int err;
+ 
+ 	if (verdict == NF_ACCEPT ||
++	    verdict == NF_REPEAT ||
+ 	    verdict == NF_STOP) {
+ 		rcu_read_lock();
+ 		ct_hook = rcu_dereference(nf_ct_hook);
+diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c
+index 9f4151ec3e06..6c7aa6a0a0d2 100644
+--- a/net/netfilter/xt_CHECKSUM.c
++++ b/net/netfilter/xt_CHECKSUM.c
+@@ -16,6 +16,9 @@
+ #include <linux/netfilter/x_tables.h>
+ #include <linux/netfilter/xt_CHECKSUM.h>
+ 
++#include <linux/netfilter_ipv4/ip_tables.h>
++#include <linux/netfilter_ipv6/ip6_tables.h>
++
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Michael S. Tsirkin <mst@redhat.com>");
+ MODULE_DESCRIPTION("Xtables: checksum modification");
+@@ -25,7 +28,7 @@ MODULE_ALIAS("ip6t_CHECKSUM");
+ static unsigned int
+ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ {
+-	if (skb->ip_summed == CHECKSUM_PARTIAL)
++	if (skb->ip_summed == CHECKSUM_PARTIAL && !skb_is_gso(skb))
+ 		skb_checksum_help(skb);
+ 
+ 	return XT_CONTINUE;
+@@ -34,6 +37,8 @@ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par)
+ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ {
+ 	const struct xt_CHECKSUM_info *einfo = par->targinfo;
++	const struct ip6t_ip6 *i6 = par->entryinfo;
++	const struct ipt_ip *i4 = par->entryinfo;
+ 
+ 	if (einfo->operation & ~XT_CHECKSUM_OP_FILL) {
+ 		pr_info_ratelimited("unsupported CHECKSUM operation %x\n",
+@@ -43,6 +48,21 @@ static int checksum_tg_check(const struct xt_tgchk_param *par)
+ 	if (!einfo->operation)
+ 		return -EINVAL;
+ 
++	switch (par->family) {
++	case NFPROTO_IPV4:
++		if (i4->proto == IPPROTO_UDP &&
++		    (i4->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	case NFPROTO_IPV6:
++		if ((i6->flags & IP6T_F_PROTO) &&
++		    i6->proto == IPPROTO_UDP &&
++		    (i6->invflags & XT_INV_PROTO) == 0)
++			return 0;
++		break;
++	}
++
++	pr_warn_once("CHECKSUM should be avoided.  If really needed, restrict with \"-p udp\" and only use in OUTPUT\n");
+ 	return 0;
+ }
+ 
+diff --git a/net/netfilter/xt_cluster.c b/net/netfilter/xt_cluster.c
+index dfbdbb2fc0ed..51d0c257e7a5 100644
+--- a/net/netfilter/xt_cluster.c
++++ b/net/netfilter/xt_cluster.c
+@@ -125,6 +125,7 @@ xt_cluster_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ {
+ 	struct xt_cluster_match_info *info = par->matchinfo;
++	int ret;
+ 
+ 	if (info->total_nodes > XT_CLUSTER_NODES_MAX) {
+ 		pr_info_ratelimited("you have exceeded the maximum number of cluster nodes (%u > %u)\n",
+@@ -135,7 +136,17 @@ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par)
+ 		pr_info_ratelimited("node mask cannot exceed total number of nodes\n");
+ 		return -EDOM;
+ 	}
+-	return 0;
++
++	ret = nf_ct_netns_get(par->net, par->family);
++	if (ret < 0)
++		pr_info_ratelimited("cannot load conntrack support for proto=%u\n",
++				    par->family);
++	return ret;
++}
++
++static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par)
++{
++	nf_ct_netns_put(par->net, par->family);
+ }
+ 
+ static struct xt_match xt_cluster_match __read_mostly = {
+@@ -144,6 +155,7 @@ static struct xt_match xt_cluster_match __read_mostly = {
+ 	.match		= xt_cluster_mt,
+ 	.checkentry	= xt_cluster_mt_checkentry,
+ 	.matchsize	= sizeof(struct xt_cluster_match_info),
++	.destroy	= xt_cluster_mt_destroy,
+ 	.me		= THIS_MODULE,
+ };
+ 
+diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
+index 9b16402f29af..3e7d259e5d8d 100644
+--- a/net/netfilter/xt_hashlimit.c
++++ b/net/netfilter/xt_hashlimit.c
+@@ -1057,7 +1057,7 @@ static struct xt_match hashlimit_mt_reg[] __read_mostly = {
+ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 	__acquires(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket;
+ 
+ 	spin_lock_bh(&htable->lock);
+@@ -1074,7 +1074,7 @@ static void *dl_seq_start(struct seq_file *s, loff_t *pos)
+ 
+ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	*pos = ++(*bucket);
+@@ -1088,7 +1088,7 @@ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ static void dl_seq_stop(struct seq_file *s, void *v)
+ 	__releases(htable->lock)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 
+ 	if (!IS_ERR(bucket))
+@@ -1130,7 +1130,7 @@ static void dl_seq_print(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1145,7 +1145,7 @@ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ 			       struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1160,7 +1160,7 @@ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family,
+ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 			    struct seq_file *s)
+ {
+-	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file));
+ 
+ 	spin_lock(&ent->lock);
+ 	/* recalculate to show accurate numbers */
+@@ -1174,7 +1174,7 @@ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family,
+ 
+ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = (unsigned int *)v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1188,7 +1188,7 @@ static int dl_seq_show_v2(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+@@ -1202,7 +1202,7 @@ static int dl_seq_show_v1(struct seq_file *s, void *v)
+ 
+ static int dl_seq_show(struct seq_file *s, void *v)
+ {
+-	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private));
++	struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file));
+ 	unsigned int *bucket = v;
+ 	struct dsthash_ent *ent;
+ 
+diff --git a/net/tipc/diag.c b/net/tipc/diag.c
+index aaabb0b776dd..73137f4aeb68 100644
+--- a/net/tipc/diag.c
++++ b/net/tipc/diag.c
+@@ -84,7 +84,9 @@ static int tipc_sock_diag_handler_dump(struct sk_buff *skb,
+ 
+ 	if (h->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = tipc_dump_start,
+ 			.dump = tipc_diag_dump,
++			.done = tipc_dump_done,
+ 		};
+ 		netlink_dump_start(net->diag_nlsk, skb, h, &c);
+ 		return 0;
+diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c
+index 6ff2254088f6..99ee419210ba 100644
+--- a/net/tipc/netlink.c
++++ b/net/tipc/netlink.c
+@@ -167,7 +167,9 @@ static const struct genl_ops tipc_genl_v2_ops[] = {
+ 	},
+ 	{
+ 		.cmd	= TIPC_NL_SOCK_GET,
++		.start = tipc_dump_start,
+ 		.dumpit	= tipc_nl_sk_dump,
++		.done	= tipc_dump_done,
+ 		.policy = tipc_nl_policy,
+ 	},
+ 	{
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index ac8ca238c541..bdb4a9a5a83a 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,45 +3233,69 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk))
+ {
+-	struct net *net = sock_net(skb->sk);
+-	struct tipc_net *tn = tipc_net(net);
+-	const struct bucket_table *tbl;
+-	u32 prev_portid = cb->args[1];
+-	u32 tbl_id = cb->args[0];
+-	struct rhash_head *pos;
++	struct rhashtable_iter *iter = (void *)cb->args[0];
+ 	struct tipc_sock *tsk;
+ 	int err;
+ 
+-	rcu_read_lock();
+-	tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+-	for (; tbl_id < tbl->size; tbl_id++) {
+-		rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) {
+-			spin_lock_bh(&tsk->sk.sk_lock.slock);
+-			if (prev_portid && prev_portid != tsk->portid) {
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
++	rhashtable_walk_start(iter);
++	while ((tsk = rhashtable_walk_next(iter)) != NULL) {
++		if (IS_ERR(tsk)) {
++			err = PTR_ERR(tsk);
++			if (err == -EAGAIN) {
++				err = 0;
+ 				continue;
+ 			}
++			break;
++		}
+ 
+-			err = skb_handler(skb, cb, tsk);
+-			if (err) {
+-				prev_portid = tsk->portid;
+-				spin_unlock_bh(&tsk->sk.sk_lock.slock);
+-				goto out;
+-			}
+-
+-			prev_portid = 0;
+-			spin_unlock_bh(&tsk->sk.sk_lock.slock);
++		sock_hold(&tsk->sk);
++		rhashtable_walk_stop(iter);
++		lock_sock(&tsk->sk);
++		err = skb_handler(skb, cb, tsk);
++		if (err) {
++			release_sock(&tsk->sk);
++			sock_put(&tsk->sk);
++			goto out;
+ 		}
++		release_sock(&tsk->sk);
++		rhashtable_walk_start(iter);
++		sock_put(&tsk->sk);
+ 	}
++	rhashtable_walk_stop(iter);
+ out:
+-	rcu_read_unlock();
+-	cb->args[0] = tbl_id;
+-	cb->args[1] = prev_portid;
+-
+ 	return skb->len;
+ }
+ EXPORT_SYMBOL(tipc_nl_sk_walk);
+ 
++int tipc_dump_start(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *iter = (void *)cb->args[0];
++	struct net *net = sock_net(cb->skb->sk);
++	struct tipc_net *tn = tipc_net(net);
++
++	if (!iter) {
++		iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++		if (!iter)
++			return -ENOMEM;
++
++		cb->args[0] = (long)iter;
++	}
++
++	rhashtable_walk_enter(&tn->sk_rht, iter);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int tipc_dump_done(struct netlink_callback *cb)
++{
++	struct rhashtable_iter *hti = (void *)cb->args[0];
++
++	rhashtable_walk_exit(hti);
++	kfree(hti);
++	return 0;
++}
++EXPORT_SYMBOL(tipc_dump_done);
++
+ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
+ 			   struct tipc_sock *tsk, u32 sk_filter_state,
+ 			   u64 (*tipc_diag_gen_cookie)(struct sock *sk))
+diff --git a/net/tipc/socket.h b/net/tipc/socket.h
+index aff9b2ae5a1f..d43032e26532 100644
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -68,4 +68,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb,
+ 		    int (*skb_handler)(struct sk_buff *skb,
+ 				       struct netlink_callback *cb,
+ 				       struct tipc_sock *tsk));
++int tipc_dump_start(struct netlink_callback *cb);
++int tipc_dump_done(struct netlink_callback *cb);
+ #endif
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 80bc986c79e5..733ccf867972 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -667,13 +667,13 @@ static int nl80211_msg_put_wmm_rules(struct sk_buff *msg,
+ 			goto nla_put_failure;
+ 
+ 		if (nla_put_u16(msg, NL80211_WMMR_CW_MIN,
+-				rule->wmm_rule->client[j].cw_min) ||
++				rule->wmm_rule.client[j].cw_min) ||
+ 		    nla_put_u16(msg, NL80211_WMMR_CW_MAX,
+-				rule->wmm_rule->client[j].cw_max) ||
++				rule->wmm_rule.client[j].cw_max) ||
+ 		    nla_put_u8(msg, NL80211_WMMR_AIFSN,
+-			       rule->wmm_rule->client[j].aifsn) ||
+-		    nla_put_u8(msg, NL80211_WMMR_TXOP,
+-			       rule->wmm_rule->client[j].cot))
++			       rule->wmm_rule.client[j].aifsn) ||
++		    nla_put_u16(msg, NL80211_WMMR_TXOP,
++			        rule->wmm_rule.client[j].cot))
+ 			goto nla_put_failure;
+ 
+ 		nla_nest_end(msg, nl_wmm_rule);
+@@ -764,9 +764,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
+ 
+ 	if (large) {
+ 		const struct ieee80211_reg_rule *rule =
+-			freq_reg_info(wiphy, chan->center_freq);
++			freq_reg_info(wiphy, MHZ_TO_KHZ(chan->center_freq));
+ 
+-		if (!IS_ERR(rule) && rule->wmm_rule) {
++		if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) {
+ 			if (nl80211_msg_put_wmm_rules(msg, rule))
+ 				goto nla_put_failure;
+ 		}
+@@ -12099,6 +12099,7 @@ static int nl80211_update_ft_ies(struct sk_buff *skb, struct genl_info *info)
+ 		return -EOPNOTSUPP;
+ 
+ 	if (!info->attrs[NL80211_ATTR_MDID] ||
++	    !info->attrs[NL80211_ATTR_IE] ||
+ 	    !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE]))
+ 		return -EINVAL;
+ 
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 4fc66a117b7d..2f702adf2912 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -425,36 +425,23 @@ static const struct ieee80211_regdomain *
+ reg_copy_regd(const struct ieee80211_regdomain *src_regd)
+ {
+ 	struct ieee80211_regdomain *regd;
+-	int size_of_regd, size_of_wmms;
++	int size_of_regd;
+ 	unsigned int i;
+-	struct ieee80211_wmm_rule *d_wmm, *s_wmm;
+ 
+ 	size_of_regd =
+ 		sizeof(struct ieee80211_regdomain) +
+ 		src_regd->n_reg_rules * sizeof(struct ieee80211_reg_rule);
+-	size_of_wmms = src_regd->n_wmm_rules *
+-		sizeof(struct ieee80211_wmm_rule);
+ 
+-	regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++	regd = kzalloc(size_of_regd, GFP_KERNEL);
+ 	if (!regd)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	memcpy(regd, src_regd, sizeof(struct ieee80211_regdomain));
+ 
+-	d_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-	s_wmm = (struct ieee80211_wmm_rule *)((u8 *)src_regd + size_of_regd);
+-	memcpy(d_wmm, s_wmm, size_of_wmms);
+-
+-	for (i = 0; i < src_regd->n_reg_rules; i++) {
++	for (i = 0; i < src_regd->n_reg_rules; i++)
+ 		memcpy(&regd->reg_rules[i], &src_regd->reg_rules[i],
+ 		       sizeof(struct ieee80211_reg_rule));
+-		if (!src_regd->reg_rules[i].wmm_rule)
+-			continue;
+ 
+-		regd->reg_rules[i].wmm_rule = d_wmm +
+-			(src_regd->reg_rules[i].wmm_rule - s_wmm) /
+-			sizeof(struct ieee80211_wmm_rule);
+-	}
+ 	return regd;
+ }
+ 
+@@ -860,9 +847,10 @@ static bool valid_regdb(const u8 *data, unsigned int size)
+ 	return true;
+ }
+ 
+-static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
++static void set_wmm_rule(struct ieee80211_reg_rule *rrule,
+ 			 struct fwdb_wmm_rule *wmm)
+ {
++	struct ieee80211_wmm_rule *rule = &rrule->wmm_rule;
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+@@ -876,11 +864,13 @@ static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
+ 		rule->ap[i].aifsn = wmm->ap[i].aifsn;
+ 		rule->ap[i].cot = 1000 * be16_to_cpu(wmm->ap[i].cot);
+ 	}
++
++	rrule->has_wmm = true;
+ }
+ 
+ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			     const struct fwdb_country *country, int freq,
+-			     u32 *dbptr, struct ieee80211_wmm_rule *rule)
++			     struct ieee80211_reg_rule *rule)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+@@ -901,8 +891,6 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 			wmm_ptr = be16_to_cpu(rrule->wmm_ptr) << 2;
+ 			wmm = (void *)((u8 *)db + wmm_ptr);
+ 			set_wmm_rule(rule, wmm);
+-			if (dbptr)
+-				*dbptr = wmm_ptr;
+ 			return 0;
+ 		}
+ 	}
+@@ -910,8 +898,7 @@ static int __regdb_query_wmm(const struct fwdb_header *db,
+ 	return -ENODATA;
+ }
+ 
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+-			struct ieee80211_wmm_rule *rule)
++int reg_query_regdb_wmm(char *alpha2, int freq, struct ieee80211_reg_rule *rule)
+ {
+ 	const struct fwdb_header *hdr = regdb;
+ 	const struct fwdb_country *country;
+@@ -925,8 +912,7 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ 	country = &hdr->country[0];
+ 	while (country->coll_ptr) {
+ 		if (alpha2_equal(alpha2, country->alpha2))
+-			return __regdb_query_wmm(regdb, country, freq, dbptr,
+-						 rule);
++			return __regdb_query_wmm(regdb, country, freq, rule);
+ 
+ 		country++;
+ 	}
+@@ -935,32 +921,13 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+ }
+ EXPORT_SYMBOL(reg_query_regdb_wmm);
+ 
+-struct wmm_ptrs {
+-	struct ieee80211_wmm_rule *rule;
+-	u32 ptr;
+-};
+-
+-static struct ieee80211_wmm_rule *find_wmm_ptr(struct wmm_ptrs *wmm_ptrs,
+-					       u32 wmm_ptr, int n_wmms)
+-{
+-	int i;
+-
+-	for (i = 0; i < n_wmms; i++) {
+-		if (wmm_ptrs[i].ptr == wmm_ptr)
+-			return wmm_ptrs[i].rule;
+-	}
+-	return NULL;
+-}
+-
+ static int regdb_query_country(const struct fwdb_header *db,
+ 			       const struct fwdb_country *country)
+ {
+ 	unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ 	struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+ 	struct ieee80211_regdomain *regdom;
+-	struct ieee80211_regdomain *tmp_rd;
+-	unsigned int size_of_regd, i, n_wmms = 0;
+-	struct wmm_ptrs *wmm_ptrs;
++	unsigned int size_of_regd, i;
+ 
+ 	size_of_regd = sizeof(struct ieee80211_regdomain) +
+ 		coll->n_rules * sizeof(struct ieee80211_reg_rule);
+@@ -969,12 +936,6 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 	if (!regdom)
+ 		return -ENOMEM;
+ 
+-	wmm_ptrs = kcalloc(coll->n_rules, sizeof(*wmm_ptrs), GFP_KERNEL);
+-	if (!wmm_ptrs) {
+-		kfree(regdom);
+-		return -ENOMEM;
+-	}
+-
+ 	regdom->n_reg_rules = coll->n_rules;
+ 	regdom->alpha2[0] = country->alpha2[0];
+ 	regdom->alpha2[1] = country->alpha2[1];
+@@ -1013,37 +974,11 @@ static int regdb_query_country(const struct fwdb_header *db,
+ 				1000 * be16_to_cpu(rule->cac_timeout);
+ 		if (rule->len >= offsetofend(struct fwdb_rule, wmm_ptr)) {
+ 			u32 wmm_ptr = be16_to_cpu(rule->wmm_ptr) << 2;
+-			struct ieee80211_wmm_rule *wmm_pos =
+-				find_wmm_ptr(wmm_ptrs, wmm_ptr, n_wmms);
+-			struct fwdb_wmm_rule *wmm;
+-			struct ieee80211_wmm_rule *wmm_rule;
+-
+-			if (wmm_pos) {
+-				rrule->wmm_rule = wmm_pos;
+-				continue;
+-			}
+-			wmm = (void *)((u8 *)db + wmm_ptr);
+-			tmp_rd = krealloc(regdom, size_of_regd + (n_wmms + 1) *
+-					  sizeof(struct ieee80211_wmm_rule),
+-					  GFP_KERNEL);
+-
+-			if (!tmp_rd) {
+-				kfree(regdom);
+-				kfree(wmm_ptrs);
+-				return -ENOMEM;
+-			}
+-			regdom = tmp_rd;
+-
+-			wmm_rule = (struct ieee80211_wmm_rule *)
+-				((u8 *)regdom + size_of_regd + n_wmms *
+-				sizeof(struct ieee80211_wmm_rule));
++			struct fwdb_wmm_rule *wmm = (void *)((u8 *)db + wmm_ptr);
+ 
+-			set_wmm_rule(wmm_rule, wmm);
+-			wmm_ptrs[n_wmms].ptr = wmm_ptr;
+-			wmm_ptrs[n_wmms++].rule = wmm_rule;
++			set_wmm_rule(rrule, wmm);
+ 		}
+ 	}
+-	kfree(wmm_ptrs);
+ 
+ 	return reg_schedule_apply(regdom);
+ }
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 3c654cd7ba56..908bf5b6d89e 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1374,7 +1374,7 @@ bool ieee80211_chandef_to_operating_class(struct cfg80211_chan_def *chandef,
+ 					  u8 *op_class)
+ {
+ 	u8 vht_opclass;
+-	u16 freq = chandef->center_freq1;
++	u32 freq = chandef->center_freq1;
+ 
+ 	if (freq >= 2412 && freq <= 2472) {
+ 		if (chandef->width > NL80211_CHAN_WIDTH_40)
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d14b05f68d6d..08b6369f930b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6455,6 +6455,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
++	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c
+index d78aed86af09..8ff8cb1a11f4 100644
+--- a/tools/hv/hv_fcopy_daemon.c
++++ b/tools/hv/hv_fcopy_daemon.c
+@@ -234,6 +234,7 @@ int main(int argc, char *argv[])
+ 			break;
+ 
+ 		default:
++			error = HV_E_FAIL;
+ 			syslog(LOG_ERR, "Unknown operation: %d",
+ 				buffer.hdr.operation);
+ 
+diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat
+index 56c4b3f8a01b..7c92545931e3 100755
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -759,13 +759,20 @@ class DebugfsProvider(Provider):
+             if len(vms) == 0:
+                 self.do_read = False
+ 
+-            self.paths = filter(lambda x: "{}-".format(pid) in x, vms)
++            self.paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
+ 
+         else:
+             self.paths = []
+             self.do_read = True
+         self.reset()
+ 
++    def _verify_paths(self):
++        """Remove invalid paths"""
++        for path in self.paths:
++            if not os.path.exists(os.path.join(PATH_DEBUGFS_KVM, path)):
++                self.paths.remove(path)
++                continue
++
+     def read(self, reset=0, by_guest=0):
+         """Returns a dict with format:'file name / field -> current value'.
+ 
+@@ -780,6 +787,7 @@ class DebugfsProvider(Provider):
+         # If no debugfs filtering support is available, then don't read.
+         if not self.do_read:
+             return results
++        self._verify_paths()
+ 
+         paths = self.paths
+         if self._pid == 0:
+@@ -1162,6 +1170,9 @@ class Tui(object):
+ 
+             return sorted_items
+ 
++        if not self._is_running_guest(self.stats.pid_filter):
++            # leave final data on screen
++            return
+         row = 3
+         self.screen.move(row, 0)
+         self.screen.clrtobot()
+@@ -1219,10 +1230,10 @@ class Tui(object):
+         (x, term_width) = self.screen.getmaxyx()
+         row = 2
+         for line in text:
+-            start = (term_width - len(line)) / 2
++            start = (term_width - len(line)) // 2
+             self.screen.addstr(row, start, line)
+             row += 1
+-        self.screen.addstr(row + 1, (term_width - len(hint)) / 2, hint,
++        self.screen.addstr(row + 1, (term_width - len(hint)) // 2, hint,
+                            curses.A_STANDOUT)
+         self.screen.getkey()
+ 
+@@ -1319,6 +1330,12 @@ class Tui(object):
+                 msg = '"' + str(val) + '": Invalid value'
+         self._refresh_header()
+ 
++    def _is_running_guest(self, pid):
++        """Check if pid is still a running process."""
++        if not pid:
++            return True
++        return os.path.isdir(os.path.join('/proc/', str(pid)))
++
+     def _show_vm_selection_by_guest(self):
+         """Draws guest selection mask.
+ 
+@@ -1346,7 +1363,7 @@ class Tui(object):
+             if not guest or guest == '0':
+                 break
+             if guest.isdigit():
+-                if not os.path.isdir(os.path.join('/proc/', guest)):
++                if not self._is_running_guest(guest):
+                     msg = '"' + guest + '": Not a running process'
+                     continue
+                 pid = int(guest)
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 20e7d74d86cd..10a44e946f77 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -22,15 +22,16 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr)
+ 
+ #endif
+ 
+-#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ int arch__choose_best_symbol(struct symbol *syma,
+ 			     struct symbol *symb __maybe_unused)
+ {
+ 	char *sym = syma->name;
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ 	/* Skip over any initial dot */
+ 	if (*sym == '.')
+ 		sym++;
++#endif
+ 
+ 	/* Avoid "SyS" kernel syscall aliases */
+ 	if (strlen(sym) >= 3 && !strncmp(sym, "SyS", 3))
+@@ -41,6 +42,7 @@ int arch__choose_best_symbol(struct symbol *syma,
+ 	return SYMBOL_A;
+ }
+ 
++#if !defined(_CALL_ELF) || _CALL_ELF != 2
+ /* Allow matching against dot variants */
+ int arch__compare_symbol_names(const char *namea, const char *nameb)
+ {
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index f91775b4bc3c..3b05219c3ed7 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -245,8 +245,14 @@ find_target:
+ 
+ indirect_call:
+ 	tok = strchr(endptr, '*');
+-	if (tok != NULL)
+-		ops->target.addr = strtoull(tok + 1, NULL, 16);
++	if (tok != NULL) {
++		endptr++;
++
++		/* Indirect call can use a non-rip register and offset: callq  *0x8(%rbx).
++		 * Do not parse such instruction.  */
++		if (strstr(endptr, "(%r") == NULL)
++			ops->target.addr = strtoull(endptr, NULL, 16);
++	}
+ 	goto find_target;
+ }
+ 
+@@ -275,7 +281,19 @@ bool ins__is_call(const struct ins *ins)
+ 	return ins->ops == &call_ops || ins->ops == &s390_call_ops;
+ }
+ 
+-static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *ops, struct map_symbol *ms)
++/*
++ * Prevents from matching commas in the comment section, e.g.:
++ * ffff200008446e70:       b.cs    ffff2000084470f4 <generic_exec_single+0x314>  // b.hs, b.nlast
++ */
++static inline const char *validate_comma(const char *c, struct ins_operands *ops)
++{
++	if (ops->raw_comment && c > ops->raw_comment)
++		return NULL;
++
++	return c;
++}
++
++static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms)
+ {
+ 	struct map *map = ms->map;
+ 	struct symbol *sym = ms->sym;
+@@ -284,6 +302,10 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 	};
+ 	const char *c = strchr(ops->raw, ',');
+ 	u64 start, end;
++
++	ops->raw_comment = strchr(ops->raw, arch->objdump.comment_char);
++	c = validate_comma(c, ops);
++
+ 	/*
+ 	 * Examples of lines to parse for the _cpp_lex_token@@Base
+ 	 * function:
+@@ -303,6 +325,7 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op
+ 		ops->target.addr = strtoull(c, NULL, 16);
+ 		if (!ops->target.addr) {
+ 			c = strchr(c, ',');
++			c = validate_comma(c, ops);
+ 			if (c++ != NULL)
+ 				ops->target.addr = strtoull(c, NULL, 16);
+ 		}
+@@ -360,9 +383,12 @@ static int jump__scnprintf(struct ins *ins, char *bf, size_t size,
+ 		return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.sym->name);
+ 
+ 	c = strchr(ops->raw, ',');
++	c = validate_comma(c, ops);
++
+ 	if (c != NULL) {
+ 		const char *c2 = strchr(c + 1, ',');
+ 
++		c2 = validate_comma(c2, ops);
+ 		/* check for 3-op insn */
+ 		if (c2 != NULL)
+ 			c = c2;
+diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
+index a4c0d91907e6..61e0c7fd5efd 100644
+--- a/tools/perf/util/annotate.h
++++ b/tools/perf/util/annotate.h
+@@ -21,6 +21,7 @@ struct ins {
+ 
+ struct ins_operands {
+ 	char	*raw;
++	char	*raw_comment;
+ 	struct {
+ 		char	*raw;
+ 		char	*name;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 0d5504751cc5..6324afba8fdd 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -251,8 +251,9 @@ struct perf_evsel *perf_evsel__new_idx(struct perf_event_attr *attr, int idx)
+ {
+ 	struct perf_evsel *evsel = zalloc(perf_evsel__object.size);
+ 
+-	if (evsel != NULL)
+-		perf_evsel__init(evsel, attr, idx);
++	if (!evsel)
++		return NULL;
++	perf_evsel__init(evsel, attr, idx);
+ 
+ 	if (perf_evsel__is_bpf_output(evsel)) {
+ 		evsel->attr.sample_type |= (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME |
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index c85d0d1a65ed..7b0ca7cbb7de 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -377,7 +377,7 @@ out:
+ 
+ static int record_saved_cmdline(void)
+ {
+-	unsigned int size;
++	unsigned long long size;
+ 	char *path;
+ 	struct stat st;
+ 	int ret, err = 0;
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index f8cc38afffa2..32a194e3e07a 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -46,6 +46,9 @@
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+ 
++# Some systems don't have a ping6 binary anymore
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ tests="
+ 	pmtu_vti6_exception		vti6: PMTU exceptions
+ 	pmtu_vti4_exception		vti4: PMTU exceptions
+@@ -274,7 +277,7 @@ test_pmtu_vti6_exception() {
+ 	mtu "${ns_b}" veth_b 4000
+ 	mtu "${ns_a}" vti6_a 5000
+ 	mtu "${ns_b}" vti6_b 5000
+-	${ns_a} ping6 -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
++	${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
+ 
+ 	# Check that exception was created
+ 	if [ "$(route_get_dst_pmtu_from_exception "${ns_a}" ${vti6_b_addr})" = "" ]; then
+@@ -334,7 +337,7 @@ test_pmtu_vti4_link_add_mtu() {
+ 	fail=0
+ 
+ 	min=68
+-	max=$((65528 - 20))
++	max=$((65535 - 20))
+ 	# Check invalid values first
+ 	for v in $((min - 1)) $((max + 1)); do
+ 		${ns_a} ip link add vti4_a mtu ${v} type vti local ${veth4_a_addr} remote ${veth4_b_addr} key 10 2>/dev/null
+diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
+index 615252331813..4bc071525bf7 100644
+--- a/tools/testing/selftests/rseq/param_test.c
++++ b/tools/testing/selftests/rseq/param_test.c
+@@ -56,15 +56,13 @@ unsigned int yield_mod_cnt, nr_abort;
+ 			printf(fmt, ## __VA_ARGS__);	\
+ 	} while (0)
+ 
+-#if defined(__x86_64__) || defined(__i386__)
++#ifdef __i386__
+ 
+ #define INJECT_ASM_REG	"eax"
+ 
+ #define RSEQ_INJECT_CLOBBER \
+ 	, INJECT_ASM_REG
+ 
+-#ifdef __i386__
+-
+ #define RSEQ_INJECT_ASM(n) \
+ 	"mov asm_loop_cnt_" #n ", %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+@@ -76,9 +74,16 @@ unsigned int yield_mod_cnt, nr_abort;
+ 
+ #elif defined(__x86_64__)
+ 
++#define INJECT_ASM_REG_P	"rax"
++#define INJECT_ASM_REG		"eax"
++
++#define RSEQ_INJECT_CLOBBER \
++	, INJECT_ASM_REG_P \
++	, INJECT_ASM_REG
++
+ #define RSEQ_INJECT_ASM(n) \
+-	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG "\n\t" \
+-	"mov (%%" INJECT_ASM_REG "), %%" INJECT_ASM_REG "\n\t" \
++	"lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG_P "\n\t" \
++	"mov (%%" INJECT_ASM_REG_P "), %%" INJECT_ASM_REG "\n\t" \
+ 	"test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+ 	"jz 333f\n\t" \
+ 	"222:\n\t" \
+@@ -86,10 +91,6 @@ unsigned int yield_mod_cnt, nr_abort;
+ 	"jnz 222b\n\t" \
+ 	"333:\n\t"
+ 
+-#else
+-#error "Unsupported architecture"
+-#endif
+-
+ #elif defined(__ARMEL__)
+ 
+ #define RSEQ_INJECT_INPUT \
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+index f03763d81617..30f9b54bd666 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+@@ -312,6 +312,54 @@
+             "$TC actions flush action police"
+         ]
+     },
++    {
++        "id": "6aaf",
++        "name": "Add police actions with conform-exceed control pass/pipe [with numeric values]",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 0/3 index 1",
++        "expExitCode": "0",
++        "verifyCmd": "$TC actions get action police index 1",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action pass/pipe",
++        "matchCount": "1",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
++    {
++        "id": "29b1",
++        "name": "Add police actions with conform-exceed control <invalid>/drop",
++        "category": [
++            "actions",
++            "police"
++        ],
++        "setup": [
++            [
++                "$TC actions flush action police",
++                0,
++                1,
++                255
++            ]
++        ],
++        "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 10/drop index 1",
++        "expExitCode": "255",
++        "verifyCmd": "$TC actions ls action police",
++        "matchPattern": "action order [0-9]*:  police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action ",
++        "matchCount": "0",
++        "teardown": [
++            "$TC actions flush action police"
++        ]
++    },
+     {
+         "id": "c26f",
+         "name": "Add police action with invalid peakrate value",
+diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
+index cce853dca691..a4c31fb2887b 100644
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -156,12 +156,6 @@ static const char * const page_flag_names[] = {
+ };
+ 
+ 
+-static const char * const debugfs_known_mountpoints[] = {
+-	"/sys/kernel/debug",
+-	"/debug",
+-	0,
+-};
+-
+ /*
+  * data structures
+  */
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index f82c2eaa859d..334b16db0ebb 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -30,8 +30,8 @@ struct slabinfo {
+ 	int alias;
+ 	int refs;
+ 	int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu;
+-	int hwcache_align, object_size, objs_per_slab;
+-	int sanity_checks, slab_size, store_user, trace;
++	unsigned int hwcache_align, object_size, objs_per_slab;
++	unsigned int sanity_checks, slab_size, store_user, trace;
+ 	int order, poison, reclaim_account, red_zone;
+ 	unsigned long partial, objects, slabs, objects_partial, objects_total;
+ 	unsigned long alloc_fastpath, alloc_slowpath;


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-10-04 10:44 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-10-04 10:44 UTC (permalink / raw
  To: gentoo-commits

commit:     2e88d7f9eaffea37f89cf24b95cf17b308c058e9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Oct  4 10:44:07 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Oct  4 10:44:07 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2e88d7f9

Linux patch 4.18.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1011_linux-4.18.12.patch | 7724 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7728 insertions(+)

diff --git a/0000_README b/0000_README
index cccbd63..ff87445 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-4.18.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.11
 
+Patch:  1011_linux-4.18.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-4.18.12.patch b/1011_linux-4.18.12.patch
new file mode 100644
index 0000000..0851ea8
--- /dev/null
+++ b/1011_linux-4.18.12.patch
@@ -0,0 +1,7724 @@
+diff --git a/Documentation/hwmon/ina2xx b/Documentation/hwmon/ina2xx
+index 72d16f08e431..b8df81f6d6bc 100644
+--- a/Documentation/hwmon/ina2xx
++++ b/Documentation/hwmon/ina2xx
+@@ -32,7 +32,7 @@ Supported chips:
+     Datasheet: Publicly available at the Texas Instruments website
+                http://www.ti.com/
+ 
+-Author: Lothar Felten <l-felten@ti.com>
++Author: Lothar Felten <lothar.felten@gmail.com>
+ 
+ Description
+ -----------
+diff --git a/Documentation/process/2.Process.rst b/Documentation/process/2.Process.rst
+index a9c46dd0706b..51d0349c7809 100644
+--- a/Documentation/process/2.Process.rst
++++ b/Documentation/process/2.Process.rst
+@@ -134,7 +134,7 @@ and their maintainers are:
+ 	4.4	Greg Kroah-Hartman	(very long-term stable kernel)
+ 	4.9	Greg Kroah-Hartman
+ 	4.14	Greg Kroah-Hartman
+-	======  ======================  ===========================
++	======  ======================  ==============================
+ 
+ The selection of a kernel for long-term support is purely a matter of a
+ maintainer having the need and the time to maintain that release.  There
+diff --git a/Makefile b/Makefile
+index de0ecace693a..466e07af8473 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index e03495a799ce..a0ddf497e8cd 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1893,7 +1893,7 @@
+ 			};
+ 		};
+ 
+-		dcan1: can@481cc000 {
++		dcan1: can@4ae3c000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan1";
+ 			reg = <0x4ae3c000 0x2000>;
+@@ -1903,7 +1903,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		dcan2: can@481d0000 {
++		dcan2: can@48480000 {
+ 			compatible = "ti,dra7-d_can";
+ 			ti,hwmods = "dcan2";
+ 			reg = <0x48480000 0x2000>;
+diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
+index 8d3d123d0a5c..37f0a5afe348 100644
+--- a/arch/arm/boot/dts/imx7d.dtsi
++++ b/arch/arm/boot/dts/imx7d.dtsi
+@@ -125,10 +125,14 @@
+ 		interrupt-names = "msi";
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 0x7>;
+-		interrupt-map = <0 0 0 1 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 2 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 3 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+-				<0 0 0 4 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>;
++		/*
++		 * Reference manual lists pci irqs incorrectly
++		 * Real hardware ordering is same as imx6: D+MSI, C, B, A
++		 */
++		interrupt-map = <0 0 0 1 &intc GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 2 &intc GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 3 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
++				<0 0 0 4 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&clks IMX7D_PCIE_CTRL_ROOT_CLK>,
+ 			 <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>,
+ 			 <&clks IMX7D_PCIE_PHY_ROOT_CLK>;
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index c55d479971cc..f18490548c78 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -84,6 +84,7 @@
+ 			device_type = "cpu";
+ 			reg = <0xf01>;
+ 			clocks = <&clockgen 1 0>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index d1eb123bc73b..1cdc346a05e8 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -92,6 +92,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -103,6 +104,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 
+@@ -114,6 +116,7 @@
+ 				 <&apmixedsys CLK_APMIXED_MAINPLL>;
+ 			clock-names = "cpu", "intermediate";
+ 			operating-points-v2 = <&cpu_opp_table>;
++			#cooling-cells = <2>;
+ 			clock-frequency = <1300000000>;
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/omap4-droid4-xt894.dts b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+index e7c3c563ff8f..5f27518561c4 100644
+--- a/arch/arm/boot/dts/omap4-droid4-xt894.dts
++++ b/arch/arm/boot/dts/omap4-droid4-xt894.dts
+@@ -351,7 +351,7 @@
+ &mmc2 {
+ 	vmmc-supply = <&vsdio>;
+ 	bus-width = <8>;
+-	non-removable;
++	ti,non-removable;
+ };
+ 
+ &mmc3 {
+@@ -618,15 +618,6 @@
+ 		OMAP4_IOPAD(0x10c, PIN_INPUT | MUX_MODE1)	/* abe_mcbsp3_fsx */
+ 		>;
+ 	};
+-};
+-
+-&omap4_pmx_wkup {
+-	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
+-		/* gpio_wk0 */
+-		pinctrl-single,pins = <
+-		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
+-		>;
+-	};
+ 
+ 	vibrator_direction_pin: pinmux_vibrator_direction_pin {
+ 		pinctrl-single,pins = <
+@@ -641,6 +632,15 @@
+ 	};
+ };
+ 
++&omap4_pmx_wkup {
++	usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins {
++		/* gpio_wk0 */
++		pinctrl-single,pins = <
++		OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3)
++		>;
++	};
++};
++
+ /*
+  * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
+  * uart1 wakeirq.
+diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
+index 27a78c80e5b1..73d5d72dfc3e 100644
+--- a/arch/arm/mach-mvebu/pmsu.c
++++ b/arch/arm/mach-mvebu/pmsu.c
+@@ -116,8 +116,8 @@ void mvebu_pmsu_set_cpu_boot_addr(int hw_cpu, void *boot_addr)
+ 		PMSU_BOOT_ADDR_REDIRECT_OFFSET(hw_cpu));
+ }
+ 
+-extern unsigned char mvebu_boot_wa_start;
+-extern unsigned char mvebu_boot_wa_end;
++extern unsigned char mvebu_boot_wa_start[];
++extern unsigned char mvebu_boot_wa_end[];
+ 
+ /*
+  * This function sets up the boot address workaround needed for SMP
+@@ -130,7 +130,7 @@ int mvebu_setup_boot_addr_wa(unsigned int crypto_eng_target,
+ 			     phys_addr_t resume_addr_reg)
+ {
+ 	void __iomem *sram_virt_base;
+-	u32 code_len = &mvebu_boot_wa_end - &mvebu_boot_wa_start;
++	u32 code_len = mvebu_boot_wa_end - mvebu_boot_wa_start;
+ 
+ 	mvebu_mbus_del_window(BOOTROM_BASE, BOOTROM_SIZE);
+ 	mvebu_mbus_add_window_by_id(crypto_eng_target, crypto_eng_attribute,
+diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
+index 2ceffd85dd3d..cd65ea4e9c54 100644
+--- a/arch/arm/mach-omap2/omap_hwmod.c
++++ b/arch/arm/mach-omap2/omap_hwmod.c
+@@ -2160,6 +2160,37 @@ static int of_dev_hwmod_lookup(struct device_node *np,
+ 	return -ENODEV;
+ }
+ 
++/**
++ * omap_hwmod_fix_mpu_rt_idx - fix up mpu_rt_idx register offsets
++ *
++ * @oh: struct omap_hwmod *
++ * @np: struct device_node *
++ *
++ * Fix up module register offsets for modules with mpu_rt_idx.
++ * Only needed for cpsw with interconnect target module defined
++ * in device tree while still using legacy hwmod platform data
++ * for rev, sysc and syss registers.
++ *
++ * Can be removed when all cpsw hwmod platform data has been
++ * dropped.
++ */
++static void omap_hwmod_fix_mpu_rt_idx(struct omap_hwmod *oh,
++				      struct device_node *np,
++				      struct resource *res)
++{
++	struct device_node *child = NULL;
++	int error;
++
++	child = of_get_next_child(np, child);
++	if (!child)
++		return;
++
++	error = of_address_to_resource(child, oh->mpu_rt_idx, res);
++	if (error)
++		pr_err("%s: error mapping mpu_rt_idx: %i\n",
++		       __func__, error);
++}
++
+ /**
+  * omap_hwmod_parse_module_range - map module IO range from device tree
+  * @oh: struct omap_hwmod *
+@@ -2220,7 +2251,13 @@ int omap_hwmod_parse_module_range(struct omap_hwmod *oh,
+ 	size = be32_to_cpup(ranges);
+ 
+ 	pr_debug("omap_hwmod: %s %s at 0x%llx size 0x%llx\n",
+-		 oh->name, np->name, base, size);
++		 oh ? oh->name : "", np->name, base, size);
++
++	if (oh && oh->mpu_rt_idx) {
++		omap_hwmod_fix_mpu_rt_idx(oh, np, res);
++
++		return 0;
++	}
+ 
+ 	res->start = base;
+ 	res->end = base + size - 1;
+diff --git a/arch/arm/mach-omap2/omap_hwmod_reset.c b/arch/arm/mach-omap2/omap_hwmod_reset.c
+index b68f9c0aff0b..d5ddba00bb73 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_reset.c
++++ b/arch/arm/mach-omap2/omap_hwmod_reset.c
+@@ -92,11 +92,13 @@ static void omap_rtc_wait_not_busy(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(OMAP_RTC_KICK0_VALUE, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(OMAP_RTC_KICK1_VALUE, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+ 
+ /**
+@@ -110,9 +112,11 @@ void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+  */
+ void omap_hwmod_rtc_lock(struct omap_hwmod *oh)
+ {
+-	local_irq_disable();
++	unsigned long flags;
++
++	local_irq_save(flags);
+ 	omap_rtc_wait_not_busy(oh);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK0_REG);
+ 	omap_hwmod_write(0x0, oh, OMAP_RTC_KICK1_REG);
+-	local_irq_enable();
++	local_irq_restore(flags);
+ }
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+index e19dcd6cb767..0a42b016f257 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795-es1.dtsi
+@@ -80,7 +80,7 @@
+ 
+ 	vspd3: vsp@fea38000 {
+ 		compatible = "renesas,vsp2";
+-		reg = <0 0xfea38000 0 0x8000>;
++		reg = <0 0xfea38000 0 0x5000>;
+ 		interrupts = <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cpg CPG_MOD 620>;
+ 		power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+index d842940b2f43..91c392f879f9 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+@@ -2530,7 +2530,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2541,7 +2541,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+@@ -2552,7 +2552,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a7796.dtsi b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+index 7c25be6b5af3..a3653f9f4627 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7796.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7796.dtsi
+@@ -2212,7 +2212,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2223,7 +2223,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+@@ -2234,7 +2234,7 @@
+ 
+ 		vspd2: vsp@fea30000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea30000 0 0x8000>;
++			reg = <0 0xfea30000 0 0x5000>;
+ 			interrupts = <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 621>;
+ 			power-domains = <&sysc R8A7796_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77965.dtsi b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+index 486aecacb22a..ca618228fce1 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77965.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77965.dtsi
+@@ -1397,7 +1397,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+@@ -1416,7 +1416,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77965_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77970.dtsi b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+index 98a2317a16c4..89dc4e343b7c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77970.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77970.dtsi
+@@ -776,7 +776,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77970_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77995.dtsi b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+index 2506f46293e8..ac9aadf2723c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77995.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77995.dtsi
+@@ -699,7 +699,7 @@
+ 
+ 		vspd0: vsp@fea20000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea20000 0 0x8000>;
++			reg = <0 0xfea20000 0 0x5000>;
+ 			interrupts = <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 623>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+@@ -709,7 +709,7 @@
+ 
+ 		vspd1: vsp@fea28000 {
+ 			compatible = "renesas,vsp2";
+-			reg = <0 0xfea28000 0 0x8000>;
++			reg = <0 0xfea28000 0 0x5000>;
+ 			interrupts = <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&cpg CPG_MOD 622>;
+ 			power-domains = <&sysc R8A77995_PD_ALWAYS_ON>;
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index 9256fbaaab7f..5853f5177b4b 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -440,7 +440,7 @@
+ 			};
+ 		};
+ 
+-		port@10 {
++		port@a {
+ 			reg = <10>;
+ 
+ 			adv7482_txa: endpoint {
+@@ -450,7 +450,7 @@
+ 			};
+ 		};
+ 
+-		port@11 {
++		port@b {
+ 			reg = <11>;
+ 
+ 			adv7482_txb: endpoint {
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 56a0260ceb11..d5c6bb1562d8 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -57,6 +57,45 @@ static u64 core_reg_offset_from_id(u64 id)
+ 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
+ }
+ 
++static int validate_core_offset(const struct kvm_one_reg *reg)
++{
++	u64 off = core_reg_offset_from_id(reg->id);
++	int size;
++
++	switch (off) {
++	case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
++	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
++	case KVM_REG_ARM_CORE_REG(regs.sp):
++	case KVM_REG_ARM_CORE_REG(regs.pc):
++	case KVM_REG_ARM_CORE_REG(regs.pstate):
++	case KVM_REG_ARM_CORE_REG(sp_el1):
++	case KVM_REG_ARM_CORE_REG(elr_el1):
++	case KVM_REG_ARM_CORE_REG(spsr[0]) ...
++	     KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
++		size = sizeof(__u64);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
++	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
++		size = sizeof(__uint128_t);
++		break;
++
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
++	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
++		size = sizeof(__u32);
++		break;
++
++	default:
++		return -EINVAL;
++	}
++
++	if (KVM_REG_SIZE(reg->id) == size &&
++	    IS_ALIGNED(off, size / sizeof(__u32)))
++		return 0;
++
++	return -EINVAL;
++}
++
+ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ {
+ 	/*
+@@ -76,6 +115,9 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id)))
+ 		return -EFAULT;
+ 
+@@ -98,6 +140,9 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
+ 		return -ENOENT;
+ 
++	if (validate_core_offset(reg))
++		return -EINVAL;
++
+ 	if (KVM_REG_SIZE(reg->id) > sizeof(tmp))
+ 		return -EINVAL;
+ 
+@@ -107,17 +152,25 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	if (off == KVM_REG_ARM_CORE_REG(regs.pstate)) {
+-		u32 mode = (*(u32 *)valp) & COMPAT_PSR_MODE_MASK;
++		u64 mode = (*(u64 *)valp) & COMPAT_PSR_MODE_MASK;
+ 		switch (mode) {
+ 		case COMPAT_PSR_MODE_USR:
++			if (!system_supports_32bit_el0())
++				return -EINVAL;
++			break;
+ 		case COMPAT_PSR_MODE_FIQ:
+ 		case COMPAT_PSR_MODE_IRQ:
+ 		case COMPAT_PSR_MODE_SVC:
+ 		case COMPAT_PSR_MODE_ABT:
+ 		case COMPAT_PSR_MODE_UND:
++			if (!vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
++			break;
+ 		case PSR_MODE_EL0t:
+ 		case PSR_MODE_EL1t:
+ 		case PSR_MODE_EL1h:
++			if (vcpu_el1_is_32bit(vcpu))
++				return -EINVAL;
+ 			break;
+ 		default:
+ 			err = -EINVAL;
+diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile
+index c22da16d67b8..5c7bfa8478e7 100644
+--- a/arch/mips/boot/Makefile
++++ b/arch/mips/boot/Makefile
+@@ -118,10 +118,12 @@ ifeq ($(ADDR_BITS),64)
+ 	itb_addr_cells = 2
+ endif
+ 
++targets += vmlinux.its.S
++
+ quiet_cmd_its_cat = CAT     $@
+-      cmd_its_cat = cat $^ >$@
++      cmd_its_cat = cat $(filter-out $(PHONY), $^) >$@
+ 
+-$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS))
++$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS)) FORCE
+ 	$(call if_changed,its_cat)
+ 
+ quiet_cmd_cpp_its_S = ITS     $@
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index f817342aab8f..53729220b48d 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1321,9 +1321,7 @@ EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100)
+ 
+ #ifdef CONFIG_PPC_DENORMALISATION
+ 	mfspr	r10,SPRN_HSRR1
+-	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
+ 	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
+-	addi	r11,r11,-4		/* HSRR0 is next instruction */
+ 	bne+	denorm_assist
+ #endif
+ 
+@@ -1389,6 +1387,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
+  */
+ 	XVCPSGNDP32(32)
+ denorm_done:
++	mfspr	r11,SPRN_HSRR0
++	subi	r11,r11,4
+ 	mtspr	SPRN_HSRR0,r11
+ 	mtcrf	0x80,r9
+ 	ld	r9,PACA_EXGEN+EX_R9(r13)
+diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
+index 936c7e2d421e..b53401334e81 100644
+--- a/arch/powerpc/kernel/machine_kexec.c
++++ b/arch/powerpc/kernel/machine_kexec.c
+@@ -188,7 +188,12 @@ void __init reserve_crashkernel(void)
+ 			(unsigned long)(crashk_res.start >> 20),
+ 			(unsigned long)(memblock_phys_mem_size() >> 20));
+ 
+-	memblock_reserve(crashk_res.start, crash_size);
++	if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
++	    memblock_reserve(crashk_res.start, crash_size)) {
++		pr_err("Failed to reserve memory for crashkernel!\n");
++		crashk_res.start = crashk_res.end = 0;
++		return;
++	}
+ }
+ 
+ int overlaps_crashkernel(unsigned long start, unsigned long size)
+diff --git a/arch/powerpc/lib/checksum_64.S b/arch/powerpc/lib/checksum_64.S
+index 886ed94b9c13..d05c8af4ac51 100644
+--- a/arch/powerpc/lib/checksum_64.S
++++ b/arch/powerpc/lib/checksum_64.S
+@@ -443,6 +443,9 @@ _GLOBAL(csum_ipv6_magic)
+ 	addc	r0, r8, r9
+ 	ld	r10, 0(r4)
+ 	ld	r11, 8(r4)
++#ifdef CONFIG_CPU_LITTLE_ENDIAN
++	rotldi	r5, r5, 8
++#endif
+ 	adde	r0, r0, r10
+ 	add	r5, r5, r7
+ 	adde	r0, r0, r11
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 35ac5422903a..b5a71baedbc2 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1452,7 +1452,8 @@ static struct timer_list topology_timer;
+ 
+ static void reset_topology_timer(void)
+ {
+-	mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
++	if (vphn_enabled)
++		mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+ }
+ 
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index 0e7810ccd1ae..c18d17d830a1 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -44,7 +44,7 @@ static void scan_pkey_feature(void)
+ 	 * Since any pkey can be used for data or execute, we will just treat
+ 	 * all keys as equal and track them as one entity.
+ 	 */
+-	pkeys_total = be32_to_cpu(vals[0]);
++	pkeys_total = vals[0];
+ 	pkeys_devtree_defined = true;
+ }
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index a2cdf358a3ac..0976049d3365 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2841,7 +2841,7 @@ static long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ 	level_shift = entries_shift + 3;
+ 	level_shift = max_t(unsigned, level_shift, PAGE_SHIFT);
+ 
+-	if ((level_shift - 3) * levels + page_shift >= 60)
++	if ((level_shift - 3) * levels + page_shift >= 55)
+ 		return -EINVAL;
+ 
+ 	/* Allocate TCE table */
+diff --git a/arch/s390/kernel/sysinfo.c b/arch/s390/kernel/sysinfo.c
+index 54f5496913fa..12f80d1f0415 100644
+--- a/arch/s390/kernel/sysinfo.c
++++ b/arch/s390/kernel/sysinfo.c
+@@ -59,6 +59,8 @@ int stsi(void *sysinfo, int fc, int sel1, int sel2)
+ }
+ EXPORT_SYMBOL(stsi);
+ 
++#ifdef CONFIG_PROC_FS
++
+ static bool convert_ext_name(unsigned char encoding, char *name, size_t len)
+ {
+ 	switch (encoding) {
+@@ -301,6 +303,8 @@ static int __init sysinfo_create_proc(void)
+ }
+ device_initcall(sysinfo_create_proc);
+ 
++#endif /* CONFIG_PROC_FS */
++
+ /*
+  * Service levels interface.
+  */
+diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
+index 6ad15d3fab81..84111a43ea29 100644
+--- a/arch/s390/mm/extmem.c
++++ b/arch/s390/mm/extmem.c
+@@ -80,7 +80,7 @@ struct qin64 {
+ struct dcss_segment {
+ 	struct list_head list;
+ 	char dcss_name[8];
+-	char res_name[15];
++	char res_name[16];
+ 	unsigned long start_addr;
+ 	unsigned long end;
+ 	atomic_t ref_count;
+@@ -433,7 +433,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
+ 	memcpy(&seg->res_name, seg->dcss_name, 8);
+ 	EBCASC(seg->res_name, 8);
+ 	seg->res_name[8] = '\0';
+-	strncat(seg->res_name, " (DCSS)", 7);
++	strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
+ 	seg->res->name = seg->res_name;
+ 	rc = seg->vm_segtype;
+ 	if (rc == SEG_TYPE_SC ||
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index e3bd5627afef..76d89ee8b428 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -28,7 +28,7 @@ static struct ctl_table page_table_sysctl[] = {
+ 		.data		= &page_table_allocate_pgste,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= S_IRUGO | S_IWUSR,
+-		.proc_handler	= proc_dointvec,
++		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= &page_table_allocate_pgste_min,
+ 		.extra2		= &page_table_allocate_pgste_max,
+ 	},
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 8ae7ffda8f98..0ab33af41fbd 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -92,7 +92,7 @@ END(native_usergs_sysret64)
+ .endm
+ 
+ .macro TRACE_IRQS_IRETQ_DEBUG
+-	bt	$9, EFLAGS(%rsp)		/* interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* interrupts off? */
+ 	jnc	1f
+ 	TRACE_IRQS_ON_DEBUG
+ 1:
+@@ -701,7 +701,7 @@ retint_kernel:
+ #ifdef CONFIG_PREEMPT
+ 	/* Interrupts are off */
+ 	/* Check if we need preemption */
+-	bt	$9, EFLAGS(%rsp)		/* were interrupts off? */
++	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
+ 	jnc	1f
+ 0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
+ 	jnz	1f
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index cf372b90557e..a4170048a30b 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -346,7 +346,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = task_ctx->tos;
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < task_ctx->valid_lbrs; i++) {
+ 		lbr_idx = (tos - i) & mask;
+ 		wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
+ 		wrlbr_to  (lbr_idx, task_ctx->lbr_to[i]);
+@@ -354,6 +354,15 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++
++	for (; i < x86_pmu.lbr_nr; i++) {
++		lbr_idx = (tos - i) & mask;
++		wrlbr_from(lbr_idx, 0);
++		wrlbr_to(lbr_idx, 0);
++		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
++			wrmsrl(MSR_LBR_INFO_0 + lbr_idx, 0);
++	}
++
+ 	wrmsrl(x86_pmu.lbr_tos, tos);
+ 	task_ctx->lbr_stack_state = LBR_NONE;
+ }
+@@ -361,7 +370,7 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
+ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ {
+ 	unsigned lbr_idx, mask;
+-	u64 tos;
++	u64 tos, from;
+ 	int i;
+ 
+ 	if (task_ctx->lbr_callstack_users == 0) {
+@@ -371,13 +380,17 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
+ 
+ 	mask = x86_pmu.lbr_nr - 1;
+ 	tos = intel_pmu_lbr_tos();
+-	for (i = 0; i < tos; i++) {
++	for (i = 0; i < x86_pmu.lbr_nr; i++) {
+ 		lbr_idx = (tos - i) & mask;
+-		task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
++		from = rdlbr_from(lbr_idx);
++		if (!from)
++			break;
++		task_ctx->lbr_from[i] = from;
+ 		task_ctx->lbr_to[i]   = rdlbr_to(lbr_idx);
+ 		if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
+ 			rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
+ 	}
++	task_ctx->valid_lbrs = i;
+ 	task_ctx->tos = tos;
+ 	task_ctx->lbr_stack_state = LBR_VALID;
+ }
+@@ -531,7 +544,7 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
+  */
+ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ {
+-	bool need_info = false;
++	bool need_info = false, call_stack = false;
+ 	unsigned long mask = x86_pmu.lbr_nr - 1;
+ 	int lbr_format = x86_pmu.intel_cap.lbr_format;
+ 	u64 tos = intel_pmu_lbr_tos();
+@@ -542,7 +555,7 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 	if (cpuc->lbr_sel) {
+ 		need_info = !(cpuc->lbr_sel->config & LBR_NO_INFO);
+ 		if (cpuc->lbr_sel->config & LBR_CALL_STACK)
+-			num = tos;
++			call_stack = true;
+ 	}
+ 
+ 	for (i = 0; i < num; i++) {
+@@ -555,6 +568,13 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
+ 		from = rdlbr_from(lbr_idx);
+ 		to   = rdlbr_to(lbr_idx);
+ 
++		/*
++		 * Read LBR call stack entries
++		 * until invalid entry (0s) is detected.
++		 */
++		if (call_stack && !from)
++			break;
++
+ 		if (lbr_format == LBR_FORMAT_INFO && need_info) {
+ 			u64 info;
+ 
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 9f3711470ec1..6b72a92069fd 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -648,6 +648,7 @@ struct x86_perf_task_context {
+ 	u64 lbr_to[MAX_LBR_ENTRIES];
+ 	u64 lbr_info[MAX_LBR_ENTRIES];
+ 	int tos;
++	int valid_lbrs;
+ 	int lbr_callstack_users;
+ 	int lbr_stack_state;
+ };
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index e203169931c7..6390bd8c141b 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -14,6 +14,16 @@
+ #ifndef _ASM_X86_FIXMAP_H
+ #define _ASM_X86_FIXMAP_H
+ 
++/*
++ * Exposed to assembly code for setting up initial page tables. Cannot be
++ * calculated in assembly code (fixmap entries are an enum), but is sanity
++ * checked in the actual fixmap C code to make sure that the fixmap is
++ * covered fully.
++ */
++#define FIXMAP_PMD_NUM	2
++/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
++#define FIXMAP_PMD_TOP	507
++
+ #ifndef __ASSEMBLY__
+ #include <linux/kernel.h>
+ #include <asm/acpi.h>
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 82ff20b0ae45..20127d551ab5 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -14,6 +14,7 @@
+ #include <asm/processor.h>
+ #include <linux/bitops.h>
+ #include <linux/threads.h>
++#include <asm/fixmap.h>
+ 
+ extern p4d_t level4_kernel_pgt[512];
+ extern p4d_t level4_ident_pgt[512];
+@@ -22,7 +23,7 @@ extern pud_t level3_ident_pgt[512];
+ extern pmd_t level2_kernel_pgt[512];
+ extern pmd_t level2_fixmap_pgt[512];
+ extern pmd_t level2_ident_pgt[512];
+-extern pte_t level1_fixmap_pgt[512];
++extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM];
+ extern pgd_t init_top_pgt[];
+ 
+ #define swapper_pg_dir init_top_pgt
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 8047379e575a..11455200ae66 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -35,6 +35,7 @@
+ #include <asm/bootparam_utils.h>
+ #include <asm/microcode.h>
+ #include <asm/kasan.h>
++#include <asm/fixmap.h>
+ 
+ /*
+  * Manage page tables very early on.
+@@ -165,7 +166,8 @@ unsigned long __head __startup_64(unsigned long physaddr,
+ 	pud[511] += load_delta;
+ 
+ 	pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
+-	pmd[506] += load_delta;
++	for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
++		pmd[i] += load_delta;
+ 
+ 	/*
+ 	 * Set up the identity mapping for the switchover.  These
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 8344dd2f310a..6bc215c15ce0 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -24,6 +24,7 @@
+ #include "../entry/calling.h"
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
++#include <asm/fixmap.h>
+ 
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/asm-offsets.h>
+@@ -445,13 +446,20 @@ NEXT_PAGE(level2_kernel_pgt)
+ 		KERNEL_IMAGE_SIZE/PMD_SIZE)
+ 
+ NEXT_PAGE(level2_fixmap_pgt)
+-	.fill	506,8,0
+-	.quad	level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+-	/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
+-	.fill	5,8,0
++	.fill	(512 - 4 - FIXMAP_PMD_NUM),8,0
++	pgtno = 0
++	.rept (FIXMAP_PMD_NUM)
++	.quad level1_fixmap_pgt + (pgtno << PAGE_SHIFT) - __START_KERNEL_map \
++		+ _PAGE_TABLE_NOENC;
++	pgtno = pgtno + 1
++	.endr
++	/* 6 MB reserved space + a 2MB hole */
++	.fill	4,8,0
+ 
+ NEXT_PAGE(level1_fixmap_pgt)
++	.rept (FIXMAP_PMD_NUM)
+ 	.fill	512,8,0
++	.endr
+ 
+ #undef PMDS
+ 
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index 19afdbd7d0a7..5532d1be7687 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -12,6 +12,7 @@
+ #include <asm/setup.h>
+ #include <asm/apic.h>
+ #include <asm/param.h>
++#include <asm/tsc.h>
+ 
+ #define MAX_NUM_FREQS	9
+ 
+diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
+index 34a2a3bfde9c..22cbad56acab 100644
+--- a/arch/x86/mm/numa_emulation.c
++++ b/arch/x86/mm/numa_emulation.c
+@@ -61,7 +61,7 @@ static int __init emu_setup_memblk(struct numa_meminfo *ei,
+ 	eb->nid = nid;
+ 
+ 	if (emu_nid_to_phys[nid] == NUMA_NO_NODE)
+-		emu_nid_to_phys[nid] = nid;
++		emu_nid_to_phys[nid] = pb->nid;
+ 
+ 	pb->start += size;
+ 	if (pb->start >= pb->end) {
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index e3deefb891da..a300ffeece9b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -577,6 +577,15 @@ void __native_set_fixmap(enum fixed_addresses idx, pte_t pte)
+ {
+ 	unsigned long address = __fix_to_virt(idx);
+ 
++#ifdef CONFIG_X86_64
++       /*
++	* Ensure that the static initial page tables are covering the
++	* fixmap completely.
++	*/
++	BUILD_BUG_ON(__end_of_permanent_fixed_addresses >
++		     (FIXMAP_PMD_NUM * PTRS_PER_PTE));
++#endif
++
+ 	if (idx >= __end_of_fixed_addresses) {
+ 		BUG();
+ 		return;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 1d2106d83b4e..019da252a04f 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -239,7 +239,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+  *
+  * Returns a pointer to a PTE on success, or NULL on failure.
+  */
+-static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
++static pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+ 	pmd_t *pmd;
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 071d82ec9abb..2473eaca3468 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1908,7 +1908,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	/* L3_k[511] -> level2_fixmap_pgt */
+ 	convert_pfn_mfn(level3_kernel_pgt);
+ 
+-	/* L3_k[511][506] -> level1_fixmap_pgt */
++	/* L3_k[511][508-FIXMAP_PMD_NUM ... 507] -> level1_fixmap_pgt */
+ 	convert_pfn_mfn(level2_fixmap_pgt);
+ 
+ 	/* We get [511][511] and have Xen's version of level2_kernel_pgt */
+@@ -1953,7 +1953,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
+ 	set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO);
+ 	set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO);
+-	set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO);
++
++	for (i = 0; i < FIXMAP_PMD_NUM; i++) {
++		set_page_prot(level1_fixmap_pgt + i * PTRS_PER_PTE,
++			      PAGE_KERNEL_RO);
++	}
+ 
+ 	/* Pin down new L4 */
+ 	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+diff --git a/block/elevator.c b/block/elevator.c
+index fa828b5bfd4b..89a48a3a8c12 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -609,7 +609,7 @@ void elv_drain_elevator(struct request_queue *q)
+ 
+ 	while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
+ 		;
+-	if (q->nr_sorted && printed++ < 10) {
++	if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
+ 		printk(KERN_ERR "%s: forced dispatching is broken "
+ 		       "(nr_sorted=%u), please report this\n",
+ 		       q->elevator->type->elevator_name, q->nr_sorted);
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index 4ee7c041bb82..8882e90e868e 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -368,6 +368,7 @@ static int crypto_ablkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "ablkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+@@ -442,6 +443,7 @@ static int crypto_givcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "givcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<built-in>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_ablkcipher.min_keysize;
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 77b5fa293f66..f93abf13b5d4 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -510,6 +510,7 @@ static int crypto_blkcipher_report(struct sk_buff *skb, struct crypto_alg *alg)
+ 	strncpy(rblkcipher.type, "blkcipher", sizeof(rblkcipher.type));
+ 	strncpy(rblkcipher.geniv, alg->cra_blkcipher.geniv ?: "<default>",
+ 		sizeof(rblkcipher.geniv));
++	rblkcipher.geniv[sizeof(rblkcipher.geniv) - 1] = '\0';
+ 
+ 	rblkcipher.blocksize = alg->cra_blocksize;
+ 	rblkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index 2345a5ee2dbb..40ed3ec9fc94 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -235,9 +235,6 @@ static int acpi_lid_notify_state(struct acpi_device *device, int state)
+ 		button->last_time = ktime_get();
+ 	}
+ 
+-	if (state)
+-		acpi_pm_wakeup_event(&device->dev);
+-
+ 	ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
+ 	if (ret == NOTIFY_DONE)
+ 		ret = blocking_notifier_call_chain(&acpi_lid_notifier, state,
+@@ -366,7 +363,8 @@ int acpi_lid_open(void)
+ }
+ EXPORT_SYMBOL(acpi_lid_open);
+ 
+-static int acpi_lid_update_state(struct acpi_device *device)
++static int acpi_lid_update_state(struct acpi_device *device,
++				 bool signal_wakeup)
+ {
+ 	int state;
+ 
+@@ -374,6 +372,9 @@ static int acpi_lid_update_state(struct acpi_device *device)
+ 	if (state < 0)
+ 		return state;
+ 
++	if (state && signal_wakeup)
++		acpi_pm_wakeup_event(&device->dev);
++
+ 	return acpi_lid_notify_state(device, state);
+ }
+ 
+@@ -384,7 +385,7 @@ static void acpi_lid_initialize_state(struct acpi_device *device)
+ 		(void)acpi_lid_notify_state(device, 1);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_METHOD:
+-		(void)acpi_lid_update_state(device);
++		(void)acpi_lid_update_state(device, false);
+ 		break;
+ 	case ACPI_BUTTON_LID_INIT_IGNORE:
+ 	default:
+@@ -409,7 +410,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ 			users = button->input->users;
+ 			mutex_unlock(&button->input->mutex);
+ 			if (users)
+-				acpi_lid_update_state(device);
++				acpi_lid_update_state(device, true);
+ 		} else {
+ 			int keycode;
+ 
+diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
+index 5d4b72e21161..569a4a662dcd 100644
+--- a/drivers/ata/pata_ftide010.c
++++ b/drivers/ata/pata_ftide010.c
+@@ -256,14 +256,12 @@ static struct ata_port_operations pata_ftide010_port_ops = {
+ 	.qc_issue	= ftide010_qc_issue,
+ };
+ 
+-static struct ata_port_info ftide010_port_info[] = {
+-	{
+-		.flags		= ATA_FLAG_SLAVE_POSS,
+-		.mwdma_mask	= ATA_MWDMA2,
+-		.udma_mask	= ATA_UDMA6,
+-		.pio_mask	= ATA_PIO4,
+-		.port_ops	= &pata_ftide010_port_ops,
+-	},
++static struct ata_port_info ftide010_port_info = {
++	.flags		= ATA_FLAG_SLAVE_POSS,
++	.mwdma_mask	= ATA_MWDMA2,
++	.udma_mask	= ATA_UDMA6,
++	.pio_mask	= ATA_PIO4,
++	.port_ops	= &pata_ftide010_port_ops,
+ };
+ 
+ #if IS_ENABLED(CONFIG_SATA_GEMINI)
+@@ -349,6 +347,7 @@ static int pata_ftide010_gemini_cable_detect(struct ata_port *ap)
+ }
+ 
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	struct device *dev = ftide->dev;
+@@ -373,7 +372,13 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ 
+ 	/* Flag port as SATA-capable */
+ 	if (gemini_sata_bridge_enabled(sg, is_ata1))
+-		ftide010_port_info[0].flags |= ATA_FLAG_SATA;
++		pi->flags |= ATA_FLAG_SATA;
++
++	/* This device has broken DMA, only PIO works */
++	if (of_machine_is_compatible("itian,sq201")) {
++		pi->mwdma_mask = 0;
++		pi->udma_mask = 0;
++	}
+ 
+ 	/*
+ 	 * We assume that a simple 40-wire cable is used in the PATA mode.
+@@ -435,6 +440,7 @@ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
+ }
+ #else
+ static int pata_ftide010_gemini_init(struct ftide010 *ftide,
++				     struct ata_port_info *pi,
+ 				     bool is_ata1)
+ {
+ 	return -ENOTSUPP;
+@@ -446,7 +452,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+-	const struct ata_port_info pi = ftide010_port_info[0];
++	struct ata_port_info pi = ftide010_port_info;
+ 	const struct ata_port_info *ppi[] = { &pi, NULL };
+ 	struct ftide010 *ftide;
+ 	struct resource *res;
+@@ -490,6 +496,7 @@ static int pata_ftide010_probe(struct platform_device *pdev)
+ 		 * are ATA0. This will also set up the cable types.
+ 		 */
+ 		ret = pata_ftide010_gemini_init(ftide,
++				&pi,
+ 				(res->start == 0x63400000));
+ 		if (ret)
+ 			goto err_dis_clk;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8871b5044d9e..7d7c698c0213 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -3470,6 +3470,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ 					  (struct floppy_struct **)&outparam);
+ 		if (ret)
+ 			return ret;
++		memcpy(&inparam.g, outparam,
++				offsetof(struct floppy_struct, name));
++		outparam = &inparam.g;
+ 		break;
+ 	case FDMSGON:
+ 		UDP->flags |= FTD_MSG;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f73a27ea28cc..75947f04fc75 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -374,6 +374,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8723DE Bluetooth devices */
++	{ USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
+ 	{ USB_DEVICE(0x2ff8, 0xb011), .driver_info = BTUSB_REALTEK },
+ 
+ 	/* Additional Realtek 8821AE Bluetooth devices */
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 80d60f43db56..4576a1268e0e 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -490,32 +490,29 @@ static int sysc_check_registers(struct sysc *ddata)
+ 
+ /**
+  * syc_ioremap - ioremap register space for the interconnect target module
+- * @ddata: deviec driver data
++ * @ddata: device driver data
+  *
+  * Note that the interconnect target module registers can be anywhere
+- * within the first child device address space. For example, SGX has
+- * them at offset 0x1fc00 in the 32MB module address space. We just
+- * what we need around the interconnect target module registers.
++ * within the interconnect target module range. For example, SGX has
++ * them at offset 0x1fc00 in the 32MB module address space. And cpsw
++ * has them at offset 0x1200 in the CPSW_WR child. Usually the
++ * the interconnect target module registers are at the beginning of
++ * the module range though.
+  */
+ static int sysc_ioremap(struct sysc *ddata)
+ {
+-	u32 size = 0;
+-
+-	if (ddata->offsets[SYSC_SYSSTATUS] >= 0)
+-		size = ddata->offsets[SYSC_SYSSTATUS];
+-	else if (ddata->offsets[SYSC_SYSCONFIG] >= 0)
+-		size = ddata->offsets[SYSC_SYSCONFIG];
+-	else if (ddata->offsets[SYSC_REVISION] >= 0)
+-		size = ddata->offsets[SYSC_REVISION];
+-	else
+-		return -EINVAL;
++	int size;
+ 
+-	size &= 0xfff00;
+-	size += SZ_256;
++	size = max3(ddata->offsets[SYSC_REVISION],
++		    ddata->offsets[SYSC_SYSCONFIG],
++		    ddata->offsets[SYSC_SYSSTATUS]);
++
++	if (size < 0 || (size + sizeof(u32)) > ddata->module_size)
++		return -EINVAL;
+ 
+ 	ddata->module_va = devm_ioremap(ddata->dev,
+ 					ddata->module_pa,
+-					size);
++					size + sizeof(u32));
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+@@ -1178,10 +1175,10 @@ static int sysc_child_suspend_noirq(struct device *dev)
+ 	if (!pm_runtime_status_suspended(dev)) {
+ 		error = pm_generic_runtime_suspend(dev);
+ 		if (error) {
+-			dev_err(dev, "%s error at %i: %i\n",
+-				__func__, __LINE__, error);
++			dev_warn(dev, "%s busy at %i: %i\n",
++				 __func__, __LINE__, error);
+ 
+-			return error;
++			return 0;
+ 		}
+ 
+ 		error = sysc_runtime_suspend(ddata->dev);
+diff --git a/drivers/clk/x86/clk-st.c b/drivers/clk/x86/clk-st.c
+index fb62f3938008..3a0996f2d556 100644
+--- a/drivers/clk/x86/clk-st.c
++++ b/drivers/clk/x86/clk-st.c
+@@ -46,7 +46,7 @@ static int st_clk_probe(struct platform_device *pdev)
+ 		clk_oscout1_parents, ARRAY_SIZE(clk_oscout1_parents),
+ 		0, st_data->base + CLKDRVSTR2, OSCOUT1CLK25MHZ, 3, 0, NULL);
+ 
+-	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_25M]->clk);
++	clk_set_parent(hws[ST_CLK_MUX]->clk, hws[ST_CLK_48M]->clk);
+ 
+ 	hws[ST_CLK_GATE] = clk_hw_register_gate(NULL, "oscout1", "oscout1_mux",
+ 		0, st_data->base + MISCCLKCNTL1, OSCCLKENB,
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_dev.h b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+index 9a476bb6d4c7..af596455b420 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_dev.h
++++ b/drivers/crypto/cavium/nitrox/nitrox_dev.h
+@@ -35,6 +35,7 @@ struct nitrox_cmdq {
+ 	/* requests in backlog queues */
+ 	atomic_t backlog_count;
+ 
++	int write_idx;
+ 	/* command size 32B/64B */
+ 	u8 instr_size;
+ 	u8 qno;
+@@ -87,7 +88,7 @@ struct nitrox_bh {
+ 	struct bh_data *slc;
+ };
+ 
+-/* NITROX-5 driver state */
++/* NITROX-V driver state */
+ #define NITROX_UCODE_LOADED	0
+ #define NITROX_READY		1
+ 
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_lib.c b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+index 4fdc921ba611..9906c0086647 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_lib.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_lib.c
+@@ -36,6 +36,7 @@ static int cmdq_common_init(struct nitrox_cmdq *cmdq)
+ 	cmdq->head = PTR_ALIGN(cmdq->head_unaligned, PKT_IN_ALIGN);
+ 	cmdq->dma = PTR_ALIGN(cmdq->dma_unaligned, PKT_IN_ALIGN);
+ 	cmdq->qsize = (qsize + PKT_IN_ALIGN);
++	cmdq->write_idx = 0;
+ 
+ 	spin_lock_init(&cmdq->response_lock);
+ 	spin_lock_init(&cmdq->cmdq_lock);
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+index deaefd532aaa..4a362fc22f62 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_reqmgr.c
+@@ -42,6 +42,16 @@
+  *   Invalid flag options in AES-CCM IV.
+  */
+ 
++static inline int incr_index(int index, int count, int max)
++{
++	if ((index + count) >= max)
++		index = index + count - max;
++	else
++		index += count;
++
++	return index;
++}
++
+ /**
+  * dma_free_sglist - unmap and free the sg lists.
+  * @ndev: N5 device
+@@ -426,30 +436,29 @@ static void post_se_instr(struct nitrox_softreq *sr,
+ 			  struct nitrox_cmdq *cmdq)
+ {
+ 	struct nitrox_device *ndev = sr->ndev;
+-	union nps_pkt_in_instr_baoff_dbell pkt_in_baoff_dbell;
+-	u64 offset;
++	int idx;
+ 	u8 *ent;
+ 
+ 	spin_lock_bh(&cmdq->cmdq_lock);
+ 
+-	/* get the next write offset */
+-	offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(cmdq->qno);
+-	pkt_in_baoff_dbell.value = nitrox_read_csr(ndev, offset);
++	idx = cmdq->write_idx;
+ 	/* copy the instruction */
+-	ent = cmdq->head + pkt_in_baoff_dbell.s.aoff;
++	ent = cmdq->head + (idx * cmdq->instr_size);
+ 	memcpy(ent, &sr->instr, cmdq->instr_size);
+-	/* flush the command queue updates */
+-	dma_wmb();
+ 
+-	sr->tstamp = jiffies;
+ 	atomic_set(&sr->status, REQ_POSTED);
+ 	response_list_add(sr, cmdq);
++	sr->tstamp = jiffies;
++	/* flush the command queue updates */
++	dma_wmb();
+ 
+ 	/* Ring doorbell with count 1 */
+ 	writeq(1, cmdq->dbell_csr_addr);
+ 	/* orders the doorbell rings */
+ 	mmiowb();
+ 
++	cmdq->write_idx = incr_index(idx, 1, ndev->qlen);
++
+ 	spin_unlock_bh(&cmdq->cmdq_lock);
+ }
+ 
+@@ -459,6 +468,9 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 	struct nitrox_softreq *sr, *tmp;
+ 	int ret = 0;
+ 
++	if (!atomic_read(&cmdq->backlog_count))
++		return 0;
++
+ 	spin_lock_bh(&cmdq->backlog_lock);
+ 
+ 	list_for_each_entry_safe(sr, tmp, &cmdq->backlog_head, backlog) {
+@@ -466,7 +478,7 @@ static int post_backlog_cmds(struct nitrox_cmdq *cmdq)
+ 
+ 		/* submit until space available */
+ 		if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+-			ret = -EBUSY;
++			ret = -ENOSPC;
+ 			break;
+ 		}
+ 		/* delete from backlog list */
+@@ -491,23 +503,20 @@ static int nitrox_enqueue_request(struct nitrox_softreq *sr)
+ {
+ 	struct nitrox_cmdq *cmdq = sr->cmdq;
+ 	struct nitrox_device *ndev = sr->ndev;
+-	int ret = -EBUSY;
++
++	/* try to post backlog requests */
++	post_backlog_cmds(cmdq);
+ 
+ 	if (unlikely(cmdq_full(cmdq, ndev->qlen))) {
+ 		if (!(sr->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+-			return -EAGAIN;
+-
++			return -ENOSPC;
++		/* add to backlog list */
+ 		backlog_list_add(sr, cmdq);
+-	} else {
+-		ret = post_backlog_cmds(cmdq);
+-		if (ret) {
+-			backlog_list_add(sr, cmdq);
+-			return ret;
+-		}
+-		post_se_instr(sr, cmdq);
+-		ret = -EINPROGRESS;
++		return -EBUSY;
+ 	}
+-	return ret;
++	post_se_instr(sr, cmdq);
++
++	return -EINPROGRESS;
+ }
+ 
+ /**
+@@ -624,11 +633,9 @@ int nitrox_process_se_request(struct nitrox_device *ndev,
+ 	 */
+ 	sr->instr.fdata[0] = *((u64 *)&req->gph);
+ 	sr->instr.fdata[1] = 0;
+-	/* flush the soft_req changes before posting the cmd */
+-	wmb();
+ 
+ 	ret = nitrox_enqueue_request(sr);
+-	if (ret == -EAGAIN)
++	if (ret == -ENOSPC)
+ 		goto send_fail;
+ 
+ 	return ret;
+diff --git a/drivers/crypto/chelsio/chtls/chtls.h b/drivers/crypto/chelsio/chtls/chtls.h
+index a53a0e6ba024..7725b6ee14ef 100644
+--- a/drivers/crypto/chelsio/chtls/chtls.h
++++ b/drivers/crypto/chelsio/chtls/chtls.h
+@@ -96,6 +96,10 @@ enum csk_flags {
+ 	CSK_CONN_INLINE,	/* Connection on HW */
+ };
+ 
++enum chtls_cdev_state {
++	CHTLS_CDEV_STATE_UP = 1
++};
++
+ struct listen_ctx {
+ 	struct sock *lsk;
+ 	struct chtls_dev *cdev;
+@@ -146,6 +150,7 @@ struct chtls_dev {
+ 	unsigned int send_page_order;
+ 	int max_host_sndbuf;
+ 	struct key_map kmap;
++	unsigned int cdev_state;
+ };
+ 
+ struct chtls_hws {
+diff --git a/drivers/crypto/chelsio/chtls/chtls_main.c b/drivers/crypto/chelsio/chtls/chtls_main.c
+index 9b07f9165658..f59b044ebd25 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_main.c
++++ b/drivers/crypto/chelsio/chtls/chtls_main.c
+@@ -160,6 +160,7 @@ static void chtls_register_dev(struct chtls_dev *cdev)
+ 	tlsdev->hash = chtls_create_hash;
+ 	tlsdev->unhash = chtls_destroy_hash;
+ 	tls_register_device(&cdev->tlsdev);
++	cdev->cdev_state = CHTLS_CDEV_STATE_UP;
+ }
+ 
+ static void chtls_unregister_dev(struct chtls_dev *cdev)
+@@ -281,8 +282,10 @@ static void chtls_free_all_uld(void)
+ 	struct chtls_dev *cdev, *tmp;
+ 
+ 	mutex_lock(&cdev_mutex);
+-	list_for_each_entry_safe(cdev, tmp, &cdev_list, list)
+-		chtls_free_uld(cdev);
++	list_for_each_entry_safe(cdev, tmp, &cdev_list, list) {
++		if (cdev->cdev_state == CHTLS_CDEV_STATE_UP)
++			chtls_free_uld(cdev);
++	}
+ 	mutex_unlock(&cdev_mutex);
+ }
+ 
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index d0d5c4dbe097..5762c3c383f2 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -730,7 +730,8 @@ static int altr_s10_sdram_probe(struct platform_device *pdev)
+ 			 S10_DDR0_IRQ_MASK)) {
+ 		edac_printk(KERN_ERR, EDAC_MC,
+ 			    "Error clearing SDRAM ECC count\n");
+-		return -ENODEV;
++		ret = -ENODEV;
++		goto err2;
+ 	}
+ 
+ 	if (regmap_update_bits(drvdata->mc_vbase, priv->ecc_irq_en_offset,
+diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
+index 7481955160a4..20374b8248f0 100644
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -1075,14 +1075,14 @@ int __init edac_mc_sysfs_init(void)
+ 
+ 	err = device_add(mci_pdev);
+ 	if (err < 0)
+-		goto out_dev_free;
++		goto out_put_device;
+ 
+ 	edac_dbg(0, "device %s created\n", dev_name(mci_pdev));
+ 
+ 	return 0;
+ 
+- out_dev_free:
+-	kfree(mci_pdev);
++ out_put_device:
++	put_device(mci_pdev);
+  out:
+ 	return err;
+ }
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 8ed4dd9c571b..8e120bf60624 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1177,15 +1177,14 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 	rc = device_add(pvt->addrmatch_dev);
+ 	if (rc < 0)
+-		return rc;
++		goto err_put_addrmatch;
+ 
+ 	if (!pvt->is_registered) {
+ 		pvt->chancounts_dev = kzalloc(sizeof(*pvt->chancounts_dev),
+ 					      GFP_KERNEL);
+ 		if (!pvt->chancounts_dev) {
+-			put_device(pvt->addrmatch_dev);
+-			device_del(pvt->addrmatch_dev);
+-			return -ENOMEM;
++			rc = -ENOMEM;
++			goto err_del_addrmatch;
+ 		}
+ 
+ 		pvt->chancounts_dev->type = &all_channel_counts_type;
+@@ -1199,9 +1198,18 @@ static int i7core_create_sysfs_devices(struct mem_ctl_info *mci)
+ 
+ 		rc = device_add(pvt->chancounts_dev);
+ 		if (rc < 0)
+-			return rc;
++			goto err_put_chancounts;
+ 	}
+ 	return 0;
++
++err_put_chancounts:
++	put_device(pvt->chancounts_dev);
++err_del_addrmatch:
++	device_del(pvt->addrmatch_dev);
++err_put_addrmatch:
++	put_device(pvt->addrmatch_dev);
++
++	return rc;
+ }
+ 
+ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+@@ -1211,11 +1219,11 @@ static void i7core_delete_sysfs_devices(struct mem_ctl_info *mci)
+ 	edac_dbg(1, "\n");
+ 
+ 	if (!pvt->is_registered) {
+-		put_device(pvt->chancounts_dev);
+ 		device_del(pvt->chancounts_dev);
++		put_device(pvt->chancounts_dev);
+ 	}
+-	put_device(pvt->addrmatch_dev);
+ 	device_del(pvt->addrmatch_dev);
++	put_device(pvt->addrmatch_dev);
+ }
+ 
+ /****************************************************************************
+diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
+index e1037582e34d..b2635326546e 100644
+--- a/drivers/gpio/gpio-menz127.c
++++ b/drivers/gpio/gpio-menz127.c
+@@ -56,9 +56,9 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
+ 		rnd = fls(debounce) - 1;
+ 
+ 		if (rnd && (debounce & BIT(rnd - 1)))
+-			debounce = round_up(debounce, MEN_Z127_DB_MIN_US);
++			debounce = roundup(debounce, MEN_Z127_DB_MIN_US);
+ 		else
+-			debounce = round_down(debounce, MEN_Z127_DB_MIN_US);
++			debounce = rounddown(debounce, MEN_Z127_DB_MIN_US);
+ 
+ 		if (debounce > MEN_Z127_DB_MAX_US)
+ 			debounce = MEN_Z127_DB_MAX_US;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index d5d79727c55d..d9e4da146227 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -323,13 +323,6 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
+-	if (ret) {
+-		dev_err(tgi->dev,
+-			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
+-		return ret;
+-	}
+-
+ 	spin_lock_irqsave(&bank->lvl_lock[port], flags);
+ 
+ 	val = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
+@@ -342,6 +335,14 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ 	tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, gpio), gpio, 0);
+ 	tegra_gpio_enable(tgi, gpio);
+ 
++	ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
++	if (ret) {
++		dev_err(tgi->dev,
++			"unable to lock Tegra GPIO %u as IRQ\n", gpio);
++		tegra_gpio_disable(tgi, gpio);
++		return ret;
++	}
++
+ 	if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
+ 		irq_set_handler_locked(d, handle_level_irq);
+ 	else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 5a196ec49be8..7200eea4f918 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -975,13 +975,9 @@ static int amdgpu_cs_ib_fill(struct amdgpu_device *adev,
+ 		if (r)
+ 			return r;
+ 
+-		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE) {
+-			parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT;
+-			if (!parser->ctx->preamble_presented) {
+-				parser->job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
+-				parser->ctx->preamble_presented = true;
+-			}
+-		}
++		if (chunk_ib->flags & AMDGPU_IB_FLAG_PREAMBLE)
++			parser->job->preamble_status |=
++				AMDGPU_PREAMBLE_IB_PRESENT;
+ 
+ 		if (parser->job->ring && parser->job->ring != ring)
+ 			return -EINVAL;
+@@ -1206,6 +1202,12 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
+ 
+ 	amdgpu_cs_post_dependencies(p);
+ 
++	if ((job->preamble_status & AMDGPU_PREAMBLE_IB_PRESENT) &&
++	    !p->ctx->preamble_presented) {
++		job->preamble_status |= AMDGPU_PREAMBLE_IB_PRESENT_FIRST;
++		p->ctx->preamble_presented = true;
++	}
++
+ 	cs->out.handle = seq;
+ 	job->uf_sequence = seq;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 7aaa263ad8c7..6b5d4a20860d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -164,8 +164,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 		return r;
+ 	}
+ 
++	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (ring->funcs->emit_pipeline_sync && job &&
+ 	    ((tmp = amdgpu_sync_get_fence(&job->sched_sync, NULL)) ||
++	     (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+ 	     amdgpu_vm_need_pipeline_sync(ring, job))) {
+ 		need_pipe_sync = true;
+ 		dma_fence_put(tmp);
+@@ -196,7 +198,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	}
+ 
+ 	skip_preamble = ring->current_ctx == fence_ctx;
+-	need_ctx_switch = ring->current_ctx != fence_ctx;
+ 	if (job && ring->funcs->emit_cntxcntl) {
+ 		if (need_ctx_switch)
+ 			status |= AMDGPU_HAVE_CTX_SWITCH;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index fdcb498f6d19..c31fff32a321 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -123,6 +123,7 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
+ 	 * is validated on next vm use to avoid fault.
+ 	 * */
+ 	list_move_tail(&base->vm_status, &vm->evicted);
++	base->moved = true;
+ }
+ 
+ /**
+@@ -303,7 +304,6 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	uint64_t addr;
+ 	int r;
+ 
+-	addr = amdgpu_bo_gpu_offset(bo);
+ 	entries = amdgpu_bo_size(bo) / 8;
+ 
+ 	if (pte_support_ats) {
+@@ -335,6 +335,7 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
+ 	if (r)
+ 		goto error;
+ 
++	addr = amdgpu_bo_gpu_offset(bo);
+ 	if (ats_entries) {
+ 		uint64_t ats_value;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 818874b13c99..9057a5adb31b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -5614,6 +5614,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+ 
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->enter_safe_mode(adev);
+ 	switch (adev->asic_type) {
+ 	case CHIP_CARRIZO:
+ 	case CHIP_STONEY:
+@@ -5663,7 +5668,11 @@ static int gfx_v8_0_set_powergating_state(void *handle,
+ 	default:
+ 		break;
+ 	}
+-
++	if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_SMG |
++				AMD_PG_SUPPORT_RLC_SMU_HS |
++				AMD_PG_SUPPORT_CP |
++				AMD_PG_SUPPORT_GFX_DMG))
++		adev->gfx.rlc.funcs->exit_safe_mode(adev);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+index 7a1e77c93bf1..d8e469c594bb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+@@ -1354,8 +1354,6 @@ static int kv_dpm_enable(struct amdgpu_device *adev)
+ 		return ret;
+ 	}
+ 
+-	kv_update_current_ps(adev, adev->pm.dpm.boot_ps);
+-
+ 	if (adev->irq.installed &&
+ 	    amdgpu_is_internal_thermal_sensor(adev->pm.int_thermal_type)) {
+ 		ret = kv_set_thermal_temperature_range(adev, KV_TEMP_RANGE_MIN, KV_TEMP_RANGE_MAX);
+@@ -3061,7 +3059,7 @@ static int kv_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 5c97a3671726..606f461dce49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -6887,7 +6887,6 @@ static int si_dpm_enable(struct amdgpu_device *adev)
+ 
+ 	si_enable_auto_throttle_source(adev, AMDGPU_DPM_AUTO_THROTTLE_SRC_THERMAL, true);
+ 	si_thermal_start_thermal_controller(adev);
+-	ni_update_current_ps(adev, boot_ps);
+ 
+ 	return 0;
+ }
+@@ -7764,7 +7763,7 @@ static int si_dpm_hw_init(void *handle)
+ 	else
+ 		adev->pm.dpm_enabled = true;
+ 	mutex_unlock(&adev->pm.mutex);
+-
++	amdgpu_pm_compute_clocks(adev);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 88b09dd758ba..ca137757a69e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -133,7 +133,7 @@ static bool calculate_fb_and_fractional_fb_divider(
+ 	uint64_t feedback_divider;
+ 
+ 	feedback_divider =
+-		(uint64_t)(target_pix_clk_khz * ref_divider * post_divider);
++		(uint64_t)target_pix_clk_khz * ref_divider * post_divider;
+ 	feedback_divider *= 10;
+ 	/* additional factor, since we divide by 10 afterwards */
+ 	feedback_divider *= (uint64_t)(calc_pll_cs->fract_fb_divider_factor);
+@@ -145,8 +145,8 @@ static bool calculate_fb_and_fractional_fb_divider(
+  * of fractional feedback decimal point and the fractional FB Divider precision
+  * is 2 then the equation becomes (ullfeedbackDivider + 5*100) / (10*100))*/
+ 
+-	feedback_divider += (uint64_t)
+-			(5 * calc_pll_cs->fract_fb_divider_precision_factor);
++	feedback_divider += 5ULL *
++			    calc_pll_cs->fract_fb_divider_precision_factor;
+ 	feedback_divider =
+ 		div_u64(feedback_divider,
+ 			calc_pll_cs->fract_fb_divider_precision_factor * 10);
+@@ -203,8 +203,8 @@ static bool calc_fb_divider_checking_tolerance(
+ 			&fract_feedback_divider);
+ 
+ 	/*Actual calculated value*/
+-	actual_calc_clk_khz = (uint64_t)(feedback_divider *
+-					calc_pll_cs->fract_fb_divider_factor) +
++	actual_calc_clk_khz = (uint64_t)feedback_divider *
++					calc_pll_cs->fract_fb_divider_factor +
+ 							fract_feedback_divider;
+ 	actual_calc_clk_khz *= calc_pll_cs->ref_freq_khz;
+ 	actual_calc_clk_khz =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+index c2037daa8e66..0efbf411667a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+@@ -239,6 +239,8 @@ void dml1_extract_rq_regs(
+ 	extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_l), rq_param.sizing.rq_l);
+ 	if (rq_param.yuv420)
+ 		extract_rq_sizing_regs(mode_lib, &(rq_regs->rq_regs_c), rq_param.sizing.rq_c);
++	else
++		memset(&(rq_regs->rq_regs_c), 0, sizeof(rq_regs->rq_regs_c));
+ 
+ 	rq_regs->rq_regs_l.swath_height = dml_log2(rq_param.dlg.rq_l.swath_height);
+ 	rq_regs->rq_regs_c.swath_height = dml_log2(rq_param.dlg.rq_c.swath_height);
+diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+index b42e286616b0..84da7a5b84f3 100644
+--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
++++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
+@@ -37,7 +37,9 @@ static int gem_show(struct seq_file *m, void *arg)
+ 		return ret;
+ 
+ 	seq_printf(m, "All Objects:\n");
++	mutex_lock(&priv->list_lock);
+ 	omap_gem_describe_objects(&priv->obj_list, m);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	mutex_unlock(&dev->struct_mutex);
+ 
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index ef3b0e3571ec..5fcf9eaf3eaf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -540,7 +540,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
+ 	priv->omaprev = soc ? (unsigned int)soc->data : 0;
+ 	priv->wq = alloc_ordered_workqueue("omapdrm", 0);
+ 
+-	spin_lock_init(&priv->list_lock);
++	mutex_init(&priv->list_lock);
+ 	INIT_LIST_HEAD(&priv->obj_list);
+ 
+ 	/* Allocate and initialize the DRM device. */
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
+index 6eaee4df4559..f27c8e216adf 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.h
++++ b/drivers/gpu/drm/omapdrm/omap_drv.h
+@@ -71,7 +71,7 @@ struct omap_drm_private {
+ 	struct workqueue_struct *wq;
+ 
+ 	/* lock for obj_list below */
+-	spinlock_t list_lock;
++	struct mutex list_lock;
+ 
+ 	/* list of GEM objects: */
+ 	struct list_head obj_list;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index 17a53d207978..7a029b892a37 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1001,6 +1001,7 @@ int omap_gem_resume(struct drm_device *dev)
+ 	struct omap_gem_object *omap_obj;
+ 	int ret = 0;
+ 
++	mutex_lock(&priv->list_lock);
+ 	list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
+ 		if (omap_obj->block) {
+ 			struct drm_gem_object *obj = &omap_obj->base;
+@@ -1012,12 +1013,14 @@ int omap_gem_resume(struct drm_device *dev)
+ 					omap_obj->roll, true);
+ 			if (ret) {
+ 				dev_err(dev->dev, "could not repin: %d\n", ret);
+-				return ret;
++				goto done;
+ 			}
+ 		}
+ 	}
+ 
+-	return 0;
++done:
++	mutex_unlock(&priv->list_lock);
++	return ret;
+ }
+ #endif
+ 
+@@ -1085,9 +1088,9 @@ void omap_gem_free_object(struct drm_gem_object *obj)
+ 
+ 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_del(&omap_obj->mm_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	/* this means the object is still pinned.. which really should
+ 	 * not happen.  I think..
+@@ -1206,9 +1209,9 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
+ 			goto err_release;
+ 	}
+ 
+-	spin_lock(&priv->list_lock);
++	mutex_lock(&priv->list_lock);
+ 	list_add(&omap_obj->mm_list, &priv->obj_list);
+-	spin_unlock(&priv->list_lock);
++	mutex_unlock(&priv->list_lock);
+ 
+ 	return obj;
+ 
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 50d19605c38f..e15fa2389e3f 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -283,7 +283,6 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 		remote = of_graph_get_remote_port_parent(ep);
+ 		if (!remote) {
+ 			DRM_DEBUG_DRIVER("Error retrieving the output node\n");
+-			of_node_put(remote);
+ 			continue;
+ 		}
+ 
+@@ -297,11 +296,13 @@ static int sun4i_drv_add_endpoints(struct device *dev,
+ 
+ 			if (of_graph_parse_endpoint(ep, &endpoint)) {
+ 				DRM_DEBUG_DRIVER("Couldn't parse endpoint\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 
+ 			if (!endpoint.id) {
+ 				DRM_DEBUG_DRIVER("Endpoint is our panel... skipping\n");
++				of_node_put(remote);
+ 				continue;
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+index 5a52fc489a9d..966688f04741 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
++++ b/drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
+@@ -477,13 +477,15 @@ int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node)
+ 			dev_err(dev, "Couldn't create the PHY clock\n");
+ 			goto err_put_clk_pll0;
+ 		}
++
++		clk_prepare_enable(phy->clk_phy);
+ 	}
+ 
+ 	phy->rst_phy = of_reset_control_get_shared(node, "phy");
+ 	if (IS_ERR(phy->rst_phy)) {
+ 		dev_err(dev, "Could not get phy reset control\n");
+ 		ret = PTR_ERR(phy->rst_phy);
+-		goto err_put_clk_pll0;
++		goto err_disable_clk_phy;
+ 	}
+ 
+ 	ret = reset_control_deassert(phy->rst_phy);
+@@ -514,6 +516,8 @@ err_deassert_rst_phy:
+ 	reset_control_assert(phy->rst_phy);
+ err_put_rst_phy:
+ 	reset_control_put(phy->rst_phy);
++err_disable_clk_phy:
++	clk_disable_unprepare(phy->clk_phy);
+ err_put_clk_pll0:
+ 	if (phy->variant->has_phy_clk)
+ 		clk_put(phy->clk_pll0);
+@@ -531,6 +535,7 @@ void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi)
+ 
+ 	clk_disable_unprepare(phy->clk_mod);
+ 	clk_disable_unprepare(phy->clk_bus);
++	clk_disable_unprepare(phy->clk_phy);
+ 
+ 	reset_control_assert(phy->rst_phy);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index a043ac3aae98..26005abd9c5d 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -85,6 +85,11 @@ struct v3d_dev {
+ 	 */
+ 	struct mutex reset_lock;
+ 
++	/* Lock taken when creating and pushing the GPU scheduler
++	 * jobs, to keep the sched-fence seqnos in order.
++	 */
++	struct mutex sched_lock;
++
+ 	struct {
+ 		u32 num_allocated;
+ 		u32 pages_allocated;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index b513f9189caf..269fe16379c0 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -550,6 +550,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	if (ret)
+ 		goto fail;
+ 
++	mutex_lock(&v3d->sched_lock);
+ 	if (exec->bin.start != exec->bin.end) {
+ 		ret = drm_sched_job_init(&exec->bin.base,
+ 					 &v3d->queue[V3D_BIN].sched,
+@@ -576,6 +577,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	kref_get(&exec->refcount); /* put by scheduler job completion */
+ 	drm_sched_entity_push_job(&exec->render.base,
+ 				  &v3d_priv->sched_entity[V3D_RENDER]);
++	mutex_unlock(&v3d->sched_lock);
+ 
+ 	v3d_attach_object_fences(exec);
+ 
+@@ -594,6 +596,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+ 	return 0;
+ 
+ fail_unreserve:
++	mutex_unlock(&v3d->sched_lock);
+ 	v3d_unlock_bo_reservations(dev, exec, &acquire_ctx);
+ fail:
+ 	v3d_exec_put(exec);
+@@ -615,6 +618,7 @@ v3d_gem_init(struct drm_device *dev)
+ 	spin_lock_init(&v3d->job_lock);
+ 	mutex_init(&v3d->bo_lock);
+ 	mutex_init(&v3d->reset_lock);
++	mutex_init(&v3d->sched_lock);
+ 
+ 	/* Note: We don't allocate address 0.  Various bits of HW
+ 	 * treat 0 as special, such as the occlusion query counters
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index cf5aea1d6488..203ddf5723e8 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -543,6 +543,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 	/* Control word */
+ 	vc4_dlist_write(vc4_state,
+ 			SCALER_CTL0_VALID |
++			VC4_SET_FIELD(SCALER_CTL0_RGBA_EXPAND_ROUND, SCALER_CTL0_RGBA_EXPAND) |
+ 			(format->pixel_order << SCALER_CTL0_ORDER_SHIFT) |
+ 			(format->hvs << SCALER_CTL0_PIXEL_FORMAT_SHIFT) |
+ 			VC4_SET_FIELD(tiling, SCALER_CTL0_TILING) |
+@@ -874,7 +875,9 @@ static bool vc4_format_mod_supported(struct drm_plane *plane,
+ 	case DRM_FORMAT_YUV420:
+ 	case DRM_FORMAT_YVU420:
+ 	case DRM_FORMAT_NV12:
++	case DRM_FORMAT_NV21:
+ 	case DRM_FORMAT_NV16:
++	case DRM_FORMAT_NV61:
+ 	default:
+ 		return (modifier == DRM_FORMAT_MOD_LINEAR);
+ 	}
+diff --git a/drivers/hid/hid-ntrig.c b/drivers/hid/hid-ntrig.c
+index 43b1c7234316..9bc6f4867cb3 100644
+--- a/drivers/hid/hid-ntrig.c
++++ b/drivers/hid/hid-ntrig.c
+@@ -955,6 +955,8 @@ static int ntrig_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 
+ 	ret = sysfs_create_group(&hdev->dev.kobj,
+ 			&ntrig_attribute_group);
++	if (ret)
++		hid_err(hdev, "cannot create sysfs group\n");
+ 
+ 	return 0;
+ err_free:
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 5fd1159fc095..64773433b947 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -1004,18 +1004,18 @@ static int i2c_hid_probe(struct i2c_client *client,
+ 		return client->irq;
+ 	}
+ 
+-	ihid = kzalloc(sizeof(struct i2c_hid), GFP_KERNEL);
++	ihid = devm_kzalloc(&client->dev, sizeof(*ihid), GFP_KERNEL);
+ 	if (!ihid)
+ 		return -ENOMEM;
+ 
+ 	if (client->dev.of_node) {
+ 		ret = i2c_hid_of_probe(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else if (!platform_data) {
+ 		ret = i2c_hid_acpi_pdata(client, &ihid->pdata);
+ 		if (ret)
+-			goto err;
++			return ret;
+ 	} else {
+ 		ihid->pdata = *platform_data;
+ 	}
+@@ -1128,7 +1128,6 @@ err_regulator:
+ 
+ err:
+ 	i2c_hid_free_buffers(ihid);
+-	kfree(ihid);
+ 	return ret;
+ }
+ 
+@@ -1152,8 +1151,6 @@ static int i2c_hid_remove(struct i2c_client *client)
+ 
+ 	regulator_disable(ihid->pdata.supply);
+ 
+-	kfree(ihid);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
+index 9ef84998c7f3..37db2eb66ed7 100644
+--- a/drivers/hwmon/adt7475.c
++++ b/drivers/hwmon/adt7475.c
+@@ -303,14 +303,18 @@ static inline u16 volt2reg(int channel, long volt, u8 bypass_attn)
+ 	return clamp_val(reg, 0, 1023) & (0xff << 2);
+ }
+ 
+-static u16 adt7475_read_word(struct i2c_client *client, int reg)
++static int adt7475_read_word(struct i2c_client *client, int reg)
+ {
+-	u16 val;
++	int val1, val2;
+ 
+-	val = i2c_smbus_read_byte_data(client, reg);
+-	val |= (i2c_smbus_read_byte_data(client, reg + 1) << 8);
++	val1 = i2c_smbus_read_byte_data(client, reg);
++	if (val1 < 0)
++		return val1;
++	val2 = i2c_smbus_read_byte_data(client, reg + 1);
++	if (val2 < 0)
++		return val2;
+ 
+-	return val;
++	return val1 | (val2 << 8);
+ }
+ 
+ static void adt7475_write_word(struct i2c_client *client, int reg, u16 val)
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index e9e6aeabbf84..71d3445ba869 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -17,7 +17,7 @@
+  * Bi-directional Current/Power Monitor with I2C Interface
+  * Datasheet: http://www.ti.com/product/ina230
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  * Thanks to Jan Volkering
+  *
+  * This program is free software; you can redistribute it and/or modify
+@@ -329,6 +329,15 @@ static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
+ 	return 0;
+ }
+ 
++static ssize_t ina2xx_show_shunt(struct device *dev,
++			      struct device_attribute *da,
++			      char *buf)
++{
++	struct ina2xx_data *data = dev_get_drvdata(dev);
++
++	return snprintf(buf, PAGE_SIZE, "%li\n", data->rshunt);
++}
++
+ static ssize_t ina2xx_store_shunt(struct device *dev,
+ 				  struct device_attribute *da,
+ 				  const char *buf, size_t count)
+@@ -403,7 +412,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
+ 
+ /* shunt resistance */
+ static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
+-			  ina2xx_show_value, ina2xx_store_shunt,
++			  ina2xx_show_shunt, ina2xx_store_shunt,
+ 			  INA2XX_CALIBRATION);
+ 
+ /* update interval (ina226 only) */
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index da962aa2cef5..fc6b7f8b62fb 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -139,7 +139,8 @@ static int intel_th_remove(struct device *dev)
+ 			th->thdev[i] = NULL;
+ 		}
+ 
+-		th->num_thdevs = lowest;
++		if (lowest >= 0)
++			th->num_thdevs = lowest;
+ 	}
+ 
+ 	if (thdrv->attr_group)
+@@ -487,7 +488,7 @@ static const struct intel_th_subdevice {
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+ 			{
+-				.start	= TH_MMIO_SW,
++				.start	= 1, /* use resource[1] */
+ 				.end	= 0,
+ 				.flags	= IORESOURCE_MEM,
+ 			},
+@@ -580,6 +581,7 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 	struct intel_th_device *thdev;
+ 	struct resource res[3];
+ 	unsigned int req = 0;
++	bool is64bit = false;
+ 	int r, err;
+ 
+ 	thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
+@@ -589,12 +591,18 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 
+ 	thdev->drvdata = th->drvdata;
+ 
++	for (r = 0; r < th->num_resources; r++)
++		if (th->resource[r].flags & IORESOURCE_MEM_64) {
++			is64bit = true;
++			break;
++		}
++
+ 	memcpy(res, subdev->res,
+ 	       sizeof(struct resource) * subdev->nres);
+ 
+ 	for (r = 0; r < subdev->nres; r++) {
+ 		struct resource *devres = th->resource;
+-		int bar = TH_MMIO_CONFIG;
++		int bar = 0; /* cut subdevices' MMIO from resource[0] */
+ 
+ 		/*
+ 		 * Take .end == 0 to mean 'take the whole bar',
+@@ -603,6 +611,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
+ 		 */
+ 		if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
+ 			bar = res[r].start;
++			if (is64bit)
++				bar *= 2;
+ 			res[r].start = 0;
+ 			res[r].end = resource_size(&devres[bar]) - 1;
+ 		}
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 45fcf0c37a9e..2806cdeda053 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1417,6 +1417,13 @@ static void i801_add_tco(struct i801_priv *priv)
+ }
+ 
+ #ifdef CONFIG_ACPI
++static bool i801_acpi_is_smbus_ioport(const struct i801_priv *priv,
++				      acpi_physical_address address)
++{
++	return address >= priv->smba &&
++	       address <= pci_resource_end(priv->pci_dev, SMBBAR);
++}
++
+ static acpi_status
+ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 		     u64 *value, void *handler_context, void *region_context)
+@@ -1432,7 +1439,7 @@ i801_acpi_io_handler(u32 function, acpi_physical_address address, u32 bits,
+ 	 */
+ 	mutex_lock(&priv->acpi_lock);
+ 
+-	if (!priv->acpi_reserved) {
++	if (!priv->acpi_reserved && i801_acpi_is_smbus_ioport(priv, address)) {
+ 		priv->acpi_reserved = true;
+ 
+ 		dev_warn(&pdev->dev, "BIOS is accessing SMBus registers\n");
+diff --git a/drivers/iio/accel/adxl345_core.c b/drivers/iio/accel/adxl345_core.c
+index 7251d0e63d74..98080e05ac6d 100644
+--- a/drivers/iio/accel/adxl345_core.c
++++ b/drivers/iio/accel/adxl345_core.c
+@@ -21,6 +21,8 @@
+ #define ADXL345_REG_DATAX0		0x32
+ #define ADXL345_REG_DATAY0		0x34
+ #define ADXL345_REG_DATAZ0		0x36
++#define ADXL345_REG_DATA_AXIS(index)	\
++	(ADXL345_REG_DATAX0 + (index) * sizeof(__le16))
+ 
+ #define ADXL345_POWER_CTL_MEASURE	BIT(3)
+ #define ADXL345_POWER_CTL_STANDBY	0x00
+@@ -47,19 +49,19 @@ struct adxl345_data {
+ 	u8 data_range;
+ };
+ 
+-#define ADXL345_CHANNEL(reg, axis) {					\
++#define ADXL345_CHANNEL(index, axis) {					\
+ 	.type = IIO_ACCEL,						\
+ 	.modified = 1,							\
+ 	.channel2 = IIO_MOD_##axis,					\
+-	.address = reg,							\
++	.address = index,						\
+ 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),			\
+ 	.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),		\
+ }
+ 
+ static const struct iio_chan_spec adxl345_channels[] = {
+-	ADXL345_CHANNEL(ADXL345_REG_DATAX0, X),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAY0, Y),
+-	ADXL345_CHANNEL(ADXL345_REG_DATAZ0, Z),
++	ADXL345_CHANNEL(0, X),
++	ADXL345_CHANNEL(1, Y),
++	ADXL345_CHANNEL(2, Z),
+ };
+ 
+ static int adxl345_read_raw(struct iio_dev *indio_dev,
+@@ -67,7 +69,7 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 			    int *val, int *val2, long mask)
+ {
+ 	struct adxl345_data *data = iio_priv(indio_dev);
+-	__le16 regval;
++	__le16 accel;
+ 	int ret;
+ 
+ 	switch (mask) {
+@@ -77,12 +79,13 @@ static int adxl345_read_raw(struct iio_dev *indio_dev,
+ 		 * ADXL345_REG_DATA(X0/Y0/Z0) contain the least significant byte
+ 		 * and ADXL345_REG_DATA(X0/Y0/Z0) + 1 the most significant byte
+ 		 */
+-		ret = regmap_bulk_read(data->regmap, chan->address, &regval,
+-				       sizeof(regval));
++		ret = regmap_bulk_read(data->regmap,
++				       ADXL345_REG_DATA_AXIS(chan->address),
++				       &accel, sizeof(accel));
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		*val = sign_extend32(le16_to_cpu(regval), 12);
++		*val = sign_extend32(le16_to_cpu(accel), 12);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = 0;
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index 0635a79864bf..d1239624187d 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/regmap.h>
++#include <linux/sched/task.h>
+ #include <linux/util_macros.h>
+ 
+ #include <linux/platform_data/ina2xx.h>
+@@ -826,6 +827,7 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ {
+ 	struct ina2xx_chip_info *chip = iio_priv(indio_dev);
+ 	unsigned int sampling_us = SAMPLING_PERIOD(chip);
++	struct task_struct *task;
+ 
+ 	dev_dbg(&indio_dev->dev, "Enabling buffer w/ scan_mask %02x, freq = %d, avg =%u\n",
+ 		(unsigned int)(*indio_dev->active_scan_mask),
+@@ -835,11 +837,17 @@ static int ina2xx_buffer_enable(struct iio_dev *indio_dev)
+ 	dev_dbg(&indio_dev->dev, "Async readout mode: %d\n",
+ 		chip->allow_async_readout);
+ 
+-	chip->task = kthread_run(ina2xx_capture_thread, (void *)indio_dev,
+-				 "%s:%d-%uus", indio_dev->name, indio_dev->id,
+-				 sampling_us);
++	task = kthread_create(ina2xx_capture_thread, (void *)indio_dev,
++			      "%s:%d-%uus", indio_dev->name, indio_dev->id,
++			      sampling_us);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++
++	get_task_struct(task);
++	wake_up_process(task);
++	chip->task = task;
+ 
+-	return PTR_ERR_OR_ZERO(chip->task);
++	return 0;
+ }
+ 
+ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+@@ -848,6 +856,7 @@ static int ina2xx_buffer_disable(struct iio_dev *indio_dev)
+ 
+ 	if (chip->task) {
+ 		kthread_stop(chip->task);
++		put_task_struct(chip->task);
+ 		chip->task = NULL;
+ 	}
+ 
+diff --git a/drivers/iio/counter/104-quad-8.c b/drivers/iio/counter/104-quad-8.c
+index b56985078d8c..4be85ec54af4 100644
+--- a/drivers/iio/counter/104-quad-8.c
++++ b/drivers/iio/counter/104-quad-8.c
+@@ -138,7 +138,7 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 			outb(val >> (8 * i), base_offset);
+ 
+ 		/* Reset Borrow, Carry, Compare, and Sign flags */
+-		outb(0x02, base_offset + 1);
++		outb(0x04, base_offset + 1);
+ 		/* Reset Error flag */
+ 		outb(0x06, base_offset + 1);
+ 
+diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
+index c8963e91f92a..3ee0adfb45e9 100644
+--- a/drivers/infiniband/core/rw.c
++++ b/drivers/infiniband/core/rw.c
+@@ -87,7 +87,7 @@ static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num,
+ 	}
+ 
+ 	ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE);
+-	if (ret < nents) {
++	if (ret < 0 || ret < nents) {
+ 		ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 583d3a10b940..0e5eb0f547d3 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -2812,6 +2812,9 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources)
+ 		goto err_res;
+ 
++	if (!num_specs)
++		goto out;
++
+ 	resources->counters =
+ 		kcalloc(num_specs, sizeof(*resources->counters), GFP_KERNEL);
+ 
+@@ -2824,8 +2827,8 @@ static struct ib_uflow_resources *flow_resources_alloc(size_t num_specs)
+ 	if (!resources->collection)
+ 		goto err_collection;
+ 
++out:
+ 	resources->max = num_specs;
+-
+ 	return resources;
+ 
+ err_collection:
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 2094d136513d..92d8469e28f3 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -429,6 +429,7 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
+ 			list_del(&entry->obj_list);
+ 		kfree(entry);
+ 	}
++	file->ev_queue.is_closed = 1;
+ 	spin_unlock_irq(&file->ev_queue.lock);
+ 
+ 	uverbs_close_fd(filp);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 50d8f1fc98d5..e426b990c1dd 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2354,7 +2354,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		srq = qp->srq;
+ 		if (!srq)
+ 			return -EINVAL;
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2369,7 +2369,7 @@ static int bnxt_qplib_cq_process_res_rc(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process RC ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2437,7 +2437,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		if (!srq)
+ 			return -EINVAL;
+ 
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2452,7 +2452,7 @@ static int bnxt_qplib_cq_process_res_ud(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process UD ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2546,7 +2546,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 				"QPLIB: FP: SRQ used but not defined??");
+ 			return -EINVAL;
+ 		}
+-		if (wr_id_idx > srq->hwq.max_elements) {
++		if (wr_id_idx >= srq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 ");
+ 			dev_err(&cq->hwq.pdev->dev,
+@@ -2561,7 +2561,7 @@ static int bnxt_qplib_cq_process_res_raweth_qp1(struct bnxt_qplib_cq *cq,
+ 		*pcqe = cqe;
+ 	} else {
+ 		rq = &qp->rq;
+-		if (wr_id_idx > rq->hwq.max_elements) {
++		if (wr_id_idx >= rq->hwq.max_elements) {
+ 			dev_err(&cq->hwq.pdev->dev,
+ 				"QPLIB: FP: CQ Process Raw/QP1 RQ wr_id ");
+ 			dev_err(&cq->hwq.pdev->dev,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 2f3f32eaa1d5..4097f3fa25c5 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -197,7 +197,7 @@ int bnxt_qplib_get_sgid(struct bnxt_qplib_res *res,
+ 			struct bnxt_qplib_sgid_tbl *sgid_tbl, int index,
+ 			struct bnxt_qplib_gid *gid)
+ {
+-	if (index > sgid_tbl->max) {
++	if (index >= sgid_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded SGID table max (%d)",
+ 			index, sgid_tbl->max);
+@@ -402,7 +402,7 @@ int bnxt_qplib_get_pkey(struct bnxt_qplib_res *res,
+ 		*pkey = 0xFFFF;
+ 		return 0;
+ 	}
+-	if (index > pkey_tbl->max) {
++	if (index >= pkey_tbl->max) {
+ 		dev_err(&res->pdev->dev,
+ 			"QPLIB: Index %d exceeded PKEY table max (%d)",
+ 			index, pkey_tbl->max);
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index 6deb101cdd43..b49351914feb 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -6733,6 +6733,7 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	struct hfi1_devdata *dd = ppd->dd;
+ 	struct send_context *sc;
+ 	int i;
++	int sc_flags;
+ 
+ 	if (flags & FREEZE_SELF)
+ 		write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK);
+@@ -6743,11 +6744,13 @@ void start_freeze_handling(struct hfi1_pportdata *ppd, int flags)
+ 	/* notify all SDMA engines that they are going into a freeze */
+ 	sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN));
+ 
++	sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ?
++					      SCF_LINK_DOWN : 0);
+ 	/* do halt pre-handling on all enabled send contexts */
+ 	for (i = 0; i < dd->num_send_contexts; i++) {
+ 		sc = dd->send_contexts[i].sc;
+ 		if (sc && (sc->flags & SCF_ENABLED))
+-			sc_stop(sc, SCF_FROZEN | SCF_HALTED);
++			sc_stop(sc, sc_flags);
+ 	}
+ 
+ 	/* Send context are frozen. Notify user space */
+@@ -10665,6 +10668,7 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
+ 		add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
+ 
+ 		handle_linkup_change(dd, 1);
++		pio_kernel_linkup(dd);
+ 
+ 		/*
+ 		 * After link up, a new link width will have been set.
+diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
+index 9cac15d10c4f..81f7cd7abcc5 100644
+--- a/drivers/infiniband/hw/hfi1/pio.c
++++ b/drivers/infiniband/hw/hfi1/pio.c
+@@ -86,6 +86,7 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 	unsigned long flags;
+ 	int write = 1;	/* write sendctrl back */
+ 	int flush = 0;	/* re-read sendctrl to make sure it is flushed */
++	int i;
+ 
+ 	spin_lock_irqsave(&dd->sendctrl_lock, flags);
+ 
+@@ -95,9 +96,13 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
+ 		reg |= SEND_CTRL_SEND_ENABLE_SMASK;
+ 	/* Fall through */
+ 	case PSC_DATA_VL_ENABLE:
++		mask = 0;
++		for (i = 0; i < ARRAY_SIZE(dd->vld); i++)
++			if (!dd->vld[i].mtu)
++				mask |= BIT_ULL(i);
+ 		/* Disallow sending on VLs not enabled */
+-		mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
+-				SEND_CTRL_UNSUPPORTED_VL_SHIFT;
++		mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) <<
++			SEND_CTRL_UNSUPPORTED_VL_SHIFT;
+ 		reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask;
+ 		break;
+ 	case PSC_GLOBAL_DISABLE:
+@@ -921,20 +926,18 @@ void sc_free(struct send_context *sc)
+ void sc_disable(struct send_context *sc)
+ {
+ 	u64 reg;
+-	unsigned long flags;
+ 	struct pio_buf *pbuf;
+ 
+ 	if (!sc)
+ 		return;
+ 
+ 	/* do all steps, even if already disabled */
+-	spin_lock_irqsave(&sc->alloc_lock, flags);
++	spin_lock_irq(&sc->alloc_lock);
+ 	reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL));
+ 	reg &= ~SC(CTRL_CTXT_ENABLE_SMASK);
+ 	sc->flags &= ~SCF_ENABLED;
+ 	sc_wait_for_packet_egress(sc, 1);
+ 	write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg);
+-	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 
+ 	/*
+ 	 * Flush any waiters.  Once the context is disabled,
+@@ -944,7 +947,7 @@ void sc_disable(struct send_context *sc)
+ 	 * proceed with the flush.
+ 	 */
+ 	udelay(1);
+-	spin_lock_irqsave(&sc->release_lock, flags);
++	spin_lock(&sc->release_lock);
+ 	if (sc->sr) {	/* this context has a shadow ring */
+ 		while (sc->sr_tail != sc->sr_head) {
+ 			pbuf = &sc->sr[sc->sr_tail].pbuf;
+@@ -955,7 +958,8 @@ void sc_disable(struct send_context *sc)
+ 				sc->sr_tail = 0;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&sc->release_lock, flags);
++	spin_unlock(&sc->release_lock);
++	spin_unlock_irq(&sc->alloc_lock);
+ }
+ 
+ /* return SendEgressCtxtStatus.PacketOccupancy */
+@@ -1178,11 +1182,39 @@ void pio_kernel_unfreeze(struct hfi1_devdata *dd)
+ 		sc = dd->send_contexts[i].sc;
+ 		if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER)
+ 			continue;
++		if (sc->flags & SCF_LINK_DOWN)
++			continue;
+ 
+ 		sc_enable(sc);	/* will clear the sc frozen flag */
+ 	}
+ }
+ 
++/**
++ * pio_kernel_linkup() - Re-enable send contexts after linkup event
++ * @dd: valid device data
++ *
++ * When the link goes down, the freeze path is taken.  However, a link down
++ * event is different from a freeze because if the send context is re-enabled
++ * whoever is sending data will start sending data again, which will hang
++ * any QP that is sending data.
++ *
++ * The freeze path now looks at the type of event that occurs and takes this
++ * path for a link down event.
++ */
++void pio_kernel_linkup(struct hfi1_devdata *dd)
++{
++	struct send_context *sc;
++	int i;
++
++	for (i = 0; i < dd->num_send_contexts; i++) {
++		sc = dd->send_contexts[i].sc;
++		if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER)
++			continue;
++
++		sc_enable(sc);	/* will clear the sc link down flag */
++	}
++}
++
+ /*
+  * Wait for the SendPioInitCtxt.PioInitInProgress bit to clear.
+  * Returns:
+@@ -1382,11 +1414,10 @@ void sc_stop(struct send_context *sc, int flag)
+ {
+ 	unsigned long flags;
+ 
+-	/* mark the context */
+-	sc->flags |= flag;
+-
+ 	/* stop buffer allocations */
+ 	spin_lock_irqsave(&sc->alloc_lock, flags);
++	/* mark the context */
++	sc->flags |= flag;
+ 	sc->flags &= ~SCF_ENABLED;
+ 	spin_unlock_irqrestore(&sc->alloc_lock, flags);
+ 	wake_up(&sc->halt_wait);
+diff --git a/drivers/infiniband/hw/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h
+index 058b08f459ab..aaf372c3e5d6 100644
+--- a/drivers/infiniband/hw/hfi1/pio.h
++++ b/drivers/infiniband/hw/hfi1/pio.h
+@@ -139,6 +139,7 @@ struct send_context {
+ #define SCF_IN_FREE 0x02
+ #define SCF_HALTED  0x04
+ #define SCF_FROZEN  0x08
++#define SCF_LINK_DOWN 0x10
+ 
+ struct send_context_info {
+ 	struct send_context *sc;	/* allocated working context */
+@@ -306,6 +307,7 @@ void set_pio_integrity(struct send_context *sc);
+ void pio_reset_all(struct hfi1_devdata *dd);
+ void pio_freeze(struct hfi1_devdata *dd);
+ void pio_kernel_unfreeze(struct hfi1_devdata *dd);
++void pio_kernel_linkup(struct hfi1_devdata *dd);
+ 
+ /* global PIO send control operations */
+ #define PSC_GLOBAL_ENABLE 0
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index a3a7b33196d6..5c88706121c1 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -828,7 +828,7 @@ static int user_sdma_send_pkts(struct user_sdma_request *req, unsigned maxpkts)
+ 			if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
+ 				if (++req->iov_idx == req->data_iovs) {
+ 					ret = -EFAULT;
+-					goto free_txreq;
++					goto free_tx;
+ 				}
+ 				iovec = &req->iovs[req->iov_idx];
+ 				WARN_ON(iovec->offset);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index 08991874c0e2..a1040a142aac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1590,6 +1590,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	struct hfi1_pportdata *ppd;
+ 	struct hfi1_devdata *dd;
+ 	u8 sc5;
++	u8 sl;
+ 
+ 	if (hfi1_check_mcast(rdma_ah_get_dlid(ah_attr)) &&
+ 	    !(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH))
+@@ -1598,8 +1599,13 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
+ 	/* test the mapping for validity */
+ 	ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr));
+ 	ppd = ppd_from_ibp(ibp);
+-	sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)];
+ 	dd = dd_from_ppd(ppd);
++
++	sl = rdma_ah_get_sl(ah_attr);
++	if (sl >= ARRAY_SIZE(ibp->sl_to_sc))
++		return -EINVAL;
++
++	sc5 = ibp->sl_to_sc[sl];
+ 	if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
+ 		return -EINVAL;
+ 	return 0;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+index 68679ad4c6da..937899fea01d 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+@@ -1409,6 +1409,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 	struct vm_area_struct *vma;
+ 	struct hstate *h;
+ 
++	down_read(&current->mm->mmap_sem);
+ 	vma = find_vma(current->mm, addr);
+ 	if (vma && is_vm_hugetlb_page(vma)) {
+ 		h = hstate_vma(vma);
+@@ -1417,6 +1418,7 @@ static void i40iw_set_hugetlb_values(u64 addr, struct i40iw_mr *iwmr)
+ 			iwmr->page_msk = huge_page_mask(h);
+ 		}
+ 	}
++	up_read(&current->mm->mmap_sem);
+ }
+ 
+ /**
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 3b8045fd23ed..b94e33a56e97 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -4047,9 +4047,9 @@ static void to_rdma_ah_attr(struct mlx4_ib_dev *ibdev,
+ 	u8 port_num = path->sched_queue & 0x40 ? 2 : 1;
+ 
+ 	memset(ah_attr, 0, sizeof(*ah_attr));
+-	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 	if (port_num == 0 || port_num > dev->caps.num_ports)
+ 		return;
++	ah_attr->type = rdma_ah_find_type(&ibdev->ib_dev, port_num);
+ 
+ 	if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE)
+ 		rdma_ah_set_sl(ah_attr, ((path->sched_queue >> 3) & 0x7) |
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index cbeae4509359..85677afa6f77 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2699,7 +2699,7 @@ static int parse_flow_attr(struct mlx5_core_dev *mdev, u32 *match_c,
+ 			 IPPROTO_GRE);
+ 
+ 		MLX5_SET(fte_match_set_misc, misc_params_c, gre_protocol,
+-			 0xffff);
++			 ntohs(ib_spec->gre.mask.protocol));
+ 		MLX5_SET(fte_match_set_misc, misc_params_v, gre_protocol,
+ 			 ntohs(ib_spec->gre.val.protocol));
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 9786b24b956f..2b8cc76bb77e 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -2954,7 +2954,7 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ {
+ 	struct srp_target_port *target = host_to_target(scmnd->device->host);
+ 	struct srp_rdma_ch *ch;
+-	int i;
++	int i, j;
+ 	u8 status;
+ 
+ 	shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
+@@ -2968,8 +2968,8 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ 
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+-		for (i = 0; i < target->req_ring_size; ++i) {
+-			struct srp_request *req = &ch->req_ring[i];
++		for (j = 0; j < target->req_ring_size; ++j) {
++			struct srp_request *req = &ch->req_ring[j];
+ 
+ 			srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
+ 		}
+diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
+index d91f3b1c5375..92d739649022 100644
+--- a/drivers/input/misc/xen-kbdfront.c
++++ b/drivers/input/misc/xen-kbdfront.c
+@@ -229,7 +229,7 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		}
+ 	}
+ 
+-	touch = xenbus_read_unsigned(dev->nodename,
++	touch = xenbus_read_unsigned(dev->otherend,
+ 				     XENKBD_FIELD_FEAT_MTOUCH, 0);
+ 	if (touch) {
+ 		ret = xenbus_write(XBT_NIL, dev->nodename,
+@@ -304,13 +304,13 @@ static int xenkbd_probe(struct xenbus_device *dev,
+ 		if (!mtouch)
+ 			goto error_nomem;
+ 
+-		num_cont = xenbus_read_unsigned(info->xbdev->nodename,
++		num_cont = xenbus_read_unsigned(info->xbdev->otherend,
+ 						XENKBD_FIELD_MT_NUM_CONTACTS,
+ 						1);
+-		width = xenbus_read_unsigned(info->xbdev->nodename,
++		width = xenbus_read_unsigned(info->xbdev->otherend,
+ 					     XENKBD_FIELD_MT_WIDTH,
+ 					     XENFB_WIDTH);
+-		height = xenbus_read_unsigned(info->xbdev->nodename,
++		height = xenbus_read_unsigned(info->xbdev->otherend,
+ 					      XENKBD_FIELD_MT_HEIGHT,
+ 					      XENFB_HEIGHT);
+ 
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index dd85b16dc6f8..88564f729e93 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1178,6 +1178,8 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = {
+ static const char * const middle_button_pnp_ids[] = {
+ 	"LEN2131", /* ThinkPad P52 w/ NFC */
+ 	"LEN2132", /* ThinkPad P52 */
++	"LEN2133", /* ThinkPad P72 w/ NFC */
++	"LEN2134", /* ThinkPad P72 */
+ 	NULL
+ };
+ 
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 596b95c50051..d77c97fe4a23 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2405,9 +2405,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
+ 	}
+ 
+ 	if (amd_iommu_unmap_flush) {
+-		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 		domain_flush_tlb(&dma_dom->domain);
+ 		domain_flush_complete(&dma_dom->domain);
++		dma_ops_free_iova(dma_dom, dma_addr, pages);
+ 	} else {
+ 		pages = __roundup_pow_of_two(pages);
+ 		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 0d3350463a3f..9a95c9b9d0d8 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -395,20 +395,15 @@ static int msm_iommu_add_device(struct device *dev)
+ 	struct msm_iommu_dev *iommu;
+ 	struct iommu_group *group;
+ 	unsigned long flags;
+-	int ret = 0;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_link(&iommu->iommu, dev);
+ 	else
+-		ret = -ENODEV;
+-
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+-	if (ret)
+-		return ret;
++		return -ENODEV;
+ 
+ 	group = iommu_group_get_for_dev(dev);
+ 	if (IS_ERR(group))
+@@ -425,13 +420,12 @@ static void msm_iommu_remove_device(struct device *dev)
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-
+ 	iommu = find_iommu_for_dev(dev);
++	spin_unlock_irqrestore(&msm_iommu_lock, flags);
++
+ 	if (iommu)
+ 		iommu_device_unlink(&iommu->iommu, dev);
+ 
+-	spin_unlock_irqrestore(&msm_iommu_lock, flags);
+-
+ 	iommu_group_remove_device(dev);
+ }
+ 
+diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
+index 021cbf9ef1bf..1ac945f7a3c2 100644
+--- a/drivers/md/md-cluster.c
++++ b/drivers/md/md-cluster.c
+@@ -304,15 +304,6 @@ static void recover_bitmaps(struct md_thread *thread)
+ 	while (cinfo->recovery_map) {
+ 		slot = fls64((u64)cinfo->recovery_map) - 1;
+ 
+-		/* Clear suspend_area associated with the bitmap */
+-		spin_lock_irq(&cinfo->suspend_lock);
+-		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
+-			if (slot == s->slot) {
+-				list_del(&s->list);
+-				kfree(s);
+-			}
+-		spin_unlock_irq(&cinfo->suspend_lock);
+-
+ 		snprintf(str, 64, "bitmap%04d", slot);
+ 		bm_lockres = lockres_init(mddev, str, NULL, 1);
+ 		if (!bm_lockres) {
+@@ -331,6 +322,16 @@ static void recover_bitmaps(struct md_thread *thread)
+ 			pr_err("md-cluster: Could not copy data from bitmap %d\n", slot);
+ 			goto clear_bit;
+ 		}
++
++		/* Clear suspend_area associated with the bitmap */
++		spin_lock_irq(&cinfo->suspend_lock);
++		list_for_each_entry_safe(s, tmp, &cinfo->suspend_list, list)
++			if (slot == s->slot) {
++				list_del(&s->list);
++				kfree(s);
++			}
++		spin_unlock_irq(&cinfo->suspend_lock);
++
+ 		if (hi > 0) {
+ 			if (lo < mddev->recovery_cp)
+ 				mddev->recovery_cp = lo;
+diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
+index e2550708abc8..3fdbe644648a 100644
+--- a/drivers/media/i2c/ov772x.c
++++ b/drivers/media/i2c/ov772x.c
+@@ -542,9 +542,19 @@ static struct ov772x_priv *to_ov772x(struct v4l2_subdev *sd)
+ 	return container_of(sd, struct ov772x_priv, subdev);
+ }
+ 
+-static inline int ov772x_read(struct i2c_client *client, u8 addr)
++static int ov772x_read(struct i2c_client *client, u8 addr)
+ {
+-	return i2c_smbus_read_byte_data(client, addr);
++	int ret;
++	u8 val;
++
++	ret = i2c_master_send(client, &addr, 1);
++	if (ret < 0)
++		return ret;
++	ret = i2c_master_recv(client, &val, 1);
++	if (ret < 0)
++		return ret;
++
++	return val;
+ }
+ 
+ static inline int ov772x_write(struct i2c_client *client, u8 addr, u8 value)
+@@ -1136,7 +1146,7 @@ static int ov772x_set_fmt(struct v4l2_subdev *sd,
+ static int ov772x_video_probe(struct ov772x_priv *priv)
+ {
+ 	struct i2c_client  *client = v4l2_get_subdevdata(&priv->subdev);
+-	u8                  pid, ver;
++	int		    pid, ver, midh, midl;
+ 	const char         *devname;
+ 	int		    ret;
+ 
+@@ -1146,7 +1156,11 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 
+ 	/* Check and show product ID and manufacturer ID. */
+ 	pid = ov772x_read(client, PID);
++	if (pid < 0)
++		return pid;
+ 	ver = ov772x_read(client, VER);
++	if (ver < 0)
++		return ver;
+ 
+ 	switch (VERSION(pid, ver)) {
+ 	case OV7720:
+@@ -1162,13 +1176,17 @@ static int ov772x_video_probe(struct ov772x_priv *priv)
+ 		goto done;
+ 	}
+ 
++	midh = ov772x_read(client, MIDH);
++	if (midh < 0)
++		return midh;
++	midl = ov772x_read(client, MIDL);
++	if (midl < 0)
++		return midl;
++
+ 	dev_info(&client->dev,
+ 		 "%s Product ID %0x:%0x Manufacturer ID %x:%x\n",
+-		 devname,
+-		 pid,
+-		 ver,
+-		 ov772x_read(client, MIDH),
+-		 ov772x_read(client, MIDL));
++		 devname, pid, ver, midh, midl);
++
+ 	ret = v4l2_ctrl_handler_setup(&priv->hdl);
+ 
+ done:
+@@ -1255,13 +1273,11 @@ static int ov772x_probe(struct i2c_client *client,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+-					      I2C_FUNC_PROTOCOL_MANGLING)) {
++	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) {
+ 		dev_err(&adapter->dev,
+-			"I2C-Adapter doesn't support SMBUS_BYTE_DATA or PROTOCOL_MANGLING\n");
++			"I2C-Adapter doesn't support SMBUS_BYTE_DATA\n");
+ 		return -EIO;
+ 	}
+-	client->flags |= I2C_CLIENT_SCCB;
+ 
+ 	priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+diff --git a/drivers/media/i2c/soc_camera/ov772x.c b/drivers/media/i2c/soc_camera/ov772x.c
+index 806383500313..14377af7c888 100644
+--- a/drivers/media/i2c/soc_camera/ov772x.c
++++ b/drivers/media/i2c/soc_camera/ov772x.c
+@@ -834,7 +834,7 @@ static int ov772x_set_params(struct ov772x_priv *priv,
+ 	 * set COM8
+ 	 */
+ 	if (priv->band_filter) {
+-		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, 1);
++		ret = ov772x_mask_set(client, COM8, BNDF_ON_OFF, BNDF_ON_OFF);
+ 		if (!ret)
+ 			ret = ov772x_mask_set(client, BDBASE,
+ 					      0xff, 256 - priv->band_filter);
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.c b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+index 55ba696b8cf4..a920164f53f1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.c
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.c
+@@ -384,12 +384,17 @@ static void __isp_video_try_fmt(struct fimc_isp *isp,
+ 				struct v4l2_pix_format_mplane *pixm,
+ 				const struct fimc_fmt **fmt)
+ {
+-	*fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++	const struct fimc_fmt *__fmt;
++
++	__fmt = fimc_isp_find_format(&pixm->pixelformat, NULL, 2);
++
++	if (fmt)
++		*fmt = __fmt;
+ 
+ 	pixm->colorspace = V4L2_COLORSPACE_SRGB;
+ 	pixm->field = V4L2_FIELD_NONE;
+-	pixm->num_planes = (*fmt)->memplanes;
+-	pixm->pixelformat = (*fmt)->fourcc;
++	pixm->num_planes = __fmt->memplanes;
++	pixm->pixelformat = __fmt->fourcc;
+ 	/*
+ 	 * TODO: double check with the docmentation these width/height
+ 	 * constraints are correct.
+diff --git a/drivers/media/platform/fsl-viu.c b/drivers/media/platform/fsl-viu.c
+index e41510ce69a4..0273302aa741 100644
+--- a/drivers/media/platform/fsl-viu.c
++++ b/drivers/media/platform/fsl-viu.c
+@@ -1414,7 +1414,7 @@ static int viu_of_probe(struct platform_device *op)
+ 				     sizeof(struct viu_reg), DRV_NAME)) {
+ 		dev_err(&op->dev, "Error while requesting mem region\n");
+ 		ret = -EBUSY;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* remap registers */
+@@ -1422,7 +1422,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_regs) {
+ 		dev_err(&op->dev, "Can't map register set\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	/* Prepare our private structure */
+@@ -1430,7 +1430,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (!viu_dev) {
+ 		dev_err(&op->dev, "Can't allocate private structure\n");
+ 		ret = -ENOMEM;
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	viu_dev->vr = viu_regs;
+@@ -1446,16 +1446,21 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = v4l2_device_register(viu_dev->dev, &viu_dev->v4l2_dev);
+ 	if (ret < 0) {
+ 		dev_err(&op->dev, "v4l2_device_register() failed: %d\n", ret);
+-		goto err;
++		goto err_irq;
+ 	}
+ 
+ 	ad = i2c_get_adapter(0);
++	if (!ad) {
++		ret = -EFAULT;
++		dev_err(&op->dev, "couldn't get i2c adapter\n");
++		goto err_v4l2;
++	}
+ 
+ 	v4l2_ctrl_handler_init(&viu_dev->hdl, 5);
+ 	if (viu_dev->hdl.error) {
+ 		ret = viu_dev->hdl.error;
+ 		dev_err(&op->dev, "couldn't register control\n");
+-		goto err_vdev;
++		goto err_i2c;
+ 	}
+ 	/* This control handler will inherit the control(s) from the
+ 	   sub-device(s). */
+@@ -1471,7 +1476,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	vdev = video_device_alloc();
+ 	if (vdev == NULL) {
+ 		ret = -ENOMEM;
+-		goto err_vdev;
++		goto err_hdl;
+ 	}
+ 
+ 	*vdev = viu_template;
+@@ -1492,7 +1497,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	ret = video_register_device(viu_dev->vdev, VFL_TYPE_GRABBER, -1);
+ 	if (ret < 0) {
+ 		video_device_release(viu_dev->vdev);
+-		goto err_vdev;
++		goto err_unlock;
+ 	}
+ 
+ 	/* enable VIU clock */
+@@ -1500,12 +1505,12 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&op->dev, "failed to lookup the clock!\n");
+ 		ret = PTR_ERR(clk);
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	ret = clk_prepare_enable(clk);
+ 	if (ret) {
+ 		dev_err(&op->dev, "failed to enable the clock!\n");
+-		goto err_clk;
++		goto err_vdev;
+ 	}
+ 	viu_dev->clk = clk;
+ 
+@@ -1516,7 +1521,7 @@ static int viu_of_probe(struct platform_device *op)
+ 	if (request_irq(viu_dev->irq, viu_intr, 0, "viu", (void *)viu_dev)) {
+ 		dev_err(&op->dev, "Request VIU IRQ failed.\n");
+ 		ret = -ENODEV;
+-		goto err_irq;
++		goto err_clk;
+ 	}
+ 
+ 	mutex_unlock(&viu_dev->lock);
+@@ -1524,16 +1529,19 @@ static int viu_of_probe(struct platform_device *op)
+ 	dev_info(&op->dev, "Freescale VIU Video Capture Board\n");
+ 	return ret;
+ 
+-err_irq:
+-	clk_disable_unprepare(viu_dev->clk);
+ err_clk:
+-	video_unregister_device(viu_dev->vdev);
++	clk_disable_unprepare(viu_dev->clk);
+ err_vdev:
+-	v4l2_ctrl_handler_free(&viu_dev->hdl);
++	video_unregister_device(viu_dev->vdev);
++err_unlock:
+ 	mutex_unlock(&viu_dev->lock);
++err_hdl:
++	v4l2_ctrl_handler_free(&viu_dev->hdl);
++err_i2c:
+ 	i2c_put_adapter(ad);
++err_v4l2:
+ 	v4l2_device_unregister(&viu_dev->v4l2_dev);
+-err:
++err_irq:
+ 	irq_dispose_mapping(viu_irq);
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
+index f22cf351e3ee..ae0ef8b241a7 100644
+--- a/drivers/media/platform/omap3isp/isp.c
++++ b/drivers/media/platform/omap3isp/isp.c
+@@ -300,7 +300,7 @@ static struct clk *isp_xclk_src_get(struct of_phandle_args *clkspec, void *data)
+ static int isp_xclk_init(struct isp_device *isp)
+ {
+ 	struct device_node *np = isp->dev->of_node;
+-	struct clk_init_data init;
++	struct clk_init_data init = { 0 };
+ 	unsigned int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(isp->xclks); ++i)
+diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
+index 9ab8e7ee2e1e..b1d9f3857d3d 100644
+--- a/drivers/media/platform/s3c-camif/camif-capture.c
++++ b/drivers/media/platform/s3c-camif/camif-capture.c
+@@ -117,6 +117,8 @@ static int sensor_set_power(struct camif_dev *camif, int on)
+ 
+ 	if (camif->sensor.power_count == !on)
+ 		err = v4l2_subdev_call(sensor->sd, core, s_power, on);
++	if (err == -ENOIOCTLCMD)
++		err = 0;
+ 	if (!err)
+ 		sensor->power_count += on ? 1 : -1;
+ 
+diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c
+index c811fc6cf48a..3a4e545c6037 100644
+--- a/drivers/media/usb/tm6000/tm6000-dvb.c
++++ b/drivers/media/usb/tm6000/tm6000-dvb.c
+@@ -266,6 +266,11 @@ static int register_dvb(struct tm6000_core *dev)
+ 
+ 	ret = dvb_register_adapter(&dvb->adapter, "Trident TVMaster 6000 DVB-T",
+ 					THIS_MODULE, &dev->udev->dev, adapter_nr);
++	if (ret < 0) {
++		pr_err("tm6000: couldn't register the adapter!\n");
++		goto err;
++	}
++
+ 	dvb->adapter.priv = dev;
+ 
+ 	if (dvb->frontend) {
+diff --git a/drivers/media/v4l2-core/v4l2-event.c b/drivers/media/v4l2-core/v4l2-event.c
+index 127fe6eb91d9..a3ef1f50a4b3 100644
+--- a/drivers/media/v4l2-core/v4l2-event.c
++++ b/drivers/media/v4l2-core/v4l2-event.c
+@@ -115,14 +115,6 @@ static void __v4l2_event_queue_fh(struct v4l2_fh *fh, const struct v4l2_event *e
+ 	if (sev == NULL)
+ 		return;
+ 
+-	/*
+-	 * If the event has been added to the fh->subscribed list, but its
+-	 * add op has not completed yet elems will be 0, treat this as
+-	 * not being subscribed.
+-	 */
+-	if (!sev->elems)
+-		return;
+-
+ 	/* Increase event sequence number on fh. */
+ 	fh->sequence++;
+ 
+@@ -208,6 +200,7 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	struct v4l2_subscribed_event *sev, *found_ev;
+ 	unsigned long flags;
+ 	unsigned i;
++	int ret = 0;
+ 
+ 	if (sub->type == V4L2_EVENT_ALL)
+ 		return -EINVAL;
+@@ -225,31 +218,36 @@ int v4l2_event_subscribe(struct v4l2_fh *fh,
+ 	sev->flags = sub->flags;
+ 	sev->fh = fh;
+ 	sev->ops = ops;
++	sev->elems = elems;
++
++	mutex_lock(&fh->subscribe_lock);
+ 
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 	found_ev = v4l2_event_subscribed(fh, sub->type, sub->id);
+-	if (!found_ev)
+-		list_add(&sev->list, &fh->subscribed);
+ 	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+ 	if (found_ev) {
++		/* Already listening */
+ 		kvfree(sev);
+-		return 0; /* Already listening */
++		goto out_unlock;
+ 	}
+ 
+ 	if (sev->ops && sev->ops->add) {
+-		int ret = sev->ops->add(sev, elems);
++		ret = sev->ops->add(sev, elems);
+ 		if (ret) {
+-			sev->ops = NULL;
+-			v4l2_event_unsubscribe(fh, sub);
+-			return ret;
++			kvfree(sev);
++			goto out_unlock;
+ 		}
+ 	}
+ 
+-	/* Mark as ready for use */
+-	sev->elems = elems;
++	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
++	list_add(&sev->list, &fh->subscribed);
++	spin_unlock_irqrestore(&fh->vdev->fh_lock, flags);
+ 
+-	return 0;
++out_unlock:
++	mutex_unlock(&fh->subscribe_lock);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_event_subscribe);
+ 
+@@ -288,6 +286,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 		return 0;
+ 	}
+ 
++	mutex_lock(&fh->subscribe_lock);
++
+ 	spin_lock_irqsave(&fh->vdev->fh_lock, flags);
+ 
+ 	sev = v4l2_event_subscribed(fh, sub->type, sub->id);
+@@ -305,6 +305,8 @@ int v4l2_event_unsubscribe(struct v4l2_fh *fh,
+ 	if (sev && sev->ops && sev->ops->del)
+ 		sev->ops->del(sev);
+ 
++	mutex_unlock(&fh->subscribe_lock);
++
+ 	kvfree(sev);
+ 
+ 	return 0;
+diff --git a/drivers/media/v4l2-core/v4l2-fh.c b/drivers/media/v4l2-core/v4l2-fh.c
+index 3895999bf880..c91a7bd3ecfc 100644
+--- a/drivers/media/v4l2-core/v4l2-fh.c
++++ b/drivers/media/v4l2-core/v4l2-fh.c
+@@ -45,6 +45,7 @@ void v4l2_fh_init(struct v4l2_fh *fh, struct video_device *vdev)
+ 	INIT_LIST_HEAD(&fh->available);
+ 	INIT_LIST_HEAD(&fh->subscribed);
+ 	fh->sequence = -1;
++	mutex_init(&fh->subscribe_lock);
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_init);
+ 
+@@ -90,6 +91,7 @@ void v4l2_fh_exit(struct v4l2_fh *fh)
+ 		return;
+ 	v4l_disable_media_source(fh->vdev);
+ 	v4l2_event_unsubscribe_all(fh);
++	mutex_destroy(&fh->subscribe_lock);
+ 	fh->vdev = NULL;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fh_exit);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index 50d82c3d032a..b8aaa684c397 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -273,7 +273,7 @@ static void *alloc_dma_buffer(struct vio_dev *vdev, size_t size,
+ 			      dma_addr_t *dma_handle)
+ {
+ 	/* allocate memory */
+-	void *buffer = kzalloc(size, GFP_KERNEL);
++	void *buffer = kzalloc(size, GFP_ATOMIC);
+ 
+ 	if (!buffer) {
+ 		*dma_handle = 0;
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index 679647713e36..74b183baf044 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -391,23 +391,23 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (IS_ERR(sram->pool))
+ 		return PTR_ERR(sram->pool);
+ 
+-	ret = sram_reserve_regions(sram, res);
+-	if (ret)
+-		return ret;
+-
+ 	sram->clk = devm_clk_get(sram->dev, NULL);
+ 	if (IS_ERR(sram->clk))
+ 		sram->clk = NULL;
+ 	else
+ 		clk_prepare_enable(sram->clk);
+ 
++	ret = sram_reserve_regions(sram, res);
++	if (ret)
++		goto err_disable_clk;
++
+ 	platform_set_drvdata(pdev, sram);
+ 
+ 	init_func = of_device_get_match_data(&pdev->dev);
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			goto err_disable_clk;
++			goto err_free_partitions;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+@@ -415,10 +415,11 @@ static int sram_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_free_partitions:
++	sram_free_partitions(sram);
+ err_disable_clk:
+ 	if (sram->clk)
+ 		clk_disable_unprepare(sram->clk);
+-	sram_free_partitions(sram);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/misc/tsl2550.c b/drivers/misc/tsl2550.c
+index adf46072cb37..3fce3b6a3624 100644
+--- a/drivers/misc/tsl2550.c
++++ b/drivers/misc/tsl2550.c
+@@ -177,7 +177,7 @@ static int tsl2550_calculate_lux(u8 ch0, u8 ch1)
+ 		} else
+ 			lux = 0;
+ 	else
+-		return -EAGAIN;
++		return 0;
+ 
+ 	/* LUX range check */
+ 	return lux > TSL2550_MAX_LUX ? TSL2550_MAX_LUX : lux;
+diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+index b4d7774cfe07..d95e8648e7b3 100644
+--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
+@@ -668,7 +668,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) produce_uva,
+ 				     produce_q->kernel_if->num_pages, 1,
+ 				     produce_q->kernel_if->u.h.header_page);
+-	if (retval < produce_q->kernel_if->num_pages) {
++	if (retval < (int)produce_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(produce_q->kernel_if->u.h.header_page,
+@@ -680,7 +680,7 @@ static int qp_host_get_user_memory(u64 produce_uva,
+ 	retval = get_user_pages_fast((uintptr_t) consume_uva,
+ 				     consume_q->kernel_if->num_pages, 1,
+ 				     consume_q->kernel_if->u.h.header_page);
+-	if (retval < consume_q->kernel_if->num_pages) {
++	if (retval < (int)consume_q->kernel_if->num_pages) {
+ 		pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ 			retval);
+ 		qp_release_pages(consume_q->kernel_if->u.h.header_page,
+diff --git a/drivers/mmc/host/android-goldfish.c b/drivers/mmc/host/android-goldfish.c
+index 294de177632c..61e4e2a213c9 100644
+--- a/drivers/mmc/host/android-goldfish.c
++++ b/drivers/mmc/host/android-goldfish.c
+@@ -217,7 +217,7 @@ static void goldfish_mmc_xfer_done(struct goldfish_mmc_host *host,
+ 			 * We don't really have DMA, so we need
+ 			 * to copy from our platform driver buffer
+ 			 */
+-			sg_copy_to_buffer(data->sg, 1, host->virt_base,
++			sg_copy_from_buffer(data->sg, 1, host->virt_base,
+ 					data->sg->length);
+ 		}
+ 		host->data->bytes_xfered += data->sg->length;
+@@ -393,7 +393,7 @@ static void goldfish_mmc_prepare_data(struct goldfish_mmc_host *host,
+ 		 * We don't really have DMA, so we need to copy to our
+ 		 * platform driver buffer
+ 		 */
+-		sg_copy_from_buffer(data->sg, 1, host->virt_base,
++		sg_copy_to_buffer(data->sg, 1, host->virt_base,
+ 				data->sg->length);
+ 	}
+ }
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index 5aa2c9404e92..be53044086c7 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -1976,7 +1976,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 	do {
+ 		value = atmci_readl(host, ATMCI_RDR);
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
+ 
+ 			offset += 4;
+ 			nbytes += 4;
+@@ -1993,7 +1993,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 		} else {
+ 			unsigned int remaining = sg->length - offset;
+ 
+-			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			flush_dcache_page(sg_page(sg));
+@@ -2003,7 +2003,7 @@ static void atmci_read_data_pio(struct atmel_mci *host)
+ 				goto done;
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			nbytes += offset;
+ 		}
+@@ -2042,7 +2042,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 
+ 	do {
+ 		if (likely(offset + 4 <= sg->length)) {
+-			sg_pcopy_from_buffer(sg, 1, &value, sizeof(u32), offset);
++			sg_pcopy_to_buffer(sg, 1, &value, sizeof(u32), offset);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 
+ 			offset += 4;
+@@ -2059,7 +2059,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			unsigned int remaining = sg->length - offset;
+ 
+ 			value = 0;
+-			sg_pcopy_from_buffer(sg, 1, &value, remaining, offset);
++			sg_pcopy_to_buffer(sg, 1, &value, remaining, offset);
+ 			nbytes += remaining;
+ 
+ 			host->sg = sg = sg_next(sg);
+@@ -2070,7 +2070,7 @@ static void atmci_write_data_pio(struct atmel_mci *host)
+ 			}
+ 
+ 			offset = 4 - remaining;
+-			sg_pcopy_from_buffer(sg, 1, (u8 *)&value + remaining,
++			sg_pcopy_to_buffer(sg, 1, (u8 *)&value + remaining,
+ 					offset, 0);
+ 			atmci_writel(host, ATMCI_TDR, value);
+ 			nbytes += offset;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 12f6753d47ae..e686fe73159e 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -129,6 +129,11 @@
+ #define DEFAULT_TIMEOUT_MS			1000
+ #define MIN_DMA_LEN				128
+ 
++static bool atmel_nand_avoid_dma __read_mostly;
++
++MODULE_PARM_DESC(avoiddma, "Avoid using DMA");
++module_param_named(avoiddma, atmel_nand_avoid_dma, bool, 0400);
++
+ enum atmel_nand_rb_type {
+ 	ATMEL_NAND_NO_RB,
+ 	ATMEL_NAND_NATIVE_RB,
+@@ -1977,7 +1982,7 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ 		return ret;
+ 	}
+ 
+-	if (nc->caps->has_dma) {
++	if (nc->caps->has_dma && !atmel_nand_avoid_dma) {
+ 		dma_cap_mask_t mask;
+ 
+ 		dma_cap_zero(mask);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index a8926e97935e..c5d387be6cfe 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -5705,7 +5705,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		if (t4_read_reg(adapter, LE_DB_CONFIG_A) & HASHEN_F) {
+ 			u32 hash_base, hash_reg;
+ 
+-			if (chip <= CHELSIO_T5) {
++			if (chip_ver <= CHELSIO_T5) {
+ 				hash_reg = LE_DB_TID_HASHBASE_A;
+ 				hash_base = t4_read_reg(adapter, hash_reg);
+ 				adapter->tids.hash_base = hash_base / 4;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h
+index fa5b30f547f6..cad52bd331f7 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -220,10 +220,10 @@ struct hnae_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
++	u32 page_offset;
++	u32 length;     /* length of the buffer */
+ 
+-	u16 length;     /* length of the buffer */
++	u16 reuse_flag;
+ 
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index ef9ef703d13a..ef994a715f93 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -530,7 +530,7 @@ static void hns_nic_reuse_page(struct sk_buff *skb, int i,
+ 	}
+ 
+ 	skb_add_rx_frag(skb, i, desc_cb->priv, desc_cb->page_offset + pull_len,
+-			size - pull_len, truesize - pull_len);
++			size - pull_len, truesize);
+ 
+ 	 /* avoid re-using remote pages,flag default unreuse */
+ 	if (unlikely(page_to_nid(desc_cb->priv) != numa_node_id()))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+index 3b083d5ae9ce..c84c09053640 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
+@@ -290,11 +290,11 @@ struct hns3_desc_cb {
+ 
+ 	/* priv data for the desc, e.g. skb when use with ip stack*/
+ 	void *priv;
+-	u16 page_offset;
+-	u16 reuse_flag;
+-
++	u32 page_offset;
+ 	u32 length;     /* length of the buffer */
+ 
++	u16 reuse_flag;
++
+        /* desc type, used by the ring user to mark the type of the priv data */
+ 	u16 type;
+ };
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index 40c0425b4023..11620e003a8e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -201,7 +201,9 @@ static u32 hns3_lb_check_rx_ring(struct hns3_nic_priv *priv, u32 budget)
+ 		rx_group = &ring->tqp_vector->rx_group;
+ 		pre_rx_pkt = rx_group->total_packets;
+ 
++		preempt_disable();
+ 		hns3_clean_rx_ring(ring, budget, hns3_lb_check_skb_data);
++		preempt_enable();
+ 
+ 		rcv_good_pkt_total += (rx_group->total_packets - pre_rx_pkt);
+ 		rx_group->total_packets = pre_rx_pkt;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+index 262c125f8137..f027fceea548 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+@@ -1223,6 +1223,10 @@ static int hclge_mac_pause_setup_hw(struct hclge_dev *hdev)
+ 		tx_en = true;
+ 		rx_en = true;
+ 		break;
++	case HCLGE_FC_PFC:
++		tx_en = false;
++		rx_en = false;
++		break;
+ 	default:
+ 		tx_en = true;
+ 		rx_en = true;
+@@ -1240,8 +1244,9 @@ int hclge_pause_setup_hw(struct hclge_dev *hdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	if (hdev->tm_info.fc_mode != HCLGE_FC_PFC)
+-		return hclge_mac_pause_setup_hw(hdev);
++	ret = hclge_mac_pause_setup_hw(hdev);
++	if (ret)
++		return ret;
+ 
+ 	/* Only DCB-supported dev supports qset back pressure and pfc cmd */
+ 	if (!hnae3_dev_dcb_supported(hdev))
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index a17872aab168..12aa1f1b99ef 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -648,8 +648,17 @@ static int hclgevf_unmap_ring_from_vector(
+ static int hclgevf_put_vector(struct hnae3_handle *handle, int vector)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int vector_id;
+ 
+-	hclgevf_free_vector(hdev, vector);
++	vector_id = hclgevf_get_vector_index(hdev, vector);
++	if (vector_id < 0) {
++		dev_err(&handle->pdev->dev,
++			"hclgevf_put_vector get vector index fail. ret =%d\n",
++			vector_id);
++		return vector_id;
++	}
++
++	hclgevf_free_vector(hdev, vector_id);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index b598c06af8e0..cd246f906150 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -208,7 +208,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 
+ 			/* tail the async message in arq */
+ 			msg_q = hdev->arq.msg_q[hdev->arq.tail];
+-			memcpy(&msg_q[0], req->msg, HCLGE_MBX_MAX_ARQ_MSG_SIZE);
++			memcpy(&msg_q[0], req->msg,
++			       HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ 			hclge_mbx_tail_ptr_move_arq(hdev->arq);
+ 			hdev->arq.count++;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+index bdb3f8e65ed4..2569a168334c 100644
+--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
++++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+@@ -624,14 +624,14 @@ static int e1000_set_ringparam(struct net_device *netdev,
+ 		adapter->tx_ring = tx_old;
+ 		e1000_free_all_rx_resources(adapter);
+ 		e1000_free_all_tx_resources(adapter);
+-		kfree(tx_old);
+-		kfree(rx_old);
+ 		adapter->rx_ring = rxdr;
+ 		adapter->tx_ring = txdr;
+ 		err = e1000_up(adapter);
+ 		if (err)
+ 			goto err_setup;
+ 	}
++	kfree(tx_old);
++	kfree(rx_old);
+ 
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return 0;
+@@ -644,7 +644,8 @@ err_setup_rx:
+ err_alloc_rx:
+ 	kfree(txdr);
+ err_alloc_tx:
+-	e1000_up(adapter);
++	if (netif_running(adapter->netdev))
++		e1000_up(adapter);
+ err_setup:
+ 	clear_bit(__E1000_RESETTING, &adapter->flags);
+ 	return err;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index 6947a2a571cb..5d670f4ce5ac 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -1903,7 +1903,7 @@ static void i40e_get_stat_strings(struct net_device *netdev, u8 *data)
+ 		data += ETH_GSTRING_LEN;
+ 	}
+ 
+-	WARN_ONCE(p - data != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
++	WARN_ONCE(data - p != i40e_get_stats_count(netdev) * ETH_GSTRING_LEN,
+ 		  "stat strings count mismatch!");
+ }
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index c944bd10b03d..5f105bc68c6a 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -5121,15 +5121,17 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 				       u8 *bw_share)
+ {
+ 	struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
++	struct i40e_pf *pf = vsi->back;
+ 	i40e_status ret;
+ 	int i;
+ 
+-	if (vsi->back->flags & I40E_FLAG_TC_MQPRIO)
++	/* There is no need to reset BW when mqprio mode is on.  */
++	if (pf->flags & I40E_FLAG_TC_MQPRIO)
+ 		return 0;
+-	if (!vsi->mqprio_qopt.qopt.hw) {
++	if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {
+ 		ret = i40e_set_bw_limit(vsi, vsi->seid, 0);
+ 		if (ret)
+-			dev_info(&vsi->back->pdev->dev,
++			dev_info(&pf->pdev->dev,
+ 				 "Failed to reset tx rate for vsi->seid %u\n",
+ 				 vsi->seid);
+ 		return ret;
+@@ -5138,12 +5140,11 @@ static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
+ 	for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
+ 		bw_data.tc_bw_credits[i] = bw_share[i];
+ 
+-	ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+-				       NULL);
++	ret = i40e_aq_config_vsi_tc_bw(&pf->hw, vsi->seid, &bw_data, NULL);
+ 	if (ret) {
+-		dev_info(&vsi->back->pdev->dev,
++		dev_info(&pf->pdev->dev,
+ 			 "AQ command Config VSI BW allocation per TC failed = %d\n",
+-			 vsi->back->hw.aq.asq_last_status);
++			 pf->hw.aq.asq_last_status);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d8b5fff581e7..ed071ea75f20 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -89,6 +89,13 @@ extern const char ice_drv_ver[];
+ #define ice_for_each_rxq(vsi, i) \
+ 	for ((i) = 0; (i) < (vsi)->num_rxq; (i)++)
+ 
++/* Macros for each allocated tx/rx ring whether used or not in a VSI */
++#define ice_for_each_alloc_txq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_txq; (i)++)
++
++#define ice_for_each_alloc_rxq(vsi, i) \
++	for ((i) = 0; (i) < (vsi)->alloc_rxq; (i)++)
++
+ struct ice_tc_info {
+ 	u16 qoffset;
+ 	u16 qcount;
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 7541ec2270b3..a0614f472658 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -329,19 +329,19 @@ struct ice_aqc_vsi_props {
+ 	/* VLAN section */
+ 	__le16 pvid; /* VLANS include priority bits */
+ 	u8 pvlan_reserved[2];
+-	u8 port_vlan_flags;
+-#define ICE_AQ_VSI_PVLAN_MODE_S	0
+-#define ICE_AQ_VSI_PVLAN_MODE_M	(0x3 << ICE_AQ_VSI_PVLAN_MODE_S)
+-#define ICE_AQ_VSI_PVLAN_MODE_UNTAGGED	0x1
+-#define ICE_AQ_VSI_PVLAN_MODE_TAGGED	0x2
+-#define ICE_AQ_VSI_PVLAN_MODE_ALL	0x3
++	u8 vlan_flags;
++#define ICE_AQ_VSI_VLAN_MODE_S	0
++#define ICE_AQ_VSI_VLAN_MODE_M	(0x3 << ICE_AQ_VSI_VLAN_MODE_S)
++#define ICE_AQ_VSI_VLAN_MODE_UNTAGGED	0x1
++#define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
++#define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
+ #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
+-#define ICE_AQ_VSI_PVLAN_EMOD_S	3
+-#define ICE_AQ_VSI_PVLAN_EMOD_M	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_PVLAN_EMOD_S)
+-#define ICE_AQ_VSI_PVLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_PVLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_S		3
++#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
++#define ICE_AQ_VSI_VLAN_EMOD_NOTHING	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+ 	u8 pvlan_reserved2[3];
+ 	/* ingress egress up sections */
+ 	__le32 ingress_table; /* bitmap, 3 bits per up */
+@@ -594,6 +594,7 @@ struct ice_sw_rule_lg_act {
+ #define ICE_LG_ACT_GENERIC_OFFSET_M	(0x7 << ICE_LG_ACT_GENERIC_OFFSET_S)
+ #define ICE_LG_ACT_GENERIC_PRIORITY_S	22
+ #define ICE_LG_ACT_GENERIC_PRIORITY_M	(0x7 << ICE_LG_ACT_GENERIC_PRIORITY_S)
++#define ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX	7
+ 
+ 	/* Action = 7 - Set Stat count */
+ #define ICE_LG_ACT_STAT_COUNT		0x7
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 71d032cc5fa7..ebd701ac9428 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -1483,7 +1483,7 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
+ 	struct ice_phy_info *phy_info;
+ 	enum ice_status status = 0;
+ 
+-	if (!pi)
++	if (!pi || !link_up)
+ 		return ICE_ERR_PARAM;
+ 
+ 	phy_info = &pi->phy;
+@@ -1619,20 +1619,23 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
+ 	}
+ 
+ 	/* LUT size is only valid for Global and PF table types */
+-	if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if (lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512) {
++	switch (lut_size) {
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_128:
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512:
+ 		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_512_FLAG <<
+ 			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+ 			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else if ((lut_size == ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K) &&
+-		   (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF)) {
+-		flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
+-			  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
+-			 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
+-	} else {
++		break;
++	case ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K:
++		if (lut_type == ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF) {
++			flags |= (ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_2K_FLAG <<
++				  ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_S) &
++				 ICE_AQC_GSET_RSS_LUT_TABLE_SIZE_M;
++			break;
++		}
++		/* fall-through */
++	default:
+ 		status = ICE_ERR_PARAM;
+ 		goto ice_aq_get_set_rss_lut_exit;
+ 	}
+diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
+index 7c511f144ed6..62be72fdc8f3 100644
+--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
++++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
+@@ -597,10 +597,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
+ 	return 0;
+ 
+ init_ctrlq_free_rq:
+-	ice_shutdown_rq(hw, cq);
+-	ice_shutdown_sq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
+ 	return status;
+ }
+ 
+@@ -706,10 +710,14 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
+ 		return;
+ 	}
+ 
+-	ice_shutdown_sq(hw, cq);
+-	ice_shutdown_rq(hw, cq);
+-	mutex_destroy(&cq->sq_lock);
+-	mutex_destroy(&cq->rq_lock);
++	if (cq->sq.head) {
++		ice_shutdown_sq(hw, cq);
++		mutex_destroy(&cq->sq_lock);
++	}
++	if (cq->rq.head) {
++		ice_shutdown_rq(hw, cq);
++		mutex_destroy(&cq->rq_lock);
++	}
+ }
+ 
+ /**
+@@ -1057,8 +1065,11 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
+ 
+ clean_rq_elem_out:
+ 	/* Set pending if needed, unlock and return */
+-	if (pending)
++	if (pending) {
++		/* re-read HW head to calculate actual pending messages */
++		ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
+ 		*pending = (u16)((ntc > ntu ? cq->rq.count : 0) + (ntu - ntc));
++	}
+ clean_rq_elem_err:
+ 	mutex_unlock(&cq->rq_lock);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index 1db304c01d10..c71a9b528d6d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -26,7 +26,7 @@ static int ice_q_stats_len(struct net_device *netdev)
+ {
+ 	struct ice_netdev_priv *np = netdev_priv(netdev);
+ 
+-	return ((np->vsi->num_txq + np->vsi->num_rxq) *
++	return ((np->vsi->alloc_txq + np->vsi->alloc_rxq) *
+ 		(sizeof(struct ice_q_stats) / sizeof(u64)));
+ }
+ 
+@@ -218,7 +218,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_txq(vsi, i) {
++		ice_for_each_alloc_txq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "tx-queue-%u.tx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -226,7 +226,7 @@ static void ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+ 			p += ETH_GSTRING_LEN;
+ 		}
+ 
+-		ice_for_each_rxq(vsi, i) {
++		ice_for_each_alloc_rxq(vsi, i) {
+ 			snprintf(p, ETH_GSTRING_LEN,
+ 				 "rx-queue-%u.rx_packets", i);
+ 			p += ETH_GSTRING_LEN;
+@@ -253,6 +253,24 @@ static int ice_get_sset_count(struct net_device *netdev, int sset)
+ {
+ 	switch (sset) {
+ 	case ETH_SS_STATS:
++		/* The number (and order) of strings reported *must* remain
++		 * constant for a given netdevice. This function must not
++		 * report a different number based on run time parameters
++		 * (such as the number of queues in use, or the setting of
++		 * a private ethtool flag). This is due to the nature of the
++		 * ethtool stats API.
++		 *
++		 * User space programs such as ethtool must make 3 separate
++		 * ioctl requests, one for size, one for the strings, and
++		 * finally one for the stats. Since these cross into
++		 * user space, changes to the number or size could result in
++		 * undefined memory access or incorrect string<->value
++		 * correlations for statistics.
++		 *
++		 * Even if it appears to be safe, changes to the size or
++		 * order of strings will suffer from race conditions and are
++		 * not safe.
++		 */
+ 		return ICE_ALL_STATS_LEN(netdev);
+ 	default:
+ 		return -EOPNOTSUPP;
+@@ -280,18 +298,26 @@ ice_get_ethtool_stats(struct net_device *netdev,
+ 	/* populate per queue stats */
+ 	rcu_read_lock();
+ 
+-	ice_for_each_txq(vsi, j) {
++	ice_for_each_alloc_txq(vsi, j) {
+ 		ring = READ_ONCE(vsi->tx_rings[j]);
+-		if (!ring)
+-			continue;
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+-	ice_for_each_rxq(vsi, j) {
++	ice_for_each_alloc_rxq(vsi, j) {
+ 		ring = READ_ONCE(vsi->rx_rings[j]);
+-		data[i++] = ring->stats.pkts;
+-		data[i++] = ring->stats.bytes;
++		if (ring) {
++			data[i++] = ring->stats.pkts;
++			data[i++] = ring->stats.bytes;
++		} else {
++			data[i++] = 0;
++			data[i++] = 0;
++		}
+ 	}
+ 
+ 	rcu_read_unlock();
+@@ -519,7 +545,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring)
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_txq; i++) {
++	for (i = 0; i < vsi->alloc_txq; i++) {
+ 		/* clone ring and setup updated count */
+ 		tx_rings[i] = *vsi->tx_rings[i];
+ 		tx_rings[i].count = new_tx_cnt;
+@@ -551,7 +577,7 @@ process_rx:
+ 		goto done;
+ 	}
+ 
+-	for (i = 0; i < vsi->num_rxq; i++) {
++	for (i = 0; i < vsi->alloc_rxq; i++) {
+ 		/* clone ring and setup updated count */
+ 		rx_rings[i] = *vsi->rx_rings[i];
+ 		rx_rings[i].count = new_rx_cnt;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 5299caf55a7f..27c9aa31b248 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -916,6 +916,21 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ 	return pending && (i == ICE_DFLT_IRQ_WORK);
+ }
+ 
++/**
++ * ice_ctrlq_pending - check if there is a difference between ntc and ntu
++ * @hw: pointer to hardware info
++ * @cq: control queue information
++ *
++ * returns true if there are pending messages in a queue, false if there aren't
++ */
++static bool ice_ctrlq_pending(struct ice_hw *hw, struct ice_ctl_q_info *cq)
++{
++	u16 ntu;
++
++	ntu = (u16)(rd32(hw, cq->rq.head) & cq->rq.head_mask);
++	return cq->rq.next_to_clean != ntu;
++}
++
+ /**
+  * ice_clean_adminq_subtask - clean the AdminQ rings
+  * @pf: board private structure
+@@ -923,7 +938,6 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+-	u32 val;
+ 
+ 	if (!test_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state))
+ 		return;
+@@ -933,9 +947,13 @@ static void ice_clean_adminq_subtask(struct ice_pf *pf)
+ 
+ 	clear_bit(__ICE_ADMINQ_EVENT_PENDING, pf->state);
+ 
+-	/* re-enable Admin queue interrupt causes */
+-	val = rd32(hw, PFINT_FW_CTL);
+-	wr32(hw, PFINT_FW_CTL, (val | PFINT_FW_CTL_CAUSE_ENA_M));
++	/* There might be a situation where new messages arrive to a control
++	 * queue between processing the last message and clearing the
++	 * EVENT_PENDING bit. So before exiting, check queue head again (using
++	 * ice_ctrlq_pending) and process new messages if any.
++	 */
++	if (ice_ctrlq_pending(hw, &hw->adminq))
++		__ice_clean_ctrlq(pf, ICE_CTL_Q_ADMIN);
+ 
+ 	ice_flush(hw);
+ }
+@@ -1295,11 +1313,8 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
+ 		qcount = numq_tc;
+ 	}
+ 
+-	/* find higher power-of-2 of qcount */
+-	pow = ilog2(qcount);
+-
+-	if (!is_power_of_2(qcount))
+-		pow++;
++	/* find the (rounded up) power-of-2 of qcount */
++	pow = order_base_2(qcount);
+ 
+ 	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+ 		if (!(vsi->tc_cfg.ena_tc & BIT(i))) {
+@@ -1352,14 +1367,15 @@ static void ice_set_dflt_vsi_ctx(struct ice_vsi_ctx *ctxt)
+ 	ctxt->info.sw_flags = ICE_AQ_VSI_SW_FLAG_SRC_PRUNE;
+ 	/* Traffic from VSI can be sent to LAN */
+ 	ctxt->info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA;
+-	/* Allow all packets untagged/tagged */
+-	ctxt->info.port_vlan_flags = ((ICE_AQ_VSI_PVLAN_MODE_ALL &
+-				       ICE_AQ_VSI_PVLAN_MODE_M) >>
+-				      ICE_AQ_VSI_PVLAN_MODE_S);
+-	/* Show VLAN/UP from packets in Rx descriptors */
+-	ctxt->info.port_vlan_flags |= ((ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH &
+-					ICE_AQ_VSI_PVLAN_EMOD_M) >>
+-				       ICE_AQ_VSI_PVLAN_EMOD_S);
++
++	/* By default bits 3 and 4 in vlan_flags are 0's which results in legacy
++	 * behavior (show VLAN, DEI, and UP) in descriptor. Also, allow all
++	 * packets untagged/tagged.
++	 */
++	ctxt->info.vlan_flags = ((ICE_AQ_VSI_VLAN_MODE_ALL &
++				  ICE_AQ_VSI_VLAN_MODE_M) >>
++				 ICE_AQ_VSI_VLAN_MODE_S);
++
+ 	/* Have 1:1 UP mapping for both ingress/egress tables */
+ 	table |= ICE_UP_TABLE_TRANSLATE(0, 0);
+ 	table |= ICE_UP_TABLE_TRANSLATE(1, 1);
+@@ -2058,15 +2074,13 @@ static int ice_req_irq_msix_misc(struct ice_pf *pf)
+ skip_req_irq:
+ 	ice_ena_misc_vector(pf);
+ 
+-	val = (pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_OICR_CTL_ITR_INDX_M) |
+-	      PFINT_OICR_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
++	       PFINT_OICR_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_OICR_CTL, val);
+ 
+ 	/* This enables Admin queue Interrupt causes */
+-	val = (pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
+-	      (ICE_RX_ITR & PFINT_FW_CTL_ITR_INDX_M) |
+-	      PFINT_FW_CTL_CAUSE_ENA_M;
++	val = ((pf->oicr_idx & PFINT_FW_CTL_MSIX_INDX_M) |
++	       PFINT_FW_CTL_CAUSE_ENA_M);
+ 	wr32(hw, PFINT_FW_CTL, val);
+ 
+ 	itr_gran = hw->itr_gran_200;
+@@ -3246,8 +3260,10 @@ static void ice_clear_interrupt_scheme(struct ice_pf *pf)
+ 	if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
+ 		ice_dis_msix(pf);
+ 
+-	devm_kfree(&pf->pdev->dev, pf->irq_tracker);
+-	pf->irq_tracker = NULL;
++	if (pf->irq_tracker) {
++		devm_kfree(&pf->pdev->dev, pf->irq_tracker);
++		pf->irq_tracker = NULL;
++	}
+ }
+ 
+ /**
+@@ -3720,10 +3736,10 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 	enum ice_status status;
+ 
+ 	/* Here we are configuring the VSI to let the driver add VLAN tags by
+-	 * setting port_vlan_flags to ICE_AQ_VSI_PVLAN_MODE_ALL. The actual VLAN
+-	 * tag insertion happens in the Tx hot path, in ice_tx_map.
++	 * setting vlan_flags to ICE_AQ_VSI_VLAN_MODE_ALL. The actual VLAN tag
++	 * insertion happens in the Tx hot path, in ice_tx_map.
+ 	 */
+-	ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_MODE_ALL;
++	ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_MODE_ALL;
+ 
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+@@ -3735,7 +3751,7 @@ static int ice_vsi_manage_vlan_insertion(struct ice_vsi *vsi)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -3757,12 +3773,15 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 	 */
+ 	if (ena) {
+ 		/* Strip VLAN tag from Rx packet and put it in the desc */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_STR_BOTH;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_STR_BOTH;
+ 	} else {
+ 		/* Disable stripping. Leave tag in packet */
+-		ctxt.info.port_vlan_flags = ICE_AQ_VSI_PVLAN_EMOD_NOTHING;
++		ctxt.info.vlan_flags = ICE_AQ_VSI_VLAN_EMOD_NOTHING;
+ 	}
+ 
++	/* Allow all packets untagged/tagged */
++	ctxt.info.vlan_flags |= ICE_AQ_VSI_VLAN_MODE_ALL;
++
+ 	ctxt.info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_VLAN_VALID);
+ 	ctxt.vsi_num = vsi->vsi_num;
+ 
+@@ -3773,7 +3792,7 @@ static int ice_vsi_manage_vlan_stripping(struct ice_vsi *vsi, bool ena)
+ 		return -EIO;
+ 	}
+ 
+-	vsi->info.port_vlan_flags = ctxt.info.port_vlan_flags;
++	vsi->info.vlan_flags = ctxt.info.vlan_flags;
+ 	return 0;
+ }
+ 
+@@ -4098,11 +4117,12 @@ static int ice_vsi_cfg(struct ice_vsi *vsi)
+ {
+ 	int err;
+ 
+-	ice_set_rx_mode(vsi->netdev);
+-
+-	err = ice_restore_vlan(vsi);
+-	if (err)
+-		return err;
++	if (vsi->netdev) {
++		ice_set_rx_mode(vsi->netdev);
++		err = ice_restore_vlan(vsi);
++		if (err)
++			return err;
++	}
+ 
+ 	err = ice_vsi_cfg_txqs(vsi);
+ 	if (!err)
+@@ -4868,7 +4888,7 @@ int ice_down(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_txq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Tx queues\n",
+@@ -4893,7 +4913,7 @@ static int ice_vsi_setup_tx_rings(struct ice_vsi *vsi)
+  */
+ static int ice_vsi_setup_rx_rings(struct ice_vsi *vsi)
+ {
+-	int i, err;
++	int i, err = 0;
+ 
+ 	if (!vsi->num_rxq) {
+ 		dev_err(&vsi->back->pdev->dev, "VSI %d has 0 Rx queues\n",
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 723d15f1e90b..6b7ec2ae5ad6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -645,14 +645,14 @@ ice_add_marker_act(struct ice_hw *hw, struct ice_fltr_mgmt_list_entry *m_ent,
+ 	act |= (1 << ICE_LG_ACT_GENERIC_VALUE_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[1] = cpu_to_le32(act);
+ 
+-	act = (7 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
++	act = (ICE_LG_ACT_GENERIC_OFF_RX_DESC_PROF_IDX <<
++	       ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_OFFSET_M;
+ 
+ 	/* Third action Marker value */
+ 	act |= ICE_LG_ACT_GENERIC;
+ 	act |= (sw_marker << ICE_LG_ACT_GENERIC_VALUE_S) &
+ 		ICE_LG_ACT_GENERIC_VALUE_M;
+ 
+-	act |= (0 << ICE_LG_ACT_GENERIC_OFFSET_S) & ICE_LG_ACT_GENERIC_VALUE_M;
+ 	lg_act->pdata.lg_act.act[2] = cpu_to_le32(act);
+ 
+ 	/* call the fill switch rule to fill the lookup tx rx structure */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 6f59933cdff7..2bc4fe475f28 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -688,8 +688,13 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
+ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
++	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
+ 	struct vf_data_storage *vfinfo = &adapter->vfinfo[vf];
++	u32 q_per_pool = __ALIGN_MASK(1, ~vmdq->mask);
+ 	u8 num_tcs = adapter->hw_tcs;
++	u32 reg_val;
++	u32 queue;
++	u32 word;
+ 
+ 	/* remove VLAN filters beloning to this VF */
+ 	ixgbe_clear_vf_vlans(adapter, vf);
+@@ -726,6 +731,27 @@ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
+ 
+ 	/* reset VF api back to unknown */
+ 	adapter->vfinfo[vf].vf_api = ixgbe_mbox_api_10;
++
++	/* Restart each queue for given VF */
++	for (queue = 0; queue < q_per_pool; queue++) {
++		unsigned int reg_idx = (vf * q_per_pool) + queue;
++
++		reg_val = IXGBE_READ_REG(hw, IXGBE_PVFTXDCTL(reg_idx));
++
++		/* Re-enabling only configured queues */
++		if (reg_val) {
++			reg_val |= IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++			reg_val &= ~IXGBE_TXDCTL_ENABLE;
++			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
++		}
++	}
++
++	/* Clear VF's mailbox memory */
++	for (word = 0; word < IXGBE_VFMAILBOX_SIZE; word++)
++		IXGBE_WRITE_REG_ARRAY(hw, IXGBE_PFMBMEM(vf), word, 0);
++
++	IXGBE_WRITE_FLUSH(hw);
+ }
+ 
+ static int ixgbe_set_vf_mac(struct ixgbe_adapter *adapter,
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+index 44cfb2021145..41bcbb337e83 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+@@ -2518,6 +2518,7 @@ enum {
+ /* Translated register #defines */
+ #define IXGBE_PVFTDH(P)		(0x06010 + (0x40 * (P)))
+ #define IXGBE_PVFTDT(P)		(0x06018 + (0x40 * (P)))
++#define IXGBE_PVFTXDCTL(P)	(0x06028 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAL(P)	(0x06038 + (0x40 * (P)))
+ #define IXGBE_PVFTDWBAH(P)	(0x0603C + (0x40 * (P)))
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index cdd645024a32..ad6826b5f758 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -48,7 +48,7 @@
+ #include "qed_reg_addr.h"
+ #include "qed_sriov.h"
+ 
+-#define CHIP_MCP_RESP_ITER_US 10
++#define QED_MCP_RESP_ITER_US	10
+ 
+ #define QED_DRV_MB_MAX_RETRIES	(500 * 1000)	/* Account for 5 sec */
+ #define QED_MCP_RESET_RETRIES	(50 * 1000)	/* Account for 500 msec */
+@@ -183,18 +183,57 @@ int qed_mcp_free(struct qed_hwfn *p_hwfn)
+ 	return 0;
+ }
+ 
++/* Maximum of 1 sec to wait for the SHMEM ready indication */
++#define QED_MCP_SHMEM_RDY_MAX_RETRIES	20
++#define QED_MCP_SHMEM_RDY_ITER_MS	50
++
+ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+ 	struct qed_mcp_info *p_info = p_hwfn->mcp_info;
++	u8 cnt = QED_MCP_SHMEM_RDY_MAX_RETRIES;
++	u8 msec = QED_MCP_SHMEM_RDY_ITER_MS;
+ 	u32 drv_mb_offsize, mfw_mb_offsize;
+ 	u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
+ 
+ 	p_info->public_base = qed_rd(p_hwfn, p_ptt, MISC_REG_SHARED_MEM_ADDR);
+-	if (!p_info->public_base)
+-		return 0;
++	if (!p_info->public_base) {
++		DP_NOTICE(p_hwfn,
++			  "The address of the MCP scratch-pad is not configured\n");
++		return -EINVAL;
++	}
+ 
+ 	p_info->public_base |= GRCBASE_MCP;
+ 
++	/* Get the MFW MB address and number of supported messages */
++	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
++				SECTION_OFFSIZE_ADDR(p_info->public_base,
++						     PUBLIC_MFW_MB));
++	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
++	p_info->mfw_mb_length = (u16)qed_rd(p_hwfn, p_ptt,
++					    p_info->mfw_mb_addr +
++					    offsetof(struct public_mfw_mb,
++						     sup_msgs));
++
++	/* The driver can notify that there was an MCP reset, and might read the
++	 * SHMEM values before the MFW has completed initializing them.
++	 * To avoid this, the "sup_msgs" field in the MFW mailbox is used as a
++	 * data ready indication.
++	 */
++	while (!p_info->mfw_mb_length && --cnt) {
++		msleep(msec);
++		p_info->mfw_mb_length =
++			(u16)qed_rd(p_hwfn, p_ptt,
++				    p_info->mfw_mb_addr +
++				    offsetof(struct public_mfw_mb, sup_msgs));
++	}
++
++	if (!cnt) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to get the SHMEM ready notification after %d msec\n",
++			  QED_MCP_SHMEM_RDY_MAX_RETRIES * msec);
++		return -EBUSY;
++	}
++
+ 	/* Calculate the driver and MFW mailbox address */
+ 	drv_mb_offsize = qed_rd(p_hwfn, p_ptt,
+ 				SECTION_OFFSIZE_ADDR(p_info->public_base,
+@@ -204,13 +243,6 @@ static int qed_load_mcp_offsets(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		   "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n",
+ 		   drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
+ 
+-	/* Set the MFW MB address */
+-	mfw_mb_offsize = qed_rd(p_hwfn, p_ptt,
+-				SECTION_OFFSIZE_ADDR(p_info->public_base,
+-						     PUBLIC_MFW_MB));
+-	p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
+-	p_info->mfw_mb_length =	(u16)qed_rd(p_hwfn, p_ptt, p_info->mfw_mb_addr);
+-
+ 	/* Get the current driver mailbox sequence before sending
+ 	 * the first command
+ 	 */
+@@ -285,9 +317,15 @@ static void qed_mcp_reread_offsets(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_reset(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 org_mcp_reset_seq, seq, delay = CHIP_MCP_RESP_ITER_US, cnt = 0;
++	u32 org_mcp_reset_seq, seq, delay = QED_MCP_RESP_ITER_US, cnt = 0;
+ 	int rc = 0;
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending MCP_RESET mailbox command.\n");
++		return -EBUSY;
++	}
++
+ 	/* Ensure that only a single thread is accessing the mailbox */
+ 	spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+@@ -413,14 +451,41 @@ static void __qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   (p_mb_params->cmd | seq_num), p_mb_params->param);
+ }
+ 
++static void qed_mcp_cmd_set_blocking(struct qed_hwfn *p_hwfn, bool block_cmd)
++{
++	p_hwfn->mcp_info->b_block_cmd = block_cmd;
++
++	DP_INFO(p_hwfn, "%s sending of mailbox commands to the MFW\n",
++		block_cmd ? "Block" : "Unblock");
++}
++
++static void qed_mcp_print_cpu_info(struct qed_hwfn *p_hwfn,
++				   struct qed_ptt *p_ptt)
++{
++	u32 cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2;
++	u32 delay = QED_MCP_RESP_ITER_US;
++
++	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++	cpu_pc_0 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_1 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++	udelay(delay);
++	cpu_pc_2 = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_PROGRAM_COUNTER);
++
++	DP_NOTICE(p_hwfn,
++		  "MCP CPU info: mode 0x%08x, state 0x%08x, pc {0x%08x, 0x%08x, 0x%08x}\n",
++		  cpu_mode, cpu_state, cpu_pc_0, cpu_pc_1, cpu_pc_2);
++}
++
+ static int
+ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		       struct qed_ptt *p_ptt,
+ 		       struct qed_mcp_mb_params *p_mb_params,
+-		       u32 max_retries, u32 delay)
++		       u32 max_retries, u32 usecs)
+ {
++	u32 cnt = 0, msecs = DIV_ROUND_UP(usecs, 1000);
+ 	struct qed_mcp_cmd_elem *p_cmd_elem;
+-	u32 cnt = 0;
+ 	u16 seq_num;
+ 	int rc = 0;
+ 
+@@ -443,7 +508,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 			goto err;
+ 
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+-		udelay(delay);
++
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
+ 	} while (++cnt < max_retries);
+ 
+ 	if (cnt >= max_retries) {
+@@ -472,7 +541,11 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		 * The spinlock stays locked until the list element is removed.
+ 		 */
+ 
+-		udelay(delay);
++		if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP))
++			msleep(msecs);
++		else
++			udelay(usecs);
++
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
+ 		if (p_cmd_elem->b_is_completed)
+@@ -491,11 +564,15 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		DP_NOTICE(p_hwfn,
+ 			  "The MFW failed to respond to command 0x%08x [param 0x%08x].\n",
+ 			  p_mb_params->cmd, p_mb_params->param);
++		qed_mcp_print_cpu_info(p_hwfn, p_ptt);
+ 
+ 		spin_lock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 		qed_mcp_cmd_del_elem(p_hwfn, p_cmd_elem);
+ 		spin_unlock_bh(&p_hwfn->mcp_info->cmd_lock);
+ 
++		if (!QED_MB_FLAGS_IS_SET(p_mb_params, AVOID_BLOCK))
++			qed_mcp_cmd_set_blocking(p_hwfn, true);
++
+ 		return -EAGAIN;
+ 	}
+ 
+@@ -507,7 +584,7 @@ _qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		   "MFW mailbox: response 0x%08x param 0x%08x [after %d.%03d ms]\n",
+ 		   p_mb_params->mcp_resp,
+ 		   p_mb_params->mcp_param,
+-		   (cnt * delay) / 1000, (cnt * delay) % 1000);
++		   (cnt * usecs) / 1000, (cnt * usecs) % 1000);
+ 
+ 	/* Clear the sequence number from the MFW response */
+ 	p_mb_params->mcp_resp &= FW_MSG_CODE_MASK;
+@@ -525,7 +602,7 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ {
+ 	size_t union_data_size = sizeof(union drv_union_data);
+ 	u32 max_retries = QED_DRV_MB_MAX_RETRIES;
+-	u32 delay = CHIP_MCP_RESP_ITER_US;
++	u32 usecs = QED_MCP_RESP_ITER_US;
+ 
+ 	/* MCP not initialized */
+ 	if (!qed_mcp_is_init(p_hwfn)) {
+@@ -533,6 +610,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EBUSY;
+ 	}
+ 
++	if (p_hwfn->mcp_info->b_block_cmd) {
++		DP_NOTICE(p_hwfn,
++			  "The MFW is not responsive. Avoid sending mailbox command 0x%08x [param 0x%08x].\n",
++			  p_mb_params->cmd, p_mb_params->param);
++		return -EBUSY;
++	}
++
+ 	if (p_mb_params->data_src_size > union_data_size ||
+ 	    p_mb_params->data_dst_size > union_data_size) {
+ 		DP_ERR(p_hwfn,
+@@ -542,8 +626,13 @@ static int qed_mcp_cmd_and_union(struct qed_hwfn *p_hwfn,
+ 		return -EINVAL;
+ 	}
+ 
++	if (QED_MB_FLAGS_IS_SET(p_mb_params, CAN_SLEEP)) {
++		max_retries = DIV_ROUND_UP(max_retries, 1000);
++		usecs *= 1000;
++	}
++
+ 	return _qed_mcp_cmd_and_union(p_hwfn, p_ptt, p_mb_params, max_retries,
+-				      delay);
++				      usecs);
+ }
+ 
+ int qed_mcp_cmd(struct qed_hwfn *p_hwfn,
+@@ -760,6 +849,7 @@ __qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 	mb_params.data_src_size = sizeof(load_req);
+ 	mb_params.p_data_dst = &load_rsp;
+ 	mb_params.data_dst_size = sizeof(load_rsp);
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
+ 
+ 	DP_VERBOSE(p_hwfn, QED_MSG_SP,
+ 		   "Load Request: param 0x%08x [init_hw %d, drv_type %d, hsi_ver %d, pda 0x%04x]\n",
+@@ -981,7 +1071,8 @@ int qed_mcp_load_req(struct qed_hwfn *p_hwfn,
+ 
+ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 wol_param, mcp_resp, mcp_param;
++	struct qed_mcp_mb_params mb_params;
++	u32 wol_param;
+ 
+ 	switch (p_hwfn->cdev->wol_config) {
+ 	case QED_OV_WOL_DISABLED:
+@@ -999,8 +1090,12 @@ int qed_mcp_unload_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ 		wol_param = DRV_MB_PARAM_UNLOAD_WOL_MCP;
+ 	}
+ 
+-	return qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_UNLOAD_REQ, wol_param,
+-			   &mcp_resp, &mcp_param);
++	memset(&mb_params, 0, sizeof(mb_params));
++	mb_params.cmd = DRV_MSG_CODE_UNLOAD_REQ;
++	mb_params.param = wol_param;
++	mb_params.flags = QED_MB_FLAG_CAN_SLEEP | QED_MB_FLAG_AVOID_BLOCK;
++
++	return qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ }
+ 
+ int qed_mcp_unload_done(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+@@ -2075,31 +2170,65 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
+ 	return rc;
+ }
+ 
++/* A maximal 100 msec waiting time for the MCP to halt */
++#define QED_MCP_HALT_SLEEP_MS		10
++#define QED_MCP_HALT_MAX_RETRIES	10
++
+ int qed_mcp_halt(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 resp = 0, param = 0;
++	u32 resp = 0, param = 0, cpu_state, cnt = 0;
+ 	int rc;
+ 
+ 	rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MCP_HALT, 0, &resp,
+ 			 &param);
+-	if (rc)
++	if (rc) {
+ 		DP_ERR(p_hwfn, "MCP response failure, aborting\n");
++		return rc;
++	}
+ 
+-	return rc;
++	do {
++		msleep(QED_MCP_HALT_SLEEP_MS);
++		cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
++		if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED)
++			break;
++	} while (++cnt < QED_MCP_HALT_MAX_RETRIES);
++
++	if (cnt == QED_MCP_HALT_MAX_RETRIES) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to halt the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE), cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, true);
++
++	return 0;
+ }
+ 
++#define QED_MCP_RESUME_SLEEP_MS	10
++
+ int qed_mcp_resume(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 value, cpu_mode;
++	u32 cpu_mode, cpu_state;
+ 
+ 	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_STATE, 0xffffffff);
+ 
+-	value = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
+-	value &= ~MCP_REG_CPU_MODE_SOFT_HALT;
+-	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, value);
+ 	cpu_mode = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_MODE);
++	cpu_mode &= ~MCP_REG_CPU_MODE_SOFT_HALT;
++	qed_wr(p_hwfn, p_ptt, MCP_REG_CPU_MODE, cpu_mode);
++	msleep(QED_MCP_RESUME_SLEEP_MS);
++	cpu_state = qed_rd(p_hwfn, p_ptt, MCP_REG_CPU_STATE);
+ 
+-	return (cpu_mode & MCP_REG_CPU_MODE_SOFT_HALT) ? -EAGAIN : 0;
++	if (cpu_state & MCP_REG_CPU_STATE_SOFT_HALTED) {
++		DP_NOTICE(p_hwfn,
++			  "Failed to resume the MCP [CPU_MODE = 0x%08x, CPU_STATE = 0x%08x]\n",
++			  cpu_mode, cpu_state);
++		return -EBUSY;
++	}
++
++	qed_mcp_cmd_set_blocking(p_hwfn, false);
++
++	return 0;
+ }
+ 
+ int qed_mcp_ov_update_current_config(struct qed_hwfn *p_hwfn,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+index 632a838f1fe3..ce2e617d2cab 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+@@ -635,11 +635,14 @@ struct qed_mcp_info {
+ 	 */
+ 	spinlock_t				cmd_lock;
+ 
++	/* Flag to indicate whether sending a MFW mailbox command is blocked */
++	bool					b_block_cmd;
++
+ 	/* Spinlock used for syncing SW link-changes and link-changes
+ 	 * originating from attention context.
+ 	 */
+ 	spinlock_t				link_lock;
+-	bool					block_mb_sending;
++
+ 	u32					public_base;
+ 	u32					drv_mb_addr;
+ 	u32					mfw_mb_addr;
+@@ -660,14 +663,20 @@ struct qed_mcp_info {
+ };
+ 
+ struct qed_mcp_mb_params {
+-	u32			cmd;
+-	u32			param;
+-	void			*p_data_src;
+-	u8			data_src_size;
+-	void			*p_data_dst;
+-	u8			data_dst_size;
+-	u32			mcp_resp;
+-	u32			mcp_param;
++	u32 cmd;
++	u32 param;
++	void *p_data_src;
++	void *p_data_dst;
++	u8 data_src_size;
++	u8 data_dst_size;
++	u32 mcp_resp;
++	u32 mcp_param;
++	u32 flags;
++#define QED_MB_FLAG_CAN_SLEEP	(0x1 << 0)
++#define QED_MB_FLAG_AVOID_BLOCK	(0x1 << 1)
++#define QED_MB_FLAGS_IS_SET(params, flag) \
++	({ typeof(params) __params = (params); \
++	   (__params && (__params->flags & QED_MB_FLAG_ ## flag)); })
+ };
+ 
+ struct qed_drv_tlv_hdr {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+index d8ad2dcad8d5..f736f70956fd 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+@@ -562,8 +562,10 @@
+ 	0
+ #define MCP_REG_CPU_STATE \
+ 	0xe05004UL
++#define MCP_REG_CPU_STATE_SOFT_HALTED	(0x1UL << 10)
+ #define MCP_REG_CPU_EVENT_MASK \
+ 	0xe05008UL
++#define MCP_REG_CPU_PROGRAM_COUNTER	0xe0501cUL
+ #define PGLUE_B_REG_PF_BAR0_SIZE \
+ 	0x2aae60UL
+ #define PGLUE_B_REG_PF_BAR1_SIZE \
+diff --git a/drivers/net/phy/xilinx_gmii2rgmii.c b/drivers/net/phy/xilinx_gmii2rgmii.c
+index 2e5150b0b8d5..7a14e8170e82 100644
+--- a/drivers/net/phy/xilinx_gmii2rgmii.c
++++ b/drivers/net/phy/xilinx_gmii2rgmii.c
+@@ -40,8 +40,11 @@ static int xgmiitorgmii_read_status(struct phy_device *phydev)
+ {
+ 	struct gmii2rgmii *priv = phydev->priv;
+ 	u16 val = 0;
++	int err;
+ 
+-	priv->phy_drv->read_status(phydev);
++	err = priv->phy_drv->read_status(phydev);
++	if (err < 0)
++		return err;
+ 
+ 	val = mdiobus_read(phydev->mdio.bus, priv->addr, XILINX_GMII2RGMII_REG);
+ 	val &= ~XILINX_GMII2RGMII_SPEED_MASK;
+@@ -81,6 +84,11 @@ static int xgmiitorgmii_probe(struct mdio_device *mdiodev)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
++	if (!priv->phy_dev->drv) {
++		dev_info(dev, "Attached phy not ready\n");
++		return -EPROBE_DEFER;
++	}
++
+ 	priv->addr = mdiodev->addr;
+ 	priv->phy_drv = priv->phy_dev->drv;
+ 	memcpy(&priv->conv_phy_drv, priv->phy_dev->drv,
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 3b96a43fbda4..18c709c484e7 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -1512,7 +1512,7 @@ ath10k_ce_alloc_src_ring_64(struct ath10k *ar, unsigned int ce_id,
+ 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ 		if (ret) {
+ 			dma_free_coherent(ar->dev,
+-					  (nentries * sizeof(struct ce_desc) +
++					  (nentries * sizeof(struct ce_desc_64) +
+ 					   CE_DESC_RING_ALIGN),
+ 					  src_ring->base_addr_owner_space_unaligned,
+ 					  base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index c72d8af122a2..4d1cd90d6d27 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -268,11 +268,12 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar)
+ 	spin_lock_bh(&htt->rx_ring.lock);
+ 	ret = ath10k_htt_rx_ring_fill_n(htt, (htt->rx_ring.fill_level -
+ 					      htt->rx_ring.fill_cnt));
+-	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	if (ret)
+ 		ath10k_htt_rx_ring_free(htt);
+ 
++	spin_unlock_bh(&htt->rx_ring.lock);
++
+ 	return ret;
+ }
+ 
+@@ -284,7 +285,9 @@ void ath10k_htt_rx_free(struct ath10k_htt *htt)
+ 	skb_queue_purge(&htt->rx_in_ord_compl_q);
+ 	skb_queue_purge(&htt->tx_fetch_ind_q);
+ 
++	spin_lock_bh(&htt->rx_ring.lock);
+ 	ath10k_htt_rx_ring_free(htt);
++	spin_unlock_bh(&htt->rx_ring.lock);
+ 
+ 	dma_free_coherent(htt->ar->dev,
+ 			  ath10k_htt_get_rx_ring_size(htt),
+@@ -1089,7 +1092,7 @@ static void ath10k_htt_rx_h_queue_msdu(struct ath10k *ar,
+ 	status = IEEE80211_SKB_RXCB(skb);
+ 	*status = *rx_status;
+ 
+-	__skb_queue_tail(&ar->htt.rx_msdus_q, skb);
++	skb_queue_tail(&ar->htt.rx_msdus_q, skb);
+ }
+ 
+ static void ath10k_process_rx(struct ath10k *ar, struct sk_buff *skb)
+@@ -2810,7 +2813,7 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+ 		break;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_RX_IN_ORD_PADDR_IND: {
+-		__skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
++		skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
+ 		return false;
+ 	}
+ 	case HTT_T2H_MSG_TYPE_TX_CREDIT_UPDATE_IND:
+@@ -2874,7 +2877,7 @@ static int ath10k_htt_rx_deliver_msdu(struct ath10k *ar, int quota, int budget)
+ 		if (skb_queue_empty(&ar->htt.rx_msdus_q))
+ 			break;
+ 
+-		skb = __skb_dequeue(&ar->htt.rx_msdus_q);
++		skb = skb_dequeue(&ar->htt.rx_msdus_q);
+ 		if (!skb)
+ 			break;
+ 		ath10k_process_rx(ar, skb);
+@@ -2905,7 +2908,7 @@ int ath10k_htt_txrx_compl_task(struct ath10k *ar, int budget)
+ 		goto exit;
+ 	}
+ 
+-	while ((skb = __skb_dequeue(&htt->rx_in_ord_compl_q))) {
++	while ((skb = skb_dequeue(&htt->rx_in_ord_compl_q))) {
+ 		spin_lock_bh(&htt->rx_ring.lock);
+ 		ret = ath10k_htt_rx_in_ord_ind(ar, skb);
+ 		spin_unlock_bh(&htt->rx_ring.lock);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 747c6951b5c1..e0b9f7d0dfd3 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -4054,6 +4054,7 @@ void ath10k_mac_tx_push_pending(struct ath10k *ar)
+ 	rcu_read_unlock();
+ 	spin_unlock_bh(&ar->txqs_lock);
+ }
++EXPORT_SYMBOL(ath10k_mac_tx_push_pending);
+ 
+ /************/
+ /* Scanning */
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index d612ce8c9cff..299db8b1c9ba 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -30,6 +30,7 @@
+ #include "debug.h"
+ #include "hif.h"
+ #include "htc.h"
++#include "mac.h"
+ #include "targaddrs.h"
+ #include "trace.h"
+ #include "sdio.h"
+@@ -396,6 +397,7 @@ static int ath10k_sdio_mbox_rx_process_packet(struct ath10k *ar,
+ 	int ret;
+ 
+ 	payload_len = le16_to_cpu(htc_hdr->len);
++	skb->len = payload_len + sizeof(struct ath10k_htc_hdr);
+ 
+ 	if (trailer_present) {
+ 		trailer = skb->data + sizeof(*htc_hdr) +
+@@ -434,12 +436,14 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 	enum ath10k_htc_ep_id id;
+ 	int ret, i, *n_lookahead_local;
+ 	u32 *lookaheads_local;
++	int lookahead_idx = 0;
+ 
+ 	for (i = 0; i < ar_sdio->n_rx_pkts; i++) {
+ 		lookaheads_local = lookaheads;
+ 		n_lookahead_local = n_lookahead;
+ 
+-		id = ((struct ath10k_htc_hdr *)&lookaheads[i])->eid;
++		id = ((struct ath10k_htc_hdr *)
++		      &lookaheads[lookahead_idx++])->eid;
+ 
+ 		if (id >= ATH10K_HTC_EP_COUNT) {
+ 			ath10k_warn(ar, "invalid endpoint in look-ahead: %d\n",
+@@ -462,6 +466,7 @@ static int ath10k_sdio_mbox_rx_process_packets(struct ath10k *ar,
+ 			/* Only read lookahead's from RX trailers
+ 			 * for the last packet in a bundle.
+ 			 */
++			lookahead_idx--;
+ 			lookaheads_local = NULL;
+ 			n_lookahead_local = NULL;
+ 		}
+@@ -1342,6 +1347,8 @@ static void ath10k_sdio_irq_handler(struct sdio_func *func)
+ 			break;
+ 	} while (time_before(jiffies, timeout) && !done);
+ 
++	ath10k_mac_tx_push_pending(ar);
++
+ 	sdio_claim_host(ar_sdio->func);
+ 
+ 	if (ret && ret != -ECANCELED)
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index a3a7042fe13a..aa621bf50a91 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -449,7 +449,7 @@ static void ath10k_snoc_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
+ 
+ static void ath10k_snoc_rx_replenish_retry(struct timer_list *t)
+ {
+-	struct ath10k_pci *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
++	struct ath10k_snoc *ar_snoc = from_timer(ar_snoc, t, rx_post_retry);
+ 	struct ath10k *ar = ar_snoc->ar;
+ 
+ 	ath10k_snoc_rx_post(ar);
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index f97ab795cf2e..2319f79b34f0 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -4602,10 +4602,6 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 
+ 	ev = (struct wmi_pdev_tpc_config_event *)skb->data;
+ 
+-	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
+-	if (!tpc_stats)
+-		return;
+-
+ 	num_tx_chain = __le32_to_cpu(ev->num_tx_chain);
+ 
+ 	if (num_tx_chain > WMI_TPC_TX_N_CHAIN) {
+@@ -4614,6 +4610,10 @@ void ath10k_wmi_event_pdev_tpc_config(struct ath10k *ar, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	tpc_stats = kzalloc(sizeof(*tpc_stats), GFP_ATOMIC);
++	if (!tpc_stats)
++		return;
++
+ 	ath10k_wmi_tpc_config_get_rate_code(rate_code, pream_table,
+ 					    num_tx_chain);
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+index b9672da24a9d..b24bc57ca91b 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_qmath.c
+@@ -213,7 +213,7 @@ static const s16 log_table[] = {
+ 	30498,
+ 	31267,
+ 	32024,
+-	32768
++	32767
+ };
+ 
+ #define LOG_TABLE_SIZE 32       /* log_table size */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+index b49aea4da2d6..8985446570bd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c
+@@ -439,15 +439,13 @@ mt76x2_mac_fill_tx_status(struct mt76x2_dev *dev,
+ 	if (last_rate < IEEE80211_TX_MAX_RATES - 1)
+ 		rate[last_rate + 1].idx = -1;
+ 
+-	cur_idx = rate[last_rate].idx + st->retry;
++	cur_idx = rate[last_rate].idx + last_rate;
+ 	for (i = 0; i <= last_rate; i++) {
+ 		rate[i].flags = rate[last_rate].flags;
+ 		rate[i].idx = max_t(int, 0, cur_idx - i);
+ 		rate[i].count = 1;
+ 	}
+-
+-	if (last_rate > 0)
+-		rate[last_rate - 1].count = st->retry + 1 - last_rate;
++	rate[last_rate].count = st->retry + 1 - last_rate;
+ 
+ 	info->status.ampdu_len = n_frames;
+ 	info->status.ampdu_ack_len = st->success ? n_frames : 0;
+diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
+index 9935bd09db1f..d4947e3a909e 100644
+--- a/drivers/net/wireless/rndis_wlan.c
++++ b/drivers/net/wireless/rndis_wlan.c
+@@ -2928,6 +2928,8 @@ static void rndis_wlan_auth_indication(struct usbnet *usbdev,
+ 
+ 	while (buflen >= sizeof(*auth_req)) {
+ 		auth_req = (void *)buf;
++		if (buflen < le32_to_cpu(auth_req->length))
++			return;
+ 		type = "unknown";
+ 		flags = le32_to_cpu(auth_req->flags);
+ 		pairwise_error = false;
+diff --git a/drivers/net/wireless/ti/wlcore/cmd.c b/drivers/net/wireless/ti/wlcore/cmd.c
+index 761cf8573a80..f48c3f62966d 100644
+--- a/drivers/net/wireless/ti/wlcore/cmd.c
++++ b/drivers/net/wireless/ti/wlcore/cmd.c
+@@ -35,6 +35,7 @@
+ #include "wl12xx_80211.h"
+ #include "cmd.h"
+ #include "event.h"
++#include "ps.h"
+ #include "tx.h"
+ #include "hw_ops.h"
+ 
+@@ -191,6 +192,10 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 
+ 	timeout_time = jiffies + msecs_to_jiffies(WL1271_EVENT_TIMEOUT);
+ 
++	ret = wl1271_ps_elp_wakeup(wl);
++	if (ret < 0)
++		return ret;
++
+ 	do {
+ 		if (time_after(jiffies, timeout_time)) {
+ 			wl1271_debug(DEBUG_CMD, "timeout waiting for event %d",
+@@ -222,6 +227,7 @@ int wlcore_cmd_wait_for_event_or_timeout(struct wl1271 *wl,
+ 	} while (!event);
+ 
+ out:
++	wl1271_ps_elp_sleep(wl);
+ 	kfree(events_vector);
+ 	return ret;
+ }
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 34712def81b1..5251689a1d9a 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -311,7 +311,7 @@ fcloop_tgt_lsrqst_done_work(struct work_struct *work)
+ 	struct fcloop_tport *tport = tls_req->tport;
+ 	struct nvmefc_ls_req *lsreq = tls_req->lsreq;
+ 
+-	if (tport->remoteport)
++	if (!tport || tport->remoteport)
+ 		lsreq->done(lsreq, tls_req->status);
+ }
+ 
+@@ -329,6 +329,7 @@ fcloop_ls_req(struct nvme_fc_local_port *localport,
+ 
+ 	if (!rport->targetport) {
+ 		tls_req->status = -ECONNREFUSED;
++		tls_req->tport = NULL;
+ 		schedule_work(&tls_req->work);
+ 		return ret;
+ 	}
+diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
+index ef0b1b6ba86f..12afa7fdf77e 100644
+--- a/drivers/pci/hotplug/acpiphp_glue.c
++++ b/drivers/pci/hotplug/acpiphp_glue.c
+@@ -457,17 +457,18 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
+ /**
+  * enable_slot - enable, configure a slot
+  * @slot: slot to be enabled
++ * @bridge: true if enable is for the whole bridge (not a single slot)
+  *
+  * This function should be called per *physical slot*,
+  * not per each slot object in ACPI namespace.
+  */
+-static void enable_slot(struct acpiphp_slot *slot)
++static void enable_slot(struct acpiphp_slot *slot, bool bridge)
+ {
+ 	struct pci_dev *dev;
+ 	struct pci_bus *bus = slot->bus;
+ 	struct acpiphp_func *func;
+ 
+-	if (bus->self && hotplug_is_native(bus->self)) {
++	if (bridge && bus->self && hotplug_is_native(bus->self)) {
+ 		/*
+ 		 * If native hotplug is used, it will take care of hotplug
+ 		 * slot management and resource allocation for hotplug
+@@ -701,7 +702,7 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
+ 					trim_stale_devices(dev);
+ 
+ 			/* configure all functions */
+-			enable_slot(slot);
++			enable_slot(slot, true);
+ 		} else {
+ 			disable_slot(slot);
+ 		}
+@@ -785,7 +786,7 @@ static void hotplug_event(u32 type, struct acpiphp_context *context)
+ 		if (bridge)
+ 			acpiphp_check_bridge(bridge);
+ 		else if (!(slot->flags & SLOT_IS_GOING_AWAY))
+-			enable_slot(slot);
++			enable_slot(slot, false);
+ 
+ 		break;
+ 
+@@ -973,7 +974,7 @@ int acpiphp_enable_slot(struct acpiphp_slot *slot)
+ 
+ 	/* configure all functions */
+ 	if (!(slot->flags & SLOT_ENABLED))
+-		enable_slot(slot);
++		enable_slot(slot, false);
+ 
+ 	pci_unlock_rescan_remove();
+ 	return 0;
+diff --git a/drivers/platform/x86/asus-wireless.c b/drivers/platform/x86/asus-wireless.c
+index 6afd011de9e5..b8e35a8d65cf 100644
+--- a/drivers/platform/x86/asus-wireless.c
++++ b/drivers/platform/x86/asus-wireless.c
+@@ -52,13 +52,12 @@ static const struct acpi_device_id device_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(acpi, device_ids);
+ 
+-static u64 asus_wireless_method(acpi_handle handle, const char *method,
+-				int param)
++static acpi_status asus_wireless_method(acpi_handle handle, const char *method,
++					int param, u64 *ret)
+ {
+ 	struct acpi_object_list p;
+ 	union acpi_object obj;
+ 	acpi_status s;
+-	u64 ret;
+ 
+ 	acpi_handle_debug(handle, "Evaluating method %s, parameter %#x\n",
+ 			  method, param);
+@@ -67,24 +66,27 @@ static u64 asus_wireless_method(acpi_handle handle, const char *method,
+ 	p.count = 1;
+ 	p.pointer = &obj;
+ 
+-	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, &ret);
++	s = acpi_evaluate_integer(handle, (acpi_string) method, &p, ret);
+ 	if (ACPI_FAILURE(s))
+ 		acpi_handle_err(handle,
+ 				"Failed to eval method %s, param %#x (%d)\n",
+ 				method, param, s);
+-	acpi_handle_debug(handle, "%s returned %#llx\n", method, ret);
+-	return ret;
++	else
++		acpi_handle_debug(handle, "%s returned %#llx\n", method, *ret);
++
++	return s;
+ }
+ 
+ static enum led_brightness led_state_get(struct led_classdev *led)
+ {
+ 	struct asus_wireless_data *data;
+-	int s;
++	acpi_status s;
++	u64 ret;
+ 
+ 	data = container_of(led, struct asus_wireless_data, led);
+ 	s = asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-				 data->hswc_params->status);
+-	if (s == data->hswc_params->on)
++				 data->hswc_params->status, &ret);
++	if (ACPI_SUCCESS(s) && ret == data->hswc_params->on)
+ 		return LED_FULL;
+ 	return LED_OFF;
+ }
+@@ -92,10 +94,11 @@ static enum led_brightness led_state_get(struct led_classdev *led)
+ static void led_state_update(struct work_struct *work)
+ {
+ 	struct asus_wireless_data *data;
++	u64 ret;
+ 
+ 	data = container_of(work, struct asus_wireless_data, led_work);
+ 	asus_wireless_method(acpi_device_handle(data->adev), "HSWC",
+-			     data->led_state);
++			     data->led_state, &ret);
+ }
+ 
+ static void led_state_set(struct led_classdev *led, enum led_brightness value)
+diff --git a/drivers/power/reset/vexpress-poweroff.c b/drivers/power/reset/vexpress-poweroff.c
+index 102f95a09460..e9e749f87517 100644
+--- a/drivers/power/reset/vexpress-poweroff.c
++++ b/drivers/power/reset/vexpress-poweroff.c
+@@ -35,6 +35,7 @@ static void vexpress_reset_do(struct device *dev, const char *what)
+ }
+ 
+ static struct device *vexpress_power_off_device;
++static atomic_t vexpress_restart_nb_refcnt = ATOMIC_INIT(0);
+ 
+ static void vexpress_power_off(void)
+ {
+@@ -99,10 +100,13 @@ static int _vexpress_register_restart_handler(struct device *dev)
+ 	int err;
+ 
+ 	vexpress_restart_device = dev;
+-	err = register_restart_handler(&vexpress_restart_nb);
+-	if (err) {
+-		dev_err(dev, "cannot register restart handler (err=%d)\n", err);
+-		return err;
++	if (atomic_inc_return(&vexpress_restart_nb_refcnt) == 1) {
++		err = register_restart_handler(&vexpress_restart_nb);
++		if (err) {
++			dev_err(dev, "cannot register restart handler (err=%d)\n", err);
++			atomic_dec(&vexpress_restart_nb_refcnt);
++			return err;
++		}
+ 	}
+ 	device_create_file(dev, &dev_attr_active);
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 6e1bc14c3304..735658ee1c60 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -718,7 +718,7 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 	}
+ 
+ 	/* Determine charge current limit */
+-	cc = (ret & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
++	cc = (val & CHRG_CCCV_CC_MASK) >> CHRG_CCCV_CC_BIT_POS;
+ 	cc = (cc * CHRG_CCCV_CC_LSB_RES) + CHRG_CCCV_CC_OFFSET;
+ 	info->cc = cc;
+ 
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index d21f478741c1..e85361878450 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -14,6 +14,7 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/notifier.h>
+ #include <linux/err.h>
+@@ -140,8 +141,13 @@ static void power_supply_deferred_register_work(struct work_struct *work)
+ 	struct power_supply *psy = container_of(work, struct power_supply,
+ 						deferred_register_work.work);
+ 
+-	if (psy->dev.parent)
+-		mutex_lock(&psy->dev.parent->mutex);
++	if (psy->dev.parent) {
++		while (!mutex_trylock(&psy->dev.parent->mutex)) {
++			if (psy->removing)
++				return;
++			msleep(10);
++		}
++	}
+ 
+ 	power_supply_changed(psy);
+ 
+@@ -1082,6 +1088,7 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register_no_ws);
+ void power_supply_unregister(struct power_supply *psy)
+ {
+ 	WARN_ON(atomic_dec_return(&psy->use_cnt));
++	psy->removing = true;
+ 	cancel_work_sync(&psy->changed_work);
+ 	cancel_delayed_work_sync(&psy->deferred_register_work);
+ 	sysfs_remove_link(&psy->dev.kobj, "powers");
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 6ed568b96c0e..cc1450c53fb2 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3147,7 +3147,7 @@ static inline int regulator_suspend_toggle(struct regulator_dev *rdev,
+ 	if (!rstate->changeable)
+ 		return -EPERM;
+ 
+-	rstate->enabled = en;
++	rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND;
+ 
+ 	return 0;
+ }
+@@ -4381,13 +4381,13 @@ regulator_register(const struct regulator_desc *regulator_desc,
+ 	    !rdev->desc->fixed_uV)
+ 		rdev->is_switch = true;
+ 
++	dev_set_drvdata(&rdev->dev, rdev);
+ 	ret = device_register(&rdev->dev);
+ 	if (ret != 0) {
+ 		put_device(&rdev->dev);
+ 		goto unset_supplies;
+ 	}
+ 
+-	dev_set_drvdata(&rdev->dev, rdev);
+ 	rdev_init_debugfs(rdev);
+ 
+ 	/* try to resolve regulators supply since a new one was registered */
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 638f17d4c848..210fc20f7de7 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -213,8 +213,6 @@ static void of_get_regulation_constraints(struct device_node *np,
+ 		else if (of_property_read_bool(suspend_np,
+ 					"regulator-off-in-suspend"))
+ 			suspend_state->enabled = DISABLE_IN_SUSPEND;
+-		else
+-			suspend_state->enabled = DO_NOTHING_IN_SUSPEND;
+ 
+ 		if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
+ 					  &pval))
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index a9f60d0ee02e..7c732414367f 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -3127,6 +3127,7 @@ static int dasd_alloc_queue(struct dasd_block *block)
+ 	block->tag_set.nr_hw_queues = nr_hw_queues;
+ 	block->tag_set.queue_depth = queue_depth;
+ 	block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	block->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	rc = blk_mq_alloc_tag_set(&block->tag_set);
+ 	if (rc)
+diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
+index b1fcb76dd272..98f66b7b6794 100644
+--- a/drivers/s390/block/scm_blk.c
++++ b/drivers/s390/block/scm_blk.c
+@@ -455,6 +455,7 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
+ 	bdev->tag_set.nr_hw_queues = nr_requests;
+ 	bdev->tag_set.queue_depth = nr_requests_per_io * nr_requests;
+ 	bdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
++	bdev->tag_set.numa_node = NUMA_NO_NODE;
+ 
+ 	ret = blk_mq_alloc_tag_set(&bdev->tag_set);
+ 	if (ret)
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 8f03a869ac98..e9e669a6c2bc 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -2727,6 +2727,8 @@ int bnx2i_map_ep_dbell_regs(struct bnx2i_endpoint *ep)
+ 					      BNX2X_DOORBELL_PCI_BAR);
+ 		reg_off = (1 << BNX2X_DB_SHIFT) * (cid_num & 0x1FFFF);
+ 		ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 4);
++		if (!ep->qp.ctx_base)
++			return -ENOMEM;
+ 		goto arm_cq;
+ 	}
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index 7052a5d45f7f..78e5a9254143 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -277,6 +277,7 @@ struct hisi_hba {
+ 
+ 	int n_phy;
+ 	spinlock_t lock;
++	struct semaphore sem;
+ 
+ 	struct timer_list timer;
+ 	struct workqueue_struct *wq;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6f562974f8f6..bfbd2fb7e69e 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -914,7 +914,9 @@ static void hisi_sas_dev_gone(struct domain_device *device)
+ 
+ 		hisi_sas_dereg_device(hisi_hba, device);
+ 
++		down(&hisi_hba->sem);
+ 		hisi_hba->hw->clear_itct(hisi_hba, sas_dev);
++		up(&hisi_hba->sem);
+ 		device->lldd_dev = NULL;
+ 	}
+ 
+@@ -1364,6 +1366,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (test_and_set_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
+ 		return -1;
+ 
++	down(&hisi_hba->sem);
+ 	dev_info(dev, "controller resetting...\n");
+ 	old_state = hisi_hba->hw->get_phys_state(hisi_hba);
+ 
+@@ -1378,6 +1381,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	if (rc) {
+ 		dev_warn(dev, "controller reset failed (%d)\n", rc);
+ 		clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
++		up(&hisi_hba->sem);
+ 		scsi_unblock_requests(shost);
+ 		goto out;
+ 	}
+@@ -1388,6 +1392,7 @@ static int hisi_sas_controller_reset(struct hisi_hba *hisi_hba)
+ 	hisi_hba->hw->phys_init(hisi_hba);
+ 	msleep(1000);
+ 	hisi_sas_refresh_port_id(hisi_hba);
++	up(&hisi_hba->sem);
+ 
+ 	if (hisi_hba->reject_stp_links_msk)
+ 		hisi_sas_terminate_stp_reject(hisi_hba);
+@@ -2016,6 +2021,7 @@ int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
+ 	struct device *dev = hisi_hba->dev;
+ 	int i, s, max_command_entries = hisi_hba->hw->max_command_entries;
+ 
++	sema_init(&hisi_hba->sem, 1);
+ 	spin_lock_init(&hisi_hba->lock);
+ 	for (i = 0; i < hisi_hba->n_phy; i++) {
+ 		hisi_sas_phy_init(hisi_hba, i);
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 17df76f0be3c..67a2c844e30d 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -93,7 +93,7 @@ static int max_requests = IBMVSCSI_MAX_REQUESTS_DEFAULT;
+ static int max_events = IBMVSCSI_MAX_REQUESTS_DEFAULT + 2;
+ static int fast_fail = 1;
+ static int client_reserve = 1;
+-static char partition_name[97] = "UNKNOWN";
++static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
+ 
+@@ -262,7 +262,7 @@ static void gather_partition_info(void)
+ 
+ 	ppartition_name = of_get_property(of_root, "ibm,partition-name", NULL);
+ 	if (ppartition_name)
+-		strncpy(partition_name, ppartition_name,
++		strlcpy(partition_name, ppartition_name,
+ 				sizeof(partition_name));
+ 	p_number_ptr = of_get_property(of_root, "ibm,partition-no", NULL);
+ 	if (p_number_ptr)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 71d97573a667..8e84e3fb648a 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6789,6 +6789,9 @@ megasas_resume(struct pci_dev *pdev)
+ 			goto fail_init_mfi;
+ 	}
+ 
++	if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS)
++		goto fail_init_mfi;
++
+ 	tasklet_init(&instance->isr_tasklet, instance->instancet->tasklet,
+ 		     (unsigned long)instance);
+ 
+diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
+index 16590dfaafa4..cef307c0399c 100644
+--- a/drivers/siox/siox-core.c
++++ b/drivers/siox/siox-core.c
+@@ -715,17 +715,17 @@ int siox_master_register(struct siox_master *smaster)
+ 
+ 	dev_set_name(&smaster->dev, "siox-%d", smaster->busno);
+ 
++	mutex_init(&smaster->lock);
++	INIT_LIST_HEAD(&smaster->devices);
++
+ 	smaster->last_poll = jiffies;
+-	smaster->poll_thread = kthread_create(siox_poll_thread, smaster,
+-					      "siox-%d", smaster->busno);
++	smaster->poll_thread = kthread_run(siox_poll_thread, smaster,
++					   "siox-%d", smaster->busno);
+ 	if (IS_ERR(smaster->poll_thread)) {
+ 		smaster->active = 0;
+ 		return PTR_ERR(smaster->poll_thread);
+ 	}
+ 
+-	mutex_init(&smaster->lock);
+-	INIT_LIST_HEAD(&smaster->devices);
+-
+ 	ret = device_add(&smaster->dev);
+ 	if (ret)
+ 		kthread_stop(smaster->poll_thread);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index d01a6adc726e..47ef6b1a2e76 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -20,6 +20,7 @@
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
++#include <linux/of_gpio.h>
+ #include <linux/clk.h>
+ #include <linux/sizes.h>
+ #include <linux/gpio.h>
+@@ -681,9 +682,9 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 		goto out_rel_axi_clk;
+ 	}
+ 
+-	/* Scan all SPI devices of this controller for direct mapped devices */
+ 	for_each_available_child_of_node(pdev->dev.of_node, np) {
+ 		u32 cs;
++		int cs_gpio;
+ 
+ 		/* Get chip-select number from the "reg" property */
+ 		status = of_property_read_u32(np, "reg", &cs);
+@@ -694,6 +695,44 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
++		/*
++		 * Initialize the CS GPIO:
++		 * - properly request the actual GPIO signal
++		 * - de-assert the logical signal so that all GPIO CS lines
++		 *   are inactive when probing for slaves
++		 * - find an unused physical CS which will be driven for any
++		 *   slave which uses a CS GPIO
++		 */
++		cs_gpio = of_get_named_gpio(pdev->dev.of_node, "cs-gpios", cs);
++		if (cs_gpio > 0) {
++			char *gpio_name;
++			int cs_flags;
++
++			if (spi->unused_hw_gpio == -1) {
++				dev_info(&pdev->dev,
++					"Selected unused HW CS#%d for any GPIO CSes\n",
++					cs);
++				spi->unused_hw_gpio = cs;
++			}
++
++			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
++					"%s-CS%d", dev_name(&pdev->dev), cs);
++			if (!gpio_name) {
++				status = -ENOMEM;
++				goto out_rel_axi_clk;
++			}
++
++			cs_flags = of_property_read_bool(np, "spi-cs-high") ?
++				GPIOF_OUT_INIT_LOW : GPIOF_OUT_INIT_HIGH;
++			status = devm_gpio_request_one(&pdev->dev, cs_gpio,
++					cs_flags, gpio_name);
++			if (status) {
++				dev_err(&pdev->dev,
++					"Can't request GPIO for CS %d\n", cs);
++				goto out_rel_axi_clk;
++			}
++		}
++
+ 		/*
+ 		 * Check if an address is configured for this SPI device. If
+ 		 * not, the MBus mapping via the 'ranges' property in the 'soc'
+@@ -740,44 +779,8 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 	if (status < 0)
+ 		goto out_rel_pm;
+ 
+-	if (master->cs_gpios) {
+-		int i;
+-		for (i = 0; i < master->num_chipselect; ++i) {
+-			char *gpio_name;
+-
+-			if (!gpio_is_valid(master->cs_gpios[i])) {
+-				continue;
+-			}
+-
+-			gpio_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+-					"%s-CS%d", dev_name(&pdev->dev), i);
+-			if (!gpio_name) {
+-				status = -ENOMEM;
+-				goto out_rel_master;
+-			}
+-
+-			status = devm_gpio_request(&pdev->dev,
+-					master->cs_gpios[i], gpio_name);
+-			if (status) {
+-				dev_err(&pdev->dev,
+-					"Can't request GPIO for CS %d\n",
+-					master->cs_gpios[i]);
+-				goto out_rel_master;
+-			}
+-			if (spi->unused_hw_gpio == -1) {
+-				dev_info(&pdev->dev,
+-					"Selected unused HW CS#%d for any GPIO CSes\n",
+-					i);
+-				spi->unused_hw_gpio = i;
+-			}
+-		}
+-	}
+-
+-
+ 	return status;
+ 
+-out_rel_master:
+-	spi_unregister_master(master);
+ out_rel_pm:
+ 	pm_runtime_disable(&pdev->dev);
+ out_rel_axi_clk:
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index 95dc4d78618d..b37de1d991d6 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -598,11 +598,13 @@ static int rspi_dma_transfer(struct rspi_data *rspi, struct sg_table *tx,
+ 
+ 	ret = wait_event_interruptible_timeout(rspi->wait,
+ 					       rspi->dma_callbacked, HZ);
+-	if (ret > 0 && rspi->dma_callbacked)
++	if (ret > 0 && rspi->dma_callbacked) {
+ 		ret = 0;
+-	else if (!ret) {
+-		dev_err(&rspi->master->dev, "DMA timeout\n");
+-		ret = -ETIMEDOUT;
++	} else {
++		if (!ret) {
++			dev_err(&rspi->master->dev, "DMA timeout\n");
++			ret = -ETIMEDOUT;
++		}
+ 		if (tx)
+ 			dmaengine_terminate_all(rspi->master->dma_tx);
+ 		if (rx)
+@@ -1350,12 +1352,36 @@ static const struct platform_device_id spi_driver_ids[] = {
+ 
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int rspi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(rspi->master);
++}
++
++static int rspi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct rspi_data *rspi = platform_get_drvdata(pdev);
++
++	return spi_master_resume(rspi->master);
++}
++
++static SIMPLE_DEV_PM_OPS(rspi_pm_ops, rspi_suspend, rspi_resume);
++#define DEV_PM_OPS	&rspi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver rspi_driver = {
+ 	.probe =	rspi_probe,
+ 	.remove =	rspi_remove,
+ 	.id_table =	spi_driver_ids,
+ 	.driver		= {
+ 		.name = "renesas_spi",
++		.pm = DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(rspi_of_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 0e74cbf9929d..37364c634fef 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -396,7 +396,8 @@ static void sh_msiof_spi_set_mode_regs(struct sh_msiof_spi_priv *p,
+ 
+ static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p)
+ {
+-	sh_msiof_write(p, STR, sh_msiof_read(p, STR));
++	sh_msiof_write(p, STR,
++		       sh_msiof_read(p, STR) & ~(STR_TDREQ | STR_RDREQ));
+ }
+ 
+ static void sh_msiof_spi_write_fifo_8(struct sh_msiof_spi_priv *p,
+@@ -1421,12 +1422,37 @@ static const struct platform_device_id spi_driver_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(platform, spi_driver_ids);
+ 
++#ifdef CONFIG_PM_SLEEP
++static int sh_msiof_spi_suspend(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_suspend(p->master);
++}
++
++static int sh_msiof_spi_resume(struct device *dev)
++{
++	struct platform_device *pdev = to_platform_device(dev);
++	struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev);
++
++	return spi_master_resume(p->master);
++}
++
++static SIMPLE_DEV_PM_OPS(sh_msiof_spi_pm_ops, sh_msiof_spi_suspend,
++			 sh_msiof_spi_resume);
++#define DEV_PM_OPS	&sh_msiof_spi_pm_ops
++#else
++#define DEV_PM_OPS	NULL
++#endif /* CONFIG_PM_SLEEP */
++
+ static struct platform_driver sh_msiof_spi_drv = {
+ 	.probe		= sh_msiof_spi_probe,
+ 	.remove		= sh_msiof_spi_remove,
+ 	.id_table	= spi_driver_ids,
+ 	.driver		= {
+ 		.name		= "spi_sh_msiof",
++		.pm		= DEV_PM_OPS,
+ 		.of_match_table = of_match_ptr(sh_msiof_match),
+ 	},
+ };
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 6f7b946b5ced..1427f343b39a 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1063,6 +1063,24 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 		goto exit_free_master;
+ 	}
+ 
++	/* disabled clock may cause interrupt storm upon request */
++	tspi->clk = devm_clk_get(&pdev->dev, NULL);
++	if (IS_ERR(tspi->clk)) {
++		ret = PTR_ERR(tspi->clk);
++		dev_err(&pdev->dev, "Can not get clock %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_prepare(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);
++		goto exit_free_master;
++	}
++	ret = clk_enable(tspi->clk);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Clock enable failed %d\n", ret);
++		goto exit_free_master;
++	}
++
+ 	spi_irq = platform_get_irq(pdev, 0);
+ 	tspi->irq = spi_irq;
+ 	ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
+@@ -1071,14 +1089,7 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+ 					tspi->irq);
+-		goto exit_free_master;
+-	}
+-
+-	tspi->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(tspi->clk)) {
+-		dev_err(&pdev->dev, "can not get clock\n");
+-		ret = PTR_ERR(tspi->clk);
+-		goto exit_free_irq;
++		goto exit_clk_disable;
+ 	}
+ 
+ 	tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+@@ -1138,6 +1149,8 @@ exit_rx_dma_free:
+ 	tegra_slink_deinit_dma_param(tspi, true);
+ exit_free_irq:
+ 	free_irq(spi_irq, tspi);
++exit_clk_disable:
++	clk_disable(tspi->clk);
+ exit_free_master:
+ 	spi_master_put(master);
+ 	return ret;
+@@ -1150,6 +1163,8 @@ static int tegra_slink_remove(struct platform_device *pdev)
+ 
+ 	free_irq(tspi->irq, tspi);
+ 
++	clk_disable(tspi->clk);
++
+ 	if (tspi->tx_dma_chan)
+ 		tegra_slink_deinit_dma_param(tspi, false);
+ 
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index d5d33e12e952..716573c21579 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -366,6 +366,12 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ 		goto out;
+ 	}
+ 
++	/* requested mapping size larger than object size */
++	if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {
++		ret = -EINVAL;
++		goto out;
++	}
++
+ 	/* requested protection bits must match our allowed protection mask */
+ 	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask, 0)) &
+ 		     calc_vm_prot_bits(PROT_MASK, 0))) {
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index ae453fd422f0..ffeb017c73b2 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -210,6 +210,7 @@ static void prp_vb2_buf_done(struct prp_priv *priv, struct ipuv3_channel *ch)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 95d7805f3485..0e963c24af37 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -236,6 +236,7 @@ static void csi_vb2_buf_done(struct csi_priv *priv)
+ 
+ 	done = priv->active_vb2_buf[priv->ipu_buf_num];
+ 	if (done) {
++		done->vbuf.field = vdev->fmt.fmt.pix.field;
+ 		vb = &done->vbuf.vb2_buf;
+ 		vb->timestamp = ktime_get_ns();
+ 		vb2_buffer_done(vb, priv->nfb4eof ?
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index 6b13d85d9d34..87555600195f 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -113,6 +113,8 @@
+ };
+ 
+ &pcie {
++	pinctrl-names = "default";
++	pinctrl-0 = <&pcie_pins>;
+ 	status = "okay";
+ };
+ 
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index eb3966b7f033..ce6b43639079 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -447,31 +447,28 @@
+ 		clocks = <&clkctrl 24 &clkctrl 25 &clkctrl 26>;
+ 		clock-names = "pcie0", "pcie1", "pcie2";
+ 
+-		pcie0 {
++		pcie@0,0 {
+ 			reg = <0x0000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie1 {
++		pcie@1,0 {
+ 			reg = <0x0800 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 
+-		pcie2 {
++		pcie@2,0 {
+ 			reg = <0x1000 0 0 0 0>;
+-
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+-
+-			device_type = "pci";
++			ranges;
++			bus-range = <0x00 0xff>;
+ 		};
+ 	};
+ };
+diff --git a/drivers/staging/mt7621-eth/mtk_eth_soc.c b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+index 2c7a2e666bfb..381d9d270bf5 100644
+--- a/drivers/staging/mt7621-eth/mtk_eth_soc.c
++++ b/drivers/staging/mt7621-eth/mtk_eth_soc.c
+@@ -2012,8 +2012,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		mac->hw_stats = devm_kzalloc(eth->dev,
+ 					     sizeof(*mac->hw_stats),
+ 					     GFP_KERNEL);
+-		if (!mac->hw_stats)
+-			return -ENOMEM;
++		if (!mac->hw_stats) {
++			err = -ENOMEM;
++			goto free_netdev;
++		}
+ 		spin_lock_init(&mac->hw_stats->stats_lock);
+ 		mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
+ 	}
+@@ -2037,7 +2039,8 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 	err = register_netdev(eth->netdev[id]);
+ 	if (err) {
+ 		dev_err(eth->dev, "error bringing up device\n");
+-		return err;
++		err = -ENOMEM;
++		goto free_netdev;
+ 	}
+ 	eth->netdev[id]->irq = eth->irq;
+ 	netif_info(eth, probe, eth->netdev[id],
+@@ -2045,6 +2048,10 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
+ 		   eth->netdev[id]->base_addr, eth->netdev[id]->irq);
+ 
+ 	return 0;
++
++free_netdev:
++	free_netdev(eth->netdev[id]);
++	return err;
+ }
+ 
+ static int mtk_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c
+index b061f77dda41..94e0bfcec991 100644
+--- a/drivers/staging/pi433/pi433_if.c
++++ b/drivers/staging/pi433/pi433_if.c
+@@ -880,6 +880,7 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	int			retval = 0;
+ 	struct pi433_instance	*instance;
+ 	struct pi433_device	*device;
++	struct pi433_tx_cfg	tx_cfg;
+ 	void __user *argp = (void __user *)arg;
+ 
+ 	/* Check type and command number */
+@@ -902,9 +903,11 @@ pi433_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 			return -EFAULT;
+ 		break;
+ 	case PI433_IOC_WR_TX_CFG:
+-		if (copy_from_user(&instance->tx_cfg, argp,
+-				   sizeof(struct pi433_tx_cfg)))
++		if (copy_from_user(&tx_cfg, argp, sizeof(struct pi433_tx_cfg)))
+ 			return -EFAULT;
++		mutex_lock(&device->tx_fifo_lock);
++		memcpy(&instance->tx_cfg, &tx_cfg, sizeof(struct pi433_tx_cfg));
++		mutex_unlock(&device->tx_fifo_lock);
+ 		break;
+ 	case PI433_IOC_RD_RX_CFG:
+ 		if (copy_to_user(argp, &device->rx_cfg,
+diff --git a/drivers/staging/rts5208/sd.c b/drivers/staging/rts5208/sd.c
+index d548bc695f9e..0421dd9277a8 100644
+--- a/drivers/staging/rts5208/sd.c
++++ b/drivers/staging/rts5208/sd.c
+@@ -4996,7 +4996,7 @@ int sd_execute_write_data(struct scsi_cmnd *srb, struct rtsx_chip *chip)
+ 			goto sd_execute_write_cmd_failed;
+ 		}
+ 
+-		rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
++		retval = rtsx_write_register(chip, SD_BYTE_CNT_L, 0xFF, 0x00);
+ 		if (retval != STATUS_SUCCESS) {
+ 			rtsx_trace(chip);
+ 			goto sd_execute_write_cmd_failed;
+diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
+index 4b34f71547c6..101d62105c93 100644
+--- a/drivers/target/iscsi/iscsi_target_tpg.c
++++ b/drivers/target/iscsi/iscsi_target_tpg.c
+@@ -636,8 +636,7 @@ int iscsit_ta_authentication(struct iscsi_portal_group *tpg, u32 authentication)
+ 		none = strstr(buf1, NONE);
+ 		if (none)
+ 			goto out;
+-		strncat(buf1, ",", strlen(","));
+-		strncat(buf1, NONE, strlen(NONE));
++		strlcat(buf1, "," NONE, sizeof(buf1));
+ 		if (iscsi_update_param_value(param, buf1) < 0)
+ 			return -EINVAL;
+ 	}
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index e27db4d45a9d..06c9886e556c 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -904,14 +904,20 @@ struct se_device *target_find_device(int id, bool do_depend)
+ EXPORT_SYMBOL(target_find_device);
+ 
+ struct devices_idr_iter {
++	struct config_item *prev_item;
+ 	int (*fn)(struct se_device *dev, void *data);
+ 	void *data;
+ };
+ 
+ static int target_devices_idr_iter(int id, void *p, void *data)
++	 __must_hold(&device_mutex)
+ {
+ 	struct devices_idr_iter *iter = data;
+ 	struct se_device *dev = p;
++	int ret;
++
++	config_item_put(iter->prev_item);
++	iter->prev_item = NULL;
+ 
+ 	/*
+ 	 * We add the device early to the idr, so it can be used
+@@ -922,7 +928,15 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ 	if (!(dev->dev_flags & DF_CONFIGURED))
+ 		return 0;
+ 
+-	return iter->fn(dev, iter->data);
++	iter->prev_item = config_item_get_unless_zero(&dev->dev_group.cg_item);
++	if (!iter->prev_item)
++		return 0;
++	mutex_unlock(&device_mutex);
++
++	ret = iter->fn(dev, iter->data);
++
++	mutex_lock(&device_mutex);
++	return ret;
+ }
+ 
+ /**
+@@ -936,15 +950,13 @@ static int target_devices_idr_iter(int id, void *p, void *data)
+ int target_for_each_device(int (*fn)(struct se_device *dev, void *data),
+ 			   void *data)
+ {
+-	struct devices_idr_iter iter;
++	struct devices_idr_iter iter = { .fn = fn, .data = data };
+ 	int ret;
+ 
+-	iter.fn = fn;
+-	iter.data = data;
+-
+ 	mutex_lock(&device_mutex);
+ 	ret = idr_for_each(&devices_idr, target_devices_idr_iter, &iter);
+ 	mutex_unlock(&device_mutex);
++	config_item_put(iter.prev_item);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/thermal/imx_thermal.c b/drivers/thermal/imx_thermal.c
+index 334d98be03b9..b1f82d64253e 100644
+--- a/drivers/thermal/imx_thermal.c
++++ b/drivers/thermal/imx_thermal.c
+@@ -604,7 +604,10 @@ static int imx_init_from_nvmem_cells(struct platform_device *pdev)
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "calib", &val);
+ 	if (ret)
+ 		return ret;
+-	imx_init_calib(pdev, val);
++
++	ret = imx_init_calib(pdev, val);
++	if (ret)
++		return ret;
+ 
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "temp_grade", &val);
+ 	if (ret)
+diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c
+index 977a8307fbb1..4f2816559205 100644
+--- a/drivers/thermal/of-thermal.c
++++ b/drivers/thermal/of-thermal.c
+@@ -260,10 +260,13 @@ static int of_thermal_set_mode(struct thermal_zone_device *tz,
+ 
+ 	mutex_lock(&tz->lock);
+ 
+-	if (mode == THERMAL_DEVICE_ENABLED)
++	if (mode == THERMAL_DEVICE_ENABLED) {
+ 		tz->polling_delay = data->polling_delay;
+-	else
++		tz->passive_delay = data->passive_delay;
++	} else {
+ 		tz->polling_delay = 0;
++		tz->passive_delay = 0;
++	}
+ 
+ 	mutex_unlock(&tz->lock);
+ 
+diff --git a/drivers/tty/serial/8250/serial_cs.c b/drivers/tty/serial/8250/serial_cs.c
+index 9963a766dcfb..c8186a05a453 100644
+--- a/drivers/tty/serial/8250/serial_cs.c
++++ b/drivers/tty/serial/8250/serial_cs.c
+@@ -638,8 +638,10 @@ static int serial_config(struct pcmcia_device *link)
+ 	    (link->has_func_id) &&
+ 	    (link->socket->pcmcia_pfc == 0) &&
+ 	    ((link->func_id == CISTPL_FUNCID_MULTI) ||
+-	     (link->func_id == CISTPL_FUNCID_SERIAL)))
+-		pcmcia_loop_config(link, serial_check_for_multi, info);
++	     (link->func_id == CISTPL_FUNCID_SERIAL))) {
++		if (pcmcia_loop_config(link, serial_check_for_multi, info))
++			goto failed;
++	}
+ 
+ 	/*
+ 	 * Apply any multi-port quirk.
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index 24a5f05e769b..e5389591bb4f 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1054,8 +1054,8 @@ static int poll_wait_key(char *obuf, struct uart_cpm_port *pinfo)
+ 	/* Get the address of the host memory buffer.
+ 	 */
+ 	bdp = pinfo->rx_cur;
+-	while (bdp->cbd_sc & BD_SC_EMPTY)
+-		;
++	if (bdp->cbd_sc & BD_SC_EMPTY)
++		return NO_POLL_CHAR;
+ 
+ 	/* If the buffer address is in the CPM DPRAM, don't
+ 	 * convert it.
+@@ -1090,7 +1090,11 @@ static int cpm_get_poll_char(struct uart_port *port)
+ 		poll_chars = 0;
+ 	}
+ 	if (poll_chars <= 0) {
+-		poll_chars = poll_wait_key(poll_buf, pinfo);
++		int ret = poll_wait_key(poll_buf, pinfo);
++
++		if (ret == NO_POLL_CHAR)
++			return ret;
++		poll_chars = ret;
+ 		pollp = poll_buf;
+ 	}
+ 	poll_chars--;
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 51e47a63d61a..3f8d1274fc85 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -979,7 +979,8 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ 	struct circ_buf *ring = &sport->rx_ring;
+ 	int ret, nent;
+ 	int bits, baud;
+-	struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port);
++	struct tty_port *port = &sport->port.state->port;
++	struct tty_struct *tty = port->tty;
+ 	struct ktermios *termios = &tty->termios;
+ 
+ 	baud = tty_get_baud_rate(tty);
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index 4e853570ea80..554a69db1bca 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2350,6 +2350,14 @@ static int imx_uart_probe(struct platform_device *pdev)
+ 				ret);
+ 			return ret;
+ 		}
++
++		ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0,
++				       dev_name(&pdev->dev), sport);
++		if (ret) {
++			dev_err(&pdev->dev, "failed to request rts irq: %d\n",
++				ret);
++			return ret;
++		}
+ 	} else {
+ 		ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0,
+ 				       dev_name(&pdev->dev), sport);
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index d04b5eeea3c6..170e446a2f62 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -511,6 +511,7 @@ static void mvebu_uart_set_termios(struct uart_port *port,
+ 		termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR);
+ 		termios->c_cflag &= CREAD | CBAUD;
+ 		termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD);
++		termios->c_cflag |= CS8;
+ 	}
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+diff --git a/drivers/tty/serial/pxa.c b/drivers/tty/serial/pxa.c
+index eda3c7710d6a..4932b674f7ef 100644
+--- a/drivers/tty/serial/pxa.c
++++ b/drivers/tty/serial/pxa.c
+@@ -887,7 +887,8 @@ static int serial_pxa_probe(struct platform_device *dev)
+ 		goto err_clk;
+ 	if (sport->port.line >= ARRAY_SIZE(serial_pxa_ports)) {
+ 		dev_err(&dev->dev, "serial%d out of range\n", sport->port.line);
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err_clk;
+ 	}
+ 	snprintf(sport->name, PXA_NAME_LEN - 1, "UART%d", sport->port.line + 1);
+ 
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index c181eb37f985..3c55600a8236 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2099,6 +2099,8 @@ static void sci_shutdown(struct uart_port *port)
+ 	}
+ #endif
+ 
++	if (s->rx_trigger > 1 && s->rx_fifo_timeout > 0)
++		del_timer_sync(&s->rx_fifo_timer);
+ 	sci_free_irq(s);
+ 	sci_free_dma(port);
+ }
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 632a2bfabc08..a0d284ef3f40 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
++	rv = usb_submit_urb(desc->response, GFP_KERNEL);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/common/roles.c b/drivers/usb/common/roles.c
+index 15cc76e22123..99116af07f1d 100644
+--- a/drivers/usb/common/roles.c
++++ b/drivers/usb/common/roles.c
+@@ -109,8 +109,15 @@ static void *usb_role_switch_match(struct device_connection *con, int ep,
+  */
+ struct usb_role_switch *usb_role_switch_get(struct device *dev)
+ {
+-	return device_connection_find_match(dev, "usb-role-switch", NULL,
+-					    usb_role_switch_match);
++	struct usb_role_switch *sw;
++
++	sw = device_connection_find_match(dev, "usb-role-switch", NULL,
++					  usb_role_switch_match);
++
++	if (!IS_ERR_OR_NULL(sw))
++		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
++
++	return sw;
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+ 
+@@ -122,8 +129,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_get);
+  */
+ void usb_role_switch_put(struct usb_role_switch *sw)
+ {
+-	if (!IS_ERR_OR_NULL(sw))
++	if (!IS_ERR_OR_NULL(sw)) {
+ 		put_device(&sw->dev);
++		module_put(sw->dev.parent->driver->owner);
++	}
+ }
+ EXPORT_SYMBOL_GPL(usb_role_switch_put);
+ 
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 476dcc5f2da3..e1e0c90ce569 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1433,10 +1433,13 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	struct async *as = NULL;
+ 	struct usb_ctrlrequest *dr = NULL;
+ 	unsigned int u, totlen, isofrmlen;
+-	int i, ret, is_in, num_sgs = 0, ifnum = -1;
++	int i, ret, num_sgs = 0, ifnum = -1;
+ 	int number_of_packets = 0;
+ 	unsigned int stream_id = 0;
+ 	void *buf;
++	bool is_in;
++	bool allow_short = false;
++	bool allow_zero = false;
+ 	unsigned long mask =	USBDEVFS_URB_SHORT_NOT_OK |
+ 				USBDEVFS_URB_BULK_CONTINUATION |
+ 				USBDEVFS_URB_NO_FSBR |
+@@ -1470,6 +1473,8 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = 0;
+ 	switch (uurb->type) {
+ 	case USBDEVFS_URB_TYPE_CONTROL:
++		if (is_in)
++			allow_short = true;
+ 		if (!usb_endpoint_xfer_control(&ep->desc))
+ 			return -EINVAL;
+ 		/* min 8 byte setup packet */
+@@ -1510,6 +1515,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_BULK:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		switch (usb_endpoint_type(&ep->desc)) {
+ 		case USB_ENDPOINT_XFER_CONTROL:
+ 		case USB_ENDPOINT_XFER_ISOC:
+@@ -1530,6 +1539,10 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 		if (!usb_endpoint_xfer_int(&ep->desc))
+ 			return -EINVAL;
+  interrupt_urb:
++		if (!is_in)
++			allow_zero = true;
++		else
++			allow_short = true;
+ 		break;
+ 
+ 	case USBDEVFS_URB_TYPE_ISO:
+@@ -1675,14 +1688,19 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
+ 	u = (is_in ? URB_DIR_IN : URB_DIR_OUT);
+ 	if (uurb->flags & USBDEVFS_URB_ISO_ASAP)
+ 		u |= URB_ISO_ASAP;
+-	if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in)
++	if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
+ 		u |= URB_SHORT_NOT_OK;
+-	if (uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++	if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
+ 		u |= URB_ZERO_PACKET;
+ 	if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT)
+ 		u |= URB_NO_INTERRUPT;
+ 	as->urb->transfer_flags = u;
+ 
++	if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n");
++	if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)
++		dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n");
++
+ 	as->urb->transfer_buffer_length = uurb->buffer_length;
+ 	as->urb->setup_packet = (unsigned char *)dr;
+ 	dr = NULL;
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index e76e95f62f76..a1f225f077cd 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -512,7 +512,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	struct device *dev;
+ 	struct usb_device *udev;
+ 	int retval = 0;
+-	int lpm_disable_error = -ENODEV;
+ 
+ 	if (!iface)
+ 		return -ENODEV;
+@@ -533,16 +532,6 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 
+ 	iface->condition = USB_INTERFACE_BOUND;
+ 
+-	/* See the comment about disabling LPM in usb_probe_interface(). */
+-	if (driver->disable_hub_initiated_lpm) {
+-		lpm_disable_error = usb_unlocked_disable_lpm(udev);
+-		if (lpm_disable_error) {
+-			dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n",
+-				__func__, driver->name);
+-			return -ENOMEM;
+-		}
+-	}
+-
+ 	/* Claimed interfaces are initially inactive (suspended) and
+ 	 * runtime-PM-enabled, but only if the driver has autosuspend
+ 	 * support.  Otherwise they are marked active, to prevent the
+@@ -561,9 +550,20 @@ int usb_driver_claim_interface(struct usb_driver *driver,
+ 	if (device_is_registered(dev))
+ 		retval = device_bind_driver(dev);
+ 
+-	/* Attempt to re-enable USB3 LPM, if the disable was successful. */
+-	if (!lpm_disable_error)
+-		usb_unlocked_enable_lpm(udev);
++	if (retval) {
++		dev->driver = NULL;
++		usb_set_intfdata(iface, NULL);
++		iface->needs_remote_wakeup = 0;
++		iface->condition = USB_INTERFACE_UNBOUND;
++
++		/*
++		 * Unbound interfaces are always runtime-PM-disabled
++		 * and runtime-PM-suspended
++		 */
++		if (driver->supports_autosuspend)
++			pm_runtime_disable(dev);
++		pm_runtime_set_suspended(dev);
++	}
+ 
+ 	return retval;
+ }
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index e77dfe5ed5ec..178d6c6063c0 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -58,6 +58,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
+ 	quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry),
+ 			     GFP_KERNEL);
+ 	if (!quirk_list) {
++		quirk_count = 0;
+ 		mutex_unlock(&quirk_mutex);
+ 		return -ENOMEM;
+ 	}
+@@ -154,7 +155,7 @@ static struct kparam_string quirks_param_string = {
+ 	.string = quirks_param,
+ };
+ 
+-module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
++device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);
+ MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks");
+ 
+ /* Lists of quirky USB devices, split in device quirks and interface quirks.
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index 623be3174fb3..79d8bd7a612e 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -228,6 +228,8 @@ struct usb_host_interface *usb_find_alt_setting(
+ 	struct usb_interface_cache *intf_cache = NULL;
+ 	int i;
+ 
++	if (!config)
++		return NULL;
+ 	for (i = 0; i < config->desc.bNumInterfaces; i++) {
+ 		if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber
+ 				== iface_num) {
+diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c
+index fb871eabcc10..a129d601a0c3 100644
+--- a/drivers/usb/musb/musb_dsps.c
++++ b/drivers/usb/musb/musb_dsps.c
+@@ -658,16 +658,6 @@ dsps_dma_controller_create(struct musb *musb, void __iomem *base)
+ 	return controller;
+ }
+ 
+-static void dsps_dma_controller_destroy(struct dma_controller *c)
+-{
+-	struct musb *musb = c->musb;
+-	struct dsps_glue *glue = dev_get_drvdata(musb->controller->parent);
+-	void __iomem *usbss_base = glue->usbss_base;
+-
+-	musb_writel(usbss_base, USBSS_IRQ_CLEARR, USBSS_IRQ_PD_COMP);
+-	cppi41_dma_controller_destroy(c);
+-}
+-
+ #ifdef CONFIG_PM_SLEEP
+ static void dsps_dma_controller_suspend(struct dsps_glue *glue)
+ {
+@@ -697,7 +687,7 @@ static struct musb_platform_ops dsps_ops = {
+ 
+ #ifdef CONFIG_USB_TI_CPPI41_DMA
+ 	.dma_init	= dsps_dma_controller_create,
+-	.dma_exit	= dsps_dma_controller_destroy,
++	.dma_exit	= cppi41_dma_controller_destroy,
+ #endif
+ 	.enable		= dsps_musb_enable,
+ 	.disable	= dsps_musb_disable,
+diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c
+index a31ea7e194dd..a6ebed1e0f20 100644
+--- a/drivers/usb/serial/kobil_sct.c
++++ b/drivers/usb/serial/kobil_sct.c
+@@ -393,12 +393,20 @@ static int kobil_tiocmget(struct tty_struct *tty)
+ 			  transfer_buffer_length,
+ 			  KOBIL_TIMEOUT);
+ 
+-	dev_dbg(&port->dev, "%s - Send get_status_line_state URB returns: %i. Statusline: %02x\n",
+-		__func__, result, transfer_buffer[0]);
++	dev_dbg(&port->dev, "Send get_status_line_state URB returns: %i\n",
++			result);
++	if (result < 1) {
++		if (result >= 0)
++			result = -EIO;
++		goto out_free;
++	}
++
++	dev_dbg(&port->dev, "Statusline: %02x\n", transfer_buffer[0]);
+ 
+ 	result = 0;
+ 	if ((transfer_buffer[0] & SUSBCR_GSL_DSR) != 0)
+ 		result = TIOCM_DSR;
++out_free:
+ 	kfree(transfer_buffer);
+ 	return result;
+ }
+diff --git a/drivers/usb/wusbcore/security.c b/drivers/usb/wusbcore/security.c
+index 33d2f5d7f33b..14ac8c98ac9e 100644
+--- a/drivers/usb/wusbcore/security.c
++++ b/drivers/usb/wusbcore/security.c
+@@ -217,7 +217,7 @@ int wusb_dev_sec_add(struct wusbhc *wusbhc,
+ 
+ 	result = usb_get_descriptor(usb_dev, USB_DT_SECURITY,
+ 				    0, secd, sizeof(*secd));
+-	if (result < sizeof(*secd)) {
++	if (result < (int)sizeof(*secd)) {
+ 		dev_err(dev, "Can't read security descriptor or "
+ 			"not enough data: %d\n", result);
+ 		goto out;
+diff --git a/drivers/uwb/hwa-rc.c b/drivers/uwb/hwa-rc.c
+index 9a53912bdfe9..5d3ba747ae17 100644
+--- a/drivers/uwb/hwa-rc.c
++++ b/drivers/uwb/hwa-rc.c
+@@ -873,6 +873,7 @@ error_get_version:
+ error_rc_add:
+ 	usb_put_intf(iface);
+ 	usb_put_dev(hwarc->usb_dev);
++	kfree(hwarc);
+ error_alloc:
+ 	uwb_rc_put(uwb_rc);
+ error_rc_alloc:
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 29756d88799b..6b86ca8772fb 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -396,13 +396,10 @@ static inline unsigned long busy_clock(void)
+ 	return local_clock() >> 10;
+ }
+ 
+-static bool vhost_can_busy_poll(struct vhost_dev *dev,
+-				unsigned long endtime)
++static bool vhost_can_busy_poll(unsigned long endtime)
+ {
+-	return likely(!need_resched()) &&
+-	       likely(!time_after(busy_clock(), endtime)) &&
+-	       likely(!signal_pending(current)) &&
+-	       !vhost_has_work(dev);
++	return likely(!need_resched() && !time_after(busy_clock(), endtime) &&
++		      !signal_pending(current));
+ }
+ 
+ static void vhost_net_disable_vq(struct vhost_net *n,
+@@ -434,7 +431,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
+ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 				    struct vhost_virtqueue *vq,
+ 				    struct iovec iov[], unsigned int iov_size,
+-				    unsigned int *out_num, unsigned int *in_num)
++				    unsigned int *out_num, unsigned int *in_num,
++				    bool *busyloop_intr)
+ {
+ 	unsigned long uninitialized_var(endtime);
+ 	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+@@ -443,9 +441,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+ 	if (r == vq->num && vq->busyloop_timeout) {
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+-		while (vhost_can_busy_poll(vq->dev, endtime) &&
+-		       vhost_vq_avail_empty(vq->dev, vq))
++		while (vhost_can_busy_poll(endtime)) {
++			if (vhost_has_work(vq->dev)) {
++				*busyloop_intr = true;
++				break;
++			}
++			if (!vhost_vq_avail_empty(vq->dev, vq))
++				break;
+ 			cpu_relax();
++		}
+ 		preempt_enable();
+ 		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+ 				      out_num, in_num, NULL, NULL);
+@@ -501,20 +505,24 @@ static void handle_tx(struct vhost_net *net)
+ 	zcopy = nvq->ubufs;
+ 
+ 	for (;;) {
++		bool busyloop_intr;
++
+ 		/* Release DMAs done buffers first */
+ 		if (zcopy)
+ 			vhost_zerocopy_signal_used(net, vq);
+ 
+-
++		busyloop_intr = false;
+ 		head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
+ 						ARRAY_SIZE(vq->iov),
+-						&out, &in);
++						&out, &in, &busyloop_intr);
+ 		/* On error, stop handling until the next kick. */
+ 		if (unlikely(head < 0))
+ 			break;
+ 		/* Nothing new?  Wait for eventfd to tell us they refilled. */
+ 		if (head == vq->num) {
+-			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
++			if (unlikely(busyloop_intr)) {
++				vhost_poll_queue(&vq->poll);
++			} else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+ 				vhost_disable_notify(&net->dev, vq);
+ 				continue;
+ 			}
+@@ -663,7 +671,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+ 		preempt_disable();
+ 		endtime = busy_clock() + vq->busyloop_timeout;
+ 
+-		while (vhost_can_busy_poll(&net->dev, endtime) &&
++		while (vhost_can_busy_poll(endtime) &&
++		       !vhost_has_work(&net->dev) &&
+ 		       !sk_has_rx_data(sk) &&
+ 		       vhost_vq_avail_empty(&net->dev, vq))
+ 			cpu_relax();
+diff --git a/fs/dax.c b/fs/dax.c
+index 641192808bb6..94f9fe002b12 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1007,21 +1007,12 @@ static vm_fault_t dax_load_hole(struct address_space *mapping, void *entry,
+ {
+ 	struct inode *inode = mapping->host;
+ 	unsigned long vaddr = vmf->address;
+-	vm_fault_t ret = VM_FAULT_NOPAGE;
+-	struct page *zero_page;
+-	pfn_t pfn;
+-
+-	zero_page = ZERO_PAGE(0);
+-	if (unlikely(!zero_page)) {
+-		ret = VM_FAULT_OOM;
+-		goto out;
+-	}
++	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
++	vm_fault_t ret;
+ 
+-	pfn = page_to_pfn_t(zero_page);
+ 	dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE,
+ 			false);
+ 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
+-out:
+ 	trace_dax_load_hole(inode, vmf, ret);
+ 	return ret;
+ }
+diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
+index 71635909df3b..b4e0501bcba1 100644
+--- a/fs/ext2/inode.c
++++ b/fs/ext2/inode.c
+@@ -1448,6 +1448,7 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 	}
+ 	inode->i_blocks = le32_to_cpu(raw_inode->i_blocks);
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext2_set_inode_flags(inode);
+ 	ei->i_faddr = le32_to_cpu(raw_inode->i_faddr);
+ 	ei->i_frag_no = raw_inode->i_frag;
+ 	ei->i_frag_size = raw_inode->i_fsize;
+@@ -1517,7 +1518,6 @@ struct inode *ext2_iget (struct super_block *sb, unsigned long ino)
+ 			   new_decode_dev(le32_to_cpu(raw_inode->i_block[1])));
+ 	}
+ 	brelse (bh);
+-	ext2_set_inode_flags(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+ 	
+diff --git a/fs/iomap.c b/fs/iomap.c
+index 0d0bd8845586..af6144fd4919 100644
+--- a/fs/iomap.c
++++ b/fs/iomap.c
+@@ -811,6 +811,7 @@ struct iomap_dio {
+ 	atomic_t		ref;
+ 	unsigned		flags;
+ 	int			error;
++	bool			wait_for_completion;
+ 
+ 	union {
+ 		/* used during submission and for synchronous completion: */
+@@ -914,9 +915,8 @@ static void iomap_dio_bio_end_io(struct bio *bio)
+ 		iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
+ 
+ 	if (atomic_dec_and_test(&dio->ref)) {
+-		if (is_sync_kiocb(dio->iocb)) {
++		if (dio->wait_for_completion) {
+ 			struct task_struct *waiter = dio->submit.waiter;
+-
+ 			WRITE_ONCE(dio->submit.waiter, NULL);
+ 			wake_up_process(waiter);
+ 		} else if (dio->flags & IOMAP_DIO_WRITE) {
+@@ -1131,13 +1131,12 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 	dio->end_io = end_io;
+ 	dio->error = 0;
+ 	dio->flags = 0;
++	dio->wait_for_completion = is_sync_kiocb(iocb);
+ 
+ 	dio->submit.iter = iter;
+-	if (is_sync_kiocb(iocb)) {
+-		dio->submit.waiter = current;
+-		dio->submit.cookie = BLK_QC_T_NONE;
+-		dio->submit.last_queue = NULL;
+-	}
++	dio->submit.waiter = current;
++	dio->submit.cookie = BLK_QC_T_NONE;
++	dio->submit.last_queue = NULL;
+ 
+ 	if (iov_iter_rw(iter) == READ) {
+ 		if (pos >= dio->i_size)
+@@ -1187,7 +1186,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio_warn_stale_pagecache(iocb->ki_filp);
+ 	ret = 0;
+ 
+-	if (iov_iter_rw(iter) == WRITE && !is_sync_kiocb(iocb) &&
++	if (iov_iter_rw(iter) == WRITE && !dio->wait_for_completion &&
+ 	    !inode->i_sb->s_dio_done_wq) {
+ 		ret = sb_init_dio_done_wq(inode->i_sb);
+ 		if (ret < 0)
+@@ -1202,8 +1201,10 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 				iomap_dio_actor);
+ 		if (ret <= 0) {
+ 			/* magic error code to fall back to buffered I/O */
+-			if (ret == -ENOTBLK)
++			if (ret == -ENOTBLK) {
++				dio->wait_for_completion = true;
+ 				ret = 0;
++			}
+ 			break;
+ 		}
+ 		pos += ret;
+@@ -1224,7 +1225,7 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
+ 		dio->flags &= ~IOMAP_DIO_NEED_SYNC;
+ 
+ 	if (!atomic_dec_and_test(&dio->ref)) {
+-		if (!is_sync_kiocb(iocb))
++		if (!dio->wait_for_completion)
+ 			return -EIOCBQUEUED;
+ 
+ 		for (;;) {
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index ec3fba7d492f..488a9e7f8f66 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -24,6 +24,7 @@
+ #include <linux/mpage.h>
+ #include <linux/user_namespace.h>
+ #include <linux/seq_file.h>
++#include <linux/blkdev.h>
+ 
+ #include "isofs.h"
+ #include "zisofs.h"
+@@ -653,6 +654,12 @@ static int isofs_fill_super(struct super_block *s, void *data, int silent)
+ 	/*
+ 	 * What if bugger tells us to go beyond page size?
+ 	 */
++	if (bdev_logical_block_size(s->s_bdev) > 2048) {
++		printk(KERN_WARNING
++		       "ISOFS: unsupported/invalid hardware sector size %d\n",
++			bdev_logical_block_size(s->s_bdev));
++		goto out_freesbi;
++	}
+ 	opt.blocksize = sb_min_blocksize(s, opt.blocksize);
+ 
+ 	sbi->s_high_sierra = 0; /* default is iso9660 */
+diff --git a/fs/locks.c b/fs/locks.c
+index db7b6917d9c5..fafce5a8d74f 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -2072,6 +2072,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+ 		return -1;
+ 	if (IS_REMOTELCK(fl))
+ 		return fl->fl_pid;
++	/*
++	 * If the flock owner process is dead and its pid has been already
++	 * freed, the translation below won't work, but we still want to show
++	 * flock owner pid number in init pidns.
++	 */
++	if (ns == &init_pid_ns)
++		return (pid_t)fl->fl_pid;
+ 
+ 	rcu_read_lock();
+ 	pid = find_pid_ns(fl->fl_pid, &init_pid_ns);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 5d99e8810b85..0dded931f119 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1726,6 +1726,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
+ 	if (status) {
+ 		op = &args->ops[0];
+ 		op->status = status;
++		resp->opcnt = 1;
+ 		goto encode_op;
+ 	}
+ 
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index ca1d2cc2cdfa..18863d56273c 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -199,47 +199,57 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ 
+ #define __declare_arg_0(a0, res)					\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
+ 	register unsigned long r1 asm("r1");				\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_1(a0, a1, res)					\
++	typeof(a1) __a1 = a1;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
+ 	register unsigned long r2 asm("r2");				\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_2(a0, a1, a2, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
+ 	register unsigned long r3 asm("r3")
+ 
+ #define __declare_arg_3(a0, a1, a2, a3, res)				\
++	typeof(a1) __a1 = a1;						\
++	typeof(a2) __a2 = a2;						\
++	typeof(a3) __a3 = a3;						\
+ 	struct arm_smccc_res   *___res = res;				\
+-	register u32           r0 asm("r0") = a0;			\
+-	register typeof(a1)    r1 asm("r1") = a1;			\
+-	register typeof(a2)    r2 asm("r2") = a2;			\
+-	register typeof(a3)    r3 asm("r3") = a3
++	register unsigned long r0 asm("r0") = (u32)a0;			\
++	register unsigned long r1 asm("r1") = __a1;			\
++	register unsigned long r2 asm("r2") = __a2;			\
++	register unsigned long r3 asm("r3") = __a3
+ 
+ #define __declare_arg_4(a0, a1, a2, a3, a4, res)			\
++	typeof(a4) __a4 = a4;						\
+ 	__declare_arg_3(a0, a1, a2, a3, res);				\
+-	register typeof(a4) r4 asm("r4") = a4
++	register unsigned long r4 asm("r4") = __a4
+ 
+ #define __declare_arg_5(a0, a1, a2, a3, a4, a5, res)			\
++	typeof(a5) __a5 = a5;						\
+ 	__declare_arg_4(a0, a1, a2, a3, a4, res);			\
+-	register typeof(a5) r5 asm("r5") = a5
++	register unsigned long r5 asm("r5") = __a5
+ 
+ #define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res)		\
++	typeof(a6) __a6 = a6;						\
+ 	__declare_arg_5(a0, a1, a2, a3, a4, a5, res);			\
+-	register typeof(a6) r6 asm("r6") = a6
++	register unsigned long r6 asm("r6") = __a6
+ 
+ #define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res)		\
++	typeof(a7) __a7 = a7;						\
+ 	__declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res);		\
+-	register typeof(a7) r7 asm("r7") = a7
++	register unsigned long r7 asm("r7") = __a7
+ 
+ #define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
+ #define __declare_args(count, ...)  ___declare_args(count, __VA_ARGS__)
+diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
+index cf2588d81148..147a7bb341dd 100644
+--- a/include/linux/bitfield.h
++++ b/include/linux/bitfield.h
+@@ -104,7 +104,7 @@
+ 		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+ 	})
+ 
+-extern void __compiletime_warning("value doesn't fit into mask")
++extern void __compiletime_error("value doesn't fit into mask")
+ __field_overflow(void);
+ extern void __compiletime_error("bad bitfield mask")
+ __bad_mask(void);
+@@ -121,8 +121,8 @@ static __always_inline u64 field_mask(u64 field)
+ #define ____MAKE_OP(type,base,to,from)					\
+ static __always_inline __##type type##_encode_bits(base v, base field)	\
+ {									\
+-        if (__builtin_constant_p(v) &&	(v & ~field_multiplier(field)))	\
+-			    __field_overflow();				\
++	if (__builtin_constant_p(v) && (v & ~field_mask(field)))	\
++		__field_overflow();					\
+ 	return to((v & field_mask(field)) * field_multiplier(field));	\
+ }									\
+ static __always_inline __##type type##_replace_bits(__##type old,	\
+diff --git a/include/linux/platform_data/ina2xx.h b/include/linux/platform_data/ina2xx.h
+index 9abc0ca7259b..9f0aa1b48c78 100644
+--- a/include/linux/platform_data/ina2xx.h
++++ b/include/linux/platform_data/ina2xx.h
+@@ -1,7 +1,7 @@
+ /*
+  * Driver for Texas Instruments INA219, INA226 power monitor chips
+  *
+- * Copyright (C) 2012 Lothar Felten <l-felten@ti.com>
++ * Copyright (C) 2012 Lothar Felten <lothar.felten@gmail.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h
+index c85704fcdbd2..ee7e987ea1b4 100644
+--- a/include/linux/posix-timers.h
++++ b/include/linux/posix-timers.h
+@@ -95,8 +95,8 @@ struct k_itimer {
+ 	clockid_t		it_clock;
+ 	timer_t			it_id;
+ 	int			it_active;
+-	int			it_overrun;
+-	int			it_overrun_last;
++	s64			it_overrun;
++	s64			it_overrun_last;
+ 	int			it_requeue_pending;
+ 	int			it_sigev_notify;
+ 	ktime_t			it_interval;
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index b21c4bd96b84..f80769175c56 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -269,6 +269,7 @@ struct power_supply {
+ 	spinlock_t changed_lock;
+ 	bool changed;
+ 	bool initialized;
++	bool removing;
+ 	atomic_t use_cnt;
+ #ifdef CONFIG_THERMAL
+ 	struct thermal_zone_device *tzd;
+diff --git a/include/linux/regulator/machine.h b/include/linux/regulator/machine.h
+index 3468703d663a..a459a5e973a7 100644
+--- a/include/linux/regulator/machine.h
++++ b/include/linux/regulator/machine.h
+@@ -48,9 +48,9 @@ struct regulator;
+  * DISABLE_IN_SUSPEND	- turn off regulator in suspend states
+  * ENABLE_IN_SUSPEND	- keep regulator on in suspend states
+  */
+-#define DO_NOTHING_IN_SUSPEND	(-1)
+-#define DISABLE_IN_SUSPEND	0
+-#define ENABLE_IN_SUSPEND	1
++#define DO_NOTHING_IN_SUSPEND	0
++#define DISABLE_IN_SUSPEND	1
++#define ENABLE_IN_SUSPEND	2
+ 
+ /* Regulator active discharge flags */
+ enum regulator_active_discharge {
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 409c845d4cd3..422b1c01ee0d 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -172,7 +172,7 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
+ static __always_inline __must_check
+ size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	if (unlikely(!check_copy_size(addr, bytes, false)))
++	if (unlikely(!check_copy_size(addr, bytes, true)))
+ 		return 0;
+ 	else
+ 		return _copy_to_iter_mcsafe(addr, bytes, i);
+diff --git a/include/media/v4l2-fh.h b/include/media/v4l2-fh.h
+index ea73fef8bdc0..8586cfb49828 100644
+--- a/include/media/v4l2-fh.h
++++ b/include/media/v4l2-fh.h
+@@ -38,10 +38,13 @@ struct v4l2_ctrl_handler;
+  * @prio: priority of the file handler, as defined by &enum v4l2_priority
+  *
+  * @wait: event' s wait queue
++ * @subscribe_lock: serialise changes to the subscribed list; guarantee that
++ *		    the add and del event callbacks are orderly called
+  * @subscribed: list of subscribed events
+  * @available: list of events waiting to be dequeued
+  * @navailable: number of available events at @available list
+  * @sequence: event sequence number
++ *
+  * @m2m_ctx: pointer to &struct v4l2_m2m_ctx
+  */
+ struct v4l2_fh {
+@@ -52,6 +55,7 @@ struct v4l2_fh {
+ 
+ 	/* Events */
+ 	wait_queue_head_t	wait;
++	struct mutex		subscribe_lock;
+ 	struct list_head	subscribed;
+ 	struct list_head	available;
+ 	unsigned int		navailable;
+diff --git a/include/rdma/opa_addr.h b/include/rdma/opa_addr.h
+index 2bbb7a67e643..66d4393d339c 100644
+--- a/include/rdma/opa_addr.h
++++ b/include/rdma/opa_addr.h
+@@ -120,7 +120,7 @@ static inline bool rdma_is_valid_unicast_lid(struct rdma_ah_attr *attr)
+ 	if (attr->type == RDMA_AH_ATTR_TYPE_IB) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+ 		    rdma_ah_get_dlid(attr) >=
+-		    be32_to_cpu(IB_MULTICAST_LID_BASE))
++		    be16_to_cpu(IB_MULTICAST_LID_BASE))
+ 			return false;
+ 	} else if (attr->type == RDMA_AH_ATTR_TYPE_OPA) {
+ 		if (!rdma_ah_get_dlid(attr) ||
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index 58899601fccf..ed707b21d152 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -1430,12 +1430,15 @@ out:
+ static void smap_write_space(struct sock *sk)
+ {
+ 	struct smap_psock *psock;
++	void (*write_space)(struct sock *sk);
+ 
+ 	rcu_read_lock();
+ 	psock = smap_psock_sk(sk);
+ 	if (likely(psock && test_bit(SMAP_TX_RUNNING, &psock->state)))
+ 		schedule_work(&psock->tx_work);
++	write_space = psock->save_write_space;
+ 	rcu_read_unlock();
++	write_space(sk);
+ }
+ 
+ static void smap_stop_sock(struct smap_psock *psock, struct sock *sk)
+@@ -2143,7 +2146,9 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-EPERM);
+ 
+ 	/* check sanity of attributes */
+-	if (attr->max_entries == 0 || attr->value_size != 4 ||
++	if (attr->max_entries == 0 ||
++	    attr->key_size == 0 ||
++	    attr->value_size != 4 ||
+ 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -2270,8 +2275,10 @@ static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,
+ 	}
+ 	l_new = kmalloc_node(htab->elem_size, GFP_ATOMIC | __GFP_NOWARN,
+ 			     htab->map.numa_node);
+-	if (!l_new)
++	if (!l_new) {
++		atomic_dec(&htab->count);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	memcpy(l_new->key, key, key_size);
+ 	l_new->sk = sk;
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index 6e28d2866be5..314e2a9040c7 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -400,16 +400,35 @@ int dbg_release_bp_slot(struct perf_event *bp)
+ 	return 0;
+ }
+ 
+-static int validate_hw_breakpoint(struct perf_event *bp)
++#ifndef hw_breakpoint_arch_parse
++int hw_breakpoint_arch_parse(struct perf_event *bp,
++			     const struct perf_event_attr *attr,
++			     struct arch_hw_breakpoint *hw)
+ {
+-	int ret;
++	int err;
+ 
+-	ret = arch_validate_hwbkpt_settings(bp);
+-	if (ret)
+-		return ret;
++	err = arch_validate_hwbkpt_settings(bp);
++	if (err)
++		return err;
++
++	*hw = bp->hw.info;
++
++	return 0;
++}
++#endif
++
++static int hw_breakpoint_parse(struct perf_event *bp,
++			       const struct perf_event_attr *attr,
++			       struct arch_hw_breakpoint *hw)
++{
++	int err;
++
++	err = hw_breakpoint_arch_parse(bp, attr, hw);
++	if (err)
++		return err;
+ 
+ 	if (arch_check_bp_in_kernelspace(bp)) {
+-		if (bp->attr.exclude_kernel)
++		if (attr->exclude_kernel)
+ 			return -EINVAL;
+ 		/*
+ 		 * Don't let unprivileged users set a breakpoint in the trap
+@@ -424,19 +443,22 @@ static int validate_hw_breakpoint(struct perf_event *bp)
+ 
+ int register_perf_hw_breakpoint(struct perf_event *bp)
+ {
+-	int ret;
+-
+-	ret = reserve_bp_slot(bp);
+-	if (ret)
+-		return ret;
++	struct arch_hw_breakpoint hw;
++	int err;
+ 
+-	ret = validate_hw_breakpoint(bp);
++	err = reserve_bp_slot(bp);
++	if (err)
++		return err;
+ 
+-	/* if arch_validate_hwbkpt_settings() fails then release bp slot */
+-	if (ret)
++	err = hw_breakpoint_parse(bp, &bp->attr, &hw);
++	if (err) {
+ 		release_bp_slot(bp);
++		return err;
++	}
+ 
+-	return ret;
++	bp->hw.info = hw;
++
++	return 0;
+ }
+ 
+ /**
+@@ -464,6 +486,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	u64 old_len  = bp->attr.bp_len;
+ 	int old_type = bp->attr.bp_type;
+ 	bool modify  = attr->bp_type != old_type;
++	struct arch_hw_breakpoint hw;
+ 	int err = 0;
+ 
+ 	bp->attr.bp_addr = attr->bp_addr;
+@@ -473,7 +496,7 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 	if (check && memcmp(&bp->attr, attr, sizeof(*attr)))
+ 		return -EINVAL;
+ 
+-	err = validate_hw_breakpoint(bp);
++	err = hw_breakpoint_parse(bp, attr, &hw);
+ 	if (!err && modify)
+ 		err = modify_bp_slot(bp, old_type);
+ 
+@@ -484,7 +507,9 @@ modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *a
+ 		return err;
+ 	}
+ 
++	bp->hw.info = hw;
+ 	bp->attr.disabled = attr->disabled;
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/module.c b/kernel/module.c
+index f475f30eed8c..4a6b9c6d5f2c 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4067,7 +4067,7 @@ static unsigned long mod_find_symname(struct module *mod, const char *name)
+ 
+ 	for (i = 0; i < kallsyms->num_symtab; i++)
+ 		if (strcmp(name, symname(kallsyms, i)) == 0 &&
+-		    kallsyms->symtab[i].st_info != 'U')
++		    kallsyms->symtab[i].st_shndx != SHN_UNDEF)
+ 			return kallsyms->symtab[i].st_value;
+ 	return 0;
+ }
+@@ -4113,6 +4113,10 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
+ 		if (mod->state == MODULE_STATE_UNFORMED)
+ 			continue;
+ 		for (i = 0; i < kallsyms->num_symtab; i++) {
++
++			if (kallsyms->symtab[i].st_shndx == SHN_UNDEF)
++				continue;
++
+ 			ret = fn(data, symname(kallsyms, i),
+ 				 mod, kallsyms->symtab[i].st_value);
+ 			if (ret != 0)
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 639321bf2e39..fa5de5e8de61 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -581,11 +581,11 @@ static void alarm_timer_rearm(struct k_itimer *timr)
+  * @timr:	Pointer to the posixtimer data struct
+  * @now:	Current time to forward the timer against
+  */
+-static int alarm_timer_forward(struct k_itimer *timr, ktime_t now)
++static s64 alarm_timer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
+ 
+-	return (int) alarm_forward(alarm, timr->it_interval, now);
++	return alarm_forward(alarm, timr->it_interval, now);
+ }
+ 
+ /**
+@@ -808,7 +808,8 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags,
+ 	/* Convert (if necessary) to absolute time */
+ 	if (flags != TIMER_ABSTIME) {
+ 		ktime_t now = alarm_bases[type].gettime();
+-		exp = ktime_add(now, exp);
++
++		exp = ktime_add_safe(now, exp);
+ 	}
+ 
+ 	ret = alarmtimer_do_nsleep(&alarm, exp, type);
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 9cdf54b04ca8..294d7b65af33 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -85,7 +85,7 @@ static void bump_cpu_timer(struct k_itimer *timer, u64 now)
+ 			continue;
+ 
+ 		timer->it.cpu.expires += incr;
+-		timer->it_overrun += 1 << i;
++		timer->it_overrun += 1LL << i;
+ 		delta -= incr;
+ 	}
+ }
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index e08ce3f27447..e475012bff7e 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -283,6 +283,17 @@ static __init int init_posix_timers(void)
+ }
+ __initcall(init_posix_timers);
+ 
++/*
++ * The siginfo si_overrun field and the return value of timer_getoverrun(2)
++ * are of type int. Clamp the overrun value to INT_MAX
++ */
++static inline int timer_overrun_to_int(struct k_itimer *timr, int baseval)
++{
++	s64 sum = timr->it_overrun_last + (s64)baseval;
++
++	return sum > (s64)INT_MAX ? INT_MAX : (int)sum;
++}
++
+ static void common_hrtimer_rearm(struct k_itimer *timr)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+@@ -290,9 +301,8 @@ static void common_hrtimer_rearm(struct k_itimer *timr)
+ 	if (!timr->it_interval)
+ 		return;
+ 
+-	timr->it_overrun += (unsigned int) hrtimer_forward(timer,
+-						timer->base->get_time(),
+-						timr->it_interval);
++	timr->it_overrun += hrtimer_forward(timer, timer->base->get_time(),
++					    timr->it_interval);
+ 	hrtimer_restart(timer);
+ }
+ 
+@@ -321,10 +331,10 @@ void posixtimer_rearm(struct siginfo *info)
+ 
+ 		timr->it_active = 1;
+ 		timr->it_overrun_last = timr->it_overrun;
+-		timr->it_overrun = -1;
++		timr->it_overrun = -1LL;
+ 		++timr->it_requeue_pending;
+ 
+-		info->si_overrun += timr->it_overrun_last;
++		info->si_overrun = timer_overrun_to_int(timr, info->si_overrun);
+ 	}
+ 
+ 	unlock_timer(timr, flags);
+@@ -418,9 +428,8 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ 					now = ktime_add(now, kj);
+ 			}
+ #endif
+-			timr->it_overrun += (unsigned int)
+-				hrtimer_forward(timer, now,
+-						timr->it_interval);
++			timr->it_overrun += hrtimer_forward(timer, now,
++							    timr->it_interval);
+ 			ret = HRTIMER_RESTART;
+ 			++timr->it_requeue_pending;
+ 			timr->it_active = 1;
+@@ -524,7 +533,7 @@ static int do_timer_create(clockid_t which_clock, struct sigevent *event,
+ 	new_timer->it_id = (timer_t) new_timer_id;
+ 	new_timer->it_clock = which_clock;
+ 	new_timer->kclock = kc;
+-	new_timer->it_overrun = -1;
++	new_timer->it_overrun = -1LL;
+ 
+ 	if (event) {
+ 		rcu_read_lock();
+@@ -645,11 +654,11 @@ static ktime_t common_hrtimer_remaining(struct k_itimer *timr, ktime_t now)
+ 	return __hrtimer_expires_remaining_adjusted(timer, now);
+ }
+ 
+-static int common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
++static s64 common_hrtimer_forward(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct hrtimer *timer = &timr->it.real.timer;
+ 
+-	return (int)hrtimer_forward(timer, now, timr->it_interval);
++	return hrtimer_forward(timer, now, timr->it_interval);
+ }
+ 
+ /*
+@@ -789,7 +798,7 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
+ 	if (!timr)
+ 		return -EINVAL;
+ 
+-	overrun = timr->it_overrun_last;
++	overrun = timer_overrun_to_int(timr, 0);
+ 	unlock_timer(timr, flags);
+ 
+ 	return overrun;
+diff --git a/kernel/time/posix-timers.h b/kernel/time/posix-timers.h
+index 151e28f5bf30..ddb21145211a 100644
+--- a/kernel/time/posix-timers.h
++++ b/kernel/time/posix-timers.h
+@@ -19,7 +19,7 @@ struct k_clock {
+ 	void	(*timer_get)(struct k_itimer *timr,
+ 			     struct itimerspec64 *cur_setting);
+ 	void	(*timer_rearm)(struct k_itimer *timr);
+-	int	(*timer_forward)(struct k_itimer *timr, ktime_t now);
++	s64	(*timer_forward)(struct k_itimer *timr, ktime_t now);
+ 	ktime_t	(*timer_remaining)(struct k_itimer *timr, ktime_t now);
+ 	int	(*timer_try_to_cancel)(struct k_itimer *timr);
+ 	void	(*timer_arm)(struct k_itimer *timr, ktime_t expires,
+diff --git a/lib/klist.c b/lib/klist.c
+index 0507fa5d84c5..f6b547812fe3 100644
+--- a/lib/klist.c
++++ b/lib/klist.c
+@@ -336,8 +336,9 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *prev;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		prev = to_klist_node(last->n_node.prev);
+@@ -356,7 +357,7 @@ struct klist_node *klist_prev(struct klist_iter *i)
+ 		prev = to_klist_node(prev->n_node.prev);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+@@ -377,8 +378,9 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 	void (*put)(struct klist_node *) = i->i_klist->put;
+ 	struct klist_node *last = i->i_cur;
+ 	struct klist_node *next;
++	unsigned long flags;
+ 
+-	spin_lock(&i->i_klist->k_lock);
++	spin_lock_irqsave(&i->i_klist->k_lock, flags);
+ 
+ 	if (last) {
+ 		next = to_klist_node(last->n_node.next);
+@@ -397,7 +399,7 @@ struct klist_node *klist_next(struct klist_iter *i)
+ 		next = to_klist_node(next->n_node.next);
+ 	}
+ 
+-	spin_unlock(&i->i_klist->k_lock);
++	spin_unlock_irqrestore(&i->i_klist->k_lock, flags);
+ 
+ 	if (put && last)
+ 		put(last);
+diff --git a/net/6lowpan/iphc.c b/net/6lowpan/iphc.c
+index 6b1042e21656..52fad5dad9f7 100644
+--- a/net/6lowpan/iphc.c
++++ b/net/6lowpan/iphc.c
+@@ -770,6 +770,7 @@ int lowpan_header_decompress(struct sk_buff *skb, const struct net_device *dev,
+ 		hdr.hop_limit, &hdr.daddr);
+ 
+ 	skb_push(skb, sizeof(hdr));
++	skb_reset_mac_header(skb);
+ 	skb_reset_network_header(skb);
+ 	skb_copy_to_linear_data(skb, &hdr, sizeof(hdr));
+ 
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 4bfff3c87e8e..e99d6afb70ef 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -95,11 +95,10 @@ struct bbr {
+ 	u32     mode:3,		     /* current bbr_mode in state machine */
+ 		prev_ca_state:3,     /* CA state on previous ACK */
+ 		packet_conservation:1,  /* use packet conservation? */
+-		restore_cwnd:1,	     /* decided to revert cwnd to old value */
+ 		round_start:1,	     /* start of packet-timed tx->ack round? */
+ 		idle_restart:1,	     /* restarting after idle? */
+ 		probe_rtt_round_done:1,  /* a BBR_PROBE_RTT round at 4 pkts? */
+-		unused:12,
++		unused:13,
+ 		lt_is_sampling:1,    /* taking long-term ("LT") samples now? */
+ 		lt_rtt_cnt:7,	     /* round trips in long-term interval */
+ 		lt_use_bw:1;	     /* use lt_bw as our bw estimate? */
+@@ -175,6 +174,8 @@ static const u32 bbr_lt_bw_diff = 4000 / 8;
+ /* If we estimate we're policed, use lt_bw for this many round trips: */
+ static const u32 bbr_lt_bw_max_rtts = 48;
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk);
++
+ /* Do we estimate that STARTUP filled the pipe? */
+ static bool bbr_full_bw_reached(const struct sock *sk)
+ {
+@@ -305,6 +306,8 @@ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+ 		 */
+ 		if (bbr->mode == BBR_PROBE_BW)
+ 			bbr_set_pacing_rate(sk, bbr_bw(sk), BBR_UNIT);
++		else if (bbr->mode == BBR_PROBE_RTT)
++			bbr_check_probe_rtt_done(sk);
+ 	}
+ }
+ 
+@@ -392,17 +395,11 @@ static bool bbr_set_cwnd_to_recover_or_restore(
+ 		cwnd = tcp_packets_in_flight(tp) + acked;
+ 	} else if (prev_state >= TCP_CA_Recovery && state < TCP_CA_Recovery) {
+ 		/* Exiting loss recovery; restore cwnd saved before recovery. */
+-		bbr->restore_cwnd = 1;
++		cwnd = max(cwnd, bbr->prior_cwnd);
+ 		bbr->packet_conservation = 0;
+ 	}
+ 	bbr->prev_ca_state = state;
+ 
+-	if (bbr->restore_cwnd) {
+-		/* Restore cwnd after exiting loss recovery or PROBE_RTT. */
+-		cwnd = max(cwnd, bbr->prior_cwnd);
+-		bbr->restore_cwnd = 0;
+-	}
+-
+ 	if (bbr->packet_conservation) {
+ 		*new_cwnd = max(cwnd, tcp_packets_in_flight(tp) + acked);
+ 		return true;	/* yes, using packet conservation */
+@@ -744,6 +741,20 @@ static void bbr_check_drain(struct sock *sk, const struct rate_sample *rs)
+ 		bbr_reset_probe_bw_mode(sk);  /* we estimate queue is drained */
+ }
+ 
++static void bbr_check_probe_rtt_done(struct sock *sk)
++{
++	struct tcp_sock *tp = tcp_sk(sk);
++	struct bbr *bbr = inet_csk_ca(sk);
++
++	if (!(bbr->probe_rtt_done_stamp &&
++	      after(tcp_jiffies32, bbr->probe_rtt_done_stamp)))
++		return;
++
++	bbr->min_rtt_stamp = tcp_jiffies32;  /* wait a while until PROBE_RTT */
++	tp->snd_cwnd = max(tp->snd_cwnd, bbr->prior_cwnd);
++	bbr_reset_mode(sk);
++}
++
+ /* The goal of PROBE_RTT mode is to have BBR flows cooperatively and
+  * periodically drain the bottleneck queue, to converge to measure the true
+  * min_rtt (unloaded propagation delay). This allows the flows to keep queues
+@@ -802,12 +813,8 @@ static void bbr_update_min_rtt(struct sock *sk, const struct rate_sample *rs)
+ 		} else if (bbr->probe_rtt_done_stamp) {
+ 			if (bbr->round_start)
+ 				bbr->probe_rtt_round_done = 1;
+-			if (bbr->probe_rtt_round_done &&
+-			    after(tcp_jiffies32, bbr->probe_rtt_done_stamp)) {
+-				bbr->min_rtt_stamp = tcp_jiffies32;
+-				bbr->restore_cwnd = 1;  /* snap to prior_cwnd */
+-				bbr_reset_mode(sk);
+-			}
++			if (bbr->probe_rtt_round_done)
++				bbr_check_probe_rtt_done(sk);
+ 		}
+ 	}
+ 	/* Restart after idle ends only once we process a new S/ACK for data */
+@@ -858,7 +865,6 @@ static void bbr_init(struct sock *sk)
+ 	bbr->has_seen_rtt = 0;
+ 	bbr_init_pacing_rate_from_rtt(sk);
+ 
+-	bbr->restore_cwnd = 0;
+ 	bbr->round_start = 0;
+ 	bbr->idle_restart = 0;
+ 	bbr->full_bw_reached = 0;
+diff --git a/net/ncsi/ncsi-netlink.c b/net/ncsi/ncsi-netlink.c
+index 82e6edf9c5d9..45f33d6dedf7 100644
+--- a/net/ncsi/ncsi-netlink.c
++++ b/net/ncsi/ncsi-netlink.c
+@@ -100,7 +100,7 @@ static int ncsi_write_package_info(struct sk_buff *skb,
+ 	bool found;
+ 	int rc;
+ 
+-	if (id > ndp->package_num) {
++	if (id > ndp->package_num - 1) {
+ 		netdev_info(ndp->ndev.dev, "NCSI: No package with id %u\n", id);
+ 		return -ENODEV;
+ 	}
+@@ -240,7 +240,7 @@ static int ncsi_pkg_info_all_nl(struct sk_buff *skb,
+ 		return 0; /* done */
+ 
+ 	hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+-			  &ncsi_genl_family, 0,  NCSI_CMD_PKG_INFO);
++			  &ncsi_genl_family, NLM_F_MULTI,  NCSI_CMD_PKG_INFO);
+ 	if (!hdr) {
+ 		rc = -EMSGSIZE;
+ 		goto err;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 2ccf194c3ebb..8015e50e8d0a 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -222,9 +222,14 @@ static void tls_write_space(struct sock *sk)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+ 
+-	/* We are already sending pages, ignore notification */
+-	if (ctx->in_tcp_sendpages)
++	/* If in_tcp_sendpages call lower protocol write space handler
++	 * to ensure we wake up any waiting operations there. For example
++	 * if do_tcp_sendpages where to call sk_wait_event.
++	 */
++	if (ctx->in_tcp_sendpages) {
++		ctx->sk_write_space(sk);
+ 		return;
++	}
+ 
+ 	if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {
+ 		gfp_t sk_allocation = sk->sk_allocation;
+diff --git a/sound/aoa/core/gpio-feature.c b/sound/aoa/core/gpio-feature.c
+index 71960089e207..65557421fe0b 100644
+--- a/sound/aoa/core/gpio-feature.c
++++ b/sound/aoa/core/gpio-feature.c
+@@ -88,8 +88,10 @@ static struct device_node *get_gpio(char *name,
+ 	}
+ 
+ 	reg = of_get_property(np, "reg", NULL);
+-	if (!reg)
++	if (!reg) {
++		of_node_put(np);
+ 		return NULL;
++	}
+ 
+ 	*gpioptr = *reg;
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 647ae1a71e10..28dc5e124995 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2535,7 +2535,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ 	/* AMD Raven */
+ 	{ PCI_DEVICE(0x1022, 0x15e3),
+-	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
++	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB |
++			 AZX_DCAPS_PM_RUNTIME },
+ 	/* ATI HDMI */
+ 	{ PCI_DEVICE(0x1002, 0x0002),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+diff --git a/sound/soc/codecs/rt1305.c b/sound/soc/codecs/rt1305.c
+index f4c8c45f4010..421b8fb2fa04 100644
+--- a/sound/soc/codecs/rt1305.c
++++ b/sound/soc/codecs/rt1305.c
+@@ -1066,7 +1066,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Left_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Left channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0l = 562949953421312;
++	r0l = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0l, rhl);
+ 	pr_debug("Left_r0 = 0x%llx\n", r0l);
+@@ -1083,7 +1083,7 @@ static void rt1305_calibrate(struct rt1305_priv *rt1305)
+ 	pr_debug("Right_rhl = 0x%x rh=0x%x rl=0x%x\n", rhl, rh, rl);
+ 	pr_info("Right channel %d.%dohm\n", (r0ohm/10), (r0ohm%10));
+ 
+-	r0r = 562949953421312;
++	r0r = 562949953421312ULL;
+ 	if (rhl != 0)
+ 		do_div(r0r, rhl);
+ 	pr_debug("Right_r0 = 0x%llx\n", r0r);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 33065ba294a9..d2c9d7865bde 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -404,7 +404,7 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 		},
+ 		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
+ 					BYT_RT5640_JD_SRC_JD1_IN4P |
+-					BYT_RT5640_OVCD_TH_2000UA |
++					BYT_RT5640_OVCD_TH_1500UA |
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index 01f43218984b..69a7896cb713 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -777,7 +777,7 @@ static int q6afe_callback(struct apr_device *adev, struct apr_resp_pkt *data)
+  */
+ int q6afe_get_port_id(int index)
+ {
+-	if (index < 0 || index > AFE_PORT_MAX)
++	if (index < 0 || index >= AFE_PORT_MAX)
+ 		return -EINVAL;
+ 
+ 	return port_maps[index].port_id;
+@@ -1014,7 +1014,7 @@ int q6afe_port_stop(struct q6afe_port *port)
+ 
+ 	port_id = port->id;
+ 	index = port->token;
+-	if (index < 0 || index > AFE_PORT_MAX) {
++	if (index < 0 || index >= AFE_PORT_MAX) {
+ 		dev_err(afe->dev, "AFE port index[%d] invalid!\n", index);
+ 		return -EINVAL;
+ 	}
+@@ -1355,7 +1355,7 @@ struct q6afe_port *q6afe_port_get_from_id(struct device *dev, int id)
+ 	unsigned long flags;
+ 	int cfg_type;
+ 
+-	if (id < 0 || id > AFE_PORT_MAX) {
++	if (id < 0 || id >= AFE_PORT_MAX) {
+ 		dev_err(dev, "AFE port token[%d] invalid!\n", id);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index cf4b40d376e5..c675058b908b 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -37,6 +37,7 @@
+ #define	CHNL_4		(1 << 22)	/* Channels */
+ #define	CHNL_6		(2 << 22)	/* Channels */
+ #define	CHNL_8		(3 << 22)	/* Channels */
++#define DWL_MASK	(7 << 19)	/* Data Word Length mask */
+ #define	DWL_8		(0 << 19)	/* Data Word Length */
+ #define	DWL_16		(1 << 19)	/* Data Word Length */
+ #define	DWL_18		(2 << 19)	/* Data Word Length */
+@@ -353,21 +354,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 	struct rsnd_dai *rdai = rsnd_io_to_rdai(io);
+ 	struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	u32 cr_own;
+-	u32 cr_mode;
+-	u32 wsr;
++	u32 cr_own	= ssi->cr_own;
++	u32 cr_mode	= ssi->cr_mode;
++	u32 wsr		= ssi->wsr;
+ 	int is_tdm;
+ 
+-	if (rsnd_ssi_is_parent(mod, io))
+-		return;
+-
+ 	is_tdm = rsnd_runtime_is_ssi_tdm(io);
+ 
+ 	/*
+ 	 * always use 32bit system word.
+ 	 * see also rsnd_ssi_master_clk_enable()
+ 	 */
+-	cr_own = FORCE | SWL_32;
++	cr_own |= FORCE | SWL_32;
+ 
+ 	if (rdai->bit_clk_inv)
+ 		cr_own |= SCKP;
+@@ -377,9 +375,18 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		cr_own |= SDTA;
+ 	if (rdai->sys_delay)
+ 		cr_own |= DEL;
++
++	/*
++	 * We shouldn't exchange SWSP after running.
++	 * This means, parent needs to care it.
++	 */
++	if (rsnd_ssi_is_parent(mod, io))
++		goto init_end;
++
+ 	if (rsnd_io_is_play(io))
+ 		cr_own |= TRMD;
+ 
++	cr_own &= ~DWL_MASK;
+ 	switch (snd_pcm_format_width(runtime->format)) {
+ 	case 16:
+ 		cr_own |= DWL_16;
+@@ -406,7 +413,7 @@ static void rsnd_ssi_config_init(struct rsnd_mod *mod,
+ 		wsr	|= WS_MODE;
+ 		cr_own	|= CHNL_8;
+ 	}
+-
++init_end:
+ 	ssi->cr_own	= cr_own;
+ 	ssi->cr_mode	= cr_mode;
+ 	ssi->wsr	= wsr;
+@@ -465,15 +472,18 @@ static int rsnd_ssi_quit(struct rsnd_mod *mod,
+ 		return -EIO;
+ 	}
+ 
+-	if (!rsnd_ssi_is_parent(mod, io))
+-		ssi->cr_own	= 0;
+-
+ 	rsnd_ssi_master_clk_stop(mod, io);
+ 
+ 	rsnd_mod_power_off(mod);
+ 
+ 	ssi->usrcnt--;
+ 
++	if (!ssi->usrcnt) {
++		ssi->cr_own	= 0;
++		ssi->cr_mode	= 0;
++		ssi->wsr	= 0;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 229c12349803..a099c3e45504 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4073,6 +4073,13 @@ int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card)
+ 			continue;
+ 		}
+ 
++		/* let users know there is no DAI to link */
++		if (!dai_w->priv) {
++			dev_dbg(card->dev, "dai widget %s has no DAI\n",
++				dai_w->name);
++			continue;
++		}
++
+ 		dai = dai_w->priv;
+ 
+ 		/* ...find all widgets with the same stream and link them */
+diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
+index 1832100d1b27..6d41323be291 100644
+--- a/tools/bpf/bpftool/map_perf_ring.c
++++ b/tools/bpf/bpftool/map_perf_ring.c
+@@ -194,8 +194,10 @@ int do_event_pipe(int argc, char **argv)
+ 	}
+ 
+ 	while (argc) {
+-		if (argc < 2)
++		if (argc < 2) {
+ 			BAD_ARG();
++			goto err_close_map;
++		}
+ 
+ 		if (is_prefix(*argv, "cpu")) {
+ 			char *endptr;
+@@ -221,6 +223,7 @@ int do_event_pipe(int argc, char **argv)
+ 			NEXT_ARG();
+ 		} else {
+ 			BAD_ARG();
++			goto err_close_map;
+ 		}
+ 
+ 		do_all = false;
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index 4f5de8245b32..6631b0b8b4ab 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -385,7 +385,7 @@ static int test_and_print(struct test *t, bool force_skip, int subtest)
+ 	if (!t->subtest.get_nr)
+ 		pr_debug("%s:", t->desc);
+ 	else
+-		pr_debug("%s subtest %d:", t->desc, subtest);
++		pr_debug("%s subtest %d:", t->desc, subtest + 1);
+ 
+ 	switch (err) {
+ 	case TEST_OK:
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+index 3bb4c2ba7b14..197e769c2ed1 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh
+@@ -74,12 +74,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_gretap_stp()
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+index 619b469365be..1c18e332cd4f 100644
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh
+@@ -62,7 +62,7 @@ full_test_span_gre_dir_vlan_ips()
+ 			  "$backward_type" "$ip1" "$ip2"
+ 
+ 	tc filter add dev $h3 ingress pref 77 prot 802.1q \
+-		flower $vlan_match ip_proto 0x2f \
++		flower $vlan_match \
+ 		action pass
+ 	mirror_test v$h1 $ip1 $ip2 $h3 77 10
+ 	tc filter del dev $h3 ingress pref 77
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index 5dbc7a08f4bd..a12274776116 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -79,12 +79,14 @@ test_vlan_match()
+ 
+ test_gretap()
+ {
+-	test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap"
++	test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to gretap"
+ }
+ 
+ test_ip6gretap()
+ {
+-	test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap"
++	test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \
++			"mirror to ip6gretap"
+ }
+ 
+ test_span_gre_forbidden_cpu()


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-29 13:36 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-09-29 13:36 UTC (permalink / raw
  To: gentoo-commits

commit:     4256d26c4916914f83182e196d2b437222f4289f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 29 13:36:23 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 29 13:36:23 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4256d26c

Linux patch 4.18.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1010_linux-4.18.11.patch | 2983 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2987 insertions(+)

diff --git a/0000_README b/0000_README
index a9e2bd7..cccbd63 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-4.18.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.10
 
+Patch:  1010_linux-4.18.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-4.18.11.patch b/1010_linux-4.18.11.patch
new file mode 100644
index 0000000..fe34a23
--- /dev/null
+++ b/1010_linux-4.18.11.patch
@@ -0,0 +1,2983 @@
+diff --git a/Makefile b/Makefile
+index ffab15235ff0..de0ecace693a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index acd11b3bf639..2a356b948720 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index 2071c3d1ae07..dbe8bb980da1 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis128l_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index b5f2a8fd5a71..8bebda2de92f 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -379,7 +379,6 @@ static int __init crypto_aegis256_aesni_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+ 	    !boot_cpu_has(X86_FEATURE_AES) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus1280-sse2-glue.c b/arch/x86/crypto/morus1280-sse2-glue.c
+index 95cf857d2cbb..f40244eaf14d 100644
+--- a/arch/x86/crypto/morus1280-sse2-glue.c
++++ b/arch/x86/crypto/morus1280-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS1280_DECLARE_ALGS(sse2, "morus1280-sse2", 350);
+ static int __init crypto_morus1280_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/crypto/morus640-sse2-glue.c b/arch/x86/crypto/morus640-sse2-glue.c
+index 615fb7bc9a32..9afaf8f8565a 100644
+--- a/arch/x86/crypto/morus640-sse2-glue.c
++++ b/arch/x86/crypto/morus640-sse2-glue.c
+@@ -40,7 +40,6 @@ MORUS640_DECLARE_ALGS(sse2, "morus640-sse2", 400);
+ static int __init crypto_morus640_sse2_module_init(void)
+ {
+ 	if (!boot_cpu_has(X86_FEATURE_XMM2) ||
+-	    !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
+ 	    !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
+ 		return -ENODEV;
+ 
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index 7d00d4ad44d4..95997e6c0696 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -478,7 +478,7 @@ static void xen_convert_regs(const struct xen_pmu_regs *xen_regs,
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ {
+ 	int err, ret = IRQ_NONE;
+-	struct pt_regs regs;
++	struct pt_regs regs = {0};
+ 	const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
+ 	uint8_t xenpmu_flags = get_xenpmu_flags();
+ 
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 984b37647b2f..22a2bc5f25ce 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5358,10 +5358,20 @@ void ata_qc_complete(struct ata_queued_cmd *qc)
+  */
+ int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active)
+ {
++	u64 done_mask, ap_qc_active = ap->qc_active;
+ 	int nr_done = 0;
+-	u64 done_mask;
+ 
+-	done_mask = ap->qc_active ^ qc_active;
++	/*
++	 * If the internal tag is set on ap->qc_active, then we care about
++	 * bit0 on the passed in qc_active mask. Move that bit up to match
++	 * the internal tag.
++	 */
++	if (ap_qc_active & (1ULL << ATA_TAG_INTERNAL)) {
++		qc_active |= (qc_active & 0x01) << ATA_TAG_INTERNAL;
++		qc_active ^= qc_active & 0x01;
++	}
++
++	done_mask = ap_qc_active ^ qc_active;
+ 
+ 	if (unlikely(done_mask & qc_active)) {
+ 		ata_port_err(ap, "illegal qc_active transition (%08llx->%08llx)\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index e950730f1933..5a6e7e1cb351 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -367,12 +367,14 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 				break;
+ 			case CHIP_POLARIS10:
+ 				if (type == CGS_UCODE_ID_SMU) {
+-					if ((adev->pdev->device == 0x67df) &&
+-					    ((adev->pdev->revision == 0xe0) ||
+-					     (adev->pdev->revision == 0xe3) ||
+-					     (adev->pdev->revision == 0xe4) ||
+-					     (adev->pdev->revision == 0xe5) ||
+-					     (adev->pdev->revision == 0xe7) ||
++					if (((adev->pdev->device == 0x67df) &&
++					     ((adev->pdev->revision == 0xe0) ||
++					      (adev->pdev->revision == 0xe3) ||
++					      (adev->pdev->revision == 0xe4) ||
++					      (adev->pdev->revision == 0xe5) ||
++					      (adev->pdev->revision == 0xe7) ||
++					      (adev->pdev->revision == 0xef))) ||
++					    ((adev->pdev->device == 0x6fdf) &&
+ 					     (adev->pdev->revision == 0xef))) {
+ 						info->is_kicker = true;
+ 						strcpy(fw_name, "amdgpu/polaris10_k_smc.bin");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index b0bf2f24da48..dc893076398e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -532,6 +532,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	{0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
++	{0x1002, 0x6FDF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ 	/* Polaris12 */
+ 	{0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ 	{0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index dec0d60921bf..00486c744f24 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -5062,10 +5062,14 @@ void hsw_disable_ips(const struct intel_crtc_state *crtc_state)
+ 		mutex_lock(&dev_priv->pcu_lock);
+ 		WARN_ON(sandybridge_pcode_write(dev_priv, DISPLAY_IPS_CONTROL, 0));
+ 		mutex_unlock(&dev_priv->pcu_lock);
+-		/* wait for pcode to finish disabling IPS, which may take up to 42ms */
++		/*
++		 * Wait for PCODE to finish disabling IPS. The BSpec specified
++		 * 42ms timeout value leads to occasional timeouts so use 100ms
++		 * instead.
++		 */
+ 		if (intel_wait_for_register(dev_priv,
+ 					    IPS_CTL, IPS_ENABLE, 0,
+-					    42))
++					    100))
+ 			DRM_ERROR("Timed out waiting for IPS disable\n");
+ 	} else {
+ 		I915_WRITE(IPS_CTL, 0);
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 9bae4db84cfb..7a12d75e5157 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1098,17 +1098,21 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ 	int ret;
+ 
+ 	if (dpcd >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CTRL, &dpcd);
++		/* Even if we're enabling MST, start with disabling the
++		 * branching unit to clear any sink-side MST topology state
++		 * that wasn't set by us
++		 */
++		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, 0);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dpcd &= ~DP_MST_EN;
+-		if (state)
+-			dpcd |= DP_MST_EN;
+-
+-		ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL, dpcd);
+-		if (ret < 0)
+-			return ret;
++		if (state) {
++			/* Now, start initializing */
++			ret = drm_dp_dpcd_writeb(mstm->mgr.aux, DP_MSTM_CTRL,
++						 DP_MST_EN);
++			if (ret < 0)
++				return ret;
++		}
+ 	}
+ 
+ 	return nvif_mthd(disp, 0, &args, sizeof(args));
+@@ -1117,31 +1121,58 @@ nv50_mstm_enable(struct nv50_mstm *mstm, u8 dpcd, int state)
+ int
+ nv50_mstm_detect(struct nv50_mstm *mstm, u8 dpcd[8], int allow)
+ {
+-	int ret, state = 0;
++	struct drm_dp_aux *aux;
++	int ret;
++	bool old_state, new_state;
++	u8 mstm_ctrl;
+ 
+ 	if (!mstm)
+ 		return 0;
+ 
+-	if (dpcd[0] >= 0x12) {
+-		ret = drm_dp_dpcd_readb(mstm->mgr.aux, DP_MSTM_CAP, &dpcd[1]);
++	mutex_lock(&mstm->mgr.lock);
++
++	old_state = mstm->mgr.mst_state;
++	new_state = old_state;
++	aux = mstm->mgr.aux;
++
++	if (old_state) {
++		/* Just check that the MST hub is still as we expect it */
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CTRL, &mstm_ctrl);
++		if (ret < 0 || !(mstm_ctrl & DP_MST_EN)) {
++			DRM_DEBUG_KMS("Hub gone, disabling MST topology\n");
++			new_state = false;
++		}
++	} else if (dpcd[0] >= 0x12) {
++		ret = drm_dp_dpcd_readb(aux, DP_MSTM_CAP, &dpcd[1]);
+ 		if (ret < 0)
+-			return ret;
++			goto probe_error;
+ 
+ 		if (!(dpcd[1] & DP_MST_CAP))
+ 			dpcd[0] = 0x11;
+ 		else
+-			state = allow;
++			new_state = allow;
++	}
++
++	if (new_state == old_state) {
++		mutex_unlock(&mstm->mgr.lock);
++		return new_state;
+ 	}
+ 
+-	ret = nv50_mstm_enable(mstm, dpcd[0], state);
++	ret = nv50_mstm_enable(mstm, dpcd[0], new_state);
+ 	if (ret)
+-		return ret;
++		goto probe_error;
++
++	mutex_unlock(&mstm->mgr.lock);
+ 
+-	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, state);
++	ret = drm_dp_mst_topology_mgr_set_mst(&mstm->mgr, new_state);
+ 	if (ret)
+ 		return nv50_mstm_enable(mstm, dpcd[0], 0);
+ 
+-	return mstm->mgr.mst_state;
++	return new_state;
++
++probe_error:
++	mutex_unlock(&mstm->mgr.lock);
++	return ret;
+ }
+ 
+ static void
+@@ -2049,7 +2080,7 @@ nv50_disp_atomic_state_alloc(struct drm_device *dev)
+ static const struct drm_mode_config_funcs
+ nv50_disp_func = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ 	.atomic_check = nv50_disp_atomic_check,
+ 	.atomic_commit = nv50_disp_atomic_commit,
+ 	.atomic_state_alloc = nv50_disp_atomic_state_alloc,
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index af68eae4c626..de4ab310ef8e 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -570,12 +570,16 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+ 		nv_connector->edid = NULL;
+ 	}
+ 
+-	/* Outputs are only polled while runtime active, so acquiring a
+-	 * runtime PM ref here is unnecessary (and would deadlock upon
+-	 * runtime suspend because it waits for polling to finish).
++	/* Outputs are only polled while runtime active, so resuming the
++	 * device here is unnecessary (and would deadlock upon runtime suspend
++	 * because it waits for polling to finish). We do however, want to
++	 * prevent the autosuspend timer from elapsing during this operation
++	 * if possible.
+ 	 */
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		ret = pm_runtime_get_sync(connector->dev->dev);
++	if (drm_kms_helper_is_poll_worker()) {
++		pm_runtime_get_noresume(dev->dev);
++	} else {
++		ret = pm_runtime_get_sync(dev->dev);
+ 		if (ret < 0 && ret != -EACCES)
+ 			return conn_status;
+ 	}
+@@ -653,10 +657,8 @@ detect_analog:
+ 
+  out:
+ 
+-	if (!drm_kms_helper_is_poll_worker()) {
+-		pm_runtime_mark_last_busy(connector->dev->dev);
+-		pm_runtime_put_autosuspend(connector->dev->dev);
+-	}
++	pm_runtime_mark_last_busy(dev->dev);
++	pm_runtime_put_autosuspend(dev->dev);
+ 
+ 	return conn_status;
+ }
+@@ -1120,6 +1122,26 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 	const struct nvif_notify_conn_rep_v0 *rep = notify->data;
+ 	const char *name = connector->name;
+ 	struct nouveau_encoder *nv_encoder;
++	int ret;
++
++	ret = pm_runtime_get(drm->dev->dev);
++	if (ret == 0) {
++		/* We can't block here if there's a pending PM request
++		 * running, as we'll deadlock nouveau_display_fini() when it
++		 * calls nvif_put() on our nvif_notify struct. So, simply
++		 * defer the hotplug event until the device finishes resuming
++		 */
++		NV_DEBUG(drm, "Deferring HPD on %s until runtime resume\n",
++			 name);
++		schedule_work(&drm->hpd_work);
++
++		pm_runtime_put_noidle(drm->dev->dev);
++		return NVIF_NOTIFY_KEEP;
++	} else if (ret != 1 && ret != -EACCES) {
++		NV_WARN(drm, "HPD on %s dropped due to RPM failure: %d\n",
++			name, ret);
++		return NVIF_NOTIFY_DROP;
++	}
+ 
+ 	if (rep->mask & NVIF_NOTIFY_CONN_V0_IRQ) {
+ 		NV_DEBUG(drm, "service %s\n", name);
+@@ -1137,6 +1159,8 @@ nouveau_connector_hotplug(struct nvif_notify *notify)
+ 		drm_helper_hpd_irq_event(connector->dev);
+ 	}
+ 
++	pm_runtime_mark_last_busy(drm->dev->dev);
++	pm_runtime_put_autosuspend(drm->dev->dev);
+ 	return NVIF_NOTIFY_KEEP;
+ }
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
+index ec7861457b84..c5b3cc17965c 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.c
++++ b/drivers/gpu/drm/nouveau/nouveau_display.c
+@@ -293,7 +293,7 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
+ 
+ static const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
+ 	.fb_create = nouveau_user_framebuffer_create,
+-	.output_poll_changed = drm_fb_helper_output_poll_changed,
++	.output_poll_changed = nouveau_fbcon_output_poll_changed,
+ };
+ 
+ 
+@@ -355,8 +355,6 @@ nouveau_display_hpd_work(struct work_struct *work)
+ 	pm_runtime_get_sync(drm->dev->dev);
+ 
+ 	drm_helper_hpd_irq_event(drm->dev);
+-	/* enable polling for external displays */
+-	drm_kms_helper_poll_enable(drm->dev);
+ 
+ 	pm_runtime_mark_last_busy(drm->dev->dev);
+ 	pm_runtime_put_sync(drm->dev->dev);
+@@ -379,15 +377,29 @@ nouveau_display_acpi_ntfy(struct notifier_block *nb, unsigned long val,
+ {
+ 	struct nouveau_drm *drm = container_of(nb, typeof(*drm), acpi_nb);
+ 	struct acpi_bus_event *info = data;
++	int ret;
+ 
+ 	if (!strcmp(info->device_class, ACPI_VIDEO_CLASS)) {
+ 		if (info->type == ACPI_VIDEO_NOTIFY_PROBE) {
+-			/*
+-			 * This may be the only indication we receive of a
+-			 * connector hotplug on a runtime suspended GPU,
+-			 * schedule hpd_work to check.
+-			 */
+-			schedule_work(&drm->hpd_work);
++			ret = pm_runtime_get(drm->dev->dev);
++			if (ret == 1 || ret == -EACCES) {
++				/* If the GPU is already awake, or in a state
++				 * where we can't wake it up, it can handle
++				 * it's own hotplug events.
++				 */
++				pm_runtime_put_autosuspend(drm->dev->dev);
++			} else if (ret == 0) {
++				/* This may be the only indication we receive
++				 * of a connector hotplug on a runtime
++				 * suspended GPU, schedule hpd_work to check.
++				 */
++				NV_DEBUG(drm, "ACPI requested connector reprobe\n");
++				schedule_work(&drm->hpd_work);
++				pm_runtime_put_noidle(drm->dev->dev);
++			} else {
++				NV_WARN(drm, "Dropped ACPI reprobe event due to RPM error: %d\n",
++					ret);
++			}
+ 
+ 			/* acpi-video should not generate keypresses for this */
+ 			return NOTIFY_BAD;
+@@ -411,6 +423,11 @@ nouveau_display_init(struct drm_device *dev)
+ 	if (ret)
+ 		return ret;
+ 
++	/* enable connector detection and polling for connectors without HPD
++	 * support
++	 */
++	drm_kms_helper_poll_enable(dev);
++
+ 	/* enable hotplug interrupts */
+ 	drm_connector_list_iter_begin(dev, &conn_iter);
+ 	nouveau_for_each_non_mst_connector_iter(connector, &conn_iter) {
+@@ -425,7 +442,7 @@ nouveau_display_init(struct drm_device *dev)
+ }
+ 
+ void
+-nouveau_display_fini(struct drm_device *dev, bool suspend)
++nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime)
+ {
+ 	struct nouveau_display *disp = nouveau_display(dev);
+ 	struct nouveau_drm *drm = nouveau_drm(dev);
+@@ -450,6 +467,9 @@ nouveau_display_fini(struct drm_device *dev, bool suspend)
+ 	}
+ 	drm_connector_list_iter_end(&conn_iter);
+ 
++	if (!runtime)
++		cancel_work_sync(&drm->hpd_work);
++
+ 	drm_kms_helper_poll_disable(dev);
+ 	disp->fini(dev);
+ }
+@@ -618,11 +638,11 @@ nouveau_display_suspend(struct drm_device *dev, bool runtime)
+ 			}
+ 		}
+ 
+-		nouveau_display_fini(dev, true);
++		nouveau_display_fini(dev, true, runtime);
+ 		return 0;
+ 	}
+ 
+-	nouveau_display_fini(dev, true);
++	nouveau_display_fini(dev, true, runtime);
+ 
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 		struct nouveau_framebuffer *nouveau_fb;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_display.h b/drivers/gpu/drm/nouveau/nouveau_display.h
+index 54aa7c3fa42d..ff92b54ce448 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_display.h
++++ b/drivers/gpu/drm/nouveau/nouveau_display.h
+@@ -62,7 +62,7 @@ nouveau_display(struct drm_device *dev)
+ int  nouveau_display_create(struct drm_device *dev);
+ void nouveau_display_destroy(struct drm_device *dev);
+ int  nouveau_display_init(struct drm_device *dev);
+-void nouveau_display_fini(struct drm_device *dev, bool suspend);
++void nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime);
+ int  nouveau_display_suspend(struct drm_device *dev, bool runtime);
+ void nouveau_display_resume(struct drm_device *dev, bool runtime);
+ int  nouveau_display_vblank_enable(struct drm_device *, unsigned int);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index c7ec86d6c3c9..c2ebe5da34d0 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -629,7 +629,7 @@ nouveau_drm_unload(struct drm_device *dev)
+ 	nouveau_debugfs_fini(drm);
+ 
+ 	if (dev->mode_config.num_crtc)
+-		nouveau_display_fini(dev, false);
++		nouveau_display_fini(dev, false, false);
+ 	nouveau_display_destroy(dev);
+ 
+ 	nouveau_bios_takedown(dev);
+@@ -835,7 +835,6 @@ nouveau_pmops_runtime_suspend(struct device *dev)
+ 		return -EBUSY;
+ 	}
+ 
+-	drm_kms_helper_poll_disable(drm_dev);
+ 	nouveau_switcheroo_optimus_dsm();
+ 	ret = nouveau_do_suspend(drm_dev, true);
+ 	pci_save_state(pdev);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 85c1f10bc2b6..8cf966690963 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -466,6 +466,7 @@ nouveau_fbcon_set_suspend_work(struct work_struct *work)
+ 	console_unlock();
+ 
+ 	if (state == FBINFO_STATE_RUNNING) {
++		nouveau_fbcon_hotplug_resume(drm->fbcon);
+ 		pm_runtime_mark_last_busy(drm->dev->dev);
+ 		pm_runtime_put_sync(drm->dev->dev);
+ 	}
+@@ -487,6 +488,61 @@ nouveau_fbcon_set_suspend(struct drm_device *dev, int state)
+ 	schedule_work(&drm->fbcon_work);
+ }
+ 
++void
++nouveau_fbcon_output_poll_changed(struct drm_device *dev)
++{
++	struct nouveau_drm *drm = nouveau_drm(dev);
++	struct nouveau_fbdev *fbcon = drm->fbcon;
++	int ret;
++
++	if (!fbcon)
++		return;
++
++	mutex_lock(&fbcon->hotplug_lock);
++
++	ret = pm_runtime_get(dev->dev);
++	if (ret == 1 || ret == -EACCES) {
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++
++		pm_runtime_mark_last_busy(dev->dev);
++		pm_runtime_put_autosuspend(dev->dev);
++	} else if (ret == 0) {
++		/* If the GPU was already in the process of suspending before
++		 * this event happened, then we can't block here as we'll
++		 * deadlock the runtime pmops since they wait for us to
++		 * finish. So, just defer this event for when we runtime
++		 * resume again. It will be handled by fbcon_work.
++		 */
++		NV_DEBUG(drm, "fbcon HPD event deferred until runtime resume\n");
++		fbcon->hotplug_waiting = true;
++		pm_runtime_put_noidle(drm->dev->dev);
++	} else {
++		DRM_WARN("fbcon HPD event lost due to RPM failure: %d\n",
++			 ret);
++	}
++
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
++void
++nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon)
++{
++	struct nouveau_drm *drm;
++
++	if (!fbcon)
++		return;
++	drm = nouveau_drm(fbcon->helper.dev);
++
++	mutex_lock(&fbcon->hotplug_lock);
++	if (fbcon->hotplug_waiting) {
++		fbcon->hotplug_waiting = false;
++
++		NV_DEBUG(drm, "Handling deferred fbcon HPD events\n");
++		drm_fb_helper_hotplug_event(&fbcon->helper);
++	}
++	mutex_unlock(&fbcon->hotplug_lock);
++}
++
+ int
+ nouveau_fbcon_init(struct drm_device *dev)
+ {
+@@ -505,6 +561,7 @@ nouveau_fbcon_init(struct drm_device *dev)
+ 
+ 	drm->fbcon = fbcon;
+ 	INIT_WORK(&drm->fbcon_work, nouveau_fbcon_set_suspend_work);
++	mutex_init(&fbcon->hotplug_lock);
+ 
+ 	drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+index a6f192ea3fa6..db9d52047ef8 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+@@ -41,6 +41,9 @@ struct nouveau_fbdev {
+ 	struct nvif_object gdi;
+ 	struct nvif_object blit;
+ 	struct nvif_object twod;
++
++	struct mutex hotplug_lock;
++	bool hotplug_waiting;
+ };
+ 
+ void nouveau_fbcon_restore(void);
+@@ -68,6 +71,8 @@ void nouveau_fbcon_set_suspend(struct drm_device *dev, int state);
+ void nouveau_fbcon_accel_save_disable(struct drm_device *dev);
+ void nouveau_fbcon_accel_restore(struct drm_device *dev);
+ 
++void nouveau_fbcon_output_poll_changed(struct drm_device *dev);
++void nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon);
+ extern int nouveau_nofbaccel;
+ 
+ #endif /* __NV50_FBCON_H__ */
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index 8746eeeec44d..491f1892b50e 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -432,9 +432,11 @@ static void udl_fbdev_destroy(struct drm_device *dev,
+ {
+ 	drm_fb_helper_unregister_fbi(&ufbdev->helper);
+ 	drm_fb_helper_fini(&ufbdev->helper);
+-	drm_framebuffer_unregister_private(&ufbdev->ufb.base);
+-	drm_framebuffer_cleanup(&ufbdev->ufb.base);
+-	drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	if (ufbdev->ufb.obj) {
++		drm_framebuffer_unregister_private(&ufbdev->ufb.base);
++		drm_framebuffer_cleanup(&ufbdev->ufb.base);
++		drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base);
++	}
+ }
+ 
+ int udl_fbdev_init(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index a951ec75d01f..cf5aea1d6488 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -297,6 +297,9 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 	vc4_state->y_scaling[0] = vc4_get_scaling_mode(vc4_state->src_h[0],
+ 						       vc4_state->crtc_h);
+ 
++	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
++			       vc4_state->y_scaling[0] == VC4_SCALING_NONE);
++
+ 	if (num_planes > 1) {
+ 		vc4_state->is_yuv = true;
+ 
+@@ -312,24 +315,17 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state)
+ 			vc4_get_scaling_mode(vc4_state->src_h[1],
+ 					     vc4_state->crtc_h);
+ 
+-		/* YUV conversion requires that scaling be enabled,
+-		 * even on a plane that's otherwise 1:1.  Choose TPZ
+-		 * for simplicity.
++		/* YUV conversion requires that horizontal scaling be enabled,
++		 * even on a plane that's otherwise 1:1. Looks like only PPF
++		 * works in that case, so let's pick that one.
+ 		 */
+-		if (vc4_state->x_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
+-		if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
+-			vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
++		if (vc4_state->is_unity)
++			vc4_state->x_scaling[0] = VC4_SCALING_PPF;
+ 	} else {
+ 		vc4_state->x_scaling[1] = VC4_SCALING_NONE;
+ 		vc4_state->y_scaling[1] = VC4_SCALING_NONE;
+ 	}
+ 
+-	vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[0] == VC4_SCALING_NONE &&
+-			       vc4_state->x_scaling[1] == VC4_SCALING_NONE &&
+-			       vc4_state->y_scaling[1] == VC4_SCALING_NONE);
+-
+ 	/* No configuring scaling on the cursor plane, since it gets
+ 	   non-vblank-synced updates, and scaling requires requires
+ 	   LBM changes which have to be vblank-synced.
+@@ -621,7 +617,10 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ 		vc4_dlist_write(vc4_state, SCALER_CSC2_ITR_R_601_5);
+ 	}
+ 
+-	if (!vc4_state->is_unity) {
++	if (vc4_state->x_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->x_scaling[1] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
++	    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+ 		/* LBM Base Address. */
+ 		if (vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
+ 		    vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index aef53305f1c3..d97581ae3bf9 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -1388,6 +1388,12 @@ static void flush_qp(struct c4iw_qp *qhp)
+ 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
+ 
+ 	if (qhp->ibqp.uobject) {
++
++		/* for user qps, qhp->wq.flushed is protected by qhp->mutex */
++		if (qhp->wq.flushed)
++			return;
++
++		qhp->wq.flushed = 1;
+ 		t4_set_wq_in_error(&qhp->wq);
+ 		t4_set_cq_in_error(&rchp->cq);
+ 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 5f8b583c6e41..f74166aa9a0d 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -45,6 +45,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/vmw_vmci_defs.h>
+ #include <linux/vmw_vmci_api.h>
++#include <linux/io.h>
+ #include <asm/hypervisor.h>
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
+index e84563d2067f..3463cd94a7f6 100644
+--- a/drivers/mtd/devices/m25p80.c
++++ b/drivers/mtd/devices/m25p80.c
+@@ -41,13 +41,23 @@ static int m25p80_read_reg(struct spi_nor *nor, u8 code, u8 *val, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_IN(len, val, 1));
++					  SPI_MEM_OP_DATA_IN(len, NULL, 1));
++	void *scratchbuf;
+ 	int ret;
+ 
++	scratchbuf = kmalloc(len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.in = scratchbuf;
+ 	ret = spi_mem_exec_op(flash->spimem, &op);
+ 	if (ret < 0)
+ 		dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret,
+ 			code);
++	else
++		memcpy(val, scratchbuf, len);
++
++	kfree(scratchbuf);
+ 
+ 	return ret;
+ }
+@@ -58,9 +68,19 @@ static int m25p80_write_reg(struct spi_nor *nor, u8 opcode, u8 *buf, int len)
+ 	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1),
+ 					  SPI_MEM_OP_NO_ADDR,
+ 					  SPI_MEM_OP_NO_DUMMY,
+-					  SPI_MEM_OP_DATA_OUT(len, buf, 1));
++					  SPI_MEM_OP_DATA_OUT(len, NULL, 1));
++	void *scratchbuf;
++	int ret;
+ 
+-	return spi_mem_exec_op(flash->spimem, &op);
++	scratchbuf = kmemdup(buf, len, GFP_KERNEL);
++	if (!scratchbuf)
++		return -ENOMEM;
++
++	op.data.buf.out = scratchbuf;
++	ret = spi_mem_exec_op(flash->spimem, &op);
++	kfree(scratchbuf);
++
++	return ret;
+ }
+ 
+ static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
+diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
+index 2a302a1d1430..c502075e5721 100644
+--- a/drivers/mtd/nand/raw/denali.c
++++ b/drivers/mtd/nand/raw/denali.c
+@@ -604,6 +604,12 @@ static int denali_dma_xfer(struct denali_nand_info *denali, void *buf,
+ 	}
+ 
+ 	iowrite32(DMA_ENABLE__FLAG, denali->reg + DMA_ENABLE);
++	/*
++	 * The ->setup_dma() hook kicks DMA by using the data/command
++	 * interface, which belongs to a different AXI port from the
++	 * register interface.  Read back the register to avoid a race.
++	 */
++	ioread32(denali->reg + DMA_ENABLE);
+ 
+ 	denali_reset_irq(denali);
+ 	denali->setup_dma(denali, dma_addr, page, write);
+diff --git a/drivers/net/appletalk/ipddp.c b/drivers/net/appletalk/ipddp.c
+index 9375cef22420..3d27616d9c85 100644
+--- a/drivers/net/appletalk/ipddp.c
++++ b/drivers/net/appletalk/ipddp.c
+@@ -283,8 +283,12 @@ static int ipddp_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+                 case SIOCFINDIPDDPRT:
+ 			spin_lock_bh(&ipddp_route_lock);
+ 			rp = __ipddp_find_route(&rcp);
+-			if (rp)
+-				memcpy(&rcp2, rp, sizeof(rcp2));
++			if (rp) {
++				memset(&rcp2, 0, sizeof(rcp2));
++				rcp2.ip    = rp->ip;
++				rcp2.at    = rp->at;
++				rcp2.flags = rp->flags;
++			}
+ 			spin_unlock_bh(&ipddp_route_lock);
+ 
+ 			if (rp) {
+diff --git a/drivers/net/dsa/mv88e6xxx/global1.h b/drivers/net/dsa/mv88e6xxx/global1.h
+index 7c791c1da4b9..bef01331266f 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1.h
++++ b/drivers/net/dsa/mv88e6xxx/global1.h
+@@ -128,7 +128,7 @@
+ #define MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION		0x7000
+ #define MV88E6XXX_G1_ATU_OP_AGE_OUT_VIOLATION		BIT(7)
+ #define MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION		BIT(6)
+-#define MV88E6XXX_G1_ATU_OP_MISS_VIOLTATION		BIT(5)
++#define MV88E6XXX_G1_ATU_OP_MISS_VIOLATION		BIT(5)
+ #define MV88E6XXX_G1_ATU_OP_FULL_VIOLATION		BIT(4)
+ 
+ /* Offset 0x0C: ATU Data Register */
+diff --git a/drivers/net/dsa/mv88e6xxx/global1_atu.c b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+index 307410898fc9..5200e4bdce93 100644
+--- a/drivers/net/dsa/mv88e6xxx/global1_atu.c
++++ b/drivers/net/dsa/mv88e6xxx/global1_atu.c
+@@ -349,7 +349,7 @@ static irqreturn_t mv88e6xxx_g1_atu_prob_irq_thread_fn(int irq, void *dev_id)
+ 		chip->ports[entry.portvec].atu_member_violation++;
+ 	}
+ 
+-	if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) {
++	if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) {
+ 		dev_err_ratelimited(chip->dev,
+ 				    "ATU miss violation for %pM portvec %x\n",
+ 				    entry.mac, entry.portvec);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4fdf3d33aa59..80b05597c5fe 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -7888,7 +7888,7 @@ static int bnxt_change_mac_addr(struct net_device *dev, void *p)
+ 	if (ether_addr_equal(addr->sa_data, dev->dev_addr))
+ 		return 0;
+ 
+-	rc = bnxt_approve_mac(bp, addr->sa_data);
++	rc = bnxt_approve_mac(bp, addr->sa_data, true);
+ 	if (rc)
+ 		return rc;
+ 
+@@ -8683,14 +8683,19 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
+ 	} else {
+ #ifdef CONFIG_BNXT_SRIOV
+ 		struct bnxt_vf_info *vf = &bp->vf;
++		bool strict_approval = true;
+ 
+ 		if (is_valid_ether_addr(vf->mac_addr)) {
+ 			/* overwrite netdev dev_addr with admin VF MAC */
+ 			memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
++			/* Older PF driver or firmware may not approve this
++			 * correctly.
++			 */
++			strict_approval = false;
+ 		} else {
+ 			eth_hw_addr_random(bp->dev);
+ 		}
+-		rc = bnxt_approve_mac(bp, bp->dev->dev_addr);
++		rc = bnxt_approve_mac(bp, bp->dev->dev_addr, strict_approval);
+ #endif
+ 	}
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index 2c77004a022b..24d16d3d33a1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -1095,7 +1095,7 @@ update_vf_mac_exit:
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	struct hwrm_func_vf_cfg_input req = {0};
+ 	int rc = 0;
+@@ -1113,12 +1113,13 @@ int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
+ 	memcpy(req.dflt_mac_addr, mac, ETH_ALEN);
+ 	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ mac_done:
+-	if (rc) {
++	if (rc && strict) {
+ 		rc = -EADDRNOTAVAIL;
+ 		netdev_warn(bp->dev, "VF MAC address %pM not approved by the PF\n",
+ 			    mac);
++		return rc;
+ 	}
+-	return rc;
++	return 0;
+ }
+ #else
+ 
+@@ -1135,7 +1136,7 @@ void bnxt_update_vf_mac(struct bnxt *bp)
+ {
+ }
+ 
+-int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
++int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+index e9b20cd19881..2eed9eda1195 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+@@ -39,5 +39,5 @@ int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs);
+ void bnxt_sriov_disable(struct bnxt *);
+ void bnxt_hwrm_exec_fwd_req(struct bnxt *);
+ void bnxt_update_vf_mac(struct bnxt *);
+-int bnxt_approve_mac(struct bnxt *, u8 *);
++int bnxt_approve_mac(struct bnxt *, u8 *, bool);
+ #endif
+diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c
+index c8c7ad2eff77..9b5a68b65432 100644
+--- a/drivers/net/ethernet/hp/hp100.c
++++ b/drivers/net/ethernet/hp/hp100.c
+@@ -2634,7 +2634,7 @@ static int hp100_login_to_vg_hub(struct net_device *dev, u_short force_relogin)
+ 		/* Wait for link to drop */
+ 		time = jiffies + (HZ / 10);
+ 		do {
+-			if (~(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
++			if (!(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
+ 				break;
+ 			if (!in_interrupt())
+ 				schedule_timeout_interruptible(1);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index f7f08e3fa761..661fa5a38df2 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -61,6 +61,8 @@ static struct {
+  */
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 			     const struct phylink_link_state *state);
++static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
++			      phy_interface_t interface, struct phy_device *phy);
+ 
+ /* Queue modes */
+ #define MVPP2_QDIST_SINGLE_MODE	0
+@@ -3142,6 +3144,7 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		mvpp22_mode_reconfigure(port);
+ 
+ 	if (port->phylink) {
++		netif_carrier_off(port->dev);
+ 		phylink_start(port->phylink);
+ 	} else {
+ 		/* Phylink isn't used as of now for ACPI, so the MAC has to be
+@@ -3150,9 +3153,10 @@ static void mvpp2_start_dev(struct mvpp2_port *port)
+ 		 */
+ 		struct phylink_link_state state = {
+ 			.interface = port->phy_interface,
+-			.link = 1,
+ 		};
+ 		mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state);
++		mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface,
++				  NULL);
+ 	}
+ 
+ 	netif_tx_start_all_queues(port->dev);
+@@ -4389,10 +4393,6 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 		return;
+ 	}
+ 
+-	netif_tx_stop_all_queues(port->dev);
+-	if (!port->has_phy)
+-		netif_carrier_off(port->dev);
+-
+ 	/* Make sure the port is disabled when reconfiguring the mode */
+ 	mvpp2_port_disable(port);
+ 
+@@ -4417,16 +4417,7 @@ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+ 	if (port->priv->hw_version == MVPP21 && port->flags & MVPP2_F_LOOPBACK)
+ 		mvpp2_port_loopback_set(port, state);
+ 
+-	/* If the port already was up, make sure it's still in the same state */
+-	if (state->link || !port->has_phy) {
+-		mvpp2_port_enable(port);
+-
+-		mvpp2_egress_enable(port);
+-		mvpp2_ingress_enable(port);
+-		if (!port->has_phy)
+-			netif_carrier_on(dev);
+-		netif_tx_wake_all_queues(dev);
+-	}
++	mvpp2_port_enable(port);
+ }
+ 
+ static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 6d74cde68163..c0fc30a1f600 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2172,17 +2172,15 @@ static int netvsc_remove(struct hv_device *dev)
+ 
+ 	cancel_delayed_work_sync(&ndev_ctx->dwork);
+ 
+-	rcu_read_lock();
+-	nvdev = rcu_dereference(ndev_ctx->nvdev);
+-
+-	if  (nvdev)
++	rtnl_lock();
++	nvdev = rtnl_dereference(ndev_ctx->nvdev);
++	if (nvdev)
+ 		cancel_work_sync(&nvdev->subchan_work);
+ 
+ 	/*
+ 	 * Call to the vsc driver to let it know that the device is being
+ 	 * removed. Also blocks mtu and channel changes.
+ 	 */
+-	rtnl_lock();
+ 	vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
+ 	if (vf_netdev)
+ 		netvsc_unregister_vf(vf_netdev);
+@@ -2194,7 +2192,6 @@ static int netvsc_remove(struct hv_device *dev)
+ 	list_del(&ndev_ctx->list);
+ 
+ 	rtnl_unlock();
+-	rcu_read_unlock();
+ 
+ 	hv_set_drvdata(dev, NULL);
+ 
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index ce61231e96ea..62dc564b251d 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -429,6 +429,9 @@ static int pppoe_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb)
+ 		goto out;
+ 
++	if (skb_mac_header_len(skb) < ETH_HLEN)
++		goto drop;
++
+ 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
+ 		goto drop;
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index cb0cc30c3d6a..1e95d37c6e27 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1206,13 +1206,13 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1199, 0x9061, 8)},	/* Sierra Wireless Modem */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 8)},	/* Sierra Wireless EM7305 */
+ 	{QMI_FIXED_INTF(0x1199, 0x9063, 10)},	/* Sierra Wireless EM7305 */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9071, 10)},	/* Sierra Wireless MC74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9079, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x907b, 10)},	/* Sierra Wireless EM74xx */
+-	{QMI_FIXED_INTF(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 8)},	/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9071, 10)},/* Sierra Wireless MC74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9079, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 8)},	/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x907b, 10)},/* Sierra Wireless EM74xx */
++	{QMI_QUIRK_SET_DTR(0x1199, 0x9091, 8)},	/* Sierra Wireless EM7565 */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x011e, 4)},	/* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */
+ 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index c2b6aa1d485f..f49c2a60a6eb 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -907,7 +907,11 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
+ 			BUG_ON(pull_to <= skb_headlen(skb));
+ 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
+ 		}
+-		BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS);
++		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
++			queue->rx.rsp_cons = ++cons;
++			kfree_skb(nskb);
++			return ~0U;
++		}
+ 
+ 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ 				skb_frag_page(nfrag),
+@@ -1044,6 +1048,8 @@ err:
+ 		skb->len += rx->status;
+ 
+ 		i = xennet_fill_frags(queue, skb, &tmpq);
++		if (unlikely(i == ~0U))
++			goto err;
+ 
+ 		if (rx->flags & XEN_NETRXF_csum_blank)
+ 			skb->ip_summed = CHECKSUM_PARTIAL;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index f439de848658..d1e2d175c10b 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4235,11 +4235,6 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
+  *
+  * 0x9d10-0x9d1b PCI Express Root port #{1-12}
+  *
+- * The 300 series chipset suffers from the same bug so include those root
+- * ports here as well.
+- *
+- * 0xa32c-0xa343 PCI Express Root port #{0-24}
+- *
+  * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
+  * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
+  * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
+@@ -4257,7 +4252,6 @@ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ 	case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
+ 	case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
+ 	case 0x9d10 ... 0x9d1b: /* 7th & 8th Gen Mobile */
+-	case 0xa32c ... 0xa343:				/* 300 series */
+ 		return true;
+ 	}
+ 
+diff --git a/drivers/platform/x86/alienware-wmi.c b/drivers/platform/x86/alienware-wmi.c
+index d975462a4c57..f10af5c383c5 100644
+--- a/drivers/platform/x86/alienware-wmi.c
++++ b/drivers/platform/x86/alienware-wmi.c
+@@ -536,6 +536,7 @@ static acpi_status alienware_wmax_command(struct wmax_basic_args *in_args,
+ 		if (obj && obj->type == ACPI_TYPE_INTEGER)
+ 			*out_data = (u32) obj->integer.value;
+ 	}
++	kfree(output.pointer);
+ 	return status;
+ 
+ }
+diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c
+index fbefedb1c172..548abba2c1e9 100644
+--- a/drivers/platform/x86/dell-smbios-wmi.c
++++ b/drivers/platform/x86/dell-smbios-wmi.c
+@@ -78,6 +78,7 @@ static int run_smbios_call(struct wmi_device *wdev)
+ 	dev_dbg(&wdev->dev, "result: [%08x,%08x,%08x,%08x]\n",
+ 		priv->buf->std.output[0], priv->buf->std.output[1],
+ 		priv->buf->std.output[2], priv->buf->std.output[3]);
++	kfree(output.pointer);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index 8122807db380..b714a543a91d 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,7 +15,6 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
+-#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -450,10 +449,6 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
+-	err = dev_pm_domain_attach(dev, true);
+-	if (err)
+-		goto out;
+-
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -495,8 +490,6 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
+-	dev_pm_domain_detach(dev, true);
+-
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index ec395a6baf9c..9da0bc5a036c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2143,8 +2143,17 @@ int spi_register_controller(struct spi_controller *ctlr)
+ 	 */
+ 	if (ctlr->num_chipselect == 0)
+ 		return -EINVAL;
+-	/* allocate dynamic bus number using Linux idr */
+-	if ((ctlr->bus_num < 0) && ctlr->dev.of_node) {
++	if (ctlr->bus_num >= 0) {
++		/* devices with a fixed bus num must check-in with the num */
++		mutex_lock(&board_lock);
++		id = idr_alloc(&spi_master_idr, ctlr, ctlr->bus_num,
++			ctlr->bus_num + 1, GFP_KERNEL);
++		mutex_unlock(&board_lock);
++		if (WARN(id < 0, "couldn't get idr"))
++			return id == -ENOSPC ? -EBUSY : id;
++		ctlr->bus_num = id;
++	} else if (ctlr->dev.of_node) {
++		/* allocate dynamic bus number using Linux idr */
+ 		id = of_alias_get_id(ctlr->dev.of_node, "spi");
+ 		if (id >= 0) {
+ 			ctlr->bus_num = id;
+diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
+index 9518ffd8b8ba..4e680d753941 100644
+--- a/drivers/target/iscsi/iscsi_target_auth.c
++++ b/drivers/target/iscsi/iscsi_target_auth.c
+@@ -26,27 +26,6 @@
+ #include "iscsi_target_nego.h"
+ #include "iscsi_target_auth.h"
+ 
+-static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len)
+-{
+-	int j = DIV_ROUND_UP(len, 2), rc;
+-
+-	rc = hex2bin(dst, src, j);
+-	if (rc < 0)
+-		pr_debug("CHAP string contains non hex digit symbols\n");
+-
+-	dst[j] = '\0';
+-	return j;
+-}
+-
+-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
+-{
+-	int i;
+-
+-	for (i = 0; i < src_len; i++) {
+-		sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
+-	}
+-}
+-
+ static int chap_gen_challenge(
+ 	struct iscsi_conn *conn,
+ 	int caller,
+@@ -62,7 +41,7 @@ static int chap_gen_challenge(
+ 	ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
+ 	if (unlikely(ret))
+ 		return ret;
+-	chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
++	bin2hex(challenge_asciihex, chap->challenge,
+ 				CHAP_CHALLENGE_LENGTH);
+ 	/*
+ 	 * Set CHAP_C, and copy the generated challenge into c_str.
+@@ -248,9 +227,16 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_R.\n");
+ 		goto out;
+ 	}
++	if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
++	if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) {
++		pr_err("Malformed CHAP_R\n");
++		goto out;
++	}
+ 
+ 	pr_debug("[server] Got CHAP_R=%s\n", chap_r);
+-	chap_string_to_hex(client_digest, chap_r, strlen(chap_r));
+ 
+ 	tfm = crypto_alloc_shash("md5", 0, 0);
+ 	if (IS_ERR(tfm)) {
+@@ -294,7 +280,7 @@ static int chap_server_compute_md5(
+ 		goto out;
+ 	}
+ 
+-	chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
+ 	pr_debug("[server] MD5 Server Digest: %s\n", response);
+ 
+ 	if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
+@@ -349,9 +335,7 @@ static int chap_server_compute_md5(
+ 		pr_err("Could not find CHAP_C.\n");
+ 		goto out;
+ 	}
+-	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+-	challenge_len = chap_string_to_hex(challenge_binhex, challenge,
+-				strlen(challenge));
++	challenge_len = DIV_ROUND_UP(strlen(challenge), 2);
+ 	if (!challenge_len) {
+ 		pr_err("Unable to convert incoming challenge\n");
+ 		goto out;
+@@ -360,6 +344,11 @@ static int chap_server_compute_md5(
+ 		pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");
+ 		goto out;
+ 	}
++	if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) {
++		pr_err("Malformed CHAP_C\n");
++		goto out;
++	}
++	pr_debug("[server] Got CHAP_C=%s\n", challenge);
+ 	/*
+ 	 * During mutual authentication, the CHAP_C generated by the
+ 	 * initiator must not match the original CHAP_C generated by
+@@ -413,7 +402,7 @@ static int chap_server_compute_md5(
+ 	/*
+ 	 * Convert response from binary hex to ascii hext.
+ 	 */
+-	chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
++	bin2hex(response, digest, MD5_SIGNATURE_SIZE);
+ 	*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
+ 			response);
+ 	*nr_out_len += 1;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index a78ad10a119b..73cdc0d633dd 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -32,6 +32,8 @@
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+ 
++#include <linux/nospec.h>
++
+ #include <linux/kbd_kern.h>
+ #include <linux/vt_kern.h>
+ #include <linux/kbd_diacr.h>
+@@ -700,6 +702,8 @@ int vt_ioctl(struct tty_struct *tty,
+ 		if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES)
+ 			ret = -ENXIO;
+ 		else {
++			vsa.console = array_index_nospec(vsa.console,
++							 MAX_NR_CONSOLES + 1);
+ 			vsa.console--;
+ 			console_lock();
+ 			ret = vc_allocate(vsa.console);
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index e2902d394f1b..f93f9881ec18 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -76,7 +76,7 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 	else if (unlikely(rlen < EXT4_DIR_REC_LEN(de->name_len)))
+ 		error_msg = "rec_len is too small for name_len";
+ 	else if (unlikely(((char *) de - buf) + rlen > size))
+-		error_msg = "directory entry across range";
++		error_msg = "directory entry overrun";
+ 	else if (unlikely(le32_to_cpu(de->inode) >
+ 			le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))
+ 		error_msg = "inode out of bounds";
+@@ -85,18 +85,16 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ 
+ 	if (filp)
+ 		ext4_error_file(filp, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				error_msg, offset, le32_to_cpu(de->inode),
++				rlen, de->name_len, size);
+ 	else
+ 		ext4_error_inode(dir, function, line, bh->b_blocknr,
+-				"bad entry in directory: %s - offset=%u(%u), "
+-				"inode=%u, rec_len=%d, name_len=%d",
+-				error_msg, (unsigned) (offset % size),
+-				offset, le32_to_cpu(de->inode),
+-				rlen, de->name_len);
++				"bad entry in directory: %s - offset=%u, "
++				"inode=%u, rec_len=%d, name_len=%d, size=%d",
++				 error_msg, offset, le32_to_cpu(de->inode),
++				 rlen, de->name_len, size);
+ 
+ 	return 1;
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 7c7123f265c2..aa1ce53d0c87 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -675,6 +675,9 @@ enum {
+ /* Max physical block we can address w/o extents */
+ #define EXT4_MAX_BLOCK_FILE_PHYS	0xFFFFFFFF
+ 
++/* Max logical block we can support */
++#define EXT4_MAX_LOGICAL_BLOCK		0xFFFFFFFF
++
+ /*
+  * Structure of an inode on the disk
+  */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 3543fe80a3c4..7b4736022761 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1753,6 +1753,7 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ {
+ 	int err, inline_size;
+ 	struct ext4_iloc iloc;
++	size_t inline_len;
+ 	void *inline_pos;
+ 	unsigned int offset;
+ 	struct ext4_dir_entry_2 *de;
+@@ -1780,8 +1781,9 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ 		goto out;
+ 	}
+ 
++	inline_len = ext4_get_inline_size(dir);
+ 	offset = EXT4_INLINE_DOTDOT_SIZE;
+-	while (offset < dir->i_size) {
++	while (offset < inline_len) {
+ 		de = ext4_get_inline_entry(dir, &iloc, offset,
+ 					   &inline_pos, &inline_size);
+ 		if (ext4_check_dir_entry(dir, NULL, de,
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 4efe77286ecd..2276137d0083 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3412,12 +3412,16 @@ static int ext4_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ 	unsigned int blkbits = inode->i_blkbits;
+-	unsigned long first_block = offset >> blkbits;
+-	unsigned long last_block = (offset + length - 1) >> blkbits;
++	unsigned long first_block, last_block;
+ 	struct ext4_map_blocks map;
+ 	bool delalloc = false;
+ 	int ret;
+ 
++	if ((offset >> blkbits) > EXT4_MAX_LOGICAL_BLOCK)
++		return -EINVAL;
++	first_block = offset >> blkbits;
++	last_block = min_t(loff_t, (offset + length - 1) >> blkbits,
++			   EXT4_MAX_LOGICAL_BLOCK);
+ 
+ 	if (flags & IOMAP_REPORT) {
+ 		if (ext4_has_inline_data(inode)) {
+@@ -3947,6 +3951,7 @@ static const struct address_space_operations ext4_dax_aops = {
+ 	.writepages		= ext4_dax_writepages,
+ 	.direct_IO		= noop_direct_IO,
+ 	.set_page_dirty		= noop_set_page_dirty,
++	.bmap			= ext4_bmap,
+ 	.invalidatepage		= noop_invalidatepage,
+ };
+ 
+@@ -4856,6 +4861,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		 * not initialized on a new filesystem. */
+ 	}
+ 	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
++	ext4_set_inode_flags(inode);
+ 	inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
+ 	ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
+ 	if (ext4_has_feature_64bit(sb))
+@@ -5005,7 +5011,6 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino)
+ 		goto bad_inode;
+ 	}
+ 	brelse(iloc.bh);
+-	ext4_set_inode_flags(inode);
+ 
+ 	unlock_new_inode(inode);
+ 	return inode;
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index 638ad4743477..38e6a846aac1 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -49,7 +49,6 @@ static int write_mmp_block(struct super_block *sb, struct buffer_head *bh)
+ 	 */
+ 	sb_start_write(sb);
+ 	ext4_mmp_csum_set(sb, mmp);
+-	mark_buffer_dirty(bh);
+ 	lock_buffer(bh);
+ 	bh->b_end_io = end_buffer_write_sync;
+ 	get_bh(bh);
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 116ff68c5bd4..377d516c475f 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3478,6 +3478,12 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	int credits;
+ 	u8 old_file_type;
+ 
++	if (new.inode && new.inode->i_nlink == 0) {
++		EXT4_ERROR_INODE(new.inode,
++				 "target of rename is already freed");
++		return -EFSCORRUPTED;
++	}
++
+ 	if ((ext4_test_inode_flag(new_dir, EXT4_INODE_PROJINHERIT)) &&
+ 	    (!projid_eq(EXT4_I(new_dir)->i_projid,
+ 			EXT4_I(old_dentry->d_inode)->i_projid)))
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e5fb38451a73..ebbc663d0798 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -19,6 +19,7 @@
+ 
+ int ext4_resize_begin(struct super_block *sb)
+ {
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_RESOURCE))
+@@ -29,7 +30,7 @@ int ext4_resize_begin(struct super_block *sb)
+          * because the user tools have no way of handling this.  Probably a
+          * bad time to do it anyways.
+          */
+-	if (EXT4_SB(sb)->s_sbh->b_blocknr !=
++	if (EXT4_B2C(sbi, sbi->s_sbh->b_blocknr) !=
+ 	    le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {
+ 		ext4_warning(sb, "won't resize using backup superblock at %llu",
+ 			(unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);
+@@ -1986,6 +1987,26 @@ retry:
+ 		}
+ 	}
+ 
++	/*
++	 * Make sure the last group has enough space so that it's
++	 * guaranteed to have enough space for all metadata blocks
++	 * that it might need to hold.  (We might not need to store
++	 * the inode table blocks in the last block group, but there
++	 * will be cases where this might be needed.)
++	 */
++	if ((ext4_group_first_block_no(sb, n_group) +
++	     ext4_group_overhead_blocks(sb, n_group) + 2 +
++	     sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) {
++		n_blocks_count = ext4_group_first_block_no(sb, n_group);
++		n_group--;
++		n_blocks_count_retry = 0;
++		if (resize_inode) {
++			iput(resize_inode);
++			resize_inode = NULL;
++		}
++		goto retry;
++	}
++
+ 	/* extend the last group */
+ 	if (n_group == o_group)
+ 		add = n_blocks_count - o_blocks_count;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 130c12974e28..a7a0fffc3ae8 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2126,6 +2126,8 @@ static int _ext4_show_options(struct seq_file *seq, struct super_block *sb,
+ 		SEQ_OPTS_PRINT("max_dir_size_kb=%u", sbi->s_max_dir_size_kb);
+ 	if (test_opt(sb, DATA_ERR_ABORT))
+ 		SEQ_OPTS_PUTS("data_err=abort");
++	if (DUMMY_ENCRYPTION_ENABLED(sbi))
++		SEQ_OPTS_PUTS("test_dummy_encryption");
+ 
+ 	ext4_show_quota_options(seq, sb);
+ 	return 0;
+@@ -4357,11 +4359,13 @@ no_journal:
+ 	block = ext4_count_free_clusters(sb);
+ 	ext4_free_blocks_count_set(sbi->s_es, 
+ 				   EXT4_C2B(sbi, block));
++	ext4_superblock_csum_set(sb);
+ 	err = percpu_counter_init(&sbi->s_freeclusters_counter, block,
+ 				  GFP_KERNEL);
+ 	if (!err) {
+ 		unsigned long freei = ext4_count_free_inodes(sb);
+ 		sbi->s_es->s_free_inodes_count = cpu_to_le32(freei);
++		ext4_superblock_csum_set(sb);
+ 		err = percpu_counter_init(&sbi->s_freeinodes_counter, freei,
+ 					  GFP_KERNEL);
+ 	}
+diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c
+index d9ebe11c8990..1d098c3c00e0 100644
+--- a/fs/ocfs2/buffer_head_io.c
++++ b/fs/ocfs2/buffer_head_io.c
+@@ -342,6 +342,7 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr,
+ 				 * for this bh as it's not marked locally
+ 				 * uptodate. */
+ 				status = -EIO;
++				clear_buffer_needs_validate(bh);
+ 				put_bh(bh);
+ 				bhs[i] = NULL;
+ 				continue;
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 09e37e63bddd..6f720fdf5020 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,12 +152,6 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -190,7 +184,6 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -242,12 +235,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -269,7 +256,6 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -496,12 +482,6 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
+-
+-	if (!host->i_nlink) {
+-		err = -ENOENT;
+-		goto out_noent;
+-	}
+-
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -521,7 +501,6 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
+-out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -561,9 +540,6 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
+-	if (!host->i_nlink)
+-		return -ENOENT;
+-
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/include/net/nfc/hci.h b/include/net/nfc/hci.h
+index 316694dafa5b..008f466d1da7 100644
+--- a/include/net/nfc/hci.h
++++ b/include/net/nfc/hci.h
+@@ -87,7 +87,7 @@ struct nfc_hci_pipe {
+  * According to specification 102 622 chapter 4.4 Pipes,
+  * the pipe identifier is 7 bits long.
+  */
+-#define NFC_HCI_MAX_PIPES		127
++#define NFC_HCI_MAX_PIPES		128
+ struct nfc_hci_init_data {
+ 	u8 gate_count;
+ 	struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES];
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 70c273777fe9..32b71e5b1290 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -165,15 +165,14 @@ struct cipher_context {
+ 	char *rec_seq;
+ };
+ 
++union tls_crypto_context {
++	struct tls_crypto_info info;
++	struct tls12_crypto_info_aes_gcm_128 aes_gcm_128;
++};
++
+ struct tls_context {
+-	union {
+-		struct tls_crypto_info crypto_send;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_send_aes_gcm_128;
+-	};
+-	union {
+-		struct tls_crypto_info crypto_recv;
+-		struct tls12_crypto_info_aes_gcm_128 crypto_recv_aes_gcm_128;
+-	};
++	union tls_crypto_context crypto_send;
++	union tls_crypto_context crypto_recv;
+ 
+ 	struct list_head list;
+ 	struct net_device *netdev;
+@@ -337,8 +336,8 @@ static inline void tls_fill_prepend(struct tls_context *ctx,
+ 	 * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE
+ 	 */
+ 	buf[0] = record_type;
+-	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.version);
+-	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.version);
++	buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.info.version);
++	buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.info.version);
+ 	/* we can use IV for nonce explicit according to spec */
+ 	buf[3] = pkt_len >> 8;
+ 	buf[4] = pkt_len & 0xFF;
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 910cc4334b21..7b8c9e19bad1 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 dh_private;
++	__s32 private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/include/uapi/sound/skl-tplg-interface.h b/include/uapi/sound/skl-tplg-interface.h
+index f58cafa42f18..f39352cef382 100644
+--- a/include/uapi/sound/skl-tplg-interface.h
++++ b/include/uapi/sound/skl-tplg-interface.h
+@@ -10,6 +10,8 @@
+ #ifndef __HDA_TPLG_INTERFACE_H__
+ #define __HDA_TPLG_INTERFACE_H__
+ 
++#include <linux/types.h>
++
+ /*
+  * Default types range from 0~12. type can range from 0 to 0xff
+  * SST types start at higher to avoid any overlapping in future
+@@ -143,10 +145,10 @@ enum skl_module_param_type {
+ };
+ 
+ struct skl_dfw_algo_data {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 max;
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 max;
+ 	char params[0];
+ } __packed;
+ 
+@@ -163,68 +165,68 @@ enum skl_tuple_type {
+ /* v4 configuration data */
+ 
+ struct skl_dfw_v4_module_pin {
+-	u16 module_id;
+-	u16 instance_id;
++	__u16 module_id;
++	__u16 instance_id;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_fmt {
+-	u32 channels;
+-	u32 freq;
+-	u32 bit_depth;
+-	u32 valid_bit_depth;
+-	u32 ch_cfg;
+-	u32 interleaving_style;
+-	u32 sample_type;
+-	u32 ch_map;
++	__u32 channels;
++	__u32 freq;
++	__u32 bit_depth;
++	__u32 valid_bit_depth;
++	__u32 ch_cfg;
++	__u32 interleaving_style;
++	__u32 sample_type;
++	__u32 ch_map;
+ } __packed;
+ 
+ struct skl_dfw_v4_module_caps {
+-	u32 set_params:2;
+-	u32 rsvd:30;
+-	u32 param_id;
+-	u32 caps_size;
+-	u32 caps[HDA_SST_CFG_MAX];
++	__u32 set_params:2;
++	__u32 rsvd:30;
++	__u32 param_id;
++	__u32 caps_size;
++	__u32 caps[HDA_SST_CFG_MAX];
+ } __packed;
+ 
+ struct skl_dfw_v4_pipe {
+-	u8 pipe_id;
+-	u8 pipe_priority;
+-	u16 conn_type:4;
+-	u16 rsvd:4;
+-	u16 memory_pages:8;
++	__u8 pipe_id;
++	__u8 pipe_priority;
++	__u16 conn_type:4;
++	__u16 rsvd:4;
++	__u16 memory_pages:8;
+ } __packed;
+ 
+ struct skl_dfw_v4_module {
+ 	char uuid[SKL_UUID_STR_SZ];
+ 
+-	u16 module_id;
+-	u16 instance_id;
+-	u32 max_mcps;
+-	u32 mem_pages;
+-	u32 obs;
+-	u32 ibs;
+-	u32 vbus_id;
+-
+-	u32 max_in_queue:8;
+-	u32 max_out_queue:8;
+-	u32 time_slot:8;
+-	u32 core_id:4;
+-	u32 rsvd1:4;
+-
+-	u32 module_type:8;
+-	u32 conn_type:4;
+-	u32 dev_type:4;
+-	u32 hw_conn_type:4;
+-	u32 rsvd2:12;
+-
+-	u32 params_fixup:8;
+-	u32 converter:8;
+-	u32 input_pin_type:1;
+-	u32 output_pin_type:1;
+-	u32 is_dynamic_in_pin:1;
+-	u32 is_dynamic_out_pin:1;
+-	u32 is_loadable:1;
+-	u32 rsvd3:11;
++	__u16 module_id;
++	__u16 instance_id;
++	__u32 max_mcps;
++	__u32 mem_pages;
++	__u32 obs;
++	__u32 ibs;
++	__u32 vbus_id;
++
++	__u32 max_in_queue:8;
++	__u32 max_out_queue:8;
++	__u32 time_slot:8;
++	__u32 core_id:4;
++	__u32 rsvd1:4;
++
++	__u32 module_type:8;
++	__u32 conn_type:4;
++	__u32 dev_type:4;
++	__u32 hw_conn_type:4;
++	__u32 rsvd2:12;
++
++	__u32 params_fixup:8;
++	__u32 converter:8;
++	__u32 input_pin_type:1;
++	__u32 output_pin_type:1;
++	__u32 is_dynamic_in_pin:1;
++	__u32 is_dynamic_out_pin:1;
++	__u32 is_loadable:1;
++	__u32 rsvd3:11;
+ 
+ 	struct skl_dfw_v4_pipe pipe;
+ 	struct skl_dfw_v4_module_fmt in_fmt[MAX_IN_QUEUE];
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 63aaac52a265..adbe21c8876e 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3132,7 +3132,7 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
+ 				 * an arbitrary scalar. Disallow all math except
+ 				 * pointer subtraction
+ 				 */
+-				if (opcode == BPF_SUB){
++				if (opcode == BPF_SUB && env->allow_ptr_leaks) {
+ 					mark_reg_unknown(env, regs, insn->dst_reg);
+ 					return 0;
+ 				}
+diff --git a/kernel/pid.c b/kernel/pid.c
+index 157fe4b19971..2ff2d8bfa4e0 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -195,7 +195,7 @@ struct pid *alloc_pid(struct pid_namespace *ns)
+ 		idr_preload_end();
+ 
+ 		if (nr < 0) {
+-			retval = nr;
++			retval = (nr == -ENOSPC) ? -EAGAIN : nr;
+ 			goto out_free;
+ 		}
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 478d9d3e6be9..26526fc41f0d 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10019,7 +10019,8 @@ static inline bool vruntime_normalized(struct task_struct *p)
+ 	 * - A task which has been woken up by try_to_wake_up() and
+ 	 *   waiting for actually being woken up by sched_ttwu_pending().
+ 	 */
+-	if (!se->sum_exec_runtime || p->state == TASK_WAKING)
++	if (!se->sum_exec_runtime ||
++	    (p->state == TASK_WAKING && p->sched_remote_wakeup))
+ 		return true;
+ 
+ 	return false;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0b0b688ea166..e58fd35ff64a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1545,6 +1545,8 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 	tmp_iter_page = first_page;
+ 
+ 	do {
++		cond_resched();
++
+ 		to_remove_page = tmp_iter_page;
+ 		rb_inc_page(cpu_buffer, &tmp_iter_page);
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index 94af022b7f3d..22e949e263f0 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -637,6 +637,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	depends on NO_BOOTMEM
+ 	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
++	depends on 64BIT
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+ 	  single thread. On very large machines this can take a considerable
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 41b9bbf24e16..8264bbdbb6a5 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2226,6 +2226,8 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
+ 			mpol_shared_policy_init(&info->policy, NULL);
+ 			break;
+ 		}
++
++		lockdep_annotate_inode_mutex_key(inode);
+ 	} else
+ 		shmem_free_inode(sb);
+ 	return inode;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 8e3fda9e725c..cb01d509d511 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1179,6 +1179,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		lladdr = neigh->ha;
+ 	}
+ 
++	/* Update confirmed timestamp for neighbour entry after we
++	 * received ARP packet even if it doesn't change IP to MAC binding.
++	 */
++	if (new & NUD_CONNECTED)
++		neigh->confirmed = jiffies;
++
+ 	/* If entry was valid and address is not changed,
+ 	   do not change entry state, if new one is STALE.
+ 	 */
+@@ -1200,15 +1206,12 @@ int neigh_update(struct neighbour *neigh, const u8 *lladdr, u8 new,
+ 		}
+ 	}
+ 
+-	/* Update timestamps only once we know we will make a change to the
++	/* Update timestamp only once we know we will make a change to the
+ 	 * neighbour entry. Otherwise we risk to move the locktime window with
+ 	 * noop updates and ignore relevant ARP updates.
+ 	 */
+-	if (new != old || lladdr != neigh->ha) {
+-		if (new & NUD_CONNECTED)
+-			neigh->confirmed = jiffies;
++	if (new != old || lladdr != neigh->ha)
+ 		neigh->updated = jiffies;
+-	}
+ 
+ 	if (new != old) {
+ 		neigh_del_timer(neigh);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index e3f743c141b3..bafaa033826f 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -2760,7 +2760,7 @@ int rtnl_configure_link(struct net_device *dev, const struct ifinfomsg *ifm)
+ 	}
+ 
+ 	if (dev->rtnl_link_state == RTNL_LINK_INITIALIZED) {
+-		__dev_notify_flags(dev, old_flags, 0U);
++		__dev_notify_flags(dev, old_flags, (old_flags ^ dev->flags));
+ 	} else {
+ 		dev->rtnl_link_state = RTNL_LINK_INITIALIZED;
+ 		__dev_notify_flags(dev, old_flags, ~0U);
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b403499fdabe..0c43b050dac7 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1377,6 +1377,7 @@ struct sk_buff *inet_gso_segment(struct sk_buff *skb,
+ 		if (encap)
+ 			skb_reset_inner_headers(skb);
+ 		skb->network_header = (u8 *)iph - skb->head;
++		skb_reset_mac_len(skb);
+ 	} while ((skb = skb->next));
+ 
+ out:
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 24e116ddae79..fed65bc9df86 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2128,6 +2128,28 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ 							 inet_compute_pseudo);
+ }
+ 
++/* wrapper for udp_queue_rcv_skb taking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++			       struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 inet_compute_pseudo);
++
++	ret = udp_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ /*
+  *	All we need to do is get the socket, and then do a checksum.
+  */
+@@ -2174,14 +2196,9 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udp_queue_rcv_skb(sk, skb);
++		ret = udp_unicast_rcv_skb(sk, skb, uh);
+ 		sock_put(sk);
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
++		return ret;
+ 	}
+ 
+ 	if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))
+@@ -2189,22 +2206,8 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 						saddr, daddr, udptable, proto);
+ 
+ 	sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+-	if (sk) {
+-		int ret;
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 inet_compute_pseudo);
+-
+-		ret = udp_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input, but
+-		 * it wants the return to be -protocol, or 0
+-		 */
+-		if (ret > 0)
+-			return -ret;
+-		return 0;
+-	}
++	if (sk)
++		return udp_unicast_rcv_skb(sk, skb, uh);
+ 
+ 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto drop;
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index 5b3f2f89ef41..c6b75e96868c 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -115,6 +115,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb,
+ 			payload_len = skb->len - nhoff - sizeof(*ipv6h);
+ 		ipv6h->payload_len = htons(payload_len);
+ 		skb->network_header = (u8 *)ipv6h - skb->head;
++		skb_reset_mac_len(skb);
+ 
+ 		if (udpfrag) {
+ 			int err = ip6_find_1stfragopt(skb, &prevhdr);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 3168847c30d1..4f607aace43c 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -219,12 +219,10 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
+ 				kfree_skb(skb);
+ 				return -ENOBUFS;
+ 			}
++			if (skb->sk)
++				skb_set_owner_w(skb2, skb->sk);
+ 			consume_skb(skb);
+ 			skb = skb2;
+-			/* skb_set_owner_w() changes sk->sk_wmem_alloc atomically,
+-			 * it is safe to call in our context (socket lock not held)
+-			 */
+-			skb_set_owner_w(skb, (struct sock *)sk);
+ 		}
+ 		if (opt->opt_flen)
+ 			ipv6_push_frag_opts(skb, opt, &proto);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 18e00ce1719a..480a79f47c52 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -946,8 +946,6 @@ static void ip6_rt_init_dst_reject(struct rt6_info *rt, struct fib6_info *ort)
+ 
+ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ {
+-	rt->dst.flags |= fib6_info_dst_flags(ort);
+-
+ 	if (ort->fib6_flags & RTF_REJECT) {
+ 		ip6_rt_init_dst_reject(rt, ort);
+ 		return;
+@@ -4670,20 +4668,31 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			 int iif, int type, u32 portid, u32 seq,
+ 			 unsigned int flags)
+ {
+-	struct rtmsg *rtm;
++	struct rt6_info *rt6 = (struct rt6_info *)dst;
++	struct rt6key *rt6_dst, *rt6_src;
++	u32 *pmetrics, table, rt6_flags;
+ 	struct nlmsghdr *nlh;
++	struct rtmsg *rtm;
+ 	long expires = 0;
+-	u32 *pmetrics;
+-	u32 table;
+ 
+ 	nlh = nlmsg_put(skb, portid, seq, type, sizeof(*rtm), flags);
+ 	if (!nlh)
+ 		return -EMSGSIZE;
+ 
++	if (rt6) {
++		rt6_dst = &rt6->rt6i_dst;
++		rt6_src = &rt6->rt6i_src;
++		rt6_flags = rt6->rt6i_flags;
++	} else {
++		rt6_dst = &rt->fib6_dst;
++		rt6_src = &rt->fib6_src;
++		rt6_flags = rt->fib6_flags;
++	}
++
+ 	rtm = nlmsg_data(nlh);
+ 	rtm->rtm_family = AF_INET6;
+-	rtm->rtm_dst_len = rt->fib6_dst.plen;
+-	rtm->rtm_src_len = rt->fib6_src.plen;
++	rtm->rtm_dst_len = rt6_dst->plen;
++	rtm->rtm_src_len = rt6_src->plen;
+ 	rtm->rtm_tos = 0;
+ 	if (rt->fib6_table)
+ 		table = rt->fib6_table->tb6_id;
+@@ -4698,7 +4707,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	rtm->rtm_scope = RT_SCOPE_UNIVERSE;
+ 	rtm->rtm_protocol = rt->fib6_protocol;
+ 
+-	if (rt->fib6_flags & RTF_CACHE)
++	if (rt6_flags & RTF_CACHE)
+ 		rtm->rtm_flags |= RTM_F_CLONED;
+ 
+ 	if (dest) {
+@@ -4706,7 +4715,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_dst_len = 128;
+ 	} else if (rtm->rtm_dst_len)
+-		if (nla_put_in6_addr(skb, RTA_DST, &rt->fib6_dst.addr))
++		if (nla_put_in6_addr(skb, RTA_DST, &rt6_dst->addr))
+ 			goto nla_put_failure;
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	if (src) {
+@@ -4714,12 +4723,12 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 		rtm->rtm_src_len = 128;
+ 	} else if (rtm->rtm_src_len &&
+-		   nla_put_in6_addr(skb, RTA_SRC, &rt->fib6_src.addr))
++		   nla_put_in6_addr(skb, RTA_SRC, &rt6_src->addr))
+ 		goto nla_put_failure;
+ #endif
+ 	if (iif) {
+ #ifdef CONFIG_IPV6_MROUTE
+-		if (ipv6_addr_is_multicast(&rt->fib6_dst.addr)) {
++		if (ipv6_addr_is_multicast(&rt6_dst->addr)) {
+ 			int err = ip6mr_get_route(net, skb, rtm, portid);
+ 
+ 			if (err == 0)
+@@ -4754,7 +4763,14 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	/* For multipath routes, walk the siblings list and add
+ 	 * each as a nexthop within RTA_MULTIPATH.
+ 	 */
+-	if (rt->fib6_nsiblings) {
++	if (rt6) {
++		if (rt6_flags & RTF_GATEWAY &&
++		    nla_put_in6_addr(skb, RTA_GATEWAY, &rt6->rt6i_gateway))
++			goto nla_put_failure;
++
++		if (dst->dev && nla_put_u32(skb, RTA_OIF, dst->dev->ifindex))
++			goto nla_put_failure;
++	} else if (rt->fib6_nsiblings) {
+ 		struct fib6_info *sibling, *next_sibling;
+ 		struct nlattr *mp;
+ 
+@@ -4777,7 +4793,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			goto nla_put_failure;
+ 	}
+ 
+-	if (rt->fib6_flags & RTF_EXPIRES) {
++	if (rt6_flags & RTF_EXPIRES) {
+ 		expires = dst ? dst->expires : rt->expires;
+ 		expires -= jiffies;
+ 	}
+@@ -4785,7 +4801,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 	if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0)
+ 		goto nla_put_failure;
+ 
+-	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->fib6_flags)))
++	if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt6_flags)))
+ 		goto nla_put_failure;
+ 
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index e6645cae403e..39d0cab919bb 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -748,6 +748,28 @@ static void udp6_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
+ 	}
+ }
+ 
++/* wrapper for udp_queue_rcv_skb taking care of csum conversion and
++ * return code conversion for ip layer consumption
++ */
++static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,
++				struct udphdr *uh)
++{
++	int ret;
++
++	if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
++		skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
++					 ip6_compute_pseudo);
++
++	ret = udpv6_queue_rcv_skb(sk, skb);
++
++	/* a return value > 0 means to resubmit the input, but
++	 * it wants the return to be -protocol, or 0
++	 */
++	if (ret > 0)
++		return -ret;
++	return 0;
++}
++
+ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		   int proto)
+ {
+@@ -799,13 +821,14 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 		if (unlikely(sk->sk_rx_dst != dst))
+ 			udp6_sk_rx_dst_set(sk, dst);
+ 
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-		sock_put(sk);
++		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
++			sock_put(sk);
++			goto report_csum_error;
++		}
+ 
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-		return 0;
++		ret = udp6_unicast_rcv_skb(sk, skb, uh);
++		sock_put(sk);
++		return ret;
+ 	}
+ 
+ 	/*
+@@ -818,30 +841,13 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
+ 	/* Unicast */
+ 	sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
+ 	if (sk) {
+-		int ret;
+-
+-		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
+-			udp6_csum_zero_error(skb);
+-			goto csum_error;
+-		}
+-
+-		if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
+-			skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
+-						 ip6_compute_pseudo);
+-
+-		ret = udpv6_queue_rcv_skb(sk, skb);
+-
+-		/* a return value > 0 means to resubmit the input */
+-		if (ret > 0)
+-			return ret;
+-
+-		return 0;
++		if (!uh->check && !udp_sk(sk)->no_check6_rx)
++			goto report_csum_error;
++		return udp6_unicast_rcv_skb(sk, skb, uh);
+ 	}
+ 
+-	if (!uh->check) {
+-		udp6_csum_zero_error(skb);
+-		goto csum_error;
+-	}
++	if (!uh->check)
++		goto report_csum_error;
+ 
+ 	if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ 		goto discard;
+@@ -862,6 +868,9 @@ short_packet:
+ 			    ulen, skb->len,
+ 			    daddr, ntohs(uh->dest));
+ 	goto discard;
++
++report_csum_error:
++	udp6_csum_zero_error(skb);
+ csum_error:
+ 	__UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);
+ discard:
+diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c
+index ac8030c4bcf8..19cb2e473ea6 100644
+--- a/net/nfc/hci/core.c
++++ b/net/nfc/hci/core.c
+@@ -209,6 +209,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		create_info = (struct hci_create_pipe_resp *)skb->data;
+ 
++		if (create_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		/* Save the new created pipe and bind with local gate,
+ 		 * the description for skb->data[3] is destination gate id
+ 		 * but since we received this cmd from host controller, we
+@@ -232,6 +237,11 @@ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ 		}
+ 		delete_info = (struct hci_delete_pipe_noti *)skb->data;
+ 
++		if (delete_info->pipe >= NFC_HCI_MAX_PIPES) {
++			status = NFC_HCI_ANY_E_NOK;
++			goto exit;
++		}
++
+ 		hdev->pipes[delete_info->pipe].gate = NFC_HCI_INVALID_GATE;
+ 		hdev->pipes[delete_info->pipe].dest_host = NFC_HCI_INVALID_HOST;
+ 		break;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 5db358497c9e..e0e334a3a6e1 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -64,7 +64,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 
+ 	if (!exists) {
+ 		ret = tcf_idr_create(tn, parm->index, est, a,
+-				     &act_sample_ops, bind, false);
++				     &act_sample_ops, bind, true);
+ 		if (ret)
+ 			return ret;
+ 		ret = ACT_P_CREATED;
+diff --git a/net/socket.c b/net/socket.c
+index 4ac3b834cce9..d4187ac17d55 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -962,7 +962,8 @@ void dlci_ioctl_set(int (*hook) (unsigned int, void __user *))
+ EXPORT_SYMBOL(dlci_ioctl_set);
+ 
+ static long sock_do_ioctl(struct net *net, struct socket *sock,
+-				 unsigned int cmd, unsigned long arg)
++			  unsigned int cmd, unsigned long arg,
++			  unsigned int ifreq_size)
+ {
+ 	int err;
+ 	void __user *argp = (void __user *)arg;
+@@ -988,11 +989,11 @@ static long sock_do_ioctl(struct net *net, struct socket *sock,
+ 	} else {
+ 		struct ifreq ifr;
+ 		bool need_copyout;
+-		if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))
++		if (copy_from_user(&ifr, argp, ifreq_size))
+ 			return -EFAULT;
+ 		err = dev_ioctl(net, cmd, &ifr, &need_copyout);
+ 		if (!err && need_copyout)
+-			if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))
++			if (copy_to_user(argp, &ifr, ifreq_size))
+ 				return -EFAULT;
+ 	}
+ 	return err;
+@@ -1091,7 +1092,8 @@ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ 			err = open_related_ns(&net->ns, get_net_ns);
+ 			break;
+ 		default:
+-			err = sock_do_ioctl(net, sock, cmd, arg);
++			err = sock_do_ioctl(net, sock, cmd, arg,
++					    sizeof(struct ifreq));
+ 			break;
+ 		}
+ 	return err;
+@@ -2762,7 +2764,8 @@ static int do_siocgstamp(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timeval(&ktv, up);
+@@ -2778,7 +2781,8 @@ static int do_siocgstampns(struct net *net, struct socket *sock,
+ 	int err;
+ 
+ 	set_fs(KERNEL_DS);
+-	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts);
++	err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 	if (!err)
+ 		err = compat_put_timespec(&kts, up);
+@@ -3084,7 +3088,8 @@ static int routing_ioctl(struct net *net, struct socket *sock,
+ 	}
+ 
+ 	set_fs(KERNEL_DS);
+-	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r);
++	ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r,
++			    sizeof(struct compat_ifreq));
+ 	set_fs(old_fs);
+ 
+ out:
+@@ -3197,7 +3202,8 @@ static int compat_sock_ioctl_trans(struct file *file, struct socket *sock,
+ 	case SIOCBONDSETHWADDR:
+ 	case SIOCBONDCHANGEACTIVE:
+ 	case SIOCGIFNAME:
+-		return sock_do_ioctl(net, sock, cmd, arg);
++		return sock_do_ioctl(net, sock, cmd, arg,
++				     sizeof(struct compat_ifreq));
+ 	}
+ 
+ 	return -ENOIOCTLCMD;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a7a8f8e20ff3..9bd0286d5407 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -552,7 +552,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 		goto free_marker_record;
+ 	}
+ 
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 	switch (crypto_info->cipher_type) {
+ 	case TLS_CIPHER_AES_GCM_128:
+ 		nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE;
+@@ -650,7 +650,7 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+ 
+ 	ctx->priv_ctx_tx = offload_ctx;
+ 	rc = netdev->tlsdev_ops->tls_dev_add(netdev, sk, TLS_OFFLOAD_CTX_DIR_TX,
+-					     &ctx->crypto_send,
++					     &ctx->crypto_send.info,
+ 					     tcp_sk(sk)->write_seq);
+ 	if (rc)
+ 		goto release_netdev;
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index 748914abdb60..72143679d3d6 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -320,7 +320,7 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
+ 		goto free_req;
+ 
+ 	iv = buf;
+-	memcpy(iv, tls_ctx->crypto_send_aes_gcm_128.salt,
++	memcpy(iv, tls_ctx->crypto_send.aes_gcm_128.salt,
+ 	       TLS_CIPHER_AES_GCM_128_SALT_SIZE);
+ 	aad = buf + TLS_CIPHER_AES_GCM_128_SALT_SIZE +
+ 	      TLS_CIPHER_AES_GCM_128_IV_SIZE;
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 45188d920013..2ccf194c3ebb 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -245,6 +245,16 @@ static void tls_write_space(struct sock *sk)
+ 	ctx->sk_write_space(sk);
+ }
+ 
++static void tls_ctx_free(struct tls_context *ctx)
++{
++	if (!ctx)
++		return;
++
++	memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));
++	memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));
++	kfree(ctx);
++}
++
+ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+@@ -295,7 +305,7 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ #else
+ 	{
+ #endif
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ 		ctx = NULL;
+ 	}
+ 
+@@ -306,7 +316,7 @@ skip_tx_cleanup:
+ 	 * for sk->sk_prot->unhash [tls_hw_unhash]
+ 	 */
+ 	if (free_ctx)
+-		kfree(ctx);
++		tls_ctx_free(ctx);
+ }
+ 
+ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+@@ -331,7 +341,7 @@ static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	/* get user crypto info */
+-	crypto_info = &ctx->crypto_send;
++	crypto_info = &ctx->crypto_send.info;
+ 
+ 	if (!TLS_CRYPTO_INFO_READY(crypto_info)) {
+ 		rc = -EBUSY;
+@@ -418,9 +428,9 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	}
+ 
+ 	if (tx)
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 	else
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 
+ 	/* Currently we don't support set crypto info more than one time */
+ 	if (TLS_CRYPTO_INFO_READY(crypto_info)) {
+@@ -492,7 +502,7 @@ static int do_tls_setsockopt_conf(struct sock *sk, char __user *optval,
+ 	goto out;
+ 
+ err_crypto_info:
+-	memset(crypto_info, 0, sizeof(*crypto_info));
++	memzero_explicit(crypto_info, sizeof(union tls_crypto_context));
+ out:
+ 	return rc;
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index b3344bbe336b..9fab8e5a4a5b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -872,7 +872,15 @@ fallback_to_reg_recv:
+ 				if (control != TLS_RECORD_TYPE_DATA)
+ 					goto recv_end;
+ 			}
++		} else {
++			/* MSG_PEEK right now cannot look beyond current skb
++			 * from strparser, meaning we cannot advance skb here
++			 * and thus unpause strparser since we'd lose the original
++			 * one.
++			 */
++			break;
+ 		}
++
+ 		/* If we have a new message from strparser, continue now. */
+ 		if (copied >= target && !ctx->recv_pkt)
+ 			break;
+@@ -989,8 +997,8 @@ static int tls_read_size(struct strparser *strp, struct sk_buff *skb)
+ 		goto read_failure;
+ 	}
+ 
+-	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.version) ||
+-	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.version)) {
++	if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.info.version) ||
++	    header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.info.version)) {
+ 		ret = -EINVAL;
+ 		goto read_failure;
+ 	}
+@@ -1064,7 +1072,6 @@ void tls_sw_free_resources_rx(struct sock *sk)
+ 
+ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ {
+-	char keyval[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
+ 	struct tls_crypto_info *crypto_info;
+ 	struct tls12_crypto_info_aes_gcm_128 *gcm_128_info;
+ 	struct tls_sw_context_tx *sw_ctx_tx = NULL;
+@@ -1100,11 +1107,11 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 	}
+ 
+ 	if (tx) {
+-		crypto_info = &ctx->crypto_send;
++		crypto_info = &ctx->crypto_send.info;
+ 		cctx = &ctx->tx;
+ 		aead = &sw_ctx_tx->aead_send;
+ 	} else {
+-		crypto_info = &ctx->crypto_recv;
++		crypto_info = &ctx->crypto_recv.info;
+ 		cctx = &ctx->rx;
+ 		aead = &sw_ctx_rx->aead_recv;
+ 	}
+@@ -1184,9 +1191,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 
+ 	ctx->push_pending_record = tls_sw_push_pending_record;
+ 
+-	memcpy(keyval, gcm_128_info->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+-
+-	rc = crypto_aead_setkey(*aead, keyval,
++	rc = crypto_aead_setkey(*aead, gcm_128_info->key,
+ 				TLS_CIPHER_AES_GCM_128_KEY_SIZE);
+ 	if (rc)
+ 		goto free_aead;
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index 1a68d27e72b4..b203f7758f97 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index 730ea91d9be8..93676354f87f 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -263,6 +263,8 @@ do_registration(struct work_struct *work)
+ error:
+ 	mutex_unlock(&devices_mutex);
+ 	snd_bebob_stream_destroy_duplex(bebob);
++	kfree(bebob->maudio_special_quirk);
++	bebob->maudio_special_quirk = NULL;
+ 	snd_card_free(bebob->card);
+ 	dev_info(&bebob->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+diff --git a/sound/firewire/bebob/bebob_maudio.c b/sound/firewire/bebob/bebob_maudio.c
+index bd55620c6a47..c266997ad299 100644
+--- a/sound/firewire/bebob/bebob_maudio.c
++++ b/sound/firewire/bebob/bebob_maudio.c
+@@ -96,17 +96,13 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	struct fw_device *device = fw_parent_device(unit);
+ 	int err, rcode;
+ 	u64 date;
+-	__le32 cues[3] = {
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE1),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE2),
+-		cpu_to_le32(MAUDIO_BOOTLOADER_CUE3)
+-	};
++	__le32 *cues;
+ 
+ 	/* check date of software used to build */
+ 	err = snd_bebob_read_block(unit, INFO_OFFSET_SW_DATE,
+ 				   &date, sizeof(u64));
+ 	if (err < 0)
+-		goto end;
++		return err;
+ 	/*
+ 	 * firmware version 5058 or later has date later than "20070401", but
+ 	 * 'date' is not null-terminated.
+@@ -114,20 +110,28 @@ int snd_bebob_maudio_load_firmware(struct fw_unit *unit)
+ 	if (date < 0x3230303730343031LL) {
+ 		dev_err(&unit->device,
+ 			"Use firmware version 5058 or later\n");
+-		err = -ENOSYS;
+-		goto end;
++		return -ENXIO;
+ 	}
+ 
++	cues = kmalloc_array(3, sizeof(*cues), GFP_KERNEL);
++	if (!cues)
++		return -ENOMEM;
++
++	cues[0] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE1);
++	cues[1] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE2);
++	cues[2] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE3);
++
+ 	rcode = fw_run_transaction(device->card, TCODE_WRITE_BLOCK_REQUEST,
+ 				   device->node_id, device->generation,
+ 				   device->max_speed, BEBOB_ADDR_REG_REQ,
+-				   cues, sizeof(cues));
++				   cues, 3 * sizeof(*cues));
++	kfree(cues);
+ 	if (rcode != RCODE_COMPLETE) {
+ 		dev_err(&unit->device,
+ 			"Failed to send a cue to load firmware\n");
+ 		err = -EIO;
+ 	}
+-end:
++
+ 	return err;
+ }
+ 
+@@ -290,10 +294,6 @@ snd_bebob_maudio_special_discover(struct snd_bebob *bebob, bool is1814)
+ 		bebob->midi_output_ports = 2;
+ 	}
+ end:
+-	if (err < 0) {
+-		kfree(params);
+-		bebob->maudio_special_quirk = NULL;
+-	}
+ 	mutex_unlock(&bebob->mutex);
+ 	return err;
+ }
+diff --git a/sound/firewire/digi00x/digi00x.c b/sound/firewire/digi00x/digi00x.c
+index 1f5e1d23f31a..ef689997d6a5 100644
+--- a/sound/firewire/digi00x/digi00x.c
++++ b/sound/firewire/digi00x/digi00x.c
+@@ -49,6 +49,7 @@ static void dg00x_free(struct snd_dg00x *dg00x)
+ 	fw_unit_put(dg00x->unit);
+ 
+ 	mutex_destroy(&dg00x->mutex);
++	kfree(dg00x);
+ }
+ 
+ static void dg00x_card_free(struct snd_card *card)
+diff --git a/sound/firewire/fireface/ff-protocol-ff400.c b/sound/firewire/fireface/ff-protocol-ff400.c
+index ad7a0a32557d..64c3cb0fb926 100644
+--- a/sound/firewire/fireface/ff-protocol-ff400.c
++++ b/sound/firewire/fireface/ff-protocol-ff400.c
+@@ -146,6 +146,7 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ {
+ 	__le32 *reg;
+ 	int i;
++	int err;
+ 
+ 	reg = kcalloc(18, sizeof(__le32), GFP_KERNEL);
+ 	if (reg == NULL)
+@@ -163,9 +164,11 @@ static int ff400_switch_fetching_mode(struct snd_ff *ff, bool enable)
+ 			reg[i] = cpu_to_le32(0x00000001);
+ 	}
+ 
+-	return snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
+-				  FF400_FETCH_PCM_FRAMES, reg,
+-				  sizeof(__le32) * 18, 0);
++	err = snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST,
++				 FF400_FETCH_PCM_FRAMES, reg,
++				 sizeof(__le32) * 18, 0);
++	kfree(reg);
++	return err;
+ }
+ 
+ static void ff400_dump_sync_status(struct snd_ff *ff,
+diff --git a/sound/firewire/fireworks/fireworks.c b/sound/firewire/fireworks/fireworks.c
+index 71a0613d3da0..f2d073365cf6 100644
+--- a/sound/firewire/fireworks/fireworks.c
++++ b/sound/firewire/fireworks/fireworks.c
+@@ -301,6 +301,8 @@ error:
+ 	snd_efw_transaction_remove_instance(efw);
+ 	snd_efw_stream_destroy_duplex(efw);
+ 	snd_card_free(efw->card);
++	kfree(efw->resp_buf);
++	efw->resp_buf = NULL;
+ 	dev_info(&efw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index 1e5b2c802635..2ea8be6c8584 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -130,6 +130,7 @@ static void oxfw_free(struct snd_oxfw *oxfw)
+ 
+ 	kfree(oxfw->spec);
+ 	mutex_destroy(&oxfw->mutex);
++	kfree(oxfw);
+ }
+ 
+ /*
+@@ -207,6 +208,7 @@ static int detect_quirks(struct snd_oxfw *oxfw)
+ static void do_registration(struct work_struct *work)
+ {
+ 	struct snd_oxfw *oxfw = container_of(work, struct snd_oxfw, dwork.work);
++	int i;
+ 	int err;
+ 
+ 	if (oxfw->registered)
+@@ -269,7 +271,15 @@ error:
+ 	snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->rx_stream);
+ 	if (oxfw->has_output)
+ 		snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->tx_stream);
++	for (i = 0; i < SND_OXFW_STREAM_FORMAT_ENTRIES; ++i) {
++		kfree(oxfw->tx_stream_formats[i]);
++		oxfw->tx_stream_formats[i] = NULL;
++		kfree(oxfw->rx_stream_formats[i]);
++		oxfw->rx_stream_formats[i] = NULL;
++	}
+ 	snd_card_free(oxfw->card);
++	kfree(oxfw->spec);
++	oxfw->spec = NULL;
+ 	dev_info(&oxfw->unit->device,
+ 		 "Sound card registration failed: %d\n", err);
+ }
+diff --git a/sound/firewire/tascam/tascam.c b/sound/firewire/tascam/tascam.c
+index 44ad41fb7374..d3fdc463a884 100644
+--- a/sound/firewire/tascam/tascam.c
++++ b/sound/firewire/tascam/tascam.c
+@@ -93,6 +93,7 @@ static void tscm_free(struct snd_tscm *tscm)
+ 	fw_unit_put(tscm->unit);
+ 
+ 	mutex_destroy(&tscm->mutex);
++	kfree(tscm);
+ }
+ 
+ static void tscm_card_free(struct snd_card *card)
+diff --git a/sound/pci/emu10k1/emufx.c b/sound/pci/emu10k1/emufx.c
+index de2ecbe95d6c..2c54d26f30a6 100644
+--- a/sound/pci/emu10k1/emufx.c
++++ b/sound/pci/emu10k1/emufx.c
+@@ -2540,7 +2540,7 @@ static int snd_emu10k1_fx8010_ioctl(struct snd_hwdep * hw, struct file *file, un
+ 		emu->support_tlv = 1;
+ 		return put_user(SNDRV_EMU10K1_VERSION, (int __user *)argp);
+ 	case SNDRV_EMU10K1_IOCTL_INFO:
+-		info = kmalloc(sizeof(*info), GFP_KERNEL);
++		info = kzalloc(sizeof(*info), GFP_KERNEL);
+ 		if (!info)
+ 			return -ENOMEM;
+ 		snd_emu10k1_fx8010_info(emu, info);
+diff --git a/sound/soc/codecs/cs4265.c b/sound/soc/codecs/cs4265.c
+index 275677de669f..407554175282 100644
+--- a/sound/soc/codecs/cs4265.c
++++ b/sound/soc/codecs/cs4265.c
+@@ -157,8 +157,8 @@ static const struct snd_kcontrol_new cs4265_snd_controls[] = {
+ 	SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2,
+ 				3, 1, 0),
+ 	SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum),
+-	SOC_SINGLE("MMTLR Data Switch", 0,
+-				1, 1, 0),
++	SOC_SINGLE("MMTLR Data Switch", CS4265_SPDIF_CTL2,
++				0, 1, 0),
+ 	SOC_ENUM("Mono Channel Select", spdif_mono_select_enum),
+ 	SND_SOC_BYTES("C Data Buffer", CS4265_C_DATA_BUFF, 24),
+ };
+diff --git a/sound/soc/codecs/tas6424.c b/sound/soc/codecs/tas6424.c
+index 14999b999fd3..0d6145549a98 100644
+--- a/sound/soc/codecs/tas6424.c
++++ b/sound/soc/codecs/tas6424.c
+@@ -424,8 +424,10 @@ static void tas6424_fault_check_work(struct work_struct *work)
+ 	       TAS6424_FAULT_PVDD_UV |
+ 	       TAS6424_FAULT_VBAT_UV;
+ 
+-	if (reg)
++	if (!reg) {
++		tas6424->last_fault1 = reg;
+ 		goto check_global_fault2_reg;
++	}
+ 
+ 	/*
+ 	 * Only flag errors once for a given occurrence. This is needed as
+@@ -461,8 +463,10 @@ check_global_fault2_reg:
+ 	       TAS6424_FAULT_OTSD_CH3 |
+ 	       TAS6424_FAULT_OTSD_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_fault2 = reg;
+ 		goto check_warn_reg;
++	}
+ 
+ 	if ((reg & TAS6424_FAULT_OTSD) && !(tas6424->last_fault2 & TAS6424_FAULT_OTSD))
+ 		dev_crit(dev, "experienced a global overtemp shutdown\n");
+@@ -497,8 +501,10 @@ check_warn_reg:
+ 	       TAS6424_WARN_VDD_OTW_CH3 |
+ 	       TAS6424_WARN_VDD_OTW_CH4;
+ 
+-	if (!reg)
++	if (!reg) {
++		tas6424->last_warn = reg;
+ 		goto out;
++	}
+ 
+ 	if ((reg & TAS6424_WARN_VDD_UV) && !(tas6424->last_warn & TAS6424_WARN_VDD_UV))
+ 		dev_warn(dev, "experienced a VDD under voltage condition\n");
+diff --git a/sound/soc/codecs/wm9712.c b/sound/soc/codecs/wm9712.c
+index 953d94d50586..ade34c26ad2f 100644
+--- a/sound/soc/codecs/wm9712.c
++++ b/sound/soc/codecs/wm9712.c
+@@ -719,7 +719,7 @@ static int wm9712_probe(struct platform_device *pdev)
+ 
+ static struct platform_driver wm9712_component_driver = {
+ 	.driver = {
+-		.name = "wm9712-component",
++		.name = "wm9712-codec",
+ 	},
+ 
+ 	.probe = wm9712_probe,
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index f237002180c0..ff13189a7ee4 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -953,12 +953,23 @@ static void rsnd_soc_dai_shutdown(struct snd_pcm_substream *substream,
+ 	rsnd_dai_stream_quit(io);
+ }
+ 
++static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream,
++				struct snd_soc_dai *dai)
++{
++	struct rsnd_priv *priv = rsnd_dai_to_priv(dai);
++	struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai);
++	struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream);
++
++	return rsnd_dai_call(prepare, io, priv);
++}
++
+ static const struct snd_soc_dai_ops rsnd_soc_dai_ops = {
+ 	.startup	= rsnd_soc_dai_startup,
+ 	.shutdown	= rsnd_soc_dai_shutdown,
+ 	.trigger	= rsnd_soc_dai_trigger,
+ 	.set_fmt	= rsnd_soc_dai_set_fmt,
+ 	.set_tdm_slot	= rsnd_soc_set_dai_tdm_slot,
++	.prepare	= rsnd_soc_dai_prepare,
+ };
+ 
+ void rsnd_parse_connect_common(struct rsnd_dai *rdai,
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 6d7280d2d9be..e93032498a5b 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -283,6 +283,9 @@ struct rsnd_mod_ops {
+ 	int (*nolock_stop)(struct rsnd_mod *mod,
+ 		    struct rsnd_dai_stream *io,
+ 		    struct rsnd_priv *priv);
++	int (*prepare)(struct rsnd_mod *mod,
++		       struct rsnd_dai_stream *io,
++		       struct rsnd_priv *priv);
+ };
+ 
+ struct rsnd_dai_stream;
+@@ -312,6 +315,7 @@ struct rsnd_mod {
+  * H	0: fallback
+  * H	0: hw_params
+  * H	0: pointer
++ * H	0: prepare
+  */
+ #define __rsnd_mod_shift_nolock_start	0
+ #define __rsnd_mod_shift_nolock_stop	0
+@@ -326,6 +330,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_shift_fallback	28 /* always called */
+ #define __rsnd_mod_shift_hw_params	28 /* always called */
+ #define __rsnd_mod_shift_pointer	28 /* always called */
++#define __rsnd_mod_shift_prepare	28 /* always called */
+ 
+ #define __rsnd_mod_add_probe		0
+ #define __rsnd_mod_add_remove		0
+@@ -340,6 +345,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_add_fallback		0
+ #define __rsnd_mod_add_hw_params	0
+ #define __rsnd_mod_add_pointer		0
++#define __rsnd_mod_add_prepare		0
+ 
+ #define __rsnd_mod_call_probe		0
+ #define __rsnd_mod_call_remove		0
+@@ -354,6 +360,7 @@ struct rsnd_mod {
+ #define __rsnd_mod_call_pointer		0
+ #define __rsnd_mod_call_nolock_start	0
+ #define __rsnd_mod_call_nolock_stop	1
++#define __rsnd_mod_call_prepare		0
+ 
+ #define rsnd_mod_to_priv(mod)	((mod)->priv)
+ #define rsnd_mod_name(mod)	((mod)->ops->name)
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 6e1166ec24a0..cf4b40d376e5 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -286,7 +286,7 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod,
+ 	if (rsnd_ssi_is_multi_slave(mod, io))
+ 		return 0;
+ 
+-	if (ssi->usrcnt > 1) {
++	if (ssi->rate) {
+ 		if (ssi->rate != rate) {
+ 			dev_err(dev, "SSI parent/child should use same rate\n");
+ 			return -EINVAL;
+@@ -431,7 +431,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 			 struct rsnd_priv *priv)
+ {
+ 	struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
+-	int ret;
+ 
+ 	if (!rsnd_ssi_is_run_mods(mod, io))
+ 		return 0;
+@@ -440,10 +439,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
+ 
+ 	rsnd_mod_power_on(mod);
+ 
+-	ret = rsnd_ssi_master_clk_start(mod, io);
+-	if (ret < 0)
+-		return ret;
+-
+ 	rsnd_ssi_config_init(mod, io);
+ 
+ 	rsnd_ssi_register_setup(mod);
+@@ -846,6 +841,13 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
+ 	return 0;
+ }
+ 
++static int rsnd_ssi_prepare(struct rsnd_mod *mod,
++			    struct rsnd_dai_stream *io,
++			    struct rsnd_priv *priv)
++{
++	return rsnd_ssi_master_clk_start(mod, io);
++}
++
+ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.name	= SSI_NAME,
+ 	.probe	= rsnd_ssi_common_probe,
+@@ -858,6 +860,7 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
+ 	.pointer = rsnd_ssi_pio_pointer,
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ static int rsnd_ssi_dma_probe(struct rsnd_mod *mod,
+@@ -934,6 +937,7 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
+ 	.pcm_new = rsnd_ssi_pcm_new,
+ 	.fallback = rsnd_ssi_fallback,
+ 	.hw_params = rsnd_ssi_hw_params,
++	.prepare = rsnd_ssi_prepare,
+ };
+ 
+ int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-26 10:40 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-09-26 10:40 UTC (permalink / raw
  To: gentoo-commits

commit:     54ae6ee8b5bae9dc320080c834184ead5030ce4c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 26 10:40:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 26 10:40:05 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=54ae6ee8

Linux patch 4.18.10

 0000_README              |    4 +
 1009_linux-4.18.10.patch | 6974 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6978 insertions(+)

diff --git a/0000_README b/0000_README
index 6534d27..a9e2bd7 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-4.18.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.9
 
+Patch:  1009_linux-4.18.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-4.18.10.patch b/1009_linux-4.18.10.patch
new file mode 100644
index 0000000..16ee162
--- /dev/null
+++ b/1009_linux-4.18.10.patch
@@ -0,0 +1,6974 @@
+diff --git a/Makefile b/Makefile
+index 1178348fb9ca..ffab15235ff0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -225,10 +225,12 @@ no-dot-config-targets := $(clean-targets) \
+ 			 cscope gtags TAGS tags help% %docs check% coccicheck \
+ 			 $(version_h) headers_% archheaders archscripts \
+ 			 kernelversion %src-pkg
++no-sync-config-targets := $(no-dot-config-targets) install %install
+ 
+-config-targets := 0
+-mixed-targets  := 0
+-dot-config     := 1
++config-targets  := 0
++mixed-targets   := 0
++dot-config      := 1
++may-sync-config := 1
+ 
+ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
+@@ -236,6 +238,16 @@ ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
+ 	endif
+ endif
+ 
++ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),)
++	ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
++		may-sync-config := 0
++	endif
++endif
++
++ifneq ($(KBUILD_EXTMOD),)
++	may-sync-config := 0
++endif
++
+ ifeq ($(KBUILD_EXTMOD),)
+         ifneq ($(filter config %config,$(MAKECMDGOALS)),)
+                 config-targets := 1
+@@ -610,7 +622,7 @@ ARCH_CFLAGS :=
+ include arch/$(SRCARCH)/Makefile
+ 
+ ifeq ($(dot-config),1)
+-ifeq ($(KBUILD_EXTMOD),)
++ifeq ($(may-sync-config),1)
+ # Read in dependencies to all Kconfig* files, make sure to run syncconfig if
+ # changes are detected. This should be included after arch/$(SRCARCH)/Makefile
+ # because some architectures define CROSS_COMPILE there.
+@@ -625,8 +637,9 @@ $(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
+ include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+-# external modules needs include/generated/autoconf.h and include/config/auto.conf
+-# but do not care if they are up-to-date. Use auto.conf to trigger the test
++# External modules and some install targets need include/generated/autoconf.h
++# and include/config/auto.conf but do not care if they are up-to-date.
++# Use auto.conf to trigger the test
+ PHONY += include/config/auto.conf
+ 
+ include/config/auto.conf:
+@@ -638,7 +651,7 @@ include/config/auto.conf:
+ 	echo >&2 ;							\
+ 	/bin/false)
+ 
+-endif # KBUILD_EXTMOD
++endif # may-sync-config
+ 
+ else
+ # Dummy target needed, because used as prerequisite
+diff --git a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+index 4dc0b347b1ee..c2dc9d09484a 100644
+--- a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
++++ b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+@@ -189,6 +189,8 @@
+ 						regulator-max-microvolt = <2950000>;
+ 
+ 						regulator-boot-on;
++						regulator-system-load = <200000>;
++						regulator-allow-set-load;
+ 					};
+ 
+ 					l21 {
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index d3db306a5a70..941b0ffd9806 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -203,6 +203,7 @@ static int __init exynos_pmu_irq_init(struct device_node *node,
+ 					  NULL);
+ 	if (!domain) {
+ 		iounmap(pmu_base_addr);
++		pmu_base_addr = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/arch/arm/mach-hisi/hotplug.c b/arch/arm/mach-hisi/hotplug.c
+index a129aae72602..909bb2493781 100644
+--- a/arch/arm/mach-hisi/hotplug.c
++++ b/arch/arm/mach-hisi/hotplug.c
+@@ -148,13 +148,20 @@ static int hi3xxx_hotplug_init(void)
+ 	struct device_node *node;
+ 
+ 	node = of_find_compatible_node(NULL, NULL, "hisilicon,sysctrl");
+-	if (node) {
+-		ctrl_base = of_iomap(node, 0);
+-		id = HI3620_CTRL;
+-		return 0;
++	if (!node) {
++		id = ERROR_CTRL;
++		return -ENOENT;
+ 	}
+-	id = ERROR_CTRL;
+-	return -ENOENT;
++
++	ctrl_base = of_iomap(node, 0);
++	of_node_put(node);
++	if (!ctrl_base) {
++		id = ERROR_CTRL;
++		return -ENOMEM;
++	}
++
++	id = HI3620_CTRL;
++	return 0;
+ }
+ 
+ void hi3xxx_set_cpu(int cpu, bool enable)
+@@ -173,11 +180,15 @@ static bool hix5hd2_hotplug_init(void)
+ 	struct device_node *np;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "hisilicon,cpuctrl");
+-	if (np) {
+-		ctrl_base = of_iomap(np, 0);
+-		return true;
+-	}
+-	return false;
++	if (!np)
++		return false;
++
++	ctrl_base = of_iomap(np, 0);
++	of_node_put(np);
++	if (!ctrl_base)
++		return false;
++
++	return true;
+ }
+ 
+ void hix5hd2_set_cpu(int cpu, bool enable)
+@@ -219,10 +230,10 @@ void hip01_set_cpu(int cpu, bool enable)
+ 
+ 	if (!ctrl_base) {
+ 		np = of_find_compatible_node(NULL, NULL, "hisilicon,hip01-sysctrl");
+-		if (np)
+-			ctrl_base = of_iomap(np, 0);
+-		else
+-			BUG();
++		BUG_ON(!np);
++		ctrl_base = of_iomap(np, 0);
++		of_node_put(np);
++		BUG_ON(!ctrl_base);
+ 	}
+ 
+ 	if (enable) {
+diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+index 9213c966c224..ec7ea8dca777 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0 0x11002000 0 0x400>;
+ 		interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
+ 		clocks = <&topckgen CLK_TOP_UART_SEL>,
+-			 <&pericfg CLK_PERI_UART1_PD>;
++			 <&pericfg CLK_PERI_UART0_PD>;
+ 		clock-names = "baud", "bus";
+ 		status = "disabled";
+ 	};
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+index 9ff848792712..78ce3979ef09 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dtsi
+@@ -338,7 +338,7 @@
+ 			led@6 {
+ 				label = "apq8016-sbc:blue:bt";
+ 				gpios = <&pm8916_mpps 3 GPIO_ACTIVE_HIGH>;
+-				linux,default-trigger = "bt";
++				linux,default-trigger = "bluetooth-power";
+ 				default-state = "off";
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index 0298bd0d0e1a..caf112629caa 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -58,6 +58,7 @@
+ 			clocks = <&sys_clk 32>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster0_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 
+ 		cpu2: cpu@100 {
+@@ -77,6 +78,7 @@
+ 			clocks = <&sys_clk 33>;
+ 			enable-method = "psci";
+ 			operating-points-v2 = <&cluster1_opp>;
++			#cooling-cells = <2>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
+index 33147aacdafd..dd5b4fab114f 100644
+--- a/arch/arm64/kernel/perf_event.c
++++ b/arch/arm64/kernel/perf_event.c
+@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
+ 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+ }
+ 
++static void armv8pmu_start(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Enable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
++static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
++{
++	unsigned long flags;
++	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
++
++	raw_spin_lock_irqsave(&events->pmu_lock, flags);
++	/* Disable all counters */
++	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
++	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
++}
++
+ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ {
+ 	u32 pmovsr;
+@@ -694,6 +716,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	 */
+ 	regs = get_irq_regs();
+ 
++	/*
++	 * Stop the PMU while processing the counter overflows
++	 * to prevent skews in group events.
++	 */
++	armv8pmu_stop(cpu_pmu);
+ 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
+ 		struct perf_event *event = cpuc->events[idx];
+ 		struct hw_perf_event *hwc;
+@@ -718,6 +745,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 		if (perf_event_overflow(event, &data, regs))
+ 			cpu_pmu->disable(event);
+ 	}
++	armv8pmu_start(cpu_pmu);
+ 
+ 	/*
+ 	 * Handle the pending perf events.
+@@ -731,28 +759,6 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Enable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+-{
+-	unsigned long flags;
+-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+-
+-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+-	/* Disable all counters */
+-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+-}
+-
+ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
+ 				  struct perf_event *event)
+ {
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 5c338ce5a7fa..db5440339ab3 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -277,19 +277,22 @@ static int ptrace_hbp_set_event(unsigned int note_type,
+ 
+ 	switch (note_type) {
+ 	case NT_ARM_HW_BREAK:
+-		if (idx < ARM_MAX_BRP) {
+-			tsk->thread.debug.hbp_break[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_BRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_BRP);
++		tsk->thread.debug.hbp_break[idx] = bp;
++		err = 0;
+ 		break;
+ 	case NT_ARM_HW_WATCH:
+-		if (idx < ARM_MAX_WRP) {
+-			tsk->thread.debug.hbp_watch[idx] = bp;
+-			err = 0;
+-		}
++		if (idx >= ARM_MAX_WRP)
++			goto out;
++		idx = array_index_nospec(idx, ARM_MAX_WRP);
++		tsk->thread.debug.hbp_watch[idx] = bp;
++		err = 0;
+ 		break;
+ 	}
+ 
++out:
+ 	return err;
+ }
+ 
+diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
+index f206dafbb0a3..26a058d58d37 100644
+--- a/arch/mips/ath79/setup.c
++++ b/arch/mips/ath79/setup.c
+@@ -40,6 +40,7 @@ static char ath79_sys_type[ATH79_SYS_TYPE_LEN];
+ 
+ static void ath79_restart(char *command)
+ {
++	local_irq_disable();
+ 	ath79_device_reset_set(AR71XX_RESET_FULL_CHIP);
+ 	for (;;)
+ 		if (cpu_wait)
+diff --git a/arch/mips/include/asm/mach-ath79/ath79.h b/arch/mips/include/asm/mach-ath79/ath79.h
+index 441faa92c3cd..6e6c0fead776 100644
+--- a/arch/mips/include/asm/mach-ath79/ath79.h
++++ b/arch/mips/include/asm/mach-ath79/ath79.h
+@@ -134,6 +134,7 @@ static inline u32 ath79_pll_rr(unsigned reg)
+ static inline void ath79_reset_wr(unsigned reg, u32 val)
+ {
+ 	__raw_writel(val, ath79_reset_base + reg);
++	(void) __raw_readl(ath79_reset_base + reg); /* flush */
+ }
+ 
+ static inline u32 ath79_reset_rr(unsigned reg)
+diff --git a/arch/mips/jz4740/Platform b/arch/mips/jz4740/Platform
+index 28448d358c10..a2a5a85ea1f9 100644
+--- a/arch/mips/jz4740/Platform
++++ b/arch/mips/jz4740/Platform
+@@ -1,4 +1,4 @@
+ platform-$(CONFIG_MACH_INGENIC)	+= jz4740/
+ cflags-$(CONFIG_MACH_INGENIC)	+= -I$(srctree)/arch/mips/include/asm/mach-jz4740
+ load-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80010000
+-zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff80600000
++zload-$(CONFIG_MACH_INGENIC)	+= 0xffffffff81000000
+diff --git a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+index f7c905e50dc4..92dc6bafc127 100644
+--- a/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
++++ b/arch/mips/loongson64/common/cs5536/cs5536_ohci.c
+@@ -138,7 +138,7 @@ u32 pci_ohci_read_reg(int reg)
+ 		break;
+ 	case PCI_OHCI_INT_REG:
+ 		_rdmsr(DIVIL_MSR_REG(PIC_YSEL_LOW), &hi, &lo);
+-		if ((lo & 0x00000f00) == CS5536_USB_INTR)
++		if (((lo >> PIC_YSEL_LOW_USB_SHIFT) & 0xf) == CS5536_USB_INTR)
+ 			conf_data = 1;
+ 		break;
+ 	default:
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 8c456fa691a5..8167ce8e0cdd 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -180,7 +180,7 @@ extern long kvm_spapr_tce_attach_iommu_group(struct kvm *kvm, int tablefd,
+ 		if ((tbltmp->it_page_shift <= stt->page_shift) &&
+ 				(tbltmp->it_offset << tbltmp->it_page_shift ==
+ 				 stt->offset << stt->page_shift) &&
+-				(tbltmp->it_size << tbltmp->it_page_shift ==
++				(tbltmp->it_size << tbltmp->it_page_shift >=
+ 				 stt->size << stt->page_shift)) {
+ 			/*
+ 			 * Reference the table to avoid races with
+@@ -296,7 +296,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ {
+ 	struct kvmppc_spapr_tce_table *stt = NULL;
+ 	struct kvmppc_spapr_tce_table *siter;
+-	unsigned long npages, size;
++	unsigned long npages, size = args->size;
+ 	int ret = -ENOMEM;
+ 	int i;
+ 
+@@ -304,7 +304,6 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
+ 		(args->offset + args->size > (ULLONG_MAX >> args->page_shift)))
+ 		return -EINVAL;
+ 
+-	size = _ALIGN_UP(args->size, PAGE_SIZE >> 3);
+ 	npages = kvmppc_tce_pages(size);
+ 	ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true);
+ 	if (ret)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index a995513573c2..2ebd5132a29f 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4562,6 +4562,8 @@ static int kvmppc_book3s_init_hv(void)
+ 			pr_err("KVM-HV: Cannot determine method for accessing XICS\n");
+ 			return -ENODEV;
+ 		}
++		/* presence of intc confirmed - node can be dropped again */
++		of_node_put(np);
+ 	}
+ #endif
+ 
+diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
+index 0d539c661748..371e33ecc547 100644
+--- a/arch/powerpc/platforms/powernv/opal.c
++++ b/arch/powerpc/platforms/powernv/opal.c
+@@ -388,7 +388,7 @@ int opal_put_chars(uint32_t vtermno, const char *data, int total_len)
+ 		/* Closed or other error drop */
+ 		if (rc != OPAL_SUCCESS && rc != OPAL_BUSY &&
+ 		    rc != OPAL_BUSY_EVENT) {
+-			written = total_len;
++			written += total_len;
+ 			break;
+ 		}
+ 		if (rc == OPAL_SUCCESS) {
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index 80b27294c1de..ab9a0ebecc19 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -208,7 +208,7 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
+ 			      walk->dst.virt.addr, walk->src.virt.addr, n);
+ 		if (k)
+ 			ret = blkcipher_walk_done(desc, walk, nbytes - k);
+-		if (n < k) {
++		if (k < n) {
+ 			if (__cbc_paes_set_key(ctx) != 0)
+ 				return blkcipher_walk_done(desc, walk, -EIO);
+ 			memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+diff --git a/arch/x86/kernel/eisa.c b/arch/x86/kernel/eisa.c
+index f260e452e4f8..e8c8c5d78dbd 100644
+--- a/arch/x86/kernel/eisa.c
++++ b/arch/x86/kernel/eisa.c
+@@ -7,11 +7,17 @@
+ #include <linux/eisa.h>
+ #include <linux/io.h>
+ 
++#include <xen/xen.h>
++
+ static __init int eisa_bus_probe(void)
+ {
+-	void __iomem *p = ioremap(0x0FFFD9, 4);
++	void __iomem *p;
++
++	if (xen_pv_domain() && !xen_initial_domain())
++		return 0;
+ 
+-	if (readl(p) == 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24))
++	p = ioremap(0x0FFFD9, 4);
++	if (p && readl(p) == 'E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
+ 		EISA_bus = 1;
+ 	iounmap(p);
+ 	return 0;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 946455e9cfef..1d2106d83b4e 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -177,7 +177,7 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ 
+ 	if (pgd_none(*pgd)) {
+ 		unsigned long new_p4d_page = __get_free_page(gfp);
+-		if (!new_p4d_page)
++		if (WARN_ON_ONCE(!new_p4d_page))
+ 			return NULL;
+ 
+ 		set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
+@@ -196,13 +196,17 @@ static p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
+ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	p4d_t *p4d = pti_user_pagetable_walk_p4d(address);
++	p4d_t *p4d;
+ 	pud_t *pud;
+ 
++	p4d = pti_user_pagetable_walk_p4d(address);
++	if (!p4d)
++		return NULL;
++
+ 	BUILD_BUG_ON(p4d_large(*p4d) != 0);
+ 	if (p4d_none(*p4d)) {
+ 		unsigned long new_pud_page = __get_free_page(gfp);
+-		if (!new_pud_page)
++		if (WARN_ON_ONCE(!new_pud_page))
+ 			return NULL;
+ 
+ 		set_p4d(p4d, __p4d(_KERNPG_TABLE | __pa(new_pud_page)));
+@@ -216,7 +220,7 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ 	}
+ 	if (pud_none(*pud)) {
+ 		unsigned long new_pmd_page = __get_free_page(gfp);
+-		if (!new_pmd_page)
++		if (WARN_ON_ONCE(!new_pmd_page))
+ 			return NULL;
+ 
+ 		set_pud(pud, __pud(_KERNPG_TABLE | __pa(new_pmd_page)));
+@@ -238,9 +242,13 @@ static pmd_t *pti_user_pagetable_walk_pmd(unsigned long address)
+ static __init pte_t *pti_user_pagetable_walk_pte(unsigned long address)
+ {
+ 	gfp_t gfp = (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO);
+-	pmd_t *pmd = pti_user_pagetable_walk_pmd(address);
++	pmd_t *pmd;
+ 	pte_t *pte;
+ 
++	pmd = pti_user_pagetable_walk_pmd(address);
++	if (!pmd)
++		return NULL;
++
+ 	/* We can't do anything sensible if we hit a large mapping. */
+ 	if (pmd_large(*pmd)) {
+ 		WARN_ON(1);
+@@ -298,6 +306,10 @@ pti_clone_pmds(unsigned long start, unsigned long end, pmdval_t clear)
+ 		p4d_t *p4d;
+ 		pud_t *pud;
+ 
++		/* Overflow check */
++		if (addr < start)
++			break;
++
+ 		pgd = pgd_offset_k(addr);
+ 		if (WARN_ON(pgd_none(*pgd)))
+ 			return;
+@@ -355,6 +367,9 @@ static void __init pti_clone_p4d(unsigned long addr)
+ 	pgd_t *kernel_pgd;
+ 
+ 	user_p4d = pti_user_pagetable_walk_p4d(addr);
++	if (!user_p4d)
++		return;
++
+ 	kernel_pgd = pgd_offset_k(addr);
+ 	kernel_p4d = p4d_offset(kernel_pgd, addr);
+ 	*user_p4d = *kernel_p4d;
+diff --git a/arch/xtensa/platforms/iss/setup.c b/arch/xtensa/platforms/iss/setup.c
+index f4bbb28026f8..58709e89a8ed 100644
+--- a/arch/xtensa/platforms/iss/setup.c
++++ b/arch/xtensa/platforms/iss/setup.c
+@@ -78,23 +78,28 @@ static struct notifier_block iss_panic_block = {
+ 
+ void __init platform_setup(char **p_cmdline)
+ {
++	static void *argv[COMMAND_LINE_SIZE / sizeof(void *)] __initdata;
++	static char cmdline[COMMAND_LINE_SIZE] __initdata;
+ 	int argc = simc_argc();
+ 	int argv_size = simc_argv_size();
+ 
+ 	if (argc > 1) {
+-		void **argv = alloc_bootmem(argv_size);
+-		char *cmdline = alloc_bootmem(argv_size);
+-		int i;
++		if (argv_size > sizeof(argv)) {
++			pr_err("%s: command line too long: argv_size = %d\n",
++			       __func__, argv_size);
++		} else {
++			int i;
+ 
+-		cmdline[0] = 0;
+-		simc_argv((void *)argv);
++			cmdline[0] = 0;
++			simc_argv((void *)argv);
+ 
+-		for (i = 1; i < argc; ++i) {
+-			if (i > 1)
+-				strcat(cmdline, " ");
+-			strcat(cmdline, argv[i]);
++			for (i = 1; i < argc; ++i) {
++				if (i > 1)
++					strcat(cmdline, " ");
++				strcat(cmdline, argv[i]);
++			}
++			*p_cmdline = cmdline;
+ 		}
+-		*p_cmdline = cmdline;
+ 	}
+ 
+ 	atomic_notifier_chain_register(&panic_notifier_list, &iss_panic_block);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index cbaca5a73f2e..f9d2e1b66e05 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -791,9 +791,13 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 * make sure all in-progress dispatch are completed because
+ 	 * blk_freeze_queue() can only complete all requests, and
+ 	 * dispatch may still be in-progress since we dispatch requests
+-	 * from more than one contexts
++	 * from more than one contexts.
++	 *
++	 * No need to quiesce queue if it isn't initialized yet since
++	 * blk_freeze_queue() should be enough for cases of passthrough
++	 * request.
+ 	 */
+-	if (q->mq_ops)
++	if (q->mq_ops && blk_queue_init_done(q))
+ 		blk_mq_quiesce_queue(q);
+ 
+ 	/* for synchronous bio-based driver finish in-flight integrity i/o */
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 56c493c6cd90..f5745acc2d98 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -339,7 +339,8 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+ 		return e->type->ops.mq.bio_merge(hctx, bio);
+ 	}
+ 
+-	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
++	if ((hctx->flags & BLK_MQ_F_SHOULD_MERGE) &&
++			!list_empty_careful(&ctx->rq_list)) {
+ 		/* default per sw-queue merge */
+ 		spin_lock(&ctx->lock);
+ 		ret = blk_mq_attempt_merge(q, ctx, bio);
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index d1de71124656..24fff4a3d08a 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -128,7 +128,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
+ 
+ 	/* Inherit limits from component devices */
+ 	lim->max_segments = USHRT_MAX;
+-	lim->max_discard_segments = 1;
++	lim->max_discard_segments = USHRT_MAX;
+ 	lim->max_hw_sectors = UINT_MAX;
+ 	lim->max_segment_size = UINT_MAX;
+ 	lim->max_sectors = UINT_MAX;
+diff --git a/crypto/api.c b/crypto/api.c
+index 0ee632bba064..7aca9f86c5f3 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -229,7 +229,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
+ 	mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
+ 
+ 	alg = crypto_alg_lookup(name, type, mask);
+-	if (!alg) {
++	if (!alg && !(mask & CRYPTO_NOLOAD)) {
+ 		request_module("crypto-%s", name);
+ 
+ 		if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask &
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index df3e1a44707a..3aba4ad8af5c 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2809,6 +2809,9 @@ void device_shutdown(void)
+ {
+ 	struct device *dev, *parent;
+ 
++	wait_for_device_probe();
++	device_block_probing();
++
+ 	spin_lock(&devices_kset->list_lock);
+ 	/*
+ 	 * Walk the devices list backward, shutting down each in turn.
+diff --git a/drivers/block/DAC960.c b/drivers/block/DAC960.c
+index f6518067aa7d..f99e5c883368 100644
+--- a/drivers/block/DAC960.c
++++ b/drivers/block/DAC960.c
+@@ -21,6 +21,7 @@
+ #define DAC960_DriverDate			"21 Aug 2007"
+ 
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+ #include <linux/miscdevice.h>
+@@ -6426,7 +6427,7 @@ static bool DAC960_V2_ExecuteUserCommand(DAC960_Controller_T *Controller,
+   return true;
+ }
+ 
+-static int dac960_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_proc_show(struct seq_file *m, void *v)
+ {
+   unsigned char *StatusMessage = "OK\n";
+   int ControllerNumber;
+@@ -6446,14 +6447,16 @@ static int dac960_proc_show(struct seq_file *m, void *v)
+   return 0;
+ }
+ 
+-static int dac960_initial_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_initial_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+ 	DAC960_Controller_T *Controller = (DAC960_Controller_T *)m->private;
+ 	seq_printf(m, "%.*s", Controller->InitialStatusLength, Controller->CombinedStatusBuffer);
+ 	return 0;
+ }
+ 
+-static int dac960_current_status_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused dac960_current_status_proc_show(struct seq_file *m,
++							  void *v)
+ {
+   DAC960_Controller_T *Controller = (DAC960_Controller_T *) m->private;
+   unsigned char *StatusMessage =
+diff --git a/drivers/char/ipmi/ipmi_bt_sm.c b/drivers/char/ipmi/ipmi_bt_sm.c
+index a3397664f800..97d6856c9c0f 100644
+--- a/drivers/char/ipmi/ipmi_bt_sm.c
++++ b/drivers/char/ipmi/ipmi_bt_sm.c
+@@ -59,8 +59,6 @@ enum bt_states {
+ 	BT_STATE_RESET3,
+ 	BT_STATE_RESTART,
+ 	BT_STATE_PRINTME,
+-	BT_STATE_CAPABILITIES_BEGIN,
+-	BT_STATE_CAPABILITIES_END,
+ 	BT_STATE_LONG_BUSY	/* BT doesn't get hosed :-) */
+ };
+ 
+@@ -86,7 +84,6 @@ struct si_sm_data {
+ 	int		error_retries;	/* end of "common" fields */
+ 	int		nonzero_status;	/* hung BMCs stay all 0 */
+ 	enum bt_states	complete;	/* to divert the state machine */
+-	int		BT_CAP_outreqs;
+ 	long		BT_CAP_req2rsp;
+ 	int		BT_CAP_retries;	/* Recommended retries */
+ };
+@@ -137,8 +134,6 @@ static char *state2txt(unsigned char state)
+ 	case BT_STATE_RESET3:		return("RESET3");
+ 	case BT_STATE_RESTART:		return("RESTART");
+ 	case BT_STATE_LONG_BUSY:	return("LONG_BUSY");
+-	case BT_STATE_CAPABILITIES_BEGIN: return("CAP_BEGIN");
+-	case BT_STATE_CAPABILITIES_END:	return("CAP_END");
+ 	}
+ 	return("BAD STATE");
+ }
+@@ -185,7 +180,6 @@ static unsigned int bt_init_data(struct si_sm_data *bt, struct si_sm_io *io)
+ 	bt->complete = BT_STATE_IDLE;	/* end here */
+ 	bt->BT_CAP_req2rsp = BT_NORMAL_TIMEOUT * USEC_PER_SEC;
+ 	bt->BT_CAP_retries = BT_NORMAL_RETRY_LIMIT;
+-	/* BT_CAP_outreqs == zero is a flag to read BT Capabilities */
+ 	return 3; /* We claim 3 bytes of space; ought to check SPMI table */
+ }
+ 
+@@ -451,7 +445,7 @@ static enum si_sm_result error_recovery(struct si_sm_data *bt,
+ 
+ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ {
+-	unsigned char status, BT_CAP[8];
++	unsigned char status;
+ 	static enum bt_states last_printed = BT_STATE_PRINTME;
+ 	int i;
+ 
+@@ -504,12 +498,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		if (status & BT_H_BUSY)		/* clear a leftover H_BUSY */
+ 			BT_CONTROL(BT_H_BUSY);
+ 
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-
+-		/* Read BT capabilities if it hasn't been done yet */
+-		if (!bt->BT_CAP_outreqs)
+-			BT_STATE_CHANGE(BT_STATE_CAPABILITIES_BEGIN,
+-					SI_SM_CALL_WITHOUT_DELAY);
+ 		BT_SI_SM_RETURN(SI_SM_IDLE);
+ 
+ 	case BT_STATE_XACTION_START:
+@@ -614,37 +602,6 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+ 				SI_SM_CALL_WITH_DELAY);
+ 
+-	/*
+-	 * Get BT Capabilities, using timing of upper level state machine.
+-	 * Set outreqs to prevent infinite loop on timeout.
+-	 */
+-	case BT_STATE_CAPABILITIES_BEGIN:
+-		bt->BT_CAP_outreqs = 1;
+-		{
+-			unsigned char GetBT_CAP[] = { 0x18, 0x36 };
+-			bt->state = BT_STATE_IDLE;
+-			bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
+-		}
+-		bt->complete = BT_STATE_CAPABILITIES_END;
+-		BT_STATE_CHANGE(BT_STATE_XACTION_START,
+-				SI_SM_CALL_WITH_DELAY);
+-
+-	case BT_STATE_CAPABILITIES_END:
+-		i = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
+-		bt_init_data(bt, bt->io);
+-		if ((i == 8) && !BT_CAP[2]) {
+-			bt->BT_CAP_outreqs = BT_CAP[3];
+-			bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
+-			bt->BT_CAP_retries = BT_CAP[7];
+-		} else
+-			printk(KERN_WARNING "IPMI BT: using default values\n");
+-		if (!bt->BT_CAP_outreqs)
+-			bt->BT_CAP_outreqs = 1;
+-		printk(KERN_WARNING "IPMI BT: req2rsp=%ld secs retries=%d\n",
+-			bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
+-		bt->timeout = bt->BT_CAP_req2rsp;
+-		return SI_SM_CALL_WITHOUT_DELAY;
+-
+ 	default:	/* should never occur */
+ 		return error_recovery(bt,
+ 				      status,
+@@ -655,6 +612,11 @@ static enum si_sm_result bt_event(struct si_sm_data *bt, long time)
+ 
+ static int bt_detect(struct si_sm_data *bt)
+ {
++	unsigned char GetBT_CAP[] = { 0x18, 0x36 };
++	unsigned char BT_CAP[8];
++	enum si_sm_result smi_result;
++	int rv;
++
+ 	/*
+ 	 * It's impossible for the BT status and interrupt registers to be
+ 	 * all 1's, (assuming a properly functioning, self-initialized BMC)
+@@ -665,6 +627,48 @@ static int bt_detect(struct si_sm_data *bt)
+ 	if ((BT_STATUS == 0xFF) && (BT_INTMASK_R == 0xFF))
+ 		return 1;
+ 	reset_flags(bt);
++
++	/*
++	 * Try getting the BT capabilities here.
++	 */
++	rv = bt_start_transaction(bt, GetBT_CAP, sizeof(GetBT_CAP));
++	if (rv) {
++		dev_warn(bt->io->dev,
++			 "Can't start capabilities transaction: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	smi_result = SI_SM_CALL_WITHOUT_DELAY;
++	for (;;) {
++		if (smi_result == SI_SM_CALL_WITH_DELAY ||
++		    smi_result == SI_SM_CALL_WITH_TICK_DELAY) {
++			schedule_timeout_uninterruptible(1);
++			smi_result = bt_event(bt, jiffies_to_usecs(1));
++		} else if (smi_result == SI_SM_CALL_WITHOUT_DELAY) {
++			smi_result = bt_event(bt, 0);
++		} else
++			break;
++	}
++
++	rv = bt_get_result(bt, BT_CAP, sizeof(BT_CAP));
++	bt_init_data(bt, bt->io);
++	if (rv < 8) {
++		dev_warn(bt->io->dev, "bt cap response too short: %d\n", rv);
++		goto out_no_bt_cap;
++	}
++
++	if (BT_CAP[2]) {
++		dev_warn(bt->io->dev, "Error fetching bt cap: %x\n", BT_CAP[2]);
++out_no_bt_cap:
++		dev_warn(bt->io->dev, "using default values\n");
++	} else {
++		bt->BT_CAP_req2rsp = BT_CAP[6] * USEC_PER_SEC;
++		bt->BT_CAP_retries = BT_CAP[7];
++	}
++
++	dev_info(bt->io->dev, "req2rsp=%ld secs retries=%d\n",
++		 bt->BT_CAP_req2rsp / USEC_PER_SEC, bt->BT_CAP_retries);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index 51832b8a2c62..7fc9612070a1 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -3381,39 +3381,45 @@ int ipmi_register_smi(const struct ipmi_smi_handlers *handlers,
+ 
+ 	rv = handlers->start_processing(send_info, intf);
+ 	if (rv)
+-		goto out;
++		goto out_err;
+ 
+ 	rv = __bmc_get_device_id(intf, NULL, &id, NULL, NULL, i);
+ 	if (rv) {
+ 		dev_err(si_dev, "Unable to get the device id: %d\n", rv);
+-		goto out;
++		goto out_err_started;
+ 	}
+ 
+ 	mutex_lock(&intf->bmc_reg_mutex);
+ 	rv = __scan_channels(intf, &id);
+ 	mutex_unlock(&intf->bmc_reg_mutex);
++	if (rv)
++		goto out_err_bmc_reg;
+ 
+- out:
+-	if (rv) {
+-		ipmi_bmc_unregister(intf);
+-		list_del_rcu(&intf->link);
+-		mutex_unlock(&ipmi_interfaces_mutex);
+-		synchronize_srcu(&ipmi_interfaces_srcu);
+-		cleanup_srcu_struct(&intf->users_srcu);
+-		kref_put(&intf->refcount, intf_free);
+-	} else {
+-		/*
+-		 * Keep memory order straight for RCU readers.  Make
+-		 * sure everything else is committed to memory before
+-		 * setting intf_num to mark the interface valid.
+-		 */
+-		smp_wmb();
+-		intf->intf_num = i;
+-		mutex_unlock(&ipmi_interfaces_mutex);
++	/*
++	 * Keep memory order straight for RCU readers.  Make
++	 * sure everything else is committed to memory before
++	 * setting intf_num to mark the interface valid.
++	 */
++	smp_wmb();
++	intf->intf_num = i;
++	mutex_unlock(&ipmi_interfaces_mutex);
+ 
+-		/* After this point the interface is legal to use. */
+-		call_smi_watchers(i, intf->si_dev);
+-	}
++	/* After this point the interface is legal to use. */
++	call_smi_watchers(i, intf->si_dev);
++
++	return 0;
++
++ out_err_bmc_reg:
++	ipmi_bmc_unregister(intf);
++ out_err_started:
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
++ out_err:
++	list_del_rcu(&intf->link);
++	mutex_unlock(&ipmi_interfaces_mutex);
++	synchronize_srcu(&ipmi_interfaces_srcu);
++	cleanup_srcu_struct(&intf->users_srcu);
++	kref_put(&intf->refcount, intf_free);
+ 
+ 	return rv;
+ }
+@@ -3504,7 +3510,8 @@ void ipmi_unregister_smi(struct ipmi_smi *intf)
+ 	}
+ 	srcu_read_unlock(&intf->users_srcu, index);
+ 
+-	intf->handlers->shutdown(intf->send_info);
++	if (intf->handlers->shutdown)
++		intf->handlers->shutdown(intf->send_info);
+ 
+ 	cleanup_smi_msgs(intf);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 90ec010bffbd..5faa917df1b6 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2083,18 +2083,9 @@ static int try_smi_init(struct smi_info *new_smi)
+ 		 si_to_str[new_smi->io.si_type]);
+ 
+ 	WARN_ON(new_smi->io.dev->init_name != NULL);
+-	kfree(init_name);
+-
+-	return 0;
+-
+-out_err:
+-	if (new_smi->intf) {
+-		ipmi_unregister_smi(new_smi->intf);
+-		new_smi->intf = NULL;
+-	}
+ 
++ out_err:
+ 	kfree(init_name);
+-
+ 	return rv;
+ }
+ 
+@@ -2227,6 +2218,8 @@ static void shutdown_smi(void *send_info)
+ 
+ 	kfree(smi_info->si_sm);
+ 	smi_info->si_sm = NULL;
++
++	smi_info->intf = NULL;
+ }
+ 
+ /*
+@@ -2240,10 +2233,8 @@ static void cleanup_one_si(struct smi_info *smi_info)
+ 
+ 	list_del(&smi_info->link);
+ 
+-	if (smi_info->intf) {
++	if (smi_info->intf)
+ 		ipmi_unregister_smi(smi_info->intf);
+-		smi_info->intf = NULL;
+-	}
+ 
+ 	if (smi_info->pdev) {
+ 		if (smi_info->pdev_registered)
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 18e4650c233b..265d6a6583bc 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -181,6 +181,8 @@ struct ssif_addr_info {
+ 	struct device *dev;
+ 	struct i2c_client *client;
+ 
++	struct i2c_client *added_client;
++
+ 	struct mutex clients_mutex;
+ 	struct list_head clients;
+ 
+@@ -1214,18 +1216,11 @@ static void shutdown_ssif(void *send_info)
+ 		complete(&ssif_info->wake_thread);
+ 		kthread_stop(ssif_info->thread);
+ 	}
+-
+-	/*
+-	 * No message can be outstanding now, we have removed the
+-	 * upper layer and it permitted us to do so.
+-	 */
+-	kfree(ssif_info);
+ }
+ 
+ static int ssif_remove(struct i2c_client *client)
+ {
+ 	struct ssif_info *ssif_info = i2c_get_clientdata(client);
+-	struct ipmi_smi *intf;
+ 	struct ssif_addr_info *addr_info;
+ 
+ 	if (!ssif_info)
+@@ -1235,9 +1230,7 @@ static int ssif_remove(struct i2c_client *client)
+ 	 * After this point, we won't deliver anything asychronously
+ 	 * to the message handler.  We can unregister ourself.
+ 	 */
+-	intf = ssif_info->intf;
+-	ssif_info->intf = NULL;
+-	ipmi_unregister_smi(intf);
++	ipmi_unregister_smi(ssif_info->intf);
+ 
+ 	list_for_each_entry(addr_info, &ssif_infos, link) {
+ 		if (addr_info->client == client) {
+@@ -1246,6 +1239,8 @@ static int ssif_remove(struct i2c_client *client)
+ 		}
+ 	}
+ 
++	kfree(ssif_info);
++
+ 	return 0;
+ }
+ 
+@@ -1648,15 +1643,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 
+  out:
+ 	if (rv) {
+-		/*
+-		 * Note that if addr_info->client is assigned, we
+-		 * leave it.  The i2c client hangs around even if we
+-		 * return a failure here, and the failure here is not
+-		 * propagated back to the i2c code.  This seems to be
+-		 * design intent, strange as it may be.  But if we
+-		 * don't leave it, ssif_platform_remove will not remove
+-		 * the client like it should.
+-		 */
++		addr_info->client = NULL;
+ 		dev_err(&client->dev, "Unable to start IPMI SSIF: %d\n", rv);
+ 		kfree(ssif_info);
+ 	}
+@@ -1676,7 +1663,8 @@ static int ssif_adapter_handler(struct device *adev, void *opaque)
+ 	if (adev->type != &i2c_adapter_type)
+ 		return 0;
+ 
+-	i2c_new_device(to_i2c_adapter(adev), &addr_info->binfo);
++	addr_info->added_client = i2c_new_device(to_i2c_adapter(adev),
++						 &addr_info->binfo);
+ 
+ 	if (!addr_info->adapter_name)
+ 		return 1; /* Only try the first I2C adapter by default. */
+@@ -1849,7 +1837,7 @@ static int ssif_platform_remove(struct platform_device *dev)
+ 		return 0;
+ 
+ 	mutex_lock(&ssif_infos_mutex);
+-	i2c_unregister_device(addr_info->client);
++	i2c_unregister_device(addr_info->added_client);
+ 
+ 	list_del(&addr_info->link);
+ 	kfree(addr_info);
+diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c
+index a5d402de5584..20724abd38bd 100644
+--- a/drivers/clk/clk-fixed-factor.c
++++ b/drivers/clk/clk-fixed-factor.c
+@@ -177,8 +177,15 @@ static struct clk *_of_fixed_factor_clk_setup(struct device_node *node)
+ 
+ 	clk = clk_register_fixed_factor(NULL, clk_name, parent_name, flags,
+ 					mult, div);
+-	if (IS_ERR(clk))
++	if (IS_ERR(clk)) {
++		/*
++		 * If parent clock is not registered, registration would fail.
++		 * Clear OF_POPULATED flag so that clock registration can be
++		 * attempted again from probe function.
++		 */
++		of_node_clear_flag(node, OF_POPULATED);
+ 		return clk;
++	}
+ 
+ 	ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 	if (ret) {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index e2ed078abd90..2d96e7966e94 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -2933,6 +2933,7 @@ struct clk *__clk_create_clk(struct clk_hw *hw, const char *dev_id,
+ 	return clk;
+ }
+ 
++/* keep in sync with __clk_put */
+ void __clk_free_clk(struct clk *clk)
+ {
+ 	clk_prepare_lock();
+@@ -3312,6 +3313,7 @@ int __clk_get(struct clk *clk)
+ 	return 1;
+ }
+ 
++/* keep in sync with __clk_free_clk */
+ void __clk_put(struct clk *clk)
+ {
+ 	struct module *owner;
+@@ -3345,6 +3347,7 @@ void __clk_put(struct clk *clk)
+ 
+ 	module_put(owner);
+ 
++	kfree_const(clk->con_id);
+ 	kfree(clk);
+ }
+ 
+diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
+index 3651c77fbabe..645d8a42007c 100644
+--- a/drivers/clk/imx/clk-imx6sll.c
++++ b/drivers/clk/imx/clk-imx6sll.c
+@@ -92,6 +92,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6sll-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	/* Do not bypass PLLs initially */
+diff --git a/drivers/clk/imx/clk-imx6ul.c b/drivers/clk/imx/clk-imx6ul.c
+index ba563ba50b40..9f1a40498642 100644
+--- a/drivers/clk/imx/clk-imx6ul.c
++++ b/drivers/clk/imx/clk-imx6ul.c
+@@ -142,6 +142,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-anatop");
+ 	base = of_iomap(np, 0);
++	of_node_put(np);
+ 	WARN_ON(!base);
+ 
+ 	clks[IMX6UL_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", base + 0x00, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index 44e4e27eddad..6f7637b19738 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -429,9 +429,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ 		val &= pm_cpu->mask_mux;
+ 	}
+ 
+-	if (val >= num_parents)
+-		return -EINVAL;
+-
+ 	return val;
+ }
+ 
+diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c
+index a896692b74ec..01dada561c10 100644
+--- a/drivers/clk/tegra/clk-bpmp.c
++++ b/drivers/clk/tegra/clk-bpmp.c
+@@ -586,9 +586,15 @@ static struct clk_hw *tegra_bpmp_clk_of_xlate(struct of_phandle_args *clkspec,
+ 	unsigned int id = clkspec->args[0], i;
+ 	struct tegra_bpmp *bpmp = data;
+ 
+-	for (i = 0; i < bpmp->num_clocks; i++)
+-		if (bpmp->clocks[i]->id == id)
+-			return &bpmp->clocks[i]->hw;
++	for (i = 0; i < bpmp->num_clocks; i++) {
++		struct tegra_bpmp_clk *clk = bpmp->clocks[i];
++
++		if (!clk)
++			continue;
++
++		if (clk->id == id)
++			return &clk->hw;
++	}
+ 
+ 	return NULL;
+ }
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index 051b8c6bae64..a9c85095bd56 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -38,6 +38,17 @@ static DEFINE_MUTEX(sev_cmd_mutex);
+ static struct sev_misc_dev *misc_dev;
+ static struct psp_device *psp_master;
+ 
++static int psp_cmd_timeout = 100;
++module_param(psp_cmd_timeout, int, 0644);
++MODULE_PARM_DESC(psp_cmd_timeout, " default timeout value, in seconds, for PSP commands");
++
++static int psp_probe_timeout = 5;
++module_param(psp_probe_timeout, int, 0644);
++MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe");
++
++static bool psp_dead;
++static int psp_timeout;
++
+ static struct psp_device *psp_alloc_struct(struct sp_device *sp)
+ {
+ 	struct device *dev = sp->dev;
+@@ -82,10 +93,19 @@ done:
+ 	return IRQ_HANDLED;
+ }
+ 
+-static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
++static int sev_wait_cmd_ioc(struct psp_device *psp,
++			    unsigned int *reg, unsigned int timeout)
+ {
+-	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
++	int ret;
++
++	ret = wait_event_timeout(psp->sev_int_queue,
++			psp->sev_int_rcvd, timeout * HZ);
++	if (!ret)
++		return -ETIMEDOUT;
++
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
++
++	return 0;
+ }
+ 
+ static int sev_cmd_buffer_len(int cmd)
+@@ -133,12 +153,15 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	if (!psp)
+ 		return -ENODEV;
+ 
++	if (psp_dead)
++		return -EBUSY;
++
+ 	/* Get the physical address of the command buffer */
+ 	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+ 	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+ 
+-	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x\n",
+-		cmd, phys_msb, phys_lsb);
++	dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x timeout %us\n",
++		cmd, phys_msb, phys_lsb, psp_timeout);
+ 
+ 	print_hex_dump_debug("(in):  ", DUMP_PREFIX_OFFSET, 16, 2, data,
+ 			     sev_cmd_buffer_len(cmd), false);
+@@ -154,7 +177,18 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(reg, psp->io_regs + PSP_CMDRESP);
+ 
+ 	/* wait for command completion */
+-	sev_wait_cmd_ioc(psp, &reg);
++	ret = sev_wait_cmd_ioc(psp, &reg, psp_timeout);
++	if (ret) {
++		if (psp_ret)
++			*psp_ret = 0;
++
++		dev_err(psp->dev, "sev command %#x timed out, disabling PSP \n", cmd);
++		psp_dead = true;
++
++		return ret;
++	}
++
++	psp_timeout = psp_cmd_timeout;
+ 
+ 	if (psp_ret)
+ 		*psp_ret = reg & PSP_CMDRESP_ERR_MASK;
+@@ -886,6 +920,8 @@ void psp_pci_init(void)
+ 
+ 	psp_master = sp->psp_data;
+ 
++	psp_timeout = psp_probe_timeout;
++
+ 	if (sev_get_api_version())
+ 		goto err;
+ 
+diff --git a/drivers/crypto/sahara.c b/drivers/crypto/sahara.c
+index 0f2245e1af2b..97d86dca7e85 100644
+--- a/drivers/crypto/sahara.c
++++ b/drivers/crypto/sahara.c
+@@ -1351,7 +1351,7 @@ err_sha_v4_algs:
+ 
+ err_sha_v3_algs:
+ 	for (j = 0; j < k; j++)
+-		crypto_unregister_ahash(&sha_v4_algs[j]);
++		crypto_unregister_ahash(&sha_v3_algs[j]);
+ 
+ err_aes_algs:
+ 	for (j = 0; j < i; j++)
+@@ -1367,7 +1367,7 @@ static void sahara_unregister_algs(struct sahara_dev *dev)
+ 	for (i = 0; i < ARRAY_SIZE(aes_algs); i++)
+ 		crypto_unregister_alg(&aes_algs[i]);
+ 
+-	for (i = 0; i < ARRAY_SIZE(sha_v4_algs); i++)
++	for (i = 0; i < ARRAY_SIZE(sha_v3_algs); i++)
+ 		crypto_unregister_ahash(&sha_v3_algs[i]);
+ 
+ 	if (dev->version > SAHARA_VERSION_3)
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 0b5b3abe054e..e26adf67e218 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -625,7 +625,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 	err = device_register(&devfreq->dev);
+ 	if (err) {
+ 		mutex_unlock(&devfreq->lock);
+-		goto err_dev;
++		put_device(&devfreq->dev);
++		goto err_out;
+ 	}
+ 
+ 	devfreq->trans_table =
+@@ -672,6 +673,7 @@ err_init:
+ 	mutex_unlock(&devfreq_list_lock);
+ 
+ 	device_unregister(&devfreq->dev);
++	devfreq = NULL;
+ err_dev:
+ 	if (devfreq)
+ 		kfree(devfreq);
+diff --git a/drivers/dma/mv_xor_v2.c b/drivers/dma/mv_xor_v2.c
+index c6589ccf1b9a..d349fedf4ab2 100644
+--- a/drivers/dma/mv_xor_v2.c
++++ b/drivers/dma/mv_xor_v2.c
+@@ -899,6 +899,8 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
+ 
+ 	platform_msi_domain_free_irqs(&pdev->dev);
+ 
++	tasklet_kill(&xor_dev->irq_tasklet);
++
+ 	clk_disable_unprepare(xor_dev->clk);
+ 
+ 	return 0;
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index de0957fe9668..bb6dfa2e1e8a 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2257,13 +2257,14 @@ static int pl330_terminate_all(struct dma_chan *chan)
+ 
+ 	pm_runtime_get_sync(pl330->ddma.dev);
+ 	spin_lock_irqsave(&pch->lock, flags);
++
+ 	spin_lock(&pl330->lock);
+ 	_stop(pch->thread);
+-	spin_unlock(&pl330->lock);
+-
+ 	pch->thread->req[0].desc = NULL;
+ 	pch->thread->req[1].desc = NULL;
+ 	pch->thread->req_running = -1;
++	spin_unlock(&pl330->lock);
++
+ 	power_down = pch->active;
+ 	pch->active = false;
+ 
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 2a2ccd9c78e4..8305a1ce8a9b 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -774,8 +774,9 @@ static void rcar_dmac_sync_tcr(struct rcar_dmac_chan *chan)
+ 	/* make sure all remaining data was flushed */
+ 	rcar_dmac_chcr_de_barrier(chan);
+ 
+-	/* back DE */
+-	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
++	/* back DE if remain data exists */
++	if (rcar_dmac_chan_read(chan, RCAR_DMATCR))
++		rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
+ }
+ 
+ static void rcar_dmac_chan_halt(struct rcar_dmac_chan *chan)
+diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
+index b5214c143fee..388a929baf95 100644
+--- a/drivers/firmware/efi/arm-init.c
++++ b/drivers/firmware/efi/arm-init.c
+@@ -259,7 +259,6 @@ void __init efi_init(void)
+ 
+ 	reserve_regions();
+ 	efi_esrt_init();
+-	efi_memmap_unmap();
+ 
+ 	memblock_reserve(params.mmap & PAGE_MASK,
+ 			 PAGE_ALIGN(params.mmap_size +
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 5889cbea60b8..4712445c3213 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -110,11 +110,13 @@ static int __init arm_enable_runtime_services(void)
+ {
+ 	u64 mapsize;
+ 
+-	if (!efi_enabled(EFI_BOOT)) {
++	if (!efi_enabled(EFI_BOOT) || !efi_enabled(EFI_MEMMAP)) {
+ 		pr_info("EFI services will not be available.\n");
+ 		return 0;
+ 	}
+ 
++	efi_memmap_unmap();
++
+ 	if (efi_runtime_disabled()) {
+ 		pr_info("EFI runtime services will be disabled.\n");
+ 		return 0;
+diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
+index 1ab80e06e7c5..e5d80ebd72b6 100644
+--- a/drivers/firmware/efi/esrt.c
++++ b/drivers/firmware/efi/esrt.c
+@@ -326,7 +326,8 @@ void __init efi_esrt_init(void)
+ 
+ 	end = esrt_data + size;
+ 	pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end);
+-	efi_mem_reserve(esrt_data, esrt_data_size);
++	if (md.type == EFI_BOOT_SERVICES_DATA)
++		efi_mem_reserve(esrt_data, esrt_data_size);
+ 
+ 	pr_debug("esrt-init: loaded.\n");
+ }
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 2e33fd552899..99070e2ac3cd 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -665,6 +665,8 @@ static int pxa_gpio_probe(struct platform_device *pdev)
+ 	pchip->irq0 = irq0;
+ 	pchip->irq1 = irq1;
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -EINVAL;
+ 	gpio_reg_base = devm_ioremap(&pdev->dev, res->start,
+ 				     resource_size(res));
+ 	if (!gpio_reg_base)
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 1a8e20363861..a7e49fef73d4 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -92,7 +92,7 @@ struct acpi_gpio_info {
+ };
+ 
+ /* gpio suffixes used for ACPI and device tree lookup */
+-static const char * const gpio_suffixes[] = { "gpios", "gpio" };
++static __maybe_unused const char * const gpio_suffixes[] = { "gpios", "gpio" };
+ 
+ #ifdef CONFIG_OF_GPIO
+ struct gpio_desc *of_find_gpio(struct device *dev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+index c3744d89352c..ebe79bf00145 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+@@ -188,9 +188,9 @@ void __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
+ 	*doorbell_off = kfd->doorbell_id_offset + inx;
+ 
+ 	pr_debug("Get kernel queue doorbell\n"
+-			 "     doorbell offset   == 0x%08X\n"
+-			 "     kernel address    == %p\n",
+-		*doorbell_off, (kfd->doorbell_kernel_ptr + inx));
++			"     doorbell offset   == 0x%08X\n"
++			"     doorbell index    == 0x%x\n",
++		*doorbell_off, inx);
+ 
+ 	return kfd->doorbell_kernel_ptr + inx;
+ }
+@@ -199,7 +199,8 @@ void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr)
+ {
+ 	unsigned int inx;
+ 
+-	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
++	inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr)
++		* sizeof(u32) / kfd->device_info->doorbell_size;
+ 
+ 	mutex_lock(&kfd->doorbell_mutex);
+ 	__clear_bit(inx, kfd->doorbell_available_index);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 1d80b4f7c681..4694386cc623 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -244,6 +244,8 @@ struct kfd_process *kfd_get_process(const struct task_struct *thread)
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	process = find_process(thread);
++	if (!process)
++		return ERR_PTR(-EINVAL);
+ 
+ 	return process;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 8a7890b03d97..6ccd59b87403 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -497,6 +497,10 @@ static bool detect_dp(
+ 			sink_caps->signal = SIGNAL_TYPE_DISPLAY_PORT_MST;
+ 			link->type = dc_connection_mst_branch;
+ 
++			dal_ddc_service_set_transaction_type(
++							link->ddc,
++							sink_caps->transaction_type);
++
+ 			/*
+ 			 * This call will initiate MST topology discovery. Which
+ 			 * will detect MST ports and add new DRM connector DRM
+diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+index d567be49c31b..b487774d8041 100644
+--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+@@ -1020,7 +1020,7 @@ static int pp_get_display_power_level(void *handle,
+ static int pp_get_current_clocks(void *handle,
+ 		struct amd_pp_clock_info *clocks)
+ {
+-	struct amd_pp_simple_clock_info simple_clocks;
++	struct amd_pp_simple_clock_info simple_clocks = { 0 };
+ 	struct pp_clock_info hw_clocks;
+ 	struct pp_hwmgr *hwmgr = handle;
+ 	int ret = 0;
+@@ -1056,7 +1056,10 @@ static int pp_get_current_clocks(void *handle,
+ 	clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+ 	clocks->min_engine_clock_in_sr = hw_clocks.min_eng_clk;
+ 
+-	clocks->max_clocks_state = simple_clocks.level;
++	if (simple_clocks.level == 0)
++		clocks->max_clocks_state = PP_DAL_POWERLEVEL_7;
++	else
++		clocks->max_clocks_state = simple_clocks.level;
+ 
+ 	if (0 == phm_get_current_shallow_sleep_clocks(hwmgr, &hwmgr->current_ps->hardware, &hw_clocks)) {
+ 		clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
+@@ -1159,6 +1162,8 @@ static int pp_get_display_mode_validation_clocks(void *handle,
+ 	if (!hwmgr || !hwmgr->pm_en ||!clocks)
+ 		return -EINVAL;
+ 
++	clocks->level = PP_DAL_POWERLEVEL_7;
++
+ 	mutex_lock(&hwmgr->smu_lock);
+ 
+ 	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index f8e866ceda02..77779adeef28 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_sclk_table = table_info->vdd_dep_on_sclk;
+ 		for (i = 0; i < dep_sclk_table->count; i++)
+-			clocks->clock[i] = dep_sclk_table->entries[i].clk;
++			clocks->clock[i] = dep_sclk_table->entries[i].clk * 10;
+ 		clocks->count = dep_sclk_table->count;
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < sclk_table->count; i++)
+-			clocks->clock[i] = sclk_table->entries[i].clk;
++			clocks->clock[i] = sclk_table->entries[i].clk * 10;
+ 		clocks->count = sclk_table->count;
+ 	}
+ 
+@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 			return -EINVAL;
+ 		dep_mclk_table = table_info->vdd_dep_on_mclk;
+ 		for (i = 0; i < dep_mclk_table->count; i++) {
+-			clocks->clock[i] = dep_mclk_table->entries[i].clk;
++			clocks->clock[i] = dep_mclk_table->entries[i].clk * 10;
+ 			clocks->latency[i] = smu7_get_mem_latency(hwmgr,
+ 						dep_mclk_table->entries[i].clk);
+ 		}
+@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks)
+ 	} else if (hwmgr->pp_table_version == PP_TABLE_V0) {
+ 		mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk;
+ 		for (i = 0; i < mclk_table->count; i++)
+-			clocks->clock[i] = mclk_table->entries[i].clk;
++			clocks->clock[i] = mclk_table->entries[i].clk * 10;
+ 		clocks->count = mclk_table->count;
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 617557bd8c24..0adfc5392cd3 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -1605,17 +1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
+ 	switch (type) {
+ 	case amd_pp_disp_clock:
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.display_clock[i];
++			clocks->clock[i] = data->sys_info.display_clock[i] * 10;
+ 		break;
+ 	case amd_pp_sys_clock:
+ 		table = hwmgr->dyn_state.vddc_dependency_on_sclk;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = table->entries[i].clk;
++			clocks->clock[i] = table->entries[i].clk * 10;
+ 		break;
+ 	case amd_pp_mem_clock:
+ 		clocks->count = SMU8_NUM_NBPMEMORYCLOCK;
+ 		for (i = 0; i < clocks->count; i++)
+-			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i];
++			clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10;
+ 		break;
+ 	default:
+ 		return -1;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 963a4dba8213..9109b69cd052 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -160,7 +160,11 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
+ 		args.ustate = value;
+ 	}
+ 
++	ret = pm_runtime_get_sync(drm->dev);
++	if (IS_ERR_VALUE(ret) && ret != -EACCES)
++		return ret;
+ 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
++	pm_runtime_put_autosuspend(drm->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index f5d3158f0378..c7ec86d6c3c9 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -908,8 +908,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
+ 	get_task_comm(tmpname, current);
+ 	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+ 
+-	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL)))
+-		return ret;
++	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
++		ret = -ENOMEM;
++		goto done;
++	}
+ 
+ 	ret = nouveau_cli_init(drm, name, cli);
+ 	if (ret)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+index 78597da6313a..0e372a190d3f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+@@ -23,6 +23,10 @@
+ #ifdef CONFIG_NOUVEAU_PLATFORM_DRIVER
+ #include "priv.h"
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++#include <asm/dma-iommu.h>
++#endif
++
+ static int
+ nvkm_device_tegra_power_up(struct nvkm_device_tegra *tdev)
+ {
+@@ -105,6 +109,15 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
+ 	unsigned long pgsize_bitmap;
+ 	int ret;
+ 
++#if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
++	if (dev->archdata.mapping) {
++		struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
++
++		arm_iommu_detach_device(dev);
++		arm_iommu_release_mapping(mapping);
++	}
++#endif
++
+ 	if (!tdev->func->iommu_bit)
+ 		return;
+ 
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+index a188a3959f1a..6ad827b93ae1 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+@@ -823,7 +823,7 @@ static void s6e8aa0_read_mtp_id(struct s6e8aa0 *ctx)
+ 	int ret, i;
+ 
+ 	ret = s6e8aa0_dcs_read(ctx, 0xd1, id, ARRAY_SIZE(id));
+-	if (ret < ARRAY_SIZE(id) || id[0] == 0x00) {
++	if (ret < 0 || ret < ARRAY_SIZE(id) || id[0] == 0x00) {
+ 		dev_err(ctx->dev, "read id failed\n");
+ 		ctx->error = -EIO;
+ 		return;
+diff --git a/drivers/gpu/ipu-v3/ipu-csi.c b/drivers/gpu/ipu-v3/ipu-csi.c
+index 5450a2db1219..2beadb3f79c2 100644
+--- a/drivers/gpu/ipu-v3/ipu-csi.c
++++ b/drivers/gpu/ipu-v3/ipu-csi.c
+@@ -318,13 +318,17 @@ static int mbus_code_to_bus_cfg(struct ipu_csi_bus_config *cfg, u32 mbus_code)
+ /*
+  * Fill a CSI bus config struct from mbus_config and mbus_framefmt.
+  */
+-static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
++static int fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 				 struct v4l2_mbus_config *mbus_cfg,
+ 				 struct v4l2_mbus_framefmt *mbus_fmt)
+ {
++	int ret;
++
+ 	memset(csicfg, 0, sizeof(*csicfg));
+ 
+-	mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(csicfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	switch (mbus_cfg->type) {
+ 	case V4L2_MBUS_PARALLEL:
+@@ -356,6 +360,8 @@ static void fill_csi_bus_cfg(struct ipu_csi_bus_config *csicfg,
+ 		/* will never get here, keep compiler quiet */
+ 		break;
+ 	}
++
++	return 0;
+ }
+ 
+ int ipu_csi_init_interface(struct ipu_csi *csi,
+@@ -365,8 +371,11 @@ int ipu_csi_init_interface(struct ipu_csi *csi,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 width, height, data = 0;
++	int ret;
+ 
+-	fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	ret = fill_csi_bus_cfg(&cfg, mbus_cfg, mbus_fmt);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* set default sensor frame width and height */
+ 	width = mbus_fmt->width;
+@@ -587,11 +596,14 @@ int ipu_csi_set_mipi_datatype(struct ipu_csi *csi, u32 vc,
+ 	struct ipu_csi_bus_config cfg;
+ 	unsigned long flags;
+ 	u32 temp;
++	int ret;
+ 
+ 	if (vc > 3)
+ 		return -EINVAL;
+ 
+-	mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	ret = mbus_code_to_bus_cfg(&cfg, mbus_fmt->code);
++	if (ret < 0)
++		return ret;
+ 
+ 	spin_lock_irqsave(&csi->lock, flags);
+ 
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index b10fe26c4891..c9a466be7709 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1178,6 +1178,9 @@ static ssize_t vmbus_chan_attr_show(struct kobject *kobj,
+ 	if (!attribute->show)
+ 		return -EIO;
+ 
++	if (chan->state != CHANNEL_OPENED_STATE)
++		return -EINVAL;
++
+ 	return attribute->show(chan, buf);
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 9bc04c50d45b..1d94ebec027b 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -1027,7 +1027,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
+ 	}
+ 
+ 	pm_runtime_put(&adev->dev);
+-	dev_info(dev, "%s initialized\n", (char *)id->data);
++	dev_info(dev, "CPU%d: ETM v%d.%d initialized\n",
++		 drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);
+ 
+ 	if (boot_enable) {
+ 		coresight_enable(drvdata->csdev);
+@@ -1045,23 +1046,19 @@ err_arch_supported:
+ 	return ret;
+ }
+ 
++#define ETM4x_AMBA_ID(pid)			\
++	{					\
++		.id	= pid,			\
++		.mask	= 0x000fffff,		\
++	}
++
+ static const struct amba_id etm4_ids[] = {
+-	{       /* ETM 4.0 - Cortex-A53  */
+-		.id	= 0x000bb95d,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - Cortex-A57 */
+-		.id	= 0x000bb95e,
+-		.mask	= 0x000fffff,
+-		.data	= "ETM 4.0",
+-	},
+-	{       /* ETM 4.0 - A72, Maia, HiSilicon */
+-		.id = 0x000bb95a,
+-		.mask = 0x000fffff,
+-		.data = "ETM 4.0",
+-	},
+-	{ 0, 0},
++	ETM4x_AMBA_ID(0x000bb95d),		/* Cortex-A53 */
++	ETM4x_AMBA_ID(0x000bb95e),		/* Cortex-A57 */
++	ETM4x_AMBA_ID(0x000bb95a),		/* Cortex-A72 */
++	ETM4x_AMBA_ID(0x000bb959),		/* Cortex-A73 */
++	ETM4x_AMBA_ID(0x000bb9da),		/* Cortex-A35 */
++	{},
+ };
+ 
+ static struct amba_driver etm4x_driver = {
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index 01b7457fe8fc..459ef930d98c 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -40,8 +40,9 @@
+ 
+ /** register definition **/
+ /* FFSR - 0x300 */
+-#define FFSR_FT_STOPPED		BIT(1)
++#define FFSR_FT_STOPPED_BIT	1
+ /* FFCR - 0x304 */
++#define FFCR_FON_MAN_BIT	6
+ #define FFCR_FON_MAN		BIT(6)
+ #define FFCR_STOP_FI		BIT(12)
+ 
+@@ -86,9 +87,9 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
+ 	/* Generate manual flush */
+ 	writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
+ 	/* Wait for flush to complete */
+-	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
++	coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN_BIT, 0);
+ 	/* Wait for formatter to stop */
+-	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
++	coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED_BIT, 1);
+ 
+ 	CS_LOCK(drvdata->base);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index 29e834aab539..b673718952f6 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -108,7 +108,7 @@ static int coresight_find_link_inport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find inport, parent: %s, child: %s\n",
+ 		dev_name(&parent->dev), dev_name(&csdev->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_find_link_outport(struct coresight_device *csdev,
+@@ -126,7 +126,7 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
+ 	dev_err(&csdev->dev, "couldn't find outport, parent: %s, child: %s\n",
+ 		dev_name(&csdev->dev), dev_name(&child->dev));
+ 
+-	return 0;
++	return -ENODEV;
+ }
+ 
+ static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
+@@ -179,6 +179,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
+ 	else
+ 		refport = 0;
+ 
++	if (refport < 0)
++		return refport;
++
+ 	if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
+ 		if (link_ops(csdev)->enable) {
+ 			ret = link_ops(csdev)->enable(csdev, inport, outport);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 715b6fdb4989..5c8ea4e9203c 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -111,22 +111,22 @@
+ #define ASPEED_I2CD_DEV_ADDR_MASK			GENMASK(6, 0)
+ 
+ enum aspeed_i2c_master_state {
++	ASPEED_I2C_MASTER_INACTIVE,
+ 	ASPEED_I2C_MASTER_START,
+ 	ASPEED_I2C_MASTER_TX_FIRST,
+ 	ASPEED_I2C_MASTER_TX,
+ 	ASPEED_I2C_MASTER_RX_FIRST,
+ 	ASPEED_I2C_MASTER_RX,
+ 	ASPEED_I2C_MASTER_STOP,
+-	ASPEED_I2C_MASTER_INACTIVE,
+ };
+ 
+ enum aspeed_i2c_slave_state {
++	ASPEED_I2C_SLAVE_STOP,
+ 	ASPEED_I2C_SLAVE_START,
+ 	ASPEED_I2C_SLAVE_READ_REQUESTED,
+ 	ASPEED_I2C_SLAVE_READ_PROCESSED,
+ 	ASPEED_I2C_SLAVE_WRITE_REQUESTED,
+ 	ASPEED_I2C_SLAVE_WRITE_RECEIVED,
+-	ASPEED_I2C_SLAVE_STOP,
+ };
+ 
+ struct aspeed_i2c_bus {
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index dafcb6f019b3..2702ead01a03 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -722,6 +722,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 	dgid = (union ib_gid *) &addr->sib_addr;
+ 	pkey = ntohs(addr->sib_pkey);
+ 
++	mutex_lock(&lock);
+ 	list_for_each_entry(cur_dev, &dev_list, list) {
+ 		for (p = 1; p <= cur_dev->device->phys_port_cnt; ++p) {
+ 			if (!rdma_cap_af_ib(cur_dev->device, p))
+@@ -748,18 +749,19 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 					cma_dev = cur_dev;
+ 					sgid = gid;
+ 					id_priv->id.port_num = p;
++					goto found;
+ 				}
+ 			}
+ 		}
+ 	}
+-
+-	if (!cma_dev)
+-		return -ENODEV;
++	mutex_unlock(&lock);
++	return -ENODEV;
+ 
+ found:
+ 	cma_attach_to_dev(id_priv, cma_dev);
+-	addr = (struct sockaddr_ib *) cma_src_addr(id_priv);
+-	memcpy(&addr->sib_addr, &sgid, sizeof sgid);
++	mutex_unlock(&lock);
++	addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
++	memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
+ 	cma_translate_ib(addr, &id_priv->id.route.addr.dev_addr);
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/cong.c b/drivers/infiniband/hw/mlx5/cong.c
+index 985fa2637390..7e4e358a4fd8 100644
+--- a/drivers/infiniband/hw/mlx5/cong.c
++++ b/drivers/infiniband/hw/mlx5/cong.c
+@@ -359,9 +359,6 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	int ret;
+ 	char lbuf[11];
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	ret = mlx5_ib_get_cc_params(param->dev, param->port_num, offset, &var);
+ 	if (ret)
+ 		return ret;
+@@ -370,11 +367,7 @@ static ssize_t get_param(struct file *filp, char __user *buf, size_t count,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (copy_to_user(buf, lbuf, ret))
+-		return -EFAULT;
+-
+-	*pos += ret;
+-	return ret;
++	return simple_read_from_buffer(buf, count, pos, lbuf, ret);
+ }
+ 
+ static const struct file_operations dbg_cc_fops = {
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 90a9c461cedc..308456d28afb 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -271,16 +271,16 @@ static ssize_t size_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -310,19 +310,11 @@ static ssize_t size_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->size);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations size_fops = {
+@@ -337,16 +329,16 @@ static ssize_t limit_write(struct file *filp, const char __user *buf,
+ {
+ 	struct mlx5_cache_ent *ent = filp->private_data;
+ 	struct mlx5_ib_dev *dev = ent->dev;
+-	char lbuf[20];
++	char lbuf[20] = {0};
+ 	u32 var;
+ 	int err;
+ 	int c;
+ 
+-	if (copy_from_user(lbuf, buf, sizeof(lbuf)))
++	count = min(count, sizeof(lbuf) - 1);
++	if (copy_from_user(lbuf, buf, count))
+ 		return -EFAULT;
+ 
+ 	c = order2idx(dev, ent->order);
+-	lbuf[sizeof(lbuf) - 1] = 0;
+ 
+ 	if (sscanf(lbuf, "%u", &var) != 1)
+ 		return -EINVAL;
+@@ -372,19 +364,11 @@ static ssize_t limit_read(struct file *filp, char __user *buf, size_t count,
+ 	char lbuf[20];
+ 	int err;
+ 
+-	if (*pos)
+-		return 0;
+-
+ 	err = snprintf(lbuf, sizeof(lbuf), "%d\n", ent->limit);
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (copy_to_user(buf, lbuf, err))
+-		return -EFAULT;
+-
+-	*pos += err;
+-
+-	return err;
++	return simple_read_from_buffer(buf, count, pos, lbuf, err);
+ }
+ 
+ static const struct file_operations limit_fops = {
+diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
+index dfba44a40f0b..fe45d6cad6cd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_recv.c
++++ b/drivers/infiniband/sw/rxe/rxe_recv.c
+@@ -225,9 +225,14 @@ static int hdr_check(struct rxe_pkt_info *pkt)
+ 		goto err1;
+ 	}
+ 
++	if (unlikely(qpn == 0)) {
++		pr_warn_once("QP 0 not supported");
++		goto err1;
++	}
++
+ 	if (qpn != IB_MULTICAST_QPN) {
+-		index = (qpn == 0) ? port->qp_smi_index :
+-			((qpn == 1) ? port->qp_gsi_index : qpn);
++		index = (qpn == 1) ? port->qp_gsi_index : qpn;
++
+ 		qp = rxe_pool_get_index(&rxe->qp_pool, index);
+ 		if (unlikely(!qp)) {
+ 			pr_warn_ratelimited("no qp matches qpn 0x%x\n", qpn);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 6535d9beb24d..a620701f9d41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1028,12 +1028,14 @@ static int ipoib_cm_rep_handler(struct ib_cm_id *cm_id, struct ib_cm_event *even
+ 
+ 	skb_queue_head_init(&skqueue);
+ 
++	netif_tx_lock_bh(p->dev);
+ 	spin_lock_irq(&priv->lock);
+ 	set_bit(IPOIB_FLAG_OPER_UP, &p->flags);
+ 	if (p->neigh)
+ 		while ((skb = __skb_dequeue(&p->neigh->queue)))
+ 			__skb_queue_tail(&skqueue, skb);
+ 	spin_unlock_irq(&priv->lock);
++	netif_tx_unlock_bh(p->dev);
+ 
+ 	while ((skb = __skb_dequeue(&skqueue))) {
+ 		skb->dev = p->dev;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 26cde95bc0f3..7630d5ed2b41 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1787,7 +1787,8 @@ int ipoib_dev_init(struct net_device *dev, struct ib_device *ca, int port)
+ 		goto out_free_pd;
+ 	}
+ 
+-	if (ipoib_neigh_hash_init(priv) < 0) {
++	ret = ipoib_neigh_hash_init(priv);
++	if (ret) {
+ 		pr_warn("%s failed to init neigh hash\n", dev->name);
+ 		goto out_dev_uninit;
+ 	}
+diff --git a/drivers/input/joystick/pxrc.c b/drivers/input/joystick/pxrc.c
+index 07a0dbd3ced2..cfb410cf0789 100644
+--- a/drivers/input/joystick/pxrc.c
++++ b/drivers/input/joystick/pxrc.c
+@@ -120,48 +120,51 @@ static void pxrc_close(struct input_dev *input)
+ 	mutex_unlock(&pxrc->pm_mutex);
+ }
+ 
++static void pxrc_free_urb(void *_pxrc)
++{
++	struct pxrc *pxrc = _pxrc;
++
++	usb_free_urb(pxrc->urb);
++}
++
+ static int pxrc_usb_init(struct pxrc *pxrc)
+ {
+ 	struct usb_endpoint_descriptor *epirq;
+ 	unsigned int pipe;
+-	int retval;
++	int error;
+ 
+ 	/* Set up the endpoint information */
+ 	/* This device only has an interrupt endpoint */
+-	retval = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
+-			NULL, NULL, &epirq, NULL);
+-	if (retval) {
+-		dev_err(&pxrc->intf->dev,
+-			"Could not find endpoint\n");
+-		goto error;
++	error = usb_find_common_endpoints(pxrc->intf->cur_altsetting,
++					  NULL, NULL, &epirq, NULL);
++	if (error) {
++		dev_err(&pxrc->intf->dev, "Could not find endpoint\n");
++		return error;
+ 	}
+ 
+ 	pxrc->bsize = usb_endpoint_maxp(epirq);
+ 	pxrc->epaddr = epirq->bEndpointAddress;
+ 	pxrc->data = devm_kmalloc(&pxrc->intf->dev, pxrc->bsize, GFP_KERNEL);
+-	if (!pxrc->data) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->data)
++		return -ENOMEM;
+ 
+ 	usb_set_intfdata(pxrc->intf, pxrc);
+ 	usb_make_path(pxrc->udev, pxrc->phys, sizeof(pxrc->phys));
+ 	strlcat(pxrc->phys, "/input0", sizeof(pxrc->phys));
+ 
+ 	pxrc->urb = usb_alloc_urb(0, GFP_KERNEL);
+-	if (!pxrc->urb) {
+-		retval = -ENOMEM;
+-		goto error;
+-	}
++	if (!pxrc->urb)
++		return -ENOMEM;
++
++	error = devm_add_action_or_reset(&pxrc->intf->dev, pxrc_free_urb, pxrc);
++	if (error)
++		return error;
+ 
+ 	pipe = usb_rcvintpipe(pxrc->udev, pxrc->epaddr),
+ 	usb_fill_int_urb(pxrc->urb, pxrc->udev, pipe, pxrc->data, pxrc->bsize,
+ 						pxrc_usb_irq, pxrc, 1);
+ 
+-error:
+-	return retval;
+-
+-
++	return 0;
+ }
+ 
+ static int pxrc_input_init(struct pxrc *pxrc)
+@@ -197,7 +200,7 @@ static int pxrc_probe(struct usb_interface *intf,
+ 		      const struct usb_device_id *id)
+ {
+ 	struct pxrc *pxrc;
+-	int retval;
++	int error;
+ 
+ 	pxrc = devm_kzalloc(&intf->dev, sizeof(*pxrc), GFP_KERNEL);
+ 	if (!pxrc)
+@@ -207,29 +210,20 @@ static int pxrc_probe(struct usb_interface *intf,
+ 	pxrc->udev = usb_get_dev(interface_to_usbdev(intf));
+ 	pxrc->intf = intf;
+ 
+-	retval = pxrc_usb_init(pxrc);
+-	if (retval)
+-		goto error;
++	error = pxrc_usb_init(pxrc);
++	if (error)
++		return error;
+ 
+-	retval = pxrc_input_init(pxrc);
+-	if (retval)
+-		goto err_free_urb;
++	error = pxrc_input_init(pxrc);
++	if (error)
++		return error;
+ 
+ 	return 0;
+-
+-err_free_urb:
+-	usb_free_urb(pxrc->urb);
+-
+-error:
+-	return retval;
+ }
+ 
+ static void pxrc_disconnect(struct usb_interface *intf)
+ {
+-	struct pxrc *pxrc = usb_get_intfdata(intf);
+-
+-	usb_free_urb(pxrc->urb);
+-	usb_set_intfdata(intf, NULL);
++	/* All driver resources are devm-managed. */
+ }
+ 
+ static int pxrc_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/drivers/input/touchscreen/rohm_bu21023.c b/drivers/input/touchscreen/rohm_bu21023.c
+index bda0500c9b57..714affdd742f 100644
+--- a/drivers/input/touchscreen/rohm_bu21023.c
++++ b/drivers/input/touchscreen/rohm_bu21023.c
+@@ -304,7 +304,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 	msg[1].len = len;
+ 	msg[1].buf = buf;
+ 
+-	i2c_lock_adapter(adap);
++	i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	for (i = 0; i < 2; i++) {
+ 		if (__i2c_transfer(adap, &msg[i], 1) < 0) {
+@@ -313,7 +313,7 @@ static int rohm_i2c_burst_read(struct i2c_client *client, u8 start, void *buf,
+ 		}
+ 	}
+ 
+-	i2c_unlock_adapter(adap);
++	i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index b73c6a7bf7f2..b7076aa24d6b 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -1302,6 +1302,7 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
+ 
+ 	/* Sync our overflow flag, as we believe we're up to speed */
+ 	q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
++	writel(q->cons, q->cons_reg);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index 50e3a9fcf43e..b5948ba6b3b3 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -192,6 +192,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ {
+ 	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+ 	struct device *dev = cfg->iommu_dev;
++	phys_addr_t phys;
+ 	dma_addr_t dma;
+ 	size_t size = ARM_V7S_TABLE_SIZE(lvl);
+ 	void *table = NULL;
+@@ -200,6 +201,10 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
+ 	else if (lvl == 2)
+ 		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++	phys = virt_to_phys(table);
++	if (phys != (arm_v7s_iopte)phys)
++		/* Doesn't fit in PTE */
++		goto out_free;
+ 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+@@ -209,7 +214,7 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		 * address directly, so if the DMA layer suggests otherwise by
+ 		 * translating or truncating them, that bodes very badly...
+ 		 */
+-		if (dma != virt_to_phys(table))
++		if (dma != phys)
+ 			goto out_unmap;
+ 	}
+ 	kmemleak_ignore(table);
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 010a254305dd..88641b4560bc 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -237,7 +237,8 @@ static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
+ 	void *pages;
+ 
+ 	VM_BUG_ON((gfp & __GFP_HIGHMEM));
+-	p = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO, order);
++	p = alloc_pages_node(dev ? dev_to_node(dev) : NUMA_NO_NODE,
++			     gfp | __GFP_ZERO, order);
+ 	if (!p)
+ 		return NULL;
+ 
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index feb1664815b7..6e2882cda55d 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -47,6 +47,7 @@ struct ipmmu_features {
+ 	unsigned int number_of_contexts;
+ 	bool setup_imbuscr;
+ 	bool twobit_imttbcr_sl0;
++	bool reserved_context;
+ };
+ 
+ struct ipmmu_vmsa_device {
+@@ -916,6 +917,7 @@ static const struct ipmmu_features ipmmu_features_default = {
+ 	.number_of_contexts = 1, /* software only tested with one context */
+ 	.setup_imbuscr = true,
+ 	.twobit_imttbcr_sl0 = false,
++	.reserved_context = false,
+ };
+ 
+ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+@@ -924,6 +926,7 @@ static const struct ipmmu_features ipmmu_features_r8a7795 = {
+ 	.number_of_contexts = 8,
+ 	.setup_imbuscr = false,
+ 	.twobit_imttbcr_sl0 = true,
++	.reserved_context = true,
+ };
+ 
+ static const struct of_device_id ipmmu_of_ids[] = {
+@@ -1017,6 +1020,11 @@ static int ipmmu_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		ipmmu_device_reset(mmu);
++
++		if (mmu->features->reserved_context) {
++			dev_info(&pdev->dev, "IPMMU context 0 is reserved\n");
++			set_bit(0, mmu->ctx);
++		}
+ 	}
+ 
+ 	/*
+diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
+index b57f764d6a16..93ebba6dcc25 100644
+--- a/drivers/lightnvm/pblk-init.c
++++ b/drivers/lightnvm/pblk-init.c
+@@ -716,10 +716,11 @@ static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
+ 
+ 		/*
+ 		 * In 1.2 spec. chunk state is not persisted by the device. Thus
+-		 * some of the values are reset each time pblk is instantiated.
++		 * some of the values are reset each time pblk is instantiated,
++		 * so we have to assume that the block is closed.
+ 		 */
+ 		if (lun_bb_meta[line->id] == NVM_BLK_T_FREE)
+-			chunk->state =  NVM_CHK_ST_FREE;
++			chunk->state =  NVM_CHK_ST_CLOSED;
+ 		else
+ 			chunk->state = NVM_CHK_ST_OFFLINE;
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index 3a5069183859..d83466b3821b 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -742,9 +742,10 @@ static int pblk_recov_check_line_version(struct pblk *pblk,
+ 		return 1;
+ 	}
+ 
+-#ifdef NVM_DEBUG
++#ifdef CONFIG_NVM_PBLK_DEBUG
+ 	if (header->version_minor > EMETA_VERSION_MINOR)
+-		pr_info("pblk: newer line minor version found: %d\n", line_v);
++		pr_info("pblk: newer line minor version found: %d\n",
++				header->version_minor);
+ #endif
+ 
+ 	return 0;
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 12decdbd722d..fc65f0dedf7f 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -99,10 +99,26 @@ static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
+ {
+ 	struct scatterlist sg;
+ 
+-	sg_init_one(&sg, data, len);
+-	ahash_request_set_crypt(req, &sg, NULL, len);
+-
+-	return crypto_wait_req(crypto_ahash_update(req), wait);
++	if (likely(!is_vmalloc_addr(data))) {
++		sg_init_one(&sg, data, len);
++		ahash_request_set_crypt(req, &sg, NULL, len);
++		return crypto_wait_req(crypto_ahash_update(req), wait);
++	} else {
++		do {
++			int r;
++			size_t this_step = min_t(size_t, len, PAGE_SIZE - offset_in_page(data));
++			flush_kernel_vmap_range((void *)data, this_step);
++			sg_init_table(&sg, 1);
++			sg_set_page(&sg, vmalloc_to_page(data), this_step, offset_in_page(data));
++			ahash_request_set_crypt(req, &sg, NULL, this_step);
++			r = crypto_wait_req(crypto_ahash_update(req), wait);
++			if (unlikely(r))
++				return r;
++			data += this_step;
++			len -= this_step;
++		} while (len);
++		return 0;
++	}
+ }
+ 
+ /*
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index f32ec7342ef0..5653e8eebe2b 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -1377,6 +1377,11 @@ int vb2_core_qbuf(struct vb2_queue *q, unsigned int index, void *pb)
+ 	struct vb2_buffer *vb;
+ 	int ret;
+ 
++	if (q->error) {
++		dprintk(1, "fatal error occurred on queue\n");
++		return -EIO;
++	}
++
+ 	vb = q->bufs[index];
+ 
+ 	switch (vb->state) {
+diff --git a/drivers/media/i2c/ov5645.c b/drivers/media/i2c/ov5645.c
+index b3f762578f7f..1722cdab0daf 100644
+--- a/drivers/media/i2c/ov5645.c
++++ b/drivers/media/i2c/ov5645.c
+@@ -510,8 +510,8 @@ static const struct reg_value ov5645_setting_full[] = {
+ };
+ 
+ static const s64 link_freq[] = {
+-	222880000,
+-	334320000
++	224000000,
++	336000000
+ };
+ 
+ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+@@ -520,7 +520,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 960,
+ 		.data = ov5645_setting_sxga,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_sxga),
+-		.pixel_clock = 111440000,
++		.pixel_clock = 112000000,
+ 		.link_freq = 0 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -528,7 +528,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1080,
+ 		.data = ov5645_setting_1080p,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_1080p),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ 	{
+@@ -536,7 +536,7 @@ static const struct ov5645_mode_info ov5645_mode_info_data[] = {
+ 		.height = 1944,
+ 		.data = ov5645_setting_full,
+ 		.data_size = ARRAY_SIZE(ov5645_setting_full),
+-		.pixel_clock = 167160000,
++		.pixel_clock = 168000000,
+ 		.link_freq = 1 /* an index in link_freq[] */
+ 	},
+ };
+@@ -1145,7 +1145,8 @@ static int ov5645_probe(struct i2c_client *client,
+ 		return ret;
+ 	}
+ 
+-	if (xclk_freq != 23880000) {
++	/* external clock must be 24MHz, allow 1% tolerance */
++	if (xclk_freq < 23760000 || xclk_freq > 24240000) {
+ 		dev_err(dev, "external clock frequency %u is not supported\n",
+ 			xclk_freq);
+ 		return -EINVAL;
+diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
+index 0ea8dd44026c..3a06c000f97b 100644
+--- a/drivers/media/pci/tw686x/tw686x-video.c
++++ b/drivers/media/pci/tw686x/tw686x-video.c
+@@ -1190,6 +1190,14 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 			return err;
+ 	}
+ 
++	/* Initialize vc->dev and vc->ch for the error path */
++	for (ch = 0; ch < max_channels(dev); ch++) {
++		struct tw686x_video_channel *vc = &dev->video_channels[ch];
++
++		vc->dev = dev;
++		vc->ch = ch;
++	}
++
+ 	for (ch = 0; ch < max_channels(dev); ch++) {
+ 		struct tw686x_video_channel *vc = &dev->video_channels[ch];
+ 		struct video_device *vdev;
+@@ -1198,9 +1206,6 @@ int tw686x_video_init(struct tw686x_dev *dev)
+ 		spin_lock_init(&vc->qlock);
+ 		INIT_LIST_HEAD(&vc->vidq_queued);
+ 
+-		vc->dev = dev;
+-		vc->ch = ch;
+-
+ 		/* default settings */
+ 		err = tw686x_set_standard(vc, V4L2_STD_NTSC);
+ 		if (err)
+diff --git a/drivers/mfd/88pm860x-i2c.c b/drivers/mfd/88pm860x-i2c.c
+index 84e313107233..7b9052ea7413 100644
+--- a/drivers/mfd/88pm860x-i2c.c
++++ b/drivers/mfd/88pm860x-i2c.c
+@@ -146,14 +146,14 @@ int pm860x_page_reg_write(struct i2c_client *i2c, int reg,
+ 	unsigned char zero;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xFA, 0, &zero);
+ 	read_device(i2c, 0xFB, 0, &zero);
+ 	read_device(i2c, 0xFF, 0, &zero);
+ 	ret = write_device(i2c, reg, 1, &data);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_reg_write);
+@@ -164,14 +164,14 @@ int pm860x_page_bulk_read(struct i2c_client *i2c, int reg,
+ 	unsigned char zero = 0;
+ 	int ret;
+ 
+-	i2c_lock_adapter(i2c->adapter);
++	i2c_lock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	read_device(i2c, 0xfa, 0, &zero);
+ 	read_device(i2c, 0xfb, 0, &zero);
+ 	read_device(i2c, 0xff, 0, &zero);
+ 	ret = read_device(i2c, reg, count, buf);
+ 	read_device(i2c, 0xFE, 0, &zero);
+ 	read_device(i2c, 0xFC, 0, &zero);
+-	i2c_unlock_adapter(i2c->adapter);
++	i2c_unlock_bus(i2c->adapter, I2C_LOCK_SEGMENT);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(pm860x_page_bulk_read);
+diff --git a/drivers/misc/hmc6352.c b/drivers/misc/hmc6352.c
+index eeb7eef62174..38f90e179927 100644
+--- a/drivers/misc/hmc6352.c
++++ b/drivers/misc/hmc6352.c
+@@ -27,6 +27,7 @@
+ #include <linux/err.h>
+ #include <linux/delay.h>
+ #include <linux/sysfs.h>
++#include <linux/nospec.h>
+ 
+ static DEFINE_MUTEX(compass_mutex);
+ 
+@@ -50,6 +51,7 @@ static int compass_store(struct device *dev, const char *buf, size_t count,
+ 		return ret;
+ 	if (val >= strlen(map))
+ 		return -EINVAL;
++	val = array_index_nospec(val, strlen(map));
+ 	mutex_lock(&compass_mutex);
+ 	ret = compass_command(c, map[val]);
+ 	mutex_unlock(&compass_mutex);
+diff --git a/drivers/misc/ibmvmc.c b/drivers/misc/ibmvmc.c
+index fb83d1375638..50d82c3d032a 100644
+--- a/drivers/misc/ibmvmc.c
++++ b/drivers/misc/ibmvmc.c
+@@ -2131,7 +2131,7 @@ static int ibmvmc_init_crq_queue(struct crq_server_adapter *adapter)
+ 	retrc = plpar_hcall_norets(H_REG_CRQ,
+ 				   vdev->unit_address,
+ 				   queue->msg_token, PAGE_SIZE);
+-	retrc = rc;
++	rc = retrc;
+ 
+ 	if (rc == H_RESOURCE)
+ 		rc = ibmvmc_reset_crq_queue(adapter);
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 0208c4b027c5..fa0236a5e59a 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -267,7 +267,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
+ 
+ 	ret = 0;
+ 	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0);
+-	if (bytes_recv < if_version_length) {
++	if (bytes_recv < 0 || bytes_recv < if_version_length) {
+ 		dev_err(bus->dev, "Could not read IF version\n");
+ 		ret = -EIO;
+ 		goto err;
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index b1133739fb4b..692b2f9a18cb 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -505,17 +505,15 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 
+ 	cl = cldev->cl;
+ 
++	mutex_lock(&bus->device_lock);
+ 	if (cl->state == MEI_FILE_UNINITIALIZED) {
+-		mutex_lock(&bus->device_lock);
+ 		ret = mei_cl_link(cl);
+-		mutex_unlock(&bus->device_lock);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 		/* update pointers */
+ 		cl->cldev = cldev;
+ 	}
+ 
+-	mutex_lock(&bus->device_lock);
+ 	if (mei_cl_is_connected(cl)) {
+ 		ret = 0;
+ 		goto out;
+@@ -600,9 +598,8 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-out:
+ 	mei_cl_bus_module_put(cldev);
+-
++out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+ 	mei_cl_unlink(cl);
+@@ -860,12 +857,13 @@ static void mei_cl_bus_dev_release(struct device *dev)
+ 
+ 	mei_me_cl_put(cldev->me_cl);
+ 	mei_dev_bus_put(cldev->bus);
++	mei_cl_unlink(cldev->cl);
+ 	kfree(cldev->cl);
+ 	kfree(cldev);
+ }
+ 
+ static const struct device_type mei_cl_device_type = {
+-	.release	= mei_cl_bus_dev_release,
++	.release = mei_cl_bus_dev_release,
+ };
+ 
+ /**
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index fe6595fe94f1..995ff1b7e7b5 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1140,15 +1140,18 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
+ 
+ 		props_res = (struct hbm_props_response *)mei_msg;
+ 
+-		if (props_res->status) {
++		if (props_res->status == MEI_HBMS_CLIENT_NOT_FOUND) {
++			dev_dbg(dev->dev, "hbm: properties response: %d CLIENT_NOT_FOUND\n",
++				props_res->me_addr);
++		} else if (props_res->status) {
+ 			dev_err(dev->dev, "hbm: properties response: wrong status = %d %s\n",
+ 				props_res->status,
+ 				mei_hbm_status_str(props_res->status));
+ 			return -EPROTO;
++		} else {
++			mei_hbm_me_cl_add(dev, props_res);
+ 		}
+ 
+-		mei_hbm_me_cl_add(dev, props_res);
+-
+ 		/* request property for the next client */
+ 		if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
+ 			return -EIO;
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 09cb89645d06..2cfec33178c1 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -517,19 +517,23 @@ static struct mmc_host_ops meson_mx_mmc_ops = {
+ static struct platform_device *meson_mx_mmc_slot_pdev(struct device *parent)
+ {
+ 	struct device_node *slot_node;
++	struct platform_device *pdev;
+ 
+ 	/*
+ 	 * TODO: the MMC core framework currently does not support
+ 	 * controllers with multiple slots properly. So we only register
+ 	 * the first slot for now
+ 	 */
+-	slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot");
++	slot_node = of_get_compatible_child(parent->of_node, "mmc-slot");
+ 	if (!slot_node) {
+ 		dev_warn(parent, "no 'mmc-slot' sub-node found\n");
+ 		return ERR_PTR(-ENOENT);
+ 	}
+ 
+-	return of_platform_device_create(slot_node, NULL, parent);
++	pdev = of_platform_device_create(slot_node, NULL, parent);
++	of_node_put(slot_node);
++
++	return pdev;
+ }
+ 
+ static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host)
+diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
+index 071693ebfe18..68760d4a5d3d 100644
+--- a/drivers/mmc/host/omap_hsmmc.c
++++ b/drivers/mmc/host/omap_hsmmc.c
+@@ -2177,6 +2177,7 @@ static int omap_hsmmc_remove(struct platform_device *pdev)
+ 	dma_release_channel(host->tx_chan);
+ 	dma_release_channel(host->rx_chan);
+ 
++	dev_pm_clear_wake_irq(host->dev);
+ 	pm_runtime_dont_use_autosuspend(host->dev);
+ 	pm_runtime_put_sync(host->dev);
+ 	pm_runtime_disable(host->dev);
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4ffa6b173a21..8332f56e6c0d 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -22,6 +22,7 @@
+ #include <linux/sys_soc.h>
+ #include <linux/clk.h>
+ #include <linux/ktime.h>
++#include <linux/dma-mapping.h>
+ #include <linux/mmc/host.h>
+ #include "sdhci-pltfm.h"
+ #include "sdhci-esdhc.h"
+@@ -427,6 +428,11 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
+ static int esdhc_of_enable_dma(struct sdhci_host *host)
+ {
+ 	u32 value;
++	struct device *dev = mmc_dev(host->mmc);
++
++	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
++	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
++		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+ 
+ 	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
+ 	value |= ESDHC_DMA_SNOOP;
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 970d38f68939..137df06b9b6e 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -334,7 +334,8 @@ static const struct sdhci_pltfm_data sdhci_tegra30_pdata = {
+ 		  SDHCI_QUIRK_NO_HISPD_BIT |
+ 		  SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ 		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+-	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
++	.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
++		   SDHCI_QUIRK2_BROKEN_HS200,
+ 	.ops  = &tegra_sdhci_ops,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 1c828e0e9905..a7b5602ef6f7 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -3734,14 +3734,21 @@ int sdhci_setup_host(struct sdhci_host *host)
+ 	    mmc_gpio_get_cd(host->mmc) < 0)
+ 		mmc->caps |= MMC_CAP_NEEDS_POLL;
+ 
+-	/* If vqmmc regulator and no 1.8V signalling, then there's no UHS */
+ 	if (!IS_ERR(mmc->supply.vqmmc)) {
+ 		ret = regulator_enable(mmc->supply.vqmmc);
++
++		/* If vqmmc provides no 1.8V signalling, then there's no UHS */
+ 		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 1700000,
+ 						    1950000))
+ 			host->caps1 &= ~(SDHCI_SUPPORT_SDR104 |
+ 					 SDHCI_SUPPORT_SDR50 |
+ 					 SDHCI_SUPPORT_DDR50);
++
++		/* In eMMC case vqmmc might be a fixed 1.8V regulator */
++		if (!regulator_is_supported_voltage(mmc->supply.vqmmc, 2700000,
++						    3600000))
++			host->flags &= ~SDHCI_SIGNALING_330;
++
+ 		if (ret) {
+ 			pr_warn("%s: Failed to enable vqmmc regulator: %d\n",
+ 				mmc_hostname(mmc), ret);
+diff --git a/drivers/mtd/maps/solutionengine.c b/drivers/mtd/maps/solutionengine.c
+index bb580bc16445..c07f21b20463 100644
+--- a/drivers/mtd/maps/solutionengine.c
++++ b/drivers/mtd/maps/solutionengine.c
+@@ -59,9 +59,9 @@ static int __init init_soleng_maps(void)
+ 			return -ENXIO;
+ 		}
+ 	}
+-	printk(KERN_NOTICE "Solution Engine: Flash at 0x%08lx, EPROM at 0x%08lx\n",
+-	       soleng_flash_map.phys & 0x1fffffff,
+-	       soleng_eprom_map.phys & 0x1fffffff);
++	printk(KERN_NOTICE "Solution Engine: Flash at 0x%pap, EPROM at 0x%pap\n",
++	       &soleng_flash_map.phys,
++	       &soleng_eprom_map.phys);
+ 	flash_mtd->owner = THIS_MODULE;
+ 
+ 	eprom_mtd = do_map_probe("map_rom", &soleng_eprom_map);
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index cd67c85cc87d..02389528f622 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -160,8 +160,12 @@ static ssize_t mtdchar_read(struct file *file, char __user *buf, size_t count,
+ 
+ 	pr_debug("MTD_read\n");
+ 
+-	if (*ppos + count > mtd->size)
+-		count = mtd->size - *ppos;
++	if (*ppos + count > mtd->size) {
++		if (*ppos < mtd->size)
++			count = mtd->size - *ppos;
++		else
++			count = 0;
++	}
+ 
+ 	if (!count)
+ 		return 0;
+@@ -246,7 +250,7 @@ static ssize_t mtdchar_write(struct file *file, const char __user *buf, size_t c
+ 
+ 	pr_debug("MTD_write\n");
+ 
+-	if (*ppos == mtd->size)
++	if (*ppos >= mtd->size)
+ 		return -ENOSPC;
+ 
+ 	if (*ppos + count > mtd->size)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+index cc1e4f820e64..533094233659 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-desc.c
+@@ -289,7 +289,7 @@ static int xgbe_alloc_pages(struct xgbe_prv_data *pdata,
+ 	struct page *pages = NULL;
+ 	dma_addr_t pages_dma;
+ 	gfp_t gfp;
+-	int order, ret;
++	int order;
+ 
+ again:
+ 	order = alloc_order;
+@@ -316,10 +316,9 @@ again:
+ 	/* Map the pages */
+ 	pages_dma = dma_map_page(pdata->dev, pages, 0,
+ 				 PAGE_SIZE << order, DMA_FROM_DEVICE);
+-	ret = dma_mapping_error(pdata->dev, pages_dma);
+-	if (ret) {
++	if (dma_mapping_error(pdata->dev, pages_dma)) {
+ 		put_page(pages);
+-		return ret;
++		return -ENOMEM;
+ 	}
+ 
+ 	pa->pages = pages;
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+index 929d485a3a2f..e088dedc1747 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_device.c
+@@ -493,6 +493,9 @@ static void cn23xx_pf_setup_global_output_regs(struct octeon_device *oct)
+ 	for (q_no = srn; q_no < ern; q_no++) {
+ 		reg_val = octeon_read_csr(oct, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+index 9338a0008378..1f8b7f651254 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_vf_device.c
+@@ -165,6 +165,9 @@ static void cn23xx_vf_setup_global_output_regs(struct octeon_device *oct)
+ 		reg_val =
+ 		    octeon_read_csr(oct, CN23XX_VF_SLI_OQ_PKT_CONTROL(q_no));
+ 
++		/* clear IPTR */
++		reg_val &= ~CN23XX_PKT_OUTPUT_CTL_IPTR;
++
+ 		/* set DPTR */
+ 		reg_val |= CN23XX_PKT_OUTPUT_CTL_DPTR;
+ 
+diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
+index 6d7404f66f84..c9a061e707c4 100644
+--- a/drivers/net/ethernet/cortina/gemini.c
++++ b/drivers/net/ethernet/cortina/gemini.c
+@@ -1753,7 +1753,10 @@ static int gmac_open(struct net_device *netdev)
+ 	phy_start(netdev->phydev);
+ 
+ 	err = geth_resize_freeq(port);
+-	if (err) {
++	/* It's fine if it's just busy, the other port has set up
++	 * the freeq in that case.
++	 */
++	if (err && (err != -EBUSY)) {
+ 		netdev_err(netdev, "could not resize freeq\n");
+ 		goto err_stop_phy;
+ 	}
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index ff92ab1daeb8..1e9d882c04ef 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -4500,7 +4500,7 @@ int be_cmd_get_profile_config(struct be_adapter *adapter,
+ 				port_res->max_vfs += le16_to_cpu(pcie->num_vfs);
+ 			}
+ 		}
+-		return status;
++		goto err;
+ 	}
+ 
+ 	pcie = be_get_pcie_desc(resp->func_param, desc_count,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 25a73bb2e642..9d69621f5ab4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3081,7 +3081,6 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	priv->dev = &pdev->dev;
+ 	priv->netdev = netdev;
+ 	priv->ae_handle = handle;
+-	priv->ae_handle->reset_level = HNAE3_NONE_RESET;
+ 	priv->ae_handle->last_reset_time = jiffies;
+ 	priv->tx_timeout_count = 0;
+ 
+@@ -3102,6 +3101,11 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	/* Carrier off reporting is important to ethtool even BEFORE open */
+ 	netif_carrier_off(netdev);
+ 
++	if (handle->flags & HNAE3_SUPPORT_VF)
++		handle->reset_level = HNAE3_VF_RESET;
++	else
++		handle->reset_level = HNAE3_FUNC_RESET;
++
+ 	ret = hns3_get_ring_config(priv);
+ 	if (ret) {
+ 		ret = -ENOMEM;
+@@ -3418,7 +3422,7 @@ static int hns3_reset_notify_down_enet(struct hnae3_handle *handle)
+ 	struct net_device *ndev = kinfo->netdev;
+ 
+ 	if (!netif_running(ndev))
+-		return -EIO;
++		return 0;
+ 
+ 	return hns3_nic_net_stop(ndev);
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 6fd7ea8074b0..13f43b74fd6d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2825,15 +2825,13 @@ static void hclge_clear_reset_cause(struct hclge_dev *hdev)
+ static void hclge_reset(struct hclge_dev *hdev)
+ {
+ 	/* perform reset of the stack & ae device for a client */
+-
++	rtnl_lock();
+ 	hclge_notify_client(hdev, HNAE3_DOWN_CLIENT);
+ 
+ 	if (!hclge_reset_wait(hdev)) {
+-		rtnl_lock();
+ 		hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);
+ 		hclge_reset_ae_dev(hdev->ae_dev);
+ 		hclge_notify_client(hdev, HNAE3_INIT_CLIENT);
+-		rtnl_unlock();
+ 
+ 		hclge_clear_reset_cause(hdev);
+ 	} else {
+@@ -2843,6 +2841,7 @@ static void hclge_reset(struct hclge_dev *hdev)
+ 	}
+ 
+ 	hclge_notify_client(hdev, HNAE3_UP_CLIENT);
++	rtnl_unlock();
+ }
+ 
+ static void hclge_reset_event(struct hnae3_handle *handle)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 0319ed9ef8b8..f7f08e3fa761 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -5011,6 +5011,12 @@ static int mvpp2_probe(struct platform_device *pdev)
+ 			(unsigned long)of_device_get_match_data(&pdev->dev);
+ 	}
+ 
++	/* multi queue mode isn't supported on PPV2.1, fallback to single
++	 * mode
++	 */
++	if (priv->hw_version == MVPP21)
++		queue_mode = MVPP2_QDIST_SINGLE_MODE;
++
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(base))
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 384c1fa49081..f167f4eec3ff 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -452,6 +452,7 @@ const char *mlx5_command_str(int command)
+ 	MLX5_COMMAND_STR_CASE(SET_HCA_CAP);
+ 	MLX5_COMMAND_STR_CASE(QUERY_ISSI);
+ 	MLX5_COMMAND_STR_CASE(SET_ISSI);
++	MLX5_COMMAND_STR_CASE(SET_DRIVER_VERSION);
+ 	MLX5_COMMAND_STR_CASE(CREATE_MKEY);
+ 	MLX5_COMMAND_STR_CASE(QUERY_MKEY);
+ 	MLX5_COMMAND_STR_CASE(DESTROY_MKEY);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index b994b80d5714..922811fb66e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -132,11 +132,11 @@ void mlx5_add_device(struct mlx5_interface *intf, struct mlx5_priv *priv)
+ 	delayed_event_start(priv);
+ 
+ 	dev_ctx->context = intf->add(dev);
+-	set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+-	if (intf->attach)
+-		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+-
+ 	if (dev_ctx->context) {
++		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
++		if (intf->attach)
++			set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
++
+ 		spin_lock_irq(&priv->ctx_lock);
+ 		list_add_tail(&dev_ctx->list, &priv->ctx_list);
+ 
+@@ -211,12 +211,17 @@ static void mlx5_attach_interface(struct mlx5_interface *intf, struct mlx5_priv
+ 	if (intf->attach) {
+ 		if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))
+ 			goto out;
+-		intf->attach(dev, dev_ctx->context);
++		if (intf->attach(dev, dev_ctx->context))
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state);
+ 	} else {
+ 		if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state))
+ 			goto out;
+ 		dev_ctx->context = intf->add(dev);
++		if (!dev_ctx->context)
++			goto out;
++
+ 		set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 91f1209886ff..4c53957c918c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -658,6 +658,7 @@ static int esw_create_offloads_fdb_tables(struct mlx5_eswitch *esw, int nvports)
+ 	if (err)
+ 		goto miss_rule_err;
+ 
++	kvfree(flow_group_in);
+ 	return 0;
+ 
+ miss_rule_err:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6ddb2565884d..0031c510ab68 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1649,6 +1649,33 @@ static u64 matched_fgs_get_version(struct list_head *match_head)
+ 	return version;
+ }
+ 
++static struct fs_fte *
++lookup_fte_locked(struct mlx5_flow_group *g,
++		  u32 *match_value,
++		  bool take_write)
++{
++	struct fs_fte *fte_tmp;
++
++	if (take_write)
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++	else
++		nested_down_read_ref_node(&g->node, FS_LOCK_PARENT);
++	fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, match_value,
++					 rhash_fte);
++	if (!fte_tmp || !tree_get_node(&fte_tmp->node)) {
++		fte_tmp = NULL;
++		goto out;
++	}
++
++	nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
++out:
++	if (take_write)
++		up_write_ref_node(&g->node);
++	else
++		up_read_ref_node(&g->node);
++	return fte_tmp;
++}
++
+ static struct mlx5_flow_handle *
+ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 		       struct list_head *match_head,
+@@ -1671,10 +1698,6 @@ try_add_to_existing_fg(struct mlx5_flow_table *ft,
+ 	if (IS_ERR(fte))
+ 		return  ERR_PTR(-ENOMEM);
+ 
+-	list_for_each_entry(iter, match_head, list) {
+-		nested_down_read_ref_node(&iter->g->node, FS_LOCK_PARENT);
+-	}
+-
+ search_again_locked:
+ 	version = matched_fgs_get_version(match_head);
+ 	/* Try to find a fg that already contains a matching fte */
+@@ -1682,20 +1705,9 @@ search_again_locked:
+ 		struct fs_fte *fte_tmp;
+ 
+ 		g = iter->g;
+-		fte_tmp = rhashtable_lookup_fast(&g->ftes_hash, spec->match_value,
+-						 rhash_fte);
+-		if (!fte_tmp || !tree_get_node(&fte_tmp->node))
++		fte_tmp = lookup_fte_locked(g, spec->match_value, take_write);
++		if (!fte_tmp)
+ 			continue;
+-
+-		nested_down_write_ref_node(&fte_tmp->node, FS_LOCK_CHILD);
+-		if (!take_write) {
+-			list_for_each_entry(iter, match_head, list)
+-				up_read_ref_node(&iter->g->node);
+-		} else {
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+-		}
+-
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte_tmp);
+ 		up_write_ref_node(&fte_tmp->node);
+@@ -1704,19 +1716,6 @@ search_again_locked:
+ 		return rule;
+ 	}
+ 
+-	/* No group with matching fte found. Try to add a new fte to any
+-	 * matching fg.
+-	 */
+-
+-	if (!take_write) {
+-		list_for_each_entry(iter, match_head, list)
+-			up_read_ref_node(&iter->g->node);
+-		list_for_each_entry(iter, match_head, list)
+-			nested_down_write_ref_node(&iter->g->node,
+-						   FS_LOCK_PARENT);
+-		take_write = true;
+-	}
+-
+ 	/* Check the ft version, for case that new flow group
+ 	 * was added while the fgs weren't locked
+ 	 */
+@@ -1728,27 +1727,30 @@ search_again_locked:
+ 	/* Check the fgs version, for case the new FTE with the
+ 	 * same values was added while the fgs weren't locked
+ 	 */
+-	if (version != matched_fgs_get_version(match_head))
++	if (version != matched_fgs_get_version(match_head)) {
++		take_write = true;
+ 		goto search_again_locked;
++	}
+ 
+ 	list_for_each_entry(iter, match_head, list) {
+ 		g = iter->g;
+ 
+ 		if (!g->node.active)
+ 			continue;
++
++		nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);
++
+ 		err = insert_fte(g, fte);
+ 		if (err) {
++			up_write_ref_node(&g->node);
+ 			if (err == -ENOSPC)
+ 				continue;
+-			list_for_each_entry(iter, match_head, list)
+-				up_write_ref_node(&iter->g->node);
+ 			kmem_cache_free(steering->ftes_cache, fte);
+ 			return ERR_PTR(err);
+ 		}
+ 
+ 		nested_down_write_ref_node(&fte->node, FS_LOCK_CHILD);
+-		list_for_each_entry(iter, match_head, list)
+-			up_write_ref_node(&iter->g->node);
++		up_write_ref_node(&g->node);
+ 		rule = add_rule_fg(g, spec->match_value,
+ 				   flow_act, dest, dest_num, fte);
+ 		up_write_ref_node(&fte->node);
+@@ -1757,8 +1759,6 @@ search_again_locked:
+ 	}
+ 	rule = ERR_PTR(-ENOENT);
+ out:
+-	list_for_each_entry(iter, match_head, list)
+-		up_write_ref_node(&iter->g->node);
+ 	kmem_cache_free(steering->ftes_cache, fte);
+ 	return rule;
+ }
+@@ -1797,6 +1797,8 @@ search_again_locked:
+ 	if (err) {
+ 		if (take_write)
+ 			up_write_ref_node(&ft->node);
++		else
++			up_read_ref_node(&ft->node);
+ 		return ERR_PTR(err);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index d39b0b7011b2..9f39aeca863f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -331,9 +331,17 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
+ 	add_timer(&health->timer);
+ }
+ 
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev)
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
+ {
+ 	struct mlx5_core_health *health = &dev->priv.health;
++	unsigned long flags;
++
++	if (disable_health) {
++		spin_lock_irqsave(&health->wq_lock, flags);
++		set_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags);
++		set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags);
++		spin_unlock_irqrestore(&health->wq_lock, flags);
++	}
+ 
+ 	del_timer_sync(&health->timer);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 615005e63819..76e6ca87db11 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -874,8 +874,10 @@ static int mlx5_pci_init(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	priv->numa_node = dev_to_node(&dev->pdev->dev);
+ 
+ 	priv->dbg_root = debugfs_create_dir(dev_name(&pdev->dev), mlx5_debugfs_root);
+-	if (!priv->dbg_root)
++	if (!priv->dbg_root) {
++		dev_err(&pdev->dev, "Cannot create debugfs dir, aborting\n");
+ 		return -ENOMEM;
++	}
+ 
+ 	err = mlx5_pci_enable_device(dev);
+ 	if (err) {
+@@ -924,7 +926,7 @@ static void mlx5_pci_close(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+ 	pci_clear_master(dev->pdev);
+ 	release_bar(dev->pdev);
+ 	mlx5_pci_disable_device(dev);
+-	debugfs_remove(priv->dbg_root);
++	debugfs_remove_recursive(priv->dbg_root);
+ }
+ 
+ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
+@@ -1266,7 +1268,7 @@ err_cleanup_once:
+ 		mlx5_cleanup_once(dev);
+ 
+ err_stop_poll:
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, boot);
+ 	if (mlx5_cmd_teardown_hca(dev)) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+ 		goto out_err;
+@@ -1325,7 +1327,7 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
+ 	mlx5_free_irq_vectors(dev);
+ 	if (cleanup)
+ 		mlx5_cleanup_once(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, cleanup);
+ 	err = mlx5_cmd_teardown_hca(dev);
+ 	if (err) {
+ 		dev_err(&dev->pdev->dev, "tear_down_hca failed, skip cleanup\n");
+@@ -1587,7 +1589,7 @@ static int mlx5_try_fast_unload(struct mlx5_core_dev *dev)
+ 	 * with the HCA, so the health polll is no longer needed.
+ 	 */
+ 	mlx5_drain_health_wq(dev);
+-	mlx5_stop_health_poll(dev);
++	mlx5_stop_health_poll(dev, false);
+ 
+ 	ret = mlx5_cmd_force_teardown_hca(dev);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index c8c315eb5128..d838af9539b1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -39,9 +39,9 @@ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq)
+ 	return (u32)wq->fbc.sz_m1 + 1;
+ }
+ 
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq)
+ {
+-	return (u32)wq->fbc.frag_sz_m1 + 1;
++	return wq->fbc.frag_sz_m1 + 1;
+ }
+ 
+ u32 mlx5_cqwq_get_size(struct mlx5_cqwq *wq)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.h b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+index 0b47126815b6..16476cc1a602 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.h
+@@ -80,7 +80,7 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		       void *wqc, struct mlx5_wq_cyc *wq,
+ 		       struct mlx5_wq_ctrl *wq_ctrl);
+ u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+-u32 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
++u16 mlx5_wq_cyc_get_frag_size(struct mlx5_wq_cyc *wq);
+ 
+ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      void *qpc, struct mlx5_wq_qp *wq,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+index 152283d7e59c..4a540c5e27fe 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_main.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c
+@@ -236,16 +236,20 @@ static int nfp_pcie_sriov_read_nfd_limit(struct nfp_pf *pf)
+ 	int err;
+ 
+ 	pf->limit_vfs = nfp_rtsym_read_le(pf->rtbl, "nfd_vf_cfg_max_vfs", &err);
+-	if (!err)
+-		return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err) {
++		/* For backwards compatibility if symbol not found allow all */
++		pf->limit_vfs = ~0;
++		if (err == -ENOENT)
++			return 0;
+ 
+-	pf->limit_vfs = ~0;
+-	/* Allow any setting for backwards compatibility if symbol not found */
+-	if (err == -ENOENT)
+-		return 0;
++		nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
++		return err;
++	}
+ 
+-	nfp_warn(pf->cpp, "Warning: VF limit read failed: %d\n", err);
+-	return err;
++	err = pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);
++	if (err)
++		nfp_warn(pf->cpp, "Failed to set VF count in sysfs: %d\n", err);
++	return 0;
+ }
+ 
+ static int nfp_pcie_sriov_enable(struct pci_dev *pdev, int num_vfs)
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index c2a9e64bc57b..bfccc1955907 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -1093,7 +1093,7 @@ static bool nfp_net_xdp_complete(struct nfp_net_tx_ring *tx_ring)
+  * @dp:		NFP Net data path struct
+  * @tx_ring:	TX ring structure
+  *
+- * Assumes that the device is stopped
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void
+ nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring)
+@@ -1295,13 +1295,18 @@ static void nfp_net_rx_give_one(const struct nfp_net_dp *dp,
+  * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable
+  * @rx_ring:	RX ring structure
+  *
+- * Warning: Do *not* call if ring buffers were never put on the FW freelist
+- *	    (i.e. device was not enabled)!
++ * Assumes that the device is stopped, must be idempotent.
+  */
+ static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring)
+ {
+ 	unsigned int wr_idx, last_idx;
+ 
++	/* wr_p == rd_p means ring was never fed FL bufs.  RX rings are always
++	 * kept at cnt - 1 FL bufs.
++	 */
++	if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0)
++		return;
++
+ 	/* Move the empty entry to the end of the list */
+ 	wr_idx = D_IDX(rx_ring, rx_ring->wr_p);
+ 	last_idx = rx_ring->cnt - 1;
+@@ -2524,6 +2529,8 @@ static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx)
+ /**
+  * nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP
+  * @nn:      NFP Net device to reconfigure
++ *
++ * Warning: must be fully idempotent.
+  */
+ static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
+ {
+diff --git a/drivers/net/ethernet/qualcomm/qca_7k.c b/drivers/net/ethernet/qualcomm/qca_7k.c
+index ffe7a16bdfc8..6c8543fb90c0 100644
+--- a/drivers/net/ethernet/qualcomm/qca_7k.c
++++ b/drivers/net/ethernet/qualcomm/qca_7k.c
+@@ -45,34 +45,33 @@ qcaspi_read_register(struct qcaspi *qca, u16 reg, u16 *result)
+ {
+ 	__be16 rx_data;
+ 	__be16 tx_data;
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_INTERNAL | reg);
++	*result = 0;
++
++	transfer[0].tx_buf = &tx_data;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = &rx_data;
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data;
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = &rx_data;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -86,35 +85,32 @@ int
+ qcaspi_write_register(struct qcaspi *qca, u16 reg, u16 value)
+ {
+ 	__be16 tx_data[2];
+-	struct spi_transfer *transfer;
+-	struct spi_message *msg;
++	struct spi_transfer transfer[2];
++	struct spi_message msg;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data[0] = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_INTERNAL | reg);
+ 	tx_data[1] = cpu_to_be16(value);
+ 
++	transfer[0].tx_buf = &tx_data[0];
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = &tx_data[1];
++	transfer[1].len = QCASPI_CMD_LEN;
++
++	spi_message_add_tail(&transfer[0], &msg);
+ 	if (qca->legacy_mode) {
+-		msg = &qca->spi_msg1;
+-		transfer = &qca->spi_xfer1;
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		spi_sync(qca->spi_dev, msg);
+-	} else {
+-		msg = &qca->spi_msg2;
+-		transfer = &qca->spi_xfer2[0];
+-		transfer->tx_buf = &tx_data[0];
+-		transfer->rx_buf = NULL;
+-		transfer->len = QCASPI_CMD_LEN;
+-		transfer = &qca->spi_xfer2[1];
++		spi_sync(qca->spi_dev, &msg);
++		spi_message_init(&msg);
+ 	}
+-	transfer->tx_buf = &tx_data[1];
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 206f0266463e..66b775d462fd 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -99,22 +99,24 @@ static u32
+ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ {
+ 	__be16 cmd;
+-	struct spi_message *msg = &qca->spi_msg2;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_message msg;
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_WRITE | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].tx_buf = src;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -125,17 +127,20 @@ qcaspi_write_burst(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = src;
+-	transfer->rx_buf = NULL;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
++	transfer.tx_buf = src;
++	transfer.len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != len)) {
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -146,23 +151,25 @@ qcaspi_write_legacy(struct qcaspi *qca, u8 *src, u32 len)
+ static u32
+ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg2;
++	struct spi_message msg;
+ 	__be16 cmd;
+-	struct spi_transfer *transfer = &qca->spi_xfer2[0];
++	struct spi_transfer transfer[2];
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
++
+ 	cmd = cpu_to_be16(QCA7K_SPI_READ | QCA7K_SPI_EXTERNAL);
+-	transfer->tx_buf = &cmd;
+-	transfer->rx_buf = NULL;
+-	transfer->len = QCASPI_CMD_LEN;
+-	transfer = &qca->spi_xfer2[1];
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	transfer[0].tx_buf = &cmd;
++	transfer[0].len = QCASPI_CMD_LEN;
++	transfer[1].rx_buf = dst;
++	transfer[1].len = len;
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	spi_message_add_tail(&transfer[0], &msg);
++	spi_message_add_tail(&transfer[1], &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+-	if (ret || (msg->actual_length != QCASPI_CMD_LEN + len)) {
++	if (ret || (msg.actual_length != QCASPI_CMD_LEN + len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -173,17 +180,20 @@ qcaspi_read_burst(struct qcaspi *qca, u8 *dst, u32 len)
+ static u32
+ qcaspi_read_legacy(struct qcaspi *qca, u8 *dst, u32 len)
+ {
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
+-	transfer->tx_buf = NULL;
+-	transfer->rx_buf = dst;
+-	transfer->len = len;
++	memset(&transfer, 0, sizeof(transfer));
++	spi_message_init(&msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	transfer.rx_buf = dst;
++	transfer.len = len;
+ 
+-	if (ret || (msg->actual_length != len)) {
++	spi_message_add_tail(&transfer, &msg);
++	ret = spi_sync(qca->spi_dev, &msg);
++
++	if (ret || (msg.actual_length != len)) {
+ 		qcaspi_spi_error(qca);
+ 		return 0;
+ 	}
+@@ -195,19 +205,23 @@ static int
+ qcaspi_tx_cmd(struct qcaspi *qca, u16 cmd)
+ {
+ 	__be16 tx_data;
+-	struct spi_message *msg = &qca->spi_msg1;
+-	struct spi_transfer *transfer = &qca->spi_xfer1;
++	struct spi_message msg;
++	struct spi_transfer transfer;
+ 	int ret;
+ 
++	memset(&transfer, 0, sizeof(transfer));
++
++	spi_message_init(&msg);
++
+ 	tx_data = cpu_to_be16(cmd);
+-	transfer->len = sizeof(tx_data);
+-	transfer->tx_buf = &tx_data;
+-	transfer->rx_buf = NULL;
++	transfer.len = sizeof(cmd);
++	transfer.tx_buf = &tx_data;
++	spi_message_add_tail(&transfer, &msg);
+ 
+-	ret = spi_sync(qca->spi_dev, msg);
++	ret = spi_sync(qca->spi_dev, &msg);
+ 
+ 	if (!ret)
+-		ret = msg->status;
++		ret = msg.status;
+ 
+ 	if (ret)
+ 		qcaspi_spi_error(qca);
+@@ -835,16 +849,6 @@ qcaspi_netdev_setup(struct net_device *dev)
+ 	qca = netdev_priv(dev);
+ 	memset(qca, 0, sizeof(struct qcaspi));
+ 
+-	memset(&qca->spi_xfer1, 0, sizeof(struct spi_transfer));
+-	memset(&qca->spi_xfer2, 0, sizeof(struct spi_transfer) * 2);
+-
+-	spi_message_init(&qca->spi_msg1);
+-	spi_message_add_tail(&qca->spi_xfer1, &qca->spi_msg1);
+-
+-	spi_message_init(&qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[0], &qca->spi_msg2);
+-	spi_message_add_tail(&qca->spi_xfer2[1], &qca->spi_msg2);
+-
+ 	memset(&qca->txr, 0, sizeof(qca->txr));
+ 	qca->txr.count = TX_RING_MAX_LEN;
+ }
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index fc4beb1b32d1..fc0e98726b36 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -83,11 +83,6 @@ struct qcaspi {
+ 	struct tx_ring txr;
+ 	struct qcaspi_stats stats;
+ 
+-	struct spi_message spi_msg1;
+-	struct spi_message spi_msg2;
+-	struct spi_transfer spi_xfer1;
+-	struct spi_transfer spi_xfer2[2];
+-
+ 	u8 *rx_buffer;
+ 	u32 buffer_size;
+ 	u8 sync;
+diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
+index 9b09c9d0d0fb..5f0366a125e2 100644
+--- a/drivers/net/wan/fsl_ucc_hdlc.c
++++ b/drivers/net/wan/fsl_ucc_hdlc.c
+@@ -192,7 +192,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 	priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param),
+ 				ALIGNMENT_OF_UCC_HDLC_PRAM);
+ 
+-	if (priv->ucc_pram_offset < 0) {
++	if (IS_ERR_VALUE(priv->ucc_pram_offset)) {
+ 		dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_bd;
+@@ -230,14 +230,14 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
+ 
+ 	/* Alloc riptr, tiptr */
+ 	riptr = qe_muram_alloc(32, 32);
+-	if (riptr < 0) {
++	if (IS_ERR_VALUE(riptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_tx_skbuff;
+ 	}
+ 
+ 	tiptr = qe_muram_alloc(32, 32);
+-	if (tiptr < 0) {
++	if (IS_ERR_VALUE(tiptr)) {
+ 		dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
+ 		ret = -ENOMEM;
+ 		goto free_riptr;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 45ea32796cda..92b38a21cd10 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -660,7 +660,7 @@ static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
+ 	}
+ }
+ 
+-static inline u8 iwl_pcie_get_cmd_index(struct iwl_txq *q, u32 index)
++static inline u8 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
+ {
+ 	return index & (q->n_window - 1);
+ }
+@@ -730,9 +730,13 @@ static inline void iwl_stop_queue(struct iwl_trans *trans,
+ 
+ static inline bool iwl_queue_used(const struct iwl_txq *q, int i)
+ {
+-	return q->write_ptr >= q->read_ptr ?
+-		(i >= q->read_ptr && i < q->write_ptr) :
+-		!(i < q->read_ptr && i >= q->write_ptr);
++	int index = iwl_pcie_get_cmd_index(q, i);
++	int r = iwl_pcie_get_cmd_index(q, q->read_ptr);
++	int w = iwl_pcie_get_cmd_index(q, q->write_ptr);
++
++	return w >= r ?
++		(index >= r && index < w) :
++		!(index < r && index >= w);
+ }
+ 
+ static inline bool iwl_is_rfkill_set(struct iwl_trans *trans)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 473fe7ccb07c..11bd7ce2be8e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1225,9 +1225,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 	struct iwl_txq *txq = trans_pcie->txq[txq_id];
+ 	unsigned long flags;
+ 	int nfreed = 0;
++	u16 r;
+ 
+ 	lockdep_assert_held(&txq->lock);
+ 
++	idx = iwl_pcie_get_cmd_index(txq, idx);
++	r = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
++
+ 	if ((idx >= TFD_QUEUE_SIZE_MAX) || (!iwl_queue_used(txq, idx))) {
+ 		IWL_ERR(trans,
+ 			"%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n",
+@@ -1236,12 +1240,13 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
+ 		return;
+ 	}
+ 
+-	for (idx = iwl_queue_inc_wrap(idx); txq->read_ptr != idx;
+-	     txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr)) {
++	for (idx = iwl_queue_inc_wrap(idx); r != idx;
++	     r = iwl_queue_inc_wrap(r)) {
++		txq->read_ptr = iwl_queue_inc_wrap(txq->read_ptr);
+ 
+ 		if (nfreed++ > 0) {
+ 			IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
+-				idx, txq->write_ptr, txq->read_ptr);
++				idx, txq->write_ptr, r);
+ 			iwl_force_nmi(trans);
+ 		}
+ 	}
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 9dd2ca62d84a..c2b6aa1d485f 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -87,8 +87,7 @@ struct netfront_cb {
+ /* IRQ name is queue name with "-tx" or "-rx" appended */
+ #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+ 
+-static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
+-static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
++static DECLARE_WAIT_QUEUE_HEAD(module_wq);
+ 
+ struct netfront_stats {
+ 	u64			packets;
+@@ -1331,11 +1330,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
+ 	netif_carrier_off(netdev);
+ 
+ 	xenbus_switch_state(dev, XenbusStateInitialising);
+-	wait_event(module_load_q,
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateClosed &&
+-			   xenbus_read_driver_state(dev->otherend) !=
+-			   XenbusStateUnknown);
++	wait_event(module_wq,
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateClosed &&
++		   xenbus_read_driver_state(dev->otherend) !=
++		   XenbusStateUnknown);
+ 	return netdev;
+ 
+  exit:
+@@ -1603,14 +1602,16 @@ static int xennet_init_queue(struct netfront_queue *queue)
+ {
+ 	unsigned short i;
+ 	int err = 0;
++	char *devid;
+ 
+ 	spin_lock_init(&queue->tx_lock);
+ 	spin_lock_init(&queue->rx_lock);
+ 
+ 	timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);
+ 
+-	snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+-		 queue->info->netdev->name, queue->id);
++	devid = strrchr(queue->info->xbdev->nodename, '/') + 1;
++	snprintf(queue->name, sizeof(queue->name), "vif%s-q%u",
++		 devid, queue->id);
+ 
+ 	/* Initialise tx_skbs as a free chain containing every entry. */
+ 	queue->tx_skb_freelist = 0;
+@@ -2007,15 +2008,14 @@ static void netback_changed(struct xenbus_device *dev,
+ 
+ 	dev_dbg(&dev->dev, "%s\n", xenbus_strstate(backend_state));
+ 
++	wake_up_all(&module_wq);
++
+ 	switch (backend_state) {
+ 	case XenbusStateInitialising:
+ 	case XenbusStateInitialised:
+ 	case XenbusStateReconfiguring:
+ 	case XenbusStateReconfigured:
+-		break;
+-
+ 	case XenbusStateUnknown:
+-		wake_up_all(&module_unload_q);
+ 		break;
+ 
+ 	case XenbusStateInitWait:
+@@ -2031,12 +2031,10 @@ static void netback_changed(struct xenbus_device *dev,
+ 		break;
+ 
+ 	case XenbusStateClosed:
+-		wake_up_all(&module_unload_q);
+ 		if (dev->state == XenbusStateClosed)
+ 			break;
+ 		/* Missed the backend's CLOSING state -- fallthrough */
+ 	case XenbusStateClosing:
+-		wake_up_all(&module_unload_q);
+ 		xenbus_frontend_closed(dev);
+ 		break;
+ 	}
+@@ -2144,14 +2142,14 @@ static int xennet_remove(struct xenbus_device *dev)
+ 
+ 	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+ 		xenbus_switch_state(dev, XenbusStateClosing);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosing ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateUnknown);
+ 
+ 		xenbus_switch_state(dev, XenbusStateClosed);
+-		wait_event(module_unload_q,
++		wait_event(module_wq,
+ 			   xenbus_read_driver_state(dev->otherend) ==
+ 			   XenbusStateClosed ||
+ 			   xenbus_read_driver_state(dev->otherend) ==
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 66ec5985c9f3..69fb62feb833 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1741,6 +1741,8 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown)
+ 		nvme_rdma_stop_io_queues(ctrl);
+ 		blk_mq_tagset_busy_iter(&ctrl->tag_set,
+ 					nvme_cancel_request, &ctrl->ctrl);
++		if (shutdown)
++			nvme_start_queues(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, shutdown);
+ 	}
+ 
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 8c42b3a8c420..64c7596a46a1 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -209,22 +209,24 @@ static void nvmet_file_execute_discard(struct nvmet_req *req)
+ {
+ 	int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
+ 	struct nvme_dsm_range range;
+-	loff_t offset;
+-	loff_t len;
+-	int i, ret;
++	loff_t offset, len;
++	u16 ret;
++	int i;
+ 
+ 	for (i = 0; i <= le32_to_cpu(req->cmd->dsm.nr); i++) {
+-		if (nvmet_copy_from_sgl(req, i * sizeof(range), &range,
+-					sizeof(range)))
++		ret = nvmet_copy_from_sgl(req, i * sizeof(range), &range,
++					sizeof(range));
++		if (ret)
+ 			break;
+ 		offset = le64_to_cpu(range.slba) << req->ns->blksize_shift;
+ 		len = le32_to_cpu(range.nlb) << req->ns->blksize_shift;
+-		ret = vfs_fallocate(req->ns->file, mode, offset, len);
+-		if (ret)
++		if (vfs_fallocate(req->ns->file, mode, offset, len)) {
++			ret = NVME_SC_INTERNAL | NVME_SC_DNR;
+ 			break;
++		}
+ 	}
+ 
+-	nvmet_req_complete(req, ret < 0 ? NVME_SC_INTERNAL | NVME_SC_DNR : 0);
++	nvmet_req_complete(req, ret);
+ }
+ 
+ static void nvmet_file_dsm_work(struct work_struct *w)
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 466e3c8582f0..53a51c6911eb 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -118,6 +118,9 @@ void of_populate_phandle_cache(void)
+ 		if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL)
+ 			phandles++;
+ 
++	if (!phandles)
++		goto out;
++
+ 	cache_entries = roundup_pow_of_two(phandles);
+ 	phandle_cache_mask = cache_entries - 1;
+ 
+@@ -719,6 +722,31 @@ struct device_node *of_get_next_available_child(const struct device_node *node,
+ }
+ EXPORT_SYMBOL(of_get_next_available_child);
+ 
++/**
++ * of_get_compatible_child - Find compatible child node
++ * @parent:	parent node
++ * @compatible:	compatible string
++ *
++ * Lookup child node whose compatible property contains the given compatible
++ * string.
++ *
++ * Returns a node pointer with refcount incremented, use of_node_put() on it
++ * when done; or NULL if not found.
++ */
++struct device_node *of_get_compatible_child(const struct device_node *parent,
++				const char *compatible)
++{
++	struct device_node *child;
++
++	for_each_child_of_node(parent, child) {
++		if (of_device_is_compatible(child, compatible))
++			break;
++	}
++
++	return child;
++}
++EXPORT_SYMBOL(of_get_compatible_child);
++
+ /**
+  *	of_get_child_by_name - Find the child node by name for a given parent
+  *	@node:	parent node
+diff --git a/drivers/parport/parport_sunbpp.c b/drivers/parport/parport_sunbpp.c
+index 01cf1c1a841a..8de329546b82 100644
+--- a/drivers/parport/parport_sunbpp.c
++++ b/drivers/parport/parport_sunbpp.c
+@@ -286,12 +286,16 @@ static int bpp_probe(struct platform_device *op)
+ 
+ 	ops = kmemdup(&parport_sunbpp_ops, sizeof(struct parport_operations),
+ 		      GFP_KERNEL);
+-        if (!ops)
++	if (!ops) {
++		err = -ENOMEM;
+ 		goto out_unmap;
++	}
+ 
+ 	dprintk(("register_port\n"));
+-	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops)))
++	if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) {
++		err = -ENOMEM;
+ 		goto out_free_ops;
++	}
+ 
+ 	p->size = size;
+ 	p->dev = &op->dev;
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index a2e88386af28..0fbf612b8ef2 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -303,6 +303,9 @@ int pcie_aer_get_firmware_first(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev))
+ 		return 0;
+ 
++	if (pcie_ports_native)
++		return 0;
++
+ 	if (!dev->__aer_firmware_first_valid)
+ 		aer_set_firmware_first(dev);
+ 	return dev->__aer_firmware_first;
+@@ -323,6 +326,9 @@ bool aer_acpi_firmware_first(void)
+ 		.firmware_first	= 0,
+ 	};
+ 
++	if (pcie_ports_native)
++		return false;
++
+ 	if (!parsed) {
+ 		apei_hest_parse(aer_hest_parse, &info);
+ 		aer_firmware_first = info.firmware_first;
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7622.c b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+index 4c4740ffeb9c..3ea685634b6c 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7622.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7622.c
+@@ -1537,7 +1537,7 @@ static int mtk_build_groups(struct mtk_pinctrl *hw)
+ 		err = pinctrl_generic_add_group(hw->pctrl, group->name,
+ 						group->pins, group->num_pins,
+ 						group->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register group %s\n",
+ 				group->name);
+ 			return err;
+@@ -1558,7 +1558,7 @@ static int mtk_build_functions(struct mtk_pinctrl *hw)
+ 						  func->group_names,
+ 						  func->num_group_names,
+ 						  func->data);
+-		if (err) {
++		if (err < 0) {
+ 			dev_err(hw->dev, "Failed to register function %s\n",
+ 				func->name);
+ 			return err;
+diff --git a/drivers/pinctrl/pinctrl-rza1.c b/drivers/pinctrl/pinctrl-rza1.c
+index 717c0f4449a0..f76edf664539 100644
+--- a/drivers/pinctrl/pinctrl-rza1.c
++++ b/drivers/pinctrl/pinctrl-rza1.c
+@@ -1006,6 +1006,7 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	const char *grpname;
+ 	const char **fngrps;
+ 	int ret, npins;
++	int gsel, fsel;
+ 
+ 	npins = rza1_dt_node_pin_count(np);
+ 	if (npins < 0) {
+@@ -1055,18 +1056,19 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	fngrps[0] = grpname;
+ 
+ 	mutex_lock(&rza1_pctl->mutex);
+-	ret = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
+-					NULL);
+-	if (ret) {
++	gsel = pinctrl_generic_add_group(pctldev, grpname, grpins, npins,
++					 NULL);
++	if (gsel < 0) {
+ 		mutex_unlock(&rza1_pctl->mutex);
+-		return ret;
++		return gsel;
+ 	}
+ 
+-	ret = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
+-					  mux_confs);
+-	if (ret)
++	fsel = pinmux_generic_add_function(pctldev, grpname, fngrps, 1,
++					   mux_confs);
++	if (fsel < 0) {
++		ret = fsel;
+ 		goto remove_group;
+-	mutex_unlock(&rza1_pctl->mutex);
++	}
+ 
+ 	dev_info(rza1_pctl->dev, "Parsed function and group %s with %d pins\n",
+ 				 grpname, npins);
+@@ -1083,15 +1085,15 @@ static int rza1_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 	(*map)->data.mux.group = np->name;
+ 	(*map)->data.mux.function = np->name;
+ 	*num_maps = 1;
++	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	return 0;
+ 
+ remove_function:
+-	mutex_lock(&rza1_pctl->mutex);
+-	pinmux_generic_remove_last_function(pctldev);
++	pinmux_generic_remove_function(pctldev, fsel);
+ 
+ remove_group:
+-	pinctrl_generic_remove_last_group(pctldev);
++	pinctrl_generic_remove_group(pctldev, gsel);
+ 	mutex_unlock(&rza1_pctl->mutex);
+ 
+ 	dev_info(rza1_pctl->dev, "Unable to parse function and group %s\n",
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 0e22f52b2a19..2155a30c282b 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -250,22 +250,30 @@ static int msm_config_group_get(struct pinctrl_dev *pctldev,
+ 	/* Convert register value to pinconf value */
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = arg == MSM_NO_PULL;
++		if (arg != MSM_NO_PULL)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = arg == MSM_PULL_DOWN;
++		if (arg != MSM_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_BUS_HOLD:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			return -ENOTSUPP;
+ 
+-		arg = arg == MSM_KEEPER;
++		if (arg != MSM_KEEPER)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		if (pctrl->soc->pull_no_keeper)
+ 			arg = arg == MSM_PULL_UP_NO_KEEPER;
+ 		else
+ 			arg = arg == MSM_PULL_UP;
++		if (!arg)
++			return -EINVAL;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = msm_regval_to_drive(arg);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index 3e66e0d10010..cf82db78e69e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -390,31 +390,47 @@ static int pmic_gpio_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_CMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_CMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_NMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_SOURCE:
+-		arg = pad->buffer_type == PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS;
++		if (pad->buffer_type != PMIC_GPIO_OUT_BUF_OPEN_DRAIN_PMOS)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pad->pullup == PMIC_GPIO_PULL_DOWN;
++		if (pad->pullup != PMIC_GPIO_PULL_DOWN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup = PMIC_GPIO_PULL_DISABLE;
++		if (pad->pullup != PMIC_GPIO_PULL_DISABLE)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pad->pullup == PMIC_GPIO_PULL_UP_30;
++		if (pad->pullup != PMIC_GPIO_PULL_UP_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+diff --git a/drivers/platform/x86/toshiba_acpi.c b/drivers/platform/x86/toshiba_acpi.c
+index eef76bfa5d73..e50941c3ba54 100644
+--- a/drivers/platform/x86/toshiba_acpi.c
++++ b/drivers/platform/x86/toshiba_acpi.c
+@@ -34,6 +34,7 @@
+ #define TOSHIBA_ACPI_VERSION	"0.24"
+ #define PROC_INTERFACE_VERSION	1
+ 
++#include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/moduleparam.h>
+@@ -1682,7 +1683,7 @@ static const struct file_operations keys_proc_fops = {
+ 	.write		= keys_proc_write,
+ };
+ 
+-static int version_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused version_proc_show(struct seq_file *m, void *v)
+ {
+ 	seq_printf(m, "driver:                  %s\n", TOSHIBA_ACPI_VERSION);
+ 	seq_printf(m, "proc_interface:          %d\n", PROC_INTERFACE_VERSION);
+diff --git a/drivers/regulator/qcom_spmi-regulator.c b/drivers/regulator/qcom_spmi-regulator.c
+index 9817f1a75342..ba3d5e63ada6 100644
+--- a/drivers/regulator/qcom_spmi-regulator.c
++++ b/drivers/regulator/qcom_spmi-regulator.c
+@@ -1752,7 +1752,8 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 	const char *name;
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *node = pdev->dev.of_node;
+-	struct device_node *syscon;
++	struct device_node *syscon, *reg_node;
++	struct property *reg_prop;
+ 	int ret, lenp;
+ 	struct list_head *vreg_list;
+ 
+@@ -1774,16 +1775,19 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		syscon = of_parse_phandle(node, "qcom,saw-reg", 0);
+ 		saw_regmap = syscon_node_to_regmap(syscon);
+ 		of_node_put(syscon);
+-		if (IS_ERR(regmap))
++		if (IS_ERR(saw_regmap))
+ 			dev_err(dev, "ERROR reading SAW regmap\n");
+ 	}
+ 
+ 	for (reg = match->data; reg->name; reg++) {
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-slave", &lenp)) {
+-			continue;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-slave",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop)
++				continue;
+ 		}
+ 
+ 		vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL);
+@@ -1816,13 +1820,17 @@ static int qcom_spmi_regulator_probe(struct platform_device *pdev)
+ 		if (ret)
+ 			continue;
+ 
+-		if (saw_regmap && \
+-		    of_find_property(of_find_node_by_name(node, reg->name), \
+-				     "qcom,saw-leader", &lenp)) {
+-			spmi_saw_ops = *(vreg->desc.ops);
+-			spmi_saw_ops.set_voltage_sel = \
+-				spmi_regulator_saw_set_voltage;
+-			vreg->desc.ops = &spmi_saw_ops;
++		if (saw_regmap) {
++			reg_node = of_get_child_by_name(node, reg->name);
++			reg_prop = of_find_property(reg_node, "qcom,saw-leader",
++						    &lenp);
++			of_node_put(reg_node);
++			if (reg_prop) {
++				spmi_saw_ops = *(vreg->desc.ops);
++				spmi_saw_ops.set_voltage_sel =
++					spmi_regulator_saw_set_voltage;
++				vreg->desc.ops = &spmi_saw_ops;
++			}
+ 		}
+ 
+ 		config.dev = dev;
+diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
+index 2bf8e7c49f2a..e5ec59102b01 100644
+--- a/drivers/remoteproc/qcom_q6v5_pil.c
++++ b/drivers/remoteproc/qcom_q6v5_pil.c
+@@ -1370,7 +1370,6 @@ static const struct rproc_hexagon_res sdm845_mss = {
+ 	.hexagon_mba_image = "mba.mbn",
+ 	.proxy_clk_names = (char*[]){
+ 			"xo",
+-			"axis2",
+ 			"prng",
+ 			NULL
+ 	},
+diff --git a/drivers/reset/reset-imx7.c b/drivers/reset/reset-imx7.c
+index 4db177bc89bc..fdeac1946429 100644
+--- a/drivers/reset/reset-imx7.c
++++ b/drivers/reset/reset-imx7.c
+@@ -80,7 +80,7 @@ static int imx7_reset_set(struct reset_controller_dev *rcdev,
+ {
+ 	struct imx7_src *imx7src = to_imx7_src(rcdev);
+ 	const struct imx7_src_signal *signal = &imx7_src_signals[id];
+-	unsigned int value = 0;
++	unsigned int value = assert ? signal->bit : 0;
+ 
+ 	switch (id) {
+ 	case IMX7_RESET_PCIEPHY:
+diff --git a/drivers/rtc/rtc-bq4802.c b/drivers/rtc/rtc-bq4802.c
+index d768f6747961..113493b52149 100644
+--- a/drivers/rtc/rtc-bq4802.c
++++ b/drivers/rtc/rtc-bq4802.c
+@@ -162,6 +162,10 @@ static int bq4802_probe(struct platform_device *pdev)
+ 	} else if (p->r->flags & IORESOURCE_MEM) {
+ 		p->regs = devm_ioremap(&pdev->dev, p->r->start,
+ 					resource_size(p->r));
++		if (!p->regs){
++			err = -ENOMEM;
++			goto out;
++		}
+ 		p->read = bq4802_read_mem;
+ 		p->write = bq4802_write_mem;
+ 	} else {
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index d01ac29fd986..ffdb78421a25 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -3530,13 +3530,14 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
+ 	qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
+ 	if (atomic_read(&queue->set_pci_flags_count))
+ 		qdio_flags |= QDIO_FLAG_PCI_OUT;
++	atomic_add(count, &queue->used_buffers);
++
+ 	rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags,
+ 		     queue->queue_no, index, count);
+ 	if (queue->card->options.performance_stats)
+ 		queue->card->perf_stats.outbound_do_qdio_time +=
+ 			qeth_get_micros() -
+ 			queue->card->perf_stats.outbound_do_qdio_start_time;
+-	atomic_add(count, &queue->used_buffers);
+ 	if (rc) {
+ 		queue->card->stats.tx_errors += count;
+ 		/* ignore temporary SIGA errors without busy condition */
+diff --git a/drivers/s390/net/qeth_core_sys.c b/drivers/s390/net/qeth_core_sys.c
+index c3f18afb368b..cfb659747693 100644
+--- a/drivers/s390/net/qeth_core_sys.c
++++ b/drivers/s390/net/qeth_core_sys.c
+@@ -426,6 +426,7 @@ static ssize_t qeth_dev_layer2_store(struct device *dev,
+ 	if (card->discipline) {
+ 		card->discipline->remove(card->gdev);
+ 		qeth_core_free_discipline(card);
++		card->options.layer2 = -1;
+ 	}
+ 
+ 	rc = qeth_core_load_discipline(card, newdis);
+diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
+index 3f3569ec5ce3..ddc7921ae5da 100644
+--- a/drivers/scsi/libfc/fc_disc.c
++++ b/drivers/scsi/libfc/fc_disc.c
+@@ -294,9 +294,11 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 	 * discovery, reverify or log them in.	Otherwise, log them out.
+ 	 * Skip ports which were never discovered.  These are the dNS port
+ 	 * and ports which were created by PLOGI.
++	 *
++	 * We don't need to use the _rcu variant here as the rport list
++	 * is protected by the disc mutex which is already held on entry.
+ 	 */
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(rdata, &disc->rports, peers) {
++	list_for_each_entry(rdata, &disc->rports, peers) {
+ 		if (!kref_get_unless_zero(&rdata->kref))
+ 			continue;
+ 		if (rdata->disc_id) {
+@@ -307,7 +309,6 @@ static void fc_disc_done(struct fc_disc *disc, enum fc_disc_event event)
+ 		}
+ 		kref_put(&rdata->kref, fc_rport_destroy);
+ 	}
+-	rcu_read_unlock();
+ 	mutex_unlock(&disc->disc_mutex);
+ 	disc->disc_callback(lport, event);
+ 	mutex_lock(&disc->disc_mutex);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index d723fd1d7b26..cab1fb087e6a 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2976,7 +2976,7 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	struct lpfc_sli_ring  *pring;
+ 	u32 i, wait_cnt = 0;
+ 
+-	if (phba->sli_rev < LPFC_SLI_REV4)
++	if (phba->sli_rev < LPFC_SLI_REV4 || !phba->sli4_hba.nvme_wq)
+ 		return;
+ 
+ 	/* Cycle through all NVME rings and make sure all outstanding
+@@ -2985,6 +2985,9 @@ lpfc_nvme_wait_for_io_drain(struct lpfc_hba *phba)
+ 	for (i = 0; i < phba->cfg_nvme_io_channel; i++) {
+ 		pring = phba->sli4_hba.nvme_wq[i]->pring;
+ 
++		if (!pring)
++			continue;
++
+ 		/* Retrieve everything on the txcmplq */
+ 		while (!list_empty(&pring->txcmplq)) {
+ 			msleep(LPFC_XRI_EXCH_BUSY_WAIT_T1);
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 7271c9d885dd..5e5ec3363b44 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -402,6 +402,7 @@ lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf)
+ 
+ 		/* Process FCP command */
+ 		if (rc == 0) {
++			ctxp->rqb_buffer = NULL;
+ 			atomic_inc(&tgtp->rcv_fcp_cmd_out);
+ 			nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+ 			return;
+@@ -1116,8 +1117,17 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
+ 	lpfc_nvmeio_data(phba, "NVMET DEFERRCV: xri x%x sz %d CPU %02x\n",
+ 			 ctxp->oxid, ctxp->size, smp_processor_id());
+ 
++	if (!nvmebuf) {
++		lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++				"6425 Defer rcv: no buffer xri x%x: "
++				"flg %x ste %x\n",
++				ctxp->oxid, ctxp->flag, ctxp->state);
++		return;
++	}
++
+ 	tgtp = phba->targetport->private;
+-	atomic_inc(&tgtp->rcv_fcp_cmd_defer);
++	if (tgtp)
++		atomic_inc(&tgtp->rcv_fcp_cmd_defer);
+ 
+ 	/* Free the nvmebuf since a new buffer already replaced it */
+ 	nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf);
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index 70b2ee80d6bd..bf4bd71ab53f 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -364,11 +364,6 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
+ 	end = phdr_to_last_uncached_entry(phdr);
+ 	cached = phdr_to_last_cached_entry(phdr);
+ 
+-	if (smem->global_partition) {
+-		dev_err(smem->dev, "Already found the global partition\n");
+-		return -EINVAL;
+-	}
+-
+ 	while (hdr < end) {
+ 		if (hdr->canary != SMEM_PRIVATE_CANARY)
+ 			goto bad_canary;
+@@ -736,6 +731,11 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
+ 	bool found = false;
+ 	int i;
+ 
++	if (smem->global_partition) {
++		dev_err(smem->dev, "Already found the global partition\n");
++		return -EINVAL;
++	}
++
+ 	ptable = qcom_smem_get_ptable(smem);
+ 	if (IS_ERR(ptable))
+ 		return PTR_ERR(ptable);
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index f693bfe95ab9..a087464efdd7 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -485,6 +485,8 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 	dws->dma_inited = 0;
+ 	dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR);
+ 
++	spi_controller_set_devdata(master, dws);
++
+ 	ret = request_irq(dws->irq, dw_spi_irq, IRQF_SHARED, dev_name(dev),
+ 			  master);
+ 	if (ret < 0) {
+@@ -518,7 +520,6 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 		}
+ 	}
+ 
+-	spi_controller_set_devdata(master, dws);
+ 	ret = devm_spi_register_controller(dev, master);
+ 	if (ret) {
+ 		dev_err(&master->dev, "problem registering spi master\n");
+diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+index 396371728aa1..537d5bb5e294 100644
+--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
++++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+@@ -767,7 +767,7 @@ static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
+ 	for (i = 0; i < count; i++) {
+ 		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
+ 		dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
+-				 DMA_BIDIRECTIONAL);
++				 DMA_FROM_DEVICE);
+ 		skb_free_frag(vaddr);
+ 	}
+ }
+diff --git a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+index f0cefa1b7b0f..b20d34449ed4 100644
+--- a/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
+@@ -439,16 +439,16 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ 	my_workqueue_init(alsa_stream);
+ 
+ 	ret = bcm2835_audio_open_connection(alsa_stream);
+-	if (ret) {
+-		ret = -1;
+-		goto exit;
+-	}
++	if (ret)
++		goto free_wq;
++
+ 	instance = alsa_stream->instance;
+ 	LOG_DBG(" instance (%p)\n", instance);
+ 
+ 	if (mutex_lock_interruptible(&instance->vchi_mutex)) {
+ 		LOG_DBG("Interrupted whilst waiting for lock on (%d)\n", instance->num_connections);
+-		return -EINTR;
++		ret = -EINTR;
++		goto free_wq;
+ 	}
+ 	vchi_service_use(instance->vchi_handle[0]);
+ 
+@@ -471,7 +471,11 @@ int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+ unlock:
+ 	vchi_service_release(instance->vchi_handle[0]);
+ 	mutex_unlock(&instance->vchi_mutex);
+-exit:
++
++free_wq:
++	if (ret)
++		destroy_workqueue(alsa_stream->my_wq);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+index ce26741ae9d9..3f61d04c47ab 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
+@@ -580,6 +580,7 @@ static int start_streaming(struct vb2_queue *vq, unsigned int count)
+ static void stop_streaming(struct vb2_queue *vq)
+ {
+ 	int ret;
++	unsigned long timeout;
+ 	struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
+@@ -605,10 +606,10 @@ static void stop_streaming(struct vb2_queue *vq)
+ 				      sizeof(dev->capture.frame_count));
+ 
+ 	/* wait for last frame to complete */
+-	ret = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
+-	if (ret <= 0)
++	timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
++	if (timeout == 0)
+ 		v4l2_err(&dev->v4l2_dev,
+-			 "error %d waiting for frame completion\n", ret);
++			 "timed out waiting for frame completion\n");
+ 
+ 	v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
+ 		 "disabling connection\n");
+diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+index f5b5ead6347c..51e5b04ff0f5 100644
+--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
+@@ -630,6 +630,7 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ {
+ 	struct mmal_msg_context *msg_context;
+ 	int ret;
++	unsigned long timeout;
+ 
+ 	/* payload size must not cause message to exceed max size */
+ 	if (payload_len >
+@@ -668,11 +669,11 @@ static int send_synchronous_mmal_msg(struct vchiq_mmal_instance *instance,
+ 		return ret;
+ 	}
+ 
+-	ret = wait_for_completion_timeout(&msg_context->u.sync.cmplt, 3 * HZ);
+-	if (ret <= 0) {
+-		pr_err("error %d waiting for sync completion\n", ret);
+-		if (ret == 0)
+-			ret = -ETIME;
++	timeout = wait_for_completion_timeout(&msg_context->u.sync.cmplt,
++					      3 * HZ);
++	if (timeout == 0) {
++		pr_err("timed out waiting for sync completion\n");
++		ret = -ETIME;
+ 		/* todo: what happens if the message arrives after aborting */
+ 		release_msg_context(msg_context);
+ 		return ret;
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index bfb37f0be22f..863e86b9a424 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -124,7 +124,7 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 				dev_warn(&ofdev->dev, "unsupported reg-io-width (%d)\n",
+ 					 prop);
+ 				ret = -EINVAL;
+-				goto err_dispose;
++				goto err_unprepare;
+ 			}
+ 		}
+ 		port->flags |= UPF_IOREMAP;
+diff --git a/drivers/tty/tty_baudrate.c b/drivers/tty/tty_baudrate.c
+index 6ff8cdfc9d2a..3e827a3d48d5 100644
+--- a/drivers/tty/tty_baudrate.c
++++ b/drivers/tty/tty_baudrate.c
+@@ -157,18 +157,25 @@ void tty_termios_encode_baud_rate(struct ktermios *termios,
+ 	termios->c_ospeed = obaud;
+ 
+ #ifdef BOTHER
++	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
++		ibinput = 1;	/* An input speed was specified */
++
+ 	/* If the user asked for a precise weird speed give a precise weird
+ 	   answer. If they asked for a Bfoo speed they may have problems
+ 	   digesting non-exact replies so fuzz a bit */
+ 
+-	if ((termios->c_cflag & CBAUD) == BOTHER)
++	if ((termios->c_cflag & CBAUD) == BOTHER) {
+ 		oclose = 0;
++		if (!ibinput)
++			iclose = 0;
++	}
+ 	if (((termios->c_cflag >> IBSHIFT) & CBAUD) == BOTHER)
+ 		iclose = 0;
+-	if ((termios->c_cflag >> IBSHIFT) & CBAUD)
+-		ibinput = 1;	/* An input speed was specified */
+ #endif
+ 	termios->c_cflag &= ~CBAUD;
++#ifdef IBSHIFT
++	termios->c_cflag &= ~(CBAUD << IBSHIFT);
++#endif
+ 
+ 	/*
+ 	 *	Our goal is to find a close match to the standard baud rate
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 75c4623ad779..f8ee32d9843a 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -779,20 +779,9 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	}
+ 
+ 	if (acm->susp_count) {
+-		if (acm->putbuffer) {
+-			/* now to preserve order */
+-			usb_anchor_urb(acm->putbuffer->urb, &acm->delayed);
+-			acm->putbuffer = NULL;
+-		}
+ 		usb_anchor_urb(wb->urb, &acm->delayed);
+ 		spin_unlock_irqrestore(&acm->write_lock, flags);
+ 		return count;
+-	} else {
+-		if (acm->putbuffer) {
+-			/* at this point there is no good way to handle errors */
+-			acm_start_wb(acm, acm->putbuffer);
+-			acm->putbuffer = NULL;
+-		}
+ 	}
+ 
+ 	stat = acm_start_wb(acm, wb);
+@@ -803,66 +792,6 @@ static int acm_tty_write(struct tty_struct *tty,
+ 	return count;
+ }
+ 
+-static void acm_tty_flush_chars(struct tty_struct *tty)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int err;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&acm->write_lock, flags);
+-
+-	cur = acm->putbuffer;
+-	if (!cur) /* nothing to do */
+-		goto out;
+-
+-	acm->putbuffer = NULL;
+-	err = usb_autopm_get_interface_async(acm->control);
+-	if (err < 0) {
+-		cur->use = 0;
+-		acm->putbuffer = cur;
+-		goto out;
+-	}
+-
+-	if (acm->susp_count)
+-		usb_anchor_urb(cur->urb, &acm->delayed);
+-	else
+-		acm_start_wb(acm, cur);
+-out:
+-	spin_unlock_irqrestore(&acm->write_lock, flags);
+-	return;
+-}
+-
+-static int acm_tty_put_char(struct tty_struct *tty, unsigned char ch)
+-{
+-	struct acm *acm = tty->driver_data;
+-	struct acm_wb *cur;
+-	int wbn;
+-	unsigned long flags;
+-
+-overflow:
+-	cur = acm->putbuffer;
+-	if (!cur) {
+-		spin_lock_irqsave(&acm->write_lock, flags);
+-		wbn = acm_wb_alloc(acm);
+-		if (wbn >= 0) {
+-			cur = &acm->wb[wbn];
+-			acm->putbuffer = cur;
+-		}
+-		spin_unlock_irqrestore(&acm->write_lock, flags);
+-		if (!cur)
+-			return 0;
+-	}
+-
+-	if (cur->len == acm->writesize) {
+-		acm_tty_flush_chars(tty);
+-		goto overflow;
+-	}
+-
+-	cur->buf[cur->len++] = ch;
+-	return 1;
+-}
+-
+ static int acm_tty_write_room(struct tty_struct *tty)
+ {
+ 	struct acm *acm = tty->driver_data;
+@@ -1987,8 +1916,6 @@ static const struct tty_operations acm_ops = {
+ 	.cleanup =		acm_tty_cleanup,
+ 	.hangup =		acm_tty_hangup,
+ 	.write =		acm_tty_write,
+-	.put_char =		acm_tty_put_char,
+-	.flush_chars =		acm_tty_flush_chars,
+ 	.write_room =		acm_tty_write_room,
+ 	.ioctl =		acm_tty_ioctl,
+ 	.throttle =		acm_tty_throttle,
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index eacc116e83da..ca06b20d7af9 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -96,7 +96,6 @@ struct acm {
+ 	unsigned long read_urbs_free;
+ 	struct urb *read_urbs[ACM_NR];
+ 	struct acm_rb read_buffers[ACM_NR];
+-	struct acm_wb *putbuffer;			/* for acm_tty_put_char() */
+ 	int rx_buflimit;
+ 	spinlock_t read_lock;
+ 	u8 *notification_buffer;			/* to reassemble fragmented notifications */
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index a0d284ef3f40..632a2bfabc08 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -458,7 +458,7 @@ static int service_outstanding_interrupt(struct wdm_device *desc)
+ 
+ 	set_bit(WDM_RESPONDING, &desc->flags);
+ 	spin_unlock_irq(&desc->iuspin);
+-	rv = usb_submit_urb(desc->response, GFP_KERNEL);
++	rv = usb_submit_urb(desc->response, GFP_ATOMIC);
+ 	spin_lock_irq(&desc->iuspin);
+ 	if (rv) {
+ 		dev_err(&desc->intf->dev,
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index 66fe1b78d952..03432467b05f 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -515,8 +515,6 @@ static int resume_common(struct device *dev, int event)
+ 				event == PM_EVENT_RESTORE);
+ 		if (retval) {
+ 			dev_err(dev, "PCI post-resume error %d!\n", retval);
+-			if (hcd->shared_hcd)
+-				usb_hc_died(hcd->shared_hcd);
+ 			usb_hc_died(hcd);
+ 		}
+ 	}
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 1a15392326fc..525ebd03cfe5 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1340,6 +1340,11 @@ void usb_enable_interface(struct usb_device *dev,
+  * is submitted that needs that bandwidth.  Some other operating systems
+  * allocate bandwidth early, when a configuration is chosen.
+  *
++ * xHCI reserves bandwidth and configures the alternate setting in
++ * usb_hcd_alloc_bandwidth(). If it fails the original interface altsetting
++ * may be disabled. Drivers cannot rely on any particular alternate
++ * setting being in effect after a failure.
++ *
+  * This call is synchronous, and may not be used in an interrupt context.
+  * Also, drivers must not change altsettings while urbs are scheduled for
+  * endpoints in that interface; all such urbs must first be completed
+@@ -1375,6 +1380,12 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate)
+ 			 alternate);
+ 		return -EINVAL;
+ 	}
++	/*
++	 * usb3 hosts configure the interface in usb_hcd_alloc_bandwidth,
++	 * including freeing dropped endpoint ring buffers.
++	 * Make sure the interface endpoints are flushed before that
++	 */
++	usb_disable_interface(dev, iface, false);
+ 
+ 	/* Make sure we have enough bandwidth for this alternate interface.
+ 	 * Remove the current alt setting and add the new alt setting.
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 097057d2eacf..e77dfe5ed5ec 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -178,6 +178,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* CBM - Flash disk */
+ 	{ USB_DEVICE(0x0204, 0x6025), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
++	/* WORLDE Controller KS49 or Prodipe MIDI 49C USB controller */
++	{ USB_DEVICE(0x0218, 0x0201), .driver_info =
++			USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ 	/* WORLDE easy key (easykey.25) MIDI controller  */
+ 	{ USB_DEVICE(0x0218, 0x0401), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+@@ -406,6 +410,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	{ USB_DEVICE(0x2040, 0x7200), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
++	/* DJI CineSSD */
++	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
++
+ 	/* INTEL VALUE SSD */
+ 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+ 
+diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
+index db610c56f1d6..2aacd1afd9ff 100644
+--- a/drivers/usb/dwc3/gadget.h
++++ b/drivers/usb/dwc3/gadget.h
+@@ -25,7 +25,7 @@ struct dwc3;
+ #define DWC3_DEPCFG_XFER_IN_PROGRESS_EN	BIT(9)
+ #define DWC3_DEPCFG_XFER_NOT_READY_EN	BIT(10)
+ #define DWC3_DEPCFG_FIFO_ERROR_EN	BIT(11)
+-#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(12)
++#define DWC3_DEPCFG_STREAM_EVENT_EN	BIT(13)
+ #define DWC3_DEPCFG_BINTERVAL_M1(n)	(((n) & 0xff) << 16)
+ #define DWC3_DEPCFG_STREAM_CAPABLE	BIT(24)
+ #define DWC3_DEPCFG_EP_NUMBER(n)	(((n) & 0x1f) << 25)
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index 318246d8b2e2..b02ab2a8d927 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -1545,11 +1545,14 @@ static int net2280_pullup(struct usb_gadget *_gadget, int is_on)
+ 		writel(tmp | BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+ 	} else {
+ 		writel(tmp & ~BIT(USB_DETECT_ENABLE), &dev->usb->usbctl);
+-		stop_activity(dev, dev->driver);
++		stop_activity(dev, NULL);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&dev->lock, flags);
+ 
++	if (!is_on && dev->driver)
++		dev->driver->disconnect(&dev->gadget);
++
+ 	return 0;
+ }
+ 
+@@ -2466,8 +2469,11 @@ static void stop_activity(struct net2280 *dev, struct usb_gadget_driver *driver)
+ 		nuke(&dev->ep[i]);
+ 
+ 	/* report disconnect; the driver is already quiesced */
+-	if (driver)
++	if (driver) {
++		spin_unlock(&dev->lock);
+ 		driver->disconnect(&dev->gadget);
++		spin_lock(&dev->lock);
++	}
+ 
+ 	usb_reinit(dev);
+ }
+@@ -3341,6 +3347,8 @@ next_endpoints:
+ 		BIT(PCI_RETRY_ABORT_INTERRUPT))
+ 
+ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
++__releases(dev->lock)
++__acquires(dev->lock)
+ {
+ 	struct net2280_ep	*ep;
+ 	u32			tmp, num, mask, scratch;
+@@ -3381,12 +3389,14 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 			if (disconnect || reset) {
+ 				stop_activity(dev, dev->driver);
+ 				ep0_start(dev);
++				spin_unlock(&dev->lock);
+ 				if (reset)
+ 					usb_gadget_udc_reset
+ 						(&dev->gadget, dev->driver);
+ 				else
+ 					(dev->driver->disconnect)
+ 						(&dev->gadget);
++				spin_lock(&dev->lock);
+ 				return;
+ 			}
+ 		}
+@@ -3405,6 +3415,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 	tmp = BIT(SUSPEND_REQUEST_CHANGE_INTERRUPT);
+ 	if (stat & tmp) {
+ 		writel(tmp, &dev->regs->irqstat1);
++		spin_unlock(&dev->lock);
+ 		if (stat & BIT(SUSPEND_REQUEST_INTERRUPT)) {
+ 			if (dev->driver->suspend)
+ 				dev->driver->suspend(&dev->gadget);
+@@ -3415,6 +3426,7 @@ static void handle_stat1_irqs(struct net2280 *dev, u32 stat)
+ 				dev->driver->resume(&dev->gadget);
+ 			/* at high speed, note erratum 0133 */
+ 		}
++		spin_lock(&dev->lock);
+ 		stat &= ~tmp;
+ 	}
+ 
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 7cf98c793e04..5b5f1c8b47c9 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -787,12 +787,15 @@ static void usb3_irq_epc_int_1_speed(struct renesas_usb3 *usb3)
+ 	switch (speed) {
+ 	case USB_STA_SPEED_SS:
+ 		usb3->gadget.speed = USB_SPEED_SUPER;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_SS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_HS:
+ 		usb3->gadget.speed = USB_SPEED_HIGH;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	case USB_STA_SPEED_FS:
+ 		usb3->gadget.speed = USB_SPEED_FULL;
++		usb3->gadget.ep0->maxpacket = USB3_EP0_HSFS_MAX_PACKET_SIZE;
+ 		break;
+ 	default:
+ 		usb3->gadget.speed = USB_SPEED_UNKNOWN;
+@@ -2451,7 +2454,7 @@ static int renesas_usb3_init_ep(struct renesas_usb3 *usb3, struct device *dev,
+ 			/* for control pipe */
+ 			usb3->gadget.ep0 = &usb3_ep->ep;
+ 			usb_ep_set_maxpacket_limit(&usb3_ep->ep,
+-						USB3_EP0_HSFS_MAX_PACKET_SIZE);
++						USB3_EP0_SS_MAX_PACKET_SIZE);
+ 			usb3_ep->ep.caps.type_control = true;
+ 			usb3_ep->ep.caps.dir_in = true;
+ 			usb3_ep->ep.caps.dir_out = true;
+diff --git a/drivers/usb/host/u132-hcd.c b/drivers/usb/host/u132-hcd.c
+index 032b8652910a..02f8e08b3ee8 100644
+--- a/drivers/usb/host/u132-hcd.c
++++ b/drivers/usb/host/u132-hcd.c
+@@ -2555,7 +2555,7 @@ static int u132_get_frame(struct usb_hcd *hcd)
+ 	} else {
+ 		int frame = 0;
+ 		dev_err(&u132->platform_dev->dev, "TODO: u132_get_frame\n");
+-		msleep(100);
++		mdelay(100);
+ 		return frame;
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index ef350c33dc4a..b1f27aa38b10 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1613,6 +1613,10 @@ void xhci_endpoint_copy(struct xhci_hcd *xhci,
+ 	in_ep_ctx->ep_info2 = out_ep_ctx->ep_info2;
+ 	in_ep_ctx->deq = out_ep_ctx->deq;
+ 	in_ep_ctx->tx_info = out_ep_ctx->tx_info;
++	if (xhci->quirks & XHCI_MTK_HOST) {
++		in_ep_ctx->reserved[0] = out_ep_ctx->reserved[0];
++		in_ep_ctx->reserved[1] = out_ep_ctx->reserved[1];
++	}
+ }
+ 
+ /* Copy output xhci_slot_ctx to the input xhci_slot_ctx.
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 68e6132aa8b2..c2220a7fc758 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -37,6 +37,21 @@ static unsigned long long quirks;
+ module_param(quirks, ullong, S_IRUGO);
+ MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
+ 
++static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
++{
++	struct xhci_segment *seg = ring->first_seg;
++
++	if (!td || !td->start_seg)
++		return false;
++	do {
++		if (seg == td->start_seg)
++			return true;
++		seg = seg->next;
++	} while (seg && seg != ring->first_seg);
++
++	return false;
++}
++
+ /* TODO: copied from ehci-hcd.c - can this be refactored? */
+ /*
+  * xhci_handshake - spin reading hc until handshake completes or fails
+@@ -1571,6 +1586,21 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ 		goto done;
+ 	}
+ 
++	/*
++	 * check ring is not re-allocated since URB was enqueued. If it is, then
++	 * make sure none of the ring related pointers in this URB private data
++	 * are touched, such as td_list, otherwise we overwrite freed data
++	 */
++	if (!td_on_ring(&urb_priv->td[0], ep_ring)) {
++		xhci_err(xhci, "Canceled URB td not found on endpoint ring");
++		for (i = urb_priv->num_tds_done; i < urb_priv->num_tds; i++) {
++			td = &urb_priv->td[i];
++			if (!list_empty(&td->cancelled_td_list))
++				list_del_init(&td->cancelled_td_list);
++		}
++		goto err_giveback;
++	}
++
+ 	if (xhci->xhc_state & XHCI_STATE_HALTED) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+ 				"HC halted, freeing TD manually.");
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index de9a502491c2..69822852888a 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -369,7 +369,7 @@ static unsigned char parport_uss720_frob_control(struct parport *pp, unsigned ch
+ 	mask &= 0x0f;
+ 	val &= 0x0f;
+ 	d = (priv->reg[1] & (~mask)) ^ val;
+-	if (set_1284_register(pp, 2, d, GFP_KERNEL))
++	if (set_1284_register(pp, 2, d, GFP_ATOMIC))
+ 		return 0;
+ 	priv->reg[1] = d;
+ 	return d & 0xf;
+@@ -379,7 +379,7 @@ static unsigned char parport_uss720_read_status(struct parport *pp)
+ {
+ 	unsigned char ret;
+ 
+-	if (get_1284_register(pp, 1, &ret, GFP_KERNEL))
++	if (get_1284_register(pp, 1, &ret, GFP_ATOMIC))
+ 		return 0;
+ 	return ret & 0xf8;
+ }
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 3be40eaa1ac9..1232dd49556d 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -421,13 +421,13 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ {
+ 	struct usb_yurex *dev;
+ 	int i, set = 0, retval = 0;
+-	char buffer[16];
++	char buffer[16 + 1];
+ 	char *data = buffer;
+ 	unsigned long long c, c2 = 0;
+ 	signed long timeout = 0;
+ 	DEFINE_WAIT(wait);
+ 
+-	count = min(sizeof(buffer), count);
++	count = min(sizeof(buffer) - 1, count);
+ 	dev = file->private_data;
+ 
+ 	/* verify that we actually have some data to write */
+@@ -446,6 +446,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ 		retval = -EFAULT;
+ 		goto error;
+ 	}
++	buffer[count] = 0;
+ 	memset(dev->cntl_buffer, CMD_PADDING, YUREX_BUF_SIZE);
+ 
+ 	switch (buffer[0]) {
+diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c
+index eecfd0671362..d045d8458f81 100644
+--- a/drivers/usb/mtu3/mtu3_core.c
++++ b/drivers/usb/mtu3/mtu3_core.c
+@@ -107,8 +107,12 @@ static int mtu3_device_enable(struct mtu3 *mtu)
+ 		(SSUSB_U2_PORT_DIS | SSUSB_U2_PORT_PDN |
+ 		SSUSB_U2_PORT_HOST_SEL));
+ 
+-	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG)
++	if (mtu->ssusb->dr_mode == USB_DR_MODE_OTG) {
+ 		mtu3_setbits(ibase, SSUSB_U2_CTRL(0), SSUSB_U2_PORT_OTG_SEL);
++		if (mtu->is_u3_ip)
++			mtu3_setbits(ibase, SSUSB_U3_CTRL(0),
++				     SSUSB_U3_PORT_DUAL_MODE);
++	}
+ 
+ 	return ssusb_check_clocks(mtu->ssusb, check_clk);
+ }
+diff --git a/drivers/usb/mtu3/mtu3_hw_regs.h b/drivers/usb/mtu3/mtu3_hw_regs.h
+index 6ee371478d89..a45bb253939f 100644
+--- a/drivers/usb/mtu3/mtu3_hw_regs.h
++++ b/drivers/usb/mtu3/mtu3_hw_regs.h
+@@ -459,6 +459,7 @@
+ 
+ /* U3D_SSUSB_U3_CTRL_0P */
+ #define SSUSB_U3_PORT_SSP_SPEED	BIT(9)
++#define SSUSB_U3_PORT_DUAL_MODE	BIT(7)
+ #define SSUSB_U3_PORT_HOST_SEL		BIT(2)
+ #define SSUSB_U3_PORT_PDN		BIT(1)
+ #define SSUSB_U3_PORT_DIS		BIT(0)
+diff --git a/drivers/usb/serial/io_ti.h b/drivers/usb/serial/io_ti.h
+index e53c68261017..9bbcee37524e 100644
+--- a/drivers/usb/serial/io_ti.h
++++ b/drivers/usb/serial/io_ti.h
+@@ -173,7 +173,7 @@ struct ump_interrupt {
+ }  __attribute__((packed));
+ 
+ 
+-#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 4) - 3)
++#define TIUMP_GET_PORT_FROM_CODE(c)	(((c) >> 6) & 0x01)
+ #define TIUMP_GET_FUNC_FROM_CODE(c)	((c) & 0x0f)
+ #define TIUMP_INTERRUPT_CODE_LSR	0x03
+ #define TIUMP_INTERRUPT_CODE_MSR	0x04
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 6b22857f6e52..58fc7964ee6b 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1119,7 +1119,7 @@ static void ti_break(struct tty_struct *tty, int break_state)
+ 
+ static int ti_get_port_from_code(unsigned char code)
+ {
+-	return (code >> 4) - 3;
++	return (code >> 6) & 0x01;
+ }
+ 
+ static int ti_get_func_from_code(unsigned char code)
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index c267f2812a04..e227bb5b794f 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -376,6 +376,15 @@ static int queuecommand_lck(struct scsi_cmnd *srb,
+ 		return 0;
+ 	}
+ 
++	if ((us->fflags & US_FL_NO_ATA_1X) &&
++			(srb->cmnd[0] == ATA_12 || srb->cmnd[0] == ATA_16)) {
++		memcpy(srb->sense_buffer, usb_stor_sense_invalidCDB,
++		       sizeof(usb_stor_sense_invalidCDB));
++		srb->result = SAM_STAT_CHECK_CONDITION;
++		done(srb);
++		return 0;
++	}
++
+ 	/* enqueue the command and wake up the control thread */
+ 	srb->scsi_done = done;
+ 	us->srb = srb;
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 9e9de5452860..1f7b401c4d04 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -842,6 +842,27 @@ static int uas_slave_configure(struct scsi_device *sdev)
+ 		sdev->skip_ms_page_8 = 1;
+ 		sdev->wce_default_on = 1;
+ 	}
++
++	/*
++	 * Some disks return the total number of blocks in response
++	 * to READ CAPACITY rather than the highest block number.
++	 * If this device makes that mistake, tell the sd driver.
++	 */
++	if (devinfo->flags & US_FL_FIX_CAPACITY)
++		sdev->fix_capacity = 1;
++
++	/*
++	 * Some devices don't like MODE SENSE with page=0x3f,
++	 * which is the command used for checking if a device
++	 * is write-protected.  Now that we tell the sd driver
++	 * to do a 192-byte transfer with this command the
++	 * majority of devices work fine, but a few still can't
++	 * handle it.  The sd driver will simply assume those
++	 * devices are write-enabled.
++	 */
++	if (devinfo->flags & US_FL_NO_WP_DETECT)
++		sdev->skip_ms_page_3f = 1;
++
+ 	scsi_change_queue_depth(sdev, devinfo->qdepth - 2);
+ 	return 0;
+ }
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 22fcfccf453a..f7f83b21dc74 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2288,6 +2288,13 @@ UNUSUAL_DEV(  0x2735, 0x100b, 0x0000, 0x9999,
+ 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ 		US_FL_GO_SLOW ),
+ 
++/* Reported-by: Tim Anderson <tsa@biglakesoftware.com> */
++UNUSUAL_DEV(  0x2ca3, 0x0031, 0x0000, 0x9999,
++		"DJI",
++		"CineSSD",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_ATA_1X),
++
+ /*
+  * Reported by Frederic Marchal <frederic.marchal@wowcompany.com>
+  * Mio Moov 330
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 2510fa728d77..de119f11b78f 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -644,7 +644,7 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *
+  *     Valid mode specifiers for @mode_option:
+  *
+- *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m] or
++ *     <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][p][m] or
+  *     <name>[-<bpp>][@<refresh>]
+  *
+  *     with <xres>, <yres>, <bpp> and <refresh> decimal numbers and
+@@ -653,10 +653,10 @@ static int fb_try_mode(struct fb_var_screeninfo *var, struct fb_info *info,
+  *      If 'M' is present after yres (and before refresh/bpp if present),
+  *      the function will compute the timings using VESA(tm) Coordinated
+  *      Video Timings (CVT).  If 'R' is present after 'M', will compute with
+- *      reduced blanking (for flatpanels).  If 'i' is present, compute
+- *      interlaced mode.  If 'm' is present, add margins equal to 1.8%
+- *      of xres rounded down to 8 pixels, and 1.8% of yres. The char
+- *      'i' and 'm' must be after 'M' and 'R'. Example:
++ *      reduced blanking (for flatpanels).  If 'i' or 'p' are present, compute
++ *      interlaced or progressive mode.  If 'm' is present, add margins equal
++ *      to 1.8% of xres rounded down to 8 pixels, and 1.8% of yres. The chars
++ *      'i', 'p' and 'm' must be after 'M' and 'R'. Example:
+  *
+  *      1024x768MR-8@60m - Reduced blank with margins at 60Hz.
+  *
+@@ -697,7 +697,8 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 		unsigned int namelen = strlen(name);
+ 		int res_specified = 0, bpp_specified = 0, refresh_specified = 0;
+ 		unsigned int xres = 0, yres = 0, bpp = default_bpp, refresh = 0;
+-		int yres_specified = 0, cvt = 0, rb = 0, interlace = 0;
++		int yres_specified = 0, cvt = 0, rb = 0;
++		int interlace_specified = 0, interlace = 0;
+ 		int margins = 0;
+ 		u32 best, diff, tdiff;
+ 
+@@ -748,9 +749,17 @@ int fb_find_mode(struct fb_var_screeninfo *var,
+ 				if (!cvt)
+ 					margins = 1;
+ 				break;
++			case 'p':
++				if (!cvt) {
++					interlace = 0;
++					interlace_specified = 1;
++				}
++				break;
+ 			case 'i':
+-				if (!cvt)
++				if (!cvt) {
+ 					interlace = 1;
++					interlace_specified = 1;
++				}
+ 				break;
+ 			default:
+ 				goto done;
+@@ -819,11 +828,21 @@ done:
+ 			if ((name_matches(db[i], name, namelen) ||
+ 			     (res_specified && res_matches(db[i], xres, yres))) &&
+ 			    !fb_try_mode(var, info, &db[i], bpp)) {
+-				if (refresh_specified && db[i].refresh == refresh)
+-					return 1;
++				const int db_interlace = (db[i].vmode &
++					FB_VMODE_INTERLACED ? 1 : 0);
++				int score = abs(db[i].refresh - refresh);
++
++				if (interlace_specified)
++					score += abs(db_interlace - interlace);
++
++				if (!interlace_specified ||
++				    db_interlace == interlace)
++					if (refresh_specified &&
++					    db[i].refresh == refresh)
++						return 1;
+ 
+-				if (abs(db[i].refresh - refresh) < diff) {
+-					diff = abs(db[i].refresh - refresh);
++				if (score < diff) {
++					diff = score;
+ 					best = i;
+ 				}
+ 			}
+diff --git a/drivers/video/fbdev/goldfishfb.c b/drivers/video/fbdev/goldfishfb.c
+index 3b70044773b6..9fe7edf725c6 100644
+--- a/drivers/video/fbdev/goldfishfb.c
++++ b/drivers/video/fbdev/goldfishfb.c
+@@ -301,6 +301,7 @@ static int goldfish_fb_remove(struct platform_device *pdev)
+ 	dma_free_coherent(&pdev->dev, framesize, (void *)fb->fb.screen_base,
+ 						fb->fb.fix.smem_start);
+ 	iounmap(fb->reg_base);
++	kfree(fb);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
+index 585f39efcff6..1c75f4806ed3 100644
+--- a/drivers/video/fbdev/omap/omapfb_main.c
++++ b/drivers/video/fbdev/omap/omapfb_main.c
+@@ -958,7 +958,7 @@ int omapfb_register_client(struct omapfb_notifier_block *omapfb_nb,
+ {
+ 	int r;
+ 
+-	if ((unsigned)omapfb_nb->plane_idx > OMAPFB_PLANE_NUM)
++	if ((unsigned)omapfb_nb->plane_idx >= OMAPFB_PLANE_NUM)
+ 		return -EINVAL;
+ 
+ 	if (!notifier_inited) {
+diff --git a/drivers/video/fbdev/omap2/omapfb/Makefile b/drivers/video/fbdev/omap2/omapfb/Makefile
+index 602edfed09df..f54c3f56b641 100644
+--- a/drivers/video/fbdev/omap2/omapfb/Makefile
++++ b/drivers/video/fbdev/omap2/omapfb/Makefile
+@@ -2,5 +2,5 @@
+ obj-$(CONFIG_OMAP2_VRFB) += vrfb.o
+ obj-y += dss/
+ obj-y += displays/
+-obj-$(CONFIG_FB_OMAP2) += omapfb.o
+-omapfb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
++obj-$(CONFIG_FB_OMAP2) += omap2fb.o
++omap2fb-y := omapfb-main.o omapfb-sysfs.o omapfb-ioctl.o
+diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
+index 76722a59f55e..dfe382e68287 100644
+--- a/drivers/video/fbdev/pxafb.c
++++ b/drivers/video/fbdev/pxafb.c
+@@ -2128,8 +2128,8 @@ static int of_get_pxafb_display(struct device *dev, struct device_node *disp,
+ 		return -EINVAL;
+ 
+ 	ret = -ENOMEM;
+-	info->modes = kmalloc_array(timings->num_timings,
+-				    sizeof(info->modes[0]), GFP_KERNEL);
++	info->modes = kcalloc(timings->num_timings, sizeof(info->modes[0]),
++			      GFP_KERNEL);
+ 	if (!info->modes)
+ 		goto out;
+ 	info->num_modes = timings->num_timings;
+diff --git a/drivers/video/fbdev/via/viafbdev.c b/drivers/video/fbdev/via/viafbdev.c
+index d2f785068ef4..7bb7e90b8f00 100644
+--- a/drivers/video/fbdev/via/viafbdev.c
++++ b/drivers/video/fbdev/via/viafbdev.c
+@@ -19,6 +19,7 @@
+  * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+  */
+ 
++#include <linux/compiler.h>
+ #include <linux/module.h>
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+@@ -1468,7 +1469,7 @@ static const struct file_operations viafb_vt1636_proc_fops = {
+ 
+ #endif /* CONFIG_FB_VIA_DIRECT_PROCFS */
+ 
+-static int viafb_sup_odev_proc_show(struct seq_file *m, void *v)
++static int __maybe_unused viafb_sup_odev_proc_show(struct seq_file *m, void *v)
+ {
+ 	via_odev_to_seq(m, supported_odev_map[
+ 		viaparinfo->shared->chip_info.gfx_chip_name]);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 816cc921cf36..efae2fb0930a 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1751,7 +1751,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 		const struct user_regset *regset = &view->regsets[i];
+ 		do_thread_regset_writeback(t->task, regset);
+ 		if (regset->core_note_type && regset->get &&
+-		    (!regset->active || regset->active(t->task, regset))) {
++		    (!regset->active || regset->active(t->task, regset) > 0)) {
+ 			int ret;
+ 			size_t size = regset_size(t->task, regset);
+ 			void *data = kmalloc(size, GFP_KERNEL);
+diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
+index eeab81c9452f..e169e1a5fd35 100644
+--- a/fs/cifs/readdir.c
++++ b/fs/cifs/readdir.c
+@@ -376,8 +376,15 @@ static char *nxt_dir_entry(char *old_entry, char *end_of_smb, int level)
+ 
+ 		new_entry = old_entry + sizeof(FIND_FILE_STANDARD_INFO) +
+ 				pfData->FileNameLength;
+-	} else
+-		new_entry = old_entry + le32_to_cpu(pDirInfo->NextEntryOffset);
++	} else {
++		u32 next_offset = le32_to_cpu(pDirInfo->NextEntryOffset);
++
++		if (old_entry + next_offset < old_entry) {
++			cifs_dbg(VFS, "invalid offset %u\n", next_offset);
++			return NULL;
++		}
++		new_entry = old_entry + next_offset;
++	}
+ 	cifs_dbg(FYI, "new entry %p old entry %p\n", new_entry, old_entry);
+ 	/* validate that new_entry is not past end of SMB */
+ 	if (new_entry >= end_of_smb) {
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 82be1dfeca33..29cce842ed04 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2418,14 +2418,14 @@ SMB2_ioctl(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid,
+ 	/* We check for obvious errors in the output buffer length and offset */
+ 	if (*plen == 0)
+ 		goto ioctl_exit; /* server returned no data */
+-	else if (*plen > 0xFF00) {
++	else if (*plen > rsp_iov.iov_len || *plen > 0xFF00) {
+ 		cifs_dbg(VFS, "srv returned invalid ioctl length: %d\n", *plen);
+ 		*plen = 0;
+ 		rc = -EIO;
+ 		goto ioctl_exit;
+ 	}
+ 
+-	if (rsp_iov.iov_len < le32_to_cpu(rsp->OutputOffset) + *plen) {
++	if (rsp_iov.iov_len - *plen < le32_to_cpu(rsp->OutputOffset)) {
+ 		cifs_dbg(VFS, "Malformed ioctl resp: len %d offset %d\n", *plen,
+ 			le32_to_cpu(rsp->OutputOffset));
+ 		*plen = 0;
+@@ -3492,33 +3492,38 @@ num_entries(char *bufstart, char *end_of_buf, char **lastentry, size_t size)
+ 	int len;
+ 	unsigned int entrycount = 0;
+ 	unsigned int next_offset = 0;
+-	FILE_DIRECTORY_INFO *entryptr;
++	char *entryptr;
++	FILE_DIRECTORY_INFO *dir_info;
+ 
+ 	if (bufstart == NULL)
+ 		return 0;
+ 
+-	entryptr = (FILE_DIRECTORY_INFO *)bufstart;
++	entryptr = bufstart;
+ 
+ 	while (1) {
+-		entryptr = (FILE_DIRECTORY_INFO *)
+-					((char *)entryptr + next_offset);
+-
+-		if ((char *)entryptr + size > end_of_buf) {
++		if (entryptr + next_offset < entryptr ||
++		    entryptr + next_offset > end_of_buf ||
++		    entryptr + next_offset + size > end_of_buf) {
+ 			cifs_dbg(VFS, "malformed search entry would overflow\n");
+ 			break;
+ 		}
+ 
+-		len = le32_to_cpu(entryptr->FileNameLength);
+-		if ((char *)entryptr + len + size > end_of_buf) {
++		entryptr = entryptr + next_offset;
++		dir_info = (FILE_DIRECTORY_INFO *)entryptr;
++
++		len = le32_to_cpu(dir_info->FileNameLength);
++		if (entryptr + len < entryptr ||
++		    entryptr + len > end_of_buf ||
++		    entryptr + len + size > end_of_buf) {
+ 			cifs_dbg(VFS, "directory entry name would overflow frame end of buf %p\n",
+ 				 end_of_buf);
+ 			break;
+ 		}
+ 
+-		*lastentry = (char *)entryptr;
++		*lastentry = entryptr;
+ 		entrycount++;
+ 
+-		next_offset = le32_to_cpu(entryptr->NextEntryOffset);
++		next_offset = le32_to_cpu(dir_info->NextEntryOffset);
+ 		if (!next_offset)
+ 			break;
+ 	}
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index 577cff24707b..39843fa7e11b 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1777,6 +1777,16 @@ void configfs_unregister_group(struct config_group *group)
+ 	struct dentry *dentry = group->cg_item.ci_dentry;
+ 	struct dentry *parent = group->cg_item.ci_parent->ci_dentry;
+ 
++	mutex_lock(&subsys->su_mutex);
++	if (!group->cg_item.ci_parent->ci_group) {
++		/*
++		 * The parent has already been unlinked and detached
++		 * due to a rmdir.
++		 */
++		goto unlink_group;
++	}
++	mutex_unlock(&subsys->su_mutex);
++
+ 	inode_lock_nested(d_inode(parent), I_MUTEX_PARENT);
+ 	spin_lock(&configfs_dirent_lock);
+ 	configfs_detach_prep(dentry, NULL);
+@@ -1791,6 +1801,7 @@ void configfs_unregister_group(struct config_group *group)
+ 	dput(dentry);
+ 
+ 	mutex_lock(&subsys->su_mutex);
++unlink_group:
+ 	unlink_group(group);
+ 	mutex_unlock(&subsys->su_mutex);
+ }
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 128d489acebb..742147cbe759 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -3106,9 +3106,19 @@ static struct dentry *f2fs_mount(struct file_system_type *fs_type, int flags,
+ static void kill_f2fs_super(struct super_block *sb)
+ {
+ 	if (sb->s_root) {
+-		set_sbi_flag(F2FS_SB(sb), SBI_IS_CLOSE);
+-		f2fs_stop_gc_thread(F2FS_SB(sb));
+-		f2fs_stop_discard_thread(F2FS_SB(sb));
++		struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
++		set_sbi_flag(sbi, SBI_IS_CLOSE);
++		f2fs_stop_gc_thread(sbi);
++		f2fs_stop_discard_thread(sbi);
++
++		if (is_sbi_flag_set(sbi, SBI_IS_DIRTY) ||
++				!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
++			struct cp_control cpc = {
++				.reason = CP_UMOUNT,
++			};
++			f2fs_write_checkpoint(sbi, &cpc);
++		}
+ 	}
+ 	kill_block_super(sb);
+ }
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index ed6699705c13..fd5bea55fd60 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2060,7 +2060,7 @@ int gfs2_write_alloc_required(struct gfs2_inode *ip, u64 offset,
+ 	end_of_file = (i_size_read(&ip->i_inode) + sdp->sd_sb.sb_bsize - 1) >> shift;
+ 	lblock = offset >> shift;
+ 	lblock_stop = (offset + len + sdp->sd_sb.sb_bsize - 1) >> shift;
+-	if (lblock_stop > end_of_file)
++	if (lblock_stop > end_of_file && ip != GFS2_I(sdp->sd_rindex))
+ 		return 1;
+ 
+ 	size = (lblock_stop - lblock) << shift;
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 33abcf29bc05..b86249ebde11 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1686,7 +1686,8 @@ static int gfs2_rbm_find(struct gfs2_rbm *rbm, u8 state, u32 *minext,
+ 
+ 	while(1) {
+ 		bi = rbm_bi(rbm);
+-		if (test_bit(GBF_FULL, &bi->bi_flags) &&
++		if ((ip == NULL || !gfs2_rs_active(&ip->i_res)) &&
++		    test_bit(GBF_FULL, &bi->bi_flags) &&
+ 		    (state == GFS2_BLKST_FREE))
+ 			goto next_bitmap;
+ 
+diff --git a/fs/namespace.c b/fs/namespace.c
+index bd2f4c68506a..1949e0939d40 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -446,10 +446,10 @@ int mnt_want_write_file_path(struct file *file)
+ {
+ 	int ret;
+ 
+-	sb_start_write(file->f_path.mnt->mnt_sb);
++	sb_start_write(file_inode(file)->i_sb);
+ 	ret = __mnt_want_write_file(file);
+ 	if (ret)
+-		sb_end_write(file->f_path.mnt->mnt_sb);
++		sb_end_write(file_inode(file)->i_sb);
+ 	return ret;
+ }
+ 
+@@ -540,7 +540,8 @@ void __mnt_drop_write_file(struct file *file)
+ 
+ void mnt_drop_write_file_path(struct file *file)
+ {
+-	mnt_drop_write(file->f_path.mnt);
++	__mnt_drop_write_file(file);
++	sb_end_write(file_inode(file)->i_sb);
+ }
+ 
+ void mnt_drop_write_file(struct file *file)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index ff98e2a3f3cc..f688338b0482 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2642,14 +2642,18 @@ static void nfs41_check_delegation_stateid(struct nfs4_state *state)
+ 	}
+ 
+ 	nfs4_stateid_copy(&stateid, &delegation->stateid);
+-	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) ||
+-		!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
+-			&delegation->flags)) {
++	if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) {
+ 		rcu_read_unlock();
+ 		nfs_finish_clear_delegation_stateid(state, &stateid);
+ 		return;
+ 	}
+ 
++	if (!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
++				&delegation->flags)) {
++		rcu_read_unlock();
++		return;
++	}
++
+ 	cred = get_rpccred(delegation->cred);
+ 	rcu_read_unlock();
+ 	status = nfs41_test_and_free_expired_stateid(server, &stateid, cred);
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 2bf2eaa08ca7..3c18c12a5c4c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1390,6 +1390,8 @@ int nfs4_schedule_stateid_recovery(const struct nfs_server *server, struct nfs4_
+ 
+ 	if (!nfs4_state_mark_reclaim_nograce(clp, state))
+ 		return -EBADF;
++	nfs_inode_find_delegation_state_and_recover(state->inode,
++			&state->stateid);
+ 	dprintk("%s: scheduling stateid recovery for server %s\n", __func__,
+ 			clp->cl_hostname);
+ 	nfs4_schedule_state_manager(clp);
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index a275fba93170..708342f4692f 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -1194,7 +1194,7 @@ DECLARE_EVENT_CLASS(nfs4_inode_stateid_callback_event,
+ 		TP_fast_assign(
+ 			__entry->error = error;
+ 			__entry->fhandle = nfs_fhandle_hash(fhandle);
+-			if (inode != NULL) {
++			if (!IS_ERR_OR_NULL(inode)) {
+ 				__entry->fileid = NFS_FILEID(inode);
+ 				__entry->dev = inode->i_sb->s_dev;
+ 			} else {
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 704b37311467..fa2121f877c1 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -970,16 +970,6 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	if (err)
+ 		goto out;
+ 
+-	err = -EBUSY;
+-	if (ovl_inuse_trylock(upperpath->dentry)) {
+-		ofs->upperdir_locked = true;
+-	} else if (ofs->config.index) {
+-		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
+-		goto out;
+-	} else {
+-		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+-	}
+-
+ 	upper_mnt = clone_private_mount(upperpath);
+ 	err = PTR_ERR(upper_mnt);
+ 	if (IS_ERR(upper_mnt)) {
+@@ -990,6 +980,17 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+ 	/* Don't inherit atime flags */
+ 	upper_mnt->mnt_flags &= ~(MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME);
+ 	ofs->upper_mnt = upper_mnt;
++
++	err = -EBUSY;
++	if (ovl_inuse_trylock(ofs->upper_mnt->mnt_root)) {
++		ofs->upperdir_locked = true;
++	} else if (ofs->config.index) {
++		pr_err("overlayfs: upperdir is in-use by another mount, mount with '-o index=off' to override exclusive upperdir protection.\n");
++		goto out;
++	} else {
++		pr_warn("overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
++	}
++
+ 	err = 0;
+ out:
+ 	return err;
+@@ -1089,8 +1090,10 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		goto out;
+ 	}
+ 
++	ofs->workbasedir = dget(workpath.dentry);
++
+ 	err = -EBUSY;
+-	if (ovl_inuse_trylock(workpath.dentry)) {
++	if (ovl_inuse_trylock(ofs->workbasedir)) {
+ 		ofs->workdir_locked = true;
+ 	} else if (ofs->config.index) {
+ 		pr_err("overlayfs: workdir is in-use by another mount, mount with '-o index=off' to override exclusive workdir protection.\n");
+@@ -1099,7 +1102,6 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ 		pr_warn("overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.\n");
+ 	}
+ 
+-	ofs->workbasedir = dget(workpath.dentry);
+ 	err = ovl_make_workdir(ofs, &workpath);
+ 	if (err)
+ 		goto out;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 951a14edcf51..0792595ebcfb 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -429,7 +429,12 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
+ 	vaddr = vmap(pages, page_count, VM_MAP, prot);
+ 	kfree(pages);
+ 
+-	return vaddr;
++	/*
++	 * Since vmap() uses page granularity, we must add the offset
++	 * into the page here, to get the byte granularity address
++	 * into the mapping to represent the actual "start" location.
++	 */
++	return vaddr + offset_in_page(start);
+ }
+ 
+ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+@@ -448,6 +453,11 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size,
+ 	else
+ 		va = ioremap_wc(start, size);
+ 
++	/*
++	 * Since request_mem_region() and ioremap() are byte-granularity
+	 * there is no need to handle anything special like we do in the
++	 * vmap() case in persistent_ram_vmap() above.
++	 */
+ 	return va;
+ }
+ 
+@@ -468,7 +478,7 @@ static int persistent_ram_buffer_map(phys_addr_t start, phys_addr_t size,
+ 		return -ENOMEM;
+ 	}
+ 
+-	prz->buffer = prz->vaddr + offset_in_page(start);
++	prz->buffer = prz->vaddr;
+ 	prz->buffer_size = size - sizeof(struct persistent_ram_buffer);
+ 
+ 	return 0;
+@@ -515,7 +525,8 @@ void persistent_ram_free(struct persistent_ram_zone *prz)
+ 
+ 	if (prz->vaddr) {
+ 		if (pfn_valid(prz->paddr >> PAGE_SHIFT)) {
+-			vunmap(prz->vaddr);
++			/* We must vunmap() at page-granularity. */
++			vunmap(prz->vaddr - offset_in_page(prz->paddr));
+ 		} else {
+ 			iounmap(prz->vaddr);
+ 			release_mem_region(prz->paddr, prz->size);
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index 6eb06101089f..e8839d3a7559 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -112,6 +112,11 @@
+  */
+ #define CRYPTO_ALG_OPTIONAL_KEY		0x00004000
+ 
++/*
++ * Don't trigger module loading
++ */
++#define CRYPTO_NOLOAD			0x00008000
++
+ /*
+  * Transform masks and values (for crt_flags).
+  */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 83957920653a..64f450593b54 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -357,7 +357,7 @@ struct mlx5_frag_buf {
+ struct mlx5_frag_buf_ctrl {
+ 	struct mlx5_frag_buf	frag_buf;
+ 	u32			sz_m1;
+-	u32			frag_sz_m1;
++	u16			frag_sz_m1;
+ 	u32			strides_offset;
+ 	u8			log_sz;
+ 	u8			log_stride;
+@@ -1042,7 +1042,7 @@ int mlx5_cmd_free_uar(struct mlx5_core_dev *dev, u32 uarn);
+ void mlx5_health_cleanup(struct mlx5_core_dev *dev);
+ int mlx5_health_init(struct mlx5_core_dev *dev);
+ void mlx5_start_health_poll(struct mlx5_core_dev *dev);
+-void mlx5_stop_health_poll(struct mlx5_core_dev *dev);
++void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health);
+ void mlx5_drain_health_wq(struct mlx5_core_dev *dev);
+ void mlx5_trigger_health_work(struct mlx5_core_dev *dev);
+ void mlx5_drain_health_recovery(struct mlx5_core_dev *dev);
+diff --git a/include/linux/of.h b/include/linux/of.h
+index 4d25e4f952d9..b99a1a8c2952 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -290,6 +290,8 @@ extern struct device_node *of_get_next_child(const struct device_node *node,
+ extern struct device_node *of_get_next_available_child(
+ 	const struct device_node *node, struct device_node *prev);
+ 
++extern struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible);
+ extern struct device_node *of_get_child_by_name(const struct device_node *node,
+ 					const char *name);
+ 
+@@ -632,6 +634,12 @@ static inline bool of_have_populated_dt(void)
+ 	return false;
+ }
+ 
++static inline struct device_node *of_get_compatible_child(const struct device_node *parent,
++					const char *compatible)
++{
++	return NULL;
++}
++
+ static inline struct device_node *of_get_child_by_name(
+ 					const struct device_node *node,
+ 					const char *name)
+diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c
+index c17c0c268436..dce35e16bff4 100644
+--- a/kernel/audit_watch.c
++++ b/kernel/audit_watch.c
+@@ -419,6 +419,13 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	struct path parent_path;
+ 	int h, ret = 0;
+ 
++	/*
+	 * By the time we call audit_add_to_parent(), krule->watch may have
+	 * been updated and the watch may have been freed.
+	 * So we need to hold a reference to the watch.
++	 */
++	audit_get_watch(watch);
++
+ 	mutex_unlock(&audit_filter_mutex);
+ 
+ 	/* Avoid calling path_lookup under audit_filter_mutex. */
+@@ -427,8 +434,10 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	/* caller expects mutex locked */
+ 	mutex_lock(&audit_filter_mutex);
+ 
+-	if (ret)
++	if (ret) {
++		audit_put_watch(watch);
+ 		return ret;
++	}
+ 
+ 	/* either find an old parent or attach a new one */
+ 	parent = audit_find_parent(d_backing_inode(parent_path.dentry));
+@@ -446,6 +455,7 @@ int audit_add_watch(struct audit_krule *krule, struct list_head **list)
+ 	*list = &audit_inode_hash[h];
+ error:
+ 	path_put(&parent_path);
++	audit_put_watch(watch);
+ 	return ret;
+ }
+ 
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 3d83ee7df381..badabb0b435c 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -95,7 +95,7 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 				   enum bpf_attach_type type,
+ 				   struct bpf_prog_array __rcu **array)
+ {
+-	struct bpf_prog_array __rcu *progs;
++	struct bpf_prog_array *progs;
+ 	struct bpf_prog_list *pl;
+ 	struct cgroup *p = cgrp;
+ 	int cnt = 0;
+@@ -120,13 +120,12 @@ static int compute_effective_progs(struct cgroup *cgrp,
+ 					    &p->bpf.progs[type], node) {
+ 				if (!pl->prog)
+ 					continue;
+-				rcu_dereference_protected(progs, 1)->
+-					progs[cnt++] = pl->prog;
++				progs->progs[cnt++] = pl->prog;
+ 			}
+ 		p = cgroup_parent(p);
+ 	} while (p);
+ 
+-	*array = progs;
++	rcu_assign_pointer(*array, progs);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index eec2d5fb676b..c7b3e34811ec 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -5948,6 +5948,7 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 		unsigned long sp;
+ 		unsigned int rem;
+ 		u64 dyn_size;
++		mm_segment_t fs;
+ 
+ 		/*
+ 		 * We dump:
+@@ -5965,7 +5966,10 @@ perf_output_sample_ustack(struct perf_output_handle *handle, u64 dump_size,
+ 
+ 		/* Data. */
+ 		sp = perf_user_stack_pointer(regs);
++		fs = get_fs();
++		set_fs(USER_DS);
+ 		rem = __output_copy_user(handle, (void *) sp, dump_size);
++		set_fs(fs);
+ 		dyn_size = dump_size - rem;
+ 
+ 		perf_output_skip(handle, rem);
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index 42fcb7f05fac..f42cf69ef539 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -1446,7 +1446,7 @@ static int rcu_torture_stall(void *args)
+ 		VERBOSE_TOROUT_STRING("rcu_torture_stall end holdoff");
+ 	}
+ 	if (!kthread_should_stop()) {
+-		stop_at = get_seconds() + stall_cpu;
++		stop_at = ktime_get_seconds() + stall_cpu;
+ 		/* RCU CPU stall is expected behavior in following code. */
+ 		rcu_read_lock();
+ 		if (stall_cpu_irqsoff)
+@@ -1455,7 +1455,8 @@ static int rcu_torture_stall(void *args)
+ 			preempt_disable();
+ 		pr_alert("rcu_torture_stall start on CPU %d.\n",
+ 			 smp_processor_id());
+-		while (ULONG_CMP_LT(get_seconds(), stop_at))
++		while (ULONG_CMP_LT((unsigned long)ktime_get_seconds(),
++				    stop_at))
+ 			continue;  /* Induce RCU CPU stall warning. */
+ 		if (stall_cpu_irqsoff)
+ 			local_irq_enable();
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 9c219f7b0970..478d9d3e6be9 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
+  * To solve this problem, we also cap the util_avg of successive tasks to
+  * only 1/2 of the left utilization budget:
+  *
+- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
++ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
+  *
+- * where n denotes the nth task.
++ * where n denotes the nth task and cpu_scale the CPU capacity.
+  *
+- * For example, a simplest series from the beginning would be like:
++ * For example, for a CPU with 1024 of capacity, a simplest series from
++ * the beginning would be like:
+  *
+  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
+  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
+@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
+ {
+ 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ 	struct sched_avg *sa = &se->avg;
+-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
++	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
++	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
+ 
+ 	if (cap > 0) {
+ 		if (cfs_rq->avg.util_avg != 0) {
+diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
+index 928be527477e..a7a2aaa3026a 100644
+--- a/kernel/sched/wait.c
++++ b/kernel/sched/wait.c
+@@ -392,35 +392,36 @@ static inline bool is_kthread_should_stop(void)
+  *     if (condition)
+  *         break;
+  *
+- *     p->state = mode;				condition = true;
+- *     smp_mb(); // A				smp_wmb(); // C
+- *     if (!wq_entry->flags & WQ_FLAG_WOKEN)	wq_entry->flags |= WQ_FLAG_WOKEN;
+- *         schedule()				try_to_wake_up();
+- *     p->state = TASK_RUNNING;		    ~~~~~~~~~~~~~~~~~~
+- *     wq_entry->flags &= ~WQ_FLAG_WOKEN;		condition = true;
+- *     smp_mb() // B				smp_wmb(); // C
+- *						wq_entry->flags |= WQ_FLAG_WOKEN;
+- * }
+- * remove_wait_queue(&wq_head, &wait);
++ *     // in wait_woken()			// in woken_wake_function()
+  *
++ *     p->state = mode;				wq_entry->flags |= WQ_FLAG_WOKEN;
++ *     smp_mb(); // A				try_to_wake_up():
++ *     if (!(wq_entry->flags & WQ_FLAG_WOKEN))	   <full barrier>
++ *         schedule()				   if (p->state & mode)
++ *     p->state = TASK_RUNNING;			      p->state = TASK_RUNNING;
++ *     wq_entry->flags &= ~WQ_FLAG_WOKEN;	~~~~~~~~~~~~~~~~~~
++ *     smp_mb(); // B				condition = true;
++ * }						smp_mb(); // C
++ * remove_wait_queue(&wq_head, &wait);		wq_entry->flags |= WQ_FLAG_WOKEN;
+  */
+ long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout)
+ {
+-	set_current_state(mode); /* A */
+ 	/*
+-	 * The above implies an smp_mb(), which matches with the smp_wmb() from
+-	 * woken_wake_function() such that if we observe WQ_FLAG_WOKEN we must
+-	 * also observe all state before the wakeup.
++	 * The below executes an smp_mb(), which matches with the full barrier
++	 * executed by the try_to_wake_up() in woken_wake_function() such that
++	 * either we see the store to wq_entry->flags in woken_wake_function()
++	 * or woken_wake_function() sees our store to current->state.
+ 	 */
++	set_current_state(mode); /* A */
+ 	if (!(wq_entry->flags & WQ_FLAG_WOKEN) && !is_kthread_should_stop())
+ 		timeout = schedule_timeout(timeout);
+ 	__set_current_state(TASK_RUNNING);
+ 
+ 	/*
+-	 * The below implies an smp_mb(), it too pairs with the smp_wmb() from
+-	 * woken_wake_function() such that we must either observe the wait
+-	 * condition being true _OR_ WQ_FLAG_WOKEN such that we will not miss
+-	 * an event.
++	 * The below executes an smp_mb(), which matches with the smp_mb() (C)
++	 * in woken_wake_function() such that either we see the wait condition
++	 * being true or the store to wq_entry->flags in woken_wake_function()
++	 * follows ours in the coherence order.
+ 	 */
+ 	smp_store_mb(wq_entry->flags, wq_entry->flags & ~WQ_FLAG_WOKEN); /* B */
+ 
+@@ -430,14 +431,8 @@ EXPORT_SYMBOL(wait_woken);
+ 
+ int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key)
+ {
+-	/*
+-	 * Although this function is called under waitqueue lock, LOCK
+-	 * doesn't imply write barrier and the users expects write
+-	 * barrier semantics on wakeup functions.  The following
+-	 * smp_wmb() is equivalent to smp_wmb() in try_to_wake_up()
+-	 * and is paired with smp_store_mb() in wait_woken().
+-	 */
+-	smp_wmb(); /* C */
++	/* Pairs with the smp_store_mb() in wait_woken(). */
++	smp_mb(); /* C */
+ 	wq_entry->flags |= WQ_FLAG_WOKEN;
+ 
+ 	return default_wake_function(wq_entry, mode, sync, key);
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index 3264e1873219..deacc52d7ff1 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -159,7 +159,7 @@ void bt_accept_enqueue(struct sock *parent, struct sock *sk)
+ 	BT_DBG("parent %p, sk %p", parent, sk);
+ 
+ 	sock_hold(sk);
+-	lock_sock(sk);
++	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ 	bt_sk(sk)->parent = parent;
+ 	release_sock(sk);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index fb35b62af272..3680912f056a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -939,9 +939,6 @@ struct ubuf_info *sock_zerocopy_alloc(struct sock *sk, size_t size)
+ 
+ 	WARN_ON_ONCE(!in_task());
+ 
+-	if (!sock_flag(sk, SOCK_ZEROCOPY))
+-		return NULL;
+-
+ 	skb = sock_omalloc(sk, 0, GFP_KERNEL);
+ 	if (!skb)
+ 		return NULL;
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 055f4bbba86b..41883c34a385 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -178,6 +178,9 @@ static void ipgre_err(struct sk_buff *skb, u32 info,
+ 
+ 	if (tpi->proto == htons(ETH_P_TEB))
+ 		itn = net_generic(net, gre_tap_net_id);
++	else if (tpi->proto == htons(ETH_P_ERSPAN) ||
++		 tpi->proto == htons(ETH_P_ERSPAN2))
++		itn = net_generic(net, erspan_net_id);
+ 	else
+ 		itn = net_generic(net, ipgre_net_id);
+ 
+@@ -328,6 +331,8 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 		ip_tunnel_rcv(tunnel, skb, tpi, tun_dst, log_ecn_error);
+ 		return PACKET_RCVD;
+ 	}
++	return PACKET_REJECT;
++
+ drop:
+ 	kfree_skb(skb);
+ 	return PACKET_RCVD;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4491faf83f4f..086201d96d54 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1186,7 +1186,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
+ 
+ 	flags = msg->msg_flags;
+ 
+-	if (flags & MSG_ZEROCOPY && size) {
++	if (flags & MSG_ZEROCOPY && size && sock_flag(sk, SOCK_ZEROCOPY)) {
+ 		if (sk->sk_state != TCP_ESTABLISHED) {
+ 			err = -EINVAL;
+ 			goto out_err;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index bdf6fa78d0d2..aa082b71d2e4 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -495,7 +495,7 @@ static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
+ 		goto out_unlock;
+ 	}
+ 
+-	ieee80211_key_free(key, true);
++	ieee80211_key_free(key, sdata->vif.type == NL80211_IFTYPE_STATION);
+ 
+ 	ret = 0;
+  out_unlock:
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index ee0d0cc8dc3b..c054ac85793c 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -656,11 +656,15 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ {
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct ieee80211_key *old_key;
+-	int idx, ret;
+-	bool pairwise;
+-
+-	pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
+-	idx = key->conf.keyidx;
++	int idx = key->conf.keyidx;
++	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
++	/*
++	 * We want to delay tailroom updates only for station - in that
++	 * case it helps roaming speed, but in other cases it hurts and
++	 * can cause warnings to appear.
++	 */
++	bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
++	int ret;
+ 
+ 	mutex_lock(&sdata->local->key_mtx);
+ 
+@@ -688,14 +692,14 @@ int ieee80211_key_link(struct ieee80211_key *key,
+ 	increment_tailroom_need_count(sdata);
+ 
+ 	ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
+-	ieee80211_key_destroy(old_key, true);
++	ieee80211_key_destroy(old_key, delay_tailroom);
+ 
+ 	ieee80211_debugfs_key_add(key);
+ 
+ 	if (!local->wowlan) {
+ 		ret = ieee80211_key_enable_hw_accel(key);
+ 		if (ret)
+-			ieee80211_key_free(key, true);
++			ieee80211_key_free(key, delay_tailroom);
+ 	} else {
+ 		ret = 0;
+ 	}
+@@ -930,7 +934,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
+@@ -940,7 +945,8 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
+ 		ieee80211_key_replace(key->sdata, key->sta,
+ 				key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE,
+ 				key, NULL);
+-		__ieee80211_key_destroy(key, true);
++		__ieee80211_key_destroy(key, key->sdata->vif.type ==
++					NL80211_IFTYPE_STATION);
+ 	}
+ 
+ 	mutex_unlock(&local->key_mtx);
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 5aa3a64aa4f0..48257d3a4201 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -60,11 +60,13 @@ struct rds_sock *rds_find_bound(__be32 addr, __be16 port)
+ 	u64 key = ((u64)addr << 32) | port;
+ 	struct rds_sock *rs;
+ 
+-	rs = rhashtable_lookup_fast(&bind_hash_table, &key, ht_parms);
++	rcu_read_lock();
++	rs = rhashtable_lookup(&bind_hash_table, &key, ht_parms);
+ 	if (rs && !sock_flag(rds_rs_to_sk(rs), SOCK_DEAD))
+ 		rds_sock_addref(rs);
+ 	else
+ 		rs = NULL;
++	rcu_read_unlock();
+ 
+ 	rdsdebug("returning rs %p for %pI4:%u\n", rs, &addr,
+ 		ntohs(port));
+@@ -157,6 +159,7 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 		goto out;
+ 	}
+ 
++	sock_set_flag(sk, SOCK_RCU_FREE);
+ 	ret = rds_add_bound(rs, sin->sin_addr.s_addr, &sin->sin_port);
+ 	if (ret)
+ 		goto out;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 0a5fa347135e..ac8ca238c541 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -578,6 +578,7 @@ static int tipc_release(struct socket *sock)
+ 	sk_stop_timer(sk, &sk->sk_timer);
+ 	tipc_sk_remove(tsk);
+ 
++	sock_orphan(sk);
+ 	/* Reject any messages that accumulated in backlog queue */
+ 	release_sock(sk);
+ 	tipc_dest_list_purge(&tsk->cong_links);
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 1f3d9789af30..b3344bbe336b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -149,6 +149,9 @@ static int alloc_encrypted_sg(struct sock *sk, int len)
+ 			 &ctx->sg_encrypted_num_elem,
+ 			 &ctx->sg_encrypted_size, 0);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_encrypted_num_elem = ARRAY_SIZE(ctx->sg_encrypted_data);
++
+ 	return rc;
+ }
+ 
+@@ -162,6 +165,9 @@ static int alloc_plaintext_sg(struct sock *sk, int len)
+ 			 &ctx->sg_plaintext_num_elem, &ctx->sg_plaintext_size,
+ 			 tls_ctx->pending_open_record_frags);
+ 
++	if (rc == -ENOSPC)
++		ctx->sg_plaintext_num_elem = ARRAY_SIZE(ctx->sg_plaintext_data);
++
+ 	return rc;
+ }
+ 
+@@ -280,7 +286,7 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ 			      int length, int *pages_used,
+ 			      unsigned int *size_used,
+ 			      struct scatterlist *to, int to_max_pages,
+-			      bool charge)
++			      bool charge, bool revert)
+ {
+ 	struct page *pages[MAX_SKB_FRAGS];
+ 
+@@ -331,6 +337,8 @@ static int zerocopy_from_iter(struct sock *sk, struct iov_iter *from,
+ out:
+ 	*size_used = size;
+ 	*pages_used = num_elem;
++	if (revert)
++		iov_iter_revert(from, size);
+ 
+ 	return rc;
+ }
+@@ -432,7 +440,7 @@ alloc_encrypted:
+ 				&ctx->sg_plaintext_size,
+ 				ctx->sg_plaintext_data,
+ 				ARRAY_SIZE(ctx->sg_plaintext_data),
+-				true);
++				true, false);
+ 			if (ret)
+ 				goto fallback_to_reg_send;
+ 
+@@ -820,7 +828,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 				err = zerocopy_from_iter(sk, &msg->msg_iter,
+ 							 to_copy, &pages,
+ 							 &chunk, &sgin[1],
+-							 MAX_SKB_FRAGS,	false);
++							 MAX_SKB_FRAGS,	false, true);
+ 				if (err < 0)
+ 					goto fallback_to_reg_recv;
+ 
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 7c5e8978aeaa..a94983e03a8b 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1831,7 +1831,10 @@ xfrm_resolve_and_create_bundle(struct xfrm_policy **pols, int num_pols,
+ 	/* Try to instantiate a bundle */
+ 	err = xfrm_tmpl_resolve(pols, num_pols, fl, xfrm, family);
+ 	if (err <= 0) {
+-		if (err != 0 && err != -EAGAIN)
++		if (err == 0)
++			return NULL;
++
++		if (err != -EAGAIN)
+ 			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTPOLERROR);
+ 		return ERR_PTR(err);
+ 	}
+diff --git a/scripts/Kbuild.include b/scripts/Kbuild.include
+index 86321f06461e..ed303f552f9d 100644
+--- a/scripts/Kbuild.include
++++ b/scripts/Kbuild.include
+@@ -400,3 +400,6 @@ endif
+ endef
+ #
+ ###############################################################################
++
++# delete partially updated (i.e. corrupted) files on error
++.DELETE_ON_ERROR:
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index b60524310855..c20e3142b541 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -97,7 +97,8 @@ static struct shash_desc *init_desc(char type)
+ 		mutex_lock(&mutex);
+ 		if (*tfm)
+ 			goto out;
+-		*tfm = crypto_alloc_shash(algo, 0, CRYPTO_ALG_ASYNC);
++		*tfm = crypto_alloc_shash(algo, 0,
++					  CRYPTO_ALG_ASYNC | CRYPTO_NOLOAD);
+ 		if (IS_ERR(*tfm)) {
+ 			rc = PTR_ERR(*tfm);
+ 			pr_err("Can not allocate %s (reason: %ld)\n", algo, rc);
+diff --git a/security/security.c b/security/security.c
+index 68f46d849abe..4e572b38937d 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -118,6 +118,8 @@ static int lsm_append(char *new, char **result)
+ 
+ 	if (*result == NULL) {
+ 		*result = kstrdup(new, GFP_KERNEL);
++		if (*result == NULL)
++			return -ENOMEM;
+ 	} else {
+ 		/* Check if it is the last registered name */
+ 		if (match_last_lsm(*result, new))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 19de675d4504..8b6cd5a79bfa 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3924,15 +3924,19 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 	struct smack_known *skp = NULL;
+ 	int rc = 0;
+ 	struct smk_audit_info ad;
++	u16 family = sk->sk_family;
+ #ifdef CONFIG_AUDIT
+ 	struct lsm_network_audit net;
+ #endif
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	struct sockaddr_in6 sadd;
+ 	int proto;
++
++	if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
++		family = PF_INET;
+ #endif /* CONFIG_IPV6 */
+ 
+-	switch (sk->sk_family) {
++	switch (family) {
+ 	case PF_INET:
+ #ifdef CONFIG_SECURITY_SMACK_NETFILTER
+ 		/*
+@@ -3950,7 +3954,7 @@ static int smack_socket_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
+ 		 */
+ 		netlbl_secattr_init(&secattr);
+ 
+-		rc = netlbl_skbuff_getattr(skb, sk->sk_family, &secattr);
++		rc = netlbl_skbuff_getattr(skb, family, &secattr);
+ 		if (rc == 0)
+ 			skp = smack_from_secattr(&secattr, ssp);
+ 		else
+@@ -3963,7 +3967,7 @@ access_check:
+ #endif
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv4_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif
+@@ -3977,7 +3981,7 @@ access_check:
+ 		rc = smk_bu_note("IPv4 delivery", skp, ssp->smk_in,
+ 					MAY_WRITE, rc);
+ 		if (rc != 0)
+-			netlbl_skbuff_err(skb, sk->sk_family, rc, 0);
++			netlbl_skbuff_err(skb, family, rc, 0);
+ 		break;
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	case PF_INET6:
+@@ -3993,7 +3997,7 @@ access_check:
+ 			skp = smack_net_ambient;
+ #ifdef CONFIG_AUDIT
+ 		smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+-		ad.a.u.net->family = sk->sk_family;
++		ad.a.u.net->family = family;
+ 		ad.a.u.net->netif = skb->skb_iif;
+ 		ipv6_skb_to_auditdata(skb, &ad.a, NULL);
+ #endif /* CONFIG_AUDIT */
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 44b5ae833082..a4aac948ea49 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -626,27 +626,33 @@ EXPORT_SYMBOL(snd_interval_refine);
+ 
+ static int snd_interval_refine_first(struct snd_interval *i)
+ {
++	const unsigned int last_max = i->max;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->max = i->min;
+-	i->openmax = i->openmin;
+-	if (i->openmax)
++	if (i->openmin)
+ 		i->max++;
++	/* only exclude max value if also excluded before refine */
++	i->openmax = (i->openmax && i->max >= last_max);
+ 	return 1;
+ }
+ 
+ static int snd_interval_refine_last(struct snd_interval *i)
+ {
++	const unsigned int last_min = i->min;
++
+ 	if (snd_BUG_ON(snd_interval_empty(i)))
+ 		return -EINVAL;
+ 	if (snd_interval_single(i))
+ 		return 0;
+ 	i->min = i->max;
+-	i->openmin = i->openmax;
+-	if (i->openmin)
++	if (i->openmax)
+ 		i->min--;
++	/* only exclude min value if also excluded before refine */
++	i->openmin = (i->openmin && i->min <= last_min);
+ 	return 1;
+ }
+ 
+diff --git a/sound/isa/msnd/msnd_pinnacle.c b/sound/isa/msnd/msnd_pinnacle.c
+index 6c584d9b6c42..a19f802b2071 100644
+--- a/sound/isa/msnd/msnd_pinnacle.c
++++ b/sound/isa/msnd/msnd_pinnacle.c
+@@ -82,10 +82,10 @@
+ 
+ static void set_default_audio_parameters(struct snd_msnd *chip)
+ {
+-	chip->play_sample_size = DEFSAMPLESIZE;
++	chip->play_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->play_sample_rate = DEFSAMPLERATE;
+ 	chip->play_channels = DEFCHANNELS;
+-	chip->capture_sample_size = DEFSAMPLESIZE;
++	chip->capture_sample_size = snd_pcm_format_width(DEFSAMPLESIZE);
+ 	chip->capture_sample_rate = DEFSAMPLERATE;
+ 	chip->capture_channels = DEFCHANNELS;
+ }
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 38e4a8515709..d00734d31e04 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -291,10 +291,6 @@ static const struct snd_soc_dapm_widget hdmi_widgets[] = {
+ 	SND_SOC_DAPM_OUTPUT("TX"),
+ };
+ 
+-static const struct snd_soc_dapm_route hdmi_routes[] = {
+-	{ "TX", NULL, "Playback" },
+-};
+-
+ enum {
+ 	DAI_ID_I2S = 0,
+ 	DAI_ID_SPDIF,
+@@ -689,9 +685,23 @@ static int hdmi_codec_pcm_new(struct snd_soc_pcm_runtime *rtd,
+ 	return snd_ctl_add(rtd->card->snd_card, kctl);
+ }
+ 
++static int hdmi_dai_probe(struct snd_soc_dai *dai)
++{
++	struct snd_soc_dapm_context *dapm;
++	struct snd_soc_dapm_route route = {
++		.sink = "TX",
++		.source = dai->driver->playback.stream_name,
++	};
++
++	dapm = snd_soc_component_get_dapm(dai->component);
++
++	return snd_soc_dapm_add_routes(dapm, &route, 1);
++}
++
+ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ 	.name = "i2s-hifi",
+ 	.id = DAI_ID_I2S,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "I2S Playback",
+ 		.channels_min = 2,
+@@ -707,6 +717,7 @@ static const struct snd_soc_dai_driver hdmi_i2s_dai = {
+ static const struct snd_soc_dai_driver hdmi_spdif_dai = {
+ 	.name = "spdif-hifi",
+ 	.id = DAI_ID_SPDIF,
++	.probe = hdmi_dai_probe,
+ 	.playback = {
+ 		.stream_name = "SPDIF Playback",
+ 		.channels_min = 2,
+@@ -733,8 +744,6 @@ static int hdmi_of_xlate_dai_id(struct snd_soc_component *component,
+ static const struct snd_soc_component_driver hdmi_driver = {
+ 	.dapm_widgets		= hdmi_widgets,
+ 	.num_dapm_widgets	= ARRAY_SIZE(hdmi_widgets),
+-	.dapm_routes		= hdmi_routes,
+-	.num_dapm_routes	= ARRAY_SIZE(hdmi_routes),
+ 	.of_xlate_dai_id	= hdmi_of_xlate_dai_id,
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 1570b91bf018..dca82dd6e3bf 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -64,8 +64,8 @@ static const struct reg_sequence rt5514_patch[] = {
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_ADCFED,	0x00000800},
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ };
+ 
+ static const struct reg_default rt5514_reg[] = {
+@@ -92,10 +92,10 @@ static const struct reg_default rt5514_reg[] = {
+ 	{RT5514_ASRC_IN_CTRL1,		0x00000003},
+ 	{RT5514_DOWNFILTER0_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER0_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER0_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER0_CTRL3,	0x10000352},
+ 	{RT5514_DOWNFILTER1_CTRL1,	0x00020c2f},
+ 	{RT5514_DOWNFILTER1_CTRL2,	0x00020c2f},
+-	{RT5514_DOWNFILTER1_CTRL3,	0x10000362},
++	{RT5514_DOWNFILTER1_CTRL3,	0x10000352},
+ 	{RT5514_ANA_CTRL_LDO10,		0x00028604},
+ 	{RT5514_ANA_CTRL_LDO18_16,	0x02000345},
+ 	{RT5514_ANA_CTRL_ADC12,		0x0000a2a8},
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 6b5669f3e85d..39d2c67cd064 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1696,6 +1696,13 @@ static irqreturn_t rt5651_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static void rt5651_cancel_work(void *data)
++{
++	struct rt5651_priv *rt5651 = data;
++
++	cancel_work_sync(&rt5651->jack_detect_work);
++}
++
+ static int rt5651_set_jack(struct snd_soc_component *component,
+ 			   struct snd_soc_jack *hp_jack, void *data)
+ {
+@@ -2036,6 +2043,11 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 
+ 	INIT_WORK(&rt5651->jack_detect_work, rt5651_jack_detect_work);
+ 
++	/* Make sure work is stopped on probe-error / remove */
++	ret = devm_add_action_or_reset(&i2c->dev, rt5651_cancel_work, rt5651);
++	if (ret)
++		return ret;
++
+ 	ret = devm_snd_soc_register_component(&i2c->dev,
+ 				&soc_component_dev_rt5651,
+ 				rt5651_dai, ARRAY_SIZE(rt5651_dai));
+@@ -2043,15 +2055,6 @@ static int rt5651_i2c_probe(struct i2c_client *i2c,
+ 	return ret;
+ }
+ 
+-static int rt5651_i2c_remove(struct i2c_client *i2c)
+-{
+-	struct rt5651_priv *rt5651 = i2c_get_clientdata(i2c);
+-
+-	cancel_work_sync(&rt5651->jack_detect_work);
+-
+-	return 0;
+-}
+-
+ static struct i2c_driver rt5651_i2c_driver = {
+ 	.driver = {
+ 		.name = "rt5651",
+@@ -2059,7 +2062,6 @@ static struct i2c_driver rt5651_i2c_driver = {
+ 		.of_match_table = of_match_ptr(rt5651_of_match),
+ 	},
+ 	.probe = rt5651_i2c_probe,
+-	.remove   = rt5651_i2c_remove,
+ 	.id_table = rt5651_i2c_id,
+ };
+ module_i2c_driver(rt5651_i2c_driver);
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index 5002dd05bf27..f8298be7038f 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -1180,7 +1180,7 @@ static void of_q6afe_parse_dai_data(struct device *dev,
+ 		int id, i, num_lines;
+ 
+ 		ret = of_property_read_u32(node, "reg", &id);
+-		if (ret || id > AFE_PORT_MAX) {
++		if (ret || id < 0 || id >= AFE_PORT_MAX) {
+ 			dev_err(dev, "valid dai id not found:%d\n", ret);
+ 			continue;
+ 		}
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8aac48f9c322..08aa78007020 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2875,7 +2875,8 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+  */
+ 
+ #define AU0828_DEVICE(vid, pid, vname, pname) { \
+-	USB_DEVICE_VENDOR_SPEC(vid, pid), \
++	.idVendor = vid, \
++	.idProduct = pid, \
+ 	.match_flags = USB_DEVICE_ID_MATCH_DEVICE | \
+ 		       USB_DEVICE_ID_MATCH_INT_CLASS | \
+ 		       USB_DEVICE_ID_MATCH_INT_SUBCLASS, \
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 02b6cc02767f..dde87d64bc32 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1373,6 +1373,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ 		break;
+ 
++	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
+ 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
+ 	case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
+ 	case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
+@@ -1443,6 +1444,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 	 */
+ 	switch (USB_ID_VENDOR(chip->usb_id)) {
+ 	case 0x20b1:  /* XMOS based devices */
++	case 0x152a:  /* Thesycon devices */
+ 	case 0x25ce:  /* Mytek devices */
+ 		if (fp->dsd_raw)
+ 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
+index dbf6e8bd98ba..bbb2a8ef367c 100644
+--- a/tools/hv/hv_kvp_daemon.c
++++ b/tools/hv/hv_kvp_daemon.c
+@@ -286,7 +286,7 @@ static int kvp_key_delete(int pool, const __u8 *key, int key_size)
+ 		 * Found a match; just move the remaining
+ 		 * entries up.
+ 		 */
+-		if (i == num_records) {
++		if (i == (num_records - 1)) {
+ 			kvp_file_info[pool].num_records--;
+ 			kvp_update_file(pool);
+ 			return 0;
+diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+index ef5d59a5742e..7c6eeb4633fe 100644
+--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
++++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+@@ -58,9 +58,13 @@ static int check_return_reg(int ra_regno, Dwarf_Frame *frame)
+ 	}
+ 
+ 	/*
+-	 * Check if return address is on the stack.
++	 * Check if return address is on the stack. If return address
++	 * is in a register (typically R0), it is yet to be saved on
++	 * the stack.
+ 	 */
+-	if (nops != 0 || ops != NULL)
++	if ((nops != 0 || ops != NULL) &&
++		!(nops == 1 && ops[0].atom == DW_OP_regx &&
++			ops[0].number2 == 0 && ops[0].offset == 0))
+ 		return 0;
+ 
+ 	/*
+@@ -246,7 +250,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
+ 	if (!chain || chain->nr < 3)
+ 		return skip_slot;
+ 
+-	ip = chain->ips[2];
++	ip = chain->ips[1];
+ 
+ 	thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+ 
+diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
+index dd850a26d579..4f5de8245b32 100644
+--- a/tools/perf/tests/builtin-test.c
++++ b/tools/perf/tests/builtin-test.c
+@@ -599,7 +599,7 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
+ 			for (subi = 0; subi < subn; subi++) {
+ 				pr_info("%2d.%1d: %-*s:", i, subi + 1, subw,
+ 					t->subtest.get_desc(subi));
+-				err = test_and_print(t, skip, subi);
++				err = test_and_print(t, skip, subi + 1);
+ 				if (err != TEST_OK && t->subtest.skip_if_fail)
+ 					skip = true;
+ 			}
+diff --git a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+index 94e513e62b34..3013ac8f83d0 100755
+--- a/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
++++ b/tools/perf/tests/shell/record+probe_libc_inet_pton.sh
+@@ -13,11 +13,24 @@
+ libc=$(grep -w libc /proc/self/maps | head -1 | sed -r 's/.*[[:space:]](\/.*)/\1/g')
+ nm -Dg $libc 2>/dev/null | fgrep -q inet_pton || exit 254
+ 
++event_pattern='probe_libc:inet_pton(\_[[:digit:]]+)?'
++
++add_libc_inet_pton_event() {
++
++	event_name=$(perf probe -f -x $libc -a inet_pton 2>&1 | tail -n +2 | head -n -5 | \
++			grep -P -o "$event_pattern(?=[[:space:]]\(on inet_pton in $libc\))")
++
++	if [ $? -ne 0 -o -z "$event_name" ] ; then
++		printf "FAIL: could not add event\n"
++		return 1
++	fi
++}
++
+ trace_libc_inet_pton_backtrace() {
+ 
+ 	expected=`mktemp -u /tmp/expected.XXX`
+ 
+-	echo "ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)" > $expected
++	echo "ping[][0-9 \.:]+$event_name: \([[:xdigit:]]+\)" > $expected
+ 	echo ".*inet_pton\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 	case "$(uname -m)" in
+ 	s390x)
+@@ -26,6 +39,12 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "(__GI_)?getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected
+ 		echo "main\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
+ 		;;
++	ppc64|ppc64le)
++		eventattr='max-stack=4'
++		echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
++		echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected
++		;;
+ 	*)
+ 		eventattr='max-stack=3'
+ 		echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected
+@@ -35,7 +54,7 @@ trace_libc_inet_pton_backtrace() {
+ 
+ 	perf_data=`mktemp -u /tmp/perf.data.XXX`
+ 	perf_script=`mktemp -u /tmp/perf.script.XXX`
+-	perf record -e probe_libc:inet_pton/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
++	perf record -e $event_name/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1
+ 	perf script -i $perf_data > $perf_script
+ 
+ 	exec 3<$perf_script
+@@ -46,7 +65,7 @@ trace_libc_inet_pton_backtrace() {
+ 		echo "$line" | egrep -q "$pattern"
+ 		if [ $? -ne 0 ] ; then
+ 			printf "FAIL: expected backtrace entry \"%s\" got \"%s\"\n" "$pattern" "$line"
+-			exit 1
++			return 1
+ 		fi
+ 	done
+ 
+@@ -56,13 +75,20 @@ trace_libc_inet_pton_backtrace() {
+ 	# even if the perf script output does not match.
+ }
+ 
++delete_libc_inet_pton_event() {
++
++	if [ -n "$event_name" ] ; then
++		perf probe -q -d $event_name
++	fi
++}
++
+ # Check for IPv6 interface existence
+ ip a sh lo | fgrep -q inet6 || exit 2
+ 
+ skip_if_no_perf_probe && \
+-perf probe -q $libc inet_pton && \
++add_libc_inet_pton_event && \
+ trace_libc_inet_pton_backtrace
+ err=$?
+ rm -f ${perf_data} ${perf_script} ${expected}
+-perf probe -q -d probe_libc:inet_pton
++delete_libc_inet_pton_event
+ exit $err
+diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
+index 7798a2cc8a86..31279a7bd919 100644
+--- a/tools/perf/util/comm.c
++++ b/tools/perf/util/comm.c
+@@ -20,9 +20,10 @@ static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}
+ 
+ static struct comm_str *comm_str__get(struct comm_str *cs)
+ {
+-	if (cs)
+-		refcount_inc(&cs->refcnt);
+-	return cs;
++	if (cs && refcount_inc_not_zero(&cs->refcnt))
++		return cs;
++
++	return NULL;
+ }
+ 
+ static void comm_str__put(struct comm_str *cs)
+@@ -67,9 +68,14 @@ struct comm_str *__comm_str__findnew(const char *str, struct rb_root *root)
+ 		parent = *p;
+ 		iter = rb_entry(parent, struct comm_str, rb_node);
+ 
++		/*
++		 * If we race with comm_str__put, iter->refcnt is 0
++		 * and it will be removed within comm_str__put call
++		 * shortly, ignore it in this search.
++		 */
+ 		cmp = strcmp(str, iter->str);
+-		if (!cmp)
+-			return comm_str__get(iter);
++		if (!cmp && comm_str__get(iter))
++			return iter;
+ 
+ 		if (cmp < 0)
+ 			p = &(*p)->rb_left;
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index 653ff65aa2c3..5af58aac91ad 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -2587,7 +2587,7 @@ static const struct feature_ops feat_ops[HEADER_LAST_FEATURE] = {
+ 	FEAT_OPR(NUMA_TOPOLOGY,	numa_topology,	true),
+ 	FEAT_OPN(BRANCH_STACK,	branch_stack,	false),
+ 	FEAT_OPR(PMU_MAPPINGS,	pmu_mappings,	false),
+-	FEAT_OPN(GROUP_DESC,	group_desc,	false),
++	FEAT_OPR(GROUP_DESC,	group_desc,	false),
+ 	FEAT_OPN(AUXTRACE,	auxtrace,	false),
+ 	FEAT_OPN(STAT,		stat,		false),
+ 	FEAT_OPN(CACHE,		cache,		true),
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index e7b4a8b513f2..22dbb6612b41 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2272,6 +2272,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ {
+ 	struct callchain_cursor *cursor = arg;
+ 	const char *srcline = NULL;
++	u64 addr;
+ 
+ 	if (symbol_conf.hide_unresolved && entry->sym == NULL)
+ 		return 0;
+@@ -2279,7 +2280,13 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
+ 	if (append_inlines(cursor, entry->map, entry->sym, entry->ip) == 0)
+ 		return 0;
+ 
+-	srcline = callchain_srcline(entry->map, entry->sym, entry->ip);
++	/*
++	 * Convert entry->ip from a virtual address to an offset in
++	 * its corresponding binary.
++	 */
++	addr = map__map_ip(entry->map, entry->ip);
++
++	srcline = callchain_srcline(entry->map, entry->sym, addr);
+ 	return callchain_cursor_append(cursor, entry->ip,
+ 				       entry->map, entry->sym,
+ 				       false, NULL, 0, 0, 0, srcline);
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 89ac5b5dc218..f5431092c6d1 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -590,6 +590,13 @@ struct symbol *map_groups__find_symbol(struct map_groups *mg,
+ 	return NULL;
+ }
+ 
++static bool map__contains_symbol(struct map *map, struct symbol *sym)
++{
++	u64 ip = map->unmap_ip(map, sym->start);
++
++	return ip >= map->start && ip < map->end;
++}
++
+ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 					 struct map **mapp)
+ {
+@@ -605,6 +612,10 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name,
+ 
+ 		if (sym == NULL)
+ 			continue;
++		if (!map__contains_symbol(pos, sym)) {
++			sym = NULL;
++			continue;
++		}
+ 		if (mapp != NULL)
+ 			*mapp = pos;
+ 		goto out;
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index 538db4e5d1e6..6f318b15950e 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -77,7 +77,7 @@ static int entry(u64 ip, struct unwind_info *ui)
+ 	if (__report_module(&al, ip, ui))
+ 		return -1;
+ 
+-	e->ip  = al.addr;
++	e->ip  = ip;
+ 	e->map = al.map;
+ 	e->sym = al.sym;
+ 
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index 6a11bc7e6b27..79f521a552cf 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -575,7 +575,7 @@ static int entry(u64 ip, struct thread *thread,
+ 	struct addr_location al;
+ 
+ 	e.sym = thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
+-	e.ip = al.addr;
++	e.ip  = ip;
+ 	e.map = al.map;
+ 
+ 	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index e2926f72a821..94c3bdf82ff7 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -1308,7 +1308,8 @@ static void smart_init(struct nfit_test *t)
+ 			| ND_INTEL_SMART_ALARM_VALID
+ 			| ND_INTEL_SMART_USED_VALID
+ 			| ND_INTEL_SMART_SHUTDOWN_VALID
+-			| ND_INTEL_SMART_MTEMP_VALID,
++			| ND_INTEL_SMART_MTEMP_VALID
++			| ND_INTEL_SMART_CTEMP_VALID,
+ 		.health = ND_INTEL_SMART_NON_CRITICAL_HEALTH,
+ 		.media_temperature = 23 * 16,
+ 		.ctrl_temperature = 25 * 16,
+diff --git a/tools/testing/selftests/android/ion/ionapp_export.c b/tools/testing/selftests/android/ion/ionapp_export.c
+index a944e72621a9..b5fa0a2dc968 100644
+--- a/tools/testing/selftests/android/ion/ionapp_export.c
++++ b/tools/testing/selftests/android/ion/ionapp_export.c
+@@ -51,6 +51,7 @@ int main(int argc, char *argv[])
+ 
+ 	heap_size = 0;
+ 	flags = 0;
++	heap_type = ION_HEAP_TYPE_SYSTEM;
+ 
+ 	while ((opt = getopt(argc, argv, "hi:s:")) != -1) {
+ 		switch (opt) {
+diff --git a/tools/testing/selftests/timers/raw_skew.c b/tools/testing/selftests/timers/raw_skew.c
+index ca6cd146aafe..dcf73c5dab6e 100644
+--- a/tools/testing/selftests/timers/raw_skew.c
++++ b/tools/testing/selftests/timers/raw_skew.c
+@@ -134,6 +134,11 @@ int main(int argv, char **argc)
+ 	printf(" %lld.%i(act)", ppm/1000, abs((int)(ppm%1000)));
+ 
+ 	if (llabs(eppm - ppm) > 1000) {
++		if (tx1.offset || tx2.offset ||
++		    tx1.freq != tx2.freq || tx1.tick != tx2.tick) {
++			printf("	[SKIP]\n");
++			return ksft_exit_skip("The clock was adjusted externally. Shutdown NTPd or other time sync daemons\n");
++		}
+ 		printf("	[FAILED]\n");
+ 		return ksft_exit_fail();
+ 	}
+diff --git a/tools/testing/selftests/vDSO/vdso_test.c b/tools/testing/selftests/vDSO/vdso_test.c
+index 2df26bd0099c..eda53f833d8e 100644
+--- a/tools/testing/selftests/vDSO/vdso_test.c
++++ b/tools/testing/selftests/vDSO/vdso_test.c
+@@ -15,6 +15,8 @@
+ #include <sys/auxv.h>
+ #include <sys/time.h>
+ 
++#include "../kselftest.h"
++
+ extern void *vdso_sym(const char *version, const char *name);
+ extern void vdso_init_from_sysinfo_ehdr(uintptr_t base);
+ extern void vdso_init_from_auxv(void *auxv);
+@@ -37,7 +39,7 @@ int main(int argc, char **argv)
+ 	unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR);
+ 	if (!sysinfo_ehdr) {
+ 		printf("AT_SYSINFO_EHDR is not present!\n");
+-		return 0;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR));
+@@ -48,7 +50,7 @@ int main(int argc, char **argv)
+ 
+ 	if (!gtod) {
+ 		printf("Could not find %s\n", name);
+-		return 1;
++		return KSFT_SKIP;
+ 	}
+ 
+ 	struct timeval tv;
+@@ -59,6 +61,7 @@ int main(int argc, char **argv)
+ 		       (long long)tv.tv_sec, (long long)tv.tv_usec);
+ 	} else {
+ 		printf("%s failed\n", name);
++		return KSFT_FAIL;
+ 	}
+ 
+ 	return 0;
+diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c
+index 2673efce65f3..b71417913741 100644
+--- a/virt/kvm/arm/vgic/vgic-init.c
++++ b/virt/kvm/arm/vgic/vgic-init.c
+@@ -271,6 +271,10 @@ int vgic_init(struct kvm *kvm)
+ 	if (vgic_initialized(kvm))
+ 		return 0;
+ 
++	/* Are we also in the middle of creating a VCPU? */
++	if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus))
++		return -EBUSY;
++
+ 	/* freeze the number of spis */
+ 	if (!dist->nr_spis)
+ 		dist->nr_spis = VGIC_NR_IRQS_LEGACY - VGIC_NR_PRIVATE_IRQS;
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+index ffc587bf4742..64e571cc02df 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+@@ -352,6 +352,9 @@ static void vgic_mmio_write_apr(struct kvm_vcpu *vcpu,
+ 
+ 		if (n > vgic_v3_max_apr_idx(vcpu))
+ 			return;
++
++		n = array_index_nospec(n, 4);
++
+ 		/* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */
+ 		vgicv3->vgic_ap1r[n] = val;
+ 	}


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-19 22:41 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-09-19 22:41 UTC (permalink / raw
  To: gentoo-commits

commit:     24c320725e8df6e42f0e4ae6d28f333ece085a4e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 19 22:41:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 19 22:41:12 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=24c32072

Linux patch 4.18.9

 0000_README             |    4 +
 1008_linux-4.18.9.patch | 5298 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5302 insertions(+)

diff --git a/0000_README b/0000_README
index 597262e..6534d27 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-4.18.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.8
 
+Patch:  1008_linux-4.18.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-4.18.9.patch b/1008_linux-4.18.9.patch
new file mode 100644
index 0000000..877b17a
--- /dev/null
+++ b/1008_linux-4.18.9.patch
@@ -0,0 +1,5298 @@
+diff --git a/Makefile b/Makefile
+index 0d73431f66cd..1178348fb9ca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arc/boot/dts/axs10x_mb.dtsi b/arch/arc/boot/dts/axs10x_mb.dtsi
+index 47b74fbc403c..37bafd44e36d 100644
+--- a/arch/arc/boot/dts/axs10x_mb.dtsi
++++ b/arch/arc/boot/dts/axs10x_mb.dtsi
+@@ -9,6 +9,10 @@
+  */
+ 
+ / {
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	axs10x_mb {
+ 		compatible = "simple-bus";
+ 		#address-cells = <1>;
+@@ -68,7 +72,7 @@
+ 			};
+ 		};
+ 
+-		ethernet@0x18000 {
++		gmac: ethernet@0x18000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = < 0x18000 0x2000 >;
+@@ -81,6 +85,7 @@
+ 			max-speed = <100>;
+ 			resets = <&creg_rst 5>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 		};
+ 
+ 		ehci@0x40000 {
+diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
+index 006aa3de5348..d00f283094d3 100644
+--- a/arch/arc/boot/dts/hsdk.dts
++++ b/arch/arc/boot/dts/hsdk.dts
+@@ -25,6 +25,10 @@
+ 		bootargs = "earlycon=uart8250,mmio32,0xf0005000,115200n8 console=ttyS0,115200n8 debug print-fatal-signals=1";
+ 	};
+ 
++	aliases {
++		ethernet = &gmac;
++	};
++
+ 	cpus {
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+@@ -163,7 +167,7 @@
+ 			#clock-cells = <0>;
+ 		};
+ 
+-		ethernet@8000 {
++		gmac: ethernet@8000 {
+ 			#interrupt-cells = <1>;
+ 			compatible = "snps,dwmac";
+ 			reg = <0x8000 0x2000>;
+@@ -176,6 +180,7 @@
+ 			phy-handle = <&phy0>;
+ 			resets = <&cgu_rst HSDK_ETH_RESET>;
+ 			reset-names = "stmmaceth";
++			mac-address = [00 00 00 00 00 00]; /* Filled in by U-Boot */
+ 
+ 			mdio {
+ 				#address-cells = <1>;
+diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig
+index a635ea972304..df848c44dacd 100644
+--- a/arch/arc/configs/axs101_defconfig
++++ b/arch/arc/configs/axs101_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig
+index aa507e423075..bcbdc0494faa 100644
+--- a/arch/arc/configs/axs103_defconfig
++++ b/arch/arc/configs/axs103_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig
+index eba07f468654..d145bce7ebdf 100644
+--- a/arch/arc/configs/axs103_smp_defconfig
++++ b/arch/arc/configs/axs103_smp_defconfig
+@@ -1,5 +1,4 @@
+ CONFIG_DEFAULT_HOSTNAME="ARCLinux"
+-# CONFIG_SWAP is not set
+ CONFIG_SYSVIPC=y
+ CONFIG_POSIX_MQUEUE=y
+ # CONFIG_CROSS_MEMORY_ATTACH is not set
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index d496ef579859..ca46153d7915 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -98,8 +98,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
+ 	val = read_sysreg(cpacr_el1);
+ 	val |= CPACR_EL1_TTA;
+ 	val &= ~CPACR_EL1_ZEN;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val &= ~CPACR_EL1_FPEN;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cpacr_el1);
+ 
+@@ -114,8 +116,10 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu)
+ 
+ 	val = CPTR_EL2_DEFAULT;
+ 	val |= CPTR_EL2_TTA | CPTR_EL2_TZ;
+-	if (!update_fp_enabled(vcpu))
++	if (!update_fp_enabled(vcpu)) {
+ 		val |= CPTR_EL2_TFP;
++		__activate_traps_fpsimd32(vcpu);
++	}
+ 
+ 	write_sysreg(val, cptr_el2);
+ }
+@@ -129,7 +133,6 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
+ 	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+ 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
+ 
+-	__activate_traps_fpsimd32(vcpu);
+ 	if (has_vhe())
+ 		activate_traps_vhe(vcpu);
+ 	else
+diff --git a/arch/mips/boot/dts/mscc/ocelot.dtsi b/arch/mips/boot/dts/mscc/ocelot.dtsi
+index 4f33dbc67348..7096915f26e0 100644
+--- a/arch/mips/boot/dts/mscc/ocelot.dtsi
++++ b/arch/mips/boot/dts/mscc/ocelot.dtsi
+@@ -184,7 +184,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			compatible = "mscc,ocelot-miim";
+-			reg = <0x107009c 0x36>, <0x10700f0 0x8>;
++			reg = <0x107009c 0x24>, <0x10700f0 0x8>;
+ 			interrupts = <14>;
+ 			status = "disabled";
+ 
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index 8505db478904..1d92efb82c37 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -322,6 +322,7 @@ static int __init octeon_ehci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ehci_node);
++	of_node_put(ehci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+@@ -384,6 +385,7 @@ static int __init octeon_ohci_device_init(void)
+ 		return 0;
+ 
+ 	pd = of_find_device_by_node(ohci_node);
++	of_node_put(ohci_node);
+ 	if (!pd)
+ 		return 0;
+ 
+diff --git a/arch/mips/generic/init.c b/arch/mips/generic/init.c
+index 5ba6fcc26fa7..94a78dbbc91f 100644
+--- a/arch/mips/generic/init.c
++++ b/arch/mips/generic/init.c
+@@ -204,6 +204,7 @@ void __init arch_init_irq(void)
+ 					    "mti,cpu-interrupt-controller");
+ 	if (!cpu_has_veic && !intc_node)
+ 		mips_cpu_irq_init();
++	of_node_put(intc_node);
+ 
+ 	irqchip_init();
+ }
+diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
+index cea8ad864b3f..57b34257be2b 100644
+--- a/arch/mips/include/asm/io.h
++++ b/arch/mips/include/asm/io.h
+@@ -141,14 +141,14 @@ static inline void * phys_to_virt(unsigned long address)
+ /*
+  * ISA I/O bus memory addresses are 1:1 with the physical address.
+  */
+-static inline unsigned long isa_virt_to_bus(volatile void * address)
++static inline unsigned long isa_virt_to_bus(volatile void *address)
+ {
+-	return (unsigned long)address - PAGE_OFFSET;
++	return virt_to_phys(address);
+ }
+ 
+-static inline void * isa_bus_to_virt(unsigned long address)
++static inline void *isa_bus_to_virt(unsigned long address)
+ {
+-	return (void *)(address + PAGE_OFFSET);
++	return phys_to_virt(address);
+ }
+ 
+ #define isa_page_to_bus page_to_phys
+diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
+index 019035d7225c..8f845f6e5f42 100644
+--- a/arch/mips/kernel/vdso.c
++++ b/arch/mips/kernel/vdso.c
+@@ -13,6 +13,7 @@
+ #include <linux/err.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
++#include <linux/kernel.h>
+ #include <linux/mm.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+@@ -20,6 +21,7 @@
+ 
+ #include <asm/abi.h>
+ #include <asm/mips-cps.h>
++#include <asm/page.h>
+ #include <asm/vdso.h>
+ 
+ /* Kernel-provided data used by the VDSO. */
+@@ -128,12 +130,30 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+ 	vvar_size = gic_size + PAGE_SIZE;
+ 	size = vvar_size + image->size;
+ 
++	/*
++	 * Find a region that's large enough for us to perform the
++	 * colour-matching alignment below.
++	 */
++	if (cpu_has_dc_aliases)
++		size += shm_align_mask + 1;
++
+ 	base = get_unmapped_area(NULL, 0, size, 0, 0);
+ 	if (IS_ERR_VALUE(base)) {
+ 		ret = base;
+ 		goto out;
+ 	}
+ 
++	/*
++	 * If we suffer from dcache aliasing, ensure that the VDSO data page
++	 * mapping is coloured the same as the kernel's mapping of that memory.
++	 * This ensures that when the kernel updates the VDSO data userland
++	 * will observe it without requiring cache invalidations.
++	 */
++	if (cpu_has_dc_aliases) {
++		base = __ALIGN_MASK(base, shm_align_mask);
++		base += ((unsigned long)&vdso_data - gic_size) & shm_align_mask;
++	}
++
+ 	data_addr = base + gic_size;
+ 	vdso_addr = data_addr + PAGE_SIZE;
+ 
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index e12dfa48b478..a5893b2cdc0e 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -835,7 +835,8 @@ static void r4k_flush_icache_user_range(unsigned long start, unsigned long end)
+ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+@@ -871,7 +872,8 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
+ {
+ 	/* Catch bad driver code */
+-	BUG_ON(size == 0);
++	if (WARN_ON(size == 0))
++		return;
+ 
+ 	preempt_disable();
+ 	if (cpu_has_inclusive_pcaches) {
+diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+index 01ee40f11f3a..76234a14b97d 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
++++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/slab.h>
+ #include <linux/cpumask.h>
++#include <linux/kmemleak.h>
+ #include <linux/percpu.h>
+ 
+ struct vmemmap_backing {
+@@ -82,6 +83,13 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+ 			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Don't scan the PGD for pointers, it contains references to PUDs but
++	 * those references are not full pointers and so can't be recognised by
++	 * kmemleak.
++	 */
++	kmemleak_no_scan(pgd);
++
+ 	/*
+ 	 * With hugetlb, we don't clear the second half of the page table.
+ 	 * If we share the same slab cache with the pmd or pud level table,
+@@ -110,8 +118,19 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+ 
+ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+ {
+-	return kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
+-		pgtable_gfp_flags(mm, GFP_KERNEL));
++	pud_t *pud;
++
++	pud = kmem_cache_alloc(PGT_CACHE(PUD_CACHE_INDEX),
++			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	/*
++	 * Tell kmemleak to ignore the PUD, that means don't scan it for
++	 * pointers and don't consider it a leak. PUDs are typically only
++	 * referred to by their PGD, but kmemleak is not able to recognise those
++	 * as pointers, leading to false leak reports.
++	 */
++	kmemleak_ignore(pud);
++
++	return pud;
+ }
+ 
+ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 176f911ee983..7efc42538ccf 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -738,10 +738,10 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ 					      gpa, shift);
+ 		kvmppc_radix_tlbie_page(kvm, gpa, shift);
+ 		if ((old & _PAGE_DIRTY) && memslot->dirty_bitmap) {
+-			unsigned long npages = 1;
++			unsigned long psize = PAGE_SIZE;
+ 			if (shift)
+-				npages = 1ul << (shift - PAGE_SHIFT);
+-			kvmppc_update_dirty_map(memslot, gfn, npages);
++				psize = 1ul << shift;
++			kvmppc_update_dirty_map(memslot, gfn, psize);
+ 		}
+ 	}
+ 	return 0;				
+diff --git a/arch/powerpc/platforms/4xx/msi.c b/arch/powerpc/platforms/4xx/msi.c
+index 81b2cbce7df8..7c324eff2f22 100644
+--- a/arch/powerpc/platforms/4xx/msi.c
++++ b/arch/powerpc/platforms/4xx/msi.c
+@@ -146,13 +146,19 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	const u32 *sdr_addr;
+ 	dma_addr_t msi_phys;
+ 	void *msi_virt;
++	int err;
+ 
+ 	sdr_addr = of_get_property(dev->dev.of_node, "sdr-base", NULL);
+ 	if (!sdr_addr)
+-		return -1;
++		return -EINVAL;
+ 
+-	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
+-	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
++	if (!msi_data)
++		return -EINVAL;
++
++	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
++	if (!msi_mask)
++		return -EINVAL;
+ 
+ 	msi->msi_dev = of_find_node_by_name(NULL, "ppc4xx-msi");
+ 	if (!msi->msi_dev)
+@@ -160,30 +166,30 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 
+ 	msi->msi_regs = of_iomap(msi->msi_dev, 0);
+ 	if (!msi->msi_regs) {
+-		dev_err(&dev->dev, "of_iomap problem failed\n");
+-		return -ENOMEM;
++		dev_err(&dev->dev, "of_iomap failed\n");
++		err = -ENOMEM;
++		goto node_put;
+ 	}
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi register mapped 0x%x 0x%x\n",
+ 		(u32) (msi->msi_regs + PEIH_TERMADH), (u32) (msi->msi_regs));
+ 
+ 	msi_virt = dma_alloc_coherent(&dev->dev, 64, &msi_phys, GFP_KERNEL);
+-	if (!msi_virt)
+-		return -ENOMEM;
++	if (!msi_virt) {
++		err = -ENOMEM;
++		goto iounmap;
++	}
+ 	msi->msi_addr_hi = upper_32_bits(msi_phys);
+ 	msi->msi_addr_lo = lower_32_bits(msi_phys & 0xffffffff);
+ 	dev_dbg(&dev->dev, "PCIE-MSI: msi address high 0x%x, low 0x%x\n",
+ 		msi->msi_addr_hi, msi->msi_addr_lo);
+ 
++	mtdcri(SDR0, *sdr_addr, upper_32_bits(res.start));	/*HIGH addr */
++	mtdcri(SDR0, *sdr_addr + 1, lower_32_bits(res.start));	/* Low addr */
++
+ 	/* Progam the Interrupt handler Termination addr registers */
+ 	out_be32(msi->msi_regs + PEIH_TERMADH, msi->msi_addr_hi);
+ 	out_be32(msi->msi_regs + PEIH_TERMADL, msi->msi_addr_lo);
+ 
+-	msi_data = of_get_property(dev->dev.of_node, "msi-data", NULL);
+-	if (!msi_data)
+-		return -1;
+-	msi_mask = of_get_property(dev->dev.of_node, "msi-mask", NULL);
+-	if (!msi_mask)
+-		return -1;
+ 	/* Program MSI Expected data and Mask bits */
+ 	out_be32(msi->msi_regs + PEIH_MSIED, *msi_data);
+ 	out_be32(msi->msi_regs + PEIH_MSIMK, *msi_mask);
+@@ -191,6 +197,12 @@ static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
+ 	dma_free_coherent(&dev->dev, 64, msi_virt, msi_phys);
+ 
+ 	return 0;
++
++iounmap:
++	iounmap(msi->msi_regs);
++node_put:
++	of_node_put(msi->msi_dev);
++	return err;
+ }
+ 
+ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+@@ -209,7 +221,6 @@ static int ppc4xx_of_msi_remove(struct platform_device *dev)
+ 		msi_bitmap_free(&msi->bitmap);
+ 	iounmap(msi->msi_regs);
+ 	of_node_put(msi->msi_dev);
+-	kfree(msi);
+ 
+ 	return 0;
+ }
+@@ -223,18 +234,16 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "PCIE-MSI: Setting up MSI support...\n");
+ 
+-	msi = kzalloc(sizeof(*msi), GFP_KERNEL);
+-	if (!msi) {
+-		dev_err(&dev->dev, "No memory for MSI structure\n");
++	msi = devm_kzalloc(&dev->dev, sizeof(*msi), GFP_KERNEL);
++	if (!msi)
+ 		return -ENOMEM;
+-	}
+ 	dev->dev.platform_data = msi;
+ 
+ 	/* Get MSI ranges */
+ 	err = of_address_to_resource(dev->dev.of_node, 0, &res);
+ 	if (err) {
+ 		dev_err(&dev->dev, "%pOF resource error!\n", dev->dev.of_node);
+-		goto error_out;
++		return err;
+ 	}
+ 
+ 	msi_irqs = of_irq_count(dev->dev.of_node);
+@@ -243,7 +252,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 
+ 	err = ppc4xx_setup_pcieh_hw(dev, res, msi);
+ 	if (err)
+-		goto error_out;
++		return err;
+ 
+ 	err = ppc4xx_msi_init_allocator(dev, msi);
+ 	if (err) {
+@@ -256,7 +265,7 @@ static int ppc4xx_msi_probe(struct platform_device *dev)
+ 		phb->controller_ops.setup_msi_irqs = ppc4xx_setup_msi_irqs;
+ 		phb->controller_ops.teardown_msi_irqs = ppc4xx_teardown_msi_irqs;
+ 	}
+-	return err;
++	return 0;
+ 
+ error_out:
+ 	ppc4xx_of_msi_remove(dev);
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index 8cdf91f5d3a4..c773465b2c95 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -437,8 +437,9 @@ static int get_mmio_atsd_reg(struct npu *npu)
+ 	int i;
+ 
+ 	for (i = 0; i < npu->mmio_atsd_count; i++) {
+-		if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
+-			return i;
++		if (!test_bit(i, &npu->mmio_atsd_usage))
++			if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
++				return i;
+ 	}
+ 
+ 	return -ENOSPC;
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index 8a4868a3964b..cb098e962ffe 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -647,6 +647,15 @@ void of_pci_parse_iov_addrs(struct pci_dev *dev, const int *indexes)
+ 	}
+ }
+ 
++static void pseries_disable_sriov_resources(struct pci_dev *pdev)
++{
++	int i;
++
++	pci_warn(pdev, "No hypervisor support for SR-IOV on this device, IOV BARs disabled.\n");
++	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++)
++		pdev->resource[i + PCI_IOV_RESOURCES].flags = 0;
++}
++
+ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ {
+ 	const int *indexes;
+@@ -654,10 +663,10 @@ static void pseries_pci_fixup_resources(struct pci_dev *pdev)
+ 
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_set_vf_bar_size(pdev, indexes);
++	if (indexes)
++		of_pci_set_vf_bar_size(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+@@ -669,10 +678,10 @@ static void pseries_pci_fixup_iov_resources(struct pci_dev *pdev)
+ 		return;
+ 	/*Firmware must support open sriov otherwise dont configure*/
+ 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
+-	if (!indexes)
+-		return;
+-	/* Assign the addresses from device tree*/
+-	of_pci_parse_iov_addrs(pdev, indexes);
++	if (indexes)
++		of_pci_parse_iov_addrs(pdev, indexes);
++	else
++		pseries_disable_sriov_resources(pdev);
+ }
+ 
+ static resource_size_t pseries_pci_iov_resource_alignment(struct pci_dev *pdev,
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 84c89cb9636f..cbdd8341f17e 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -173,7 +173,8 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		return set_validity_icpt(scb_s, 0x0039U);
+ 
+ 	/* copy only the wrapping keys */
+-	if (read_guest_real(vcpu, crycb_addr + 72, &vsie_page->crycb, 56))
++	if (read_guest_real(vcpu, crycb_addr + 72,
++			    vsie_page->crycb.dea_wrapping_key_mask, 56))
+ 		return set_validity_icpt(scb_s, 0x0035U);
+ 
+ 	scb_s->ecb3 |= ecb3_flags;
+diff --git a/arch/x86/include/asm/kdebug.h b/arch/x86/include/asm/kdebug.h
+index 395c9631e000..75f1e35e7c15 100644
+--- a/arch/x86/include/asm/kdebug.h
++++ b/arch/x86/include/asm/kdebug.h
+@@ -22,10 +22,20 @@ enum die_val {
+ 	DIE_NMIUNKNOWN,
+ };
+ 
++enum show_regs_mode {
++	SHOW_REGS_SHORT,
++	/*
++	 * For when userspace crashed, but we don't think it's our fault, and
++	 * therefore don't print kernel registers.
++	 */
++	SHOW_REGS_USER,
++	SHOW_REGS_ALL
++};
++
+ extern void die(const char *, struct pt_regs *,long);
+ extern int __must_check __die(const char *, struct pt_regs *, long);
+ extern void show_stack_regs(struct pt_regs *regs);
+-extern void __show_regs(struct pt_regs *regs, int all);
++extern void __show_regs(struct pt_regs *regs, enum show_regs_mode);
+ extern void show_iret_regs(struct pt_regs *regs);
+ extern unsigned long oops_begin(void);
+ extern void oops_end(unsigned long, struct pt_regs *, int signr);
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index acebb808c4b5..0722b7745382 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1198,18 +1198,22 @@ enum emulation_result {
+ #define EMULTYPE_NO_DECODE	    (1 << 0)
+ #define EMULTYPE_TRAP_UD	    (1 << 1)
+ #define EMULTYPE_SKIP		    (1 << 2)
+-#define EMULTYPE_RETRY		    (1 << 3)
+-#define EMULTYPE_NO_REEXECUTE	    (1 << 4)
+-#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 5)
+-#define EMULTYPE_VMWARE		    (1 << 6)
++#define EMULTYPE_ALLOW_RETRY	    (1 << 3)
++#define EMULTYPE_NO_UD_ON_FAIL	    (1 << 4)
++#define EMULTYPE_VMWARE		    (1 << 5)
+ int x86_emulate_instruction(struct kvm_vcpu *vcpu, unsigned long cr2,
+ 			    int emulation_type, void *insn, int insn_len);
+ 
+ static inline int emulate_instruction(struct kvm_vcpu *vcpu,
+ 			int emulation_type)
+ {
+-	return x86_emulate_instruction(vcpu, 0,
+-			emulation_type | EMULTYPE_NO_REEXECUTE, NULL, 0);
++	return x86_emulate_instruction(vcpu, 0, emulation_type, NULL, 0);
++}
++
++static inline int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
++						      void *insn, int insn_len)
++{
++	return x86_emulate_instruction(vcpu, 0, 0, insn, insn_len);
+ }
+ 
+ void kvm_enable_efer_bits(u64);
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index c9b773401fd8..21d1fa5eaa5f 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -422,7 +422,7 @@ static int activate_managed(struct irq_data *irqd)
+ 	if (WARN_ON_ONCE(cpumask_empty(vector_searchmask))) {
+ 		/* Something in the core code broke! Survive gracefully */
+ 		pr_err("Managed startup for irq %u, but no CPU\n", irqd->irq);
+-		return EINVAL;
++		return -EINVAL;
+ 	}
+ 
+ 	ret = assign_managed_vector(irqd, vector_searchmask);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 0624957aa068..07b5fc00b188 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -504,6 +504,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 	struct microcode_amd *mc_amd;
+ 	struct ucode_cpu_info *uci;
+ 	struct ucode_patch *p;
++	enum ucode_state ret;
+ 	u32 rev, dummy;
+ 
+ 	BUG_ON(raw_smp_processor_id() != cpu);
+@@ -521,9 +522,8 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 
+ 	/* need to apply patch? */
+ 	if (rev >= mc_amd->hdr.patch_id) {
+-		c->microcode = rev;
+-		uci->cpu_sig.rev = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	if (__apply_microcode_amd(mc_amd)) {
+@@ -531,13 +531,21 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ 			cpu, mc_amd->hdr.patch_id);
+ 		return UCODE_ERROR;
+ 	}
+-	pr_info("CPU%d: new patch_level=0x%08x\n", cpu,
+-		mc_amd->hdr.patch_id);
+ 
+-	uci->cpu_sig.rev = mc_amd->hdr.patch_id;
+-	c->microcode = mc_amd->hdr.patch_id;
++	rev = mc_amd->hdr.patch_id;
++	ret = UCODE_UPDATED;
++
++	pr_info("CPU%d: new patch_level=0x%08x\n", cpu, rev);
+ 
+-	return UCODE_UPDATED;
++out:
++	uci->cpu_sig.rev = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
++
++	return ret;
+ }
+ 
+ static int install_equiv_cpu_table(const u8 *buf)
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index 97ccf4c3b45b..16936a24795c 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -795,6 +795,7 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ 	struct cpuinfo_x86 *c = &cpu_data(cpu);
+ 	struct microcode_intel *mc;
++	enum ucode_state ret;
+ 	static int prev_rev;
+ 	u32 rev;
+ 
+@@ -817,9 +818,8 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 	 */
+ 	rev = intel_get_microcode_revision();
+ 	if (rev >= mc->hdr.rev) {
+-		uci->cpu_sig.rev = rev;
+-		c->microcode = rev;
+-		return UCODE_OK;
++		ret = UCODE_OK;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -848,10 +848,17 @@ static enum ucode_state apply_microcode_intel(int cpu)
+ 		prev_rev = rev;
+ 	}
+ 
++	ret = UCODE_UPDATED;
++
++out:
+ 	uci->cpu_sig.rev = rev;
+-	c->microcode = rev;
++	c->microcode	 = rev;
++
++	/* Update boot_cpu_data's revision too, if we're on the BSP: */
++	if (c->cpu_index == boot_cpu_data.cpu_index)
++		boot_cpu_data.microcode = rev;
+ 
+-	return UCODE_UPDATED;
++	return ret;
+ }
+ 
+ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 17b02adc79aa..0c5a9fc6e36d 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -155,7 +155,7 @@ static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs,
+ 	 * they can be printed in the right context.
+ 	 */
+ 	if (!partial && on_stack(info, regs, sizeof(*regs))) {
+-		__show_regs(regs, 0);
++		__show_regs(regs, SHOW_REGS_SHORT);
+ 
+ 	} else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
+ 				       IRET_FRAME_SIZE)) {
+@@ -353,7 +353,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	oops_exit();
+ 
+ 	/* Executive summary in case the oops scrolled away */
+-	__show_regs(&exec_summary_regs, true);
++	__show_regs(&exec_summary_regs, SHOW_REGS_ALL);
+ 
+ 	if (!signr)
+ 		return;
+@@ -416,14 +416,9 @@ void die(const char *str, struct pt_regs *regs, long err)
+ 
+ void show_regs(struct pt_regs *regs)
+ {
+-	bool all = true;
+-
+ 	show_regs_print_info(KERN_DEFAULT);
+ 
+-	if (IS_ENABLED(CONFIG_X86_32))
+-		all = !user_mode(regs);
+-
+-	__show_regs(regs, all);
++	__show_regs(regs, user_mode(regs) ? SHOW_REGS_USER : SHOW_REGS_ALL);
+ 
+ 	/*
+ 	 * When in-kernel, we also print out the stack at the time of the fault..
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index 0ae659de21eb..666d1825390d 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -59,7 +59,7 @@
+ #include <asm/intel_rdt_sched.h>
+ #include <asm/proto.h>
+ 
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -85,7 +85,7 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "DS: %04x ES: %04x FS: %04x GS: %04x SS: %04x EFLAGS: %08lx\n",
+ 	       (u16)regs->ds, (u16)regs->es, (u16)regs->fs, gs, ss, regs->flags);
+ 
+-	if (!all)
++	if (mode != SHOW_REGS_ALL)
+ 		return;
+ 
+ 	cr0 = read_cr0();
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 4344a032ebe6..0091a733c1cf 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -62,7 +62,7 @@
+ __visible DEFINE_PER_CPU(unsigned long, rsp_scratch);
+ 
+ /* Prints also some state that isn't saved in the pt_regs */
+-void __show_regs(struct pt_regs *regs, int all)
++void __show_regs(struct pt_regs *regs, enum show_regs_mode mode)
+ {
+ 	unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L, fs, gs, shadowgs;
+ 	unsigned long d0, d1, d2, d3, d6, d7;
+@@ -87,9 +87,17 @@ void __show_regs(struct pt_regs *regs, int all)
+ 	printk(KERN_DEFAULT "R13: %016lx R14: %016lx R15: %016lx\n",
+ 	       regs->r13, regs->r14, regs->r15);
+ 
+-	if (!all)
++	if (mode == SHOW_REGS_SHORT)
+ 		return;
+ 
++	if (mode == SHOW_REGS_USER) {
++		rdmsrl(MSR_FS_BASE, fs);
++		rdmsrl(MSR_KERNEL_GS_BASE, shadowgs);
++		printk(KERN_DEFAULT "FS:  %016lx GS:  %016lx\n",
++		       fs, shadowgs);
++		return;
++	}
++
+ 	asm("movl %%ds,%0" : "=r" (ds));
+ 	asm("movl %%cs,%0" : "=r" (cs));
+ 	asm("movl %%es,%0" : "=r" (es));
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 42f1ba92622a..97d41754769e 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4960,7 +4960,7 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
+ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		       void *insn, int insn_len)
+ {
+-	int r, emulation_type = EMULTYPE_RETRY;
++	int r, emulation_type = 0;
+ 	enum emulation_result er;
+ 	bool direct = vcpu->arch.mmu.direct_map;
+ 
+@@ -4973,10 +4973,8 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 	r = RET_PF_INVALID;
+ 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
+ 		r = handle_mmio_page_fault(vcpu, cr2, direct);
+-		if (r == RET_PF_EMULATE) {
+-			emulation_type = 0;
++		if (r == RET_PF_EMULATE)
+ 			goto emulate;
+-		}
+ 	}
+ 
+ 	if (r == RET_PF_INVALID) {
+@@ -5003,8 +5001,19 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ 		return 1;
+ 	}
+ 
+-	if (mmio_info_in_cache(vcpu, cr2, direct))
+-		emulation_type = 0;
++	/*
++	 * vcpu->arch.mmu.page_fault returned RET_PF_EMULATE, but we can still
++	 * optimistically try to just unprotect the page and let the processor
++	 * re-execute the instruction that caused the page fault.  Do not allow
++	 * retrying MMIO emulation, as it's not only pointless but could also
++	 * cause us to enter an infinite loop because the processor will keep
++	 * faulting on the non-existent MMIO address.  Retrying an instruction
++	 * from a nested guest is also pointless and dangerous as we are only
++	 * explicitly shadowing L1's page tables, i.e. unprotecting something
++	 * for L1 isn't going to magically fix whatever issue cause L2 to fail.
++	 */
++	if (!mmio_info_in_cache(vcpu, cr2, direct) && !is_guest_mode(vcpu))
++		emulation_type = EMULTYPE_ALLOW_RETRY;
+ emulate:
+ 	/*
+ 	 * On AMD platforms, under certain conditions insn_len may be zero on #NPF.
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 9799f86388e7..ef772e5634d4 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -3875,8 +3875,8 @@ static int emulate_on_interception(struct vcpu_svm *svm)
+ 
+ static int rsm_interception(struct vcpu_svm *svm)
+ {
+-	return x86_emulate_instruction(&svm->vcpu, 0, 0,
+-				       rsm_ins_bytes, 2) == EMULATE_DONE;
++	return kvm_emulate_instruction_from_buffer(&svm->vcpu,
++					rsm_ins_bytes, 2) == EMULATE_DONE;
+ }
+ 
+ static int rdpmc_interception(struct vcpu_svm *svm)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9869bfd0c601..d0c3be353bb6 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7539,8 +7539,8 @@ static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
+ 		if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+ 			return kvm_skip_emulated_instruction(vcpu);
+ 		else
+-			return x86_emulate_instruction(vcpu, gpa, EMULTYPE_SKIP,
+-						       NULL, 0) == EMULATE_DONE;
++			return emulate_instruction(vcpu, EMULTYPE_SKIP) ==
++								EMULATE_DONE;
+ 	}
+ 
+ 	return kvm_mmu_page_fault(vcpu, gpa, PFERR_RSVD_MASK, NULL, 0);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 94cd63081471..97fcac34e007 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -5810,7 +5810,10 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t cr2,
+ 	gpa_t gpa = cr2;
+ 	kvm_pfn_t pfn;
+ 
+-	if (emulation_type & EMULTYPE_NO_REEXECUTE)
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (!vcpu->arch.mmu.direct_map) {
+@@ -5898,7 +5901,10 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
+ 	 */
+ 	vcpu->arch.last_retry_eip = vcpu->arch.last_retry_addr = 0;
+ 
+-	if (!(emulation_type & EMULTYPE_RETRY))
++	if (!(emulation_type & EMULTYPE_ALLOW_RETRY))
++		return false;
++
++	if (WARN_ON_ONCE(is_guest_mode(vcpu)))
+ 		return false;
+ 
+ 	if (x86_page_table_writing_insn(ctxt))
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index d1f1612672c7..045338ac1667 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -317,8 +317,6 @@ static noinline int vmalloc_fault(unsigned long address)
+ 	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+ 		return -1;
+ 
+-	WARN_ON_ONCE(in_nmi());
+-
+ 	/*
+ 	 * Synchronize this task's top level page-table
+ 	 * with the 'reference' page table.
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 58c6efa9f9a9..9fe5952d117d 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -275,9 +275,9 @@ static void bfqg_and_blkg_get(struct bfq_group *bfqg)
+ 
+ void bfqg_and_blkg_put(struct bfq_group *bfqg)
+ {
+-	bfqg_put(bfqg);
+-
+ 	blkg_put(bfqg_to_blkg(bfqg));
++
++	bfqg_put(bfqg);
+ }
+ 
+ /* @stats = 0 */
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 746a5eac4541..cbaca5a73f2e 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2161,9 +2161,12 @@ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+ 	const int op = bio_op(bio);
+ 
+-	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
++	if (part->policy && op_is_write(op)) {
+ 		char b[BDEVNAME_SIZE];
+ 
++		if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
++			return false;
++
+ 		WARN_ONCE(1,
+ 		       "generic_make_request: Trying to write "
+ 			"to read-only block-device %s (partno %d)\n",
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index d5f2c21d8531..816923bf874d 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -402,8 +402,6 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 	if (tdepth <= tags->nr_reserved_tags)
+ 		return -EINVAL;
+ 
+-	tdepth -= tags->nr_reserved_tags;
+-
+ 	/*
+ 	 * If we are allowed to grow beyond the original size, allocate
+ 	 * a new set of tags before freeing the old one.
+@@ -423,7 +421,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		if (tdepth > 16 * BLKDEV_MAX_RQ)
+ 			return -EINVAL;
+ 
+-		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth, 0);
++		new = blk_mq_alloc_rq_map(set, hctx->queue_num, tdepth,
++				tags->nr_reserved_tags);
+ 		if (!new)
+ 			return -ENOMEM;
+ 		ret = blk_mq_alloc_rqs(set, new, hctx->queue_num, tdepth);
+@@ -440,7 +439,8 @@ int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ 		 * Don't need (or can't) update reserved tags here, they
+ 		 * remain static and should never need resizing.
+ 		 */
+-		sbitmap_queue_resize(&tags->bitmap_tags, tdepth);
++		sbitmap_queue_resize(&tags->bitmap_tags,
++				tdepth - tags->nr_reserved_tags);
+ 	}
+ 
+ 	return 0;
+diff --git a/block/partitions/aix.c b/block/partitions/aix.c
+index 007f95eea0e1..903f3ed175d0 100644
+--- a/block/partitions/aix.c
++++ b/block/partitions/aix.c
+@@ -178,7 +178,7 @@ int aix_partition(struct parsed_partitions *state)
+ 	u32 vgda_sector = 0;
+ 	u32 vgda_len = 0;
+ 	int numlvs = 0;
+-	struct pvd *pvd;
++	struct pvd *pvd = NULL;
+ 	struct lv_info {
+ 		unsigned short pps_per_lv;
+ 		unsigned short pps_found;
+@@ -232,10 +232,11 @@ int aix_partition(struct parsed_partitions *state)
+ 				if (lvip[i].pps_per_lv)
+ 					foundlvs += 1;
+ 			}
++			/* pvd loops depend on n[].name and lvip[].pps_per_lv */
++			pvd = alloc_pvd(state, vgda_sector + 17);
+ 		}
+ 		put_dev_sector(sect);
+ 	}
+-	pvd = alloc_pvd(state, vgda_sector + 17);
+ 	if (pvd) {
+ 		int numpps = be16_to_cpu(pvd->pp_count);
+ 		int psn_part1 = be32_to_cpu(pvd->psn_part1);
+@@ -282,10 +283,14 @@ int aix_partition(struct parsed_partitions *state)
+ 				next_lp_ix += 1;
+ 		}
+ 		for (i = 0; i < state->limit; i += 1)
+-			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous)
++			if (lvip[i].pps_found && !lvip[i].lv_is_contiguous) {
++				char tmp[sizeof(n[i].name) + 1]; // null char
++
++				snprintf(tmp, sizeof(tmp), "%s", n[i].name);
+ 				pr_warn("partition %s (%u pp's found) is "
+ 					"not contiguous\n",
+-					n[i].name, lvip[i].pps_found);
++					tmp, lvip[i].pps_found);
++			}
+ 		kfree(pvd);
+ 	}
+ 	kfree(n);
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 9706613eecf9..bf64cfa30feb 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -879,7 +879,7 @@ static void acpi_lpss_dismiss(struct device *dev)
+ #define LPSS_GPIODEF0_DMA_LLP		BIT(13)
+ 
+ static DEFINE_MUTEX(lpss_iosf_mutex);
+-static bool lpss_iosf_d3_entered;
++static bool lpss_iosf_d3_entered = true;
+ 
+ static void lpss_iosf_enter_d3_state(void)
+ {
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 2628806c64a2..3d5277a39097 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -327,6 +327,35 @@ err_no_vma:
+ 	return vma ? -ENOMEM : -ESRCH;
+ }
+ 
++
++static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
++		struct vm_area_struct *vma)
++{
++	if (vma)
++		alloc->vma_vm_mm = vma->vm_mm;
++	/*
++	 * If we see alloc->vma is not NULL, buffer data structures set up
++	 * completely. Look at smp_rmb side binder_alloc_get_vma.
++	 * We also want to guarantee new alloc->vma_vm_mm is always visible
++	 * if alloc->vma is set.
++	 */
++	smp_wmb();
++	alloc->vma = vma;
++}
++
++static inline struct vm_area_struct *binder_alloc_get_vma(
++		struct binder_alloc *alloc)
++{
++	struct vm_area_struct *vma = NULL;
++
++	if (alloc->vma) {
++		/* Look at description in binder_alloc_set_vma */
++		smp_rmb();
++		vma = alloc->vma;
++	}
++	return vma;
++}
++
+ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 				struct binder_alloc *alloc,
+ 				size_t data_size,
+@@ -343,7 +372,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
+ 	size_t size, data_offsets_size;
+ 	int ret;
+ 
+-	if (alloc->vma == NULL) {
++	if (!binder_alloc_get_vma(alloc)) {
+ 		pr_err("%d: binder_alloc_buf, no vma\n",
+ 		       alloc->pid);
+ 		return ERR_PTR(-ESRCH);
+@@ -714,9 +743,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ 	buffer->free = 1;
+ 	binder_insert_free_buffer(alloc, buffer);
+ 	alloc->free_async_space = alloc->buffer_size / 2;
+-	barrier();
+-	alloc->vma = vma;
+-	alloc->vma_vm_mm = vma->vm_mm;
++	binder_alloc_set_vma(alloc, vma);
+ 	mmgrab(alloc->vma_vm_mm);
+ 
+ 	return 0;
+@@ -743,10 +770,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
+ 	int buffers, page_count;
+ 	struct binder_buffer *buffer;
+ 
+-	BUG_ON(alloc->vma);
+-
+ 	buffers = 0;
+ 	mutex_lock(&alloc->mutex);
++	BUG_ON(alloc->vma);
++
+ 	while ((n = rb_first(&alloc->allocated_buffers))) {
+ 		buffer = rb_entry(n, struct binder_buffer, rb_node);
+ 
+@@ -889,7 +916,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
+  */
+ void binder_alloc_vma_close(struct binder_alloc *alloc)
+ {
+-	WRITE_ONCE(alloc->vma, NULL);
++	binder_alloc_set_vma(alloc, NULL);
+ }
+ 
+ /**
+@@ -924,7 +951,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 
+ 	index = page - alloc->pages;
+ 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
+-	vma = alloc->vma;
++	vma = binder_alloc_get_vma(alloc);
+ 	if (vma) {
+ 		if (!mmget_not_zero(alloc->vma_vm_mm))
+ 			goto err_mmget;
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index 09620c2ffa0f..704a761f94b2 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -2107,7 +2107,7 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	struct ahci_host_priv *hpriv = ap->host->private_data;
+ 	void __iomem *port_mmio = ahci_port_base(ap);
+ 	struct ata_device *dev = ap->link.device;
+-	u32 devslp, dm, dito, mdat, deto;
++	u32 devslp, dm, dito, mdat, deto, dito_conf;
+ 	int rc;
+ 	unsigned int err_mask;
+ 
+@@ -2131,8 +2131,15 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		return;
+ 	}
+ 
+-	/* device sleep was already enabled */
+-	if (devslp & PORT_DEVSLP_ADSE)
++	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
++	dito = devslp_idle_timeout / (dm + 1);
++	if (dito > 0x3ff)
++		dito = 0x3ff;
++
++	dito_conf = (devslp >> PORT_DEVSLP_DITO_OFFSET) & 0x3FF;
++
++	/* device sleep was already enabled and same dito */
++	if ((devslp & PORT_DEVSLP_ADSE) && (dito_conf == dito))
+ 		return;
+ 
+ 	/* set DITO, MDAT, DETO and enable DevSlp, need to stop engine first */
+@@ -2140,11 +2147,6 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 	if (rc)
+ 		return;
+ 
+-	dm = (devslp & PORT_DEVSLP_DM_MASK) >> PORT_DEVSLP_DM_OFFSET;
+-	dito = devslp_idle_timeout / (dm + 1);
+-	if (dito > 0x3ff)
+-		dito = 0x3ff;
+-
+ 	/* Use the nominal value 10 ms if the read MDAT is zero,
+ 	 * the nominal value of DETO is 20 ms.
+ 	 */
+@@ -2162,6 +2164,8 @@ static void ahci_set_aggressive_devslp(struct ata_port *ap, bool sleep)
+ 		deto = 20;
+ 	}
+ 
++	/* Make dito, mdat, deto bits to 0s */
++	devslp &= ~GENMASK_ULL(24, 2);
+ 	devslp |= ((dito << PORT_DEVSLP_DITO_OFFSET) |
+ 		   (mdat << PORT_DEVSLP_MDAT_OFFSET) |
+ 		   (deto << PORT_DEVSLP_DETO_OFFSET) |
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index f5e560188a18..622ab8edc035 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -416,26 +416,24 @@ static ssize_t show_valid_zones(struct device *dev,
+ 	struct zone *default_zone;
+ 	int nid;
+ 
+-	/*
+-	 * The block contains more than one zone can not be offlined.
+-	 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
+-	 */
+-	if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages, &valid_start_pfn, &valid_end_pfn))
+-		return sprintf(buf, "none\n");
+-
+-	start_pfn = valid_start_pfn;
+-	nr_pages = valid_end_pfn - start_pfn;
+-
+ 	/*
+ 	 * Check the existing zone. Make sure that we do that only on the
+ 	 * online nodes otherwise the page_zone is not reliable
+ 	 */
+ 	if (mem->state == MEM_ONLINE) {
++		/*
++		 * The block contains more than one zone can not be offlined.
++		 * This can happen e.g. for ZONE_DMA and ZONE_DMA32
++		 */
++		if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages,
++					  &valid_start_pfn, &valid_end_pfn))
++			return sprintf(buf, "none\n");
++		start_pfn = valid_start_pfn;
+ 		strcat(buf, page_zone(pfn_to_page(start_pfn))->name);
+ 		goto out;
+ 	}
+ 
+-	nid = pfn_to_nid(start_pfn);
++	nid = mem->nid;
+ 	default_zone = zone_for_pfn_range(MMOP_ONLINE_KEEP, nid, start_pfn, nr_pages);
+ 	strcat(buf, default_zone->name);
+ 
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 3fb95c8d9fd8..15a5ce5bba3d 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -1239,6 +1239,9 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
+ 	case NBD_SET_SOCK:
+ 		return nbd_add_socket(nbd, arg, false);
+ 	case NBD_SET_BLKSIZE:
++		if (!arg || !is_power_of_2(arg) || arg < 512 ||
++		    arg > PAGE_SIZE)
++			return -EINVAL;
+ 		nbd_size_set(nbd, arg,
+ 			     div_s64(config->bytesize, arg));
+ 		return 0;
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index b3f83cd96f33..01f59be71433 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -67,7 +67,7 @@
+ #include <scsi/scsi.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+-
++#include <linux/nospec.h>
+ #include <linux/uaccess.h>
+ 
+ #define DRIVER_NAME	"pktcdvd"
+@@ -2231,6 +2231,8 @@ static struct pktcdvd_device *pkt_find_dev_from_minor(unsigned int dev_minor)
+ {
+ 	if (dev_minor >= MAX_WRITERS)
+ 		return NULL;
++
++	dev_minor = array_index_nospec(dev_minor, MAX_WRITERS);
+ 	return pkt_devs[dev_minor];
+ }
+ 
+diff --git a/drivers/bluetooth/Kconfig b/drivers/bluetooth/Kconfig
+index f3c643a0473c..5f953ca8ac5b 100644
+--- a/drivers/bluetooth/Kconfig
++++ b/drivers/bluetooth/Kconfig
+@@ -159,6 +159,7 @@ config BT_HCIUART_LL
+ config BT_HCIUART_3WIRE
+ 	bool "Three-wire UART (H5) protocol support"
+ 	depends on BT_HCIUART
++	depends on BT_HCIUART_SERDEV
+ 	help
+ 	  The HCI Three-wire UART Transport Layer makes it possible to
+ 	  user the Bluetooth HCI over a serial port interface. The HCI
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 6116cd05e228..9086edc9066b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -117,7 +117,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	/* Lock the adapter for the duration of the whole sequence. */
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	if (tpm_dev.chip_type == SLB9645) {
+ 		/* use a combined read for newer chips
+@@ -192,7 +192,7 @@ static int iic_tpm_read(u8 addr, u8 *buffer, size_t len)
+ 	}
+ 
+ out:
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+@@ -224,7 +224,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 
+ 	if (!tpm_dev.client->adapter->algo->master_xfer)
+ 		return -EOPNOTSUPP;
+-	i2c_lock_adapter(tpm_dev.client->adapter);
++	i2c_lock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 
+ 	/* prepend the 'register address' to the buffer */
+ 	tpm_dev.buf[0] = addr;
+@@ -243,7 +243,7 @@ static int iic_tpm_write_generic(u8 addr, u8 *buffer, size_t len,
+ 		usleep_range(sleep_low, sleep_hi);
+ 	}
+ 
+-	i2c_unlock_adapter(tpm_dev.client->adapter);
++	i2c_unlock_bus(tpm_dev.client->adapter, I2C_LOCK_SEGMENT);
+ 	/* take care of 'guard time' */
+ 	usleep_range(SLEEP_DURATION_LOW, SLEEP_DURATION_HI);
+ 
+diff --git a/drivers/char/tpm/tpm_tis_spi.c b/drivers/char/tpm/tpm_tis_spi.c
+index 424ff2fde1f2..9914f6973463 100644
+--- a/drivers/char/tpm/tpm_tis_spi.c
++++ b/drivers/char/tpm/tpm_tis_spi.c
+@@ -199,6 +199,7 @@ static const struct tpm_tis_phy_ops tpm_spi_phy_ops = {
+ static int tpm_tis_spi_probe(struct spi_device *dev)
+ {
+ 	struct tpm_tis_spi_phy *phy;
++	int irq;
+ 
+ 	phy = devm_kzalloc(&dev->dev, sizeof(struct tpm_tis_spi_phy),
+ 			   GFP_KERNEL);
+@@ -211,7 +212,13 @@ static int tpm_tis_spi_probe(struct spi_device *dev)
+ 	if (!phy->iobuf)
+ 		return -ENOMEM;
+ 
+-	return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops,
++	/* If the SPI device has an IRQ then use that */
++	if (dev->irq > 0)
++		irq = dev->irq;
++	else
++		irq = -1;
++
++	return tpm_tis_core_init(&dev->dev, &phy->priv, irq, &tpm_spi_phy_ops,
+ 				 NULL);
+ }
+ 
+diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
+index bb2a6f2f5516..a985bf5e1ac6 100644
+--- a/drivers/clk/clk-scmi.c
++++ b/drivers/clk/clk-scmi.c
+@@ -38,7 +38,6 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
+ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				unsigned long *parent_rate)
+ {
+-	int step;
+ 	u64 fmin, fmax, ftmp;
+ 	struct scmi_clk *clk = to_scmi_clk(hw);
+ 
+@@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ftmp = rate - fmin;
+ 	ftmp += clk->info->range.step_size - 1; /* to round up */
+-	step = do_div(ftmp, clk->info->range.step_size);
++	do_div(ftmp, clk->info->range.step_size);
+ 
+-	return step * clk->info->range.step_size + fmin;
++	return ftmp * clk->info->range.step_size + fmin;
+ }
+ 
+ static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
+index fd49b24fd6af..99e2aace8078 100644
+--- a/drivers/dax/pmem.c
++++ b/drivers/dax/pmem.c
+@@ -105,15 +105,19 @@ static int dax_pmem_probe(struct device *dev)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_exit,
+-							&dax_pmem->ref);
+-	if (rc)
++	rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++	if (rc) {
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return rc;
++	}
+ 
+ 	dax_pmem->pgmap.ref = &dax_pmem->ref;
+ 	addr = devm_memremap_pages(dev, &dax_pmem->pgmap);
+-	if (IS_ERR(addr))
++	if (IS_ERR(addr)) {
++		devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
++		percpu_ref_exit(&dax_pmem->ref);
+ 		return PTR_ERR(addr);
++	}
+ 
+ 	rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill,
+ 							&dax_pmem->ref);
+diff --git a/drivers/firmware/google/vpd.c b/drivers/firmware/google/vpd.c
+index e9db895916c3..1aa67bb5d8c0 100644
+--- a/drivers/firmware/google/vpd.c
++++ b/drivers/firmware/google/vpd.c
+@@ -246,6 +246,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
+ 		sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
+ 		kfree(sec->raw_name);
+ 		memunmap(sec->baseaddr);
++		sec->enabled = false;
+ 	}
+ 
+ 	return 0;
+@@ -279,8 +280,10 @@ static int vpd_sections_init(phys_addr_t physaddr)
+ 		ret = vpd_section_init("rw", &rw_vpd,
+ 				       physaddr + sizeof(struct vpd_cbmem) +
+ 				       header.ro_size, header.rw_size);
+-		if (ret)
++		if (ret) {
++			vpd_section_destroy(&ro_vpd);
+ 			return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpio/gpio-ml-ioh.c b/drivers/gpio/gpio-ml-ioh.c
+index b23d9a36be1f..51c7d1b84c2e 100644
+--- a/drivers/gpio/gpio-ml-ioh.c
++++ b/drivers/gpio/gpio-ml-ioh.c
+@@ -496,9 +496,10 @@ static int ioh_gpio_probe(struct pci_dev *pdev,
+ 	return 0;
+ 
+ err_gpiochip_add:
++	chip = chip_save;
+ 	while (--i >= 0) {
+-		chip--;
+ 		gpiochip_remove(&chip->gpio);
++		chip++;
+ 	}
+ 	kfree(chip_save);
+ 
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 1e66f808051c..2e33fd552899 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -241,6 +241,17 @@ int pxa_irq_to_gpio(int irq)
+ 	return irq_gpio0;
+ }
+ 
++static bool pxa_gpio_has_pinctrl(void)
++{
++	switch (gpio_type) {
++	case PXA3XX_GPIO:
++		return false;
++
++	default:
++		return true;
++	}
++}
++
+ static int pxa_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
+ {
+ 	struct pxa_gpio_chip *pchip = chip_to_pxachip(chip);
+@@ -255,9 +266,11 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ 	unsigned long flags;
+ 	int ret;
+ 
+-	ret = pinctrl_gpio_direction_input(chip->base + offset);
+-	if (!ret)
+-		return 0;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_input(chip->base + offset);
++		if (!ret)
++			return 0;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -282,9 +295,11 @@ static int pxa_gpio_direction_output(struct gpio_chip *chip,
+ 
+ 	writel_relaxed(mask, base + (value ? GPSR_OFFSET : GPCR_OFFSET));
+ 
+-	ret = pinctrl_gpio_direction_output(chip->base + offset);
+-	if (ret)
+-		return ret;
++	if (pxa_gpio_has_pinctrl()) {
++		ret = pinctrl_gpio_direction_output(chip->base + offset);
++		if (ret)
++			return ret;
++	}
+ 
+ 	spin_lock_irqsave(&gpio_lock, flags);
+ 
+@@ -348,8 +363,12 @@ static int pxa_init_gpio_chip(struct pxa_gpio_chip *pchip, int ngpio,
+ 	pchip->chip.set = pxa_gpio_set;
+ 	pchip->chip.to_irq = pxa_gpio_to_irq;
+ 	pchip->chip.ngpio = ngpio;
+-	pchip->chip.request = gpiochip_generic_request;
+-	pchip->chip.free = gpiochip_generic_free;
++
++	if (pxa_gpio_has_pinctrl()) {
++		pchip->chip.request = gpiochip_generic_request;
++		pchip->chip.free = gpiochip_generic_free;
++	}
++
+ #ifdef CONFIG_OF_GPIO
+ 	pchip->chip.of_node = np;
+ 	pchip->chip.of_xlate = pxa_gpio_of_xlate;
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index 94396caaca75..d5d79727c55d 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -720,4 +720,4 @@ static int __init tegra_gpio_init(void)
+ {
+ 	return platform_driver_register(&tegra_gpio_driver);
+ }
+-postcore_initcall(tegra_gpio_init);
++subsys_initcall(tegra_gpio_init);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+index a576b8bbb3cd..dea40b322191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+@@ -150,7 +150,7 @@ static void dce_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	}
+ }
+ 
+-static void dce_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -261,6 +261,8 @@ static void dce_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	return true;
+ }
+ 
+ static bool dce_is_dmcu_initialized(struct dmcu *dmcu)
+@@ -545,24 +547,25 @@ static void dcn10_dmcu_set_psr_enable(struct dmcu *dmcu, bool enable, bool wait)
+ 	 *  least a few frames. Should never hit the max retry assert below.
+ 	 */
+ 	if (wait == true) {
+-	for (retryCount = 0; retryCount <= 1000; retryCount++) {
+-		dcn10_get_dmcu_psr_state(dmcu, &psr_state);
+-		if (enable) {
+-			if (psr_state != 0)
+-				break;
+-		} else {
+-			if (psr_state == 0)
+-				break;
++		for (retryCount = 0; retryCount <= 1000; retryCount++) {
++			dcn10_get_dmcu_psr_state(dmcu, &psr_state);
++			if (enable) {
++				if (psr_state != 0)
++					break;
++			} else {
++				if (psr_state == 0)
++					break;
++			}
++			udelay(500);
+ 		}
+-		udelay(500);
+-	}
+ 
+-	/* assert if max retry hit */
+-	ASSERT(retryCount <= 1000);
++		/* assert if max retry hit */
++		if (retryCount >= 1000)
++			ASSERT(0);
+ 	}
+ }
+ 
+-static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
++static bool dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 		struct dc_link *link,
+ 		struct psr_context *psr_context)
+ {
+@@ -577,7 +580,7 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* If microcontroller is not running, do nothing */
+ 	if (dmcu->dmcu_state != DMCU_RUNNING)
+-		return;
++		return false;
+ 
+ 	link->link_enc->funcs->psr_program_dp_dphy_fast_training(link->link_enc,
+ 			psr_context->psrExitLinkTrainingRequired);
+@@ -677,6 +680,11 @@ static void dcn10_dmcu_setup_psr(struct dmcu *dmcu,
+ 
+ 	/* notifyDMCUMsg */
+ 	REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1);
++
++	/* waitDMCUReadyForCmd */
++	REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 1, 10000);
++
++	return true;
+ }
+ 
+ static void dcn10_psr_wait_loop(
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+index de60f940030d..4550747fb61c 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dmcu.h
+@@ -48,7 +48,7 @@ struct dmcu_funcs {
+ 			const char *src,
+ 			unsigned int bytes);
+ 	void (*set_psr_enable)(struct dmcu *dmcu, bool enable, bool wait);
+-	void (*setup_psr)(struct dmcu *dmcu,
++	bool (*setup_psr)(struct dmcu *dmcu,
+ 			struct dc_link *link,
+ 			struct psr_context *psr_context);
+ 	void (*get_psr_state)(struct dmcu *dmcu, uint32_t *psr_state);
+diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
+index 48685cddbad1..c73bd003f845 100644
+--- a/drivers/gpu/ipu-v3/ipu-common.c
++++ b/drivers/gpu/ipu-v3/ipu-common.c
+@@ -1401,6 +1401,8 @@ static int ipu_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 
+ 	ipu->id = of_alias_get_id(np, "ipu");
++	if (ipu->id < 0)
++		ipu->id = 0;
+ 
+ 	if (of_device_is_compatible(np, "fsl,imx6qp-ipu") &&
+ 	    IS_ENABLED(CONFIG_DRM)) {
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c7981ddd8776..e80bcd71fe1e 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -528,6 +528,7 @@
+ 
+ #define I2C_VENDOR_ID_RAYD		0x2386
+ #define I2C_PRODUCT_ID_RAYD_3118	0x3118
++#define I2C_PRODUCT_ID_RAYD_4B33	0x4B33
+ 
+ #define USB_VENDOR_ID_HANWANG		0x0b57
+ #define USB_DEVICE_ID_HANWANG_TABLET_FIRST	0x5000
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index ab93dd5927c3..b23c4b5854d8 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -1579,6 +1579,7 @@ static struct hid_input *hidinput_allocate(struct hid_device *hid,
+ 	input_dev->dev.parent = &hid->dev;
+ 
+ 	hidinput->input = input_dev;
++	hidinput->application = application;
+ 	list_add_tail(&hidinput->list, &hid->inputs);
+ 
+ 	INIT_LIST_HEAD(&hidinput->reports);
+@@ -1674,8 +1675,7 @@ static struct hid_input *hidinput_match_application(struct hid_report *report)
+ 	struct hid_input *hidinput;
+ 
+ 	list_for_each_entry(hidinput, &hid->inputs, list) {
+-		if (hidinput->report &&
+-		    hidinput->report->application == report->application)
++		if (hidinput->application == report->application)
+ 			return hidinput;
+ 	}
+ 
+@@ -1812,6 +1812,7 @@ void hidinput_disconnect(struct hid_device *hid)
+ 			input_unregister_device(hidinput->input);
+ 		else
+ 			input_free_device(hidinput->input);
++		kfree(hidinput->name);
+ 		kfree(hidinput);
+ 	}
+ 
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 45968f7970f8..15c934ef6b18 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1167,7 +1167,8 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 				     struct hid_usage *usage,
+ 				     enum latency_mode latency,
+ 				     bool surface_switch,
+-				     bool button_switch)
++				     bool button_switch,
++				     bool *inputmode_found)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hdev);
+ 	struct mt_class *cls = &td->mtclass;
+@@ -1179,6 +1180,14 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 
+ 	switch (usage->hid) {
+ 	case HID_DG_INPUTMODE:
++		/*
++		 * Some elan panels wrongly declare 2 input mode features,
++		 * and silently ignore when we set the value in the second
++		 * field. Skip the second feature and hope for the best.
++		 */
++		if (*inputmode_found)
++			return false;
++
+ 		if (cls->quirks & MT_QUIRK_FORCE_GET_FEATURE) {
+ 			report_len = hid_report_len(report);
+ 			buf = hid_alloc_report_buf(report, GFP_KERNEL);
+@@ -1194,6 +1203,7 @@ static bool mt_need_to_apply_feature(struct hid_device *hdev,
+ 		}
+ 
+ 		field->value[index] = td->inputmode_value;
++		*inputmode_found = true;
+ 		return true;
+ 
+ 	case HID_DG_CONTACTMAX:
+@@ -1231,6 +1241,7 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 	struct hid_usage *usage;
+ 	int i, j;
+ 	bool update_report;
++	bool inputmode_found = false;
+ 
+ 	rep_enum = &hdev->report_enum[HID_FEATURE_REPORT];
+ 	list_for_each_entry(rep, &rep_enum->report_list, list) {
+@@ -1249,7 +1260,8 @@ static void mt_set_modes(struct hid_device *hdev, enum latency_mode latency,
+ 							     usage,
+ 							     latency,
+ 							     surface_switch,
+-							     button_switch))
++							     button_switch,
++							     &inputmode_found))
+ 					update_report = true;
+ 			}
+ 		}
+@@ -1476,6 +1488,9 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	 */
+ 	hdev->quirks |= HID_QUIRK_INPUT_PER_APP;
+ 
++	if (id->group != HID_GROUP_MULTITOUCH_WIN_8)
++		hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++
+ 	timer_setup(&td->release_timer, mt_expired_timeout, 0);
+ 
+ 	ret = hid_parse(hdev);
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index eae0cb3ddec6..5fd1159fc095 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -174,6 +174,8 @@ static const struct i2c_hid_quirks {
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH,
+ 		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
++	{ I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33,
++		I2C_HID_QUIRK_RESEND_REPORT_DESCR },
+ 	{ 0, 0 }
+ };
+ 
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 658dc765753b..553adccb05d7 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -242,6 +242,10 @@ int hv_synic_alloc(void)
+ 
+ 	return 0;
+ err:
++	/*
++	 * Any memory allocations that succeeded will be freed when
++	 * the caller cleans up by calling hv_synic_free()
++	 */
+ 	return -ENOMEM;
+ }
+ 
+@@ -254,12 +258,10 @@ void hv_synic_free(void)
+ 		struct hv_per_cpu_context *hv_cpu
+ 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+ 
+-		if (hv_cpu->synic_event_page)
+-			free_page((unsigned long)hv_cpu->synic_event_page);
+-		if (hv_cpu->synic_message_page)
+-			free_page((unsigned long)hv_cpu->synic_message_page);
+-		if (hv_cpu->post_msg_page)
+-			free_page((unsigned long)hv_cpu->post_msg_page);
++		kfree(hv_cpu->clk_evt);
++		free_page((unsigned long)hv_cpu->synic_event_page);
++		free_page((unsigned long)hv_cpu->synic_message_page);
++		free_page((unsigned long)hv_cpu->post_msg_page);
+ 	}
+ 
+ 	kfree(hv_context.hv_numa_map);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index 60e4d0e939a3..715b6fdb4989 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -868,7 +868,7 @@ static int aspeed_i2c_probe_bus(struct platform_device *pdev)
+ 	if (!match)
+ 		bus->get_clk_reg_val = aspeed_i2c_24xx_get_clk_reg_val;
+ 	else
+-		bus->get_clk_reg_val = match->data;
++		bus->get_clk_reg_val = (u32 (*)(u32))match->data;
+ 
+ 	/* Initialize the I2C adapter */
+ 	spin_lock_init(&bus->lock);
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index aa726607645e..45fcf0c37a9e 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -139,6 +139,7 @@
+ 
+ #define SBREG_BAR		0x10
+ #define SBREG_SMBCTRL		0xc6000c
++#define SBREG_SMBCTRL_DNV	0xcf000c
+ 
+ /* Host status bits for SMBPCISTS */
+ #define SMBPCISTS_INTS		BIT(3)
+@@ -1396,7 +1397,11 @@ static void i801_add_tco(struct i801_priv *priv)
+ 	spin_unlock(&p2sb_spinlock);
+ 
+ 	res = &tco_res[ICH_RES_MEM_OFF];
+-	res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++	if (pci_dev->device == PCI_DEVICE_ID_INTEL_DNV_SMBUS)
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL_DNV;
++	else
++		res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL;
++
+ 	res->end = res->start + 3;
+ 	res->flags = IORESOURCE_MEM;
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 9a71e50d21f1..0c51c0ffdda9 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -532,6 +532,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ {
+ 	u8 rx_watermark;
+ 	struct i2c_msg *msg = i2c->rx_msg = i2c->tx_msg;
++	unsigned long flags;
+ 
+ 	/* Clear and enable Rx full interrupt. */
+ 	xiic_irq_clr_en(i2c, XIIC_INTR_RX_FULL_MASK | XIIC_INTR_TX_ERROR_MASK);
+@@ -547,6 +548,7 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 		rx_watermark = IIC_RX_FIFO_DEPTH;
+ 	xiic_setreg8(i2c, XIIC_RFD_REG_OFFSET, rx_watermark - 1);
+ 
++	local_irq_save(flags);
+ 	if (!(msg->flags & I2C_M_NOSTART))
+ 		/* write the address */
+ 		xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+@@ -556,6 +558,8 @@ static void xiic_start_recv(struct xiic_i2c *i2c)
+ 
+ 	xiic_setreg16(i2c, XIIC_DTR_REG_OFFSET,
+ 		msg->len | ((i2c->nmsgs == 1) ? XIIC_TX_DYN_STOP_MASK : 0));
++	local_irq_restore(flags);
++
+ 	if (i2c->nmsgs == 1)
+ 		/* very last, enable bus not busy as well */
+ 		xiic_irq_clr_en(i2c, XIIC_INTR_BNB_MASK);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index bff10ab141b0..dafcb6f019b3 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1445,9 +1445,16 @@ static bool cma_match_net_dev(const struct rdma_cm_id *id,
+ 		       (addr->src_addr.ss_family == AF_IB ||
+ 			rdma_protocol_roce(id->device, port_num));
+ 
+-	return !addr->dev_addr.bound_dev_if ||
+-	       (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
+-		addr->dev_addr.bound_dev_if == net_dev->ifindex);
++	/*
+	 * Net namespaces must match, and if the listener is listening
++	 * on a specific netdevice then netdevice must match as well.
++	 */
++	if (net_eq(dev_net(net_dev), addr->dev_addr.net) &&
++	    (!!addr->dev_addr.bound_dev_if ==
++	     (addr->dev_addr.bound_dev_if == net_dev->ifindex)))
++		return true;
++	else
++		return false;
+ }
+ 
+ static struct rdma_id_private *cma_find_listener(
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 63b5b3edabcb..8dc336a85128 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -494,6 +494,9 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ 			step_idx = 1;
+ 		} else if (hop_num == HNS_ROCE_HOP_NUM_0) {
+ 			step_idx = 0;
++		} else {
++			ret = -EINVAL;
++			goto err_dma_alloc_l1;
+ 		}
+ 
+ 		/* set HEM base address to hardware */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index a6e11be0ea0f..c00925ed9da8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -273,7 +273,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				ud_sq_wqe->immtdata = wr->ex.imm_data;
++				ud_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			default:
+ 				ud_sq_wqe->immtdata = 0;
+@@ -371,7 +372,8 @@ static int hns_roce_v2_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
+ 			switch (wr->opcode) {
+ 			case IB_WR_SEND_WITH_IMM:
+ 			case IB_WR_RDMA_WRITE_WITH_IMM:
+-				rc_sq_wqe->immtdata = wr->ex.imm_data;
++				rc_sq_wqe->immtdata =
++				      cpu_to_le32(be32_to_cpu(wr->ex.imm_data));
+ 				break;
+ 			case IB_WR_SEND_WITH_INV:
+ 				rc_sq_wqe->inv_key =
+@@ -1931,7 +1933,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_RDMA_WRITE_IMM:
+ 			wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND:
+ 			wc->opcode = IB_WC_RECV;
+@@ -1940,7 +1943,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *hr_cq,
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_IMM:
+ 			wc->opcode = IB_WC_RECV;
+ 			wc->wc_flags = IB_WC_WITH_IMM;
+-			wc->ex.imm_data = cqe->immtdata;
++			wc->ex.imm_data =
++				cpu_to_be32(le32_to_cpu(cqe->immtdata));
+ 			break;
+ 		case HNS_ROCE_V2_OPCODE_SEND_WITH_INV:
+ 			wc->opcode = IB_WC_RECV;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index d47675f365c7..7e2c740e0df5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -768,7 +768,7 @@ struct hns_roce_v2_cqe {
+ 	__le32	byte_4;
+ 	union {
+ 		__le32 rkey;
+-		__be32 immtdata;
++		__le32 immtdata;
+ 	};
+ 	__le32	byte_12;
+ 	__le32	byte_16;
+@@ -926,7 +926,7 @@ struct hns_roce_v2_cq_db {
+ struct hns_roce_v2_ud_send_wqe {
+ 	__le32	byte_4;
+ 	__le32	msg_len;
+-	__be32	immtdata;
++	__le32	immtdata;
+ 	__le32	byte_16;
+ 	__le32	byte_20;
+ 	__le32	byte_24;
+@@ -1012,7 +1012,7 @@ struct hns_roce_v2_rc_send_wqe {
+ 	__le32		msg_len;
+ 	union {
+ 		__le32  inv_key;
+-		__be32  immtdata;
++		__le32  immtdata;
+ 	};
+ 	__le32		byte_16;
+ 	__le32		byte_20;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index 6709328d90f8..c7e034963738 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -822,6 +822,7 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
+ 			if (neigh && list_empty(&neigh->list)) {
+ 				kref_get(&mcast->ah->ref);
+ 				neigh->ah	= mcast->ah;
++				neigh->ah->valid = 1;
+ 				list_add_tail(&neigh->list, &mcast->neigh_list);
+ 			}
+ 		}
+diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
+index 54fe190fd4bc..48c5ccab00a0 100644
+--- a/drivers/input/touchscreen/atmel_mxt_ts.c
++++ b/drivers/input/touchscreen/atmel_mxt_ts.c
+@@ -1658,10 +1658,11 @@ static int mxt_parse_object_table(struct mxt_data *data,
+ 			break;
+ 		case MXT_TOUCH_MULTI_T9:
+ 			data->multitouch = MXT_TOUCH_MULTI_T9;
++			/* Only handle messages from first T9 instance */
+ 			data->T9_reportid_min = min_id;
+-			data->T9_reportid_max = max_id;
+-			data->num_touchids = object->num_report_ids
+-						* mxt_obj_instances(object);
++			data->T9_reportid_max = min_id +
++						object->num_report_ids - 1;
++			data->num_touchids = object->num_report_ids;
+ 			break;
+ 		case MXT_SPT_MESSAGECOUNT_T44:
+ 			data->T44_address = object->start_address;
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 1d647104bccc..b73c6a7bf7f2 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -24,6 +24,7 @@
+ #include <linux/acpi_iort.h>
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
++#include <linux/crash_dump.h>
+ #include <linux/delay.h>
+ #include <linux/dma-iommu.h>
+ #include <linux/err.h>
+@@ -2211,8 +2212,12 @@ static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+ 	reg &= ~clr;
+ 	reg |= set;
+ 	writel_relaxed(reg | GBPA_UPDATE, gbpa);
+-	return readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+-					  1, ARM_SMMU_POLL_TIMEOUT_US);
++	ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
++					 1, ARM_SMMU_POLL_TIMEOUT_US);
++
++	if (ret)
++		dev_err(smmu->dev, "GBPA not responding to update\n");
++	return ret;
+ }
+ 
+ static void arm_smmu_free_msis(void *data)
+@@ -2392,8 +2397,15 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 
+ 	/* Clear CR0 and sync (disables SMMU and queue processing) */
+ 	reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+-	if (reg & CR0_SMMUEN)
++	if (reg & CR0_SMMUEN) {
++		if (is_kdump_kernel()) {
++			arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
++			arm_smmu_device_disable(smmu);
++			return -EBUSY;
++		}
++
+ 		dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
++	}
+ 
+ 	ret = arm_smmu_device_disable(smmu);
+ 	if (ret)
+@@ -2491,10 +2503,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+ 		enables |= CR0_SMMUEN;
+ 	} else {
+ 		ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+-		if (ret) {
+-			dev_err(smmu->dev, "GBPA not responding to update\n");
++		if (ret)
+ 			return ret;
+-		}
+ 	}
+ 	ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+ 				      ARM_SMMU_CR0ACK);
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 09b47260c74b..feb1664815b7 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -73,7 +73,7 @@ struct ipmmu_vmsa_domain {
+ 	struct io_pgtable_ops *iop;
+ 
+ 	unsigned int context_id;
+-	spinlock_t lock;			/* Protects mappings */
++	struct mutex mutex;			/* Protects mappings */
+ };
+ 
+ static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
+@@ -595,7 +595,7 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
+ 	if (!domain)
+ 		return NULL;
+ 
+-	spin_lock_init(&domain->lock);
++	mutex_init(&domain->mutex);
+ 
+ 	return &domain->io_domain;
+ }
+@@ -641,7 +641,6 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+ 	struct ipmmu_vmsa_device *mmu = to_ipmmu(dev);
+ 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
+-	unsigned long flags;
+ 	unsigned int i;
+ 	int ret = 0;
+ 
+@@ -650,7 +649,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 		return -ENXIO;
+ 	}
+ 
+-	spin_lock_irqsave(&domain->lock, flags);
++	mutex_lock(&domain->mutex);
+ 
+ 	if (!domain->mmu) {
+ 		/* The domain hasn't been used yet, initialize it. */
+@@ -674,7 +673,7 @@ static int ipmmu_attach_device(struct iommu_domain *io_domain,
+ 	} else
+ 		dev_info(dev, "Reusing IPMMU context %u\n", domain->context_id);
+ 
+-	spin_unlock_irqrestore(&domain->lock, flags);
++	mutex_unlock(&domain->mutex);
+ 
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 25c1ce811053..1fdd09ebb3f1 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -534,8 +534,9 @@ init_pmu(void)
+ 	int timeout;
+ 	struct adb_request req;
+ 
+-	out_8(&via[B], via[B] | TREQ);			/* negate TREQ */
+-	out_8(&via[DIRB], (via[DIRB] | TREQ) & ~TACK);	/* TACK in, TREQ out */
++	/* Negate TREQ. Set TACK to input and TREQ to output. */
++	out_8(&via[B], in_8(&via[B]) | TREQ);
++	out_8(&via[DIRB], (in_8(&via[DIRB]) | TREQ) & ~TACK);
+ 
+ 	pmu_request(&req, NULL, 2, PMU_SET_INTR_MASK, pmu_intr_mask);
+ 	timeout =  100000;
+@@ -1418,8 +1419,8 @@ pmu_sr_intr(void)
+ 	struct adb_request *req;
+ 	int bite = 0;
+ 
+-	if (via[B] & TREQ) {
+-		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", via[B]);
++	if (in_8(&via[B]) & TREQ) {
++		printk(KERN_ERR "PMU: spurious SR intr (%x)\n", in_8(&via[B]));
+ 		out_8(&via[IFR], SR_INT);
+ 		return NULL;
+ 	}
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index ce14a3d1f609..44df244807e5 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -2250,7 +2250,7 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		{0, 2, "Invalid number of cache feature arguments"},
+ 	};
+ 
+-	int r;
++	int r, mode_ctr = 0;
+ 	unsigned argc;
+ 	const char *arg;
+ 	struct cache_features *cf = &ca->features;
+@@ -2264,14 +2264,20 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 	while (argc--) {
+ 		arg = dm_shift_arg(as);
+ 
+-		if (!strcasecmp(arg, "writeback"))
++		if (!strcasecmp(arg, "writeback")) {
+ 			cf->io_mode = CM_IO_WRITEBACK;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "writethrough"))
++		else if (!strcasecmp(arg, "writethrough")) {
+ 			cf->io_mode = CM_IO_WRITETHROUGH;
++			mode_ctr++;
++		}
+ 
+-		else if (!strcasecmp(arg, "passthrough"))
++		else if (!strcasecmp(arg, "passthrough")) {
+ 			cf->io_mode = CM_IO_PASSTHROUGH;
++			mode_ctr++;
++		}
+ 
+ 		else if (!strcasecmp(arg, "metadata2"))
+ 			cf->metadata_version = 2;
+@@ -2282,6 +2288,11 @@ static int parse_features(struct cache_args *ca, struct dm_arg_set *as,
+ 		}
+ 	}
+ 
++	if (mode_ctr > 1) {
++		*error = "Duplicate cache io_mode features requested";
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2031506a0ecd..49107c52c8e6 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4521,6 +4521,12 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
+ 			s->failed++;
+ 			if (rdev && !test_bit(Faulty, &rdev->flags))
+ 				do_recovery = 1;
++			else if (!rdev) {
++				rdev = rcu_dereference(
++				    conf->disks[i].replacement);
++				if (rdev && !test_bit(Faulty, &rdev->flags))
++					do_recovery = 1;
++			}
+ 		}
+ 
+ 		if (test_bit(R5_InJournal, &dev->flags))
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index a0d0b53c91d7..a5de65dcf784 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -897,7 +897,10 @@ static int helene_x_pon(struct helene_priv *priv)
+ 	helene_write_regs(priv, 0x99, cdata, sizeof(cdata));
+ 
+ 	/* 0x81 - 0x94 */
+-	data[0] = 0x18; /* xtal 24 MHz */
++	if (priv->xtal == SONY_HELENE_XTAL_16000)
++		data[0] = 0x10; /* xtal 16 MHz */
++	else
++		data[0] = 0x18; /* xtal 24 MHz */
+ 	data[1] = (uint8_t)(0x80 | (0x04 & 0x1F)); /* 4 x 25 = 100uA */
+ 	data[2] = (uint8_t)(0x80 | (0x26 & 0x7F)); /* 38 x 0.25 = 9.5pF */
+ 	data[3] = 0x80; /* REFOUT signal output 500mVpp */
+diff --git a/drivers/media/platform/davinci/vpif_display.c b/drivers/media/platform/davinci/vpif_display.c
+index 7be636237acf..0f324055cc9f 100644
+--- a/drivers/media/platform/davinci/vpif_display.c
++++ b/drivers/media/platform/davinci/vpif_display.c
+@@ -1114,6 +1114,14 @@ vpif_init_free_channel_objects:
+ 	return err;
+ }
+ 
++static void free_vpif_objs(void)
++{
++	int i;
++
++	for (i = 0; i < VPIF_DISPLAY_MAX_DEVICES; i++)
++		kfree(vpif_obj.dev[i]);
++}
++
+ static int vpif_async_bound(struct v4l2_async_notifier *notifier,
+ 			    struct v4l2_subdev *subdev,
+ 			    struct v4l2_async_subdev *asd)
+@@ -1255,11 +1263,6 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!pdev->dev.platform_data) {
+-		dev_warn(&pdev->dev, "Missing platform data.  Giving up.\n");
+-		return -EINVAL;
+-	}
+-
+ 	vpif_dev = &pdev->dev;
+ 	err = initialize_vpif();
+ 
+@@ -1271,7 +1274,7 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 	err = v4l2_device_register(vpif_dev, &vpif_obj.v4l2_dev);
+ 	if (err) {
+ 		v4l2_err(vpif_dev->driver, "Error registering v4l2 device\n");
+-		return err;
++		goto vpif_free;
+ 	}
+ 
+ 	while ((res = platform_get_resource(pdev, IORESOURCE_IRQ, res_idx))) {
+@@ -1314,7 +1317,10 @@ static __init int vpif_probe(struct platform_device *pdev)
+ 			if (vpif_obj.sd[i])
+ 				vpif_obj.sd[i]->grp_id = 1 << i;
+ 		}
+-		vpif_probe_complete();
++		err = vpif_probe_complete();
++		if (err) {
++			goto probe_subdev_out;
++		}
+ 	} else {
+ 		vpif_obj.notifier.subdevs = vpif_obj.config->asd;
+ 		vpif_obj.notifier.num_subdevs = vpif_obj.config->asd_sizes[0];
+@@ -1334,6 +1340,8 @@ probe_subdev_out:
+ 	kfree(vpif_obj.sd);
+ vpif_unregister:
+ 	v4l2_device_unregister(&vpif_obj.v4l2_dev);
++vpif_free:
++	free_vpif_objs();
+ 
+ 	return err;
+ }
+@@ -1355,8 +1363,8 @@ static int vpif_remove(struct platform_device *device)
+ 		ch = vpif_obj.dev[i];
+ 		/* Unregister video device */
+ 		video_unregister_device(&ch->video_dev);
+-		kfree(vpif_obj.dev[i]);
+ 	}
++	free_vpif_objs();
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss-8x16/camss-csid.c b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+index 226f36ef7419..2bf65805f2c1 100644
+--- a/drivers/media/platform/qcom/camss-8x16/camss-csid.c
++++ b/drivers/media/platform/qcom/camss-8x16/camss-csid.c
+@@ -392,9 +392,6 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 		    !media_entity_remote_pad(&csid->pads[MSM_CSID_PAD_SINK]))
+ 			return -ENOLINK;
+ 
+-		dt = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SRC].code)->
+-								data_type;
+-
+ 		if (tg->enabled) {
+ 			/* Config Test Generator */
+ 			struct v4l2_mbus_framefmt *f =
+@@ -416,6 +413,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_0(0));
+ 
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->data_type;
++
+ 			/* 5:0 data type */
+ 			val = dt;
+ 			writel_relaxed(val, csid->base +
+@@ -425,6 +425,9 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 			val = tg->payload_mode;
+ 			writel_relaxed(val, csid->base +
+ 				       CAMSS_CSID_TG_DT_n_CGG_2(0));
++
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SRC].code)->decode_format;
+ 		} else {
+ 			struct csid_phy_config *phy = &csid->phy;
+ 
+@@ -439,13 +442,16 @@ static int csid_set_stream(struct v4l2_subdev *sd, int enable)
+ 
+ 			writel_relaxed(val,
+ 				       csid->base + CAMSS_CSID_CORE_CTRL_1);
++
++			dt = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->data_type;
++			df = csid_get_fmt_entry(
++				csid->fmt[MSM_CSID_PAD_SINK].code)->decode_format;
+ 		}
+ 
+ 		/* Config LUT */
+ 
+ 		dt_shift = (cid % 4) * 8;
+-		df = csid_get_fmt_entry(csid->fmt[MSM_CSID_PAD_SINK].code)->
+-								decode_format;
+ 
+ 		val = readl_relaxed(csid->base + CAMSS_CSID_CID_LUT_VC_n(vc));
+ 		val &= ~(0xff << dt_shift);
+diff --git a/drivers/media/platform/rcar-vin/rcar-csi2.c b/drivers/media/platform/rcar-vin/rcar-csi2.c
+index daef72d410a3..dc5ae8025832 100644
+--- a/drivers/media/platform/rcar-vin/rcar-csi2.c
++++ b/drivers/media/platform/rcar-vin/rcar-csi2.c
+@@ -339,6 +339,7 @@ enum rcar_csi2_pads {
+ 
+ struct rcar_csi2_info {
+ 	int (*init_phtw)(struct rcar_csi2 *priv, unsigned int mbps);
++	int (*confirm_start)(struct rcar_csi2 *priv);
+ 	const struct rcsi2_mbps_reg *hsfreqrange;
+ 	unsigned int csi0clkfreqrange;
+ 	bool clear_ulps;
+@@ -545,6 +546,13 @@ static int rcsi2_start(struct rcar_csi2 *priv)
+ 	if (ret)
+ 		return ret;
+ 
++	/* Confirm start */
++	if (priv->info->confirm_start) {
++		ret = priv->info->confirm_start(priv);
++		if (ret)
++			return ret;
++	}
++
+ 	/* Clear Ultra Low Power interrupt. */
+ 	if (priv->info->clear_ulps)
+ 		rcsi2_write(priv, INTSTATE_REG,
+@@ -880,6 +888,11 @@ static int rcsi2_init_phtw_h3_v3h_m3n(struct rcar_csi2 *priv, unsigned int mbps)
+ }
+ 
+ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
++{
++	return rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
++}
++
++static int rcsi2_confirm_start_v3m_e3(struct rcar_csi2 *priv)
+ {
+ 	static const struct phtw_value step1[] = {
+ 		{ .data = 0xed, .code = 0x34 },
+@@ -890,12 +903,6 @@ static int rcsi2_init_phtw_v3m_e3(struct rcar_csi2 *priv, unsigned int mbps)
+ 		{ /* sentinel */ },
+ 	};
+ 
+-	int ret;
+-
+-	ret = rcsi2_phtw_write_mbps(priv, mbps, phtw_mbps_v3m_e3, 0x44);
+-	if (ret)
+-		return ret;
+-
+ 	return rcsi2_phtw_write_array(priv, step1);
+ }
+ 
+@@ -949,6 +956,7 @@ static const struct rcar_csi2_info rcar_csi2_info_r8a77965 = {
+ 
+ static const struct rcar_csi2_info rcar_csi2_info_r8a77970 = {
+ 	.init_phtw = rcsi2_init_phtw_v3m_e3,
++	.confirm_start = rcsi2_confirm_start_v3m_e3,
+ };
+ 
+ static const struct of_device_id rcar_csi2_of_table[] = {
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index a80251ed3143..780548dd650e 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -254,24 +254,24 @@ static void s5p_mfc_handle_frame_all_extracted(struct s5p_mfc_ctx *ctx)
+ static void s5p_mfc_handle_frame_copy_time(struct s5p_mfc_ctx *ctx)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+-	struct s5p_mfc_buf  *dst_buf, *src_buf;
+-	size_t dec_y_addr;
++	struct s5p_mfc_buf *dst_buf, *src_buf;
++	u32 dec_y_addr;
+ 	unsigned int frame_type;
+ 
+ 	/* Make sure we actually have a new frame before continuing. */
+ 	frame_type = s5p_mfc_hw_call(dev->mfc_ops, get_dec_frame_type, dev);
+ 	if (frame_type == S5P_FIMV_DECODE_FRAME_SKIPPED)
+ 		return;
+-	dec_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
++	dec_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dec_y_adr, dev);
+ 
+ 	/* Copy timestamp / timecode from decoded src to dst and set
+ 	   appropriate flags. */
+ 	src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dec_y_addr) {
+-			dst_buf->b->timecode =
+-						src_buf->b->timecode;
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
++		if (addr == dec_y_addr) {
++			dst_buf->b->timecode = src_buf->b->timecode;
+ 			dst_buf->b->vb2_buf.timestamp =
+ 						src_buf->b->vb2_buf.timestamp;
+ 			dst_buf->b->flags &=
+@@ -307,10 +307,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ {
+ 	struct s5p_mfc_dev *dev = ctx->dev;
+ 	struct s5p_mfc_buf  *dst_buf;
+-	size_t dspl_y_addr;
++	u32 dspl_y_addr;
+ 	unsigned int frame_type;
+ 
+-	dspl_y_addr = s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
++	dspl_y_addr = (u32)s5p_mfc_hw_call(dev->mfc_ops, get_dspl_y_adr, dev);
+ 	if (IS_MFCV6_PLUS(dev))
+ 		frame_type = s5p_mfc_hw_call(dev->mfc_ops,
+ 			get_disp_frame_type, ctx);
+@@ -329,9 +329,10 @@ static void s5p_mfc_handle_frame_new(struct s5p_mfc_ctx *ctx, unsigned int err)
+ 	/* The MFC returns address of the buffer, now we have to
+ 	 * check which videobuf does it correspond to */
+ 	list_for_each_entry(dst_buf, &ctx->dst_queue, list) {
++		u32 addr = (u32)vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0);
++
+ 		/* Check if this is the buffer we're looking for */
+-		if (vb2_dma_contig_plane_dma_addr(&dst_buf->b->vb2_buf, 0)
+-				== dspl_y_addr) {
++		if (addr == dspl_y_addr) {
+ 			list_del(&dst_buf->list);
+ 			ctx->dst_queue_cnt--;
+ 			dst_buf->b->sequence = ctx->sequence;
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 0d4fdd34a710..9ce8b4d79d1f 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -2101,14 +2101,12 @@ static struct dvb_usb_device_properties s6x0_properties = {
+ 	}
+ };
+ 
+-static struct dvb_usb_device_properties *p1100;
+ static const struct dvb_usb_device_description d1100 = {
+ 	"Prof 1100 USB ",
+ 	{&dw2102_table[PROF_1100], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s660;
+ static const struct dvb_usb_device_description d660 = {
+ 	"TeVii S660 USB",
+ 	{&dw2102_table[TEVII_S660], NULL},
+@@ -2127,14 +2125,12 @@ static const struct dvb_usb_device_description d480_2 = {
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *p7500;
+ static const struct dvb_usb_device_description d7500 = {
+ 	"Prof 7500 USB DVB-S2",
+ 	{&dw2102_table[PROF_7500], NULL},
+ 	{NULL},
+ };
+ 
+-static struct dvb_usb_device_properties *s421;
+ static const struct dvb_usb_device_description d421 = {
+ 	"TeVii S421 PCI",
+ 	{&dw2102_table[TEVII_S421], NULL},
+@@ -2334,6 +2330,11 @@ static int dw2102_probe(struct usb_interface *intf,
+ 		const struct usb_device_id *id)
+ {
+ 	int retval = -ENOMEM;
++	struct dvb_usb_device_properties *p1100;
++	struct dvb_usb_device_properties *s660;
++	struct dvb_usb_device_properties *p7500;
++	struct dvb_usb_device_properties *s421;
++
+ 	p1100 = kmemdup(&s6x0_properties,
+ 			sizeof(struct dvb_usb_device_properties), GFP_KERNEL);
+ 	if (!p1100)
+@@ -2402,8 +2403,16 @@ static int dw2102_probe(struct usb_interface *intf,
+ 	    0 == dvb_usb_device_init(intf, &t220_properties,
+ 			 THIS_MODULE, NULL, adapter_nr) ||
+ 	    0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
+-			 THIS_MODULE, NULL, adapter_nr))
++			 THIS_MODULE, NULL, adapter_nr)) {
++
++		/* clean up copied properties */
++		kfree(s421);
++		kfree(p7500);
++		kfree(s660);
++		kfree(p1100);
++
+ 		return 0;
++	}
+ 
+ 	retval = -ENODEV;
+ 	kfree(s421);
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 6c8438311d3b..ff5e41ac4723 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3376,7 +3376,9 @@ void em28xx_free_device(struct kref *ref)
+ 	if (!dev->disconnected)
+ 		em28xx_release_resources(dev);
+ 
+-	kfree(dev->alt_max_pkt_size_isoc);
++	if (dev->ts == PRIMARY_TS)
++		kfree(dev->alt_max_pkt_size_isoc);
++
+ 	kfree(dev);
+ }
+ EXPORT_SYMBOL_GPL(em28xx_free_device);
+diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c
+index f70845e7d8c6..45b24776a695 100644
+--- a/drivers/media/usb/em28xx/em28xx-core.c
++++ b/drivers/media/usb/em28xx/em28xx-core.c
+@@ -655,12 +655,12 @@ int em28xx_capture_start(struct em28xx *dev, int start)
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS1_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS1_CAPTURE_ENABLE);
++						   EM2874_TS1_CAPTURE_ENABLE | EM2874_TS1_FILTER_ENABLE | EM2874_TS1_NULL_DISCARD);
+ 		else
+ 			rc = em28xx_write_reg_bits(dev,
+ 						   EM2874_R5F_TS_ENABLE,
+ 						   start ? EM2874_TS2_CAPTURE_ENABLE : 0x00,
+-						   EM2874_TS2_CAPTURE_ENABLE);
++						   EM2874_TS2_CAPTURE_ENABLE | EM2874_TS2_FILTER_ENABLE | EM2874_TS2_NULL_DISCARD);
+ 	} else {
+ 		/* FIXME: which is the best order? */
+ 		/* video registers are sampled by VREF */
+diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
+index b778d8a1983e..a73faf12f7e4 100644
+--- a/drivers/media/usb/em28xx/em28xx-dvb.c
++++ b/drivers/media/usb/em28xx/em28xx-dvb.c
+@@ -218,7 +218,9 @@ static int em28xx_start_streaming(struct em28xx_dvb *dvb)
+ 		dvb_alt = dev->dvb_alt_isoc;
+ 	}
+ 
+-	usb_set_interface(udev, dev->ifnum, dvb_alt);
++	if (!dev->board.has_dual_ts)
++		usb_set_interface(udev, dev->ifnum, dvb_alt);
++
+ 	rc = em28xx_set_mode(dev, EM28XX_DIGITAL_MODE);
+ 	if (rc < 0)
+ 		return rc;
+diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c
+index 31112f622b88..475e5b3790ed 100644
+--- a/drivers/memory/ti-aemif.c
++++ b/drivers/memory/ti-aemif.c
+@@ -411,7 +411,7 @@ static int aemif_probe(struct platform_device *pdev)
+ 			if (ret < 0)
+ 				goto error;
+ 		}
+-	} else {
++	} else if (pdata) {
+ 		for (i = 0; i < pdata->num_sub_devices; i++) {
+ 			pdata->sub_devices[i].dev.parent = dev;
+ 			ret = platform_device_register(&pdata->sub_devices[i]);
+diff --git a/drivers/mfd/rave-sp.c b/drivers/mfd/rave-sp.c
+index 36dcd98977d6..4f545fdc6ebc 100644
+--- a/drivers/mfd/rave-sp.c
++++ b/drivers/mfd/rave-sp.c
+@@ -776,6 +776,13 @@ static int rave_sp_probe(struct serdev_device *serdev)
+ 		return ret;
+ 
+ 	serdev_device_set_baudrate(serdev, baud);
++	serdev_device_set_flow_control(serdev, false);
++
++	ret = serdev_device_set_parity(serdev, SERDEV_PARITY_NONE);
++	if (ret) {
++		dev_err(dev, "Failed to set parity\n");
++		return ret;
++	}
+ 
+ 	ret = rave_sp_get_status(sp);
+ 	if (ret) {
+diff --git a/drivers/mfd/ti_am335x_tscadc.c b/drivers/mfd/ti_am335x_tscadc.c
+index 47012c0899cd..7a30546880a4 100644
+--- a/drivers/mfd/ti_am335x_tscadc.c
++++ b/drivers/mfd/ti_am335x_tscadc.c
+@@ -209,14 +209,13 @@ static	int ti_tscadc_probe(struct platform_device *pdev)
+ 	 * The TSC_ADC_SS controller design assumes the OCP clock is
+ 	 * at least 6x faster than the ADC clock.
+ 	 */
+-	clk = clk_get(&pdev->dev, "adc_tsc_fck");
++	clk = devm_clk_get(&pdev->dev, "adc_tsc_fck");
+ 	if (IS_ERR(clk)) {
+ 		dev_err(&pdev->dev, "failed to get TSC fck\n");
+ 		err = PTR_ERR(clk);
+ 		goto err_disable_clk;
+ 	}
+ 	clock_rate = clk_get_rate(clk);
+-	clk_put(clk);
+ 	tscadc->clk_div = clock_rate / ADC_CLK;
+ 
+ 	/* TSCADC_CLKDIV needs to be configured to the value minus 1 */
+diff --git a/drivers/misc/mic/scif/scif_api.c b/drivers/misc/mic/scif/scif_api.c
+index 7b2dddcdd46d..42f7a12894d6 100644
+--- a/drivers/misc/mic/scif/scif_api.c
++++ b/drivers/misc/mic/scif/scif_api.c
+@@ -370,11 +370,10 @@ int scif_bind(scif_epd_t epd, u16 pn)
+ 			goto scif_bind_exit;
+ 		}
+ 	} else {
+-		pn = scif_get_new_port();
+-		if (!pn) {
+-			ret = -ENOSPC;
++		ret = scif_get_new_port();
++		if (ret < 0)
+ 			goto scif_bind_exit;
+-		}
++		pn = ret;
+ 	}
+ 
+ 	ep->state = SCIFEP_BOUND;
+@@ -648,13 +647,12 @@ int __scif_connect(scif_epd_t epd, struct scif_port_id *dst, bool non_block)
+ 			err = -EISCONN;
+ 		break;
+ 	case SCIFEP_UNBOUND:
+-		ep->port.port = scif_get_new_port();
+-		if (!ep->port.port) {
+-			err = -ENOSPC;
+-		} else {
+-			ep->port.node = scif_info.nodeid;
+-			ep->conn_async_state = ASYNC_CONN_IDLE;
+-		}
++		err = scif_get_new_port();
++		if (err < 0)
++			break;
++		ep->port.port = err;
++		ep->port.node = scif_info.nodeid;
++		ep->conn_async_state = ASYNC_CONN_IDLE;
+ 		/* Fall through */
+ 	case SCIFEP_BOUND:
+ 		/*
+diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c
+index 5ec3f5a43718..14a5e9da32bd 100644
+--- a/drivers/misc/ti-st/st_kim.c
++++ b/drivers/misc/ti-st/st_kim.c
+@@ -756,14 +756,14 @@ static int kim_probe(struct platform_device *pdev)
+ 	err = gpio_request(kim_gdata->nshutdown, "kim");
+ 	if (unlikely(err)) {
+ 		pr_err(" gpio %d request failed ", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 
+ 	/* Configure nShutdown GPIO as output=0 */
+ 	err = gpio_direction_output(kim_gdata->nshutdown, 0);
+ 	if (unlikely(err)) {
+ 		pr_err(" unable to configure gpio %d", kim_gdata->nshutdown);
+-		return err;
++		goto err_sysfs_group;
+ 	}
+ 	/* get reference of pdev for request_firmware
+ 	 */
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index b01d15ec4c56..3e3e6a8f1abc 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -2668,8 +2668,8 @@ static bool nand_subop_instr_is_valid(const struct nand_subop *subop,
+ 	return subop && instr_idx < subop->ninstrs;
+ }
+ 
+-static int nand_subop_get_start_off(const struct nand_subop *subop,
+-				    unsigned int instr_idx)
++static unsigned int nand_subop_get_start_off(const struct nand_subop *subop,
++					     unsigned int instr_idx)
+ {
+ 	if (instr_idx)
+ 		return 0;
+@@ -2688,12 +2688,12 @@ static int nand_subop_get_start_off(const struct nand_subop *subop,
+  *
+  * Given an address instruction, returns the offset of the first cycle to issue.
+  */
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2710,14 +2710,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_addr_start_off);
+  *
+  * Given an address instruction, returns the number of address cycle to issue.
+  */
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int instr_idx)
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int instr_idx)
+ {
+ 	int start_off, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR)
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    subop->instrs[instr_idx].type != NAND_OP_ADDR_INSTR))
++		return 0;
+ 
+ 	start_off = nand_subop_get_addr_start_off(subop, instr_idx);
+ 
+@@ -2742,12 +2742,12 @@ EXPORT_SYMBOL_GPL(nand_subop_get_num_addr_cyc);
+  *
+  * Given a data instruction, returns the offset to start from.
+  */
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int instr_idx)
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int instr_idx)
+ {
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	return nand_subop_get_start_off(subop, instr_idx);
+ }
+@@ -2764,14 +2764,14 @@ EXPORT_SYMBOL_GPL(nand_subop_get_data_start_off);
+  *
+  * Returns the length of the chunk of data to send/receive.
+  */
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int instr_idx)
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int instr_idx)
+ {
+ 	int start_off = 0, end_off;
+ 
+-	if (!nand_subop_instr_is_valid(subop, instr_idx) ||
+-	    !nand_instr_is_data(&subop->instrs[instr_idx]))
+-		return -EINVAL;
++	if (WARN_ON(!nand_subop_instr_is_valid(subop, instr_idx) ||
++		    !nand_instr_is_data(&subop->instrs[instr_idx])))
++		return 0;
+ 
+ 	start_off = nand_subop_get_data_start_off(subop, instr_idx);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 82ac1d10f239..b4253d0e056b 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3196,7 +3196,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
+ 
+ 	on_each_cpu(mvneta_percpu_enable, pp, true);
+ 	mvneta_start_dev(pp);
+-	mvneta_port_up(pp);
+ 
+ 	netdev_update_features(dev);
+ 
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0c5b68e7da51..9b3167054843 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -22,7 +22,7 @@
+ #include <linux/mdio-mux.h>
+ #include <linux/delay.h>
+ 
+-#define MDIO_PARAM_OFFSET		0x00
++#define MDIO_PARAM_OFFSET		0x23c
+ #define MDIO_PARAM_MIIM_CYCLE		29
+ #define MDIO_PARAM_INTERNAL_SEL		25
+ #define MDIO_PARAM_BUS_ID		22
+@@ -30,20 +30,22 @@
+ #define MDIO_PARAM_PHY_ID		16
+ #define MDIO_PARAM_PHY_DATA		0
+ 
+-#define MDIO_READ_OFFSET		0x04
++#define MDIO_READ_OFFSET		0x240
+ #define MDIO_READ_DATA_MASK		0xffff
+-#define MDIO_ADDR_OFFSET		0x08
++#define MDIO_ADDR_OFFSET		0x244
+ 
+-#define MDIO_CTRL_OFFSET		0x0C
++#define MDIO_CTRL_OFFSET		0x248
+ #define MDIO_CTRL_WRITE_OP		0x1
+ #define MDIO_CTRL_READ_OP		0x2
+ 
+-#define MDIO_STAT_OFFSET		0x10
++#define MDIO_STAT_OFFSET		0x24c
+ #define MDIO_STAT_DONE			1
+ 
+ #define BUS_MAX_ADDR			32
+ #define EXT_BUS_START_ADDR		16
+ 
++#define MDIO_REG_ADDR_SPACE_SIZE	0x250
++
+ struct iproc_mdiomux_desc {
+ 	void *mux_handle;
+ 	void __iomem *base;
+@@ -169,6 +171,14 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ 	md->dev = &pdev->dev;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (res->start & 0xfff) {
++		/* For backward compatibility in case the
++		 * base address is specified with an offset.
++		 */
++		dev_info(&pdev->dev, "fix base address in dt-blob\n");
++		res->start &= ~0xfff;
++		res->end = res->start + MDIO_REG_ADDR_SPACE_SIZE - 1;
++	}
+ 	md->base = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(md->base)) {
+ 		dev_err(&pdev->dev, "failed to ioremap register\n");
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 836e0a47b94a..747c6951b5c1 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -3085,6 +3085,13 @@ static int ath10k_update_channel_list(struct ath10k *ar)
+ 			passive = channel->flags & IEEE80211_CHAN_NO_IR;
+ 			ch->passive = passive;
+ 
++			/* the firmware is ignoring the "radar" flag of the
++			 * channel and is scanning actively using Probe Requests
++			 * on "Radar detection"/DFS channels which are not
++			 * marked as "available"
++			 */
++			ch->passive |= ch->chan_radar;
++
+ 			ch->freq = channel->center_freq;
+ 			ch->band_center_freq1 = channel->center_freq;
+ 			ch->min_power = 0;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index 8c49a26fc571..21eb3a598a86 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -1584,6 +1584,11 @@ static struct sk_buff *ath10k_wmi_tlv_op_gen_init(struct ath10k *ar)
+ 	cfg->keep_alive_pattern_size = __cpu_to_le32(0);
+ 	cfg->max_tdls_concurrent_sleep_sta = __cpu_to_le32(1);
+ 	cfg->max_tdls_concurrent_buffer_sta = __cpu_to_le32(1);
++	cfg->wmi_send_separate = __cpu_to_le32(0);
++	cfg->num_ocb_vdevs = __cpu_to_le32(0);
++	cfg->num_ocb_channels = __cpu_to_le32(0);
++	cfg->num_ocb_schedules = __cpu_to_le32(0);
++	cfg->host_capab = __cpu_to_le32(0);
+ 
+ 	ath10k_wmi_put_host_mem_chunks(ar, chunks);
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+index 3e1e340cd834..1cb93d09b8a9 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+@@ -1670,6 +1670,11 @@ struct wmi_tlv_resource_config {
+ 	__le32 keep_alive_pattern_size;
+ 	__le32 max_tdls_concurrent_sleep_sta;
+ 	__le32 max_tdls_concurrent_buffer_sta;
++	__le32 wmi_send_separate;
++	__le32 num_ocb_vdevs;
++	__le32 num_ocb_channels;
++	__le32 num_ocb_schedules;
++	__le32 host_capab;
+ } __packed;
+ 
+ struct wmi_tlv_init_cmd {
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index e60bea4604e4..fcd9d5eeae72 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -2942,16 +2942,19 @@ void ath9k_hw_apply_txpower(struct ath_hw *ah, struct ath9k_channel *chan,
+ 	struct ath_regulatory *reg = ath9k_hw_regulatory(ah);
+ 	struct ieee80211_channel *channel;
+ 	int chan_pwr, new_pwr;
++	u16 ctl = NO_CTL;
+ 
+ 	if (!chan)
+ 		return;
+ 
++	if (!test)
++		ctl = ath9k_regd_get_ctl(reg, chan);
++
+ 	channel = chan->chan;
+ 	chan_pwr = min_t(int, channel->max_power * 2, MAX_RATE_POWER);
+ 	new_pwr = min_t(int, chan_pwr, reg->power_limit);
+ 
+-	ah->eep_ops->set_txpower(ah, chan,
+-				 ath9k_regd_get_ctl(reg, chan),
++	ah->eep_ops->set_txpower(ah, chan, ctl,
+ 				 get_antenna_gain(ah, chan), new_pwr, test);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 7fdb152be0bb..a249ee747dc9 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -86,7 +86,8 @@ static void ath_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 	struct ieee80211_sta *sta = info->status.status_driver_data[0];
+ 
+-	if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
++	if (info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
++			   IEEE80211_TX_STATUS_EOSP)) {
+ 		ieee80211_tx_status(hw, skb);
+ 		return;
+ 	}
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 8520523b91b4..d8d8443c1c93 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1003,6 +1003,10 @@ static int iwl_pci_resume(struct device *device)
+ 	if (!trans->op_mode)
+ 		return 0;
+ 
++	/* In WOWLAN, let iwl_trans_pcie_d3_resume do the rest of the work */
++	if (test_bit(STATUS_DEVICE_ENABLED, &trans->status))
++		return 0;
++
+ 	/* reconfigure the MSI-X mapping to get the correct IRQ for rfkill */
+ 	iwl_pcie_conf_msix_hw(trans_pcie);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 7229991ae70d..a2a98087eb41 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1539,18 +1539,6 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 
+ 	iwl_pcie_enable_rx_wake(trans, true);
+ 
+-	/*
+-	 * Reconfigure IVAR table in case of MSIX or reset ict table in
+-	 * MSI mode since HW reset erased it.
+-	 * Also enables interrupts - none will happen as
+-	 * the device doesn't know we're waking it up, only when
+-	 * the opmode actually tells it after this call.
+-	 */
+-	iwl_pcie_conf_msix_hw(trans_pcie);
+-	if (!trans_pcie->msix_enabled)
+-		iwl_pcie_reset_ict(trans);
+-	iwl_enable_interrupts(trans);
+-
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+ 		    BIT(trans->cfg->csr->flag_mac_access_req));
+ 	iwl_set_bit(trans, CSR_GP_CNTRL,
+@@ -1568,6 +1556,18 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ 		return ret;
+ 	}
+ 
++	/*
++	 * Reconfigure IVAR table in case of MSIX or reset ict table in
++	 * MSI mode since HW reset erased it.
++	 * Also enables interrupts - none will happen as
++	 * the device doesn't know we're waking it up, only when
++	 * the opmode actually tells it after this call.
++	 */
++	iwl_pcie_conf_msix_hw(trans_pcie);
++	if (!trans_pcie->msix_enabled)
++		iwl_pcie_reset_ict(trans);
++	iwl_enable_interrupts(trans);
++
+ 	iwl_pcie_set_pwr(trans, false);
+ 
+ 	if (!reset) {
+diff --git a/drivers/net/wireless/ti/wlcore/rx.c b/drivers/net/wireless/ti/wlcore/rx.c
+index 0f15696195f8..078a4940bc5c 100644
+--- a/drivers/net/wireless/ti/wlcore/rx.c
++++ b/drivers/net/wireless/ti/wlcore/rx.c
+@@ -59,7 +59,7 @@ static u32 wlcore_rx_get_align_buf_size(struct wl1271 *wl, u32 pkt_len)
+ static void wl1271_rx_status(struct wl1271 *wl,
+ 			     struct wl1271_rx_descriptor *desc,
+ 			     struct ieee80211_rx_status *status,
+-			     u8 beacon)
++			     u8 beacon, u8 probe_rsp)
+ {
+ 	memset(status, 0, sizeof(struct ieee80211_rx_status));
+ 
+@@ -106,6 +106,9 @@ static void wl1271_rx_status(struct wl1271 *wl,
+ 		}
+ 	}
+ 
++	if (beacon || probe_rsp)
++		status->boottime_ns = ktime_get_boot_ns();
++
+ 	if (beacon)
+ 		wlcore_set_pending_regdomain_ch(wl, (u16)desc->channel,
+ 						status->band);
+@@ -191,7 +194,8 @@ static int wl1271_rx_handle_data(struct wl1271 *wl, u8 *data, u32 length,
+ 	if (ieee80211_is_data_present(hdr->frame_control))
+ 		is_data = 1;
+ 
+-	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon);
++	wl1271_rx_status(wl, desc, IEEE80211_SKB_RXCB(skb), beacon,
++			 ieee80211_is_probe_resp(hdr->frame_control));
+ 	wlcore_hw_set_rx_csum(wl, desc, skb);
+ 
+ 	seq_num = (le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_SEQ) >> 4;
+diff --git a/drivers/pci/controller/pcie-mobiveil.c b/drivers/pci/controller/pcie-mobiveil.c
+index cf0aa7cee5b0..a939e8d31735 100644
+--- a/drivers/pci/controller/pcie-mobiveil.c
++++ b/drivers/pci/controller/pcie-mobiveil.c
+@@ -23,6 +23,8 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ 
++#include "../pci.h"
++
+ /* register offsets and bit positions */
+ 
+ /*
+@@ -130,7 +132,7 @@ struct mobiveil_pcie {
+ 	void __iomem *config_axi_slave_base;	/* endpoint config base */
+ 	void __iomem *csr_axi_slave_base;	/* root port config base */
+ 	void __iomem *apb_csr_base;	/* MSI register base */
+-	void __iomem *pcie_reg_base;	/* Physical PCIe Controller Base */
++	phys_addr_t pcie_reg_base;	/* Physical PCIe Controller Base */
+ 	struct irq_domain *intx_domain;
+ 	raw_spinlock_t intx_mask_lock;
+ 	int irq;
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index 47cd0c037433..f96af1467984 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -14,6 +14,8 @@
+ #include <linux/poll.h>
+ #include <linux/wait.h>
+ 
++#include <linux/nospec.h>
++
+ MODULE_DESCRIPTION("Microsemi Switchtec(tm) PCIe Management Driver");
+ MODULE_VERSION("0.1");
+ MODULE_LICENSE("GPL");
+@@ -909,6 +911,8 @@ static int ioctl_port_to_pff(struct switchtec_dev *stdev,
+ 	default:
+ 		if (p.port > ARRAY_SIZE(pcfg->dsp_pff_inst_id))
+ 			return -EINVAL;
++		p.port = array_index_nospec(p.port,
++					ARRAY_SIZE(pcfg->dsp_pff_inst_id) + 1);
+ 		p.pff = ioread32(&pcfg->dsp_pff_inst_id[p.port - 1]);
+ 		break;
+ 	}
+diff --git a/drivers/pinctrl/berlin/berlin.c b/drivers/pinctrl/berlin/berlin.c
+index d6d183e9db17..b5903fffb3d0 100644
+--- a/drivers/pinctrl/berlin/berlin.c
++++ b/drivers/pinctrl/berlin/berlin.c
+@@ -216,10 +216,8 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 	}
+ 
+ 	/* we will reallocate later */
+-	pctrl->functions = devm_kcalloc(&pdev->dev,
+-					max_functions,
+-					sizeof(*pctrl->functions),
+-					GFP_KERNEL);
++	pctrl->functions = kcalloc(max_functions,
++				   sizeof(*pctrl->functions), GFP_KERNEL);
+ 	if (!pctrl->functions)
+ 		return -ENOMEM;
+ 
+@@ -257,8 +255,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 				function++;
+ 			}
+ 
+-			if (!found)
++			if (!found) {
++				kfree(pctrl->functions);
+ 				return -EINVAL;
++			}
+ 
+ 			if (!function->groups) {
+ 				function->groups =
+@@ -267,8 +267,10 @@ static int berlin_pinctrl_build_state(struct platform_device *pdev)
+ 						     sizeof(char *),
+ 						     GFP_KERNEL);
+ 
+-				if (!function->groups)
++				if (!function->groups) {
++					kfree(pctrl->functions);
+ 					return -ENOMEM;
++				}
+ 			}
+ 
+ 			groups = function->groups;
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx.c b/drivers/pinctrl/freescale/pinctrl-imx.c
+index 1c6bb15579e1..b04edc22dad7 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx.c
+@@ -383,7 +383,7 @@ static void imx_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > pctldev->num_groups)
++	if (group >= pctldev->num_groups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 04ae139671c8..b91db89eb924 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -552,7 +552,8 @@ static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)
+ 		/* Each status bit covers four pins */
+ 		for (i = 0; i < 4; i++) {
+ 			regval = readl(regs + i);
+-			if (!(regval & PIN_IRQ_PENDING))
++			if (!(regval & PIN_IRQ_PENDING) ||
++			    !(regval & BIT(INTERRUPT_MASK_OFF)))
+ 				continue;
+ 			irq = irq_find_mapping(gc->irq.domain, irqnr + i);
+ 			generic_handle_irq(irq);
+diff --git a/drivers/regulator/tps65217-regulator.c b/drivers/regulator/tps65217-regulator.c
+index fc12badf3805..d84fab616abf 100644
+--- a/drivers/regulator/tps65217-regulator.c
++++ b/drivers/regulator/tps65217-regulator.c
+@@ -232,6 +232,8 @@ static int tps65217_regulator_probe(struct platform_device *pdev)
+ 	tps->strobes = devm_kcalloc(&pdev->dev,
+ 				    TPS65217_NUM_REGULATOR, sizeof(u8),
+ 				    GFP_KERNEL);
++	if (!tps->strobes)
++		return -ENOMEM;
+ 
+ 	platform_set_drvdata(pdev, tps);
+ 
+diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
+index b714a543a91d..8122807db380 100644
+--- a/drivers/rpmsg/rpmsg_core.c
++++ b/drivers/rpmsg/rpmsg_core.c
+@@ -15,6 +15,7 @@
+ #include <linux/module.h>
+ #include <linux/rpmsg.h>
+ #include <linux/of_device.h>
++#include <linux/pm_domain.h>
+ #include <linux/slab.h>
+ 
+ #include "rpmsg_internal.h"
+@@ -449,6 +450,10 @@ static int rpmsg_dev_probe(struct device *dev)
+ 	struct rpmsg_endpoint *ept = NULL;
+ 	int err;
+ 
++	err = dev_pm_domain_attach(dev, true);
++	if (err)
++		goto out;
++
+ 	if (rpdrv->callback) {
+ 		strncpy(chinfo.name, rpdev->id.name, RPMSG_NAME_SIZE);
+ 		chinfo.src = rpdev->src;
+@@ -490,6 +495,8 @@ static int rpmsg_dev_remove(struct device *dev)
+ 
+ 	rpdrv->remove(rpdev);
+ 
++	dev_pm_domain_detach(dev, true);
++
+ 	if (rpdev->ept)
+ 		rpmsg_destroy_ept(rpdev->ept);
+ 
+diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
+index 99ba4a770406..27521fc3ef5a 100644
+--- a/drivers/scsi/3w-9xxx.c
++++ b/drivers/scsi/3w-9xxx.c
+@@ -2038,6 +2038,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twa_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x25, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2060,6 +2061,7 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = ioremap(mem_addr, mem_len);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x35, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -2067,8 +2069,10 @@ static int twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (twa_reset_sequence(tw_dev, 0))
++	if (twa_reset_sequence(tw_dev, 0)) {
++		retval = -ENOMEM;
+ 		goto out_iounmap;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	if ((pdev->device == PCI_DEVICE_ID_3WARE_9650SE) ||
+diff --git a/drivers/scsi/3w-sas.c b/drivers/scsi/3w-sas.c
+index cf9f2a09b47d..40c1e6e64f58 100644
+--- a/drivers/scsi/3w-sas.c
++++ b/drivers/scsi/3w-sas.c
+@@ -1594,6 +1594,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (twl_initialize_device_extension(tw_dev)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1a, "Failed to initialize device extension");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -1608,6 +1609,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_iomap(pdev, 1, 0);
+ 	if (!tw_dev->base_addr) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to ioremap");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+@@ -1617,6 +1619,7 @@ static int twl_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	/* Initialize the card */
+ 	if (twl_reset_sequence(tw_dev, 0)) {
+ 		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Controller reset failed during probe");
++		retval = -ENOMEM;
+ 		goto out_iounmap;
+ 	}
+ 
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index f6179e3d6953..961ea6f7def8 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2280,6 +2280,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 
+ 	if (tw_initialize_device_extension(tw_dev)) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to initialize device extension.");
++		retval = -ENOMEM;
+ 		goto out_free_device_extension;
+ 	}
+ 
+@@ -2294,6 +2295,7 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	tw_dev->base_addr = pci_resource_start(pdev, 0);
+ 	if (!tw_dev->base_addr) {
+ 		printk(KERN_WARNING "3w-xxxx: Failed to get io address.");
++		retval = -ENOMEM;
+ 		goto out_release_mem_region;
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 20b249a649dd..902004dc8dc7 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -672,7 +672,7 @@ struct lpfc_hba {
+ #define LS_NPIV_FAB_SUPPORTED 0x2	/* Fabric supports NPIV */
+ #define LS_IGNORE_ERATT       0x4	/* intr handler should ignore ERATT */
+ #define LS_MDS_LINK_DOWN      0x8	/* MDS Diagnostics Link Down */
+-#define LS_MDS_LOOPBACK      0x16	/* MDS Diagnostics Link Up (Loopback) */
++#define LS_MDS_LOOPBACK      0x10	/* MDS Diagnostics Link Up (Loopback) */
+ 
+ 	uint32_t hba_flag;	/* hba generic flags */
+ #define HBA_ERATT_HANDLED	0x1 /* This flag is set when eratt handled */
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 76a5a99605aa..d723fd1d7b26 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2687,7 +2687,7 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 	struct lpfc_nvme_rport *oldrport;
+ 	struct nvme_fc_remote_port *remote_port;
+ 	struct nvme_fc_port_info rpinfo;
+-	struct lpfc_nodelist *prev_ndlp;
++	struct lpfc_nodelist *prev_ndlp = NULL;
+ 
+ 	lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NVME_DISC,
+ 			 "6006 Register NVME PORT. DID x%06x nlptype x%x\n",
+@@ -2736,23 +2736,29 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		spin_unlock_irq(&vport->phba->hbalock);
+ 		rport = remote_port->private;
+ 		if (oldrport) {
++			/* New remoteport record does not guarantee valid
++			 * host private memory area.
++			 */
++			prev_ndlp = oldrport->ndlp;
+ 			if (oldrport == remote_port->private) {
+-				/* Same remoteport.  Just reuse. */
++				/* Same remoteport - ndlp should match.
++				 * Just reuse.
++				 */
+ 				lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+ 						 LOG_NVME_DISC,
+ 						 "6014 Rebinding lport to "
+ 						 "remoteport %p wwpn 0x%llx, "
+-						 "Data: x%x x%x %p x%x x%06x\n",
++						 "Data: x%x x%x %p %p x%x x%06x\n",
+ 						 remote_port,
+ 						 remote_port->port_name,
+ 						 remote_port->port_id,
+ 						 remote_port->port_role,
++						 prev_ndlp,
+ 						 ndlp,
+ 						 ndlp->nlp_type,
+ 						 ndlp->nlp_DID);
+ 				return 0;
+ 			}
+-			prev_ndlp = rport->ndlp;
+ 
+ 			/* Sever the ndlp<->rport association
+ 			 * before dropping the ndlp ref from
+@@ -2786,13 +2792,13 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 		lpfc_printf_vlog(vport, KERN_INFO,
+ 				 LOG_NVME_DISC | LOG_NODE,
+ 				 "6022 Binding new rport to "
+-				 "lport %p Remoteport %p  WWNN 0x%llx, "
++				 "lport %p Remoteport %p rport %p WWNN 0x%llx, "
+ 				 "Rport WWPN 0x%llx DID "
+-				 "x%06x Role x%x, ndlp %p\n",
+-				 lport, remote_port,
++				 "x%06x Role x%x, ndlp %p prev_ndlp %p\n",
++				 lport, remote_port, rport,
+ 				 rpinfo.node_name, rpinfo.port_name,
+ 				 rpinfo.port_id, rpinfo.port_role,
+-				 ndlp);
++				 ndlp, prev_ndlp);
+ 	} else {
+ 		lpfc_printf_vlog(vport, KERN_ERR,
+ 				 LOG_NVME_DISC | LOG_NODE,
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index ec550ee0108e..75d34def2361 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -1074,9 +1074,12 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case PDS_PLOGI_COMPLETE:
+ 	case PDS_PRLI_PENDING:
+ 	case PDS_PRLI2_PENDING:
+-		ql_dbg(ql_dbg_disc, vha, 0x20d5, "%s %d %8phC relogin needed\n",
+-		    __func__, __LINE__, fcport->port_name);
+-		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+		/* Set discovery state back to GNL for relogin attempt */
++		if (qla_dual_mode_enabled(vha) ||
++		    qla_ini_mode_enabled(vha)) {
++			fcport->disc_state = DSC_GNL;
++			set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++		}
+ 		return;
+ 	case PDS_LOGO_PENDING:
+ 	case PDS_PORT_UNAVAILABLE:
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 1027b0cb7fa3..6dc1b1bd8069 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -982,8 +982,9 @@ void qlt_free_session_done(struct work_struct *work)
+ 
+ 			logo.id = sess->d_id;
+ 			logo.cmd_count = 0;
++			if (!own)
++				qlt_send_first_logo(vha, &logo);
+ 			sess->send_els_logo = 0;
+-			qlt_send_first_logo(vha, &logo);
+ 		}
+ 
+ 		if (sess->logout_on_delete && sess->loop_id != FC_NO_LOOP_ID) {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 731ca0d8520a..9f3c263756a8 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -571,6 +571,15 @@ qla27xx_fwdt_entry_t268(struct scsi_qla_host *vha,
+ 		}
+ 		break;
+ 
++	case T268_BUF_TYPE_REQ_MIRROR:
++	case T268_BUF_TYPE_RSP_MIRROR:
++		/*
++		 * Mirror pointers are not implemented in the
++		 * driver, instead shadow pointers are used by
+			 * the driver. Skip these entries.
++		 */
++		qla27xx_skip_entry(ent, buf);
++		break;
+ 	default:
+ 		ql_dbg(ql_dbg_async, vha, 0xd02b,
+ 		    "%s: unknown buffer %x\n", __func__, ent->t268.buf_type);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index ee5081ba5313..1fc87a3260cc 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -316,6 +316,7 @@ void __transport_register_session(
+ {
+ 	const struct target_core_fabric_ops *tfo = se_tpg->se_tpg_tfo;
+ 	unsigned char buf[PR_REG_ISID_LEN];
++	unsigned long flags;
+ 
+ 	se_sess->se_tpg = se_tpg;
+ 	se_sess->fabric_sess_ptr = fabric_sess_ptr;
+@@ -352,7 +353,7 @@ void __transport_register_session(
+ 			se_sess->sess_bin_isid = get_unaligned_be64(&buf[0]);
+ 		}
+ 
+-		spin_lock_irq(&se_nacl->nacl_sess_lock);
++		spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags);
+ 		/*
+ 		 * The se_nacl->nacl_sess pointer will be set to the
+ 		 * last active I_T Nexus for each struct se_node_acl.
+@@ -361,7 +362,7 @@ void __transport_register_session(
+ 
+ 		list_add_tail(&se_sess->sess_acl_list,
+ 			      &se_nacl->acl_sess_list);
+-		spin_unlock_irq(&se_nacl->nacl_sess_lock);
++		spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags);
+ 	}
+ 	list_add_tail(&se_sess->sess_list, &se_tpg->tpg_sess_list);
+ 
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index d8dc3d22051f..b8dc5efc606b 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1745,9 +1745,11 @@ static int tcmu_configure_device(struct se_device *dev)
+ 
+ 	info = &udev->uio_info;
+ 
++	mutex_lock(&udev->cmdr_lock);
+ 	udev->data_bitmap = kcalloc(BITS_TO_LONGS(udev->max_blocks),
+ 				    sizeof(unsigned long),
+ 				    GFP_KERNEL);
++	mutex_unlock(&udev->cmdr_lock);
+ 	if (!udev->data_bitmap) {
+ 		ret = -ENOMEM;
+ 		goto err_bitmap_alloc;
+@@ -1957,7 +1959,7 @@ static match_table_t tokens = {
+ 	{Opt_hw_block_size, "hw_block_size=%u"},
+ 	{Opt_hw_max_sectors, "hw_max_sectors=%u"},
+ 	{Opt_nl_reply_supported, "nl_reply_supported=%d"},
+-	{Opt_max_data_area_mb, "max_data_area_mb=%u"},
++	{Opt_max_data_area_mb, "max_data_area_mb=%d"},
+ 	{Opt_err, NULL}
+ };
+ 
+@@ -1985,13 +1987,48 @@ static int tcmu_set_dev_attrib(substring_t *arg, u32 *dev_attrib)
+ 	return 0;
+ }
+ 
++static int tcmu_set_max_blocks_param(struct tcmu_dev *udev, substring_t *arg)
++{
++	int val, ret;
++
++	ret = match_int(arg, &val);
++	if (ret < 0) {
++		pr_err("match_int() failed for max_data_area_mb=. Error %d.\n",
++		       ret);
++		return ret;
++	}
++
++	if (val <= 0) {
++		pr_err("Invalid max_data_area %d.\n", val);
++		return -EINVAL;
++	}
++
++	mutex_lock(&udev->cmdr_lock);
++	if (udev->data_bitmap) {
++		pr_err("Cannot set max_data_area_mb after it has been enabled.\n");
++		ret = -EINVAL;
++		goto unlock;
++	}
++
++	udev->max_blocks = TCMU_MBS_TO_BLOCKS(val);
++	if (udev->max_blocks > tcmu_global_max_blocks) {
++		pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
++		       val, TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
++		udev->max_blocks = tcmu_global_max_blocks;
++	}
++
++unlock:
++	mutex_unlock(&udev->cmdr_lock);
++	return ret;
++}
++
+ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 		const char *page, ssize_t count)
+ {
+ 	struct tcmu_dev *udev = TCMU_DEV(dev);
+ 	char *orig, *ptr, *opts, *arg_p;
+ 	substring_t args[MAX_OPT_ARGS];
+-	int ret = 0, token, tmpval;
++	int ret = 0, token;
+ 
+ 	opts = kstrdup(page, GFP_KERNEL);
+ 	if (!opts)
+@@ -2044,37 +2081,7 @@ static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+ 				pr_err("kstrtoint() failed for nl_reply_supported=\n");
+ 			break;
+ 		case Opt_max_data_area_mb:
+-			if (dev->export_count) {
+-				pr_err("Unable to set max_data_area_mb while exports exist\n");
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			arg_p = match_strdup(&args[0]);
+-			if (!arg_p) {
+-				ret = -ENOMEM;
+-				break;
+-			}
+-			ret = kstrtoint(arg_p, 0, &tmpval);
+-			kfree(arg_p);
+-			if (ret < 0) {
+-				pr_err("kstrtoint() failed for max_data_area_mb=\n");
+-				break;
+-			}
+-
+-			if (tmpval <= 0) {
+-				pr_err("Invalid max_data_area %d\n", tmpval);
+-				ret = -EINVAL;
+-				break;
+-			}
+-
+-			udev->max_blocks = TCMU_MBS_TO_BLOCKS(tmpval);
+-			if (udev->max_blocks > tcmu_global_max_blocks) {
+-				pr_err("%d is too large. Adjusting max_data_area_mb to global limit of %u\n",
+-				       tmpval,
+-				       TCMU_BLOCKS_TO_MBS(tcmu_global_max_blocks));
+-				udev->max_blocks = tcmu_global_max_blocks;
+-			}
++			ret = tcmu_set_max_blocks_param(udev, &args[0]);
+ 			break;
+ 		default:
+ 			break;
+diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
+index 45fb284d4c11..e77e63070e99 100644
+--- a/drivers/thermal/rcar_thermal.c
++++ b/drivers/thermal/rcar_thermal.c
+@@ -598,7 +598,7 @@ static int rcar_thermal_probe(struct platform_device *pdev)
+ 			enr_bits |= 3 << (i * 8);
+ 	}
+ 
+-	if (enr_bits)
++	if (common->base && enr_bits)
+ 		rcar_thermal_common_write(common, ENR, enr_bits);
+ 
+ 	dev_info(dev, "%d sensor probed\n", i);
+diff --git a/drivers/thermal/thermal_hwmon.c b/drivers/thermal/thermal_hwmon.c
+index 11278836ed12..0bd47007c57f 100644
+--- a/drivers/thermal/thermal_hwmon.c
++++ b/drivers/thermal/thermal_hwmon.c
+@@ -142,6 +142,7 @@ int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)
+ 
+ 	INIT_LIST_HEAD(&hwmon->tz_list);
+ 	strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH);
++	strreplace(hwmon->type, '-', '_');
+ 	hwmon->device = hwmon_device_register_with_info(NULL, hwmon->type,
+ 							hwmon, NULL, NULL);
+ 	if (IS_ERR(hwmon->device)) {
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index bdd17d2aaafd..b121d8f8f3d7 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -1881,7 +1881,7 @@ static __init int register_PCI(int i, struct pci_dev *dev)
+ 	ByteIO_t UPCIRingInd = 0;
+ 
+ 	if (!dev || !pci_match_id(rocket_pci_ids, dev) ||
+-	    pci_enable_device(dev))
++	    pci_enable_device(dev) || i >= NUM_BOARDS)
+ 		return 0;
+ 
+ 	rcktpt_io_addr[i] = pci_resource_start(dev, 0);
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index f68c1121fa7c..6c58ad1abd7e 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -622,6 +622,12 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 	ssize_t retval;
+ 	s32 irq_on;
+ 
++	if (count != sizeof(s32))
++		return -EINVAL;
++
++	if (copy_from_user(&irq_on, buf, count))
++		return -EFAULT;
++
+ 	mutex_lock(&idev->info_lock);
+ 	if (!idev->info) {
+ 		retval = -EINVAL;
+@@ -633,21 +639,11 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
+ 		goto out;
+ 	}
+ 
+-	if (count != sizeof(s32)) {
+-		retval = -EINVAL;
+-		goto out;
+-	}
+-
+ 	if (!idev->info->irqcontrol) {
+ 		retval = -ENOSYS;
+ 		goto out;
+ 	}
+ 
+-	if (copy_from_user(&irq_on, buf, count)) {
+-		retval = -EFAULT;
+-		goto out;
+-	}
+-
+ 	retval = idev->info->irqcontrol(idev->info, irq_on);
+ 
+ out:
+@@ -955,8 +951,6 @@ int __uio_register_device(struct module *owner,
+ 	if (ret)
+ 		goto err_uio_dev_add_attributes;
+ 
+-	info->uio_dev = idev;
+-
+ 	if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
+ 		/*
+ 		 * Note that we deliberately don't use devm_request_irq
+@@ -972,6 +966,7 @@ int __uio_register_device(struct module *owner,
+ 			goto err_request_irq;
+ 	}
+ 
++	info->uio_dev = idev;
+ 	return 0;
+ 
+ err_request_irq:
+diff --git a/fs/autofs/autofs_i.h b/fs/autofs/autofs_i.h
+index 9400a9f6318a..5057b9f0f846 100644
+--- a/fs/autofs/autofs_i.h
++++ b/fs/autofs/autofs_i.h
+@@ -26,6 +26,7 @@
+ #include <linux/list.h>
+ #include <linux/completion.h>
+ #include <linux/file.h>
++#include <linux/magic.h>
+ 
+ /* This is the range of ioctl() numbers we claim as ours */
+ #define AUTOFS_IOC_FIRST     AUTOFS_IOC_READY
+@@ -124,7 +125,8 @@ struct autofs_sb_info {
+ 
+ static inline struct autofs_sb_info *autofs_sbi(struct super_block *sb)
+ {
+-	return (struct autofs_sb_info *)(sb->s_fs_info);
++	return sb->s_magic != AUTOFS_SUPER_MAGIC ?
++		NULL : (struct autofs_sb_info *)(sb->s_fs_info);
+ }
+ 
+ static inline struct autofs_info *autofs_dentry_ino(struct dentry *dentry)
+diff --git a/fs/autofs/inode.c b/fs/autofs/inode.c
+index b51980fc274e..846c052569dd 100644
+--- a/fs/autofs/inode.c
++++ b/fs/autofs/inode.c
+@@ -10,7 +10,6 @@
+ #include <linux/seq_file.h>
+ #include <linux/pagemap.h>
+ #include <linux/parser.h>
+-#include <linux/magic.h>
+ 
+ #include "autofs_i.h"
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 53cac20650d8..4ab0bccfa281 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5935,7 +5935,7 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * root: the root of the parent directory
+  * rsv: block reservation
+  * items: the number of items that we need do reservation
+- * qgroup_reserved: used to return the reserved size in qgroup
++ * use_global_rsv: allow fallback to the global block reservation
+  *
+  * This function is used to reserve the space for snapshot/subvolume
+  * creation and deletion. Those operations are different with the
+@@ -5945,10 +5945,10 @@ void btrfs_trans_release_chunk_metadata(struct btrfs_trans_handle *trans)
+  * the space reservation mechanism in start_transaction().
+  */
+ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+-				     struct btrfs_block_rsv *rsv,
+-				     int items,
++				     struct btrfs_block_rsv *rsv, int items,
+ 				     bool use_global_rsv)
+ {
++	u64 qgroup_num_bytes = 0;
+ 	u64 num_bytes;
+ 	int ret;
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+@@ -5956,12 +5956,11 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 
+ 	if (test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {
+ 		/* One for parent inode, two for dir entries */
+-		num_bytes = 3 * fs_info->nodesize;
+-		ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
++		qgroup_num_bytes = 3 * fs_info->nodesize;
++		ret = btrfs_qgroup_reserve_meta_prealloc(root,
++				qgroup_num_bytes, true);
+ 		if (ret)
+ 			return ret;
+-	} else {
+-		num_bytes = 0;
+ 	}
+ 
+ 	num_bytes = btrfs_calc_trans_metadata_size(fs_info, items);
+@@ -5973,8 +5972,8 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ 	if (ret == -ENOSPC && use_global_rsv)
+ 		ret = btrfs_block_rsv_migrate(global_rsv, rsv, num_bytes, 1);
+ 
+-	if (ret && num_bytes)
+-		btrfs_qgroup_free_meta_prealloc(root, num_bytes);
++	if (ret && qgroup_num_bytes)
++		btrfs_qgroup_free_meta_prealloc(root, qgroup_num_bytes);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index b077544b5232..f3d6be0c657b 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3463,6 +3463,25 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,
+ 
+ 		same_lock_start = min_t(u64, loff, dst_loff);
+ 		same_lock_len = max_t(u64, loff, dst_loff) + len - same_lock_start;
++	} else {
++		/*
++		 * If the source and destination inodes are different, the
++		 * source's range end offset matches the source's i_size, that
++		 * i_size is not a multiple of the sector size, and the
++		 * destination range does not go past the destination's i_size,
++		 * we must round down the length to the nearest sector size
+		 * multiple. If we don't do this adjustment we end up replacing
++		 * with zeroes the bytes in the range that starts at the
++		 * deduplication range's end offset and ends at the next sector
++		 * size multiple.
++		 */
++		if (loff + olen == i_size_read(src) &&
++		    dst_loff + len < i_size_read(dst)) {
++			const u64 sz = BTRFS_I(src)->root->fs_info->sectorsize;
++
++			len = round_down(i_size_read(src), sz) - loff;
++			olen = len;
++		}
+ 	}
+ 
+ again:
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 9d02563b2147..44043f809a3c 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2523,7 +2523,7 @@ cifs_setup_ipc(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	if (tcon == NULL)
+ 		return -ENOMEM;
+ 
+-	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->serverName);
++	snprintf(unc, sizeof(unc), "\\\\%s\\IPC$", ses->server->hostname);
+ 
+ 	/* cannot fail */
+ 	nls_codepage = load_nls_default();
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 9051b9dfd590..d279fa5472db 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -469,6 +469,8 @@ cifs_sfu_type(struct cifs_fattr *fattr, const char *path,
+ 	oparms.cifs_sb = cifs_sb;
+ 	oparms.desired_access = GENERIC_READ;
+ 	oparms.create_options = CREATE_NOT_DIR;
++	if (backup_cred(cifs_sb))
++		oparms.create_options |= CREATE_OPEN_BACKUP_INTENT;
+ 	oparms.disposition = FILE_OPEN;
+ 	oparms.path = path;
+ 	oparms.fid = &fid;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ee6c4a952ce9..5ecbc99f46e4 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -626,7 +626,10 @@ smb2_is_path_accessible(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -775,7 +778,10 @@ smb2_query_eas(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -854,7 +860,10 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_WRITE_EA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1460,7 +1469,10 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES | FILE_READ_DATA;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = fid;
+ 	oparms.reconnect = false;
+ 
+@@ -1735,7 +1747,10 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 	oparms.tcon = tcon;
+ 	oparms.desired_access = FILE_READ_ATTRIBUTES;
+ 	oparms.disposition = FILE_OPEN;
+-	oparms.create_options = 0;
++	if (backup_cred(cifs_sb))
++		oparms.create_options = CREATE_OPEN_BACKUP_INTENT;
++	else
++		oparms.create_options = 0;
+ 	oparms.fid = &fid;
+ 	oparms.reconnect = false;
+ 
+@@ -3463,7 +3478,7 @@ struct smb_version_values smb21_values = {
+ struct smb_version_values smb3any_values = {
+ 	.version_string = SMB3ANY_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3484,7 +3499,7 @@ struct smb_version_values smb3any_values = {
+ struct smb_version_values smbdefault_values = {
+ 	.version_string = SMBDEFAULT_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID, /* doesn't matter, send protocol array */
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3505,7 +3520,7 @@ struct smb_version_values smbdefault_values = {
+ struct smb_version_values smb30_values = {
+ 	.version_string = SMB30_VERSION_STRING,
+ 	.protocol_id = SMB30_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3526,7 +3541,7 @@ struct smb_version_values smb30_values = {
+ struct smb_version_values smb302_values = {
+ 	.version_string = SMB302_VERSION_STRING,
+ 	.protocol_id = SMB302_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+@@ -3548,7 +3563,7 @@ struct smb_version_values smb302_values = {
+ struct smb_version_values smb311_values = {
+ 	.version_string = SMB311_VERSION_STRING,
+ 	.protocol_id = SMB311_PROT_ID,
+-	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION,
++	.req_capabilities = SMB2_GLOBAL_CAP_DFS | SMB2_GLOBAL_CAP_LEASING | SMB2_GLOBAL_CAP_LARGE_MTU | SMB2_GLOBAL_CAP_PERSISTENT_HANDLES | SMB2_GLOBAL_CAP_ENCRYPTION | SMB2_GLOBAL_CAP_DIRECTORY_LEASING,
+ 	.large_lock_type = 0,
+ 	.exclusive_lock_type = SMB2_LOCKFLAG_EXCLUSIVE_LOCK,
+ 	.shared_lock_type = SMB2_LOCKFLAG_SHARED_LOCK,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 44e511a35559..82be1dfeca33 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2179,6 +2179,9 @@ SMB2_open(const unsigned int xid, struct cifs_open_parms *oparms, __le16 *path,
+ 	if (!(server->capabilities & SMB2_GLOBAL_CAP_LEASING) ||
+ 	    *oplock == SMB2_OPLOCK_LEVEL_NONE)
+ 		req->RequestedOplockLevel = *oplock;
++	else if (!(server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING) &&
++		  (oparms->create_options & CREATE_NOT_FILE))
++		req->RequestedOplockLevel = *oplock; /* no srv lease support */
+ 	else {
+ 		rc = add_lease_context(server, iov, &n_iov,
+ 				       oparms->fid->lease_key, oplock);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 4d8b1de83143..b6f2dc8163e1 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1680,18 +1680,20 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
+ 		sbi->total_valid_block_count -= diff;
+ 		if (!*count) {
+ 			spin_unlock(&sbi->stat_lock);
+-			percpu_counter_sub(&sbi->alloc_valid_block_count, diff);
+ 			goto enospc;
+ 		}
+ 	}
+ 	spin_unlock(&sbi->stat_lock);
+ 
+-	if (unlikely(release))
++	if (unlikely(release)) {
++		percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 		dquot_release_reservation_block(inode, release);
++	}
+ 	f2fs_i_blocks_write(inode, *count, true, true);
+ 	return 0;
+ 
+ enospc:
++	percpu_counter_sub(&sbi->alloc_valid_block_count, release);
+ 	dquot_release_reservation_block(inode, release);
+ 	return -ENOSPC;
+ }
+@@ -1954,8 +1956,13 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping,
+ 						pgoff_t index, bool for_write)
+ {
+ #ifdef CONFIG_F2FS_FAULT_INJECTION
+-	struct page *page = find_lock_page(mapping, index);
++	struct page *page;
+ 
++	if (!for_write)
++		page = find_get_page_flags(mapping, index,
++						FGP_LOCK | FGP_ACCESSED);
++	else
++		page = find_lock_page(mapping, index);
+ 	if (page)
+ 		return page;
+ 
+@@ -2812,7 +2819,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
+ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
+ 			struct writeback_control *wbc,
+ 			bool do_balance, enum iostat_type io_type);
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
+ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
+ void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
+ void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 3ffa341cf586..4c9f9bcbd2d9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1882,7 +1882,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct super_block *sb = sbi->sb;
+ 	__u32 in;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9093be6e7a7d..37ab2d10a872 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -986,7 +986,13 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 			goto next;
+ 
+ 		sum = page_address(sum_page);
+-		f2fs_bug_on(sbi, type != GET_SUM_TYPE((&sum->footer)));
++		if (type != GET_SUM_TYPE((&sum->footer))) {
++			f2fs_msg(sbi->sb, KERN_ERR, "Inconsistent segment (%u) "
++				"type [%d, %d] in SSA and SIT",
++				segno, type, GET_SUM_TYPE((&sum->footer)));
++			set_sbi_flag(sbi, SBI_NEED_FSCK);
++			goto next;
++		}
+ 
+ 		/*
+ 		 * this is to avoid deadlock:
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 043830be5662..2bcb2d36f024 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -130,6 +130,16 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
+ 	if (err)
+ 		return err;
+ 
++	if (unlikely(dn->data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(dn);
++		set_sbi_flag(fio.sbi, SBI_NEED_FSCK);
++		f2fs_msg(fio.sbi->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dn->inode->i_ino, dn->data_blkaddr);
++		return -EINVAL;
++	}
++
+ 	f2fs_bug_on(F2FS_P_SB(page), PageWriteback(page));
+ 
+ 	f2fs_do_read_inline_data(page, dn->inode_page);
+@@ -363,6 +373,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
+ 	if (err)
+ 		goto out;
+ 
++	if (unlikely(dn.data_blkaddr != NEW_ADDR)) {
++		f2fs_put_dnode(&dn);
++		set_sbi_flag(F2FS_P_SB(page), SBI_NEED_FSCK);
++		f2fs_msg(F2FS_P_SB(page)->sb, KERN_WARNING,
++			"%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, "
++			"run fsck to fix.",
++			__func__, dir->i_ino, dn.data_blkaddr);
++		err = -EINVAL;
++		goto out;
++	}
++
+ 	f2fs_wait_on_page_writeback(page, DATA, true);
+ 
+ 	dentry_blk = page_address(page);
+@@ -477,6 +498,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
+ 	return 0;
+ recover:
+ 	lock_page(ipage);
++	f2fs_wait_on_page_writeback(ipage, NODE, true);
+ 	memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir));
+ 	f2fs_i_depth_write(dir, 0);
+ 	f2fs_i_size_write(dir, MAX_INLINE_DATA(dir));
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index f121c864f4c0..cf0f944fcaea 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -197,6 +197,16 @@ static bool sanity_check_inode(struct inode *inode)
+ 			__func__, inode->i_ino);
+ 		return false;
+ 	}
++
++	if (f2fs_has_extra_attr(inode) &&
++			!f2fs_sb_has_extra_attr(sbi->sb)) {
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		f2fs_msg(sbi->sb, KERN_WARNING,
++			"%s: inode (ino=%lx) is with extra_attr, "
++			"but extra_attr feature is off",
++			__func__, inode->i_ino);
++		return false;
++	}
+ 	return true;
+ }
+ 
+@@ -249,6 +259,11 @@ static int do_read_inode(struct inode *inode)
+ 
+ 	get_inline_info(inode, ri);
+ 
++	if (!sanity_check_inode(inode)) {
++		f2fs_put_page(node_page, 1);
++		return -EINVAL;
++	}
++
+ 	fi->i_extra_isize = f2fs_has_extra_attr(inode) ?
+ 					le16_to_cpu(ri->i_extra_isize) : 0;
+ 
+@@ -330,10 +345,6 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
+ 	ret = do_read_inode(inode);
+ 	if (ret)
+ 		goto bad_inode;
+-	if (!sanity_check_inode(inode)) {
+-		ret = -EINVAL;
+-		goto bad_inode;
+-	}
+ make_now:
+ 	if (ino == F2FS_NODE_INO(sbi)) {
+ 		inode->i_mapping->a_ops = &f2fs_node_aops;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 10643b11bd59..52ed02b0327c 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1633,7 +1633,9 @@ next_step:
+ 						!is_cold_node(page)))
+ 				continue;
+ lock_node:
+-			if (!trylock_page(page))
++			if (wbc->sync_mode == WB_SYNC_ALL)
++				lock_page(page);
++			else if (!trylock_page(page))
+ 				continue;
+ 
+ 			if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
+@@ -1968,7 +1970,7 @@ static void remove_free_nid(struct f2fs_sb_info *sbi, nid_t nid)
+ 		kmem_cache_free(free_nid_slab, i);
+ }
+ 
+-static void scan_nat_page(struct f2fs_sb_info *sbi,
++static int scan_nat_page(struct f2fs_sb_info *sbi,
+ 			struct page *nat_page, nid_t start_nid)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -1986,7 +1988,10 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			break;
+ 
+ 		blk_addr = le32_to_cpu(nat_blk->entries[i].block_addr);
+-		f2fs_bug_on(sbi, blk_addr == NEW_ADDR);
++
++		if (blk_addr == NEW_ADDR)
++			return -EINVAL;
++
+ 		if (blk_addr == NULL_ADDR) {
+ 			add_free_nid(sbi, start_nid, true, true);
+ 		} else {
+@@ -1995,6 +2000,8 @@ static void scan_nat_page(struct f2fs_sb_info *sbi,
+ 			spin_unlock(&NM_I(sbi)->nid_list_lock);
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+ static void scan_curseg_cache(struct f2fs_sb_info *sbi)
+@@ -2050,11 +2057,11 @@ out:
+ 	up_read(&nm_i->nat_tree_lock);
+ }
+ 
+-static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
++static int __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						bool sync, bool mount)
+ {
+ 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+-	int i = 0;
++	int i = 0, ret;
+ 	nid_t nid = nm_i->next_scan_nid;
+ 
+ 	if (unlikely(nid >= nm_i->max_nid))
+@@ -2062,17 +2069,17 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	/* Enough entries */
+ 	if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-		return;
++		return 0;
+ 
+ 	if (!sync && !f2fs_available_free_memory(sbi, FREE_NIDS))
+-		return;
++		return 0;
+ 
+ 	if (!mount) {
+ 		/* try to find free nids in free_nid_bitmap */
+ 		scan_free_nid_bits(sbi);
+ 
+ 		if (nm_i->nid_cnt[FREE_NID] >= NAT_ENTRY_PER_BLOCK)
+-			return;
++			return 0;
+ 	}
+ 
+ 	/* readahead nat pages to be scanned */
+@@ -2086,8 +2093,16 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 						nm_i->nat_block_bitmap)) {
+ 			struct page *page = get_current_nat_page(sbi, nid);
+ 
+-			scan_nat_page(sbi, page, nid);
++			ret = scan_nat_page(sbi, page, nid);
+ 			f2fs_put_page(page, 1);
++
++			if (ret) {
++				up_read(&nm_i->nat_tree_lock);
++				f2fs_bug_on(sbi, !mount);
++				f2fs_msg(sbi->sb, KERN_ERR,
++					"NAT is corrupt, run fsck to fix it");
++				return -EINVAL;
++			}
+ 		}
+ 
+ 		nid += (NAT_ENTRY_PER_BLOCK - (nid % NAT_ENTRY_PER_BLOCK));
+@@ -2108,13 +2123,19 @@ static void __f2fs_build_free_nids(struct f2fs_sb_info *sbi,
+ 
+ 	f2fs_ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
+ 					nm_i->ra_nid_pages, META_NAT, false);
++
++	return 0;
+ }
+ 
+-void f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
++int f2fs_build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount)
+ {
++	int ret;
++
+ 	mutex_lock(&NM_I(sbi)->build_lock);
+-	__f2fs_build_free_nids(sbi, sync, mount);
++	ret = __f2fs_build_free_nids(sbi, sync, mount);
+ 	mutex_unlock(&NM_I(sbi)->build_lock);
++
++	return ret;
+ }
+ 
+ /*
+@@ -2801,8 +2822,7 @@ int f2fs_build_node_manager(struct f2fs_sb_info *sbi)
+ 	/* load free nid status from nat_bits table */
+ 	load_free_nid_bitmap(sbi);
+ 
+-	f2fs_build_free_nids(sbi, true, true);
+-	return 0;
++	return f2fs_build_free_nids(sbi, true, true);
+ }
+ 
+ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 38f25f0b193a..ad70e62c5da4 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -241,8 +241,8 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
+ 	struct page *page = NULL;
+ 	block_t blkaddr;
+ 	unsigned int loop_cnt = 0;
+-	unsigned int free_blocks = sbi->user_block_count -
+-					valid_user_blocks(sbi);
++	unsigned int free_blocks = MAIN_SEGS(sbi) * sbi->blocks_per_seg -
++						valid_user_blocks(sbi);
+ 	int err = 0;
+ 
+ 	/* get node pages in the current segment */
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9efce174c51a..43fecd5eb252 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1643,21 +1643,30 @@ void f2fs_clear_prefree_segments(struct f2fs_sb_info *sbi,
+ 	unsigned int start = 0, end = -1;
+ 	unsigned int secno, start_segno;
+ 	bool force = (cpc->reason & CP_DISCARD);
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	mutex_lock(&dirty_i->seglist_lock);
+ 
+ 	while (1) {
+ 		int i;
++
++		if (need_align && end != -1)
++			end--;
+ 		start = find_next_bit(prefree_map, MAIN_SEGS(sbi), end + 1);
+ 		if (start >= MAIN_SEGS(sbi))
+ 			break;
+ 		end = find_next_zero_bit(prefree_map, MAIN_SEGS(sbi),
+ 								start + 1);
+ 
+-		for (i = start; i < end; i++)
+-			clear_bit(i, prefree_map);
++		if (need_align) {
++			start = rounddown(start, sbi->segs_per_sec);
++			end = roundup(end, sbi->segs_per_sec);
++		}
+ 
+-		dirty_i->nr_dirty[PRE] -= end - start;
++		for (i = start; i < end; i++) {
++			if (test_and_clear_bit(i, prefree_map))
++				dirty_i->nr_dirty[PRE]--;
++		}
+ 
+ 		if (!test_opt(sbi, DISCARD))
+ 			continue;
+@@ -2437,6 +2446,7 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	struct discard_policy dpolicy;
+ 	unsigned long long trimmed = 0;
+ 	int err = 0;
++	bool need_align = test_opt(sbi, LFS) && sbi->segs_per_sec > 1;
+ 
+ 	if (start >= MAX_BLKADDR(sbi) || range->len < sbi->blocksize)
+ 		return -EINVAL;
+@@ -2454,6 +2464,10 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 	start_segno = (start <= MAIN_BLKADDR(sbi)) ? 0 : GET_SEGNO(sbi, start);
+ 	end_segno = (end >= MAX_BLKADDR(sbi)) ? MAIN_SEGS(sbi) - 1 :
+ 						GET_SEGNO(sbi, end);
++	if (need_align) {
++		start_segno = rounddown(start_segno, sbi->segs_per_sec);
++		end_segno = roundup(end_segno + 1, sbi->segs_per_sec) - 1;
++	}
+ 
+ 	cpc.reason = CP_DISCARD;
+ 	cpc.trim_minlen = max_t(__u64, 1, F2FS_BYTES_TO_BLK(range->minlen));
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index f18fc82fbe99..38c549d77a80 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -448,6 +448,8 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 	if (test_and_clear_bit(segno, free_i->free_segmap)) {
+ 		free_i->free_segments++;
+ 
++		if (IS_CURSEC(sbi, secno))
++			goto skip_free;
+ 		next = find_next_bit(free_i->free_segmap,
+ 				start_segno + sbi->segs_per_sec, start_segno);
+ 		if (next >= start_segno + sbi->segs_per_sec) {
+@@ -455,6 +457,7 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
+ 				free_i->free_sections++;
+ 		}
+ 	}
++skip_free:
+ 	spin_unlock(&free_i->segmap_lock);
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 3995e926ba3a..128d489acebb 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2229,9 +2229,9 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ 		return 1;
+ 	}
+ 
+-	if (secs_per_zone > total_sections) {
++	if (secs_per_zone > total_sections || !secs_per_zone) {
+ 		f2fs_msg(sb, KERN_INFO,
+-			"Wrong secs_per_zone (%u > %u)",
++			"Wrong secs_per_zone / total_sections (%u, %u)",
+ 			secs_per_zone, total_sections);
+ 		return 1;
+ 	}
+@@ -2282,12 +2282,17 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
+ 	unsigned int ovp_segments, reserved_segments;
+ 	unsigned int main_segs, blocks_per_seg;
++	unsigned int sit_segs, nat_segs;
++	unsigned int sit_bitmap_size, nat_bitmap_size;
++	unsigned int log_blocks_per_seg;
+ 	int i;
+ 
+ 	total = le32_to_cpu(raw_super->segment_count);
+ 	fsmeta = le32_to_cpu(raw_super->segment_count_ckpt);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_sit);
+-	fsmeta += le32_to_cpu(raw_super->segment_count_nat);
++	sit_segs = le32_to_cpu(raw_super->segment_count_sit);
++	fsmeta += sit_segs;
++	nat_segs = le32_to_cpu(raw_super->segment_count_nat);
++	fsmeta += nat_segs;
+ 	fsmeta += le32_to_cpu(ckpt->rsvd_segment_count);
+ 	fsmeta += le32_to_cpu(raw_super->segment_count_ssa);
+ 
+@@ -2318,6 +2323,18 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
+ 			return 1;
+ 	}
+ 
++	sit_bitmap_size = le32_to_cpu(ckpt->sit_ver_bitmap_bytesize);
++	nat_bitmap_size = le32_to_cpu(ckpt->nat_ver_bitmap_bytesize);
++	log_blocks_per_seg = le32_to_cpu(raw_super->log_blocks_per_seg);
++
++	if (sit_bitmap_size != ((sit_segs / 2) << log_blocks_per_seg) / 8 ||
++		nat_bitmap_size != ((nat_segs / 2) << log_blocks_per_seg) / 8) {
++		f2fs_msg(sbi->sb, KERN_ERR,
++			"Wrong bitmap size: sit: %u, nat:%u",
++			sit_bitmap_size, nat_bitmap_size);
++		return 1;
++	}
++
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+ 		f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
+ 		return 1;
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 2e7e611deaef..bca1236fd6fa 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -9,6 +9,7 @@
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
++#include <linux/compiler.h>
+ #include <linux/proc_fs.h>
+ #include <linux/f2fs_fs.h>
+ #include <linux/seq_file.h>
+@@ -286,8 +287,10 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
+ 	bool gc_entry = (!strcmp(a->attr.name, "gc_urgent") ||
+ 					a->struct_type == GC_THREAD);
+ 
+-	if (gc_entry)
+-		down_read(&sbi->sb->s_umount);
++	if (gc_entry) {
++		if (!down_read_trylock(&sbi->sb->s_umount))
++			return -EAGAIN;
++	}
+ 	ret = __sbi_store(a, sbi, buf, count);
+ 	if (gc_entry)
+ 		up_read(&sbi->sb->s_umount);
+@@ -516,7 +519,8 @@ static struct kobject f2fs_feat = {
+ 	.kset	= &f2fs_kset,
+ };
+ 
+-static int segment_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_info_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -543,7 +547,8 @@ static int segment_info_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int segment_bits_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused segment_bits_seq_show(struct seq_file *seq,
++						void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+@@ -567,7 +572,8 @@ static int segment_bits_seq_show(struct seq_file *seq, void *offset)
+ 	return 0;
+ }
+ 
+-static int iostat_info_seq_show(struct seq_file *seq, void *offset)
++static int __maybe_unused iostat_info_seq_show(struct seq_file *seq,
++					       void *offset)
+ {
+ 	struct super_block *sb = seq->private;
+ 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 5d57e818d0c3..6d049dfddb14 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -215,9 +215,9 @@ static u32 pnfs_check_callback_stateid(struct pnfs_layout_hdr *lo,
+ {
+ 	u32 oldseq, newseq;
+ 
+-	/* Is the stateid still not initialised? */
++	/* Is the stateid not initialised? */
+ 	if (!pnfs_layout_is_valid(lo))
+-		return NFS4ERR_DELAY;
++		return NFS4ERR_NOMATCHING_LAYOUT;
+ 
+ 	/* Mismatched stateid? */
+ 	if (!nfs4_stateid_match_other(&lo->plh_stateid, new))
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index a813979b5be0..cb905c0e606c 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -883,16 +883,21 @@ static __be32 nfs4_callback_compound(struct svc_rqst *rqstp)
+ 
+ 	if (hdr_arg.minorversion == 0) {
+ 		cps.clp = nfs4_find_client_ident(SVC_NET(rqstp), hdr_arg.cb_ident);
+-		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp))
++		if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp)) {
++			if (cps.clp)
++				nfs_put_client(cps.clp);
+ 			goto out_invalidcred;
++		}
+ 	}
+ 
+ 	cps.minorversion = hdr_arg.minorversion;
+ 	hdr_res.taglen = hdr_arg.taglen;
+ 	hdr_res.tag = hdr_arg.tag;
+-	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0)
++	if (encode_compound_hdr_res(&xdr_out, &hdr_res) != 0) {
++		if (cps.clp)
++			nfs_put_client(cps.clp);
+ 		return rpc_system_err;
+-
++	}
+ 	while (status == 0 && nops != hdr_arg.nops) {
+ 		status = process_op(nops, rqstp, &xdr_in,
+ 				    rqstp->rq_argp, &xdr_out, rqstp->rq_resp,
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index 979631411a0e..d7124fb12041 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -1127,7 +1127,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	nfs_server_copy_userdata(server, parent_server);
+ 
+ 	/* Get a client representation */
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ 	rpc_set_port(data->addr, NFS_RDMA_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+ 				data->addr,
+@@ -1139,7 +1139,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 				parent_client->cl_net);
+ 	if (!error)
+ 		goto init_server;
+-#endif	/* CONFIG_SUNRPC_XPRT_RDMA */
++#endif	/* IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA) */
+ 
+ 	rpc_set_port(data->addr, NFS_PORT);
+ 	error = nfs4_set_client(server, data->hostname,
+@@ -1153,7 +1153,7 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data,
+ 	if (error < 0)
+ 		goto error;
+ 
+-#ifdef CONFIG_SUNRPC_XPRT_RDMA
++#if IS_ENABLED(CONFIG_SUNRPC_XPRT_RDMA)
+ init_server:
+ #endif
+ 	error = nfs_init_server_rpcclient(server, parent_server->client->cl_timeout, data->authflavor);
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 773bcb1d4044..5482dd6ae9ef 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -520,6 +520,7 @@ struct hid_input {
+ 	const char *name;
+ 	bool registered;
+ 	struct list_head reports;	/* the list of reports */
++	unsigned int application;	/* application usage for this input */
+ };
+ 
+ enum hid_type {
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 22651e124071..a590419e46c5 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -340,7 +340,7 @@ struct kioctx_table;
+ struct mm_struct {
+ 	struct vm_area_struct *mmap;		/* list of VMAs */
+ 	struct rb_root mm_rb;
+-	u32 vmacache_seqnum;                   /* per-thread vmacache */
++	u64 vmacache_seqnum;                   /* per-thread vmacache */
+ #ifdef CONFIG_MMU
+ 	unsigned long (*get_unmapped_area) (struct file *filp,
+ 				unsigned long addr, unsigned long len,
+diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
+index 5fe87687664c..d7016dcb245e 100644
+--- a/include/linux/mm_types_task.h
++++ b/include/linux/mm_types_task.h
+@@ -32,7 +32,7 @@
+ #define VMACACHE_MASK (VMACACHE_SIZE - 1)
+ 
+ struct vmacache {
+-	u32 seqnum;
++	u64 seqnum;
+ 	struct vm_area_struct *vmas[VMACACHE_SIZE];
+ };
+ 
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 3e8ec3b8a39c..87c635d6c773 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -986,14 +986,14 @@ struct nand_subop {
+ 	unsigned int last_instr_end_off;
+ };
+ 
+-int nand_subop_get_addr_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
+-				unsigned int op_id);
+-int nand_subop_get_data_start_off(const struct nand_subop *subop,
+-				  unsigned int op_id);
+-int nand_subop_get_data_len(const struct nand_subop *subop,
+-			    unsigned int op_id);
++unsigned int nand_subop_get_addr_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_num_addr_cyc(const struct nand_subop *subop,
++					 unsigned int op_id);
++unsigned int nand_subop_get_data_start_off(const struct nand_subop *subop,
++					   unsigned int op_id);
++unsigned int nand_subop_get_data_len(const struct nand_subop *subop,
++				     unsigned int op_id);
+ 
+ /**
+  * struct nand_op_parser_addr_constraints - Constraints for address instructions
+diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
+index 5c7f010676a7..47a3441cf4c4 100644
+--- a/include/linux/vm_event_item.h
++++ b/include/linux/vm_event_item.h
+@@ -105,7 +105,6 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
+ #ifdef CONFIG_DEBUG_VM_VMACACHE
+ 		VMACACHE_FIND_CALLS,
+ 		VMACACHE_FIND_HITS,
+-		VMACACHE_FULL_FLUSHES,
+ #endif
+ #ifdef CONFIG_SWAP
+ 		SWAP_RA,
+diff --git a/include/linux/vmacache.h b/include/linux/vmacache.h
+index a5b3aa8d281f..a09b28f76460 100644
+--- a/include/linux/vmacache.h
++++ b/include/linux/vmacache.h
+@@ -16,7 +16,6 @@ static inline void vmacache_flush(struct task_struct *tsk)
+ 	memset(tsk->vmacache.vmas, 0, sizeof(tsk->vmacache.vmas));
+ }
+ 
+-extern void vmacache_flush_all(struct mm_struct *mm);
+ extern void vmacache_update(unsigned long addr, struct vm_area_struct *newvma);
+ extern struct vm_area_struct *vmacache_find(struct mm_struct *mm,
+ 						    unsigned long addr);
+@@ -30,10 +29,6 @@ extern struct vm_area_struct *vmacache_find_exact(struct mm_struct *mm,
+ static inline void vmacache_invalidate(struct mm_struct *mm)
+ {
+ 	mm->vmacache_seqnum++;
+-
+-	/* deal with overflows */
+-	if (unlikely(mm->vmacache_seqnum == 0))
+-		vmacache_flush_all(mm);
+ }
+ 
+ #endif /* __LINUX_VMACACHE_H */
+diff --git a/include/uapi/linux/ethtool.h b/include/uapi/linux/ethtool.h
+index 7363f18e65a5..813282cc8af6 100644
+--- a/include/uapi/linux/ethtool.h
++++ b/include/uapi/linux/ethtool.h
+@@ -902,13 +902,13 @@ struct ethtool_rx_flow_spec {
+ static inline __u64 ethtool_get_flow_spec_ring(__u64 ring_cookie)
+ {
+ 	return ETHTOOL_RX_FLOW_SPEC_RING & ring_cookie;
+-};
++}
+ 
+ static inline __u64 ethtool_get_flow_spec_ring_vf(__u64 ring_cookie)
+ {
+ 	return (ETHTOOL_RX_FLOW_SPEC_RING_VF & ring_cookie) >>
+ 				ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+-};
++}
+ 
+ /**
+  * struct ethtool_rxnfc - command to get or set RX flow classification rules
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index f80afc674f02..517907b082df 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -608,15 +608,15 @@ static void cpuhp_thread_fun(unsigned int cpu)
+ 	bool bringup = st->bringup;
+ 	enum cpuhp_state state;
+ 
++	if (WARN_ON_ONCE(!st->should_run))
++		return;
++
+ 	/*
+ 	 * ACQUIRE for the cpuhp_should_run() load of ->should_run. Ensures
+ 	 * that if we see ->should_run we also see the rest of the state.
+ 	 */
+ 	smp_mb();
+ 
+-	if (WARN_ON_ONCE(!st->should_run))
+-		return;
+-
+ 	cpuhp_lock_acquire(bringup);
+ 
+ 	if (st->single) {
+@@ -928,7 +928,8 @@ static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ 		if (ret) {
+ 			st->target = prev_state;
+-			undo_cpu_down(cpu, st);
++			if (st->state < prev_state)
++				undo_cpu_down(cpu, st);
+ 			break;
+ 		}
+ 	}
+@@ -981,7 +982,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+ 	 * to do the further cleanups.
+ 	 */
+ 	ret = cpuhp_down_callbacks(cpu, st, target);
+-	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
++	if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+ 		cpuhp_reset_state(st, prev_state);
+ 		__cpuhp_kick_ap(st);
+ 	}
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index f89a78e2792b..443941aa784e 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -129,19 +129,40 @@ static void inline clocksource_watchdog_unlock(unsigned long *flags)
+ 	spin_unlock_irqrestore(&watchdog_lock, *flags);
+ }
+ 
++static int clocksource_watchdog_kthread(void *data);
++static void __clocksource_change_rating(struct clocksource *cs, int rating);
++
+ /*
+  * Interval: 0.5sec Threshold: 0.0625s
+  */
+ #define WATCHDOG_INTERVAL (HZ >> 1)
+ #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+ 
++static void clocksource_watchdog_work(struct work_struct *work)
++{
++	/*
++	 * We cannot directly run clocksource_watchdog_kthread() here, because
++	 * clocksource_select() calls timekeeping_notify() which uses
++	 * stop_machine(). One cannot use stop_machine() from a workqueue() due
++	 * lock inversions wrt CPU hotplug.
++	 *
++	 * Also, we only ever run this work once or twice during the lifetime
++	 * of the kernel, so there is no point in creating a more permanent
++	 * kthread for this.
++	 *
++	 * If kthread_run fails the next watchdog scan over the
++	 * watchdog_list will find the unstable clock again.
++	 */
++	kthread_run(clocksource_watchdog_kthread, NULL, "kwatchdog");
++}
++
+ static void __clocksource_unstable(struct clocksource *cs)
+ {
+ 	cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG);
+ 	cs->flags |= CLOCK_SOURCE_UNSTABLE;
+ 
+ 	/*
+-	 * If the clocksource is registered clocksource_watchdog_work() will
++	 * If the clocksource is registered clocksource_watchdog_kthread() will
+ 	 * re-rate and re-select.
+ 	 */
+ 	if (list_empty(&cs->list)) {
+@@ -152,7 +173,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+ 	if (cs->mark_unstable)
+ 		cs->mark_unstable(cs);
+ 
+-	/* kick clocksource_watchdog_work() */
++	/* kick clocksource_watchdog_kthread() */
+ 	if (finished_booting)
+ 		schedule_work(&watchdog_work);
+ }
+@@ -162,7 +183,7 @@ static void __clocksource_unstable(struct clocksource *cs)
+  * @cs:		clocksource to be marked unstable
+  *
+  * This function is called by the x86 TSC code to mark clocksources as unstable;
+- * it defers demotion and re-selection to a work.
++ * it defers demotion and re-selection to a kthread.
+  */
+ void clocksource_mark_unstable(struct clocksource *cs)
+ {
+@@ -387,9 +408,7 @@ static void clocksource_dequeue_watchdog(struct clocksource *cs)
+ 	}
+ }
+ 
+-static void __clocksource_change_rating(struct clocksource *cs, int rating);
+-
+-static int __clocksource_watchdog_work(void)
++static int __clocksource_watchdog_kthread(void)
+ {
+ 	struct clocksource *cs, *tmp;
+ 	unsigned long flags;
+@@ -414,12 +433,13 @@ static int __clocksource_watchdog_work(void)
+ 	return select;
+ }
+ 
+-static void clocksource_watchdog_work(struct work_struct *work)
++static int clocksource_watchdog_kthread(void *data)
+ {
+ 	mutex_lock(&clocksource_mutex);
+-	if (__clocksource_watchdog_work())
++	if (__clocksource_watchdog_kthread())
+ 		clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
++	return 0;
+ }
+ 
+ static bool clocksource_is_watchdog(struct clocksource *cs)
+@@ -438,7 +458,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
+ static void clocksource_select_watchdog(bool fallback) { }
+ static inline void clocksource_dequeue_watchdog(struct clocksource *cs) { }
+ static inline void clocksource_resume_watchdog(void) { }
+-static inline int __clocksource_watchdog_work(void) { return 0; }
++static inline int __clocksource_watchdog_kthread(void) { return 0; }
+ static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
+ void clocksource_mark_unstable(struct clocksource *cs) { }
+ 
+@@ -672,7 +692,7 @@ static int __init clocksource_done_booting(void)
+ 	/*
+ 	 * Run the watchdog first to eliminate unstable clock sources
+ 	 */
+-	__clocksource_watchdog_work();
++	__clocksource_watchdog_kthread();
+ 	clocksource_select();
+ 	mutex_unlock(&clocksource_mutex);
+ 	return 0;
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index cc2d23e6ff61..786f8c014e7e 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1657,6 +1657,22 @@ static inline void __run_timers(struct timer_base *base)
+ 
+ 	raw_spin_lock_irq(&base->lock);
+ 
++	/*
++	 * timer_base::must_forward_clk must be cleared before running
++	 * timers so that any timer functions that call mod_timer() will
++	 * not try to forward the base. Idle tracking / clock forwarding
++	 * logic is only used with BASE_STD timers.
++	 *
++	 * The must_forward_clk flag is cleared unconditionally also for
++	 * the deferrable base. The deferrable base is not affected by idle
++	 * tracking and never forwarded, so clearing the flag is a NOOP.
++	 *
++	 * The fact that the deferrable base is never forwarded can cause
++	 * large variations in granularity for deferrable timers, but they
++	 * can be deferred for long periods due to idle anyway.
++	 */
++	base->must_forward_clk = false;
++
+ 	while (time_after_eq(jiffies, base->clk)) {
+ 
+ 		levels = collect_expired_timers(base, heads);
+@@ -1676,19 +1692,6 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h)
+ {
+ 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ 
+-	/*
+-	 * must_forward_clk must be cleared before running timers so that any
+-	 * timer functions that call mod_timer will not try to forward the
+-	 * base. idle trcking / clock forwarding logic is only used with
+-	 * BASE_STD timers.
+-	 *
+-	 * The deferrable base does not do idle tracking at all, so we do
+-	 * not forward it. This can result in very large variations in
+-	 * granularity for deferrable timers, but they can be deferred for
+-	 * long periods due to idle.
+-	 */
+-	base->must_forward_clk = false;
+-
+ 	__run_timers(base);
+ 	if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
+ 		__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
+diff --git a/mm/debug.c b/mm/debug.c
+index 38c926520c97..bd10aad8539a 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -114,7 +114,7 @@ EXPORT_SYMBOL(dump_vma);
+ 
+ void dump_mm(const struct mm_struct *mm)
+ {
+-	pr_emerg("mm %px mmap %px seqnum %d task_size %lu\n"
++	pr_emerg("mm %px mmap %px seqnum %llu task_size %lu\n"
+ #ifdef CONFIG_MMU
+ 		"get_unmapped_area %px\n"
+ #endif
+@@ -142,7 +142,7 @@ void dump_mm(const struct mm_struct *mm)
+ 		"tlb_flush_pending %d\n"
+ 		"def_flags: %#lx(%pGv)\n",
+ 
+-		mm, mm->mmap, mm->vmacache_seqnum, mm->task_size,
++		mm, mm->mmap, (long long) mm->vmacache_seqnum, mm->task_size,
+ #ifdef CONFIG_MMU
+ 		mm->get_unmapped_area,
+ #endif
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 7deb49f69e27..785252397e35 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1341,7 +1341,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
+ 			if (__PageMovable(page))
+ 				return pfn;
+ 			if (PageHuge(page)) {
+-				if (page_huge_active(page))
++				if (hugepage_migration_supported(page_hstate(page)) &&
++				    page_huge_active(page))
+ 					return pfn;
+ 				else
+ 					pfn = round_up(pfn + 1,
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3222193c46c6..65f2e6481c99 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7649,6 +7649,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ 		 * handle each tail page individually in migration.
+ 		 */
+ 		if (PageHuge(page)) {
++
++			if (!hugepage_migration_supported(page_hstate(page)))
++				goto unmovable;
++
+ 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
+ 			continue;
+ 		}
+diff --git a/mm/vmacache.c b/mm/vmacache.c
+index db7596eb6132..f1729617dc85 100644
+--- a/mm/vmacache.c
++++ b/mm/vmacache.c
+@@ -7,44 +7,6 @@
+ #include <linux/mm.h>
+ #include <linux/vmacache.h>
+ 
+-/*
+- * Flush vma caches for threads that share a given mm.
+- *
+- * The operation is safe because the caller holds the mmap_sem
+- * exclusively and other threads accessing the vma cache will
+- * have mmap_sem held at least for read, so no extra locking
+- * is required to maintain the vma cache.
+- */
+-void vmacache_flush_all(struct mm_struct *mm)
+-{
+-	struct task_struct *g, *p;
+-
+-	count_vm_vmacache_event(VMACACHE_FULL_FLUSHES);
+-
+-	/*
+-	 * Single threaded tasks need not iterate the entire
+-	 * list of process. We can avoid the flushing as well
+-	 * since the mm's seqnum was increased and don't have
+-	 * to worry about other threads' seqnum. Current's
+-	 * flush will occur upon the next lookup.
+-	 */
+-	if (atomic_read(&mm->mm_users) == 1)
+-		return;
+-
+-	rcu_read_lock();
+-	for_each_process_thread(g, p) {
+-		/*
+-		 * Only flush the vmacache pointers as the
+-		 * mm seqnum is already set and curr's will
+-		 * be set upon invalidation when the next
+-		 * lookup is done.
+-		 */
+-		if (mm == p->mm)
+-			vmacache_flush(p);
+-	}
+-	rcu_read_unlock();
+-}
+-
+ /*
+  * This task may be accessing a foreign mm via (for example)
+  * get_user_pages()->find_vma().  The vmacache is task-local and this
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 3bba8f4b08a9..253975cce943 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -775,7 +775,7 @@ static int hidp_setup_hid(struct hidp_session *session,
+ 	hid->version = req->version;
+ 	hid->country = req->country;
+ 
+-	strncpy(hid->name, req->name, sizeof(req->name) - 1);
++	strncpy(hid->name, req->name, sizeof(hid->name));
+ 
+ 	snprintf(hid->phys, sizeof(hid->phys), "%pMR",
+ 		 &l2cap_pi(session->ctrl_sock->sk)->chan->src);
+diff --git a/net/dcb/dcbnl.c b/net/dcb/dcbnl.c
+index 2589a6b78aa1..013fdb6fa07a 100644
+--- a/net/dcb/dcbnl.c
++++ b/net/dcb/dcbnl.c
+@@ -1786,7 +1786,7 @@ static struct dcb_app_type *dcb_app_lookup(const struct dcb_app *app,
+ 		if (itr->app.selector == app->selector &&
+ 		    itr->app.protocol == app->protocol &&
+ 		    itr->ifindex == ifindex &&
+-		    (!prio || itr->app.priority == prio))
++		    ((prio == -1) || itr->app.priority == prio))
+ 			return itr;
+ 	}
+ 
+@@ -1821,7 +1821,8 @@ u8 dcb_getapp(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio = itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+@@ -1849,7 +1850,8 @@ int dcb_setapp(struct net_device *dev, struct dcb_app *new)
+ 
+ 	spin_lock_bh(&dcb_lock);
+ 	/* Search for existing match and replace */
+-	if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) {
++	itr = dcb_app_lookup(new, dev->ifindex, -1);
++	if (itr) {
+ 		if (new->priority)
+ 			itr->app.priority = new->priority;
+ 		else {
+@@ -1882,7 +1884,8 @@ u8 dcb_ieee_getapp_mask(struct net_device *dev, struct dcb_app *app)
+ 	u8 prio = 0;
+ 
+ 	spin_lock_bh(&dcb_lock);
+-	if ((itr = dcb_app_lookup(app, dev->ifindex, 0)))
++	itr = dcb_app_lookup(app, dev->ifindex, -1);
++	if (itr)
+ 		prio |= 1 << itr->app.priority;
+ 	spin_unlock_bh(&dcb_lock);
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 932985ca4e66..3f80a5ca4050 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1612,6 +1612,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ 	 */
+ 	if (!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS) &&
+ 	    !ieee80211_has_morefrags(hdr->frame_control) &&
++	    !is_multicast_ether_addr(hdr->addr1) &&
+ 	    (ieee80211_is_mgmt(hdr->frame_control) ||
+ 	     ieee80211_is_data(hdr->frame_control)) &&
+ 	    !(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 20a171ac4bb2..16849969c138 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -3910,7 +3910,8 @@ void snd_hda_bus_reset_codecs(struct hda_bus *bus)
+ 
+ 	list_for_each_codec(codec, bus) {
+ 		/* FIXME: maybe a better way needed for forced reset */
+-		cancel_delayed_work_sync(&codec->jackpoll_work);
++		if (current_work() != &codec->jackpoll_work.work)
++			cancel_delayed_work_sync(&codec->jackpoll_work);
+ #ifdef CONFIG_PM
+ 		if (hda_codec_is_power_on(codec)) {
+ 			hda_call_codec_suspend(codec);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f6af3e1c2b93..d14b05f68d6d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6530,6 +6530,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+ 	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5feae9666822..55d6c9488d8e 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1165,6 +1165,9 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 	snd_pcm_sframes_t codec_delay = 0;
+ 	int i;
+ 
++	/* clearing the previous total delay */
++	runtime->delay = 0;
++
+ 	for_each_rtdcom(rtd, rtdcom) {
+ 		component = rtdcom->component;
+ 
+@@ -1176,6 +1179,8 @@ static snd_pcm_uframes_t soc_pcm_pointer(struct snd_pcm_substream *substream)
+ 		offset = component->driver->ops->pointer(substream);
+ 		break;
+ 	}
++	/* base delay if assigned in pointer callback */
++	delay = runtime->delay;
+ 
+ 	if (cpu_dai->driver->ops->delay)
+ 		delay += cpu_dai->driver->ops->delay(substream, cpu_dai);
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index f5a3b402589e..67b042738ed7 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -905,8 +905,8 @@ bindir = $(abspath $(prefix)/$(bindir_relative))
+ mandir = share/man
+ infodir = share/info
+ perfexecdir = libexec/perf-core
+-perf_include_dir = lib/include/perf
+-perf_examples_dir = lib/examples/perf
++perf_include_dir = lib/perf/include
++perf_examples_dir = lib/perf/examples
+ sharedir = $(prefix)/share
+ template_dir = share/perf-core/templates
+ STRACE_GROUPS_DIR = share/perf-core/strace/groups
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index 6a8738f7ead3..eab66e3b0a19 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2349,6 +2349,9 @@ static int perf_c2c__browse_cacheline(struct hist_entry *he)
+ 	" s             Toggle full length of symbol and source line columns \n"
+ 	" q             Return back to cacheline list \n";
+ 
++	if (!he)
++		return 0;
++
+ 	/* Display compact version first. */
+ 	c2c.symbol_full = false;
+ 
+diff --git a/tools/perf/perf.h b/tools/perf/perf.h
+index d215714f48df..21bf7f5a3cf5 100644
+--- a/tools/perf/perf.h
++++ b/tools/perf/perf.h
+@@ -25,7 +25,9 @@ static inline unsigned long long rdclock(void)
+ 	return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
+ }
+ 
++#ifndef MAX_NR_CPUS
+ #define MAX_NR_CPUS			1024
++#endif
+ 
+ extern const char *input_name;
+ extern bool perf_host, perf_guest;
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 94fce4f537e9..0d5504751cc5 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -848,6 +848,12 @@ static void apply_config_terms(struct perf_evsel *evsel,
+ 	}
+ }
+ 
++static bool is_dummy_event(struct perf_evsel *evsel)
++{
++	return (evsel->attr.type == PERF_TYPE_SOFTWARE) &&
++	       (evsel->attr.config == PERF_COUNT_SW_DUMMY);
++}
++
+ /*
+  * The enable_on_exec/disabled value strategy:
+  *
+@@ -1086,6 +1092,14 @@ void perf_evsel__config(struct perf_evsel *evsel, struct record_opts *opts,
+ 		else
+ 			perf_evsel__reset_sample_bit(evsel, PERIOD);
+ 	}
++
++	/*
++	 * For initial_delay, a dummy event is added implicitly.
++	 * The software event will trigger -EOPNOTSUPP error out,
++	 * if BRANCH_STACK bit is set.
++	 */
++	if (opts->initial_delay && is_dummy_event(evsel))
++		perf_evsel__reset_sample_bit(evsel, BRANCH_STACK);
+ }
+ 
+ static int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
+diff --git a/tools/testing/nvdimm/pmem-dax.c b/tools/testing/nvdimm/pmem-dax.c
+index b53596ad601b..2e7fd8227969 100644
+--- a/tools/testing/nvdimm/pmem-dax.c
++++ b/tools/testing/nvdimm/pmem-dax.c
+@@ -31,17 +31,21 @@ long __pmem_direct_access(struct pmem_device *pmem, pgoff_t pgoff,
+ 	if (get_nfit_res(pmem->phys_addr + offset)) {
+ 		struct page *page;
+ 
+-		*kaddr = pmem->virt_addr + offset;
++		if (kaddr)
++			*kaddr = pmem->virt_addr + offset;
+ 		page = vmalloc_to_page(pmem->virt_addr + offset);
+-		*pfn = page_to_pfn_t(page);
++		if (pfn)
++			*pfn = page_to_pfn_t(page);
+ 		pr_debug_ratelimited("%s: pmem: %p pgoff: %#lx pfn: %#lx\n",
+ 				__func__, pmem, pgoff, page_to_pfn(page));
+ 
+ 		return 1;
+ 	}
+ 
+-	*kaddr = pmem->virt_addr + offset;
+-	*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
++	if (kaddr)
++		*kaddr = pmem->virt_addr + offset;
++	if (pfn)
++		*pfn = phys_to_pfn_t(pmem->phys_addr + offset, pmem->pfn_flags);
+ 
+ 	/*
+ 	 * If badblocks are present, limit known good range to the
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 41106d9d5cc7..f9c856c8e472 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -6997,7 +6997,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7020,7 +7020,7 @@ static struct bpf_test tests[] = {
+ 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+@@ -7042,7 +7042,7 @@ static struct bpf_test tests[] = {
+ 			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+ 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ 				     BPF_FUNC_map_lookup_elem),
+-			BPF_MOV64_REG(BPF_REG_0, 0),
++			BPF_MOV64_IMM(BPF_REG_0, 0),
+ 			BPF_EXIT_INSN(),
+ 		},
+ 		.fixup_map_in_map = { 3 },
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+index 70952bd98ff9..13147a1f5731 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+@@ -17,7 +17,7 @@
+         "cmdUnderTest": "$TC actions add action connmark",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -41,7 +41,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pass index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pass.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pass.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -65,7 +65,7 @@
+         "cmdUnderTest": "$TC actions add action connmark drop index 100",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 100",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 drop.*index 100 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 drop.*index 100 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -89,7 +89,7 @@
+         "cmdUnderTest": "$TC actions add action connmark pipe index 455",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 455",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 pipe.*index 455 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 pipe.*index 455 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -113,7 +113,7 @@
+         "cmdUnderTest": "$TC actions add action connmark reclassify index 7",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 reclassify.*index 7 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 reclassify.*index 7 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -137,7 +137,7 @@
+         "cmdUnderTest": "$TC actions add action connmark continue index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 continue.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 continue.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -161,7 +161,7 @@
+         "cmdUnderTest": "$TC actions add action connmark jump 10 index 17",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions list action connmark",
+-        "matchPattern": "action order [0-9]+:  connmark zone 0 jump 10.*index 17 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 0 jump 10.*index 17 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -185,7 +185,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 100 pipe index 1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 100 pipe.*index 1 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 100 pipe.*index 1 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -209,7 +209,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 65536 reclassify index 21",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 1",
+-        "matchPattern": "action order [0-9]+:  connmark zone 65536 reclassify.*index 21 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 65536 reclassify.*index 21 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -233,7 +233,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 655 unsupp_arg pass index 2",
+         "expExitCode": "255",
+         "verifyCmd": "$TC actions get action connmark index 2",
+-        "matchPattern": "action order [0-9]+:  connmark zone 655 unsupp_arg pass.*index 2 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 655 unsupp_arg pass.*index 2 ref",
+         "matchCount": "0",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -258,7 +258,7 @@
+         "cmdUnderTest": "$TC actions replace action connmark zone 555 reclassify index 555",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 555",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 reclassify.*index 555 ref",
++        "matchPattern": "action order [0-9]+: connmark zone 555 reclassify.*index 555 ref",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+@@ -282,7 +282,7 @@
+         "cmdUnderTest": "$TC actions add action connmark zone 555 pipe index 5 cookie aabbccddeeff112233445566778800a1",
+         "expExitCode": "0",
+         "verifyCmd": "$TC actions get action connmark index 5",
+-        "matchPattern": "action order [0-9]+:  connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
++        "matchPattern": "action order [0-9]+: connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1",
+         "matchCount": "1",
+         "teardown": [
+             "$TC actions flush action connmark"
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+index 6e4edfae1799..db49fd0f8445 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+@@ -44,7 +44,8 @@
+         "matchPattern": "action order [0-9]*: mirred \\(Egress Redirect to device lo\\).*index 2 ref",
+         "matchCount": "1",
+         "teardown": [
+-            "$TC actions flush action mirred"
++            "$TC actions flush action mirred",
++            "$TC actions flush action gact"
+         ]
+     },
+     {
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index c2b95a22959b..fd8c88463928 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1831,13 +1831,20 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data
+ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+ {
+ 	unsigned long end = hva + PAGE_SIZE;
++	kvm_pfn_t pfn = pte_pfn(pte);
+ 	pte_t stage2_pte;
+ 
+ 	if (!kvm->arch.pgd)
+ 		return;
+ 
+ 	trace_kvm_set_spte_hva(hva);
+-	stage2_pte = pfn_pte(pte_pfn(pte), PAGE_S2);
++
++	/*
++	 * We've moved a page around, probably through CoW, so let's treat it
++	 * just like a translation fault and clean the cache to the PoC.
++	 */
++	clean_dcache_guest_page(pfn, PAGE_SIZE);
++	stage2_pte = pfn_pte(pfn, PAGE_S2);
+ 	handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &stage2_pte);
+ }
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-15 10:12 Mike Pagano
From: Mike Pagano @ 2018-09-15 10:12 UTC (permalink / raw
  To: gentoo-commits

commit:     f69bd2c4a51cabfc16f5a44334d1cd82613e7157
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 15 10:12:46 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Sep 15 10:12:46 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f69bd2c4

Linux patch 4.18.8

 0000_README             |    4 +
 1007_linux-4.18.8.patch | 6654 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6658 insertions(+)

diff --git a/0000_README b/0000_README
index f3682ca..597262e 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-4.18.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.7
 
+Patch:  1007_linux-4.18.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-4.18.8.patch b/1007_linux-4.18.8.patch
new file mode 100644
index 0000000..8a888c7
--- /dev/null
+++ b/1007_linux-4.18.8.patch
@@ -0,0 +1,6654 @@
+diff --git a/Makefile b/Makefile
+index 711b04d00e49..0d73431f66cd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm/mach-rockchip/Kconfig b/arch/arm/mach-rockchip/Kconfig
+index fafd3d7f9f8c..8ca926522026 100644
+--- a/arch/arm/mach-rockchip/Kconfig
++++ b/arch/arm/mach-rockchip/Kconfig
+@@ -17,6 +17,7 @@ config ARCH_ROCKCHIP
+ 	select ARM_GLOBAL_TIMER
+ 	select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
+ 	select ZONE_DMA if ARM_LPAE
++	select PM
+ 	help
+ 	  Support for Rockchip's Cortex-A9 Single-to-Quad-Core-SoCs
+ 	  containing the RK2928, RK30xx and RK31xx series.
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index d5aeac351fc3..21a715ad8222 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -151,6 +151,7 @@ config ARCH_ROCKCHIP
+ 	select GPIOLIB
+ 	select PINCTRL
+ 	select PINCTRL_ROCKCHIP
++	select PM
+ 	select ROCKCHIP_TIMER
+ 	help
+ 	  This enables support for the ARMv8 based Rockchip chipsets,
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 16b077801a5f..a4a718dbfec6 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -92,6 +92,7 @@ extern int stop_topology_update(void);
+ extern int prrn_is_enabled(void);
+ extern int find_and_online_cpu_nid(int cpu);
+ extern int timed_topology_update(int nsecs);
++extern void __init shared_proc_topology_init(void);
+ #else
+ static inline int start_topology_update(void)
+ {
+@@ -113,6 +114,10 @@ static inline int timed_topology_update(int nsecs)
+ {
+ 	return 0;
+ }
++
++#ifdef CONFIG_SMP
++static inline void shared_proc_topology_init(void) {}
++#endif
+ #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
+ 
+ #include <asm-generic/topology.h>
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 468653ce844c..327f6112fe8e 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -250,10 +250,17 @@ do {								\
+ 	}							\
+ } while (0)
+ 
++/*
++ * This is a type: either unsigned long, if the argument fits into
++ * that type, or otherwise unsigned long long.
++ */
++#define __long_type(x) \
++	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
++
+ #define __get_user_nocheck(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
+@@ -267,7 +274,7 @@ do {								\
+ #define __get_user_check(x, ptr, size)					\
+ ({									\
+ 	long __gu_err = -EFAULT;					\
+-	unsigned long  __gu_val = 0;					\
++	__long_type(*(ptr)) __gu_val = 0;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
+ 	might_fault();							\
+ 	if (access_ok(VERIFY_READ, __gu_addr, (size))) {		\
+@@ -281,7 +288,7 @@ do {								\
+ #define __get_user_nosleep(x, ptr, size)			\
+ ({								\
+ 	long __gu_err;						\
+-	unsigned long __gu_val;					\
++	__long_type(*(ptr)) __gu_val;				\
+ 	const __typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+ 	__chk_user_ptr(ptr);					\
+ 	barrier_nospec();					\
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 285c6465324a..f817342aab8f 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -1526,6 +1526,8 @@ TRAMP_REAL_BEGIN(stf_barrier_fallback)
+ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1560,12 +1562,15 @@ TRAMP_REAL_BEGIN(rfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	rfid
+ 
+ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	SET_SCRATCH0(r13);
+ 	GET_PACA(r13);
++	std	r1,PACA_EXRFI+EX_R12(r13)
++	ld	r1,PACAKSAVE(r13)
+ 	std	r9,PACA_EXRFI+EX_R9(r13)
+ 	std	r10,PACA_EXRFI+EX_R10(r13)
+ 	std	r11,PACA_EXRFI+EX_R11(r13)
+@@ -1600,6 +1605,7 @@ TRAMP_REAL_BEGIN(hrfi_flush_fallback)
+ 	ld	r9,PACA_EXRFI+EX_R9(r13)
+ 	ld	r10,PACA_EXRFI+EX_R10(r13)
+ 	ld	r11,PACA_EXRFI+EX_R11(r13)
++	ld	r1,PACA_EXRFI+EX_R12(r13)
+ 	GET_SCRATCH0(r13);
+ 	hrfid
+ 
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 4794d6b4f4d2..b3142c7b9c31 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1156,6 +1156,11 @@ void __init smp_cpus_done(unsigned int max_cpus)
+ 	if (smp_ops && smp_ops->bringup_done)
+ 		smp_ops->bringup_done();
+ 
++	/*
++	 * On a shared LPAR, associativity needs to be requested.
++	 * Hence, get numa topology before dumping cpu topology
++	 */
++	shared_proc_topology_init();
+ 	dump_numa_cpu_topology();
+ 
+ 	/*
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 0c7e05d89244..35ac5422903a 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1078,7 +1078,6 @@ static int prrn_enabled;
+ static void reset_topology_timer(void);
+ static int topology_timer_secs = 1;
+ static int topology_inited;
+-static int topology_update_needed;
+ 
+ /*
+  * Change polling interval for associativity changes.
+@@ -1306,11 +1305,8 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 	struct device *dev;
+ 	int weight, new_nid, i = 0;
+ 
+-	if (!prrn_enabled && !vphn_enabled) {
+-		if (!topology_inited)
+-			topology_update_needed = 1;
++	if (!prrn_enabled && !vphn_enabled && topology_inited)
+ 		return 0;
+-	}
+ 
+ 	weight = cpumask_weight(&cpu_associativity_changes_mask);
+ 	if (!weight)
+@@ -1423,7 +1419,6 @@ int numa_update_cpu_topology(bool cpus_locked)
+ 
+ out:
+ 	kfree(updates);
+-	topology_update_needed = 0;
+ 	return changed;
+ }
+ 
+@@ -1551,6 +1546,15 @@ int prrn_is_enabled(void)
+ 	return prrn_enabled;
+ }
+ 
++void __init shared_proc_topology_init(void)
++{
++	if (lppaca_shared_proc(get_lppaca())) {
++		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
++			    nr_cpumask_bits);
++		numa_update_cpu_topology(false);
++	}
++}
++
+ static int topology_read(struct seq_file *file, void *v)
+ {
+ 	if (vphn_enabled || prrn_enabled)
+@@ -1608,10 +1612,6 @@ static int topology_update_init(void)
+ 		return -ENOMEM;
+ 
+ 	topology_inited = 1;
+-	if (topology_update_needed)
+-		bitmap_fill(cpumask_bits(&cpu_associativity_changes_mask),
+-					nr_cpumask_bits);
+-
+ 	return 0;
+ }
+ device_initcall(topology_update_init);
+diff --git a/arch/powerpc/platforms/85xx/t1042rdb_diu.c b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+index 58fa3d319f1c..dac36ba82fea 100644
+--- a/arch/powerpc/platforms/85xx/t1042rdb_diu.c
++++ b/arch/powerpc/platforms/85xx/t1042rdb_diu.c
+@@ -9,8 +9,10 @@
+  * option) any later version.
+  */
+ 
++#include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
++#include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ 
+@@ -150,3 +152,5 @@ static int __init t1042rdb_diu_init(void)
+ }
+ 
+ early_initcall(t1042rdb_diu_init);
++
++MODULE_LICENSE("GPL");
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 2edc673be137..99d1152ae224 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -371,7 +371,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 		int len, error_log_length;
+ 
+ 		error_log_length = 8 + rtas_error_extended_log_length(h);
+-		len = max_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
++		len = min_t(int, error_log_length, RTAS_ERROR_LOG_MAX);
+ 		memset(global_mce_data_buf, 0, RTAS_ERROR_LOG_MAX);
+ 		memcpy(global_mce_data_buf, h, len);
+ 		errhdr = (struct rtas_error_log *)global_mce_data_buf;
+diff --git a/arch/powerpc/sysdev/mpic_msgr.c b/arch/powerpc/sysdev/mpic_msgr.c
+index eb69a5186243..280e964e1aa8 100644
+--- a/arch/powerpc/sysdev/mpic_msgr.c
++++ b/arch/powerpc/sysdev/mpic_msgr.c
+@@ -196,7 +196,7 @@ static int mpic_msgr_probe(struct platform_device *dev)
+ 
+ 	/* IO map the message register block. */
+ 	of_address_to_resource(np, 0, &rsrc);
+-	msgr_block_addr = ioremap(rsrc.start, rsrc.end - rsrc.start);
++	msgr_block_addr = ioremap(rsrc.start, resource_size(&rsrc));
+ 	if (!msgr_block_addr) {
+ 		dev_err(&dev->dev, "Failed to iomap MPIC message registers");
+ 		return -EFAULT;
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index f6561b783b61..eed1c137f618 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -52,8 +52,8 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
+ # Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions.
+ # Make sure only to export the intended __vdso_xxx symbol offsets.
+ quiet_cmd_vdsold = VDSOLD  $@
+-      cmd_vdsold = $(CC) $(KCFLAGS) $(call cc-option, -no-pie) -nostdlib $(SYSCFLAGS_$(@F)) \
+-                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp -lgcc && \
++      cmd_vdsold = $(CC) $(KBUILD_CFLAGS) $(call cc-option, -no-pie) -nostdlib -nostartfiles $(SYSCFLAGS_$(@F)) \
++                           -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp && \
+                    $(CROSS_COMPILE)objcopy \
+                            $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@
+ 
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index 9f5ea9d87069..9b0216d571ad 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -404,11 +404,13 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+ 	if (copy_oldmem_kernel(nt_name, addr + sizeof(note),
+ 			       sizeof(nt_name) - 1))
+ 		return NULL;
+-	if (strcmp(nt_name, "VMCOREINFO") != 0)
++	if (strcmp(nt_name, VMCOREINFO_NOTE_NAME) != 0)
+ 		return NULL;
+ 	vmcoreinfo = kzalloc_panic(note.n_descsz);
+-	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz))
++	if (copy_oldmem_kernel(vmcoreinfo, addr + 24, note.n_descsz)) {
++		kfree(vmcoreinfo);
+ 		return NULL;
++	}
+ 	*size = note.n_descsz;
+ 	return vmcoreinfo;
+ }
+@@ -418,15 +420,20 @@ static void *get_vmcoreinfo_old(unsigned long *size)
+  */
+ static void *nt_vmcoreinfo(void *ptr)
+ {
++	const char *name = VMCOREINFO_NOTE_NAME;
+ 	unsigned long size;
+ 	void *vmcoreinfo;
+ 
+ 	vmcoreinfo = os_info_old_entry(OS_INFO_VMCOREINFO, &size);
+-	if (!vmcoreinfo)
+-		vmcoreinfo = get_vmcoreinfo_old(&size);
++	if (vmcoreinfo)
++		return nt_init_name(ptr, 0, vmcoreinfo, size, name);
++
++	vmcoreinfo = get_vmcoreinfo_old(&size);
+ 	if (!vmcoreinfo)
+ 		return ptr;
+-	return nt_init_name(ptr, 0, vmcoreinfo, size, "VMCOREINFO");
++	ptr = nt_init_name(ptr, 0, vmcoreinfo, size, name);
++	kfree(vmcoreinfo);
++	return ptr;
+ }
+ 
+ /*
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index e54dda8a0363..de340e41f3b2 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -122,8 +122,7 @@ archheaders:
+ 	$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-generic \
+ 	            kbuild-file=$(HOST_DIR)/include/uapi/asm/Kbuild \
+ 		    obj=$(HOST_DIR)/include/generated/uapi/asm
+-	$(Q)$(MAKE) KBUILD_SRC= ARCH=$(HEADER_ARCH) archheaders
+-
++	$(Q)$(MAKE) -f $(srctree)/Makefile ARCH=$(HEADER_ARCH) archheaders
+ 
+ archprepare: include/generated/user_constants.h
+ 
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index 8c7b3e5a2d01..3a17107594c8 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -148,6 +148,7 @@ enum mce_notifier_prios {
+ 	MCE_PRIO_LOWEST		= 0,
+ };
+ 
++struct notifier_block;
+ extern void mce_register_decode_chain(struct notifier_block *nb);
+ extern void mce_unregister_decode_chain(struct notifier_block *nb);
+ 
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index bb035a4cbc8c..9eeb1359ec75 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -2,6 +2,8 @@
+ #ifndef _ASM_X86_PGTABLE_3LEVEL_H
+ #define _ASM_X86_PGTABLE_3LEVEL_H
+ 
++#include <asm/atomic64_32.h>
++
+ /*
+  * Intel Physical Address Extension (PAE) Mode - three-level page
+  * tables on PPro+ CPUs.
+@@ -147,10 +149,7 @@ static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
+ {
+ 	pte_t res;
+ 
+-	/* xchg acts as a barrier before the setting of the high bits */
+-	res.pte_low = xchg(&ptep->pte_low, 0);
+-	res.pte_high = ptep->pte_high;
+-	ptep->pte_high = 0;
++	res.pte = (pteval_t)arch_atomic64_xchg((atomic64_t *)ptep, 0);
+ 
+ 	return res;
+ }
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 74392d9d51e0..a10481656d82 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1343,7 +1343,7 @@ device_initcall(init_tsc_clocksource);
+ 
+ void __init tsc_early_delay_calibrate(void)
+ {
+-	unsigned long lpj;
++	u64 lpj;
+ 
+ 	if (!boot_cpu_has(X86_FEATURE_TSC))
+ 		return;
+@@ -1355,7 +1355,7 @@ void __init tsc_early_delay_calibrate(void)
+ 	if (!tsc_khz)
+ 		return;
+ 
+-	lpj = tsc_khz * 1000;
++	lpj = (u64)tsc_khz * 1000;
+ 	do_div(lpj, HZ);
+ 	loops_per_jiffy = lpj;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index a44e568363a4..42f1ba92622a 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -221,6 +221,17 @@ static const u64 shadow_acc_track_saved_bits_mask = PT64_EPT_READABLE_MASK |
+ 						    PT64_EPT_EXECUTABLE_MASK;
+ static const u64 shadow_acc_track_saved_bits_shift = PT64_SECOND_AVAIL_BITS_SHIFT;
+ 
++/*
++ * This mask must be set on all non-zero Non-Present or Reserved SPTEs in order
++ * to guard against L1TF attacks.
++ */
++static u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
++
++/*
++ * The number of high-order 1 bits to use in the mask above.
++ */
++static const u64 shadow_nonpresent_or_rsvd_mask_len = 5;
++
+ static void mmu_spte_set(u64 *sptep, u64 spte);
+ 
+ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
+@@ -308,9 +319,13 @@ static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn,
+ {
+ 	unsigned int gen = kvm_current_mmio_generation(vcpu);
+ 	u64 mask = generation_mmio_spte_mask(gen);
++	u64 gpa = gfn << PAGE_SHIFT;
+ 
+ 	access &= ACC_WRITE_MASK | ACC_USER_MASK;
+-	mask |= shadow_mmio_value | access | gfn << PAGE_SHIFT;
++	mask |= shadow_mmio_value | access;
++	mask |= gpa | shadow_nonpresent_or_rsvd_mask;
++	mask |= (gpa & shadow_nonpresent_or_rsvd_mask)
++		<< shadow_nonpresent_or_rsvd_mask_len;
+ 
+ 	trace_mark_mmio_spte(sptep, gfn, access, gen);
+ 	mmu_spte_set(sptep, mask);
+@@ -323,8 +338,14 @@ static bool is_mmio_spte(u64 spte)
+ 
+ static gfn_t get_mmio_spte_gfn(u64 spte)
+ {
+-	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask;
+-	return (spte & ~mask) >> PAGE_SHIFT;
++	u64 mask = generation_mmio_spte_mask(MMIO_GEN_MASK) | shadow_mmio_mask |
++		   shadow_nonpresent_or_rsvd_mask;
++	u64 gpa = spte & ~mask;
++
++	gpa |= (spte >> shadow_nonpresent_or_rsvd_mask_len)
++	       & shadow_nonpresent_or_rsvd_mask;
++
++	return gpa >> PAGE_SHIFT;
+ }
+ 
+ static unsigned get_mmio_spte_access(u64 spte)
+@@ -381,7 +402,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
+ }
+ EXPORT_SYMBOL_GPL(kvm_mmu_set_mask_ptes);
+ 
+-static void kvm_mmu_clear_all_pte_masks(void)
++static void kvm_mmu_reset_all_pte_masks(void)
+ {
+ 	shadow_user_mask = 0;
+ 	shadow_accessed_mask = 0;
+@@ -391,6 +412,18 @@ static void kvm_mmu_clear_all_pte_masks(void)
+ 	shadow_mmio_mask = 0;
+ 	shadow_present_mask = 0;
+ 	shadow_acc_track_mask = 0;
++
++	/*
++	 * If the CPU has 46 or less physical address bits, then set an
++	 * appropriate mask to guard against L1TF attacks. Otherwise, it is
++	 * assumed that the CPU is not vulnerable to L1TF.
++	 */
++	if (boot_cpu_data.x86_phys_bits <
++	    52 - shadow_nonpresent_or_rsvd_mask_len)
++		shadow_nonpresent_or_rsvd_mask =
++			rsvd_bits(boot_cpu_data.x86_phys_bits -
++				  shadow_nonpresent_or_rsvd_mask_len,
++				  boot_cpu_data.x86_phys_bits - 1);
+ }
+ 
+ static int is_cpuid_PSE36(void)
+@@ -5500,7 +5533,7 @@ int kvm_mmu_module_init(void)
+ {
+ 	int ret = -ENOMEM;
+ 
+-	kvm_mmu_clear_all_pte_masks();
++	kvm_mmu_reset_all_pte_masks();
+ 
+ 	pte_list_desc_cache = kmem_cache_create("pte_list_desc",
+ 					    sizeof(struct pte_list_desc),
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index bedabcf33a3e..9869bfd0c601 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -939,17 +939,21 @@ struct vcpu_vmx {
+ 	/*
+ 	 * loaded_vmcs points to the VMCS currently used in this vcpu. For a
+ 	 * non-nested (L1) guest, it always points to vmcs01. For a nested
+-	 * guest (L2), it points to a different VMCS.
++	 * guest (L2), it points to a different VMCS.  loaded_cpu_state points
++	 * to the VMCS whose state is loaded into the CPU registers that only
++	 * need to be switched when transitioning to/from the kernel; a NULL
++	 * value indicates that host state is loaded.
+ 	 */
+ 	struct loaded_vmcs    vmcs01;
+ 	struct loaded_vmcs   *loaded_vmcs;
++	struct loaded_vmcs   *loaded_cpu_state;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+ 		struct vmx_msrs guest;
+ 		struct vmx_msrs host;
+ 	} msr_autoload;
++
+ 	struct {
+-		int           loaded;
+ 		u16           fs_sel, gs_sel, ldt_sel;
+ #ifdef CONFIG_X86_64
+ 		u16           ds_sel, es_sel;
+@@ -2750,10 +2754,11 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ #endif
+ 	int i;
+ 
+-	if (vmx->host_state.loaded)
++	if (vmx->loaded_cpu_state)
+ 		return;
+ 
+-	vmx->host_state.loaded = 1;
++	vmx->loaded_cpu_state = vmx->loaded_vmcs;
++
+ 	/*
+ 	 * Set host fs and gs selectors.  Unfortunately, 22.2.3 does not
+ 	 * allow segment selectors with cpl > 0 or ti == 1.
+@@ -2815,11 +2820,14 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
+ 
+ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
+ {
+-	if (!vmx->host_state.loaded)
++	if (!vmx->loaded_cpu_state)
+ 		return;
+ 
++	WARN_ON_ONCE(vmx->loaded_cpu_state != vmx->loaded_vmcs);
++
+ 	++vmx->vcpu.stat.host_state_reload;
+-	vmx->host_state.loaded = 0;
++	vmx->loaded_cpu_state = NULL;
++
+ #ifdef CONFIG_X86_64
+ 	if (is_long_mode(&vmx->vcpu))
+ 		rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
+@@ -8115,7 +8123,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ 
+ 	/* CPL=0 must be checked manually. */
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 1;
+ 	}
+ 
+@@ -8179,7 +8187,7 @@ static int handle_vmon(struct kvm_vcpu *vcpu)
+ static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
+ {
+ 	if (vmx_get_cpl(vcpu)) {
+-		kvm_queue_exception(vcpu, UD_VECTOR);
++		kvm_inject_gp(vcpu, 0);
+ 		return 0;
+ 	}
+ 
+@@ -10517,8 +10525,8 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
+ 		return;
+ 
+ 	cpu = get_cpu();
+-	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_put(vcpu);
++	vmx->loaded_vmcs = vmcs;
+ 	vmx_vcpu_load(vcpu, cpu);
+ 	put_cpu();
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 24c84aa87049..94cd63081471 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6506,20 +6506,22 @@ static void kvm_set_mmio_spte_mask(void)
+ 	 * Set the reserved bits and the present bit of an paging-structure
+ 	 * entry to generate page fault with PFER.RSV = 1.
+ 	 */
+-	 /* Mask the reserved physical address bits. */
+-	mask = rsvd_bits(maxphyaddr, 51);
++
++	/*
++	 * Mask the uppermost physical address bit, which would be reserved as
++	 * long as the supported physical address width is less than 52.
++	 */
++	mask = 1ull << 51;
+ 
+ 	/* Set the present bit. */
+ 	mask |= 1ull;
+ 
+-#ifdef CONFIG_X86_64
+ 	/*
+ 	 * If reserved bit is not supported, clear the present bit to disable
+ 	 * mmio page fault.
+ 	 */
+-	if (maxphyaddr == 52)
++	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+ 		mask &= ~1ull;
+-#endif
+ 
+ 	kvm_mmu_set_mmio_spte_mask(mask, mask);
+ }
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 2c30cabfda90..071d82ec9abb 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -434,14 +434,13 @@ static void xen_set_pud(pud_t *ptr, pud_t val)
+ static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+ 	trace_xen_mmu_set_pte_atomic(ptep, pte);
+-	set_64bit((u64 *)ptep, native_pte_val(pte));
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ static void xen_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ {
+ 	trace_xen_mmu_pte_clear(mm, addr, ptep);
+-	if (!xen_batched_set_pte(ptep, native_make_pte(0)))
+-		native_pte_clear(mm, addr, ptep);
++	__xen_set_pte(ptep, native_make_pte(0));
+ }
+ 
+ static void xen_pmd_clear(pmd_t *pmdp)
+@@ -1571,7 +1570,7 @@ static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
+ 		pte = __pte_ma(((pte_val_ma(*ptep) & _PAGE_RW) | ~_PAGE_RW) &
+ 			       pte_val_ma(pte));
+ #endif
+-	native_set_pte(ptep, pte);
++	__xen_set_pte(ptep, pte);
+ }
+ 
+ /* Early in boot, while setting up the initial pagetable, assume
+diff --git a/block/bio.c b/block/bio.c
+index 047c5dca6d90..ff94640bc734 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -156,7 +156,7 @@ out:
+ 
+ unsigned int bvec_nr_vecs(unsigned short idx)
+ {
+-	return bvec_slabs[idx].nr_vecs;
++	return bvec_slabs[--idx].nr_vecs;
+ }
+ 
+ void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned int idx)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1646ea85dade..746a5eac4541 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2159,7 +2159,9 @@ static inline bool should_fail_request(struct hd_struct *part,
+ 
+ static inline bool bio_check_ro(struct bio *bio, struct hd_struct *part)
+ {
+-	if (part->policy && op_is_write(bio_op(bio))) {
++	const int op = bio_op(bio);
++
++	if (part->policy && (op_is_write(op) && !op_is_flush(op))) {
+ 		char b[BDEVNAME_SIZE];
+ 
+ 		WARN_ONCE(1,
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 3de0836163c2..d5f2c21d8531 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -23,6 +23,9 @@ bool blk_mq_has_free_tags(struct blk_mq_tags *tags)
+ 
+ /*
+  * If a previously inactive queue goes active, bump the active user count.
++ * We need to do this before try to allocate driver tag, then even if fail
++ * to get tag when first time, the other shared-tag users could reserve
++ * budget for it.
+  */
+ bool __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 654b0dc7e001..2f9e14361673 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -285,7 +285,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
+ 		rq->tag = -1;
+ 		rq->internal_tag = tag;
+ 	} else {
+-		if (blk_mq_tag_busy(data->hctx)) {
++		if (data->hctx->flags & BLK_MQ_F_TAG_SHARED) {
+ 			rq_flags = RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data->hctx->nr_active);
+ 		}
+@@ -367,6 +367,8 @@ static struct request *blk_mq_get_request(struct request_queue *q,
+ 		if (!op_is_flush(op) && e->type->ops.mq.limit_depth &&
+ 		    !(data->flags & BLK_MQ_REQ_RESERVED))
+ 			e->type->ops.mq.limit_depth(op, data);
++	} else {
++		blk_mq_tag_busy(data->hctx);
+ 	}
+ 
+ 	tag = blk_mq_get_tag(data);
+@@ -970,6 +972,7 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 		.hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
+ 		.flags = wait ? 0 : BLK_MQ_REQ_NOWAIT,
+ 	};
++	bool shared;
+ 
+ 	might_sleep_if(wait);
+ 
+@@ -979,9 +982,10 @@ bool blk_mq_get_driver_tag(struct request *rq, struct blk_mq_hw_ctx **hctx,
+ 	if (blk_mq_tag_is_reserved(data.hctx->sched_tags, rq->internal_tag))
+ 		data.flags |= BLK_MQ_REQ_RESERVED;
+ 
++	shared = blk_mq_tag_busy(data.hctx);
+ 	rq->tag = blk_mq_get_tag(&data);
+ 	if (rq->tag >= 0) {
+-		if (blk_mq_tag_busy(data.hctx)) {
++		if (shared) {
+ 			rq->rq_flags |= RQF_MQ_INFLIGHT;
+ 			atomic_inc(&data.hctx->nr_active);
+ 		}
+diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
+index 82b6c27b3245..f6f180f3aa1c 100644
+--- a/block/cfq-iosched.c
++++ b/block/cfq-iosched.c
+@@ -4735,12 +4735,13 @@ USEC_SHOW_FUNCTION(cfq_target_latency_us_show, cfqd->cfq_target_latency);
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	if (__CONV)							\
+ 		*(__PTR) = (u64)__data * NSEC_PER_MSEC;			\
+ 	else								\
+@@ -4769,12 +4770,13 @@ STORE_FUNCTION(cfq_target_latency_store, &cfqd->cfq_target_latency, 1, UINT_MAX,
+ static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)	\
+ {									\
+ 	struct cfq_data *cfqd = e->elevator_data;			\
+-	unsigned int __data;						\
++	unsigned int __data, __min = (MIN), __max = (MAX);		\
++									\
+ 	cfq_var_store(&__data, (page));					\
+-	if (__data < (MIN))						\
+-		__data = (MIN);						\
+-	else if (__data > (MAX))					\
+-		__data = (MAX);						\
++	if (__data < __min)						\
++		__data = __min;						\
++	else if (__data > __max)					\
++		__data = __max;						\
+ 	*(__PTR) = (u64)__data * NSEC_PER_USEC;				\
+ 	return count;							\
+ }
+diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c
+index 3de794bcf8fa..69603ba52a3a 100644
+--- a/drivers/acpi/acpica/hwregs.c
++++ b/drivers/acpi/acpica/hwregs.c
+@@ -528,13 +528,18 @@ acpi_status acpi_hw_register_read(u32 register_id, u32 *return_value)
+ 
+ 		status =
+ 		    acpi_hw_read(&value64, &acpi_gbl_FADT.xpm2_control_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
+ 		break;
+ 
+ 	case ACPI_REGISTER_PM_TIMER:	/* 32-bit access */
+ 
+ 		status = acpi_hw_read(&value64, &acpi_gbl_FADT.xpm_timer_block);
+-		value = (u32)value64;
++		if (ACPI_SUCCESS(status)) {
++			value = (u32)value64;
++		}
++
+ 		break;
+ 
+ 	case ACPI_REGISTER_SMI_COMMAND_BLOCK:	/* 8-bit access */
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 970dd87d347c..6799d00dd790 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1612,7 +1612,8 @@ static int acpi_add_single_object(struct acpi_device **child,
+ 	 * Note this must be done before the get power-/wakeup_dev-flags calls.
+ 	 */
+ 	if (type == ACPI_BUS_TYPE_DEVICE)
+-		acpi_bus_get_status(device);
++		if (acpi_bus_get_status(device) < 0)
++			acpi_set_device_status(device, 0);
+ 
+ 	acpi_bus_get_power_flags(device);
+ 	acpi_bus_get_wakeup_device_flags(device);
+@@ -1690,7 +1691,7 @@ static int acpi_bus_type_and_status(acpi_handle handle, int *type,
+ 		 * acpi_add_single_object updates this once we've an acpi_device
+ 		 * so that acpi_bus_get_status' quirk handling can be used.
+ 		 */
+-		*sta = 0;
++		*sta = ACPI_STA_DEFAULT;
+ 		break;
+ 	case ACPI_TYPE_PROCESSOR:
+ 		*type = ACPI_BUS_TYPE_PROCESSOR;
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index 2a8634a52856..5a628148f3f0 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -1523,6 +1523,7 @@ static const char *const rk3399_pmucru_critical_clocks[] __initconst = {
+ 	"pclk_pmu_src",
+ 	"fclk_cm0s_src_pmu",
+ 	"clk_timer_src_pmu",
++	"pclk_rkpwm_pmu",
+ };
+ 
+ static void __init rk3399_clk_init(struct device_node *np)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 7dcbac8af9a7..b60aa7d43cb7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1579,9 +1579,9 @@ struct amdgpu_device {
+ 	DECLARE_HASHTABLE(mn_hash, 7);
+ 
+ 	/* tracking pinned memory */
+-	u64 vram_pin_size;
+-	u64 invisible_pin_size;
+-	u64 gart_pin_size;
++	atomic64_t vram_pin_size;
++	atomic64_t visible_pin_size;
++	atomic64_t gart_pin_size;
+ 
+ 	/* amdkfd interface */
+ 	struct kfd_dev          *kfd;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 9c85a90be293..5a196ec49be8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -257,7 +257,7 @@ static void amdgpu_cs_get_threshold_for_moves(struct amdgpu_device *adev,
+ 		return;
+ 	}
+ 
+-	total_vram = adev->gmc.real_vram_size - adev->vram_pin_size;
++	total_vram = adev->gmc.real_vram_size - atomic64_read(&adev->vram_pin_size);
+ 	used_vram = amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 	free_vram = used_vram >= total_vram ? 0 : total_vram - used_vram;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 91517b166a3b..063f9aa96946 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -494,13 +494,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 	case AMDGPU_INFO_VRAM_GTT: {
+ 		struct drm_amdgpu_info_vram_gtt vram_gtt;
+ 
+-		vram_gtt.vram_size = adev->gmc.real_vram_size;
+-		vram_gtt.vram_size -= adev->vram_pin_size;
+-		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size;
+-		vram_gtt.vram_cpu_accessible_size -= (adev->vram_pin_size - adev->invisible_pin_size);
++		vram_gtt.vram_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
++		vram_gtt.vram_cpu_accessible_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		vram_gtt.gtt_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		vram_gtt.gtt_size *= PAGE_SIZE;
+-		vram_gtt.gtt_size -= adev->gart_pin_size;
++		vram_gtt.gtt_size -= atomic64_read(&adev->gart_pin_size);
+ 		return copy_to_user(out, &vram_gtt,
+ 				    min((size_t)size, sizeof(vram_gtt))) ? -EFAULT : 0;
+ 	}
+@@ -509,17 +509,16 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		memset(&mem, 0, sizeof(mem));
+ 		mem.vram.total_heap_size = adev->gmc.real_vram_size;
+-		mem.vram.usable_heap_size =
+-			adev->gmc.real_vram_size - adev->vram_pin_size;
++		mem.vram.usable_heap_size = adev->gmc.real_vram_size -
++			atomic64_read(&adev->vram_pin_size);
+ 		mem.vram.heap_usage =
+ 			amdgpu_vram_mgr_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
+ 
+ 		mem.cpu_accessible_vram.total_heap_size =
+ 			adev->gmc.visible_vram_size;
+-		mem.cpu_accessible_vram.usable_heap_size =
+-			adev->gmc.visible_vram_size -
+-			(adev->vram_pin_size - adev->invisible_pin_size);
++		mem.cpu_accessible_vram.usable_heap_size = adev->gmc.visible_vram_size -
++			atomic64_read(&adev->visible_pin_size);
+ 		mem.cpu_accessible_vram.heap_usage =
+ 			amdgpu_vram_mgr_vis_usage(&adev->mman.bdev.man[TTM_PL_VRAM]);
+ 		mem.cpu_accessible_vram.max_allocation =
+@@ -527,8 +526,8 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ 
+ 		mem.gtt.total_heap_size = adev->mman.bdev.man[TTM_PL_TT].size;
+ 		mem.gtt.total_heap_size *= PAGE_SIZE;
+-		mem.gtt.usable_heap_size = mem.gtt.total_heap_size
+-			- adev->gart_pin_size;
++		mem.gtt.usable_heap_size = mem.gtt.total_heap_size -
++			atomic64_read(&adev->gart_pin_size);
+ 		mem.gtt.heap_usage =
+ 			amdgpu_gtt_mgr_usage(&adev->mman.bdev.man[TTM_PL_TT]);
+ 		mem.gtt.max_allocation = mem.gtt.usable_heap_size * 3 / 4;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 3526efa8960e..3873c3353020 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -50,11 +50,35 @@ static bool amdgpu_need_backup(struct amdgpu_device *adev)
+ 	return true;
+ }
+ 
++/**
++ * amdgpu_bo_subtract_pin_size - Remove BO from pin_size accounting
++ *
++ * @bo: &amdgpu_bo buffer object
++ *
++ * This function is called when a BO stops being pinned, and updates the
++ * &amdgpu_device pin_size values accordingly.
++ */
++static void amdgpu_bo_subtract_pin_size(struct amdgpu_bo *bo)
++{
++	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
++
++	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
++	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
++		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
++	}
++}
++
+ static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(tbo->bdev);
+ 	struct amdgpu_bo *bo = ttm_to_amdgpu_bo(tbo);
+ 
++	if (bo->pin_count > 0)
++		amdgpu_bo_subtract_pin_size(bo);
++
+ 	if (bo->kfd_bo)
+ 		amdgpu_amdkfd_unreserve_system_memory_limit(bo);
+ 
+@@ -761,10 +785,11 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
+ 
+ 	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+ 	if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
+-		adev->vram_pin_size += amdgpu_bo_size(bo);
+-		adev->invisible_pin_size += amdgpu_vram_mgr_bo_invisible_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->vram_pin_size);
++		atomic64_add(amdgpu_vram_mgr_bo_visible_size(bo),
++			     &adev->visible_pin_size);
+ 	} else if (domain == AMDGPU_GEM_DOMAIN_GTT) {
+-		adev->gart_pin_size += amdgpu_bo_size(bo);
++		atomic64_add(amdgpu_bo_size(bo), &adev->gart_pin_size);
+ 	}
+ 
+ error:
+@@ -790,12 +815,7 @@ int amdgpu_bo_unpin(struct amdgpu_bo *bo)
+ 	if (bo->pin_count)
+ 		return 0;
+ 
+-	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+-		adev->vram_pin_size -= amdgpu_bo_size(bo);
+-		adev->invisible_pin_size -= amdgpu_vram_mgr_bo_invisible_size(bo);
+-	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+-		adev->gart_pin_size -= amdgpu_bo_size(bo);
+-	}
++	amdgpu_bo_subtract_pin_size(bo);
+ 
+ 	for (i = 0; i < bo->placement.num_placement; i++) {
+ 		bo->placements[i].lpfn = 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index a44c3d58fef4..2ec20348b983 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -1157,7 +1157,7 @@ static ssize_t amdgpu_hwmon_show_vddnb(struct device *dev,
+ 	int r, size = sizeof(vddnb);
+ 
+ 	/* only APUs have vddnb */
+-	if  (adev->flags & AMD_IS_APU)
++	if  (!(adev->flags & AMD_IS_APU))
+ 		return -EINVAL;
+ 
+ 	/* Can't get voltage when the card is off */
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 9f1a5bd39ae8..5b39d1399630 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -131,6 +131,11 @@ psp_cmd_submit_buf(struct psp_context *psp,
+ 		msleep(1);
+ 	}
+ 
++	if (ucode) {
++		ucode->tmr_mc_addr_lo = psp->cmd_buf_mem->resp.fw_addr_lo;
++		ucode->tmr_mc_addr_hi = psp->cmd_buf_mem->resp.fw_addr_hi;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index 86a0715d9431..1cafe8d83a4d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -53,9 +53,8 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 						  int fd,
+ 						  enum drm_sched_priority priority)
+ {
+-	struct file *filp = fcheck(fd);
++	struct file *filp = fget(fd);
+ 	struct drm_file *file;
+-	struct pid *pid;
+ 	struct amdgpu_fpriv *fpriv;
+ 	struct amdgpu_ctx *ctx;
+ 	uint32_t id;
+@@ -63,20 +62,12 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
+ 	if (!filp)
+ 		return -EINVAL;
+ 
+-	pid = get_pid(((struct drm_file *)filp->private_data)->pid);
++	file = filp->private_data;
++	fpriv = file->driver_priv;
++	idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
++		amdgpu_ctx_priority_override(ctx, priority);
+ 
+-	mutex_lock(&adev->ddev->filelist_mutex);
+-	list_for_each_entry(file, &adev->ddev->filelist, lhead) {
+-		if (file->pid != pid)
+-			continue;
+-
+-		fpriv = file->driver_priv;
+-		idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
+-				amdgpu_ctx_priority_override(ctx, priority);
+-	}
+-	mutex_unlock(&adev->ddev->filelist_mutex);
+-
+-	put_pid(pid);
++	fput(filp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index e5da4654b630..8b3cc6687769 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -73,7 +73,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_mem_reg *mem);
+ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
+ int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
+ 
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo);
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
+ uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
+ uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+index 08e38579af24..bdc472b6e641 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+@@ -194,6 +194,7 @@ enum AMDGPU_UCODE_ID {
+ 	AMDGPU_UCODE_ID_SMC,
+ 	AMDGPU_UCODE_ID_UVD,
+ 	AMDGPU_UCODE_ID_VCE,
++	AMDGPU_UCODE_ID_VCN,
+ 	AMDGPU_UCODE_ID_MAXIMUM,
+ };
+ 
+@@ -226,6 +227,9 @@ struct amdgpu_firmware_info {
+ 	void *kaddr;
+ 	/* ucode_size_bytes */
+ 	uint32_t ucode_size;
++	/* starting tmr mc address */
++	uint32_t tmr_mc_addr_lo;
++	uint32_t tmr_mc_addr_hi;
+ };
+ 
+ void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 1b4ad9b2a755..bee49991c1ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -111,9 +111,10 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+ 			version_major, version_minor, family_id);
+ 	}
+ 
+-	bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8)
+-		  +  AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
++	bo_size = AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
+ 		  +  AMDGPU_VCN_SESSION_SIZE * 40;
++	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
++		bo_size += AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8);
+ 	r = amdgpu_bo_create_kernel(adev, bo_size, PAGE_SIZE,
+ 				    AMDGPU_GEM_DOMAIN_VRAM, &adev->vcn.vcpu_bo,
+ 				    &adev->vcn.gpu_addr, &adev->vcn.cpu_addr);
+@@ -187,11 +188,13 @@ int amdgpu_vcn_resume(struct amdgpu_device *adev)
+ 		unsigned offset;
+ 
+ 		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
+-		offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
+-		memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
+-			    le32_to_cpu(hdr->ucode_size_bytes));
+-		size -= le32_to_cpu(hdr->ucode_size_bytes);
+-		ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
++			offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
++			memcpy_toio(adev->vcn.cpu_addr, adev->vcn.fw->data + offset,
++				    le32_to_cpu(hdr->ucode_size_bytes));
++			size -= le32_to_cpu(hdr->ucode_size_bytes);
++			ptr += le32_to_cpu(hdr->ucode_size_bytes);
++		}
+ 		memset_io(ptr, 0, size);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index b6333f92ba45..ef4784458800 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -97,33 +97,29 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
+ }
+ 
+ /**
+- * amdgpu_vram_mgr_bo_invisible_size - CPU invisible BO size
++ * amdgpu_vram_mgr_bo_visible_size - CPU visible BO size
+  *
+  * @bo: &amdgpu_bo buffer object (must be in VRAM)
+  *
+  * Returns:
+- * How much of the given &amdgpu_bo buffer object lies in CPU invisible VRAM.
++ * How much of the given &amdgpu_bo buffer object lies in CPU visible VRAM.
+  */
+-u64 amdgpu_vram_mgr_bo_invisible_size(struct amdgpu_bo *bo)
++u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
+ {
+ 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ 	struct ttm_mem_reg *mem = &bo->tbo.mem;
+ 	struct drm_mm_node *nodes = mem->mm_node;
+ 	unsigned pages = mem->num_pages;
+-	u64 usage = 0;
++	u64 usage;
+ 
+ 	if (adev->gmc.visible_vram_size == adev->gmc.real_vram_size)
+-		return 0;
++		return amdgpu_bo_size(bo);
+ 
+ 	if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
+-		return amdgpu_bo_size(bo);
++		return 0;
+ 
+-	while (nodes && pages) {
+-		usage += nodes->size << PAGE_SHIFT;
+-		usage -= amdgpu_vram_mgr_vis_size(adev, nodes);
+-		pages -= nodes->size;
+-		++nodes;
+-	}
++	for (usage = 0; nodes && pages; pages -= nodes->size, nodes++)
++		usage += amdgpu_vram_mgr_vis_size(adev, nodes);
+ 
+ 	return usage;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index a69153435ea7..8f0ac805ecd2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -3433,7 +3433,7 @@ static void gfx_v9_0_enter_rlc_safe_mode(struct amdgpu_device *adev)
+ 
+ 		/* wait for RLC_SAFE_MODE */
+ 		for (i = 0; i < adev->usec_timeout; i++) {
+-			if (!REG_GET_FIELD(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
++			if (!REG_GET_FIELD(RREG32_SOC15(GC, 0, mmRLC_SAFE_MODE), RLC_SAFE_MODE, CMD))
+ 				break;
+ 			udelay(1);
+ 		}
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 0ff136d02d9b..02be34e72ed9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -88,6 +88,9 @@ psp_v10_0_get_fw_type(struct amdgpu_firmware_info *ucode, enum psp_gfx_fw_type *
+ 	case AMDGPU_UCODE_ID_VCE:
+ 		*type = GFX_FW_TYPE_VCE;
+ 		break;
++	case AMDGPU_UCODE_ID_VCN:
++		*type = GFX_FW_TYPE_VCN;
++		break;
+ 	case AMDGPU_UCODE_ID_MAXIMUM:
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+index bfddf97dd13e..a16eebc05d12 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+@@ -1569,7 +1569,6 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_phys_funcs = {
+ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.type = AMDGPU_RING_TYPE_UVD,
+ 	.align_mask = 0xf,
+-	.nop = PACKET0(mmUVD_NO_OP, 0),
+ 	.support_64bit_ptrs = false,
+ 	.get_rptr = uvd_v6_0_ring_get_rptr,
+ 	.get_wptr = uvd_v6_0_ring_get_wptr,
+@@ -1587,7 +1586,7 @@ static const struct amdgpu_ring_funcs uvd_v6_0_ring_vm_funcs = {
+ 	.emit_hdp_flush = uvd_v6_0_ring_emit_hdp_flush,
+ 	.test_ring = uvd_v6_0_ring_test_ring,
+ 	.test_ib = amdgpu_uvd_ring_test_ib,
+-	.insert_nop = amdgpu_ring_insert_nop,
++	.insert_nop = uvd_v6_0_ring_insert_nop,
+ 	.pad_ib = amdgpu_ring_generic_pad_ib,
+ 	.begin_use = amdgpu_uvd_ring_begin_use,
+ 	.end_use = amdgpu_uvd_ring_end_use,
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 29684c3ea4ef..700119168067 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -90,6 +90,16 @@ static int vcn_v1_0_sw_init(void *handle)
+ 	if (r)
+ 		return r;
+ 
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		const struct common_firmware_header *hdr;
++		hdr = (const struct common_firmware_header *)adev->vcn.fw->data;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].ucode_id = AMDGPU_UCODE_ID_VCN;
++		adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].fw = adev->vcn.fw;
++		adev->firmware.fw_size +=
++			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
++		DRM_INFO("PSP loading VCN firmware\n");
++	}
++
+ 	r = amdgpu_vcn_resume(adev);
+ 	if (r)
+ 		return r;
+@@ -241,26 +251,38 @@ static int vcn_v1_0_resume(void *handle)
+ static void vcn_v1_0_mc_resume(struct amdgpu_device *adev)
+ {
+ 	uint32_t size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw->size + 4);
+-
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++	uint32_t offset;
++
++	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_lo));
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++			     (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_hi));
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0, 0);
++		offset = 0;
++	} else {
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW,
+ 			lower_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
++		WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH,
+ 			upper_32_bits(adev->vcn.gpu_addr));
+-	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
+-				AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++		offset = size;
++		WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0,
++			     AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
++	}
++
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE0, size);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size));
++		     lower_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size));
++		     upper_32_bits(adev->vcn.gpu_addr + offset));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE1, AMDGPU_VCN_HEAP_SIZE);
+ 
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW,
+-			lower_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     lower_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH,
+-			upper_32_bits(adev->vcn.gpu_addr + size + AMDGPU_VCN_HEAP_SIZE));
++		     upper_32_bits(adev->vcn.gpu_addr + offset + AMDGPU_VCN_HEAP_SIZE));
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2, 0);
+ 	WREG32_SOC15(UVD, 0, mmUVD_VCPU_CACHE_SIZE2,
+ 			AMDGPU_VCN_STACK_SIZE + (AMDGPU_VCN_SESSION_SIZE * 40));
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 770c6b24be0b..e484d0a94bdc 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1334,6 +1334,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_display_manager *dm)
+ 	struct backlight_properties props = { 0 };
+ 
+ 	props.max_brightness = AMDGPU_MAX_BL_LEVEL;
++	props.brightness = AMDGPU_MAX_BL_LEVEL;
+ 	props.type = BACKLIGHT_RAW;
+ 
+ 	snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+@@ -2123,13 +2124,8 @@ convert_color_depth_from_display_info(const struct drm_connector *connector)
+ static enum dc_aspect_ratio
+ get_aspect_ratio(const struct drm_display_mode *mode_in)
+ {
+-	int32_t width = mode_in->crtc_hdisplay * 9;
+-	int32_t height = mode_in->crtc_vdisplay * 16;
+-
+-	if ((width - height) < 10 && (width - height) > -10)
+-		return ASPECT_RATIO_16_9;
+-	else
+-		return ASPECT_RATIO_4_3;
++	/* 1-1 mapping, since both enums follow the HDMI spec. */
++	return (enum dc_aspect_ratio) mode_in->picture_aspect_ratio;
+ }
+ 
+ static enum dc_color_space
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index 52f2c01349e3..9bfb040352e9 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -98,10 +98,16 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name,
+  */
+ void amdgpu_dm_crtc_handle_crc_irq(struct drm_crtc *crtc)
+ {
+-	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+-	struct dc_stream_state *stream_state = crtc_state->stream;
++	struct dm_crtc_state *crtc_state;
++	struct dc_stream_state *stream_state;
+ 	uint32_t crcs[3];
+ 
++	if (crtc == NULL)
++		return;
++
++	crtc_state = to_dm_crtc_state(crtc->state);
++	stream_state = crtc_state->stream;
++
+ 	/* Early return if CRC capture is not enabled. */
+ 	if (!crtc_state->crc_enabled)
+ 		return;
+diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+index 651e1fd4622f..a558bfaa0c46 100644
+--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+@@ -808,6 +808,24 @@ static enum bp_result transmitter_control_v1_5(
+ 	 * (=1: 8bpp, =1.25: 10bpp, =1.5:12bpp, =2: 16bpp)
+ 	 * LVDS mode: usPixelClock = pixel clock
+ 	 */
++	if  (cntl->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
++		switch (cntl->color_depth) {
++		case COLOR_DEPTH_101010:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 30) / 24);
++			break;
++		case COLOR_DEPTH_121212:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 36) / 24);
++			break;
++		case COLOR_DEPTH_161616:
++			params.usSymClock =
++				cpu_to_le16((le16_to_cpu(params.usSymClock) * 48) / 24);
++			break;
++		default:
++			break;
++		}
++	}
+ 
+ 	if (EXEC_BIOS_CMD_TABLE(UNIPHYTransmitterControl, params))
+ 		result = BP_RESULT_OK;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 2fa521812d23..8a7890b03d97 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -728,6 +728,17 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ 			break;
+ 		case EDID_NO_RESPONSE:
+ 			DC_LOG_ERROR("No EDID read.\n");
++
++			/*
++			 * Abort detection for non-DP connectors if we have
++			 * no EDID
++			 *
++			 * DP needs to report as connected if HPD is high
++			 * even if we have no EDID in order to go to
++			 * fail-safe mode
++			 */
++			if (!dc_is_dp_signal(link->connector_signal))
++				return false;
+ 		default:
+ 			break;
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 751f3ac9d921..754b4c2fc90a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -268,24 +268,30 @@ bool resource_construct(
+ 
+ 	return true;
+ }
++static int find_matching_clock_source(
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
+ 
++	int i;
++
++	for (i = 0; i < pool->clk_src_count; i++) {
++		if (pool->clock_sources[i] == clock_source)
++			return i;
++	}
++	return -1;
++}
+ 
+ void resource_unreference_clock_source(
+ 		struct resource_context *res_ctx,
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]--;
+ 
+-		break;
+-	}
+-
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count--;
+ }
+@@ -295,19 +301,31 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source)
+ {
+-	int i;
+-	for (i = 0; i < pool->clk_src_count; i++) {
+-		if (pool->clock_sources[i] != clock_source)
+-			continue;
++	int i = find_matching_clock_source(pool, clock_source);
+ 
++	if (i > -1)
+ 		res_ctx->clock_source_ref_count[i]++;
+-		break;
+-	}
+ 
+ 	if (pool->dp_clock_source == clock_source)
+ 		res_ctx->dp_clock_source_ref_count++;
+ }
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source)
++{
++	int i = find_matching_clock_source(pool, clock_source);
++
++	if (i > -1)
++		return res_ctx->clock_source_ref_count[i];
++
++	if (pool->dp_clock_source == clock_source)
++		return res_ctx->dp_clock_source_ref_count;
++
++	return -1;
++}
++
+ bool resource_are_streams_timing_synchronizable(
+ 	struct dc_stream_state *stream1,
+ 	struct dc_stream_state *stream2)
+@@ -330,6 +348,9 @@ bool resource_are_streams_timing_synchronizable(
+ 				!= stream2->timing.pix_clk_khz)
+ 		return false;
+ 
++	if (stream1->clamping.c_depth != stream2->clamping.c_depth)
++		return false;
++
+ 	if (stream1->phy_pix_clk != stream2->phy_pix_clk
+ 			&& (!dc_is_dp_signal(stream1->signal)
+ 			|| !dc_is_dp_signal(stream2->signal)))
+@@ -337,6 +358,20 @@ bool resource_are_streams_timing_synchronizable(
+ 
+ 	return true;
+ }
++static bool is_dp_and_hdmi_sharable(
++		struct dc_stream_state *stream1,
++		struct dc_stream_state *stream2)
++{
++	if (stream1->ctx->dc->caps.disable_dp_clk_share)
++		return false;
++
++	if (stream1->clamping.c_depth != COLOR_DEPTH_888 ||
++	    stream2->clamping.c_depth != COLOR_DEPTH_888)
++	return false;
++
++	return true;
++
++}
+ 
+ static bool is_sharable_clk_src(
+ 	const struct pipe_ctx *pipe_with_clk_src,
+@@ -348,7 +383,10 @@ static bool is_sharable_clk_src(
+ 	if (pipe_with_clk_src->stream->signal == SIGNAL_TYPE_VIRTUAL)
+ 		return false;
+ 
+-	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal))
++	if (dc_is_dp_signal(pipe_with_clk_src->stream->signal) ||
++		(dc_is_dp_signal(pipe->stream->signal) &&
++		!is_dp_and_hdmi_sharable(pipe_with_clk_src->stream,
++				     pipe->stream)))
+ 		return false;
+ 
+ 	if (dc_is_hdmi_signal(pipe_with_clk_src->stream->signal)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 53c71296f3dd..efe155d50668 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -77,6 +77,7 @@ struct dc_caps {
+ 	bool dual_link_dvi;
+ 	bool post_blend_color_processing;
+ 	bool force_dp_tps4_for_cp2520;
++	bool disable_dp_clk_share;
+ };
+ 
+ struct dc_dcc_surface_param {
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index dbe3b26b6d9e..f6ec1d3dfd0c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -919,7 +919,7 @@ void dce110_link_encoder_enable_tmds_output(
+ 	enum bp_result result;
+ 
+ 	/* Enable the PHY */
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+@@ -961,7 +961,7 @@ void dce110_link_encoder_enable_dp_output(
+ 	 * We need to set number of lanes manually.
+ 	 */
+ 	configure_encoder(enc110, link_settings);
+-
++	cntl.connector_obj_id = enc110->base.connector;
+ 	cntl.action = TRANSMITTER_CONTROL_ENABLE;
+ 	cntl.engine_id = enc->preferred_engine;
+ 	cntl.transmitter = enc110->base.transmitter;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+index 344dd2e69e7c..aa2f03eb46fe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+@@ -884,7 +884,7 @@ static bool construct(
+ 	dc->caps.i2c_speed_in_khz = 40;
+ 	dc->caps.max_cursor_size = 128;
+ 	dc->caps.dual_link_dvi = true;
+-
++	dc->caps.disable_dp_clk_share = true;
+ 	for (i = 0; i < pool->base.pipe_count; i++) {
+ 		pool->base.timing_generators[i] =
+ 			dce100_timing_generator_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+index e2994d337044..111c4921987f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_compressor.c
+@@ -143,7 +143,7 @@ static void wait_for_fbc_state_changed(
+ 	struct dce110_compressor *cp110,
+ 	bool enabled)
+ {
+-	uint8_t counter = 0;
++	uint16_t counter = 0;
+ 	uint32_t addr = mmFBC_STATUS;
+ 	uint32_t value;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index c29052b6da5a..7c0b1d7aa9b8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -1939,7 +1939,9 @@ static void dce110_reset_hw_ctx_wrap(
+ 			pipe_ctx_old->plane_res.mi->funcs->free_mem_input(
+ 					pipe_ctx_old->plane_res.mi, dc->current_state->stream_count);
+ 
+-			if (old_clk)
++			if (old_clk && 0 == resource_get_clock_source_reference(&context->res_ctx,
++										dc->res_pool,
++										old_clk))
+ 				old_clk->funcs->cs_power_down(old_clk);
+ 
+ 			dc->hwss.disable_plane(dc, pipe_ctx_old);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+index 48a068964722..6f4992bdc9ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+@@ -902,6 +902,7 @@ static bool dce80_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1087,6 +1088,7 @@ static bool dce81_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+@@ -1268,6 +1270,7 @@ static bool dce83_construct(
+ 	}
+ 
+ 	dc->caps.max_planes =  pool->base.pipe_count;
++	dc->caps.disable_dp_clk_share = true;
+ 
+ 	if (!resource_construct(num_virtual_links, dc, &pool->base,
+ 			&res_create_funcs))
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+index 640a647f4611..abf42a7d0859 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
+@@ -102,6 +102,11 @@ void resource_reference_clock_source(
+ 		const struct resource_pool *pool,
+ 		struct clock_source *clock_source);
+ 
++int resource_get_clock_source_reference(
++		struct resource_context *res_ctx,
++		const struct resource_pool *pool,
++		struct clock_source *clock_source);
++
+ bool resource_are_streams_timing_synchronizable(
+ 		struct dc_stream_state *stream1,
+ 		struct dc_stream_state *stream2);
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+index c952845833d7..5e19f5977eb1 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
+@@ -403,6 +403,49 @@ static const struct gpu_pt_config_reg DIDTConfig_Polaris12[] = {
+ 	{   ixDIDT_SQ_CTRL1,                   DIDT_SQ_CTRL1__MAX_POWER_MASK,                      DIDT_SQ_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__UNUSED_0_MASK,                    DIDT_SQ_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL_OCP,                DIDT_SQ_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_SQ_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_SQ_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_0_MASK,                       DIDT_SQ_CTRL2__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE_MASK,       DIDT_SQ_CTRL2__SHORT_TERM_INTERVAL_SIZE__SHIFT,     0x005a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_1_MASK,                       DIDT_SQ_CTRL2__UNUSED_1__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO_MASK,       DIDT_SQ_CTRL2__LONG_TERM_INTERVAL_RATIO__SHIFT,     0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL2,                   DIDT_SQ_CTRL2__UNUSED_2_MASK,                       DIDT_SQ_CTRL2__UNUSED_2__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE_MASK,    DIDT_SQ_STALL_CTRL__DIDT_STALL_CTRL_ENABLE__SHIFT,  0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_HI__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO_MASK,       DIDT_SQ_STALL_CTRL__DIDT_STALL_DELAY_LO__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD_MASK,   DIDT_SQ_STALL_CTRL__DIDT_HI_POWER_THRESHOLD__SHIFT, 0x0ebb,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_STALL_CTRL,              DIDT_SQ_STALL_CTRL__UNUSED_0_MASK,                  DIDT_SQ_STALL_CTRL__UNUSED_0__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE_MASK,       DIDT_SQ_TUNING_CTRL__DIDT_TUNING_ENABLE__SHIFT,     0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_HI__SHIFT,     0x3853,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO_MASK,       DIDT_SQ_TUNING_CTRL__MAX_POWER_DELTA_LO__SHIFT,     0x3153,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_TUNING_CTRL,             DIDT_SQ_TUNING_CTRL__UNUSED_0_MASK,                 DIDT_SQ_TUNING_CTRL__UNUSED_0__SHIFT,               0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN_MASK,                   DIDT_SQ_CTRL0__DIDT_CTRL_EN__SHIFT,                 0x0001,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__USE_REF_CLOCK_MASK,                  DIDT_SQ_CTRL0__USE_REF_CLOCK__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__PHASE_OFFSET_MASK,                   DIDT_SQ_CTRL0__PHASE_OFFSET__SHIFT,                 0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CTRL_RST_MASK,                  DIDT_SQ_CTRL0__DIDT_CTRL_RST__SHIFT,                0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE_MASK,           DIDT_SQ_CTRL0__DIDT_CLK_EN_OVERRIDE__SHIFT,         0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_HI__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO_MASK,     DIDT_SQ_CTRL0__DIDT_MAX_STALLS_ALLOWED_LO__SHIFT,   0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_SQ_CTRL0,                   DIDT_SQ_CTRL0__UNUSED_0_MASK,                       DIDT_SQ_CTRL0__UNUSED_0__SHIFT,                     0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT0_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT0__SHIFT,                  0x000a,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT1_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT1__SHIFT,                  0x0010,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT2_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT2__SHIFT,                  0x0017,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT0_3,               DIDT_TD_WEIGHT0_3__WEIGHT3_MASK,                    DIDT_TD_WEIGHT0_3__WEIGHT3__SHIFT,                  0x002f,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT4_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT4__SHIFT,                  0x0046,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT5_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT5__SHIFT,                  0x005d,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT6_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT6__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_WEIGHT4_7,               DIDT_TD_WEIGHT4_7__WEIGHT7_MASK,                    DIDT_TD_WEIGHT4_7__WEIGHT7__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MIN_POWER_MASK,                      DIDT_TD_CTRL1__MIN_POWER__SHIFT,                    0x0000,     GPU_CONFIGREG_DIDT_IND },
++	{   ixDIDT_TD_CTRL1,                   DIDT_TD_CTRL1__MAX_POWER_MASK,                      DIDT_TD_CTRL1__MAX_POWER__SHIFT,                    0xffff,     GPU_CONFIGREG_DIDT_IND },
++
++	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__UNUSED_0_MASK,                    DIDT_TD_CTRL_OCP__UNUSED_0__SHIFT,                  0x0000,     GPU_CONFIGREG_DIDT_IND },
+ 	{   ixDIDT_TD_CTRL_OCP,                DIDT_TD_CTRL_OCP__OCP_MAX_POWER_MASK,               DIDT_TD_CTRL_OCP__OCP_MAX_POWER__SHIFT,             0x00ff,     GPU_CONFIGREG_DIDT_IND },
+ 
+ 	{   ixDIDT_TD_CTRL2,                   DIDT_TD_CTRL2__MAX_POWER_DELTA_MASK,                DIDT_TD_CTRL2__MAX_POWER_DELTA__SHIFT,              0x3fff,     GPU_CONFIGREG_DIDT_IND },
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+index 50690c72b2ea..617557bd8c24 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
+@@ -244,6 +244,7 @@ static int smu8_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
+ 	return 0;
+ }
+ 
++/* convert from 8-bit vid to real voltage in mV*4 */
+ static uint32_t smu8_convert_8Bit_index_to_voltage(
+ 			struct pp_hwmgr *hwmgr, uint16_t voltage)
+ {
+@@ -1702,13 +1703,13 @@ static int smu8_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+ 	case AMDGPU_PP_SENSOR_VDDNB:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_NB_CURRENTVID) &
+ 			CURRENT_NB_VID_MASK) >> CURRENT_NB_VID__SHIFT;
+-		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp);
++		vddnb = smu8_convert_8Bit_index_to_voltage(hwmgr, tmp) / 4;
+ 		*((uint32_t *)value) = vddnb;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_VDDGFX:
+ 		tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_GFX_CURRENTVID) &
+ 			CURRENT_GFX_VID_MASK) >> CURRENT_GFX_VID__SHIFT;
+-		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp);
++		vddgfx = smu8_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp) / 4;
+ 		*((uint32_t *)value) = vddgfx;
+ 		return 0;
+ 	case AMDGPU_PP_SENSOR_UVD_VCLK:
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+index c98e5de777cd..fcd2808874bf 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
+@@ -490,7 +490,7 @@ static int vega12_get_number_dpm_level(struct pp_hwmgr *hwmgr,
+ static int vega12_get_dpm_frequency_by_index(struct pp_hwmgr *hwmgr,
+ 		PPCLK_e clkID, uint32_t index, uint32_t *clock)
+ {
+-	int result;
++	int result = 0;
+ 
+ 	/*
+ 	 *SMU expects the Clock ID to be in the top 16 bits.
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index a5808382bdf0..c7b4481c90d7 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -116,6 +116,9 @@ static const struct edid_quirk {
+ 	/* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+ 	{ "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
+ 
++	/* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
++	{ "SDC", 0x3652, EDID_QUIRK_FORCE_6BPC },
++
+ 	/* Belinea 10 15 55 */
+ 	{ "MAX", 1516, EDID_QUIRK_PREFER_LARGE_60 },
+ 	{ "MAX", 0x77e, EDID_QUIRK_PREFER_LARGE_60 },
+@@ -163,8 +166,9 @@ static const struct edid_quirk {
+ 	/* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+ 	{ "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
+ 
+-	/* HTC Vive VR Headset */
++	/* HTC Vive and Vive Pro VR Headsets */
+ 	{ "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP },
++	{ "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP },
+ 
+ 	/* Oculus Rift DK1, DK2, and CV1 VR Headsets */
+ 	{ "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP },
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 686f6552db48..3ef440b235e5 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -799,6 +799,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ 
+ free_buffer:
+ 	etnaviv_cmdbuf_free(&gpu->buffer);
++	gpu->buffer.suballoc = NULL;
+ destroy_iommu:
+ 	etnaviv_iommu_destroy(gpu->mmu);
+ 	gpu->mmu = NULL;
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 9c449b8d8eab..015f9e93419d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -919,7 +919,6 @@ static int i915_driver_init_early(struct drm_i915_private *dev_priv,
+ 	spin_lock_init(&dev_priv->uncore.lock);
+ 
+ 	mutex_init(&dev_priv->sb_lock);
+-	mutex_init(&dev_priv->modeset_restore_lock);
+ 	mutex_init(&dev_priv->av_mutex);
+ 	mutex_init(&dev_priv->wm.wm_mutex);
+ 	mutex_init(&dev_priv->pps_mutex);
+@@ -1560,11 +1559,6 @@ static int i915_drm_suspend(struct drm_device *dev)
+ 	pci_power_t opregion_target_state;
+ 	int error;
+ 
+-	/* ignore lid events during suspend */
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_SUSPENDED;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	disable_rpm_wakeref_asserts(dev_priv);
+ 
+ 	/* We do a lot of poking in a lot of registers, make sure they work
+@@ -1764,10 +1758,6 @@ static int i915_drm_resume(struct drm_device *dev)
+ 
+ 	intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING, false);
+ 
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	dev_priv->modeset_restore = MODESET_DONE;
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-
+ 	intel_opregion_notify_adapter(dev_priv, PCI_D0);
+ 
+ 	enable_rpm_wakeref_asserts(dev_priv);
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 71e1aa54f774..7c22fac3aa04 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1003,12 +1003,6 @@ struct i915_gem_mm {
+ #define I915_ENGINE_DEAD_TIMEOUT  (4 * HZ)  /* Seqno, head and subunits dead */
+ #define I915_SEQNO_DEAD_TIMEOUT   (12 * HZ) /* Seqno dead with active head */
+ 
+-enum modeset_restore {
+-	MODESET_ON_LID_OPEN,
+-	MODESET_DONE,
+-	MODESET_SUSPENDED,
+-};
+-
+ #define DP_AUX_A 0x40
+ #define DP_AUX_B 0x10
+ #define DP_AUX_C 0x20
+@@ -1740,8 +1734,6 @@ struct drm_i915_private {
+ 
+ 	unsigned long quirks;
+ 
+-	enum modeset_restore modeset_restore;
+-	struct mutex modeset_restore_lock;
+ 	struct drm_atomic_state *modeset_restore_state;
+ 	struct drm_modeset_acquire_ctx reset_ctx;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 7720569f2024..6e048ee88e3f 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -8825,6 +8825,7 @@ enum skl_power_gate {
+ #define  TRANS_MSA_10_BPC		(2<<5)
+ #define  TRANS_MSA_12_BPC		(3<<5)
+ #define  TRANS_MSA_16_BPC		(4<<5)
++#define  TRANS_MSA_CEA_RANGE		(1<<3)
+ 
+ /* LCPLL Control */
+ #define LCPLL_CTL			_MMIO(0x130040)
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index fed26d6e4e27..e195c287c263 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -1659,6 +1659,10 @@ void intel_ddi_set_pipe_settings(const struct intel_crtc_state *crtc_state)
+ 	WARN_ON(transcoder_is_dsi(cpu_transcoder));
+ 
+ 	temp = TRANS_MSA_SYNC_CLK;
++
++	if (crtc_state->limited_color_range)
++		temp |= TRANS_MSA_CEA_RANGE;
++
+ 	switch (crtc_state->pipe_bpp) {
+ 	case 18:
+ 		temp |= TRANS_MSA_6_BPC;
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 16faea30114a..8e465095fe06 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4293,18 +4293,6 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
+ 	return !drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);
+ }
+ 
+-/*
+- * If display is now connected check links status,
+- * there has been known issues of link loss triggering
+- * long pulse.
+- *
+- * Some sinks (eg. ASUS PB287Q) seem to perform some
+- * weird HPD ping pong during modesets. So we can apparently
+- * end up with HPD going low during a modeset, and then
+- * going back up soon after. And once that happens we must
+- * retrain the link to get a picture. That's in case no
+- * userspace component reacted to intermittent HPD dip.
+- */
+ int intel_dp_retrain_link(struct intel_encoder *encoder,
+ 			  struct drm_modeset_acquire_ctx *ctx)
+ {
+@@ -4794,7 +4782,8 @@ intel_dp_unset_edid(struct intel_dp *intel_dp)
+ }
+ 
+ static int
+-intel_dp_long_pulse(struct intel_connector *connector)
++intel_dp_long_pulse(struct intel_connector *connector,
++		    struct drm_modeset_acquire_ctx *ctx)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
+ 	struct intel_dp *intel_dp = intel_attached_dp(&connector->base);
+@@ -4853,6 +4842,22 @@ intel_dp_long_pulse(struct intel_connector *connector)
+ 		 */
+ 		status = connector_status_disconnected;
+ 		goto out;
++	} else {
++		/*
++		 * If display is now connected check links status,
++		 * there has been known issues of link loss triggering
++		 * long pulse.
++		 *
++		 * Some sinks (eg. ASUS PB287Q) seem to perform some
++		 * weird HPD ping pong during modesets. So we can apparently
++		 * end up with HPD going low during a modeset, and then
++		 * going back up soon after. And once that happens we must
++		 * retrain the link to get a picture. That's in case no
++		 * userspace component reacted to intermittent HPD dip.
++		 */
++		struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
++
++		intel_dp_retrain_link(encoder, ctx);
+ 	}
+ 
+ 	/*
+@@ -4914,7 +4919,7 @@ intel_dp_detect(struct drm_connector *connector,
+ 				return ret;
+ 		}
+ 
+-		status = intel_dp_long_pulse(intel_dp->attached_connector);
++		status = intel_dp_long_pulse(intel_dp->attached_connector, ctx);
+ 	}
+ 
+ 	intel_dp->detect_done = false;
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index d8cb53ef4351..c8640959a7fc 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -933,8 +933,12 @@ static int intel_hdmi_hdcp_write(struct intel_digital_port *intel_dig_port,
+ 
+ 	ret = i2c_transfer(adapter, &msg, 1);
+ 	if (ret == 1)
+-		return 0;
+-	return ret >= 0 ? -EIO : ret;
++		ret = 0;
++	else if (ret >= 0)
++		ret = -EIO;
++
++	kfree(write_buf);
++	return ret;
+ }
+ 
+ static
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index b4941101f21a..cdf19553ffac 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -127,9 +127,7 @@ lpe_audio_platdev_create(struct drm_i915_private *dev_priv)
+ 		return platdev;
+ 	}
+ 
+-	pm_runtime_forbid(&platdev->dev);
+-	pm_runtime_set_active(&platdev->dev);
+-	pm_runtime_enable(&platdev->dev);
++	pm_runtime_no_callbacks(&platdev->dev);
+ 
+ 	return platdev;
+ }
+diff --git a/drivers/gpu/drm/i915/intel_lspcon.c b/drivers/gpu/drm/i915/intel_lspcon.c
+index 8ae8f42f430a..6b6758419fb3 100644
+--- a/drivers/gpu/drm/i915/intel_lspcon.c
++++ b/drivers/gpu/drm/i915/intel_lspcon.c
+@@ -74,7 +74,7 @@ static enum drm_lspcon_mode lspcon_wait_mode(struct intel_lspcon *lspcon,
+ 	DRM_DEBUG_KMS("Waiting for LSPCON mode %s to settle\n",
+ 		      lspcon_mode_name(mode));
+ 
+-	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 100);
++	wait_for((current_mode = lspcon_get_current_mode(lspcon)) == mode, 400);
+ 	if (current_mode != mode)
+ 		DRM_ERROR("LSPCON mode hasn't settled\n");
+ 
+diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
+index 48f618dc9abb..63d7faa99946 100644
+--- a/drivers/gpu/drm/i915/intel_lvds.c
++++ b/drivers/gpu/drm/i915/intel_lvds.c
+@@ -44,8 +44,6 @@
+ /* Private structure for the integrated LVDS support */
+ struct intel_lvds_connector {
+ 	struct intel_connector base;
+-
+-	struct notifier_block lid_notifier;
+ };
+ 
+ struct intel_lvds_pps {
+@@ -454,26 +452,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
+ 	return true;
+ }
+ 
+-/*
+- * Detect the LVDS connection.
+- *
+- * Since LVDS doesn't have hotlug, we use the lid as a proxy.  Open means
+- * connected and closed means disconnected.  We also send hotplug events as
+- * needed, using lid status notification from the input layer.
+- */
+ static enum drm_connector_status
+ intel_lvds_detect(struct drm_connector *connector, bool force)
+ {
+-	struct drm_i915_private *dev_priv = to_i915(connector->dev);
+-	enum drm_connector_status status;
+-
+-	DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",
+-		      connector->base.id, connector->name);
+-
+-	status = intel_panel_detect(dev_priv);
+-	if (status != connector_status_unknown)
+-		return status;
+-
+ 	return connector_status_connected;
+ }
+ 
+@@ -498,117 +479,6 @@ static int intel_lvds_get_modes(struct drm_connector *connector)
+ 	return 1;
+ }
+ 
+-static int intel_no_modeset_on_lid_dmi_callback(const struct dmi_system_id *id)
+-{
+-	DRM_INFO("Skipping forced modeset for %s\n", id->ident);
+-	return 1;
+-}
+-
+-/* The GPU hangs up on these systems if modeset is performed on LID open */
+-static const struct dmi_system_id intel_no_modeset_on_lid[] = {
+-	{
+-		.callback = intel_no_modeset_on_lid_dmi_callback,
+-		.ident = "Toshiba Tecra A11",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+-			DMI_MATCH(DMI_PRODUCT_NAME, "TECRA A11"),
+-		},
+-	},
+-
+-	{ }	/* terminating entry */
+-};
+-
+-/*
+- * Lid events. Note the use of 'modeset':
+- *  - we set it to MODESET_ON_LID_OPEN on lid close,
+- *    and set it to MODESET_DONE on open
+- *  - we use it as a "only once" bit (ie we ignore
+- *    duplicate events where it was already properly set)
+- *  - the suspend/resume paths will set it to
+- *    MODESET_SUSPENDED and ignore the lid open event,
+- *    because they restore the mode ("lid open").
+- */
+-static int intel_lid_notify(struct notifier_block *nb, unsigned long val,
+-			    void *unused)
+-{
+-	struct intel_lvds_connector *lvds_connector =
+-		container_of(nb, struct intel_lvds_connector, lid_notifier);
+-	struct drm_connector *connector = &lvds_connector->base.base;
+-	struct drm_device *dev = connector->dev;
+-	struct drm_i915_private *dev_priv = to_i915(dev);
+-
+-	if (dev->switch_power_state != DRM_SWITCH_POWER_ON)
+-		return NOTIFY_OK;
+-
+-	mutex_lock(&dev_priv->modeset_restore_lock);
+-	if (dev_priv->modeset_restore == MODESET_SUSPENDED)
+-		goto exit;
+-	/*
+-	 * check and update the status of LVDS connector after receiving
+-	 * the LID nofication event.
+-	 */
+-	connector->status = connector->funcs->detect(connector, false);
+-
+-	/* Don't force modeset on machines where it causes a GPU lockup */
+-	if (dmi_check_system(intel_no_modeset_on_lid))
+-		goto exit;
+-	if (!acpi_lid_open()) {
+-		/* do modeset on next lid open event */
+-		dev_priv->modeset_restore = MODESET_ON_LID_OPEN;
+-		goto exit;
+-	}
+-
+-	if (dev_priv->modeset_restore == MODESET_DONE)
+-		goto exit;
+-
+-	/*
+-	 * Some old platform's BIOS love to wreak havoc while the lid is closed.
+-	 * We try to detect this here and undo any damage. The split for PCH
+-	 * platforms is rather conservative and a bit arbitrary expect that on
+-	 * those platforms VGA disabling requires actual legacy VGA I/O access,
+-	 * and as part of the cleanup in the hw state restore we also redisable
+-	 * the vga plane.
+-	 */
+-	if (!HAS_PCH_SPLIT(dev_priv))
+-		intel_display_resume(dev);
+-
+-	dev_priv->modeset_restore = MODESET_DONE;
+-
+-exit:
+-	mutex_unlock(&dev_priv->modeset_restore_lock);
+-	return NOTIFY_OK;
+-}
+-
+-static int
+-intel_lvds_connector_register(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-	int ret;
+-
+-	ret = intel_connector_register(connector);
+-	if (ret)
+-		return ret;
+-
+-	lvds->lid_notifier.notifier_call = intel_lid_notify;
+-	if (acpi_lid_notifier_register(&lvds->lid_notifier)) {
+-		DRM_DEBUG_KMS("lid notifier registration failed\n");
+-		lvds->lid_notifier.notifier_call = NULL;
+-	}
+-
+-	return 0;
+-}
+-
+-static void
+-intel_lvds_connector_unregister(struct drm_connector *connector)
+-{
+-	struct intel_lvds_connector *lvds = to_lvds_connector(connector);
+-
+-	if (lvds->lid_notifier.notifier_call)
+-		acpi_lid_notifier_unregister(&lvds->lid_notifier);
+-
+-	intel_connector_unregister(connector);
+-}
+-
+ /**
+  * intel_lvds_destroy - unregister and free LVDS structures
+  * @connector: connector to free
+@@ -641,8 +511,8 @@ static const struct drm_connector_funcs intel_lvds_connector_funcs = {
+ 	.fill_modes = drm_helper_probe_single_connector_modes,
+ 	.atomic_get_property = intel_digital_connector_atomic_get_property,
+ 	.atomic_set_property = intel_digital_connector_atomic_set_property,
+-	.late_register = intel_lvds_connector_register,
+-	.early_unregister = intel_lvds_connector_unregister,
++	.late_register = intel_connector_register,
++	.early_unregister = intel_connector_unregister,
+ 	.destroy = intel_lvds_destroy,
+ 	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ 	.atomic_duplicate_state = intel_digital_connector_duplicate_state,
+@@ -1108,8 +978,6 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
+ 	 * 2) check for VBT data
+ 	 * 3) check to see if LVDS is already on
+ 	 *    if none of the above, no panel
+-	 * 4) make sure lid is open
+-	 *    if closed, act like it's not there for now
+ 	 */
+ 
+ 	/*
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 2121345a61af..78ce3d232c4d 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -486,6 +486,31 @@ static void vop_line_flag_irq_disable(struct vop *vop)
+ 	spin_unlock_irqrestore(&vop->irq_lock, flags);
+ }
+ 
++static int vop_core_clks_enable(struct vop *vop)
++{
++	int ret;
++
++	ret = clk_enable(vop->hclk);
++	if (ret < 0)
++		return ret;
++
++	ret = clk_enable(vop->aclk);
++	if (ret < 0)
++		goto err_disable_hclk;
++
++	return 0;
++
++err_disable_hclk:
++	clk_disable(vop->hclk);
++	return ret;
++}
++
++static void vop_core_clks_disable(struct vop *vop)
++{
++	clk_disable(vop->aclk);
++	clk_disable(vop->hclk);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ 	struct vop *vop = to_vop(crtc);
+@@ -497,17 +522,13 @@ static int vop_enable(struct drm_crtc *crtc)
+ 		return ret;
+ 	}
+ 
+-	ret = clk_enable(vop->hclk);
++	ret = vop_core_clks_enable(vop);
+ 	if (WARN_ON(ret < 0))
+ 		goto err_put_pm_runtime;
+ 
+ 	ret = clk_enable(vop->dclk);
+ 	if (WARN_ON(ret < 0))
+-		goto err_disable_hclk;
+-
+-	ret = clk_enable(vop->aclk);
+-	if (WARN_ON(ret < 0))
+-		goto err_disable_dclk;
++		goto err_disable_core;
+ 
+ 	/*
+ 	 * Slave iommu shares power, irq and clock with vop.  It was associated
+@@ -519,7 +540,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ 	if (ret) {
+ 		DRM_DEV_ERROR(vop->dev,
+ 			      "failed to attach dma mapping, %d\n", ret);
+-		goto err_disable_aclk;
++		goto err_disable_dclk;
+ 	}
+ 
+ 	spin_lock(&vop->reg_lock);
+@@ -552,18 +573,14 @@ static int vop_enable(struct drm_crtc *crtc)
+ 
+ 	spin_unlock(&vop->reg_lock);
+ 
+-	enable_irq(vop->irq);
+-
+ 	drm_crtc_vblank_on(crtc);
+ 
+ 	return 0;
+ 
+-err_disable_aclk:
+-	clk_disable(vop->aclk);
+ err_disable_dclk:
+ 	clk_disable(vop->dclk);
+-err_disable_hclk:
+-	clk_disable(vop->hclk);
++err_disable_core:
++	vop_core_clks_disable(vop);
+ err_put_pm_runtime:
+ 	pm_runtime_put_sync(vop->dev);
+ 	return ret;
+@@ -599,8 +616,6 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 
+ 	vop_dsp_hold_valid_irq_disable(vop);
+ 
+-	disable_irq(vop->irq);
+-
+ 	vop->is_enabled = false;
+ 
+ 	/*
+@@ -609,8 +624,7 @@ static void vop_crtc_atomic_disable(struct drm_crtc *crtc,
+ 	rockchip_drm_dma_detach_device(vop->drm_dev, vop->dev);
+ 
+ 	clk_disable(vop->dclk);
+-	clk_disable(vop->aclk);
+-	clk_disable(vop->hclk);
++	vop_core_clks_disable(vop);
+ 	pm_runtime_put(vop->dev);
+ 	mutex_unlock(&vop->vop_lock);
+ 
+@@ -1177,6 +1191,18 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 	uint32_t active_irqs;
+ 	int ret = IRQ_NONE;
+ 
++	/*
++	 * The irq is shared with the iommu. If the runtime-pm state of the
++	 * vop-device is disabled the irq has to be targeted at the iommu.
++	 */
++	if (!pm_runtime_get_if_in_use(vop->dev))
++		return IRQ_NONE;
++
++	if (vop_core_clks_enable(vop)) {
++		DRM_DEV_ERROR_RATELIMITED(vop->dev, "couldn't enable clocks\n");
++		goto out;
++	}
++
+ 	/*
+ 	 * interrupt register has interrupt status, enable and clear bits, we
+ 	 * must hold irq_lock to avoid a race with enable/disable_vblank().
+@@ -1192,7 +1218,7 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 
+ 	/* This is expected for vop iommu irqs, since the irq is shared */
+ 	if (!active_irqs)
+-		return IRQ_NONE;
++		goto out_disable;
+ 
+ 	if (active_irqs & DSP_HOLD_VALID_INTR) {
+ 		complete(&vop->dsp_hold_completion);
+@@ -1218,6 +1244,10 @@ static irqreturn_t vop_isr(int irq, void *data)
+ 		DRM_DEV_ERROR(vop->dev, "Unknown VOP IRQs: %#02x\n",
+ 			      active_irqs);
+ 
++out_disable:
++	vop_core_clks_disable(vop);
++out:
++	pm_runtime_put(vop->dev);
+ 	return ret;
+ }
+ 
+@@ -1596,9 +1626,6 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ 	if (ret)
+ 		goto err_disable_pm_runtime;
+ 
+-	/* IRQ is initially disabled; it gets enabled in power_on */
+-	disable_irq(vop->irq);
+-
+ 	return 0;
+ 
+ err_disable_pm_runtime:
+diff --git a/drivers/gpu/drm/rockchip/rockchip_lvds.c b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+index e67f4ea28c0e..051b8be3dc0f 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_lvds.c
++++ b/drivers/gpu/drm/rockchip/rockchip_lvds.c
+@@ -363,8 +363,10 @@ static int rockchip_lvds_bind(struct device *dev, struct device *master,
+ 		of_property_read_u32(endpoint, "reg", &endpoint_id);
+ 		ret = drm_of_find_panel_or_bridge(dev->of_node, 1, endpoint_id,
+ 						  &lvds->panel, &lvds->bridge);
+-		if (!ret)
++		if (!ret) {
++			of_node_put(endpoint);
+ 			break;
++		}
+ 	}
+ 	if (!child_count) {
+ 		DRM_DEV_ERROR(dev, "lvds port does not have any children\n");
+diff --git a/drivers/hid/hid-redragon.c b/drivers/hid/hid-redragon.c
+index daf59578bf93..73c9d4c4fa34 100644
+--- a/drivers/hid/hid-redragon.c
++++ b/drivers/hid/hid-redragon.c
+@@ -44,29 +44,6 @@ static __u8 *redragon_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 	return rdesc;
+ }
+ 
+-static int redragon_probe(struct hid_device *dev,
+-	const struct hid_device_id *id)
+-{
+-	int ret;
+-
+-	ret = hid_parse(dev);
+-	if (ret) {
+-		hid_err(dev, "parse failed\n");
+-		return ret;
+-	}
+-
+-	/* do not register unused input device */
+-	if (dev->maxapplication == 1)
+-		return 0;
+-
+-	ret = hid_hw_start(dev, HID_CONNECT_DEFAULT);
+-	if (ret) {
+-		hid_err(dev, "hw start failed\n");
+-		return ret;
+-	}
+-
+-	return 0;
+-}
+ static const struct hid_device_id redragon_devices[] = {
+ 	{HID_USB_DEVICE(USB_VENDOR_ID_JESS, USB_DEVICE_ID_REDRAGON_ASURA)},
+ 	{}
+@@ -77,8 +54,7 @@ MODULE_DEVICE_TABLE(hid, redragon_devices);
+ static struct hid_driver redragon_driver = {
+ 	.name = "redragon",
+ 	.id_table = redragon_devices,
+-	.report_fixup = redragon_report_fixup,
+-	.probe = redragon_probe
++	.report_fixup = redragon_report_fixup
+ };
+ 
+ module_hid_driver(redragon_driver);
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index b8f303dea305..32affd3fa8bd 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -453,8 +453,12 @@ static int acpi_gsb_i2c_read_bytes(struct i2c_client *client,
+ 		else
+ 			dev_err(&client->adapter->dev, "i2c read %d bytes from client@%#x starting at reg %#x failed, error: %d\n",
+ 				data_len, client->addr, cmd, ret);
+-	} else {
++	/* 2 transfers must have completed successfully */
++	} else if (ret == 2) {
+ 		memcpy(data, buffer, data_len);
++		ret = 0;
++	} else {
++		ret = -EIO;
+ 	}
+ 
+ 	kfree(buffer);
+@@ -595,8 +599,6 @@ i2c_acpi_space_handler(u32 function, acpi_physical_address command,
+ 		if (action == ACPI_READ) {
+ 			status = acpi_gsb_i2c_read_bytes(client, command,
+ 					gsb->data, info->access_length);
+-			if (status > 0)
+-				status = 0;
+ 		} else {
+ 			status = acpi_gsb_i2c_write_bytes(client, command,
+ 					gsb->data, info->access_length);
+diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
+index fbe7198a715a..bedd5fba33b0 100644
+--- a/drivers/infiniband/hw/hfi1/affinity.c
++++ b/drivers/infiniband/hw/hfi1/affinity.c
+@@ -198,7 +198,7 @@ int node_affinity_init(void)
+ 		while ((dev = pci_get_device(ids->vendor, ids->device, dev))) {
+ 			node = pcibus_to_node(dev->bus);
+ 			if (node < 0)
+-				node = numa_node_id();
++				goto out;
+ 
+ 			hfi1_per_node_cntr[node]++;
+ 		}
+@@ -206,6 +206,18 @@ int node_affinity_init(void)
+ 	}
+ 
+ 	return 0;
++
++out:
++	/*
++	 * Invalid PCI NUMA node information found, note it, and populate
++	 * our database 1:1.
++	 */
++	pr_err("HFI: Invalid PCI NUMA node. Performance may be affected\n");
++	pr_err("HFI: System BIOS may need to be upgraded\n");
++	for (node = 0; node < node_affinity.num_possible_nodes; node++)
++		hfi1_per_node_cntr[node] = 1;
++
++	return 0;
+ }
+ 
+ static void node_affinity_destroy(struct hfi1_affinity_node *entry)
+@@ -622,8 +634,14 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+ 	int curr_cpu, possible, i, ret;
+ 	bool new_entry = false;
+ 
+-	if (node < 0)
+-		node = numa_node_id();
++	/*
++	 * If the BIOS does not have the NUMA node information set, select
++	 * NUMA 0 so we get consistent performance.
++	 */
++	if (node < 0) {
++		dd_dev_err(dd, "Invalid PCI NUMA node. Performance may be affected\n");
++		node = 0;
++	}
+ 	dd->node = node;
+ 
+ 	local_mask = cpumask_of_node(dd->node);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c
+index b9f2c871ff9a..e11c149da04d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_pd.c
++++ b/drivers/infiniband/hw/hns/hns_roce_pd.c
+@@ -37,7 +37,7 @@
+ 
+ static int hns_roce_pd_alloc(struct hns_roce_dev *hr_dev, unsigned long *pdn)
+ {
+-	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn);
++	return hns_roce_bitmap_alloc(&hr_dev->pd_bitmap, pdn) ? -ENOMEM : 0;
+ }
+ 
+ static void hns_roce_pd_free(struct hns_roce_dev *hr_dev, unsigned long pdn)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index baaf906f7c2e..97664570c5ac 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -115,7 +115,10 @@ static int hns_roce_reserve_range_qp(struct hns_roce_dev *hr_dev, int cnt,
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 
+-	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align, base);
++	return hns_roce_bitmap_alloc_range(&qp_table->bitmap, cnt, align,
++					   base) ?
++		       -ENOMEM :
++		       0;
+ }
+ 
+ enum hns_roce_qp_state to_hns_roce_state(enum ib_qp_state state)
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index 6365c1958264..3304aaaffe87 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -480,11 +480,19 @@ EXPORT_SYMBOL(input_inject_event);
+  */
+ void input_alloc_absinfo(struct input_dev *dev)
+ {
+-	if (!dev->absinfo)
+-		dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo),
+-					GFP_KERNEL);
++	if (dev->absinfo)
++		return;
+ 
+-	WARN(!dev->absinfo, "%s(): kcalloc() failed?\n", __func__);
++	dev->absinfo = kcalloc(ABS_CNT, sizeof(*dev->absinfo), GFP_KERNEL);
++	if (!dev->absinfo) {
++		dev_err(dev->dev.parent ?: &dev->dev,
++			"%s: unable to allocate memory\n", __func__);
++		/*
++		 * We will handle this allocation failure in
++		 * input_register_device() when we refuse to register input
++		 * device with ABS bits but without absinfo.
++		 */
++	}
+ }
+ EXPORT_SYMBOL(input_alloc_absinfo);
+ 
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index af4a8e7fcd27..3b05117118c3 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -550,7 +550,7 @@ static u32 *iopte_alloc(struct omap_iommu *obj, u32 *iopgd,
+ 
+ pte_ready:
+ 	iopte = iopte_offset(iopgd, da);
+-	*pt_dma = virt_to_phys(iopte);
++	*pt_dma = iopgd_page_paddr(iopgd);
+ 	dev_vdbg(obj->dev,
+ 		 "%s: da:%08x pgd:%p *pgd:%08x pte:%p *pte:%08x\n",
+ 		 __func__, da, iopgd, *iopgd, iopte, *iopte);
+@@ -738,7 +738,7 @@ static size_t iopgtable_clear_entry_core(struct omap_iommu *obj, u32 da)
+ 		}
+ 		bytes *= nent;
+ 		memset(iopte, 0, nent * sizeof(*iopte));
+-		pt_dma = virt_to_phys(iopte);
++		pt_dma = iopgd_page_paddr(iopgd);
+ 		flush_iopte_range(obj->dev, pt_dma, pt_offset, nent);
+ 
+ 		/*
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 054cd2c8e9c8..2b1724e8d307 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -521,10 +521,11 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id)
+ 	u32 int_status;
+ 	dma_addr_t iova;
+ 	irqreturn_t ret = IRQ_NONE;
+-	int i;
++	int i, err;
+ 
+-	if (WARN_ON(!pm_runtime_get_if_in_use(iommu->dev)))
+-		return 0;
++	err = pm_runtime_get_if_in_use(iommu->dev);
++	if (WARN_ON_ONCE(err <= 0))
++		return ret;
+ 
+ 	if (WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)))
+ 		goto out;
+@@ -620,11 +621,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
+ 	spin_lock_irqsave(&rk_domain->iommus_lock, flags);
+ 	list_for_each(pos, &rk_domain->iommus) {
+ 		struct rk_iommu *iommu;
++		int ret;
+ 
+ 		iommu = list_entry(pos, struct rk_iommu, node);
+ 
+ 		/* Only zap TLBs of IOMMUs that are powered on. */
+-		if (pm_runtime_get_if_in_use(iommu->dev)) {
++		ret = pm_runtime_get_if_in_use(iommu->dev);
++		if (WARN_ON_ONCE(ret < 0))
++			continue;
++		if (ret) {
+ 			WARN_ON(clk_bulk_enable(iommu->num_clocks,
+ 						iommu->clocks));
+ 			rk_iommu_zap_lines(iommu, iova, size);
+@@ -891,6 +896,7 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	struct rk_iommu *iommu;
+ 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
+ 	unsigned long flags;
++	int ret;
+ 
+ 	/* Allow 'virtual devices' (eg drm) to detach from domain */
+ 	iommu = rk_iommu_from_dev(dev);
+@@ -909,7 +915,9 @@ static void rk_iommu_detach_device(struct iommu_domain *domain,
+ 	list_del_init(&iommu->node);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (pm_runtime_get_if_in_use(iommu->dev)) {
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	WARN_ON_ONCE(ret < 0);
++	if (ret > 0) {
+ 		rk_iommu_disable(iommu);
+ 		pm_runtime_put(iommu->dev);
+ 	}
+@@ -946,7 +954,8 @@ static int rk_iommu_attach_device(struct iommu_domain *domain,
+ 	list_add_tail(&iommu->node, &rk_domain->iommus);
+ 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
+ 
+-	if (!pm_runtime_get_if_in_use(iommu->dev))
++	ret = pm_runtime_get_if_in_use(iommu->dev);
++	if (!ret || WARN_ON_ONCE(ret < 0))
+ 		return 0;
+ 
+ 	ret = rk_iommu_enable(iommu);
+@@ -1152,17 +1161,6 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 	if (iommu->num_mmu == 0)
+ 		return PTR_ERR(iommu->bases[0]);
+ 
+-	i = 0;
+-	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
+-		if (irq < 0)
+-			return irq;
+-
+-		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
+-				       IRQF_SHARED, dev_name(dev), iommu);
+-		if (err)
+-			return err;
+-	}
+-
+ 	iommu->reset_disabled = device_property_read_bool(dev,
+ 					"rockchip,disable-mmu-reset");
+ 
+@@ -1219,6 +1217,19 @@ static int rk_iommu_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 
++	i = 0;
++	while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) {
++		if (irq < 0)
++			return irq;
++
++		err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
++				       IRQF_SHARED, dev_name(dev), iommu);
++		if (err) {
++			pm_runtime_disable(dev);
++			goto err_remove_sysfs;
++		}
++	}
++
+ 	return 0;
+ err_remove_sysfs:
+ 	iommu_device_sysfs_remove(&iommu->iommu);
+diff --git a/drivers/irqchip/irq-bcm7038-l1.c b/drivers/irqchip/irq-bcm7038-l1.c
+index faf734ff4cf3..0f6e30e9009d 100644
+--- a/drivers/irqchip/irq-bcm7038-l1.c
++++ b/drivers/irqchip/irq-bcm7038-l1.c
+@@ -217,6 +217,7 @@ static int bcm7038_l1_set_affinity(struct irq_data *d,
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_SMP
+ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ {
+ 	struct cpumask *mask = irq_data_get_affinity_mask(d);
+@@ -241,6 +242,7 @@ static void bcm7038_l1_cpu_offline(struct irq_data *d)
+ 	}
+ 	irq_set_affinity_locked(d, &new_affinity, false);
+ }
++#endif
+ 
+ static int __init bcm7038_l1_init_one(struct device_node *dn,
+ 				      unsigned int idx,
+@@ -293,7 +295,9 @@ static struct irq_chip bcm7038_l1_irq_chip = {
+ 	.irq_mask		= bcm7038_l1_mask,
+ 	.irq_unmask		= bcm7038_l1_unmask,
+ 	.irq_set_affinity	= bcm7038_l1_set_affinity,
++#ifdef CONFIG_SMP
+ 	.irq_cpu_offline	= bcm7038_l1_cpu_offline,
++#endif
+ };
+ 
+ static int bcm7038_l1_map(struct irq_domain *d, unsigned int virq,
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 3a7e8905a97e..880e48947576 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -602,17 +602,24 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
+ 					sizeof(struct stm32_exti_chip_data),
+ 					GFP_KERNEL);
+ 	if (!host_data->chips_data)
+-		return NULL;
++		goto free_host_data;
+ 
+ 	host_data->base = of_iomap(node, 0);
+ 	if (!host_data->base) {
+ 		pr_err("%pOF: Unable to map registers\n", node);
+-		return NULL;
++		goto free_chips_data;
+ 	}
+ 
+ 	stm32_host_data = host_data;
+ 
+ 	return host_data;
++
++free_chips_data:
++	kfree(host_data->chips_data);
++free_host_data:
++	kfree(host_data);
++
++	return NULL;
+ }
+ 
+ static struct
+@@ -664,10 +671,8 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
+ 	struct irq_domain *domain;
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	domain = irq_domain_add_linear(node, drv_data->bank_nr * IRQS_PER_BANK,
+ 				       &irq_exti_domain_ops, NULL);
+@@ -724,7 +729,6 @@ out_free_domain:
+ 	irq_domain_remove(domain);
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+@@ -751,10 +755,8 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 	}
+ 
+ 	host_data = stm32_exti_host_init(drv_data, node);
+-	if (!host_data) {
+-		ret = -ENOMEM;
+-		goto out_free_mem;
+-	}
++	if (!host_data)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < drv_data->bank_nr; i++)
+ 		stm32_exti_chip_init(host_data, i, node);
+@@ -776,7 +778,6 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
+ 
+ out_unmap:
+ 	iounmap(host_data->base);
+-out_free_mem:
+ 	kfree(host_data->chips_data);
+ 	kfree(host_data);
+ 	return ret;
+diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
+index 3c7547a3c371..d7b9cdafd1c3 100644
+--- a/drivers/md/dm-kcopyd.c
++++ b/drivers/md/dm-kcopyd.c
+@@ -487,6 +487,8 @@ static int run_complete_job(struct kcopyd_job *job)
+ 	if (atomic_dec_and_test(&kc->nr_jobs))
+ 		wake_up(&kc->destroyq);
+ 
++	cond_resched();
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index 2a87b0d2f21f..a530972c5a7e 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -715,6 +715,7 @@ sm501_create_subdev(struct sm501_devdata *sm, char *name,
+ 	smdev->pdev.name = name;
+ 	smdev->pdev.id = sm->pdev_id;
+ 	smdev->pdev.dev.parent = sm->dev;
++	smdev->pdev.dev.coherent_dma_mask = 0xffffffff;
+ 
+ 	if (res_count) {
+ 		smdev->pdev.resource = (struct resource *)(smdev+1);
+diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
+index 94d7a865b135..7504f430c011 100644
+--- a/drivers/mtd/ubi/vtbl.c
++++ b/drivers/mtd/ubi/vtbl.c
+@@ -578,6 +578,16 @@ static int init_volumes(struct ubi_device *ubi,
+ 		vol->ubi = ubi;
+ 		reserved_pebs += vol->reserved_pebs;
+ 
++		/*
++		 * We use ubi->peb_count and not vol->reserved_pebs because
++		 * we want to keep the code simple. Otherwise we'd have to
++		 * resize/check the bitmap upon volume resize too.
++		 * Allocating a few bytes more does not hurt.
++		 */
++		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
++		if (err)
++			return err;
++
+ 		/*
+ 		 * In case of dynamic volume UBI knows nothing about how many
+ 		 * data is stored there. So assume the whole volume is used.
+@@ -620,16 +630,6 @@ static int init_volumes(struct ubi_device *ubi,
+ 			(long long)(vol->used_ebs - 1) * vol->usable_leb_size;
+ 		vol->used_bytes += av->last_data_size;
+ 		vol->last_eb_bytes = av->last_data_size;
+-
+-		/*
+-		 * We use ubi->peb_count and not vol->reserved_pebs because
+-		 * we want to keep the code simple. Otherwise we'd have to
+-		 * resize/check the bitmap upon volume resize too.
+-		 * Allocating a few bytes more does not hurt.
+-		 */
+-		err = ubi_fastmap_init_checkmap(vol, ubi->peb_count);
+-		if (err)
+-			return err;
+ 	}
+ 
+ 	/* And add the layout volume */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4394c1162be4..4fdf3d33aa59 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -5907,12 +5907,12 @@ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp)
+ 	return bp->hw_resc.max_cp_rings;
+ }
+ 
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max)
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp)
+ {
+-	bp->hw_resc.max_cp_rings = max;
++	return bp->hw_resc.max_cp_rings - bnxt_get_ulp_msix_num(bp);
+ }
+ 
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
++static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
+ {
+ 	struct bnxt_hw_resc *hw_resc = &bp->hw_resc;
+ 
+@@ -8492,7 +8492,8 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
+ 
+ 	*max_tx = hw_resc->max_tx_rings;
+ 	*max_rx = hw_resc->max_rx_rings;
+-	*max_cp = min_t(int, hw_resc->max_irqs, hw_resc->max_cp_rings);
++	*max_cp = min_t(int, bnxt_get_max_func_cp_rings_for_en(bp),
++			hw_resc->max_irqs);
+ 	*max_cp = min_t(int, *max_cp, hw_resc->max_stat_ctxs);
+ 	max_ring_grps = hw_resc->max_hw_ring_grps;
+ 	if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 91575ef97c8c..ea1246a94b38 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1468,8 +1468,7 @@ int bnxt_hwrm_set_coal(struct bnxt *);
+ unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp);
+ void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+-void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max);
+-unsigned int bnxt_get_max_func_irqs(struct bnxt *bp);
++unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+ int bnxt_reserve_rings(struct bnxt *bp);
+ void bnxt_tx_disable(struct bnxt *bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index a64910892c25..2c77004a022b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -451,7 +451,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs)
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESOURCE_CFG, -1, -1);
+ 
+-	vf_cp_rings = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	vf_cp_rings = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	vf_stat_ctx = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings * 2;
+@@ -544,7 +544,8 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
+ 	max_stat_ctxs = hw_resc->max_stat_ctxs;
+ 
+ 	/* Remaining rings are distributed equally amongs VF's for now */
+-	vf_cp_rings = (hw_resc->max_cp_rings - bp->cp_nr_rings) / num_vfs;
++	vf_cp_rings = (bnxt_get_max_func_cp_rings_for_en(bp) -
++		       bp->cp_nr_rings) / num_vfs;
+ 	vf_stat_ctx = (max_stat_ctxs - bp->num_stat_ctxs) / num_vfs;
+ 	if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ 		vf_rx_rings = (hw_resc->max_rx_rings - bp->rx_nr_rings * 2) /
+@@ -638,7 +639,7 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
+ 	 */
+ 	vfs_supported = *num_vfs;
+ 
+-	avail_cp = hw_resc->max_cp_rings - bp->cp_nr_rings;
++	avail_cp = bnxt_get_max_func_cp_rings_for_en(bp) - bp->cp_nr_rings;
+ 	avail_stat = hw_resc->max_stat_ctxs - bp->num_stat_ctxs;
+ 	avail_cp = min_t(int, avail_cp, avail_stat);
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 840f6e505f73..4209cfd73971 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -169,7 +169,6 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ 		edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+ 	}
+ 	bnxt_fill_msix_vecs(bp, ent);
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
+ 	edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	return avail_msix;
+ }
+@@ -178,7 +177,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ {
+ 	struct net_device *dev = edev->net;
+ 	struct bnxt *bp = netdev_priv(dev);
+-	int max_cp_rings, msix_requested;
+ 
+ 	ASSERT_RTNL();
+ 	if (ulp_id != BNXT_ROCE_ULP)
+@@ -187,9 +185,6 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+ 	if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
+ 		return 0;
+ 
+-	max_cp_rings = bnxt_get_max_func_cp_rings(bp);
+-	msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
+-	bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
+ 	edev->ulp_tbl[ulp_id].msix_requested = 0;
+ 	edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
+ 	if (netif_running(dev)) {
+@@ -220,21 +215,6 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp)
+ 	return 0;
+ }
+ 
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id)
+-{
+-	ASSERT_RTNL();
+-	if (bnxt_ulp_registered(bp->edev, ulp_id)) {
+-		struct bnxt_en_dev *edev = bp->edev;
+-		unsigned int msix_req, max;
+-
+-		msix_req = edev->ulp_tbl[ulp_id].msix_requested;
+-		max = bnxt_get_max_func_cp_rings(bp);
+-		bnxt_set_max_func_cp_rings(bp, max - msix_req);
+-		max = bnxt_get_max_func_stat_ctxs(bp);
+-		bnxt_set_max_func_stat_ctxs(bp, max - 1);
+-	}
+-}
+-
+ static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id,
+ 			 struct bnxt_fw_msg *fw_msg)
+ {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+index df48ac71729f..d9bea37cd211 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+@@ -90,7 +90,6 @@ static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+ 
+ int bnxt_get_ulp_msix_num(struct bnxt *bp);
+ int bnxt_get_ulp_msix_base(struct bnxt *bp);
+-void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id);
+ void bnxt_ulp_stop(struct bnxt *bp);
+ void bnxt_ulp_start(struct bnxt *bp);
+ void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+index b773bc07edf7..14b49612aa86 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h
+@@ -186,6 +186,9 @@ struct bcmgenet_mib_counters {
+ #define UMAC_MAC1			0x010
+ #define UMAC_MAX_FRAME_LEN		0x014
+ 
++#define UMAC_MODE			0x44
++#define  MODE_LINK_STATUS		(1 << 5)
++
+ #define UMAC_EEE_CTRL			0x064
+ #define  EN_LPI_RX_PAUSE		(1 << 0)
+ #define  EN_LPI_TX_PFC			(1 << 1)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index 5333274a283c..4241ae928d4a 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -115,8 +115,14 @@ void bcmgenet_mii_setup(struct net_device *dev)
+ static int bcmgenet_fixed_phy_link_update(struct net_device *dev,
+ 					  struct fixed_phy_status *status)
+ {
+-	if (dev && dev->phydev && status)
+-		status->link = dev->phydev->link;
++	struct bcmgenet_priv *priv;
++	u32 reg;
++
++	if (dev && dev->phydev && status) {
++		priv = netdev_priv(dev);
++		reg = bcmgenet_umac_readl(priv, UMAC_MODE);
++		status->link = !!(reg & MODE_LINK_STATUS);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index a6c911bb5ce2..515d96e32143 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -481,11 +481,6 @@ static int macb_mii_probe(struct net_device *dev)
+ 
+ 	if (np) {
+ 		if (of_phy_is_fixed_link(np)) {
+-			if (of_phy_register_fixed_link(np) < 0) {
+-				dev_err(&bp->pdev->dev,
+-					"broken fixed-link specification\n");
+-				return -ENODEV;
+-			}
+ 			bp->phy_node = of_node_get(np);
+ 		} else {
+ 			bp->phy_node = of_parse_phandle(np, "phy-handle", 0);
+@@ -568,7 +563,7 @@ static int macb_mii_init(struct macb *bp)
+ {
+ 	struct macb_platform_data *pdata;
+ 	struct device_node *np;
+-	int err;
++	int err = -ENXIO;
+ 
+ 	/* Enable management port */
+ 	macb_writel(bp, NCR, MACB_BIT(MPE));
+@@ -591,12 +586,23 @@ static int macb_mii_init(struct macb *bp)
+ 	dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
+ 
+ 	np = bp->pdev->dev.of_node;
+-	if (pdata)
+-		bp->mii_bus->phy_mask = pdata->phy_mask;
++	if (np && of_phy_is_fixed_link(np)) {
++		if (of_phy_register_fixed_link(np) < 0) {
++			dev_err(&bp->pdev->dev,
++				"broken fixed-link specification %pOF\n", np);
++			goto err_out_free_mdiobus;
++		}
++
++		err = mdiobus_register(bp->mii_bus);
++	} else {
++		if (pdata)
++			bp->mii_bus->phy_mask = pdata->phy_mask;
++
++		err = of_mdiobus_register(bp->mii_bus, np);
++	}
+ 
+-	err = of_mdiobus_register(bp->mii_bus, np);
+ 	if (err)
+-		goto err_out_free_mdiobus;
++		goto err_out_free_fixed_link;
+ 
+ 	err = macb_mii_probe(bp->dev);
+ 	if (err)
+@@ -606,6 +612,7 @@ static int macb_mii_init(struct macb *bp)
+ 
+ err_out_unregister_bus:
+ 	mdiobus_unregister(bp->mii_bus);
++err_out_free_fixed_link:
+ 	if (np && of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ err_out_free_mdiobus:
+@@ -1957,14 +1964,17 @@ static void macb_reset_hw(struct macb *bp)
+ {
+ 	struct macb_queue *queue;
+ 	unsigned int q;
++	u32 ctrl = macb_readl(bp, NCR);
+ 
+ 	/* Disable RX and TX (XXX: Should we halt the transmission
+ 	 * more gracefully?)
+ 	 */
+-	macb_writel(bp, NCR, 0);
++	ctrl &= ~(MACB_BIT(RE) | MACB_BIT(TE));
+ 
+ 	/* Clear the stats registers (XXX: Update stats first?) */
+-	macb_writel(bp, NCR, MACB_BIT(CLRSTAT));
++	ctrl |= MACB_BIT(CLRSTAT);
++
++	macb_writel(bp, NCR, ctrl);
+ 
+ 	/* Clear all status flags */
+ 	macb_writel(bp, TSR, -1);
+@@ -2152,7 +2162,7 @@ static void macb_init_hw(struct macb *bp)
+ 	}
+ 
+ 	/* Enable TX and RX */
+-	macb_writel(bp, NCR, MACB_BIT(RE) | MACB_BIT(TE) | MACB_BIT(MPE));
++	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(RE) | MACB_BIT(TE));
+ }
+ 
+ /* The hash address register is 64 bits long and takes up two
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index d318d35e598f..6fd7ea8074b0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3911,7 +3911,7 @@ static bool hclge_is_all_function_id_zero(struct hclge_desc *desc)
+ #define HCLGE_FUNC_NUMBER_PER_DESC 6
+ 	int i, j;
+ 
+-	for (i = 0; i < HCLGE_DESC_NUMBER; i++)
++	for (i = 1; i < HCLGE_DESC_NUMBER; i++)
+ 		for (j = 0; j < HCLGE_FUNC_NUMBER_PER_DESC; j++)
+ 			if (desc[i].data[j])
+ 				return false;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 9f7932e423b5..6315e8ad8467 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -208,6 +208,8 @@ int hclge_mac_start_phy(struct hclge_dev *hdev)
+ 	if (!phydev)
+ 		return 0;
+ 
++	phydev->supported &= ~SUPPORTED_FIBRE;
++
+ 	ret = phy_connect_direct(netdev, phydev,
+ 				 hclge_mac_adjust_link,
+ 				 PHY_INTERFACE_MODE_SGMII);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+index 86478a6b99c5..c8c315eb5128 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c
+@@ -139,14 +139,15 @@ int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
+ 		      struct mlx5_wq_ctrl *wq_ctrl)
+ {
+ 	u32 sq_strides_offset;
++	u32 rq_pg_remainder;
+ 	int err;
+ 
+ 	mlx5_fill_fbc(MLX5_GET(qpc, qpc, log_rq_stride) + 4,
+ 		      MLX5_GET(qpc, qpc, log_rq_size),
+ 		      &wq->rq.fbc);
+ 
+-	sq_strides_offset =
+-		((wq->rq.fbc.frag_sz_m1 + 1) % PAGE_SIZE) / MLX5_SEND_WQE_BB;
++	rq_pg_remainder   = mlx5_wq_cyc_get_byte_size(&wq->rq) % PAGE_SIZE;
++	sq_strides_offset = rq_pg_remainder / MLX5_SEND_WQE_BB;
+ 
+ 	mlx5_fill_fbc_offset(ilog2(MLX5_SEND_WQE_BB),
+ 			     MLX5_GET(qpc, qpc, log_sq_size),
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index 4a519d8edec8..3500c79e29cd 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -433,6 +433,8 @@ mlxsw_sp_netdevice_ipip_ul_event(struct mlxsw_sp *mlxsw_sp,
+ void
+ mlxsw_sp_port_vlan_router_leave(struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan);
+ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif);
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev);
+ 
+ /* spectrum_kvdl.c */
+ int mlxsw_sp_kvdl_init(struct mlxsw_sp *mlxsw_sp);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 77b2adb29341..cb43d17097fa 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -6228,6 +6228,17 @@ void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif)
+ 	mlxsw_sp_vr_put(mlxsw_sp, vr);
+ }
+ 
++void mlxsw_sp_rif_destroy_by_dev(struct mlxsw_sp *mlxsw_sp,
++				 struct net_device *dev)
++{
++	struct mlxsw_sp_rif *rif;
++
++	rif = mlxsw_sp_rif_find_by_dev(mlxsw_sp, dev);
++	if (!rif)
++		return;
++	mlxsw_sp_rif_destroy(rif);
++}
++
+ static void
+ mlxsw_sp_rif_subport_params_init(struct mlxsw_sp_rif_params *params,
+ 				 struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index eea5666a86b2..6cb43dda8232 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -160,6 +160,24 @@ bool mlxsw_sp_bridge_device_is_offloaded(const struct mlxsw_sp *mlxsw_sp,
+ 	return !!mlxsw_sp_bridge_device_find(mlxsw_sp->bridge, br_dev);
+ }
+ 
++static int mlxsw_sp_bridge_device_upper_rif_destroy(struct net_device *dev,
++						    void *data)
++{
++	struct mlxsw_sp *mlxsw_sp = data;
++
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	return 0;
++}
++
++static void mlxsw_sp_bridge_device_rifs_destroy(struct mlxsw_sp *mlxsw_sp,
++						struct net_device *dev)
++{
++	mlxsw_sp_rif_destroy_by_dev(mlxsw_sp, dev);
++	netdev_walk_all_upper_dev_rcu(dev,
++				      mlxsw_sp_bridge_device_upper_rif_destroy,
++				      mlxsw_sp);
++}
++
+ static struct mlxsw_sp_bridge_device *
+ mlxsw_sp_bridge_device_create(struct mlxsw_sp_bridge *bridge,
+ 			      struct net_device *br_dev)
+@@ -198,6 +216,8 @@ static void
+ mlxsw_sp_bridge_device_destroy(struct mlxsw_sp_bridge *bridge,
+ 			       struct mlxsw_sp_bridge_device *bridge_device)
+ {
++	mlxsw_sp_bridge_device_rifs_destroy(bridge->mlxsw_sp,
++					    bridge_device->dev);
+ 	list_del(&bridge_device->list);
+ 	if (bridge_device->vlan_enabled)
+ 		bridge->vlan_enabled_exists = false;
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+index d4c27f849f9b..c2a9e64bc57b 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+@@ -227,29 +227,16 @@ done:
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ }
+ 
+-/**
+- * nfp_net_reconfig() - Reconfigure the firmware
+- * @nn:      NFP Net device to reconfigure
+- * @update:  The value for the update field in the BAR config
+- *
+- * Write the update word to the BAR and ping the reconfig queue.  The
+- * poll until the firmware has acknowledged the update by zeroing the
+- * update word.
+- *
+- * Return: Negative errno on error, 0 on success
+- */
+-int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++static void nfp_net_reconfig_sync_enter(struct nfp_net *nn)
+ {
+ 	bool cancelled_timer = false;
+ 	u32 pre_posted_requests;
+-	int ret;
+ 
+ 	spin_lock_bh(&nn->reconfig_lock);
+ 
+ 	nn->reconfig_sync_present = true;
+ 
+ 	if (nn->reconfig_timer_active) {
+-		del_timer(&nn->reconfig_timer);
+ 		nn->reconfig_timer_active = false;
+ 		cancelled_timer = true;
+ 	}
+@@ -258,14 +245,43 @@ int nfp_net_reconfig(struct nfp_net *nn, u32 update)
+ 
+ 	spin_unlock_bh(&nn->reconfig_lock);
+ 
+-	if (cancelled_timer)
++	if (cancelled_timer) {
++		del_timer_sync(&nn->reconfig_timer);
+ 		nfp_net_reconfig_wait(nn, nn->reconfig_timer.expires);
++	}
+ 
+ 	/* Run the posted reconfigs which were issued before we started */
+ 	if (pre_posted_requests) {
+ 		nfp_net_reconfig_start(nn, pre_posted_requests);
+ 		nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+ 	}
++}
++
++static void nfp_net_reconfig_wait_posted(struct nfp_net *nn)
++{
++	nfp_net_reconfig_sync_enter(nn);
++
++	spin_lock_bh(&nn->reconfig_lock);
++	nn->reconfig_sync_present = false;
++	spin_unlock_bh(&nn->reconfig_lock);
++}
++
++/**
++ * nfp_net_reconfig() - Reconfigure the firmware
++ * @nn:      NFP Net device to reconfigure
++ * @update:  The value for the update field in the BAR config
++ *
++ * Write the update word to the BAR and ping the reconfig queue.  The
++ * poll until the firmware has acknowledged the update by zeroing the
++ * update word.
++ *
++ * Return: Negative errno on error, 0 on success
++ */
++int nfp_net_reconfig(struct nfp_net *nn, u32 update)
++{
++	int ret;
++
++	nfp_net_reconfig_sync_enter(nn);
+ 
+ 	nfp_net_reconfig_start(nn, update);
+ 	ret = nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+@@ -3609,6 +3625,7 @@ struct nfp_net *nfp_net_alloc(struct pci_dev *pdev, bool needs_netdev,
+  */
+ void nfp_net_free(struct nfp_net *nn)
+ {
++	WARN_ON(timer_pending(&nn->reconfig_timer) || nn->reconfig_posted);
+ 	if (nn->dp.netdev)
+ 		free_netdev(nn->dp.netdev);
+ 	else
+@@ -3893,4 +3910,5 @@ void nfp_net_clean(struct nfp_net *nn)
+ 		return;
+ 
+ 	unregister_netdev(nn->dp.netdev);
++	nfp_net_reconfig_wait_posted(nn);
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+index 353f1c129af1..059ba9429e51 100644
+--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
++++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+@@ -2384,26 +2384,20 @@ static int qlge_update_hw_vlan_features(struct net_device *ndev,
+ 	return status;
+ }
+ 
+-static netdev_features_t qlge_fix_features(struct net_device *ndev,
+-	netdev_features_t features)
+-{
+-	int err;
+-
+-	/* Update the behavior of vlan accel in the adapter */
+-	err = qlge_update_hw_vlan_features(ndev, features);
+-	if (err)
+-		return err;
+-
+-	return features;
+-}
+-
+ static int qlge_set_features(struct net_device *ndev,
+ 	netdev_features_t features)
+ {
+ 	netdev_features_t changed = ndev->features ^ features;
++	int err;
++
++	if (changed & NETIF_F_HW_VLAN_CTAG_RX) {
++		/* Update the behavior of vlan accel in the adapter */
++		err = qlge_update_hw_vlan_features(ndev, features);
++		if (err)
++			return err;
+ 
+-	if (changed & NETIF_F_HW_VLAN_CTAG_RX)
+ 		qlge_vlan_mode(ndev, features);
++	}
+ 
+ 	return 0;
+ }
+@@ -4719,7 +4713,6 @@ static const struct net_device_ops qlge_netdev_ops = {
+ 	.ndo_set_mac_address	= qlge_set_mac_address,
+ 	.ndo_validate_addr	= eth_validate_addr,
+ 	.ndo_tx_timeout		= qlge_tx_timeout,
+-	.ndo_fix_features	= qlge_fix_features,
+ 	.ndo_set_features	= qlge_set_features,
+ 	.ndo_vlan_rx_add_vid	= qlge_vlan_rx_add_vid,
+ 	.ndo_vlan_rx_kill_vid	= qlge_vlan_rx_kill_vid,
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 9ceb34bac3a9..e5eb361b973c 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -303,6 +303,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8161), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8167), 0, 0, RTL_CFG_0 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8168), 0, 0, RTL_CFG_1 },
++	{ PCI_DEVICE(PCI_VENDOR_ID_NCUBE,	0x8168), 0, 0, RTL_CFG_1 },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_REALTEK,	0x8169), 0, 0, RTL_CFG_0 },
+ 	{ PCI_VENDOR_ID_DLINK,			0x4300,
+ 		PCI_VENDOR_ID_DLINK, 0x4b10,		 0, 0, RTL_CFG_1 },
+@@ -5038,7 +5039,7 @@ static void rtl8169_hw_reset(struct rtl8169_private *tp)
+ 	rtl_hw_reset(tp);
+ }
+ 
+-static void rtl_set_rx_tx_config_registers(struct rtl8169_private *tp)
++static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
+ {
+ 	/* Set DMA burst size and Interframe Gap Time */
+ 	RTL_W32(tp, TxConfig, (TX_DMA_BURST << TxDMAShift) |
+@@ -5149,12 +5150,14 @@ static void rtl_hw_start(struct  rtl8169_private *tp)
+ 
+ 	rtl_set_rx_max_size(tp);
+ 	rtl_set_rx_tx_desc_registers(tp);
+-	rtl_set_rx_tx_config_registers(tp);
++	rtl_set_tx_config_registers(tp);
+ 	RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 
+ 	/* Initially a 10 us delay. Turned it into a PCI commit. - FR */
+ 	RTL_R8(tp, IntrMask);
+ 	RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb);
++	rtl_init_rxcfg(tp);
++
+ 	rtl_set_rx_mode(tp->dev);
+ 	/* no early-rx interrupts */
+ 	RTL_W16(tp, MultiIntr, RTL_R16(tp, MultiIntr) & 0xf000);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+index 76649adf8fb0..c0a855b7ab3b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -112,7 +112,6 @@ struct stmmac_priv {
+ 	u32 tx_count_frames;
+ 	u32 tx_coal_frames;
+ 	u32 tx_coal_timer;
+-	bool tx_timer_armed;
+ 
+ 	int tx_coalesce;
+ 	int hwts_tx_en;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index ef6a8d39db2f..c579d98b9666 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3126,16 +3126,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	 * element in case of no SG.
+ 	 */
+ 	priv->tx_count_frames += nfrags + 1;
+-	if (likely(priv->tx_coal_frames > priv->tx_count_frames) &&
+-	    !priv->tx_timer_armed) {
++	if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+ 		mod_timer(&priv->txtimer,
+ 			  STMMAC_COAL_TIMER(priv->tx_coal_timer));
+-		priv->tx_timer_armed = true;
+ 	} else {
+ 		priv->tx_count_frames = 0;
+ 		stmmac_set_tx_ic(priv, desc);
+ 		priv->xstats.tx_set_ic_bit++;
+-		priv->tx_timer_armed = false;
+ 	}
+ 
+ 	skb_tx_timestamp(skb);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index dd1d6e115145..6d74cde68163 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -29,6 +29,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/inetdevice.h>
+ #include <linux/etherdevice.h>
++#include <linux/pci.h>
+ #include <linux/skbuff.h>
+ #include <linux/if_vlan.h>
+ #include <linux/in.h>
+@@ -1939,12 +1940,16 @@ static int netvsc_register_vf(struct net_device *vf_netdev)
+ {
+ 	struct net_device *ndev;
+ 	struct net_device_context *net_device_ctx;
++	struct device *pdev = vf_netdev->dev.parent;
+ 	struct netvsc_device *netvsc_dev;
+ 	int ret;
+ 
+ 	if (vf_netdev->addr_len != ETH_ALEN)
+ 		return NOTIFY_DONE;
+ 
++	if (!pdev || !dev_is_pci(pdev) || dev_is_pf(pdev))
++		return NOTIFY_DONE;
++
+ 	/*
+ 	 * We will use the MAC address to locate the synthetic interface to
+ 	 * associate with the VF interface. If we don't find a matching
+@@ -2101,6 +2106,16 @@ static int netvsc_probe(struct hv_device *dev,
+ 
+ 	memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);
+ 
++	/* We must get rtnl lock before scheduling nvdev->subchan_work,
++	 * otherwise netvsc_subchan_work() can get rtnl lock first and wait
++	 * all subchannels to show up, but that may not happen because
++	 * netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer()
++	 * -> ... -> device_add() -> ... -> __device_attach() can't get
++	 * the device lock, so all the subchannels can't be processed --
++	 * finally netvsc_subchan_work() hangs for ever.
++	 */
++	rtnl_lock();
++
+ 	if (nvdev->num_chn > 1)
+ 		schedule_work(&nvdev->subchan_work);
+ 
+@@ -2119,7 +2134,6 @@ static int netvsc_probe(struct hv_device *dev,
+ 	else
+ 		net->max_mtu = ETH_DATA_LEN;
+ 
+-	rtnl_lock();
+ 	ret = register_netdevice(net);
+ 	if (ret != 0) {
+ 		pr_err("Unable to register netdev.\n");
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 2a58607a6aea..1b07bb5e110d 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -5214,8 +5214,8 @@ static int rtl8152_probe(struct usb_interface *intf,
+ 		netdev->hw_features &= ~NETIF_F_RXCSUM;
+ 	}
+ 
+-	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 &&
+-	    udev->serial && !strcmp(udev->serial, "000001000000")) {
++	if (le16_to_cpu(udev->descriptor.bcdDevice) == 0x3011 && udev->serial &&
++	    (!strcmp(udev->serial, "000001000000") || !strcmp(udev->serial, "000002000000"))) {
+ 		dev_info(&udev->dev, "Dell TB16 Dock, disable RX aggregation");
+ 		set_bit(DELL_TB_RX_AGG_BUG, &tp->flags);
+ 	}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index b6122aad639e..7569f9af8d47 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -6926,15 +6926,15 @@ struct brcmf_cfg80211_info *brcmf_cfg80211_attach(struct brcmf_pub *drvr,
+ 	cfg->d11inf.io_type = (u8)io_type;
+ 	brcmu_d11_attach(&cfg->d11inf);
+ 
+-	err = brcmf_setup_wiphy(wiphy, ifp);
+-	if (err < 0)
+-		goto priv_out;
+-
+ 	/* regulatory notifer below needs access to cfg so
+ 	 * assign it now.
+ 	 */
+ 	drvr->config = cfg;
+ 
++	err = brcmf_setup_wiphy(wiphy, ifp);
++	if (err < 0)
++		goto priv_out;
++
+ 	brcmf_dbg(INFO, "Registering custom regulatory\n");
+ 	wiphy->reg_notifier = brcmf_cfg80211_reg_notifier;
+ 	wiphy->regulatory_flags |= REGULATORY_CUSTOM_REG;
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index 23e270839e6a..f00df2384985 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1219,7 +1219,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
+ 		pcie->realio.start = PCIBIOS_MIN_IO;
+ 		pcie->realio.end = min_t(resource_size_t,
+ 					 IO_SPACE_LIMIT,
+-					 resource_size(&pcie->io));
++					 resource_size(&pcie->io) - 1);
+ 	} else
+ 		pcie->realio = pcie->io;
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index b2857865c0aa..a1a243ee36bb 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1725,7 +1725,7 @@ int pci_setup_device(struct pci_dev *dev)
+ static void pci_configure_mps(struct pci_dev *dev)
+ {
+ 	struct pci_dev *bridge = pci_upstream_bridge(dev);
+-	int mps, p_mps, rc;
++	int mps, mpss, p_mps, rc;
+ 
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+@@ -1753,6 +1753,14 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (pcie_bus_config != PCIE_BUS_DEFAULT)
+ 		return;
+ 
++	mpss = 128 << dev->pcie_mpss;
++	if (mpss < p_mps && pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT) {
++		pcie_set_mps(bridge, mpss);
++		pci_info(dev, "Upstream bridge's Max Payload Size set to %d (was %d, max %d)\n",
++			 mpss, p_mps, 128 << bridge->pcie_mpss);
++		p_mps = pcie_get_mps(bridge);
++	}
++
+ 	rc = pcie_set_mps(dev, p_mps);
+ 	if (rc) {
+ 		pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
+@@ -1761,7 +1769,7 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	}
+ 
+ 	pci_info(dev, "Max Payload Size set to %d (was %d, max %d)\n",
+-		 p_mps, mps, 128 << dev->pcie_mpss);
++		 p_mps, mps, mpss);
+ }
+ 
+ static struct hpp_type0 pci_default_type0 = {
+diff --git a/drivers/pinctrl/pinctrl-axp209.c b/drivers/pinctrl/pinctrl-axp209.c
+index a52779f33ad4..afd0b533c40a 100644
+--- a/drivers/pinctrl/pinctrl-axp209.c
++++ b/drivers/pinctrl/pinctrl-axp209.c
+@@ -316,7 +316,7 @@ static const struct pinctrl_ops axp20x_pctrl_ops = {
+ 	.get_group_pins		= axp20x_group_pins,
+ };
+ 
+-static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
++static int axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 					  unsigned int mask_len,
+ 					  struct axp20x_pinctrl_function *func,
+ 					  const struct pinctrl_pin_desc *pins)
+@@ -331,18 +331,22 @@ static void axp20x_funcs_groups_from_mask(struct device *dev, unsigned int mask,
+ 		func->groups = devm_kcalloc(dev,
+ 					    ngroups, sizeof(const char *),
+ 					    GFP_KERNEL);
++		if (!func->groups)
++			return -ENOMEM;
+ 		group = func->groups;
+ 		for_each_set_bit(bit, &mask_cpy, mask_len) {
+ 			*group = pins[bit].name;
+ 			group++;
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+-static void axp20x_build_funcs_groups(struct platform_device *pdev)
++static int axp20x_build_funcs_groups(struct platform_device *pdev)
+ {
+ 	struct axp20x_pctl *pctl = platform_get_drvdata(pdev);
+-	int i, pin, npins = pctl->desc->npins;
++	int i, ret, pin, npins = pctl->desc->npins;
+ 
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].name = "gpio_out";
+ 	pctl->funcs[AXP20X_FUNC_GPIO_OUT].muxval = AXP20X_MUX_GPIO_OUT;
+@@ -366,13 +370,19 @@ static void axp20x_build_funcs_groups(struct platform_device *pdev)
+ 			pctl->funcs[i].groups[pin] = pctl->desc->pins[pin].name;
+ 	}
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->ldo_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_LDO],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
+ 
+-	axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
++	ret = axp20x_funcs_groups_from_mask(&pdev->dev, pctl->desc->adc_mask,
+ 				      npins, &pctl->funcs[AXP20X_FUNC_ADC],
+ 				      pctl->desc->pins);
++	if (ret)
++		return ret;
++
++	return 0;
+ }
+ 
+ static const struct of_device_id axp20x_pctl_match[] = {
+@@ -424,7 +434,11 @@ static int axp20x_pctl_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, pctl);
+ 
+-	axp20x_build_funcs_groups(pdev);
++	ret = axp20x_build_funcs_groups(pdev);
++	if (ret) {
++		dev_err(&pdev->dev, "failed to build groups\n");
++		return ret;
++	}
+ 
+ 	pctrl_desc = devm_kzalloc(&pdev->dev, sizeof(*pctrl_desc), GFP_KERNEL);
+ 	if (!pctrl_desc)
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 136ff2b4cce5..db2af09067db 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -496,6 +496,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ 	{ KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ 	{ KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ 	{ KE_IGNORE, 0xC6, },  /* Ambient Light Sensor notification */
++	{ KE_KEY, 0xFA, { KEY_PROG2 } },           /* Lid flip action */
+ 	{ KE_END, 0},
+ };
+ 
+diff --git a/drivers/platform/x86/intel_punit_ipc.c b/drivers/platform/x86/intel_punit_ipc.c
+index b5b890127479..b7dfe06261f1 100644
+--- a/drivers/platform/x86/intel_punit_ipc.c
++++ b/drivers/platform/x86/intel_punit_ipc.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitops.h>
+ #include <linux/device.h>
+ #include <linux/interrupt.h>
++#include <linux/io.h>
+ #include <linux/platform_device.h>
+ #include <asm/intel_punit_ipc.h>
+ 
+diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c
+index 822860b4801a..c1ed641b3e26 100644
+--- a/drivers/pwm/pwm-meson.c
++++ b/drivers/pwm/pwm-meson.c
+@@ -458,7 +458,6 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 				   struct meson_pwm_channel *channels)
+ {
+ 	struct device *dev = meson->chip.dev;
+-	struct device_node *np = dev->of_node;
+ 	struct clk_init_data init;
+ 	unsigned int i;
+ 	char name[255];
+@@ -467,7 +466,7 @@ static int meson_pwm_init_channels(struct meson_pwm *meson,
+ 	for (i = 0; i < meson->chip.npwm; i++) {
+ 		struct meson_pwm_channel *channel = &channels[i];
+ 
+-		snprintf(name, sizeof(name), "%pOF#mux%u", np, i);
++		snprintf(name, sizeof(name), "%s#mux%u", dev_name(dev), i);
+ 
+ 		init.name = name;
+ 		init.ops = &clk_mux_ops;
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index bbf95b78ef5d..43e3398c9268 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1780,6 +1780,9 @@ static void dasd_eckd_uncheck_device(struct dasd_device *device)
+ 	struct dasd_eckd_private *private = device->private;
+ 	int i;
+ 
++	if (!private)
++		return;
++
+ 	dasd_alias_disconnect_device_from_lcu(device);
+ 	private->ned = NULL;
+ 	private->sneq = NULL;
+@@ -2035,8 +2038,11 @@ static int dasd_eckd_basic_to_ready(struct dasd_device *device)
+ 
+ static int dasd_eckd_online_to_ready(struct dasd_device *device)
+ {
+-	cancel_work_sync(&device->reload_device);
+-	cancel_work_sync(&device->kick_validate);
++	if (cancel_work_sync(&device->reload_device))
++		dasd_put_device(device);
++	if (cancel_work_sync(&device->kick_validate))
++		dasd_put_device(device);
++
+ 	return 0;
+ };
+ 
+diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c
+index 80e5b283fd81..1391e5f35918 100644
+--- a/drivers/scsi/aic94xx/aic94xx_init.c
++++ b/drivers/scsi/aic94xx/aic94xx_init.c
+@@ -1030,8 +1030,10 @@ static int __init aic94xx_init(void)
+ 
+ 	aic94xx_transport_template =
+ 		sas_domain_attach_transport(&aic94xx_transport_functions);
+-	if (!aic94xx_transport_template)
++	if (!aic94xx_transport_template) {
++		err = -ENOMEM;
+ 		goto out_destroy_caches;
++	}
+ 
+ 	err = pci_register_driver(&aic94xx_pci_driver);
+ 	if (err)
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index e40a2c0a9543..d3da39a9f567 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -5446,11 +5446,11 @@ static int ni_E_init(struct comedi_device *dev,
+ 	/* Digital I/O (PFI) subdevice */
+ 	s = &dev->subdevices[NI_PFI_DIO_SUBDEV];
+ 	s->type		= COMEDI_SUBD_DIO;
+-	s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 	s->maxdata	= 1;
+ 	if (devpriv->is_m_series) {
+ 		s->n_chan	= 16;
+ 		s->insn_bits	= ni_pfi_insn_bits;
++		s->subdev_flags	= SDF_READABLE | SDF_WRITABLE | SDF_INTERNAL;
+ 
+ 		ni_writew(dev, s->state, NI_M_PFI_DO_REG);
+ 		for (i = 0; i < NUM_PFI_OUTPUT_SELECT_REGS; ++i) {
+@@ -5459,6 +5459,7 @@ static int ni_E_init(struct comedi_device *dev,
+ 		}
+ 	} else {
+ 		s->n_chan	= 10;
++		s->subdev_flags	= SDF_INTERNAL;
+ 	}
+ 	s->insn_config	= ni_pfi_insn_config;
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index ed3114556fda..560ed8711706 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -951,7 +951,7 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d,
+ 	list_for_each_entry_safe(node, n, &d->pending_list, node) {
+ 		struct vhost_iotlb_msg *vq_msg = &node->msg.iotlb;
+ 		if (msg->iova <= vq_msg->iova &&
+-		    msg->iova + msg->size - 1 > vq_msg->iova &&
++		    msg->iova + msg->size - 1 >= vq_msg->iova &&
+ 		    vq_msg->type == VHOST_IOTLB_MISS) {
+ 			vhost_poll_queue(&node->vq->poll);
+ 			list_del(&node->node);
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index 2780886e8ba3..de062fb201bc 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -122,6 +122,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	struct virtqueue *vq;
+ 	u16 num;
+ 	int err;
++	u64 q_pfn;
+ 
+ 	/* Select the queue we're interested in */
+ 	iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
+@@ -141,9 +142,17 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 	if (!vq)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	q_pfn = virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT;
++	if (q_pfn >> 32) {
++		dev_err(&vp_dev->pci_dev->dev,
++			"platform bug: legacy virtio-mmio must not be used with RAM above 0x%llxGB\n",
++			0x1ULL << (32 + PAGE_SHIFT - 30));
++		err = -E2BIG;
++		goto out_del_vq;
++	}
++
+ 	/* activate the queue */
+-	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+-		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++	iowrite32(q_pfn, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
+ 
+ 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
+ 
+@@ -160,6 +169,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
+ 
+ out_deactivate:
+ 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
++out_del_vq:
+ 	vring_del_virtqueue(vq);
+ 	return ERR_PTR(err);
+ }
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index b437fccd4e62..294f35ce9e46 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -81,7 +81,7 @@ static void watch_target(struct xenbus_watch *watch,
+ 			static_max = new_target;
+ 		else
+ 			static_max >>= PAGE_SHIFT - 10;
+-		target_diff = xen_pv_domain() ? 0
++		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+ 
+diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
+index a3fdb4fe967d..daf45472bef9 100644
+--- a/fs/btrfs/check-integrity.c
++++ b/fs/btrfs/check-integrity.c
+@@ -1539,7 +1539,12 @@ static int btrfsic_map_block(struct btrfsic_state *state, u64 bytenr, u32 len,
+ 	}
+ 
+ 	device = multi->stripes[0].dev;
+-	block_ctx_out->dev = btrfsic_dev_state_lookup(device->bdev->bd_dev);
++	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state) ||
++	    !device->bdev || !device->name)
++		block_ctx_out->dev = NULL;
++	else
++		block_ctx_out->dev = btrfsic_dev_state_lookup(
++							device->bdev->bd_dev);
+ 	block_ctx_out->dev_bytenr = multi->stripes[0].physical;
+ 	block_ctx_out->start = bytenr;
+ 	block_ctx_out->len = len;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index e2ba0419297a..d20b244623f2 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -676,6 +676,12 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ 
+ 	btrfs_rm_dev_replace_unblocked(fs_info);
+ 
++	/*
++	 * Increment dev_stats_ccnt so that btrfs_run_dev_stats() will
++	 * update on-disk dev stats value during commit transaction
++	 */
++	atomic_inc(&tgt_device->dev_stats_ccnt);
++
+ 	/*
+ 	 * this is again a consistent state where no dev_replace procedure
+ 	 * is running, the target device is part of the filesystem, the
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 8aab7a6c1e58..53cac20650d8 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -10687,7 +10687,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 		/* Don't want to race with allocators so take the groups_sem */
+ 		down_write(&space_info->groups_sem);
+ 		spin_lock(&block_group->lock);
+-		if (block_group->reserved ||
++		if (block_group->reserved || block_group->pinned ||
+ 		    btrfs_block_group_used(&block_group->item) ||
+ 		    block_group->ro ||
+ 		    list_is_singular(&block_group->list)) {
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 879b76fa881a..be94c65bb4d2 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1321,18 +1321,19 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	spin_lock(&rc->reloc_root_tree.lock);
+-	rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+-			      root->node->start);
+-	if (rb_node) {
+-		node = rb_entry(rb_node, struct mapping_node, rb_node);
+-		rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++	if (rc) {
++		spin_lock(&rc->reloc_root_tree.lock);
++		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
++				      root->node->start);
++		if (rb_node) {
++			node = rb_entry(rb_node, struct mapping_node, rb_node);
++			rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++		}
++		spin_unlock(&rc->reloc_root_tree.lock);
++		if (!node)
++			return;
++		BUG_ON((struct btrfs_root *)node->data != root);
+ 	}
+-	spin_unlock(&rc->reloc_root_tree.lock);
+-
+-	if (!node)
+-		return;
+-	BUG_ON((struct btrfs_root *)node->data != root);
+ 
+ 	spin_lock(&fs_info->trans_lock);
+ 	list_del_init(&root->root_list);
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index bddfc28b27c0..9b25f29d0e73 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -892,6 +892,8 @@ static int btrfs_parse_early_options(const char *options, fmode_t flags,
+ 	char *device_name, *opts, *orig, *p;
+ 	int error = 0;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	if (!options)
+ 		return 0;
+ 
+@@ -1526,12 +1528,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 	if (!(flags & SB_RDONLY))
+ 		mode |= FMODE_WRITE;
+ 
+-	error = btrfs_parse_early_options(data, mode, fs_type,
+-					  &fs_devices);
+-	if (error) {
+-		return ERR_PTR(error);
+-	}
+-
+ 	security_init_mnt_opts(&new_sec_opts);
+ 	if (data) {
+ 		error = parse_security_options(data, &new_sec_opts);
+@@ -1539,10 +1535,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 			return ERR_PTR(error);
+ 	}
+ 
+-	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
+-	if (error)
+-		goto error_sec_opts;
+-
+ 	/*
+ 	 * Setup a dummy root and fs_info for test/set super.  This is because
+ 	 * we don't actually fill this stuff out until open_ctree, but we need
+@@ -1555,8 +1547,6 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_sec_opts;
+ 	}
+ 
+-	fs_info->fs_devices = fs_devices;
+-
+ 	fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
+ 	security_init_mnt_opts(&fs_info->security_opts);
+@@ -1565,7 +1555,23 @@ static struct dentry *btrfs_mount_root(struct file_system_type *fs_type,
+ 		goto error_fs_info;
+ 	}
+ 
++	mutex_lock(&uuid_mutex);
++	error = btrfs_parse_early_options(data, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	error = btrfs_scan_one_device(device_name, mode, fs_type, &fs_devices);
++	if (error) {
++		mutex_unlock(&uuid_mutex);
++		goto error_fs_info;
++	}
++
++	fs_info->fs_devices = fs_devices;
++
+ 	error = btrfs_open_devices(fs_devices, mode, fs_type);
++	mutex_unlock(&uuid_mutex);
+ 	if (error)
+ 		goto error_fs_info;
+ 
+@@ -2234,15 +2240,21 @@ static long btrfs_control_ioctl(struct file *file, unsigned int cmd,
+ 
+ 	switch (cmd) {
+ 	case BTRFS_IOC_SCAN_DEV:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_DEVICES_READY:
++		mutex_lock(&uuid_mutex);
+ 		ret = btrfs_scan_one_device(vol->name, FMODE_READ,
+ 					    &btrfs_root_fs_type, &fs_devices);
+-		if (ret)
++		if (ret) {
++			mutex_unlock(&uuid_mutex);
+ 			break;
++		}
+ 		ret = !(fs_devices->num_devices == fs_devices->total_devices);
++		mutex_unlock(&uuid_mutex);
+ 		break;
+ 	case BTRFS_IOC_GET_SUPPORTED_FEATURES:
+ 		ret = btrfs_ioctl_get_supported_features((void __user*)arg);
+@@ -2368,7 +2380,7 @@ static __cold void btrfs_interface_exit(void)
+ 
+ static void __init btrfs_print_mod_info(void)
+ {
+-	pr_info("Btrfs loaded, crc32c=%s"
++	static const char options[] = ""
+ #ifdef CONFIG_BTRFS_DEBUG
+ 			", debug=on"
+ #endif
+@@ -2381,8 +2393,8 @@ static void __init btrfs_print_mod_info(void)
+ #ifdef CONFIG_BTRFS_FS_REF_VERIFY
+ 			", ref-verify=on"
+ #endif
+-			"\n",
+-			crc32c_impl());
++			;
++	pr_info("Btrfs loaded, crc32c=%s%s\n", crc32c_impl(), options);
+ }
+ 
+ static int __init init_btrfs_fs(void)
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 8d40e7dd8c30..d014af352ce0 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -396,9 +396,22 @@ static int check_leaf(struct btrfs_fs_info *fs_info, struct extent_buffer *leaf,
+ 	 * skip this check for relocation trees.
+ 	 */
+ 	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
++		u64 owner = btrfs_header_owner(leaf);
+ 		struct btrfs_root *check_root;
+ 
+-		key.objectid = btrfs_header_owner(leaf);
++		/* These trees must never be empty */
++		if (owner == BTRFS_ROOT_TREE_OBJECTID ||
++		    owner == BTRFS_CHUNK_TREE_OBJECTID ||
++		    owner == BTRFS_EXTENT_TREE_OBJECTID ||
++		    owner == BTRFS_DEV_TREE_OBJECTID ||
++		    owner == BTRFS_FS_TREE_OBJECTID ||
++		    owner == BTRFS_DATA_RELOC_TREE_OBJECTID) {
++			generic_err(fs_info, leaf, 0,
++			"invalid root, root %llu must never be empty",
++				    owner);
++			return -EUCLEAN;
++		}
++		key.objectid = owner;
+ 		key.type = BTRFS_ROOT_ITEM_KEY;
+ 		key.offset = (u64)-1;
+ 
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 1da162928d1a..5304b8d6ceb8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -634,44 +634,48 @@ static void pending_bios_fn(struct btrfs_work *work)
+  *		devices.
+  */
+ static void btrfs_free_stale_devices(const char *path,
+-				     struct btrfs_device *skip_dev)
++				     struct btrfs_device *skip_device)
+ {
+-	struct btrfs_fs_devices *fs_devs, *tmp_fs_devs;
+-	struct btrfs_device *dev, *tmp_dev;
++	struct btrfs_fs_devices *fs_devices, *tmp_fs_devices;
++	struct btrfs_device *device, *tmp_device;
+ 
+-	list_for_each_entry_safe(fs_devs, tmp_fs_devs, &fs_uuids, fs_list) {
+-
+-		if (fs_devs->opened)
++	list_for_each_entry_safe(fs_devices, tmp_fs_devices, &fs_uuids, fs_list) {
++		mutex_lock(&fs_devices->device_list_mutex);
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			continue;
++		}
+ 
+-		list_for_each_entry_safe(dev, tmp_dev,
+-					 &fs_devs->devices, dev_list) {
++		list_for_each_entry_safe(device, tmp_device,
++					 &fs_devices->devices, dev_list) {
+ 			int not_found = 0;
+ 
+-			if (skip_dev && skip_dev == dev)
++			if (skip_device && skip_device == device)
+ 				continue;
+-			if (path && !dev->name)
++			if (path && !device->name)
+ 				continue;
+ 
+ 			rcu_read_lock();
+ 			if (path)
+-				not_found = strcmp(rcu_str_deref(dev->name),
++				not_found = strcmp(rcu_str_deref(device->name),
+ 						   path);
+ 			rcu_read_unlock();
+ 			if (not_found)
+ 				continue;
+ 
+ 			/* delete the stale device */
+-			if (fs_devs->num_devices == 1) {
+-				btrfs_sysfs_remove_fsid(fs_devs);
+-				list_del(&fs_devs->fs_list);
+-				free_fs_devices(fs_devs);
++			fs_devices->num_devices--;
++			list_del(&device->dev_list);
++			btrfs_free_device(device);
++
++			if (fs_devices->num_devices == 0)
+ 				break;
+-			} else {
+-				fs_devs->num_devices--;
+-				list_del(&dev->dev_list);
+-				btrfs_free_device(dev);
+-			}
++		}
++		mutex_unlock(&fs_devices->device_list_mutex);
++		if (fs_devices->num_devices == 0) {
++			btrfs_sysfs_remove_fsid(fs_devices);
++			list_del(&fs_devices->fs_list);
++			free_fs_devices(fs_devices);
+ 		}
+ 	}
+ }
+@@ -750,7 +754,8 @@ error_brelse:
+  * error pointer when failed
+  */
+ static noinline struct btrfs_device *device_list_add(const char *path,
+-			   struct btrfs_super_block *disk_super)
++			   struct btrfs_super_block *disk_super,
++			   bool *new_device_added)
+ {
+ 	struct btrfs_device *device;
+ 	struct btrfs_fs_devices *fs_devices;
+@@ -764,21 +769,26 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		if (IS_ERR(fs_devices))
+ 			return ERR_CAST(fs_devices);
+ 
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add(&fs_devices->fs_list, &fs_uuids);
+ 
+ 		device = NULL;
+ 	} else {
++		mutex_lock(&fs_devices->device_list_mutex);
+ 		device = find_device(fs_devices, devid,
+ 				disk_super->dev_item.uuid);
+ 	}
+ 
+ 	if (!device) {
+-		if (fs_devices->opened)
++		if (fs_devices->opened) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EBUSY);
++		}
+ 
+ 		device = btrfs_alloc_device(NULL, &devid,
+ 					    disk_super->dev_item.uuid);
+ 		if (IS_ERR(device)) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			/* we can safely leave the fs_devices entry around */
+ 			return device;
+ 		}
+@@ -786,17 +796,16 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+ 		if (!name) {
+ 			btrfs_free_device(device);
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
+ 		}
+ 		rcu_assign_pointer(device->name, name);
+ 
+-		mutex_lock(&fs_devices->device_list_mutex);
+ 		list_add_rcu(&device->dev_list, &fs_devices->devices);
+ 		fs_devices->num_devices++;
+-		mutex_unlock(&fs_devices->device_list_mutex);
+ 
+ 		device->fs_devices = fs_devices;
+-		btrfs_free_stale_devices(path, device);
++		*new_device_added = true;
+ 
+ 		if (disk_super->label[0])
+ 			pr_info("BTRFS: device label %s devid %llu transid %llu %s\n",
+@@ -840,12 +849,15 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 			 * with larger generation number or the last-in if
+ 			 * generation are equal.
+ 			 */
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-EEXIST);
+ 		}
+ 
+ 		name = rcu_string_strdup(path, GFP_NOFS);
+-		if (!name)
++		if (!name) {
++			mutex_unlock(&fs_devices->device_list_mutex);
+ 			return ERR_PTR(-ENOMEM);
++		}
+ 		rcu_string_free(device->name);
+ 		rcu_assign_pointer(device->name, name);
+ 		if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) {
+@@ -865,6 +877,7 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ 
+ 	fs_devices->total_devices = btrfs_super_num_devices(disk_super);
+ 
++	mutex_unlock(&fs_devices->device_list_mutex);
+ 	return device;
+ }
+ 
+@@ -1146,7 +1159,8 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ {
+ 	int ret;
+ 
+-	mutex_lock(&uuid_mutex);
++	lockdep_assert_held(&uuid_mutex);
++
+ 	mutex_lock(&fs_devices->device_list_mutex);
+ 	if (fs_devices->opened) {
+ 		fs_devices->opened++;
+@@ -1156,7 +1170,6 @@ int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
+ 		ret = open_fs_devices(fs_devices, flags, holder);
+ 	}
+ 	mutex_unlock(&fs_devices->device_list_mutex);
+-	mutex_unlock(&uuid_mutex);
+ 
+ 	return ret;
+ }
+@@ -1221,12 +1234,15 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 			  struct btrfs_fs_devices **fs_devices_ret)
+ {
+ 	struct btrfs_super_block *disk_super;
++	bool new_device_added = false;
+ 	struct btrfs_device *device;
+ 	struct block_device *bdev;
+ 	struct page *page;
+ 	int ret = 0;
+ 	u64 bytenr;
+ 
++	lockdep_assert_held(&uuid_mutex);
++
+ 	/*
+ 	 * we would like to check all the supers, but that would make
+ 	 * a btrfs mount succeed after a mkfs from a different FS.
+@@ -1245,13 +1261,14 @@ int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+ 		goto error_bdev_put;
+ 	}
+ 
+-	mutex_lock(&uuid_mutex);
+-	device = device_list_add(path, disk_super);
+-	if (IS_ERR(device))
++	device = device_list_add(path, disk_super, &new_device_added);
++	if (IS_ERR(device)) {
+ 		ret = PTR_ERR(device);
+-	else
++	} else {
+ 		*fs_devices_ret = device->fs_devices;
+-	mutex_unlock(&uuid_mutex);
++		if (new_device_added)
++			btrfs_free_stale_devices(path, device);
++	}
+ 
+ 	btrfs_release_disk_super(page);
+ 
+@@ -2029,6 +2046,9 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+ 
+ 	cur_devices->num_devices--;
+ 	cur_devices->total_devices--;
++	/* Update total_devices of the parent fs_devices if it's seed */
++	if (cur_devices != fs_devices)
++		fs_devices->total_devices--;
+ 
+ 	if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
+ 		cur_devices->missing_devices--;
+@@ -6563,10 +6583,14 @@ static int read_one_chunk(struct btrfs_fs_info *fs_info, struct btrfs_key *key,
+ 	write_lock(&map_tree->map_tree.lock);
+ 	ret = add_extent_mapping(&map_tree->map_tree, em, 0);
+ 	write_unlock(&map_tree->map_tree.lock);
+-	BUG_ON(ret); /* Tree corruption */
++	if (ret < 0) {
++		btrfs_err(fs_info,
++			  "failed to add chunk map, start=%llu len=%llu: %d",
++			  em->start, em->len, ret);
++	}
+ 	free_extent_map(em);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void fill_device_from_item(struct extent_buffer *leaf,
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 991bfb271908..b20297988fe0 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -383,6 +383,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		spin_lock(&GlobalMid_Lock);
++		GlobalMaxActiveXid = 0;
++		GlobalCurrentXid = 0;
++		spin_unlock(&GlobalMid_Lock);
+ 		spin_lock(&cifs_tcp_ses_lock);
+ 		list_for_each(tmp1, &cifs_tcp_ses_list) {
+ 			server = list_entry(tmp1, struct TCP_Server_Info,
+@@ -395,6 +399,10 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 							  struct cifs_tcon,
+ 							  tcon_list);
+ 					atomic_set(&tcon->num_smbs_sent, 0);
++					spin_lock(&tcon->stat_lock);
++					tcon->bytes_read = 0;
++					tcon->bytes_written = 0;
++					spin_unlock(&tcon->stat_lock);
+ 					if (server->ops->clear_stats)
+ 						server->ops->clear_stats(tcon);
+ 				}
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 5df2c0698cda..9d02563b2147 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3031,11 +3031,15 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	}
+ 
+ #ifdef CONFIG_CIFS_SMB311
+-	if ((volume_info->linux_ext) && (ses->server->posix_ext_supported)) {
+-		if (ses->server->vals->protocol_id == SMB311_PROT_ID) {
++	if (volume_info->linux_ext) {
++		if (ses->server->posix_ext_supported) {
+ 			tcon->posix_extensions = true;
+ 			printk_once(KERN_WARNING
+ 				"SMB3.11 POSIX Extensions are experimental\n");
++		} else {
++			cifs_dbg(VFS, "Server does not support mounting with posix SMB3.11 extensions.\n");
++			rc = -EOPNOTSUPP;
++			goto out_fail;
+ 		}
+ 	}
+ #endif /* 311 */
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 3ff7cec2da81..239215dcc00b 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -240,6 +240,13 @@ smb2_check_message(char *buf, unsigned int len, struct TCP_Server_Info *srvr)
+ 		if (clc_len == len + 1)
+ 			return 0;
+ 
++		/*
++		 * Some windows servers (win2016) will pad also the final
++		 * PDU in a compound to 8 bytes.
++		 */
++		if (((clc_len + 7) & ~7) == len)
++			return 0;
++
+ 		/*
+ 		 * MacOS server pads after SMB2.1 write response with 3 bytes
+ 		 * of junk. Other servers match RFC1001 len to actual
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index ffce77e00a58..44e511a35559 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -360,7 +360,7 @@ smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+ 		       total_len);
+ 
+ 	if (tcon != NULL) {
+-#ifdef CONFIG_CIFS_STATS2
++#ifdef CONFIG_CIFS_STATS
+ 		uint16_t com_code = le16_to_cpu(smb2_command);
+ 		cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_sent[com_code]);
+ #endif
+@@ -1928,7 +1928,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ {
+ 	struct smb_rqst rqst;
+ 	struct smb2_create_req *req;
+-	struct smb2_create_rsp *rsp;
++	struct smb2_create_rsp *rsp = NULL;
+ 	struct TCP_Server_Info *server;
+ 	struct cifs_ses *ses = tcon->ses;
+ 	struct kvec iov[3]; /* make sure at least one for each open context */
+@@ -1943,27 +1943,31 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	char *pc_buf = NULL;
+ 	int flags = 0;
+ 	unsigned int total_len;
+-	__le16 *path = cifs_convert_path_to_utf16(full_path, cifs_sb);
+-
+-	if (!path)
+-		return -ENOMEM;
++	__le16 *utf16_path = NULL;
+ 
+ 	cifs_dbg(FYI, "mkdir\n");
+ 
++	/* resource #1: path allocation */
++	utf16_path = cifs_convert_path_to_utf16(full_path, cifs_sb);
++	if (!utf16_path)
++		return -ENOMEM;
++
+ 	if (ses && (ses->server))
+ 		server = ses->server;
+-	else
+-		return -EIO;
++	else {
++		rc = -EIO;
++		goto err_free_path;
++	}
+ 
++	/* resource #2: request */
+ 	rc = smb2_plain_req_init(SMB2_CREATE, tcon, (void **) &req, &total_len);
+-
+ 	if (rc)
+-		return rc;
++		goto err_free_path;
++
+ 
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
+-
+ 	req->ImpersonationLevel = IL_IMPERSONATION;
+ 	req->DesiredAccess = cpu_to_le32(FILE_WRITE_ATTRIBUTES);
+ 	/* File attributes ignored on open (used in create though) */
+@@ -1992,50 +1996,44 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 		req->sync_hdr.Flags |= SMB2_FLAGS_DFS_OPERATIONS;
+ 		rc = alloc_path_with_tree_prefix(&copy_path, &copy_size,
+ 						 &name_len,
+-						 tcon->treeName, path);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			return rc;
+-		}
++						 tcon->treeName, utf16_path);
++		if (rc)
++			goto err_free_req;
++
+ 		req->NameLength = cpu_to_le16(name_len * 2);
+ 		uni_path_len = copy_size;
+-		path = copy_path;
++		/* free before overwriting resource */
++		kfree(utf16_path);
++		utf16_path = copy_path;
+ 	} else {
+-		uni_path_len = (2 * UniStrnlen((wchar_t *)path, PATH_MAX)) + 2;
++		uni_path_len = (2 * UniStrnlen((wchar_t *)utf16_path, PATH_MAX)) + 2;
+ 		/* MUST set path len (NameLength) to 0 opening root of share */
+ 		req->NameLength = cpu_to_le16(uni_path_len - 2);
+ 		if (uni_path_len % 8 != 0) {
+ 			copy_size = roundup(uni_path_len, 8);
+ 			copy_path = kzalloc(copy_size, GFP_KERNEL);
+ 			if (!copy_path) {
+-				cifs_small_buf_release(req);
+-				return -ENOMEM;
++				rc = -ENOMEM;
++				goto err_free_req;
+ 			}
+-			memcpy((char *)copy_path, (const char *)path,
++			memcpy((char *)copy_path, (const char *)utf16_path,
+ 			       uni_path_len);
+ 			uni_path_len = copy_size;
+-			path = copy_path;
++			/* free before overwriting resource */
++			kfree(utf16_path);
++			utf16_path = copy_path;
+ 		}
+ 	}
+ 
+ 	iov[1].iov_len = uni_path_len;
+-	iov[1].iov_base = path;
++	iov[1].iov_base = utf16_path;
+ 	req->RequestedOplockLevel = SMB2_OPLOCK_LEVEL_NONE;
+ 
+ 	if (tcon->posix_extensions) {
+-		if (n_iov > 2) {
+-			struct create_context *ccontext =
+-			    (struct create_context *)iov[n_iov-1].iov_base;
+-			ccontext->Next =
+-				cpu_to_le32(iov[n_iov-1].iov_len);
+-		}
+-
++		/* resource #3: posix buf */
+ 		rc = add_posix_context(iov, &n_iov, mode);
+-		if (rc) {
+-			cifs_small_buf_release(req);
+-			kfree(copy_path);
+-			return rc;
+-		}
++		if (rc)
++			goto err_free_req;
+ 		pc_buf = iov[n_iov-1].iov_base;
+ 	}
+ 
+@@ -2044,32 +2042,33 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,
+ 	rqst.rq_iov = iov;
+ 	rqst.rq_nvec = n_iov;
+ 
+-	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags,
+-			    &rsp_iov);
+-
+-	cifs_small_buf_release(req);
+-	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
+-
+-	if (rc != 0) {
++	/* resource #4: response buffer */
++	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags, &rsp_iov);
++	if (rc) {
+ 		cifs_stats_fail_inc(tcon, SMB2_CREATE_HE);
+ 		trace_smb3_posix_mkdir_err(xid, tcon->tid, ses->Suid,
+-				    CREATE_NOT_FILE, FILE_WRITE_ATTRIBUTES, rc);
+-		goto smb311_mkdir_exit;
+-	} else
+-		trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
+-				     ses->Suid, CREATE_NOT_FILE,
+-				     FILE_WRITE_ATTRIBUTES);
++					   CREATE_NOT_FILE,
++					   FILE_WRITE_ATTRIBUTES, rc);
++		goto err_free_rsp_buf;
++	}
++
++	rsp = (struct smb2_create_rsp *)rsp_iov.iov_base;
++	trace_smb3_posix_mkdir_done(xid, rsp->PersistentFileId, tcon->tid,
++				    ses->Suid, CREATE_NOT_FILE,
++				    FILE_WRITE_ATTRIBUTES);
+ 
+ 	SMB2_close(xid, tcon, rsp->PersistentFileId, rsp->VolatileFileId);
+ 
+ 	/* Eventually save off posix specific response info and timestaps */
+ 
+-smb311_mkdir_exit:
+-	kfree(copy_path);
+-	kfree(pc_buf);
++err_free_rsp_buf:
+ 	free_rsp_buf(resp_buftype, rsp);
++	kfree(pc_buf);
++err_free_req:
++	cifs_small_buf_release(req);
++err_free_path:
++	kfree(utf16_path);
+ 	return rc;
+-
+ }
+ #endif /* SMB311 */
+ 
+diff --git a/fs/dcache.c b/fs/dcache.c
+index ceb7b491d1b9..d19a0dc46c04 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -292,7 +292,8 @@ void take_dentry_name_snapshot(struct name_snapshot *name, struct dentry *dentry
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = p->name;
+ 	} else {
+-		memcpy(name->inline_name, dentry->d_iname, DNAME_INLINE_LEN);
++		memcpy(name->inline_name, dentry->d_iname,
++		       dentry->d_name.len + 1);
+ 		spin_unlock(&dentry->d_lock);
+ 		name->name = name->inline_name;
+ 	}
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 8f931d699287..b61954d40c25 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2149,8 +2149,12 @@ static void f2fs_write_failed(struct address_space *mapping, loff_t to)
+ 
+ 	if (to > i_size) {
+ 		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 		truncate_pagecache(inode, i_size);
+ 		f2fs_truncate_blocks(inode, i_size, true);
++
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 	}
+ }
+@@ -2490,6 +2494,10 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
++	/* don't remain PG_checked flag which was set during GC */
++	if (is_cold_data(page))
++		clear_cold_data(page);
++
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			f2fs_register_inmem_page(inode, page);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 6880c6f78d58..3ffa341cf586 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -782,22 +782,26 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	}
+ 
+ 	if (attr->ia_valid & ATTR_SIZE) {
+-		if (attr->ia_size <= i_size_read(inode)) {
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
++		bool to_smaller = (attr->ia_size <= i_size_read(inode));
++
++		down_write(&F2FS_I(inode)->i_mmap_sem);
++		down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++		truncate_setsize(inode, attr->ia_size);
++
++		if (to_smaller)
+ 			err = f2fs_truncate(inode);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
+-			if (err)
+-				return err;
+-		} else {
+-			/*
+-			 * do not trim all blocks after i_size if target size is
+-			 * larger than i_size.
+-			 */
+-			down_write(&F2FS_I(inode)->i_mmap_sem);
+-			truncate_setsize(inode, attr->ia_size);
+-			up_write(&F2FS_I(inode)->i_mmap_sem);
++		/*
++		 * do not trim all blocks after i_size if target size is
++		 * larger than i_size.
++		 */
++		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++		up_write(&F2FS_I(inode)->i_mmap_sem);
+ 
++		if (err)
++			return err;
++
++		if (!to_smaller) {
+ 			/* should convert inline inode here */
+ 			if (!f2fs_may_inline_data(inode)) {
+ 				err = f2fs_convert_inline_inode(inode);
+@@ -944,13 +948,18 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 
+ 			blk_start = (loff_t)pg_start << PAGE_SHIFT;
+ 			blk_end = (loff_t)pg_end << PAGE_SHIFT;
++
+ 			down_write(&F2FS_I(inode)->i_mmap_sem);
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
+ 			truncate_inode_pages_range(mapping, blk_start,
+ 					blk_end - 1);
+ 
+ 			f2fs_lock_op(sbi);
+ 			ret = f2fs_truncate_hole(inode, pg_start, pg_end);
+ 			f2fs_unlock_op(sbi);
++
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 			up_write(&F2FS_I(inode)->i_mmap_sem);
+ 		}
+ 	}
+@@ -1295,8 +1304,6 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 	if (ret)
+ 		goto out_sem;
+ 
+-	truncate_pagecache_range(inode, offset, offset + len - 1);
+-
+ 	pg_start = ((unsigned long long) offset) >> PAGE_SHIFT;
+ 	pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT;
+ 
+@@ -1326,12 +1333,19 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 			unsigned int end_offset;
+ 			pgoff_t end;
+ 
++			down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
++
++			truncate_pagecache_range(inode,
++				(loff_t)index << PAGE_SHIFT,
++				((loff_t)pg_end << PAGE_SHIFT) - 1);
++
+ 			f2fs_lock_op(sbi);
+ 
+ 			set_new_dnode(&dn, inode, NULL, NULL, 0);
+ 			ret = f2fs_get_dnode_of_data(&dn, index, ALLOC_NODE);
+ 			if (ret) {
+ 				f2fs_unlock_op(sbi);
++				up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 				goto out;
+ 			}
+ 
+@@ -1340,7 +1354,9 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
+ 
+ 			ret = f2fs_do_zero_range(&dn, index, end);
+ 			f2fs_put_dnode(&dn);
++
+ 			f2fs_unlock_op(sbi);
++			up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+ 			f2fs_balance_fs(sbi, dn.node_changed);
+ 
+diff --git a/fs/fat/cache.c b/fs/fat/cache.c
+index e9bed49df6b7..78d501c1fb65 100644
+--- a/fs/fat/cache.c
++++ b/fs/fat/cache.c
+@@ -225,7 +225,8 @@ static inline void cache_init(struct fat_cache_id *cid, int fclus, int dclus)
+ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ {
+ 	struct super_block *sb = inode->i_sb;
+-	const int limit = sb->s_maxbytes >> MSDOS_SB(sb)->cluster_bits;
++	struct msdos_sb_info *sbi = MSDOS_SB(sb);
++	const int limit = sb->s_maxbytes >> sbi->cluster_bits;
+ 	struct fat_entry fatent;
+ 	struct fat_cache_id cid;
+ 	int nr;
+@@ -234,6 +235,12 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 
+ 	*fclus = 0;
+ 	*dclus = MSDOS_I(inode)->i_start;
++	if (!fat_valid_entry(sbi, *dclus)) {
++		fat_fs_error_ratelimit(sb,
++			"%s: invalid start cluster (i_pos %lld, start %08x)",
++			__func__, MSDOS_I(inode)->i_pos, *dclus);
++		return -EIO;
++	}
+ 	if (cluster == 0)
+ 		return 0;
+ 
+@@ -250,9 +257,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 		/* prevent the infinite loop of cluster chain */
+ 		if (*fclus > limit) {
+ 			fat_fs_error_ratelimit(sb,
+-					"%s: detected the cluster chain loop"
+-					" (i_pos %lld)", __func__,
+-					MSDOS_I(inode)->i_pos);
++				"%s: detected the cluster chain loop (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		}
+@@ -262,9 +268,8 @@ int fat_get_cluster(struct inode *inode, int cluster, int *fclus, int *dclus)
+ 			goto out;
+ 		else if (nr == FAT_ENT_FREE) {
+ 			fat_fs_error_ratelimit(sb,
+-				       "%s: invalid cluster chain (i_pos %lld)",
+-				       __func__,
+-				       MSDOS_I(inode)->i_pos);
++				"%s: invalid cluster chain (i_pos %lld)",
++				__func__, MSDOS_I(inode)->i_pos);
+ 			nr = -EIO;
+ 			goto out;
+ 		} else if (nr == FAT_ENT_EOF) {
+diff --git a/fs/fat/fat.h b/fs/fat/fat.h
+index 8fc1093da47d..a0a00f3734bc 100644
+--- a/fs/fat/fat.h
++++ b/fs/fat/fat.h
+@@ -348,6 +348,11 @@ static inline void fatent_brelse(struct fat_entry *fatent)
+ 	fatent->fat_inode = NULL;
+ }
+ 
++static inline bool fat_valid_entry(struct msdos_sb_info *sbi, int entry)
++{
++	return FAT_START_ENT <= entry && entry < sbi->max_cluster;
++}
++
+ extern void fat_ent_access_init(struct super_block *sb);
+ extern int fat_ent_read(struct inode *inode, struct fat_entry *fatent,
+ 			int entry);
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index bac10de678cc..3aef8630a4b9 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -23,7 +23,7 @@ static void fat12_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = entry + (entry >> 1);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -33,7 +33,7 @@ static void fat_ent_blocknr(struct super_block *sb, int entry,
+ {
+ 	struct msdos_sb_info *sbi = MSDOS_SB(sb);
+ 	int bytes = (entry << sbi->fatent_shift);
+-	WARN_ON(entry < FAT_START_ENT || sbi->max_cluster <= entry);
++	WARN_ON(!fat_valid_entry(sbi, entry));
+ 	*offset = bytes & (sb->s_blocksize - 1);
+ 	*blocknr = sbi->fat_start + (bytes >> sb->s_blocksize_bits);
+ }
+@@ -353,7 +353,7 @@ int fat_ent_read(struct inode *inode, struct fat_entry *fatent, int entry)
+ 	int err, offset;
+ 	sector_t blocknr;
+ 
+-	if (entry < FAT_START_ENT || sbi->max_cluster <= entry) {
++	if (!fat_valid_entry(sbi, entry)) {
+ 		fatent_brelse(fatent);
+ 		fat_fs_error(sb, "invalid access to FAT (entry 0x%08x)", entry);
+ 		return -EIO;
+diff --git a/fs/hfs/brec.c b/fs/hfs/brec.c
+index ad04a5741016..9a8772465a90 100644
+--- a/fs/hfs/brec.c
++++ b/fs/hfs/brec.c
+@@ -75,9 +75,10 @@ int hfs_brec_insert(struct hfs_find_data *fd, void *entry, int entry_len)
+ 	if (!fd->bnode) {
+ 		if (!tree->root)
+ 			hfs_btree_inc_height(tree);
+-		fd->bnode = hfs_bnode_find(tree, tree->leaf_head);
+-		if (IS_ERR(fd->bnode))
+-			return PTR_ERR(fd->bnode);
++		node = hfs_bnode_find(tree, tree->leaf_head);
++		if (IS_ERR(node))
++			return PTR_ERR(node);
++		fd->bnode = node;
+ 		fd->record = -1;
+ 	}
+ 	new_node = NULL;
+diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c
+index b5254378f011..cd017d7dbdfa 100644
+--- a/fs/hfsplus/dir.c
++++ b/fs/hfsplus/dir.c
+@@ -78,13 +78,13 @@ again:
+ 				cpu_to_be32(HFSP_HARDLINK_TYPE) &&
+ 				entry.file.user_info.fdCreator ==
+ 				cpu_to_be32(HFSP_HFSPLUS_CREATOR) &&
++				HFSPLUS_SB(sb)->hidden_dir &&
+ 				(entry.file.create_date ==
+ 					HFSPLUS_I(HFSPLUS_SB(sb)->hidden_dir)->
+ 						create_date ||
+ 				entry.file.create_date ==
+ 					HFSPLUS_I(d_inode(sb->s_root))->
+-						create_date) &&
+-				HFSPLUS_SB(sb)->hidden_dir) {
++						create_date)) {
+ 			struct qstr str;
+ 			char name[32];
+ 
+diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
+index a6c0f54c48c3..80abba550bfa 100644
+--- a/fs/hfsplus/super.c
++++ b/fs/hfsplus/super.c
+@@ -524,8 +524,10 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto out_put_root;
+ 	if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
+ 		hfs_find_exit(&fd);
+-		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER))
++		if (entry.type != cpu_to_be16(HFSPLUS_FOLDER)) {
++			err = -EINVAL;
+ 			goto out_put_root;
++		}
+ 		inode = hfsplus_iget(sb, be32_to_cpu(entry.folder.id));
+ 		if (IS_ERR(inode)) {
+ 			err = PTR_ERR(inode);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 464db0c0f5c8..ff98e2a3f3cc 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7734,7 +7734,7 @@ static int nfs4_sp4_select_mode(struct nfs_client *clp,
+ 	}
+ out:
+ 	clp->cl_sp4_flags = flags;
+-	return 0;
++	return ret;
+ }
+ 
+ struct nfs41_exchange_id_data {
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index e64ecb9f2720..66c373230e60 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -384,8 +384,10 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
+ 		phdr->p_flags	= PF_R|PF_W|PF_X;
+ 		phdr->p_offset	= kc_vaddr_to_offset(m->addr) + dataoff;
+ 		phdr->p_vaddr	= (size_t)m->addr;
+-		if (m->type == KCORE_RAM || m->type == KCORE_TEXT)
++		if (m->type == KCORE_RAM)
+ 			phdr->p_paddr	= __pa(m->addr);
++		else if (m->type == KCORE_TEXT)
++			phdr->p_paddr	= __pa_symbol(m->addr);
+ 		else
+ 			phdr->p_paddr	= (elf_addr_t)-1;
+ 		phdr->p_filesz	= phdr->p_memsz	= m->size;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index cfb6674331fd..0651646dd04d 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -225,6 +225,7 @@ out_unlock:
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_MMU
+ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+ 			       u64 start, size_t size)
+ {
+@@ -259,6 +260,7 @@ out_unlock:
+ 	mutex_unlock(&vmcoredd_mutex);
+ 	return ret;
+ }
++#endif /* CONFIG_MMU */
+ #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
+ 
+ /* Read from the ELF header and then the crash dump. On error, negative value is
+diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h
+index ae4811fecc1f..6d670bd9ab6b 100644
+--- a/fs/reiserfs/reiserfs.h
++++ b/fs/reiserfs/reiserfs.h
+@@ -271,7 +271,7 @@ struct reiserfs_journal_list {
+ 
+ 	struct mutex j_commit_mutex;
+ 	unsigned int j_trans_id;
+-	time_t j_timestamp;
++	time64_t j_timestamp; /* write-only but useful for crash dump analysis */
+ 	struct reiserfs_list_bitmap *j_list_bitmap;
+ 	struct buffer_head *j_commit_bh;	/* commit buffer head */
+ 	struct reiserfs_journal_cnode *j_realblock;
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 29502238e510..bf85e152af05 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3082,4 +3082,6 @@
+ 
+ #define PCI_VENDOR_ID_OCZ		0x1b85
+ 
++#define PCI_VENDOR_ID_NCUBE		0x10ff
++
+ #endif /* _LINUX_PCI_IDS_H */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index cd3ecda9386a..106e01c721e6 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -2023,6 +2023,10 @@ int tcp_set_ulp_id(struct sock *sk, const int ulp);
+ void tcp_get_available_ulp(char *buf, size_t len);
+ void tcp_cleanup_ulp(struct sock *sk);
+ 
++#define MODULE_ALIAS_TCP_ULP(name)				\
++	__MODULE_INFO(alias, alias_userspace, name);		\
++	__MODULE_INFO(alias, alias_tcp_ulp, "tcp-ulp-" name)
++
+ /* Call BPF_SOCK_OPS program that returns an int. If the return value
+  * is < 0, then the BPF op failed (for example if the loaded BPF
+  * program does not support the chosen operation or there is no BPF
+diff --git a/include/uapi/linux/keyctl.h b/include/uapi/linux/keyctl.h
+index 7b8c9e19bad1..910cc4334b21 100644
+--- a/include/uapi/linux/keyctl.h
++++ b/include/uapi/linux/keyctl.h
+@@ -65,7 +65,7 @@
+ 
+ /* keyctl structures */
+ struct keyctl_dh_params {
+-	__s32 private;
++	__s32 dh_private;
+ 	__s32 prime;
+ 	__s32 base;
+ };
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 76efe9a183f5..fc5b103512e7 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -196,19 +196,21 @@ static void *map_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+ 	struct bpf_map *map = seq_file_to_map(m);
+ 	void *key = map_iter(m)->key;
++	void *prev_key;
+ 
+ 	if (map_iter(m)->done)
+ 		return NULL;
+ 
+ 	if (unlikely(v == SEQ_START_TOKEN))
+-		goto done;
++		prev_key = NULL;
++	else
++		prev_key = key;
+ 
+-	if (map->ops->map_get_next_key(map, key, key)) {
++	if (map->ops->map_get_next_key(map, prev_key, key)) {
+ 		map_iter(m)->done = true;
+ 		return NULL;
+ 	}
+ 
+-done:
+ 	++(*pos);
+ 	return key;
+ }
+diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
+index c4d75c52b4fc..58899601fccf 100644
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -58,6 +58,7 @@ struct bpf_stab {
+ 	struct bpf_map map;
+ 	struct sock **sock_map;
+ 	struct bpf_sock_progs progs;
++	raw_spinlock_t lock;
+ };
+ 
+ struct bucket {
+@@ -89,9 +90,9 @@ enum smap_psock_state {
+ 
+ struct smap_psock_map_entry {
+ 	struct list_head list;
++	struct bpf_map *map;
+ 	struct sock **entry;
+ 	struct htab_elem __rcu *hash_link;
+-	struct bpf_htab __rcu *htab;
+ };
+ 
+ struct smap_psock {
+@@ -343,13 +344,18 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 	e = psock_map_pop(sk, psock);
+ 	while (e) {
+ 		if (e->entry) {
+-			osk = cmpxchg(e->entry, sk, NULL);
++			struct bpf_stab *stab = container_of(e->map, struct bpf_stab, map);
++
++			raw_spin_lock_bh(&stab->lock);
++			osk = *e->entry;
+ 			if (osk == sk) {
++				*e->entry = NULL;
+ 				smap_release_sock(psock, sk);
+ 			}
++			raw_spin_unlock_bh(&stab->lock);
+ 		} else {
+ 			struct htab_elem *link = rcu_dereference(e->hash_link);
+-			struct bpf_htab *htab = rcu_dereference(e->htab);
++			struct bpf_htab *htab = container_of(e->map, struct bpf_htab, map);
+ 			struct hlist_head *head;
+ 			struct htab_elem *l;
+ 			struct bucket *b;
+@@ -370,6 +376,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout)
+ 			}
+ 			raw_spin_unlock_bh(&b->lock);
+ 		}
++		kfree(e);
+ 		e = psock_map_pop(sk, psock);
+ 	}
+ 	rcu_read_unlock();
+@@ -1644,6 +1651,7 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	bpf_map_init_from_attr(&stab->map, attr);
++	raw_spin_lock_init(&stab->lock);
+ 
+ 	/* make sure page count doesn't overflow */
+ 	cost = (u64) stab->map.max_entries * sizeof(struct sock *);
+@@ -1678,8 +1686,10 @@ static void smap_list_map_remove(struct smap_psock *psock,
+ 
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+-		if (e->entry == entry)
++		if (e->entry == entry) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1693,8 +1703,10 @@ static void smap_list_hash_remove(struct smap_psock *psock,
+ 	list_for_each_entry_safe(e, tmp, &psock->maps, list) {
+ 		struct htab_elem *c = rcu_dereference(e->hash_link);
+ 
+-		if (c == hash_link)
++		if (c == hash_link) {
+ 			list_del(&e->list);
++			kfree(e);
++		}
+ 	}
+ 	spin_unlock_bh(&psock->maps_lock);
+ }
+@@ -1714,14 +1726,15 @@ static void sock_map_free(struct bpf_map *map)
+ 	 * and a grace period expire to ensure psock is really safe to remove.
+ 	 */
+ 	rcu_read_lock();
++	raw_spin_lock_bh(&stab->lock);
+ 	for (i = 0; i < stab->map.max_entries; i++) {
+ 		struct smap_psock *psock;
+ 		struct sock *sock;
+ 
+-		sock = xchg(&stab->sock_map[i], NULL);
++		sock = stab->sock_map[i];
+ 		if (!sock)
+ 			continue;
+-
++		stab->sock_map[i] = NULL;
+ 		psock = smap_psock_sk(sock);
+ 		/* This check handles a racing sock event that can get the
+ 		 * sk_callback_lock before this case but after xchg happens
+@@ -1733,6 +1746,7 @@ static void sock_map_free(struct bpf_map *map)
+ 			smap_release_sock(psock, sock);
+ 		}
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
+ 	rcu_read_unlock();
+ 
+ 	sock_map_remove_complete(stab);
+@@ -1776,19 +1790,23 @@ static int sock_map_delete_elem(struct bpf_map *map, void *key)
+ 	if (k >= map->max_entries)
+ 		return -EINVAL;
+ 
+-	sock = xchg(&stab->sock_map[k], NULL);
++	raw_spin_lock_bh(&stab->lock);
++	sock = stab->sock_map[k];
++	stab->sock_map[k] = NULL;
++	raw_spin_unlock_bh(&stab->lock);
+ 	if (!sock)
+ 		return -EINVAL;
+ 
+ 	psock = smap_psock_sk(sock);
+ 	if (!psock)
+-		goto out;
+-
+-	if (psock->bpf_parse)
++		return 0;
++	if (psock->bpf_parse) {
++		write_lock_bh(&sock->sk_callback_lock);
+ 		smap_stop_sock(psock, sock);
++		write_unlock_bh(&sock->sk_callback_lock);
++	}
+ 	smap_list_map_remove(psock, &stab->sock_map[k]);
+ 	smap_release_sock(psock, sock);
+-out:
+ 	return 0;
+ }
+ 
+@@ -1824,11 +1842,9 @@ out:
+ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 				      struct bpf_sock_progs *progs,
+ 				      struct sock *sock,
+-				      struct sock **map_link,
+ 				      void *key)
+ {
+ 	struct bpf_prog *verdict, *parse, *tx_msg;
+-	struct smap_psock_map_entry *e = NULL;
+ 	struct smap_psock *psock;
+ 	bool new = false;
+ 	int err = 0;
+@@ -1901,14 +1917,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		new = true;
+ 	}
+ 
+-	if (map_link) {
+-		e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
+-		if (!e) {
+-			err = -ENOMEM;
+-			goto out_free;
+-		}
+-	}
+-
+ 	/* 3. At this point we have a reference to a valid psock that is
+ 	 * running. Attach any BPF programs needed.
+ 	 */
+@@ -1930,17 +1938,6 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map,
+ 		write_unlock_bh(&sock->sk_callback_lock);
+ 	}
+ 
+-	/* 4. Place psock in sockmap for use and stop any programs on
+-	 * the old sock assuming its not the same sock we are replacing
+-	 * it with. Because we can only have a single set of programs if
+-	 * old_sock has a strp we can stop it.
+-	 */
+-	if (map_link) {
+-		e->entry = map_link;
+-		spin_lock_bh(&psock->maps_lock);
+-		list_add_tail(&e->list, &psock->maps);
+-		spin_unlock_bh(&psock->maps_lock);
+-	}
+ 	return err;
+ out_free:
+ 	smap_release_sock(psock, sock);
+@@ -1951,7 +1948,6 @@ out_progs:
+ 	}
+ 	if (tx_msg)
+ 		bpf_prog_put(tx_msg);
+-	kfree(e);
+ 	return err;
+ }
+ 
+@@ -1961,36 +1957,57 @@ static int sock_map_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ {
+ 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
+ 	struct bpf_sock_progs *progs = &stab->progs;
+-	struct sock *osock, *sock;
++	struct sock *osock, *sock = skops->sk;
++	struct smap_psock_map_entry *e;
++	struct smap_psock *psock;
+ 	u32 i = *(u32 *)key;
+ 	int err;
+ 
+ 	if (unlikely(flags > BPF_EXIST))
+ 		return -EINVAL;
+-
+ 	if (unlikely(i >= stab->map.max_entries))
+ 		return -E2BIG;
+ 
+-	sock = READ_ONCE(stab->sock_map[i]);
+-	if (flags == BPF_EXIST && !sock)
+-		return -ENOENT;
+-	else if (flags == BPF_NOEXIST && sock)
+-		return -EEXIST;
++	e = kzalloc(sizeof(*e), GFP_ATOMIC | __GFP_NOWARN);
++	if (!e)
++		return -ENOMEM;
+ 
+-	sock = skops->sk;
+-	err = __sock_map_ctx_update_elem(map, progs, sock, &stab->sock_map[i],
+-					 key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto out;
+ 
+-	osock = xchg(&stab->sock_map[i], sock);
+-	if (osock) {
+-		struct smap_psock *opsock = smap_psock_sk(osock);
++	/* psock guaranteed to be present. */
++	psock = smap_psock_sk(sock);
++	raw_spin_lock_bh(&stab->lock);
++	osock = stab->sock_map[i];
++	if (osock && flags == BPF_NOEXIST) {
++		err = -EEXIST;
++		goto out_unlock;
++	}
++	if (!osock && flags == BPF_EXIST) {
++		err = -ENOENT;
++		goto out_unlock;
++	}
+ 
+-		smap_list_map_remove(opsock, &stab->sock_map[i]);
+-		smap_release_sock(opsock, osock);
++	e->entry = &stab->sock_map[i];
++	e->map = map;
++	spin_lock_bh(&psock->maps_lock);
++	list_add_tail(&e->list, &psock->maps);
++	spin_unlock_bh(&psock->maps_lock);
++
++	stab->sock_map[i] = sock;
++	if (osock) {
++		psock = smap_psock_sk(osock);
++		smap_list_map_remove(psock, &stab->sock_map[i]);
++		smap_release_sock(psock, osock);
+ 	}
++	raw_spin_unlock_bh(&stab->lock);
++	return 0;
++out_unlock:
++	smap_release_sock(psock, sock);
++	raw_spin_unlock_bh(&stab->lock);
+ out:
++	kfree(e);
+ 	return err;
+ }
+ 
+@@ -2353,7 +2370,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	b = __select_bucket(htab, hash);
+ 	head = &b->head;
+ 
+-	err = __sock_map_ctx_update_elem(map, progs, sock, NULL, key);
++	err = __sock_map_ctx_update_elem(map, progs, sock, key);
+ 	if (err)
+ 		goto err;
+ 
+@@ -2379,8 +2396,7 @@ static int sock_hash_ctx_update_elem(struct bpf_sock_ops_kern *skops,
+ 	}
+ 
+ 	rcu_assign_pointer(e->hash_link, l_new);
+-	rcu_assign_pointer(e->htab,
+-			   container_of(map, struct bpf_htab, map));
++	e->map = map;
+ 	spin_lock_bh(&psock->maps_lock);
+ 	list_add_tail(&e->list, &psock->maps);
+ 	spin_unlock_bh(&psock->maps_lock);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 1b27babc4c78..8ed48ca2cc43 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -549,8 +549,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ 			goto out;
+ 	}
+ 	/* a new mm has just been created */
+-	arch_dup_mmap(oldmm, mm);
+-	retval = 0;
++	retval = arch_dup_mmap(oldmm, mm);
+ out:
+ 	up_write(&mm->mmap_sem);
+ 	flush_tlb_mm(oldmm);
+@@ -1417,7 +1416,9 @@ static int copy_sighand(unsigned long clone_flags, struct task_struct *tsk)
+ 		return -ENOMEM;
+ 
+ 	atomic_set(&sig->count, 1);
++	spin_lock_irq(&current->sighand->siglock);
+ 	memcpy(sig->action, current->sighand->action, sizeof(sig->action));
++	spin_unlock_irq(&current->sighand->siglock);
+ 	return 0;
+ }
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 5f78c6e41796..0280deac392e 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2652,6 +2652,9 @@ void flush_workqueue(struct workqueue_struct *wq)
+ 	if (WARN_ON(!wq_online))
+ 		return;
+ 
++	lock_map_acquire(&wq->lockdep_map);
++	lock_map_release(&wq->lockdep_map);
++
+ 	mutex_lock(&wq->mutex);
+ 
+ 	/*
+@@ -2843,7 +2846,8 @@ reflush:
+ }
+ EXPORT_SYMBOL_GPL(drain_workqueue);
+ 
+-static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
++static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
++			     bool from_cancel)
+ {
+ 	struct worker *worker = NULL;
+ 	struct worker_pool *pool;
+@@ -2885,7 +2889,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
+ 	 * workqueues the deadlock happens when the rescuer stalls, blocking
+ 	 * forward progress.
+ 	 */
+-	if (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer) {
++	if (!from_cancel &&
++	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
+ 		lock_map_acquire(&pwq->wq->lockdep_map);
+ 		lock_map_release(&pwq->wq->lockdep_map);
+ 	}
+@@ -2896,6 +2901,27 @@ already_gone:
+ 	return false;
+ }
+ 
++static bool __flush_work(struct work_struct *work, bool from_cancel)
++{
++	struct wq_barrier barr;
++
++	if (WARN_ON(!wq_online))
++		return false;
++
++	if (!from_cancel) {
++		lock_map_acquire(&work->lockdep_map);
++		lock_map_release(&work->lockdep_map);
++	}
++
++	if (start_flush_work(work, &barr, from_cancel)) {
++		wait_for_completion(&barr.done);
++		destroy_work_on_stack(&barr.work);
++		return true;
++	} else {
++		return false;
++	}
++}
++
+ /**
+  * flush_work - wait for a work to finish executing the last queueing instance
+  * @work: the work to flush
+@@ -2909,18 +2935,7 @@ already_gone:
+  */
+ bool flush_work(struct work_struct *work)
+ {
+-	struct wq_barrier barr;
+-
+-	if (WARN_ON(!wq_online))
+-		return false;
+-
+-	if (start_flush_work(work, &barr)) {
+-		wait_for_completion(&barr.done);
+-		destroy_work_on_stack(&barr.work);
+-		return true;
+-	} else {
+-		return false;
+-	}
++	return __flush_work(work, false);
+ }
+ EXPORT_SYMBOL_GPL(flush_work);
+ 
+@@ -2986,7 +3001,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
+ 	 * isn't executing.
+ 	 */
+ 	if (wq_online)
+-		flush_work(work);
++		__flush_work(work, true);
+ 
+ 	clear_work_data(work);
+ 
+diff --git a/lib/debugobjects.c b/lib/debugobjects.c
+index 994be4805cec..24c1df0d7466 100644
+--- a/lib/debugobjects.c
++++ b/lib/debugobjects.c
+@@ -360,9 +360,12 @@ static void debug_object_is_on_stack(void *addr, int onstack)
+ 
+ 	limit++;
+ 	if (is_on_stack)
+-		pr_warn("object is on stack, but not annotated\n");
++		pr_warn("object %p is on stack %p, but NOT annotated.\n", addr,
++			 task_stack_page(current));
+ 	else
+-		pr_warn("object is not on stack, but annotated\n");
++		pr_warn("object %p is NOT on stack %p, but annotated.\n", addr,
++			 task_stack_page(current));
++
+ 	WARN_ON(1);
+ }
+ 
+diff --git a/mm/Kconfig b/mm/Kconfig
+index ce95491abd6a..94af022b7f3d 100644
+--- a/mm/Kconfig
++++ b/mm/Kconfig
+@@ -635,7 +635,7 @@ config DEFERRED_STRUCT_PAGE_INIT
+ 	bool "Defer initialisation of struct pages to kthreads"
+ 	default n
+ 	depends on NO_BOOTMEM
+-	depends on !FLATMEM
++	depends on SPARSEMEM
+ 	depends on !NEED_PER_CPU_KM
+ 	help
+ 	  Ordinarily all struct pages are initialised during early boot in a
+diff --git a/mm/fadvise.c b/mm/fadvise.c
+index afa41491d324..2d8376e3c640 100644
+--- a/mm/fadvise.c
++++ b/mm/fadvise.c
+@@ -72,8 +72,12 @@ int ksys_fadvise64_64(int fd, loff_t offset, loff_t len, int advice)
+ 		goto out;
+ 	}
+ 
+-	/* Careful about overflows. Len == 0 means "as much as possible" */
+-	endbyte = offset + len;
++	/*
++	 * Careful about overflows. Len == 0 means "as much as possible".  Use
++	 * unsigned math because signed overflows are undefined and UBSan
++	 * complains.
++	 */
++	endbyte = (u64)offset + (u64)len;
+ 	if (!len || endbyte < len)
+ 		endbyte = -1;
+ 	else
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index ef456395645a..7fb60dd4be79 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -199,15 +199,14 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ static void p9_conn_cancel(struct p9_conn *m, int err)
+ {
+ 	struct p9_req_t *req, *rtmp;
+-	unsigned long flags;
+ 	LIST_HEAD(cancel_list);
+ 
+ 	p9_debug(P9_DEBUG_ERROR, "mux %p err %d\n", m, err);
+ 
+-	spin_lock_irqsave(&m->client->lock, flags);
++	spin_lock(&m->client->lock);
+ 
+ 	if (m->err) {
+-		spin_unlock_irqrestore(&m->client->lock, flags);
++		spin_unlock(&m->client->lock);
+ 		return;
+ 	}
+ 
+@@ -219,7 +218,6 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 	list_for_each_entry_safe(req, rtmp, &m->unsent_req_list, req_list) {
+ 		list_move(&req->req_list, &cancel_list);
+ 	}
+-	spin_unlock_irqrestore(&m->client->lock, flags);
+ 
+ 	list_for_each_entry_safe(req, rtmp, &cancel_list, req_list) {
+ 		p9_debug(P9_DEBUG_ERROR, "call back req %p\n", req);
+@@ -228,6 +226,7 @@ static void p9_conn_cancel(struct p9_conn *m, int err)
+ 			req->t_err = err;
+ 		p9_client_cb(m->client, req, REQ_STATUS_ERROR);
+ 	}
++	spin_unlock(&m->client->lock);
+ }
+ 
+ static __poll_t
+@@ -375,8 +374,9 @@ static void p9_read_work(struct work_struct *work)
+ 		if (m->req->status != REQ_STATUS_ERROR)
+ 			status = REQ_STATUS_RCVD;
+ 		list_del(&m->req->req_list);
+-		spin_unlock(&m->client->lock);
++		/* update req->status while holding client->lock  */
+ 		p9_client_cb(m->client, m->req, status);
++		spin_unlock(&m->client->lock);
+ 		m->rc.sdata = NULL;
+ 		m->rc.offset = 0;
+ 		m->rc.capacity = 0;
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 4c2da2513c8b..2dc1c293092b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -571,7 +571,7 @@ static int p9_virtio_probe(struct virtio_device *vdev)
+ 	chan->vq = virtio_find_single_vq(vdev, req_done, "requests");
+ 	if (IS_ERR(chan->vq)) {
+ 		err = PTR_ERR(chan->vq);
+-		goto out_free_vq;
++		goto out_free_chan;
+ 	}
+ 	chan->vq->vdev->priv = chan;
+ 	spin_lock_init(&chan->lock);
+@@ -624,6 +624,7 @@ out_free_tag:
+ 	kfree(tag);
+ out_free_vq:
+ 	vdev->config->del_vqs(vdev);
++out_free_chan:
+ 	kfree(chan);
+ fail:
+ 	return err;
+diff --git a/net/core/xdp.c b/net/core/xdp.c
+index 6771f1855b96..2657056130a4 100644
+--- a/net/core/xdp.c
++++ b/net/core/xdp.c
+@@ -95,23 +95,15 @@ static void __xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq)
+ {
+ 	struct xdp_mem_allocator *xa;
+ 	int id = xdp_rxq->mem.id;
+-	int err;
+ 
+ 	if (id == 0)
+ 		return;
+ 
+ 	mutex_lock(&mem_id_lock);
+ 
+-	xa = rhashtable_lookup(mem_id_ht, &id, mem_id_rht_params);
+-	if (!xa) {
+-		mutex_unlock(&mem_id_lock);
+-		return;
+-	}
+-
+-	err = rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params);
+-	WARN_ON(err);
+-
+-	call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
++	xa = rhashtable_lookup_fast(mem_id_ht, &id, mem_id_rht_params);
++	if (xa && !rhashtable_remove_fast(mem_id_ht, &xa->node, mem_id_rht_params))
++		call_rcu(&xa->rcu, __xdp_mem_allocator_rcu_free);
+ 
+ 	mutex_unlock(&mem_id_lock);
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 2d8efeecf619..055f4bbba86b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -1511,11 +1511,14 @@ nla_put_failure:
+ 
+ static void erspan_setup(struct net_device *dev)
+ {
++	struct ip_tunnel *t = netdev_priv(dev);
++
+ 	ether_setup(dev);
+ 	dev->netdev_ops = &erspan_netdev_ops;
+ 	dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ 	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 	ip_tunnel_setup(dev, erspan_net_id);
++	t->erspan_ver = 1;
+ }
+ 
+ static const struct nla_policy ipgre_policy[IFLA_GRE_MAX + 1] = {
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 3b2711e33e4c..488b201851d7 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2516,6 +2516,12 @@ static int __net_init tcp_sk_init(struct net *net)
+ 		if (res)
+ 			goto fail;
+ 		sock_set_flag(sk, SOCK_USE_WRITE_QUEUE);
++
++		/* Please enforce IP_DF and IPID==0 for RST and
++		 * ACK sent in SYN-RECV and TIME-WAIT state.
++		 */
++		inet_sk(sk)->pmtudisc = IP_PMTUDISC_DO;
++
+ 		*per_cpu_ptr(net->ipv4.tcp_sk, cpu) = sk;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 1dda1341a223..b690132f5da2 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -184,8 +184,9 @@ kill:
+ 				inet_twsk_deschedule_put(tw);
+ 				return TCP_TW_SUCCESS;
+ 			}
++		} else {
++			inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 		}
+-		inet_twsk_reschedule(tw, TCP_TIMEWAIT_LEN);
+ 
+ 		if (tmp_opt.saw_tstamp) {
+ 			tcptw->tw_ts_recent	  = tmp_opt.rcv_tsval;
+diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
+index 622caa4039e0..a5995bb2eaca 100644
+--- a/net/ipv4/tcp_ulp.c
++++ b/net/ipv4/tcp_ulp.c
+@@ -51,7 +51,7 @@ static const struct tcp_ulp_ops *__tcp_ulp_find_autoload(const char *name)
+ #ifdef CONFIG_MODULES
+ 	if (!ulp && capable(CAP_NET_ADMIN)) {
+ 		rcu_read_unlock();
+-		request_module("%s", name);
++		request_module("tcp-ulp-%s", name);
+ 		rcu_read_lock();
+ 		ulp = tcp_ulp_find(name);
+ 	}
+@@ -129,6 +129,8 @@ void tcp_cleanup_ulp(struct sock *sk)
+ 	if (icsk->icsk_ulp_ops->release)
+ 		icsk->icsk_ulp_ops->release(sk);
+ 	module_put(icsk->icsk_ulp_ops->owner);
++
++	icsk->icsk_ulp_ops = NULL;
+ }
+ 
+ /* Change upper layer protocol for socket */
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index d212738e9d10..5516f55e214b 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -198,6 +198,8 @@ void fib6_info_destroy_rcu(struct rcu_head *head)
+ 		}
+ 	}
+ 
++	lwtstate_put(f6i->fib6_nh.nh_lwtstate);
++
+ 	if (f6i->fib6_nh.nh_dev)
+ 		dev_put(f6i->fib6_nh.nh_dev);
+ 
+@@ -987,7 +989,10 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ 					fib6_clean_expires(iter);
+ 				else
+ 					fib6_set_expires(iter, rt->expires);
+-				fib6_metric_set(iter, RTAX_MTU, rt->fib6_pmtu);
++
++				if (rt->fib6_pmtu)
++					fib6_metric_set(iter, RTAX_MTU,
++							rt->fib6_pmtu);
+ 				return -EEXIST;
+ 			}
+ 			/* If we have the same destination and the same metric,
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index cd2cfb04e5d8..7ec997fcbc43 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -1776,6 +1776,7 @@ static void ip6gre_netlink_parms(struct nlattr *data[],
+ 	if (data[IFLA_GRE_COLLECT_METADATA])
+ 		parms->collect_md = true;
+ 
++	parms->erspan_ver = 1;
+ 	if (data[IFLA_GRE_ERSPAN_VER])
+ 		parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]);
+ 
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index c72ae3a4fe09..c31a7c4a9249 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -481,7 +481,7 @@ vti6_xmit(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
+ 	}
+ 
+ 	mtu = dst_mtu(dst);
+-	if (!skb->ignore_df && skb->len > mtu) {
++	if (skb->len > mtu) {
+ 		skb_dst_update_pmtu(skb, mtu);
+ 
+ 		if (skb->protocol == htons(ETH_P_IPV6)) {
+@@ -1102,7 +1102,8 @@ static void __net_exit vti6_destroy_tunnels(struct vti6_net *ip6n,
+ 	}
+ 
+ 	t = rtnl_dereference(ip6n->tnls_wc[0]);
+-	unregister_netdevice_queue(t->dev, list);
++	if (t)
++		unregister_netdevice_queue(t->dev, list);
+ }
+ 
+ static int __net_init vti6_init_net(struct net *net)
+@@ -1114,6 +1115,8 @@ static int __net_init vti6_init_net(struct net *net)
+ 	ip6n->tnls[0] = ip6n->tnls_wc;
+ 	ip6n->tnls[1] = ip6n->tnls_r_l;
+ 
++	if (!net_has_fallback_tunnels(net))
++		return 0;
+ 	err = -ENOMEM;
+ 	ip6n->fb_tnl_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6_vti0",
+ 					NET_NAME_UNKNOWN, vti6_dev_setup);
+diff --git a/net/ipv6/netfilter/ip6t_rpfilter.c b/net/ipv6/netfilter/ip6t_rpfilter.c
+index 0fe61ede77c6..c3c6b09acdc4 100644
+--- a/net/ipv6/netfilter/ip6t_rpfilter.c
++++ b/net/ipv6/netfilter/ip6t_rpfilter.c
+@@ -26,6 +26,12 @@ static bool rpfilter_addr_unicast(const struct in6_addr *addr)
+ 	return addr_type & IPV6_ADDR_UNICAST;
+ }
+ 
++static bool rpfilter_addr_linklocal(const struct in6_addr *addr)
++{
++	int addr_type = ipv6_addr_type(addr);
++	return addr_type & IPV6_ADDR_LINKLOCAL;
++}
++
+ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 				     const struct net_device *dev, u8 flags)
+ {
+@@ -48,7 +54,11 @@ static bool rpfilter_lookup_reverse6(struct net *net, const struct sk_buff *skb,
+ 	}
+ 
+ 	fl6.flowi6_mark = flags & XT_RPFILTER_VALID_MARK ? skb->mark : 0;
+-	if ((flags & XT_RPFILTER_LOOSE) == 0)
++
++	if (rpfilter_addr_linklocal(&iph->saddr)) {
++		lookup_flags |= RT6_LOOKUP_F_IFACE;
++		fl6.flowi6_oif = dev->ifindex;
++	} else if ((flags & XT_RPFILTER_LOOSE) == 0)
+ 		fl6.flowi6_oif = dev->ifindex;
+ 
+ 	rt = (void *)ip6_route_lookup(net, &fl6, skb, lookup_flags);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 7208c16302f6..18e00ce1719a 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -956,7 +956,7 @@ static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->dst.error = 0;
+ 	rt->dst.output = ip6_output;
+ 
+-	if (ort->fib6_type == RTN_LOCAL) {
++	if (ort->fib6_type == RTN_LOCAL || ort->fib6_type == RTN_ANYCAST) {
+ 		rt->dst.input = ip6_input;
+ 	} else if (ipv6_addr_type(&ort->fib6_dst.addr) & IPV6_ADDR_MULTICAST) {
+ 		rt->dst.input = ip6_mc_input;
+@@ -996,7 +996,6 @@ static void ip6_rt_copy_init(struct rt6_info *rt, struct fib6_info *ort)
+ 	rt->rt6i_src = ort->fib6_src;
+ #endif
+ 	rt->rt6i_prefsrc = ort->fib6_prefsrc;
+-	rt->dst.lwtstate = lwtstate_get(ort->fib6_nh.nh_lwtstate);
+ }
+ 
+ static struct fib6_node* fib6_backtrack(struct fib6_node *fn,
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 0679dd101e72..7ca926a03b81 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -1972,13 +1972,20 @@ ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum, struct sk_buff *skb, int
+ 	if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
+ 		/* the destination server is not available */
+ 
+-		if (sysctl_expire_nodest_conn(ipvs)) {
++		__u32 flags = cp->flags;
++
++		/* when timer already started, silently drop the packet.*/
++		if (timer_pending(&cp->timer))
++			__ip_vs_conn_put(cp);
++		else
++			ip_vs_conn_put(cp);
++
++		if (sysctl_expire_nodest_conn(ipvs) &&
++		    !(flags & IP_VS_CONN_F_ONE_PACKET)) {
+ 			/* try to expire the connection immediately */
+ 			ip_vs_conn_expire_now(cp);
+ 		}
+-		/* don't restart its timer, and silently
+-		   drop the packet. */
+-		__ip_vs_conn_put(cp);
++
+ 		return NF_DROP;
+ 	}
+ 
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 20a2e37c76d1..e952eedf44b4 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -821,6 +821,21 @@ ctnetlink_alloc_filter(const struct nlattr * const cda[])
+ #endif
+ }
+ 
++static int ctnetlink_start(struct netlink_callback *cb)
++{
++	const struct nlattr * const *cda = cb->data;
++	struct ctnetlink_filter *filter = NULL;
++
++	if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
++		filter = ctnetlink_alloc_filter(cda);
++		if (IS_ERR(filter))
++			return PTR_ERR(filter);
++	}
++
++	cb->data = filter;
++	return 0;
++}
++
+ static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+ {
+ 	struct ctnetlink_filter *filter = data;
+@@ -1240,19 +1255,12 @@ static int ctnetlink_get_conntrack(struct net *net, struct sock *ctnl,
+ 
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
++			.start = ctnetlink_start,
+ 			.dump = ctnetlink_dump_table,
+ 			.done = ctnetlink_done,
++			.data = (void *)cda,
+ 		};
+ 
+-		if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
+-			struct ctnetlink_filter *filter;
+-
+-			filter = ctnetlink_alloc_filter(cda);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(ctnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/nfnetlink_acct.c b/net/netfilter/nfnetlink_acct.c
+index a0e5adf0b3b6..8fa8bf7c48e6 100644
+--- a/net/netfilter/nfnetlink_acct.c
++++ b/net/netfilter/nfnetlink_acct.c
+@@ -238,29 +238,33 @@ static const struct nla_policy filter_policy[NFACCT_FILTER_MAX + 1] = {
+ 	[NFACCT_FILTER_VALUE]	= { .type = NLA_U32 },
+ };
+ 
+-static struct nfacct_filter *
+-nfacct_filter_alloc(const struct nlattr * const attr)
++static int nfnl_acct_start(struct netlink_callback *cb)
+ {
+-	struct nfacct_filter *filter;
++	const struct nlattr *const attr = cb->data;
+ 	struct nlattr *tb[NFACCT_FILTER_MAX + 1];
++	struct nfacct_filter *filter;
+ 	int err;
+ 
++	if (!attr)
++		return 0;
++
+ 	err = nla_parse_nested(tb, NFACCT_FILTER_MAX, attr, filter_policy,
+ 			       NULL);
+ 	if (err < 0)
+-		return ERR_PTR(err);
++		return err;
+ 
+ 	if (!tb[NFACCT_FILTER_MASK] || !tb[NFACCT_FILTER_VALUE])
+-		return ERR_PTR(-EINVAL);
++		return -EINVAL;
+ 
+ 	filter = kzalloc(sizeof(struct nfacct_filter), GFP_KERNEL);
+ 	if (!filter)
+-		return ERR_PTR(-ENOMEM);
++		return -ENOMEM;
+ 
+ 	filter->mask = ntohl(nla_get_be32(tb[NFACCT_FILTER_MASK]));
+ 	filter->value = ntohl(nla_get_be32(tb[NFACCT_FILTER_VALUE]));
++	cb->data = filter;
+ 
+-	return filter;
++	return 0;
+ }
+ 
+ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+@@ -275,18 +279,11 @@ static int nfnl_acct_get(struct net *net, struct sock *nfnl,
+ 	if (nlh->nlmsg_flags & NLM_F_DUMP) {
+ 		struct netlink_dump_control c = {
+ 			.dump = nfnl_acct_dump,
++			.start = nfnl_acct_start,
+ 			.done = nfnl_acct_done,
++			.data = (void *)tb[NFACCT_FILTER],
+ 		};
+ 
+-		if (tb[NFACCT_FILTER]) {
+-			struct nfacct_filter *filter;
+-
+-			filter = nfacct_filter_alloc(tb[NFACCT_FILTER]);
+-			if (IS_ERR(filter))
+-				return PTR_ERR(filter);
+-
+-			c.data = filter;
+-		}
+ 		return netlink_dump_start(nfnl, skb, nlh, &c);
+ 	}
+ 
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index d0d8397c9588..aecadd471e1d 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -1178,12 +1178,7 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
+ 	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
+ 		return NULL;
+ 
+-	/* __GFP_NORETRY is not fully supported by kvmalloc but it should
+-	 * work reasonably well if sz is too large and bail out rather
+-	 * than shoot all processes down before realizing there is nothing
+-	 * more to reclaim.
+-	 */
+-	info = kvmalloc(sz, GFP_KERNEL | __GFP_NORETRY);
++	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
+ 	if (!info)
+ 		return NULL;
+ 
+diff --git a/net/rds/ib_frmr.c b/net/rds/ib_frmr.c
+index d152e48ea371..8596eed6d9a8 100644
+--- a/net/rds/ib_frmr.c
++++ b/net/rds/ib_frmr.c
+@@ -61,6 +61,7 @@ static struct rds_ib_mr *rds_ib_alloc_frmr(struct rds_ib_device *rds_ibdev,
+ 			 pool->fmr_attr.max_pages);
+ 	if (IS_ERR(frmr->mr)) {
+ 		pr_warn("RDS/IB: %s failed to allocate MR", __func__);
++		err = PTR_ERR(frmr->mr);
+ 		goto out_no_cigar;
+ 	}
+ 
+diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
+index 20d7d36b2fc9..005cb21348c9 100644
+--- a/net/sched/act_ife.c
++++ b/net/sched/act_ife.c
+@@ -265,10 +265,8 @@ static const char *ife_meta_id2name(u32 metaid)
+ #endif
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+-				void *val, int len, bool exists)
++static int load_metaops_and_vet(u32 metaid, void *val, int len)
+ {
+ 	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+@@ -276,13 +274,9 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ 	if (!ops) {
+ 		ret = -ENOENT;
+ #ifdef CONFIG_MODULES
+-		if (exists)
+-			spin_unlock_bh(&ife->tcf_lock);
+ 		rtnl_unlock();
+ 		request_module("ife-meta-%s", ife_meta_id2name(metaid));
+ 		rtnl_lock();
+-		if (exists)
+-			spin_lock_bh(&ife->tcf_lock);
+ 		ops = find_ife_oplist(metaid);
+ #endif
+ 	}
+@@ -299,24 +293,17 @@ static int load_metaops_and_vet(struct tcf_ife_info *ife, u32 metaid,
+ }
+ 
+ /* called when adding new meta information
+- * under ife->tcf_lock for existing action
+ */
+-static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+-			int len, bool atomic)
++static int __add_metainfo(const struct tcf_meta_ops *ops,
++			  struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			  int len, bool atomic, bool exists)
+ {
+ 	struct tcf_meta_info *mi = NULL;
+-	struct tcf_meta_ops *ops = find_ife_oplist(metaid);
+ 	int ret = 0;
+ 
+-	if (!ops)
+-		return -ENOENT;
+-
+ 	mi = kzalloc(sizeof(*mi), atomic ? GFP_ATOMIC : GFP_KERNEL);
+-	if (!mi) {
+-		/*put back what find_ife_oplist took */
+-		module_put(ops->owner);
++	if (!mi)
+ 		return -ENOMEM;
+-	}
+ 
+ 	mi->metaid = metaid;
+ 	mi->ops = ops;
+@@ -324,17 +311,49 @@ static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
+ 		ret = ops->alloc(mi, metaval, atomic ? GFP_ATOMIC : GFP_KERNEL);
+ 		if (ret != 0) {
+ 			kfree(mi);
+-			module_put(ops->owner);
+ 			return ret;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	list_add_tail(&mi->metalist, &ife->metalist);
++	if (exists)
++		spin_unlock_bh(&ife->tcf_lock);
+ 
+ 	return ret;
+ }
+ 
+-static int use_all_metadata(struct tcf_ife_info *ife)
++static int add_metainfo_and_get_ops(const struct tcf_meta_ops *ops,
++				    struct tcf_ife_info *ife, u32 metaid,
++				    bool exists)
++{
++	int ret;
++
++	if (!try_module_get(ops->owner))
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, NULL, 0, true, exists);
++	if (ret)
++		module_put(ops->owner);
++	return ret;
++}
++
++static int add_metainfo(struct tcf_ife_info *ife, u32 metaid, void *metaval,
++			int len, bool exists)
++{
++	const struct tcf_meta_ops *ops = find_ife_oplist(metaid);
++	int ret;
++
++	if (!ops)
++		return -ENOENT;
++	ret = __add_metainfo(ops, ife, metaid, metaval, len, false, exists);
++	if (ret)
++		/*put back what find_ife_oplist took */
++		module_put(ops->owner);
++	return ret;
++}
++
++static int use_all_metadata(struct tcf_ife_info *ife, bool exists)
+ {
+ 	struct tcf_meta_ops *o;
+ 	int rc = 0;
+@@ -342,7 +361,7 @@ static int use_all_metadata(struct tcf_ife_info *ife)
+ 
+ 	read_lock(&ife_mod_lock);
+ 	list_for_each_entry(o, &ifeoplist, list) {
+-		rc = add_metainfo(ife, o->metaid, NULL, 0, true);
++		rc = add_metainfo_and_get_ops(o, ife, o->metaid, exists);
+ 		if (rc == 0)
+ 			installed += 1;
+ 	}
+@@ -393,7 +412,6 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 	struct tcf_meta_info *e, *n;
+ 
+ 	list_for_each_entry_safe(e, n, &ife->metalist, metalist) {
+-		module_put(e->ops->owner);
+ 		list_del(&e->metalist);
+ 		if (e->metaval) {
+ 			if (e->ops->release)
+@@ -401,6 +419,7 @@ static void _tcf_ife_cleanup(struct tc_action *a)
+ 			else
+ 				kfree(e->metaval);
+ 		}
++		module_put(e->ops->owner);
+ 		kfree(e);
+ 	}
+ }
+@@ -419,7 +438,6 @@ static void tcf_ife_cleanup(struct tc_action *a)
+ 		kfree_rcu(p, rcu);
+ }
+ 
+-/* under ife->tcf_lock for existing action */
+ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			     bool exists)
+ {
+@@ -433,7 +451,7 @@ static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
+ 			val = nla_data(tb[i]);
+ 			len = nla_len(tb[i]);
+ 
+-			rc = load_metaops_and_vet(ife, i, val, len, exists);
++			rc = load_metaops_and_vet(i, val, len);
+ 			if (rc != 0)
+ 				return rc;
+ 
+@@ -531,8 +549,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ 		p->eth_type = ife_type;
+ 	}
+ 
+-	if (exists)
+-		spin_lock_bh(&ife->tcf_lock);
+ 
+ 	if (ret == ACT_P_CREATED)
+ 		INIT_LIST_HEAD(&ife->metalist);
+@@ -544,9 +560,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
+ metadata_parse_err:
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+@@ -561,18 +574,17 @@ metadata_parse_err:
+ 		 * as we can. You better have at least one else we are
+ 		 * going to bail out
+ 		 */
+-		err = use_all_metadata(ife);
++		err = use_all_metadata(ife, exists);
+ 		if (err) {
+ 			if (ret == ACT_P_CREATED)
+ 				tcf_idr_release(*a, bind);
+-
+-			if (exists)
+-				spin_unlock_bh(&ife->tcf_lock);
+ 			kfree(p);
+ 			return err;
+ 		}
+ 	}
+ 
++	if (exists)
++		spin_lock_bh(&ife->tcf_lock);
+ 	ife->tcf_action = parm->action;
+ 	if (exists)
+ 		spin_unlock_bh(&ife->tcf_lock);
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 8a925c72db5f..bad475c87688 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -109,16 +109,18 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ {
+ 	struct nlattr *keys_start = nla_nest_start(skb, TCA_PEDIT_KEYS_EX);
+ 
++	if (!keys_start)
++		goto nla_failure;
+ 	for (; n > 0; n--) {
+ 		struct nlattr *key_start;
+ 
+ 		key_start = nla_nest_start(skb, TCA_PEDIT_KEY_EX);
++		if (!key_start)
++			goto nla_failure;
+ 
+ 		if (nla_put_u16(skb, TCA_PEDIT_KEY_EX_HTYPE, keys_ex->htype) ||
+-		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd)) {
+-			nlmsg_trim(skb, keys_start);
+-			return -EINVAL;
+-		}
++		    nla_put_u16(skb, TCA_PEDIT_KEY_EX_CMD, keys_ex->cmd))
++			goto nla_failure;
+ 
+ 		nla_nest_end(skb, key_start);
+ 
+@@ -128,6 +130,9 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
+ 	nla_nest_end(skb, keys_start);
+ 
+ 	return 0;
++nla_failure:
++	nla_nest_cancel(skb, keys_start);
++	return -EINVAL;
+ }
+ 
+ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+@@ -395,7 +400,10 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
+ 	opt->bindcnt = p->tcf_bindcnt - bind;
+ 
+ 	if (p->tcfp_keys_ex) {
+-		tcf_pedit_key_ex_dump(skb, p->tcfp_keys_ex, p->tcfp_nkeys);
++		if (tcf_pedit_key_ex_dump(skb,
++					  p->tcfp_keys_ex,
++					  p->tcfp_nkeys))
++			goto nla_put_failure;
+ 
+ 		if (nla_put(skb, TCA_PEDIT_PARMS_EX, s, opt))
+ 			goto nla_put_failure;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index fb861f90fde6..260749956ef3 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -912,6 +912,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	struct nlattr *opt = tca[TCA_OPTIONS];
+ 	struct nlattr *tb[TCA_U32_MAX + 1];
+ 	u32 htid, flags = 0;
++	size_t sel_size;
+ 	int err;
+ #ifdef CONFIG_CLS_U32_PERF
+ 	size_t size;
+@@ -1074,8 +1075,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ 
+ 	s = nla_data(tb[TCA_U32_SEL]);
++	sel_size = struct_size(s, keys, s->nkeys);
++	if (nla_len(tb[TCA_U32_SEL]) < sel_size) {
++		err = -EINVAL;
++		goto erridr;
++	}
+ 
+-	n = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key), GFP_KERNEL);
++	n = kzalloc(offsetof(typeof(*n), sel) + sel_size, GFP_KERNEL);
+ 	if (n == NULL) {
+ 		err = -ENOBUFS;
+ 		goto erridr;
+@@ -1090,7 +1096,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ 	}
+ #endif
+ 
+-	memcpy(&n->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key));
++	memcpy(&n->sel, s, sel_size);
+ 	RCU_INIT_POINTER(n->ht_up, ht);
+ 	n->handle = handle;
+ 	n->fshift = s->hmask ? ffs(ntohl(s->hmask)) - 1 : 0;
+diff --git a/net/sctp/proc.c b/net/sctp/proc.c
+index ef5c9a82d4e8..a644292f9faf 100644
+--- a/net/sctp/proc.c
++++ b/net/sctp/proc.c
+@@ -215,7 +215,6 @@ static const struct seq_operations sctp_eps_ops = {
+ struct sctp_ht_iter {
+ 	struct seq_net_private p;
+ 	struct rhashtable_iter hti;
+-	int start_fail;
+ };
+ 
+ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+@@ -224,7 +223,6 @@ static void *sctp_transport_seq_start(struct seq_file *seq, loff_t *pos)
+ 
+ 	sctp_transport_walk_start(&iter->hti);
+ 
+-	iter->start_fail = 0;
+ 	return sctp_transport_get_idx(seq_file_net(seq), &iter->hti, *pos);
+ }
+ 
+@@ -232,8 +230,6 @@ static void sctp_transport_seq_stop(struct seq_file *seq, void *v)
+ {
+ 	struct sctp_ht_iter *iter = seq->private;
+ 
+-	if (iter->start_fail)
+-		return;
+ 	sctp_transport_walk_stop(&iter->hti);
+ }
+ 
+@@ -264,8 +260,6 @@ static int sctp_assocs_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 	epb = &assoc->base;
+ 	sk = epb->sk;
+@@ -322,8 +316,6 @@ static int sctp_remaddr_seq_show(struct seq_file *seq, void *v)
+ 	}
+ 
+ 	transport = (struct sctp_transport *)v;
+-	if (!sctp_transport_hold(transport))
+-		return 0;
+ 	assoc = transport->asoc;
+ 
+ 	list_for_each_entry_rcu(tsp, &assoc->peer.transport_addr_list,
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index ce620e878538..50ee07cd20c4 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4881,9 +4881,14 @@ struct sctp_transport *sctp_transport_get_next(struct net *net,
+ 			break;
+ 		}
+ 
++		if (!sctp_transport_hold(t))
++			continue;
++
+ 		if (net_eq(sock_net(t->asoc->base.sk), net) &&
+ 		    t->asoc->peer.primary_path == t)
+ 			break;
++
++		sctp_transport_put(t);
+ 	}
+ 
+ 	return t;
+@@ -4893,13 +4898,18 @@ struct sctp_transport *sctp_transport_get_idx(struct net *net,
+ 					      struct rhashtable_iter *iter,
+ 					      int pos)
+ {
+-	void *obj = SEQ_START_TOKEN;
++	struct sctp_transport *t;
+ 
+-	while (pos && (obj = sctp_transport_get_next(net, iter)) &&
+-	       !IS_ERR(obj))
+-		pos--;
++	if (!pos)
++		return SEQ_START_TOKEN;
+ 
+-	return obj;
++	while ((t = sctp_transport_get_next(net, iter)) && !IS_ERR(t)) {
++		if (!--pos)
++			break;
++		sctp_transport_put(t);
++	}
++
++	return t;
+ }
+ 
+ int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *),
+@@ -4958,8 +4968,6 @@ again:
+ 
+ 	tsp = sctp_transport_get_idx(net, &hti, *pos + 1);
+ 	for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {
+-		if (!sctp_transport_hold(tsp))
+-			continue;
+ 		ret = cb(tsp, p);
+ 		if (ret)
+ 			break;
+diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+index 8654494b4d0a..834eb2b9e41b 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
++++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+@@ -169,7 +169,7 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 	struct scatterlist              sg[1];
+ 	int err = -1;
+ 	u8 *checksumdata;
+-	u8 rc4salt[4];
++	u8 *rc4salt;
+ 	struct crypto_ahash *md5;
+ 	struct crypto_ahash *hmac_md5;
+ 	struct ahash_request *req;
+@@ -183,14 +183,18 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+ 		return GSS_S_FAILURE;
+ 	}
+ 
++	rc4salt = kmalloc_array(4, sizeof(*rc4salt), GFP_NOFS);
++	if (!rc4salt)
++		return GSS_S_FAILURE;
++
+ 	if (arcfour_hmac_md5_usage_to_salt(usage, rc4salt)) {
+ 		dprintk("%s: invalid usage value %u\n", __func__, usage);
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 	}
+ 
+ 	checksumdata = kmalloc(GSS_KRB5_MAX_CKSUM_LEN, GFP_NOFS);
+ 	if (!checksumdata)
+-		return GSS_S_FAILURE;
++		goto out_free_rc4salt;
+ 
+ 	md5 = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+ 	if (IS_ERR(md5))
+@@ -258,6 +262,8 @@ out_free_md5:
+ 	crypto_free_ahash(md5);
+ out_free_cksum:
+ 	kfree(checksumdata);
++out_free_rc4salt:
++	kfree(rc4salt);
+ 	return err ? GSS_S_FAILURE : 0;
+ }
+ 
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index bebe88cae07b..ff968c7afef6 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -980,20 +980,17 @@ int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	list_for_each_entry(dst, l, list) {
+-		if (dst->value != value)
+-			continue;
+-		return dst;
++		if (dst->node == node && dst->port == port)
++			return dst;
+ 	}
+ 	return NULL;
+ }
+ 
+ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ {
+-	u64 value = (u64)node << 32 | port;
+ 	struct tipc_dest *dst;
+ 
+ 	if (tipc_dest_find(l, node, port))
+@@ -1002,7 +999,8 @@ bool tipc_dest_push(struct list_head *l, u32 node, u32 port)
+ 	dst = kmalloc(sizeof(*dst), GFP_ATOMIC);
+ 	if (unlikely(!dst))
+ 		return false;
+-	dst->value = value;
++	dst->node = node;
++	dst->port = port;
+ 	list_add(&dst->list, l);
+ 	return true;
+ }
+diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
+index 0febba41da86..892bd750b85f 100644
+--- a/net/tipc/name_table.h
++++ b/net/tipc/name_table.h
+@@ -133,13 +133,8 @@ void tipc_nametbl_stop(struct net *net);
+ 
+ struct tipc_dest {
+ 	struct list_head list;
+-	union {
+-		struct {
+-			u32 port;
+-			u32 node;
+-		};
+-		u64 value;
+-	};
++	u32 port;
++	u32 node;
+ };
+ 
+ struct tipc_dest *tipc_dest_find(struct list_head *l, u32 node, u32 port);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 930852c54d7a..0a5fa347135e 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2675,6 +2675,8 @@ void tipc_sk_reinit(struct net *net)
+ 
+ 		rhashtable_walk_stop(&iter);
+ 	} while (tsk == ERR_PTR(-EAGAIN));
++
++	rhashtable_walk_exit(&iter);
+ }
+ 
+ static struct tipc_sock *tipc_sk_lookup(struct net *net, u32 portid)
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 301f22430469..45188d920013 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -45,6 +45,7 @@
+ MODULE_AUTHOR("Mellanox Technologies");
+ MODULE_DESCRIPTION("Transport Layer Security Support");
+ MODULE_LICENSE("Dual BSD/GPL");
++MODULE_ALIAS_TCP_ULP("tls");
+ 
+ enum {
+ 	TLSV4,
+diff --git a/samples/bpf/xdp_redirect_cpu_user.c b/samples/bpf/xdp_redirect_cpu_user.c
+index 4b4d78fffe30..da9070889223 100644
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -679,8 +679,9 @@ int main(int argc, char **argv)
+ 		return EXIT_FAIL_OPTION;
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd[prog_num], xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/samples/bpf/xdp_rxq_info_user.c b/samples/bpf/xdp_rxq_info_user.c
+index e4e9ba52bff0..bb278447299c 100644
+--- a/samples/bpf/xdp_rxq_info_user.c
++++ b/samples/bpf/xdp_rxq_info_user.c
+@@ -534,8 +534,9 @@ int main(int argc, char **argv)
+ 		exit(EXIT_FAIL_BPF);
+ 	}
+ 
+-	/* Remove XDP program when program is interrupted */
++	/* Remove XDP program when program is interrupted or killed */
+ 	signal(SIGINT, int_exit);
++	signal(SIGTERM, int_exit);
+ 
+ 	if (bpf_set_link_xdp_fd(ifindex, prog_fd, xdp_flags) < 0) {
+ 		fprintf(stderr, "link set xdp fd failed\n");
+diff --git a/scripts/coccicheck b/scripts/coccicheck
+index 9fedca611b7f..e04d328210ac 100755
+--- a/scripts/coccicheck
++++ b/scripts/coccicheck
+@@ -128,9 +128,10 @@ run_cmd_parmap() {
+ 	fi
+ 	echo $@ >>$DEBUG_FILE
+ 	$@ 2>>$DEBUG_FILE
+-	if [[ $? -ne 0 ]]; then
++	err=$?
++	if [[ $err -ne 0 ]]; then
+ 		echo "coccicheck failed"
+-		exit $?
++		exit $err
+ 	fi
+ }
+ 
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 999d585eaa73..e5f0aad75b96 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -15,9 +15,9 @@ if ! test -r System.map ; then
+ fi
+ 
+ if [ -z $(command -v $DEPMOD) ]; then
+-	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "Warning: 'make modules_install' requires $DEPMOD. Please install it." >&2
+ 	echo "This is probably in the kmod package." >&2
+-	exit 1
++	exit 0
+ fi
+ 
+ # older versions of depmod require the version string to start with three
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 1663fb19343a..b95cf57782a3 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -672,7 +672,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ 			if (ELF_ST_TYPE(sym->st_info) == STT_SPARC_REGISTER)
+ 				break;
+ 			if (symname[0] == '.') {
+-				char *munged = strdup(symname);
++				char *munged = NOFAIL(strdup(symname));
+ 				munged[0] = '_';
+ 				munged[1] = toupper(munged[1]);
+ 				symname = munged;
+@@ -1318,7 +1318,7 @@ static Elf_Sym *find_elf_symbol2(struct elf_info *elf, Elf_Addr addr,
+ static char *sec2annotation(const char *s)
+ {
+ 	if (match(s, init_exit_sections)) {
+-		char *p = malloc(20);
++		char *p = NOFAIL(malloc(20));
+ 		char *r = p;
+ 
+ 		*p++ = '_';
+@@ -1338,7 +1338,7 @@ static char *sec2annotation(const char *s)
+ 			strcat(p, " ");
+ 		return r;
+ 	} else {
+-		return strdup("");
++		return NOFAIL(strdup(""));
+ 	}
+ }
+ 
+@@ -2036,7 +2036,7 @@ void buf_write(struct buffer *buf, const char *s, int len)
+ {
+ 	if (buf->size - buf->pos < len) {
+ 		buf->size += len + SZ;
+-		buf->p = realloc(buf->p, buf->size);
++		buf->p = NOFAIL(realloc(buf->p, buf->size));
+ 	}
+ 	strncpy(buf->p + buf->pos, s, len);
+ 	buf->pos += len;
+diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
+index b0f9dc3f765a..1a7cec5d9cac 100644
+--- a/security/apparmor/policy_ns.c
++++ b/security/apparmor/policy_ns.c
+@@ -255,7 +255,7 @@ static struct aa_ns *__aa_create_ns(struct aa_ns *parent, const char *name,
+ 
+ 	ns = alloc_ns(parent->base.hname, name);
+ 	if (!ns)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	ns->level = parent->level + 1;
+ 	mutex_lock_nested(&ns->lock, ns->level);
+ 	error = __aafs_ns_mkdir(ns, ns_subns_dir(parent), name, dir);
+diff --git a/security/keys/dh.c b/security/keys/dh.c
+index b203f7758f97..1a68d27e72b4 100644
+--- a/security/keys/dh.c
++++ b/security/keys/dh.c
+@@ -300,7 +300,7 @@ long __keyctl_dh_compute(struct keyctl_dh_params __user *params,
+ 	}
+ 	dh_inputs.g_size = dlen;
+ 
+-	dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);
++	dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key);
+ 	if (dlen < 0) {
+ 		ret = dlen;
+ 		goto out2;
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index 79d3709b0671..0b66d7283b00 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -1365,13 +1365,18 @@ static int sel_make_bools(struct selinux_fs_info *fsi)
+ 
+ 		ret = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG | S_IRUGO | S_IWUSR);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		ret = -ENAMETOOLONG;
+ 		len = snprintf(page, PAGE_SIZE, "/%s/%s", BOOL_DIR_NAME, names[i]);
+-		if (len >= PAGE_SIZE)
++		if (len >= PAGE_SIZE) {
++			dput(dentry);
++			iput(inode);
+ 			goto out;
++		}
+ 
+ 		isec = (struct inode_security_struct *)inode->i_security;
+ 		ret = security_genfs_sid(fsi->state, "selinuxfs", page,
+@@ -1586,8 +1591,10 @@ static int sel_make_avc_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|files[i].mode);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = files[i].ops;
+ 		inode->i_ino = ++fsi->last_ino;
+@@ -1632,8 +1639,10 @@ static int sel_make_initcon_files(struct dentry *dir)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_initcon_ops;
+ 		inode->i_ino = i|SEL_INITCON_INO_OFFSET;
+@@ -1733,8 +1742,10 @@ static int sel_make_perm_files(char *objclass, int classvalue,
+ 
+ 		rc = -ENOMEM;
+ 		inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-		if (!inode)
++		if (!inode) {
++			dput(dentry);
+ 			goto out;
++		}
+ 
+ 		inode->i_fop = &sel_perm_ops;
+ 		/* i+1 since perm values are 1-indexed */
+@@ -1763,8 +1774,10 @@ static int sel_make_class_dir_entries(char *classname, int index,
+ 		return -ENOMEM;
+ 
+ 	inode = sel_make_inode(dir->d_sb, S_IFREG|S_IRUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		return -ENOMEM;
++	}
+ 
+ 	inode->i_fop = &sel_class_ops;
+ 	inode->i_ino = sel_class_to_ino(index);
+@@ -1838,8 +1851,10 @@ static int sel_make_policycap(struct selinux_fs_info *fsi)
+ 			return -ENOMEM;
+ 
+ 		inode = sel_make_inode(fsi->sb, S_IFREG | 0444);
+-		if (inode == NULL)
++		if (inode == NULL) {
++			dput(dentry);
+ 			return -ENOMEM;
++		}
+ 
+ 		inode->i_fop = &sel_policycap_ops;
+ 		inode->i_ino = iter | SEL_POLICYCAP_INO_OFFSET;
+@@ -1932,8 +1947,10 @@ static int sel_fill_super(struct super_block *sb, void *data, int silent)
+ 
+ 	ret = -ENOMEM;
+ 	inode = sel_make_inode(sb, S_IFCHR | S_IRUGO | S_IWUGO);
+-	if (!inode)
++	if (!inode) {
++		dput(dentry);
+ 		goto err;
++	}
+ 
+ 	inode->i_ino = ++fsi->last_ino;
+ 	isec = (struct inode_security_struct *)inode->i_security;
+diff --git a/sound/soc/codecs/rt5677.c b/sound/soc/codecs/rt5677.c
+index 8a0181a2db08..47feef30dadb 100644
+--- a/sound/soc/codecs/rt5677.c
++++ b/sound/soc/codecs/rt5677.c
+@@ -5007,7 +5007,7 @@ static const struct regmap_config rt5677_regmap = {
+ };
+ 
+ static const struct of_device_id rt5677_of_match[] = {
+-	{ .compatible = "realtek,rt5677", RT5677 },
++	{ .compatible = "realtek,rt5677", .data = (const void *)RT5677 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, rt5677_of_match);
+diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
+index 7fdfdf3f6e67..14f1b0c0d286 100644
+--- a/sound/soc/codecs/wm8994.c
++++ b/sound/soc/codecs/wm8994.c
+@@ -2432,6 +2432,7 @@ static int wm8994_set_dai_sysclk(struct snd_soc_dai *dai,
+ 			snd_soc_component_update_bits(component, WM8994_POWER_MANAGEMENT_2,
+ 					    WM8994_OPCLK_ENA, 0);
+ 		}
++		break;
+ 
+ 	default:
+ 		return -EINVAL;
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index 1120e39c1b00..5ccfce87e693 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -194,6 +194,7 @@ struct auxtrace_record *arm_spe_recording_init(int *err,
+ 	sper->itr.read_finish = arm_spe_read_finish;
+ 	sper->itr.alignment = 0;
+ 
++	*err = 0;
+ 	return &sper->itr;
+ }
+ 
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 53d83d7e6a09..20e7d74d86cd 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -141,8 +141,10 @@ void arch__post_process_probe_trace_events(struct perf_probe_event *pev,
+ 	for (i = 0; i < ntevs; i++) {
+ 		tev = &pev->tevs[i];
+ 		map__for_each_symbol(map, sym, tmp) {
+-			if (map->unmap_ip(map, sym->start) == tev->point.address)
++			if (map->unmap_ip(map, sym->start) == tev->point.address) {
+ 				arch__fix_tev_from_maps(pev, tev, map, sym);
++				break;
++			}
+ 		}
+ 	}
+ }
+diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
+index 5be021701f34..cf8bd123cf73 100644
+--- a/tools/perf/util/namespaces.c
++++ b/tools/perf/util/namespaces.c
+@@ -139,6 +139,9 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
+ {
+ 	struct nsinfo *nnsi;
+ 
++	if (nsi == NULL)
++		return NULL;
++
+ 	nnsi = calloc(1, sizeof(*nnsi));
+ 	if (nnsi != NULL) {
+ 		nnsi->pid = nsi->pid;
+diff --git a/tools/testing/selftests/powerpc/harness.c b/tools/testing/selftests/powerpc/harness.c
+index 66d31de60b9a..9d7166dfad1e 100644
+--- a/tools/testing/selftests/powerpc/harness.c
++++ b/tools/testing/selftests/powerpc/harness.c
+@@ -85,13 +85,13 @@ wait:
+ 	return status;
+ }
+ 
+-static void alarm_handler(int signum)
++static void sig_handler(int signum)
+ {
+-	/* Jut wake us up from waitpid */
++	/* Just wake us up from waitpid */
+ }
+ 
+-static struct sigaction alarm_action = {
+-	.sa_handler = alarm_handler,
++static struct sigaction sig_action = {
++	.sa_handler = sig_handler,
+ };
+ 
+ void test_harness_set_timeout(uint64_t time)
+@@ -106,8 +106,14 @@ int test_harness(int (test_function)(void), char *name)
+ 	test_start(name);
+ 	test_set_git_version(GIT_VERSION);
+ 
+-	if (sigaction(SIGALRM, &alarm_action, NULL)) {
+-		perror("sigaction");
++	if (sigaction(SIGINT, &sig_action, NULL)) {
++		perror("sigaction (sigint)");
++		test_error(name);
++		return 1;
++	}
++
++	if (sigaction(SIGALRM, &sig_action, NULL)) {
++		perror("sigaction (sigalrm)");
+ 		test_error(name);
+ 		return 1;
+ 	}


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-09 11:25 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-09-09 11:25 UTC (permalink / raw
  To: gentoo-commits

commit:     9a044a4deae2ea2a876cb6bea5415a1efac72a9e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Sep  9 11:25:12 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Sep  9 11:25:12 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9a044a4d

Linux patch 4.18.7

 0000_README             |    4 +
 1006_linux-4.18.7.patch | 5658 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5662 insertions(+)

diff --git a/0000_README b/0000_README
index 8bfc2e4..f3682ca 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.18.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.6
 
+Patch:  1006_linux-4.18.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.18.7.patch b/1006_linux-4.18.7.patch
new file mode 100644
index 0000000..7ab3155
--- /dev/null
+++ b/1006_linux-4.18.7.patch
@@ -0,0 +1,5658 @@
+diff --git a/Makefile b/Makefile
+index 62524f4d42ad..711b04d00e49 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index c210a25dd6da..cff52d8ffdb1 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -530,24 +530,19 @@ SYSCALL_DEFINE4(osf_mount, unsigned long, typenr, const char __user *, path,
+ SYSCALL_DEFINE1(osf_utsname, char __user *, name)
+ {
+ 	int error;
++	char tmp[5 * 32];
+ 
+ 	down_read(&uts_sem);
+-	error = -EFAULT;
+-	if (copy_to_user(name + 0, utsname()->sysname, 32))
+-		goto out;
+-	if (copy_to_user(name + 32, utsname()->nodename, 32))
+-		goto out;
+-	if (copy_to_user(name + 64, utsname()->release, 32))
+-		goto out;
+-	if (copy_to_user(name + 96, utsname()->version, 32))
+-		goto out;
+-	if (copy_to_user(name + 128, utsname()->machine, 32))
+-		goto out;
++	memcpy(tmp + 0 * 32, utsname()->sysname, 32);
++	memcpy(tmp + 1 * 32, utsname()->nodename, 32);
++	memcpy(tmp + 2 * 32, utsname()->release, 32);
++	memcpy(tmp + 3 * 32, utsname()->version, 32);
++	memcpy(tmp + 4 * 32, utsname()->machine, 32);
++	up_read(&uts_sem);
+ 
+-	error = 0;
+- out:
+-	up_read(&uts_sem);	
+-	return error;
++	if (copy_to_user(name, tmp, sizeof(tmp)))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE0(getpagesize)
+@@ -567,18 +562,21 @@ SYSCALL_DEFINE2(osf_getdomainname, char __user *, name, int, namelen)
+ {
+ 	int len, err = 0;
+ 	char *kname;
++	char tmp[32];
+ 
+-	if (namelen > 32)
++	if (namelen < 0 || namelen > 32)
+ 		namelen = 32;
+ 
+ 	down_read(&uts_sem);
+ 	kname = utsname()->domainname;
+ 	len = strnlen(kname, namelen);
+-	if (copy_to_user(name, kname, min(len + 1, namelen)))
+-		err = -EFAULT;
++	len = min(len + 1, namelen);
++	memcpy(tmp, kname, len);
+ 	up_read(&uts_sem);
+ 
+-	return err;
++	if (copy_to_user(name, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ /*
+@@ -739,13 +737,14 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	};
+ 	unsigned long offset;
+ 	const char *res;
+-	long len, err = -EINVAL;
++	long len;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	offset = command-1;
+ 	if (offset >= ARRAY_SIZE(sysinfo_table)) {
+ 		/* Digital UNIX has a few unpublished interfaces here */
+ 		printk("sysinfo(%d)", command);
+-		goto out;
++		return -EINVAL;
+ 	}
+ 
+ 	down_read(&uts_sem);
+@@ -753,13 +752,11 @@ SYSCALL_DEFINE3(osf_sysinfo, int, command, char __user *, buf, long, count)
+ 	len = strlen(res)+1;
+ 	if ((unsigned long)len > (unsigned long)count)
+ 		len = count;
+-	if (copy_to_user(buf, res, len))
+-		err = -EFAULT;
+-	else
+-		err = 0;
++	memcpy(tmp, res, len);
+ 	up_read(&uts_sem);
+- out:
+-	return err;
++	if (copy_to_user(buf, tmp, len))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE5(osf_getsysinfo, unsigned long, op, void __user *, buffer,
+diff --git a/arch/arm/boot/dts/am571x-idk.dts b/arch/arm/boot/dts/am571x-idk.dts
+index 5bb9d68d6e90..d9a2049a1ea8 100644
+--- a/arch/arm/boot/dts/am571x-idk.dts
++++ b/arch/arm/boot/dts/am571x-idk.dts
+@@ -66,10 +66,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio5 7 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio7 22 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am572x-idk-common.dtsi b/arch/arm/boot/dts/am572x-idk-common.dtsi
+index c6d858b31011..784639ddf451 100644
+--- a/arch/arm/boot/dts/am572x-idk-common.dtsi
++++ b/arch/arm/boot/dts/am572x-idk-common.dtsi
+@@ -57,10 +57,6 @@
+ 	};
+ };
+ 
+-&omap_dwc3_2 {
+-	extcon = <&extcon_usb2>;
+-};
+-
+ &extcon_usb2 {
+ 	id-gpio = <&gpio3 16 GPIO_ACTIVE_HIGH>;
+ 	vbus-gpio = <&gpio3 26 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+index ad87f1ae904d..c9063ffca524 100644
+--- a/arch/arm/boot/dts/am57xx-idk-common.dtsi
++++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+@@ -395,8 +395,13 @@
+ 	dr_mode = "host";
+ };
+ 
++&omap_dwc3_2 {
++	extcon = <&extcon_usb2>;
++};
++
+ &usb2 {
+-	dr_mode = "peripheral";
++	extcon = <&extcon_usb2>;
++	dr_mode = "otg";
+ };
+ 
+ &mmc1 {
+diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+index 92a9740c533f..3b1db7b9ec50 100644
+--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
++++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
+@@ -206,6 +206,7 @@
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+ 			reg = <0x70>;
++			reset-gpio = <&gpio TEGRA_GPIO(BB, 0) GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 42c090cf0292..3eb034189cf8 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -754,7 +754,6 @@ config NEED_PER_CPU_EMBED_FIRST_CHUNK
+ 
+ config HOLES_IN_ZONE
+ 	def_bool y
+-	depends on NUMA
+ 
+ source kernel/Kconfig.preempt
+ source kernel/Kconfig.hz
+diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
+index b7fb5274b250..0c4fc223f225 100644
+--- a/arch/arm64/crypto/sm4-ce-glue.c
++++ b/arch/arm64/crypto/sm4-ce-glue.c
+@@ -69,5 +69,5 @@ static void __exit sm4_ce_mod_fini(void)
+ 	crypto_unregister_alg(&sm4_ce_alg);
+ }
+ 
+-module_cpu_feature_match(SM3, sm4_ce_mod_init);
++module_cpu_feature_match(SM4, sm4_ce_mod_init);
+ module_exit(sm4_ce_mod_fini);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index 5a23010af600..1e7a33592e29 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -195,9 +195,6 @@ struct fadump_crash_info_header {
+ 	struct cpumask	online_mask;
+ };
+ 
+-/* Crash memory ranges */
+-#define INIT_CRASHMEM_RANGES	(INIT_MEMBLOCK_REGIONS + 2)
+-
+ struct fad_crash_memory_ranges {
+ 	unsigned long long	base;
+ 	unsigned long long	size;
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index 2160be2e4339..b321c82b3624 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -51,17 +51,14 @@ static inline int pte_present(pte_t pte)
+ #define pte_access_permitted pte_access_permitted
+ static inline bool pte_access_permitted(pte_t pte, bool write)
+ {
+-	unsigned long pteval = pte_val(pte);
+ 	/*
+ 	 * A read-only access is controlled by _PAGE_USER bit.
+ 	 * We have _PAGE_READ set for WRITE and EXECUTE
+ 	 */
+-	unsigned long need_pte_bits = _PAGE_PRESENT | _PAGE_USER;
+-
+-	if (write)
+-		need_pte_bits |= _PAGE_WRITE;
++	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
++		return false;
+ 
+-	if ((pteval & need_pte_bits) != need_pte_bits)
++	if (write && !pte_write(pte))
+ 		return false;
+ 
+ 	return true;
+diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
+index 5ba80cffb505..3312606fda07 100644
+--- a/arch/powerpc/include/asm/pkeys.h
++++ b/arch/powerpc/include/asm/pkeys.h
+@@ -94,8 +94,6 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
+ 		__mm_pkey_is_allocated(mm, pkey));
+ }
+ 
+-extern void __arch_activate_pkey(int pkey);
+-extern void __arch_deactivate_pkey(int pkey);
+ /*
+  * Returns a positive, 5-bit key on success, or -1 on failure.
+  * Relies on the mmap_sem to protect against concurrency in mm_pkey_alloc() and
+@@ -124,11 +122,6 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
+ 	ret = ffz((u32)mm_pkey_allocation_map(mm));
+ 	__mm_pkey_allocated(mm, ret);
+ 
+-	/*
+-	 * Enable the key in the hardware
+-	 */
+-	if (ret > 0)
+-		__arch_activate_pkey(ret);
+ 	return ret;
+ }
+ 
+@@ -140,10 +133,6 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
+ 	if (!mm_pkey_is_allocated(mm, pkey))
+ 		return -EINVAL;
+ 
+-	/*
+-	 * Disable the key in the hardware
+-	 */
+-	__arch_deactivate_pkey(pkey);
+ 	__mm_pkey_free(mm, pkey);
+ 
+ 	return 0;
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 07e8396d472b..958eb5cd2a9e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -47,8 +47,10 @@ static struct fadump_mem_struct fdm;
+ static const struct fadump_mem_struct *fdm_active;
+ 
+ static DEFINE_MUTEX(fadump_mutex);
+-struct fad_crash_memory_ranges crash_memory_ranges[INIT_CRASHMEM_RANGES];
++struct fad_crash_memory_ranges *crash_memory_ranges;
++int crash_memory_ranges_size;
+ int crash_mem_ranges;
++int max_crash_mem_ranges;
+ 
+ /* Scan the Firmware Assisted dump configuration details. */
+ int __init early_init_dt_scan_fw_dump(unsigned long node,
+@@ -868,38 +870,88 @@ static int __init process_fadump(const struct fadump_mem_struct *fdm_active)
+ 	return 0;
+ }
+ 
+-static inline void fadump_add_crash_memory(unsigned long long base,
+-					unsigned long long end)
++static void free_crash_memory_ranges(void)
++{
++	kfree(crash_memory_ranges);
++	crash_memory_ranges = NULL;
++	crash_memory_ranges_size = 0;
++	max_crash_mem_ranges = 0;
++}
++
++/*
++ * Allocate or reallocate crash memory ranges array in incremental units
++ * of PAGE_SIZE.
++ */
++static int allocate_crash_memory_ranges(void)
++{
++	struct fad_crash_memory_ranges *new_array;
++	u64 new_size;
++
++	new_size = crash_memory_ranges_size + PAGE_SIZE;
++	pr_debug("Allocating %llu bytes of memory for crash memory ranges\n",
++		 new_size);
++
++	new_array = krealloc(crash_memory_ranges, new_size, GFP_KERNEL);
++	if (new_array == NULL) {
++		pr_err("Insufficient memory for setting up crash memory ranges\n");
++		free_crash_memory_ranges();
++		return -ENOMEM;
++	}
++
++	crash_memory_ranges = new_array;
++	crash_memory_ranges_size = new_size;
++	max_crash_mem_ranges = (new_size /
++				sizeof(struct fad_crash_memory_ranges));
++	return 0;
++}
++
++static inline int fadump_add_crash_memory(unsigned long long base,
++					  unsigned long long end)
+ {
+ 	if (base == end)
+-		return;
++		return 0;
++
++	if (crash_mem_ranges == max_crash_mem_ranges) {
++		int ret;
++
++		ret = allocate_crash_memory_ranges();
++		if (ret)
++			return ret;
++	}
+ 
+ 	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
+ 		crash_mem_ranges, base, end - 1, (end - base));
+ 	crash_memory_ranges[crash_mem_ranges].base = base;
+ 	crash_memory_ranges[crash_mem_ranges].size = end - base;
+ 	crash_mem_ranges++;
++	return 0;
+ }
+ 
+-static void fadump_exclude_reserved_area(unsigned long long start,
++static int fadump_exclude_reserved_area(unsigned long long start,
+ 					unsigned long long end)
+ {
+ 	unsigned long long ra_start, ra_end;
++	int ret = 0;
+ 
+ 	ra_start = fw_dump.reserve_dump_area_start;
+ 	ra_end = ra_start + fw_dump.reserve_dump_area_size;
+ 
+ 	if ((ra_start < end) && (ra_end > start)) {
+ 		if ((start < ra_start) && (end > ra_end)) {
+-			fadump_add_crash_memory(start, ra_start);
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(start, ra_start);
++			if (ret)
++				return ret;
++
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		} else if (start < ra_start) {
+-			fadump_add_crash_memory(start, ra_start);
++			ret = fadump_add_crash_memory(start, ra_start);
+ 		} else if (ra_end < end) {
+-			fadump_add_crash_memory(ra_end, end);
++			ret = fadump_add_crash_memory(ra_end, end);
+ 		}
+ 	} else
+-		fadump_add_crash_memory(start, end);
++		ret = fadump_add_crash_memory(start, end);
++
++	return ret;
+ }
+ 
+ static int fadump_init_elfcore_header(char *bufp)
+@@ -939,10 +991,11 @@ static int fadump_init_elfcore_header(char *bufp)
+  * Traverse through memblock structure and setup crash memory ranges. These
+  * ranges will be used create PT_LOAD program headers in elfcore header.
+  */
+-static void fadump_setup_crash_memory_ranges(void)
++static int fadump_setup_crash_memory_ranges(void)
+ {
+ 	struct memblock_region *reg;
+ 	unsigned long long start, end;
++	int ret;
+ 
+ 	pr_debug("Setup crash memory ranges.\n");
+ 	crash_mem_ranges = 0;
+@@ -953,7 +1006,9 @@ static void fadump_setup_crash_memory_ranges(void)
+ 	 * specified during fadump registration. We need to create a separate
+ 	 * program header for this chunk with the correct offset.
+ 	 */
+-	fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	ret = fadump_add_crash_memory(RMA_START, fw_dump.boot_memory_size);
++	if (ret)
++		return ret;
+ 
+ 	for_each_memblock(memory, reg) {
+ 		start = (unsigned long long)reg->base;
+@@ -973,8 +1028,12 @@ static void fadump_setup_crash_memory_ranges(void)
+ 		}
+ 
+ 		/* add this range excluding the reserved dump area. */
+-		fadump_exclude_reserved_area(start, end);
++		ret = fadump_exclude_reserved_area(start, end);
++		if (ret)
++			return ret;
+ 	}
++
++	return 0;
+ }
+ 
+ /*
+@@ -1097,6 +1156,7 @@ static int register_fadump(void)
+ {
+ 	unsigned long addr;
+ 	void *vaddr;
++	int ret;
+ 
+ 	/*
+ 	 * If no memory is reserved then we can not register for firmware-
+@@ -1105,7 +1165,9 @@ static int register_fadump(void)
+ 	if (!fw_dump.reserve_dump_area_size)
+ 		return -ENODEV;
+ 
+-	fadump_setup_crash_memory_ranges();
++	ret = fadump_setup_crash_memory_ranges();
++	if (ret)
++		return ret;
+ 
+ 	addr = be64_to_cpu(fdm.rmr_region.destination_address) + be64_to_cpu(fdm.rmr_region.source_len);
+ 	/* Initialize fadump crash info header. */
+@@ -1183,6 +1245,7 @@ void fadump_cleanup(void)
+ 	} else if (fw_dump.dump_registered) {
+ 		/* Un-register Firmware-assisted dump if it was registered. */
+ 		fadump_unregister_dump(&fdm);
++		free_crash_memory_ranges();
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 9ef4aea9fffe..991d09774108 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -583,6 +583,7 @@ static void save_all(struct task_struct *tsk)
+ 		__giveup_spe(tsk);
+ 
+ 	msr_check_and_clear(msr_all_available);
++	thread_pkey_regs_save(&tsk->thread);
+ }
+ 
+ void flush_all_to_thread(struct task_struct *tsk)
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index de686b340f4a..a995513573c2 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -46,6 +46,7 @@
+ #include <linux/compiler.h>
+ #include <linux/of.h>
+ 
++#include <asm/ftrace.h>
+ #include <asm/reg.h>
+ #include <asm/ppc-opcode.h>
+ #include <asm/asm-prototypes.h>
+diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
+index f3d4b4a0e561..3bb5cec03d1f 100644
+--- a/arch/powerpc/mm/mmu_context_book3s64.c
++++ b/arch/powerpc/mm/mmu_context_book3s64.c
+@@ -200,9 +200,9 @@ static void pte_frag_destroy(void *pte_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PTE_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -215,9 +215,9 @@ static void pmd_frag_destroy(void *pmd_frag)
+ 	/* drop all the pending references */
+ 	count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT;
+ 	/* We allow PTE_FRAG_NR fragments from a PTE page */
+-	if (page_ref_sub_and_test(page, PMD_FRAG_NR - count)) {
++	if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
+index a4ca57612558..c9ee9e23845f 100644
+--- a/arch/powerpc/mm/mmu_context_iommu.c
++++ b/arch/powerpc/mm/mmu_context_iommu.c
+@@ -129,6 +129,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	long i, j, ret = 0, locked_entries = 0;
+ 	unsigned int pageshift;
+ 	unsigned long flags;
++	unsigned long cur_ua;
+ 	struct page *page = NULL;
+ 
+ 	mutex_lock(&mem_list_mutex);
+@@ -177,7 +178,8 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 	}
+ 
+ 	for (i = 0; i < entries; ++i) {
+-		if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++		cur_ua = ua + (i << PAGE_SHIFT);
++		if (1 != get_user_pages_fast(cur_ua,
+ 					1/* pages */, 1/* iswrite */, &page)) {
+ 			ret = -EFAULT;
+ 			for (j = 0; j < i; ++j)
+@@ -196,7 +198,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		if (is_migrate_cma_page(page)) {
+ 			if (mm_iommu_move_page_from_cma(page))
+ 				goto populate;
+-			if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
++			if (1 != get_user_pages_fast(cur_ua,
+ 						1/* pages */, 1/* iswrite */,
+ 						&page)) {
+ 				ret = -EFAULT;
+@@ -210,20 +212,21 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ 		}
+ populate:
+ 		pageshift = PAGE_SHIFT;
+-		if (PageCompound(page)) {
++		if (mem->pageshift > PAGE_SHIFT && PageCompound(page)) {
+ 			pte_t *pte;
+ 			struct page *head = compound_head(page);
+ 			unsigned int compshift = compound_order(head);
++			unsigned int pteshift;
+ 
+ 			local_irq_save(flags); /* disables as well */
+-			pte = find_linux_pte(mm->pgd, ua, NULL, &pageshift);
+-			local_irq_restore(flags);
++			pte = find_linux_pte(mm->pgd, cur_ua, NULL, &pteshift);
+ 
+ 			/* Double check it is still the same pinned page */
+ 			if (pte && pte_page(*pte) == head &&
+-					pageshift == compshift)
+-				pageshift = max_t(unsigned int, pageshift,
++			    pteshift == compshift + PAGE_SHIFT)
++				pageshift = max_t(unsigned int, pteshift,
+ 						PAGE_SHIFT);
++			local_irq_restore(flags);
+ 		}
+ 		mem->pageshift = min(mem->pageshift, pageshift);
+ 		mem->hpas[i] = page_to_pfn(page) << PAGE_SHIFT;
+diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
+index 4afbfbb64bfd..78d0b3d5ebad 100644
+--- a/arch/powerpc/mm/pgtable-book3s64.c
++++ b/arch/powerpc/mm/pgtable-book3s64.c
+@@ -270,6 +270,8 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 		return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
++
+ 	ret = page_address(page);
+ 	/*
+ 	 * if we support only one fragment just return the
+@@ -285,7 +287,7 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pmd_frag)) {
+-		set_page_count(page, PMD_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR);
+ 		mm->context.pmd_frag = ret + PMD_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -308,9 +310,10 @@ void pmd_fragment_free(unsigned long *pmd)
+ {
+ 	struct page *page = virt_to_page(pmd);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		pgtable_pmd_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+@@ -352,6 +355,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 			return NULL;
+ 	}
+ 
++	atomic_set(&page->pt_frag_refcount, 1);
+ 
+ 	ret = page_address(page);
+ 	/*
+@@ -367,7 +371,7 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+ 	 * count.
+ 	 */
+ 	if (likely(!mm->context.pte_frag)) {
+-		set_page_count(page, PTE_FRAG_NR);
++		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+ 		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
+ 	}
+ 	spin_unlock(&mm->page_table_lock);
+@@ -390,10 +394,11 @@ void pte_fragment_free(unsigned long *table, int kernel)
+ {
+ 	struct page *page = virt_to_page(table);
+ 
+-	if (put_page_testzero(page)) {
++	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
++	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+ 		if (!kernel)
+ 			pgtable_page_dtor(page);
+-		free_unref_page(page);
++		__free_page(page);
+ 	}
+ }
+ 
+diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
+index e6f500fabf5e..0e7810ccd1ae 100644
+--- a/arch/powerpc/mm/pkeys.c
++++ b/arch/powerpc/mm/pkeys.c
+@@ -15,8 +15,10 @@ bool pkey_execute_disable_supported;
+ int  pkeys_total;		/* Total pkeys as per device tree */
+ bool pkeys_devtree_defined;	/* pkey property exported by device tree */
+ u32  initial_allocation_mask;	/* Bits set for reserved keys */
+-u64  pkey_amr_uamor_mask;	/* Bits in AMR/UMOR not to be touched */
++u64  pkey_amr_mask;		/* Bits in AMR not to be touched */
+ u64  pkey_iamr_mask;		/* Bits in AMR not to be touched */
++u64  pkey_uamor_mask;		/* Bits in UMOR not to be touched */
++int  execute_only_key = 2;
+ 
+ #define AMR_BITS_PER_PKEY 2
+ #define AMR_RD_BIT 0x1UL
+@@ -91,7 +93,7 @@ int pkey_initialize(void)
+ 	 * arch-neutral code.
+ 	 */
+ 	pkeys_total = min_t(int, pkeys_total,
+-			(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT));
++			((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)+1));
+ 
+ 	if (!pkey_mmu_enabled() || radix_enabled() || !pkeys_total)
+ 		static_branch_enable(&pkey_disabled);
+@@ -119,20 +121,38 @@ int pkey_initialize(void)
+ #else
+ 	os_reserved = 0;
+ #endif
+-	initial_allocation_mask = ~0x0;
+-	pkey_amr_uamor_mask = ~0x0ul;
++	initial_allocation_mask  = (0x1 << 0) | (0x1 << 1) |
++					(0x1 << execute_only_key);
++
++	/* register mask is in BE format */
++	pkey_amr_mask = ~0x0ul;
++	pkey_amr_mask &= ~(0x3ul << pkeyshift(0));
++
+ 	pkey_iamr_mask = ~0x0ul;
+-	/*
+-	 * key 0, 1 are reserved.
+-	 * key 0 is the default key, which allows read/write/execute.
+-	 * key 1 is recommended not to be used. PowerISA(3.0) page 1015,
+-	 * programming note.
+-	 */
+-	for (i = 2; i < (pkeys_total - os_reserved); i++) {
+-		initial_allocation_mask &= ~(0x1 << i);
+-		pkey_amr_uamor_mask &= ~(0x3ul << pkeyshift(i));
+-		pkey_iamr_mask &= ~(0x1ul << pkeyshift(i));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_iamr_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	pkey_uamor_mask = ~0x0ul;
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(0));
++	pkey_uamor_mask &= ~(0x3ul << pkeyshift(execute_only_key));
++
++	/* mark the rest of the keys as reserved and hence unavailable */
++	for (i = (pkeys_total - os_reserved); i < pkeys_total; i++) {
++		initial_allocation_mask |= (0x1 << i);
++		pkey_uamor_mask &= ~(0x3ul << pkeyshift(i));
++	}
++
++	if (unlikely((pkeys_total - os_reserved) <= execute_only_key)) {
++		/*
++		 * Insufficient number of keys to support
++		 * execute only key. Mark it unavailable.
++		 * Any AMR, UAMOR, IAMR bit set for
++		 * this key is irrelevant since this key
++		 * can never be allocated.
++		 */
++		execute_only_key = -1;
+ 	}
++
+ 	return 0;
+ }
+ 
+@@ -143,8 +163,7 @@ void pkey_mm_init(struct mm_struct *mm)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 	mm_pkey_allocation_map(mm) = initial_allocation_mask;
+-	/* -1 means unallocated or invalid */
+-	mm->context.execute_only_pkey = -1;
++	mm->context.execute_only_pkey = execute_only_key;
+ }
+ 
+ static inline u64 read_amr(void)
+@@ -213,33 +232,6 @@ static inline void init_iamr(int pkey, u8 init_bits)
+ 	write_iamr(old_iamr | new_iamr_bits);
+ }
+ 
+-static void pkey_status_change(int pkey, bool enable)
+-{
+-	u64 old_uamor;
+-
+-	/* Reset the AMR and IAMR bits for this key */
+-	init_amr(pkey, 0x0);
+-	init_iamr(pkey, 0x0);
+-
+-	/* Enable/disable key */
+-	old_uamor = read_uamor();
+-	if (enable)
+-		old_uamor |= (0x3ul << pkeyshift(pkey));
+-	else
+-		old_uamor &= ~(0x3ul << pkeyshift(pkey));
+-	write_uamor(old_uamor);
+-}
+-
+-void __arch_activate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, true);
+-}
+-
+-void __arch_deactivate_pkey(int pkey)
+-{
+-	pkey_status_change(pkey, false);
+-}
+-
+ /*
+  * Set the access rights in AMR IAMR and UAMOR registers for @pkey to that
+  * specified in @init_val.
+@@ -289,9 +281,6 @@ void thread_pkey_regs_restore(struct thread_struct *new_thread,
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	/*
+-	 * TODO: Just set UAMOR to zero if @new_thread hasn't used any keys yet.
+-	 */
+ 	if (old_thread->amr != new_thread->amr)
+ 		write_amr(new_thread->amr);
+ 	if (old_thread->iamr != new_thread->iamr)
+@@ -305,9 +294,13 @@ void thread_pkey_regs_init(struct thread_struct *thread)
+ 	if (static_branch_likely(&pkey_disabled))
+ 		return;
+ 
+-	thread->amr = read_amr() & pkey_amr_uamor_mask;
+-	thread->iamr = read_iamr() & pkey_iamr_mask;
+-	thread->uamor = read_uamor() & pkey_amr_uamor_mask;
++	thread->amr = pkey_amr_mask;
++	thread->iamr = pkey_iamr_mask;
++	thread->uamor = pkey_uamor_mask;
++
++	write_uamor(pkey_uamor_mask);
++	write_amr(pkey_amr_mask);
++	write_iamr(pkey_iamr_mask);
+ }
+ 
+ static inline bool pkey_allows_readwrite(int pkey)
+@@ -322,48 +315,7 @@ static inline bool pkey_allows_readwrite(int pkey)
+ 
+ int __execute_only_pkey(struct mm_struct *mm)
+ {
+-	bool need_to_set_mm_pkey = false;
+-	int execute_only_pkey = mm->context.execute_only_pkey;
+-	int ret;
+-
+-	/* Do we need to assign a pkey for mm's execute-only maps? */
+-	if (execute_only_pkey == -1) {
+-		/* Go allocate one to use, which might fail */
+-		execute_only_pkey = mm_pkey_alloc(mm);
+-		if (execute_only_pkey < 0)
+-			return -1;
+-		need_to_set_mm_pkey = true;
+-	}
+-
+-	/*
+-	 * We do not want to go through the relatively costly dance to set AMR
+-	 * if we do not need to. Check it first and assume that if the
+-	 * execute-only pkey is readwrite-disabled than we do not have to set it
+-	 * ourselves.
+-	 */
+-	if (!need_to_set_mm_pkey && !pkey_allows_readwrite(execute_only_pkey))
+-		return execute_only_pkey;
+-
+-	/*
+-	 * Set up AMR so that it denies access for everything other than
+-	 * execution.
+-	 */
+-	ret = __arch_set_user_pkey_access(current, execute_only_pkey,
+-					  PKEY_DISABLE_ACCESS |
+-					  PKEY_DISABLE_WRITE);
+-	/*
+-	 * If the AMR-set operation failed somehow, just return 0 and
+-	 * effectively disable execute-only support.
+-	 */
+-	if (ret) {
+-		mm_pkey_free(mm, execute_only_pkey);
+-		return -1;
+-	}
+-
+-	/* We got one, store it and use it from here on out */
+-	if (need_to_set_mm_pkey)
+-		mm->context.execute_only_pkey = execute_only_pkey;
+-	return execute_only_pkey;
++	return mm->context.execute_only_pkey;
+ }
+ 
+ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 70b2e1e0f23c..a2cdf358a3ac 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -3368,12 +3368,49 @@ static void pnv_pci_ioda_create_dbgfs(void)
+ #endif /* CONFIG_DEBUG_FS */
+ }
+ 
++static void pnv_pci_enable_bridge(struct pci_bus *bus)
++{
++	struct pci_dev *dev = bus->self;
++	struct pci_bus *child;
++
++	/* Empty bus ? bail */
++	if (list_empty(&bus->devices))
++		return;
++
++	/*
++	 * If there's a bridge associated with that bus enable it. This works
++	 * around races in the generic code if the enabling is done during
++	 * parallel probing. This can be removed once those races have been
++	 * fixed.
++	 */
++	if (dev) {
++		int rc = pci_enable_device(dev);
++		if (rc)
++			pci_err(dev, "Error enabling bridge (%d)\n", rc);
++		pci_set_master(dev);
++	}
++
++	/* Perform the same to child busses */
++	list_for_each_entry(child, &bus->children, node)
++		pnv_pci_enable_bridge(child);
++}
++
++static void pnv_pci_enable_bridges(void)
++{
++	struct pci_controller *hose;
++
++	list_for_each_entry(hose, &hose_list, list_node)
++		pnv_pci_enable_bridge(hose->bus);
++}
++
+ static void pnv_pci_ioda_fixup(void)
+ {
+ 	pnv_pci_ioda_setup_PEs();
+ 	pnv_pci_ioda_setup_iommu_api();
+ 	pnv_pci_ioda_create_dbgfs();
+ 
++	pnv_pci_enable_bridges();
++
+ #ifdef CONFIG_EEH
+ 	pnv_eeh_post_init();
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 5e1ef9150182..2edc673be137 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -360,7 +360,7 @@ static struct rtas_error_log *fwnmi_get_errinfo(struct pt_regs *regs)
+ 	}
+ 
+ 	savep = __va(regs->gpr[3]);
+-	regs->gpr[3] = savep[0];	/* restore original r3 */
++	regs->gpr[3] = be64_to_cpu(savep[0]);	/* restore original r3 */
+ 
+ 	/* If it isn't an extended log we can use the per cpu 64bit buffer */
+ 	h = (struct rtas_error_log *)&savep[1];
+diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
+index 7f3d9c59719a..452e4d080855 100644
+--- a/arch/sparc/kernel/sys_sparc_32.c
++++ b/arch/sparc/kernel/sys_sparc_32.c
+@@ -197,23 +197,27 @@ SYSCALL_DEFINE5(rt_sigaction, int, sig,
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+- 	int nlen, err;
+- 	
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
++
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	up_read(&uts_sem);
+ 
+-out:
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
++
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
+index 63baa8aa9414..274ed0b9b3e0 100644
+--- a/arch/sparc/kernel/sys_sparc_64.c
++++ b/arch/sparc/kernel/sys_sparc_64.c
+@@ -519,23 +519,27 @@ asmlinkage void sparc_breakpoint(struct pt_regs *regs)
+ 
+ SYSCALL_DEFINE2(getdomainname, char __user *, name, int, len)
+ {
+-        int nlen, err;
++	int nlen, err;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+ 
+- 	down_read(&uts_sem);
+- 	
++	down_read(&uts_sem);
++
+ 	nlen = strlen(utsname()->domainname) + 1;
+ 	err = -EINVAL;
+ 	if (nlen > len)
+-		goto out;
++		goto out_unlock;
++	memcpy(tmp, utsname()->domainname, nlen);
++
++	up_read(&uts_sem);
+ 
+-	err = -EFAULT;
+-	if (!copy_to_user(name, utsname()->domainname, nlen))
+-		err = 0;
++	if (copy_to_user(name, tmp, nlen))
++		return -EFAULT;
++	return 0;
+ 
+-out:
++out_unlock:
+ 	up_read(&uts_sem);
+ 	return err;
+ }
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index e762ef417562..d27a50656aa1 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -223,34 +223,34 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	pcmpeqd TWOONE(%rip), \TMP2
+ 	pand	POLY(%rip), \TMP2
+ 	pxor	\TMP2, \TMP3
+-	movdqa	\TMP3, HashKey(%arg2)
++	movdqu	\TMP3, HashKey(%arg2)
+ 
+ 	movdqa	   \TMP3, \TMP5
+ 	pshufd	   $78, \TMP3, \TMP1
+ 	pxor	   \TMP3, \TMP1
+-	movdqa	   \TMP1, HashKey_k(%arg2)
++	movdqu	   \TMP1, HashKey_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^2<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_2(%arg2)
++	movdqu	   \TMP5, HashKey_2(%arg2)
+ # HashKey_2 = HashKey^2<<1 (mod poly)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_2_k(%arg2)
++	movdqu	   \TMP1, HashKey_2_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_3(%arg2)
++	movdqu	   \TMP5, HashKey_3(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_3_k(%arg2)
++	movdqu	   \TMP1, HashKey_3_k(%arg2)
+ 
+ 	GHASH_MUL  \TMP5, \TMP3, \TMP1, \TMP2, \TMP4, \TMP6, \TMP7
+ # TMP5 = HashKey^3<<1 (mod poly)
+-	movdqa	   \TMP5, HashKey_4(%arg2)
++	movdqu	   \TMP5, HashKey_4(%arg2)
+ 	pshufd	   $78, \TMP5, \TMP1
+ 	pxor	   \TMP5, \TMP1
+-	movdqa	   \TMP1, HashKey_4_k(%arg2)
++	movdqu	   \TMP1, HashKey_4_k(%arg2)
+ .endm
+ 
+ # GCM_INIT initializes a gcm_context struct to prepare for encoding/decoding.
+@@ -271,7 +271,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
+ 	movdqu %xmm0, CurCount(%arg2) # ctx_data.current_counter = iv
+ 
+ 	PRECOMPUTE \SUBKEY, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
+-	movdqa HashKey(%arg2), %xmm13
++	movdqu HashKey(%arg2), %xmm13
+ 
+ 	CALC_AAD_HASH %xmm13, \AAD, \AADLEN, %xmm0, %xmm1, %xmm2, %xmm3, \
+ 	%xmm4, %xmm5, %xmm6
+@@ -997,7 +997,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1016,7 +1016,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1031,7 +1031,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1044,7 +1044,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1058,7 +1058,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1074,7 +1074,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1092,7 +1092,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1121,7 +1121,7 @@ aes_loop_par_enc_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1205,7 +1205,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pshufd	  $78, \XMM5, \TMP6
+ 	pxor	  \XMM5, \TMP6
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP4           # TMP4 = a1*b1
+ 	movdqa    \XMM0, \XMM1
+ 	paddd     ONE(%rip), \XMM0		# INCR CNT
+@@ -1224,7 +1224,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	pxor	  (%arg1), \XMM2
+ 	pxor	  (%arg1), \XMM3
+ 	pxor	  (%arg1), \XMM4
+-	movdqa	  HashKey_4_k(%arg2), \TMP5
++	movdqu	  HashKey_4_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP6           # TMP6 = (a1+a0)*(b1+b0)
+ 	movaps 0x10(%arg1), \TMP1
+ 	AESENC	  \TMP1, \XMM1              # Round 1
+@@ -1239,7 +1239,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM6, \TMP1
+ 	pshufd	  $78, \XMM6, \TMP2
+ 	pxor	  \XMM6, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1           # TMP1 = a1 * b1
+ 	movaps 0x30(%arg1), \TMP3
+ 	AESENC    \TMP3, \XMM1              # Round 3
+@@ -1252,7 +1252,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_3_k(%arg2), \TMP5
++	movdqu	  HashKey_3_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x50(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1              # Round 5
+@@ -1266,7 +1266,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM7, \TMP1
+ 	pshufd	  $78, \XMM7, \TMP2
+ 	pxor	  \XMM7, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 
+         # Multiply TMP5 * HashKey using karatsuba
+ 
+@@ -1282,7 +1282,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	AESENC	  \TMP3, \XMM2
+ 	AESENC	  \TMP3, \XMM3
+ 	AESENC	  \TMP3, \XMM4
+-	movdqa	  HashKey_2_k(%arg2), \TMP5
++	movdqu	  HashKey_2_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2           # TMP2 = (a1+a0)*(b1+b0)
+ 	movaps 0x80(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1             # Round 8
+@@ -1300,7 +1300,7 @@ TMP6 XMM0 XMM1 XMM2 XMM3 XMM4 XMM5 XMM6 XMM7 XMM8 operation
+ 	movdqa	  \XMM8, \TMP1
+ 	pshufd	  $78, \XMM8, \TMP2
+ 	pxor	  \XMM8, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1          # TMP1 = a1*b1
+ 	movaps 0x90(%arg1), \TMP3
+ 	AESENC	  \TMP3, \XMM1            # Round 9
+@@ -1329,7 +1329,7 @@ aes_loop_par_dec_done\@:
+ 	AESENCLAST \TMP3, \XMM2
+ 	AESENCLAST \TMP3, \XMM3
+ 	AESENCLAST \TMP3, \XMM4
+-	movdqa    HashKey_k(%arg2), \TMP5
++	movdqu    HashKey_k(%arg2), \TMP5
+ 	PCLMULQDQ 0x00, \TMP5, \TMP2          # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqu	  (%arg4,%r11,1), \TMP3
+ 	pxor	  \TMP3, \XMM1                 # Ciphertext/Plaintext XOR EK
+@@ -1405,10 +1405,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM1, \TMP6
+ 	pshufd	  $78, \XMM1, \TMP2
+ 	pxor	  \XMM1, \TMP2
+-	movdqa	  HashKey_4(%arg2), \TMP5
++	movdqu	  HashKey_4(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP6       # TMP6 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM1       # XMM1 = a0*b0
+-	movdqa	  HashKey_4_k(%arg2), \TMP4
++	movdqu	  HashKey_4_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	movdqa	  \XMM1, \XMMDst
+ 	movdqa	  \TMP2, \XMM1              # result in TMP6, XMMDst, XMM1
+@@ -1418,10 +1418,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM2, \TMP1
+ 	pshufd	  $78, \XMM2, \TMP2
+ 	pxor	  \XMM2, \TMP2
+-	movdqa	  HashKey_3(%arg2), \TMP5
++	movdqu	  HashKey_3(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM2       # XMM2 = a0*b0
+-	movdqa	  HashKey_3_k(%arg2), \TMP4
++	movdqu	  HashKey_3_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM2, \XMMDst
+@@ -1433,10 +1433,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM3, \TMP1
+ 	pshufd	  $78, \XMM3, \TMP2
+ 	pxor	  \XMM3, \TMP2
+-	movdqa	  HashKey_2(%arg2), \TMP5
++	movdqu	  HashKey_2(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1       # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM3       # XMM3 = a0*b0
+-	movdqa	  HashKey_2_k(%arg2), \TMP4
++	movdqu	  HashKey_2_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM3, \XMMDst
+@@ -1446,10 +1446,10 @@ TMP7 XMM1 XMM2 XMM3 XMM4 XMMDst
+ 	movdqa	  \XMM4, \TMP1
+ 	pshufd	  $78, \XMM4, \TMP2
+ 	pxor	  \XMM4, \TMP2
+-	movdqa	  HashKey(%arg2), \TMP5
++	movdqu	  HashKey(%arg2), \TMP5
+ 	PCLMULQDQ 0x11, \TMP5, \TMP1	    # TMP1 = a1*b1
+ 	PCLMULQDQ 0x00, \TMP5, \XMM4       # XMM4 = a0*b0
+-	movdqa	  HashKey_k(%arg2), \TMP4
++	movdqu	  HashKey_k(%arg2), \TMP4
+ 	PCLMULQDQ 0x00, \TMP4, \TMP2       # TMP2 = (a1+a0)*(b1+b0)
+ 	pxor	  \TMP1, \TMP6
+ 	pxor	  \XMM4, \XMMDst
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 7326078eaa7a..278cd07228dd 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -532,7 +532,7 @@ static int bzImage64_cleanup(void *loader_data)
+ static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
+ {
+ 	return verify_pefile_signature(kernel, kernel_len,
+-				       NULL,
++				       VERIFY_USE_SECONDARY_KEYRING,
+ 				       VERIFYING_KEXEC_PE_SIGNATURE);
+ }
+ #endif
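The kexec hunk above swaps a bare `NULL` for the named sentinel `VERIFY_USE_SECONDARY_KEYRING`, and the later `certs/system_keyring.c` and `pkcs7_key_type.c` hunks replace the matching magic value `(void *)1UL` on the dispatch side. A minimal standalone sketch of that sentinel-pointer pattern (all names here are hypothetical stand-ins, not the kernel's own API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical re-creation of the pattern: a named macro replaces the
 * magic value (void *)1UL that callers previously passed directly. */
#define USE_SECONDARY_KEYRING ((void *)1UL)

enum keyring_choice { BUILTIN, SECONDARY, CALLER_SUPPLIED };

/* Dispatch on the trusted-keys argument the way verify_pkcs7_signature()
 * does: NULL means the default builtin keyring, the sentinel means the
 * secondary keyring, anything else is a real keyring pointer. */
static enum keyring_choice pick_keyring(void *trusted_keys)
{
	if (!trusted_keys)
		return BUILTIN;
	if (trusted_keys == USE_SECONDARY_KEYRING)
		return SECONDARY;
	return CALLER_SUPPLIED;
}
```

Giving the sentinel a name keeps the caller and the dispatcher in agreement; with the raw `(void *)1UL` on both sides, a typo on either side compiles silently.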
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 46b428c0990e..bedabcf33a3e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -197,12 +197,14 @@ static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_
+ 
+ static const struct {
+ 	const char *option;
+-	enum vmx_l1d_flush_state cmd;
++	bool for_parse;
+ } vmentry_l1d_param[] = {
+-	{"auto",	VMENTER_L1D_FLUSH_AUTO},
+-	{"never",	VMENTER_L1D_FLUSH_NEVER},
+-	{"cond",	VMENTER_L1D_FLUSH_COND},
+-	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++	[VMENTER_L1D_FLUSH_AUTO]	 = {"auto", true},
++	[VMENTER_L1D_FLUSH_NEVER]	 = {"never", true},
++	[VMENTER_L1D_FLUSH_COND]	 = {"cond", true},
++	[VMENTER_L1D_FLUSH_ALWAYS]	 = {"always", true},
++	[VMENTER_L1D_FLUSH_EPT_DISABLED] = {"EPT disabled", false},
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED] = {"not required", false},
+ };
+ 
+ #define L1D_CACHE_ORDER 4
+@@ -286,8 +288,9 @@ static int vmentry_l1d_flush_parse(const char *s)
+ 
+ 	if (s) {
+ 		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
+-			if (sysfs_streq(s, vmentry_l1d_param[i].option))
+-				return vmentry_l1d_param[i].cmd;
++			if (vmentry_l1d_param[i].for_parse &&
++			    sysfs_streq(s, vmentry_l1d_param[i].option))
++				return i;
+ 		}
+ 	}
+ 	return -EINVAL;
+@@ -297,13 +300,13 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ {
+ 	int l1tf, ret;
+ 
+-	if (!boot_cpu_has(X86_BUG_L1TF))
+-		return 0;
+-
+ 	l1tf = vmentry_l1d_flush_parse(s);
+ 	if (l1tf < 0)
+ 		return l1tf;
+ 
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
+ 	/*
+ 	 * Has vmx_init() run already? If not then this is the pre init
+ 	 * parameter parsing. In that case just store the value and let
+@@ -323,6 +326,9 @@ static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
+ 
+ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
+ {
++	if (WARN_ON_ONCE(l1tf_vmx_mitigation >= ARRAY_SIZE(vmentry_l1d_param)))
++		return sprintf(s, "???\n");
++
+ 	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
+ }
+ 
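The vmx.c hunk above rekeys `vmentry_l1d_param[]` with designated initializers so the array index itself is the enum value, which lets the parser return the loop index directly and lets non-parseable states still have a printable name. A compact sketch of that table layout under simplified, hypothetical names:

```c
#include <assert.h>
#include <string.h>

enum flush_state { AUTO, NEVER, COND, ALWAYS, EPT_DISABLED, NR_STATES };

/* Designated initializers key each entry by its enum value, so no
 * separate .cmd field is needed; for_parse marks which entries are
 * acceptable as user input (vs. display-only states). */
static const struct {
	const char *option;
	int for_parse;
} param[] = {
	[AUTO]         = {"auto",         1},
	[NEVER]        = {"never",        1},
	[COND]         = {"cond",         1},
	[ALWAYS]       = {"always",       1},
	[EPT_DISABLED] = {"EPT disabled", 0},
};

/* Return the enum value for a user-supplied string, or -1 on error.
 * The loop index IS the result, by construction of the table. */
static int parse_flush_param(const char *s)
{
	int i;

	for (i = 0; i < NR_STATES; i++)
		if (param[i].for_parse && strcmp(s, param[i].option) == 0)
			return i;
	return -1;
}
```

This is also why the `_get` side in the real patch can index the table with `l1tf_vmx_mitigation` directly (after a bounds `WARN_ON_ONCE`): every state, parseable or not, has an entry at its own index.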
+diff --git a/arch/xtensa/include/asm/cacheasm.h b/arch/xtensa/include/asm/cacheasm.h
+index 2041abb10a23..34545ecfdd6b 100644
+--- a/arch/xtensa/include/asm/cacheasm.h
++++ b/arch/xtensa/include/asm/cacheasm.h
+@@ -31,16 +31,32 @@
+  *
+  */
+ 
+-	.macro	__loop_cache_all ar at insn size line_width
+ 
+-	movi	\ar, 0
++	.macro	__loop_cache_unroll ar at insn size line_width max_immed
++
++	.if	(1 << (\line_width)) > (\max_immed)
++	.set	_reps, 1
++	.elseif	(2 << (\line_width)) > (\max_immed)
++	.set	_reps, 2
++	.else
++	.set	_reps, 4
++	.endif
++
++	__loopi	\ar, \at, \size, (_reps << (\line_width))
++	.set	_index, 0
++	.rep	_reps
++	\insn	\ar, _index << (\line_width)
++	.set	_index, _index + 1
++	.endr
++	__endla	\ar, \at, _reps << (\line_width)
++
++	.endm
++
+ 
+-	__loopi	\ar, \at, \size, (4 << (\line_width))
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	.macro	__loop_cache_all ar at insn size line_width max_immed
++
++	movi	\ar, 0
++	__loop_cache_unroll \ar, \at, \insn, \size, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -57,14 +73,9 @@
+ 	.endm
+ 
+ 
+-	.macro	__loop_cache_page ar at insn line_width
++	.macro	__loop_cache_page ar at insn line_width max_immed
+ 
+-	__loopi	\ar, \at, PAGE_SIZE, 4 << (\line_width)
+-	\insn	\ar, 0 << (\line_width)
+-	\insn	\ar, 1 << (\line_width)
+-	\insn	\ar, 2 << (\line_width)
+-	\insn	\ar, 3 << (\line_width)
+-	__endla	\ar, \at, 4 << (\line_width)
++	__loop_cache_unroll \ar, \at, \insn, PAGE_SIZE, \line_width, \max_immed
+ 
+ 	.endm
+ 
+@@ -72,7 +83,8 @@
+ 	.macro	___unlock_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_LINE_LOCKABLE && XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diu XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -81,7 +93,8 @@
+ 	.macro	___unlock_icache_all ar at
+ 
+ #if XCHAL_ICACHE_LINE_LOCKABLE && XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iiu XCHAL_ICACHE_SIZE \
++		XCHAL_ICACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -90,7 +103,8 @@
+ 	.macro	___flush_invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwbi XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -99,7 +113,8 @@
+ 	.macro	___flush_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at diwb XCHAL_DCACHE_SIZE \
++		XCHAL_DCACHE_LINEWIDTH 240
+ #endif
+ 
+ 	.endm
+@@ -108,8 +123,8 @@
+ 	.macro	___invalidate_dcache_all ar at
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_all \ar \at dii __stringify(DCACHE_WAY_SIZE) \
+-			 XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_all \ar \at dii XCHAL_DCACHE_SIZE \
++			 XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -118,8 +133,8 @@
+ 	.macro	___invalidate_icache_all ar at
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_all \ar \at iii __stringify(ICACHE_WAY_SIZE) \
+-			 XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_all \ar \at iii XCHAL_ICACHE_SIZE \
++			 XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -166,7 +181,7 @@
+ 	.macro	___flush_invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwbi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -175,7 +190,7 @@
+ 	.macro ___flush_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhwb XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -184,7 +199,7 @@
+ 	.macro	___invalidate_dcache_page ar as
+ 
+ #if XCHAL_DCACHE_SIZE
+-	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH
++	__loop_cache_page \ar \as dhi XCHAL_DCACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
+@@ -193,7 +208,7 @@
+ 	.macro	___invalidate_icache_page ar as
+ 
+ #if XCHAL_ICACHE_SIZE
+-	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH
++	__loop_cache_page \ar \as ihi XCHAL_ICACHE_LINEWIDTH 1020
+ #endif
+ 
+ 	.endm
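The xtensa `__loop_cache_unroll` macro above picks an unroll factor so the largest per-instruction offset, `(_reps - 1) << line_width`, still fits the instruction's immediate field (`max_immed`). The `.if`/`.elseif` selection can be sketched in C (hypothetical helper name):

```c
#include <assert.h>

/* Mirror of the assembler-time selection: fall back from 4-way to
 * 2-way to 1-way unrolling as the cache line width grows, so that
 * every generated offset fits within max_immed. */
static int unroll_reps(int line_width, int max_immed)
{
	if ((1 << line_width) > max_immed)
		return 1;
	if ((2 << line_width) > max_immed)
		return 2;
	return 4;
}
```

With `max_immed = 240` (the value the patch passes for the `dii`/`iii`-style loops with small immediates), a 16-byte line (`line_width = 4`) still unrolls 4-way, a 128-byte line drops to 2-way, and a 256-byte line runs unrolled only once, which is exactly the case the old fixed 4-way version could not express.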
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index a9e8633388f4..58c6efa9f9a9 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -913,7 +913,8 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
+ 	if (ret)
+ 		return ret;
+ 
+-	return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	ret = bfq_io_set_weight_legacy(of_css(of), NULL, weight);
++	return ret ?: nbytes;
+ }
+ 
+ #ifdef CONFIG_DEBUG_BLK_CGROUP
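The bfq hunk above fixes a kernfs write handler that must return the number of bytes consumed on success rather than 0, using the GNU `?:` extension (the "elvis" operator, which yields its first operand when nonzero). A tiny sketch with a hypothetical wrapper name:

```c
#include <assert.h>

/* Sketch of the "ret ?: nbytes" idiom: propagate a nonzero error code,
 * otherwise report the full write as consumed. Requires the GNU C
 * conditional-with-omitted-middle-operand extension. */
static long finish_write(int ret, long nbytes)
{
	return ret ?: nbytes;
}
```

Returning 0 from such a handler tells the caller nothing was consumed, so userspace `write()` loops can spin retrying forever; that is the bug the one-line change addresses.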
+diff --git a/block/blk-core.c b/block/blk-core.c
+index ee33590f54eb..1646ea85dade 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -715,6 +715,35 @@ void blk_set_queue_dying(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+ 
++/* Unconfigure the I/O scheduler and dissociate from the cgroup controller. */
++void blk_exit_queue(struct request_queue *q)
++{
++	/*
++	 * Since the I/O scheduler exit code may access cgroup information,
++	 * perform I/O scheduler exit before disassociating from the block
++	 * cgroup controller.
++	 */
++	if (q->elevator) {
++		ioc_clear_queue(q);
++		elevator_exit(q, q->elevator);
++		q->elevator = NULL;
++	}
++
++	/*
++	 * Remove all references to @q from the block cgroup controller before
++	 * restoring @q->queue_lock to avoid that restoring this pointer causes
++	 * e.g. blkcg_print_blkgs() to crash.
++	 */
++	blkcg_exit_queue(q);
++
++	/*
++	 * Since the cgroup code may dereference the @q->backing_dev_info
++	 * pointer, only decrease its reference count after having removed the
++	 * association with the block cgroup controller.
++	 */
++	bdi_put(q->backing_dev_info);
++}
++
+ /**
+  * blk_cleanup_queue - shutdown a request queue
+  * @q: request queue to shutdown
+@@ -780,30 +809,7 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	 */
+ 	WARN_ON_ONCE(q->kobj.state_in_sysfs);
+ 
+-	/*
+-	 * Since the I/O scheduler exit code may access cgroup information,
+-	 * perform I/O scheduler exit before disassociating from the block
+-	 * cgroup controller.
+-	 */
+-	if (q->elevator) {
+-		ioc_clear_queue(q);
+-		elevator_exit(q, q->elevator);
+-		q->elevator = NULL;
+-	}
+-
+-	/*
+-	 * Remove all references to @q from the block cgroup controller before
+-	 * restoring @q->queue_lock to avoid that restoring this pointer causes
+-	 * e.g. blkcg_print_blkgs() to crash.
+-	 */
+-	blkcg_exit_queue(q);
+-
+-	/*
+-	 * Since the cgroup code may dereference the @q->backing_dev_info
+-	 * pointer, only decrease its reference count after having removed the
+-	 * association with the block cgroup controller.
+-	 */
+-	bdi_put(q->backing_dev_info);
++	blk_exit_queue(q);
+ 
+ 	if (q->mq_ops)
+ 		blk_mq_free_queue(q);
+@@ -1180,6 +1186,7 @@ out_exit_flush_rq:
+ 		q->exit_rq_fn(q, q->fq->flush_rq);
+ out_free_flush_queue:
+ 	blk_free_flush_queue(q->fq);
++	q->fq = NULL;
+ 	return -ENOMEM;
+ }
+ EXPORT_SYMBOL(blk_init_allocated_queue);
+@@ -3763,9 +3770,11 @@ EXPORT_SYMBOL(blk_finish_plug);
+  */
+ void blk_pm_runtime_init(struct request_queue *q, struct device *dev)
+ {
+-	/* not support for RQF_PM and ->rpm_status in blk-mq yet */
+-	if (q->mq_ops)
++	/* Don't enable runtime PM for blk-mq until it is ready */
++	if (q->mq_ops) {
++		pm_runtime_disable(dev);
+ 		return;
++	}
+ 
+ 	q->dev = dev;
+ 	q->rpm_status = RPM_ACTIVE;
+diff --git a/block/blk-lib.c b/block/blk-lib.c
+index 8faa70f26fcd..d1b9dd03da25 100644
+--- a/block/blk-lib.c
++++ b/block/blk-lib.c
+@@ -68,6 +68,8 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 		 */
+ 		req_sects = min_t(sector_t, nr_sects,
+ 					q->limits.max_discard_sectors);
++		if (!req_sects)
++			goto fail;
+ 		if (req_sects > UINT_MAX >> 9)
+ 			req_sects = UINT_MAX >> 9;
+ 
+@@ -105,6 +107,14 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ 
+ 	*biop = bio;
+ 	return 0;
++
++fail:
++	if (bio) {
++		submit_bio_wait(bio);
++		bio_put(bio);
++	}
++	*biop = NULL;
++	return -EOPNOTSUPP;
+ }
+ EXPORT_SYMBOL(__blkdev_issue_discard);
+ 
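The `__blkdev_issue_discard` hunk above guards against `max_discard_sectors` being 0 (e.g. the device stopped supporting discard mid-operation), which would otherwise leave the chunking loop unable to make progress. The shape of the fix, reduced to a standalone sketch (function name and return value are illustrative, with -95 standing in for `-EOPNOTSUPP`):

```c
#include <assert.h>

/* Split nr_sects into chunks of at most max_per_bio, as the discard
 * loop does. If the per-chunk limit is zero the loop could never
 * advance, so bail out instead of spinning forever. */
static int count_chunks(unsigned long nr_sects, unsigned long max_per_bio)
{
	int chunks = 0;

	while (nr_sects) {
		unsigned long req =
			nr_sects < max_per_bio ? nr_sects : max_per_bio;

		if (!req)
			return -95; /* -EOPNOTSUPP in the kernel */
		nr_sects -= req;
		chunks++;
	}
	return chunks;
}
```

The real patch additionally flushes any already-built bio on the failure path (`submit_bio_wait` + `bio_put`) before reporting the error, so work queued prior to the limit change is not lost.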
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 94987b1f69e1..96c7dfc04852 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -804,6 +804,21 @@ static void __blk_release_queue(struct work_struct *work)
+ 		blk_stat_remove_callback(q, q->poll_cb);
+ 	blk_stat_free_callback(q->poll_cb);
+ 
++	if (!blk_queue_dead(q)) {
++		/*
++		 * Last reference was dropped without having called
++		 * blk_cleanup_queue().
++		 */
++		WARN_ONCE(blk_queue_init_done(q),
++			  "request queue %p has been registered but blk_cleanup_queue() has not been called for that queue\n",
++			  q);
++		blk_exit_queue(q);
++	}
++
++	WARN(blkg_root_lookup(q),
++	     "request queue %p is being released but it has not yet been removed from the blkcg controller\n",
++	     q);
++
+ 	blk_free_queue_stats(q->stats);
+ 
+ 	blk_exit_rl(q, &q->root_rl);
+diff --git a/block/blk.h b/block/blk.h
+index 8d23aea96ce9..a8f0f7986cfd 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -130,6 +130,7 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
+ int blk_init_rl(struct request_list *rl, struct request_queue *q,
+ 		gfp_t gfp_mask);
+ void blk_exit_rl(struct request_queue *q, struct request_list *rl);
++void blk_exit_queue(struct request_queue *q);
+ void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
+ 			struct bio *bio);
+ void blk_queue_bypass_start(struct request_queue *q);
+diff --git a/certs/system_keyring.c b/certs/system_keyring.c
+index 6251d1b27f0c..81728717523d 100644
+--- a/certs/system_keyring.c
++++ b/certs/system_keyring.c
+@@ -15,6 +15,7 @@
+ #include <linux/cred.h>
+ #include <linux/err.h>
+ #include <linux/slab.h>
++#include <linux/verification.h>
+ #include <keys/asymmetric-type.h>
+ #include <keys/system_keyring.h>
+ #include <crypto/pkcs7.h>
+@@ -230,7 +231,7 @@ int verify_pkcs7_signature(const void *data, size_t len,
+ 
+ 	if (!trusted_keys) {
+ 		trusted_keys = builtin_trusted_keys;
+-	} else if (trusted_keys == (void *)1UL) {
++	} else if (trusted_keys == VERIFY_USE_SECONDARY_KEYRING) {
+ #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+ 		trusted_keys = secondary_trusted_keys;
+ #else
+diff --git a/crypto/asymmetric_keys/pkcs7_key_type.c b/crypto/asymmetric_keys/pkcs7_key_type.c
+index e284d9cb9237..5b2f6a2b5585 100644
+--- a/crypto/asymmetric_keys/pkcs7_key_type.c
++++ b/crypto/asymmetric_keys/pkcs7_key_type.c
+@@ -63,7 +63,7 @@ static int pkcs7_preparse(struct key_preparsed_payload *prep)
+ 
+ 	return verify_pkcs7_signature(NULL, 0,
+ 				      prep->data, prep->datalen,
+-				      (void *)1UL, usage,
++				      VERIFY_USE_SECONDARY_KEYRING, usage,
+ 				      pkcs7_view_content, prep);
+ }
+ 
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index fe9d46d81750..d8b8fc2ff563 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -56,14 +56,9 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
+ 	if (ACPI_FAILURE(status)) {
+ 		return_ACPI_STATUS(status);
+ 	}
+-	/*
+-	 * If the target sleep state is S5, clear all GPEs and fixed events too
+-	 */
+-	if (sleep_state == ACPI_STATE_S5) {
+-		status = acpi_hw_clear_acpi_status();
+-		if (ACPI_FAILURE(status)) {
+-			return_ACPI_STATUS(status);
+-		}
++	status = acpi_hw_clear_acpi_status();
++	if (ACPI_FAILURE(status)) {
++		return_ACPI_STATUS(status);
+ 	}
+ 	acpi_gbl_system_awake_and_running = FALSE;
+ 
+diff --git a/drivers/acpi/acpica/psloop.c b/drivers/acpi/acpica/psloop.c
+index 44f35ab3347d..0f0bdc9d24c6 100644
+--- a/drivers/acpi/acpica/psloop.c
++++ b/drivers/acpi/acpica/psloop.c
+@@ -22,6 +22,7 @@
+ #include "acdispat.h"
+ #include "amlcode.h"
+ #include "acconvert.h"
++#include "acnamesp.h"
+ 
+ #define _COMPONENT          ACPI_PARSER
+ ACPI_MODULE_NAME("psloop")
+@@ -527,12 +528,18 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 				if (ACPI_FAILURE(status)) {
+ 					return_ACPI_STATUS(status);
+ 				}
+-				if (walk_state->opcode == AML_SCOPE_OP) {
++				if (acpi_ns_opens_scope
++				    (acpi_ps_get_opcode_info
++				     (walk_state->opcode)->object_type)) {
+ 					/*
+-					 * If the scope op fails to parse, skip the body of the
+-					 * scope op because the parse failure indicates that the
+-					 * device may not exist.
++					 * If the scope/device op fails to parse, skip the body of
++					 * the scope op because the parse failure indicates that
++					 * the device may not exist.
+ 					 */
++					ACPI_ERROR((AE_INFO,
++						    "Skip parsing opcode %s",
++						    acpi_ps_get_opcode_name
++						    (walk_state->opcode)));
+ 					walk_state->parser_state.aml =
+ 					    walk_state->aml + 1;
+ 					walk_state->parser_state.aml =
+@@ -540,8 +547,6 @@ acpi_status acpi_ps_parse_loop(struct acpi_walk_state *walk_state)
+ 					    (&walk_state->parser_state);
+ 					walk_state->aml =
+ 					    walk_state->parser_state.aml;
+-					ACPI_ERROR((AE_INFO,
+-						    "Skipping Scope block"));
+ 				}
+ 
+ 				continue;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index a390c6d4f72d..af7cb8e618fe 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -337,6 +337,7 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t len)
+ {
+ 	char *file_name;
++	size_t sz;
+ 	struct file *backing_dev = NULL;
+ 	struct inode *inode;
+ 	struct address_space *mapping;
+@@ -357,7 +358,11 @@ static ssize_t backing_dev_store(struct device *dev,
+ 		goto out;
+ 	}
+ 
+-	strlcpy(file_name, buf, len);
++	strlcpy(file_name, buf, PATH_MAX);
++	/* ignore trailing newline */
++	sz = strlen(file_name);
++	if (sz > 0 && file_name[sz - 1] == '\n')
++		file_name[sz - 1] = 0x00;
+ 
+ 	backing_dev = filp_open(file_name, O_RDWR|O_LARGEFILE, 0);
+ 	if (IS_ERR(backing_dev)) {
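The zram hunk above makes two related fixes to `backing_dev_store`: the copy is now bounded by the destination buffer size (`PATH_MAX`) instead of the input length, and a trailing newline from `echo ... > sysfs` is stripped before the path is opened. A minimal userspace sketch of both (the helper is hypothetical; `strncpy` stands in for the kernel's `strlcpy`):

```c
#include <assert.h>
#include <string.h>

/* Copy a sysfs-supplied path: bound by the DESTINATION size, then
 * drop a trailing '\n' so filp_open-style lookups don't fail on
 * "/dev/loop0\n". */
static void copy_path(char *dst, const char *src, size_t dst_sz)
{
	size_t sz;

	strncpy(dst, src, dst_sz - 1);
	dst[dst_sz - 1] = '\0';

	sz = strlen(dst);
	if (sz > 0 && dst[sz - 1] == '\n')
		dst[sz - 1] = '\0';
}
```

Bounding by the input length, as the old code did, defeats the purpose of a size-bounded copy: a write longer than the allocation would overrun the destination.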
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index 1d50e97d49f1..6d53f7d9fc7a 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -555,12 +555,20 @@ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop);
+ 
+ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
+ {
+-	struct policy_dbs_info *policy_dbs = policy->governor_data;
++	struct policy_dbs_info *policy_dbs;
++
++	/* Protect gov->gdbs_data against cpufreq_dbs_governor_exit() */
++	mutex_lock(&gov_dbs_data_mutex);
++	policy_dbs = policy->governor_data;
++	if (!policy_dbs)
++		goto out;
+ 
+ 	mutex_lock(&policy_dbs->update_mutex);
+ 	cpufreq_policy_apply_limits(policy);
+ 	gov_update_sample_delay(policy_dbs, 0);
+-
+ 	mutex_unlock(&policy_dbs->update_mutex);
++
++out:
++	mutex_unlock(&gov_dbs_data_mutex);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
+diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
+index 1aef60d160eb..910f8a68f58b 100644
+--- a/drivers/cpuidle/governors/menu.c
++++ b/drivers/cpuidle/governors/menu.c
+@@ -349,14 +349,12 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		 * If the tick is already stopped, the cost of possible short
+ 		 * idle duration misprediction is much higher, because the CPU
+ 		 * may be stuck in a shallow idle state for a long time as a
+-		 * result of it.  In that case say we might mispredict and try
+-		 * to force the CPU into a state for which we would have stopped
+-		 * the tick, unless a timer is going to expire really soon
+-		 * anyway.
++		 * result of it.  In that case say we might mispredict and use
++		 * the known time till the closest timer event for the idle
++		 * state selection.
+ 		 */
+ 		if (data->predicted_us < TICK_USEC)
+-			data->predicted_us = min_t(unsigned int, TICK_USEC,
+-						   ktime_to_us(delta_next));
++			data->predicted_us = ktime_to_us(delta_next);
+ 	} else {
+ 		/*
+ 		 * Use the performance multiplier and the user-configurable
+@@ -381,8 +379,33 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 			continue;
+ 		if (idx == -1)
+ 			idx = i; /* first enabled state */
+-		if (s->target_residency > data->predicted_us)
+-			break;
++		if (s->target_residency > data->predicted_us) {
++			if (data->predicted_us < TICK_USEC)
++				break;
++
++			if (!tick_nohz_tick_stopped()) {
++				/*
++				 * If the state selected so far is shallow,
++				 * waking up early won't hurt, so retain the
++				 * tick in that case and let the governor run
++				 * again in the next iteration of the loop.
++				 */
++				expected_interval = drv->states[idx].target_residency;
++				break;
++			}
++
++			/*
++			 * If the state selected so far is shallow and this
++			 * state's target residency matches the time till the
++			 * closest timer event, select this one to avoid getting
++			 * stuck in the shallow one for too long.
++			 */
++			if (drv->states[idx].target_residency < TICK_USEC &&
++			    s->target_residency <= ktime_to_us(delta_next))
++				idx = i;
++
++			goto out;
++		}
+ 		if (s->exit_latency > latency_req) {
+ 			/*
+ 			 * If we break out of the loop for latency reasons, use
+@@ -403,14 +426,13 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	 * Don't stop the tick if the selected state is a polling one or if the
+ 	 * expected idle duration is shorter than the tick period length.
+ 	 */
+-	if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+-	    expected_interval < TICK_USEC) {
++	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
++	     expected_interval < TICK_USEC) && !tick_nohz_tick_stopped()) {
+ 		unsigned int delta_next_us = ktime_to_us(delta_next);
+ 
+ 		*stop_tick = false;
+ 
+-		if (!tick_nohz_tick_stopped() && idx > 0 &&
+-		    drv->states[idx].target_residency > delta_next_us) {
++		if (idx > 0 && drv->states[idx].target_residency > delta_next_us) {
+ 			/*
+ 			 * The tick is not going to be stopped and the target
+ 			 * residency of the state to be returned is not within
+@@ -429,6 +451,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 		}
+ 	}
+ 
++out:
+ 	data->last_state_idx = idx;
+ 
+ 	return data->last_state_idx;
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index 6e61cc93c2b0..d7aa7d7ff102 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -679,10 +679,8 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	int ret = 0;
+ 
+ 	if (keylen != 2 * AES_MIN_KEY_SIZE  && keylen != 2 * AES_MAX_KEY_SIZE) {
+-		crypto_ablkcipher_set_flags(ablkcipher,
+-					    CRYPTO_TFM_RES_BAD_KEY_LEN);
+ 		dev_err(jrdev, "key size mismatch\n");
+-		return -EINVAL;
++		goto badkey;
+ 	}
+ 
+ 	ctx->cdata.keylen = keylen;
+@@ -715,7 +713,7 @@ static int xts_ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+ 	return ret;
+ badkey:
+ 	crypto_ablkcipher_set_flags(ablkcipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-	return 0;
++	return -EINVAL;
+ }
+ 
+ /*
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 578ea63a3109..f26d62e5533a 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -71,8 +71,8 @@ static void rsa_priv_f2_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->d_dma, key->d_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->p_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+@@ -90,8 +90,8 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
+ 	dma_unmap_single(dev, pdb->dp_dma, p_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->dq_dma, q_sz, DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
+-	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
++	dma_unmap_single(dev, pdb->tmp2_dma, q_sz, DMA_BIDIRECTIONAL);
+ }
+ 
+ /* RSA Job Completion handler */
+@@ -417,13 +417,13 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 		goto unmap_p;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_q;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -451,7 +451,7 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_q:
+ 	dma_unmap_single(dev, pdb->q_dma, q_sz, DMA_TO_DEVICE);
+ unmap_p:
+@@ -504,13 +504,13 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 		goto unmap_dq;
+ 	}
+ 
+-	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_TO_DEVICE);
++	pdb->tmp1_dma = dma_map_single(dev, key->tmp1, p_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp1_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp1 memory\n");
+ 		goto unmap_qinv;
+ 	}
+ 
+-	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_TO_DEVICE);
++	pdb->tmp2_dma = dma_map_single(dev, key->tmp2, q_sz, DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, pdb->tmp2_dma)) {
+ 		dev_err(dev, "Unable to map RSA tmp2 memory\n");
+ 		goto unmap_tmp1;
+@@ -538,7 +538,7 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
+ 	return 0;
+ 
+ unmap_tmp1:
+-	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_TO_DEVICE);
++	dma_unmap_single(dev, pdb->tmp1_dma, p_sz, DMA_BIDIRECTIONAL);
+ unmap_qinv:
+ 	dma_unmap_single(dev, pdb->c_dma, p_sz, DMA_TO_DEVICE);
+ unmap_dq:
+diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
+index f4f258075b89..acdd72016ffe 100644
+--- a/drivers/crypto/caam/jr.c
++++ b/drivers/crypto/caam/jr.c
+@@ -190,7 +190,8 @@ static void caam_jr_dequeue(unsigned long devarg)
+ 		BUG_ON(CIRC_CNT(head, tail + i, JOBR_DEPTH) <= 0);
+ 
+ 		/* Unmap just-run descriptor so we can post-process */
+-		dma_unmap_single(dev, jrp->outring[hw_idx].desc,
++		dma_unmap_single(dev,
++				 caam_dma_to_cpu(jrp->outring[hw_idx].desc),
+ 				 jrp->entinfo[sw_idx].desc_size,
+ 				 DMA_TO_DEVICE);
+ 
+diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
+index 5285ece4f33a..b71895871be3 100644
+--- a/drivers/crypto/vmx/aes_cbc.c
++++ b/drivers/crypto/vmx/aes_cbc.c
+@@ -107,24 +107,23 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_encrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->enc_key, walk.iv, 1);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+@@ -147,24 +146,23 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
+ 		ret = crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
+-		preempt_disable();
+-		pagefault_disable();
+-		enable_kernel_vsx();
+-
+ 		blkcipher_walk_init(&walk, dst, src, nbytes);
+ 		ret = blkcipher_walk_virt(desc, &walk);
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			aes_p8_cbc_encrypt(walk.src.virt.addr,
+ 					   walk.dst.virt.addr,
+ 					   nbytes & AES_BLOCK_MASK,
+ 					   &ctx->dec_key, walk.iv, 0);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
++
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
+index 8bd9aff0f55f..e9954a7d4694 100644
+--- a/drivers/crypto/vmx/aes_xts.c
++++ b/drivers/crypto/vmx/aes_xts.c
+@@ -116,32 +116,39 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
+ 		ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
+ 		skcipher_request_zero(req);
+ 	} else {
++		blkcipher_walk_init(&walk, dst, src, nbytes);
++
++		ret = blkcipher_walk_virt(desc, &walk);
++
+ 		preempt_disable();
+ 		pagefault_disable();
+ 		enable_kernel_vsx();
+ 
+-		blkcipher_walk_init(&walk, dst, src, nbytes);
+-
+-		ret = blkcipher_walk_virt(desc, &walk);
+ 		iv = walk.iv;
+ 		memset(tweak, 0, AES_BLOCK_SIZE);
+ 		aes_p8_encrypt(iv, tweak, &ctx->tweak_key);
+ 
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++
+ 		while ((nbytes = walk.nbytes)) {
++			preempt_disable();
++			pagefault_disable();
++			enable_kernel_vsx();
+ 			if (enc)
+ 				aes_p8_xts_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->enc_key, NULL, tweak);
+ 			else
+ 				aes_p8_xts_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
+ 						nbytes & AES_BLOCK_MASK, &ctx->dec_key, NULL, tweak);
++			disable_kernel_vsx();
++			pagefault_enable();
++			preempt_enable();
+ 
+ 			nbytes &= AES_BLOCK_SIZE - 1;
+ 			ret = blkcipher_walk_done(desc, &walk, nbytes);
+ 		}
+-
+-		disable_kernel_vsx();
+-		pagefault_enable();
+-		preempt_enable();
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
+index 314eb1071cce..532545b9488e 100644
+--- a/drivers/dma-buf/reservation.c
++++ b/drivers/dma-buf/reservation.c
+@@ -141,6 +141,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
+ 	if (signaled) {
+ 		RCU_INIT_POINTER(fobj->shared[signaled_idx], fence);
+ 	} else {
++		BUG_ON(fobj->shared_count >= fobj->shared_max);
+ 		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
+ 		fobj->shared_count++;
+ 	}
+@@ -230,10 +231,9 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
+ 	old = reservation_object_get_list(obj);
+ 	obj->staged = NULL;
+ 
+-	if (!fobj) {
+-		BUG_ON(old->shared_count >= old->shared_max);
++	if (!fobj)
+ 		reservation_object_add_shared_inplace(obj, old, fence);
+-	} else
++	else
+ 		reservation_object_add_shared_replace(obj, old, fobj, fence);
+ }
+ EXPORT_SYMBOL(reservation_object_add_shared_fence);
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index af83ad58819c..b9d27c8fe57e 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -433,8 +433,8 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 		return index;
+ 
+ 	spin_lock_irqsave(&edev->lock, flags);
+-
+ 	state = !!(edev->state & BIT(index));
++	spin_unlock_irqrestore(&edev->lock, flags);
+ 
+ 	/*
+ 	 * Call functions in a raw notifier chain for the specific one
+@@ -448,6 +448,7 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
+ 	 */
+ 	raw_notifier_call_chain(&edev->nh_all, state, edev);
+ 
++	spin_lock_irqsave(&edev->lock, flags);
+ 	/* This could be in interrupt handler */
+ 	prop_buf = (char *)get_zeroed_page(GFP_ATOMIC);
+ 	if (!prop_buf) {
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index ba0a092ae085..c3949220b770 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -558,11 +558,8 @@ static void reset_channel_cb(void *arg)
+ 	channel->onchannel_callback = NULL;
+ }
+ 
+-static int vmbus_close_internal(struct vmbus_channel *channel)
++void vmbus_reset_channel_cb(struct vmbus_channel *channel)
+ {
+-	struct vmbus_channel_close_channel *msg;
+-	int ret;
+-
+ 	/*
+ 	 * vmbus_on_event(), running in the per-channel tasklet, can race
+ 	 * with vmbus_close_internal() in the case of SMP guest, e.g., when
+@@ -572,6 +569,29 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	 */
+ 	tasklet_disable(&channel->callback_event);
+ 
++	channel->sc_creation_callback = NULL;
++
++	/* Stop the callback asap */
++	if (channel->target_cpu != get_cpu()) {
++		put_cpu();
++		smp_call_function_single(channel->target_cpu, reset_channel_cb,
++					 channel, true);
++	} else {
++		reset_channel_cb(channel);
++		put_cpu();
++	}
++
++	/* Re-enable tasklet for use on re-open */
++	tasklet_enable(&channel->callback_event);
++}
++
++static int vmbus_close_internal(struct vmbus_channel *channel)
++{
++	struct vmbus_channel_close_channel *msg;
++	int ret;
++
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * In case a device driver's probe() fails (e.g.,
+ 	 * util_probe() -> vmbus_open() returns -ENOMEM) and the device is
+@@ -585,16 +605,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 	}
+ 
+ 	channel->state = CHANNEL_OPEN_STATE;
+-	channel->sc_creation_callback = NULL;
+-	/* Stop callback and cancel the timer asap */
+-	if (channel->target_cpu != get_cpu()) {
+-		put_cpu();
+-		smp_call_function_single(channel->target_cpu, reset_channel_cb,
+-					 channel, true);
+-	} else {
+-		reset_channel_cb(channel);
+-		put_cpu();
+-	}
+ 
+ 	/* Send a closing message */
+ 
+@@ -639,8 +649,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ 		get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
+ 
+ out:
+-	/* re-enable tasklet for use on re-open */
+-	tasklet_enable(&channel->callback_event);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index ecc2bd275a73..0f0e091c117c 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -527,10 +527,8 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ 		struct hv_device *dev
+ 			= newchannel->primary_channel->device_obj;
+ 
+-		if (vmbus_add_channel_kobj(dev, newchannel)) {
+-			atomic_dec(&vmbus_connection.offer_in_progress);
++		if (vmbus_add_channel_kobj(dev, newchannel))
+ 			goto err_free_chan;
+-		}
+ 
+ 		if (channel->sc_creation_callback != NULL)
+ 			channel->sc_creation_callback(newchannel);
+@@ -894,6 +892,12 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
+ 		return;
+ 	}
+ 
++	/*
++	 * Before setting channel->rescind in vmbus_rescind_cleanup(), we
++	 * should make sure the channel callback is not running any more.
++	 */
++	vmbus_reset_channel_cb(channel);
++
+ 	/*
+ 	 * Now wait for offer handling to complete.
+ 	 */
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 27436a937492..54b2a3a86677 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -693,7 +693,6 @@ int i2c_dw_probe(struct dw_i2c_dev *dev)
+ 	i2c_set_adapdata(adap, dev);
+ 
+ 	if (dev->pm_disabled) {
+-		dev_pm_syscore_device(dev->dev, true);
+ 		irq_flags = IRQF_NO_SUSPEND;
+ 	} else {
+ 		irq_flags = IRQF_SHARED | IRQF_COND_SUSPEND;
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 5660daf6c92e..d281d21cdd8e 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -448,6 +448,9 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
++	if (i_dev->pm_disabled)
++		return 0;
++
+ 	i_dev->disable(i_dev);
+ 	i2c_dw_prepare_clk(i_dev, false);
+ 
+@@ -458,7 +461,9 @@ static int dw_i2c_plat_resume(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
+-	i2c_dw_prepare_clk(i_dev, true);
++	if (!i_dev->pm_disabled)
++		i2c_dw_prepare_clk(i_dev, true);
++
+ 	i_dev->init(i_dev);
+ 
+ 	return 0;
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index 4dceb75e3586..4964561595f5 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -797,6 +797,7 @@ static int sca3000_write_raw(struct iio_dev *indio_dev,
+ 		mutex_lock(&st->lock);
+ 		ret = sca3000_write_3db_freq(st, val);
+ 		mutex_unlock(&st->lock);
++		return ret;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/iio/frequency/ad9523.c b/drivers/iio/frequency/ad9523.c
+index ddb6a334ae68..8e8263758439 100644
+--- a/drivers/iio/frequency/ad9523.c
++++ b/drivers/iio/frequency/ad9523.c
+@@ -508,7 +508,7 @@ static ssize_t ad9523_store(struct device *dev,
+ 		return ret;
+ 
+ 	if (!state)
+-		return 0;
++		return len;
+ 
+ 	mutex_lock(&indio_dev->mlock);
+ 	switch ((u32)this_attr->address) {
+@@ -642,7 +642,7 @@ static int ad9523_read_raw(struct iio_dev *indio_dev,
+ 		code = (AD9523_CLK_DIST_DIV_PHASE_REV(ret) * 3141592) /
+ 			AD9523_CLK_DIST_DIV_REV(ret);
+ 		*val = code / 1000000;
+-		*val2 = (code % 1000000) * 10;
++		*val2 = code % 1000000;
+ 		return IIO_VAL_INT_PLUS_MICRO;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b3ba9a222550..cbeae4509359 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4694,7 +4694,7 @@ static void mlx5_ib_dealloc_counters(struct mlx5_ib_dev *dev)
+ 	int i;
+ 
+ 	for (i = 0; i < dev->num_ports; i++) {
+-		if (dev->port[i].cnts.set_id)
++		if (dev->port[i].cnts.set_id_valid)
+ 			mlx5_core_dealloc_q_counter(dev->mdev,
+ 						    dev->port[i].cnts.set_id);
+ 		kfree(dev->port[i].cnts.names);
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index a4f1f638509f..01eae67d5a6e 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1626,7 +1626,7 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+ 	struct mlx5_ib_resources *devr = &dev->devr;
+ 	int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
+ 	struct mlx5_core_dev *mdev = dev->mdev;
+-	struct mlx5_ib_create_qp_resp resp;
++	struct mlx5_ib_create_qp_resp resp = {};
+ 	struct mlx5_ib_cq *send_cq;
+ 	struct mlx5_ib_cq *recv_cq;
+ 	unsigned long flags;
+@@ -5365,7 +5365,9 @@ static int set_user_rq_size(struct mlx5_ib_dev *dev,
+ 
+ 	rwq->wqe_count = ucmd->rq_wqe_count;
+ 	rwq->wqe_shift = ucmd->rq_wqe_shift;
+-	rwq->buf_size = (rwq->wqe_count << rwq->wqe_shift);
++	if (check_shl_overflow(rwq->wqe_count, rwq->wqe_shift, &rwq->buf_size))
++		return -EINVAL;
++
+ 	rwq->log_rq_stride = rwq->wqe_shift;
+ 	rwq->log_rq_size = ilog2(rwq->wqe_count);
+ 	return 0;
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 98d470d1f3fc..83311dd07019 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -276,6 +276,7 @@ static inline enum comp_state check_ack(struct rxe_qp *qp,
+ 	case IB_OPCODE_RC_RDMA_READ_RESPONSE_MIDDLE:
+ 		if (wqe->wr.opcode != IB_WR_RDMA_READ &&
+ 		    wqe->wr.opcode != IB_WR_RDMA_READ_WITH_INV) {
++			wqe->status = IB_WC_FATAL_ERR;
+ 			return COMPST_ERROR;
+ 		}
+ 		reset_retry_counters(qp);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 3081c629a7f7..8a9633e97bec 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -1833,8 +1833,7 @@ static bool srpt_close_ch(struct srpt_rdma_ch *ch)
+ 	int ret;
+ 
+ 	if (!srpt_set_ch_state(ch, CH_DRAINING)) {
+-		pr_debug("%s-%d: already closed\n", ch->sess_name,
+-			 ch->qp->qp_num);
++		pr_debug("%s: already closed\n", ch->sess_name);
+ 		return false;
+ 	}
+ 
+@@ -1940,8 +1939,8 @@ static void __srpt_close_all_ch(struct srpt_port *sport)
+ 	list_for_each_entry(nexus, &sport->nexus_list, entry) {
+ 		list_for_each_entry(ch, &nexus->ch_list, list) {
+ 			if (srpt_disconnect_ch(ch) >= 0)
+-				pr_info("Closing channel %s-%d because target %s_%d has been disabled\n",
+-					ch->sess_name, ch->qp->qp_num,
++				pr_info("Closing channel %s because target %s_%d has been disabled\n",
++					ch->sess_name,
+ 					sport->sdev->device->name, sport->port);
+ 			srpt_close_ch(ch);
+ 		}
+@@ -2087,7 +2086,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		struct rdma_conn_param rdma_cm;
+ 		struct ib_cm_rep_param ib_cm;
+ 	} *rep_param = NULL;
+-	struct srpt_rdma_ch *ch;
++	struct srpt_rdma_ch *ch = NULL;
+ 	char i_port_id[36];
+ 	u32 it_iu_len;
+ 	int i, ret;
+@@ -2234,13 +2233,15 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 						TARGET_PROT_NORMAL,
+ 						i_port_id + 2, ch, NULL);
+ 	if (IS_ERR_OR_NULL(ch->sess)) {
++		WARN_ON_ONCE(ch->sess == NULL);
+ 		ret = PTR_ERR(ch->sess);
++		ch->sess = NULL;
+ 		pr_info("Rejected login for initiator %s: ret = %d.\n",
+ 			ch->sess_name, ret);
+ 		rej->reason = cpu_to_be32(ret == -ENOMEM ?
+ 				SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES :
+ 				SRP_LOGIN_REJ_CHANNEL_LIMIT_REACHED);
+-		goto reject;
++		goto destroy_ib;
+ 	}
+ 
+ 	mutex_lock(&sport->mutex);
+@@ -2279,7 +2280,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		rej->reason = cpu_to_be32(SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
+ 		pr_err("rejected SRP_LOGIN_REQ because enabling RTR failed (error code = %d)\n",
+ 		       ret);
+-		goto destroy_ib;
++		goto reject;
+ 	}
+ 
+ 	pr_debug("Establish connection sess=%p name=%s ch=%p\n", ch->sess,
+@@ -2358,8 +2359,11 @@ free_ring:
+ 	srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
+ 			     ch->sport->sdev, ch->rq_size,
+ 			     ch->max_rsp_size, DMA_TO_DEVICE);
++
+ free_ch:
+-	if (ib_cm_id)
++	if (rdma_cm_id)
++		rdma_cm_id->context = NULL;
++	else
+ 		ib_cm_id->context = NULL;
+ 	kfree(ch);
+ 	ch = NULL;
+@@ -2379,6 +2383,15 @@ reject:
+ 		ib_send_cm_rej(ib_cm_id, IB_CM_REJ_CONSUMER_DEFINED, NULL, 0,
+ 			       rej, sizeof(*rej));
+ 
++	if (ch && ch->sess) {
++		srpt_close_ch(ch);
++		/*
++		 * Tell the caller not to free cm_id since
++		 * srpt_release_channel_work() will do that.
++		 */
++		ret = 0;
++	}
++
+ out:
+ 	kfree(rep_param);
+ 	kfree(rsp);
+@@ -2969,7 +2982,8 @@ static void srpt_add_one(struct ib_device *device)
+ 
+ 	pr_debug("device = %p\n", device);
+ 
+-	sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
++	sdev = kzalloc(struct_size(sdev, port, device->phys_port_cnt),
++		       GFP_KERNEL);
+ 	if (!sdev)
+ 		goto err;
+ 
+@@ -3023,8 +3037,6 @@ static void srpt_add_one(struct ib_device *device)
+ 			      srpt_event_handler);
+ 	ib_register_event_handler(&sdev->event_handler);
+ 
+-	WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port));
+-
+ 	for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
+ 		sport = &sdev->port[i - 1];
+ 		INIT_LIST_HEAD(&sport->nexus_list);
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
+index 2361483476a0..444dfd7281b5 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
+@@ -396,9 +396,9 @@ struct srpt_port {
+  * @sdev_mutex:	   Serializes use_srq changes.
+  * @use_srq:       Whether or not to use SRQ.
+  * @ioctx_ring:    Per-HCA SRQ.
+- * @port:          Information about the ports owned by this HCA.
+  * @event_handler: Per-HCA asynchronous IB event handler.
+  * @list:          Node in srpt_dev_list.
++ * @port:          Information about the ports owned by this HCA.
+  */
+ struct srpt_device {
+ 	struct ib_device	*device;
+@@ -410,9 +410,9 @@ struct srpt_device {
+ 	struct mutex		sdev_mutex;
+ 	bool			use_srq;
+ 	struct srpt_recv_ioctx	**ioctx_ring;
+-	struct srpt_port	port[2];
+ 	struct ib_event_handler	event_handler;
+ 	struct list_head	list;
++	struct srpt_port        port[];
+ };
+ 
+ #endif				/* IB_SRPT_H */
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 75456b5aa825..d9c748b6f9e4 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -1339,8 +1339,8 @@ void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 	qi_submit_sync(&desc, iommu);
+ }
+ 
+-void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			u64 addr, unsigned mask)
++void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask)
+ {
+ 	struct qi_desc desc;
+ 
+@@ -1355,7 +1355,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+ 		qdep = 0;
+ 
+ 	desc.low = QI_DEV_IOTLB_SID(sid) | QI_DEV_IOTLB_QDEP(qdep) |
+-		   QI_DIOTLB_TYPE;
++		   QI_DIOTLB_TYPE | QI_DEV_IOTLB_PFSID(pfsid);
+ 
+ 	qi_submit_sync(&desc, iommu);
+ }
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 115ff26e9ced..07dc938199f9 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -421,6 +421,7 @@ struct device_domain_info {
+ 	struct list_head global; /* link to global list */
+ 	u8 bus;			/* PCI bus number */
+ 	u8 devfn;		/* PCI devfn number */
++	u16 pfsid;		/* SRIOV physical function source ID */
+ 	u8 pasid_supported:3;
+ 	u8 pasid_enabled:1;
+ 	u8 pri_supported:1;
+@@ -1501,6 +1502,20 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
+ 		return;
+ 
+ 	pdev = to_pci_dev(info->dev);
++	/* For IOMMU that supports device IOTLB throttling (DIT), we assign
++	 * PFSID to the invalidation desc of a VF such that IOMMU HW can gauge
++	 * queue depth at PF level. If DIT is not set, PFSID will be treated as
++	 * reserved, which should be set to 0.
++	 */
++	if (!ecap_dit(info->iommu->ecap))
++		info->pfsid = 0;
++	else {
++		struct pci_dev *pf_pdev;
++
++		/* pdev will be returned if device is not a vf */
++		pf_pdev = pci_physfn(pdev);
++		info->pfsid = PCI_DEVID(pf_pdev->bus->number, pf_pdev->devfn);
++	}
+ 
+ #ifdef CONFIG_INTEL_IOMMU_SVM
+ 	/* The PCIe spec, in its wisdom, declares that the behaviour of
+@@ -1566,7 +1581,8 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
+ 
+ 		sid = info->bus << 8 | info->devfn;
+ 		qdep = info->ats_qdep;
+-		qi_flush_dev_iotlb(info->iommu, sid, qdep, addr, mask);
++		qi_flush_dev_iotlb(info->iommu, sid, info->pfsid,
++				qdep, addr, mask);
+ 	}
+ 	spin_unlock_irqrestore(&device_domain_lock, flags);
+ }
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index 40ae6e87cb88..09b47260c74b 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1081,12 +1081,19 @@ static struct platform_driver ipmmu_driver = {
+ 
+ static int __init ipmmu_init(void)
+ {
++	struct device_node *np;
+ 	static bool setup_done;
+ 	int ret;
+ 
+ 	if (setup_done)
+ 		return 0;
+ 
++	np = of_find_matching_node(NULL, ipmmu_of_ids);
++	if (!np)
++		return 0;
++
++	of_node_put(np);
++
+ 	ret = platform_driver_register(&ipmmu_driver);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/mailbox/mailbox-xgene-slimpro.c b/drivers/mailbox/mailbox-xgene-slimpro.c
+index a7040163dd43..b8b2b3533f46 100644
+--- a/drivers/mailbox/mailbox-xgene-slimpro.c
++++ b/drivers/mailbox/mailbox-xgene-slimpro.c
+@@ -195,9 +195,9 @@ static int slimpro_mbox_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, ctx);
+ 
+ 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	mb_base = devm_ioremap(&pdev->dev, regs->start, resource_size(regs));
+-	if (!mb_base)
+-		return -ENOMEM;
++	mb_base = devm_ioremap_resource(&pdev->dev, regs);
++	if (IS_ERR(mb_base))
++		return PTR_ERR(mb_base);
+ 
+ 	/* Setup mailbox links */
+ 	for (i = 0; i < MBOX_CNT; i++) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index ad45ebe1a74b..6c33923c2c35 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -645,8 +645,10 @@ static int bch_writeback_thread(void *arg)
+ 			 * data on cache. BCACHE_DEV_DETACHING flag is set in
+ 			 * bch_cached_dev_detach().
+ 			 */
+-			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags))
++			if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)) {
++				up_write(&dc->writeback_lock);
+ 				break;
++			}
+ 		}
+ 
+ 		up_write(&dc->writeback_lock);
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 0d7212410e21..69dddeab124c 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -363,7 +363,7 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
+ 	disk_super->version = cpu_to_le32(cmd->version);
+ 	memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
+ 	memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
+-	disk_super->policy_hint_size = 0;
++	disk_super->policy_hint_size = cpu_to_le32(0);
+ 
+ 	__copy_sm_root(cmd, disk_super);
+ 
+@@ -701,6 +701,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
+ 	disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
+ 	disk_super->policy_version[1] = cpu_to_le32(cmd->policy_version[1]);
+ 	disk_super->policy_version[2] = cpu_to_le32(cmd->policy_version[2]);
++	disk_super->policy_hint_size = cpu_to_le32(cmd->policy_hint_size);
+ 
+ 	disk_super->read_hits = cpu_to_le32(cmd->stats.read_hits);
+ 	disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
+@@ -1322,6 +1323,7 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1332,8 +1334,10 @@ static int __load_mapping_v1(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = flags & M_DIRTY;
+ 
+-		r = fn(context, oblock, to_cblock(cb), flags & M_DIRTY,
++		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+ 			DMERR("policy couldn't load cache block %llu",
+@@ -1361,7 +1365,7 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 
+ 	dm_oblock_t oblock;
+ 	unsigned flags;
+-	bool dirty;
++	bool dirty = true;
+ 
+ 	dm_array_cursor_get_value(mapping_cursor, (void **) &mapping_value_le);
+ 	memcpy(&mapping, mapping_value_le, sizeof(mapping));
+@@ -1372,8 +1376,9 @@ static int __load_mapping_v2(struct dm_cache_metadata *cmd,
+ 			dm_array_cursor_get_value(hint_cursor, (void **) &hint_value_le);
+ 			memcpy(&hint, hint_value_le, sizeof(hint));
+ 		}
++		if (cmd->clean_when_opened)
++			dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 
+-		dirty = dm_bitset_cursor_get_value(dirty_cursor);
+ 		r = fn(context, oblock, to_cblock(cb), dirty,
+ 		       le32_to_cpu(hint), hints_valid);
+ 		if (r) {
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index b61b069c33af..3fdec1147221 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3069,11 +3069,11 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+ 	 */
+ 	limits->max_segment_size = PAGE_SIZE;
+ 
+-	if (cc->sector_size != (1 << SECTOR_SHIFT)) {
+-		limits->logical_block_size = cc->sector_size;
+-		limits->physical_block_size = cc->sector_size;
+-		blk_limits_io_min(limits, cc->sector_size);
+-	}
++	limits->logical_block_size =
++		max_t(unsigned short, limits->logical_block_size, cc->sector_size);
++	limits->physical_block_size =
++		max_t(unsigned, limits->physical_block_size, cc->sector_size);
++	limits->io_min = max_t(unsigned, limits->io_min, cc->sector_size);
+ }
+ 
+ static struct target_type crypt_target = {
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 86438b2f10dd..0a8a4c2aa3ea 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -178,7 +178,7 @@ struct dm_integrity_c {
+ 	__u8 sectors_per_block;
+ 
+ 	unsigned char mode;
+-	bool suspending;
++	int suspending;
+ 
+ 	int failed;
+ 
+@@ -2210,7 +2210,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 
+ 	del_timer_sync(&ic->autocommit_timer);
+ 
+-	ic->suspending = true;
++	WRITE_ONCE(ic->suspending, 1);
+ 
+ 	queue_work(ic->commit_wq, &ic->commit_work);
+ 	drain_workqueue(ic->commit_wq);
+@@ -2220,7 +2220,7 @@ static void dm_integrity_postsuspend(struct dm_target *ti)
+ 		dm_integrity_flush_buffers(ic);
+ 	}
+ 
+-	ic->suspending = false;
++	WRITE_ONCE(ic->suspending, 0);
+ 
+ 	BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
+ 
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index b900723bbd0f..1087f6a1ac79 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2520,6 +2520,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
+ 	case PM_WRITE:
+ 		if (old_mode != new_mode)
+ 			notify_of_pool_mode_change(pool, "write");
++		if (old_mode == PM_OUT_OF_DATA_SPACE)
++			cancel_delayed_work_sync(&pool->no_space_timeout);
+ 		pool->out_of_data_space = false;
+ 		pool->pf.error_if_no_space = pt->requested_pf.error_if_no_space;
+ 		dm_pool_metadata_read_write(pool->pmd);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index 87107c995cb5..7669069005e9 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -457,7 +457,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc)
+ 		COMPLETION_INITIALIZER_ONSTACK(endio.c),
+ 		ATOMIC_INIT(1),
+ 	};
+-	unsigned bitmap_bits = wc->dirty_bitmap_size * BITS_PER_LONG;
++	unsigned bitmap_bits = wc->dirty_bitmap_size * 8;
+ 	unsigned i = 0;
+ 
+ 	while (1) {
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index b162c2fe62c3..76e6bed5a1da 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -872,7 +872,7 @@ static int tvp5150_fill_fmt(struct v4l2_subdev *sd,
+ 	f = &format->format;
+ 
+ 	f->width = decoder->rect.width;
+-	f->height = decoder->rect.height;
++	f->height = decoder->rect.height / 2;
+ 
+ 	f->code = MEDIA_BUS_FMT_UYVY8_2X8;
+ 	f->field = V4L2_FIELD_ALTERNATE;
+diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c
+index c37ccbfd52f2..96c07fa1802a 100644
+--- a/drivers/mfd/hi655x-pmic.c
++++ b/drivers/mfd/hi655x-pmic.c
+@@ -49,7 +49,7 @@ static struct regmap_config hi655x_regmap_config = {
+ 	.reg_bits = 32,
+ 	.reg_stride = HI655X_STRIDE,
+ 	.val_bits = 8,
+-	.max_register = HI655X_BUS_ADDR(0xFFF),
++	.max_register = HI655X_BUS_ADDR(0x400) - HI655X_STRIDE,
+ };
+ 
+ static struct resource pwrkey_resources[] = {
+diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
+index c1ba0d42cbc8..e0f29b8a872d 100644
+--- a/drivers/misc/cxl/main.c
++++ b/drivers/misc/cxl/main.c
+@@ -287,7 +287,7 @@ int cxl_adapter_context_get(struct cxl *adapter)
+ 	int rc;
+ 
+ 	rc = atomic_inc_unless_negative(&adapter->contexts_num);
+-	return rc >= 0 ? 0 : -EBUSY;
++	return rc ? 0 : -EBUSY;
+ }
+ 
+ void cxl_adapter_context_put(struct cxl *adapter)
+diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c
+index 88876ae8f330..a963b0a4a3c5 100644
+--- a/drivers/misc/ocxl/link.c
++++ b/drivers/misc/ocxl/link.c
+@@ -136,7 +136,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	int rc;
+ 
+ 	/*
+-	 * We need to release a reference on the mm whenever exiting this
++	 * We must release a reference on mm_users whenever exiting this
+ 	 * function (taken in the memory fault interrupt handler)
+ 	 */
+ 	rc = copro_handle_mm_fault(fault->pe_data.mm, fault->dar, fault->dsisr,
+@@ -172,7 +172,7 @@ static void xsl_fault_handler_bh(struct work_struct *fault_work)
+ 	}
+ 	r = RESTART;
+ ack:
+-	mmdrop(fault->pe_data.mm);
++	mmput(fault->pe_data.mm);
+ 	ack_irq(spa, r);
+ }
+ 
+@@ -184,6 +184,7 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	struct pe_data *pe_data;
+ 	struct ocxl_process_element *pe;
+ 	int lpid, pid, tid;
++	bool schedule = false;
+ 
+ 	read_irq(spa, &dsisr, &dar, &pe_handle);
+ 	trace_ocxl_fault(spa->spa_mem, pe_handle, dsisr, dar, -1);
+@@ -226,14 +227,19 @@ static irqreturn_t xsl_fault_handler(int irq, void *data)
+ 	}
+ 	WARN_ON(pe_data->mm->context.id != pid);
+ 
+-	spa->xsl_fault.pe = pe_handle;
+-	spa->xsl_fault.dar = dar;
+-	spa->xsl_fault.dsisr = dsisr;
+-	spa->xsl_fault.pe_data = *pe_data;
+-	mmgrab(pe_data->mm); /* mm count is released by bottom half */
+-
++	if (mmget_not_zero(pe_data->mm)) {
++			spa->xsl_fault.pe = pe_handle;
++			spa->xsl_fault.dar = dar;
++			spa->xsl_fault.dsisr = dsisr;
++			spa->xsl_fault.pe_data = *pe_data;
++			schedule = true;
++			/* mm_users count released by bottom half */
++	}
+ 	rcu_read_unlock();
+-	schedule_work(&spa->xsl_fault.fault_work);
++	if (schedule)
++		schedule_work(&spa->xsl_fault.fault_work);
++	else
++		ack_irq(spa, ADDRESS_ERROR);
+ 	return IRQ_HANDLED;
+ }
+ 
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index 56c6f79a5c5a..5f8b583c6e41 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -341,7 +341,13 @@ static bool vmballoon_send_start(struct vmballoon *b, unsigned long req_caps)
+ 		success = false;
+ 	}
+ 
+-	if (b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS)
++	/*
++	 * 2MB pages are only supported with batching. If batching is for some
++	 * reason disabled, do not use 2MB pages, since otherwise the legacy
++	 * mechanism is used with 2MB pages, causing a failure.
++	 */
++	if ((b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) &&
++	    (b->capabilities & VMW_BALLOON_BATCHED_CMDS))
+ 		b->supported_page_sizes = 2;
+ 	else
+ 		b->supported_page_sizes = 1;
+@@ -450,7 +456,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pfn32 = (u32)pfn;
+ 	if (pfn32 != pfn)
+-		return -1;
++		return -EINVAL;
+ 
+ 	STATS_INC(b->stats.lock[false]);
+ 
+@@ -460,7 +466,7 @@ static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn,
+ 
+ 	pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status);
+ 	STATS_INC(b->stats.lock_fail[false]);
+-	return 1;
++	return -EIO;
+ }
+ 
+ static int vmballoon_send_batched_lock(struct vmballoon *b,
+@@ -597,11 +603,12 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 
+ 	locked = vmballoon_send_lock_page(b, page_to_pfn(page), &hv_status,
+ 								target);
+-	if (locked > 0) {
++	if (locked) {
+ 		STATS_INC(b->stats.refused_alloc[false]);
+ 
+-		if (hv_status == VMW_BALLOON_ERROR_RESET ||
+-				hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED) {
++		if (locked == -EIO &&
++		    (hv_status == VMW_BALLOON_ERROR_RESET ||
++		     hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED)) {
+ 			vmballoon_free_page(page, false);
+ 			return -EIO;
+ 		}
+@@ -617,7 +624,7 @@ static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages,
+ 		} else {
+ 			vmballoon_free_page(page, false);
+ 		}
+-		return -EIO;
++		return locked;
+ 	}
+ 
+ 	/* track allocated page */
+@@ -1029,29 +1036,30 @@ static void vmballoon_vmci_cleanup(struct vmballoon *b)
+  */
+ static int vmballoon_vmci_init(struct vmballoon *b)
+ {
+-	int error = 0;
++	unsigned long error, dummy;
+ 
+-	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) != 0) {
+-		error = vmci_doorbell_create(&b->vmci_doorbell,
+-				VMCI_FLAG_DELAYED_CB,
+-				VMCI_PRIVILEGE_FLAG_RESTRICTED,
+-				vmballoon_doorbell, b);
+-
+-		if (error == VMCI_SUCCESS) {
+-			VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET,
+-					b->vmci_doorbell.context,
+-					b->vmci_doorbell.resource, error);
+-			STATS_INC(b->stats.doorbell_set);
+-		}
+-	}
++	if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) == 0)
++		return 0;
+ 
+-	if (error != 0) {
+-		vmballoon_vmci_cleanup(b);
++	error = vmci_doorbell_create(&b->vmci_doorbell, VMCI_FLAG_DELAYED_CB,
++				     VMCI_PRIVILEGE_FLAG_RESTRICTED,
++				     vmballoon_doorbell, b);
+ 
+-		return -EIO;
+-	}
++	if (error != VMCI_SUCCESS)
++		goto fail;
++
++	error = VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, b->vmci_doorbell.context,
++				   b->vmci_doorbell.resource, dummy);
++
++	STATS_INC(b->stats.doorbell_set);
++
++	if (error != VMW_BALLOON_SUCCESS)
++		goto fail;
+ 
+ 	return 0;
++fail:
++	vmballoon_vmci_cleanup(b);
++	return -EIO;
+ }
+ 
+ /*
+@@ -1289,7 +1297,14 @@ static int __init vmballoon_init(void)
+ 
+ 	return 0;
+ }
+-module_init(vmballoon_init);
++
++/*
++ * Using late_initcall() instead of module_init() allows the balloon to use the
++ * VMCI doorbell even when the balloon is built into the kernel. Otherwise the
++ * VMCI is probed only after the balloon is initialized. If the balloon is used
++ * as a module, late_initcall() is equivalent to module_init().
++ */
++late_initcall(vmballoon_init);
+ 
+ static void __exit vmballoon_exit(void)
+ {
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 648eb6743ed5..6edffeed9953 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -238,10 +238,6 @@ static void mmc_mq_exit_request(struct blk_mq_tag_set *set, struct request *req,
+ 	mmc_exit_request(mq->queue, req);
+ }
+ 
+-/*
+- * We use BLK_MQ_F_BLOCKING and have only 1 hardware queue, which means requests
+- * will not be dispatched in parallel.
+- */
+ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 				    const struct blk_mq_queue_data *bd)
+ {
+@@ -264,7 +260,7 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 
+ 	spin_lock_irq(q->queue_lock);
+ 
+-	if (mq->recovery_needed) {
++	if (mq->recovery_needed || mq->busy) {
+ 		spin_unlock_irq(q->queue_lock);
+ 		return BLK_STS_RESOURCE;
+ 	}
+@@ -291,6 +287,9 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		break;
+ 	}
+ 
++	/* Parallel dispatch of requests is not supported at the moment */
++	mq->busy = true;
++
+ 	mq->in_flight[issue_type] += 1;
+ 	get_card = (mmc_tot_in_flight(mq) == 1);
+ 	cqe_retune_ok = (mmc_cqe_qcnt(mq) == 1);
+@@ -333,9 +332,12 @@ static blk_status_t mmc_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		mq->in_flight[issue_type] -= 1;
+ 		if (mmc_tot_in_flight(mq) == 0)
+ 			put_card = true;
++		mq->busy = false;
+ 		spin_unlock_irq(q->queue_lock);
+ 		if (put_card)
+ 			mmc_put_card(card, &mq->ctx);
++	} else {
++		WRITE_ONCE(mq->busy, false);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
+index 17e59d50b496..9bf3c9245075 100644
+--- a/drivers/mmc/core/queue.h
++++ b/drivers/mmc/core/queue.h
+@@ -81,6 +81,7 @@ struct mmc_queue {
+ 	unsigned int		cqe_busy;
+ #define MMC_CQE_DCMD_BUSY	BIT(0)
+ #define MMC_CQE_QUEUE_FULL	BIT(1)
++	bool			busy;
+ 	bool			use_cqe;
+ 	bool			recovery_needed;
+ 	bool			in_recovery;
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index d032bd63444d..4a7991151918 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -45,14 +45,16 @@
+ /* DM_CM_RST */
+ #define RST_DTRANRST1		BIT(9)
+ #define RST_DTRANRST0		BIT(8)
+-#define RST_RESERVED_BITS	GENMASK_ULL(32, 0)
++#define RST_RESERVED_BITS	GENMASK_ULL(31, 0)
+ 
+ /* DM_CM_INFO1 and DM_CM_INFO1_MASK */
+ #define INFO1_CLEAR		0
++#define INFO1_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO1_DTRANEND1		BIT(17)
+ #define INFO1_DTRANEND0		BIT(16)
+ 
+ /* DM_CM_INFO2 and DM_CM_INFO2_MASK */
++#define INFO2_MASK_CLEAR	GENMASK_ULL(31, 0)
+ #define INFO2_DTRANERR1		BIT(17)
+ #define INFO2_DTRANERR0		BIT(16)
+ 
+@@ -236,6 +238,12 @@ renesas_sdhi_internal_dmac_request_dma(struct tmio_mmc_host *host,
+ {
+ 	struct renesas_sdhi *priv = host_to_priv(host);
+ 
++	/* Disable DMAC interrupts, we don't use them */
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO1_MASK,
++					    INFO1_MASK_CLEAR);
++	renesas_sdhi_internal_dmac_dm_write(host, DM_CM_INFO2_MASK,
++					    INFO2_MASK_CLEAR);
++
+ 	/* Each value is set to non-zero to assume "enabling" each DMA */
+ 	host->chan_rx = host->chan_tx = (void *)0xdeadbeaf;
+ 
+diff --git a/drivers/net/wireless/marvell/libertas/dev.h b/drivers/net/wireless/marvell/libertas/dev.h
+index dd1ee1f0af48..469134930026 100644
+--- a/drivers/net/wireless/marvell/libertas/dev.h
++++ b/drivers/net/wireless/marvell/libertas/dev.h
+@@ -104,6 +104,7 @@ struct lbs_private {
+ 	u8 fw_ready;
+ 	u8 surpriseremoved;
+ 	u8 setup_fw_on_resume;
++	u8 power_up_on_resume;
+ 	int (*hw_host_to_card) (struct lbs_private *priv, u8 type, u8 *payload, u16 nb);
+ 	void (*reset_card) (struct lbs_private *priv);
+ 	int (*power_save) (struct lbs_private *priv);
+diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/wireless/marvell/libertas/if_sdio.c
+index 2300e796c6ab..43743c26c071 100644
+--- a/drivers/net/wireless/marvell/libertas/if_sdio.c
++++ b/drivers/net/wireless/marvell/libertas/if_sdio.c
+@@ -1290,15 +1290,23 @@ static void if_sdio_remove(struct sdio_func *func)
+ static int if_sdio_suspend(struct device *dev)
+ {
+ 	struct sdio_func *func = dev_to_sdio_func(dev);
+-	int ret;
+ 	struct if_sdio_card *card = sdio_get_drvdata(func);
++	struct lbs_private *priv = card->priv;
++	int ret;
+ 
+ 	mmc_pm_flag_t flags = sdio_get_host_pm_caps(func);
++	priv->power_up_on_resume = false;
+ 
+ 	/* If we're powered off anyway, just let the mmc layer remove the
+ 	 * card. */
+-	if (!lbs_iface_active(card->priv))
+-		return -ENOSYS;
++	if (!lbs_iface_active(priv)) {
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
++	}
+ 
+ 	dev_info(dev, "%s: suspend: PM flags = 0x%x\n",
+ 		 sdio_func_id(func), flags);
+@@ -1306,9 +1314,14 @@ static int if_sdio_suspend(struct device *dev)
+ 	/* If we aren't being asked to wake on anything, we should bail out
+ 	 * and let the SD stack power down the card.
+ 	 */
+-	if (card->priv->wol_criteria == EHS_REMOVE_WAKEUP) {
++	if (priv->wol_criteria == EHS_REMOVE_WAKEUP) {
+ 		dev_info(dev, "Suspend without wake params -- powering down card\n");
+-		return -ENOSYS;
++		if (priv->fw_ready) {
++			priv->power_up_on_resume = true;
++			if_sdio_power_off(card);
++		}
++
++		return 0;
+ 	}
+ 
+ 	if (!(flags & MMC_PM_KEEP_POWER)) {
+@@ -1321,7 +1334,7 @@ static int if_sdio_suspend(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = lbs_suspend(card->priv);
++	ret = lbs_suspend(priv);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1336,6 +1349,11 @@ static int if_sdio_resume(struct device *dev)
+ 
+ 	dev_info(dev, "%s: resume: we're back\n", sdio_func_id(func));
+ 
++	if (card->priv->power_up_on_resume) {
++		if_sdio_power_on(card);
++		wait_event(card->pwron_waitq, card->priv->fw_ready);
++	}
++
+ 	ret = lbs_resume(card->priv);
+ 
+ 	return ret;
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 27902a8799b1..8aae6dcc839f 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -812,9 +812,9 @@ u32 nd_cmd_out_size(struct nvdimm *nvdimm, int cmd,
+ 		 * overshoots the remainder by 4 bytes, assume it was
+ 		 * including 'status'.
+ 		 */
+-		if (out_field[1] - 8 == remainder)
++		if (out_field[1] - 4 == remainder)
+ 			return remainder;
+-		return out_field[1] - 4;
++		return out_field[1] - 8;
+ 	} else if (cmd == ND_CMD_CALL) {
+ 		struct nd_cmd_pkg *pkg = (struct nd_cmd_pkg *) in_field;
+ 
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index 8d348b22ba45..863cabc35215 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -536,6 +536,37 @@ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region)
+ 	return info.available;
+ }
+ 
++/**
++ * nd_pmem_max_contiguous_dpa - For the given dimm+region, return the max
++ *			   contiguous unallocated dpa range.
++ * @nd_region: constrain available space check to this reference region
++ * @nd_mapping: container of dpa-resource-root + labels
++ */
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping)
++{
++	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++	struct nvdimm_bus *nvdimm_bus;
++	resource_size_t max = 0;
++	struct resource *res;
++
++	/* if a dimm is disabled the available capacity is zero */
++	if (!ndd)
++		return 0;
++
++	nvdimm_bus = walk_to_nvdimm_bus(ndd->dev);
++	if (__reserve_free_pmem(&nd_region->dev, nd_mapping->nvdimm))
++		return 0;
++	for_each_dpa_resource(ndd, res) {
++		if (strcmp(res->name, "pmem-reserve") != 0)
++			continue;
++		if (resource_size(res) > max)
++			max = resource_size(res);
++	}
++	release_free_pmem(nvdimm_bus, nd_mapping);
++	return max;
++}
++
+ /**
+  * nd_pmem_available_dpa - for the given dimm+region account unallocated dpa
+  * @nd_mapping: container of dpa-resource-root + labels
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 28afdd668905..4525d8ef6022 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -799,7 +799,7 @@ static int merge_dpa(struct nd_region *nd_region,
+ 	return 0;
+ }
+ 
+-static int __reserve_free_pmem(struct device *dev, void *data)
++int __reserve_free_pmem(struct device *dev, void *data)
+ {
+ 	struct nvdimm *nvdimm = data;
+ 	struct nd_region *nd_region;
+@@ -836,7 +836,7 @@ static int __reserve_free_pmem(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
+-static void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
+ 		struct nd_mapping *nd_mapping)
+ {
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+@@ -1032,7 +1032,7 @@ static ssize_t __size_store(struct device *dev, unsigned long long val)
+ 
+ 		allocated += nvdimm_allocated_dpa(ndd, &label_id);
+ 	}
+-	available = nd_region_available_dpa(nd_region);
++	available = nd_region_allocatable_dpa(nd_region);
+ 
+ 	if (val > available + allocated)
+ 		return -ENOSPC;
+diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h
+index 79274ead54fb..ac68072fb8cd 100644
+--- a/drivers/nvdimm/nd-core.h
++++ b/drivers/nvdimm/nd-core.h
+@@ -100,6 +100,14 @@ struct nd_region;
+ struct nvdimm_drvdata;
+ struct nd_mapping;
+ void nd_mapping_free_labels(struct nd_mapping *nd_mapping);
++
++int __reserve_free_pmem(struct device *dev, void *data);
++void release_free_pmem(struct nvdimm_bus *nvdimm_bus,
++		       struct nd_mapping *nd_mapping);
++
++resource_size_t nd_pmem_max_contiguous_dpa(struct nd_region *nd_region,
++					   struct nd_mapping *nd_mapping);
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region);
+ resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, resource_size_t *overlap);
+ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index ec3543b83330..c30d5af02cc2 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -389,6 +389,30 @@ resource_size_t nd_region_available_dpa(struct nd_region *nd_region)
+ 	return available;
+ }
+ 
++resource_size_t nd_region_allocatable_dpa(struct nd_region *nd_region)
++{
++	resource_size_t available = 0;
++	int i;
++
++	if (is_memory(&nd_region->dev))
++		available = PHYS_ADDR_MAX;
++
++	WARN_ON(!is_nvdimm_bus_locked(&nd_region->dev));
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		if (is_memory(&nd_region->dev))
++			available = min(available,
++					nd_pmem_max_contiguous_dpa(nd_region,
++								   nd_mapping));
++		else if (is_nd_blk(&nd_region->dev))
++			available += nd_blk_available_dpa(nd_region);
++	}
++	if (is_memory(&nd_region->dev))
++		return available * nd_region->ndr_mappings;
++	return available;
++}
++
+ static ssize_t available_size_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+diff --git a/drivers/pwm/pwm-omap-dmtimer.c b/drivers/pwm/pwm-omap-dmtimer.c
+index 665da3c8fbce..f45798679e3c 100644
+--- a/drivers/pwm/pwm-omap-dmtimer.c
++++ b/drivers/pwm/pwm-omap-dmtimer.c
+@@ -264,8 +264,9 @@ static int pwm_omap_dmtimer_probe(struct platform_device *pdev)
+ 
+ 	timer_pdata = dev_get_platdata(&timer_pdev->dev);
+ 	if (!timer_pdata) {
+-		dev_err(&pdev->dev, "dmtimer pdata structure NULL\n");
+-		ret = -EINVAL;
++		dev_dbg(&pdev->dev,
++			 "dmtimer pdata structure NULL, deferring probe\n");
++		ret = -EPROBE_DEFER;
+ 		goto put;
+ 	}
+ 
+diff --git a/drivers/pwm/pwm-tiehrpwm.c b/drivers/pwm/pwm-tiehrpwm.c
+index 4c22cb395040..f7b8a86fa5c5 100644
+--- a/drivers/pwm/pwm-tiehrpwm.c
++++ b/drivers/pwm/pwm-tiehrpwm.c
+@@ -33,10 +33,6 @@
+ #define TBCTL			0x00
+ #define TBPRD			0x0A
+ 
+-#define TBCTL_RUN_MASK		(BIT(15) | BIT(14))
+-#define TBCTL_STOP_NEXT		0
+-#define TBCTL_STOP_ON_CYCLE	BIT(14)
+-#define TBCTL_FREE_RUN		(BIT(15) | BIT(14))
+ #define TBCTL_PRDLD_MASK	BIT(3)
+ #define TBCTL_PRDLD_SHDW	0
+ #define TBCTL_PRDLD_IMDT	BIT(3)
+@@ -360,7 +356,7 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Channels polarity can be configured from action qualifier module */
+ 	configure_polarity(pc, pwm->hwpwm);
+ 
+-	/* Enable TBCLK before enabling PWM device */
++	/* Enable TBCLK */
+ 	ret = clk_enable(pc->tbclk);
+ 	if (ret) {
+ 		dev_err(chip->dev, "Failed to enable TBCLK for %s: %d\n",
+@@ -368,9 +364,6 @@ static int ehrpwm_pwm_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		return ret;
+ 	}
+ 
+-	/* Enable time counter for free_run */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_FREE_RUN);
+-
+ 	return 0;
+ }
+ 
+@@ -388,6 +381,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 		aqcsfrc_mask = AQCSFRC_CSFA_MASK;
+ 	}
+ 
++	/* Update shadow register first before modifying active register */
++	ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val);
+ 	/*
+ 	 * Changes to immediate action on Action Qualifier. This puts
+ 	 * Action Qualifier control on PWM output from next TBCLK
+@@ -400,9 +395,6 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm)
+ 	/* Disabling TBCLK on PWM disable */
+ 	clk_disable(pc->tbclk);
+ 
+-	/* Stop Time base counter */
+-	ehrpwm_modify(pc->mmio_base, TBCTL, TBCTL_RUN_MASK, TBCTL_STOP_NEXT);
+-
+ 	/* Disable clock on PWM disable */
+ 	pm_runtime_put_sync(chip->dev);
+ }
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index 39086398833e..6a7b804c3074 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -861,13 +861,6 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 			goto err;
+ 	}
+ 
+-	if (rtc->is_pmic_controller) {
+-		if (!pm_power_off) {
+-			omap_rtc_power_off_rtc = rtc;
+-			pm_power_off = omap_rtc_power_off;
+-		}
+-	}
+-
+ 	/* Support ext_wakeup pinconf */
+ 	rtc_pinctrl_desc.name = dev_name(&pdev->dev);
+ 
+@@ -880,12 +873,21 @@ static int omap_rtc_probe(struct platform_device *pdev)
+ 
+ 	ret = rtc_register_device(rtc->rtc);
+ 	if (ret)
+-		goto err;
++		goto err_deregister_pinctrl;
+ 
+ 	rtc_nvmem_register(rtc->rtc, &omap_rtc_nvmem_config);
+ 
++	if (rtc->is_pmic_controller) {
++		if (!pm_power_off) {
++			omap_rtc_power_off_rtc = rtc;
++			pm_power_off = omap_rtc_power_off;
++		}
++	}
++
+ 	return 0;
+ 
++err_deregister_pinctrl:
++	pinctrl_unregister(rtc->pctldev);
+ err:
+ 	clk_disable_unprepare(rtc->clk);
+ 	device_init_wakeup(&pdev->dev, false);
+diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
+index f3dad6fcdc35..a568f35522f9 100644
+--- a/drivers/spi/spi-cadence.c
++++ b/drivers/spi/spi-cadence.c
+@@ -319,7 +319,7 @@ static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
+ 		 */
+ 		if (cdns_spi_read(xspi, CDNS_SPI_ISR) &
+ 		    CDNS_SPI_IXR_TXFULL)
+-			usleep_range(10, 20);
++			udelay(10);
+ 
+ 		if (xspi->txbuf)
+ 			cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
+diff --git a/drivers/spi/spi-davinci.c b/drivers/spi/spi-davinci.c
+index 577084bb911b..a02099c90c5c 100644
+--- a/drivers/spi/spi-davinci.c
++++ b/drivers/spi/spi-davinci.c
+@@ -217,7 +217,7 @@ static void davinci_spi_chipselect(struct spi_device *spi, int value)
+ 	pdata = &dspi->pdata;
+ 
+ 	/* program delay transfers if tx_delay is non zero */
+-	if (spicfg->wdelay)
++	if (spicfg && spicfg->wdelay)
+ 		spidat1 |= SPIDAT1_WDEL;
+ 
+ 	/*
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 0630962ce442..f225f7c99a32 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -1029,30 +1029,30 @@ static int dspi_probe(struct platform_device *pdev)
+ 		goto out_master_put;
+ 	}
+ 
++	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
++	if (IS_ERR(dspi->clk)) {
++		ret = PTR_ERR(dspi->clk);
++		dev_err(&pdev->dev, "unable to get clock\n");
++		goto out_master_put;
++	}
++	ret = clk_prepare_enable(dspi->clk);
++	if (ret)
++		goto out_master_put;
++
+ 	dspi_init(dspi);
+ 	dspi->irq = platform_get_irq(pdev, 0);
+ 	if (dspi->irq < 0) {
+ 		dev_err(&pdev->dev, "can't get platform irq\n");
+ 		ret = dspi->irq;
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+ 
+ 	ret = devm_request_irq(&pdev->dev, dspi->irq, dspi_interrupt, 0,
+ 			pdev->name, dspi);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "Unable to attach DSPI interrupt\n");
+-		goto out_master_put;
+-	}
+-
+-	dspi->clk = devm_clk_get(&pdev->dev, "dspi");
+-	if (IS_ERR(dspi->clk)) {
+-		ret = PTR_ERR(dspi->clk);
+-		dev_err(&pdev->dev, "unable to get clock\n");
+-		goto out_master_put;
++		goto out_clk_put;
+ 	}
+-	ret = clk_prepare_enable(dspi->clk);
+-	if (ret)
+-		goto out_master_put;
+ 
+ 	if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ 		ret = dspi_request_dma(dspi, res->start);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 0b2d60d30f69..14f4ea59caff 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1391,6 +1391,10 @@ static const struct pci_device_id pxa2xx_spi_pci_compound_match[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x31c2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c4), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x31c6), LPSS_BXT_SSP },
++	/* ICL-LP */
++	{ PCI_VDEVICE(INTEL, 0x34aa), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34ab), LPSS_CNL_SSP },
++	{ PCI_VDEVICE(INTEL, 0x34fb), LPSS_CNL_SSP },
+ 	/* APL */
+ 	{ PCI_VDEVICE(INTEL, 0x5ac2), LPSS_BXT_SSP },
+ 	{ PCI_VDEVICE(INTEL, 0x5ac4), LPSS_BXT_SSP },
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 9c14a453f73c..80bb56facfb6 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -182,6 +182,7 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	unsigned long page;
++	unsigned long flags = 0;
+ 	int retval = 0;
+ 
+ 	if (uport->type == PORT_UNKNOWN)
+@@ -196,15 +197,18 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
+ 	 * Initialise and allocate the transmit and temporary
+ 	 * buffer.
+ 	 */
+-	if (!state->xmit.buf) {
+-		/* This is protected by the per port mutex */
+-		page = get_zeroed_page(GFP_KERNEL);
+-		if (!page)
+-			return -ENOMEM;
++	page = get_zeroed_page(GFP_KERNEL);
++	if (!page)
++		return -ENOMEM;
+ 
++	uart_port_lock(state, flags);
++	if (!state->xmit.buf) {
+ 		state->xmit.buf = (unsigned char *) page;
+ 		uart_circ_clear(&state->xmit);
++	} else {
++		free_page(page);
+ 	}
++	uart_port_unlock(uport, flags);
+ 
+ 	retval = uport->ops->startup(uport);
+ 	if (retval == 0) {
+@@ -263,6 +267,7 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ {
+ 	struct uart_port *uport = uart_port_check(state);
+ 	struct tty_port *port = &state->port;
++	unsigned long flags = 0;
+ 
+ 	/*
+ 	 * Set the TTY IO error marker
+@@ -295,10 +300,12 @@ static void uart_shutdown(struct tty_struct *tty, struct uart_state *state)
+ 	/*
+ 	 * Free the transmit buffer page.
+ 	 */
++	uart_port_lock(state, flags);
+ 	if (state->xmit.buf) {
+ 		free_page((unsigned long)state->xmit.buf);
+ 		state->xmit.buf = NULL;
+ 	}
++	uart_port_unlock(uport, flags);
+ }
+ 
+ /**
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 609438d2465b..9ae2fb1344de 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1704,12 +1704,12 @@ static int do_register_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-static int do_unregister_framebuffer(struct fb_info *fb_info)
++static int unbind_console(struct fb_info *fb_info)
+ {
+ 	struct fb_event event;
+-	int i, ret = 0;
++	int ret;
++	int i = fb_info->node;
+ 
+-	i = fb_info->node;
+ 	if (i < 0 || i >= FB_MAX || registered_fb[i] != fb_info)
+ 		return -EINVAL;
+ 
+@@ -1724,17 +1724,29 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	unlock_fb_info(fb_info);
+ 	console_unlock();
+ 
++	return ret;
++}
++
++static int __unlink_framebuffer(struct fb_info *fb_info);
++
++static int do_unregister_framebuffer(struct fb_info *fb_info)
++{
++	struct fb_event event;
++	int ret;
++
++	ret = unbind_console(fb_info);
++
+ 	if (ret)
+ 		return -EINVAL;
+ 
+ 	pm_vt_switch_unregister(fb_info->dev);
+ 
+-	unlink_framebuffer(fb_info);
++	__unlink_framebuffer(fb_info);
+ 	if (fb_info->pixmap.addr &&
+ 	    (fb_info->pixmap.flags & FB_PIXMAP_DEFAULT))
+ 		kfree(fb_info->pixmap.addr);
+ 	fb_destroy_modelist(&fb_info->modelist);
+-	registered_fb[i] = NULL;
++	registered_fb[fb_info->node] = NULL;
+ 	num_registered_fb--;
+ 	fb_cleanup_device(fb_info);
+ 	event.info = fb_info;
+@@ -1747,7 +1759,7 @@ static int do_unregister_framebuffer(struct fb_info *fb_info)
+ 	return 0;
+ }
+ 
+-int unlink_framebuffer(struct fb_info *fb_info)
++static int __unlink_framebuffer(struct fb_info *fb_info)
+ {
+ 	int i;
+ 
+@@ -1759,6 +1771,20 @@ int unlink_framebuffer(struct fb_info *fb_info)
+ 		device_destroy(fb_class, MKDEV(FB_MAJOR, i));
+ 		fb_info->dev = NULL;
+ 	}
++
++	return 0;
++}
++
++int unlink_framebuffer(struct fb_info *fb_info)
++{
++	int ret;
++
++	ret = __unlink_framebuffer(fb_info);
++	if (ret)
++		return ret;
++
++	unbind_console(fb_info);
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(unlink_framebuffer);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index f365d4862015..862e8027acf6 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -27,6 +27,7 @@
+ #include <linux/slab.h>
+ #include <linux/prefetch.h>
+ #include <linux/delay.h>
++#include <asm/unaligned.h>
+ #include <video/udlfb.h>
+ #include "edid.h"
+ 
+@@ -450,17 +451,17 @@ static void dlfb_compress_hline(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min(MAX_CMD_PIXELS + 1,
+-			min((int)(pixel_end - pixel),
+-			    (int)(cmd_buffer_end - cmd) / BPP));
++		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel),
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / BPP);
+ 
+-		prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * BPP);
++		prefetch_range((void *) pixel, (u8 *)cmd_pixel_end - (u8 *)pixel);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const uint16_t * const repeating_pixel = pixel;
+ 
+-			*cmd++ = *pixel >> 8;
+-			*cmd++ = *pixel;
++			put_unaligned_be16(*pixel, cmd);
++			cmd += 2;
+ 			pixel++;
+ 
+ 			if (unlikely((pixel < cmd_pixel_end) &&
+@@ -486,13 +487,16 @@ static void dlfb_compress_hline(
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+ 			*raw_pixels_count_byte = (pixel-raw_pixel_start) & 0xFF;
++		} else {
++			/* undo unused byte */
++			cmd--;
+ 		}
+ 
+ 		*cmd_pixels_count_byte = (pixel - cmd_pixel_start) & 0xFF;
+-		dev_addr += (pixel - cmd_pixel_start) * BPP;
++		dev_addr += (u8 *)pixel - (u8 *)cmd_pixel_start;
+ 	}
+ 
+-	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
++	if (cmd_buffer_end - MIN_RLX_CMD_BYTES <= cmd) {
+ 		/* Fill leftover bytes with no-ops */
+ 		if (cmd_buffer_end > cmd)
+ 			memset(cmd, 0xAF, cmd_buffer_end - cmd);
+@@ -610,8 +614,11 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		ret = dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -735,8 +742,11 @@ static void dlfb_dpy_deferred_io(struct fb_info *info,
+ 	}
+ 
+ 	if (cmd > (char *) urb->transfer_buffer) {
++		int len;
++		if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length)
++			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+-		int len = cmd - (char *) urb->transfer_buffer;
++		len = cmd - (char *) urb->transfer_buffer;
+ 		dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+@@ -922,14 +932,6 @@ static void dlfb_free(struct kref *kref)
+ 	kfree(dlfb);
+ }
+ 
+-static void dlfb_release_urb_work(struct work_struct *work)
+-{
+-	struct urb_node *unode = container_of(work, struct urb_node,
+-					      release_urb_work.work);
+-
+-	up(&unode->dlfb->urbs.limit_sem);
+-}
+-
+ static void dlfb_free_framebuffer(struct dlfb_data *dlfb)
+ {
+ 	struct fb_info *info = dlfb->info;
+@@ -1039,10 +1041,25 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 	int result;
+ 	u16 *pix_framebuffer;
+ 	int i;
++	struct fb_var_screeninfo fvs;
++
++	/* clear the activate field because it causes spurious miscompares */
++	fvs = info->var;
++	fvs.activate = 0;
++	fvs.vmode &= ~FB_VMODE_SMOOTH_XPAN;
++
++	if (!memcmp(&dlfb->current_mode, &fvs, sizeof(struct fb_var_screeninfo)))
++		return 0;
+ 
+ 	result = dlfb_set_video_mode(dlfb, &info->var);
+ 
+-	if ((result == 0) && (dlfb->fb_count == 0)) {
++	if (result)
++		return result;
++
++	dlfb->current_mode = fvs;
++	info->fix.line_length = info->var.xres * (info->var.bits_per_pixel / 8);
++
++	if (dlfb->fb_count == 0) {
+ 
+ 		/* paint greenscreen */
+ 
+@@ -1054,7 +1071,7 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 				   info->screen_base);
+ 	}
+ 
+-	return result;
++	return 0;
+ }
+ 
+ /* To fonzi the jukebox (e.g. make blanking changes take effect) */
+@@ -1649,7 +1666,8 @@ static void dlfb_init_framebuffer_work(struct work_struct *work)
+ 	dlfb->info = info;
+ 	info->par = dlfb;
+ 	info->pseudo_palette = dlfb->pseudo_palette;
+-	info->fbops = &dlfb_ops;
++	dlfb->ops = dlfb_ops;
++	info->fbops = &dlfb->ops;
+ 
+ 	retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ 	if (retval < 0) {
+@@ -1789,14 +1807,7 @@ static void dlfb_urb_completion(struct urb *urb)
+ 	dlfb->urbs.available++;
+ 	spin_unlock_irqrestore(&dlfb->urbs.lock, flags);
+ 
+-	/*
+-	 * When using fb_defio, we deadlock if up() is called
+-	 * while another is waiting. So queue to another process.
+-	 */
+-	if (fb_defio)
+-		schedule_delayed_work(&unode->release_urb_work, 0);
+-	else
+-		up(&dlfb->urbs.limit_sem);
++	up(&dlfb->urbs.limit_sem);
+ }
+ 
+ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+@@ -1805,16 +1816,11 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at disconnect */
+-		ret = down_interruptible(&dlfb->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&dlfb->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&dlfb->urbs.lock, flags);
+ 
+@@ -1838,25 +1844,27 @@ static void dlfb_free_urb_list(struct dlfb_data *dlfb)
+ 
+ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ {
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&dlfb->urbs.lock);
+ 
++retry:
+ 	dlfb->urbs.size = size;
+ 	INIT_LIST_HEAD(&dlfb->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&dlfb->urbs.limit_sem, 0);
++	dlfb->urbs.count = 0;
++	dlfb->urbs.available = 0;
++
++	while (dlfb->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(*unode), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+ 		unode->dlfb = dlfb;
+ 
+-		INIT_DELAYED_WORK(&unode->release_urb_work,
+-			  dlfb_release_urb_work);
+-
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (!urb) {
+ 			kfree(unode);
+@@ -1864,11 +1872,16 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(dlfb->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(dlfb->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				dlfb_free_urb_list(dlfb);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -1879,14 +1892,12 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &dlfb->urbs.list);
+ 
+-		i++;
++		up(&dlfb->urbs.limit_sem);
++		dlfb->urbs.count++;
++		dlfb->urbs.available++;
+ 	}
+ 
+-	sema_init(&dlfb->urbs.limit_sem, i);
+-	dlfb->urbs.count = i;
+-	dlfb->urbs.available = i;
+-
+-	return i;
++	return dlfb->urbs.count;
+ }
+ 
+ static struct urb *dlfb_get_urb(struct dlfb_data *dlfb)
+diff --git a/fs/9p/xattr.c b/fs/9p/xattr.c
+index f329eee6dc93..352abc39e891 100644
+--- a/fs/9p/xattr.c
++++ b/fs/9p/xattr.c
+@@ -105,7 +105,7 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ {
+ 	struct kvec kvec = {.iov_base = (void *)value, .iov_len = value_len};
+ 	struct iov_iter from;
+-	int retval;
++	int retval, err;
+ 
+ 	iov_iter_kvec(&from, WRITE | ITER_KVEC, &kvec, 1, value_len);
+ 
+@@ -126,7 +126,9 @@ int v9fs_fid_xattr_set(struct p9_fid *fid, const char *name,
+ 			 retval);
+ 	else
+ 		p9_client_write(fid, 0, &from, &retval);
+-	p9_client_clunk(fid);
++	err = p9_client_clunk(fid);
++	if (!retval && err)
++		retval = err;
+ 	return retval;
+ }
+ 
+diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
+index 96c1d14c18f1..c2a128678e6e 100644
+--- a/fs/lockd/clntlock.c
++++ b/fs/lockd/clntlock.c
+@@ -187,7 +187,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
+ 			continue;
+ 		if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
+ 			continue;
+-		if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)) ,fh) != 0)
++		if (nfs_compare_fh(NFS_FH(locks_inode(fl_blocked->fl_file)), fh) != 0)
+ 			continue;
+ 		/* Alright, we found a lock. Set the return status
+ 		 * and wake up the caller
+diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
+index a2c0dfc6fdc0..d20b92f271c2 100644
+--- a/fs/lockd/clntproc.c
++++ b/fs/lockd/clntproc.c
+@@ -128,7 +128,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
+ 	char *nodename = req->a_host->h_rpcclnt->cl_nodename;
+ 
+ 	nlmclnt_next_cookie(&argp->cookie);
+-	memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
++	memcpy(&lock->fh, NFS_FH(locks_inode(fl->fl_file)), sizeof(struct nfs_fh));
+ 	lock->caller  = nodename;
+ 	lock->oh.data = req->a_owner;
+ 	lock->oh.len  = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
+diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
+index 3701bccab478..74330daeab71 100644
+--- a/fs/lockd/svclock.c
++++ b/fs/lockd/svclock.c
+@@ -405,8 +405,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_lock(%s/%ld, ty=%d, pi=%d, %Ld-%Ld, bl=%d)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type, lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end,
+@@ -511,8 +511,8 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file,
+ 	__be32			ret;
+ 
+ 	dprintk("lockd: nlmsvc_testlock(%s/%ld, ty=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_type,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -566,8 +566,8 @@ nlmsvc_unlock(struct net *net, struct nlm_file *file, struct nlm_lock *lock)
+ 	int	error;
+ 
+ 	dprintk("lockd: nlmsvc_unlock(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+@@ -595,8 +595,8 @@ nlmsvc_cancel_blocked(struct net *net, struct nlm_file *file, struct nlm_lock *l
+ 	int status = 0;
+ 
+ 	dprintk("lockd: nlmsvc_cancel(%s/%ld, pi=%d, %Ld-%Ld)\n",
+-				file_inode(file->f_file)->i_sb->s_id,
+-				file_inode(file->f_file)->i_ino,
++				locks_inode(file->f_file)->i_sb->s_id,
++				locks_inode(file->f_file)->i_ino,
+ 				lock->fl.fl_pid,
+ 				(long long)lock->fl.fl_start,
+ 				(long long)lock->fl.fl_end);
+diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
+index 4ec3d6e03e76..899360ba3b84 100644
+--- a/fs/lockd/svcsubs.c
++++ b/fs/lockd/svcsubs.c
+@@ -44,7 +44,7 @@ static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
+ 
+ static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
+ {
+-	struct inode *inode = file_inode(file->f_file);
++	struct inode *inode = locks_inode(file->f_file);
+ 
+ 	dprintk("lockd: %s %s/%ld\n",
+ 		msg, inode->i_sb->s_id, inode->i_ino);
+@@ -414,7 +414,7 @@ nlmsvc_match_sb(void *datap, struct nlm_file *file)
+ {
+ 	struct super_block *sb = datap;
+ 
+-	return sb == file_inode(file->f_file)->i_sb;
++	return sb == locks_inode(file->f_file)->i_sb;
+ }
+ 
+ /**
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index a7efd83779d2..dec5880ac6de 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -204,7 +204,7 @@ static bool bl_map_stripe(struct pnfs_block_dev *dev, u64 offset,
+ 	chunk = div_u64(offset, dev->chunk_size);
+ 	div_u64_rem(chunk, dev->nr_children, &chunk_idx);
+ 
+-	if (chunk_idx > dev->nr_children) {
++	if (chunk_idx >= dev->nr_children) {
+ 		dprintk("%s: invalid chunk idx %d (%lld/%lld)\n",
+ 			__func__, chunk_idx, offset, dev->chunk_size);
+ 		/* error, should not happen */
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index 64c214fb9da6..5d57e818d0c3 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -441,11 +441,14 @@ validate_seqid(const struct nfs4_slot_table *tbl, const struct nfs4_slot *slot,
+  * a match.  If the slot is in use and the sequence numbers match, the
+  * client is still waiting for a response to the original request.
+  */
+-static bool referring_call_exists(struct nfs_client *clp,
++static int referring_call_exists(struct nfs_client *clp,
+ 				  uint32_t nrclists,
+-				  struct referring_call_list *rclists)
++				  struct referring_call_list *rclists,
++				  spinlock_t *lock)
++	__releases(lock)
++	__acquires(lock)
+ {
+-	bool status = false;
++	int status = 0;
+ 	int i, j;
+ 	struct nfs4_session *session;
+ 	struct nfs4_slot_table *tbl;
+@@ -468,8 +471,10 @@ static bool referring_call_exists(struct nfs_client *clp,
+ 
+ 		for (j = 0; j < rclist->rcl_nrefcalls; j++) {
+ 			ref = &rclist->rcl_refcalls[j];
++			spin_unlock(lock);
+ 			status = nfs4_slot_wait_on_seqid(tbl, ref->rc_slotid,
+ 					ref->rc_sequenceid, HZ >> 1) < 0;
++			spin_lock(lock);
+ 			if (status)
+ 				goto out;
+ 		}
+@@ -546,7 +551,8 @@ __be32 nfs4_callback_sequence(void *argp, void *resp,
+ 	 * related callback was received before the response to the original
+ 	 * call.
+ 	 */
+-	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists)) {
++	if (referring_call_exists(clp, args->csa_nrclists, args->csa_rclists,
++				&tbl->slot_tbl_lock) < 0) {
+ 		status = htonl(NFS4ERR_DELAY);
+ 		goto out_unlock;
+ 	}
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index f6c4ccd693f4..464db0c0f5c8 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -581,8 +581,15 @@ nfs4_async_handle_exception(struct rpc_task *task, struct nfs_server *server,
+ 		ret = -EIO;
+ 	return ret;
+ out_retry:
+-	if (ret == 0)
++	if (ret == 0) {
+ 		exception->retry = 1;
++		/*
++		 * For NFS4ERR_MOVED, the client transport will need to
++		 * be recomputed after migration recovery has completed.
++		 */
++		if (errorcode == -NFS4ERR_MOVED)
++			rpc_task_release_transport(task);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 32ba2d471853..d5e4d3cd8c7f 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -61,7 +61,7 @@ EXPORT_SYMBOL_GPL(pnfs_generic_commit_release);
+ 
+ /* The generic layer is about to remove the req from the commit list.
+  * If this will make the bucket empty, it will need to put the lseg reference.
+- * Note this must be called holding i_lock
++ * Note this must be called holding nfsi->commit_mutex
+  */
+ void
+ pnfs_generic_clear_request_commit(struct nfs_page *req,
+@@ -149,9 +149,7 @@ restart:
+ 		if (list_empty(&b->written)) {
+ 			freeme = b->wlseg;
+ 			b->wlseg = NULL;
+-			spin_unlock(&cinfo->inode->i_lock);
+ 			pnfs_put_lseg(freeme);
+-			spin_lock(&cinfo->inode->i_lock);
+ 			goto restart;
+ 		}
+ 	}
+@@ -167,7 +165,7 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 	LIST_HEAD(pages);
+ 	int i;
+ 
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	for (i = idx; i < fl_cinfo->nbuckets; i++) {
+ 		bucket = &fl_cinfo->buckets[i];
+ 		if (list_empty(&bucket->committing))
+@@ -177,12 +175,12 @@ static void pnfs_generic_retry_commit(struct nfs_commit_info *cinfo, int idx)
+ 		list_for_each(pos, &bucket->committing)
+ 			cinfo->ds->ncommitting--;
+ 		list_splice_init(&bucket->committing, &pages);
+-		spin_unlock(&cinfo->inode->i_lock);
++		mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 		nfs_retry_commit(&pages, freeme, cinfo, i);
+ 		pnfs_put_lseg(freeme);
+-		spin_lock(&cinfo->inode->i_lock);
++		mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	}
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ }
+ 
+ static unsigned int
+@@ -222,13 +220,13 @@ void pnfs_fetch_commit_bucket_list(struct list_head *pages,
+ 	struct list_head *pos;
+ 
+ 	bucket = &cinfo->ds->buckets[data->ds_commit_index];
+-	spin_lock(&cinfo->inode->i_lock);
++	mutex_lock(&NFS_I(cinfo->inode)->commit_mutex);
+ 	list_for_each(pos, &bucket->committing)
+ 		cinfo->ds->ncommitting--;
+ 	list_splice_init(&bucket->committing, pages);
+ 	data->lseg = bucket->clseg;
+ 	bucket->clseg = NULL;
+-	spin_unlock(&cinfo->inode->i_lock);
++	mutex_unlock(&NFS_I(cinfo->inode)->commit_mutex);
+ 
+ }
+ 
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 857141446d6b..4a17fad93411 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -6293,7 +6293,7 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
+ 		return status;
+ 	}
+ 
+-	inode = file_inode(filp);
++	inode = locks_inode(filp);
+ 	flctx = inode->i_flctx;
+ 
+ 	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index ef1fe42ff7bb..cc8303a806b4 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -668,6 +668,21 @@ static int ovl_fill_real(struct dir_context *ctx, const char *name,
+ 	return orig_ctx->actor(orig_ctx, name, namelen, offset, ino, d_type);
+ }
+ 
++static bool ovl_is_impure_dir(struct file *file)
++{
++	struct ovl_dir_file *od = file->private_data;
++	struct inode *dir = d_inode(file->f_path.dentry);
++
++	/*
++	 * Only upper dir can be impure, but if we are in the middle of
++	 * iterating a lower real dir, dir could be copied up and marked
++	 * impure. We only want the impure cache if we started iterating
++	 * a real upper dir to begin with.
++	 */
++	return od->is_upper && ovl_test_flag(OVL_IMPURE, dir);
++
++}
++
+ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ {
+ 	int err;
+@@ -696,7 +711,7 @@ static int ovl_iterate_real(struct file *file, struct dir_context *ctx)
+ 		rdt.parent_ino = stat.ino;
+ 	}
+ 
+-	if (ovl_test_flag(OVL_IMPURE, d_inode(dir))) {
++	if (ovl_is_impure_dir(file)) {
+ 		rdt.cache = ovl_cache_get_impure(&file->f_path);
+ 		if (IS_ERR(rdt.cache))
+ 			return PTR_ERR(rdt.cache);
+@@ -727,7 +742,7 @@ static int ovl_iterate(struct file *file, struct dir_context *ctx)
+ 		 */
+ 		if (ovl_xino_bits(dentry->d_sb) ||
+ 		    (ovl_same_sb(dentry->d_sb) &&
+-		     (ovl_test_flag(OVL_IMPURE, d_inode(dentry)) ||
++		     (ovl_is_impure_dir(file) ||
+ 		      OVL_TYPE_MERGE(ovl_path_type(dentry->d_parent))))) {
+ 			return ovl_iterate_real(file, ctx);
+ 		}
+diff --git a/fs/quota/quota.c b/fs/quota/quota.c
+index 860bfbe7a07a..dac1735312df 100644
+--- a/fs/quota/quota.c
++++ b/fs/quota/quota.c
+@@ -18,6 +18,7 @@
+ #include <linux/quotaops.h>
+ #include <linux/types.h>
+ #include <linux/writeback.h>
++#include <linux/nospec.h>
+ 
+ static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
+ 				     qid_t id)
+@@ -703,6 +704,7 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
+ 
+ 	if (type >= (XQM_COMMAND(cmd) ? XQM_MAXQUOTAS : MAXQUOTAS))
+ 		return -EINVAL;
++	type = array_index_nospec(type, MAXQUOTAS);
+ 	/*
+ 	 * Quota not supported on this fs? Check this before s_quota_types
+ 	 * since they needn't be set if quota is not supported at all.
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 9da224d4f2da..e8616040bffc 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1123,8 +1123,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	struct ubifs_inode *ui;
+ 	struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ 	struct ubifs_info *c = dir->i_sb->s_fs_info;
+-	int err, len = strlen(symname);
+-	int sz_change = CALC_DENT_SIZE(len);
++	int err, sz_change, len = strlen(symname);
+ 	struct fscrypt_str disk_link;
+ 	struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
+ 					.new_ino_d = ALIGN(len, 8),
+@@ -1151,6 +1150,8 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ 	if (err)
+ 		goto out_budg;
+ 
++	sz_change = CALC_DENT_SIZE(fname_len(&nm));
++
+ 	inode = ubifs_new_inode(c, dir, S_IFLNK | S_IRWXUGO);
+ 	if (IS_ERR(inode)) {
+ 		err = PTR_ERR(inode);
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 07b4956e0425..48060dc48683 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -664,6 +664,11 @@ int ubifs_jnl_update(struct ubifs_info *c, const struct inode *dir,
+ 	spin_lock(&ui->ui_lock);
+ 	ui->synced_i_size = ui->ui_size;
+ 	spin_unlock(&ui->ui_lock);
++	if (xent) {
++		spin_lock(&host_ui->ui_lock);
++		host_ui->synced_i_size = host_ui->ui_size;
++		spin_unlock(&host_ui->ui_lock);
++	}
+ 	mark_inode_clean(c, ui);
+ 	mark_inode_clean(c, host_ui);
+ 	return 0;
+@@ -1282,11 +1287,10 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
+ 			      int *new_len)
+ {
+ 	void *buf;
+-	int err, compr_type;
+-	u32 dlen, out_len, old_dlen;
++	int err, dlen, compr_type, out_len, old_dlen;
+ 
+ 	out_len = le32_to_cpu(dn->size);
+-	buf = kmalloc_array(out_len, WORST_COMPR_FACTOR, GFP_NOFS);
++	buf = kmalloc(out_len * WORST_COMPR_FACTOR, GFP_NOFS);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+@@ -1388,7 +1392,16 @@ int ubifs_jnl_truncate(struct ubifs_info *c, const struct inode *inode,
+ 		else if (err)
+ 			goto out_free;
+ 		else {
+-			if (le32_to_cpu(dn->size) <= dlen)
++			int dn_len = le32_to_cpu(dn->size);
++
++			if (dn_len <= 0 || dn_len > UBIFS_BLOCK_SIZE) {
++				ubifs_err(c, "bad data node (block %u, inode %lu)",
++					  blk, inode->i_ino);
++				ubifs_dump_node(c, dn);
++				goto out_free;
++			}
++
++			if (dn_len <= dlen)
+ 				dlen = 0; /* Nothing to do */
+ 			else {
+ 				err = truncate_data_node(c, inode, blk, dn, &dlen);
+diff --git a/fs/ubifs/lprops.c b/fs/ubifs/lprops.c
+index f5a46844340c..8ade493a423a 100644
+--- a/fs/ubifs/lprops.c
++++ b/fs/ubifs/lprops.c
+@@ -1089,10 +1089,6 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		}
+ 	}
+ 
+-	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
+-	if (!buf)
+-		return -ENOMEM;
+-
+ 	/*
+ 	 * After an unclean unmount, empty and freeable LEBs
+ 	 * may contain garbage - do not scan them.
+@@ -1111,6 +1107,10 @@ static int scan_check_cb(struct ubifs_info *c,
+ 		return LPT_SCAN_CONTINUE;
+ 	}
+ 
++	buf = __vmalloc(c->leb_size, GFP_NOFS, PAGE_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
+ 	sleb = ubifs_scan(c, lnum, 0, buf, 0);
+ 	if (IS_ERR(sleb)) {
+ 		ret = PTR_ERR(sleb);
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 6f720fdf5020..09e37e63bddd 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -152,6 +152,12 @@ static int create_xattr(struct ubifs_info *c, struct inode *host,
+ 	ui->data_len = size;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt += 1;
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+@@ -184,6 +190,7 @@ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_names -= fname_len(nm);
+ 	host_ui->flags &= ~UBIFS_CRYPT_FL;
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ out_free:
+ 	make_bad_inode(inode);
+@@ -235,6 +242,12 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ 	mutex_unlock(&ui->ui_mutex);
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(old_size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(size);
+@@ -256,6 +269,7 @@ static int change_xattr(struct ubifs_info *c, struct inode *host,
+ out_cancel:
+ 	host_ui->xattr_size -= CALC_XATTR_BYTES(size);
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(old_size);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	make_bad_inode(inode);
+ out_free:
+@@ -482,6 +496,12 @@ static int remove_xattr(struct ubifs_info *c, struct inode *host,
+ 		return err;
+ 
+ 	mutex_lock(&host_ui->ui_mutex);
++
++	if (!host->i_nlink) {
++		err = -ENOENT;
++		goto out_noent;
++	}
++
+ 	host->i_ctime = current_time(host);
+ 	host_ui->xattr_cnt -= 1;
+ 	host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm));
+@@ -501,6 +521,7 @@ out_cancel:
+ 	host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm));
+ 	host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len);
+ 	host_ui->xattr_names += fname_len(nm);
++out_noent:
+ 	mutex_unlock(&host_ui->ui_mutex);
+ 	ubifs_release_budget(c, &req);
+ 	make_bad_inode(inode);
+@@ -540,6 +561,9 @@ static int ubifs_xattr_remove(struct inode *host, const char *name)
+ 
+ 	ubifs_assert(inode_is_locked(host));
+ 
++	if (!host->i_nlink)
++		return -ENOENT;
++
+ 	if (fname_len(&nm) > UBIFS_MAX_NLEN)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index 0c504c8031d3..74b13347cd94 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1570,10 +1570,16 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
+  */
+ #define PART_DESC_ALLOC_STEP 32
+ 
++struct part_desc_seq_scan_data {
++	struct udf_vds_record rec;
++	u32 partnum;
++};
++
+ struct desc_seq_scan_data {
+ 	struct udf_vds_record vds[VDS_POS_LENGTH];
+ 	unsigned int size_part_descs;
+-	struct udf_vds_record *part_descs_loc;
++	unsigned int num_part_descs;
++	struct part_desc_seq_scan_data *part_descs_loc;
+ };
+ 
+ static struct udf_vds_record *handle_partition_descriptor(
+@@ -1582,10 +1588,14 @@ static struct udf_vds_record *handle_partition_descriptor(
+ {
+ 	struct partitionDesc *desc = (struct partitionDesc *)bh->b_data;
+ 	int partnum;
++	int i;
+ 
+ 	partnum = le16_to_cpu(desc->partitionNumber);
+-	if (partnum >= data->size_part_descs) {
+-		struct udf_vds_record *new_loc;
++	for (i = 0; i < data->num_part_descs; i++)
++		if (partnum == data->part_descs_loc[i].partnum)
++			return &(data->part_descs_loc[i].rec);
++	if (data->num_part_descs >= data->size_part_descs) {
++		struct part_desc_seq_scan_data *new_loc;
+ 		unsigned int new_size = ALIGN(partnum, PART_DESC_ALLOC_STEP);
+ 
+ 		new_loc = kcalloc(new_size, sizeof(*new_loc), GFP_KERNEL);
+@@ -1597,7 +1607,7 @@ static struct udf_vds_record *handle_partition_descriptor(
+ 		data->part_descs_loc = new_loc;
+ 		data->size_part_descs = new_size;
+ 	}
+-	return &(data->part_descs_loc[partnum]);
++	return &(data->part_descs_loc[data->num_part_descs++].rec);
+ }
+ 
+ 
+@@ -1647,6 +1657,7 @@ static noinline int udf_process_sequence(
+ 
+ 	memset(data.vds, 0, sizeof(struct udf_vds_record) * VDS_POS_LENGTH);
+ 	data.size_part_descs = PART_DESC_ALLOC_STEP;
++	data.num_part_descs = 0;
+ 	data.part_descs_loc = kcalloc(data.size_part_descs,
+ 				      sizeof(*data.part_descs_loc),
+ 				      GFP_KERNEL);
+@@ -1658,7 +1669,6 @@ static noinline int udf_process_sequence(
+ 	 * are in it.
+ 	 */
+ 	for (; (!done && block <= lastblock); block++) {
+-
+ 		bh = udf_read_tagged(sb, block, block, &ident);
+ 		if (!bh)
+ 			break;
+@@ -1730,13 +1740,10 @@ static noinline int udf_process_sequence(
+ 	}
+ 
+ 	/* Now handle prevailing Partition Descriptors */
+-	for (i = 0; i < data.size_part_descs; i++) {
+-		if (data.part_descs_loc[i].block) {
+-			ret = udf_load_partdesc(sb,
+-						data.part_descs_loc[i].block);
+-			if (ret < 0)
+-				return ret;
+-		}
++	for (i = 0; i < data.num_part_descs; i++) {
++		ret = udf_load_partdesc(sb, data.part_descs_loc[i].rec.block);
++		if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	return 0;
+diff --git a/fs/xattr.c b/fs/xattr.c
+index f9cb1db187b7..1bee74682513 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -539,7 +539,7 @@ getxattr(struct dentry *d, const char __user *name, void __user *value,
+ 	if (error > 0) {
+ 		if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
+ 		    (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
+-			posix_acl_fix_xattr_to_user(kvalue, size);
++			posix_acl_fix_xattr_to_user(kvalue, error);
+ 		if (size && copy_to_user(value, kvalue, error))
+ 			error = -EFAULT;
+ 	} else if (error == -ERANGE && size >= XATTR_SIZE_MAX) {
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index 6c666fd7de3c..0fce47d5acb1 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -295,6 +295,23 @@ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg,
+ 	return __blkg_lookup(blkcg, q, false);
+ }
+ 
++/**
+ * blkg_root_lookup - look up blkg for the specified request queue
++ * @q: request_queue of interest
++ *
++ * Lookup blkg for @q at the root level. See also blkg_lookup().
++ */
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q)
++{
++	struct blkcg_gq *blkg;
++
++	rcu_read_lock();
++	blkg = blkg_lookup(&blkcg_root, q);
++	rcu_read_unlock();
++
++	return blkg;
++}
++
+ /**
+  * blkg_to_pdata - get policy private data
+  * @blkg: blkg of interest
+@@ -737,6 +754,7 @@ struct blkcg_policy {
+ #ifdef CONFIG_BLOCK
+ 
+ static inline struct blkcg_gq *blkg_lookup(struct blkcg *blkcg, void *key) { return NULL; }
++static inline struct blkcg_gq *blkg_root_lookup(struct request_queue *q) { return NULL; }
+ static inline int blkcg_init_queue(struct request_queue *q) { return 0; }
+ static inline void blkcg_drain_queue(struct request_queue *q) { }
+ static inline void blkcg_exit_queue(struct request_queue *q) { }
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 3a3012f57be4..5389012f1d25 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1046,6 +1046,8 @@ extern int vmbus_establish_gpadl(struct vmbus_channel *channel,
+ extern int vmbus_teardown_gpadl(struct vmbus_channel *channel,
+ 				     u32 gpadl_handle);
+ 
++void vmbus_reset_channel_cb(struct vmbus_channel *channel);
++
+ extern int vmbus_recvpacket(struct vmbus_channel *channel,
+ 				  void *buffer,
+ 				  u32 bufferlen,
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index ef169d67df92..7fd9fbaea5aa 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -114,6 +114,7 @@
+  * Extended Capability Register
+  */
+ 
++#define ecap_dit(e)		((e >> 41) & 0x1)
+ #define ecap_pasid(e)		((e >> 40) & 0x1)
+ #define ecap_pss(e)		((e >> 35) & 0x1f)
+ #define ecap_eafs(e)		((e >> 34) & 0x1)
+@@ -284,6 +285,7 @@ enum {
+ #define QI_DEV_IOTLB_SID(sid)	((u64)((sid) & 0xffff) << 32)
+ #define QI_DEV_IOTLB_QDEP(qdep)	(((qdep) & 0x1f) << 16)
+ #define QI_DEV_IOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
++#define QI_DEV_IOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_IOTLB_SIZE	1
+ #define QI_DEV_IOTLB_MAX_INVS	32
+ 
+@@ -308,6 +310,7 @@ enum {
+ #define QI_DEV_EIOTLB_PASID(p)	(((u64)p) << 32)
+ #define QI_DEV_EIOTLB_SID(sid)	((u64)((sid) & 0xffff) << 16)
+ #define QI_DEV_EIOTLB_QDEP(qd)	((u64)((qd) & 0x1f) << 4)
++#define QI_DEV_EIOTLB_PFSID(pfsid) (((u64)(pfsid & 0xf) << 12) | ((u64)(pfsid & 0xfff) << 52))
+ #define QI_DEV_EIOTLB_MAX_INVS	32
+ 
+ #define QI_PGRP_IDX(idx)	(((u64)(idx)) << 55)
+@@ -453,9 +456,8 @@ extern void qi_flush_context(struct intel_iommu *iommu, u16 did, u16 sid,
+ 			     u8 fm, u64 type);
+ extern void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
+ 			  unsigned int size_order, u64 type);
+-extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+-			       u64 addr, unsigned mask);
+-
++extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
++			u16 qdep, u64 addr, unsigned mask);
+ extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
+ 
+ extern int dmar_ir_support(void);
+diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
+index 4fd95dbeb52f..b065ef406770 100644
+--- a/include/linux/lockd/lockd.h
++++ b/include/linux/lockd/lockd.h
+@@ -299,7 +299,7 @@ int           nlmsvc_unlock_all_by_ip(struct sockaddr *server_addr);
+ 
+ static inline struct inode *nlmsvc_file_inode(struct nlm_file *file)
+ {
+-	return file_inode(file->f_file);
++	return locks_inode(file->f_file);
+ }
+ 
+ static inline int __nlm_privileged_request4(const struct sockaddr *sap)
+@@ -359,7 +359,7 @@ static inline int nlm_privileged_requester(const struct svc_rqst *rqstp)
+ static inline int nlm_compare_locks(const struct file_lock *fl1,
+ 				    const struct file_lock *fl2)
+ {
+-	return file_inode(fl1->fl_file) == file_inode(fl2->fl_file)
++	return locks_inode(fl1->fl_file) == locks_inode(fl2->fl_file)
+ 	     && fl1->fl_pid   == fl2->fl_pid
+ 	     && fl1->fl_owner == fl2->fl_owner
+ 	     && fl1->fl_start == fl2->fl_start
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 99ce070e7dcb..22651e124071 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -139,7 +139,10 @@ struct page {
+ 			unsigned long _pt_pad_1;	/* compound_head */
+ 			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+ 			unsigned long _pt_pad_2;	/* mapping */
+-			struct mm_struct *pt_mm;	/* x86 pgds only */
++			union {
++				struct mm_struct *pt_mm; /* x86 pgds only */
++				atomic_t pt_frag_refcount; /* powerpc */
++			};
+ #if ALLOC_SPLIT_PTLOCKS
+ 			spinlock_t *ptl;
+ #else
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 8712ff70995f..40b48e2133cb 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -202,6 +202,37 @@
+ 
+ #endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
+ 
++/** check_shl_overflow() - Calculate a left-shifted value and check overflow
++ *
++ * @a: Value to be shifted
++ * @s: How many bits left to shift
++ * @d: Pointer to where to store the result
++ *
++ * Computes *@d = (@a << @s)
++ *
++ * Returns true if '*d' cannot hold the result or when 'a << s' doesn't
++ * make sense. Example conditions:
++ * - 'a << s' causes bits to be lost when stored in *d.
++ * - 's' is garbage (e.g. negative) or so large that the result of
++ *   'a << s' is guaranteed to be 0.
++ * - 'a' is negative.
++ * - 'a << s' sets the sign bit, if any, in '*d'.
++ *
++ * '*d' will hold the results of the attempted shift, but is not
++ * considered "safe for use" if false is returned.
++ */
++#define check_shl_overflow(a, s, d) ({					\
++	typeof(a) _a = a;						\
++	typeof(s) _s = s;						\
++	typeof(d) _d = d;						\
++	u64 _a_full = _a;						\
++	unsigned int _to_shift =					\
++		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
++	*_d = (_a_full << _to_shift);					\
++	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
++		(*_d >> _to_shift) != _a);				\
++})
++
+ /**
+  * array_size() - Calculate size of 2-dimensional array.
+  *
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 9b11b6a0978c..73d5c4a870fa 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -156,6 +156,7 @@ int		rpc_switch_client_transport(struct rpc_clnt *,
+ 
+ void		rpc_shutdown_client(struct rpc_clnt *);
+ void		rpc_release_client(struct rpc_clnt *);
++void		rpc_task_release_transport(struct rpc_task *);
+ void		rpc_task_release_client(struct rpc_task *);
+ 
+ int		rpcb_create_local(struct net *);
+diff --git a/include/linux/verification.h b/include/linux/verification.h
+index a10549a6c7cd..cfa4730d607a 100644
+--- a/include/linux/verification.h
++++ b/include/linux/verification.h
+@@ -12,6 +12,12 @@
+ #ifndef _LINUX_VERIFICATION_H
+ #define _LINUX_VERIFICATION_H
+ 
++/*
++ * Indicate that both builtin trusted keys and secondary trusted keys
++ * should be used.
++ */
++#define VERIFY_USE_SECONDARY_KEYRING ((struct key *)1UL)
++
+ /*
+  * The use to which an asymmetric key is being put.
+  */
+diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
+index bf48e71f2634..8a3432d0f0dc 100644
+--- a/include/uapi/linux/eventpoll.h
++++ b/include/uapi/linux/eventpoll.h
+@@ -42,7 +42,7 @@
+ #define EPOLLRDHUP	(__force __poll_t)0x00002000
+ 
+ /* Set exclusive wakeup mode for the target file descriptor */
+-#define EPOLLEXCLUSIVE (__force __poll_t)(1U << 28)
++#define EPOLLEXCLUSIVE	((__force __poll_t)(1U << 28))
+ 
+ /*
+  * Request the handling of system wakeup events so as to prevent system suspends
+@@ -54,13 +54,13 @@
+  *
+  * Requires CAP_BLOCK_SUSPEND
+  */
+-#define EPOLLWAKEUP (__force __poll_t)(1U << 29)
++#define EPOLLWAKEUP	((__force __poll_t)(1U << 29))
+ 
+ /* Set the One Shot behaviour for the target file descriptor */
+-#define EPOLLONESHOT (__force __poll_t)(1U << 30)
++#define EPOLLONESHOT	((__force __poll_t)(1U << 30))
+ 
+ /* Set the Edge Triggered behaviour for the target file descriptor */
+-#define EPOLLET (__force __poll_t)(1U << 31)
++#define EPOLLET		((__force __poll_t)(1U << 31))
+ 
+ /* 
+  * On x86-64 make the 64bit structure have the same alignment as the
+diff --git a/include/video/udlfb.h b/include/video/udlfb.h
+index 0cabe6b09095..6e1a2e790b1b 100644
+--- a/include/video/udlfb.h
++++ b/include/video/udlfb.h
+@@ -20,7 +20,6 @@ struct dloarea {
+ struct urb_node {
+ 	struct list_head entry;
+ 	struct dlfb_data *dlfb;
+-	struct delayed_work release_urb_work;
+ 	struct urb *urb;
+ };
+ 
+@@ -52,11 +51,13 @@ struct dlfb_data {
+ 	int base8;
+ 	u32 pseudo_palette[256];
+ 	int blank_mode; /*one of FB_BLANK_ */
++	struct fb_ops ops;
+ 	/* blit-only rendering path metrics, exposed through sysfs */
+ 	atomic_t bytes_rendered; /* raw pixel-bytes driver asked to render */
+ 	atomic_t bytes_identical; /* saved effort with backbuffer comparison */
+ 	atomic_t bytes_sent; /* to usb, after compression including overhead */
+ 	atomic_t cpu_kcycles_used; /* transpired during pixel processing */
++	struct fb_var_screeninfo current_mode;
+ };
+ 
+ #define NR_USB_REQUEST_I2C_SUB_IO 0x02
+@@ -87,7 +88,7 @@ struct dlfb_data {
+ #define MIN_RAW_PIX_BYTES	2
+ #define MIN_RAW_CMD_BYTES	(RAW_HEADER_BYTES + MIN_RAW_PIX_BYTES)
+ 
+-#define DL_DEFIO_WRITE_DELAY    5 /* fb_deferred_io.delay in jiffies */
++#define DL_DEFIO_WRITE_DELAY    msecs_to_jiffies(HZ <= 300 ? 4 : 10) /* optimal value for 720p video */
+ #define DL_DEFIO_WRITE_DISABLE  (HZ*60) /* "disable" with long delay */
+ 
+ /* remove these once align.h patch is taken into kernel */
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 3a4656fb7047..5b77a7314e01 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -678,6 +678,9 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
+ 	if (!func->old_name || !func->new_func)
+ 		return -EINVAL;
+ 
++	if (strlen(func->old_name) >= KSYM_NAME_LEN)
++		return -EINVAL;
++
+ 	INIT_LIST_HEAD(&func->stack_node);
+ 	func->patched = false;
+ 	func->transition = false;
+@@ -751,6 +754,9 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
+ 	if (!obj->funcs)
+ 		return -EINVAL;
+ 
++	if (klp_is_module(obj) && strlen(obj->name) >= MODULE_NAME_LEN)
++		return -EINVAL;
++
+ 	obj->patched = false;
+ 	obj->mod = NULL;
+ 
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 38283363da06..cfb750105e1e 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -355,7 +355,6 @@ void __put_devmap_managed_page(struct page *page)
+ 		__ClearPageActive(page);
+ 		__ClearPageWaiters(page);
+ 
+-		page->mapping = NULL;
+ 		mem_cgroup_uncharge(page);
+ 
+ 		page->pgmap->page_free(page, page->pgmap->data);
+diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
+index e880ca22c5a5..3a6c2f87699e 100644
+--- a/kernel/power/Kconfig
++++ b/kernel/power/Kconfig
+@@ -105,6 +105,7 @@ config PM_SLEEP
+ 	def_bool y
+ 	depends on SUSPEND || HIBERNATE_CALLBACKS
+ 	select PM
++	select SRCU
+ 
+ config PM_SLEEP_SMP
+ 	def_bool y
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index a0a74c533e4b..0913b4d385de 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -306,12 +306,12 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 	return printk_safe_log_store(s, fmt, args);
+ }
+ 
+-void printk_nmi_enter(void)
++void notrace printk_nmi_enter(void)
+ {
+ 	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+-void printk_nmi_exit(void)
++void notrace printk_nmi_exit(void)
+ {
+ 	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
+ }
+diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
+index d40708e8c5d6..01b6ddeb4f05 100644
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -472,6 +472,7 @@ retry_ipi:
+ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 				     smp_call_func_t func)
+ {
++	int cpu;
+ 	struct rcu_node *rnp;
+ 
+ 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
+@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
+ 			continue;
+ 		}
+ 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
+-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_disable();
++		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
++		/* If all offline, queue the work on an unbound CPU. */
++		if (unlikely(cpu > rnp->grphi))
++			cpu = WORK_CPU_UNBOUND;
++		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
++		preempt_enable();
+ 		rnp->exp_need_flush = true;
+ 	}
+ 
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 1a3e9bddd17b..16f84142f2f4 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -190,7 +190,7 @@ static void cpuidle_idle_call(void)
+ 		 */
+ 		next_state = cpuidle_select(drv, dev, &stop_tick);
+ 
+-		if (stop_tick)
++		if (stop_tick || tick_nohz_tick_stopped())
+ 			tick_nohz_idle_stop_tick();
+ 		else
+ 			tick_nohz_idle_retain_tick();
+diff --git a/kernel/sys.c b/kernel/sys.c
+index 38509dc1f77b..69b9a37ecf0d 100644
+--- a/kernel/sys.c
++++ b/kernel/sys.c
+@@ -1237,18 +1237,19 @@ static int override_release(char __user *release, size_t len)
+ 
+ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+ {
+-	int errno = 0;
++	struct new_utsname tmp;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof *name))
+-		errno = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!errno && override_release(name->release, sizeof(name->release)))
+-		errno = -EFAULT;
+-	if (!errno && override_architecture(name))
+-		errno = -EFAULT;
+-	return errno;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #ifdef __ARCH_WANT_SYS_OLD_UNAME
+@@ -1257,55 +1258,46 @@ SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
+  */
+ SYSCALL_DEFINE1(uname, struct old_utsname __user *, name)
+ {
+-	int error = 0;
++	struct old_utsname tmp;
+ 
+ 	if (!name)
+ 		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	if (copy_to_user(name, utsname(), sizeof(*name)))
+-		error = -EFAULT;
++	memcpy(&tmp, utsname(), sizeof(tmp));
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	return error;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	if (override_architecture(name))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ SYSCALL_DEFINE1(olduname, struct oldold_utsname __user *, name)
+ {
+-	int error;
++	struct oldold_utsname tmp = {};
+ 
+ 	if (!name)
+ 		return -EFAULT;
+-	if (!access_ok(VERIFY_WRITE, name, sizeof(struct oldold_utsname)))
+-		return -EFAULT;
+ 
+ 	down_read(&uts_sem);
+-	error = __copy_to_user(&name->sysname, &utsname()->sysname,
+-			       __OLD_UTS_LEN);
+-	error |= __put_user(0, name->sysname + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->nodename, &utsname()->nodename,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->nodename + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->release, &utsname()->release,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->release + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->version, &utsname()->version,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->version + __OLD_UTS_LEN);
+-	error |= __copy_to_user(&name->machine, &utsname()->machine,
+-				__OLD_UTS_LEN);
+-	error |= __put_user(0, name->machine + __OLD_UTS_LEN);
++	memcpy(&tmp.sysname, &utsname()->sysname, __OLD_UTS_LEN);
++	memcpy(&tmp.nodename, &utsname()->nodename, __OLD_UTS_LEN);
++	memcpy(&tmp.release, &utsname()->release, __OLD_UTS_LEN);
++	memcpy(&tmp.version, &utsname()->version, __OLD_UTS_LEN);
++	memcpy(&tmp.machine, &utsname()->machine, __OLD_UTS_LEN);
+ 	up_read(&uts_sem);
++	if (copy_to_user(name, &tmp, sizeof(tmp)))
++		return -EFAULT;
+ 
+-	if (!error && override_architecture(name))
+-		error = -EFAULT;
+-	if (!error && override_release(name->release, sizeof(name->release)))
+-		error = -EFAULT;
+-	return error ? -EFAULT : 0;
++	if (override_architecture(name))
++		return -EFAULT;
++	if (override_release(name->release, sizeof(name->release)))
++		return -EFAULT;
++	return 0;
+ }
+ #endif
+ 
+@@ -1319,17 +1311,18 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->nodename, tmp, len);
+ 		memset(u->nodename + len, 0, sizeof(u->nodename) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_HOSTNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+@@ -1337,8 +1330,9 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
+ 
+ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ {
+-	int i, errno;
++	int i;
+ 	struct new_utsname *u;
++	char tmp[__NEW_UTS_LEN + 1];
+ 
+ 	if (len < 0)
+ 		return -EINVAL;
+@@ -1347,11 +1341,11 @@ SYSCALL_DEFINE2(gethostname, char __user *, name, int, len)
+ 	i = 1 + strlen(u->nodename);
+ 	if (i > len)
+ 		i = len;
+-	errno = 0;
+-	if (copy_to_user(name, u->nodename, i))
+-		errno = -EFAULT;
++	memcpy(tmp, u->nodename, i);
+ 	up_read(&uts_sem);
+-	return errno;
++	if (copy_to_user(name, tmp, i))
++		return -EFAULT;
++	return 0;
+ }
+ 
+ #endif
+@@ -1370,17 +1364,18 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
+ 	if (len < 0 || len > __NEW_UTS_LEN)
+ 		return -EINVAL;
+ 
+-	down_write(&uts_sem);
+ 	errno = -EFAULT;
+ 	if (!copy_from_user(tmp, name, len)) {
+-		struct new_utsname *u = utsname();
++		struct new_utsname *u;
+ 
++		down_write(&uts_sem);
++		u = utsname();
+ 		memcpy(u->domainname, tmp, len);
+ 		memset(u->domainname + len, 0, sizeof(u->domainname) - len);
+ 		errno = 0;
+ 		uts_proc_notify(UTS_PROC_DOMAINNAME);
++		up_write(&uts_sem);
+ 	}
+-	up_write(&uts_sem);
+ 	return errno;
+ }
+ 
+diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
+index 987d9a9ae283..8defc6fd8c0f 100644
+--- a/kernel/trace/blktrace.c
++++ b/kernel/trace/blktrace.c
+@@ -1841,6 +1841,10 @@ static ssize_t sysfs_blk_trace_attr_store(struct device *dev,
+ 	mutex_lock(&q->blk_trace_mutex);
+ 
+ 	if (attr == &dev_attr_enable) {
++		if (!!value == !!q->blk_trace) {
++			ret = 0;
++			goto out_unlock_bdev;
++		}
+ 		if (value)
+ 			ret = blk_trace_setup_queue(q, bdev);
+ 		else
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 176debd3481b..ddae35127571 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7628,7 +7628,9 @@ rb_simple_write(struct file *filp, const char __user *ubuf,
+ 
+ 	if (buffer) {
+ 		mutex_lock(&trace_types_lock);
+-		if (val) {
++		if (!!val == tracer_tracing_is_on(tr)) {
++			val = 0; /* do nothing */
++		} else if (val) {
+ 			tracer_tracing_on(tr);
+ 			if (tr->current_trace->start)
+ 				tr->current_trace->start(tr);
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index bf89a51e740d..ac02fafc9f1b 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -952,7 +952,7 @@ probe_event_disable(struct trace_uprobe *tu, struct trace_event_file *file)
+ 
+ 		list_del_rcu(&link->list);
+ 		/* synchronize with u{,ret}probe_trace_func */
+-		synchronize_sched();
++		synchronize_rcu();
+ 		kfree(link);
+ 
+ 		if (!list_empty(&tu->tp.files))
+diff --git a/kernel/user_namespace.c b/kernel/user_namespace.c
+index c3d7583fcd21..e5222b5fb4fe 100644
+--- a/kernel/user_namespace.c
++++ b/kernel/user_namespace.c
+@@ -859,7 +859,16 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	unsigned idx;
+ 	struct uid_gid_extent extent;
+ 	char *kbuf = NULL, *pos, *next_line;
+-	ssize_t ret = -EINVAL;
++	ssize_t ret;
++
++	/* Only allow < page size writes at the beginning of the file */
++	if ((*ppos != 0) || (count >= PAGE_SIZE))
++		return -EINVAL;
++
++	/* Slurp in the user data */
++	kbuf = memdup_user_nul(buf, count);
++	if (IS_ERR(kbuf))
++		return PTR_ERR(kbuf);
+ 
+ 	/*
+ 	 * The userns_state_mutex serializes all writes to any given map.
+@@ -895,19 +904,6 @@ static ssize_t map_write(struct file *file, const char __user *buf,
+ 	if (cap_valid(cap_setid) && !file_ns_capable(file, ns, CAP_SYS_ADMIN))
+ 		goto out;
+ 
+-	/* Only allow < page size writes at the beginning of the file */
+-	ret = -EINVAL;
+-	if ((*ppos != 0) || (count >= PAGE_SIZE))
+-		goto out;
+-
+-	/* Slurp in the user data */
+-	kbuf = memdup_user_nul(buf, count);
+-	if (IS_ERR(kbuf)) {
+-		ret = PTR_ERR(kbuf);
+-		kbuf = NULL;
+-		goto out;
+-	}
+-
+ 	/* Parse the user data */
+ 	ret = -EINVAL;
+ 	pos = kbuf;
+diff --git a/kernel/utsname_sysctl.c b/kernel/utsname_sysctl.c
+index 233cd8fc6910..258033d62cb3 100644
+--- a/kernel/utsname_sysctl.c
++++ b/kernel/utsname_sysctl.c
+@@ -18,7 +18,7 @@
+ 
+ #ifdef CONFIG_PROC_SYSCTL
+ 
+-static void *get_uts(struct ctl_table *table, int write)
++static void *get_uts(struct ctl_table *table)
+ {
+ 	char *which = table->data;
+ 	struct uts_namespace *uts_ns;
+@@ -26,21 +26,9 @@ static void *get_uts(struct ctl_table *table, int write)
+ 	uts_ns = current->nsproxy->uts_ns;
+ 	which = (which - (char *)&init_uts_ns) + (char *)uts_ns;
+ 
+-	if (!write)
+-		down_read(&uts_sem);
+-	else
+-		down_write(&uts_sem);
+ 	return which;
+ }
+ 
+-static void put_uts(struct ctl_table *table, int write, void *which)
+-{
+-	if (!write)
+-		up_read(&uts_sem);
+-	else
+-		up_write(&uts_sem);
+-}
+-
+ /*
+  *	Special case of dostring for the UTS structure. This has locks
+  *	to observe. Should this be in kernel/sys.c ????
+@@ -50,13 +38,34 @@ static int proc_do_uts_string(struct ctl_table *table, int write,
+ {
+ 	struct ctl_table uts_table;
+ 	int r;
++	char tmp_data[__NEW_UTS_LEN + 1];
++
+ 	memcpy(&uts_table, table, sizeof(uts_table));
+-	uts_table.data = get_uts(table, write);
++	uts_table.data = tmp_data;
++
++	/*
++	 * Buffer the value in tmp_data so that proc_dostring() can be called
++	 * without holding any locks.
++	 * We also need to read the original value in the write==1 case to
++	 * support partial writes.
++	 */
++	down_read(&uts_sem);
++	memcpy(tmp_data, get_uts(table), sizeof(tmp_data));
++	up_read(&uts_sem);
+ 	r = proc_dostring(&uts_table, write, buffer, lenp, ppos);
+-	put_uts(table, write, uts_table.data);
+ 
+-	if (write)
++	if (write) {
++		/*
++		 * Write back the new value.
++		 * Note that, since we dropped uts_sem, the result can
++		 * theoretically be incorrect if there are two parallel writes
++		 * at non-zero offsets to the same sysctl.
++		 */
++		down_write(&uts_sem);
++		memcpy(get_uts(table), tmp_data, sizeof(tmp_data));
++		up_write(&uts_sem);
+ 		proc_sys_poll_notify(table->poll);
++	}
+ 
+ 	return r;
+ }
+diff --git a/mm/hmm.c b/mm/hmm.c
+index de7b6bf77201..f9d1d89dec4d 100644
+--- a/mm/hmm.c
++++ b/mm/hmm.c
+@@ -963,6 +963,8 @@ static void hmm_devmem_free(struct page *page, void *data)
+ {
+ 	struct hmm_devmem *devmem = data;
+ 
++	page->mapping = NULL;
++
+ 	devmem->ops->free(devmem, page);
+ }
+ 
+diff --git a/mm/memory.c b/mm/memory.c
+index 86d4329acb05..f94feec6518d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -391,15 +391,6 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ {
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+-	/*
+-	 * When there's less then two users of this mm there cannot be a
+-	 * concurrent page-table walk.
+-	 */
+-	if (atomic_read(&tlb->mm->mm_users) < 2) {
+-		__tlb_remove_table(table);
+-		return;
+-	}
+-
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
+diff --git a/mm/readahead.c b/mm/readahead.c
+index e273f0de3376..792dea696d54 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -385,6 +385,7 @@ ondemand_readahead(struct address_space *mapping,
+ {
+ 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+ 	unsigned long max_pages = ra->ra_pages;
++	unsigned long add_pages;
+ 	pgoff_t prev_offset;
+ 
+ 	/*
+@@ -474,10 +475,17 @@ readit:
+ 	 * Will this read hit the readahead marker made by itself?
+ 	 * If so, trigger the readahead marker hit now, and merge
+ 	 * the resulted next readahead window into the current one.
++	 * Take care of maximum IO pages as above.
+ 	 */
+ 	if (offset == ra->start && ra->size == ra->async_size) {
+-		ra->async_size = get_next_ra_size(ra, max_pages);
+-		ra->size += ra->async_size;
++		add_pages = get_next_ra_size(ra, max_pages);
++		if (ra->size + add_pages <= max_pages) {
++			ra->async_size = add_pages;
++			ra->size += add_pages;
++		} else {
++			ra->size = max_pages;
++			ra->async_size = max_pages >> 1;
++		}
+ 	}
+ 
+ 	return ra_submit(ra, mapping, filp);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 5c1343195292..2872f3dbfd86 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -958,7 +958,7 @@ static int p9_client_version(struct p9_client *c)
+ {
+ 	int err = 0;
+ 	struct p9_req_t *req;
+-	char *version;
++	char *version = NULL;
+ 	int msize;
+ 
+ 	p9_debug(P9_DEBUG_9P, ">>> TVERSION msize %d protocol %d\n",
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index 588bf88c3305..ef456395645a 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -185,6 +185,8 @@ static void p9_mux_poll_stop(struct p9_conn *m)
+ 	spin_lock_irqsave(&p9_poll_lock, flags);
+ 	list_del_init(&m->poll_pending_link);
+ 	spin_unlock_irqrestore(&p9_poll_lock, flags);
++
++	flush_work(&p9_poll_work);
+ }
+ 
+ /**
+@@ -940,7 +942,7 @@ p9_fd_create_tcp(struct p9_client *client, const char *addr, char *args)
+ 	if (err < 0)
+ 		return err;
+ 
+-	if (valid_ipaddr4(addr) < 0)
++	if (addr == NULL || valid_ipaddr4(addr) < 0)
+ 		return -EINVAL;
+ 
+ 	csocket = NULL;
+@@ -990,6 +992,9 @@ p9_fd_create_unix(struct p9_client *client, const char *addr, char *args)
+ 
+ 	csocket = NULL;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	if (strlen(addr) >= UNIX_PATH_MAX) {
+ 		pr_err("%s (%d): address too long: %s\n",
+ 		       __func__, task_pid_nr(current), addr);
+diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
+index 3d414acb7015..afaf0d65f3dd 100644
+--- a/net/9p/trans_rdma.c
++++ b/net/9p/trans_rdma.c
+@@ -644,6 +644,9 @@ rdma_create_trans(struct p9_client *client, const char *addr, char *args)
+ 	struct rdma_conn_param conn_param;
+ 	struct ib_qp_init_attr qp_attr;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	/* Parse the transport specific mount options */
+ 	err = parse_opts(args, &opts);
+ 	if (err < 0)
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index 05006cbb3361..4c2da2513c8b 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -188,7 +188,7 @@ static int pack_sg_list(struct scatterlist *sg, int start,
+ 		s = rest_of_page(data);
+ 		if (s > count)
+ 			s = count;
+-		BUG_ON(index > limit);
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_buf(&sg[index++], data, s);
+@@ -233,6 +233,7 @@ pack_sg_list_p(struct scatterlist *sg, int start, int limit,
+ 		s = PAGE_SIZE - data_off;
+ 		if (s > count)
+ 			s = count;
++		BUG_ON(index >= limit);
+ 		/* Make sure we don't terminate early. */
+ 		sg_unmark_end(&sg[index]);
+ 		sg_set_page(&sg[index++], pdata[i++], s, data_off);
+@@ -406,6 +407,7 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 	p9_debug(P9_DEBUG_TRANS, "virtio request\n");
+ 
+ 	if (uodata) {
++		__le32 sz;
+ 		int n = p9_get_mapped_pages(chan, &out_pages, uodata,
+ 					    outlen, &offs, &need_drop);
+ 		if (n < 0)
+@@ -416,6 +418,12 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
+ 			memcpy(&req->tc->sdata[req->tc->size - 4], &v, 4);
+ 			outlen = n;
+ 		}
++		/* The size field of the message must include the length of the
++		 * header and the length of the data.  We didn't actually know
++		 * the length of the data until this point so add it in now.
++		 */
++		sz = cpu_to_le32(req->tc->size + outlen);
++		memcpy(&req->tc->sdata[0], &sz, sizeof(sz));
+ 	} else if (uidata) {
+ 		int n = p9_get_mapped_pages(chan, &in_pages, uidata,
+ 					    inlen, &offs, &need_drop);
+@@ -643,6 +651,9 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
+ 	int ret = -ENOENT;
+ 	int found = 0;
+ 
++	if (devname == NULL)
++		return -EINVAL;
++
+ 	mutex_lock(&virtio_9p_lock);
+ 	list_for_each_entry(chan, &virtio_chan_list, chan_list) {
+ 		if (!strncmp(devname, chan->tag, chan->tag_len) &&
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index 2e2b8bca54f3..c2d54ac76bfd 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -94,6 +94,9 @@ static int p9_xen_create(struct p9_client *client, const char *addr, char *args)
+ {
+ 	struct xen_9pfs_front_priv *priv;
+ 
++	if (addr == NULL)
++		return -EINVAL;
++
+ 	read_lock(&xen_9pfs_lock);
+ 	list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ 		if (!strcmp(priv->tag, addr)) {
+diff --git a/net/ieee802154/6lowpan/tx.c b/net/ieee802154/6lowpan/tx.c
+index e6ff5128e61a..ca53efa17be1 100644
+--- a/net/ieee802154/6lowpan/tx.c
++++ b/net/ieee802154/6lowpan/tx.c
+@@ -265,9 +265,24 @@ netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *ldev)
+ 	/* We must take a copy of the skb before we modify/replace the ipv6
+ 	 * header as the header could be used elsewhere
+ 	 */
+-	skb = skb_unshare(skb, GFP_ATOMIC);
+-	if (!skb)
+-		return NET_XMIT_DROP;
++	if (unlikely(skb_headroom(skb) < ldev->needed_headroom ||
++		     skb_tailroom(skb) < ldev->needed_tailroom)) {
++		struct sk_buff *nskb;
++
++		nskb = skb_copy_expand(skb, ldev->needed_headroom,
++				       ldev->needed_tailroom, GFP_ATOMIC);
++		if (likely(nskb)) {
++			consume_skb(skb);
++			skb = nskb;
++		} else {
++			kfree_skb(skb);
++			return NET_XMIT_DROP;
++		}
++	} else {
++		skb = skb_unshare(skb, GFP_ATOMIC);
++		if (!skb)
++			return NET_XMIT_DROP;
++	}
+ 
+ 	ret = lowpan_header(skb, ldev, &dgram_size, &dgram_offset);
+ 	if (ret < 0) {
+diff --git a/net/mac802154/tx.c b/net/mac802154/tx.c
+index 7e253455f9dd..bcd1a5e6ebf4 100644
+--- a/net/mac802154/tx.c
++++ b/net/mac802154/tx.c
+@@ -63,8 +63,21 @@ ieee802154_tx(struct ieee802154_local *local, struct sk_buff *skb)
+ 	int ret;
+ 
+ 	if (!(local->hw.flags & IEEE802154_HW_TX_OMIT_CKSUM)) {
+-		u16 crc = crc_ccitt(0, skb->data, skb->len);
++		struct sk_buff *nskb;
++		u16 crc;
++
++		if (unlikely(skb_tailroom(skb) < IEEE802154_FCS_LEN)) {
++			nskb = skb_copy_expand(skb, 0, IEEE802154_FCS_LEN,
++					       GFP_ATOMIC);
++			if (likely(nskb)) {
++				consume_skb(skb);
++				skb = nskb;
++			} else {
++				goto err_tx;
++			}
++		}
+ 
++		crc = crc_ccitt(0, skb->data, skb->len);
+ 		put_unaligned_le16(crc, skb_put(skb, 2));
+ 	}
+ 
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d839c33ae7d9..0d85425b1e07 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -965,10 +965,20 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(rpc_bind_new_program);
+ 
++void rpc_task_release_transport(struct rpc_task *task)
++{
++	struct rpc_xprt *xprt = task->tk_xprt;
++
++	if (xprt) {
++		task->tk_xprt = NULL;
++		xprt_put(xprt);
++	}
++}
++EXPORT_SYMBOL_GPL(rpc_task_release_transport);
++
+ void rpc_task_release_client(struct rpc_task *task)
+ {
+ 	struct rpc_clnt *clnt = task->tk_client;
+-	struct rpc_xprt *xprt = task->tk_xprt;
+ 
+ 	if (clnt != NULL) {
+ 		/* Remove from client task list */
+@@ -979,12 +989,14 @@ void rpc_task_release_client(struct rpc_task *task)
+ 
+ 		rpc_release_client(clnt);
+ 	}
++	rpc_task_release_transport(task);
++}
+ 
+-	if (xprt != NULL) {
+-		task->tk_xprt = NULL;
+-
+-		xprt_put(xprt);
+-	}
++static
++void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
++{
++	if (!task->tk_xprt)
++		task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
+ }
+ 
+ static
+@@ -992,8 +1004,7 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+ 
+ 	if (clnt != NULL) {
+-		if (task->tk_xprt == NULL)
+-			task->tk_xprt = xprt_iter_get_next(&clnt->cl_xpi);
++		rpc_task_set_transport(task, clnt);
+ 		task->tk_client = clnt;
+ 		atomic_inc(&clnt->cl_count);
+ 		if (clnt->cl_softrtry)
+@@ -1512,6 +1523,7 @@ call_start(struct rpc_task *task)
+ 		clnt->cl_program->version[clnt->cl_vers]->counts[idx]++;
+ 	clnt->cl_stats->rpccnt++;
+ 	task->tk_action = call_reserve;
++	rpc_task_set_transport(task, clnt);
+ }
+ 
+ /*
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index a3ac2c91331c..5e1dd493ce59 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -173,7 +173,7 @@ HOSTLOADLIBES_nconf	= $(shell . $(obj)/.nconf-cfg && echo $$libs)
+ HOSTCFLAGS_nconf.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ HOSTCFLAGS_nconf.gui.o	= $(shell . $(obj)/.nconf-cfg && echo $$cflags)
+ 
+-$(obj)/nconf.o: $(obj)/.nconf-cfg
++$(obj)/nconf.o $(obj)/nconf.gui.o: $(obj)/.nconf-cfg
+ 
+ # mconf: Used for the menuconfig target based on lxdialog
+ hostprogs-y	+= mconf
+@@ -184,7 +184,8 @@ HOSTLOADLIBES_mconf = $(shell . $(obj)/.mconf-cfg && echo $$libs)
+ $(foreach f, mconf.o $(lxdialog), \
+   $(eval HOSTCFLAGS_$f = $$(shell . $(obj)/.mconf-cfg && echo $$$$cflags)))
+ 
+-$(addprefix $(obj)/, mconf.o $(lxdialog)): $(obj)/.mconf-cfg
++$(obj)/mconf.o: $(obj)/.mconf-cfg
++$(addprefix $(obj)/lxdialog/, $(lxdialog)): $(obj)/.mconf-cfg
+ 
+ # qconf: Used for the xconfig target based on Qt
+ hostprogs-y	+= qconf
+diff --git a/security/apparmor/secid.c b/security/apparmor/secid.c
+index f2f22d00db18..4ccec1bcf6f5 100644
+--- a/security/apparmor/secid.c
++++ b/security/apparmor/secid.c
+@@ -79,7 +79,6 @@ int apparmor_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
+ 	struct aa_label *label = aa_secid_to_label(secid);
+ 	int len;
+ 
+-	AA_BUG(!secdata);
+ 	AA_BUG(!seclen);
+ 
+ 	if (!label)
+diff --git a/security/commoncap.c b/security/commoncap.c
+index f4c33abd9959..2e489d6a3ac8 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -388,7 +388,7 @@ int cap_inode_getsecurity(struct inode *inode, const char *name, void **buffer,
+ 	if (strcmp(name, "capability") != 0)
+ 		return -EOPNOTSUPP;
+ 
+-	dentry = d_find_alias(inode);
++	dentry = d_find_any_alias(inode);
+ 	if (!dentry)
+ 		return -EINVAL;
+ 
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 31f858eceffc..83eed9d7f679 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -503,7 +503,7 @@ static int ac97_bus_remove(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	ret = adrv->remove(adev);
+@@ -511,6 +511,8 @@ static int ac97_bus_remove(struct device *dev)
+ 	if (ret == 0)
+ 		ac97_put_disable_clk(adev);
+ 
++	pm_runtime_disable(dev);
++
+ 	return ret;
+ }
+ 
+diff --git a/sound/ac97/snd_ac97_compat.c b/sound/ac97/snd_ac97_compat.c
+index 61544e0d8de4..8bab44f74bb8 100644
+--- a/sound/ac97/snd_ac97_compat.c
++++ b/sound/ac97/snd_ac97_compat.c
+@@ -15,6 +15,11 @@
+ 
+ #include "ac97_core.h"
+ 
++static void compat_ac97_release(struct device *dev)
++{
++	kfree(to_ac97_t(dev));
++}
++
+ static void compat_ac97_reset(struct snd_ac97 *ac97)
+ {
+ 	struct ac97_codec_device *adev = to_ac97_device(ac97->private_data);
+@@ -65,21 +70,31 @@ static struct snd_ac97_bus compat_soc_ac97_bus = {
+ struct snd_ac97 *snd_ac97_compat_alloc(struct ac97_codec_device *adev)
+ {
+ 	struct snd_ac97 *ac97;
++	int ret;
+ 
+ 	ac97 = kzalloc(sizeof(struct snd_ac97), GFP_KERNEL);
+ 	if (ac97 == NULL)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	ac97->dev = adev->dev;
+ 	ac97->private_data = adev;
+ 	ac97->bus = &compat_soc_ac97_bus;
++
++	ac97->dev.parent = &adev->dev;
++	ac97->dev.release = compat_ac97_release;
++	dev_set_name(&ac97->dev, "%s-compat", dev_name(&adev->dev));
++	ret = device_register(&ac97->dev);
++	if (ret) {
++		put_device(&ac97->dev);
++		return ERR_PTR(ret);
++	}
++
+ 	return ac97;
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_alloc);
+ 
+ void snd_ac97_compat_release(struct snd_ac97 *ac97)
+ {
+-	kfree(ac97);
++	device_unregister(&ac97->dev);
+ }
+ EXPORT_SYMBOL_GPL(snd_ac97_compat_release);
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index d056447520a2..eeb6d1f7cfb3 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -202,6 +202,9 @@ static int auxtrace_queues__grow(struct auxtrace_queues *queues,
+ 	for (i = 0; i < queues->nr_queues; i++) {
+ 		list_splice_tail(&queues->queue_array[i].head,
+ 				 &queue_array[i].head);
++		queue_array[i].tid = queues->queue_array[i].tid;
++		queue_array[i].cpu = queues->queue_array[i].cpu;
++		queue_array[i].set = queues->queue_array[i].set;
+ 		queue_array[i].priv = queues->queue_array[i].priv;
+ 	}
+ 


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-09-05 15:30 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-09-05 15:30 UTC (permalink / raw)
  To: gentoo-commits

commit:     a830aee1944ccf0a758da9c5f5de62ae4aef091f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep  5 15:30:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep  5 15:30:20 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a830aee1

Linux patch 4.18.6

 0000_README             |    4 +
 1005_linux-4.18.6.patch | 5123 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5127 insertions(+)

diff --git a/0000_README b/0000_README
index 8da0979..8bfc2e4 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-4.18.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.5
 
+Patch:  1005_linux-4.18.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-4.18.6.patch b/1005_linux-4.18.6.patch
new file mode 100644
index 0000000..99632b3
--- /dev/null
+++ b/1005_linux-4.18.6.patch
@@ -0,0 +1,5123 @@
+diff --git a/Makefile b/Makefile
+index a41692c5827a..62524f4d42ad 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+@@ -493,9 +493,13 @@ KBUILD_AFLAGS += $(call cc-option, -no-integrated-as)
+ endif
+ 
+ RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register
++RETPOLINE_VDSO_CFLAGS_GCC := -mindirect-branch=thunk-inline -mindirect-branch-register
+ RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk
++RETPOLINE_VDSO_CFLAGS_CLANG := -mretpoline
+ RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG)))
++RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_VDSO_CFLAGS_CLANG)))
+ export RETPOLINE_CFLAGS
++export RETPOLINE_VDSO_CFLAGS
+ 
+ KBUILD_CFLAGS	+= $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS	+= $(call cc-option,-fno-PIE)
+diff --git a/arch/Kconfig b/arch/Kconfig
+index d1f2ed462ac8..f03b72644902 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -354,6 +354,9 @@ config HAVE_ARCH_JUMP_LABEL
+ config HAVE_RCU_TABLE_FREE
+ 	bool
+ 
++config HAVE_RCU_TABLE_INVALIDATE
++	bool
++
+ config ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	bool
+ 
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index f6a62ae44a65..c864f6b045ba 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -238,7 +238,7 @@ static void jit_fill_hole(void *area, unsigned int size)
+ #define STACK_SIZE	ALIGN(_STACK_SIZE, STACK_ALIGNMENT)
+ 
+ /* Get the offset of eBPF REGISTERs stored on scratch space. */
+-#define STACK_VAR(off) (STACK_SIZE - off)
++#define STACK_VAR(off) (STACK_SIZE - off - 4)
+ 
+ #if __LINUX_ARM_ARCH__ < 7
+ 
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index e90cc8a08186..a8be6fe3946d 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -289,8 +289,8 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
+ 				break;
+ 			case KPROBE_REENTER:
+ 				/* A nested probe was hit in FIQ, it is a BUG */
+-				pr_warn("Unrecoverable kprobe detected at %p.\n",
+-					p->addr);
++				pr_warn("Unrecoverable kprobe detected.\n");
++				dump_kprobe(p);
+ 				/* fall through */
+ 			default:
+ 				/* impossible cases */
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index 14db14152909..cc237fa9b90f 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -1461,7 +1461,6 @@ fail:
+ 	print_registers(&result_regs);
+ 
+ 	if (mem) {
+-		pr_err("current_stack=%p\n", current_stack);
+ 		pr_err("expected_memory:\n");
+ 		print_memory(expected_memory, mem_size);
+ 		pr_err("result_memory:\n");
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index b8e9da15e00c..2c1aa84abeea 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -331,7 +331,7 @@
+ 		reg = <0x0 0xff120000 0x0 0x100>;
+ 		interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&cru SCLK_UART1>, <&cru PCLK_UART1>;
+-		clock-names = "sclk_uart", "pclk_uart";
++		clock-names = "baudclk", "apb_pclk";
+ 		dmas = <&dmac 4>, <&dmac 5>;
+ 		dma-names = "tx", "rx";
+ 		pinctrl-names = "default";
+diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
+index 5df5cfe1c143..5ee5bca8c24b 100644
+--- a/arch/arm64/include/asm/cache.h
++++ b/arch/arm64/include/asm/cache.h
+@@ -21,12 +21,16 @@
+ #define CTR_L1IP_SHIFT		14
+ #define CTR_L1IP_MASK		3
+ #define CTR_DMINLINE_SHIFT	16
++#define CTR_IMINLINE_SHIFT	0
+ #define CTR_ERG_SHIFT		20
+ #define CTR_CWG_SHIFT		24
+ #define CTR_CWG_MASK		15
+ #define CTR_IDC_SHIFT		28
+ #define CTR_DIC_SHIFT		29
+ 
++#define CTR_CACHE_MINLINE_MASK	\
++	(0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
++
+ #define CTR_L1IP(ctr)		(((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+ 
+ #define ICACHE_POLICY_VPIPT	0
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 8a699c708fc9..be3bf3d08916 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -49,7 +49,8 @@
+ #define ARM64_HAS_CACHE_DIC			28
+ #define ARM64_HW_DBM				29
+ #define ARM64_SSBD				30
++#define ARM64_MISMATCHED_CACHE_TYPE		31
+ 
+-#define ARM64_NCAPS				31
++#define ARM64_NCAPS				32
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 1d2b6d768efe..5d59ff9a8da9 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -65,12 +65,18 @@ is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
+ }
+ 
+ static bool
+-has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
+-				int scope)
++has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
++			  int scope)
+ {
++	u64 mask = CTR_CACHE_MINLINE_MASK;
++
++	/* Skip matching the min line sizes for cache type check */
++	if (entry->capability == ARM64_MISMATCHED_CACHE_TYPE)
++		mask ^= arm64_ftr_reg_ctrel0.strict_mask;
++
+ 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+-	return (read_cpuid_cachetype() & arm64_ftr_reg_ctrel0.strict_mask) !=
+-		(arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
++	return (read_cpuid_cachetype() & mask) !=
++	       (arm64_ftr_reg_ctrel0.sys_val & mask);
+ }
+ 
+ static void
+@@ -613,7 +619,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 	{
+ 		.desc = "Mismatched cache line size",
+ 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
+-		.matches = has_mismatched_cache_line_size,
++		.matches = has_mismatched_cache_type,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.cpu_enable = cpu_enable_trap_ctr_access,
++	},
++	{
++		.desc = "Mismatched cache type",
++		.capability = ARM64_MISMATCHED_CACHE_TYPE,
++		.matches = has_mismatched_cache_type,
+ 		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ 		.cpu_enable = cpu_enable_trap_ctr_access,
+ 	},
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index c6d80743f4ed..e4103b718a7c 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -214,7 +214,7 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
+ 	 * If we have differing I-cache policies, report it as the weakest - VIPT.
+ 	 */
+ 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
+-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),	/* IminLine */
++	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
+ 	ARM64_FTR_END,
+ };
+ 
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index d849d9804011..22a5921562c7 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -275,7 +275,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
+ 		break;
+ 	case KPROBE_HIT_SS:
+ 	case KPROBE_REENTER:
+-		pr_warn("Unrecoverable kprobe detected at %p.\n", p->addr);
++		pr_warn("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 		break;
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 9abf8a1e7b25..787e27964ab9 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -287,7 +287,11 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ #ifdef CONFIG_HAVE_ARCH_PFN_VALID
+ int pfn_valid(unsigned long pfn)
+ {
+-	return memblock_is_map_memory(pfn << PAGE_SHIFT);
++	phys_addr_t addr = pfn << PAGE_SHIFT;
++
++	if ((addr >> PAGE_SHIFT) != pfn)
++		return 0;
++	return memblock_is_map_memory(addr);
+ }
+ EXPORT_SYMBOL(pfn_valid);
+ #endif
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index e2122cca4ae2..1e98d22ec119 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -155,15 +155,11 @@ cflags-$(CONFIG_CPU_R4300)	+= -march=r4300 -Wa,--trap
+ cflags-$(CONFIG_CPU_VR41XX)	+= -march=r4100 -Wa,--trap
+ cflags-$(CONFIG_CPU_R4X00)	+= -march=r4600 -Wa,--trap
+ cflags-$(CONFIG_CPU_TX49XX)	+= -march=r4600 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R1)	+= $(call cc-option,-march=mips32,-mips32 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS32_R2)	+= $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
+-			-Wa,-mips32r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R1)	+= -march=mips32 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS32_R2)	+= -march=mips32r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS32_R6)	+= -march=mips32r6 -Wa,--trap -modd-spreg
+-cflags-$(CONFIG_CPU_MIPS64_R1)	+= $(call cc-option,-march=mips64,-mips64 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64 -Wa,--trap
+-cflags-$(CONFIG_CPU_MIPS64_R2)	+= $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
+-			-Wa,-mips64r2 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R1)	+= -march=mips64 -Wa,--trap
++cflags-$(CONFIG_CPU_MIPS64_R2)	+= -march=mips64r2 -Wa,--trap
+ cflags-$(CONFIG_CPU_MIPS64_R6)	+= -march=mips64r6 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5000)	+= -march=r5000 -Wa,--trap
+ cflags-$(CONFIG_CPU_R5432)	+= $(call cc-option,-march=r5400,-march=r5000) \
+diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
+index af34afbc32d9..b2fa62922d88 100644
+--- a/arch/mips/include/asm/processor.h
++++ b/arch/mips/include/asm/processor.h
+@@ -141,7 +141,7 @@ struct mips_fpu_struct {
+ 
+ #define NUM_DSP_REGS   6
+ 
+-typedef __u32 dspreg_t;
++typedef unsigned long dspreg_t;
+ 
+ struct mips_dsp_state {
+ 	dspreg_t	dspr[NUM_DSP_REGS];
+@@ -386,7 +386,20 @@ unsigned long get_wchan(struct task_struct *p);
+ #define KSTK_ESP(tsk) (task_pt_regs(tsk)->regs[29])
+ #define KSTK_STATUS(tsk) (task_pt_regs(tsk)->cp0_status)
+ 
++#ifdef CONFIG_CPU_LOONGSON3
++/*
++ * Loongson-3's SFB (Store-Fill-Buffer) may buffer writes indefinitely when a
++ * tight read loop is executed, because reads take priority over writes & the
++ * hardware (incorrectly) doesn't ensure that writes will eventually occur.
++ *
++ * Since spin loops of any kind should have a cpu_relax() in them, force an SFB
++ * flush from cpu_relax() such that any pending writes will become visible as
++ * expected.
++ */
++#define cpu_relax()	smp_mb()
++#else
+ #define cpu_relax()	barrier()
++#endif
+ 
+ /*
+  * Return_address is a replacement for __builtin_return_address(count)
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index 9f6c3f2aa2e2..8c8d42823bda 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -856,7 +856,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c
+index 7edc629304c8..bc348d44d151 100644
+--- a/arch/mips/kernel/ptrace32.c
++++ b/arch/mips/kernel/ptrace32.c
+@@ -142,7 +142,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 				goto out;
+ 			}
+ 			dregs = __get_dsp_regs(child);
+-			tmp = (unsigned long) (dregs[addr - DSP_BASE]);
++			tmp = dregs[addr - DSP_BASE];
+ 			break;
+ 		}
+ 		case DSP_CONTROL:
+diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
+index 1cc306520a55..fac26ce64b2f 100644
+--- a/arch/mips/lib/memset.S
++++ b/arch/mips/lib/memset.S
+@@ -195,6 +195,7 @@
+ #endif
+ #else
+ 	 PTR_SUBU	t0, $0, a2
++	move		a2, zero		/* No remaining longs */
+ 	PTR_ADDIU	t0, 1
+ 	STORE_BYTE(0)
+ 	STORE_BYTE(1)
+@@ -231,7 +232,7 @@
+ 
+ #ifdef CONFIG_CPU_MIPSR6
+ .Lbyte_fixup\@:
+-	PTR_SUBU	a2, $0, t0
++	PTR_SUBU	a2, t0
+ 	jr		ra
+ 	 PTR_ADDIU	a2, 1
+ #endif /* CONFIG_CPU_MIPSR6 */
+diff --git a/arch/mips/lib/multi3.c b/arch/mips/lib/multi3.c
+index 111ad475aa0c..4c2483f410c2 100644
+--- a/arch/mips/lib/multi3.c
++++ b/arch/mips/lib/multi3.c
+@@ -4,12 +4,12 @@
+ #include "libgcc.h"
+ 
+ /*
+- * GCC 7 suboptimally generates __multi3 calls for mips64r6, so for that
+- * specific case only we'll implement it here.
++ * GCC 7 & older can suboptimally generate __multi3 calls for mips64r6, so for
++ * that specific case only we implement that intrinsic here.
+  *
+  * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82981
+  */
+-#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ == 7)
++#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ < 8)
+ 
+ /* multiply 64-bit values, low 64-bits returned */
+ static inline long long notrace dmulu(long long a, long long b)
+diff --git a/arch/s390/include/asm/qdio.h b/arch/s390/include/asm/qdio.h
+index de11ecc99c7c..9c9970a5dfb1 100644
+--- a/arch/s390/include/asm/qdio.h
++++ b/arch/s390/include/asm/qdio.h
+@@ -262,7 +262,6 @@ struct qdio_outbuf_state {
+ 	void *user;
+ };
+ 
+-#define QDIO_OUTBUF_STATE_FLAG_NONE	0x00
+ #define QDIO_OUTBUF_STATE_FLAG_PENDING	0x01
+ 
+ #define CHSC_AC1_INITIATE_INPUTQ	0x80
+diff --git a/arch/s390/lib/mem.S b/arch/s390/lib/mem.S
+index 2311f15be9cf..40c4d59c926e 100644
+--- a/arch/s390/lib/mem.S
++++ b/arch/s390/lib/mem.S
+@@ -17,7 +17,7 @@
+ ENTRY(memmove)
+ 	ltgr	%r4,%r4
+ 	lgr	%r1,%r2
+-	bzr	%r14
++	jz	.Lmemmove_exit
+ 	aghi	%r4,-1
+ 	clgr	%r2,%r3
+ 	jnh	.Lmemmove_forward
+@@ -36,6 +36,7 @@ ENTRY(memmove)
+ .Lmemmove_forward_remainder:
+ 	larl	%r5,.Lmemmove_mvc
+ 	ex	%r4,0(%r5)
++.Lmemmove_exit:
+ 	BR_EX	%r14
+ .Lmemmove_reverse:
+ 	ic	%r0,0(%r4,%r3)
+@@ -65,7 +66,7 @@ EXPORT_SYMBOL(memmove)
+  */
+ ENTRY(memset)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemset_exit
+ 	ltgr	%r3,%r3
+ 	jnz	.Lmemset_fill
+ 	aghi	%r4,-1
+@@ -80,6 +81,7 @@ ENTRY(memset)
+ .Lmemset_clear_remainder:
+ 	larl	%r3,.Lmemset_xc
+ 	ex	%r4,0(%r3)
++.Lmemset_exit:
+ 	BR_EX	%r14
+ .Lmemset_fill:
+ 	cghi	%r4,1
+@@ -115,7 +117,7 @@ EXPORT_SYMBOL(memset)
+  */
+ ENTRY(memcpy)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.Lmemcpy_exit
+ 	aghi	%r4,-1
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -124,6 +126,7 @@ ENTRY(memcpy)
+ .Lmemcpy_remainder:
+ 	larl	%r5,.Lmemcpy_mvc
+ 	ex	%r4,0(%r5)
++.Lmemcpy_exit:
+ 	BR_EX	%r14
+ .Lmemcpy_loop:
+ 	mvc	0(256,%r1),0(%r3)
+@@ -145,9 +148,9 @@ EXPORT_SYMBOL(memcpy)
+ .macro __MEMSET bits,bytes,insn
+ ENTRY(__memset\bits)
+ 	ltgr	%r4,%r4
+-	bzr	%r14
++	jz	.L__memset_exit\bits
+ 	cghi	%r4,\bytes
+-	je	.L__memset_exit\bits
++	je	.L__memset_store\bits
+ 	aghi	%r4,-(\bytes+1)
+ 	srlg	%r5,%r4,8
+ 	ltgr	%r5,%r5
+@@ -163,8 +166,9 @@ ENTRY(__memset\bits)
+ 	larl	%r5,.L__memset_mvc\bits
+ 	ex	%r4,0(%r5)
+ 	BR_EX	%r14
+-.L__memset_exit\bits:
++.L__memset_store\bits:
+ 	\insn	%r3,0(%r2)
++.L__memset_exit\bits:
+ 	BR_EX	%r14
+ .L__memset_mvc\bits:
+ 	mvc	\bytes(1,%r1),0(%r1)
+diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
+index e074480d3598..4cc3f06b0ab3 100644
+--- a/arch/s390/mm/fault.c
++++ b/arch/s390/mm/fault.c
+@@ -502,6 +502,8 @@ retry:
+ 	/* No reason to continue if interrupted by SIGKILL. */
+ 	if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
+ 		fault = VM_FAULT_SIGNAL;
++		if (flags & FAULT_FLAG_RETRY_NOWAIT)
++			goto out_up;
+ 		goto out;
+ 	}
+ 	if (unlikely(fault & VM_FAULT_ERROR))
+diff --git a/arch/s390/mm/page-states.c b/arch/s390/mm/page-states.c
+index 382153ff17e3..dc3cede7f2ec 100644
+--- a/arch/s390/mm/page-states.c
++++ b/arch/s390/mm/page-states.c
+@@ -271,7 +271,7 @@ void arch_set_page_states(int make_stable)
+ 			list_for_each(l, &zone->free_area[order].free_list[t]) {
+ 				page = list_entry(l, struct page, lru);
+ 				if (make_stable)
+-					set_page_stable_dat(page, 0);
++					set_page_stable_dat(page, order);
+ 				else
+ 					set_page_unused(page, order);
+ 			}
+diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
+index 5f0234ec8038..d7052cbe984f 100644
+--- a/arch/s390/net/bpf_jit_comp.c
++++ b/arch/s390/net/bpf_jit_comp.c
+@@ -485,8 +485,6 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
+ 			/* br %r1 */
+ 			_EMIT2(0x07f1);
+ 		} else {
+-			/* larl %r1,.+14 */
+-			EMIT6_PCREL_RILB(0xc0000000, REG_1, jit->prg + 14);
+ 			/* ex 0,S390_lowcore.br_r1_tampoline */
+ 			EMIT4_DISP(0x44000000, REG_0, REG_0,
+ 				   offsetof(struct lowcore, br_r1_trampoline));
+diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
+index 06a80434cfe6..5bd374491f94 100644
+--- a/arch/s390/numa/numa.c
++++ b/arch/s390/numa/numa.c
+@@ -134,26 +134,14 @@ void __init numa_setup(void)
+ {
+ 	pr_info("NUMA mode: %s\n", mode->name);
+ 	nodes_clear(node_possible_map);
++	/* Initially attach all possible CPUs to node 0. */
++	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+ 	if (mode->setup)
+ 		mode->setup();
+ 	numa_setup_memory();
+ 	memblock_dump_all();
+ }
+ 
+-/*
+- * numa_init_early() - Initialization initcall
+- *
+- * This runs when only one CPU is online and before the first
+- * topology update is called for by the scheduler.
+- */
+-static int __init numa_init_early(void)
+-{
+-	/* Attach all possible CPUs to node 0 for now. */
+-	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
+-	return 0;
+-}
+-early_initcall(numa_init_early);
+-
+ /*
+  * numa_init_late() - Initialization initcall
+  *
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 4902fed221c0..8a505cfdd9b9 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -421,6 +421,8 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 	hwirq = 0;
+ 	for_each_pci_msi_entry(msi, pdev) {
+ 		rc = -EIO;
++		if (hwirq >= msi_vecs)
++			break;
+ 		irq = irq_alloc_desc(0);	/* Alloc irq on node 0 */
+ 		if (irq < 0)
+ 			return -ENOMEM;
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 1ace023cbdce..abfa8c7a6d9a 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -7,13 +7,13 @@ purgatory-y := head.o purgatory.o string.o sha256.o mem.o
+ targets += $(purgatory-y) purgatory.ro kexec-purgatory.c
+ PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
+ 
+-$(obj)/sha256.o: $(srctree)/lib/sha256.c
++$(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+-$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S
++$(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE
+ 	$(call if_changed_rule,as_o_S)
+ 
+-$(obj)/string.o: $(srctree)/arch/s390/lib/string.c
++$(obj)/string.o: $(srctree)/arch/s390/lib/string.c FORCE
+ 	$(call if_changed_rule,cc_o_c)
+ 
+ LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib
+@@ -23,6 +23,7 @@ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float
+ KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
++KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))
+ 
+ $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE
+ 		$(call if_changed,ld)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 6b8065d718bd..1aa4dd3b5687 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -179,6 +179,7 @@ config X86
+ 	select HAVE_PERF_REGS
+ 	select HAVE_PERF_USER_STACK_DUMP
+ 	select HAVE_RCU_TABLE_FREE
++	select HAVE_RCU_TABLE_INVALIDATE	if HAVE_RCU_TABLE_FREE
+ 	select HAVE_REGS_AND_STACK_ACCESS_API
+ 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && UNWINDER_FRAME_POINTER && STACK_VALIDATION
+ 	select HAVE_STACKPROTECTOR		if CC_HAS_SANE_STACKPROTECTOR
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index a08e82856563..d944b52649a4 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -180,10 +180,6 @@ ifdef CONFIG_FUNCTION_GRAPH_TRACER
+   endif
+ endif
+ 
+-ifndef CC_HAVE_ASM_GOTO
+-  $(error Compiler lacks asm-goto support.)
+-endif
+-
+ #
+ # Jump labels need '-maccumulate-outgoing-args' for gcc < 4.5.2 to prevent a
+ # GCC bug (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=46226).  There's no way
+@@ -317,6 +313,13 @@ PHONY += vdso_install
+ vdso_install:
+ 	$(Q)$(MAKE) $(build)=arch/x86/entry/vdso $@
+ 
++archprepare: checkbin
++checkbin:
++ifndef CC_HAVE_ASM_GOTO
++	@echo Compiler lacks asm-goto support.
++	@exit 1
++endif
++
+ archclean:
+ 	$(Q)rm -rf $(objtree)/arch/i386
+ 	$(Q)rm -rf $(objtree)/arch/x86_64
+diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
+index 261802b1cc50..9589878faf46 100644
+--- a/arch/x86/entry/vdso/Makefile
++++ b/arch/x86/entry/vdso/Makefile
+@@ -72,9 +72,9 @@ $(obj)/vdso-image-%.c: $(obj)/vdso%.so.dbg $(obj)/vdso%.so $(obj)/vdso2c FORCE
+ CFL := $(PROFILING) -mcmodel=small -fPIC -O2 -fasynchronous-unwind-tables -m64 \
+        $(filter -g%,$(KBUILD_CFLAGS)) $(call cc-option, -fno-stack-protector) \
+        -fno-omit-frame-pointer -foptimize-sibling-calls \
+-       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
++       -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO $(RETPOLINE_VDSO_CFLAGS)
+ 
+-$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
++$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+ 
+ #
+ # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
+@@ -138,11 +138,13 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
++KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
+ KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+ KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+ KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+ KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
+ KBUILD_CFLAGS_32 += -DDISABLE_BRANCH_PROFILING
++KBUILD_CFLAGS_32 += $(RETPOLINE_VDSO_CFLAGS)
+ $(obj)/vdso32.so.dbg: KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
+ 
+ $(obj)/vdso32.so.dbg: FORCE \
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 5f4829f10129..dfb2f7c0d019 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
+ 
+ 	perf_callchain_store(entry, regs->ip);
+ 
+-	if (!current->mm)
++	if (!nmi_uaccess_okay())
+ 		return;
+ 
+ 	if (perf_callchain_user32(regs, entry))
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c14f2a74b2be..15450a675031 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -33,7 +33,8 @@ extern inline unsigned long native_save_fl(void)
+ 	return flags;
+ }
+ 
+-static inline void native_restore_fl(unsigned long flags)
++extern inline void native_restore_fl(unsigned long flags);
++extern inline void native_restore_fl(unsigned long flags)
+ {
+ 	asm volatile("push %0 ; popf"
+ 		     : /* no output */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 682286aca881..d53c54b842da 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -132,6 +132,8 @@ struct cpuinfo_x86 {
+ 	/* Index into per_cpu list: */
+ 	u16			cpu_index;
+ 	u32			microcode;
++	/* Address space bits used by the cache internally */
++	u8			x86_cache_bits;
+ 	unsigned		initialized : 1;
+ } __randomize_layout;
+ 
+@@ -181,9 +183,9 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
+-static inline unsigned long l1tf_pfn_limit(void)
++static inline unsigned long long l1tf_pfn_limit(void)
+ {
+-	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++	return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
+ }
+ 
+ extern void early_cpu_init(void);
+diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
+index b6dc698f992a..f335aad404a4 100644
+--- a/arch/x86/include/asm/stacktrace.h
++++ b/arch/x86/include/asm/stacktrace.h
+@@ -111,6 +111,6 @@ static inline unsigned long caller_frame_pointer(void)
+ 	return (unsigned long)frame;
+ }
+ 
+-void show_opcodes(u8 *rip, const char *loglvl);
++void show_opcodes(struct pt_regs *regs, const char *loglvl);
+ void show_ip(struct pt_regs *regs, const char *loglvl);
+ #endif /* _ASM_X86_STACKTRACE_H */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 6690cd3fc8b1..0af97e51e609 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -175,8 +175,16 @@ struct tlb_state {
+ 	 * are on.  This means that it may not match current->active_mm,
+ 	 * which will contain the previous user mm when we're in lazy TLB
+ 	 * mode even if we've already switched back to swapper_pg_dir.
++	 *
++	 * During switch_mm_irqs_off(), loaded_mm will be set to
++	 * LOADED_MM_SWITCHING during the brief interrupts-off window
++	 * when CR3 and loaded_mm would otherwise be inconsistent.  This
++	 * is for nmi_uaccess_okay()'s benefit.
+ 	 */
+ 	struct mm_struct *loaded_mm;
++
++#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
++
+ 	u16 loaded_mm_asid;
+ 	u16 next_asid;
+ 	/* last user mm's ctx id */
+@@ -246,6 +254,38 @@ struct tlb_state {
+ };
+ DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
+ 
++/*
++ * Blindly accessing user memory from NMI context can be dangerous
++ * if we're in the middle of switching the current user task or
++ * switching the loaded mm.  It can also be dangerous if we
++ * interrupted some kernel code that was temporarily using a
++ * different mm.
++ */
++static inline bool nmi_uaccess_okay(void)
++{
++	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
++	struct mm_struct *current_mm = current->mm;
++
++	VM_WARN_ON_ONCE(!loaded_mm);
++
++	/*
++	 * The condition we want to check is
++	 * current_mm->pgd == __va(read_cr3_pa()).  This may be slow, though,
++	 * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
++	 * is supposed to be reasonably fast.
++	 *
++	 * Instead, we check the almost equivalent but somewhat conservative
++	 * condition below, and we rely on the fact that switch_mm_irqs_off()
++	 * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
++	 */
++	if (loaded_mm != current_mm)
++		return false;
++
++	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
++
++	return true;
++}
++
+ /* Initialize cr4 shadow for this CPU. */
+ static inline void cr4_init_shadow(void)
+ {
+diff --git a/arch/x86/include/asm/vgtod.h b/arch/x86/include/asm/vgtod.h
+index fb856c9f0449..53748541c487 100644
+--- a/arch/x86/include/asm/vgtod.h
++++ b/arch/x86/include/asm/vgtod.h
+@@ -93,7 +93,7 @@ static inline unsigned int __getcpu(void)
+ 	 *
+ 	 * If RDPID is available, use it.
+ 	 */
+-	alternative_io ("lsl %[p],%[seg]",
++	alternative_io ("lsl %[seg],%[p]",
+ 			".byte 0xf3,0x0f,0xc7,0xf8", /* RDPID %eax/rax */
+ 			X86_FEATURE_RDPID,
+ 			[p] "=a" (p), [seg] "r" (__PER_CPU_SEG));
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 664f161f96ff..4891a621a752 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -652,6 +652,45 @@ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+ 
++/*
++ * These CPUs all support 44bits physical address space internally in the
++ * cache but CPUID can report a smaller number of physical address bits.
++ *
++ * The L1TF mitigation uses the top most address bit for the inversion of
++ * non present PTEs. When the installed memory reaches into the top most
++ * address bit due to memory holes, which has been observed on machines
++ * which report 36bits physical address bits and have 32G RAM installed,
++ * then the mitigation range check in l1tf_select_mitigation() triggers.
++ * This is a false positive because the mitigation is still possible due to
++ * the fact that the cache uses 44bit internally. Use the cache bits
++ * instead of the reported physical bits and adjust them on the affected
++ * machines to 44bit if the reported bits are less than 44.
++ */
++static void override_cache_bits(struct cpuinfo_x86 *c)
++{
++	if (c->x86 != 6)
++		return;
++
++	switch (c->x86_model) {
++	case INTEL_FAM6_NEHALEM:
++	case INTEL_FAM6_WESTMERE:
++	case INTEL_FAM6_SANDYBRIDGE:
++	case INTEL_FAM6_IVYBRIDGE:
++	case INTEL_FAM6_HASWELL_CORE:
++	case INTEL_FAM6_HASWELL_ULT:
++	case INTEL_FAM6_HASWELL_GT3E:
++	case INTEL_FAM6_BROADWELL_CORE:
++	case INTEL_FAM6_BROADWELL_GT3E:
++	case INTEL_FAM6_SKYLAKE_MOBILE:
++	case INTEL_FAM6_SKYLAKE_DESKTOP:
++	case INTEL_FAM6_KABYLAKE_MOBILE:
++	case INTEL_FAM6_KABYLAKE_DESKTOP:
++		if (c->x86_cache_bits < 44)
++			c->x86_cache_bits = 44;
++		break;
++	}
++}
++
+ static void __init l1tf_select_mitigation(void)
+ {
+ 	u64 half_pa;
+@@ -659,6 +698,8 @@ static void __init l1tf_select_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+ 		return;
+ 
++	override_cache_bits(&boot_cpu_data);
++
+ 	switch (l1tf_mitigation) {
+ 	case L1TF_MITIGATION_OFF:
+ 	case L1TF_MITIGATION_FLUSH_NOWARN:
+@@ -678,14 +719,13 @@ static void __init l1tf_select_mitigation(void)
+ 	return;
+ #endif
+ 
+-	/*
+-	 * This is extremely unlikely to happen because almost all
+-	 * systems have far more MAX_PA/2 than RAM can be fit into
+-	 * DIMM slots.
+-	 */
+ 	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
+ 	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
+ 		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
++				half_pa);
++		pr_info("However, doing so will make a part of your RAM unusable.\n");
++		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html might help you decide.\n");
+ 		return;
+ 	}
+ 
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b41b72bd8bb8..1ee8ea36af30 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -919,6 +919,7 @@ void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ 	else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
+ 		c->x86_phys_bits = 36;
+ #endif
++	c->x86_cache_bits = c->x86_phys_bits;
+ }
+ 
+ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 6602941cfebf..3f0abb62161b 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -150,6 +150,9 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
+ 		return false;
+ 
++	if (c->x86 != 6)
++		return false;
++
+ 	for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ 		if (c->x86_model == spectre_bad_microcodes[i].model &&
+ 		    c->x86_stepping == spectre_bad_microcodes[i].stepping)
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index 666a284116ac..17b02adc79aa 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -17,6 +17,7 @@
+ #include <linux/bug.h>
+ #include <linux/nmi.h>
+ #include <linux/sysfs.h>
++#include <linux/kasan.h>
+ 
+ #include <asm/cpu_entry_area.h>
+ #include <asm/stacktrace.h>
+@@ -91,23 +92,32 @@ static void printk_stack_address(unsigned long address, int reliable,
+  * Thus, the 2/3rds prologue and 64 byte OPCODE_BUFSIZE is just a random
+  * guesstimate in attempt to achieve all of the above.
+  */
+-void show_opcodes(u8 *rip, const char *loglvl)
++void show_opcodes(struct pt_regs *regs, const char *loglvl)
+ {
+ 	unsigned int code_prologue = OPCODE_BUFSIZE * 2 / 3;
+ 	u8 opcodes[OPCODE_BUFSIZE];
+-	u8 *ip;
++	unsigned long ip;
+ 	int i;
++	bool bad_ip;
+ 
+ 	printk("%sCode: ", loglvl);
+ 
+-	ip = (u8 *)rip - code_prologue;
+-	if (probe_kernel_read(opcodes, ip, OPCODE_BUFSIZE)) {
++	ip = regs->ip - code_prologue;
++
++	/*
++	 * Make sure userspace isn't trying to trick us into dumping kernel
++	 * memory by pointing the userspace instruction pointer at it.
++	 */
++	bad_ip = user_mode(regs) &&
++		 __chk_range_not_ok(ip, OPCODE_BUFSIZE, TASK_SIZE_MAX);
++
++	if (bad_ip || probe_kernel_read(opcodes, (u8 *)ip, OPCODE_BUFSIZE)) {
+ 		pr_cont("Bad RIP value.\n");
+ 		return;
+ 	}
+ 
+ 	for (i = 0; i < OPCODE_BUFSIZE; i++, ip++) {
+-		if (ip == rip)
++		if (ip == regs->ip)
+ 			pr_cont("<%02x> ", opcodes[i]);
+ 		else
+ 			pr_cont("%02x ", opcodes[i]);
+@@ -122,7 +132,7 @@ void show_ip(struct pt_regs *regs, const char *loglvl)
+ #else
+ 	printk("%sRIP: %04x:%pS\n", loglvl, (int)regs->cs, (void *)regs->ip);
+ #endif
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ void show_iret_regs(struct pt_regs *regs)
+@@ -356,7 +366,10 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+ 	 * We're not going to return, but we might be on an IST stack or
+ 	 * have very little stack space left.  Rewind the stack and kill
+ 	 * the task.
++	 * Before we rewind the stack, we have to tell KASAN that we're going to
++	 * reuse the task stack and that existing poisons are invalid.
+ 	 */
++	kasan_unpoison_task_stack(current);
+ 	rewind_stack_do_exit(signr);
+ }
+ NOKPROBE_SYMBOL(oops_end);
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index da5d8ac60062..50d5848bf22e 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -338,6 +338,18 @@ static resource_size_t __init gen3_stolen_base(int num, int slot, int func,
+ 	return bsm & INTEL_BSM_MASK;
+ }
+ 
++static resource_size_t __init gen11_stolen_base(int num, int slot, int func,
++						resource_size_t stolen_size)
++{
++	u64 bsm;
++
++	bsm = read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW0);
++	bsm &= INTEL_BSM_MASK;
++	bsm |= (u64)read_pci_config(num, slot, func, INTEL_GEN11_BSM_DW1) << 32;
++
++	return bsm;
++}
++
+ static resource_size_t __init i830_stolen_size(int num, int slot, int func)
+ {
+ 	u16 gmch_ctrl;
+@@ -498,6 +510,11 @@ static const struct intel_early_ops chv_early_ops __initconst = {
+ 	.stolen_size = chv_stolen_size,
+ };
+ 
++static const struct intel_early_ops gen11_early_ops __initconst = {
++	.stolen_base = gen11_stolen_base,
++	.stolen_size = gen9_stolen_size,
++};
++
+ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_I830_IDS(&i830_early_ops),
+ 	INTEL_I845G_IDS(&i845_early_ops),
+@@ -529,6 +546,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
+ 	INTEL_CFL_IDS(&gen9_early_ops),
+ 	INTEL_GLK_IDS(&gen9_early_ops),
+ 	INTEL_CNL_IDS(&gen9_early_ops),
++	INTEL_ICL_11_IDS(&gen11_early_ops),
+ };
+ 
+ struct resource intel_graphics_stolen_res __ro_after_init = DEFINE_RES_MEM(0, 0);
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 12bb445fb98d..4344a032ebe6 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -384,6 +384,7 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
+ 	start_thread_common(regs, new_ip, new_sp,
+ 			    __USER_CS, __USER_DS, 0);
+ }
++EXPORT_SYMBOL_GPL(start_thread);
+ 
+ #ifdef CONFIG_COMPAT
+ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp)
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index af8caf965baa..01d209ab5481 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -235,7 +235,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	struct kvm_vcpu *vcpu = synic_to_vcpu(synic);
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -295,11 +295,12 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ 	return ret;
+ }
+ 
+-static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata)
++static int synic_get_msr(struct kvm_vcpu_hv_synic *synic, u32 msr, u64 *pdata,
++			 bool host)
+ {
+ 	int ret;
+ 
+-	if (!synic->active)
++	if (!synic->active && !host)
+ 		return 1;
+ 
+ 	ret = 0;
+@@ -1014,6 +1015,11 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		hv->hv_tsc_emulation_status = data;
+ 		break;
++	case HV_X64_MSR_TIME_REF_COUNT:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1101,6 +1107,12 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return stimer_set_count(vcpu_to_stimer(vcpu, timer_index),
+ 					data, host);
+ 	}
++	case HV_X64_MSR_TSC_FREQUENCY:
++	case HV_X64_MSR_APIC_FREQUENCY:
++		/* read-only, but still ignore it if host-initiated */
++		if (!host)
++			return 1;
++		break;
+ 	default:
+ 		vcpu_unimpl(vcpu, "Hyper-V uhandled wrmsr: 0x%x data 0x%llx\n",
+ 			    msr, data);
+@@ -1156,7 +1168,8 @@ static int kvm_hv_get_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	return 0;
+ }
+ 
+-static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
++			  bool host)
+ {
+ 	u64 data = 0;
+ 	struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
+@@ -1183,7 +1196,7 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 	case HV_X64_MSR_SIMP:
+ 	case HV_X64_MSR_EOM:
+ 	case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
+-		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata);
++		return synic_get_msr(vcpu_to_synic(vcpu), msr, pdata, host);
+ 	case HV_X64_MSR_STIMER0_CONFIG:
+ 	case HV_X64_MSR_STIMER1_CONFIG:
+ 	case HV_X64_MSR_STIMER2_CONFIG:
+@@ -1229,7 +1242,7 @@ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
+ 		return kvm_hv_set_msr(vcpu, msr, data, host);
+ }
+ 
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	if (kvm_hv_msr_partition_wide(msr)) {
+ 		int r;
+@@ -1239,7 +1252,7 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		mutex_unlock(&vcpu->kvm->arch.hyperv.hv_lock);
+ 		return r;
+ 	} else
+-		return kvm_hv_get_msr(vcpu, msr, pdata);
++		return kvm_hv_get_msr(vcpu, msr, pdata, host);
+ }
+ 
+ static __always_inline int get_sparse_bank_no(u64 valid_bank_mask, int bank_no)
+diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
+index 837465d69c6d..d6aa969e20f1 100644
+--- a/arch/x86/kvm/hyperv.h
++++ b/arch/x86/kvm/hyperv.h
+@@ -48,7 +48,7 @@ static inline struct kvm_vcpu *synic_to_vcpu(struct kvm_vcpu_hv_synic *synic)
+ }
+ 
+ int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);
+-int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
++int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host);
+ 
+ bool kvm_hv_hypercall_enabled(struct kvm *kvm);
+ int kvm_hv_hypercall(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f059a73f0fd0..9799f86388e7 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -5580,8 +5580,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 
+ 	clgi();
+ 
+-	local_irq_enable();
+-
+ 	/*
+ 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
+ 	 * it's non-zero. Since vmentry is serialising on affected CPUs, there
+@@ -5590,6 +5588,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	 */
+ 	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
+ 
++	local_irq_enable();
++
+ 	asm volatile (
+ 		"push %%" _ASM_BP "; \n\t"
+ 		"mov %c[rbx](%[svm]), %%" _ASM_BX " \n\t"
+@@ -5712,12 +5712,12 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
+ 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+ 
+-	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+-
+ 	reload_tss(vcpu);
+ 
+ 	local_irq_disable();
+ 
++	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
++
+ 	vcpu->arch.cr2 = svm->vmcb->save.cr2;
+ 	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
+ 	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index a5caa5e5480c..24c84aa87049 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2185,10 +2185,11 @@ static int set_msr_mce(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.mcg_status = data;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) &&
++		    (data || !msr_info->host_initiated))
+ 			return 1;
+ 		if (data != 0 && data != ~(u64)0)
+-			return -1;
++			return 1;
+ 		vcpu->arch.mcg_ctl = data;
+ 		break;
+ 	default:
+@@ -2576,7 +2577,7 @@ int kvm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ }
+ EXPORT_SYMBOL_GPL(kvm_get_msr);
+ 
+-static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
++static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata, bool host)
+ {
+ 	u64 data;
+ 	u64 mcg_cap = vcpu->arch.mcg_cap;
+@@ -2591,7 +2592,7 @@ static int get_msr_mce(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+ 		data = vcpu->arch.mcg_cap;
+ 		break;
+ 	case MSR_IA32_MCG_CTL:
+-		if (!(mcg_cap & MCG_CTL_P))
++		if (!(mcg_cap & MCG_CTL_P) && !host)
+ 			return 1;
+ 		data = vcpu->arch.mcg_ctl;
+ 		break;
+@@ -2724,7 +2725,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_MCG_CTL:
+ 	case MSR_IA32_MCG_STATUS:
+ 	case MSR_IA32_MC0_CTL ... MSR_IA32_MCx_CTL(KVM_MAX_MCE_BANKS) - 1:
+-		return get_msr_mce(vcpu, msr_info->index, &msr_info->data);
++		return get_msr_mce(vcpu, msr_info->index, &msr_info->data,
++				   msr_info->host_initiated);
+ 	case MSR_K7_CLK_CTL:
+ 		/*
+ 		 * Provide expected ramp-up count for K7. All other
+@@ -2745,7 +2747,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case HV_X64_MSR_TSC_EMULATION_CONTROL:
+ 	case HV_X64_MSR_TSC_EMULATION_STATUS:
+ 		return kvm_hv_get_msr_common(vcpu,
+-					     msr_info->index, &msr_info->data);
++					     msr_info->index, &msr_info->data,
++					     msr_info->host_initiated);
+ 		break;
+ 	case MSR_IA32_BBL_CR_CTL3:
+ 		/* This legacy MSR exists but isn't fully documented in current
+diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
+index c8c6ad0d58b8..3f435d7fca5e 100644
+--- a/arch/x86/lib/usercopy.c
++++ b/arch/x86/lib/usercopy.c
+@@ -7,6 +7,8 @@
+ #include <linux/uaccess.h>
+ #include <linux/export.h>
+ 
++#include <asm/tlbflush.h>
++
+ /*
+  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
+  * nested NMI paths are careful to preserve CR2.
+@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
+ 	if (__range_not_ok(from, n, TASK_SIZE))
+ 		return n;
+ 
++	if (!nmi_uaccess_okay())
++		return n;
++
+ 	/*
+ 	 * Even though this function is typically called from NMI/IRQ context
+ 	 * disable pagefaults so that its behaviour is consistent even when
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 2aafa6ab6103..d1f1612672c7 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -838,7 +838,7 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
+ 
+ 	printk(KERN_CONT "\n");
+ 
+-	show_opcodes((u8 *)regs->ip, loglvl);
++	show_opcodes(regs, loglvl);
+ }
+ 
+ static void
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index acfab322fbe0..63a6f9fcaf20 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -923,7 +923,7 @@ unsigned long max_swapfile_size(void)
+ 
+ 	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
+ 		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
+-		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		unsigned long long l1tf_limit = l1tf_pfn_limit();
+ 		/*
+ 		 * We encode swap offsets also with 3 bits below those for pfn
+ 		 * which makes the usable limit higher.
+@@ -931,7 +931,7 @@ unsigned long max_swapfile_size(void)
+ #if CONFIG_PGTABLE_LEVELS > 2
+ 		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
+ #endif
+-		pages = min_t(unsigned long, l1tf_limit, pages);
++		pages = min_t(unsigned long long, l1tf_limit, pages);
+ 	}
+ 	return pages;
+ }
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index f40ab8185d94..1e95d57760cf 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -257,7 +257,7 @@ bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+ 	/* If it's real memory always allow */
+ 	if (pfn_valid(pfn))
+ 		return true;
+-	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++	if (pfn >= l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
+ 		return false;
+ 	return true;
+ }
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 6eb1f34c3c85..cd2617285e2e 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -298,6 +298,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 
+ 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
+ 
++		/* Let nmi_uaccess_okay() know that we're changing CR3. */
++		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
++		barrier();
++
+ 		if (need_flush) {
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
+ 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
+@@ -328,6 +332,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ 		if (next != &init_mm)
+ 			this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
+ 
++		/* Make sure we write CR3 before loaded_mm. */
++		barrier();
++
+ 		this_cpu_write(cpu_tlbstate.loaded_mm, next);
+ 		this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
+ 	}
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index cc71c63df381..984b37647b2f 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -6424,6 +6424,7 @@ void ata_host_init(struct ata_host *host, struct device *dev,
+ 	host->n_tags = ATA_MAX_QUEUE;
+ 	host->dev = dev;
+ 	host->ops = ops;
++	kref_init(&host->kref);
+ }
+ 
+ void __ata_port_probe(struct ata_port *ap)
+@@ -7391,3 +7392,5 @@ EXPORT_SYMBOL_GPL(ata_cable_80wire);
+ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
++EXPORT_SYMBOL_GPL(ata_host_get);
++EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index 9e21c49cf6be..f953cb4bb1ba 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -100,8 +100,6 @@ extern int ata_port_probe(struct ata_port *ap);
+ extern void __ata_port_probe(struct ata_port *ap);
+ extern unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 				      u8 page, void *buf, unsigned int sectors);
+-extern void ata_host_get(struct ata_host *host);
+-extern void ata_host_put(struct ata_host *host);
+ 
+ #define to_ata_port(d) container_of(d, struct ata_port, tdev)
+ 
+diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
+index 8e2e4757adcb..5a42ae4078c2 100644
+--- a/drivers/base/power/clock_ops.c
++++ b/drivers/base/power/clock_ops.c
+@@ -185,7 +185,7 @@ EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);
+ int of_pm_clk_add_clks(struct device *dev)
+ {
+ 	struct clk **clks;
+-	unsigned int i, count;
++	int i, count;
+ 	int ret;
+ 
+ 	if (!dev || !dev->of_node)
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index a78b8e7085e9..66acbd063562 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -2542,7 +2542,7 @@ static int cdrom_ioctl_drive_status(struct cdrom_device_info *cdi,
+ 	if (!CDROM_CAN(CDC_SELECT_DISC) ||
+ 	    (arg == CDSL_CURRENT || arg == CDSL_NONE))
+ 		return cdi->ops->drive_status(cdi, CDSL_CURRENT);
+-	if (((int)arg >= cdi->capacity))
++	if (arg >= cdi->capacity)
+ 		return -EINVAL;
+ 	return cdrom_slot_status(cdi, arg);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index e32f6e85dc6d..3a3a7a548a85 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -29,7 +29,6 @@
+ #include <linux/mutex.h>
+ #include <linux/spinlock.h>
+ #include <linux/freezer.h>
+-#include <linux/pm_runtime.h>
+ #include <linux/tpm_eventlog.h>
+ 
+ #include "tpm.h"
+@@ -369,10 +368,13 @@ err_len:
+ 	return -EINVAL;
+ }
+ 
+-static int tpm_request_locality(struct tpm_chip *chip)
++static int tpm_request_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
+ 	if (!chip->ops->request_locality)
+ 		return 0;
+ 
+@@ -385,10 +387,13 @@ static int tpm_request_locality(struct tpm_chip *chip)
+ 	return 0;
+ }
+ 
+-static void tpm_relinquish_locality(struct tpm_chip *chip)
++static void tpm_relinquish_locality(struct tpm_chip *chip, unsigned int flags)
+ {
+ 	int rc;
+ 
++	if (flags & TPM_TRANSMIT_RAW)
++		return;
++
+ 	if (!chip->ops->relinquish_locality)
+ 		return;
+ 
+@@ -399,6 +404,28 @@ static void tpm_relinquish_locality(struct tpm_chip *chip)
+ 	chip->locality = -1;
+ }
+ 
++static int tpm_cmd_ready(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->cmd_ready)
++		return 0;
++
++	return chip->ops->cmd_ready(chip);
++}
++
++static int tpm_go_idle(struct tpm_chip *chip, unsigned int flags)
++{
++	if (flags & TPM_TRANSMIT_RAW)
++		return 0;
++
++	if (!chip->ops->go_idle)
++		return 0;
++
++	return chip->ops->go_idle(chip);
++}
++
+ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 				struct tpm_space *space,
+ 				u8 *buf, size_t bufsiz,
+@@ -423,7 +450,7 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 		header->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
+ 		header->return_code = cpu_to_be32(TPM2_RC_COMMAND_CODE |
+ 						  TSS2_RESMGR_TPM_RC_LAYER);
+-		return bufsiz;
++		return sizeof(*header);
+ 	}
+ 
+ 	if (bufsiz > TPM_BUFSIZE)
+@@ -449,14 +476,15 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 	/* Store the decision as chip->locality will be changed. */
+ 	need_locality = chip->locality == -1;
+ 
+-	if (!(flags & TPM_TRANSMIT_RAW) && need_locality) {
+-		rc = tpm_request_locality(chip);
++	if (need_locality) {
++		rc = tpm_request_locality(chip, flags);
+ 		if (rc < 0)
+ 			goto out_no_locality;
+ 	}
+ 
+-	if (chip->dev.parent)
+-		pm_runtime_get_sync(chip->dev.parent);
++	rc = tpm_cmd_ready(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	rc = tpm2_prepare_space(chip, space, ordinal, buf);
+ 	if (rc)
+@@ -516,13 +544,16 @@ out_recv:
+ 	}
+ 
+ 	rc = tpm2_commit_space(chip, space, ordinal, buf, &len);
++	if (rc)
++		dev_err(&chip->dev, "tpm2_commit_space: error %d\n", rc);
+ 
+ out:
+-	if (chip->dev.parent)
+-		pm_runtime_put_sync(chip->dev.parent);
++	rc = tpm_go_idle(chip, flags);
++	if (rc)
++		goto out;
+ 
+ 	if (need_locality)
+-		tpm_relinquish_locality(chip);
++		tpm_relinquish_locality(chip, flags);
+ 
+ out_no_locality:
+ 	if (chip->ops->clk_enable != NULL)
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 4426649e431c..5f02dcd3df97 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -511,9 +511,17 @@ extern const struct file_operations tpm_fops;
+ extern const struct file_operations tpmrm_fops;
+ extern struct idr dev_nums_idr;
+ 
++/**
++ * enum tpm_transmit_flags
++ *
++ * @TPM_TRANSMIT_UNLOCKED: used to lock sequence of tpm_transmit calls.
++ * @TPM_TRANSMIT_RAW: prevent recursive calls into setup steps
++ *                    (go idle, locality,..). Always use with UNLOCKED
++ *                    as it will fail on double locking.
++ */
+ enum tpm_transmit_flags {
+-	TPM_TRANSMIT_UNLOCKED	= BIT(0),
+-	TPM_TRANSMIT_RAW	= BIT(1),
++	TPM_TRANSMIT_UNLOCKED = BIT(0),
++	TPM_TRANSMIT_RAW      = BIT(1),
+ };
+ 
+ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 6122d3276f72..11c85ed8c113 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -39,7 +39,8 @@ static void tpm2_flush_sessions(struct tpm_chip *chip, struct tpm_space *space)
+ 	for (i = 0; i < ARRAY_SIZE(space->session_tbl); i++) {
+ 		if (space->session_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->session_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 	}
+ }
+ 
+@@ -84,7 +85,7 @@ static int tpm2_load_context(struct tpm_chip *chip, u8 *buf,
+ 	tpm_buf_append(&tbuf, &buf[*offset], body_size);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 4,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -133,7 +134,7 @@ static int tpm2_save_context(struct tpm_chip *chip, u32 handle, u8 *buf,
+ 	tpm_buf_append_u32(&tbuf, handle);
+ 
+ 	rc = tpm_transmit_cmd(chip, NULL, tbuf.data, PAGE_SIZE, 0,
+-			      TPM_TRANSMIT_UNLOCKED, NULL);
++			      TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW, NULL);
+ 	if (rc < 0) {
+ 		dev_warn(&chip->dev, "%s: failed with a system error %d\n",
+ 			 __func__, rc);
+@@ -170,7 +171,8 @@ static void tpm2_flush_space(struct tpm_chip *chip)
+ 	for (i = 0; i < ARRAY_SIZE(space->context_tbl); i++)
+ 		if (space->context_tbl[i] && ~space->context_tbl[i])
+ 			tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-					       TPM_TRANSMIT_UNLOCKED);
++					       TPM_TRANSMIT_UNLOCKED |
++					       TPM_TRANSMIT_RAW);
+ 
+ 	tpm2_flush_sessions(chip, space);
+ }
+@@ -377,7 +379,8 @@ static int tpm2_map_response_header(struct tpm_chip *chip, u32 cc, u8 *rsp,
+ 
+ 	return 0;
+ out_no_slots:
+-	tpm2_flush_context_cmd(chip, phandle, TPM_TRANSMIT_UNLOCKED);
++	tpm2_flush_context_cmd(chip, phandle,
++			       TPM_TRANSMIT_UNLOCKED | TPM_TRANSMIT_RAW);
+ 	dev_warn(&chip->dev, "%s: out of slots for 0x%08X\n", __func__,
+ 		 phandle);
+ 	return -ENOMEM;
+@@ -465,7 +468,8 @@ static int tpm2_save_space(struct tpm_chip *chip)
+ 			return rc;
+ 
+ 		tpm2_flush_context_cmd(chip, space->context_tbl[i],
+-				       TPM_TRANSMIT_UNLOCKED);
++				       TPM_TRANSMIT_UNLOCKED |
++				       TPM_TRANSMIT_RAW);
+ 		space->context_tbl[i] = ~0;
+ 	}
+ 
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 34fbc6cb097b..36952ef98f90 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -132,7 +132,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+ }
+ 
+ /**
+- * crb_go_idle - request tpm crb device to go the idle state
++ * __crb_go_idle - request tpm crb device to go the idle state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -147,7 +147,7 @@ static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
+  *
+  * Return: 0 always
+  */
+-static int crb_go_idle(struct device *dev, struct crb_priv *priv)
++static int __crb_go_idle(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -163,11 +163,20 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+ 		dev_warn(dev, "goIdle timed out\n");
+ 		return -ETIME;
+ 	}
++
+ 	return 0;
+ }
+ 
++static int crb_go_idle(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_go_idle(dev, priv);
++}
++
+ /**
+- * crb_cmd_ready - request tpm crb device to enter ready state
++ * __crb_cmd_ready - request tpm crb device to enter ready state
+  *
+  * @dev:  crb device
+  * @priv: crb private data
+@@ -181,7 +190,7 @@ static int crb_go_idle(struct device *dev, struct crb_priv *priv)
+  *
+  * Return: 0 on success -ETIME on timeout;
+  */
+-static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
++static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ {
+ 	if ((priv->sm == ACPI_TPM2_START_METHOD) ||
+ 	    (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD) ||
+@@ -200,6 +209,14 @@ static int crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+ 	return 0;
+ }
+ 
++static int crb_cmd_ready(struct tpm_chip *chip)
++{
++	struct device *dev = &chip->dev;
++	struct crb_priv *priv = dev_get_drvdata(dev);
++
++	return __crb_cmd_ready(dev, priv);
++}
++
+ static int __crb_request_locality(struct device *dev,
+ 				  struct crb_priv *priv, int loc)
+ {
+@@ -401,6 +418,8 @@ static const struct tpm_class_ops tpm_crb = {
+ 	.send = crb_send,
+ 	.cancel = crb_cancel,
+ 	.req_canceled = crb_req_canceled,
++	.go_idle  = crb_go_idle,
++	.cmd_ready = crb_cmd_ready,
+ 	.request_locality = crb_request_locality,
+ 	.relinquish_locality = crb_relinquish_locality,
+ 	.req_complete_mask = CRB_DRV_STS_COMPLETE,
+@@ -520,7 +539,7 @@ static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
+ 	 * PTT HW bug w/a: wake up the device to access
+ 	 * possibly not retained registers.
+ 	 */
+-	ret = crb_cmd_ready(dev, priv);
++	ret = __crb_cmd_ready(dev, priv);
+ 	if (ret)
+ 		goto out_relinquish_locality;
+ 
+@@ -565,7 +584,7 @@ out:
+ 	if (!ret)
+ 		priv->cmd_size = cmd_size;
+ 
+-	crb_go_idle(dev, priv);
++	__crb_go_idle(dev, priv);
+ 
+ out_relinquish_locality:
+ 
+@@ -628,32 +647,7 @@ static int crb_acpi_add(struct acpi_device *device)
+ 	chip->acpi_dev_handle = device->handle;
+ 	chip->flags = TPM_CHIP_FLAG_TPM2;
+ 
+-	rc = __crb_request_locality(dev, priv, 0);
+-	if (rc)
+-		return rc;
+-
+-	rc  = crb_cmd_ready(dev, priv);
+-	if (rc)
+-		goto out;
+-
+-	pm_runtime_get_noresume(dev);
+-	pm_runtime_set_active(dev);
+-	pm_runtime_enable(dev);
+-
+-	rc = tpm_chip_register(chip);
+-	if (rc) {
+-		crb_go_idle(dev, priv);
+-		pm_runtime_put_noidle(dev);
+-		pm_runtime_disable(dev);
+-		goto out;
+-	}
+-
+-	pm_runtime_put_sync(dev);
+-
+-out:
+-	__crb_relinquish_locality(dev, priv, 0);
+-
+-	return rc;
++	return tpm_chip_register(chip);
+ }
+ 
+ static int crb_acpi_remove(struct acpi_device *device)
+@@ -663,52 +657,11 @@ static int crb_acpi_remove(struct acpi_device *device)
+ 
+ 	tpm_chip_unregister(chip);
+ 
+-	pm_runtime_disable(dev);
+-
+ 	return 0;
+ }
+ 
+-static int __maybe_unused crb_pm_runtime_suspend(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_go_idle(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_runtime_resume(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+-
+-	return crb_cmd_ready(dev, priv);
+-}
+-
+-static int __maybe_unused crb_pm_suspend(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = tpm_pm_suspend(dev);
+-	if (ret)
+-		return ret;
+-
+-	return crb_pm_runtime_suspend(dev);
+-}
+-
+-static int __maybe_unused crb_pm_resume(struct device *dev)
+-{
+-	int ret;
+-
+-	ret = crb_pm_runtime_resume(dev);
+-	if (ret)
+-		return ret;
+-
+-	return tpm_pm_resume(dev);
+-}
+-
+ static const struct dev_pm_ops crb_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(crb_pm_suspend, crb_pm_resume)
+-	SET_RUNTIME_PM_OPS(crb_pm_runtime_suspend, crb_pm_runtime_resume, NULL)
++	SET_SYSTEM_SLEEP_PM_OPS(tpm_pm_suspend, tpm_pm_resume)
+ };
+ 
+ static const struct acpi_device_id crb_device_ids[] = {
+diff --git a/drivers/clk/clk-npcm7xx.c b/drivers/clk/clk-npcm7xx.c
+index 740af90a9508..c5edf8f2fd19 100644
+--- a/drivers/clk/clk-npcm7xx.c
++++ b/drivers/clk/clk-npcm7xx.c
+@@ -558,8 +558,8 @@ static void __init npcm7xx_clk_init(struct device_node *clk_np)
+ 	if (!clk_base)
+ 		goto npcm7xx_init_error;
+ 
+-	npcm7xx_clk_data = kzalloc(sizeof(*npcm7xx_clk_data->hws) *
+-		NPCM7XX_NUM_CLOCKS + sizeof(npcm7xx_clk_data), GFP_KERNEL);
++	npcm7xx_clk_data = kzalloc(struct_size(npcm7xx_clk_data, hws,
++				   NPCM7XX_NUM_CLOCKS), GFP_KERNEL);
+ 	if (!npcm7xx_clk_data)
+ 		goto npcm7xx_init_np_err;
+ 
+diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
+index bca10d618f0a..2a8634a52856 100644
+--- a/drivers/clk/rockchip/clk-rk3399.c
++++ b/drivers/clk/rockchip/clk-rk3399.c
+@@ -631,7 +631,7 @@ static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ 	MUX(0, "clk_i2sout_src", mux_i2sch_p, CLK_SET_RATE_PARENT,
+ 			RK3399_CLKSEL_CON(31), 0, 2, MFLAGS),
+ 	COMPOSITE_NODIV(SCLK_I2S_8CH_OUT, "clk_i2sout", mux_i2sout_p, CLK_SET_RATE_PARENT,
+-			RK3399_CLKSEL_CON(30), 8, 2, MFLAGS,
++			RK3399_CLKSEL_CON(31), 2, 1, MFLAGS,
+ 			RK3399_CLKGATE_CON(8), 12, GFLAGS),
+ 
+ 	/* uart */
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index 55c0cc309198..7588a9eb0ee0 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -112,7 +112,7 @@ udl_fb_user_fb_create(struct drm_device *dev,
+ 		      struct drm_file *file,
+ 		      const struct drm_mode_fb_cmd2 *mode_cmd);
+ 
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset, u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr);
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index d5583190f3e4..8746eeeec44d 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -90,7 +90,10 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 	int bytes_identical = 0;
+ 	struct urb *urb;
+ 	int aligned_x;
+-	int bpp = fb->base.format->cpp[0];
++	int log_bpp;
++
++	BUG_ON(!is_power_of_2(fb->base.format->cpp[0]));
++	log_bpp = __ffs(fb->base.format->cpp[0]);
+ 
+ 	if (!fb->active_16)
+ 		return 0;
+@@ -125,12 +128,12 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ 
+ 	for (i = y; i < y + height ; i++) {
+ 		const int line_offset = fb->base.pitches[0] * i;
+-		const int byte_offset = line_offset + (x * bpp);
+-		const int dev_byte_offset = (fb->base.width * bpp * i) + (x * bpp);
+-		if (udl_render_hline(dev, bpp, &urb,
++		const int byte_offset = line_offset + (x << log_bpp);
++		const int dev_byte_offset = (fb->base.width * i + x) << log_bpp;
++		if (udl_render_hline(dev, log_bpp, &urb,
+ 				     (char *) fb->obj->vmapping,
+ 				     &cmd, byte_offset, dev_byte_offset,
+-				     width * bpp,
++				     width << log_bpp,
+ 				     &bytes_identical, &bytes_sent))
+ 			goto error;
+ 	}
+@@ -149,7 +152,7 @@ int udl_handle_damage(struct udl_framebuffer *fb, int x, int y,
+ error:
+ 	atomic_add(bytes_sent, &udl->bytes_sent);
+ 	atomic_add(bytes_identical, &udl->bytes_identical);
+-	atomic_add(width*height*bpp, &udl->bytes_rendered);
++	atomic_add((width * height) << log_bpp, &udl->bytes_rendered);
+ 	end_cycles = get_cycles();
+ 	atomic_add(((unsigned int) ((end_cycles - start_cycles)
+ 		    >> 10)), /* Kcycles */
+@@ -221,7 +224,7 @@ static int udl_fb_open(struct fb_info *info, int user)
+ 
+ 		struct fb_deferred_io *fbdefio;
+ 
+-		fbdefio = kmalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
++		fbdefio = kzalloc(sizeof(struct fb_deferred_io), GFP_KERNEL);
+ 
+ 		if (fbdefio) {
+ 			fbdefio->delay = DL_DEFIO_WRITE_DELAY;
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index d518de8f496b..7e9ad926926a 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -170,18 +170,13 @@ static void udl_free_urb_list(struct drm_device *dev)
+ 	struct list_head *node;
+ 	struct urb_node *unode;
+ 	struct urb *urb;
+-	int ret;
+ 	unsigned long flags;
+ 
+ 	DRM_DEBUG("Waiting for completes and freeing all render urbs\n");
+ 
+ 	/* keep waiting and freeing, until we've got 'em all */
+ 	while (count--) {
+-
+-		/* Getting interrupted means a leak, but ok at shutdown*/
+-		ret = down_interruptible(&udl->urbs.limit_sem);
+-		if (ret)
+-			break;
++		down(&udl->urbs.limit_sem);
+ 
+ 		spin_lock_irqsave(&udl->urbs.lock, flags);
+ 
+@@ -205,17 +200,22 @@ static void udl_free_urb_list(struct drm_device *dev)
+ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ {
+ 	struct udl_device *udl = dev->dev_private;
+-	int i = 0;
+ 	struct urb *urb;
+ 	struct urb_node *unode;
+ 	char *buf;
++	size_t wanted_size = count * size;
+ 
+ 	spin_lock_init(&udl->urbs.lock);
+ 
++retry:
+ 	udl->urbs.size = size;
+ 	INIT_LIST_HEAD(&udl->urbs.list);
+ 
+-	while (i < count) {
++	sema_init(&udl->urbs.limit_sem, 0);
++	udl->urbs.count = 0;
++	udl->urbs.available = 0;
++
++	while (udl->urbs.count * size < wanted_size) {
+ 		unode = kzalloc(sizeof(struct urb_node), GFP_KERNEL);
+ 		if (!unode)
+ 			break;
+@@ -231,11 +231,16 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 		}
+ 		unode->urb = urb;
+ 
+-		buf = usb_alloc_coherent(udl->udev, MAX_TRANSFER, GFP_KERNEL,
++		buf = usb_alloc_coherent(udl->udev, size, GFP_KERNEL,
+ 					 &urb->transfer_dma);
+ 		if (!buf) {
+ 			kfree(unode);
+ 			usb_free_urb(urb);
++			if (size > PAGE_SIZE) {
++				size /= 2;
++				udl_free_urb_list(dev);
++				goto retry;
++			}
+ 			break;
+ 		}
+ 
+@@ -246,16 +251,14 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
+ 
+ 		list_add_tail(&unode->entry, &udl->urbs.list);
+ 
+-		i++;
++		up(&udl->urbs.limit_sem);
++		udl->urbs.count++;
++		udl->urbs.available++;
+ 	}
+ 
+-	sema_init(&udl->urbs.limit_sem, i);
+-	udl->urbs.count = i;
+-	udl->urbs.available = i;
+-
+-	DRM_DEBUG("allocated %d %d byte urbs\n", i, (int) size);
++	DRM_DEBUG("allocated %d %d byte urbs\n", udl->urbs.count, (int) size);
+ 
+-	return i;
++	return udl->urbs.count;
+ }
+ 
+ struct urb *udl_get_urb(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/udl/udl_transfer.c b/drivers/gpu/drm/udl/udl_transfer.c
+index b992644c17e6..f3331d33547a 100644
+--- a/drivers/gpu/drm/udl/udl_transfer.c
++++ b/drivers/gpu/drm/udl/udl_transfer.c
+@@ -83,12 +83,12 @@ static inline u16 pixel32_to_be16(const uint32_t pixel)
+ 		((pixel >> 8) & 0xf800));
+ }
+ 
+-static inline u16 get_pixel_val16(const uint8_t *pixel, int bpp)
++static inline u16 get_pixel_val16(const uint8_t *pixel, int log_bpp)
+ {
+-	u16 pixel_val16 = 0;
+-	if (bpp == 2)
++	u16 pixel_val16;
++	if (log_bpp == 1)
+ 		pixel_val16 = *(const uint16_t *)pixel;
+-	else if (bpp == 4)
++	else
+ 		pixel_val16 = pixel32_to_be16(*(const uint32_t *)pixel);
+ 	return pixel_val16;
+ }
+@@ -125,8 +125,9 @@ static void udl_compress_hline16(
+ 	const u8 *const pixel_end,
+ 	uint32_t *device_address_ptr,
+ 	uint8_t **command_buffer_ptr,
+-	const uint8_t *const cmd_buffer_end, int bpp)
++	const uint8_t *const cmd_buffer_end, int log_bpp)
+ {
++	const int bpp = 1 << log_bpp;
+ 	const u8 *pixel = *pixel_start_ptr;
+ 	uint32_t dev_addr  = *device_address_ptr;
+ 	uint8_t *cmd = *command_buffer_ptr;
+@@ -153,12 +154,12 @@ static void udl_compress_hline16(
+ 		raw_pixels_count_byte = cmd++; /*  we'll know this later */
+ 		raw_pixel_start = pixel;
+ 
+-		cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL,
+-					(unsigned long)(pixel_end - pixel) / bpp,
+-					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) * bpp;
++		cmd_pixel_end = pixel + (min3(MAX_CMD_PIXELS + 1UL,
++					(unsigned long)(pixel_end - pixel) >> log_bpp,
++					(unsigned long)(cmd_buffer_end - 1 - cmd) / 2) << log_bpp);
+ 
+ 		prefetch_range((void *) pixel, cmd_pixel_end - pixel);
+-		pixel_val16 = get_pixel_val16(pixel, bpp);
++		pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 
+ 		while (pixel < cmd_pixel_end) {
+ 			const u8 *const start = pixel;
+@@ -170,7 +171,7 @@ static void udl_compress_hline16(
+ 			pixel += bpp;
+ 
+ 			while (pixel < cmd_pixel_end) {
+-				pixel_val16 = get_pixel_val16(pixel, bpp);
++				pixel_val16 = get_pixel_val16(pixel, log_bpp);
+ 				if (pixel_val16 != repeating_pixel_val16)
+ 					break;
+ 				pixel += bpp;
+@@ -179,10 +180,10 @@ static void udl_compress_hline16(
+ 			if (unlikely(pixel > start + bpp)) {
+ 				/* go back and fill in raw pixel count */
+ 				*raw_pixels_count_byte = (((start -
+-						raw_pixel_start) / bpp) + 1) & 0xFF;
++						raw_pixel_start) >> log_bpp) + 1) & 0xFF;
+ 
+ 				/* immediately after raw data is repeat byte */
+-				*cmd++ = (((pixel - start) / bpp) - 1) & 0xFF;
++				*cmd++ = (((pixel - start) >> log_bpp) - 1) & 0xFF;
+ 
+ 				/* Then start another raw pixel span */
+ 				raw_pixel_start = pixel;
+@@ -192,14 +193,14 @@ static void udl_compress_hline16(
+ 
+ 		if (pixel > raw_pixel_start) {
+ 			/* finalize last RAW span */
+-			*raw_pixels_count_byte = ((pixel-raw_pixel_start) / bpp) & 0xFF;
++			*raw_pixels_count_byte = ((pixel - raw_pixel_start) >> log_bpp) & 0xFF;
+ 		} else {
+ 			/* undo unused byte */
+ 			cmd--;
+ 		}
+ 
+-		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) / bpp) & 0xFF;
+-		dev_addr += ((pixel - cmd_pixel_start) / bpp) * 2;
++		*cmd_pixels_count_byte = ((pixel - cmd_pixel_start) >> log_bpp) & 0xFF;
++		dev_addr += ((pixel - cmd_pixel_start) >> log_bpp) * 2;
+ 	}
+ 
+ 	if (cmd_buffer_end <= MIN_RLX_CMD_BYTES + cmd) {
+@@ -222,19 +223,19 @@ static void udl_compress_hline16(
+  * (that we can only write to, slowly, and can never read), and (optionally)
+  * our shadow copy that tracks what's been sent to that hardware buffer.
+  */
+-int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
++int udl_render_hline(struct drm_device *dev, int log_bpp, struct urb **urb_ptr,
+ 		     const char *front, char **urb_buf_ptr,
+ 		     u32 byte_offset, u32 device_byte_offset,
+ 		     u32 byte_width,
+ 		     int *ident_ptr, int *sent_ptr)
+ {
+ 	const u8 *line_start, *line_end, *next_pixel;
+-	u32 base16 = 0 + (device_byte_offset / bpp) * 2;
++	u32 base16 = 0 + (device_byte_offset >> log_bpp) * 2;
+ 	struct urb *urb = *urb_ptr;
+ 	u8 *cmd = *urb_buf_ptr;
+ 	u8 *cmd_end = (u8 *) urb->transfer_buffer + urb->transfer_buffer_length;
+ 
+-	BUG_ON(!(bpp == 2 || bpp == 4));
++	BUG_ON(!(log_bpp == 1 || log_bpp == 2));
+ 
+ 	line_start = (u8 *) (front + byte_offset);
+ 	next_pixel = line_start;
+@@ -244,7 +245,7 @@ int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr,
+ 
+ 		udl_compress_hline16(&next_pixel,
+ 			     line_end, &base16,
+-			     (u8 **) &cmd, (u8 *) cmd_end, bpp);
++			     (u8 **) &cmd, (u8 *) cmd_end, log_bpp);
+ 
+ 		if (cmd >= cmd_end) {
+ 			int len = cmd - (u8 *) urb->transfer_buffer;
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 17c6460ae351..577e2ede5a1a 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -105,6 +105,8 @@ static const struct tctl_offset tctl_offset_table[] = {
+ 	{ 0x17, "AMD Ryzen Threadripper 1950", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1920", 10000 },
+ 	{ 0x17, "AMD Ryzen Threadripper 1910", 10000 },
++	{ 0x17, "AMD Ryzen Threadripper 2950X", 27000 },
++	{ 0x17, "AMD Ryzen Threadripper 2990WX", 27000 },
+ };
+ 
+ static void read_htcreg_pci(struct pci_dev *pdev, u32 *regval)
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index f9d1349c3286..b89e8379d898 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -63,6 +63,7 @@
+ #include <linux/bitops.h>
+ #include <linux/dmi.h>
+ #include <linux/io.h>
++#include <linux/nospec.h>
+ #include "lm75.h"
+ 
+ #define USE_ALTERNATE
+@@ -2689,6 +2690,7 @@ store_pwm_weight_temp_sel(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 	if (val > NUM_TEMP)
+ 		return -EINVAL;
++	val = array_index_nospec(val, NUM_TEMP + 1);
+ 	if (val && (!(data->have_temp & BIT(val - 1)) ||
+ 		    !data->temp_src[val - 1]))
+ 		return -EINVAL;
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index f7a96bcf94a6..5349e22b5c78 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -2103,12 +2103,16 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ 	if (err)
+ 		return err;
+ 
+-	if (smmu->version == ARM_SMMU_V2 &&
+-	    smmu->num_context_banks != smmu->num_context_irqs) {
+-		dev_err(dev,
+-			"found only %d context interrupt(s) but %d required\n",
+-			smmu->num_context_irqs, smmu->num_context_banks);
+-		return -ENODEV;
++	if (smmu->version == ARM_SMMU_V2) {
++		if (smmu->num_context_banks > smmu->num_context_irqs) {
++			dev_err(dev,
++			      "found only %d context irq(s) but %d required\n",
++			      smmu->num_context_irqs, smmu->num_context_banks);
++			return -ENODEV;
++		}
++
++		/* Ignore superfluous interrupts */
++		smmu->num_context_irqs = smmu->num_context_banks;
+ 	}
+ 
+ 	for (i = 0; i < smmu->num_global_irqs; ++i) {
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index 7465f17e1559..38175ebd92d4 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -312,7 +312,6 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
+ 		}
+ 	}
+ 
+-	*offset = 0;
+ 	cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
+ 	if (!cb) {
+ 		rets = -ENOMEM;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index f4a5a317d4ae..e1086a010b88 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -740,7 +740,7 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
+ 	for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
+ 		nand_read_page_op(chip, page, s * eccsize, NULL, 0);
+ 		chip->ecc.hwctl(mtd, NAND_ECC_READ);
+-		chip->read_buf(mtd, p, eccsize);
++		nand_read_data_op(chip, p, eccsize, false);
+ 
+ 		for (j = 0; j < eccbytes;) {
+ 			struct mtd_oob_region oobregion;
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index ebb1d141b900..c88588815ca1 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -2677,6 +2677,21 @@ static int marvell_nfc_init_dma(struct marvell_nfc *nfc)
+ 	return 0;
+ }
+ 
++static void marvell_nfc_reset(struct marvell_nfc *nfc)
++{
++	/*
++	 * ECC operations and interruptions are only enabled when specifically
++	 * needed. ECC shall not be activated in the early stages (fails probe).
++	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
++	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
++	 * offset in the read page and this will fail the protection.
++	 */
++	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
++		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
++	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
++	writel_relaxed(0, nfc->regs + NDECCCTRL);
++}
++
+ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ {
+ 	struct device_node *np = nfc->dev->of_node;
+@@ -2715,17 +2730,7 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
+ 	if (!nfc->caps->is_nfcv2)
+ 		marvell_nfc_init_dma(nfc);
+ 
+-	/*
+-	 * ECC operations and interruptions are only enabled when specifically
+-	 * needed. ECC shall not be activated in the early stages (fails probe).
+-	 * Arbiter flag, even if marked as "reserved", must be set (empirical).
+-	 * SPARE_EN bit must always be set or ECC bytes will not be at the same
+-	 * offset in the read page and this will fail the protection.
+-	 */
+-	writel_relaxed(NDCR_ALL_INT | NDCR_ND_ARB_EN | NDCR_SPARE_EN |
+-		       NDCR_RD_ID_CNT(NFCV1_READID_LEN), nfc->regs + NDCR);
+-	writel_relaxed(0xFFFFFFFF, nfc->regs + NDSR);
+-	writel_relaxed(0, nfc->regs + NDECCCTRL);
++	marvell_nfc_reset(nfc);
+ 
+ 	return 0;
+ }
+@@ -2840,6 +2845,51 @@ static int marvell_nfc_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused marvell_nfc_suspend(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	struct marvell_nand_chip *chip;
++
++	list_for_each_entry(chip, &nfc->chips, node)
++		marvell_nfc_wait_ndrun(&chip->chip);
++
++	clk_disable_unprepare(nfc->reg_clk);
++	clk_disable_unprepare(nfc->core_clk);
++
++	return 0;
++}
++
++static int __maybe_unused marvell_nfc_resume(struct device *dev)
++{
++	struct marvell_nfc *nfc = dev_get_drvdata(dev);
++	int ret;
++
++	ret = clk_prepare_enable(nfc->core_clk);
++	if (ret < 0)
++		return ret;
++
++	if (!IS_ERR(nfc->reg_clk)) {
++		ret = clk_prepare_enable(nfc->reg_clk);
++		if (ret < 0)
++			return ret;
++	}
++
++	/*
++	 * Reset nfc->selected_chip so the next command will cause the timing
++	 * registers to be restored in marvell_nfc_select_chip().
++	 */
++	nfc->selected_chip = NULL;
++
++	/* Reset registers that have lost their contents */
++	marvell_nfc_reset(nfc);
++
++	return 0;
++}
++
++static const struct dev_pm_ops marvell_nfc_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(marvell_nfc_suspend, marvell_nfc_resume)
++};
++
+ static const struct marvell_nfc_caps marvell_armada_8k_nfc_caps = {
+ 	.max_cs_nb = 4,
+ 	.max_rb_nb = 2,
+@@ -2924,6 +2974,7 @@ static struct platform_driver marvell_nfc_driver = {
+ 	.driver	= {
+ 		.name		= "marvell-nfc",
+ 		.of_match_table = marvell_nfc_of_ids,
++		.pm		= &marvell_nfc_pm_ops,
+ 	},
+ 	.id_table = marvell_nfc_platform_ids,
+ 	.probe = marvell_nfc_probe,
+diff --git a/drivers/mtd/nand/raw/nand_hynix.c b/drivers/mtd/nand/raw/nand_hynix.c
+index d542908a0ebb..766df4134482 100644
+--- a/drivers/mtd/nand/raw/nand_hynix.c
++++ b/drivers/mtd/nand/raw/nand_hynix.c
+@@ -100,6 +100,16 @@ static int hynix_nand_reg_write_op(struct nand_chip *chip, u8 addr, u8 val)
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+ 	u16 column = ((u16)addr << 8) | addr;
+ 
++	if (chip->exec_op) {
++		struct nand_op_instr instrs[] = {
++			NAND_OP_ADDR(1, &addr, 0),
++			NAND_OP_8BIT_DATA_OUT(1, &val, 0),
++		};
++		struct nand_operation op = NAND_OPERATION(instrs);
++
++		return nand_exec_op(chip, &op);
++	}
++
+ 	chip->cmdfunc(mtd, NAND_CMD_NONE, column, -1);
+ 	chip->write_byte(mtd, val);
+ 
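The legacy path above encodes the register address into both bytes of the column cycle (`((u16)addr << 8) | addr`). A minimal userspace check of that encoding, with `hynix_column` as an invented helper name:

```c
#include <stdint.h>

/* Sketch of the column encoding used by hynix_nand_reg_write_op()'s legacy
 * path: the one-byte register address is duplicated into both bytes of the
 * 16-bit column value.  hynix_column() is a stand-in name for this demo. */
static uint16_t hynix_column(uint8_t addr)
{
	return (uint16_t)((uint16_t)addr << 8) | addr;
}
```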
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 6a5519f0ff25..49b4e70fefe7 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -213,6 +213,8 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+ #define QPIC_PER_CW_CMD_SGL		32
+ #define QPIC_PER_CW_DATA_SGL		8
+ 
++#define QPIC_NAND_COMPLETION_TIMEOUT	msecs_to_jiffies(2000)
++
+ /*
+  * Flags used in DMA descriptor preparation helper functions
+  * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
+@@ -245,6 +247,11 @@ nandc_set_reg(nandc, NAND_READ_LOCATION_##reg,			\
+  * @tx_sgl_start - start index in data sgl for tx.
+  * @rx_sgl_pos - current index in data sgl for rx.
+  * @rx_sgl_start - start index in data sgl for rx.
++ * @wait_second_completion - wait for second DMA desc completion before making
++ *			     the NAND transfer completion.
++ * @txn_done - completion for NAND transfer.
++ * @last_data_desc - last DMA desc in data channel (tx/rx).
++ * @last_cmd_desc - last DMA desc in command channel.
+  */
+ struct bam_transaction {
+ 	struct bam_cmd_element *bam_ce;
+@@ -258,6 +265,10 @@ struct bam_transaction {
+ 	u32 tx_sgl_start;
+ 	u32 rx_sgl_pos;
+ 	u32 rx_sgl_start;
++	bool wait_second_completion;
++	struct completion txn_done;
++	struct dma_async_tx_descriptor *last_data_desc;
++	struct dma_async_tx_descriptor *last_cmd_desc;
+ };
+ 
+ /*
+@@ -504,6 +515,8 @@ alloc_bam_transaction(struct qcom_nand_controller *nandc)
+ 
+ 	bam_txn->data_sgl = bam_txn_buf;
+ 
++	init_completion(&bam_txn->txn_done);
++
+ 	return bam_txn;
+ }
+ 
+@@ -523,11 +536,33 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
+ 	bam_txn->tx_sgl_start = 0;
+ 	bam_txn->rx_sgl_pos = 0;
+ 	bam_txn->rx_sgl_start = 0;
++	bam_txn->last_data_desc = NULL;
++	bam_txn->wait_second_completion = false;
+ 
+ 	sg_init_table(bam_txn->cmd_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_CMD_SGL);
+ 	sg_init_table(bam_txn->data_sgl, nandc->max_cwperpage *
+ 		      QPIC_PER_CW_DATA_SGL);
++
++	reinit_completion(&bam_txn->txn_done);
++}
++
++/* Callback for DMA descriptor completion */
++static void qpic_bam_dma_done(void *data)
++{
++	struct bam_transaction *bam_txn = data;
++
++	/*
++	 * In case of data transfer with NAND, 2 callbacks will be generated.
++	 * One for command channel and another one for data channel.
++	 * If current transaction has data descriptors
++	 * (i.e. wait_second_completion is true), then set this to false
++	 * and wait for second DMA descriptor completion.
++	 */
++	if (bam_txn->wait_second_completion)
++		bam_txn->wait_second_completion = false;
++	else
++		complete(&bam_txn->txn_done);
+ }
+ 
+ static inline struct qcom_nand_host *to_qcom_nand_host(struct nand_chip *chip)
+@@ -756,6 +791,12 @@ static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
+ 
+ 	desc->dma_desc = dma_desc;
+ 
++	/* update last data/command descriptor */
++	if (chan == nandc->cmd_chan)
++		bam_txn->last_cmd_desc = dma_desc;
++	else
++		bam_txn->last_data_desc = dma_desc;
++
+ 	list_add_tail(&desc->node, &nandc->desc_list);
+ 
+ 	return 0;
+@@ -1273,10 +1314,20 @@ static int submit_descs(struct qcom_nand_controller *nandc)
+ 		cookie = dmaengine_submit(desc->dma_desc);
+ 
+ 	if (nandc->props->is_bam) {
++		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
++		bam_txn->last_cmd_desc->callback_param = bam_txn;
++		if (bam_txn->last_data_desc) {
++			bam_txn->last_data_desc->callback = qpic_bam_dma_done;
++			bam_txn->last_data_desc->callback_param = bam_txn;
++			bam_txn->wait_second_completion = true;
++		}
++
+ 		dma_async_issue_pending(nandc->tx_chan);
+ 		dma_async_issue_pending(nandc->rx_chan);
++		dma_async_issue_pending(nandc->cmd_chan);
+ 
+-		if (dma_sync_wait(nandc->cmd_chan, cookie) != DMA_COMPLETE)
++		if (!wait_for_completion_timeout(&bam_txn->txn_done,
++						 QPIC_NAND_COMPLETION_TIMEOUT))
+ 			return -ETIMEDOUT;
+ 	} else {
+ 		if (dma_sync_wait(nandc->chan, cookie) != DMA_COMPLETE)
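The qcom_nandc hunk replaces polling (`dma_sync_wait`) with a completion that must only fire after the last descriptor on *each* active channel. The callback logic can be modeled in userspace as follows; `fake_txn` and `fake_bam_dma_done` are invented stand-ins for `bam_transaction` and `qpic_bam_dma_done`:

```c
#include <stdbool.h>

/* Userspace model of qpic_bam_dma_done(): a NAND data transfer generates
 * two DMA callbacks (command channel and data channel), and the transfer
 * must only be reported complete after the second one arrives. */
struct fake_txn {
	bool wait_second_completion; /* set when data descriptors exist */
	bool done;                   /* stands in for complete(&txn_done) */
};

static void fake_bam_dma_done(struct fake_txn *txn)
{
	/* The first callback of a two-callback transfer only clears the
	 * flag; the second (or the only) callback signals completion. */
	if (txn->wait_second_completion)
		txn->wait_second_completion = false;
	else
		txn->done = true;
}
```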
+diff --git a/drivers/net/wireless/broadcom/b43/leds.c b/drivers/net/wireless/broadcom/b43/leds.c
+index cb987c2ecc6b..87131f663292 100644
+--- a/drivers/net/wireless/broadcom/b43/leds.c
++++ b/drivers/net/wireless/broadcom/b43/leds.c
+@@ -131,7 +131,7 @@ static int b43_register_led(struct b43_wldev *dev, struct b43_led *led,
+ 	led->wl = dev->wl;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 	atomic_set(&led->state, 0);
+ 
+ 	led->led_dev.name = led->name;
+diff --git a/drivers/net/wireless/broadcom/b43legacy/leds.c b/drivers/net/wireless/broadcom/b43legacy/leds.c
+index fd4565389c77..bc922118b6ac 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/leds.c
++++ b/drivers/net/wireless/broadcom/b43legacy/leds.c
+@@ -101,7 +101,7 @@ static int b43legacy_register_led(struct b43legacy_wldev *dev,
+ 	led->dev = dev;
+ 	led->index = led_index;
+ 	led->activelow = activelow;
+-	strncpy(led->name, name, sizeof(led->name));
++	strlcpy(led->name, name, sizeof(led->name));
+ 
+ 	led->led_dev.name = led->name;
+ 	led->led_dev.default_trigger = default_trigger;
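Both b43 hunks swap `strncpy()` for `strlcpy()` because, when the source fills or overflows the buffer, `strncpy()` leaves the destination without a terminating NUL. A minimal local re-implementation shows the difference (glibc has no `strlcpy()`, so `my_strlcpy` below is a userspace sketch, not the kernel's version):

```c
#include <stddef.h>
#include <string.h>

/* Minimal strlcpy()-alike: always NUL-terminates (unlike strncpy()) and
 * returns the length of the string it tried to create, so truncation can
 * be detected by comparing the return value against the buffer size. */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = len < size - 1 ? len : size - 1;

		memcpy(dst, src, n);
		dst[n] = '\0'; /* guaranteed terminator */
	}
	return len;
}
```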
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index ddd441b1516a..e10b0d20c4a7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -316,6 +316,14 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
+ 		old_value = *dbbuf_db;
+ 		*dbbuf_db = value;
+ 
++		/*
++		 * Ensure that the doorbell is updated before reading the event
++		 * index from memory.  The controller needs to provide similar
+		 * ordering to ensure the event index is updated before reading
++		 * the doorbell.
++		 */
++		mb();
++
+ 		if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value))
+ 			return false;
+ 	}
+diff --git a/drivers/pinctrl/freescale/pinctrl-imx1-core.c b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+index c3bdd90b1422..deb7870b3d1a 100644
+--- a/drivers/pinctrl/freescale/pinctrl-imx1-core.c
++++ b/drivers/pinctrl/freescale/pinctrl-imx1-core.c
+@@ -429,7 +429,7 @@ static void imx1_pinconf_group_dbg_show(struct pinctrl_dev *pctldev,
+ 	const char *name;
+ 	int i, ret;
+ 
+-	if (group > info->ngroups)
++	if (group >= info->ngroups)
+ 		return;
+ 
+ 	seq_puts(s, "\n");
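The imx1 pinctrl fix is a classic off-by-one: with `ngroups` entries the valid indices are `0..ngroups-1`, so `group > info->ngroups` still allowed a read one element past the array. The corrected guard, as a tiny standalone predicate (`group_index_valid` is an invented name for illustration):

```c
#include <stdbool.h>

/* An array of ngroups elements has valid indices 0..ngroups-1, so the
 * bounds check must reject group == ngroups as well. */
static bool group_index_valid(unsigned int group, unsigned int ngroups)
{
	return group < ngroups; /* i.e. bail out when group >= ngroups */
}
```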
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 45b7cb01f410..307403decf76 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -1133,10 +1133,10 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		},
+ 	},
+ 	{
+-		.ident = "Lenovo Legion Y520-15IKBN",
++		.ident = "Lenovo Legion Y520-15IKB",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKBN"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Y520-15IKB"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index 8e3d0146ff8c..04791ea5d97b 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -895,7 +895,6 @@ static int wmi_dev_probe(struct device *dev)
+ 	struct wmi_driver *wdriver =
+ 		container_of(dev->driver, struct wmi_driver, driver);
+ 	int ret = 0;
+-	int count;
+ 	char *buf;
+ 
+ 	if (ACPI_FAILURE(wmi_method_enable(wblock, 1)))
+@@ -917,9 +916,8 @@ static int wmi_dev_probe(struct device *dev)
+ 			goto probe_failure;
+ 		}
+ 
+-		count = get_order(wblock->req_buf_size);
+-		wblock->handler_data = (void *)__get_free_pages(GFP_KERNEL,
+-								count);
++		wblock->handler_data = kmalloc(wblock->req_buf_size,
++					       GFP_KERNEL);
+ 		if (!wblock->handler_data) {
+ 			ret = -ENOMEM;
+ 			goto probe_failure;
+@@ -964,8 +962,7 @@ static int wmi_dev_remove(struct device *dev)
+ 	if (wdriver->filter_callback) {
+ 		misc_deregister(&wblock->char_dev);
+ 		kfree(wblock->char_dev.name);
+-		free_pages((unsigned long)wblock->handler_data,
+-			   get_order(wblock->req_buf_size));
++		kfree(wblock->handler_data);
+ 	}
+ 
+ 	if (wdriver->remove)
+diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
+index 28dc056eaafa..bc462d1ec963 100644
+--- a/drivers/power/supply/generic-adc-battery.c
++++ b/drivers/power/supply/generic-adc-battery.c
+@@ -241,10 +241,10 @@ static int gab_probe(struct platform_device *pdev)
+ 	struct power_supply_desc *psy_desc;
+ 	struct power_supply_config psy_cfg = {};
+ 	struct gab_platform_data *pdata = pdev->dev.platform_data;
+-	enum power_supply_property *properties;
+ 	int ret = 0;
+ 	int chan;
+-	int index = 0;
++	int index = ARRAY_SIZE(gab_props);
++	bool any = false;
+ 
+ 	adc_bat = devm_kzalloc(&pdev->dev, sizeof(*adc_bat), GFP_KERNEL);
+ 	if (!adc_bat) {
+@@ -278,8 +278,6 @@ static int gab_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	memcpy(psy_desc->properties, gab_props, sizeof(gab_props));
+-	properties = (enum power_supply_property *)
+-			((char *)psy_desc->properties + sizeof(gab_props));
+ 
+ 	/*
+ 	 * getting channel from iio and copying the battery properties
+@@ -293,15 +291,22 @@ static int gab_probe(struct platform_device *pdev)
+ 			adc_bat->channel[chan] = NULL;
+ 		} else {
+ 			/* copying properties for supported channels only */
+-			memcpy(properties + sizeof(*(psy_desc->properties)) * index,
+-					&gab_dyn_props[chan],
+-					sizeof(gab_dyn_props[chan]));
+-			index++;
++			int index2;
++
++			for (index2 = 0; index2 < index; index2++) {
++				if (psy_desc->properties[index2] ==
++				    gab_dyn_props[chan])
++					break;	/* already known */
++			}
++			if (index2 == index)	/* really new */
++				psy_desc->properties[index++] =
++					gab_dyn_props[chan];
++			any = true;
+ 		}
+ 	}
+ 
+ 	/* none of the channels are supported so let's bail out */
+-	if (index == 0) {
++	if (!any) {
+ 		ret = -ENODEV;
+ 		goto second_mem_fail;
+ 	}
+@@ -312,7 +317,7 @@ static int gab_probe(struct platform_device *pdev)
+ 	 * as some channels may not be supported by the device. So
+ 	 * we need to take care of that.
+ 	 */
+-	psy_desc->num_properties = ARRAY_SIZE(gab_props) + index;
++	psy_desc->num_properties = index;
+ 
+ 	adc_bat->psy = power_supply_register(&pdev->dev, psy_desc, &psy_cfg);
+ 	if (IS_ERR(adc_bat->psy)) {
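The generic-adc-battery rework starts the write index at the number of static properties, appends a dynamic property only if it is not already present, and then uses the final index directly as `num_properties`. The dedup loop can be sketched in userspace like this (plain `int`s stand in for `enum power_supply_property`, and `append_unique` is an invented helper name):

```c
#include <stddef.h>

/* Append each element of dyn[] to props[] (which already holds count
 * entries) only if it is not already there; the returned count can be
 * used directly as the final number of properties. */
static size_t append_unique(int *props, size_t count,
			    const int *dyn, size_t ndyn)
{
	for (size_t i = 0; i < ndyn; i++) {
		size_t j;

		for (j = 0; j < count; j++)
			if (props[j] == dyn[i])
				break;           /* already known */
		if (j == count)
			props[count++] = dyn[i]; /* really new */
	}
	return count;
}
```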
+diff --git a/drivers/regulator/arizona-ldo1.c b/drivers/regulator/arizona-ldo1.c
+index f6d6a4ad9e8a..e976d073f28d 100644
+--- a/drivers/regulator/arizona-ldo1.c
++++ b/drivers/regulator/arizona-ldo1.c
+@@ -36,6 +36,8 @@ struct arizona_ldo1 {
+ 
+ 	struct regulator_consumer_supply supply;
+ 	struct regulator_init_data init_data;
++
++	struct gpio_desc *ena_gpiod;
+ };
+ 
+ static int arizona_ldo1_hc_list_voltage(struct regulator_dev *rdev,
+@@ -253,12 +255,17 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 		}
+ 	}
+ 
+-	/* We assume that high output = regulator off */
+-	config.ena_gpiod = devm_gpiod_get_optional(&pdev->dev, "wlf,ldoena",
+-						   GPIOD_OUT_HIGH);
++	/* We assume that high output = regulator off
++	 * Don't use devm, since we need to get against the parent device
++	 * so clean up would happen at the wrong time
++	 */
++	config.ena_gpiod = gpiod_get_optional(parent_dev, "wlf,ldoena",
++					      GPIOD_OUT_LOW);
+ 	if (IS_ERR(config.ena_gpiod))
+ 		return PTR_ERR(config.ena_gpiod);
+ 
++	ldo1->ena_gpiod = config.ena_gpiod;
++
+ 	if (pdata->init_data)
+ 		config.init_data = pdata->init_data;
+ 	else
+@@ -276,6 +283,9 @@ static int arizona_ldo1_common_init(struct platform_device *pdev,
+ 	of_node_put(config.of_node);
+ 
+ 	if (IS_ERR(ldo1->regulator)) {
++		if (config.ena_gpiod)
++			gpiod_put(config.ena_gpiod);
++
+ 		ret = PTR_ERR(ldo1->regulator);
+ 		dev_err(&pdev->dev, "Failed to register LDO1 supply: %d\n",
+ 			ret);
+@@ -334,8 +344,19 @@ static int arizona_ldo1_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
++static int arizona_ldo1_remove(struct platform_device *pdev)
++{
++	struct arizona_ldo1 *ldo1 = platform_get_drvdata(pdev);
++
++	if (ldo1->ena_gpiod)
++		gpiod_put(ldo1->ena_gpiod);
++
++	return 0;
++}
++
+ static struct platform_driver arizona_ldo1_driver = {
+ 	.probe = arizona_ldo1_probe,
++	.remove = arizona_ldo1_remove,
+ 	.driver		= {
+ 		.name	= "arizona-ldo1",
+ 	},
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index f4ca72dd862f..9c7d9da42ba0 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -631,21 +631,20 @@ static inline unsigned long qdio_aob_for_buffer(struct qdio_output_q *q,
+ 	unsigned long phys_aob = 0;
+ 
+ 	if (!q->use_cq)
+-		goto out;
++		return 0;
+ 
+ 	if (!q->aobs[bufnr]) {
+ 		struct qaob *aob = qdio_allocate_aob();
+ 		q->aobs[bufnr] = aob;
+ 	}
+ 	if (q->aobs[bufnr]) {
+-		q->sbal_state[bufnr].flags = QDIO_OUTBUF_STATE_FLAG_NONE;
+ 		q->sbal_state[bufnr].aob = q->aobs[bufnr];
+ 		q->aobs[bufnr]->user1 = (u64) q->sbal_state[bufnr].user;
+ 		phys_aob = virt_to_phys(q->aobs[bufnr]);
+ 		WARN_ON_ONCE(phys_aob & 0xFF);
+ 	}
+ 
+-out:
++	q->sbal_state[bufnr].flags = 0;
+ 	return phys_aob;
+ }
+ 
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index ff1d612f6fb9..41cdda7a926b 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -557,34 +557,46 @@ int sas_ata_init(struct domain_device *found_dev)
+ {
+ 	struct sas_ha_struct *ha = found_dev->port->ha;
+ 	struct Scsi_Host *shost = ha->core.shost;
++	struct ata_host *ata_host;
+ 	struct ata_port *ap;
+ 	int rc;
+ 
+-	ata_host_init(&found_dev->sata_dev.ata_host, ha->dev, &sas_sata_ops);
+-	ap = ata_sas_port_alloc(&found_dev->sata_dev.ata_host,
+-				&sata_port_info,
+-				shost);
++	ata_host = kzalloc(sizeof(*ata_host), GFP_KERNEL);
++	if (!ata_host)	{
++		SAS_DPRINTK("ata host alloc failed.\n");
++		return -ENOMEM;
++	}
++
++	ata_host_init(ata_host, ha->dev, &sas_sata_ops);
++
++	ap = ata_sas_port_alloc(ata_host, &sata_port_info, shost);
+ 	if (!ap) {
+ 		SAS_DPRINTK("ata_sas_port_alloc failed.\n");
+-		return -ENODEV;
++		rc = -ENODEV;
++		goto free_host;
+ 	}
+ 
+ 	ap->private_data = found_dev;
+ 	ap->cbl = ATA_CBL_SATA;
+ 	ap->scsi_host = shost;
+ 	rc = ata_sas_port_init(ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
+-	rc = ata_sas_tport_add(found_dev->sata_dev.ata_host.dev, ap);
+-	if (rc) {
+-		ata_sas_port_destroy(ap);
+-		return rc;
+-	}
++	if (rc)
++		goto destroy_port;
++
++	rc = ata_sas_tport_add(ata_host->dev, ap);
++	if (rc)
++		goto destroy_port;
++
++	found_dev->sata_dev.ata_host = ata_host;
+ 	found_dev->sata_dev.ap = ap;
+ 
+ 	return 0;
++
++destroy_port:
++	ata_sas_port_destroy(ap);
++free_host:
++	ata_host_put(ata_host);
++	return rc;
+ }
+ 
+ void sas_ata_task_abort(struct sas_task *task)
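The `sas_ata_init()` rework adopts the usual kernel error-unwinding shape: each failure jumps to a label that releases exactly what was acquired before it. A userspace sketch of that shape, with invented names and a free counter for verification (error codes mirror `-ENOMEM`/`-ENODEV`):

```c
#include <stdlib.h>

static int frees; /* counts cleanup calls, for the demo only */

static void release(void *p)
{
	frees++;
	free(p);
}

/* fail_at: 0 = succeed, 1 = first allocation fails, 2 = second step
 * fails.  The free_host label mirrors the unwinding in sas_ata_init(). */
static int init_like_sas_ata(int fail_at)
{
	void *host, *port;
	int rc;

	host = (fail_at == 1) ? NULL : malloc(16);
	if (!host)
		return -12;     /* -ENOMEM: nothing acquired yet */

	port = (fail_at == 2) ? NULL : malloc(16);
	if (!port) {
		rc = -19;       /* -ENODEV */
		goto free_host; /* undo only what was acquired */
	}

	release(port);          /* demo teardown of the success path */
	release(host);
	return 0;

free_host:
	release(host);
	return rc;
}
```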
+diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c
+index 1ffca28fe6a8..0148ae62a52a 100644
+--- a/drivers/scsi/libsas/sas_discover.c
++++ b/drivers/scsi/libsas/sas_discover.c
+@@ -316,6 +316,8 @@ void sas_free_device(struct kref *kref)
+ 	if (dev_is_sata(dev) && dev->sata_dev.ap) {
+ 		ata_sas_tport_delete(dev->sata_dev.ap);
+ 		ata_sas_port_destroy(dev->sata_dev.ap);
++		ata_host_put(dev->sata_dev.ata_host);
++		dev->sata_dev.ata_host = NULL;
+ 		dev->sata_dev.ap = NULL;
+ 	}
+ 
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index e44c91edf92d..3c8c17c0b547 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -3284,6 +3284,7 @@ void mpt3sas_base_clear_st(struct MPT3SAS_ADAPTER *ioc,
+ 	st->cb_idx = 0xFF;
+ 	st->direct_io = 0;
+ 	atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0);
++	st->smid = 0;
+ }
+ 
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index b8d131a455d0..f3d727076e1f 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -1489,7 +1489,7 @@ mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+ 		scmd = scsi_host_find_tag(ioc->shost, unique_tag);
+ 		if (scmd) {
+ 			st = scsi_cmd_priv(scmd);
+-			if (st->cb_idx == 0xFF)
++			if (st->cb_idx == 0xFF || st->smid == 0)
+ 				scmd = NULL;
+ 		}
+ 	}
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+index 3a143bb5ca72..6c71b20af9e3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
+@@ -1936,12 +1936,12 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+ 		pr_info(MPT3SAS_FMT "%s: host reset in progress!\n",
+ 		    __func__, ioc->name);
+ 		rc = -EFAULT;
+-		goto out;
++		goto job_done;
+ 	}
+ 
+ 	rc = mutex_lock_interruptible(&ioc->transport_cmds.mutex);
+ 	if (rc)
+-		goto out;
++		goto job_done;
+ 
+ 	if (ioc->transport_cmds.status != MPT3_CMD_NOT_USED) {
+ 		pr_err(MPT3SAS_FMT "%s: transport_cmds in use\n", ioc->name,
+@@ -2066,6 +2066,7 @@ _transport_smp_handler(struct bsg_job *job, struct Scsi_Host *shost,
+  out:
+ 	ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
+ 	mutex_unlock(&ioc->transport_cmds.mutex);
++job_done:
+ 	bsg_job_done(job, rc, reslen);
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 1b19b954bbae..ec550ee0108e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -382,7 +382,7 @@ qla2x00_async_adisc_sp_done(void *ptr, int res)
+ 	    "Async done-%s res %x %8phC\n",
+ 	    sp->name, res, sp->fcport->port_name);
+ 
+-	sp->fcport->flags &= ~FCF_ASYNC_SENT;
++	sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ 
+ 	memset(&ea, 0, sizeof(ea));
+ 	ea.event = FCME_ADISC_DONE;
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index dd93a22fe843..667055cbe155 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2656,6 +2656,7 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ 	ql_dbg(ql_dbg_io, vha, 0x3073,
+ 	    "Enter: PLOGI portid=%06x\n", fcport->d_id.b24);
+ 
++	fcport->flags |= FCF_ASYNC_SENT;
+ 	sp->type = SRB_ELS_DCMD;
+ 	sp->name = "ELS_DCMD";
+ 	sp->fcport = fcport;
+diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
+index 7943b762c12d..87ef6714845b 100644
+--- a/drivers/scsi/scsi_sysfs.c
++++ b/drivers/scsi/scsi_sysfs.c
+@@ -722,8 +722,24 @@ static ssize_t
+ sdev_store_delete(struct device *dev, struct device_attribute *attr,
+ 		  const char *buf, size_t count)
+ {
+-	if (device_remove_file_self(dev, attr))
+-		scsi_remove_device(to_scsi_device(dev));
++	struct kernfs_node *kn;
++
++	kn = sysfs_break_active_protection(&dev->kobj, &attr->attr);
++	WARN_ON_ONCE(!kn);
++	/*
++	 * Concurrent writes into the "delete" sysfs attribute may trigger
++	 * concurrent calls to device_remove_file() and scsi_remove_device().
++	 * device_remove_file() handles concurrent removal calls by
++	 * serializing these and by ignoring the second and later removal
++	 * attempts.  Concurrent calls of scsi_remove_device() are
++	 * serialized. The second and later calls of scsi_remove_device() are
++	 * ignored because the first call of that function changes the device
++	 * state into SDEV_DEL.
++	 */
++	device_remove_file(dev, attr);
++	scsi_remove_device(to_scsi_device(dev));
++	if (kn)
++		sysfs_unbreak_active_protection(kn);
+ 	return count;
+ };
+ static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete);
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index c8999e38b005..8a3678c2e83c 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -184,6 +184,7 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 	device_initialize(&rmtfs_mem->dev);
+ 	rmtfs_mem->dev.parent = &pdev->dev;
+ 	rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
++	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+ 
+ 	rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
+ 					rmtfs_mem->size, MEMREMAP_WC);
+@@ -206,8 +207,6 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
+ 		goto put_device;
+ 	}
+ 
+-	rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
+-
+ 	ret = of_property_read_u32(node, "qcom,vmid", &vmid);
+ 	if (ret < 0 && ret != -EINVAL) {
+ 		dev_err(&pdev->dev, "failed to parse qcom,vmid\n");
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index 99501785cdc1..68b3eb00a9d0 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -348,8 +348,7 @@ static int iscsi_login_zero_tsih_s1(
+ 		pr_err("idr_alloc() for sess_idr failed\n");
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_sess;
+ 	}
+ 
+ 	sess->creation_time = get_jiffies_64();
+@@ -365,20 +364,28 @@ static int iscsi_login_zero_tsih_s1(
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+ 		pr_err("Unable to allocate memory for"
+ 				" struct iscsi_sess_ops.\n");
+-		kfree(sess);
+-		return -ENOMEM;
++		goto remove_idr;
+ 	}
+ 
+ 	sess->se_sess = transport_init_session(TARGET_PROT_NORMAL);
+ 	if (IS_ERR(sess->se_sess)) {
+ 		iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
+ 				ISCSI_LOGIN_STATUS_NO_RESOURCES);
+-		kfree(sess->sess_ops);
+-		kfree(sess);
+-		return -ENOMEM;
++		goto free_ops;
+ 	}
+ 
+ 	return 0;
++
++free_ops:
++	kfree(sess->sess_ops);
++remove_idr:
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++free_sess:
++	kfree(sess);
++	conn->sess = NULL;
++	return -ENOMEM;
+ }
+ 
+ static int iscsi_login_zero_tsih_s2(
+@@ -1161,13 +1168,13 @@ void iscsi_target_login_sess_out(struct iscsi_conn *conn,
+ 				   ISCSI_LOGIN_STATUS_INIT_ERR);
+ 	if (!zero_tsih || !conn->sess)
+ 		goto old_sess_out;
+-	if (conn->sess->se_sess)
+-		transport_free_session(conn->sess->se_sess);
+-	if (conn->sess->session_index != 0) {
+-		spin_lock_bh(&sess_idr_lock);
+-		idr_remove(&sess_idr, conn->sess->session_index);
+-		spin_unlock_bh(&sess_idr_lock);
+-	}
++
++	transport_free_session(conn->sess->se_sess);
++
++	spin_lock_bh(&sess_idr_lock);
++	idr_remove(&sess_idr, conn->sess->session_index);
++	spin_unlock_bh(&sess_idr_lock);
++
+ 	kfree(conn->sess->sess_ops);
+ 	kfree(conn->sess);
+ 	conn->sess = NULL;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 205092dc9390..dfed08e70ec1 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -961,8 +961,9 @@ static int btree_writepages(struct address_space *mapping,
+ 
+ 		fs_info = BTRFS_I(mapping->host)->root->fs_info;
+ 		/* this is a bit racy, but that's ok */
+-		ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-					     BTRFS_DIRTY_METADATA_THRESH);
++		ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++					     BTRFS_DIRTY_METADATA_THRESH,
++					     fs_info->dirty_metadata_batch);
+ 		if (ret < 0)
+ 			return 0;
+ 	}
+@@ -4150,8 +4151,9 @@ static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
+ 	if (flush_delayed)
+ 		btrfs_balance_delayed_items(fs_info);
+ 
+-	ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+-				     BTRFS_DIRTY_METADATA_THRESH);
++	ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
++				     BTRFS_DIRTY_METADATA_THRESH,
++				     fs_info->dirty_metadata_batch);
+ 	if (ret > 0) {
+ 		balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
+ 	}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 3d9fe58c0080..8aab7a6c1e58 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4358,7 +4358,7 @@ commit_trans:
+ 				      data_sinfo->flags, bytes, 1);
+ 	spin_unlock(&data_sinfo->lock);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ int btrfs_check_data_free_space(struct inode *inode,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index eba61bcb9bb3..071d949f69ec 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -6027,32 +6027,6 @@ err:
+ 	return ret;
+ }
+ 
+-int btrfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+-{
+-	struct btrfs_root *root = BTRFS_I(inode)->root;
+-	struct btrfs_trans_handle *trans;
+-	int ret = 0;
+-	bool nolock = false;
+-
+-	if (test_bit(BTRFS_INODE_DUMMY, &BTRFS_I(inode)->runtime_flags))
+-		return 0;
+-
+-	if (btrfs_fs_closing(root->fs_info) &&
+-			btrfs_is_free_space_inode(BTRFS_I(inode)))
+-		nolock = true;
+-
+-	if (wbc->sync_mode == WB_SYNC_ALL) {
+-		if (nolock)
+-			trans = btrfs_join_transaction_nolock(root);
+-		else
+-			trans = btrfs_join_transaction(root);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
+-		ret = btrfs_commit_transaction(trans);
+-	}
+-	return ret;
+-}
+-
+ /*
+  * This is somewhat expensive, updating the tree every time the
+  * inode changes.  But, it is most likely to find the inode in cache.
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index c47f62b19226..b75b4abaa4a5 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -100,6 +100,7 @@ struct send_ctx {
+ 	u64 cur_inode_rdev;
+ 	u64 cur_inode_last_extent;
+ 	u64 cur_inode_next_write_offset;
++	bool ignore_cur_inode;
+ 
+ 	u64 send_progress;
+ 
+@@ -5006,6 +5007,15 @@ static int send_hole(struct send_ctx *sctx, u64 end)
+ 	u64 len;
+ 	int ret = 0;
+ 
++	/*
++	 * A hole that starts at EOF or beyond it. Since we do not yet support
++	 * fallocate (for extent preallocation and hole punching), sending a
++	 * write of zeroes starting at EOF or beyond would later require issuing
++	 * a truncate operation which would undo the write and achieve nothing.
++	 */
++	if (offset >= sctx->cur_inode_size)
++		return 0;
++
+ 	if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
+ 		return send_update_extent(sctx, offset, end - offset);
+ 
+@@ -5799,6 +5809,9 @@ static int finish_inode_if_needed(struct send_ctx *sctx, int at_end)
+ 	int pending_move = 0;
+ 	int refs_processed = 0;
+ 
++	if (sctx->ignore_cur_inode)
++		return 0;
++
+ 	ret = process_recorded_refs_if_needed(sctx, at_end, &pending_move,
+ 					      &refs_processed);
+ 	if (ret < 0)
+@@ -5917,6 +5930,93 @@ out:
+ 	return ret;
+ }
+ 
++struct parent_paths_ctx {
++	struct list_head *refs;
++	struct send_ctx *sctx;
++};
++
++static int record_parent_ref(int num, u64 dir, int index, struct fs_path *name,
++			     void *ctx)
++{
++	struct parent_paths_ctx *ppctx = ctx;
++
++	return record_ref(ppctx->sctx->parent_root, dir, name, ppctx->sctx,
++			  ppctx->refs);
++}
++
++/*
++ * Issue unlink operations for all paths of the current inode found in the
++ * parent snapshot.
++ */
++static int btrfs_unlink_all_paths(struct send_ctx *sctx)
++{
++	LIST_HEAD(deleted_refs);
++	struct btrfs_path *path;
++	struct btrfs_key key;
++	struct parent_paths_ctx ctx;
++	int ret;
++
++	path = alloc_path_for_send();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = sctx->cur_ino;
++	key.type = BTRFS_INODE_REF_KEY;
++	key.offset = 0;
++	ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++
++	ctx.refs = &deleted_refs;
++	ctx.sctx = sctx;
++
++	while (true) {
++		struct extent_buffer *eb = path->nodes[0];
++		int slot = path->slots[0];
++
++		if (slot >= btrfs_header_nritems(eb)) {
++			ret = btrfs_next_leaf(sctx->parent_root, path);
++			if (ret < 0)
++				goto out;
++			else if (ret > 0)
++				break;
++			continue;
++		}
++
++		btrfs_item_key_to_cpu(eb, &key, slot);
++		if (key.objectid != sctx->cur_ino)
++			break;
++		if (key.type != BTRFS_INODE_REF_KEY &&
++		    key.type != BTRFS_INODE_EXTREF_KEY)
++			break;
++
++		ret = iterate_inode_ref(sctx->parent_root, path, &key, 1,
++					record_parent_ref, &ctx);
++		if (ret < 0)
++			goto out;
++
++		path->slots[0]++;
++	}
++
++	while (!list_empty(&deleted_refs)) {
++		struct recorded_ref *ref;
++
++		ref = list_first_entry(&deleted_refs, struct recorded_ref, list);
++		ret = send_unlink(sctx, ref->full_path);
++		if (ret < 0)
++			goto out;
++		fs_path_free(ref->full_path);
++		list_del(&ref->list);
++		kfree(ref);
++	}
++	ret = 0;
++out:
++	btrfs_free_path(path);
++	if (ret)
++		__free_recorded_refs(&deleted_refs);
++	return ret;
++}
++
+ static int changed_inode(struct send_ctx *sctx,
+ 			 enum btrfs_compare_tree_result result)
+ {
+@@ -5931,6 +6031,7 @@ static int changed_inode(struct send_ctx *sctx,
+ 	sctx->cur_inode_new_gen = 0;
+ 	sctx->cur_inode_last_extent = (u64)-1;
+ 	sctx->cur_inode_next_write_offset = 0;
++	sctx->ignore_cur_inode = false;
+ 
+ 	/*
+ 	 * Set send_progress to current inode. This will tell all get_cur_xxx
+@@ -5971,6 +6072,33 @@ static int changed_inode(struct send_ctx *sctx,
+ 			sctx->cur_inode_new_gen = 1;
+ 	}
+ 
++	/*
++	 * Normally we do not find inodes with a link count of zero (orphans)
++	 * because the most common case is to create a snapshot and use it
++	 * for a send operation. However other less common use cases involve
++	 * using a subvolume and send it after turning it to RO mode just
++	 * after deleting all hard links of a file while holding an open
++	 * file descriptor against it or turning a RO snapshot into RW mode,
++	 * keep an open file descriptor against a file, delete it and then
++	 * turn the snapshot back to RO mode before using it for a send
++	 * operation. So if we find such cases, ignore the inode and all its
++	 * items completely if it's a new inode, or if it's a changed inode
++	 * make sure all its previous paths (from the parent snapshot) are all
++	 * unlinked and all other the inode items are ignored.
++	 */
++	if (result == BTRFS_COMPARE_TREE_NEW ||
++	    result == BTRFS_COMPARE_TREE_CHANGED) {
++		u32 nlinks;
++
++		nlinks = btrfs_inode_nlink(sctx->left_path->nodes[0], left_ii);
++		if (nlinks == 0) {
++			sctx->ignore_cur_inode = true;
++			if (result == BTRFS_COMPARE_TREE_CHANGED)
++				ret = btrfs_unlink_all_paths(sctx);
++			goto out;
++		}
++	}
++
+ 	if (result == BTRFS_COMPARE_TREE_NEW) {
+ 		sctx->cur_inode_gen = left_gen;
+ 		sctx->cur_inode_new = 1;
+@@ -6309,15 +6437,17 @@ static int changed_cb(struct btrfs_path *left_path,
+ 	    key->objectid == BTRFS_FREE_SPACE_OBJECTID)
+ 		goto out;
+ 
+-	if (key->type == BTRFS_INODE_ITEM_KEY)
++	if (key->type == BTRFS_INODE_ITEM_KEY) {
+ 		ret = changed_inode(sctx, result);
+-	else if (key->type == BTRFS_INODE_REF_KEY ||
+-		 key->type == BTRFS_INODE_EXTREF_KEY)
+-		ret = changed_ref(sctx, result);
+-	else if (key->type == BTRFS_XATTR_ITEM_KEY)
+-		ret = changed_xattr(sctx, result);
+-	else if (key->type == BTRFS_EXTENT_DATA_KEY)
+-		ret = changed_extent(sctx, result);
++	} else if (!sctx->ignore_cur_inode) {
++		if (key->type == BTRFS_INODE_REF_KEY ||
++		    key->type == BTRFS_INODE_EXTREF_KEY)
++			ret = changed_ref(sctx, result);
++		else if (key->type == BTRFS_XATTR_ITEM_KEY)
++			ret = changed_xattr(sctx, result);
++		else if (key->type == BTRFS_EXTENT_DATA_KEY)
++			ret = changed_extent(sctx, result);
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 81107ad49f3a..bddfc28b27c0 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -2331,7 +2331,6 @@ static const struct super_operations btrfs_super_ops = {
+ 	.sync_fs	= btrfs_sync_fs,
+ 	.show_options	= btrfs_show_options,
+ 	.show_devname	= btrfs_show_devname,
+-	.write_inode	= btrfs_write_inode,
+ 	.alloc_inode	= btrfs_alloc_inode,
+ 	.destroy_inode	= btrfs_destroy_inode,
+ 	.statfs		= btrfs_statfs,
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index f8220ec02036..84b00a29d531 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1291,6 +1291,46 @@ again:
+ 	return ret;
+ }
+ 
++static int btrfs_inode_ref_exists(struct inode *inode, struct inode *dir,
++				  const u8 ref_type, const char *name,
++				  const int namelen)
++{
++	struct btrfs_key key;
++	struct btrfs_path *path;
++	const u64 parent_id = btrfs_ino(BTRFS_I(dir));
++	int ret;
++
++	path = btrfs_alloc_path();
++	if (!path)
++		return -ENOMEM;
++
++	key.objectid = btrfs_ino(BTRFS_I(inode));
++	key.type = ref_type;
++	if (key.type == BTRFS_INODE_REF_KEY)
++		key.offset = parent_id;
++	else
++		key.offset = btrfs_extref_hash(parent_id, name, namelen);
++
++	ret = btrfs_search_slot(NULL, BTRFS_I(inode)->root, &key, path, 0, 0);
++	if (ret < 0)
++		goto out;
++	if (ret > 0) {
++		ret = 0;
++		goto out;
++	}
++	if (key.type == BTRFS_INODE_EXTREF_KEY)
++		ret = btrfs_find_name_in_ext_backref(path->nodes[0],
++						     path->slots[0], parent_id,
++						     name, namelen, NULL);
++	else
++		ret = btrfs_find_name_in_backref(path->nodes[0], path->slots[0],
++						 name, namelen, NULL);
++
++out:
++	btrfs_free_path(path);
++	return ret;
++}
++
+ /*
+  * replay one inode back reference item found in the log tree.
+  * eb, slot and key refer to the buffer and key found in the log tree.
+@@ -1400,6 +1440,32 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ 				}
+ 			}
+ 
++			/*
++			 * If a reference item already exists for this inode
++			 * with the same parent and name, but different index,
++			 * drop it and the corresponding directory index entries
++			 * from the parent before adding the new reference item
++			 * and dir index entries, otherwise we would fail with
++			 * -EEXIST returned from btrfs_add_link() below.
++			 */
++			ret = btrfs_inode_ref_exists(inode, dir, key->type,
++						     name, namelen);
++			if (ret > 0) {
++				ret = btrfs_unlink_inode(trans, root,
++							 BTRFS_I(dir),
++							 BTRFS_I(inode),
++							 name, namelen);
++				/*
++				 * If we dropped the link count to 0, bump it so
++				 * that later the iput() on the inode will not
++				 * free it. We will fixup the link count later.
++				 */
++				if (!ret && inode->i_nlink == 0)
++					inc_nlink(inode);
++			}
++			if (ret < 0)
++				goto out;
++
+ 			/* insert our name */
+ 			ret = btrfs_add_link(trans, BTRFS_I(dir),
+ 					BTRFS_I(inode),
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index bfe999505815..991bfb271908 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -160,25 +160,41 @@ static int cifs_debug_data_proc_show(struct seq_file *m, void *v)
+ 	seq_printf(m, "CIFS Version %s\n", CIFS_VERSION);
+ 	seq_printf(m, "Features:");
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+-	seq_printf(m, " dfs");
++	seq_printf(m, " DFS");
+ #endif
+ #ifdef CONFIG_CIFS_FSCACHE
+-	seq_printf(m, " fscache");
++	seq_printf(m, ",FSCACHE");
++#endif
++#ifdef CONFIG_CIFS_SMB_DIRECT
++	seq_printf(m, ",SMB_DIRECT");
++#endif
++#ifdef CONFIG_CIFS_STATS2
++	seq_printf(m, ",STATS2");
++#elif defined(CONFIG_CIFS_STATS)
++	seq_printf(m, ",STATS");
++#endif
++#ifdef CONFIG_CIFS_DEBUG2
++	seq_printf(m, ",DEBUG2");
++#elif defined(CONFIG_CIFS_DEBUG)
++	seq_printf(m, ",DEBUG");
++#endif
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
++	seq_printf(m, ",ALLOW_INSECURE_LEGACY");
+ #endif
+ #ifdef CONFIG_CIFS_WEAK_PW_HASH
+-	seq_printf(m, " lanman");
++	seq_printf(m, ",WEAK_PW_HASH");
+ #endif
+ #ifdef CONFIG_CIFS_POSIX
+-	seq_printf(m, " posix");
++	seq_printf(m, ",CIFS_POSIX");
+ #endif
+ #ifdef CONFIG_CIFS_UPCALL
+-	seq_printf(m, " spnego");
++	seq_printf(m, ",UPCALL(SPNEGO)");
+ #endif
+ #ifdef CONFIG_CIFS_XATTR
+-	seq_printf(m, " xattr");
++	seq_printf(m, ",XATTR");
+ #endif
+ #ifdef CONFIG_CIFS_ACL
+-	seq_printf(m, " acl");
++	seq_printf(m, ",ACL");
+ #endif
+ 	seq_putc(m, '\n');
+ 	seq_printf(m, "Active VFS Requests: %d\n", GlobalTotalActiveXid);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index d5aa7ae917bf..69ec5427769c 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -209,14 +209,16 @@ cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 
+ 	xid = get_xid();
+ 
+-	/*
+-	 * PATH_MAX may be too long - it would presumably be total path,
+-	 * but note that some servers (includinng Samba 3) have a shorter
+-	 * maximum path.
+-	 *
+-	 * Instead could get the real value via SMB_QUERY_FS_ATTRIBUTE_INFO.
+-	 */
+-	buf->f_namelen = PATH_MAX;
++	if (le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength) > 0)
++		buf->f_namelen =
++		       le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength);
++	else
++		buf->f_namelen = PATH_MAX;
++
++	buf->f_fsid.val[0] = tcon->vol_serial_number;
++	/* are using part of create time for more randomness, see man statfs */
++	buf->f_fsid.val[1] =  (int)le64_to_cpu(tcon->vol_create_time);
++
+ 	buf->f_files = 0;	/* undefined */
+ 	buf->f_ffree = 0;	/* unlimited */
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index c923c7854027..4b45d3ef3f9d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -913,6 +913,7 @@ cap_unix(struct cifs_ses *ses)
+ 
+ struct cached_fid {
+ 	bool is_valid:1;	/* Do we have a useable root fid */
++	struct kref refcount;
+ 	struct cifs_fid *fid;
+ 	struct mutex fid_mutex;
+ 	struct cifs_tcon *tcon;
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index a2cfb33e85c1..9051b9dfd590 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1122,6 +1122,8 @@ cifs_set_file_info(struct inode *inode, struct iattr *attrs, unsigned int xid,
+ 	if (!server->ops->set_file_info)
+ 		return -ENOSYS;
+ 
++	info_buf.Pad = 0;
++
+ 	if (attrs->ia_valid & ATTR_ATIME) {
+ 		set_time = true;
+ 		info_buf.LastAccessTime =
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index de41f96aba49..2148b0f60e5e 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -396,7 +396,7 @@ smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int buf_type = CIFS_NO_BUFFER;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_II;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct smb2_file_all_info *pfile_info = NULL;
+ 
+ 	oparms.tcon = tcon;
+@@ -459,7 +459,7 @@ smb3_create_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ 	struct cifs_io_parms io_parms;
+ 	int create_options = CREATE_NOT_DIR;
+ 	__le16 *utf16_path;
+-	__u8 oplock = SMB2_OPLOCK_LEVEL_EXCLUSIVE;
++	__u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+ 	struct kvec iov[2];
+ 
+ 	if (backup_cred(cifs_sb))
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 8b0502cd39af..aa23c00367ec 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -398,6 +398,12 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
+ 		goto setup_ntlmv2_ret;
+ 	}
+ 	*pbuffer = kmalloc(size_of_ntlmssp_blob(ses), GFP_KERNEL);
++	if (!*pbuffer) {
++		rc = -ENOMEM;
++		cifs_dbg(VFS, "Error %d during NTLMSSP allocation\n", rc);
++		*buflen = 0;
++		goto setup_ntlmv2_ret;
++	}
+ 	sec_blob = (AUTHENTICATE_MESSAGE *)*pbuffer;
+ 
+ 	memcpy(sec_blob->Signature, NTLMSSP_SIGNATURE, 8);
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index d01ad706d7fc..1eef1791d0c4 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -120,7 +120,9 @@ smb2_open_op_close(const unsigned int xid, struct cifs_tcon *tcon,
+ 		break;
+ 	}
+ 
+-	if (use_cached_root_handle == false)
++	if (use_cached_root_handle)
++		close_shroot(&tcon->crfid);
++	else
+ 		rc = SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ 	if (tmprc)
+ 		rc = tmprc;
+@@ -281,7 +283,7 @@ smb2_set_file_info(struct inode *inode, const char *full_path,
+ 	int rc;
+ 
+ 	if ((buf->CreationTime == 0) && (buf->LastAccessTime == 0) &&
+-	    (buf->LastWriteTime == 0) && (buf->ChangeTime) &&
++	    (buf->LastWriteTime == 0) && (buf->ChangeTime == 0) &&
+ 	    (buf->Attributes == 0))
+ 		return 0; /* would be a no op, no sense sending this */
+ 
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ea92a38b2f08..ee6c4a952ce9 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -466,21 +466,36 @@ out:
+ 	return rc;
+ }
+ 
+-void
+-smb2_cached_lease_break(struct work_struct *work)
++static void
++smb2_close_cached_fid(struct kref *ref)
+ {
+-	struct cached_fid *cfid = container_of(work,
+-				struct cached_fid, lease_break);
+-	mutex_lock(&cfid->fid_mutex);
++	struct cached_fid *cfid = container_of(ref, struct cached_fid,
++					       refcount);
++
+ 	if (cfid->is_valid) {
+ 		cifs_dbg(FYI, "clear cached root file handle\n");
+ 		SMB2_close(0, cfid->tcon, cfid->fid->persistent_fid,
+ 			   cfid->fid->volatile_fid);
+ 		cfid->is_valid = false;
+ 	}
++}
++
++void close_shroot(struct cached_fid *cfid)
++{
++	mutex_lock(&cfid->fid_mutex);
++	kref_put(&cfid->refcount, smb2_close_cached_fid);
+ 	mutex_unlock(&cfid->fid_mutex);
+ }
+ 
++void
++smb2_cached_lease_break(struct work_struct *work)
++{
++	struct cached_fid *cfid = container_of(work,
++				struct cached_fid, lease_break);
++
++	close_shroot(cfid);
++}
++
+ /*
+  * Open the directory at the root of a share
+  */
+@@ -495,6 +510,7 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 	if (tcon->crfid.is_valid) {
+ 		cifs_dbg(FYI, "found a cached root file handle\n");
+ 		memcpy(pfid, tcon->crfid.fid, sizeof(struct cifs_fid));
++		kref_get(&tcon->crfid.refcount);
+ 		mutex_unlock(&tcon->crfid.fid_mutex);
+ 		return 0;
+ 	}
+@@ -511,6 +527,8 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon, struct cifs_fid *pfid)
+ 		memcpy(tcon->crfid.fid, pfid, sizeof(struct cifs_fid));
+ 		tcon->crfid.tcon = tcon;
+ 		tcon->crfid.is_valid = true;
++		kref_init(&tcon->crfid.refcount);
++		kref_get(&tcon->crfid.refcount);
+ 	}
+ 	mutex_unlock(&tcon->crfid.fid_mutex);
+ 	return rc;
+@@ -548,10 +566,15 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon)
+ 			FS_ATTRIBUTE_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_DEVICE_INFORMATION);
++	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
++			FS_VOLUME_INFORMATION);
+ 	SMB2_QFS_attr(xid, tcon, fid.persistent_fid, fid.volatile_fid,
+ 			FS_SECTOR_SIZE_INFORMATION); /* SMB3 specific */
+ 	if (no_cached_open)
+ 		SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
++	else
++		close_shroot(&tcon->crfid);
++
+ 	return;
+ }
+ 
+@@ -1353,6 +1376,13 @@ smb3_set_integrity(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ }
+ 
++/* GMT Token is @GMT-YYYY.MM.DD-HH.MM.SS Unicode which is 48 bytes + null */
++#define GMT_TOKEN_SIZE 50
++
++/*
++ * Input buffer contains (empty) struct smb_snapshot array with size filled in
++ * For output see struct SRV_SNAPSHOT_ARRAY in MS-SMB2 section 2.2.32.2
++ */
+ static int
+ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 		   struct cifsFileInfo *cfile, void __user *ioc_buf)
+@@ -1382,14 +1412,27 @@ smb3_enum_snapshots(const unsigned int xid, struct cifs_tcon *tcon,
+ 			kfree(retbuf);
+ 			return rc;
+ 		}
+-		if (snapshot_in.snapshot_array_size < sizeof(struct smb_snapshot_array)) {
+-			rc = -ERANGE;
+-			kfree(retbuf);
+-			return rc;
+-		}
+ 
+-		if (ret_data_len > snapshot_in.snapshot_array_size)
+-			ret_data_len = snapshot_in.snapshot_array_size;
++		/*
++		 * Check for min size, ie not large enough to fit even one GMT
++		 * token (snapshot).  On the first ioctl some users may pass in
++		 * smaller size (or zero) to simply get the size of the array
++		 * so the user space caller can allocate sufficient memory
++		 * and retry the ioctl again with larger array size sufficient
++		 * to hold all of the snapshot GMT tokens on the second try.
++		 */
++		if (snapshot_in.snapshot_array_size < GMT_TOKEN_SIZE)
++			ret_data_len = sizeof(struct smb_snapshot_array);
++
++		/*
++		 * We return struct SRV_SNAPSHOT_ARRAY, followed by
++		 * the snapshot array (of 50 byte GMT tokens) each
++		 * representing an available previous version of the data
++		 */
++		if (ret_data_len > (snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array)))
++			ret_data_len = snapshot_in.snapshot_array_size +
++					sizeof(struct smb_snapshot_array);
+ 
+ 		if (copy_to_user(ioc_buf, retbuf, ret_data_len))
+ 			rc = -EFAULT;
+@@ -3366,6 +3409,11 @@ struct smb_version_operations smb311_operations = {
+ 	.query_all_EAs = smb2_query_eas,
+ 	.set_EA = smb2_set_ea,
+ #endif /* CIFS_XATTR */
++#ifdef CONFIG_CIFS_ACL
++	.get_acl = get_smb2_acl,
++	.get_acl_by_fid = get_smb2_acl_by_fid,
++	.set_acl = set_smb2_acl,
++#endif /* CIFS_ACL */
+ 	.next_header = smb2_next_header,
+ };
+ #endif /* CIFS_SMB311 */
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 3c92678cb45b..ffce77e00a58 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -4046,6 +4046,9 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 	} else if (level == FS_SECTOR_SIZE_INFORMATION) {
+ 		max_len = sizeof(struct smb3_fs_ss_info);
+ 		min_len = sizeof(struct smb3_fs_ss_info);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		max_len = sizeof(struct smb3_fs_vol_info) + MAX_VOL_LABEL_LEN;
++		min_len = sizeof(struct smb3_fs_vol_info);
+ 	} else {
+ 		cifs_dbg(FYI, "Invalid qfsinfo level %d\n", level);
+ 		return -EINVAL;
+@@ -4090,6 +4093,11 @@ SMB2_QFS_attr(const unsigned int xid, struct cifs_tcon *tcon,
+ 		tcon->ss_flags = le32_to_cpu(ss_info->Flags);
+ 		tcon->perf_sector_size =
+ 			le32_to_cpu(ss_info->PhysicalBytesPerSectorForPerf);
++	} else if (level == FS_VOLUME_INFORMATION) {
++		struct smb3_fs_vol_info *vol_info = (struct smb3_fs_vol_info *)
++			(offset + (char *)rsp);
++		tcon->vol_serial_number = vol_info->VolumeSerialNumber;
++		tcon->vol_create_time = vol_info->VolumeCreationTime;
+ 	}
+ 
+ qfsattr_exit:
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index a671adcc44a6..c2a4526512b5 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -1248,6 +1248,17 @@ struct smb3_fs_ss_info {
+ 	__le32 ByteOffsetForPartitionAlignment;
+ } __packed;
+ 
++/* volume info struct - see MS-FSCC 2.5.9 */
++#define MAX_VOL_LABEL_LEN	32
++struct smb3_fs_vol_info {
++	__le64	VolumeCreationTime;
++	__u32	VolumeSerialNumber;
++	__le32	VolumeLabelLength; /* includes trailing null */
++	__u8	SupportsObjects; /* True if eg like NTFS, supports objects */
++	__u8	Reserved;
++	__u8	VolumeLabel[0]; /* variable len */
++} __packed;
++
+ /* partial list of QUERY INFO levels */
+ #define FILE_DIRECTORY_INFORMATION	1
+ #define FILE_FULL_DIRECTORY_INFORMATION 2
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index 6e6a4f2ec890..c1520b48d1e1 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -68,6 +68,7 @@ extern int smb3_handle_read_data(struct TCP_Server_Info *server,
+ 
+ extern int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ 			struct cifs_fid *pfid);
++extern void close_shroot(struct cached_fid *cfid);
+ extern void move_smb2_info_to_cifs(FILE_ALL_INFO *dst,
+ 				   struct smb2_file_all_info *src);
+ extern int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 719d55e63d88..bf61c3774830 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -173,7 +173,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 	struct kvec *iov = rqst->rq_iov;
+ 	struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)iov[0].iov_base;
+ 	struct cifs_ses *ses;
+-	struct shash_desc *shash = &server->secmech.sdeschmacsha256->shash;
++	struct shash_desc *shash;
+ 	struct smb_rqst drqst;
+ 
+ 	ses = smb2_find_smb_ses(server, shdr->SessionId);
+@@ -187,7 +187,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 
+ 	rc = smb2_crypto_shash_allocate(server);
+ 	if (rc) {
+-		cifs_dbg(VFS, "%s: shah256 alloc failed\n", __func__);
++		cifs_dbg(VFS, "%s: sha256 alloc failed\n", __func__);
+ 		return rc;
+ 	}
+ 
+@@ -198,6 +198,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 		return rc;
+ 	}
+ 
++	shash = &server->secmech.sdeschmacsha256->shash;
+ 	rc = crypto_shash_init(shash);
+ 	if (rc) {
+ 		cifs_dbg(VFS, "%s: Could not init sha256", __func__);
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index aa52d87985aa..e5d6ee61ff48 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -426,9 +426,9 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot get buffer for block bitmap - "
+-			   "block_group = %u, block_bitmap = %llu",
+-			   block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot get buffer for block bitmap - "
++			     "block_group = %u, block_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index f336cbc6e932..796aa609bcb9 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -138,9 +138,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ 	}
+ 	bh = sb_getblk(sb, bitmap_blk);
+ 	if (unlikely(!bh)) {
+-		ext4_error(sb, "Cannot read inode bitmap - "
+-			    "block_group = %u, inode_bitmap = %llu",
+-			    block_group, bitmap_blk);
++		ext4_warning(sb, "Cannot read inode bitmap - "
++			     "block_group = %u, inode_bitmap = %llu",
++			     block_group, bitmap_blk);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	if (bitmap_uptodate(bh))
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 2a4c25c4681d..116ff68c5bd4 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1398,6 +1398,7 @@ static struct buffer_head * ext4_find_entry (struct inode *dir,
+ 			goto cleanup_and_exit;
+ 		dxtrace(printk(KERN_DEBUG "ext4_find_entry: dx failed, "
+ 			       "falling back\n"));
++		ret = NULL;
+ 	}
+ 	nblocks = dir->i_size >> EXT4_BLOCK_SIZE_BITS(sb);
+ 	if (!nblocks) {
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index b7f7922061be..130c12974e28 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -776,26 +776,26 @@ void ext4_mark_group_bitmap_corrupted(struct super_block *sb,
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+ 	struct ext4_group_info *grp = ext4_get_group_info(sb, group);
+ 	struct ext4_group_desc *gdp = ext4_get_group_desc(sb, group, NULL);
++	int ret;
+ 
+-	if ((flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_BBITMAP_CORRUPT(grp)) {
+-		percpu_counter_sub(&sbi->s_freeclusters_counter,
+-					grp->bb_free);
+-		set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
++	if (flags & EXT4_GROUP_INFO_BBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret)
++			percpu_counter_sub(&sbi->s_freeclusters_counter,
++					   grp->bb_free);
+ 	}
+ 
+-	if ((flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) &&
+-	    !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {
+-		if (gdp) {
++	if (flags & EXT4_GROUP_INFO_IBITMAP_CORRUPT) {
++		ret = ext4_test_and_set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
++					    &grp->bb_state);
++		if (!ret && gdp) {
+ 			int count;
+ 
+ 			count = ext4_free_inodes_count(sb, gdp);
+ 			percpu_counter_sub(&sbi->s_freeinodes_counter,
+ 					   count);
+ 		}
+-		set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT,
+-			&grp->bb_state);
+ 	}
+ }
+ 
+diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
+index f34da0bb8f17..b970a200f20c 100644
+--- a/fs/ext4/sysfs.c
++++ b/fs/ext4/sysfs.c
+@@ -274,8 +274,12 @@ static ssize_t ext4_attr_show(struct kobject *kobj,
+ 	case attr_pointer_ui:
+ 		if (!ptr)
+ 			return 0;
+-		return snprintf(buf, PAGE_SIZE, "%u\n",
+-				*((unsigned int *) ptr));
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					le32_to_cpup(ptr));
++		else
++			return snprintf(buf, PAGE_SIZE, "%u\n",
++					*((unsigned int *) ptr));
+ 	case attr_pointer_atomic:
+ 		if (!ptr)
+ 			return 0;
+@@ -308,7 +312,10 @@ static ssize_t ext4_attr_store(struct kobject *kobj,
+ 		ret = kstrtoul(skip_spaces(buf), 0, &t);
+ 		if (ret)
+ 			return ret;
+-		*((unsigned int *) ptr) = t;
++		if (a->attr_ptr == ptr_ext4_super_block_offset)
++			*((__le32 *) ptr) = cpu_to_le32(t);
++		else
++			*((unsigned int *) ptr) = t;
+ 		return len;
+ 	case attr_inode_readahead:
+ 		return inode_readahead_blks_store(sbi, buf, len);
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 723df14f4084..f36fc5d5b257 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -190,6 +190,8 @@ ext4_xattr_check_entries(struct ext4_xattr_entry *entry, void *end,
+ 		struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e);
+ 		if ((void *)next >= end)
+ 			return -EFSCORRUPTED;
++		if (strnlen(e->e_name, e->e_name_len) != e->e_name_len)
++			return -EFSCORRUPTED;
+ 		e = next;
+ 	}
+ 
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index c6b88fa85e2e..4a9ace7280b9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -127,6 +127,16 @@ static bool fuse_block_alloc(struct fuse_conn *fc, bool for_background)
+ 	return !fc->initialized || (for_background && fc->blocked);
+ }
+ 
++static void fuse_drop_waiting(struct fuse_conn *fc)
++{
++	if (fc->connected) {
++		atomic_dec(&fc->num_waiting);
++	} else if (atomic_dec_and_test(&fc->num_waiting)) {
++		/* wake up aborters */
++		wake_up_all(&fc->blocked_waitq);
++	}
++}
++
+ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 				       bool for_background)
+ {
+@@ -175,7 +185,7 @@ static struct fuse_req *__fuse_get_req(struct fuse_conn *fc, unsigned npages,
+ 	return req;
+ 
+  out:
+-	atomic_dec(&fc->num_waiting);
++	fuse_drop_waiting(fc);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -285,7 +295,7 @@ void fuse_put_request(struct fuse_conn *fc, struct fuse_req *req)
+ 
+ 		if (test_bit(FR_WAITING, &req->flags)) {
+ 			__clear_bit(FR_WAITING, &req->flags);
+-			atomic_dec(&fc->num_waiting);
++			fuse_drop_waiting(fc);
+ 		}
+ 
+ 		if (req->stolen_file)
+@@ -371,7 +381,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	struct fuse_iqueue *fiq = &fc->iq;
+ 
+ 	if (test_and_set_bit(FR_FINISHED, &req->flags))
+-		return;
++		goto put_request;
+ 
+ 	spin_lock(&fiq->waitq.lock);
+ 	list_del_init(&req->intr_entry);
+@@ -400,6 +410,7 @@ static void request_end(struct fuse_conn *fc, struct fuse_req *req)
+ 	wake_up(&req->waitq);
+ 	if (req->end)
+ 		req->end(fc, req);
++put_request:
+ 	fuse_put_request(fc, req);
+ }
+ 
+@@ -1944,12 +1955,15 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 	if (!fud)
+ 		return -EPERM;
+ 
++	pipe_lock(pipe);
++
+ 	bufs = kmalloc_array(pipe->buffers, sizeof(struct pipe_buffer),
+ 			     GFP_KERNEL);
+-	if (!bufs)
++	if (!bufs) {
++		pipe_unlock(pipe);
+ 		return -ENOMEM;
++	}
+ 
+-	pipe_lock(pipe);
+ 	nbuf = 0;
+ 	rem = 0;
+ 	for (idx = 0; idx < pipe->nrbufs && rem < len; idx++)
+@@ -2105,6 +2119,7 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 				set_bit(FR_ABORTED, &req->flags);
+ 				if (!test_bit(FR_LOCKED, &req->flags)) {
+ 					set_bit(FR_PRIVATE, &req->flags);
++					__fuse_get_request(req);
+ 					list_move(&req->list, &to_end1);
+ 				}
+ 				spin_unlock(&req->waitq.lock);
+@@ -2131,7 +2146,6 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ 
+ 		while (!list_empty(&to_end1)) {
+ 			req = list_first_entry(&to_end1, struct fuse_req, list);
+-			__fuse_get_request(req);
+ 			list_del_init(&req->list);
+ 			request_end(fc, req);
+ 		}
+@@ -2142,6 +2156,11 @@ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort)
+ }
+ EXPORT_SYMBOL_GPL(fuse_abort_conn);
+ 
++void fuse_wait_aborted(struct fuse_conn *fc)
++{
++	wait_event(fc->blocked_waitq, atomic_read(&fc->num_waiting) == 0);
++}
++
+ int fuse_dev_release(struct inode *inode, struct file *file)
+ {
+ 	struct fuse_dev *fud = fuse_get_dev(file);
+@@ -2149,9 +2168,15 @@ int fuse_dev_release(struct inode *inode, struct file *file)
+ 	if (fud) {
+ 		struct fuse_conn *fc = fud->fc;
+ 		struct fuse_pqueue *fpq = &fud->pq;
++		LIST_HEAD(to_end);
+ 
++		spin_lock(&fpq->lock);
+ 		WARN_ON(!list_empty(&fpq->io));
+-		end_requests(fc, &fpq->processing);
++		list_splice_init(&fpq->processing, &to_end);
++		spin_unlock(&fpq->lock);
++
++		end_requests(fc, &to_end);
++
+ 		/* Are we the last open device? */
+ 		if (atomic_dec_and_test(&fc->dev_count)) {
+ 			WARN_ON(fc->iq.fasync != NULL);
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 56231b31f806..606909ed5f21 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -355,11 +355,12 @@ static struct dentry *fuse_lookup(struct inode *dir, struct dentry *entry,
+ 	struct inode *inode;
+ 	struct dentry *newent;
+ 	bool outarg_valid = true;
++	bool locked;
+ 
+-	fuse_lock_inode(dir);
++	locked = fuse_lock_inode(dir);
+ 	err = fuse_lookup_name(dir->i_sb, get_node_id(dir), &entry->d_name,
+ 			       &outarg, &inode);
+-	fuse_unlock_inode(dir);
++	fuse_unlock_inode(dir, locked);
+ 	if (err == -ENOENT) {
+ 		outarg_valid = false;
+ 		err = 0;
+@@ -1340,6 +1341,7 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_req *req;
+ 	u64 attr_version = 0;
++	bool locked;
+ 
+ 	if (is_bad_inode(inode))
+ 		return -EIO;
+@@ -1367,9 +1369,9 @@ static int fuse_readdir(struct file *file, struct dir_context *ctx)
+ 		fuse_read_fill(req, file, ctx->pos, PAGE_SIZE,
+ 			       FUSE_READDIR);
+ 	}
+-	fuse_lock_inode(inode);
++	locked = fuse_lock_inode(inode);
+ 	fuse_request_send(fc, req);
+-	fuse_unlock_inode(inode);
++	fuse_unlock_inode(inode, locked);
+ 	nbytes = req->out.args[0].size;
+ 	err = req->out.h.error;
+ 	fuse_put_request(fc, req);
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index a201fb0ac64f..aa23749a943b 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -866,6 +866,7 @@ static int fuse_readpages_fill(void *_data, struct page *page)
+ 	}
+ 
+ 	if (WARN_ON(req->num_pages >= req->max_pages)) {
++		unlock_page(page);
+ 		fuse_put_request(fc, req);
+ 		return -EIO;
+ 	}
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 5256ad333b05..f78e9614bb5f 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -862,6 +862,7 @@ void fuse_request_send_background_locked(struct fuse_conn *fc,
+ 
+ /* Abort all requests */
+ void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
++void fuse_wait_aborted(struct fuse_conn *fc);
+ 
+ /**
+  * Invalidate inode attributes
+@@ -974,8 +975,8 @@ int fuse_do_setattr(struct dentry *dentry, struct iattr *attr,
+ 
+ void fuse_set_initialized(struct fuse_conn *fc);
+ 
+-void fuse_unlock_inode(struct inode *inode);
+-void fuse_lock_inode(struct inode *inode);
++void fuse_unlock_inode(struct inode *inode, bool locked);
++bool fuse_lock_inode(struct inode *inode);
+ 
+ int fuse_setxattr(struct inode *inode, const char *name, const void *value,
+ 		  size_t size, int flags);
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index a24df8861b40..2dbd487390a3 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -357,15 +357,21 @@ int fuse_reverse_inval_inode(struct super_block *sb, u64 nodeid,
+ 	return 0;
+ }
+ 
+-void fuse_lock_inode(struct inode *inode)
++bool fuse_lock_inode(struct inode *inode)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	bool locked = false;
++
++	if (!get_fuse_conn(inode)->parallel_dirops) {
+ 		mutex_lock(&get_fuse_inode(inode)->mutex);
++		locked = true;
++	}
++
++	return locked;
+ }
+ 
+-void fuse_unlock_inode(struct inode *inode)
++void fuse_unlock_inode(struct inode *inode, bool locked)
+ {
+-	if (!get_fuse_conn(inode)->parallel_dirops)
++	if (locked)
+ 		mutex_unlock(&get_fuse_inode(inode)->mutex);
+ }
+ 
+@@ -391,9 +397,6 @@ static void fuse_put_super(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+-	fuse_send_destroy(fc);
+-
+-	fuse_abort_conn(fc, false);
+ 	mutex_lock(&fuse_mutex);
+ 	list_del(&fc->entry);
+ 	fuse_ctl_remove_conn(fc);
+@@ -1210,16 +1213,25 @@ static struct dentry *fuse_mount(struct file_system_type *fs_type,
+ 	return mount_nodev(fs_type, flags, raw_data, fuse_fill_super);
+ }
+ 
+-static void fuse_kill_sb_anon(struct super_block *sb)
++static void fuse_sb_destroy(struct super_block *sb)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn_super(sb);
+ 
+ 	if (fc) {
++		fuse_send_destroy(fc);
++
++		fuse_abort_conn(fc, false);
++		fuse_wait_aborted(fc);
++
+ 		down_write(&fc->killsb);
+ 		fc->sb = NULL;
+ 		up_write(&fc->killsb);
+ 	}
++}
+ 
++static void fuse_kill_sb_anon(struct super_block *sb)
++{
++	fuse_sb_destroy(sb);
+ 	kill_anon_super(sb);
+ }
+ 
+@@ -1242,14 +1254,7 @@ static struct dentry *fuse_mount_blk(struct file_system_type *fs_type,
+ 
+ static void fuse_kill_sb_blk(struct super_block *sb)
+ {
+-	struct fuse_conn *fc = get_fuse_conn_super(sb);
+-
+-	if (fc) {
+-		down_write(&fc->killsb);
+-		fc->sb = NULL;
+-		up_write(&fc->killsb);
+-	}
+-
++	fuse_sb_destroy(sb);
+ 	kill_block_super(sb);
+ }
+ 
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index 5c13f29bfcdb..118fa197a35f 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -405,6 +405,50 @@ int sysfs_chmod_file(struct kobject *kobj, const struct attribute *attr,
+ }
+ EXPORT_SYMBOL_GPL(sysfs_chmod_file);
+ 
++/**
++ * sysfs_break_active_protection - break "active" protection
++ * @kobj: The kernel object @attr is associated with.
++ * @attr: The attribute to break the "active" protection for.
++ *
++ * With sysfs, just like kernfs, deletion of an attribute is postponed until
++ * all active .show() and .store() callbacks have finished unless this function
++ * is called. Hence this function is useful in methods that implement self
++ * deletion.
++ */
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr)
++{
++	struct kernfs_node *kn;
++
++	kobject_get(kobj);
++	kn = kernfs_find_and_get(kobj->sd, attr->name);
++	if (kn)
++		kernfs_break_active_protection(kn);
++	return kn;
++}
++EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
++
++/**
++ * sysfs_unbreak_active_protection - restore "active" protection
++ * @kn: Pointer returned by sysfs_break_active_protection().
++ *
++ * Undo the effects of sysfs_break_active_protection(). Since this function
++ * calls kernfs_put() on the kernfs node that corresponds to the 'attr'
++ * argument passed to sysfs_break_active_protection() that attribute may have
++ * been removed between the sysfs_break_active_protection() and
++ * sysfs_unbreak_active_protection() calls, it is not safe to access @kn after
++ * this function has returned.
++ */
++void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++	struct kobject *kobj = kn->parent->priv;
++
++	kernfs_unbreak_active_protection(kn);
++	kernfs_put(kn);
++	kobject_put(kobj);
++}
++EXPORT_SYMBOL_GPL(sysfs_unbreak_active_protection);
++
+ /**
+  * sysfs_remove_file_ns - remove an object attribute with a custom ns tag
+  * @kobj: object we're acting for
+diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
+index c9e5a6621b95..c44703f471b3 100644
+--- a/include/drm/i915_drm.h
++++ b/include/drm/i915_drm.h
+@@ -95,7 +95,9 @@ extern struct resource intel_graphics_stolen_res;
+ #define    I845_TSEG_SIZE_512K	(2 << 1)
+ #define    I845_TSEG_SIZE_1M	(3 << 1)
+ 
+-#define INTEL_BSM 0x5c
++#define INTEL_BSM		0x5c
++#define INTEL_GEN11_BSM_DW0	0xc0
++#define INTEL_GEN11_BSM_DW1	0xc4
+ #define   INTEL_BSM_MASK	(-(1u << 20))
+ 
+ #endif				/* _I915_DRM_H_ */
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 32f247cb5e9e..bc4f87cbe7f4 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -1111,6 +1111,8 @@ extern struct ata_host *ata_host_alloc(struct device *dev, int max_ports);
+ extern struct ata_host *ata_host_alloc_pinfo(struct device *dev,
+ 			const struct ata_port_info * const * ppi, int n_ports);
+ extern int ata_slave_link_init(struct ata_port *ap);
++extern void ata_host_get(struct ata_host *host);
++extern void ata_host_put(struct ata_host *host);
+ extern int ata_host_start(struct ata_host *host);
+ extern int ata_host_register(struct ata_host *host,
+ 			     struct scsi_host_template *sht);
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index 6d7e800affd8..3ede9f46a494 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -148,9 +148,13 @@ void early_printk(const char *s, ...) { }
+ #ifdef CONFIG_PRINTK_NMI
+ extern void printk_nmi_enter(void);
+ extern void printk_nmi_exit(void);
++extern void printk_nmi_direct_enter(void);
++extern void printk_nmi_direct_exit(void);
+ #else
+ static inline void printk_nmi_enter(void) { }
+ static inline void printk_nmi_exit(void) { }
++static inline void printk_nmi_direct_enter(void) { }
++static inline void printk_nmi_direct_exit(void) { }
+ #endif /* PRINTK_NMI */
+ 
+ #ifdef CONFIG_PRINTK
+diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
+index b8bfdc173ec0..3c12198c0103 100644
+--- a/include/linux/sysfs.h
++++ b/include/linux/sysfs.h
+@@ -237,6 +237,9 @@ int __must_check sysfs_create_files(struct kobject *kobj,
+ 				   const struct attribute **attr);
+ int __must_check sysfs_chmod_file(struct kobject *kobj,
+ 				  const struct attribute *attr, umode_t mode);
++struct kernfs_node *sysfs_break_active_protection(struct kobject *kobj,
++						  const struct attribute *attr);
++void sysfs_unbreak_active_protection(struct kernfs_node *kn);
+ void sysfs_remove_file_ns(struct kobject *kobj, const struct attribute *attr,
+ 			  const void *ns);
+ bool sysfs_remove_file_self(struct kobject *kobj, const struct attribute *attr);
+@@ -350,6 +353,17 @@ static inline int sysfs_chmod_file(struct kobject *kobj,
+ 	return 0;
+ }
+ 
++static inline struct kernfs_node *
++sysfs_break_active_protection(struct kobject *kobj,
++			      const struct attribute *attr)
++{
++	return NULL;
++}
++
++static inline void sysfs_unbreak_active_protection(struct kernfs_node *kn)
++{
++}
++
+ static inline void sysfs_remove_file_ns(struct kobject *kobj,
+ 					const struct attribute *attr,
+ 					const void *ns)
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 06639fb6ab85..8eb5e5ebe136 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -43,6 +43,8 @@ struct tpm_class_ops {
+ 	u8 (*status) (struct tpm_chip *chip);
+ 	bool (*update_timeouts)(struct tpm_chip *chip,
+ 				unsigned long *timeout_cap);
++	int (*go_idle)(struct tpm_chip *chip);
++	int (*cmd_ready)(struct tpm_chip *chip);
+ 	int (*request_locality)(struct tpm_chip *chip, int loc);
+ 	int (*relinquish_locality)(struct tpm_chip *chip, int loc);
+ 	void (*clk_enable)(struct tpm_chip *chip, bool value);
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index 225ab7783dfd..3de3b10da19a 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -161,7 +161,7 @@ struct sata_device {
+ 	u8     port_no;        /* port number, if this is a PM (Port) */
+ 
+ 	struct ata_port *ap;
+-	struct ata_host ata_host;
++	struct ata_host *ata_host;
+ 	struct smp_resp rps_resp ____cacheline_aligned; /* report_phy_sata_resp */
+ 	u8     fis[ATA_RESP_FIS_SIZE];
+ };
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index ea619021d901..f3183ad10d96 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -710,9 +710,7 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	 * there is still a relative jump) and disabled.
+ 	 */
+ 	op = container_of(ap, struct optimized_kprobe, kp);
+-	if (unlikely(list_empty(&op->list)))
+-		printk(KERN_WARNING "Warning: found a stray unused "
+-			"aggrprobe@%p\n", ap->addr);
++	WARN_ON_ONCE(list_empty(&op->list));
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+@@ -985,7 +983,8 @@ static int arm_kprobe_ftrace(struct kprobe *p)
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 				   (unsigned long)p->addr, 0, 0);
+ 	if (ret) {
+-		pr_debug("Failed to arm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++		pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
++			 p->addr, ret);
+ 		return ret;
+ 	}
+ 
+@@ -1025,7 +1024,8 @@ static int disarm_kprobe_ftrace(struct kprobe *p)
+ 
+ 	ret = ftrace_set_filter_ip(&kprobe_ftrace_ops,
+ 			   (unsigned long)p->addr, 1, 0);
+-	WARN(ret < 0, "Failed to disarm kprobe-ftrace at %p (%d)\n", p->addr, ret);
++	WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
++		  p->addr, ret);
+ 	return ret;
+ }
+ #else	/* !CONFIG_KPROBES_ON_FTRACE */
+@@ -2169,11 +2169,12 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(enable_kprobe);
+ 
++/* Caller must NOT call this in usual path. This is only for critical case */
+ void dump_kprobe(struct kprobe *kp)
+ {
+-	printk(KERN_WARNING "Dumping kprobe:\n");
+-	printk(KERN_WARNING "Name: %s\nAddress: %p\nOffset: %x\n",
+-	       kp->symbol_name, kp->addr, kp->offset);
++	pr_err("Dumping kprobe:\n");
++	pr_err("Name: %s\nOffset: %x\nAddress: %pS\n",
++	       kp->symbol_name, kp->offset, kp->addr);
+ }
+ NOKPROBE_SYMBOL(dump_kprobe);
+ 
+@@ -2196,11 +2197,8 @@ static int __init populate_kprobe_blacklist(unsigned long *start,
+ 		entry = arch_deref_entry_point((void *)*iter);
+ 
+ 		if (!kernel_text_address(entry) ||
+-		    !kallsyms_lookup_size_offset(entry, &size, &offset)) {
+-			pr_err("Failed to find blacklist at %p\n",
+-				(void *)entry);
++		    !kallsyms_lookup_size_offset(entry, &size, &offset))
+ 			continue;
+-		}
+ 
+ 		ent = kmalloc(sizeof(*ent), GFP_KERNEL);
+ 		if (!ent)
+@@ -2428,8 +2426,16 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
+ 	struct kprobe_blacklist_entry *ent =
+ 		list_entry(v, struct kprobe_blacklist_entry, list);
+ 
+-	seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
+-		   (void *)ent->end_addr, (void *)ent->start_addr);
++	/*
++	 * If /proc/kallsyms is not showing kernel address, we won't
++	 * show them here either.
++	 */
++	if (!kallsyms_show_value())
++		seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
++			   (void *)ent->start_addr);
++	else
++		seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr,
++			   (void *)ent->end_addr, (void *)ent->start_addr);
+ 	return 0;
+ }
+ 
+@@ -2611,7 +2617,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!dir)
+ 		return -ENOMEM;
+ 
+-	file = debugfs_create_file("list", 0444, dir, NULL,
++	file = debugfs_create_file("list", 0400, dir, NULL,
+ 				&debugfs_kprobes_operations);
+ 	if (!file)
+ 		goto error;
+@@ -2621,7 +2627,7 @@ static int __init debugfs_kprobe_init(void)
+ 	if (!file)
+ 		goto error;
+ 
+-	file = debugfs_create_file("blacklist", 0444, dir, NULL,
++	file = debugfs_create_file("blacklist", 0400, dir, NULL,
+ 				&debugfs_kprobe_blacklist_ops);
+ 	if (!file)
+ 		goto error;
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 2a7d04049af4..0f1898820cba 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -19,11 +19,16 @@
+ #ifdef CONFIG_PRINTK
+ 
+ #define PRINTK_SAFE_CONTEXT_MASK	 0x3fffffff
+-#define PRINTK_NMI_DEFERRED_CONTEXT_MASK 0x40000000
++#define PRINTK_NMI_DIRECT_CONTEXT_MASK	 0x40000000
+ #define PRINTK_NMI_CONTEXT_MASK		 0x80000000
+ 
+ extern raw_spinlock_t logbuf_lock;
+ 
++__printf(5, 0)
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args);
++
+ __printf(1, 0) int vprintk_default(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_deferred(const char *fmt, va_list args);
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args);
+@@ -54,6 +59,8 @@ void __printk_safe_exit(void);
+ 		local_irq_enable();		\
+ 	} while (0)
+ 
++void defer_console_output(void);
++
+ #else
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args) { return 0; }
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 247808333ba4..1d1513215c22 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1824,28 +1824,16 @@ static size_t log_output(int facility, int level, enum log_flags lflags, const c
+ 	return log_store(facility, level, lflags, 0, dict, dictlen, text, text_len);
+ }
+ 
+-asmlinkage int vprintk_emit(int facility, int level,
+-			    const char *dict, size_t dictlen,
+-			    const char *fmt, va_list args)
++/* Must be called under logbuf_lock. */
++int vprintk_store(int facility, int level,
++		  const char *dict, size_t dictlen,
++		  const char *fmt, va_list args)
+ {
+ 	static char textbuf[LOG_LINE_MAX];
+ 	char *text = textbuf;
+ 	size_t text_len;
+ 	enum log_flags lflags = 0;
+-	unsigned long flags;
+-	int printed_len;
+-	bool in_sched = false;
+-
+-	if (level == LOGLEVEL_SCHED) {
+-		level = LOGLEVEL_DEFAULT;
+-		in_sched = true;
+-	}
+-
+-	boot_delay_msec(level);
+-	printk_delay();
+ 
+-	/* This stops the holder of console_sem just where we want him */
+-	logbuf_lock_irqsave(flags);
+ 	/*
+ 	 * The printf needs to come first; we need the syslog
+ 	 * prefix which might be passed-in as a parameter.
+@@ -1886,8 +1874,29 @@ asmlinkage int vprintk_emit(int facility, int level,
+ 	if (dict)
+ 		lflags |= LOG_PREFIX|LOG_NEWLINE;
+ 
+-	printed_len = log_output(facility, level, lflags, dict, dictlen, text, text_len);
++	return log_output(facility, level, lflags,
++			  dict, dictlen, text, text_len);
++}
+ 
++asmlinkage int vprintk_emit(int facility, int level,
++			    const char *dict, size_t dictlen,
++			    const char *fmt, va_list args)
++{
++	int printed_len;
++	bool in_sched = false;
++	unsigned long flags;
++
++	if (level == LOGLEVEL_SCHED) {
++		level = LOGLEVEL_DEFAULT;
++		in_sched = true;
++	}
++
++	boot_delay_msec(level);
++	printk_delay();
++
++	/* This stops the holder of console_sem just where we want him */
++	logbuf_lock_irqsave(flags);
++	printed_len = vprintk_store(facility, level, dict, dictlen, fmt, args);
+ 	logbuf_unlock_irqrestore(flags);
+ 
+ 	/* If called from the scheduler, we can not call up(). */
+@@ -2878,16 +2887,20 @@ void wake_up_klogd(void)
+ 	preempt_enable();
+ }
+ 
+-int vprintk_deferred(const char *fmt, va_list args)
++void defer_console_output(void)
+ {
+-	int r;
+-
+-	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
+-
+ 	preempt_disable();
+ 	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+ 	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+ 	preempt_enable();
++}
++
++int vprintk_deferred(const char *fmt, va_list args)
++{
++	int r;
++
++	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
++	defer_console_output();
+ 
+ 	return r;
+ }
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index d7d091309054..a0a74c533e4b 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -308,24 +308,33 @@ static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
+ 
+ void printk_nmi_enter(void)
+ {
+-	/*
+-	 * The size of the extra per-CPU buffer is limited. Use it only when
+-	 * the main one is locked. If this CPU is not in the safe context,
+-	 * the lock must be taken on another CPU and we could wait for it.
+-	 */
+-	if ((this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK) &&
+-	    raw_spin_is_locked(&logbuf_lock)) {
+-		this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+-	} else {
+-		this_cpu_or(printk_context, PRINTK_NMI_DEFERRED_CONTEXT_MASK);
+-	}
++	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
+ }
+ 
+ void printk_nmi_exit(void)
+ {
+-	this_cpu_and(printk_context,
+-		     ~(PRINTK_NMI_CONTEXT_MASK |
+-		       PRINTK_NMI_DEFERRED_CONTEXT_MASK));
++	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
++}
++
++/*
++ * Marks a code that might produce many messages in NMI context
++ * and the risk of losing them is more critical than eventual
++ * reordering.
++ *
++ * It has effect only when called in NMI context. Then printk()
++ * will try to store the messages into the main logbuf directly
++ * and use the per-CPU buffers only as a fallback when the lock
++ * is not available.
++ */
++void printk_nmi_direct_enter(void)
++{
++	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
++		this_cpu_or(printk_context, PRINTK_NMI_DIRECT_CONTEXT_MASK);
++}
++
++void printk_nmi_direct_exit(void)
++{
++	this_cpu_and(printk_context, ~PRINTK_NMI_DIRECT_CONTEXT_MASK);
+ }
+ 
+ #else
+@@ -363,6 +372,20 @@ void __printk_safe_exit(void)
+ 
+ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ {
++	/*
++	 * Try to use the main logbuf even in NMI. But avoid calling console
++	 * drivers that might have their own locks.
++	 */
++	if ((this_cpu_read(printk_context) & PRINTK_NMI_DIRECT_CONTEXT_MASK) &&
++	    raw_spin_trylock(&logbuf_lock)) {
++		int len;
++
++		len = vprintk_store(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
++		raw_spin_unlock(&logbuf_lock);
++		defer_console_output();
++		return len;
++	}
++
+ 	/* Use extra buffer in NMI when logbuf_lock is taken or in safe mode. */
+ 	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
+ 		return vprintk_nmi(fmt, args);
+@@ -371,13 +394,6 @@ __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+ 	if (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK)
+ 		return vprintk_safe(fmt, args);
+ 
+-	/*
+-	 * Use the main logbuf when logbuf_lock is available in NMI.
+-	 * But avoid calling console drivers that might have their own locks.
+-	 */
+-	if (this_cpu_read(printk_context) & PRINTK_NMI_DEFERRED_CONTEXT_MASK)
+-		return vprintk_deferred(fmt, args);
+-
+ 	/* No obstacles. */
+ 	return vprintk_default(fmt, args);
+ }
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index e190d1ef3a23..067cb83f37ea 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -81,6 +81,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	unsigned long flags;
+ 	bool enabled;
+ 
++	preempt_disable();
+ 	raw_spin_lock_irqsave(&stopper->lock, flags);
+ 	enabled = stopper->enabled;
+ 	if (enabled)
+@@ -90,6 +91,7 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
+ 	raw_spin_unlock_irqrestore(&stopper->lock, flags);
+ 
+ 	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return enabled;
+ }
+@@ -236,13 +238,24 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
+ 	struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
+ 	DEFINE_WAKE_Q(wakeq);
+ 	int err;
++
+ retry:
++	/*
++	 * The waking up of stopper threads has to happen in the same
++	 * scheduling context as the queueing.  Otherwise, there is a
++	 * possibility of one of the above stoppers being woken up by another
++	 * CPU, and preempting us. This will cause us to not wake up the other
++	 * stopper forever.
++	 */
++	preempt_disable();
+ 	raw_spin_lock_irq(&stopper1->lock);
+ 	raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
+ 
+-	err = -ENOENT;
+-	if (!stopper1->enabled || !stopper2->enabled)
++	if (!stopper1->enabled || !stopper2->enabled) {
++		err = -ENOENT;
+ 		goto unlock;
++	}
++
+ 	/*
+ 	 * Ensure that if we race with __stop_cpus() the stoppers won't get
+ 	 * queued up in reverse order leading to system deadlock.
+@@ -253,36 +266,30 @@ retry:
+ 	 * It can be falsely true but it is safe to spin until it is cleared,
+ 	 * queue_stop_cpus_work() does everything under preempt_disable().
+ 	 */
+-	err = -EDEADLK;
+-	if (unlikely(stop_cpus_in_progress))
+-			goto unlock;
++	if (unlikely(stop_cpus_in_progress)) {
++		err = -EDEADLK;
++		goto unlock;
++	}
+ 
+ 	err = 0;
+ 	__cpu_stop_queue_work(stopper1, work1, &wakeq);
+ 	__cpu_stop_queue_work(stopper2, work2, &wakeq);
+-	/*
+-	 * The waking up of stopper threads has to happen
+-	 * in the same scheduling context as the queueing.
+-	 * Otherwise, there is a possibility of one of the
+-	 * above stoppers being woken up by another CPU,
+-	 * and preempting us. This will cause us to n ot
+-	 * wake up the other stopper forever.
+-	 */
+-	preempt_disable();
++
+ unlock:
+ 	raw_spin_unlock(&stopper2->lock);
+ 	raw_spin_unlock_irq(&stopper1->lock);
+ 
+ 	if (unlikely(err == -EDEADLK)) {
++		preempt_enable();
++
+ 		while (stop_cpus_in_progress)
+ 			cpu_relax();
++
+ 		goto retry;
+ 	}
+ 
+-	if (!err) {
+-		wake_up_q(&wakeq);
+-		preempt_enable();
+-	}
++	wake_up_q(&wakeq);
++	preempt_enable();
+ 
+ 	return err;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 823687997b01..176debd3481b 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8288,6 +8288,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	tracing_off();
+ 
+ 	local_irq_save(flags);
++	printk_nmi_direct_enter();
+ 
+ 	/* Simulate the iterator */
+ 	trace_init_global_iter(&iter);
+@@ -8367,7 +8368,8 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
+ 	for_each_tracing_cpu(cpu) {
+ 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
+ 	}
+- 	atomic_dec(&dump_running);
++	atomic_dec(&dump_running);
++	printk_nmi_direct_exit();
+ 	local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL_GPL(ftrace_dump);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 576d18045811..51f5a64d9ec2 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -266,7 +266,7 @@ static void __touch_watchdog(void)
+  * entering idle state.  This should only be used for scheduler events.
+  * Use touch_softlockup_watchdog() for everything else.
+  */
+-void touch_softlockup_watchdog_sched(void)
++notrace void touch_softlockup_watchdog_sched(void)
+ {
+ 	/*
+ 	 * Preemption can be enabled.  It doesn't matter which CPU's timestamp
+@@ -275,7 +275,7 @@ void touch_softlockup_watchdog_sched(void)
+ 	raw_cpu_write(watchdog_touch_ts, 0);
+ }
+ 
+-void touch_softlockup_watchdog(void)
++notrace void touch_softlockup_watchdog(void)
+ {
+ 	touch_softlockup_watchdog_sched();
+ 	wq_watchdog_touch(raw_smp_processor_id());
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index e449a23e9d59..4ece6028007a 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -29,7 +29,7 @@ static struct cpumask dead_events_mask;
+ static unsigned long hardlockup_allcpu_dumped;
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+ 
+-void arch_touch_nmi_watchdog(void)
++notrace void arch_touch_nmi_watchdog(void)
+ {
+ 	/*
+ 	 * Using __raw here because some code paths have
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 78b192071ef7..5f78c6e41796 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -5559,7 +5559,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
+ 	mod_timer(&wq_watchdog_timer, jiffies + thresh);
+ }
+ 
+-void wq_watchdog_touch(int cpu)
++notrace void wq_watchdog_touch(int cpu)
+ {
+ 	if (cpu >= 0)
+ 		per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
+diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
+index 61a6b5aab07e..15ca78e1c7d4 100644
+--- a/lib/nmi_backtrace.c
++++ b/lib/nmi_backtrace.c
+@@ -87,11 +87,9 @@ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+ 
+ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ {
+-	static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
+ 	int cpu = smp_processor_id();
+ 
+ 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
+-		arch_spin_lock(&lock);
+ 		if (regs && cpu_in_idle(instruction_pointer(regs))) {
+ 			pr_warn("NMI backtrace for cpu %d skipped: idling at %pS\n",
+ 				cpu, (void *)instruction_pointer(regs));
+@@ -102,7 +100,6 @@ bool nmi_cpu_backtrace(struct pt_regs *regs)
+ 			else
+ 				dump_stack();
+ 		}
+-		arch_spin_unlock(&lock);
+ 		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
+ 		return true;
+ 	}
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index a48aaa79d352..cda186230287 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1942,6 +1942,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 		case 'F':
+ 			return device_node_string(buf, end, ptr, spec, fmt + 1);
+ 		}
++		break;
+ 	case 'x':
+ 		return pointer_string(buf, end, ptr, spec);
+ 	}
+diff --git a/mm/memory.c b/mm/memory.c
+index 0e356dd923c2..86d4329acb05 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+ 
+ 	tlb_flush(tlb);
+ 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
+-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+-	tlb_table_flush(tlb);
+-#endif
+ 	__tlb_reset_range(tlb);
+ }
+ 
+@@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+ {
+ 	struct mmu_gather_batch *batch;
+ 
++#ifdef CONFIG_HAVE_RCU_TABLE_FREE
++	tlb_table_flush(tlb);
++#endif
+ 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
+ 		free_pages_and_swap_cache(batch->pages, batch->nr);
+ 		batch->nr = 0;
+@@ -330,6 +330,21 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
+  * See the comment near struct mmu_table_batch.
+  */
+ 
++/*
++ * If we want tlb_remove_table() to imply TLB invalidates.
++ */
++static inline void tlb_table_invalidate(struct mmu_gather *tlb)
++{
++#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
++	/*
++	 * Invalidate page-table caches used by hardware walkers. Then we still
++	 * need to RCU-sched wait while freeing the pages because software
++	 * walkers can still be in-flight.
++	 */
++	tlb_flush_mmu_tlbonly(tlb);
++#endif
++}
++
+ static void tlb_remove_table_smp_sync(void *arg)
+ {
+ 	/* Simply deliver the interrupt */
+@@ -366,6 +381,7 @@ void tlb_table_flush(struct mmu_gather *tlb)
+ 	struct mmu_table_batch **batch = &tlb->batch;
+ 
+ 	if (*batch) {
++		tlb_table_invalidate(tlb);
+ 		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
+ 		*batch = NULL;
+ 	}
+@@ -387,11 +403,13 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
+ 	if (*batch == NULL) {
+ 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+ 		if (*batch == NULL) {
++			tlb_table_invalidate(tlb);
+ 			tlb_remove_table_one(table);
+ 			return;
+ 		}
+ 		(*batch)->nr = 0;
+ 	}
++
+ 	(*batch)->tables[(*batch)->nr++] = table;
+ 	if ((*batch)->nr == MAX_TABLE_BATCH)
+ 		tlb_table_flush(tlb);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 16161a36dc73..e8d1024dc547 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -280,7 +280,6 @@ rpcrdma_conn_upcall(struct rdma_cm_id *id, struct rdma_cm_event *event)
+ 		++xprt->rx_xprt.connect_cookie;
+ 		connstate = -ECONNABORTED;
+ connected:
+-		xprt->rx_buf.rb_credits = 1;
+ 		ep->rep_connected = connstate;
+ 		rpcrdma_conn_func(ep);
+ 		wake_up_all(&ep->rep_connect_wait);
+@@ -755,6 +754,7 @@ retry:
+ 	}
+ 
+ 	ep->rep_connected = 0;
++	rpcrdma_post_recvs(r_xprt, true);
+ 
+ 	rc = rdma_connect(ia->ri_id, &ep->rep_remote_cma);
+ 	if (rc) {
+@@ -773,8 +773,6 @@ retry:
+ 
+ 	dprintk("RPC:       %s: connected\n", __func__);
+ 
+-	rpcrdma_post_recvs(r_xprt, true);
+-
+ out:
+ 	if (rc)
+ 		ep->rep_connected = rc;
+@@ -1171,6 +1169,7 @@ rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
+ 		list_add(&req->rl_list, &buf->rb_send_bufs);
+ 	}
+ 
++	buf->rb_credits = 1;
+ 	buf->rb_posted_receives = 0;
+ 	INIT_LIST_HEAD(&buf->rb_recv_bufs);
+ 
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 0057d8eafcc1..8f0f508a78e9 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1062,7 +1062,7 @@ sub dump_struct($$) {
+     my $x = shift;
+     my $file = shift;
+ 
+-    if ($x =~ /(struct|union)\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /(struct|union)\s+(\w+)\s*\{(.*)\}/) {
+ 	my $decl_type = $1;
+ 	$declaration_name = $2;
+ 	my $members = $3;
+@@ -1148,20 +1148,20 @@ sub dump_struct($$) {
+ 				}
+ 			}
+ 		}
+-		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)}([^\{\}\;]*)\;/$newmember/;
++		$members =~ s/(struct|union)([^\{\};]+)\{([^\{\}]*)\}([^\{\}\;]*)\;/$newmember/;
+ 	}
+ 
+ 	# Ignore other nested elements, like enums
+-	$members =~ s/({[^\{\}]*})//g;
++	$members =~ s/(\{[^\{\}]*\})//g;
+ 
+ 	create_parameterlist($members, ';', $file, $declaration_name);
+ 	check_sections($file, $declaration_name, $decl_type, $sectcheck, $struct_actual);
+ 
+ 	# Adjust declaration for better display
+-	$declaration =~ s/([{;])/$1\n/g;
+-	$declaration =~ s/}\s+;/};/g;
++	$declaration =~ s/([\{;])/$1\n/g;
++	$declaration =~ s/\}\s+;/};/g;
+ 	# Better handle inlined enums
+-	do {} while ($declaration =~ s/(enum\s+{[^}]+),([^\n])/$1,\n$2/);
++	do {} while ($declaration =~ s/(enum\s+\{[^\}]+),([^\n])/$1,\n$2/);
+ 
+ 	my @def_args = split /\n/, $declaration;
+ 	my $level = 1;
+@@ -1171,12 +1171,12 @@ sub dump_struct($$) {
+ 		$clause =~ s/\s+$//;
+ 		$clause =~ s/\s+/ /;
+ 		next if (!$clause);
+-		$level-- if ($clause =~ m/(})/ && $level > 1);
++		$level-- if ($clause =~ m/(\})/ && $level > 1);
+ 		if (!($clause =~ m/^\s*#/)) {
+ 			$declaration .= "\t" x $level;
+ 		}
+ 		$declaration .= "\t" . $clause . "\n";
+-		$level++ if ($clause =~ m/({)/ && !($clause =~m/}/));
++		$level++ if ($clause =~ m/(\{)/ && !($clause =~m/\}/));
+ 	}
+ 	output_declaration($declaration_name,
+ 			   'struct',
+@@ -1244,7 +1244,7 @@ sub dump_enum($$) {
+     # strip #define macros inside enums
+     $x =~ s@#\s*((define|ifdef)\s+|endif)[^;]*;@@gos;
+ 
+-    if ($x =~ /enum\s+(\w+)\s*{(.*)}/) {
++    if ($x =~ /enum\s+(\w+)\s*\{(.*)\}/) {
+ 	$declaration_name = $1;
+ 	my $members = $2;
+ 	my %_members;
+@@ -1785,7 +1785,7 @@ sub process_proto_type($$) {
+     }
+ 
+     while (1) {
+-	if ( $x =~ /([^{};]*)([{};])(.*)/ ) {
++	if ( $x =~ /([^\{\};]*)([\{\};])(.*)/ ) {
+             if( length $prototype ) {
+                 $prototype .= " "
+             }
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 2fcdd84021a5..86c7805da997 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -2642,7 +2642,10 @@ int wm_adsp2_preloader_get(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
++	struct soc_mixer_control *mc =
++		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 
+ 	ucontrol->value.integer.value[0] = dsp->preloaded;
+ 
+@@ -2654,10 +2657,11 @@ int wm_adsp2_preloader_put(struct snd_kcontrol *kcontrol,
+ 			   struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+-	struct wm_adsp *dsp = snd_soc_component_get_drvdata(component);
++	struct wm_adsp *dsps = snd_soc_component_get_drvdata(component);
+ 	struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
++	struct wm_adsp *dsp = &dsps[mc->shift - 1];
+ 	char preload[32];
+ 
+ 	snprintf(preload, ARRAY_SIZE(preload), "DSP%u Preload", mc->shift);
+diff --git a/sound/soc/sirf/sirf-usp.c b/sound/soc/sirf/sirf-usp.c
+index 77e7dcf969d0..d70fcd4a1adf 100644
+--- a/sound/soc/sirf/sirf-usp.c
++++ b/sound/soc/sirf/sirf-usp.c
+@@ -370,10 +370,9 @@ static int sirf_usp_pcm_probe(struct platform_device *pdev)
+ 	platform_set_drvdata(pdev, usp);
+ 
+ 	mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	base = devm_ioremap(&pdev->dev, mem_res->start,
+-		resource_size(mem_res));
+-	if (base == NULL)
+-		return -ENOMEM;
++	base = devm_ioremap_resource(&pdev->dev, mem_res);
++	if (IS_ERR(base))
++		return PTR_ERR(base);
+ 	usp->regmap = devm_regmap_init_mmio(&pdev->dev, base,
+ 					    &sirf_usp_regmap_config);
+ 	if (IS_ERR(usp->regmap))
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 5e7ae47a9658..5feae9666822 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1694,6 +1694,14 @@ static u64 dpcm_runtime_base_format(struct snd_pcm_substream *substream)
+ 		int i;
+ 
+ 		for (i = 0; i < be->num_codecs; i++) {
++			/*
++			 * Skip CODECs which don't support the current stream
++			 * type. See soc_pcm_init_runtime_hw() for more details
++			 */
++			if (!snd_soc_dai_stream_valid(be->codec_dais[i],
++						      stream))
++				continue;
++
+ 			codec_dai_drv = be->codec_dais[i]->driver;
+ 			if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+ 				codec_stream = &codec_dai_drv->playback;
+diff --git a/sound/soc/zte/zx-tdm.c b/sound/soc/zte/zx-tdm.c
+index dc955272f58b..389272eeba9a 100644
+--- a/sound/soc/zte/zx-tdm.c
++++ b/sound/soc/zte/zx-tdm.c
+@@ -144,8 +144,8 @@ static void zx_tdm_rx_dma_en(struct zx_tdm_info *tdm, bool on)
+ #define ZX_TDM_RATES	(SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000)
+ 
+ #define ZX_TDM_FMTBIT \
+-	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FORMAT_MU_LAW | \
+-	SNDRV_PCM_FORMAT_A_LAW)
++	(SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_MU_LAW | \
++	SNDRV_PCM_FMTBIT_A_LAW)
+ 
+ static int zx_tdm_dai_probe(struct snd_soc_dai *dai)
+ {
+diff --git a/tools/perf/arch/s390/util/kvm-stat.c b/tools/perf/arch/s390/util/kvm-stat.c
+index d233e2eb9592..aaabab5e2830 100644
+--- a/tools/perf/arch/s390/util/kvm-stat.c
++++ b/tools/perf/arch/s390/util/kvm-stat.c
+@@ -102,7 +102,7 @@ const char * const kvm_skip_events[] = {
+ 
+ int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid)
+ {
+-	if (strstr(cpuid, "IBM/S390")) {
++	if (strstr(cpuid, "IBM")) {
+ 		kvm->exit_reasons = sie_exit_reasons;
+ 		kvm->exit_reasons_isa = "SIE";
+ 	} else
+diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
+index bd3d57f40f1b..17cecc96f735 100644
+--- a/virt/kvm/arm/arch_timer.c
++++ b/virt/kvm/arm/arch_timer.c
+@@ -295,9 +295,9 @@ static void phys_timer_emulate(struct kvm_vcpu *vcpu)
+ 	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	/*
+-	 * If the timer can fire now we have just raised the IRQ line and we
+-	 * don't need to have a soft timer scheduled for the future.  If the
+-	 * timer cannot fire at all, then we also don't need a soft timer.
++	 * If the timer can fire now, we don't need to have a soft timer
++	 * scheduled for the future.  If the timer cannot fire at all,
++	 * then we also don't need a soft timer.
+ 	 */
+ 	if (kvm_timer_should_fire(ptimer) || !kvm_timer_irq_can_fire(ptimer)) {
+ 		soft_timer_cancel(&timer->phys_timer, NULL);
+@@ -332,10 +332,10 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
+ 	level = kvm_timer_should_fire(vtimer);
+ 	kvm_timer_update_irq(vcpu, level, vtimer);
+ 
++	phys_timer_emulate(vcpu);
++
+ 	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
+ 		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+-
+-	phys_timer_emulate(vcpu);
+ }
+ 
+ static void vtimer_save_state(struct kvm_vcpu *vcpu)
+@@ -487,6 +487,7 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ {
+ 	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ 	struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
++	struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+ 
+ 	if (unlikely(!timer->enabled))
+ 		return;
+@@ -502,6 +503,10 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
+ 
+ 	/* Set the background timer for the physical timer emulation. */
+ 	phys_timer_emulate(vcpu);
++
++	/* If the timer fired while we weren't running, inject it now */
++	if (kvm_timer_should_fire(ptimer) != ptimer->irq.level)
++		kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer);
+ }
+ 
+ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 1d90d79706bd..c2b95a22959b 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1015,19 +1015,35 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+ 	pmd = stage2_get_pmd(kvm, cache, addr);
+ 	VM_BUG_ON(!pmd);
+ 
+-	/*
+-	 * Mapping in huge pages should only happen through a fault.  If a
+-	 * page is merged into a transparent huge page, the individual
+-	 * subpages of that huge page should be unmapped through MMU
+-	 * notifiers before we get here.
+-	 *
+-	 * Merging of CompoundPages is not supported; they should become
+-	 * splitting first, unmapped, merged, and mapped back in on-demand.
+-	 */
+-	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
+-
+ 	old_pmd = *pmd;
+ 	if (pmd_present(old_pmd)) {
++		/*
++		 * Multiple vcpus faulting on the same PMD entry, can
++		 * lead to them sequentially updating the PMD with the
++		 * same value. Following the break-before-make
++		 * (pmd_clear() followed by tlb_flush()) process can
++		 * hinder forward progress due to refaults generated
++		 * on missing translations.
++		 *
++		 * Skip updating the page table if the entry is
++		 * unchanged.
++		 */
++		if (pmd_val(old_pmd) == pmd_val(*new_pmd))
++			return 0;
++
++		/*
++		 * Mapping in huge pages should only happen through a
++		 * fault.  If a page is merged into a transparent huge
++		 * page, the individual subpages of that huge page
++		 * should be unmapped through MMU notifiers before we
++		 * get here.
++		 *
++		 * Merging of CompoundPages is not supported; they
++		 * should become splitting first, unmapped, merged,
++		 * and mapped back in on-demand.
++		 */
++		VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
++
+ 		pmd_clear(pmd);
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {
+@@ -1102,6 +1118,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
+ 	/* Create 2nd stage page table mapping - Level 3 */
+ 	old_pte = *pte;
+ 	if (pte_present(old_pte)) {
++		/* Skip page table update if there is no change */
++		if (pte_val(old_pte) == pte_val(*new_pte))
++			return 0;
++
+ 		kvm_set_pte(pte, __pte(0));
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {


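[The stage2_set_pmd_huge()/stage2_set_pte() hunks above add a fast path that skips the break-before-make sequence (clear entry, flush TLB, write new entry) when the new entry equals the one already installed, so vcpus racing on the same fault stop invalidating each other's translations. A minimal C sketch of that idea; the `set_entry`/`tlb_flush` names, the present-bit layout, and the flush counter are illustrative models, not kernel code:]

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative stand-in for kvm_tlb_flush_vmid_ipa(): just count flushes. */
static int flush_count;
static void tlb_flush(void) { flush_count++; }

/* Hypothetical page-table slot update with the skip-if-unchanged fast
 * path. Bit 0 models the "present" bit. Returns true if the slot was
 * actually rewritten. */
static bool set_entry(uint64_t *slot, uint64_t new_val)
{
	uint64_t old = *slot;

	if (old & 1) {			/* entry is present */
		if (old == new_val)
			return false;	/* unchanged: no break-before-make */
		*slot = 0;		/* break ... */
		tlb_flush();
	}
	*slot = new_val;		/* ... before make */
	return true;
}
```

[A second fault installing the identical entry now returns early instead of forcing a clear + flush that would refault every other vcpu.]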
^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-24 11:46 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-24 11:46 UTC (permalink / raw)
  To: gentoo-commits

commit:     f6b7dd03deac6395d6e2c321ff32c5df7296c3a8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 24 11:46:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 24 11:46:20 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f6b7dd03

Linux patch 4.18.5

 0000_README             |   4 +
 1004_linux-4.18.5.patch | 742 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 746 insertions(+)

diff --git a/0000_README b/0000_README
index c7d6cc0..8da0979 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-4.18.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.4
 
+Patch:  1004_linux-4.18.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-4.18.5.patch b/1004_linux-4.18.5.patch
new file mode 100644
index 0000000..abf70a2
--- /dev/null
+++ b/1004_linux-4.18.5.patch
@@ -0,0 +1,742 @@
+diff --git a/Makefile b/Makefile
+index ef0dd566c104..a41692c5827a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
+index 6f84b6acc86e..8a63515f03bf 100644
+--- a/arch/parisc/include/asm/spinlock.h
++++ b/arch/parisc/include/asm/spinlock.h
+@@ -20,7 +20,6 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ {
+ 	volatile unsigned int *a;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+ 	while (__ldcw(a) == 0)
+ 		while (*a == 0)
+@@ -30,17 +29,16 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *x,
+ 				local_irq_disable();
+ 			} else
+ 				cpu_relax();
+-	mb();
+ }
+ #define arch_spin_lock_flags arch_spin_lock_flags
+ 
+ static inline void arch_spin_unlock(arch_spinlock_t *x)
+ {
+ 	volatile unsigned int *a;
+-	mb();
++
+ 	a = __ldcw_align(x);
+-	*a = 1;
+ 	mb();
++	*a = 1;
+ }
+ 
+ static inline int arch_spin_trylock(arch_spinlock_t *x)
+@@ -48,10 +46,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *x)
+ 	volatile unsigned int *a;
+ 	int ret;
+ 
+-	mb();
+ 	a = __ldcw_align(x);
+         ret = __ldcw(a) != 0;
+-	mb();
+ 
+ 	return ret;
+ }
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4886a6db42e9..5f7e57fcaeef 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -629,12 +629,12 @@ cas_action:
+ 	stw	%r1, 4(%sr2,%r20)
+ #endif
+ 	/* The load and store could fail */
+-1:	ldw,ma	0(%r26), %r28
++1:	ldw	0(%r26), %r28
+ 	sub,<>	%r28, %r25, %r0
+-2:	stw,ma	%r24, 0(%r26)
++2:	stw	%r24, 0(%r26)
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ #if ENABLE_LWS_DEBUG
+ 	/* Clear thread register indicator */
+ 	stw	%r0, 4(%sr2,%r20)
+@@ -798,30 +798,30 @@ cas2_action:
+ 	ldo	1(%r0),%r28
+ 
+ 	/* 8bit CAS */
+-13:	ldb,ma	0(%r26), %r29
++13:	ldb	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-14:	stb,ma	%r24, 0(%r26)
++14:	stb	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 16bit CAS */
+-15:	ldh,ma	0(%r26), %r29
++15:	ldh	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-16:	sth,ma	%r24, 0(%r26)
++16:	sth	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+ 	nop
+ 
+ 	/* 32bit CAS */
+-17:	ldw,ma	0(%r26), %r29
++17:	ldw	0(%r26), %r29
+ 	sub,=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-18:	stw,ma	%r24, 0(%r26)
++18:	stw	%r24, 0(%r26)
+ 	b	cas2_end
+ 	copy	%r0, %r28
+ 	nop
+@@ -829,10 +829,10 @@ cas2_action:
+ 
+ 	/* 64bit CAS */
+ #ifdef CONFIG_64BIT
+-19:	ldd,ma	0(%r26), %r29
++19:	ldd	0(%r26), %r29
+ 	sub,*=	%r29, %r25, %r0
+ 	b,n	cas2_end
+-20:	std,ma	%r24, 0(%r26)
++20:	std	%r24, 0(%r26)
+ 	copy	%r0, %r28
+ #else
+ 	/* Compare first word */
+@@ -851,7 +851,7 @@ cas2_action:
+ cas2_end:
+ 	/* Free lock */
+ 	sync
+-	stw,ma	%r20, 0(%sr2,%r20)
++	stw	%r20, 0(%sr2,%r20)
+ 	/* Enable interrupts */
+ 	ssm	PSW_SM_I, %r0
+ 	/* Return to userspace, set no error */
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index a8b277362931..4cb8f1f7b593 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -117,25 +117,35 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, cha
+ 
+ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	if (!security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR))
+-		return sprintf(buf, "Not affected\n");
++	struct seq_buf s;
++
++	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+-	if (barrier_nospec_enabled)
+-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
++	if (security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR)) {
++		if (barrier_nospec_enabled)
++			seq_buf_printf(&s, "Mitigation: __user pointer sanitization");
++		else
++			seq_buf_printf(&s, "Vulnerable");
+ 
+-	return sprintf(buf, "Vulnerable\n");
++		if (security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31))
++			seq_buf_printf(&s, ", ori31 speculation barrier enabled");
++
++		seq_buf_printf(&s, "\n");
++	} else
++		seq_buf_printf(&s, "Not affected\n");
++
++	return s.len;
+ }
+ 
+ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+-	bool bcs, ccd, ori;
+ 	struct seq_buf s;
++	bool bcs, ccd;
+ 
+ 	seq_buf_init(&s, buf, PAGE_SIZE - 1);
+ 
+ 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+-	ori = security_ftr_enabled(SEC_FTR_SPEC_BAR_ORI31);
+ 
+ 	if (bcs || ccd) {
+ 		seq_buf_printf(&s, "Mitigation: ");
+@@ -151,9 +161,6 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ 	} else
+ 		seq_buf_printf(&s, "Vulnerable");
+ 
+-	if (ori)
+-		seq_buf_printf(&s, ", ori31 speculation barrier enabled");
+-
+ 	seq_buf_printf(&s, "\n");
+ 
+ 	return s.len;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 79e409974ccc..682286aca881 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -971,6 +971,7 @@ static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
+ 
+ extern unsigned long arch_align_stack(unsigned long sp);
+ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);
++extern void free_kernel_image_pages(void *begin, void *end);
+ 
+ void default_idle(void);
+ #ifdef	CONFIG_XEN
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index bd090367236c..34cffcef7375 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -46,6 +46,7 @@ int set_memory_np(unsigned long addr, int numpages);
+ int set_memory_4k(unsigned long addr, int numpages);
+ int set_memory_encrypted(unsigned long addr, int numpages);
+ int set_memory_decrypted(unsigned long addr, int numpages);
++int set_memory_np_noalias(unsigned long addr, int numpages);
+ 
+ int set_memory_array_uc(unsigned long *addr, int addrinarray);
+ int set_memory_array_wc(unsigned long *addr, int addrinarray);
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 83241eb71cd4..acfab322fbe0 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -775,13 +775,44 @@ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+ 	}
+ }
+ 
++/*
++ * begin/end can be in the direct map or the "high kernel mapping"
++ * used for the kernel image only.  free_init_pages() will do the
++ * right thing for either kind of address.
++ */
++void free_kernel_image_pages(void *begin, void *end)
++{
++	unsigned long begin_ul = (unsigned long)begin;
++	unsigned long end_ul = (unsigned long)end;
++	unsigned long len_pages = (end_ul - begin_ul) >> PAGE_SHIFT;
++
++
++	free_init_pages("unused kernel image", begin_ul, end_ul);
++
++	/*
++	 * PTI maps some of the kernel into userspace.  For performance,
++	 * this includes some kernel areas that do not contain secrets.
++	 * Those areas might be adjacent to the parts of the kernel image
++	 * being freed, which may contain secrets.  Remove the "high kernel
++	 * image mapping" for these freed areas, ensuring they are not even
++	 * potentially vulnerable to Meltdown regardless of the specific
++	 * optimizations PTI is currently using.
++	 *
++	 * The "noalias" prevents unmapping the direct map alias which is
++	 * needed to access the freed pages.
++	 *
++	 * This is only valid for 64bit kernels. 32bit has only one mapping
++	 * which can't be treated in this way for obvious reasons.
++	 */
++	if (IS_ENABLED(CONFIG_X86_64) && cpu_feature_enabled(X86_FEATURE_PTI))
++		set_memory_np_noalias(begin_ul, len_pages);
++}
++
+ void __ref free_initmem(void)
+ {
+ 	e820__reallocate_tables();
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long)(&__init_begin),
+-			(unsigned long)(&__init_end));
++	free_kernel_image_pages(&__init_begin, &__init_end);
+ }
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index a688617c727e..68c292cb1ebf 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -1283,12 +1283,8 @@ void mark_rodata_ro(void)
+ 	set_memory_ro(start, (end-start) >> PAGE_SHIFT);
+ #endif
+ 
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(text_end)),
+-			(unsigned long) __va(__pa_symbol(rodata_start)));
+-	free_init_pages("unused kernel",
+-			(unsigned long) __va(__pa_symbol(rodata_end)),
+-			(unsigned long) __va(__pa_symbol(_sdata)));
++	free_kernel_image_pages((void *)text_end, (void *)rodata_start);
++	free_kernel_image_pages((void *)rodata_end, (void *)_sdata);
+ 
+ 	debug_checkwx();
+ 
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 29505724202a..8d6c34fe49be 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -53,6 +53,7 @@ static DEFINE_SPINLOCK(cpa_lock);
+ #define CPA_FLUSHTLB 1
+ #define CPA_ARRAY 2
+ #define CPA_PAGES_ARRAY 4
++#define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */
+ 
+ #ifdef CONFIG_PROC_FS
+ static unsigned long direct_pages_count[PG_LEVEL_NUM];
+@@ -1486,6 +1487,9 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
+ 
+ 	/* No alias checking for _NX bit modifications */
+ 	checkalias = (pgprot_val(mask_set) | pgprot_val(mask_clr)) != _PAGE_NX;
++	/* Has caller explicitly disabled alias checking? */
++	if (in_flag & CPA_NO_CHECK_ALIAS)
++		checkalias = 0;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, checkalias);
+ 
+@@ -1772,6 +1776,15 @@ int set_memory_np(unsigned long addr, int numpages)
+ 	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_PRESENT), 0);
+ }
+ 
++int set_memory_np_noalias(unsigned long addr, int numpages)
++{
++	int cpa_flags = CPA_NO_CHECK_ALIAS;
++
++	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
++					__pgprot(_PAGE_PRESENT), 0,
++					cpa_flags, NULL);
++}
++
+ int set_memory_4k(unsigned long addr, int numpages)
+ {
+ 	return change_page_attr_set_clr(&addr, numpages, __pgprot(0),
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 3bb82e511eca..7d3edd713932 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -215,6 +215,7 @@ const char * const edac_mem_types[] = {
+ 	[MEM_LRDDR3]	= "Load-Reduced-DDR3-RAM",
+ 	[MEM_DDR4]	= "Unbuffered-DDR4",
+ 	[MEM_RDDR4]	= "Registered-DDR4",
++	[MEM_LRDDR4]	= "Load-Reduced-DDR4-RAM",
+ 	[MEM_NVDIMM]	= "Non-volatile-RAM",
+ };
+ EXPORT_SYMBOL_GPL(edac_mem_types);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index fc818b4d849c..a44c3d58fef4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -31,7 +31,7 @@
+ #include <linux/power_supply.h>
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+-
++#include <linux/nospec.h>
+ 
+ static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
+ 
+@@ -393,6 +393,7 @@ static ssize_t amdgpu_set_pp_force_state(struct device *dev,
+ 			count = -EINVAL;
+ 			goto fail;
+ 		}
++		idx = array_index_nospec(idx, ARRAY_SIZE(data.states));
+ 
+ 		amdgpu_dpm_get_pp_num_states(adev, &data);
+ 		state = data.states[idx];
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index df4e4a07db3d..14dce5c201d5 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -43,6 +43,8 @@
+ #include <linux/mdev.h>
+ #include <linux/debugfs.h>
+ 
++#include <linux/nospec.h>
++
+ #include "i915_drv.h"
+ #include "gvt.h"
+ 
+@@ -1084,7 +1086,8 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 	} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+ 		struct vfio_region_info info;
+ 		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+-		int i, ret;
++		unsigned int i;
++		int ret;
+ 		struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
+ 		size_t size;
+ 		int nr_areas = 1;
+@@ -1169,6 +1172,10 @@ static long intel_vgpu_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ 				if (info.index >= VFIO_PCI_NUM_REGIONS +
+ 						vgpu->vdev.num_regions)
+ 					return -EINVAL;
++				info.index =
++					array_index_nospec(info.index,
++							VFIO_PCI_NUM_REGIONS +
++							vgpu->vdev.num_regions);
+ 
+ 				i = info.index - VFIO_PCI_NUM_REGIONS;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 498c5e891649..ad6adefb64da 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -668,9 +668,6 @@ static int i2c_imx_dma_read(struct imx_i2c_struct *i2c_imx,
+ 	struct imx_i2c_dma *dma = i2c_imx->dma;
+ 	struct device *dev = &i2c_imx->adapter.dev;
+ 
+-	temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
+-	temp |= I2CR_DMAEN;
+-	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 
+ 	dma->chan_using = dma->chan_rx;
+ 	dma->dma_transfer_dir = DMA_DEV_TO_MEM;
+@@ -783,6 +780,7 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	int i, result;
+ 	unsigned int temp;
+ 	int block_data = msgs->flags & I2C_M_RECV_LEN;
++	int use_dma = i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data;
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev,
+ 		"<%s> write slave address: addr=0x%x\n",
+@@ -809,12 +807,14 @@ static int i2c_imx_read(struct imx_i2c_struct *i2c_imx, struct i2c_msg *msgs, bo
+ 	 */
+ 	if ((msgs->len - 1) || block_data)
+ 		temp &= ~I2CR_TXAK;
++	if (use_dma)
++		temp |= I2CR_DMAEN;
+ 	imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR);
+ 	imx_i2c_read_reg(i2c_imx, IMX_I2C_I2DR); /* dummy read */
+ 
+ 	dev_dbg(&i2c_imx->adapter.dev, "<%s> read data\n", __func__);
+ 
+-	if (i2c_imx->dma && msgs->len >= DMA_THRESHOLD && !block_data)
++	if (use_dma)
+ 		return i2c_imx_dma_read(i2c_imx, msgs, is_lastmsg);
+ 
+ 	/* read data */
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 7c3b4740b94b..b8f303dea305 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -482,11 +482,16 @@ static int acpi_gsb_i2c_write_bytes(struct i2c_client *client,
+ 	msgs[0].buf = buffer;
+ 
+ 	ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs));
+-	if (ret < 0)
+-		dev_err(&client->adapter->dev, "i2c write failed\n");
+ 
+ 	kfree(buffer);
+-	return ret;
++
++	if (ret < 0) {
++		dev_err(&client->adapter->dev, "i2c write failed: %d\n", ret);
++		return ret;
++	}
++
++	/* 1 transfer must have completed successfully */
++	return (ret == 1) ? 0 : -EIO;
+ }
+ 
+ static acpi_status
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 0fae816fba39..44604af23b3a 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -952,6 +952,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ 
+ 	bus = bridge->bus;
+ 
++	pci_bus_size_bridges(bus);
+ 	pci_bus_assign_resources(bus);
+ 
+ 	list_for_each_entry(child, &bus->children, node)
+diff --git a/drivers/pci/hotplug/pci_hotplug_core.c b/drivers/pci/hotplug/pci_hotplug_core.c
+index af92fed46ab7..fd93783a87b0 100644
+--- a/drivers/pci/hotplug/pci_hotplug_core.c
++++ b/drivers/pci/hotplug/pci_hotplug_core.c
+@@ -438,8 +438,17 @@ int __pci_hp_register(struct hotplug_slot *slot, struct pci_bus *bus,
+ 	list_add(&slot->slot_list, &pci_hotplug_slot_list);
+ 
+ 	result = fs_add_slot(pci_slot);
++	if (result)
++		goto err_list_del;
++
+ 	kobject_uevent(&pci_slot->kobj, KOBJ_ADD);
+ 	dbg("Added slot %s to the list\n", name);
++	goto out;
++
++err_list_del:
++	list_del(&slot->slot_list);
++	pci_slot->hotplug = NULL;
++	pci_destroy_slot(pci_slot);
+ out:
+ 	mutex_unlock(&pci_hp_mutex);
+ 	return result;
+diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
+index 5f892065585e..fca87a1a2b22 100644
+--- a/drivers/pci/hotplug/pciehp.h
++++ b/drivers/pci/hotplug/pciehp.h
+@@ -119,6 +119,7 @@ int pciehp_unconfigure_device(struct slot *p_slot);
+ void pciehp_queue_pushbutton_work(struct work_struct *work);
+ struct controller *pcie_init(struct pcie_device *dev);
+ int pcie_init_notification(struct controller *ctrl);
++void pcie_shutdown_notification(struct controller *ctrl);
+ int pciehp_enable_slot(struct slot *p_slot);
+ int pciehp_disable_slot(struct slot *p_slot);
+ void pcie_reenable_notification(struct controller *ctrl);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index 44a6a63802d5..2ba59fc94827 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -62,6 +62,12 @@ static int reset_slot(struct hotplug_slot *slot, int probe);
+  */
+ static void release_slot(struct hotplug_slot *hotplug_slot)
+ {
++	struct slot *slot = hotplug_slot->private;
++
++	/* queued work needs hotplug_slot name */
++	cancel_delayed_work(&slot->work);
++	drain_workqueue(slot->wq);
++
+ 	kfree(hotplug_slot->ops);
+ 	kfree(hotplug_slot->info);
+ 	kfree(hotplug_slot);
+@@ -264,6 +270,7 @@ static void pciehp_remove(struct pcie_device *dev)
+ {
+ 	struct controller *ctrl = get_service_data(dev);
+ 
++	pcie_shutdown_notification(ctrl);
+ 	cleanup_slot(ctrl);
+ 	pciehp_release_ctrl(ctrl);
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 718b6073afad..aff191b4552c 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -539,8 +539,6 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ {
+ 	struct controller *ctrl = (struct controller *)dev_id;
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	struct pci_bus *subordinate = pdev->subordinate;
+-	struct pci_dev *dev;
+ 	struct slot *slot = ctrl->slot;
+ 	u16 status, events;
+ 	u8 present;
+@@ -588,14 +586,9 @@ static irqreturn_t pciehp_isr(int irq, void *dev_id)
+ 		wake_up(&ctrl->queue);
+ 	}
+ 
+-	if (subordinate) {
+-		list_for_each_entry(dev, &subordinate->devices, bus_list) {
+-			if (dev->ignore_hotplug) {
+-				ctrl_dbg(ctrl, "ignoring hotplug event %#06x (%s requested no hotplug)\n",
+-					 events, pci_name(dev));
+-				return IRQ_HANDLED;
+-			}
+-		}
++	if (pdev->ignore_hotplug) {
++		ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events);
++		return IRQ_HANDLED;
+ 	}
+ 
+ 	/* Check Attention Button Pressed */
+@@ -765,7 +758,7 @@ int pcie_init_notification(struct controller *ctrl)
+ 	return 0;
+ }
+ 
+-static void pcie_shutdown_notification(struct controller *ctrl)
++void pcie_shutdown_notification(struct controller *ctrl)
+ {
+ 	if (ctrl->notification_enabled) {
+ 		pcie_disable_notification(ctrl);
+@@ -800,7 +793,7 @@ abort:
+ static void pcie_cleanup_slot(struct controller *ctrl)
+ {
+ 	struct slot *slot = ctrl->slot;
+-	cancel_delayed_work(&slot->work);
++
+ 	destroy_workqueue(slot->wq);
+ 	kfree(slot);
+ }
+@@ -893,7 +886,6 @@ abort:
+ 
+ void pciehp_release_ctrl(struct controller *ctrl)
+ {
+-	pcie_shutdown_notification(ctrl);
+ 	pcie_cleanup_slot(ctrl);
+ 	kfree(ctrl);
+ }
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 89ee6a2b6eb8..5d1698265da5 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -632,13 +632,11 @@ static bool acpi_pci_need_resume(struct pci_dev *dev)
+ 	/*
+ 	 * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over
+ 	 * system-wide suspend/resume confuses the platform firmware, so avoid
+-	 * doing that, unless the bridge has a driver that should take care of
+-	 * the PM handling.  According to Section 16.1.6 of ACPI 6.2, endpoint
++	 * doing that.  According to Section 16.1.6 of ACPI 6.2, endpoint
+ 	 * devices are expected to be in D3 before invoking the S3 entry path
+ 	 * from the firmware, so they should not be affected by this issue.
+ 	 */
+-	if (pci_is_bridge(dev) && !dev->driver &&
+-	    acpi_target_system_state() != ACPI_STATE_S0)
++	if (pci_is_bridge(dev) && acpi_target_system_state() != ACPI_STATE_S0)
+ 		return true;
+ 
+ 	if (!adev || !acpi_device_power_manageable(adev))
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 316496e99da9..0abe2865a3a5 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1171,6 +1171,33 @@ static void pci_restore_config_space(struct pci_dev *pdev)
+ 	}
+ }
+ 
++static void pci_restore_rebar_state(struct pci_dev *pdev)
++{
++	unsigned int pos, nbars, i;
++	u32 ctrl;
++
++	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
++	if (!pos)
++		return;
++
++	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
++		    PCI_REBAR_CTRL_NBAR_SHIFT;
++
++	for (i = 0; i < nbars; i++, pos += 8) {
++		struct resource *res;
++		int bar_idx, size;
++
++		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
++		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
++		res = pdev->resource + bar_idx;
++		size = order_base_2((resource_size(res) >> 20) | 1) - 1;
++		ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
++		ctrl |= size << 8;
++		pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
++	}
++}
++
+ /**
+  * pci_restore_state - Restore the saved state of a PCI device
+  * @dev: - PCI device that we're dealing with
+@@ -1186,6 +1213,7 @@ void pci_restore_state(struct pci_dev *dev)
+ 	pci_restore_pri_state(dev);
+ 	pci_restore_ats_state(dev);
+ 	pci_restore_vc_state(dev);
++	pci_restore_rebar_state(dev);
+ 
+ 	pci_cleanup_aer_error_status_regs(dev);
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 611adcd9c169..b2857865c0aa 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1730,6 +1730,10 @@ static void pci_configure_mps(struct pci_dev *dev)
+ 	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
+ 		return;
+ 
++	/* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
++	if (dev->is_virtfn)
++		return;
++
+ 	mps = pcie_get_mps(dev);
+ 	p_mps = pcie_get_mps(bridge);
+ 
+diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
+index b0e2c4847a5d..678406e0948b 100644
+--- a/drivers/tty/pty.c
++++ b/drivers/tty/pty.c
+@@ -625,7 +625,7 @@ int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags)
+ 	if (tty->driver != ptm_driver)
+ 		return -EIO;
+ 
+-	fd = get_unused_fd_flags(0);
++	fd = get_unused_fd_flags(flags);
+ 	if (fd < 0) {
+ 		retval = fd;
+ 		goto err;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index f7ab34088162..8b24d3d42cb3 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -14,6 +14,7 @@
+ #include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/nospec.h>
+ #include <linux/backing-dev.h>
+ #include <trace/events/ext4.h>
+ 
+@@ -2140,7 +2141,8 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
+ 		 * This should tell if fe_len is exactly power of 2
+ 		 */
+ 		if ((ac->ac_g_ex.fe_len & (~(1 << (i - 1)))) == 0)
+-			ac->ac_2order = i - 1;
++			ac->ac_2order = array_index_nospec(i - 1,
++							   sb->s_blocksize_bits + 2);
+ 	}
+ 
+ 	/* if stream allocation is enabled, use global goal */
+diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c
+index ff94fad477e4..48cdfc81fe10 100644
+--- a/fs/reiserfs/xattr.c
++++ b/fs/reiserfs/xattr.c
+@@ -792,8 +792,10 @@ static int listxattr_filler(struct dir_context *ctx, const char *name,
+ 			return 0;
+ 		size = namelen + 1;
+ 		if (b->buf) {
+-			if (size > b->size)
++			if (b->pos + size > b->size) {
++				b->pos = -ERANGE;
+ 				return -ERANGE;
++			}
+ 			memcpy(b->buf + b->pos, name, namelen);
+ 			b->buf[b->pos + namelen] = 0;
+ 		}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a790ef4be74e..3222193c46c6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6939,9 +6939,21 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
+ 	start = (void *)PAGE_ALIGN((unsigned long)start);
+ 	end = (void *)((unsigned long)end & PAGE_MASK);
+ 	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
++		struct page *page = virt_to_page(pos);
++		void *direct_map_addr;
++
++		/*
++		 * 'direct_map_addr' might be different from 'pos'
++		 * because some architectures' virt_to_page()
++		 * work with aliases.  Getting the direct map
++		 * address ensures that we get a _writeable_
++		 * alias for the memset().
++		 */
++		direct_map_addr = page_address(page);
+ 		if ((unsigned int)poison <= 0xFF)
+-			memset(pos, poison, PAGE_SIZE);
+-		free_reserved_page(virt_to_page(pos));
++			memset(direct_map_addr, poison, PAGE_SIZE);
++
++		free_reserved_page(page);
+ 	}
+ 
+ 	if (pages && s)

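[Several hunks in this patch (amdgpu_pm.c, kvmgt.c, mballoc.c) insert `array_index_nospec()` after a bounds check, so that a mispredicted branch cannot speculatively carry an out-of-range index into the array access (Spectre v1). A hedged, portable model of the kernel's generic masking fallback; the `index_nospec` name and `size_t` types here are illustrative, and arithmetic right shift of a negative value is assumed, as the kernel's own fallback does:]

```c
#include <stddef.h>

/* Branchless clamp modeled on the kernel's generic
 * array_index_mask_nospec(): the mask is all-ones when index < size
 * and zero otherwise, computed without a conditional branch, so the
 * masked index stays in range even under branch misprediction.
 * (x86 replaces this with a cmp/sbb sequence.) */
static size_t index_nospec(size_t index, size_t size)
{
	size_t mask = (size_t)(~(long)(index | (size - 1 - index))
			       >> (sizeof(long) * 8 - 1));
	return index & mask;
}
```

[In-range indices pass through unchanged; any out-of-range index collapses to 0, a value that is always safe to use for the subsequent array access.]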

^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-22  9:59 Alice Ferrazzi
  0 siblings, 0 replies; 75+ messages in thread
From: Alice Ferrazzi @ 2018-08-22  9:59 UTC (permalink / raw)
  To: gentoo-commits

commit:     f0792c043c9c65972a5dd649543b5b52f2c1169a
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 22 09:59:11 2018 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Aug 22 09:59:11 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f0792c04

Linux kernel 4.18.4

 0000_README             |   4 +
 1003_linux-4.18.4.patch | 817 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 821 insertions(+)

diff --git a/0000_README b/0000_README
index c313d8e..c7d6cc0 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.18.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.3
 
+Patch:  1003_linux-4.18.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.18.4.patch b/1003_linux-4.18.4.patch
new file mode 100644
index 0000000..a94a413
--- /dev/null
+++ b/1003_linux-4.18.4.patch
@@ -0,0 +1,817 @@
+diff --git a/Makefile b/Makefile
+index e2bd815f24eb..ef0dd566c104 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 5d0486f1cfcd..1a1c0718cd7a 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -338,6 +338,14 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+ 		},
+ 	},
++	{
++	.callback = init_nvs_save_s3,
++	.ident = "Asus 1025C",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "1025C"),
++		},
++	},
+ 	/*
+ 	 * https://bugzilla.kernel.org/show_bug.cgi?id=189431
+ 	 * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
+diff --git a/drivers/isdn/i4l/isdn_common.c b/drivers/isdn/i4l/isdn_common.c
+index 7a501dbe7123..6a5b3f00f9ad 100644
+--- a/drivers/isdn/i4l/isdn_common.c
++++ b/drivers/isdn/i4l/isdn_common.c
+@@ -1640,13 +1640,7 @@ isdn_ioctl(struct file *file, uint cmd, ulong arg)
+ 			} else
+ 				return -EINVAL;
+ 		case IIOCDBGVAR:
+-			if (arg) {
+-				if (copy_to_user(argp, &dev, sizeof(ulong)))
+-					return -EFAULT;
+-				return 0;
+-			} else
+-				return -EINVAL;
+-			break;
++			return -EINVAL;
+ 		default:
+ 			if ((cmd & IIOCDRVCTL) == IIOCDRVCTL)
+ 				cmd = ((cmd >> _IOC_NRSHIFT) & _IOC_NRMASK) & ISDN_DRVIOCTL_MASK;
+diff --git a/drivers/media/usb/dvb-usb-v2/gl861.c b/drivers/media/usb/dvb-usb-v2/gl861.c
+index 9d154fdae45b..fee4b30df778 100644
+--- a/drivers/media/usb/dvb-usb-v2/gl861.c
++++ b/drivers/media/usb/dvb-usb-v2/gl861.c
+@@ -26,10 +26,14 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	if (wo) {
+ 		req = GL861_REQ_I2C_WRITE;
+ 		type = GL861_WRITE;
++		buf = kmemdup(wbuf, wlen, GFP_KERNEL);
+ 	} else { /* rw */
+ 		req = GL861_REQ_I2C_READ;
+ 		type = GL861_READ;
++		buf = kmalloc(rlen, GFP_KERNEL);
+ 	}
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	switch (wlen) {
+ 	case 1:
+@@ -42,24 +46,19 @@ static int gl861_i2c_msg(struct dvb_usb_device *d, u8 addr,
+ 	default:
+ 		dev_err(&d->udev->dev, "%s: wlen=%d, aborting\n",
+ 				KBUILD_MODNAME, wlen);
++		kfree(buf);
+ 		return -EINVAL;
+ 	}
+-	buf = NULL;
+-	if (rlen > 0) {
+-		buf = kmalloc(rlen, GFP_KERNEL);
+-		if (!buf)
+-			return -ENOMEM;
+-	}
++
+ 	usleep_range(1000, 2000); /* avoid I2C errors */
+ 
+ 	ret = usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0), req, type,
+ 			      value, index, buf, rlen, 2000);
+-	if (rlen > 0) {
+-		if (ret > 0)
+-			memcpy(rbuf, buf, rlen);
+-		kfree(buf);
+-	}
+ 
++	if (!wo && ret > 0)
++		memcpy(rbuf, buf, rlen);
++
++	kfree(buf);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
+index c5dc6095686a..679647713e36 100644
+--- a/drivers/misc/sram.c
++++ b/drivers/misc/sram.c
+@@ -407,13 +407,20 @@ static int sram_probe(struct platform_device *pdev)
+ 	if (init_func) {
+ 		ret = init_func();
+ 		if (ret)
+-			return ret;
++			goto err_disable_clk;
+ 	}
+ 
+ 	dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n",
+ 		gen_pool_size(sram->pool) / 1024, sram->virt_base);
+ 
+ 	return 0;
++
++err_disable_clk:
++	if (sram->clk)
++		clk_disable_unprepare(sram->clk);
++	sram_free_partitions(sram);
++
++	return ret;
+ }
+ 
+ static int sram_remove(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 0ad2f3f7da85..82ac1d10f239 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
+ }
+ 
+ /* Main rx processing when using software buffer management */
+-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_swbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -1959,7 +1959,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2001,7 +2001,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2020,10 +2020,10 @@ err_drop_frame:
+ }
+ 
+ /* Main rx processing when using hardware buffer management */
+-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
++static int mvneta_rx_hwbm(struct napi_struct *napi,
++			  struct mvneta_port *pp, int rx_todo,
+ 			  struct mvneta_rx_queue *rxq)
+ {
+-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+ 	struct net_device *dev = pp->dev;
+ 	int rx_done;
+ 	u32 rcvd_pkts = 0;
+@@ -2085,7 +2085,7 @@ err_drop_frame:
+ 
+ 			skb->protocol = eth_type_trans(skb, dev);
+ 			mvneta_rx_csum(pp, rx_status, skb);
+-			napi_gro_receive(&port->napi, skb);
++			napi_gro_receive(napi, skb);
+ 
+ 			rcvd_pkts++;
+ 			rcvd_bytes += rx_bytes;
+@@ -2129,7 +2129,7 @@ err_drop_frame:
+ 
+ 		mvneta_rx_csum(pp, rx_status, skb);
+ 
+-		napi_gro_receive(&port->napi, skb);
++		napi_gro_receive(napi, skb);
+ 	}
+ 
+ 	if (rcvd_pkts) {
+@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
+ 	if (rx_queue) {
+ 		rx_queue = rx_queue - 1;
+ 		if (pp->bm_priv)
+-			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_hwbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 		else
+-			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
++			rx_done = mvneta_rx_swbm(napi, pp, budget,
++						 &pp->rxqs[rx_queue]);
+ 	}
+ 
+ 	if (rx_done < budget) {
+@@ -4018,13 +4020,18 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 
+ 	on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_synchronize(&pcpu_port->napi);
+-		napi_disable(&pcpu_port->napi);
++			napi_synchronize(&pcpu_port->napi);
++			napi_disable(&pcpu_port->napi);
++		}
++	} else {
++		napi_synchronize(&pp->napi);
++		napi_disable(&pp->napi);
+ 	}
+ 
+ 	pp->rxq_def = pp->indir[0];
+@@ -4041,12 +4048,16 @@ static int  mvneta_config_rss(struct mvneta_port *pp)
+ 	mvneta_percpu_elect(pp);
+ 	spin_unlock(&pp->lock);
+ 
+-	/* We have to synchronise on the napi of each CPU */
+-	for_each_online_cpu(cpu) {
+-		struct mvneta_pcpu_port *pcpu_port =
+-			per_cpu_ptr(pp->ports, cpu);
++	if (!pp->neta_armada3700) {
++		/* We have to synchronise on the napi of each CPU */
++		for_each_online_cpu(cpu) {
++			struct mvneta_pcpu_port *pcpu_port =
++				per_cpu_ptr(pp->ports, cpu);
+ 
+-		napi_enable(&pcpu_port->napi);
++			napi_enable(&pcpu_port->napi);
++		}
++	} else {
++		napi_enable(&pp->napi);
+ 	}
+ 
+ 	netif_tx_start_all_queues(pp->dev);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index eaedc11ed686..9ceb34bac3a9 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -7539,12 +7539,20 @@ static int rtl_alloc_irq(struct rtl8169_private *tp)
+ {
+ 	unsigned int flags;
+ 
+-	if (tp->mac_version <= RTL_GIGA_MAC_VER_06) {
++	switch (tp->mac_version) {
++	case RTL_GIGA_MAC_VER_01 ... RTL_GIGA_MAC_VER_06:
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Unlock);
+ 		RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable);
+ 		RTL_W8(tp, Cfg9346, Cfg9346_Lock);
+ 		flags = PCI_IRQ_LEGACY;
+-	} else {
++		break;
++	case RTL_GIGA_MAC_VER_39 ... RTL_GIGA_MAC_VER_40:
++		/* This version was reported to have issues with resume
++		 * from suspend when using MSI-X
++		 */
++		flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
++		break;
++	default:
+ 		flags = PCI_IRQ_ALL_TYPES;
+ 	}
+ 
+diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
+index 408ece27131c..2a5209f23f29 100644
+--- a/drivers/net/hyperv/rndis_filter.c
++++ b/drivers/net/hyperv/rndis_filter.c
+@@ -1338,7 +1338,7 @@ out:
+ 	/* setting up multiple channels failed */
+ 	net_device->max_chn = 1;
+ 	net_device->num_chn = 1;
+-	return 0;
++	return net_device;
+ 
+ err_dev_remv:
+ 	rndis_filter_device_remove(dev, net_device);
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index aff04f1de3a5..af842000188c 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -293,7 +293,7 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
+ 	long rate;
+ 	int ret;
+ 
+-	if (IS_ERR(d->clk) || !old)
++	if (IS_ERR(d->clk))
+ 		goto out;
+ 
+ 	clk_disable_unprepare(d->clk);
+@@ -707,6 +707,7 @@ static const struct acpi_device_id dw8250_acpi_match[] = {
+ 	{ "APMC0D08", 0},
+ 	{ "AMD0020", 0 },
+ 	{ "AMDI0020", 0 },
++	{ "BRCM2032", 0 },
+ 	{ "HISI0031", 0 },
+ 	{ },
+ };
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index 38af306ca0e8..a951511f04cf 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -433,7 +433,11 @@ static irqreturn_t exar_misc_handler(int irq, void *data)
+ 	struct exar8250 *priv = data;
+ 
+ 	/* Clear all PCI interrupts by reading INT0. No effect on IIR */
+-	ioread8(priv->virt + UART_EXAR_INT0);
++	readb(priv->virt + UART_EXAR_INT0);
++
++	/* Clear INT0 for Expansion Interface slave ports, too */
++	if (priv->board->num_ports > 8)
++		readb(priv->virt + 0x2000 + UART_EXAR_INT0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index cf541aab2bd0..5cbc13e3d316 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -90,8 +90,7 @@ static const struct serial8250_config uart_config[] = {
+ 		.name		= "16550A",
+ 		.fifo_size	= 16,
+ 		.tx_loadsz	= 16,
+-		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
+-				  UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
++		.fcr		= UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10,
+ 		.rxtrig_bytes	= {1, 4, 8, 14},
+ 		.flags		= UART_CAP_FIFO,
+ 	},
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 5d421d7e8904..f68c1121fa7c 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -443,13 +443,10 @@ static irqreturn_t uio_interrupt(int irq, void *dev_id)
+ 	struct uio_device *idev = (struct uio_device *)dev_id;
+ 	irqreturn_t ret;
+ 
+-	mutex_lock(&idev->info_lock);
+-
+ 	ret = idev->info->handler(irq, idev->info);
+ 	if (ret == IRQ_HANDLED)
+ 		uio_event_notify(idev->info);
+ 
+-	mutex_unlock(&idev->info_lock);
+ 	return ret;
+ }
+ 
+@@ -814,7 +811,7 @@ static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
+ 
+ out:
+ 	mutex_unlock(&idev->info_lock);
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct file_operations uio_fops = {
+@@ -969,9 +966,8 @@ int __uio_register_device(struct module *owner,
+ 		 * FDs at the time of unregister and therefore may not be
+ 		 * freed until they are released.
+ 		 */
+-		ret = request_threaded_irq(info->irq, NULL, uio_interrupt,
+-					   info->irq_flags, info->name, idev);
+-
++		ret = request_irq(info->irq, uio_interrupt,
++				  info->irq_flags, info->name, idev);
+ 		if (ret)
+ 			goto err_request_irq;
+ 	}
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 664e61f16b6a..0215b70c4efc 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -196,6 +196,8 @@ static void option_instat_callback(struct urb *urb);
+ #define DELL_PRODUCT_5800_V2_MINICARD_VZW	0x8196  /* Novatel E362 */
+ #define DELL_PRODUCT_5804_MINICARD_ATT		0x819b  /* Novatel E371 */
+ 
++#define DELL_PRODUCT_5821E			0x81d7
++
+ #define KYOCERA_VENDOR_ID			0x0c88
+ #define KYOCERA_PRODUCT_KPC650			0x17da
+ #define KYOCERA_PRODUCT_KPC680			0x180a
+@@ -1030,6 +1032,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5800_V2_MINICARD_VZW, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, DELL_PRODUCT_5804_MINICARD_ATT, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5821E),
++	  .driver_info = RSVD(0) | RSVD(1) | RSVD(6) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) },	/* ADU-E100, ADU-310 */
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
+ 	{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 5d1a1931967e..e41f725ac7aa 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -52,6 +52,8 @@ static const struct usb_device_id id_table[] = {
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC485),
+ 		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
++	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC232B),
++		.driver_info = PL2303_QUIRK_ENDPOINT_HACK },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) },
+ 	{ USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) },
+ 	{ USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index fcd72396a7b6..26965cc23c17 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -24,6 +24,7 @@
+ #define ATEN_VENDOR_ID2		0x0547
+ #define ATEN_PRODUCT_ID		0x2008
+ #define ATEN_PRODUCT_UC485	0x2021
++#define ATEN_PRODUCT_UC232B	0x2022
+ #define ATEN_PRODUCT_ID2	0x2118
+ 
+ #define IODATA_VENDOR_ID	0x04bb
+diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
+index d189f953c891..55956a638f5b 100644
+--- a/drivers/usb/serial/sierra.c
++++ b/drivers/usb/serial/sierra.c
+@@ -770,9 +770,9 @@ static void sierra_close(struct usb_serial_port *port)
+ 		kfree(urb->transfer_buffer);
+ 		usb_free_urb(urb);
+ 		usb_autopm_put_interface_async(serial->interface);
+-		spin_lock(&portdata->lock);
++		spin_lock_irq(&portdata->lock);
+ 		portdata->outstanding_urbs--;
+-		spin_unlock(&portdata->lock);
++		spin_unlock_irq(&portdata->lock);
+ 	}
+ 
+ 	sierra_stop_rx_urbs(port);
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 413b8ee49fec..8f0f9279eac9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -393,7 +393,8 @@ static void sco_sock_cleanup_listen(struct sock *parent)
+  */
+ static void sco_sock_kill(struct sock *sk)
+ {
+-	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket)
++	if (!sock_flag(sk, SOCK_ZAPPED) || sk->sk_socket ||
++	    sock_flag(sk, SOCK_DEAD))
+ 		return;
+ 
+ 	BT_DBG("sk %p state %d", sk, sk->sk_state);
+diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
+index c37b5be7c5e4..3312a5849a97 100644
+--- a/net/core/sock_diag.c
++++ b/net/core/sock_diag.c
+@@ -10,6 +10,7 @@
+ #include <linux/kernel.h>
+ #include <linux/tcp.h>
+ #include <linux/workqueue.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/inet_diag.h>
+ #include <linux/sock_diag.h>
+@@ -218,6 +219,7 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 
+ 	if (req->sdiag_family >= AF_MAX)
+ 		return -EINVAL;
++	req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
+ 
+ 	if (sock_diag_handlers[req->sdiag_family] == NULL)
+ 		sock_load_diag_module(req->sdiag_family, 0);
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 3f091ccad9af..f38cb21d773d 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -438,7 +438,8 @@ static int __net_init vti_init_net(struct net *net)
+ 	if (err)
+ 		return err;
+ 	itn = net_generic(net, vti_net_id);
+-	vti_fb_tunnel_init(itn->fb_tunnel_dev);
++	if (itn->fb_tunnel_dev)
++		vti_fb_tunnel_init(itn->fb_tunnel_dev);
+ 	return 0;
+ }
+ 
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 40261cb68e83..8aaf8157da2b 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1110,7 +1110,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+ 
+ 	/* Get routing info from the tunnel socket */
+ 	skb_dst_drop(skb);
+-	skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
++	skb_dst_set(skb, sk_dst_check(sk, 0));
+ 
+ 	inet = inet_sk(sk);
+ 	fl = &inet->cork.fl;
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 47b207ef7762..7ad65daf66a4 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -111,6 +111,8 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 	if (!head)
+ 		return;
+ 
++	tcf_unbind_filter(tp, &head->res);
++
+ 	if (!tc_skip_hw(head->flags))
+ 		mall_destroy_hw_filter(tp, head, (unsigned long) head, extack);
+ 
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 32f4bbd82f35..9ccc93f257db 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -447,11 +447,6 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 		tcf_bind_filter(tp, &cr.res, base);
+ 	}
+ 
+-	if (old_r)
+-		tcf_exts_change(&r->exts, &e);
+-	else
+-		tcf_exts_change(&cr.exts, &e);
+-
+ 	if (old_r && old_r != r) {
+ 		err = tcindex_filter_result_init(old_r);
+ 		if (err < 0) {
+@@ -462,12 +457,15 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 
+ 	oldp = p;
+ 	r->res = cr.res;
++	tcf_exts_change(&r->exts, &e);
++
+ 	rcu_assign_pointer(tp->root, cp);
+ 
+ 	if (r == &new_filter_result) {
+ 		struct tcindex_filter *nfp;
+ 		struct tcindex_filter __rcu **fp;
+ 
++		f->result.res = r->res;
+ 		tcf_exts_change(&f->result.exts, &r->exts);
+ 
+ 		fp = cp->h + (handle % cp->hash);
+diff --git a/net/socket.c b/net/socket.c
+index 8c24d5dc4bc8..4ac3b834cce9 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2690,8 +2690,7 @@ EXPORT_SYMBOL(sock_unregister);
+ 
+ bool sock_is_registered(int family)
+ {
+-	return family < NPROTO &&
+-		rcu_access_pointer(net_families[array_index_nospec(family, NPROTO)]);
++	return family < NPROTO && rcu_access_pointer(net_families[family]);
+ }
+ 
+ static int __init sock_init(void)
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 7f89d3c79a4b..753d5fc4b284 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -242,16 +242,12 @@ int snd_dma_alloc_pages_fallback(int type, struct device *device, size_t size,
+ 	int err;
+ 
+ 	while ((err = snd_dma_alloc_pages(type, device, size, dmab)) < 0) {
+-		size_t aligned_size;
+ 		if (err != -ENOMEM)
+ 			return err;
+ 		if (size <= PAGE_SIZE)
+ 			return -ENOMEM;
+-		aligned_size = PAGE_SIZE << get_order(size);
+-		if (size != aligned_size)
+-			size = aligned_size;
+-		else
+-			size >>= 1;
++		size >>= 1;
++		size = PAGE_SIZE << get_order(size);
+ 	}
+ 	if (! dmab->area)
+ 		return -ENOMEM;
+diff --git a/sound/core/seq/oss/seq_oss.c b/sound/core/seq/oss/seq_oss.c
+index 5f64d0d88320..e1f44fc86885 100644
+--- a/sound/core/seq/oss/seq_oss.c
++++ b/sound/core/seq/oss/seq_oss.c
+@@ -203,7 +203,7 @@ odev_poll(struct file *file, poll_table * wait)
+ 	struct seq_oss_devinfo *dp;
+ 	dp = file->private_data;
+ 	if (snd_BUG_ON(!dp))
+-		return -ENXIO;
++		return EPOLLERR;
+ 	return snd_seq_oss_poll(dp, file, wait);
+ }
+ 
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 56ca78423040..6fd4b074b206 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1101,7 +1101,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
+ 
+ 	/* check client structures are in place */
+ 	if (snd_BUG_ON(!client))
+-		return -ENXIO;
++		return EPOLLERR;
+ 
+ 	if ((snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_INPUT) &&
+ 	    client->data.user.fifo) {
+diff --git a/sound/core/seq/seq_virmidi.c b/sound/core/seq/seq_virmidi.c
+index 289ae6bb81d9..8ebbca554e99 100644
+--- a/sound/core/seq/seq_virmidi.c
++++ b/sound/core/seq/seq_virmidi.c
+@@ -163,6 +163,7 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 	int count, res;
+ 	unsigned char buf[32], *pbuf;
+ 	unsigned long flags;
++	bool check_resched = !in_atomic();
+ 
+ 	if (up) {
+ 		vmidi->trigger = 1;
+@@ -200,6 +201,15 @@ static void snd_virmidi_output_trigger(struct snd_rawmidi_substream *substream,
+ 					vmidi->event.type = SNDRV_SEQ_EVENT_NONE;
+ 				}
+ 			}
++			if (!check_resched)
++				continue;
++			/* do temporary unlock & cond_resched() for avoiding
++			 * CPU soft lockup, which may happen via a write from
++			 * a huge rawmidi buffer
++			 */
++			spin_unlock_irqrestore(&substream->runtime->lock, flags);
++			cond_resched();
++			spin_lock_irqsave(&substream->runtime->lock, flags);
+ 		}
+ 	out:
+ 		spin_unlock_irqrestore(&substream->runtime->lock, flags);
+diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c
+index b2efb1c71a98..218292bdace6 100644
+--- a/sound/firewire/dice/dice-alesis.c
++++ b/sound/firewire/dice/dice-alesis.c
+@@ -37,7 +37,7 @@ int snd_dice_detect_alesis_formats(struct snd_dice *dice)
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	} else {
+-		memcpy(dice->rx_pcm_chs, alesis_io26_tx_pcm_chs,
++		memcpy(dice->tx_pcm_chs, alesis_io26_tx_pcm_chs,
+ 				MAX_STREAMS * SND_DICE_RATE_MODE_COUNT *
+ 				sizeof(unsigned int));
+ 	}
+diff --git a/sound/pci/cs5535audio/cs5535audio.h b/sound/pci/cs5535audio/cs5535audio.h
+index f4fcdf93f3c8..d84620a0c26c 100644
+--- a/sound/pci/cs5535audio/cs5535audio.h
++++ b/sound/pci/cs5535audio/cs5535audio.h
+@@ -67,9 +67,9 @@ struct cs5535audio_dma_ops {
+ };
+ 
+ struct cs5535audio_dma_desc {
+-	u32 addr;
+-	u16 size;
+-	u16 ctlreserved;
++	__le32 addr;
++	__le16 size;
++	__le16 ctlreserved;
+ };
+ 
+ struct cs5535audio_dma {
+diff --git a/sound/pci/cs5535audio/cs5535audio_pcm.c b/sound/pci/cs5535audio/cs5535audio_pcm.c
+index ee7065f6e162..326caec854e1 100644
+--- a/sound/pci/cs5535audio/cs5535audio_pcm.c
++++ b/sound/pci/cs5535audio/cs5535audio_pcm.c
+@@ -158,8 +158,8 @@ static int cs5535audio_build_dma_packets(struct cs5535audio *cs5535au,
+ 	lastdesc->addr = cpu_to_le32((u32) dma->desc_buf.addr);
+ 	lastdesc->size = 0;
+ 	lastdesc->ctlreserved = cpu_to_le16(PRD_JMP);
+-	jmpprd_addr = cpu_to_le32(lastdesc->addr +
+-				  (sizeof(struct cs5535audio_dma_desc)*periods));
++	jmpprd_addr = (u32)dma->desc_buf.addr +
++		sizeof(struct cs5535audio_dma_desc) * periods;
+ 
+ 	dma->substream = substream;
+ 	dma->period_bytes = period_bytes;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 1ae1850b3bfd..647ae1a71e10 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2207,7 +2207,7 @@ out_free:
+  */
+ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+-	SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++	SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+ 	SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index f641c20095f7..1a8a2d440fbd 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -211,6 +211,7 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 	struct conexant_spec *spec = codec->spec;
+ 
+ 	switch (codec->core.vendor_id) {
++	case 0x14f12008: /* CX8200 */
+ 	case 0x14f150f2: /* CX20722 */
+ 	case 0x14f150f4: /* CX20724 */
+ 		break;
+@@ -218,13 +219,14 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ 		return;
+ 	}
+ 
+-	/* Turn the CX20722 codec into D3 to avoid spurious noises
++	/* Turn the problematic codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+ 	cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, false);
+ 
+ 	snd_hda_codec_set_power_to_all(codec, codec->core.afg, AC_PWRST_D3);
+ 	snd_hda_codec_write(codec, codec->core.afg, 0,
+ 			    AC_VERB_SET_POWER_STATE, AC_PWRST_D3);
++	msleep(10);
+ }
+ 
+ static void cx_auto_free(struct hda_codec *codec)
+diff --git a/sound/pci/vx222/vx222_ops.c b/sound/pci/vx222/vx222_ops.c
+index d4298af6d3ee..c0d0bf44f365 100644
+--- a/sound/pci/vx222/vx222_ops.c
++++ b/sound/pci/vx222/vx222_ops.c
+@@ -275,7 +275,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outl(cpu_to_le32(*addr), port);
++			outl(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (u32 *)runtime->dma_area;
+@@ -285,7 +285,7 @@ static void vx2_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outl(cpu_to_le32(*addr), port);
++		outl(*addr, port);
+ 		addr++;
+ 	}
+ 
+@@ -313,7 +313,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 2; /* in 32bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le32_to_cpu(inl(port));
++			*addr++ = inl(port);
+ 		addr = (u32 *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -321,7 +321,7 @@ static void vx2_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 2; /* in 32bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--)
+-		*addr++ = le32_to_cpu(inl(port));
++		*addr++ = inl(port);
+ 
+ 	vx2_release_pseudo_dma(chip);
+ }
+diff --git a/sound/pcmcia/vx/vxp_ops.c b/sound/pcmcia/vx/vxp_ops.c
+index 8cde40226355..4c4ef1fec69f 100644
+--- a/sound/pcmcia/vx/vxp_ops.c
++++ b/sound/pcmcia/vx/vxp_ops.c
+@@ -375,7 +375,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--) {
+-			outw(cpu_to_le16(*addr), port);
++			outw(*addr, port);
+ 			addr++;
+ 		}
+ 		addr = (unsigned short *)runtime->dma_area;
+@@ -385,7 +385,7 @@ static void vxp_dma_write(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 0; count--) {
+-		outw(cpu_to_le16(*addr), port);
++		outw(*addr, port);
+ 		addr++;
+ 	}
+ 	vx_release_pseudo_dma(chip);
+@@ -417,7 +417,7 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 		length >>= 1; /* in 16bit words */
+ 		/* Transfer using pseudo-dma. */
+ 		for (; length > 0; length--)
+-			*addr++ = le16_to_cpu(inw(port));
++			*addr++ = inw(port);
+ 		addr = (unsigned short *)runtime->dma_area;
+ 		pipe->hw_ptr = 0;
+ 	}
+@@ -425,12 +425,12 @@ static void vxp_dma_read(struct vx_core *chip, struct snd_pcm_runtime *runtime,
+ 	count >>= 1; /* in 16bit words */
+ 	/* Transfer using pseudo-dma. */
+ 	for (; count > 1; count--)
+-		*addr++ = le16_to_cpu(inw(port));
++		*addr++ = inw(port);
+ 	/* Disable DMA */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMAREAD_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);
+ 	/* Read the last word (16 bits) */
+-	*addr = le16_to_cpu(inw(port));
++	*addr = inw(port);
+ 	/* Disable 16-bit accesses */
+ 	pchip->regDIALOG &= ~VXP_DLG_DMA16_SEL_MASK;
+ 	vx_outb(chip, DIALOG, pchip->regDIALOG);


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-18 18:13 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-18 18:13 UTC (permalink / raw)
  To: gentoo-commits

commit:     8419170d5f52350038c0ad6d7cbb785e53702288
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 18 18:13:21 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Aug 18 18:13:21 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8419170d

Linux patch 4.18.3

 0000_README             |  4 ++++
 1002_linux-4.18.3.patch | 37 +++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/0000_README b/0000_README
index f72e2ad..c313d8e 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.18.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.2
 
+Patch:  1002_linux-4.18.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.18.3.patch b/1002_linux-4.18.3.patch
new file mode 100644
index 0000000..62abf0a
--- /dev/null
+++ b/1002_linux-4.18.3.patch
@@ -0,0 +1,37 @@
+diff --git a/Makefile b/Makefile
+index fd409a0fd4e1..e2bd815f24eb 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+index 44b1203ece12..a0c1525f1b6f 100644
+--- a/arch/x86/include/asm/pgtable-invert.h
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -4,9 +4,18 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
++/*
++ * A clear pte value is special, and doesn't get inverted.
++ *
++ * Note that even users that only pass a pgprot_t (rather
++ * than a full pte) won't trigger the special zero case,
++ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
++ * set. So the all zero case really is limited to just the
++ * cleared page table entry case.
++ */
+ static inline bool __pte_needs_invert(u64 val)
+ {
+-	return !(val & _PAGE_PRESENT);
++	return val && !(val & _PAGE_PRESENT);
+ }
+ 
+ /* Get a mask to xor with the page table entry to get the correct pfn. */


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-17 19:44 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-17 19:44 UTC (permalink / raw)
  To: gentoo-commits

commit:     8cac435047b25399fa3486e339f23ddfb49da684
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:43:43 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 17 19:43:43 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8cac4350

Removal of redundant patch.

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled

 0000_README                                    |  4 ---
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 --------------------------
 2 files changed, 44 deletions(-)

diff --git a/0000_README b/0000_README
index c801597..f72e2ad 100644
--- a/0000_README
+++ b/0000_README
@@ -59,10 +59,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
-From:   http://www.kernel.org
-Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
deleted file mode 100644
index 88c2ec6..0000000
--- a/1700_x86-l1tf-config-kvm-build-error-fix.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
-From: Guenter Roeck <linux@roeck-us.net>
-Date: Wed, 15 Aug 2018 08:38:33 -0700
-Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
-
-From: Guenter Roeck <linux@roeck-us.net>
-
-commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
-
-allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
-
-  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
-
-Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
-Reported-by: Meelis Roos <mroos@linux.ee>
-Cc: Meelis Roos <mroos@linux.ee>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Guenter Roeck <linux@roeck-us.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
----
- arch/x86/kernel/cpu/bugs.c |    3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
---- a/arch/x86/kernel/cpu/bugs.c
-+++ b/arch/x86/kernel/cpu/bugs.c
-@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
- enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
- #if IS_ENABLED(CONFIG_KVM_INTEL)
- EXPORT_SYMBOL_GPL(l1tf_mitigation);
--
-+#endif
- enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
- EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
--#endif
- 
- static void __init l1tf_select_mitigation(void)
- {


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-17 19:28 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-17 19:28 UTC (permalink / raw
  To: gentoo-commits

commit:     1faffd1a486263e356b11211cfa05b09ce97eae4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Aug 17 19:28:20 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Aug 17 19:28:20 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1faffd1a

Linux patch 4.18.2

 0000_README             |    4 +
 1001_linux-4.18.2.patch | 1679 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1683 insertions(+)

diff --git a/0000_README b/0000_README
index ad4a3ed..c801597 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-4.18.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.18.1
 
+Patch:  1001_linux-4.18.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-4.18.2.patch b/1001_linux-4.18.2.patch
new file mode 100644
index 0000000..1853255
--- /dev/null
+++ b/1001_linux-4.18.2.patch
@@ -0,0 +1,1679 @@
+diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
+index ddc029734b25..005d8842a503 100644
+--- a/Documentation/process/changes.rst
++++ b/Documentation/process/changes.rst
+@@ -35,7 +35,7 @@ binutils               2.20             ld -v
+ flex                   2.5.35           flex --version
+ bison                  2.0              bison --version
+ util-linux             2.10o            fdformat --version
+-module-init-tools      0.9.10           depmod -V
++kmod                   13               depmod -V
+ e2fsprogs              1.41.4           e2fsck -V
+ jfsutils               1.1.3            fsck.jfs -V
+ reiserfsprogs          3.6.3            reiserfsck -V
+@@ -156,12 +156,6 @@ is not build with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
+ reproduce the Oops with that option, then you can still decode that Oops
+ with ksymoops.
+ 
+-Module-Init-Tools
+------------------
+-
+-A new module loader is now in the kernel that requires ``module-init-tools``
+-to use.  It is backward compatible with the 2.4.x series kernels.
+-
+ Mkinitrd
+ --------
+ 
+@@ -371,16 +365,17 @@ Util-linux
+ 
+ - <https://www.kernel.org/pub/linux/utils/util-linux/>
+ 
++Kmod
++----
++
++- <https://www.kernel.org/pub/linux/utils/kernel/kmod/>
++- <https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git>
++
+ Ksymoops
+ --------
+ 
+ - <https://www.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+ 
+-Module-Init-Tools
+------------------
+-
+-- <https://www.kernel.org/pub/linux/utils/kernel/module-init-tools/>
+-
+ Mkinitrd
+ --------
+ 
+diff --git a/Makefile b/Makefile
+index 5edf963148e8..fd409a0fd4e1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 493ff75670ff..8ae5d7ae4af3 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -977,12 +977,12 @@ int pmd_clear_huge(pmd_t *pmdp)
+ 	return 1;
+ }
+ 
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return pud_none(*pud);
+ }
+ 
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return pmd_none(*pmd);
+ }
+diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+index 16c4ccb1f154..d2364c55bbde 100644
+--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
++++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+@@ -265,7 +265,7 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
+ 	vpinsrd	$1, _args_digest+1*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$2, _args_digest+2*32(state, idx, 4), %xmm0, %xmm0
+ 	vpinsrd	$3, _args_digest+3*32(state, idx, 4), %xmm0, %xmm0
+-	vmovd   _args_digest(state , idx, 4) , %xmm0
++	vmovd	_args_digest+4*32(state, idx, 4), %xmm1
+ 	vpinsrd	$1, _args_digest+5*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$2, _args_digest+6*32(state, idx, 4), %xmm1, %xmm1
+ 	vpinsrd	$3, _args_digest+7*32(state, idx, 4), %xmm1, %xmm1
+diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
+index de27615c51ea..0c662cb6a723 100644
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -95,6 +95,11 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
+ 	} else {
+ 		for_each_cpu(cpu, cpus) {
+ 			vcpu = hv_cpu_number_to_vp_number(cpu);
++			if (vcpu == VP_INVAL) {
++				local_irq_restore(flags);
++				goto do_native;
++			}
++
+ 			if (vcpu >= 64)
+ 				goto do_native;
+ 
+diff --git a/arch/x86/include/asm/i8259.h b/arch/x86/include/asm/i8259.h
+index 5cdcdbd4d892..89789e8c80f6 100644
+--- a/arch/x86/include/asm/i8259.h
++++ b/arch/x86/include/asm/i8259.h
+@@ -3,6 +3,7 @@
+ #define _ASM_X86_I8259_H
+ 
+ #include <linux/delay.h>
++#include <asm/io.h>
+ 
+ extern unsigned int cached_irq_mask;
+ 
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index d492752f79e1..391f358ebb4c 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -394,10 +394,10 @@ extern int uv_hub_info_version(void)
+ EXPORT_SYMBOL(uv_hub_info_version);
+ 
+ /* Default UV memory block size is 2GB */
+-static unsigned long mem_block_size = (2UL << 30);
++static unsigned long mem_block_size __initdata = (2UL << 30);
+ 
+ /* Kernel parameter to specify UV mem block size */
+-static int parse_mem_block_size(char *ptr)
++static int __init parse_mem_block_size(char *ptr)
+ {
+ 	unsigned long size = memparse(ptr, NULL);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index c4f0ae49a53d..664f161f96ff 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9eda6f730ec4..b41b72bd8bb8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -905,7 +905,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ 	apply_forced_caps(c);
+ }
+ 
+-static void get_cpu_address_sizes(struct cpuinfo_x86 *c)
++void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ {
+ 	u32 eax, ebx, ecx, edx;
+ 
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index e59c0ea82a33..7b229afa0a37 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -46,6 +46,7 @@ extern const struct cpu_dev *const __x86_cpu_dev_start[],
+ 			    *const __x86_cpu_dev_end[];
+ 
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
++extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+ extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+ extern u32 get_scattered_cpuid_leaf(unsigned int level,
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 7bb6f65c79de..29505724202a 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1784,6 +1784,12 @@ int set_memory_nonglobal(unsigned long addr, int numpages)
+ 				      __pgprot(_PAGE_GLOBAL), 0);
+ }
+ 
++int set_memory_global(unsigned long addr, int numpages)
++{
++	return change_page_attr_set(&addr, numpages,
++				    __pgprot(_PAGE_GLOBAL), 0);
++}
++
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+ 	struct cpa_data cpa;
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 47b5951e592b..e3deefb891da 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -719,28 +719,50 @@ int pmd_clear_huge(pmd_t *pmd)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_X86_64
+ /**
+  * pud_free_pmd_page - Clear pud entry and free pmd page.
+  * @pud: Pointer to a PUD.
++ * @addr: Virtual address associated with pud.
+  *
+- * Context: The pud range has been unmaped and TLB purged.
++ * Context: The pud range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ *
++ * NOTE: Callers must allow a single page allocation.
+  */
+-int pud_free_pmd_page(pud_t *pud)
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+-	pmd_t *pmd;
++	pmd_t *pmd, *pmd_sv;
++	pte_t *pte;
+ 	int i;
+ 
+ 	if (pud_none(*pud))
+ 		return 1;
+ 
+ 	pmd = (pmd_t *)pud_page_vaddr(*pud);
++	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
++	if (!pmd_sv)
++		return 0;
+ 
+-	for (i = 0; i < PTRS_PER_PMD; i++)
+-		if (!pmd_free_pte_page(&pmd[i]))
+-			return 0;
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		pmd_sv[i] = pmd[i];
++		if (!pmd_none(pmd[i]))
++			pmd_clear(&pmd[i]);
++	}
+ 
+ 	pud_clear(pud);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
++	for (i = 0; i < PTRS_PER_PMD; i++) {
++		if (!pmd_none(pmd_sv[i])) {
++			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
++			free_page((unsigned long)pte);
++		}
++	}
++
++	free_page((unsigned long)pmd_sv);
+ 	free_page((unsigned long)pmd);
+ 
+ 	return 1;
+@@ -749,11 +771,12 @@ int pud_free_pmd_page(pud_t *pud)
+ /**
+  * pmd_free_pte_page - Clear pmd entry and free pte page.
+  * @pmd: Pointer to a PMD.
++ * @addr: Virtual address associated with pmd.
+  *
+- * Context: The pmd range has been unmaped and TLB purged.
++ * Context: The pmd range has been unmapped and TLB purged.
+  * Return: 1 if clearing the entry succeeded. 0 otherwise.
+  */
+-int pmd_free_pte_page(pmd_t *pmd)
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	pte_t *pte;
+ 
+@@ -762,8 +785,30 @@ int pmd_free_pte_page(pmd_t *pmd)
+ 
+ 	pte = (pte_t *)pmd_page_vaddr(*pmd);
+ 	pmd_clear(pmd);
++
++	/* INVLPG to clear all paging-structure caches */
++	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
++
+ 	free_page((unsigned long)pte);
+ 
+ 	return 1;
+ }
++
++#else /* !CONFIG_X86_64 */
++
++int pud_free_pmd_page(pud_t *pud, unsigned long addr)
++{
++	return pud_none(*pud);
++}
++
++/*
++ * Disable free page handling on x86-PAE. This assures that ioremap()
++ * does not update sync'd pmd entries. See vmalloc_sync_one().
++ */
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
++{
++	return pmd_none(*pmd);
++}
++
++#endif /* CONFIG_X86_64 */
+ #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index fb752d9a3ce9..946455e9cfef 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -435,6 +435,13 @@ static inline bool pti_kernel_image_global_ok(void)
+ 	return true;
+ }
+ 
++/*
++ * This is the only user for these and it is not arch-generic
++ * like the other set_memory.h functions.  Just extern them.
++ */
++extern int set_memory_nonglobal(unsigned long addr, int numpages);
++extern int set_memory_global(unsigned long addr, int numpages);
++
+ /*
+  * For some configurations, map all of kernel text into the user page
+  * tables.  This reduces TLB misses, especially on non-PCID systems.
+@@ -447,7 +454,8 @@ void pti_clone_kernel_text(void)
+ 	 * clone the areas past rodata, they might contain secrets.
+ 	 */
+ 	unsigned long start = PFN_ALIGN(_text);
+-	unsigned long end = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_clone  = (unsigned long)__end_rodata_hpage_align;
++	unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table);
+ 
+ 	if (!pti_kernel_image_global_ok())
+ 		return;
+@@ -459,14 +467,18 @@ void pti_clone_kernel_text(void)
+ 	 * pti_set_kernel_image_nonglobal() did to clear the
+ 	 * global bit.
+ 	 */
+-	pti_clone_pmds(start, end, _PAGE_RW);
++	pti_clone_pmds(start, end_clone, _PAGE_RW);
++
++	/*
++	 * pti_clone_pmds() will set the global bit in any PMDs
++	 * that it clones, but we also need to get any PTEs in
++	 * the last level for areas that are not huge-page-aligned.
++	 */
++
++	/* Set the global bit for normal non-__init kernel text: */
++	set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
+ }
+ 
+-/*
+- * This is the only user for it and it is not arch-generic like
+- * the other set_memory.h functions.  Just extern it.
+- */
+-extern int set_memory_nonglobal(unsigned long addr, int numpages);
+ void pti_set_kernel_image_nonglobal(void)
+ {
+ 	/*
+@@ -478,9 +490,11 @@ void pti_set_kernel_image_nonglobal(void)
+ 	unsigned long start = PFN_ALIGN(_text);
+ 	unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+ 
+-	if (pti_kernel_image_global_ok())
+-		return;
+-
++	/*
++	 * This clears _PAGE_GLOBAL from the entire kernel image.
++	 * pti_clone_kernel_text() map put _PAGE_GLOBAL back for
++	 * areas that are mapped to userspace.
++	 */
+ 	set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
+ }
+ 
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 439a94bf89ad..c5e3f2acc7f0 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1259,6 +1259,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	get_cpu_cap(&boot_cpu_data);
+ 	x86_configure_nx();
+ 
++	/* Determine virtual and physical address sizes */
++	get_cpu_address_sizes(&boot_cpu_data);
++
+ 	/* Let's presume PV guests always boot on vCPU with id 0. */
+ 	per_cpu(xen_vcpu_id, 0) = 0;
+ 
+diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
+index d880a4897159..4ee7c041bb82 100644
+--- a/crypto/ablkcipher.c
++++ b/crypto/ablkcipher.c
+@@ -71,11 +71,9 @@ static inline u8 *ablkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+-						unsigned int bsize)
++static inline void ablkcipher_done_slow(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+-	unsigned int n = bsize;
+-
+ 	for (;;) {
+ 		unsigned int len_this_page = scatterwalk_pagelen(&walk->out);
+ 
+@@ -87,17 +85,13 @@ static inline unsigned int ablkcipher_done_slow(struct ablkcipher_walk *walk,
+ 		n -= len_this_page;
+ 		scatterwalk_start(&walk->out, sg_next(walk->out.sg));
+ 	}
+-
+-	return bsize;
+ }
+ 
+-static inline unsigned int ablkcipher_done_fast(struct ablkcipher_walk *walk,
+-						unsigned int n)
++static inline void ablkcipher_done_fast(struct ablkcipher_walk *walk,
++					unsigned int n)
+ {
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ static int ablkcipher_walk_next(struct ablkcipher_request *req,
+@@ -107,39 +101,40 @@ int ablkcipher_walk_done(struct ablkcipher_request *req,
+ 			 struct ablkcipher_walk *walk, int err)
+ {
+ 	struct crypto_tfm *tfm = req->base.tfm;
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW)))
+-			n = ablkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = ablkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & ABLKCIPHER_WALK_SLOW))) {
++		ablkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		ablkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
+-
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(req->base.flags);
+ 		return ablkcipher_walk_next(req, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != req->info)
+ 		memcpy(req->info, walk->iv, tfm->crt_ablkcipher.ivsize);
+ 	kfree(walk->iv_buffer);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(ablkcipher_walk_done);
+diff --git a/crypto/blkcipher.c b/crypto/blkcipher.c
+index 01c0d4aa2563..77b5fa293f66 100644
+--- a/crypto/blkcipher.c
++++ b/crypto/blkcipher.c
+@@ -70,19 +70,18 @@ static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static inline unsigned int blkcipher_done_slow(struct blkcipher_walk *walk,
+-					       unsigned int bsize)
++static inline void blkcipher_done_slow(struct blkcipher_walk *walk,
++				       unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+ 	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
+ 	addr = blkcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
+-	return bsize;
+ }
+ 
+-static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+-					       unsigned int n)
++static inline void blkcipher_done_fast(struct blkcipher_walk *walk,
++				       unsigned int n)
+ {
+ 	if (walk->flags & BLKCIPHER_WALK_COPY) {
+ 		blkcipher_map_dst(walk);
+@@ -96,49 +95,48 @@ static inline unsigned int blkcipher_done_fast(struct blkcipher_walk *walk,
+ 
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-
+-	return n;
+ }
+ 
+ int blkcipher_walk_done(struct blkcipher_desc *desc,
+ 			struct blkcipher_walk *walk, int err)
+ {
+-	unsigned int nbytes = 0;
++	unsigned int n; /* bytes processed */
++	bool more;
+ 
+-	if (likely(err >= 0)) {
+-		unsigned int n = walk->nbytes - err;
++	if (unlikely(err < 0))
++		goto finish;
+ 
+-		if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW)))
+-			n = blkcipher_done_fast(walk, n);
+-		else if (WARN_ON(err)) {
+-			err = -EINVAL;
+-			goto err;
+-		} else
+-			n = blkcipher_done_slow(walk, n);
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
+ 
+-		nbytes = walk->total - n;
+-		err = 0;
++	if (likely(!(walk->flags & BLKCIPHER_WALK_SLOW))) {
++		blkcipher_done_fast(walk, n);
++	} else {
++		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
++			err = -EINVAL;
++			goto finish;
++		}
++		blkcipher_done_slow(walk, n);
+ 	}
+ 
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-err:
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(desc->flags);
+ 		return blkcipher_walk_next(desc, walk);
+ 	}
+-
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 	if (walk->iv != desc->info)
+ 		memcpy(desc->info, walk->iv, walk->ivsize);
+ 	if (walk->buffer != walk->page)
+ 		kfree(walk->buffer);
+ 	if (walk->page)
+ 		free_page((unsigned long)walk->page);
+-
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(blkcipher_walk_done);
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 0fe2a2923ad0..5dc8407bdaa9 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -95,7 +95,7 @@ static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
+ 	return max(start, end_page);
+ }
+ 
+-static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
++static void skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ {
+ 	u8 *addr;
+ 
+@@ -103,23 +103,24 @@ static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
+ 	addr = skcipher_get_spot(addr, bsize);
+ 	scatterwalk_copychunks(addr, &walk->out, bsize,
+ 			       (walk->flags & SKCIPHER_WALK_PHYS) ? 2 : 1);
+-	return 0;
+ }
+ 
+ int skcipher_walk_done(struct skcipher_walk *walk, int err)
+ {
+-	unsigned int n = walk->nbytes - err;
+-	unsigned int nbytes;
+-
+-	nbytes = walk->total - n;
+-
+-	if (unlikely(err < 0)) {
+-		nbytes = 0;
+-		n = 0;
+-	} else if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
+-					   SKCIPHER_WALK_SLOW |
+-					   SKCIPHER_WALK_COPY |
+-					   SKCIPHER_WALK_DIFF)))) {
++	unsigned int n; /* bytes processed */
++	bool more;
++
++	if (unlikely(err < 0))
++		goto finish;
++
++	n = walk->nbytes - err;
++	walk->total -= n;
++	more = (walk->total != 0);
++
++	if (likely(!(walk->flags & (SKCIPHER_WALK_PHYS |
++				    SKCIPHER_WALK_SLOW |
++				    SKCIPHER_WALK_COPY |
++				    SKCIPHER_WALK_DIFF)))) {
+ unmap_src:
+ 		skcipher_unmap_src(walk);
+ 	} else if (walk->flags & SKCIPHER_WALK_DIFF) {
+@@ -131,28 +132,28 @@ unmap_src:
+ 		skcipher_unmap_dst(walk);
+ 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+ 		if (WARN_ON(err)) {
++			/* unexpected case; didn't process all bytes */
+ 			err = -EINVAL;
+-			nbytes = 0;
+-		} else
+-			n = skcipher_done_slow(walk, n);
++			goto finish;
++		}
++		skcipher_done_slow(walk, n);
++		goto already_advanced;
+ 	}
+ 
+-	if (err > 0)
+-		err = 0;
+-
+-	walk->total = nbytes;
+-	walk->nbytes = nbytes;
+-
+ 	scatterwalk_advance(&walk->in, n);
+ 	scatterwalk_advance(&walk->out, n);
+-	scatterwalk_done(&walk->in, 0, nbytes);
+-	scatterwalk_done(&walk->out, 1, nbytes);
++already_advanced:
++	scatterwalk_done(&walk->in, 0, more);
++	scatterwalk_done(&walk->out, 1, more);
+ 
+-	if (nbytes) {
++	if (more) {
+ 		crypto_yield(walk->flags & SKCIPHER_WALK_SLEEP ?
+ 			     CRYPTO_TFM_REQ_MAY_SLEEP : 0);
+ 		return skcipher_walk_next(walk);
+ 	}
++	err = 0;
++finish:
++	walk->nbytes = 0;
+ 
+ 	/* Short-circuit for the common/fast path. */
+ 	if (!((unsigned long)walk->buffer | (unsigned long)walk->page))
+@@ -399,7 +400,7 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
+ 	unsigned size;
+ 	u8 *iv;
+ 
+-	aligned_bs = ALIGN(bs, alignmask);
++	aligned_bs = ALIGN(bs, alignmask + 1);
+ 
+ 	/* Minimum size to align buffer by alignmask. */
+ 	size = alignmask & ~a;
+diff --git a/crypto/vmac.c b/crypto/vmac.c
+index df76a816cfb2..bb2fc787d615 100644
+--- a/crypto/vmac.c
++++ b/crypto/vmac.c
+@@ -1,6 +1,10 @@
+ /*
+- * Modified to interface to the Linux kernel
++ * VMAC: Message Authentication Code using Universal Hashing
++ *
++ * Reference: https://tools.ietf.org/html/draft-krovetz-vmac-01
++ *
+  * Copyright (c) 2009, Intel Corporation.
++ * Copyright (c) 2018, Google Inc.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms and conditions of the GNU General Public License,
+@@ -16,14 +20,15 @@
+  * Place - Suite 330, Boston, MA 02111-1307 USA.
+  */
+ 
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
++/*
++ * Derived from:
++ *	VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
++ *	This implementation is herby placed in the public domain.
++ *	The authors offers no warranty. Use at your own risk.
++ *	Last modified: 17 APR 08, 1700 PDT
++ */
+ 
++#include <asm/unaligned.h>
+ #include <linux/init.h>
+ #include <linux/types.h>
+ #include <linux/crypto.h>
+@@ -31,9 +36,35 @@
+ #include <linux/scatterlist.h>
+ #include <asm/byteorder.h>
+ #include <crypto/scatterwalk.h>
+-#include <crypto/vmac.h>
+ #include <crypto/internal/hash.h>
+ 
++/*
++ * User definable settings.
++ */
++#define VMAC_TAG_LEN	64
++#define VMAC_KEY_SIZE	128/* Must be 128, 192 or 256			*/
++#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
++#define VMAC_NHBYTES	128/* Must 2^i for any 3 < i < 13 Standard = 128*/
++
++/* per-transform (per-key) context */
++struct vmac_tfm_ctx {
++	struct crypto_cipher *cipher;
++	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
++	u64 polykey[2*VMAC_TAG_LEN/64];
++	u64 l3key[2*VMAC_TAG_LEN/64];
++};
++
++/* per-request context */
++struct vmac_desc_ctx {
++	union {
++		u8 partial[VMAC_NHBYTES];	/* partial block */
++		__le64 partial_words[VMAC_NHBYTES / 8];
++	};
++	unsigned int partial_size;	/* size of the partial block */
++	bool first_block_processed;
++	u64 polytmp[2*VMAC_TAG_LEN/64];	/* running total of L2-hash */
++};
++
+ /*
+  * Constants and masks
+  */
+@@ -318,13 +349,6 @@ static void poly_step_func(u64 *ahi, u64 *alo,
+ 	} while (0)
+ #endif
+ 
+-static void vhash_abort(struct vmac_ctx *ctx)
+-{
+-	ctx->polytmp[0] = ctx->polykey[0] ;
+-	ctx->polytmp[1] = ctx->polykey[1] ;
+-	ctx->first_block_processed = 0;
+-}
+-
+ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ {
+ 	u64 rh, rl, t, z = 0;
+@@ -364,280 +388,209 @@ static u64 l3hash(u64 p1, u64 p2, u64 k1, u64 k2, u64 len)
+ 	return rl;
+ }
+ 
+-static void vhash_update(const unsigned char *m,
+-			unsigned int mbytes, /* Pos multiple of VMAC_NHBYTES */
+-			struct vmac_ctx *ctx)
++/* L1 and L2-hash one or more VMAC_NHBYTES-byte blocks */
++static void vhash_blocks(const struct vmac_tfm_ctx *tctx,
++			 struct vmac_desc_ctx *dctx,
++			 const __le64 *mptr, unsigned int blocks)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	if (!mbytes)
+-		return;
+-
+-	BUG_ON(mbytes % VMAC_NHBYTES);
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;  /* Must be non-zero */
+-
+-	ch = ctx->polytmp[0];
+-	cl = ctx->polytmp[1];
+-
+-	if (!ctx->first_block_processed) {
+-		ctx->first_block_processed = 1;
++	const u64 *kptr = tctx->nhkey;
++	const u64 pkh = tctx->polykey[0];
++	const u64 pkl = tctx->polykey[1];
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++	u64 rh, rl;
++
++	if (!dctx->first_block_processed) {
++		dctx->first_block_processed = true;
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		ADD128(ch, cl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
++		blocks--;
+ 	}
+ 
+-	while (i--) {
++	while (blocks--) {
+ 		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+ 		rh &= m62;
+ 		poly_step(ch, cl, pkh, pkl, rh, rl);
+ 		mptr += (VMAC_NHBYTES/sizeof(u64));
+ 	}
+ 
+-	ctx->polytmp[0] = ch;
+-	ctx->polytmp[1] = cl;
++	dctx->polytmp[0] = ch;
++	dctx->polytmp[1] = cl;
+ }
+ 
+-static u64 vhash(unsigned char m[], unsigned int mbytes,
+-			u64 *tagl, struct vmac_ctx *ctx)
++static int vmac_setkey(struct crypto_shash *tfm,
++		       const u8 *key, unsigned int keylen)
+ {
+-	u64 rh, rl, *mptr;
+-	const u64 *kptr = (u64 *)ctx->nhkey;
+-	int i, remaining;
+-	u64 ch, cl;
+-	u64 pkh = ctx->polykey[0];
+-	u64 pkl = ctx->polykey[1];
+-
+-	mptr = (u64 *)m;
+-	i = mbytes / VMAC_NHBYTES;
+-	remaining = mbytes % VMAC_NHBYTES;
+-
+-	if (ctx->first_block_processed) {
+-		ch = ctx->polytmp[0];
+-		cl = ctx->polytmp[1];
+-	} else if (i) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		i--;
+-	} else if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), ch, cl);
+-		ch &= m62;
+-		ADD128(ch, cl, pkh, pkl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-		goto do_l3;
+-	} else {/* Empty String */
+-		ch = pkh; cl = pkl;
+-		goto do_l3;
+-	}
+-
+-	while (i--) {
+-		nh_vmac_nhbytes(mptr, kptr, VMAC_NHBYTES/8, rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-		mptr += (VMAC_NHBYTES/sizeof(u64));
+-	}
+-	if (remaining) {
+-		nh_16(mptr, kptr, 2*((remaining+15)/16), rh, rl);
+-		rh &= m62;
+-		poly_step(ch, cl, pkh, pkl, rh, rl);
+-	}
+-
+-do_l3:
+-	vhash_abort(ctx);
+-	remaining *= 8;
+-	return l3hash(ch, cl, ctx->l3key[0], ctx->l3key[1], remaining);
+-}
++	struct vmac_tfm_ctx *tctx = crypto_shash_ctx(tfm);
++	__be64 out[2];
++	u8 in[16] = { 0 };
++	unsigned int i;
++	int err;
+ 
+-static u64 vmac(unsigned char m[], unsigned int mbytes,
+-			const unsigned char n[16], u64 *tagl,
+-			struct vmac_ctx_t *ctx)
+-{
+-	u64 *in_n, *out_p;
+-	u64 p, h;
+-	int i;
+-
+-	in_n = ctx->__vmac_ctx.cached_nonce;
+-	out_p = ctx->__vmac_ctx.cached_aes;
+-
+-	i = n[15] & 1;
+-	if ((*(u64 *)(n+8) != in_n[1]) || (*(u64 *)(n) != in_n[0])) {
+-		in_n[0] = *(u64 *)(n);
+-		in_n[1] = *(u64 *)(n+8);
+-		((unsigned char *)in_n)[15] &= 0xFE;
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out_p, (unsigned char *)in_n);
+-
+-		((unsigned char *)in_n)[15] |= (unsigned char)(1-i);
++	if (keylen != VMAC_KEY_LEN) {
++		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
++		return -EINVAL;
+ 	}
+-	p = be64_to_cpup(out_p + i);
+-	h = vhash(m, mbytes, (u64 *)0, &ctx->__vmac_ctx);
+-	return le64_to_cpu(p + h);
+-}
+ 
+-static int vmac_set_key(unsigned char user_key[], struct vmac_ctx_t *ctx)
+-{
+-	u64 in[2] = {0}, out[2];
+-	unsigned i;
+-	int err = 0;
+-
+-	err = crypto_cipher_setkey(ctx->child, user_key, VMAC_KEY_LEN);
++	err = crypto_cipher_setkey(tctx->cipher, key, keylen);
+ 	if (err)
+ 		return err;
+ 
+ 	/* Fill nh key */
+-	((unsigned char *)in)[0] = 0x80;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.nhkey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.nhkey[i] = be64_to_cpup(out);
+-		ctx->__vmac_ctx.nhkey[i+1] = be64_to_cpup(out+1);
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0x80;
++	for (i = 0; i < ARRAY_SIZE(tctx->nhkey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->nhkey[i] = be64_to_cpu(out[0]);
++		tctx->nhkey[i+1] = be64_to_cpu(out[1]);
++		in[15]++;
+ 	}
+ 
+ 	/* Fill poly key */
+-	((unsigned char *)in)[0] = 0xC0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.polykey)/8; i += 2) {
+-		crypto_cipher_encrypt_one(ctx->child,
+-			(unsigned char *)out, (unsigned char *)in);
+-		ctx->__vmac_ctx.polytmp[i] =
+-			ctx->__vmac_ctx.polykey[i] =
+-				be64_to_cpup(out) & mpoly;
+-		ctx->__vmac_ctx.polytmp[i+1] =
+-			ctx->__vmac_ctx.polykey[i+1] =
+-				be64_to_cpup(out+1) & mpoly;
+-		((unsigned char *)in)[15] += 1;
++	in[0] = 0xC0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->polykey); i += 2) {
++		crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++		tctx->polykey[i] = be64_to_cpu(out[0]) & mpoly;
++		tctx->polykey[i+1] = be64_to_cpu(out[1]) & mpoly;
++		in[15]++;
+ 	}
+ 
+ 	/* Fill ip key */
+-	((unsigned char *)in)[0] = 0xE0;
+-	in[1] = 0;
+-	for (i = 0; i < sizeof(ctx->__vmac_ctx.l3key)/8; i += 2) {
++	in[0] = 0xE0;
++	in[15] = 0;
++	for (i = 0; i < ARRAY_SIZE(tctx->l3key); i += 2) {
+ 		do {
+-			crypto_cipher_encrypt_one(ctx->child,
+-				(unsigned char *)out, (unsigned char *)in);
+-			ctx->__vmac_ctx.l3key[i] = be64_to_cpup(out);
+-			ctx->__vmac_ctx.l3key[i+1] = be64_to_cpup(out+1);
+-			((unsigned char *)in)[15] += 1;
+-		} while (ctx->__vmac_ctx.l3key[i] >= p64
+-			|| ctx->__vmac_ctx.l3key[i+1] >= p64);
++			crypto_cipher_encrypt_one(tctx->cipher, (u8 *)out, in);
++			tctx->l3key[i] = be64_to_cpu(out[0]);
++			tctx->l3key[i+1] = be64_to_cpu(out[1]);
++			in[15]++;
++		} while (tctx->l3key[i] >= p64 || tctx->l3key[i+1] >= p64);
+ 	}
+ 
+-	/* Invalidate nonce/aes cache and reset other elements */
+-	ctx->__vmac_ctx.cached_nonce[0] = (u64)-1; /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.cached_nonce[1] = (u64)0;  /* Ensure illegal nonce */
+-	ctx->__vmac_ctx.first_block_processed = 0;
+-
+-	return err;
++	return 0;
+ }
+ 
+-static int vmac_setkey(struct crypto_shash *parent,
+-		const u8 *key, unsigned int keylen)
++static int vmac_init(struct shash_desc *desc)
+ {
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (keylen != VMAC_KEY_LEN) {
+-		crypto_shash_set_flags(parent, CRYPTO_TFM_RES_BAD_KEY_LEN);
+-		return -EINVAL;
+-	}
+-
+-	return vmac_set_key((u8 *)key, ctx);
+-}
+-
+-static int vmac_init(struct shash_desc *pdesc)
+-{
++	dctx->partial_size = 0;
++	dctx->first_block_processed = false;
++	memcpy(dctx->polytmp, tctx->polykey, sizeof(dctx->polytmp));
+ 	return 0;
+ }
+ 
+-static int vmac_update(struct shash_desc *pdesc, const u8 *p,
+-		unsigned int len)
++static int vmac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	int expand;
+-	int min;
+-
+-	expand = VMAC_NHBYTES - ctx->partial_size > 0 ?
+-			VMAC_NHBYTES - ctx->partial_size : 0;
+-
+-	min = len < expand ? len : expand;
+-
+-	memcpy(ctx->partial + ctx->partial_size, p, min);
+-	ctx->partial_size += min;
+-
+-	if (len < expand)
+-		return 0;
+-
+-	vhash_update(ctx->partial, VMAC_NHBYTES, &ctx->__vmac_ctx);
+-	ctx->partial_size = 0;
+-
+-	len -= expand;
+-	p += expand;
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	unsigned int n;
++
++	if (dctx->partial_size) {
++		n = min(len, VMAC_NHBYTES - dctx->partial_size);
++		memcpy(&dctx->partial[dctx->partial_size], p, n);
++		dctx->partial_size += n;
++		p += n;
++		len -= n;
++		if (dctx->partial_size == VMAC_NHBYTES) {
++			vhash_blocks(tctx, dctx, dctx->partial_words, 1);
++			dctx->partial_size = 0;
++		}
++	}
+ 
+-	if (len % VMAC_NHBYTES) {
+-		memcpy(ctx->partial, p + len - (len % VMAC_NHBYTES),
+-			len % VMAC_NHBYTES);
+-		ctx->partial_size = len % VMAC_NHBYTES;
++	if (len >= VMAC_NHBYTES) {
++		n = round_down(len, VMAC_NHBYTES);
++		/* TODO: 'p' may be misaligned here */
++		vhash_blocks(tctx, dctx, (const __le64 *)p, n / VMAC_NHBYTES);
++		p += n;
++		len -= n;
+ 	}
+ 
+-	vhash_update(p, len - len % VMAC_NHBYTES, &ctx->__vmac_ctx);
++	if (len) {
++		memcpy(dctx->partial, p, len);
++		dctx->partial_size = len;
++	}
+ 
+ 	return 0;
+ }
+ 
+-static int vmac_final(struct shash_desc *pdesc, u8 *out)
++static u64 vhash_final(const struct vmac_tfm_ctx *tctx,
++		       struct vmac_desc_ctx *dctx)
+ {
+-	struct crypto_shash *parent = pdesc->tfm;
+-	struct vmac_ctx_t *ctx = crypto_shash_ctx(parent);
+-	vmac_t mac;
+-	u8 nonce[16] = {};
+-
+-	/* vmac() ends up accessing outside the array bounds that
+-	 * we specify.  In appears to access up to the next 2-word
+-	 * boundary.  We'll just be uber cautious and zero the
+-	 * unwritten bytes in the buffer.
+-	 */
+-	if (ctx->partial_size) {
+-		memset(ctx->partial + ctx->partial_size, 0,
+-			VMAC_NHBYTES - ctx->partial_size);
++	unsigned int partial = dctx->partial_size;
++	u64 ch = dctx->polytmp[0];
++	u64 cl = dctx->polytmp[1];
++
++	/* L1 and L2-hash the final block if needed */
++	if (partial) {
++		/* Zero-pad to next 128-bit boundary */
++		unsigned int n = round_up(partial, 16);
++		u64 rh, rl;
++
++		memset(&dctx->partial[partial], 0, n - partial);
++		nh_16(dctx->partial_words, tctx->nhkey, n / 8, rh, rl);
++		rh &= m62;
++		if (dctx->first_block_processed)
++			poly_step(ch, cl, tctx->polykey[0], tctx->polykey[1],
++				  rh, rl);
++		else
++			ADD128(ch, cl, rh, rl);
+ 	}
+-	mac = vmac(ctx->partial, ctx->partial_size, nonce, NULL, ctx);
+-	memcpy(out, &mac, sizeof(vmac_t));
+-	memzero_explicit(&mac, sizeof(vmac_t));
+-	memset(&ctx->__vmac_ctx, 0, sizeof(struct vmac_ctx));
+-	ctx->partial_size = 0;
++
++	/* L3-hash the 128-bit output of L2-hash */
++	return l3hash(ch, cl, tctx->l3key[0], tctx->l3key[1], partial * 8);
++}
++
++static int vmac_final(struct shash_desc *desc, u8 *out)
++{
++	const struct vmac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
++	struct vmac_desc_ctx *dctx = shash_desc_ctx(desc);
++	static const u8 nonce[16] = {}; /* TODO: this is insecure */
++	union {
++		u8 bytes[16];
++		__be64 pads[2];
++	} block;
++	int index;
++	u64 hash, pad;
++
++	/* Finish calculating the VHASH of the message */
++	hash = vhash_final(tctx, dctx);
++
++	/* Generate pseudorandom pad by encrypting the nonce */
++	memcpy(&block, nonce, 16);
++	index = block.bytes[15] & 1;
++	block.bytes[15] &= ~1;
++	crypto_cipher_encrypt_one(tctx->cipher, block.bytes, block.bytes);
++	pad = be64_to_cpu(block.pads[index]);
++
++	/* The VMAC is the sum of VHASH and the pseudorandom pad */
++	put_unaligned_le64(hash + pad, out);
+ 	return 0;
+ }
+ 
+ static int vmac_init_tfm(struct crypto_tfm *tfm)
+ {
+-	struct crypto_cipher *cipher;
+-	struct crypto_instance *inst = (void *)tfm->__crt_alg;
++	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ 	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++	struct crypto_cipher *cipher;
+ 
+ 	cipher = crypto_spawn_cipher(spawn);
+ 	if (IS_ERR(cipher))
+ 		return PTR_ERR(cipher);
+ 
+-	ctx->child = cipher;
++	tctx->cipher = cipher;
+ 	return 0;
+ }
+ 
+ static void vmac_exit_tfm(struct crypto_tfm *tfm)
+ {
+-	struct vmac_ctx_t *ctx = crypto_tfm_ctx(tfm);
+-	crypto_free_cipher(ctx->child);
++	struct vmac_tfm_ctx *tctx = crypto_tfm_ctx(tfm);
++
++	crypto_free_cipher(tctx->cipher);
+ }
+ 
+ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+@@ -655,6 +608,10 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	if (IS_ERR(alg))
+ 		return PTR_ERR(alg);
+ 
++	err = -EINVAL;
++	if (alg->cra_blocksize != 16)
++		goto out_put_alg;
++
+ 	inst = shash_alloc_instance("vmac", alg);
+ 	err = PTR_ERR(inst);
+ 	if (IS_ERR(inst))
+@@ -670,11 +627,12 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
+-	inst->alg.digestsize = sizeof(vmac_t);
+-	inst->alg.base.cra_ctxsize = sizeof(struct vmac_ctx_t);
++	inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
+ 	inst->alg.base.cra_init = vmac_init_tfm;
+ 	inst->alg.base.cra_exit = vmac_exit_tfm;
+ 
++	inst->alg.descsize = sizeof(struct vmac_desc_ctx);
++	inst->alg.digestsize = VMAC_TAG_LEN / 8;
+ 	inst->alg.init = vmac_init;
+ 	inst->alg.update = vmac_update;
+ 	inst->alg.final = vmac_final;
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index ff478d826d7d..051b8c6bae64 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -84,8 +84,6 @@ done:
+ 
+ static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg)
+ {
+-	psp->sev_int_rcvd = 0;
+-
+ 	wait_event(psp->sev_int_queue, psp->sev_int_rcvd);
+ 	*reg = ioread32(psp->io_regs + PSP_CMDRESP);
+ }
+@@ -148,6 +146,8 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 	iowrite32(phys_lsb, psp->io_regs + PSP_CMDBUFF_ADDR_LO);
+ 	iowrite32(phys_msb, psp->io_regs + PSP_CMDBUFF_ADDR_HI);
+ 
++	psp->sev_int_rcvd = 0;
++
+ 	reg = cmd;
+ 	reg <<= PSP_CMDRESP_CMD_SHIFT;
+ 	reg |= PSP_CMDRESP_IOC;
+@@ -856,6 +856,9 @@ void psp_dev_destroy(struct sp_device *sp)
+ {
+ 	struct psp_device *psp = sp->psp_data;
+ 
++	if (!psp)
++		return;
++
+ 	if (psp->sev_misc)
+ 		kref_put(&misc_dev->refcount, sev_exit);
+ 
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index d2810c183b73..958ced3ca485 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -593,34 +593,82 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
+ 	}
+ }
+ 
++/*
++ * Update a CTR-AES 128 bit counter
++ */
++static void cc_update_ctr(u8 *ctr, unsigned int increment)
++{
++	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
++	    IS_ALIGNED((unsigned long)ctr, 8)) {
++
++		__be64 *high_be = (__be64 *)ctr;
++		__be64 *low_be = high_be + 1;
++		u64 orig_low = __be64_to_cpu(*low_be);
++		u64 new_low = orig_low + (u64)increment;
++
++		*low_be = __cpu_to_be64(new_low);
++
++		if (new_low < orig_low)
++			*high_be = __cpu_to_be64(__be64_to_cpu(*high_be) + 1);
++	} else {
++		u8 *pos = (ctr + AES_BLOCK_SIZE);
++		u8 val;
++		unsigned int size;
++
++		for (; increment; increment--)
++			for (size = AES_BLOCK_SIZE; size; size--) {
++				val = *--pos + 1;
++				*pos = val;
++				if (val)
++					break;
++			}
++	}
++}
++
+ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ {
+ 	struct skcipher_request *req = (struct skcipher_request *)cc_req;
+ 	struct scatterlist *dst = req->dst;
+ 	struct scatterlist *src = req->src;
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
++	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
++	unsigned int len;
+ 
+-	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+-	kzfree(req_ctx->iv);
++	switch (ctx_p->cipher_mode) {
++	case DRV_CIPHER_CBC:
++		/*
++		 * The crypto API expects us to set the req->iv to the last
++		 * ciphertext block. For encrypt, simply copy from the result.
++		 * For decrypt, we must copy from a saved buffer since this
++		 * could be an in-place decryption operation and the src is
++		 * lost by this point.
++		 */
++		if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
++			memcpy(req->iv, req_ctx->backup_info, ivsize);
++			kzfree(req_ctx->backup_info);
++		} else if (!err) {
++			len = req->cryptlen - ivsize;
++			scatterwalk_map_and_copy(req->iv, req->dst, len,
++						 ivsize, 0);
++		}
++		break;
+ 
+-	/*
+-	 * The crypto API expects us to set the req->iv to the last
+-	 * ciphertext block. For encrypt, simply copy from the result.
+-	 * For decrypt, we must copy from a saved buffer since this
+-	 * could be an in-place decryption operation and the src is
+-	 * lost by this point.
+-	 */
+-	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
+-		memcpy(req->iv, req_ctx->backup_info, ivsize);
+-		kzfree(req_ctx->backup_info);
+-	} else if (!err) {
+-		scatterwalk_map_and_copy(req->iv, req->dst,
+-					 (req->cryptlen - ivsize),
+-					 ivsize, 0);
++	case DRV_CIPHER_CTR:
++		/* Compute the counter of the last block */
++		len = ALIGN(req->cryptlen, AES_BLOCK_SIZE) / AES_BLOCK_SIZE;
++		cc_update_ctr((u8 *)req->iv, len);
++		break;
++
++	default:
++		break;
+ 	}
+ 
++	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++	kzfree(req_ctx->iv);
++
+ 	skcipher_request_complete(req, err);
+ }
+ 
+@@ -752,20 +800,29 @@ static int cc_cipher_encrypt(struct skcipher_request *req)
+ static int cc_cipher_decrypt(struct skcipher_request *req)
+ {
+ 	struct crypto_skcipher *sk_tfm = crypto_skcipher_reqtfm(req);
++	struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
++	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+ 	struct cipher_req_ctx *req_ctx = skcipher_request_ctx(req);
+ 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ 	gfp_t flags = cc_gfp_flags(&req->base);
++	unsigned int len;
+ 
+-	/*
+-	 * Allocate and save the last IV sized bytes of the source, which will
+-	 * be lost in case of in-place decryption and might be needed for CTS.
+-	 */
+-	req_ctx->backup_info = kmalloc(ivsize, flags);
+-	if (!req_ctx->backup_info)
+-		return -ENOMEM;
++	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++
++		/* Allocate and save the last IV sized bytes of the source,
++		 * which will be lost in case of in-place decryption.
++		 */
++		req_ctx->backup_info = kzalloc(ivsize, flags);
++		if (!req_ctx->backup_info)
++			return -ENOMEM;
++
++		len = req->cryptlen - ivsize;
++		scatterwalk_map_and_copy(req_ctx->backup_info, req->src, len,
++					 ivsize, 0);
++	} else {
++		req_ctx->backup_info = NULL;
++	}
+ 
+-	scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
+-				 (req->cryptlen - ivsize), ivsize, 0);
+ 	req_ctx->is_giv = false;
+ 
+ 	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 96ff777474d7..e4ebde05a8a0 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -602,66 +602,7 @@ static int cc_hash_update(struct ahash_request *req)
+ 	return rc;
+ }
+ 
+-static int cc_hash_finup(struct ahash_request *req)
+-{
+-	struct ahash_req_ctx *state = ahash_request_ctx(req);
+-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+-	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+-	u32 digestsize = crypto_ahash_digestsize(tfm);
+-	struct scatterlist *src = req->src;
+-	unsigned int nbytes = req->nbytes;
+-	u8 *result = req->result;
+-	struct device *dev = drvdata_to_dev(ctx->drvdata);
+-	bool is_hmac = ctx->is_hmac;
+-	struct cc_crypto_req cc_req = {};
+-	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
+-	unsigned int idx = 0;
+-	int rc;
+-	gfp_t flags = cc_gfp_flags(&req->base);
+-
+-	dev_dbg(dev, "===== %s-finup (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
+-
+-	if (cc_map_req(dev, state, ctx)) {
+-		dev_err(dev, "map_ahash_source() failed\n");
+-		return -EINVAL;
+-	}
+-
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 1,
+-				      flags)) {
+-		dev_err(dev, "map_ahash_request_final() failed\n");
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-	if (cc_map_result(dev, state, digestsize)) {
+-		dev_err(dev, "map_ahash_digest() failed\n");
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_req(dev, state, ctx);
+-		return -ENOMEM;
+-	}
+-
+-	/* Setup request structure */
+-	cc_req.user_cb = cc_hash_complete;
+-	cc_req.user_arg = req;
+-
+-	idx = cc_restore_hash(desc, ctx, state, idx);
+-
+-	if (is_hmac)
+-		idx = cc_fin_hmac(desc, req, idx);
+-
+-	idx = cc_fin_result(desc, req, idx);
+-
+-	rc = cc_send_request(ctx->drvdata, &cc_req, desc, idx, &req->base);
+-	if (rc != -EINPROGRESS && rc != -EBUSY) {
+-		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+-		cc_unmap_hash_request(dev, state, src, true);
+-		cc_unmap_result(dev, state, digestsize, result);
+-		cc_unmap_req(dev, state, ctx);
+-	}
+-	return rc;
+-}
+-
+-static int cc_hash_final(struct ahash_request *req)
++static int cc_do_finup(struct ahash_request *req, bool update)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+ 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+@@ -678,21 +619,20 @@ static int cc_hash_final(struct ahash_request *req)
+ 	int rc;
+ 	gfp_t flags = cc_gfp_flags(&req->base);
+ 
+-	dev_dbg(dev, "===== %s-final (%d) ====\n", is_hmac ? "hmac" : "hash",
+-		nbytes);
++	dev_dbg(dev, "===== %s-%s (%d) ====\n", is_hmac ? "hmac" : "hash",
++		update ? "finup" : "final", nbytes);
+ 
+ 	if (cc_map_req(dev, state, ctx)) {
+ 		dev_err(dev, "map_ahash_source() failed\n");
+ 		return -EINVAL;
+ 	}
+ 
+-	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, 0,
++	if (cc_map_hash_request_final(ctx->drvdata, state, src, nbytes, update,
+ 				      flags)) {
+ 		dev_err(dev, "map_ahash_request_final() failed\n");
+ 		cc_unmap_req(dev, state, ctx);
+ 		return -ENOMEM;
+ 	}
+-
+ 	if (cc_map_result(dev, state, digestsize)) {
+ 		dev_err(dev, "map_ahash_digest() failed\n");
+ 		cc_unmap_hash_request(dev, state, src, true);
+@@ -706,7 +646,7 @@ static int cc_hash_final(struct ahash_request *req)
+ 
+ 	idx = cc_restore_hash(desc, ctx, state, idx);
+ 
+-	/* "DO-PAD" must be enabled only when writing current length to HW */
++	/* Pad the hash */
+ 	hw_desc_init(&desc[idx]);
+ 	set_cipher_do(&desc[idx], DO_PAD);
+ 	set_cipher_mode(&desc[idx], ctx->hw_mode);
+@@ -731,6 +671,17 @@ static int cc_hash_final(struct ahash_request *req)
+ 	return rc;
+ }
+ 
++static int cc_hash_finup(struct ahash_request *req)
++{
++	return cc_do_finup(req, true);
++}
++
++
++static int cc_hash_final(struct ahash_request *req)
++{
++	return cc_do_finup(req, false);
++}
++
+ static int cc_hash_init(struct ahash_request *req)
+ {
+ 	struct ahash_req_ctx *state = ahash_request_ctx(req);
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 26ca0276b503..a75cb371cd19 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1019,8 +1019,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
+ int pud_clear_huge(pud_t *pud);
+ int pmd_clear_huge(pmd_t *pmd);
+-int pud_free_pmd_page(pud_t *pud);
+-int pmd_free_pte_page(pmd_t *pmd);
++int pud_free_pmd_page(pud_t *pud, unsigned long addr);
++int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
+ #else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
+ {
+@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
+ {
+ 	return 0;
+ }
+-static inline int pud_free_pmd_page(pud_t *pud)
++static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	return 0;
+ }
+-static inline int pmd_free_pte_page(pmd_t *pmd)
++static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	return 0;
+ }
+diff --git a/include/crypto/vmac.h b/include/crypto/vmac.h
+deleted file mode 100644
+index 6b700c7b2fe1..000000000000
+--- a/include/crypto/vmac.h
++++ /dev/null
+@@ -1,63 +0,0 @@
+-/*
+- * Modified to interface to the Linux kernel
+- * Copyright (c) 2009, Intel Corporation.
+- *
+- * This program is free software; you can redistribute it and/or modify it
+- * under the terms and conditions of the GNU General Public License,
+- * version 2, as published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope it will be useful, but WITHOUT
+- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+- * more details.
+- *
+- * You should have received a copy of the GNU General Public License along with
+- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+- * Place - Suite 330, Boston, MA 02111-1307 USA.
+- */
+-
+-#ifndef __CRYPTO_VMAC_H
+-#define __CRYPTO_VMAC_H
+-
+-/* --------------------------------------------------------------------------
+- * VMAC and VHASH Implementation by Ted Krovetz (tdk@acm.org) and Wei Dai.
+- * This implementation is herby placed in the public domain.
+- * The authors offers no warranty. Use at your own risk.
+- * Please send bug reports to the authors.
+- * Last modified: 17 APR 08, 1700 PDT
+- * ----------------------------------------------------------------------- */
+-
+-/*
+- * User definable settings.
+- */
+-#define VMAC_TAG_LEN	64
+-#define VMAC_KEY_SIZE	128/* Must be 128, 192 or 256			*/
+-#define VMAC_KEY_LEN	(VMAC_KEY_SIZE/8)
+-#define VMAC_NHBYTES	128/* Must 2^i for any 3 < i < 13 Standard = 128*/
+-
+-/*
+- * This implementation uses u32 and u64 as names for unsigned 32-
+- * and 64-bit integer types. These are defined in C99 stdint.h. The
+- * following may need adaptation if you are not running a C99 or
+- * Microsoft C environment.
+- */
+-struct vmac_ctx {
+-	u64 nhkey[(VMAC_NHBYTES/8)+2*(VMAC_TAG_LEN/64-1)];
+-	u64 polykey[2*VMAC_TAG_LEN/64];
+-	u64 l3key[2*VMAC_TAG_LEN/64];
+-	u64 polytmp[2*VMAC_TAG_LEN/64];
+-	u64 cached_nonce[2];
+-	u64 cached_aes[2];
+-	int first_block_processed;
+-};
+-
+-typedef u64 vmac_t;
+-
+-struct vmac_ctx_t {
+-	struct crypto_cipher *child;
+-	struct vmac_ctx __vmac_ctx;
+-	u8 partial[VMAC_NHBYTES];	/* partial block */
+-	int partial_size;		/* size of the partial block */
+-};
+-
+-#endif /* __CRYPTO_VMAC_H */
+diff --git a/lib/ioremap.c b/lib/ioremap.c
+index 54e5bbaa3200..517f5853ffed 100644
+--- a/lib/ioremap.c
++++ b/lib/ioremap.c
+@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
+ 		if (ioremap_pmd_enabled() &&
+ 		    ((next - addr) == PMD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+-		    pmd_free_pte_page(pmd)) {
++		    pmd_free_pte_page(pmd, addr)) {
+ 			if (pmd_set_huge(pmd, phys_addr + addr, prot))
+ 				continue;
+ 		}
+@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
+ 		if (ioremap_pud_enabled() &&
+ 		    ((next - addr) == PUD_SIZE) &&
+ 		    IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
+-		    pud_free_pmd_page(pud)) {
++		    pud_free_pmd_page(pud, addr)) {
+ 			if (pud_set_huge(pud, phys_addr + addr, prot))
+ 				continue;
+ 		}
+diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
+index 1036e4fa1ea2..3bba8f4b08a9 100644
+--- a/net/bluetooth/hidp/core.c
++++ b/net/bluetooth/hidp/core.c
+@@ -431,8 +431,8 @@ static void hidp_del_timer(struct hidp_session *session)
+ 		del_timer(&session->timer);
+ }
+ 
+-static void hidp_process_report(struct hidp_session *session,
+-				int type, const u8 *data, int len, int intr)
++static void hidp_process_report(struct hidp_session *session, int type,
++				const u8 *data, unsigned int len, int intr)
+ {
+ 	if (len > HID_MAX_BUFFER_SIZE)
+ 		len = HID_MAX_BUFFER_SIZE;
+diff --git a/scripts/depmod.sh b/scripts/depmod.sh
+index 1a6f85e0e6e1..999d585eaa73 100755
+--- a/scripts/depmod.sh
++++ b/scripts/depmod.sh
+@@ -10,10 +10,16 @@ fi
+ DEPMOD=$1
+ KERNELRELEASE=$2
+ 
+-if ! test -r System.map -a -x "$DEPMOD"; then
++if ! test -r System.map ; then
+ 	exit 0
+ fi
+ 
++if [ -z $(command -v $DEPMOD) ]; then
++	echo "'make modules_install' requires $DEPMOD. Please install it." >&2
++	echo "This is probably in the kmod package." >&2
++	exit 1
++fi
++
+ # older versions of depmod require the version string to start with three
+ # numbers, so we cheat with a symlink here
+ depmod_hack_needed=true


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-16 11:45 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-16 11:45 UTC (permalink / raw
  To: gentoo-commits

commit:     3bdd6e6c6a489937f6611447824fa98451d18aa1
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 16 11:45:09 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 16 11:45:09 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3bdd6e6c

x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled.

 0000_README                                    |  4 +++
 1700_x86-l1tf-config-kvm-build-error-fix.patch | 40 ++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/0000_README b/0000_README
index cf32ff2..ad4a3ed 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1700_x86-l1tf-config-kvm-build-error-fix.patch
+From:   http://www.kernel.org
+Desc:   x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
 Patch:  2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
 From:   https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
 Desc:   Add UAS disable quirk. See bug #640082.

diff --git a/1700_x86-l1tf-config-kvm-build-error-fix.patch b/1700_x86-l1tf-config-kvm-build-error-fix.patch
new file mode 100644
index 0000000..88c2ec6
--- /dev/null
+++ b/1700_x86-l1tf-config-kvm-build-error-fix.patch
@@ -0,0 +1,40 @@
+From 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 Mon Sep 17 00:00:00 2001
+From: Guenter Roeck <linux@roeck-us.net>
+Date: Wed, 15 Aug 2018 08:38:33 -0700
+Subject: x86/l1tf: Fix build error seen if CONFIG_KVM_INTEL is disabled
+
+From: Guenter Roeck <linux@roeck-us.net>
+
+commit 1eb46908b35dfbac0ec1848d4b1e39667e0187e9 upstream.
+
+allmodconfig+CONFIG_INTEL_KVM=n results in the following build error.
+
+  ERROR: "l1tf_vmx_mitigation" [arch/x86/kvm/kvm.ko] undefined!
+
+Fixes: 5b76a3cff011 ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry")
+Reported-by: Meelis Roos <mroos@linux.ee>
+Cc: Meelis Roos <mroos@linux.ee>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/bugs.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -648,10 +648,9 @@ void x86_spec_ctrl_setup_ap(void)
+ enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
+ #if IS_ENABLED(CONFIG_KVM_INTEL)
+ EXPORT_SYMBOL_GPL(l1tf_mitigation);
+-
++#endif
+ enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
+ EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+-#endif
+ 
+ static void __init l1tf_select_mitigation(void)
+ {


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-15 16:36 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-15 16:36 UTC (permalink / raw
  To: gentoo-commits

commit:     ad052097fe9d40c63236e6ae02f106d5226de58d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 15 16:36:52 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Aug 15 16:36:52 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ad052097

Linux patch 4.18.1

 0000_README             |    4 +
 1000_linux-4.18.1.patch | 4083 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4087 insertions(+)

diff --git a/0000_README b/0000_README
index 917d838..cf32ff2 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-4.18.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.18.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-4.18.1.patch b/1000_linux-4.18.1.patch
new file mode 100644
index 0000000..bd9c2da
--- /dev/null
+++ b/1000_linux-4.18.1.patch
@@ -0,0 +1,4083 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 9c5e7732d249..73318225a368 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -476,6 +476,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
++		/sys/devices/system/cpu/vulnerabilities/l1tf
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+@@ -487,3 +488,26 @@ Description:	Information about CPU vulnerabilities
+ 		"Not affected"	  CPU is not affected by the vulnerability
+ 		"Vulnerable"	  CPU is affected and no mitigation in effect
+ 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
++
++		Details about the l1tf file can be found in
++		Documentation/admin-guide/l1tf.rst
++
++What:		/sys/devices/system/cpu/smt
++		/sys/devices/system/cpu/smt/active
++		/sys/devices/system/cpu/smt/control
++Date:		June 2018
++Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
++Description:	Control Symetric Multi Threading (SMT)
++
++		active:  Tells whether SMT is active (enabled and siblings online)
++
++		control: Read/write interface to control SMT. Possible
++			 values:
++
++			 "on"		SMT is enabled
++			 "off"		SMT is disabled
++			 "forceoff"	SMT is force disabled. Cannot be changed.
++			 "notsupported" SMT is not supported by the CPU
++
++			 If control status is "forceoff" or "notsupported" writes
++			 are rejected.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 48d70af11652..0873685bab0f 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,6 +17,15 @@ etc.
+    kernel-parameters
+    devices
+ 
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++   :maxdepth: 1
++
++   l1tf
++
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+ 
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 533ff5c68970..1370b424a453 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1967,10 +1967,84 @@
+ 			(virtualized real and unpaged mode) on capable
+ 			Intel chips. Default is 1 (enabled)
+ 
++	kvm-intel.vmentry_l1d_flush=[KVM,Intel] Mitigation for L1 Terminal Fault
++			CVE-2018-3620.
++
++			Valid arguments: never, cond, always
++
++			always: L1D cache flush on every VMENTER.
++			cond:	Flush L1D on VMENTER only when the code between
++				VMEXIT and VMENTER can leak host memory.
++			never:	Disables the mitigation
++
++			Default is cond (do L1 cache flush in specific instances)
++
+ 	kvm-intel.vpid=	[KVM,Intel] Disable Virtual Processor Identification
+ 			feature (tagged TLBs) on capable Intel chips.
+ 			Default is 1 (enabled)
+ 
++	l1tf=           [X86] Control mitigation of the L1TF vulnerability on
++			      affected CPUs
++
++			The kernel PTE inversion protection is unconditionally
++			enabled and cannot be disabled.
++
++			full
++				Provides all available mitigations for the
++				L1TF vulnerability. Disables SMT and
++				enables all mitigations in the
++				hypervisors, i.e. unconditional L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			full,force
++				Same as 'full', but disables SMT and L1D
++				flush runtime control. Implies the
++				'nosmt=force' command line option.
++				(i.e. sysfs control of SMT is disabled.)
++
++			flush
++				Leaves SMT enabled and enables the default
++				hypervisor mitigation, i.e. conditional
++				L1D flush.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nosmt
++
++				Disables SMT and enables the default
++				hypervisor mitigation.
++
++				SMT control and L1D flush control via the
++				sysfs interface is still possible after
++				boot.  Hypervisors will issue a warning
++				when the first VM is started in a
++				potentially insecure configuration,
++				i.e. SMT enabled or L1D flush disabled.
++
++			flush,nowarn
++				Same as 'flush', but hypervisors will not
++				warn when a VM is started in a potentially
++				insecure configuration.
++
++			off
++				Disables hypervisor mitigations and doesn't
++				emit any warnings.
++
++			Default is 'flush'.
++
++			For details see: Documentation/admin-guide/l1tf.rst
++
+ 	l2cr=		[PPC]
+ 
+ 	l3cr=		[PPC]
+@@ -2687,6 +2761,10 @@
+ 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
+ 			Equivalent to smt=1.
+ 
++			[KNL,x86] Disable symmetric multithreading (SMT).
++			nosmt=force: Force disable SMT, cannot be undone
++				     via the sysfs control file.
++
+ 	nospectre_v2	[X86] Disable all mitigations for the Spectre variant 2
+ 			(indirect branch prediction) vulnerability. System may
+ 			allow data leaks with this option, which is equivalent
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+new file mode 100644
+index 000000000000..bae52b845de0
+--- /dev/null
++++ b/Documentation/admin-guide/l1tf.rst
+@@ -0,0 +1,610 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non-Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++     Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++   - The Intel XEON PHI family
++
++   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++     by the Meltdown vulnerability either. These CPUs should become
++     available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++   =============  =================  ==============================
++   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
++   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
++   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
++   =============  =================  ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++allows attacking any physical memory address in the system and the attack
++works across all protection domains. It allows an attack on SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   Operating Systems store arbitrary information in the address bits of a
++   PTE which is marked non present. This allows a malicious user space
++   application to attack the physical memory to which these PTEs resolve.
++   In some cases user-space can maliciously influence the information
++   encoded in the address bits of the PTE, thus making attacks more
++   deterministic and more practical.
++
++   The Linux kernel contains a mitigation for this attack vector, PTE
++   inversion, which is permanently enabled and has no performance
++   impact. The kernel ensures that the address bits of PTEs, which are not
++   marked present, never point to cacheable physical memory space.
++
++   A system with an up to date kernel is protected against attacks from
++   malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The fact that L1TF breaks all domain protections allows malicious guest
++   OSes, which can control the PTEs directly, and malicious guest user
++   space applications, which run on an unprotected guest kernel lacking the
++   PTE inversion mitigation for L1TF, to attack physical host memory.
++
++   A special aspect of L1TF in the context of virtualization is symmetric
++   multi threading (SMT). The Intel implementation of SMT is called
++   HyperThreading. The fact that Hyperthreads on the affected processors
++   share the L1 Data Cache (L1D) is important for this. As the flaw allows
++   only to attack data which is present in L1D, a malicious guest running
++   on one Hyperthread can attack the data which is brought into the L1D by
++   the context which runs on the sibling Hyperthread of the same physical
++   core. This context can be host OS, host user space or a different guest.
++
++   If the processor does not support Extended Page Tables, the attack is
++   only possible when the hypervisor does not sanitize the content of the
++   effective (shadow) page tables.
++
++   While solutions exist to mitigate these attack vectors fully, these
++   mitigations are not enabled by default in the Linux kernel because they
++   can affect performance significantly. The kernel provides several
++   mechanisms which can be utilized to address the problem depending on the
++   deployment scenario. The mitigations, their protection scope and impact
++   are described in the next sections.
++
++   The default mitigations and the rationale for choosing them are explained
++   at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++  ===========================   ===============================
++  'Not affected'		The processor is not vulnerable
++  'Mitigation: PTE Inversion'	The host protection is active
++  ===========================   ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++  - SMT status:
++
++    =====================  ================
++    'VMX: SMT vulnerable'  SMT is enabled
++    'VMX: SMT disabled'    SMT is disabled
++    =====================  ================
++
++  - L1D Flush mode:
++
++    ================================  ====================================
++    'L1D vulnerable'		      L1D flushing is disabled
++
++    'L1D conditional cache flushes'   L1D flush is conditionally enabled
++
++    'L1D cache flushes'		      L1D flush is unconditionally enabled
++    ================================  ====================================
++
++The resulting grade of protection is discussed in the following sections.
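As a concrete illustration, the status can be read with plain `cat`. The status string below is an assumed example composed from the tables above, and a temporary directory stands in for `/sys` so the commands run unprivileged on any machine; on a real system set `SYS=/sys` and drop the setup lines.

```shell
# Mock /sys tree so the example runs unprivileged; on a real system use SYS=/sys.
SYS=$(mktemp -d)
mkdir -p "$SYS/devices/system/cpu/vulnerabilities"
# Example status string (assumed; components per the tables above)
echo "Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable" \
	> "$SYS/devices/system/cpu/vulnerabilities/l1tf"
L1TF_STATUS=$(cat "$SYS/devices/system/cpu/vulnerabilities/l1tf")
echo "$L1TF_STATUS"
```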
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   To make sure that a guest cannot attack data which is present in the L1D
++   the hypervisor flushes the L1D before entering the guest.
++
++   Flushing the L1D evicts not only the data which should not be accessed
++   by a potentially malicious guest, it also flushes the guest
++   data. Flushing the L1D has a performance impact as the processor has to
++   bring the flushed guest data back into the L1D. Depending on the
++   frequency of VMEXIT/VMENTER and the type of computations in the guest
++   performance degradation in the range of 1% to 50% has been observed. For
++   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++   minimal. Virtio and mechanisms like posted interrupts are designed to
++   confine the VMEXITs to a bare minimum, but specific configurations and
++   application scenarios might still suffer from a high VMEXIT rate.
++
++   The kernel provides two L1D flush modes:
++    - conditional ('cond')
++    - unconditional ('always')
++
++   The conditional mode avoids L1D flushing after VMEXITs which execute
++   only audited code paths before the corresponding VMENTER. These code
++   paths have been verified not to expose secrets or other interesting
++   data to an attacker, but they can leak information about the address
++   space layout of the hypervisor.
++
++   Unconditional mode flushes L1D on all VMENTER invocations and provides
++   maximum protection. It has a higher overhead than the conditional
++   mode. The overhead cannot be quantified correctly as it depends on the
++   workload scenario and the resulting number of VMEXITs.
++
++   The general recommendation is to enable L1D flush on VMENTER. The kernel
++   defaults to conditional mode on affected processors.
++
++   **Note** that L1D flush does not prevent the SMT problem because the
++   sibling thread will also bring back its data into the L1D which makes it
++   attackable again.
++
++   L1D flush can be controlled by the administrator via the kernel command
++   line and sysfs control files. See :ref:`mitigation_control_command_line`
++   and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   To address the SMT problem, it is possible to make a guest or a group of
++   guests affine to one or more physical cores. The proper mechanism for
++   that is to utilize exclusive cpusets to ensure that no other guest or
++   host tasks can run on these cores.
++
++   If only a single guest or related guests run on sibling SMT threads on
++   the same physical core then they can only attack their own memory and
++   restricted parts of the host memory.
++
++   Host memory is attackable when one of the sibling SMT threads runs in
++   host OS (hypervisor) context and the other in guest context. The amount
++   of valuable information from the host OS context depends on the context
++   which the host OS executes, i.e. interrupts, soft interrupts and kernel
++   threads. The amount of valuable data from these contexts cannot be
++   declared as non-interesting for an attacker without deep inspection of
++   the code.
++
++   **Note** that assigning guests to a fixed set of physical cores affects
++   the ability of the scheduler to do load balancing and might have
++   negative effects on CPU utilization depending on the hosting
++   scenario. Disabling SMT might be a viable alternative for particular
++   scenarios.
++
++   For further information about confining guests to a single or to a group
++   of cores consult the cpusets documentation:
++
++   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
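A minimal sketch of such an exclusive cpuset follows (cgroup v1 interface). The cpuset name "guests" and the core numbers are arbitrary; a temporary directory stands in for the real cpuset mount point (normally /sys/fs/cgroup/cpuset) so the commands run without privileges.

```shell
# Hedged sketch: exclusive cpuset for guest vCPU threads (cgroup v1).
CPUSET_ROOT=$(mktemp -d)          # real root: /sys/fs/cgroup/cpuset
mkdir -p "$CPUSET_ROOT/guests"
echo 2-3 > "$CPUSET_ROOT/guests/cpuset.cpus"          # pin to cores 2-3
echo 0   > "$CPUSET_ROOT/guests/cpuset.mems"
echo 1   > "$CPUSET_ROOT/guests/cpuset.cpu_exclusive" # no other tasks here
# On a real system, attach the VM's threads: echo $PID > .../guests/tasks
GUEST_CPUS=$(cat "$CPUSET_ROOT/guests/cpuset.cpus")
echo "guest cpuset cpus: $GUEST_CPUS"
```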
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++   Interrupts can be made affine to logical CPUs. This is not universally
++   true because there are types of interrupts which are truly per CPU
++   interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
++   devices affine their interrupts to single CPUs or groups of CPUs per
++   queue without allowing the administrator to control the affinities.
++
++   Moving the interrupts, which can be affinity controlled, away from CPUs
++   which run untrusted guests, reduces the attack vector space.
++
++   Whether the interrupts which are affine to CPUs running untrusted
++   guests provide interesting data for an attacker depends on the system
++   configuration and the scenarios which run on the system. While for some
++   of the interrupts it can be assumed that they won't expose interesting
++   information beyond exposing hints about the host OS memory layout, there
++   is no way to make general assumptions.
++
++   Interrupt affinity can be controlled by the administrator via the
++   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++   available at:
++
++   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
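For example, moving an IRQ onto housekeeping CPUs is a single write to its affinity file. The IRQ number below is assumed, and a temporary file stands in for the procfs entry so the sketch runs unprivileged; on a real system write to /proc/irq/$NR/smp_affinity_list directly.

```shell
# Hedged sketch: keep an IRQ on housekeeping CPUs 0-1, away from guest CPUs.
AFFINITY_FILE=$(mktemp)           # stand-in for /proc/irq/24/smp_affinity_list
echo 0-1 > "$AFFINITY_FILE"
IRQ_CPUS=$(cat "$AFFINITY_FILE")
echo "IRQ affine to CPUs: $IRQ_CPUS"
```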
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++   To prevent the SMT issues of L1TF it might be necessary to disable SMT
++   completely. Disabling SMT can have a significant performance impact, but
++   the impact depends on the hosting scenario and the type of workloads.
++   The impact of disabling SMT also needs to be weighed against the impact
++   of other mitigation solutions like confining guests to dedicated cores.
++
++   The kernel provides a sysfs interface to retrieve the status of SMT and
++   to control it. It also provides a kernel command line interface to
++   control SMT.
++
++   The kernel command line interface consists of the following options:
++
++     =========== ==========================================================
++     nosmt	 Affects the bring up of the secondary CPUs during boot. The
++		 kernel tries to bring all present CPUs online during the
++		 boot process. "nosmt" makes sure that from each physical
++		 core only one - the so called primary (hyper) thread is
++		 activated. Due to a design flaw of Intel processors related
++		 to Machine Check Exceptions the non primary siblings have
++		 to be brought up at least partially and are then shut down
++		 again.  "nosmt" can be undone via the sysfs interface.
++
++     nosmt=force Has the same effect as "nosmt" but it does not allow
++		 undoing the SMT disable via the sysfs interface.
++     =========== ==========================================================
++
++   The sysfs interface provides two files:
++
++   - /sys/devices/system/cpu/smt/control
++   - /sys/devices/system/cpu/smt/active
++
++   /sys/devices/system/cpu/smt/control:
++
++     This file allows reading out the SMT control state and provides the
++     ability to disable or (re)enable SMT. The possible states are:
++
++	==============  ===================================================
++	on		SMT is supported by the CPU and enabled. All
++			logical CPUs can be onlined and offlined without
++			restrictions.
++
++	off		SMT is supported by the CPU and disabled. Only
++			the so called primary SMT threads can be onlined
++			and offlined without restrictions. An attempt to
++			online a non-primary sibling is rejected.
++
++	forceoff	Same as 'off' but the state cannot be controlled.
++			Attempts to write to the control file are rejected.
++
++	notsupported	The processor does not support SMT. It's therefore
++			not affected by the SMT implications of L1TF.
++			Attempts to write to the control file are rejected.
++	==============  ===================================================
++
++     The possible states which can be written into this file to control SMT
++     state are:
++
++     - on
++     - off
++     - forceoff
++
++   /sys/devices/system/cpu/smt/active:
++
++     This file reports whether SMT is enabled and active, i.e. if on any
++     physical core two or more sibling threads are online.
++
++   SMT control is also possible at boot time via the l1tf kernel command
++   line parameter in combination with L1D flush control. See
++   :ref:`mitigation_control_command_line`.
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++  Disabling EPT for virtual machines provides full mitigation for L1TF even
++  with SMT enabled, because the effective page tables for guests are
++  managed and sanitized by the hypervisor. However, disabling EPT has a
++  significant performance impact, especially when the Meltdown mitigation
++  KPTI is enabled.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows controlling the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		Provides all available mitigations for the L1TF
++		vulnerability. Disables SMT and enables all mitigations in
++		the hypervisors, i.e. unconditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  full,force	Same as 'full', but disables SMT and L1D flush runtime
++		control. Implies the 'nosmt=force' command line option.
++		(i.e. sysfs control of SMT is disabled.)
++
++  flush		Leaves SMT enabled and enables the default hypervisor
++		mitigation, i.e. conditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
++		i.e. conditional L1D flushing.
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
++		started in a potentially insecure configuration.
++
++  off		Disables hypervisor mitigations and doesn't emit any
++		warnings.
++  ============  =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
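On a GRUB-based system the option can be appended to the kernel command line via the distribution's bootloader configuration, for example (a sketch; the file location and existing options vary by distribution):

```
# /etc/default/grub -- keep any existing options and append l1tf=
GRUB_CMDLINE_LINUX_DEFAULT="quiet l1tf=full,force"
# then regenerate the bootloader configuration, e.g. with grub-mkconfig
```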
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++---------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++  ============  ==============================================================
++  always	L1D cache flush on every VMENTER.
++
++  cond		Flush L1D on VMENTER only when the code between VMEXIT and
++		VMENTER can leak host memory which is considered
++		interesting for an attacker. This still can leak host memory
++		which allows e.g. to determine the hosts address space layout.
++
++  never		Disables the mitigation
++  ============  ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the module, and modified at runtime via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
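A persistent setting can be sketched as a modprobe configuration fragment (the file name is arbitrary):

```
# /etc/modprobe.d/kvm-intel.conf -- applied when the kvm_intel module loads
options kvm-intel vmentry_l1d_flush=always

# Runtime change (rejected when l1tf=full,force was given at boot):
#   echo always > /sys/module/kvm_intel/parameters/vmentry_l1d_flush
```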
++
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The system is protected by the kernel unconditionally and no further
++   action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   If the guest comes from a trusted source and the guest OS kernel is
++   guaranteed to have the L1TF mitigations in place the system is fully
++   protected against L1TF and no further action is required.
++
++   To avoid the overhead of the default L1D flushing on VMENTER the
++   administrator can disable the flushing via the kernel command line and
++   sysfs control files. See :ref:`mitigation_control_command_line` and
++   :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If SMT is not supported by the processor or disabled in the BIOS or by
++  the kernel, it's only required to enforce L1D flushing on VMENTER.
++
++  Conditional L1D flushing is the default behaviour and can be tuned. See
++  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If EPT is not supported by the processor or disabled in the hypervisor,
++  the system is fully protected. SMT can stay enabled and L1D flushing on
++  VMENTER is not required.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++  If SMT and EPT are supported and active then various degrees of
++  mitigations can be employed:
++
++  - L1D flushing on VMENTER:
++
++    L1D flushing on VMENTER is the minimal protection requirement, but it
++    is only potent in combination with other mitigation methods.
++
++    Conditional L1D flushing is the default behaviour and can be tuned. See
++    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++  - Guest confinement:
++
++    Confinement of guests to a single or a group of physical cores which
++    are not running any other processes can reduce the attack surface
++    significantly, but interrupts, soft interrupts and kernel threads can
++    still expose valuable data to a potential attacker. See
++    :ref:`guest_confinement`.
++
++  - Interrupt isolation:
++
++    Isolating the guest CPUs from interrupts can reduce the attack surface
++    further, but still allows a malicious guest to explore a limited amount
++    of host physical memory. This can at least be used to gain knowledge
++    about the host address space layout. The interrupts which have a fixed
++    affinity to the CPUs which run the untrusted guests can, depending on
++    the scenario, still trigger soft interrupts and schedule kernel threads
++    which might expose valuable information. See
++    :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++  - Disabling SMT:
++
++    Disabling SMT and enforcing the L1D flushing provides the maximum
++    amount of protection. This mitigation does not depend on any of the
++    above mitigation methods.
++
++    SMT control and L1D flushing can be tuned by the command line
++    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++    time with the matching sysfs control files. See :ref:`smt_control`,
++    :ref:`mitigation_control_command_line` and
++    :ref:`mitigation_control_kvm`.
++
++  - Disabling EPT:
++
++    Disabling EPT provides the maximum amount of protection as well. It
++    does not depend on any of the above mitigation methods. SMT can stay
++    enabled and L1D flushing is not required, but the performance impact is
++    significant.
++
++    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++    parameter.
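The decision flow of sections 3.1 to 3.3 can be summarized in a small helper. The function and its return strings are purely illustrative, not a kernel API:

```python
# Sketch of the L1TF mitigation selection flow (sections 2 and 3.1 - 3.3).
def l1tf_guidance(guests_trusted: bool, smt_active: bool, ept_enabled: bool) -> str:
    if guests_trusted:
        # Section 2: trusted guests with L1TF mitigations in place
        return "protected; default L1D flush may be disabled to save overhead"
    if not ept_enabled:
        # 3.2: shadow page tables are sanitized by the hypervisor
        return "fully protected; SMT may stay enabled, L1D flush not required"
    if not smt_active:
        # 3.1: enforcing L1D flush on VMENTER is sufficient
        return "enable L1D flush on VMENTER (conditional flush is the default)"
    # 3.3: SMT and EPT both active
    return ("combine L1D flush, guest confinement and interrupt isolation, "
            "or disable SMT or EPT for full protection")

print(l1tf_guidance(guests_trusted=False, smt_active=True, ept_enabled=True))
```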
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine.  VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++   nested virtual machine, so that the nested hypervisor's secrets are not
++   exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++   the nested hypervisor; this is a complex operation, and flushing the L1D
++   cache prevents the bare metal hypervisor's secrets from being exposed
++   to the nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++   is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - PTE inversion to protect against malicious user space. This is done
++    unconditionally and cannot be controlled.
++
++  - L1D conditional flushing on VMENTER when EPT is enabled for
++    a guest.
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++  The rationale for this choice is:
++
++  - Force disabling SMT can break existing setups, especially with
++    unattended updates.
++
++  - If regular users run untrusted guests on their machine, then L1TF is
++    just an add-on to other malware which might be embedded in an untrusted
++    guest, e.g. spam-bots or attacks on the local network.
++
++    There is no technical way to prevent a user from running untrusted code
++    on their machines blindly.
++
++  - It's technically extremely unlikely, and from today's knowledge even
++    impossible that L1TF can be exploited via the most popular attack
++    mechanisms like JavaScript because these mechanisms have no way to
++    control PTEs. If this were possible and no other mitigation were
++    available, then the default might be different.
++
++  - The administrators of cloud and hosting setups have to carefully
++    analyze the risk for their scenarios and make the appropriate
++    mitigation choices, which might even vary across their deployed
++    machines and also result in other changes of their overall setup.
++    There is no way for the kernel to provide a sensible default for this
++    kind of scenario.
+diff --git a/Makefile b/Makefile
+index 863f58503bee..5edf963148e8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 18
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Merciless Moray
+ 
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 1aa59063f1fd..d1f2ed462ac8 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -13,6 +13,9 @@ config KEXEC_CORE
+ config HAVE_IMA_KEXEC
+ 	bool
+ 
++config HOTPLUG_SMT
++	bool
++
+ config OPROFILE
+ 	tristate "OProfile system profiling"
+ 	depends on PROFILING
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 887d3a7bb646..6b8065d718bd 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -187,6 +187,7 @@ config X86
+ 	select HAVE_SYSCALL_TRACEPOINTS
+ 	select HAVE_UNSTABLE_SCHED_CLOCK
+ 	select HAVE_USER_RETURN_NOTIFIER
++	select HOTPLUG_SMT			if SMP
+ 	select IRQ_FORCED_THREADING
+ 	select NEED_SG_DMA_LENGTH
+ 	select PCI_LOCKLESS_CONFIG
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 74a9e06b6cfd..130e81e10fc7 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -10,6 +10,7 @@
+ #include <asm/fixmap.h>
+ #include <asm/mpspec.h>
+ #include <asm/msr.h>
++#include <asm/hardirq.h>
+ 
+ #define ARCH_APICTIMER_STOPS_ON_C3	1
+ 
+@@ -502,12 +503,19 @@ extern int default_check_phys_apicid_present(int phys_apicid);
+ 
+ #endif /* CONFIG_X86_LOCAL_APIC */
+ 
++#ifdef CONFIG_SMP
++bool apic_id_is_primary_thread(unsigned int id);
++#else
++static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
++#endif
++
+ extern void irq_enter(void);
+ extern void irq_exit(void);
+ 
+ static inline void entering_irq(void)
+ {
+ 	irq_enter();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void entering_ack_irq(void)
+@@ -520,6 +528,7 @@ static inline void ipi_entering_ack_irq(void)
+ {
+ 	irq_enter();
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ }
+ 
+ static inline void exiting_irq(void)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/dmi.h b/arch/x86/include/asm/dmi.h
+index 0ab2ab27ad1f..b825cb201251 100644
+--- a/arch/x86/include/asm/dmi.h
++++ b/arch/x86/include/asm/dmi.h
+@@ -4,8 +4,8 @@
+ 
+ #include <linux/compiler.h>
+ #include <linux/init.h>
++#include <linux/io.h>
+ 
+-#include <asm/io.h>
+ #include <asm/setup.h>
+ 
+ static __always_inline __init void *dmi_alloc(unsigned len)
+diff --git a/arch/x86/include/asm/hardirq.h b/arch/x86/include/asm/hardirq.h
+index 740a428acf1e..d9069bb26c7f 100644
+--- a/arch/x86/include/asm/hardirq.h
++++ b/arch/x86/include/asm/hardirq.h
+@@ -3,10 +3,12 @@
+ #define _ASM_X86_HARDIRQ_H
+ 
+ #include <linux/threads.h>
+-#include <linux/irq.h>
+ 
+ typedef struct {
+-	unsigned int __softirq_pending;
++	u16	     __softirq_pending;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++	u8	     kvm_cpu_l1tf_flush_l1d;
++#endif
+ 	unsigned int __nmi_count;	/* arch dependent */
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	unsigned int apic_timer_irqs;	/* arch dependent */
+@@ -58,4 +60,24 @@ extern u64 arch_irq_stat_cpu(unsigned int cpu);
+ extern u64 arch_irq_stat(void);
+ #define arch_irq_stat		arch_irq_stat
+ 
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static inline void kvm_set_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 1);
++}
++
++static inline void kvm_clear_cpu_l1tf_flush_l1d(void)
++{
++	__this_cpu_write(irq_stat.kvm_cpu_l1tf_flush_l1d, 0);
++}
++
++static inline bool kvm_get_cpu_l1tf_flush_l1d(void)
++{
++	return __this_cpu_read(irq_stat.kvm_cpu_l1tf_flush_l1d);
++}
++#else /* !IS_ENABLED(CONFIG_KVM_INTEL) */
++static inline void kvm_set_cpu_l1tf_flush_l1d(void) { }
++#endif /* IS_ENABLED(CONFIG_KVM_INTEL) */
++
+ #endif /* _ASM_X86_HARDIRQ_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index c4fc17220df9..c14f2a74b2be 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -13,6 +13,8 @@
+  * Interrupt control:
+  */
+ 
++/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
++extern inline unsigned long native_save_fl(void);
+ extern inline unsigned long native_save_fl(void)
+ {
+ 	unsigned long flags;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index c13cd28d9d1b..acebb808c4b5 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -17,6 +17,7 @@
+ #include <linux/tracepoint.h>
+ #include <linux/cpumask.h>
+ #include <linux/irq_work.h>
++#include <linux/irq.h>
+ 
+ #include <linux/kvm.h>
+ #include <linux/kvm_para.h>
+@@ -713,6 +714,9 @@ struct kvm_vcpu_arch {
+ 
+ 	/* be preempted when it's in kernel-mode(cpl=0) */
+ 	bool preempted_in_kernel;
++
++	/* Flush the L1 Data cache for L1TF mitigation on VMENTER */
++	bool l1tf_flush_l1d;
+ };
+ 
+ struct kvm_lpage_info {
+@@ -881,6 +885,7 @@ struct kvm_vcpu_stat {
+ 	u64 signal_exits;
+ 	u64 irq_window_exits;
+ 	u64 nmi_window_exits;
++	u64 l1d_flush;
+ 	u64 halt_exits;
+ 	u64 halt_successful_poll;
+ 	u64 halt_attempted_poll;
+@@ -1413,6 +1418,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu);
+ 
++u64 kvm_get_arch_capabilities(void);
+ void kvm_define_shared_msr(unsigned index, u32 msr);
+ int kvm_set_shared_msr(unsigned index, u64 val, u64 mask);
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 68b2c3150de1..4731f0cf97c5 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -70,12 +70,19 @@
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+ #define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
+ #define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	(1 << 3)   /* Skip L1D flush on vmentry */
+ #define ARCH_CAP_SSB_NO			(1 << 4)   /*
+ 						    * Not susceptible to Speculative Store Bypass
+ 						    * attack, so no Speculative Store Bypass
+ 						    * control required.
+ 						    */
+ 
++#define MSR_IA32_FLUSH_CMD		0x0000010b
++#define L1D_FLUSH			(1 << 0)   /*
++						    * Writeback and invalidate the
++						    * L1 data cache.
++						    */
++
+ #define MSR_IA32_BBL_CR_CTL		0x00000119
+ #define MSR_IA32_BBL_CR_CTL3		0x0000011e
+ 
+diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
+index aa30c3241ea7..0d5c739eebd7 100644
+--- a/arch/x86/include/asm/page_32_types.h
++++ b/arch/x86/include/asm/page_32_types.h
+@@ -29,8 +29,13 @@
+ #define N_EXCEPTION_STACKS 1
+ 
+ #ifdef CONFIG_X86_PAE
+-/* 44=32+12, the limit we can fit into an unsigned long pfn */
+-#define __PHYSICAL_MASK_SHIFT	44
++/*
++ * This is beyond the 44 bit limit imposed by the 32bit long pfns,
++ * but we need the full mask to make sure inverted PROT_NONE
++ * entries have all the host bits set in a guest.
++ * The real limit is still 44 bits.
++ */
++#define __PHYSICAL_MASK_SHIFT	52
+ #define __VIRTUAL_MASK_SHIFT	32
+ 
+ #else  /* !CONFIG_X86_PAE */
+diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h
+index 685ffe8a0eaf..60d0f9015317 100644
+--- a/arch/x86/include/asm/pgtable-2level.h
++++ b/arch/x86/include/asm/pgtable-2level.h
+@@ -95,4 +95,21 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { (pte).pte_low })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+ 
++/* No inverted PFNs on 2 level page tables */
++
++static inline u64 protnone_mask(u64 val)
++{
++	return 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	return val;
++}
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return false;
++}
++
+ #endif /* _ASM_X86_PGTABLE_2LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index f24df59c40b2..bb035a4cbc8c 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -241,12 +241,43 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
+ #endif
+ 
+ /* Encode and de-code a swap entry */
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
++
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
+ #define __swp_type(x)			(((x).val) & 0x1f)
+ #define __swp_offset(x)			((x).val >> 5)
+ #define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
+-#define __pte_to_swp_entry(pte)		((swp_entry_t){ (pte).pte_high })
+-#define __swp_entry_to_pte(x)		((pte_t){ { .pte_high = (x).val } })
++
++/*
++ * Normally, __swp_entry() converts from arch-independent swp_entry_t to
++ * arch-dependent swp_entry_t, and __swp_entry_to_pte() just stores the result
++ * to pte. But here we have 32bit swp_entry_t and 64bit pte, and need to use the
++ * whole 64 bits. Thus, we shift the "real" arch-dependent conversion to
++ * __swp_entry_to_pte() through the following helper macro based on 64bit
++ * __swp_entry().
++ */
++#define __swp_pteval_entry(type, offset) ((pteval_t) { \
++	(~(pteval_t)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((pteval_t)(type) << (64 - SWP_TYPE_BITS)) })
++
++#define __swp_entry_to_pte(x)	((pte_t){ .pte = \
++		__swp_pteval_entry(__swp_type(x), __swp_offset(x)) })
++/*
++ * Analogically, __pte_to_swp_entry() doesn't just extract the arch-dependent
++ * swp_entry_t, but also has to convert it from 64bit to the 32bit
++ * intermediate representation, using the following macros based on 64bit
++ * __swp_type() and __swp_offset().
++ */
++#define __pteval_swp_type(x) ((unsigned long)((x).pte >> (64 - SWP_TYPE_BITS)))
++#define __pteval_swp_offset(x) ((unsigned long)(~((x).pte) << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT))
++
++#define __pte_to_swp_entry(pte)	(__swp_entry(__pteval_swp_type(pte), \
++					     __pteval_swp_offset(pte)))
+ 
+ #define gup_get_pte gup_get_pte
+ /*
+@@ -295,4 +326,6 @@ static inline pte_t gup_get_pte(pte_t *ptep)
+ 	return pte;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
+diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
+new file mode 100644
+index 000000000000..44b1203ece12
+--- /dev/null
++++ b/arch/x86/include/asm/pgtable-invert.h
+@@ -0,0 +1,32 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_PGTABLE_INVERT_H
++#define _ASM_PGTABLE_INVERT_H 1
++
++#ifndef __ASSEMBLY__
++
++static inline bool __pte_needs_invert(u64 val)
++{
++	return !(val & _PAGE_PRESENT);
++}
++
++/* Get a mask to xor with the page table entry to get the correct pfn. */
++static inline u64 protnone_mask(u64 val)
++{
++	return __pte_needs_invert(val) ?  ~0ull : 0;
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
++{
++	/*
++	 * When a PTE transitions from NONE to !NONE or vice-versa
++	 * invert the PFN part to stop speculation.
++	 * pte_pfn undoes this when needed.
++	 */
++	if (__pte_needs_invert(oldval) != __pte_needs_invert(val))
++		val = (val & ~mask) | (~val & mask);
++	return val;
++}
++
++#endif /* __ASSEMBLY__ */
++
++#endif
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 5715647fc4fe..13125aad804c 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -185,19 +185,29 @@ static inline int pte_special(pte_t pte)
+ 	return pte_flags(pte) & _PAGE_SPECIAL;
+ }
+ 
++/* Entries that were set to PROT_NONE are inverted */
++
++static inline u64 protnone_mask(u64 val);
++
+ static inline unsigned long pte_pfn(pte_t pte)
+ {
+-	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
++	phys_addr_t pfn = pte_val(pte);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pmd_pfn(pmd_t pmd)
+ {
+-	return (pmd_val(pmd) & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pmd_val(pmd);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long pud_pfn(pud_t pud)
+ {
+-	return (pud_val(pud) & pud_pfn_mask(pud)) >> PAGE_SHIFT;
++	phys_addr_t pfn = pud_val(pud);
++	pfn ^= protnone_mask(pfn);
++	return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT;
+ }
+ 
+ static inline unsigned long p4d_pfn(p4d_t p4d)
+@@ -400,11 +410,6 @@ static inline pmd_t pmd_mkwrite(pmd_t pmd)
+ 	return pmd_set_flags(pmd, _PAGE_RW);
+ }
+ 
+-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+-{
+-	return pmd_clear_flags(pmd, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+ {
+ 	pudval_t v = native_pud_val(pud);
+@@ -459,11 +464,6 @@ static inline pud_t pud_mkwrite(pud_t pud)
+ 	return pud_set_flags(pud, _PAGE_RW);
+ }
+ 
+-static inline pud_t pud_mknotpresent(pud_t pud)
+-{
+-	return pud_clear_flags(pud, _PAGE_PRESENT | _PAGE_PROTNONE);
+-}
+-
+ #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+ static inline int pte_soft_dirty(pte_t pte)
+ {
+@@ -545,25 +545,45 @@ static inline pgprotval_t check_pgprot(pgprot_t pgprot)
+ 
+ static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PTE_PFN_MASK;
++	return __pte(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PMD_PAGE_MASK;
++	return __pmd(pfn | check_pgprot(pgprot));
+ }
+ 
+ static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
+ {
+-	return __pud(((phys_addr_t)page_nr << PAGE_SHIFT) |
+-		     check_pgprot(pgprot));
++	phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
++	pfn ^= protnone_mask(pgprot_val(pgprot));
++	pfn &= PHYSICAL_PUD_PAGE_MASK;
++	return __pud(pfn | check_pgprot(pgprot));
+ }
+ 
++static inline pmd_t pmd_mknotpresent(pmd_t pmd)
++{
++	return pfn_pmd(pmd_pfn(pmd),
++		      __pgprot(pmd_flags(pmd) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline pud_t pud_mknotpresent(pud_t pud)
++{
++	return pfn_pud(pud_pfn(pud),
++	      __pgprot(pud_flags(pud) & ~(_PAGE_PRESENT|_PAGE_PROTNONE)));
++}
++
++static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
++
+ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ {
+-	pteval_t val = pte_val(pte);
++	pteval_t val = pte_val(pte), oldval = val;
+ 
+ 	/*
+ 	 * Chop off the NX bit (if present), and add the NX portion of
+@@ -571,17 +591,17 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+ 	 */
+ 	val &= _PAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
+ 	return __pte(val);
+ }
+ 
+ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+ {
+-	pmdval_t val = pmd_val(pmd);
++	pmdval_t val = pmd_val(pmd), oldval = val;
+ 
+ 	val &= _HPAGE_CHG_MASK;
+ 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+-
++	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
+ 	return __pmd(val);
+ }
+ 
+@@ -1320,6 +1340,14 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
+ 	return __pte_access_permitted(pud_val(pud), write);
+ }
+ 
++#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
++extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return boot_cpu_has_bug(X86_BUG_L1TF);
++}
++
+ #include <asm-generic/pgtable.h>
+ #endif	/* __ASSEMBLY__ */
+ 
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 3c5385f9a88f..82ff20b0ae45 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -273,7 +273,7 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * |     ...            | 11| 10|  9|8|7|6|5| 4| 3|2| 1|0| <- bit number
+  * |     ...            |SW3|SW2|SW1|G|L|D|A|CD|WT|U| W|P| <- bit names
+- * | OFFSET (14->63) | TYPE (9-13)  |0|0|X|X| X| X|X|SD|0| <- swp entry
++ * | TYPE (59-63) | ~OFFSET (9-58)  |0|0|X|X| X| X|X|SD|0| <- swp entry
+  *
+  * G (8) is aliased and used as a PROT_NONE indicator for
+  * !present ptes.  We need to start storing swap entries above
+@@ -286,20 +286,34 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
+  *
+  * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
+  * but also L and G.
++ *
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
+  */
+-#define SWP_TYPE_FIRST_BIT (_PAGE_BIT_PROTNONE + 1)
+-#define SWP_TYPE_BITS 5
+-/* Place the offset above the type: */
+-#define SWP_OFFSET_FIRST_BIT (SWP_TYPE_FIRST_BIT + SWP_TYPE_BITS)
++#define SWP_TYPE_BITS		5
++
++#define SWP_OFFSET_FIRST_BIT	(_PAGE_BIT_PROTNONE + 1)
++
++/* We always extract/encode the offset by shifting it all the way up, and then down again */
++#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT+SWP_TYPE_BITS)
+ 
+ #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+ 
+-#define __swp_type(x)			(((x).val >> (SWP_TYPE_FIRST_BIT)) \
+-					 & ((1U << SWP_TYPE_BITS) - 1))
+-#define __swp_offset(x)			((x).val >> SWP_OFFSET_FIRST_BIT)
+-#define __swp_entry(type, offset)	((swp_entry_t) { \
+-					 ((type) << (SWP_TYPE_FIRST_BIT)) \
+-					 | ((offset) << SWP_OFFSET_FIRST_BIT) })
++/* Extract the high bits for type */
++#define __swp_type(x) ((x).val >> (64 - SWP_TYPE_BITS))
++
++/* Shift up (to get rid of type), then down to get value */
++#define __swp_offset(x) (~(x).val << SWP_TYPE_BITS >> SWP_OFFSET_SHIFT)
++
++/*
++ * Shift the offset up "too far" by TYPE bits, then down again
++ * The offset is inverted by a binary not operation to make the high
++ * physical bits set.
++ */
++#define __swp_entry(type, offset) ((swp_entry_t) { \
++	(~(unsigned long)(offset) << SWP_OFFSET_SHIFT >> SWP_TYPE_BITS) \
++	| ((unsigned long)(type) << (64-SWP_TYPE_BITS)) })
++
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+ #define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+@@ -343,5 +357,7 @@ static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
+ 	return true;
+ }
+ 
++#include <asm/pgtable-invert.h>
++
+ #endif /* !__ASSEMBLY__ */
+ #endif /* _ASM_X86_PGTABLE_64_H */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index cfd29ee8c3da..79e409974ccc 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -181,6 +181,11 @@ extern const struct seq_operations cpuinfo_op;
+ 
+ extern void cpu_detect(struct cpuinfo_x86 *c);
+ 
++static inline unsigned long l1tf_pfn_limit(void)
++{
++	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
++}
++
+ extern void early_cpu_init(void);
+ extern void identify_boot_cpu(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+@@ -977,4 +982,16 @@ bool xen_set_default_idle(void);
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
+ void microcode_check(void);
++
++enum l1tf_mitigations {
++	L1TF_MITIGATION_OFF,
++	L1TF_MITIGATION_FLUSH_NOWARN,
++	L1TF_MITIGATION_FLUSH,
++	L1TF_MITIGATION_FLUSH_NOSMT,
++	L1TF_MITIGATION_FULL,
++	L1TF_MITIGATION_FULL_FORCE
++};
++
++extern enum l1tf_mitigations l1tf_mitigation;
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/include/asm/topology.h b/arch/x86/include/asm/topology.h
+index c1d2a9892352..453cf38a1c33 100644
+--- a/arch/x86/include/asm/topology.h
++++ b/arch/x86/include/asm/topology.h
+@@ -123,13 +123,17 @@ static inline int topology_max_smt_threads(void)
+ }
+ 
+ int topology_update_package_map(unsigned int apicid, unsigned int cpu);
+-extern int topology_phys_to_logical_pkg(unsigned int pkg);
++int topology_phys_to_logical_pkg(unsigned int pkg);
++bool topology_is_primary_thread(unsigned int cpu);
++bool topology_smt_supported(void);
+ #else
+ #define topology_max_packages()			(1)
+ static inline int
+ topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
+ static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
+ static inline int topology_max_smt_threads(void) { return 1; }
++static inline bool topology_is_primary_thread(unsigned int cpu) { return true; }
++static inline bool topology_smt_supported(void) { return false; }
+ #endif
+ 
+ static inline void arch_fix_phys_package_id(int num, u32 slot)
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 6aa8499e1f62..95f9107449bf 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -576,4 +576,15 @@ enum vm_instruction_error_number {
+ 	VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
+ };
+ 
++enum vmx_l1d_flush_state {
++	VMENTER_L1D_FLUSH_AUTO,
++	VMENTER_L1D_FLUSH_NEVER,
++	VMENTER_L1D_FLUSH_COND,
++	VMENTER_L1D_FLUSH_ALWAYS,
++	VMENTER_L1D_FLUSH_EPT_DISABLED,
++	VMENTER_L1D_FLUSH_NOT_REQUIRED,
++};
++
++extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
++
+ #endif
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index adbda5847b14..3b3a2d0af78d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -56,6 +56,7 @@
+ #include <asm/hypervisor.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
++#include <asm/irq_regs.h>
+ 
+ unsigned int num_processors;
+ 
+@@ -2192,6 +2193,23 @@ static int cpuid_to_apicid[] = {
+ 	[0 ... NR_CPUS - 1] = -1,
+ };
+ 
++#ifdef CONFIG_SMP
++/**
++ * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
++ * @id:	APIC ID to check
++ */
++bool apic_id_is_primary_thread(unsigned int apicid)
++{
++	u32 mask;
++
++	if (smp_num_siblings == 1)
++		return true;
++	/* Isolate the SMT bit(s) in the APICID and check for 0 */
++	mask = (1U << (fls(smp_num_siblings) - 1)) - 1;
++	return !(apicid & mask);
++}
++#endif
++
+ /*
+  * Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
+  * and cpuid_to_apicid[] synchronized.
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 3982f79d2377..ff0d14cd9e82 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -33,6 +33,7 @@
+ 
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/init.h>
+ #include <linux/delay.h>
+ #include <linux/sched.h>
+diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
+index ce503c99f5c4..72a94401f9e0 100644
+--- a/arch/x86/kernel/apic/msi.c
++++ b/arch/x86/kernel/apic/msi.c
+@@ -12,6 +12,7 @@
+  */
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/dmar.h>
+ #include <linux/hpet.h>
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 35aaee4fc028..c9b773401fd8 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -11,6 +11,7 @@
+  * published by the Free Software Foundation.
+  */
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/init.h>
+ #include <linux/compiler.h>
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 38915fbfae73..97e962afb967 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -315,6 +315,13 @@ static void legacy_fixup_core_id(struct cpuinfo_x86 *c)
+ 	c->cpu_core_id %= cus_per_node;
+ }
+ 
++
++static void amd_get_topology_early(struct cpuinfo_x86 *c)
++{
++	if (cpu_has(c, X86_FEATURE_TOPOEXT))
++		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
++}
++
+ /*
+  * Fixup core topology information for
+  * (1) AMD multi-node processors
+@@ -334,7 +341,6 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
+ 		cpuid(0x8000001e, &eax, &ebx, &ecx, &edx);
+ 
+ 		node_id  = ecx & 0xff;
+-		smp_num_siblings = ((ebx >> 8) & 0xff) + 1;
+ 
+ 		if (c->x86 == 0x15)
+ 			c->cu_id = ebx & 0xff;
+@@ -613,6 +619,7 @@ clear_sev:
+ 
+ static void early_init_amd(struct cpuinfo_x86 *c)
+ {
++	u64 value;
+ 	u32 dummy;
+ 
+ 	early_init_amd_mc(c);
+@@ -683,6 +690,22 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ 		set_cpu_bug(c, X86_BUG_AMD_E400);
+ 
+ 	early_detect_mem_encrypt(c);
++
++	/* Re-enable TopologyExtensions if switched off by BIOS */
++	if (c->x86 == 0x15 &&
++	    (c->x86_model >= 0x10 && c->x86_model <= 0x6f) &&
++	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
++
++		if (msr_set_bit(0xc0011005, 54) > 0) {
++			rdmsrl(0xc0011005, value);
++			if (value & BIT_64(54)) {
++				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
++				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
++			}
++		}
++	}
++
++	amd_get_topology_early(c);
+ }
+ 
+ static void init_amd_k8(struct cpuinfo_x86 *c)
+@@ -774,19 +797,6 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ {
+ 	u64 value;
+ 
+-	/* re-enable TopologyExtensions if switched off by BIOS */
+-	if ((c->x86_model >= 0x10) && (c->x86_model <= 0x6f) &&
+-	    !cpu_has(c, X86_FEATURE_TOPOEXT)) {
+-
+-		if (msr_set_bit(0xc0011005, 54) > 0) {
+-			rdmsrl(0xc0011005, value);
+-			if (value & BIT_64(54)) {
+-				set_cpu_cap(c, X86_FEATURE_TOPOEXT);
+-				pr_info_once(FW_INFO "CPU: Re-enabling disabled Topology Extensions Support.\n");
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * The way access filter has a performance penalty on some workloads.
+ 	 * Disable it on the affected CPUs.
+@@ -850,16 +860,9 @@ static void init_amd(struct cpuinfo_x86 *c)
+ 
+ 	cpu_detect_cache_sizes(c);
+ 
+-	/* Multi core CPU? */
+-	if (c->extended_cpuid_level >= 0x80000008) {
+-		amd_detect_cmp(c);
+-		amd_get_topology(c);
+-		srat_detect_node(c);
+-	}
+-
+-#ifdef CONFIG_X86_32
+-	detect_ht(c);
+-#endif
++	amd_detect_cmp(c);
++	amd_get_topology(c);
++	srat_detect_node(c);
+ 
+ 	init_amd_cacheinfo(c);
+ 
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 5c0ea39311fe..c4f0ae49a53d 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -22,15 +22,18 @@
+ #include <asm/processor-flags.h>
+ #include <asm/fpu/internal.h>
+ #include <asm/msr.h>
++#include <asm/vmx.h>
+ #include <asm/paravirt.h>
+ #include <asm/alternative.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/intel-family.h>
+ #include <asm/hypervisor.h>
++#include <asm/e820/api.h>
+ 
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
++static void __init l1tf_select_mitigation(void);
+ 
+ /*
+  * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+@@ -56,6 +59,12 @@ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+ 
++	/*
++	 * identify_boot_cpu() initialized SMT support information, let the
++	 * core code know.
++	 */
++	cpu_smt_check_topology_early();
++
+ 	if (!IS_ENABLED(CONFIG_SMP)) {
+ 		pr_info("CPU: ");
+ 		print_cpu_info(&boot_cpu_data);
+@@ -82,6 +91,8 @@ void __init check_bugs(void)
+ 	 */
+ 	ssb_select_mitigation();
+ 
++	l1tf_select_mitigation();
++
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Check whether we are able to run this kernel safely on SMP.
+@@ -313,23 +324,6 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
+-/* Check for Skylake-like CPUs (for RSB handling) */
+-static bool __init is_skylake_era(void)
+-{
+-	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+-	    boot_cpu_data.x86 == 6) {
+-		switch (boot_cpu_data.x86_model) {
+-		case INTEL_FAM6_SKYLAKE_MOBILE:
+-		case INTEL_FAM6_SKYLAKE_DESKTOP:
+-		case INTEL_FAM6_SKYLAKE_X:
+-		case INTEL_FAM6_KABYLAKE_MOBILE:
+-		case INTEL_FAM6_KABYLAKE_DESKTOP:
+-			return true;
+-		}
+-	}
+-	return false;
+-}
+-
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -390,22 +384,15 @@ retpoline_auto:
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+ 	/*
+-	 * If neither SMEP nor PTI are available, there is a risk of
+-	 * hitting userspace addresses in the RSB after a context switch
+-	 * from a shallow call stack to a deeper one. To prevent this fill
+-	 * the entire RSB, even when using IBRS.
++	 * If spectre v2 protection has been enabled, unconditionally fill
++	 * RSB during a context switch; this protects against two independent
++	 * issues:
+ 	 *
+-	 * Skylake era CPUs have a separate issue with *underflow* of the
+-	 * RSB, when they will predict 'ret' targets from the generic BTB.
+-	 * The proper mitigation for this is IBRS. If IBRS is not supported
+-	 * or deactivated in favour of retpolines the RSB fill on context
+-	 * switch is required.
++	 *	- RSB underflow (and switch to BTB) on Skylake+
++	 *	- SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs
+ 	 */
+-	if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+-	     !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
+-		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+-		pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
+-	}
++	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
++	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
+ 
+ 	/* Initialize Indirect Branch Prediction Barrier if supported */
+ 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+@@ -654,8 +641,121 @@ void x86_spec_ctrl_setup_ap(void)
+ 		x86_amd_ssb_disable();
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"L1TF: " fmt
++
++/* Default mitigation for L1TF-affected CPUs */
++enum l1tf_mitigations l1tf_mitigation __ro_after_init = L1TF_MITIGATION_FLUSH;
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++EXPORT_SYMBOL_GPL(l1tf_mitigation);
++
++enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
++#endif
++
++static void __init l1tf_select_mitigation(void)
++{
++	u64 half_pa;
++
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return;
++
++	switch (l1tf_mitigation) {
++	case L1TF_MITIGATION_OFF:
++	case L1TF_MITIGATION_FLUSH_NOWARN:
++	case L1TF_MITIGATION_FLUSH:
++		break;
++	case L1TF_MITIGATION_FLUSH_NOSMT:
++	case L1TF_MITIGATION_FULL:
++		cpu_smt_disable(false);
++		break;
++	case L1TF_MITIGATION_FULL_FORCE:
++		cpu_smt_disable(true);
++		break;
++	}
++
++#if CONFIG_PGTABLE_LEVELS == 2
++	pr_warn("Kernel not compiled for PAE. No mitigation for L1TF\n");
++	return;
++#endif
++
++	/*
++	 * This is extremely unlikely to happen because almost all
++	 * systems have far more MAX_PA/2 than RAM can be fit into
++	 * DIMM slots.
++	 */
++	half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
++	if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
++		pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
++		return;
++	}
++
++	setup_force_cpu_cap(X86_FEATURE_L1TF_PTEINV);
++}
++
++static int __init l1tf_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		l1tf_mitigation = L1TF_MITIGATION_OFF;
++	else if (!strcmp(str, "flush,nowarn"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOWARN;
++	else if (!strcmp(str, "flush"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH;
++	else if (!strcmp(str, "flush,nosmt"))
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++	else if (!strcmp(str, "full"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL;
++	else if (!strcmp(str, "full,force"))
++		l1tf_mitigation = L1TF_MITIGATION_FULL_FORCE;
++
++	return 0;
++}
++early_param("l1tf", l1tf_cmdline);
++
++#undef pr_fmt
++
+ #ifdef CONFIG_SYSFS
+ 
++#define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion"
++
++#if IS_ENABLED(CONFIG_KVM_INTEL)
++static const char *l1tf_vmx_states[] = {
++	[VMENTER_L1D_FLUSH_AUTO]		= "auto",
++	[VMENTER_L1D_FLUSH_NEVER]		= "vulnerable",
++	[VMENTER_L1D_FLUSH_COND]		= "conditional cache flushes",
++	[VMENTER_L1D_FLUSH_ALWAYS]		= "cache flushes",
++	[VMENTER_L1D_FLUSH_EPT_DISABLED]	= "EPT disabled",
++	[VMENTER_L1D_FLUSH_NOT_REQUIRED]	= "flush not necessary"
++};
++
++static ssize_t l1tf_show_state(char *buf)
++{
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO)
++		return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED ||
++	    (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER &&
++	     cpu_smt_control == CPU_SMT_ENABLED))
++		return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG,
++			       l1tf_vmx_states[l1tf_vmx_mitigation]);
++
++	return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG,
++		       l1tf_vmx_states[l1tf_vmx_mitigation],
++		       cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled");
++}
++#else
++static ssize_t l1tf_show_state(char *buf)
++{
++	return sprintf(buf, "%s\n", L1TF_DEFAULT_MSG);
++}
++#endif
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -684,6 +784,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+ 
++	case X86_BUG_L1TF:
++		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
++			return l1tf_show_state(buf);
++		break;
+ 	default:
+ 		break;
+ 	}
+@@ -710,4 +814,9 @@ ssize_t cpu_show_spec_store_bypass(struct device *dev, struct device_attribute *
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_SPEC_STORE_BYPASS);
+ }
++
++ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index eb4cb3efd20e..9eda6f730ec4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -661,33 +661,36 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ 		tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+ 
+-void detect_ht(struct cpuinfo_x86 *c)
++int detect_ht_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+ 	u32 eax, ebx, ecx, edx;
+-	int index_msb, core_bits;
+-	static bool printed;
+ 
+ 	if (!cpu_has(c, X86_FEATURE_HT))
+-		return;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
+-		goto out;
++		return -1;
+ 
+ 	if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
+-		return;
++		return -1;
+ 
+ 	cpuid(1, &eax, &ebx, &ecx, &edx);
+ 
+ 	smp_num_siblings = (ebx & 0xff0000) >> 16;
+-
+-	if (smp_num_siblings == 1) {
++	if (smp_num_siblings == 1)
+ 		pr_info_once("CPU0: Hyper-Threading is disabled\n");
+-		goto out;
+-	}
++#endif
++	return 0;
++}
+ 
+-	if (smp_num_siblings <= 1)
+-		goto out;
++void detect_ht(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	int index_msb, core_bits;
++
++	if (detect_ht_early(c) < 0)
++		return;
+ 
+ 	index_msb = get_count_order(smp_num_siblings);
+ 	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+@@ -700,15 +703,6 @@ void detect_ht(struct cpuinfo_x86 *c)
+ 
+ 	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
+ 				       ((1 << core_bits) - 1);
+-
+-out:
+-	if (!printed && (c->x86_max_cores * smp_num_siblings) > 1) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-			c->phys_proc_id);
+-		pr_info("CPU: Processor Core ID: %d\n",
+-			c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ }
+ 
+@@ -987,6 +981,21 @@ static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+ 	{}
+ };
+ 
++static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
++	/* in addition to cpu_no_speculation */
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT1	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT2	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MERRIFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_MOOREFIELD	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_DENVERTON	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GEMINI_LAKE	},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
++	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
++	{}
++};
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = 0;
+@@ -1013,6 +1022,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
++
++	if (x86_match_cpu(cpu_no_l1tf))
++		return;
++
++	setup_force_cpu_bug(X86_BUG_L1TF);
+ }
+ 
+ /*
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 38216f678fc3..e59c0ea82a33 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -55,7 +55,9 @@ extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
+ extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
+ 
+ extern void detect_num_cpu_cores(struct cpuinfo_x86 *c);
++extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
+ extern int detect_extended_topology(struct cpuinfo_x86 *c);
++extern int detect_ht_early(struct cpuinfo_x86 *c);
+ extern void detect_ht(struct cpuinfo_x86 *c);
+ 
+ unsigned int aperfmperf_get_khz(int cpu);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index eb75564f2d25..6602941cfebf 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -301,6 +301,13 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ 	}
+ 
+ 	check_mpx_erratum(c);
++
++	/*
++	 * Get the number of SMT siblings early from the extended topology
++	 * leaf, if available. Otherwise try the legacy SMT detection.
++	 */
++	if (detect_extended_topology_early(c) < 0)
++		detect_ht_early(c);
+ }
+ 
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 08286269fd24..b9bc8a1a584e 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -509,12 +509,20 @@ static struct platform_device	*microcode_pdev;
+ 
+ static int check_online_cpus(void)
+ {
+-	if (num_online_cpus() == num_present_cpus())
+-		return 0;
++	unsigned int cpu;
+ 
+-	pr_err("Not all CPUs online, aborting microcode update.\n");
++	/*
++	 * Make sure all CPUs are online.  It's fine for SMT to be disabled if
++	 * all the primary threads are still online.
++	 */
++	for_each_present_cpu(cpu) {
++		if (topology_is_primary_thread(cpu) && !cpu_online(cpu)) {
++			pr_err("Not all CPUs online, aborting microcode update.\n");
++			return -EINVAL;
++		}
++	}
+ 
+-	return -EINVAL;
++	return 0;
+ }
+ 
+ static atomic_t late_cpus_in;
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 81c0afb39d0a..71ca064e3794 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -22,18 +22,10 @@
+ #define BITS_SHIFT_NEXT_LEVEL(eax)	((eax) & 0x1f)
+ #define LEVEL_MAX_SIBLINGS(ebx)		((ebx) & 0xffff)
+ 
+-/*
+- * Check for extended topology enumeration cpuid leaf 0xb and if it
+- * exists, use it for populating initial_apicid and cpu topology
+- * detection.
+- */
+-int detect_extended_topology(struct cpuinfo_x86 *c)
++int detect_extended_topology_early(struct cpuinfo_x86 *c)
+ {
+ #ifdef CONFIG_SMP
+-	unsigned int eax, ebx, ecx, edx, sub_index;
+-	unsigned int ht_mask_width, core_plus_mask_width;
+-	unsigned int core_select_mask, core_level_siblings;
+-	static bool printed;
++	unsigned int eax, ebx, ecx, edx;
+ 
+ 	if (c->cpuid_level < 0xb)
+ 		return -1;
+@@ -52,10 +44,30 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	 * initial apic id, which also represents 32-bit extended x2apic id.
+ 	 */
+ 	c->initial_apicid = edx;
++	smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
++#endif
++	return 0;
++}
++
++/*
++ * Check for extended topology enumeration cpuid leaf 0xb and if it
++ * exists, use it for populating initial_apicid and cpu topology
++ * detection.
++ */
++int detect_extended_topology(struct cpuinfo_x86 *c)
++{
++#ifdef CONFIG_SMP
++	unsigned int eax, ebx, ecx, edx, sub_index;
++	unsigned int ht_mask_width, core_plus_mask_width;
++	unsigned int core_select_mask, core_level_siblings;
++
++	if (detect_extended_topology_early(c) < 0)
++		return -1;
+ 
+ 	/*
+ 	 * Populate HT related information from sub-leaf level 0.
+ 	 */
++	cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ 	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+ 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+ 
+@@ -86,15 +98,6 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
+ 	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+ 
+ 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
+-
+-	if (!printed) {
+-		pr_info("CPU: Physical Processor ID: %d\n",
+-		       c->phys_proc_id);
+-		if (c->x86_max_cores > 1)
+-			pr_info("CPU: Processor Core ID: %d\n",
+-			       c->cpu_core_id);
+-		printed = 1;
+-	}
+ #endif
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index f92a6593de1e..2ea85b32421a 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -10,6 +10,7 @@
+ #include <asm/fpu/signal.h>
+ #include <asm/fpu/types.h>
+ #include <asm/traps.h>
++#include <asm/irq_regs.h>
+ 
+ #include <linux/hardirq.h>
+ #include <linux/pkeys.h>
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index 346b24883911..b0acb22e5a46 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -1,6 +1,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/export.h>
+ #include <linux/delay.h>
+ #include <linux/errno.h>
+diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
+index 86c4439f9d74..519649ddf100 100644
+--- a/arch/x86/kernel/i8259.c
++++ b/arch/x86/kernel/i8259.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/init.h>
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 74383a3780dc..01adea278a71 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -8,6 +8,7 @@
+ #include <asm/traps.h>
+ #include <asm/proto.h>
+ #include <asm/desc.h>
++#include <asm/hw_irq.h>
+ 
+ struct idt_data {
+ 	unsigned int	vector;
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 328d027d829d..59b5f2ea7c2f 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -10,6 +10,7 @@
+ #include <linux/ftrace.h>
+ #include <linux/delay.h>
+ #include <linux/export.h>
++#include <linux/irq.h>
+ 
+ #include <asm/apic.h>
+ #include <asm/io_apic.h>
+diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
+index c1bdbd3d3232..95600a99ae93 100644
+--- a/arch/x86/kernel/irq_32.c
++++ b/arch/x86/kernel/irq_32.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/seq_file.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel_stat.h>
+ #include <linux/notifier.h>
+ #include <linux/cpu.h>
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index d86e344f5b3d..0469cd078db1 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/kernel_stat.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/seq_file.h>
+ #include <linux/delay.h>
+ #include <linux/ftrace.h>
+diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
+index 772196c1b8c4..a0693b71cfc1 100644
+--- a/arch/x86/kernel/irqinit.c
++++ b/arch/x86/kernel/irqinit.c
+@@ -5,6 +5,7 @@
+ #include <linux/sched.h>
+ #include <linux/ioport.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/timex.h>
+ #include <linux/random.h>
+ #include <linux/kprobes.h>
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 6f4d42377fe5..44e26dc326d5 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -395,8 +395,6 @@ int __copy_instruction(u8 *dest, u8 *src, u8 *real, struct insn *insn)
+ 			  - (u8 *) real;
+ 		if ((s64) (s32) newdisp != newdisp) {
+ 			pr_err("Kprobes error: new displacement does not fit into s32 (%llx)\n", newdisp);
+-			pr_err("\tSrc: %p, Dest: %p, old disp: %x\n",
+-				src, real, insn->displacement.value);
+ 			return 0;
+ 		}
+ 		disp = (u8 *) dest + insn_offset_displacement(insn);
+@@ -640,8 +638,7 @@ static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ 		 * Raise a BUG or we'll continue in an endless reentering loop
+ 		 * and eventually a stack overflow.
+ 		 */
+-		printk(KERN_WARNING "Unrecoverable kprobe detected at %p.\n",
+-		       p->addr);
++		pr_err("Unrecoverable kprobe detected.\n");
+ 		dump_kprobe(p);
+ 		BUG();
+ 	default:
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 99dc79e76bdc..930c88341e4e 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -88,10 +88,12 @@ unsigned paravirt_patch_call(void *insnbuf,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (tgt_clobbers & ~site_clobbers)
+-		return len;	/* target would clobber too much for this site */
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe8; /* call */
+ 	b->delta = delta;
+@@ -106,8 +108,12 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
+ 	struct branch *b = insnbuf;
+ 	unsigned long delta = (unsigned long)target - (addr+5);
+ 
+-	if (len < 5)
++	if (len < 5) {
++#ifdef CONFIG_RETPOLINE
++		WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
++#endif
+ 		return len;	/* call too long for patch site */
++	}
+ 
+ 	b->opcode = 0xe9;	/* jmp */
+ 	b->delta = delta;
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 2f86d883dd95..74b4472ba0a6 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -823,6 +823,12 @@ void __init setup_arch(char **cmdline_p)
+ 	memblock_reserve(__pa_symbol(_text),
+ 			 (unsigned long)__bss_stop - (unsigned long)_text);
+ 
++	/*
++	 * Make sure page 0 is always reserved because on systems with
++	 * L1TF its contents can be leaked to user processes.
++	 */
++	memblock_reserve(0, PAGE_SIZE);
++
+ 	early_reserve_initrd();
+ 
+ 	/*
+diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
+index 5c574dff4c1a..04adc8d60aed 100644
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -261,6 +261,7 @@ __visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
+ {
+ 	ack_APIC_irq();
+ 	inc_irq_stat(irq_resched_count);
++	kvm_set_cpu_l1tf_flush_l1d();
+ 
+ 	if (trace_resched_ipi_enabled()) {
+ 		/*
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index db9656e13ea0..f02ecaf97904 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -80,6 +80,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/spec-ctrl.h>
++#include <asm/hw_irq.h>
+ 
+ /* representing HT siblings of each logical CPU */
+ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
+@@ -270,6 +271,23 @@ static void notrace start_secondary(void *unused)
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ }
+ 
++/**
++ * topology_is_primary_thread - Check whether CPU is the primary SMT thread
++ * @cpu:	CPU to check
++ */
++bool topology_is_primary_thread(unsigned int cpu)
++{
++	return apic_id_is_primary_thread(per_cpu(x86_cpu_to_apicid, cpu));
++}
++
++/**
++ * topology_smt_supported - Check whether SMT is supported by the CPUs
++ */
++bool topology_smt_supported(void)
++{
++	return smp_num_siblings > 1;
++}
++
+ /**
+  * topology_phys_to_logical_pkg - Map a physical package id to a logical
+  *
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index 774ebafa97c4..be01328eb755 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -12,6 +12,7 @@
+ 
+ #include <linux/clockchips.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/i8253.h>
+ #include <linux/time.h>
+ #include <linux/export.h>
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 6b8f11521c41..a44e568363a4 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3840,6 +3840,7 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
+ {
+ 	int r = 1;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	switch (vcpu->arch.apf.host_apf_reason) {
+ 	default:
+ 		trace_kvm_page_fault(fault_address, error_code);
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 5d8e317c2b04..46b428c0990e 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -188,6 +188,150 @@ module_param(ple_window_max, uint, 0444);
+ 
+ extern const ulong vmx_return;
+ 
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush);
++static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond);
++static DEFINE_MUTEX(vmx_l1d_flush_mutex);
++
++/* Storage for pre module init parameter parsing */
++static enum vmx_l1d_flush_state __read_mostly vmentry_l1d_flush_param = VMENTER_L1D_FLUSH_AUTO;
++
++static const struct {
++	const char *option;
++	enum vmx_l1d_flush_state cmd;
++} vmentry_l1d_param[] = {
++	{"auto",	VMENTER_L1D_FLUSH_AUTO},
++	{"never",	VMENTER_L1D_FLUSH_NEVER},
++	{"cond",	VMENTER_L1D_FLUSH_COND},
++	{"always",	VMENTER_L1D_FLUSH_ALWAYS},
++};
++
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
++{
++	struct page *page;
++	unsigned int i;
++
++	if (!enable_ept) {
++		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
++		return 0;
++	}
++
++       if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) {
++	       u64 msr;
++
++	       rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr);
++	       if (msr & ARCH_CAP_SKIP_VMENTRY_L1DFLUSH) {
++		       l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
++		       return 0;
++	       }
++       }
++
++	/* If set to auto use the default l1tf mitigation method */
++	if (l1tf == VMENTER_L1D_FLUSH_AUTO) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++			l1tf = VMENTER_L1D_FLUSH_NEVER;
++			break;
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++			l1tf = VMENTER_L1D_FLUSH_COND;
++			break;
++		case L1TF_MITIGATION_FULL:
++		case L1TF_MITIGATION_FULL_FORCE:
++			l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++			break;
++		}
++	} else if (l1tf_mitigation == L1TF_MITIGATION_FULL_FORCE) {
++		l1tf = VMENTER_L1D_FLUSH_ALWAYS;
++	}
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
++	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
++		if (!page)
++			return -ENOMEM;
++		vmx_l1d_flush_pages = page_address(page);
++
++		/*
++		 * Initialize each page with a different pattern in
++		 * order to protect against KSM in the nested
++		 * virtualization case.
++		 */
++		for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
++			memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
++			       PAGE_SIZE);
++		}
++	}
++
++	l1tf_vmx_mitigation = l1tf;
++
++	if (l1tf != VMENTER_L1D_FLUSH_NEVER)
++		static_branch_enable(&vmx_l1d_should_flush);
++	else
++		static_branch_disable(&vmx_l1d_should_flush);
++
++	if (l1tf == VMENTER_L1D_FLUSH_COND)
++		static_branch_enable(&vmx_l1d_flush_cond);
++	else
++		static_branch_disable(&vmx_l1d_flush_cond);
++	return 0;
++}
++
++static int vmentry_l1d_flush_parse(const char *s)
++{
++	unsigned int i;
++
++	if (s) {
++		for (i = 0; i < ARRAY_SIZE(vmentry_l1d_param); i++) {
++			if (sysfs_streq(s, vmentry_l1d_param[i].option))
++				return vmentry_l1d_param[i].cmd;
++		}
++	}
++	return -EINVAL;
++}
++
++static int vmentry_l1d_flush_set(const char *s, const struct kernel_param *kp)
++{
++	int l1tf, ret;
++
++	if (!boot_cpu_has(X86_BUG_L1TF))
++		return 0;
++
++	l1tf = vmentry_l1d_flush_parse(s);
++	if (l1tf < 0)
++		return l1tf;
++
++	/*
++	 * Has vmx_init() run already? If not then this is the pre init
++	 * parameter parsing. In that case just store the value and let
++	 * vmx_init() do the proper setup after enable_ept has been
++	 * established.
++	 */
++	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_AUTO) {
++		vmentry_l1d_flush_param = l1tf;
++		return 0;
++	}
++
++	mutex_lock(&vmx_l1d_flush_mutex);
++	ret = vmx_setup_l1d_flush(l1tf);
++	mutex_unlock(&vmx_l1d_flush_mutex);
++	return ret;
++}
++
++static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp)
++{
++	return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option);
++}
++
++static const struct kernel_param_ops vmentry_l1d_flush_ops = {
++	.set = vmentry_l1d_flush_set,
++	.get = vmentry_l1d_flush_get,
++};
++module_param_cb(vmentry_l1d_flush, &vmentry_l1d_flush_ops, NULL, 0644);
++
+ struct kvm_vmx {
+ 	struct kvm kvm;
+ 
+@@ -757,6 +901,11 @@ static inline int pi_test_sn(struct pi_desc *pi_desc)
+ 			(unsigned long *)&pi_desc->control);
+ }
+ 
++struct vmx_msrs {
++	unsigned int		nr;
++	struct vmx_msr_entry	val[NR_AUTOLOAD_MSRS];
++};
++
+ struct vcpu_vmx {
+ 	struct kvm_vcpu       vcpu;
+ 	unsigned long         host_rsp;
+@@ -790,9 +939,8 @@ struct vcpu_vmx {
+ 	struct loaded_vmcs   *loaded_vmcs;
+ 	bool                  __launched; /* temporary, used in vmx_vcpu_run */
+ 	struct msr_autoload {
+-		unsigned nr;
+-		struct vmx_msr_entry guest[NR_AUTOLOAD_MSRS];
+-		struct vmx_msr_entry host[NR_AUTOLOAD_MSRS];
++		struct vmx_msrs guest;
++		struct vmx_msrs host;
+ 	} msr_autoload;
+ 	struct {
+ 		int           loaded;
+@@ -2377,9 +2525,20 @@ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ 	vm_exit_controls_clearbit(vmx, exit);
+ }
+ 
++static int find_msr(struct vmx_msrs *m, unsigned int msr)
++{
++	unsigned int i;
++
++	for (i = 0; i < m->nr; ++i) {
++		if (m->val[i].index == msr)
++			return i;
++	}
++	return -ENOENT;
++}
++
+ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ {
+-	unsigned i;
++	int i;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2400,18 +2559,21 @@ static void clear_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr)
+ 		}
+ 		break;
+ 	}
++	i = find_msr(&m->guest, msr);
++	if (i < 0)
++		goto skip_guest;
++	--m->guest.nr;
++	m->guest.val[i] = m->guest.val[m->guest.nr];
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
+-
+-	if (i == m->nr)
++skip_guest:
++	i = find_msr(&m->host, msr);
++	if (i < 0)
+ 		return;
+-	--m->nr;
+-	m->guest[i] = m->guest[m->nr];
+-	m->host[i] = m->host[m->nr];
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
++
++	--m->host.nr;
++	m->host.val[i] = m->host.val[m->host.nr];
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
+ }
+ 
+ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+@@ -2426,9 +2588,9 @@ static void add_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ }
+ 
+ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+-				  u64 guest_val, u64 host_val)
++				  u64 guest_val, u64 host_val, bool entry_only)
+ {
+-	unsigned i;
++	int i, j = 0;
+ 	struct msr_autoload *m = &vmx->msr_autoload;
+ 
+ 	switch (msr) {
+@@ -2463,24 +2625,31 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr,
+ 		wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
+ 	}
+ 
+-	for (i = 0; i < m->nr; ++i)
+-		if (m->guest[i].index == msr)
+-			break;
++	i = find_msr(&m->guest, msr);
++	if (!entry_only)
++		j = find_msr(&m->host, msr);
+ 
+-	if (i == NR_AUTOLOAD_MSRS) {
++	if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) {
+ 		printk_once(KERN_WARNING "Not enough msr switch entries. "
+ 				"Can't add msr %x\n", msr);
+ 		return;
+-	} else if (i == m->nr) {
+-		++m->nr;
+-		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->nr);
+-		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->nr);
+ 	}
++	if (i < 0) {
++		i = m->guest.nr++;
++		vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, m->guest.nr);
++	}
++	m->guest.val[i].index = msr;
++	m->guest.val[i].value = guest_val;
++
++	if (entry_only)
++		return;
+ 
+-	m->guest[i].index = msr;
+-	m->guest[i].value = guest_val;
+-	m->host[i].index = msr;
+-	m->host[i].value = host_val;
++	if (j < 0) {
++		j = m->host.nr++;
++		vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, m->host.nr);
++	}
++	m->host.val[j].index = msr;
++	m->host.val[j].value = host_val;
+ }
+ 
+ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+@@ -2524,7 +2693,7 @@ static bool update_transition_efer(struct vcpu_vmx *vmx, int efer_offset)
+ 			guest_efer &= ~EFER_LME;
+ 		if (guest_efer != host_efer)
+ 			add_atomic_switch_msr(vmx, MSR_EFER,
+-					      guest_efer, host_efer);
++					      guest_efer, host_efer, false);
+ 		return false;
+ 	} else {
+ 		guest_efer &= ~ignore_bits;
+@@ -3987,7 +4156,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.ia32_xss = data;
+ 		if (vcpu->arch.ia32_xss != host_xss)
+ 			add_atomic_switch_msr(vmx, MSR_IA32_XSS,
+-				vcpu->arch.ia32_xss, host_xss);
++				vcpu->arch.ia32_xss, host_xss, false);
+ 		else
+ 			clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
+ 		break;
+@@ -6274,9 +6443,9 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+ 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
+ 	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, 0);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
+ 		vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
+@@ -6296,8 +6465,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 		++vmx->nmsrs;
+ 	}
+ 
+-	if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
+-		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities);
++	vmx->arch_capabilities = kvm_get_arch_capabilities();
+ 
+ 	vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
+ 
+@@ -9548,6 +9716,79 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu)
+ 	}
+ }
+ 
++/*
++ * Software based L1D cache flush which is used when microcode providing
++ * the cache control MSR is not loaded.
++ *
++ * The L1D cache is 32 KiB on Nehalem and later microarchitectures, but to
++ * flush it is required to read in 64 KiB because the replacement algorithm
++ * is not exactly LRU. This could be sized at runtime via topology
++ * information but as all relevant affected CPUs have 32KiB L1D cache size
++ * there is no point in doing so.
++ */
++#define L1D_CACHE_ORDER 4
++static void *vmx_l1d_flush_pages;
++
++static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
++{
++	int size = PAGE_SIZE << L1D_CACHE_ORDER;
++
++	/*
++	 * This code is only executed when the the flush mode is 'cond' or
++	 * 'always'
++	 */
++	if (static_branch_likely(&vmx_l1d_flush_cond)) {
++		bool flush_l1d;
++
++		/*
++		 * Clear the per-vcpu flush bit, it gets set again
++		 * either from vcpu_run() or from one of the unsafe
++		 * VMEXIT handlers.
++		 */
++		flush_l1d = vcpu->arch.l1tf_flush_l1d;
++		vcpu->arch.l1tf_flush_l1d = false;
++
++		/*
++		 * Clear the per-cpu flush bit, it gets set again from
++		 * the interrupt handlers.
++		 */
++		flush_l1d |= kvm_get_cpu_l1tf_flush_l1d();
++		kvm_clear_cpu_l1tf_flush_l1d();
++
++		if (!flush_l1d)
++			return;
++	}
++
++	vcpu->stat.l1d_flush++;
++
++	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
++		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
++		return;
++	}
++
++	asm volatile(
++		/* First ensure the pages are in the TLB */
++		"xorl	%%eax, %%eax\n"
++		".Lpopulate_tlb:\n\t"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$4096, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lpopulate_tlb\n\t"
++		"xorl	%%eax, %%eax\n\t"
++		"cpuid\n\t"
++		/* Now fill the cache */
++		"xorl	%%eax, %%eax\n"
++		".Lfill_cache:\n"
++		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
++		"addl	$64, %%eax\n\t"
++		"cmpl	%%eax, %[size]\n\t"
++		"jne	.Lfill_cache\n\t"
++		"lfence\n"
++		:: [flush_pages] "r" (vmx_l1d_flush_pages),
++		    [size] "r" (size)
++		: "eax", "ebx", "ecx", "edx");
++}
++
+ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
+ {
+ 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+@@ -9949,7 +10190,7 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
+ 			clear_atomic_switch_msr(vmx, msrs[i].msr);
+ 		else
+ 			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
+-					msrs[i].host);
++					msrs[i].host, false);
+ }
+ 
+ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
+@@ -10044,6 +10285,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+ 		(unsigned long)&current_evmcs->host_rsp : 0;
+ 
++	if (static_branch_unlikely(&vmx_l1d_should_flush))
++		vmx_l1d_flush(vcpu);
++
+ 	asm(
+ 		/* Store host registers */
+ 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
+@@ -10403,10 +10647,37 @@ free_vcpu:
+ 	return ERR_PTR(err);
+ }
+ 
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+ 	if (!ple_gap)
+ 		kvm->arch.pause_in_guest = true;
++
++	if (boot_cpu_has(X86_BUG_L1TF) && enable_ept) {
++		switch (l1tf_mitigation) {
++		case L1TF_MITIGATION_OFF:
++		case L1TF_MITIGATION_FLUSH_NOWARN:
++			/* 'I explicitly don't care' is set */
++			break;
++		case L1TF_MITIGATION_FLUSH:
++		case L1TF_MITIGATION_FLUSH_NOSMT:
++		case L1TF_MITIGATION_FULL:
++			/*
++			 * Warn upon starting the first VM in a potentially
++			 * insecure environment.
++			 */
++			if (cpu_smt_control == CPU_SMT_ENABLED)
++				pr_warn_once(L1TF_MSG_SMT);
++			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
++				pr_warn_once(L1TF_MSG_L1D);
++			break;
++		case L1TF_MITIGATION_FULL_FORCE:
++			/* Flush is enforced */
++			break;
++		}
++	}
+ 	return 0;
+ }
+ 
+@@ -11260,10 +11531,10 @@ static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ 	 * Set the MSR load/store lists to match L0's settings.
+ 	 */
+ 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host));
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest));
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
++	vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
+ 
+ 	set_cr4_guest_host_mask(vmx);
+ 
+@@ -11899,6 +12170,9 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
+ 		return ret;
+ 	}
+ 
++	/* Hide L1D cache contents from the nested guest.  */
++	vmx->vcpu.arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * If we're entering a halted L2 vcpu and the L2 vcpu won't be woken
+ 	 * by event injection, halt vcpu.
+@@ -12419,8 +12693,8 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 	vmx_segment_cache_clear(vmx);
+ 
+ 	/* Update any VMCS fields that might have changed while L2 ran */
+-	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+-	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
++	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
++	vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
+ 	vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
+ 	if (vmx->hv_deadline_tsc == -1)
+ 		vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL,
+@@ -13137,6 +13411,51 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ 	.enable_smi_window = enable_smi_window,
+ };
+ 
++static void vmx_cleanup_l1d_flush(void)
++{
++	if (vmx_l1d_flush_pages) {
++		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
++		vmx_l1d_flush_pages = NULL;
++	}
++	/* Restore state so sysfs ignores VMX */
++	l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
++}
++
++static void vmx_exit(void)
++{
++#ifdef CONFIG_KEXEC_CORE
++	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
++	synchronize_rcu();
++#endif
++
++	kvm_exit();
++
++#if IS_ENABLED(CONFIG_HYPERV)
++	if (static_branch_unlikely(&enable_evmcs)) {
++		int cpu;
++		struct hv_vp_assist_page *vp_ap;
++		/*
++		 * Reset everything to support using non-enlightened VMCS
++		 * access later (e.g. when we reload the module with
++		 * enlightened_vmcs=0)
++		 */
++		for_each_online_cpu(cpu) {
++			vp_ap =	hv_get_vp_assist_page(cpu);
++
++			if (!vp_ap)
++				continue;
++
++			vp_ap->current_nested_vmcs = 0;
++			vp_ap->enlighten_vmentry = 0;
++		}
++
++		static_branch_disable(&enable_evmcs);
++	}
++#endif
++	vmx_cleanup_l1d_flush();
++}
++module_exit(vmx_exit);
++
+ static int __init vmx_init(void)
+ {
+ 	int r;
+@@ -13171,10 +13490,25 @@ static int __init vmx_init(void)
+ #endif
+ 
+ 	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
+-                     __alignof__(struct vcpu_vmx), THIS_MODULE);
++		     __alignof__(struct vcpu_vmx), THIS_MODULE);
+ 	if (r)
+ 		return r;
+ 
++	/*
++	 * Must be called after kvm_init() so enable_ept is properly set
++	 * up. Hand the parameter mitigation value in which was stored in
++	 * the pre module init parser. If no parameter was given, it will
++	 * contain 'auto' which will be turned into the default 'cond'
++	 * mitigation mode.
++	 */
++	if (boot_cpu_has(X86_BUG_L1TF)) {
++		r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
++		if (r) {
++			vmx_exit();
++			return r;
++		}
++	}
++
+ #ifdef CONFIG_KEXEC_CORE
+ 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ 			   crash_vmclear_local_loaded_vmcss);
+@@ -13183,39 +13517,4 @@ static int __init vmx_init(void)
+ 
+ 	return 0;
+ }
+-
+-static void __exit vmx_exit(void)
+-{
+-#ifdef CONFIG_KEXEC_CORE
+-	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+-	synchronize_rcu();
+-#endif
+-
+-	kvm_exit();
+-
+-#if IS_ENABLED(CONFIG_HYPERV)
+-	if (static_branch_unlikely(&enable_evmcs)) {
+-		int cpu;
+-		struct hv_vp_assist_page *vp_ap;
+-		/*
+-		 * Reset everything to support using non-enlightened VMCS
+-		 * access later (e.g. when we reload the module with
+-		 * enlightened_vmcs=0)
+-		 */
+-		for_each_online_cpu(cpu) {
+-			vp_ap =	hv_get_vp_assist_page(cpu);
+-
+-			if (!vp_ap)
+-				continue;
+-
+-			vp_ap->current_nested_vmcs = 0;
+-			vp_ap->enlighten_vmentry = 0;
+-		}
+-
+-		static_branch_disable(&enable_evmcs);
+-	}
+-#endif
+-}
+-
+-module_init(vmx_init)
+-module_exit(vmx_exit)
++module_init(vmx_init);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 2b812b3c5088..a5caa5e5480c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -195,6 +195,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	{ "irq_injections", VCPU_STAT(irq_injections) },
+ 	{ "nmi_injections", VCPU_STAT(nmi_injections) },
+ 	{ "req_event", VCPU_STAT(req_event) },
++	{ "l1d_flush", VCPU_STAT(l1d_flush) },
+ 	{ "mmu_shadow_zapped", VM_STAT(mmu_shadow_zapped) },
+ 	{ "mmu_pte_write", VM_STAT(mmu_pte_write) },
+ 	{ "mmu_pte_updated", VM_STAT(mmu_pte_updated) },
+@@ -1102,11 +1103,35 @@ static u32 msr_based_features[] = {
+ 
+ static unsigned int num_msr_based_features;
+ 
++u64 kvm_get_arch_capabilities(void)
++{
++	u64 data;
++
++	rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data);
++
++	/*
++	 * If we're doing cache flushes (either "always" or "cond")
++	 * we will do one whenever the guest does a vmlaunch/vmresume.
++	 * If an outer hypervisor is doing the cache flush for us
++	 * (VMENTER_L1D_FLUSH_NESTED_VM), we can safely pass that
++	 * capability to the guest too, and if EPT is disabled we're not
++	 * vulnerable.  Overall, only VMENTER_L1D_FLUSH_NEVER will
++	 * require a nested hypervisor to do a flush of its own.
++	 */
++	if (l1tf_vmx_mitigation != VMENTER_L1D_FLUSH_NEVER)
++		data |= ARCH_CAP_SKIP_VMENTRY_L1DFLUSH;
++
++	return data;
++}
++EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities);
++
+ static int kvm_get_msr_feature(struct kvm_msr_entry *msr)
+ {
+ 	switch (msr->index) {
+-	case MSR_IA32_UCODE_REV:
+ 	case MSR_IA32_ARCH_CAPABILITIES:
++		msr->data = kvm_get_arch_capabilities();
++		break;
++	case MSR_IA32_UCODE_REV:
+ 		rdmsrl_safe(msr->index, &msr->data);
+ 		break;
+ 	default:
+@@ -4876,6 +4901,9 @@ static int emulator_write_std(struct x86_emulate_ctxt *ctxt, gva_t addr, void *v
+ int kvm_write_guest_virt_system(struct kvm_vcpu *vcpu, gva_t addr, void *val,
+ 				unsigned int bytes, struct x86_exception *exception)
+ {
++	/* kvm_write_guest_virt_system can pull in tons of pages. */
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	return kvm_write_guest_virt_helper(addr, val, bytes, vcpu,
+ 					   PFERR_WRITE_MASK, exception);
+ }
+@@ -6052,6 +6080,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu,
+ 	bool writeback = true;
+ 	bool write_fault_to_spt = vcpu->arch.write_fault_to_shadow_pgtable;
+ 
++	vcpu->arch.l1tf_flush_l1d = true;
++
+ 	/*
+ 	 * Clear write_fault_to_shadow_pgtable here to ensure it is
+ 	 * never reused.
+@@ -7581,6 +7611,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+ 	struct kvm *kvm = vcpu->kvm;
+ 
+ 	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
++	vcpu->arch.l1tf_flush_l1d = true;
+ 
+ 	for (;;) {
+ 		if (kvm_vcpu_running(vcpu)) {
+@@ -8700,6 +8731,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
+ {
++	vcpu->arch.l1tf_flush_l1d = true;
+ 	kvm_x86_ops->sched_in(vcpu, cpu);
+ }
+ 
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index cee58a972cb2..83241eb71cd4 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -4,6 +4,8 @@
+ #include <linux/swap.h>
+ #include <linux/memblock.h>
+ #include <linux/bootmem.h>	/* for max_low_pfn */
++#include <linux/swapfile.h>
++#include <linux/swapops.h>
+ 
+ #include <asm/set_memory.h>
+ #include <asm/e820/api.h>
+@@ -880,3 +882,26 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+ 	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+ 	__pte2cachemode_tbl[entry] = cache;
+ }
++
++#ifdef CONFIG_SWAP
++unsigned long max_swapfile_size(void)
++{
++	unsigned long pages;
++
++	pages = generic_max_swapfile_size();
++
++	if (boot_cpu_has_bug(X86_BUG_L1TF)) {
++		/* Limit the swap file size to MAX_PA/2 for L1TF workaround */
++		unsigned long l1tf_limit = l1tf_pfn_limit() + 1;
++		/*
++		 * We encode swap offsets also with 3 bits below those for pfn
++		 * which makes the usable limit higher.
++		 */
++#if CONFIG_PGTABLE_LEVELS > 2
++		l1tf_limit <<= PAGE_SHIFT - SWP_OFFSET_FIRST_BIT;
++#endif
++		pages = min_t(unsigned long, l1tf_limit, pages);
++	}
++	return pages;
++}
++#endif
+diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
+index 7c8686709636..79eb55ce69a9 100644
+--- a/arch/x86/mm/kmmio.c
++++ b/arch/x86/mm/kmmio.c
+@@ -126,24 +126,29 @@ static struct kmmio_fault_page *get_kmmio_fault_page(unsigned long addr)
+ 
+ static void clear_pmd_presence(pmd_t *pmd, bool clear, pmdval_t *old)
+ {
++	pmd_t new_pmd;
+ 	pmdval_t v = pmd_val(*pmd);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pmd(pmd, __pmd(v));
++		*old = v;
++		new_pmd = pmd_mknotpresent(*pmd);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		new_pmd = __pmd(*old);
++	}
++	set_pmd(pmd, new_pmd);
+ }
+ 
+ static void clear_pte_presence(pte_t *pte, bool clear, pteval_t *old)
+ {
+ 	pteval_t v = pte_val(*pte);
+ 	if (clear) {
+-		*old = v & _PAGE_PRESENT;
+-		v &= ~_PAGE_PRESENT;
+-	} else	/* presume this has been called with clear==true previously */
+-		v |= *old;
+-	set_pte_atomic(pte, __pte(v));
++		*old = v;
++		/* Nothing should care about address */
++		pte_clear(&init_mm, 0, pte);
++	} else {
++		/* Presume this has been called with clear==true previously */
++		set_pte_atomic(pte, __pte(*old));
++	}
+ }
+ 
+ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 48c591251600..f40ab8185d94 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -240,3 +240,24 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t count)
+ 
+ 	return phys_addr_valid(addr + count - 1);
+ }
++
++/*
++ * Only allow root to set high MMIO mappings to PROT_NONE.
++ * This prevents an unprivileged user from setting them to PROT_NONE and
++ * inverting them, then pointing to valid memory for L1TF speculation.
++ *
++ * Note: for locked down kernels may want to disable the root override.
++ */
++bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	if (!boot_cpu_has_bug(X86_BUG_L1TF))
++		return true;
++	if (!__pte_needs_invert(pgprot_val(prot)))
++		return true;
++	/* If it's real memory always allow */
++	if (pfn_valid(pfn))
++		return true;
++	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
++		return false;
++	return true;
++}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 3bded76e8d5c..7bb6f65c79de 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1014,8 +1014,8 @@ static long populate_pmd(struct cpa_data *cpa,
+ 
+ 		pmd = pmd_offset(pud, start);
+ 
+-		set_pmd(pmd, __pmd(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pmd_pgprot)));
++		set_pmd(pmd, pmd_mkhuge(pfn_pmd(cpa->pfn,
++					canon_pgprot(pmd_pgprot))));
+ 
+ 		start	  += PMD_SIZE;
+ 		cpa->pfn  += PMD_SIZE >> PAGE_SHIFT;
+@@ -1087,8 +1087,8 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
+ 	 * Map everything starting from the Gb boundary, possibly with 1G pages
+ 	 */
+ 	while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) {
+-		set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
+-				   massage_pgprot(pud_pgprot)));
++		set_pud(pud, pud_mkhuge(pfn_pud(cpa->pfn,
++				   canon_pgprot(pud_pgprot))));
+ 
+ 		start	  += PUD_SIZE;
+ 		cpa->pfn  += PUD_SIZE >> PAGE_SHIFT;
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 4d418e705878..fb752d9a3ce9 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -45,6 +45,7 @@
+ #include <asm/pgalloc.h>
+ #include <asm/tlbflush.h>
+ #include <asm/desc.h>
++#include <asm/sections.h>
+ 
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Kernel/User page tables isolation: " fmt
+diff --git a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+index 4f5fa65a1011..2acd6be13375 100644
+--- a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
++++ b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+@@ -18,6 +18,7 @@
+ #include <asm/intel-mid.h>
+ #include <asm/intel_scu_ipc.h>
+ #include <asm/io_apic.h>
++#include <asm/hw_irq.h>
+ 
+ #define TANGIER_EXT_TIMER0_MSI 12
+ 
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index ca446da48fd2..3866b96a7ee7 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -1285,6 +1285,7 @@ void uv_bau_message_interrupt(struct pt_regs *regs)
+ 	struct msg_desc msgdesc;
+ 
+ 	ack_APIC_irq();
++	kvm_set_cpu_l1tf_flush_l1d();
+ 	time_start = get_cycles();
+ 
+ 	bcp = &per_cpu(bau_control, smp_processor_id());
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 3b5318505c69..2eeddd814653 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -3,6 +3,7 @@
+ #endif
+ #include <linux/cpu.h>
+ #include <linux/kexec.h>
++#include <linux/slab.h>
+ 
+ #include <xen/features.h>
+ #include <xen/page.h>
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 30cc9c877ebb..eb9443d5bae1 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -540,16 +540,24 @@ ssize_t __weak cpu_show_spec_store_bypass(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
++static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+ 	&dev_attr_spectre_v1.attr,
+ 	&dev_attr_spectre_v2.attr,
+ 	&dev_attr_spec_store_bypass.attr,
++	&dev_attr_l1tf.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
+index dc87797db500..b50b74053664 100644
+--- a/drivers/gpu/drm/i915/i915_pmu.c
++++ b/drivers/gpu/drm/i915/i915_pmu.c
+@@ -4,6 +4,7 @@
+  * Copyright © 2017-2018 Intel Corporation
+  */
+ 
++#include <linux/irq.h>
+ #include "i915_pmu.h"
+ #include "intel_ringbuffer.h"
+ #include "i915_drv.h"
+diff --git a/drivers/gpu/drm/i915/intel_lpe_audio.c b/drivers/gpu/drm/i915/intel_lpe_audio.c
+index 6269750e2b54..b4941101f21a 100644
+--- a/drivers/gpu/drm/i915/intel_lpe_audio.c
++++ b/drivers/gpu/drm/i915/intel_lpe_audio.c
+@@ -62,6 +62,7 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/device.h>
++#include <linux/irq.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
+ 
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index f6325f1a89e8..d4d4a55f09f8 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -45,6 +45,7 @@
+ #include <linux/irqdomain.h>
+ #include <asm/irqdomain.h>
+ #include <asm/apic.h>
++#include <linux/irq.h>
+ #include <linux/msi.h>
+ #include <linux/hyperv.h>
+ #include <linux/refcount.h>
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index f59639afaa39..26ca0276b503 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -1083,6 +1083,18 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
+ static inline void init_espfix_bsp(void) { }
+ #endif
+ 
++#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
++static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
++{
++	return true;
++}
++
++static inline bool arch_has_pfn_modify_check(void)
++{
++	return false;
++}
++#endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */
++
+ #endif /* !__ASSEMBLY__ */
+ 
+ #ifndef io_remap_pfn_range
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 3233fbe23594..45789a892c41 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -55,6 +55,8 @@ extern ssize_t cpu_show_spectre_v2(struct device *dev,
+ 				   struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ 					  struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_l1tf(struct device *dev,
++			     struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -166,4 +168,23 @@ void cpuhp_report_idle_dead(void);
+ static inline void cpuhp_report_idle_dead(void) { }
+ #endif /* #ifdef CONFIG_HOTPLUG_CPU */
+ 
++enum cpuhp_smt_control {
++	CPU_SMT_ENABLED,
++	CPU_SMT_DISABLED,
++	CPU_SMT_FORCE_DISABLED,
++	CPU_SMT_NOT_SUPPORTED,
++};
++
++#if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT)
++extern enum cpuhp_smt_control cpu_smt_control;
++extern void cpu_smt_disable(bool force);
++extern void cpu_smt_check_topology_early(void);
++extern void cpu_smt_check_topology(void);
++#else
++# define cpu_smt_control		(CPU_SMT_ENABLED)
++static inline void cpu_smt_disable(bool force) { }
++static inline void cpu_smt_check_topology_early(void) { }
++static inline void cpu_smt_check_topology(void) { }
++#endif
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
+index 06bd7b096167..e06febf62978 100644
+--- a/include/linux/swapfile.h
++++ b/include/linux/swapfile.h
+@@ -10,5 +10,7 @@ extern spinlock_t swap_lock;
+ extern struct plist_head swap_active_head;
+ extern struct swap_info_struct *swap_info[];
+ extern int try_to_unuse(unsigned int, bool, unsigned long);
++extern unsigned long generic_max_swapfile_size(void);
++extern unsigned long max_swapfile_size(void);
+ 
+ #endif /* _LINUX_SWAPFILE_H */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 2f8f338e77cf..f80afc674f02 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -60,6 +60,7 @@ struct cpuhp_cpu_state {
+ 	bool			rollback;
+ 	bool			single;
+ 	bool			bringup;
++	bool			booted_once;
+ 	struct hlist_node	*node;
+ 	struct hlist_node	*last;
+ 	enum cpuhp_state	cb_state;
+@@ -342,6 +343,85 @@ void cpu_hotplug_enable(void)
+ EXPORT_SYMBOL_GPL(cpu_hotplug_enable);
+ #endif	/* CONFIG_HOTPLUG_CPU */
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
++EXPORT_SYMBOL_GPL(cpu_smt_control);
++
++static bool cpu_smt_available __read_mostly;
++
++void __init cpu_smt_disable(bool force)
++{
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
++		cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return;
++
++	if (force) {
++		pr_info("SMT: Force disabled\n");
++		cpu_smt_control = CPU_SMT_FORCE_DISABLED;
++	} else {
++		cpu_smt_control = CPU_SMT_DISABLED;
++	}
++}
++
++/*
++ * The decision whether SMT is supported can only be done after the full
++ * CPU identification. Called from architecture code before non boot CPUs
++ * are brought up.
++ */
++void __init cpu_smt_check_topology_early(void)
++{
++	if (!topology_smt_supported())
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++/*
++ * If SMT was disabled by BIOS, detect it here, after the CPUs have been
++ * brought online. This ensures the smt/l1tf sysfs entries are consistent
++ * with reality. cpu_smt_available is set to true during the bringup of non
++ * boot CPUs when a SMT sibling is detected. Note, this may overwrite
++ * cpu_smt_control's previous setting.
++ */
++void __init cpu_smt_check_topology(void)
++{
++	if (!cpu_smt_available)
++		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
++}
++
++static int __init smt_cmdline_disable(char *str)
++{
++	cpu_smt_disable(str && !strcmp(str, "force"));
++	return 0;
++}
++early_param("nosmt", smt_cmdline_disable);
++
++static inline bool cpu_smt_allowed(unsigned int cpu)
++{
++	if (topology_is_primary_thread(cpu))
++		return true;
++
++	/*
++	 * If the CPU is not a 'primary' thread and the booted_once bit is
++	 * set then the processor has SMT support. Store this information
++	 * for the late check of SMT support in cpu_smt_check_topology().
++	 */
++	if (per_cpu(cpuhp_state, cpu).booted_once)
++		cpu_smt_available = true;
++
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		return true;
++
++	/*
++	 * On x86 it's required to boot all logical CPUs at least once so
++	 * that the init code can get a chance to set CR4.MCE on each
++	 * CPU. Otherwise, a broadcasted MCE observing CR4.MCE=0b on any
++	 * core will shutdown the machine.
++	 */
++	return !per_cpu(cpuhp_state, cpu).booted_once;
++}
++#else
++static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
++#endif
++
+ static inline enum cpuhp_state
+ cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ {
+@@ -422,6 +502,16 @@ static int bringup_wait_for_ap(unsigned int cpu)
+ 	stop_machine_unpark(cpu);
+ 	kthread_unpark(st->thread);
+ 
++	/*
++	 * SMT soft disabling on X86 requires to bring the CPU out of the
++	 * BIOS 'wait for SIPI' state in order to set the CR4.MCE bit.  The
++	 * CPU marked itself as booted_once in cpu_notify_starting() so the
++	 * cpu_smt_allowed() check will now return false if this is not the
++	 * primary sibling.
++	 */
++	if (!cpu_smt_allowed(cpu))
++		return -ECANCELED;
++
+ 	if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ 		return 0;
+ 
+@@ -754,7 +844,6 @@ static int takedown_cpu(unsigned int cpu)
+ 
+ 	/* Park the smpboot threads */
+ 	kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
+-	smpboot_park_threads(cpu);
+ 
+ 	/*
+ 	 * Prevent irq alloc/free while the dying cpu reorganizes the
+@@ -907,20 +996,19 @@ out:
+ 	return ret;
+ }
+ 
++static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
++{
++	if (cpu_hotplug_disabled)
++		return -EBUSY;
++	return _cpu_down(cpu, 0, target);
++}
++
+ static int do_cpu_down(unsigned int cpu, enum cpuhp_state target)
+ {
+ 	int err;
+ 
+ 	cpu_maps_update_begin();
+-
+-	if (cpu_hotplug_disabled) {
+-		err = -EBUSY;
+-		goto out;
+-	}
+-
+-	err = _cpu_down(cpu, 0, target);
+-
+-out:
++	err = cpu_down_maps_locked(cpu, target);
+ 	cpu_maps_update_done();
+ 	return err;
+ }
+@@ -949,6 +1037,7 @@ void notify_cpu_starting(unsigned int cpu)
+ 	int ret;
+ 
+ 	rcu_cpu_starting(cpu);	/* Enables RCU usage on this CPU. */
++	st->booted_once = true;
+ 	while (st->state < target) {
+ 		st->state++;
+ 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+@@ -1058,6 +1147,10 @@ static int do_cpu_up(unsigned int cpu, enum cpuhp_state target)
+ 		err = -EBUSY;
+ 		goto out;
+ 	}
++	if (!cpu_smt_allowed(cpu)) {
++		err = -EPERM;
++		goto out;
++	}
+ 
+ 	err = _cpu_up(cpu, 0, target);
+ out:
+@@ -1332,7 +1425,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ 	[CPUHP_AP_SMPBOOT_THREADS] = {
+ 		.name			= "smpboot/threads:online",
+ 		.startup.single		= smpboot_unpark_threads,
+-		.teardown.single	= NULL,
++		.teardown.single	= smpboot_park_threads,
+ 	},
+ 	[CPUHP_AP_IRQ_AFFINITY_ONLINE] = {
+ 		.name			= "irq/affinity:online",
+@@ -1906,10 +1999,172 @@ static const struct attribute_group cpuhp_cpu_root_attr_group = {
+ 	NULL
+ };
+ 
++#ifdef CONFIG_HOTPLUG_SMT
++
++static const char *smt_states[] = {
++	[CPU_SMT_ENABLED]		= "on",
++	[CPU_SMT_DISABLED]		= "off",
++	[CPU_SMT_FORCE_DISABLED]	= "forceoff",
++	[CPU_SMT_NOT_SUPPORTED]		= "notsupported",
++};
++
++static ssize_t
++show_smt_control(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return snprintf(buf, PAGE_SIZE - 2, "%s\n", smt_states[cpu_smt_control]);
++}
++
++static void cpuhp_offline_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = true;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
++}
++
++static void cpuhp_online_cpu_device(unsigned int cpu)
++{
++	struct device *dev = get_cpu_device(cpu);
++
++	dev->offline = false;
++	/* Tell user space about the state change */
++	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
++}
++
++static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	for_each_online_cpu(cpu) {
++		if (topology_is_primary_thread(cpu))
++			continue;
++		ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
++		if (ret)
++			break;
++		/*
++		 * As this needs to hold the cpu maps lock it's impossible
++		 * to call device_offline() because that ends up calling
++		 * cpu_down() which takes cpu maps lock. cpu maps lock
++		 * needs to be held as this might race against in kernel
++		 * abusers of the hotplug machinery (thermal management).
++		 *
++		 * So nothing would update device:offline state. That would
++		 * leave the sysfs entry stale and prevent onlining after
++		 * smt control has been changed to 'off' again. This is
++		 * called under the sysfs hotplug lock, so it is properly
++		 * serialized against the regular offline usage.
++		 */
++		cpuhp_offline_cpu_device(cpu);
++	}
++	if (!ret)
++		cpu_smt_control = ctrlval;
++	cpu_maps_update_done();
++	return ret;
++}
++
++static int cpuhp_smt_enable(void)
++{
++	int cpu, ret = 0;
++
++	cpu_maps_update_begin();
++	cpu_smt_control = CPU_SMT_ENABLED;
++	for_each_present_cpu(cpu) {
++		/* Skip online CPUs and CPUs on offline nodes */
++		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
++			continue;
++		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
++		if (ret)
++			break;
++		/* See comment in cpuhp_smt_disable() */
++		cpuhp_online_cpu_device(cpu);
++	}
++	cpu_maps_update_done();
++	return ret;
++}
++
++static ssize_t
++store_smt_control(struct device *dev, struct device_attribute *attr,
++		  const char *buf, size_t count)
++{
++	int ctrlval, ret;
++
++	if (sysfs_streq(buf, "on"))
++		ctrlval = CPU_SMT_ENABLED;
++	else if (sysfs_streq(buf, "off"))
++		ctrlval = CPU_SMT_DISABLED;
++	else if (sysfs_streq(buf, "forceoff"))
++		ctrlval = CPU_SMT_FORCE_DISABLED;
++	else
++		return -EINVAL;
++
++	if (cpu_smt_control == CPU_SMT_FORCE_DISABLED)
++		return -EPERM;
++
++	if (cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
++		return -ENODEV;
++
++	ret = lock_device_hotplug_sysfs();
++	if (ret)
++		return ret;
++
++	if (ctrlval != cpu_smt_control) {
++		switch (ctrlval) {
++		case CPU_SMT_ENABLED:
++			ret = cpuhp_smt_enable();
++			break;
++		case CPU_SMT_DISABLED:
++		case CPU_SMT_FORCE_DISABLED:
++			ret = cpuhp_smt_disable(ctrlval);
++			break;
++		}
++	}
++
++	unlock_device_hotplug();
++	return ret ? ret : count;
++}
++static DEVICE_ATTR(control, 0644, show_smt_control, store_smt_control);
++
++static ssize_t
++show_smt_active(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	bool active = topology_max_smt_threads() > 1;
++
++	return snprintf(buf, PAGE_SIZE - 2, "%d\n", active);
++}
++static DEVICE_ATTR(active, 0444, show_smt_active, NULL);
++
++static struct attribute *cpuhp_smt_attrs[] = {
++	&dev_attr_control.attr,
++	&dev_attr_active.attr,
++	NULL
++};
++
++static const struct attribute_group cpuhp_smt_attr_group = {
++	.attrs = cpuhp_smt_attrs,
++	.name = "smt",
++	NULL
++};
++
++static int __init cpu_smt_state_init(void)
++{
++	return sysfs_create_group(&cpu_subsys.dev_root->kobj,
++				  &cpuhp_smt_attr_group);
++}
++
++#else
++static inline int cpu_smt_state_init(void) { return 0; }
++#endif
++
+ static int __init cpuhp_sysfs_init(void)
+ {
+ 	int cpu, ret;
+ 
++	ret = cpu_smt_state_init();
++	if (ret)
++		return ret;
++
+ 	ret = sysfs_create_group(&cpu_subsys.dev_root->kobj,
+ 				 &cpuhp_cpu_root_attr_group);
+ 	if (ret)
+@@ -2012,5 +2267,8 @@ void __init boot_cpu_init(void)
+  */
+ void __init boot_cpu_hotplug_init(void)
+ {
+-	per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE;
++#ifdef CONFIG_SMP
++	this_cpu_write(cpuhp_state.booted_once, true);
++#endif
++	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index fe365c9a08e9..5ba96d9ddbde 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5774,6 +5774,18 @@ int sched_cpu_activate(unsigned int cpu)
+ 	struct rq *rq = cpu_rq(cpu);
+ 	struct rq_flags rf;
+ 
++#ifdef CONFIG_SCHED_SMT
++	/*
++	 * The sched_smt_present static key needs to be evaluated on every
++	 * hotplug event because at boot time SMT might be disabled when
++	 * the number of booted CPUs is limited.
++	 *
++	 * If then later a sibling gets hotplugged, then the key would stay
++	 * off and SMT scheduling would never be functional.
++	 */
++	if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
++		static_branch_enable_cpuslocked(&sched_smt_present);
++#endif
+ 	set_cpu_active(cpu, true);
+ 
+ 	if (sched_smp_initialized) {
+@@ -5871,22 +5883,6 @@ int sched_cpu_dying(unsigned int cpu)
+ }
+ #endif
+ 
+-#ifdef CONFIG_SCHED_SMT
+-DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+-
+-static void sched_init_smt(void)
+-{
+-	/*
+-	 * We've enumerated all CPUs and will assume that if any CPU
+-	 * has SMT siblings, CPU0 will too.
+-	 */
+-	if (cpumask_weight(cpu_smt_mask(0)) > 1)
+-		static_branch_enable(&sched_smt_present);
+-}
+-#else
+-static inline void sched_init_smt(void) { }
+-#endif
+-
+ void __init sched_init_smp(void)
+ {
+ 	sched_init_numa();
+@@ -5908,8 +5904,6 @@ void __init sched_init_smp(void)
+ 	init_sched_rt_class();
+ 	init_sched_dl_class();
+ 
+-	sched_init_smt();
+-
+ 	sched_smp_initialized = true;
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2f0a0be4d344..9c219f7b0970 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -6237,6 +6237,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
+ }
+ 
+ #ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+ 
+ static inline void set_idle_cores(int cpu, int val)
+ {
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 084c8b3a2681..d86eec5f51c1 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -584,6 +584,8 @@ void __init smp_init(void)
+ 		num_nodes, (num_nodes > 1 ? "s" : ""),
+ 		num_cpus,  (num_cpus  > 1 ? "s" : ""));
+ 
++	/* Final decision about SMT support */
++	cpu_smt_check_topology();
+ 	/* Any cleanup work */
+ 	smp_cpus_done(setup_max_cpus);
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index c5e87a3a82ba..0e356dd923c2 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1884,6 +1884,9 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+ 	if (addr < vma->vm_start || addr >= vma->vm_end)
+ 		return -EFAULT;
+ 
++	if (!pfn_modify_allowed(pfn, pgprot))
++		return -EACCES;
++
+ 	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
+ 
+ 	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+@@ -1919,6 +1922,9 @@ static int __vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
++	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
++		return -EACCES;
++
+ 	/*
+ 	 * If we don't have pte special, then we have to use the pfn_valid()
+ 	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
+@@ -1980,6 +1986,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ {
+ 	pte_t *pte;
+ 	spinlock_t *ptl;
++	int err = 0;
+ 
+ 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ 	if (!pte)
+@@ -1987,12 +1994,16 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ 	arch_enter_lazy_mmu_mode();
+ 	do {
+ 		BUG_ON(!pte_none(*pte));
++		if (!pfn_modify_allowed(pfn, prot)) {
++			err = -EACCES;
++			break;
++		}
+ 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
+ 		pfn++;
+ 	} while (pte++, addr += PAGE_SIZE, addr != end);
+ 	arch_leave_lazy_mmu_mode();
+ 	pte_unmap_unlock(pte - 1, ptl);
+-	return 0;
++	return err;
+ }
+ 
+ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+@@ -2001,6 +2012,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ {
+ 	pmd_t *pmd;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pmd = pmd_alloc(mm, pud, addr);
+@@ -2009,9 +2021,10 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
+ 	VM_BUG_ON(pmd_trans_huge(*pmd));
+ 	do {
+ 		next = pmd_addr_end(addr, end);
+-		if (remap_pte_range(mm, pmd, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pte_range(mm, pmd, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pmd++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2022,6 +2035,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ {
+ 	pud_t *pud;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	pud = pud_alloc(mm, p4d, addr);
+@@ -2029,9 +2043,10 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
+ 		return -ENOMEM;
+ 	do {
+ 		next = pud_addr_end(addr, end);
+-		if (remap_pmd_range(mm, pud, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pmd_range(mm, pud, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (pud++, addr = next, addr != end);
+ 	return 0;
+ }
+@@ -2042,6 +2057,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ {
+ 	p4d_t *p4d;
+ 	unsigned long next;
++	int err;
+ 
+ 	pfn -= addr >> PAGE_SHIFT;
+ 	p4d = p4d_alloc(mm, pgd, addr);
+@@ -2049,9 +2065,10 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
+ 		return -ENOMEM;
+ 	do {
+ 		next = p4d_addr_end(addr, end);
+-		if (remap_pud_range(mm, p4d, addr, next,
+-				pfn + (addr >> PAGE_SHIFT), prot))
+-			return -ENOMEM;
++		err = remap_pud_range(mm, p4d, addr, next,
++				pfn + (addr >> PAGE_SHIFT), prot);
++		if (err)
++			return err;
+ 	} while (p4d++, addr = next, addr != end);
+ 	return 0;
+ }
+diff --git a/mm/mprotect.c b/mm/mprotect.c
+index 625608bc8962..6d331620b9e5 100644
+--- a/mm/mprotect.c
++++ b/mm/mprotect.c
+@@ -306,6 +306,42 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+ 	return pages;
+ }
+ 
++static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
++			       unsigned long next, struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
++				   unsigned long addr, unsigned long next,
++				   struct mm_walk *walk)
++{
++	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
++		0 : -EACCES;
++}
++
++static int prot_none_test(unsigned long addr, unsigned long next,
++			  struct mm_walk *walk)
++{
++	return 0;
++}
++
++static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
++			   unsigned long end, unsigned long newflags)
++{
++	pgprot_t new_pgprot = vm_get_page_prot(newflags);
++	struct mm_walk prot_none_walk = {
++		.pte_entry = prot_none_pte_entry,
++		.hugetlb_entry = prot_none_hugetlb_entry,
++		.test_walk = prot_none_test,
++		.mm = current->mm,
++		.private = &new_pgprot,
++	};
++
++	return walk_page_range(start, end, &prot_none_walk);
++}
++
+ int
+ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 	unsigned long start, unsigned long end, unsigned long newflags)
+@@ -323,6 +359,19 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * Do PROT_NONE PFN permission checks here when we can still
++	 * bail out without undoing a lot of state. This is a rather
++	 * uncommon case, so doesn't need to be very optimized.
++	 */
++	if (arch_has_pfn_modify_check() &&
++	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
++	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
++		error = prot_none_walk(vma, start, end, newflags);
++		if (error)
++			return error;
++	}
++
+ 	/*
+ 	 * If we make a private mapping writable we increase our commit;
+ 	 * but (without finer accounting) cannot reduce our commit if we
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index 2cc2972eedaf..18185ae4f223 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -2909,6 +2909,35 @@ static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
+ 	return 0;
+ }
+ 
++
++/*
++ * Find out how many pages are allowed for a single swap device. There
++ * are two limiting factors:
++ * 1) the number of bits for the swap offset in the swp_entry_t type, and
++ * 2) the number of bits in the swap pte, as defined by the different
++ * architectures.
++ *
++ * In order to find the largest possible bit mask, a swap entry with
++ * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
++ * decoded to a swp_entry_t again, and finally the swap offset is
++ * extracted.
++ *
++ * This will mask all the bits from the initial ~0UL mask that can't
++ * be encoded in either the swp_entry_t or the architecture definition
++ * of a swap pte.
++ */
++unsigned long generic_max_swapfile_size(void)
++{
++	return swp_offset(pte_to_swp_entry(
++			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++}
++
++/* Can be overridden by an architecture for additional checks. */
++__weak unsigned long max_swapfile_size(void)
++{
++	return generic_max_swapfile_size();
++}
++
+ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 					union swap_header *swap_header,
+ 					struct inode *inode)
+@@ -2944,22 +2973,7 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
+ 	p->cluster_next = 1;
+ 	p->cluster_nr = 0;
+ 
+-	/*
+-	 * Find out how many pages are allowed for a single swap
+-	 * device. There are two limiting factors: 1) the number
+-	 * of bits for the swap offset in the swp_entry_t type, and
+-	 * 2) the number of bits in the swap pte as defined by the
+-	 * different architectures. In order to find the
+-	 * largest possible bit mask, a swap entry with swap type 0
+-	 * and swap offset ~0UL is created, encoded to a swap pte,
+-	 * decoded to a swp_entry_t again, and finally the swap
+-	 * offset is extracted. This will mask all the bits from
+-	 * the initial ~0UL mask that can't be encoded in either
+-	 * the swp_entry_t or the architecture definition of a
+-	 * swap pte.
+-	 */
+-	maxpages = swp_offset(pte_to_swp_entry(
+-			swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1;
++	maxpages = max_swapfile_size();
+ 	last_page = swap_header->info.last_page;
+ 	if (!last_page) {
+ 		pr_warn("Empty swap-file\n");
+diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
+index 5701f5cecd31..64aaa3f5f36c 100644
+--- a/tools/arch/x86/include/asm/cpufeatures.h
++++ b/tools/arch/x86/include/asm/cpufeatures.h
+@@ -219,6 +219,7 @@
+ #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
++#define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+@@ -341,6 +342,7 @@
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
+ #define X86_FEATURE_ARCH_CAPABILITIES	(18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+ #define X86_FEATURE_SPEC_CTRL_SSBD	(18*32+31) /* "" Speculative Store Bypass Disable */
+ 
+@@ -373,5 +375,6 @@
+ #define X86_BUG_SPECTRE_V1		X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
++#define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */


^ permalink raw reply related	[flat|nested] 75+ messages in thread
* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-12 23:21 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-12 23:21 UTC (permalink / raw)
  To: gentoo-commits

commit:     c56abec7377849868ed5871c56523c1567e3dc77
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 23:21:05 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 12 23:21:05 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c56abec7

Additional fixes for Gentoo distro patch.

 4567_distro-Gentoo-Kconfig.patch | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 5555b8a..43bae55 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,12 +1,11 @@
---- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
-+++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
-@@ -8,4 +8,6 @@ config SRCARCH
- 	string
- 	option env="SRCARCH"
+--- a/Kconfig	2018-08-12 19:17:17.558649438 -0400
++++ b/Kconfig	2018-08-12 19:17:44.434897289 -0400
+@@ -10,3 +10,5 @@ comment "Compiler: $(CC_VERSION_TEXT)"
+ source "scripts/Kconfig.include"
  
-+source "distro/Kconfig"
+ source "arch/$(SRCARCH)/Kconfig"
 +
- source "arch/$SRCARCH/Kconfig"
++source "distro/Kconfig"
 --- /dev/null	2017-03-02 01:55:04.096566155 -0500
 +++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
 @@ -0,0 +1,145 @@


* [gentoo-commits] proj/linux-patches:4.18 commit in: /
@ 2018-08-12 23:15 Mike Pagano
  0 siblings, 0 replies; 75+ messages in thread
From: Mike Pagano @ 2018-08-12 23:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     5e260b6644c6e0534428a158be83a8a48ff5dc6c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 23:15:02 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Aug 12 23:15:02 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5e260b66

Update Gentoo distro patch.

 4567_distro-Gentoo-Kconfig.patch | 160 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 154 insertions(+), 6 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 56293b0..5555b8a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,9 +1,157 @@
---- a/Kconfig	2018-06-23 18:12:59.733149912 -0400
-+++ b/Kconfig	2018-06-23 18:15:17.972352097 -0400
-@@ -10,3 +10,6 @@ comment "Compiler: $(CC_VERSION_TEXT)"
- source "scripts/Kconfig.include"
+--- a/Kconfig	2016-07-01 19:22:17.117439707 -0400
++++ b/Kconfig	2016-07-01 19:21:54.371440596 -0400
+@@ -8,4 +8,6 @@ config SRCARCH
+ 	string
+ 	option env="SRCARCH"
  
- source "arch/$(SRCARCH)/Kconfig"
-+
 +source "distro/Kconfig"
 +
+ source "arch/$SRCARCH/Kconfig"
+--- /dev/null	2017-03-02 01:55:04.096566155 -0500
++++ b/distro/Kconfig	2017-03-02 11:12:05.049448255 -0500
+@@ -0,0 +1,145 @@
++menu "Gentoo Linux"
++
++config GENTOO_LINUX
++	bool "Gentoo Linux support"
++
++	default y
++
++	help
++		In order to boot Gentoo Linux a minimal set of config settings needs to
++		be enabled in the kernel; to avoid the users from having to enable them
++		manually as part of a Gentoo Linux installation or a new clean config,
++		we enable these config settings by default for convenience.
++
++		See the settings that become available for more details and fine-tuning.
++
++config GENTOO_LINUX_UDEV
++	bool "Linux dynamic and persistent device naming (userspace devfs) support"
++
++	depends on GENTOO_LINUX
++	default y if GENTOO_LINUX
++
++	select DEVTMPFS
++	select TMPFS
++	select UNIX
++
++	select MMU
++	select SHMEM
++
++	help
++		In order to boot Gentoo Linux a minimal set of config settings needs to
++		be enabled in the kernel; to avoid the users from having to enable them
++		manually as part of a Gentoo Linux installation or a new clean config,
++		we enable these config settings by default for convenience.
++
++		Currently this only selects TMPFS, DEVTMPFS and their dependencies.
++		TMPFS is enabled to maintain a tmpfs file system at /dev/shm, /run and
++		/sys/fs/cgroup; DEVTMPFS to maintain a devtmpfs file system at /dev.
++
++		Some of these are critical files that need to be available early in the
++		boot process; if not available, it causes sysfs and udev to malfunction.
++
++		To ensure Gentoo Linux boots, it is best to leave this setting enabled;
++		if you run a custom setup, you could consider whether to disable this.
++
++config GENTOO_LINUX_PORTAGE
++	bool "Select options required by Portage features"
++
++	depends on GENTOO_LINUX
++	default y if GENTOO_LINUX
++
++	select CGROUPS
++	select NAMESPACES
++	select IPC_NS
++	select NET_NS
++	select SYSVIPC
++
++	help
++		This enables options required by various Portage FEATURES.
++		Currently this selects:
++
++		CGROUPS     (required for FEATURES=cgroup)
++		IPC_NS      (required for FEATURES=ipc-sandbox)
++		NET_NS      (required for FEATURES=network-sandbox)
++		SYSVIPC     (required by IPC_NS)
++   
++
++		It is highly recommended that you leave this enabled as these FEATURES
++		are, or will soon be, enabled by default.
++
++menu "Support for init systems, system and service managers"
++	visible if GENTOO_LINUX
++
++config GENTOO_LINUX_INIT_SCRIPT
++	bool "OpenRC, runit and other script based systems and managers"
++
++	default y if GENTOO_LINUX
++
++	depends on GENTOO_LINUX
++
++	select BINFMT_SCRIPT
++
++	help
++		The init system is the first thing that loads after the kernel booted.
++
++		These config settings allow you to select which init systems to support;
++		instead of having to select all the individual settings all over the
++		place, these settings allows you to select all the settings at once.
++
++		This particular setting enables all the known requirements for OpenRC,
++		runit and similar script based systems and managers.
++
++		If you are unsure about this, it is best to leave this setting enabled.
++
++config GENTOO_LINUX_INIT_SYSTEMD
++	bool "systemd"
++
++	default n
++
++	depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
++
++	select AUTOFS4_FS
++	select BLK_DEV_BSG
++	select CGROUPS
++	select CHECKPOINT_RESTORE
++	select CRYPTO_HMAC 
++	select CRYPTO_SHA256
++	select CRYPTO_USER_API_HASH
++	select DEVPTS_MULTIPLE_INSTANCES
++	select DMIID if X86_32 || X86_64 || X86
++	select EPOLL
++	select FANOTIFY
++	select FHANDLE
++	select INOTIFY_USER
++	select IPV6
++	select NET
++	select NET_NS
++	select PROC_FS
++	select SECCOMP
++	select SECCOMP_FILTER
++	select SIGNALFD
++	select SYSFS
++	select TIMERFD
++	select TMPFS_POSIX_ACL
++	select TMPFS_XATTR
++
++	select ANON_INODES
++	select BLOCK
++	select EVENTFD
++	select FSNOTIFY
++	select INET
++	select NLATTR
++
++	help
++		The init system is the first thing that loads after the kernel booted.
++
++		These config settings allow you to select which init systems to support;
++		instead of having to select all the individual settings all over the
++		place, these settings allows you to select all the settings at once.
++
++		This particular setting enables all the known requirements for systemd;
++		it also enables suggested optional settings, as the package suggests to.
++
++endmenu
++
++endmenu
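
The GENTOO_LINUX_* options in the patch above rely on Kconfig's `select`, which force-enables the listed symbols, transitively. A toy Python model of that behavior (the symbol lists are copied from the patch; real Kconfig additionally handles `depends on`, choices and tristate values):

```python
# Toy model of Kconfig "select": enabling a symbol force-enables
# everything it selects, transitively. Illustrative only.
SELECTS = {
    "GENTOO_LINUX_UDEV": ["DEVTMPFS", "TMPFS", "UNIX", "MMU", "SHMEM"],
    "GENTOO_LINUX_PORTAGE": [
        "CGROUPS", "NAMESPACES", "IPC_NS", "NET_NS", "SYSVIPC",
    ],
}

def enabled_set(chosen):
    """Return every symbol switched on when `chosen` are enabled."""
    on, stack = set(), list(chosen)
    while stack:
        sym = stack.pop()
        if sym not in on:
            on.add(sym)
            stack.extend(SELECTS.get(sym, []))
    return on
```

For example, `enabled_set(["GENTOO_LINUX_UDEV"])` pulls in DEVTMPFS and TMPFS but not the Portage-related CGROUPS, matching how the two options select independent symbol sets.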


